WEB3 infrastructure - Get your own IPFS node.

In previous articles we have explored IPFS and its capabilities, and how it has become one of the pillars of WEB3 as the digital asset storage system par excellence.


IPFS, or InterPlanetary File System, is a peer-to-peer network whose purpose is to publish data (files, directories, websites, etc.) in a decentralized way. This lets us publish data securely within the network and retrieve it from anywhere on the Internet.

In this article we are going to explore how to perform our own installation of an IPFS server node (Kubo) and what having our own IPFS node brings us.

Installation of a Linux IPFS node

IPFS is cross-platform and can be installed on the current operating systems such as Windows, macOS and Linux. In this article we focus on an installation based on Ubuntu Linux using the precompiled IPFS Kubo binary (written in Go). We can also download the precompiled binary or the source code from the official Git repository.

GIT: https://github.com/ipfs/kubo
Binary: https://dist.ipfs.tech/#kubo

Kubo runs on most Windows, macOS, Linux, FreeBSD and OpenBSD systems that meet the following requirements. A base installation uses around 12 MB of disk space.

  • 6 GiB of memory
  • 2 CPU cores (Kubo is highly parallel)

Download the binary from the official Kubo web site:

# wget https://dist.ipfs.tech/kubo/v0.20.0/kubo_v0.20.0_linux-amd64.tar.gz

Decompress tar.gz file

# tar -xzvf kubo_v0.20.0_linux-amd64.tar.gz

Execute file installation

# cd kubo
# sudo bash install.sh
Moved ./ipfs to /usr/local/bin

From this point on, the binary is installed on our host as an executable. We will now configure our node in server mode, assuming the host is in a data center; the same installation can also be performed on a local machine.

IPFS stores all its configuration and internal data in the $HOME/.ipfs/ directory by default (the location can be changed with the IPFS_PATH environment variable). Before using Kubo for the first time, you must initialize the repository.

(NOTE: the initialization does not require root privileges).

# ipfs init --profile server (data center: prevents IPFS from searching for other nodes on the local network)
# ipfs init (local machine)

This will generate the configuration file containing the unique node identifier and the directory structure.
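Once initialized, we can verify the node's identity from the shell. A quick sketch, assuming the ipfs binary is on the PATH and the repository was initialized as above:

```shell
# Print the node's identity; the "ID" field is the unique PeerID
# generated by "ipfs init".
ipfs id

# Show the resulting configuration (the same data stored in the
# repository's "config" file).
ipfs config show | head -n 20
```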

At this point we will create a systemd service so that IPFS starts automatically at host startup.

We will create a file called ipfs.service in the /etc/systemd/system/ directory.

[Unit]
Description=IPFS daemon
Wants=network.target
After=network.target

[Service]
User=user
Group=user
Environment="IPFS_PATH=/home/user/ipfs/home/"
ExecStart=/usr/local/bin/ipfs daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target

Once created we will execute the following commands:

# systemctl daemon-reload
# systemctl enable ipfs
# systemctl start ipfs
# systemctl status ipfs

Structure of the node

Home: $HOME/ipfs/home/
Configuration file: $HOME/ipfs/home/config
FS blocks: $HOME/ipfs/home/blocks
FS datastore: $HOME/ipfs/home/datastore
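We can inspect the repository from the command line; a sketch, assuming the repository lives at the IPFS_PATH configured in the service above:

```shell
# Point IPFS_PATH at the repository used by the service
export IPFS_PATH=/home/user/ipfs/home/

# Show repository statistics: number of objects, size on disk,
# repo path and storage limit.
ipfs repo stat
```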

Service ports

4001 - swarm (IPFS network port)

5001 - API / webUI (it is advisable to secure this port if the node is in a data center)

8080 - gateway (IPFS endpoint; it is advisable to create a DNS record and add an SSL layer)
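As an illustration of the SSL layer mentioned for the gateway, here is a minimal reverse-proxy sketch with nginx. The domain ipfs.example.com and the certificate paths are hypothetical, and we assume the gateway listens on Kubo's default port 8080:

```nginx
server {
    listen 443 ssl;
    server_name ipfs.example.com;   # hypothetical DNS record

    ssl_certificate     /etc/ssl/certs/ipfs.example.com.pem;
    ssl_certificate_key /etc/ssl/private/ipfs.example.com.key;

    location / {
        # Forward requests to the local IPFS gateway
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```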

We now have a fully functional IPFS node.

How to upload a file through the IPFS node

To upload files to the IPFS network through the node we have two options: from the host's own shell or through the node's API.

API:

# curl -v "http://our_host:5001/api/v0/add" -F file=@"file.png"

{"Name":"file.png","Hash":"QmawvemLB8kPK5t9adkln87E9fHtt1d2ct487Rb1uvc85m7v7","Size":"19"}


# ipfs pin ls QmawvemLB8kPK5t9adkln87E9fHtt1d2ct487Rb1uvc85m7v7

QmawvemLB8kPK5t9adkln87E9fHtt1d2ct487Rb1uvc85m7v7 recursive
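The Hash field returned by the API is the file's CID, and with it the content can be fetched back through the node's gateway. A small sketch extracting the CID from the JSON response with standard shell tools:

```shell
# Example JSON response from the /api/v0/add endpoint (as above)
RESP='{"Name":"file.png","Hash":"QmawvemLB8kPK5t9adkln87E9fHtt1d2ct487Rb1uvc85m7v7","Size":"19"}'

# Extract the CID ("Hash" field) with sed
CID=$(echo "$RESP" | sed 's/.*"Hash":"\([^"]*\)".*/\1/')
echo "$CID"

# The content could then be retrieved through the node's gateway:
# curl "http://localhost:8080/ipfs/$CID" -o file.png
```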

Shell:

# ipfs add file.png

Pinned or not Pinned? That’s the question

By default, when we upload a file to the IPFS network it will be pinned. But what does pinning mean? Each IPFS server stores its files locally, and any of them can be requested by another node. When another node requests a file that we host, part or all of the file's blocks are replicated to the requesting node, so the content ends up replicated on two or more nodes. However, our server will always keep the file, since by default it is pinned on the server from which it was uploaded.

# curl -v "http://localhost:5001/api/v0/add?pin=false" -F file=@"file.png"

# ipfs add --pin=false file.png

So, in which cases would we prefer not to pin a file?

If we use IPFS while developing, we may not want to fill the node with permanent test files. If they are not pinned, we can easily remove them by running a garbage collection ("GC"), which purges from the node the blocks we are no longer interested in, as well as any files that are not pinned.
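The workflow for temporary files can be sketched as follows, assuming a running node (test.bin is a hypothetical test file):

```shell
# Add a test file without pinning it
ipfs add --pin=false test.bin

# Later, purge all blocks that are not pinned;
# the unpinned test file will be removed
ipfs repo gc
```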

GC: garbage collection in IPFS

Due to the nature of P2P networks, IPFS nodes store blocks from other servers within the IPFS network. Every so often it is healthy to purge our node: restart the IPFS process periodically to clear memory, and run "ipfs repo gc" to avoid excessive block storage on our node.
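One way to automate this maintenance, sketched as a root crontab fragment (the schedule is an assumption; adjust paths and times to your setup):

```
# Run IPFS garbage collection every night at 03:00
0 3 * * * IPFS_PATH=/home/user/ipfs/home/ /usr/local/bin/ipfs repo gc

# Restart the IPFS service once a week to clear memory
0 4 * * 0 systemctl restart ipfs
```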

Conclusions

In this article we have learned how to install an IPFS server on a Linux distribution, how to upload a file to the IPFS network either from the shell or from the node's API, why to pin (or not pin) a file that we upload to IPFS, and how to purge our IPFS server of unwanted files.

In future articles we will go deeper into use cases of the IPFS node, such as IPNS and CID v0 vs. v1, and how to secure our IPFS node easily and efficiently.