Welcome to HORNET

HORNET is a powerful, community-driven IOTA node software written in Go. It is easy to install and runs on low-end devices such as the Raspberry Pi 4. Hornet is built and maintained by a group of community developers alongside the IOTA Foundation. It is a full-fledged node software with full support for the Chrysalis network update.

More information about the IOTA network protocol can be found on the IOTA website. Technical information for developers is part of the Chrysalis documentation.

The IOTA network was upgraded to version IOTA 1.5, also known as Chrysalis. This documentation focuses on running a node as part of the Chrysalis network (Hornet version 0.6.0+). It is not valid for previous versions of Hornet, such as 0.5.x and below.

By running your own node you have the following benefits:

  • You have direct access to an IOTA network instead of having to connect to and trust someone else's node
  • You help the IOTA network to become more distributed and more resilient by validating messages and value transactions in the IOTA network

Roadmap

Hornet will continue to be updated with upcoming changes to the IOTA protocol. See the roadmap for more information.

Source code

The source code of the project is available on GitHub.

Getting started

Running a node is the best way to use IOTA. By doing so, you have direct access to the Tangle instead of having to connect to and trust someone else's node. Additionally, you help the IOTA network to become more distributed and resilient.

The node software is the backbone of the IOTA network. For an overview of tasks a node is responsible for, please see Node 101.

To make sure that your device meets the minimum security requirements for running a node, please see Security 101.

Please note: make sure you install Hornet version 0.6.0+, since it is the minimum version that targets the IOTA 1.5 (Chrysalis) network. Versions below 0.6.0 (such as 0.5.x) target the legacy IOTA network, which is not the focus of this documentation.

To handle a potentially high rate of messages per second, nodes need enough computational power to run reliably, and should have the following minimum specs:

  • 4 cores or 4 vCPU
  • 8 GB RAM
  • SSD storage
  • A public IP address

The amount of storage you need will depend on whether and how often you plan on pruning old data from your local database.

Hornet exposes different functionality on different ports:

  • 15600 TCP - Gossip protocol port
  • 14265 TCP - REST HTTP API port (optional)
  • 8081 TCP - Dashboard (optional)

These ports are important for flawless node operation. The REST HTTP API port is optional and is only needed if you want to offer access to your node's API. All ports can be customized in the config.json file.

Please note: the dashboard only listens on localhost:8081 by default. If you want to make it accessible from the Internet, you will need to change the default configuration (though we recommend using a reverse proxy).

Operating system

Hornet is written in Go and can be deployed on all major platforms using several installation methods.

Hornet ships as a single executable binary (hornet or hornet.exe) and some JSON configuration files; no further dependencies are needed.

Linux (and Raspberry Pi)

Available installation methods:

MacOS

Available installation methods:

Windows

Available installation methods:

Configuration

Hornet uses several JSON configuration files that can be adjusted based on your deployment and use cases:

  • config.json: includes all core configuration parameters
  • peering.json: includes connection details to node neighbors (peers)

See more details regarding the configuration in the post installation chapter.

Nodes 101

The IOTA network is a distributed network built around a data structure called the Tangle, and it is spread among plenty of servers called nodes. Nodes are the backbone of an IOTA network. This section covers what nodes do in an IOTA network.

Nodes are responsible for the following:

  • Providing an API to interact with the Tangle/IOTA network.
  • Validating messages and ledger mutations for consistency.
  • Providing data for other nodes to synchronize to the latest state of the network.

Attaching new messages to the Tangle

A message is the data structure that is actually broadcast in the IOTA network and represents a vertex in the Tangle graph. When nodes receive a new message, they attach it to the Tangle by adding it to their local database.

As a result, at any point in time, all nodes may have different messages in their local databases. These messages make up a node's view of the Tangle.

To distribute the messages across the rest of the network, nodes synchronize their local databases with their neighbors.

Synchronizing with the rest of the network

Like any distributed system, nodes in an IOTA network synchronize their databases with other nodes, called neighbors, to form a single source of truth.

When one node, no matter where it is in the world, receives a message, it will try to gossip it to all its neighbors. This way, all nodes eventually see all messages and store them in their local databases.

To synchronize, nodes in IOTA networks use milestones.

If a node has the history of messages that a milestone references, that milestone is solid. Therefore, nodes know they are synchronized if the index of their latest solid milestone is the same as the index of the latest milestone they have received.

When a node is synchronized, it then has enough information to decide which transactions it considers confirmed.

Deciding which messages are confirmed

All messages remain in a pending state until the node is sure of their validity. For a definition of a message, see Messages, payloads, and transactions.

However, even when a message is valid, nodes may not be able to make a decision, as in the case of a double spend.

When nodes detect double spends, they must decide which message to consider confirmed and which one to ignore. Nodes do this by using consensus rules that are built into their node software as part of the network protocol.

Keeping a record of the balances on addresses via UTXO

All nodes keep a record of the Unspent Transaction Outputs (UTXOs) so they can do the following:

  • Check that a transaction is not transferring more IOTA tokens than are available on the address
  • Respond to clients' requests for their balance

Nodes update their record of balances only when a transaction is confirmed.

Exposing APIs for clients

Nodes come with two sets of low-level APIs:

  • HTTP (REST) API
  • Event API

Developers do not need to communicate with nodes using these low-level APIs directly. Instead, they can leverage the IOTA client libraries, which provide a high-level abstraction of all the features IOTA nodes provide, either at the HTTP API level or the Event API level.

HTTP Rest API

The HTTP API allows clients to interact with the Tangle and ask nodes to do the following:

  • Get tip messages
  • Attach new messages to the Tangle
  • Do proof of work
  • Get messages from the Tangle

Event API

The Event API allows clients to subscribe to new messages and other events that happen on nodes. This type of API is useful for building applications such as custodial wallets that need to monitor the Tangle for updates to the balances of certain addresses.

Security 101

This topic provides a checklist of steps for running a reliable and secure node.

Please note that servers reachable from the Internet are a constant target of attackers. Please make sure you follow the minimum security essentials summarized in this article.

Securing your device

The security of the device that's running your node is important to stop attackers from gaining access to it.

You should consider doing the following before running a node on your device:

  • Securing SSH logins
  • Blocking unnecessary ports

Securing SSH logins

If you log into your device through SSH, you should take measures to protect it from unauthorized access. Many guides have been written about this subject. For more information, see 10 Steps to Secure Open SSH. In addition, you can also leverage tools such as Fail2ban to harden security even more.
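As a minimal sketch (assuming a Debian/Ubuntu system with OpenSSH, and that key-based login is already set up and verified; keep your current session open until you have confirmed you can still log in), the following commands disable password and root logins and install Fail2ban:

sudo sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config   # disable password logins
sudo sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin no/' /etc/ssh/sshd_config                 # disable direct root logins
sudo systemctl restart ssh                                                                      # apply the new SSH configuration
sudo apt install fail2ban                                                                       # ban IPs with repeated failed logins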

Blocking unnecessary ports

Attackers can abuse any open ports on your device.

To secure your device against attacks on unused open ports, you should close all ports except those that are in use.

To do so, you can use a firewall. All operating systems include firewall options. By having a firewall in place you can completely block unused and unnecessary ports.

On cloud platforms such as AWS, Azure, or GCP, you can block ports in the VPS networking settings.
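For example, on Ubuntu/Debian you can use ufw. This is a minimal sketch assuming the default Hornet ports listed earlier and SSH on port 22; adjust it to the ports you actually use:

sudo ufw allow 22/tcp      # SSH (keep this open, otherwise you lock yourself out)
sudo ufw allow 15600/tcp   # gossip protocol port
sudo ufw allow 14265/tcp   # REST HTTP API port (only if you want to expose the API publicly)
sudo ufw enable            # with ufw's default policy, all other incoming ports are denied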

Deciding whether to enable remote proof of work

When you're configuring your node, you have the option to allow it to do proof of work. When this feature is enabled, clients can ask your node to do remote proof of work.

Proof of work takes time and uses your node's computational power. You should decide whether to enable it based on your infrastructure.

Load balancing

If you run more than one node, it's a best practice to make sure that API requests are distributed among all of them.

To evenly distribute the API requests among all your nodes, you can run a reverse proxy server that will act as a load balancer (HAProxy, Traefik, Nginx, Apache, etc.). This way, you can have one domain name for your reverse proxy server that all nodes will send their API calls to. But, on the backend, the nodes with the most spare computational power will process the request and return the response.

Broadcast messages are atomic and nodes provide a RESTful API to communicate, so sticky sessions or similar techniques are not needed.

Reverse proxy

Using a reverse proxy in front of a node is considered a best practice, even when deploying a single node. A reverse proxy adds an additional security layer that can handle tasks such as IP address filtering, abuse rate limiting, SSL/TLS termination, an additional authorization layer, etc.
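As an illustration only (nginx is assumed to be installed; node.example.com and the certificate paths are placeholders), a minimal nginx site that terminates TLS and forwards API calls to a local Hornet instance could look like this:

sudo tee /etc/nginx/sites-available/hornet >/dev/null <<'EOF'
server {
    listen 443 ssl;
    server_name node.example.com;                               # placeholder domain
    ssl_certificate     /etc/ssl/certs/node.example.com.pem;    # placeholder certificate
    ssl_certificate_key /etc/ssl/private/node.example.com.key;  # placeholder key

    location / {
        proxy_pass http://127.0.0.1:14265;                      # Hornet REST API on localhost
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/hornet /etc/nginx/sites-enabled/hornet
sudo nginx -t && sudo systemctl reload nginx

In such a setup, the restAPI.bindAddress config option can be restricted to localhost so that the API is only reachable through the proxy.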

Hornet apt repository (Linux-distro specific)

The Hornet APT repository is maintained by the Hornet developers. It installs Hornet as a systemd service under a user called hornet.

Ubuntu/Debian

Import the public key that is used to sign the software release:

wget -qO - https://ppa.hornet.zone/pubkey.txt | sudo apt-key add -

Add the Hornet APT repository to your APT sources:

sudo sh -c 'echo "deb http://ppa.hornet.zone stable main" >> /etc/apt/sources.list.d/hornet.list'

Update apt package lists and install Hornet:

sudo apt update
sudo apt install hornet

Enable the systemd service:

sudo systemctl enable hornet.service

The Hornet configuration files are located under the /var/lib/hornet directory. See more details on how to configure Hornet under the post installation chapter.

An environment file that configures several default parameters is located at /etc/default/hornet.

Start the node; use the systemd service to run Hornet on the Mainnet:

sudo service hornet start

Managing the node

Displaying log output:

journalctl -fu hornet
  • -f: instructs journalctl to continue displaying the log to stdout until CTRL+C is pressed
  • -u hornet: filters the log output by the hornet systemd unit

Restarting Hornet:

sudo systemctl restart hornet

Stopping Hornet:

sudo systemctl stop hornet

Please note: Hornet uses an in-memory cache, so it is necessary to provide a grace period while shutting it down (at least 200 seconds) in order to save all data to the underlying persistent storage.

See more details on how to configure Hornet under the post installation chapter.


Docker image

Prepared Hornet Docker images (amd64/x86_64 architecture) are available at gohornet/hornet Docker hub.

Make sure that you've installed Docker on your machine before trying to use the Docker images. (Follow this link for install instructions).

Hornet uses JSON configuration files which can be downloaded from the repository on GitHub:

curl -LO https://raw.githubusercontent.com/gohornet/hornet/main/config.json
curl -LO https://raw.githubusercontent.com/gohornet/hornet/main/peering.json

See more details on how to configure Hornet under the post installation chapter.

Create empty directories for the database, snapshots and set user permission to them:

mkdir mainnetdb && sudo chown 39999:39999 mainnetdb
mkdir -p snapshots/mainnet && sudo chown 39999:39999 snapshots -R
  • The Docker image runs under a user with UID 39999, so that user needs full permissions to the given directories

Pull the latest image from gohornet/hornet public Docker hub registry:

docker pull gohornet/hornet:latest

You should now have the following files and directories in the current directory:

.
├── config.json
├── peering.json
├── mainnetdb       <DIRECTORY>
└── snapshots       <DIRECTORY>
    └── mainnet     <DIRECTORY>

3 directories, 2 files

Start the node using the docker run command:

docker run -d --restart always -v $(pwd)/config.json:/app/config.json:ro -v $(pwd)/snapshots/mainnet:/app/snapshots/mainnet -v $(pwd)/mainnetdb:/app/mainnetdb --name hornet --net=host gohornet/hornet:latest
  • $(pwd): stands for the current directory
  • -d: instructs Docker to run the container instance in detached mode (as a daemon)
  • --restart always: instructs Docker to always restart the container (for example after a reboot of the Docker daemon or host)
  • --name hornet: the name of the running container instance. You can refer to the given container by this name
  • --net=host: instructs Docker to use the host network directly (the network is not isolated). Running on the host network is best for performance. It also means it is not necessary to specify any ports; ports opened by the container are opened directly on the host
  • -v $(pwd)/config.json:/app/config.json:ro: maps the local config.json file into the container in read-only mode
  • -v $(pwd)/snapshots/mainnet:/app/snapshots/mainnet: maps the local snapshots directory into the container
  • -v $(pwd)/mainnetdb:/app/mainnetdb: maps the local mainnetdb directory into the container
  • All mentioned directories are mapped into the container, so Hornet running in the container persists its data directly to those directories

Managing node

Displaying log output:

docker logs -f hornet
  • -f: instructs Docker to continue displaying the log to stdout until CTRL+C is pressed

Restarting Hornet:

docker restart -t 200 hornet

Stopping Hornet:

docker stop -t 200 hornet
  • -t 200: instructs Docker to allow a grace period of 200 seconds before shutting down the container

Please note: Hornet uses an in-memory cache and so it is necessary to provide a grace period while shutting it down (at least 200 seconds) in order to save all data to the underlying persistent storage.

Removing container:

docker container rm hornet
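For example, upgrading to a newer image is just a combination of the commands above (a sketch; your container name and directories may differ):

docker stop -t 200 hornet          # grace period so all data is flushed to disk
docker container rm hornet
docker pull gohornet/hornet:latest
docker run -d --restart always -v $(pwd)/config.json:/app/config.json:ro -v $(pwd)/snapshots/mainnet:/app/snapshots/mainnet -v $(pwd)/mainnetdb:/app/mainnetdb --name hornet --net=host gohornet/hornet:latest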

Using docker-compose

Docker-compose is a tool on top of the Docker engine that enables you (among other features) to define Docker parameters in a structured way via a YAML file. Then, with a single docker-compose command, you create and start the containers defined in your configuration.

Create a docker-compose.yml file next to the other files created above:

.
├── config.json
├── peering.json
├── docker-compose.yml      <NEWLY ADDED FILE>
├── mainnetdb
└── snapshots
    └── mainnet

With the following content in docker-compose.yml:

version: '3'
services:
  hornet:
    container_name: hornet
    image: gohornet/hornet:latest
    network_mode: host
    restart: always
    cap_drop:
      - ALL
    volumes:
      - ./config.json:/app/config.json:ro
      - ./peering.json:/app/peering.json
      - ./snapshots/mainnet:/app/snapshots/mainnet
      - ./mainnetdb:/app/mainnetdb

Then run docker-compose up in the current directory:

docker-compose up
  • it reads the parameters from the given docker-compose.yml and fires up a container named hornet based on them
  • add the -d flag (docker-compose up -d) to run the containers in detached mode (as a daemon)

Containers created by docker-compose can be restarted and managed in the same fashion as mentioned above (docker logs, docker restart, etc.).
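For example (a sketch using standard docker-compose commands, run from the directory containing docker-compose.yml):

docker-compose up -d            # create and start the container in detached mode
docker-compose logs -f hornet   # follow the log output
docker-compose stop -t 200      # stop with a grace period so all data is flushed to disk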

See more details on how to configure Hornet under the post installation chapter.

Building Hornet Docker image

All files required to build and start a Hornet container are also part of the Hornet repository, so you can alternatively clone the repository and start from there.

This approach is, for example, used in hornet-testnet-boilerplate by our friend Dave Fijter. Please see it here.

This boilerplate also builds a new container image from source, so it can be used as an alternative to the Build from source approach.


Pre-built binaries

There are several pre-built binaries of Hornet for major platforms available including some default configuration JSON files.

This method is considered a bit advanced for production use, since you usually have to prepare the system environment to run the given executable as a service (in daemon mode) via systemd or supervisord.

Download the latest release compiled for your system from the GitHub release assets, for example:

curl -LO https://github.com/gohornet/hornet/releases/download/v0.6.0/HORNET-0.6.0_Linux_x86_64.tar.gz

Some navigation hints:

  • HORNET-X.Y.Z_Linux_x86_64.tar.gz: standard 64-bit-linux-based executable, such as Ubuntu, Debian, etc.
  • HORNET-X.Y.Z_Linux_arm64.tar.gz: executable for Raspberry Pi 4
  • HORNET-X.Y.Z_Windows_x86_64.zip: executable for Windows 10-64-bit-based systems
  • HORNET-X.Y.Z_macOS_x86_64.tar.gz: executable for macOS

Extract the files in a folder of your choice (for example /opt on Linux):

tar -xf HORNET-0.6.0_Linux_x86_64.tar.gz
  • Once extracted, you get a main executable file
  • There are also sample configuration JSON files available in the archive (tar or zip)

Run Hornet using --help to get all executable-related arguments:

./hornet --help

Also double check that you have version 0.6.0+ deployed:

./hornet --version

Run Hornet using default settings:

./hornet

Using this method, you have to make sure the executable runs in daemon mode, for example using systemd.

Please note: Hornet uses an in-memory cache, so it is necessary to provide a grace period while shutting it down (at least 200 seconds) in order to save all data to the underlying persistent storage.

See more details on how to configure Hornet under the post installation chapter.

Example of systemd unit file

Assuming the Hornet executable is extracted to /opt/hornet together with configuration files, please find the following example of a systemd unit file:

[Unit]
Description=Hornet
Wants=network-online.target
After=network-online.target

[Service]
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=hornet
PrivateDevices=yes
PrivateTmp=yes
ProtectSystem=full
ProtectHome=yes

User=hornet
WorkingDirectory=/opt/hornet
TimeoutSec=1200
Restart=always
ExecStart=/opt/hornet/hornet

[Install]
WantedBy=multi-user.target
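A minimal sketch of how to activate it (assuming you save the unit as /etc/systemd/system/hornet.service and that the hornet user referenced in the unit exists):

sudo systemctl daemon-reload            # make systemd pick up the new unit file
sudo systemctl enable hornet.service    # start Hornet automatically on boot
sudo systemctl start hornet.service
journalctl -fu hornet                   # follow the log output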

Build from source

This method is considered a bit advanced for production use since you usually have to prepare a system environment in order to run the given executable as a service (in a daemon mode) via systemd or supervisord.

Install Go:

Install Go
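A minimal sketch of installing Go from the official tarball (the version number 1.16.5 below is only a placeholder; check https://golang.org/dl/ for the current release and your architecture):

curl -LO https://golang.org/dl/go1.16.5.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.16.5.linux-amd64.tar.gz   # install/replace Go under /usr/local/go
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile                               # make the go binary available in your shell
source ~/.profile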

Install the dependencies Git and build-essential:

sudo apt update
sudo apt install git build-essential

Check the Go and Git versions:

go version
git --version

Make sure you have the latest version from https://golang.org/dl/

Clone the Hornet source code from GitHub:

git clone https://github.com/gohornet/hornet.git && cd hornet

Build Hornet:

./build_hornet_rocksdb_builtin.sh
  • it builds Hornet based on the latest commit from the main branch
  • it takes a couple of minutes

Once it is compiled, the executable file named hornet should be available in the current directory:

./hornet --version

Example of version:

HORNET 0.6.0-31ad46bb
  • there is also a short commit SHA appended so you know which commit the given version was compiled against

Run Hornet using --help to get all executable-related arguments:

./hornet --help

Run Hornet using default settings:

./hornet

Using this method, you have to make sure the executable runs in daemon mode, for example using systemd.

Please note: Hornet uses an in-memory cache, so it is necessary to provide a grace period while shutting it down (at least 200 seconds) in order to save all data to the underlying persistent storage.

See more details on how to configure Hornet under the post installation chapter.

Example of systemd unit file

Assuming the Hornet executable is extracted to /opt/hornet together with configuration files, please find the following example of a systemd unit file:

[Unit]
Description=Hornet
Wants=network-online.target
After=network-online.target

[Service]
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=hornet
PrivateDevices=yes
PrivateTmp=yes
ProtectSystem=full
ProtectHome=yes

User=hornet
WorkingDirectory=/opt/hornet
TimeoutSec=1200
Restart=always
ExecStart=/opt/hornet/hornet

[Install]
WantedBy=multi-user.target

Bootstrapping the Chrysalis Phase 2 Hornet node from the genesis snapshot

  1. Rename the genesis_snapshot.bin to full_snapshot.bin.
  2. Make sure your C2 (Chrysalis Phase 2) Hornet node has no database and no prior snapshot files.
  3. Place the full_snapshot.bin in the directory defined by the snapshots.fullPath config option (this option contains the full path including the file name); see the command sketch after this list.
  4. Adjust protocol.networkID to the same value as used in the -genesis-snapshot-file-network-id="<network-id-for-chrysalis-phase-2>" flag. (This should not really be necessary, as the C2 Hornet version will ship with the appropriate default values.)
  5. Check that the corresponding protocol.publicKeyRanges are correct.
  6. Start your C2 Hornet node and add peers via the dashboard.
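A command sketch of steps 1-3 (assuming the default snapshots.fullPath of snapshots/mainnet/full_snapshot.bin and the default mainnetdb database directory; adjust the paths to your configuration):

mv genesis_snapshot.bin full_snapshot.bin                    # step 1: rename the genesis snapshot
rm -rf mainnetdb                                             # step 2: no existing database
rm -f snapshots/mainnet/*_snapshot.bin                       # step 2: no prior snapshot files
mv full_snapshot.bin snapshots/mainnet/full_snapshot.bin     # step 3: place it at snapshots.fullPath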

Post-installation

Once Hornet is deployed, all parameters are set via configuration files.

Configuration

The most important ones are:

  • config.json: includes all configuration flags and their values
  • peering.json: includes all connection details to your static peers (neighbors)

Hornet version 0.5.x targets the legacy IOTA 1.0 network. Hornet version 1.x.x targets the IOTA 1.5 network, also known as Chrysalis, which is the focus of this documentation.

Depending on the installation path you selected, default configuration files may also be part of the installation, so you may see the following configuration files in your deployment directory:

config.json
config_chrysalis_testnet.json
peering.json
profiles.json

Default configuration

By default, Hornet searches for configuration files in the working directory and expects default names, such as config.json and peering.json.

This behavior can be changed by passing additional arguments when running Hornet.

Please see the config.json and peering.json chapters for more information regarding the respective configuration files.

Once Hornet is executed, it outputs all loaded configuration parameters to stdout to show what configuration was actually loaded (omitting values for things like passwords etc.).

All other available command line parameters can be listed by running hornet --help, or, for more granular output, hornet --help --full.

Dashboard

By default, an admin dashboard/web interface plugin is available on port 8081. It provides useful information regarding the node's health, peering/neighbors, overall network health, and consumed system resources.

The dashboard plugin only listens on localhost:8081 by default. If you want to make it accessible from the Internet, you will need to change the default configuration. It can be changed via the following config.json section:

"dashboard": {
  "bindAddress": "localhost:8081",
  "auth": {
    "sessionTimeout": "72h",
    "username": "admin",
    "passwordHash": "0000000000000000000000000000000000000000000000000000000000000000",
    "passwordSalt": "0000000000000000000000000000000000000000000000000000000000000000"
  }
}

Change dashboard.bindAddress to either 0.0.0.0:8081 to listen on all available interfaces, or the specific interface address accordingly.

Even if accessible from the Internet, any visitor still needs a valid combination of the username and password to access the management section of the dashboard.

The password hash and salt can be generated using the integrated pwdhash CLI tool:

./hornet tools pwdhash

Output example:

Enter a password:
Re-enter your password:
Success!
Your hash: 24c832e35dc542901b90888321dbfc4b1d9617332cbc124709204e6edf7e49f9
Your salt: 6c71f4753f6fb52d7a4bb5471281400c8fef760533f0589026a0e646bc03acd4

The pwdhash tool outputs the passwordHash and passwordSalt based on the password you entered.

Copy both values to their corresponding configuration options: dashboard.auth.passwordHash and dashboard.auth.passwordSalt respectively.

In order for the new password to take effect, you must restart Hornet.

Peer neighbors

The IOTA network is a distributed network in which data is broadcasted among IOTA nodes through a gossip protocol. To be able to participate in a network, each node has to establish a secure connection to other nodes in the network - to its peer neighbors - and mutually exchange messages.

Node identity

Each node is uniquely identified by its peer identity. The peer identity (also called PeerId) is represented by a public and private key pair. The PeerId represents a verifiable link between the given peer and its public key, since the PeerId is a cryptographic hash of the peer's public key. It enables individual peers to establish a secure communication channel, as the hash can be used to verify the identity of the peer.

Hornet automatically generates a PeerId when it is started for the first time and saves the identity's public key in the file ./p2pstore/key.pub and the private key in a BadgerDB within ./p2pstore. The generated identity is kept across subsequent restarts.

Each time Hornet starts, the PeerId is written to stdout:

2021-04-19T14:27:55Z  INFO    P2P     never share your ./p2pstore folder as it contains your node's private key!
2021-04-19T14:27:55Z  INFO    P2P     generating a new peer identity...
2021-04-19T14:27:55Z  INFO    P2P     stored public key under p2pstore/key.pub
2021-04-19T14:27:55Z  INFO    P2P     peer configured, ID: 12D3KooWEWunsQWGvSWYN2VR7wNNoHgae4XikBqwSre8K8sVTefu

Your PeerId is an essential part of your multiaddr used to configure neighbors, such as /dns/example.com/tcp/15600/p2p/12D3KooWHiPg9gzmy1cbTFAUekyLHQKQKvsKmhzB7NJ5xnhK4WKq, where 12D3KooWHiPg9gzmy1cbTFAUekyLHQKQKvsKmhzB7NJ5xnhK4WKq corresponds to your PeerId. Your PeerId is also visible on the start page of the dashboard.

However, it is recommended to pre-generate the identity, so you can communicate it to your peers before you even start your node, and also to retain the identity in case you delete your ./p2pstore by accident.

You can use the p2pidentity CLI tool to generate a PeerId; it simply generates a key pair and logs it to stdout:

./hornet tools p2pidentity

Sample output:

Your p2p private key:  7ea40ae657e2b8d46069f2ea6fe8f6ab209fb3f6f6630bc025a11a97e17e5d0675a575803660978d323fef05e871f54ecd94602b15181ba56183f9aba121ede7
Your p2p public key:  75a575803660978d323fef05e871f54ecd94602b15181ba56183f9aba121ede7
Your p2p PeerID:  12D3KooWHjcCgWPnUEP8wNdbL2fx63Cmosk16xyZ25iUZagxmHb4

Now simply copy the value of Your p2p private key to the p2p.identityPrivateKey configuration option.

Your Hornet node will now use the specified private key in p2p.identityPrivateKey to generate the PeerId (which will ultimately be stored in ./p2pstore).

In case there already is a ./p2pstore with another identity, Hornet will panic and tell you that your previous identity does not match what is defined via p2p.identityPrivateKey (in that case, either delete the ./p2pstore or reset p2p.identityPrivateKey).
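If you deliberately want to discard the old identity and use the one defined in p2p.identityPrivateKey, the reset can be as simple as the following (make sure Hornet is stopped and that you really no longer need the old identity):

rm -r ./p2pstore   # removes the stored peer identity; Hornet recreates ./p2pstore on the next start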

More information regarding the PeerId is available on the libp2p docs page .

Addressing peer neighbors

In order to communicate with your peer neighbors, you also need an address to reach them. Hornet uses the MultiAddresses format (also known as multiaddr) for that.

A multiaddr is a convention for encoding multiple layers of addressing information into a single, future-proof path structure. In other words, a multiaddr is able to combine several different pieces of information in a single human-readable and machine-optimized string, including the network protocol and PeerId.

For example, consider a node that is reachable via IPv4 address 100.1.1.1 using TCP on port 15600, with the PeerId 12D3KooWHjcCgWPnUEP8wNdbL2fx63Cmosk16xyZ25iUZagxmHb4.

A multiaddr encoding such information would look like this:

/ip4/100.1.1.1/tcp/15600/p2p/12D3KooWHjcCgWPnUEP8wNdbL2fx63Cmosk16xyZ25iUZagxmHb4

Note how ip4 is used. A common mistake is to use ipv4.

If a node is reachable using a DNS name (for example node01.iota.org), then the given multiaddr would be:

/dns/node01.iota.org/tcp/15600/p2p/12D3KooWHjcCgWPnUEP8wNdbL2fx63Cmosk16xyZ25iUZagxmHb4

To find out your own multiaddr to give to your peers for neighboring, combine the PeerId you got from stdout when the Hornet node started up (or which was shown via the p2pidentity CLI tool) and your configured p2p.bindAddress. Replace the /ip4/<ip_address> or /dns/<hostname> segment with the actual information.
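For example (using the PeerId from the p2pidentity output above, a placeholder public IP of 198.51.100.5, and the default gossip port 15600):

PeerId (from stdout or the p2pidentity tool):  12D3KooWHjcCgWPnUEP8wNdbL2fx63Cmosk16xyZ25iUZagxmHb4
public IP address (placeholder):               198.51.100.5
multiaddr to share with your peers:            /ip4/198.51.100.5/tcp/15600/p2p/12D3KooWHjcCgWPnUEP8wNdbL2fx63Cmosk16xyZ25iUZagxmHb4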

More information about multiaddr is available at the libp2p docs page.

Adding node peers

Once you know your node's own multiaddr, it can be exchanged with other node owners to establish a mutual peer connection.

Where to find neighbors?

Join the official IOTA Discord server and the #fullnodes channel, describe your node location (Europe / Germany / Asia, etc.) together with your allocated hardware resources, and ask for neighbors. Do not publicly disclose your node's multiaddr to all readers; wait for an individual direct chat.

Each peer can then be added using the Hornet dashboard (admin section) or peering.json file.

A recommended number of peer neighbors is 4-6 to get some degree of redundancy.

Happy peering

Configuring HTTP REST API

One of the tasks a node is responsible for is exposing an HTTP REST API for clients that would like to interact with the IOTA network, such as crypto wallets, exchanges, IoT devices, etc.

By default, the HTTP REST API is publicly exposed on port 14265 and ready to accept incoming connections from the Internet.

Since offering the HTTP REST API to the public can consume resources of your node, there are options to restrict which routes can be called and other request limitations.

The HTTP REST API related options exist under the restAPI section of the config.json file:

  "restAPI": {
    "jwtAuth": {
      "enabled": false,
      "salt": "HORNET"
    },
    "excludeHealthCheckFromAuth": false,
    "permittedRoutes": [
      "/health",
      "/mqtt",
      "/api/v1/info",
      "/api/v1/tips",
      "/api/v1/messages/:messageID",
      "/api/v1/messages/:messageID/metadata",
      "/api/v1/messages/:messageID/raw",
      "/api/v1/messages/:messageID/children",
      "/api/v1/messages",
      "/api/v1/transactions/:transactionID/included-message",
      "/api/v1/milestones/:milestoneIndex",
      "/api/v1/milestones/:milestoneIndex/utxo-changes",
      "/api/v1/outputs/:outputID",
      "/api/v1/addresses/:address",
      "/api/v1/addresses/:address/outputs",
      "/api/v1/addresses/ed25519/:address",
      "/api/v1/addresses/ed25519/:address/outputs",
      "/api/v1/treasury"
    ],
    "whitelistedAddresses": [
      "127.0.0.1",
      "::1"
    ],
    "bindAddress": "0.0.0.0:14265",
    "powEnabled": true,
    "powWorkerCount": 1,
    "limits": {
      "bodyLength": "1M",
      "maxResults": 1000
    }
  }

If you want to make the HTTP REST API only accessible from localhost, change the restAPI.bindAddress config option accordingly.

restAPI.permittedRoutes defines which routes can be called from foreign addresses which are not defined under restAPI.whitelistedAddresses.

If you are concerned with resource consumption, consider turning off restAPI.powEnabled; clients will then have to perform proof of work locally before submitting a message for broadcast. If you'd like to offer proof of work for clients, consider raising restAPI.powWorkerCount to provide a faster message submission experience.

We suggest that you provide your HTTP REST API behind a reverse proxy, such as nginx or Traefik configured with TLS.

Please see some of our additional security recommendations here.

Feel free to explore more details regarding different API calls at the IOTA client library documentation.

Managing node

This chapter provides an overview of key concepts to consider during the maintenance cycle of your node.

Storage

Hornet uses an embedded database engine that stores its data in a directory on the file system. The location is controlled via the config.json file under the db section, key path:

"db": {
    "engine": "rocksdb",
    "path": "mainnetdb",
    "autoRevalidation": false
  }

By convention, the directory is named after the network type: mainnet vs testnet.

Another important directory is the one dedicated to snapshots, controlled via the snapshots section of config.json, specifically the fullPath and deltaPath keys:

"snapshots": {
    "interval": 50,
    "fullPath": "snapshots/mainnet/full_snapshot.bin",
    "deltaPath": "snapshots/mainnet/delta_snapshot.bin",
    "deltaSizeThresholdPercentage": 50.0,
    "downloadURLs": [
      {
        "full": "https://ls.manapotion.io/full_snapshot.bin",
        "delta": "https://ls.manapotion.io/delta_snapshot.bin"
      },
      {
        "full": "https://x-vps.com/full_snapshot.bin",
        "delta": "https://x-vps.com/delta_snapshot.bin"
      },
      {
        "full": "https://dbfiles.iota.org/mainnet/hornet/full_snapshot.bin",
        "delta": "https://dbfiles.iota.org/mainnet/hornet/delta_snapshot.bin"
      }
    ]
}

The same convention applies, and the directories are named after the network type (mainnet vs testnet).

Here is a full overview of all the files and directories used by Hornet:

.
├── config.json
├── hornet              <EXECUTABLE>
├── p2pstore
│   ├── [...files]
├── snapshots           <SNAPSHOT DIR>
│   └── testnet
│       ├── delta_snapshot.bin
│       └── full_snapshot.bin
└── testnetdb           <DB DIR>
    ├── [...db files]

Plugins

Hornet can be extended by plugins. Plugins are controlled via the node section of the config.json file, specifically the disablePlugins and enablePlugins keys:

"node": {
    "alias": "Mainnet",
    "profile": "auto",
    "disablePlugins": [],
    "enablePlugins": []
  },

Additionally, plugins can be controlled via Dashboard/web interface.

Spammer

Hornet integrates a lightweight spamming plugin that spams the network with messages. Since the IOTA network is based on a Directed Acyclic Graph, in which new incoming messages are attached to previous messages (tips), it is healthy for the network to maintain a certain message rate.

The Spammer plugin allows your node to send a number of data messages at a regular interval. The interval is set via the mpsRateLimit key, which is the number of messages per second (MPS) that the plugin should try to send.

For example, the value "mpsRateLimit": 0.1 would mean sending 1 message every 10 seconds.

Needless to say, it is turned off by default:

 "spammer": {
    "message": "Binary is the future.",
    "index": "HORNET Spammer",
    "indexSemiLazy": "HORNET Spammer Semi-Lazy",
    "cpuMaxUsage": 0.8,
    "mpsRateLimit": 0.0,
    "workers": 0,
    "autostart": false
  }

This plugin can also be leveraged during spamming events, in which the community tests the throughput of the network.

Snapshots

Your node's ledger accumulates many messages, which consumes significant disk capacity over time. This topic discusses how to configure local snapshots to prune old transactions from your node's database and to create backup snapshot files.

 "snapshots": {
    "interval": 50,
    "fullPath": "snapshots/mainnet/full_snapshot.bin",
    "deltaPath": "snapshots/mainnet/delta_snapshot.bin",
    "deltaSizeThresholdPercentage": 50.0,
    "downloadURLs": [
      {
        "full": "https://ls.manapotion.io/full_snapshot.bin",
        "delta": "https://ls.manapotion.io/delta_snapshot.bin"
      },
      {
        "full": "https://x-vps.com/full_snapshot.bin",
        "delta": "https://x-vps.com/delta_snapshot.bin"
      },
      {
        "full": "https://dbfiles.iota.org/mainnet/hornet/full_snapshot.bin",
        "delta": "https://dbfiles.iota.org/mainnet/hornet/delta_snapshot.bin"
      }
    ]
  },
  "pruning": {
    "enabled": true,
    "delay": 60480,
    "pruneReceipts": false
  }

Snapshot pruning

During a snapshot, messages may be deleted from the ledger if they were confirmed by an old milestone. In other words, pruning means deleting the old history from the node's database:

  • To enable pruning, set the pruning.enabled key to true
  • The pruning.delay key defines how far back from the current confirmed milestone data should be pruned

There are two types of snapshots:

Delta snapshot: a delta snapshot points to a specific full snapshot, i.e. it consists of the changes since that full snapshot.

Full snapshot: a full snapshot includes the whole ledger state up to a specific milestone and a solid entry point.

How to work with snapshots

If you run a Hornet node for the first time, you need to start it with a full snapshot. Hornet downloads it for you automatically from trusted sources.

Additionally, you can start it with a specific delta snapshot.

You can use the Hornet tools for that:

hornet tools
  • snapgen: generates an initial snapshot for a private network
  • snapmerge: merges a full and delta snapshot into an updated full snapshot
  • snapinfo: outputs information about a snapshot file

Core configuration

Hornet uses the standard JSON format for its config file. If you are unsure about JSON syntax, have a look at the official specs here.

The default config file is named config.json. You can change the path or name of the config file by using the -c or --config argument when executing the hornet executable.

For example: hornet -c config_example.json.

You can always get the most up-to-date description of the config parameters by running hornet -h --full.


1. REST API

Name | Description | Type
jwtAuth | config for JWT auth | object
permittedRoutes | the allowed HTTP REST routes which can be called from non whitelisted addresses | array of strings
whitelistedAddresses | the whitelist of addresses which are allowed to access the REST API | array of strings
bindAddress | the bind address on which the REST API listens on | string
powEnabled | whether the node does PoW if messages are received via API | bool
powWorkerCount | the amount of workers used for calculating PoW when issuing messages via API | integer
limits | config for api limits | object
excludeHealthCheckFromAuth | whether to allow the health check route anyways | bool

JWT Auth

Name | Description | Type
enabled | whether to use JWT auth for the REST API | bool
salt | salt used inside the JWT tokens for the REST API. Change this to a different value to invalidate JWT tokens not matching this new value | string

Limits

Name | Description | Type
bodyLength | the maximum number of characters that the body of an API call may contain | string
maxResults | the maximum number of results that may be returned by an endpoint | integer

Example:

  "restAPI": {
    "authEnabled": false,
    "excludeHealthCheckFromAuth": false,
    "permittedRoutes": [
      "/health",
      "/mqtt",
      "/api/v1/info",
      "/api/v1/tips",
      "/api/v1/messages/:messageID",
      "/api/v1/messages/:messageID/metadata",
      "/api/v1/messages/:messageID/raw",
      "/api/v1/messages/:messageID/children",
      "/api/v1/messages",
      "/api/v1/transactions/:transactionID/included-message",
      "/api/v1/milestones/:milestoneIndex",
      "/api/v1/milestones/:milestoneIndex/utxo-changes",
      "/api/v1/outputs/:outputID",
      "/api/v1/addresses/:address",
      "/api/v1/addresses/:address/outputs",
      "/api/v1/addresses/ed25519/:address",
      "/api/v1/addresses/ed25519/:address/outputs",
      "/api/v1/treasury"
    ],
    "whitelistedAddresses": [
      "127.0.0.1",
      "::1"
    ],
    "bindAddress": "0.0.0.0:14265",
    "powEnabled": false,
    "powWorkerCount": 1,
    "limits": {
      "bodyLength": "1M",
      "maxResults": 1000
    }
  },

2. Dashboard

Name | Description | Type
bindAddress | the bind address on which the dashboard can be accessed from | string
dev | whether to run the dashboard in dev mode | bool
auth | config for dashboard auth | object

Auth

Name | Description | Type
sessionTimeout | how long the auth session should last before expiring | string
username | the auth username | string
passwordHash | the auth password+salt as a scrypt hash | string
passwordSalt | the auth salt used for hashing the password | string

Example:

  "dashboard": {
    "bindAddress": "localhost:8081",
    "dev": false,
    "auth": {
      "sessionTimeout": "72h",
      "username": "admin",
      "passwordHash": "0000000000000000000000000000000000000000000000000000000000000000",
      "passwordSalt": "0000000000000000000000000000000000000000000000000000000000000000"
    }
  },

3. DB

Name | Description | Type
engine | the used database engine (pebble/bolt/rocksdb) | string
path | the path to the database folder | string
autoRevalidation | whether to automatically start revalidation on startup if the database is corrupted | bool
debug | ignore the check for corrupted databases (should only be used for debug reasons) | bool

Example:

  "db": {
    "engine": "rocksdb",
    "path": "mainnetdb",
    "autoRevalidation": false,
    "debug": false,
  },

4. Snapshots

Name | Description | Type
interval | interval, in milestones, at which snapshot files are created (snapshots are only created if the node is synced) | integer
depth | the depth, respectively the starting point, at which a snapshot of the ledger is generated | integer
fullPath | path to the full snapshot file | string
deltaPath | path to the delta snapshot file | string
deltaSizeThresholdPercentage | create a full snapshot if the size of a delta snapshot reaches a certain percentage of the full snapshot (0.0 = always create delta snapshot to keep ms diff history) | float
downloadURLs | URLs to load the snapshot files from | array of objects

DownloadURLs

Name | Description | Type
full | download link to the full snapshot file | string
delta | download link to the delta snapshot file | string

Example:

"snapshots": {
    "interval": 50,
    "depth": 50,
    "fullPath": "snapshots/mainnet/full_snapshot.bin",
    "deltaPath": "snapshots/mainnet/delta_snapshot.bin",
    "deltaSizeThresholdPercentage": 50.0,
    "downloadURLs": [
      {
        "full": "https://source1.example.com/full_snapshot.bin",
        "delta": "https://source1.example.com/delta_snapshot.bin"
      },
      {
        "full": "https://source2.example.com/full_snapshot.bin",
        "delta": "https://source2.example.com/delta_snapshot.bin"
      }
    ]
  },

5. Pruning

Name | Description | Type
enabled | whether to delete old message data from the database | bool
delay | amount of milestone cones to keep in the database | integer
pruneReceipts | whether to delete old receipts data from the database | bool

Example:

  "pruning": {
    "enabled": true,
    "delay": 60480,
    "pruneReceipts": false
  },

6. Protocol

Name | Description | Type
networkID | the network ID on which this node operates | string
bech32HRP | the HRP which should be used for Bech32 addresses | string
minPoWScore | the minimum PoW score required by the network | float
milestonePublicKeyCount | the amount of public keys in a milestone | integer
publicKeyRanges | list of public key ranges from the coordinator | array of objects

PublicKeyRanges

Name | Description | Type
key | public key | string
start | milestone start index | integer
end | milestone end index | integer

Example:

  "protocol": {
    "networkID": "mainnet1",
    "bech32HRP": "iota",
    "minPoWScore": 4000,
    "milestonePublicKeyCount": 2,
    "publicKeyRanges": [
      {
        "key": "7205c145525cee64f1c9363696811d239919d830ad964b4e29359e6475848f5a",
        "start": 0,
        "end": 0
      },
      {
        "key": "e468e82df33d10dea3bd0eadcd7867946a674d207c39f5af4cc44365d268a7e6",
        "start": 0,
        "end": 0
      },
      {
        "key": "0758028d34508079ba1f223907ac3bb5ce8f6bdccc6b961c7c85a2f460b30c1d",
        "start": 0,
        "end": 0
      }
    ]
  },

7. Proof of Work

Name | Description | Type
refreshTipsInterval | interval for refreshing tips during PoW for spammer messages and messages passed without parents via API | string

Example:

  "pow": {
    "refreshTipsInterval": "5s"
  },

8. Requests

Name | Description | Type
discardOlderThan | the maximum time a request stays in the request queue | string
pendingReEnqueueInterval | the interval the pending requests are re-enqueued | string

Example:

  "requests": {
    "discardOlderThan": "15s",
    "pendingReEnqueueInterval": "5s"
  },

9. Coordinator

Name | Description | Type
checkpoints | config for checkpoints | object
interval | the interval milestones are issued | string
powWorkerCount | the amount of workers used for calculating PoW when issuing checkpoints and milestones | integer
quorum | config for quorum | object
signing | config for signing | object
stateFilePath | the path to the state file of the coordinator | string
tipsel | config for tip selection | object

Checkpoints

Name | Description | Type
maxTrackedMessages | maximum amount of known messages for milestone tipselection | integer

Quorum

Name | Description | Type
enabled | whether the coordinator quorum is enabled | bool
groups | the quorum groups used to ask other nodes for correct ledger state of the coordinator | array of object arrays
timeout | the timeout until a node in the quorum must have answered | string

Groups

Name | Description | Type
{GROUP_NAME} | the quorum group used to ask other nodes for correct ledger state of the coordinator | array of objects

{GROUP_NAME}

Name | Description | Type
alias | alias of the quorum client (optional) | string
baseURL | baseURL of the quorum client | string
userName | username for basic auth (optional) | string
password | password for basic auth (optional) | string

Signing

Name | Description | Type
provider | the signing provider the coordinator uses to sign a milestone (local/remote) | string
remoteAddress | the address of the remote signing provider (insecure connection!) | string

Tipsel

Name | Description | Type
heaviestBranchSelectionTimeout | the maximum duration to select the heaviest branch tips | string
maxHeaviestBranchTipsPerCheckpoint | maximum amount of checkpoint messages with heaviest branch tips | integer
minHeaviestBranchUnreferencedMessagesThreshold | minimum threshold of unreferenced messages in the heaviest branch | integer
randomTipsPerCheckpoint | amount of checkpoint messages with random tips | integer

Example:

  "coordinator": {
    "stateFilePath": "coordinator.state",
    "interval": "10s",
    "powWorkerCount": 15,
    "checkpoints": {
      "maxTrackedMessages": 10000
    },
    "tipsel": {
      "minHeaviestBranchUnreferencedMessagesThreshold": 20,
      "maxHeaviestBranchTipsPerCheckpoint": 10,
      "randomTipsPerCheckpoint": 3,
      "heaviestBranchSelectionTimeout": "100ms"
    },
    "signing": {
      "provider": "local",
      "remoteAddress": "localhost:12345"
    },
    "quorum": {
      "enabled": false,
      "groups": {
        "hornet": [
          {
            "alias": "hornet1",
            "baseURL": "http://hornet1.example.com:14265",
            "userName": "",
            "password": ""
          }
        ],
        "bee": [
          {
            "alias": "bee1",
            "baseURL": "http://bee1.example.com:14265",
            "userName": "",
            "password": ""
          }
        ]
      },
      "timeout": "2s"
    }
  },

10. Tangle

Name | Description | Type
milestoneTimeout | the interval milestone timeout events are fired if no new milestones are received | string

Example:

  "tangle": {
    "milestoneTimeout": "30s"
  },

11. Tipsel

Name | Description | Type
maxDeltaMsgYoungestConeRootIndexToCMI | the maximum allowed delta value for the YCRI of a given message in relation to the current CMI before it gets lazy | integer
maxDeltaMsgOldestConeRootIndexToCMI | the maximum allowed delta value between OCRI of a given message in relation to the current CMI before it gets semi-lazy | integer
belowMaxDepth | the maximum allowed delta value for the OCRI of a given message in relation to the current CMI before it gets lazy | integer
nonLazy | config for tips from the non-lazy pool | object
semiLazy | config for tips from the semi-lazy pool | object

NonLazy

Name | Description | Type
retentionRulesTipsLimit | the maximum number of current tips for which the retention rules are checked (non-lazy) | integer
maxReferencedTipAge | the maximum time a tip remains in the tip pool after it was referenced by the first message (non-lazy) | string
maxChildren | the maximum amount of references by other messages before the tip is removed from the tip pool (non-lazy) | integer
spammerTipsThreshold | the maximum amount of tips in a tip-pool (non-lazy) before the spammer tries to reduce these | integer

SemiLazy

Name | Description | Type
retentionRulesTipsLimit | the maximum number of current tips for which the retention rules are checked (semi-lazy) | integer
maxReferencedTipAge | the maximum time a tip remains in the tip pool after it was referenced by the first message (semi-lazy) | string
maxChildren | the maximum amount of references by other messages before the tip is removed from the tip pool (semi-lazy) | integer
spammerTipsThreshold | the maximum amount of tips in a tip-pool (semi-lazy) before the spammer tries to reduce these | integer

Example:

  "tipsel": {
    "maxDeltaMsgYoungestConeRootIndexToCMI": 8,
    "maxDeltaMsgOldestConeRootIndexToCMI": 13,
    "belowMaxDepth": 15,
    "nonLazy": {
      "retentionRulesTipsLimit": 100,
      "maxReferencedTipAge": "3s",
      "maxChildren": 30,
      "spammerTipsThreshold": 0
    },
    "semiLazy": {
      "retentionRulesTipsLimit": 20,
      "maxReferencedTipAge": "3s",
      "maxChildren": 2,
      "spammerTipsThreshold": 30
    }
  },

12. Node

Name | Description | Type
alias | the alias to identify a node | string
profile | the profile the node runs with | string
disablePlugins | a list of plugins that shall be disabled | array of strings
enablePlugins | a list of plugins that shall be enabled | array of strings

Example:

  "node": {
    "alias": "Mainnet",
    "profile": "auto",
    "disablePlugins": [
      "Warpsync"
    ],
    "enablePlugins": [
      "Prometheus",
      "Spammer"
    ]
  },

13. P2P

Name | Description | Type
bindMultiAddresses | the bind addresses for this node | array of strings
connectionManager | config for connection manager | object
gossipUnknownPeersLimit | maximum amount of unknown peers a gossip protocol connection is established to | integer
identityPrivateKey | private key used to derive the node identity (optional) | string
peerStore | config for peer store | object
reconnectInterval | the time to wait before trying to reconnect to a disconnected peer | string

ConnectionManager

Name | Description | Type
highWatermark | the threshold above which the connection count is truncated to the lower watermark | integer
lowWatermark | the minimum connections count to hold after the high watermark was reached | integer

PeerStore

Name | Description | Type
path | the path to the peer store | string

Example:

  "p2p": {
    "bindMultiAddresses": [
      "/ip4/127.0.0.1/tcp/15600"
    ],
    "connectionManager": {
      "highWatermark": 10,
      "lowWatermark": 5
    },
    "gossipUnknownPeersLimit": 4,
    "identityPrivateKey": "",
    "peerStore": {
      "path": "./p2pstore"
    },
    "reconnectInterval": "30s"
  },

14. Logger

Name | Description | Type
level | the minimum enabled logging level. Valid values are: "debug", "info", "warn", "error", "dpanic", "panic", "fatal" | string
disableCaller | stops annotating logs with the calling function's file name and line number | bool
encoding | sets the logger's encoding. Valid values are "json" and "console" | string
outputPaths | a list of URLs, file paths or stdout/stderr to write logging output to | array of strings

Example:

  "logger": {
    "level": "info",
    "disableCaller": true,
    "encoding": "console",
    "outputPaths": [
      "stdout",
      "hornet.log"
    ]
  },

15. Warpsync

Name | Description | Type
advancementRange | the used advancement range per warpsync checkpoint | integer

Example:

  "warpsync": {
    "advancementRange": 150,
  }

16. Spammer

Name | Description | Type
message | the message to embed within the spam messages | string
index | the indexation of the message | string
indexSemiLazy | the indexation of the message if the semi-lazy pool is used (uses "index" if empty) | string
cpuMaxUsage | workers remain idle for a while when CPU usage gets over this limit (0 = disable) | float
mpsRateLimit | the rate limit for the spammer (0 = no limit) | float
workers | the amount of parallel running spammers | integer
autostart | automatically start the spammer on node startup | bool

Example:

  "spammer": {
    "message": "Binary is the future.",
    "index": "HORNET Spammer",
    "indexSemiLazy": "HORNET Spammer Semi-Lazy",
    "cpuMaxUsage": 0.5,
    "mpsRateLimit": 0,
    "workers": 1,
    "autostart": false
  },

17. MQTT

Name | Description | Type
bindAddress | bind address on which the MQTT broker listens on | string
wsPort | port of the WebSocket MQTT broker | integer
workerCount | number of parallel workers the MQTT broker uses to publish messages | integer

Example:

  "mqtt": {
    "bindAddress": "localhost:1883",
    "wsPort": 1888,
    "workerCount": 100
  },

18. Profiling

Name | Description | Type
bindAddress | the bind address on which the profiler listens on | string

Example:

  "profiling": {
    "bindAddress": "localhost:6060"
  },

19. Prometheus

Name | Description | Type
bindAddress | the bind address on which the Prometheus exporter listens on | string
fileServiceDiscovery | config for file service discovery | object
databaseMetrics | include database metrics | bool
nodeMetrics | include node metrics | bool
gossipMetrics | include gossip metrics | bool
cachesMetrics | include caches metrics | bool
restAPIMetrics | include restAPI metrics | bool
migrationMetrics | include migration metrics | bool
coordinatorMetrics | include coordinator metrics | bool
debugMetrics | include debug metrics | bool
goMetrics | include go metrics | bool
processMetrics | include process metrics | bool
promhttpMetrics | include promhttp metrics | bool

FileServiceDiscovery

Name | Description | Type
enabled | whether the plugin should write a Prometheus 'file SD' file | bool
path | the path where to write the 'file SD' file to | string
target | the target to write into the 'file SD' file | string

Example:

  "prometheus": {
    "bindAddress": "localhost:9311",
    "fileServiceDiscovery": {
      "enabled": false,
      "path": "target.json",
      "target": "localhost:9311"
    },
    "databaseMetrics": true,
    "nodeMetrics": true,
    "gossipMetrics": true,
    "cachesMetrics": true,
    "restAPIMetrics": true,
    "migrationMetrics": true,
    "coordinatorMetrics": true,
    "debugMetrics": false,
    "goMetrics": false,
    "processMetrics": false,
    "promhttpMetrics": false
  }

20. Gossip

Name | Description | Type
streamReadTimeout | the read timeout for reads from the gossip stream | string
streamWriteTimeout | the write timeout for writes to the gossip stream | string

Example:

  "gossip": {
    "streamReadTimeout": "1m",
    "streamWriteTimeout": "10s",
  }

21. Debug

Name | Description | Type
whiteFlagParentsSolidTimeout | defines the maximum duration for the parents to become solid during the white flag confirmation API call | string

Example:

  "debug": {
    "whiteFlagParentsSolidTimeout": "2s",
  }

22. Legacy

This is part the config used in the migration from IOTA 1.0 to IOTA 1.5 (Chrysalis)

22.1 Migrator

Name | Description | Type
queryCooldownPeriod | the cooldown period of the service to ask for new data | string
receiptMaxEntries | the max amount of entries to embed within a receipt | integer
stateFilePath | path to the state file of the migrator | string

Example:

  "migrator": {
    "queryCooldownPeriod": "5s",
    "receiptMaxEntries": 110,
    "stateFilePath": "migrator.state",
  }

22.2 Receipts

Name | Description | Type
backup | config for backup | object
validator | config for validator | object

Backup

Name | Description | Type
enabled | whether to backup receipts in the backup folder | bool
folder | path to the receipts backup folder | string

Validator

Name | Description | Type
api | config for legacy API | object
coordinator | config for legacy Coordinator | object
ignoreSoftErrors | whether to ignore soft errors and not panic if one is encountered | bool
validate | whether to validate receipts | bool

Api

Name | Description | Type
address | address of the legacy node API | string
timeout | timeout of API calls | string

Coordinator

Name | Description | Type
address | address of the legacy coordinator | string
merkleTreeDepth | depth of the Merkle tree of the coordinator | integer

Example:

  "receipts": {
    "backup": {
      "enabled": false,
      "folder": "receipts",
    },
    "validator": {
      "api": {
        "address": "http://localhost:14266",
        "timeout": "5s",
      },
      "coordinator": {
        "address": "JFQ999DVN9CBBQX9DSAIQRAFRALIHJMYOXAQSTCJLGA9DLOKIWHJIFQKMCQ9QHWW9RXQMDBVUIQNIY9GZ",
        "merkleTreeDepth": 18,
      },
      "ignoreSoftErrors": false,
      "validate": false,
    },
  }

Peering configuration

The easiest way to add peers in Hornet is via the Dashboard. Simply go to Peers and click on Add Peer.

But for the sake of completeness this document describes the structure of the peering.json file.

The default config file is named peering.json. You can change the path or name of the config file by using the -n or --peeringConfig argument when executing the hornet executable.

The peering.json file contains a list of peers. Peers have the following attributes:

Name | Description | Type
alias | alias of the peer | string
multiAddress | multiAddress of the peer | string

Example:

{
  "peers": [
    {
      "alias": "Node1",
      "multiAddress": "/ip4/192.0.2.0/tcp/15600/p2p/12D3KooWCKWcTWevORKa2KEBputEGASvEBuDfRDSbe8t1DWugUmL"
    },
    {
      "alias": "Node2",
      "multiAddress": "/ip6/2001:db8:3333:4444:5555:6666:7777:8888/tcp/16600/p2p/12D3KooWJDqHjhd8us8XdbKy1Adp5nV6XoI7XhjZbPWAfbAbkLbH"
    },
    {
      "alias": "Node3",
      "multiAddress": "/dns/example.com/tcp/15600/p2p/12D3KooWN7F4eRAYbavnasME8WGXwkrpzWWoZSXfNSEpudmWi9YP"
    }
  ]
}

How to run Hornet as a verifier node

A verifier node is a node that validates receipts. Receipts are an integral component of the migration mechanism used to migrate funds from the legacy network into the new Chrysalis Phase 2 network. See here for a more detailed explanation of how the migration mechanism works.

This guide explains how to configure a Hornet node as a verifier node:

  1. Make sure the Receipts plugin is enabled under node.enablePlugins.
  2. Set:
    • receipts.validator.validate to true (this is what enables the verification logic in your node).
    • receipts.validator.ignoreSoftErrors to true or false. If true, the verifier node will not panic if it can not query a legacy node for data. Set it to false if you want to make sure that your verifier node panics if it can not query for data from a legacy node. An invalid receipt will always result in a panic; ignoreSoftErrors only controls API call failures to the legacy node.
    • receipts.validator.api.timeout to something sensible like 10s (meaning 10 seconds).
    • receipts.validator.api.address to the URI of your legacy node. Note that this legacy node must support/have the getWhiteFlagConfirmation and getNodeInfo API commands whitelisted.
    • receipts.validator.coordinator.address to the Coordinator address in the legacy network.
    • receipts.validator.coordinator.merkleTreeDepth to the correct used Merkle tree depth in the legacy network.
  3. Run your Hornet verifier node and let it validate receipts.

Note: it is suggested that you use a load-balanced endpoint to multiple legacy nodes for receipts.validator.api.address in order to achieve high availability.

If your verifier node now panics because of an invalid receipt, it is clear that a receipt was produced which is not valid. In this case, as a verifier node operator, you are invited to inform the community and the IOTA Foundation of your findings. This is, by the way, the same outcome as when the Coordinator issues a milestone that diverges from a consistent ledger state.

API Reference

This document specifies the REST API for IOTA node software:

The node event API is in charge of publishing information about events within the node software:

Troubleshooting

Check our Frequently asked questions.

If your question is not covered in the FAQ, feel free to ask in the #hornet channel (official IOTA Discord server).

Something went wrong?

  • Please open a new issue if you detect an error or crash (or submit a PR if you have already fixed it).

FAQ

What is HORNET?

HORNET is a community-driven IOTA full node. It is written in Go, which makes it lightweight and fast.


Does HORNET run on the Mainnet?

Yes, HORNET was released in mid-2020 and replaced the Java implementation called IRI.


Can I run HORNET on a Raspberry Pi?

Yes, you can run HORNET on a Raspberry Pi 4B with an external SSD, but we recommend running HORNET on a more powerful device.


I have difficulties setting up HORNET. Where can I get help?

Our community loves helping you. Just ask your questions in the #hornet channel on the official IOTA Discord Server


Can I contribute?

Of course, you are very welcome! Just send a PR or offer your help in the #hornet channel on the official IOTA Discord Server


I found a bug, what should I do?

Please open a new issue. We'll have a look at your bug report as soon as possible.


I'm missing feature xyz. Can you add it?

Please open a new feature request. We cannot guarantee that the feature will actually be implemented. Pull requests are very welcome!

Contributing

By participating in this project, you agree to abide by our code of conduct.


How to contribute

Basic setup

HORNET is written in Go.

Prerequisites:

  1. Set up Go 1.14+
  2. Fork HORNET
  3. Test your setup by building HORNET: go build

Make your changes

Make your changes and test them sufficiently.

Create a commit

Commit messages should be well formatted.

You can use this as a guide: Conventional Commits

Submit a pull request

Push your branch to your HORNET fork and open a pull request to the develop branch.

Code of Conduct