An Infernet Node (opens in a new tab) consists of many processes and components. While not required, understanding the purpose of, and interaction between, these components can be beneficial for node operators and users alike.

The core components of the Infernet Node are the:

  • Container Manager
  • Guardian
  • Job Orchestrator
  • Metric Sender

The Web2-specific components of the Node are the:

  • REST Server

The Web3-specific components of the Node are the:

  • RPC
  • Coordinator
  • Wallet
  • Listener
  • Processor

Core Components

Container Manager

The Container Manager is a process responsible for managing the lifecycle of all Infernet-compliant Docker containers that power the node. Specifically, it:

  • Pulls the Docker images specified in config.json, at node boot.
  • Runs the associated Docker containers on local ports, based on the provided configurations, at node boot.
  • Waits for the containers to "warm up", at node boot.
  • Monitors the containers' health and logs associated events, throughout the entire node lifecycle.
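At a high level, the boot-time steps above can be sketched as follows. The config keys ("image", "port") and the function name are illustrative assumptions, not the node's actual config.json schema:

```python
def docker_commands(config: dict) -> list[list[str]]:
    """Build the `docker pull` and `docker run` invocations for each
    container entry in a hypothetical config.json-style dict."""
    commands = []
    for c in config["containers"]:
        # Pull the image at node boot...
        commands.append(["docker", "pull", c["image"]])
        # ...then run it detached, mapped to its configured local port.
        commands.append([
            "docker", "run", "-d",
            "-p", f"{c['port']}:{c['port']}",
            c["image"],
        ])
    return commands
```

In the real Container Manager, these steps are followed by a warm-up wait and continuous health monitoring; this sketch only covers the pull-and-run phase.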


Guardian

The Guardian is a component that validates job requests and manages access to the node's containers, based on granular container-level firewall rules.

Every time a job request is received by a node, it first passes through the Guardian. The Guardian checks whether:

  • The job's requested containers are supported by the node.
  • The job's first container is external.
  • The job's origin IP is allowed for all requested containers (only relevant for web2-originating requests).
  • The job's origin address is allowed for all requested containers (only relevant for web3-originating requests).
  • The job's origin delegate address is allowed for all requested containers (only relevant for Delegated Subscriptions).

If a job request fails to "pass through" the Guardian, it is discarded immediately with an error. Otherwise, it is handed off to the Job Orchestrator.
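A minimal sketch of the first three checks, assuming a hypothetical `supported` map of per-container firewall rules (the real Guardian's schema and rule set may differ):

```python
def passes_guardian(job: dict, supported: dict) -> bool:
    """Return True only if the job clears every configured check."""
    containers = job["containers"]
    # Every requested container must be supported by the node.
    if any(c not in supported for c in containers):
        return False
    # The first container in the pipeline must be external.
    if not supported[containers[0]]["external"]:
        return False
    # The origin IP (web2-originating requests) must be allowed by
    # every requested container's firewall rules.
    ip = job.get("origin_ip")
    if ip is not None:
        for c in containers:
            allowed = supported[c].get("allowed_ips")
            if allowed is not None and ip not in allowed:
                return False
    return True
```

The origin-address and delegate-address checks for web3-originating requests would follow the same per-container pattern.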

Job Orchestrator

The Job Orchestrator is a component that executes job requests on the Node.

Job requests consist of running one or more containers in sequence. They are defined by a sequence of container IDs to run, i.e. containers: string[], and some arbitrary input data, i.e. data: object.

When n container IDs are specified, the Job Orchestrator calls the corresponding containers in strict order. The provided data is passed into the first container; the output of container i is passed as input into container i+1; and the last container's output is the job's result.
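This chaining can be illustrated with plain functions standing in for containers:

```python
from typing import Any, Callable

def run_job(containers: list[Callable[[Any], Any]], data: Any) -> Any:
    """Pipe `data` through the containers in strict order."""
    result = data
    for container in containers:
        # The output of container i becomes the input of container i+1.
        result = container(result)
    return result
```

For example, `run_job([lambda x: x + 1, lambda x: x * 2], 3)` feeds 3 into the first "container" and its output, 4, into the second, yielding 8 as the job's result.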

Metric Sender

The Metric Sender is a process that registers the Infernet Node with Ritual, and periodically forwards statistics to our database.

It collects information such as:

  • Node version
  • IP address
  • Hardware profile
  • Hardware utilization
  • Wallet address
  • Supported containers + firewall rules
  • Number of {web2, web3} jobs completed, failed, and pending
    • per node
    • per container

It does not collect information such as:

  • Input data
  • Job results
  • IPs and addresses where jobs originate

The Metric Sender is opt-out and can be disabled entirely by setting the forward_stats configuration to false.
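A sketch of how such a payload might be assembled, and how the forward_stats opt-out could gate forwarding. All field names and function signatures here are assumptions for illustration:

```python
def build_metrics(node: dict, job_counts: dict) -> dict:
    """Assemble an aggregate stats payload; inputs, results, and
    origin IPs/addresses are deliberately never included."""
    return {
        "version": node["version"],
        "wallet": node["wallet"],
        "containers": sorted(node["containers"]),
        "jobs": job_counts,  # completed / failed / pending counts
    }

def maybe_forward(config: dict, payload: dict, send) -> bool:
    """Respect the forward_stats opt-out from config.json."""
    if not config.get("forward_stats", True):
        return False
    send(payload)
    return True
```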

Web2-specific Components

REST server

The Server is a process via which the Node is exposed to the Internet. It supports REST endpoints for:

  • Collecting basic node information, such as supported containers
  • Requesting job creation
  • Requesting job creation in batch
  • Retrieving job status and results

By default, the Server runs on port 4000. For details, see REST API.


Retrieval of Job Results on an Infernet Node is asynchronous, i.e. results need to be fetched with a separate request to the REST Server.

To that end, each node caches Job Results in a local Redis instance. Pending jobs are stored separately from completed jobs for efficient retrieval. Results are persisted to disk asynchronously, i.e. will survive node reboots, and are evicted by Least Recently Used (LRU) order.
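The LRU eviction behavior can be illustrated with an in-memory sketch. The node itself uses Redis for this; the class below is not its implementation, only a demonstration of the eviction order:

```python
from collections import OrderedDict

class ResultCache:
    """Toy LRU cache for completed job results."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.results: OrderedDict[str, object] = OrderedDict()

    def put(self, job_id: str, result: object) -> None:
        self.results[job_id] = result
        self.results.move_to_end(job_id)
        if len(self.results) > self.capacity:
            # Evict the least recently used entry.
            self.results.popitem(last=False)

    def get(self, job_id: str):
        if job_id in self.results:
            self.results.move_to_end(job_id)  # refresh recency
            return self.results[job_id]
        return None
```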

Web3-specific Components


RPC

The RPC is a component responsible for exposing a higher-level interface over commonly-used (opens in a new tab) functionality. It exposes safe, cached functions, such as get_event_logs or send_transaction, to efficiently interface with an Ethereum-compatible chain.
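As an illustration of why caching such functions helps, here is a hypothetical get_event_logs wrapper memoized with functools.lru_cache. This is not the node's actual interface; the counter merely stands in for a JSON-RPC round trip:

```python
from functools import lru_cache

rpc_round_trips = {"count": 0}

@lru_cache(maxsize=256)
def get_event_logs(contract: str, from_block: int, to_block: int) -> tuple:
    # Stands in for a JSON-RPC query; repeated identical calls are
    # served from the cache instead of hitting the chain again.
    rpc_round_trips["count"] += 1
    return (contract, from_block, to_block)
```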


Coordinator

The Coordinator is a component that implements an off-chain interface to the on-chain Infernet SDK Coordinator contract. It builds on top of the primitive functions defined in the RPC component to offer a higher-level interface to Infernet-specific functionality. Most of the other Web3 components rely on this component both for retrieving on-chain data and for preparing off-chain inputs for on-chain delivery.


Wallet

The Wallet is a component that implements simple transaction signing and relaying functionality. It is the only component in the complete Infernet Node that consumes private key material.

The Wallet is also consumed by the on-chain registration and activation scripts as a common interface for sending transactions.


Listener

By design, the Infernet Node is largely stateless. A node can be interrupted at any time and, when restarted, resume as if it was never interrupted. This is possible in part because the node uses event replay to sync itself to the current state of the on-chain Coordinator contract at chain head.

To that end, the Listener is a process that periodically listens for event logs emitted by successful transactions posted to the on-chain Coordinator. Specifically, it tracks Subscription creations, cancellations, and fulfillments, filters these event messages via the Guardian down to just those relevant to the node, and passes them to the Processor, where they are used to update local node state.

When the node is first started, it snapshot syncs all subscription events up to that point. Following this, a per-block sync via event filters is used.
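The two-phase sync might be sketched as follows, with `snapshot` and `block_events` standing in for chain queries (all names here are hypothetical):

```python
def start_sync(snapshot, block_events, head: int, apply):
    """Snapshot-sync all events up to `head`, then return a handler
    that replays per-block events as the chain advances."""
    # Phase 1: snapshot sync of every subscription event so far.
    for event in snapshot(head):
        apply(event)
    last = {"block": head}

    def on_new_block(block: int):
        # Phase 2: per-block sync, replaying any blocks missed
        # between invocations.
        for b in range(last["block"] + 1, block + 1):
            for event in block_events(b):
                apply(event)
        last["block"] = block

    return on_new_block
```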


Generally, Ethereum nodes cache the state that is most frequently accessed. When connecting an Infernet Node to an existing Coordinator contract for the first time, syncing will be slower than average. Once state is appropriately cached, subsequent syncs will process more quickly. For most consumers that don't plan to frequently cycle their nodes, this will not be noticeable.


Processor

The Processor, as the name implies, is a process that is responsible for a few tasks:

  1. Tracks incoming subscription creation, deletion, and fulfillment messages
  2. Builds relevant subscription state from those messages
  3. Periodically checks for pending subscription execution
  4. Executes subscription compute jobs
  5. Delivers compute job responses on-chain to the Coordinator
  6. Tracks success or failure of on-chain delivery
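A highly simplified, hypothetical sketch of one pass over these tasks. The real Processor is a state machine with far more nuance, and all names below are assumptions:

```python
def process_tick(state: dict, messages: list, run_job, deliver) -> dict:
    """One pass over the six tasks; returns delivery outcomes."""
    # 1-2. Track incoming messages and build subscription state.
    for msg in messages:
        if msg["kind"] == "create":
            state[msg["id"]] = {"fulfilled": False}
        elif msg["kind"] == "cancel":
            state.pop(msg["id"], None)
        elif msg["kind"] == "fulfill":
            state[msg["id"]]["fulfilled"] = True
    # 3-6. Find pending subscriptions, execute their compute jobs, and
    # record whether on-chain delivery succeeded.
    outcomes = {}
    for sub_id, sub in state.items():
        if not sub["fulfilled"]:
            result = run_job(sub_id)
            outcomes[sub_id] = deliver(sub_id, result)
    return outcomes
```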

The Processor handles both on-chain subscriptions and Delegate Subscriptions sent via the REST server.

More information about the Processor can be found by examining the state machine comments (opens in a new tab) in the codebase.