Architecture
An Infernet Node consists of several processes and components. While not required, understanding the purpose of and interaction between these components can be beneficial for node operators and users alike.
The Node's components fall into three groups, each described below: core components, Web2-specific components, and Web3-specific components.
Core Components
Container Manager
The Container Manager is a process responsible for managing the lifecycle of all Infernet-compliant Docker containers that power the node. Specifically, it:
- At node boot:
- Pulls the requested Docker images
- Runs the associated Docker containers on local ports
- Waits for the containers to warm up
- For the entire node lifecycle, monitors the containers' health and logs associated events
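The boot sequence above can be sketched as follows. This is a minimal illustration, not the node's actual implementation: `FakeDockerClient` is a hypothetical in-memory stand-in for the Docker Engine, and the method names are assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class FakeDockerClient:
    """Hypothetical in-memory stand-in for a Docker client; the real
    Container Manager drives the Docker Engine instead."""
    pulled: list = field(default_factory=list)
    running: dict = field(default_factory=dict)

    def pull(self, image):
        self.pulled.append(image)

    def run(self, image, port):
        self.running[image] = port

    def is_healthy(self, image):
        return image in self.running


def boot(client, images, base_port=3000):
    """Mirror the boot steps above: pull each image, run its container
    on a local port, then confirm every container reports healthy."""
    for offset, image in enumerate(images):
        client.pull(image)
        client.run(image, base_port + offset)
    return all(client.is_healthy(image) for image in images)
```

After boot, the real manager keeps monitoring container health for the entire node lifecycle; this sketch only covers the one-time startup path.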
Guardian
The Guardian is a component that validates job requests and manages access to the node's containers, based on granular container-level firewall rules.
Every time a job request is received by a node, it first passes through the Guardian. The Guardian checks whether:
- The job's requested containers are supported by the node
- The job's first container is external
- The job's origin IP is allowed for all requested containers (only relevant for web2-originating requests)
- The job's origin address is allowed for all requested containers (only relevant for web3-originating requests)
- The job's origin delegate address is allowed for all requested containers (only relevant for Delegated Subscriptions)
- The job includes a payment, and whether that payment meets the minimum payment requirements of its containers
- The job requires proof generation, and whether a container exists that can generate the proof
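A few of these checks can be approximated in a short sketch. This is not the Guardian's actual code: the rule field names (`external`, `allowed_ips`, `min_payment`) and the job shape are assumptions, and only the web2-relevant checks are shown.

```python
def guard(job, rules):
    """Approximate a subset of the Guardian's checks for a
    web2-originating job. `rules` maps container ID -> illustrative
    firewall settings for that container."""
    containers = job["containers"]
    if not containers:
        return (False, "no containers requested")
    # every requested container must be supported by the node
    for c in containers:
        if c not in rules:
            return (False, f"unsupported container: {c}")
    # the first container in the pipeline must be external
    if not rules[containers[0]].get("external", False):
        return (False, "first container is not external")
    # the origin IP must be allowed by every requested container
    for c in containers:
        allowed = rules[c].get("allowed_ips")
        if allowed and job.get("ip") not in allowed:
            return (False, f"origin IP not allowed for {c}")
    # the payment must meet every container's minimum
    minimum = max(rules[c].get("min_payment", 0) for c in containers)
    if job.get("payment", 0) < minimum:
        return (False, "payment below container minimums")
    return (True, "ok")
```

A job that fails any check is rejected with a reason; only jobs that pass every check proceed.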
If a job request fails to pass through the Guardian, it is discarded immediately with an error. Otherwise, it is handed off to the Job Orchestrator.
Job Orchestrator
The Job Orchestrator is a component that executes job requests on the Node.
Job requests consist of running one or more containers in sequence. They are defined by a sequence of container IDs to run, i.e. `containers: string[]`, and some arbitrary input data, i.e. `data: object`.

When `n` container IDs are specified, the Job Orchestrator calls the corresponding containers in strict order. The provided `data` is passed into the first container; the output of container `i` is passed as input into container `i+1`; and the last container's output is the job's result.
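The chaining behavior can be sketched in a few lines. Here `call` is a stand-in for invoking a single container; in the real node this is an HTTP call to the container's local port.

```python
def run_job(containers, data, call):
    """Chain containers in strict order: `call(container_id, payload)`
    stands in for invoking one container; each output feeds the next,
    and the last output is the job's result."""
    payload = data
    for container_id in containers:
        payload = call(container_id, payload)
    return payload
```

For example, with a tracing stand-in `call = lambda cid, payload: f"{payload}->{cid}"`, running containers `["a", "b", "c"]` on input `"input"` yields `"input->a->b->c"`.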
Metric Sender
The Metric Sender is a process that registers the Infernet Node with Ritual, and periodically forwards statistics to our database.
It collects information such as:
- Node version
- IP address
- Hardware profile
- Hardware utilization
- Wallet address
- Supported containers + firewall rules
- Number of {web2, web3} jobs completed, failed, and pending
  - per node
  - per container
It does not collect information such as:
- Input data
- Job results
- IPs and addresses where jobs originate
The Metric Sender is opt-out and can be disabled entirely by setting the forward stats configuration to `false`.
Web2-specific Components
REST server
The Server is a process via which the Node is exposed to the Internet. It supports REST endpoints for:
- Collecting basic node information, such as supported containers
- Requesting job creation
- Requesting job creation in batch
- Retrieving job status and results
By default, the Server runs on port `4000`. For details, see REST API.
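A typical client flow is to create a job and then poll for its result, since retrieval is asynchronous. The sketch below stubs the HTTP calls as plain functions; the `/api/jobs` paths and response fields are illustrative assumptions, so consult the REST API reference for the actual contract.

```python
def create_and_wait(post, get, containers, data, tries=10):
    """Submit a job, then poll for its result. `post` and `get` stand
    in for HTTP calls to the node's REST server; paths and response
    fields are illustrative, not the documented API."""
    job_id = post("/api/jobs", {"containers": containers, "data": data})["id"]
    for _ in range(tries):
        job = get(f"/api/jobs?id={job_id}")
        if job.get("status") == "success":
            return job["result"]
    raise TimeoutError(f"job {job_id} did not complete")
```

In practice a client would sleep between polls rather than retrying in a tight loop.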
Cache
Retrieval of Job Results on an Infernet Node is asynchronous, i.e. results need to be fetched with a separate request to the REST Server.
To that end, each node caches Job Results in a local Redis instance. Pending jobs are stored separately from completed jobs for efficient retrieval. Results are persisted to disk asynchronously, i.e. will survive node reboots, and are evicted by Least Recently Used (LRU) order.
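A minimal in-memory sketch of this caching scheme follows. The real node uses Redis; here `collections.OrderedDict` stands in for the LRU store, and the capacity and method names are illustrative.

```python
from collections import OrderedDict


class JobCache:
    """In-memory sketch of the node's result cache: pending and
    completed jobs live in separate stores, and completed results
    are evicted in Least Recently Used (LRU) order."""

    def __init__(self, capacity=1000):
        self.pending = set()
        self.completed = OrderedDict()
        self.capacity = capacity

    def start(self, job_id):
        self.pending.add(job_id)

    def finish(self, job_id, result):
        self.pending.discard(job_id)
        self.completed[job_id] = result
        if len(self.completed) > self.capacity:
            # evict the least recently used completed job
            self.completed.popitem(last=False)

    def get(self, job_id):
        if job_id in self.completed:
            # mark as recently used so it survives eviction longer
            self.completed.move_to_end(job_id)
            return self.completed[job_id]
        return None
```

Keeping pending and completed jobs separate means status checks never scan completed results, and vice versa.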
Web3-specific Components
RPC
The RPC is a component responsible for exposing a higher-level interface over commonly-used web3.py functionality. It exposes safe, cached functions, like `get_event_logs` or `send_transaction`, to efficiently interface with an Ethereum-compatible chain.
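The caching idea can be sketched as follows. This is not the node's actual RPC layer: `FakeChain` is a hypothetical stand-in for a web3.py client, and the premise is that logs for a fixed, finalized block range are immutable and therefore safe to memoize.

```python
class FakeChain:
    """Hypothetical stand-in for a web3.py client that counts
    round trips to the chain."""

    def __init__(self):
        self.calls = 0

    def get_logs(self, address, from_block, to_block):
        self.calls += 1
        return [f"log:{address}:{from_block}-{to_block}"]


class RPC:
    """Sketch of a caching read layer: results keyed by (address,
    block range) are fetched from the chain once, then served from
    a local cache on repeat calls."""

    def __init__(self, client):
        self.client = client
        self._cache = {}

    def get_event_logs(self, address, from_block, to_block):
        key = (address, from_block, to_block)
        if key not in self._cache:
            self._cache[key] = self.client.get_logs(address, from_block, to_block)
        return self._cache[key]
```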
Coordinator
The Coordinator is a component that implements an off-chain interface to the on-chain Infernet SDK `Coordinator` contract. It builds on the primitive functions defined in the RPC component to offer a higher-level interface to Infernet-specific functionality. Most of the other Web3 components rely on it both for retrieving on-chain data and for preparing off-chain inputs for on-chain delivery.
Wallet
The Wallet is a component that implements simple transaction signing and relaying functionality. It is the only component in the complete Infernet Node that consumes private key material.
PaymentWallet
The PaymentWallet is a component that implements functionality related to the payment of jobs. It is a thin wrapper around the `Wallet` contract that provides additional functionality for managing payments, such as:
- Getting the owner of the wallet
- Getting the balance of the wallet
- Calling `approve()` on the wallet to allow the Coordinator to spend funds on behalf of the wallet
Listener
By design, the Infernet Node is largely stateless. A node can be interrupted at any time and, when restarted, resume as if it had never been interrupted. This is possible in part because the node uses event replay to sync itself to the current state of the on-chain Coordinator contract at chain head.
To that end, the Listener is a process that periodically checks for the latest subscription ID recorded on the `Coordinator`. When a new subscription ID is discovered, all subscriptions from the last synced ID through the new one are synced, filtered via the Guardian to just those relevant to the node, and forwarded to the Processor, where they are used to update local node state.
In `infernet-node 0.x`, subscriptions were discovered via on-chain events. This has since changed due to the instability of `eth_getFilter` in various EVM RPCs.
When the node is first started, it snapshot syncs all subscription events up to that point. Following this, a per-block sync is used.
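One sync pass can be sketched as below. The fakes are hypothetical stand-ins: `FakeCoordinator` represents on-chain reads and `FakeProcessor` the downstream component; method and field names are assumptions, not the node's actual interfaces.

```python
class FakeCoordinator:
    """Hypothetical stand-in for the on-chain Coordinator reads used
    by the Listener."""

    def latest_subscription_id(self):
        return 3

    def get_subscription(self, sub_id):
        return {"id": sub_id, "container": "other" if sub_id == 2 else "infer"}


class FakeProcessor:
    """Hypothetical stand-in for the Processor."""

    def __init__(self):
        self.tracked = []

    def is_relevant(self, sub):
        # Guardian-style filtering down to containers this node supports
        return sub["container"] == "infer"

    def track(self, sub):
        self.tracked.append(sub["id"])


def sync(coordinator, processor, last_synced_id):
    """One sync pass: read the head subscription ID, fetch every
    subscription created since the last synced ID, and forward only
    the relevant ones to the Processor."""
    head = coordinator.latest_subscription_id()
    for sub_id in range(last_synced_id + 1, head + 1):
        sub = coordinator.get_subscription(sub_id)
        if processor.is_relevant(sub):
            processor.track(sub)
    return head
```

The returned head ID becomes the `last_synced_id` for the next pass, which is what makes the snapshot-then-per-block pattern resumable after an interruption.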
Generally, Ethereum nodes cache the state that is most frequently accessed. When connecting an Infernet Node to an existing Coordinator contract for the first time, syncing will be slower than average. Once state is appropriately cached, subsequent syncs will complete more quickly. For most consumers that do not plan to frequently cycle their nodes, this will not be noticeable.
Processor
The Processor, as the name implies, is a process that is responsible for a few tasks:
- Tracks incoming subscription `creation` messages
- Builds relevant subscription state from those messages
- Filters out subscriptions that are `expired`, `canceled`, or `completed`
- Periodically checks for pending subscription execution
- Executes subscription compute jobs
- Delivers compute job responses on-chain to the Coordinator
- Tracks success or failure of on-chain delivery
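The filtering step can be illustrated with a short sketch. The field names (`canceled`, `expiry`, `completed_runs`, `max_runs`) are illustrative assumptions, not the node's actual subscription schema.

```python
def active(subscriptions, block_time):
    """Keep only subscriptions the Processor would still execute,
    dropping those that are canceled, expired, or completed.
    Field names here are illustrative."""
    return [
        sub for sub in subscriptions
        if not sub["canceled"]                       # canceled
        and sub["expiry"] > block_time               # expired
        and sub["completed_runs"] < sub["max_runs"]  # completed
    ]
```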
The Processor handles both on-chain subscriptions and Delegate Subscriptions sent via the REST server.
More information about the Processor can be found by examining the state machine comments in the codebase.