Infernet Node
The Infernet Node is a lightweight off-chain client for Infernet responsible for fulfilling compute workloads:
- Nodes listen for on-chain (via the Infernet Coordinator contract) or off-chain (via the REST API) requests
- Nodes orchestrate dockerized Infernet Containers, consuming on-chain and off-chain provided inputs
- Nodes deliver workflow outputs and optional proofs via on-chain transactions or the off-chain API
James has set up a Governor contract for his DAO, inheriting the Infernet SDK. Every time a proposal is created, the contract kicks off a new on-chain Subscription request, alerting an Infernet Node of a new proposal. Once picked up by the node, James' custom `governor-quantitative-workflow` is run, and the node responds on-chain with an output and associated computation proof.
Emily is developing a new NFT collection that lets minters automatically add new traits to their NFTs by posting what they'd like in plaintext (think, "an Infernet-green hoodie"). Emily sets up a minting website that posts signed Delegate Subscriptions to an Infernet Node running her custom `stable-diffusion-workflow`. This workflow parses plaintext user input and generates a new Base64 image, with the Infernet Node posting the final image to her smart contract via an on-chain transaction.
Travis is building a new web app that allows his users to chat with AI avatars. He posts new messages via Delegate Subscriptions to his Infernet Node running his custom `llm-inference-workflow` via the HTTP API, and receives a response instantly over the same API. He surfaces these responses to users in his web app, offering a snappy user experience, while his node asynchronously publishes a proof of computation on-chain, letting his users verify the integrity of the responses in the future.
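As a hedged sketch of this off-chain flow, the request below illustrates posting a job to a locally running node's REST API. The `/api/jobs` path and the payload fields are illustrative assumptions, not the documented schema; consult the node's REST API reference for the real endpoints and request shape.

```shell
# Illustrative sketch only: the endpoint path and payload fields below are
# assumptions, NOT the documented Infernet REST API schema.
curl -s -X POST "http://localhost:4000/api/jobs" \
  -H "Content-Type: application/json" \
  -d '{"containers": ["llm-inference-workflow"], "data": {"prompt": "Hello, avatar!"}}' \
  || echo "node not reachable at localhost:4000"
```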
Granular configuration
Infernet Nodes offer granular runtime configuration and permissioning. Operators have full flexibility in:
- Running any arbitrary compute workload (by creating an Infernet-compatible container)
- Using both public container images and private images via Docker Hub
- Choosing to serve on-chain, off-chain requests, or both
- Configuring on-chain parameters, including `max_gas_limit`, how many blocks to trail the chain head, and more
- Restricting workload access by IP address, on-chain address, delegated contract address, and more
- Specifying workload configuration parameters (including environment variables, execution ordering, etc.)
- Optionally forwarding diagnostic node system statistics to Ritual
All of these parameters can be configured via a single runtime `config.json` file. Read more about sane defaults and modifying this configuration for your own use cases in Node: Configuration.
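For illustration, here is a minimal sketch of writing such a file. The field names below are assumptions for illustration only (these docs name only `max_gas_limit`, trailing the chain head, and port 4000); consult Node: Configuration for the authoritative schema.

```shell
# Write an illustrative config.json. Field names are assumptions, not the
# authoritative schema -- see Node: Configuration for the real structure.
cat > config.json <<'EOF'
{
  "chain": {
    "enabled": true,
    "trail_head_blocks": 5,
    "wallet": {
      "max_gas_limit": 4000000
    }
  },
  "server": {
    "port": 4000
  }
}
EOF
```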
System specifications
Infernet Node requirements depend greatly on the type of compute workflows you plan to run. Because all workflows run in Docker containers, we recommend at minimum a machine that supports virtualization. Memory-optimized machines are preferred.
Minimum Requirements
| | Minimum | Recommended | GPU-heavy workloads |
|---|---|---|---|
| CPU | Single-core vCPU | 4 modern vCPU cores | 4 modern vCPU cores |
| RAM | 128MB | 16GB | 64GB |
| DISK | 512MB HDD | 500GB IOPS-optimized SSD | 500GB NVMe |
| GPU | N/A | N/A | CUDA-enabled GPU |
Off-chain events
If you choose to service off-chain Web2 requests via the REST API, you will have to expose port `4000` to the Internet.
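To quickly confirm the port is reachable, a simple connectivity check works; the host below is a placeholder for your node's address.

```shell
# NODE_HOST is a placeholder; substitute your node's public address.
NODE_HOST="${NODE_HOST:-localhost}"
curl -s -o /dev/null --connect-timeout 3 "http://$NODE_HOST:4000" \
  && echo "port 4000 reachable" \
  || echo "port 4000 not reachable"
```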
On-chain events
If you plan to use your Infernet Node to listen for and respond to on-chain events via the Infernet SDK, you will also need access to a blockchain node RPC.
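Before configuring the node, it can help to sanity-check that your RPC endpoint responds. The call below uses the standard Ethereum JSON-RPC `eth_blockNumber` method; `RPC_URL` is a placeholder for your provider's endpoint.

```shell
# Sanity-check a blockchain RPC endpoint with a standard JSON-RPC call.
# RPC_URL is a placeholder; substitute your provider's endpoint.
RPC_URL="${RPC_URL:-http://localhost:8545}"
curl -s -X POST "$RPC_URL" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  || echo "RPC endpoint not reachable"
```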
Running the node locally
Infernet Nodes execute containerized workflows. As such, installing and running a modern version of Docker is a prerequisite, regardless of how you choose to run the node.
You can run an Infernet Node locally via Docker Compose. To do so:
- Create a configuration file, selecting your desired containers and configuration options (see: example configuration).
- Create a new directory & copy the configuration file, as well as the rest of the node's files: `docker-compose`, `fluent-bit.conf` & `redis.conf`.

```shell
mkdir deploy
mv config.json deploy/.
# Move the rest of the files: docker-compose, fluent-bit.conf, redis.conf.
# Grab them from our infernet-node repo.
cd deploy

# Run the node
docker-compose up -d
```
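Once `docker-compose up -d` returns, it can help to confirm the containers actually came up. The service names depend on the compose file shipped with the node, so the log command is shown as a template.

```shell
# List the services started by the compose file and their status.
docker-compose ps || echo "docker-compose not available on this machine"
# Tail logs if something looks wrong (service name is compose-file specific):
# docker-compose logs -f <service>
```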
All of our starter examples also follow this pattern, so you can easily run them locally. Check out Ritual Learn for a step-by-step guide on how to run a node locally.
Next steps
Once ready, you may choose to:
- Follow an introductory quick start to set up an Infernet Node end-to-end
- Understand the granular, runtime configuration settings available to you as a node operator
- Read in-depth about the node architecture to better understand what's possible with Infernet
- Explore your options to deploy an Infernet Node in production
- Find out more about the Ritual ML Workflows and the Infernet SDK