
Infernet Node

The Infernet Node is a lightweight off-chain client for Infernet responsible for fulfilling compute workloads:

  1. Nodes listen for on-chain subscriptions (via the Coordinator contract) or off-chain requests (via the REST API)
  2. Nodes orchestrate dockerized containers (aka "services") consuming on-chain and off-chain provided inputs
  3. Nodes deliver outputs and optional proofs via on-chain transactions or the off-chain API

Example use cases

🙋‍♂️

James has set up a Governor contract for his DAO, inheriting the Infernet SDK. Every time a proposal is created, the contract kicks off a new on-chain Subscription request alerting an Infernet Node of a new proposal. Once picked up by the node, James' custom governor-quantitative-workflow is run and the node responds on-chain with an output and associated computation proof.

🙋‍♀️

Emily is developing a new NFT collection that lets minters automatically add new traits to their NFTs by posting what they'd like in plaintext (think, "an Infernet-green hoodie"). Emily sets up a minting website that posts signed Delegate Subscriptions to an Infernet node running her custom stable-diffusion-workflow. This workflow parses plaintext user input and generates a new Base64 image, with the Infernet node posting the final image to her smart contract via an on-chain transaction.

🙋

Travis is building a new web app that allows his users to chat with AI avatars. He posts new messages via Delegate Subscriptions to his Infernet node running his custom llm-inference-workflow via the HTTP API and receives a response instantly over the same API. He surfaces these responses to users in his web app, offering a snappy user experience, while his node asynchronously publishes a proof of computation on-chain, letting his users verify the integrity of the responses in the future.

Granular configuration

Infernet Nodes offer granular runtime configuration and permissioning. Operators have full flexibility in:

  • Running any arbitrary compute workload (by creating an Infernet-compatible container)
  • Using both public container images and private images via Docker Hub
  • Choosing to serve on-chain subscriptions, off-chain requests, or both
  • Configuring on-chain parameters including max_gas_limit, how many blocks to trail chain head, and more
  • Restricting workload access by IP address, on-chain address, or delegated contract address
  • Specifying workload configuration parameters (including environment variables, execution ordering, etc.)
  • Optionally forwarding system statistics to Ritual, which powers the Infernet Router

All of these parameters can be configured via a single runtime config.json file. Read more about sane defaults and modifying this configuration for your own use cases in Configuration.
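As an illustrative sketch only, a minimal config.json covering the knobs mentioned above (chain parameters, container definitions, access restrictions, and stats forwarding) might look like the following. The exact field names here are assumptions for illustration; consult Configuration for the authoritative schema and defaults.

```json
{
  "chain": {
    "enabled": true,
    "rpc_url": "http://localhost:8545",
    "trail_head_blocks": 5,
    "max_gas_limit": 5000000
  },
  "containers": [
    {
      "id": "my-workflow",
      "image": "myorg/my-workflow:latest",
      "allowed_ips": [],
      "allowed_addresses": [],
      "env": {}
    }
  ],
  "forward_stats": true
}
```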

System specifications

Infernet Node requirements depend greatly on the type of compute workflows you plan to run. Because all workflows run in Docker containers, we recommend, at a minimum, hardware that supports virtualization. Machines with ample memory are preferred.

Minimum Requirements

        Minimum            Recommended                 GPU-heavy workloads
CPU     Single-core vCPU   4 modern vCPU cores         4 modern vCPU cores
RAM     128MB              16GB                        64GB
DISK    512MB HDD          500GB IOPS-optimized SSD    500GB NVMe
GPU     -                  -                           CUDA-enabled GPU

Off-chain events

If you choose to service off-chain Web2 requests via the REST API, you will have to expose port 4000 to the Internet.
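As a sketch of what an off-chain request could look like, the snippet below builds a JSON job body and shows how it would be POSTed to a node on port 4000. The endpoint path and field names (`containers`, `data`) are assumptions for illustration, not the node's documented API schema.

```python
import json

# Hypothetical REST endpoint on a locally running node (port 4000 exposed).
NODE_URL = "http://localhost:4000/api/jobs"

def build_job_request(containers: list[str], data: dict) -> dict:
    """Build a job body naming which container(s) should service the request."""
    return {"containers": containers, "data": data}

payload = build_job_request(
    containers=["llm-inference-workflow"],
    data={"prompt": "an Infernet-green hoodie"},
)
body = json.dumps(payload)
print(body)

# Sending it (requires a running node):
#   import urllib.request
#   req = urllib.request.Request(
#       NODE_URL, body.encode(), {"Content-Type": "application/json"})
#   urllib.request.urlopen(req)
```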

On-chain events

If you plan to use your Infernet Node to listen and respond to on-chain events via the Infernet SDK, you will also need access to a blockchain node RPC.

Next steps

You may choose to: