Infernet CLI
The recommended way to configure, deploy, and test nodes and their associated containers on a single machine is via the Infernet CLI. The CLI simplifies configuration and deployment by making use of recipes, which are pre-filled templates that require minimal user input but can be customized as needed.
See the Infernet CLI docs for a complete list of commands and options.
Prerequisites
Docker: Infernet Nodes execute containerized workflows. As such, installing and running a modern version of Docker is a prerequisite, regardless of how you choose to run the node.
Python: Required for running the Infernet CLI. Must be Python 3.9 or higher, and include the pip package manager.
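To confirm both prerequisites are in place, you can check the installed versions from your terminal (assuming docker and python3 are available on your PATH):
docker --version
python3 --version
pip --version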
Install CLI
In the terminal, run the following command to install the Infernet CLI:
pip install infernet-cli
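To verify the installation, you can print the CLI's help output. This is a quick sanity check, though the exact commands and flags listed may vary by version:
infernet-cli --help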
Prepare Configuration
Set the DEPLOY_DIR environment variable to point to the directory that will store all relevant deployment files. The directory will be created if it doesn't exist, and defaults to ./deploy.
export DEPLOY_DIR=./deploy
Now pull a configuration template using the config command. This will create a partially filled config.json.
In this example, we pull a template that connects to a local Anvil chain instance, with Infernet SDK contracts predeployed.
infernet-cli config anvil --skip
You should expect output similar to the following:
Using configurations:
Chain = 'anvil'
Version = '1.3.0'
GPU support = disabled
Output dir = 'deploy'
Stored base configurations to './deploy'.
To configure services:
- Use `infernet-cli add-service`
- Or edit config.json directly
Notice that the first argument specifies which chain to connect to, in this case a local anvil instance. The --skip flag skips optional configuration inputs, which aren't needed for demonstration purposes.
As the output suggests, config.json is now stored in the deploy/ directory. You can edit it manually (see Configuration), or use the CLI to add services.
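Before adding services, you can inspect the generated template directly (assuming a POSIX shell and the DEPLOY_DIR set above; the exact contents depend on the template and CLI version):
cat $DEPLOY_DIR/config.json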
Add Services
Although our node is fully configured, it's not running any services yet. To add services, you can use the add-service command.
Official services
You can add any of Ritual's official services via recipes, which greatly simplify configuration and deployment of containers. The complete list of service recipes can be found here.
For example, to add the onnx-inference service:
infernet-cli add-service onnx-inference --skip
You should expect output similar to the following:
Version not provided. Using latest version v2.0.0.
Successfully added service 'onnx-inference-2.0.0' to config.json.
Custom services
For custom container configurations, you can paste the raw JSON directly into the add-service command:
infernet-cli add-service
You will be prompted:
Enter service configuration JSON, followed by EOF:
You can then paste any valid JSON configuration, e.g.:
{
"id": "onnx-inference-2.0.0",
"image": "ritualnetwork/onnx_inference_service:2.0.0",
"description": "ONNX Inference Service",
"command": "--bind=0.0.0.0:3000 --workers=2",
"env": {},
"external": true,
"gpu": false,
"volumes": [],
"allowed_addresses": [],
"allowed_delegate_addresses": [],
"allowed_ips": [],
"accepted_payments": {},
"generates_proofs": false
}
After pasting the JSON, press Ctrl+D (EOF) to submit it. You should see output similar to the following:
Successfully added service 'onnx-inference-2.0.0' to config.json.
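Because add-service reads the configuration from standard input, you can also pipe a saved file instead of pasting it interactively. A minimal sketch, assuming the JSON above is stored in a (hypothetical) service.json file and that the command accepts piped stdin:
infernet-cli add-service < service.json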
Alternatively, you can edit the config.json directly:
{
...,
"containers": [
...,
// Add the following JSON object to the containers array
{
"id": "onnx-inference-2.0.0",
"image": "ritualnetwork/onnx_inference_service:2.0.0",
"description": "ONNX Inference Service",
"command": "--bind=0.0.0.0:3000 --workers=2",
"env": {},
"external": true,
"gpu": false,
"volumes": [],
"allowed_addresses": [],
"allowed_delegate_addresses": [],
"allowed_ips": [],
"accepted_payments": {},
"generates_proofs": false
}
]
}
Both examples above add the onnx-inference-2.0.0 service to the node, identical to the official service.
Adding custom services requires familiarity with the container configuration options. As such, we recommend starting with official services.
Deploy Node
Your node is now fully configured. To deploy the node, use start:
infernet-cli start
Starting Infernet Node...
Containers started successfully.
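Because the node and its services run as Docker containers, standard Docker tooling works for verification. For example, to list the running containers (the exact container names depend on your configuration):
docker ps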
To stop it, use stop:
infernet-cli stop
Stopping Infernet Node...
Containers stopped successfully.
After initial deploy, you can always modify node configurations, add or modify services, etc. However, for these changes to be reflected, you must RESET the node, i.e. wipe its data and re-create it.
You can use infernet-cli reset to re-create the node, or infernet-cli reset --services to also re-create the services. Note that updating container configurations always requires a node reset.
For instance, to apply new node AND service configurations:
infernet-cli reset --services
Resetting Infernet Node...
Containers stopped successfully.
Destroying service containers...
Containers started successfully.
Congratulations!
Your Infernet Node is now running, and can fulfill requests to the onnx-inference-2.0.0 service. You can now interact with the node via the REST API.
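As a sketch of what that interaction can look like, the following requests target the node's default port 4000. The endpoint paths and request shapes here are assumptions, so consult the REST API docs for the authoritative interface:
# Check that the node is up (endpoint path assumed)
curl http://localhost:4000/health

# Submit a job to the onnx-inference service (request body illustrative)
curl -X POST http://localhost:4000/api/jobs \
  -H "Content-Type: application/json" \
  -d '{"containers": ["onnx-inference-2.0.0"], "data": {}}'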
For your node's REST API to be accessible from the public internet, you will have to expose port 4000 to arbitrary internet traffic. This is a security risk, and NOT recommended for your local machine. If you plan to support off-chain workloads through the REST API, we highly recommend deploying your production node(s) on AWS or GCP instead, as detailed in Cloud Providers.