There are three service containers for serving workflows: two for inference and one for serving proofs:
- The classic inference service serves classic ML inference workflows.
- The classic proof service serves proofs generated from classic ML training workflows.
- The LLM inference service serves LLM inference workflows.
For instructions on building these containers directly with Docker, see our GitHub (coming soon) or the workflows page.
If you have custom workflows, base images are available on Docker Hub at:
- `ritual/infernet-classic-inference` (coming soon)
- `ritual/infernet-classic-proving` (coming soon)
- `ritual/infernet-llm-inference` (coming soon)
You can use these as base images and include the source of your custom workflow in your own image. Example:
```docker
COPY models/src/your_workflow_class_here.py .
```
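Putting this together, a minimal Dockerfile for a custom workflow might look like the sketch below. The base image tag and file paths are illustrative assumptions, not published artifacts:

```docker
# Hypothetical base image (coming soon on Docker Hub)
FROM ritual/infernet-classic-inference:latest

# Add the source of your custom workflow (path is illustrative)
COPY models/src/your_workflow_class_here.py .
```

You would then build and tag the image locally, e.g. `docker build -t your-org/your-workflow .`, and host it in a registry your node can pull from.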
These containers should have corresponding images built and hosted on Docker Hub. Images are statically declared during node configuration, and are downloaded and run on startup.
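As a sketch of what a static declaration might look like, here is a hypothetical node configuration fragment. The field names and structure are illustrative assumptions, not the actual configuration schema:

```json
{
  "containers": [
    {
      "id": "classic-inference",
      "image": "ritual/infernet-classic-inference:latest",
      "port": 3000
    }
  ]
}
```

On startup, the node would pull each declared image and start a container from it.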
Service implementors who want compute will either have to find a node that hosts their service, or run their own node.
- Classic ML training workflow with ZKPs (coming soon)
- LLM Inference workflow (coming soon)
- Base Torch Inference workflow (coming soon)