Ritual ML Workflows
FAQs

How do I bootstrap a container with these workflows?

There are two service containers for serving inference workflows, as well as a container for serving proofs:

The classic inference service is meant to serve classic ML inference workflows.

The classic proof service is meant to serve proofs generated from classic ML training workflows.

The llm inference service is meant to serve LLM inference workflows.

For instructions on building these containers directly with Docker, see our GitHub (coming soon) or the workflows page.

If you have custom workflows, images are available on Docker Hub at:

ritual/infernet-classic-inference (coming soon)

ritual/infernet-classic-proving (coming soon)

ritual/infernet-llm-inference (coming soon)

You can use these as base images and include the source of your custom workflow. Example:

# Start from the Ritual LLM inference base image
FROM ritual/infernet-llm-inference:0.0.2

WORKDIR /app

# Copy your custom workflow source and make it importable by the service
COPY models/src/your_workflow_class_here.py .
ENV PYTHONPATH=/app
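
From there, you can build the image locally with Docker. The image name and tag below are placeholders; use whatever fits your project:

docker build -t your-org/your-custom-workflow:0.0.1 .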

Where do I host these containers?

These containers should have corresponding images built and hosted on Docker Hub.
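
For example, assuming a hypothetical Docker Hub namespace your-org, the image built above could be published like this:

docker push your-org/your-custom-workflow:0.0.1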

How do I deploy a container to an Infernet node?

Images are statically declared during node configuration, and are downloaded and run on startup.
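
As a rough sketch, that static declaration might look like the snippet below. The field names and layout are illustrative assumptions only; consult the Infernet node configuration documentation for the exact schema:

{
  "containers": [
    {
      "id": "llm-inference",
      "image": "ritual/infernet-llm-inference:0.0.2"
    }
  ]
}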

What considerations do I need to keep in mind for hosting?

Service implementers who want compute will either have to find a node that hosts their service, or run their own node.

Where can I find example repos implementing these workflows?

Classic ML training workflow with ZKPs (coming soon)

LLM Inference workflow (coming soon)

Base Torch Inference workflow (coming soon)