
Types of Workflows in Infernet

Within Infernet, we divide templates into two main types of workflows: Training and Inference. Workflows consist of customizable Python template scripts and corresponding Docker images that serve the scripts.

Here is the class structure for the two workflow script classes that can be customized and extended.
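The sketch below illustrates that structure. It is a minimal approximation only: the class and method names are assumptions for illustration, not the exact classes exposed by the SDK.

# Illustrative sketch; class and method names are assumptions, not the exact SDK interfaces.
from abc import ABC, abstractmethod
from typing import Any


class BaseWorkflow(ABC):
    """Shared lifecycle for both workflow types."""

    def setup(self) -> None:
        """Load models, tokenizers, or other artifacts before first use."""

    @abstractmethod
    def do_run(self, input_data: Any) -> Any:
        """Core logic implemented by concrete workflows."""


class TrainingWorkflow(BaseWorkflow):
    """Customizable template for training runs: fit a model, then persist artifacts."""


class InferenceWorkflow(BaseWorkflow):
    """Customizable template for serving predictions."""

    def do_preprocessing(self, input_data: Any) -> Any:
        return input_data

    def do_postprocessing(self, output_data: Any) -> Any:
        return output_data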

Types of steps in a workflow

Ritual's training workflows can plug into classical models and add verifiability to the ML workflow through its integrations. Ritual uses the MLflow library for persisting workflow artifacts.
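For example, a training workflow might persist its fitted model and metrics with MLflow along these lines. This is a minimal sketch: the model, metric names, and tracking URI are placeholders, not values prescribed by the SDK.

import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Placeholder tracking server; point this at your MLflow instance.
mlflow.set_tracking_uri("http://localhost:5000")

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run(run_name="classic-training-example"):
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Persist the fitted model as a run artifact.
    mlflow.sklearn.log_model(model, "model")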

Ritual's inference workflows support all of the model types mentioned here, including models trained with its SDK. Common methods for upstream and downstream modeling tasks used with large language models, such as prompt tuning, context-window pre-processing, and output post-processing, are also included. For ease of use, Infernet containerizes and directly serves inference.
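As an illustration of that kind of pre- and post-processing, a workflow might fill a prompt template before inference and trim the raw completion afterwards. The helpers below are illustrative only and are not part of the SDK.

# Illustrative pre-/post-processing helpers; not SDK functions.
PROMPT_TEMPLATE = "You are a helpful assistant.\n\nQuestion: {question}\nAnswer:"

def preprocess(question: str, max_chars: int = 4000) -> str:
    # Truncate the input so the filled prompt stays within the context window.
    return PROMPT_TEMPLATE.format(question=question[:max_chars])

def postprocess(raw_completion: str) -> str:
    # Strip whitespace and return only the first non-empty line of the completion.
    for line in raw_completion.strip().splitlines():
        if line.strip():
            return line.strip()
    return ""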

Check out getting set up for custom installation instructions for each type of workflow.

Here are a few other example workflow scripts:

Example classic ML training workflow demonstrating a ZKP implementation (coming soon)

LLM inference workflow (coming soon)

Base Torch inference workflow (pre-processing to convert JSON data into tensor format) (coming soon)

Containers and Images for different ML workflows

Inference for classical ML models.

Docker image coming soon.

sudo docker build -t "ritual_infernet_ml_classic:local" -f classic_ml_inference_service.Dockerfile .
sudo docker run --name=classic_inf_service -d  -p 4998:3000 --env-file classic_ml_inference_service.env  "ritual/infernet-classic-inference:0.0.2" --bind=0.0.0.0:3000 --workers=2
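
Once the container is running, the service can be exercised over HTTP on the mapped host port (4998 above). The endpoint path and payload shape in this snippet are assumptions for illustration, not the service's documented API.

import requests

# Hypothetical request; the path and payload schema are assumptions, not the documented API.
payload = {"input": [[5.1, 3.5, 1.4, 0.2]]}
resp = requests.post("http://localhost:4998/", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())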

Classic ML Inference Service (classic_ml_inference_service.env)

# FLASK_ prefix is required for the service to pick up these settings
FLASK_CLASSIC_WORKFLOW_CLASS="ml.workflows.inference.torch_inference_workflow.TorchInferenceWorkflow"
FLASK_CLASSIC_WORKFLOW_POSITIONAL_ARGS=[]
FLASK_CLASSIC_WORKFLOW_KW_ARGS={"model_name":"your_model_org/your_model_name"}

# the hugging face token is required to download the model
HUGGING_FACE_HUB_TOKEN=YOUR_HUGGING_FACE_TOKEN
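
The FLASK_ prefix suggests the service reads its configuration via Flask's prefixed-environment loading. The sketch below shows how such values would be picked up; this is an assumption about the service's internals (it requires Flask 2.1+), and the config keys are simply the variables above with the prefix stripped.

from flask import Flask

app = Flask(__name__)
# Reads every FLASK_* environment variable into app.config, stripping the prefix
# and attempting to parse each value as JSON (so KW_ARGS becomes a dict).
app.config.from_prefixed_env(prefix="FLASK")

workflow_class = app.config.get("CLASSIC_WORKFLOW_CLASS")
workflow_kwargs = app.config.get("CLASSIC_WORKFLOW_KW_ARGS", {})
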
Serve proofs from classical models.

Docker image coming soon.

sudo docker build -t "ritual_infernet_ml_proof:local" -f classic_ml_proof_service.Dockerfile .
sudo docker run --name=classic_proof_service -d -p 4997:3000 --env-file classic_proof_service.env "ritual/infernet-classic-proving:0.0.2" --bind=0.0.0.0:3000 --workers=2

Classic ML Proof Service (classic_proof_service.env)

# The name of the model we are generating proofs for
FLASK_MODEL_NAME=your_model_org/your_model_name
# the hugging face token is required to download necessary artifacts
HUGGING_FACE_HUB_TOKEN=YOUR_HUGGING_FACE_TOKEN
Inference for large language models.

Docker image coming soon.

sudo docker build -t "ritual_infernet_ml_llm:local" -f llm_inference_service.Dockerfile .
sudo docker run --name=llm_inf_service -d -p 4999:3000 --env-file llm_inference_service.env "ritual/infernet-llm-inference:0.0.2" --bind=0.0.0.0:3000 --workers=2

LLM Inference Service (llm_inference_service.env)

FLASK_LLM_WORKFLOW_CLASS="ml.workflows.inference.llm_inference_workflow.LLMInferenceWorkflow"
# URL of the Hugging Face / Prime inference backend
FLASK_LLM_WORKFLOW_POSITIONAL_ARGS='["http://ENTER-URL-HERE"]'
FLASK_LLM_WORKFLOW_KW_ARGS={}
# the hugging face token is required to download necessary artifacts
HUGGING_FACE_HUB_TOKEN=YOUR_HUGGING_FACE_TOKEN