Ritual ML Workflows
Installation

⚒ Getting Set Up

Installation and Usage

Install the SDK and customize it for your AI needs.

Step 1: Install the library and dependencies

Once you've installed the main SDK, set up the infernet/ml folder by running the following:

# Initialize venv
python3 -m venv .venv
source .venv/bin/activate
cd infernet/ml
pip install -e .

Alternatively, you can install the dependencies from requirements.txt and set the PYTHONPATH:

python3 -m venv .venv
source .venv/bin/activate
cd infernet/ml
pip install -r requirements.txt
export PYTHONPATH=src

Step 2: Test your installation

In the same folder, run the commands below to verify that your installation works correctly.

pip install -r requirements-test.txt
export PYTHONPATH=src
python -m pytest tests

Step 3: Try an example workflow driver

Try running an example workflow driver:

python3 src/ml/drivers/example_workflow_driver.py

More info on Github here (coming soon).

Step 4: Customize it to your needs.

Want to train and generate proofs with your classical machine learning model, or use inference results of large language models on-chain? You can mix and match for your own use case by extending the templates found in workflows.

To see more, you can follow the tutorials for logistic regression training and proof generation or large language model inference.

Depending on your use case, all that's needed may be as simple as changing the model name. If your use case is more complex, you can customize further by extending the template.

Make sure you name the custom workflow script consistently (e.g. your_workflow_class_here.py) so it can be used in the deploy step.
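For illustration, a custom workflow can be as small as a subclass of one of the provided templates. The sketch below assumes the TorchInferenceWorkflow template and the model_name keyword argument that appear in the service configuration further down; treat the exact constructor signature as an assumption and check the template you extend.

# your_workflow_class_here.py -- minimal sketch of a custom workflow.
# TorchInferenceWorkflow and the model_name kwarg are taken from the service
# configuration below; the exact constructor signature is an assumption.
from ml.workflows.inference.torch_inference_workflow import TorchInferenceWorkflow


class YourCustomWorkflow(TorchInferenceWorkflow):
    """Points the classical inference template at your own model artifact."""

    def __init__(self, *args, **kwargs):
        kwargs.setdefault("model_name", "your_model_org/your_model_name")
        super().__init__(*args, **kwargs)

In the deploy step, the service can then be pointed at this class (e.g. your_workflow_class_here.YourCustomWorkflow) via the workflow-class environment variable.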

See more on uploading custom model artifacts on our Github (coming soon).

Optional custom step for LLMs: customize settings and prompt tuning for large language models

Check out the pre-processing and model configuration options for prompt tuning here, and the types of common models supported here.
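As a rough illustration, prompt and pre-processing settings are supplied to the LLM workflow as keyword arguments alongside the inference backend URL; the keyword argument names below are hypothetical placeholders, so check the workflow class for the options it actually accepts.

# Rough illustration only: the keyword arguments below are hypothetical
# placeholders for the prompt / pre-processing options the workflow accepts.
from ml.workflows.inference.llm_inference_workflow import LLMInferenceWorkflow

workflow = LLMInferenceWorkflow(
    "http://ENTER-URL-HERE",  # inference backend URL (see the LLM service env file below)
    system_prompt="You are a concise on-chain assistant.",  # hypothetical kwarg
    temperature=0.7,  # hypothetical kwarg
)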

Optional custom step: Add verification to your model inference

Check out proof generation and resources here.

Optional for advanced users: Bring your own model with ZK proving support

You can bring your own classical models with existing proving support using the deployer utility script (located at /src/ml/utilities/deployer.py). Check out this link for more (coming soon).

Voila! You're ready to deploy!

Deploy with the SDK

Ready to deploy? Links to Docker containers and images can be found here (coming soon).

Containers and Images

There are a few containers used for serving models and generating proofs. Users can mix and match containers as needed and easily hook them into the workflow script customized above. If you have custom workflow dependencies, you should create your own image using the provided images from huggingface as base images.

Install the custom images below for your use case.

Inference for classical ML models.

Docker image coming soon.

sudo docker build -t "ritual_ritual_ml_classic:local" -f classic_ml_inference_service.Dockerfile .
sudo docker run --name=classic_inf_service -d  -p 4998:3000 --env-file classic_ml_inference_service.env  "ritual/infernet-classic-inference:0.0.2" --bind=0.0.0.0:3000 --workers=2

Classic ML Inference Service (classic_ml_inference_service.env)

# FLASK_ prefix is required for the service to pick up these settings
FLASK_CLASSIC_WORKFLOW_CLASS="ml.workflows.inference.torch_inference_workflow.TorchInferenceWorkflow"
FLASK_CLASSIC_WORKFLOW_POSITIONAL_ARGS=[]
FLASK_CLASSIC_WORKFLOW_KW_ARGS={"model_name":"your_model_org/your_model_name"}

# the hugging face token is required to download the model
HUGGING_FACE_HUB_TOKEN=YOUR_HUGGING_FACE_TOKEN
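Once the container is up, you can smoke-test it from the host. This is a minimal sketch only: the /inference route and the payload shape are assumptions, not the documented API; only the host port (4998) comes from the run command above.

# Minimal smoke test for the classic inference service (sketch only).
# The "/inference" route and the payload shape are assumptions; only the
# host port (4998) comes from the docker run command above.
import requests

resp = requests.post(
    "http://localhost:4998/inference",
    json={"input": [[0.1, 0.2, 0.3, 0.4]]},  # example feature vector
    timeout=30,
)
print(resp.status_code, resp.json())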

Serve proofs from classical models.

Docker image coming soon.

sudo docker build -t "ritual_ritual_ml_proof:local" -f classic_ml_proof_service.Dockerfile .
sudo docker run --name=classic_proof_service -d -p 4997:3000 --env-file classic_proof_service.env "ritual/infernet-classic-proving:0.0.2" --bind=0.0.0.0:3000 --workers=2

Classic ML Proof Service (classic_proof_service.env)

# The name of the model we are generating proofs for
FLASK_MODEL_NAME=your_model_org/your_model_name
# the hugging face token is required to download necessary artifacts
HUGGING_FACE_HUB_TOKEN=YOUR_HUGGING_FACE_TOKEN

Inference for large language models.

Docker image coming soon.

sudo docker build -t "ritual_ritual_ml_llm:local" -f llm_inference_service.Dockerfile .
sudo docker run --name=llm_inf_service -d -p 4999:3000 --env-file llm_inference_service.env "ritual/infernet-llm-inference:0.0.2" --bind=0.0.0.0:3000 --workers=2

LLM Inference Service (llm_inference_service.env)

FLASK_LLM_WORKFLOW_CLASS="ml.workflows.inference.llm_inference_workflow.LLMInferenceWorkflow"
# URL of the HF / Prime inference backend
FLASK_LLM_WORKFLOW_POSITIONAL_ARGS='["http://ENTER-URL-HERE"]'
FLASK_LLM_WORKFLOW_KW_ARGS={}
# the hugging face token is required to download necessary artifacts
HUGGING_FACE_HUB_TOKEN=YOUR_HUGGING_FACE_TOKEN
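As with the classical service, you can smoke-test the running container from the host. Again, this is only a sketch: the route and payload keys are assumptions, and only the host port (4999) comes from the run command above.

# Minimal smoke test for the LLM inference service (sketch only).
# The "/inference" route and the "prompt" key are assumptions; only the
# host port (4999) comes from the docker run command above.
import requests

resp = requests.post(
    "http://localhost:4999/inference",
    json={"prompt": "Summarize what this service does in one sentence."},
    timeout=60,
)
print(resp.status_code, resp.json())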

For instructions and links on building these containers directly in Docker, see our Github here (coming soon).

You can use the above as a base image and add in your custom workflow:

FROM ritual/infernet-llm-inference:0.0.2
 
WORKDIR /app
COPY ml/src/your_workflow_class_here.py .
ENV PYTHONPATH=/app