Ritual ML Workflows

✨ Supported Methods


A typical AI pipeline includes various steps such as data gathering, preprocessing, training, evaluation, and inference.

Ritual's AI SDK breaks the pipeline into modular steps so users can bring some or all of their existing pipelines on-chain, with their models and tools of choice.

There are two main types of ML boilerplate workflows users can customize: training and inference.
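To make the decomposition above concrete, here is a minimal, self-contained sketch of a pipeline split into modular stages (gather, preprocess, train, infer). The stage names and the toy least-squares model are purely illustrative assumptions, not Ritual SDK APIs:

```python
# Illustrative only: a toy pipeline split into the modular stages
# (gather, preprocess, train, infer) described above. The function
# names and the closed-form least-squares model are hypothetical,
# not part of Ritual's SDK.

def gather_data():
    # Stand-in for a data-gathering step (e.g. reading from storage).
    return [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

def preprocess(rows):
    # Split (x, y) rows into feature and target lists.
    xs = [x for x, _ in rows]
    ys = [y for _, y in rows]
    return xs, ys

def train(xs, ys):
    # Ordinary least squares for y = a*x + b, fit in closed form.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return {"a": a, "b": b}

def infer(model, x):
    # Inference is a separate, swappable stage from training.
    return model["a"] * x + model["b"]

if __name__ == "__main__":
    xs, ys = preprocess(gather_data())
    model = train(xs, ys)
    print(round(infer(model, 5.0), 2))
```

Because each stage is a standalone function, any one of them (say, training) could be replaced with an on-chain workflow while the others stay in an existing off-chain pipeline.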

Supported Models

Ritual supports classical and large language models, including most models supported by scikit-learn as well as open-source models supported on HuggingFace. See model support for a list of common models supported by Ritual.


Model Files and Observability

Ritual's Infernet offers plug-and-play support for existing ML pipelines, so users can bring their own performance observability tools, such as Weights & Biases, HuggingFace, and more, to the training process and deployment. Model and dataset versioning are also modular parts of the process and can be integrated with existing tools.

To manage the model artifact files needed for proof generation, Infernet integrates with open-source libraries such as MLflow to track file uploads for the proof generation and verification steps. MLflow is an open-source machine learning experiment tracking tool, similar to tools such as AzureML and Weights & Biases.

Verification and Model Trust Guarantees

Infernet's AI pipeline integrates model performance with verification (zero-knowledge proofs, fraud proofs) through integrations with existing open-source and custom libraries.

There are currently limitations on the types of models that can be supported with proofs. Models built with popular libraries such as PyTorch and scikit-learn, and generally any model that can be compiled to the ONNX runtime, are supported.

To learn more about verification, check out an example of models supported with proofs here.

Model and Data Provenance

Ritual's Infernet SDK can be modularized to support uploading training and inference artifacts and files to existing open-source and decentralized storage options.