ML Workflows


What are Ritual Machine Learning Workflows?

Ritual provides easy-to-use abstractions for users to create AI/ML workflows that can be deployed on Infernet nodes. The infernet-ml library is a Python SDK that provides a set of tools and extendable classes for creating and deploying machine learning workflows. It is designed to be easy to use, and provides a consistent interface for data pre-processing, inference, and post-processing.
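The pre-processing → inference → post-processing pipeline described above can be sketched as a small base class. Note that all class and method names below are illustrative only, not the actual infernet-ml API:

```python
from abc import ABC, abstractmethod
from typing import Any


class InferenceWorkflow(ABC):
    """Illustrative base class (not the real infernet-ml interface):
    a workflow is a fixed pre-process -> inference -> post-process pipeline."""

    def setup(self) -> None:
        """Load models or clients once, before serving requests."""

    @abstractmethod
    def do_preprocessing(self, raw_input: Any) -> Any:
        """Turn a raw request into model-ready input."""

    @abstractmethod
    def do_run_model(self, model_input: Any) -> Any:
        """Run the underlying model (ONNX, Torch, remote API, ...)."""

    @abstractmethod
    def do_postprocessing(self, model_output: Any) -> Any:
        """Turn raw model output into the response payload."""

    def inference(self, raw_input: Any) -> Any:
        """Single entry point that runs the full pipeline in order."""
        model_input = self.do_preprocessing(raw_input)
        model_output = self.do_run_model(model_input)
        return self.do_postprocessing(model_output)


class EchoWorkflow(InferenceWorkflow):
    """Toy concrete workflow: normalizes a string, 'predicts' its length."""

    def do_preprocessing(self, raw_input: str) -> str:
        return raw_input.strip().upper()

    def do_run_model(self, model_input: str) -> int:
        return len(model_input)

    def do_postprocessing(self, model_output: int) -> dict:
        return {"length": model_output}


workflow = EchoWorkflow()
workflow.setup()
print(workflow.inference("  hello  "))  # {'length': 5}
```

The point of the pattern is that callers only ever invoke `inference()`, while each concrete workflow overrides the three stages independently.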

Batteries Included

We provide a set of pre-built workflows for common use-cases: running ONNX models, Torch models, any Huggingface model via the Huggingface inference client, and even closed-source models such as OpenAI's GPT-4.

Getting Started

Head over to the next section for installation and a quick walkthrough of the ML workflows.

Tutorials & Videos

Head over to Ritual Learn for more end-to-end tutorials and examples that use this library, including:

  1. Prompt to NFT: A tutorial on using Infernet to create & mint an NFT generated by stable diffusion from a prompt.
  2. Running a Torch/ONNX Model: We import the same model in two formats: as an .onnx file & as a .torch file, and use infernet-ml's workflows to invoke it either from a smart contract or from infernet-node's REST API.
  3. TGI Inference: A tutorial on running Mistral-7b on Infernet, and optionally delivering its output to a smart contract.
  4. GPT-4 Inference: A tutorial on how to use our closed-source-inference workflows to easily integrate with OpenAI's completions API.