Web3 Quickstart

Quickstart for Web3 Engineers


Ritual lets users easily integrate typical AI use cases into their on-chain workflows, starting with its Infernet SDK. Users can add the full inference power of AI models, ranging from classical machine learning models to large language models (LLMs), into their existing workflows in a verifiable way.

How does the integration between web3 and AI functionality work?

Ritual's Infernet SDK includes modular integration hooks for popular open-source libraries at each step of an integrated AI and web3 workflow: data pre-processing and training, model support (scikit-learn, HuggingFace), verification (zero-knowledge proofs, optimistic fraud proofs, various open-source ZK libraries), data hosting and provenance, and more.

Infernet supports these steps through AI workflows: customizable template scripts and Docker images that fully support data processing, training, inference, and more across a variety of models. Users can harness the full predictive power of classical models and LLMs by editing workflow scripts and deploying them in their existing on-chain workflows. See below for more information on the types of integration Ritual offers.
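As an illustration only, a workflow script of the kind described above might be organized into stages like the following. This is a hypothetical sketch in plain Python with scikit-learn; the stage names and function signatures are assumptions, not the Infernet SDK's actual API.

```python
# Hypothetical workflow-script sketch; stage names are illustrative,
# not the Infernet SDK's real interface.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression


def preprocess():
    # Data-processing stage: load features and labels.
    X, y = load_iris(return_X_y=True)
    return X, y


def train(X, y):
    # Training stage: fit a classical ML model.
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)
    return model


def infer(model, features):
    # Inference stage: produce a prediction for one input.
    return int(model.predict([features])[0])


X, y = preprocess()
model = train(X, y)
prediction = infer(model, X[0])
```

In a template like this, a user would typically edit only the stage they care about (e.g. swap in a HuggingFace model in `train`) while the surrounding deployment scaffolding stays the same.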

Introduction to an AI Pipeline

What's in a typical AI pipeline?

A typical AI pipeline includes steps ranging from sourcing and processing data, to training a model, to model inference, and finally to evaluating the model. Models of vastly differing size and predictive power (e.g. from small classifiers to LLMs) share the same fundamental modules as part of their development cycles. Ritual's Infernet workflows are an abstraction of these fundamental modules.
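The four stages above can be sketched end-to-end with scikit-learn (one of the libraries this doc names); this is a generic pipeline example, not Ritual-specific code.

```python
# Minimal sketch of the four pipeline stages described above.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 1. Source and process data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Train a model.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# 3. Run inference on held-out data.
predictions = model.predict(X_test)

# 4. Evaluate the model.
accuracy = accuracy_score(y_test, predictions)
```

Whether the model is a small classifier like this or an LLM, the development cycle walks through the same four stages; only the implementations of each stage change.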

Check out more info in the overview here, along with the supported methods.

Verifiability and trust guarantees

Ritual's Infernet SDK serves as an abstraction for all the components needed in a full AI pipeline, so users can pick and choose the AI hooks needed for their use case.

While AI models offer powerful predictive capabilities, their non-deterministic nature requires additional trust guarantees before they can be used in web3 applications. The Infernet SDK hooks into verification components so users can apply verifiability as needed.
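To make the trust problem concrete, here is a minimal commit-and-check sketch: a node commits to a hash of its inference output so a verifier can later detect tampering. This is an illustrative pattern only; it is not the Infernet SDK's actual verification machinery, which the doc describes as using zero-knowledge proofs and optimistic fraud proofs.

```python
# Illustrative commit-and-check pattern for model outputs.
# Not Infernet code: a stand-in for the general idea of verifiable results.
import hashlib
import json


def commit(model_output: dict) -> str:
    # A node posts a hash commitment of its inference output.
    payload = json.dumps(model_output, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def check(model_output: dict, commitment: str) -> bool:
    # A verifier recomputes the hash and compares it to the commitment.
    return commit(model_output) == commitment


output = {"model": "classifier-v1", "prediction": 1}
c = commit(output)
ok = check(output, c)                                          # matches
bad = check({"model": "classifier-v1", "prediction": 0}, c)    # mismatch
```

Proof systems like ZKPs and fraud proofs go further than a bare hash check: they let a verifier be convinced the output was computed correctly without re-running the full model.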

Ready to get started?

Here are a few links to get you started.

  1. Installation and Setup
  2. Tutorials: Simple logistic classifier tutorial, Large language model tutorial
  3. Supported Methods


Check out a few tutorials for various types of models.

Simple logistic classifier tutorial

Large language model tutorial

Classic ML training workflow with ZKPs (coming soon)

LLM Inference workflow (coming soon)

Base Torch Inference workflow (coming soon)

FAQs and Resources

FAQs

AI Resources