Quickstart for ML Engineers
Ritual lets users easily integrate common AI use cases and workflows into their blockchain applications, starting with its Infernet SDK. Users can offer their ML models to others or build protocols around them.
Ritual's Infernet SDK includes modular integration hooks for popular open-source libraries at each step of an integrated AI and web3 workflow: data pre-processing and training, model support (scikit-learn, HuggingFace), verification (zero-knowledge proofs, optimistic fraud proofs, and various open-source zk libraries), data hosting and provenance, and more.
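As a sketch of the model-support step, here is a minimal scikit-learn workflow that trains a classical model on a built-in dataset and evaluates it on held-out data. This is illustrative only: the dataset, model choice, and structure are assumptions for the example, not Infernet SDK code.

```python
# Minimal sketch: train a classical model that could later be offered
# for inference. Illustrative only -- not Infernet SDK code.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a toy dataset and split off a held-out test set.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple classifier.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Evaluate on data the model has not seen.
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

A model trained this way would then be serialized and served behind an inference endpoint before others could call it.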
Ritual's Infernet supports web3 workflows: inheritable contracts that fully support model inference on-chain. Users can tap the full predictive power of classical and large language models by editing workflow scripts and deploying them within their existing on-chain workflows. See below for more information on the types of integration Ritual offers.
What's in a typical Web3 pipeline?
A typical Web3 pipeline includes designing and writing smart contracts, then deploying them to a testnet (optional) and to the mainnet of the blockchain you are targeting.
To learn more about Solidity, check out the following resources and other helpful libraries:
- Crypto Zombies Solidity Tutorial
- Foundry, a smart contract development toolchain
- Remix, a Solidity IDE
- Learn more about Remote Procedure Calls (RPCs)
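To make the RPC item concrete: clients talk to blockchain nodes over JSON-RPC. The sketch below builds the standard `eth_blockNumber` request body using only the Python standard library; the endpoint URL a real client would POST this to is deliberately left out.

```python
# Sketch: the JSON-RPC payload a client sends to an Ethereum node to
# read the latest block number.
import json

payload = {
    "jsonrpc": "2.0",
    "method": "eth_blockNumber",
    "params": [],
    "id": 1,
}

# A real client POSTs this body to a node's RPC endpoint (e.g. via
# urllib or a web3 library). The response's "result" field is a hex
# string such as "0x10d4f"; int(result, 16) converts it to a number.
request_body = json.dumps(payload)
print(request_body)
```

The same request/response shape applies to other read calls like `eth_getBalance`, which is why most web3 libraries are thin wrappers over this protocol.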
Verifiability and trust guarantees
Ritual's Infernet SDK serves as an abstraction over all the components needed in a full AI pipeline, so users can pick and choose the AI hooks their use case requires.
While AI models offer powerful predictive capabilities, their non-deterministic nature requires additional trust guarantees before they can be used in web3 applications. The Infernet SDK hooks into verification components so users can apply verifiable computation as needed.
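To illustrate why determinism matters for trust: if an inference run can be made reproducible (fixed seeds, pinned model weights), a verifier can re-run it and compare a hash of the output against a committed value. The sketch below shows that commit-and-recompute idea with a plain SHA-256 hash; it is a simplified stand-in, not Ritual's mechanism, which the section above describes as zk proofs and optimistic fraud proofs.

```python
# Sketch: a simple reproducibility check. A prover commits to a hash
# of its inference output; a verifier re-runs the same deterministic
# inference and checks that the hashes match. Illustrative only --
# production systems use zk or optimistic fraud proofs instead.
import hashlib
import json

def commit(output) -> str:
    """Hash a model output in a canonical form so it can be committed."""
    canonical = json.dumps(output, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Prover side: run inference and publish the commitment.
prover_output = {"label": 2, "score": 0.97}
committed = commit(prover_output)

# Verifier side: re-run the (deterministic) inference and compare.
verifier_output = {"label": 2, "score": 0.97}
print(commit(verifier_output) == committed)
```

Any non-determinism in the pipeline breaks this equality, which is exactly why stronger proof systems are needed for general AI workloads.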
Here are a few links to get you started.
- Are you an experienced ML Engineer with a trained classical model ready to go? Let others leverage your model by deploying it onto an Infernet Node. Learn more about Deployment
- Learn about an ML workflow with Verifiable Computation (coming soon)