Containers

As detailed in our Architecture section, the Container Manager handles the lifecycle of containers. For containers to be compatible with the Infernet Node, they must conform to a specific interface.

Infernet-compatible Containers

The following rules must hold for a container to be "Infernet-compatible":

  1. The server must run on port 3000 internally. This does not constrain developers: the port configuration can map the container's internal port 3000 to any external port.
  2. The server must expose a /service_output endpoint that accepts a POST request. See the Input Format section below for the expected request body.
  3. The output format of the container depends on whether the incoming request has a web2/streaming destination or a web3 destination.
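The rules above can be sketched as a minimal container server. The example below uses only the Python standard library and a hypothetical `handle_job` function; production containers typically use a framework such as Flask or FastAPI, but the interface (POST /service_output, listening on port 3000) is the same.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def handle_job(payload: dict) -> dict:
    # Echo-style placeholder: a real container runs its workload here.
    return {"response": f"received {payload['data']}"}


class ServiceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/service_output":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(handle_job(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging for this sketch.
        pass


if __name__ == "__main__":
    # Must listen on port 3000 inside the container.
    HTTPServer(("0.0.0.0", 3000), ServiceHandler).serve_forever()
```

The node reaches the endpoint on the container's internal port 3000 regardless of how it is mapped externally.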

Input Format

The input format for the /service_output endpoint is as follows:

{
    "source": int,
    "destination": int,
    "data": dict[str, Any],
    "requires_proof": bool,
}
  • source: This field will be 0 for on-chain requests, and 1 for off-chain requests.
  • destination: This field will be 0 for on-chain responses, 1 for off-chain responses and 2 for streaming off-chain responses.
  • data: This field must be a dictionary. The keys and values of this dictionary can be anything. Adjust it to your container's needs.
  • requires_proof: This field will be True if the request requires a proof, and False otherwise. This allows containers to know whether they need to generate a proof or not.
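As a sketch, a container might validate and route an incoming payload on these fields like so. The constants and the `route_request` helper are illustrative, not part of the Infernet API:

```python
from typing import Any

# Destination values from the input format above.
ONCHAIN, OFFCHAIN, STREAMING = 0, 1, 2


def route_request(payload: dict[str, Any]) -> str:
    """Validate the payload and pick an output path by destination."""
    if not isinstance(payload["data"], dict):
        raise ValueError("data must be a dictionary")
    dest = payload["destination"]
    if dest == ONCHAIN:
        return "web3"
    if dest == OFFCHAIN:
        return "web2"
    if dest == STREAMING:
        return "streaming"
    raise ValueError(f"unknown destination: {dest}")
```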

Output Format for web-2 jobs

If the destination is off-chain (i.e. 1), the output must be a dictionary. Its keys and values can be anything; adjust them to your container's needs. For example, if your container's output is a string response from an LLM, wrap the response in a dict under a key such as response, data, or result, like so:

GOOD ✅:

    llm_output: str = some_dope_llm.run()
    return {
      "response": llm_output
    }

BAD ❌: the Infernet Node will reject this output

    llm_output: str = some_dope_llm.run()
    return llm_output

If the destination input is streaming (i.e. 2), the output format is not restricted and can be an arbitrary stream of bytes.
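As a sketch of the streaming case, the container can emit any sequence of byte chunks (for example, LLM tokens as they are generated). The generator below is illustrative and framework-agnostic; with Flask or FastAPI you would return it as the streaming response body:

```python
from typing import Iterator


def stream_tokens(tokens: list[str]) -> Iterator[bytes]:
    # Yield each chunk as raw bytes; the node imposes no schema here.
    for tok in tokens:
        yield tok.encode()


# Joining the chunks reconstructs the full byte stream.
chunks = b"".join(stream_tokens(["hello", ", ", "world"]))
```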

Output Format for web-3 jobs

If the destination input is on-chain (i.e. 0), the output format must be a dictionary with the following keys:

{
    "raw_input": str,
    "processed_input": str,
    "raw_output": str,
    "processed_output": str,
    "proof": str,
}

For any keys you're not going to use, simply return an empty string. For example, if your container's job does not produce a proof, set the proof key to "".

The value for each key must be a hex string of ABI-encoded data.
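As a concrete illustration of what such a hex string looks like, here is a hand-rolled encoding of a single static uint256, which the ABI specification encodes as one 32-byte big-endian word. Real containers would typically use a library such as eth_abi instead; this helper is illustrative only:

```python
def encode_uint256(value: int) -> str:
    # A static uint256 ABI-encodes as a 32-byte big-endian word;
    # .hex() yields the 64-character hex string expected by the node.
    return value.to_bytes(32, "big").hex()


encoded = encode_uint256(255)
```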

Example: Minting an NFT

For example, suppose your container generates an image and wants to deliver the result to an NFT contract, which then mints an NFT to the recipient. You want to deliver an output of the format (address to, string imageUrl), where to is the recipient of the NFT and imageUrl is the URL of your generated image.

Your container output format would be:

    from eth_abi import encode

    # ...inside your container
    to = "0x1234567890123456789012345678901234567890"
    imageUrl = "https://my-dope-image.com/123.png"
    return {
        "raw_input": "",
        "processed_input": "",
        "raw_output": encode(["address", "string"], [to, imageUrl]).hex(),
        "processed_output": "",
        "proof": "",
    }

On the smart-contract side, this can be decoded inside _receiveCompute as follows:

function _receiveCompute(
    uint32 subscriptionId,
    uint32 interval,
    uint16 redundancy,
    address node,
    bytes calldata input,
    bytes calldata output,
    bytes calldata proof
) internal override {
    (bytes memory raw_output, bytes memory processed_output) = abi.decode(output, (bytes, bytes));
    (address to, string memory imageUrl) = abi.decode(raw_output, (address, string));
    _mintNFT(to, imageUrl);
    // etc.
}

Alternative Output Format for Web3 Jobs

Alternatively, if you follow the same format as web2 requests (i.e. a dictionary with arbitrary keys and values), the output param delivered to the smart contract is simply a string encoding of that dictionary.

For example, returning this in your service:

return {"data": "hello, world"}

Would be decoded in your smart contract as follows:

function _receiveCompute(
    uint32 subscriptionId,
    uint32 interval,
    uint16 redundancy,
    address node,
    bytes calldata input,
    bytes calldata output,
    bytes calldata proof
) internal override {
    (string memory response) = abi.decode(output, (string));
    // response will be '{"data": "hello, world"}'
    // etc.
}
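The string delivered in output above corresponds to a serialized form of the returned dictionary. As a sketch, a JSON-style serialization (assumed here via json.dumps; the node's exact serializer may differ) produces the string shown in the contract comment:

```python
import json

# Hypothetical illustration of the dict-to-string encoding; the node's
# exact serialization may differ from plain json.dumps.
output = {"data": "hello, world"}
encoded = json.dumps(output)
```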