The full stack of
decentralized AI

Every capability. One API key. Six subnet-powered services behind a single endpoint.


Inference

SN58 Handshake + SN64 Chutes
POST /v1/chat/completions

532+ models across every major open-source family. OpenAI-compatible format. Smart routing picks the fastest available miner. Streaming, function calling, JSON mode — all supported.

# inference — same as OpenAI
from openai import OpenAI

client = OpenAI(
    base_url="https://api.opentau.ai/v1",
    api_key="otau_your-key"
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.3-70b-instruct",
    messages=[{
        "role": "user",
        "content": "Explain Bittensor"
    }]
)
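
JSON mode uses the same OpenAI-style parameters. A sketch of requesting and parsing structured output; the `response_format` parameter follows the OpenAI convention and is assumed here to pass through unchanged (the network call is shown commented, parsing runs as-is):

```python
import json

# JSON mode: ask for a machine-parseable reply via response_format
request_kwargs = {
    "model": "meta-llama/llama-3.3-70b-instruct",
    "messages": [{
        "role": "user",
        "content": 'Name three subnets. Reply as JSON: {"subnets": [...]}',
    }],
    "response_format": {"type": "json_object"},
}
# response = client.chat.completions.create(**request_kwargs)
# data = json.loads(response.choices[0].message.content)

# parsing the reply is plain json.loads on the message content:
sample = '{"subnets": ["Handshake", "Chutes", "Gradients"]}'
data = json.loads(sample)
```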

Fine-Tuning

SN56 Gradients
POST /v1/fine-tuning/jobs

$5/hr — 5x cheaper than OpenAI. QLoRA and full fine-tune on any open-source model. Upload your dataset, train on decentralized GPUs, deploy the result to your inference endpoint automatically.

# fine-tune for $5/hr
job = client.fine_tuning.jobs.create(
    model="meta-llama/llama-3.1-8b-instruct",
    training_file="file-abc123",
    hyperparameters={
        "method": "qlora",
        "n_epochs": 3,
        "learning_rate": 2e-4
    }
)
# → job.id = "ftjob-..."
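
The training file is standard chat-format JSONL, and a quick local sanity check before uploading can save a failed job. `validate_jsonl` below is an illustrative helper; the schema it checks is the usual OpenAI chat format, assumed to apply here:

```python
import json

def validate_jsonl(path):
    """Return a list of problems found in a chat-format JSONL file."""
    errors = []
    with open(path) as f:
        for n, line in enumerate(f, 1):
            try:
                row = json.loads(line)
            except json.JSONDecodeError:
                errors.append(f"line {n}: not valid JSON")
                continue
            msgs = row.get("messages")
            if not isinstance(msgs, list) or not msgs:
                errors.append(f"line {n}: missing or empty `messages` list")
            elif any("role" not in m or "content" not in m for m in msgs):
                errors.append(f"line {n}: message without role/content")
    return errors

# errors = validate_jsonl("dataset.jsonl")
# if errors:
#     raise SystemExit("\n".join(errors))
```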

Search

SN22 Desearch
POST /v1/search

Decentralized search across X, Reddit, ArXiv, and the open web. Auto-RAG: search results are injected into your LLM context automatically. No API key soup — one endpoint does retrieval and generation.

# decentralized search + RAG
import requests

resp = requests.post(
    "https://api.opentau.ai/v1/search",
    headers={"Authorization": "Bearer otau_your-key"},
    json={
        "query": "latest Bittensor subnet launches",
        "sources": ["web", "x", "arxiv"],
        "auto_rag": True
    }
)
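
If you turn `auto_rag` off and stitch results into a prompt yourself, it is plain string formatting. A minimal sketch; the result fields (`title`, `url`, `snippet`) and the sample hit are assumptions about the response shape, not documented above:

```python
def build_context(results):
    """Format search hits into a context block for an LLM prompt.

    Assumes each hit is a dict with `title`, `url`, and `snippet`
    keys, which is a guess at the /v1/search response shape.
    """
    return "\n\n".join(
        f"[{i}] {r['title']} ({r['url']})\n{r['snippet']}"
        for i, r in enumerate(results, 1)
    )

hits = [{"title": "SN64 Chutes", "url": "https://example.com/sn64",
         "snippet": "Serverless inference on Bittensor."}]
prompt = (
    "Answer using these sources:\n\n"
    + build_context(hits)
    + "\n\nQuestion: what is SN64?"
)
```

Numbering the hits lets the model cite sources as `[1]`, `[2]` in its answer.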

Storage

SN75 Hippius
S3-compat /v1/storage/{bucket}/{key}

Censorship-resistant, IPFS-based storage with 900TB+ capacity. S3-compatible API for training datasets, model weights, and generated assets. Your data on decentralized infrastructure.

# S3-compatible decentralized storage
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://api.opentau.ai/v1/storage",
    aws_access_key_id="otau_your-key",
    aws_secret_access_key="otau_your-secret"
)

s3.upload_file(
    "dataset.jsonl",
    "my-bucket",
    "training/data.jsonl"
)
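
Because uploads land on content-addressed IPFS storage, it is easy to verify integrity end to end yourself. A sketch using a plain SHA-256 digest; this is an application-level check, since the underlying CID scheme is not exposed through the S3 API:

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# digest = sha256_file("dataset.jsonl")
# s3.upload_file("dataset.jsonl", "my-bucket", "training/data.jsonl")
# ... later, after s3.download_file(...) ...
# assert sha256_file("data_copy.jsonl") == digest
```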

Verification

SN34 BitMind
POST /v1/verify/image

Deepfake detection with 95% accuracy. Content-authenticity verification for images, so AI-generated media gets flagged instead of slipping through. An enterprise-grade trust layer for any content pipeline.

# verify image authenticity
import requests

with open("photo.jpg", "rb") as img:
    resp = requests.post(
        "https://api.opentau.ai/v1/verify/image",
        headers={"Authorization": "Bearer otau_your-key"},
        files={"image": img}
    )
# → {"is_ai_generated": false, "confidence": 0.97}
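
The response shape above suggests a simple gating pattern for pipelines. `flag_image` and the 0.9 cutoff are illustrative, not part of the API:

```python
def flag_image(result, threshold=0.9):
    """Map a verify response to accept / reject / review.

    Low-confidence verdicts go to human review instead of being
    trusted either way; tune the cutoff per pipeline.
    """
    if result["confidence"] < threshold:
        return "review"
    return "reject" if result["is_ai_generated"] else "accept"

# flag_image({"is_ai_generated": False, "confidence": 0.97})  → "accept"
```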

Confidential Compute

SN4 Targon
Header: X-Confidential: true

Intel TDX Trusted Execution Environment. Nobody can see your prompts — not miners, not us, not anyone. Add one header to any inference request for enterprise-grade privacy on decentralized infrastructure.

# confidential inference — one header
response = client.chat.completions.create(
    model="meta-llama/llama-3.3-70b-instruct",
    messages=[{
        "role": "user",
        "content": "Analyze this proprietary data..."
    }],
    extra_headers={
        "X-Confidential": "true"
    }
)
# routed through Intel TDX TEE on SN4

How OpenTau compares

| Feature | OpenRouter | Together AI | OpenTau |
| --- | --- | --- | --- |
| Inference | ✓ | ✓ | ✓ |
| Fine-tuning | -- | ✓ | ✓ |
| Decentralized search | -- | -- | ✓ |
| Decentralized storage | -- | -- | ✓ |
| Deepfake verification | -- | -- | ✓ |
| Confidential compute (TEE) | -- | -- | ✓ |
| Cross-subnet pipelines | -- | -- | ✓ |
| Censorship-resistant | -- | -- | ✓ |
| No single point of failure | -- | -- | ✓ |
| Free tier | ✓ | -- | ✓ |