The full stack of
decentralized AI
Every capability. One API key. Six subnet-powered services behind a single endpoint.
Inference
532+ models across every major open-source family. OpenAI-compatible format. Smart routing picks the fastest available miner. Streaming, function calling, JSON mode — all supported.
```python
# inference — same as OpenAI
from openai import OpenAI

client = OpenAI(
    base_url="https://api.opentau.ai/v1",
    api_key="otau_your-key"
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.3-70b-instruct",
    messages=[{"role": "user", "content": "Explain Bittensor"}]
)
```
Fine-Tuning
$5/hr — 5x cheaper than OpenAI. QLoRA and full fine-tune on any open-source model. Upload your dataset, train on decentralized GPUs, deploy the result to your inference endpoint automatically.
```python
# fine-tune for $5/hr
job = client.fine_tuning.jobs.create(
    model="meta-llama/llama-3.1-8b-instruct",
    training_file="file-abc123",
    hyperparameters={
        "method": "qlora",
        "n_epochs": 3,
        "learning_rate": 2e-4
    }
)
# → job.id = "ftjob-..."
```
Search
Decentralized search across X, Reddit, ArXiv, and the open web. Auto-RAG: search results are injected into your LLM context automatically. No API key soup — one endpoint does retrieval and generation.
```python
# decentralized search + RAG
import requests

resp = requests.post(
    "https://api.opentau.ai/v1/search",
    headers={"Authorization": "Bearer otau_your-key"},
    json={
        "query": "latest Bittensor subnet launches",
        "sources": ["web", "x", "arxiv"],
        "auto_rag": True
    }
)
```
Storage
Censorship-resistant, IPFS-based storage with 900TB+ capacity. S3-compatible API for training datasets, model weights, and generated assets. Your data on decentralized infrastructure.
```python
# S3-compatible decentralized storage
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://api.opentau.ai/v1/storage",
    aws_access_key_id="otau_your-key",
    aws_secret_access_key="otau_your-secret"
)
s3.upload_file("dataset.jsonl", "my-bucket", "training/data.jsonl")
```
Verification
Deepfake detection with 95% accuracy. Content-authenticity verification for images flags AI-generated media so synthetic content is properly identified. An enterprise-grade trust layer for any pipeline.
```python
# verify image authenticity
import requests

resp = requests.post(
    "https://api.opentau.ai/v1/verify/image",
    headers={"Authorization": "Bearer otau_your-key"},
    files={"image": open("photo.jpg", "rb")}
)
# → {"is_ai_generated": false, "confidence": 0.97}
```
Confidential Compute
Intel TDX Trusted Execution Environment. Nobody can see your prompts — not miners, not us, not anyone. Add one header to any inference request for enterprise-grade privacy on decentralized infrastructure.
```python
# confidential inference — one header
response = client.chat.completions.create(
    model="meta-llama/llama-3.3-70b-instruct",
    messages=[{"role": "user", "content": "Analyze this proprietary data..."}],
    extra_headers={"X-Confidential": "true"}
)
# routed through Intel TDX TEE on SN4
```
How OpenTau compares
| Feature | OpenRouter | Together AI | OpenTau |
|---|---|---|---|
| Inference | ✓ | ✓ | ✓ |
| Fine-tuning | -- | ✓ | ✓ |
| Decentralized search | -- | -- | ✓ |
| Decentralized storage | -- | -- | ✓ |
| Deepfake verification | -- | -- | ✓ |
| Confidential compute (TEE) | -- | -- | ✓ |
| Cross-subnet pipelines | -- | -- | ✓ |
| Censorship-resistant | -- | -- | ✓ |
| No single point of failure | -- | -- | ✓ |
| Free tier | ✓ | -- | ✓ |