Darkbloom is a decentralized inference network that connects AI consumers with Mac owners running idle Apple Silicon hardware. As a consumer, you get access to frontier-quality models through an OpenAI-compatible API, with end-to-end encryption ensuring your prompts stay private. As a provider, you earn by running inference on your Mac during idle hours.

Documentation Index
Fetch the complete documentation index at: https://docs.darkbloom.dev/llms.txt
Use this file to discover all available pages before exploring further.
Call the API
Make your first inference request in under 5 minutes using any OpenAI SDK.
Become a Provider
Install the provider agent on your Apple Silicon Mac and start earning.
API Reference
Full reference for all inference, account, and network endpoints.
Security & Trust
Learn how Darkbloom ensures your prompts are never visible to providers.
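Because the API is OpenAI-compatible, a first request is just a standard chat-completions POST. The sketch below builds such a request with the Python standard library; the base URL, model identifier, and key format are illustrative assumptions, not endpoints confirmed by this page (see the API Reference for real values).

```python
import json
import urllib.request

# Assumed values -- replace with the real endpoint and key from the API Reference.
BASE_URL = "https://api.darkbloom.dev/v1"  # hypothetical base URL
API_KEY = "db_sk_example"                  # placeholder key

# Standard OpenAI-style chat completions payload.
payload = {
    "model": "gemma-4-26b-8bit",  # illustrative model identifier
    "messages": [{"role": "user", "content": "Hello, Darkbloom!"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# resp = urllib.request.urlopen(req)  # uncomment once you have a real API key
```

Any OpenAI SDK works the same way: point its `base_url` at the Darkbloom endpoint and pass your Darkbloom key instead of an OpenAI one.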
How it works
Darkbloom runs on a three-layer architecture: your requests go to a coordinator running in a Confidential VM, which routes them to attested provider machines running hardened inference processes on Apple Silicon.

Available models
| Model | Size | Min RAM | Best for |
|---|---|---|---|
| Gemma 4 26B 8-bit | 26B MoE, 4B active | 36 GB | Fast multimodal requests |
| Qwen3.5 27B 8-bit | 27B dense | 36 GB | Frontier-quality reasoning |
| Trinity Mini 8-bit | 27B Adaptive MoE | 48 GB | Fast agentic inference |
| Qwen3.5 122B MoE 8-bit | 122B MoE, 10B active | 128 GB | Best quality reasoning |
| MiniMax M2.5 8-bit | 239B MoE, 11B active | 256 GB | SOTA coding, ~100 tok/s |
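Each model has a minimum RAM requirement, so which models your Mac can serve (or which you should request) follows directly from the table above. A minimal sketch of that lookup, assuming illustrative model identifiers that are not confirmed API names:

```python
# Min RAM (GB) per model, taken from the table above.
# Model identifiers here are assumptions for illustration only.
MODELS = {
    "gemma-4-26b-8bit": 36,
    "qwen3.5-27b-8bit": 36,
    "trinity-mini-8bit": 48,
    "qwen3.5-122b-moe-8bit": 128,
    "minimax-m2.5-8bit": 256,
}

def models_that_fit(ram_gb: int) -> list[str]:
    """Return model IDs whose minimum RAM requirement fits within ram_gb."""
    return sorted(m for m, min_ram in MODELS.items() if min_ram <= ram_gb)

# A 48 GB Mac covers the three smallest models; a 16 GB Mac covers none.
print(models_that_fit(48))
```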
Darkbloom is an experimental research prototype under active development. Expect rough edges and breaking changes. Do not use in production.