Getting started as a provider takes a single terminal command. The installer handles everything: the provider binary, a bundled Python 3.12 runtime, the vllm-mlx inference engine, and the Secure Enclave tooling needed for attestation. You do not need to install Python, set up a virtual environment, or configure anything manually before running the installer.
## Install the provider agent
Open Terminal and run the install command. The installer will:

- Download and verify the provider binary (`darkbloom`)
- Install a bundled Python 3.12 runtime to `~/.darkbloom/python/`
- Install vllm-mlx and its dependencies into the bundled Python environment
- Install the Secure Enclave helper (`eigeninference-enclave`)
- Walk you through model selection and account linking
- Start the provider as a background launchd service
The installer verifies SHA-256 hashes and code signatures for all downloaded components. If any check fails, installation is aborted.
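If you want to spot-check a downloaded artifact yourself, the same kind of check can be done with macOS's built-in `shasum`. This is a minimal sketch using a placeholder file; the real artifact and checksum file names will differ:

```shell
# Write a placeholder file standing in for a downloaded artifact
printf 'example payload' > artifact.bin

# Record its SHA-256 digest in checksum-file format ("<hash>  <name>")
shasum -a 256 artifact.bin > artifact.bin.sha256

# Re-verify the file against the recorded digest
shasum -a 256 -c artifact.bin.sha256
```

`shasum -c` exits non-zero and reports FAILED if the file does not match its recorded digest, which mirrors the installer aborting when a check fails.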
## What gets installed
### darkbloom binary

The provider agent itself: a Rust binary that manages the inference backend, maintains the WebSocket connection to the coordinator, handles attestation, and exposes the CLI. Installed to a location on your `PATH` so you can run `darkbloom` from any terminal.
### Python 3.12 runtime

A self-contained Python 3.12 installation in `~/.darkbloom/python/`. It is completely separate from any system Python or Homebrew Python you already have and will not interfere with them.
### vllm-mlx and mlx-lm

The inference engine that runs models on Apple Silicon’s GPU via Metal, installed into the bundled Python environment. On every startup, the provider agent verifies the hash of the installed packages against the coordinator’s runtime manifest and updates them automatically if they drift.
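The drift check can be pictured as hashing each installed file and comparing the digests against a manifest. This is a minimal sketch; the manifest format and file layout here are assumptions for illustration, not the provider's actual scheme:

```python
import hashlib
from pathlib import Path


def file_sha256(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_against_manifest(root: Path, manifest: dict[str, str]) -> list[str]:
    """Return the relative paths whose on-disk hash drifted from the manifest.

    `manifest` maps a relative path to its expected SHA-256 hex digest.
    Missing files count as drifted.
    """
    drifted = []
    for rel_path, expected in manifest.items():
        target = root / rel_path
        if not target.exists() or file_sha256(target) != expected:
            drifted.append(rel_path)
    return drifted
```

A startup routine along these lines would reinstall any entry the check returns, which is consistent with the "updates them automatically if they drift" behavior described above.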
### Secure Enclave tooling

The `eigeninference-enclave` binary, which the provider uses to generate hardware-bound attestation blobs from your Mac’s Secure Enclave. These proofs bind your machine’s identity to the inference keys used in the current session.

## Link your account
Your machine needs to be linked to your Darkbloom account for earnings to be credited to you. The setup wizard that runs after installation will prompt you to do this, but you can also run the linking step manually at any time.

## Pick a model

During setup, the installer shows you which models your hardware can run based on detected memory. You can also browse and download models manually:

| Model | Size | Min RAM | Notes |
|---|---|---|---|
| Gemma 4 26B 8-bit | 28 GB | 36 GB | Fast multimodal MoE |
| Qwen3.5 27B Claude Opus 8-bit | 27 GB | 36 GB | Frontier-quality reasoning |
| Trinity Mini 8-bit | 26 GB | 48 GB | Fast agentic inference |
| Qwen3.5 122B MoE 8-bit | 122 GB | 128 GB | Best quality reasoning |
| MiniMax M2.5 8-bit | 243 GB | 256 GB | SOTA coding, ~100 tok/s |
Models are downloaded to the Hugging Face cache (`~/.cache/huggingface/hub/`) and are only fetched once. If you already have a model downloaded, the provider uses the existing copy.
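The installer's memory-based filtering over the table above can be sketched as follows. The model list mirrors the table; reading `hw.memsize` via `sysctl` is an assumption about how detection might work on macOS, not necessarily what the installer does:

```python
import subprocess

# (name, min_ram_gb) pairs taken from the model table above
MODELS = [
    ("Gemma 4 26B 8-bit", 36),
    ("Qwen3.5 27B Claude Opus 8-bit", 36),
    ("Trinity Mini 8-bit", 48),
    ("Qwen3.5 122B MoE 8-bit", 128),
    ("MiniMax M2.5 8-bit", 256),
]


def detected_ram_gb() -> int:
    """Read physical memory from sysctl on macOS (hypothetical detection path)."""
    out = subprocess.check_output(["sysctl", "-n", "hw.memsize"])
    return int(out) // (1024 ** 3)


def runnable_models(ram_gb: int) -> list[str]:
    """Return the models whose minimum RAM fits within the detected memory."""
    return [name for name, min_ram in MODELS if ram_gb >= min_ram]
```

On a 48 GB machine, for example, this would offer the first three rows of the table and hide the 128 GB and 256 GB models.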
## Start serving
Once a model is downloaded and your account is linked, start the provider. `darkbloom start` runs the provider as a background daemon that persists across reboots; `darkbloom serve` keeps the process in your current terminal session, which is useful when you want to watch the logs directly.
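The two modes side by side:

```shell
# Background daemon (launchd service, survives reboots):
darkbloom start

# Foreground process in the current terminal (Ctrl-C to stop):
darkbloom serve
```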
Once running, check that everything is connected.
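The exact subcommand for this check was not captured in this page. Assuming a hypothetical `darkbloom status`, the check might look like:

```shell
# Hypothetical subcommand; consult `darkbloom --help` for the real one
darkbloom status
```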
## Configuration file
The provider writes its configuration to `~/.config/eigeninference/provider.toml`. You can edit this file directly to adjust settings without using the CLI. Changes take effect the next time you run `darkbloom start` or `darkbloom serve`.
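A hand edit to that file is an ordinary TOML change. Every key below is a made-up placeholder to show the shape of such an edit; check the file the installer wrote for the real schema:

```toml
# ~/.config/eigeninference/provider.toml
# Illustrative placeholders only, not the provider's actual keys.
[provider]
model = "gemma-4-26b-8bit"   # hypothetical: which model to serve

[limits]
max_memory_gb = 36           # hypothetical: memory ceiling for inference
```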
See the CLI reference for all available commands, or the scheduling guide to configure when your Mac serves requests.