Getting started as a provider takes a single terminal command. The installer handles everything — the provider binary, a bundled Python 3.12 runtime, the vllm-mlx inference engine, and the Secure Enclave tooling needed for attestation. You do not need to install Python, set up a virtual environment, or configure anything manually before running the installer.

Install the provider agent

Open Terminal and run:
curl -fsSL https://api.darkbloom.dev/install.sh | bash
The installer will:
  1. Download and verify the provider binary (darkbloom)
  2. Install a bundled Python 3.12 runtime to ~/.darkbloom/python/
  3. Install vllm-mlx and its dependencies into the bundled Python environment
  4. Install the Secure Enclave helper (eigeninference-enclave)
  5. Walk you through model selection and account linking
  6. Start the provider as a background launchd service
The installer verifies SHA-256 hashes and code signatures for all downloaded components. If any check fails, installation is aborted.
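If you prefer to review the script before executing it, download it first from the same URL (curl and less are standard tools; this is equivalent to the one-liner above):
curl -fsSL https://api.darkbloom.dev/install.sh -o install.sh
less install.sh    # inspect what the script will do
bash install.sh    # run it once you are satisfied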

What gets installed

The installer places four components on your machine:
  - darkbloom, the provider agent itself: a Rust binary that manages the inference backend, maintains the WebSocket connection to the coordinator, handles attestation, and exposes the CLI. It is installed to a location on your PATH so you can run darkbloom from any terminal.
  - A self-contained Python 3.12 installation in ~/.darkbloom/python/. It is completely separate from any system Python or Homebrew Python you already have and will not interfere with them.
  - vllm-mlx, the inference engine that runs models on Apple Silicon’s GPU via Metal, installed into the bundled Python environment. On every startup the provider agent verifies the hashes of the installed packages against the coordinator’s runtime manifest and updates them automatically if they drift.
  - eigeninference-enclave, the Secure Enclave helper the provider uses to generate hardware-bound attestation blobs from your Mac’s Secure Enclave. These proofs bind your machine’s identity to the inference keys used in the current session.
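To confirm the bundled runtime really is separate from your system Python, you can compare the two interpreters. A quick sketch, assuming the bundled installation uses a standard bin/ layout under ~/.darkbloom/python/ (the exact layout is not documented above):
~/.darkbloom/python/bin/python3 --version   # assumed path to the bundled interpreter
which python3                               # your system interpreter, unaffected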

Link your account

Your machine needs to be linked to your Darkbloom account for earnings to be credited. The setup wizard that runs after installation will prompt you to do this, but you can also run it manually at any time:
darkbloom login
This opens a browser window and displays a short code in your terminal. Sign in to your Darkbloom account in the browser, enter the code when prompted, and your machine is linked to your account. The provider stores an auth token locally and uses it on subsequent connections.
If you skip account linking, your provider will still serve requests but earnings will not be credited to your account. Run darkbloom login to link at any time.
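A minimal end-to-end check, assuming darkbloom status reflects the result of a successful login (the exact status fields are not documented here):
darkbloom login     # device-code flow described above
darkbloom status    # assumption: output shows whether this machine is linked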

Pick a model

During setup, the installer shows you which models your hardware can run based on detected memory. You can also browse and download models manually:
darkbloom models list
The models available on the network, along with their memory requirements, are:
Model                     Size      Min RAM   Notes
Gemma 4 26B 8-bit         28 GB     36 GB     Fast multimodal MoE
Qwen3.5 27B 8-bit         27 GB     36 GB     Frontier-quality reasoning
Trinity Mini 8-bit        26 GB     48 GB     Fast agentic inference
Qwen3.5 122B MoE 8-bit    122 GB    128 GB    Best quality reasoning
MiniMax M2.5 8-bit        243 GB    256 GB    SOTA coding, ~100 tok/s
Model weights are downloaded to your HuggingFace cache (~/.cache/huggingface/hub/) and are only fetched once. If you already have a model downloaded, the provider uses the existing copy.
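To see whether a model is already cached before kicking off a large download, list the cache directory (the path comes from above; folder names follow HuggingFace's repository naming):
ls ~/.cache/huggingface/hub/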

Start serving

Once a model is downloaded and your account is linked, start the provider:
darkbloom start
darkbloom start runs the provider as a background daemon that persists across reboots. darkbloom serve instead keeps the process in your current terminal session, which is useful when you want to watch the logs directly. Once running, check that everything is connected:
darkbloom status
You should see your hardware detected, your model loaded, and the coordinator connection shown as active.
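A typical first session, sketched with the commands documented above; the launchctl line is an assumption, since the exact launchd service label is not documented here:
darkbloom serve                      # foreground, logs in this terminal
darkbloom start                      # or: background daemon, persists across reboots
darkbloom status                     # hardware, model, coordinator connection
launchctl list | grep -i darkbloom   # assumption: service label contains "darkbloom"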

Configuration file

The provider writes its configuration to ~/.config/eigeninference/provider.toml. You can edit this file directly to adjust settings without using the CLI. Changes take effect the next time you run darkbloom start or darkbloom serve. See the CLI reference for all available commands, or the scheduling guide to configure when your Mac serves requests.
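For example, to change a setting by hand and apply it (a sketch using the path above; any editor works, and the specific keys inside provider.toml are not documented here):
nano ~/.config/eigeninference/provider.toml   # edit settings directly
darkbloom start                               # changes take effect on the next start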