If you have an Apple Silicon Mac sitting idle for part of the day, you can put its unified memory and GPU to work serving AI inference requests. Darkbloom’s provider agent connects outbound over WebSocket — no port forwarding or firewall changes needed — and your prompts are cryptographically isolated from your own process. This guide covers everything from installation to your first confirmed earnings.
## Check your hardware
The provider agent requires Apple Silicon and enough unified memory to hold the model you want to serve.

| Requirement | Minimum |
|---|---|
| Chip | Apple Silicon M1 or later |
| macOS | 14 Sonoma or later |
| Unified memory | 16 GB (36 GB recommended for most models) |

Run this command to confirm your chip:
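One way to run the check is with standard macOS tools (a sketch; `system_profiler` is macOS-only, and its output wording varies by model):

```shell
# Print the machine architecture: Apple Silicon reports "arm64"
uname -m

# Full hardware summary (macOS only): shows chip name and unified memory
system_profiler SPHardwareDataType 2>/dev/null | grep -E "Chip|Memory" || true
```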
You should see something like `Chip: Apple M2 Pro` and `Memory: 32 GB`. If your chip says “Intel”, the provider agent won’t run on your machine.

## Install the provider agent
Run the one-line installer. It downloads the `darkbloom` binary, a bundled Python 3.12 runtime, `vllm-mlx`, and the Secure Enclave tooling, and places the `darkbloom` binary on your PATH. No other prerequisites are required. Once it finishes, confirm the installation worked:

## Link your Darkbloom account
Your Mac needs to be associated with your Darkbloom account so earnings can be credited to you. Run the login command, which starts a device authorization flow. You’ll see a short code and a URL. Open the URL in your browser, sign in to your Darkbloom account, and approve the device. The CLI polls for confirmation and exits once the link is established.
If you don’t have a Darkbloom account yet, go to darkbloom.dev and sign up first.
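The flow looks roughly like this — the `login` subcommand name is an assumption, since this guide only says “the login command”:

```sh
# Subcommand name is an assumption; starts the device authorization flow.
# Prints a short code and a URL, then polls until you approve the device.
darkbloom login
```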
## Choose a model and start serving
List the models available to download based on your hardware. The output shows each model’s RAM requirement alongside your available unified memory. Pick a model your Mac can fit, then start serving. The first run downloads the model weights — expect this to take several minutes depending on your connection.
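A rough sketch of the two steps — `darkbloom serve` is named later in this guide, but the model-listing subcommand and the model identifier are assumptions:

```sh
# List models your hardware can hold (subcommand name is an assumption)
darkbloom models

# Start serving a chosen model; the first run downloads the weights
# (the model identifier is a placeholder)
darkbloom serve <model-name>
```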
`darkbloom serve` runs in the foreground and prints connection status and request throughput. To run as a background daemon instead, use `darkbloom start`.

## Check status and earnings
Once the provider is running, use these commands to monitor it.

Connection and hardware status: shows your chip, available memory, current model, and whether you’re connected to the coordinator.

Earnings: shows your total earnings and a breakdown by session.
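The monitoring subcommands might look like this — both names are assumptions, as this guide describes the commands but the snippets were not preserved:

```sh
# Connection and hardware status (subcommand name assumed)
darkbloom status

# Total earnings and per-session breakdown (subcommand name assumed)
darkbloom earnings
```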
`darkbloom doctor` runs a series of checks — SIP status, Secure Boot, memory, network connectivity — and reports any issues that would prevent the provider from attesting correctly.

## Managing the daemon
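Typical daemon commands, sketched below — only `darkbloom start` is confirmed elsewhere in this guide; `stop` is an assumption:

```sh
# Start the provider as a background daemon
darkbloom start

# Stop the daemon (subcommand name is an assumption)
darkbloom stop
```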
### Scheduling availability windows
By default the provider runs whenever `darkbloom start` is active. You can restrict it to specific hours so it only serves during times you’re not using your Mac. Edit ~/.config/eigeninference/provider.toml:
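The keys below are purely illustrative — the actual `provider.toml` schema is not shown in this guide, so treat every name as an assumption:

```toml
# Hypothetical availability-window config; all key names are assumptions.
[availability]
# Serve only between these local times.
start = "22:00"
end   = "07:00"
days  = ["mon", "tue", "wed", "thu", "fri"]
```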
## macOS menu bar app
A native SwiftUI menu bar app is also available if you prefer a GUI. It provides one-click start/stop, a live throughput display, idle detection (pauses automatically when you’re actively using your Mac), and an earnings dashboard. You can download it from darkbloom.dev.

Darkbloom is an experimental research prototype. Provider software may have breaking changes between releases. Run `darkbloom update` regularly to stay current.