Bitrecs V2 is built around a central platform API that coordinates between miners submitting artifacts, screener nodes doing rapid pre-filtering, and validator nodes running full ecommerce evaluations inside isolated Docker containers. All persistent state lives in a PostgreSQL database backed by Cloudflare R2 for artifact storage. Validators never hold canonical state; they rely on the platform API for assignments, score reporting, and weight-setting decisions. This separation means validators can be added or removed without disrupting active evaluation queues.
Documentation Index
Fetch the complete documentation index at: https://docs.bitrecs.ai/llms.txt
Use this file to discover all available pages before exploring further.
Component overview
Platform API
A FastAPI application (api/main.py) deployed at https://v2.api.bitrecs.ai. Accepts miner submissions, manages the evaluation queue, serves evaluation assignments to validators and screeners, and exposes scoring data. Requires an API key for all protected endpoints.
Bittensor chain
Miners commit their Gist IDs onchain before submitting to the platform. Validators read miner UIDs and coldkeys from the chain and set WTA weights onchain each epoch via subtensor.set_weights().
Validator nodes
Long-running Python processes (validator/bitrecs_validator.py) deployed via Docker Compose. Validators register with the platform, poll for evaluation assignments, spawn bitrecs-evals containers to run inference and scoring, and report results back. Each validator also runs a score calculation loop that sets onchain weights near epoch boundaries.
Screener nodes
The same validator binary running in MODE=screener. Screeners are lighter-weight nodes that evaluate artifacts through the first two screening stages before artifacts reach full validator evaluation. Screeners authenticate with a shared password rather than a hotkey signature.
bitrecs-evals
A separate Docker image (ghcr.io/bitrecs/bitrecs-evals:main) that runs inside each validator during evaluation. Exposes an HTTP /evaluate endpoint that accepts an artifact YAML and problem parameters, runs the LLM prompt against the evaluation suite, and returns a score, success flag, sample count, and inference cost report.
PostgreSQL + Cloudflare R2
The platform database stores agents, evaluation runs, scores, hotkey-to-Gist mappings, and weight history. Cloudflare R2 holds artifact backups and is synced to validators periodically via the R2_SYNC_INTERVAL_SECONDS loop (default: 900 seconds).
Platform API
The API is built with FastAPI and deployed at https://v2.api.bitrecs.ai. All protected endpoints require an API key sent in the X-API-Key header. The API exposes the following router groups:
| Prefix | Purpose |
|---|---|
| / and /check, /submit | Miner submission endpoints (CLI-facing) |
| /validator | Validator and screener registration, heartbeat, evaluation assignment, result reporting |
| /scoring | Score retrieval, weight-set recording, latest set info |
| /agent | Agent (artifact) CRUD |
| /evaluation-run | Evaluation run status and logs |
| /evaluation | Individual evaluation results |
| /evaluation-sets | Evaluation set management |
| /retrieval | Miner block and score data for weight calculation |
| /inference | Cost estimation and inference cost reporting |
| /statistics, /dashboard | Aggregate metrics |
| /backup | R2 backup operations |
| /debug | Internal diagnostics |
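As a minimal sketch, a protected endpoint can be called with the API key in the X-API-Key header. The helper names and the use of the standard library here are illustrative; only the base URL and header name come from the documentation above.

```python
import json
import urllib.request

BASE_URL = "https://v2.api.bitrecs.ai"


def auth_headers(api_key: str) -> dict:
    """All protected endpoints expect the key in the X-API-Key header."""
    return {"X-API-Key": api_key}


def get_json(path: str, api_key: str) -> dict:
    """GET a protected endpoint and decode the JSON response."""
    req = urllib.request.Request(BASE_URL + path, headers=auth_headers(api_key))
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```

Any of the router prefixes above (for example /statistics) could be passed as `path`, assuming the caller holds a valid key.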
Key startup tasks
When the API starts, it:
- Initializes the PostgreSQL connection pool
- Validates the Cloudflare R2 bucket connection
- Starts a validator heartbeat timeout loop (disconnects validators that stop sending heartbeats)
- Starts an R2 download-and-sync background task
- Loads API keys from the database
- Pre-caches inference cost data for all supported providers
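The heartbeat timeout loop mentioned above can be sketched as a pure check over last-seen timestamps. The function name, the per-hotkey bookkeeping, and the 60-second default are assumptions; the documentation only states that validators which stop sending heartbeats are disconnected.

```python
import time
from typing import Optional


def timed_out_validators(last_seen: dict[str, float],
                         timeout_seconds: float = 60.0,
                         now: Optional[float] = None) -> list[str]:
    """Return hotkeys whose last heartbeat is older than the timeout.

    last_seen maps a validator hotkey to the Unix time of its last heartbeat.
    """
    now = time.time() if now is None else now
    return [hk for hk, ts in last_seen.items() if now - ts > timeout_seconds]
```

A loop like this would run periodically on the platform side and disconnect the returned validators.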
Validator architecture
Validators run as Docker containers alongside a bitrecs-evals sidecar container. A Watchtower container handles automatic image updates.
| Loop | Interval | Purpose |
|---|---|---|
| Heartbeat | SEND_HEARTBEAT_INTERVAL_SECONDS (default: 20s) | Keeps the session alive with the platform |
| Score calculation | SET_WEIGHTS_INTERVAL_SECONDS (default: 300s) | Calculates scores and sets onchain weights near epoch boundaries |
| R2 sync | R2_SYNC_INTERVAL_SECONDS (default: 900s) | Syncs artifact data from Cloudflare R2 |
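The score calculation loop sets WTA (winner-take-all) weights, which can be sketched as putting all weight on the top-scoring miner. The tie-breaking behavior and exact normalization used by the real validator are assumptions here.

```python
def wta_weights(scores: dict[int, float]) -> dict[int, float]:
    """Map miner UID -> score into UID -> weight, winner-take-all.

    The highest-scoring UID receives weight 1.0; everyone else gets 0.0.
    On a tie, max() keeps the first UID encountered (an assumption).
    """
    if not scores:
        return {}
    winner = max(scores, key=scores.get)
    return {uid: (1.0 if uid == winner else 0.0) for uid in scores}
```

The resulting vector would then be passed to subtensor.set_weights() near an epoch boundary.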
The validator polls /validator/request-evaluation and runs evaluations as they are assigned.
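The loop intervals in the table above are read from environment variables; a sketch of that lookup, assuming integer seconds and the documented defaults:

```python
import os


def interval(name: str, default: int) -> int:
    """Read a loop interval (in seconds) from the environment, with a default."""
    return int(os.environ.get(name, default))


HEARTBEAT_SECONDS = interval("SEND_HEARTBEAT_INTERVAL_SECONDS", 20)
SET_WEIGHTS_SECONDS = interval("SET_WEIGHTS_INTERVAL_SECONDS", 300)
R2_SYNC_SECONDS = interval("R2_SYNC_INTERVAL_SECONDS", 900)
```

The variable names and defaults come from the table; the helper itself is illustrative.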
Evaluation run states
Each evaluation run transitions through a sequence of states that the validator reports to the platform API.
Validator vs. screener roles
| Attribute | Validator | Screener |
|---|---|---|
| Authentication | Hotkey signature (SS58) | Shared password |
| Sets onchain weights | Yes | No |
| Runs score calculation loop | Yes | No |
| Runs R2 sync loop | Yes | No |
| Runs evaluations | Yes | Yes |
| Evaluation stage | Full validator queue | Screener 1 and 2 |
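The role split in the table above can be expressed as a small capability record keyed off the MODE setting. The dataclass is illustrative, not the validator's actual configuration object; only the behaviors themselves come from the table.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Role:
    """Capabilities of a node, per the validator vs. screener table."""
    sets_weights: bool
    runs_score_loop: bool
    runs_r2_sync: bool
    runs_evaluations: bool = True  # both roles run evaluations


def role_for(mode: str) -> Role:
    """Derive capabilities from MODE ('screener' disables the weight-related loops)."""
    screener = mode == "screener"
    return Role(sets_weights=not screener,
                runs_score_loop=not screener,
                runs_r2_sync=not screener)
```

A node started with MODE=screener would then skip the score calculation and R2 sync loops entirely.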
bitrecs-evals container
During each evaluation run, the validator:
- Pulls the bitrecs-evals image (ghcr.io/bitrecs/bitrecs-evals:main) if not already present
- Starts the container with environment variables including BITRECS_RUN_ID, OPENROUTER_API_KEY, CHUTES_API_KEY, and model cost parameters
- Confirms the container is healthy via GET /health
- Posts the artifact YAML and problem name to POST /evaluate with a 600-second timeout
- Retrieves the run log from GET /run_log/{run_id}
- Cleans up the container after the run completes
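The POST /evaluate step can be sketched as below. The JSON field names ("artifact_yaml", "problem") are assumptions; the endpoint path and the 600-second timeout are documented above.

```python
import json
import urllib.request

EVALUATE_TIMEOUT = 600  # seconds, per the documented /evaluate timeout


def evaluate_payload(artifact_yaml: str, problem: str) -> dict:
    """Build the request body; field names are illustrative assumptions."""
    return {"artifact_yaml": artifact_yaml, "problem": problem}


def evaluate(base_url: str, artifact_yaml: str, problem: str) -> dict:
    """POST the artifact and problem name to /evaluate and decode the result."""
    body = json.dumps(evaluate_payload(artifact_yaml, problem)).encode()
    req = urllib.request.Request(
        f"{base_url}/evaluate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=EVALUATE_TIMEOUT) as resp:
        # Response carries score, success flag, sample count, and cost report.
        return json.loads(resp.read())
```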
The validator and bitrecs-evals containers communicate over a shared Docker network (bitrecs-network). When running outside Docker, the bitrecs-evals hostname is localhost; inside Docker, it is bitrecs-evals-main.
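The hostname rule above can be captured in a small helper; the hostnames come from the text, while the port is an illustrative assumption.

```python
def evals_base_url(inside_docker: bool, port: int = 8000) -> str:
    """Resolve the bitrecs-evals base URL.

    Inside Docker the service name on bitrecs-network is used;
    outside Docker it listens on localhost. The port is an assumption.
    """
    host = "bitrecs-evals-main" if inside_docker else "localhost"
    return f"http://{host}:{port}"
```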
Data flow diagram
Infrastructure requirements
Validators
- Ubuntu 24+ LTS recommended
- Docker installed (for bitrecs-evals container spawning)
- No public IP or open inbound ports required; all communication is outbound to the platform API
- OPENROUTER_API_KEY and/or CHUTES_API_KEY required for inference
- Bittensor wallet with a registered hotkey on the target netuid