Off-Chain Services
Seven systemd services run alongside the on-chain program on a Hetzner CPX22 server (178.104.120.151) — six always-on Node services plus an obv-engine Python sidecar. All are deployed with automatic restart, health endpoints, and journald logging. Every operation they perform is permissionless; the services exist to guarantee that work happens on schedule, not to act as a centralized operator.
Service topology
| Service | Port | Interval | File | Status |
|---|---|---|---|---|
| Oracle Crank + Candle API + WS | 3456 | 5 min | engine/crank.ts | running |
| Oracle Pusher | 3458 | 5 min | oracle-pusher/index.mjs | running |
| Trigger Keeper + Funding Crank | 3457 | 15 s / 8 h | keeper/index.mjs | running |
| Liquidation Bot | 3459 | 30 s | keeper/liquidator.mjs | running |
| Monitor + Telegram | 3460 | 60 s | monitoring/index.mjs | running |
| Trade-History Indexer | 3461 | 30 s | indexer/index.ts | running — live since 2026-05-11; decodes Anchor #[event] logs into a wallet-scoped trade store (exposed publicly at https://api.sportsperp.xyz/indexer/* via Caddy handle_path) |
| obv-engine (Python sidecar) | 8100 (loopback) | on-demand | external Python repo | running — XGBoost PV-GF / PV-GA scorer (Ball-AI-derived, OBV Redux whitepaper-aligned) |
Each service exposes a /health endpoint returning {status, uptime, lastCycle, errors} for the monitor service to poll. The obv-engine is consumed by the crank’s engine/live-processor.ts over HTTP (POST /api/live-obv/matches/{id}/{start,events,end}) when ENABLE_REALTIME_OBV=true and OBV_ENGINE_BASE_URL is set; the legacy heuristic impact-estimation path is retained as a fallback.
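As a concrete illustration, a minimal /health handler matching the {status, uptime, lastCycle, errors} shape could look like the sketch below. This is an assumption-laden sketch (Express, in-memory counters, placeholder port), not the services' actual implementation:

```ts
// Minimal sketch of a service /health endpoint, assuming Express.
// The {status, uptime, lastCycle, errors} shape follows the description
// above; the in-memory counters here are illustrative placeholders.
import express from "express";

const app = express();
const startedAt = Date.now();
let lastCycle: string | null = null; // ISO timestamp of the last completed loop
let errorCount = 0;                  // incremented by the service's main loop

app.get("/health", (_req, res) => {
  res.json({
    status: errorCount === 0 ? "ok" : "degraded",
    uptime: Math.floor((Date.now() - startedAt) / 1000), // seconds
    lastCycle,
    errors: errorCount,
  });
});

app.listen(3456); // each service listens on its own port (see the table above)
```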
1. Oracle Crank + Candle API + WS (port 3456)
The central service. Three responsibilities in one process:
Crank (core loop)
Every 5 minutes:
- Fetch season stats for all 20 teams and eligible players from the post-match REST feed.
- Fetch recent match results (for form + PPG calculation).
- Subscribe to the live event feed (GraphQL subscriptions) for any in-progress matches.
- Compute composite indices via index-calculator.ts — z-scored, scaled to 100–900.
- Persist each tick to SQLite (candles.db) for historical candle reconstruction.
- Broadcast the tick on WebSocket to connected frontends.
- Signal the oracle pusher that a new value is available.
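The scaling step can be illustrated concretely. The sketch below is a hypothetical version of the z-score-to-index mapping; the ±3σ clamp and linear mapping are assumptions for illustration, not the calibration used by index-calculator.ts:

```ts
// Hypothetical "z-scored, scaled to 100-900" mapping. The ±3σ window and
// linear scaling are assumptions; only the 100-900 band comes from the docs.
function zScore(value: number, mean: number, stdDev: number): number {
  return stdDev === 0 ? 0 : (value - mean) / stdDev;
}

function scaleToIndex(z: number): number {
  const clamped = Math.max(-3, Math.min(3, z)); // keep outliers inside the band
  return Math.round(500 + (clamped / 3) * 400); // map [-3, +3] onto [100, 900]
}

// Example: a stat one standard deviation above the league mean
console.log(scaleToIndex(zScore(12, 9, 3))); // -> 633
```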
Candle API (Express REST)
Exposes /candles/{market_key}?tf=1m|1H|4H|1D&limit=N returning OHLC bars. Frontend charts consume this via HTTPS proxy routes.
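A consumer of this endpoint might look like the following sketch. The base URL, market key, and OHLC field names are assumptions; only the path shape and the tf/limit query parameters come from the route above:

```ts
// Illustrative client for the candle API. The api.sportsperp.xyz base URL,
// the "example-market" key, and the bar field names are placeholders.
interface Candle {
  t: number; // bar open time (assumed epoch ms)
  o: number;
  h: number;
  l: number;
  c: number;
}

async function fetchCandles(marketKey: string, tf = "1H", limit = 200): Promise<Candle[]> {
  const url = `https://api.sportsperp.xyz/candles/${marketKey}?tf=${tf}&limit=${limit}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`candle API returned ${res.status}`);
  return (await res.json()) as Candle[];
}

fetchCandles("example-market", "4H", 50).then((bars) => console.log(bars.length));
```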
WebSocket server
Streams live ticks, live-match events (goals, key passes, defensive actions), and aggregate state changes. Consumed by the trading UI for real-time chart updates.
Also hot-reloads roster.json via fs.watch — any roster change takes effect on the next cycle without a restart.
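A minimal version of that hot-reload, assuming the file is simply re-read on change and invalid JSON is ignored, could look like:

```ts
// Sketch of the roster.json hot-reload using Node's fs.watch. Debouncing and
// the roster shape are omitted; the real crank picks the new roster up on its
// next 5-minute cycle rather than mid-cycle.
import { watch, readFileSync } from "node:fs";

let roster: unknown = JSON.parse(readFileSync("roster.json", "utf8"));

watch("roster.json", (eventType) => {
  if (eventType !== "change") return;
  try {
    roster = JSON.parse(readFileSync("roster.json", "utf8"));
    console.log("roster.json reloaded; takes effect on the next cycle");
  } catch (err) {
    console.error("ignoring invalid roster.json", err); // keep the old roster
  }
});
```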
2. Oracle Pusher (port 3458)
A dedicated bridge between the crank’s SQLite output and the on-chain program:
- Reads new index values from candles.db.
- Decides whether to push: triggered at > 0.5% change from the last pushed value (> 0.1% during live matches) or at > 2-hour staleness.
- Signs the transaction with the admin keypair (expected to migrate to multi-source 2-of-N in the mainnet path).
- Sends update_oracle with price, confidence (DEFAULT_CONFIDENCE_BPS = 300 normally, LIVE_CONFIDENCE_BPS = 600 during live matches; flat across team and player markets in v1), and the live-match flag.
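The push decision reduces to a single predicate. The thresholds below come from the list above; the function and field names are illustrative:

```ts
// Sketch of the pusher's decision: push on > 0.5% change (0.1% during live
// matches) or when the last push is more than 2 hours old.
interface PushState {
  lastPushedValue: number;
  lastPushedAt: number; // epoch ms
}

function shouldPush(current: number, live: boolean, state: PushState, now = Date.now()): boolean {
  const changePct = (Math.abs(current - state.lastPushedValue) / state.lastPushedValue) * 100;
  const threshold = live ? 0.1 : 0.5;
  const staleMs = now - state.lastPushedAt;
  return changePct > threshold || staleMs > 2 * 60 * 60 * 1000;
}

// Example: a 0.3% move pushes on a live market but not otherwise.
const state = { lastPushedValue: 500, lastPushedAt: Date.now() };
console.log(shouldPush(501.5, true, state));  // true  (0.3% > 0.1%)
console.log(shouldPush(501.5, false, state)); // false (0.3% < 0.5%)
```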
The two-process split between crank (pure calculation) and pusher (on-chain commits) is deliberate:
- Decouples failure domains. A data-feed outage pauses the crank; the pusher keeps serving cached data. An RPC outage pauses the pusher; the crank keeps producing data.
- Enables dry-run mode. The pusher supports --dry-run for testing transaction shape without committing.
- Makes mainnet migration incremental. The same crank will feed the multi-source oracle (2-of-N weighted median consensus, multi-oracle.mjs — built but not yet deployed).
3. Trigger Keeper + Funding Crank (port 3457)
Two related jobs bundled in one service:
Trigger keeper (15-second loop)
- Loads all TriggerOrder PDAs via getProgramAccounts + memcmp filter.
- Reads the latest MarketConfig.mark_price_ema for each relevant market.
- Evaluates the trigger condition (see Order Types).
- Submits execute_trigger_close or execute_trigger_open for matches.
- Earns the keeper reward on successful execution.
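A minimal version of the account scan, using @solana/web3.js, might look like the sketch below. The program id, discriminator bytes, and env variable names are placeholders; the real keeper decodes accounts with the Anchor IDL in keeper/index.mjs:

```ts
// Sketch of the TriggerOrder scan. All ids, offsets, and env variable names
// are assumed placeholders, not the deployed program's values.
import { Connection, GetProgramAccountsFilter, PublicKey } from "@solana/web3.js";

const connection = new Connection(process.env.HELIUS_RPC_URL ?? "https://api.devnet.solana.com");
const programId = new PublicKey(process.env.PROGRAM_ID!); // assumed env var

async function loadTriggerOrders() {
  // memcmp on the 8-byte Anchor account discriminator selects only
  // TriggerOrder PDAs; the base58 value is supplied via an assumed env var.
  const filters: GetProgramAccountsFilter[] = [
    { memcmp: { offset: 0, bytes: process.env.TRIGGER_ORDER_DISCRIMINATOR_B58! } },
  ];
  return connection.getProgramAccounts(programId, { filters });
}

loadTriggerOrders().then((orders) => console.log(`loaded ${orders.length} trigger orders`));
```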
Funding crank (8-hour loop)
Invokes apply_funding on every market. The on-chain program computes the plain mean of the premium samples accumulated since the last interval and updates the cumulative counters. Any trader interacting with a position after this call will have their pending funding settled. (Outlier filtering of premium samples is not implemented on-chain in v1; it is tracked as a potential future hardening.)
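For illustration, the interval math reduces to a plain mean added to a cumulative counter. The variable names below are illustrative, not the program's actual fields:

```ts
// Sketch of the funding interval described above: take the unfiltered mean of
// the premium samples accumulated since the last 8-hour interval and add it
// to the cumulative funding counter that traders later settle against.
function applyFundingInterval(premiumSamples: number[], cumulativeFunding: number): number {
  if (premiumSamples.length === 0) return cumulativeFunding; // nothing sampled this interval
  const meanPremium = premiumSamples.reduce((a, b) => a + b, 0) / premiumSamples.length;
  return cumulativeFunding + meanPremium;
}

console.log(applyFundingInterval([0.02, -0.01, 0.03], 1.25)); // 1.25 + 0.0133... ≈ 1.2633
```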
4. Liquidation Bot (port 3459)
The three-layer cascade keeper. Every 30 seconds:
- Scans all UserPosition accounts, computing each position's current margin ratio against the live mark_price_ema.
- Classifies each position into a layer:
  - ratio ≤ 20% and > 13.33% → Layer 1 candidate
  - ratio ≤ 13.33% and insurance healthy → Layer 2 candidate
  - ratio ≤ 13.33% and insurance at cap → Layer 3 candidate
- Executes the appropriate liquidation for each eligible position:
  - partial_liquidate — earns 5% reward.
  - backstop_liquidate — absorbs into insurance.
  - unwind_backstop — continues to unwind previously absorbed positions (10% per call), earns 3% reward.
  - auto_deleverage — picks a profitable opposing target (PnL × leverage ranking) and force-closes.
All four are permissionless. Competing keepers may run in parallel; the first transaction to land wins the reward, subsequent attempts fail cheaply with a clear error.
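The layer classification can be sketched as a small pure function. The thresholds come from the list above; the return labels and the insurance-at-cap flag are illustrative:

```ts
// Sketch of the three-layer classification. 20% and 13.33% come from the
// docs; the Layer names and boolean flag are placeholders.
type Layer = "none" | "layer1" | "layer2" | "layer3";

function classify(marginRatioPct: number, insuranceAtCap: boolean): Layer {
  if (marginRatioPct > 20) return "none";      // healthy, no action
  if (marginRatioPct > 13.33) return "layer1"; // partial_liquidate candidate
  return insuranceAtCap ? "layer3" : "layer2"; // layer3: unwind_backstop / auto_deleverage
}

console.log(classify(18, false)); // "layer1"
console.log(classify(10, false)); // "layer2" (backstop_liquidate)
console.log(classify(10, true));  // "layer3"
```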
5. Monitor + Telegram (port 3460)
A watchdog that polls everything else:
- Health-checks the other services' /health endpoints (crank, pusher, keeper, liquidator, indexer).
- Checks on-chain state — oracle staleness across all 68 markets, insurance fund balance, current_backstop_exposure, program account existence.
- Sends Telegram alerts (via the NanoClaw bot integration) for: service down, oracle staleness > 2h, insurance balance < target × 0.5, backstop exposure > cap × 0.75.
Credentials for Telegram live in the service’s EnvironmentFile on Hetzner, never in the repo.
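The alert rules reduce to a handful of threshold checks. The sketch below uses the thresholds listed above; the snapshot shape is an assumed placeholder, and forwarding to Telegram is left out:

```ts
// Sketch of the monitor's alert conditions. Thresholds come from the docs;
// the ChainSnapshot interface is an illustrative placeholder.
interface ChainSnapshot {
  servicesDown: string[];
  oracleStalenessHours: number;
  insuranceBalance: number;
  insuranceTarget: number;
  backstopExposure: number;
  backstopCap: number;
}

function alertsFor(s: ChainSnapshot): string[] {
  const alerts: string[] = [];
  for (const svc of s.servicesDown) alerts.push(`service down: ${svc}`);
  if (s.oracleStalenessHours > 2) alerts.push("oracle stale > 2h");
  if (s.insuranceBalance < s.insuranceTarget * 0.5) alerts.push("insurance balance < 50% of target");
  if (s.backstopExposure > s.backstopCap * 0.75) alerts.push("backstop exposure > 75% of cap");
  return alerts; // each entry would be forwarded to Telegram by the real service
}
```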
Deployment
All services deploy via systemd unit files in services/:
```bash
# on Hetzner
cp services/*.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now \
  sportsperp-crank \
  sportsperp-oracle \
  sportsperp-keeper \
  sportsperp-liquidator \
  sportsperp-monitor \
  sportsperp-indexer
```

Each unit file specifies:
- User=sportsperp (non-root service account).
- EnvironmentFile=/etc/default/sportsperp-<service> (data-feed credentials, RPC keys, Telegram tokens).
- Restart=on-failure with exponential backoff.
- LimitNOFILE=8192 (WS clients).
Logs flow to journald:
```bash
journalctl -u sportsperp-crank -f
```

See services/README.md and DEPLOY.md for the full deployment runbook.
RPC key split (Helius)
Solana RPC calls are distributed across four Helius keys, each rate-limited to 10 RPS (40 RPS aggregate):
| Key | Consumed by |
|---|---|
| HELIUS_RPC_KEY1 | Oracle crank |
| HELIUS_RPC_KEY2 | Keeper + Liquidator |
| HELIUS_RPC_KEY3 | Monitor + Frontend + SDK scripts |
| HELIUS_RPC_KEY4 | Oracle pusher (dedicated) |
Splitting traffic by workload prevents one runaway service from exhausting the overall rate budget. Key 3 is safe to expose in the frontend’s NEXT_PUBLIC_SOLANA_RPC_URL; keys 1, 2, and 4 stay server-side.
Degraded operation
Each service is designed to fail visibly rather than silently:
- Oracle crank: if the REST data feed fails, last-known values are preserved; the oracle_is_live flag remains false; a Telegram alert fires.
- Oracle pusher: if RPC fails, it retries with kind-specific backoff; /health exposes lastCycleAt, lastSuccessfulPushAt (alias lastPushedAt), bootstrapStatus, backlogSize, lastFailureKind, circuitBreakerOpen, and quotaExhausted. The monitor alerts on quota exhaustion immediately, on degraded state across 2 polls, on backlog growth across 3 polls, on a stale lastSuccessfulPushAt while the backlog is non-empty, and on the pusher's lastCycleAt lagging the crank's lastCycleAt.
- Trigger keeper: if RPC or the program fails, the cycle is skipped; the next cycle retries.
- Liquidator: same; plus, liquidations are permissionless so third parties can step in.
- Monitor: if the monitor itself fails, there is no meta-observer — so the monitor is itself watched by a simple external uptime check.
No degraded state is ever silent. Traders can check oracle_timestamp on any market PDA to verify the feed isn’t stale, regardless of our services’ status.
Live↔REST ID bridge
The data partner’s REST and live-event APIs use independent ID spaces — e.g. a team might be REST id 1 but live id 21; a player might be REST 39461 but live 106232. Every market in roster.json is keyed by REST (canonical) id, so live events must be translated before they can be attributed. Note that live-side IDs are not guaranteed stable across matches for the same entity — the bridge resolves the mapping against each match’s lineup, not against a fixed cross-match table.
engine/id-bridge.ts is a fail-closed translator: any unmapped Live id drops the event and increments a counter rather than mis-attributing it. Operator overrides live in engine/id-bridge-overrides.json. Behaviour is gated by USE_LIVE_REST_BRIDGE_V1, USE_LIVE_REST_BRIDGE_V2, SHADOW_DROP_DIAG, and ID_BRIDGE_OVERRIDE_STRICT env vars.
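A fail-closed lookup of this kind can be sketched as follows. The map shape and drop counter are illustrative; the example ids reuse the REST/live pairs mentioned above:

```ts
// Sketch of a fail-closed live-to-REST translation: an unmapped live id drops
// the event and bumps a counter instead of mis-attributing it. The real logic
// lives in engine/id-bridge.ts.
let droppedUnmapped = 0;

function toRestId(liveId: number, liveToRest: Map<number, number>): number | null {
  const restId = liveToRest.get(liveId);
  if (restId === undefined) {
    droppedUnmapped += 1; // surfaced via diagnostics rather than guessed
    return null;          // caller drops the event
  }
  return restId;
}

// Example mapping resolved from a single match's lineup (live ids are not
// stable across matches, so this map is rebuilt per match).
const lineupMap = new Map<number, number>([[21, 1], [106232, 39461]]);
console.log(toRestId(21, lineupMap));    // 1
console.log(toRestId(99999, lineupMap)); // null (dropped, counter incremented)
```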
Phase 2 (USE_LIVE_REST_BRIDGE_V2=true) was enabled on Hetzner on 2026-05-05. Phase 3 verification runs every 15 minutes Sat/Sun 10–22 UTC via sportsperp-bridge-verify.timer; the first full matchday verification window is Saturday 2026-05-09 (Liverpool–Chelsea KO 11:30 UTC).
Further reading
- services/README.md — deployment runbook.
- DEPLOY.md — full infrastructure setup guide.
- Oracle Design — detail on how the crank → pusher pipeline feeds the program.
- docs/plans/2026-05-04-live-rest-id-bridge-plan.md — Live↔REST bridge design and verification cadence.