Data Flow

This page explains how historical and live market data moves through the system.

What It Does

  • Historical flow: cache-first reads with REST gap fill.
  • Live flow: WS messages -> queue -> DataFeed -> normalized bars.

Why It Exists

History endpoints are rate limited and can return partial or missing data. A cache reduces REST calls and makes backtests deterministic. Live WS data can be noisy and out-of-order, so normalization and bounded draining protect the strategy loop.

Historical Data Flow

  1. Strategy requests bars via Store.get_history().
  2. Historical cache returns existing bars.
  3. Store detects gaps using market calendar rules.
  4. REST adapter fetches missing windows only.
  5. New bars are normalized and stored back in the cache.
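The steps above can be sketched as a cache-first read with REST gap fill. This is a minimal illustration, not the real Store: `get_history`, the dict-backed cache, `fetch_rest`, and the weekday-only calendar are all simplified stand-ins (real gap detection follows full market calendar rules).

```python
from datetime import date, timedelta

def weekdays(start: date, end: date):
    """Stand-in for market calendar rules: Mon-Fri only, no holidays."""
    d = start
    while d <= end:
        if d.weekday() < 5:
            yield d
        d += timedelta(days=1)

def get_history(symbol: str, start: date, end: date, cache: dict, fetch_rest):
    """Cache-first read: serve cached bars, REST-fill only the missing days."""
    expected = list(weekdays(start, end))
    gaps = [d for d in expected if (symbol, d) not in cache]
    for d in gaps:
        # REST call happens only for missing windows; the result is
        # written back so the next run is fully served from cache.
        cache[(symbol, d)] = fetch_rest(symbol, d)
    return [cache[(symbol, d)] for d in expected]
```

Because filled bars are written back, a second identical request makes zero REST calls, which is what makes backtests deterministic and cheap to rerun.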

Failure handling:

  • If the cache path is missing, the Store fails fast with actionable guidance.
  • If REST returns auth errors, the Store raises AuthError.
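A minimal sketch of those two failure modes, assuming hypothetical helper names (`open_cache`, `check_rest_response`) and plain HTTP status codes rather than the Store's real internals:

```python
import os

class AuthError(Exception):
    """Raised when the REST endpoint rejects the configured credentials."""

def open_cache(cache_path: str) -> str:
    """Fail fast with guidance instead of silently creating an empty cache."""
    if not os.path.isdir(cache_path):
        raise FileNotFoundError(
            f"Cache directory {cache_path!r} not found. "
            "Create it or point the Store at an existing cache location."
        )
    return cache_path

def check_rest_response(status_code: int) -> None:
    """Map REST auth failures onto a dedicated exception type."""
    if status_code in (401, 403):
        raise AuthError(
            f"REST request rejected with HTTP {status_code}; check API keys."
        )
```

Raising a distinct AuthError lets callers distinguish a credentials problem (not retryable) from a transient REST failure (retryable).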

Safety implication:

  • Cached bars are treated as rebuildable; do not treat the cache as the source of truth.

Live Data Flow

  1. Data WS adapter connects and subscribes to symbols.
  2. WS callbacks push messages into a queue.
  3. DataFeed drains messages in bounded batches.
  4. Messages are normalized into bars and pushed into Backtrader.
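The drain step above is the key to keeping the strategy loop responsive. A minimal sketch of bounded draining, assuming a standard `queue.Queue` between the WS callbacks and the DataFeed (the real DataFeed and its budget configuration are not shown):

```python
import queue

def drain_bounded(q: "queue.Queue", max_items: int = 100) -> list:
    """Pull at most max_items messages per call so a fast-filling queue
    can never starve the Backtrader loop; leftovers wait for the next pass."""
    batch = []
    while len(batch) < max_items:
        try:
            batch.append(q.get_nowait())  # non-blocking: never waits on producers
        except queue.Empty:
            break
    return batch
```

Because the drain returns after `max_items`, a burst of messages is absorbed over several loop iterations instead of stalling one iteration indefinitely.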

Failure handling:

  • WS reconnect uses explicit backoff.
  • Stale feeds emit warnings if no messages arrive within the configured threshold.
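Both failure behaviors can be sketched in a few lines. The function names, the exponential schedule, and the 30-second default threshold are illustrative assumptions, not the adapter's actual configuration:

```python
def backoff_delays(base: float = 1.0, cap: float = 60.0, retries: int = 6) -> list:
    """Explicit exponential backoff schedule (seconds) for WS reconnects,
    capped so repeated failures never produce unbounded waits."""
    return [min(base * 2 ** i, cap) for i in range(retries)]

def is_stale(last_msg_ts: float, now: float, threshold_s: float = 30.0) -> bool:
    """True when no message has arrived within the configured threshold;
    callers would emit a staleness warning on the False -> True transition."""
    return (now - last_msg_ts) > threshold_s
```

Making the schedule an explicit list (rather than sleeping inline) keeps it easy to log, test, and tune per environment.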

Scenario Example

Scenario: A strategy subscribes to 800 symbols.

  • The adapter batches subscribe calls into groups of 100 symbols.
  • The DataFeed drains queues in bounded slices to avoid blocking the Backtrader loop.
  • Operators tune batch size and drain budgets to avoid backlog warnings.
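The batching in this scenario reduces to splitting the symbol list into fixed-size chunks. A minimal sketch, with `chunked` and the `subscribe` callback as hypothetical names:

```python
def chunked(symbols: list, size: int = 100) -> list:
    """Split a symbol list into subscribe batches of at most `size` each."""
    return [symbols[i:i + size] for i in range(0, len(symbols), size)]

def subscribe_all(symbols: list, subscribe, size: int = 100) -> int:
    """Issue one subscribe call per batch; returns the number of calls made."""
    batches = chunked(symbols, size)
    for batch in batches:
        subscribe(batch)  # one WS subscribe message per batch of symbols
    return len(batches)
```

For 800 symbols with the default batch size, this issues 8 subscribe calls; shrinking the batch size trades more calls for smaller messages, which is the tuning knob operators adjust alongside the drain budget.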