# Architecture

## Overview
Backtrader Store/Broker/DataFeed for FYERS with normalized OHLCV bars, queue-based WS polling, SQLite cache, and resilient adapters. Follows Backtrader patterns (ibstore-style) but with FYERS specifics (REST + WS, rate limits, caching).
For operator-friendly architecture summaries, see docs/architecture/system-overview.md, docs/architecture/data-flow.md, docs/architecture/order-flow.md, and docs/architecture/recovery-flow.md.
## Core Components
- Store (`src/fyers_store/store.py`): connection manager, shared queues, reconnection policy, lifecycle hooks for data feeds and broker (see the wiring sketch after this list).
- DataFeed (`src/fyers_store/data_feed.py`): FSM for historical/live/backfill; gap detection + REST backfill; tick-as-bar normalization with volume deltas and stall detection.
- Broker (`src/fyers_store/broker.py`): order lifecycle mapping, idempotent order submission, partial-fill tracking, Backtrader notifications.
- Adapters (`src/fyers_store/adapters/`): thin REST/WS clients; REST owns auth/session; WS pushes to queues and tracks subscriptions/reconnects.
- State (`src/fyers_store/state/`): SQLite-backed runtime state store for crash-safe broker recovery (orders, positions, trade dedupe keys, WS watermarks).
- Persistence (`src/fyers_store/persistence/`): pluggable cache interface (default SQLite/WAL) for historical bars, watermarks, and gap detection; adapters stay stateless.
- Models/Domain: normalized bar schema/validation.
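
For orientation, here is a minimal wiring sketch of how these pieces fit into a Backtrader run. It assumes ibstore-style `getdata`/`getbroker` helpers on the store and illustrative arguments (`dataname`, `timeframe`, `compression`); the actual constructor signatures live in `src/fyers_store/` and may differ.

```python
import backtrader as bt

# Illustrative wiring only: the helper calls below follow the ibstore-style
# pattern this store mirrors; their exact names/arguments are assumptions.
from fyers_store.store import FyersStore


class SmaCross(bt.Strategy):
    params = dict(fast=10, slow=30)

    def __init__(self):
        fast = bt.ind.SMA(period=self.p.fast)
        slow = bt.ind.SMA(period=self.p.slow)
        self.crossover = bt.ind.CrossOver(fast, slow)

    def notify_order(self, order):
        # The broker maps FYERS WS/REST order events onto bt.Order statuses.
        print(order.Status[order.status], order.info.get("order_id"))

    def next(self):
        if not self.position and self.crossover > 0:
            self.buy()
        elif self.position and self.crossover < 0:
            self.close()


if __name__ == "__main__":
    cerebro = bt.Cerebro()
    store = FyersStore()  # reads resolved config (auth, WS, cache, state)
    data = store.getdata(dataname="NSE:SBIN-EQ",   # hypothetical helper, ibstore-style
                         timeframe=bt.TimeFrame.Minutes, compression=1)
    cerebro.adddata(data)
    cerebro.setbroker(store.getbroker())           # hypothetical helper, live/paper broker
    cerebro.addstrategy(SmaCross)
    cerebro.run()
```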
## Historical Cache
- Interface: `get_bars`, `detect_gaps`, `upsert_bars`, `get_watermark`/`set_watermark` (sketched after this list).
- Backend: SQLite/WAL; many readers, one writer at a time. Multi-process writers are supported with WAL + `busy_timeout` + bounded retry/backoff on lock errors (tunable via `config/cache.yaml: concurrency`).
- Schema: `bars(symbol, resolution, ts, ohlcv, source, ingest_time, pk(symbol, resolution, ts))`; `bar_watermarks(symbol, resolution, context, last_ts, updated_at, pk(symbol, resolution, context))`.
- Behavior: data-driven gap detection; per-context watermarks are hints; idempotent upserts; market-aware gap filtering using NSE session rules (skip weekends/off-session; holidays configurable); advances the watermark on empty gaps (e.g., holidays).
- Config: `config/cache.yaml: db_path` (required, no default); `config/nse_calendar.yaml` for the session/holiday rules used by gap detection.
- Maintenance: optional periodic background maintenance (WAL checkpoint + PRAGMA optimize/quick_check) driven by `config/cache.yaml: maintenance` to keep the WAL bounded and detect corruption early; VACUUM is supported but should remain manual/off-hours.
- Resolution policy: the cache stores FYERS-native resolutions only; no synthetic caching. Resolution tokens are canonicalized (e.g., `1m` -> `1`, `5s` -> `5S`, `1D` -> `D`) so REST calls and cache keys stay consistent. Backtrader handles resampling.
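
A minimal sketch of the cache interface and SQLite schema described above. The method names and table/column names follow the bullets; the concrete types (`Bar`, gap tuples, column affinities) are assumptions, and the real interface lives in `src/fyers_store/persistence/`.

```python
from dataclasses import dataclass
from typing import Iterable, Optional, Protocol, Sequence


@dataclass(frozen=True)
class Bar:
    symbol: str
    resolution: str       # canonical FYERS token, e.g. "1", "5S", "D"
    ts: int               # bar timestamp (epoch seconds, UTC)
    open: float
    high: float
    low: float
    close: float
    volume: float
    source: str = "rest"  # provenance, e.g. "rest" or "ws"


class HistoricalCache(Protocol):
    """Pluggable cache interface; the default backend is SQLite in WAL mode."""

    def get_bars(self, symbol: str, resolution: str, start_ts: int, end_ts: int) -> Sequence[Bar]: ...

    def detect_gaps(self, symbol: str, resolution: str, start_ts: int, end_ts: int) -> Sequence[tuple[int, int]]:
        """Return (gap_start_ts, gap_end_ts) ranges missing from stored coverage."""
        ...

    def upsert_bars(self, bars: Iterable[Bar]) -> int: ...

    def get_watermark(self, symbol: str, resolution: str, context: str) -> Optional[int]: ...

    def set_watermark(self, symbol: str, resolution: str, context: str, last_ts: int) -> None: ...


# Illustrative DDL matching the schema bullets above (column types are assumptions).
SCHEMA = """
CREATE TABLE IF NOT EXISTS bars (
    symbol      TEXT NOT NULL,
    resolution  TEXT NOT NULL,
    ts          INTEGER NOT NULL,
    open REAL, high REAL, low REAL, close REAL, volume REAL,
    source      TEXT,
    ingest_time INTEGER,
    PRIMARY KEY (symbol, resolution, ts)
);
CREATE TABLE IF NOT EXISTS bar_watermarks (
    symbol     TEXT NOT NULL,
    resolution TEXT NOT NULL,
    context    TEXT NOT NULL,
    last_ts    INTEGER,
    updated_at INTEGER,
    PRIMARY KEY (symbol, resolution, context)
);
"""
```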
## Runtime State Persistence
- Backend: separate SQLite DB defined in `config/state.yaml` (`state_db_path`, `account_id`) and distinct from the historical cache DB.
- Scope: open orders, positions, trade dedupe keys, and WS processing watermarks per (account_id, strategy_id).
- Safety: versioned schema; mismatches fail fast with a clear reset path (delete the DB or call `clear_state()`); see the sketch after this list.
- Lifecycle: broker loads persisted state on startup, reconciles with REST snapshots, then begins WS consumption.
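
A minimal sketch of the fail-fast schema-version check, assuming the version is stored in SQLite's `user_version` pragma and that `clear_state()` resets the file; the actual mechanism in `src/fyers_store/state/` may differ.

```python
import sqlite3

STATE_SCHEMA_VERSION = 1  # illustrative constant; bump on incompatible schema changes


class StateSchemaMismatch(RuntimeError):
    """Raised when the on-disk runtime-state schema does not match this build."""


def open_state_db(state_db_path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(state_db_path)
    found = conn.execute("PRAGMA user_version").fetchone()[0]
    if found == 0:
        # Fresh database: create tables (omitted here) and stamp the version.
        conn.execute(f"PRAGMA user_version = {STATE_SCHEMA_VERSION}")
        conn.commit()
    elif found != STATE_SCHEMA_VERSION:
        conn.close()
        # Fail fast with an actionable reset path instead of silently migrating.
        raise StateSchemaMismatch(
            f"state DB schema v{found} != expected v{STATE_SCHEMA_VERSION}; "
            f"delete {state_db_path} or call clear_state() to reset"
        )
    return conn
```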
## Data Flows
- Historical: DB-first -> detect gaps -> REST history (chunked, partial-candle safe) -> normalize -> upsert -> serve merged bars -> update watermark.
- Live L1: Data WS -> queue -> Store drain -> DataFeed poll; SymbolUpdate -> bar with volume delta (see the sketch after this list); drops out-of-order/symbol-less ticks; stall warning if no data arrives for the configured window.
- Orders/Trades/Positions: Order WS -> queue -> Broker drain; REST `sync_order_state()` for recovery (orderbook/tradebook/positions/holdings/funds).
- Multi-strategy routing: Store assigns auto strategy ids per broker, tracks order_id ownership, and routes WS updates to the owning broker; ownership conflicts raise immediately to avoid cross-strategy leakage.
- Order REST: submit/modify/cancel via the REST adapter; Store/Broker build payloads.
- Broker cash/value: `getcash`/`getvalue` read REST snapshots (funds, positions) to compute cash plus mark-to-market value.
- Backtrader notifications: Broker maps order/trade WS events to `bt.Order` statuses (Submitted -> Accepted -> Partial -> Completed/Cancelled/Rejected), updates executed size/price/value with weighted fills, suppresses duplicate terminal notifications, and emits `notify_order` for registered orders. Accepted is emitted before the first partial if FYERS skips it.
- bt.Order creation: `buy`/`sell` return a `bt.Order`, seed Submitted, map the FYERS order id into `order.info['order_id']`, and auto-register for notifications; fall back to the raw response when Backtrader is absent.
- Startup recovery: broker loads persisted state, fetches REST snapshots, reconciles, then starts order WS consumption (fail-fast if reconcile fails).
- Reconnect recovery: on WS reconnect, broker runs a bounded REST reconcile (orderbook/tradebook/positions) before consuming new messages (fail-fast on failure).
- Notification loop: DataFeed drains order notifications each live tick so `notify_order` is invoked without strategy intervention.
- Position cache: broker tracks positions from REST snapshots; `getposition` returns a Backtrader Position with size/price when Backtrader is available.
- Signed fills & reconciliation: executed size uses signed deltas (sells negative); WS fills hot-update positions; periodic REST reconcile (configurable via `config/broker.yaml`) refreshes orders/trades/positions.
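
A minimal sketch of the tick-as-bar normalization with per-symbol volume deltas. It assumes a SymbolUpdate payload shaped like `{'symbol', 'ltp', 'vol_traded_today', 'exch_feed_time'}`; apart from `vol_traded_today`, the field names are assumptions.

```python
from typing import Optional


class TickToBar:
    """Turn cumulative-volume ticks into bar-like records with delta volume."""

    def __init__(self) -> None:
        self._last_cum_vol: dict[str, float] = {}
        self._last_ts: dict[str, int] = {}

    def on_tick(self, msg: dict) -> Optional[dict]:
        symbol = msg.get("symbol")
        if not symbol:
            return None                      # drop symbol-less ticks

        ts = int(msg.get("exch_feed_time", 0))
        if ts < self._last_ts.get(symbol, 0):
            return None                      # drop out-of-order ticks
        self._last_ts[symbol] = ts

        price = float(msg["ltp"])
        cum_vol = float(msg.get("vol_traded_today", 0.0))
        prev = self._last_cum_vol.get(symbol)
        # Delta against the previous cumulative total; the first tick seen (or a
        # feed reset) contributes zero so resampled volume is not corrupted.
        delta_vol = max(cum_vol - prev, 0.0) if prev is not None else 0.0
        self._last_cum_vol[symbol] = cum_vol

        return {
            "symbol": symbol,
            "datetime": ts,                  # epoch seconds, UTC
            "open": price, "high": price, "low": price, "close": price,
            "volume": delta_vol,
        }


# Usage sketch:
#     converter = TickToBar()
#     bar = converter.on_tick(ws_message)    # None means the tick was dropped
```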
## Backtrader Compliance
- Order state transitions and idempotent keys (fills deduped by `tradeNumber`).
- Poll-based WS integration; no blocking socket reads in the main loop.
- Partial fills handled incrementally; weighted average price maintained (see the sketch after this list).
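
A minimal sketch of incremental partial-fill handling with `tradeNumber` dedupe and a running weighted average price; the `FillTracker` name and its parameters are illustrative, not the broker's actual internals.

```python
from dataclasses import dataclass, field


@dataclass
class FillTracker:
    """Accumulate partial fills idempotently for one order."""

    executed_size: float = 0.0        # signed: sells are negative
    avg_price: float = 0.0            # weighted average across accepted fills
    _seen: set = field(default_factory=set)

    def apply_fill(self, trade_number: str, qty: float, price: float, side: int) -> bool:
        """Return True if the fill was new, False if it was a duplicate."""
        if trade_number in self._seen:
            return False                  # dedupe by tradeNumber
        self._seen.add(trade_number)

        signed_qty = qty * (1 if side > 0 else -1)
        prev_abs = abs(self.executed_size)
        new_abs = prev_abs + abs(signed_qty)
        # Weighted average over the absolute filled quantity.
        self.avg_price = (self.avg_price * prev_abs + price * abs(signed_qty)) / new_abs
        self.executed_size += signed_qty
        return True


tracker = FillTracker()
tracker.apply_fill("T1", qty=40, price=100.0, side=1)
tracker.apply_fill("T1", qty=40, price=100.0, side=1)   # duplicate WS replay: ignored
tracker.apply_fill("T2", qty=60, price=101.0, side=1)
assert tracker.executed_size == 100 and round(tracker.avg_price, 2) == 100.60
```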
## Logging & Observability
- `logging_module.config.setup_logging()` + `config/logging_config.yaml`.
- Categories: `FYERS_STORE`, `FYERS_DATA`, `FYERS_BROKER`, `FYERS_REST`, `FYERS_WS`, `FYERS_AUTH`.
- Sample/summary logging for high-volume WS traffic (see the sketch after this list); avoid secrets in logs.
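
A minimal sketch of sample/summary logging for high-volume WS traffic, using the `FYERS_WS` category above; `SampledLogger` and its thresholds are illustrative, not part of `logging_module`.

```python
import logging
import time

log = logging.getLogger("FYERS_WS")


class SampledLogger:
    """Log 1-in-N messages plus a periodic count summary instead of every tick."""

    def __init__(self, sample_every: int = 500, summary_seconds: float = 60.0) -> None:
        self.sample_every = sample_every
        self.summary_seconds = summary_seconds
        self._count = 0
        self._last_summary = time.monotonic()

    def tick(self, msg: dict) -> None:
        self._count += 1
        if self._count % self.sample_every == 0:
            # Sample: log only selected fields, never tokens or credentials.
            log.debug("ws sample symbol=%s ltp=%s", msg.get("symbol"), msg.get("ltp"))
        now = time.monotonic()
        if now - self._last_summary >= self.summary_seconds:
            log.info("ws summary: %d messages in %.0fs", self._count, now - self._last_summary)
            self._count = 0
            self._last_summary = now
```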
## Rate Limits & Resilience
- REST: enforced via `RateLimiter` (10 rps, 200 rpm, 100k/day defaults) with backoff on 429/503; config at `config/fyers_rest_limits.yaml` (see the sketch after this list).
- WS: adapter-owned reconnect with explicit exponential backoff + jitter (SDK auto-reconnect disabled); adapters track subscriptions and resubscribe after reconnect; status messages on connect/disconnect/error.
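
A minimal sliding-window sketch using the default limits quoted above (10 rps, 200 rpm, 100k/day) plus jittered backoff on 429/503. The window bookkeeping and the requests-style `status_code` check are illustrative, not the actual `RateLimiter` API.

```python
import random
import time
from collections import deque


class SlidingWindowLimiter:
    """Block until a call fits inside every configured (limit, window_seconds) pair."""

    def __init__(self, limits=((10, 1.0), (200, 60.0), (100_000, 86_400.0))) -> None:
        self._limits = [(n, w, deque()) for n, w in limits]

    def acquire(self) -> None:
        while True:
            now = time.monotonic()
            wait = 0.0
            for limit, window, stamps in self._limits:
                while stamps and now - stamps[0] >= window:
                    stamps.popleft()             # drop timestamps outside the window
                if len(stamps) >= limit:
                    wait = max(wait, window - (now - stamps[0]))
            if wait <= 0:
                for _, _, stamps in self._limits:
                    stamps.append(now)
                return
            time.sleep(wait)


def call_with_backoff(request, limiter, retries=5, base_delay=0.5):
    """Retry a REST call on throttle/unavailable responses with jittered backoff."""
    for attempt in range(retries):
        limiter.acquire()
        response = request()                     # assumed to return a requests-like object
        if getattr(response, "status_code", 200) not in (429, 503):
            return response
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.25))
    raise RuntimeError("rate-limited after retries; check config/fyers_rest_limits.yaml")
```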
## Assumptions
- L1 market data only for scale (up to ~1000 symbols).
- Separate market data vs order WS sockets; do not assume identical heartbeat/reconnect semantics.
- REST-only for order actions; WS is notification-only.
- Symbol master URLs live in config; cache path is explicit (no default).
- Historical intraday timestamps are epoch UTC; daily bars are IST-based at session end and convertible to UTC as long as trading-date semantics are preserved (see the sketch after this list).
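
A minimal sketch of the timestamp assumption: intraday bars are already UTC epoch seconds, while daily bars are anchored to the IST trading day and can be converted to UTC without losing the trading date. The 15:30 IST session-end anchor is an assumption for illustration.

```python
from datetime import date, datetime, time as dtime, timedelta, timezone

IST = timezone(timedelta(hours=5, minutes=30))


def intraday_ts_to_utc(epoch_seconds: int) -> datetime:
    """Intraday history timestamps are already UTC epoch seconds."""
    return datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)


def daily_bar_to_utc(trading_date: date, session_end: dtime = dtime(15, 30)) -> datetime:
    """Anchor a daily bar at the IST session end, then convert to UTC.

    The conversion is safe only because the IST trading date is kept: the UTC
    result still maps back to the same trading day.
    """
    ist_dt = datetime.combine(trading_date, session_end, tzinfo=IST)
    return ist_dt.astimezone(timezone.utc)


# Example: 2024-01-02 15:30 IST -> 2024-01-02 10:00 UTC (same trading date).
```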
## Decisions Log
This section consolidates the project decision log that was previously tracked as a separate document.
- Adapters/Store/Broker/DataFeed: Use a thin REST/WS client behind adapter interfaces; Why: explicit behavior, deterministic retries/idempotency, and easier testing than SDK-only usage.
- Adapters: Keep an optional SDK backend for later; Why: allow compatibility without coupling core logic to SDK internals.
- Store/Adapters/DataFeed/Broker: Use separate sockets for market data and order/trade/position updates; Why: FYERS requires separate streams and improves isolation.
- Broker/Adapters/Store: Use Order WebSocket for live order/trade/position updates and REST only for submit/modify/cancel and snapshots; Why: REST rate limits are strict and WS avoids 429s.
- DataFeed/Adapters: L1 market data only; Why: scale to 1,000 symbols while staying within the 5,000-symbol cap and stability guidance.
- Broker/State: Support partial fills with idempotent fill processing; prefer tradeNumber as fill id when present, otherwise synthetic keys; Why: FYERS supports partial fills and Backtrader requires correct lifecycle notifications.
- Broker/State: Use (account_id, order_id) as the order storage key; Why: order id is unique only within an account.
- Broker/State: Use tradeNumber as execution dedupe key; Why: unique per execution within account and stable across WS + REST tradebook.
- Broker: Derive remaining quantity when missing; Why: remainingQuantity is optional in order updates.
- Adapters/Store: Implement WS ping keepalive and reconnect with token refresh + resubscribe; Why: required for long-running stability and per FYERS guidance.
- Broker/State: Reconcile WS after reconnect using REST orderbook/tradebook/positions as source of truth; Why: ensures state correctness after gaps.
- Adapters: Treat market data and order sockets as distinct with separate heartbeat/sequence handling; Why: FYERS services differ in traffic patterns and behavior.
- Broker/Adapters: Keep order create/modify/cancel strictly REST-only; Why: Order WS is notification-only.
- DataFeed/Adapters: Batch market data subscriptions at 100 symbols and actively unsubscribe to stay under 5,000 symbols; Why: avoid internal buffer limits and ensure scalability.
- Adapters/Config: Store symbol master CSV URLs in configuration; Why: FYERS links can change and must be editable without code changes.
- Config: Added symbol master sources and headers to `config/symbol_master_sources.yaml`; Why: centralize source URLs and field order.
- Store/Package: Create the base `src/fyers_store/` package structure and `tests/` scaffold; Why: align with Backtrader patterns before adding class scaffolds.
- DataFeed/Adapters: Respect FYERS historical limits (100 days for intraday, 366 days for daily) and the partial-candle rule; Why: avoid gaps and partial bars.
- DataFeed/Adapters: Treat intraday history timestamps as UTC epoch and daily bars as IST; Why: FYERS guide specifies this behavior.
- Store/Broker/DataFeed/Router/State: Follow `sample_codes` store/data/broker patterns while extracting router/gateway/caching roles; Why: proven Backtrader compliance with targeted simplification.
- Docs/Workflow: Allow updating `*.md` files without prior permission; Why: reduce friction for doc updates as requested.
- Adapters/Models: Normalize history bars at the adapter boundary into `NormalizedBar` with tz-aware `datetime`; Why: single validation point and simpler DataFeed logic.
- Adapters/WS/Store: WS adapters push parsed messages to a non-blocking queue and Store drains them; Why: Backtrader is poll-based and must avoid blocking/awaiting.
- Adapters/REST: Add an explicit `start()`/`stop()` lifecycle for auth/session ownership; Why: REST auth/refresh is distinct from the Store lifecycle.
- Broker/Adapters: Broker builds order intent payloads while adapters only transmit; Why: clarify intent vs execution ownership.
- Broker/Strategy: Auto-generate strategy ids per broker and enforce strict order ownership (hard error on conflicts); Why: prevent cross-strategy order leakage and keep routing deterministic.
- Broker/ProductType: Enforce allow list (CNC/INTRADAY only) and block CO/BO/MARGIN/MTF; Why: keep order mapping deterministic and let strategies manage SL/TP exits.
- Broker/State: Persist runtime state in a separate SQLite DB (`config/state.yaml` with `state_db_path` + `account_id`), distinct from the historical cache; Why: crash-safe recovery without mixing durable runtime state and the rebuildable history cache.
- Broker/State: On startup, load persisted state then reconcile via REST before consuming the order WS; on reconnect, run a bounded REST reconcile before processing new messages; Why: close WS gaps and keep state correct across restarts.
- Store/Broker: Route unknown order ids to the primary broker with a warning; Why: avoid double-handling and keep ownership deterministic when REST only returns request_id.
- DataFeed/Store: DataFeed drains market queues and consumes normalized bars; Why: consistent data path for historical and live feeds.
- Broker/Constants: Codify FYERS order side/type values (BUY=1, SELL=-1, LIMIT=1, MARKET=2, STOP=3, STOP_LIMIT=4); Why: central reference for order intent mapping.
- Store/Cache: Introduce a pluggable historical cache interface (default SQLite/WAL) with a `bars` table and per-context watermarks; gap detection is data-driven on stored coverage (not just watermarks), allowing older backfills without being blocked by forward ingests; adapters remain stateless transports and can swap cache backends without code changes; Why: deterministic, zero-ops local cache with optional future backends.
- Cache/Gaps: Filter detected gaps using NSE session rules (skip weekends/off-session; holidays configurable via `config/nse_calendar.yaml`); Why: avoid pointless REST backfills for expected non-trading periods and reduce rate-limit pressure.
- Cache/Maintenance: Add optional periodic background SQLite maintenance (WAL checkpoint + PRAGMA optimize/quick_check; VACUUM optional) driven by `config/cache.yaml`; Why: keep the WAL bounded and surface corruption early in long-running multi-process deployments.
- Config/Cache: Require an explicit cache `db_path` (no default) in `config/cache.yaml`; parent directories are created automatically; Why: enforce an explicit shared home-dir location while keeping zero-ops setup.
- REST/RateLimits: Enforce FYERS REST limits (10 rps, 200 rpm, 100k/day defaults) with chunked history requests, partial-candle trimming, a rate limiter, and 429/503 backoff; limits/backoff are configurable via `config/fyers_rest_limits.yaml`; Why: protect against throttle blocks and keep behavior tunable per environment.
- REST/Orders: Keep `place_order` single-shot and raise an explicit unknown-state error with session expiry/multi-device invalidation guidance; apply bounded retry/backoff for modify/cancel/orderbook on transient failures using `config/fyers_rest_limits.yaml`; Why: avoid duplicate orders while keeping recovery for safe read/modify actions.
- Broker/Reconcile: Fail fast on REST reconcile failures (startup, periodic, reconnect) by raising a hard exception with actionable guidance; Why: prevent trading on stale state when snapshots cannot be trusted.
- Ops/Watchdog: Prefer external schedulers (Task Scheduler/systemd/cron) for restarts on fail-fast exits; use restart-on-nonzero with short backoff (10-30s) and consider a daily restart before market open; keep in-process watchdog deferred; Why: clearer ownership of restarts with simpler operational control.
- Cache/Concurrency: Expose SQLite lock contention knobs (`busy_timeout_ms`, `write_lock_*`) in `config/cache.yaml`; Why: allow multi-process tuning per machine without code changes.
- Cache/Scope: Cache is on-demand (only when strategies request history), not a proactive full-market store; Why: control disk growth and REST usage.
- Cache/UpdatePolicy: Historical cache performs gap fill only (no forced overwrite refresh); Why: deterministic idempotent behavior and lower risk of clobbering already-ingested bars.
- RateLimits/ProcessScope: Keep REST rate limiting per-process (status quo); reduce per-process limits when running many processes; Why: simplest coordination-free default.
- History/Resolutions: Enforce FYERS-native history resolutions only and canonicalize aliases (`1m` -> `1`, `5s` -> `5S`, `1D` -> `D`); Why: consistent time math, avoid cache fragmentation, and leave synthetic timeframes to Backtrader resampling.
- DataFeed/WS: Map SymbolUpdate to tick-as-bar with delta volume (cumulative `vol_traded_today` diffed per symbol); drop out-of-order and symbol-less ticks; Why: prevent volume corruption during resampling and ignore malformed feed events.
- WS/Subscriptions: Keep subscription tracking in the data WS adapter and resubscribe after reconnect (see the reconnect sketch at the end of this log); Store does passthrough subscribe/unsubscribe; Why: adapter owns transport state and reconnect safety without duplicating state in Store.
- WS/Reconnect: Disable SDK auto-reconnect and implement adapter-owned reconnect with explicit exponential backoff + jitter; Why: control retry cadence, avoid reconnect storms/rate limits, and keep reconnection behavior testable/observable.
- WS/Config: Allow WS adapters to load and validate `config/fyers_ws.yaml` via `config_path`, treating config as the source of truth when provided; Why: keep adapters modular while enforcing strict config validation.
- WS/Config: Expose `data_ws_batch_size` (1-100) for market data subscriptions; Why: enforce FYERS batching guidance while keeping the adapter self-contained.
- WS/Health: Add data WS idle warnings and configurable subscription cooldowns (`data_ws_subscribe_cooldown_seconds`, `data_ws_health_check_seconds`, `data_ws_stale_after_seconds`); Why: surface stalled feeds and prevent rapid subscribe/unsubscribe bursts.
- WS/Stall Handling: Emit status messages on connect/disconnect/error and warn when no live data arrives for a window; advance the watermark on an empty gap fetch (e.g., weekend) to avoid repeated refetch; Why: avoid silent stale feeds and pointless retries over non-trading periods.
- Docs: Track critical review findings as actionable work items in `docs/tasks.md`; keep `docs/critical_eval_*.md` as historical inputs; Why: avoid drift and keep a single source of truth for what is still open vs completed.
- Auth: Add bounded retries/backoff for OTP/PIN/token HTTP calls; Why: reduce transient auth failures without masking hard errors.
- Store: Remove singleton pattern from FyersStore; Why: avoid shared state across tests/runs and simplify lifecycle handling.
- Auth/Logging: Remove the commented logger fallback from `src/fyers/auth.py`; Why: keep logging centralized and avoid dead code.
- Auth/Store/Adapters: Standardize AuthError handling across REST and WS, validate auth on Store startup, and stop WS reconnect loops on auth failure; entrypoints own a single refresh attempt and exit on AuthError; Why: fail fast on invalid sessions while keeping recovery explicit and bounded.
- Backtest/EntryPoint: Add a canonical backtest runner that validates minimum bars before running indicators; Why: avoid cryptic Backtrader indicator errors when history is insufficient.
- Paths/Config: Centralize path resolution via FYERS_STORE_ROOT > platformdirs user_data_dir > dev fallback, disallow CWD fallback in installed packages, and honor env overrides for credentials/logging/token; Why: eliminate brittle relative paths and keep installs portable.
- Config/Versioning: Require `version: 1` in all YAML configs with strict validation and human-first error messages; Why: enforce explicit config schema versioning without backward-compat drift.
- Logging/Defaults: Attach a NullHandler to the `fyers_store` base logger and load logging config via the resolver with packaged defaults; normalize relative handler paths under the store root; Why: keep the library quiet by default while still shipping usable logging templates.
- Safety/Environment: Enforce DEV/PAPER/LIVE modes with a live-order guard (override via `allow_live_orders`/`ALLOW_LIVE_ORDERS`) and expose the resolved mode in health checks; Why: reduce accidental live trading.
- Health/Timeouts: Add Store.health_check() for REST/WS/DB visibility and enforce bounded REST/WS connect timeouts; Why: avoid silent hangs and improve diagnostics.
- Packaging/CLI: Adopt pyproject metadata with console scripts (`fyers-auth`, `fyers-store`) and a migration utility to move configs/DBs into the resolved store root; Why: support pip installs and reduce manual setup.
- Packaging/Namespace: Move the resolver implementation into `fyers_store.utils.resolver` and remove the top-level module; Why: avoid top-level namespace pollution and keep the public surface explicit.
- Docs/Hosting: Add MkDocs Material configuration and a Cloudflare Pages deployment workflow; Why: provide a standard, navigable docs site with a free, reliable static host.
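
The reconnect sketch referenced from the WS/Subscriptions decision above: adapter-owned reconnect with exponential backoff, jitter, resubscription, and a fail-fast stop on auth errors. The `connect`/`subscribe` hooks and the `AuthError` type stand in for the real adapter internals and are assumptions.

```python
import random
import time


class AuthError(Exception):
    """Authentication failures must stop the reconnect loop, not retry forever."""


class ReconnectingWsAdapter:
    def __init__(self, connect, subscribe, base_delay=1.0, max_delay=60.0) -> None:
        self._connect = connect          # callable that opens the socket (may raise)
        self._subscribe = subscribe      # callable(symbols) that re-sends subscriptions
        self._base_delay = base_delay
        self._max_delay = max_delay
        self._subscriptions: set[str] = set()

    def subscribe(self, symbols) -> None:
        self._subscriptions.update(symbols)          # adapter owns subscription state
        self._subscribe(sorted(self._subscriptions))

    def connect_with_retry(self, should_stop) -> None:
        attempt = 0
        while not should_stop():
            try:
                self._connect()
                self._subscribe(sorted(self._subscriptions))  # resubscribe after reconnect
                return
            except AuthError:
                raise                                         # fail fast on invalid sessions
            except Exception:
                delay = min(self._max_delay, self._base_delay * (2 ** attempt))
                time.sleep(delay + random.uniform(0, delay / 2))  # jitter avoids reconnect storms
                attempt += 1
```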