
How to Build a Real-Time Crypto Analyst Agent with LangGraph, MACD, and Slack Alerts


Crypto markets never sleep. At 3 AM on a Tuesday, a whale moves 8,000 BTC and triggers a volume spike that precedes a 12% price run. The RSI was already oversold. The Bollinger Band squeeze had been building for 48 hours. Every signal was there — for anyone who was watching.

Nobody was watching.

The problem is not analysis — it is surveillance. A competent trader can read a MACD crossover, interpret a Bollinger squeeze, and understand what an RSI divergence means. What they cannot do is monitor twenty trading pairs simultaneously, across two exchanges, twenty-four hours a day, seven days a week, with zero latency from signal to alert. That requires automation.

The tools available for this fall into two categories. Commercial platforms like 3Commas and Cryptohopper provide some alerting functionality but are opaque, expensive, and store your strategy logic on their servers. Code-it-yourself approaches using ccxt and pandas-ta give you full control but require you to build the orchestration, scheduling, anomaly detection, notification delivery, and backtesting layers from scratch — typically a multi-month project before you see a single alert.

The Crypto Analyst Agent is the missing middle ground. It is a LangGraph-orchestrated multi-agent system that maintains live WebSocket connections to Binance and Coinbase, calculates MACD, Bollinger Bands, RSI, and VWAP in parallel on every candle close, detects price and volume anomalies using statistical thresholds and targeted LLM analysis, synthesises findings into structured trading signals, delivers formatted alerts to Slack and Telegram, and validates strategies through a dedicated backtesting agent — all running continuously on a single server, visualised on a live Next.js dashboard.

Real-world use cases this application handles:

  • Independent traders automating market surveillance across multiple pairs and exchanges

  • Quantitative developers building and backtesting indicator-driven trading strategies

  • AI engineers studying LangGraph streaming with real-time financial data as the source

  • Technical founders shipping a proprietary market intelligence tool with full data ownership

  • Portfolio managers monitoring risk and receiving optimisation recommendations across crypto holdings

  • CS students building a real-world LangGraph project with live data and production-quality architecture

This article covers the core concept, the graph design, the indicator sub-agent pattern, the backtesting pipeline, and the most common challenges. Full source code is available in the complete course at labs.codersarts.com.


📄 Before you dive in — grab the free PRD template that maps out this entire system: architecture, API spec, sprint plan, and system prompt. [Download the free PRD]


How It Works: Core Concept

The concept powering this system is continuous stateful graph execution driven by a real-time data stream.

Most agent applications are request-response: a user sends input, the graph runs, the graph ends. A market analysis system is different. The data source — a WebSocket feed from Binance or Coinbase — is an infinite stream of candles. The graph must react to each candle, update its state, run analysis, and emit signals — then immediately be ready for the next candle, without restarting.

LangGraph handles this through a persistent StateGraph where the Market Monitor node functions as a continuous source. On each new candle, it updates the candle_buffer in the graph state and triggers the downstream analysis pipeline. The graph state accumulates indicator history, anomaly records, and signal logs across thousands of candle updates — all within the same session, with the full history available to every node.

Why indicator sub-agents instead of a single calculation function. The straightforward approach is to write one function that calculates all four indicators sequentially. This works but is unnecessarily slow — MACD on a 500-candle buffer takes ~30ms, and the other three indicators take 20–25ms each, so sequential execution totals roughly 95ms. Because these calculations are entirely independent, they can run concurrently via asyncio.gather, collapsing the total time to ~35ms (the slowest single indicator plus minimal overhead). More importantly, the sub-agent pattern makes each indicator independently configurable, testable, and replaceable — you can swap RSI parameters for one pair without touching the others.

Why the Anomaly Detection node has a gate. An always-on LLM call on every candle across 20 pairs at 1-minute intervals would cost over $100/month for anomaly detection alone. The solution is a statistical pre-filter: a Z-score calculation on the candle's volume relative to the 500-candle rolling mean. Only when Z-score > 3.0 (an event that occurs in < 0.3% of candles) does the system invoke gpt-4o-mini for deeper pattern recognition. This reduces LLM calls from 28,800/day to under 50/day while maintaining detection quality for the events that actually matter.
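As a sketch, the statistical pre-filter reduces to two small pure functions. The names `volume_z_score` and `should_invoke_llm` are illustrative, not taken from the project:

```python
from statistics import mean, stdev

def volume_z_score(volumes: list[float]) -> float:
    """Z-score of the latest candle's volume against the rolling history."""
    history = volumes[:-1]
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return (volumes[-1] - mu) / sigma

def should_invoke_llm(volumes: list[float], threshold: float = 3.0) -> bool:
    """Gate: escalate to the LLM only on statistically extreme volume."""
    if len(volumes) < 30:  # too little history for a stable estimate
        return False
    return volume_z_score(volumes) > threshold
```

In the real pipeline the same check would run against the 500-candle buffer; the sub-millisecond cost on the common path comes from the fact that no LLM or network call is involved unless the gate opens.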



LIVE ANALYSIS PIPELINE (fires on each candle close):

  Binance / Coinbase WebSocket → new candle
          │
          ▼
  [MARKET MONITOR NODE]
  Normalise to MarketTick schema
  Update candle_buffer[pair] (rolling 500)
          │
          ▼
  [INDICATOR FAN-OUT]
  asyncio.gather — 4 concurrent sub-agents:
    ┌─────────────┬─────────────┬─────────────┐
    ▼             ▼             ▼             ▼
  [MACD]      [BOLLINGER]   [RSI]         [VWAP]
  pandas-ta   pandas-ta     pandas-ta     pandas-ta
  ~30ms       ~25ms         ~20ms         ~20ms
    └─────────────┴─────────────┴─────────────┘
          │ (total: ~35ms, not ~95ms)
          ▼
  [INDICATOR JOIN NODE]
  Merge all 4 outputs into aggregated_indicators
          │
          ▼
  [ANOMALY DETECTOR NODE]
  Z-score volume check (< 1ms, no LLM)
          │
          ├──(Z > 3.0 or price Δ > 3%)──→ gpt-4o-mini pattern recognition
          │                                (~1.5 seconds, rare)
          │
          └──(normal)──→ AnomalyReport(severity: info)
          │
          ▼
  [SIGNAL SYNTHESISER NODE]
  Weighted indicator scoring
  gpt-4o summary generation (on OPPORTUNITY / RISK_ALERT signals)
  → TradingSignal JSON + Markdown summary
          │
          ├──→ [NOTIFICATION DISPATCH]  (BackgroundTask)
          │    Rate limit: 1 per pair per 15 min
          │    → Slack Incoming Webhook
          │    → Telegram Bot API
          │
          └──→ [PORTFOLIO OPTIMISER]   (every 10th signal)
               MVO / ERC weighting
               → PortfolioRecommendation


System Architecture Deep Dive

The Crypto Analyst Agent has eight layers. The key design principle throughout is separation of streaming concerns from analysis concerns — the data ingestion layer never touches the LLM, and the LLM layer never touches exchange APIs.

Layer 1 — Next.js 15 Dashboard. Real-time price charts using TradingView's Lightweight Charts library, with MACD and Bollinger Band overlays rendered from the indicator state stream. The alert feed shows the last 50 TradingSignals with type, confidence, and pair. The portfolio panel shows current vs. recommended allocations. The backtest panel displays BacktestReport metrics and the equity curve.

Layer 2 — FastAPI + WebSocket Gateway. Handles session management, WebSocket event streaming to the dashboard, SSE fallback, REST endpoints for historical signals and backtest results, and BackgroundTask orchestration for notifications. One FastAPI instance manages the entire analysis session; notification dispatch is always non-blocking.

Layer 3 — LangGraph Orchestration Engine. The StateGraph definition with 9 nodes and conditional edges. The Market Monitor node is wired as a persistent asyncio task that feeds state updates on each candle arrival. The graph is compiled once with the SqliteSaver checkpointer (prototype) or PostgresSaver (production) and invoked continuously.

Layer 4 — Market Monitor Node. The bridge between raw exchange data and the LangGraph state. It maintains persistent WebSocket connections via ccxt-pro or the websockets library, normalises all incoming data to the MarketTick schema (unified price in USDT, standardised field names), and maintains the rolling 500-candle buffer per pair.

Layer 5 — Indicator Sub-Agent Nodes (×4). Each is an independent asyncio coroutine that reads the candle buffer, calls pandas-ta, and returns a typed IndicatorResult: current value, previous value, signal classification (bullish/bearish/neutral), and signal strength (0–1). No indicator sub-agent knows about any other.

Layer 6 — Analysis Nodes. Anomaly Detection (statistical Z-score + conditional LLM), Signal Synthesis (weighted composite scoring + gpt-4o summary generation), Portfolio Optimisation (MVO/ERC weighting across all pairs).

Layer 7 — Action Nodes. Notification Dispatch (Slack + Telegram + PagerDuty for critical), Backtesting Agent (on-demand historical replay), Alert History writer (PostgreSQL persistence).

Layer 8 — Data Layer. TimescaleDB (or standard PostgreSQL) for high-frequency OHLCV storage and signal history. SQLite for prototype. Candle archives for backtest data access.


Architecture Table

| Layer | Component | Role |
|-------|-----------|------|
| 1 | Next.js 15 + Lightweight Charts | Price/indicator charts, alert feed, portfolio panel, backtest viewer |
| 2 | FastAPI + WebSocket | Session management, event streaming, REST API, BackgroundTask |
| 3 | LangGraph StateGraph | Continuous graph execution, node scheduling, state persistence |
| 4 | Market Monitor Node | WebSocket feeds, MarketTick normalisation, candle buffer |
| 5 | Indicator Sub-Agents ×4 | MACD, Bollinger Bands, RSI, VWAP — concurrent via asyncio.gather |
| 6 | Analysis Nodes | Anomaly Detection, Signal Synthesis, Portfolio Optimisation |
| 7 | Action Nodes | Notification Dispatch, Backtesting Agent, Alert History |
| 8 | TimescaleDB / PostgreSQL | OHLCV archive, signal history, backtest results |


The Indicator Sub-Agent Pattern

The sub-agent pattern is the most transferable LangGraph concept in this project — it applies anywhere you need multiple independent analyses to run in parallel on shared data.

Each indicator is implemented as a Python function decorated as a LangGraph node:



import pandas as pd
import pandas_ta  # noqa: F401 — importing registers the DataFrame .ta accessor

async def macd_agent(state: AnalystState) -> dict:
    """MACD sub-agent node — reads candle buffer, writes indicator output."""
    pair   = state["active_pair"]
    buffer = state["candle_buffers"][pair]

    if len(buffer) < 35:  # minimum candles for MACD 12/26/9
        return {"indicator_outputs": {pair: {"macd": None}}}

    df   = pd.DataFrame(buffer, columns=["timestamp", "open", "high", "low", "close", "volume"])
    macd = df.ta.macd(fast=12, slow=26, signal=9)

    current = macd["MACD_12_26_9"].iloc[-1]
    prev    = macd["MACD_12_26_9"].iloc[-2]
    hist    = macd["MACDh_12_26_9"].iloc[-1]
    signal  = "bullish" if current > 0 and current > prev else \
              "bearish" if current < 0 and current < prev else "neutral"

    return {
        "indicator_outputs": {pair: {"macd": {
            "value":     current,
            "prev":      prev,
            "histogram": hist,
            "signal":    signal,
            "strength":  min(abs(current) / (abs(current) + 0.001), 1.0),
        }}}
    }

All four indicator nodes share the same candle_buffers state field but write to separate sub-keys of indicator_outputs. The fan-out node spawns all four concurrently:



async def indicator_fan_out(state: AnalystState) -> dict:
    """Run all 4 indicator sub-agents concurrently."""
    results = await asyncio.gather(
        macd_agent(state),
        bollinger_agent(state),
        rsi_agent(state),
        vwap_agent(state),
        return_exceptions=True  # don't cancel all on single failure
    )
    # Merge all 4 result dicts; handle Exception results gracefully
    merged = {}
    for i, result in enumerate(results):
        if isinstance(result, Exception):
            log.warning(f"Indicator {i} failed: {result}")
        else:
            deep_merge(merged, result)
    return merged

The return_exceptions=True parameter is critical — without it, a single exception (for example, an IndexError raised against an underpopulated candle buffer) aborts the entire gather and discards the results of all four indicators for that candle.


The Backtesting Agent

The backtesting agent is where the system proves its value beyond real-time alerting. A strategy that looks compelling on a handful of live signals may have a 35% max drawdown and a negative Sharpe ratio on historical data. Knowing this before trading real capital is the entire point.

The backtesting agent runs entirely outside the live LangGraph graph. It is triggered on-demand via POST /api/backtest and executes as a separate asyncio task:



async def backtesting_agent(
    strategy:       str,
    symbol:         str,
    start_date:     date,
    end_date:       date,
    initial_capital:float,
    seed:           int = 42,
) -> BacktestReport:

    # Load historical OHLCV via ccxt REST (no WebSocket needed)
    candles = await load_historical_ohlcv(symbol, start_date, end_date)

    np.random.seed(seed)  # reproducibility
    portfolio_value = initial_capital
    trades = []
    peak_value = initial_capital

    for i in range(35, len(candles) - 1):  # warm-up; -1 because entry uses the next candle's open
        buffer = candles[max(0, i-500):i]

        # Run indicator calculation (NO LLM — rule-based only)
        indicators = calculate_indicators_sync(buffer)
        signal     = apply_strategy_rules(strategy, indicators)

        if signal.signal_type == "OPPORTUNITY":
            entry_price = candles[i+1]["open"]  # next-candle open (no lookahead)
            # ... trade simulation logic
            trades.append(Trade(entry_price=entry_price, ...))

    return BacktestReport(
        total_return_pct = (portfolio_value - initial_capital) / initial_capital * 100,
        sharpe_ratio     = calculate_sharpe(trades),
        max_drawdown_pct = calculate_max_drawdown(trades, initial_capital),
        win_rate         = sum(1 for t in trades if t.pnl > 0) / len(trades) if trades else 0.0,
        trade_log        = trades,
        seed             = seed,
    )

Three things make this backtesting agent production-quality rather than a toy:

No lookahead bias. Entry price uses candles[i+1]["open"] — the open price of the candle after the signal candle. Signals are calculated on data up to and including candles[i]. Future data never influences past decisions.

No LLM calls during replay. The live Signal Synthesiser uses gpt-4o to generate natural-language summaries. The backtesting agent uses rule-based signal classification only. This keeps backtest runtime under 30 seconds for 90 days of 1-hour data instead of 6+ hours.

Fixed seed for reproducibility. Any stochastic element in the simulation (simulated slippage, position sizing variation) uses a seeded RNG. Two runs with seed=42 produce identical results — essential for strategy comparison.



Implementation Phases

Phase 1: Environment Setup and Exchange Connectivity

Get live candle data flowing from Binance and Coinbase before writing a single indicator calculation. This phase establishes the data pipeline that everything else depends on — if the WebSocket connection is unreliable, the analysis pipeline is unreliable regardless of how well the indicators are implemented.

Key decisions to make:

  • ccxt vs ccxt-pro vs raw websockets: ccxt for REST (historical data, backtest); ccxt-pro for async WebSocket (production); raw websockets for prototype simplicity

  • Reconnect strategy: exponential backoff is mandatory — Binance disconnects market-data WebSocket streams after 24 hours; Coinbase drops connections under load; reconnect must be automatic with no human intervention

  • Candle buffer initialisation: at startup, pre-load the last 500 candles via REST so the indicators are ready immediately rather than waiting for 500 minutes of live data

  • Normalisation: every exchange uses slightly different field names and timestamp formats; normalise to MarketTick at the ingest point so all downstream nodes see a consistent schema
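To make the normalisation bullet concrete, here is a minimal sketch of mapping a Binance kline payload onto a unified schema. The `MarketTick` field names are assumptions for illustration; Binance's single-letter kline keys (`o`, `h`, `l`, `c`, `v` nested under `k`) are the exchange's actual wire format, while other exchanges use different shapes:

```python
from dataclasses import dataclass

@dataclass
class MarketTick:
    # Illustrative unified schema — field names assumed, not taken from the project
    exchange: str
    pair: str
    timestamp_ms: int
    open: float
    high: float
    low: float
    close: float
    volume: float

def normalise_binance_kline(raw: dict) -> MarketTick:
    """Binance kline payloads nest the candle under 'k' with 1-letter keys,
    and send prices as strings; normalise both at the ingest point."""
    k = raw["k"]
    return MarketTick(
        exchange="binance",
        pair=k["s"],
        timestamp_ms=k["t"],
        open=float(k["o"]), high=float(k["h"]),
        low=float(k["l"]), close=float(k["c"]),
        volume=float(k["v"]),
    )
```

A parallel `normalise_coinbase_*` function would target the same dataclass, so every downstream node sees one schema regardless of source.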

Building the exponential backoff reconnect loop and testing it against a Binance WebSocket kill-switch is covered in detail in the full course with working, tested code.


Phase 2: Indicator Sub-Agents with asyncio.gather

Implement all four indicator nodes and wire the fan-out/join pattern. This phase establishes the indicator accuracy baseline — run each indicator against known benchmark values from TradingView or a reference implementation before trusting it for signal generation.

Key decisions to make:

  • pandas-ta vs ta-lib vs manual implementation: pandas-ta is pure Python, pip-installable, and covers all required indicators in one library; ta-lib requires C compilation and is harder to deploy; manual implementation is error-prone

  • Warm-up gating: MACD requires 35 candles, Bollinger Bands 20, RSI 14; gate each sub-agent individually (not all-or-nothing) so VWAP and RSI are available while MACD warms up

  • Signal strength normalisation: raw indicator values have different scales (MACD in price units, RSI 0–100, BB bandwidth as a ratio); normalise all to 0–1 before passing to the Signal Synthesiser

  • return_exceptions=True in asyncio.gather: this is not optional — without it, an unhandled exception in one sub-agent (typically an indexing error on a thin, NaN-filled buffer) aborts all four, causing an entire candle to be skipped silently
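The strength-normalisation bullet can be illustrated with two tiny mappers (hypothetical helpers, not project code): RSI as distance from the neutral 50 line, and the Bollinger %B position scaled away from the band midpoint:

```python
def rsi_strength(rsi: float) -> float:
    """Map RSI (0-100) onto 0-1: distance from the neutral 50 line."""
    return min(abs(rsi - 50.0) / 50.0, 1.0)

def bollinger_strength(close: float, lower_band: float, upper_band: float) -> float:
    """Map the %B band position onto 0-1: distance from the band midpoint,
    so price at either band edge scores 1.0 and price at the midpoint 0.0."""
    pct_b = (close - lower_band) / (upper_band - lower_band)
    return min(abs(pct_b - 0.5) * 2.0, 1.0)
```

With every indicator emitting a 0–1 strength, the Signal Synthesiser can weight them against each other without unit mismatches.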

Benchmarking pandas-ta MACD output against TradingView reference values and verifying that fan-out parallelism correctly handles sub-agent failures is covered in detail in the full course.


Phase 3: Anomaly Detection and Signal Synthesis

The anomaly detector is the first node that touches the LLM. The gate matters enormously here — test the Z-score threshold against your target pairs before configuring the LLM invocation, or you will burn your OpenAI budget within the first hour of running on 20 pairs.

Key decisions to make:

  • Z-score window: the rolling mean and standard deviation for volume Z-score should use the full 500-candle buffer, not a short window — short windows produce too many false positives during naturally high-volume periods

  • Anomaly severity classification: not all Z-score > 3.0 events are equal; a volume spike on a 5-minute candle of a major pair is less significant than the same spike on a low-volume alt; weight by pair's average daily volume

  • Signal composite scoring: the weighting scheme (how much MACD contributes vs RSI vs BB position vs anomaly severity) is your strategy's core IP; start with equal weights and adjust based on backtest results, not intuition

  • LLM temperature for synthesis: 0.3 produces consistent signal type classification (OPPORTUNITY / RISK_ALERT / NEUTRAL) while allowing enough variation for natural summaries; higher values produce inconsistent signal_type classifications on similar data
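A composite score that excludes warming-up or NaN indicators might look like the following sketch. The function name and the None-exclusion convention are assumptions for illustration, not the project's actual weighting scheme:

```python
from typing import Optional

def composite_score(strengths: dict[str, Optional[float]],
                    weights: dict[str, float]) -> float:
    """Weighted mean over the indicators that produced a value. None is
    excluded from both numerator and denominator, so a warming-up indicator
    cannot poison the score with NaN."""
    numerator, denominator = 0.0, 0.0
    for name, strength in strengths.items():
        if strength is None:
            continue
        weight = weights.get(name, 0.0)
        numerator += weight * strength
        denominator += weight
    return numerator / denominator if denominator else 0.0
```

Starting from equal weights, as the article recommends, means the denominator renormalisation is what keeps scores comparable while one indicator is still warming up.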

Calibrating the anomaly Z-score threshold on 30 days of BTC/USDT historical data to achieve < 5% false critical alert rate is covered in detail in the full course with a worked calibration example.


Phase 4: Notification Dispatch and Dashboard

Build the Slack and Telegram notification delivery and the live Next.js dashboard. The notification rate limiter is the most important operational safeguard — without it, a volatile market session generates dozens of alerts per pair per hour and renders the system useless within its first week.

Key decisions to make:

  • Rate limit implementation: a per-pair last_alert_timestamp dict in state; check (now - last_alert_timestamp[pair]) > timedelta(minutes=15) before dispatching; critical anomaly alerts (severity: critical) bypass this check with a separate fast-path

  • Slack Block Kit vs plain text: Block Kit produces structured, readable alerts with sections, fields, and action buttons; plain text is simpler to implement but harder to scan quickly during a fast-moving market event

  • Telegram rate limit compliance: Telegram enforces 1 message/second per chat; queue alerts with asyncio.sleep(1.1) between messages; batch simultaneous multi-pair alerts into a single message (up to 3 signals per message)

  • Lightweight Charts integration: TradingView's Lightweight Charts library provides candlestick, line, and histogram series; MACD histogram requires a separate histogram series overlaid on the main chart; indicator updates stream via WebSocket INDICATOR_UPDATE events and are applied as incremental series updates, not full re-renders
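The rate-limit bullet above reduces to a small amount of state. A sketch with an injected clock so it stays deterministic under test (class and method names are illustrative):

```python
from datetime import datetime, timedelta

class AlertRateLimiter:
    """Per-pair cooldown; critical alerts bypass the window (assumed policy)."""
    def __init__(self, window: timedelta = timedelta(minutes=15)):
        self.window = window
        self.last_alert: dict[str, datetime] = {}

    def allow(self, pair: str, now: datetime, critical: bool = False) -> bool:
        last = self.last_alert.get(pair)
        if critical or last is None or (now - last) > self.window:
            self.last_alert[pair] = now  # record dispatch, start a new window
            return True
        return False
```

Passing `now` explicitly, rather than reading the clock inside the method, is what makes the simultaneous-critical-alert scenario unit-testable.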

Building the Slack Block Kit formatter and the per-pair rate limiter that correctly handles simultaneous critical alerts is covered in detail in the full course.


Phase 5: Backtesting Agent and Portfolio Optimisation

The backtesting agent and portfolio optimiser are the extended features that separate an alerting tool from a genuine analysis system. Building them in Phase 5 means you already have a validated indicator calculation pipeline to reuse — the backtesting agent calls the same calculate_indicators_sync function the live pipeline uses, ensuring backtest and live results are methodologically consistent.

Key decisions to make:

  • Lookahead bias prevention: this is the single most important correctness constraint in backtesting; write a unit test that verifies candles[i+1]["open"] is used for entry price, not candles[i]["close"]

  • Slippage and fees: include exchange fees (Binance standard: 0.1% per trade) and a realistic slippage estimate (0.05% for major pairs) in the trade PnL calculation; backtests without fees significantly overstate real-world performance

  • MVO stability guard: require minimum 60 days of daily returns before enabling MVO; fall back to equal-weight allocation when the covariance matrix condition number exceeds 1000 (indicating an unstable matrix due to insufficient data or highly correlated assets)

  • Backtest UI: the equity curve is the most useful visualisation — a Lightweight Charts line series showing portfolio value over the test period, with trade entry/exit markers as overlay shapes
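The fees-and-slippage bullet is worth making explicit, since it is pure arithmetic. A sketch of a round-trip return calculation using the 0.1% fee and 0.05% slippage figures from above (the function name is illustrative):

```python
def net_return_pct(entry: float, exit: float,
                   fee_rate: float = 0.001,    # 0.1% per side (Binance standard)
                   slippage: float = 0.0005) -> float:
    """Round-trip percentage return after paying fees and slippage on both legs."""
    buy_cost = entry * (1 + slippage) * (1 + fee_rate)    # worse fill + fee on entry
    sell_proceeds = exit * (1 - slippage) * (1 - fee_rate)  # worse fill + fee on exit
    return (sell_proceeds - buy_cost) / buy_cost * 100.0
```

At these rates a flat trade loses roughly 0.3%, which is exactly why fee-free backtests overstate real-world performance.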

Implementing the no-lookahead-bias guarantee with a formal unit test, and building the MVO covariance matrix stability check, is covered in detail in the full course.


Common Challenges

1. WebSocket disconnects silently without triggering reconnect logic.


Root cause: Binance and Coinbase WebSocket connections drop without sending a close frame in certain network conditions. The asyncio WebSocket listener is awaiting the next message indefinitely — it never raises an exception, so the reconnect handler never fires. Candle data silently stops arriving; the LangGraph state stops updating; the dashboard shows stale data.


Fix: Set a ping_interval and ping_timeout in the WebSocket connection. If no message is received within ping_timeout seconds, treat the connection as dropped and trigger reconnect. Additionally, monitor last_tick_age_ms in the health endpoint — alert if it exceeds 5× the expected candle interval.
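With the websockets library, ping_interval and ping_timeout are passed directly to websockets.connect(...). The surrounding reconnect and staleness logic can be sketched as two pure helpers (names are illustrative, not from the project):

```python
def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff for reconnect attempts: 1s, 2s, 4s, ... capped at 60s.
    Production code would add random jitter to avoid thundering-herd reconnects."""
    return min(cap, base * (2 ** attempt))

def feed_is_stale(last_tick_ms: int, now_ms: int,
                  candle_interval_ms: int = 60_000) -> bool:
    """Health check: treat the feed as dead if no tick has arrived within
    5x the expected candle interval, even when the socket still looks open."""
    return (now_ms - last_tick_ms) > 5 * candle_interval_ms
```

The reconnect loop would reset `attempt` to zero after the first successful message, so a healthy session never accumulates backoff.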


2. pandas-ta returns NaN silently on insufficient candle data.


Root cause: pandas-ta does not raise exceptions on insufficient data — it returns NaN for the first window rows. If you start processing immediately at launch with a thin buffer, NaN values propagate to the signal composite score, producing NaN * weight = NaN and a signal score of NaN.


Fix: Gate each sub-agent on explicit candle count checks. Treat any NaN in indicator output as None and exclude it from the composite score rather than letting it poison the calculation. Log a WARMING_UP state to the dashboard.


3. Simultaneous anomalies across 20 pairs exhaust OpenAI rate limit.


Root cause: A macro market event (e.g. an unexpected Fed announcement) causes correlated volume spikes across all 20 monitored pairs simultaneously. All 20 anomaly detectors breach the Z-score threshold within the same 1-minute candle and attempt to invoke gpt-4o-mini concurrently. Twenty simultaneous API calls spike tokens-per-minute above the Tier 1 limit.


Fix: Add an asyncio.Semaphore(5) guarding all LLM calls within the analysis pipeline. Maximum 5 concurrent LLM calls regardless of pair count. Queue the remaining calls — the 1.5-second delay is acceptable since anomaly detection results are not time-critical at the sub-second level.


4. Backtesting reports inflated returns due to lookahead bias.


Root cause: Using candles[i]["close"] as the entry price for a signal generated at candles[i] close. In live trading, the signal is generated at candle close and the earliest possible entry is the next candle's open — after market participants have already reacted to the same information.


Fix: Always use candles[i+1]["open"] as entry price. Write a unit test: generate a signal at candle i, assert the trade entry uses candles[i+1]["open"]. Run this test in CI to prevent regression.
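That unit test is short enough to show in full. A sketch, with illustrative function names:

```python
def entry_price_for_signal(candles: list[dict], signal_index: int) -> float:
    """A signal generated at candle i's close can only be filled at the
    NEXT candle's open; never at data the signal itself was computed on."""
    return candles[signal_index + 1]["open"]

def test_no_lookahead_bias():
    candles = [
        {"open": 100.0, "close": 101.0},  # signal fires at this candle's close
        {"open": 101.5, "close": 102.0},
    ]
    entry = entry_price_for_signal(candles, 0)
    assert entry == 101.5                  # next-candle open
    assert entry != candles[0]["close"]    # the lookahead-biased price

test_no_lookahead_bias()
```

Run under pytest in CI, this test fails the moment anyone "optimises" the entry price back to the signal candle's close.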


5. Portfolio MVO produces extreme allocations (100% in one asset).


Root cause: With fewer than 30 data points, the covariance matrix is rank-deficient or numerically unstable. The MVO optimiser finds corner solutions (all weight in one asset) that are artifacts of noise in the sparse return series, not genuine investment insights.


Fix: Check the covariance matrix condition number before running MVO. If np.linalg.cond(cov_matrix) > 1000, fall back to Equal-Risk-Contribution weighting. Display a "low confidence" badge on the portfolio recommendation in the dashboard when ERC fallback is active.
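The stability guard fits in a few lines of NumPy. This sketch substitutes a plain minimum-variance weighting for the full MVO and ERC optimisers (which are out of scope here); the guard logic is the point:

```python
import numpy as np

def safe_weights(returns: np.ndarray, cond_limit: float = 1000.0) -> np.ndarray:
    """Portfolio weights with a stability guard: fall back to equal weight
    when there is too little data or the covariance matrix is ill-conditioned.
    The minimum-variance step stands in for the real MVO/ERC optimisers."""
    n_assets = returns.shape[1]
    cov = np.cov(returns, rowvar=False)
    if returns.shape[0] < 30 or np.linalg.cond(cov) > cond_limit:
        return np.full(n_assets, 1.0 / n_assets)  # equal-weight fallback
    inv = np.linalg.inv(cov)
    raw = inv @ np.ones(n_assets)                 # unconstrained min-variance
    return raw / raw.sum()
```

The fallback path is also where the dashboard's "low confidence" badge would be set.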


6. Bollinger Band squeeze produces false breakout signals during sideways consolidation.


Root cause: Bollinger Band squeeze (bands narrowing to a local minimum) indicates low volatility and potential breakout — but it does not indicate direction. The Signal Synthesiser, given a squeeze signal without directional confirmation, may classify it as an OPPORTUNITY in the direction of the most recent price momentum, which is frequently wrong.


Fix: Require directional confirmation before generating an OPPORTUNITY signal on a BB squeeze: at minimum, a MACD signal in the same direction AND RSI not in the opposing extreme zone. Document this confluence requirement explicitly in the strategy registry YAML for the "Bollinger Squeeze" strategy.


7. Telegram API rejects messages during rapid multi-pair alert bursts.


Root cause: During a market event, 8–10 alerts fire within 2 seconds across all monitored pairs. The notification dispatcher calls the Telegram API sequentially without delay, hitting the 1 message/second per chat limit and receiving 429 Too Many Requests errors.


Fix: Use asyncio.sleep(1.1) between Telegram API calls. For bursts of 3+ simultaneous alerts, format a single combined message ("Multiple alerts fired: BTC/USDT OPPORTUNITY, ETH/USDT RISK_ALERT, SOL/USDT ANOMALY_ALERT — see dashboard for details"). Reserve individual detailed messages for single-pair events.


8. Candle buffer memory grows unbounded in long-running sessions.


Root cause: The rolling buffer is set to 500 candles, but the LangGraph state stores the entire candle_buffers dict including all pairs. With 20 pairs × 500 candles × 6 fields per candle = 60,000 data points, serialised to JSON for the checkpointer, each state checkpoint approaches 5–10MB. Over hours, checkpoint I/O becomes a bottleneck.


Fix: Store candle buffers in a separate in-memory structure (not in LangGraph state). Pass only the current tick and the indicator results through LangGraph state. Load the candle buffer from the in-memory store inside each indicator sub-agent function. This keeps the checkpointed state under 100KB regardless of buffer size.
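A minimal version of the out-of-state buffer store, using collections.deque for O(1) rolling eviction (the class name is illustrative):

```python
from collections import deque

class CandleBufferStore:
    """Rolling candle buffers kept OUTSIDE LangGraph state. Only the current
    tick and indicator outputs go through the checkpointer; sub-agents read
    full buffers from here instead of from serialized graph state."""
    def __init__(self, maxlen: int = 500):
        self.maxlen = maxlen
        self._buffers: dict[str, deque] = {}

    def append(self, pair: str, candle: tuple) -> None:
        self._buffers.setdefault(pair, deque(maxlen=self.maxlen)).append(candle)

    def snapshot(self, pair: str) -> list:
        """Copy out a buffer for indicator calculation (safe to mutate)."""
        return list(self._buffers.get(pair, ()))
```

Because deque evicts the oldest candle automatically at maxlen, memory stays bounded no matter how long the session runs.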

Building and debugging these issues against real Binance and Coinbase data — including reproducing the simultaneous anomaly rate-limit scenario by injecting synthetic correlated volume spikes — is covered step by step in the full course.




Ready to Build This Yourself?

Understanding an architecture is not the same as running it live against a Binance WebSocket at 3 AM. The gap between this article and a deployed Crypto Analyst Agent — with a calibrated anomaly detector, a backtested strategy, reliable Slack alerts, and a portfolio optimiser that doesn't produce nonsense — is filled with asyncio WebSocket debugging, Telegram rate-limit management, and lookahead bias prevention.

The Crypto Analyst Agent course on labs.codersarts.com gives you everything you need to go from zero to deployed:

  • ✅ Full source code for all 6 sprints — LangGraph backend + Next.js dashboard, fully commented

  • ✅ Live Binance and Coinbase WebSocket integration with automatic reconnect

  • ✅ All 4 indicator sub-agents (MACD, Bollinger Bands, RSI, VWAP) with asyncio.gather fan-out

  • ✅ Backtesting agent with lookahead-bias unit test, Sharpe ratio, and equity curve

  • ✅ Slack Block Kit and Telegram Bot formatter with rate limiter that survives market events

  • ✅ Portfolio optimisation with MVO stability guard and ERC fallback

  • ✅ LangSmith tracing setup — see per-node latency, LLM call frequency, and cost per signal

  • ✅ Docker Compose + Railway deployment guide

  • ✅ Lifetime access — including updates as ccxt and LangGraph APIs evolve

  • ✅ Community support via the Codersarts Discord

$30.00 for everything above.

Already have a trading strategy you want to implement or a specific exchange integration you need help with? Book a 1:1 guided session at $20/hour — backtest your strategy alongside the Codersarts team, calibrate your indicator thresholds against real historical data, and get your LangGraph graph reviewed live. Session recording included.



Conclusion

The Crypto Analyst Agent is an eight-layer system: a Next.js dashboard with live Lightweight Charts, a FastAPI WebSocket gateway, a LangGraph stateful graph, a Market Monitor node with exchange WebSocket connectivity, four concurrent indicator sub-agents, anomaly detection with a statistical LLM gate, signal synthesis with gpt-4o summaries, and notification dispatch to Slack and Telegram. Two architectural insights carry the design: the indicator sub-agent fan-out pattern, which collapses four sequential indicator calculations into one parallel batch; and the anomaly detection gate, which limits LLM invocations to statistically significant events and keeps running costs under $25/month.

The simplest place to start is a minimal stack: LangGraph + FastAPI + ccxt + pandas-ta + SQLite + Slack webhook. One trading pair, one exchange, no portfolio optimiser, no TimescaleDB. You can have a working real-time MACD/RSI/Bollinger alert system running locally and delivering Slack notifications in a weekend.

When you are ready to move from architecture to working code, the full course is waiting at labs.codersarts.com — complete source, live exchange integration, backtesting agent with lookahead-bias guarantees, and a full deployment walkthrough included.
