The Bank of England’s April 2026 Financial Policy Committee record is the first time a G7 central bank has, on the record, named the operational mechanism by which agentic trading systems could produce a coordinated, machine-driven flash crash.[1] The phrasing is restrained — there is, the Bank says, little evidence that advanced AI is currently being used in ways that create systemic risk in UK finance — but the supervisory direction is unambiguous: the risk could increase rapidly, and the FPC is now running bespoke scenario analysis to model it.

This note reads the FPC record closely and argues three things. First, that the Bank has correctly identified herding as the operational primitive, not the model itself. Second, that the supervisory artifact most firms are missing is not a model risk policy — it is a written description of their agent topology and its decision-log schema. Third, that the simulation programme the Bank has begun is the right shape for this risk class, but its findings will land before most firms have produced the documentation needed to be supervisable inside it.

What the FPC record actually says

The substantive paragraphs identify three distinct dynamics[2]:

Firms’ private incentives to deploy agentic AI might fail to internalise negative externalities, such as more payments fraud or markets becoming more prone to sharp movements.

This is the externality argument. It is not new in financial-stability literature, but applying it to agentic systems is. The argument runs: a single firm deploying agents to pursue private return optimises for that return; the system-level consequence — that many firms doing the same converge on similar reactions to similar inputs — is not in any one firm’s objective function.

The Bank has begun bespoke scenario analysis and simulations to explore how multiple AI agents might synchronise trading decisions and amplify price moves — a phenomenon often described as herding.

This is the operational description. Synchronise and amplify are the two verbs that matter. Both are testable; both have observable signatures in market data. The Bank is not asking whether AI trading exists. It is asking whether, in stress, multiple AI-driven systems behave correlatedly — and whether that correlation is a function of common training data, common model providers, common harness architectures, or common tool surfaces.
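What a herding signature looks like in data is concrete. A minimal sketch, on synthetic data — the function name, the thresholds, and the two regimes below are all illustrative, not drawn from the FPC record — of one statistic a monitoring desk might compute: the mean pairwise correlation of net flows across notionally independent trading systems.

```python
import numpy as np

def mean_pairwise_correlation(flows: np.ndarray) -> float:
    """Mean off-diagonal correlation across trading systems.

    flows: (T, N) array — T time steps of net order flow for N
    notionally independent systems. A value near 1.0 means the
    systems are moving together (a herding signature); a value
    near 0.0 means their decisions are uncorrelated.
    """
    corr = np.corrcoef(flows, rowvar=False)    # (N, N) correlation matrix
    n = corr.shape[0]
    off_diag = corr[~np.eye(n, dtype=bool)]    # drop the self-correlations
    return float(off_diag.mean())

# Illustrative only: two synthetic regimes.
rng = np.random.default_rng(0)
common = rng.normal(size=(500, 1))             # shared signal (e.g. same model family)
idiosyncratic = rng.normal(size=(500, 8))

calm = idiosyncratic                           # systems act independently
stressed = 0.9 * common + 0.1 * idiosyncratic  # systems converge on the shared signal
# In the stressed regime the mean pairwise correlation is close to 1.0;
# in the calm regime it is close to 0.0.
```

The point of the sketch is that both of the Bank’s verbs are measurable: synchronisation shows up as rising pairwise correlation, and amplification shows up when that correlation coincides with widening price moves.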

This work could also explore how such dynamics could be mitigated, for example through exploring how agents’ objective functions should best take account of public policy objectives.

This is the policy lever. Objective functions is doing a lot of work in this sentence. It is not obvious what supervisory authority extends to mandating language inside a model’s reward signal — but the Bank is naming the lever publicly, which means firms should expect to be asked, in writing, what their agentic systems are optimising for and what bounds are placed on that optimisation.
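One hedged way to picture the written answer is a bounded objective: the raw return target wrapped in documented penalties and hard limits. Every name and number below is invented for illustration; the point is the shape of the artifact, not its contents.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ObjectiveBounds:
    """Illustrative bounds a firm might document for an agentic trader.

    None of these field names come from the FPC record; they sketch
    the kind of written answer to "what is the system optimising for,
    and what bounds are placed on that optimisation?"
    """
    max_gross_position: float  # hard cap on absolute exposure
    turnover_penalty: float    # discourages amplifying sharp moves
    drawdown_stop: float       # loss level at which the Risk agent halts trading

def bounded_objective(pnl: float, gross_position: float,
                      turnover: float, bounds: ObjectiveBounds) -> float:
    """Raw P&L objective, penalised for turnover and hard-bounded on exposure."""
    if gross_position > bounds.max_gross_position:
        return float("-inf")   # outside the documented envelope: never optimal
    return pnl - bounds.turnover_penalty * turnover
```

A firm holding a record like this can answer the Bank’s implied question in one page; a firm whose objective lives only inside prompt text cannot.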

Why herding is the right primitive

The temptation, reading the record, is to treat model monoculture as the systemic-risk variable. Model monoculture is a real concern — three model providers control the substantial majority of frontier capability deployed in 2026 — but it is the wrong primitive for supervision because it is not what firms control.

Herding, by contrast, is a function of variables firms do control:

  1. Model selection. Whether a firm uses provider A, provider B, or a sovereign / open-weight alternative.
  2. Harness architecture. Whether the firm runs a published agentic framework (TradingAgents, FinRL, AI-Trader) substantially as-shipped, or has materially modified the orchestration topology.
  3. Tool surface. Which market actions are reachable, with what authorization granularity, against which venues.
  4. Decision-log schema. What is recorded, at what boundary, with what retention.
  5. Objective function and risk constraints. Including stop-loss agents, position sizing, and the rules under which a Risk agent overrides analyst agents in the firm’s own internal protocol.

A firm that has documented these five surfaces in writing has produced the artifact a supervisor will need to evaluate herding risk in that specific system. A firm that has not is exposed to a supervisory question it cannot answer on the timescale on which such questions typically arrive.
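The five surfaces lend themselves to a machine-readable descriptor. A minimal sketch, with every type and field name invented for illustration — a firm’s real descriptor would be richer, but this is the minimum shape:

```python
from dataclasses import dataclass

@dataclass
class AgentSpec:
    role: str         # e.g. "Sentiment", "Fundamentals", "Risk", "Execution"
    tools: list[str]  # market actions reachable by this agent

@dataclass
class SystemDescriptor:
    """One record per agentic trading system, covering the five surfaces."""
    model: str                      # 1. model selection (provider or open-weight)
    harness: str                    # 2. harness architecture, modifications noted
    agents: list[AgentSpec]         # 3. tool surface, enumerated per agent
    log_schema_version: str         # 4. decision-log schema in force
    risk_overrides: dict[str, str]  # 5. override authority in the risk-constraint topology
```

A descriptor like this is trivially diffable, which matters: the supervisory question is rarely “what is your topology” in the abstract, but “what changed between the calm period and the stressed one.”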

The two-cycle problem

The Bank’s simulation programme will run on its own clock — international cooperation with peer central banks, scenario design, calibration against historical episodes including the May 2010 flash crash and the August 2024 yen-carry unwind. The output is likely to be a published paper or set of papers within twelve to eighteen months of the April 2026 record.

The supervised-firm side runs on a different clock. Most firms standing up agentic execution today are doing so on roadmaps that pre-date the FPC record. The architectural choices being made in 2026 — which model provider, which harness, which tool surface, which decision-log schema — will be locked in long before the Bank’s simulations name what good looks like.

The risk for firms is not that the supervisory bar is ill-defined. The risk is that it is being defined alongside architectural decisions that are being made now. Firms that produce clear written descriptions of their agent topology now will be supervisable against whatever frame emerges; firms that defer will discover that documenting an agentic system retroactively is operationally hard.

[Figure: Systemic Risk Monitoring Map — risk clusters (model monoculture, agent herding, RAG concentration, harness failure, execution cascades), six monitoring domains (Behavioral Correlation, Data Integrity, Model Governance, Execution Controls, Market Abuse Signals, Institutional Impact), signal sources, and supervisory questions: the supervisory surface the FPC simulation programme is being calibrated against.]

What firms should write down

The minimum supervisable artifact is a five-section document, in writing, kept current:

  1. Agent topology. Named roles (Sentiment, Fundamentals, Quant, Risk, Execution, etc.), their tool surfaces, and the orchestrator’s escalation rules.
  2. Authorization granularity. Which decisions require human authorization at runtime; which are pre-authorized within a documented envelope; which are logged-only.
  3. Decision log. Schema, retention, who reads it, what triggers its review.
  4. Risk-constraint topology. Stop-loss agent design, position-sizing logic, override authority.
  5. Counterparty and venue map. Which venues are reachable and which routing logic governs that reach.

This is the document the Bank’s simulation programme will be calibrated against, even if the Bank does not say so explicitly. Producing it before the simulation programme reports is the cheap version of compliance with whatever frame emerges.
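A hedged sketch of what one entry in the section-3 decision log might serialise to. The field names are illustrative, not a standard; the design requirement they encode is that every runtime decision is attributable (which agent), reviewable (what it saw, why), and classifiable against the section-2 authorization envelope.

```python
import json
from datetime import datetime, timezone

def decision_record(agent: str, action: str, authorization: str,
                    inputs_digest: str, rationale: str) -> str:
    """Serialise one agent decision as a JSON-lines log entry.

    authorization is one of "human" (approved at runtime),
    "envelope" (pre-authorized within a documented envelope),
    or "logged-only".
    """
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "authorization": authorization,
        "inputs_digest": inputs_digest,  # hash of the context the agent acted on
        "rationale": rationale,          # model-stated reason, kept verbatim
    })
```

JSON lines is one reasonable choice here because entries are append-only, greppable, and cheap to retain for the full review window the section-3 schema commits to.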

The asymmetric risk

Supervisors rarely know more than the firms they supervise about systemic mechanics. The April 2026 record is unusual in that the Bank is, plainly, writing in advance of widespread deployment — naming a risk that has not yet materialised, in order to shape how it is built before it does. That is the right move from a financial-stability perspective. It also means that firms which treat the FPC record as forward-looking commentary will be wrong-footed by how quickly forward-looking commentary becomes supervisory expectation.

The asymmetric risk, for an incumbent or a registered firm, is not that the simulation programme will reveal a problem the firm cannot solve. It is that the documentation gap will be revealed first, and the firm will be assessed against whatever question the supervisor finds easiest to ask.


Notes and citations

  1. Bank of England, Financial Stability in Focus: Artificial intelligence in the financial system, April 2025 with subsequent FPC discussion through April 2026. The 2026 FPC record adds the agentic-AI scenario analysis programme.

  2. All quoted material is from the public FPC record and adjacent BoE statements as reported in April 2026.

  3. See the Treasury Select Committee’s 2026 report on AI in financial services and the regulator responses published by the FCA, the BoE, and the PRA in the same window.

  4. On the May 6, 2010 flash crash, the SEC/CFTC joint report (September 30, 2010) remains the canonical reference; the structural parallels — algorithmic execution under stress, liquidity withdrawal, price discovery breakdown — inform the BoE’s simulation design.

  5. On model concentration in 2026: see public market-share filings and capability benchmarks; three frontier-model providers account for the substantial majority of agentic deployments.

  6. On herding generally in financial markets: Bikhchandani, Hirshleifer, and Welch, “A Theory of Fads, Fashion, Custom, and Cultural Change as Informational Cascades,” Journal of Political Economy 100(5), 1992. The theoretical basis for what the BoE is naming.

  7. On agentic-system documentation expectations: CSA Staff Notice 11-348, CIRO Guidance Note GN-3300, and the FCA’s 2024–2025 AI strategy materials.

  8. On objective functions and policy: see related BIS work on AI in central banking and the IOSCO consultation on AI use by intermediaries.

  9. On retroactive documentation difficulty: practitioner observation across multiple agentic-architecture engagements in 2025–2026.