
Five FS AI regulatory misconceptions the primary text refutes

Five widely repeated claims about SR 11-7, the EU AI Act, DORA, GPAI obligations, and FINRA — and what the actual regulatory text says, with the engineering implication for each.

Most of what gets repeated about FS AI regulation on LinkedIn is wrong, or at least loose enough with the primary text that the engineering implication runs the wrong way. The five below are the ones I see most often, and the ones costing real engineering hours at firms that take the LinkedIn version at face value: building stacks for “high-risk” use cases that aren’t, waiting for “AI rules” that aren’t coming, inheriting a “GPAI provider” obligation set that doesn’t apply.

Each section gives the claim, the primary text, and the engineering implication.

1. “SR 11-7 requires the [X] framework”

The claim. Some specific framework — NIST AI RMF, ISO/IEC 42001, a vendor’s “GenAI MRM platform” — is necessary or sufficient for SR 11-7 compliance.

The primary text. SR 11-7 names no framework. It is a principles-based document built around three pillars: development and implementation, validation, and governance. The 17 April 2026 rewrite by the OCC, Federal Reserve, and FDIC carved GenAI explicitly out of scope for the SR 11-7 baseline, leaving the agencies to consult on a separate regime via the forthcoming RFI. So the framework claim, already wrong on the text, is now wrong about a regulation that no longer applies to the use case people usually cite it for.

The engineering implication. Don’t buy a “GenAI MRM platform” because the vendor says it’s SR 11-7 compliant. The architecture answer SR 11-7 has always wanted is provenance, monitoring, and validation evidence, produced by whatever tooling your model lifecycle already runs, not a brand. Build for the signals (per-request traces, a model inventory that is accurate by construction, validator-readable artefacts), and the framework question answers itself in the supervisory conversation.
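
As a concrete sketch: a validator-readable, per-request evidence record might look like the following. The class and field names are illustrative assumptions, not anything SR 11-7 prescribes; the point is that provenance, inventory, and validation evidence fall out of one record written on every request.

```python
# Sketch of a per-request evidence record. Field names are illustrative
# assumptions; SR 11-7 names no schema, only the evidence it wants to see.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ModelTrace:
    request_id: str          # provenance: ties an output back to a request
    model_id: str            # the model inventory entry that served it
    model_version: str       # the exact artefact version
    inputs_hash: str         # what went in (hashed, not raw data)
    output_hash: str         # what came out
    validation_run_id: str   # validation evidence covering this version
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# A trace cannot be written without naming a model version and the
# validation run that covers it, so the inventory and the validation
# evidence stay accurate by construction.
```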

2. “The EU AI Act high-risk regime captures all bank AI”

The claim. Any AI system inside a regulated financial firm is “high-risk” under the EU AI Act and inherits the full high-risk stack: the Article 9 risk-management, Article 10 data-governance, and Article 14 human-oversight requirements, plus conformity assessment, registration, and ongoing monitoring.

The primary text. Annex III of the AI Act lists the high-risk use cases, and points 5(b) and 5(c) are the only FS-specific entries:

5(b): AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud.

5(c): AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.

That is the entire FS-specific high-risk surface. Fraud detection is explicitly excluded from 5(b). AML, KYC, transaction monitoring, customer service chatbots, embedded compliance tooling, code generation for engineers, RAG over internal policy documents — none of it is high-risk under Annex III.

The engineering implication. Don’t architect non-Annex-III use cases as if they were high-risk. The risk-management, data-governance, and human-oversight requirements of Articles 9–14, plus conformity assessment, registration, and post-market monitoring, are heavy. Run them where the regime requires them (credit scoring, life and health insurance pricing) and not on top of every internal RAG pipeline. The EBA factsheet of 21 November 2025 makes the equivalent point at the regulator level: where the FS-specific use case is in scope, existing CRR/EBA controls already satisfy most of the requirement. Mapping over inventing.
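
As a sketch of “mapping over inventing”: a scoping check can encode exactly the two Annex III entries quoted above and nothing else. The use-case labels are illustrative assumptions; only the two high-risk sets and the fraud-detection carve-out come from the regulatory text.

```python
# Sketch of an Annex III scoping check for FS use cases. Labels are
# illustrative; only 5(b), 5(c), and the 5(b) fraud carve-out are encoded.
HIGH_RISK_5B = {"creditworthiness_evaluation", "consumer_credit_scoring"}
HIGH_RISK_5C = {"life_insurance_pricing", "health_insurance_pricing"}
EXCLUDED_FROM_5B = {"fraud_detection"}  # explicit exception in 5(b)

def annex_iii_high_risk(use_case: str) -> bool:
    """True only for the FS-specific Annex III high-risk entries."""
    if use_case in EXCLUDED_FROM_5B:
        return False
    return use_case in HIGH_RISK_5B | HIGH_RISK_5C

assert annex_iii_high_risk("consumer_credit_scoring")
assert not annex_iii_high_risk("fraud_detection")
assert not annex_iii_high_risk("internal_rag_pipeline")
```

The design point is the default: anything not on the list falls through to not-high-risk, so the heavy stack attaches only where the regime attaches it.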

3. “DORA automatically captures AI vendors”

The claim. When OpenAI, Anthropic, Google DeepMind, or Mistral provide a foundation model used inside an EU financial entity, DORA’s critical ICT third-party service provider (CTPP) regime captures them and the regulator does the oversight.

The primary text. On 18 November 2025, the European Supervisory Authorities (EBA, EIOPA, ESMA) published the first list of designated CTPPs under DORA. Nineteen providers in total. Roughly five are generic cloud infrastructure (AWS, Azure, GCP and similar). Zero are pure-play foundation-model providers. OpenAI, Anthropic, Google DeepMind, Mistral, Cohere — none of them are on the list.

The engineering implication. The regulatory perimeter currently catches the cloud layer your foundation models run on, but not the foundation models themselves. That means foundation-model risk passes through to your own DORA programme: your contracts (acceptable-use clauses, training-data restrictions, audit rights, breach notification), your architecture (model routing, vendor isolation, kill-switch capability, multi-provider fallback), and your incident-response evidence have to carry it. The four-clause language NYDFS suggested in its 21 October 2025 letter on TPSP risk is a usable starting point. Don’t expect ESA oversight to substitute for the fourth-party engineering decisions you have to make yourself.
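
A minimal sketch of the architecture half of that list, assuming nothing about any real vendor SDK: one routing seam that provides vendor isolation, a kill switch, and multi-provider fallback, and that doubles as the chokepoint where incident-response evidence is logged.

```python
# Sketch of a multi-provider router. Provider names and the callable
# interface are illustrative assumptions, not any vendor's real SDK.
import logging
from typing import Callable

log = logging.getLogger("dora.evidence")

class ProviderRouter:
    def __init__(self, providers: dict[str, Callable[[str], str]]):
        self.providers = providers       # vendor name -> completion function
        self.disabled: set[str] = set()  # kill-switch state

    def kill(self, name: str) -> None:
        """Stop routing to a vendor immediately; log it as evidence."""
        self.disabled.add(name)
        log.warning("kill switch engaged for %s", name)

    def route(self, prompt: str, preference: list[str]) -> str:
        """Try vendors in preference order, skipping killed ones."""
        for name in preference:
            if name in self.disabled:
                continue
            try:
                return self.providers[name](prompt)
            except Exception:
                log.warning("fallback: %s failed, trying next", name)
        raise RuntimeError("no provider available")  # incident-response path
```

Every kill and every fallback is a logged event at a single seam, which is exactly the incident-response evidence your own programme has to carry.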

4. “GPAI obligations kick in at the downstream deployer”

The claim. Any firm fine-tuning a foundation model on its own data becomes a “provider” of a general-purpose AI model under the AI Act, and inherits the full GPAI provider obligation set — model documentation, copyright policy, training-data summary, the lot.

The primary text. The Commission’s Guidelines on the scope of obligations for providers of general-purpose AI models, adopted 18 July 2025, set the GPAI provider threshold at training compute exceeding 10²³ FLOPs. A downstream party fine-tuning an existing GPAI model only becomes a provider in its own right if the modification is “significant”, and the Commission’s indicative threshold for that is modification compute exceeding one-third of the training compute of the model being modified, or one-third of 10²³ FLOPs where the original training compute is unknown. Below that, you are a deployer or downstream modifier, not a GPAI provider.

The engineering implication. Most domain fine-tuning at FS firms (LoRA on a few thousand examples, RAG with a moderately tuned retriever, prompt-tuned agents, instruction-fine-tuned chat assistants) is many orders of magnitude below one-third of the original training compute; the back-of-envelope check below makes the gap concrete. You are not a GPAI provider. You are a deployer with deployer obligations, which exist and matter but are lighter than the provider stack. Architect accordingly: don’t inherit a model-card / training-data-summary / copyright-policy obligation surface that doesn’t apply to you, and don’t let a vendor sell you compliance for a regime you aren’t in. The provider stack belongs at the foundation-model layer; your stack is downstream evidence: input/output logs, deployment context, human oversight, post-deployment monitoring.
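
A back-of-envelope check, using the standard roughly 6 × parameters × tokens approximation for training compute. The model size and token count are illustrative assumptions, and the figure overstates LoRA’s true cost, since most weights are frozen.

```python
# Order-of-magnitude check against the GPAI modifier threshold.
# Assumptions: a 7B-parameter base model and 20M fine-tuning tokens;
# compute estimated with the standard ~6 * params * tokens rule of thumb.
GPAI_THRESHOLD_FLOPS = 1e23                    # Commission indicative threshold
MODIFIER_THRESHOLD = GPAI_THRESHOLD_FLOPS / 3  # fallback when original unknown

params = 7e9    # assumed base model size
tokens = 2e7    # assumed fine-tuning corpus

ft_flops = 6 * params * tokens  # ~8.4e17 FLOPs
print(f"fine-tune compute:  {ft_flops:.1e} FLOPs")
print(f"modifier threshold: {MODIFIER_THRESHOLD:.1e} FLOPs")
print(f"headroom: ~{MODIFIER_THRESHOLD / ft_flops:,.0f}x below threshold")
# ~40,000x below: deployer obligations apply, not the provider stack.
```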

5. “FINRA issued new AI rules in 2026”

The claim. The Financial Industry Regulatory Authority issued new rules in 2026 covering generative or agentic AI for broker-dealers, and firms must now implement a parallel AI-specific compliance regime.

The primary text. FINRA issued no new rules. Notice 24-09 (Firms’ Use of Artificial Intelligence, Including Large Language Models), published in 2024, remains authoritative. The 2026 Annual Regulatory Oversight Report’s GenAI section is supervisory guidance, framed as “considerations,” with the operative phrasing “may wish to consider.” The four areas it surfaces (agent access monitoring, human-in-the-loop, action and decision logging, behavioural guardrails) are explicitly framed as exploratory consideration areas, not enumerated mandatory controls. The applicable rules are the ones broker-dealers have been running for human traders: FINRA Rules 3110 (Supervision) and 3120 (Supervisory Control Systems), plus Exchange Act Rules 17a-3 and 17a-4 (recordkeeping).

The engineering implication. Don’t build a parallel AI-specific compliance stack. The supervisory infrastructure that produces evidence for human-trader oversight (access controls, structured logs of agent actions, escalation paths, periodic supervisory review) is the same infrastructure that produces evidence for AI-agent oversight; what changes is the schema of the events it ingests, not the architecture. The build is “make sure your existing 3110/3120 evidence pipeline accepts AI-agent events alongside human-trader events,” not “stand up a new AI compliance platform.” This is the same posture the prudential agencies took six weeks earlier, on 17 April 2026, with the SR 11-7 carve-out.
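
One way to read “the schema, not the architecture”: a single supervisory event record carries the actor type as a field, so the existing review, escalation, and retention machinery ingests AI-agent events unchanged. The record below is an illustrative assumption, not a schema FINRA specifies.

```python
# Sketch: one event schema for 3110/3120 evidence, human and AI alike.
# All field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Literal

@dataclass(frozen=True)
class SupervisedEvent:
    actor_id: str                                    # trader ID or agent ID
    actor_type: Literal["human_trader", "ai_agent"]  # the only schema change
    action: str                                      # e.g. "order_submitted"
    payload_ref: str                                 # pointer into 17a-4 storage
    escalation_path: str                             # who reviews this event
    guardrail_checks: tuple[str, ...] = ()           # guardrails evaluated

# Supervisory review, escalation, and retention consume SupervisedEvent
# regardless of actor_type; no parallel AI compliance stack is stood up.
```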

The pattern these share

All five misconceptions push in the same direction: toward a heavier compliance architecture than the regulation actually requires, and toward inheriting obligations that don’t apply. Each one costs engineering time and vendor budget and, most expensively, slows down the AI deployments the firm could otherwise be shipping.

The regulator side of this is convergent across jurisdictions. The Commission’s GPAI guidelines, the EBA factsheet, the FCA’s “no new rules” line on AI Live Testing, the SR 11-7 rewrite, and FINRA’s AROR all read as variations on the same posture: principles-based supervision applied to AI systems through existing rules, with regulators publishing examples of practice rather than issuing new mandates.

The build-side answer is symmetric. The architecture you actually need is the one that produces the evidence supervisors will ask for — provenance, traces, model inventory, vendor metadata, drift signals, human-oversight artefacts — across whichever combination of jurisdictions you operate in. One evidence architecture, multiple regulatory surfaces. The lighter compliance stack is the one that maps to the rules that actually apply.
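
To close with a sketch of that claim: one artefact catalogue, with each regulatory surface as a view over it. The artefact names and the mapping are illustrative placeholders drawn from the sections above, not a compliance matrix.

```python
# Sketch: one evidence catalogue, multiple regulatory views. The mapping
# is an illustrative placeholder, not a compliance matrix.
EVIDENCE_ARTEFACTS = {
    "per_request_traces", "model_inventory", "vendor_metadata",
    "drift_signals", "human_oversight_records",
}

REGULATORY_VIEWS = {
    "SR_11_7":    {"per_request_traces", "model_inventory", "drift_signals"},
    "EU_AI_Act":  {"model_inventory", "human_oversight_records"},
    "DORA":       {"vendor_metadata", "per_request_traces"},
    "FINRA_3110": {"per_request_traces", "human_oversight_records"},
}
assert all(view <= EVIDENCE_ARTEFACTS for view in REGULATORY_VIEWS.values())

def evidence_for(jurisdictions: list[str]) -> set[str]:
    """Union of artefacts the active regulatory surfaces will ask for."""
    return set().union(*(REGULATORY_VIEWS[j] for j in jurisdictions))

print(evidence_for(["SR_11_7", "DORA"]))  # one build, two surfaces
```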