Five days ago the OCC, Federal Reserve and FDIC jointly revised SR 11-7 for the first time in fifteen years. Most of the LinkedIn coverage I’ve seen so far has been some variation of “regulators finally modernise MRM,” which is technically true and almost completely beside the point.

The actual news is in the scope section. Buried in the new guidance is this passage:

“Generative AI and agentic AI models are novel and rapidly evolving. As such, they are not within the scope of this guidance… the agencies plan to issue in the near future a request for information that addresses model risk management generally and considers, in particular, banks’ use of AI, including generative AI and agentic AI and AI-based models.”

So: the regulators have formally declined to be stretched. SR 11-7 — and OCC 2011-12 — no longer apply to your GenAI systems. They apply to your VaR model, your credit-scoring logistic regression, your IRB PD model: all the traditional quantitative apparatus they were written for in 2011. They explicitly do not apply to your loan-underwriting RAG pipeline or your KYC LLM classifier.

I want to be straightforward about something before going further. I’ve written twice now on the assumption that SR 11-7 already covered these systems and that the question was how to operationalise that. The April 17 rewrite says, on the record, that it doesn’t. That changes the regulatory wrapper around the argument. It does not change the engineering substance — but the framing has to be redone, and so does some of the advice.

What the rewrite actually does

A few things, in plain terms.

First, it carves GenAI and agentic AI out of MRM scope, with a promise of an RFI to follow. No timeline given, “in the near future” being the standard regulator phrase that can mean three months or eighteen.

Second, it explicitly rescinds OCC Bulletin 1997-24 (credit scoring) and the 2021 interagency statement on MRM for BSA/AML. Both deserve their own posts. The 1997 credit scoring rescission is particularly interesting because it removes a piece of US guidance that was being read into adverse-action and fair-lending arguments around algorithmic credit decisions.

Third, and this is the tailoring move that almost no one is flagging: the guidance now states that non-compliance will not result in supervisory criticism and that the standard is “most relevant” to banks above roughly $30bn in assets. Community and regional banks have been given explicit breathing room that SR 11-7 never gave them. This is the deepest tailoring move in US MRM doctrine since Dodd-Frank. If you run a sub-$30bn bank, your MRM programme just got materially less prescriptive.

Fourth — and this is the one practitioners need to internalise — the carve-out does not create an unregulated zone. It removes an MRM peg. It does not remove fair lending law, Reg B, the FCRA adverse-action regime, NYDFS Part 500, third-party risk management expectations under FFIEC, the SEC’s anti-fraud authority over AI-related disclosures, or any of the state-level activity (California DFPI, Colorado AI Act) that has been quietly filling regulatory vacuums for two years. Your GenAI loan-decisioning system is not less regulated today than it was last week. It’s regulated by a different set of authorities, and the consolidation that SR 11-7 provided — one frame to defend the model under — is gone for now.

Why this isn’t a retreat

It’s tempting to read this as the federal regulators stepping back from AI. Read in isolation, it looks that way. Read alongside the FCA’s AI Live Testing programme, the BoE FPC’s April tasking on agentic AI in payments, the ECB SSM’s January pivot to GenAI supervision generally, and the BaFin December guidance on AI under DORA — none of which involved new rules either — a different pattern emerges. Western prudential regulators are converging on a shared posture: no AI-specific rulemaking, intensified AI-specific supervision, and a narrowing of the theory of liability for enforcement. The Atkins SEC is doing the same thing on the capital-markets side.

The April 17 rewrite is the US version of that posture. The agencies are not saying GenAI is fine. They are saying SR 11-7 was the wrong frame, and they would like to consult the industry on what the right frame looks like before they commit. Hence the RFI.

That RFI is the leverage moment. Banks, vendors, governance practitioners and consortiums (FINOS very much included) now have an open window to shape what replaces SR 11-7’s coverage of GenAI. The submissions that will land best are the ones grounded in primary text — the specific failure modes the existing guidance can’t reach, the specific controls that do work in production, and the specific ways the four pillars (development, validation, governance, ongoing monitoring) need to flex for systems that don’t behave like a logistic regression.

If you want a flavour of what the regulators are already worried about, Governor Barr in Singapore last November put four requirements on the record: GenAI decisions need to be “well controlled, numerically and legally precise, explainable, and replicable.” His view was that current systems struggle with all four. That is the implicit prompt for the RFI.

What changes for practitioners this week

Three things.

One: stop citing SR 11-7 as the reason your GenAI system needs validation. It isn’t, anymore. The system still needs validation, but the regulatory peg you hang that on has shifted. For US banks, the cleaner framing is now consumer-protection law plus third-party risk plus internal model governance commitments your board has already made. For UK firms operating in parallel, SS1/23 still applies and is in fact stronger here — Principle 5, the model boundary language, and the PMA framework all carry through. The transatlantic divergence just got wider.

Two: the engineering challenges I wrote about previously haven’t got any easier. The model boundary question is, if anything, more important now, because the regulatory definition of “the model” has been removed and you have to define it yourself in vendor contracts, board reporting, and validation documentation. The determinism argument is also unaffected — deterministic, transparent inference still makes any of these systems easier to govern, regardless of whether SR 11-7 is the frame or whatever follows it.
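A rough sketch of what deterministic, auditable inference can look like in practice. This assumes a provider that lets you pin an exact model version and decoding settings; every name below (the config fields, the model identifier) is illustrative, not any vendor’s real API:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class InferenceConfig:
    model_id: str       # an exact pinned version, never a floating alias like "latest"
    temperature: float  # 0.0, i.e. greedy decoding, where the provider supports it
    seed: int           # fixed seed, for providers that expose one

def audit_record(config: InferenceConfig, prompt: str, response: str) -> dict:
    """Build a replayable audit record: the same pinned config and prompt
    should reproduce the same response, and the content hash makes any
    after-the-fact tampering with the record evident."""
    payload = {"config": asdict(config), "prompt": prompt, "response": response}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return {**payload, "sha256": digest}

# Illustrative usage: two identical calls produce byte-identical records.
cfg = InferenceConfig(model_id="example-model-2026-01-15", temperature=0.0, seed=42)
rec1 = audit_record(cfg, "Summarise the applicant's income history.", "response text")
rec2 = audit_record(cfg, "Summarise the applicant's income history.", "response text")
assert rec1["sha256"] == rec2["sha256"]
```

The point is replayability: a validator handed the record and access to the pinned model version should be able to re-run the prompt, reproduce the response, and verify the hash — which is exactly the property a silent weekly vendor model update destroys.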

Three: if your firm is going to submit to the RFI when it drops, start drafting now. The submissions that get attention are the ones written before the deadline panic, and the right ones are technically specific about controls that have actually been implemented in production. There is a small window here in which a single thoughtful submission can do more shaping than a year of LinkedIn posts after the fact.

The thing nobody is saying out loud

The April 17 rewrite confirms something I think most practitioners working in this space have suspected for a while: the gap between traditional model risk management and the actual mechanics of generative and agentic AI was too wide to bridge by interpretation. SR 11-7 was written for systems whose conceptual soundness could be expressed in a few equations. It was always an awkward fit for systems whose behaviour is shaped by a prompt, a vector index, a tool-use loop and a vendor’s silent weekly model update.

The regulators have, in effect, agreed. That is not a defeat for AI governance. It is a clearer starting position than the one we had a week ago, where everyone was trying to defend GenAI systems under guidance that didn’t quite reach them. The next twelve months will determine what fills the gap. The practitioners and firms that engage with the RFI process — not the ones waiting for the rules to land — will be the ones whose architecture and controls end up reflected in the eventual guidance.

For my own part, I’ll write more on what I think a useful RFI submission looks like once the document is published. Until then: read the actual scope section of OCC Bulletin 2026-13. It is short, and most of the authors of the LinkedIn hot takes about it clearly have not.


If you’re working on AI governance at a regulated firm and thinking about how to respond to the forthcoming RFI, I’d be interested to compare notes. You can find me on LinkedIn or email paul@paulmerrison.io.