FINRA's Agentic AI Considerations Already Live in Your Rulebook
FINRA's December 2025 AROR doesn't write new AI rules. It flags four consideration areas for firms developing AI agents — each of which already maps onto Rule 3110, 3120, or 17a-3/4. Here's what each one means in practice for a broker-dealer.
FINRA’s December 9, 2025 Annual Regulatory Oversight Report ran to about 80 pages. The two paragraphs that matter for anyone deploying agentic AI in a US broker-dealer are buried in the Gen AI section under a sub-heading called “Emerging Trends in GenAI: Agents.” Most of the LinkedIn coverage I’ve seen treats the AROR as a routine annual document. It is and it isn’t, this year. It’s routine in the sense that FINRA has not written a new rule, has not issued a principles document, and has not adopted a high-risk taxonomy. It’s not routine in the sense that those two paragraphs name four things FINRA wants firms to think about when they deploy AI agents — and each one already lives inside the supervisory rules every broker-dealer is already running.
That second part is the practitioner story. FINRA’s four consideration areas are familiar as soon as you read them with Rule 3110 (Supervision), Rule 3120 (Supervisory Control Systems), and 17a-3/4 (recordkeeping) open in the next tab. The whole exercise is closer to a translation than to a new rulebook.
This is, structurally, the same move the prudential regulators made about four months later on April 17 with the MRM rewrite: intensified attention to existing rules, no new AI-specific rulemaking. FINRA got there first. It just got there more quietly, and with softer language than the prudential agencies eventually used.
What FINRA actually said
The relevant passage, verbatim:
“Firms exploring and developing AI agents may wish to consider whether the autonomous nature of AI agents presents the firm with novel regulatory, supervisory or operational considerations. The rapidly evolving landscape and capabilities of AI agents may call for supervisory processes that are specific to the type and scope of the AI agent being implemented. Considerations may include: how to monitor agent system access and data handling; where to have ‘human in the loop’ agent oversight protocols or practices; how to track agent actions and decisions; or how to establish guardrails or control mechanisms to limit or restrict agent behaviors, actions or decisions.”
Two things to notice about the language. First, the modal verbs are deliberately soft — “may wish to consider,” “may call for,” “considerations may include.” This is not a regulator drawing a line. It is a regulator naming an area where firms should expect to need to think harder, without prescribing what that thinking has to produce. Second, the four areas at the end are framed as questions, not as enumerated controls. “How to monitor” is an open question; “monitoring agent access” would be an instruction. FINRA chose the question form on purpose.
Practitioners reading this will recognise it as the regulator’s polite way of saying: we know this is coming, we know existing supervisory rules already cover most of it, we are not going to tell you exactly what to do because we want firms to figure it out — but the four things below are the four things that need to be figured out. That is a useful signal. It is not a baseline.
Mapping each consideration onto existing supervision
The consideration areas themselves are not novel in supervisory terms. They are novel in the sense that FINRA has named them out loud as agent-specific concerns. The mapping exercise is straightforward: for each one, identify the existing rule, written long before agentic AI was a category, that already covers it.
How to monitor agent system access and data handling. Rule 3110(a)(2) requires the firm to designate a registered principal responsible for supervising each business line; Rule 3110(b)(4) requires written supervisory procedures for the review of incoming and outgoing electronic correspondence; and 17a-4(b)(4) requires the preservation of those communications for the appropriate retention period. Translate to agents: an identified supervising principal for each agent deployment, written procedures for what data the agent is allowed to touch, and durable logs that satisfy the recordkeeping rule. None of this is technically novel. It just has to be operationalised against systems whose “communications” include vector retrieval, tool invocations, and inter-agent messages, not just emails and chats.
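To make that first area concrete, here is a minimal Python sketch of access monitoring, with every name (AgentAuditLog, the scope strings, the principal) invented for illustration; it is a shape, not a prescribed control. Each tool invocation is checked against the data scope documented in the firm's written procedures and logged with the supervising principal attached, so the log itself becomes the recordkeeping artifact.

```python
# Minimal sketch of agent access monitoring. All names are hypothetical;
# a production system would write to compliant durable storage, not a list.
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class ToolCall:
    agent_id: str           # which agent deployment acted
    tool: str               # e.g. "crm_lookup", "order_entry"
    data_scope: str         # the data domain this call touches
    payload_summary: str    # redacted summary, never raw customer data
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AgentAuditLog:
    """Append-only log of tool invocations, keyed to the supervising
    principal designated under Rule 3110(a)(2)."""

    def __init__(self, supervising_principal: str, allowed_scopes: set[str]):
        self.supervising_principal = supervising_principal
        self.allowed_scopes = allowed_scopes  # from the firm's WSPs
        self.events: list[dict] = []          # stand-in for durable storage

    def record(self, call: ToolCall) -> bool:
        permitted = call.data_scope in self.allowed_scopes
        self.events.append({
            **asdict(call),
            "supervising_principal": self.supervising_principal,
            "permitted": permitted,
        })
        return permitted  # the caller blocks the tool call when False

# Usage: gate and log a lookup against the agent's documented data scope.
log = AgentAuditLog("J. Smith, Series 24", allowed_scopes={"market_data"})
ok = log.record(ToolCall("agent-7", "crm_lookup", "customer_pii", "acct summary"))
print(ok)  # False: the call falls outside the documented scope
print(json.dumps(log.events[-1], indent=2))
```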
Where to have ‘human in the loop’ agent oversight protocols or practices. Rule 3110(b) has carried the supervision-of-orders requirement since long before AI was a category. The agentic question is which actions require a human checkpoint and which do not. The wrong answer is “all of them” (defeats the operational case for the agent) or “none of them” (defeats the supervisory case for the firm). The right answer is risk-tiered, documented, and consistent with the firm’s existing supervisory escalation procedures for human traders. If your firm’s procedures require a principal pre-approval for orders above a notional threshold, the agent’s procedures should require the same — and the system should make it impossible for the agent to bypass that checkpoint. None of that is FINRA-specific to agents; it is FINRA-specific to supervision.
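A minimal sketch of that risk-tiered checkpoint, assuming a notional-threshold rule of the kind described above; the threshold, the queue, and the function names are all hypothetical. The invariant worth copying is that the approval check sits outside the agent, so there is no code path by which the agent reaches the venue without passing through it.

```python
# Minimal sketch of a risk-tiered human-in-the-loop checkpoint.
# Threshold and function names are hypothetical, taken from the firm's WSPs.
from dataclasses import dataclass

PRINCIPAL_APPROVAL_THRESHOLD = 1_000_000  # notional; mirrors the human-trader rule

@dataclass
class ProposedOrder:
    agent_id: str
    symbol: str
    side: str       # "buy" or "sell"
    notional: float

def requires_principal_approval(order: ProposedOrder) -> bool:
    """Apply the same escalation rule the firm applies to human traders."""
    return order.notional >= PRINCIPAL_APPROVAL_THRESHOLD

def submit(order: ProposedOrder) -> str:
    # The agent calls submit(); there is no alternative route to the venue,
    # which is what makes the checkpoint supervisory rather than advisory.
    if requires_principal_approval(order):
        return enqueue_for_principal_review(order)
    return route_to_venue(order)

def enqueue_for_principal_review(order: ProposedOrder) -> str:
    return f"HELD: {order.symbol} {order.notional:,.0f} awaiting principal sign-off"

def route_to_venue(order: ProposedOrder) -> str:
    return f"ROUTED: {order.symbol} {order.notional:,.0f}"

print(submit(ProposedOrder("agent-7", "AAPL", "buy", 250_000)))    # routed
print(submit(ProposedOrder("agent-7", "AAPL", "buy", 5_000_000)))  # held
```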
How to track agent actions and decisions. This is 17a-3/4 territory. The recordkeeping rules don’t care that the actor was an agent. They require the firm to preserve records of orders, communications, and supervisory reviews for periods running from three to six years depending on the record type, in a non-rewriteable format (or, since the SEC’s 2022 amendments to 17a-4(f), in an electronic recordkeeping system with a complete audit trail), indexed and accessible. Agentic systems generate vastly more events than human-driven ones — every tool call, every retrieved context window, every model output is potentially recordable — so the engineering question is which subset constitutes a “record” under the rules. The conservative read is: anything that, if a human had done it, would have been recordable. That includes the agent’s decision rationale and the inputs the agent received, not just the agent’s outputs.
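One hypothetical shape for such a record, with field names invented for illustration: it captures the inputs, retrieved context, rationale, and rejected options alongside the action taken, then seals the whole thing with a content hash so tampering is detectable on replay. In production the serialized record would go to compliant storage under 17a-4(f), not stay in memory.

```python
# Minimal sketch of an agent decision record under the conservative read above.
# Field names are hypothetical; storage is out of scope for the sketch.
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass
class AgentDecisionRecord:
    decision_id: str
    agent_id: str
    inputs: dict             # the instruction the agent received
    retrieved_context: list  # what the agent pulled in before deciding
    rationale: str           # the agent's stated reasoning
    rejected_options: list   # actions considered and not taken
    action_taken: dict       # the output a supervisory review starts from

    def seal(self) -> str:
        """Content hash stored alongside the record, so later tampering
        is detectable when the record is replayed for an examiner."""
        blob = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

rec = AgentDecisionRecord(
    decision_id="d-2026-0114-0042",
    agent_id="agent-7",
    inputs={"instruction": "rebalance model portfolio 12"},
    retrieved_context=["position snapshot", "model weights v3"],
    rationale="drift on tech sleeve exceeded the 2% band",
    rejected_options=["no action: drift check failed"],
    action_taken={"order": "sell 400 XYZ"},
)
print(rec.seal()[:16], "...")  # persist the hash with the record
```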
How to establish guardrails or control mechanisms to limit or restrict agent behaviors, actions or decisions. This is the most engineering-heavy of the four. Rule 3120 (Supervisory Control Systems) is the regulatory peg — every firm has to have a system of supervisory controls, test it annually, and report on it to senior management. Agent guardrails are naturally part of that system. Which means the controls themselves — what the agent is allowed to do, what it cannot do, what triggers escalation — should sit inside the same testing, validation, and reporting cycle as the firm’s existing supervisory controls. They are not a one-time configuration step.
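What that can look like in practice, sketched with a hypothetical policy structure: the guardrail is versioned data with a named approver, and the assertions at the end are the kind of checks that belong in the annual Rule 3120 test cycle rather than in a one-time deployment script.

```python
# Minimal sketch of guardrails as versioned supervisory controls.
# Policy fields, versions, and action names are all hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class GuardrailPolicy:
    version: str
    approved_by: str                     # sign-off recorded for the 3120 report
    allowed_actions: frozenset[str]
    escalation_triggers: frozenset[str]  # conditions that force human review

POLICY_V3 = GuardrailPolicy(
    version="2026.01-v3",
    approved_by="CCO office, 2026-01-10",
    allowed_actions=frozenset({"quote", "rebalance", "order_entry"}),
    escalation_triggers=frozenset({"restricted_list_hit", "limit_breach"}),
)

def check(policy: GuardrailPolicy, action: str, flags: set[str]) -> str:
    if action not in policy.allowed_actions:
        return "DENY"      # hard limit on agent behaviour
    if flags & policy.escalation_triggers:
        return "ESCALATE"  # human checkpoint, not a silent pass
    return "ALLOW"

# Annual control test: prove the control still denies what it must deny.
assert check(POLICY_V3, "transfer_funds", set()) == "DENY"
assert check(POLICY_V3, "order_entry", {"restricted_list_hit"}) == "ESCALATE"
assert check(POLICY_V3, "quote", set()) == "ALLOW"
print("guardrail control test passed for", POLICY_V3.version)
```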
The “associated person” question
There is one open question FINRA didn’t answer, and it shapes how all four consideration areas land in practice: whether an AI agent is, for supervisory purposes, an associated person.
Rule 3110(a) requires firms to “establish and maintain a system to supervise the activities of each associated person.” If an agent is one, supervisory obligations attach more or less directly. If an agent isn’t one, the firm is supervising the human who deploys it, and the agent is treated as a tool — closer to a market data system or an order management platform than to a trader.
FINRA didn’t pick a side. The four consideration areas work either way. But the “novel regulatory, supervisory or operational considerations” sentence is most naturally read as: if the agent is acting with enough autonomy to do things a human associated person would do, the firm should think about supervising it as if it were one. That is not a legal determination — FINRA explicitly stops short of one — but it is the operational posture the four consideration areas suggest.
Practitioners I’ve talked to are split on this. The cleaner regulatory framing is to treat the agent as an instrumentality of a human associated person, and supervise the human. The harder-but-more-defensible framing is to treat the agent itself as the supervised entity. The four-area list is consistent with either. Firms will want to make their own determination, document it, and stick to it — because the worst posture is changing the framing every time a new exam question lands.
What this changes for broker-dealers and asset managers
Three things, concretely.
One: any firm deploying agentic AI in a market-facing capacity now has a clean reference point against which to map its controls. The mapping exercise — taking your existing 3110/3120 procedures and showing how each FINRA consideration area is already addressed — is the document FINRA examiners will most likely ask for in 2026 exams. Doing it now, before an exam, is significantly cheaper than doing it under deadline. It is also the document a firm can hand to its board or its risk committee to demonstrate that agentic deployments are not an unsupervised category.
Two: the recordkeeping question is more demanding than it looks. Most agent platforms log inputs and outputs but not the intermediate reasoning, the retrieved context, or the rejected tool options. Under 17a-3/4, that may not be enough. The right framing is: if a regulator asked you to reconstruct, six months later, why this agent did this thing for this customer, could you? If not, your logging architecture is below the recordkeeping bar. (A minimal version of that reconstruction check is sketched in code after point three.)
Three: the guardrails-as-supervisory-controls framing means agent restrictions are not just an engineering preference. They sit naturally under Rule 3120, with the annual testing, documentation, and management reporting that implies. Treat them the way you treat any other supervisory control system — versioned, tested, reviewed, reported.
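The reconstruction check flagged under point two can itself be expressed as code. A minimal sketch, assuming a hypothetical event store and an invented required-field list: either the full decision trail comes back, or the gap is named, which is exactly what an examiner's request would surface six months on.

```python
# Minimal sketch of a 17a-3/4 reconstruction completeness check.
# The store, decision id, and field list are hypothetical.
REQUIRED_FIELDS = {
    "inputs", "retrieved_context", "rationale",
    "rejected_options", "action_taken",
}

def reconstruct(event_store: dict[str, dict], decision_id: str) -> dict:
    record = event_store.get(decision_id)
    if record is None:
        raise LookupError(f"no record for {decision_id}: below the 17a-4 bar")
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        # Inputs and outputs alone do not reconstruct a decision.
        raise ValueError(f"incomplete record, missing: {sorted(missing)}")
    return record

# A typical platform log: inputs and outputs only, no reasoning trail.
store = {
    "d-0042": {
        "inputs": {"instruction": "rebalance"},
        "action_taken": {"order": "sell 400 XYZ"},
    }
}
try:
    reconstruct(store, "d-0042")
except ValueError as err:
    print(err)  # names the missing fields: context, rationale, rejected options
```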
The broader signal
What FINRA has done here — and what the prudential regulators did on April 17 — is the same move from two different angles. Both have declined to issue new AI-specific rules. Both have flagged the supervisory areas where existing rules need to be applied with more care. And both have shifted the practical work onto firms to demonstrate, document, and defend their controls under regulations that were drafted long before agentic systems existed.
The question for broker-dealers and asset managers is not “what will the AI rule require us to do.” That question was never going to be asked, on either the prudential or the capital-markets side. The question is: how do you defend your agent’s supervisory architecture under Rule 3110 — and your evidence under 17a-3/4 — the next time you sit across the table from a FINRA examiner. That conversation is happening on existing rules, not future ones.
Tomorrow I’ll write about where those controls actually live in production — and why the service-mesh control plane is, for most firms running agentic systems at scale, the obvious place to enforce them.
If you’re working through the FINRA agentic-AI mapping exercise at a broker-dealer or asset manager, I’d be interested to compare notes. You can find me on LinkedIn or email paul@paulmerrison.io.