Essays on AI governance, as practised.
Pieces on Model Risk Management, regulatory posture, and the engineering realities of putting generative and agentic AI into banks — written for the people who have to defend it to an examiner.
-
Where Agentic AI Controls Actually Run: FINOS AIGF v2.0 and the Service-Mesh Control Plane
FINRA, the PRA's SS1/23, and EIOPA all expect specific controls on agentic AI. FINOS AIGF v2.0 names six new risks and six mitigations that describe a single architectural pattern — the same one service meshes have been applying to microservices for a decade.
-
FINRA's Agentic AI Considerations Already Live in Your Rulebook
FINRA's December 2025 AROR doesn't write new AI rules. It flags four consideration areas for firms developing AI agents — each of which already maps onto Rule 3110, 3120, or 17a-3/4. Here's what each one means in practice for a broker-dealer.
-
SR 11-7 Just Wrote Itself Out of the GenAI Conversation
The April 17, 2026 interagency MRM rewrite formally excludes generative and agentic AI from scope. That's not a retreat — it's an RFI window.
-
Why Deterministic Language Models Would Be A Big Deal For Banks
Coming to the same answer twice is kind of cool.
-
Why AI Evals Aren't Optional: The Governance Case for Systematic Evaluation
You can't govern what you can't measure. Here's why evaluation is the foundation of effective AI governance, not a technical nice-to-have.
-
The Model Boundary Question: Why Your LLM Application is Actually a System Model
Most banks think the 'model' is GPT-4. Regulators will disagree.
-
OpenAI's Long-Term Model Is...
An agent for everything?
-
Aardvark Could Be Great - If It Can Really Understand Threat Models
Who ever really trusted SCA anyway?
-
The AI Bias Paradox: Why 'Fairness' Is Harder Than You Think
Different definitions of fairness are mathematically incompatible. Your AI can't be 'fair' in every way - and that matters for FSI lending models.
-
Building an AI Risk Assessment Process
Every AI project should go through risk assessment. Here's an 8-step process that actually works for AI, based on the FINOS framework.
-
Prompt Injection Attacks: What They Are and Why FSI Should Care
Prompt injection is to LLMs what SQL injection was to databases in 2005. Except most banks don't know it exists yet.
-
AI Risk Tiering: Not All AI Systems Are Created Equal
Applying the same governance to internal chatbots and customer-facing credit decisions is either too heavy or too light. Here's how to tier AI by risk and apply proportionate controls.
-
Third-Party AI Risk: You Own the Risk, Not Just the API
Using Azure OpenAI doesn't mean Microsoft handles AI governance for you. You still own the risk. Here's what that means in practice
-
AI System Observability: What to Log and Why
Your traditional logging strategy doesn't work for AI systems. Here's what you actually need to log (and what you shouldn't)
-
The RAG Security Problem Nobody's Talking About
RAG gives LLMs access to your documents and databases. It's also a data security nightmare if you're not careful about preserving access controls.
-
Inside the FINOS AI Governance Framework: A Practical Tour
A guided tour of what makes the FINOS AI Governance Framework different from every other AI governance framework - and why it's actually useful for banks.