Articles published in other venues.
Pieces I’ve written for industry publications, partner blogs, and technical journals. For longer-form essays, see Writing.
-
Agent Security: What NIST Wants You to Think About Before Your Agent Calls a Tool ↗
Your agent has AWS credentials. It can execute cloud CLI commands. NIST has opinions about this. Here's what tool-calling security looks like in practice.
-
Making Agents Reliable: Auto-Save, Stable IDs, and the Context Window Problem ↗
When your agent crashes at tool call 142 out of 150, you'd better hope the first 141 findings aren't lost. Here are the patterns that made our cost agents production-ready.
-
Not Everything Needs an LLM: When to Remove the AI from Your AI Agent ↗
We built an agent to sync compliance data. Then we built a version without the LLM that runs faster, costs less, and produces identical results. Knowing when to remove the AI is an underrated skill.
-
Two Ways to Build a Cost Agent (And Why We Use Both) ↗
We built two fundamentally different architectures for our cost optimization agents. One lets the LLM drive. The other relegates it to a single call. Both have their place.
-
Your Agent Found 2.4 Percent of the Savings. Now What? ↗
We built a cost optimization agent. It worked. Then we did the math: it was catching 2.4 percent of the savings. Here's what was missing and what we changed.
-
We Built an AI Agent to Cut Our Cloud Bill in Half ↗
Our cloud bill was attracting board-level attention. Instead of hiring a FinOps team, we built AI agents that scan AWS, GCP, and Azure weekly. Here's what we learned.
-
FINRA Just Told You What They'll Examine Your AI Agents On ↗
FINRA's 2026 report has a new section on AI agents with seven named risks and four considerations for firms. Here's what that actually means for your engineering team.
-
Determinism + Transparency: The Use Cases Waiting to Be Unlocked ↗
There's a world where LLMs sit comfortably inside traditional model-risk frameworks. With deterministic inference and transparent models, we're closer than you think.
-
Black Box Models in a Regulated World ↗
"Trust me, the training data is fine" is not a valid response to a regulator. Proprietary LLMs fail on basic transparency requirements that traditional models satisfy easily.
-
You Can't Validate What Never Gives the Same Answer Twice ↗
Your model risk team wants to validate your LLM. They run the same test twice. They get different answers. Meeting adjourned.
-
The Tier 3 Problem: Why Banks Can't Use LLMs for Real Decisions ↗
Banks are leaving $920bn in operational efficiency gains on the table because they can't get LLMs past their risk teams. The issue isn't caution — it's that current LLMs can't satisfy SR 11-7 requirements.
-
The Golden Signals of AI Governance ↗
When your CEO asks "are our AI systems compliant right now?" can you answer in less than three business days? If not, you're governing blind. Here are the five metrics that matter.
-
Global Policy, Local Context ↗
Your CISO says "no GPT-4 for customer data." Two weeks later, support asks why they can't use the better model for public FAQs. Simple AI policies get complicated when they meet reality.
-
Stop Asking Developers to Import Governance Libraries ↗
Your compliance team published a 47-page "Responsible AI Development Guide." Three months later, 6 out of 12 AI services aren't implementing PII filtering. You can't govern by documentation.
-
Automating the Audit Trail ↗
When your auditor asks for compliance evidence, how long does it take to produce? If the answer involves manually reconstructing logs from five different systems, you have an automation problem.
-
The "Tool Use" Problem: When AI Can Click Buttons ↗
Chatbots talk. Agents act. Most governance frameworks were designed for systems that only talk. Here's what happens when AI can actually do things in your production systems.
-
Shifting Compliance Left... and Right ↗
"Shift left" has been the DevOps mantra for a decade. For GenAI, you need to shift left AND right — because what you test before deployment isn't what runs after deployment.
-
Why "Point-in-Time" Validation Fails for GenAI ↗
Traditional point-in-time validation breaks down with GenAI systems. Models change, outputs vary, and attack surfaces are linguistic. Here's why you need continuous compliance checks at runtime.
-
Visualizing AI Governance: Why We Built a Modern CALM Tool ↗
Tetrate built a modern visualization tool for FINOS CALM to close the gap between AI governance frameworks and their practical implementation.
-
AI Governance for CISOs: Securing SaaS AI Deployment ↗
A practical guide for Chief Information Security Officers on implementing controls to govern distributed AI tools across enterprise SaaS applications.
-
Securing the MCP Supply Chain: A New Approach to Agentic AI Governance ↗
How agentic AI systems that depend on third-party Model Context Protocol (MCP) services face unique security challenges, and a proposal for infrastructure-level governance to address them.