Elsewhere

Articles published in other venues.

Pieces I’ve written for industry publications, partner blogs, and technical journals. For longer-form essays, see Writing.

  1. Tetrate Blog

    Agent Security: What NIST Wants You to Think About Before Your Agent Calls a Tool

    Your agent has AWS credentials. It can execute cloud CLI commands. NIST has opinions about this. Here's what tool-calling security looks like in practice.

  2. Tetrate Blog

    Making Agents Reliable: Auto-Save, Stable IDs, and the Context Window Problem

    When your agent crashes at tool call 142 out of 150, you'd better hope the first 141 findings aren't lost. Here are the patterns that made our cost agents production-ready.

  3. Tetrate Blog

    Not Everything Needs an LLM: When to Remove the AI from Your AI Agent

    We built an agent to sync compliance data. Then we built a version without the LLM that runs faster, costs less, and produces identical results. Knowing when to remove the AI is an underrated skill.

  4. Tetrate Blog

    Two Ways to Build a Cost Agent (And Why We Use Both)

    We built two fundamentally different architectures for our cost optimization agents. One lets the LLM drive. The other relegates it to a single call. Both have their place.

  5. Tetrate Blog

    Your Agent Found 2.4 Percent of the Savings. Now What?

    We built a cost optimization agent. It worked. Then we did the math: it was catching 2.4 percent of the savings. Here's what was missing and what we changed.

  6. Tetrate Blog

    We Built an AI Agent to Cut Our Cloud Bill in Half

    Our cloud bill was attracting board-level attention. Instead of hiring a FinOps team, we built AI agents that scan AWS, GCP, and Azure weekly. Here's what we learned.

  7. Tetrate Blog

    FINRA Just Told You What They'll Examine Your AI Agents On

    FINRA's 2026 report has a new section on AI agents with seven named risks and four considerations for firms. Here's what that actually means for your engineering team.

  8. Tetrate Blog

    Determinism + Transparency: The Use Cases Waiting to Be Unlocked

    There's a world where LLMs sit comfortably inside traditional model-risk frameworks. With deterministic inference and transparent models, we're closer than you think.

  9. Tetrate Blog

    Black Box Models in a Regulated World

    "Trust me, the training data is fine" is not a valid response to a regulator. Proprietary LLMs fail on basic transparency requirements that traditional models satisfy easily.

  10. Tetrate Blog

    You Can't Validate What Never Gives the Same Answer Twice

    Your model risk team wants to validate your LLM. They run the same test twice. They get different answers. Meeting adjourned.

  11. Tetrate Blog

    The Tier 3 Problem: Why Banks Can't Use LLMs for Real Decisions

    Banks are leaving $920bn in operational efficiency gains on the table because they can't get LLMs past their risk teams. The issue isn't caution — it's that current LLMs can't satisfy SR 11-7 requirements.

  12. Tetrate Blog

    The Golden Signals of AI Governance

When your CEO asks, "Are our AI systems compliant right now?", can you answer in less than three business days? If not, you're governing blind. Here are the five metrics that matter.

  13. Tetrate Blog

    Global Policy, Local Context

    Your CISO says "no GPT-4 for customer data." Two weeks later, support asks why they can't use the better model for public FAQs. Simple AI policies get complicated when they meet reality.

  14. Tetrate Blog

    Stop Asking Developers to Import Governance Libraries

    Your compliance team published a 47-page "Responsible AI Development Guide." Three months later, 6 out of 12 AI services aren't implementing PII filtering. You can't govern by documentation.

  15. Tetrate Blog

    Automating the Audit Trail

    When your auditor asks for compliance evidence, how long does it take to produce? If the answer involves manually reconstructing logs from five different systems, you have an automation problem.

  16. Tetrate Blog

    The "Tool Use" Problem: When AI Can Click Buttons

    Chatbots talk. Agents act. Most governance frameworks were designed for systems that only talk. Here's what happens when AI can actually do things in your production systems.

  17. Tetrate Blog

    Shifting Compliance Left... and Right

    "Shift left" has been the DevOps mantra for a decade. For GenAI, you need to shift left AND right — because what you test before deployment isn't what runs after deployment.

  18. Tetrate Blog

    Why "Point-in-Time" Validation Fails for GenAI

    Traditional point-in-time validation breaks down with GenAI systems. Models change, outputs vary, and attack surfaces are linguistic. Here's why you need continuous compliance checks at runtime.

  19. Tetrate Blog

    Visualizing AI Governance: Why We Built a Modern CALM Tool

Tetrate built a modern visualization tool for FINOS CALM to close the gap between AI governance frameworks and their practical implementation.

  20. Tetrate Blog

    AI Governance for CISOs: Securing SaaS AI Deployment

A practical guide for Chief Information Security Officers on implementing controls to govern distributed AI tools across enterprise SaaS applications.

  21. Tetrate Blog

    Securing the MCP Supply Chain: A New Approach to Agentic AI Governance

How agentic AI systems that depend on third-party Model Context Protocol (MCP) services face unique security challenges, and a proposal for governing them at the infrastructure level.