I’ve written a lot on this blog about AI governance, model risk, and the mechanics of making AI systems safe and trustworthy in regulated industries. This post is about something different. It’s about what happens when a security person tries to build a product.
Spoiler: it doesn’t go well. Twice.
The confession
My first product was a compliance automation tool. I was a CISO, I knew the compliance space intimately, and I was convinced I could build a better Vanta. I had the domain expertise. I had the technical skills. What I didn’t have was any idea how to get it to market.
So I did what any responsible person does: I read everything. The Mom Test. Ready, Fire, Aim. YC Startup School. Every Alex Hormozi video on sales. I was convinced I just needed to learn one more thing — one more framework, one more podcast, one more blog post — and the path would become clear.
I never launched it.
The second attempt was productised consulting around AI governance — the very thing I write about here. I had the website, the positioning doc, the deck. I followed advice from AI tools about LinkedIn outreach. I posted, I reached out, and I got nothing. Not even polite rejections. Just the infinite silence of a LinkedIn feed that doesn’t care about you.
The pattern
Looking back, the failure mode was identical both times. Not bad ideas — the compliance tool would have been competitive, and AI governance consulting is a real market. The problem was the gap between knowing what to build and knowing how to get it to market.
Everything I was learning was generic. Books give you frameworks. AI tools give you advice and then forget you exist the next session. Reddit gives you a hundred opinions from people who don’t know your situation. For product #1, I needed someone to say “stop reading and go talk to five CISOs this week.” For product #2, I needed someone to say “LinkedIn outreach doesn’t work for consulting without a warm audience — build the audience first.”
Both obvious in hindsight. Neither appeared in the generic advice I was consuming.
The gap wasn’t information. I had too much information. The gap was prioritisation — knowing what to do next, specifically, given where I actually was.
Why generic AI advice has a ceiling
I use Claude and ChatGPT constantly. They’re great for answering questions. They’re bad at saying “you’re asking the wrong question.” That requires knowing your history, your stage, your constraints, what you’ve already tried and what happened. A fresh session can’t do that.
Every founder I’ve talked to describes some version of this. They can list ten things they could be working on. They’ve read the books, done the courses, asked the AI tools. What they can’t do is confidently rank those ten things by impact. Not because they’re stupid — because the ranking depends on context that no generic resource has access to.
The standard fix is “get a mentor.” Good mentors are genuinely valuable. But even the best mentors bring their own gravitational pull. A B2B SaaS founder mentoring someone building a marketplace will unconsciously steer toward B2B patterns — suggesting enterprise pricing when the user expects a free tier, asking about the sales pipeline when there isn’t one. The more experienced the mentor, the stronger the pull toward their own playbook.
And mentors can’t be there for the hundreds of small decisions you make between meetings. “Should I spend today on outreach or product?” is the kind of question you face daily, and nobody’s available to answer it in real time.
So I built something
Launcherly started as a selfish project. I wanted the tool that could have saved my first two products.
The core idea: a team of AI agents — Growth Lead, Research Lead, Strategic Advisor, and others — that share context about your business and work with you over time. Two things make it different from asking ChatGPT.
First, context persists. When the Growth Lead suggests a channel strategy, it knows you’re selling to 500 accounting firms, not building the next Slack. It knows you tried cold outreach last month and it didn’t work, so it stops suggesting cold outreach. Sounds trivial. In practice, every other AI tool I’ve used starts from zero every session.
Second, it sequences. Instead of “here are 47 things you could do,” it tracks where you are in the founder journey and says “you haven’t validated demand yet — here’s a specific experiment to run this week.” The value isn’t giving you idea #11. It’s helping you see that ideas #3 and #7 are the only ones that matter right now.
The irony
There’s an obvious irony in a CISO-turned-AI-governance-person building a startup tool. My day job is telling banks how to make AI systems safe and compliant. My side project is building an AI system for founders who need to move fast and figure things out as they go.
These sound contradictory but they’re actually the same skill applied in different directions. Both are about understanding what AI is genuinely good at and where it needs structure to be useful. In governance, that structure comes from regulatory frameworks and validation processes. In a founder tool, it comes from persistent context and intelligent sequencing.
The common thread: raw AI intelligence isn’t the bottleneck. Context is. A model that’s brilliant but doesn’t know your situation gives you smart-sounding generic advice. A model with deep context about your specific business, constraints, and history gives you something you can actually act on. I spent months writing about why this matters for banks. Then I realised it matters just as much for a founder sitting alone trying to figure out what to work on this week.
Where it is now
Launcherly is live and I’ve been running a beta with a small group of founders. Some of what’s emerged has been surprising — particularly how much damage generic playbooks do at the early stage. “Build an MVP” is correct for maybe 60% of business models and wrong for the rest. Build a marketplace MVP before you’ve proven liquidity on either side and you’ll launch to an empty room. Context changes the answer completely.
I’m still writing about AI governance here. That work matters to me and I don’t plan to stop. But I wanted to be honest about what I’ve been building on the side, why, and what I learned from failing at it twice before getting somewhere.
If you’re a founder and you’ve felt the gap between all the advice available and knowing what actually applies to your situation, that’s the problem Launcherly is built to solve. And if you’re a risk professional reading this wondering why your CISO is moonlighting as a startup founder — well, now you know.