
I Used AI to Build an AI Visibility Tracker. Here's What I Learned.


How I went from management consulting to launching a SaaS in four weeks using Claude as my only technical cofounder. The workflow, the tools, and the honest bits.

I quit my consulting job a few months ago to try entrepreneurship. No dramatic founder story. No personal pain point that kept me up at night. I just saw a gap in the market, noticed some YC-backed companies building in the space, and thought I could compete.

I didn't tell many people. My wife knows. A few close friends. I haven't told my parents yet, which in Asian culture is probably not great. But I wanted to keep things quiet until I had something real to show.

Four weeks later, SearchSeal is ready to launch. I built the whole thing with Claude as my only technical cofounder.

Here's what I learned.

Finding the right tools took longer than expected

I tried a few AI coding setups before landing on what works for me now. I started with Cursor, experimented with Antigravity, and eventually settled on Claude Max running Claude Code, with Vercel for deployment and Supabase for the backend.

The first few weeks were mostly about figuring out the right workflow. How do you actually collaborate with an AI on a codebase? How much context do you give it? When do you step in versus let it run?

It took maybe two weeks of trial and error before things clicked.

The framework that worked: Judge, Manager, Worker

Here's how I think about it now.

I'm the judge. My job is to make decisions, set direction, and evaluate output. I decide what gets built and whether it's good enough to ship.

Claude chat is the manager. When I have an idea for a feature, I bring it to Claude chat first. We iterate on the approach together, maybe five or six rounds of back and forth. What are the tradeoffs? What's the simplest version? What could go wrong? By the end of that conversation, I have clear guidelines and a plan.

Claude Code is the worker. Once the plan is solid, I hand it off to Claude Code to implement. It writes the code, runs into errors, fixes them, and delivers something I can review.

This separation helped me stop micromanaging the code and start thinking like a product owner.

How the work actually happened

Beyond Claude, a few things sped up the process. The Ralph Wiggum technique helped with larger tasks. It's basically a bash loop that keeps feeding prompts to an AI agent until the job is done. I used it for overnight work and refactors where persistence beats perfection. Vercel and Supabase handled deployment and database without much friction, which meant more time for actual product work.
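For the curious, the loop itself is genuinely that simple. Here's a minimal sketch of the idea: rerun the same prompt until the agent signals it's finished. The function name, the done-file convention, and the example CLI flags are my illustrations, not a prescribed setup.

```shell
#!/usr/bin/env bash
# "Ralph Wiggum" loop sketch: feed the same prompt to an agent command
# over and over until a completion marker appears. The prompt should ask
# the agent to create the done file when the task is truly finished.
# AGENT_CMD is supplied by the caller as a bash array.

ralph_loop() {
  local prompt_file="$1" done_file="$2" max_iters="${3:-50}"
  local i
  for ((i = 1; i <= max_iters; i++)); do
    # Each pass feeds the unchanged prompt to the agent command.
    "${AGENT_CMD[@]}" < "$prompt_file"
    if [ -f "$done_file" ]; then
      echo "done after $i iteration(s)"
      return 0
    fi
  done
  echo "gave up after $max_iters iterations" >&2
  return 1
}

# Example invocation (flags are an assumption, check your CLI's docs):
#   AGENT_CMD=(claude -p --dangerously-skip-permissions)
#   ralph_loop prompt.md done.txt
```

A cap on iterations is worth keeping even for overnight runs; an agent that never creates the done file will otherwise burn tokens forever.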

Morning and afternoon sessions were for building new features. Late nights were for debugging and refactoring. I didn't plan this schedule. It just happened after a few weeks of noticing when I did my best work.

The honest bits

I don't know how a real developer would judge my code. I'm not a software engineer by training. I came from management consulting, where the closest I got to code was Excel macros and the occasional Python script for data analysis.

But I do my best. I refactor when things get messy. I create conventions and stick to them. I organize code into libraries when patterns repeat. Whether that's "good" by engineering standards, I genuinely don't know.

What I do know is that it works. The app runs. Users can sign up, connect their brand, and see their AI visibility data. That's what matters for now.

I also learned to be careful with large refactoring jobs. Claude Code handles scoped tasks well, but when you ask it to restructure half the codebase at once, things can get weird. Smaller batches, more commits, fewer surprises.

The consulting brain: help and hindrance

My background in strategy consulting helped in some ways. I'm used to structuring problems, scoping work, and writing documentation. That translated well to working with AI, where clear prompts and well-defined tasks make a huge difference.

But consulting also trained me to overplan, overthink, and chase perfection. Those habits got in the way constantly. I'd spend hours designing a feature that should have taken thirty minutes to prototype. I'd hesitate to ship something because it wasn't polished enough.

The hardest part of this process wasn't the coding. It was learning to ship before I felt ready.

The real insight

Building with AI is not as hard as it seems. The barrier to entry is lower than ever. But you still need to start, do the reps, and figure out what works for you.

It's also not as easy as the hype suggests. The AI won't build your product for you. It won't make product decisions. It won't tell you what to build or whether anyone will pay for it. Those parts are still on you.

I've built something I'm proud of. Now comes the hard part: shipping, getting feedback, iterating, and finding out if anyone actually wants this.

What's next

There's something funny about using AI to build a tool that tracks AI visibility. SearchSeal exists because AI platforms like ChatGPT and Gemini are becoming how people discover products. And the tool itself was built almost entirely with AI assistance.

SearchSeal launches soon. It tracks your brand's visibility across ChatGPT, Gemini, Perplexity, and Claude. If you're curious whether AI recommends your product when people ask, that's what we built it for.

And if you're thinking about building something yourself with AI tools, my advice is simple: just start. The workflow comes with practice.
