In partnership with

Stop babysitting dashboards. Ship from Slack. Touch grass.

700+ teams have Viktor reading their Google Ads every morning.

Your media team opens Slack at 8am. There's a cross-platform brief in #growth: Google Ads spend vs. ROAS, Meta CPA by campaign, Stripe revenue by channel. Viktor posted it at 6am. Nobody asked for it.

Last week, one team's Viktor caught a spend spike at 2am on a broad match campaign and flagged it in Slack: "CPA up 340%. Recommend pausing and shifting budget to the top two performers." That would have burned $3K by morning. The media buyer woke up to a problem already handled.

Your strategist reviews spend trends. Your account manager checks revenue attribution. Same Slack channel, same colleague, before anyone's first coffee.

Google Ads, Meta, Stripe. One message. No Looker, no Data Studio. Anomaly detection runs around the clock. Cross-platform reporting runs on autopilot.

5,700+ teams. SOC 2 certified. Your data never trains models.

"Viktor is now an integral team member, and after weeks of use we still feel we haven't uncovered the full potential." — Patrick O'Doherty, Director, Yarra Web

I built an app for my daughter.

My 7-year-old is a self-professed animal expert. She’s not wrong either. She probably knows more about animals than 99% of adults.

She's always asking me about newly discovered species, so I thought it would be fun to build her a place to find them herself.

Claude Code misunderstood my original instructions and built something different than what I described — a general animal explorer instead of a new-species tool.

But it turned out to be genuinely cool and she liked it. We iterated on it. Fixed accuracy issues, built out a quiz system, got to over 1,000 verified species.

My daughter uses it. Her little sister does too.

Yesterday I decided to add a homepage and put it on a custom domain.

That's where things went sideways.

Three rounds in, Claude said it out loud

The design kept coming back wrong.

The first version was awful. I told Claude Code to use the skills and capabilities it had, gave it a couple of examples, and told it to do better.

What came back was not much improved, so I had it try again.

No better.

I got fed up, and told Claude Code why it wasn’t working.

Claude Code made a confession:

I thought that was pretty funny, and it was 100% right.

We were stuck in a loop of incremental fixes on a flawed starting point, and no amount of tweaking was going to get us where we needed to go.

This can happen in all kinds of ways with AI.

Whether analyzing data, writing copy, making an image, or designing a homepage.

Here's how you know when you're in that loop. And how you get out.

5 signs your AI is polishing a turd

1. The fixes feel cosmetic. Different wording, different format, slightly adjusted structure. You go two or three rounds and nothing fundamental has changed. You're rearranging furniture in a building that needs to come down.

2. You keep reframing the same problem and getting the same result. You explain it one way, it's not right, you try again with different words, and you get back basically the same thing rephrased. There's a real reason this happens. It's called context rot. As the conversation fills up with failed attempts and your corrections, the AI anchors harder to what it's already produced. The longer you stay in a broken conversation, the worse it gets.

3. Every fix breaks something else. You ask AI to shorten an email, and it cuts the parts that matter. You ask it to make a document more professional, and now it's stiff and lifeless. You fix one thing, and something else stops working. You're chasing symptoms, not the actual problem.

4. You've lost track of what you originally wanted. You're no longer working toward your goal. You're just reacting to the AI's output. If you can't clearly describe what success looks like anymore, that's the sign.

5. The AI starts explaining instead of delivering. "I've made several improvements to address your feedback." "Here's why I structured it this way." When it's justifying its work more than doing it, it knows it's not landing.

4 things to do to fix it (or how to wipe the slate clean)

1. Name it. Say it directly: "Stop. You're polishing a turd. We need to start from scratch." Not "can you try something different" — actually say start over. The AI is not likely to volunteer this on its own. Claude said it to me, but only after I kept pushing. Most of the time, you have to be the one to call it.

2. Start a new conversation. This is the single most important move. Old context is working against you. Every failed attempt, every correction, every "close but not quite" is sitting in that context window anchoring the AI to its broken approach. A fresh conversation has none of that baggage. In Claude Code, /clear does this. In any other AI tool, open a new chat.

3. Describe what you want from scratch. Not "can you make the header bigger" or "make this sound less stuffy." That's still tethered to the thing that doesn't work. Instead: "Forget everything we built. Here's what I actually need." Start from the outcome you want, not the existing output.

4. Before rebuilding, ask the AI what it thinks you're building. Have it describe the goal back to you in plain terms. The gap between its answer and your answer will tell you exactly where the original misalignment happened and help you write a better prompt the second time.

3 ways to avoid it next time

Get alignment before a single line gets written. Describe the outcome you want, have the AI describe it back, and close the gap before anything gets built. Especially for anything visual. The 60 seconds this takes will save you four rounds of frustration.

Ask for a plan first. Describe what you want, ask for a proposed approach — layout, structure, logic — approve it, then let it build. When the AI has to commit to a plan before executing, it tends to catch its own misunderstandings early.

Ask the AI to audit itself before you get stuck. If the first output isn’t what you wanted, ask it directly: "Is what you built actually solving what I described, or are you working around a flaw in the foundation?" Most of the time, it knows.

It will tell you if the approach is wrong, but only if you ask. If you wait until round four, you're already in the turd-polishing loop.

Ask early, while a restart is still cheap.

And if your AI ever tells you it's polishing a turd?

Screenshot it and send it to me!


Hey there! Nice to see you down this far. Before you go…

Thanks for reading!

Nathan Rodgers

👋 Say hello on Substack and LinkedIn

Were you forwarded this email? Click here to subscribe.
