Which AI Queries Is Your Brand Winning vs Losing?

April 10, 2026 · 6 min read

You typed a question into ChatGPT. Maybe something like "which AI search queries is my brand winning?" And the chatbot gave you a vague, generic answer that didn't tell you anything useful about your specific situation. That's the problem in a nutshell: AI assistants are great at explaining concepts, but they can't tell you where your brand stands.

What you actually need is data. Prompt-level data. Because AI brand visibility isn't a single number. It's dozens or hundreds of individual query outcomes, and your brand is winning some of them and losing others right now.

Why "Are We Visible in AI?" Is the Wrong Question

Most marketing teams ask the question at the wrong level of granularity. "Do we appear in AI search results?" is almost meaningless on its own. A project management tool might show up every time someone asks "what's the best tool for agile teams" but never appear when someone asks "best tool for distributed remote teams" or "project management software for creative agencies."

Those three queries represent very different buyer intents. The first might be your core audience. The second and third could be adjacent segments worth capturing. If you're winning one and losing two, your overall "AI visibility score" might look fine while you're actually hemorrhaging potential customers at specific, high-intent moments.

This is prompt-level visibility, and it's the only granularity that actually drives decisions.

The Four Query Categories Every Brand Should Track

When you map out your brand's AI presence at the prompt level, queries tend to fall into four buckets:

  • Winning queries: Prompts where your brand appears prominently and positively. These are your strongholds. You still need to monitor them because AI model outputs shift over time as training data and retrieval patterns change.
  • Contested queries: Prompts where you appear sometimes, or where you appear but so do 3-4 competitors. You're in the consideration set but not the clear recommendation. These are worth a focused content push.
  • Losing queries: Prompts where a competitor is recommended instead of you. These are direct revenue leaks. Someone with intent is being sent to a competitor because the AI decided your competitor was the better answer.
  • Missing queries: Prompts where no brand in your category appears, or where entirely different categories show up. These represent gaps in how AI models understand your space. Being the first brand to own these queries is a real competitive advantage.
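The four buckets above can be expressed as a simple decision rule. This is a hypothetical sketch, not BabyPenguin's actual logic: it assumes you've already recorded, for several runs of a prompt, which brand names appeared in each answer. The function name and thresholds are illustrative.

```python
# Hypothetical bucketing of a single prompt based on observed AI outputs.
# `appearances` is a list of sets: the brand names seen in each run of the prompt.

def categorize_query(appearances, brand, competitors):
    runs = len(appearances)
    brand_hits = sum(1 for seen in appearances if brand in seen)
    competitor_hits = sum(
        1 for seen in appearances if any(c in seen for c in competitors)
    )
    if brand_hits == runs and competitor_hits == 0:
        return "winning"      # you appear every time, alone
    if brand_hits > 0:
        return "contested"    # you appear sometimes, or alongside rivals
    if competitor_hits > 0:
        return "losing"       # a competitor is recommended instead of you
    return "missing"          # no brand in the category appears at all

# Example: the brand shows up in 2 of 3 runs, a competitor in the others.
runs = [{"Asana", "Trello"}, {"Asana"}, {"Trello"}]
print(categorize_query(runs, "Asana", ["Trello"]))  # → contested
```

Real outputs are messier than this (aliases, misspellings, prominence within the answer), but even a crude rule like this makes the four buckets concrete.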

The work of improving AI visibility is mostly about moving queries from the bottom three categories into the winning column. But you can't do that work if you don't know which bucket each query falls into.

Why Query Categories Differ So Much Within a Single Brand

It's not random. AI models cite sources, and the sources they cite reflect what's been written about you (and your competitors) on the web. If your blog has strong content about agile project management but nothing about remote team workflows, you'll win agile queries and lose remote-team queries. Simple as that.

This matters because it points directly to what you should create. The gap isn't just "write more content." It's write content that specifically addresses the contexts where you're losing, using the framing and vocabulary that matches how buyers ask those questions in AI prompts.

Citation source analysis adds another layer. Even if you have relevant content, if it's not being cited by AI engines, the content may not be authoritative enough in their view, or it might not be structured in a way that AI engines can extract and attribute cleanly. Understanding which URLs AI engines actually cite is a prerequisite for fixing your losing queries.

A Framework for Prioritizing Which Queries to Fix First

Not all losing queries are equally important. Here's how to triage them:

  1. High intent, losing to a direct competitor: Highest priority. These are buyers in your exact category being redirected. Every day you're losing here is pipeline going to a competitor.
  2. High intent, brand not appearing at all: Also high priority. No brand owns these yet, so the cost to win is lower and the upside is capturing uncontested territory.
  3. Mid-funnel queries where you're contested: Worth working on after the top two. These are awareness-stage queries where appearing more consistently builds familiarity over time.
  4. Adjacent queries in tangential categories: Lower priority unless you're actively expanding into that space. Don't dilute your content effort by chasing queries that won't convert for your specific product.
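The four tiers above reduce to a sort key. A minimal sketch, assuming each query is a dict with `intent`, `status`, and an optional `adjacent` flag; the field names and the numeric ranks are assumptions for illustration, not a standard scoring model.

```python
# Illustrative triage: lower number = fix sooner.

def priority(query):
    if query.get("adjacent", False):
        return 4  # tangential category: lowest priority unless expanding there
    if query["intent"] == "high" and query["status"] == "losing":
        return 1  # buyers in your category being redirected to a competitor
    if query["intent"] == "high" and query["status"] == "missing":
        return 2  # uncontested territory, cheaper to win
    return 3      # mid-funnel, contested: build familiarity over time

queries = [
    {"prompt": "what are some invoicing tools", "intent": "mid", "status": "contested"},
    {"prompt": "best invoicing tool for freelancers", "intent": "high", "status": "losing"},
]
queries.sort(key=priority)
print(queries[0]["prompt"])  # → best invoicing tool for freelancers
```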

How Prompt-Level Tracking Works in Practice

Manual tracking is possible but painful. You'd need to run the same prompts across ChatGPT, Gemini, Grok, and other AI engines, record the outputs, look for your brand and competitors, and repeat this process regularly to catch changes. Doing this for even 20 prompts across 4 engines is 80 manual checks per cycle. Nobody does that consistently.
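The arithmetic behind "80 manual checks per cycle" is just the cross product of prompts and engines. Here's a sketch of what each cycle amounts to; `query_engine` is a hypothetical stand-in for whatever client or copy-paste step you'd use per engine, not a real API.

```python
# One tracking cycle: every prompt against every engine, scan for brand mentions.

def run_cycle(prompts, engines, brands, query_engine):
    results = []
    for prompt in prompts:
        for engine in engines:
            answer = query_engine(engine, prompt)  # one manual check
            mentioned = [b for b in brands if b.lower() in answer.lower()]
            results.append({"engine": engine, "prompt": prompt, "mentioned": mentioned})
    return results

# 20 prompts x 4 engines = 80 checks per cycle:
fake = lambda engine, prompt: "Try BrandX for this."
cycle = run_cycle(
    [f"prompt {i}" for i in range(20)],
    ["ChatGPT", "Gemini", "Grok", "Perplexity"],
    ["BrandX", "BrandY"],
    fake,
)
print(len(cycle))  # → 80
```

The loop is trivial; the pain is that a human has to be `query_engine` 80 times, every cycle, which is why this rarely happens consistently by hand.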

BabyPenguin automates this. You define the prompts that matter for your category, and the platform runs them across all major AI engines, tracks whether your brand appears, notes where competitors appear instead, and shows you the trend over time. The dashboard gives you a prompt-by-prompt breakdown so you can see exactly which queries you're winning and which you're losing, across ChatGPT, Gemini, Grok, and more.

This is the kind of data that lets you walk into a content planning meeting and say "we're losing 7 out of 10 remote-work queries to Competitor X, here's what we're going to do about it." That's a different conversation than "our AI visibility seems low."

If you're new to tracking AI brand presence, this overview of AI brand visibility fundamentals gives useful context before you start defining your prompt list.

What Winning Actually Looks Like

A brand that's winning in AI search isn't just appearing. It's appearing in the right context, with positive framing, for the right buyer intent. When ChatGPT recommends you for "best invoicing tool for freelancers," that's different from appearing in a list of 10 options when someone asks "what are some invoicing tools?" The first is a strong recommendation. The second is a mention.

Prompt-level tracking captures this distinction. You can see not just whether you appeared, but how prominently you were featured and what the surrounding context was. Over time, as you publish better content and earn more citations, you should see your winning queries increase and your losing queries shrink. That trend line is your signal that the strategy is working.

Most marketing teams are flying blind on this right now. They know AI is influencing buyer behavior (the data on this is clear: a meaningful share of B2B buyers are using AI assistants during research), but they don't have visibility into where they stand at the prompt level. Getting that visibility is the first step. Everything else follows from there.

If you want to understand how the citation layer works underneath all of this, the full GEO guide covers the mechanics of how AI engines decide what to recommend and cite.