Where to Buy Carlos Alcaraz’s Tennis Racket?
Joan Burkovic - Jun 12, 2025

After Carlos Alcaraz’s victory at Roland-Garros, fans all over the world wanted to know: Where can I buy his tennis racket?
It’s a simple question. But the way different systems respond to it reveals a major shift underway in how people find — and trust — information.
At Mint, we set out to understand this transformation by comparing two approaches:
- Traditional search, using Google
- Generative search, using four of the most widely used AI models
The Setup
We ran this analysis from Paris, using the search query: “où acheter raquette Carlos Alcaraz” (“where to buy a Carlos Alcaraz racket”)
On the Google side:
- We extracted the top 100 results
- We focused our analysis on the top 20 organic results, since these are what most users see and click
On the AI side:
We asked each of these four models the exact same question, 15 times per model, with web access enabled:
- GPT-4o (OpenAI)
- Claude 4 Sonnet (Anthropic)
- Sonar Pro (Perplexity)
- Gemini 2.0 Flash (Google)
That’s a total of 60 prompts.
For every answer, we tracked:
- The full list of URLs consulted (loaded or clicked)
- The subset of sources actually used or cited in the model’s final response
In total, the AI models explored over 400 unique URLs to answer these 60 questions.
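For readers who want to replicate this kind of setup, here is a minimal sketch of what the collection loop can look like. The model identifiers and the `ask_with_web_search()` helper are placeholders rather than our actual tooling; each provider exposes web access differently, so the wrapper is left as a stub.

```python
# Illustrative sketch of the collection loop, not Mint's actual harness.
# ask_with_web_search() is a hypothetical wrapper around each provider's API
# with web access enabled; it is assumed to return both the URLs the model
# loaded ("consulted") and the URLs cited in its final answer ("cited").

QUERY = "où acheter raquette Carlos Alcaraz"
MODELS = ["gpt-4o", "claude-sonnet-4", "sonar-pro", "gemini-2.0-flash"]
RUNS_PER_MODEL = 15

def ask_with_web_search(model: str, query: str) -> dict:
    """Placeholder: call `model` with web access enabled and return
    {"consulted": [urls...], "cited": [urls...]}."""
    raise NotImplementedError("wire up each provider's API here")

results = []  # one record per prompt: 4 models x 15 runs = 60 records
for model in MODELS:
    for run in range(RUNS_PER_MODEL):
        answer = ask_with_web_search(model, QUERY)
        results.append({
            "model": model,
            "run": run,
            "consulted": set(answer["consulted"]),
            "cited": set(answer["cited"]),
        })

all_consulted = set().union(*(r["consulted"] for r in results))
print(f"{len(all_consulted)} unique URLs consulted across all prompts")
```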

Key Finding 1
AI models don’t ignore Google — but they don’t rely on it either
Of Google’s top 20 results for the query, 14 were accessed at least once by one of the four AI models.
That’s a 70% overlap in consultation — meaning AI systems are aware of what ranks on Google.

But they rarely reuse what they see.
When we analyzed the final responses from the models, the number of Google-ranked pages actually cited dropped sharply:
- GPT-4o: 3 out of 20 (15%)
- Perplexity: 3 out of 20 (15%)
- Claude: 7 out of 20 (35%)
- Gemini: 11 out of 20 (55%)
This tells us that:
- AI models do not replicate Google’s rankings
- They apply different filters when selecting which content to cite
- And each model follows a distinct strategy for surfacing information
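To make the comparison concrete, here is a small sketch of how the two metrics above (consultation overlap and per-model citation rate) can be computed from the collected records. The URL normalization is deliberately simple and is an assumption of the example, not part of the study itself.

```python
# Compare Google's top 20 organic URLs with what each model consulted and
# cited. Assumes the `results` records from the collection step and a
# `google_top20` list of URLs.
from urllib.parse import urlsplit

def normalize(url: str) -> str:
    """Strip scheme, query string, and trailing slash so the same page
    reached via slightly different URLs counts only once."""
    parts = urlsplit(url)
    return (parts.netloc + parts.path).rstrip("/").lower()

def overlap_stats(google_top20, results):
    top20 = {normalize(u) for u in google_top20}
    consulted = set()
    cited_by_model = {}
    for r in results:
        consulted |= {normalize(u) for u in r["consulted"]}
        cited_by_model.setdefault(r["model"], set()).update(
            normalize(u) for u in r["cited"]
        )
    print(f"Consulted by at least one model: {len(top20 & consulted)}/20")
    for model, cited in sorted(cited_by_model.items()):
        hits = len(top20 & cited)
        print(f"{model}: cited {hits}/20 Google top-20 pages ({hits * 100 // 20}%)")
```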

Key Finding 2
AI models rewrite the query in their own terms
We analyzed how the models internally reformulated the question. From one original query, we observed 34 different variations.
Examples include:
- “Carlos Alcaraz racket tennis buy 2025”
- “acheter raquette Babolat Pure Aero France magasin” (“buy Babolat Pure Aero racket France store”)
- “Carlos Alcaraz raquette modèle 2025” (“Carlos Alcaraz racket, 2025 model”)
These variations were:
- Sometimes highly specific
- Often multilingual (mixing French and English)
- Frequently geo-targeted
And most importantly, they had little in common with traditional SEO keywords.
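If you log the search strings the models send to their web tools, counting the distinct reformulations is straightforward. The sketch below assumes each collected record also carries a `queries` field; that field name is an illustration, not a standard.

```python
# Count and inspect the distinct query reformulations issued by the models.
from collections import Counter

def reformulations(results):
    """Collect every distinct search string the models sent to their web tool."""
    variants = Counter()
    for r in results:
        for q in r.get("queries", []):
            variants[q.strip().lower()] += 1
    return variants

# Example usage once the records carry a "queries" field:
# variants = reformulations(results)
# print(len(variants), "distinct reformulations")  # the study counted 34
# for query, count in variants.most_common(10):
#     print(f"{count:2d}x  {query}")
```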
What does this mean?
- AI search is intent-driven, not keyword-matching
- Relevance is judged by semantic richness, not keyword repetition
Implication: Optimizing a webpage for a single, fixed keyword is no longer enough.
To stand a chance of being cited by AI models, your content must:
- Cover a broad semantic field
- Be structured in a way that makes it easy for AI systems to understand and extract

Key Finding 3
The ranking signals that matter to AI are not the same ones that matter for SEO
Traditional SEO relies heavily on:
- Backlinks
- Domain authority
- Site history
- PageRank
In contrast, AI models prioritize:
- Clear content structure
- Concise explanations
- Well-formatted data (tables, lists, definitions)
- Verifiable facts over opinion
- Answerable chunks that can be lifted into a response
AI systems aren’t looking for great storytelling. They’re looking for clean, citable answers.
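There is no published formula for how these systems pick their citations, but you can approximate the spirit of these signals with a rough heuristic. The scorer below is purely illustrative; its weights and thresholds are invented for the example and are not derived from our data.

```python
# Purely illustrative "citability" heuristic: count the structural elements
# that tend to make a page easy to cite (headings, lists, tables) and damp
# the score for long, unstructured pages.
from html.parser import HTMLParser

class StructureCounter(HTMLParser):
    """Count structural tags as the page's HTML is parsed."""
    def __init__(self):
        super().__init__()
        self.counts = {"h2": 0, "h3": 0, "li": 0, "table": 0, "p": 0}

    def handle_starttag(self, tag, attrs):
        if tag in self.counts:
            self.counts[tag] += 1

def citability_score(html: str) -> float:
    parser = StructureCounter()
    parser.feed(html)
    c = parser.counts
    headings = c["h2"] + c["h3"]
    # Reward headings, lists, and tables; divide by a crude proxy for
    # "wall of text" (many paragraphs relative to structure).
    score = 2.0 * headings + 1.0 * c["li"] + 3.0 * c["table"]
    return score / max(1.0, c["p"] / 10)
```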
This marks a complete shift in how online visibility is earned.
Welcome to GEO: Generative Engine Optimization
This is the new game.
- SEO is about ranking in Google
- GEO is about getting cited in AI-generated responses
These are fundamentally different objectives, with their own levers and success metrics.
Traditional SEO gets your content listed. GEO gets your content read.
And in a world where 60% of searches end without a single click, with users getting their answers directly from AI, being included in the answer is all that matters.
What We’re Building
This study is just one example of what we’re doing at Mint.
We’re building a platform to help you:
- Measure your AI visibility across all major AI models — ChatGPT, Gemini, Claude, Perplexity and more.
- Analyze which sources are cited and why
- Identify blind spots
- Build a concrete action plan to increase your exposure in generative search
If you want to understand how your brand or product is represented in the age of AI search — and what to do to get seen — we’re here to help.
Let’s shape the future of search — before others do.