This Company Tricked ChatGPT Into Naming Their Boss the “Sexiest Bald Man Alive”
Proving you can influence what AI chatbots say about you (and your CEO)
The UK search agency Reboot wanted to find out if you could game what AI chatbots say about you. So they ran a simple experiment:
They bought 10 expired domains that still had old backlinks pointing to them, avoiding brand-new sites that would take too long for AI crawlers to discover.
They published a “sexiest bald men of 2025” list on the homepage of each domain, with their CEO, Shai, at number one.
They also included other well-known bald celebrities in the lists so the content would look legitimate and match what the models would expect.
Then, they manually tested the prompt “who is the sexiest bald man in 2025” across ChatGPT, Claude, Gemini, Perplexity, and DeepSeek, using fresh accounts in incognito windows to rule out personalization skewing the results.
The Results
Sure enough, ChatGPT (and Perplexity) started confidently telling people that Shai was the sexiest bald man of 2025, beating Dwayne Johnson, Jason Statham, and Kelly Slater.
But it wasn’t a clean sweep. Claude and Gemini didn’t fall for it; they seemed to lean harder on their training data and more established sources. Gemini even acknowledged that it had seen the test sites but chose not to include him.
And to be fair, ChatGPT only included Shai when it used its live search tool. When it generated a response purely from its training data, Shai didn’t appear.
If you want to go deeper, here’s the full breakdown: https://www.rebootonline.com/controlled-geo-experiment/
How LLMs Actually Use Web Search
To understand why this experiment worked, it helps to know that LLMs like ChatGPT operate in two modes when you ask them something.
Mode one is training data only.
This is everything the model learned during its training process, which has a cutoff date. Think of it like memory. If something happened after that cutoff, they simply don’t know about it. When ChatGPT answered from training data alone, Shai never came up because he wasn’t part of the original dataset.
Mode two is training data + live web search.
This is when the model decides it needs newer information to give you a good answer. It runs ~20 web searches, reads through the pages it finds, and then synthesizes that material (alongside its training data) into a response.
The second mode is what made Reboot’s experiment work. Because the query was anchored to 2025 specifically (the “sexiest bald men of 2025”, not in general), ChatGPT went looking for up-to-date information.
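The two modes can be sketched as a toy router. This is purely illustrative: the cutoff year, the in-memory “training data” and “live web”, and the year-based trigger are all invented for the sketch, not how any real model decides when to search.

```python
import re

TRAINING_CUTOFF_YEAR = 2024  # hypothetical training cutoff

# Toy stand-ins for what the model memorized vs. what a live search finds.
TRAINING_DATA = {"sexiest bald man": ["Dwayne Johnson", "Jason Statham", "Kelly Slater"]}
LIVE_WEB = {"sexiest bald man 2025": ["Shai", "Dwayne Johnson", "Jason Statham"]}

def needs_live_search(query: str) -> bool:
    """Mode switch: go to the web if the query mentions a year past the cutoff."""
    years = [int(y) for y in re.findall(r"\b(?:19|20)\d{2}\b", query)]
    return any(y > TRAINING_CUTOFF_YEAR for y in years)

def answer(query: str) -> list[str]:
    if needs_live_search(query):
        # Mode two: fresh pages (including planted ones) lead the answer.
        hits = LIVE_WEB.get(query)
        if hits:
            return hits
    # Mode one: fall back to memorized training data, ignoring the year.
    key = re.sub(r"\s*\b(?:19|20)\d{2}\b", "", query)
    return TRAINING_DATA.get(key, [])
```

Anchoring the query to 2025 flips the switch: `answer("sexiest bald man 2025")` puts the planted “Shai” first, while `answer("sexiest bald man")` returns only the memorized list, mirroring what Reboot observed.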
The Takeaway
The experiment raises questions about the future of information online. If 10 websites with almost no authority can change what the most popular AI chatbot in the world tells people, what happens when bad actors scale this up?
But to anyone who has been in the SEO game for a while, this whole thing probably feels familiar. In the early days of Google you could game rankings with keyword stuffing, link farms, and shady backlinks, which worked until Google got smarter.
What Reboot pulled off feels like that same early window. The models are still figuring out what to trust, and clever marketers can find the gaps. But just like with SEO, this probably has a limited shelf life.


