Comparing Top Traffic Prompts vs Queries Across Different Page Types - Key Differences & Implications
Now that more tools like Similarweb or Profound provide access to prompt data from the major AI search platforms, and in particular with Similarweb showcasing the top prompts bringing traffic per page, we can compare them with the most popular queries that bring most of the organic search traffic to the same pages from Google's traditional search results, and see the major differences and shifts in behavior.
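If you want to replicate this comparison for your own pages, a simple approach is to export the top queries per page from Google Search Console and the top prompts per page from your AI visibility tool, and join them by URL. Below is a minimal sketch of that join in Python; the file names and columns (gsc_queries.csv with page/query/clicks, ai_prompts.csv with page/prompt/visits) are assumptions for illustration, not the tools' actual export formats.

```python
# Minimal sketch: compare top Google queries vs top AI prompts per page.
# The CSV names and columns are illustrative assumptions, not the tools'
# real export formats: gsc_queries.csv -> page, query, clicks
#                      ai_prompts.csv  -> page, prompt, visits
import pandas as pd

queries = pd.read_csv("gsc_queries.csv")   # Google Search Console export
prompts = pd.read_csv("ai_prompts.csv")    # AI visibility tool export

def top_terms(df, text_col, value_col, n=10):
    """Return the top n rows per page, sorted by their traffic value."""
    return (df.sort_values(value_col, ascending=False)
              .groupby("page")
              .head(n)[["page", text_col, value_col]])

top_queries = top_terms(queries, "query", "clicks")
top_prompts = top_terms(prompts, "prompt", "visits")

# Side-by-side view per page to eyeball the differences in length,
# language and topics between traditional queries and AI prompts.
for page in top_prompts["page"].unique():
    print(f"\n=== {page} ===")
    print("Top queries:", top_queries.loc[top_queries["page"] == page, "query"].tolist())
    print("Top prompts:", top_prompts.loc[top_prompts["page"] == page, "prompt"].tolist())
```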
Going through a few scenarios of top (AI) prompts vs (traditional search) queries, as well as the pages receiving their traffic, we can clearly see the impact in areas like prompt length and conversational style, language, topics and intent, and branded vs non-branded behavior.
Let's go through a few examples across different scenarios that show this more clearly:
1. My personal site (aleydasolis.com) home page
Let's see the difference between the traditional search top terms and the AI search prompts attracting traffic to my personal site's home page in Spanish: https://www.aleydasolis.com/ (note that the English home page is located at https://www.aleydasolis.com/en/, a different page).
The top terms attracting clicks from Google, as you can imagine, are query variations related to my own name "Aleyda Solis", as well as "consultora SEO" (SEO consultant in Spanish, which is natural given this page is in Spanish).
Nonetheless, my Spanish home page's top prompts are completely different, not only because they're very long-tail, informational questions, but also in their topics and even language:
Note how none of the prompts above are looking for my name or SEO consulting, which are the usual terms providing traffic to this page, but:
So, all in all, these are very relevant prompts for which to be referenced, although for prompts written in English I wouldn't have chosen the Spanish home page but the English one, and I would have preferred the answers to link to the specific pages of the resources they mention rather than my home page. However, it's my Spanish home page where the AI systems have gotten that information from, and/or the page they see as most relevant to reference since it's the "home" of my entity: most of the links referring to my name point to it, it includes the relevant structured data, etc.
It's also important to note that if you search Google's traditional results for the prompts for which my site is referenced in AI search results, you won't see it ranking in top positions, or at all. What ranks is mostly guides, some of them referring to my resources, or in some scenarios, my resources (SEOFOMO, LearningSEO.io) ranking directly.
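If you want to verify this kind of gap yourself, you can programmatically check whether a given URL shows up in Google's top results for each prompt. The sketch below is one way to do it with the Google Custom Search JSON API; the API key, search engine ID, target URL and example prompts are placeholders, not the actual prompts reported by the tool.

```python
# Sketch: check whether a given URL appears in Google's top results for
# each AI prompt, via the Custom Search JSON API. The key, engine ID and
# prompts below are placeholders, not real reported data.
import requests

API_KEY = "YOUR_API_KEY"          # placeholder
CX = "YOUR_SEARCH_ENGINE_ID"      # placeholder (Programmable Search Engine)
TARGET = "https://www.aleydasolis.com/"

prompts = [
    # Hypothetical examples of AI prompts to verify against Google's SERPs
    "how can I learn SEO for free with a structured roadmap",
    "best newsletters to stay up to date with SEO news",
]

for prompt in prompts:
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": CX, "q": prompt, "num": 10},
    )
    links = [item["link"] for item in resp.json().get("items", [])]
    position = next((i + 1 for i, link in enumerate(links) if TARGET in link), None)
    print(f"{prompt!r}: {'position ' + str(position) if position else 'not in top 10'}")
```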
2. An informational blog post
We can see something similar in the differences between the top prompts and queries referring traffic to my 5 AI Insights from Google Search Central Live in Madrid on April 9, 2025 post.
The top terms referring organic search traffic from Google are related to the event's name "search central live madrid" and variations of it.
However, the top prompts referring traffic are mostly broader, not even asking about the Google event in Madrid, but about the topics I covered from what Google shared at the event, using the post as a reference (and endorsing it in the answers):
Did I write this blog post thinking I would ever get any traffic from people looking for the topics above? No, at least not as a primary target:
In fact, if you check Google's traditional SERPs for those prompts above, you won't see my blog post ranking for them.
However, my post was definitely a relevant source of information for them, and worth referencing, since I covered the answers to these questions throughout the content.
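A practical way to sanity-check this on your own posts is to verify whether each prompt's key terms are actually covered somewhere in the content, as a rough proxy for chunk-level relevance. The sketch below is a naive keyword check, assuming a placeholder post URL and hypothetical prompts; it is not how any AI system actually matches content.

```python
# Rough check: does the post content cover the terms each prompt asks about?
# Naive keyword overlap against heading-delimited sections; the URL and
# prompts are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

URL = "https://www.example.com/blog/your-post-slug/"  # placeholder: your post's URL
prompts = [
    "what did Google say about AI Overviews and clicks",
    "how should I adapt my SEO strategy for AI search features",
]

html = requests.get(URL).text
soup = BeautifulSoup(html, "html.parser")

# Split the post into sections keyed by their nearest heading.
sections, current = {}, "intro"
for tag in soup.find_all(["h2", "h3", "p", "li"]):
    if tag.name in ("h2", "h3"):
        current = tag.get_text(strip=True)
        sections.setdefault(current, "")
    else:
        sections[current] = sections.get(current, "") + " " + tag.get_text(" ", strip=True).lower()

for prompt in prompts:
    terms = {w for w in prompt.lower().split() if len(w) > 3}
    covered = [h for h, text in sections.items() if sum(t in text for t in terms) >= len(terms) // 2]
    print(prompt, "->", covered or "no section covers most of these terms")
```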
3. A more commercially focused blog post
What happens for more commercially focused blog posts? Let's see with The 10 Best Running Shoes for Flat Feet guide from Runner's World.
Since I have no affiliation with the site, I've checked the top terms driving traffic from Google's organic search results using Ahrefs: "best running shoes for flat feet", "running shoes for flat feet" and other similar query variations are among the top terms from all over the world, in English (since the guide is in this language), as we would expect from a "query to page matching" system.
In the case of the top prompts, although we can see a few similar long-tail, conversational versions of the top terms, like "what are the best running shoes for flat feet" or "I'm looking for running shoes that provide good arch support for flat feet. Any suggestions?", we also have different ones, asking about sub-topics covered in the guide in English without mentioning "flat feet", as well as the same flat feet topic but in Spanish:
Despite the longer, conversational style in this case, though, there's a higher overlap with Google's traditional search results, with the same article ranking in the top position in Google for "Which running shoes are best for runners with low arches?".
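If you want to quantify how close the prompts are to a page's existing queries instead of eyeballing them, a simple token-overlap score plus language detection already surfaces most of these differences. Below is a minimal sketch using the langdetect package; the query and prompt strings mix the examples quoted above with an illustrative Spanish one.

```python
# Quantify how similar each AI prompt is to the page's top Google queries:
# token-level Jaccard overlap plus detected language. The Spanish prompt is
# an illustrative assumption, not reported data.
from langdetect import detect  # pip install langdetect

queries = ["best running shoes for flat feet", "running shoes for flat feet"]
prompts = [
    "what are the best running shoes for flat feet",
    "which running shoes are best for runners with low arches",
    "cuáles son las mejores zapatillas de running para pies planos",  # illustrative
]

def jaccard(a: str, b: str) -> float:
    """Share of unique words the two strings have in common."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

for prompt in prompts:
    best_match = max(queries, key=lambda q: jaccard(prompt, q))
    print(f"{prompt!r}")
    print(f"  language: {detect(prompt)} | closest query: {best_match!r} "
          f"(overlap {jaccard(prompt, best_match):.2f})")
```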
4. A Product Page of a major retailer
And what happens with ecommerce product pages?
Usually, PDPs will rank for the specific product names for which they're very specifically relevant. That's what happens in this case with the Amazon product page: Anker 555 USB-C Hub (8-in-1), with 100W Power Delivery, 4K 60Hz HDMI Port, 10Gbps USB C and 2 A Data Ports, Ethernet microSD SD Card Reader, for MacBook Pro More. As we can see in the Ahrefs screenshot, the top queries are "anker 555", "anker 555 usb-c hub" and related queries for the specific product name.
I found the top prompts for product pages to be particularly interesting, since unlike the top traditional queries, these are very long, descriptive prompts asking about the characteristics of the product, with none mentioning the product name, and with prompts in different languages, as happens in this case with Spanish and French along with English ones:
However, when searching for these non-branded, very long-tail prompts in English in Google's traditional search, this or another similar Amazon PDP ranks in most of the top results as well, along with guides and UGC.
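A quick way to confirm this branded vs non-branded pattern across a whole prompt export is to check whether any brand or product-name token appears in each prompt. Below is a minimal sketch, with an assumed brand-term list and illustrative prompts based on the product characteristics mentioned above.

```python
# Label each prompt as branded or non-branded by checking for brand/product
# tokens. The brand terms and prompt strings are illustrative assumptions.
BRAND_TERMS = {"anker", "anker 555"}

prompts = [
    "I need a usb-c hub with 4k 60hz hdmi and 100w power delivery for my macbook pro",
    "quel hub usb-c avec lecteur de carte sd recommandez-vous pour un macbook",
    "is the anker 555 hub worth it",
]

def is_branded(prompt: str) -> bool:
    """True if any brand/product token appears in the prompt text."""
    text = prompt.lower()
    return any(term in text for term in BRAND_TERMS)

for prompt in prompts:
    label = "branded" if is_branded(prompt) else "non-branded"
    print(f"[{label}] {prompt}")
```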
5. A Category Page of a major retailer
Finally, let's see what happens with a category page of a major retailer, like American Eagle's Men's Polo Shirts page, which gets most of its organic search traffic from traditional search results via branded queries like "american eagle polo" and "american eagle polo shirts", but also from some non-branded ones like "collar polo shirt" or "men polo shirts". These are usually relatively short and descriptive, matching the overall offering of the page: polo shirts for men.
The top prompts in this case are again in a variety of languages (English, Spanish, Portuguese), longer tail and non-branded:
Although many category pages rank in Google for these long-tail, conversational prompts (along with some informational pages and PDPs via product carousels), the American Eagle one wasn't among the top results in this case.
Through these top prompts vs queries examples across different scenarios, we can see the changes between AI and traditional search, highlighting the shifts in prompt length and conversational style, language, topic breadth and intent, and branded vs non-branded behavior.
Although there’s a high overlap in principles for optimizing for AI vs traditional search, there are certainly differences due to the shifts above.
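To make these shifts measurable across your own pages rather than anecdotal, you can aggregate a few simple metrics over the exports from the first sketch: average word count, language mix, and the share of branded terms. Below is a minimal continuation, reusing the same assumed file and column names and an illustrative brand-term list.

```python
# Aggregate shift metrics over the joined exports from the first sketch
# (same assumed columns: a text column plus a traffic value per page).
import pandas as pd
from langdetect import detect  # pip install langdetect

queries = pd.read_csv("gsc_queries.csv")
prompts = pd.read_csv("ai_prompts.csv")
BRAND_TERMS = {"aleyda", "seofomo"}  # illustrative brand tokens for my own pages

def summarize(series: pd.Series, label: str) -> None:
    """Print average word count, language distribution and branded share."""
    words = series.str.split().str.len()
    langs = series.apply(detect).value_counts(normalize=True).round(2).to_dict()
    branded = series.str.lower().apply(lambda t: any(b in t for b in BRAND_TERMS)).mean()
    print(f"{label}: avg words={words.mean():.1f}, languages={langs}, "
          f"branded share={branded:.0%}")

summarize(queries["query"], "Google queries")
summarize(prompts["prompt"], "AI prompts")
```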
How to move forward?
I’ve created an AI Search Content Optimization Checklist, going through the most important aspects to take into account to optimize your content for AI search answers (from chunk optimization and citation worthiness to topical breadth and depth, personalization, and more), along with their importance and how to take action.