What we saw
In a research run on the category “analytics for mid-market e-commerce,” one of the brands being tested consistently landed in 4th–5th position in ChatGPT answers and barely appeared in Perplexity. This was unexpected: the brand is not small; it has a strong site with detailed documentation, an active blog, and several case studies with major clients. In traditional Google search, it ranks in the top 5 for its core category queries. By all the usual measures, this is a visible brand.
But in neutral AI100 scenarios, where the models were asked questions such as “what should I choose for a mid-sized online store with a small team,” the brand lost to competitors that were objectively less well known.
What turned out to be the cause
The analysis showed three overlapping problems.
The first and most important was a gap between the language of the site and the language of demand. The brand described itself as a “modular environment for intelligent commerce analytics.” Users, and the models reflecting their language, were asking about a “simple reporting service for an online store without a dedicated analyst.” The two vocabularies barely intersected. Nowhere on the site was there a direct answer to the question “who is this for?” phrased in the same words a user would use to describe the task.
The second problem was that the model reformulated the task into an adjacent category. Instead of “analytics for e-commerce,” the answer was built around “BI tools for small business.” As a result, the list included two solutions from a neighboring category that the brand did not consider competitors. This is the classic category drift described in a separate article in the corpus.
The third problem was the update lag. Two months before the research run, the brand had launched a new pricing plan aimed at the mid-market segment. But the model still described it as a solution for large companies — information about the new plan had not yet seeped into external reviews or structured data.
What follows from this
The observation confirms several theses that the AI100 corpus describes as persistent.
Machine distinctness and human recognizability are different things. A brand that is well known to people can be functionally invisible to a model if its language does not match the language of the task.
Category drift happens before direct comparison between brands. The brand lost not to a competitor, but to someone else’s frame. The model first renamed the task and only then assembled a list within the new category.
Update lag is not an abstract delay. It is a concrete situation in which the brand has already changed a fact about itself, but the machine has not yet had time to see it.
What the brand could have done
Add a page to the site that directly answers the question “who is our product for?” in the language of the user’s task, not the internal marketing category. Make sure the new pricing plan is described not only on the pricing page, but also in external reviews. Check that the structured markup (Product, Offer) reflects the current pricing plans and target audience.
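As an illustration of the third step, structured markup for the product could declare both the current plan and the target audience in schema.org terms. This is a minimal sketch: the product name, plan name, price, and audience wording below are invented for the example, and real values would come from the brand’s actual pricing page.

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "ExampleAnalytics",
  "description": "Simple reporting service for an online store without a dedicated analyst.",
  "audience": {
    "@type": "Audience",
    "audienceType": "mid-sized online stores with small teams"
  },
  "offers": [
    {
      "@type": "Offer",
      "name": "Mid-Market Plan",
      "price": "99.00",
      "priceCurrency": "USD",
      "availability": "https://schema.org/InStock"
    }
  ]
}
```

Note that the description deliberately repeats the user’s own phrasing of the task rather than the internal marketing category, so the same fix addresses the language gap and the update lag at once.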
A repeat research run in 6–8 weeks would show whether the picture has changed.
Related materials
Category drift: how a brand loses not only to a competitor, but to someone else’s frame of choice
How a brand can lose not to a competitor but to a different choice frame: AI shifts the user’s task into another category and assembles a different set of alternatives.

Update lag: how quickly AI systems change their view of a company after news, a product launch, or a price change
Why there is a time gap between a fact changing about a brand and its stable appearance in machine answers, and how to observe this lag in practice.

Mini-research card for the AI100 library
An observation card template for recording data from each AI100 test run, so that individual responses build into a research history.

Practical action map: how to strengthen a brand’s machine distinctness
Six sequential steps for improving AI visibility: from identity verification through language reassembly and trust contour to monitoring.

Check whether your brand is losing visibility due to a language gap
AI100 tests how the model sees the company in neutral scenarios, without prompting the name. If the brand is invisible, the report shows in which storylines it disappears and which adjustments are most likely to shift the result.
Open the sample report →