The internet is not instantaneous for an answer system
The most treacherous mistake in discussions of brand visibility in AI is to assume that the internet is instantaneous. From a human point of view, the intuition is understandable: the news has been published, the price on the site has changed, the product card has been updated, the press release has gone out to the mailing list. It feels as though, from that moment, the world already “knows” the new version of the company. But answer systems operate on their own time: not the brand’s time, but the time of crawling, indexing, repeated retrieval, synchronization of data feeds, and, finally, renewed answer synthesis. That is why there is almost always a lag between an event and its full reflection in the brand’s machine representation. Sometimes it is measured in hours. Sometimes in days. Sometimes in weeks. And in some cases, longer still.
To understand the nature of that delay, it helps to break it down into several layers. The first layer is publication lag: the moment when the company itself actually introduced the change into the canonical source. Very often, a business says, “we’ve already updated the information,” when in fact the update was made only on a single page, without synchronization across documentation, pricing, product cards, and external profiles. The second layer is discovery lag: a search crawler or another technical agent has to notice that the page has changed. The third layer is indexing lag: the change has to enter the index or the platform’s machine-readable infrastructure. The fourth layer is answer-assembly lag: even indexed information does not necessarily surface immediately in a specific AI answer. The fifth layer is source-alignment lag: if external sources continue to describe the brand in the old way, the system may continue, for some time, to hold on to the previous version of the entity as it tries to reconcile conflicting evidence.
In its simplest form, the total delay can be written as:
L_total = L_pub + L_disc + L_index + L_synth
where L_total is the full delay between a change in the company and the appearance of a durably updated machine answer, L_pub is publication lag, L_disc is discovery lag, L_index is indexing lag, and L_synth is synthesis lag. The fifth layer, source alignment, is deliberately left out of the sum: it rarely has a clean start and end, and in practice it shows up as a prolonged L_synth while the system keeps reconciling conflicting evidence. The formula obviously simplifies reality in other ways too, because some processes may run in parallel. But it is useful precisely as a thinking tool: the brand stops perceiving “updating in AI” as a single magical act and begins to see a sequence of distinct technical and content transitions.
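To make the decomposition concrete, here is a minimal Python sketch that computes the four measurable terms from logged timestamps. Every moment below is invented for illustration; in a real study each one would come from your own monitoring.

    from datetime import datetime

    # Hypothetical timestamps for one tracked change; all values are examples.
    t_change    = datetime(2026, 3, 1, 10, 0)   # the fact changed inside the company
    t_published = datetime(2026, 3, 1, 16, 0)   # the canonical page was actually updated
    t_crawled   = datetime(2026, 3, 3, 4, 0)    # a crawler fetched the new version
    t_indexed   = datetime(2026, 3, 4, 12, 0)   # the change became visible in the index
    t_answer    = datetime(2026, 3, 9, 9, 0)    # an AI answer reproduced the new fact

    L_pub, L_disc = t_published - t_change, t_crawled - t_published
    L_index, L_synth = t_indexed - t_crawled, t_answer - t_indexed
    L_total = t_answer - t_change
    assert L_total == L_pub + L_disc + L_index + L_synth
    print(L_total)  # 7 days, 23:00:00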
What Google, Bing, and OpenAI say
Official documents from the major platforms confirm this multilayered structure. Google explains that for a page to appear in AI Overviews and AI Mode, it must be indexed and generally eligible to appear in ordinary search with a snippet; there are no special “AI requirements” for that [1]. In other words, before a page can become part of the answer environment, it must pass through the ordinary discipline of search accessibility. Moreover, in the same guidance, Google reminds site owners that indexing and display are not guaranteed even if all requirements are met [1]. In practice, that means that after updating a page, a brand cannot simply assume the work is finished; it still has to wait for the change to be discovered and to become genuinely available to answer modes.
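A practical corollary: before reasoning about AI-specific behavior, it is worth probing the basics for a given URL. Below is a deliberately crude indexability check in Python; the address is a placeholder, and a production check would parse the robots meta tag and HTTP headers properly instead of searching strings.

    from urllib import request

    url = "https://www.example.com/pricing"  # placeholder URL
    with request.urlopen(url) as resp:
        status = resp.status
        header_noindex = "noindex" in (resp.headers.get("X-Robots-Tag") or "").lower()
        body = resp.read().decode("utf-8", errors="replace")
    meta_noindex = "noindex" in body.lower()  # crude: a real check parses the robots meta tag

    # A page that errors out or opts out of indexing cannot surface in answer modes either.
    print(status, header_noindex, meta_noindex)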
The situation is especially visible in e-commerce. Google explicitly recommends combining structured data on the site with a product data feed in Merchant Center, because structured data improves the accuracy with which price, discounts, shipping, and availability are understood, while the product feed provides greater control over update timing, especially for large and frequently changing catalogs [2]. The same guidance states directly that frequent changes in price and availability are exactly what make the data feed especially important [2]. That is a revealing detail. In the classical editorial logic, the site seemed sufficient. In reality, commercial visibility increasingly depends on how quickly and reliably the system receives a machine-readable signal that something has changed.
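As a sketch of the on-page half of that pairing, the snippet below builds minimal schema.org Product markup in Python. The product name and price are invented; the essential discipline is that these values stay in lockstep with the Merchant Center feed.

    import json

    # Minimal schema.org Product markup; the name and price are examples.
    product_ld = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": "Example Plan Pro",
        "offers": {
            "@type": "Offer",
            "price": "49.00",
            "priceCurrency": "USD",
            "availability": "https://schema.org/InStock",
        },
    }
    # Emit the block that goes into the product page template.
    print('<script type="application/ld+json">' + json.dumps(product_ld) + "</script>")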
Microsoft and the IndexNow ecosystem make that dependency even more explicit. The official IndexNow site describes the protocol as a way to instantly notify participating search engines about content changes, whereas without such a signal discovery may take anywhere from several days to several weeks [3]. Bing states outright that generative search, real-time shopping, price promotions, restocking, and new product launches raise the requirements for data update speed; fragmented feeds and slow indexing cease to be a minor technical nuisance and become a direct cause of lost visibility [4][5]. This means lag has stopped being merely an inconvenience. It has become a competitive factor.
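Mechanically, an IndexNow notification is a single HTTP request. Here is a minimal sketch, assuming the verification key has already been generated and hosted; the host, key, and URLs are placeholders.

    import json
    from urllib import request

    payload = {
        "host": "www.example.com",
        "key": "your-indexnow-key",
        "keyLocation": "https://www.example.com/your-indexnow-key.txt",
        "urlList": ["https://www.example.com/pricing"],  # the pages that just changed
    }
    req = request.Request(
        "https://api.indexnow.org/indexnow",  # shared endpoint for participating engines
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    with request.urlopen(req) as resp:
        print(resp.status)  # 200 or 202 means the notification was accepted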
At OpenAI, the logic is the same, though it is expressed through the company’s own infrastructure. Documentation for the OAI-SearchBot crawler says that after a change to robots.txt, the system needs about a day to reconfigure site access for search purposes [6]. That is a small but very important marker: even a simple change in access rules does not take effect instantly. In the commercial layer, OpenAI goes further still and offers a direct product feed specifically intended to allow ChatGPT to “accurately index and display” products with current price and availability [7]. In OpenAI’s shopping help article, the company additionally warns that after a price or shipping change, there may be a delay before the new information is reflected, which is precisely why merchants are offered a direct feed [8]. And in the March 2026 release notes, the company separately reports that it improved product data coverage, freshness, and speed through the Agentic Commerce Protocol [9]. In other words, the leading platforms are not concealing the lag problem — they are building entire product solutions around it.
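For the robots.txt side specifically, Python’s standard library is enough to check what the live file currently tells OpenAI’s search crawler, keeping the roughly one-day propagation window in mind; the domain below is a placeholder.

    from urllib import robotparser

    rp = robotparser.RobotFileParser("https://www.example.com/robots.txt")
    rp.read()
    # True means OAI-SearchBot is currently allowed to fetch the page for search purposes.
    print(rp.can_fetch("OAI-SearchBot", "https://www.example.com/pricing"))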
What determines the duration of the lag
Several strategic conclusions follow from this for brands. First, update lag depends on the type of fact. A change in a company name, core positioning, or product composition is one type of update. A change in price, availability, or return conditions is another. News about a partnership, a funding round, or the release of a research study is a third. These facts have different levels of “machine sensitivity.” Commercial data can usually be accelerated more effectively through feeds, markup, and notification protocols. Reputational and meaning-level changes update more slowly, because they require not only the site to be crawled, but the entire network of external evidence to be reworked.
Second, the lag almost always grows when a brand maintains several weakly synchronized sources of truth. For example, the price has already been updated in the catalog, but remains old in the structured data. Availability has been corrected on the site, but Merchant Center has not yet caught up. A new plan has been published in the blog, but has not been added to the comparison table or reflected in the FAQ section. In that state, the system receives not an update, but a conflict. And when faced with conflict, answer systems tend either to become cautious or to rely on the source that appears more reliable and more formalized within their infrastructure.
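Drift of this kind can be caught mechanically. A minimal sketch follows, assuming the feed price is available from your own export and the page carries a single Product JSON-LD block; real markup often nests offers in lists, which this simplification ignores.

    import json
    import re
    from urllib import request

    def onpage_price(url: str):
        """Pull the offer price from the first Product JSON-LD block on a page (simplified)."""
        html = request.urlopen(url).read().decode("utf-8", errors="replace")
        for block in re.findall(r'<script type="application/ld\+json">(.*?)</script>', html, re.S):
            try:
                data = json.loads(block)
            except json.JSONDecodeError:
                continue
            if data.get("@type") == "Product":
                return data.get("offers", {}).get("price")
        return None

    feed_price = "49.00"  # placeholder: would come from the Merchant Center export
    page_price = onpage_price("https://www.example.com/pricing")
    if page_price != feed_price:
        print(f"conflict: page says {page_price}, feed says {feed_price}")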
Third, lag cannot be reduced to a single site. Even if a brand updates its own pages very quickly, the external contour may continue to live in the old version for a long time. An analytical article, an industry ranking, a directory, an aggregator, an old comparison with competitors — all of these continue to exist and participate in answer assembly. That is why, in sensitive cases, update work must include not only internal publication, but also a program of external synchronization: updating profiles, catalogs, the press kit, company listings, and sometimes proactively correcting widespread errors on third-party platforms.
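Taken together, these distinctions can be written down as the starting schema for a delay log: each fact type maps to the channels that usually accelerate it. The category and channel labels below are this project’s own shorthand, not platform terminology.

    # Fact types mapped to the update channels that usually accelerate them;
    # all names here are our shorthand, not platform terms.
    FACT_CHANNELS = {
        "price":        ["structured_data", "merchant_feed", "indexnow"],
        "availability": ["structured_data", "merchant_feed", "indexnow"],
        "positioning":  ["site_pages", "external_profiles", "press_kit"],
        "news":         ["site_pages", "press_coverage", "external_citations"],
    }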
How to measure and reduce lag
For AI100’s own research database, update lag is one of the most fertile topics, because it can be measured almost in laboratory fashion: choose a fact type (for example, a price change, the launch of a new feature, or the release of a major study), record the exact moment when the update appears in the canonical source, and then check at regular intervals how long it takes different AI systems to begin reproducing the new version consistently. Such a design yields not only interesting content, but extremely practical knowledge: which platforms react faster to which types of change, where data feeds work better, where external citations matter more, and where crawling signals are critical.
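A skeleton of that measurement loop might look like the sketch below; query_ai_system is a placeholder to implement per platform, and requiring several consecutive confirmations guards against answers that flicker between old and new versions.

    import time
    from datetime import datetime, timezone

    def query_ai_system(question: str) -> str:
        """Placeholder: ask one AI system and return the answer text (implement per platform)."""
        raise NotImplementedError

    def measure_lag(question, new_fact, t_published, interval_s=6 * 3600, confirmations=3):
        """Poll until the new fact appears in `confirmations` consecutive answers.

        t_published must be a timezone-aware datetime of the canonical update.
        """
        streak = 0
        while streak < confirmations:
            streak = streak + 1 if new_fact.lower() in query_ai_system(question).lower() else 0
            if streak < confirmations:
                time.sleep(interval_s)
        # Measured from canonical publication, this approximates L_disc + L_index + L_synth.
        return datetime.now(timezone.utc) - t_published

In a real AI100 run, the same question would be polled across several systems in parallel, producing one lag value per platform and per fact type.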
As a result, update lag turns out not to be a minor technical detail, but the heart of a brand’s new operational discipline. In the classic internet, one could afford a certain slowness: the user still came to the site and saw a fresh page. In the answer environment, that is no longer always true. The user encounters the synthesis first. And if that synthesis is assembled from old data, the brand enters the conversation with the market wearing an outdated mask. That is why modern work on AI visibility begins not only with content, but with the speed at which knowledge is updated. Those who know how to shorten the lag gain not simply a fresher site, but a more up-to-date version of themselves in the market’s machine perception.
It is well established that answer updating goes through several stages and may lag in different ways for different types of facts: names, prices, assortment, editorial evaluation, or reviews.
What is much less well established is any single, uniform update speed for specific platforms and verticals: the timelines depend on crawl frequency, data availability, query type, and on whether the update also appears in external sources.
The practical purpose of this article is to move the conversation about freshness away from the level of “we think the system is outdated” and into a measurable log of delays by fact type and by update channel.
Sources
Related materials
Mini-research card for the AI100 library
An observation card template for recording data from each AI100 test run — so that individual responses build into a research history.
The “answer bubble”: why the same brand looks different in ChatGPT, Google, Copilot, and other systems
Why there is no single AI visibility: the same brand can look noticeably different across ChatGPT, Google AI Overviews, Copilot, and Perplexity.
When it makes sense to run a repeat study
There is a delay between a factual change and a machine answer update. A repeat AI100 run makes sense not the day after edits, but after changes have gone through the full cycle — publication, crawl, indexing, synthesis.