SCENE 01 · THE DASHBOARD
You open the traffic report, and your stomach drops.
What zero-click search actually looks like on a Monday morning
You are Maya, Head of Audience at Current, a regional news and lifestyle publisher covering three markets in the Pacific Northwest. Seven reporters. One video producer. A newsletter that people actually open. Last year you hit your best subscriber numbers ever.
This morning, Google organic is down 41% year over year. Discover is down 19%. Your affiliate revenue for the kitchen gear vertical is off a cliff. Your CEO wants a plan by Friday, and she is not asking for a think piece. She is asking what you are going to do.
You are not imagining it. Google organic referrals to publishers dropped 33% globally in the year ending November 2025, and 38% in the U.S. (Press Gazette / Chartbeat / Reuters Institute, Jan 2026). You are part of a pattern.
You take a sip of coffee. You look at the blinking cursor in the draft Slack message to your CEO. Where do you start?
Path A · Diagnosis
SCENE 02 · THE WHY
Your search referrals did not leak. They were absorbed.
You pull up the Similarweb trend line and the picture sharpens. On news-related Google searches, the zero-click rate, where the user gets a full answer on the results page and never clicks through, jumped from 56% to nearly 69% between May 2024 and May 2025 (Similarweb, Jul 2025). That is a 13-point shift in twelve months. When AI Overviews trigger, Similarweb's data puts the zero-click rate around 83% on average. In Google's experimental AI Mode, Semrush measured roughly 93% (Semrush, 2025). For every 100 AI Mode searches, seven people click an external result. Seven.
And the bots are a different kind of problem.
Digital Trends, using TollBit’s monitoring tools, logged 4.1 million AI bot scrapes in one week against fewer than 4,200 referrals back, a scrape-to-referral ratio of roughly 966:1 (Media Copilot, Jan 2026). Cloudflare confirmed the pattern industry-wide and reported Anthropic’s ClaudeBot at 38,000:1 and OpenAI’s GPTBot at 1,700:1 (Cloudflare, Aug 2025). Ten years ago, Google ran about 2:1. Now even Google’s aggregate is 18:1.
All AI platforms combined still send publishers just 1% of total traffic (Digiday, Dec 2025). Your content is being vacuumed up and served back as answers on a surface you do not own, wrapped in an interface that rarely sends the reader home.
You sit back. The diagnosis is painful but clear. Now what?
Path B · The Three Strategies
SCENE 03 · THREE ROOMS, THREE DOORS
You draw three boxes on the whiteboard.
Block, license, or build—three doors every publisher is facing
Every publisher responding to AI search is doing one of three things. You sketch them out for the editorial meeting at 10.
You imagine each one as a room. Which door do you open?
Path B · Block
SCENE 04A · BOARDING THE WINDOWS
You block the crawlers, and the room goes quiet.
The real cost of updating your robots.txt
You update robots.txt. You talk to your CDN about bot-aware rules. The scraping rate drops. You exhale.
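A minimal sketch of what that robots.txt update might look like. The user-agent tokens below are the ones these crawlers publish; note that robots.txt is purely advisory, which is exactly why the CDN-level bot rules are the enforcement layer behind it.

```
# Disallow known AI crawlers (advisory: honored only by well-behaved bots)
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Keep classic search indexing open
User-agent: Googlebot
Allow: /
```

Blocking Google-Extended opts you out of Google's AI training use without touching Googlebot, so classic search rankings are unaffected.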
Blocking is a defensive play. It lowers the extraction rate, but it also cuts you off from the one growing referral surface that does not require you to compete on SERPs. Over 3,000 publishers have deployed bot paywalls, but the Reuters Institute survey of 280 media leaders found that 69% expect licensing to produce only supplementary revenue (Reuters Institute, 2026).
You can block and still do more. Most publishers layer blocking with another response.
Path B · License
SCENE 04B · RENTING OUT ROOMS
You pick up the phone to start licensing conversations.
What publishers actually get from AI content licensing deals
Bot paywalls, direct deals, and revenue-sharing programs are on the table. The same Reuters Institute survey found that 69% of media leaders expect licensing to produce only supplementary revenue (Reuters Institute, 2026). The marketplace is real but nascent, and the pricing leverage sits with the platforms for now.
Licensing pays you for the scrape. It does not give you back the reader. Your food editor’s blender review still gets quoted inside an AI answer with an interface your brand does not control, and the click still does not come home.
Path B · Build
SCENE 04C · THROWING YOUR OWN PARTY
You open the third door and find readers still inside.
On-site AI answers: what it looks like when the reader stays
An on-site AI answer engine replaces or supplements your existing site search with a conversational interface. A reader types or taps a suggested question. The model generates an answer from your archive, calls out the specific articles behind the claims, and offers three to five follow-up questions. The experience feels like ChatGPT. One important difference: the whole conversation happens on your property.
You picture it on Current’s homepage. A reader asks, “What should I cook this weekend that works on a weeknight?” The system pulls three of your food editor’s recipe guides, synthesizes the common thread, credits each article at the claim level, and suggests two follow-ups: “What about one-pan dinners?” and “What pantry staples should I stock for these?”
The reader stays. Your first-party data grows. Your revenue surfaces multiply instead of leaking. But right away you hit a design question that changes what you buy.
Path B · Build · Content Model
SCENE 05 · THE LIBRARY QUESTION
Single-source or multi-corpus?
If your answer engine is single-source, every response is built from your archive alone. You are the library, and every checkout card points back to one of your books. That works beautifully when the question lives inside your beat.
But readers do not know where your coverage ends. When a question extends past your archive, a single-source engine either fails or serves a thin answer. The reader bounces, and you just trained them that your AI “does not know.”
Multi-corpus implementations fill the gap with licensed content from outside publications, with attribution proportional to what each source contributed. The reader stays. Your coverage feels broader than your masthead.
This is the multi-corpus advantage that separates Gist Answers from single-archive tools. Claim-level attribution also preserves editorial integrity. Every statement in an AI answer traces back to its specific source publication, so the reader sees whose reporting stands behind each sentence.
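The single-source versus multi-corpus decision can be sketched as a retrieval policy: answer from your own archive first, and fill any gap from the licensed library while keeping per-passage attribution. This is an illustrative sketch only; the names (`Passage`, `retrieve`) are hypothetical, not any vendor's actual API.

```python
# Hypothetical sketch of single-source vs. multi-corpus retrieval.
# Not a real vendor API; scores stand in for a retrieval model's output.
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    source: str   # publication the claim traces back to
    score: float  # retrieval relevance for this question, 0..1

def retrieve(own_archive, licensed_corpus, min_score=0.5, k=3):
    """Prefer the publisher's own archive; fall back to licensed
    content only for the gap. Every passage keeps its source, which
    is what makes claim-level attribution possible downstream."""
    own = sorted(
        (p for p in own_archive if p.score >= min_score),
        key=lambda p: p.score, reverse=True,
    )
    picked = own[:k]
    if len(picked) < k:
        # Single-source engines stop here and serve a thin answer;
        # a multi-corpus engine fills the gap from the licensed library.
        extra = sorted(
            (p for p in licensed_corpus if p.score >= min_score),
            key=lambda p: p.score, reverse=True,
        )
        picked += extra[: k - len(picked)]
    return [(p.text, p.source) for p in picked]
```

With only one strong passage in the home archive, the result mixes in licensed passages while each one still names its publication, so attribution stays proportional to contribution.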
Path C · Platform Comparison
SCENE 06 · THE VENDOR BAKE-OFF
Three platforms. Real differences under the hood.
Three products have emerged as the primary on-site AI options for publishers. They share the core idea of keeping readers on-site and differ on content model, attribution, and how they monetize.
You start ranking them against what Current actually needs. The kitchen vertical has depth; the local politics vertical does not. Your reader base expects Current to sound like Current when it answers.
Monetization
SCENE 07 · FOUR REVENUE SURFACES
You build the revenue model for Friday’s deck.
On-site AI opens four monetization surfaces, each addressing a different intent signal that pageviews alone never captured.
The key point for your CFO: these streams do not cannibalize display, programmatic, or paywall revenue in the early field data. They monetize expressed intent — a reader who just asked a specific question — which is fundamentally different inventory from a served pageview.
Path D · The Evidence
SCENE 08 · WHAT THE FIELD IS SHOWING
Your CFO wants proof. Here it is.
The Taboola Publisher Product Council released on-site AI engagement data in December 2025, summarizing results across participating sites.
The Reuters Institute 2026 survey found 76% of media leaders are increasing AI engagement investment in 2026 (Reuters Institute, 2026). Digiday called on-site AI “the rewrite of publisher websites in 2026” (Digiday, Jan 2026).
One UX warning worth listening to
The suggested-question format matters. Open text fields underperform because readers encountering on-site AI for the first time do not know how to interact with it. What you surface, how you frame it, and where the widget sits turns out to be as important as the underlying model.
Evaluation
SCENE 09 · FIVE QUESTIONS FOR ANY VENDOR
Five questions to separate a real solution from a quick fix.
You write the five questions on the top of your Friday memo. Below each one you note whether each vendor clears the bar. The deck writes itself.
FAQ
SCENE 10 · QUICK ANSWERS
Frequently asked questions
What is an on-site AI answer engine?
An AI-powered search system embedded on a publisher’s website that generates conversational answers from the publisher’s content and, optionally, a licensed multi-source library. The reader, first-party data, and revenue stay on the publisher’s property rather than flowing to external AI platforms.
How does Gist Answers differ from single-source solutions?
Gist Answers draws from the publisher’s content plus a 700+ publication licensed library (multi-corpus model), uses proportional claim-level attribution, and offers three distinct revenue streams: sponsored questions, generative ads, and publisher-controlled inventory. Single-source tools use only the host archive.
What is the multi-corpus advantage?
When a reader’s question extends beyond the publisher’s own coverage, a single-source engine either fails or delivers an incomplete response. A multi-corpus implementation fills the gap with licensed content from hundreds of additional publications, keeping the reader on-site.
Do on-site AI answers cannibalize existing ad revenue?
Early data shows minimal cannibalization of existing placements or DSP positions. On-site AI engagement creates net-new interactions that did not exist before deployment. The revenue is incremental.
What results are publishers seeing?
On-site AI users generate nearly 3x higher revenue per user, more page views per session, and higher return visit frequency. Suggested questions drive 6x higher click-through than open text fields. AI article summaries increase reading depth rather than replacing it.
Can publishers both license content and deploy on-site AI answers?
Yes. Licensing monetizes external AI consumption through programs like per-scrape fees or revenue sharing. On-site AI answers monetize direct reader engagement. They address different portions of the zero-click revenue gap and are complementary.
Epilogue
SCENE 11 · THE MONDAY MORNING DEBRIEF
Friday. 4:12 p.m. You hit send.
Your memo lands in your CEO’s inbox with three things stacked against each other: what happened to the traffic, why it happened, and what Current is going to do about it. You have a 30-day pilot scoped. You have the five questions the finance team will ask your vendor shortlist. You have a revenue model that does not depend on reclaiming search rankings that may not come back.
The zero-click crisis is structural. On-site AI does not undo it. What it does is invert the relationship: the reader who would have vanished into an AI Overview now finishes the conversation on your property, with your voice, on inventory you own.
Key takeaways
- On-site AI inverts the zero-click problem. Keep reader, data, and revenue on your property instead of ceding them to external platforms.
- Content sourcing is the critical decision. Single-source fails when questions exceed the archive. Multi-corpus keeps readers on-site.
- Early field results: nearly 3x ARPU lift, 6x click-through on suggested questions, minimal cannibalization.
- Four revenue surfaces: sponsored questions, generative ads, publisher-controlled inventory, in-ad conversational units.
- Attribution determines trust. Claim-level citations preserve editorial integrity across multi-source answers.
- The category is mainstreaming. 2026 is the year on-site AI becomes standard publisher infrastructure.