Transparency matters when you're showing a business owner a score and a dollar figure. Here's exactly how NarraLoom's AI Search Visibility Audit works — every step, every assumption, every limitation.

Why 20 questions

Every audit analyzes 20 buyer questions. Not 5. Not 50.

Five questions isn't enough to surface meaningful patterns. Fifty creates noise — the questions start overlapping and the report becomes unreadable. Twenty gives enough coverage to identify real gaps while keeping the report focused and actionable.

The questions aren't generic. They're generated for your specific business, market, and location. A plumber in Houston gets different questions than a dentist in Portland or a SaaS company in New York.

How questions are generated

When you enter your URL and location, NarraLoom scrapes your website to understand your business identity — what you do, where you operate, what services or products you offer. It also identifies linked external domains like review sites and directories.

From that analysis, AI generates 20 buyer questions that real people in your market are likely asking. Each question includes an estimated monthly search volume — an AI-generated approximation of how often that question gets searched.

These are buyer questions, not informational queries. "What is a kitchen remodel" is informational. "How much does a kitchen remodel cost in Orange County" is a buyer question — it signals someone who's ready to make a decision.

How we verify coverage

Generating questions is the easy part. Verifying who answers them is where the audit earns its credibility.

After the questions are generated, the verification pipeline runs:

Step 1 — Related brand discovery. The system identifies domains related to your business beyond just your primary URL — subdomains, sister brands, and linked properties.

Step 2 — Primary search verification. For each question, the system searches to find which domains have published content that answers it. This includes your own domain and competitor domains.

Step 3 — Competitor identification. For questions your site doesn't answer, the system identifies which competitors do — with the specific URL, a content snippet, and the competitor's domain name.

Step 4 — AI verification. An AI model reviews the search results to determine whether each question is strongly answered, partially answered, or unanswered by your domain. This catches edge cases where keyword matching might give a false positive or negative.

Step 5 — Safety net filter. A final pass removes false attributions — for example, preventing a directory listing from being counted as your own published content.

The result: each of the 20 questions gets a verified status (covered or uncovered) with evidence — the competitor URL, a content snippet, and a confidence score.
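For scoring purposes, the five steps above reduce to a per-question verdict. Here is a minimal sketch in Python; the field names are illustrative, not NarraLoom's actual schema, and the assumption that only strongly answered questions count as covered is mine:

```python
from dataclasses import dataclass

@dataclass
class QuestionResult:
    question: str
    status: str            # "strong", "partial", or "unanswered"
    competitor_url: str    # evidence when the question is uncovered
    snippet: str
    confidence: float      # verifier confidence, 0.0 to 1.0

def is_covered(result: QuestionResult) -> bool:
    # Assumption: only a strongly answered question counts toward the
    # visibility score; the report's exact rule may differ.
    return result.status == "strong"

def visibility_score(results: list[QuestionResult]) -> int:
    # The score is simply the count of covered questions out of 20.
    return sum(is_covered(r) for r in results)
```

The point of the shape is that every verdict carries its evidence (URL, snippet, confidence) alongside the status, so the score is auditable question by question.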

How the score works

Your visibility score is simple: how many of the 20 questions does your website answer?

0–1 out of 20 — critical gap. Your buyers are finding competitors for nearly every question they ask.

2–4 — needs work. You have some content, but major gaps remain.

5–7 — moderate. You're answering some questions, but competitors are ahead on the majority.

8–11 — strong. Your content covers many buyer questions, but there's room to grow.

12–20 — elite. You're answering most of the questions your buyers are asking.

The average score for a local service business is 5 out of 20. Most businesses are invisible for the majority of the questions their buyers are asking.
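The scoring bands above amount to a simple lookup. A sketch, using the band boundaries and tier labels from this article:

```python
def score_tier(covered: int) -> str:
    """Map a visibility score (0-20) to its tier label.

    Bands follow the article; the label strings are illustrative,
    not necessarily the report's exact wording.
    """
    if not 0 <= covered <= 20:
        raise ValueError("score must be between 0 and 20")
    if covered <= 1:
        return "critical gap"
    if covered <= 4:
        return "needs work"
    if covered <= 7:
        return "moderate"
    if covered <= 11:
        return "strong"
    return "elite"
```

Note that the stated average of 5 out of 20 lands in the "moderate" band, right at its lower edge.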

How the revenue estimate works

The revenue figure in the audit is an estimate — not a prediction, not a guarantee.

The formula: total monthly searches across unanswered questions, multiplied by close rate, multiplied by average transaction value, multiplied by 12 months.

Each variable explained:

Monthly searches is the sum of estimated monthly search volume for each unanswered question. These are AI-estimated approximations, not exact search engine data. They're disclosed as estimates throughout the report.

Close rate is the percentage of website visitors who become customers. It defaults by business type — 15% for local service providers, 5% for B2B SaaS, 10% for multi-location brands, 3% for consumer products, 8% for B2B products, 2% for authority and nonprofit organizations. You can edit this.

Average transaction value is how much a typical customer is worth. It defaults by business type — $2,000 for local service, $15,000 for B2B SaaS, $3,000 for multi-location, $50 for consumer products, $2,000 for B2B products, $500 for authority and nonprofit. You can edit this.

The label everywhere is "Estimated Annual Gap." You supply the inputs. You can argue with the search volume estimates, but you can't argue with your own close rate and transaction value — those are your numbers.
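Putting the formula and the per-type defaults together yields a short calculation. The defaults below are the values stated in this article; the type identifiers are illustrative, not NarraLoom's internal names:

```python
# Default close rates and average transaction values by business type,
# as stated in the article. Both are editable by the business owner.
CLOSE_RATE = {
    "local_service": 0.15, "b2b_saas": 0.05, "multi_location": 0.10,
    "consumer_product": 0.03, "b2b_product": 0.08, "authority_nonprofit": 0.02,
}
AVG_TRANSACTION = {
    "local_service": 2000, "b2b_saas": 15000, "multi_location": 3000,
    "consumer_product": 50, "b2b_product": 2000, "authority_nonprofit": 500,
}

def estimated_annual_gap(unanswered_monthly_volumes, business_type,
                         close_rate=None, avg_value=None):
    """Monthly searches across unanswered questions, times close rate,
    times average transaction value, times 12 months."""
    rate = close_rate if close_rate is not None else CLOSE_RATE[business_type]
    value = avg_value if avg_value is not None else AVG_TRANSACTION[business_type]
    return sum(unanswered_monthly_volumes) * rate * value * 12
```

For example, a local service business with 150 estimated monthly searches across its unanswered questions gets 150 × 0.15 × $2,000 × 12 = $540,000 as its Estimated Annual Gap, and editing either input rescales the figure proportionally.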

For businesses classified as weak fit or public institutions, the revenue figure is hidden entirely. The audit is informational only for those cases.

Six business type classifications

Every audit is classified into one of six business types: Local Service Provider, B2B SaaS, Multi-Location Brand, Product (Consumer), Product (B2B), and Authority / Nonprofit. The classification determines default revenue inputs and influences how the report frames its findings.

Classification is AI-determined during the audit, but every revenue input it sets remains editable by the business owner.

Fit assessment

Not every business is a strong fit for content-driven growth. The audit assesses fit on a three-tier scale.

Strong fit means full revenue figures, clear conversion path, and content that directly drives buying decisions. Moderate fit means content helps but isn't the primary driver, with revenue figures shown using softer framing. Weak fit means content is informational, not commercial — no revenue figures shown and no conversion pressure.

Guardrails prevent local service providers, B2B SaaS, and multi-location brands from being classified as weak fit. Public institutions are always routed to custom review.
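The guardrails above can be sketched as a small routing function. The tier names, type identifiers, and the custom-review flag are assumptions about how this might be wired, not the product's actual fields:

```python
# Business types that, per the guardrails, can never be classified weak fit.
ALWAYS_AT_LEAST_MODERATE = {"local_service", "b2b_saas", "multi_location"}

def apply_fit_guardrails(business_type: str, ai_fit: str,
                         is_public_institution: bool) -> tuple[str, bool]:
    """Return (fit_tier, route_to_custom_review).

    ai_fit is the AI's raw assessment: "strong", "moderate", or "weak".
    """
    if is_public_institution:
        # Public institutions are always routed to custom review.
        return ai_fit, True
    if ai_fit == "weak" and business_type in ALWAYS_AT_LEAST_MODERATE:
        # Guardrail: these types are bumped to at least moderate fit.
        return "moderate", False
    return ai_fit, False
```

The design point is that the AI's raw assessment is never the last word: deterministic rules run after it, so a misclassification can't strip revenue figures from a business type the audit is built for.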

What the audit doesn't do

Transparency means acknowledging limitations.

Search volumes are estimates. They're AI-generated approximations, not exact search console data. The report discloses this in the methodology section, the FAQ, and inline tooltips.

The audit is a point-in-time snapshot. It reflects your website's coverage when the audit runs. Content you publish after the audit won't be reflected until you run it again.

Competitor evidence is search-verified, not exhaustive. The audit finds competitors who are answering the questions — it doesn't claim to find every competitor.

Revenue is directional, not guaranteed. The estimate shows the scale of the opportunity, not a promise of revenue.

Data provenance

Every audit report includes a data provenance line showing when it was scanned and how many domains were analyzed. An expandable section explains the methodology in plain English.

The score is computed once and persisted. The same number appears everywhere — in the report, the email brief, and the snapshot card. We don't recompute and risk showing different numbers on different surfaces.

Run your own audit

The methodology is the same for every business. The questions, competitors, and scores are unique to yours.

Want to see this in action?

Run a free AI Search Visibility Audit for your business. See which buyer questions you're not answering — and who is.

Run Your Free Audit

60 seconds · No sign-up required
