Last reviewed by: Lee Thomas, Managing Director, Crescat Digital — 30 April 2026
A managing partner’s prospective client opens ChatGPT and asks for a solicitor specialising in their problem. The answer comes back as a paragraph naming three firms — none of them yours. The enquiry is gone before it started, and you will never see it in your analytics, because the conversation never reached your website. This is the part of the legal marketing landscape that is hardest to feel and hardest to quantify, and it is now happening at meaningful scale.
AI summaries are now intercepting a meaningful share of legal queries before the click. The most rigorous behavioural dataset — Pew Research Center’s analysis of 68,879 Google searches in March 2025 (published July 2025) — found that users clicked through to a website on 8% of searches when an AI summary appeared, against 15% when none did. Click-through to the AI summary’s own cited sources sat at around 1%. Ahrefs, an SEO research firm, has measured a 58% drop in click-through for the top-ranking page on queries where an AI Overview is present (Ahrefs, February 2026). Google has disputed the methodology behind Pew’s figures; the directional finding has been confirmed by enough independent measurements that the trend itself is no longer contested. The available data is mainly US-based, but the structural pattern is now visible across English-language search markets. We discuss this further in our analysis of zero-click search and passage indexing.
This article sets out, for UK law firm decision-makers, how AI answer engines decide which firms to surface, what signals matter, and what 12 to 24 months of inaction looks like. It does so in plain English, with UK regulators and UK directories — not US ones — as the reference points.
Key takeaways
- AI answer engines now mediate the first impression for many legal queries. By early 2026, around 60% of Google searches return an AI Overview, roughly halving the click-through that organic results would otherwise see.
- AI does not rank websites. It selects sources by reading four signals: topical authority, entity and trust signals, site structure and schema markup, and off-site authority.
- Firms invisible to AI search risk losing enquiries to competitors without ever knowing the loss occurred. The cost shows up in conversations that did not start, not in fallen rankings.
- Traditional SEO is necessary but no longer sufficient. By March 2026, only 38% of AI Overview citations came from pages already in Google’s top ten, down from 76% in mid-2025 (Ahrefs). The signals that drive AI selection now include — and increasingly extend beyond — what gets you to page one.
- Acting now builds compounding advantage. AI systems re-cite the sources they have already learned to trust. Waiting 12 to 24 months creates a citation gap that becomes progressively harder to close.
Table of contents
- What “AI search” actually means for UK law firms in 2026
- How AI answer engines decide which firms to surface
- What this means for your firm’s marketing strategy
- What happens if you ignore AI search for 12 to 24 months
- A 90-day priority list for UK law firms
- Common questions about AI search and law firms
- Acting on AI visibility before the citation gap closes
1. What “AI search” actually means for UK law firms in 2026
AI search is not one thing. It is three distinct surfaces — Google AI Overviews, Google AI Mode, and large language model (LLM) answer engines such as ChatGPT, Perplexity, Claude, and Gemini — and each behaves differently. Treating them as a single category produces strategy advice that works for one surface and misses the other two.

Figure: Two side-by-side flow diagrams. Left, the traditional journey: search → list of firms → click. Right, the AI-mediated journey: search → AI answer citing firms → user may click, refine the question, or search the firm by name. Labels make clear that the right-hand path often ends without a click on any firm’s website.
Google AI Overviews are the AI-generated paragraphs that appear at the top of Google’s standard search results. Citations are visible; the user can click through. By early 2026, AI Overviews appeared for around 60% of all Google searches (multiple trade analyses), and for a much higher share of legal queries — US measurements put the rate at roughly 78% for “your money or your life” (YMYL) legal queries, the category Google flags for extra scrutiny because the stakes for the user are high. There is no published UK equivalent for that figure, but the prevalence in legal results is high enough that any firm relying on the standard organic listing as its primary visibility channel is now competing for the second-most-prominent slot on the page.
Google AI Mode is the conversational mode where users ask follow-up questions and receive synthesised responses. The user base for legal queries is smaller than AI Overviews, but growing. Citation behaviour overlaps with Overviews; the difference is the user is now in a conversation, not a search.
LLM answer engines are different again. Users come direct to ChatGPT, Perplexity, Claude, or Gemini, ask their question, and receive an answer drawn from the model’s training data, the platform’s live retrieval layer, or both. Google does not see these queries. The firm does not see the lost enquiry. Market share is moving fast: ChatGPT’s share of the consumer AI chatbot market has fallen from 86.7% in January 2025 to roughly 64.5% by January 2026 (Similarweb web-traffic data), while Google’s Gemini has surged from 5.7% to 21.5% over the same period. Microsoft Copilot sits at around 3% globally (StatCounter, March 2026); Perplexity is small in raw share but punches above its weight for research and citation-heavy queries. In the UK specifically, Ofcom’s 2026 reporting puts the share of UK adults using AI tools at 54%; granular UK ChatGPT usage data is not yet publicly available. There is not yet a public study measuring what share of UK legal-buyer journeys touch an LLM at any stage; the consumer-panel data and the broad UK AI-uptake figures both point to a number that is meaningful and rising.
| Surface | What it is | Where it appears | How it cites firms | Click-through behaviour |
|---|---|---|---|---|
| Google AI Overviews | AI-generated paragraph on Google’s results page | Top of standard organic results, before traditional listings | Named citations linked to source sites | Around 8% of searches produce a click to a traditional result when an AI summary appears (Pew, July 2025); around 1% click the summary’s cited sources |
| Google AI Mode | Conversational AI search | Separate Google interface; users ask follow-ups | Citations shown alongside synthesised answer | Lower click-through than Overviews; smaller user base for legal queries |
| LLM answer engines | ChatGPT, Perplexity, Claude, Gemini | Outside Google entirely; users come direct | Varies by platform; Perplexity and Claude cite frequently, ChatGPT less consistently | Citation may not produce a click; Google has no visibility into the query |
A regulatory development worth flagging at the outset. The Competition and Markets Authority (CMA) — the UK’s competition regulator — proposed four conduct requirements for Google in January 2026, under the Digital Markets, Competition and Consumers Act 2024 (DMCCA). These include a publisher conduct requirement that would let firms opt out of having their content used in AI Overviews and AI Mode without losing organic ranking. The consultation closed in February 2026; the CMA’s process is ongoing as of April 2026, with formal requirements expected later in the year. The proposed requirements are not yet legally in force. The practical implication once they take effect: firms will be able to opt out, but doing so removes them from the surface that increasingly mediates the first impression. The trade-off rarely favours opting out.
Where this leaves us: AI search is now a real, measurable layer between the searcher and the website. Understanding which firms it picks — and why — is the practical question.
2. How AI answer engines decide which firms to surface
AI does not rank websites the way Google traditionally ranked them. It selects sources by reading four signals — topical authority, entity and trust signals, site structure and schema markup, and off-site authority — and weighting them differently from one platform to another. The four-signal framework is the practical reference the rest of this article uses.
It is worth pausing on one shift first. Through 2024 and into mid-2025, traditional Google ranking served as a reasonable proxy for AI Overview citation: if you ranked in the top ten organic results, you had a good chance of being cited in the AI Overview that sat above them. Ahrefs measured this at 76% in their July 2025 study of 1.9 million citations. By March 2026, the same overlap had dropped to 38% across 4 million AI Overview URLs (Ahrefs, 1 March 2026). Ranking on page one of Google is now necessary but no longer sufficient. The four signals below are where the rest of the citation decision is made.
Topical authority and content depth
A firm’s practice-area pages need to read as if they were written by someone who actually does the work. AI engines reward comprehensive, answer-first content — pages that lead with a direct answer to the reader’s likely question, then expand. A 300-word page that says little more than “we handle employment disputes — call us today” cannot win citation against a 1,500-word page that explains the typical employment-tribunal timeline, the main grounds for unfair dismissal claims, the distinction between unfair and wrongful dismissal, and links to related guidance the firm has published. The shorter page may rank for a branded query; it will not be cited as a source for an informational one.
Dean (Dejan) Cook, founder of Legal Edge, made this argument explicitly in a January 2026 commentary in the Harvard Journal of Law and Technology Digest, a US legal-academic publication. Cook reviewed 50 US law firm websites and found a clear correlation between answer-first content structure and AI Overview citation. The data is US-specific; the structural principle transfers cleanly to UK practice. In our experience across UK legal SEO engagements, we see the same correlation: practice-area pages rebuilt in answer-first form tend to start picking up AI Overview citations within two to three months, while pages left in marketing-copy form do not. These are observed patterns from our own client work, not published benchmarks. (See our guide to topical depth in website structures for the longer treatment.)
Entity and trust signals
For an AI engine, the firm needs to be a recognisable entity — a solid, consistent business that exists in the same form everywhere it is described. The practical checks are:
- NAP consistency. Name, address, and phone number need to match across the firm’s own site, Google Business Profile, Chambers, Legal 500, Review Solicitors, Trustpilot, and the Law Society Find a Solicitor directory. Drift across these listings — a slightly different address on Chambers, an old phone number on Legal 500 — is one of the most common entity-signal failures we see on first audit.
- Solicitor bios with credentials. SRA numbers on every bio, a named author or reviewer on every published article, and speaking, writing, and qualification history made visible.
- Clear authorship. Articles attributed to a named individual or, for institutional content like this one, a named reviewer with relevant expertise. The reviewer line at the top of this article (“Last reviewed by: Lee Thomas, Managing Director, Crescat Digital — 30 April 2026”) is a deliberate signal of named human oversight.
The US legal-AI research firm Martindale-Avvo describes ChatGPT’s selection model as “the four R’s” — ratings, reviews, recognitions, and roots. The framework is US-specific (Avvo, Super Lawyers, FindLaw, and Martindale-Hubbell are the four directories ChatGPT cites most often in their research), but the underlying point applies. ChatGPT relies disproportionately on directory presence and review density. For UK firms, the equivalents are Chambers and Legal 500 for recognitions; Review Solicitors, Trustpilot, and Google Business Profile reviews for ratings and reviews; and the Law Society Find a Solicitor directory for roots. A firm absent from those listings is invisible to an entire layer of citation logic.
Site structure and schema markup
Schema markup — a standardised code format that tells search engines and AI systems what a page is about, such as whether it describes a law firm, a person, or a legal service — is what lets an AI engine read a page accurately. The relevant schema types for a law firm site are Organisation, LegalService, FAQ, Article, and Person (for solicitor bios). Implemented properly, schema turns “this is a page” into “this is a page describing the firm’s employment law practice, located at this address, with these named solicitors, regulated by the SRA”. Missing or broken schema is invisible to the firm and visible as a gap to the AI engine.
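As an illustration of what properly implemented markup can look like, here is a minimal JSON-LD sketch for an employment-law practice-area page. The schema.org types used (LegalService, PostalAddress, Person) are real vocabulary; the firm name, address, and solicitor are invented placeholders, and the exact set of properties a firm includes will vary with the page.

```html
<!-- Illustrative sketch only: firm name, address, and solicitor are invented placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "name": "Example & Co Solicitors",
  "url": "https://www.example-solicitors.co.uk/employment-law/",
  "telephone": "+44 161 000 0000",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example Street",
    "addressLocality": "Manchester",
    "postalCode": "M1 1AA",
    "addressCountry": "GB"
  },
  "areaServed": "United Kingdom",
  "employee": {
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Partner, Employment Law",
    "url": "https://www.example-solicitors.co.uk/people/jane-doe/"
  }
}
</script>
```

Read back by an AI engine, this says: a legal service at this address, reachable on this number, with a named solicitor attached to the practice area. That is the entity picture the paragraph above describes.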
The other piece is internal linking. AI engines follow a site’s internal link structure to understand which pages are central to the firm’s expertise. Practice-area pages should link to related guides, FAQs, and case studies; bios should link to relevant practice areas; the navigation should make the structure of the firm’s expertise visible. Our internal linking guide goes deeper on this.
Off-site authority
The fourth signal is the corroborating evidence the AI engine finds about the firm elsewhere on the web: directory listings (covered above), press mentions in trade and national publications, editorial coverage, brand consistency across the web, and citations in third-party content. Off-site authority is what turns a well-built site into a recognised entity.
The platform-specific point here is that ChatGPT relies on off-site signals far more than Google does. Martindale-Avvo’s analysis shows Perplexity and Claude mirror Google’s top results around 75% of the time each, Gemini at around 50%, and ChatGPT under 25%. The data is US-specific and single-source, but the directional pattern is consistent with what we see in UK queries: a firm with strong on-site signals but weak directory or off-site presence will appear inconsistently in ChatGPT, and often in Google AI Overviews too, even when it ranks well organically.

Figure: A concentric-ring diagram with practice-area pages at the centre, site structure and schema markup as the next ring, off-site authority and directories outside that, and brand consistency across the web as the outermost layer. The inner rings are necessary; the outer rings are increasingly determinative for citation in AI Overviews and LLM answer engines.
Want to know where your firm currently stands?
Book an AI Search Visibility Audit. Over seven to ten working days, we run three to five of your priority practice-area-and-location queries through Google AI Overviews, Google AI Mode, and the major LLM answer engines (ChatGPT, Perplexity, Claude, Gemini). You receive a written report — eight to twelve pages — covering where your firm appears today, where it does not, and the entity, schema, and content signals that are driving the gaps. We finish with a 30-minute walk-through call with a senior member of the Crescat team. Free for qualifying firms; no obligation beyond the call.
3. What this means for your firm’s marketing strategy
Four practical shifts follow from the four-signal framework, each with implications for how the firm spends, reports, and runs its marketing function.
Traffic may fall, but lead quality can rise. When AI Overviews answer the basic question, the users who still click through are further down the consideration journey. The pattern we see across firms we work with is that overall organic traffic dips by 10% to 25% in the first six months of significant AI Overview rollout for the firm’s queries, but enquiry quality from the remaining traffic improves. The reframing for the partnership meeting: the dip is not the failure signal it looks like at first glance. It is the AI layer intercepting early-funnel research, leaving more qualified buyers to reach the site. The firms that misdiagnose the dip — and respond by cutting SEO investment — are the ones that lose ground. (For the longer argument on why traffic-as-vanity-metric misleads in legal marketing, see our piece on why a firm’s website looks invisible to serious buyers even when traffic looks fine.)
Ranking first is no longer enough. Being the first organic result on Google does not guarantee citation in the AI Overview that sits above it. As covered in §2, only 38% of AI Overview citations in March 2026 came from pages already in Google’s top ten — down from 76% in mid-2025 (Ahrefs). The traditional SEO playbook — keyword research, technical health, content depth, link building — remains the foundation, because the page still has to be discoverable to be cited. But entity signals, schema, off-site authority, and directory presence now do work that organic ranking does not.
Ethics and accuracy matter more. Paragraph 8.8 of the SRA Code of Conduct for Solicitors requires that publicity about a firm’s practice be accurate and not misleading. The rule was drafted with direct lawyer communications in mind; it does not yet contemplate AI systems paraphrasing firm content. The SRA hosted a public AI policy and regulation session in February 2026 and has a generative-AI FAQ in preparation, but as of April 2026 there is no specific SRA guidance on AI-paraphrased publicity. The practical reading, drawing on Dean Cook’s Harvard Journal of Law and Technology Digest analysis of the parallel question under US professional-responsibility rules, is that disciplinary exposure for what an AI system rewrites further downstream is currently untested and unlikely. Reputational risk from AI mis-paraphrasing is real and immediate. Outdated practice-area pages, abandoned blog posts, and inconsistent solicitor bios are the raw material AI engines pull from. If the source content is current, accurate, and clear, the paraphrase is more likely to be too.
The CMA opt-out is a strategic decision, not a default action. Once the CMA’s conduct requirements take effect, firms will be able to opt out of having content used in AI Overviews without losing organic ranking. Some publishers have argued for the opt-out because AI Overviews suppress click-through. For a law firm, the calculation is different: the click-through loss is real, but invisibility on the surface that mediates the first impression for legal queries is worse. Being cited in an AI Overview — even for a query that ends without a click — keeps the firm in the buyer’s consideration set. Opting out removes that signal entirely.

Figure: A wireframe of an AI Overview for a UK legal query (illustrative — no real firm names). Callouts label where named firms appear in the cited sources strip, where directories (Chambers, Legal 500) dominate, where the “people also ask” expansion sits beneath the Overview, and how the user can click through, refine the query, or follow up with a question.
4. What happens if you ignore AI search for 12 to 24 months
The cost of inaction is not the falling ranking the firm can see. It is the citation patterns the firm cannot see, building among AI engines that re-cite sources they have already learned to trust.
Three points matter.
Citation memory. AI engines do not rediscover firms the way Google’s organic crawl does. They re-cite the patterns they have learned. A firm that establishes entity, schema, and content signals over the next year builds citation memory — the AI system has stored that firm as a reliable source for a category of queries, and re-cites it when adjacent queries appear. A firm that waits is not building this layer. When it later tries to compete, it is competing against firms that AI engines have already learned to surface.
Zero-click compounding. The organic traffic pool shrinks as AI Overviews handle more queries directly. The Pew Research Center finding (8% click-through with AI summary, 15% without) is a snapshot of one moment; the directional trend is well established. For commercial-intent legal queries that drive enquiry volume — “no win no fee solicitor London”, “employment tribunal claim time limit”, “conveyancing solicitor near me” — the share that produces a click to any website at all is falling. Firms that are cited in the AI Overview retain a presence in the buyer’s consideration set even when there is no click. Firms that are not cited disappear from the surface entirely.
Lost enquiries the firm never sees. The most expensive failure mode is invisible. A prospective client asks ChatGPT for a solicitor specialising in their problem, receives an answer naming three firms, and contacts one of them. Your firm has no analytics signal for that conversation. The Legal Services Consumer Panel — the independent statutory body that researches the UK legal services market — found in its 2025 Tracker Survey (3,750 consumers) that 57% of UK consumers who received a personal recommendation from a friend or contact still looked online for further information about the provider before contacting them. AI search invisibility does not just affect cold leads; it affects the warm referrals that firms most want to convert. The referred client opens ChatGPT, Google, or Perplexity to verify the recommendation; the firm that does not appear in the verification step loses the warm enquiry.

Figure: A line graph showing two trajectories over 24 months. Firm A (begins AI optimisation now) builds steadily, with a more pronounced inflection from month 12 as citation memory compounds. Firm B (continues with traditional SEO only) flatlines on AI visibility through month 12 and begins to decline by month 18 as competitors’ citation memory deepens. Y-axis: AI visibility score (illustrative). X-axis: months from now.
The reframing for the firm’s leadership: the question is not “what is our organic ranking?” It is “what proportion of the buyers asking about our practice areas this month encountered our firm in any form — organic, AI Overview, LLM citation, or directory?” That number is the one that drops while no one is looking.
5. A 90-day priority list for UK law firms
Three phases. None of them require a full strategic rewrite. Each is achievable inside a quarter and produces compounding results across the four signals.
Days 1 to 30: content and structure
Identify the firm’s five to ten priority practice-area pages — typically the practice areas that drive the most enquiry value per matter. Audit each for answer-first structure: does the page lead with a direct answer to the buyer’s likely question, or with a paragraph of marketing copy describing the firm? Pages that lead with marketing copy lose to pages that lead with answers, in both organic ranking and AI citation.
For each priority page, do four things:
- Rewrite the lead so the first paragraph answers the page’s central question
- Add an FAQ section of four to six questions and 50–80-word answers, suitable for FAQ schema
- Cross-link to related pages on the firm’s site (related practice areas, relevant guides, named solicitor bios)
- Remove or rewrite thin content (a major practice-area page under 500 words is a warning sign)
This is the highest-leverage work in the 90 days. In our experience, done well it begins to produce AI Overview citations within two to three months on long-tail informational queries — observed across our own client work, not a published benchmark.
Days 30 to 60: trust and entity hygiene
Standardise the firm’s name, address, and phone number across the firm’s website, Google Business Profile, Chambers, Legal 500, Review Solicitors, Trustpilot, and the Law Society Find a Solicitor directory. The most common failure we see on first audit is inconsistency: a slightly different address on Chambers, an old phone number on Legal 500, an outdated office on Google Business Profile. AI engines treat entity inconsistency as a trust signal — a firm that exists in five subtly different forms is not the same as a firm that exists in one consistent form.
Audit solicitor bios for SRA number presence. Add credentials, qualifications, speaking history, and publications to bios where missing. The bio is part of the firm’s citable content; AI engines pull facts from it.
Build editorial presence on a small number of trusted publications. The Law Society Gazette, Legal Futures, and trade-specific titles for the firm’s main practice areas are the practical targets. Two to four well-placed pieces a year do more than 20 self-published blog posts.
Days 60 to 90: technical and measurement
Implement schema markup on practice-area pages, solicitor bios, and the firm’s primary about page. Organisation, LegalService, FAQ, Article, and Person are the relevant types. Most law firm content management systems support schema implementation through plugins or theme settings; where they do not, the development cost is small.
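To make the FAQ type concrete, here is a minimal FAQPage sketch. FAQPage, Question, and Answer are real schema.org types; the question is an example of the kind added in the days 1 to 30 work, and the answer text is a placeholder to be replaced with the exact 50 to 80 word answer visible on the page, since FAQ markup is expected to mirror the on-page content.

```html
<!-- Illustrative sketch only: replace the placeholder answer with the exact text shown on the page -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is the time limit for bringing an employment tribunal claim?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Placeholder: paste the 50 to 80 word answer exactly as it appears on the page, so the markup and the visible content match."
      }
    }
  ]
}
</script>
```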
Check the firm’s robots.txt and bot access settings. By default, allow Googlebot, Bingbot, GPTBot (OpenAI’s training crawler), ClaudeBot (Anthropic’s training crawler), PerplexityBot, and Google-Extended (which controls Gemini training and grounding without affecting Google Search ranking). Blanket-blocking AI crawlers locks the firm out of the AI surfaces that increasingly mediate the first impression. There are reasonable arguments for blocking specific training crawlers if the firm has confidentiality concerns about content that should not be ingested for model training; the default for marketing content is to allow.
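A minimal robots.txt sketch of that default position follows. The user-agent tokens are the ones each provider documents; the explicit Allow lines simply state the default, and a firm with confidentiality concerns would replace Allow with Disallow for specific crawlers or paths rather than copying this verbatim.

```text
# Mainstream search crawlers
User-agent: Googlebot
Allow: /

User-agent: Bingbot
Allow: /

# AI crawlers and control tokens - default position is to allow marketing content
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

# Example of a targeted block for content that should not be used in model training
# User-agent: GPTBot
# Disallow: /client-portal/
```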
Set up AI visibility monitoring. The simplest version: a partner or marketing manager runs five to ten priority queries quarterly across Google AI Overviews, ChatGPT, Perplexity, Claude, and Gemini, and screenshots the citations. The screenshots become the firm’s running record of where it appears, where it does not, and how that has changed quarter over quarter. More sophisticated monitoring tools exist; the manual baseline is enough to start with and serves as the reference point for whether more sophisticated tools are worth the spend.
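One way to structure the quarterly record is a simple shared spreadsheet or CSV. The columns below are our own suggestion, not a standard, and the two rows are invented examples of what an entry looks like.

```text
date,query,surface,firm_cited,citation_position,screenshot_ref,notes
2026-07-01,"employment tribunal claim time limit",Google AI Overview,yes,2,q3-2026-001.png,"cited alongside two directories"
2026-07-01,"employment tribunal claim time limit",ChatGPT,no,,q3-2026-002.png,"three competitor firms named"
```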
The 90 days at a glance
| Phase | Focus | Headline actions | Why it matters |
|---|---|---|---|
| Days 1–30 | Content and structure | Identify five to ten priority practice-area pages; rewrite leads in answer-first form; add FAQ sections; cross-link related pages; remove or rewrite thin content | Highest-leverage work. Pages rebuilt in answer-first form begin picking up AI Overview citations within two to three months on long-tail informational queries (Crescat client observation) |
| Days 30–60 | Trust and entity hygiene | Standardise NAP across own site, Google Business Profile, Chambers, Legal 500, Review Solicitors, Trustpilot, Law Society Find a Solicitor; audit solicitor bios for SRA numbers and credentials; build editorial presence on Law Society Gazette / Legal Futures / trade titles | Closes the most common entity-signal failures and builds the off-site corroboration ChatGPT relies on disproportionately |
| Days 60–90 | Technical and measurement | Implement schema markup (Organisation, LegalService, FAQ, Article, Person); review robots.txt — by default allow Googlebot, Bingbot, GPTBot, ClaudeBot, PerplexityBot, Google-Extended; set up quarterly manual AI visibility monitoring across Google AI Overviews, ChatGPT, Perplexity, Claude, Gemini | Makes the firm machine-readable and gives leadership a running record of where the firm appears across AI surfaces |
If this priority list reads as a lot of work, it is precisely the work we structure into a Crescat AI Search Visibility Audit. The audit does the diagnostic step — telling the firm exactly where it stands today and what to fix first — so the 90 days of work targets the right things.
6. Common questions about AI search and law firms
Does ChatGPT actually recommend specific UK law firms?
Yes, but it does so by drawing on directory listings (Chambers, Legal 500, Review Solicitors), media coverage, and firm-website content — and it weights directories more heavily than Google’s AI Overviews do. ChatGPT is the outlier among the major LLM answer engines: Perplexity and Claude mirror Google’s top organic results around 75% of the time each, while ChatGPT does so less than 25% of the time in the most-cited research (Martindale-Avvo, US data). The practical implication is that strong directory presence matters disproportionately for ChatGPT visibility.
How long does it take for AI search optimisation to show results?
Impressions in AI Overviews can shift within four to eight weeks once foundation work is in place — schema implemented, NAP consistency restored, priority pages rewritten in answer-first form. Citations in LLM answer engines typically stabilise over three to six months as the engines re-crawl and re-train against the updated content. These are observed patterns from our own client work; they are not published benchmarks, and the timeline varies significantly by practice area, competition, and the firm’s starting position.
Can I opt out of AI Overviews without losing organic rankings?
Once the CMA’s proposed conduct requirements take effect — they were proposed in January 2026 and are still being finalised as of April 2026, with formal requirements expected later in the year — Google will be required to provide an opt-out from AI Overviews and AI Mode without applying any ranking penalty. Firms will be able to opt out, but doing so removes the firm from the surface that increasingly mediates the first impression for legal queries. For most law firms, the trade-off does not favour opting out.
Will AI search replace traditional SEO?
No. AI search is an additional surface, not a replacement. The signals that drive AI visibility — topical authority, entity and trust signals, site structure and schema markup, off-site authority — overlap heavily with the signals that drive traditional SEO. A firm with a healthy SEO foundation has a head start on AI visibility; a firm without one needs to build both at the same time, with structure and entity hygiene done early so the content investment compounds.
7. Acting on AI visibility before the citation gap closes
AI answer engines have changed how legal buyers form a first impression. The firms most visible to AI search in 2027 are the firms making decisions about their entity, content, and structural signals in 2026. The four signals — topical authority, entity and trust signals, site structure and schema markup, off-site authority — are the framework to act against. They are not new in their components; what is new is that they now determine citation in surfaces that mediate the buyer’s first impression before the firm’s website ever gets a chance to.
The window is open, but it is not endless. Firms that build citation memory in 2026 will be re-cited in 2027 and beyond. Firms that wait will be competing against citation patterns AI engines have already learned.
Book an AI Search Visibility Audit
Three specific outcomes from the audit:
- A snapshot of your firm’s visibility across Google AI Overviews, Google AI Mode, and the major LLM answer engines (ChatGPT, Perplexity, Claude, Gemini) for three to five priority practice-area-and-location queries
- An honest read of your firm’s entity, schema, and content signals — what is working, what is breaking citation pickup, and what to fix first
- A 90-day priority list ranked by impact and effort, so you know what to act on this quarter and what can wait
Seven to ten working days from kick-off. Eight to twelve-page written report plus a 30-minute walk-through call with a senior member of the Crescat team (Lee Thomas, Managing Director, or equivalent). Free for qualifying firms. No obligation beyond the call.
Sources
- Pew Research Center, “Google users are less likely to click on links when an AI summary appears in the results”, 22 July 2025. Data captured March 2025 across 68,879 searches.
- Ahrefs, “76% of AI Overview Citations Pull From the Top 10”, July 2025; “Update: 38% of AI Overview Citations Pull From Top 10”, Q1 2026.
- Martindale-Avvo, “The Authority Stack: The 4 Signals AI Uses to Evaluate Lawyers”, 2025.
- Dean (Dejan) Cook, “AI as the New Front Door to Legal Services”, Harvard Journal of Law and Technology Digest, 5 January 2026.
- Legal Services Consumer Panel, “How Consumers Are Choosing Legal Services 2025 Tracker Survey”, July 2025.
