How do I know if extensions improve performance?

Alexandre Airvault
January 14, 2026

Understand what “extensions” are now (and what “improved performance” actually means)

In today’s Google Ads interface, what most advertisers still call extensions are generally referred to as assets. They’re additional pieces of content and business information—like sitelinks, call buttons, location info, images, headlines, and more—that can be assembled into the final ad a person sees.

The key nuance (and the reason measuring “improvement” can be confusing) is that assets don’t behave like a simple on/off switch. The system can choose which assets to show (or not show) at auction time based on predicted performance for that specific query and user. That’s why you can’t judge assets purely by “I added a sitelink and conversions went up” without checking what actually served and how click behavior changed.

Also, define “performance” before you measure it. For lead gen, it’s usually cost per lead and lead quality. For ecommerce, it’s conversion value, ROAS, and profit proxies. For local, it may include calls, directions, and store-visit-related actions. If you don’t lock this down first, assets can look “better” (more clicks) while actually making efficiency worse (higher CPA or lower ROAS).
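
To make that concrete, here are the two efficiency ratios in code. A minimal sketch with made-up numbers; nothing here is specific to any Google Ads API:

```python
# CPA and ROAS: the two efficiency yardsticks referenced throughout.
def cpa(cost: float, conversions: float) -> float:
    """Cost per acquisition (cost per lead for lead gen)."""
    return cost / conversions

def roas(conversion_value: float, cost: float) -> float:
    """Return on ad spend."""
    return conversion_value / cost

# "More clicks" can coexist with worse efficiency: if spend grows faster
# than conversions, CPA rises even while click counts look healthier.
print(cpa(cost=1200.0, conversions=20))             # 60.0 per lead
print(roas(conversion_value=4800.0, cost=1200.0))   # 4.0x
```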

How to tell if assets are helping: the reports and metrics that matter

Start in the Assets reporting (and interpret the numbers correctly)

The fastest way to answer “are my extensions helping?” is to use the Assets reporting, where you can see impressions, clicks, CTR, cost, and other stats tied to assets. Two common interpretation mistakes cause bad decisions here: misunderstanding what a “click” includes, and assuming totals should add up cleanly.

First, be careful with the Clicks column in asset reporting. In many views, the clicks shown can include clicks on the ad headline and on the asset (if the asset is clickable). If you want to isolate clicks on the asset itself, segment by Click type. That one step prevents a lot of false conclusions like “my sitelinks got 1,000 clicks,” when many of those clicks were actually on the headline.
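
If you export the asset report to a CSV, the same segmentation takes a few lines. A hedged sketch in Python, assuming columns named “Asset”, “Click type”, and “Clicks” (your export’s exact column names may differ):

```python
import pandas as pd

# Exported asset report; the file name and column names are assumptions.
df = pd.read_csv("asset_report.csv")

# One row per asset, one column per click type, so headline clicks and
# clicks on the asset itself are no longer blended into one number.
clicks_by_type = (
    df.groupby(["Asset", "Click type"])["Clicks"]
      .sum()
      .unstack(fill_value=0)
)
print(clicks_by_type)
```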

Second, don’t panic when the total row doesn’t match the sum of individual assets. When multiple assets serve together in a single impression, each asset can register an impression, but the total row removes duplicates (because it’s counting distinct impressions that contained that asset type). The result: individual rows can look like they “overcount” versus the total. This is normal.
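
Here’s a toy illustration of that deduplication, with invented impression IDs:

```python
# Two sitelinks that often serve together on the same impressions.
impressions_by_asset = {
    "sitelink_pricing": {"imp1", "imp2", "imp3"},
    "sitelink_contact": {"imp2", "imp3", "imp4"},
}

row_sum = sum(len(imps) for imps in impressions_by_asset.values())
total = len(set.union(*impressions_by_asset.values()))

print(row_sum)  # 6: each per-asset row counts shared impressions again
print(total)    # 4: the total row counts each distinct impression once
```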

For Responsive Search Ads: use the asset report + combinations report (and mind the June 5, 2025 cutoff)

If your question is mainly about classic Search extensions (sitelinks, callouts, structured snippets, etc.), you’ll get the clearest insight from the Responsive Search Ads asset reporting. The ad-level asset report lets you compare assets used within a specific responsive search ad and review metrics like impressions, clicks, cost, conversions, and conversion value.

One important platform change: the older “Performance label” approach has been deprecated in favor of full performance statistics, and those full stats are only available for date ranges on or after June 5, 2025. If you’re looking at earlier date ranges, your reporting and available columns may not align with what you expect today—so always check your date range before you decide the data is “missing” or “wrong.”
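
A trivial guard you can bake into any reporting script, reflecting that cutoff (the date comes from the paragraph above; the helper itself is hypothetical):

```python
from datetime import date

FULL_STATS_CUTOFF = date(2025, 6, 5)  # full RSA asset stats start here

def full_stats_available(range_start: date) -> bool:
    """True if the requested range only covers dates with full statistics."""
    return range_start >= FULL_STATS_CUTOFF

print(full_stats_available(date(2025, 7, 1)))  # True
print(full_stats_available(date(2025, 3, 1)))  # False: expect gaps, not bugs
```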

To understand how assets work together (which is often the real lever), use the combinations report. This shows common asset combinations and the impressions those combinations are getting. The goal isn’t to “rebuild” static ads from those combinations; it’s to spot patterns like “price + promo language tends to serve together,” then create more assets that reinforce what’s already working.
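
If you want to mine the combinations data outside the UI, a rough sketch like the one below can tally which asset lines keep appearing in high-impression combinations. The CSV layout here (a “Combination” column with assets joined by “ | ”, plus an “Impressions” column) is an assumption; adapt it to whatever your export actually contains:

```python
from collections import Counter

import pandas as pd

# Hypothetical export of the combinations report; adjust to your file.
df = pd.read_csv("combinations_report.csv")

theme_impressions = Counter()
for _, row in df.iterrows():
    # Credit every asset line with the impressions of each combination
    # it appeared in, so recurring themes bubble up.
    for asset_text in row["Combination"].split(" | "):
        theme_impressions[asset_text.strip()] += row["Impressions"]

for asset_text, imps in theme_impressions.most_common(10):
    print(f"{imps:>10,}  {asset_text}")
```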

For Performance Max, Demand Gen, and other asset-heavy campaigns: treat asset-level ratios as directional

In Performance Max (and other formats that assemble ads dynamically), asset-level reporting has become more transparent: metrics like impressions, clicks, cost, conversions, and conversion value are now available in asset reporting and asset group reporting (exact availability varies by campaign type).

Here’s the practical rule I use with clients: treat asset-level counts (impressions, clicks, conversions, cost) as useful signals, but treat asset-level ratios (CTR, CPA, ROAS, etc.) as directional only. These ratios can be heavily influenced by which other assets were shown alongside them, so they don’t represent “the isolated performance” of a single asset. When you’re deciding if assets improved performance, prioritize asset group and campaign outcomes first, then use asset-level data to guide creative refreshes.
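
A toy example of why those ratios mislead: the same headline can show very different CTRs depending on what serves alongside it, so its blended CTR is a property of the serving mix, not of the asset itself.

```python
# Made-up serving data: (companion asset, impressions, clicks).
serves_for_headline_a = [
    ("strong_image", 1000, 80),  # 8% CTR when paired with a strong image
    ("weak_image",   1000, 20),  # 2% CTR when paired with a weak image
]

impressions = sum(imp for _, imp, _ in serves_for_headline_a)
clicks = sum(clk for _, _, clk in serves_for_headline_a)

# 5.0% blended CTR: neither 8% nor 2%. Shift the mix toward weak_image
# and the headline's reported CTR falls with no change to the headline.
print(f"headline_A blended CTR: {clicks / impressions:.1%}")
```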

For Performance Max specifically, make creative decisions in context: review asset group performance, then use asset reporting and the combinations view to understand what themes resonate, what to produce more of, and what to replace.

The most reliable way to prove assets “improved performance”: run a controlled experiment

Why experiments beat “before vs after” comparisons

Assets often change how people interact with your ads (more entry points, more prominent formats), and they can also change auction behavior. That’s why simple “I added callouts last month and CPA dropped” stories can be misleading—seasonality, budget shifts, bidding changes, and query mix can all move your metrics at the same time.

If you want a confident answer, use an experiment to compare a control setup versus a treatment setup, then judge the difference using statistical significance and confidence intervals, especially if the expected lift is modest. In practice, many accounts need at least a couple of weeks of data before results stabilize enough to call a winner.
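
For the significance read itself, the underlying math is a standard two-proportion comparison. A hedged sketch (not how Google computes its experiment columns, just the textbook version) using only the Python standard library:

```python
from math import sqrt
from statistics import NormalDist

def rate_diff_ci(conv_a, clicks_a, conv_b, clicks_b, level=0.95):
    """Conversion-rate lift (treatment minus control) with a
    normal-approximation confidence interval."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    se = sqrt(p_a * (1 - p_a) / clicks_a + p_b * (1 - p_b) / clicks_b)
    z = NormalDist().inv_cdf(0.5 + level / 2)  # ~1.96 for 95%
    diff = p_b - p_a
    return diff, (diff - z * se, diff + z * se)

# Made-up numbers: control 120 conv / 4,000 clicks; treatment 150 / 4,100.
diff, (lo, hi) = rate_diff_ci(120, 4000, 150, 4100)
print(f"lift: {diff:+.2%}, 95% CI: [{lo:+.2%}, {hi:+.2%}]")
# If the interval excludes zero, the lift is significant at that level;
# if it straddles zero, keep the experiment running.
```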

A minimal, high-signal experiment setup (the “do this, not that” checklist)

  • Pick one asset change to test (for example: add sitelinks + callouts to a subset of campaigns, or replace weak sitelinks with new ones), so you can attribute movement to the change.
  • Choose one primary success metric that matches your business goal (Conversions and Cost/conv. for lead gen; Conversion value and ROAS proxies for ecommerce), and keep secondary metrics (CTR, CPC) as diagnostics.
  • Let the experiment run long enough to gather stable data; if results aren’t clear, extend the runtime or make sure the experiment receives enough traffic to detect meaningful differences (a rough traffic check is sketched after this list).
  • When reading results, look for the experiment’s estimated performance difference, the confidence interval, and whether the result is statistically significant—don’t cherry-pick one day of lift.
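
On the “enough traffic” point from the checklist, a rough normal-approximation sample-size check can tell you whether your split has any realistic chance of detecting the lift you care about. A sketch under textbook assumptions (95% confidence, 80% power), not a Google-provided formula:

```python
from statistics import NormalDist

def clicks_needed_per_arm(base_rate, min_lift, alpha=0.05, power=0.80):
    """Approximate clicks each arm needs to detect an absolute lift in
    conversion rate of at least `min_lift` over `base_rate`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
    z_power = NormalDist().inv_cdf(power)          # ~0.84
    p_bar = base_rate + min_lift / 2               # midpoint rate
    n = ((z_alpha + z_power) ** 2 * 2 * p_bar * (1 - p_bar)) / min_lift ** 2
    return int(n) + 1

# e.g., 3% baseline conversion rate, hoping to detect a 0.5-point lift:
print(clicks_needed_per_arm(0.03, 0.005))
# If your campaigns can't deliver that many clicks per arm in a sane
# timeframe, test a bigger change or accept a longer runtime.
```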

Turning the analysis into action: keep, improve, or remove assets

What “good” looks like (and what to change when it’s not)

Once you’ve validated that assets are helping (or at least not hurting), the next step is to make them more useful. In well-managed accounts, the biggest gains usually come from tightening relevance: sitelinks that map cleanly to high-intent paths, callouts that reinforce differentiators, structured snippets that pre-qualify, and assets that match the campaign’s intent (brand vs non-brand, high-funnel vs bottom-funnel).

When assets are underperforming, I typically act on these signals first: assets with zero impressions for multiple weeks (often a relevance or redundancy issue), assets with high interaction volume but poor downstream conversion performance, and any assets limited or disapproved by policy (fixing eligibility often unlocks volume quickly). The reporting views also allow you to add policy-related columns so you can see why an asset is limited and address it directly.
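
Those three triage rules are easy to automate against an exported report. A hedged sketch, assuming columns named “Asset”, “Impressions”, “Clicks”, and “Conversions” (rename to match your actual export):

```python
import pandas as pd

# Last ~4 weeks of asset data; file and column names are assumptions.
df = pd.read_csv("asset_report_last_4_weeks.csv")

# Rule 1: never served, likely a relevance or redundancy problem.
never_served = df[df["Impressions"] == 0]

# Rule 2: plenty of interaction, nothing downstream.
clicked_no_conv = df[(df["Clicks"] >= 100) & (df["Conversions"] == 0)]

print("Never served:", never_served["Asset"].tolist())
print("Clicks but no conversions:", clicked_no_conv["Asset"].tolist())
# Rule 3 (policy-limited assets) is best read from the policy columns
# in the UI, since exports may not include approval details.
```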

Don’t misread cost changes: assets can raise CPC and still be a win

It’s normal to see CPC rise after adding assets, because the ad can become more prominent and compete differently in the auction. That doesn’t automatically mean performance got worse; you may be paying slightly more for substantially better-qualified traffic or a higher conversion rate. Also remember that there’s no extra fee to add assets: you’re charged for clicks and certain interactions, and a single impression won’t generate unlimited asset charges.

The right way to judge the tradeoff is simple: if cost goes up but conversions and/or conversion value increase enough to keep CPA/ROAS on target (or improve it), the assets are doing their job. If cost goes up and efficiency degrades, either the asset messaging is attracting the wrong clicks, the landing pages behind the assets aren’t aligned, or you need a tighter test (experiment) to isolate what changed.
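
That decision rule fits in a few lines. A minimal sketch with invented targets and totals, just to make the keep-or-investigate logic explicit:

```python
def asset_verdict(cost, conversions, conv_value, target_cpa, target_roas):
    """Return the efficiency numbers and a keep/investigate call."""
    cpa = cost / conversions if conversions else float("inf")
    roas = conv_value / cost if cost else 0.0
    on_target = cpa <= target_cpa and roas >= target_roas
    call = ("keep assets" if on_target
            else "investigate: messaging, landing pages, or run an experiment")
    return cpa, roas, call

# Made-up month after adding sitelinks: spend rose, but so did conversions.
print(asset_verdict(cost=5200.0, conversions=130, conv_value=26000.0,
                    target_cpa=45.0, target_roas=4.0))
# (40.0, 5.0, 'keep assets'): higher cost, efficiency still on target.
```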

Quick reference: judging assets theme by theme

Theme: Define assets and “improved performance”
What to look at: Assets (formerly called extensions) are sitelinks, callouts, images, locations, etc. that Google assembles with your ad at auction time. They don’t behave like an on/off switch, and different combinations can show for different queries and users. ([support.google.com](https://support.google.com/google-ads/answer/2393094?hl=EN-GB&utm_source=openai))
How to decide if assets helped: First define what “better” means for the account: for lead gen, cost per lead and lead quality; for ecommerce, conversion value, ROAS, and profit proxies; for local, calls, directions, and store actions. Avoid judging success on clicks alone if efficiency (CPA/ROAS) worsens.
Key metrics / views: Account or campaign level: Conversions and Cost/conv.; Conversion value and ROAS; Calls, directions, and store visits (where available).
Relevant docs: Ad assets ([support.google.com](https://support.google.com/google-ads/answer/2393094?hl=EN-GB&utm_source=openai))

Theme: Asset reporting basics (Search)
What to look at: Use the Assets reporting view to see impressions, clicks, CTR, cost, and conversions for assets. Remember that in many tables the “Clicks” column includes clicks on the headline plus clickable assets; segment by click type to isolate clicks on a specific asset like sitelinks. ([support.google.com](https://support.google.com/google-ads/answer/2454072?hl=en-WS&utm_source=openai))
How to decide if assets helped: Judge assets by changes in user behavior and efficiency:
  • Segment by click type to see if additional entry points (e.g., sitelinks) are actually used.
  • Compare conversion metrics for traffic that includes assets vs. the baseline.
  • Don’t expect the sum of asset rows to equal the total; multiple assets can fire impressions in a single ad show.
Key metrics / views: In Assets reporting: Segment → Click type to separate headline vs. asset clicks; columns for Impressions, Clicks, CTR, Cost, Conversions, and Conversion value.
Relevant docs: Use segments in your tables ([support.google.com](https://support.google.com/google-ads/answer/2454072?hl=en-WS&utm_source=openai))

Theme: Responsive Search Ads: asset & combinations reports
What to look at: For classic Search-style assets (sitelinks, callouts, snippets, etc.), use the RSA asset report at the ad or campaign level to compare performance of individual assets. Use the combinations report to see which headlines, descriptions, and other assets tend to serve together. ([support.google.com](https://support.google.com/google-ads/answer/13548268?hl=en&utm_source=openai))
How to decide if assets helped:
  • Use asset stats (impressions, clicks, conversions, conversion value) to identify which lines consistently appear in high-performing combinations.
  • Check your date range: newer interfaces surface full statistics instead of only labels, and historical ranges may differ from current behavior.
  • Use combinations to spot patterns (e.g., “price + promo” themes) and then create more assets that reinforce top combinations rather than trying to recreate static ads.
Key metrics / views: Within a responsive search ad: view asset details (by asset) for stats per headline/description; the Combinations tab for common asset mixes and their impressions.
Relevant docs: Responsive search ad campaign-level text assets ([support.google.com](https://support.google.com/google-ads/answer/13548268?hl=en&utm_source=openai))

Theme: Performance Max, Demand Gen, and other asset-heavy formats
What to look at: Performance Max and similar campaign types assemble ads dynamically from text, image, and video assets in asset groups. Reporting now exposes full asset statistics (impressions, clicks, costs, conversions, conversion value) plus additional creative insights. ([support.google.com](https://support.google.com/google-ads/answer/16451273?utm_source=openai))
How to decide if assets helped:
  • Treat asset-level counts (impressions, clicks, conversions, cost) as useful signals.
  • Treat asset-level ratios (CTR, CPA, ROAS) as directional only because they depend on what else served with the asset.
  • Prioritize asset group and campaign outcomes; use asset and combinations data to decide what themes to expand or replace.
Key metrics / views: Asset group performance (Conversions, Conversion value, ROAS); asset reporting (Impressions, Clicks, Cost, Conversions, Conversion value by asset).
Relevant docs: About Performance Max campaigns; How asset groups work; Performance Max creative reporting and insights ([support.google.com](https://support.google.com/google-ads/answer/10724817/about-performance-max-campaigns?utm_source=openai))

Theme: Running controlled experiments
What to look at: Instead of “before vs. after” comparisons, use Google Ads experiments to compare a control setup vs. a treatment setup (for example, with vs. without specific assets) under similar conditions and traffic splits. ([support.google.com](https://support.google.com/google-ads/answer/6261395?hl=en-WS&utm_source=openai))
How to decide if assets helped:
  • Test one meaningful asset change at a time (e.g., adding sitelinks and callouts, or replacing weak sitelinks).
  • Choose a single primary business metric (Conversions / cost per conv. for lead gen; Conversion value / ROAS for ecommerce).
  • Run long enough (often 2–12 weeks depending on volume) to reach statistical significance and stable confidence intervals.
  • Use the experiment’s estimated performance difference and confidence interval, not day-to-day swings, to call the winner.
Key metrics / views: Experiments view: primary metrics (Conversions, Cost/conv., Conversion value, ROAS); secondary diagnostics (CTR, CPC); experiment lift and confidence interval columns.
Relevant docs: Set up a custom experiment; About custom experiments; Monitor your experiments ([support.google.com](https://support.google.com/google-ads/answer/6261395?hl=en-WS&utm_source=openai))

Theme: Ongoing optimization: keep, improve, or remove assets
What to look at: Use reporting to identify:
  • Assets with zero impressions over several weeks (often irrelevant, redundant, or ineligible).
  • Assets that drive high interaction but poor downstream conversion performance.
  • Assets limited or disapproved by policy; fix policy issues to unlock volume quickly.
How to decide if assets helped: Keep and expand high-relevance assets mapped to strong paths (e.g., sitelinks to high-intent pages, callouts that reinforce differentiators). Improve or remove assets with no impressions, poor conversion efficiency, or policy limitations that you can’t or shouldn’t resolve.
Key metrics / views: Asset and asset association tables: Impressions, Clicks, Conversions, Conversion value; policy status and policy details columns to diagnose “Limited” and “Disapproved” assets.
Relevant docs: About sitelink assets; Ad assets ([support.google.com](https://support.google.com/google-ads/answer/2375416?utm_source=openai))

Theme: Cost, CPC changes, and charging for assets
What to look at: Assets can increase ad prominence and alter auction dynamics, which may raise CPCs even when overall performance improves. Many assets (such as sitelinks) are free to add; you pay for clicks and specific interactions, not for simply enabling the asset. ([support.google.com](https://support.google.com/google-ads/answer/2375416?utm_source=openai))
How to decide if assets helped:
  • Don’t treat higher CPC alone as a negative; compare changes in cost against changes in conversions and conversion value.
  • If cost rises but CPA and ROAS remain on target or improve, assets are likely attracting better-qualified traffic.
  • If efficiency worsens, tighten asset messaging, align landing pages, or use an experiment to isolate which change hurt performance.
Key metrics / views: Campaign or account level: CPC, Cost, Conversions, Cost/conv., Conversion value, ROAS; compare periods with asset changes, ideally via experiments for clean attribution.
Relevant docs: About sitelink assets (including costs) ([support.google.com](https://support.google.com/google-ads/answer/2375416?utm_source=openai))

To know whether extensions (now called ad assets: sitelinks, callouts, images, locations, and more) improve performance, start by defining “better” in business terms rather than judging on clicks alone: lower CPA and stronger lead quality for lead gen, higher conversion value and ROAS for ecommerce, more calls or directions for local. Then use the Google Ads Assets views, segment by click type to separate headline clicks from asset clicks, and compare downstream conversion metrics and efficiency (CPA/ROAS) over a meaningful date range. When possible, validate changes with a controlled Google Ads experiment instead of a simple before/after read, especially in formats like Responsive Search Ads and Performance Max, where Google mixes assets dynamically and per-asset ratios are only directional. If you want help doing this consistently, Blobr connects to your Google Ads account and runs specialized AI agents that surface what’s actually changing and what to do next, including an agent that optimizes sitelinks based on relevance and performance signals and another that refreshes underperforming headlines while staying aligned with your landing pages.
