How do I know if extensions improve performance?

Alexandre Airvault
January 14, 2026

Start by defining what “improve performance” means (because extensions can “win” in different ways)

In Google Ads, extensions are now called assets. Their job isn’t just to “get more clicks.” Assets can change how prominent your ad looks, which can lift click-through rate (CTR), influence the types of clicks you get (for example, more “menu” clicks vs. generic headline clicks), and even affect whether the system can show the most relevant version of your ad for a given search.

So before you evaluate whether assets improved performance, decide which outcome matters most for the campaign (the short sketch after this list shows the basic arithmetic):

  • Lead gen: cost per lead (Cost/conv.), lead volume, and lead quality (if you import offline outcomes).
  • Ecommerce: conversion value, ROAS (conversion value/cost), and profitability (if margins vary by product).
  • Local/service businesses: calls, direction requests, and form fills—not just CTR.
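To keep those definitions concrete, here is a minimal sketch of the arithmetic with invented numbers; swap in your own campaign totals.

```python
# Core efficiency metrics referenced above, computed from campaign totals.
# All numbers are invented for illustration.

def cost_per_conversion(cost: float, conversions: float) -> float:
    """Cost/conv. = total cost / conversions."""
    return cost / conversions if conversions else float("inf")

def roas(conversion_value: float, cost: float) -> float:
    """ROAS = conversion value / cost."""
    return conversion_value / cost if cost else 0.0

# Hypothetical lead-gen campaign, before vs. after adding sitelinks.
before = {"cost": 1000.0, "conversions": 40, "conversion_value": 4000.0}
after = {"cost": 1050.0, "conversions": 50, "conversion_value": 5200.0}

for label, row in (("before", before), ("after", after)):
    print(label,
          f"Cost/conv. = {cost_per_conversion(row['cost'], row['conversions']):.2f}",
          f"ROAS = {roas(row['conversion_value'], row['cost']):.2f}")
```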

Also set expectations correctly: assets are “free to add,” but they can change how many people click and what they click. You’re typically charged for clicks and certain interactions as usual, and the system limits charges to no more than two clicks per impression across an ad and its assets.

How to tell if assets are helping using in-platform reporting (the right way)

1) Confirm the asset is eligible and actually serving

This is the most common “false negative” I see: advertisers add assets, then judge them before they’re even showing consistently. Start in the Assets area and check the asset’s Status (Eligible vs. Not eligible). If it’s not eligible (or limited), you’re not evaluating performance—you’re evaluating a setup/policy problem.

Next, look at Impressions for that asset type. If impressions are near-zero, it may simply not be entering enough auctions where it’s predicted to help, or your Ad Rank isn’t high enough for that format to show reliably.
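If you'd rather pull serving data programmatically, here is a sketch using the Google Ads API Python client and the asset_field_type_view report, which groups metrics by asset field type. Treat the resource and field names as assumptions to verify against your API version; eligibility and policy status are easiest to read from the Status column in the UI.

```python
# Sketch: impressions and clicks by asset field type via the Google Ads API
# Python client (google-ads package). Field/resource names assume a recent
# API version -- verify against the reference docs for yours.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")  # your config
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      asset_field_type_view.field_type,
      metrics.impressions,
      metrics.clicks
    FROM asset_field_type_view
    WHERE segments.date DURING LAST_30_DAYS
"""

customer_id = "1234567890"  # hypothetical account ID
for batch in ga_service.search_stream(customer_id=customer_id, query=query):
    for row in batch.results:
        # Near-zero impressions for a field type means you should diagnose
        # serving (eligibility, Ad Rank) before judging performance.
        print(row.asset_field_type_view.field_type.name,
              row.metrics.impressions, row.metrics.clicks)
```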

2) Use the Assets page, but interpret “Clicks” correctly

On the Assets page you can view clicks, impressions, CTR, cost, and average CPC for each asset type. One subtle but critical detail: the “Clicks” you see for an asset type typically include clicks on the ad headline as well as clicks on the asset when that asset type appeared.

To isolate whether the asset itself is earning engagement (versus the headline doing all the work), segment the table by Click type. This is how you separate “headline clicks” from sitelink clicks, call clicks, and other asset interactions.
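If you export that click-type-segmented table, a few lines of pandas will do the separation. The file and column names below ("Asset type", "Click type", "Clicks") are assumptions; match them to your actual export.

```python
# Split headline clicks from asset clicks in an exported Assets report
# segmented by Click type. Column names are assumptions -- adapt as needed.
import pandas as pd

df = pd.read_csv("assets_by_click_type.csv")  # hypothetical export

pivot = df.pivot_table(index="Asset type", columns="Click type",
                       values="Clicks", aggfunc="sum", fill_value=0)

total_clicks = pivot.sum(axis=1)
# Everything that isn't a headline click counts as engagement with the asset.
asset_clicks = pivot.drop(columns=["Headline"], errors="ignore").sum(axis=1)
pivot["asset_click_share"] = asset_clicks / total_clicks
print(pivot.sort_values("asset_click_share", ascending=False))
```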

3) Watch conversion efficiency, not just CTR

It’s normal for strong assets to raise CTR. The real question is whether they improve downstream performance. Add conversion-focused columns and judge impact using the same conversion definitions you optimize to (for example, primary conversions vs. all conversions). If CTR rises but Cost/conv. worsens, you may be attracting curiosity clicks instead of qualified intent—or routing people to sitelink pages that aren’t built to convert.

For sitelinks specifically, you can segment and see clicks on the individual sitelink versus other parts of the ad. That makes sitelinks one of the easiest asset types to diagnose: you can tell which links are pulling interest and whether those clicks convert.
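Here is a hedged sketch of that diagnosis on an exported per-sitelink report: flag links whose CTR is strong but whose cost per conversion is materially worse than the account average. Column names ("Sitelink text", "Cost", "Conversions", "CTR") are assumptions to adapt, and the 1.5x threshold is arbitrary.

```python
# Flag "curiosity" sitelinks: above-median CTR but poor conversion efficiency.
# Assumes CTR was exported as a numeric fraction, not a "4.5%" string.
import pandas as pd

df = pd.read_csv("sitelinks.csv")  # hypothetical per-sitelink export
df["cost_per_conv"] = df["Cost"] / df["Conversions"].replace(0, float("nan"))
account_cpa = df["Cost"].sum() / df["Conversions"].sum()

suspects = df[(df["CTR"] > df["CTR"].median()) &
              (df["cost_per_conv"] > 1.5 * account_cpa)]
print(suspects[["Sitelink text", "CTR", "cost_per_conv"]])
```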

4) Don’t let “math that doesn’t add up” confuse you

Asset reporting has counting behaviors that surprise people. If multiple assets serve in the same impression, each asset can get credited with an impression, so summing asset impressions can exceed campaign impressions. Additionally, totals may not equal the sum of individual rows because totals remove duplicates (for example, multiple sitelinks showing together).

Even more important: ratio metrics at the individual asset level (CTR, CPC, CPA, ROAS) should be treated as directional, because assets are evaluated and served in combinations—not in isolation. For combination-heavy formats, you’ll typically get a more reliable read by judging performance at the asset group or campaign level first, then using asset-level metrics to decide what to refresh.
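A toy demonstration of the de-duplication point, with invented data: credit every asset shown in an impression and the per-asset sum exceeds the count of actual ad impressions.

```python
# Each set is one ad impression and the assets that showed with it.
impressions = [
    {"sitelink A", "sitelink B", "callout 1"},
    {"sitelink A", "callout 1"},
    {"sitelink B"},
]

per_asset = {}
for shown in impressions:
    for asset in shown:
        per_asset[asset] = per_asset.get(asset, 0) + 1

print("sum of per-asset impressions:", sum(per_asset.values()))  # 6
print("actual ad impressions:", len(impressions))                # 3
```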

5) Separate manual assets from automated assets (so you don’t “credit the wrong thing”)

Some assets can be created and shown automatically when the system predicts they’ll help (for example, dynamic sitelinks and dynamic structured snippets). These may show alongside or instead of your manual versions, and they’re designed to appear when they’re expected to boost results—which can make simple before/after comparisons misleading if you don’t know what’s actually serving.

Use the dedicated account-level automated assets view when you’re diagnosing automated asset impact. One key nuance: in that report, the “Clicks” column counts clicks on the headline when the automated asset appeared—so it’s measuring “ads with the asset present,” not “clicks on the asset itself.”
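When you export asset data that includes a source indicator, keep the two populations separate before attributing lift either way. A sketch, assuming a "Source" column that distinguishes advertiser-created from automatically created assets (adapt names and values to your actual report):

```python
# Compare manual vs. automated assets side by side instead of in one blended
# view. Column names ("Source", "Impressions", etc.) are assumptions.
import pandas as pd

df = pd.read_csv("assets_with_source.csv")  # hypothetical export
by_source = df.groupby("Source")[["Impressions", "Clicks",
                                  "Conversions", "Cost"]].sum()
by_source["CTR"] = by_source["Clicks"] / by_source["Impressions"]
by_source["cost_per_conv"] = by_source["Cost"] / by_source["Conversions"]
print(by_source)
```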

The most reliable way: prove it with a controlled test (instead of guessing)

Use an experiment when you want a true answer

If you want to know whether assets improve performance (not just correlate with it), run a controlled experiment where one arm is your current setup and the other arm is the same setup plus (or minus) the asset change you care about. For creative-style changes, ad variations are built to test modifications across multiple campaigns or even the account, with a defined traffic split and an end date.

Duration matters. For statistically meaningful reads—especially when conversion lag exists—plan for a multi-week test window. A common best practice is to run experiments for at least 4–6 weeks (and longer when conversion cycles are long). Some experiment result views also discard the initial ramp-up period to compare both arms more fairly, so don’t panic if early days don’t show in the experiment results view.
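If you want a sanity check on exported arm totals outside the in-platform experiment view (which handles ramp-up and conversion lag for you), a standard two-proportion z-test on conversion rate per click is a reasonable first pass. A minimal sketch with invented numbers:

```python
# Two-proportion z-test comparing conversion rate per click between
# experiment arms. Numbers are invented for illustration.
from math import sqrt
from scipy.stats import norm

def two_prop_z(conv_a, clicks_a, conv_b, clicks_b):
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (p_b - p_a) / se
    return z, 2 * (1 - norm.cdf(abs(z)))  # two-sided p-value

z, p = two_prop_z(conv_a=180, clicks_a=6000,   # control arm
                  conv_b=225, clicks_b=6300)   # arm with the asset change
print(f"z = {z:.2f}, p = {p:.4f}")  # small p suggests a real difference
```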

A quick “asset impact” experiment checklist (what I’d do in a real account)

  • Pick one change: test adding (or removing) one asset type or one asset set, not five changes at once.
  • Keep budgets unconstrained if possible: if you’re limited by budget, CTR lifts can just reshuffle spend rather than create incremental conversions.
  • Judge on your primary KPI: conversions + CPA for lead gen, or conversion value + ROAS for ecommerce (not CTR alone).
  • Run long enough: aim for at least 1–2 full conversion cycles, commonly 4–6 weeks (a rough sample-size sketch follows this list).
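On “run long enough”: a rough way to pressure-test the window is the standard two-proportion sample-size formula. This is a back-of-envelope sketch; the baseline conversion rate and target lift below are assumptions to replace with your own numbers.

```python
# Rough per-arm click count needed to detect a lift in conversion rate
# per click (two-proportion sample-size formula, alpha = 0.05, power = 0.8).
from math import ceil
from scipy.stats import norm

def clicks_per_arm(p_base, p_target, alpha=0.05, power=0.8):
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    var = p_base * (1 - p_base) + p_target * (1 - p_target)
    return ceil(((z_a + z_b) ** 2 * var) / (p_base - p_target) ** 2)

# Hypothetical: 3% baseline conversion rate, hoping for a 20% relative lift.
n = clicks_per_arm(p_base=0.03, p_target=0.036)
print(n, "clicks per arm")
# Divide by your average daily clicks per arm to sanity-check whether
# 4-6 weeks is realistically long enough for your traffic level.
```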

How to interpret outcomes (and what to do next)

If you see higher conversions (or conversion value) at the same or better efficiency, keep the assets and expand thoughtfully. In practice, “winning” asset strategies usually involve having enough variety that the system can assemble the best combination auction-by-auction, rather than trying to force one perfect extension.

If CTR rises but CPA/ROAS worsens, the fix is usually not “turn assets off.” More often, you need to tighten relevance. For sitelinks, that can mean removing “high-curiosity” links that don’t convert, replacing them with intent-matched paths (pricing, book, quote, demo), and ensuring those landing pages measure the right micro-conversions (add-to-cart, start checkout, qualified form steps) so the bidding system learns quality—not just volume.

If assets barely show at all, treat it as an eligibility/prominence issue. Assets are part of how your ad competes, and some formats only show when your Ad Rank is high enough. In that situation, improving ad relevance/quality or adjusting bids/bidding targets can be what unlocks consistent serving—only then can you fairly judge whether the assets help.

Quick-reference summary. For each step: what to do in Google Ads, how to judge whether assets improved performance, the primary metrics and views, and the relevant Google Ads documentation.

1. What does “improve performance” mean for this campaign?
What to do in Google Ads: Decide the main business goal for the campaign before looking at asset data:
  • Lead gen: focus on qualified leads, not just clicks.
  • Ecommerce: focus on revenue and profitability.
  • Local/service: focus on calls, directions, and form fills.
Remember that assets (extensions) change ad prominence and click mix, not just click volume.
How to judge: Assets “win” if they improve your primary KPI at stable or lower cost, or if they drive more of the right interaction type (e.g., calls, high‑intent pages) for the same spend.
Primary metrics & views:
  • Conversions, Cost/conv.
  • Conversion value, ROAS
  • Calls, direction clicks, form submissions
  • Clicks & interactions per ad + assets (remember: max two chargeable clicks per impression)
Documentation: About assets · Measure ad asset performance

2. Is the asset eligible and actually serving?
What to do in Google Ads: On the Assets page:
  • Check Status (Eligible, Limited, Not eligible).
  • Check that impressions for the asset or asset type are not near zero.
How to judge: If assets don’t serve, you’re diagnosing setup, policy, or Ad Rank issues—not performance. Consider an asset for performance evaluation only after it:
  • Is Eligible (no blocking policy issues).
  • Has enough impressions across a meaningful time window.
Primary metrics & views:
  • Asset Status column
  • Asset‑level Impressions
  • Campaign or account‑level asset reports (for coverage and serving)
Documentation: Measure ad asset performance · About account level asset reporting · About assets upgrade

3. Are “Clicks” on the Assets page being interpreted correctly?
What to do in Google Ads: On the Assets page:
  • View performance for each asset or asset type.
  • Segment by Click type to separate headline clicks from asset clicks (for sitelinks, call assets, etc.).
Understand that default “Clicks” typically include headline clicks when that asset appeared.
How to judge: Judge whether the asset itself is driving engagement by looking at:
  • Clicks on the asset (e.g., sitelink, call button), not just total ad clicks when the asset was present.
  • Relative performance of different asset variants of the same type.
Primary metrics & views:
  • Asset report segmented by Click type
  • Clicks on specific sitelinks vs. generic headline
  • Cost and CTR at asset and asset‑type level
Documentation: Measure ad asset performance · Upgraded assets report table

4. Do assets improve conversion efficiency, not just CTR?
What to do in Google Ads: Add conversion‑focused columns to the Assets view and relevant campaigns:
  • Use the same conversion actions you optimize to (primary conversions, conversion value).
  • For sitelinks, segment by Click type to see which links are clicked and whether those clicks convert.
How to judge: Assets are helping when:
  • CTR improves and Cost/conv. stays flat or improves.
  • Conversion value or ROAS improves at similar or lower cost.
  • Sitelinks or other assets route users to pages that produce more qualified actions.
If CTR rises but Cost/conv. worsens, assets may be driving curiosity traffic instead of qualified intent.
Primary metrics & views:
  • Conversions, Cost/conv. (or value and ROAS)
  • Per‑sitelink conversions and Cost/conv.
  • Down‑funnel events (add‑to‑cart, start checkout, high‑quality form steps)
Documentation: Measure ad asset performance · About sitelink assets

5. Is asset reporting “weird math” causing confusion?
What to do in Google Ads: When reviewing asset reports, keep in mind:
  • Multiple assets can be credited with an impression for a single ad impression.
  • Summed asset impressions can exceed campaign impressions.
  • Totals often de‑duplicate combinations, so totals may not equal row sums.
How to judge: Treat asset‑level ratio metrics (CTR, CPC, CPA, ROAS) as directional, especially for formats where:
  • Multiple assets show together.
  • Assets are selected as combinations by the system.
Get a stable read at the campaign or asset‑group level first, then use asset metrics to decide what to refine.
Primary metrics & views:
  • Campaign/asset‑group level performance vs. asset‑level breakdowns
  • Cross‑campaign or account‑level asset reports for a holistic view
Documentation: Measure ad asset performance · About asset reporting in Performance Max · About account level asset reporting

6. Are you separating manual assets from automated assets?
What to do in Google Ads: Distinguish between:
  • Manual assets you created (sitelinks, structured snippets, etc.).
  • Automatically created assets (dynamic sitelinks, automated structured snippets, and other account‑level automated assets).
Use the dedicated Account‑level automated assets view to see how ads perform when automated assets appear.
How to judge: Don’t credit or blame manual assets for lift driven by automated ones (or vice versa). Automated assets are designed to show when they’re predicted to help, so simple before/after comparisons can be misleading if you don’t know which assets actually served.
Primary metrics & views:
  • Account‑level automated assets report (Clicks, Impressions, CTR, Conversions)
  • Source filters (manual vs. automatically created) in asset reports
Documentation: Measure ad asset performance · About account level asset reporting

7. Have you run a controlled test to prove incremental impact?
What to do in Google Ads: When you need a true causal answer, use experiments:
  • Create an experiment where the only difference is adding, removing, or changing a specific asset or asset set.
  • Use ad variations or custom experiments to apply changes across multiple campaigns while keeping a controlled split.
  • Plan for a multi‑week window (often 4–6 weeks or at least 1–2 full conversion cycles).
How to judge: Assets are proven to improve performance when the experiment arm with the asset change shows:
  • Higher conversions or conversion value at equal or better CPA/ROAS.
  • Results that are stable over time and, where available, statistically significant in experiment reporting.
Primary metrics & views:
  • Experiment comparison view: Conversions, Cost/conv., Conversion value, ROAS
  • Experiment duration and traffic split
Documentation: About custom experiments · About ad variations · Experiments FAQs

8. How should you interpret experiment outcomes and next steps?
What to do in Google Ads: After the experiment:
  • If performance improves on your primary KPI, roll out the winning asset setup and expand thoughtfully (more high‑quality variants, not just more of everything).
  • If CTR improves but CPA/ROAS worsens, refine relevance rather than turning assets off:
    • For sitelinks: remove high‑curiosity, low‑conversion paths; focus on pricing, quote, book, demo, or other high‑intent destinations.
    • Ensure landing pages track key micro‑conversions so bidding can optimize for quality, not just volume.
  • If assets rarely show, address eligibility and Ad Rank first (ad quality, bids, or bidding targets).
How to judge: A “win” usually looks like:
  • More conversions or conversion value at stable or better efficiency.
  • Stronger results when the system can choose from a varied, high‑quality set of assets per auction.
Poor results often signal a relevance or landing‑page issue, not that assets are inherently bad.
Primary metrics & views:
  • Experiment results views
  • Per‑asset and per‑sitelink conversion performance
  • Ad Rank and serving diagnostics if impressions are low
Documentation: About sitelink assets · About assets · Experiments FAQs


If you’re trying to figure out whether Google Ads extensions (now called “assets”) are truly improving performance, move beyond surface-level CTR changes: focus on your primary KPI (like leads, cost per conversion, or ROAS), confirm the assets are actually eligible and serving with meaningful impressions, and interpret asset reporting carefully (for example, by segmenting by click type and remembering that asset-level math is directional when multiple assets show together). When you need a definitive answer, a controlled experiment where the only change is the asset setup is the cleanest way to prove incremental impact. If you want support doing this consistently, Blobr connects to your Google Ads account and uses specialized AI agents to monitor performance, highlight what’s driving results versus wasted spend, and suggest concrete, goal-aligned improvements across ads, assets, and landing pages—while keeping you in full control of what gets applied.
