What Estimate Should You Use for Google Ads Performance Evaluation?

Alexandre Airvault
January 19, 2026

Start With the Only “Estimate” That Matters: Your Definition of a Conversion

When people ask what estimate they should use to evaluate Google Ads performance, what they’re really bumping into is this: Google Ads can show several different “versions” of performance depending on which conversion columns, attribution settings, and time assumptions you’re looking at. If you don’t pick the right baseline, your CPA/ROAS analysis can swing wildly—even if nothing meaningful changed in the business.

Use “Conversions” (and “Conversion value”) as your primary performance truth

For day-to-day account evaluation, optimization, and any conversation about “did Google Ads hit the goal?”, the cleanest baseline is the Conversions and Conversion value columns. These columns are built around the conversion actions you’ve designated as the key outcomes you want to optimize toward (your “primary” actions). The practical benefit is consistency: bidding systems and most optimization decisions are aligned to these numbers, so you’re evaluating with the same scoreboard the platform is playing to.

If you need a single performance line to track over time, treat Cost/conv. (lead gen) or Conv. value/cost (ecommerce/value-based) as your headline KPI, and make sure everyone agrees which conversion actions are included.
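
In plain arithmetic, these headline KPIs are just ratios of the primary columns. A minimal Python sketch, with made-up numbers standing in for a real report export:

```python
# Sketch: the two headline KPIs as plain ratios of the primary columns.
# Numbers are made up; in practice they come from your Google Ads export.

def cost_per_conversion(cost, conversions):
    """Cost/conv. -- headline KPI for lead-gen accounts."""
    return cost / conversions if conversions else float("inf")

def conv_value_per_cost(conversion_value, cost):
    """Conv. value/cost (ROAS) -- headline KPI for value-based accounts."""
    return conversion_value / cost if cost else 0.0

spend = 5_000.0              # Cost column
conversions = 125.0          # Conversions column (decimals are normal under DDA)
conversion_value = 20_000.0  # Conversion value column

print(f"Cost/conv.:       {cost_per_conversion(spend, conversions):.2f}")        # 40.00
print(f"Conv. value/cost: {conv_value_per_cost(conversion_value, spend):.2f}x")  # 4.00x
```

The zero-guards matter in practice: a brand-new campaign with spend but no conversions yet should not report a misleading CPA of zero.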

Use “All conversions” as a diagnostic layer, not the main KPI

All conversions includes both primary and secondary conversion actions (plus certain special sources). This is extremely useful for diagnosing funnel behavior—like whether “add to cart” is rising while “purchase” is flat—but it can easily mislead stakeholders if they interpret it as final business impact. In most mature accounts, “All conversions” is best used to spot friction points and assist behavior, not to grade campaign success.

Use “Results” (and “Results value”) when you want goal-level rollups

If your reporting needs to be more business-friendly—especially for accounts organized around standard goals—Results and Results value can simplify what executives see. Think of these as a clean summary of outcomes across your primary actions grouped by the goals you’ve set up, with visual cues for whether a campaign is actually optimizing for that goal or merely contributing conversions to it.

Use “Conversions (platform comparable)” only for cross-platform comparisons (primarily Demand Gen)

If you’re comparing performance to other ad platforms, you may need a more apples-to-apples methodology. For certain campaign types, there’s a reporting view designed for that use case (for example, a platform-comparable conversions view for Demand Gen). It adjusts methodology (including view-through behavior and isolating the campaign type’s touchpoints) to better align with how other platforms typically report. The key point: this is a reporting-only lens and should not replace your core Google Ads optimization metrics when comparing across Google campaign types or making bidding decisions.

Know When Google Ads Is Showing You Modeled Data vs. True Forecasting

Not all “estimates” in Google Ads mean the same thing. Some are modeled measurement (filling in gaps where direct observation isn’t possible), and others are forecasting (predicting future outcomes under different bids/budgets/targets). Mixing these up is one of the fastest ways to make bad decisions.

Modeled conversions: treat them as part of real performance, but respect the stabilization period

Modern measurement includes modeling to estimate conversions that can’t be directly observed due to privacy protections or technical limits. Importantly, this does not mean Google is inventing conversions that didn’t happen; it’s estimating attribution for conversions that occurred when the linkage between the ad interaction and the conversion can’t be observed directly.

From an evaluation standpoint, the big operational takeaway is timing: modeled conversions can take several days to fully process and stabilize, and conversion values can be adjusted upward retroactively for a short period while modeling finalizes. That means judging performance on “yesterday vs. today” is often noisy—especially in accounts with shorter sales cycles and aggressive day-to-day changes.

Conversion delay: don’t judge recent days as “complete” unless your business converts instantly

Even with perfect tracking, many customers don’t convert immediately after clicking an ad. Depending on your conversion window settings, conversions can be reported long after the click. This creates a classic reporting trap: spend shows up immediately, conversions arrive later, so recent CPAs can look artificially high and ROAS can look artificially low.

To evaluate performance fairly, you need to account for your typical time-to-convert. Google Ads provides ways to understand delay patterns (including segmenting conversions by “days to conversion”) so you can see how many conversions usually arrive after 1 day, 7 days, 14 days, and so on.
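
To make that concrete, here is a small sketch that turns a "days to conversion" breakdown into a cumulative lag curve. The bucket counts are hypothetical, not from any real account:

```python
# Sketch: build a cumulative conversion-lag curve from "days to conversion"
# segment counts. Bucket counts below are hypothetical.

days_to_conversion = {
    "< 1 day": 60,
    "1-6 days": 25,
    "7-13 days": 10,
    "14+ days": 5,
}

total = sum(days_to_conversion.values())
cumulative = 0
for bucket, count in days_to_conversion.items():
    cumulative += count
    print(f"{bucket:>9}: {cumulative / total:.0%} of conversions reported")
```

A curve like this tells you, for example, that a day only becomes ~85% "complete" after about a week, which is exactly the context you need before judging recent CPAs.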

Conversion windows: your “counting horizon” is a setting, not a law of nature

A conversion window is simply the number of days after an ad interaction during which a conversion will be recorded for that conversion action. A shorter window will record fewer conversions for that action; a longer one will capture more delayed conversions. This matters for performance evaluation because two accounts can run identical campaigns and still show different CPAs purely due to window settings.
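
A toy example makes the window effect obvious. The click-to-conversion lags below are invented, but the arithmetic is the point: same activity, different CPAs purely from the window setting.

```python
# Sketch: identical clicks and conversions, different window settings.
# Hypothetical days elapsed between each ad click and its conversion.

click_to_conversion_days = [0, 1, 2, 5, 9, 12, 21, 28]
spend = 800.0

for window_days in (7, 30):
    # A conversion only counts if it lands inside the window.
    counted = sum(1 for d in click_to_conversion_days if d <= window_days)
    print(f"{window_days}-day window: {counted} conversions, CPA {spend / counted:.2f}")
```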

When you compare platforms, campaigns, or time periods, align conversion windows and reporting assumptions as tightly as possible—or you’re comparing different yardsticks.

Bid strategy and simulator “conversion estimates”: use them for planning, not for grading

Google Ads can show conversion estimates and conversion value estimates in bid strategy reporting and simulator-style tools. These are built from historical performance patterns and help answer “if we change X, what might happen?” They’re useful for scenario planning, setting expectations, and understanding the likely impact of new targets/budgets.

But they are not the same as actual conversion reporting. Estimates are particularly sensitive when your selected date range includes incomplete conversions due to conversion delay, and they can be affected when your account uses conversion adjustments (like returns/cancellations). Treat estimates as directional guidance, then validate with actual results once enough time has passed for conversions to mature.

A Practical Performance Evaluation Framework (So You Stop Debating Columns)

Step 1: Lock your “scoreboard” before you optimize

Before you change budgets, targets, or creative, confirm that the campaigns are optimizing to the right conversion goals and that the right conversion actions are designated as primary vs. secondary. Primary actions are the ones that populate your Conversions/Conversion value reporting and are used for bidding optimization (as long as the campaign is using the corresponding goal). Secondary actions are typically observation-only and roll into All conversions, unless you explicitly include them in a goal setup used for bidding.

This single step prevents the most common reporting disaster: celebrating a CPA drop that was actually caused by switching the “scoreboard” to easier, higher-volume secondary actions.

Step 2: Choose an evaluation window that matches your conversion delay

If you want a clean read on performance, avoid ending your reporting period “today” unless you’re confident most conversions happen same-day. A disciplined approach is to evaluate a date range that ends far enough in the past that the majority of conversions you expect have already been reported. If your conversion window is long, extend this buffer accordingly.
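
One disciplined way to implement that buffer is to push the reporting cutoff back by your observed maturity lag. A small sketch; the 10-day buffer is a hypothetical figure you would derive from your own lag data:

```python
# Sketch: derive a reporting window that ends before immature data begins.
from datetime import date, timedelta

# Hypothetical: your lag data shows ~95% of conversions arrive within 10 days.
MATURITY_BUFFER_DAYS = 10

def evaluation_range(lookback_days=30, today=None):
    """Return (start, end) dates for a performance read that excludes
    the most recent, still-maturing days."""
    today = today or date.today()
    end = today - timedelta(days=MATURITY_BUFFER_DAYS)
    start = end - timedelta(days=lookback_days)
    return start, end

start, end = evaluation_range(today=date(2026, 1, 19))
print(f"Evaluate {start} through {end}")  # 2025-12-10 through 2026-01-09
```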

If you must report on the most recent days (common in fast-moving businesses), pair your primary columns with “by conversion time” columns to understand what actually occurred recently, and use your historical conversion delay pattern to interpret whether recent CPAs/ROAS are likely to improve as lagging conversions post.
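
That interpretation step can be sketched as a projection: given how complete a day of that age usually is, estimate where its CPA will land once lagging conversions post. The completion fractions below are illustrative only:

```python
# Sketch: project a recent day's mature CPA from a historical completion curve.
# completion_by_age[d] = share of a day's conversions typically reported once
# that day is d days old (hypothetical values -- derive yours from lag data).

completion_by_age = {0: 0.60, 1: 0.75, 3: 0.85, 7: 0.95, 14: 1.00}

def projected_cpa(spend, reported_conversions, age_days):
    """Estimate where CPA will land once lagging conversions post."""
    # Use the closest known completion fraction at or below this age.
    known_age = max(d for d in completion_by_age if d <= age_days)
    expected_final = reported_conversions / completion_by_age[known_age]
    return spend / expected_final

# Yesterday: 600 spend, 9 conversions reported so far.
print(f"Naive CPA today:      {600 / 9:.2f}")                   # looks artificially high
print(f"Projected mature CPA: {projected_cpa(600, 9, 1):.2f}")  # 50.00
```

This is a directional estimate, not a guarantee; it simply stops you from overreacting to a day that is mathematically guaranteed to look bad right now.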

Step 3: Use attribution models intentionally (and know what’s still supported)

Attribution is not just a reporting preference; it can change the numbers in Conversions/All conversions and can influence how conversion-based bidding optimizes. Today, the practical decision for most advertisers is between data-driven attribution and last click. Several older rule-based models are no longer supported and were upgraded to data-driven attribution, so don’t assume an account is still operating under legacy models just because it was set up years ago.

Also remember that fractional credit is real: you may see decimals in conversion columns because some models distribute credit across multiple interactions.
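
A toy model shows why those decimals appear and why they still sum to whole conversions. The credit shares here are invented; in practice the attribution model assigns them:

```python
# Sketch: fractional credit under a data-driven-style attribution model.
# One conversion per path; credit shares are invented for illustration.

paths = [
    {"Campaign A": 0.75, "Campaign B": 0.25},  # conversion 1's click path
    {"Campaign A": 0.50, "Campaign C": 0.50},  # conversion 2's click path
]

credit = {}
for path in paths:
    for campaign, share in path.items():
        credit[campaign] = credit.get(campaign, 0.0) + share

print(credit)                # fractional per campaign
print(sum(credit.values()))  # 2.0 -- whole in total: 2 real conversions
```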

Step 4: Improve the quality of the “estimate” by improving measurement inputs

If your conversion reporting is undercounting because it can’t match users reliably, your performance evaluation will be skewed and automated bidding will learn from incomplete feedback. One of the most impactful upgrades for many advertisers is enabling enhanced conversions for web, which uses hashed first-party customer data collected on your site to improve matching and recover conversions that would otherwise be missed. This typically improves both reporting confidence and bidding performance over time.

For businesses with returns, cancellations, or changing customer value, conversion adjustments can keep your reporting aligned with real business outcomes by retracting conversions or restating conversion value after the fact. The goal is simple: make your conversion value reflect the value you actually keep.

Quick checklist (use this when performance “suddenly changed”)

  • Confirm you’re evaluating the right column: Conversions/Conversion value (primary) vs. All conversions vs. Results vs. by-conversion-time variants.
  • Confirm your goal setup: the campaign is optimizing for the intended goal, and the correct conversion actions are primary.
  • Confirm your counting method: “Every” vs “One” conversion per ad interaction is correct for each conversion action (sales usually want “every”; leads often want “one”).
  • Check conversion delay: don’t compare a mature period to a period that hasn’t had time for conversions to post.
  • Expect late changes: modeled conversions and attribution can update after the fact; don’t overreact to very recent fluctuations.
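
The counting-method item in the checklist above is easy to illustrate with a toy dedup on click IDs (hypothetical events, not a real export format):

```python
# Sketch: "Every" vs "One" conversion counting per ad interaction.
# Hypothetical (click_id, action) events -- not a real export format.

conversion_events = [
    ("click-1", "purchase"),
    ("click-1", "purchase"),   # repeat purchase, same click: counts under "Every"
    ("click-2", "lead_form"),
    ("click-2", "lead_form"),  # duplicate lead, same click: collapses under "One"
    ("click-3", "purchase"),
]

every = len(conversion_events)                              # sales: count every event
one = len({click_id for click_id, _ in conversion_events})  # leads: one per click

print(f"'Every' counting: {every}")  # 5
print(f"'One' counting:   {one}")    # 3
```

If an account silently flips a lead action from "One" to "Every", duplicate form submissions alone can make CPA "improve" with no real business change.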

So, what estimate should you use?

If your goal is performance evaluation inside Google Ads, use Conversions and Conversion value (and their derived CPA/ROAS metrics) as your primary truth, then interpret them through the lens of conversion delay, modeling stabilization, counting settings, and attribution.

If your goal is planning and scenario forecasting, use simulator and bid strategy conversion estimates as directional guidance—but only “grade” performance using mature, finalized conversion data once lagging conversions have had time to arrive.

If your goal is cross-platform comparison, use the relevant platform-comparable reporting view where available, and standardize conversion events and windows across platforms so you’re comparing like with like—not different definitions of success.

Summary: key ideas, how to use them, and relevant Google Ads documentation

Primary “scoreboard”: Conversions & Conversion value
The main truth for Google Ads performance is the Conversions and Conversion value columns, built from your primary conversion actions. Use Cost/conv. (lead gen) or Conv. value/cost (ecommerce) as your headline KPI, and make sure everyone agrees which actions are included.
  • Define which conversion actions are true business outcomes and mark them as primary.
  • Standardize reporting and optimization on Conversions/Conversion value across teams.
  • Make Cost/conv. or Conv. value/cost the main line you track over time.
See how the Conversions column and related conversion metrics are defined in reporting. ([support.google.com](https://support.google.com/google-ads/answer/11305867?hl=en))

All conversions (diagnostic, not main KPI)
All conversions combines primary and secondary actions (plus special sources). It’s powerful for understanding funnel behavior but can overstate business results if treated as the main success metric.
  • Use All conversions to inspect assists and upper‑funnel actions (e.g., add to cart, sign‑ups).
  • Keep leadership reporting anchored on primary Conversions, not All conversions.
  • Compare trends: if All conversions rise while primary Conversions are flat, look for funnel friction.
Review how All conversions and related columns differ from Conversions. ([support.google.com](https://support.google.com/google-ads/answer/6270625?hl=en))

Results & Results value (goal‑level rollups)
Results and Results value roll up your primary conversion actions by goal, giving executives a simpler, business‑friendly view of outcomes and whether campaigns are optimizing toward those goals.
  • Group related primary actions into clear goals (e.g., “Online sales”, “Leads”).
  • Use Results for high‑level reports while still using Conversions/Conversion value for deeper analysis.
  • Check whether campaigns are actually bidding to the goal they’re contributing Results for.
Learn how conversion‑related reporting columns map to different goal and result views. ([support.google.com](https://support.google.com/google-ads/answer/11305867?hl=en))

Conversions (platform comparable)
Conversions (platform comparable) is a special reporting‑only lens (for example in Demand Gen campaigns) designed to align more closely with how other ad platforms count conversions, especially around view‑through behavior and isolating that campaign type’s touchpoints.
  • Use this column only when doing cross‑platform comparisons.
  • Do not use it to drive bidding or to compare Google campaign types to each other.
  • Align conversion events and windows across platforms when using this view.
See conversions (platform comparable) columns for Demand Gen and cross‑platform comparison guidance. ([support.google.com](https://support.google.com/google-ads/answer/15299024))

Modeled conversions vs. forecasts
Some “estimates” in Google Ads are modeled measurement (filling gaps where conversions can’t be directly observed), while others are forecasts (predicting future performance). Modeled conversions represent real conversions with modeled attribution, not invented events, but they can change for several days as models stabilize.
  • Treat modeled conversions as part of real performance, but avoid judging “yesterday vs. today” in isolation.
  • Give a few days for modeled data and conversion value to stabilize before calling performance shifts.
  • Keep modeling in mind when explaining late‑arriving or upward‑revised conversion numbers.
Use conversion reporting diagnostics to understand how Google fills gaps and updates conversion data over time. ([support.google.com](https://support.google.com/google-ads/answer/6270625?hl=en))

Conversion delay (time to convert)
Spend appears immediately, but many users convert days later. With non‑instant journeys, recent periods will always look worse (higher CPA, lower ROAS) until delayed conversions post.
  • Segment conversions by days to conversion to understand your typical lag.
  • Avoid “grading” performance using date ranges that end today unless conversions are truly same‑day.
  • When forced to look at very recent data, pair standard columns with their “by conversion time” variants.
Explore delay behavior using conversion time segments in conversion tracking reports. ([support.google.com](https://support.google.com/google-ads/answer/6270625?hl=en))

Conversion windows (counting horizon)
A conversion window is the number of days after an ad interaction during which a conversion will be credited. Shorter windows miss late conversions; longer windows capture more but extend how long results keep changing.
  • Align conversion windows with your real buying cycle (for clicks, engaged views, and view‑throughs where relevant).
  • When comparing platforms or periods, ensure windows and reporting views are aligned as closely as possible.
  • Remember that two otherwise‑identical setups can show different CPAs purely from different window settings.
Configure windows while you set up web conversions, including click and view‑through windows and attribution settings. ([support.google.com](https://support.google.com/google-ads/answer/9119707))

Bid strategy & simulator conversion estimates
Bid strategy reports and simulators show conversion estimates and conversion value estimates based on historical patterns. They’re for planning and scenario exploration, not for grading actual performance.
  • Use simulators to answer “What might happen if we change budget/targets?”
  • Be cautious when the date range includes incomplete conversions (because of delay or modeling).
  • Validate simulator expectations against matured, actual conversion data before making big decisions.
Consult bid strategy and simulator help from within the conversion and bidding reports to interpret estimated vs. observed performance. ([support.google.com](https://support.google.com/google-ads/answer/11305867?hl=en))

Scoreboard lock‑in: primary vs. secondary actions
Before optimizing, you must “lock” your scoreboard: confirm which conversion actions are primary (used in Conversions and for bidding) and which are secondary (observation‑only, flowing into All conversions unless explicitly used in a goal).
  • Audit conversion actions and clearly label primary vs. secondary in line with business outcomes.
  • Ensure campaigns are optimizing to the intended goal and associated primary actions.
  • Re‑check this setup whenever performance “suddenly” changes, especially after structural edits.
Use the conversion setup workflow in web conversion configuration to review categories, goals, and status for each action. ([support.google.com](https://support.google.com/google-ads/answer/12216226))

Choosing an evaluation window
For clean performance reads, analyze periods that end far enough in the past for most expected conversions to have been reported. Reporting “up to today” without accounting for lag will systematically undercount results.
  • Base your reporting cutoff on observed conversion delay (for example, evaluate through 7–14 days ago).
  • When you must show very recent results, pair them with historical lag curves to contextualize under‑reporting.
  • Use “by conversion time” columns to understand what actually happened in a recent calendar window.
See options for conversion and conversion‑time columns in the reporting column reference. ([support.google.com](https://support.google.com/google-ads/answer/11305867?hl=en))

Attribution models (data‑driven vs. last click)
Attribution is not just a reporting preference; it changes how conversions are counted and how bidding learns. Today the practical choice is usually between data‑driven attribution (fractional, model‑based) and last click. Legacy rule‑based models have largely been deprecated or upgraded to data‑driven.
  • Default to data‑driven attribution when eligible; consider last click only when you need strict final‑touch credit.
  • Expect fractional conversion numbers because credit can be shared across multiple interactions.
  • Re‑evaluate performance whenever you change attribution, as historic metrics will shift.
Learn how data‑driven and last‑click attribution models assign credit and how to manage attribution settings. ([support.google.com](https://support.google.com/analytics/answer/10596866))

Improving measurement inputs: enhanced conversions
Poor matching leads to undercounted conversions and mis‑trained bidding. Enhanced conversions for web use hashed first‑party customer data to recover conversions that standard tracking can’t reliably match.
  • Enable enhanced conversions for key web conversion actions.
  • Work with devs or your tag manager to pass hashed email or other allowed identifiers.
  • Monitor the reported impact after implementation to validate improved measurement.
See enhanced conversions for web for benefits and implementation options, and setup using the Google tag. ([support.google.com](https://support.google.com/google-ads/answer/15712870))

Improving measurement inputs: conversion adjustments
For businesses with returns, cancellations, or evolving customer value, conversion adjustments let you retract or restate conversions and conversion values so reporting stays aligned with real revenue kept.
  • Implement adjustments where order value or completion status often changes after the initial conversion.
  • Use negative or scaled value adjustments to reflect refunds or partial returns.
  • Regularly reconcile platform revenue with back‑office data and adjust where there are material gaps.
Configure ongoing tracking and correction flows as part of your web conversion setup, including options for updating or retracting prior conversions. ([support.google.com](https://support.google.com/google-ads/answer/9119707))

Quick diagnostic checklist
When performance “suddenly” changes, issues are often measurement‑related, not real business swings. The article recommends a short checklist: right columns, right goals, right counting method, fair comparison windows, and expectations around late modeled changes.
  • Confirm you’re using the intended column set (Conversions/Conversion value vs. All conversions vs. Results, and “by conversion time” where needed).
  • Check that campaigns are optimizing to the correct goal and primary actions.
  • Validate conversion counting rules (“Every” vs. “One” per interaction) and conversion delay effects.
Cross‑check with your conversion list and statuses in the web conversions setup interface to ensure nothing structural has changed. ([support.google.com](https://support.google.com/google-ads/answer/12216226))

Which estimate to use for which purpose
The “right” estimate depends on your goal:
  • Inside‑Google Ads performance evaluation: Conversions & Conversion value (and CPA/ROAS), interpreted through delay, modeling, counting, and attribution.
  • Planning and forecasting: bid strategy/simulator conversion estimates as directional guidance.
  • Cross‑platform comparison: Conversions (platform comparable) (where available) plus aligned events and windows across platforms.
  • Standardize on one primary performance view (Conversions) and use alternatives only for clearly defined use cases.
  • Separate planning tools (simulators, estimates) from grading tools (matured conversion data).
  • For multi‑channel analysis, use platform‑comparable views and consistent conversion setups.
Combine the guidance from conversion reporting columns and platform‑comparable conversions to choose the right estimate for each reporting task. ([support.google.com](https://support.google.com/google-ads/answer/11305867?hl=en))


When you’re evaluating Google Ads performance, the most reliable “estimate” is usually the one tied to your true business outcomes: the Conversions and Conversion value columns built from your primary conversion actions, interpreted with the right context around attribution, conversion delay, and modeled updates. Other views, like All conversions, Results, or platform-comparable conversions, are useful too, but mainly for diagnostics, executive rollups, or cross-platform comparisons. If you want a calmer way to keep that scoreboard consistent over time, Blobr connects to your Google Ads account and runs specialized AI agents that continuously analyze what changed, surface measurement and reporting pitfalls, and translate best practices into clear, prioritized actions. That might mean tightening waste with keyword cleanup, improving RSA assets with a Headlines Enhancer, or aligning keywords to the right landing pages with a Keyword Landing Optimizer, while you stay in control of what gets applied.
