Start With the Only “Estimate” That Matters: Your Definition of a Conversion
When people ask what estimate they should use to evaluate Google Ads performance, what they’re really bumping into is this: Google Ads can show several different “versions” of performance depending on which conversion columns, attribution settings, and time assumptions you’re looking at. If you don’t pick the right baseline, your CPA/ROAS analysis can swing wildly—even if nothing meaningful changed in the business.
Use “Conversions” (and “Conversion value”) as your primary performance truth
For day-to-day account evaluation, optimization, and any conversation about “did Google Ads hit the goal?”, the cleanest baseline is the Conversions and Conversion value columns. These columns are built around the conversion actions you’ve designated as the key outcomes you want to optimize toward (your “primary” actions). The practical benefit is consistency: bidding systems and most optimization decisions are aligned to these numbers, so you’re evaluating with the same scoreboard the platform is playing to.
If you need a single performance line to track over time, treat Cost/conv. (lead gen) or Conv. value/cost (ecommerce/value-based) as your headline KPI, and make sure everyone agrees which conversion actions are included.
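To make the "one scoreboard" idea concrete, here is a minimal sketch of computing those two headline KPIs from exported rows. The column names (`cost`, `conversions`, `conv_value`) are illustrative, not an official export schema:

```python
# Sketch: compute the two headline KPIs from hypothetical report rows.
# Field names are illustrative, not an official Google Ads export schema.

def headline_kpis(rows):
    """rows: list of dicts with 'cost', 'conversions', 'conv_value'."""
    cost = sum(r["cost"] for r in rows)
    conv = sum(r["conversions"] for r in rows)
    value = sum(r["conv_value"] for r in rows)
    return {
        "cost_per_conv": cost / conv if conv else None,        # lead-gen headline KPI
        "conv_value_per_cost": value / cost if cost else None,  # ROAS-style KPI
    }

rows = [
    {"cost": 1200.0, "conversions": 40.0, "conv_value": 4800.0},
    {"cost": 800.0,  "conversions": 10.0, "conv_value": 1200.0},
]
kpis = headline_kpis(rows)
```

The point of the helper is that everyone computes the KPI from the same totals, rather than averaging per-campaign ratios (which weights campaigns incorrectly).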
Use “All conversions” as a diagnostic layer, not the main KPI
All conversions includes both primary and secondary conversion actions (plus certain special sources). This is extremely useful for diagnosing funnel behavior—like whether “add to cart” is rising while “purchase” is flat—but it can easily mislead stakeholders if they interpret it as final business impact. In most mature accounts, “All conversions” is best used to spot friction points and assist behavior, not to grade campaign success.
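The "add to cart up, purchase flat" diagnostic can be reduced to a simple rate comparison. The action names and counts below are made up for illustration:

```python
# Sketch: use secondary-action counts as a funnel diagnostic, not a KPI.
# Action names and counts are hypothetical.

def funnel_rate(counts, upper, lower):
    """Share of the upper-funnel action that reaches the lower-funnel action."""
    return counts[lower] / counts[upper] if counts[upper] else 0.0

last_month = {"add_to_cart": 500, "purchase": 100}
this_month = {"add_to_cart": 700, "purchase": 100}

prev_rate = funnel_rate(last_month, "add_to_cart", "purchase")
curr_rate = funnel_rate(this_month, "add_to_cart", "purchase")

# Carts are rising while purchases are flat: a checkout-friction signal,
# not a reason to re-grade the campaign's CPA.
friction_signal = curr_rate < prev_rate
```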
Use “Results” (and “Results value”) when you want goal-level rollups
If your reporting needs to be more business-friendly—especially for accounts organized around standard goals—Results and Results value can simplify what executives see. Think of these as a clean summary of outcomes across your primary actions grouped by the goals you’ve set up, with visual cues for whether a campaign is actually optimizing for that goal or merely recording conversions against it.

Use “Conversions (platform comparable)” only for cross-platform comparisons (primarily Demand Gen)
If you’re comparing performance to other ad platforms, you may need a more apples-to-apples methodology. For certain campaign types, there’s a reporting view designed for that use case (for example, a platform-comparable conversions view for Demand Gen). It adjusts methodology (including view-through behavior and isolating the campaign type’s touchpoints) to better align with how other platforms typically report. The key point: this is a reporting-only lens and should not replace your core Google Ads optimization metrics when comparing across Google campaign types or making bidding decisions.
Know When Google Ads Is Showing You Modeled Data vs. True Forecasting
Not all “estimates” in Google Ads mean the same thing. Some are modeled measurement (filling in gaps where direct observation isn’t possible), and others are forecasting (predicting future outcomes under different bids/budgets/targets). Mixing these up is one of the fastest ways to make bad decisions.
Modeled conversions: treat them as part of real performance, but respect the stabilization period
Modern measurement includes modeling to estimate conversions that can’t be directly observed due to privacy protections or technical limits. Importantly, this does not mean Google is inventing conversions that didn’t happen; it’s estimating attribution for conversions that occurred when the linkage between the ad interaction and the conversion can’t be observed directly.
From an evaluation standpoint, the big operational takeaway is timing: modeled conversions can take several days to fully process and stabilize, and conversion values can be adjusted upward retroactively for a short period while modeling finalizes. That means judging performance on “yesterday vs. today” is often noisy—especially in accounts with shorter sales cycles and aggressive day-to-day changes.
Conversion delay: don’t judge recent days as “complete” unless your business converts instantly
Even with perfect tracking, many customers don’t convert immediately after clicking an ad. Depending on your conversion window settings, conversions can be reported long after the click. This creates a classic reporting trap: spend shows up immediately, conversions arrive later, so recent CPAs can look artificially high and ROAS can look artificially low.
To evaluate performance fairly, you need to account for your typical time-to-convert. Google Ads provides ways to understand delay patterns (including segmenting conversions by “days to conversion”) so you can see how many conversions usually arrive after 1 day, 7 days, 14 days, and so on.
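One practical way to use that delay pattern is to gross up a too-fresh period before judging its CPA. The completion curve below is hypothetical; you would derive yours from the "days to conversion" segment:

```python
# Sketch: adjust a too-fresh CPA using a historical days-to-conversion curve.
# The completion shares are hypothetical.

# Share of eventual conversions typically reported within N days of the click:
completion_by_day = {1: 0.55, 7: 0.85, 14: 0.95, 30: 1.0}

def projected_conversions(observed, days_since_period_end):
    """Gross up observed conversions by the expected completion share."""
    share = max(v for d, v in completion_by_day.items() if d <= days_since_period_end)
    return observed / share

# 44 conversions reported so far, for a period that ended 7 days ago:
proj = projected_conversions(44, 7)   # expects ~85% have posted, so ~51.8 eventually
naive_cpa = 2600 / 44                 # looks artificially high today
projected_cpa = 2600 / proj           # closer to the eventual CPA
```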
Conversion windows: your “counting horizon” is a setting, not a law of nature
A conversion window is simply the number of days after an ad interaction during which a conversion will be recorded for that conversion action. A shorter window will record fewer conversions for that action; a longer one will capture more delayed conversions. This matters for performance evaluation because two accounts can run identical campaigns and still show different CPAs purely due to window settings.
When you compare platforms, campaigns, or time periods, align conversion windows and reporting assumptions as tightly as possible—or you’re comparing different yardsticks.
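The window effect is easy to demonstrate: the same conversion log counted under two different windows yields two different CPAs. The days-to-convert values here are hypothetical:

```python
# Sketch: identical traffic, different conversion window, different CPA.
# days_to_convert: hypothetical days between the click and the conversion.

days_to_convert = [0, 1, 2, 5, 9, 12, 20, 28, 45]

def counted(days, window_days):
    """Conversions that land inside the conversion window."""
    return sum(1 for d in days if d <= window_days)

cost = 900.0
cpa_7d = cost / counted(days_to_convert, 7)    # 4 conversions counted
cpa_30d = cost / counted(days_to_convert, 30)  # 8 conversions counted
```

Nothing about the campaign changed between the two lines; only the counting horizon did.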
Bid strategy and simulator “conversion estimates”: use them for planning, not for grading
Google Ads can show conversion estimates and conversion value estimates in bid strategy reporting and simulator-style tools. These are built from historical performance patterns and help answer “if we change X, what might happen?” They’re useful for scenario planning, setting expectations, and understanding the likely impact of new targets/budgets.
But they are not the same as actual conversion reporting. Estimates are particularly sensitive when your selected date range includes incomplete conversions due to conversion delay, and they can be affected when your account uses conversion adjustments (like returns/cancellations). Treat estimates as directional guidance, then validate with actual results once enough time has passed for conversions to mature.
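A simple discipline is to grade an estimate only against matured actuals, within an agreed tolerance. The numbers and the 15% tolerance below are hypothetical:

```python
# Sketch: compare a planning estimate against matured actuals, directionally.
# Figures and tolerance are hypothetical.

estimate = {"conversions": 120, "cost": 6000.0}        # "what might happen" at a new target
matured_actual = {"conversions": 110, "cost": 5900.0}  # after conversions finished posting

def within_tolerance(est, actual, tol=0.15):
    """Was the estimate within +/- tol of the matured outcome?"""
    return abs(est - actual) / actual <= tol

ok = within_tolerance(estimate["conversions"], matured_actual["conversions"])
```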
A Practical Performance Evaluation Framework (So You Stop Debating Columns)
Step 1: Lock your “scoreboard” before you optimize
Before you change budgets, targets, or creative, confirm that the campaigns are optimizing to the right conversion goals and that the right conversion actions are designated as primary vs. secondary. Primary actions are the ones that populate your Conversions/Conversion value reporting and are used for bidding optimization (as long as the campaign is using the corresponding goal). Secondary actions are typically observation-only and roll into All conversions, unless you explicitly include them in a goal setup used for bidding.
This single step prevents the most common reporting disaster: celebrating a CPA drop that was actually caused by switching the “scoreboard” to easier, higher-volume secondary actions.
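That audit can be automated as a guard before anyone celebrates. The action sets and counts below are hypothetical; the point is to compare the scoreboard itself, not just the score:

```python
# Sketch: flag a "scoreboard change" before trusting a CPA drop.
# Action sets and figures are hypothetical.

before = {"primary_actions": {"purchase"},
          "conversions": 50, "cost": 5000.0}
after = {"primary_actions": {"purchase", "newsletter_signup"},
         "conversions": 150, "cost": 5000.0}

cpa_before = before["cost"] / before["conversions"]  # 100.0
cpa_after = after["cost"] / after["conversions"]     # ~33.3

scoreboard_changed = before["primary_actions"] != after["primary_actions"]
if cpa_after < cpa_before and scoreboard_changed:
    verdict = "CPA drop coincides with a primary-action change: audit before celebrating"
else:
    verdict = "comparable scoreboards"
```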
Step 2: Choose an evaluation window that matches your conversion delay
If you want a clean read on performance, avoid ending your reporting period “today” unless you’re confident most conversions happen same-day. A disciplined approach is to evaluate a date range that ends far enough in the past that the majority of conversions you expect have already been reported. If your conversion window is long, extend this buffer accordingly.
If you must report on the most recent days (common in fast-moving businesses), pair your primary columns with “by conversion time” columns to understand what actually occurred recently, and use your historical conversion delay pattern to interpret whether recent CPAs/ROAS are likely to improve as lagging conversions post.
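The "end far enough in the past" rule can be encoded directly. The 10-day completion lag below is hypothetical; you would derive it from your days-to-conversion distribution:

```python
# Sketch: end the reporting range far enough back that most conversions have posted.
# The completion lag is hypothetical; derive yours from days-to-conversion data.
from datetime import date, timedelta

def mature_range(today, lookback_days, completion_lag_days):
    """A lookback_days-long range ending completion_lag_days before today."""
    end = today - timedelta(days=completion_lag_days)
    start = end - timedelta(days=lookback_days - 1)
    return start, end

start, end = mature_range(date(2024, 6, 30), lookback_days=28, completion_lag_days=10)
# Evaluate 2024-05-24 through 2024-06-20 instead of ending the range "today".
```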
Step 3: Use attribution models intentionally (and know what’s still supported)
Attribution is not just a reporting preference; it can change the numbers in Conversions/All conversions and can influence how conversion-based bidding optimizes. Today, the practical decision for most advertisers is between data-driven attribution and last click. Several older rule-based models are no longer supported and were upgraded to data-driven attribution, so don’t assume an account is still operating under legacy models just because it was set up years ago.
Also remember that fractional credit is real: you may see decimals in conversion columns because some models distribute credit across multiple interactions.
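Those decimals are just one conversion's credit split across touchpoints. The weights below are illustrative, not the output of Google's actual data-driven model:

```python
# Sketch: why conversion columns can show decimals under multi-touch attribution.
# Credit weights are illustrative, not Google's data-driven model output.
from collections import defaultdict

# One conversion, three ad interactions on the path to it:
path = ["search_campaign", "video_campaign", "search_campaign"]
credit = [0.3, 0.2, 0.5]  # hypothetical fractional credit per touch

totals = defaultdict(float)
for campaign, share in zip(path, credit):
    totals[campaign] += share

# The single conversion appears as 0.8 and 0.2 across two campaigns,
# but still sums to exactly one conversion overall.
```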
Step 4: Improve the quality of the “estimate” by improving measurement inputs
If your conversion reporting is undercounting because it can’t match users reliably, your performance evaluation will be skewed and automated bidding will learn from incomplete feedback. One of the most impactful upgrades for many advertisers is enabling enhanced conversions for web, which uses hashed first-party customer data collected on your site to improve matching and recover conversions that would otherwise be missed. This typically improves both reporting confidence and bidding performance over time.
For businesses with returns, cancellations, or changing customer value, conversion adjustments can keep your reporting aligned with real business outcomes by retracting conversions or restating conversion value after the fact. The goal is simple: make your conversion value reflect the value you actually keep.
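Mechanically, adjustments either retract a conversion or restate its value. A minimal sketch with hypothetical orders:

```python
# Sketch: net out returns and restatements so value reflects what you keep.
# Order IDs, values, and adjustments are hypothetical.

orders = {"A1": 120.0, "A2": 300.0, "A3": 80.0}
adjustments = [
    ("A2", "restate", 250.0),  # partial return: restate the order's value
    ("A3", "retract", None),   # full cancellation: remove the conversion
]

net = dict(orders)
for order_id, kind, new_value in adjustments:
    if kind == "retract":
        net.pop(order_id, None)
    elif kind == "restate":
        net[order_id] = new_value

net_value = sum(net.values())  # 120 + 250 = 370.0 kept, not 500.0 booked
```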
Quick checklist (use this when performance “suddenly changed”)
- Confirm you’re evaluating the right column: Conversions/Conversion value (primary) vs All conversions vs Results vs by-conversion-time variants.
- Confirm your goal setup: the campaign is optimizing for the intended goal, and the correct conversion actions are primary.
- Confirm your counting method: “Every” vs “One” conversion per ad interaction is correct for each conversion action (sales usually want “every”; leads often want “one”).
- Check conversion delay: don’t compare a mature period to a period that hasn’t had time for conversions to post.
- Expect late changes: modeled conversions and attribution can update after the fact; don’t overreact to very recent fluctuations.
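The counting-method bullet above is worth one concrete example: the same raw events produce different totals under "Every" and "One". The click/action pairs are hypothetical:

```python
# Sketch: "Every" vs "One" counting over the same raw conversion events.
# (click_id, action) pairs are hypothetical.

events = [
    ("click_1", "purchase"), ("click_1", "purchase"),  # two orders from one click
    ("click_2", "lead_form"), ("click_2", "lead_form"), ("click_2", "lead_form"),
]

every = len(events)                     # sales: count every conversion -> 5
one = len({(c, a) for c, a in events})  # leads: one per click per action -> 2
```

A lead submitted three times is still one lead, while two orders from one click are genuinely two sales, which is why the two methods exist.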
So, what estimate should you use?
If your goal is performance evaluation inside Google Ads, use Conversions and Conversion value (and their derived CPA/ROAS metrics) as your primary truth, then interpret them through the lens of conversion delay, modeling stabilization, counting settings, and attribution.
If your goal is planning and scenario forecasting, use simulator and bid strategy conversion estimates as directional guidance—but only “grade” performance using mature, finalized conversion data once lagging conversions have had time to arrive.
If your goal is cross-platform comparison, use the relevant platform-comparable reporting view where available, and standardize conversion events and windows across platforms so you’re comparing like with like—not different definitions of success.
When you’re evaluating Google Ads performance, the most reliable “estimate” is usually the one tied to your true business outcomes: the Conversions and Conversion value columns built from your primary conversion actions, interpreted with the right context around attribution, conversion delay, and modeled updates. Other views, like All conversions, Results, or platform-comparable conversions, can be useful, but mainly for diagnostics, executive rollups, or cross-platform comparisons. If you want a calmer way to keep that scoreboard consistent over time, Blobr connects to your Google Ads account and runs specialized AI agents that continuously analyze what changed, surface measurement and reporting pitfalls, and translate best practices into clear, prioritized actions, whether that’s tightening waste with keyword cleanup, improving RSA assets with a Headlines Enhancer, or aligning keywords to the right landing pages with a Keyword Landing Optimizer, while you stay in control of what gets applied.
