How do I know if Google algorithm updates impacted my ads?

Alexandre Airvault
January 12, 2026

First: what “Google algorithm updates” can realistically change in Google Ads

When most advertisers say “an algorithm update,” they’re usually thinking of organic search ranking updates. Those can absolutely change your overall business results (traffic quality, conversion rate, brand demand), but they don’t directly “re-rank” your paid ads the same way.

In Google Ads, performance shifts that feel like an “algorithm update” typically come from one (or a combination) of these areas: how queries match to your targeting (especially with broader matching behavior), how the auction predicts and prices outcomes (Ad Rank and auction-time signals), how automated bidding learns and recalibrates, how policies and eligibility are enforced, and how measurement/attribution is recorded and reported. The key is to stop guessing and pin the change to a date, a campaign subset, and a specific metric (impressions, CTR, CPC, conversion rate, conversion count, conversion value).

Confirm impact fast: build a timeline using the tools already in your account

Step 1: Identify the exact “break point” date and the metric that moved first

Start by narrowing down the first metric that changed, because it tells you where to look. If impressions dropped first, you’re usually dealing with eligibility, budget, bids, targeting, or auction pressure. If impressions are stable but CTR dropped, it’s typically a relevance/creative/position issue. If clicks are stable but conversions dropped, you’re often looking at landing page changes, tracking/attribution, offer changes, lead quality shifts, or conversion delay.

Pick a before/after window that’s long enough to smooth weekday/weekend behavior (often 7 days vs the prior 7 days, or 14 vs prior 14), then keep the time window consistent while you diagnose.
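
If you'd rather automate the break-point hunt than eyeball charts, a small script can scan each metric against a trailing baseline. Here's a minimal sketch that assumes you've exported a daily report to CSV; the column names and the 30% deviation threshold are illustrative, not a standard:

```python
# Minimal sketch: scan each metric against a trailing baseline to find the
# first "break point" day. Assumes a daily CSV export with hypothetical
# column names; the 30% threshold is illustrative, not a standard.
import pandas as pd

df = pd.read_csv("daily_account_metrics.csv", parse_dates=["date"]).sort_values("date")

for metric in ["impressions", "ctr", "avg_cpc", "conversions", "conv_value"]:
    baseline = df[metric].rolling(14).mean().shift(1)   # trailing 14-day average
    deviation = (df[metric] - baseline) / baseline      # relative move vs. baseline
    breaks = df.loc[deviation.abs() > 0.30, "date"]     # days more than 30% off baseline
    if not breaks.empty:
        print(f"{metric}: first break point on {breaks.iloc[0].date()}")
```

Whichever metric breaks earliest in this output is usually the one worth diagnosing first.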

Step 2: Use Explanations to diagnose sudden swings (without building 10 reports)

If you see a big jump or dip, use the in-platform Explanations directly on the metric that changed (for example: cost, clicks, conversions, CPA, ROAS). Explanations are designed specifically for “something changed and I need to know why,” and they often surface drivers like budget changes, bid strategy changes, targeting expansion, auction competition shifts, or conversion-related shifts.

From a practical standpoint, Explanations are most useful when you treat them like a starting hypothesis generator: they tell you where to zoom in next, not what to blindly “fix.”

Step 3: Check Change history like an auditor (including system and API changes)

Before blaming an external update, confirm nothing changed inside your account. In Change history, look for edits to budgets, bid strategies, targets (CPA/ROAS), locations, ad scheduling, keywords, audience signals, assets, final URLs, conversion settings, and experiments. Don’t ignore changes attributed to automated processes or platform/system entries; these can appear as non-human “users” in history and they still correlate strongly with performance shifts.

If you use any third-party tools, scripts, or API-based management, also look for entries that indicate API-driven changes. I’ve seen “mystery drops” that were simply a well-meaning rule or tool tightening targets every few days until volume collapsed.
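
If you already manage accounts through the API, Change history is also queryable as the change_event resource, which records a client type (web UI, API, scripts, automated rules) alongside each edit, so non-human changes stand out. A hedged sketch using the official google-ads Python client; the config path, customer ID, and date window are placeholders, and to my knowledge change_event queries only cover roughly the last 30 days and require an explicit date filter plus a LIMIT:

```python
# Sketch: pull recent Change history entries via the Google Ads API's
# change_event resource. client_type distinguishes UI edits from API,
# script, and automated-rule changes. Config path, customer ID, and the
# date window below are placeholders.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      change_event.change_date_time,
      change_event.user_email,
      change_event.client_type,
      change_event.change_resource_type,
      change_event.resource_change_operation
    FROM change_event
    WHERE change_event.change_date_time >= '2025-12-29 00:00:00'
      AND change_event.change_date_time <= '2026-01-12 23:59:59'
    ORDER BY change_event.change_date_time DESC
    LIMIT 500
"""
for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        e = row.change_event
        print(e.change_date_time, e.user_email, e.client_type.name,
              e.change_resource_type.name, e.resource_change_operation.name)
```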

Step 4: Review Auto-apply recommendations history (this is a common “silent change”)

Auto-applied recommendations can introduce changes that feel like a sudden algorithm shift because they affect eligibility, targeting, and bidding behavior without a person manually editing each campaign. If performance changed unexpectedly, review both what is enabled for auto-apply and what was actually applied recently (the history matters more than the setting).

When I’m diagnosing, I’m not asking “Are recommendations on?” I’m asking “What exactly was applied, on what date, and to which campaigns?” Then I compare that date to the first performance break point.
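
The applied-history view itself is a UI surface, but if you work through the API you can at least enumerate which recommendation types are currently pending for the account, which tells you what auto-apply would be allowed to touch next. A rough sketch, with the same placeholder credentials and customer ID as above:

```python
# Sketch: list currently pending recommendation types via the API's
# recommendation resource. This shows what auto-apply *could* act on;
# the history of what it already applied lives in the UI.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = "SELECT recommendation.type, recommendation.campaign FROM recommendation"
for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        # recommendation.campaign may be empty for account-level recommendations
        print(row.recommendation.type.name, row.recommendation.campaign)
```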

Step 5: Verify bidding status and “Learning” (many ‘algorithm hits’ are self-inflicted resets)

If you use automated bidding, a significant shift can be caused by learning/recalibration after changes. A bid strategy can enter Learning after you change the strategy itself, adjust targets, or change the campaign’s composition (adding/removing keywords, ad groups, products, etc.). During this period, volatility is normal and over-correcting often extends the pain.

As a rule of thumb, learning stabilizes faster when you have consistent conversion volume and a short conversion cycle. If your conversion cycle is long (for example, leads that convert days later), you can see delayed reporting and delayed stabilization. In many cases, calibration takes around 50 conversion events or roughly 2–3 conversion cycles, so "wait and watch" can be the correct move if you recently changed bidding or targets.
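
To make the "wait and watch" call concrete, you can do the arithmetic on your own volume. A back-of-the-envelope sketch; the 50-conversion and 3-cycle thresholds are the rules of thumb above, not official limits:

```python
# Back-of-the-envelope sketch: estimate how long recalibration may take from
# your own conversion volume. The ~50 conversions and ~3 conversion cycles
# are heuristics, not official limits.
def estimated_learning_days(daily_conversions: float, lag_days: float) -> float:
    days_to_50 = 50 / max(daily_conversions, 0.01)  # time to ~50 conversion events
    cycles = 3 * lag_days                           # ~3 conversion cycles
    return max(days_to_50, cycles)

# Example: ~4 conversions/day with a 5-day click-to-conversion lag
print(estimated_learning_days(4, 5))  # -> 15.0 days before judging the change
```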

Also keep an eye on platform-level bidding option changes. For example, Enhanced CPC was deprecated for Search and Display starting the week of March 31, 2025, which means some accounts experienced strategy shifts or required migrations. If your account was using that approach around that time (or you still see it referenced in UI history), it’s worth validating what your campaigns are effectively running now and whether anything reverted to Manual CPC behavior.

Pin the cause: auction pressure vs eligibility/policy vs budget vs measurement

If impressions dropped: check eligibility, budget, and auction competitiveness

When impressions fall, I look for constraints first, not “bad ads.” Confirm campaigns/ad groups/ads are actually eligible to serve (not paused, not limited, not disapproved). Then check if you’re “Limited by budget,” because budget caps can throttle visibility even if everything else is healthy. If the account is eligible and funded, move to auction pressure.

Auction Insights is your best reality check for “Did the market change?” If competitors’ overlap rate rises, if your outranking share falls, or if their position-above rate increases, you may be losing auctions due to new entrants, aggressive bids, better predicted performance, or shifting user intent. That’s not an “algorithm penalty”—it’s the auction getting tougher.
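
One way to quantify the split between budget constraints and auction pressure is the impression-share metrics, which break lost impression share into "lost to budget" and "lost to rank." A sketch against the Google Ads API, with placeholder credentials and customer ID:

```python
# Sketch: separate budget constraints from auction pressure using GAQL
# impression-share metrics. High "lost to budget" points at funding;
# high "lost to rank" points at the auction getting tougher.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      campaign.name,
      metrics.search_impression_share,
      metrics.search_budget_lost_impression_share,
      metrics.search_rank_lost_impression_share
    FROM campaign
    WHERE segments.date DURING LAST_7_DAYS
      AND campaign.advertising_channel_type = 'SEARCH'
"""
for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        m = row.metrics
        print(f"{row.campaign.name}: IS={m.search_impression_share:.0%} "
              f"lost_to_budget={m.search_budget_lost_impression_share:.0%} "
              f"lost_to_rank={m.search_rank_lost_impression_share:.0%}")
```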

If ads stopped or volume collapsed: look for policy and status changes (including “Eligible (limited)”)

Policy enforcement can change quickly and can affect serving even if you didn’t touch the ads. Review ad and asset statuses and add “Policy details” into your views so you can see why something is limited or disapproved. Then use Policy manager to identify the specific issues, the scope (one ad vs many), and whether the issue is content-based, destination-based, or account-level.

If you fixed the destination or believe it’s an error, appeal through the proper workflow rather than repeatedly editing ads at random. Appeals are most effective when you correct the underlying issue (especially destination issues) and then submit for review in a controlled, batch-based way so you can clearly track what changed and when.
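
If you need the policy picture across many ads at once, the API exposes each ad's policy summary. A sketch that lists everything not fully approved along with its policy topics; placeholder credentials again, and note this covers ad-level issues, while asset-level policy problems live on other resources:

```python
# Sketch: list ads that are not fully approved, with their policy topics,
# so you can see scope (one ad vs. many) before appealing. Checks the
# approval status in code rather than in the GAQL filter.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      campaign.name,
      ad_group_ad.ad.id,
      ad_group_ad.policy_summary.approval_status,
      ad_group_ad.policy_summary.policy_topic_entries
    FROM ad_group_ad
    WHERE ad_group_ad.status != 'REMOVED'
"""
for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        summary = row.ad_group_ad.policy_summary
        if summary.approval_status.name != "APPROVED":  # catches LIMITED too
            topics = [entry.topic for entry in summary.policy_topic_entries]
            print(row.campaign.name, row.ad_group_ad.ad.id,
                  summary.approval_status.name, topics)
```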

If CPC rose but conversion rate didn’t: treat it like an auction shift, not a tracking problem

Rising CPCs with a stable conversion rate often indicate that the same traffic quality is now more expensive (competition, seasonality, or more aggressive bidding in the market). In that situation, the fix is rarely "tweak one headline." You typically respond with one of three strategies: improve predicted performance (creative, assets, landing page experience), tighten where you show (query/keyword hygiene, negatives, geo/device/time controls), or change how you buy (bid strategy and targets that match your true margins and conversion latency).
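
If you want this triage to be repeatable rather than ad hoc, encode it. A toy classifier; the 15% and 5% thresholds are arbitrary illustrations you'd tune to your account's normal variance:

```python
# Toy sketch: classify a before/after shift from window-level aggregates.
# Thresholds are illustrative and should be tuned to your normal variance.
def classify_shift(before: dict, after: dict) -> str:
    cpc_change = after["cpc"] / before["cpc"] - 1
    cvr_change = after["cvr"] / before["cvr"] - 1
    if cpc_change > 0.15 and abs(cvr_change) < 0.05:
        return "auction shift: same traffic quality, higher price"
    if cvr_change < -0.15:
        return "conversion-side issue: check landing page, offer, tracking"
    return "no clear single-metric pattern; keep diagnosing"

# Example: CPC rose ~22% while conversion rate barely moved.
print(classify_shift({"cpc": 1.80, "cvr": 0.042}, {"cpc": 2.20, "cvr": 0.041}))
```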

If conversions dropped but clicks didn’t: validate measurement and conversion delay before changing campaigns

A very common false alarm is conversion reporting lag or measurement changes. If conversions appear to fall “overnight,” check whether your conversion cycle supports that conclusion. Leads that close later, purchases that take days, or attribution windows can make yesterday look terrible when it’s simply incomplete.

Next, verify conversion measurement status and tag health. If you've recently adjusted tagging, consent settings, checkout flows, or introduced new conversion measurement features (like enhanced conversions), you can see temporary discrepancies while data matching settles and reporting stabilizes. It's also worth confirming you're comparing the right columns: some reporting views exist specifically to compare performance across platform methodologies (for example, dedicated reporting columns for certain campaign types), and they may not match your primary optimization columns.
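
A quick way to see lag directly is to compare conversions attributed to the click date against conversions counted on the date they actually occurred; recent days always look worse on the former until they mature. A sketch using the API's two conversion columns, with placeholder credentials and customer ID:

```python
# Sketch: compare conversions attributed to the interaction (click) date with
# conversions counted on the date they occurred. Recent dates look low on
# metrics.conversions until late conversions mature.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      campaign.name,
      segments.date,
      metrics.conversions,
      metrics.conversions_by_conversion_date
    FROM campaign
    WHERE segments.date DURING LAST_14_DAYS
    ORDER BY segments.date
"""
for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        print(row.campaign.name, row.segments.date, row.metrics.conversions,
              row.metrics.conversions_by_conversion_date)
```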

A simple diagnostic workflow you can run in 10 minutes (before you blame an update)

  • Confirm the break point date (the first day performance deviated) and which metric moved first (impressions, CTR, CPC, conversions, conversion value).
  • Check account and billing health (anything that can prevent serving regardless of campaign quality).
  • Open Explanations on the exact metric that changed for the affected campaign type.
  • Audit Change history for budgets, bidding/targets, targeting, assets, final URLs, conversion settings, experiments, and any system/API changes.
  • Review Auto-apply history to see if recommendations changed targeting/bids/budgets recently.
  • Check Policy manager + ad/asset statuses (including “Eligible (limited)” and destination-related issues).
  • Review bidding status (Learning vs Limited) and whether recent edits reset the model.
  • Use Auction Insights to confirm whether competitors or auction dynamics shifted.
  • Validate conversion measurement status and account for conversion delay before making major optimizations.

What to do next: wait, revert, or optimize (how I decide in real accounts)

When it’s smartest to wait (and not “fix” anything yet)

If you can clearly tie the break point to a bid strategy change, target change (CPA/ROAS), or a major structural change (new targeting, new asset mix, new feed structure), then some volatility is expected. In those cases, the best move is often to reduce the number of additional changes, ensure budgets aren’t constraining learning, and give the system time to recalibrate—especially if you’re dealing with longer conversion cycles.

When you should revert quickly (this is where most wasted weeks happen)

If you find a specific, recent internal cause that directly blocks serving or corrupts measurement—like disapprovals, destination errors, broken conversion tracking, accidental geo exclusions, aggressive negatives, or an auto-applied change that clearly damaged intent—revert it immediately and document the date/time. The fastest recoveries come from clean reversals, not from layering new “optimizations” on top of a broken baseline.

When you should treat it as a real market change and adapt

If your diagnostics show auctions got tougher (competitors entered, outranking share fell, position-above rate rose) or user demand shifted, then your job is to adapt the buying strategy. That usually means tightening query quality (especially with broad matching behavior), improving creative and assets to lift predicted CTR, strengthening landing page relevance to protect Ad Rank efficiency, and setting targets (CPA/ROAS) that reflect today’s costs rather than last quarter’s.

The biggest mindset shift: don’t ask “Did an update hit me?” Ask “What did the system start predicting differently, and why?” Once you can answer that with a date, a campaign subset, and a metric-first narrative, the fix becomes obvious—and it’s almost always something you can control.

Quick-reference summary

| Section / Step | Core Question | What to Look At | Google Ads Tools & Docs | Key Takeaways |
| --- | --- | --- | --- | --- |
| What Google algorithm updates can change in Google Ads | Did a "Google algorithm update" really change my ads, or something else? | Distinguish organic ranking updates (SEO, brand demand) from ads-side changes: query matching, auction pricing (Ad Rank), automated bidding, policy/eligibility, measurement/attribution | Google Ads Help Center | Organic updates don't directly re-rank your ads. For ads, focus on query matching, auction behavior, bidding, policy, and measurement; always tie changes to a specific date, campaign subset, and metric |
| Step 1: Find the break point | When did performance first deviate, and which metric moved first? | Compare stable time windows (e.g., 7 vs. the previous 7 days) and identify whether the first shift was in impressions (eligibility, budget, bids, targeting, auction pressure), CTR (relevance, creative, position), or conversions with stable clicks (landing page, offer, tracking, lead quality, conversion delay) | Date-range comparisons in the Google Ads reporting UI | The first metric that moved points to the likely cause; keep the same comparison window while diagnosing |
| Step 2: Use Explanations | What immediate drivers does Google surface for the performance shift? | On the affected campaigns, open Explanations on the metric that changed (cost, clicks, conversions, CPA, ROAS, etc.) and review surfaced causes (budget, bid strategy, targeting expansion, auction competition, conversion issues) | About explanations; Why you might not have explanations | Treat Explanations as hypothesis generators, not instructions to blindly apply; they tell you where to zoom in next |
| Step 3: Audit Change history | Did anything inside the account change around that date? | Review edits to budgets, bid strategies, CPA/ROAS targets, locations, schedules, keywords, audience signals, assets, final URLs, conversion settings, experiments, and system/automated or API/script changes | Review your account history (Change history) | Many "mystery drops" are internal: automated rules, tools, or manual edits that coincided with the break point |
| Step 4: Auto-apply recommendations history | Did auto-applied recommendations silently change targeting, bids, or budgets? | Check which auto-apply types are enabled and, more importantly, which recommendations were actually applied, on which dates, and to which campaigns | Manage auto-apply recommendations | Auto-apply can mimic "algorithm updates" because it edits your account in the background; map applied changes to the first performance break point |
| Step 5: Bidding status & Learning | Is volatility caused by a bid strategy recalibrating rather than an external update? | Review bid strategy status (Learning, Limited, etc.), recent changes to bidding type, targets, or campaign structure, and conversion volume and cycle length (how long it takes to get ~50 conversions or 2–3 conversion cycles) | Automated bidding status in campaign settings | Many drops follow self-inflicted resets; during Learning, expect volatility and avoid over-correcting unless something is clearly broken |
| Impressions dropped | Is it eligibility, budget, or a tougher auction? | Check campaign/ad group/ad status (paused, limited, disapproved); confirm you're not "Limited by budget"; use Auction Insights to see overlap rate, outranking share, and position-above rate changes | Auction Insights within Google Ads; Why you might not have insights | Falling impressions usually mean constraints or tougher competition, not a "penalty"; separate internal constraints from market shifts |
| Ads stopped / volume collapsed | Did policy or status changes stop serving? | Add "Policy details" columns to see why ads/assets are limited or disapproved; use Policy manager to check whether issues are content-based, destination-based, or account-level; track what you fixed and when, and use the formal appeal flow | Policy manager & ad/asset status views in Google Ads | Server errors, disapprovals, and "Eligible (limited)" statuses can halt delivery without any manual bid/creative change |
| CPC up, CVR flat | Did the auction simply get more expensive? | Treat it as a cost/auction shift (competition, seasonality, more aggressive bidding); consider improving predicted performance (creative, landing page experience), tightening where you show (queries, geo, device, time), or adjusting bid strategies/targets to current margins | Auction Insights and bid strategy settings in Google Ads | This pattern is rarely a tracking problem; it's usually the same quality of traffic at a higher price |
| Conversions down, clicks stable | Is it real performance decline or measurement/lag? | Check whether your conversion cycle even allows an "overnight" drop; verify tags, consent mode, checkout changes, enhanced conversions, and attribution windows; confirm you're looking at the right reporting columns for the campaign type | Conversion actions & tag diagnostics in Google Ads | Apparent crashes are often reporting lag or tracking changes; validate measurement before making big campaign edits |
| 10-minute diagnostic workflow | What's the fastest structured way to rule out an "algorithm hit"? | 1) Confirm break-point date and first metric moved; 2) check account and billing health; 3) open Explanations on the changed metric; 4) audit Change history (including system/API); 5) review Auto-apply history; 6) check Policy manager and ad/asset statuses; 7) review bidding status (Learning/Limited, recent resets); 8) use Auction Insights for market shifts; 9) validate conversion measurement and delay | Explanations; Change history; Auto-apply recommendations; Policy manager; Auction Insights; Conversion measurement tools | Run this checklist before blaming a Google update; it isolates whether the cause is internal changes, policies, bidding, competition, or measurement |
| What to do next: wait, revert, or optimize | Given the diagnosis, how should I respond? | Wait when volatility clearly follows bid/target/structural changes and Learning is in progress (assuming tracking and eligibility are healthy); revert quickly when you find hard blockers (disapprovals, destination errors, broken tracking, bad geo/negatives, harmful auto-applied changes), then document date/time; adapt to market change when auctions are tougher or demand shifts by tightening queries, improving creative/assets, strengthening landing page relevance, and resetting CPA/ROAS targets to today's economics | Change history (for quick reversions); Bidding & budget settings; Creative, asset, and landing page tests | The real question isn't "Did an update hit me?" but "What did the system start predicting differently, and why?" Once that's clear, the response (wait, revert, or optimize) is usually obvious |

When performance shifts and it's unclear whether a "Google algorithm update" is to blame, it helps to quickly tie the change to a specific date and metric, then review what Google Ads is actually showing you: Explanations, Change history, auto-applied recommendations, bidding Learning status, policy/eligibility, Auction Insights, and conversion tracking delays. Blobr is a product that plugs into your Google Ads account and continuously runs this kind of structured analysis for you. Its specialized AI agents look for what changed, where budget may be leaking, and what is most likely driving volatility, then turn those findings into clear, prioritized recommendations you can review and apply on your terms.
