First, confirm the CTR drop is real (and pinpoint exactly where it’s happening)
When someone tells me “CTR dropped this week,” my first step is to turn that into a clean comparison. A week-over-week CTR change can be completely legitimate, but it can also be a reporting artifact (different day-of-week mix), a traffic-mix shift (more mobile, more partner traffic, more top-of-funnel queries), or a single campaign/ad group dragging the average down.
Use a like-for-like date comparison (same days of week)
Compare the most recent 7 days to the previous 7 days, and make sure you’re matching the same number of weekdays/weekend days. For example, if today is January 13, 2026, a clean comparison is January 6–12, 2026 vs. December 30, 2025–January 5, 2026. If your business is sensitive to weekends (many are), a “Mon–Sun vs. Mon–Sun” comparison is far more reliable than “last 7 days” viewed on different days.
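If you want to make that window selection mechanical instead of mental math, here is a minimal sketch in plain Python (no Google Ads API needed). It anchors to complete Mon–Sun weeks as recommended above, so for January 13, 2026 it returns the Mon–Sun variant of the comparison rather than the trailing Tue–Mon example; the function name and Monday-anchored convention are my own choices for illustration.

```python
from datetime import date, timedelta

def like_for_like_windows(today: date):
    """Return ((current_start, current_end), (previous_start, previous_end))
    for the most recent complete Mon-Sun week vs. the week before it."""
    # Monday of the current (possibly incomplete) week.
    this_monday = today - timedelta(days=today.weekday())
    # The most recent complete week ends on the Sunday before this Monday.
    current_end = this_monday - timedelta(days=1)
    current_start = current_end - timedelta(days=6)
    previous_end = current_start - timedelta(days=1)
    previous_start = previous_end - timedelta(days=6)
    return (current_start, current_end), (previous_start, previous_end)

# Example: on 2026-01-13 this yields 2026-01-05..2026-01-11 vs. 2025-12-29..2026-01-04.
current, previous = like_for_like_windows(date(2026, 1, 13))
print(current, previous)
```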
Don’t diagnose CTR at the account level
Account CTR is an average of many different auctions. A “drop” often comes from one of these: one campaign suddenly getting more impressions, one brand campaign losing top placement, or a new keyword match pulling in broad, low-intent queries. Your job is to find where the CTR changed and what changed at the same time. (A query sketch after the list below shows one way to pull that segmentation from the API.)
- Sort by biggest impression increase (campaigns/ad groups/keywords). CTR often falls simply because you’re showing more often in auctions you weren’t winning before.
- Segment by device (mobile vs. desktop). A device-mix shift can move CTR meaningfully even if nothing is “wrong.”
- Segment by network (core search vs. search partners, if applicable). Partner inventory can behave very differently and can dilute CTR even while adding incremental volume.
- Check ad and keyword status for “limited” or review-related issues that reduce eligibility or disrupt serving consistency.
- Review change history for the exact date the CTR shifted—bids, budgets, targeting, keyword edits, assets, schedules, and automation settings can all move CTR fast.
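If you prefer pulling this from the Google Ads API instead of the UI, here is a minimal sketch using the official google-ads Python client, assuming a configured google-ads.yaml and the two Mon–Sun windows from the date example above. The customer ID is a placeholder, and the GAQL field names are as I remember them, so verify against the current API reference before relying on it.

```python
# Sketch: campaign CTR segmented by device and network for two comparable weeks.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage()  # reads google-ads.yaml
ga_service = client.get_service("GoogleAdsService")

QUERY = """
    SELECT
      campaign.name,
      segments.week,
      segments.device,
      segments.ad_network_type,
      metrics.impressions,
      metrics.clicks,
      metrics.ctr
    FROM campaign
    WHERE segments.date BETWEEN '2025-12-29' AND '2026-01-11'
      AND metrics.impressions > 0
    ORDER BY metrics.impressions DESC
"""

# Placeholder customer ID -- replace with your own account ID (no dashes).
for batch in ga_service.search_stream(customer_id="1234567890", query=QUERY):
    for row in batch.results:
        print(
            row.campaign.name,
            row.segments.week,
            row.segments.device.name,
            row.segments.ad_network_type.name,
            row.metrics.impressions,
            f"{row.metrics.ctr:.2%}",
        )
```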
The most common reasons CTR drops week-to-week (and how to prove each one)
1) You lost position or visibility due to Ad Rank pressure (competition changed)
CTR is heavily position-dependent. If competitors became more aggressive (higher bids, better creatives, stronger assets, better relevance), your ads may still show—but in lower positions, or less frequently in top placements—pulling CTR down. In real accounts, this is one of the most common “nothing changed on my side” CTR drops.
How to prove it: Look at impression share diagnostics. If you see more “lost due to rank,” you’re losing auctions you previously won, or you’re winning them in weaker positions. Then check competitive signals like Auction insights to see whether others are appearing above you more often, overlapping more frequently, or taking a larger share of eligible impressions.
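For the impression-share side of this check, a GAQL sketch like the one below (run with the same search_stream call as the earlier example) pulls the relevant metrics per campaign per week; Auction insights itself is easier to read in the UI, so this sticks to impression share. Field names are my recollection of the API, so confirm them before running.

```python
# Sketch: weekly Search impression share vs. share lost to Ad Rank, per campaign.
RANK_PRESSURE_QUERY = """
    SELECT
      campaign.name,
      segments.week,
      metrics.search_impression_share,
      metrics.search_rank_lost_impression_share,
      metrics.search_top_impression_share,
      metrics.ctr
    FROM campaign
    WHERE segments.date BETWEEN '2025-12-29' AND '2026-01-11'
      AND campaign.advertising_channel_type = 'SEARCH'
"""
```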
2) Budget constraints changed your traffic mix (you’re showing at “worse times”)
When budgets are tight, delivery gets uneven. You may stop showing during your highest-intent hours (when competition is strongest) and end up serving more in cheaper, lower-intent auctions—often with lower CTR. This can happen even if total impressions look stable.
How to prove it: Check lost impression share due to budget and compare hour-of-day performance (or day-of-week). If CTR is stable in your historically strong segments but you’re gaining impressions elsewhere, you’ve got a delivery mix issue, not necessarily an ad copy issue.
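Here is a rough sketch of the two queries I would run for this check, again via the same client as above. I keep the impression-share metrics separate from the hour-of-day breakdown because, to my knowledge, they cannot be combined with an hour segment; treat the exact field names as assumptions to verify.

```python
# Sketch: budget-driven impression share loss, plus hour/day CTR mix.
BUDGET_LOSS_QUERY = """
    SELECT
      campaign.name,
      segments.week,
      metrics.search_budget_lost_impression_share,
      metrics.impressions,
      metrics.ctr
    FROM campaign
    WHERE segments.date BETWEEN '2025-12-29' AND '2026-01-11'
"""

HOUR_OF_DAY_QUERY = """
    SELECT
      campaign.name,
      segments.hour,
      segments.day_of_week,
      metrics.impressions,
      metrics.clicks,
      metrics.ctr
    FROM campaign
    WHERE segments.date BETWEEN '2025-12-29' AND '2026-01-11'
"""
```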
3) Your ads started triggering for different queries (match behavior, close variants, or automation expansion)
Week-to-week CTR drops are frequently query-mix problems. If you expanded keywords, changed match types, added broad targeting, loosened location/audience settings, or enabled an expansion feature, you can suddenly enter a lot of new auctions. Those impressions are “real,” but the intent is often broader—so CTR falls.
Two especially common culprits are close variants (which apply across match types) and query expansion from automation. Close variants can match to searches that are similar in meaning/intent, not necessarily identical in wording, which is great for scale but can surprise advertisers when it changes the visible query mix.
How to prove it: Use the search terms report to compare “this week vs. last week.” You’re looking for new categories of terms, a surge in informational phrases, or terms that technically relate but don’t match your offer. If you find irrelevant terms, add negatives thoughtfully (over-blocking can reduce volume and can also prevent automated systems from finding valuable searches).
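A quick way to do the “this week vs. last week” comparison programmatically is to pull search terms for both windows and flag the terms that only appeared this week. This is a sketch under the same assumptions as before (google-ads client, placeholder customer ID, field names from memory); the date range covers exactly two Mon–Sun weeks so the grouping below yields two buckets.

```python
# Sketch: find search terms that are new this week and sort them by impressions.
from collections import defaultdict
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage()
ga_service = client.get_service("GoogleAdsService")

SEARCH_TERMS_QUERY = """
    SELECT
      search_term_view.search_term,
      segments.week,
      metrics.impressions,
      metrics.clicks
    FROM search_term_view
    WHERE segments.date BETWEEN '2025-12-29' AND '2026-01-11'
"""

terms_by_week = defaultdict(dict)
for batch in ga_service.search_stream(customer_id="1234567890", query=SEARCH_TERMS_QUERY):
    for row in batch.results:
        terms_by_week[row.segments.week][row.search_term_view.search_term] = (
            row.metrics.impressions,
            row.metrics.clicks,
        )

last_week, this_week = sorted(terms_by_week)  # assumes exactly two weeks returned
new_terms = {
    term: stats
    for term, stats in terms_by_week[this_week].items()
    if term not in terms_by_week[last_week]
}

# The highest-impression new terms are usually the ones diluting CTR the most.
for term, (impressions, clicks) in sorted(new_terms.items(), key=lambda kv: -kv[1][0]):
    print(term, impressions, clicks)
```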
4) An ad/asset change lowered relevance or made the message less compelling
CTR doesn’t just respond to your offer; it responds to how clearly the offer matches the searcher’s intent in the limited space on the results page. If you edited headlines/descriptions, pinned too aggressively, removed a strong call-to-action, or swapped to a weaker landing page theme, CTR can drop quickly.
If you use responsive ads, also pay attention to Ad Strength as a practical diagnostic tool. It’s not a “serving lever,” but it’s a strong indicator of whether you’re giving the system enough unique, relevant options to assemble competitive messages for different intents.
5) Policy/review status limited eligibility (even partially)
Policy labels and review events can reduce where and how often ads serve. Even when your ads are still technically “eligible,” a limitation can restrict certain placements, locations, audiences, or formats. The practical result is often a quieter but noticeable performance shift: fewer high-performing impressions, more leftover impressions, and lower CTR.
How to prove it: Check the status column for ads, assets, keywords, and destinations. Also review any policy manager alerts and recent review events around the date CTR changed.
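If you want a single list of ads to eyeball instead of clicking through status columns, a sketch like this pulls approval and review status per ad; I would filter out the fully approved rows client-side. As before, field and enum names are my recollection of the API, not something to take on faith.

```python
# Sketch: list ad approval/review status so 'limited' or under-review ads stand
# out around the date CTR changed. Run via ga_service.search_stream as in the
# earlier sketches; skip rows whose approval_status is APPROVED when reviewing.
POLICY_STATUS_QUERY = """
    SELECT
      campaign.name,
      ad_group.name,
      ad_group_ad.ad.id,
      ad_group_ad.status,
      ad_group_ad.policy_summary.approval_status,
      ad_group_ad.policy_summary.review_status
    FROM ad_group_ad
    WHERE ad_group_ad.status != 'REMOVED'
"""
```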
6) Network mix changed (search partners added or suddenly started contributing more volume)
If search partners are enabled, you can see CTR fluctuations that don’t mirror core search behavior. Partner sites can present ads differently, and clicks may not always reflect the same level of intent as core search. Importantly, lower CTR on partner inventory doesn’t necessarily mean your ads are “bad,” but it can pull down the blended CTR you see in platform averages.
How to prove it: Segment performance by network. If core search CTR is stable but partner CTR dropped (or partner impressions surged), you’ve found the explanation. From there, decide whether partner volume aligns with your goals, and test excluding it if it’s dragging efficiency or lead quality.
7) Automated creative features changed what users saw (text customization / AI settings)
If you’re using AI-assisted text generation features that create additional headlines/descriptions based on your site, existing ads, and keywords, your visible message can change without you editing the ad manually. That’s often positive long-term, but week-to-week it can introduce learning, experimentation, and message shifts that move CTR.
How to prove it: Review asset details and “served” asset reporting. Look for newly introduced automatically generated text assets, shifts in top-performing headline combinations, or messaging that’s accurate but less compelling than your best manual copy.
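For a programmatic look at which text assets actually served, something along these lines is where I would start. Treat the resource and field names here (ad_group_ad_asset_view, asset.source, performance_label) as assumptions based on my recollection of the API and verify them in the reference before running.

```python
# Sketch: which RSA text assets served, how they performed, and whether they
# were advertiser-supplied or automatically created.
ASSET_SERVING_QUERY = """
    SELECT
      ad_group.name,
      ad_group_ad_asset_view.field_type,
      ad_group_ad_asset_view.performance_label,
      asset.text_asset.text,
      asset.source,
      metrics.impressions
    FROM ad_group_ad_asset_view
    WHERE segments.date BETWEEN '2025-12-29' AND '2026-01-11'
"""
```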
Fixes that reliably raise CTR (without tanking lead quality)
Start with relevance: tighten the mapping between intent, keyword, ad group, and ad copy
The most sustainable CTR improvements come from intent alignment, not gimmicks. If CTR dropped because you expanded into broader auctions, your best fix is to split intent into separate ad groups (or campaigns) and tailor messaging. High-intent queries want specificity (pricing, availability, service area, exact product), while early-stage queries want clarity and proof (benefits, differentiators, trust, fast next step).
As you tighten relevance, use the search terms report as your feedback loop. Promote winners into dedicated ad groups, and block truly irrelevant terms with negatives. If you’re running more automated campaign types, use exclusions carefully and prioritize the most precise controls available for brand-related filtering where appropriate.
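To make the negative-keyword side of that feedback loop less ad hoc, here is a small pandas sketch that flags candidates from a search terms report exported as CSV. The column names and thresholds are assumptions for illustration (rename and tune to your export); the output is a review list, not something to apply blindly.

```python
# Sketch: flag negative-keyword candidates from an exported search terms report.
import pandas as pd

terms = pd.read_csv("search_terms_this_week.csv")  # placeholder filename
terms = terms[terms["Impressions"] > 0]
terms["ctr"] = terms["Clicks"] / terms["Impressions"]

# Candidates: meaningful impression volume, CTR well below the report's median,
# and no conversions. Review manually before adding negatives -- over-blocking
# can also stop automation from finding valuable queries.
median_ctr = terms["ctr"].median()
candidates = terms[
    (terms["Impressions"] >= 100)
    & (terms["ctr"] < 0.5 * median_ctr)
    & (terms["Conversions"] == 0)
].sort_values("Impressions", ascending=False)

print(candidates[["Search term", "Impressions", "Clicks", "Conversions"]].head(25))
```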
Improve your “ad package,” not just the headline: strengthen assets and on-page consistency
In competitive auctions, your CTR is often won by the total ad experience: strong headlines, a clear offer, and high-impact assets that make the ad larger and more useful. Add every asset that genuinely helps a user choose you (sitelinks, callouts, structured snippets, images where eligible). Assets can improve prominence and expected click impact, but they work best when they reinforce a single, coherent promise.
Also keep the landing page message consistent with the ad. If users don’t trust the click (because the page feels mismatched), the platform learns that your ads are less satisfying for that intent over time, which can show up as weaker expected CTR and softer overall performance.
Use impression share metrics to decide whether CTR needs creative work or auction work
If you’re losing significant impression share due to rank, you can write better ads all day and still struggle to stabilize CTR because you’re not showing in the placements that generate strong engagement. In that case, you typically need a combination of stronger relevance (keyword/ad/landing page alignment), improved asset coverage, and bids/targets that allow you to compete in the auctions you actually want.
If you’re losing impression share due to budget, fix delivery first. Otherwise, you’ll keep measuring CTR on a changing slice of auctions.
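As a rough triage rule, the two “lost IS” numbers can be turned into a first guess about where to spend your effort. The sketch below encodes the logic from this section; the 10% thresholds are arbitrary illustration values I chose, not official guidance.

```python
def diagnose_focus(rank_lost_is: float, budget_lost_is: float) -> str:
    """Rough triage from impression-share losses (both as fractions, e.g. 0.25).
    Thresholds are illustrative, not official guidance."""
    if budget_lost_is > 0.10:
        return "Fix delivery first: budget/scheduling, or narrow targeting."
    if rank_lost_is > 0.10:
        return "Auction work: relevance, assets, and bids/targets together."
    return "Creative work: treat this mainly as a messaging/relevance problem."

print(diagnose_focus(rank_lost_is=0.32, budget_lost_is=0.02))
```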
Control volatility: isolate changes and avoid stacking experiments
When CTR drops “this week,” the fastest path back is usually to stop the bleeding and reduce variables. Avoid changing bids, budgets, targeting, ads, and automation settings all in the same day. Undo or roll back the most suspicious recent change first (the one that aligns with the CTR drop date), then let performance stabilize before layering additional optimizations.
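To line changes up against the drop date without scrolling the UI, you can also pull change history via the API. My recollection is that change_event queries must be restricted to roughly the last 30 days and must include a LIMIT clause, and the date-literal format may need adjusting, so treat this as a sketch to verify rather than a definitive query.

```python
# Sketch: recent change history to map against the CTR drop date. Run via
# ga_service.search_stream as in the earlier sketches.
CHANGE_HISTORY_QUERY = """
    SELECT
      change_event.change_date_time,
      change_event.user_email,
      change_event.change_resource_type,
      change_event.changed_fields
    FROM change_event
    WHERE change_event.change_date_time BETWEEN '2025-12-29' AND '2026-01-13'
    ORDER BY change_event.change_date_time DESC
    LIMIT 1000
"""
```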
A practical “CTR recovery” checklist I use in real accounts
- Locate the drop: campaign → ad group → keyword/search term, and segment by device + network.
- Confirm eligibility: look for limited/review/policy statuses affecting ads, assets, keywords, or destinations.
- Check auction pressure: impression share losses due to rank/budget + competitor movement in auction diagnostics.
- Audit query quality: identify new low-intent themes in search terms; add negatives only where truly necessary.
- Refresh messaging where it matters: rewrite RSAs to cover top intents, reduce redundancy, avoid over-pinning, and ensure at least one strong ad per ad group.
- Rebuild asset coverage: add missing assets that make your ad more useful and specific; ensure each asset supports the same promise.
If you want, tell me which campaign type you’re looking at (Search vs. Performance Max vs. Shopping vs. Display/Video), whether search partners are enabled, and the exact dates you’re comparing. With just those details, I can tell you which 2–3 diagnostics usually find the cause fastest for your situation.
| Step / Issue | What to Look At | Why It Affects CTR | Key Google Ads Tools & Docs |
|---|---|---|---|
| Confirm the CTR drop is real | Compare a clean 7‑day range to the previous 7‑day range, matching the same days of week (e.g., Mon–Sun vs. Mon–Sun). Focus on campaigns and ad groups with the largest CTR change, not just account‑level averages. | Misaligned date ranges or roll‑up averages can show “fake” drops driven by weekend mix, seasonality, or one outlier campaign rather than a real performance issue. | Use date range comparisons and table sorting in the Campaigns and Ad groups views. Review recent changes with the change history tool to align timing with the CTR shift. |
| Segment by device & network | Break results down by device (mobile vs. desktop) and by network (Google Search vs. search partners). Identify where CTR actually moved. | A shift toward mobile or search partner inventory can lower blended CTR even if nothing is wrong with your core Google Search performance. | Use the “Segment” options in campaign and ad group tables and review how search partners work in the Search Network documentation. |
| Check eligibility & status issues | Review status for ads, assets, keywords, and destinations (limited, under review, disapproved, low search volume, etc.) around the date CTR changed. | Policy and review limitations can quietly remove high‑quality impressions, leaving more lower‑intent inventory and pulling CTR down. | Use status columns and filters in Ads, Keywords, and Assets. Refer to keyword status guidance such as low search volume and related status help articles. |
| 1) Ad Rank & position pressure | Review impression share and “Search Lost IS (rank)” at campaign/ad group level. Check Auction insights for changes in overlap rate, position‑above rate, and top‑of‑page presence. | If competitors improve bids or quality, you may keep serving but in lower, less‑clicked positions, causing a CTR drop even if your ads didn’t change. | Use impression share metrics such as Search Lost IS (rank) together with the Auction insights report. |
| 2) Budget constraints & “worse time” delivery | Compare “Search Lost IS (budget)” and analyze hour‑of‑day / day‑of‑week performance. Look for more impressions in historically weaker hours while strong hours lose coverage. | Tight budgets can force the system to skip high‑intent, competitive auctions and serve more often in cheaper, low‑intent times, reducing overall CTR. | Use Search Lost IS (budget) and related impression share metrics, plus hour‑of‑day and day‑of‑week segments in reports. |
| 3) Query mix changes (match types, close variants, expansion) | Compare “this week vs. last week” in the search terms report. Look for new themes, broader informational queries, or terms that don’t match your offer. | Entering more broad or loosely related auctions increases impressions from lower‑intent searches, which naturally lowers CTR even if your ads haven’t changed. | Use the search terms report to promote strong queries into keywords and add negative keywords for truly irrelevant terms. |
| 4) Ad or asset changes reduced relevance | Review recent edits to headlines, descriptions, pinning, and landing page messaging. Check RSA performance and combination reporting before and after the change. | If the visible message no longer clearly matches user intent, or if pinning reduces ad variation, fewer users will click—even if keywords and bids are unchanged. | Use Ad details and the Ad Strength for responsive search ads documentation for guidance on asset coverage, uniqueness, and pinning. |
| 5) Policy / review status limiting where you show | Check policy labels and review events in the account around the CTR drop date, including any alerts in policy‑related tools. | Partial limitations can quietly remove impressions from your best placements, locations, or audiences, leaving lower‑quality impressions that drag down CTR. | Use policy and status columns plus any policy‑related tools (for example, policy manager views) described in Google Ads Help to identify and resolve limitations. |
| 6) Network mix shift (search partners vs. core search) | Segment by network to compare CTR and impression volume on Google Search vs. search partners before and after the drop. | Partner sites can show ads in different layouts and contexts; their lower average CTR can pull down your blended account CTR even if Google Search performance is stable. | Use network segmentation and review how partners behave in the About the Google Search Network documentation. |
| 7) Automated creative / AI text changes | Review asset reporting for RSAs to see which headlines/descriptions were served more often this week. Look for newly auto‑generated text or different top combinations. | When automatically created or AI‑generated assets start serving more, the effective message in the auction can change, causing short‑term CTR volatility. | Use asset performance views and consult the Ad Strength guidance for best practices on supplying enough high‑quality assets. |
| Relevance & structure fixes | Rebuild tight groupings between intent, keyword, ad group, and ad copy. Use search terms as a feedback loop: promote strong queries into new ad groups and add negatives only for clearly irrelevant searches. | Stronger intent alignment makes ads feel more “made for” the query, improving CTR without resorting to clickbait or hurting lead quality. | Combine the search terms report with keyword and ad group restructuring to keep high‑intent and broad‑intent traffic separated and messaged differently. |
| Strengthen the full ad package | Audit asset coverage: sitelinks, callouts, structured snippets, images (where eligible), and other extensions. Ensure landing page messaging closely matches the ad’s promise. | Larger, more informative ads win attention and clicks, and consistent landing pages help Google learn that your ads satisfy that intent, reinforcing CTR over time. | Use ad asset setup and landing page alignment best practices in Google Ads Help along with RSA Ad Strength recommendations. |
| Use impression share to choose “creative vs. auction” fixes | If you’re losing a lot of impressions to rank, prioritize improving relevance, assets, and competitive bids/targets. If you’re losing to budget, address budgets and scheduling first. | Creative tweaks alone can’t fix CTR if you rarely show in strong positions; similarly, a moving budget baseline makes it hard to read CTR trends. | Rely on impression share and “lost IS (rank/budget)” metrics in Google Ads reporting to decide whether to focus on ad quality, bids, or budgets. |
| Control volatility & stack changes carefully | Map CTR changes against the change history timeline. Roll back the most suspicious recent change first and avoid editing bids, budgets, targeting, and ads all at once. | Multiple overlapping tests make it impossible to know what caused the CTR drop and can prolong instability. | Use the change history tool to isolate impactful events and measure CTR recovery after each adjustment. |
| Practical CTR recovery checklist | 1) Locate the drop (campaign → ad group → keyword/search term; segment by device and network). 2) Confirm eligibility and policy status. 3) Check auction pressure (rank/budget loss and Auction insights). 4) Audit query quality and negatives. 5) Refresh RSA messaging and asset coverage where it matters most. | Following a consistent checklist prevents chasing noise and helps you quickly identify whether the issue is eligibility, competition, traffic mix, or messaging. | Combine account views, search terms reporting, impression share and Auction insights, RSA Ad Strength, and change history to execute the checklist end‑to‑end. |
If your CTR dropped this week, start by confirming it’s a real change (compare a clean Mon–Sun week to the previous Mon–Sun, then zoom into the specific campaigns and ad groups driving the decline) and map the timing against Google Ads Change History to rule out “fake” drops caused by day-of-week mix or one outlier. Next, segment performance by device and network, since a shift toward mobile traffic or Search Partners can pull down blended CTR even when core Google Search is stable, and check for eligibility or policy/review issues that may have limited your best ads, assets, keywords, or landing pages. From there, look for auction pressure (higher competition can push you into lower positions), budget constraints that change when your ads show (more impressions during weaker hours), query-mix drift in the Search Terms report (broader or less relevant searches), and recent RSA/asset edits or auto-generated asset serving that may have changed your message. If you want a faster way to pinpoint whether it’s traffic mix, rank, budget, queries, or creative, Blobr connects to your Google Ads and runs specialized AI agents that continuously audit these exact areas—like optimizing callout extensions and improving landing-page-to-ad alignment—so you get a clear, prioritized set of fixes without having to manually chase every report.
