Choose the right way to A/B test landing pages in Google Ads (because not all “tests” are equal)
When advertisers say they’re “A/B testing landing pages,” they often mean they’re running two ads with two different URLs and watching conversion rate. The problem is that this rarely behaves like a true A/B test, because modern ad serving prioritizes what it believes will perform best, which can skew traffic and make results hard to trust.
A true A/B test in Google Ads is one where traffic (and usually budget) is intentionally split between a control and a variant so you can compare outcomes over the same time period—without the system quietly reallocating exposure in a way that invalidates the comparison.
The most reliable method: Custom Experiments (built-in traffic split and clean reporting)
If you want the closest thing to a “real” landing page A/B test directly inside Google Ads, use a custom experiment. This setup creates an experiment version of your campaign and splits eligible auctions between the original and the experiment based on the split option you choose. The big advantage is that you’re not relying on ad rotation quirks to force fairness—you’re using a dedicated experimentation framework designed for comparisons.
What about Ad Variations?
Ad variations are excellent when you want to test a single ad copy change (like a new call-to-action) across lots of ads at once. They’re not my first pick for landing page testing because landing page tests usually require tight control over where traffic goes and how that traffic is counted. If your goal is strictly landing page performance, custom experiments are the cleaner tool.
If you run Performance Max: control “Final URL expansion” before you test landing pages
Performance Max can automatically send users to different pages on your domain if final URL expansion is enabled (it’s commonly on by default). That’s the opposite of what you want during a landing page A/B test, because you’re no longer testing Page A versus Page B—you’re testing “whatever pages the system decides” versus your intended variant. If you’re serious about landing page testing, lock down the landing page behavior first so the only meaningful difference between test arms is the page you’re trying to evaluate.
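If your account is on the Google Ads API, you can audit this setting across Performance Max campaigns with a short query. The sketch below uses the official Python client; the customer ID is a placeholder and `url_expansion_opt_out` is the field name as I understand the current API, so verify both against your API version before relying on it.

```python
from google.ads.googleads.client import GoogleAdsClient

# Assumes credentials in google-ads.yaml; the customer ID below is a placeholder.
client = GoogleAdsClient.load_from_storage()
ga_service = client.get_service("GoogleAdsService")

# url_expansion_opt_out = FALSE means final URL expansion is ON (the usual default),
# so Google may route clicks to pages other than the one you intend to test.
query = """
    SELECT campaign.id, campaign.name, campaign.url_expansion_opt_out
    FROM campaign
    WHERE campaign.advertising_channel_type = 'PERFORMANCE_MAX'
      AND campaign.status = 'ENABLED'
"""

for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        state = "locked (opted out)" if row.campaign.url_expansion_opt_out else "expansion ON - review before testing"
        print(f"{row.campaign.name}: final URL expansion {state}")
```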
Before you start: set up measurement so your test doesn’t lie to you
Landing page tests fail far more often from tracking and attribution issues than from “bad creative.” Before you split traffic, make sure the conversion you’re optimizing for is measured consistently and credits the right campaign clicks.
Pick one primary conversion (and keep it stable for the full test)
Decide what “winning” means and don’t change it midstream. For lead gen, that might be a form submission. For ecommerce, it’s usually a purchase (and ideally conversion value, not just conversion count). If you’re importing conversions from Analytics key events into Google Ads, confirm the exact key event you’re using and keep that consistent for the duration of the experiment so you don’t shift the goalposts halfway through.
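If you want to double-check which conversion actions are actually marked as primary before the experiment starts, a query like the sketch below lists each enabled action and whether it counts toward bidding. This is a hedged example using the Google Ads API Python client; the customer ID is a placeholder and field names reflect my understanding of the API.

```python
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage()  # assumes google-ads.yaml is configured
ga_service = client.get_service("GoogleAdsService")

# primary_for_goal = TRUE means the action counts toward bidding and optimization.
query = """
    SELECT conversion_action.name,
           conversion_action.category,
           conversion_action.primary_for_goal
    FROM conversion_action
    WHERE conversion_action.status = 'ENABLED'
"""

for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        ca = row.conversion_action
        flag = "PRIMARY" if ca.primary_for_goal else "secondary"
        print(f"{ca.name} ({ca.category.name}): {flag}")
```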
Make your URLs test-friendly (without breaking tracking)
You can test landing pages using two separate URLs (for example, /landing-a versus /landing-b) or by using a single URL with a parameter that swaps the page experience (for example, ?variant=a versus ?variant=b). Either approach can work. What matters is that each test arm sends users to a reliably distinct experience and that your tracking setup doesn’t break when you swap URLs.
If you use tracking templates and URL parameters, make sure they include a proper landing page URL insertion (such as the {lpurl} ValueTrack parameter) so clicks still resolve to the correct final URL. This is especially important in setups that rely on automated targeting types that require final URL insertion to function correctly.
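To make that concrete, here's a rough illustration in plain Python (the URLs and tracker domain are hypothetical) of how a template with a landing page URL insertion resolves for each variant: the final URL, including your variant parameter, is URL-escaped and substituted into the template, so the click still lands on the intended page.

```python
from urllib.parse import quote

# Hypothetical final URLs for each test arm.
final_urls = {
    "control": "https://www.example.com/landing-a?variant=a",
    "variant": "https://www.example.com/landing-b?variant=b",
}

# Hypothetical click tracker; {lpurl} is where Google inserts the escaped final URL.
tracking_template = "https://track.example.com/click?dest={lpurl}"

for arm, url in final_urls.items():
    resolved = tracking_template.replace("{lpurl}", quote(url, safe=""))
    print(f"{arm}: {resolved}")

# If the template omits {lpurl} (or the tracker drops the dest parameter),
# both arms can collapse onto the same page and the test silently breaks.
```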
Quick pre-test checklist (do this once, then don’t touch it)
- Confirm your primary conversion action is firing correctly and attributing to Google Ads clicks.
- Ensure both landing pages load fast, work on mobile, and have the same offer and traffic intent (you’re testing the page, not the promotion).
- Confirm your final URL doesn’t redirect users to a different domain (this can cause disapprovals and tracking issues); a basic automated check of these page-level items is sketched after this checklist.
- If using Performance Max, restrict or disable behaviors that could route traffic to unexpected pages during the test.
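The page-level items above are easy to spot-check with a small script. Here's a minimal sketch with hypothetical URLs: it confirms both arms return a 200, stay on the original domain after redirects, and respond reasonably fast. It doesn't replace a real mobile or page-speed audit.

```python
from urllib.parse import urlparse
import requests  # third-party: pip install requests

# Hypothetical landing page URLs for each test arm.
pages = {
    "control": "https://www.example.com/landing-a",
    "variant": "https://www.example.com/landing-b",
}

for arm, url in pages.items():
    resp = requests.get(url, timeout=10, allow_redirects=True)
    same_domain = urlparse(resp.url).netloc == urlparse(url).netloc
    print(
        f"{arm}: status={resp.status_code}, "
        f"load={resp.elapsed.total_seconds():.2f}s, "
        f"stayed on domain={same_domain}"
    )
    # Anything other than a 200 on the original domain is worth fixing
    # before you let paid traffic hit the test.
```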
Step-by-step: A/B test landing pages using a Google Ads custom experiment
This is the workflow I use when I want decision-grade landing page results from Google Ads traffic.
Step 1: Define a single hypothesis and a single primary KPI
Keep it simple: “Landing Page B will increase lead conversion rate by reducing friction above the fold,” or “Landing Page B will increase purchase conversion value by improving product clarity.” Choose one primary KPI that matches how the campaign is optimized (for example, CPA if you optimize for leads, or ROAS/conversion value if you optimize for revenue). You can still monitor secondary metrics, but don’t let them override your main KPI unless there’s a clear business reason.
Step 2: Create Page A (control) and Page B (variant), aligned to the same intent
Both pages must match the promise of the ad. If your ads target “Emergency plumber near me,” don’t send one variant to a generic homepage and the other to a service page—that’s not a fair landing page test; it’s an intent mismatch test. Make the pages different in design or structure, but equal in offer, pricing, and message.
Step 3: Create the custom experiment and split traffic cleanly
In Google Ads, go to the Experiments area and create a custom experiment based on the campaign you want to test. During setup, you’ll choose the experiment dates and the experiment split.
In most cases, a 50/50 split is the fastest path to a confident result because you accumulate comparable data in both arms at the same time. For Search campaigns, the split option determines whether the same person can see both versions across multiple searches. If you want a cleaner user-level comparison (each person sees only one version), use the cookie-based split, which keeps users assigned to a single arm. If you need results faster and can tolerate the same user potentially seeing both arms across separate searches, use the search-based split, which randomizes on every search.
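Once the experiment is live, it's worth confirming that the split you picked is what's actually configured. If you use the Google Ads API, a query against the experiment arms shows each arm, whether it's the control, and its traffic share. This is a sketch only; the resource and field names reflect my understanding of the API, so double-check them against the current version.

```python
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage()  # assumes google-ads.yaml is configured
ga_service = client.get_service("GoogleAdsService")

# Each experiment has a control arm and a treatment arm with a traffic_split share.
query = """
    SELECT experiment_arm.experiment,
           experiment_arm.name,
           experiment_arm.control,
           experiment_arm.traffic_split
    FROM experiment_arm
"""

for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        arm = row.experiment_arm
        role = "control" if arm.control else "treatment"
        print(f"{arm.name} ({role}): {arm.traffic_split}% of traffic")
```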
Important operational note: avoid making changes while the experiment runs. Changes to the base campaign aren’t automatically reflected in the experiment, and editing either side mid-test can make results harder to interpret.
Step 4: Make exactly one change in the experiment: swap the final URL to Page B
Once the experiment campaign exists, edit the ads (or the relevant URL setting) in the experiment arm only, replacing the control landing page with the variant landing page. Keep everything else the same: keywords, targeting, bids, ad copy, assets, and extensions. The more you keep constant, the more confidently you can attribute performance differences to the landing page.
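A simple way to enforce the "one change only" rule is to diff the final URLs between the base campaign and the experiment campaign after you edit. Here's a hedged sketch using the Google Ads API Python client; the customer and campaign IDs are placeholders.

```python
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage()  # assumes google-ads.yaml is configured
ga_service = client.get_service("GoogleAdsService")

# Placeholder IDs for the base (control) and experiment campaigns.
BASE_CAMPAIGN_ID = "1111111111"
EXPERIMENT_CAMPAIGN_ID = "2222222222"

query = f"""
    SELECT campaign.id, ad_group_ad.ad.id, ad_group_ad.ad.final_urls
    FROM ad_group_ad
    WHERE campaign.id IN ({BASE_CAMPAIGN_ID}, {EXPERIMENT_CAMPAIGN_ID})
      AND ad_group_ad.status != 'REMOVED'
"""

urls_by_campaign = {BASE_CAMPAIGN_ID: set(), EXPERIMENT_CAMPAIGN_ID: set()}
for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        urls_by_campaign[str(row.campaign.id)].update(row.ad_group_ad.ad.final_urls)

print("Control arm URLs:   ", urls_by_campaign[BASE_CAMPAIGN_ID])
print("Experiment arm URLs:", urls_by_campaign[EXPERIMENT_CAMPAIGN_ID])
# The only difference you should see is Page A versus Page B.
```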
Step 5: Let the experiment run long enough to collect meaningful conversion data
As a rule, tests with low conversion volume need time. If you end an experiment early because you’re impatient, you usually “select” a winner that was just variance. In practice, many campaigns need multiple weeks to reach a stable conclusion, especially if Smart Bidding is involved and needs time to adjust within each arm.
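If you want a rough sense of how long "long enough" is, a standard two-proportion sample-size estimate gives a back-of-the-envelope answer. The sketch below is plain Python with made-up numbers and only considers conversion rate; Smart Bidding learning periods can stretch the real timeline further.

```python
from statistics import NormalDist

# Made-up baseline: 5% conversion rate, hoping to detect a lift to 6.5%.
p1, p2 = 0.05, 0.065
alpha, power = 0.05, 0.80          # 95% confidence, 80% power
clicks_per_day_per_arm = 200       # placeholder for your campaign's traffic

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
z_beta = NormalDist().inv_cdf(power)

# Classic two-proportion sample-size estimate per arm.
n_per_arm = ((z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2

print(f"~{n_per_arm:,.0f} clicks per arm")
print(f"~{n_per_arm / clicks_per_day_per_arm:.0f} days at current traffic")
```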
Also remember that ads in the experiment may need review time before they serve normally. If you want the experiment to start cleanly, schedule it to begin in the future so approvals don’t skew early delivery.
Step 6: Read results like an operator, not like a gambler
When evaluating, start with your primary KPI (CPA or ROAS), then validate with conversion rate, conversion volume, and downstream quality signals (like qualified leads or refunded orders). A landing page that “wins” on conversion rate but produces lower-quality leads is not a winner—it’s a volume trap.
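Before reading too much into the gap between arms, it helps to sanity-check whether the conversion-rate difference is bigger than noise. Here's a minimal two-proportion z-test in plain Python; the numbers are placeholders, and it only speaks to conversion rate, not lead quality or ROAS.

```python
from statistics import NormalDist

# Placeholder results pulled from the experiment report.
control = {"clicks": 4100, "conversions": 205}   # ~5.0% CVR
variant = {"clicks": 4050, "conversions": 243}   # ~6.0% CVR

p1 = control["conversions"] / control["clicks"]
p2 = variant["conversions"] / variant["clicks"]
pooled = (control["conversions"] + variant["conversions"]) / (control["clicks"] + variant["clicks"])

se = (pooled * (1 - pooled) * (1 / control["clicks"] + 1 / variant["clicks"])) ** 0.5
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Control CVR {p1:.2%} vs variant CVR {p2:.2%}")
print(f"z = {z:.2f}, two-sided p-value = {p_value:.3f}")
# A low p-value means the gap is unlikely to be pure variance.
# It still says nothing about lead quality or downstream revenue.
```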
If results are favorable, apply the experiment changes back to the original campaign (or convert the experiment into a new independent campaign if that fits your account structure). If results are mixed or inconclusive, don’t force a decision—extend the test or simplify the change you’re trying to measure.
Common landing page A/B testing mistakes (and how to avoid them)
Mistake: Testing landing pages while also changing bidding, keywords, or ads
If you change multiple levers at once, you won’t know what caused the improvement (or decline). Landing page tests should isolate the page. If you want to test bidding strategy, do that as a separate experiment.
Mistake: Letting automation send traffic to unexpected pages
Some campaign types and settings can route users dynamically. That can be great for performance, but it’s bad for controlled experimentation. During a landing page test, lock your final URL behavior so each arm consistently sends users where you intend.
Mistake: Declaring victory on “micro metrics”
Clicks, bounce rate, and time on site can be interesting diagnostics, but they’re not the finish line. For Google Ads, winning means improved business outcomes at acceptable cost—typically CPA, ROAS, conversion value, or lead quality. Use micro metrics to explain why something happened, not to decide the winner.
Mistake: Broken attribution from messy URL tracking
If you use tracking templates, final URL suffixes, or third-party click trackers, be meticulous. Your tracking must resolve correctly to the landing page in both arms. If one variant breaks parameter handling, you can end up “proving” the other page is better when in reality it was simply the only page whose conversions were measured correctly.
Mistake: Ending too early (or running too long without a decision rule)
Set an expectation upfront for how you’ll decide: minimum duration, minimum conversions per arm, and the primary KPI threshold that justifies a rollout. That keeps you from stopping early when a graph looks exciting, and it keeps you from running indefinitely when the test is clearly not producing a meaningful difference.
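One way to make that decision rule tangible is to write it down as a tiny gate function before the experiment starts, then only evaluate against it. A rough sketch in plain Python; the thresholds and results below are placeholders you'd set per account.

```python
from dataclasses import dataclass

@dataclass
class DecisionRule:
    min_days: int            # minimum run time before any decision
    min_conv_per_arm: int    # minimum conversions required in each arm
    max_cpa_delta: float     # variant CPA must beat control by this margin (e.g. -0.10 = 10% lower)

def decide(rule: DecisionRule, days: int, conv_a: int, conv_b: int,
           cpa_a: float, cpa_b: float) -> str:
    if days < rule.min_days or min(conv_a, conv_b) < rule.min_conv_per_arm:
        return "keep running - not enough data yet"
    cpa_delta = (cpa_b - cpa_a) / cpa_a
    if cpa_delta <= rule.max_cpa_delta:
        return "roll out the variant"
    return "no meaningful difference - end the test or rework the variant"

# Placeholder rule: 30-day minimum, 100 conversions per arm, variant must cut CPA by 10%+.
rule = DecisionRule(min_days=30, min_conv_per_arm=100, max_cpa_delta=-0.10)
print(decide(rule, days=34, conv_a=120, conv_b=131, cpa_a=42.0, cpa_b=36.5))
```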
If you’re setting up landing page A/B tests in Google Ads, the key is keeping the experiment “clean”: use Custom Experiments to split traffic and budget intentionally, change only the final URL between control and variant, lock in one primary conversion goal, and watch out for automation (like Performance Max Final URL expansion) that can quietly reroute traffic and muddy results. Blobr can support this kind of disciplined testing by connecting to your Google Ads account and surfacing clear, prioritized actions via specialized AI agents, including ones focused on landing page alignment and keyword-to-page matching, so you can spot tracking issues, relevance gaps, or wasted spend before they skew your experiment outcomes.