1) Set up an A/B test that won’t lie to you
A/B testing landing pages for ads is simple in concept—send similar traffic to two pages and compare conversion performance—but it’s surprisingly easy to “test” and learn the wrong thing. The goal is to make the landing page the only meaningful difference, while keeping tracking, targeting, and budgets stable enough that the result is actually attributable to the page.
Pick one primary success metric (and make sure it’s measured correctly)
Before you touch anything, decide what “better” means. For lead gen, that’s often Cost / conversion and Conversion rate. For ecommerce, it may be Conversion value and Return on ad spend. Whatever you choose, make sure your conversion measurement is set up for the action that truly matters (purchase, qualified lead, booked call, etc.), not a soft micro-action that’s easy to inflate.
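If it helps to see those definitions side by side, here is a minimal sketch of how the candidate primary metrics are computed from per-variant totals; the function and field names are illustrative, not a specific reporting schema.

```python
def primary_metrics(cost: float, conversions: float, conversion_value: float, clicks: int) -> dict:
    """Compute the usual 'winner' metrics from per-variant totals over the same date range."""
    return {
        # Lead gen: cost per conversion (CPA) and conversion rate
        "cpa": cost / conversions if conversions else float("inf"),
        "conversion_rate": conversions / clicks if clicks else 0.0,
        # Ecommerce: total conversion value and return on ad spend (ROAS)
        "conversion_value": conversion_value,
        "roas": conversion_value / cost if cost else 0.0,
    }

# Example: a variant that spent $1,200 on 800 clicks and drove 40 conversions worth $4,800
print(primary_metrics(cost=1200, conversions=40, conversion_value=4800, clicks=800))
```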
If you’re going to run the test using automated bidding, you’ll get cleaner outcomes when the bidding system is optimizing toward the same conversion action you’re using to judge the winner. Misaligned goals are one of the fastest ways to end up with “Page B won” results that don’t hold up after you scale.
Keep the test clean: one change, consistent intent, consistent offer
The most dependable landing page tests change one major idea at a time: the hero message, the form length, the proof section, the CTA, the pricing display, or the page layout. If you change all of them at once, you may still find a winner—but you won't know why it won, which makes it harder to iterate.
Also make sure the two pages match the intent of the same ads. If Page A is “Request a demo” and Page B is “Start a free trial,” you’re not just testing pages—you’re testing offers and funnel stages. That’s not wrong, but it’s a different test and should be planned (and judged) differently.
Use URL hygiene so you can diagnose results later
Tag each variant consistently so you can segment performance in reporting tools and quickly spot tracking issues. The cleanest way is usually appending a parameter at the end of the landing page URL (for example, a simple “variant” parameter). In most cases, you’ll want to place parameters that must reach the final landing page in a URL suffix field designed for that purpose, and reserve tracking templates for third-party tracking scenarios.
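As an illustration of what that looks like in practice, here is a small sketch assuming you define an arbitrary "variant" parameter in the final URL suffix; the URLs and the parameter name are placeholders, and {campaignid} is a standard ValueTrack parameter shown only for context.

```python
from urllib.parse import urlparse, parse_qs

# Illustrative final URL suffix values for each arm. "variant" is a name you
# choose yourself; {campaignid} is a ValueTrack parameter filled in at click time.
SUFFIX_CONTROL = "variant=a&cid={campaignid}"
SUFFIX_TREATMENT = "variant=b&cid={campaignid}"

def landing_url(final_url: str, suffix: str) -> str:
    """Preview what the landing page actually receives once the suffix is appended."""
    separator = "&" if urlparse(final_url).query else "?"
    return f"{final_url}{separator}{suffix}"

def variant_of(landing_page_url: str) -> str | None:
    """Recover the variant from a URL seen in analytics or server logs."""
    return parse_qs(urlparse(landing_page_url).query).get("variant", [None])[0]

print(landing_url("https://example.com/lp-a", SUFFIX_CONTROL))
print(variant_of("https://example.com/lp-b?variant=b&cid=1234567890"))  # -> "b"
```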
One more practical detail: click measurement and redirects can affect load time and sometimes tracking behavior. Modern accounts use parallel tracking, which sends users directly to the final URL while measurement happens in the background—this is good for landing page speed and helps reduce measurement-related drop-off. Your test pages should be equally fast; otherwise, you’re partly testing speed, not just content.
2) The most reliable way: run the test inside the ad platform with an Experiment
If you want a true A/B split where the ad system handles randomization and comparison, run a campaign experiment and change only the final URL (landing page) in the treatment. This approach keeps targeting, auctions, and most delivery dynamics as comparable as possible.
Method A (recommended): Custom Experiments for landing page A/B tests
Custom experiments let you create a trial version of an existing campaign and split the original campaign’s traffic and budget between the control (original) and treatment (experiment). For a landing page test, your treatment change is simply swapping the final URL to Page B while everything else remains consistent.
- Create the experiment from the original campaign and ensure the treatment campaign mirrors the control campaign settings, ads, and targeting.
- Change only the final URL in the treatment to your alternate landing page.
- Choose a split that matches your goal: for Search you can typically choose either a cookie-based split (users tend to consistently see one variant) or a search-based split (assignment can happen per search). For Display, the split is designed so users only see one arm.
- Set a sensible traffic/budget split (often 50/50 when you want the fastest learning with balanced risk).
- Schedule the experiment long enough to learn and avoid making changes mid-flight.
Two operational notes that matter more than most people expect. First, experiments have a defined runtime window (commonly in the “weeks” range), and you can extend the end date while the experiment is running if you need more data. Second, ad reviews can delay start, so scheduling a start date slightly in the future can prevent your experiment from “starting” before the treatment ads are eligible to serve.
How long should you run it? Aim for decisions, not dates
Instead of fixating on “run it for 14 days,” plan around volume and stability. You’re looking for enough conversions that normal daily volatility doesn’t dominate the conclusion. As a rule, higher daily conversion volume gets you to confident outcomes faster; low-volume accounts will need longer tests (or bigger changes that create a stronger signal).
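To make "enough conversions" concrete, here is a rough back-of-the-envelope sketch using a standard two-proportion sample-size formula; the baseline conversion rate and target lift are placeholders, and the experiment reporting itself will handle the statistics for you.

```python
from math import ceil, sqrt
from statistics import NormalDist

def clicks_needed_per_arm(baseline_cvr: float, relative_lift: float,
                          alpha: float = 0.05, power: float = 0.8) -> tuple[int, int]:
    """Approximate clicks (and conversions) needed per arm to detect a given relative lift.

    Standard two-proportion z-test sample size, normal approximation.
    """
    p1 = baseline_cvr
    p2 = baseline_cvr * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = NormalDist().inv_cdf(power)            # desired power
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    clicks = ceil(n)
    return clicks, ceil(clicks * p1)  # clicks per arm, ~conversions per arm at the baseline rate

# Example: 4% baseline conversion rate, hoping to detect a 20% relative lift
clicks, conversions = clicks_needed_per_arm(0.04, 0.20)
print(f"~{clicks:,} clicks (~{conversions} conversions) per arm")
```

With a 4% baseline and a hoped-for 20% relative lift, this lands on the order of ten thousand clicks per arm, which is exactly why low-volume accounts either run longer or test bigger swings.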
Also, be careful about the learning period. Automated systems often need time to adapt after a change, and early days can be noisy. A practical approach is to review results while excluding the first chunk of days so the initial ramp doesn’t skew the comparison.
3) Advanced situations: Performance Max, URL expansion, and “hidden” routing issues
If you use Performance Max, control where traffic is allowed to land
Landing page A/B testing gets tricky in campaigns that can automatically choose the final landing page. In Performance Max, final URL expansion is commonly enabled by default, meaning the system may send users to different pages on your domain based on predicted relevance. That’s great for performance, but it can corrupt a landing page test if you’re trying to force traffic evenly between Page A and Page B.
If you need a clean A/B test, you generally have two options: turn off final URL expansion for the duration of the test, or use URL exclusions so the campaign can’t route traffic to pages that aren’t part of your test. If you’re explicitly trying to measure the impact of URL expansion itself, there are experiment formats designed to compare expansion-on versus expansion-off, including the ability to exclude specific URLs in the treatment so your test stays focused.
Common routing pitfalls that break landing page tests
Even when you think you’re splitting traffic 50/50, real-world routing can silently override your setup. Watch for redirects that send mobile users to different templates, geo-routing that changes content, consent banners that behave differently, and A/B tools that randomize again after the ad platform already randomized (double randomization can create uneven splits and confusing results).
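One cheap guardrail, sketched below with placeholder counts: before reading any conversion numbers, check whether the observed click split is statistically consistent with the split you planned. A failure doesn't prove anything on its own, but it's a prompt to investigate routing before trusting the results.

```python
from math import sqrt
from statistics import NormalDist

def split_looks_healthy(clicks_a: int, clicks_b: int,
                        planned_share_a: float = 0.5, alpha: float = 0.01) -> bool:
    """Check whether the observed traffic split is consistent with the planned one.

    Simple two-sided z-test on variant A's observed share of clicks. A failure
    doesn't mean one page is 'losing'; it means routing, redirects, or a second
    layer of randomization may be interfering with the split.
    """
    n = clicks_a + clicks_b
    observed_share = clicks_a / n
    standard_error = sqrt(planned_share_a * (1 - planned_share_a) / n)
    z = abs(observed_share - planned_share_a) / standard_error
    p_value = 2 * (1 - NormalDist().cdf(z))
    return p_value > alpha

# Example: 5,400 vs 4,600 clicks on a planned 50/50 split
print(split_looks_healthy(5400, 4600))  # False -> investigate routing before reading results
```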
Also confirm your URL structure is policy-compliant and consistent. Your visible URL and landing experience need to align to the same site expectations, and your tracking setup should reliably produce the same landing destination for the same variant every time.
4) How to analyze results and confidently pick a winner
Use experiment reporting the right way (confidence, intervals, and “no clear winner”)
When you monitor an experiment, don’t just look at raw conversion counts. Use the experiment summary view to compare performance and pay attention to the confidence level and the confidence interval around the observed lift. A “+8% conversion rate” result with wide uncertainty may be directionally interesting but not decision-grade yet.
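The experiment view reports these figures for you, but if you want to sanity-check them or analyze raw variant counts exported from analytics, here is a rough sketch using a normal-approximation interval for the difference in conversion rates; the counts are placeholders and the relative-lift interval is approximate.

```python
from math import sqrt
from statistics import NormalDist

def lift_with_interval(conv_a: int, clicks_a: int, conv_b: int, clicks_b: int,
                       confidence: float = 0.95) -> tuple[float, float, float]:
    """Relative conversion-rate lift of B over A with a rough normal-approximation interval.

    Returns (lift, low, high) as fractions, e.g. 0.08 == +8%.
    """
    rate_a, rate_b = conv_a / clicks_a, conv_b / clicks_b
    se = sqrt(rate_a * (1 - rate_a) / clicks_a + rate_b * (1 - rate_b) / clicks_b)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    diff_low, diff_high = (rate_b - rate_a) - z * se, (rate_b - rate_a) + z * se
    # Express everything relative to the control rate (approximate, for intuition only)
    return (rate_b - rate_a) / rate_a, diff_low / rate_a, diff_high / rate_a

lift, low, high = lift_with_interval(conv_a=120, clicks_a=4000, conv_b=130, clicks_b=4000)
print(f"{lift:+.1%} observed lift, 95% interval roughly {low:+.1%} to {high:+.1%}")
```

With these placeholder numbers the observed lift is about +8%, but the interval spans roughly -17% to +34%: exactly the "directionally interesting but not decision-grade" situation described above.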
If you see “no clear winner,” that’s not failure—it usually means one of three things: you need more time, you need more volume (often budget), or the pages are genuinely similar in performance. In that third case, it’s often smarter to test a bigger idea rather than polishing minor wording.
Pair conversion results with landing page diagnostics (speed and mobile experience)
Alongside your conversion metrics, review landing page performance diagnostics so you can separate “the message worked” from “the page loaded faster.” Use the landing page reporting area to identify pages with weaker mobile friendliness rates, and watch for pages that fail mobile tests inconsistently (those intermittent failures can crush conversion rate and look like “Variant B is worse” when it’s actually a technical reliability issue).
Speed differences matter because even small delays can reduce conversion volume, especially on mobile. If Variant B is heavier (more scripts, larger images, extra widgets), you may be testing friction and load time as much as you’re testing persuasion.
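If you want a quick pre-flight comparison before launching the test, a rough sketch like the one below (assuming the third-party requests package and placeholder URLs) compares HTML payload size and server response time for the two variants. It's no substitute for proper lab or field speed tooling, but a large gap here already means you'd be testing speed as much as messaging.

```python
import requests  # third-party: pip install requests

VARIANTS = {
    "A": "https://example.com/lp-a",  # placeholder URLs
    "B": "https://example.com/lp-b",
}

# Rough pre-flight check only: raw HTML size and server response time.
# It ignores render time, scripts, and images, but a large gap here is already
# a sign that the test would partly be measuring speed rather than messaging.
for name, url in VARIANTS.items():
    response = requests.get(url, timeout=10)
    print(f"Variant {name}: {len(response.content) / 1024:.0f} KB HTML, "
          f"{response.elapsed.total_seconds() * 1000:.0f} ms server response")
```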
5) A practical checklist for your next landing page A/B test
- One primary KPI aligned to your campaign optimization goal (CPA, conversion rate, value, ROAS).
- One meaningful page change (or a clearly defined “bundle” of changes with a follow-up plan).
- Stable traffic split using an experiment so control and treatment share comparable auctions and intent.
- Clean measurement with consistent conversion tracking and consistent URL parameters per variant.
- Control routing (especially in campaign types that can automatically choose landing pages).
- Enough volume to reduce noise, and exclude the earliest learning/ramp period when interpreting results.
If you follow that structure, landing page testing stops being a guessing game and becomes an engine: test, learn, apply the winner, then immediately test the next highest-impact hypothesis.
| Step | What to do | Why it matters for landing page A/B tests | Relevant Google Ads features / docs |
|---|---|---|---|
| 1. Define a trustworthy success metric | Choose one primary KPI (e.g., CPA, conversion rate, conversion value, ROAS) and ensure your conversion action tracks the real business outcome (purchase, qualified lead, booked call). Align your bid strategy’s optimization goal with that same conversion action. | Keeps the test from “lying” by avoiding soft goals (e.g., page views, scrolls) that are easy to inflate and ensures automated bidding learns toward the metric you’ll use to pick a winner. | Set up your conversions; Set up your web conversions |
| 2. Keep the test clean and traffic comparable | Change only one major landing page element (hero, form, proof, CTA, layout, pricing display) per test, and keep the offer and intent consistent across variants. Make sure ads for both variants promise the same action (e.g., all “Request a demo” vs. mixing “Demo” and “Free trial”). | Isolates the landing page as the main difference so you can attribute performance changes to the page, not to different offers or funnel stages. | Use the same campaign, ad groups, and targeting, then test only the final URL via an experiment (see Step 3). |
| 3. Use Experiments to split traffic properly | Create a custom experiment from your existing campaign and have the treatment campaign mirror the control. In the treatment, change only the final URL to Page B and keep the split (often 50/50) stable for the full test window. | Lets Google Ads randomize traffic while keeping auctions, targeting, and budgets comparable, producing a true A/B split and cleaner results than manually duplicating campaigns or ad groups. | About custom experiments; Monitor your experiments; Experiments FAQs |
| 4. Use URL hygiene and stable tracking | Tag each variant with consistent parameters (for example, using a simple “variant” value) via the final URL suffix, and reserve tracking templates for third‑party tracking needs. Rely on parallel tracking so users go directly to the landing page while click measurement happens in the background. | Ensures you can segment performance by variant, avoids broken tracking, and keeps load time and routing behavior as similar as possible between pages so you’re testing content, not tracking quirks. | About tracking in Google Ads (tracking templates, custom parameters, final URL suffix, and parallel tracking) |
| 5. Handle Performance Max and URL expansion carefully | For Performance Max, decide whether to allow final URL expansion during the test. To keep a clean A/B test, either turn expansion off temporarily or use URL rules and exclusions so traffic is restricted to your test pages. | Prevents the system from sending some traffic to “random” site URLs, which would dilute your test and make it impossible to know how much traffic actually hit Page A vs. Page B. | About Final URL expansion in Performance Max |
| 6. Watch for routing and landing page quality issues | Check for redirects that send different devices or regions to different templates, consent flows that behave differently, or additional on‑site A/B tools that re‑randomize traffic after Google’s experiment split. Use the landing pages reporting area to monitor mobile friendliness and technical reliability for each variant. | Routing quirks and technical issues (like mobile‑unfriendly layouts or intermittent failures) can look like one page “losing” even when the message is stronger, because users never get a smooth experience. | Evaluate performance of landing pages (landing pages report, mobile friendly rate, AMP and diagnostics) |
| 7. Analyze experiment results and decide a winner | Use the experiment reporting view to compare KPIs, paying attention to confidence levels and confidence intervals, not just raw conversion counts. Exclude the initial “learning” days when automated bidding is still adapting, and be prepared to declare “no clear winner” if confidence is low or results are too similar. | Focuses on statistically meaningful lifts instead of day‑to‑day noise, and prevents overreacting to small, uncertain differences that may disappear when you roll out the change. | Monitor your experiments (experiment reporting, significance and intervals) |
| 8. Use a repeatable checklist | For every landing page A/B test, confirm: one primary KPI aligned with your bidding goal, one meaningful page change, a stable experiment‑based split, clean conversion and URL tracking, controlled routing (especially in Performance Max), and enough volume and runtime to reduce noise. | Turns landing page optimization into a repeatable engine—test, learn, ship the winner, then move on to the next highest‑impact hypothesis instead of running ad‑hoc, inconclusive tests. | Combine: Set up your conversions, About custom experiments, About tracking in Google Ads, and Evaluate performance of landing pages. |
If you’re A/B testing landing pages for ads, the biggest unlock is keeping the experiment “clean”: pick one primary conversion metric that matches your bidding goal, change only one meaningful page element at a time, split traffic with Google Ads Experiments (rather than duplicating campaigns), and keep tracking and URL parameters consistent so you can trust the readout—especially if you’re using Performance Max and need to control URL expansion. If you want help turning that checklist into repeatable work inside your account, Blobr connects to Google Ads and runs specialized AI agents that continuously spot landing page alignment and tracking issues, suggest high-impact page and keyword-to-page improvements, and package them as clear actions you can apply on your terms.