How and why you should pause paid search (for science)

I won’t stand on this soapbox for long, but while I’m here, I want to take a second to advocate for something counterintuitive coming from someone with the title of Sr. Analyst, Paid Search: you should pause paid search. 

Not outright. Not all at once. But with the right structure and parameters, pausing paid search can be an invaluable source of data for us as PPC professionals. 

The incrementality question

When it comes to demonstrating return, I’d wager most of us measure some variation of total revenue over total cost. If it’s high, we’re doing well. If not, we rethink and rework until we hit the ROAS we want.

That’s not inherently bad. It’s a quickfire estimate of efficacy, and I find myself doing the same thing. The problem is that number doesn’t tell us much. It doesn’t tell us if we actually lost money buying users who would have converted anyway. And it doesn’t show what might have happened if we simply didn’t spend.

Admittedly, brand spend usually gets the heat here. As an industry, we have a gut feeling that branded search doesn’t bring in many new users. But we should take that same fine-toothed comb to non-brand search, which can drive significantly lower returns than we might guess.

To really answer those questions, we need to conduct an A/B incrementality test.

How to structure an incrementality test  

To start, I’m sure many of us know what an A/B test is, but let’s review. 

An A/B test is an experiment in which we take two groups, a control group and a test group, and measure the change in a behavior or outcome as a result of a change in a variable. 

With any good test, it’s important to outline the test parameters. For our purposes, we should try to think about the following questions:

  • How long are you running this test?
  • What are you comparing, and what are you measuring?
  • What method will you use to analyze the results?
  • What attribution model will you use to measure paid search impact?

The first question is yours to answer. We recommend a test that spans at least a few months to capture some potential seasonal fluctuations. The second question is also pretty straightforward. The goal of this experiment is to measure two things:

  1. When we pause branded ad spend, how much does traffic to our organic listings change?
  2. When we pause paid search spend (branded or non-branded), what is the impact on topline revenue or leads?

When it comes to our method of analysis, there’s a little more nuance, but we’ll talk about our options there in a few paragraphs. And on the attribution model, the important part is to keep it consistent across your tests. Choose one, and stick with it.

Once you have all that outlined, there are two big things you want to do before you begin your test.

1. Set up your test and control groups

To measure the impact of paid search, the variable we’re changing is spend. To measure that effectively, we need two groups: a test and a control. Rather than sort our groups at the user level, we can create these groups with geographic data.

The methodology outlined in Tadelis et al. offers a great example of how to set this up with Nielsen DMA regions to ensure our two groups consist of regions with similar sales & seasonal trends. 

Begin by choosing a proportion of your geographic regions for testing. The paper uses a subset of 30%, but this number is up to you and your comfort with the risks of pausing paid search.

We want to measure the impact of our spend on topline revenue. To do that, it’s doubly important to account for seasonal variations in performance across geographic regions. There are a few quick-and-dirty ways to get a sense of seasonality in your data, and many SEO seasonality tools can be adapted for PPC purposes. If you’re curious, it’s also worth exploring some of the more technical, but still approachable, methods for parsing out seasonality.
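If you want to eyeball seasonality yourself, here’s a minimal sketch using Python and statsmodels. The file name and column names (date, region, revenue) are placeholders for whatever your own exports look like, not a prescribed format.

    # Rough seasonality check with pandas + statsmodels (illustrative only).
    # Assumes a CSV of weekly revenue with columns: date, region, revenue.
    import pandas as pd
    from statsmodels.tsa.seasonal import seasonal_decompose

    df = pd.read_csv("weekly_revenue_by_region.csv", parse_dates=["date"])

    # Inspect one region at a time; loop over regions to cover them all.
    one_region = (
        df[df["region"] == "Minneapolis-St. Paul"]
        .set_index("date")
        .sort_index()
    )

    # Weekly data with yearly seasonality -> period of 52.
    # Needs at least two full years of history to decompose cleanly.
    decomposition = seasonal_decompose(one_region["revenue"], model="additive", period=52)
    decomposition.plot()  # plots the trend, seasonal, and residual components

The seasonal component is the piece you’ll want to compare across regions when you pair them in the next step.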

Next, within your subset of geographic regions, sort them based on sales data and seasonality and pair up the most similar regions. Assign one region from each pair to each group, so that the two groups look more or less the same and are easy to compare. Those are your two groups.
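As a rough illustration of that pairing step, the sketch below assumes you’ve already rolled your data up to one row per DMA with an annual_sales figure and some kind of seasonality_index (both hypothetical column names).

    # Illustrative pairing of regions into matched test and control groups.
    # Assumes one row per DMA with columns: region, annual_sales, seasonality_index.
    import pandas as pd

    regions = pd.read_csv("dma_summary.csv")

    # Sort so that neighboring rows are the most similar regions.
    regions = (
        regions.sort_values(["annual_sales", "seasonality_index"])
        .reset_index(drop=True)
    )

    # Pair adjacent rows, then send one member of each pair to test, the other to control.
    # (If you have an odd number of regions, drop one or park it in control.)
    regions["pair_id"] = regions.index // 2
    regions["group"] = regions.groupby("pair_id").cumcount().map({0: "test", 1: "control"})

    # Sanity check: the two groups' totals should land close together.
    print(regions.groupby("group")["annual_sales"].sum())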

2. Double, triple, and quadruple check measurement 

This is critical. If you can’t trust your website’s measurement, then all of your results here (and, bluntly, all of your digital efforts) come with an asterisk: misfiring tags and misreported revenue numbers.

Before you start any testing, we recommend conducting a tracking audit of your website. This will keep bad data from muddying your results and make sure you have a clear understanding of how advertising traffic, engagement, and revenue are measured.

3. Start the test

Once you’ve got your groups split and you’re confident in measurement, you’re ready to test. Roll out the pause in your specified regions and start collecting data. Pay special attention to:

  • Brand ad spend
  • Brand ad clicks & impressions 
  • Branded organic traffic 
  • Total sitewide conversions
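For reference, a daily log along these lines is usually enough to feed the analysis later. The column names below are placeholders, and the two rows are made-up values just to show the shape.

    # Hypothetical shape of the daily log collected during the test.
    import pandas as pd

    log = pd.DataFrame(
        {
            "date": pd.to_datetime(["2024-03-01", "2024-03-01"]),
            "region": ["Example Test DMA", "Example Control DMA"],
            "group": ["test", "control"],  # test = spend paused, control = business as usual
            "brand_spend": [0.0, 800.0],
            "brand_clicks": [0, 1100],
            "brand_impressions": [0, 15000],
            "branded_organic_sessions": [2300, 1900],
            "total_conversions": [150, 130],
        }
    )
    print(log)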

4. Analyze your findings

After the test has run its course, we’re on to the fun part. The structure of this test lets us deploy a difference-in-differences (DiD) analysis, which compares how an outcome changes in the test group versus the control group to estimate the effect of the change itself.

While I won’t walk through the specifics of the DiD analysis in this article (or why every marketer should use it more often), I will provide some resources to help you conduct it, including articles that walk through examples of the process.
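To make the mechanics concrete in the meantime, here’s a minimal sketch of a DiD regression in Python with statsmodels. It isn’t the Colab notebook mentioned below, and the file and column names (group, period, total_conversions) are assumptions about how you’ve logged the data.

    # Minimal difference-in-differences sketch (illustrative, not the notebook itself).
    # Assumes a region-day table with columns: total_conversions, group, period,
    # where group is "test"/"control" and period is "pre"/"post" relative to the pause.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("did_panel.csv")
    df["treated"] = (df["group"] == "test").astype(int)  # 1 = regions where spend was paused
    df["post"] = (df["period"] == "post").astype(int)    # 1 = dates after the pause began

    # The coefficient on treated:post is the DiD estimate of the pause's effect
    # on conversions, net of baseline group differences and shared time trends.
    model = smf.ols("total_conversions ~ treated + post + treated:post", data=df).fit()
    print(model.summary())

If the treated:post coefficient is small or statistically indistinguishable from zero, that’s your evidence the paused spend wasn’t driving much incremental volume.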

I’ve also created a Google Colab notebook for you to use to run this analysis yourself. It contains instructions and more information about the output of the test. Find that notebook and make a copy here.

Once you have that, you can follow the instructions in the doc, and you’ll be good to go!

Pause paid search for science

A lot of the work that goes into building a paid search program is focused on the strategy. We research, forecast, plan, launch, and test new tactics. But the broader question remains: once we have our program up-and-running, how do we show that the results are really helping the clients we serve?

Deliberately pausing paid search to measure its impact is, in my view, a vital part of any engagement. It’s a check-in, another test we should conduct alongside regular search term reports. It lets us keep incrementality top of mind, and it ensures we are making the most of budgets and effort.

That said, it’s also important to note that the test and analysis outlined here are not the only ways to chip away at this question. They’re a starting point, and there are many variables these models don’t account for, as discussed in the Google Colab notebook. But hopefully this approach offers a good first step toward incorporating deeper, more structured testing into our PPC strategies.


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.


About The Author

John is a Senior Paid Media Analyst at Uproer, where he works to build paid search strategies for clients in the e-commerce and SaaS spaces. He’s drawn to the ideas, channels, tactics, and emerging trends that tackle big issues in marketing. And he approaches SEM with a focus on data privacy, incrementality, and social impact. When he’s not knee-deep in a spreadsheet, John volunteers with local climate organizations and helps spread their message through search.
