Confessions of a fed-up ad fraud researcher: ‘Prevention is always behind’

This article is part of our Confessions series, in which we trade anonymity for candor to get an unvarnished look at the people, processes and problems inside the industry.

The mechanics of ad fraud are widely discussed but hard to understand. For the latest in our anonymous Confessions series, we talked to a fraud researcher who does work for brands, agencies and ad tech platforms. The researcher said agencies face perverse incentives when picking verification firms and often have a poor understanding of statistical sampling.

Here are excerpts, edited for clarity.

Content-recommendation widgets like Taboola and Outbrain have been getting a lot of criticism since the presidential election for their role in funding misleading and extreme content. Do these widgets also have a role in funding fraud?
Anything on a cost-per-click basis is a very dangerous model, and these widgets pay publishers who put them on their sites by the click. The widgets drive more revenue on sites with high click-through rates, and sites with high CTRs are likely to have fraud. So it all creates an environment that supports fraud.

GroupM’s chief digital officer Rob Norman recently wrote that the impact of ad fraud is often overestimated. What did you think of this?
It was an attempt to calm the frantic concern on the client side, because everywhere else clients read, ad fraud is portrayed as a huge and growing problem.

But how do you know whose numbers to believe? Sure, agencies and verification vendors want to tell clients that there is little fraud, but anti-fraud consultants have motivation to make fraud seem like a gigantic problem they can help fix.
Fraud lives in the shadows and does everything it can to not be counted, so estimates are just estimates. It is healthy that we have a wide range of estimates because it shows people’s different perspectives. But what has remained constant is that prevention is always one step behind.

Since fraudsters are always finding new ways to trip the verification services, are these vendors actually effective?
For sure. But they only help point out where the problem might be. Their clients still need to do their own digging and continually remove fraud from their supply chain.

A lot of verification vendors remain. By now, shouldn’t advertisers have figured out which ones are the most effective?
Clients don’t always have incentive to pick the most accurate vendors. A lot of times agencies do a bake-off. They will run tests with two vendors, compare results and pick the vendor that reported the lowest percentage of fraud.

What’s so bad about that?
The agency is just picking the number they are comfortable with. Let’s say one vendor reports 80 percent fraud and another reports 20 percent. Many agencies will pick the vendor that reports 20 percent because they don’t want a bad number to get out to clients. But the smart ones focused on the long term will pick the vendor with a higher percentage so that they can figure out what’s going on.

Aside from using vendors, what else can advertisers do to lower their exposure to fraud?
Ask basic questions. Ask whoever you are buying from how they get their audience. If they can keep delivering users at a fixed cost per acquisition, there is probably fraud going on, because when you are acquiring a real audience, you can’t know for sure what the click-through rate will be ahead of time. Ask the websites what proportion of their traffic is paid. A lot of fraudulent publishers and their shady ad networks will back away the more you question them.

That seems pretty simple. Aren’t advertisers already doing this vetting?
The industry is so busy executing buys that buyers assume their vendors are doing the screening. It is just a matter of finding the time to look under the hood periodically.

What do you think is the biggest thing people get wrong about fraud research?
The sampling. When advertisers talk about fraud detection, how often do they look at how much of their campaign is really being scanned by their vendors?

What do you mean?
A lot of vendors will scan just 10 percent of an advertiser’s impressions and say that’s a big enough sample size. But a sample size only supports the claimed confidence interval if the sample is representative of the population, and advertisers don’t know whether what the vendor is collecting is actually representative of the entire campaign.
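To make the sampling point concrete, here is a minimal, hypothetical sketch in Python. The campaign size, the two inventory pools and their fraud rates (2 percent and 30 percent), and the skew toward premium placements are all invented for illustration; they are not figures from the researcher. The sketch only shows why a 10 percent sample that is not representative of the whole buy can badly understate the true fraud rate.

```python
import random

random.seed(0)

# Hypothetical campaign: 1,000,000 impressions across two inventory pools.
# Assume fraud is concentrated in long-tail inventory (30% fraudulent)
# while premium inventory is mostly clean (2% fraudulent).
impressions = (
    [("premium", random.random() < 0.02) for _ in range(600_000)]
    + [("long_tail", random.random() < 0.30) for _ in range(400_000)]
)

true_rate = sum(fraud for _, fraud in impressions) / len(impressions)

# Representative 10% sample: every impression is equally likely to be scanned.
uniform_sample = random.sample(impressions, 100_000)
uniform_rate = sum(f for _, f in uniform_sample) / len(uniform_sample)

# Skewed 10% sample: the scanner mostly sees premium placements
# (e.g. because its measurement tag only fires on certain inventory).
premium = [i for i in impressions if i[0] == "premium"]
long_tail = [i for i in impressions if i[0] == "long_tail"]
skewed_sample = random.sample(premium, 90_000) + random.sample(long_tail, 10_000)
skewed_rate = sum(f for _, f in skewed_sample) / len(skewed_sample)

print(f"true fraud rate:            {true_rate:.1%}")
print(f"representative 10% sample:  {uniform_rate:.1%}")
print(f"skewed 10% sample:          {skewed_rate:.1%}")
```

Under these made-up assumptions, the representative sample lands close to the true rate (roughly 13 percent), while the skewed sample reports something closer to 5 percent, even though both scan the same share of impressions. That is the gap between sample size and sample representativeness the researcher is pointing at.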
