If personalized advertising is banned, who bears the cost?

Earlier this month, a bill was introduced in both the House and the Senate that would effectively prevent digital ads from being targeted with any data other than contextual signals or user location. The Banning Surveillance Advertising Act (BSAA) was sponsored by Representative Anna G. Eshoo (D-CA) and co-sponsored by Representative Jan Schakowsky (D-IL) and Senator Cory Booker (D-NJ), and, if passed, it would forcefully and dramatically upend the digital advertising industry. Support for the bill was expressed by the Electronic Privacy Information Center (EPIC), the Anti-Defamation League, Common Sense Media, and Accountable Tech, which last year filed a petition with the FTC to “prohibit the anticompetitive practice of surveillance advertising.”

The term ‘surveillance advertising’ is dysfunctionally vague and unhelpful, as I detail in this article (the title of which is admittedly designed to induce clicks). To my mind, ‘surveillance advertising’ describes the first-party / third-party feedback loop of the hub-and-spoke model for advertising targeting that I describe here, but the BSAA goes further than merely disrupting the flow of user data across contexts: the bill would prevent any behavioral or demographic data from being used to target ads to users, allowing only contextual and geographical targeting (and the bill’s text places some restrictions even on the use of geographical targeting).

The BSAA bill is a quick read at just 20 pages, but as a broad overview:

  • The bill prevents ad networks and ad platforms (what the bill terms ‘advertising facilitators’) from targeting ads to users, or from knowingly allowing advertisers to target ads to users through (1) the provision of user or device lists, (2) the contact information of users, (3) unique identifiers that might be connected to individuals or devices, or (4) other personal information that could identify specific users;
  • The bill prevents advertisers from using any data to target ads to users that was purchased from or otherwise obtained from a third party, or that identifies the user as being a member of a protected class, which the bill defines as: “actual or perceived race, color, ethnicity, national origin, religion, sex (including sexual orientation and gender identity or gender expression), familial status, or disability of an individual or group of individuals”;
  • Notably, the bill allows advertisers to use data they have collected directly from users to target ads, so long as the advertiser provides a written attestation to the ad network or ad platform that the data being used was not purchased from a third party and that it doesn’t identify an individual as a member of a protected class;
  • The bill empowers the FTC and states’ attorneys general to enforce its provisions, and it includes a private right of action that allows individuals to seek damages in cases of violation.

Two days after the bill was introduced, the European Parliament voted to approve the initial draft of the Digital Services Act (DSA), which would similarly restrict the scope and use cases of targeting for digital advertising. Among other restrictions, the DSA prevents ad platforms from using sensitive information to target ads to individuals, and it requires that users be given the opportunity to opt out of ads personalization.

The lack of a consent condition is my primary objection to the BSAA bill: my belief is that any legislation related to the general governance of ads targeting should be predicated first on user consent. If ads targeting is to be regulated, those regulations should allow users to determine for themselves whether they want to surrender data to ads platforms and other intermediaries for the sake of improving the relevancy of the ads to which they are exposed. On that point, my protestations against Apple’s App Tracking Transparency (ATT) privacy policy have focused predominantly on the fact that the loaded language used in the ATT opt-in prompt — especially in contrast to the opt-in prompt that Apple uses for the ads personalization mechanism for its own ad network — ultimately robs users of the opportunity to make an informed choice about their data.

But beyond a philosophical disagreement with the way that user consent is treated, the BSAA bill raises a larger question:

Who pays for the damage that deteriorated digital ad targeting would inflict on the broader economy?

One of the more surreal, reality-interrogating experiences that any advertising practitioner will face is colliding with what I call the digital advertising fear complex: the belief that all digital advertising is fraudulent, and that all ads targeting machinery is deceptive smoke-and-mirrors that delivers no economic benefit to advertisers. This belief sits in the subtext of the BSAA sponsors’ statements in support of this kind of regulation. The general sentiment is that ad tech provides little or no value and merely serves to harvest personal data and erode user privacy. So why not regulate it into oblivion?

Of the bill, Representative Eshoo said (emphasis mine):

I’m proud to partner with Senator Booker and Congresswoman Schakowsky on legislation to ban this toxic business model that causes irreparable harm to consumers, businesses, and our democracy.

From the perspective of a disciple of the “all digital advertising is fraud” school, granular and user-centric forms of ad targeting are the purest forms of rent-seeking because they provide no value to an advertising campaign beyond what contextual targeting allows. And if targeted advertising is seen as nothing but a harm and a privacy tax on consumers — benefiting only ad platforms — then banning it outright, instead of requiring user consent, is good policy. But that view is not the consensus across the digital advertising landscape.

Certainly, the largest ad platforms don’t feel this way. Meta (née Facebook) fairly infamously published a study shortly after ATT was announced in which it declared that its personalized ads targeting technology contributes 50% of the CPM value of the traffic it serves through Facebook Audience Network, its third-party ad network. Criteo, the French ad tech behemoth, cut its 2018 revenue forecast by 20% in response to Apple’s announced rollout of Intelligent Tracking Prevention (ITP), the privacy framework that restricts cross-site tracking in the Safari browser. On the day that Google announced it would delay its own rollout of third-party cookie deprecation from 2022 to 2023, ad tech stocks rallied aggressively: The Trade Desk’s stock price jumped by 16%, and Criteo’s stock closed up 12%.

Some might note that these reactions and apocalyptic premonitions don’t refute the idea that “all advertising is fraud,” given that they are offered up by the ad tech industry. That’s a fair point. But it’s not just the ad tech industry that feels this way. The UK’s Competition and Markets Authority (CMA) published the final report of its Online platforms and digital advertising market study in July 2020; the report estimates that online publishers could see a revenue impairment of roughly 70% if third-party cookies are deprecated. From the report:

The evidence suggests that the user data used for targeting digital advertising is highly valuable to advertisers and publishers. For example, Google ran a trial in 2019 to compare the revenue publishers received from personalised advertising with revenue from non-personalised ads. Our analysis of the results suggests that UK publishers earned around 70% less revenue when they were unable to sell personalised advertising but competed with others who could.
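To make the magnitude of that estimate concrete, here is a rough back-of-envelope sketch. The impression volume and CPM figures below are hypothetical and are not drawn from the CMA report; only the ~70% haircut comes from the Google trial cited above.

```python
# Back-of-envelope sketch of publisher ad revenue with and without personalized
# targeting. All inputs are hypothetical illustrations; only the ~70% haircut
# reflects the Google trial cited in the CMA report.

monthly_impressions = 50_000_000  # hypothetical monthly ad impressions
personalized_cpm = 2.00           # hypothetical CPM, in dollars per 1,000 impressions
personalization_haircut = 0.70    # ~70% revenue decline per the CMA-cited trial

contextual_only_cpm = personalized_cpm * (1 - personalization_haircut)

def revenue(impressions: int, cpm: float) -> float:
    """Revenue in dollars, with inventory priced per thousand impressions (CPM)."""
    return impressions / 1_000 * cpm

personalized_revenue = revenue(monthly_impressions, personalized_cpm)
contextual_revenue = revenue(monthly_impressions, contextual_only_cpm)

print(f"Personalized:    ${personalized_revenue:,.0f} per month")
print(f"Contextual only: ${contextual_revenue:,.0f} per month")
print(f"Shortfall:       ${personalized_revenue - contextual_revenue:,.0f} per month")
```

Whatever the true percentage turns out to be, the arithmetic is linear: if inventory reprices at contextual-only rates, that shortfall has to be absorbed somewhere.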

A group of online publishers recently submitted a complaint to the EU’s competition commissioner, Margrethe Vestager, requesting that the EU intervene in Google’s planned deprecation of third-party cookies. These publishers argue that the deprecation of third-party cookies in Google’s Chrome browser would severely impair their businesses and privilege Google’s own ad network (note that third-party cookies are already blocked in the Safari and Firefox browsers). These publishers are not advocating in the interests of ad tech; the complaint represents nothing but the interests of the publishers that authored it. Clearly, these publishers believe that personalized advertising produces real economic benefit for them.

I believe there are two preconditions for any productive discussion about regulating personalized advertising:

First, a recognition of the privacy / utility tradeoff: the consumer experience can be improved through personalization in any form, including advertising personalization. Acknowledging the benefits to consumers of relevant advertising doesn’t necessitate a full embrace of ads personalization. It’s entirely possible that a majority of consumers would refuse to share their behavioral data with advertising platforms in exchange for more relevant ads. I believe that a consent mechanic, unpolluted by loaded and leading language, allows consumers to make informed choices about their privacy and the use of their data in ads personalization.

And second, a recognition of the economic impact of banning ads personalization. The pain of a ban on ads personalization will not be borne exclusively by ad tech companies. Publishers and advertisers alike will face an adverse operating environment if ads personalization is banned, and by logical extension, so will consumers. The framing of the BSAA presents a ban on ads personalization as totally cost-free from the consumer’s perspective, but that’s not the case. If large publishers lose ads revenue, they’ll need to either charge more for their content, allow themselves to be acquired by much larger publishers (to form Content Fortresses), or shutter. And advertisers will face similar choices if they can no longer efficiently reach relevant audiences. Note again that none of this is to say that personalized advertising must be preserved for these reasons. But a sensible decision on the matter cannot be reached without recognizing this economic tradeoff.