The following is an excerpt from Ignacio Cofone’s new book, “The Privacy Fallacy: Harm and Power in the Information Economy,” out now.


Anti-privacy Choice Design

In 2018, you may have noticed a spike in the number of corporate emails you received. Dozens of companies (some of which you didn’t know you had interacted with) contacted you to let you know that they had updated their privacy policies to better protect your privacy. The reality is that companies around the world had to adjust their privacy policies to comply with Europe’s GDPR, which had just come into force, and they had a legal mandate to notify you that they had done so. Your relationship with those companies, though, didn’t change much after those updates.

Anti-privacy design strategies present privacy notices and choices in ways that exploit people’s biases. They “nudge” us, mostly against our interests and for companies’ profits. Design techniques aren’t neutral: they can be used to influence our decisions. Real-world people navigating an uncertain data environment can be and are exploited by corporations’ design choices.

When dealing with real people in the information economy, design holds power. As the Norwegian Consumer Council puts it, corporations “nudge users away from privacy-friendly choices.” We disregard privacy settings and terms and conditions not because we’re lazy, but rather because service providers’ designs push us towards disadvantageous choices.

Examples of exploitative choice design aren’t difficult to find: they’re everywhere in our daily lives. Phone apps ask us if we’d like them to “connect” with Google or Instagram instead of creating a new account, which sounds quite nice – they don’t ask whether we want them to share our personal data with each other, which is what’s happening. YouTube used to tell its users who wanted to turn off personalized ads that doing so meant they would lose the ability to mute the ads, which is problematic in offices and disruptive elsewhere. Facebook asks you to consent to facial recognition because it can “help protect you from strangers using your photo” and “tell people with visual impairments who’s in a photo or video,” leaving out that facial recognition helps advertisers approach users based on perceived emotional states and identify them when doing so would be unwelcome. Digital products are deliberately designed to be addictive, from the type of font used to the frequency of popups. The information economy provides incentives to design them that way because the more we use these products, the more ads we see and the more information is gathered about us.

Designers routinely shape choice architectures to get people to accept terms that benefit corporations. For example, they can endorse one alternative, highlight the losses in the other, or minimize privacy concerns, priming people into opting in. Many companies use buttons of different sizes and colors for the options they want their users to click. Others try to influence us by exploiting our aversion to losses. JC Penney highlights the number of people who viewed a product in the last day to create a sense of urgency. If you try to deactivate an account with language-learning app Duolingo, it will tell you: “Hoooooold up – are you sure you want to do this? Duolingo teaches you how to live, love, and speak another language. If you leave you might just never be able to reach your full potential in life.”

The design of privacy notices influences people regardless of the information shared in those notices. User-friendly privacy notice designs inspire trust in people regardless of whether the data practices they describe are privacy-protective, as the notifications from 2018 did. People don’t just respond to design; they respond to design regardless of the content of notices.

One way in which anti-privacy choice architecture contributes to our inability to understand privacy settings is what economists call the “framing effect.” Framing privacy choices in convenient ways, such as labelling them “app settings” rather than “privacy settings,” increases the likelihood that people agree to data practices. Moreover, designs that aren’t user-friendly undermine people’s perceived control over their privacy. Studies comparing widely used privacy policy formats with best practices – such as short overviews of the privacy policies and explanations in understandable language – found that participants didn’t reliably understand the best-practice formats any better.

Anti-privacy design is often targeted. One technique, called “confirmshaming,” is to shame people into acting favorably towards a corporation. Think of it as exploiting your fear of missing out. For example, when we decline an email newsletter, some websites shame us into changing our choice with banners that read something along the lines of: “Don’t go! We’ll miss you!” Similarly, the “social proof” technique exploits the influence of our peers’ privacy behavior over ours by highlighting how many people chose the option that the corporation would like us to take. Social proof creates what behavioral scientists call a bandwagon effect. This effect operates when we do something after we see how many people around us are doing it, like when NBA teams gain fans as they progress through the season or Catholics become more religious when a pope of their nationality is elected. Targeted techniques combine an unprecedented knowledge of our decision-making vulnerabilities with an unprecedented ability to reach millions of individuals with messages adjusted for them. They press on our vulnerabilities to drive us into surveillance.

Targeted techniques undermine people’s choices in a legal context where their agreement is the ultimate protection mechanism. By normalizing surveillance, they even distort our expectations and preferences. They interfere not only with individual autonomy but also with the law’s effectiveness at protecting us, because the assumption that we can freely decide what consequences we’re willing to accept is essential to both.

At their worst, designs hypertarget the most vulnerable, such as those with gambling addictions, medical issues, teenage depression, or a pending divorce. Until a controversy erupted in 2017, for example, Facebook allowed advertisers to target millions of teenagers who appeared to be in psychologically vulnerable states, such as feeling “worthless,” “insecure,” and “defeated.” Hypertargeted designs steer vulnerable individuals towards surveillance. Corporations with unprecedented access to knowledge about people’s vulnerabilities can use those vulnerabilities to manipulate people at a low cost because they can do so at scale.

[…]

Dark Patterns

Dark patterns are the ultimate form of design manipulation. They’re strategies meant to manipulate people into agreeing to data practices through deception. Dark patterns interfere with people’s decision-making, making them act in the interests of online services and, potentially, to their own detriment. They go beyond nudging designs and surveillance by default. Interface designer Harry Brignull, who coined the term, defined them as strategies that are “carefully crafted to trick users into doing things […] with a solid understanding of human psychology, and they do not have the user’s interests in mind.” Empirical evidence finds them strikingly effective.

The strategies are varied. Some types of dark patterns, known as obstruction, are designed to trick people into picking a specific option by presenting options asymmetrically. For example, if you try to cancel a subscription with the Financial Times, you’ll find a preselected option to change the subscription to a different (sometimes more expensive) one; to find the “cancel subscription” button, you’d have to scroll to the bottom of the webpage, where it appears in a smaller font. The New York Times, for a while, made cancellation difficult: after subscribing easily online, subscribers had to call or use a chat service with long waiting times to cancel. Google assures people that they can easily delete their data in a dashboard, but makes the dashboard containing those options unnecessarily difficult to navigate and the options difficult to exercise.

Some types of dark patterns involve trick questions with intentional ambiguity. LinkedIn sometimes asks its users yes/no questions but, instead of allowing the user to answer “no,” the alternative to “yes” is “No, show me more.” Barclays, among many other companies, makes you “[t]ick the box if you don’t want marketing messages” (emphasis added). A related type of dark pattern involves hiding useful information, making it difficult for people to make choices that the designer disfavors, such as deleting a profile. Booking.com, for example, displays a greyed-out button to delete your account, but the actual deletion process requires you to check your email instead. Facebook’s website used to require several steps to refuse cookies, including clicking a button labelled “accept cookies.” Bumble’s explanation of how to delete your profile directs you to its “disable date mode” and “snooze” features instead. Obstruction, trick questions, and hidden information have been found to be highly effective ways for companies to manipulate their users.

Default settings or preselected options that benefit corporations turn into dark patterns when they’re hidden or deceptive – like some systems of tracking. If you registered with eBay using the “sign in with Google” feature, for example, you automatically opted into marketing emails. In Google’s ad settings, it’s impossible to know whether “ads personalization across the web” is turned on or off by default. In one experiment, preselection combined with having to click through another window more than doubled the number of people who ended up with the pro-corporation option.

Other tactics leverage deception to exploit our limited bandwidth to process information. Urgency tactics, such as false low-stock messages, soon-to-expire offers, or countdown timers, do this. To trick people into resubscribing, The New Yorker sends a pretend final demand letter that reads “statement of account: FINAL NOTICE,” but it is just a request to renew a subscription that would otherwise expire. “Nagging” dark patterns are repeated or invasive requests to act in the interests of the service provider. Duolingo’s push notifications are so insistent that they’ve been the object of parody.

Dark patterns, as these examples show, are widespread. You can find them throughout the information economy. One study identified them in 95 percent of the free Android apps in the Google Play Store.

The GDPR, and privacy laws outside the EU modeled after it, create a framework that attempts to limit manipulation practices such as dark patterns. The GDPR, for example, requires that services be designed with data protection by default, a requirement that encompasses the principles of transparency and data minimization. Arguably, most of the examples mentioned here breach these principles, as they purposefully obscure the options that benefit people and aim to collect data that the service doesn’t need to function.

These legal frameworks have been largely ineffective at protecting people from dark patterns. Most of the dark patterns I mentioned proliferated after the GDPR came into force. In one broad study of pop-up banners after the GDPR, only 11.8 percent of websites met the minimum requirements needed to avoid being considered as deploying dark patterns. If any of the dark patterns described here sound familiar from your online interactions with different corporations, then you already know that the law has been ineffective at preventing them.

Responding to these challenges, researchers propose a variety of legal strategies to address the consent problems dark patterns produce. Most of these strategies fit within the traditionalist paradigm. Some believe that more effective enforcement of existing laws is all we need. Traditionalist laws, however, are oblivious to manipulation because they rely on the myth of apathy. Others suggest antitrust legislation, arguing that a more competitive market would solve the problem. Antitrust proposals, though, don’t address the deep consent flaws involved in dark patterns because they rely on the fictitious idea that people behave rationally with full information and that more corporate alternatives would enable people to change the ecosystem. Some suggest using the contract law doctrine of undue influence. But that doctrine is more suitable for individual cases within bilateral contractual relationships than for systemic problems among numerous parties, some of which have never interacted before. Other proposals promisingly shift the focus toward extracontractual accountability. Some privacy researchers propose fiduciary duties, which require one party to act in the best interest of another. Those fiduciary duties would impose duties of care, confidentiality, and loyalty on corporations. In a later chapter, I argue that a properly executed duty not to exploit is enough.

Dark patterns aren’t the disease. They’re a symptom of law’s focus on individual control, which is ineffective in an environment plagued with manipulation tactics that undermine people’s autonomy by impairing real choices. Dark patterns’ pervasiveness poses difficult questions about the validity of individual agreements for the information economy overall. Navigating the necessary context-dependent nuances to distinguish valid from invalid agreements under dark patterns is a herculean task for any regulator or judge. And if no one can identify which agreements are invalid, how can we justify relying on them?

Excerpted from “The Privacy Fallacy: Harm and Power in the Information Economy” by Ignacio Cofone. Published in the United States by Cambridge University Press, November 2023. Copyright © 2023 by Ignacio Cofone. All rights reserved.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.