The idiosyncrasies of the American approach to regulation have left the world’s largest economy ill-equipped to protect consumers and guide firms when it comes to policy issues surrounding digital platforms. The privacy and data protection subcommittee of the Stigler Center’s Digital Platforms Project proposes three complementary approaches to protecting privacy and security interests.


On May 15-16, the Stigler Center will host its third annual antitrust and competition conference. Titled “Digital Platforms, Markets and Democracy: A Path Forward,” the conference will bring together dozens of top scholars, policymakers, journalists, and entrepreneurs.

During the 2018 conference, a consensus emerged that the political and economic issues raised by the market power of tech platforms must be addressed. To provide independent expertise on the appropriate policy responses, the Stigler Center formed the Committee for the Study of Digital Platforms. The committee is composed of four specialized subcommittees: the economy and market structure; privacy and data protection; the media; and the political system. Each subcommittee comprises a chair and specialists from different fields (economics, law, data science, media, public policy, political science, venture capital, etc.). The committee’s ultimate goal is to produce independent white papers that will inform decision-making and policymaking.

During the conference, the four subcommittees will discuss their initial conclusions. In preparation, we will publish the executive summary of each preliminary report. Find them all here.


Any serious effort to analyze the emerging policy issues surrounding digital platforms must contend with a host of privacy and security challenges. Core aspects of the American framework, such as faith in industry self-regulation, the embrace of a sectoral approach involving overlapping state and federal laws, heavy reliance on notice and choice, and the lack of remedial powers for the nation’s primary privacy and security regulator, leave the world’s largest economy ill-equipped to protect consumers and guide firms. The idiosyncrasies of the American approach also impede efforts to harmonize global privacy law, threatening the free flow of data in international e-commerce.

There are several reasons why neither industry self-regulation nor notice and choice approaches to protecting privacy and security are satisfactory. Firms that collect personal information do not internalize all the harms associated with privacy and security breaches, nor do they have adequate incentives to consider how their choices affect the interests of consumers who are not their customers. These problems are compounded by the highly technical nature of privacy and security decisions, the difficulty of establishing clear causal links between particular practices and subsequent consumer harms, and the obstacles to evaluating firms’ key investments in data security. Notice and choice fails because it mainly produces a lengthy wall of legalese text that may be read by regulators, but will be neither read nor understood by consumers themselves. Consumer consent under these circumstances is fictitious.

There are a number of existing reform proposals in the privacy and security domain, some quite laudable. Our subcommittee’s approach is not to provide a roadmap for comprehensive national privacy legislation. Rather, we have focused our attention on developing three complementary approaches to protect privacy and security interests.

The first of our three proposals advocates the use of data-driven contractual default rules in privacy and security. Default rules are starting points for contractual relations. For example, they govern the scope of data collection and the retention of a consumer’s personal information by a platform. By definition, any default rule can be modified by the mutual agreement of the contracting parties. We can contrast these default rules with mandatory rules, which provide contractual rights and responsibilities that cannot be altered by the parties. While there is an important role for mandatory rules to play in instances where consumer preferences are very homogeneous, where collective action problems arise, or where significant information asymmetries exist, as with highly technical and complex issues, default rules are a vital tool in the regulatory arsenal. Default rules are particularly important when consumers have heterogeneous preferences, when consumers or firms possess relevant information to which regulators do not have ready access, and when substantial externalities are not present in a transaction.

We propose that the content of contractual default provisions depend on the articulated beliefs of ordinary consumers as measured by rigorous survey instruments. Based on our own pilot testing of such instruments involving the privacy and security practices of Facebook, Google, Amazon, and other platforms, consumers say they often prefer (but do not as frequently expect) default provisions that enhance their privacy and security. In privacy and security settings there will be many instances in which it is appropriate for the law to use “consumertarian” default rules—i.e., the legal defaults preferred or expected by a majority of consumers. While these default rules will not always reflect the preferences of platforms and other firms that are contracting with consumers, these firms will have the opportunity to convince their customers that waiving such protections will make them better off. In that sense, consumertarian default rules will function as information-forcing devices that reduce unexpected surprises, encourage dialogue, and deter problematic privacy and security practices. Because many default protections will be sticky under these circumstances, and firms will have incentives to be selective about which rights they ask consumers to waive, the result of our proposal will be to heighten privacy and security protections for consumers. This proposal for consumertarian default rules is superficially similar to the Privacy-by-Default regime that was enacted as part of the European General Data Protection Regulation (GDPR), though we believe our approach better accommodates the importance of context and heterogeneous consumer preferences.

Under our approach, a waiver of the protections granted to consumers by default would be valid if it was narrow (aimed at waiving one right rather than a large aggregation of rights), knowing, and secured in a non-manipulative manner. This brings us to our second reform proposal, which provides a yardstick for differentiating manipulative “dark patterns” from legitimate efforts by businesses to persuade consumers.

Dark patterns are user interfaces that make it difficult for users to express their actual preferences or that nudge users into taking actions that do not comport with their preferences or expectations. Examples of dark patterns abound in privacy and security. Some impose asymmetric transaction costs in an effort to prompt consumers to select the option that firms prefer while maintaining a semblance of consumer choice. For example, Ticketmaster’s smartphone app prompts consumers with a query about whether they will enable push notifications. The only options available are “OK” and “Maybe Later.” If the consumer selects “OK,” the app will stop asking. But if the consumer selects “Maybe Later” then the app will ask again before too long, and again after that, until the consumer relents. To take another example, many pieces of software are designed to make it easy to alter default settings in a way that diminishes consumer privacy but hard to enable privacy protective settings. Firms may employ intentionally ambiguous terminology in an effort to confuse consumers into opting for a service they do not want, or they may manipulate consumers by targeting acute emotional vulnerabilities. One attribute all these dark patterns share is a tendency to exploit “System 1” (quick, instinctive) decision-making and suppress more deliberative “System 2” thought processes.
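To make the asymmetry concrete, here is a minimal, purely illustrative sketch in Python. The function name and flow are hypothetical (this is not drawn from any actual app’s code); it simply shows why accepting is a one-tap terminal state while declining only defers the prompt, so the cost of refusing accumulates until the user relents.

```python
# Hypothetical sketch of an asymmetric-transaction-cost prompt.
# "OK" is a one-tap terminal state; "Maybe Later" only postpones the question.
def run_notification_prompts(responses):
    """Simulate repeated prompting given a sequence of user responses."""
    for attempt, response in enumerate(responses, start=1):
        if response == "OK":
            return f"Push notifications enabled after {attempt} prompt(s)."
        # "Maybe Later" records nothing permanent; the prompt will reappear.
    return "No permanent refusal recorded; the prompt will reappear later."

print(run_notification_prompts(["Maybe Later", "Maybe Later", "OK"]))
```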


Dark patterns have become prevalent in recent years. Academics and non-profit organizations have documented the spread of dark patterns, but no published research examines the efficacy of these dark patterns in prompting consumers to make choices that are inconsistent with what consumers would select in a neutral choice architecture. Our report fills a significant gap in the literature, thanks to an extensive data collection effort by Jamie Luguri and Lior Strahilevitz. Those authors, after obtaining IRB approval, exposed a census-weighted, nationally representative sample of 1,762 Americans to a decision-making framework in which the control group was offered an easy yes / no choice over whether to sign up for an expensive identity theft protection plan, an experimental group was subjected to relatively mild dark-pattern interventions, and a second experimental group was subjected to the aggressive use of dark patterns. Both dark pattern conditions were designed to prompt consumers to agree to pay for an identity theft protection plan that few members of the control group wanted.

The bottom-line results from this dark pattern experiment were striking. Employing mild dark patterns more than doubled the percentage of consumers who ultimately agreed to accept the data protection plan, raising it to roughly 228 percent of the control-group rate (from roughly 11 percent to 26 percent of subjects). Employing aggressive dark patterns raised acceptance to roughly 371 percent of the control-group rate (from approximately 11 percent to 42 percent). In other words, in both the mild and aggressive dark pattern environments, it was more likely than not that consumers were agreeing to sign up for the plan the academic researchers were purportedly selling them because of the dark pattern, and not because of an underlying desire to purchase the plan itself.

Notably, the experiment’s use of aggressive dark patterns generated a customer backlash. Consumers in the aggressive dark pattern condition had their moods adversely affected, and consumers in all dark pattern conditions were less likely to agree to participate in follow-up research by the same researchers. These data indicate that market forces constrain the use of dark patterns somewhat. On the other hand, relatively mild dark patterns generated either no such effects or weaker effects, depending on the metric. In short, firms face significant incentives to avoid using the most blatant and annoying dark pattern strategies, but the use of subtler dark patterns seems to be mostly upside for corporate bottom lines. These mild strategies seem to substantially increase the percentage of consumers who will sign up for a good or service without alienating too many consumers, at least in the short run.

Two other headline findings emerged from the new research. First, the better educated the consumers were, the less vulnerable they were to having their choices manipulated via dark patterns. These effects were statistically significant and highly troubling. Dark patterns work on many people, but lower socio-economic status individuals are especially vulnerable to them. Second, varying the costs of the service consumers were being sold made no difference to consumers’ propensity to agree to purchase the data protection plan. Consumers who were told the plan would cost either $2.99 per month or $8.99 per month after a 6-month free trial accepted the program at statistically indistinguishable rates.

In light of these findings, our report advocates a per se legal rule that will apply to many situations involving the use of dark patterns to prompt consumers to share personal information. Wherever a firm’s choice architecture more than doubles the percentage of users who agree to an exchange when compared with a neutral choice architecture, consumers’ consent is not valid: when acceptance more than doubles, a majority of those who agreed did so because of the dark pattern rather than because of any underlying preference. Moreover, dark pattern tactics that satisfy this “more likely than not” test should be treated as unfair and deceptive practices in trade, which are prohibited by federal and state consumer protection laws. Still, this approach may be under-inclusive: there may be settings in which a dark pattern does not satisfy the per se test but is still problematic. To deal with those situations, multi-factor balancing tests that identify the dark patterns most likely to diminish consumer well-being are appropriate. Relevant considerations include the extent to which a dark pattern raises the transaction costs of opting out of unpopular settings, the extent to which the dark pattern targets problematic consumer vulnerabilities, and the extent to which a dark pattern is hidden rather than transparent.
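To make the arithmetic behind the per se test explicit, here is a minimal sketch (the function names are hypothetical and not language from the report) that applies the “more than doubles” threshold to acceptance rates. The rates used below are the approximate figures from the experiment described above.

```python
# Minimal sketch of the proposed "more likely than not" per se test:
# consent is suspect when the dark pattern more than doubles acceptance,
# because then a majority of accepters would not have agreed under a
# neutral choice architecture.
def fails_per_se_test(neutral_rate, dark_pattern_rate):
    """Return True if the dark pattern more than doubles the acceptance rate."""
    return dark_pattern_rate > 2 * neutral_rate

def share_attributable_to_dark_pattern(neutral_rate, dark_pattern_rate):
    """Fraction of accepters who agreed only because of the dark pattern."""
    return (dark_pattern_rate - neutral_rate) / dark_pattern_rate

# Approximate rates from the experiment: ~11% neutral, ~26% mild, ~42% aggressive.
for label, rate in [("mild", 0.26), ("aggressive", 0.42)]:
    print(label,
          fails_per_se_test(0.11, rate),
          round(share_attributable_to_dark_pattern(0.11, rate), 2))
```

Run on these figures, both conditions fail the test: roughly 58 percent of accepters in the mild condition and 74 percent in the aggressive condition would not have agreed under the neutral design.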

Our final proposal focuses on mitigating certain security threats caused by data breaches. A single data breach at one platform or digital service can present major problems for other platforms and services. The reason stems from password reuse: consumers frequently use identical or very similar login credentials at multiple sites. As a result, hackers may obtain credentials from one site and then quickly try those same pilfered credentials at various other sites or platforms, a technique known as credential stuffing. The main way that firms currently try to protect themselves is by purchasing stolen credentials from the hackers themselves, which creates perverse incentives.

Among the various reforms to be considered, private data breach clearinghouses are preferable given existing technological constraints. Ideally, such clearinghouses could use techniques like private set membership testing. This approach, which is in some ways similar to Google’s recently released Password Checkup extension, uses cryptographic protocols to test whether user passwords are repeated across multiple websites without disclosing the login credentials themselves. Ideally, firms would be required to contribute their own breach data to the clearinghouse as a condition of querying it, a reciprocal arrangement that would promote the comprehensiveness of the database. The clearinghouse proposal would encounter some challenges, ranging from the paramount need to protect the clearinghouse as a single point of failure to the technical difficulty of using private set membership testing to reveal instances where very similar but non-identical passwords are being reused at multiple sites. Still, the subcommittee has concluded that, on balance, such an approach is superior to viable alternatives, such as those modeled after the Cybersecurity Information Sharing Act of 2015 (CISA). A CISA-style information sharing regime could raise privacy concerns for consumers, and CISA’s success has been hampered by its weak incentives to share information on computer security threats.
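For intuition, the sketch below shows a deliberately simplified hash-prefix (“k-anonymity”) lookup of the kind popularized by breached-credential checking services. It is not the private set membership protocol the report contemplates (real systems such as Password Checkup add cryptographic blinding so that neither party learns the other’s data), and the class and function names here are hypothetical.

```python
# Simplified, hypothetical sketch of a breached-credential clearinghouse query.
# The client sends only a short hash prefix; the final match happens locally,
# so the clearinghouse never learns the exact credential being checked.
import hashlib

class BreachClearinghouse:
    """Hypothetical clearinghouse storing hashes of breached username:password pairs."""
    def __init__(self, breached_credentials):
        self._hashes = {hashlib.sha256(c.encode()).hexdigest()
                        for c in breached_credentials}

    def query(self, prefix):
        # Return every stored hash beginning with the prefix (a coarse bucket).
        return {h for h in self._hashes if h.startswith(prefix)}

def credential_breached(clearinghouse, username, password, prefix_len=5):
    full_hash = hashlib.sha256(f"{username}:{password}".encode()).hexdigest()
    candidates = clearinghouse.query(full_hash[:prefix_len])
    return full_hash in candidates  # the exact comparison stays on the client

# Usage: a platform checks a login attempt before accepting it.
ch = BreachClearinghouse({"alice@example.com:hunter2"})
print(credential_breached(ch, "alice@example.com", "hunter2"))    # True
print(credential_breached(ch, "alice@example.com", "S3cure!pw"))  # False
```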

Subcommittee on Privacy and Data Protection:

  • Chair: Lior Strahilevitz, Sidley Austin Professor of Law, University of Chicago
  • Lorrie Cranor, Director and Bosch Distinguished Professor in Security and Privacy Technologies, CyLab Privacy and Security Institute, and FORE Systems Professor, Computer Science and Engineering & Public Policy, Carnegie Mellon University
  • Florencia Marotta-Wurgler, Professor of Law, New York University School of Law
  • Jonathan Mayer, Assistant Professor of Computer Science and Public Affairs, Princeton University
  • Paul Ohm, Professor of Law and Associate Dean for Academic Affairs, Georgetown University Law Center
  • Katherine Strandburg, Alfred B. Engelberg Professor of Law, New York University School of Law
  • Blase Ur, Neubauer Family Assistant Professor of Computer Science, University of Chicago

DISCLAIMER: The purpose of these preliminary reports is to identify the new challenges that digital platforms pose to the economic and political structure of our countries, and to identify the set of possible tools that might address these challenges. There is potential disagreement among the members of the committees about which of these problems is most troubling, which tools might work best, whether some tools will work at all, or even whether the damage a tool might produce is larger than the problem it is trying to fix. Not all committee members agree with all the findings or proposals contained in this report. The purpose of these preliminary reports, therefore, is not to provide a unanimous, definitive list of policy fixes but to identify conceptual problems and solutions and to start an academic discussion from which robust policy recommendations can eventually be drafted.

For more on this, check out the accompanying episode of the Capitalisn’t podcast.

The ProMarket blog is dedicated to discussing how competition tends to be subverted by special interests. The posts represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty. For more information, please visit ProMarket Blog Policy.