
Are We Tumbling Toward an Adults-Only Internet?

Meg Leta Jones, Mac Milin Kiran, and Cal Newport argue that the introduction of age verification mechanisms in proposed child online safety legislation may unintentionally result in an adults-only internet, as platforms opt to deny access to children rather than implement complex compliance measures. This potential outcome necessitates a fundamental debate about the intended audience of the internet and the balance between protecting children and preserving their rights to access and expression in the digital realm.

Last month, Florida Governor Ron DeSantis signed into law H.B. 3, “Social Media Use for Minors,” banning social media accounts for those under 14 and requiring parental consent for 14- and 15-year-olds. The Florida law is at the vanguard of a growing wave of proposed federal and state legislative efforts aimed at reducing the harms associated with kids spending time online, including the Kids Online Safety Act (KOSA), which has gathered more than sixty backers in the United States Senate; California’s Age-Appropriate Design Code Act, which is currently blocked by a federal court; and the forthcoming updated Children’s Online Privacy Protection Rule (COPPA 2.0), crafted to significantly strengthen the protections of the original 1998 law.

We at the Center for Digital Ethics at Georgetown University have been cataloging these legislative developments since the summer of 2023, and have watched with interest as these proposed laws have generated furious debate, and in some cases court challenges, surrounding their potential impact on privacy and First Amendment rights. This ongoing discussion is justified. Attempts to regulate technology, from the Hays Code that governed motion pictures through the mid-twentieth century to the proposed AI moratoriums of our current moment, have often generated concerns about unintended consequences and overreach. However, we have come to believe that these contemporary conversations about child online safety legislation, in their focus on small-scale details of implementation, are missing a potentially much larger consequence, one that bears on the fundamental question of whom the internet is intended to serve.

To explain our concern, it’s useful first to take a closer look at what these current bills are proposing. This new surge of child online safety legislation largely relies on varied combinations of three main policy tools: age-based prohibitions; parental consent and control; and basic data protection principles, such as clear disclosures, impact assessments, and privacy-preserving default settings. The details of these tools, both technical and legal, are complicated. Critical to our examination, however, is their shared requirement that online service providers distinguish clearly and accurately between adults and children. Under the new Florida law, for instance, social media platforms will need to verify the age of users anonymously through a third-party service. The Utah “Minor Protection in Social Media Act” requires social media platforms to develop age verification systems that are 95% accurate. The California Age-Appropriate Design Code Act requires, more generally, that sites and services estimate the age of users. For simplicity, we can refer to this general need to distinguish between adults and children as age verification.

Proponents of child online safety laws fall into two main camps: those who want to use age verification to make the internet safer for kids, and those who want to use it to support parents’ ability to keep their kids away from material they deem unsafe. After signing the previously mentioned Utah law, which focuses on parental control, Governor Spencer Cox explained in a Meet the Press appearance, “We’re going to look back ten years from now and say ‘What did we do?’ We destroyed a generation with this stuff.” His state’s new law requires social media companies to create a host of supervisory tools, obtain parental consent for children under 18, and provide parents visibility into posts and messages on their children’s accounts.

Meanwhile, in California, Assemblymember Buffy Wicks celebrated the enactment of the California Age-Appropriate Design Code (CAADC), a canonical example of the safer internet model, saying, “California is leading the way in making the digital world safe for American children, becoming the first state in the nation to require tech companies to install guardrails on their apps and websites for users under 18.” The CAADC prohibits collecting children’s location data and using personal information in ways materially detrimental to a child’s well-being; requires platforms to create impact assessments and ensure all defaults are set to private; and, in contrast to Utah, requires that kids see obvious signals when the platform allows a parent to monitor their accounts.

Many opponents of these proposed strategies have focused on the specific mechanisms that might be used to implement the age verification they all require. Requiring proof of age, some critics claim, could significantly invade privacy and lead to increased surveillance. For example, the Electronic Frontier Foundation highlights the risk that age verification requirements, which compel users to upload identity documents and biometric information, will undermine anonymity. The libertarian think tank R Street warns that whenever users want to post content critical of the government or ask intimate questions about relationships or health, they’ll have to proffer their government IDs, or accept that “the internet’s going to have to scan your face.” Privacy advocates like S.T.O.P. warn more generally that well-intentioned bills requiring tech companies to identify underage users will heighten online surveillance, creating harms that outweigh the benefits of their original protective intent.

Other opponents have focused instead on potential First Amendment concerns surrounding age verification-driven controls. One strain of this criticism notes that the content-related components of these new laws could unduly limit users’ freedom of speech by pressuring platforms to engage in overly cautious moderation practices to avoid legal liability, potentially stifling a wide range of lawful expression under the guise of protecting children. NetChoice, a trade association that represents major tech companies (including Meta, Amazon, and TikTok), has been actively challenging child safety bills in several states, including Florida and Texas, arguing that they infringe on First Amendment rights and could lead to a form of government control over online speech and business that it considers unconstitutional.

Another related strain of this critique argues that children themselves have a First Amendment right to expression that is violated by blanket age-related bans. At the moment, federal judges in Ohio, Arkansas, and California have temporarily blocked children’s online protection legislation, partly on these grounds, arguing that the government needs to apply more narrowly tailored solutions to the problem at hand. “Foreclosing minors under sixteen from accessing all content on websites that the Act purports to cover, absent affirmative parental consent, is a breathtakingly blunt instrument for reducing social media’s harm to children,” wrote District Judge Algenon L. Marbley, in his recent order to temporarily block a parental consent law in Ohio. 

Despite this fierce opposition, it still seems as likely as not that significant child online safety legislation will eventually be enacted and enforced. Opponents characterize age verification mechanisms as burdensome, inaccurate, and invasive, but from a technical perspective, acceptable age verification mechanisms are not hard to imagine. Such verification can be built directly into phone, tablet, and computer operating systems. One could imagine, for example, verifying your age once for the iOS operating system on your phone, and then later, when accessing an age-restricted service, having your phone deliver a simple binary signal verifying that the current user is an adult; a scheme that provides no additional identifying information to the third party. In such a scenario, there would be no need to transmit government documents or biometric data to participate in online spaces. (Indeed, the recent Florida law specifically requires “anonymous age verification” by third parties of this type.)
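To make this concrete, here is a minimal sketch of how such a challenge-response scheme could work, under our own assumptions: Ed25519 signatures from the widely used Python `cryptography` package stand in for hardware-backed attestation, and every name in the sketch is hypothetical. A real deployment would anchor trust in the OS vendor’s certified attestation keys rather than a raw per-device key; this is an illustration of the privacy property, not a shipping API.

```python
# Sketch of "anonymous age verification": after a one-time age check, the
# device answers each service's challenge with a signed binary adult claim
# and nothing else. Hypothetical design; requires the `cryptography` package.
import json
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# One-time setup on the device: age is verified out-of-band (e.g., when the
# OS account is created) and a signing key is provisioned. Services only
# ever see the public key.
_device_key = Ed25519PrivateKey.generate()
DEVICE_PUBLIC_KEY = _device_key.public_key()
_USER_IS_ADULT = True  # recorded once; no document is ever transmitted


def attest(challenge: bytes) -> bytes:
    """Sign the service's challenge plus a single boolean claim.

    No name, birthdate, ID scan, or biometric leaves the device.
    """
    claim = json.dumps({"adult": _USER_IS_ADULT}).encode()
    return _device_key.sign(challenge + claim)


def service_side_check(challenge: bytes, signature: bytes) -> bool:
    """The platform verifies the adult claim without learning identity."""
    expected = json.dumps({"adult": True}).encode()
    try:
        DEVICE_PUBLIC_KEY.verify(signature, challenge + expected)
        return True
    except InvalidSignature:
        return False


challenge = secrets.token_bytes(16)  # fresh nonce prevents token replay
print(service_side_check(challenge, attest(challenge)))  # True, nothing more
```

The point of the design is that the service learns exactly one bit, “this user is an adult,” plus proof that the answer is fresh; nothing in the exchange identifies the person behind the device.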

Similarly, the First Amendment challenges to these laws are far from settled. Some of the existing temporary injunctions are based on the judges’ concern that the proposed laws would not pass strict scrutiny, the most demanding standard for evaluating whether the government can restrict a protected right. But it’s unclear whether such scrutiny will ultimately be applied. (Indeed, the original COPPA law from 1998, which effectively prevents children under 13 from accessing many online services, is still being implemented by the Federal Trade Commission despite decades of court challenges along such lines.) A closer look at some of these existing injunctions reveals that much of the courts’ ire regarding these initial bills has as much to do with a perceived vagueness or sloppiness in their construction (a fair critique) as with any fundamental issue.

Other injunctions reflect skepticism about the nature and cause of the problems, as well as interpretations of protected speech so broad as to render it virtually impossible to regulate any type of data processing, impact assessment, or disclosure. Over the next year, these cases will make their way through the circuit court system, and there are reasons to believe the appellate courts will be more lenient toward such legislation. Last month, for example, the Fifth Circuit Court of Appeals upheld a 2023 Texas law that requires age verification for pornography websites. In doing so, it did not apply strict scrutiny, did not find the age verification options particularly burdensome, and did not question the legislature’s assessment of the problem.

It’s here that we return to our original overlooked concern: the real issue at stake if laws of this type are eventually enacted and enforced is not found in the specific details of what these laws require, but in their general introduction of age verification into the internet ecosystem. Once major content platforms are forced to verify the age of their users and, in some non-trivial manner, treat adults differently than children to avoid regulatory penalty, a much blunter alternative course of action arises: voluntarily choosing to deny access to kids. Properly implementing parental consent is complicated, as is strict content moderation or the requirement to avoid addictive design features and privacy-violating settings for young users. It might be easier for a platform to simply require an adult signal from a user to access its content, thus avoiding altogether the difficulties of providing a “safe” alternative for younger users; the sketch below illustrates the asymmetry. Here we really are encountering a classic example of unintended consequences at the intersection of technology and society. As we fight over the minutiae of making online spaces safer for kids, we are, without even realizing it, laying the foundation to wall them out of these spaces altogether.
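To make that compliance calculus concrete, here is a purely hypothetical sketch, ours and not any platform’s or statute’s, of the two paths a request handler could take once the adult signal from the previous sketch exists. Every function name below is invented for illustration.

```python
# Hypothetical compliance calculus: the "deny minors" branch is one line,
# while a lawful under-18 experience is a stack of hard obligations.
# All names are our own invention, not any real platform API.

def verify_parental_consent(request): ...    # Utah-style consent flow
def apply_strict_moderation(request): ...    # KOSA-style duty of care
def disable_addictive_features(request): ... # no autoplay, streaks, etc.
def set_private_defaults(request): ...       # CAADC-style default settings


def serve_minor(request: dict) -> str:
    """What a compliant under-18 experience would require; each stub above
    is a substantial engineering, legal, and verification project."""
    verify_parental_consent(request)
    apply_strict_moderation(request)
    disable_addictive_features(request)
    set_private_defaults(request)
    return "age-appropriate content"


def serve(request: dict) -> str:
    if request["attested_adult"]:  # the binary signal described earlier
        return "full content"
    # The blunt shortcut this essay warns about: never call serve_minor.
    return "denied: adults only"


print(serve({"attested_adult": True}))   # "full content"
print(serve({"attested_adult": False}))  # "denied: adults only"
```

Nothing in the proposed laws mandates the second branch; it simply becomes the cheapest way to be compliant once age verification is universal.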

We underscore this possibility of a voluntarily implemented adults-only internet neither to warn nor to celebrate, but instead to surface what should be the central debate at the moment about kids and the internet. If standardized age verification might accidentally create an adults-only internet, we must figure out now, before we stumble further down this path, whether this is a desirable outcome.

For many, the obvious answer to this question is “no.” Writing recently in the New York Times, David French summarized the popular free speech response to this issue: “When you regulate access to social media, you’re regulating access to speech, and the First Amendment binds the government to protect the free-speech rights of children as well as adults.” From this perspective, online services like social media have become so intertwined with modern discourse that to lose access to them is a fundamental denial of rights. We must preserve kids’ access to the full internet, in all its glory and terror.

But not everyone sees the platforms targeted by these laws as necessary for the practice of free expression. They can also be considered optional for-profit products that generate well-documented harms to young users. Indeed, a research-driven case for restricting large swaths of internet activity until after puberty is gaining both cultural and scientific traction. Vivek Murthy, the U.S. Surgeon General, has proposed on multiple occasions that parents should wait until their kids are sixteen before providing them unrestricted access to the internet through a smartphone. Meanwhile, NYU social psychologist Jonathan Haidt’s new book, The Anxious Generation, has been dominating the bestseller lists and public debate with similar recommendations. From this perspective, services like social media join the short list of content types and substances that we deem inappropriate for the still-developing brains of children.

For the many who might embrace a future in which kids have limited access to services like social media, simply staying the current regulatory course might be sufficient to achieve this vision. For those who instead want a more family-friendly internet, or more parental control, while maintaining the ability of younger users to access most types of online services, a major course correction is required: one that moves regulatory action away from age verification mechanisms and back toward improving digital literacy and increasing user pressure on platforms to offer more moderation and control settings.

Deciding between these two courses should be the primary debate on this issue. Before we get too lost in the intricacies of biometrics and strict scrutiny, in other words, we must first answer the fundamental question: who is the internet for? If we don’t answer this question for ourselves, it might soon be answered for us.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.
