Four artificial intelligence experts at the University of Chicago discuss the AI trends and issues they’ll be watching for in 2024.


Anjali Adukia, Assistant Professor and Director of the Messages, Identity, and Inclusion in Education (MiiE) Lab

Much of the focus on artificial intelligence (AI) in education has centered on writing tools like ChatGPT, a large language model that enables users to, among other tasks, “converse” with the software to create a document that meets requirements for length, format, style, and level of detail. Similarly, other AI systems can create images and videos based on prompts. Needless to say, these tools have raised concerns among educators. In both cases, it is easy to imagine students surreptitiously using these tools to complete assignments, adding another layer of scrutiny for already overstretched teachers.

However, beyond one-off assignments, AI has the potential to fundamentally affect how students learn and, ultimately, what they learn. Indeed, AI will almost certainly become an everyday part of education, which naturally raises concerns and important questions: Over time, as they rely more and more on AI, how will students learn to write? More fundamentally, will they even need to learn to write or think for themselves? Or will writing and critical thinking skills only apply to constructing better AI prompts? Further, will AI tools take over art and creativity, from the visual arts to music?

For me, the introduction of AI into the classroom is more than theoretical. In my classes, I have adapted assignments to incorporate AI, while also requiring students to interrogate the resulting text by checking for errors, suggesting improvements, and generally critiquing the piece on content, style, and effective messaging. For students to succeed at this task, they must understand the material, be able to think critically, and ultimately be able to express their own thoughts clearly in writing and speech. Certainly, AI can be a useful tool: it can help with hypothesis generation, suggest ways to improve writing, and summarize texts. So, used with care and caution, AI can complement education while enhancing our learning and workplaces, raising the bar on productivity and efficiency while leaving time to focus on more creative tasks.

That said, my relative optimism is greatly tempered by the nature of AI, which, as most people forget, works by predicting human-like responses to prompts from large amounts of past data. ChatGPT, for example, summarizes text by extracting its features and generating phrases that predict a plausible summary of the content. However, it has no true knowledge or true understanding of context, and therefore is not reliable. To put our schools—and workplaces—in thrall to a backward-looking, contextually clueless “intelligence” system is fraught with risks. Moreover, it can only take creativity and human expression so far. Yes, AI is here to stay, but it must be on our terms. In 2024, educators will need to find productive ways to define and incorporate the use of AI in the classroom, rather than prohibiting it as forbidden fruit.

Anthony J. Casey, Donald M. Ephraim Professor of Law and Economics, Faculty Director, The Center on Law and Finance

The public and professional perception of machine learning and AI technologies has undergone a sea change in the last two years. While some have been predicting a world of advanced automated content creation for years, mainstream recognition of this inevitability seems to have arrived only with the popularization of ChatGPT. This awakening of the public will have dramatic effects on the practice of law.

Before 2023, lawyers, judges, and legal scholars would naively (but confidently) tell you that machine learning cannot write arguments and briefs. At best, they would argue, it can help you with legal research. The high-profile “hallucinated” legal research cases of 2023 have flipped the dialogue. Lawyers who did not understand the technology of large language models used them to draft briefs in which the legal citations turned out to be false.

Some observers have taken the wrong lesson from this. They think this proves that automated technology can’t do what lawyers do. In fact, the lesson should be the opposite. The technology succeeded at the high-level writing, which the uninformed observers thought to be impossible. It failed at the workaday task of research. The status quo cannot last. Law firms are already designing (have already designed!) better technology that can add accurate automated legal research to the mix.

As a result, the near future of AI and law is one of automated lawyering and machine-learning technology that can advise judges. Questions of self-driving contracts and robot judges must be addressed, as these technologies will transform every aspect of legal practice. And once they are deployed to resolve private disputes, they will no doubt change the way public law is created and “consumed.” As Anthony Niblett and I have been saying for almost a decade, the nature of law as we know it is changing. The fundamental challenge for today’s lawmakers is to understand how the purpose of law can be translated into a meaningful objective that can be automated. If this challenge is met, the improvement in legal content will be immense. If it is ignored, hallucinated legal research will be the least of our worries.

James Evans, Max Palevsky Professor in Sociology and Director of the Knowledge Lab

OpenAI’s ChatGPT, built atop Google’s transformer architecture for machine learning, captivated the world at the end of November 2022, attracting more than 100 million individual users within two months. Underlying ChatGPT is GPT-4, a massive language model, reportedly with some 1.7 trillion parameters, that took nearly 100 days and roughly $100 million to train. A few competitors are close behind, including Anthropic’s Claude (backed by Amazon) and DeepMind’s Gemini (wholly owned by Google). The overnight success of these models has shifted the “Turing test” (Turing 1950), which assesses an algorithm’s ability to successfully impersonate human interaction, from unreasonable aspiration to baseline expectation. Since then, new AI services have emerged daily to automate and alter human tasks ranging from computer programming to journalism to art, science, and invention. With this technology, AI is transforming both routine and creative tasks and promises to change the unfolding future of work. These models have enabled automated translation of government documents, but they also seamlessly merge fact and fiction, and they have become linchpins in persuasion campaigns ranging from commercial advertisements to political mis- and disinformation initiatives. Increasingly, these models are used to control, execute, and synthesize outputs from other tools, including databases, programming languages, and first-principles simulators.

The size and scale of these so-called “foundation models,” which can be fine-tuned for specific tasks, prohibit all but the largest technology companies from building them from the ground up. This will likely result in a medium-term oligopoly in the AI services market: a few dominant models (e.g., “Coke or Pepsi?”) attracting the lion’s share of attention, with innumerable small, special-purpose AI models at the periphery. In the coming year, we will witness a flurry of attempts to capitalize on these models, to identify their “killer apps,” and to integrate them within established commercial pipelines, such as online advertising. Insofar as these and related models will increasingly be used to make critical societal decisions that affect human welfare (e.g., judicial determinations, job hiring, college admissions, research investments, grant disbursement, investment allocations), governments will intensify their exploration of AI regulation.

Increasingly, I expect that we will see governments build checks and balances into a diverse ecology of AI algorithms. They will soon realize that we can only control powerful commercial AIs with other AIs designed adversarially to audit, regulate, discipline, and govern them. Finally, for society, we will see new services, like intelligent search and natural-language-controlled automation, emerging rapidly. Powerful language agents are also becoming much more persuasive, and so blur the boundary between credible facts and motivated opinions. Thinning the information thicket that results will likely require a coordinated effort between (1) company initiatives to certify the information their AIs manufacture (as early search engines did effectively for pornography), (2) government policy requirements, and (3) new markets for personalized information agents and adaptive filters that enable customized, and potentially sheltered and polarized, information environments.

Aziz Huq, Frank and Bernice J. Greenberg Professor of Law

At least since the widespread adoption of the backpropagation algorithm in the 1990s, modern AI and the circulation of data upon which it rests have been only loosely regulated. The result has been an efflorescence of start-ups, putative killer apps, and unicorns. The state, of course, has never been wholly absent—as a funder, a client, and a silent partner. But as a regulator it has been notable chiefly for its silence.

This impression has been somewhat misleading for a while now, and 2024 will reveal that regulation is already here and is getting thicker and more consequential. This is so even if the most hotly debated efforts to force AI developers to account for spillover harms, such as bias and privacy invasions, have yet to have much practical effect. Those efforts are also perhaps the least consequential: if the regulatory landscape truly changes, it will be because of subtler, tectonic shifts.

On the surface, the regulatory landscape is becoming sharply divided by jurisdiction. In December 2023, the European Union’s parliament and council reached a provisional agreement on a sweeping AI Act. In 2021, 2022, and 2023, the Chinese government issued separate tranches of AI regulation targeting discrete issues, such as generative AI, while accreting, with quiet but unrelenting confidence, a comprehensive approach. In the United States, Biden’s executive order on AI largely regulates the government directly, but it may start to have ripple effects on the private economy thanks to the federal government’s heft in procurement. At the same time, Congressional polarization makes any comprehensive regulation highly unlikely.

Yet under the surface, the United States remains a highly influential global regulator—simply using more subterranean tools. Central to these are the 2022 and 2023 export controls on semiconductors and the equipment necessary to make them, largely aimed at China. China, of course, has responded in kind. As this trade war intensifies—perhaps crescendoing under a new Trump administration—the markets for the raw materials of AI will come under increasing strain. And unlike direct regulation, there will be no legal challenges or workarounds to soften the blow.

Indirectly, if not directly, the era of the do-as-thou-wilt entrepreneur is coming to a close.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.