Three University of Chicago artificial intelligence experts reveal the trends and issues they’ll be watching in 2024.
Anjali Adukia, assistant professor and director of the Messages, Identity, and Inclusion in Education (MiiE) Lab
Much of the attention to artificial intelligence (AI) in education has focused on writing tools such as ChatGPT, a large language model that allows users, among other tasks, to “converse” with the software to create a document that meets specified requirements for length, format, style, and level of detail. Similarly, other AI systems can create images and videos from prompts. Needless to say, these tools have raised concerns among educators. In any case, it’s easy to imagine students surreptitiously using them to complete assignments, adding an extra layer of oversight for already overwhelmed teachers.
However, beyond individual assignments, AI has the potential to fundamentally influence how students learn and, ultimately, what they learn. Indeed, AI will almost certainly become an everyday part of education, which naturally raises important concerns and questions: over time, as they rely more and more on AI, how will students learn to write? More fundamentally, will they even need to learn to write, or to think for themselves? Or will writing and critical-thinking skills matter only for crafting better AI prompts? And will AI tools take over art and creativity, from the visual arts to music?
For me, introducing AI into the classroom is more than theoretical. In my classes, I have adapted assignments to incorporate AI while asking students to interrogate the resulting text: checking for errors, suggesting improvements, and generally critiquing the output for content, style, and the effectiveness of its message. To succeed at this task, students must understand the material, be able to think critically, and ultimately be able to express their own thoughts clearly, both in writing and orally. Certainly, AI can be a useful tool: it can help generate hypotheses, suggest ways to improve writing, and summarize texts. Used with care and caution, AI can thus complement education, improving our classrooms and workplaces and raising the bar for productivity and efficiency while freeing up time for more creative tasks.
That said, my relative optimism is greatly tempered by the nature of AI, which, as most people forget, relies on predicting human-like responses to prompts based on large amounts of past data. ChatGPT, for example, summarizes text by extracting its features and generating sentences that predict a plausible summary of the content. It has no true knowledge or understanding of context, however, and is therefore unreliable. Placing our schools, and our workplaces, in the grip of a backward-looking and contextually ignorant “intelligence” carries many risks, and its imitation of human creativity and expression can go only so far. Yes, AI is here to stay, but its adoption must happen on our terms. In 2024, educators will need to find productive ways to define and integrate the use of AI in the classroom, rather than banning it as forbidden fruit.
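To make this concrete, here is a deliberately tiny sketch of that prediction process: a toy bigram model, with an invented vocabulary and invented probabilities, nothing like ChatGPT’s actual transformer. Each next word is sampled purely from frequencies observed in past text; nothing in the loop represents meaning:

```python
import random

# Toy bigram "language model": the next word is sampled from frequencies
# observed in past text. The table below is invented for illustration.
BIGRAMS = {
    "the":       {"report": 0.6, "summary": 0.4},
    "report":    {"concludes": 0.7, "suggests": 0.3},
    "summary":   {"concludes": 0.5, "suggests": 0.5},
    "concludes": {"that": 1.0},
    "suggests":  {"that": 1.0},
    "that":      {"the": 1.0},
}

def generate(prompt, steps=6):
    """Autoregressively extend the prompt, one sampled word at a time."""
    tokens = prompt.lower().split()
    for _ in range(steps):
        dist = BIGRAMS.get(tokens[-1])
        if dist is None:  # no observed continuation: the model is stuck
            break
        words = list(dist)
        tokens.append(random.choices(words, weights=[dist[w] for w in words])[0])
    return " ".join(tokens)

print(generate("the"))  # e.g., "the report concludes that the summary"
```

Scaled up by many orders of magnitude, the fluency improves dramatically, but the underlying mechanism, prediction without understanding, is the same.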
James Evans, Max Palevsky Professor of Sociology and Director of the Knowledge Lab
OpenAI’s ChatGPT, built on the transformer architecture developed at Google, captivated the world in late November 2022, attracting over 100 million individual users within two months. Underpinning ChatGPT is GPT-4, a massive 1.7-trillion-parameter language model that took nearly 100 days and $100 million to train. A few competitors follow closely, notably Claude from Anthropic (backed by Amazon) and Gemini from DeepMind (wholly owned by Google). The immediate success of these models shifted the “Turing test” (Turing 1950), which evaluates an algorithm’s ability to impersonate human interaction, from unreasonable aspiration to baseline expectation. Since then, new AI services have emerged daily to automate and modify human tasks, ranging from computer programming and journalism to art, science, and invention. With this technology, AI is transforming routine and creative tasks alike and promises to change the future of work. These models have enabled automated translation of government documents, but they have also seamlessly merged fact and fiction and become mainstays of persuasion campaigns ranging from commercial advertising to political misinformation and disinformation. Increasingly, these models are used to control, execute, and synthesize the output of other tools, including databases, programming languages, and first-principles simulators.
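As a rough illustration of that orchestration pattern, here is a minimal sketch in Python. The tool names, the JSON “action” format, and the model_call stub are all hypothetical stand-ins, not any vendor’s actual API:

```python
import json

# Stub tools standing in for the kinds of systems a model might control:
# databases, programming languages, and first-principles simulators.
def query_database(sql):
    return f"[rows returned for: {sql}]"

def run_python(code):
    return f"[output of executing: {code}]"

def run_simulator(params):
    return f"[simulated trajectory for: {params}]"

TOOLS = {"sql": query_database, "python": run_python, "simulate": run_simulator}

def model_call(task):
    # Hypothetical stand-in for a real language-model call. In a deployed
    # agent, the model itself would choose the tool and its input,
    # returning them as structured text such as JSON.
    return json.dumps({"tool": "sql", "input": "SELECT COUNT(*) FROM grants"})

def agent_step(task):
    """One control-loop step: the model picks an action, code dispatches it."""
    action = json.loads(model_call(task))
    result = TOOLS[action["tool"]](action["input"])
    return result  # in practice, fed back to the model for the next turn

print(agent_step("How many grants were disbursed last year?"))
```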
The size and scale of these so-called “foundation models,” which can be fine-tuned for specific tasks, prohibit all but the largest tech companies from building them from scratch. This will likely produce a medium-term oligopoly in the AI services market: the stable dominance of a few leading models (think “Coke or Pepsi?”) that attract the lion’s share of attention, surrounded by countless small, specialized AI models at the edges. Over the coming year, we will see a wave of attempts to capitalize on these models, identify their “killer applications,” and integrate them into established business channels, such as online advertising. And as these and related models are increasingly used to make critical societal decisions that affect human well-being (e.g., legal judgments, recruitment, college admissions, research funding, grant disbursement, investment allocation), governments will intensify their exploration of AI regulation.
Increasingly, I expect governments to place checks and balances within a diverse ecology of AI algorithms. They will soon realize that we can control powerful commercial AIs only with other AIs designed adversarially to audit, regulate, discipline, and govern them. Finally, for society, we will see new services emerge quickly, such as intelligent search and natural-language-controlled automation. Powerful linguistic agents will also become much more persuasive, blurring the line between credible facts and reasoned opinions. Reducing the resulting flood of misinformation will likely require a coordinated effort among (1) companies’ initiatives to certify the information their AIs produce (e.g., as early search engines did effectively for pornography), (2) government policy requirements, and (3) a market for personalized information agents and adaptive filters, which enable personalized, and potentially protected and polarized, information environments.
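A minimal sketch of what such adversarial auditing might look like: a second model screens a generator’s output before release. Both functions here are hypothetical stubs; in practice, the auditor would itself be a trained model rather than a keyword filter:

```python
def generator(prompt):
    # Hypothetical stand-in for a powerful commercial model's reply.
    return "Model answer to: " + prompt

def auditor(text):
    # Hypothetical stand-in for an adversarial auditing model. A real one
    # would be trained to flag fabricated citations, policy violations,
    # or manipulative persuasion tactics, not just fixed phrases.
    red_flags = ["guaranteed cure", "secret evidence"]
    return [phrase for phrase in red_flags if phrase in text.lower()]

def governed_reply(prompt):
    """Release the generator's draft only if the auditor finds no problems."""
    draft = generator(prompt)
    problems = auditor(draft)
    if problems:
        return f"Withheld pending review: {problems}"
    return draft

print(governed_reply("Summarize this drug trial."))
```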
Aziz Huq, Frank and Bernice J. Greenberg Professor of Law
At least since the invention of the backpropagation algorithm in the 1990s, modern AI and the data flows it relies on have been only loosely regulated; the result was an efflorescence of start-ups, so-called killer apps, and unicorns. The state, of course, has never been entirely absent, acting as funder, client, and silent partner. But as a regulator, it was most notable for its silence.
This impression has been somewhat misleading for some time now, and 2024 will reveal that regulation is already here, growing thicker and more consequential. This is so even though the most controversial efforts, those forcing AI developers to account for induced harms such as bias and privacy invasions, have yet to have much practical effect. Those efforts may also prove the least consequential, for if the regulatory landscape really is changing, it is because of subtler tectonic shifts.
On the surface, the regulatory landscape is increasingly sharply divided across jurisdictions. In December 2023, the European Parliament and the Council of the European Union reached a provisional agreement on a comprehensive AI law. In 2021, 2022, and 2023, the Chinese government released separate tranches of AI regulations targeting distinct issues, such as generative AI, quietly but steadily building a comprehensive approach. In the United States, President Biden’s executive order on AI mostly regulates the government directly, but it could begin to ripple through the private economy via the federal government’s procurement clout. At the same time, a polarized Congress makes comprehensive legislation highly unlikely.
Yet beneath the surface, the United States remains a highly influential global regulator; it simply uses more subterranean tools. At the center of these measures are the 2022 and 2023 export controls on semiconductors, and on the equipment needed to make them, aimed largely at China. China, of course, has responded in kind. As this trade war escalates, perhaps even under a new Trump administration, the markets for the inputs on which AI is built will come under increasing strain. And unlike with direct regulation, there will be no legal recourse or workarounds to soften the blow.
Indirectly, if not directly, the era of entrepreneurs doing whatever they please is coming to an end.
Articles represent the opinions of the authors, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.