Higher education and AI, a September 2024 update
Bringing together many threads detected by the scanner
How might AI impact the future of higher education?
That is the foundational question for this newsletter. I’ve been researching and writing around it, teasing out emergent AI implications in other parts of the human experience - economics, politics, culture, etc. - and today wanted to circle back to the original point.
Let me draw out a series of stories and trends which have appeared in the past month or so. We’ll consider student usage, the cheating problem, research, investment, teaching about AI, and relevant jobs.
Student usage
A Digital Education Council study finds that most students use AI at least to some degree. A headline number is 86% of students engaging with generative AI, especially ChatGPT, with Grammarly and Microsoft Copilot in a second tier. (Interestingly, Quizlet claims a similar number, 82%.)
I was fascinated by their breakdown of current use cases:
Actual writing is just one quarter. Reading assistance - summarizing, paraphrasing - is a bit more popular, which suggests we should pay at least as much attention to those uses as to writing. I’m especially struck by search taking the lead by far - DEC’s headline for this slide, “Is Gen AI the new Google?”, might be an apt one.
DEC’s finding on what kinds of AI functions students prefer is also fascinating:
Chatbots take the lead, followed by various functions which make the college experience more convenient and smoother.
Now, this is self-reporting, which means we should take results with a grain of salt.
Cheating via AI
The Wall Street Journal broke a story that OpenAI has a tool for detecting ChatGPT-generated text, but hasn’t activated it yet. The tool relies on a watermark embedded in all output from that chatbot. Why not throw the switch? Some discussion held that using such watermarks to detect cheating would unfairly hit non-native English speakers (and writers). OpenAI also worried about misuse of the tool, including by hackers.
Yet a few weeks later OpenAI changed course, backing a California bill, the California Provenance, Authenticity and Watermarking Standards Act (AB 3211), which would require AI firms to watermark their output. Europe’s AI Act (which we already wrote about) also requires such technology.
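How would such a watermark even work? OpenAI has not published the details. For the curious, here is a minimal, hypothetical sketch of the "green list" approach common in the research literature, which is probably close in spirit: generation quietly favors tokens from a pseudorandom list, and a detector later checks whether a suspiciously large share of tokens landed on those lists. The hashing scheme, the green-list fraction, and the threshold below are all illustrative assumptions, not OpenAI's method.

```python
# Hypothetical sketch of "green list" text watermark detection.
# Not OpenAI's method; all parameters are illustrative assumptions.
import hashlib
import math

GREEN_FRACTION = 0.5  # assumed share of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to a green list seeded by `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < GREEN_FRACTION * 256

def watermark_z_score(tokens: list[str]) -> float:
    """How far the observed green-token count sits above the unwatermarked baseline."""
    n = len(tokens) - 1
    if n < 1:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    spread = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / spread

# A generator that nudges sampling toward green tokens leaves a fingerprint:
# watermarked text scores well above zero, ordinary human prose near it.
```

The point of the sketch is that detection is probabilistic, and paraphrase or translation weakens the signal, which is partly why using it to police cheating is contested.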
Meanwhile, Turnitin released a version of its very flawed AI detector for Spanish-language texts.
Wiley published a survey of students and their attitudes towards AI. A rough consensus emerged that generative AI enables cheating and that such cheating will likely increase. I was struck by a gap (albeit self-reported) between how many instructors and how many students say they use AI in class:
The cheating problem is worse than we thought, if this study is correct. The authors tested several hundred instructors to see how well they could detect AI-generated prose. Generally, the teachers could not reliably determine authorship. Worse, they tended to be confident that they could.
Let’s look back at that DEC study. If about one quarter of AI-using students have it write first drafts, and 86% of students use AI, that comes to roughly 21% of students overall (0.25 × 0.86 ≈ 0.21). Does that mean about one fifth of students are now admitting to having ChatGPT and its peers write papers for them? Are 21% of students openly declaring that they use AI to cheat?
One entertaining instructor response to the AI cheating wave involves Batman.
Research
What role might AI play in aiding or creating academic research? We’re already seeing signs of researchers using generative AI in their work. For example, an MIT team used the tech to analyze complex engineering problems, aiming to reduce testing time.
Going further, could we use AI to create research ideas? A recent study asked human judges to compare computer science ideas generated by humans and AI. Judges ranked the AI ideas more highly.
There are some interesting caveats in the study, including the question of whether the human participants were performing at their peak and the differences among the kinds of ideas compared. Yet I would add this to the growing pile of examples of AI exceeding people.
Meanwhile, the National Science Foundation (NSF) gave $20 million to a University of Chicago project intending to use AI to anticipate new research directions. As far as I can tell the plan is to work with a corpus of recent scientific research, then use large language models (LLMs) to point to next steps:
By mapping funded research proposals, scientific papers, and their resulting patents and products, the team plans to build models that are “chronological.” These time-aware models will hopefully allow researchers to predict or recognize disruptive advances the moment they occur. These models will also identify how such discoveries and inventions change the landscape to reveal new, follow-on opportunities. Policymakers could then use this data to guide funding and talent toward these potential discoveries.
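The “chronological” framing is easier to picture with a toy example: hold out everything after a cutoff year and ask which topics surge afterward relative to the earlier record. The sketch below does this with bare keyword counts; the Chicago team plans to do it with LLMs over proposals, papers, and patents, so treat the data shape, the smoothing constant, and the threshold as illustrative assumptions.

```python
# Toy illustration of a time-aware ("chronological") corpus split:
# which topics surge after the cutoff year, relative to the record before it?
from collections import Counter

# Hypothetical corpus: (year, topic keywords per paper)
papers = [
    (2018, {"graph neural networks", "protein folding"}),
    (2019, {"protein folding", "language models"}),
    (2020, {"language models", "protein folding"}),
    (2021, {"language models", "diffusion models"}),
    (2022, {"diffusion models", "language models"}),
]

def emerging_topics(papers, cutoff, ratio=2.0):
    """Flag topics whose per-paper rate after `cutoff` exceeds `ratio` times the prior rate."""
    before, after = Counter(), Counter()
    n_before = n_after = 0
    for year, topics in papers:
        if year <= cutoff:
            before.update(topics)
            n_before += 1
        else:
            after.update(topics)
            n_after += 1
    return [
        t for t in after
        # 0.5 is a smoothing constant so never-before-seen topics don't divide by zero
        if (after[t] / max(n_after, 1)) > ratio * (before.get(t, 0.5) / max(n_before, 1))
    ]

print(emerging_topics(papers, cutoff=2020))  # ['diffusion models']
```

Swap the keyword counts for language models reading proposals and patents, and you have the rough outline of what the announcement describes.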
Elsewhere, an international team working with Sakana.ai developed what they call AI Scientist, which “generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by writing a full scientific paper, and then runs a simulated review process for evaluation.” The authors posted a paper here, and made their code available here. (One professor reacted by going through stages of grief.)
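The paper lays out the full pipeline; as a rough mental model (not Sakana’s actual code, which is linked above), the loop looks something like this, with `call_llm` standing in for whatever model API one actually uses:

```python
# Rough sketch of an "AI Scientist"-style loop, as described in the quote above.
# Not Sakana's implementation; call_llm and the prompts are placeholders.
import subprocess

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")

def run_experiment(code: str) -> str:
    """Execute generated experiment code in a subprocess and capture its output."""
    with open("experiment.py", "w") as f:
        f.write(code)
    result = subprocess.run(["python", "experiment.py"],
                            capture_output=True, text=True, timeout=600)
    return result.stdout + result.stderr

def ai_scientist_cycle(topic: str) -> dict:
    idea = call_llm(f"Propose a novel, testable research idea about {topic}.")
    code = call_llm(f"Write a self-contained Python experiment testing: {idea}")
    results = run_experiment(code)
    paper = call_llm(f"Write a short paper. Idea: {idea}\nResults: {results}")
    review = call_llm(f"Act as a peer reviewer. Score this paper 1-10:\n{paper}")
    return {"idea": idea, "paper": paper, "review": review}
```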
Running an evaluation? The Chronicle of Higher Education writes about researchers concerned that their colleagues are using ChatGPT to help with, or just to create, peer review. Yes, this means we’re seeing the early signs of researchers producing research with the help of AI and their peers reviewing it with the same. Harvard’s Misinformation Review observes the rising tide of AI-generated papers and raises the problem of quality degradation.
At a more meta level, the Qatar Foundation, the Institute of International Education, and a group of universities* launched a research effort to better understand AI’s impact on education. The “WISE Global Research Consortium on AI, higher education, and the global workforce” will work over the next year. They don’t have a web presence that I could find yet, but I’ve reached out to various participants to learn more.
Investment
Yale University announced it would spend more than $150 million to improve its AI capabilities. Specifically, the funds will go to hardware, software, research, and professional development:
expanding Yale’s supply of graphic processing units, expanding access to AI tools, increasing support for research and education and facilitating interdisciplinary collaboration on AI.
Emory University launched an AI and humanity center.
From the other side, as it were, Nvidia partnered with the state of California to supply hardware and content to teach people about AI:
The initiative, cosigned by Newsom and Jensen Huang, founder and CEO at Nvidia, promises to train students, educators, and workers; support job creation; promote innovation; and implement AI to solve social challenges and improve the lives of all California residents.
The MoU also included a few ancillary goals for the project, namely bringing new AI resources from Nvidia into community colleges, including curriculum support, hardware, software, AI labs, workshops, and more.
Speaking of which…
Teaching about AI
Do colleges and universities teach students enough about AI? We have one note to share today. The Digital Education Council report cited earlier portrays massive student dissatisfaction with how campuses teach AI. "80% of students report that their university AI integration does not fully meet their expectations… 59% expect their universities to increase the use of AI in teaching and learning."
In terms of campus operations, most students in the DEC report want campuses to increase faculty development on this score: "73% agree that universities should provide training for faculty on the effective use of AI tools". Most students (71%) also want some voice in shaping institutional AI preparation.
In short, that report outlines a massive call for increased student work with AI.
Jobs
A Cengage study found that while employers tend to use generative AI, college graduates felt unprepared to use the tech at work. The report detected a serious amount of anxiety among graduates on this score:
Grads are already feeling the negative impact of programs that are not yet exposing them to GenAI tools and skills. It’s impacting their career trajectories, with 51% of graduates saying the pace of technology is making them second-guess their career choice, up from 33% in 2023 – a trend most pronounced among vocational graduates (58%). Moreover, more than a third (39%) feel threatened that GenAI could replace them or their job entirely.
UNICEF put out a job search for a "Research consultant – AI in Education" as part of their Global Office of Research and Foresight (archived). Either they hired someone or closed the search, as the original page is now gone.
To sum up: generative AI is increasingly impacting higher education, although we only have glimpses at the moment. Students use the technology to some degree and tend to want more engagement with it. Some universities and businesses are investing in that integration. Research into AI’s educational implications is building; moreover, we’re seeing projects that use AI to generate new research. The cheating problem persists at scale. Taken together, these stories and trends call for colleges and universities, along with other organizations, to increase their exploration, use, and support of AI.
This goes directly against the body of criticism we’ve seen for years, which urges academics to resist, object to, or disengage from AI. As I’ve been saying, the deep divide in academia over how to respond to AI mirrors the broader cultural split. And in both cases investment of all kinds (money, time, research, raw usage) continues to grow.
One more point: from what I’ve seen, research into students’ actual use of AI is all over the place. And some of it is commercially driven (cf. the Quizlet link). We need better data on what college and university students actually do and think about the technology.
It’s a cliché to speak of a generative AI revolution, but like most clichés, this one has some truth to it. We’re a couple of years in and already seeing all kinds of upset, effort, spending, anxiety, and hope in higher education. “Revolution” might be the apt word.
More to come.
*those universities: “Ashesi University (Ghana), Universidad Camilo Jose Cela (Spain), University of Pennsylvania (USA), The Birla Institute of Technology & Science (India), Nazarbayev University (Kazakhstan), Universidad de los Andes (Colombia) and Hamad Bin Khalifa University (Qatar).”
(thanks to Alex Couros, the Marginal Revolution crew, and the excellent Phil Hill for links)
Great article as usual. I caution against over-interpreting the survey responses from students on how they use AI. This wasn't an open-ended question, and they were only given these options. Nor was the frequency of these options noted. If a student mainly uses genAI as I do, as an ideation partner, critical reviewer, coder, or process advisor, then that could not be properly captured by the survey. The survey design is slanted toward certain uses.
This article thoughtfully examines the growing impact of AI on higher education, focusing on student usage, concerns about cheating, advancements in research, investment in AI capabilities, and the increasing demand for AI-related skills in the workforce. It showcases how students and researchers are using AI tools to enhance their learning and research.
Furthermore, the article discusses the role of AI in academic research, showcasing how researchers are leveraging AI to generate novel ideas and streamline complex analyses. This suggests a collaborative dynamic where AI augments human capabilities, allowing for more efficient and innovative outcomes.
However, the article also raises concerns about the ethical implications of this collaboration, particularly regarding academic integrity and the potential for cheating. This duality emphasizes the need for a balanced approach that recognizes the benefits of AI while addressing the challenges it poses. Note that cheating is a human activity that predates AI.
The article calls for a deeper exploration of how the collaboration between humans and AI can be managed effectively within the educational landscape. This prompts a critical consideration of how institutions can foster a productive partnership between human intelligence and artificial intelligence.
Focusing on the collaboration between humans and AI emphasizes the synergy that can enhance creativity, decision-making, and problem-solving. AI can process vast amounts of data and identify patterns, while humans bring context, empathy, and ethical considerations. This partnership can lead to more innovative solutions and a better understanding of complex issues. Emphasizing collaboration rather than separation encourages a more integrated approach to technology that benefits both humans and machines.
The notion of viewing AI as a separate entity can lead to misunderstandings about its capabilities and limitations. Focusing on collaboration can enhance the effectiveness of both human and AI systems. To illustrate the benefits of collaboration, consider the following key points:
1. Complementarity: AI excels at processing large amounts of data, identifying patterns, and automating routine tasks, while humans bring creativity, emotional intelligence, and contextual understanding. Together, they can tackle complex problems that neither could handle alone.
2. User Empowerment: When AI is seen as a tool for enhancement rather than a replacement, it can empower users to make better decisions, innovate faster, and increase productivity. This collaboration can lead to more informed and nuanced outcomes.
3. Trust and Ethics: Collaboration fosters trust in AI systems. When humans understand how AI supports their work, they may be more inclined to consider ethical implications and ensure responsible use. This can help prevent the pitfalls of over-reliance or misuse.
4. Continuous Learning: Collaborating with AI can foster a culture of continuous learning. As humans interact with AI, they can gain insights that improve their skills and understanding, leading to a symbiotic relationship where both parties evolve.
5. Feedback Loops: The interaction between humans and AI creates feedback loops that can enhance both systems. Human feedback helps AI refine its algorithms, while AI can provide insights that spur human creativity and problem-solving.
6. Enhanced Problem-Solving: Complex issues often require diverse perspectives. Collaboration between human intuition and AI analytical capabilities can lead to innovative solutions, particularly in fields like healthcare, environmental science, and engineering.
7. Democratization of AI: When we emphasize collaboration, we can also make AI more accessible. Educating users on how to work with AI tools democratizes technology, enabling broader participation in its development and application.
Emphasizing collaboration rather than separation fosters a healthier, more realistic, and productive relationship with AI. It positions both human intelligence and artificial intelligence as co-contributors to progress, creativity, and problem-solving. This approach acknowledges the strengths and limitations of both, leading to better outcomes in various domains. I don’t think we can tease these apart.
Thoughts?