How is higher education responding to generative AI?
That is the overarching theme of this Substack. I often contextualize or outflank it by describing related or framing aspects: economics, culture, regulations, and technological developments, to name a few. That’s because we need to bear these contexts in mind as we think through academia’s choices and possibilities.
Today I’ll zero in on higher education directly. I’ve been scanning the horizon over the past month to identify a range of academic approaches to AI.
This is also just one installment. I have a large backlog of higher ed and AI stories, and hope to get them to you steadily.
Let’s start with the classroom, be it in-person, online, or blended, then turn to the academic stories taking place around that learning space.
1: In the classroom
AI-powered teaching assistants Morehouse College plans to roll out synthetic TAs, a combination of generative AI for text and voice interaction with a visual avatar. One professor calls the result spatial AI, which is a phrase I’m not sure I agree with (is this actually evoking a space?), but which I also find fascinating to conjure with.
Note that Morehouse is not a rich college, yet they’ve organized this effort. Some AI projects are very academically accessible.
Using individual AI tutors for social learning In EDUCAUSE Review, York University professor Ron Owston describes ways instructors can create good learning experiences by having students use individual AI tutors together. He uses AI Tutor Pro to generate examples, and I recommend checking it out: it’s easy to use.
Note his conclusions: first, “We need to shift our thinking about GenAI tutors serving only as personal learning tools.” Further, “we should start designing more features into GenAI tutors to take advantage of social learning.”
The capability of AI Tutor Pro to allow summary tables to be copied or be directly posted to social media is an example of this. Another feature that could be added to tutors is to enable multiple logins for teams of students to learn together. The addition of a tracking tool to record individual and team activity would be valuable as well. Uploading content to the AI tutors so that all students can work with the same materials would also be a helpful addition.
There’s a long tradition of thinking about technology as individualized or isolating, and it of course can be both of those things. Yet it’s important to remember how social technologies often are. I’d love to see more examples of this kind of work.
A glimpse into student attitudes about AI An Art & Science Group survey (not openly available; here’s one summary) identified several interesting ways students perceive AI. The key point is their concern about how they see other students using the technology:
55 percent said they were somewhat or strongly worried that other students’ AI usage will hurt their chances of getting into their chosen college. Similarly, about three out of five said they believe other students’ AI usage will affect their chance of getting scholarships and limit career opportunities after college.
Other students might get an unfair advantage in the fierce competition for admissions, scholarships, and jobs. This is very useful to know. I wonder how many students deliberately choose not to use AI in order to avoid charges of unfairness.
A significant number of students have another dark view of the tech:
Two out of five students said AI tools contribute to misinformation, with the same amount calling the use of AI a form of cheating. A similar percentage (43 percent) believe using the tools contributes to a significant decline in critical thinking and creativity.
They sound like some faculty, staff, and other commentators. I wonder if these fairly broadly held views will cause their holders to use AI but say they don’t.
Again, I can’t access the study, so there’s a limit to what I can assess about it. But these strike me as fascinating and useful findings for educators.
Against AI-enabled cheating At least five Chinese universities have adopted strong anti-AI-enabled cheating policies. There’s an interesting mix of approaches, from mandating several software checks of papers at different stages to setting a percentage of student content which must be (as far as the software can determine) free of bot-generated material. No word on how efficacious these detectors are.
Remember that the Chinese state is very pro-AI, and that there is a very active industry there working on the technology.
Is liberal education the best way to prepare students for generative AI? I think so, and there’s been some discussion along these lines (for example). Much depends on what the speaker means by “liberal education.” Some mean “a lot of humanities,” which makes sense here, in terms of teaching careful attention to language and iterative conversation. For others liberal arts means interdisciplinary thinking, and that really works for AI, as one has to bring together writing and computer science for a start. We can quickly add visual literacy for image generators, then media, gender studies, political science, and more for building up critical awareness.
2: Beyond the classroom but still academically related
Faculty leading AI professional development A group of college and university faculty in Florida launched an organization designed to help them improve their understanding of AI, the Florida AI Learning Consortium (FALCON). It looks like they meet online and hope to gather once a year in person. I like their special interest group breakdown: AI Ethics and Policy, AI Use-Cases and Faculty Development, AI Literacy and Workforce Readiness, AI Knowledge Hub, Beyond Data Privacy and Security, Grants, Publications, and Scholarship.
How many other states are creating or operating such peer support groups?
Generative AI in educational multimedia Glenda Morgan writes about the Infocomm audio-visual product show in Las Vegas. There she found “AI everywhere.” One use is to gradually improve media quality. Others use AI to power interactions between instructors, students, and media.
It’s a thoughtful piece. Morgan notes that such services can be expensive, given the licensing charges vendors pay to AI providers. She also identifies an impending problem with competing AI services on campus: “With AI tools embedded in the LMS and in productivity tools such as MS Word or Google Docs, there is no need to buy or license a physical device such as an AI-enabled smartboard.” This yields a major management problem, especially at the enterprise level. Morgan cannily reminds us of the common split between media services and IT; a campus might end up getting AI from both sides. “AI is going to be in everything: the monitor on the wall, the microphone in the ceiling, and the kiosk that helped you locate your classroom or other campus locations…”
Morgan also notes a backend AI function for
leveraging AI to enhance interaction quality, whether in video conferencing, content display, or broadcasting. Many applications of AI in these contexts make intuitive sense, such as improving sound quality, tracking speakers for better camera and audio alignment, providing navigational aids, and automating content management for digital signage.
Publishers using generative AI in educational content Wolters Kluwer Health announced it would add more AI capacity to its nursing education offerings. There isn’t much detail in the release, unfortunately. This is about as close as it gets to clarity:
AI technology can help ease faculty and student pain points by leveraging a nursing student’s performance data for tailored remediation and to prepare students for safer clinical practice. This allows faculty to focus their time on overall class needs, creating active, clinical-based activities that foster students' clinical judgment.
Note the rationale given for this new work: “the upcoming solutions will help reduce faculty workload and offer students AI-personalized remediation tools with rich content, based specifically on their performance data.” Reducing faculty workload without reducing faculty, eh?
Regulating AI in higher education California Assemblymember Sabrina Cervantes (D-Riverside) successfully sent a bill to the governor’s desk which would “ensure that our community college students are taught by qualified human faculty rather than artificial intelligence (AI).” The law, should it go into effect, doesn’t ban AI from the classroom entirely. Instead, “the bill still permits the use of artificial intelligence as a peripheral tool to support human faculty in performing tasks like course development, grading, and tutoring.” Nathan Greenfield interviewed me about this proposed law for University World News.
One key detail: Cervantes states that she worked with the Faculty Association of California Community Colleges (FACCC) to craft the bill. This is therefore a collaboration between faculty and a legislator.
Another detail: Greenfield concludes with this note about the law being conservative:
Rather than breaking new legal ground, Larry Galizio, president and CEO of the Community College League of California, a non-profit company that provides support to the trustees of the state’s 116 community colleges, sees AB 2370 as “reinforcing the status quo as to what type of credentials and minimum qualifications are required to teach at the community colleges”.
Some concluding thoughts:
Some of these stories are about involving human instructors in the learning experience with AI. That’s in opposition to campuses using AI to replace humans entirely. Along these lines is the idea of instructors organizing themselves for professional development. There’s a crying need for the latter, I think.
Regulating AI in education at the government and institutional levels is still wide open, it seems.
AI as helper technology is a theme worth attending to. Be it using AI to improve media quality or helping instructors and students with learning, this is a different way of thinking about the tech than, say, as a professor replacer.
That’s it for now. The scanner is currently groaning under the virtual weight of detected stories. More coming up soon!
(thanks to Peter Shea and Nathan Greenfield for sharing stories for this issue)