There are two main value propositions for a college education (well, there are more, but here are two): immersion in subject matter and proficiency in academic competencies. In the broadest sense, these are the characteristics employers value in a graduate: the former as an objective measure of knowledge, and the latter as what punctuates that knowledge, i.e., the ability to actually think (which employers value most, IMO).
Higher education, as an institution, may capitulate to AI's claim to the apex of subject matter expertise. But even diminished in that sense, we still have value as the environment where competencies are practiced and refined. If HE were to embrace being the institution where AI competencies are integrated into thinking and into producing something of value, our legitimacy as educators would be sustained.
We should focus on what makes an education legitimate in the eyes of stakeholders rather than on whether AI is better or worse at something humans already do.
Very well said, Steve. (I like your use of "punctuated")
Yet what do you say to those who find generative AI ethically repulsive?
Not all AI is necessarily generative. As Potkalitsky says, AI can be employed as an inquiry tool or a generative tool. [ https://nickpotkalitsky.substack.com/p/the-search-evolution-balancing-inquiry ]
Otherwise, many other things in society have been marked as repulsive (TV, social media, comic books), yet we persist. ¯\_(ツ)_/¯
I meant literally generative - as in generating text and other items - as opposed to how it is used.
I take your point about those other things, but perhaps AI is even more upsetting. Perhaps like whaling.
Well, we're not talking about making playground equipment out of asbestos here. The influx of AI is <mcluhan> mostly a rearrangement of cognitive patterns by a medium that "shapes and controls the scale and form of human association and action." Focusing on the content matters less than attending to the messages of AI, which are mostly internal experiences </mcluhan>:
- The answers to questions are simply a matter of computing.
- The optimal way past a stopping point in our thinking is to avoid human friction.
- Human-level discourse is inefficient.
- It is socially acceptable to outsource human relationships to a generative proxy.
- All of the information created by humans over the millennia belongs to whoever has the greatest corporate power to take it and use it to serve a business model.
Are these concerning? Absolutely. Like exterminating whales? I dunno. That comes across to me as another entry in the Pessimists Archive: https://pessimistsarchive.org/ (click through the items on the timeline at the bottom and note the contemporaneous clippings).
I like the McLuhan view, but I also like building equipment. I can offend everyone equally, I think.
The objections you raise are good, but I was thinking of even more:
- reproducing old biases and prejudices
- exploiting the workers who build the guardrails
- driving underemployment and unemployment
- enabling human abuse and suffering, from deepfakes to authoritarian surveillance
- drowning good content in junk
And more.
Didn't photography do the same thing? Motion picture film? Radio? Television? I'm not advocating for the perpetuation of malevolent behavior. But blaming the technology is a cop-out (referring to the anti-AI contingent). To suggest that technology *causes* anti-social behavior is to suggest that humans can neither control their behavior nor be held responsible for it, which smells a lot like rape culture. The cure for that is cultural growth - not banning the technology.
I appreciate the criticism. For me, it is simply that this technology is welding itself onto the rest of work, society, and pretty much everything, so we must adapt. But I think we can, and there will be new jobs that mix tech with the humanities: https://www.collegetowns.org/p/ai-wrangler-job-of-the-future-combines
Latham only touches on a major element of college for traditional-aged students: being part of a community. Those students will tell you that what they want from college is training for a high-paying career, and I'm sure that rings especially true to Latham as a business school prof, but the reality is that they seek community (call it 'connections' or 'networking') and make choices based on it at this time in their lives. Faculty tend to center themselves as the community builders of colleges, so I can see that being his perspective, and they do have an important role there. But he fails to address the ways students are also interested in communities they build themselves. His upper-level business students may be less obviously interested in that, since they may have already established theirs, but it is not what traditional-aged students base their initial enrollment decisions on.
"Human interaction is not as important to today’s students" is an odd claim - they seek out other students and staff for so many interactions unrelated to courses and the physical plant. Students choose to interact with us when they could, faster and quicker, interact with AI for an adequate and acceptable response. They are often too fatigued these days to properly show up for each other, but that doesn't mean that they don't want other people to show up for them.
I am curious why he did a 180-degree turn from his earlier piece in Inside Higher Ed, where he advised the professoriate to resist AI: https://www.insidehighered.com/opinion/views/2024/06/14/memo-faculty-ai-not-your-friend-opinion