3 Comments
Jul 8, 2023 · edited Jul 8, 2023 · Liked by Bryan Alexander

Two things, well, maybe three. First, as faculty, we need to explore generative AI and become proficient with it before attempting to decide what to do about it. My own example: starting in early December, I have been investing at least an hour a day, every day, in learning about AI and using it. As a result, I can now do my grant work as a team of one instead of 3-4 people. I wish I could add a couple more people with similar skills, but I cannot find them. I can also now tell what AI can and cannot do for you. You need to direct it and prime it thoughtfully every step of the way to get a meaningful outcome. Theoretically, this is what we need to teach our students to do.

Second, I tried different approaches in my Spring 2023 courses to prevent misuse of ChatGPT. So far, detailed rubrics requiring students to do tasks that early versions of GPT could not handle have worked well. But by fall, I expect my ChatGPT-proofing will not hold; I can already bypass it with more specialized AI, and my students will certainly discover that too.

This brings us to number three. In academia, we should move to competency-based training and assessment. In my MBA capstone, for example, with semester- or two-semester-long projects built around a practical case, students are required to jump through all the hoops and use whatever assistance they can get, but in the end it is their final oral presentation before the entire school that really matters most.

There are other considerations. Students who do not use AI will be at a disadvantage. Perhaps AI should be taught and required as a professional competency.

It's ironic that Slides GPT stopped you from creating a presentation on cheating... that seems to be a misapplied filter (presumably, few students are presenting on that topic for class!)