
This is some post, my man. Covers an entire atlas of LLM ground, and brilliantly so, as usual. Not being a Futurist, I wasn't aware of the PEST and STEEP methodologies of study. I'll assume psychological impacts fit somewhere under the "Social" umbrella, and that the legal aspects (aside from those related to IP, which you mentioned) will bridge the entire PEST/STEEP gamut.

In terms of the psychological aspects, I think mainly of trust. We've all experienced the hallucinations and the fabricated citations, but when these models are given more autonomy over control systems, how will humans trust the outputs? Trust the guardrails? I have a Tesla (and I love it). When I let the car drive itself on the highway, I am trusting the programming and machine learning that went into the sensory systems making Autopilot possible. Yet I am a programmer myself, and how many times have I used one of my own apps for months before discovering a circumstance I had not foreseen, one that thwarts my logic blocks and derails the thing? A LOT! And knowing this, I still trust the car to propel me down the highway amidst trucks and dozens of other multi-ton hunks of steel?

Like the age-old tension between security and convenience, there is a direct parallel in the balance between trust and convenience. But this is not new - this balance goes to the heart of using any technology. Do I trust my phone not to listen to my conversations? Do I trust social media platforms to respect my privacy when my account is, in fact, set to 'private'? Do I trust Microsoft's claim that the documents, Teams messages, and emails Copilot draws on for generative text assistance will not be used to improve the LLM, or have their content surface in outputs other users request? It's a big one.

So, looking forward to more, Bryan, and thank you.
