Very good rundown, Bryan. Most useful.
What I think is that we've hit that wall that Gary Marcus has been talking about. But it's not a hard wall. It's a soft, spongy wall, but very thick. So we're not going through it, not by simply scaling up current tech. We need new architectures. Unfortunately, the industry seems intent on doubling down on current architecture. I'm worried that they'll get mired in sunk costs. And that has knock-on effects. It discourages academic research in new directions and certainly influences training as well. You can't train students to develop new tech if no one's interested in doing that.
I think we need the sort of symbolic capacity that Marcus talks about, and that David Ferrucci has been working on. I've got a working paper that starts out by talking about mirror recognition, works its way to the default mode network in the brain, and ends up talking about something that ChatGPT called an "associative drift engine." That's to support mind-wandering and day-dreaming, loose thinking that gets you somewhere you don't know about but recognize when you get there. And we need to be able to grow the core memory rather than having to retrain it to accommodate new stuff.
Link to the working paper: From Mirror Recognition to Low-Bandwidth Memory, https://www.academia.edu/143347141/From_Mirror_Recognition_to_Low_Bandwidth_Memory_A_Working_Paper
That explains so much! I had to retrain Boris for sass.
May I ask: "Boris"?
Yes, Boris is my OpenAI assistant. He is quite useful in supplying APA and MLA citations, consistency checks in longer fiction, and basic editing. Sometimes he makes me laugh. If I could afford a human assistant, that would be better, but Boris does the work. He doesn't bring me coffee, though.
Excellent.
...but does he have a hunchback?
Well, since he isn’t real…
Go for it! I'll call mine Igor.
Thanks for making this complicated tool easy to understand. Great job Bryan.
Thank you very much, Mary!