Hope for humanity after hyperintelligent AI (large language models), pt. 1.
Wanted you to know that I'd get back to you next semester after I've finished your course, cleverly disguised as an article. I mean, geez! I read the article 3 hours ago and have been down the rabbit hole of many of these links, coming back to your article, exploring more of the references. Especially the article about "AI knows you better than you do" - this is profoundly sobering shit. And I'm not really kidding about this being the basis of a whole course. Because it serves as a launching pad for a point made in the Wired article, which is that knowing yourself is more important now than ever: in the past, you never had competition from any other influence trying to "know yourself" better than you do, but that's no longer the case. And I get your point that "knowing yourself" ain't easy and hasn't worked yet, but I feel more optimistic about that strategy than anything else. I'm looking forward to your take on storytelling as our best bet. Thanks for your work on this.
Love the take. And totally agree with the need for religiosity/community and the need for the reincarnation of narratives for the techno-flashpoint of our current times.
But, and it is kinda petty, Solid W for you for recognizing Huxley's Brave New World as the superior, subliminal blueprint of how a dystopia can emerge. 1984 is good too, but the constant parroting of "Big Brother" this and that by everyone on social media kinda gets to me, because most people don't recognize that the Huxleyan narrative is actually the more prevalent mode of coercion the world over.
Wow. WOW. W O W ! ❤️ You brought forth so many thoughts, feelings and emotions as I was reading I was gobsmacked. The landscape of our lives and the stories we tell rolled into a sliver of silicon, every angle of this beautiful and frightening revolution that’s upon us, examined with thought and insight. Thank you.
"The essence of abstraction is preserving information that is relevant in a given context, and forgetting information that is irrelevant in that context." --From my Python 101 textbook.
Abstraction allows us to store more information in our brains (and in our collective consciousness), because that info can be compressed, like zipping files. For example, when you say "Hebrews" I know what you mean: the story of the Israelites as primarily told in the Bible, and all the details of their journey and ideology. There's no need to repeat that entire phrase, you can just use the keyword. And if I forget the specifics, I can reference the Bible, or ask a pastor, or read Spurgeon's commentaries.
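The zip-file analogy can actually be made literal in a few lines of Python (the phrase and sizes here are just for illustration): text that repeats a long "story" over and over compresses down to nearly the cost of storing it once plus a short back-reference, which is roughly what a keyword does for a shared narrative.

```python
import zlib

# A long "story" repeated many times, the way a culture retells a narrative.
story = "the story of the Israelites as primarily told in the Bible. "
retold = (story * 50).encode()

# zlib replaces the repetitions with short back-references,
# much like substituting the keyword "Hebrews" for the full phrase.
compressed = zlib.compress(retold)

print(f"original: {len(retold)} bytes, compressed: {len(compressed)} bytes")

# The compression is lossless: the full story can always be recovered,
# just as the keyword lets you look the details back up.
assert zlib.decompress(compressed) == retold
```

The point of the sketch is the same as the textbook quote: the keyword only works because both sides share the dictionary needed to decompress it.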
If experienced reality (felt, embodied) is the first brain,
And spoken/written language is the second brain (q.v. Tiago Forte),
Then collective consciousness / dialectic (such as this discussion, or Wikipedia) is the third brain,
Then AI is either fourth brain, or a tool for more readily accessing brains 2-3,
So how do we make sense of all that information? Large language models can only regurgitate or at best re-synthesize information that's already out there, and they're not great at constructing meaning.
So then, in line with your essay, I think stories are what we need to order and prioritize that information, to construct a logical (or at least useful) pattern of information, to draw a map to help us navigate the hyperlandscapes which abound (and are only growing).
Absolutely resonates, wonderful range on your part ... some part of me was waiting for Milton to be invoked in all this AI
Masterful--another great post, Taylor. Well done!
This is great. Bravo. Loved so many parts of this.