“By having computers ‘solve’ the simulation, you haven’t proved that you are on your way to solving the reality it represents. You’ve just proved you don’t understand the point of a damn game.”
Brilliant insight right there.
As a note tho: do you have an internal monologue or not?
Not really. It "stiffens" into sentences when I'm trying to make a thought precise for a specific reason, but 95% of the time it's non-verbal.
This is pretty interesting, since most people have an internal monologue which manifests as “words,” akin to a noir detective’s narration in a film. I think this difference in neural architecture explains why some people react differently to the concept of LLMs, since LLMs are clearly built around this idea of an internal monologue.
I agree, except for the wrinkle that no thoughts originate in words; it's just that most people are so habituated to language (and possibly lack sufficient self-awareness) that they don't notice the translation.
That is a statement which requires proof! For some people thinking = words, and so you would need to provide proof that this isn’t the case.
Well, since alphabetic language was invented just a few thousand years ago, I think you'd want to prove that no one on the planet thought a thought before that.
Language is much older than the alphabet tho
Perhaps AI won't ever solve natural language? Unlike chess, the rules of natural language constantly change. Teenagers and other cliques (think beatniks, or techies) invent new jargon all the time, specifically to be incomprehensible to the general public, and some of it diffuses into the common vocabulary. Think how the meaning of "cool" changed...
If LLMs are stuck with tracking the moving averages, maybe we'll always be able to stay a step ahead?
My husband has frontal lobe shrinkage and it affects his ability to speak and understand language. One day he won’t be able to tell me his needs, nor will I be able to understand what he wants. It will become a guessing game. I will have to intuitively know what he needs and live with it.
Sorry about your husband. I'm glad he has you around to understand him; you're right that where it really counts, understanding is mostly beyond language.
You may find aphantasia interesting. I personally think that embodiment and visualization may not be necessary criteria for AGI.
We are already at the point where so-called AI-enabled grammar checkers are homogenising people’s writing styles, grinding them down with distracting and screen-cluttering prompts.
We are faced not just with replacement but with re-programming.
Agree
Fantastic piece.
One question. Do you think we will really get to the point "when no human will be able to construct a sentence more beautifully or saliently than a machine, as judged by other humans?" Isn't there a possibility that model collapse (I wish we still called it Habsburg AI) will kick in before that happens? Language seems different than chess, where the computer can continue to improve without more human input. Isn't there some chance the language mastery you prophesy would require more human data than is available to the LLMs? In other words, unlike chess, they can't just keep getting better by playing against other AIs?
See e.g. this provocative piece on Model Collapse:
https://alwaysthehorizon.substack.com/p/urban-bugmen-and-ai-model-collapse
This is above my head to predict with any meaning. But, one of my family members using GPT to help them respond to a tricky family drama amounts to a "close enough for normal people" sort of benchmark. Since that has already happened, I imagine it's only a matter of time before you could trick the very astute. Plus, I remember some studies that suggest that people like AI poetry better than Keats or something? Either way, the rest of my argument stays the same. Great question.
Great piece. AI should cause us to reconsider the nature of human language itself, and realize it is more powerful than it appears.
Thanks, Ethan.
This is a great article. Good work. I think AI could act as a scapegoat, as it already is, to further divide the haves and have-nots. To me, Brave New World is looking more likely, but through projects like Neuralink instead of pills. I could see a cognitive implant becoming a form of UBI.
You get paid to have it. The trade-off is your very humanity and privacy. Of course, such things will be pitched as a must-have tool of evolution. Get it “installed” or risk becoming a Neanderthal.
People will be afraid that without it they will become irrelevant, defenseless bodies who are unable to sustain their own existence. However, many won’t understand that it’s the total opposite. Probably the most valuable data will be the data they can’t (or don’t) have: data from those who aren’t hardwired to the mothership. Hard pass for me.
100%
I’d choose to be a Neanderthal if cyborg were the only other choice.
I used AI intentionally six times in the last week, and unintentionally (e.g., the Google “AI Overview”) about a dozen more times. Essentially every time, the result was unusable — and when it wasn’t immediately obvious that it was unusable, it took me more time to verify manually than it would have to do it myself in the first place. (In fact, in the case of the one large data search I had it do, my confidence in the results is so low, I’ll probably just do that search myself anyway.)
So well written, and such an enjoyable way to expose this insight through your choice of words. Thank you, James.
This is good, but I wanted to add that there are physical limits to computing. No computer can actually map out all possible chess moves, because the number of possible move sequences grows exponentially with each turn (from 20 after White's first move, to 400 after Black's reply, to roughly 119 million after each side's third move). The same can be said of writing (although it's more difficult, as you noted). There will never be enough computing power or time for any of that.
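To make that combinatorial explosion concrete, here is a minimal move-counting ("perft") sketch in Python. It assumes the third-party python-chess library (pip install chess); any legal-move generator would do. It simply counts every legal move sequence of a given depth from the starting position, reproducing the 20 and 400 figures above:

    import chess

    def perft(board: chess.Board, depth: int) -> int:
        # Count all legal move sequences of exactly `depth` half-moves.
        if depth == 0:
            return 1
        total = 0
        for move in board.legal_moves:
            board.push(move)                 # play the move
            total += perft(board, depth - 1)
            board.pop()                      # take it back
        return total

    board = chess.Board()
    for depth in range(1, 5):
        print(depth, perft(board, depth))    # 20, 400, 8902, 197281

The well-known count at depth 6 (each side's third move) is 119,060,324, already slow going in pure Python; the exponential growth does the rest.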
The key is to remove the hold reductive materialism has on the scientific class, perhaps especially the closed system of the neo-Darwinists. Bring the telos of embodied consciousness back to the forefront.
I think this is what Alan Watts would say about AI, were he still alive.
Beautiful. No notes.
Well articulated and some excellent points. I particularly like the note that software folk may be over-optimizing toward the “game”, instead of toward reality, with the subcultures around cryptocurrency and GenAI being two prime examples.