Semantic Apocalypse Now
AI solves language games, not reality.
“Like an alien consciousness” is how chess grandmasters describe the play style of supercomputers.
The chess “apocalypse” began sometime after November 2005, when grandmaster Ruslan Ponomariov defeated Deep Fritz - the last time a human would ever beat the machine.
Unlike human players, the computer can calculate hundreds of thousands of possible outcomes for each move and reliably select the ones with the highest statistical likelihood of victory. In this way, chess computers approach the “ideal” gameplay along a logarithmic curve. Meaning: small improvements will continue to be made, but effectively the game is “solved.” This is even more true in practice, because no human player, not even a savant like Magnus Carlsen, stands a chance against them. It’s like fighting a battleship with a handful of rocks.
Crossing a similar event horizon with language would be the “semantic apocalypse” - when no human will be able to construct a sentence more beautifully or saliently than a machine, as judged by other humans.
We can, I think, glean a lot about this new apocalypse from the chess one.
For one, how could anyone be interested in playing - much less watching other people play - a game that is solved? You can go online right now and the computer will be able to tell you the “correct” move, a thousand times more quickly and accurately than if Magnus himself were standing over your shoulder. Lots of people cheat this way, so you’ll likely lose if you don’t. Where’s the fun left in that?
Well, it would be no fun if human players were trying to play like computers and just doing a bad job at it, but that’s not how good human players play at all.
Grandmasters aren’t making hundreds of thousands of calculations in their head. They’re seeing the “gestalt” of the board and “feeling” out what move to make next, using their embodied intuition, while knowing that the opponent is doing the same - and so trying to get into the opponent’s head while making sure their own is not gotten into. Thousands of hours of play - and, yes, learning technical rules and calculations, but only at first - have allowed each piece and possible move to “give way” into seeing a living, breathing whole; to feel potential dangers and boons and, importantly, to ignore most of what doesn’t matter. As good chess players become great, activity actually migrates away from their calculating left brain, toward their intuitive right.
That’s why the AI feels like an “alien” to the grandmaster. The computer does not form gestalts, and therefore does not act with any sort of discernible personality, spirit, vitality, or “ethos.” It’s a bit like trying to compete with a calculator at a game of adding large numbers in your head. Yeah, it wins, but so what?
Magnus can’t compete with the computers, but millions still want to watch Magnus because Magnus has a personality, including a risk tolerance, aggression, sportsmanship, mind games, and therefore morality. It’s fun to watch him for the same reason it’s fun to watch athletes: we want to see how great people apprehend the world in real time, from within their bodies, which is a gestalt judgment, rather than a set of strictly measurable skills. This all manifests automatically as “admiration.”
There is no possible sterility: It is always moral and it is always embodied, even at the highest levels of abstraction. No young prodigy of mathematics or music was ever inspired by a calculator or a player-piano.
And now I hear the cries of the AI guys: these are just Luddite artifacts of human sentimentality. Nice on the weekend, maybe, but the endgame of AI is to solve reality better and more efficiently than any human mind ever could. Life is a more complex chess board, they say, and so it can be solved in the same way, just with more computations. First it was chess, then language and mathematics; next, images and videos; finally, it will be all of reality. That will be what we call AGI.
To that I say… you sure about that?
Think about why we even invented games like chess: you put a few simple rules on a grid with a few pieces, suddenly combinations of possible moves explode into numbers so large they can’t be grasped. In this way, the game is easy to learn, difficult to master. That makes it a perfect social game: you can teach your kid, and you can also practice into the night to finally beat your smart-ass co-worker. Through these dramas, you may better understand the sub-personalities comprising yourself and others. All of this can be done safely within the “confines” of the game board, which evokes the same sort of wherewithal needed in real life, but without the mortal risk. It is, in a word, a simulation.
By having computers “solve” the simulation, you haven’t proved that you are on your way to solving the reality it represents. You’ve just proved you don’t understand the point of a damn game. You’ve brought a revolver to a boxing match.
And the same is true for language.
This will be where communication becomes difficult, though, because we are, at this very moment, playing the game I want to show is not the same as the reality it points to.
Most people, I find, think that thinking is language. But that’s not true. Thinking is embodied. I know this because if you lose the parts of your ancient cerebellum where you visualize and move your body through space, you can no longer think. Words are just what happens, very automatically at this point, when we look back and try to record a thought, which itself was a rush of images and sensations. This becomes even more obvious when you realize that words are themselves intricate metaphors for embodied actions - the word “metaphor” is itself a metaphor, for example, meaning in Greek “to carry across.”
Like chess, language has relatively simple rules, so almost anyone can play, but uncountable possible combinations, and therefore a very high skill ceiling. These are the qualities of any good game, you will notice.
The large language models are getting better at the language game, even approaching “solving” it. This will, no doubt, shake our world to the core. Many people will lose their jobs, I’m sure. What’s terrifying about this inevitability, however, is not the possibility of AI stealing our humanity. It can’t. Rather, it’s the sad fact that such a large number of people make their living pushing linguistic symbols around in arbitrage, like a machine. This is an existential threat to investment bankers and performance marketers, maybe, but if perfect chess robots don’t make Magnus Carlsen irrelevant, I struggle to see how LLMs are suddenly going to start “solving” reality more generally just because they “solve” the simplified game we created to represent it.
As a writer, GPT feels like an “alien consciousness” - without vitality or an ethos. It can’t even teach me to write better because, like the chess bots, it takes an entirely different approach - one that calculates possible outcomes rather than lusting to communicate. It’s probably already better than me at choosing the “right” word to go next in this very sentence. But it can’t feel the gestalt within that gives rise to the sentence! So who cares, then? As tempting as it is to pop this essay into GPT and say “rate this,” I know I probably shouldn’t “peek” at the “correct” answer to anything I hope to convey, because I would hinder my invisible progress toward being able to apprehend the whole. It might also patch over my faults and cracks - the ones that, seen in aggregate, emerge as an enormous and unexpected fresco on the walls of my being, far beyond my conscious knowledge or any machine’s smooth perfection. That fresco may be, in the end, what makes this display I’m making of myself worth watching, even if my craft is technically inferior to the machine’s - if not now, then one day very soon. So I try not to think about it beyond letting it help me transcribe my handwriting.
What really frightens people, I think, is that we have become totally convinced that “the language game” is reality. We’ve forgotten that we intentionally “slowed down” in the flow of being, through language, in order to give ourselves a medium by which to reflect on things. But the reflection isn’t the thing. The ever-changing motion of actual experience is the thing. Language is best when it only attempts to gesture at this underlying reality, as in poetry, and does not attempt to freeze and imprison reality by its own arbitrary rules, as in analytic philosophy.
I don’t expect this will clear things up for many people, because being confused about this is extremely profitable at the moment. Investors are throwing billions to see if enough microchips will suddenly turn the language supercomputers into a conscious god. A supposed “god,” by the way, that a large percentage of its builders think will destroy the world. Let’s not think too hard about why they’re so eager to summon it, then.
As the LLMs continue to fail to generalize, we will continue to come up with creative reasons why we’re “not quite there yet.” The most recent one I heard was that each single neuron might have more “compute” than we previously thought. We therefore might have underestimated the computational power of the human brain by several orders of magnitude. To that I say: “Duh.” I remind you that not only does your neocortex have compute power (which the LLMs simulate, roughly), but so does every living part of your biology. Have you ever seen how a single one of your trillions of cells repairs its own DNA? Hell, a slime mold can memorize a maze better than my Roomba.
The goalposts will stretch forever on the horizon, I think, because we’re confused about our games. Straining the metaphor: the reason we stuck a goalpost in the ground in the first place is to make an arbitrary game to practice for the real thing. There is no fixed goalpost of reality itself, though, because reality at its core is fundamentally unlike the games we play within it, including language and mathematics. The final rules won’t ever be clearly defined. Even at the smallest level, you can’t measure a particle’s momentum and its position at the same time. You can’t even say whether it is a wave or a particle.
Our natural nervous systems are perfectly adapted to this ultimate uncertainty by being able to “play.” This happens from a single cell to trillions, all participating in the “spirit” of your personality. From that very human perspective, you see the whole instead of the parts. This is not a low-resolution stand-in for what will eventually be a total computation of all possible outcomes, but actually a fundamentally more correct way of apprehending reality. Machines - and machine-like thinking - fundamentally can’t do this, at literally any level of analysis, because they begin by breaking into halting ones and zeros what is finally flow. They are only good at computing, in hindsight, outcomes for a small subset of our own invented games.
We are not apes with a dim calculator in the upper left quadrant of our skulls that needs to be upgraded so that we may abandon our bodies. You actually can’t “solve” reality without a body. In fact, you can’t “solve” or “resolve” it at all. The tension of incalculable paradox is like the tension of a guitar string, making its music possible. You have to play it.
I do still fear AI, though - not because I think it can do what I do. It can’t, specifically because it doesn’t have a wife and feet and bowel movements. I fear it because I think it will give a lot of power to a mistake: the belief that the games we build to simulate parts of reality - like money or status markers - are the meat of being itself.
I believe that this mistake - the ideology of the materialist and his machines - has already caused much suffering in the world. AI can and will augment this mistake beyond comprehension, and that is a sort of apocalypse. Just not the one the AI guys envision.
We can only fight them by becoming even more human. I realize, though, that as the language machines arrive, many will quit this game of writing altogether. Many more will use the machines to cheat others into thinking they can think.
The right thing to do, I think, is to become like Magnus Carlsen: become a master of play, despite the apocalypse.


