I was on a Zoom call with founders of an AI unicorn company — which means a company worth over a billion dollars that is not yet listed in the stock market. You haven’t heard about them because they are in “stealth mode.” I can’t tell you more because I’m in “NDA mode.”
But, despite all this big money flying around, AI has failed to make a dent in the stock market. What’s up with that? Are these guys just dumping their money down the toilet?
As far as I can tell from these calls, they are building trojan horses with AI tools, waiting for the right moment to pop out and ransack. In the next five years, we might see one of the largest shifts of corporate power in history. You don’t know their names yet, but you will.
To see how AI is actually going to change the world, look at where the money is going. Executives who built Google and Facebook left their high-paying jobs to build AI-powered vertical SaaS products.
I envisioned terrifying egregors with superhuman intelligence. Instead, I got SaaS products.
I should’ve guessed it. In the 1930s, everyone thought robotic arms would make music, burp babies, and do the dishes. Instead, we got automotive assembly lines. It was a massive transformation of human productivity, but it was boring compared to the sci-fi visions. We might expect the same this time around, too.
What AI companies actually do is take all of the disparate data accumulated over the last couple of decades and turn it into large language models: basically, chatbots you can use to navigate anything from home insurance to scheduling flights.
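To make that concrete, here is a minimal sketch of how one of those chatbots tends to be wired up: pull the relevant document out of the pile, hand it to a language model, and return a plain-language answer. Everything here is invented for illustration, and `call_llm` is a hypothetical stand-in for whatever model API a given company actually uses.

```python
# Minimal retrieval-plus-LLM sketch. All policy text is made up, and
# call_llm is a hypothetical placeholder, not any vendor's real API.

POLICY_DOCS = {
    "water damage": "Sudden pipe bursts are covered up to $10,000; gradual leaks are not.",
    "roof": "Roof repairs are covered at replacement cost for storm damage reported within 12 months.",
}

def retrieve(question: str) -> str:
    """Naive keyword lookup standing in for a real vector-search index."""
    hits = [text for key, text in POLICY_DOCS.items() if key in question.lower()]
    return "\n".join(hits) or "No matching policy section found."

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM endpoint."""
    return f"(model answer grounded in: {prompt[:60]}...)"

def answer(question: str) -> str:
    context = retrieve(question)
    return call_llm(f"Policy context:\n{context}\n\nCustomer question: {question}")

print(answer("Is water damage from a burst pipe covered?"))
```

The point of the sketch is just the shape: the “intelligence” sits behind a very ordinary look-it-up-and-summarize loop, which is exactly why it threatens the layers of clerks who used to do that lookup by hand.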
Industries like healthcare are inefficient, opaque, and bloated because, for every interaction, there are checkers and checkers of checkers, ad nauseam. AI could easily replace them today. If it did, transparency would increase, bureaucratic bloat would rot away, and costs would plummet.
The only thing standing in the way of that is some “change management”: upending established companies and their corrupt government ties by undercutting their price points. This is what VCs are investing billions in right now.
Most people are cynical about this kind of thing. For one, 90% of clerical jobs would go away, and people don’t want to lose their jobs, even if they are bullshit jobs. They also don’t trust that new massive corporations would do any better than the old ones.
But, of course, historically, the overall trend has been positive. Large companies fail for a lack of foresight, and new companies take over. New types of jobs and more wealth are created. Always we squawk, but history marches on.
I’m not trying to be Pollyanna about it. The people who built the current corrupt systems are quietly building a new wave of trillion-dollar companies. A semblance of our oligarchy is always preserved. What can ya do – you gotta take the good with the corrupt.
I’m no “techno-optimist.” These are, as they always were, moral problems that stem from the decisions of individuals. You can’t automate and scale courage and truth.
Lies are nested in the techno-optimism. A lot of these tech people talk (publicly) about building gods, the singularity, or replacing human intelligence. That’s mostly overblown hype to attract investment dollars – and they know it, because privately a lot of them will admit how much of it is smoke and mirrors. Google, for example, had its AI guided by humans to make it seem more impressive. There's a lot of money to be made in us being afraid of AI.
The truth is just, well… pretty boring.
If, for example, you create a relatively simple chatbot that removes all of the friction in deciding who gets a loan and at what rate, you remove 90% of the underwriting workforce. Your rates go way down and visibility goes way up.
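As a toy illustration of the visibility point (every factor and number below is invented, purely for the example): when the pricing logic is just code, a borrower can see exactly which factors moved their rate and by how much.

```python
# Toy loan-pricing sketch with made-up factors and numbers, only to show
# how transparent an automated rate decision can be made.

def quote_rate(credit_score: int, debt_to_income: float) -> dict:
    base_rate = 6.0  # invented baseline APR (%)
    adjustments = {}
    if credit_score >= 740:
        adjustments["strong credit"] = -0.75
    elif credit_score < 640:
        adjustments["weak credit"] = +1.50
    if debt_to_income > 0.40:
        adjustments["high debt-to-income"] = +1.00
    rate = base_rate + sum(adjustments.values())
    # Return the rate along with the reasons, so nothing is hidden.
    return {"rate_percent": round(rate, 2), "reasons": adjustments}

print(quote_rate(credit_score=755, debt_to_income=0.32))
# -> {'rate_percent': 5.25, 'reasons': {'strong credit': -0.75}}
```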
If you've seen "The Big Short" or know anything about what happened in 2008, you know it was all caused by a select few bankers benefiting from nobody knowing what the hell “subprime mortgage bundles” were. But if everybody had access to all that information with a simple query, it would be harder for parasites to benefit from ignorance gaps. That kind of transparency makes another 2008 much harder to pull off.
Entrepreneurs are working hard to enrich themselves by building this new transparency, not because they are so good, but because they want to be the next Morgan Stanley. Their lower price point will force out the parasites and dead weight. At least at first. Insurance and healthcare costs will drop drastically as a result.
Yes, AI will greatly increase the power of the parasites and politicians, too. Who knows what that will look like. But we ought to realize that every time we’ve had a big leap forward, we've gotten richer and happier, and psychopaths have had a harder time succeeding. It was abject poverty and homicidal highwaymen up until about a hundred years ago, and it's been an exponential climb upward since then.
To look to the future of AI and hope for cheaper healthcare and for limits on the power of stockbrokers and other parasites is not, I think, too optimistic. Pessimism is convincing enough, but it’s also easy. And, historically, it has been wrong so far. We did not run out of food in the year 2000, in case you didn’t notice.
Of course, we encode our biases (which are mind-parasites) into the machines, and they blind us. But luckily for us, when we're wrong, we pay for it. And so, there is always an evolutionary drive to fix it, given enough time. Happily, AI will also increase the velocity of consequences. That means that our wrong decisions will be quickly catastrophic, and our right decisions will make us wealthier than we can possibly imagine.
Will this all ring in the age of the machine? I doubt it.
Like the futurists of the early 20th century, we've made a fundamental mistake in our assessment of what humans do. We are not clockwork and clockwork is not motivated to do anything without us.
We were afraid, for example, that chess-playing computers would make human chess irrelevant. They did the opposite. Even though we can watch two machines play near-perfect games of chess, we are not interested. We got even more excited to watch Magnus Carlsen play.
Even if AI wrote the world's greatest novel (and it will), you won't read it. Think about it: what novel have you ever read whose author you didn't know? The history of writing is not a history of book titles; it's a history of people who have taken their limited, embodied perspective, made courageous decisions, and imbued us with a desire to imitate them.
All economic value originates from this spirit. The billions of dollars of clerical work of the healthcare industry, for example, are all founded on the simple human desire to care for the sick. All corruption is just a parasite on this desire.
The singularity is boring because nothing will change, fundamentally. We still have the same moral problems we’ve always had. Except now, our immorality will be implemented at a rate that might kill us before we can correct for it.
In that case, more than ever, we have to be engaged with boring-old-reality. That’s the solution to the “alignment problem.”
I know the “true” singularity is still slated for the future… which reminds me of Herbert Simon's speculation in 1965 that “machines will be capable, within twenty years, of doing any work a man can do.” As always, we’ll see.
So far, the “singularity” looks to me more like SaaS integrations than Skynet.
What IS exciting is that, if we play our cards right, the “singularity” will allow us more time and resources to do what makes us human: to easily transcribe our thoughts to the page, to break down a script so we can act it better, or, I don’t know, to do a kickflip.
I’m just glad I have time to do a kickflip now.
Good one James. Change Management 🙄
Always is and always has been difficult and clunky.
My company created one of the first SaaS products in ‘99 using a LAMP setup—mostly for the financial disclosure Reg 22 that we knew was coming. Bootstrapped the whole thing without investors until I sold it in 2009. Software still in use today by thousands of public companies. Sometimes boring is enough—but the change management for the practitioners in IR and PR was always the toughest sell.