AI isn’t what helps you solve that
Yesterday, everything that involved automation by a computer was “robotics”. Robotic Process Automation was the solution to every single human process problem in any business, and it was the only solution. You may as well have renamed the computer the “Robotic Process Machine”, because everything it did was reduced down to that.
Well, times have changed, and the hot new technology of matrix multiplication has replaced RPA. Now everything is AI. On some level, it’s hard to care. Lay people will call anything a computer does by the wrong name, and who can blame them? Our terminology is often so convoluted and technical that we have a hard time sticking to it ourselves. The difference this time is the amount of shit being sold to them. A person mislabeling useful software products as RPA might buy a license to IBM RPA, whereas a person mislabeling useful software as AI will give all their money to Sam Altman, since, they’re told, the functional economy is going to end.
Some people might be raising their fingers to rebut me already, and I get that. AI is a useful area of research. It’s even a useful area of engineering. Plenty of good systems are classic AI systems. That’s not what normal people see, though. Talk to any regular human being right now and ask them what AI is. They’ll tell you it’s LLMs: text generation. OpenAI is functionally not an AI company; it’s a deep learning neural network company, and that sub-area of AI has completely subsumed the entire field.
Take this little snippet of a recent episode of the Ezra Klein Show1, where the two guests and Ezra discuss the build-out of the electric grid. The guests state that the process is slowed by the studies that industry experts have to do to avoid grid collapse. Ezra, I think half-jokingly, asks if ChatGPT could help, and one of the guests responds that no, ChatGPT can’t, but AI agents could supercharge these experts.
This is where I absolutely want to tear my hair out. No AI agent can do anything like what they claim in that conversation. LLMs cannot do any sort of study, they cannot do any sort of calculation, and I can promise you that the one thing these experts do not need is a textbox to “chat” with the data.
Computers can help here. We can implement these calculations in software and make the magic math machine do them very quickly. The results can be presented in whatever way we wish. None of this is AI; none of it has anything to do with ChatGPT or OpenAI, and they are not going to help solve this problem. Their product is about as useful here as Facebook is.
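To make that concrete, here’s a deliberately toy sketch of the kind of deterministic check such software does. Every name and number in it is invented for illustration; real grid studies involve power-flow models and contingency analysis, not a ten-line function. But the point stands: this is ordinary, exact computation, and no part of it benefits from a language model.

```python
def overloaded_lines(flows_mw, limits_mw):
    """Return the names of lines whose projected flow exceeds their thermal limit.

    flows_mw:  dict mapping line name -> projected flow in megawatts
    limits_mw: dict mapping line name -> thermal limit in megawatts
    """
    return [name for name, flow in flows_mw.items() if flow > limits_mw[name]]

# Hypothetical projected flows after a proposed interconnection:
projected = {"line_a": 480.0, "line_b": 610.0, "line_c": 150.0}
limits = {"line_a": 500.0, "line_b": 600.0, "line_c": 200.0}

print(overloaded_lines(projected, limits))  # -> ['line_b']
```

The answer is the same every time you run it, it can be audited line by line, and it runs in microseconds. That is what “software can help” means here.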
It’s infuriating, if you work on actual problems, for snake oil salesmen to imply that their tool can solve the very real problems you know exist, problems that take focused effort to solve. They swoop in, suck up all the oxygen, take in all the resources available, and leave before anything has to be delivered.
I happen to like the Ezra Klein Show. I think they are genuinely trying to help here, and I have no reason to believe any of the three people on this episode are grifters. I think they’ve been hoodwinked: the AI industry has pulled the wool over their eyes and tricked them into believing that AI was the limiting factor in all technology. It wasn’t. The limiting factor is our ability to fund the work and build it. These studies aren’t difficult to implement in software; it just takes time and actual effort.