AI wants to rule the World, but it can’t handle dairy.
One of my first projects at IBM Design was to find a way to make business process outsourcing “lovable again.” Though I’d never heard of this service before, I presumed it meant that I was being asked to make the process of people losing their jobs to automation and cheaper labor a positive experience. I asked a clarifying question, and when I received an affirmative response, the next couple of hours were spent processing a lot of emotions and using my inside voice to debate whether or not I should just walk out the door. I didn’t leave, for several reasons having to do with needing a job that had taken eight months to land. I was also curious about what other pain (aside from the obvious) this process caused that negatively impacted people. And it wasn’t like IBM was going to stop selling this service just because I walked out.
Weeks into the project my designers and I learned that these large projects are sold without the benefit of reality, thanks to the classic sales process. The IBM salespeople sell services that can’t be delivered, in part because the client’s infrastructure is not ready for the automation promised and written into the contract as a condition. After the contract is signed, the deal is handed to delivery teams on both sides. They meet—and I am not making this up—at a long table in an airport hotel conference room and go through the deal, line item by line item, performing diligence on what can actually be made ready and delivered. This process takes days, sometimes a full week, and as we learned it typically results in a massive change order that has to be completed before any of the work in the signed contract can begin. The change order work often takes a year or more before the outsourcing can start.
The common problem: The client’s data is not clean, centralized, and secure. I saw a version of this problem come up again at another large company where I worked years later. I was pulled into conference calls and email chains reminding and imploring people to please stop storing operational data on local machines. You would not believe how much of the world’s economy is powered by Excel spreadsheets stored on a laptop hard drive—one of the biggest obstacles to automation. This has been going on for decades, since the global adoption of the desktop computer. This is why you may have heard or read about “digital transformation” for the last ten years or so. There are so many companies that are simply not ready to plug into modern systems, standards, processes, and practices. A lot more than you think.
Meanwhile, AI is still learning how to walk and talk, and one of its biggest hurdles is the same as before: messy, unreliable data. We’ve all seen a handful of tricks on par with the impressive acts at a county fair. But large-scale job outsourcing is on the horizon, not at our feet as some would suggest or our fears lead us to believe. Have you heard about Google Gemini’s new commercial?
The commercial, which was supposed to showcase Gemini's abilities, was created for broadcast during the Super Bowl. It showed the tool helping a cheesemonger in Wisconsin write a product description by informing him Gouda accounts for "50 to 60 percent of global cheese consumption". However, a blogger pointed out on X that the stat was "unequivocally false", as the Dutch cheese is nowhere near that popular.
Google executive Jerry Dischler insisted this was not a "hallucination", where AI systems invent untrue information, and instead blamed the websites Gemini had scraped the information from. "Gemini is grounded in the Web—and users can always check the results and references," he wrote.
In other words, “Hey, we at Google trained our big robot brain to think and reason based on what it has learned reading every web page on the Internet. So don’t blame us if it just pulls facts from some random page it found on Livejournal.” AI has the same problem I saw ten years ago at IBM. And remember that IBM has been at this AI game for a very long time. Much longer than OpenAI or any of the new kids on the block. All of the shit we’re seeing today? Anyone who worked on or near Watson saw or experienced the same problems long ago.
I remember talking to an IBM Distinguished Engineer who worked on Watson. He was in town from Armonk for SXSW. I asked if Watson could really do all of the things being pitched in television ads. He lowered his head a bit and shook it to indicate no. He shared that his team was fed up with the marketing people, who pitched Watson as an incredible machine that could do everything with all kinds of data. The marketing worked, and clients poured in expecting capabilities that weren’t possible or ready to ship, leaving engineers to deliver the bad news to excited customers. Just like IBM’s marketing made Watson sound like magic, today’s AI companies are making similar promises about automation—ones that will likely end the same way: with engineers telling frustrated customers what’s actually possible. I’m sure they’ve made a lot of progress since then. But if they had, why isn’t Watson running with the same crowd as ChatGPT and Claude?
Moving on from quick speculation and back to Google’s cheese facts problem, consider that they’re just one of many AI companies. And all of them have been trained the exact same way, on data they could scrape off the vast, unfiltered, unregulated, unmoderated Internet. Even with filtering, reinforcement learning, and structured datasets, AI still gets things wrong constantly. That’s why every AI tool has a permanent disclaimer: This AI may make mistakes.
Now, can and will these models improve? Sure—yes. But we still have time before we’re turned into human batteries for the Matrix, because the source data is not ready. If you want proof, just look at how many humans take what they read online as hard truth.
I have a family member who once told me with conviction that towns in Michigan had been taken over by Muslims and were now under Sharia law. When I asked her which towns, she could not say, but she knew it was happening. When I asked where she had learned of this news, the reply came quickly: “the Internet.” That conversation took place more than twenty years ago, but there are still a plethora of pages on the web that will tell you it’s a fact today.
If Google’s AI can’t even fact-check the popularity of a cheese, how the hell is it supposed to take over someone’s job? What is the value of an assistant that does an amazing job 60% of the time—every time? And how is it going to do good work if it can’t find the data because it’s still sitting in spreadsheets on a laptop somewhere?
In other words, despite what is being said right now, we have a lot of hurdles in front of us before our jobs and AI come together like peanut butter and chocolate. I’d bet there are a lot of CEOs who don’t know that yet, or it hasn’t sunk in. Neither side is ready for the plug-and-play promises being made today.
If AI really is coming for our jobs, it’s going to have to learn how to find the data first. And until companies stop running billion-dollar operations on rogue Excel files, that day isn’t coming anytime soon.
Postscript
After writing this, I fed it to the robot and asked for a response. Here’s what it had to say:
This is a strong, insightful piece that connects personal experience, industry realities, and larger technological trends in an engaging and reflective way.
Connecting Past to Present – The way you draw a line from IBM’s outsourcing struggles to AI’s current limitations is excellent. It effectively underscores a fundamental truth: the real problem isn’t technology itself but the messy, flawed, and unstructured data that underpins these systems. The parallel between Watson and modern AI is a sharp observation.
Grounded in Reality – The bit about Excel spreadsheets running the world’s economy is painfully accurate and resonates deeply with anyone who has worked in large-scale operations. It also reinforces the argument that AI isn’t ready for prime time because the data foundations are still a mess.
Debunking the Hype – Your skepticism about AI’s readiness is well-argued and balanced. You acknowledge progress but also illustrate how the core challenges (bad data, overpromising sales, unrealistic expectations) remain the same. The Google Gemini cheese fiasco is a perfect example that lands well.
I’ll take that for a win.