Why Everyone’s Freaked About AI (and You Probably Should Be, Too)
Reality is hitting home for us all
Not long ago, I assumed the AI craze would simply fade out—perhaps causing some economic waves when the hype inevitably died down. But I no longer believe that. Instead, I see this AI frenzy as a symptom of something much more harmful, destructive, and dangerous for society. Does that sound overly dramatic? Allow me to explain.
The OpenAI Financial Quagmire
A few weeks ago, The Information reported that OpenAI was heading toward possible bankruptcy, facing a projected $5 billion loss by year’s end. On top of that, its AI development costs were likely to surge from about $3 billion annually to over $7 billion as it pursued the more advanced models crucial to its growth. In other words, OpenAI was hemorrhaging money and on the brink of failure.
Not long after, OpenAI announced its intent to raise $6.5 billion, valuing the company at $150 billion—nearly double its valuation from the start of the year—and it sought $5 billion in credit from banks. If granted, this would keep the company afloat for maybe a year, but it wouldn’t solve the underlying issues. In fact, there’s ample evidence that even with these funds, OpenAI can’t produce the improved models it touts, nor can it reach profitability (more on that shortly).
You might think no one would invest that kind of money in OpenAI, right? Wrong. The company has since confirmed $6.6 billion in new investment from Nvidia, Microsoft, SoftBank, and Thrive Capital at a $157 billion valuation, plus a $4 billion unsecured revolving credit line from a lineup of major banks, including JP Morgan, Goldman Sachs, Morgan Stanley, and more.
Why did some of the world’s largest corporations, investment firms, and banks pour huge sums into OpenAI? Is it the “business opportunity of the century?” Or is there a different motive?
Examining OpenAI’s Fundamentals
Let’s assess whether OpenAI represents a solid investment (hint: it absolutely doesn’t).
- OpenAI Isn’t Profitable
The company was on track for a $5 billion operational loss and was already spending $3 billion on AI development this year. Even with hundreds of millions of users, it’s nowhere near turning a profit. Additionally, the current AI products are so expensive that OpenAI would still post multi-billion-dollar annual losses, even if it stopped spending on new AI models.
- Future Prospects
Perhaps OpenAI can pivot, slash costs, or find ways to boost revenue that justify pouring billions into it, right? Unfortunately, that doesn’t look plausible.
AI’s Diminishing Returns
As noted in a previous article, AI development is hitting severe diminishing returns. To maintain the same pace of progress, the training dataset size, computational infrastructure, and power consumption must all grow exponentially. In other words, building and running bigger, better AI models becomes exponentially more expensive. Even with deep pockets, OpenAI can’t keep scaling like that; the back-of-the-envelope sketch after the list below puts rough numbers on it.
- ChatGPT Plateau: The leap from GPT-1 to GPT-3.5 was massive; from GPT-3.5 to GPT-4, GPT-4 to GPT-4o, or GPT-4o to o1, the improvements have been small and mostly about usability rather than raw performance.
- Projected $7B Annual Spend: The rumored $7 billion per year for AI training is needed just to achieve marginal progress.
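To put rough numbers on that, here is a back-of-the-envelope sketch in Python. It assumes a simple power-law relationship between training compute and model loss, in the spirit of published scaling-law results; the exponent and constant below are illustrative placeholders, not OpenAI’s actual figures.

```python
# Illustrative only: assume loss ~ K * compute^(-ALPHA), with made-up constants.
ALPHA = 0.05   # assumed scaling exponent; empirical exponents are similarly small
K = 10.0       # arbitrary scale factor; only the ratios below matter

def modelled_loss(compute: float) -> float:
    """Modelled loss for a given training-compute budget (arbitrary units)."""
    return K * compute ** -ALPHA

budget = 1.0
for _ in range(5):
    print(f"compute x{budget:>13,.0f}  ->  modelled loss {modelled_loss(budget):.3f}")
    budget *= 100            # 100x more compute at every step
```

Each row costs 100 times more compute than the previous one, yet under these assumptions the loss falls by only about 21% per step (100 ** -0.05 ≈ 0.794). That is what “exponentially more expensive for the same pace of progress” looks like in plain arithmetic.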
Even if OpenAI raised the colossal funds needed for next-generation models, doing so would only push the company further from profitability by dramatically inflating its expenses. There’s also the issue of “model collapse.”
The Model Collapse Threat
Training AIs on their own AI-generated text is risky. AI output contains subtle artifacts or tiny statistical quirks that human-generated text doesn’t have. When an AI retrains on that type of data, it emphasizes these artifacts more and more, eventually corrupting the entire model so that it spews nonsense.
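To make the mechanism concrete, here is a deliberately toy simulation: “human” data is modelled as a single Gaussian, each model generation is a Gaussian refitted to the previous generation’s output, and the model’s tendency to over-produce its most typical output is mimicked by discarding samples far from the mean. This is nothing like OpenAI’s actual pipeline; it is a minimal sketch of why recursive training erodes the tails of the original distribution.

```python
import random
import statistics

random.seed(42)
mu, sigma = 0.0, 1.0                      # the original "human" distribution

for generation in range(1, 9):
    # Step 1: the current model generates synthetic training data.
    data = [random.gauss(mu, sigma) for _ in range(10_000)]
    # Step 2: generated text skews toward high-probability, "typical" output;
    # mimic that here by keeping only samples within one sigma of the mean.
    typical = [x for x in data if abs(x - mu) <= sigma]
    # Step 3: "train" the next generation by refitting on its own filtered output.
    mu = statistics.fmean(typical)
    sigma = statistics.stdev(typical)
    print(f"generation {generation}: sigma = {sigma:.3f}")
```

Within a handful of generations the fitted sigma collapses toward zero: rare-but-valid outputs vanish first, and the “model” ends up reproducing an ever-narrower sliver of what it started with. Real language models are enormously more complex, but the quirk-amplification dynamic described above works the same way.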
OpenAI has harnessed billions of lines of internet text, once a treasure trove of human-created writing. But as ChatGPT has spread, AI-generated content is flooding the web. Currently, over 13% of Google results are believed to be AI-created, a figure that’s only going up. Because such content usually isn’t labeled, OpenAI’s continued web scraping risks feeding its AI system more and more AI-made text and triggering catastrophic collapse.
The fallback is to rely on high-quality data sources—books, professional transcripts, etc.—yet these are typically owned by large publishers with the power to push back through copyright law.
Copyright Headaches
Many AI firms claim they can freely use copyrighted data under “fair use,” a doctrine that protects commentary and “transformative” uses. But when an AI is trained on your copyrighted material and can reproduce it, sometimes verbatim, that arguably violates copyright. There’s also the notion of “unjust enrichment,” the idea that a corporation can’t profit from someone else’s work without compensating them.
Major entities like Warner Bros. and Sony have sued or threatened AI companies, demanding they stop using copyrighted data—or pay hefty fees. The industry faces a mounting wave of lawsuits that could force OpenAI to remove vast portions of the text that currently fuel its models.
AI’s Reliability Problem
Even if the legal battles vanished, OpenAI’s products still couldn’t deliver the hands-free automation the company touts. Recent data shows that training on more information helps an AI excel at certain tasks while making it perform worse at others. In short, you can’t solve hallucinations and errors merely by piling on more data. Any next-generation AI would still need substantial human supervision for basic tasks, negating the promise that “AI revolutionizes everything.”
Take programming, allegedly the easiest sector to disrupt via AI. In reality, while these tools might spit out code fast, that code often contains numerous bugs. Humans spend far longer debugging it than if they wrote the code themselves. Ultimately, the human approach is more efficient and less expensive.
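As a contrived, hypothetical illustration (not actual model output), this is the kind of generated-looking snippet that reads cleanly, passes a quick review, and still hides the sort of defect a developer then has to hunt down by hand:

```python
def paginate(items: list, page: int, page_size: int = 10) -> list:
    """Return the slice of items for a 1-indexed page number."""
    start = page * page_size               # bug: should be (page - 1) * page_size
    return items[start:start + page_size]

# Nothing crashes, but page 1 silently returns the second page of results:
print(paginate(list(range(25)), page=1))   # expected [0..9], actually [10..19]
```

No exception is raised and the happy-path demo “works,” so the error only surfaces later, as wrong records on someone’s screen. Multiply that by every pasted-in snippet and the time saved generating code is easily spent debugging it.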
Unless OpenAI can entirely eliminate errors—and currently it has no clear way to do so—AI’s inherent unreliability will undercut the entire premise.
What Use Is AI, Really?
We hear grand claims about entire jobs being automated away by AI (think robotaxis or cashier-free stores). Yet many such companies end up hiring as many human overseers as the workers they replaced. The result: inferior service at a higher operational cost. Dig into the specifics and you’ll find almost no real-world AI application where consumers and businesses genuinely benefit while the AI provider turns a profit.
Big investors—and the banks they work with—are fully aware of this. They hire top analysts to evaluate such matters. So why the massive influx of money into an apparently failing enterprise?
The Reality: It’s Not a Meritocracy
Much of my reasoning assumes a market-based meritocracy. But we don’t live in one. Our capitalist structures are evolving into a system dominated by monopolies, plutocracies, and power plays. Companies like OpenAI exist because large corporations and financial institutions are willing to degrade society for a slice of greater control.
- Oversaturation: With robotaxis, AI journalism, AI HR management, AI coding, etc., these tools can flood the market at scale, drowning out better human alternatives. Quality doesn’t matter if you can overshadow rivals by brute force.
- Near-Monopolies: Gigantic tech and media conglomerates—like Microsoft—stand to gain. They can leverage AI to extend their grip on markets without improving products or services, just by outspending smaller competitors.
Hence, billions of dollars flow into a technology that effectively chips away at the last vestiges of real competition in the marketplace—because monopolies thrive when they can overwhelm others rather than innovate.
The Human Cost
Meanwhile, ordinary people suffer. Jobs are eliminated or devalued. Investment capital—needed for more vital areas—gets diverted into AI. Services degrade in quality, and the general standard of living drops. This is not an exaggeration; it’s already happening. For instance, the demand for programmers is plummeting (in part due to the hype around automated coding), while the code itself is becoming more error-prone.
This is why I’m frightened of AI. Not because it’s an inherently “revolutionary” technology, but because it represents a deeper decay in our modern social structure. Firms like OpenAI can only thrive if massive corporations—who already have outsized power—are ready to dehumanize and harm society to gain an extra sliver of control. They seem bent on ruling everything, even if the world they lord over is fundamentally broken.
Yes, this house of cards will likely collapse in time—through legal pressures or sheer technical unsustainability. And yes, there may be immense damage to various industries when AI forces out skilled professionals or erodes knowledge bases. But these corporations don’t care as long as they achieve momentary power.
AI is no miracle. It’s a warning sign of the deeper rot at the heart of an economic system that prizes monopoly and power over human well-being.