The Bottom Line
- OpenAI’s disastrous rollout of GPT-5 revealed a performance plateau and shattered trust in its self-policed path to “artificial general intelligence,” even among AI optimists.
- Relying on a single AI vendor left users stranded by sudden product changes, highlighting systemic risks and the urgent need for enforceable oversight.
- To prevent potential institutional, economic, and national security harms, the US needs clearer laws and regulations governing AI’s development and deployment.
I used to trust OpenAI. Like many in tech, education, and consulting, I built new workflows, invested real time and money, enjoyed big productivity gains, and—yes—let myself get swept up in Sam Altman’s narrative that “artificial general intelligence” was just around the corner. But GPT-5 has shattered that trust.
It is time to bring more scrutiny—and much stronger guardrails—to the AI sector before the next “miracle” ends in another mass disillusionment. The stakes are now too high to ignore; serious legal and regulatory action is overdue.
I found myself frustrated and disappointed by GPT-5’s underwhelming performance and the loss of features that I relied on, such as model choice. What was hyped as a leap turned out to be a stumble.
After abruptly retiring its earlier models, OpenAI faced a user revolt online and had to reverse course, allowing paid users to pick which model to use—an extraordinary move for a company that had always insisted on “progress” as a one-way street. GPT-5 remains the default, but the fact that users demanded and won the right to return to previous models exposes both a plateau in technical development and a deep governance problem at the heart of OpenAI.
It is also a flashing warning sign for the entire tech industry. The implications are legal, regulatory, and economic—and they matter for everyone from everyday users to investors, corporate strategists, and policymakers.
This moment is bigger than a product misstep for three reasons.
Hype, Plateau, Risks
Silicon Valley’s ability to tell a good story is legendary, but when CEOs sell investors on a vision they know is out of reach—at least for a very long time to come—it edges into something more dangerous: potential securities fraud and a huge risk of blowing up the whole US economy.
Altman’s repeated claims that OpenAI was moving toward AGI and knew how to build it drove unprecedented investment, media attention, and even national policy debates. He had long hyped GPT-5, at one point saying he was “scared” of its capabilities. During the launch, he continued to promote the product, describing it as a “PhD-level expert” in everyone’s pocket and “another step towards AGI.”
But what we got was a plateau in LLM development. The reality of GPT-5 and the unceremonious resurrection of legacy models show that the disconnect between what was promised and what was delivered isn’t just an embarrassment—it raises serious legal questions.
If Altman and his team misrepresented what their models could do or their prospects for AGI, knowing the limits of current technology, that edges close to securities fraud. The Securities and Exchange Commission and the Federal Trade Commission could—and should—get involved.
OpenAI and the AI industry in general are now under the same kind of regulatory glare that felled Theranos Inc., but the stakes this time are much higher. Approximately $19 trillion in market capitalization, $364 billion a year in Big Tech capital expenditures, and almost $300 billion in venture capital cash since 2023 now ride on AI-driven expectations.
When the leading lights of the AI sector command not just billions in funding but the entire US stock market boom—trillions of dollars in tech-giant valuations, the confidence of global markets, and the pension funds of ordinary Americans exposed to AI risk—this isn’t a niche problem. OpenAI, Anthropic, and their peers have become systemically important for Silicon Valley and for the entire economy.
If the hype is ahead of reality, it’s not just a branding problem. It is an economic, regulatory, and legal risk that demands a new level of scrutiny.
While AI companies invoke the race with China for global dominance as a scare tactic to argue for lighter oversight, they don’t mention the risk of a total collapse of the US stock market and the broader economy if things go south. That would leave the US in a much worse position against China geopolitically, militarily, and economically.
The need for clear legal accountability isn’t optional. It is time for Congress to stop treating AI hype as harmless “vision” and start treating it like any other claim that can move markets—and hurt them.
AGI Risks Unchecked
But let’s say the skeptics are wrong and OpenAI is actually racing toward AGI. That’s even more alarming.
If a company can’t manage a product launch without chaos and backtracking, how can we trust it with the most powerful technology humanity has ever created? If this had actually been AGI, not just GPT-5, those same governance failures could have been catastrophic.
You can’t just bring back the old model or flip a switch to control a self-preserving AGI. Once that genie is out, there is no putting it back.
I never thought I would one day be arguing for more regulation. For years, I worried that overregulation would slow down innovation and hand the edge to China. But we aren’t talking about a new messaging app or ride-sharing service. Nobel laureate Geoffrey Hinton—who helped invent this technology—warns there is a 10% to 20% chance AGI could wipe out humanity.
When someone called the “godfather of AI” is this worried, we need more than industry self-policing.
If AI insiders themselves are warning that superintelligent models could deceive, manipulate, or even blackmail humans, we shouldn’t leave oversight to the same CEOs selling the dream. The public can’t wait for another “move fast and break things” disaster at this scale.
It is time for real, enforceable guardrails: independent third‑party audits, robust whistle‑blower protections, and a federal AI safety board empowered to oversee, pause, or recall dangerous systems—a model akin to how the Food and Drug Administration ensures safety and efficacy of drugs and devices or how the Financial Stability Oversight Council identifies and contains systemic threats to financial stability.
Overreliance Breeds Vulnerability
OpenAI hasn’t just captured the public imagination with its bold goal of reaching AGI. It has been actively marketing its products as essential tools for individuals, freelancers, businesses, education, and government agencies, reaching 700 million users in under three years.
ChatGPT is supposed to power everything from solo consulting gigs and student essays to supply chain dashboards and mission-critical analytics. But the events of the GPT-5 rollout show what happens when too much is built on a single, unaccountable vendor.
The risk for individual users, freelancers, and enterprises isn’t theoretical. Professionals and organizations that built custom workflows or document pipelines around legacy models woke up to find those systems broken overnight without warning. They had no recourse except a Reddit board and OpenAI’s own customer service email, which responded with unhelpful messages that sounded suspiciously like they were written by GPT-5 itself.
OpenAI eventually restored the old models, but the damage was done: hours lost, work disrupted, trust shattered.
When a typical cloud provider pushes a buggy update, you can often roll back or migrate. With AI, especially when it is deeply embedded in your workflow, you are stuck waiting for the vendor to (maybe) fix it. Where is the liability for these disruptions—whether you’re a solo worker or a Fortune 500? Right now, there isn’t much.
Education isn’t safer. Schools and colleges that rushed to integrate ChatGPT into research labs, tutoring centers, or classrooms suddenly found themselves with a less flexible, less transparent product. Teachers who had relied on the legacy model lineup to plan lessons, scaffold writing, or spark inquiry were forced into a one-size-fits-all model.
ChatGPT’s downgrade with GPT-5 handed skeptics the perfect case study: AI in the classroom, but now with less control and arguably less value—even posing new risks to student skill development and learning.
Government faces even higher stakes. Imagine the intelligence community, a federal agency, or the Department of Defense building decision-support pipelines on OpenAI’s products—only to have their systems go down, glitch out, or disappear entirely because of a model change or, worse, a corporate crisis.
This isn’t far-fetched. OpenAI’s own business model depends on billions of dollars in venture capital to subsidize the tokens its customers use, because current earnings don’t cover the true costs.
If the AGI hype cycle ends, the bubble bursts, and VC money dries up, OpenAI could collapse virtually overnight. That would be a disaster for every user—public, private, or educational—suddenly left with critical data and workflows stranded on a defunct platform.
A five-day outage for a teacher is disruptive. For a defense agency, it’s a national security risk. Would Washington step in as it did with Fannie and Freddie, or let the lights go dark?
Congress should require service-level guarantees, transparent notice before models are deprecated, and clear lines of liability for business, education, and government users. For schools and colleges, there must be independent review and oversight—just as we have for textbooks, testing, and accreditation. No institution—private, public, educational, or individual—should put all its digital eggs in one basket.
The GPT-5 debacle didn’t just undermine trust in a single product; it exposed foundational problems in how we regulate, market, and rely on AI. From overhyped AGI claims to a lack of meaningful consumer and enterprise protections, the current system is dangerously unbalanced. We need real regulatory teeth—think “Truth-in-AI” laws, mandatory audits, and explicit user rights—to keep Silicon Valley’s ambitions honest and its effects in check.
If the AI revolution is real, it will survive transparency. If it isn’t, it’s better to find out now—before another bubble bursts under the weight of its own narrative.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Said S. Kaymakci, PhD, is a career coach and organizational consultant specializing in AI, workforce development, and how people and institutions adapt to rapid technological change.