Why Sam Altman’s AGI reflections could lead to peril or progress: Opinion

Artificial intelligence could lead to a Star Trek future — or spiral into a Mad Max reality.

OpenAI CEO Sam Altman attends Donald Trump's presidential inauguration on Jan. 20.

Artificial intelligence is no longer just a buzzword — it’s driving almost every major technological shift happening today.

Generative AI, in particular, is reshaping industries across the board. The impact is staggering, but behind this incredible potential lies a darker, more unsettling truth.

Years ago, in The Driver in the Driverless Car, I warned of a pivotal moment when exponential technologies like AI would lead us to one of two futures: the utopian vision of Star Trek or the dystopian nightmare of Mad Max. Now that OpenAI CEO Sam Altman has announced that his teams are on track to achieve artificial general intelligence (AGI) and are even beginning work toward superintelligence, we are standing at that junction.

A Mad Max future? 

Will AI help us build a more equitable, advanced society, or will it spiral out of control, pushing us toward collapse? In Taming Silicon Valley, Gary Marcus argues that the generative AI we see today is not the AI we should want — it is effectively taking us down the path to Mad Max. The rapid deployment of these technologies — driven more by corporate profit than ethical considerations — creates a world where misinformation, invasion of privacy, and economic inequality are worsening. 

Generative AI dazzles with its capabilities. Designers, filmmakers, writers, and scientists are using AI tools like GPT-4 and DALL-E to create content and innovations unimaginable just a few years ago. Healthcare is being revolutionized, drug discovery is speeding up, and education is being personalized to each student’s unique learning style. Imagine a world where life-saving treatments are discovered in months, not years.

Companies like mine, Vionix Biosciences, are already using AI to analyze complex medical data and detect diseases faster and more accurately than ever. Across industries, AI is shrinking timelines that once took decades down to mere months. This transformation is reshaping how we think about healthcare, research, and the future of medicine. 

But here’s the catch: AI isn’t just augmenting human creativity — it’s replacing it. Entire industries are feeling the squeeze. Writers, designers, and even software engineers — once thought immune to automation — face the very real threat of obsolescence. Jobs aren’t being enhanced; they’re being erased. Corporations are quick to claim that AI will “augment” human workers, but let’s be real. These companies are already slashing labor costs by replacing workers with machines. AI can do it faster, cheaper, and often better. The question isn’t whether AI will replace jobs; it’s how many, and how fast. 

Deepfake AI future

As if job displacement weren’t enough, generative AI presents an even graver threat: disinformation on an industrial scale. Deepfakes, AI-generated fake news, and automated disinformation campaigns are no longer hypothetical threats — they’re happening right now. As Marcus notes, AI is already being used to manipulate public opinion, disrupt elections, and stoke societal division.

Imagine living in a world where you can’t trust anything you see or hear. That’s where we’re headed. AI-generated deepfakes can fabricate events that never happened. AI-written articles can flood social media with misinformation. It’s becoming nearly impossible to separate fact from fiction — and that’s not just bad for democracy, it’s catastrophic. 

We’re witnessing the slow unraveling of public trust in real time — and the world is being polarized like never before. When we can no longer trust our information sources, democracy itself begins to crumble. Voters can be manipulated, policies twisted, and leaders can rise to power on lies fabricated by machines. And the tools we need to fight it aren’t keeping pace with AI innovation. 

What Altman doesn’t tell you 

The fact is that laws are codified ethics, and ethics are a consensus society develops over time. Ideally, we — society — should be shaping that consensus thoughtfully. But when it comes to AI, it’s not us who are steering the ship. It’s a handful of big tech companies driven not by ethics but by profit and market domination.

Take Altman. He positions himself as a visionary, claiming that his mission is to build AI for the benefit of humanity. But if you look closely, it’s clear that profit, not ethics, is his primary driver. Altman and others like him are racing to roll out AI systems faster than regulators can keep up, effectively setting the rules by default. They control the narrative, release the technology, and reap the rewards — while the rest of society deals with the fallout. 

These companies claim they’re “innovating for the good of humanity,” but the reality is far more self-serving. They’re competing for market share without concern for societal impact. Accountability, transparency, and ethical oversight are being bulldozed in the rush. When AI spreads disinformation, amplifies bias, or eliminates jobs, who is held responsible? Right now, no one. 

Surviving the AI revolution

Governments must step up now. We need regulations that enforce transparency, protect privacy, and hold companies accountable when AI systems go off the rails. These are not optional; they’re essential. But government action alone won’t be enough. We need a societal shift.

AI can do incredible good, but only if it’s developed with humanity’s best interest in mind — not just corporate profits. We must demand more from the companies building these systems. They should be held to higher ethical standards and face real consequences when their technologies cause harm. 

We also need to invest in retraining and upskilling workers. AI is here to stay, and while it will displace jobs, it can also create new opportunities — if we prepare for them. This isn’t just about surviving the AI revolution; it’s about thriving in it. 

We have two choices: take control of AI and steer it toward a Star Trek future, where technology solves global problems and uplifts humanity, or let it spiral into a Mad Max reality, where the powerful few exploit it and the rest of society is left to suffer. Let’s take the wheel before it’s too late.

Vivek Wadhwa is an academic, entrepreneur, and author. His book, From Incremental to Exponential, explains how large companies can see the future and rethink innovation.

This article was written by an external contributor and does not represent the views of MONIIFY. Want to share your opinion with our readers? Pitch your column at commentary@moniify.com