AI Started As A Dream To Save Humanity. Then, Big Tech Took Over.
Many AI builders say this tech promises a path to utopia. Others say it could bring about the collapse of our civilization. In reality, the science-fiction scenarios have distracted us from the more insidious ways AI is threatening to harm society.
After clicking the link to this article and reading these first few words, you might be half-wondering if a human wrote them. Don't worry, I'm not offended. Two years ago, the thought wouldn't have even crossed your mind. But today, machines are generating articles, books, illustrations and computer code that seem indistinguishable from content created by people.
Remember the "novel-writing machine" in the dystopian future of George Orwell's 1984 and his "versificator" that wrote popular music? Those things exist now, and the change happened so fast that it's given the public whiplash, leaving us wondering whether some of today's office workers will have jobs in the next five to 10 years. Millions of white-collar professionals suddenly look vulnerable. Talented young illustrators are wondering if they should bother going to art school.
What's remarkable is how quickly this has all come to pass. In the 15 years that I've written about the technology industry, I've never seen a field move as fast as artificial intelligence has in just the last two years. The release of ChatGPT in November 2022 sparked a race to create a whole new kind of AI that didn't just process information but generated it. Back then, AI tools could produce wonky images of dogs. Now they churn out photorealistic pictures of Donald Trump, down to lifelike pores and skin texture.
Many AI builders say this technology promises a path to utopia. Others say it could bring about the collapse of our civilization. In reality, the science-fiction scenarios have distracted us from the more insidious ways AI is threatening to harm society by perpetuating deep-seated biases, threatening entire creative industries, and more.
Behind this invisible force are companies that have grabbed control of AI's development and raced to make it more powerful. Driven by an insatiable hunger to grow, they've cut corners and misled the public about their products, putting themselves on course to become highly questionable stewards of AI.
No other organizations in history have amassed so much power or touched so many people as today's technology juggernauts. Alphabet Inc.'s Google conducts web searches for 90% of Earth's internet users, and Microsoft Corp. software is used by 70% of humans with a computer. The release of ChatGPT sparked a new AI boom, one that since November 2022 has added a staggering $6.7 trillion to the market valuations of the six Big Tech firms - Alphabet, Amazon.com Inc., Apple Inc., Meta Platforms Inc., Microsoft and most recently, Nvidia Corp.
Yet none of these companies are satisfied. Microsoft has vied for a chunk of Google's $150 billion search business, and Google wants Microsoft's $110 billion cloud business. To fight their war, each company has grabbed the ideas of others. Dig into this a bit deeper, and you'll find that AI's present reality has really been written by two men: Sam Altman and Demis Hassabis.
One is a scrawny and placid entrepreneur in his late 30s who wears sneakers to the office. The other is a former chess champion in his late 40s who's obsessed with games. Both are fiercely intelligent, charming leaders who sketched out visions of AI so inspiring that people followed them with cult-like devotion. Both got here because they were obsessed with winning. Altman was the reason the world got ChatGPT. Hassabis was the reason we got it so quickly. Their journey has defined not only today's race but also the challenges coming our way, including a daunting struggle to steer AI's ethical future when it is under the control of so few incumbents.
Hassabis risked scientific ridicule when he established DeepMind in 2010, the first company in the world intent on building AI that was as smart as a human being. He wanted to make scientific discoveries about the origins of life, the nature of reality and cures for disease. "Solve intelligence, and then solve everything else," he said.
A few years later, Altman started OpenAI to try to build the same thing but with a greater focus on bringing economic abundance to humanity, increasing material wealth, and helping "us all live better lives," he tells me. "This can be the greatest tool humans have yet created, and let each of us do things far outside the realm of the possible."
Their plans were more ambitious than even those of Silicon Valley's most zealous visionaries. They set out to build AI so powerful it could transform society and make the fields of economics and finance obsolete. And Altman and Hassabis alone would be the purveyors of its gifts.
In their quest to build what could become humankind's last invention, both men grappled with how such transformative technology should be controlled. At first they believed that tech monoliths like Google and Microsoft shouldn't steer it outright, because those firms prioritized profit over humanity's well-being. So for years and on opposite sides of the Atlantic Ocean, they both fumbled for novel ways to structure their research labs to protect AI and make benevolence its priority. They promised to be AI's careful custodians.
But both also wanted to be first. To build the most powerful software in history, they needed money and computing power, and their best source was Silicon Valley. Over time, both Altman and Hassabis decided they needed the tech giants after all. As their efforts to create superintelligent AI became more successful and as strange new ideologies buffeted them from different directions, they compromised on their noble goals. They handed over control to companies that rushed to sell AI tools to the public with virtually no oversight from regulators, and with far-reaching consequences.
This concentration of power in AI threatened to reduce competition and herald new intrusions into private life and new forms of racial and gender prejudice. Ask some popular AI tools to generate images of women, and they'll make them scantily clad by default; ask for photorealistic CEOs, and they'll generate images of White men. Some systems, when asked for a criminal, will generate images of Black men. In a ham-fisted effort to fix those stereotypes, Google released an image-generating tool in February 2024 that badly overcompensated, then shut it down. Such systems are on track to be woven into our media feeds, smartphones and justice systems, sometimes without due care for how they might shape public opinion, thanks to a relative lack of investment in ethics and safety research.
Altman and Hassabis' journey was not all that different from one more than a century ago, when two entrepreneurs named Thomas Edison and George Westinghouse went to war. Each had pursued a dream of creating a dominant system for delivering electricity to millions of consumers. Both were inventors-turned-entrepreneurs, and both understood that their technology would one day power the modern world. The question was this: Whose version of the technology would come out on top? In the end, Westinghouse's more efficient electrical standard became the most popular in the world. But he didn't win the so-called War of the Currents. Edison's much larger company, General Electric, did.
As corporate interests pushed Altman and Hassabis to unleash bigger and more powerful models, it's the tech titans who have emerged as the winners; only this time, the race was to replicate our own intelligence.
Now the world has been thrown into a tailspin. Generative AI promises to make people more productive and bring more useful information to our fingertips through tools like ChatGPT. But every innovation comes at a price. Businesses and governments are adjusting to a new reality where telling the real from the AI-generated is a crapshoot. Companies are throwing money at AI software to help displace their employees and boost profit margins. And devices that can conduct new levels of personal surveillance are cropping up.
We got here after the visions of two innovators who tried to build AI for good were eventually ground down by the forces of monopoly. Their story is one of idealism but also one of naivety and ego - and of how it can be virtually impossible to keep an ethical code in the bubbles of Big Tech and Silicon Valley. Altman and Hassabis tied themselves into knots over the stewardship of AI, knowing that the world needed to manage the technology responsibly if we were to stop it from causing irreversible harm. But they couldn't forge AI with godlike power without the resources of the world's largest tech firms. With the goal of enhancing human life, they would end up empowering those companies, leaving humanity's welfare and future caught in a battle for corporate supremacy.
After selling DeepMind to Google in 2014, Hassabis and his co-founders tried for years to spin out and restructure themselves as a nonprofit-style organization. They wanted to protect their increasingly powerful AI systems from being under the sole control of a tech monolith, and they worked on creating a board of independent luminaries that included former heads of state like Barack Obama to oversee its use. They even designed a new legal charter that would prioritize human well-being and the environment. Google appeared to go along with the plan at first and promised the new entity billions of dollars, but its executives were stringing the founders along. In the end, Google tightened its grip on DeepMind, turning a research lab once focused on "solving intelligence" to help cure cancer or address climate change into one largely devoted to developing the company's core AI product, Gemini.
Sam Altman made a similar kind of shift, having founded OpenAI on the premise of building AI for the benefit of humanity, "free from financial obligations." He has spent the last seven years twisting out of that commitment, restructuring his nonprofit as a "capped-profit" company so that it could take billions in investment from Microsoft, effectively becoming a product arm for the software firm. Now he is reportedly looking to restructure again to become more investor-friendly and raise several billion dollars more. One likely outcome: He'll neuter the nonprofit board that is meant to ensure OpenAI serves humanity's best interests.
After the release of ChatGPT, I was struck by how these two innovators had both pivoted from their humanitarian visions. Sure, Silicon Valley's grand promises of making the world a better place often look like a ruse when its companies make addictive or mediocre services, and its founders become billionaires. But there's something more unsettling about Altman and Hassabis' shift away from their founding principles. They were both trying to build artificial general intelligence, or computers that could surpass our brainpower. The ramifications were huge. And their pivots have now brought new levels of influence and power to today's tech giants. The rest of us are set to find out the price.