Advanced AI keeps Sundar Pichai up at night and makes Sam Altman a bit scared. Here's why some tech execs are wary of its potential dangers.

Sam Altman and Sundar Pichai. Ramin Talaie/Getty Images; Kimberly White/Getty Images for GLAAD

Generative artificial intelligence has undergone rapid advances in recent months.
The launch of OpenAI’s ChatGPT has prompted some tech companies to increase their focus on AI.
However, not everyone is feeling optimistic about the new technology. 

The tech world’s obsession with generative artificial intelligence shows no signs of cooling off.

A wave of consumer enthusiasm following the launch of OpenAI’s viral ChatGPT has prompted some major tech companies to pour resources into AI development and launch new AI-powered products. 

But not everyone is feeling optimistic about the increasingly capable technology.

Last month, several high-profile tech figures, including Elon Musk and Steve Wozniak, threw their weight behind an open letter calling for a pause on developing advanced AI. The letter cited various concerns about the consequences of developing tech more powerful than OpenAI’s GPT-4, including risks to democracy.

Senior figures at some tech companies like Google and even OpenAI itself have pushed back against aspects of the letter, highlighting issues with some of its technical points and practicality. 

Here’s what tech executives are saying about the potential dangers of advanced AI tech.

Elon Musk

Elon Musk has been cautious about AI for some time. 

Back in 2018, the billionaire memorably said AI was more dangerous than nuclear warheads. “It scares the hell out of me,” he said during a conference.

Since then, Musk has doubled down on some of his doomsday predictions. In a recent interview with Tucker Carlson, Musk said AI had the potential to destroy civilization.


“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production. In the sense that it has the potential — however small one may regard that probability — but it is non-trivial and has the potential of civilization destruction,” he said.

Despite Musk’s rhetoric, Insider’s Kali Hays previously reported that he is working on his own generative AI project, and he has founded a new company called X.AI, per the Financial Times.

Sundar Pichai

Alphabet CEO Sundar Pichai told CBS in an interview for “60 Minutes” that AI would one day “be far more capable than anything we’ve seen before.”

Pichai said the speed of AI development, and concerns about deploying it in the wrong way, kept him up at night.

“We don’t have all the answers there yet, and the technology is moving fast,” he said. “So does that keep me up at night? Absolutely.” 

Pichai also addressed the open letter, telling The New York Times’ Hard Fork podcast: “I think there is merit to be concerned about it.”

“So I think while I may not agree with everything that’s there in the details of how you would go about it, I think the spirit of it is worth being out there,” he added. 

Sam Altman 

OpenAI CEO Sam Altman has said he’s a “little bit afraid” of AI. 

“I think it’s weird when people think it’s like a big dunk that I say I’m a little bit afraid,” Altman told podcast host Lex Fridman during a March episode. “And I think it’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid.”


In an earlier interview with ABC News, Altman said that “people should be happy” that his company was “a little bit scared” of the potential of artificial intelligence.

Demis Hassabis

DeepMind, a subsidiary of Google’s parent company Alphabet, is one of the world’s leading AI labs. The company’s CEO, Demis Hassabis, has also been urging caution around AI development.

“I would advocate not moving fast and breaking things,” Hassabis told Time in January, referring to an old Facebook motto coined by Mark Zuckerberg, which encouraged engineers to approach work with speed and experimentation. 

“When it comes to very powerful technologies — and obviously AI is going to be one of the most powerful ever — we need to be careful,” he said. “It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” 

In a recent interview with “60 Minutes,” Hassabis said there was a possibility AI might become self-aware one day.

“Philosophers haven’t really settled on a definition of consciousness yet but if we mean self-awareness, and these kinds of things … I think there’s a possibility that AI one day could be,” he said.