Artificial intelligence pioneer Geoffrey Hinton speaks at the Thomson Reuters Financial and Risk Summit in Toronto, December 4, 2017. (Photo: Mark Blinch | Reuters)
Geoffrey Hinton, known as “The Godfather of AI,” received his Ph.D. in artificial intelligence 45 years ago and has remained one of the most respected voices in the field.
For the past decade, Hinton worked part-time at Google, splitting his time between the company's Silicon Valley headquarters and Toronto. But he has quit the internet giant, and he told The New York Times that he'll now be warning the world about the potential threat of AI, which he said is coming sooner than he previously thought.
“I thought it was 30 to 50 years or even longer away,” Hinton told the Times, in a story published Monday. “Obviously, I no longer think that.”
Hinton, who was named a 2018 Turing Award winner for conceptual and engineering breakthroughs, said he now has some regrets over his life’s work, the Times reported, citing near-term risks of AI taking jobs, and the proliferation of fake photos, videos and text that appear real to the average person.
In a statement to CNBC, Hinton said, “I now think the digital intelligences we are creating are very different from biological intelligences.”
Hinton referenced the power of GPT-4, the most advanced large language model (LLM) from startup OpenAI, whose technology has gone viral since the chatbot ChatGPT was launched late last year. Here's how he described what's happening now:
“If I have 1000 digital agents who are all exact clones with identical weights, whenever one agent learns how to do something, all of them immediately know it because they share weights,” Hinton told CNBC. “Biological agents cannot do this. So collections of identical digital agents can acquire hugely more knowledge than any individual biological agent. That is why GPT-4 knows hugely more than any one person.”
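To make the weight-sharing point concrete, here is a minimal, hypothetical sketch (not code from Hinton, Google or OpenAI): if many agents reference the same parameter object, an update made while any one of them learns is instantly visible to every clone.

```python
import numpy as np

# Shared parameters: every "digital agent" points at the same weight array.
shared_weights = np.zeros(4)

class Agent:
    def __init__(self, weights):
        self.weights = weights  # a reference to the shared array, not a copy

    def learn(self, gradient, lr=0.1):
        # An in-place update made by any one agent changes the shared weights.
        self.weights -= lr * gradient

    def predict(self, x):
        return float(self.weights @ x)

# 1,000 exact clones with identical (shared) weights.
clones = [Agent(shared_weights) for _ in range(1000)]

# One clone learns something...
clones[0].learn(gradient=np.array([-1.0, 0.0, 0.0, 0.0]))

# ...and every other clone immediately "knows" it too.
x = np.array([1.0, 0.0, 0.0, 0.0])
print(clones[0].predict(x), clones[999].predict(x))  # both print 0.1
```

Biological brains have no equivalent mechanism for copying learned connection strengths directly between individuals, which is the asymmetry Hinton is pointing to.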
Hinton was sounding the alarm even before leaving Google. In an interview with CBS News that aired in March, Hinton was asked what he thinks the “chances are of AI just wiping out humanity.” He responded, “It’s not inconceivable. That’s all I’ll say.”
Google CEO Sundar Pichai has also publicly warned of the risks of AI. He told “60 Minutes” last month that society isn’t prepared for what’s coming. At the same time, Google is showing off its own products, like self-learning robots and Bard, its ChatGPT competitor.
But when asked if “the pace of change can outstrip our ability to adapt,” Pichai downplayed the risk. “I don’t think so. We’re sort of an infinitely adaptable species,” he said.
Over the past year, Hinton has reduced his time at Google, according to an internal document viewed by CNBC. In March of 2022, he moved to 20% of full-time. Later in the year he was assigned to a new team within Brain Research. His most recent role was vice president and engineering fellow, reporting to Jeff Dean within Google Brain.
In an emailed statement to CNBC, Dean said he appreciated Hinton for “his decade of contributions at Google.”
“I’ll miss him, and I wish him well!” Dean wrote. “As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”
Hinton's departure is a high-profile loss for Google Brain, the team behind much of the company's work in AI. Google reportedly spent $44 million several years ago to acquire a company that Hinton and two of his students had started in 2012.
His research group made major breakthroughs in deep learning that accelerated speech recognition and object classification. Their technology would help form new ways of using AI, including ChatGPT and Bard.
Google has rallied teams across the company to integrate Bard’s technology and LLMs into more products and services. Last month, the company said it would be merging Brain with DeepMind to “significantly accelerate our progress in AI.”
According to the Times, Hinton said he quit his job at Google so he could freely speak out about the risks of AI. He told the paper, “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”
Hinton tweeted on Monday, “I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.”
Dell Technologies CEO Michael Dell said Tuesday that while demand for computing power is “tremendous,” the production of artificial intelligence data centers will eventually top out.
“I’m sure at some point there’ll be too many of these things built, but we don’t see any signs of that,” Dell said on “Closing Bell: Overtime.”
The hardware maker's server networking business grew 58% last year and was up 69% last quarter, Dell said. As large language models have evolved into more multimodal and multi-agent systems, demand for AI processing power and capacity has remained strong.
Dell's AI servers are powered by Nvidia's Blackwell Ultra chips. The company then sells its devices to customers like cloud service provider CoreWeave and xAI, Elon Musk's startup.
Dell shares rose more than 3% Tuesday after the company raised its long-term revenue and profit growth targets at an analyst meeting.
The computer maker raised its expected annual revenue growth to 7% to 9%, up from its previous target of 3% to 4%, and now expects diluted earnings per share to grow 15% annually, up from its previous 8% target.
The company reported strong second-quarter earnings in August, and said it planned to ship $20 billion worth of AI servers in fiscal 2026. That is double what it sold last year.
The Motion Picture Association on Monday urged OpenAI to “take immediate and decisive action” against its new video creation model Sora 2, which the trade group says is being used to produce content that infringes on copyrighted media.
Following the Sora app’s rollout last week, users have been swarming the platform with AI-generated clips featuring characters from popular shows and brands.
“Since Sora 2’s release, videos that infringe our members’ films, shows, and characters have proliferated on OpenAI’s service and across social media,” MPA CEO Charles Rivkin said in a statement.
OpenAI CEO Sam Altman clarified in a blog post that the company will give rightsholders “more granular control” over how their characters are used.
But Rivkin said that OpenAI “must acknowledge it remains their responsibility – not rightsholders’ – to prevent infringement on the Sora 2 service,” and that “well-established copyright law safeguards the rights of creators and applies here.”
OpenAI did not respond to a request for comment.
Concerns erupted immediately after Sora videos were created last week featuring everything from James Bond playing poker with Altman to body cam footage of cartoon character Mario evading the police.
Although OpenAI previously used an opt-out system, which placed the burden on studios to request that their characters not appear on Sora, Altman's follow-up blog post said the platform was moving to an opt-in model, meaning Sora would not allow the use of copyrighted characters without permission.
However, Altman noted that the company may not be able to prevent all IP from being misused.
“There may be some edge cases of generations that get through that shouldn’t, and getting our stack to work well will take some iteration,” Altman wrote.
Copyright concerns have emerged as a major issue during the generative AI boom.
Disney and Universal sued AI image creator Midjourney in June, alleging that the company used and distributed AI-generated characters from their films and disregarded requests to stop. Disney also sent a cease-and-desist letter to AI startup Character.AI in September, warning the company to stop using its copyrighted characters without authorization.
Thoma Bravo co-founder Orlando Bravo said that valuations for artificial intelligence companies are “at a bubble,” comparing the current environment to the dotcom era.
But one key difference in the market now, he said, is that large companies with “healthy balance sheets” are financing AI businesses.
Bravo’s private equity firm boasts more than $181 billion in assets under management as of June, and focuses on buying and selling enterprise tech companies, with a significant chunk of its portfolio invested in cybersecurity.
Bravo told CNBC’s “Squawk on the Street” on Tuesday that investors can’t value a $50 million annual recurring revenue company at $10 billion.
“That company is going to have to produce a billion dollars in free cash flow to double an investor’s money, ultimately,” he said. “Even if the product is right, even if the market’s right, that’s a tall order, managerially.”
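As a rough, hypothetical back-of-the-envelope check of Bravo's point (the 20x free-cash-flow multiple below is our assumption, not his): a company bought at a $10 billion valuation would need roughly $1 billion in annual free cash flow before the investment could plausibly double.

```python
# Illustrative math behind Bravo's comment; the exit multiple is an assumption.
entry_valuation = 10e9       # company valued at $10 billion today
target_multiple = 2          # investor wants to double their money
assumed_fcf_multiple = 20    # hypothetical exit valuation of 20x free cash flow

exit_valuation = entry_valuation * target_multiple    # $20 billion
required_fcf = exit_valuation / assumed_fcf_multiple  # $1 billion per year

current_arr = 50e6           # the $50 million ARR company in Bravo's example
print(f"Required free cash flow: ${required_fcf / 1e9:.0f}B "
      f"({required_fcf / current_arr:.0f}x today's revenue)")
# -> Required free cash flow: $1B (20x today's revenue)
```

Under those assumptions, the $50 million ARR business would have to generate free cash flow equal to roughly 20 times its current revenue, which is the "tall order" Bravo describes.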
OpenAI recently finalized a secondary share sale that would value the ChatGPT maker at $500 billion. The company is projected to make $13 billion in revenue for 2025.
Nvidia recently said it would invest up to $100 billion in OpenAI, in part, to help the ChatGPT maker lease its chips and build out supercomputing facilities in the coming years.
Other public companies have soared on AI promises, with Palantir’s market cap climbing to $437 billion, putting it among the 20 most valuable publicly traded companies in the U.S., and AppLovin now worth $213 billion.
Even early-stage valuations are massive in AI, with Thinking Machines Lab notching a $12 billion valuation on a $2 billion seed round.
Despite the inflated numbers, Bravo emphasized that there’s a “big difference” between the dotcom collapse and the current landscape of AI.
“Now you have some really big companies and some big balance sheets and healthy balance sheets financing this activity, which is different than what happened roughly 25 years ago,” he said.