Sam Altman, CEO of OpenAI, at the Hope Global Forums annual meeting in Atlanta on Dec. 11, 2023.
Dustin Chambers | Bloomberg | Getty Images
OpenAI CEO Sam Altman has admitted that he was surprised by the popularity of ChatGPT, which was released as a research project a little more than a year ago. His team had spent an entire meeting debating whether it was worth even opening up the chatbot to the public.
As it turns out, OpenAI’s decision to launch ChatGPT in November 2022 became the defining moment for generative artificial intelligence and set the stage for a rush of investments and a mountain of new products and services in 2023.
Some form of generative AI has made its way into virtually every industry, from financial services to biomedical research. Some 95% of utility and energy companies are discussing using generative AI algorithms, according to a July survey.
To see the financial effect of the generative AI rush, you need only look at Nvidia’s bottom line. The chipmaker’s graphics processing units, or GPUs, are at the heart of the large language models created by OpenAI as well as those from Alphabet, Meta and a growing crop of heavily funded startups all battling for a slice of the generative AI pie.
Through the first three quarters of 2023, Nvidia generated $17.5 billion in net income, up more than sixfold from a year earlier. Its stock price jumped 237% this year, far exceeding any other member of the S&P 500.
Generative AI quickly became the buzzy phrase for corporate earnings calls, as every company needed a narrative. Sometimes, the story was painful, such as when digital education company Chegg said in May that it was seeing a “significant spike in student interest in ChatGPT,” which appears to be “having an impact on our new customer growth rate.” The stock plunged 48% in one day following the warning.
Perhaps no company was less prepared for the generative AI boom than OpenAI itself. In November, Altman was suddenly ousted by the board amid a dispute that reportedly had to do with his aggressive push to develop new commercial products at the expense of safety. However, Altman quickly returned to the helm after employees threatened to flee and large investors banded together to fight for his reinstatement.
The public spat included a reshuffling of OpenAI’s board and shined a bright light on the debate raging between AI skeptics and evangelists. While advances in generative AI showcase the potential for technology to unlock all sorts of business opportunities and efficiencies, fears of the algorithms’ perceived power gained equal resonance. Some of the real-life harms for minorities and vulnerable populations showed up in FTC proposals, wrongful arrests, contaminated datasets and more.
Here were some of the key areas for generative AI advancements in 2023:
The Anthropic website on a laptop arranged in New York on Aug. 15, 2023.
Gabby Jones | Bloomberg | Getty Images
Chatbots
ChatGPT opened up the floodgates for investments in chatbots, as it became clear how the input of a few words could produce more thorough and creative responses than ever before.
Nearly two months after launch, ChatGPT broke records as the fastest-growing consumer app in history — until Meta’s Threads dethroned it last summer. ChatGPT now has about 100 million weekly active users, and more than 92% of Fortune 500 companies use the platform, according to OpenAI.
Earlier this year, Microsoft poured an additional $10 billion into the company, making it the biggest AI investment of the year, according to PitchBook, and OpenAI is in talks to sell employee shares at a price that would suggest a valuation of $86 billion. According to Bloomberg, the company is in discussions to potentially raise capital at a valuation of $100 billion or more.
Google was caught off guard by ChatGPT’s success and responded by accelerating the public release of its Bard chatbot, powered by its LLM called LaMDA, which stands for Language Model for Dialogue Applications.
Google has been rolling out new Bard features, including integrations with Google Search and a YouTube extension, and recently released Gemini, the new and buzzy AI model to power Bard. Gemini’s launch this month involved a marketing blitz and controversy over an edited video promoting the model’s capabilities.
In addition to its internal investments, Google is one of many big names behind Anthropic, an AI startup that’s currently in talks to raise $750 million at a valuation of $18.4 billion. Founded by former OpenAI research executives, Anthropic is the developer of the chatbot Claude.
In July, Anthropic debuted Claude 2, and said it has the ability to summarize up to about 75,000 words, which could be the length of a book. Users can input large datasets and ask for summaries in the form of a memo, letter or story.
As a category, new generative AI chatbots have been used this year to answer questions about business strategy, as well as to design study guides, offer advice on salary negotiation and spark creative writing prompts. They’ve even assisted in writing wedding vows.
“It’s probably one of the most influential step function changes in technology that we’ve seen,” Jill Chase, investment partner at CapitalG Ventures, told CNBC. Chase said it’s up there with the dawn of the internet and the shift to mobile. “Things like that just open up people’s imaginations,” she said.
Academics and ethicists have voiced significant concerns about the technology’s tendency to fabricate information and to propagate bias. Still, it has quickly made its way into schools, online travel, the medical industry, digital advertising and beyond. Microsoft and IBM have invested increasing amounts in enterprise AI offerings, including development studios for companies to personalize the use of LLMs.
There are plenty of detractors.
Publishers, artists, writers and technologists have pursued legal action against companies behind popular generative AI tools, concerned that their creative content is being used as free training data. John Grisham, George R.R. Martin and other prominent authors sued OpenAI in September over alleged copyright infringement.
This photo taken on Jan. 31, 2023, shows an artificial intelligence manga artist, who goes by the name Rootport, wearing gloves to protect his identity, demonstrating how he produces AI manga during an interview with AFP in Tokyo.
Richard A. Brooks | Afp | Getty Images
Image and video generation
Generative AI for images and video emerged in 2022, driven by powerful image generators such as OpenAI’s DALL-E 2, Stable Diffusion and Midjourney, and video-generation AI tools from Meta, Google and Amazon.
While interest in those technologies continues, progress has waned compared to chatbots, according to Brendan Burke, an analyst at PitchBook.
“Multimedia content generation has fallen behind language in the pace of progress,” Burke told CNBC. “The initial excitement with Stable Diffusion in 2022 exposed both the general interest but also the drawbacks of AI content generation. Progress has been incremental this year, yet still disappointing for the most sophisticated content creators.”
Meta’s Instagram recently debuted a feature that allows users to change the background of Stories posts using AI. Google and Amazon have incorporated generative AI tools into advertising technology to create more appealing marketing images.
Some industry leaders say the future of generative AI is “multimodal,” bringing the various mediums together.
“The world is multimodal,” Brad Lightcap, OpenAI’s operating chief, told CNBC in a recent interview. “If you think about the way we as humans process the world and engage with the world, we see things, we hear things, we say things. The world is much bigger than text. So to us, it always felt incomplete for text and code to be the single modalities, the single interfaces that we could have to how powerful these models are and what they can do.”
Agents and assistants
After the chatbot comes the agent.
It’s not just about getting sophisticated answers, but it’s also about using generative AI to be productive in completing tasks. That could be scheduling a group hangout by scanning everyone’s calendar to make sure there are no conflicts, booking travel and activities, buying presents for loved ones or doing a specific job function such as outbound sales.
Last month, OpenAI announced custom GPTs, or customized, niche versions of ChatGPT that users can personalize for getting travel recommendations, recipe help or startup advice. However, the company chose to delay the release of the platform that would popularize different use cases — the “GPT store” — until next year.
One type of AI assistant that has gained popularity is for coding. Take, for example, Microsoft’s GitHub coding repository. GitHub CEO Thomas Dohmke wrote in a blog post earlier this year that an average of 46% of all code on GitHub, “across all programming languages,” was AI generated.
Last month, GitHub introduced a more expensive version of its Copilot assistant that can explain and provide recommendations about internal source code.
“Copilot, when it started at the very beginning, was thought to be a tool that could help developers write docs,” Kyle Daigle, GitHub’s chief operating officer, told CNBC in an interview. In the past year, he said, the company has expanded the technology, looking for more places “to help developers collaborate and work together and solve problems outside of just the code.”
But PitchBook’s Burke said coding assistants are in their very early days and currently can only do “a small part” of a developer’s work. The same limitation holds well beyond coding, he said.
“Users have found how little AI can do for them this year,” he said. “AI knows a lot, but it can’t do a lot yet. We’re still far away from AI truly being able to do the complex tasks that people are used to doing in their personal lives and at work. That has been shown by the struggles of AI agents this year.”
Nvidia CEO Jensen Huang speaks at the Supermicro keynote presentation during the Computex conference in Taipei on June 1, 2023.
Overall, 2023 was a big year for consumer excitement surrounding generative AI and for adoption of a few popular products. But business success stories have been few and far between.
“It was an especially transformative year from a consumer perspective where AI became much more tangible than before,” Grace Isford, a partner at Lux Capital, said in an interview. “AI is nothing new, but the awareness — and in turn, the adoption — has skyrocketed. Many more hackers and builders are leveraging the technology and the really exciting advancements into products.”
CapitalG’s Chase said the consumer fascination with the space has allowed people to “see what was possible” in AI, allowing for a “cake tasting” of sorts and a teasing of the imagination.
Early in the year, people “extrapolated out early exploration of that technology into lasting and enduring use cases,” Chase said. She added that early adoption of one or two popular products hasn’t translated in a straight line to mainstream use. Companies and developers are now going back to doing research and development to “build the right infrastructure and tooling” that can hopefully lead to mass adoption.
“I think that will happen over the next year,” she said. “I think some people thought it would happen this year.”
In 2023, it’s clear that the overwhelming beneficiary from all the hype was Nvidia. The challenge for the coming year and beyond is for businesses to show that their hefty spending on those advanced GPUs and the models they power can lead to the development of products that allow more companies to share in the wealth.
“I thought that the excitement at the end of last year would quickly translate into enterprise adoption, but the reality is that very few companies have launched generative AI applications into production and experiments aren’t quickly translating into reliable applications,” Burke said. “We’re still looking at an outlook where companies may not widely deploy products until later next year or even the following year.”
Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to inappropriate content.
STR | Nurphoto via Getty Images
The global online safety movement has paved the way for a number of artificial intelligence-powered products designed to keep kids away from potentially harmful things on the internet.
In the U.K., a new piece of legislation called the Online Safety Act imposes a duty of care on tech companies to protect children from age-inappropriate material, hate speech, bullying, fraud, and child sexual abuse material (CSAM). Companies can face fines as high as 10% of their global annual revenue for breaches.
Further afield, landmark regulations aimed at keeping kids safer online are swiftly making their way through the U.S. Congress. One bill, known as the Kids Online Safety Act, would hold social media platforms responsible for preventing their products from harming children — similar to the Online Safety Act in the U.K.
This regulatory push is prompting a rethink at several major tech players. Pornhub and other online pornography giants are blocking all users from accessing their sites unless they go through an age verification system.
Porn sites haven’t been alone in taking action to verify users’ ages, though. Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to sexually explicit or inappropriate materials.
Such regulatory measures have been met with criticisms from the tech industry — not least due to concerns that they may infringe internet users’ privacy.
Digital ID tech flourishing
At the heart of all these age verification measures is one company: Yoti.
Yoti produces technology that captures selfies and uses artificial intelligence to verify someone’s age based on their facial features. The firm says its AI algorithm, which has been trained on millions of faces, can estimate the ages of 13- to 24-year-olds to within two years.
The firm has previously partnered with the U.K.’s Post Office and is hoping to capitalize on the broader push for government-issued digital ID cards in the U.K. Yoti is not alone in the identity verification software space — other players include Entrust, Persona and iProov. However, the company has been the most prominent provider of age assurance services under the new U.K. regime.
“There is a race on for child safety technology and service providers to earn trust and confidence,” Pete Kenyon, a partner at law firm Cripps, told CNBC. “The new requirements have undoubtedly created a new marketplace and providers are scrambling to make their mark.”
Yet the rise of digital identification methods has also led to concerns over privacy infringements and possible data breaches.
“Substantial privacy issues arise with this technology being used,” said Kenyon. “Trust is key and will only be earned by the use of stringent and effective technical and governance procedures adopted in order to keep personal data safe.”
Rani Govender, policy manager for child safety online at British child protection charity NSPCC, said that the technology “already exists” to authenticate users without compromising their privacy.
“Tech companies must make deliberate, ethical choices by choosing solutions that protect children from harm without compromising the privacy of users,” she told CNBC. “The best technology doesn’t just tick boxes; it builds trust.”
Child-safe smartphones
The wave of new tech emerging to prevent children from being exposed to online harms isn’t just limited to software.
Earlier this month, Finnish phone maker HMD Global launched a new smartphone called the Fusion X1, which uses AI to stop kids from filming or sharing nude content or viewing sexually explicit images from the camera, screen and across all apps.
The phone uses technology developed by SafeToNet, a British cybersecurity firm focused on child safety.
Finnish phone maker HMD Global’s new smartphone uses AI to prevent children from being exposed to nude or sexually explicit images.
HMD Global
“We believe more needs to be done in this space,” James Robinson, vice president of family vertical at HMD, told CNBC. He stressed that HMD came up with the concept for children’s devices prior to the Online Safety Act entering into force, but noted it was “great to see the government taking greater steps.”
The release of HMD’s child-friendly phone follows heightened momentum in the “smartphone-free” movement, which encourages parents to avoid letting their children own a smartphone.
Going forward, the NSPCC’s Govender says that child safety will become a significant priority for digital behemoths such as Google and Meta.
The tech giants have for years been accused of worsening mental health in children and teens due to the rise of online bullying and social media addiction. The companies counter that they’ve taken steps to address these issues through increased parental controls and privacy features.
“For years, tech giants have stood by while harmful and illegal content spread across their platforms, leaving young people exposed and vulnerable,” she told CNBC. “That era of neglect must end.”
A banner for Snowflake Inc. is displayed at the New York Stock Exchange to celebrate the company’s initial public offering on Sept. 16, 2020.
Brendan McDermid | Reuters
MongoDB’s stock just closed out its best week on record, leading a rally in enterprise technology companies that are seeing tailwinds from the artificial intelligence boom.
In addition to MongoDB’s 44% rally, Pure Storage soared 33%, its second-sharpest gain ever, while Snowflake jumped 21%. Autodesk rose 8.4%.
Since generative AI started taking off in late 2022 following the launch of OpenAI’s ChatGPT, the big winners have been Nvidia, for its graphics processing units, as well as the cloud vendors like Microsoft, Google and Oracle, and companies packaging and selling GPUs, such as Dell and Super Micro Computer.
For many cloud software vendors and other enterprise tech companies, Wall Street has been waiting to see if AI will be a boon to their business, or if it might displace it.
Quarterly results this week and commentary from company executives may have eased some of those concerns, showing that the financial benefits of AI are making their way downstream.
MongoDB CEO Dev Ittycheria told CNBC’s “Squawk Box” on Wednesday that enterprise rollouts of AI services are happening, but slowly.
“You start to see deployments of agents to automate back office, maybe automate sales and marketing, but it’s still not yet kind of full force in the enterprise,” Ittycheria said. “People want to see some wins before they deploy more investment.”
Revenue at MongoDB, which sells cloud database services, rose 24% from a year earlier to $591 million, sailing past the $556 million average analyst estimate, according to LSEG. Earnings also exceeded expectations, as did the company’s full-year forecast for profit and revenue.
MongoDB said in its earnings report that it’s added more than 5,000 customers year-to-date, “the highest ever in the first half of the year.”
“We think that’s a good sign of future growth because a lot of these companies are AI native companies who are coming to MongoDB to run their business,” Ittycheria said.
Pure Storage enjoyed a record pop on Thursday, when the stock jumped 32% to an all-time high.
The data storage management vendor reported quarterly results that topped estimates and lifted its guidance for the year. But what’s exciting investors the most is early returns from Pure’s recent contract with Meta. Pure will help the social media company manage its massive storage needs efficiently with the demands of AI.
Pure said it started recognizing revenue from its Meta deployments in the second quarter, and finance chief Tarek Robbiati said on the earnings call that the company is seeing “increased interest from other hyperscalers” looking to replace their traditional storage with Pure’s technology.
‘Banger of a report’
Reports from MongoDB and Pure landed the same week that Nvidia announced quarterly earnings, and said revenue soared 56% from a year earlier, marking a ninth-straight quarter of growth in excess of 50%.
Nvidia has emerged as the world’s most-valuable company by selling advanced AI processors to all of the infrastructure providers and model developers.
While growth at Nvidia has slowed from its triple-digit rate in 2023 and 2024, it’s still expanding at a much faster pace than its megacap peers, indicating that there’s no end in sight when it comes to the expansive AI buildouts.
“It was a banger of a report,” said Brad Gerstner, CEO of Altimeter Capital, in an interview with CNBC’s “Halftime Report” on Thursday. “This company is accelerating at scale.”
Data analytics vendor Snowflake talked up its Snowflake AI data cloud in its quarterly earnings report on Wednesday.
Snowflake shares popped 20% following better-than-expected earnings and revenue. The company also boosted its guidance for the year for product revenue, and said it has more than 6,100 customers using Snowflake AI, up from 5,200 during the prior quarter.
“Our progress with AI has been remarkable,” Snowflake CEO Sridhar Ramaswamy said on the earnings call. “Today, AI is a core reason why customers are choosing Snowflake, influencing nearly 50% of new logos won in Q2.”
Autodesk, founded in 1982, has been around much longer than MongoDB, Pure Storage or Snowflake. The company is known for its AutoCAD software used in architecture and construction.
The company has underperformed the broader tech sector of late, and last year activist investor Starboard Value jumped into the stock to push for improvements in operations and financial performance, including cost cuts. In February, Autodesk slashed 9% of its workforce, and two months later the company settled with Starboard, adding two newcomers to its board.
The stock is still trailing the Nasdaq for the year, but climbed 9.1% on Friday after Autodesk reported results that exceeded Wall Street estimates and increased its full-year revenue guidance.
Last year, Autodesk introduced Project Bernini to develop new AI models and create what it calls “AI-driven CAD engines.”
On Thursday’s earnings call, CEO Andrew Anagnost was asked what he’s most excited about across his company’s product portfolio when it comes to AI.
Anagnost touted the ability of Autodesk to help customers simplify workflow across products and promoted the Autodesk Assistant as a way to enhance productivity through simple prompts.
He also addressed the elephant in the room: The existential threat that AI presents.
“AI may eat software,” he said, “but it’s not gonna eat Autodesk.”
Meta Platforms CEO Mark Zuckerberg departs after attending a Federal Trade Commission trial that could force the company to unwind its acquisitions of messaging platform WhatsApp and image-sharing app Instagram, at U.S. District Court in Washington, D.C., U.S., April 15, 2025.
Nathan Howard | Reuters
Meta on Friday said it is making temporary changes to its artificial intelligence chatbot policies related to teenagers as lawmakers voice concerns about safety and inappropriate conversations.
The social media giant is now training its AI chatbots so that they do not generate responses to teenagers about subjects like self-harm, suicide and disordered eating, and so that they avoid potentially inappropriate romantic conversations, a Meta spokesperson confirmed.
The company said AI chatbots will instead point teenagers to expert resources when appropriate.
“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” the company said in a statement.
Additionally, teenage users of Meta apps like Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes.
The company said it’s unclear how long these temporary modifications will last, but they will begin rolling out over the next few weeks across the company’s apps in English-speaking countries. The “interim changes” are part of a longer-term effort to strengthen teen safety.
Last week, Sen. Josh Hawley, R-Mo., said that he was launching an investigation into Meta following a Reuters report about the company permitting its AI chatbots to engage in “romantic” and “sensual” conversations with teens and children.
The Reuters report described an internal Meta document that detailed permissible AI chatbot behaviors that staff and contract workers should take into account when developing and training the software.
In one example, the document cited by Reuters said that a chatbot would be allowed to have a romantic conversation with an eight-year-old and could tell the minor that “every inch of you is a masterpiece – a treasure I cherish deeply.”
A Meta spokesperson told Reuters at the time that “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”
Most recently, the nonprofit advocacy group Common Sense Media released a risk assessment of Meta AI on Thursday and said the tool should not be used by anyone under the age of 18, because the “system actively participates in planning dangerous activities, while dismissing legitimate requests for support.”
“This is not a system that needs improvement. It’s a system that needs to be completely rebuilt with safety as the number-one priority, not an afterthought,” said Common Sense Media CEO James Steyer in a statement. “No teen should use Meta AI until its fundamental safety failures are addressed.”
A separate Reuters report published on Friday found “dozens” of flirty AI chatbots based on celebrities like Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez on Facebook, Instagram and WhatsApp.
The report said that when prompted, the AI chatbots would generate “photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread.”
A Meta spokesperson told CNBC in a statement that “the AI-generated imagery of public figures in compromising poses violates our rules.”
“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” the Meta spokesperson said. “Meta’s AI Studio rules prohibit the direct impersonation of public figures.”