
Sam Altman, CEO of OpenAI, at the Hope Global Forums annual meeting in Atlanta on Dec. 11, 2023. Dustin Chambers | Bloomberg | Getty Images

OpenAI CEO Sam Altman has admitted that he was surprised by the popularity of ChatGPT, which was released as a research project a little more than a year ago. His team had spent an entire meeting debating whether it was worth even opening up the chatbot to the public.

As it turns out, OpenAI’s decision to launch ChatGPT in November 2022 became the defining moment for generative artificial intelligence and set the stage for a rush of investments and a mountain of new products and services in 2023.

Some form of generative AI has made its way into virtually every industry, from financial services to biomedical research. Some 95% of utility and energy companies are discussing the use of generative AI algorithms, according to a July survey.

To see the financial effect of the generative AI rush, you need only to look at Nvidia’s bottom line. The chipmaker’s graphics processing units, or GPUs, are at the heart of the large language models created by OpenAI as well as those from Alphabet, Meta and a growing crop of heavily funded startups all battling for a slice of the generative AI pie.

Through the first three quarters of 2023, Nvidia generated $17.5 billion in net income, up more than sixfold from a year earlier. Its stock price jumped 237% this year, far exceeding any other member of the S&P 500.

Generative AI quickly became the buzzy phrase for corporate earnings calls, as every company needed a narrative. Sometimes, the story was painful, such as when digital education company Chegg said in May that it was seeing a “significant spike in student interest in ChatGPT,” which appears to be “having an impact on our new customer growth rate.” The stock plunged 48% in one day following the warning.

Perhaps no company was less prepared for the generative AI boom than OpenAI itself. In November, Altman was abruptly ousted by the board amid a dispute that reportedly involved his aggressive push to develop new commercial products at the expense of safety. Altman quickly returned to the helm, however, after employees threatened to flee and large investors banded together to fight for his reinstatement.

The public spat included a reshuffling of OpenAI’s board and shined a bright light on the debate raging between AI skeptics and evangelists. While advances in generative AI showcase the technology’s potential to unlock all sorts of business opportunities and efficiencies, fears of the algorithms’ perceived power gained equal resonance. Some of the real-life harms to minorities and vulnerable populations showed up in FTC proposals, wrongful arrests, contaminated datasets and more.

Here are some of the key areas of generative AI advancement in 2023:

The Anthropic website on a laptop arranged in New York on Aug. 15, 2023. Gabby Jones | Bloomberg | Getty Images

Chatbots

ChatGPT opened up the floodgates for investments in chatbots, as it became clear how the input of a few words could produce more thorough and creative responses than ever before.

Nearly two months after launch, ChatGPT broke records as the fastest-growing consumer app in history — until Meta’s Threads dethroned it last summer. ChatGPT now has about 100 million weekly active users, and more than 92% of Fortune 500 companies use the platform, according to OpenAI.

Earlier this year, Microsoft poured an additional $10 billion into the company, making it the biggest AI investment of the year, according to PitchBook, and OpenAI is in talks to sell employee shares at a price that would suggest a valuation of $86 billion. According to Bloomberg, the company is in discussions to potentially raise capital at a valuation of $100 billion or more.

Google was caught off guard by ChatGPT’s success and responded by accelerating the public release of its Bard chatbot, powered by its LLM called LaMDA, which stands for Language Model for Dialogue Applications.

Google has been rolling out new Bard features, including integrations with Google Search and a YouTube extension, and recently released Gemini, the new and buzzy AI model to power Bard. Gemini’s launch this month involved a marketing blitz and controversy over an edited video promoting the model’s capabilities. 

In addition to its internal investments, Google is one of many big names behind Anthropic, an AI startup that’s currently in talks to raise $750 million at a valuation of $18.4 billion. Founded by former OpenAI research executives, Anthropic is the developer of the chatbot Claude.

In July, Anthropic debuted Claude 2, which it said can summarize up to about 75,000 words, roughly the length of a book. Users can input large datasets and ask for summaries in the form of a memo, letter or story.

As a category, new generative AI chatbots have been used this year to answer questions about business strategy, as well as to design study guides, offer advice on salary negotiation and spark creative writing prompts. They’ve even assisted in writing wedding vows. 

“It’s probably one of the most influential step function changes in technology that we’ve seen,” Jill Chase, investment partner at CapitalG Ventures, told CNBC. Chase said it’s up there with the dawn of the internet and the shift to mobile. “Things like that just open up people’s imaginations,” she said.

Academics and ethicists have voiced significant concerns about the technology’s tendency to fabricate information and to propagate bias. Still, it has quickly made its way into schools, online travel, the medical industry, digital advertising and beyond. Microsoft and IBM have invested increasing amounts in enterprise AI offerings, including development studios for companies to personalize the use of LLMs. 

There are plenty of detractors.

Publishers, artists, writers and technologists have moved to pursue legal action against companies behind popular generative AI tools, concerned that their creative content is being used as free training data. John Grisham, George R.R. Martin and other prominent authors sued OpenAI in September over alleged copyright infringement.

This photo taken on Jan. 31, 2023, shows an artificial intelligence manga artist, who goes by the name Rootport, wearing gloves to protect his identity, demonstrating how he produces AI manga during an interview with AFP in Tokyo. Richard A. Brooks | AFP | Getty Images

Image and video generation 

Generative AI for images and video emerged in 2022, driven by powerful image generators such as OpenAI’s DALL-E 2, Stable Diffusion and Midjourney, as well as video-generation AI tools from Meta, Google and Amazon.

While interest in those technologies continues, progress has slowed relative to chatbots, according to Brendan Burke, an analyst at PitchBook.

“Multimedia content generation has fallen behind language in the pace of progress,” Burke told CNBC. “The initial excitement with Stable Diffusion in 2022 exposed both the general interest but also the drawbacks of AI content generation. Progress has been incremental this year, yet still disappointing for the most sophisticated content creators.”

Meta’s Instagram recently debuted a feature that allows users to change the background of Stories posts using AI. Google and Amazon have incorporated generative AI tools into advertising technology to create more appealing marketing images.

Some industry leaders say the future of generative AI is “multimodal,” bringing the various mediums together.

“The world is multimodal,” Brad Lightcap, OpenAI’s operating chief, told CNBC in a recent interview. “If you think about the way we as humans process the world and engage with the world, we see things, we hear things, we say things. The world is much bigger than text. So to us, it always felt incomplete for text and code to be the single modalities, the single interfaces that we could have to how powerful these models are and what they can do.”

Agents and assistants

After the chatbot comes the agent.

It’s not just about getting sophisticated answers; it’s also about using generative AI to complete tasks. That could mean scheduling a group hangout by scanning everyone’s calendar to make sure there are no conflicts, booking travel and activities, buying presents for loved ones or doing a specific job function such as outbound sales.

Last month, OpenAI announced custom GPTs, or customized, niche versions of ChatGPT that users can personalize for getting travel recommendations, recipe help or startup advice. However, the company chose to delay the release of the platform that would popularize different use cases — the “GPT store” — until next year. 

One type of AI assistant that has gained popularity is for coding. Take, for example, Microsoft’s GitHub coding repository. GitHub CEO Thomas Dohmke wrote in a blog post earlier this year that an average of 46% of all code on GitHub, “across all programming languages,” was AI generated.

Last month, GitHub introduced a more expensive version of its Copilot assistant that can explain and provide recommendations about internal source code.

“Copilot, when it started at the very beginning, was thought to be a tool that could help developers write docs,” Kyle Daigle, GitHub’s chief operating officer, told CNBC in an interview. In the past year, he said, the company has expanded the technology, looking for more places “to help developers collaborate and work together and solve problems outside of just the code.” 

But PitchBook’s Burke said coding assistants are in their very early days and currently can only do “a small part” of a developer’s work. That’s true in the broader world, he said. 

“Users have found how little AI can do for them this year,” he said. “AI knows a lot, but it can’t do a lot yet. We’re still far away from AI truly being able to do the complex tasks that people are used to doing in their personal lives and at work. That has been shown by the struggles of AI agents this year.” 

Nvidia CEO Jensen Huang speaks at the Supermicro keynote presentation during the Computex conference in Taipei on June 1, 2023. Walid Berrazeg | Sopa Images | Lightrocket | Getty Images

Looking ahead 

Overall, 2023 was a big year for consumer excitement surrounding generative AI and for adoption of a few popular products. But business success stories have been few and far between.

“It was an especially transformative year from a consumer perspective where AI became much more tangible than before,” Grace Isford, a partner at Lux Capital, said in an interview. “AI is nothing new, but the awareness — and in turn, the adoption — has skyrocketed. Many more hackers and builders are leveraging the technology and the really exciting advancements into products.” 

CapitalG’s Chase said the consumer fascination with the space has let people “see what was possible” in AI, offering a “cake tasting” of sorts and a teasing of the imagination.

Early in the year, people “extrapolated out early exploration of that technology into lasting and enduring use cases,” Chase said. She added that there hasn’t been a straight line from early adoption and widespread use of one or two products to mainstream popularity. Companies and developers are now going back to doing research and development to “build the right infrastructure and tooling” that can hopefully lead to mass adoption.

“I think that will happen over the next year,” she said. “I think some people thought it would happen this year.”

In 2023, the overwhelming beneficiary of all the hype was clearly Nvidia. The challenge for the coming year and beyond is for businesses to show that their hefty spending on those advanced GPUs and the models they power can lead to products that allow more companies to share in the wealth.

“I thought that the excitement at the end of last year would quickly translate into enterprise adoption, but the reality is that very few companies have launched generative AI applications into production and experiments aren’t quickly translating into reliable applications,” Burke said. “We’re still looking at an outlook where companies may not widely deploy products until later next year or even the following year.”


How Elon Musk’s plan to slash government agencies and regulation may benefit his empire


Elon Musk’s business empire is sprawling. It includes electric vehicle maker Tesla, social media company X, artificial intelligence startup xAI, computer interface company Neuralink, tunneling venture Boring Company and aerospace firm SpaceX. 

Some of his ventures already benefit tremendously from federal contracts. SpaceX has received more than $19 billion from contracts with the federal government, according to research from FedScout. Under a second Trump presidency, more lucrative contracts could come its way. SpaceX is on track to take in billions of dollars annually from prime contracts with the federal government for years to come, according to FedScout CEO Geoff Orazem.

Musk, who has frequently blamed the government for stifling innovation, could also push for less regulation of his businesses. Earlier this month, Musk and former Republican presidential candidate Vivek Ramaswamy were tapped by Trump to lead a government efficiency group called the Department of Government Efficiency, or DOGE.

In a recent commentary piece in the Wall Street Journal, Musk and Ramaswamy wrote that DOGE will “pursue three major kinds of reform: regulatory rescissions, administrative reductions and cost savings.” They went on to say that many existing federal regulations were never passed by Congress and should therefore be nullified, which President-elect Trump could accomplish through executive action. Musk and Ramaswamy also championed the large-scale auditing of agencies, calling out the Pentagon for failing its seventh consecutive audit. 

“The number one way Elon Musk and his companies would benefit from a Trump administration is through deregulation and defanging, you know, giving fewer resources to federal agencies tasked with oversight of him and his businesses,” says CNBC technology reporter Lora Kolodny.

To learn how else Elon Musk and his companies may benefit from having the ear of the president-elect, watch the video.


Why X’s new terms of service are driving some users to leave Elon Musk’s platform


Elon Musk attends the America First Policy Institute gala at Mar-A-Lago in Palm Beach, Florida, Nov. 14, 2024. Carlos Barria | Reuters

X’s new terms of service, which took effect Nov. 15, are driving some users off Elon Musk’s microblogging platform. 

The new terms include expansive permissions requiring users to allow the company to use their data to train X’s artificial intelligence models while also making users liable for as much as $15,000 in damages if they use the platform too much. 

The terms are prompting some longtime users of the service, both celebrities and everyday people, to post that they are taking their content to other platforms. 

“With the recent and upcoming changes to the terms of service — and the return of volatile figures — I find myself at a crossroads, facing a direction I can no longer fully support,” actress Gabrielle Union posted on X the same day the new terms took effect, while announcing she would be leaving the platform.

“I’m going to start winding down my Twitter account,” a user with the handle @mplsFietser said in a post. “The changes to the terms of service are the final nail in the coffin for me.”

It’s unclear just how many users have left X due specifically to the company’s new terms of service, but since the start of November, many social media users have flocked to Bluesky, a microblogging startup whose origins stem from Twitter, the former name for X. Some users with new Bluesky accounts have posted that they moved to the service due to Musk and his support for President-elect Donald Trump.

Bluesky’s U.S. mobile app downloads have skyrocketed 651% since the start of November, according to estimates from Sensor Tower. In the same period, X and Meta’s Threads are up 20% and 42%, respectively. 

X and Threads have much larger monthly user bases. Although Musk said in May that X has 600 million monthly users, market intelligence firm Sensor Tower estimates X had 318 million monthly users as of October. That same month, Meta said Threads had nearly 275 million monthly users. Bluesky told CNBC on Thursday it had reached 21 million total users this week.

Here are some of the noteworthy changes in X’s new service terms and how they compare with those of rivals Bluesky and Threads.

Artificial intelligence training

X has come under heightened scrutiny because of its new terms, which say that any content on the service can be used royalty-free to train the company’s artificial intelligence large language models, including its Grok chatbot.

“You agree that this license includes the right for us to (i) provide, promote, and improve the Services, including, for example, for use with and training of our machine learning and artificial intelligence models, whether generative or another type,” X’s terms say.

Additionally, any “user interactions, inputs and results” shared with Grok can be used for what it calls “training and fine-tuning purposes,” according to the Grok section of the X app and website. This specific function, though, can be turned off manually. 

X’s terms do not specify whether users’ private messages can be used to train its AI models, and the company did not respond to a request for comment.

“You should only provide Content that you are comfortable sharing with others,” read a portion of X’s terms of service agreement.

Though X’s new terms may be expansive, Meta’s policies aren’t that different. 

The maker of Threads uses “information shared on Meta’s Products and services” to get its training data, according to the company’s Privacy Center. This includes “posts or photos and their captions.” There is also no direct way for users outside of the European Union to opt out of Meta’s AI training. Meta keeps training data “for as long as we need it on a case-by-case basis to ensure an AI model is operating appropriately, safely and efficiently,” according to its Privacy Center. 

Under Meta’s policy, private messages with friends or family aren’t used to train AI unless one of the users in a chat chooses to share it with the models, which can include Meta AI and AI Studio.

Bluesky, which has seen a user growth surge since Election Day, doesn’t do any generative AI training. 

“We do not use any of your content to train generative AI, and have no intention of doing so,” Bluesky said in a post on its platform Friday, confirming the same to CNBC as well.



The Pentagon’s battle inside the U.S. for control of a new Cyber Force


A recent Chinese cyber-espionage attack inside the nation’s major telecom networks, one that may have reached as high as the communications of President-elect Donald Trump and Vice President-elect J.D. Vance, was described this week by one U.S. senator as “far and away the most serious telecom hack in our history.”

The U.S. has yet to figure out the full scope of what China accomplished, and whether or not its spies are still inside U.S. communication networks.

“The barn door is still wide open, or mostly open,” Sen. Mark Warner of Virginia, chairman of the Senate Intelligence Committee, told the New York Times on Thursday.

The revelations highlight the rising cyberthreats tied to geopolitics and nation-state actor rivals of the U.S., but inside the federal government, there’s disagreement on how to fight back, with some advocates calling for the creation of an independent federal U.S. Cyber Force. In September, the Department of Defense formally appealed to Congress, urging lawmakers to reject that approach.

One of the most prominent voices advocating for the new branch is the Foundation for Defense of Democracies, a national security think tank, but the issue extends far beyond any single group. In June, defense committees in both the House and Senate approved measures calling for independent evaluations of the feasibility of creating a separate cyber branch, as part of the annual defense policy deliberations.

Drawing on insights from more than 75 active-duty and retired military officers experienced in cyber operations, the FDD’s 40-page report highlights what it says are chronic structural issues within the U.S. Cyber Command (CYBERCOM), including fragmented recruitment and training practices across the Army, Navy, Air Force, and Marines.

“America’s cyber force generation system is clearly broken,” the FDD wrote, citing comments made in 2023 by then-leader of U.S. Cyber Command, Army General Paul Nakasone, who took over the role in 2018 and described current U.S. military cyber organization as unsustainable: “All options are on the table, except the status quo,” Nakasone had said.

Concern with Congress and a changing White House

The FDD analysis points to “deep concerns” that have existed within Congress for a decade — among members of both parties — about whether the military can staff up to successfully defend cyberspace. Talent shortages, inconsistent training and misaligned missions are undermining CYBERCOM’s capacity to respond effectively to complex cyber threats, it says. Creating a dedicated branch, proponents argue, would better position the U.S. in cyberspace. The Pentagon, however, warns that such a move could disrupt coordination, increase fragmentation and ultimately weaken U.S. cyber readiness.

As the Pentagon doubles down on its resistance to establishment of a separate U.S. Cyber Force, the incoming Trump administration could play a significant role in shaping whether America leans toward a centralized cyber strategy or reinforces the current integrated framework that emphasizes cross-branch coordination.

Trump is known for his assertive national security measures, and his 2018 National Cyber Strategy emphasized embedding cyber capabilities across all elements of national power, focusing on cross-departmental coordination and public-private partnerships rather than creating a standalone cyber entity. At the time, the Trump administration emphasized centralizing civilian cybersecurity efforts under the Department of Homeland Security while tasking the Department of Defense with addressing more complex, defense-specific cyber threats. Trump’s pick for Secretary of Homeland Security, South Dakota Governor Kristi Noem, has talked up her, and her state’s, focus on cybersecurity.

Former Trump officials believe that a second Trump administration will take an aggressive stance on national security, fill gaps at the Energy Department, and reduce regulatory burdens on the private sector. They anticipate a stronger focus on offensive cyber operations, tailored threat vulnerability protection, and greater coordination between state and local governments. Changes will be coming at the top of the Cybersecurity and Infrastructure Security Agency, which was created during Trump’s first term and where current director Jen Easterly has announced she will leave once Trump is inaugurated.

Cyber Command 2.0 and the U.S. military

John Cohen, executive director of the Program for Countering Hybrid Threats at the Center for Internet Security, is among those who share the Pentagon’s concerns. “We can no longer afford to operate in stovepipes,” Cohen said, warning that a separate cyber branch could worsen existing silos and further isolate cyber operations from other critical military efforts.

Cohen emphasized that adversaries like China and Russia employ cyber tactics as part of broader, integrated strategies that include economic, physical, and psychological components. To counter such threats, he argued, the U.S. needs a cohesive approach across its military branches. “Confronting that requires our military to adapt to the changing battlespace in a consistent way,” he said.

In 2018, CYBERCOM certified its Cyber Mission Force teams as fully staffed, but the FDD and others have expressed concern that personnel were shifted between teams to meet staffing goals — a move they say masked deeper structural problems. Nakasone has called for a CYBERCOM 2.0, saying in comments early this year, “How do we think about training differently? How do we think about personnel differently?” and adding that a major issue has been the approach to military staffing within the command.

Austin Berglas, a former head of the FBI’s cyber program in New York who worked on consolidation efforts inside the Bureau, believes a separate cyber force could enhance U.S. capabilities by centralizing resources and priorities. “When I first took over the [FBI] cyber program … the assets were scattered,” said Berglas, who is now the global head of professional services at supply chain cyber defense company BlueVoyant. Centralization brought focus and efficiency to the FBI’s cyber efforts, he said, and it’s a model he believes would benefit the military’s cyber efforts as well. “Cyber is a different beast,” Berglas said, emphasizing the need for specialized training, advancement, and resource allocation that isn’t diluted by competing military priorities.

Berglas also pointed to the ongoing “cyber arms race” with adversaries like China, Russia, Iran, and North Korea. He warned that without a dedicated force, the U.S. risks falling behind as these nations expand their offensive cyber capabilities and exploit vulnerabilities across critical infrastructure.

Nakasone said in his comments earlier this year that a lot has changed since 2013 when U.S. Cyber Command began building out its Cyber Mission Force to combat issues like counterterrorism and financial cybercrime coming from Iran. “Completely different world in which we live in today,” he said, citing the threats from China and Russia.

Brandon Wales, a former executive director of CISA, said there is a need to bolster U.S. cyber capabilities, but he cautions against major structural changes during a period of heightened global threats.

“A reorganization of this scale is obviously going to be disruptive and will take time,” said Wales, who is now vice president of cybersecurity strategy at SentinelOne.

He cited China’s preparations for a potential conflict over Taiwan as a reason the U.S. military needs to maintain readiness. Rather than creating a new branch, Wales supports initiatives like Cyber Command 2.0 and its aim to enhance coordination and capabilities within the existing structure. “Large reorganizations should always be the last resort because of how disruptive they are,” he said.

Wales says it’s important to ensure any structural changes do not undermine integration across military branches and recognize that coordination across existing branches is critical to addressing the complex, multidomain threats posed by U.S. adversaries. “You should not always assume that centralization solves all of your problems,” he said. “We need to enhance our capabilities, both defensively and offensively. This isn’t about one solution; it’s about ensuring we can quickly see, stop, disrupt, and prevent threats from hitting our critical infrastructure and systems,” he added.
