A gamer uses a computer powered with an Nvidia Corp. chip at the Gamescom video games trade fair in Cologne, Germany, on Wednesday, Aug. 23, 2023. Gamescom runs until Sunday, Aug. 27. Photographer: Alex Kraus/Bloomberg via Getty Images
It’s not just human life that will be remade by the rapid advance in generative artificial intelligence. NPCs (non-playable characters), the figures who populate generated worlds in video games but have to date largely run on limited scripts — think the proprietor of the store you enter — are being tested as one of the first core gaming aspects where AI can improve gameplay and immersiveness. A recent partnership between Microsoft Xbox and Inworld AI is a prime example.
Better dialogue is just the first step. “We’re creating the tech that allows NPCs to evolve beyond predefined roles, adapt to player behavior, learn from interactions, and contribute to a living, breathing game world,” said Kylan Gibbs, chief product officer and co-founder of Inworld AI. “AI NPCs are not just a technological leap. They’re a paradigm shift for player engagement.”
It’s also a big opportunity for the gaming companies and game developers. Shifting from scripted dialogue to dynamic player-driven narratives will increase immersion in a way that drives replayability, retention, and revenue.
The interaction between powerful chips and gaming has for years been part of the success story at Nvidia, but there is now a clear sense in the gaming industry that it is just beginning to get to the point where AI will take off, after some initial uncertainty.
“All developers are interested in how artificial intelligence can impact the game development process,” John Spitzer, vice president of developer and performance technology at Nvidia, recently told CNBC, citing the powering of non-playable characters as a key test case.
It’s always been true that technological limits and possibilities overdetermine the gaming worlds developers can create. The technology behind AI NPCs, Gibbs says, will become a catalyst for a new era of storytelling, creative expression, and innovative gameplay. But much of what is to come will be “games we have yet to imagine,” he said.
Bing Gordon, an Inworld advisor and former chief creative officer at Electronic Arts, said the biggest advancements in gaming in recent decades have been through improvements in visual fidelity and graphics. Gordon, who is now chief product officer at venture capital firm Kleiner Perkins and serves on the board of gaming company Take-Two Interactive, believes AI will remake the world of both the gamer and game designer.
“AI will enable truly immersive worlds and sophisticated narratives that put players at the center of the fantasy,” Gordon said. “Moreover, AI that influences fundamental game mechanics has the potential to increase engagement and draw players deeper into your game.”
The first big opportunity for gen AI may be in gaming production. “That’s where we expect to see a major impact first,” said Anders Christofferson, a partner within Bain & Company’s media & entertainment practice.
In other professional tasks, such as creating presentations using software like PowerPoint and first drafts of speeches, gen AI is already doing days of work in minutes. Initial storyboard design and NPC dialogue creation are made for gen AI, and that will free up developer time to focus on the more immersive and creative parts of game making, Christofferson said.
Creating unpredictable worlds
A recent Bain study noted that AI is already taking on some tasks, including preproduction and planning out of game content. Soon it will play a larger role in developing characters, dialogue, and environments. Gaming executives, Bain’s research shows, expect AI to manage more than half of game development within five years to a decade. This may not lead to lower production costs — blockbuster games can run up total development costs of $1 billion — but AI will allow games to be delivered more quickly, and with enhanced quality.
Ultimately, the proliferation of gen AI should allow the development process of games to include the average gamer in content creation. This means that more games will offer what Christofferson calls a “create mode” allowing for increased user-generated content — Gibbs referred to it as “player-driven narratives.”
The current human talent shortage, a labor issue that exists across the software engineering space, isn’t something AI will solve in the short term. But it may free developers up to put more time into creative tasks and learn how best to use the new technology as they experiment. A recent CNBC study found that across the labor force, 72% of workers who use AI say it makes them more productive, consistent with research Microsoft has conducted on the impact of its Copilot AI in the workplace.
“GenAI is very nascent in gaming and the emerging landscape of players, services, etc. is very dynamic – changing by the day,” Christofferson said. “As with any emerging technologies, we expect lots of learning to take place regarding GenAI over the next few years.”
Given how much change is taking place in gaming, it may simply be too difficult to forecast AI’s scale at the moment, says Julian Togelius, associate professor of computer science and engineering at New York University. He summed up the current state of AI implementation as a “medium-size deal.”
“In the game development process, generative AI is already in use by lots of people. Programmers use Copilot and ChatGPT to help them write code, concept artists experiment with Stable Diffusion and Midjourney, and so on,” said Togelius. “There is also a big interest in automated game testing and other forms of AI-augmented QA,” he added.
The Microsoft and Inworld partnership will test two of the key AI implications in the video game industry: design-time and assistance with narrative generation. If a game has thousands of NPCs in it, having AI generate individual backstories for each of them can save enormous development time — and having generative AI working while players interact with NPCs could also enhance gameplay.
The latter will be trickier to achieve, Togelius said. “I think this is much harder to get right, partly because of the well-known hallucination issues of LLMs, and partly because games are not designed for this,” he said.
Hallucinations occur when large language models (LLMs) generate responses that deviate from context or rational meaning — the output is fluent and grammatical but describes things that don’t exist or bear no relation to the given context. “Video games are designed for predictable, hand-crafted NPCs that don’t veer off script and start talking about things that don’t exist in the game world,” Togelius said.
Traditionally, NPCs behave in predictable ways that have been hand-authored by a designer or design team. Predictability, in fact, is a core tenet of the video game world and its design process. Open-ended games are thrilling because of their sense of infinite possibility, but to function reliably they have great control and predictability built into them. Unpredictability is new territory for game design, and could be a barrier to AI gaining wider use. Working out this balance will be key to moving forward with AI.
“I think we are going to see modern AI in more and more places in games and game development very soon,” Togelius said. “And we will need new designs that work with the strengths and weaknesses of generative AI.”
SpaceX is valued at around $400 billion and is critical for U.S. space access, but it wasn’t always the powerhouse that it is today.
Elon Musk founded SpaceX in 2002. Using money that he made from the sale of PayPal, Musk and his new company developed their first rocket, the Falcon 1, to challenge existing launch providers.
“There were actually a lot of startup aerospace companies looking to take on this market. They recognized we had a monopoly provider called United Launch Alliance. They had merged the Boeing and Lockheed rocket launch capacity to one company, and they were charging the government hundreds of millions of dollars to launch satellites,” said Lori Garver, a former deputy administrator at NASA.
In 2003, Musk paraded Falcon 1 around the streets of Washington hoping to attract the attention of government agencies and the multimillion-dollar contracts that they offered. It worked, and in 2004, SpaceX secured a few million dollars from the Defense Advanced Research Projects Agency, or DARPA, and the U.S. Air Force to further develop its rockets.
Despite the government support, the company struggled. Its first three launches of the Falcon 1 failed to reach orbit.
“NASA, and specifically the initial commercial cargo contract, is what saved the company when it was on the brink of bankruptcy,” said Chris Quilty, president and co-CEO of Quilty Space, a space-focused research firm.
NASA awarded the $1.6 billion contract, known as Commercial Resupply Services, to SpaceX in 2008, just months after the first successful flight of the Falcon 1. The contract called on SpaceX to use its new rocket, the Falcon 9, along with its Dragon capsule to ferry cargo and supplies to the International Space Station over the course of 12 missions. In 2014, SpaceX won another NASA contract worth $2.6 billion to develop and operate vehicles to ferry astronauts to and from the International Space Station.
Today, SpaceX dominates large parts of the space market, from launch to satellites. In 2024, SpaceX conducted a record-breaking 134 orbital launches, more than double the number of launches by the next most prolific launch provider, the China Aerospace Science and Technology Corporation, according to science and technology consulting firm BryceTech. These 134 launches accounted for 83% of all spacecraft launched last year. According to a July report by Bloomberg, SpaceX was valued at $400 billion.
SpaceX’s Dragon capsule and Falcon 9 rocket are the primary means by which NASA launches astronauts and supplies to the International Space Station. The company’s Starlink satellites have become indispensable for providing internet access to remote areas as well as to U.S. allies during wartime. The company’s Starship rocket, though still in testing, is also key to the U.S. plan to return to the moon. SpaceX is also building a network of spy satellites for the U.S. government called Starshield as part of a $1.8 billion contract. Even competitors including Amazon and OneWeb have launched their satellites on SpaceX rockets.
“The ecosystem of space is changed by, really it’s SpaceX,” Garver said. “The lower cost of access to space is doing what we had dreamed of. It has built up a whole community of companies around the world that now have access to space.”
Sanjay Beri, chief executive officer and founder of Netskope Inc., listens during a Bloomberg West television interview in San Francisco, California.
David Paul Morris | Bloomberg | Getty Images
Cloud security platform Netskope will go public on the Nasdaq under the ticker symbol “NTSK,” the company said in an initial public offering filing Friday.
The Santa Clara, California-based company said annual recurring revenue grew 33% to $707 million, while revenues jumped 31% to about $328 million in the first half of the year.
But Netskope isn’t profitable yet. The company recorded a $170 million net loss during the first half of the year. That narrowed from a $207 million loss a year ago.
Netskope joins an increasing number of technology companies adding momentum to the surge in IPO activity after high inflation and interest rates effectively killed the market.
So far this year, design software firm Figma more than tripled in its New York Stock Exchange debut, while crypto firm Circle soared 168% in its first trading day. CoreWeave has also popped since its IPO, while trading app eToro surged 29% in its May debut.
Netskope’s offering also coincides with a busy period for cybersecurity deals.
Founded in 2012, Netskope made a name for itself in its early years in the cloud access security broker space. The company lists Palo Alto Networks, Cisco, Zscaler, Broadcom and Fortinet as its major competitors.
Netskope’s biggest backers include Accel, Lightspeed Ventures and Iconiq, which recently benefited from Figma’s stellar debut.
Morgan Stanley and JPMorgan are leading the offering. Netskope listed 13 other Wall Street banks as underwriters.
Meta CEO Mark Zuckerberg makes a keynote speech at the Meta Connect annual event at the company’s headquarters in Menlo Park, Calif., on Sept. 25, 2024.
Manuel Orbegozo | Reuters
Meta is planning to use its annual Connect conference next month to announce a deeper push into smart glasses, including the launch of the company’s first consumer-ready glasses with a display, CNBC has learned.
That’s one of the two new devices Meta is planning to unveil at the event, according to people familiar with the matter. The company will also launch its first wristband that will allow users to control the glasses with hand gestures, the people said.
Connect is a two-day conference for developers focused on virtual reality, AR and the metaverse. It was originally called Oculus Connect and obtained its current moniker after Facebook changed its parent company name to Meta in 2021.
The glasses are internally codenamed Hypernova and will include a small digital display in the right lens of the device, said the people, who asked not to be named because the details are confidential.
The device is expected to cost about $800 and will be sold in partnership with EssilorLuxottica, the people said. CNBC reported in October that Meta was working with Luxottica on consumer glasses with a display.
Meta declined to comment. Luxottica, which is based in France and Italy, didn’t respond to a request for comment.
Meta began selling smart glasses with Luxottica in 2021 when the two companies released the first-generation Ray-Ban Stories, which allowed users to take photos or videos using simple voice commands. The partnership has since expanded, and last year included the addition of advanced AI features that made the second generation of the product an unexpected hit with early adopters.
Luxottica owns a number of glasses brands, including Ray-Ban, and licenses many others like Prada. It’s unclear what brand Luxottica will use for the glasses with AR, but a Meta job listing posted this week said the company is looking for a technical program manager for its “Wearables organization,” which “is responsible for the Ray-Ban AR glasses and other wearable hardware.”
In June, CNBC reported that Meta and Luxottica plan to release Prada-branded smart glasses. Prada glasses are known for having thick frames and arms, which could make them a suitable option for the Hypernova device, one of the people said.
Last year, Meta CEO Mark Zuckerberg used Connect to showcase the company’s experimental Orion AR glasses.
Orion features AR capabilities on both lenses and can blend 3D digital visuals into the physical world, but the device served only as a prototype to show the public what could be possible with AR glasses. Still, Orion built some positive momentum for Meta, which since late 2020 has endured nearly $70 billion in losses from its Reality Labs unit, the division in charge of building hardware devices.
With Hypernova, Meta will finally be offering glasses with a display to consumers, but the company is setting low expectations for sales, some of the sources said. That’s because the device requires more components than its voice-only predecessors, and will be slightly heavier and thicker, the people said.
Meta and Ray-Ban have sold 2 million pairs of their second-generation glasses since 2023, Luxottica CEO Francesco Milleri said in February. In July, Luxottica said that revenue from sales of the smart glasses had more than tripled year over year.
As part of an extension agreement between Meta and Luxottica announced in September, Meta obtained a stake of about 3% in the glasses company, according to Bloomberg. Meta also gets exclusive rights to Luxottica’s brands for its smart glasses technology for a number of years, a person familiar with the matter told CNBC in June.
Although Hypernova will feature a display, those visual features are expected to be limited, people familiar with the matter said. They said the color display will offer about a 20-degree field of view — meaning it will appear in a small window in a fixed position — and will be used primarily to relay simple bits of information, such as incoming text messages.
Andrew Bosworth, Meta’s technology chief, said earlier this month that there are advantages to having just one display rather than two, including a lower price.
“Monocular displays have a lot going for them,” Bosworth said in an Instagram video. “They’re affordable, they’re lighter, and you don’t have disparity correction, so they’re structurally quite a bit easier.”
‘Interact with an AI assistant’
Other details of Meta’s forthcoming glasses were disclosed in a July letter from U.S. Customs and Border Protection to a lawyer representing Meta. While the letter redacted the name of the company and the product, a person with knowledge of the matter confirmed that it was in reference to Meta’s Hypernova glasses.
“This model will enable the user to take and share photos and videos, make phone calls and video calls, send and receive messages, listen to audio playback and interact with an AI assistant in different forms and methods, including voice, display, and manual interactions,” according to the letter, dated July 23.
The letter from CBP was part of routine communication between companies and the U.S. government when determining the country of origin for a consumer product. It refers to the product as “New Smart Glasses,” and says the device will feature “a lens display function that allows the user to interface with visual content arising from the Smart Features, and components providing image data retrieval, processing, and rendering capabilities.”
CBP didn’t provide a comment for this story.
The Hypernova glasses will also come paired with a wristband that will use technology built by Meta’s CTRL Labs, said people familiar with the matter. CTRL Labs, which Meta acquired in 2019, specializes in building neural technology that could allow users to control computing devices with hand and arm gestures.
The wristband is expected to be a key input component for the company’s future release of full AR glasses, so gathering data now with Hypernova could improve future versions of the wristband, the people said. Instead of using camera sensors to track body movements, as with Apple’s Vision Pro headset, Meta’s wristband uses so-called sEMG sensor technology, which reads and interprets the electrical signals from hand movements.
One of the challenges Meta has faced with the wristband involves how people choose to wear it, a person familiar with the product’s development said. If the device is too loose, it won’t be able to read the user’s electrical signals as intended, which could impact its performance, the person said. Also, the wristband has run into issues in testing related to which arm it’s worn on, how it works on men versus women and how it functions on people who wear long sleeves.
The CTRL Labs team published a paper in Nature in July about its wristband, and Meta wrote about it in a blog post. In the paper, the Meta team detailed its use of machine learning technology to make the wristband work with as many people as possible. The additional data collected by the upcoming device should improve those capabilities for future Meta smart glasses.
“We successfully prototyped an sEMG wristband with Orion, our first pair of true augmented reality (AR) glasses, but that was just the beginning,” Meta wrote in the post. “Our teams have developed advanced machine learning models that are able to transform neural signals controlling muscles at the wrist into commands that drive people’s interactions with the glasses, eliminating the need for traditional—and more cumbersome—forms of input.”
Bloomberg reported the wristband component in January.
Meta has recently started reaching out to developers to begin testing both Hypernova and the accompanying wristband, people familiar with the matter said. The company wants to court third-party developers, particularly those who specialize in generative AI, to build experimental apps that Meta can showcase to drum up excitement for the smart glasses, the people said.
In addition to Hypernova and the wristband, Meta will also announce a third generation of its voice-only smart glasses with Luxottica at Connect, one person said.
That device was also referenced by CBP in its July letter, referring to it as “The Next Generation Smart Glasses.” The glasses will include “components that provide capacitive touch functionality, allowing users to interact with the Smart Glasses through touch gestures,” the letter said.