Nvidia-backed video generation startup Luma AI is joining a growing wave of U.S. tech companies launching operations in the U.K., with major plans for a London expansion revealed on Tuesday.

The Palo Alto-headquartered startup will look to hire around 200 employees — making up around 40% of its workforce — at its new London base by early 2027, across research, engineering, partnerships and strategic development. 

The expansion comes two weeks after Luma announced a $900 million funding round led by Humain, the AI company owned by Saudi Arabia’s Public Investment Fund, which valued the startup at upwards of $4 billion. The startup previously received backing from Nvidia.

Luma is building “world models,” a class of AI models that can learn from video, audio and images in addition to the text that large language models (LLMs), like those powering OpenAI’s ChatGPT and Google’s Gemini, are trained on.

The startup is currently targeting marketing, advertising, media and entertainment sectors with its video models, which it sells via an application programming interface (API) and as part of a content creation suite.

“With this Series C raise and the upcoming build-out of global compute infrastructure, we have the capital and capacity to bring world-scale AI to creatives everywhere,” said Amit Jain, CEO and co-founder of Luma AI. “Launching across Europe and the Middle East is the logical next step in putting this power directly in the hands of storytellers, agencies and brands globally.”

The U.K. is the starting point of the expansion because of its access to talent, Jain told CNBC.

“London has some of the best people when it comes to research, given the universities here and institutions like DeepMind,” he said. “We also consider London to be the entry point to the European market.”

AI generated image created by Luma’s Ray3 model (Luma AI)

Luma is the latest in a wave of North American AI labs doubling down on the U.K. and Europe as they look to take advantage of talent pools and revenue opportunities.

In November, San Francisco-based Anthropic announced plans to open offices in Paris and Munich, months after kicking off a hiring spree in London and Dublin. In September, Canadian AI startup Cohere said it would open a Paris office as its EMEA headquarters, and OpenAI announced a new office in Munich in February.

While world models may not yet be as developed as LLMs, some researchers say they are just as crucial, if not more so, in the pursuit of artificial general intelligence (AGI).

“These kinds of visual models are about a year to a year-and-a-half behind language models right now,” said Jain. 

But world models will become the “natural interface” for AI for most day-to-day use in time, he predicted, pointing to the amount of time people spend watching video content each day. 

Tech giants including Google, Meta and Nvidia are all developing world models for a range of use cases.

Luma released its latest model, Ray3, in September. Jain told CNBC it benchmarks higher than OpenAI’s Sora and at similar levels to Google’s Veo 3.

Beta stock jumps 9% on $1 billion motor deal with air taxi maker Eve Air Mobility

Beta Technologies shares surged more than 9% after air taxi maker Eve Air Mobility announced an up to $1 billion deal to buy motors from the Vermont-based company.

Eve, which was started by Brazilian airplane maker Embraer and now operates as Eve Holding, said the manufacturing deal could be worth as much as $1 billion over 10 years. The Florida-based company said it has a backlog of 2,800 vehicles.

Shares of Eve Holding gained 14%.

Eve CEO Johann Bordais called the deal a “pivotal milestone” in the advancement of the company’s electric vertical takeoff and landing, or eVTOL, technology.

“Their electric motor technology will play a critical role in powering our aircraft during cruise, supporting the maturity of our propulsion architecture as we progress toward entry into service,” he said in a release.

Amazon launches cloud AI tool to help engineers recover from outages faster

Amazon’s cloud unit on Tuesday announced AI-enabled software designed to help clients better understand and recover from outages.

DevOps Agent, as the artificial intelligence tool from Amazon Web Services is called, predicts the cause of technical hiccups using input from third-party tools such as Datadog and Dynatrace. AWS said customers can sign up to use the tool Tuesday in a preview, before Amazon starts charging for the service.

The AI outage tool from AWS is intended to help companies more quickly figure out what caused an outage and implement fixes, Swami Sivasubramanian, vice president of agentic AI at AWS, told CNBC. It’s what site reliability engineers, or SREs, do at many companies that provide online services.

SREs try to prevent downtime and jump into action during live incidents. Startups such as Resolve and Traversal have started marketing AI assistants for these experts. Microsoft’s Azure cloud group introduced an SRE Agent in May.

Rather than waiting for on-call staff members to figure out what happened, the AWS DevOps Agent automatically assigns work to agents that look into different hypotheses, Sivasubramanian said.

“By the time the on-call ops team member dials in, they have an incident report with preliminary investigation of what could be the likely outcome, and then suggest what could be the remediation as well,” Sivasubramanian told CNBC ahead of AWS’ re:Invent conference in Las Vegas this week.

Commonwealth Bank of Australia has tested the AWS DevOps Agent. In under 15 minutes, the software found the root cause of an issue that would have taken a veteran engineer hours, AWS said in a statement.

The tool relies on Amazon’s in-house AI models and those from other providers, a spokesperson said.

AWS has been selling software in addition to raw infrastructure for many years. Amazon began renting out server space and storage to developers in the mid-2000s, and technology companies such as Google, Microsoft and Oracle have followed.

Since the launch of ChatGPT in 2022, these cloud infrastructure providers have been trying to demonstrate how generative AI models, which are often trained in large cloud computing data centers, can speed up work for software developers.

Over the summer, Amazon announced Kiro, a so-called vibe coding tool that produces and modifies source code based on user text prompts. In November, Google debuted similar software for individual software developers called Antigravity, and Microsoft sells subscriptions to GitHub Copilot.

Amazon to let cloud clients customize AI models midway through training for $100,000 a year

Attendees pass an Amazon Web Services logo during AWS re:Invent 2024, a conference hosted by Amazon Web Services, at The Venetian hotel in Las Vegas on Dec. 3, 2024.

Amazon has found a way to let cloud clients extensively customize generative AI models. The catch is that the system costs $100,000 per year.

The Nova Forge offering from Amazon Web Services gives organizations access to Amazon’s AI models in various stages of training so they can incorporate their own data earlier in the process.

Already, companies can fine-tune large language models after they’ve been trained. The results with Nova Forge will lean more heavily on the data that customers supply. Nova Forge customers will also have the option to refine open-weight models, but training data and computing infrastructure are not included.

Organizations that assemble their own models can end up spending hundreds of millions or even billions of dollars, making Nova Forge comparatively affordable, Amazon said.

AWS released its own models under the Nova brand in 2024, but they aren’t the first choice for most software developers. A July survey from Menlo Ventures said that by the middle of this year, Amazon-backed Anthropic controlled 32% of the market for enterprise LLMs, followed by OpenAI with 25%, Google with 20% and Meta with 9% — Amazon Nova had a less than 5% share, a Menlo spokesperson said.

The Nova models are available through AWS’ Bedrock service for running models on Amazon cloud infrastructure, as are Anthropic’s Claude 4.5 models.

“We are a frontier lab that has focused on customers,” Rohit Prasad, Amazon head scientist for artificial general intelligence, told CNBC in an interview. “Our customers wanted it. We have invented on their behalf to make this happen.”

Nova Forge is also in use by internal Amazon customers, including teams that work on the company’s stores and the Alexa AI assistant, Prasad said.

Reddit needed an AI model for moderating content that would be sophisticated about the many subjects people discuss on the social network. Engineers found that a Nova model enhanced with Reddit data through Forge performed better than commercially available large-scale models, Prasad said. Booking.com, Nimbus Therapeutics, the Nomura Research Institute and Sony are also building models with Forge, Amazon said.

Organizations can request that Amazon engineers help them build their Forge models, but that assistance is not included in the new service’s $100,000 annual fee.

AWS is also introducing new models for developers at its re:Invent conference in Las Vegas this week.

Nova 2 Pro is a reasoning model that performs at least as well as Anthropic’s Claude Sonnet 4.5, OpenAI’s GPT-5 and GPT-5.1, and Google’s Gemini 3.0 Pro Preview in Amazon’s tests, the company said. Reasoning involves running extra computations, which can take additional time, to produce better answers. Nova 2 Pro will be available in early access to AWS customers with Forge subscriptions, Prasad said, meaning Forge customers and Amazon engineers will be able to try it at the same time.

Nova 2 Omni is another reasoning model that can process incoming images, speech, text and videos, and it generates images and text. It’s the first reasoning model with that range of capability, Amazon said. Amazon hopes that, by delivering a multifaceted model, it can lower the cost and complexity of incorporating AI models into applications.

Tens of thousands of organizations are using Nova models each week, Prasad said. AWS has said it has millions of customers. Nova is the second-most popular family of models in Bedrock, Prasad said, behind only Anthropic’s models.
