
Sundar Pichai, CEO of Google and Alphabet, speaks on artificial intelligence during a Bruegel think tank conference in Brussels, Belgium, on Jan. 20, 2020.

Yves Herman | Reuters

Google on Wednesday announced MedLM, a suite of new health-care-specific artificial intelligence models designed to help clinicians and researchers carry out complex studies, summarize doctor-patient interactions and more.

The move marks Google’s latest attempt to monetize AI tools for the health-care industry, as it battles rivals like Amazon and Microsoft for market share. Companies that have been testing Google’s technology, such as HCA Healthcare, told CNBC the potential for impact is real, though they are taking steps to implement it carefully.

The MedLM suite includes a large and a medium-sized AI model, both built on Med-PaLM 2, a large language model trained on medical data that Google first announced in March. It is generally available to eligible Google Cloud customers in the U.S. starting Wednesday, and Google said while the cost of the AI suite varies depending on how companies use the different models, the medium-sized model is less expensive to run. 

Google said it also plans to introduce health-care-specific versions of Gemini, the company’s newest and “most capable” AI model, to MedLM in the future.

Aashima Gupta, Google Cloud’s global director of health-care strategy and solutions, said the company found that different medically tuned AI models can carry out certain tasks better than others. That’s why Google decided to introduce a suite of models instead of trying to build a “one-size-fits-all” solution. 

For instance, Google said its larger MedLM model is better for carrying out complicated tasks that require deep knowledge and lots of compute power, such as conducting a study using data from a health-care organization’s entire patient population. But if companies need a more agile model that can be optimized for specific or real-time functions, such as summarizing an interaction between a doctor and patient, the medium-sized model should work better, according to Gupta.

Real-world use cases

A Google Cloud logo at the Hannover Messe industrial technology fair in Hanover, Germany, on Thursday, April 20, 2023.

Krisztian Bocsi | Bloomberg | Getty Images

When Google announced Med-PaLM 2 in March, the company initially said it could be used to answer questions like “What are the first warning signs of pneumonia?” and “Can incontinence be cured?” But as the company has tested the technology with customers, the use cases have changed, according to Greg Corrado, head of Google’s health AI. 

Corrado said clinicians don’t often need help with “accessible” questions about the nature of a disease, so Google hasn’t seen much demand for those capabilities from customers. Instead, health organizations often want AI to help solve more back-office or logistical problems, like managing paperwork.  

“They want something that’s helping them with the real pain points and slowdowns that are in their workflow, that only they know,” Corrado told CNBC. 

For instance, HCA Healthcare, one of the largest health systems in the U.S., has been testing Google’s AI technology since the spring. The company announced an official collaboration with Google Cloud in August that aims to use its generative AI to “improve workflows on time-consuming tasks.” 

Dr. Michael Schlosser, senior vice president of care transformation and innovation at HCA, said the company has been using MedLM to help emergency medicine physicians automatically document their interactions with patients. For instance, HCA uses an ambient speech documentation system from a company called Augmedix to transcribe doctor-patient meetings. Google’s MedLM suite can then take those transcripts and break them up into the components of an ER provider note.
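The workflow Schlosser describes — transcript in, sectioned provider note out — can be sketched in miniature. The section names and keyword rules below are purely illustrative stand-ins, not HCA’s or Google’s actual note schema, and a keyword match is of course a crude proxy for what a language model does:

```python
# Toy sketch: route sentences from a visit transcript into the
# sections of an ER provider note. Section names and keyword rules
# are invented for illustration, not an actual MedLM schema.
SECTION_KEYWORDS = {
    "History of Present Illness": ["pain", "started", "symptom"],
    "Physical Exam": ["exam", "tender", "vitals"],
    "Plan": ["order", "prescribe", "follow up"],
}

def draft_note(transcript_sentences):
    """Assign each sentence to the first note section whose
    keywords it matches; unmatched sentences are dropped."""
    note = {section: [] for section in SECTION_KEYWORDS}
    for sentence in transcript_sentences:
        lowered = sentence.lower()
        for section, keywords in SECTION_KEYWORDS.items():
            if any(k in lowered for k in keywords):
                note[section].append(sentence)
                break
    return note

transcript = [
    "The pain started two days ago in the lower abdomen.",
    "On exam, the abdomen is tender to palpation.",
    "We will order a CT scan and prescribe analgesics.",
]
note = draft_note(transcript)
```

In the real system, the model drafts each section and a clinician reviews and edits the result, as Schlosser describes below.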

Schlosser said HCA has been using MedLM within emergency rooms at four hospitals, and the company wants to expand use over the next year. By January, Schlosser added, he expects Google’s technology will be able to successfully generate more than half of a note without help from providers. For doctors who can spend up to four hours a day on clerical paperwork, Schlosser said saving that time and effort makes a meaningful difference. 

“That’s been a huge leap forward for us,” Schlosser told CNBC. “We now think we’re going to be at a point where the AI, by itself, can create 60-plus percent of the note correctly on its own before we have the human doing the review and the editing.” 

Schlosser said HCA is also working to use MedLM to develop a handoff tool for nurses. The tool can read through the electronic health record and identify relevant information for nurses to pass along to the next shift. 

Handoffs are “laborious” and a real pain point for nurses, so it would be “powerful” to automate the process, Schlosser said. Nurses across HCA’s hospitals carry out around 400,000 handoffs a week, and two HCA hospitals have been testing the nurse handoff tool. Schlosser said nurses conduct a side-by-side comparison of a traditional handoff and an AI-generated handoff and provide feedback.

With both use cases, though, HCA has found that MedLM is not foolproof.

Schlosser said the fact that AI models can spit out incorrect information is a big challenge, and HCA has been working with Google to come up with best practices to minimize those fabrications. He added that token limits, which restrict the amount of data that can be fed to the model, and managing the AI over time have been additional challenges for HCA. 
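Token limits of the kind Schlosser mentions are commonly handled by splitting input into pieces before it reaches the model. A minimal sketch, counting whitespace-delimited words as a rough stand-in for real tokens (a production system would count tokens with the model’s own tokenizer):

```python
def chunk_words(text, max_words=512):
    """Split text into pieces of at most max_words whitespace-delimited
    words, a crude stand-in for a model's actual token limit."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# A 1,200-word transcript splits into chunks of 512, 512 and 176 words.
chunks = chunk_words("word " * 1200, max_words=512)
```

Each chunk would then be summarized separately and the partial summaries merged, which is one reason long-document workflows need the ongoing management Schlosser alludes to.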

“What I would say right now, is that the hype around the current use of these AI models in health care is outstripping the reality,” Schlosser said. “Everyone’s contending with this problem, and no one has really let these models loose in a scaled way in the health-care systems because of that.”

Even so, Schlosser said providers’ initial response to MedLM has been positive, and they recognize that they are not working with the finished product yet. He said HCA is working hard to implement the technology in a responsible way to avoid putting patients at risk.

“We’re being very cautious with how we approach these AI models,” he said. “We’re not using those use cases where the model outputs can somehow affect someone’s diagnosis and treatment.”


Google also plans to introduce health-care-specific versions of Gemini to MedLM in the future. Alphabet shares popped 5% after Gemini’s launch earlier this month, but Google faced scrutiny over its demonstration video, which was not conducted in real time, the company confirmed to Bloomberg.

In a statement, Google told CNBC: “The video is an illustrative depiction of the possibilities of interacting with Gemini, based on real multimodal prompts and outputs from testing. We look forward to seeing what people create when access to Gemini Pro opens on December 13.”

Corrado and Gupta of Google said Gemini is still in early stages, and it needs to be tested and evaluated with customers in controlled health-care settings before the model rolls out through MedLM more broadly. 

“We’ve been testing Med-PaLM 2 with our customers for months, and now we’re comfortable taking that as part of MedLM,” Gupta said. “Gemini will follow the same thing.” 

Schlosser said HCA is “very excited” about Gemini, and the company is already working out plans to test the technology. “We think that may give us an additional level of performance when we get that,” he said.

Another company that has been using MedLM is BenchSci, which aims to use AI to solve problems in drug discovery. Google is an investor in BenchSci, and the company has been testing its MedLM technology for a few months.  

Liran Belenzon, BenchSci’s co-founder and CEO, said the company has merged MedLM’s AI with BenchSci’s own technology to help scientists identify biomarkers, which are key to understanding how a disease progresses and how it can be cured. 

Belenzon said the company spent a lot of time testing and validating the model, including providing Google with feedback about necessary improvements. Now, Belenzon said BenchSci is in the process of bringing the technology to market more broadly.  

“[MedLM] doesn’t work out of the box, but it helps accelerate your specific efforts,” he told CNBC in an interview. 

Corrado said research around MedLM is ongoing, and he thinks Google Cloud’s health-care customers will be able to tune models for multiple different use cases within an organization. He added that Google will continue to develop domain-specific models that are “smaller, cheaper, faster, better.”  

Like BenchSci, Deloitte tested MedLM “over and over” before deploying the technology to health-care clients, said Dr. Kulleni Gebreyes, Deloitte’s U.S. life sciences and health-care consulting leader.

Deloitte is using Google’s technology to help health systems and health plans answer members’ questions about accessing care. If a patient needs a colonoscopy, for instance, they can use MedLM to look for providers based on gender, location or benefit coverage, as well as other qualifiers. 
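That kind of provider lookup amounts to filtering a directory on structured attributes. A toy sketch with invented fields and records — not Deloitte’s or Google’s actual data model, and without the natural-language layer MedLM adds on top:

```python
# Hypothetical provider directory; field names are invented for illustration.
providers = [
    {"name": "Dr. A", "gender": "F", "city": "Austin", "accepts_plan": True},
    {"name": "Dr. B", "gender": "M", "city": "Austin", "accepts_plan": False},
    {"name": "Dr. C", "gender": "F", "city": "Dallas", "accepts_plan": True},
]

def find_providers(directory, **criteria):
    """Return providers matching every supplied attribute exactly."""
    return [
        p for p in directory
        if all(p.get(field) == value for field, value in criteria.items())
    ]

matches = find_providers(providers, gender="F", city="Austin", accepts_plan=True)
```

The hard part, as Gebreyes notes next, is mapping a patient’s free-text request (with misspellings and colloquialisms) onto these structured criteria in the first place.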

Gebreyes said clients have found that MedLM is accurate and efficient, but it’s not always great at deciphering a user’s intent. It can be a challenge if patients don’t know the right word or spelling for colonoscopy, or use other colloquial terms, she said. 

“Ultimately, this does not substitute a diagnosis from a trained professional,” Gebreyes told CNBC. “It brings expertise closer and makes it more accessible.”

Rivian announces new AI tech, in-house chip and robotaxi ambitions

Rivian debuted new tech at its first “Autonomy and AI Day” on Thursday in Palo Alto, California.

Credit: Rivian

Electric vehicle maker Rivian Automotive has developed a custom chip, car computer and new artificial intelligence models that will enable it to bring self-driving features to its forthcoming vehicles, the company revealed at its first “Autonomy and AI Day” on Thursday in Palo Alto, California.

Rivian also said it plans to roll out an Autonomy+ subscription with “continuously expanding capabilities” to customers in early 2026, to be powered by its Rivian Autonomy Processors and autonomy computers.

The Autonomy+ offering will be priced at $2,500 as a one-time upfront purchase or $49.99 per month to start. By comparison, competitor Tesla offers its premium FSD (Supervised) option for $8,000 upfront or $99 per month.
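Ignoring discounting and future price changes, the monthly plan stays cheaper than the one-time purchase through month 50 for Rivian and month 80 for Tesla — a quick arithmetic check:

```python
import math

def breakeven_months(upfront, monthly):
    """First month in which cumulative subscription payments exceed
    the one-time upfront price (simple division; ignores discounting
    and any price changes)."""
    return math.ceil(upfront / monthly)

rivian = breakeven_months(2500, 49.99)  # Autonomy+
tesla = breakeven_months(8000, 99)      # FSD (Supervised)
```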

The company said in a statement that a near-future software update will include a “Universal Hands-Free” capability, allowing Rivian customers “hands-free driving” on “over 3.5 million miles of roads in North America, covering the vast majority of marked roads in the US.”

Unlike its primary competitor, Tesla, Rivian said it intends to use lidar, or light detection and ranging, systems and radar sensors in its forthcoming cars to enable “level 4,” or fully automated driving, as defined by SAE Levels of Driving Automation.

A passenger can sleep in the back seat in a level 4 self-driving car while it carries them to their destination in normal traffic and weather conditions. Waymo, the Alphabet-owned robotaxi leader in the U.S., considers its vehicles level 4.

Rivian CEO RJ Scaringe said Thursday that its forthcoming self-driving vehicles enable the company to pursue robotaxis, which Tesla has promised for years but has yet to launch.

“Now, while our initial focus will be on personally owned vehicles, which today represent a vast majority of the miles in the United States, this also enables us to pursue opportunities in the rideshare space,” Scaringe said during the event.


Rivian and Tesla stock performance since Rivian went public.

Rivian is not alone in aiming to deliver autonomous systems that meet level 4 expectations. Along the way, automakers are rolling out partially automated features to drivers, who generally want them to reduce fatigue on long drives or to make driving safer overall.

Tesla and General Motors are working on their own proprietary driverless systems, while Honda, Lucid and Nissan have partnered with venture-backed autonomous vehicle tech startups (Helm.AI, Nuro and Wayve respectively) to develop similar systems with a range of different technical approaches.

Powering Rivian’s self-driving aspirations will be a new in-house chip developed by the company, which is set to launch in 2026. Vidya Rajagopalan, Rivian vice president of electrical hardware, said the chip uses “multi-chip module” packaging and has “high memory bandwidth,” which is “key for AI applications.” Rivian’s chip boasts bandwidth of 205 gigabytes per second.

Rivian is under pressure to prove its future growth potential to investors and to grow its customer base amid slowing sales of battery electric vehicles in the U.S. and competition from Chinese EV makers internationally.

The fully electric vehicle segment has experienced a sales slump domestically after the Trump administration put an early end in September to a $7,500 federal tax credit previously available for EV buyers in the U.S.

Shares of Rivian are up about 25% this year, but remain off more than 80% since the company’s 2021 initial public offering amid internal and external challenges.

Broadcom reports fourth quarter earnings after the bell

A Broadcom sign is pictured as the company prepares to launch new optical chip tech to fend off Nvidia in San Jose, California, U.S., September 5, 2025.

Brittany Hosea-small | Reuters

Broadcom is scheduled to report its fourth-quarter earnings after market close on Thursday.

Here’s what analysts are expecting, according to LSEG:

  • Earnings per share: $1.86, adjusted
  • Revenue: $17.49 billion

Wall Street is expecting Broadcom’s overall revenue to increase 25% in the quarter ended in October, from $14.05 billion a year earlier.
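That growth figure checks out: $17.49 billion against $14.05 billion a year earlier is about 24.5%, which rounds to the 25% Wall Street is citing:

```python
# Verify the implied year-over-year revenue growth from the article's figures.
current_estimate = 17.49e9   # analysts' Q4 revenue estimate
year_ago_revenue = 14.05e9   # revenue a year earlier

growth_pct = (current_estimate / year_ago_revenue - 1) * 100
```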

Analysts are expecting the chipmaker to guide for $1.95 in adjusted earnings per share on $18.27 billion in sales in the current quarter.

The report comes as investors increasingly see Broadcom as well-placed to capitalize on the AI infrastructure boom both with its custom chips, which it calls XPUs, and the networking technology needed to build massive data centers where thousands of AI chips work as one.

Broadcom stock is at all-time highs and has climbed 75% so far in 2025 as its custom chips, such as Google’s tensor processing units, are increasingly seen as a rival to Nvidia’s AI chips. The company has a market cap of $1.91 trillion.

During the quarter, Google released its latest AI model, Gemini 3, which it said was trained entirely on its TPU chips.

Another Broadcom AI customer is OpenAI. The AI startup said in October that it will start deploying custom chips for AI developed with Broadcom starting next year.

Broadcom CEO Hock Tan is expected to discuss the company’s pipeline of AI chips and partners with investors on Thursday.

“We expect investors to focus on FY26 AI revenue guidance, Google and OpenAI revenue contributions, and gross margin trajectory given the steep ramp of custom XPUs,” Goldman Sachs analyst James Schneider wrote in a note last month. He has the equivalent of a buy rating on the stock.


Disney’s OpenAI stake is ‘a way in’ to AI and Sora will help reach younger audience, Iger tells CNBC

The Walt Disney Company’s $1 billion equity investment in OpenAI will serve as “a way in” to artificial intelligence, which will have a significant long-term impact on Disney’s business, Disney CEO Bob Iger told CNBC’s “Squawk on the Street” on Thursday.

“We want to participate in what Sam is creating, what his team is creating,” Iger said. “We think this is a good investment for the company.”

Disney announced its investment in OpenAI as part of an agreement on Thursday that will allow users to make AI videos with its copyrighted characters on the startup’s app called Sora.

More than 200 characters, including Mickey Mouse, Darth Vader and Cinderella, will be available on the platform through a three-year licensing agreement, which Iger said would be exclusive to OpenAI at the beginning of the term.

As new AI products have exploded into the mainstream, several media companies, including Disney, have taken legal action in an effort to safeguard their intellectual property.

Iger said Disney has been “aggressive” at protecting its IP, but he has been “extremely impressed” with OpenAI’s growth as well as its willingness to license content.

“No human generation has ever stood in the way of technological advance, and we don’t intend to try,” Iger said. “We’ve always felt that if it’s going to happen, including disruption of our current business models, then we should get on board.”

Shares of Disney are up more than 1% on Thursday.


Barton Crockett, a senior internet media research analyst, told CNBC that Disney’s investment is “a great endorsement for OpenAI.”

He said it’s important for companies like Disney to understand the importance of user-generated and AI-generated content.

“I think it’s crucial for a content-creation company, like Disney, to get ahead of that,” he said.

OpenAI launched Sora in September, and the short-form video app allows people to generate content by simply typing in a prompt.

The app quickly rose to the top of Apple’s App Store, but as users flooded the platform with videos of popular brands and characters, large media players began to raise concerns around safety and copyright infringement.

Iger said Disney’s deal with OpenAI “does not in any way represent a threat to creators at all,” in part because characters’ voices as well as talent names and likenesses are not included.

“In fact, the opposite,” Iger said. “I think it honors them and respects them, in part because there’s a license fee associated with it.”

OpenAI CEO Sam Altman said there will be guardrails in place around how Disney’s characters will be used on Sora.

“It’s very important that we enable Disney to set and evolve those guardrails over time, but they will of course be in there,” Altman told CNBC on Thursday.

