
Sundar Pichai, CEO of Google and Alphabet, speaks on artificial intelligence during a Bruegel think tank conference in Brussels, Belgium, on Jan. 20, 2020.

Yves Herman | Reuters

Google on Wednesday announced MedLM, a suite of new health-care-specific artificial intelligence models designed to help clinicians and researchers carry out complex studies, summarize doctor-patient interactions and more.

The move marks Google’s latest attempt to monetize AI tools for the health-care industry, where it faces fierce competition for market share from rivals like Amazon and Microsoft. CNBC spoke with companies that have been testing Google’s technology, like HCA Healthcare, and with experts, who say the potential for impact is real, though organizations are taking steps to implement the technology carefully.

The MedLM suite includes a large and a medium-sized AI model, both built on Med-PaLM 2, a large language model trained on medical data that Google first announced in March. It is generally available to eligible Google Cloud customers in the U.S. starting Wednesday, and Google said while the cost of the AI suite varies depending on how companies use the different models, the medium-sized model is less expensive to run. 

Google said it also plans to introduce health-care-specific versions of Gemini, the company’s newest and “most capable” AI model, to MedLM in the future.

Aashima Gupta, Google Cloud’s global director of health-care strategy and solutions, said the company found that different medically tuned AI models can carry out certain tasks better than others. That’s why Google decided to introduce a suite of models instead of trying to build a “one-size-fits-all” solution. 

For instance, Google said its larger MedLM model is better for carrying out complicated tasks that require deep knowledge and lots of compute power, such as conducting a study using data from a health-care organization’s entire patient population. But if companies need a more agile model that can be optimized for specific or real-time functions, such as summarizing an interaction between a doctor and patient, the medium-sized model should work better, according to Gupta.

Real-world use cases

A Google Cloud logo at the Hannover Messe industrial technology fair in Hanover, Germany, on Thursday, April 20, 2023.

Krisztian Bocsi | Bloomberg | Getty Images

When Google announced Med-PaLM 2 in March, the company initially said it could be used to answer questions like “What are the first warning signs of pneumonia?” and “Can incontinence be cured?” But as the company has tested the technology with customers, the use cases have changed, according to Greg Corrado, head of Google’s health AI. 

Corrado said clinicians don’t often need help with “accessible” questions about the nature of a disease, so Google hasn’t seen much demand for those capabilities from customers. Instead, health organizations often want AI to help solve more back-office or logistical problems, like managing paperwork.  

“They want something that’s helping them with the real pain points and slowdowns that are in their workflow, that only they know,” Corrado told CNBC. 

For instance, HCA Healthcare, one of the largest health systems in the U.S., has been testing Google’s AI technology since the spring. The company announced an official collaboration with Google Cloud in August that aims to use its generative AI to “improve workflows on time-consuming tasks.” 

Dr. Michael Schlosser, senior vice president of care transformation and innovation at HCA, said the company has been using MedLM to help emergency medicine physicians automatically document their interactions with patients. For instance, HCA uses an ambient speech documentation system from a company called Augmedix to transcribe doctor-patient meetings. Google’s MedLM suite can then take those transcripts and break them up into the components of an ER provider note.
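As a rough illustration of the pipeline Schlosser describes (transcript in, structured note out), the sketch below routes transcript lines into ER-note sections using simple keyword matching. The section names, keywords, and routing logic are invented stand-ins for what the model actually does, not Google’s or Augmedix’s API.

```python
# Hypothetical sketch: route transcript lines into ER note sections.
# In the real pipeline an LLM does this step; keyword matching stands in here.
NOTE_SECTIONS = {
    "History of Present Illness": ("pain", "started", "symptom"),
    "Medications": ("taking", "dose", "mg"),
    "Plan": ("follow up", "prescribe", "order"),
}

def draft_note(transcript_lines):
    """Group each transcript line under the first section whose keywords match."""
    note = {section: [] for section in NOTE_SECTIONS}
    for line in transcript_lines:
        lowered = line.lower()
        for section, keywords in NOTE_SECTIONS.items():
            if any(keyword in lowered for keyword in keywords):
                note[section].append(line)
                break
    return note

lines = [
    "Patient reports chest pain that started this morning.",
    "Currently taking lisinopril 10 mg daily.",
    "We will order an ECG and follow up in one week.",
]
note = draft_note(lines)
```

A production system would replace the keyword routing with model calls and keep the clinician review and editing step that HCA describes.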

Schlosser said HCA has been using MedLM within emergency rooms at four hospitals, and the company wants to expand use over the next year. By January, Schlosser added, he expects Google’s technology will be able to successfully generate more than half of a note without help from providers. For doctors who can spend up to four hours a day on clerical paperwork, Schlosser said saving that time and effort makes a meaningful difference. 

“That’s been a huge leap forward for us,” Schlosser told CNBC. “We now think we’re going to be at a point where the AI, by itself, can create 60-plus percent of the note correctly on its own before we have the human doing the review and the editing.” 

Schlosser said HCA is also working to use MedLM to develop a handoff tool for nurses. The tool can read through the electronic health record and identify relevant information for nurses to pass along to the next shift. 

Handoffs are “laborious” and a real pain point for nurses, so it would be “powerful” to automate the process, Schlosser said. Nurses across HCA’s hospitals carry out around 400,000 handoffs a week, and two HCA hospitals have been testing the nurse handoff tool. Schlosser said nurses conduct a side-by-side comparison of a traditional handoff and an AI-generated handoff and provide feedback.

With both use cases, though, HCA has found that MedLM is not foolproof.

Schlosser said the fact that AI models can spit out incorrect information is a big challenge, and HCA has been working with Google to come up with best practices to minimize those fabrications. He added that token limits, which restrict the amount of data that can be fed to the model, and managing the AI over time have been additional challenges for HCA. 
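Token limits are a general property of large language models: input beyond the model’s context window must be summarized or split before it can be processed. A minimal sketch of one common workaround, chunking a long record so each piece fits the budget, assuming a rough four-characters-per-token estimate (not MedLM’s actual tokenizer or context size):

```python
# Illustrative workaround for a model token limit: split a long record
# into chunks that each fit under the budget. The 4-chars-per-token
# estimate is a crude heuristic, not any specific model's tokenizer.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def chunk_by_token_budget(paragraphs, max_tokens=500):
    chunks, current, used = [], [], 0
    for para in paragraphs:
        cost = estimate_tokens(para)
        # Flush the current chunk if adding this paragraph would exceed the budget.
        if current and used + cost > max_tokens:
            chunks.append("\n".join(current))
            current, used = [], 0
        current.append(para)
        used += cost
    if current:
        chunks.append("\n".join(current))
    return chunks
```

Each chunk would then be sent to the model separately, with the responses stitched back together afterward.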

“What I would say right now, is that the hype around the current use of these AI models in health care is outstripping the reality,” Schlosser said. “Everyone’s contending with this problem, and no one has really let these models loose in a scaled way in the health-care systems because of that.”

Even so, Schlosser said providers’ initial response to MedLM has been positive, and they recognize that they are not working with the finished product yet. He said HCA is working hard to implement the technology in a responsible way to avoid putting patients at risk.

“We’re being very cautious with how we approach these AI models,” he said. “We’re not using those use cases where the model outputs can somehow affect someone’s diagnosis and treatment.”


Google also plans to introduce health-care-specific versions of Gemini to MedLM in the future. Google’s shares popped 5% after Gemini’s launch earlier this month, but the company faced scrutiny over its demonstration video, which it confirmed to Bloomberg was not conducted in real time.

In a statement, Google told CNBC: “The video is an illustrative depiction of the possibilities of interacting with Gemini, based on real multimodal prompts and outputs from testing. We look forward to seeing what people create when access to Gemini Pro opens on December 13.”

Corrado and Gupta of Google said Gemini is still in early stages, and it needs to be tested and evaluated with customers in controlled health-care settings before the model rolls out through MedLM more broadly. 

“We’ve been testing Med-PaLM 2 with our customers for months, and now we’re comfortable taking that as part of MedLM,” Gupta said. “Gemini will follow the same thing.” 

Schlosser said HCA is “very excited” about Gemini, and the company is already working out plans to test the technology. “We think that may give us an additional level of performance when we get that,” he said.

Another company that has been using MedLM is BenchSci, which aims to use AI to solve problems in drug discovery. Google is an investor in BenchSci, and the company has been testing its MedLM technology for a few months.  

Liran Belenzon, BenchSci’s co-founder and CEO, said the company has merged MedLM’s AI with BenchSci’s own technology to help scientists identify biomarkers, which are key to understanding how a disease progresses and how it can be cured. 

Belenzon said the company spent a lot of time testing and validating the model, including providing Google with feedback about necessary improvements. Now, Belenzon said BenchSci is in the process of bringing the technology to market more broadly.  

“[MedLM] doesn’t work out of the box, but it helps accelerate your specific efforts,” he told CNBC in an interview. 

Corrado said research around MedLM is ongoing, and he thinks Google Cloud’s health-care customers will be able to tune models for multiple different use cases within an organization. He added that Google will continue to develop domain-specific models that are “smaller, cheaper, faster, better.”  

Like BenchSci, Deloitte tested MedLM “over and over” before deploying the technology to health-care clients, said Dr. Kulleni Gebreyes, Deloitte’s U.S. life sciences and health-care consulting leader.

Deloitte is using Google’s technology to help health systems and health plans answer members’ questions about accessing care. If a patient needs a colonoscopy, for instance, they can use MedLM to look for providers based on gender, location or benefit coverage, as well as other qualifiers. 

Gebreyes said clients have found that MedLM is accurate and efficient, but it’s not always great at deciphering a user’s intent. It can be a challenge if patients don’t know the right word or spelling for colonoscopy, or use other colloquial terms, she said. 

“Ultimately, this does not substitute a diagnosis from a trained professional,” Gebreyes told CNBC. “It brings expertise closer and makes it more accessible.”



Beta stock jumps 9% on $1 billion motor deal with air taxi maker Eve Air Mobility



Beta Technologies shares surged more than 9% after air taxi maker Eve Air Mobility announced an up to $1 billion deal to buy motors from the Vermont-based company.

Eve, which was started by Brazilian airplane maker Embraer and is now under Eve Holding, said the manufacturing deal could be worth as much as $1 billion over 10 years. The Florida-based company said it has a backlog of 2,800 vehicles.

Shares of Eve Holding gained 14%.

Eve CEO Johann Bordais called the deal a “pivotal milestone” in the advancement of the company’s electric vertical takeoff and landing, or eVTOL, technology.

“Their electric motor technology will play a critical role in powering our aircraft during cruise, supporting the maturity of our propulsion architecture as we progress toward entry into service,” he said in a release.




Amazon launches cloud AI tool to help engineers recover from outages faster



Amazon’s cloud unit on Tuesday announced AI-enabled software designed to help clients better understand and recover from outages.

DevOps Agent, as the artificial intelligence tool from Amazon Web Services is called, predicts the cause of technical hiccups using input from third-party tools such as Datadog and Dynatrace. AWS said customers can sign up to use the tool Tuesday in a preview, before Amazon starts charging for the service.

The AI outage tool from AWS is intended to help companies more quickly figure out what caused an outage and implement fixes, Swami Sivasubramanian, vice president of agentic AI at AWS, told CNBC. It’s what site reliability engineers, or SREs, do at many companies that provide online services.

SREs try to prevent downtime and jump into action during live incidents. Startups such as Resolve and Traversal have started marketing AI assistants for these experts. Microsoft’s Azure cloud group introduced an SRE Agent in May.

Rather than waiting for on-call staff members to figure out what happened, the AWS DevOps Agent automatically assigns work to agents that look into different hypotheses, Sivasubramanian said.

“By the time the on-call ops team member dials in, they have an incident report with preliminary investigation of what could be the likely outcome, and then suggest what could be the remediation as well,” Sivasubramanian told CNBC ahead of AWS’ Reinvent conference in Las Vegas this week.
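The fan-out Sivasubramanian describes can be sketched as parallel checks over a set of hypotheses, with the findings rolled into a preliminary incident report. The hypotheses and checks below are invented examples for illustration, not AWS’s actual agents or APIs:

```python
# Hypothetical sketch of the fan-out pattern: several investigator
# functions check different hypotheses concurrently, and the results
# are collected into a preliminary report for the on-call engineer.
from concurrent.futures import ThreadPoolExecutor

def check_recent_deploys(incident):
    # Did a deployment land shortly before the incident?
    return "deploy" in incident["events"]

def check_quota_limits(incident):
    # Is the error rate high enough to suggest throttling or quota exhaustion?
    return incident["error_rate"] > 0.5

HYPOTHESES = {
    "recent deployment": check_recent_deploys,
    "quota exhaustion": check_quota_limits,
}

def investigate(incident):
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(check, incident)
                   for name, check in HYPOTHESES.items()}
    # The context manager waits for all checks before results are read.
    return {name: fut.result() for name, fut in futures.items()}

report = investigate({"events": ["deploy", "alarm"], "error_rate": 0.2})
```

In a real system each investigator would query logs, metrics, and deployment history rather than inspect a dict, but the structure is the same: run the hypotheses in parallel, then hand the engineer a summary.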

Commonwealth Bank of Australia has tested the AWS DevOps Agent. In under 15 minutes, the software found the root cause of an issue that would have taken a veteran engineer hours, AWS said in a statement.

The tool relies on Amazon’s in-house AI models and those from other providers, a spokesperson said.

AWS has been selling software in addition to raw infrastructure for many years. Amazon was early to rent out server space and storage to developers, starting in the mid-2000s, and technology companies such as Google, Microsoft and Oracle have followed.

Since the launch of ChatGPT in 2022, these cloud infrastructure providers have been trying to demonstrate how generative AI models, which are often trained in large cloud computing data centers, can speed up work for software developers.

Over the summer, Amazon announced Kiro, a so-called vibe coding tool that produces and modifies source code based on user text prompts. In November, Google debuted similar software for individual software developers called Antigravity, and Microsoft sells subscriptions to GitHub Copilot.

WATCH: Amazon rolls out AI-powered tools to help big AWS customers update old software




Amazon to let cloud clients customize AI models midway through training for $100,000 a year


Attendees pass an Amazon Web Services logo during AWS re:Invent 2024, a conference hosted by Amazon Web Services, at The Venetian hotel in Las Vegas on Dec. 3, 2024.

Noah Berger | Getty Images

Amazon has found a way to let cloud clients extensively customize generative AI models. The catch is that the system costs $100,000 per year.

The Nova Forge offering from Amazon Web Services gives organizations access to Amazon’s AI models in various stages of training so they can incorporate their own data earlier in the process.

Already, companies can fine-tune large language models after they’ve been trained. The results with Nova Forge will lean more heavily on the data that customers supply. Nova Forge customers will also have the option to refine open-weight models, but training data and computing infrastructure are not included.

Organizations that assemble their own models might end up spending hundreds of millions or billions of dollars, which means using Nova Forge is more affordable, Amazon said.

AWS released its own models under the Nova brand in 2024, but they aren’t the first choice for most software developers. A July survey from Menlo Ventures said that by the middle of this year, Amazon-backed Anthropic controlled 32% of the market for enterprise LLMs, followed by OpenAI with 25%, Google with 20% and Meta with 9%. Amazon Nova had a less than 5% share, a Menlo spokesperson said.

The Nova models are available through AWS’ Bedrock service for running models on Amazon cloud infrastructure, as are Anthropic’s Claude 4.5 models.

“We are a frontier lab that has focused on customers,” Rohit Prasad, Amazon head scientist for artificial general intelligence, told CNBC in an interview. “Our customers wanted it. We have invented on their behalf to make this happen.”

Nova Forge is also in use by internal Amazon customers, including teams that work on the company’s stores and the Alexa AI assistant, Prasad said.

Reddit needed an AI model for moderating content that would be sophisticated about the many subjects people discuss on the social network. Engineers found that a Nova model enhanced with Reddit data through Forge performed better than commercially available large-scale models, Prasad said. Booking.com, Nimbus Therapeutics, the Nomura Research Institute and Sony are also building models with Forge, Amazon said.

Organizations can request that Amazon engineers help them build their Forge models, but that assistance is not included in the new service’s $100,000 annual fee.

AWS is also introducing new models for developers at its Reinvent conference in Las Vegas this week.

Nova 2 Pro is a reasoning model that, in Amazon’s tests, performs at least as well as Anthropic’s Claude Sonnet 4.5, OpenAI’s GPT-5 and GPT-5.1, and Google’s Gemini 3.0 Pro Preview, the company said. Reasoning involves running a series of extra computations, which can take additional time, to produce better answers. Nova 2 Pro will be available in early access to AWS customers with Forge subscriptions, Prasad said, meaning Forge customers and Amazon engineers will be able to try Nova 2 Pro at the same time.

Nova 2 Omni is another reasoning model that can process incoming images, speech, text and videos, and it generates images and text. It’s the first reasoning model with that range of capability, Amazon said. Amazon hopes that, by delivering a multifaceted model, it can lower the cost and complexity of incorporating AI models into applications.

Tens of thousands of organizations are using Nova models each week, Prasad said. AWS has said it has millions of customers. Nova is the second-most popular family of models in Bedrock, behind Anthropic’s, Prasad said.

WATCH: Amazon set to kick off AI conference next week: Maxim’s Tom Forte on what to expect


