Sundar Pichai, CEO of Google and Alphabet, speaks on artificial intelligence during a Bruegel think tank conference in Brussels, Belgium, on Jan. 20, 2020.

Yves Herman | Reuters

Google on Wednesday announced MedLM, a suite of new health-care-specific artificial intelligence models designed to help clinicians and researchers carry out complex studies, summarize doctor-patient interactions and more.

The move marks Google’s latest attempt to monetize health-care AI tools as it fights for market share with rivals like Amazon and Microsoft. Companies that have been testing Google’s technology, such as HCA Healthcare, and outside experts told CNBC the potential for impact is real, though they are taking steps to implement the technology carefully.

The MedLM suite includes a large and a medium-sized AI model, both built on Med-PaLM 2, a large language model trained on medical data that Google first announced in March. The suite is generally available to eligible Google Cloud customers in the U.S. starting Wednesday, and Google said that while the cost varies depending on how companies use the different models, the medium-sized model is less expensive to run.

Google said it also plans to introduce health-care-specific versions of Gemini, the company’s newest and “most capable” AI model, to MedLM in the future.

Aashima Gupta, Google Cloud’s global director of health-care strategy and solutions, said the company found that different medically tuned AI models can carry out certain tasks better than others. That’s why Google decided to introduce a suite of models instead of trying to build a “one-size-fits-all” solution. 

For instance, Google said its larger MedLM model is better for carrying out complicated tasks that require deep knowledge and lots of compute power, such as conducting a study using data from a health-care organization’s entire patient population. But if companies need a more agile model that can be optimized for specific or real-time functions, such as summarizing an interaction between a doctor and patient, the medium-sized model should work better, according to Gupta.
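
For readers curious how that choice shows up for developers, here is a minimal sketch of routing work between the two variants on Vertex AI. It assumes the Vertex AI Python SDK, an allowlisted project, and the "medlm-large" and "medlm-medium" model IDs; the project ID and task labels are hypothetical, and this is not Google's reference implementation.

```python
# Minimal sketch: route work to the large or medium MedLM variant on Vertex AI.
# Assumptions: Vertex AI Python SDK installed, project allowlisted for MedLM,
# and "medlm-large"/"medlm-medium" as the model IDs.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="example-hospital-project", location="us-central1")  # hypothetical project

def pick_medlm_model(task: str) -> TextGenerationModel:
    """Send deep, compute-heavy analyses to the large model and
    latency-sensitive summarization to the cheaper medium model."""
    model_id = "medlm-large" if task == "population_study" else "medlm-medium"
    return TextGenerationModel.from_pretrained(model_id)
```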

Real-world use cases

A Google Cloud logo at the Hannover Messe industrial technology fair in Hanover, Germany, on Thursday, April 20, 2023.

Krisztian Bocsi | Bloomberg | Getty Images

When Google announced Med-PaLM 2 in March, the company initially said it could be used to answer questions like “What are the first warning signs of pneumonia?” and “Can incontinence be cured?” But as the company has tested the technology with customers, the use cases have changed, according to Greg Corrado, head of Google’s health AI. 

Corrado said clinicians don’t often need help with “accessible” questions about the nature of a disease, so Google hasn’t seen much demand for those capabilities from customers. Instead, health organizations often want AI to help solve more back-office or logistical problems, like managing paperwork.  

“They want something that’s helping them with the real pain points and slowdowns that are in their workflow, that only they know,” Corrado told CNBC. 

For instance, HCA Healthcare, one of the largest health systems in the U.S., has been testing Google’s AI technology since the spring. The company announced an official collaboration with Google Cloud in August that aims to use its generative AI to “improve workflows on time-consuming tasks.” 

Dr. Michael Schlosser, senior vice president of care transformation and innovation at HCA, said the company has been using MedLM to help emergency medicine physicians automatically document their interactions with patients. For instance, HCA uses an ambient speech documentation system from a company called Augmedix to transcribe doctor-patient meetings. Google’s MedLM suite can then take those transcripts and break them up into the components of an ER provider note.
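
As a rough illustration of that pipeline (not HCA's or Augmedix's actual implementation), the sketch below prompts a MedLM-style text model to reorganize an ambient transcript into standard ER note sections. The prompt wording, section list and project details are assumptions.

```python
# Hypothetical sketch: turn an ambient doctor-patient transcript into a draft ER note.
# Prompt, section names and project ID are illustrative, not a vendor's real code.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="example-hospital-project", location="us-central1")
model = TextGenerationModel.from_pretrained("medlm-medium")  # assumed MedLM model ID

SECTIONS = ("Chief complaint", "History of present illness",
            "Physical exam", "Assessment and plan")

def draft_er_note(transcript: str) -> str:
    prompt = (
        "Draft an emergency department provider note from the transcript below.\n"
        f"Organize it into these sections: {', '.join(SECTIONS)}.\n"
        "Use only facts stated in the transcript.\n\n"
        f"Transcript:\n{transcript}"
    )
    # Low temperature keeps the draft close to the source; a clinician still
    # reviews and edits the note before it enters the chart.
    return model.predict(prompt, temperature=0.1, max_output_tokens=1024).text
```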

Schlosser said HCA has been using MedLM within emergency rooms at four hospitals, and the company wants to expand use over the next year. By January, Schlosser added, he expects Google’s technology will be able to successfully generate more than half of a note without help from providers. For doctors who can spend up to four hours a day on clerical paperwork, Schlosser said saving that time and effort makes a meaningful difference. 

“That’s been a huge leap forward for us,” Schlosser told CNBC. “We now think we’re going to be at a point where the AI, by itself, can create 60-plus percent of the note correctly on its own before we have the human doing the review and the editing.” 

Schlosser said HCA is also working to use MedLM to develop a handoff tool for nurses. The tool can read through the electronic health record and identify relevant information for nurses to pass along to the next shift. 

Handoffs are “laborious” and a real pain point for nurses, so it would be “powerful” to automate the process, Schlosser said. Nurses across HCA’s hospitals carry out around 400,000 handoffs a week, and two HCA hospitals have been testing the nurse handoff tool. Schlosser said nurses conduct a side-by-side comparison of a traditional handoff and an AI-generated handoff and provide feedback.

With both use cases, though, HCA has found that MedLM is not foolproof.

Schlosser said the fact that AI models can spit out incorrect information is a big challenge, and HCA has been working with Google to come up with best practices to minimize those fabrications. He added that token limits, which restrict the amount of data that can be fed to the model, and managing the AI over time have been additional challenges for HCA. 
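
One common workaround for token limits, sketched below, is to split a long record into chunks, summarize each chunk, then summarize the summaries. The `summarize` callable and the word-count cutoff are assumptions; a real system would use the model's own tokenizer.

```python
# Naive map-reduce sketch for staying under a model's token limit.
# `summarize` is any callable that sends text to the model and returns a string;
# the word-count cutoff is a stand-in for a real tokenizer.
from typing import Callable, List

def chunk_text(text: str, max_words: int = 1500) -> List[str]:
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def summarize_long_record(record: str, summarize: Callable[[str], str]) -> str:
    partials = [summarize(chunk) for chunk in chunk_text(record)]
    return partials[0] if len(partials) == 1 else summarize("\n".join(partials))
```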

“What I would say right now is that the hype around the current use of these AI models in health care is outstripping the reality,” Schlosser said. “Everyone’s contending with this problem, and no one has really let these models loose in a scaled way in the health-care systems because of that.”

Even so, Schlosser said providers’ initial response to MedLM has been positive, and they recognize that they are not working with the finished product yet. He said HCA is working hard to implement the technology in a responsible way to avoid putting patients at risk.

“We’re being very cautious with how we approach these AI models,” he said. “We’re not using those use cases where the model outputs can somehow affect someone’s diagnosis and treatment.”


Google also plans to introduce health-care-specific versions of Gemini to MedLM in the future. Alphabet shares popped 5% after Gemini’s launch earlier this month, but Google faced scrutiny over its demonstration video, which was not conducted in real time, the company confirmed to Bloomberg.

In a statement, Google told CNBC: “The video is an illustrative depiction of the possibilities of interacting with Gemini, based on real multimodal prompts and outputs from testing. We look forward to seeing what people create when access to Gemini Pro opens on December 13.”

Corrado and Gupta of Google said Gemini is still in early stages, and it needs to be tested and evaluated with customers in controlled health-care settings before the model rolls out through MedLM more broadly. 

“We’ve been testing Med-PaLM 2 with our customers for months, and now we’re comfortable taking that as part of MedLM,” Gupta said. “Gemini will follow the same thing.” 

Schlosser said HCA is “very excited” about Gemini, and the company is already working out plans to test the technology. “We think that may give us an additional level of performance when we get that,” he said.

Another company that has been using MedLM is BenchSci, which aims to use AI to solve problems in drug discovery. Google is an investor in BenchSci, and the startup has been testing MedLM for a few months.

Liran Belenzon, BenchSci’s co-founder and CEO, said the company has merged MedLM’s AI with BenchSci’s own technology to help scientists identify biomarkers, which are key to understanding how a disease progresses and how it can be cured. 

Belenzon said the company spent a lot of time testing and validating the model, including providing Google with feedback about necessary improvements. Now, Belenzon said BenchSci is in the process of bringing the technology to market more broadly.  

“[MedLM] doesn’t work out of the box, but it helps accelerate your specific efforts,” he told CNBC in an interview. 

Corrado said research around MedLM is ongoing, and he thinks Google Cloud’s health-care customers will be able to tune models for multiple different use cases within an organization. He added that Google will continue to develop domain-specific models that are “smaller, cheaper, faster, better.”  

Like BenchSci, Deloitte tested MedLM “over and over” before deploying the technology to health-care clients, said Dr. Kulleni Gebreyes, Deloitte’s U.S. life sciences and health-care consulting leader.

Deloitte is using Google’s technology to help health systems and health plans answer members’ questions about accessing care. If a patient needs a colonoscopy, for instance, they can use MedLM to look for providers based on gender, location or benefit coverage, as well as other qualifiers. 

Gebreyes said clients have found that MedLM is accurate and efficient, but it’s not always great at deciphering a user’s intent. That can be a challenge if patients don’t know the correct term or spelling for colonoscopy, or describe the procedure in colloquial terms, she said.

“Ultimately, this does not substitute a diagnosis from a trained professional,” Gebreyes told CNBC. “It brings expertise closer and makes it more accessible.”

Too early to bet against AI trade, State Street suggests 

State Street is reiterating its bullish stance on the artificial intelligence trade despite the Nasdaq’s worst week since April.

Chief Business Officer Anna Paglia said momentum stocks still have legs because investors are reluctant to step away from the growth story that’s driven gains all year.

“How would you not want to participate in the growth of AI technology? Everybody has been waiting for the cycle to change from growth to value. I don’t think it’s happening just yet because of the momentum,” Paglia told CNBC’s “ETF Edge” earlier this week. “I don’t think the rebalancing trade is going to happen until we see a signal from the market indicating a slowdown in these big trends.”

Paglia, who has spent 25 years in the exchange-traded funds industry, sees a higher likelihood that the space will cool off early next year.

“There will be much more focus about the diversification,” she said.

Her firm manages several ETFs with exposure to the technology sector, including the SPDR NYSE Technology ETF, which has gained 38% so far this year as of Friday’s close.

The fund, however, pulled back more than 4% over the past week as investors took profits in AI-linked names. The fund’s second top holding as of Friday’s close is Palantir Technologies, according to State Street’s website. Its stock tumbled more than 11% this week after the company’s earnings report on Monday.

Despite the decline, Paglia reaffirmed her bullish tech view in a statement to CNBC later in the week.

Meanwhile, Todd Rosenbluth suggests a rotation is already starting to grip the market. He points to a renewed appetite for health-care stocks.

“The Health Care Select Sector SPDR Fund… which has been out of favor for much of the year, started a return to favor in October,” the firm’s head of research said in the same interview. “Health care tends to be a more defensive sector, so we’re watching to see if people continue to gravitate towards that as a way of diversifying away from some of those sectors like technology.”

The Health Care Select Sector SPDR Fund, which has been underperforming the technology sector this year, is up 5% since Oct. 1. It was also the second-best performing S&P 500 group this week.

People with ADHD, autism, dyslexia say AI agents are helping them succeed at work

Neurodiverse professionals may see unique benefits from artificial intelligence tools and agents, research suggests. With AI agent creation booming in 2025, people with conditions like ADHD, autism, dyslexia and more report a more level playing field in the workplace thanks to generative AI.

A recent study from the UK’s Department for Business and Trade found that neurodiverse workers were 25% more satisfied with AI assistants and were more likely to recommend the tool than neurotypical respondents.

“Standing up and walking around during a meeting means that I’m not taking notes, but now AI can come in and synthesize the entire meeting into a transcript and pick out the top-level themes,” said Tara DeZao, senior director of product marketing at enterprise low-code platform provider Pega. DeZao, who was diagnosed with ADHD as an adult, has combination-type ADHD, which includes both inattentive symptoms (time management and executive function issues) and hyperactive symptoms (increased movement).

“I’ve white-knuckled my way through the business world,” DeZao said. “But these tools help so much.”

AI tools in the workplace run the gamut and can have hyper-specific use cases, but solutions like note takers, schedule assistants and in-house communication support are common. Generative AI happens to be particularly adept at skills like communication, time management and executive functioning, creating a built-in benefit for neurodiverse workers who’ve previously had to find ways to fit in among a work culture not built with them in mind.

Because of the skills that neurodiverse individuals can bring to the workplace — hyperfocus, creativity, empathy and niche expertise, just to name a few — some research suggests that organizations prioritizing inclusivity in this space generate nearly one-fifth higher revenue.

AI ethics and neurodiverse workers

“Investing in ethical guardrails, like those that protect and aid neurodivergent workers, is not just the right thing to do,” said Kristi Boyd, an AI specialist with the SAS data ethics practice. “It’s a smart way to make good on your organization’s AI investments.”

Boyd referred to an SAS study which found that companies investing the most in AI governance and guardrails were 1.6 times more likely to see at least double ROI on their AI investments. But Boyd highlighted three risks that companies should be aware of when implementing AI tools with neurodiverse and other individuals in mind: competing needs, unconscious bias and inappropriate disclosure.

“Different neurodiverse conditions may have conflicting needs,” Boyd said. For example, while people with dyslexia may benefit from document readers, people with bipolar disorder or other mental health neurodivergences may benefit from AI-supported scheduling to make the most of productive periods. “By acknowledging these tensions upfront, organizations can create layered accommodations or offer choice-based frameworks that balance competing needs while promoting equity and inclusion,” she explained.

Regarding AI’s unconscious biases, algorithms can be (and have been) unintentionally taught to associate neurodivergence with danger, disease or negativity, as outlined in Duke University research. And even today, neurodiversity can still be met with workplace discrimination, making it important for companies to provide safe ways to use these tools without forcing any individual worker to disclose a diagnosis.

‘Like somebody turned on the light’

As businesses take accountability for the impact of AI tools in the workplace, Boyd says it’s important to remember to include diverse voices at all stages, implement regular audits and establish safe ways for employees to anonymously report issues.

The work to make AI deployment more equitable, including for neurodivergent people, is just getting started. The nonprofit Humane Intelligence, which focuses on deploying AI for social good, released in early October its Bias Bounty Challenge, where participants can identify biases with the goal of building “more inclusive communication platforms — especially for users with cognitive differences, sensory sensitivities or alternative communication styles.”

For example, emotion AI (when AI identifies human emotions) can help people with difficulty identifying emotions make sense of their meeting partners on video conferencing platforms like Zoom. Still, this technology requires careful attention to bias by ensuring AI agents recognize diverse communication patterns fairly and accurately, rather than embedding harmful assumptions.

DeZao said her ADHD diagnosis felt like “somebody turned on the light in a very, very dark room.”

“One of the most difficult pieces of our hyper-connected, fast world is that we’re all expected to multitask. With my form of ADHD, it’s almost impossible to multitask,” she said.

DeZao says one of AI’s most helpful features is its ability to receive instructions and do its work while the human employee can remain focused on the task at hand. “If I’m working on something and then a new request comes in over Slack or Teams, it just completely knocks me off my thought process,” she said. “Being able to take that request and then outsource it real quick and have it worked on while I continue to work [on my original task] has been a godsend.”
