Sundar Pichai, CEO of Google and Alphabet, speaks on artificial intelligence during a Bruegel think tank conference in Brussels, Belgium, on Jan. 20, 2020.
Yves Herman | Reuters
Google on Wednesday announced MedLM, a suite of new health-care-specific artificial intelligence models designed to help clinicians and researchers carry out complex studies, summarize doctor-patient interactions and more.
The move marks Google’s latest attempt to monetize AI tools for the health-care industry as it fights for market share against rivals like Amazon and Microsoft. Companies that have been testing Google’s technology, such as HCA Healthcare, told CNBC the potential for impact is real, though they are taking steps to implement it carefully.
The MedLM suite includes a large and a medium-sized AI model, both built on Med-PaLM 2, a large language model trained on medical data that Google first announced in March. The suite is generally available to eligible Google Cloud customers in the U.S. starting Wednesday, and Google said that while the cost varies depending on how companies use the different models, the medium-sized model is less expensive to run.
Google said it also plans to introduce health-care-specific versions of Gemini, the company’s newest and “most capable” AI model, to MedLM in the future.
Aashima Gupta, Google Cloud’s global director of health-care strategy and solutions, said the company found that different medically tuned AI models can carry out certain tasks better than others. That’s why Google decided to introduce a suite of models instead of trying to build a “one-size-fits-all” solution.
For instance, Google said its larger MedLM model is better for carrying out complicated tasks that require deep knowledge and lots of compute power, such as conducting a study using data from a health-care organization’s entire patient population. But if companies need a more agile model that can be optimized for specific or real-time functions, such as summarizing an interaction between a doctor and patient, the medium-sized model should work better, according to Gupta.
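For developers exploring the suite on Google Cloud, that large-versus-medium tradeoff could be wired up roughly as below. This is a minimal sketch assuming the Vertex AI Python SDK’s text-generation interface; the project name and the routing heuristic are illustrative assumptions rather than Google’s published guidance.

```python
# Minimal sketch: route a task to the larger or the medium MedLM model,
# mirroring the tradeoff Google describes. The project ID and routing
# heuristic are assumptions for illustration; consult Google Cloud's
# MedLM documentation for model names, access and pricing specifics.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-health-project", location="us-central1")  # hypothetical project

# Tasks needing deep knowledge and heavy compute go to the large model;
# lighter, latency-sensitive tasks go to the medium one.
HEAVY_TASKS = {"population_study", "cohort_analysis"}

def pick_model(task: str) -> TextGenerationModel:
    model_id = "medlm-large" if task in HEAVY_TASKS else "medlm-medium"
    return TextGenerationModel.from_pretrained(model_id)

model = pick_model("visit_summary")
response = model.predict(
    "Summarize the key clinical points of this doctor-patient conversation: ...",
    temperature=0.2,
    max_output_tokens=1024,
)
print(response.text)
```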
Real-world use cases
A Google Cloud logo at the Hannover Messe industrial technology fair in Hanover, Germany, on Thursday, April 20, 2023.
Krisztian Bocsi | Bloomberg | Getty Images
When Google announced Med-PaLM 2 in March, the company initially said it could be used to answer questions like “What are the first warning signs of pneumonia?” and “Can incontinence be cured?” But as the company has tested the technology with customers, the use cases have changed, according to Greg Corrado, head of Google’s health AI.
Corrado said clinicians don’t often need help with “accessible” questions about the nature of a disease, so Google hasn’t seen much demand for those capabilities from customers. Instead, health organizations often want AI to help solve more back-office or logistical problems, like managing paperwork.
“They want something that’s helping them with the real pain points and slowdowns that are in their workflow, that only they know,” Corrado told CNBC.
For instance, HCA Healthcare, one of the largest health systems in the U.S., has been testing Google’s AI technology since the spring. The company announced an official collaboration with Google Cloud in August that aims to use its generative AI to “improve workflows on time-consuming tasks.”
Dr. Michael Schlosser, senior vice president of care transformation and innovation at HCA, said the company has been using MedLM to help emergency medicine physicians automatically document their interactions with patients. HCA uses an ambient speech documentation system from a company called Augmedix to transcribe doctor-patient meetings, and Google’s MedLM suite then takes those transcripts and breaks them up into the components of an ER provider note.
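A hypothetical version of that transcript-to-note step is sketched below: hand the ambient transcript to a MedLM model, ask it to draft labeled sections of an ER provider note, then split the draft back into components. The prompt wording, section names and parsing are assumptions for illustration; neither HCA’s nor Augmedix’s actual pipeline is public.

```python
# Hypothetical sketch of the transcript-to-note step. Section names,
# prompt and parsing are illustrative assumptions, not the real
# HCA/Augmedix pipeline.
from vertexai.language_models import TextGenerationModel

SECTIONS = ["Chief Complaint", "History of Present Illness",
            "Physical Exam", "Assessment and Plan"]

def draft_er_note(transcript: str) -> dict[str, str]:
    model = TextGenerationModel.from_pretrained("medlm-medium")
    prompt = (
        "Draft each section of an emergency department provider note from "
        "the transcript below. Start every section on its own line, labeled "
        "exactly as: " + ", ".join(SECTIONS) + ".\n\nTranscript:\n" + transcript
    )
    draft = model.predict(prompt, temperature=0.1, max_output_tokens=2048).text

    # Naive split on the requested labels; in practice the draft would still
    # go to a clinician for review and editing before entering the record.
    note, current = {}, None
    for line in draft.splitlines():
        head, _, rest = line.partition(":")
        if head.strip() in SECTIONS:
            current = head.strip()
            note[current] = rest.strip()
        elif current:
            note[current] += "\n" + line
    return note
```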
Schlosser said HCA has been using MedLM within emergency rooms at four hospitals, and the company wants to expand use over the next year. By January, Schlosser added, he expects Google’s technology will be able to generate more than half of a note without help from providers. For doctors who can spend up to four hours a day on clerical paperwork, Schlosser said, saving that time and effort makes a meaningful difference.
“That’s been a huge leap forward for us,” Schlosser told CNBC. “We now think we’re going to be at a point where the AI, by itself, can create 60-plus percent of the note correctly on its own before we have the human doing the review and the editing.”
Schlosser said HCA is also working to use MedLM to develop a handoff tool for nurses. The tool can read through the electronic health record and identify relevant information for nurses to pass along to the next shift.
Handoffs are “laborious” and a real pain point for nurses, so automating the process would be “powerful,” Schlosser said. Nurses across HCA’s hospitals carry out around 400,000 handoffs a week, and two HCA hospitals have been testing the handoff tool, with nurses comparing a traditional handoff side by side with an AI-generated one and providing feedback.
With both use cases, though, HCA has found that MedLM is not foolproof.
Schlosser said the fact that AI models can spit out incorrect information is a big challenge, and HCA has been working with Google to come up with best practices to minimize those fabrications. He added that token limits, which restrict the amount of data that can be fed to the model, and managing the AI over time have been additional challenges for HCA.
“What I would say right now is that the hype around the current use of these AI models in health care is outstripping the reality,” Schlosser said. “Everyone’s contending with this problem, and no one has really let these models loose in a scaled way in the health-care systems because of that.”
Even so, Schlosser said providers’ initial response to MedLM has been positive, and they recognize that they are not working with the finished product yet. He said HCA is working hard to implement the technology in a responsible way to avoid putting patients at risk.
“We’re being very cautious with how we approach these AI models,” he said. “We’re not using those use cases where the model outputs can somehow affect someone’s diagnosis and treatment.”
Gemini, which Google unveiled earlier this month, sent Alphabet shares up 5% at launch, but Google faced scrutiny over its demonstration video, which was not conducted in real time, the company confirmed to Bloomberg.
In a statement, Google told CNBC: “The video is an illustrative depiction of the possibilities of interacting with Gemini, based on real multimodal prompts and outputs from testing. We look forward to seeing what people create when access to Gemini Pro opens on December 13.”
Corrado and Gupta of Google said Gemini is still in early stages, and it needs to be tested and evaluated with customers in controlled health-care settings before the model rolls out through MedLM more broadly.
“We’ve been testing Med-PaLM 2 with our customers for months, and now we’re comfortable taking that as part of MedLM,” Gupta said. “Gemini will follow the same thing.”
Schlosser said HCA is “very excited” about Gemini, and the company is already working out plans to test the technology. “We think that may give us an additional level of performance when we get that,” he said.
Another company that has been using MedLM is BenchSci, which aims to use AI to solve problems in drug discovery. Google is an investor in BenchSci, which has been testing the MedLM technology for a few months.
Liran Belenzon, BenchSci’s co-founder and CEO, said the company has merged MedLM’s AI with BenchSci’s own technology to help scientists identify biomarkers, which are key to understanding how a disease progresses and how it can be cured.
Belenzon said the company spent a lot of time testing and validating the model, including providing Google with feedback about necessary improvements. Now, Belenzon said BenchSci is in the process of bringing the technology to market more broadly.
“[MedLM] doesn’t work out of the box, but it helps accelerate your specific efforts,” he told CNBC in an interview.
Corrado said research around MedLM is ongoing, and he thinks Google Cloud’s health-care customers will be able to tune the models for multiple use cases within an organization. He added that Google will continue to develop domain-specific models that are “smaller, cheaper, faster, better.”
Like BenchSci, Deloitte tested MedLM “over and over” before deploying the technology to health-care clients, said Dr. Kulleni Gebreyes, Deloitte’s U.S. life sciences and health-care consulting leader.
Deloitte is using Google’s technology to help health systems and health plans answer members’ questions about accessing care. If a patient needs a colonoscopy, for instance, they can use MedLM to look for providers based on gender, location, benefit coverage and other qualifiers.
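Once a model has pulled the intent out of a member’s free-text request, the provider lookup itself is ordinary filtering. The sketch below illustrates that matching with made-up directory fields; it is not Deloitte’s implementation, and in their setup the model’s harder job is interpreting the request in the first place.

```python
# Illustration only: after a model extracts intent ("colonoscopy", a
# preferred provider gender, a location), matching providers is a plain
# filter over the plan's directory. Fields and data are made up; this is
# not Deloitte's implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Provider:
    name: str
    procedures: set[str]
    gender: str
    city: str
    in_network: bool

def find_providers(directory: list[Provider], procedure: str,
                   gender: Optional[str] = None,
                   city: Optional[str] = None) -> list[Provider]:
    return [
        p for p in directory
        if procedure in p.procedures and p.in_network
        and (gender is None or p.gender == gender)
        and (city is None or p.city == city)
    ]

directory = [
    Provider("Dr. Rivera", {"colonoscopy"}, "female", "Nashville", True),
    Provider("Dr. Okafor", {"colonoscopy"}, "male", "Memphis", True),
]
print(find_providers(directory, "colonoscopy", gender="female"))
```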
Gebreyes said clients have found that MedLM is accurate and efficient, but it’s not always great at deciphering a user’s intent. It can be a challenge if patients don’t know the right term or spelling for colonoscopy, or describe it colloquially, she said.
“Ultimately, this does not substitute a diagnosis from a trained professional,” Gebreyes told CNBC. “It brings expertise closer and makes it more accessible.”
A DoorDash bag on a bicycle in New York, US, on Tuesday, May 6, 2025.
Yuki Iwamura | Bloomberg | Getty Images
DoorDash reported third-quarter earnings that missed analyst expectations and said it expects to spend “several hundred million dollars” on new initiatives and development in 2026.
The stock sank 9% following the report.
Here’s how the company did compared to LSEG estimates:
Earnings per share: 55 cents vs. 69 cents expected
Revenue: $3.45 billion vs. $3.36 billion expected
“We wish there was a way to grow a baby into an adult without investment, or to see the baby grow into an adult overnight, but we do not believe this is how life or business works,” the company wrote in its earnings release to explain the boosted spending.
DoorDash said it is developing a new global tech platform, work that progressed in 2025 and is expected to accelerate in 2026, and noted the effort carries direct and opportunity costs in the near term. The company announced its Dot autonomous delivery robot in September.
The food delivery platform’s revenue increased 27% from a year earlier.
DoorDash posted net income of $244 million, or 55 cents per share, in Q3, up from $162 million, or 38 cents per share, a year ago.
Total orders grew 21% over the prior year to 776 million during the quarter that closed Sept. 30, just above the 770.13 million expected by FactSet.
The company expects adjusted EBITDA for the fourth quarter in the range of $710 million to $810 million, for a midpoint of $760 million. Analysts polled by FactSet expected $806.8 million for Q4.
DoorDash closed its acquisition of British food delivery company Deliveroo on Oct. 2, a deal that valued the UK company at about $3.9 billion.
The company expects a depreciation and amortization expense of $700 million for the fiscal year, exclusive of the acquisition. A stock-based compensation expense of $1.1 billion is also expected for fiscal 2025.
DoorDash expects Deliveroo to add $45 million to adjusted EBITDA in Q4 and about $200 million to adjusted EBITDA in 2026.
Snap shares climbed 15% on Wednesday after the company issued its third-quarter earnings, reporting revenue that beat analysts’ expectations and announcing a $500 million stock repurchase program.
Here is how the company did compared with Wall Street’s expectations:
Earnings per share: Loss of 6 cents. That figure is not comparable to analysts’ estimates.
Revenue: $1.51 billion vs. $1.49 billion expected, according to LSEG
Global daily active users: 477 million vs. 476 million expected, according to StreetAccount
Global average revenue per user (ARPU): $3.16 vs. $3.13 expected, according to StreetAccount
Snap also announced that it is partnering with the startup Perplexity AI, which “will integrate its conversational search directly into Snapchat.” The feature is set to appear in Snapchat starting in early 2026, Snap said.
“Perplexity will pay Snap $400 million over one year, through a combination of cash and equity, as we achieve global rollout,” Snap said in a letter to investors. “Revenue from the partnership is expected to begin contributing in 2026.”
The partnership represents “a first step in Snap’s effort to make Snapchat a platform where leading AI companies can connect with its global community in creative and trusted ways,” the two companies said in their announcement.
In the company’s earnings call, Snap CEO Evan Spiegel said Perplexity will have “default placement in our chat inbox” and the startup will “control the responses from their chatbot inside of Snapchat.”
Although Snap will not be selling “advertising against the Perplexity responses,” Spiegel said that the integration “will help Perplexity drive additional subscribers, which I think is something that will be valuable to their business.”
“We have a really unique opportunity ahead to help distribute AI agents through our chat interface,” Spiegel said.
While Snapchat users will still be able to engage with the company’s My AI chatbot, the integrated Perplexity AI service will give them “real-time answers from credible sources” and let them “explore new topics within the app,” the companies said.
Regarding Snap’s expensive foray into developing augmented reality glasses, Spiegel said the company plans to create a separate subsidiary around the Specs AR glasses to speed up development with partners.
Snap said fourth-quarter sales will come in between $1.68 billion and $1.71 billion. That figure’s midpoint of $1.695 billion is slightly ahead of Wall Street expectations of $1.69 billion.
For the third quarter, Snap said sales grew 10% year over year while it logged a net loss of $104 million. During the same quarter last year, Snap recorded a net loss of $153 million.
The Snapchat parent said that third-quarter adjusted earnings before interest, taxes, depreciation and amortization, or EBITDA, came in at $182 million, ahead of the $125 million that StreetAccount was projecting.
The company also said that its adjusted EBITDA for the fourth quarter will be between $280 million and $310 million, which tops StreetAccount’s projections of $255.4 million.
Snap shares were down 32% for the year, as of Wednesday’s close, compared to the Nasdaq’s 22% gain.
Although the company’s shares soared as much as 25% in after-hours trading on Wednesday, they began their descent after Snap finance chief Derek Andersen detailed some of the company’s sales-related challenges on the earnings call.
“The North America LCS segment remains the primary headwind to our overall revenue growth,” said Andersen, adding that the company is seeing more growth and demand for Snap’s ad products from small- and medium-sized businesses in other regions.
In the letter, Snap said that government regulations like Australia’s social media minimum age bill and related policy developments “are likely to have negative impacts on user engagement metrics that we cannot currently predict.”
“While we remain committed to our goal of serving 1 billion global monthly active users, we expect overall DAU may decline in Q4 given these internal and external factors, and as noted above we expect particularly negative impacts in certain jurisdictions,” Snap said in the letter.
The Australian Senate passed the bill in November 2024, and when the law takes effect next month, companies like Meta, the parent of Facebook and Instagram, TikTok and Snap will face penalties if they fail to adequately prevent children under 16 from holding accounts on their platforms.
Snap also said in the investor letter that the “upcoming rollout of platform-level age verification” from companies like Apple and Google could also negatively impact user metrics in the future.
Utah and California have enacted online child-safety laws that put the onus on app store operators to verify user ages. Utah’s law is set to take full effect in May 2026.
“We are also preparing for the upcoming rollout of platform-level age verification, which will use new signals provided by Apple — and soon Google — to help us better determine the age of our users and remove those we learn are under 13,” Snap said in the letter.
Snap’s warning to investors underscores how new laws, policies and regulations around the globe are beginning to impact tech firms.
In the letter, Snap also said that some of its efforts to improve monetization, such as its Snapchat+ subscription service, could result in “adverse impact on engagement metrics as these experiences are rolled out globally.”
Pinterest shares tanked on Tuesday after the company reported third-quarter results that missed on earnings per share and provided weaker-than-expected guidance. The company’s finance chief, Julia Donnelly, told analysts that Pinterest expects “broader trends and market uncertainty continuing with the addition of a new tariff in Q4 impacting the home furnishing category.”
Alphabet said that its total advertising revenue for the third quarter rose 13% year-over-year to $74.18 billion, while YouTube’s online ad sales climbed 15% to $10.26 billion.
Reddit said last Thursday that third-quarter sales surged 68% year-over-year to $585 million. The company’s global daily active uniques increased 19% year-over-year to 116 million, surpassing estimates of 114 million.
A pedestrian walks past Amazon’s corporate offices in Dublin, Ireland, on Oct. 28, 2025, after Amazon.com said it plans to cut its global corporate workforce by as many as 14,000 roles and seize the opportunity provided by artificial intelligence.
Damien Eagers | Reuters
A new bipartisan bill seeks to provide a “clear picture” of how artificial intelligence is affecting the American workforce.
Sens. Mark Warner, D-Va., and Josh Hawley, R-Mo., on Wednesday announced the AI-Related Job Impacts Clarity Act. It would require publicly traded companies, certain private companies and federal agencies to submit quarterly reports to the Department of Labor detailing any job losses, new hires, reduced hiring or other significant changes to their workforce as a result of AI.
The data would then be compiled by the Department of Labor into a publicly available report.
“This bipartisan legislation will finally give us a clear picture of AI’s impact on the workforce,” Warner said in a statement. “Armed with this information, we can make sure AI drives opportunity instead of leaving workers behind.”
The proposed legislation comes as politicians, labor advocates and some executives have sounded the alarm in recent years about the potential for widespread job loss due to AI.
In May, Anthropic CEO Dario Amodei said the AI tools that his company and others are building could eliminate half of all entry-level white-collar jobs and push unemployment as high as 20% within the next one to five years. Anthropic makes the chatbot Claude.
Layoffs have been announced recently at companies across the tech, retail, auto and shipping industries, with executives citing myriad reasons, from AI and tariffs to shifting business priorities and broader cost-cutting efforts. Job cuts announced at Amazon, UPS and Target last month totaled more than 60,000 roles.
Some experts have questioned whether AI is fully to blame for the layoffs, noting that companies could be using the technology as cover for concerns about the economy, business missteps or cost-cutting initiatives.