Microsoft announced Thursday it is teaming up with digital pathology provider Paige to build the world’s largest image-based artificial intelligence model for identifying cancer.
The AI model is training on an unprecedented amount of data that includes billions of images, according to a release. It can identify both common cancers and rare cancers that are notoriously difficult to diagnose, and researchers hope it will eventually help doctors who are struggling to contend with staffing shortages and growing caseloads.
Paige develops digital and AI-powered solutions for pathologists, the doctors who carry out lab tests on bodily fluids and tissues to make a diagnosis. It’s a specialty that often operates behind the scenes, and it’s crucial for determining a patient’s path forward.
“You don’t have cancer until the pathologist says so. That’s the critical step in the whole medical edifice,” Thomas Fuchs, co-founder and chief scientist at Paige, told CNBC in an interview.
But despite pathologists’ essential role in medicine, Fuchs said their workflow has not changed much in the last 150 years. To diagnose cancer, for instance, pathologists usually examine a piece of tissue on a glass slide under a microscope. The method is tried and true, but if pathologists miss something, it can have dire consequences for patients.
As a result, Paige has been working to digitize the pathologists’ workflow to improve accuracy and efficiency within the specialty.
Doctors working with Paige technology. Source: Paige
The company has received approval from the Food and Drug Administration for its viewing tool FullFocus, which allows pathologists to examine scanned digital slides on a screen instead of relying on a microscope. Paige also built an AI model that can help pathologists identify breast cancer, colon cancer and prostate cancer when it appears on the screen.
Digital pathology is costly
Paige is the only company that has received FDA approval for pathologists to use its AI as a secondary tool for identifying prostate cancer, and CEO Andy Moye said this is likely in part because of barriers related to storage costs and data collection.
Digitizing a single slide can require over a gigabyte of storage, so the infrastructure and costs associated with large-scale data collection balloon quickly. Fuchs said the storage costs can be inhibiting for smaller health systems, which is why wealthy academic centers have historically been the only organizations that can afford to invest in digital pathology.
Paige spun out of the Memorial Sloan Kettering Cancer Center in New York in 2017 and has a “fantastic wealth of data,” according to Moye, which is why the company was able to build its own AI-powered solutions in the first place. To put the scale in perspective, Paige has 10 times more data than Netflix’s entire catalog of shows and movies.
But in order to expand its operations and build an AI tool that can identify more cancer types, Paige turned to Microsoft for help. Over the past year and a half, Paige has been using Microsoft’s cloud storage and supercomputing infrastructure to build an advanced new AI model.
Paige’s original AI model used more than 1 billion images from 500,000 pathology slides, but Fuchs said the model the company has built with Microsoft is “orders of magnitude larger than anything out there.” The model is training on 4 million slides to identify both common and rare cancers, which can be difficult to diagnose. Paige said it is the largest computer vision model that has ever been announced publicly.
“Until ChatGPT got released, no one really understood how this is going to impact their lives. I would argue this is very similar for cancer patients going forward,” Moye said. “This is sort of a groundbreaking, land-on-the-moon kind of moment for cancer care.”
Moye added that the company is thinking of ways to incorporate predictive modeling to give pathologists and patients easy access to information about their biomarkers and genomic mutations down the line.
Desney Tan, vice president and managing director of Microsoft Health Futures, said Microsoft’s infrastructure is a key component of the partnership, but that the company is also working to develop the new algorithms, detection and diagnostics that Paige is hoping to deliver in the next couple of years.
He added that though the technology is powerful, it’s meant to augment pathologists’ work, not replace them.
“We think of these AI implements, these technologies, as tools, really just as the stethoscope is a tool, just as the X-ray machine is a tool,” Tan told CNBC in an interview. “AI is a tool that is to be wielded by a human.”
On Thursday, Paige and Microsoft will publish a paper on the model through Cornell University’s preprint server arXiv. The paper quantifies the impact of the new model compared with existing models, and Fuchs said it outperforms anything that has been built in academia up to this point.
But the preprint is just the first step of a much longer journey. Paige wanted to make the research available to the broader community while it is under peer review, and the company intends to submit to the scientific journal Nature. The process can take months, if not longer. Paige also has years of work ahead before it will be able to roll the model out as a product — including thorough testing and collaboration with regulators to ensure it is safe and accurate.
Ultimately, Fuchs said the AI model will solve the storage problem for health systems, while also helping pathologists work through cases and arrive at a diagnosis more quickly. For some patients, it could mean the difference between waiting two days and two weeks to find out what’s wrong.
“The more you go away from academic medical centers, especially in community clinics where pathologists are completely overwhelmed across all cancer types with so many cases, there, the impact is quite drastic,” Fuchs said. “That really helps to democratize access to health care in these places.”
Okta on Tuesday topped Wall Street’s third-quarter estimates and issued an upbeat outlook, but shares fell as the company did not provide guidance for fiscal 2027.
Shares of the identity management provider fell more than 3% in after-hours trading on Tuesday.
Here’s how the company did versus LSEG estimates:
Earnings per share: 82 cents adjusted vs. 76 cents expected
Revenue: $742 million vs. $730 million expected
Unlike in previous third-quarter reports, Okta refrained from offering preliminary guidance for the upcoming fiscal year. Finance chief Brett Tighe cited seasonality in the fourth quarter, and said providing guidance would require “some conservatism.”
During the third quarter, Okta released a capability that allows businesses to build AI agents and automate tasks.
CEO Todd McKinnon told CNBC that upside from AI agents hasn’t been fully baked into results and could exceed Okta’s core total addressable market over the next five years.
“It’s not in the results yet, but we’re investing, and we’re capitalizing on the opportunity like it will be a big part of the future,” he said in a Tuesday interview.
Revenues increased almost 12% from $665 million in the year-ago period. Net income increased 169% to $43 million, or 24 cents per share, from $16 million, or breakeven, a year ago. Subscription revenues grew 11% to $724 million, ahead of a $715 million estimate.
For the current quarter, the cybersecurity company expects revenues between $748 million and $750 million and adjusted earnings of 84 cents to 85 cents per share. Analysts forecast $738 million in revenues and EPS of 84 cents for the fourth quarter.
Remaining performance obligations, or the company’s subscription backlog, rose 17% from a year ago to $4.29 billion and surpassed a $4.17 billion estimate from StreetAccount.
This year has been a blockbuster period for cybersecurity companies, with major acquisition deals from the likes of Palo Alto Networks and Google and a raft of new initial public offerings from the sector.
Marvell Technology Group Ltd. headquarters in Santa Clara, California, on Sept. 6, 2024. (David Paul Morris | Bloomberg | Getty Images)
Semiconductor company Marvell on Tuesday announced that it will acquire Celestial AI for at least $3.25 billion in cash and stock.
The purchase price could increase to $5.5 billion if Celestial hits revenue milestones, Marvell said.
Marvell shares rose 13% in extended trading Tuesday as the company reported third-quarter earnings that beat expectations and said on the earnings call that it expected data center revenue to rise 25% next year.
The deal is an aggressive move by Marvell to acquire technology complementary to its semiconductor networking business. The addition of Celestial could enable Marvell to sell more chips and parts to companies that are currently committing to spend hundreds of billions of dollars on AI infrastructure.
Marvell stock is down 18% so far in 2025 even as semiconductor rivals like Broadcom have seen big valuation increases driven by excitement around artificial intelligence.
Celestial is a startup focused on developing optical interconnect hardware, which it calls a “photonic fabric,” to connect high-performance computers. Celestial was reportedly valued at $2.5 billion in March in a funding round, and Intel CEO Lip-Bu Tan joined the startup’s board in January.
Optical connections are becoming increasingly important because the most advanced AI systems need those parts to tie together dozens or hundreds of chips so they can work as one to train and run the biggest large language models.
Currently, many AI chip connections are done using copper wires, but newer systems are increasingly using optical connections because they can transfer more data faster and enable physically longer cables. Optical connections also cost more.
“This builds on our technology leadership, broadens our addressable market in scale-up connectivity, and accelerates our roadmap to deliver the industry’s most complete connectivity platform for AI and cloud customers,” Marvell CEO Matt Murphy said in a statement.
Marvell said that the first application of Celestial technology would be to connect a system based on “large XPUs,” which are custom AI chips usually made by the companies investing billions in AI infrastructure.
On Tuesday, the company said that, based on customer traction, it expects Celestial’s optical technology to soon be integrated directly into custom AI chips and related parts called switches.
Amazon Web Services Vice President Dave Brown said in a statement that Marvell’s acquisition of Celestial will “help further accelerate optical scale-up innovation for next-generation AI deployments.”
The maximum payout for the deal will be triggered if Celestial can record $2 billion in cumulative revenue by the end of fiscal 2029. The deal is expected to close early next year.
In its third-quarter report on Tuesday, Marvell posted earnings of 76 cents per share on $2.08 billion in sales, versus LSEG expectations of 73 cents on $2.07 billion in sales. Marvell said it expects fourth-quarter revenue of $2.2 billion, slightly higher than LSEG’s forecast of $2.18 billion.
Amazon Web Services’ two-track approach to artificial intelligence came into better focus Tuesday as the world’s biggest cloud pushed forward with its own custom chips and got closer to Nvidia. During Amazon’s annual AWS Re:Invent 2025 conference in Las Vegas, Amazon Web Services CEO Matt Garman unveiled Trainium3, the latest version of the company’s in-house custom chip. It has four times more compute performance, energy efficiency, and memory bandwidth than previous generations, and AWS said early customer tests show Trainium3 reducing AI training and inference costs by up to 50%.

Custom chips like Trainium are becoming more popular among the big tech companies that can afford to make them, and their use cases are broadening. For example, Google’s tensor processing units (TPUs), co-designed by Broadcom, have been getting a lot of attention since last month’s launch of the well-received Gemini 3 artificial intelligence model, which is powered by TPUs. There was even a report that Meta Platforms was considering TPUs in addition to Nvidia’s graphics processing units (GPUs), which are the gold standard for all-purpose AI workloads.

At the same time, Amazon also announced that it’s deepening its work with Nvidia. In Tuesday’s keynote, Garman introduced AWS Factories, which provides on-premises AI infrastructure for customers to use in their own data centers. The service combines Trainium accelerators and Nvidia graphics processing units, giving customers access to Nvidia’s accelerated computing platform, full-stack AI software, and GPU-accelerated applications.

By offering both options, Amazon aims to keep accelerating AWS cloud capacity and, in turn, revenue growth to stay on top amid intense competition from Microsoft’s Azure and Alphabet’s Google Cloud, the second- and third-place horses in the AI race by revenue. Earlier this year, investors were concerned when second-quarter AWS revenue growth did not keep pace with its closest competitors’. In late October’s release of third-quarter results, Amazon went a long way toward putting those worries to rest. Amazon CEO Andy Jassy said at the time, “AWS is growing at a pace we haven’t seen since 2022, re-accelerating to 20.2% YoY.” He added, “We’ve been focused on accelerating capacity — adding more than 3.8 gigawatts (GW) in the past 12 months.”

Tuesday’s announcements come at a pivotal time for AWS as it tries to rapidly expand its computing capacity after a year of supply constraints that put a lid on cloud growth. As welcome as more efficient chips are, they don’t make up for the capacity demand the company is facing as AI adoption ramps up, which is why adding more gigawatts of capacity is what Wall Street is laser-focused on.

Fortunately, Wall Street argues that the capacity headwind should flip to a tailwind. Wells Fargo said Trainium3 is “critical to supplementing Nvidia GPUs and CPUs in this capacity build” to close the gap with rivals. In a note to investors on Monday, the analysts estimated Amazon will add more than 12 gigawatts of compute by year-end 2027, boosting total AWS capacity to support as much as $150 billion in incremental annual AWS revenue if demand remains strong. In a separate note Monday, Oppenheimer said AWS has already proven its ability to expand capacity, which has doubled since 2022, and Amazon plans to double it again by 2027. The analysts said such an expansion could translate to 14% upside to 2026 AWS revenue and 22% upside in 2027.
Analysts said each incremental gigawatt of compute added in recent quarters translated to roughly $3 billion of annual cloud revenue.

Bottom line

While new chips are welcome news that helps AWS step deeper into the AI chip race, investors are more focused on Amazon’s investment in capacity and when that capacity will come online, because that is how the company will fulfill demand. The issue is not demand; it’s supply. We are confident in AWS’ ability to add the capacity. In fact, there is no company in the world better equipped to handle a logistics problem of this scale than Amazon.

Amazon shares surged nearly 14% to $254 each in the two sessions following the cloud and e-commerce giant’s late Oct. 30 earnings print. The stock has since given back those gains and then some. As of Tuesday’s close, shares were up 6.5% year to date, a laggard among its “Magnificent Seven” peers and underperforming the S&P 500’s roughly 16% advance in 2025.

(Jim Cramer’s Charitable Trust is long AMZN and NVDA.)