Dr. Scott Gottlieb is a CNBC contributor and is a member of the boards of Pfizer, genetic testing startup Tempus, health-care tech company Aetion Inc. and biotech company Illumina. He is also a partner at the venture capital firm New Enterprise Associates.

Researchers at Harvard presented a study demonstrating an achievement that would challenge any medical student: ChatGPT, a large language model, passed the U.S. Medical Licensing Exam, a test that roughly 10 percent of medical students fail each year.

The inevitable question isn’t so much whether but when these artificial intelligence tools can step into the shoes of doctors. For some tasks, that medical future is closer than we think.

To grasp the potential of these tools to revolutionize the practice of medicine, it pays to start with a taxonomy of the different technologies and how they’re being used in medical care.

The AI tools being applied to health care can generally be divided into two main categories. The first is machine learning, which uses algorithms to enable computers to learn patterns from data and make predictions. These algorithms can be trained on a variety of data types, including images.

The second category encompasses natural language processing, which is designed to understand and generate human language. These tools enable a computer to transform human language and unstructured text into machine-readable, organized data. They learn from a multitude of human trial-and-error decisions and emulate a person’s responses.

A key difference between the two approaches resides in their functionality. While machine learning models can be trained to perform specific tasks, large language models can understand and generate text, making them especially useful for replicating interactions with providers.
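To make that distinction concrete, the following is a minimal sketch, not drawn from the article, of the two categories in Python. The scikit-learn classifier is a real library; the ask_llm function is a hypothetical placeholder for whatever large language model API a provider might use.

```python
# Illustrative sketch of the two AI categories described above (assumptions noted inline).
from sklearn.linear_model import LogisticRegression

# Category 1: machine learning -- learn a pattern from labeled data, then predict on new cases.
# Toy data: [systolic, diastolic] blood pressure readings labeled 0 = normal, 1 = hypertensive.
X_train = [[120, 80], [180, 110], [130, 85], [200, 120]]
y_train = [0, 1, 0, 1]
model = LogisticRegression().fit(X_train, y_train)
print(model.predict([[190, 115]]))  # likely [1]: the model flags the new reading

# Category 2: natural language processing -- turn unstructured text into structured,
# machine-readable data or a conversational response.
def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around a large language model API (not a real service call)."""
    raise NotImplementedError("plug in an LLM provider here")

note = "Pt c/o chest pain x2 days, worse on exertion. Hx of HTN."
# ask_llm(f"Extract the symptoms and history from this clinical note as JSON:\n{note}")
```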

In medicine, the use of these technologies generally follows one of four different paths. The first encompasses large language models applied to administrative functions such as processing medical claims or creating and analyzing medical records. Amazon’s HealthScribe is a programmable interface that transcribes conversations between doctors and patients and can extract medical information, allowing providers to create structured records of encounters.

The second bucket involves the use of supervised machine learning to enhance the interpretation of clinical data. Specialties such as radiology, pathology and cardiology are already using AI for image analysis: reading MRIs, evaluating pathology slides or interpreting electrocardiograms. In fact, up to 30% of radiology practices have already adopted AI tools, and so have other specialties. Google Brain AI has developed software that analyzes images from the back of the eye to diagnose diabetic macular edema and diabetic retinopathy, two common causes of blindness.

Since these tools offer diagnoses and can directly affect patient care, the FDA often categorizes them as medical devices, subjecting them to regulation to verify their accuracy. However, the fact that these tools are trained on closed data sets, where the findings in data or imaging have been rigorously confirmed, gives the FDA increased confidence when assessing these devices’ integrity.

The third broad category comprises AI tools that rely on large language models to extract clinical information from patient-specific data and interpret it, prompting providers with diagnoses or treatments to consider. Generally known as clinical decision support software, it evokes the picture of a brainy assistant designed to aid, not to supplant, a doctor’s judgment. IBM’s “Watson for Oncology” uses AI to help oncologists make more informed decisions about cancer treatments, while Google Health is developing DeepMind Health to create similar tools.

As long as the doctor remains involved and exercises independent judgment, the FDA doesn’t always regulate this kind of tool. The FDA focuses more on whether it’s meant to make a definitive clinical decision, as opposed to providing information to help doctors with their assessments.

The fourth and final grouping represents the holy grail for AI: large language models that operate fully automated, parsing the entirety of a patient’s medical record to diagnose conditions and prescribe treatments directly to the patient, without a physician in the loop.

Right now, there are only a few clinical language models, and even the largest ones possess a relatively small number of parameters. However, the strength of the models and the datasets available for their training might not be the most significant obstacles to these fully autonomous systems. The biggest hurdle may well be establishing a suitable regulatory path. Regulators are hesitant, fearing that the models are prone to errors and that the clinical data sets on which they’re trained contain wrong decisions, leading AI models to replicate these medical mistakes.

Overcoming the hurdles in bringing these fully autonomous systems to patient care holds significant promise, not only for improving outcomes but also for addressing financial challenges.

Health care is often cited as a field afflicted by Baumol’s cost disease, an economic theory developed by economist William J. Baumol that explains why costs in labor-intensive industries tend to rise more rapidly than in other sectors. In fields such as medicine, technological inputs are less likely to provide major offsets to labor costs, because each patient encounter still requires the intervention of a provider. In such sectors, the labor itself is the product.
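A stylized calculation helps illustrate the dynamic; the growth rates below are assumptions chosen for illustration, not figures from the article.

```python
# Stylized illustration of Baumol's cost disease (growth rates are illustrative assumptions).
years = 30
productivity_growth = 0.02  # annual productivity gains in technology-heavy sectors
wage_growth = 0.02          # wages rise economy-wide roughly in line with those gains

goods_unit_cost = 1.0   # unit cost in a high-productivity sector
visit_unit_cost = 1.0   # cost of one clinician visit, where productivity is roughly flat

for _ in range(years):
    # Higher wages are offset by higher output per worker in high-productivity sectors...
    goods_unit_cost *= (1 + wage_growth) / (1 + productivity_growth)
    # ...but each patient visit still takes the same amount of clinician time.
    visit_unit_cost *= (1 + wage_growth)

print(f"Relative cost of a visit after {years} years: {visit_unit_cost / goods_unit_cost:.1f}x")
# Roughly 1.8x: care grows more expensive relative to goods even with no change in what is delivered.
```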

To compensate for these challenges, medicine has incorporated more non-physician providers to lower costs. This strategy reduces but doesn’t eliminate the central economic dilemma. When the technology becomes the doctor, however, it can be a cure for Baumol’s cost disease.

As the quality and scope of clinical data available for training these large language models continue to grow, so will their capabilities. Even if the current stage of development isn’t quite ready to completely remove doctors from the decision-making loop, these tools will increasingly enhance the productivity of providers and, in many cases, begin to substitute for them.

Europe sets its sights on multi-billion-euro gigawatt factories as it plays catch-up on AI

Data storage tapes are stored at the National Energy Research Scientific Computing Center (NERSC) facility at the Lawrence Berkeley National Laboratory, which will house the U.S. supercomputer to be powered by Nvidia’s forthcoming Vera Rubin chips, in Berkeley, California, U.S. May 29, 2025.

Manuel Orbegozo | Reuters

Europe is setting its sights on gigawatt factories in a bid to bolster its lagging artificial intelligence industry and meet the challenges of a rapidly changing sector.

Buzz around the concept of factories that industrialize the production of AI has gained ground in recent months, particularly after Nvidia CEO Jensen Huang stressed the importance of the infrastructure at a June event. Huang hailed a new “industrial revolution” at the GTC conference in Paris and said his firm was working to help countries build revenue-generating AI factories through partnerships in France, Italy and the U.K.

For its part, the European Union describes the factories as a “dynamic ecosystem” that brings together computing power, data and talent to create AI models and applications.

The bloc has long lagged behind the U.S. and China in the race to scale up artificial intelligence. With 27 member states, the union is slower to act when it comes to agreeing on new legislation. Higher energy costs, permitting delays and a grid in dire need of modernization can also hamper development.

Henna Virkkunen, the European Commission’s executive vice president for tech sovereignty, told CNBC that the bloc’s goal is to bring together high-quality data sets, computing capacity and researchers, all in one place.

“We have, for example, 30% more researchers per capita than the U.S. has, focused on AI. Also we have around 7,000 startups [that] are developing AI, but the main obstacle for them is that they have very limited computing capacity. And that’s why we decided that, together with our member states, we are investing in this very crucial infrastructure,” she said.


“We have everything [that] is needed to be competitive in this sector, but at the same time we want to build up our technological sovereignty and our competitiveness,” she added.

So far, the EU has put up 10 billion euros ($11.8 billion) in funding to set up 13 AI factories and 20 billion euros as a starting point for investment in the gigafactories, marking what it says is the “largest public investment in AI in the world.” The bloc has already received 76 expressions of interest in the gigafactories from 16 member states across 60 sites, Virkkunen said.

The call for interest in gigafactories was “overwhelming,” going far beyond the bloc’s expectations, Virkkunen noted. However, in order for the factories to make a noteworthy addition to Europe’s computing capacity, significantly more investment will be required from the private sector to fund the expensive infrastructure.

‘Intelligence revolution’

The EU describes the facilities as a “one-stop shop” for AI firms. They’re intended to mirror industrial factories, which transform raw materials into goods and services: with an AI factory, raw data goes in and advanced AI products come out.

It’s essentially a data center with additional infrastructure related to how the technology will be adopted, according to Andre Kukhnin, equity research analyst at UBS.

“The idea is to create GPU [graphics processing units] capacity, so to basically build data centers with GPUs that can train models and run inference… and then to create an infrastructure that allows you to make this accessible to SMEs and parties that would not be able to just go and build their own,” Kukhnin said.

How the facility will be used is key to its designation as an AI factory, adds Martin Wilkie, research analyst at Citi.

“You’re creating a platform by having these chips that have insane levels of compute capacity,” he said. “And if you’ve attached it to a grid that is able to get the power to actually use them to full capacity, then the world is at your feet. You have this enormous ability to do something, but what the success of it is, will be defined by what you use it for.”

Telecommunications firm Telenor is already exploring possible use cases for such facilities with the launch of its AI factory in Norway in November last year. The company currently has a small cluster of GPUs up and running, as it looks to test the market before scaling up.

Telenor’s Chief Innovation Officer and Head of the AI Factory Kaaren Hilsen and EVP Infrastructure Jannicke Hilland in front of an Nvidia rack at the firm’s AI factory.

Telenor

“The journey started with a belief — Nvidia had a belief that every country needs to produce its own intelligence,” Telenor’s Chief Innovation Officer and Head of the AI Factory Kaaren Hilsen told CNBC.

Hilsen stressed that data sovereignty is key. “If you want to use AI to innovate and to make business more efficient, then you’re potentially putting business critical and business sensitive information into these AI models,” she said.

The company is working with BabelSpeak, which Hilsen described as a Norwegian version of ChatGPT. The technology translates sensitive dialogues; in one pilot, it is being used by the border police, who can’t use public translation services because of security concerns.

We’re experiencing an “intelligence revolution” whereby “sovereign AI factories can really help advance society,” Hilsen said.

Billion-euro investments

Virkkunen said the region’s first AI factory will be operational in the coming weeks, with one of the biggest projects launching in Munich, Germany, in the first days of September. It’s a different story for the gigafactories.

“These are very big investments because they are four times more powerful when it comes to computing capacities than the biggest AI factories, and it means billions in investments. Each of these need three to five billion [euros] in investment,” the commissioner said, adding that the bloc will look to set up a consortium of partners and then officially open a call for investment later this year.

Bertin Martens, senior research fellow at Bruegel, questioned why such investments needed to be subsidized by government funds.

“We don’t know yet how much private investment has been proposed as a complement to the taxpayer subsidy, and what capacity and how big these factories are. This is still very much unclear at this stage, so it’s very hard to say how much this will add in terms of computing capacity,” he said.

Power consumption is also a key issue. Martens noted that building an AI gigafactory may take one to two years, but building power generation capacity of that size requires much more time.

“If you want to build a state-of-the-art gigafactory with hundreds of thousands of Nvidia chips, you have to count on the power consumption of at least one gigawatt for one of those factories. Whether there’s enough space in Europe’s electricity grid in all of these countries to create those factories remains to be seen… this will require major investment in power generation capacity,” he told CNBC.

UBS forecasts that the current installed global data center capacity of 85 GW will double due to soaring demand. Based on the EU’s 20-billion-euro investment and the plan for each factory to run 100,000 advanced processors, UBS estimates each factory could be around 100-150 MW with a total capacity for all of the facilities of around 1.5-2 GW.

That could add around 15% to Europe’s total capacity — a sizeable boost, even when compared to the U.S., which currently owns around a third of global capacity, according to the data.
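As a rough check, the figures cited above can be combined in a few lines of arithmetic; the derived values below follow only from the article’s numbers and are not additional UBS estimates.

```python
# Back-of-envelope check using only the figures cited above.
per_factory_mw = (100, 150)   # UBS estimate of capacity per facility
total_gw = (1.5, 2.0)         # UBS estimate for all facilities combined
chips_per_factory = 100_000   # planned advanced processors per factory
global_capacity_gw = 85       # current installed global data center capacity

# Implied number of facilities at those sizes:
low = total_gw[0] * 1000 / per_factory_mw[1]
high = total_gw[1] * 1000 / per_factory_mw[0]
print(f"Implied facility count: {low:.0f} to {high:.0f}")  # roughly 10 to 20

# Implied power budget per processor, including supporting infrastructure:
per_chip_kw = (per_factory_mw[0] * 1000 / chips_per_factory,
               per_factory_mw[1] * 1000 / chips_per_factory)
print(f"Implied draw per processor: {per_chip_kw[0]:.1f}-{per_chip_kw[1]:.1f} kW")

# If 1.5-2 GW adds about 15% to Europe's capacity, Europe's installed base is roughly:
print(f"Implied current European capacity: {total_gw[0] / 0.15:.0f}-{total_gw[1] / 0.15:.0f} GW")

# For scale, the U.S. share of the 85 GW global base (about a third) is roughly:
print(f"Approximate U.S. capacity: {global_capacity_gw / 3:.0f} GW")
```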

Following the announcement of the EU-U.S. trade framework, EU chief Ursula von der Leyen said Sunday that U.S. AI chips will help power the bloc’s AI gigafactories in a bid to help the States “maintain their technological edge.”

“One could argue that it’s relatively easy, provided you have the money. It’s relatively easy to buy the chips from Nvidia and to create these hardware factories, but to make it run and to make it economically viable is a completely different question,” Martens told CNBC.

He said that the EU will likely have to start at a smaller scale, as the region is unable to immediately build its own frontier models in AI because of their expense.

“I think in time, Europe can gradually build up its infrastructure and its business models around AI to reach that stage, but that will not happen immediately,” Martens said.

India overtakes China in smartphone exports to the U.S. as manufacturing jumps 240%, report shows

Workers assemble smartphones at Dixon Technologies’ Padget Electronics Pvt factory in Uttar Pradesh, India, on Jan. 28, 2021.

Bloomberg | Bloomberg | Getty Images

India has overtaken China to become the top exporter of smartphones to the U.S., according to research firm Canalys, reflecting the shift in the manufacturing supply chain away from Beijing amid tariff-fueled uncertainty.

Smartphones assembled in India accounted for 44% of U.S. imports of those devices in the second quarter, a significant increase from just 13% in the same period last year. The total volume of smartphones made in India soared 240% from a year earlier, Canalys said.

In contrast, the share of Chinese smartphone exports to the U.S. shrank to 25% in the quarter ended June, from 61% a year earlier, Canalys data released Monday showed. Vietnam’s share of smartphone exports to the U.S., at 30%, was also higher than China’s.

The surge in shipments from India was primarily driven by Apple’s accelerated shift toward the country at a time of heightened trade uncertainty between the U.S. and China, said Sanyam Chaurasia, principal analyst at Canalys. It is the first time India has exported more smartphones to the U.S. than China.

Apple has reportedly been speeding up its plans to make most of its iPhones sold in the U.S. at factories in India this year, with the aim of manufacturing around a quarter of all iPhones in the country in the next few years.

Trump has threatened Apple with additional tariffs and urged CEO Tim Cook to make iPhones domestically, a move experts have said would be nearly impossible and would push iPhone prices sharply higher.

While many of Apple’s core products, including iPhones and Mac laptops, have received exemptions from Trump’s “reciprocal tariffs,” officials have warned that it could be a temporary reprieve.

Its global peers Samsung Electronics and Motorola have also been striving to move assembly of U.S.-bound smartphones to India, though their shift has been significantly slower and more limited in scale than Apple’s, according to Canalys.

Last-mile assembly

Many global manufacturers have been increasingly shifting their final assembly to India, allocating more capacity in the South Asian nation to serve the U.S. market, said Renaud Anjoran, CEO of Agilian Technology, an electronics manufacturer in China.

The Guangdong-based company is now renovating a facility in India with plans to move part of its production to the country. “The plan for India is moving ahead as fast as we can,” Anjoran said. The company expects to begin trial production runs soon before ramping up to full-scale manufacturing.

While shipments, which represent the number of devices sent to retailers, do not reflect final sales, they are a proxy for market demand.

Overall, iPhone shipments declined by 11% year on year to 13.3 million units in the second quarter, reversing the 25% growth in the prior quarter, according to Canalys.

Shares of Apple have tumbled 14% this year, partly on concerns over its high exposure to tariff uncertainty and intensifying competition in the smartphone and artificial intelligence sectors.

While the company has begun assembling iPhone 16 Pro models in India, it still relies heavily on China’s more mature manufacturing infrastructure to meet U.S. demand for the premium model, Canalys said.

In April, Trump imposed a 26% tariff on imports from India, much lower than the triple-digit tariffs on China at the time, before pausing those duties until an Aug. 1 deadline.

— CNBC’s Arjun Kharpal contributed to this story.

Waymo plans to bring its robotaxi service to Dallas in 2026

A Waymo rider-only robotaxi is seen during a test ride in San Francisco, California, U.S., December 9, 2022. 

Paresh Dave | Reuters

Alphabet’s Waymo unit plans on bringing its robotaxi service to Dallas next year, adding to a growing list of prospective U.S. markets for 2026, including Miami and Washington, D.C.

Rental car company Avis Budget Group will be managing the Waymo fleet in Dallas, via a new partnership the companies announced Monday.

Avis CEO Brian Choi said in a statement that the agreement marks a “milestone” for the company, which is now also working to become “a leading provider of fleet management, infrastructure and operations to the broader mobility ecosystem.”

Waymo robotaxi testing is already underway in downtown Dallas, using the company’s Jaguar I-PACE electric vehicles equipped with the Waymo Driver system, which combines automated driving software, sensors and other hardware to power the vehicles’ “level 4” driverless operations.

Passengers will be able to hail a driverless ride using the Waymo app in Dallas. In some other markets, Waymo only makes its services available through ride-hailing platform Uber.

Waymo has surged ahead in the robotaxi market while other autonomous vehicle developers, including Tesla, Amazon-owned Zoox, and venture-backed startups such as Nuro, May Mobility and Wayve, are working to make autonomous transportation a commercial reality in the U.S.

Waymo says it conducts more than 250,000 paid weekly trips in the markets where it operates commercially, including Atlanta, Austin, Los Angeles, Phoenix and San Francisco.

Waymo’s steepest competition internationally comes from Baidu’s robotaxi venture Apollo Go in China, which is eyeing expansion in Europe.

On Alphabet’s second-quarter earnings call, executives boasted that “the Waymo Driver has now autonomously driven over 100 million miles on public roads, and the team is testing across more than 10 cities this year, including New York and Philadelphia.”

The business has become significant enough that Alphabet even added a category to its Other Bets revenue description in its latest quarterly filing.

“Revenues from Other Bets are generated primarily from the sale of autonomous transportation services, healthcare-related services and internet services,” the filing said.

The Other Bets segment remains relatively small, however, with revenue coming in at $373 million in the quarter, up from $365 million a year ago. The division still reported a loss of $1.25 billion, widening from $1.13 billion in the second quarter of 2024.

WATCH: Waymo co-CEO on 10 million driverless rides and Tesla’s coming robotaxi challenge

