
Dr. Scott Gottlieb is a CNBC contributor and is a member of the boards of Pfizer, genetic testing startup Tempus, health-care tech company Aetion Inc. and biotech company Illumina. He is also a partner at the venture capital firm New Enterprise Associates.

Researchers at Harvard presented a study demonstrating an achievement that would challenge any medical student: ChatGPT, a large language model, passed the U.S. Medical Licensing Exam, outperforming the roughly 10 percent of medical students who fail the test each year.


The inevitable question isn’t so much whether as when these artificial intelligence tools can step into the shoes of doctors. For some tasks, that medical future is closer than we think.

To grasp the potential of these tools to revolutionize the practice of medicine, it pays to start with a taxonomy of the different technologies and how they’re being used in medical care.

The AI tools being applied to health care can generally be divided into two main categories. The first is machine learning, which uses algorithms to enable computers to learn patterns from data and make predictions. These algorithms can be trained on a variety of data types, including images.

The second category encompasses natural language processing, which is designed to understand and generate human language. These tools enable a computer to transform human language and unstructured text into machine-readable, organized data. They learn from a multitude of human trial-and-error decisions and emulate a person’s responses.

A key difference between the two approaches resides in their functionality. While machine learning models can be trained to perform specific tasks, large language models can understand and generate text, making them especially useful for replicating interactions with providers.

In medicine, the use of these technologies is generally following one of four different paths. The first encompasses large language models applied to administrative functions such as processing medical claims or creating and analyzing medical records. Amazon’s HealthScribe is a programmable interface that transcribes conversations between doctors and patients and can extract medical information, allowing providers to create structured records of encounters.

The second bucket involves the use of supervised machine learning to enhance the interpretation of clinical data. Specialties such as radiology, pathology and cardiology are already using AI for image analysis: reading MRIs, evaluating pathology slides and interpreting electrocardiograms. In fact, up to 30% of radiology practices have already adopted AI tools. So have other specialties. Google Brain AI has developed software that analyzes images from the back of the eye to diagnose diabetic macular edema and diabetic retinopathy, two common causes of blindness.

Since these tools offer diagnoses and can directly affect patient care, the FDA often categorizes them as medical devices, subjecting them to regulation to verify their accuracy. However, the fact that these tools are trained on closed data sets, where the findings in data or imaging have been rigorously confirmed, gives the FDA increased confidence when assessing these devices’ integrity.

The third broad category comprises AI tools that rely on large language models to extract clinical information from patient-specific data, interpreting it to prompt providers with diagnoses or treatments to consider. Generally known as clinical decision support software, it evokes a picture of a brainy assistant designed to aid, not to supplant, a doctor’s judgment. IBM’s “Watson for Oncology” uses AI to help oncologists make more informed decisions about cancer treatments, while Google Health is developing DeepMind Health to create similar tools.

As long as the doctor remains involved and exercises independent judgment, the FDA doesn’t always regulate this kind of tool. The FDA focuses more on whether it’s meant to make a definitive clinical decision, as opposed to providing information to help doctors with their assessments.

The fourth and final grouping represents the holy grail for AI: large language models that operate fully autonomously, parsing the entirety of a patient’s medical record to diagnose conditions and prescribe treatments directly to the patient, without a physician in the loop.

Right now, there are only a few clinical language models, and even the largest ones possess a relatively small number of parameters. However, the strength of the models and the datasets available for their training might not be the most significant obstacles to these fully autonomous systems. The biggest hurdle may well be establishing a suitable regulatory path. Regulators are hesitant, fearing that the models are prone to errors and that the clinical data sets on which they’re trained contain wrong decisions, leading AI models to replicate these medical mistakes.

Overcoming the hurdles in bringing these fully autonomous systems to patient care holds significant promise, not only for improving outcomes but also for addressing financial challenges.

Health care is often cited as a field burdened by Baumol’s cost disease, an economic theory developed by economist William J. Baumol that explains why costs in labor-intensive industries tend to rise more rapidly than in other sectors. In fields such as medicine, technological inputs are less likely to provide major offsets to labor costs, since each patient encounter still requires the intervention of a provider. In such sectors, the labor itself is the product.

To compensate for these challenges, medicine has incorporated more non-physician providers to lower costs. That strategy reduces but doesn’t eliminate the central economic dilemma. When the technology becomes the doctor, however, it can be a cure for Baumol’s cost disease.

As the quality and scope of clinical data available for training these large language models continue to grow, so will their capabilities. Even if the current stage of development isn’t quite ready to completely remove doctors from the decision-making loop, these tools will increasingly enhance the productivity of providers and, in many cases, begin to substitute for them.


Tesla must pay portion of $329 million in damages after fatal Autopilot crash, jury says


A jury in Miami has determined that Tesla should be held partly liable for a fatal 2019 Autopilot crash, and must compensate the family of the deceased and an injured survivor a portion of $329 million in damages.

Tesla’s payout is based on $129 million in compensatory damages and $200 million in punitive damages against the company.

The jury determined Tesla should be held 33% responsible for the fatal crash. That means the automaker would be responsible for about $42.5 million in compensatory damages. In cases like these, punitive damages are typically capped at three times compensatory damages.

The plaintiffs’ attorneys told CNBC on Friday that because punitive damages were only assessed against Tesla, they expect the automaker to pay the full $200 million, bringing total payments to around $242.5 million.

Tesla said it plans to appeal the decision.

Attorneys for the plaintiffs had asked the jury to award $345 million in total damages. The trial in the Southern District of Florida started on July 14.

The suit centered on who shouldered the blame for the deadly crash in Key Largo, Florida. A Tesla owner named George McGee was driving his Model S electric sedan while using the company’s Enhanced Autopilot, a partially automated driving system.

While driving, McGee dropped the mobile phone he was using and scrambled to pick it up. He said during the trial that he believed Enhanced Autopilot would brake if an obstacle was in the way. His Model S accelerated through an intersection at just over 60 miles per hour, hitting a nearby empty parked car and its owners, who were standing on the other side of their vehicle.

Naibel Benavides, who was 22, died at the scene from injuries sustained in the crash. Her body was discovered about 75 feet away from the point of impact. Her boyfriend, Dillon Angulo, survived but suffered multiple broken bones, a traumatic brain injury and psychological effects.

“Tesla designed Autopilot only for controlled access highways yet deliberately chose not to restrict drivers from using it elsewhere, alongside Elon Musk telling the world Autopilot drove better than humans,” Brett Schreiber, counsel for the plaintiffs, said in an e-mailed statement on Friday. “Tesla’s lies turned our roads into test tracks for their fundamentally flawed technology, putting everyday Americans like Naibel Benavides and Dillon Angulo in harm’s way.”

Following the verdict, the plaintiffs’ families hugged each other and their lawyers, and Angulo was “visibly emotional” as he embraced his mother, according to NBC.

Here is Tesla’s response to CNBC:

“Today’s verdict is wrong and only works to set back automotive safety and jeopardize Tesla’s and the entire industry’s efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial.

Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator – which overrode Autopilot – as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash.

This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver – from day one – admitted and accepted responsibility.”

The verdict comes as Musk, Tesla’s CEO, is trying to persuade investors that his company can become a leader in autonomous vehicles, and that its self-driving systems are safe enough to operate fleets of robotaxis on public roads in the U.S.

Tesla shares dipped 1.8% on Friday and are now down 25% for the year, the biggest drop among tech’s megacap companies.

The verdict could set a precedent for Autopilot-related suits against Tesla. About a dozen active cases are underway focused on similar claims involving incidents where Autopilot or Tesla’s FSD, or Full Self-Driving (Supervised), had been in use just before a fatal or injurious crash.

The National Highway Traffic Safety Administration initiated a probe in 2021 into possible safety defects in Tesla’s Autopilot systems. During the course of that investigation, Tesla made changes, including a number of over-the-air software updates.

The agency then opened a second probe, which is ongoing, evaluating whether Tesla’s “recall remedy” to resolve issues with the behavior of its Autopilot, especially around stationary first responder vehicles, had been effective.

The NHTSA has also warned Tesla that its social media posts may mislead drivers into thinking its cars are capable of functioning as robotaxis, even though owner’s manuals say the cars require hands-on steering and a driver attentive to steering and braking at all times.

A site that tracks Tesla-involved collisions, TeslaDeaths.com, has reported at least 58 deaths resulting from incidents where Tesla drivers had Autopilot engaged just before impact.



Crypto wobbles into August as Trump’s new tariffs trigger risk-off sentiment



The crypto market slid Friday after President Donald Trump unveiled his modified “reciprocal” tariffs on dozens of countries.

The price of bitcoin showed relative strength, hovering at the flat line while ether, XRP and Binance Coin fell 2% each. Overnight, bitcoin dropped to a low of $114,110.73.

The descent triggered a wave of long liquidations, which force traders to sell their assets at market price to settle their debts, pushing prices lower. Bitcoin saw $172 million in liquidations across centralized exchanges in the past 24 hours, according to CoinGlass, and ether saw $210 million.

Crypto-linked stocks suffered deeper losses. Coinbase led the way, down 15% following its disappointing second-quarter earnings report. Circle fell 4%, Galaxy Digital lost 2%, and ether treasury company Bitmine Immersion was down 8%. Bitcoin proxy MicroStrategy was down by 5%.


The stock moves came amid a new wave of risk-off sentiment after President Trump issued new tariffs ranging between 10% and 41%, triggering worries about increasing inflation and the Federal Reserve’s ability to cut interest rates. In periods of broad-based de-risking, crypto tends to get hit as investors pull out of the most speculative and volatile assets. Technical resilience and institutional demand for bitcoin and ether are helping support their prices.

“After running red hot in July, this is a healthy strategic cooldown. Markets aren’t reacting to a crisis, they’re responding to the lack of one,” said Ben Kurland, CEO at crypto research platform DYOR. “With no new macro catalyst on the horizon, capital is rotating out of speculative assets and into safer ground … it’s a calculated pause.”

Crypto is coming off a winning month but could soon hit the brakes amid the new macro uncertainty, entering a month usually characterized by lower trading volumes and increased volatility. Bitcoin gained 8% in July, according to Coin Metrics, while ether surged more than 49%.

Ether ETFs saw more than $5 billion in inflows in July alone (with just a single day of outflows, $1.8 million on July 2), bringing their total cumulative inflows to $9.64 billion to date. Bitcoin ETFs saw $114 million in outflows in the final trading session of July, bringing their monthly inflows to about $6 billion out of a cumulative $55 billion.



Google has dropped more than 50 DEI-related organizations from its funding list



Google has purged more than 50 organizations related to diversity, equity and inclusion, or DEI, from a list of organizations that the tech company provides funding to, according to a new report from the Tech Transparency Project, a tech watchdog group.

The company has removed a total of 214 groups from its funding list while adding 101, the report found, citing the most recent public list of organizations that receive the most substantial contributions from Google’s U.S. Government Affairs and Public Policy team.

The largest category of purged groups was DEI-related, with a total of 58 groups removed from Google’s funding list, TTP found. The dropped groups had mission statements that included words such as “diversity,” “equity,” “inclusion,” “race,” “activism,” and “women.” Those are also terms that Trump administration officials have reportedly told federal agencies to limit or avoid.

In response to the report, Google spokesperson José Castañeda told CNBC that the list reflects contributions made in 2024 and that it does not reflect all contributions made by other teams within the company.

“We contribute to hundreds of groups from across the political spectrum that advocate for pro-innovation policies, and those groups change from year to year based on where our contributions will have the most impact,” Castañeda said in an email.

Organizations that were removed from Google’s list include the African American Community Service Agency, which seeks to “empower all Black and historically excluded communities”; the Latino Leadership Alliance, which is dedicated to “race equity affecting the Latino community”; and Enroot, which creates out-of-school experiences for immigrant kids. 

The funding purge is the latest step in Google’s retreat from some of its DEI commitments over the last couple of years. That pullback stems from cost cutting to prioritize investments in artificial intelligence as well as a changing political and legal landscape amid increasing national anti-DEI policies.

Over the past decade, Silicon Valley and other industries used DEI programs to root out bias in hiring, promote fairness in the workplace and advance the careers of women and people of color — demographics that have historically been overlooked in the workplace.

However, the U.S. Supreme Court’s 2023 decision to end affirmative action at colleges led to additional backlash against DEI programs in conservative circles.

President Donald Trump signed an executive order upon taking office in January to end the government’s DEI programs and directed federal agencies to combat what the administration considers “illegal” private-sector DEI mandates, policies and programs. Shortly after, Google’s Chief People Officer Fiona Cicconi told employees that the company would end DEI-related hiring “aspirational goals” due to new federal requirements and Google’s categorization as a federal contractor.

Despite DEI becoming such a divisive term, many companies are continuing the work but describing it in less charged terminology, like “learning” or “hiring.”

Even Google CEO Sundar Pichai affirmed the importance of diversity in the company’s workforce at an all-hands meeting in March.

“We’re a global company, we have users around the world, and we think the best way to serve them well is by having a workforce that represents that diversity,” Pichai said at the time.

One of the groups dropped from Google’s contributions list is the National Network to End Domestic Violence, which provides training, assistance, and public awareness campaigns on the issue of violence against women, the TTP report found. The group had been on Google’s list of funded organizations for at least nine years and continues to name the company as one of its corporate partners.

Google said it still gave $75,000 to the National Network to End Domestic Violence in 2024 but did not say why the group was removed from the public contributions list.
