As a fourth-year ophthalmology resident at Emory University School of Medicine, Riley Lyons' biggest responsibilities include triage: When a patient comes in with an eye-related complaint, Lyons must make an immediate assessment of its urgency.

He often finds patients have already turned to "Dr. Google." Online, Lyons said, they are likely to find that "any number of terrible things could be going on based on the symptoms that they're experiencing."

So, when two of Lyons' fellow ophthalmologists at Emory came to him and suggested evaluating the accuracy of the AI chatbot ChatGPT in diagnosing eye-related complaints, he jumped at the chance.

In June, Lyons and his colleagues reported in medRxiv, an online publisher of health science preprints, that ChatGPT compared quite well to human doctors who reviewed the same symptoms, and performed vastly better than the symptom checker on the popular health website WebMD. And despite the much-publicized "hallucination" problem known to afflict ChatGPT (its habit of occasionally making outright false statements), the Emory study reported that the most recent version of ChatGPT made zero "grossly inaccurate" statements when presented with a standard set of eye complaints.

The relative proficiency of ChatGPT, which debuted in November 2022, was a surprise to Lyons and his co-authors. The artificial intelligence engine "is definitely an improvement over just putting something into a Google search bar and seeing what you find," said co-author Nieraj Jain, an assistant professor at the Emory Eye Center who specializes in vitreoretinal surgery and disease.

But the findings underscore a challenge facing the health care industry as it assesses the promise and pitfalls of generative AI, the type of artificial intelligence used by ChatGPT: The accuracy of chatbot-delivered medical information may represent an improvement over Dr. Google, but there are still many questions about how to integrate this new technology into health care systems with the same safeguards historically applied to the introduction of new drugs or medical devices.

The smooth syntax, authoritative tone, and dexterity of generative AI have drawn extraordinary attention from all sectors of society, with some comparing its future impact to that of the internet itself. In health care, companies are working feverishly to implement generative AI in areas such as radiology and medical records.

When it comes to consumer chatbots, though, there is still caution, even though the technology is already widely available and better than many alternatives. Many doctors believe AI-based medical tools should undergo an approval process similar to the FDA's regime for drugs, but that would be years away. It's unclear how such a regime might apply to general-purpose AIs like ChatGPT.

"There's no question we have issues with access to care, and whether or not it is a good idea to deploy ChatGPT to cover the holes or fill the gaps in access, it's going to happen and it's happening already," said Jain. "People have already discovered its utility. So, we need to understand the potential advantages and the pitfalls."

The Emory study is not alone in ratifying the relative accuracy of the new generation of AI chatbots. A report published in Nature in early July by a group led by Google computer scientists said answers generated by Med-PaLM, an AI chatbot the company built specifically for medical use, "compare favorably with answers given by clinicians."

AI may also have better bedside manner. Another study, published in April by researchers from the University of California-San Diego and other institutions, even noted that health care professionals rated ChatGPT answers as more empathetic than responses from human doctors.

Indeed, a number of companies are exploring how chatbots could be used for mental health therapy, and some investors in the companies are betting that healthy people might also enjoy chatting and even bonding with an AI friend. The company behind Replika, one of the most advanced of that genre, markets its chatbot as "The AI companion who cares. Always here to listen and talk. Always on your side."

"We need physicians to start realizing that these new tools are here to stay and they're offering new capabilities both to physicians and patients," said James Benoit, an AI consultant. While a postdoctoral fellow in nursing at the University of Alberta in Canada, he published a study in February reporting that ChatGPT significantly outperformed online symptom checkers in evaluating a set of medical scenarios. "They are accurate enough at this point to start meriting some consideration," he said.

Still, even the researchers who have demonstrated ChatGPT's relative reliability are cautious about recommending that patients put their full trust in the current state of AI. For many medical professionals, AI chatbots are an invitation to trouble: They cite a host of issues relating to privacy, safety, bias, liability, transparency, and the current absence of regulatory oversight.

The proposition that AI should be embraced because it represents a marginal improvement over Dr. Google is unconvincing, these critics say.

"That's a little bit of a disappointing bar to set, isn't it?" said Mason Marks, a professor and MD who specializes in health law at Florida State University. He recently wrote an opinion piece on AI chatbots and privacy in the Journal of the American Medical Association. "I don't know how helpful it is to say, 'Well, let's just throw this conversational AI on as a band-aid to make up for these deeper systemic issues,'" he told KFF Health News.

The biggest danger, in his view, is the likelihood that market incentives will result in AI interfaces designed to steer patients to particular drugs or medical services. "Companies might want to push a particular product over another," said Marks. "The potential for exploitation of people and the commercialization of data is unprecedented."

OpenAI, the company that developed ChatGPT, also urged caution.

"OpenAI's models are not fine-tuned to provide medical information," a company spokesperson said. "You should never use our models to provide diagnostic or treatment services for serious medical conditions."

John Ayers, a computational epidemiologist who was the lead author of the UCSD study, said that as with other medical interventions, the focus should be on patient outcomes.

"If regulators came out and said that if you want to provide patient services using a chatbot, you have to demonstrate that chatbots improve patient outcomes, then randomized controlled trials would be registered tomorrow for a host of outcomes," Ayers said.

He would like to see a more urgent stance from regulators.

"One hundred million people have ChatGPT on their phone," said Ayers, "and are asking questions right now. People are going to use chatbots with or without us."

At present, though, there are few signs that rigorous testing of AIs for safety and effectiveness is imminent. In May, Robert Califf, the commissioner of the FDA, described the regulation of large language models as "critical to our future," but aside from recommending that regulators be "nimble" in their approach, he offered few details.

In the meantime, the race is on. In July, The Wall Street Journal reported that the Mayo Clinic was partnering with Google to integrate the Med-PaLM 2 chatbot into its system. In June, WebMD announced it was partnering with a Pasadena, California-based startup, HIA Technologies Inc., to provide interactive digital health assistants. And the ongoing integration of AI into both Microsoft's Bing and Google Search suggests that Dr. Google is already well on its way to being replaced by Dr. Chatbot.

This article was produced by KFF Health News, which publishes California Healthline, an editorially independent service of the California Health Care Foundation.

UK

Police rehearsed a knife attack scenario on a train line in March – here’s what went differently this time

British Transport Police held an emergency exercise for press officers in March which, coincidentally, involved a stabbing on a train travelling south near Huntingdon.

In the training drill, the train stopped immediately between stations when a passenger pulled the emergency cord.

It took police 25 minutes to reach the train and casualties in the exercise, far longer than the eight minutes it took Cambridgeshire firearms officers to reach the real scene at Huntingdon station.

Chris Webb, a crisis communications expert who helped run the exercise, said: “People think if you pull the emergency cord on a train it stops immediately, but that’s not what happens these days.

“As soon as the driver knows there is a problem, he or she radios the line operator HQ and they discuss where to stop.

“The decision last night was to keep going to Huntingdon station, where it was much easier for armed police to get on.”

Image: Forensic investigators at Huntingdon train station in Cambridgeshire

He added: “It must have been awful for passengers when the train kept going for another ten minutes or so.

“It’s always a balance. It might have prolonged the attack, but stopping in the middle of nowhere can mean the attack stops but it’s much more difficult for the emergency services to get there.”

Mr Webb, former head of news at Scotland Yard, said such exercises are held regularly by train operators.

A similar drill was carried out on the London Underground weeks before the 7/7 bombings in 2005.

“There are always lessons to learn but you cannot guard against everything,” he said.

In the training exercise in March, the suspect was a white man with mental health issues. He was shot dead by police.

What happened in the Huntingdon attack?

Police declared the code word Plato to all emergency services in their initial response to the Huntingdon train stabbing, but that did not label it a terrorist attack.

Plato is declared for a major incident in which a suspect is thought to be on the loose and has already caused, or is liable to cause, serious injury.

Plato does not denote a terror attack, though it is often used in terrorist incidents.

Image: A forensic investigator on the platform by the train at Huntingdon train station in Cambridgeshire

In a Plato response, paramedics, firefighters, and other first responders are sent to a safe rendezvous point while armed police go in and deal with the suspect.

Plato describes a situation in which unarmed responders are vulnerable, so they are kept back until it is safe to approach casualties.

There are exceptions: it's understood the East of England Ambulance Service has a special Hazardous Area Response Team (HART), which was allowed to accompany armed police onto the platform where the two suspects were arrested last night.

Calling Plato off, which allowed other first responders in, was an important part of the operation.

Plato was called during the initial response to the Manchester Arena bomb attack in 2017, but the fire service was not told it had been called off for two hours, which meant its crews did not go in to help with the rescue.

Science

Hubble Observes Massive Stellar Eruption from EK Draconis, Hinting at Life’s Origins

Astronomers using the Hubble Space Telescope observed a massive stellar eruption from EK Draconis, a young Sun-like star. The eruption's energy may trigger atmospheric chemistry on orbiting planets, forming greenhouse gases and organic molecules. Such events could mirror the early solar activity that helped spark life on ancient Earth, and could do the same on distant exoplanets.

Politics

Coinbase mulls $2B BVNK startup acquisition in stablecoin push: Report

Stablecoins are becoming an important source of income for Coinbase, as they accounted for about 20% of the exchange’s total revenue during the third quarter of 2025.
