As a fourth-year ophthalmology resident at Emory University School of Medicine, Riley Lyons' biggest responsibilities include triage: When a patient comes in with an eye-related complaint, Lyons must make an immediate assessment of its urgency.

He often finds patients have already turned to Dr. Google. Online, Lyons said, they are likely to find that "any number of terrible things could be going on based on the symptoms that they're experiencing."

So, when two of Lyons' fellow ophthalmologists at Emory came to him and suggested evaluating the accuracy of the AI chatbot ChatGPT in diagnosing eye-related complaints, he jumped at the chance.

In June, Lyons and his colleagues reported in medRxiv, an online publisher of health science preprints, that ChatGPT compared quite well to human doctors who reviewed the same symptoms and performed vastly better than the symptom checker on the popular health website WebMD. And despite the much-publicized "hallucination" problem known to afflict ChatGPT (its habit of occasionally making outright false statements), the Emory study reported that the most recent version of ChatGPT made zero grossly inaccurate statements when presented with a standard set of eye complaints.

The relative proficiency of ChatGPT, which debuted in November 2022, was a surprise to Lyons and his co-authors. The artificial intelligence engine "is definitely an improvement over just putting something into a Google search bar and seeing what you find," said co-author Nieraj Jain, an assistant professor at the Emory Eye Center who specializes in vitreoretinal surgery and disease.

But the findings underscore a challenge facing the health care industry as it assesses the promise and pitfalls of generative AI, the type of artificial intelligence used by ChatGPT: The accuracy of chatbot-delivered medical information may represent an improvement over Dr. Google, but there are still many questions about how to integrate this new technology into health care systems with the same safeguards historically applied to the introduction of new drugs or medical devices.

The smooth syntax, authoritative tone, and dexterity of generative AI have drawn extraordinary attention from all sectors of society, with some comparing its future impact to that of the internet itself. In health care, companies are working feverishly to implement generative AI in areas such as radiology and medical records.

When it comes to consumer chatbots, though, there is still caution, even though the technology is already widely available and better than many alternatives. Many doctors believe AI-based medical tools should undergo an approval process similar to the FDA's regime for drugs, but that would be years away. It's unclear how such a regime might apply to general-purpose AIs like ChatGPT.

"There's no question we have issues with access to care, and whether or not it is a good idea to deploy ChatGPT to cover the holes or fill the gaps in access, it's going to happen and it's happening already," said Jain. "People have already discovered its utility. So, we need to understand the potential advantages and the pitfalls."

The Emory study is not alone in ratifying the relative accuracy of the new generation of AI chatbots. A report published in Nature in early July by a group led by Google computer scientists said answers generated by Med-PaLM, an AI chatbot the company built specifically for medical use, "compare favorably with answers given by clinicians."

AI may also have better bedside manner. Another study, published in April by researchers from the University of California-San Diego and other institutions, even noted that health care professionals rated ChatGPT answers as more empathetic than responses from human doctors.

Indeed, a number of companies are exploring how chatbots could be used for mental health therapy, and some investors in the companies are betting that healthy people might also enjoy chatting and even bonding with an AI friend. The company behind Replika, one of the most advanced of that genre, markets its chatbot as "The AI companion who cares. Always here to listen and talk. Always on your side."

"We need physicians to start realizing that these new tools are here to stay and they're offering new capabilities both to physicians and patients," said James Benoit, an AI consultant. While a postdoctoral fellow in nursing at the University of Alberta in Canada, he published a study in February reporting that ChatGPT significantly outperformed online symptom checkers in evaluating a set of medical scenarios. "They are accurate enough at this point to start meriting some consideration," he said.

Still, even the researchers who have demonstrated ChatGPT's relative reliability are cautious about recommending that patients put their full trust in the current state of AI. For many medical professionals, AI chatbots are an invitation to trouble: They cite a host of issues relating to privacy, safety, bias, liability, transparency, and the current absence of regulatory oversight.

The proposition that AI should be embraced because it represents a marginal improvement over Dr. Google is unconvincing, these critics say.

"That's a little bit of a disappointing bar to set, isn't it?" said Mason Marks, a professor and MD who specializes in health law at Florida State University. He recently wrote an opinion piece on AI chatbots and privacy in the Journal of the American Medical Association. "I don't know how helpful it is to say, 'Well, let's just throw this conversational AI on as a band-aid to make up for these deeper systemic issues,'" he said to KFF Health News.

The biggest danger, in his view, is the likelihood that market incentives will result in AI interfaces designed to steer patients to particular drugs or medical services. "Companies might want to push a particular product over another," said Marks. "The potential for exploitation of people and the commercialization of data is unprecedented."

OpenAI, the company that developed ChatGPT, also urged caution.

"OpenAI's models are not fine-tuned to provide medical information," a company spokesperson said. "You should never use our models to provide diagnostic or treatment services for serious medical conditions."

John Ayers, a computational epidemiologist who was the lead author of the UCSD study, said that as with other medical interventions, the focus should be on patient outcomes.

"If regulators came out and said that if you want to provide patient services using a chatbot, you have to demonstrate that chatbots improve patient outcomes, then randomized controlled trials would be registered tomorrow for a host of outcomes," Ayers said.

He would like to see a more urgent stance from regulators.

"One hundred million people have ChatGPT on their phone," said Ayers, "and are asking questions right now. People are going to use chatbots with or without us."

At present, though, there are few signs that rigorous testing of AIs for safety and effectiveness is imminent. In May, Robert Califf, the commissioner of the FDA, described the regulation of large language models as "critical to our future," but aside from recommending that regulators be "nimble" in their approach, he offered few details.

In the meantime, the race is on. In July, The Wall Street Journal reported that the Mayo Clinic was partnering with Google to integrate the Med-PaLM 2 chatbot into its system. In June, WebMD announced it was partnering with a Pasadena, California-based startup, HIA Technologies Inc., to provide interactive digital health assistants. And the ongoing integration of AI into both Microsoft's Bing and Google Search suggests that Dr. Google is already well on its way to being replaced by Dr. Chatbot.

This article was produced by KFF Health News, which publishes California Healthline, an editorially independent service of the California Health Care Foundation.

Fourteen children arrested on suspicion of manslaughter over Gateshead fire released on bail

All 14 children arrested on suspicion of manslaughter after a boy died in a fire have been released on police bail, officers said.

Layton Carr, 14, was found dead near the site of a fire at Fairfield industrial park in the Bill Quay area of Gateshead on Friday.

Northumbria Police said on Saturday that they had arrested 11 boys and three girls in connection with the incident.

In an update on Sunday, a Northumbria Police spokesman said: “All those arrested have since been released on police bail pending further inquiries.”

Video: Teenager dies in industrial estate fire

Firefighters raced to the industrial site shortly after 8pm on Friday, putting out the blaze a short time later.

Police then issued an appeal for Carr, who was believed to be in the area at that time.

In a statement on Saturday, the force said that “sadly, following searches, a body believed to be that of 14-year-old Layton Carr was located deceased inside the building”.

David Thompson, headteacher of Hebburn Comprehensive School, where Layton was a pupil, said the school community was “heartbroken”.

Mr Thompson described him as a “valued and much-loved member of Year 9” and said he would be “greatly missed by everyone”.

He added that the school’s “sincere condolences” were with Layton’s family and that the community would “rally together to support one another through this tragedy”.

A fundraising page on GoFundMe has been set up to help Layton’s mother pay for funeral costs.

Organiser Stephanie Simpson said: “The last thing Georgia needs to stress trying to pay for a funeral for her Boy Any donations will help thank you.”

One tribute in a Facebook post read: “Can’t believe I’m writing this my nephew RIP Layton 💔 forever 14 you’ll be a massive miss, thinking of my sister and 2 beautiful nieces right now.”

Detective Chief Inspector Louise Jenkins, of Northumbria Police, also said: “This is an extremely tragic incident where a boy has sadly lost his life.”

She added that the force’s “thoughts are with Layton’s family as they begin to attempt to process the loss of their loved one”.

They are working to establish “the full circumstances surrounding the incident” and officers will be in the area to “offer reassurance to the public”, she added.

A cordon remains in place at the site while police carry out enquiries.

Football bodies could be forced to pay towards brain injury care costs of ex-players

Football bodies could be forced to pay towards the care costs of ex-players who have been diagnosed with brain conditions, under proposals set to be considered by MPs.

Campaigners are drafting amendments to the Football Governance Bill, which would treat conditions caused by heading balls as an “industrial injuries issue”.

The proposals seek to require the football industry to provide the necessary financial support.

Campaigners say existing support is not fit for purpose, including the Brain Health Fund which was set up with an initial £1m by the Professional Footballers’ Association (PFA), supported by the Premier League.

But the Premier League said the fund has supported 121 families with at-home adaptations and care home fees.

From England's 1966 World Cup-winning team, both Jack and Bobby Charlton died with dementia, as did Martin Peters, Ray Wilson and Nobby Stiles.

Image: Neil Ruddock speaks to Sky's Rob Harris outside parliament

Ex-players, including former Liverpool defender Neil Ruddock, went to parliament last week to lobby MPs.

Ruddock told Sky News he had joined campaigners “for the families who’ve gone through hell”.

“A professional footballer, greatest job in the world, but no one knew the dangers, and that’s scary,” he said.

“Every time someone heads a ball it’s got to be dangerous to you. You know, I used to head 100 balls a day in training. I didn’t realise that might affect my future.”

A study co-funded by the PFA and the Football Association (FA) in 2019 found footballers were three and a half times more likely to die of a neurodegenerative disease than members of the public of the same age.

‘In denial’

Among those calling on football authorities to contribute towards the care costs of ex-players who have gone on to develop conditions such as Alzheimer’s and dementia is Labour MP Chris Evans.

Mr Evans, who represents Caerphilly in South Wales, hopes to amend the Bill to establish a care and financial support scheme for ex-footballers and told a recent event in parliament that affected ex-players “deserve to be compensated”.

Greater Manchester Mayor Andy Burnham, who helped to draft the amendment, said the game was “in denial about the whole thing”.

Mr Burnham called for it to be seen as “an industrial injuries issue in the same way with mining”.

In January, David Beckham lent his support to calls for greater support for footballers affected by dementia.

One of the amendments says that “the industry rather than the public should bear the financial burden”.

A spokesperson for the FA said it was taking a “leading role in reviewing and improving the safety of our game” and that it had “already taken many proactive steps to review and address potential risk factors”.

An English Football League spokesperson said it was “working closely with other football bodies” to ensure both professional and grassroots football are “as safe as it can be”.

The PFA and Premier League declined to comment.

Terror arrests came in context of raised warnings about Iran, with ongoing chaos in its own backyard

These are two separate and unrelated investigations by counter-terror officers.

But the common thread is nationality – seven out of the eight people arrested are Iranian.

And that comes in the context of increased warnings from government and the security services about Iranian activity on British soil.

Video: Counter terror officers raid property

Last year, the director general of MI5, Ken McCallum, said his organisation and police had responded to 20 Iran-backed plots presenting potentially lethal threats to British citizens and UK residents since January 2022.

He linked that increase to the ongoing situation in Iran’s own backyard.

“As events unfold in the Middle East, we will give our fullest attention to the risk of an increase in – or a broadening of – Iranian state aggression in the UK,” he said.

The implication is that even as Iran grapples with a rapidly changing situation in its own region, having seen its proxies, Hezbollah in Lebanon and Hamas in Gaza, decimated and itself coming under Israeli attack, it may seek avenues further abroad.

The government reiterated this warning only a few weeks ago, with security minister Dan Jarvis addressing parliament.

“The threat from Iran sits in a wider context of the growing, diversifying and evolving threat that the UK faces from malign activity by a number of states,” Jarvis said.

“The threat from states has become increasingly interconnected in nature, blurring the lines between: domestic and international; online and offline; and states and their proxies.

“Turning specifically to Iran, the regime has become increasingly emboldened, asserting itself more aggressively to advance their objectives and undermine ours.”

As part of that address, Jarvis highlighted the National Security Act 2023, which “criminalises assisting a foreign intelligence service”, among other things.

So it was notable that this was the act used in one of this weekend’s investigations.

The suspects were detained under section 27 of the same act, which allows police to arrest those suspected of being “involved in foreign power threat activity”.

Those powers are apparently being put to use.
