Attendees at HIMSS in Orlando, Florida 2024.

Courtesy of HIMSS

The hottest new technology for doctors promises to bring back an age-old health-care practice: face-to-face conversations with patients.

As more than 30,000 health and tech professionals gathered among the palm trees at the HIMSS conference in Orlando, Florida, this week, ambient clinical documentation was the talk of the exhibition floor. 

This technology allows doctors to consensually record their visits with patients. The conversations are automatically transformed into clinical notes and summaries using artificial intelligence. Companies like Microsoft’s Nuance Communications, Abridge and Suki have developed solutions with these capabilities, which they argue will help reduce doctors’ administrative workloads and prioritize meaningful connections with patients. 

“After I see a patient, I have to write notes, I have to place orders, I have to think about the patient summary,” Dr. Shiv Rao, founder and CEO of Abridge, told CNBC at HIMSS. “So what our technology does is it allows me to focus on the person in front of me — the most important person, the patient — because when I hit start, have a conversation, then hit stop, I can swivel my chair and within seconds, the note’s there.” 

Administrative workloads are a major problem for clinicians across the U.S. health-care system. A survey published by Athenahealth in February found that more than 90% of physicians report feeling burned out on a “regular basis,” largely because of the paperwork they are expected to complete. 

More than 60% of doctors said they feel overwhelmed by clerical requirements and work an average of 15 hours per week outside their normal hours to keep up, the survey said. Many in the industry call this at-home work “pajama time.” 

Since administrative work is mostly bureaucratic and doesn’t directly influence doctors’ decisions around diagnoses or patient care, it has served as one of the first areas where health systems have seriously begun to explore applications of generative AI. As a result, ambient clinical documentation solutions are having a real moment in the sun. 

“There isn’t a better place to be,” Kenneth Harper, general manager of DAX Copilot at Microsoft, told CNBC in an interview. 

Microsoft’s Nuance announced its ambient clinical documentation tool Dragon Ambient eXperience (DAX) Express in a preview capacity last March. By September, the solution, now called DAX Copilot, was generally available. Harper said there are now more than 200 organizations using the technology. 

Microsoft acquired Nuance for around $16 billion in 2021. The company had a two-story booth in the exhibit hall that was often packed with attendees.

Harper said the technology saves doctors several minutes per encounter, though the exact numbers vary depending on the specialty. He said his team gets feedback about the service almost daily from doctors who claim it has helped them take better care of themselves — and even saved their marriages.


Harper recounted a conversation with one physician who was considering retirement after practicing for more than three decades. He said the doctor was feeling worn out from years of stress, but he was inspired to keep working after he was introduced to DAX Copilot. 

“He said, ‘I literally think I’m going to practice for another 10 years because I actually enjoy what I do,'” Harper said. “That’s just a personal anecdote of the type of impact this is having on our care teams.” 

At HIMSS, Stanford Health Care announced it is deploying DAX Copilot across its entire enterprise. 

Gary Fritz, chief of applications at Stanford Health Care, said the organization initially tested the tool in its exam rooms. He said Stanford recently surveyed physicians about their use of DAX Copilot, and 96% found it easy to use.

“I don’t know that I’ve ever seen that big a number,” Fritz told CNBC in an interview. “It is a big deal.”

Dr. Christopher Sharp, chief medical information officer at Stanford Health Care and one of the physicians who tested DAX Copilot, said it is “remarkably seamless” to use. He praised the tool’s immediacy, accuracy and reliability, but said it could improve at capturing a patient’s tone.

Sharp said he thinks the tool saves him documentation time and has changed how he spends that time. He said he is often reading and editing notes instead of composing them, for instance, so it is not as though the work has disappeared entirely.

In the near term, Sharp said he’d like to see more capabilities for personalization within DAX Copilot, both at an individual and specialty level. Even so, he said it was easy to see the value of it from the start.

“The moment that that first document returns to you, and you see your own words and the patient’s own words being reflected directly back to you in a usable fashion, I would say that from that moment, you’re hooked,” Sharp told CNBC in an interview.

Fritz said it is still early in the product life cycle, and Stanford Health Care is still working out exactly what deployment will look like. He said DAX Copilot will likely roll out in specialty-specific tranches. 


In January, Nuance announced the general availability of DAX Copilot within Epic Systems’ electronic health record (EHR). Most doctors create and manage patient medical records using EHRs, and Epic is the largest vendor by hospital market share in the U.S., according to a May report from KLAS Research.

Integrating a tool like DAX Copilot directly into doctors’ EHR workflow means they won’t need to switch apps to access it, which helps save time and reduce their clerical burden even further, Harper said. 

Seth Hain, senior vice president of R&D at Epic, told CNBC that more than 150,000 notes have been drafted into the company’s software by ambient technologies since the HIMSS conference last year. And the technology is scaling fast. Hain said more notes have already been drafted in 2024 than in 2023.

“You’re seeing health systems who have worked through an intentional process of acclimating their end users to this type of technology, now beginning to rapidly roll that out,” he said. 

Abridge also integrates its ambient clinical documentation technology directly within Epic. The company declined to share the exact number of health organizations using its technology, but it announced at HIMSS that California-based UCI Health is rolling out its solution system-wide.

Rao, the CEO of Abridge, said the rate at which the health-care industry has adopted ambient clinical documentation feels “historic.” 

Abridge announced a $30 million Series B funding round in October, led by Spark Capital, and four months later the company closed a $150 million Series C round, according to a February release. Rao said tailwinds like physician burnout have turned into a “tornado” for Abridge, and it will use these funds to continue to invest in the science behind the technology and explore where it can go next.

The company is saving some doctors as much as three hours a day, Rao said, and is automating more than 92% of the clerical work it focuses on. Abridge’s technology is live across 55 specialties and 14 languages, he added. 

Abridge has a Slack channel called “love stories,” which was viewed by CNBC, where the team shares positive feedback about its technology. One message from this week came from a doctor who said Abridge took away their least favorite part of the job and saves them around an hour and a half each day.

“That’s the type of feedback that absolutely inspires everybody in the company,” Rao said.

Suki CEO Punit Soni said the ambient clinical documentation market is “sizzling.” He expects rapid growth to continue through the next couple of years, though, like all hype cycles, he said he thinks the dust will settle.

Soni founded Suki more than six years ago after hypothesizing that there would be a need for a digital assistant to help doctors manage clinical documentation. Soni said Suki is now used by more than 30 specialties in around 250 health organizations nationwide. Six “large health systems” have gone live with Suki in the past two weeks, he added. 

“For four to five years I’ve sat around, basically with the shop open, hoping somebody will show up. Now the entire mall is here, and there’s a line outside the door of people wanting to deploy,” Soni told CNBC at HIMSS. “It’s very, very exciting to be here.”

Suki’s website says its technology can reduce the time a physician spends on documentation by an average of 72%. The company raised a $55 million funding round in 2021 led by March Capital. It will likely raise another round in the latter half of the year, Soni said.

Soni said Suki is focused on deploying its technology at scale and exploring additional applications, like how ambient documentation could be used to assist nurses. He said Spanish-language support is coming to Suki soon, and customers should expect most major languages to follow.

“There is so much that has to happen,” he said. “In the next decade, all of health-care tech is going to look completely different.”

Nvidia-supplier SK Hynix third-quarter profit jumps 62% to a record high on AI-fueled memory demand

A man walks past a logo of SK Hynix at the lobby of the company’s Bundang office in Seongnam on January 29, 2021.

Jung Yeon-Je | AFP | Getty Images

South Korea’s SK Hynix on Wednesday posted record quarterly revenue and profit, boosted by strong demand for its high-bandwidth memory used in generative AI chipsets.

Here are SK Hynix’s third-quarter results versus LSEG SmartEstimates, which are weighted toward forecasts from analysts who are more consistently accurate:

  • Revenue: 24.45 trillion won ($17.13 billion) vs. 24.73 trillion won
  • Operating profit: 11.38 trillion won vs. 11.39 trillion won

Revenue rose about 39% in the September quarter from the same period a year earlier, while operating profit surged 62% year on year.

On a quarter-on-quarter basis, revenue was up 10%, while operating profit grew 24%.
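As a sanity check, the reported growth rates can be reversed to recover the implied prior-period figures. This is an illustrative back-of-the-envelope calculation, not figures SK Hynix disclosed:

```python
# Reverse the reported growth rates to estimate prior-period figures.
# Reported Q3 results (trillions of won) and growth rates from the release.
revenue_q3 = 24.45
op_profit_q3 = 11.38

def implied_prior(current: float, growth_rate: float) -> float:
    """Recover the base-period figure implied by a percentage growth rate."""
    return current / (1 + growth_rate)

rev_year_ago = implied_prior(revenue_q3, 0.39)   # ~17.6 trillion won a year earlier
op_year_ago = implied_prior(op_profit_q3, 0.62)  # ~7.0 trillion won a year earlier
rev_prior_q = implied_prior(revenue_q3, 0.10)    # ~22.2 trillion won the prior quarter
```

The same one-liner works for any of the year-on-year or quarter-on-quarter rates quoted in the release.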

SK Hynix makes memory chips that are used to store data and can be found in everything from servers to consumer devices such as smartphones and laptops.

The company has benefited from a boom in artificial intelligence as a key supplier of high-bandwidth memory or HBM chips used to power AI data center servers. 

“As demand across the memory segment has soared due to customers’ expanding investments in AI infrastructure, SK Hynix once again surpassed the record-high performance of the previous quarter due to increased sales of high value-added products,” SK Hynix said in its earnings release. 

HBM falls into the broader category of dynamic random access memory, or DRAM — a type of semiconductor memory used to store data and program code that can be found in PCs, workstations and servers.

SK Hynix has set itself apart in the DRAM market by getting an early lead in HBM and establishing itself as the main supplier to the world’s leading AI chip designer, Nvidia.

However, its main competitors, U.S.-based Micron and South Korean tech giant Samsung, have been working to catch up in the space.

“With the innovation of AI technology, the memory market has shifted to a new paradigm and demand has begun to spread to all product areas,” SK Hynix Chief Financial Officer Kim Woohyun said in the earnings release.

“We will continue to strengthen our AI memory leadership by responding to customer demand through market-leading products and differentiated technological capabilities,” he added.

The HBM market is expected to continue to boom over the next few years to around $43 billion by 2027, giving strong earnings leverage to memory manufacturers such as SK Hynix, MS Hwang, research director at Counterpoint Research, told CNBC.

“[F]or SK Hynix to continue generating profits, it’ll be important for the company to maintain and enhance its competitive edge,” he added.

A report from Counterpoint Research earlier this month showed that SK Hynix held a leading 38% share of the DRAM market by revenue in the second quarter of the year, increasing its share after overtaking Samsung in the first quarter.

The report added that the global HBM market grew 178% year over year in the second quarter, with SK Hynix dominating the space with a 64% share.

Celestica CEO explains the company’s role in the AI boom

Celestica CEO Rob Mionis explained how his company designs and manufactures infrastructure that enables artificial intelligence in a Tuesday interview with CNBC’s Jim Cramer.

“If AI is a speeding freight train, we’re laying the tracks ahead of the freight train,” Mionis said.

He pushed back against the notion that the AI boom is a bubble, saying that the technology has gone from a “nice to have” to a “must have.”

Celestica reported earnings Monday after the close, beating estimates and raising its full-year outlook. The stock hit a 52-week high during Tuesday’s session and closed up more than 8%. Celestica has had a huge run over the past several months, and shares are currently up 253.68% year to date.

Mionis described some of Celestica’s business strategies, including how the Canadian outfit chose to move away from commodity markets and into design and manufacturing. He told Cramer that choice “has paid off in spades” for his company.

Celestica’s focus on design and manufacturing enables the company to “consistently execute at scale,” he added.

He detailed Celestica’s data center work, saying the company makes high-speed networking and storage systems for hyperscalers, digital native companies and other enterprise names.

Mionis praised the company’s partnership with semiconductor maker Broadcom, saying Celestica uses Broadcom’s silicon in a lot of its designs.

“What it means for us is when they launch a new piece of silicon — so the Tomahawk 6 is their 1.6 terabyte silicon — when they launch that into the marketplace, they’ll work with us to develop products, and those products end up in the major hyperscalers.”

Disclaimer: The CNBC Investing Club Charitable Trust owns shares of Broadcom.

Wikipedia founder Jimmy Wales isn’t worried about Elon Musk’s Grokipedia: ‘Not optimistic he will create anything very useful right now’


Elon Musk‘s Wikipedia rival Grokipedia got off to a “rocky start” in its public debut, but Wikipedia founder Jimmy Wales didn’t need to look at the AI-generated site’s output to know what to expect.

“I’m not optimistic he will create anything very useful right now,” Wales said at the CNBC Technology Executive Council Summit in New York City on Tuesday.

Wales had plenty of choice words for Musk, notably in response to allegations that there is “woke bias” on Wikipedia. “He is mistaken about that,” Wales said. “His complaints about Wiki are that we focus on mainstream sources and I am completely unapologetic about that. We don’t treat random crackpots the same as The New England Journal of Medicine and that doesn’t make us woke,” he said at the CNBC event. “It’s a paradox. We are so radical we quote The New York Times.”

“I haven’t had the time to really look at Grokipedia, and it will be interesting to see, but apparently it has a lot of praise about the genius of Elon Musk in it. So I’m sure that’s completely neutral,” he added.

Wales’ digs at Grokipedia, which has its own wiki page, were less about any ongoing spat with Musk and more about his broader concerns with efforts to use large language models to create a trusted online source of information.

“The LLMs he is using to write it are going to make massive errors,” Wales said. “We know ChatGPT and all the other LLMs are not good enough to write wiki entries.”

Musk seems equally certain of the opposite outcome: “Grokipedia will exceed Wikipedia by several orders of magnitude in breadth, depth and accuracy,” he wrote in a post on Tuesday night.

Wales gave several real-world examples of why he doesn’t have faith in LLMs to recreate, at a fraction of the cost, what Wikipedia’s global community has built over decades. He estimated the organization’s hard technology costs at $175 million annually, versus the tens of billions of dollars big tech companies are constantly pouring into AI efforts; by one Wall Street estimate, the so-called hyperscalers are expected to spend a total of $550 billion on AI next year.

One example Wales cited of LLM inaccuracy involves his wife. Wales said he often asks new chatbot models to research obscure topics as a test of their abilities. Asking who his wife is, a “not famous but known” person who worked in British politics, always results in a “plausible but wrong” answer. Any time you ask an LLM to dig deep, Wales added, “it’s a mess.”

He also gave the example of a German Wiki community member who wrote a program to verify the ISBNs of books cited and was able to trace notable mistakes to one person. That person ultimately confessed to having used ChatGPT to find citations for text references; the LLM “just very happily makes up books for you,” Wales said.
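The kind of automated check described above is straightforward: a valid ISBN-13’s digits, weighted alternately by 1 and 3, must sum to a multiple of 10, so a fabricated ISBN usually fails the test. A minimal sketch (the function is illustrative, not the community member’s actual tool):

```python
def isbn13_valid(isbn: str) -> bool:
    """Validate an ISBN-13 check digit: the 13 digits, weighted
    alternately 1 and 3 from the left, must sum to a multiple of 10."""
    digits = isbn.replace("-", "").replace(" ", "")
    if len(digits) != 13 or not digits.isdigit():
        return False
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return total % 10 == 0
```

A checksum pass only shows the number is well-formed, of course; catching an ISBN that is valid but attached to the wrong book still requires looking the number up in a catalog.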


Wales did say the battles into which he has been drawn, by Musk and by AI, do reinforce a serious message for Wikipedia. “It’s really important for us and the Wiki community to respond to criticism like that by doubling down on being neutral and being really careful about sources,” he said. “We shouldn’t be ‘wokepedia.’ That’s not who we should be or what people want from us. It would undermine trust.”

Wales thinks the public and the media often give Wikipedia too much credit. In its early days, he says, the site was never as bad as the jokes made about it. But now, he says, “We are not as good as they think we are. Of course, we are a lot better than we used to be, but there is still so much work to do.”

And he expects the challenges from technology and misinformation to get worse, as LLMs become better at creating fake websites with plausible text that can fool the public. But he says they will have a hard time fooling the Wiki community, which has spent 25 years studying and debating trusted information sources. “But it will fool a lot of people and that is a problem,” he said.

In some cases, this same new technology, which “makes stuff up that is completely useless,” may be useful to Wikipedia, he said. Wales has been doing some work on finding limited domains where AI can uncover additional information in existing sources that should be added to a wiki, a use of gen AI he described as currently being “kind of okay.”

“Maybe it helps us do our work faster,” he said. That feedback loop could be very useful for the site if it developed its own LLM that it could train, but the costs associated with that have led the site to hold off any formal effort while it continues to test the technology, he added.

“We are really happy Wiki is now part of the infrastructure of the world, which is a pretty heavy burden on us. So when people say we’ve gotten biased, we need to take that seriously and work on anything related to it,” Wales said.

But he couldn’t resist putting that another way, too: “We talk about errors that ChatGPT makes. Just imagine an AI solely trained on Twitter. That would be a mad, angry AI trained on nonsense,” Wales said.
