Hospitals use A.I. like Microsoft Nuance’s DAX app to fight burnout

When Dr. Tra’chella Johnson Foy greets her patients, she sits across from them, facing away from the computer in the exam room. Then she pulls out her phone and asks for permission to record the appointment.

“It listens in on our visit so I can pay more attention to you,” explains Foy, a family physician at Baptist Health in Jacksonville, Florida, while looking straight at her patient.

Foy and other doctors at Baptist Health have been using the DAX app, powered by artificial intelligence, from Microsoft’s Nuance division since last year. The program transcribes the conversation between doctor and patient, then creates a clinical summary formatted for the electronic health record.

Dr. Tra’chella Johnson Foy | CNBC

The app frees doctors from having to type up notes during patient visits, and from having to finish them at night, a practice so common that doctors have a nickname for it.

“Pajama time — which should be the time where you’re getting ready to wind down and go to bed. We’re usually still charting and noting and doing things that are going to enhance the life of the patient but not necessarily our own quality of life,” Foy said.

The cost of tackling burnout

Harnessing AI programs to put pajama time to rest and help doctors and nurses fight burnout is a top priority for Baptist Health’s chief digital and information officer, Aaron Miri.

“There’s new economies of scale … that healthcare will be able to get into [by] leveraging AI,” Miri said. “You eliminate all the administrative redundancy, and bureaucracy overhead, and you allow folks to work at top of license.”

Administrative processes like documenting visits, requesting insurance pre-authorization for procedures, and processing bills account for about 25% of health care costs, according to a National Bureau of Economic Research study. 

The researchers estimate adopting AI to simplify those tasks could help hospitals cut their total costs by 5% to 11% in the next five years, while physician groups could achieve up to 8% savings, and health insurers up to 10%.

But the upfront investment won’t be cheap: An Advisory Board survey of health care executives last year found that one in four expected costs for artificial intelligence and analytics to increase by 25%.

Larger health systems like Baptist may be in a better position to fund that investment than smaller hospitals, and are more likely to have the tech staffing to help integrate new generative AI solutions.

“If it cost me X, but I just made my patients a whole lot happier and my physicians a whole lot more productive? Well, there’s an answer right there by itself,” said Miri.

Keeping people in the mix

Right now, hospital systems working with the new generative AI programs to automate administrative tasks are requiring doctors and nurses to check over the automated documents before they’re included in medical records.

“What organizations are doing is they’re looking at these high-impact use cases, but also making sure that they mitigate the risks and looking at ways that we can choose the scenarios where we put a human in the middle,” said Dr. David Rhew, chief medical officer and VP of healthcare for Microsoft’s Worldwide Commercial Business.

But there are concerns that as organizations look to cut costs and boost efficiency, automation could take humans out of the mix.

Former FDA commissioner Scott Gottlieb worries that generative AI could eventually eliminate some doctors’ jobs by creating “large language models that operate fully automated, parsing the entirety of a patient’s medical record to diagnose conditions and prescribe treatments directly to the patient, without a physician in the loop.”

Patients are also wary of how the technology could be used for their own care. Nearly two-thirds of those surveyed in CNBC’s All America Survey last month said they would be uncomfortable with AI being used to diagnose medical issues.

Dr. Lloyd Minor, the dean of the Stanford School of Medicine, worries more about how the fast-moving technology could be used to affect patients’ access to care.

“My deepest fear is that medical data is used in a pernicious way, either to block access to the appropriate healthcare, or to distort the way that health care is delivered,” said Minor, who helped launch an initiative to promote responsible use of AI.

Last month, health insurers Cigna and UnitedHealthcare were each sued over the use of conventional computer algorithms to deny medical claims.

“Generative AI should open doors for access, it should provide pathways for providing equitable care that have not existed in the past,” Minor said.

In July, the White House secured a pledge from seven leading U.S. artificial intelligence companies to collaborate on building safeguards into the fast-evolving technology.

The group included Google, Amazon Web Services and Microsoft — all three have launched generative AI products for health care.

Health systems are already a popular target for hackers and data thieves, despite rigorous regulatory privacy requirements. Generative AI is developing so quickly that efforts to build safety guardrails for the new technology are already playing catch-up.

“It’s very important for us as a society to embrace the responsible AI principles of being able to move forward… so that the good actors are defining the future and not allowing the bad actors to potentially define that,” said Rhew.