AI has arrived in your doctor’s office. Washington doesn’t know what to do about it

Washington hasn’t written the rules for the new artificial intelligence tools in health care, even though doctors are rapidly deploying them to interpret tests, diagnose diseases and provide behavioral therapy.

Products that use AI are going to market without the kind of data the government requires for new medical devices or medicines. The Biden administration hasn’t decided how to handle emerging tools like chatbots that interact with patients and answer doctors’ questions — even though some are already in use. And Congress is stalled. Senate Majority Leader Chuck Schumer said this week that legislation was months away.

Advocates for patient safety warn that until there’s better government oversight, medical professionals could be using AI systems that steer them astray by misdiagnosing diseases, relying on racially biased data or violating their patients’ privacy.

“There's no good testing going on and then they're being used in patient-facing situations — and that's really bad,” Suresh Venkatasubramanian, a Brown University computer scientist, said of the AI systems physicians are adopting.

Venkatasubramanian has a unique vantage point on the issue. He helped draft the Blueprint for an AI Bill of Rights the Biden administration issued in October 2022. The blueprint called for strong human oversight to ensure artificial intelligence systems do what they’re supposed to.

But the document is still just a piece of paper; President Joe Biden hasn’t asked Congress to codify it into law, and no lawmaker has moved to do so.

There’s evidence that Venkatasubramanian’s concern is warranted. New York City has formed a Coalition to End Racism in Clinical Algorithms and is lobbying health systems to stop using AI that the coalition says relies on data sets that underestimate Black individuals’ lung capacity and their ability to give birth vaginally after a cesarean section, and that overestimate their muscle mass.

Even some AI developers are worried about how doctors are using their systems. “Sometimes when our users got used to our product, they would start just kind of blindly trusting it,” said Eli Ben-Joseph, co-founder and CEO of Regard, a company that boasts 1.7 million diagnoses made with its tech, which embeds into a health system’s medical records.

Regard implemented safeguards, warning doctors if they move too quickly or don’t read all of the system’s output.

Congress is far from a consensus on what to do, despite holding a summit with tech industry leaders last month.

The Food and Drug Administration, which has taken the lead for Biden, has authorized new AI products before they go to market — without the sort of comprehensive data required of drug and device makers. The agency then monitors them for adverse events.

Troy Tazbaz, the director of the agency’s Digital Health Center of Excellence, said the FDA recognizes it needs to do more. AI products made for health care use and similar to ChatGPT, the bot that can pass medical exams, require “a vastly different paradigm” to regulate, he explained. But the agency is still working out what that paradigm will look like.

Meanwhile, AI’s adoption in health care is racing ahead even though the systems, Venkatasubramanian said, are “incredibly fragile.” In diagnosing patients, he sees risks of error and the possibility of racial bias. He suspects physicians will trust the systems’ judgments too readily.

Nearly all of the 10 innovators building the technology who spoke to POLITICO acknowledged the dangers it poses without oversight.

“There are probably a number of examples already today — and there will be more coming in the next year — where organizations are deploying large language models in a way which is actually not very safe,” Ross Harper, founder and CEO of Limbic, a company that uses AI in a behavioral therapy app, said.

‘They would start just kind of blindly trusting it’

Limbic has achieved a medical device certification in the U.K., and Harper said the company is moving forward in the U.S. despite the regulatory uncertainty.

“It would be wrong to not leverage these new tools,” he said.

Limbic’s chatbot, which the company said is the first of its kind in America, works through a smartphone app — in conjunction with a human therapist.

Patients can send messages to the bot about what they’re thinking and feeling, and the bot follows therapy protocols in responding, using artificial intelligence and a separate statistical model to ensure the responses are accurate and helpful.

A therapist provides input for the AI to guide its conversations. And the AI reports back to the therapist with notes from its chats, better informing the patient’s future therapy sessions.
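As a rough illustration of that loop, the logic might look something like the Python sketch below. Every name, stub and threshold here is hypothetical; Limbic has not published its implementation, and the stand-in functions merely mimic the roles the company describes.

    # Hypothetical sketch of a therapist-supervised chatbot loop. The model
    # stub, safety check and threshold are illustrative assumptions,
    # not Limbic's actual code.
    from dataclasses import dataclass, field

    FALLBACK = "Let's bring this up with your therapist so you get the right support."

    def draft_reply(guidance: str, message: str) -> str:
        # Stand-in for a language model that follows the therapy protocol.
        return f"Your therapist suggested we focus on {guidance}. Tell me more about that."

    def safety_score(reply: str) -> float:
        # Stand-in for the separate statistical model that vets each reply.
        return 0.1 if "diagnose" in reply.lower() else 0.95

    @dataclass
    class Session:
        guidance: str                          # input the human therapist supplies up front
        transcript: list = field(default_factory=list)

        def respond(self, message: str) -> str:
            reply = draft_reply(self.guidance, message)
            if safety_score(reply) < 0.9:      # illustrative threshold
                reply = FALLBACK               # route risky replies back to the human
            self.transcript.append(("patient", message))
            self.transcript.append(("bot", reply))
            return reply

        def therapist_notes(self) -> str:
            # Notes reported back to the therapist before the next session.
            return "\n".join(f"{who}: {text}" for who, text in self.transcript)

The point of the structure, per the company's description, is that the generative model never speaks to the patient unchecked, and the human therapist stays in the loop between sessions.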

Another firm, Talkspace, uses AI that it said can help flag people at risk of suicide by analyzing their conversations with therapists.

Other AI products create and summarize patient charts — as well as review them and suggest a diagnosis.

Much of it is aimed at helping overworked doctors lighten their loads.

Safety and innovation

Students of the technology said AI systems that change — or “learn” — as they get more information could become more or less helpful over time, changing their safety or effectiveness profile.
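In concrete terms, one simple way a health system might watch for that drift is to compare a deployed model’s rolling accuracy against the accuracy it showed when it was cleared. The sketch below is a hypothetical illustration of that monitoring idea, not any regulator’s actual method, and its numbers are made up:

    # Hypothetical drift check: flag a deployed model whose recent accuracy
    # slips meaningfully below what it demonstrated at clearance.
    # All thresholds are made up for illustration.
    from collections import deque

    APPROVAL_ACCURACY = 0.92   # accuracy demonstrated when the tool was cleared
    TOLERANCE = 0.05           # acceptable slippage before a human review
    WINDOW = 500               # number of recent cases in the rolling average

    recent = deque(maxlen=WINDOW)  # 1 if the model matched ground truth, else 0

    def record_case(model_was_correct: bool) -> str | None:
        recent.append(1 if model_was_correct else 0)
        if len(recent) < WINDOW:
            return None  # not enough cases yet to judge
        rolling = sum(recent) / WINDOW
        if rolling < APPROVAL_ACCURACY - TOLERANCE:
            return f"flag for audit: rolling accuracy {rolling:.2f}, cleared at {APPROVAL_ACCURACY:.2f}"
        return None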

And determining the impacts of those changes becomes even more difficult because companies closely guard the algorithms at the heart of their products — a proprietary “black box” that protects intellectual property but stands in the way of regulators and outside researchers.

The Office of the National Coordinator for Health Information Technology at HHS has proposed a policy aimed at getting more transparency about AI systems being used in health, but it doesn’t focus on the safety or efficacy of those systems.

“How do we actually regulate something like that without necessarily losing the pace of innovation?” Tazbaz asked, summing up the agency’s central challenge on AI. “I always say that innovation always has to work within a parameter — a safety parameter.”

There are no existing regulations specifically addressing the technology, so the FDA is planning a novel system.

Tazbaz believes the FDA will create a process of ongoing audits and certifications of AI products, hoping to ensure continuing safety as the systems change.

The FDA has already approved about 520 AI-enabled devices — mostly for radiology, where the technology has shown promise in reading X-rays. FDA Commissioner Robert Califf said in an August meeting he believed the agency has done well with predictive AI systems, which take data and conjecture an outcome.

But many products currently in development are using newer, more advanced technology capable of responding to human queries — something Califf called a “sort of scary area” of regulation. Those present even more challenges to regulators, experts said.

And there’s another risk, too: Rules that are too onerous could quash innovation that might make care better, cheaper and more equitable.

The agency is taking care not to stunt the new tech’s growth, Tazbaz said, talking with industry leaders, hearing their concerns and sharing the agency's thinking.

The World Health Organization’s approach is not unlike that of Washington: one of concern, guidance and discussion. But with no power of its own to regulate, the WHO recently suggested that the governments among its members step up the pace.

AI models “are being rapidly deployed, sometimes without a full understanding of how they may perform,” the body said in a statement.

Still, whenever it moves to tighten the rules, the FDA can expect pushback.

Some industry leaders have suggested that doctors are themselves a kind of regulator, since they are experts making the final decision regardless of AI co-pilots.

Others argue even the current approval process is too complicated — and burdensome — to support rapid innovation.

“I kind of feel like I’m the technology killer,” said Brad Thompson, an attorney at Epstein Becker Green who counsels companies on their use of AI in health care, by “fully inform[ing] them of the regulatory landscape.”

‘Would I personally feel safe?’

In the past, Thompson would have gone to Congress with his concerns.

But lawmakers aren’t sure what to do about AI, and legislating slowed while Republicans selected a new speaker. Now, lawmakers have to reach a deal on funding the government in fiscal 2024.

“That avenue just isn’t available now or in the foreseeable future,” Thompson said of attempts to update regulations through Congress, “and it just breaks my heart.”

Schumer recently convened an AI forum to try to sort out what Congress should do about the technology across sectors. The House also has an AI task force, though its output likely depends on the chamber resolving its leadership and government funding challenges.

Rep. Greg Murphy (R-N.C.), co-chair of the Doctors Caucus, said he wants to let state governments lead on regulating the technology.

Louisiana Sen. Bill Cassidy, the ranking Republican on the committee that oversees health policy, has said Congress should do more — but without making it more difficult to innovate.

Cassidy’s plan addresses many of the concerns raised by researchers, regulators and industry leaders, but he hasn’t proposed a bill to implement it.

Given the uncertainty, some of the big players in health tech are deliberately targeting “low-risk, high-reward” AI projects, as electronic health record giant Epic’s Garrett Adams put it. That includes drafting notes, summarizing information and acting as more of a secretary than a co-pilot for doctors.

But the implementation of those technologies could lay the groundwork for more aggressive advances. And a number of companies are charging ahead, even suggesting that their products will inevitably replace doctors.

“We want to eventually transition parts of our tech to become stand-alone — to become fully automated and remove the doctor or the nurse from the loop,” Ben-Joseph said, suggesting a 10- or 20-year timeframe.

Count Tazbaz among the skeptics.

“I think the medical community needs to effectively look at the liabilities,” he said of AI used to diagnose patients. “Would I personally feel safe? I think it depends on the use case.”

----------------------------------------

By: Daniel Payne
Title: AI has arrived in your doctor’s office. Washington doesn’t know what to do about it
Sourced From: www.politico.com/news/2023/10/28/ai-doctors-healthcare-regulation-00124051
Published Date: Sat, 28 Oct 2023 06:00:00 EST
