Dr. Po-Hao Chen, vice chair for artificial intelligence in the Diagnostics Institute at the Cleveland Clinic, uses AI to speed up stroke diagnosis. He says a person always oversees AI and makes the final decision on a diagnosis. (Photo courtesy of the Cleveland Clinic)

While artificial intelligence in hospitals is still in its early stages of being regulated, Akron-area hospitals are implementing it in multiple ways, along with training and education.

Christopher Congeni, a partner at the Amundsen Davis law firm’s Cleveland office, said there is a host of concerns hospitals, physician groups and private practices must consider when they decide to use AI.

Some of these include bias, transparency and maintaining the use of AI as a tool rather than as an entity that can replace people.

“Health care is very, very regulated, and that presents challenges because we’re still trying to figure out how to regulate AI,” he said.

Hospitals are adopting AI algorithms to help with everything from interpreting radiology reports to summarizing findings and identifying stroke patients in emergency rooms more quickly.

Congeni said the use of AI in hospitals is in the risk-assessment stage, and as regulations and laws are developed, minimizing risk through comprehensive compliance plans is important.

“You have to figure out how far to go,” Congeni said. “Is it part of the actual diagnosis or treatment, or is it just a foundational piece to make something easier for the provider?”


Potential bias in data

Naomi Scheinerman, an assistant professor of bioethics at Ohio State University, said doctors need to think about the data used to train the AI model they’re going to use.

“We don’t have the perfect image of knowledge in society of conditions and how they affect different populations and groups,” she said. “We have disproportionate representation in the data of dominant, majoritarian groups.”

Congeni said an AI algorithm could inadvertently amplify existing biases, leading to a discriminatory outcome in patient care.

Steve Worrell is the CEO of Riverain Technologies in Miamisburg, which created an algorithm that both University Hospitals and the Cleveland Clinic use. He said his company takes this into consideration while training its AI.

As part of the company’s development process, employees make sure they have diversity in the data they acquire so it captures variability across different patient populations.

“It’s really important when you train these systems that you have adequate representation of different patient populations,” Worrell said. “Generally speaking, with these algorithms, the more data you have, the better.”

Devora Shapiro, an associate professor of medical ethics at Ohio University, said, without proper training and vetting, AI could potentially be harmful.

“As an ethicist, one might be concerned, are we harming patients potentially without a clear understanding of that risk-benefit profile?” she asked.

Physicians could rely on AI too much

Shapiro said there is a concern that physicians could come to rely on AI and trust it too much.

“There is a question of whether the use of artificial intelligence in practice over the long-term makes individuals, both in medicine, potentially, and in other areas, other professions, a little bit less quick with their critical thinking skills, with their precision and their attention,” she said.

Evidence is required, Shapiro said, to ensure physicians are still taking on the critical thinking and analysis work.

Dr. Po-Hao Chen, vice chair for artificial intelligence in the Diagnostics Institute at the Cleveland Clinic, said AI never makes a diagnosis on its own at the institute — instead, a person oversees its process and makes the final decision.

Careful implementation is required

The process of using AI in hospitals should be deliberate and methodical, Shapiro said.

“I am concerned that we are not being as careful as we ought to be in the integration and implementation of artificial intelligence tools in hospitals and in medical practice,” she said.

Dr. Leonardo Kayat Bittencourt, vice chair of innovation at University Hospitals, said UH has developed a careful method of implementing AI.

“We do a few rounds of what the industry calls ‘shadow mode,’ which is activating the AI tool on the background for a very few select users to monitor how it’s going for a few weeks or months, depending on the complexity of the AI tool,” said Bittencourt, who also works in abdominal imaging.

After that, the experts reconvene regularly to assess how the tool is running before any official implementation decisions are made. The results are checked against the data being collected, and once the team is satisfied, the tool is rolled out to clinical production alongside education.

Training is taken seriously at UH, Bittencourt said, with multiple education opportunities for staff available.

“We have it very, very deeply in our mission and in our activities to continuously educate people,” Bittencourt said.

AI in health care isn’t fully regulated yet

Congeni said he has concerns some groups will take regulating AI more seriously than others.

“Where it ends and where it begins is huge,” he said. “Defining those clear lines in the compliance plans is important.”

He emphasized the importance of creating written policies, procedures and compliance plans.

“The concern that people potentially have these days is that the use of AI has infiltrated more of [medical] practice than we ought to have,” Shapiro said.

Scheinerman said AI, if used correctly, holds promise.

“If we could get this technology to be super well-trained and effective, it could help speed up, be more accurate and faster and save lives,” she said.

Lauren Cohen is a senior journalism major at Kent State University and a community reporting intern for the Akron Beacon Journal and Signal Akron. The position is funded through a grant from the Knight Foundation, which is a financial supporter of Signal Akron.