
Are Current Regulations Keeping Labs from Benefiting from AI?

Oct 7, 2024 | Compliance Perspectives-lca, Essential, Lab Industry Advisor

Though regulations are needed to protect data and patient privacy, some in the industry say they leave labs too little flexibility

As written about previously in G2 Intelligence publications, artificial intelligence (AI) has great potential to help labs be more efficient and do more with fewer staff.1 However, according to some in the industry, current regulations—while necessary for data security and patient privacy—are hampering labs’ ability to fully benefit from such technologies.

“AI has a tremendous ability to be an extender and to augment efficiency in facilities, but that drive for efficiency at the department level then gets tamped down by the regulatory compliance aspect of things when you get to implementation,” says Aaron Nichols, chief operating officer at Arkana Laboratories, an esoteric pathology laboratory.

A drag on efficient data exchange

Though not mentioning specific examples, he adds that AI’s strength depends on the quality and amount of data it is given, but regulations are often a drag on the fast, efficient data exchange required to easily build such large data repositories. “When you can’t open up the freest exchange of data possible, you limit the amount of data that you can have in your system and the speed at which it can get there,” he says. “If the regulatory compliance aspect of things can’t accommodate these new innovations, and we can’t find new solutions to ensure security, whether it’s some sort of new protocol or standard that we could utilize and all understand to be safe and secured in some way, then we will continue to have these sorts of delays.”

Steve Box, global business development director at healthcare software company X-Lab/Labgnostic, says that, in addition to becoming more flexible, regulations also need to do a better job of ensuring that vendors in the healthcare IT space are held accountable. That, in turn, will lead to smoother integration of AI and other technologies.

“Ensuring vendors like ourselves—whether we’re providing integration, or LISs [laboratory information systems], or EMRs [electronic medical records]—are held to account is important,” Box says. “The enabler for that is better regulation that is going to hold those vendors to account to make integration easier.”

Regulations unlikely to change soon

However, regulatory change may be unlikely to come along anytime soon, according to HIPAA compliance attorney Aleksandra Vold, partner at law firm BakerHostetler.

“It’s hard to think of a way that the regulations could change to allow greater ease of sharing without diminishing the privacy protections that they were designed to create,” Vold says. In fact, the legal world is “seeing a move toward greater privacy protections” and more processes labs and other providers must follow to ensure compliance with regulations, such as those related to the Health Insurance Portability and Accountability Act (HIPAA), she adds.

She points out that the rulemaking process in general is slow and won’t keep up with the rapid evolution of AI technology.

“Even if we were to see a loosening or a carve out for some sort of AI community usage, or something along those lines, it’s not going to be there for us for several years,” Vold says. “It’s not a priority at this point for HHS OCR [U.S. Department of Health and Human Services Office for Civil Rights].”

Vold says that one area where she believes a loosening of restrictions makes sense involves the 18 identifiers for protected health information (PHI). When it comes to complying with patient privacy regulations related to HIPAA, providers must remove those 18 specific identifiers from PHI, or obtain an expert opinion that the data is not identifiable, in order for it to be shared with third parties without patient consent.2

Without an expert opinion (which can be difficult and expensive to obtain), disclosing a spreadsheet of patient data that includes even one of the 18 identifiers could be considered a breach of PHI by HHS OCR, even if it is not one of the more direct identifiers such as a name, address, or phone number. That is the case even though such limited information is unlikely to actually identify any patient, Vold explains.
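The Safe Harbor rule Vold describes means a single surviving identifier in a dataset can trigger breach exposure. A minimal sketch of how a lab might pre-screen records before sharing, assuming hypothetical field names and covering only a simplified subset of the 18 identifier categories (not legal guidance):

```python
import re

# Hypothetical field names mapped to a few of the 18 Safe Harbor categories
SAFE_HARBOR_FIELDS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "health_plan_id", "account_number",
}

# Dates more specific than the year are also identifiers under Safe Harbor
DATE_PATTERN = re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b")

def flag_identifiers(record: dict) -> list[str]:
    """Return the fields in `record` that match a Safe Harbor category."""
    flagged = [f for f, v in record.items() if f in SAFE_HARBOR_FIELDS and v]
    # Free-text fields can also leak identifiers, e.g. full dates of service
    for field, value in record.items():
        if isinstance(value, str) and DATE_PATTERN.search(value):
            flagged.append(field)
    return sorted(set(flagged))

record = {
    "diagnosis": "IgA nephropathy",
    "medical_record_number": "MRN-004821",  # one identifier is enough to flag
    "notes": "Biopsy received 03/14/2024",
}
print(flag_identifiers(record))  # → ['medical_record_number', 'notes']
```

A real screen would need all 18 categories, free-text scrubbing, and review against the HHS guidance cited below; the point here is only that the check is mechanical and binary, which is why Vold argues a risk-based standard would be more sensible.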

“I would love to see a more common-sense approach to the identifiability and de-identification standard,” she says. “I think that would really be helpful without significantly reducing—or maybe reducing at all—the privacy protections that were intended.”

Getting regulators’ attention

As far as regulations being a barrier to labs fully benefiting from AI, Vold says that though “there’s no escaping the transparency, consents, and authorizations that are needed for certain things,” getting regulators to realize the negative impacts of regulations could be the key to improving such rules.

“There are some significant unintended consequences, as far as research and partnerships that could be happening,” Vold says, adding that her clients often feel like “we can’t be compliant and be ahead of the game.”

However, Vold adds that, while she doesn’t want to be negative, regulations involving AI likely won’t be significantly addressed until the first big data breach involving AI occurs. Because AI is so new, and compliance departments are still unsure exactly how it’s being used in their organizations, it’s difficult for them to know how an AI breach might occur or how to address it in their incident response plans, Vold explains.

“Until we go through the actual scary scenario, it’s hard to relate that [incident response] document or that plan to reality and see what works and what doesn’t,” she says. “Once we have an example, I think that’s where we’re going to see the compliance activity really ramp up, and regulations really take hold…and hopefully future-proof [such incidents] from happening again.”

No “big, friendly push” toward AI

“Until then, I think there’s going to be a lot of working groups that talk about AI and maybe some position statements” on what stakeholders want in the future, “but I don’t see it as likely that we’ll have a big, friendly push toward AI in the near future,” Vold says.

However, she adds that innovation and more efficient data sharing are still possible, even within the constraints of current regulations, if things are done on a smaller scale. An example would be Hospital A partnering directly with an AI provider to solve a problem, rather than Hospital A being one of 1,000 customers the AI company is solving a problem for, Vold explains.

Because each hospital has hundreds of thousands of patients and therefore reams of data, Vold believes that such “two-party” partnerships can still achieve a lot.

“I think minimizing concerns around data sharing outside of the particular contract and the particular relationship is going to do great things,” she says. “For the innovation side of things, the software providers just have to take down their grandiose world-changing, disruptive ideologies for a second and just say, ‘We’re going to change it for Hospital A’…because when they take that global position, that’s where the data sharing becomes kind of a nightmare.”

However, within their own compliant frameworks and infrastructure, it’s possible for AI companies and providers such as labs to gain some of the benefits of innovation and greater efficiency, Vold says.

References:

  1. https://www.g2intelligence.com/expert-qa-harnessing-the-power-of-ai/
  2. https://www.hhs.gov/hipaa/for-professionals/special-topics/de-identification/index.html
