Keeping Your AI Tools Safe
The potential occupational safety pros and cons of implementing artificial intelligence in the clinical lab
Although it can be easy to forget amid the whirlwind of excitement, innovation, and development, artificial intelligence (AI) is a tool. Like any tool laboratorians implement, AI in the right hands can unlock new potential for keeping the lab operating safely and for delivering accurate, efficient, high-quality outcomes for patients.1 In untrained, careless, or poorly judged hands, however, the same tools can increase risks not only for patients, but for laboratorians as well. Given the growing interest in, development of, and use of AI-powered tools,2 it’s important to assess how they affect clinical lab safety profiles and what laboratorians can do in this dynamic environment to ensure that, no matter what, their AI tools do good, not harm.
The OSH opportunities…
“AI solutions are on the horizon to improve safety in many work areas, including clinical laboratories, but it is not clear how they might change the safety and compliance landscape,” says Jay Vietas, chief of emerging technologies in the Division of Science Integration (DSI) at the National Institute for Occupational Safety and Health (NIOSH). This uncertainty regarding the future occupational safety and health (OSH) landscape is highlighted in a 2024 report by the European Agency for Safety and Health at Work (EU-OSHA) that explores the potentially profound implications—both positive and negative—of AI and its many applications in the healthcare sector.3
EU-OSHA highlights numerous OSH areas that stand to benefit from AI. One example is the automation of physical and cognitive tasks to reduce laboratorians’ stress levels and burnout risk without compromising focus on patients’ needs and clinical context. Easing healthcare practitioners’ psychosocial burdens and mental workloads may improve performance in high-stress scenarios, which in turn increases the likelihood of positive outcomes for patients.
As Vietas explains, AI may also offer specific benefits for clinical labs’ OSH processes and procedures as it continues to evolve. “There are a couple of tools using AI that may be applied in the clinical laboratory,” he says. “The first is monitoring systems with computer vision, a form of AI, which could be trained to observe, catalog, and report unsafe practices and behaviors in the clinical laboratory. The second is data-driven analysis of the practices in a clinical laboratory, which could be evaluated through machine learning (ML) to determine which processes pose the greatest risk to personnel. Both of these tools are technically feasible today, but require significant investment. As the cost of these systems decreases and ease of use improves, they may be incorporated into clinical lab settings.”
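To make the second idea concrete, the sketch below shows one way de-identified incident records could be mined to rank lab processes by estimated risk. It is a minimal illustration only: the record fields, the synthetic data, and the choice of a scikit-learn gradient-boosting classifier are assumptions made for demonstration, not a design Vietas or NIOSH describes.

```python
# Minimal, hypothetical sketch: ranking lab processes by estimated incident risk.
# Field names, data, and model choice are illustrative assumptions only.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic incident-log records: one row per observed task execution.
records = pd.DataFrame({
    "process":          ["centrifugation", "aliquoting", "slide_staining",
                         "centrifugation", "aliquoting", "slide_staining"],
    "hours_into_shift": [2, 9, 5, 11, 3, 7],
    "samples_handled":  [40, 120, 60, 150, 35, 80],
    "ppe_compliant":    [1, 0, 1, 0, 1, 1],   # observed PPE use (1 = yes)
    "incident":         [0, 1, 0, 1, 0, 0],   # near miss or injury reported
})

features = ["hours_into_shift", "samples_handled", "ppe_compliant"]
model = GradientBoostingClassifier().fit(records[features], records["incident"])

# Average predicted incident probability per process, highest risk first,
# as a starting point for prioritizing safety reviews.
records["risk"] = model.predict_proba(records[features])[:, 1]
print(records.groupby("process")["risk"].mean().sort_values(ascending=False))
```

In practice, such an analysis would need far richer data, validation against real outcomes, and review by safety professionals before it informed any decisions.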
… and the OSH obstacles
However, the potential benefits come with potential challenges. For example, EU-OSHA notes that increased use of AI systems may result in automation bias, the tendency to over-rely on these technologies. This becomes especially problematic if users grow less vigilant and proactive in monitoring automated systems and miss critical problems that require human intervention, ultimately increasing safety risks and accidents. A related concern is deskilling, the loss of certain skills from the workforce, which is especially serious when the skills lost affect an individual’s ability to identify and address safety issues.
As the US federal institute responsible for conducting research and making recommendations for the prevention of work-related injury and illness, NIOSH has investigated AI’s impacts. In 2023, it did so through the lens of OSH inequities, preventable disparities in work-related injury, illness, morbidity, and mortality that are closely linked to social, economic, and environmental disadvantage resulting from structural and historical discrimination or exclusion.4 Through their review, Vietas and his colleagues found that AI’s benefits may not be equally available to all users. “Although AI technologies may potentially improve clinical laboratory safety, there is the potential for these systems to be used in a manner that reinforces bias,” he explains. “This can occur through management practices (monitoring some groups more than others) or through the use of data that is already biased. Awareness of these issues is the first step toward developing policies and procedures that promote health equity.”
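As a rough illustration of how such bias might be surfaced, the short sketch below checks whether an automated monitoring system flags one group of workers more often than another. The log structure, column names, and data are hypothetical and are included only to show the kind of routine audit that awareness of these issues might prompt.

```python
# Hypothetical audit sketch: does an AI monitoring system flag some worker
# groups more often than others? Columns and data are illustrative only.
import pandas as pd

log = pd.DataFrame({
    "shift":   ["day", "day", "day", "night", "night", "night"],
    "flagged": [0, 1, 0, 1, 1, 0],   # 1 = system reported "unsafe behavior"
})

# Flag rate per group; a large gap may reflect uneven monitoring or biased
# training data rather than a genuine difference in safe practice.
print(log.groupby("shift")["flagged"].mean())
```

A real equity review would cover more groups and account for differences in tasks and exposure, but even a simple disparity check makes the question visible.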
Treading carefully forward
Almost everything regarding AI is new, including its potential safety issues, which makes careful forward progress paramount. “Though I am not aware of any compliance errors that apply to AI/ML use in healthcare or the clinical laboratory, it will be important for lab personnel to understand the opportunities and limitations of these tools,” Vietas says. “Oversight will be critical to achieving the desired outcomes. To improve the likelihood that these tools are implemented and used safely, they should be designed (at a minimum) with input from workers, management, and safety professionals. They should also be tested and evaluated before implementation and—once implemented—should be periodically reevaluated to ensure that they are functioning properly. All personnel impacted by the tool should be trained on its associated processes and procedures, as well as on the capacity of the tool and its limitations.”
Regarding resources, Vietas notes that, although currently there may not be any regulations specifically governing AI/ML use in the clinical laboratory, there are numerous guidelines available for lab managers and administrators to turn to during this process, including the White House’s Blueprint for an AI Bill of Rights,5 President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,6 and an Artificial Intelligence Risk Management Framework from the National Institute of Standards and Technology.7 “None of these guidance documents are specific to clinical laboratories, but each of them provides principles and concepts that, when followed, increase the likelihood of safe and effective artificial intelligence tools.”
So, can AI technologies be used to enhance labs’ OSH compliance? As with most things, the answer isn’t a simple yes or no. “Potentially yes, but that will depend upon how these systems are designed and implemented in the clinical laboratory workspace,” Vietas says. “All of these systems will have the potential for error, but they are also likely to improve safety outcomes when used properly.”
References:
- TR Undru et al. Integrating artificial intelligence for clinical and laboratory diagnosis – a review. Maedica (Bucur). 2022;17(2):420–426. doi:10.26574/maedica.2022.17.2.420.
- H Hou et al. Artificial intelligence in the clinical laboratory. Clin Chim Acta. 2024;559(1):119724. doi:10.1016/j.cca.2024.119724.
- European Agency for Safety and Health at Work. Automation of cognitive and physical tasks in the health and social care sector: implications for safety and health. May 08, 2024. https://osha.europa.eu/sites/default/files/documents/Automation%20of%20cognitive%20and%20physical%20tasks%20in%20health%20and%20social%20care%20sector_EN.pdf.
- E Fisher et al. Occupational safety and health equity impacts of artificial intelligence: a scoping review. Int J Environ Res Public Health. 2023;20(13):6221. doi:10.3390/ijerph20136221.
- The White House. Blueprint for an AI Bill of Rights. https://www.whitehouse.gov/ostp/ai-bill-of-rights.
- The White House. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. October 30, 2023. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.
- National Institute of Standards and Technology. AI Risk Management Framework. July 26, 2024. https://www.nist.gov/itl/ai-risk-management-framework.