Explainability: Having a reasonable explanation for why a recommendation is being made is a crucial component of trusting an AI tool. Within medical care delivery, there should be a clear, logical rationale for taking actions that can impact patient health outcomes. AI tools develop their own internal rules based on the data fed to them during training. These rules should be exposed to auditors, tool users, and patients in order to give human operators the confidence to accept or reject recommendations. Thus, the template recommends requiring algorithm-based systems to provide comprehensible explanations to human operators about why a particular decision was made, along with a list of the most influential variables that led to the decision.
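As a rough illustration of what such a requirement could translate to in practice, the Python sketch below pairs a recommendation with a plain-language rationale and a ranked list of influential variables. The schema, field names, and the example sepsis alert are all hypothetical, not drawn from any particular vendor's system.

```python
from dataclasses import dataclass, field

@dataclass
class Explanation:
    """A human-readable rationale attached to an AI recommendation.

    All field names here are illustrative; a real contract would pin
    down the exact schema the contractor must deliver.
    """
    recommendation: str  # the action the tool proposes
    rationale: str       # plain-language reason shown to the operator
    top_features: list[tuple[str, float]] = field(default_factory=list)  # (variable, influence)

def render_for_operator(exp: Explanation) -> str:
    """Format an explanation so a clinician can accept or reject it."""
    lines = [f"Recommendation: {exp.recommendation}",
             f"Why: {exp.rationale}",
             "Most influential variables:"]
    # Sort by absolute influence so the strongest drivers appear first.
    for name, weight in sorted(exp.top_features, key=lambda t: -abs(t[1])):
        lines.append(f"  {name}: {weight:+.2f}")
    return "\n".join(lines)

# Example: a made-up sepsis-risk alert with its top drivers.
alert = Explanation(
    recommendation="Flag patient for sepsis screening",
    rationale="Vital signs and lab values resemble prior early-sepsis cases.",
    top_features=[("lactate_mmol_per_L", 0.41), ("heart_rate", 0.27), ("wbc_count", -0.12)],
)
print(render_for_operator(alert))
```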
Bias and fairness: Healthcare is a high-impact domain in which errors can inflict severe privacy, health, and financial burdens on patients. Consequently, AI-based healthcare tools must be designed to avoid and mitigate the impact of social biases on recommendations used in medical care delivery. There is also a legal responsibility not to violate U.S. anti-discrimination laws, which prohibit prejudiced treatment based on protected attributes such as race, gender, and religion. Fairness and equity are subjective concepts within algorithmic systems, and defining them correctly often depends on context.
To address this, AI tools should be able to accommodate the complexity of algorithmic fairness by continuously monitoring multiple metrics that capture different aspects of equity. The template recommends establishing a process for defining multiple fairness metrics customized to the AI tool's use case, as well as a monitoring system with defined alert thresholds for flagging potentially biased behavior.
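A minimal sketch of such multi-metric monitoring is shown below in Python. The two metrics (a demographic parity gap and an equal opportunity gap) and the thresholds are illustrative stand-ins; the template's point is that the actual metric set must be defined per use case.

```python
from collections import defaultdict

def rate(flags):
    """Fraction of positive outcomes in a list of 0/1 values."""
    return sum(flags) / len(flags) if flags else 0.0

def fairness_report(records, thresholds):
    """Compute group-level fairness metrics and flag threshold breaches.

    `records` is a list of dicts with keys: group, y_true, y_pred.
    Metric names and thresholds are illustrative placeholders.
    """
    preds, tprs = defaultdict(list), defaultdict(list)
    for r in records:
        preds[r["group"]].append(r["y_pred"])
        if r["y_true"] == 1:          # restrict to true positives for TPR
            tprs[r["group"]].append(r["y_pred"])

    pos_rates = {g: rate(v) for g, v in preds.items()}
    tpr_rates = {g: rate(v) for g, v in tprs.items()}

    metrics = {
        # Demographic parity: gap in positive-prediction rates across groups.
        "demographic_parity_gap": max(pos_rates.values()) - min(pos_rates.values()),
        # Equal opportunity: gap in true-positive rates across groups.
        "equal_opportunity_gap": max(tpr_rates.values()) - min(tpr_rates.values()),
    }
    alerts = [m for m, v in metrics.items() if v > thresholds[m]]
    return metrics, alerts

records = [
    {"group": "A", "y_true": 1, "y_pred": 1},
    {"group": "A", "y_true": 0, "y_pred": 0},
    {"group": "B", "y_true": 1, "y_pred": 0},
    {"group": "B", "y_true": 0, "y_pred": 0},
]
metrics, alerts = fairness_report(records, {"demographic_parity_gap": 0.2,
                                            "equal_opportunity_gap": 0.2})
print(metrics, alerts)  # both gaps exceed 0.2, so both metrics are flagged
```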
Performance standards: Medical devices and software need high levels of accuracy, reliability, and availability to avoid negative impacts on patient health outcomes, and the same holds for AI tools used within a patient's medical decision-making process. However, these tools can be complex to govern over time: algorithms evolve as developers iteratively apply updates and changes, for example to add new functionality, improve or maintain model performance, or patch security vulnerabilities.
To address the need for standards and the evolving nature of algorithmic products, the template recommends establishing a set of service-level agreements (SLAs) that define contractual business requirements for the technology, as well as algorithm change protocols (ACPs) that track developers’ anticipated future changes to the algorithm.
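To make the distinction concrete, the hypothetical Python sketch below models an SLA as a machine-checkable bound and an ACP entry as a tracked, pre-announced change. All metric names, values, and dates are invented for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SLA:
    """A contractual service-level requirement for the algorithmic tool."""
    metric: str     # e.g., "AUROC", "uptime_percent", "p95_latency_ms"
    bound: float    # contractual threshold
    direction: str  # "min" (must stay above) or "max" (must stay below)

    def violated(self, observed: float) -> bool:
        return observed < self.bound if self.direction == "min" else observed > self.bound

@dataclass
class ACPEntry:
    """One anticipated change tracked in the algorithm change protocol."""
    description: str   # what the developer plans to modify
    rationale: str     # why (new feature, performance, security patch)
    planned_date: date
    revalidation: str  # how the change will be re-tested before release

# Illustrative values only; real SLAs and ACPs would be negotiated per contract.
slas = [SLA("AUROC", 0.85, "min"), SLA("uptime_percent", 99.5, "min")]
acp_log = [ACPEntry(
    description="Retrain model on 2025 patient cohort",
    rationale="maintain performance as the case mix shifts",
    planned_date=date(2025, 9, 1),
    revalidation="repeat clinical validation on held-out data",
)]

print(slas[0].violated(0.83))  # True: observed AUROC fell below the contractual floor
```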
Data privacy: The HIPAA "safe harbor" clause that allows organizations to share and sell de-identified health data has been shown to be vulnerable to re-identification of patient data. The safe harbor clause applies when health data has been stripped of 18 defined types of patient-identifying variables, but third-party datasets and artificial intelligence techniques may be able to recapture identifying attributes from de-identified data.
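A toy linkage attack, sketched below in standard-library Python with entirely fabricated records, shows the basic mechanism: quasi-identifiers that survive safe-harbor de-identification (here ZIP code, birth year, and sex) can be joined against an outside dataset such as a public voter roll to re-attach a name.

```python
# De-identified health records: the 18 enumerated identifiers are gone,
# but quasi-identifiers (ZIP, birth year, sex) remain.
deidentified = [
    {"zip": "02139", "birth_year": 1958, "sex": "F", "diagnosis": "type 2 diabetes"},
]

# A third-party dataset with overlapping quasi-identifiers (fabricated).
voter_roll = [
    {"name": "Jane Doe", "zip": "02139", "birth_year": 1958, "sex": "F"},
]

def link(records, aux, keys=("zip", "birth_year", "sex")):
    """Re-identify records by joining on shared quasi-identifiers."""
    index = {tuple(p[k] for k in keys): p["name"] for p in aux}
    return [{**r, "name": index.get(tuple(r[k] for k in keys))} for r in records]

print(link(deidentified, voter_roll))
# [{'zip': '02139', ..., 'diagnosis': 'type 2 diabetes', 'name': 'Jane Doe'}]
```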
The template recommends restricting the sharing, selling, and re-identification of health data, similar to the restrictions on the sale of de-identified personal health information defined in the California Consumer Privacy Act (CCPA). If this restriction is infeasible for the contract, the template recommends deleting identifiable fields, as opposed to anonymizing them, to reduce the risk of re-identifying patient data.
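The difference matters because deleted fields leave nothing to link against, whereas masked or hashed fields may still be matchable. A minimal sketch, assuming a simple flat record format and an illustrative identifier list:

```python
# Fields treated as identifying for this illustration; a real policy would
# enumerate them against the 18 HIPAA identifier categories.
IDENTIFIABLE_FIELDS = {"name", "zip", "birth_date", "mrn", "phone"}

def delete_identifiers(record: dict) -> dict:
    """Remove identifiable fields entirely, rather than masking or hashing
    them, so no residual value remains for linkage attacks."""
    return {k: v for k, v in record.items() if k not in IDENTIFIABLE_FIELDS}

record = {"name": "Jane Doe", "zip": "02139", "birth_date": "1958-03-14",
          "mrn": "44721", "phone": "555-0100", "diagnosis": "type 2 diabetes"}
print(delete_identifiers(record))  # {'diagnosis': 'type 2 diabetes'}
```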
Third-party audits: A combination of factors, including the opacity of algorithm logic, lack of access to the technical source code and datasets, and a high barrier of technical literacy, prevents patients and physicians from effectively advocating for patient communities affected by these algorithms.
The template recommends contractually requiring third-party audits by experts who have transparent access to technical resources within the contractor's systems. The third-party auditors should be paid by the procuring organization to avoid conflicts of interest between the auditor and the contractor. To address contractor concerns about intellectual property, non-disclosure agreements can be signed with the third-party auditor to prevent the leakage of trade secrets.
Continuous monitoring: A framework for continuous monitoring and evaluation should be in place to guarantee the performance quality of the algorithmic system.
To identify success and failure states for the tool, the template recommends establishing a set of monitoring metrics and acceptable threshold bounds agreed to by the procuring organization and the algorithm contractor. There should be clear warnings when a threshold is being approached, and a defined remediation policy ready for activation. A communication plan should also be in place to ensure the contracting agency is informed in advance of any updates or changes to the tool.
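As one hypothetical shape for such a check, the Python sketch below distinguishes an "approaching threshold" warning state from an outright breach that triggers remediation. The metric names, bounds, and warning margin are placeholders for values the two parties would negotiate.

```python
def check_metric(name, value, bound, warn_margin=0.05):
    """Compare a monitored metric against its agreed lower bound.

    Returns "ok", "warning" (the threshold is being approached), or
    "breach" (the remediation policy should activate). The 5% warning
    margin is illustrative.
    """
    if value < bound:
        return "breach"
    if value < bound * (1 + warn_margin):
        return "warning"
    return "ok"

# Thresholds agreed between the procuring organization and the contractor
# (values are made up for illustration).
thresholds = {"sensitivity": 0.90, "uptime": 0.995}
observed = {"sensitivity": 0.93, "uptime": 0.991}

for metric, bound in thresholds.items():
    status = check_metric(metric, observed[metric], bound)
    print(f"{metric}: {observed[metric]:.3f} vs bound {bound:.3f} -> {status}")
    if status == "breach":
        print("  activating remediation policy and notifying contracting agency")
```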