Algorithms for identifying cancers, security systems based on facial and biometric identification, computers that can translate between languages—just about anything becomes possible when machine learning is applied to analyze massive amounts of data faster and more efficiently than the human brain can. But there’s also a price to pay for these advances in artificial intelligence (AI), namely the loss of privacy rights. Currently, there’s no regulation to rein in corporate use of AI applications for aggressive and intrusive purposes. In addition, the data used for AI analysis typically incorporates, and thus perpetuates, the prejudices and blind spots of those who supply it. Thus, for example, facial recognition technology has been linked to disproportionately high rates of wrongful arrests among Black people and other marginalized groups. AI-based medical decision support solutions rely on historic utilization and case data that don’t account for the medical experiences of disadvantaged populations with limited access to the health system.
With that in mind, the White House Office of Science and Technology Policy (OSTP) has published a document called Blueprint for an AI Bill of Rights “to support [corporations in] the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems.” The Blueprint sets out five basic principles of protection against AI abuses that members of the public should have:
1. Assurance of Safe and Effective Systems
You should be protected against AI systems that are unsafe or ineffective. Those who design, develop, and deploy systems should act proactively to protect you from harms, including use of inappropriate or irrelevant data and the “compounded harm” from its reuse.
2. Protection from Algorithmic Discrimination
Systems should be designed equitably so you don’t have to face algorithmic discrimination of any kind. According to the Blueprint, “algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.”
3. Data Privacy
You should be protected from abusive data practices via protections built into system design, giving you notice of and control over how your data is used. Collection, use, and disclosure of private data should be undertaken with your consent and limited to the minimum necessary to accomplish the purpose of such collection, use, and disclosure. There should also be additional limitations on use of data related to health, work, education, criminal justice, finance, and other “sensitive domains.” You should also be able to find out how your data is being used and ensure that such use conforms with the expectations under which you provided consent.
4. Notice and Explanation
“You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you,” the Blueprint states. “Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.” Summary reports explaining data uses in plain language should be made public whenever possible.
5. Human Alternatives, Consideration, and Fallback
“You should be able to opt out from automated systems in favor of a human alternative, where appropriate” based on “reasonable expectations.” If an automated system fails in a way that may impact you, there should be a way that you can access “timely human consideration” and ensure corrective action is taken. You should also understand how the escalation and correction processes work and get timely and accessible reports of their results, according to the Blueprint.
Significance of the Blueprint
As the OSTP is quick to stress, the Blueprint “is non-binding and does not constitute U.S. government policy,” nor is it an official interpretation of any existing laws. In other words, the big tech companies at whom the Blueprint is aimed are under no compulsion to follow it and face no penalties if they choose not to do so. By contrast, the European Union AI Act, which contains protections that are far more robust than the Blueprint’s, will become enforceable law when it takes full effect in 2024. However, the Blueprint does represent a framework for future legislation and regulation.
See the full article in the December 2022 National Lab Reporter, posted in advance of PDF publication.