The Artificial Intelligence (AI) Bill of Rights
One small step for civil liberties, or two steps back for tech innovation?
In our latest episode, we are talking about “The Blueprint for an AI Bill of Rights.”
This blueprint is a guide for the design, use, and deployment of automated systems to protect the American public. It was written by the Office of Science and Technology Policy (OSTP), an organization established within the Executive Office of the President in 1976 with a broad mandate to advise the President and others within the Executive Office of the President on the effects of science and technology on domestic and international affairs.
We are going to dig a little deeper into the principles outlined in this guide and evaluate the strengths and weaknesses of this blueprint. If you prefer to listen, the podcast episode is below:
The White House Office of Science and Technology Policy (OSTP) released a document titled the “Blueprint for an AI Bill of Rights.” This guide was written in response to the emergence of automated systems (like our friend ChatGPT) that make decisions across sectors like healthcare, manufacturing, finance, and more. Though the use of technology, data, and automated systems has brought extraordinary benefits and progress, the guide states that “this important progress must not come at the price of civil rights or democratic values.”
This framework describes protections that should be applied to all automated systems that have the potential to meaningfully impact individuals’ or communities’ exercise of rights, opportunities, or access. This includes not only addressing inequity but also, in President Biden’s words, protecting the right to privacy, which he called “the basis for so many more rights that we have come to take for granted that are ingrained in the fabric of this country.”
The OSTP spent a year organizing six panels, held publicly online, that brought together researchers, technologists, advocates, journalists, and policymakers. These panels focused on topics like consumer rights and protections, the criminal justice system, equal opportunities and civil justice, artificial intelligence and democratic values, social welfare and development, and the healthcare system. The recordings of these discussions are available online.
The blueprint establishes five principles, accompanied by a technical companion, for “guiding the design, use, and deployment” of automated systems. The five principles are 1) Safe and Effective Systems, 2) Algorithmic Discrimination Protections, 3) Data Privacy, 4) Notice and Explanation, and 5) Human Alternatives, Consideration, and Feedback.
1) Safe and Effective Systems
This principle is half about process: having the right testing in place pre-deployment, a protocol for risk assessment and mitigation, and transparent reporting, along with always retaining the option, based on those assessments, not to deploy a system or to remove it. The other half is what exactly systems should protect against: the foreseeable harms of data used in design, development, and deployment, and the harm of data reuse.
Throughout the AI Bill of Rights, you’ll find illustrative examples of when these guidelines may be relevant. Several are healthcare-related, which indicates the wide-ranging concerns that stakeholders have about the use of algorithms, particularly in clinical medicine.
“A proprietary model was developed to predict the likelihood of sepsis in hospitalized patients and was implemented at hundreds of hospitals around the country. An independent study showed that the model predictions underperformed relative to the designer’s claims while also causing ‘alert fatigue’ by falsely alerting the likelihood of sepsis.” The model’s poor discrimination and calibration in predicting sepsis, and its widespread adoption despite that poor performance, raise serious concerns about whether it was trained and optimized properly and whether the process behind it was robust. Source
“Many current models are subject to feedback loops that may not be accurate. An algorithm used to deploy police was found to repeatedly send police to neighborhoods they regularly visit, even if those neighborhoods were not the ones with the highest crime rates. These incorrect crime predictions were the result of a feedback loop generated from the reuse of data from previous arrests and algorithm predictions.” Source
2) Algorithmic Discrimination Protections
This principle states that you should not face discrimination by algorithms, and that systems should be used and designed in an equitable way. The point here is that the people who oversee the implementation of algorithms (designers, developers, and deployers) should ensure that measures are in place to protect communities from discrimination. An algorithm should not discriminate against a person due to their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.
Some illustrative examples:
In 2020, Health Bytes published an article detailing the failings of a risk assessment model built by Optum that disproportionately harmed Black patients. The AI Bill of Rights cites the same example, in which an “algorithm designed to identify patients with high needs for healthcare systematically assigned lower scores (indicating that they were not as high need) to Black patients than to white patients, even when those patients had similar numbers of chronic conditions and other markers of health.” In addition, clinical algorithms that physicians use to guide decisions may include sociodemographic variables that adjust or “correct” the algorithm’s output on the basis of a patient’s race or ethnicity, which can lead to race-based health inequities. Source
In the private sector, Amazon built, but eventually scrapped, a hiring tool trained on the resumes of the company’s predominantly male workforce. The tool rejected women applicants for spurious and discriminatory reasons; resumes with the word “women’s,” such as “women’s chess club captain,” were penalized in the candidate ranking. Essentially, the model taught itself that male candidates were preferred. Source
3) Data Privacy
This principle is about how you should be protected from abusive data practices via built-in protections, and how you should have agency over how data about you is used. People who build automated data systems should obtain your consent and feedback on the collection, use, access, transfer, and deletion of your data, and these consent requests should be clear, concise, and understandable. The principle adds enhanced protections for sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth.
Some illustrative examples applicable to healthcare:
“Location data, acquired from a data broker, can be used to identify people who visit abortion clinics.” Source
“Continuous positive airway pressure machines gather data for medical purposes, such as diagnosing sleep apnea, and send usage data to a patient’s insurance company, which may subsequently deny coverage for the device based on usage data. Patients were not aware that the data would be used in this way or monitored by anyone other than their doctor.” Source
Defining privacy guidelines like these just isn’t standard practice in the US yet, and we lack a comprehensive regulatory framework to ensure data privacy rights. The AI Bill of Rights is a step in the right direction toward formalizing them.
4) Notice and Explanation
This principle is about how you should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. This understanding can be facilitated by plain-language documentation on how the system works, explanations for the outcomes it produces (especially when a variety of inputs feed into a decision), and notice of key functionality changes.
“A formal child welfare investigation was opened against a parent based on an algorithm and without the parent ever being notified that data was being collected and used as part of an algorithmic child maltreatment risk assessment. The lack of notice or an explanation made it harder for those performing child maltreatment assessments to validate the risk assessment and denies parents knowledge that could help them contest a decision.” Source
5) Human Alternatives, Consideration, and Feedback
This principle is about how you should be able to opt-out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. Everyone should have access to a fallback system if an algorithm fails or causes harm.
“A patient was wrongly denied access to pain medication when the hospital’s software confused her medication history with her dog’s. Even after she tracked down an explanation for the problem, doctors were afraid to override the system, and she was forced to go without pain relief due to the system’s error.” Source
“An unemployment benefits system in Colorado required, as a condition of accessing benefits, that applicants have a smartphone in order to verify their identity. No alternative human option was readily available, which denied many people access to benefits.” Source
Drawbacks, Outcomes, and the Future of AI Regulation
Some academics believe that the AI Bill of Rights does not go far enough and will be largely ineffective. They wish the document had more of the checks and balances of the European Union’s Artificial Intelligence Act, which has actual accountability and enforceable clauses.
On the other hand, some say that these guidelines are stifling AI innovation. Eric Schmidt, the former CEO of Google, says that he “would not regulate things until we have to.” He also adds that “there are too many things that early regulation may prevent from being discovered.”
Despite these concerns, several federal agencies have adopted their own guidance on the responsible use of AI systems. According to Brookings, “at least a dozen agencies have issued some sort of binding guidance for the use of automated systems in the industries under their jurisdiction.”
This bill of rights could be the starting point for mandates by regulatory agencies on the use of automated systems. One relevant example is related to the mandates on health information exchange by electronic health systems:
The Office of the National Coordinator for Health Information Technology (ONC) currently mandates the seamless and secure access, exchange, and use of electronic health information. The mandate grew out of investigative reports and calls to tackle information blocking by major EHR providers like Epic and Cerner. The rule is designed to give patients and their healthcare providers secure access to health information. It also aims to increase innovation and competition by fostering an ecosystem of new applications that give patients more choices in their healthcare. It calls on the healthcare industry to adopt standardized application programming interfaces (APIs), which help individuals securely and easily access structured electronic health information using smartphone applications.
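To make the "standardized APIs" point concrete: the standard the healthcare industry has largely converged on is HL7 FHIR, which represents health records as JSON resources with a shared schema. Below is a minimal sketch showing why that standardization matters for patient-facing apps; the patient record itself is a made-up example, and the endpoint mentioned in the comment is purely illustrative.

```python
import json

# A minimal, hypothetical FHIR R4 Patient resource. In practice this JSON
# would be returned by a standardized API call such as
# GET https://ehr.example.com/fhir/Patient/123 (endpoint is illustrative).
patient_json = """
{
  "resourceType": "Patient",
  "id": "123",
  "name": [{"family": "Rivera", "given": ["Ana"]}],
  "birthDate": "1980-04-02"
}
"""

patient = json.loads(patient_json)

# Because the schema is standardized, any client application can read
# the same fields the same way, regardless of which EHR produced them.
family = patient["name"][0]["family"]
given = " ".join(patient["name"][0]["given"])
print(f"{given} {family}, born {patient['birthDate']}")
```

The point of the sketch is the uniformity: a smartphone app written against this schema works with any certified EHR's API, which is exactly the ecosystem of patient-facing applications the ONC rule is trying to foster.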
With the rise of generative algorithms like ChatGPT, it is no surprise that many people are worried about what a future with artificial intelligence will look like. Though these algorithms have a limited scope, for now, we’ll be vying for ChatGPT’s love when the AI takeover happens.
Thanks for joining us for the latest episode of Health Bytes. Let us know what you think about the AI Bill of Rights!