The AI Bill of Rights: a human-centric perspective

8-12-2022

In early October, the White House took a major step towards codifying key rights in the age of artificial intelligence (AI) with its release of the Blueprint for an AI Bill of Rights. Drawn up by the White House Office of Science and Technology Policy (OSTP), the Blueprint is intended to guide the responsible use of AI while empowering people, companies (especially big tech companies), and policymakers across the United States.

The Blueprint identifies five principles to govern the design, use, and deployment of automated systems. These principles are:

  • Safe and effective systems

  • Algorithmic discrimination protections

  • Data privacy

  • Notice and explanation

  • Human alternatives, consideration, and fallback

At Mitek, we’ve been thinking about this milestone document in the context of identity protection. With almost 60 countries now having a national AI strategy in place, the Blueprint sets the stage for AI to become ubiquitous within the identity protection space over the coming years. In this article, we take a closer look at the Blueprint through a humanistic lens to uncover some key points pertinent to the identity community.

 

Finding the intersection between people and the AI code of conduct

Throughout history, progressive system automation has improved the lives of millions across the United States. These automated systems have been predominantly driven by human input and guidance. With the adoption of AI, the human element takes the passenger seat as data-driven algorithms gain more influence over decision-making processes.

Unintended algorithmic bias inherent in AI systems has been shown to exclude or disqualify certain members of the American public from participating in spaces where identity verification is a prerequisite. Algorithms used in hiring and credit decisions have been found to reflect and reproduce unwanted inequities or to embed new, harmful discrimination.

These effects have alienating consequences among affected communities. Studies show that 34% of people are afraid of AI, while 24% think AI will be damaging to society. The figures are even higher among visible minorities. Going a step further, only 5% of consumers would prefer AI or automated systems for all interactions when engaging with banks.

These outcomes are deeply harmful – but not inevitable. The AI Bill of Rights seeks to be anti-bias, considerate, and clear in its language to ensure a fair and human-centric approach to evolving AI adoption.

 


Breaking down the principles of the AI Bill of Rights 

Principle 1: Safe and effective systems

This principle states that “the public should be protected from unsafe or ineffective systems”. Just as the Constitutional Bill of Rights protects our basic civil rights from the government, measures must be in place to protect individuals from faulty AI systems.

To that effect, we believe that public consultation with diverse communities must be carried out to identify the potential harms a system could cause to individuals. Data systems used for identity verification must be proactively designed to protect individuals from harms stemming from unintended, yet foreseeable, uses of the system. Finally, irrelevant or inappropriate data should be identified and eliminated from the data collection process.

Principle 2: Algorithmic discrimination protections

The Blueprint requires that algorithms not expose individuals to discrimination and that systems be designed and used equitably. Unchecked artificial intelligence exacerbates existing disparities and creates new roadblocks for already-marginalized groups, including communities of color, gender minorities, and people with disabilities.

When AI technology is developed or used in ways that do not adequately take into account existing inequities, we see real-life harms that impinge on civil liberties and democratic values, such as biased predictions that lead to the exclusion of certain groups of people. We believe this principle is paramount to maintaining the dignity of all members of society and ensuring their inclusion and active participation.

Principle 3: Data privacy

The principle of data privacy states that users “should be protected from abusive data practices via built-in protections”, and that they should have agency over how their data is used. This section of the Artificial Intelligence Bill of Rights is the closest to already being reality, with some parts of it already enacted in law – and for good reason.

With data collection’s explosive rise over the last decade, the world has seen rampant abuses of personal data for marketing and other functions not agreed to by the user. Fortunately, this principle reinforces the requirement that user consent be obtained in clear, brief language before any data collection is conducted. We also believe that, where possible, users should have access to reporting that confirms their data decisions have been respected.

Principle 4: Notice and explanation

Users “should know when an automated system is being used” and understand its impact on them. In our view, this is imperative to allowing users to provide informed consent. People need to know when automated technologies will be used to collect their data.

Furthermore, any updates to an artificial intelligence system should be communicated and explained. This practice helps build trust with users and increases confidence in AI applications.

Principle 5: Human alternatives, consideration, and fallback

The fifth principle states that users “should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems”. We believe in respecting people’s decisions, no matter the reasons behind them. 

Having a human alternative to AI can put the user at ease by allowing for a human-to-human exchange of information and dialogue. It also provides an opportunity for systems to be improved in areas where a user may have concerns, such as AI-based decisions that users may want to contest. Principle 5 ensures that people, a business’s greatest asset, remain central to its operations.

What the future holds

The AI Bill of Rights is a step in the right direction for AI regulation and encourages the federal government to lead by example in its application. There will undoubtedly be future iterations of this framework as industries give AI a larger seat at the operations table. Maintaining human rights and democratic values as the focal point of future bills will be a determining factor in the successful integration of AI within society.

 

 Read more about artificial intelligence and the future of identity

 

For more on digital identity and fraud prevention, check out the new Gartner report, Market Guide for Identity Verification Innovation.
