Company News

Our Predictions for Ethical AI in 2022

At Credo AI, we’re optimistic about the growth we’ve seen in the Ethical AI space in the last year — from emerging regulations to growing customer demand, here’s what we think will happen to continue this momentum in 2022.

January 10, 2022
Author(s)
Susannah Shattuck

This past year was a busy one for us at Credo AI. In 2021, we spoke with over 120 organizations about operationalizing Responsible AI. Across all of our customers and partners, we’ve seen a clear pattern: a steadily growing need for tools, standards, and guardrails that will help them align their AI systems with human values.

Based on everything I’ve seen happening in the Ethical AI space in 2021, here are a few of the things I believe (and sincerely hope) will happen in 2022. I’m feeling optimistic about the growth of the space in the coming year!

1. Organizations will feel the pressure to put their “ethical AI principles” into real practice.

According to the AI Index Report, in 2018, 45 different organizations across the public and private sectors published some kind of AI Ethics Principles. These days, nearly every big tech company has a nice website or blog post promoting its stance on how to build “ethical AI.”

But in the three years since people began to say the right thing, relatively little has been done to figure out how to do the right thing. That is, very few organizations have truly operationalized their AI ethics principles at scale. There are a lot of reasons for this — lack of regulatory urgency (see prediction #2), lack of standards and benchmarks (see prediction #3), and the immaturity of the tools available in the market to help.

What will change in 2022: As new regulations roll out and the public’s growing awareness of ethical AI issues translates into real market demand for companies to put their money where their mouth is, I predict that many organizations will feel pressure to prove that they are acting on their AI ethics principles. We’ll see more “Ethical AI” teams emerge with C-level sponsorship and representation. Responsibility will shift from resting largely on the builders of AI systems — data scientists and ML engineers — to being shared across multiple stakeholders within the organization. Dedicated roles will emerge for managing compliance with an organization’s stated AI values as this becomes a core enterprise priority.

2. New regulations will promote growth of an “AI assurance” ecosystem.

There are already many existing regulations that have started to establish guardrails for the use of data and AI, from GDPR to the Fair Credit Reporting Act. But it’s clear that in 2022, even more significant, sector-specific AI regulation is coming; regulatory bodies and agencies around the world spent 2021 drafting and debating new rules about how AI systems should be governed. The key question that everybody is asking — including regulators themselves — is what these new laws will actually entail.

From what we’ve seen so far, I believe that the key focus of AI regulation in 2022 will be on establishing requirements for AI system assessment and auditing. From the conformity assessments of the EU AI Act to the “fairness audits” required by New York City’s recently passed law on the use of algorithmic hiring systems, organizations are beginning to face new requirements around evaluating the risks associated with their AI systems.

What will change in 2022: While many companies were able to rely on self-assessment of AI-related risk in 2021, they will no longer be allowed to police themselves — particularly when it comes to sensitive use cases. I hope and strongly believe that these new requirements will promote the growth of a robust ecosystem of independent AI auditors, assessment tools, and public repositories of AI risk assessment reports (and I’m not alone in thinking this; the CDEI in the UK recently published a report that points in the same direction).

3. Ethical AI standards and benchmarks will start to emerge.

One of the biggest challenges that many organizations face today when it comes to operationalizing Ethical AI and managing regulatory compliance is that there are very few standards and benchmarks for “what good looks like.” I’ve heard everyone from C-level executives to data science managers complain that they want to do the right thing, but they don’t know what to measure themselves against.

The good news is that new regulations are demanding that standards be defined, and the research community is coming together to tackle this problem. I think that 2022 is going to be a critical year for establishing standards across a variety of Ethical AI dimensions, including AI fairness, transparency, and more. In 2021, NIST, the IEEE, and other standard-setting bodies started the critical work of convening experts across industry, academia, and public policy to develop standard frameworks for evaluating and mitigating AI risk. I am also particularly excited about the emergence of independent research organizations like Dr. Timnit Gebru’s DAIR.

What will change in 2022: Researchers will continue the critical work of defining benchmarks for what it means for an AI system to be “ethical,” fair, explainable, and more — and as a result, I hope that we’ll start to see companies that develop ML systems begin competing not only against performance benchmarks but also against ethical benchmarks.
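
To make the idea of an ethical benchmark concrete, here is a minimal sketch of what scoring a model against a fairness benchmark could look like. It uses demographic parity difference, one commonly used fairness metric; the function, data, and threshold are all hypothetical, invented for illustration rather than drawn from any established standard.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    One common fairness metric: values near 0 mean the model selects
    members of both groups at similar rates.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # selection rate for group 0
    rate_b = y_pred[group == 1].mean()  # selection rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical model predictions and group membership, for illustration only.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_difference(predictions, groups)
BENCHMARK = 0.2  # illustrative threshold; real benchmarks are still being defined

print(f"Demographic parity difference: {gap:.2f}")
print("Within benchmark" if gap <= BENCHMARK else "Exceeds benchmark")
```

A real benchmark suite would span many more dimensions (equalized odds, calibration, explainability, and so on), but the basic pattern of scoring a model against an agreed-upon threshold is the same.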

4. Third-party AI risk assessment will become a top organizational priority.

In 2021, the concept of “foundation models” emerged, and the AI research community began to grapple with the idea that in the not-so-distant future, broadly trained, enormously large models may become the basis for most AI-driven applications. Regardless of where you stand on the importance and potential of foundation models, a critical truth is beginning to emerge that cannot be ignored when it comes to AI ethics: for most organizations, it will make more financial sense to buy and then customize ML models and AI systems from a third-party vendor than to build them in-house.

But when it comes to understanding whether an externally developed ML model or AI system meets ethical requirements, organizations are mostly flying blind. Very few private-sector organizations today evaluate their third-party ML models or AI systems for ethical risk, and public-sector organizations are only just starting to do so. And very few AI/ML vendors proactively report ethical assessment results to their customers, outside of what is required by regulation.

What will change in 2022: As more companies become subject to regulations requiring AI risk assessment, they will begin to demand the same assessments of their AI/ML vendors. Vendors are going to have to figure out how to meet customers’ new demands for ethical AI reporting, in addition to meeting new regulatory requirements.
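
As a purely hypothetical sketch of what such vendor reporting could look like, here is a machine-readable assessment summary a vendor might ship alongside a model, which a buyer could then check against its own thresholds. The schema, field names, and values are invented for this example and do not correspond to any existing standard.

```python
import json

# Hypothetical vendor-supplied assessment summary; the schema and values
# are invented for illustration and do not reflect any existing standard.
vendor_report = {
    "model": "example-hiring-screener-v2",
    "assessed_on": "2021-12-15",
    "assessments": {
        "fairness": {"metric": "demographic_parity_difference", "value": 0.04},
        "performance": {"metric": "auc", "value": 0.91},
    },
}

def meets_buyer_requirements(report, max_fairness_gap=0.1):
    """Check a vendor report against the buyer's own (illustrative) threshold."""
    return report["assessments"]["fairness"]["value"] <= max_fairness_gap

print(json.dumps(vendor_report, indent=2))
print("Meets fairness requirement:", meets_buyer_requirements(vendor_report))
```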

Between new regulatory requirements and growing public demand, it’s clear that 2022 is going to be a banner year for Responsible AI. For organizations and practitioners working with ML models and AI systems, the time to put your AI principles into practice is now. Whether you’re not sure where to get started or you already have a plan in place and need help operationalizing it, Credo AI is here to help.

Find out how the Credo AI risk management solution can support your AI governance efforts, no matter how far along in the journey you are. Take our AI governance readiness survey or schedule a demo today.