Roundtable Recap: Realizing Responsible AI in Washington, DC

Naysa Mishler
Head of Marketing
July 15, 2022

Last month, Credo AI, in partnership with our investor Sands Capital, kicked off our Realizing Responsible AI roundtable series in Washington, D.C., bringing together policymakers, industry, academia and other key stakeholders in the Responsible AI space to discuss how to move from principles to practice. Our Founder and CEO, Navrina Singh, was joined by Northrop Grumman’s VP & Chief Compliance Officer, Carl Hahn, and EqualAI’s President and CEO, Miriam Vogel, for a discussion focused on driving organizational change, starting with first-movers in industries including HR tech, healthcare, financial services and defense.

Here are the key takeaways from our conversation in DC:

1/ Embed enterprise values into company culture to accelerate RAI adoption.

Enterprise values help shape people’s behaviors and actions, including the design, development and deployment of AI. It’s more than simply checking a box – it’s the creation of an actionable culture that empowers employees to share in your mission to realize the responsible use of AI. The foundation of this type of culture is alignment on enterprise values across the organization. Navrina Singh recommends that once those values are set, organizations codify them, build them into their organizational infrastructure, observe the impact on employees, and repeat the process with diverse voices providing input at every stage.

2/ Mitigate unintended consequences throughout an AI model’s lifecycle.

“AI is a reflection of our society and the companies building and deploying it,” said Miriam Vogel. She suggests that companies take a closer look at the potential beneficiaries and the unintended harms associated with automated decision-making tools and AI models. “Without representation in the development and testing phase, there is serious danger that natural biases will remain unexposed until full-scale deployment.”

Even if you aren’t building AI, you may still be using it. In fact, 80% of global HR departments use AI tools like resume analyzers and chatbots (SHRM, 2019). New York City is one of the first localities to pass legislation requiring any algorithmic hiring tools to be audited annually for disparate impact, beginning January 1, 2023. Laws like these will help shape future regulations and, ultimately, enable AI to create more equality and inclusivity. Learn more about Credo AI's audit offering here.

3/ Develop trust, transparency and action with multi-stakeholder engagement.

Carl Hahn warned, “Without public trust, without policy leader trust, without the trust of the people using your technology, you will fail.” Carl emphasized that if you can build trust, you can deliver enormous value to your organization.

Trust and transparency in AI are a dynamic process, often requiring engagement across technical and oversight teams. Share your learnings and best practices, and utilize tools that enable multi-stakeholder collaboration. As mentioned in the Credo AI manifesto, the Responsible AI ecosystem needs a community of practice to deliver on the promise of AI. Credo AI is bridging the oversight deficit by operationalizing behavior-changing policies to align incentives across technical and business stakeholders. To help align your organization, consider establishing cross-functional working groups or an RAI board. Badge programs like EqualAI’s can help executives identify and reduce unconscious bias and create an action plan to develop and maintain responsible AI governance.

4/ Acknowledge external pressures and the increasingly important role of ESG in RAI.

AI policy is expanding, from the EEOC and DOJ guidance to the NIST AI Risk Management Framework and the EU AI Act, but regulation is not the only pressure organizations are facing. Employees, consumers, investors and businesses are also demanding more transparency, trust and accountability.

AI oversight and accountability are quickly becoming board-level issues as well, given the potential for directors and companies to be exposed to legal liability if their AI systems are not designed, developed and deployed properly. “Stakeholders expect us to do this. I know our Board does… If you aren’t thinking about RAI as a differentiator and an ESG issue, you will be left behind.” - Panelist

As part of our commitment to ensure Responsible AI becomes the global standard, Credo AI is bringing together experts including policymakers, data scientists and business leaders across risk, compliance and audit to be part of the solution. Join our community waitlist here.
