Roundtable Recap: Realizing Responsible AI in Washington, DC

Naysa Mishler
Head of Marketing
July 15, 2022

Last month, Credo AI, in partnership with our investor Sands Capital, kicked off our Realizing Responsible AI roundtable series in Washington, D.C., bringing together policymakers, industry, academia and other key stakeholders in the Responsible AI space to discuss how to move from principles to practice. Our Founder and CEO, Navrina Singh, was joined by Northrop Grumman’s VP & Chief Compliance Officer, Carl Hahn, and EqualAI’s President and CEO, Miriam Vogel, for a discussion focused on driving organizational change, starting with first movers in industries including HR tech, healthcare, financial services and defense.

Here are the key takeaways from our conversation in DC:

1/ Embed enterprise values into company culture to accelerate RAI adoption. 

Enterprise values help shape people’s behaviors and actions, including the design, development and deployment of AI. It’s more than simply checking a box – it’s the creation of an actionable culture that empowers employees to share in your mission to realize the responsible use of AI. The foundation of this type of culture is alignment on enterprise values across the organization. Navrina Singh recommends that once those values are set, organizations must work to codify these values, build them into their organizational infrastructure, observe the impact on employees and repeat the process with diverse voices providing input at every stage. 

2/ Mitigate unintended consequences throughout an AI model’s lifecycle.

“AI is a reflection of our society and the companies building and deploying it,” said Miriam Vogel. She suggests that companies take a closer look at the potential beneficiaries and the unintended harms associated with automated decision-making tools and AI models. “Without representation in the development and testing phase, there is serious danger that natural biases will remain unexposed until full-scale deployment.”

Even if you aren’t building AI, you may still be using it. In fact, 80% of global HR departments use AI tools like resume analyzers and chatbots (SHRM, 2019). New York City is one of the first localities to pass legislation requiring any algorithmic hiring tools to be audited annually for disparate impact, beginning January 1, 2023. Laws like these will help shape future regulations and, ultimately, enable AI to create more equality and inclusivity. Learn more about Credo AI's audit offering here.
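To make "audited for disparate impact" concrete: a common starting point in such audits is the impact ratio, which compares each group's selection rate to that of the most-selected group. The sketch below is a minimal, hypothetical illustration of that calculation (the group names and numbers are invented, and this is not a description of Credo AI's audit methodology or the law's full requirements):

```python
def impact_ratios(group_outcomes):
    """Compute each group's impact ratio: the group's selection rate
    divided by the highest selection rate across all groups.
    group_outcomes maps group name -> (number selected, number of applicants).
    Ratios near 1.0 suggest parity; low ratios flag potential disparate impact."""
    rates = {g: selected / total for g, (selected, total) in group_outcomes.items()}
    highest_rate = max(rates.values())
    return {g: rate / highest_rate for g, rate in rates.items()}

# Hypothetical applicant data for two demographic groups
outcomes = {"group_a": (40, 100), "group_b": (24, 100)}
print(impact_ratios(outcomes))  # {'group_a': 1.0, 'group_b': 0.6}
```

A full bias audit goes well beyond this single metric, but the impact ratio shows the basic comparison regulators and auditors start from.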

3/ Develop trust, transparency and action with multi-stakeholder engagement.

Carl Hahn warned, “Without public trust, without policy leader trust, without the trust of the people using your technology, you will fail.” Carl emphasized that if you can build trust, you can deliver enormous value to your organization.

Trust and transparency in AI are a dynamic process, one that often requires engagement across technical and oversight teams. Share your learnings and best practices, and use tools that enable multi-stakeholder collaboration. As mentioned in the Credo AI manifesto, the Responsible AI ecosystem needs a community of practice to deliver on the promise of AI. Credo AI is bridging the oversight deficit by operationalizing behavior-changing policies that align incentives across technical and business stakeholders. To help align your organization, consider establishing cross-functional working groups or an RAI board. Badge programs like EqualAI’s can help executives identify and reduce unconscious bias and create an action plan to develop and maintain responsible AI governance.

4/ Acknowledge external pressures and the increasingly important role of ESG in RAI.

AI policy is expanding, from EEOC and DOJ guidance to the NIST AI Risk Management Framework and the EU AI Act, but regulation is not the only pressure organizations are facing. Employees, consumers, investors and businesses are also demanding more transparency, trust and accountability.

AI oversight and accountability are also quickly becoming board-level issues, given the potential for directors and companies to be exposed to legal liability if their AI systems are not designed, developed and deployed properly. “Stakeholders expect us to do this. I know our Board does… If you aren’t thinking about RAI as a differentiator and an ESG issue, you will be left behind.” - Panelist

As part of our commitment to ensure Responsible AI becomes the global standard, Credo AI is bringing together experts including policymakers, data scientists and business leaders across risk, compliance and audit to be part of the solution. Join our community waitlist here.
