Company News

Roundtable Recap: Realizing Responsible AI in Washington, DC

July 15, 2022
Author(s)
Naysa Mishler

Last month, Credo AI, in partnership with our investor Sands Capital, kicked off our Realizing Responsible AI roundtable series in Washington, D.C., bringing together policymakers, industry, academia, and other key stakeholders in the Responsible AI space to discuss how to move from principles to practice. Our Founder and CEO, Navrina Singh, was joined by Northrop Grumman's VP & Chief Compliance Officer, Carl Hahn, and EqualAI's President and CEO, Miriam Vogel, for a discussion focused on driving organizational change, starting with first movers in industries including HR tech, healthcare, financial services, and defense.

Here are the key takeaways from our conversation in DC:

1/ Embed enterprise values into company culture to accelerate RAI adoption.

Enterprise values help shape people's behaviors and actions, including the design, development, and deployment of AI. It's more than simply checking a box – it's the creation of an actionable culture that empowers employees to share in your mission to realize the responsible use of AI. The foundation of this type of culture is alignment on enterprise values across the organization. Navrina Singh recommends that once those values are set, organizations codify them, build them into their organizational infrastructure, observe the impact on employees, and repeat the process with diverse voices providing input at every stage.

2/ Mitigate unintended consequences throughout an AI model's lifecycle.

“AI is a reflection of our society and the companies building and deploying it,” said Miriam Vogel. She suggests that companies take a closer look at the potential beneficiaries and the unintended harms associated with automated decision-making tools and AI models. “Without representation in the development and testing phase, there is serious danger that natural biases will remain unexposed until full-scale deployment.”

Even if you aren’t building AI, you may still be using it. In fact, 80% of global HR departments use AI tools like resume analyzers and chatbots (SHRM, 2019). New York City is one of the first localities to pass legislation requiring algorithmic hiring tools to be audited annually for disparate impact, beginning January 1, 2023. Laws like these will help shape future regulations and, ultimately, enable AI to produce more equitable and inclusive outcomes. Learn more about Credo AI's audit offering here.

3/ Develop trust, transparency and action with multi-stakeholder engagement.

Carl Hahn warned, “Without public trust, without policy leader trust, without the trust of the people using your technology, you will fail.” He emphasized that if you can build trust, you can deliver enormous value to your organization.

Building trust and transparency in AI is a dynamic process, often requiring engagement across technical and oversight teams. Share your learnings and best practices, and use tools that enable multi-stakeholder collaboration. As mentioned in the Credo AI manifesto, the Responsible AI ecosystem needs a community of practice to deliver on the promise of AI. Credo AI is bridging the oversight deficit by operationalizing behavior-changing policies that align incentives across technical and business stakeholders. To help align your organization, consider establishing cross-functional working groups or an RAI board. Badge programs like EqualAI's can help executives identify and reduce unconscious bias and create an action plan to develop and maintain responsible AI governance.

4/ Acknowledge external pressures and the increasingly important role of ESG in RAI.

AI policy is expanding, from EEOC and DOJ guidance to the NIST AI Risk Management Framework and the EU AI Act, but regulation is not the only pressure organizations are facing. Employees, consumers, investors, and business partners are also demanding more transparency, trust, and accountability.

AI oversight and accountability is quickly becoming a board-level issue as well, given the potential for directors and companies to be exposed to legal liability if their AI systems are not designed, developed, and deployed properly. “Stakeholders expect us to do this. I know our Board does…. If you aren’t thinking about RAI as a differentiator and an ESG issue, you will be left behind.” - Panelist

As part of our commitment to making Responsible AI the global standard, Credo AI is bringing together experts including policymakers, data scientists, and business leaders across risk, compliance, and audit to be part of the solution. Join our community waitlist here.