2022 Global Responsible AI Summit: Key Highlights and Takeaways

Catharina Doria
Marketing Manager
November 10, 2022
“We never want to get to the point where we've got to design an algorithm to teach us what it means to be human again.” - Renée Cummings, AI Ethicist, Criminologist & Data Activist in Residence at University of Virginia.


On October 27th, Credo AI hosted the 2022 Global Responsible AI Summit, bringing together experts from AI, data ethics, civil society, academia, and government to discuss the opportunities, challenges, and actions required to make the responsible development and use of AI a reality. The Summit attracted more than 1,100 registrants across 6 continents, making it one of the leading Responsible AI gatherings of the year.

From policy to business and ethics to research, we had the pleasure of hosting seventeen experts from multidisciplinary fields who provided actionable insights on how organizations can be more inclusive, transparent, and fair with their AI technology. 

Highlights from the Summit include:
  • Congresswoman Haley Stevens, representing the 11th District of Michigan, opened the 2022 Summit with a question that would echo throughout the day:
    "Will our technology of today work for us tomorrow?"
  • Reid Hoffman, Partner at Greylock Partners, Co-Founder of LinkedIn and Inflection AI, shared that Responsible AI can help us understand our "goals and targets": where we are headed and how we'll get there.
  • Raj Seshadri, President of Data & Services at Mastercard, and Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, discussed how they tackle #ResponsibleAI in their respective enterprises.
  • Josh Lee Kok Thong, Managing Director (APAC) at the Future of Privacy Forum; Ed Teather, Director of AI Initiatives at CEIMIA; and Elham Tabassi, Chief of Staff, ITL at NIST, discussed the differences and convergences of AI governance policies across four regions: the United States, the United Kingdom, the European Union, and Singapore.
  • Dr. Alondra Nelson, Deputy Assistant to the President and Deputy Director for Science and Society in the White House Office of Science and Technology Policy, concluded the summit by presenting insights on the newly released Blueprint for an AI Bill of Rights and its importance in today's AI landscape.
Reid Hoffman and Navrina Singh at the panel Responsible AI & Democracy. Watch now on demand.

After an inspiring day addressing responsible AI from diverse perspectives, we distilled all the insights shared by our experts into five main takeaways. Without further ado, here they are!

Main Takeaways

1. Translation is crucial for effective multi-stakeholder collaboration.

The responsible AI industry needs one common language to ensure we can effectively tackle the risks and benefits of AI. Organizations working with multidisciplinary teams, as well as stakeholders in the industry, should strive to communicate effectively and clearly to avoid losing sector-specific terms in translation. 

"AI has the potential to be pro-human in nature, but in order to achieve this outcome, we must align on what 'good' looks like as a society, which will only be possible through open, collaborative and ongoing discussion." - Reid Hoffman, Partner at Greylock Partners and Co-Founder at LinkedIn and Inflection AI.
2. Responsible AI can be a competitive advantage for businesses.

Responsible AI practices are more than a compliance check: they represent added value for businesses. By implementing responsible practices, organizations can expect to increase innovation and customer trust, as well as gain a competitive edge.

"Once you are confident you have the proper guardrails and the right safety nets—the checks and balances, the tests, and the healthy intellectual debate—you can let innovation loose. Because then, you have principles like privacy by design and know people will innovate in the right way." - Raj Seshadri, President of Data & Services, Mastercard.
3. Context matters.

Achieving trustworthy AI depends on a shared understanding that AI governance and oversight must be industry-specific, application-specific, and data-specific to be fit for purpose. Organizations and stakeholders must use the correct terminology and metrics, and follow the best practices appropriate to their specific industry.

"AI is all about context. When we talk about bias, fairness & transparency, we need specificities for different contexts" - Elham Tabassi, Chief of Staff (NIST)
4. Inclusion is crucial for the development of responsible technologies.

Artificial intelligence has the potential to exacerbate and perpetuate existing biases and prejudices. As a result, organizations should ensure diverse voices are included throughout the development lifecycle of any AI project — especially as new technologies like generative AI emerge.

"If we don't do due diligence at the start of our products and projects, then we have to correct for issues that emerge afterward. We need to imagine a world where we create rather than correct." - Margaret Mitchell, Researcher and Chief Ethics Scientist, Hugging Face.
5. The time is now.

Organizations should start their responsible AI journeys today to stay ahead of the Responsible AI revolution. To support this endeavor, stakeholders should consider joining communities of practice to learn from others' experiences in the industry. At Credo AI, we welcome you to join our Responsible AI Community waitlist to learn from the best minds in the field.

“President Biden has been clear. We must act now to confront these challenges, to safeguard the rights of individuals, to protect the mental health of children, to end online hate and harassment, and to ensure technology is working for everyone.” - Dr. Alondra Nelson, Deputy Assistant to the President & Deputy Director for Science and Society in the White House Office of Science and Technology Policy.


Our five main takeaways show there is still much work to be done. Moving the industry from principles to practices is a shared responsibility and requires collaboration from the entire ecosystem. We hope our #2022GlobalResponsibleAISummit and the learnings shared by our experts have given you the knowledge, tools, and community to help ensure the responsible development and use of AI. On behalf of Credo AI, we thank all speakers, participants, and partners for joining us. We could not have done it without you!

If you haven’t done so, click here to watch our Summit and learn more about #RAI from leading experts in the industry! 
