Company News

2022 Global Responsible AI Summit: Key Highlights and Takeaways

On October 27th, Credo AI hosted the 2022 Global Responsible AI Summit, bringing together experts from AI, data ethics, civil society, academia, and government to discuss the opportunities, challenges, and actions required to make the responsible development and use of AI a reality.

November 10, 2022
Author(s)
Catharina Doria
“We never want to get to the point where we've got to design an algorithm to teach us what it means to be human again.” - Renée Cummings, AI Ethicist, Criminologist & Data Activist in Residence at University of Virginia.

Introduction

On October 27th, Credo AI hosted the 2022 Global Responsible AI Summit, bringing together experts from AI, data ethics, civil society, academia, and government to discuss the opportunities, challenges, and actions required to make the responsible development and use of AI a reality. The Summit attracted more than 1,100 registrants across 6 continents, making it one of the leading Responsible AI gatherings of the year.

From policy to business and ethics to research, we had the pleasure of hosting seventeen experts from multidisciplinary fields who provided actionable insights on how organizations can be more inclusive, transparent, and fair with their AI technology. 

Highlights from the Summit include:

  • Congresswoman Haley Stevens, representing the 11th District of Michigan, opened the 2022 Summit with a question that would echo throughout the day:
    "Will our technology of today work for us tomorrow?"
  • Reid Hoffman, Partner at Greylock Partners and Co-Founder of LinkedIn and Inflection AI, shared that Responsible AI can help us understand our "goals and targets": where we are headed and how we'll get there.
  • Raj Seshadri, President of Data & Services at Mastercard, and Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, discussed how they tackle #ResponsibleAI in their respective enterprises.
  • Josh Lee Kok Thong, Managing Director (APAC) at the Future of Privacy Forum, Ed Teather, Director of AI Initiatives at CEIMIA, and Elham Tabassi, Chief of Staff of the Information Technology Laboratory (ITL) at NIST, discussed the differences and convergences of AI governance policies across four regions: the United States, the United Kingdom, the European Union, and Singapore.
  • Dr. Alondra Nelson, Deputy Assistant to the President and Deputy Director for Science and Society in the White House Office of Science and Technology Policy, concluded the summit by presenting insights on the newly released Blueprint for an AI Bill of Rights and its importance in today's AI landscape.
Reid Hoffman and Navrina Singh at the panel "Responsible AI & Democracy." Watch now on demand.

After an inspiring day addressing responsible AI from diverse perspectives, we distilled all the insights shared by our experts into five main takeaways. Without further ado, here they are!

Main Takeaways

1. Translation is crucial for effective multi-stakeholder collaboration.

The responsible AI industry needs a common language to effectively address both the risks and the benefits of AI. Organizations working with multidisciplinary teams, as well as stakeholders across the industry, should communicate clearly so that sector-specific terms are not lost in translation.

"AI has the potential to be pro-human in nature, but in order to achieve this outcome, we must align on what 'good' looks like as a society, which will only be possible through open, collaborative and ongoing discussion." - Reid Hoffman, Partner at Greylock Partners and Co-Founder at LinkedIn and Inflection AI.

2. Responsible AI can be a competitive advantage for businesses.

Responsible AI practices are more than a compliance check—they add value for businesses. By implementing responsible practices, organizations can expect to increase innovation and customer trust, as well as gain a competitive edge.

"Once you are confident you have the proper guardrails and the right safety nets—the checks and balances, the tests, and the healthy intellectual debate—you can let innovation loose. Because then, you have principles like privacy by design and know people will innovate in the right way." - Raj Seshadri, President of Data & Services, Mastercard.

3. Context matters.

Achieving trustworthy AI depends on a shared understanding that governance and oversight of AI must be industry-specific, application-specific, and data-specific to ensure that a system is fit for purpose. Organizations and stakeholders should use the correct terminology and metrics, and follow the best practices appropriate to their industry.

"AI is all about context. When we talk about bias, fairness & transparency, we need specificities for different contexts" - Elham Tabassi, Chief of Staff (NIST)

4. Inclusion is crucial for the development of responsible technologies.

Artificial intelligence has the potential to exacerbate and perpetuate existing biases and prejudices. As a result, organizations should ensure diverse voices are included throughout the development lifecycle of any AI project — especially as new technologies like generative AI emerge.

"If we don't do due diligence at the start of our products and projects, then we have to correct for issues that emerge afterward. We need to imagine a world where we create rather than correct." - Margaret Mitchell, Researcher and Chief Ethics Scientist, Hugging Face.

5. The time is now.

Organizations should start their responsible AI journey today to stay ahead of the Responsible AI revolution. To support this endeavor, stakeholders should consider joining communities of practice to learn from others' experiences in the industry. At Credo AI, we welcome you to join our Responsible AI Community waitlist to learn from the best minds in the field.

“President Biden has been clear. We must act now to confront these challenges, to safeguard the rights of individuals, to protect the mental health of children, to end online hate and harassment, and to ensure technology is working for everyone.” - Dr. Alondra Nelson, Deputy Assistant to the President & Deputy Director for Science and Society in the White House Office of Science and Technology Policy.

Conclusion

Our five main takeaways show there is still much work to be done. Moving the industry from principles to practices is a shared responsibility and requires collaboration from the entire ecosystem. We hope our #2022GlobalResponsibleAISummit and the learnings shared by our experts have given you the knowledge, tools, and community to help ensure the responsible development and use of AI. On behalf of Credo AI, we thank all speakers, participants, and partners for joining us. We could not have done it without you!

If you haven’t done so, click here to watch our Summit and learn more about #RAI from leading experts in the industry!