
2022 Global Responsible AI Summit: Key Highlights and Takeaways

Catharina Doria
Marketing Manager
November 10, 2022
“We never want to get to the point where we've got to design an algorithm to teach us what it means to be human again.” - Renée Cummings, AI Ethicist, Criminologist & Data Activist in Residence at the University of Virginia.

Introduction

On October 27th, Credo AI hosted the 2022 Global Responsible AI Summit, bringing together experts from AI, data ethics, civil society, academia, and government to discuss the opportunities, challenges, and actions required to make the responsible development and use of AI a reality. The Summit attracted more than 1,100 registrants across 6 continents, making it one of the leading Responsible AI gatherings of the year.

From policy to business and ethics to research, we had the pleasure of hosting seventeen experts from multidisciplinary fields who provided actionable insights on how organizations can be more inclusive, transparent, and fair with their AI technology. 

Highlights from the Summit include:
  • Congresswoman Haley Stevens, representing the 11th District of Michigan, opened the 2022 Summit with a question that would echo throughout the day:
    "Will our technology of today work for us tomorrow?"
  • Reid Hoffman, Partner at Greylock Partners and Co-Founder of LinkedIn and Inflection AI, shared that Responsible AI can help us define our "goals and targets": where we are headed and how we'll get there.
  • Raj Seshadri, President of Data & Services at Mastercard, and Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, discussed how they tackle #ResponsibleAI in their respective enterprises.
  • Josh Lee Kok Thong, Managing Director (APAC) at the Future of Privacy Forum, Ed Teather, Director of AI Initiatives at CEIMIA, and Elham Tabassi, Chief of Staff of the Information Technology Laboratory (ITL) at NIST, discussed the differences and convergences of AI governance policies across four regions: the United States, the United Kingdom, the European Union, and Singapore.
  • Dr. Alondra Nelson, Deputy Assistant to the President and Deputy Director for Science and Society in the White House Office of Science and Technology Policy, concluded the summit by presenting insights on the newly released Blueprint for an AI Bill of Rights and its importance in today's AI landscape.
Reid Hoffman and Navrina Singh at the panel Responsible AI & Democracy. Watch now on demand.

After an inspiring day addressing responsible AI from diverse perspectives, we distilled all the insights shared by our experts into five main takeaways. Without further ado, here they are!

Main Takeaways

1. Translation is crucial for effective multi-stakeholder collaboration.

The responsible AI industry needs a common language to effectively address the risks and benefits of AI. Organizations working with multidisciplinary teams, as well as stakeholders across the industry, should strive to communicate clearly so that sector-specific terminology is not lost in translation.

"AI has the potential to be pro-human in nature, but in order to achieve this outcome, we must align on what 'good' looks like as a society, which will only be possible through open, collaborative and ongoing discussion." - Reid Hoffman, Partner at Greylock Partners and Co-Founder at LinkedIn and Inflection AI.
2. Responsible AI can be a competitive advantage for businesses.

Responsible AI practices are more than a compliance check; they represent added value for businesses. By implementing responsible practices, organizations can expect to increase innovation and customer trust, as well as gain a competitive edge.

"Once you are confident you have the proper guardrails and the right safety nets—the checks and balances, the tests, and the healthy intellectual debate—you can let innovation loose. Because then, you have principles like privacy by design and know people will innovate in the right way." - Raj Seshadri, President of Data & Services, Mastercard.
3. Context matters.

Achieving trustworthy AI depends on a shared understanding that AI governance and oversight must be industry-, application-, and data-specific to ensure each system is fit for purpose. Organizations and stakeholders must use the correct terminology and metrics, and follow the best practices appropriate to their specific industry.

"AI is all about context. When we talk about bias, fairness & transparency, we need specificities for different contexts" - Elham Tabassi, Chief of Staff (NIST)
4. Inclusion is crucial for the development of responsible technologies.

Artificial intelligence has the potential to exacerbate and perpetuate existing biases and prejudices. As a result, organizations should ensure diverse voices are included throughout the development lifecycle of any AI project — especially as new technologies like generative AI emerge.

"If we don't do due diligence at the start of our products and projects, then we have to correct for issues that emerge afterward. We need to imagine a world where we create rather than correct." - Margaret Mitchell, Researcher and Chief Ethics Scientist, Hugging Face.
5. The time is now.

Organizations should start their responsible AI journeys today to stay ahead of the Responsible AI revolution. To support this endeavor, stakeholders should consider joining communities of practice to learn from others' experiences in the industry. At Credo AI, we welcome you to join our Responsible AI Community waitlist to learn from the best minds in the field.

“President Biden has been clear. We must act now to confront these challenges, to safeguard the rights of individuals, to protect the mental health of children, to end online hate and harassment, and to ensure technology is working for everyone.” - Dr. Alondra Nelson, Deputy Assistant to the President & Deputy Director for Science and Society in the White House Office of Science and Technology Policy.

Conclusion

Our five main takeaways show there is still much work to be done. Moving the industry from principles to practices is a shared responsibility and requires collaboration from the entire ecosystem. We hope our #2022GlobalResponsibleAISummit and the learnings shared by our experts have given you the knowledge, tools, and community to help ensure the responsible development and use of AI. On behalf of Credo AI, we thank all speakers, participants, and partners for joining us. We could not have done it without you!

If you haven’t done so, click here to watch our Summit and learn more about #RAI from leading experts in the industry! 
