Credobility:

The only AI Governance Platform deeply integrated into the global policy and standards ecosystem.

Credo AI is a trusted partner for global policymakers, regulators, and standard setters. Our team includes staff with prior experience at the European Commission, technology trade associations, and the U.S. Department of Commerce, as well as at some of Europe's largest enterprises, supported by our on-the-ground staff in the region. Credo AI's Policy team engages key stakeholders in the U.S. Congress, as well as mayors, governors, and state-level legislators.

Credo AI’s CEO and Founder, Navrina Singh, sits on the National Artificial Intelligence Advisory Committee (NAIAC), which advises President Biden and the National AI Initiative Office. She is a Young Global Leader with the World Economic Forum and serves as an OECD AI expert on the OECD’s Expert Group on AI Risk and Accountability. Navrina previously served as an executive board member of the Mozilla Foundation and Mozilla AI, supporting its trustworthy AI charter.

A Glimpse into our Global Impact and Ecosystem:

EU-U.S. TTC Joint Roadmap for Trustworthy AI and Risk Management

Credo AI’s Global Policy Director, Evi Fuelle, moderated a panel discussion at the Austrian Embassy in Washington entitled "Transatlantic Approaches to AI Regulation in Times of Great Power Competition," at the invitation of the Austrian Embassy and Bertelsmann Foundation, with leading voices in technology policy including Rob Atkinson (CEO, ITIF), Peter Fatelnig (Minister-Counsellor, EU Delegation to the U.S.), and Molly Montgomery (former U.S. Department of State, Director of Public Policy, Meta).

June 12, 2023

Credo AI Joins AI Verify Foundation

Credo AI is one of the foundational members of the AI Verify Foundation, a not-for-profit foundation wholly owned by the Infocomm Media Development Authority of Singapore (IMDA). The Foundation aims to harness the collective power and contributions of the international open-source community to develop AI governance testing tools that better enable the development and deployment of trustworthy AI.

June 7, 2023

EU AI Act Open Loop Sandbox

Credo AI was chosen as one of a select group of small and medium-sized enterprises (SMEs) to participate in Open Loop, a global policy experimentation program supported by Meta, designed to test various aspects of the EU AI Act in practice. Credo AI applied design thinking to the policy, testing it "in practice" by asking difficult questions and sharing ideas across our product, policy, and data science teams. Credo AI contributed thought leadership to a robust discussion on the implementation of the EU AI Act, drawing on our experience working with industry and our expertise in creating governance artifacts (transparency reports, algorithmic impact assessments, algorithm design evaluations, model cards, and more) for enterprises of all sizes and a variety of AI use cases. Learn more here.

June 5, 2023

National AI Advisory Committee (NAIAC) Public Hearing at U.S. Department of Commerce

As a member of the National Artificial Intelligence Advisory Committee (NAIAC), Credo AI's CEO Navrina Singh spoke about the year-long work of the NAIAC at a public hearing hosted by the U.S. Department of Commerce, announcing the release of the Committee’s Year 1 Report. The NAIAC, launched in April 2022, is tasked with advising the President and the National AI Initiative Office on topics related to the National AI Initiative.

May 1, 2023

UK Centre for Data Ethics and Innovation (CDEI) Algorithmic Impact Assessments Workshop

Credo AI was the only enterprise selected to present at the UK Government Centre for Data Ethics and Innovation (CDEI) workshop entitled “Exploring Tools for Trustworthy AI: Impact Assessments,” hosted by the CDEI and the Ada Lovelace Institute in Whitehall, London.

In this workshop, Credo AI showcased our RAI Governance Platform and our research into algorithmic impact assessment prototypes for generative AI and human resources to an audience of global regulators and enterprises, including the UK Information Commissioner’s Office, the Ada Lovelace Institute, The Alan Turing Institute, the British Standards Institution (BSI), DeepMind, Mastercard, Northrop Grumman, the NHS AI Lab, and more.

The workshop provided an interactive opportunity for regulators, legislators, standard-setting bodies, and affected enterprises to exchange best practices for algorithmic transparency reporting.

April 18, 2023

OECD AI Risk and Accountability Expert Working Group

Credo AI’s CEO Navrina Singh spoke at the OECD.AI Risk and Accountability Expert Working Group meeting at OECD Headquarters in Paris, France, as part of critical discussions on the impact of generative AI on AI policy worldwide, including the NIST AI Risk Management Framework, international standards on AI, and the European Union AI Act.

Through the OECD.AI Network of Experts workstream on AI risk, the OECD is engaging with partner organizations, including the International Organization for Standardization (ISO), the Institute of Electrical and Electronics Engineers (IEEE), the National Institute of Standards and Technology (NIST), the European standardization bodies CEN and CENELEC, the European Commission (EC), the Council of Europe (CoE), UNESCO, the EU-U.S. Trade and Technology Council (TTC), the Responsible AI Institute (RAII), and the World Economic Forum (WEF), to identify common guideposts for assessing AI risk and impact for trustworthy AI. The goal is to promote global consistency and thereby help organizations implement effective, accountable, and trustworthy AI systems.

April 17, 2023

Policy Intelligence: Translating Policy and Standards to Code

Drawing on our experience and discussions with global policymakers and standard setters, Credo AI has developed deep “Policy Intelligence.” We integrate this expertise and the most up-to-date insights into our Responsible AI Governance Platform, combining a strong technical grasp of AI risks with extensive policy and regulatory knowledge.

This Policy Intelligence feeds into our Policy Packs: technical requirements, developed in collaboration with our research team, that translate high-level policy concepts into checklists of actionable steps to help ensure your AI systems are responsible, safe, and compliant.
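To make the idea concrete, here is a minimal sketch of how a policy pack might encode a regulatory requirement as a machine-checkable checklist. The pack name, control IDs, and fields below are illustrative assumptions for this sketch only, not Credo AI's actual schema or API.

```python
from dataclasses import dataclass, field


@dataclass
class Control:
    """One actionable step derived from a high-level policy concept."""
    id: str
    description: str
    satisfied: bool = False  # flipped to True once evidence is attached


@dataclass
class PolicyPack:
    """A checklist of controls translating a regulation into concrete steps."""
    name: str
    controls: list = field(default_factory=list)

    def outstanding(self):
        """Return the controls that still require evidence."""
        return [c for c in self.controls if not c.satisfied]


# Toy pack loosely inspired by EU AI Act transparency themes (hypothetical).
pack = PolicyPack(
    name="eu_ai_act_transparency_demo",
    controls=[
        Control("doc-01", "Publish a model card for the system", satisfied=True),
        Control("risk-01", "Complete an algorithmic impact assessment"),
    ],
)

print([c.id for c in pack.outstanding()])  # → ['risk-01']
```

In practice, a governance platform would attach evidence (reports, test results) to each control and roll the outstanding items up into a compliance dashboard; this sketch only shows the checklist shape itself.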

Credo AI is trusted by those who build trust

Recognized as 2022 World Economic Forum Technology Pioneer
Achieved SOC 2 Type II certification
Named Key Responsible AI Governance Platform by IDC
CEO appointed to National AI Advisory Committee (NAIAC)
Named one of the Next Big Things In Tech by Fast Company
Analyst Coverage
Named to CB Insights’ annual list of the 100 most promising private AI companies in the world
Recognized by Madrona Ventures, PitchBook, and Goldman Sachs as a 2022 Intelligent Applications Top 40 winner for our work in Responsible AI. Learn more about the award and see a full list of recipients here.

It doesn’t end there

We share the knowledge we gather through expert content in our Resource Center. To learn more about topics such as the EU AI Act, the NIST AI Risk Management Framework, and how to embark on your AI governance journey, please visit the Resource Center.

Adopt AI with confidence today

The Responsible AI Governance Platform enables AI, data, and business teams to track, prioritize, and control AI projects to ensure AI remains profitable, compliant, and safe.