The Credo AI Blog

Insights and stories from the people revolutionizing Responsible AI

Credo AI Product Update: Build Trust in Your AI with New Transparency Reports & Disclosures

Today, we’re excited to announce a major update to the Responsible AI Platform focused on Responsible AI transparency reports and disclosures. These new capabilities help companies standardize and streamline the assessment of their AI/ML systems for Responsible AI issues like fairness and bias, explainability, robustness, security, and privacy, and automatically produce the reports and disclosures needed to meet new organizational, regulatory, and legal requirements as well as customer demands for transparency.

Articles

Better Together: The difference between MLOps & AI Governance and why you need both to deliver on Responsible AI

At Credo AI, we believe that AI Governance is the missing, and often forgotten, link between MLOps and AI’s ability to deliver on business objectives. In this blog post, we’ll define MLOps and AI Governance, explain how they differ, and show why both are needed for the successful realization of AI/ML projects. Let’s take a closer look at MLOps and AI Governance with respect to scope of work, stakeholder involvement, and development lifecycle.

Fast Company Names Credo AI One of the Next Big Things In Tech

Today, I am thrilled to announce that Credo AI has been named by Fast Company as one of the 2022 Next Big Things in Tech – a prestigious award honoring the most innovative technologies that promise to shape industries, serve as catalysts for further innovation, and drive positive change in society within the next five years.

Operationalizing Responsible AI: How do you “do” AI Governance?

Now that we’ve established what AI governance is and why it’s so important, let’s talk strategy: how does one do AI governance, and what does an effective AI governance program look like? At the highest level, AI governance can be broken down into four components, four distinct steps that make up both a linear and an iterative process: 1) Alignment: identifying and articulating the goals of the AI system, 2) Assessment: evaluating the AI system against the aligned goals, 3) Translation: turning the outputs of assessment into meaningful insights, and 4) Mitigation: taking action to prevent failure. Let’s take a deeper look at what happens during each of these steps and how they come together to form a governance process designed to prevent catastrophic failure.

Cutting Through the Noise: What Is AI Governance and Why Should You Care?

There is a lack of consensus around what AI Governance actually entails. We’d like to cut through the noise and provide a definition of AI Governance rooted in Credo AI’s experience working with organizations across different industries and sectors, collaborating with policymakers and standard-setting bodies worldwide, and supporting various global 2000 customers to deliver responsible AI at scale.

2022 Global Responsible AI Summit: Key Highlights and Takeaways

On October 27th, Credo AI hosted the 2022 Global Responsible AI Summit, bringing together experts from AI, data ethics, civil society, academia, and government to discuss the opportunities, challenges, and actions required to make the responsible development and use of AI a reality. The Summit attracted more than 1,100 registrants across 6 continents, making it one of the leading Responsible AI gatherings of the year.

NYC Bias Audit Law: Clock ticking for Employers and HR Talent Technology Vendors

On January 1, 2023, New York City (NYC) Local Law 144, also known as the NYC bias audit law for automated employment decision tools, will go into effect. With only a few months left for organizations to become compliant, now is a good time to discuss the impact of this legislation and highlight areas for improvement as it matures.

Roundtable Recap: Realizing Responsible AI in Washington, DC

Last month, Credo AI, in partnership with our investor Sands Capital, kicked off our Realizing Responsible AI roundtable series in Washington, D.C., convening policymakers, industry, academia, and other key stakeholders in the Responsible AI space to discuss how to move from principles to practice.

The Need for Comprehensive Technology Governance at the Intersection of Tools & Rules

Effective technology governance requires tools that understand what the technology is doing. This is especially true in the case of Artificial Intelligence (AI), where tools that explain and interpret what the AI is doing become critical.

Credo AI Announces $12.8M Series A Funding Round for Responsible AI

I’m thrilled to announce that Credo AI has raised $12.8 million in Series A funding, led by Sands Capital with participation from our existing Series Seed investors Decibel VC and AI Fund.

Credo AI Named as Technology Pioneer 2022 by World Economic Forum

We are honored that the World Economic Forum has designated Credo AI as one of this year’s Technology Pioneers, the organization’s annual acknowledgement of start-up and growth-stage companies with the potential to significantly impact business and society through new technologies.

Credo AI Announces the World's First Responsible AI Governance Platform

Responsible AI is essential for ensuring that organizations build stakeholder trust in their use of AI. Today we are announcing the availability of the world’s first context-driven Responsible AI Governance Platform – one that meets an organization wherever it is in its AI governance journey.

Credo AI’s Founder and CEO Navrina Singh Appointed to the National Artificial Intelligence Advisory Committee (NAIAC)

Our CEO Navrina Singh shares her thoughts on being appointed to the National AI Advisory Committee, part of the U.S. Department of Commerce, which will advise the President and the National AI Initiative Office on a range of issues related to artificial intelligence (AI).

Operationalizing Responsible AI is an Essential Endeavor That Just Can’t Wait

As the growth and business-driving importance of artificial intelligence (AI) continues to surge through organizations in every industry, the need to operationalize Responsible AI is becoming ever more critical. Enterprises that rely on AI as a key element of their business are exposed to extreme risk through lackluster AI governance systems.

Future-Proofing Automated Employment Decision Tool Use to Comply with AI Regulations

Over the past decade, many companies have adopted some form of automation in the hiring process by using what are now called Automated Employment Decision Tools (AEDTs). The use of Artificial Intelligence (AI) algorithms in these AEDTs has amplified concerns about bias.

Our Predictions for Ethical AI in 2022

At Credo AI, we’re optimistic about the growth we’ve seen in the Ethical AI space over the last year. From emerging regulations to growing customer demand, here’s what we think will happen to continue this momentum in 2022.

Build Better Futures with Ethical AI

A Credo AI Manifesto - We are living through a technological revolution. The invention of agriculture broke humanity out of the long cycle of hunting and gathering...

Credo AI Comments on NIST’s Artificial Intelligence Risk Management Framework

Credo AI is pleased to submit the comments below in response to NIST’s Request for Information on the proposed Artificial Intelligence Risk Management Framework.