The Credo AI Blog

Insights and stories from the people revolutionizing Responsible AI

Articles

Tools for Transparency: What Makes a Trustworthy AI System? | MozFest 2023

In recent years, the development and use of artificial intelligence (AI) systems have skyrocketed, leading to an urgent need for accountability and transparency in the industry. To shed light on this topic, Ehrik Aldana, Tech Policy Product Manager at Credo AI, was invited to give a lightning talk at MozFest 2023.

Vendor Risk Assessment Portal: Streamline Third-Party AI Risk Management to Build Trust and Decrease Risk

In today's competitive business landscape, the adoption of Artificial Intelligence (AI) has become a critical component of an organization's strategy, offering a unique selling proposition and a significant competitive edge. While many companies are still working to adopt AI across their business, others have already established comprehensive systems to leverage its benefits. Yet in the pursuit of the most cutting-edge AI/ML solutions available, companies often find themselves relying on AI offerings from third-party vendors, suppliers, and cloud service providers.

Credo AI expands its global footprint and grows its team in Europe

Since our founding, Credo AI has been a global company committed to ensuring technology is always in service to humanity. Artificial Intelligence touches nearly every aspect of our lives, and the benefits and harms of this technology now include existential implications for our society and economy. This tension between AI risks and rewards is playing out in real time as cities, regions, nation-states, and supranational institutions around the world work towards shaping a future powered by AI.

Credo AI’s Reflections on How AI Systems Behave, and Who Should Decide

How should AI systems behave, and who should decide? OpenAI posed these critical questions in a recent post outlining their future strategy. At Credo AI, our focus is AI Governance, a field concerned with these same questions! Given the significance of increasingly general AI models, including “Generative AI” systems and “Foundation Models,” we believe it is important to communicate our thoughts on these weighty questions.

Local Law No. 144: NYC Employers & Vendors Prepare for AI Bias Audit with Credo AI’s Responsible AI Governance Platform

The clock is ticking! With New York City Local Law No. 144’s (LL-144) enforcement deadline fast approaching (April 15th, 2023), companies are scrambling to ensure they comply with the new AI regulation. While some organizations are still unsure how to start their journey, others—like AdeptID—have already taken the lead to demonstrate their commitment to Responsible AI practices. In this blog post, we will briefly describe Local Law No. 144, share how Credo AI is supporting HR Employers and Vendors, and showcase how we have supported AdeptID in their efforts to adhere to the legal requirements established by LL-144.

NYC AI Bias Audit: 7 Things You Need to Know About the Updated NYC Algorithmic Hiring Law

The clock is ticking, and we at Credo AI are committed to helping organizations understand what the recent updates and key aspects of the new regulation mean for them. In today’s blog post, we will outline seven things employers and employment agencies need to know about the requirements for NYC AI Bias Audits, also known as NYC Local Law No. 144.

Credo AI Lens™: the ultimate open-source framework for Responsible AI assessments

While ML capabilities have developed at a staggering pace, guidelines and processes to mitigate ML risks have lagged behind. That’s where AI Governance comes in: it defines the policies and processes needed to safeguard AI systems. While successfully operationalizing the responsible development of AI systems involves many components, a chief need is assessing AI systems to evaluate whether they behave adequately for their intended purpose. This assessment challenge is central to Credo AI’s efforts to develop AI Governance tools. In this blog, we introduce Credo AI Lens, our open-source assessment framework built to support the assessment needs of your AI Governance process.
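For a concrete sense of what a Lens assessment involves, here is a minimal sketch following the general quickstart pattern of the open-source Lens library: wrap a trained model and evaluation data in Lens artifacts, add evaluators, and run the pipeline. The class names, parameters, and metric strings shown are assumptions based on that pattern and may differ between Lens versions, and the dataset and model are toy placeholders, so treat this as an illustration and consult the Lens documentation for the current API.

```python
# Minimal sketch of a Credo AI Lens assessment. Class and metric names follow
# the Lens quickstart as we understand it and may vary between versions; the
# dataset and model below are toy placeholders for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from credoai.lens import Lens
from credoai.artifacts import ClassificationModel, TabularData
from credoai.evaluators import ModelFairness, Performance

# Toy hiring-style dataset with a binary outcome and one sensitive attribute.
df = pd.DataFrame({
    "years_experience": [1, 3, 5, 2, 7, 4, 6, 8],
    "assessment_score": [55, 70, 82, 60, 90, 75, 85, 95],
    "gender":           ["f", "m", "f", "m", "f", "m", "f", "m"],
    "hired":            [0, 0, 1, 0, 1, 1, 1, 1],
})
X = df[["years_experience", "assessment_score"]]
y = df["hired"]
sensitive = df["gender"]

# Train any scikit-learn style classifier; Lens only needs a fitted model.
sk_model = LogisticRegression().fit(X, y)

# Wrap the model and data in Lens artifacts so evaluators know how to use them.
credo_model = ClassificationModel(name="toy_hiring_classifier", model_like=sk_model)
credo_data = TabularData(name="toy_hiring_data", X=X, y=y, sensitive_features=sensitive)

# Assemble and run the assessment pipeline with fairness and performance evaluators.
lens = Lens(model=credo_model, assessment_data=credo_data)
lens.add(ModelFairness(metrics=["precision_score", "false_negative_rate"]))
lens.add(Performance(metrics=["accuracy_score"]))
lens.run()

results = lens.get_results()  # structured evidence you can report on or export
```

The structured results can then feed the assessment and translation steps of a broader AI Governance process, such as the transparency reports and disclosures described elsewhere on this blog.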

CEO Message: A look at 2022 and a glimpse into 2023

It's been an impact-driven year, and while it is not possible to share all that has happened, I want to take a moment to highlight some industry-shaping product, policy, and ecosystem moments with you. In this year-end review, I am elated to spotlight some of Credo AI’s most significant achievements and reflect on our progress in addressing the challenges and opportunities of the AI industry.

5 AI Predictions for 2023: The Year of AI Governance to deliver on the promise of Responsible AI

In this blog post, we'll cover some of the significant ways Responsible AI will evolve over the next year, what 2023 is going to look like in bringing meaningful action to Responsible AI, and how businesses can take advantage of these shifts now to stay ahead of the curve. Here are our thoughts.

Designing Truly Human-Centered AI

As we enter the era where AI has the potential to impact almost every aspect of our lives, there is a growing need to ensure that AI systems are designed with human values and experiences at their core. This is a high-level introduction to Human-Centered AI (HC-AI), a Responsible AI methodology.

AI Governance in the time of Generative AI

Generative AI systems are the next frontier of technological systems. Putting aside what they presage in terms of future AI advancements, generative AI systems are already some of the most versatile, accessible tools humanity has ever created. The excitement around this space is palpable: you see it in trending social media posts of DALL·E images, new research and product innovation, and growing investment in generative AI companies. But if you are like most, this excitement is tempered by a feeling of anxiety.

Better Together: The difference between MLOps & AI Governance and why you need both to deliver on Responsible AI

At Credo AI, we believe that AI Governance is the missing—and often forgotten—link between MLOps and AI’s ability to meet business objectives. In this blog post, we’ll start by defining MLOps and AI Governance, explain how they differ, and discuss why both are needed for the successful realization of AI/ML projects. Let’s take a closer look at MLOps and AI Governance with respect to scope of work, stakeholder involvement, and development lifecycle.

Fast Company Names Credo AI One of the Next Big Things In Tech

Today, I am thrilled to announce that Credo AI has been named by Fast Company as one of the 2022 Next Big Things in Tech – a prestigious award honoring the most innovative technologies that promise to shape industries, serve as catalysts for further innovation, and drive positive change in society within the next five years.

Operationalizing Responsible AI: How do you “do” AI Governance?

Now that we’ve established what AI governance is and why it’s so important, let’s talk strategy; how does one do AI governance, and what does an effective AI governance program look like? At the highest level, AI governance can be broken down into four components—four distinct steps that make up both a linear and iterative process: 1) Alignment: identifying and articulating the goals of the AI system, 2) Assessment: evaluating the AI system against the aligned goals, 3) Translation: turning the outputs of assessment into meaningful insights, and 4) Mitigation: taking action to prevent failure. Let’s take a deeper look at what happens during each of these steps and how they come together to form a governance process designed to prevent catastrophic failure.

Cutting Through the Noise: What Is AI Governance and Why Should You Care?

There is a lack of consensus around what AI Governance actually entails. We’d like to cut through the noise and provide a definition of AI Governance rooted in Credo AI’s experience working with organizations across different industries and sectors, collaborating with policymakers and standard-setting bodies worldwide, and supporting various Global 2000 customers in delivering Responsible AI at scale.

2022 Global Responsible AI Summit: Key Highlights and Takeaways

On October 27th, Credo AI hosted the 2022 Global Responsible AI Summit, bringing together experts from AI, data ethics, civil society, academia, and government to discuss the opportunities, challenges, and actions required to make the responsible development and use of AI a reality. The Summit attracted more than 1,100 registrants across 6 continents, making it one of the leading Responsible AI gatherings of the year.

Credo AI Product Update: Build Trust in Your AI with New Transparency Reports & Disclosures

Today, we’re excited to announce the release of a major update to the Responsible AI Platform focused on Responsible AI transparency reports and disclosures. These new capabilities are designed to help companies standardize and streamline the assessment of their AI/ML systems for Responsible AI issues like fairness and bias, explainability, robustness, security, and privacy, and to automatically produce relevant reports and disclosures that meet new organizational, regulatory, and legal requirements as well as customer demands for transparency.

NYC Bias Audit Law: Clock ticking for Employers and HR Talent Technology Vendors

On January 1, 2023, New York City (NYC) Local Law 144, also known as the NYC bias audit law for automated employment decision tools, will go into effect. With only a few months left for organizations to become compliant, it is a good time to discuss the impact of this legislation and highlight areas for improvement as it matures.

Roundtable Recap: Realizing Responsible AI in Washington, DC

Last month, Credo AI, in partnership with our investor Sands Capital, kicked off our Realizing Responsible AI roundtable series in Washington, D.C. with policymakers, industry, academia, and other key stakeholders in the Responsible AI space to discuss how to move from principles to practice.

The Need for Comprehensive Technology Governance at the Intersection of Tools & Rules

Effective technology governance requires tools that understand what the technology is doing. This is especially true in the case of Artificial Intelligence (AI), where tools that explain and interpret what the AI is doing become critical.

Credo AI Announces $12.8M Series A Funding Round for Responsible AI

I’m thrilled to announce that Credo AI has raised $12.8 million in Series A funding, led by Sands Capital with participation from our existing Series Seed investors Decibel VC and AI Fund.

Credo AI Named as Technology Pioneer 2022 by World Economic Forum

We are honored that the World Economic Forum has designated Credo AI as one of this year’s Technology Pioneers, the organization’s annual acknowledgement of start-up and growth-stage companies with the potential to significantly impact business and society through new technologies.

Credo AI Announces the World's First Responsible AI Governance Platform

Responsible AI is essential for ensuring that organizations build stakeholder trust in their use of AI. Today we are announcing the availability of the world’s first context-driven Responsible AI Governance Platform – one that meets an organization wherever it is in its AI governance journey.

Credo AI’s Founder and CEO Navrina Singh Appointed to the National Artificial Intelligence Advisory Committee (NAIAC)

Our CEO Navrina Singh’s thoughts on being appointed to the National AI Advisory Committee (NAIAC), part of the U.S. Department of Commerce, which will advise the President and the National AI Initiative Office on a range of issues related to artificial intelligence (AI).

Operationalizing Responsible AI is an Essential Endeavor That Just Can’t Wait

As the growth and business-driving importance of artificial intelligence (AI) continues to surge through organizations in every industry, the need to operationalize Responsible AI is becoming ever more critical. Enterprises that rely on AI as a key element of their business are exposed to extreme risk through lackluster AI governance systems.

Future-Proofing Automated Employment Decision Tool Use to Comply with AI Regulations

Over the past decade, many companies have adopted some form of automation for the hiring process by using what are now called Automated Employment Decision Tools (AEDTs). The use of Artificial Intelligence (AI) algorithms in these tools has amplified our concerns about bias.

Our Predictions for Ethical AI in 2022

At Credo AI, we’re optimistic about the growth we’ve seen in the Ethical AI space in the last year — from emerging regulations to growing customer demand, here’s what we think will happen to continue this momentum in 2022.

Build Better Futures with Ethical AI

A Credo AI Manifesto: We are living through a technological revolution. The invention of agriculture broke humanity out of the long cycle of hunting and gathering...

Credo AI Comments on NIST’s Artificial Intelligence Risk Management Framework

Credo AI is pleased to submit the comments below in response to NIST’s Request for Information on the proposed Artificial Intelligence Risk Management Framework.