The Credo AI Blog

Insights and stories from the people revolutionizing Responsible AI

What is the EU AI Act? Frequently asked questions, answered.

For businesses operating within any of the twenty-seven countries that make up the European Union, understanding and complying with the EU AI Act is key to successfully developing and deploying AI in Europe: avoiding penalties while actively contributing to the responsible deployment of AI worldwide. This factsheet answers some of the most common questions about the EU AI Act, providing essential insights to help businesses prepare for compliance and navigate the evolving landscape of AI regulation.

Articles

Introducing GenAI Guardrails: Your Control Center for Safe & Responsible Adoption of Generative AI

Today, we’re announcing the general availability of Credo AI’s GenAI Guardrails, a powerful new set of governance capabilities as part of the Credo AI Responsible AI Platform, designed to help organizations understand and mitigate the risks of generative AI so that they can realize its full potential.

Mastering AI Risks: Building Trustworthy Systems with the NIST AI Risk Management Framework (RMF) 1.0

To support the rapid growth of Artificial Intelligence adoption, the National Institute of Standards and Technology (NIST) gathered extensive stakeholder feedback from both the public and private sectors before publishing the NIST AI Risk Management Framework 1.0 (AI RMF) on January 26, 2023. Two months later, on March 30, 2023, NIST released a companion AI RMF Playbook for voluntary use, which suggests ways to navigate the AI RMF and incorporate trustworthiness considerations into the design, development, deployment, and use of AI systems.

Introducing the New AI Registry: Your Control Center for AI Adoption

Today, we are thrilled to announce a significant update to our AI Governance Platform. Say hello to our new AI Registry—your one-stop shop for managing the ROI of AI for your enterprise.

How Businesses can Prepare for the EU AI Act: Including the Latest Discussions related to General Purpose AI

The European Parliament will vote to reach political agreement on the EU AI Act on April 26th, and it is highly likely that the Parliament's latest version of the text will include new provisions concerning General Purpose AI Systems (GPAIS), adding a framework of safeguards that places obligations on both GPAIS providers and downstream developers. These obligations will most likely include testing and technical documentation requirements: GPAIS providers will be expected to test against safety, quality, and performance standards, and both providers and downstream developers will be expected to describe the model comprehensively through technical documentation, so that the model is both safe and understandable. This documentation could be akin to the format known as "AI model cards," and may be expected to include information on performance, cybersecurity, risk, quality, and safety.

NYC Releases Final Rules for Automated Employment Decision Systems (Effective July 5, 2023)

Today, the New York City Department of Consumer and Worker Protection (DCWP) released its Notice of Adoption of the Final Rules for Local Law 144, requiring employers and employment agencies to provide a bias audit of automated employment decision tools (AEDTs). The enforcement date for these rules has been delayed to July 5, 2023 (previously April 15, 2023).

🔥 Unlocking AI's Full Potential: Playbooks for Organizational Change Management in Responsible AI

Leveraging the capabilities of Artificial Intelligence (AI), including Generative AI, has become a key focus for executive teams across industries worldwide. With tools such as ChatGPT 4 or DALL·E 2, AI is no longer a technology reserved for a select few in the future. It is here today, offering great benefits to both individuals and businesses. By streamlining operations, improving customer experiences, and boosting productivity, AI can give companies a remarkable competitive edge. However, as with any rapidly advancing technology, there are significant challenges to address: concerns such as AI bias, misuse, and misalignment have caused customer outrage and prompted companies to implement stricter guidelines for the development and use of AI. Moving fast with AI is critical. But breaking things can lead to more problems than benefits. At Credo AI, one of our missions is to empower organizations to harness the full potential of AI with intention, so they can reap the benefits without compromising speed. This means supporting organizations to move fast without breaking things. To that end, we have created the ultimate resource for effective, Responsible AI change management: our Responsible AI Playbooks (5 of many more to come! 😉). Whatever your stage of AI maturity, this comprehensive guide provides practical guidance for organizations and employees to adopt and implement Responsible AI with ease, quickly unlocking the benefits of AI without risking damage along the way. Are you ready to realize the full potential of AI? We are! Let's start!

Growing Pressure to Regulate AI: Proposed State Bills Call for Impact Assessments and Transparency

In recent weeks, there has been a significant increase in the number of AI-related state bills introduced across the United States. This reflects growing pressure to address AI and automated decision-making systems used in government and the private sector, and the potential risks they present. States have taken different approaches to fill the current gaps in regulation, including creating task forces and allocating funding for research. Additionally, a number of bills have proposed measures aimed at increasing transparency around AI systems, including requirements for algorithmic impact assessments and registries or inventories of the AI systems in use. These transparency measures are growing in popularity as a regulatory tool to ensure that AI systems are trustworthy and safe, affecting developers and deployers of AI products in both the private and public sectors.

IDC and Credo AI Survey: The Business Case for Responsible AI

Revenue, customer satisfaction, profitability, shareholder value: these are the success metrics executives in any business spend their days and nights trying to boost. A recent IDC survey of more than 500 executives globally, sponsored by Credo AI, revealed that respondents expect estimated increases of roughly 22-29% year over year across these metrics with responsible AI.

Tools for Transparency: What Makes a Trustworthy AI System? | MozFest 2023

In recent years, the development and use of artificial intelligence (AI) systems have skyrocketed, leading to an urgent need for accountability and transparency in the industry. To shed light on this topic, Ehrik Aldana, Tech Policy Product Manager at Credo AI, was invited to give a lightning talk at MozFest 2023.

Vendor Risk Assessment Portal: Streamline Third-Party AI Risk Management to Build Trust and Decrease Risk

In today's competitive business landscape, the adoption of Artificial Intelligence (AI) has become a critical component of organizational strategy, offering a unique selling proposition and a significant competitive edge. While many companies are still working to adopt AI across their business, others have already established comprehensive systems to leverage its benefits. Yet, in the pursuit of the most cutting-edge AI/ML technology available, companies often find themselves relying on AI offerings from third-party vendors, suppliers, and cloud service providers.

Credo AI expands its global footprint and grows its team in Europe

Since our founding, Credo AI has been a global company committed to ensuring technology is always in service to humanity. Artificial Intelligence touches nearly every aspect of our lives, and the benefits and harms of this technology now include existential implications for our society and economy. This tension between AI risks and rewards is playing out in real-time as cities, regions, nation-states, and supranational institutions around the world work towards shaping a future powered by AI. 

Credo AI’s Reflections on How AI Systems Behave, and Who Should Decide

How should AI systems behave, and who should decide? OpenAI posed these critical questions in a recent post outlining their future strategy. At Credo AI, our focus is AI Governance, a field concerned with these same questions! Given the importance of the increasingly general AI models, including “Generative AI” systems and “Foundation Models,” we believe it is important to communicate our thoughts on these weighty questions.

Local Law No. 144: NYC Employers & Vendors Prepare for AI Bias Audit with Credo AI’s Responsible AI Governance Platform

The clock is ticking! With New York City Local Law No. 144’s (LL-144) enforcement deadline fast approaching (April 15th, 2023), companies are scrambling to ensure they comply with the new AI regulation. While some organizations are still unsure how to start their journey, others—like AdeptID—have already taken the lead to demonstrate their commitment to Responsible AI practices. In this blog post, we will briefly describe Local Law No. 144, share how Credo AI is supporting HR Employers and Vendors, and showcase how we have supported AdeptID in their efforts to adhere to the legal requirements established by LL-144.

NYC AI Bias Audit: 7 Things You Need to Know About the Updated NYC Algorithmic Hiring Law

The clock is ticking, and we at Credo AI are committed to helping organizations understand what the recent updates and key aspects of the new regulation mean for them. In today’s blog post, we will outline seven things employers and employment agencies need to know about the requirements for NYC AI Bias Audits, also known as NYC Local Law No. 144.

Credo AI Lens™: the ultimate open-source framework for Responsible AI assessments

While ML capabilities have developed at a staggering pace, guidelines and processes to mitigate ML risks have lagged behind. That's where AI Governance comes in: defining the policies and processes needed to safeguard AI systems. While there are many components to successfully operationalizing the responsible development of AI systems, a chief need is assessing AI systems to evaluate whether they behave adequately for their intended purpose. This assessment challenge is central to Credo AI's efforts to develop AI Governance tools. In this blog, we introduce Credo AI Lens, our open-source assessment framework built to support the assessment needs of your AI Governance process.

CEO Message: A look at 2022 and a glimpse into 2023

It's been an impact-driven year, and while it is not possible to share all that has happened, I want to take a moment to highlight some industry-shaping product, policy, and ecosystem moments with you. In this year-end review, I am elated to spotlight some of Credo AI’s most significant achievements and reflect on our progress in addressing the challenges and opportunities of the AI industry.

5 AI Predictions for 2023: The Year of AI Governance to deliver on the promise of Responsible AI

In this blog post, we'll cover some of the significant ways Responsible AI will evolve over the next year, what 2023 is going to look like in bringing meaningful action to Responsible AI and how businesses can take advantage of them now to stay ahead of the curve. Here are our thoughts.

Designing Truly Human-Centered AI

As we enter the era where AI has the potential to impact almost every aspect of our lives, there is a growing need to ensure that AI systems are designed with human values and experiences at their core. This is a high level introduction to Human-Centric AI (HC-AI), a Responsible AI methodology.

AI Governance in the time of Generative AI

Generative AI systems are the next frontier of technological systems. Putting aside what they presage in terms of future AI advancements, generative AI systems are already some of the most versatile, accessible tools humanity has ever created. The excitement around this space is palpable - you see it in trending social media posts of Dall·E images, new research and product innovation, and growing investment in generative AI companies. But if you are like most, this excitement is tempered by a feeling of anxiety.

Better Together: The difference between MLOps & AI Governance and why you need both to deliver on Responsible AI

At Credo AI, we believe that AI Governance is the missing—and often forgotten—link between MLOps and AI’s success to meet business objectives. In this blog post, we’ll start by defining MLOps and AI Governance, how they differ, and why both are needed for the successful realization of AI/ML projects. Let’s take a closer look at MLOps and AI Governance with respect to scope of work, stakeholder involvement, and development lifecycle.

Fast Company Names Credo AI One of the Next Big Things In Tech

Today, I am thrilled to announce that Credo AI has been named by Fast Company as one of the 2022 Next Big Things in Tech – a prestigious award honoring the most innovative technologies that promise to shape industries, serve as catalysts for further innovation, and drive positive change to society within the next five years.

Operationalizing Responsible AI: How do you “do” AI Governance?

Now that we’ve established what AI governance is and why it’s so important, let’s talk strategy; how does one do AI governance, and what does an effective AI governance program look like? At the highest level, AI governance can be broken down into four components—four distinct steps that make up both a linear and iterative process: 1) Alignment: identifying and articulating the goals of the AI system, 2) Assessment: evaluating the AI system against the aligned goals, 3) Translation: turning the outputs of assessment into meaningful insights, and 4) Mitigation: taking action to prevent failure. Let’s take a deeper look at what happens during each of these steps and how they come together to form a governance process designed to prevent catastrophic failure.

Cutting Through the Noise: What Is AI Governance and Why Should You Care?

There is a lack of consensus around what AI Governance actually entails. We'd like to cut through the noise and provide a definition of AI Governance rooted in Credo AI's experience working with organizations across industries and sectors, collaborating with policymakers and standard-setting bodies worldwide, and supporting Global 2000 customers in delivering responsible AI at scale.

2022 Global Responsible AI Summit: Key Highlights and Takeaways

On October 27th, Credo AI hosted the 2022 Global Responsible AI Summit, bringing together experts from AI, data ethics, civil society, academia, and government to discuss the opportunities, challenges, and actions required to make the responsible development and use of AI a reality. The Summit attracted more than 1,100 registrants across 6 continents, making it one of the leading Responsible AI gatherings of the year.

Credo AI Product Update: Build Trust in Your AI with New Transparency Reports & Disclosures

Today, we’re excited to announce the release of a major update to the Responsible AI Platform focused on Responsible AI transparency reports and disclosures. These new capabilities are designed to help companies standardize and streamline the assessment of their AI/ML systems for Responsible AI issues like fairness and bias, explainability, robustness, security, and privacy, and automatically produce relevant reports and disclosures to meet new organizational, regulatory and legal requirements and customer demands for transparency.

NYC Bias Audit Law: Clock ticking for Employers and HR Talent Technology Vendors

On January 1, 2023, the New York City (NYC) Local Law 144, aka NYC bias audit law for automated employment decision tools, will go into effect. With only a few months left for organizations to be compliant, it is a good time to discuss the impact of this legislation and highlight the areas for improvement as the legislation starts to mature.

Roundtable Recap: Realizing Responsible AI in Washington, DC

Last month, Credo AI in partnership with our investors, Sands Capital, kicked off our Realizing Responsible AI roundtable series in Washington, D.C. with policymakers, industry, academia and other key stakeholders in the Responsible AI space to discuss how to move from principles to practice.

The Need for Comprehensive Technology Governance at the Intersection of Tools & Rules

Effective technology governance requires tools that understand what the technology is doing. This is especially true in the case of Artificial Intelligence (AI), where tools which explain and interpret what the AI is doing become critical.

Credo AI Announces $12.8M Series A Funding Round for Responsible AI

I’m thrilled to announce that Credo AI has raised $12.8 million in Series A funding, led by Sands Capital with participation from our existing Series Seed investors Decibel VC and AI Fund.

Credo AI Named as Technology Pioneer 2022 by World Economic Forum

We are honored that the World Economic Forum has designated Credo AI as one of this year’s Technology Pioneers, the organization’s annual acknowledgement of start-up and growth-stage companies with the potential to significantly impact business and society through new technologies.

Credo AI Announces the World's First Responsible AI Governance Platform

Responsible AI is essential for ensuring that organizations build stakeholder trust in their use of AI. Today we are announcing the availability of the world’s first context-driven Responsible AI Governance Platform – one that meets an organization wherever it is in its AI governance journey.

Credo AI’s Founder and CEO Navrina Singh Appointed to the National Artificial Intelligence Advisory Committee (NAIAC)

Our CEO Navrina Singh's thoughts on being appointed to the National AI Advisory Committee, part of the U.S. Department of Commerce which will advise the President and the National AI Initiative Office on a range of issues related to artificial intelligence (AI).

Operationalizing Responsible AI is an Essential Endeavor That Just Can’t Wait

As the growth and business-driving importance of artificial intelligence (AI) continues to surge through organizations in every industry, the need to operationalize Responsible AI is becoming ever more critical. Enterprises that rely on AI as a key element of their business are exposed to extreme risk through lackluster AI governance systems.

Future-Proofing Automated Employment Decision Tool Use to Comply with AI Regulations

Over the past decade, many companies have adopted some form of automation for the hiring process by using what are now called Automated Employment Decision Tools (AEDT). The use of Artificial Intelligence (AI) algorithms in these AEDT has amplified our concerns about bias.

Our Predictions for Ethical AI in 2022

At Credo AI, we’re optimistic about the growth we’ve seen in the Ethical AI space in the last year — from emerging regulations to growing customer demand, here’s what we think will happen to continue this momentum in 2022

Build Better Futures with Ethical AI

A Credo AI Manifesto - We are living through a technological revolution. The invention of agriculture broke humanity out of the long cycle of hunting and gathering...

Credo AI Comments on NIST’s Artificial Intelligence Risk Management Framework

Credo AI is pleased to submit the comments below in response to NIST’s Request for Information on the proposed Artificial Intelligence Risk Management Framework.