White Paper

IDC and Credo AI Survey: The Business Case for Responsible AI

Naysa Mishler
Head of Marketing
March 23, 2023
Contributor(s):
Catharina Doria

Revenue, customer satisfaction, profitability, shareholder value: these are the success metrics executives in any business spend their days and nights trying to boost. A recent IDC survey of more than 500 executives worldwide, sponsored by Credo AI, found that respondents expect responsible AI to drive estimated increases of roughly 22-29% year over year across these metrics.

"Organizations around the world are both excited by the capabilities of AI, especially Generative AI over the past few months, and also recognize the importance and benefits of responsible AI adoption," said Navrina Singh, Founder and CEO of Credo AI. "However, there are still significant challenges to overcome, particularly around building confidence in AI and ensuring compliance with regulations. This survey is designed to help organizations identify these challenges and provide actionable insights for implementing responsible AI practices."

The Big Picture

The possibilities and economic benefits of AI are significant. Global companies are set to spend $151 billion on Artificial Intelligence (AI) solutions in 2023 (IDC, 2023). The technology is expected to contribute an additional $15.7 trillion to global GDP by 2030 and continues to make waves with advancements like ChatGPT and GitHub Copilot. However, as AI adoption rises, so does a range of potential risks that must be addressed to ensure a positive impact. How can those risks be mitigated while adoption is encouraged?

AI Governance to Accelerate AI Innovation

For organizations to ensure their AI is designed, developed, and deployed responsibly, and to secure business growth, manage risk, and foster trust, adopting AI governance practices is imperative. This assertion is supported by research conducted by IDC and Credo AI involving more than 500 B2B enterprise companies worldwide. The study aimed to provide a comprehensive understanding of the current state of responsible AI adoption and to offer practical ways for organizations to ensure ethical use and development.

Let’s dig into the findings and solutions a little further. 

Insight #1: AI is expected to drive significant business improvement across a wide range of areas.

The adoption of responsible AI by organizations is expected to have a tangible impact on business metrics. According to the survey, businesses that adopt an "AI-first, ethics-forward" approach to their AI investments expect to see a 22-29% year-over-year improvement in various business metrics, such as increased revenue, heightened customer satisfaction, sustainable operations, higher profits, and reduced business risks.

But what makes an organization "AI-first, ethics-forward"? 

In simple terms, these organizations prioritize responsible AI and place a strong emphasis on ethical considerations, trust, and compliance when implementing or employing AI technology. They understand AI’s potential negative impact on people's lives, the risks it poses to the business (such as damage to brand reputation and reduced public trust), and the need to mitigate those risks by operationalizing AI governance.

Insight #2: Two-thirds of enterprise executives have reservations or low confidence in building and using AI ethically, responsibly, and compliantly.

Despite the clear benefits of responsible AI and AI governance, many companies have yet to fully embrace it. Confidence in using AI ethically, responsibly, and compliantly is currently mixed: according to IDC and Credo AI’s research, only 39% of respondents report a very high level of confidence, 33.1% have some confidence with reservations, and 27.4% have very low confidence.

Given the increasing importance of regulation and the potential business benefits, organizations with low or moderate confidence in using AI ethically and responsibly in their operations and decision-making will likely shift toward very high confidence in the coming years. But how can organizations start that journey, and who can help them move from low to high confidence?

Insight #3: Globally, CIOs are the primary owners of an organization’s responsible AI strategy to effectively drive business impact.

Chief Information Officers (CIOs) are responsible for managing, evaluating, and assessing how well the organization uses its IT resources. As such, they also have ownership when it comes to overseeing the implementation of AI systems in a way that minimizes unethical inputs or outcomes.

Interestingly, the survey revealed that while the top three roles responsible for AI/Machine Learning platform selection are IT Architect/IT Operations, Chief AI Officer/Head of AI, and CTO, the top three roles responsible for Responsible AI platform selection are Chief AI Officer/Head of AI, IT Architect/IT Operations, and CIO. This overlap highlights the importance of alignment and integration between the two functions, on technical aspects as well as governance.

Insight #4: Organizations without a responsible AI strategy worry about costs to business.

Failing to understand the interconnected nature of responsible AI and AI/Machine Learning, and their shared need for coherent governance, can lead to significant costs for organizations, such as data breaches, privacy whistleblower incidents, and regulatory issues.

In the survey, data privacy loss (31.4%), hidden costs (29.8%), and poor customer experience (27.6%) were highlighted as the top concerns for organizations without responsible AI.

When organizations fail to put appropriate governance in place to ensure their AI/ML systems produce fair outcomes, they risk creating negative customer experiences that reduce satisfaction and loyalty. Such failures can also trigger a regulatory backlash that damages a brand's reputation.

Prioritizing responsible AI can help businesses ensure that their AI/ML systems produce fair outcomes, protect privacy, and comply with regulations. As a result, businesses can improve customer experience, increase trust in the brand, and build a positive reputation as a responsible organization.

Insight #5: Organizations view the EU AI Act as setting the tone for global AI regulation.

From the NIST AI Risk Management Framework to New York City's Local Law 144 on automated employment decision tools and Singapore's Model AI Governance Framework, the rapid development of AI has prompted a wave of new laws, regulations, and frameworks around the world designed to mitigate risk and ensure responsible AI use and deployment.

Notably, respondents identified the EU AI Act as the first in line and the most critical AI regulation for organizations to comply with, and its provisions and requirements are widely recognized as the global benchmark for Responsible AI implementation.

With increased regulation expected in the next two years, spending on MLOps and RAI tools/software is projected to grow by 6-8% globally.

Insight #6: Software tools will play a key role in simplifying the management of AI governance.

Responsible AI encompasses more than just setting up governance structures. It also involves translating ethical and legal frameworks into statistical concepts that can be represented in software. Hence, understanding the relationship between Machine Learning Operations (MLOps) and AI governance is critical. 
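
To make this translation concrete, here is a minimal, hypothetical sketch (not Credo AI's tooling) of how an ethical requirement such as "model outcomes should not differ materially across demographic groups" can be expressed as a statistical check in software. The data, function name, and threshold are illustrative assumptions.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical model predictions for applicants in two groups, "A" and "B".
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A check like this turns a policy statement into a number that can be monitored, reported, and compared against a threshold, which is exactly the kind of artifact an AI governance process consumes.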

MLOps and AI governance together provide insight into the behavior of AI systems, allowing stakeholders to make informed decisions about how the systems are built, deployed, and maintained. Without MLOps, it is difficult to automate or streamline AI governance, leaving technical insights into model and system behavior untrustworthy and unauditable. Similarly, without AI governance, MLOps is disconnected from the most impactful risks to a business, such as legal, financial, and brand risks. Without a holistic approach, technical teams cannot fully detect and address technology-driven AI risks, hindering their ability to build better products.
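
To illustrate how the two connect in practice, the sketch below shows a hypothetical "governance gate" that an MLOps pipeline could call before deploying a model. The policy thresholds, metric names, and gate function are assumptions for illustration, not any vendor's actual API.

```python
# Hypothetical policy: maximum allowed fairness gap and minimum accuracy.
POLICY = {
    "max_demographic_parity_gap": 0.10,
    "min_accuracy": 0.85,
}

def governance_gate(metrics: dict) -> list:
    """Return a list of policy violations; an empty list means the model may ship."""
    violations = []
    if metrics["demographic_parity_gap"] > POLICY["max_demographic_parity_gap"]:
        violations.append("fairness gap exceeds policy threshold")
    if metrics["accuracy"] < POLICY["min_accuracy"]:
        violations.append("accuracy below policy minimum")
    return violations

# Metrics produced by the MLOps evaluation step (hypothetical values).
metrics = {"demographic_parity_gap": 0.50, "accuracy": 0.91}
problems = governance_gate(metrics)
if problems:
    raise SystemExit(f"Deployment blocked: {problems}")
```

The point of the pattern is that governance criteria live in one declarative place (the policy), while MLOps supplies the measured behavior; neither is sufficient for risk management without the other.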

Scaling AI responsibly is a difficult task that requires input from multiple stakeholders throughout an organization and its ecosystem. To meet this challenge, many organizations are looking for Responsible AI Governance platforms that function as an additional layer on top of MLOps. These platforms provide the necessary capabilities to achieve superior business benefits and responsibly handle the complexities of scaling AI. Key features of these platforms include multiple deployment options, support for industry and region-specific regulations, and collaboration among multiple stakeholders.

To learn more about MLOps vs. AI Governance, visit the blog post Better Together: The difference between MLOps & AI Governance and why you need both to deliver on Responsible AI.

Conclusion: Responsible AI Drives Innovation & Business Growth

It’s clear that companies that invest in AI governance today will reap the benefits tomorrow. After all, integrating AI generates significant innovation, user benefits, and business value, ranging from efficiency and productivity gains to new capabilities and business model expansion. Doing so responsibly mitigates the risk of unwanted consequences.

As you deploy AI within your organization, ensure you have the right guardrails in place to accelerate innovation — and benefit from a clear competitive edge. 

Get Started with Credo AI Today

Start your journey today to unleash the power of responsible AI across your organization with Credo AI.

  • Download the full Credo AI-sponsored IDC survey here.
  • View one of our webinars on a range of AI topics here.
  • Request a demo here.
