Public trust in AI is falling at the exact moment AI adoption is exploding.
This paradox is at the center of our recent webinar, Accelerating AI Transparency for the Built, Made, and Imagined World, with Ousama Lakhdar-Ghazal, the Director of Trusted AI & Data Privacy at Autodesk.
For Autodesk, across the architecture, construction, manufacturing, and media industries, customers are asking the same question: can I trust the AI you’re building into the tools I rely on every day?
Here are the top five insights from that conversation.
1. The AI Trust Gap is widening: adoption is high, but agentic AI is barely scaling
Erihk Aldana, Director of Product Management at Credo AI, opened with an AI “pulse check” using industry surveys to show a stark pattern:
- Overall AI and generative AI adoption are high and still climbing, with 88% of companies reporting AI use in core business units (McKinsey)
- Agentic AI usage (AI agents that can take actions on your behalf) is still very low and early-stage, with roughly 10% of organizations experimenting (McKinsey)
- Meanwhile, trust in AI is declining in advanced economies, where usage is highest. (KPMG)
In other words: the business value of AI is obvious enough to drive adoption, but assurance hasn’t kept pace, so confidence in full-scale implementation is still lacking.
This is the AI Trust Gap:
AI is everywhere — whether through shadow or sanctioned adoption — but trust is lagging, leaving enterprises unable to scale sophisticated AI use cases like agentic AI.
2. AI trust starts internally: if you can’t see your AI, you can’t sell your AI
One of Ousama’s core messages was simple and a little uncomfortable: You can’t be transparent with customers if you don’t even know what’s happening inside your own house.
AI trust starts as an internal discipline before it ever becomes a public promise.
An enterprise-wide inventory of AI is a critical first step, because unsanctioned use spreads quickly and models and model versions change overnight in common tools. Enterprises need to ask:
- Who is using AI?
- What models, APIs, and third-party tools are in play?
- Where is “shadow AI” creeping in?
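The inventory questions above could be captured as a simple structured record. This is a hypothetical sketch: the `AIInventoryEntry` type and its field names are illustrative, not an actual Autodesk or Credo AI schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """One row in an enterprise-wide AI inventory (illustrative schema)."""
    feature_name: str       # the AI-powered capability being tracked
    owner_team: str         # who is using or shipping it
    model: str              # which model powers it
    model_version: str      # versions can change overnight in common tools
    third_party_apis: list = field(default_factory=list)
    sanctioned: bool = True  # False flags potential "shadow AI"

# Example: an unsanctioned tool surfaced during an inventory sweep
inventory = [
    AIInventoryEntry(
        feature_name="meeting summarizer",
        owner_team="sales ops",
        model="gpt-4o",
        model_version="2024-08-06",
        third_party_apis=["openai"],
        sanctioned=False,
    ),
]

# Filtering the inventory surfaces shadow AI needing review
shadow_ai = [e for e in inventory if not e.sanctioned]
```

A record like this makes the three questions answerable at a glance: who (owner team), what (model, APIs, tools), and where shadow AI is creeping in (the unsanctioned entries).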
3. AI transparency and trust are a distinct competitive advantage
One of the strongest signals from the webinar was Autodesk’s finding that 70% of customers felt their AI questions were answered after seeing the company’s AI transparency cards.
That isn’t just a communications win. It’s commercial proof that clear explanations of how AI works increase confidence in the product itself. Autodesk’s cards, inspired by nutrition labels, distill what an AI feature does, what data it uses, and how it’s secured into something anyone can quickly understand.
Because the cards are publicly available, sales and customer teams can drop them directly into procurement and security conversations, while users no longer have to dig through a dense whitepaper to figure out whether their data is being used or whether a feature can be turned off.
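To make the "nutrition label" idea concrete, here is a minimal, hypothetical sketch of a transparency card as structured data. The field names and the example feature are illustrative assumptions, not Autodesk's actual card format.

```python
# Hypothetical transparency card: what the feature does, what data it
# uses, how it's secured, and whether users can opt out.
transparency_card = {
    "feature": "Generative sketch-to-model",
    "what_it_does": "Converts 2D sketches into editable 3D geometry.",
    "data_used": ["customer sketches (processed, not retained)"],
    "security": "Data encrypted in transit and at rest.",
    "can_opt_out": True,
}

def render_card(card: dict) -> str:
    """Render a card as a short plain-text summary a buyer can scan."""
    lines = [
        f"Feature: {card['feature']}",
        f"What it does: {card['what_it_does']}",
        f"Data used: {', '.join(card['data_used'])}",
        f"Security: {card['security']}",
        f"Opt-out available: {'yes' if card['can_opt_out'] else 'no'}",
    ]
    return "\n".join(lines)
```

Keeping the card as data rather than prose is what lets sales and customer teams reuse the same facts across procurement questionnaires, security reviews, and public pages without rewriting them each time.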
And the relevance extends beyond Autodesk.
Every enterprise is now, in practice, an AI software company. Whether you’re building AI into your products, embedding AI into workflows, or reselling AI capabilities, customers will expect the same level of clarity. Just as security pages, SOC 2 reports, and privacy policies became standard, AI transparency will become a universal requirement. Those who adopt it early will simply look like safer, more serious partners, and they’ll close deals faster because they eliminate AI risk anxiety at the buying table.
4. Transparency has to be layered and audience-aware, not a data dump
A major theme from Ousama’s talk was that more information is not the same as more transparency.
Overloading people with jargon or technical detail doesn’t create clarity; it creates fatigue. Autodesk’s solution is a layered transparency strategy that gives each audience what they need without overwhelming them. The transparency card itself offers a high-level, approachable view of what an AI feature is and how it behaves.
The point is simple: the “right” transparency depends on who’s asking. A regulator, a construction safety officer, a financial auditor, and a creative director in visual effects all care about different risks. By designing transparency as a system, and not a single PDF, Autodesk meets each of them at the level they need.
5. Building AI trust is a journey—but there is a roadmap
Ousama summed it up perfectly:
“Transparency shouldn’t be a destination. It needs to be part of the journey.”
Autodesk doesn’t treat its transparency cards as a finished product. After benchmarking industry practices, releasing a v1, and surveying customers, they learned where explanations weren’t clear and where some users wanted more depth. That feedback is now shaping a v2 with richer detail for advanced users. This iterative approach matters because AI itself evolves constantly. Models change, regulations shift, customer expectations rise, and transparency has to evolve with them.
Beneath that lies a strategic decision: you can wait for regulators to dictate what to disclose, or you can lead.
Autodesk is not aiming for bare-minimum compliance—they’re aiming to be trusted. In a crowded market where AI is everywhere and skepticism is growing, companies that invest early in meaningful transparency will be the ones invited into long-term transformation work, while others fade into the noise.
You can watch the full webinar recording here, or get started on your AI trust journey with Credo AI.
DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.