A Q&A Feature with Lisa Felten, Sr. Software Engineer at Credo AI
TL;DR
- Why Credoer Lisa joined after 15+ years at large companies like Box and HPE
- What is changing in the era of builders and AI architects
- What enterprises are underestimating with respect to AI governance
- Advice for engineers who want an AI career
Jerome J. Sanders: You’ve built for 15+ years at companies like Box and HPE—systems at scale, real customers, real stakes. What convinced you that Credo AI was the place where your engineering leadership would be highest, and what problem here felt too important to ignore?
Lisa Felten: I make many personal and professional decisions based on people and collaborative environments. I wanted to work with Benji Zamora, Head of Engineering at Credo AI, again. He has been instrumental in my career growth as an engineer, and also as a person. Benji brings calm to even the most chaotic and stressful situations, empowers his people to do their best, focused work, and has an uncanny ability to cut through complicated problems to the heart of the matter. I jumped at the opportunity to work with him again.
Another reason I came to Credo AI was to familiarize myself with the industry: I wanted to learn about AI from a governance perspective, given how quickly it has become ubiquitous across many use cases. AI is everywhere, and yet the understanding of how to use it safely and observably has yet to match its accelerated growth.
Finally, I knew I could provide useful learnings to this team from my prior body of experiences working at larger enterprise organizations.
Jerome J. Sanders: We’re entering an era of the builders and AI architects. Agentic systems, AI observability, and ‘shadow AI’ showing up everywhere. From your perspective, what’s fundamentally different about building software at this moment, and what are you most excited to build (or redefine) because of it?
Lisa Felten: The most fundamental shift I am observing is this: I started college at a time (20+ years ago) when the most lauded achievements for a software engineer were to be intrinsically fluent in multiple languages and to memorize syntax, object-oriented patterns, and engineering best practices. Now all of that is AI-powered at your fingertips, going even beyond the earlier improvement of everything being searchable online (Google, StackOverflow, Experts Exchange, etc.).
This shift means it’s less necessary to be a pedantic expert and more important to quickly and effectively leverage resources and existing bodies of knowledge to feed and guide AI, to use your architectural experience to build and solve problems at a higher level than writing the code line by line.
Yes, “anyone can build” now, but “anyone” cannot build the same level of robust and production-ready system that a trained engineer can. This is what I think is critical to understand. Engineers should not be concerned with losing their jobs to AI…unless they are not learning or deepening their knowledge, using the new tools to stay relevant. The shift to orchestration of containerized web applications was much the same, though not as radically fast.
When you have spent years building systems, learning from mistakes, you start to see patterns (and anti-patterns) emerge, and hopefully over time you’ll tend to make decisions that lead your team away from pitfalls and toward faster iteration on solid products. These insights and learnings are what you need to leverage and guide the use of these high-powered tools.
Jerome J. Sanders: When enterprises say they want ‘AI governance,’ what do they usually underestimate, from either a technical or operational standpoint? And what is Credo AI’s engineering team doing that turns AI governance from a checkbox into something that accelerates innovation and trusted AI in production?
Lisa Felten: We are building what the other critical teams of the company are ideating, from multiple angles. Credo AI has incredibly smart people with their fingers on various pulses across multiple industries, paying close attention to the most up-to-date news and legal/policy shifts around AI. We are picking up signals as technology changes by the minute, synthesizing these insights into useful features that enterprises really need. They are educating prospective customers to help them understand the emerging business landscape as it changes at lightspeed.
Additionally, we are also learning from our customers and informing the product teams as to what is most important to build first. We (engineering) take all this input from product and design, and quickly build the frameworks and information systems to support the product features, sometimes on what we’ve already built, and sometimes in zero-to-one environments.
As engineers at Credo AI, we are solving novel problems that require new ways of thinking and building, which extends to helping our customers close any gaps they may have in contextual and evolving knowledge around AI governance with deep expertise, analysis, and technical foundation to build in a way that scales in a trusted manner.
Jerome J. Sanders: Pick one Credo AI project you’re proudest of. What was the ambiguous engineering problem at the start, what did you ship, and what did it unlock for customers (or our engineering team) that wasn’t possible before?
Lisa Felten: Previously, we supported user comments in our Credo AI product, but only during a certain phase of a resource’s life cycle. At first, the proposition seemed quite straightforward: a simple shift to allow commenting at any time, with comments persisting outside the resource’s life cycle.
As I dug into the technical planning of the project proposal, I discovered that the way it was built was fundamentally incongruent with the proposed usage. It required a redesign of how we modeled and stored comments data so it could be used more effectively across the product.
This was a much larger endeavor than initially assumed, and I pushed to properly refactor the system rather than try to kludge in a faster, incomplete solution. While this felt like a bit of a miss (initial launch date pushed out by multiple weeks), when we did ship the new feature, there were no issues: a near-flawless rollout with no customer-reported regressions or newly-incurred bugs with the feature!
This experience goes to show that when you dig deep into a problem with due diligence, and make the choice to address systemic or fundamental problems before building on top of them just to meet a deadline, teams can save themselves a LOT of trouble and issues going forward. This mindset translates very well to other engineering initiatives and reaffirms my preferred approach: start slow in the beginning, ask the tough questions, and then accelerate into production.
The impact for customers was reliable, improved functionality delivered in a single, seamless release that migrated the old system to the new, with no negative effects on their workflows or data. It also underscored the importance of speeding up our release cycle from roughly monthly/quarterly to weekly, and it shifted our approach from big-bang rollouts of features to shipping frequent vertical slices of functionality behind feature flags, making visible progress incrementally.
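The feature-flag gating described here can be sketched minimally as follows. This is an illustrative sketch only; the flag names, the in-memory flag store, and the phase values are hypothetical, not Credo AI's actual implementation (real systems typically fetch flags from a flag service, often per customer).

```python
# Hypothetical flag store; a production system would query a flag service.
FLAGS = {"comments_v2": True}

def flag_enabled(name: str) -> bool:
    """Return whether a named feature flag is on (default: off)."""
    return FLAGS.get(name, False)

def comment_ui(resource_phase: str) -> str:
    """Decide whether to show commenting for a resource in a given phase.

    New behavior (flag on): comments allowed in any phase.
    Legacy behavior (flag off): comments allowed only in the 'review' phase.
    """
    if flag_enabled("comments_v2"):
        return "comments-enabled"
    return "comments-enabled" if resource_phase == "review" else "comments-hidden"
```

The value of this pattern is that the new code path ships dark: each vertical slice lands behind the flag, and flipping the flag (or rolling it back) is a configuration change rather than a deploy.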
Jerome J. Sanders: For engineers aiming to break into the AI space, especially those coming from ‘traditional’ software, what skills or habits will set them apart over the next 3-6 months? If you were mentoring someone to become a standout engineer at Credo AI, what would you have them build, learn, and practice?
Lisa Felten: To thrive as an engineer in this evolving AI-powered landscape, whether at Credo AI or elsewhere, your industry experience is an invaluable asset to bring to the table. Have you worked at companies of different sizes? Have you worked with different languages, frameworks, and tech stacks? Have you been involved in larger migrations of legacy codebases or from monoliths to microservices?
People who have a perspective grounded in industry experience and solid engineering chops are highly desirable, especially in this time when “anyone can build”. If you’re just starting out, challenge yourself to lean into asking those harder questions about “why” and “how” things work, or don’t work. Don’t just accept an answer from AI (or anywhere) without doing some testing to prove it in a practical setting.
Don’t shy away from USING AI, but make sure you are a good driver and navigator, leaning on hard-won industry knowledge that exists on the web or in published formats.
To get over the hump of being skeptical about using AI:
- Start small: have it modify or create a single function or class/module.
- Learn how to provide context (for example, via MCP, the Model Context Protocol) and guidance to AI so that it needs fewer iterations to reach your desired quality of output. Your mental model becomes easier to maintain, and you don’t get as distracted by the details. Use AI to effectively tighten and speed up your development cycle.
As a human architect, you bring intuition and value judgments to help avoid anti-patterns, leveraging your learnings and the learnings of other engineers. AI can augment your brilliance with the right prompts and context, and thus, you remain invaluable as long as you remain an active participant in the loop.
Practical, enterprise-grade practices include:
- Respect Production environments
- Leverage A/B testing
- Use AI to quickly build POCs to test your solutions (in a pre-prod environment!)
- Communicate effectively and collaboratively with others (actively listen!)
- Put customer experience first in your mind when making decisions
- Test-driven development saves a lot of time
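The test-driven habit from the last point can be illustrated in miniature. The helper below and its behavior are hypothetical examples, not code from any real codebase; the point is that the tests are written first and pin down the behavior before (or while) the implementation takes shape.

```python
def slugify(title: str) -> str:
    """Convert a title into a URL-friendly slug (hypothetical example helper)."""
    # Lowercase, split on any run of whitespace, and rejoin with hyphens.
    return "-".join(title.lower().split())

# Tests written first: they state the contract the implementation must meet.
def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_collapses_extra_whitespace():
    assert slugify("  AI   Governance ") == "ai-governance"
```

Run with a test runner such as `pytest`; when an AI assistant generates or modifies the implementation, the pre-written tests are what tell you quickly whether the output actually holds up.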
If Lisa’s experience sounds like you, and you share our vision for leading trust in AI, we’d love to get you started on your AI governance journey.
DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.