An open-source Responsible AI assessment framework that lets data scientists rapidly assess models and generate results for governance review within the Credo AI app.
Lens can export a full report as a notebook or HTML file. The notebook contains plots and Markdown cells describing the results of each assessment run. These reports can then easily be pushed or uploaded to the Credo AI Governance App for AI Use Case and stakeholder review.
Lens can generate a host of plots, all optimized for usability by multi-stakeholder AI Use Case and Governance teams.
When using Lens with the full Credo AI Governance App, data scientists can easily pull requirements from the platform and push results back for review; the platform automatically scores technical assessment results for risk and compliance.
Use Lens as a standalone Python library or in a notebook. A diverse set of demonstration notebooks is included with Lens to help you get started and become familiar with its many capabilities.
Along with the Lens Quickstart, we have several useful demos:
Add the Lens Python module to your existing pipeline to run an assessment and validate a build.