AI Governance in the time of Generative AI

Ian Eisenberg
Head of Data Science
December 8, 2022

Part 1: Painting the Landscape 

Produced on Lexica.Art, the “Stable Diffusion Search Engine”


Generative AI systems are the next frontier of technological systems. Putting aside what they presage in terms of future AI advancements, generative AI systems are already some of the most versatile, accessible tools humanity has ever created. The excitement around this space is palpable: you see it in trending social media posts of Dall·E images, in new research and product innovation, and in growing investment in generative AI companies. But if you are like most, this excitement is tempered by a feeling of anxiety. 

Many people recognize that this moment of upheaval comes with significant risks. If you are an artist, these risks may take a concrete form, as many commercial applications of your trade seem poised to be automated. But even if you are not so obviously connected to the current applications of Generative AI, a justifiable anxiety remains; there are significant moral, legal, and technological concerns to address. Generative AI systems are a new kind of dual-use technology, one where it is easy to imagine both an age of unbridled human creation and devastating tragedy. How will humanity direct these powerful systems to support our own flourishing? 

At Credo AI, we focus on AI Governance as a tool to realize the positive impacts of AI and, even more importantly, reduce the most egregious consequences of AI mishaps. We have described what we mean by AI Governance in another post, but here it is sufficient to say that “governance” is the practical application of our collective wisdom, built into institutions, regulations, processes, and technological controls, that directs how AI systems are developed and used. AI Governance is a young field, and younger still when applied to generative AI systems—but governance itself is a kind of technology, and we expect it to mature along with AI systems. Indeed, it must.

In this blog series we will dive into the rapidly changing world of Generative AI. This first blog sets the groundwork and describes what these systems are. The second part will cover the many risks presented by these systems. Finally, in a third blog we will describe some methods to mitigate those risks, primarily focusing on AI Governance as the key tool.

What is Generative AI? 

“Generative AI” has become a kind of buzzword applied to a rapidly evolving technology, so naturally, its specific definition is a bit fuzzy. A simple definition today is that Generative AI systems use pre-existing content like text, audio, video, or any other data source to create new content in response to a query. These systems are exemplified by language models like GPT-3 and image generators like Dall·E, Midjourney, and Stable Diffusion, and are founded on fairly general-purpose architectures (for now, transformers and diffusion models), which means that the specifics of the input/output modality are not critical limitations of their use. After language and images, video, audio, code, and even actions themselves may be the output of these systems. Such advances don’t require new leaps of imagination or groundbreaking innovation, only engineering time and industry will. 

For now, the main blockers for Generative AI are datasets and compute resources. These models seem to follow AI scaling laws: their performance and capabilities improve predictably as their datasets and parameter counts grow. The observation of these scaling laws is the underlying current pushing these changes forward, as they promise that with sufficient resource investment, a better system will be delivered. Essentially, AI is continuing to move from a principally scientific field to an engineering one. The upshot is that we can be confident Generative AI will grow in flexibility and capability, covering more modalities and abilities.
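The "predictable improvement" that scaling laws describe can be made concrete with a small sketch. The power-law form and the constants below follow the neural language model scaling-law literature (e.g., Kaplan et al.), but treat them as illustrative, not as a prediction for any particular model family:

```python
# Illustrative sketch of an AI scaling law: test loss falls as a power law
# in parameter count N, i.e. L(N) = (N_c / N) ** alpha. The constants are
# drawn from the language-model scaling-law literature for illustration only;
# real values differ by model family, data, and training setup.

def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Predicted loss for a model with n_params parameters."""
    return (n_c / n_params) ** alpha

# The law implies a bigger model reliably yields a lower predicted loss,
# which is why resource investment feels like a dependable lever:
loss_1b = predicted_loss(1e9)     # ~1-billion-parameter model
loss_100b = predicted_loss(1e11)  # ~100-billion-parameter model
assert loss_100b < loss_1b
```

The key point is not the specific constants but the shape: improvement is smooth and monotonic in scale, which turns "build a better model" into a resourcing decision rather than a research bet.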

While the above points to increasingly general systems, the final deployed system doesn’t have to be generic. While many systems today are general and can perform many tasks given the right prompt (indeed, that is the main selling point), more specialized generative systems can and will be created. Think of Generative AI models as a foundation on which more specific systems can be built. This infrastructure is constructed by summarizing and synthesizing all content ever created (to be only slightly hyperbolic!) and packaging it into a format that humans can understand and query. That infrastructure is impressive, but it’s only infrastructure! What will we build upon it?

One method of building on this infrastructure is “fine-tuning” general systems for more specific purposes without substantial architectural changes. This approach changes the model by adjusting its weights, making it purpose-built for a subset of tasks. For instance, fine-tuning general language systems has given rise to programming tools like Codex, and fine-tuning image generators like Stable Diffusion has produced generators specialized for particular individuals or styles. Another approach doesn’t need fine-tuning at all and relies on clever prompting of the model. Prompts are the inputs for these systems, typically taking the form of natural language. 
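The essence of fine-tuning is "start from pretrained weights, then take a few optimization steps on a small task-specific dataset." A toy linear model makes the mechanics visible; this is a deliberately simplified stand-in for a neural network, not how Codex or Stable Diffusion were actually trained:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretraining": fit a toy linear model on a large, broad dataset.
X_broad = rng.normal(size=(1000, 3))
true_broad = np.array([1.0, -2.0, 0.5])
y_broad = X_broad @ true_broad + rng.normal(scale=0.1, size=1000)
w = np.linalg.lstsq(X_broad, y_broad, rcond=None)[0]  # pretrained weights

# "Fine-tuning": a few gradient steps on a small task-specific dataset,
# starting from the pretrained weights rather than from scratch.
X_task = rng.normal(size=(50, 3))
true_task = np.array([1.5, -2.0, 0.5])  # the task shifts one coefficient
y_task = X_task @ true_task

lr = 0.01
for _ in range(500):
    grad = 2 * X_task.T @ (X_task @ w - y_task) / len(y_task)
    w -= lr * grad
# w has now moved from the broad solution toward the task-specific one.
```

The architecture never changes; only the weights move, and far less data is needed than pretraining required, which is exactly what makes specializing a foundation model cheap relative to building one.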

While the general public often hears of small prompts like “a teddy bear skateboarding at Times Square,” prompts can be very long and complicated. One kind of prompt gives the model multiple examples of what is expected before giving the user’s input. These examples might relate to translation, summarization, or many other tasks. After just a few of these examples, the model can perform the task well (so-called few-shot learning). The skill of creating these inputs has a name: “prompt engineering.” Prompt engineering is the process of constructing a long or specialized prompt to coax the generative model to perform the desired role. We use the word “coax” intentionally. Often it’s not clear what abilities these systems have until you ask in the right way! As systems scale, they sometimes display surprising capabilities that are hard to predict or access. This point has profound implications for the capabilities and risks of these systems, which we will return to in a follow-up blog on the risks of Generative AI.
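A few-shot prompt is, mechanically, just careful string construction: worked examples first, then the new input, leaving the model to continue the pattern. The function and the "Input:/Output:" template below are illustrative conventions, not any particular vendor's API:

```python
# A minimal sketch of few-shot prompt construction. The model infers the
# task (here, sentiment labeling) from the pattern of worked examples,
# without any fine-tuning. Names and template are illustrative.

def build_few_shot_prompt(task_examples, user_input):
    """Concatenate input/output example pairs, then the new input."""
    blocks = [f"Input: {src}\nOutput: {tgt}" for src, tgt in task_examples]
    blocks.append(f"Input: {user_input}\nOutput:")
    return "\n\n".join(blocks)

examples = [
    ("The movie was wonderful.", "positive"),
    ("I would not recommend this restaurant.", "negative"),
]
prompt = build_few_shot_prompt(examples, "The service was excellent.")
# `prompt` would then be sent to a text-completion model, which continues
# the pattern by emitting a label after the final "Output:".
```

Swapping in different example pairs retargets the same model to translation, summarization, or any other task the prompt can demonstrate, which is why prompt engineering functions as a lightweight alternative to fine-tuning.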

The upshot is that these systems will inevitably proliferate and become important tools in diverse fields. Investors like Sequoia Capital are attempting to make these predictions more precise, mapping Generative AI onto known functions and fields. More far-reaching implications are certainly possible, from the destruction of commercial creative fields to a reinvention of human-AI interaction, a new renaissance of human creativity, or an epistemic apocalypse brought on by misinformation and noise injected automatically into our digital ecosystem. 

From Generative AI to General AI

These relatively sober descriptions reflect a certain reality but miss something - these systems are BONKERS! Their capabilities are far beyond what anyone could have predicted a few years ago, let alone a decade ago, before the first convolutional networks dominated the ImageNet competition and kicked off the current age of AI. Their capacity for “intelligence” and “creativity” is astounding, and their capabilities are consistently defying expectations. 

Comparison of Dall·E (left) to Dall·E 2 (right), from OpenAI’s original announcement post.

It’s easy to make these subjective claims more quantitative. AI systems are consistently tested against benchmarks that the field uses to gauge progress. When forecasters predict when AI systems will reach a certain performance on challenging benchmarks, they are consistently too conservative: progress is faster. The Generative AI systems of today are not going to be the systems of the future (or even a year from now!). Dall·E was published in 2021, and the leap from Dall·E to Dall·E 2 in a single year is extraordinary. Further advances from models like Stable Diffusion, along with plummeting costs and broader access, all mean that advancement will accelerate. This is an example of exponential technological progress, and, as many of us learned during the recent pandemic, it’s hard for us to intuit exponential growth, let alone plan for it. While progress at this breakneck speed is not guaranteed in the future, it is probable enough that it is worth focusing on how we can ensure that these world-altering technologies serve us best.

So with that said, let’s not overfit to the particular name “Generative AI” or to the current capabilities of these systems. Instead, see these systems as one moment in a trajectory of AI progress that should make your head spin. Each name emphasizes certain characteristics to the detriment of others, but we believe “Generative AI” is particularly problematic in underemphasizing the future capabilities of these systems. Below we introduce a few other names and definitions that are gaining prominence in different communities, all of which help define related (but not identical) systems. 

Foundation Models

"Any model that is trained on broad data (generally using self-supervision at scale) that can be adapted (e.g., fine-tuned) to a wide range of downstream tasks"

General Purpose AI (GPAI)

“AI systems that have a wide range of possible uses, both intended and unintended by the developers. They can be applied to many different tasks in various fields, often without substantial modification and fine-tuning.”

Artificial General Intelligence (AGI)

AGI has been in use by many communities over the decades and refers to AI systems that can perform any cognitive task a human could. There is often an underlying assumption that such a system would also display superintelligence, defined by Nick Bostrom as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest."


Transformative AI (TAI)

TAI defines AI systems by their consequences, rather than their capabilities. TAI is defined as a system that “precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution.” This term is more prominent amongst people concerned with existential or catastrophic AI risk, or AI systems that can automate innovation and technology discovery.


AGI and TAI are beyond the scope of this blog series, but we included them because it is useful to think about the current crop of AI systems in the context of these more futuristic ones. As we have said a few times already, advancement in AI has been very fast, and AGI may arrive sooner than most of us would think. There’s a niche (but growing) field forecasting AGI timelines, and the predictions consistently fall in the “few decades” range. If you are like us, that seems very soon! Recognizing the relationship between Generative AI, GPAI, and AGI will allow us to prepare ourselves conceptually, socially, and technically.

That said, we’ll use the phrase “general purpose AI” for the rest of this series, as we believe it captures the key features that interact with risk assessment and governance concerns in the near term, though we could have easily used “Foundation Model.” 

Conclusions

In this post, we’ve introduced Generative AI systems and put them in the context of other general AI systems. In the next blog, we will lay out some of the risks that we will have to contend with. Some of these risks are easy to understand, as they closely relate to our current legal landscape or concerns. Some are harder to define because they rely on imagining a future world changed by transformative technologies. This second kind of consideration is the key feature of thinking about AI risk: we must be humble and plan for surprises. Rather than think, “how can I account for such-and-such risk?” we must instead think, “how can I set up a system to identify emerging risks and quickly mitigate them?” 

It’s quite a different perspective and requires agile and informed governance. In our final post, we will describe how AI governance today is (or is not) meeting the challenge of directing GPAI systems to support human flourishing. Stay tuned!
