A Better Paperclip Maximizer?

This is a transcript of Episode #135 of the Eat For The Planet Podcast. Listen to the audio below.

I thought I’d use this episode to explore some ideas on how we can design better food solutions from the ground up, solutions that are more sustainable and equitable.

But first, a disclaimer - what I’m going to explore is just a framework and a way to think about new systems or solutions. By no means does it imply that it will lead to perfect or even good outcomes. The global food system operates in complex and convoluted ways. Many of its biggest knots will take years to untie and will require equally complex and convoluted solutions before we can hope to be trending in any sort of positive direction.

That being said, change is not impossible, and it can only happen if we first understand what we’re actually working with.

So I want to kick things off by talking about a classic thought experiment from the field of artificial intelligence. It is often used to explain how AI might inadvertently lead to negative consequences, including catastrophic ones that could end civilization and humanity itself, as the AI tries to maximize its goals.

Some of you may be familiar with this thought experiment, called the “Paperclip Maximizer,” which surfaced in 2003 in a paper titled "Ethical Issues in Advanced Artificial Intelligence" by Swedish philosopher Nick Bostrom. But for the sake of those listening who may be hearing this for the first time, I’ll explain it briefly.

A “paperclip maximizer” is an artificial intelligence program with the simple goal of maximizing paperclip production. In addition to being programmed with all necessary and known data about methods of production and materials used in paperclip manufacturing, including details about the supply chains involved, the paperclip maximizer is also able to improve its own intelligence in service of that goal. According to the thought experiment, such an AI would not only optimize production, but would also continue to undergo an intelligence explosion that makes it capable of innovating better and better techniques to maximize the number of paperclips it produces.

At some point, the AI would start to run out of its original raw materials and would identify and utilize new renewable and non-renewable resources for the purpose of optimizing its ability to produce paperclips. As the AI gets smarter and more efficient at producing paperclips, it will eventually deplete non-renewable resources entirely and start using up renewable resources at an unsustainable rate. If you follow this thought experiment to its conclusion, it ends with the paperclip maximizer converting the entire planet, including all its resources – humans included – into paperclips, and eventually converting the entire universe into paperclips.

Sounds pretty horrific, doesn’t it? 

Keep in mind that this is just a thought experiment and offers a worst-case scenario of what could happen if we successfully design artificial intelligence as a simple self-improving optimization process that lacks any guardrails based on human values, or common sense for that matter. However, the lessons here extend beyond artificial intelligence.

No prizes for guessing where I’m going with this. One could draw some pretty interesting correlations between the paperclip maximizer and the agricultural revolution, led by technological advancements that created new farming practices and farming outputs, which evolved over hundreds of years into the modern industrialized food system.

The current system exists to produce the most food at the lowest possible cost - in other words, the paperclip maximizer of food.

Now the question is where has it gotten us?

As we know too well, the global food system now accounts for over one-third of greenhouse gas emissions, and modern agriculture is destabilizing key planetary boundaries which, if crossed, could generate unacceptable environmental change, potentially endangering human existence.

If being devastating to the future of the planet wasn’t bad enough, it also happens to be cruel and wasteful, and it is failing the billion people who seek to make a living by working in the system and the 7.8 billion who seek to be nourished by it today.

Yet the food system marches on, as we scramble to do our best to slow it down or develop a better way forward.

But what have we learned from our past mistakes? How can we ensure that our new solutions truly do better?

More than 20 years ago, Neil Postman, the American author and media theorist, shared his thoughts on “Five Things We Need to Know About Technological Change.”

Here’s a summary of the 5 points he raised:

First: We always pay a price for technology; the greater the technology, the greater the price. 

Second, that there are always winners and losers, and that the winners always try to persuade the losers that they are really winners.

Third, that there is embedded in every great technology an epistemological, political or social prejudice. Sometimes that bias is greatly to our advantage. Sometimes it is not. 

Fourth, technological change is not additive; it is ecological, which means, it changes everything.

And fifth, technology tends to become mythic; that is, perceived as part of the natural order of things, and therefore tends to control more of our lives than is good for us. … When a technology becomes mythic, it is always dangerous because it is then accepted as it is, and is therefore not easily susceptible to modification or control.

Neil Postman obviously offers a lot there for us all to think about. While he was referring largely to developments in information and computer technology, all five points are equally, if not more, relevant when we talk about technology in the context of food production, distribution, or consumption.

I referenced this talk not because it provides us with any clear answers on how we should design better technology, but because it does provide a good foundation to build on.

If we go back to the thought experiment of the Paperclip Maximizer, it was obvious that an AI running a simple algorithm, one that tries to produce more paperclips and get smarter so it can do its assigned task better, was missing the context of human values. It is an expression of technology at its worst, having escaped the control of its human developers and now running as an autonomous agent.

While Neil Postman was not describing AI or the food system, he was warning us that all technology, or perhaps all systems, are by design capable of good and bad, and that some, if not all, of its negative impacts can perhaps be mitigated by better design decisions made upfront.

So how do we make better design decisions when constructing systems or solutions or technologies in the food space?

I’m going to introduce one final concept that I’ve borrowed from the field of information systems design and human-computer interaction. I was first introduced to this concept back in the late 2000s, when I worked in the technology industry on privacy issues. 

The concept is Value sensitive design, developed by Batya Friedman and Peter Kahn at the University of Washington. Value sensitive design seeks to provide theory and method to account for human values in a principled and systematic manner throughout the design process.  It outlines the need for designing technical systems in a way that brings human values to the forefront, while also honoring existing and well-functioning technical efforts.

Here’s what this could look like in the context of food solutions or food technology. 

The methodology calls for a combination of conceptual, empirical, and technical investigations. Some of it, especially the aspects connected to human-computer interaction, doesn’t necessarily translate well into food solutions, but it definitely provides a good guide to the right questions.

First, for clarity, a value in this context can be defined as “what a person or group of people consider important in life.”

So I’m going to close out this episode with a few questions that I believe we should be asking of all new food solutions, so we may be able to make better design decisions:

Here are the questions:

  1. Who are the direct and indirect stakeholders affected by the solution? Could they be farm, factory, restaurant, or grocery workers? How about consumers, communities surrounding farms, or existing food businesses and their employees?

  2. How are they affected?

  3. What values are implicated? In other words, what issues does the solution or product bring up? In the context of food, as we know, this could span a range of issues: animal abuse, farm worker abuse, exploitation of labor, unsafe and unhealthy food for consumers, and many more.

  4. How should we engage in trade-offs among competing values in the design, implementation, and use of the solution? For example, should we value accessibility and convenience over nutrition in the case of a food product, or fairness (say, worker safety and wages) over efficiency in the case of a food production operation, or impact on soil and land health over impact on human health?

  5. Should moral values (for example, compassion for people or animals, or impact on the planet) have greater weight than non-moral values (for example, hyper-growth, revenue goals, shareholder value, or global scale)?

  6. Further, if you analyze the product or solution, can you identify whether it readily supports certain values while rendering other activities and values more difficult to realize?

  7. And lastly, it is important to consider what would happen if the product or technology or solution scaled or became pervasive over a certain timeframe. What individuals, groups, or larger systems would be inadvertently affected by it in positive or negative ways?

Luckily, many of us who are working on transforming our current food system are already asking some, if not all these questions. 

And if that’s the case, there’s definitely a lot of hope for the future. We just need to keep in mind that if we want to design new systems with values factored in, we first have to take inventory of all the values and stakeholders and then carefully examine all the trade-offs we are making.

This doesn’t mean we can’t make any trade-offs, but at least we are making these decisions from the right vantage point, while acknowledging that there’s more work to be done before the solution can truly address the values of all the stakeholders involved. 

That’s the only way we can ensure that the future of food is truly transformational, and not just a paperclip maximizer that’s a little more sustainable and humane.