Interview with Gerd Gigerenzer on the Future of AI

DECODING PSYCHOLOGICAL AI: A CONVERSATION WITH GERD GIGERENZER

Exploring Heuristics, AI, and the Art of Navigating Uncertainty in Entrepreneurship.


Gerd Gigerenzer is a renowned psychologist and decision theorist, known for his work on heuristics and their applications in decision-making across various domains, including artificial intelligence and entrepreneurship. He is also director emeritus of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development.


Could you please explain the concept of heuristics and how they influence human decision-making processes for our audience who may not be familiar with your work?

Absolutely. A heuristic is a simple rule for making decisions. For instance, in social contexts people often do what others are doing, a heuristic known as imitation: they buy the same shoes or products that others are buying. Imitating the majority and imitating the most successful person are, in turn, two different heuristics.

Essentially, heuristics are strategies for making decisions when we face uncertainty and cannot calculate the best or optimal solution.

Our audience primarily consists of founders and entrepreneurs. One common question is how to navigate the noise, survivorship bias, and small sample sizes prevalent in entrepreneurship. Traditional business education often teaches expected utility maximization as the way to make decisions. However, when you step into the real world, it's a different story.

In a world of uncertainty such as entrepreneurship, true optimization, that is, finding the absolute best solution, is often unattainable. Business schools nevertheless commonly teach expected utility maximization, which is a form of optimization.

However, a more practical approach acknowledges our human limitations. Even if we can't find the absolute best solution for the future, we can still make decisions that are good and that beat other strategies. Consider Harry Markowitz, the economist who won a Nobel Memorial Prize for solving the problem of allocating money across multiple assets. His method, the mean-variance portfolio, assumes that you can estimate the future returns, variances, and covariances of all assets. But when he invested his own retirement funds, he didn't use this complex optimization. Instead, he applied a simple heuristic known as "1 over N," which allocates money equally among the N assets.
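To make the idea concrete, here is a minimal sketch in Python of a "1 over N" allocation. The assets and returns are invented for illustration; this is not Markowitz's procedure or anyone's production code, just the heuristic itself.

```python
import numpy as np

def one_over_n_weights(n_assets: int) -> np.ndarray:
    """The 1/N heuristic: allocate wealth equally across all N assets."""
    return np.full(n_assets, 1.0 / n_assets)

def portfolio_return(weights: np.ndarray, asset_returns: np.ndarray) -> float:
    """Realized one-period return of a portfolio with the given weights."""
    return float(weights @ asset_returns)

# Hypothetical one-period returns for four assets (illustrative numbers only).
returns = np.array([0.05, -0.02, 0.10, 0.01])

weights = one_over_n_weights(len(returns))   # [0.25, 0.25, 0.25, 0.25]
print(portfolio_return(weights, returns))    # simple average of the returns: 0.035
```

Note that no return, variance, or covariance has to be estimated, which is exactly why the heuristic stays robust when those estimates would be unreliable.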


This raises an important question: can we identify the situations in which optimization, or a heuristic, will lead to better outcomes, measured by the Sharpe ratio or some other criterion? This is what I call the question of ecological rationality, where rationality is not just about internal consistency but about real-world success.

Can you elaborate on the conditions under which a heuristic like "1 over N" might outperform a Nobel Prize-winning optimization model?

Certainly. The performance of a heuristic versus an optimization model depends on the specific circumstances. When the number of assets (N) is large, Markowitz's method requires a great many estimates: the future return and variance of every asset and the covariances between them. With limited data, these estimates carry large errors and the optimization becomes less effective; with a small N and long, stable histories of data, the optimization model can do better. Studies have shown that in the scenarios tested, "1 over N" outperformed the optimization model in seven out of eight investment problems.

The key question is: how can a heuristic outperform optimization? That's a question worth exploring further. 

Now, the answer to this is quite straightforward because you can't optimize in an uncertain world. Optimization relies on estimating parameters from past data, and if the future doesn't resemble the past, heuristics become a more reliable bet. For example, the "1 over N" heuristic assumes that future outcomes are unpredictable, so it opts for perfect diversification.

This illustrates the difference between heuristics and optimization models. In certain cases, heuristics can outperform optimization models, particularly in situations marked by uncertainty where past data may not be a reliable guide.

If you are operating in a stable world where nothing new ever happens, then fine-tuning on past data can yield more accurate results than heuristic methods. Even there, heuristics are faster and allow immediate decisions. The more uncertainty you face, the more advisable it becomes to adopt a simple and robust approach.

You've discussed the distinction between well-defined, stable problems and ill-defined ones in your papers, especially in the context of algorithms. Could you delve deeper into this distinction?

This distinction also applies when we talk about artificial intelligence, particularly deep neural networks. These networks essentially function as regression or discrimination models but are recursive and highly powerful. They rely on parameter estimation, which in turn depends on vast amounts of data. However, this data is most valuable when the future closely resembles the past.

In stable environments, such as chess and certain industrial applications, this type of AI excels, because it deals with well-defined and stable problems. But when dealing with uncertainty, such as predicting human behavior, even the best AI models can be outperformed by very simple heuristics that take into account factors like age or previous convictions.

Human behavior introduces a high level of uncertainty, and heuristics offer transparency, allowing us to examine their decision-making processes, which is often challenging with black-box AI models.

Can heuristics be applied to AI, or can AI benefit from incorporating heuristics into its models?

Indeed, the study of the human mind reveals that humans are essentially heuristic machines. Our evolution has equipped us to navigate uncertain situations, making predictions and guesses about the future. When developing AI programs to predict uncertain events, one approach is to study how humans employ heuristics and incorporate these strategies into AI systems. This approach dates back to Herbert Simon, one of the pioneers of AI, who envisioned studying experts and programming the heuristics they use to make computers intelligent.

While machine learning has enjoyed significant success, particularly in scenarios like chess and industry applications, it would be a mistake to disregard the insights gained from understanding human heuristics, especially when dealing with uncertainty. 

For instance, consider predicting the behavior of unpredictable viruses like the flu or the coronavirus. These viruses don't follow well-defined patterns, as we've seen in recent years. Google engineers attempted to predict flu trends between 2008 and 2015 by analyzing search terms. They used a complex big data algorithm with numerous variables, essentially a black box model. However, in 2009, an unexpected event occurred—the swine flu emerged out of season. The algorithm, which had learned that the flu is high in winter and low in summer, couldn't adapt quickly enough. This incident led the engineers to reconsider their approach. When faced with a complex problem and a complex algorithm that isn't working, the question becomes: Do you make the algorithm more complex or simpler?

Based on our research, it appears that simplicity is the key to heuristics, as it enhances their robustness. In contrast, Google engineers chose a different path by increasing complexity, incorporating around 160 variables into their model, which ultimately did not improve its performance. They made several updates to this complex model, but it still failed to provide accurate predictions.

My team and I are working on what we term "psychological AI." This involves studying how the human brain, evolved over time, deals with rapidly changing and unexpected situations. One aspect we've drawn from memory research is the human tendency to quickly forget information. We've formalized this into an algorithm known as the "recency heuristic." This heuristic simply looks at the most recent data point, predicts that the next one will be the same, and ignores all other information. It's a straightforward approach with no free parameters, easy to understand, and it doesn't rely on secrecy or extensive calculations. For instance, in predicting flu trends, our algorithm takes the latest data point from the CDC (Centers for Disease Control and Prevention), which is usually one to two weeks old, and predicts that the current week will follow the same pattern. It's clear, transparent, and doesn't require large amounts of data.

When unexpected events like the swine flu occur, the recency algorithm may make an initial error but quickly adjusts because it doesn't get bogged down by big data, unlike Google's flu trend algorithm, which is data-intensive. We tested this approach from 2008 to 2015, and every year, the simple recency heuristic outperformed Google's big data model, including all four of its updates. This illustrates how a basic heuristic rooted in human psychology can outperform a complex big data model.
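As a minimal sketch of the recency heuristic described above, here it is in Python. The numbers are invented for illustration; this is not the code used in the study, just the rule itself.

```python
from typing import Sequence

def recency_forecast(observations: Sequence[float]) -> float:
    """Recency heuristic: predict that the next value equals the most recent one.

    No free parameters and no training: only the last observation is used.
    """
    if not observations:
        raise ValueError("need at least one observation")
    return observations[-1]

# Hypothetical weekly flu-related doctor visits (illustrative numbers only).
weekly_visits = [1200, 1350, 1500, 2100]

print(recency_forecast(weekly_visits))  # forecast for next week: 2100
```

Because the forecast depends only on the latest observation, an out-of-season surge such as the swine flu shifts the prediction immediately in the following week, which is the robustness described here.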

Why do big players in the industry, despite their financial resources and incentives, seem to neglect this perspective on AI? Wouldn't it be more cost-effective and efficient to test AI models against psychological heuristics to see which ones perform better?


That's an excellent question with multiple facets. One reason might be the "complexity illusion": the belief that adding more data and more complexity to an algorithm will always lead to better results. While this holds true in stable, well-defined environments, it falls short in uncertain, rapidly changing situations. Additionally, the understanding that an algorithm's performance is conditional on its environment is not yet widely developed.

Many people tend to argue that a particular algorithm is the best without recognizing that performance is context-dependent. Furthermore, even among psychologists, there can be a tendency to downplay the abilities of the human mind, especially the minds of others. This attitude is evident in the behavioral economics literature, where heuristics like the recency heuristic are often labeled as sources of irrationality without thorough testing.

Overall, there's a need for a shift in mindset to consider ecological factors and acknowledge the potential of human-inspired heuristics in AI development.

People often underestimate the potential of the human mind and sometimes even look down on it. However, there are more factors at play. Let's revisit the example of Harry Markowitz and his mean-variance model, which earned him a Nobel Prize. In contrast, we have the heuristic "1 over N" that, in many situations, outperforms the Nobel Prize-winning model.

The complexity illusion is a common pitfall: people tend to believe that complexity equates to excellence and that simplicity cannot be effective. This mindset resembles what Nassim Taleb has observed in finance, where his insights, despite their practical success, are still overlooked by major business schools and often disregarded.

Another concern arises from large language models generating content at scale, such as blog posts, books, and essays. With AI generating and training on the data produced by AI, it raises questions about the necessity of human involvement. What are your thoughts on this trend?

We're witnessing a situation where AI is generating vast amounts of content, and at some point, AI will be trained on data that was entirely generated by AI in the past. It's as if humans are being gradually phased out of the loop. Many of my colleagues, when writing articles or funding proposals, employ AI models like GPT. I often wonder how many reviewers are also using AI models to evaluate the papers written by researchers.

On a more concerning note, we're seeing the emergence of fake papers generated by AI through so-called paper mills. This is particularly prevalent in medical research, where it's estimated that perhaps over 10 percent of published papers are fraudulent. In some cases, doctors who lack the time or expertise to conduct research pay scientific assistance companies to write entire papers for them, a business transaction that can cost tens of thousands of dollars.

This undermines the integrity of science through commercial interests.

On another note, do you believe that algorithms like deep neural networks have the potential to move beyond being advanced statistical machines and start mimicking human understanding to some extent?

Deep neural networks are, at their core, advanced statistical machines. They don't possess understanding or intelligence in the way humans do. A clear illustration can be seen in image categorization tasks. Deep neural networks can distinguish objects like school buses from regular cars based on patterns they've learned from training data. However, the errors they make are fundamentally different from those made by humans. Adversarial algorithms can exploit the weak points of deep neural networks, leading the network to identify objects incorrectly. For instance, an image of alternating yellow and black stripes may fool the network into "seeing" a school bus; humans would never make this error. Moreover, adding random noise to an image can confuse deep neural networks, whereas it has far less impact on human perception.

Even when deep neural networks and humans achieve similar classification performance, the processes behind their decisions differ. Major experimental findings in vision, for instance, are not mirrored in deep neural networks. Therefore, it's important to recognize that while AI may produce high-quality text or poems, it operates differently from human intelligence and can be deceived.

Could you elaborate on the distinction between risk and uncertainty and its relevance to AI, particularly in light of the stable world principle?

The distinction between risk and uncertainty is crucial. In a world of risk, you have complete knowledge about all possible outcomes, their consequences, and the associated probability distributions. For instance, consider playing roulette at a casino, where you know precisely the possible outcomes (numbers from 0 to 36), their monetary consequences, and the probabilities. This is a world of risk, but it's rather uninteresting. As Fyodor Dostoevsky once quipped, in a world of perfect rationality, nothing happens. Most decision theory focuses on such risk-based scenarios, including expected utility maximization, which operates in a risk-based framework.
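To see what "a world of risk" means in calculable terms, here is a small Python sketch of the roulette example, assuming European roulette with 37 pockets and the standard 35-to-1 payout on a single-number bet; the point is simply that every outcome, consequence, and probability is known in advance, so the expected value can be computed exactly.

```python
# European roulette: 37 equally likely pockets (0 to 36).
# A one-unit straight-up bet on a single number pays 35 to 1.
n_pockets = 37
p_win = 1 / n_pockets
p_lose = 1 - p_win

expected_value = p_win * 35 + p_lose * (-1)
print(expected_value)  # about -0.027: a fixed, knowable expected loss per unit bet
```

Nothing comparable can be written down for a genuinely uncertain situation, which is exactly the gap heuristics are meant to fill.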

However, in the real world, we often face uncertainty, which is a realm where unforeseen events can occur. Let me share an amusing example from Berlin, where I reside. A few years ago, an unexpected event made headlines globally. A man was picnicking nude, which is a normal practice in Berlin. Out of the blue, a wild boar appeared with two piglets and snatched the man's yellow bag, including his laptop, before running away. This illustrates the unpredictability of uncertainty. In a world of uncertainty, heuristics come into play. In such situations, there's no one-size-fits-all algorithm like a single deity guiding decisions. Instead, you rely on multiple heuristics.

Markowitz's "one-over-N" heuristic is helpful in some allocation scenarios, but it doesn't cover all situations. Similarly, imitation doesn't work when there's no one to imitate, such as in the case of Robinson Crusoe. So, judgment becomes essential in dealing with uncertainty, a dimension often overlooked in the world of risk-focused decision-making.

In a world of uncertainty, a curious phenomenon emerges: "less is more." Beyond a certain point, acquiring more information can hinder decision-making rather than enhance it. Robustness, not optimization, becomes the key, as optimizing in an uncertain world can render the system fragile, leading to breakdowns as we've seen in financial crises. Finance models, like Markowitz and Merton's, excel in stable worlds but falter when the unexpected occurs, often requiring bailouts with taxpayers' money.

These insights from the distinction between risk and uncertainty and the need for heuristics in uncertain scenarios are often overlooked, particularly in academia.

Yes, it's a valid point. Many problems stem from the academic focus on models dealing solely with risk or stable environments, often confusing uncertainty with risk. This oversight leads to misconceptions and inappropriate models. On the other hand, machine learning experts, aware of the intractability of many problems, tend to disregard psychological and decision-making theories, assuming them to be irrelevant. They may consider aspects like morality in self-driving cars, but often only after the fact.

Regarding self-driving cars, we have yet to see level five autonomous vehicles that can safely navigate all conditions without a human backup. Despite claims of imminent breakthroughs by figures like Elon Musk, we're not there yet. Betting on self-driving cars in regular traffic, which is highly uncertain, could lead to unexpected and fascinating consequences.

What we are more likely to get are level four cars. A level four car is defined as one that can drive without a human backup, but not everywhere. To work within the limits of the technology, we would have to redesign our cities, perhaps by adding walls or fences to streets, so that level four cars can drive safely, while prohibiting human drivers, who introduce uncertainty. This approach means changing our infrastructure and even making ourselves more predictable. AI can handle predictable behavior; it is our unpredictable habits that cause trouble, as self-driving car startups are finding in places like San Francisco.

Do you foresee potential major accidents or challenges as this technology develops?

Most of the cars labeled as self-driving are not level five cars. For instance, in the famous accident in which a pedestrian was killed, a human backup driver was in the car but not paying attention. Level three cars, which can perform specific tasks like keeping distance, have been around for years. However, level three cars require a human backup driver to take over when problems arise, which is not a viable solution: sitting behind the wheel for hours waiting for something to go wrong is impractical. Adapting to the limited capabilities of AI may be our future. Alternatively, we could invest in improving public transport systems, which may be the more practical European alternative.

Elon Musk is known for his claims about self-driving cars being just around the corner. However, we're still waiting for the widespread adoption of level five autonomous vehicles. Musk's stance against public transport is another interesting aspect. He may make such claims to influence politicians and sway public opinion against traditional public transport systems. It's an attention economy, where making sensational claims keeps investors engaged.

I firmly believe that all sensitive algorithms should be made transparent to the public. I've served on advisory committees for the German government, where we've made such proposals. We need to empower individuals and educate them about the reality of data usage and the hidden costs of "free" services that extract personal data. For example, the pay-with-your-data model on social media is akin to a coffee house where the coffee is free but the patrons are the product, not the customers. It's important to understand this and explore alternatives.

We should have the right to pay for our own coffee, eliminate the sales force, and become customers once more. This shift would require governmental support and a shift in public awareness. Many people are still unaware of the extent to which they are sleepwalking into surveillance.

Can you share your thoughts on the genetic basis of intelligence and whether we might mimic human intelligence in AI in the future?

Human intelligence is a combination of nature and nurture. Genetics plays a role, but so do training and expertise. We don't fully understand human intelligence or its genetic basis. If we did, we might be able to replicate it in AI. However, AI doesn't possess intuition, a vital aspect of human intelligence. Integrating intuition into AI could be a significant advancement.

AI's impact depends on the people behind it, as AI itself is not moral or immoral. The moral questions revolve around the intentions and actions of those using AI, particularly in high-stakes decisions.

Thank you for sharing your insights. Are there any upcoming projects or books you're working on that might interest our audience?

I've recently published a book titled "The Intelligence of Intuition," which delves into the role of intuition in decision-making and its potential in AI. I'm also working on projects related to psychological AI, exploring how human capabilities can be incorporated into AI to improve predictions and decision-making.

Thank you for joining us today, and it's been a pleasure discussing these important topics.

It was a pleasure, and I encourage everyone to keep thinking critically and staying informed in our rapidly evolving technological world. The real danger lies in complacency, not AI itself.

