Computing Metaphors and Human Cognition

ENIAC, completed in 1945 and programmed and operated by an amazing team of women, was the first fully electronic general-purpose computer and kicked off the modern computer age. In 1948, John von Neumann and his wife, Klara, helped upgrade ENIAC to run in a stored-program mode. What became known as the von Neumann architecture, a central processing unit containing an arithmetic logic unit and control unit, along with memory, external storage, and input and output mechanisms, all connected by buses that move information around, has remained the dominant architecture for computers to this day.

John von Neumann was also deeply curious about how the brain worked. He was fascinated by the then-recent discovery that neurons “…have all-or-none character, that is two states: Quiescent and excited…” This biological structure served as a significant inspiration for the stored-program design he described for the EDVAC, and he continued to explore the relationship between computers and human cognition until his death in 1957. His unfinished book, “The Computer and the Brain,” remains an influential work. In it, von Neumann hypothesized that computer components might be likened to human organs, with both systems relying on electrical impulses to ferry, process, and express information. For many decades, philosophers and mathematicians followed von Neumann’s path, creating variations on the metaphor with varying degrees of literal application.

However, new data shows that our brains are structured differently from what von Neumann imagined. Recent neuroscience research points to several ways our brains differ from computer hardware. In his book The Idea of the Brain, neuroscientist Matthew Cobb traces the history of computing metaphors as descriptions of human consciousness. While the hardware metaphor is still the dominant view, the field is shifting.

“As we understand more,” Cobb writes, “localization of function will become increasingly blurred and imprecise, and brains will be understood primarily in terms of circuits and their interaction rather than on the basis of anatomical regions, viewed as modules…models that see the brain as active, responding to incoming sensory information and exploring and selecting future possibilities rather than simply processing and transmitting signals, will provide a dynamic view of brain function.”

Unlike the von Neumann architecture in our computers, our brains don’t have singular components, store our memories like files, or bus information around in pre-determined paths. While the metaphor of our brain as a computer may feel intuitive, this metaphor is increasingly problematic in the face of new evidence.

Standardization

Your brain has constructed itself into a neural structure distinct from everyone else’s on the planet. Unlike a CPU, which can be designed with complicated circuitry and stamped out in standardized form for mass production, your brain constantly re-wires itself based on your experiences.

Because your experience is distinct, the neural makeup of your brain is, too. The sensory information your body takes in and interprets differs from what other people sense. We are not clones of each other, and we do not all share a universal experience.

Neuroplasticity

When you get a computer, the CPU doesn’t dynamically change its circuitry each time you pull up a new program. Hardware is more or less fixed in the form it shipped from the factory. Your brain, however, does re-wire itself — constantly. This ongoing process, known as neuroplasticity, means that new information changes the physical structure of your neurons.

One way to visualize this is to imagine a college campus with buildings surrounding a large grass field. Students must leave class from one building and walk across the grass to get to the next one. Over time, the grass will become worn down, and paths will become visible. The paths between buildings with many students attending classes will be more visible than those with relatively few students.

When the administration changes where classes are held, these paths will eventually morph to reflect those changes. If a path is no longer used frequently, new grass will grow back. If a path is used more, the grass will grow less.

As neuroscientists say, “neurons that fire together wire together.” Repeated experiences strengthen connections and change how you sense and process information.
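
If it helps to see the mechanism as code, here is a minimal sketch of Hebbian-style strengthening. This is a toy illustration, not a biological model; the learning rate, decay, and function names are all invented:

```python
# Toy Hebbian update: "neurons that fire together wire together."
# The weight between two units strengthens when both are active and
# decays slightly when they are not -- like grass paths wearing in
# with use and growing back when abandoned.

LEARNING_RATE = 0.1
DECAY = 0.01

def hebbian_step(weight: float, pre_active: bool, post_active: bool) -> float:
    if pre_active and post_active:
        weight += LEARNING_RATE * (1.0 - weight)  # strengthen, capped at 1.0
    else:
        weight -= DECAY * weight                  # unused paths fade
    return weight

weight = 0.1
for _ in range(50):                 # repeated co-activation
    weight = hebbian_step(weight, pre_active=True, post_active=True)
print(f"after practice: {weight:.2f}")   # close to 1.0

for _ in range(50):                 # the experience stops recurring
    weight = hebbian_step(weight, pre_active=False, post_active=False)
print(f"after disuse:   {weight:.2f}")   # fading back down
```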

Predictions and Merge Conflicts

Computer hardware is designed, more or less, to execute commands consistently. Our brains, however, have evolved to manage the body’s energy through a sophisticated and perpetual guessing game. We make predictions, develop simulations representing those predictions, compare our simulations to the inputs we receive, and resolve any conflicts before moving on.

When there is a mismatch between our expectations and experience, we encounter a prediction error. Resolving these errors is critical to learning, motivation, and building concepts and mental models for how we experience the world.

For example, let’s say you’re visiting a friend for tea. On the table in front of you, there is a small bowl filled with tiny white crystals. Based on experience, context clues, and what you sense in your current situation, you predict that the substance in the bowl will taste sweet. As your mind simulates the concept of sugar, your body prepares itself in anticipation. Your mouth begins to salivate, and the reward centers in your brain become more active, prompting a craving. While your friend is out of the room, you decide to sneak a spoonful of the substance into your mouth.

Primed to experience sweetness, you put the crystals into your mouth, expecting to savor a spoonful of sugar. But what you experience is wildly different from what you expected, resulting in a prediction error. You resolve the conflict by rationalizing that the substance in the bowl wasn’t sugar. What you experienced was a spoonful of salt! You spit it out almost involuntarily; your body was one step ahead, constantly preparing you for action. Now you’re ready to form a new hypothesis. What happened? You simulate several ideas. Did your friend play a trick on you? Make a mistake? Or are they angry with you and looking for revenge? You pick the best option among those available: your friend must have made a mistake when they filled the bowl. When they come back into the room, you gather more evidence by sharing your experience and comparing your hypothesis to their story. This time, your prediction does not encounter an error. Your friend confirms your hypothesis, and you both enjoy a good laugh.

In software, we have tools that help us manage merge conflicts. If it helps, you might imagine your brain’s active inference as similar to using a version control system, such as Git, to manage changes to a codebase.
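
As a loose sketch of that loop, here is what the tea-tasting episode might look like as a predict-compare-resolve cycle in Python. The `predict` and `resolve` functions are invented for illustration; this is an analogy, not a model of active inference:

```python
# A loose analogy: predict, compare to sensory input, and resolve
# mismatches -- like reconciling your local branch with upstream changes.

def predict(context: str) -> str:
    # Hypothetical lookup of the brain's "best guess" for a context.
    expectations = {"white crystals in a bowl": "sweet"}
    return expectations.get(context, "unknown")

def resolve(expected: str, actual: str) -> str:
    if expected == actual:
        return f"no conflict: '{expected}' confirmed"
    # Prediction error: update the model, much as you would accept an
    # incoming change after a failed merge.
    return f"conflict: expected '{expected}', sensed '{actual}'; updating model"

context = "white crystals in a bowl"
print(resolve(predict(context), "salty"))
```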

Processing Power

Computers can take advantage of a relatively flat architecture, which means they can perform computational techniques, such as brute-force search, that humans can’t. For example, when IBM’s Deep Blue beat chess grandmaster Garry Kasparov in 1997, it could evaluate roughly 200 million positions per second. Armed with libraries of opening moves and endgames, it could scan its database for the next optimal move far faster than any human.
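
To make “brute force” concrete, here is a minimal exhaustive minimax search over a toy game tree. The tree and its scores are invented, and Deep Blue’s actual search was vastly more sophisticated, but the principle of evaluating every line of play is the same:

```python
# Brute force in miniature: exhaustively score every line of play in a
# toy game tree. Leaves are numeric scores; inner nodes are lists of moves.

def minimax(node, maximizing: bool):
    if isinstance(node, (int, float)):   # leaf: a terminal position's score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply game: our move, then the opponent's best reply.
tree = [
    [3, 5],    # move A: opponent will pick 3
    [2, 9],    # move B: opponent will pick 2
    [4, 6],    # move C: opponent will pick 4  <- best outcome for us
]
print(minimax(tree, maximizing=True))  # 4
```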

Humans process information much differently. We can’t add processing power the way computers can. Our brains use about 20% of our metabolic energy regardless of what we’re doing, so when we focus on something, our brains must prioritize.

Memory

In a computer, memory follows a predictable process. Information is encoded, stored away, and then retrieved. A file could be opened and closed hundreds of times without changing.

Human memory works very differently. There is no central storage center, and our minds don’t have a virtual filing cabinet with a logical structure. Instead of retrieving information and putting it back into storage unchanged, our brains rewrite memories every time we retrieve them. Our memories morph over time.

Emergence and Redundancy

Computer hardware doesn’t adapt to its surroundings. It’s static, not dynamic. On the other hand, our brains operate under the principles of self-organizing systems. Our brains also operate with significant amounts of redundancy. Dissimilar components can perform similar functions. As Cobb explains, “One sign that our metaphors may be losing their explanatory power is the widespread assumption that much of what nervous systems do…can only be explained as emergent properties — things that you cannot predict from an analysis of the components, but which emerge as the system functions…”

Complex Signal

Computers operate with one kind of signal: electrons are either flowing, or they are not. In humans, there is a wide variety of signals, such as neurotransmitters, hormones, genes, and other biological chemicals. The signals that enable consciousness are far more complex than those on an electrical circuit board.

The makeup of these complex signals also varies from person to person. Signals also rely on receptors, which act like locks to a signal’s key. Signals and receptors work together. The amount and rate at which many of these signals are produced can also change over time based on lived experience. Stress, diet, sleep, and many other factors contribute to the specific signal cocktail that courses through our body at any given time.

Incorporating an Object-Oriented Metaphor

Instead of thinking of the brain as simply hardware, it might be more useful to incorporate the idea of software into the metaphor. As Cobb explains, “In a computer, software and hardware are separate; however, our brains and our minds consist of what can best be described as wetware in which what is happening and where it is happening are completely intertwined.”

When it comes to software paradigms, object-oriented programming (OOP) is particularly suited to this metaphor because its inventor, Alan Kay, drew on his background in biology to create the pattern on which OOP is built. Inspired by cellular structures, Kay designed OOP to mimic how nature used “simple mechanisms controlling complex processes and one kind of building block to differentiate into all needed building blocks.” OOP is the backbone of many programming languages in use today, such as C#, Java, Python, PHP, and Ruby, to name a few.

In the late 1960s, Kay took a critical look at the programming language LISP and observed some quirks. “The pure language was supposed to be based on functions, but its most important components–such as lambda expressions, quotes, and [conditionals] –were not functions at all, and instead were called special forms.” This got Kay thinking: what if all of these components were treated like cells in the human body? Cells have a boundary, the membrane, so they can “act as components by presenting a uniform interface to the world.” Kay incorporated this design into his programming language, Smalltalk, and named the “cells” of the system “objects.” As Kay describes:

“Instead of dividing ‘computer stuff’ into things each less strong than the whole–such as data structures, procedures, and functions that are the usual paraphernalia of programming languages–each Smalltalk object is a recursion of the entire possibilities of the computer. Thus, its semantics are a bit like having thousands and thousands of computers all hooked together by a very fast network. Questions of concrete representation can thus be postponed almost indefinitely because we are mainly concerned that the computers behave appropriately and are interested in particular strategies only if the results are off or come back too slowly…Smalltalk’s contribution is a new design paradigm – which I called object-oriented.”
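
We might sketch Kay’s cell-like objects in Python as self-contained units that hide their internals behind a uniform “message” interface. The classes and messages below are invented for illustration and are not how Smalltalk itself works:

```python
# Objects as "cells": each hides its internals behind the same
# message-style interface, so the outside world never reaches inside.

class Cell:
    def receive(self, message: str, *args):
        # Dispatch a named message to a matching handler, if one exists.
        handler = getattr(self, f"handle_{message}", None)
        if handler is None:
            return f"{type(self).__name__} ignores '{message}'"
        return handler(*args)

class Counter(Cell):
    def __init__(self):
        self._count = 0            # internal state, behind the "membrane"
    def handle_increment(self):
        self._count += 1
        return self._count

class Greeter(Cell):
    def handle_greet(self, name):
        return f"Hello, {name}!"

# Every object answers the same way: a message through its membrane.
print(Counter().receive("increment"))     # 1
print(Greeter().receive("greet", "Ada"))  # Hello, Ada!
```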

When we’re applying the metaphor of a computer to human cognition, the notion of recursion, or self-similar repetition, can be a helpful addition. Recursion occurs all the time in nature. If you’ve ever seen the branches of a tree, the center of a flower, or the crystals of a snowflake, you’ve seen the power of recursion. Simple rules, repeated in a self-similar manner, can create extraordinarily complex structures.
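
Here is that idea in a few lines of Python: one simple rule, applied to itself, produces a branching structure. The lengths and scaling factor are arbitrary:

```python
# Simple rules, repeated self-similarly: each branch spawns two
# smaller branches until the structure peters out.

def branch(length: float, depth: int = 0) -> None:
    if length < 1:                       # base case: too small to draw
        return
    print("  " * depth + f"branch of length {length:.1f}")
    branch(length * 0.6, depth + 1)      # left sub-branch
    branch(length * 0.6, depth + 1)      # right sub-branch

branch(4.0)
```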

States and Behaviors

In OOP, an object has two characteristics: state and behavior. A state describes the object’s properties or attributes. In the human body, we rely on several different types of information to assess our current state, such as:

  • Exteroception: external stimuli, such as sight, hearing, taste, touch, smell, and temperature.

  • Proprioception: the sense of body movement and position.

  • Interoception: internal stimuli, such as heart rate, hunger, thirst, respiration, pain, and fatigue.

  • Episodic memory: knowledge about specific events from our past.

The second characteristic of an object is its behavior. Which actions can it perform? What can it do? Some neuroscientists, such as Daniel Wolpert, believe our brains are ultimately designed for one thing: movement. Our cognitive experiences, such as sensing, thinking, feeling, and dreaming, serve this sole purpose. They enable us to produce complex, adaptable movements that increase our chances of survival in an ever-changing environment.
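
Putting state and behavior together, a hypothetical “body state” object might look like the sketch below. The attribute names mirror the senses listed above; nothing here is a real neuroscience model:

```python
from dataclasses import dataclass, field

# State: a snapshot of the information the body uses to assess itself.
# Behavior: the movement that state makes possible.

@dataclass
class BodyState:
    sights: list = field(default_factory=list)    # exteroception
    position: str = "seated"                      # proprioception
    heart_rate: int = 70                          # interoception
    memories: list = field(default_factory=list)  # episodic memory

    def move(self, action: str) -> str:
        # Behavior: cognition in service of movement.
        self.position = action
        return f"moving: {action} (heart rate {self.heart_rate})"

state = BodyState(sights=["bowl of white crystals"], heart_rate=85)
print(state.move("reach for the spoon"))
```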

Compression

With all of these discrete attributes and behaviors, managing complexity is critical. Both OOP and our brains achieve this through various forms of abstraction.

In software, information hiding enables developers to use a piece of code without seeing the details of how it was designed or implemented, much as you can drive a car without knowing the mechanical details of the engine. OOP uses a form of information hiding known as encapsulation, in which the details of state and behavior are hidden within the object. This enables developers to build complex programs out of simple, self-contained parts.
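
Here is a minimal encapsulation sketch, using the car analogy above (the `Car` class and its methods are invented for illustration):

```python
# Encapsulation: the engine's details stay hidden; drivers only see start().

class Car:
    def __init__(self):
        self._fuel_level = 1.0       # internal state, private by convention

    def _ignite(self) -> bool:       # hidden implementation detail
        return self._fuel_level > 0

    def start(self) -> str:          # the public interface
        return "engine running" if self._ignite() else "out of fuel"

print(Car().start())   # you can drive without ever reading _ignite()
```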

Our brains also use information hiding through a process known as compression. Using a hierarchical parallel structure, signals in our brains move through successive layers of complexity until they reach our prefrontal cortex, where our most complex executive functioning occurs. Between each layer, the signals are compressed into a sort of summary. Compression lets us filter out details, reduce redundancy, and synthesize information. This process can keep going, resulting in many layers of synthesis.

Neuroscientist Lisa Feldman Barrett describes this process with an analogy: a detective’s field reports moving up a chain of command. The detective on the ground interviews twenty different witnesses and writes a summary to pass along to their captain. The captain collects the summaries the detectives have provided and creates a new summary for the police chief. When a city official needs information, the chief creates a new summary, too.

Compression in the brain is the process by which abstraction can occur.
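
Barrett’s chain-of-command analogy translates naturally into a layered summarization sketch. The `summarize` function below is a crude stand-in; real neural compression is far richer:

```python
# Compression as successive summarization: each layer keeps only the
# gist of the layer below, like reports moving up a chain of command.

def summarize(details: list, keep: int = 2) -> list:
    # Stand-in for compression: keep the first few items, drop the rest.
    return details[:keep]

witness_statements = [f"witness {i} saw something" for i in range(1, 21)]
detective_report = summarize(witness_statements, keep=4)
captain_report = summarize(detective_report, keep=2)
chief_report = summarize(captain_report, keep=1)

print(len(witness_statements), "->", len(detective_report),
      "->", len(captain_report), "->", len(chief_report))
# 20 -> 4 -> 2 -> 1: detail filtered into a synthesized summary
```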

Concepts

Abstraction in the brain comes in the form of semantic representation, which can also be described as schemas, mental models, or concepts. Concepts enable us to categorize, label, and assign meaning to what we sense and experience in a quick and efficient way. They help us communicate, particularly with people with whom we share similar experiences or a common culture. When someone asks what you had for lunch, you could describe your meal as “lettuce leaves, tomatoes, cucumbers, and carrots that were chopped and placed into a bowl,” but it’s much easier to simply say “salad.”

Similar to objects in OOP, a concept contains information about its attributes and behaviors. For example, you’re likely able to distinguish a desk from a table without giving it much thought, even though their shape, height, and materials can be remarkably similar. This is because you’ve developed two distinct concepts. Perhaps you distinguish a desk because it is typically used for work. You may also use your episodic memory to classify something as a table because when you ate your lunch, you placed your salad on a similar surface.

Concepts are extraordinarily flexible. For example, desks and tables can be grouped into a more general concept called furniture. At the same time, concepts can become more and more granular. The concept of tables can be further refined into end tables, coffee tables, dining tables, and more. The appropriate level of granularity is often situationally dependent.
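
This flexibility maps neatly onto class inheritance, with general concepts above and granular ones below (the classes are illustrative):

```python
# Concepts at different levels of granularity, as a class hierarchy.

class Furniture:                 # the general concept
    purpose = "furnishing a room"

class Table(Furniture):          # more specific
    purpose = "holding things at a convenient height"

class CoffeeTable(Table):        # more granular still
    purpose = "holding drinks near a sofa"

# The situation determines which level of granularity is useful.
for concept in (Furniture, Table, CoffeeTable):
    print(f"{concept.__name__}: {concept.purpose}")
```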

We use concepts to describe more than just things we can see and touch. Concepts can be goal-based, such as “things that are pleasurable.” Concepts exist for abstract ideas, such as money and trust. Concepts like emotions, intuitions, and dreams can be more feeling-oriented. They can also be analytical, as formal logic and mathematics are concepts. We also use concepts to construct our social reality, using conceptual heuristics to quickly get a sense of the people around us based on their personality, physical features, names, nationalities, clothing, behaviors, and more. However, we need to be cautious here. Judging another person based on our preconceived notions may be quick and convenient, but it also leads to stereotyping and other forms of discriminatory behavior.

Concept Templates and Instances

In OOP, objects are created from a template known as a class. Each class acts like a blueprint, and each time that blueprint is used, a distinct instance of that class is created as an object. You can think of this similarly to how blueprints are used to build houses. An entire neighborhood of houses can be built using a single blueprint. However, each house built from the blueprint is a distinct entity and may have slight variations compared to its neighbors.
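
In Python, the blueprint-and-house relationship is exactly the class-and-instance relationship. A hypothetical example:

```python
# One blueprint (class), many distinct houses (instances) with variations.

class House:
    def __init__(self, color: str, bedrooms: int = 3):
        self.color = color
        self.bedrooms = bedrooms

neighborhood = [House("blue"), House("gray", bedrooms=4), House("white")]
for house in neighborhood:
    print(house.color, house.bedrooms)   # same blueprint, distinct entities
```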

Concepts work in much the same way. We construct conceptual templates that get instantiated in real-time. These templates include a concept’s attributes, behavior, and associated episodic memories. Often, this information is hidden from our conscious awareness thanks to compression. However, with effort and intention, we can dig deeper to explore how our concepts are constructed.

For example, the conceptual template I’ve constructed for the emotion I describe as “anxious” looks like this: my heart races, my lips get dry, my body becomes twitchy, and I talk faster. I also include relevant information from past experiences in my concept template. “Anxious” has shown up more frequently for me in small group settings than in large group settings. Each time I give a presentation, I experience a distinct instance of the concept template that I know as “anxious.”
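
Sketched as code, that template might look something like the class below. The attributes come straight from the description above, and the code is purely illustrative:

```python
from dataclasses import dataclass, field

# A conceptual template for "anxious": attributes, behavior, and
# episodic memories baked into the blueprint.

@dataclass
class Anxious:
    setting: str
    heart_racing: bool = True
    dry_lips: bool = True
    twitchy: bool = True
    talking_faster: bool = True
    episodes: list = field(default_factory=lambda: [
        "shows up more in small groups than large ones",
    ])

# Each presentation is a distinct instance of the same template.
today = Anxious(setting="workshop for a small team")
print(today.setting, "->", "heart racing" if today.heart_racing else "calm")
```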

Modifying Concepts

Concepts aren’t fixed. They constantly change. Concepts operate in a closed-loop system. New experiences get added to our episodic memory, associative learning leads to new connections, and resolving cognitive merge conflicts helps us change our perspective. All this new information is incorporated into our mental dataset, which means we have an amazing capacity for change.

For example, when I give keynote presentations at large conferences, I typically experience lower anxiety levels than when I facilitate a workshop for a small team. Part of the reason why I feel more comfortable giving keynote presentations is that I have more experience presenting in this format. I know what to expect, and I feel more prepared. Using my episodic memory, I can recall times when my presentations went well. I can draw on the experience of receiving positive feedback, which means the degree of “anxious” I experience in these situations has lessened over time, to the point where my “anxious” concept doesn’t quite feel accurate anymore. When I present at a conference now, “eager” or “antsy” is more likely to be the concept that gets instantiated.

This makes concepts an inherently personal experience. My concept of “anxious” probably doesn’t look exactly the same as yours. For example, my palms don’t sweat, but yours might. Public speaking, particularly to a large audience, is also consistently rated as the most common fear, even ahead of death. So, my experience of feeling relatively low anxiety speaking in front of large groups might seem absurd because it doesn’t conform to the statistical norm of the population. This reminds us that we can induce patterns from members of a population, but those patterns likely can't be universally applied to every individual within that population. It’s true that most people are afraid of public speaking. It's also true that my experience doesn't fit that pattern. With emotions, we run into trouble when we assume that an inference we draw from a common pattern can be treated as an immutable fact. In nature, variation is the norm.

Concepts can be a wonderful conduit to empathy. When we become curious about what information is hidden in concepts and work to achieve alignment between our concepts and those of others, that’s when empathy takes place.

Emotions Are Concepts

Emotions are a specific type of concept that incorporates mood, also known as affect, into its template. Affect is a generalized feeling in our body. Affect becomes an emotion when we construct a concept out of it. Affect is the feeling. Emotion is its meaning.

Mapping Our Moods

Affect can be distilled into a relatively simple diagram based on two properties: valence and intensity. Valence describes how pleasant or unpleasant a feeling is. Intensity is the degree to which a feeling is activated or calm.
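
Those two properties suggest a simple coordinate scheme, sketched below with invented thresholds. Affect researchers use a similar circumplex layout, but these labels are just for illustration:

```python
# Affect as two coordinates: valence (unpleasant..pleasant) and
# intensity (calm..activated), each here in the range -1.0 to 1.0.

def describe_affect(valence: float, intensity: float) -> str:
    pleasant = "pleasant" if valence >= 0 else "unpleasant"
    activated = "activated" if intensity >= 0 else "calm"
    return f"{pleasant} and {activated}"

print(describe_affect(0.7, 0.8))    # pleasant and activated (e.g., excited)
print(describe_affect(-0.6, -0.4))  # unpleasant and calm (e.g., gloomy)
```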

When it comes to affect, people possess a wide range of sensitivity. Some people are so sensitive to changes in affect that emotional regulation is a significant challenge. Other people can easily sense physical sensations, such as a stomachache, but struggle to make a connection to an affective feeling or name it as an emotional concept.

Neither of these predispositions is necessarily bad, good, or better than the other. What is important is recognizing that people experience affect and emotions differently. When we acknowledge and honor these variations, we can work together more effectively.

Building an Emotional Vocabulary

Feeling affect and conceptualizing it into an emotion are two different things. Some people are so overwhelmed by affect that they experience too much cognitive noise to form a clear concept. Meanwhile, a person with low affective sensitivity might lack the sensory awareness necessary to construct a distinct concept. In both cases, labeling emotions as distinct concepts becomes a challenge.

In the late 1990s, neuroscientist Lisa Feldman Barrett directed an experiment in which her lab “asked hundreds of test subjects to keep track of their emotional experiences for weeks or months as they went about their daily lives… These new experiments revealed something that had not been documented before: everyone we tested used the same emotion words like ‘angry,’ ‘sad,’ or ‘afraid’ to communicate their feelings but not necessarily to mean the same thing… we discovered that people vary tremendously in how they differentiate their emotional experiences.”

Side note: This discovery launched Barrett on a course that would lead her to develop the Theory of Constructed Emotion, which forms much of the basis of this article and thoroughly disrupts the current popular understanding that emotions can be accurately detected through facial expressions. If you work in AI/ML, I think it’s absolutely critical that you become familiar with her work and begin following some of her concrete recommendations at the end of this research paper.

If you struggle to describe your emotions accurately, know you’re not alone. Many people struggle with emotional granularity, the term Barrett coined for the ability to describe emotions in a precise and nuanced manner. People with low emotional granularity use only a few general emotional concepts to describe their internal experiences: “upset,” “bad,” “fine,” “happy,” etc. However, people with high levels of emotional granularity use words that convey a greater sense of accuracy. They use discrete emotional concepts.

For example, instead of describing their experience as “upset,” a person with a high degree of emotional granularity might choose more subtle variants that imply additional context, such as “grief” or “worry.” Where a low-granularity person might describe themselves as feeling “fine,” a high-granularity person might distinguish between different variations of “fine” such as “satisfied,” “grateful,” or “ambivalent.”

From an OOP perspective, we can think of emotional granularity in terms of the single-responsibility principle. Classes generally work best when they are small, specific, and have a singular purpose. The opposite of the single-responsibility principle is often called a “god class”: a class that is huge and packs many different responsibilities into one place. Through this lens, “upset” is a “god class” emotional concept, whereas “grief” more closely follows the single-responsibility principle.
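
In code, the contrast might look like this (hypothetical classes, exaggerated for effect):

```python
# A "god class" emotion: one vague label packing many responsibilities.
class Upset:
    def grieve(self): ...
    def worry(self): ...
    def sulk(self): ...
    def seethe(self): ...

# Single-responsibility emotions: small, specific, precise.
class Grief:
    def mourn_a_loss(self): ...

class Worry:
    def anticipate_a_threat(self): ...
```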

Emotional granularity is a skill that is relatively easy for most people to develop. Mindfulness training can help individuals better sense their affect, expanding emotional vocabulary can help individuals discover more precise word choices, and applying new emotion words in context can help individuals strengthen their conceptual association.

When individuals operate with higher emotional granularity, we can observe signs of significant benefits. Emotional granularity helps us manage energy efficiently, overcome fear, handle stress in healthy ways, regulate our emotions, and resist the biasing effect of emotions on our judgment. When emotional granularity is present in a group, collaboration is easier, and conflict is productive instead of dysfunctional. Building a strong emotional vocabulary is a key aspect of technical empathy.

Emotions Are Complex, and So Is Software

For many people I speak with, emotions feel mysterious and squishy. But I’ve learned that we can use our existing understanding as software developers to explore emotions and empathy more technically. While our brains may not work like the hardware von Neumann imagined, staying open and curious in the face of new data can help us discover new insights about ourselves and others.