John Holland’s complexity studies helped us understand what this complex bacterial colony has in common with your brain, political theory, ants, computers, urban life and more. (Courtesy of UC San Diego)
On Aug. 9, 2015, we lost John Holland, computer scientist, psychologist, and complexity scientist. John defined new areas of research and tools for exploring them. Along the way, he made significant contributions in a startlingly unusual range of fields. He gave us new ways to solve problems and changed how many of us think, literally.
More traditional obituaries list his awards, positions, and publications. Here I would like to take a moment to describe my impressions of the intellectual contributions of a rare cross-disciplinary genius—a term I use sparingly but that truly applies here.
Holland began graduate school at Michigan in mathematics, but his interests in signal processing and algorithmic computation moved him into the as-yet-undefined field of computer science.
He became a student of Arthur Burks, who himself had helped John von Neumann design and build one of the original all-purpose electronic digital computers, the iconic ENIAC. This lineage played a central role in Holland’s intellectual development.
Von Neumann introduced the idea of self-reproducing cellular automata: computer programs capable of making copies of themselves. He had proven that any such automaton must contain a set of instructions that are then copied to the offspring. In other words, von Neumann had proven that something like DNA must exist, and he had done so before Crick and Watson discovered DNA’s structure.
Holland was fascinated with von Neumann’s “creatures” and began wrestling with the challenge and potential of algorithmic analogs of natural processes. He was not alone. Many pioneers in computer science saw computers as a metaphor for the brain.
Holland did as well, but his original contribution was to view computation through a far more general lens. He saw collections of computational entities as potentially representing any complex adaptive system, be it the brain, an ant colony, or a city.
His pursuit became a field. In brief, a “complex adaptive system” is a collection of diverse, interacting, adaptive parts capable of emergent collective behavior. The term emergence, to quote Nobel-winning physicist Phil Anderson’s influential article, captures those instances where “more is different.” Computation in the brain is an example of emergence. So is the collective behavior of an ant colony. To borrow physicist John Wheeler’s turn of phrase, Holland was interested in understanding “it from bit.”
In 1962, Holland wrote a short paper titled “Outline for a Logical Theory of Adaptive Systems.” In this paper, which foreshadows his seminal later work, Holland observed that an adaptive environment can be modeled as a population of problems, many of them high-dimensional and complex. To solve these larger problems, systems would need to somehow self-construct subproblems that, when solved, would point toward the solution of the larger problems, which is no easy task.
In 1975, Holland offered up more than an outline. He published a seminal work, “Adaptation in Natural and Artificial Systems.” Dense, challenging, and thought-provoking, the book has left a lasting imprint on computer science, operations research, biology, ecology, psychology, economics, political science, and the philosophy of science.
Most notably, the book introduced the world to perhaps Holland’s greatest contribution, and certainly his most prominent: genetic algorithms. Like many moments of genius, Holland’s idea in retrospect seems obvious. He used evolution as a metaphor for an algorithm that could be used to solve problems, and in doing so defined the field of evolutionary computation. The genetic algorithm consists of three steps:
Step 1: Create a population of random solutions to a problem and represent them as binary strings. Think of each potential solution as an individual and its binary string as its DNA.
Step 2: Think of the problem to be solved as a fitness function that assigns a numerical value to each proposed solution.
Step 3: Repeatedly apply genetic mechanisms to the strings, reproduction of the more fit, crossover (sexual recombination), mutation, and inversion, to create new populations of fitter individuals. (A minimal code sketch of these steps appears below.)
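To make the recipe concrete, here is a minimal sketch in Python. The toy fitness function (count the 1s in a string), the population size, and the crossover and mutation rates are illustrative choices rather than Holland’s original settings, and inversion is omitted for brevity.

```python
# Minimal genetic-algorithm sketch of the three steps above.
# The fitness function (count the 1s), string length, and rates are
# illustrative choices, not Holland's original settings; inversion is omitted.
import random

LENGTH, POP_SIZE, GENERATIONS = 32, 50, 100
CROSSOVER_RATE, MUTATION_RATE = 0.7, 0.01

def fitness(bits):
    # Step 2: the problem as a fitness function; here, simply count the 1s.
    return sum(bits)

def select(population):
    # Fitness-proportionate ("roulette wheel") selection of one parent.
    total = sum(fitness(ind) for ind in population)
    pick = random.uniform(0, total)
    running = 0.0
    for ind in population:
        running += fitness(ind)
        if running >= pick:
            return ind
    return population[-1]

def crossover(a, b):
    # Single-point crossover: splice the two parents' strings together.
    if random.random() < CROSSOVER_RATE:
        point = random.randint(1, LENGTH - 1)
        return a[:point] + b[point:]
    return a[:]

def mutate(bits):
    # Flip each bit with a small probability.
    return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

# Step 1: a random population of binary strings.
population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP_SIZE)]

# Step 3: repeatedly reproduce the more fit, recombine, and mutate.
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

print("best fitness:", max(fitness(ind) for ind in population))
```

Even on this toy problem, reproduction of the more fit, crossover, and mutation quickly push the population toward strings of all 1s.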
Holland’s genetic algorithms computationally implement an analog of genetic evolution, where survival of the fittest means that better solutions reproduce more often. As an algorithm, evolution worked. Over the past 40 years, genetic algorithms have proven to be an effective general-purpose optimization procedure.
Equally important, within the algorithmic representation, Holland could prove analytic results. His Schema Theorem revealed how evolution solves problems by locating solutions to parts of the problem (what he called schemata), reproducing those through survival of the fittest, and then combining them through crossover.
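For readers who want the formal statement, the Schema Theorem is usually written as a lower bound on the expected number of strings matching a schema H in the next generation. The notation below is the standard textbook form rather than a quotation from Holland’s book.

```latex
% Holland's Schema Theorem (standard textbook statement):
%   m(H,t)     number of strings matching schema H at generation t
%   f(H)       average fitness of those strings
%   \bar{f}    average fitness of the whole population
%   p_c, p_m   crossover and mutation probabilities
%   \delta(H)  defining length of H;  o(H) its order;  l the string length
\mathbb{E}\left[m(H,\,t+1)\right] \;\ge\;
  m(H,\,t)\,\frac{f(H)}{\bar{f}}
  \left[\,1 \;-\; p_c\,\frac{\delta(H)}{l-1} \;-\; o(H)\,p_m\,\right]
```

In words: short, low-order schemata whose members are fitter than average tend to spread through the population generation after generation.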
As substantial as their impact on optimization has been, genetic algorithms may have made an even larger contribution to modeling. They have been used to represent ecosystems, competing firms, political parties, collections of individuals, and even competing ideas within an individual.
In sum, Holland not only mimicked evolution on a computer; he developed a powerhouse algorithm and provided a framework within which scientists could derive analytic results that have deepened our understanding of how evolution works and helped identify conditions under which evolution may get stuck in the shallows and miseries.
He had only begun.
In that same book (yes, the same book), Holland defined a general-purpose problem-solving architecture called Learning Classifier Systems. A classifier system consists of a population of if-then rules that pass messages back and forth. The if-then rules evolve using a genetic algorithm, and the fitness of each rule emerges naturally in the model via what Holland called a bucket brigade algorithm. This algorithm within an algorithm offered a solution to the conundrum he had identified earlier: enabling fitnesses for solutions to subproblems to emerge in the process of solving the larger problem.
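The bucket brigade is easiest to see in a toy example. The Python sketch below keeps only the credit-assignment idea: each rule that fires pays a fraction of its strength, as a bid, to the rule whose message it answered, and the rule whose action finally earns an external reward keeps that reward. The specific rules, reward, and bid fraction are hypothetical illustrations, and the genetic algorithm that evolves the rules in Holland’s full system is omitted.

```python
# Toy sketch of bucket-brigade credit assignment among if-then rules.
# The rules, reward, and bid fraction are hypothetical illustrations, and the
# genetic-algorithm step of Holland's full classifier system is omitted.
BID_FRACTION = 0.1

class Rule:
    def __init__(self, condition, message):
        self.condition = condition   # message the rule responds to
        self.message = message       # message the rule posts when it fires
        self.strength = 10.0         # evolving estimate of the rule's usefulness

def run_chain(rules, start_message, reward, steps=3):
    """Fire a chain of rules; each winner pays its bid to its predecessor."""
    message, previous = start_message, None
    for _ in range(steps):
        matching = [r for r in rules if r.condition == message]
        if not matching:
            break
        winner = max(matching, key=lambda r: r.strength)  # highest bidder wins
        bid = BID_FRACTION * winner.strength
        winner.strength -= bid           # the winner pays its bid...
        if previous is not None:
            previous.strength += bid     # ...to the rule that set the stage
        message, previous = winner.message, winner
    if previous is not None:
        previous.strength += reward      # environmental reward ends the chain

rules = [Rule("A", "B"), Rule("B", "C"), Rule("C", "goal")]
for _ in range(50):
    run_chain(rules, "A", reward=5.0)
print([round(r.strength, 1) for r in rules])  # strengths of useful rules grow
```

After a few dozen chains, strength has flowed backward from the rewarded rule to the earlier rules that set it up, which illustrates how fitness for subproblem-solving rules can emerge from solving the larger problem.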
Classifier systems were capable of learning sophisticated tasks. They could play checkers. They could operate a pipeline.
Not surprisingly, Holland found a community of people interested in applying his algorithmic models to the brain. Along with psychologists Richard Nisbett and Keith Holyoak and philosopher Paul Thagard, Holland undertook a multi-year, cross-disciplinary study of induction, the process of inferring the general from the particular.
Their contribution, published as “Induction: Processes of Inference, Learning, and Discovery,” offered a general rule-based system that again linked the natural (the brain) and the artificial (computers).
Their system, a version of classifier systems, assumes a set of competing rules. At any one moment, a single rule is activated and sends a message. The rule that responds to that message is chosen according to its past usefulness and how closely its conditions match the message.
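A small sketch of that selection step, with wildcard conditions, might look like the following; the specificity-times-strength score is an illustrative simplification, not the authors’ exact specification.

```python
# Illustrative sketch of choosing which rule answers a message, weighing how
# closely the rule matches against how useful it has been. The scoring formula
# (specificity * strength) is an assumption for illustration only.
def match_score(condition, message):
    """0 if any non-wildcard position disagrees with the message; otherwise
    the rule's specificity (the fraction of non-'#' positions)."""
    if any(c != "#" and c != m for c, m in zip(condition, message)):
        return 0.0
    return sum(c != "#" for c in condition) / len(condition)

def choose_rule(rules, message):
    # rules: list of (condition, strength) pairs; pick the best-scoring rule
    return max(rules, key=lambda r: match_score(r[0], message) * r[1])

rules = [("1#0", 0.8), ("110", 0.5), ("###", 0.9)]
print(choose_rule(rules, "110"))  # ('1#0', 0.8): specific enough and strong
```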
In demonstrating how sequences of these messages can produce inductive reasoning, Holland and his coauthors made an early and seminal contribution to what has become the field of Cognitive Science, at the time a loose collection of computer scientists, mathematicians, psychologists, and philosophers trying to make analytic headway into understanding cognition.
In the 1990s, Holland helped to define the field of Complex Systems, characterizing both its boundaries and its frontiers. To the very end, John continued to play with ideas in his uniquely joyful, mischievous, and generous way.
We miss him. We will continue to explore, examine, and expand the ideas he so eloquently shared.
Scott Page is the Leonid Hurwicz Collegiate Professor of Complex Systems, Political Science, and Economics at the University of Michigan and an external faculty member at the Santa Fe Institute.