How Could the World Wide Mind Become Self-Aware?
So far we have looked primarily at interactions between individuals and within groups. Now let us consider whether the total network activity could become an “individual” in its own right, with a consciousness and intentionality arising apart from human wishes. That is, a hive mind. Science fiction is full of them. There is, of course, the Borg organism that we discussed in the last chapter. In Terminator 2 the computer network Skynet suddenly becomes sentient and decides to wipe out its inconvenient human creators. In the novel Neuromancer the protagonist is hired by a shadowy character named Wintermute, who turns out to be a collective machine intelligence.
The analogy behind such scenarios is straightforward. It goes like this: Individual neurons aren’t intelligent, but they are densely interlinked, and intelligent behavior somehow emerges from the collective. Similarly, individual computers aren’t intelligent, but they, too, are densely interlinked. It’s easy to wonder if intelligence might somehow emerge from that inter-linkage in a roughly similar way. Many people have surmised that any sufficiently large and organized collection of switching units could achieve humanlike intelligence. The idea is not a new one. In Nathaniel Hawthorne’s 1851 novel The House of the Seven Gables one of the characters exclaims,
Is it a fact—or have I dreamt it—that, by means of electricity, the world of matter has become a great nerve, vibrating thousands of miles in a breathless point of time? Rather, the round globe is a vast head, a brain, instinct with intelligence!
Hawthorne imagined a “global brain” coming into existence through the activity of people exchanging messages via telegraph. More recently, artificial intelligence researchers have been trying to create “brains” in silico by writing algorithms that mimic information flow in the neocortex. So far, they have not yet succeeded in creating self-aware minds. But the Internet has billions of computers, and humanity has billions of brains. If they were all sufficiently interconnected, would a new and independently intelligent entity arise in its own right? And if it did, how would we know?
We might not know if its intelligence is radically different from our own. Our anthropomorphism might prevent us from seeing it. Humans have been slow to recognize the intelligence of creatures such as the octopus. Octopuses are extremely bizarre animals from a human standpoint; they’re essentially boneless heads with eight feet attached. Yet they have extraordinary powers of mimicry. They can instantly alter their color and skin texture to mimic the appearance and behavior of other undersea creatures such as kelp22 and flounder. Just as it takes intelligence for an actor to take on gestures and accents unlike his own, an octopus may have the intelligence to observe and “inhabit” the ways of being of other creatures. They can change their skin color twenty times a minute, and this could enable a form of communication with other octopuses. There is a real possibility that we are unable to recognize their intelligence because it is so different from our own. We might be similarly limited in recognizing the intelligence of a “mind” that is distributed among millions of computers.
But whatever its form, that intelligence would have to reach a certain threshold to be of interest—or alarm—to us. A collective mind with the intelligence of, say, a jellyfish would not be particularly interesting. Scientists already know that collective “minds” like ant colonies do things that can reasonably be called “conscious,” such as finding food and consuming it. However, while an ant colony arguably possesses awareness, it doesn’t reflect on that awareness. It doesn’t make novel plans and act to carry them out. So for an intelligence to be of interest to us, it should be, at a minimum, not just aware but self-aware. It should be aware that it exists in a self-referential way. It should be able to use symbolic language, make abstractions, imagine possible futures, and make plans for realizing them. It should be able to manipulate its environment by using robots or by persuading humans to do its bidding. A mind that could do that would have to leave some clear trace of its existence in the world; there would inevitably be evidence of its planning and acting. Can we see any such thing in today’s Internet?
Is the Internet Self-Aware?
Let’s consider the Internet by itself first to see if it exhibits any such trace of self-awareness. I began this query by contacting Vint Cerf, who is often called the Father of the Internet because he invented the TCP/IP communications protocol. He is now a VP at Google. He told me that the total number of computers connected to the Internet as of 2009 was between 1.5 billion and 2 billion.23 Not all of them would be on at the same time, he reminded me, so the actual number would fluctuate. In particular, the earth’s rotation would cause fluctuations as various countries go through their day/night cycles.
Two billion computers is a lot, but it is not enough to support the possibility of self-awareness. It is fifty times fewer than the 100 billion neurons in the human brain. If we assume that intelligence is related to the number of available nodes, then the Internet doesn’t even come close to humanlike intelligence. Furthermore, today’s computers are much less diverse than the neurons in a brain. We saw in Chapter 8 that there are many different kinds of neurons in a human brain, each carrying out a specialized function. Neurons are also densely interconnected, with many types coexisting in any given cubic millimeter. We don’t see such specialization and integration on the Internet. There are only a few kinds of computers: supercomputers, personal computers, servers, routers, PDAs, cellphones, and various consumer appliances, and that’s about it. In principle, all of today’s computers are essentially the same.
Furthermore, they are much less interconnected than neurons in a brain. Recall that on average, each neuron in a brain’s cortex receives input from seven thousand other neurons, with much of that information coming in simultaneously. Computers aren’t as densely interconnected and each one can process only one incoming bit at a time.
Nor do their networks have the requisite structural complexity. The Internet hasn’t developed the feedback and feedforward loops that are so crucial to mammalian perception and memory. It’s primarily a vast collection of resources. When one brings up the New York Times, the code underlying the page tells the browser to get the text from one computer, the images from another, and the ads from still another. There is no countering flow of traffic that imposes expectations and constraints. The Internet can, of course, retain massive amounts of information on hard drives, but it doesn’t understand that information. It just stores it.
In fact, the Internet doesn’t really do anything. And why should it? Biological creatures needed to evolve in order to survive and reproduce. The Internet, on the other hand, has no such pressures. It gets all the “food” it needs and has no predators. It doesn’t have evolutionary pressures pushing it to become more complex in order to survive. So unlike living creatures in an ecosystem, it has no internally generated need to evolve behaviors that enhance its chance of survival.
To put it more precisely, the Internet has no internal reason to prefer one kind of organization over another. If the Internet’s needs for energy are always met, why should it develop an analogue to the dopamine system that modulates the wanting of food and the pleasure of getting it? If it has no predators, why should it develop an analogue to the amygdala, which modulates fear and escape behavior? If it has no physical effectors on the environment, why should it develop a brain stem and a motor cortex? If it has no need to reproduce, why should it develop the equivalent of testosterone and estrogen? And lacking all of those things, why should it develop feelings and thoughts?
Even if it had any of those needs, it lacks an internal mechanism of random mutation that generates “trial” configurations for presentation, as it were, to the environment. The Internet can’t by itself generate new network configurations or protocols. And even if it could, they would fail. In computer systems, a “mutation” is almost always deadly: a single incorrect byte in a program can bring it to a halt.
Rodolfo Llinas suggested in 2001 that the Internet might best be compared to a jellyfish, which has no central nervous system and nothing that could be called a “brain.” What a jellyfish does have is a loose network of nerves in its skin called a “nerve net.” With its two billion relatively unspecialized, homogeneously organized nodes, we can think of today’s Internet as a sort of super-jellyfish. It doesn’t have much differentiation and specialization. The types of communication between its parts are very simple. So on its own, it can’t be self-aware.
Is the Internet + Humanity Self-Aware?
The Internet may be only a super-jellyfish, but the situation changes entirely when we look at it together with human activity. Human beings do experience evolutionary pressures, and organizations and cultures do evolve. As of 2010 there were about two billion Internet users in the world (that is, people, not computers). That roughly doubles the number of nodes in our hypothetical “brain” from two billion to four billion.
Four billion nodes still isn’t a lot compared to a human brain. But specialized organs do appear to be forming, as in the “digital frontal lobes” mentioned in the work of Elkhonon Goldberg. In humans, Goldberg explains, the frontal lobes pull together the activity of many areas of the brain in order to carry out high-level cognitive functions such as formulating goals and manipulating abstract representations. They are not by themselves functionally intelligent, just as a CEO does not by herself perform the work of a company. They work by integrating work done elsewhere. The frontal lobe gets “votes” from many parts of the brain and uses them to select what is most important at any given moment.
Goldberg suggests that search engines such as Google are beginning to perform an analogous integrative function. They are, in other words, nascent digital frontal lobes. Google uses an algorithm named PageRank to decide which Web pages on a given topic are the most valued. It runs, in essence, a popularity contest. Each link to a page is considered a “vote” for that page, in that some human being has deemed the page worth reading. The more links that go to a given page, the more significant that page is inferred to be.
Google has other ways of evaluating the social importance of a page. It executes a sort of circular digital snobbery, ranking a page more highly if highly ranked pages point to it. It also measures how long users spend on a page they reach via a link. Presumably, the more time they spend there, the more useful they judge it to be. In doing these things Google is aggregating the “votes” of many human beings, and it gets a lot of votes; in 2009 Google served about three billion searches per day. It wouldn’t surprise me if Google mined information from Gmail too, because a link emailed to another person can also count as a vote for that page.
As we have seen, memory and learning are very closely connected. In an eerily analogous way, Google’s PageRank algorithm resembles Hebbian learning. A highly ranked page will garner more page views, thus strengthening its ranking. Just as neurons that fire together wire together, pages that link together “think” together. If many people visit a page over and over again, its PageRank will become so high that it effectively becomes stored in the collective human/electronic long-term memory. Pages in Wikipedia often have the highest rank of all, making them essentially permanent. So we can say that Google also functions as a primitive hippocampus, the part of the brain that decides which short-term memories are worth converting into long-term ones.
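The vote-counting logic described above can be sketched in a few lines of code. This is a minimal toy version of the power-iteration idea behind PageRank, not Google’s actual implementation; the four-page “web” and its link structure are invented for illustration.

```python
# A toy PageRank: each link is a "vote," and rank flows along links.
# The pages and links here are hypothetical, purely for illustration.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                      # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:           # each outlink casts a "vote"
                    new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],   # D votes for C, but no one votes for D
}
ranks = pagerank(toy_web)
# C, with the most inbound votes, ends up ranked highest;
# D, with none, ends up ranked lowest.
```

The self-reinforcing quality the text compares to Hebbian learning is visible here: a page’s rank depends on the ranks of the pages voting for it, so well-linked pages pull further ahead with each iteration.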
Google also helps the Internet “forget” things. Pages with few links to them are ranked so low that no one ever finds them in a search; they are effectively nonexistent. This is a lot like synapses weakening or disappearing from lack of reinforcement. To be sure, hard drives never lose data unless they crash, but people often close down pages for lack of interest. Or the pages get moved in such a way that breaks all of the links to them. In either case the forgetting becomes quite literal; the knowledge becomes inaccessible.
Wikipedia has its own form of “intelligence” emerging out of the way it is collectively created and edited. It, too, is the sum total of many individual judgments about what is important and not important. Instead of links, it counts on collective creation and “scrubbing” of passages and sentences. It produces a different kind of knowledge than Google’s PageRank, but the two harmonize remarkably well. Google and Wikipedia together can be seen as forming a nascent forebrain, hippocampus, and long-term declarative memory store.
This is beginning to look rather brainlike, in that we have specialized organs communicating densely, and circularly, within and among themselves. But it’s not the Internet by itself that’s beginning to look brainlike. It’s a combination of four things: human declarative knowledge, human choices about that knowledge, a computer system that collects votes about those choices, and a high-speed, far-flung communications network that integrates them all.
Other “organs” may be coming into existence too. Online newspapers could be seen as sensory organs. Blogs and newspaper columnists could be seen as a collective amygdala, in that they respond emotionally to events and thus signal their importance to the rest of the system. (The neuroscientist Antonio Damasio has shown that the amygdala is indispensable for rational thought. Patients with damage to the amygdala can solve puzzles with normal facility, but they can’t make choices among competing options. They become paralyzed by indecision, since they have no emotions guiding them in one direction or another.) Facebook can be seen as the beginning of an oxytocin/vasopressin/serotonin system, in that it acts as a modulator of social bonding. Dating websites are pure testosterone and estrogen, facilitating mating displays and pair-bonding. Viruses and antivirus programs are pathogens and immune systems.
With only four billion nodes, the Internet plus its human users isn’t as intelligent as a human. But it could very well be as intelligent as an animal. In the abortive Iranian revolution of 2009, citizens used Twitter to marshal a collective protest. Unfortunately, the regime won by using overwhelming physical force. They blocked servers and imprisoned key people—that is, key “nodes”—and soon the protest petered out. But while it lasted it was arguably a conscious entity. It responded directly and organically to what the government did. It was conscious of political events. It distinguished between useful and useless information. Its behavior was analogous to the way an animal observes, seeks energy sources, fights, and flees. It was primitively aware.
But not self-aware. The Internet plus humanity still lacks sufficient size and organization for self-awareness. However, as I argued in Chapter 1, there are evolutionary pressures driving both humanity and the Internet to higher levels of complexity. Companies have to develop ever more sophisticated hardware and software to survive. Repressive regimes are predators and competitors that drive new strategies of avoidance and competition. Democratic governments can be seen as symbiotic entities. Rewards go to entities (such as bloggers) who produce valued content. Hackers try to build better viruses, and antivirus companies try to outwit them. The ancient push-pull dynamic is at work everywhere. When humans and the Internet physically merge, exponentially increasing the information flow between them, that might be when the combination of the two reaches self-awareness.
How the Ants Attained Awareness
How could a distributed system like the Internet and its users become self-conscious? For clues let’s look at how ant colonies have reached the lesser level of simple collective awareness. If you don’t believe an ant colony can be aware of its world, take a look at leafcutter ants.
Leafcutter ants are one of the most sophisticated breeds of ant in the world. They do astonishingly complex things. They slice up leaves, carry the pieces back to the hive, and store them in chambers where a fungus grows on them. The fungus is the colony’s food supply. To cultivate it, worker ants pull tufts out of existing growths and deposit them on new leaves. They feed it by depositing droplets of feces on it, and keep the strain pure by plucking out alien spores. They also deposit secretions containing antibiotics that control the growth of microorganisms. The ants literally farm their own food supply instead of going out and foraging for it. Leafcutter ant colonies can have populations in the millions. One leafcutter nest was found to contain 1,920 chambers, of which 238 were fungus chambers. The ants deposited forty tons of soil on the surface while building it.
What smart ants! you might say. But individual ants, including the queen, have very, very tiny brains. Their intelligence exists on a collective rather than individual level. As with neurons, the ants specialize. There are several castes, each of which performs a different function. There are reproductive castes, worker castes, and defensive castes. The ants in each caste follow very simple rules. When worker ants randomly stumble across food they exude pheromone trails as they make their way back to the hive. Other ants that encounter the trails follow them to the same spot. Those ants too lay down pheromones, strengthening the trail and recruiting still more ants. For a while there is a highway of mindlessly excited ants tromping back and forth between colony and food—a pseudopod reaching out for the food, if you will. When the food is gone the returning ants stop emitting pheromones. The pheromone trail fades out, and the pseudopod dissolves.
No individual ant is smart enough to have a “go get food” plan in its head. All it can do is follow its simple rules. But out of those rules, a clear collective awareness of the world emerges. We can say that the colony looks for food, reaches out, takes it, and then looks elsewhere for more. Other rules apply as well. If an ant encounters another ant that has the distinctive hive body odor, it ignores it or cooperates with it; if it has a different odor, it attacks it. That’s how ant colonies go to war with each other, competing for dominance in a tough ecosystem. Out of simple rules, complex behaviors emerge.
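The trail-laying rules just described are simple enough to simulate. The sketch below is a deliberately crude model, with invented numbers: two paths to food, ants choosing a path in proportion to its pheromone level, returning ants reinforcing their path, and all trails evaporating a little each step. The shorter path is assumed to be reinforced faster (its round trips are quicker), here modeled as a larger deposit.

```python
import random

# A toy pheromone-trail model. All parameters are invented for illustration:
# two paths to a food source, with the short path reinforced twice as fast.

def simulate(steps=500, evaporation=0.05, seed=1):
    random.seed(seed)
    pheromone = {"short": 1.0, "long": 1.0}   # start with no preference
    deposit = {"short": 1.0, "long": 0.5}     # quicker round trips reinforce faster
    for _ in range(steps):
        total = pheromone["short"] + pheromone["long"]
        # Each ant follows existing trails in proportion to their strength...
        path = "short" if random.random() < pheromone["short"] / total else "long"
        pheromone[path] += deposit[path]      # ...and lays pheromone on its way back.
        for p in pheromone:                   # unused trails fade out
            pheromone[p] *= 1.0 - evaporation
    return pheromone

trails = simulate()
# The colony "decides" on the short path: its trail ends up far stronger,
# even though no individual ant compared the two routes.
```

The point of the sketch is the one the text makes: the decision emerges from reinforcement and evaporation, not from any plan inside an individual ant.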
The biologists E. O. Wilson and Bert Holldobler call ant colonies superorganisms. They offer a fascinating table listing the parallels between mammals and ant colonies. I reproduce it below in slightly simplified form.
Organism (mammal) | Superorganism (ant colony)
Cells | Colony members
Organs | Castes
Gonads | Reproductive castes
Somatic organs | Worker castes
Immune system | Defensive castes; also the particular smell that each worker has
Circulatory system | Food distribution (regurgitation of food between ants, use of pheromone trails)
Sensory organs | Combined sensory apparatus of colony members
Nervous system | Communication and interactions among colony members
Skin, skeleton | Nest
Organogenesis (that is, growth of the embryo into the adult) | Sociogenesis (growth of the colony by forming new castes over time)
Table 1. The parallels between mammals and ant colonies.
Used by permission of Bert Holldobler.
Given how strong the parallels between ant colonies and mammals are, it’s reasonable to say that ant colonies are conscious, albeit not self-aware. Yet even as merely “aware” beings, they are fantastically successful. Ants have roughly the same global biomass as human beings.
The Internet as an Altruistic Worker Caste
Only 2 percent of all insect species have made the leap to being aware superorganisms, with ants and bees being the most visible examples. How did they do it? And can humans plus the Internet make an analogous leap?
Holldobler and Wilson suggest that ants made the leap to superorganism status when they evolved a sterile, altruistic worker caste. Sterile ants tend the queen’s eggs and rear them, and defend the colony against invaders. Consider how unusual sterility is in nature. Normally, organisms struggle to reproduce and that gives them an “agenda” of their own separate from the group’s. But sterile worker ants have no personal agenda, since they cannot reproduce. They will fight to the death to defend the colony. In supporting the queen they allow her to lay more eggs than she could have otherwise. Sterile workers exhibiting altruistic behavior are so unusual and so beneficial that Holldobler and Wilson suggest that they are the element enabling an insect colony to attain superorganism status. Once a colony evolves them, it has made the leap to that higher level of existence.
The Internet may be our sterile worker class exhibiting altruistic behavior. It has no survival or reproduction agenda of its own, and it performs numerous functions on our behalf. Moreover, it is a sterile worker class of great versatility. Computers can store and follow considerably more programmed behaviors than ants can. So just as altruistic workers enabled ants to make the leap to consciousness, the Internet may enable humans to make the leap to... call it hyperconsciousness. Again, physical fusion with the Internet would make it emerge all the faster.
So let’s extend Holldobler and Wilson’s analogy to include this prospective new species, which I’ll call a hyperorganism.
Organism (mammal) | Superorganism (ant colony) | Hyperorganism (humans + Internet)
Cells | Colony members | Human beings and computers
Organs | Castes | Tools (Google, Facebook, Twitter)
Gonads | Reproductive castes | Capitalist production, human reproduction
Somatic organs | Worker castes | Transportation system, robotics
Immune system | Defensive castes; also the particular smell that each worker has | Military, antivirus programs
Circulatory system | Food distribution (regurgitation of food between ants, use of pheromone trails) | Agriculture, power plants, Internet backbone
Sensory organs | Combined sensory apparatus of colony members | Combined sensory apparatus of human members, machine sensors (weather, stock market, and network-activity data)
Nervous system | Communication and interactions among colony members | Human language, TCP/IP
Skin, skeleton | Nest | Biosphere, network
Organogenesis (that is, growth of the embryo into the adult) | Sociogenesis (growth of the colony by forming new castes over time) | Cultural genesis (growth by innovating new tools and industries)

Table 2. Extending Holldobler and Wilson’s table to hyperorganisms.
How Would We Know a Hyperorganism Existed?
Maybe we can’t know, by definition. A cell can’t know the goals of an animal. A neuron can’t grasp the thoughts of a brain. But we might see new phenomena that can’t be explained in the usual ways. For example, today we can come up with reasonable-sounding explanations of stock market crashes. The 1987 crash has been blamed on program trading. The 2008 crash is explainable in terms of human behavior, such as irresponsible lending that fed a housing bubble. But if a crash happened that couldn’t be explained in those kinds of ways, then we might be entitled to ask if some higher-level entity was having a higher-order problem.
Advances to higher levels typically lead to higher-level problems. For example, while the evolutionary leap to language was a great advance, it also opened up the possibility of an entirely new kind of problem: schizophrenia. The hallucination of voices is thought to result from the inability of the brain to distinguish its own inner speech from the speech of others. It is uniquely a disorder of a self-conscious, language-using mind. Tim Crow, a psychiatrist at Oxford, writes that “schizophrenia is the price that Homo sapiens pays for language.” Thus while chimpanzees can become neurotic, their brains are not complex enough to become schizophrenic. Imagine, for the sake of argument, chimp doctors clustered around a chimp that had magically acquired language but sadly went schizophrenic in the process. They wouldn’t know what the chimp was saying. They wouldn’t even understand the concept of language. But they would know that whatever strange illness this chimp had, it was like nothing they’d ever seen before.
Therefore, evolutionary progress does not mean that a species outgrows all of its problems and proceeds to live in an earthly paradise. To the contrary, ascent to a more complex state creates more complex problems. When agriculture became sophisticated enough to meet the Western world’s food needs, it ended the problem of hunger but created the new one of obesity. When the printing press solved the problem of information scarcity, it created the new problem of information overload. When the automobile abolished limitations of mobility, it created the new problem of urban sprawl. Thus while a civilization with a transpersonal mind will be capable of doing marvelous new things, it will also certainly have problems of a kind that have never been seen before. To humans, those problems may manifest themselves as troublesome phenomena that can’t be explained or solved in the usual ways.
In fact, we may be seeing an analogously inexplicable phenomenon with the ants. Ant experts say that ants have reached a dead end and will evolve no further. They say that individual ants can’t become smarter because their heads would become prohibitively heavy if they got bigger. (As an exoskeleton gets larger it becomes disproportionately heavy, because weight grows with the cube of linear size while supporting cross-section grows only with the square.) Nor can ants physically interact with more ants than they do now, which limits their ability to evolve more sophisticated signaling behaviors. So the amount of storable information in a colony and its transmission speed can’t increase by very much. Holldobler and Wilson write, “Social insects are still ruled rigidly by instinct, and they will remain so forever.”
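The square-cube law behind that claim is easy to check with a quick calculation, assuming simple geometric scaling of the body:

```python
# Square-cube law, assuming an ant scales up geometrically in all dimensions.
def scale(factor):
    volume = factor ** 3          # weight grows with volume (cube of size)
    cross_section = factor ** 2   # supporting strength grows with area (square)
    return volume, cross_section

weight, strength = scale(2.0)     # double the ant's linear size
# weight is 8x, strength only 4x: the load per unit of support doubles,
# which is why a bigger head quickly becomes prohibitively heavy.
```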
Yet there is something funny going on in the ant world. Recently scientists have discovered ant colonies that theoretically shouldn’t exist. Called unicolonies, they have billions of members and stretch over thousands of kilometers. Only thirty-one were known in the world as of 2009. While they are composed of many adjacent nests, the nests are together considered a single colony because the ants cooperate even though they’re not related. (Unrelated ants smell different to each other, which normally induces them to attack each other.) It’s as if alien anthropologists deduced that the United States is a single country because people from different states don’t go to war with each other. By what we know of ant law, the unrelated ants in unicolonies shouldn’t cooperate. Why should an ant bother to help unrelated ants, since doing so may divert resources from its own relatives? Yet the unrelated ants in a unicolony do cooperate. Thus freed of the internecine battles that normally decimate ant colonies, a unicolony can become far larger than an ordinary ant colony.
And they are even more successful than ordinary ant colonies. A unicolony in Arizona made of crazy ants went to war with human beings and won. It invaded Biosphere 2, a two-year, $200 million experiment in creating a sealed and self-sustaining ecosystem. It wiped out all eleven of Biosphere 2’s purposefully introduced species of ants. It also wiped out all of the crickets and grasshoppers. In the end it took over the entire biosphere, forcing the humans out. “Swarms of them crawled over everything in sight: thick foliage, damp pathways littered with dead leaves and even a bearded ecologist,” reported the New York Times. “The would-be Eden became a nightmare, its atmosphere gone sour, its sea acidic, its crops failing, and many of its species dying off. Among the survivors are crazy ants, millions of them.” The experiment was shut down in total defeat. Another unicolony literally took over Christmas Island in the Indian Ocean, killing off ten to fifteen million crabs and a large percentage of the trees and birds.
Scientists’ response to unicolonies has been, “They can’t last.” They predict that the ants in different families will eventually stop cooperating, which would make the unicolony break up into ordinary colonies or erupt in a massive war. One researcher writes, “Whereas evolutionary biology can rarely predict where any species is going, it does predict that, despite their short-term ecological success, unicolonial ants are an evolutionary dead end.”
Except they’re doing pretty well so far. Something weird is going on when a collective entity that should collapse in fact takes over islands and wins wars with humans. I’m not suggesting that unicolonies have invalidated current theories by ascending to a higher evolutionary level, because it may be that some expansion of an existing theory will explain them. Rather, my point is that this is, broadly, the kind of anomaly we might see when any species makes a leap to a higher level. Our perplexity with unicolonies may foreshadow an analogous perplexity with the Internet plus humanity. There may come a day when we start seeing activity on the Internet that simply does not make sense in terms of what we know about hardware, software, and human behavior. We might decide, after much study, that a new form of intelligence is at work.
If the Internet plus humanity becomes a hyperorganism, we might see that something very strange is going on. We could collect data, run analyses, and build models. We might figure out that a new entity had come into being. But we would never know its inner life.
But then, we never know each other’s, either.