Renowned neurologist Richard Cytowic exposes the dangers of multitasking in the digital age.
This article was originally featured on The MIT Press Reader and is adapted from Richard Cytowic's book "Your Stone Age Brain in the Screen Age."
Whether applied to machines or human brains, the term "multitasking" is a misnomer. Despite marketing claims, your computer does not multitask, and neither does your brain. The latter simply cannot, whereas a computer's processor divvies up its time, apportioning a slice -- 200 milliseconds, say -- to each task in turn. Round and round it goes until everything is done. The inherent inefficiency of having to split up processor time is why your computer bogs down the more you ask it to do.
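To make that round-robin picture concrete, here is a minimal sketch in Python. The slice length, per-switch cost, and task names are invented for illustration; real operating-system schedulers are far more sophisticated, but the bookkeeping burden is the same in spirit.

```python
# Toy round-robin scheduler: each task gets a fixed slice of time in turn,
# and every switch between tasks burns a little extra time as overhead.
# The slice length, switch cost, and task names are assumptions for
# illustration only.

SLICE_MS = 200       # milliseconds of work granted per turn
SWITCH_COST_MS = 5   # assumed milliseconds lost to each context switch

def round_robin(tasks_ms):
    """tasks_ms maps a task name to milliseconds of work remaining."""
    elapsed_ms = 0
    while any(remaining > 0 for remaining in tasks_ms.values()):
        for name, remaining in tasks_ms.items():
            if remaining <= 0:
                continue
            work = min(SLICE_MS, remaining)
            tasks_ms[name] -= work
            elapsed_ms += work + SWITCH_COST_MS  # overhead on every switch
    return elapsed_ms

jobs = {"email": 1000, "report": 2000, "backup": 1500}
total = round_robin(dict(jobs))
print(f"useful work: {sum(jobs.values())} ms, wall clock: {total} ms")
```

Even in this toy model, the more tasks you juggle, the more wall-clock time disappears into pure switching overhead -- the machine analogue of the attention residue described below.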
Brains respond the same way when multiple tasks vie for attention. We lack the energy to do two things at once effectively, let alone three or five. Try it, and you will do each task less well than if you had given each one your full attention and executed them sequentially.
According to Cal Newport, a professor of computer science at Georgetown University, simply jumping over to check your inbox and coming right back can be just as damaging as multitasking. "When you looked at that email inbox for 15 seconds, you initiated a cascade of cognitive changes." He says even minor switching from a current task to a different one is "productivity poison."
Stanford University professor Clifford Nass presciently saw multitasking as particularly insidious. One of his most cited studies assumed that heavy multitaskers would excel at ignoring irrelevant information and at switching tasks, and accordingly would have superior recall. He was wrong on all counts: "We were absolutely shocked," he told "Frontline" in 2009. "Multitaskers are terrible at ignoring irrelevant information; they're terrible at keeping information in their head nicely and neatly organized; and they're terrible at switching from one task to another."
Nass assumed that people would stop trying to multitask once shown the evidence of how bad they were at it. But his subjects were "totally unfazed," continuing to believe themselves excellent at multitasking and "able to do more and more and more." If individuals in a controlled experiment are this oblivious and refuse to change when confronted with proof of their shoddy performance, then what hope do the rest of us have as we wade through the daily sea of digital distractions?
Watching television while using another smart device is now so common that over 60 percent of U.S. adults regularly engage in "media multitasking." Media multitaskers have more trouble maintaining attention and a greater propensity to forget than controls do; their anterior cingulate cortex (a brain structure involved in directing attention) is also physically smaller. Another study found that the more minutes children spent screen multitasking at 18 months of age, the worse their preschool cognition and the more behavioral problems they exhibited at ages four and six. The authors advise positive parenting and avoiding screen multitasking before the age of two.
The challenges of multitasking are especially acute in fields like medicine, where attention to detail can mean the difference between life and death. A powerful example comes from a training session with George Washington University medical students in which we scrutinize an incident that reportedly occurred at another well-known teaching hospital. The story was shared with me during our faculty development sessions, though the specific details were withheld due to privacy and confidentiality concerns.
During bedside rounds on the pediatric cancer ward, the story goes, the whoosh of an incoming text distracted the resident physician, who was entering medication orders and updating the electronic chart. She was a digital native; the message was about a friend's upcoming party, nothing crucial in the context of the moment. But it seized her attention long enough to flush her working memory. (An interruption never switches attention cleanly: a remnant always stays behind, hooked onto the previous task, and the larger the residue you carry, the higher the switching costs.) The clinical team at the bedside was discussing changing the dose of an intravenous drug, and she failed to enter the change. By the time the omission was discovered, the four-year-old patient had developed kidney failure and gone into shock. In a separate study of 257 nurses and 3,308 pediatric intensive care patients, medication errors became more likely when a text or phone call came in on a nurse's assigned institutional phone "in the 10 minutes leading up to a medication administration attempt."
We have all experienced how electronic medical records steal time from mutual doctor-patient engagement -- one example of the negative consequences of multitasking. Instead of looking, listening, and laying on hands, physicians now must type and check boxes on multiple screens to satisfy bureaucratic demands. A doctor may spend the entire appointment time facing a computer screen. Logically, it should not matter whether a doctor takes handwritten or electronic notes. But it does matter because "hearing" is not the same as "listening." The former is a passive act of perceiving audible sounds whereas the latter is an active effort to understand another's perspective, what they are feeling and trying to communicate.
Electronic records demand so much of a physician's attention and working memory that attentive listening has become impossible. Forced to focus on the screen, physicians miss facial expressions and patient body language. Doctors have been handling interruptions for decades without making these kinds of errors. But screen-based distractions are different in kind, leading to more frequent errors in situations that demand attention. Something powerful takes hold of attention's so-called spotlight. Perhaps it is time to resurrect a phrase that all parents once knew: "Look at me when I'm talking to you."
At George Washington University a colleague and I teach small groups of medical students over their four years of study. We instruct them in clinical reasoning and professional development. We watch smart young adults transform into singular professionals who master huge amounts of factual knowledge as well as the know-how to apply it judiciously in the practice of medicine. Accomplishing a transition like this demands a high degree of sustained focus. It is the art of medicine we teach because the human being is more than the human body. Increasingly, though, I witness inattention undermining our students, especially the undergraduates I encounter.
As I waited for the elevator the other day, the doors opened and a dozen undergrads spilled out. All were staring down at their phones, oblivious to my presence even as they jostled and bumped into me. The screen lock on their attention had created a blind spot that made me invisible, a normal feature of perception called "inattentional blindness," a close cousin of "change blindness." (You can see a mind-blowing example of it by watching the "invisible gorilla" test on YouTube.) The phenomenon is not a flaw or an optical illusion: The brain evolved to ignore whatever lies outside its immediate focus, even when it stares us in the face. Some types of brain damage suspend patients in a perpetual state of inattentional blindness, a variety of agnosia (from the Greek meaning "not knowing"). In common terms, neurology calls this looking but not seeing. A growing proportion of the population seems to be drifting through life, looking at their screens but not seeing what is going on around them.
Because attention is like a sharp-edged spotlight, we can never know what we are missing. Anything outside its perimeter lies, by definition, within our mental blind spot. The undergraduates who mindlessly careened into me had already acquired habits that were actively undermining their ability to learn, think, and remember. Worse, their screen fixation made them oblivious to their self-inflicted handicap.
An enduring myth says we use only 10 percent of our brain. The other 90 percent presumably stands idly by as spare capacity. If the premise of untapped intellectual capacity were true, then films such as "Johnny Mnemonic," "Lucy," and "Limitless" would be documentaries rather than thrilling science fiction.
Yet two-thirds of the American public and more than a quarter of its science teachers (yes!) mistakenly believe the 10 percent myth, which perhaps underlies the assumption that one can multitask and overcome distractions by sheer force of will. Worse, many American science teachers believe that enriching a child's environment -- with Baby Einstein videos or iPads clamped to bassinets, car seats, and potty trainers -- enhances intellect, despite a dearth of evidence that such products do any such thing. On the contrary, ample evidence shows that introducing tech impedes the natural development of a child who would otherwise get normal amounts of person-to-person interaction. It is true that growing up isolated and deprived of human contact drastically stunts brain development. But it does not logically follow that using tech to supplement a child's typical environment will boost cognitive development. Too much stimulation is just as detrimental as too little. Besides, it isn't stimulation per se that is crucial but the social context in which it occurs.
When we live in a rich social environment, our neural networks self-calibrate, self-assemble, and adapt to stimulation, experience, and context. Yet energy consumption trumps all other factors. When we measure how the brain actually uses energy, the proposition that we have untapped reserves doesn't hold up. There is no sluice gate to open that will provide more juice to multitask or think genius-league thoughts. To see why, look at brain size and how it scales with the energy the brain must consume just to stay alive. Over the past 2.5 million years the human brain grew proportionally much faster than the human body. Our central nervous system is nine times larger than expected for a mammal of our weight. The cortex constitutes 80 percent of the brain's volume, and its prodigious energy consumption begat higher-calorie meals and the invention of cooking. Cooking renders the calories in food more easily absorbed and allows us to consume meat protein and carbohydrates that are otherwise indigestible in their raw forms.
A rat's or a dog's brain consumes about 5 percent of the animal's total daily energy requirement. A monkey's brain consumes 10 percent. An adult human brain accounts for merely 2 percent of the body's mass yet consumes 20 percent of the calories we ingest, whereas a child's brain consumes 50 percent and an infant's 60 percent. These numbers are larger than one would expect for their relative sizes because, in all vertebrates, brain size scales in proportion to body size. Big brains are calorie-expensive to maintain, let alone operate, and the discrepancy means that energy demand is the limiting factor no matter what size a particular brain reaches.
Generating electrical spikes in a cell is also costly in terms of energy. We know this thanks to research undertaken during the administration of general anesthesia. As a person loses consciousness, brain activity gradually shuts down until it reaches the "isoelectric condition," the point at which signaling has stopped and the calories still being burned -- about half the usual amount -- go entirely toward housekeeping: the pumping of sodium and potassium ions across cell membranes to maintain the resting electrical charge that keeps the brain's physical structure intact. This never-ending pumping alone means that the brain must be an energy hog.
Even if only a tiny percentage of neurons in a brain region were to fire simultaneously, the energy burden of generating spikes over the entire brain would still be unsustainable. Here is where the innate efficiency of evolution comes in. Letting just a small fraction of cells signal at a given time -- known as "sparse coding" -- uses the least amount of energy but carries the most information because a small number of signals have thousands of possible paths by which to spread themselves out over the brain.
A major drawback of the sparse coding scheme is that it still costs a lot to maintain our 86 billion neurons. If some neurons never fire (meaning they never generate a current strong enough to travel down the axon and cross the synapse to the next neuron in line), then they are superfluous, and evolution should have jettisoned them long ago. But it didn't. What evolution did discover by way of natural selection was the optimum proportion of cells a brain can keep active at any given instant. That number depends on the ratio between a resting neuron's housekeeping cost and the additional cost of sending a signal down its axon. For maximum efficiency, it turns out, between 1 and 16 percent of cells should be active at any given moment. We do use 100 percent of our brain, just not all of it at the same instant.
Maintaining this balance is part of homeostasis, which we can think of as a budget whose currency consists of all the metabolic molecules that maintain our 86 billion neurons. As with a budget based on money, you can run a metabolic deficit and have your ledger go into the red. When this happens, the brain slashes processes that are too expensive, resulting in fatigue, boredom, clumsy errors, and foggy thinking. The window of maximum efficiency noted above refers to an instantaneous snapshot of energy consumed by whatever neurons are firing at the time. The need to marshal resources in the most efficient manner also explains why most brain operations must be unconscious.
Keeping ourselves alert and conscious and shifting, focusing, and sustaining attention are the most energy-intensive things our brain can do. The high energy cost of cortical activity is why selective attention -- focusing on one thing at a time -- exists in the first place and why multitasking is an unaffordable fool's errand.