18  Algorithm aversion

In the web magazine Behavioral Scientist, Jason Collins writes:

The first American astronauts were recruited from the ranks of test pilots, largely due to convenience. As Tom Wolfe describes in his incredible book The Right Stuff, radar operators might have been better suited to the passive observation required in the largely automated Mercury space capsules. But the test pilots were readily available, had the required security clearances, and could be ordered to report to duty.

Test pilot Al Shepard, the first American in space, did little during his first, 15-minute flight beyond being observed by cameras and a rectal thermometer (more on the “little” he did do later). Pilots rejected by Project Mercury dubbed Shepard “spam in a can.”

Other pilots were quick to note that “a monkey’s gonna make the first flight.” Well, not quite a monkey. Before Shepard, the first to fly in the Mercury space capsule was a chimpanzee named Ham, only 18 months removed from his West African home. Ham performed with aplomb.

But test pilots are not the type to like relinquishing control. The seven Mercury astronauts felt uncomfortable filling a role that could be performed by a chimp (or spam). Thus started the astronauts’ quest to gain more control over the flight and to make their function more akin to that of a pilot. A battle for decision-making authority—man versus automated decision aid—had begun.

They wanted a window to look out of, which they got. They wanted control over the Mercury-Redstone rocket that would carry the capsule into space, which was denied. They wanted control over the thrusters that controlled the orientation of the capsule in space. They also wanted manual control over re-entry, such as using the thrusters to set the angle of attack. They were given a manual override for the thrusters and re-entry procedure, but the automatic systems were left in place. They also asked for an emergency hatch through which to get out of the capsule after splashdown; they otherwise had to wait until the hatch was unbolted from the outside. This request was granted.

For the second sub-orbital flight, “piloted” by Gus Grissom, the emergency hatch was in place. Whether Grissom, as Tom Wolfe colorfully phrased it, “screwed the pooch” and blew the hatch early after splashdown has been the subject of some debate. A more likely explanation is that, after getting ahead of the post-landing checklist and disabling the emergency hatch safety mechanisms too early, Grissom bumped the plunger that triggered the explosive bolts. Or the bolts may have blown on their own. Regardless, the Mercury capsule “Liberty Bell 7” ended up 4.9 kilometers below the sea surface, and Grissom was pulled from the water nearly drowned. A desire for control almost cost Grissom his life.

Grissom’s experiences were not unique. Early flights were typified by operator errors linked to the requested modifications. After testing the manual attitude control during the first flight, Shepard forgot to turn it off when he reactivated the automatic system. This cost fuel that thankfully was not required on such a short flight. On the second orbital flight, Scott Carpenter was late in starting the re-entry procedure and left both the manual and automatic systems on for 10 minutes. As a result, he ran out of fuel during re-entry. Thankfully he survived, although his 250-mile overshoot of the target left the impression, for 40 minutes, that he was dead. “I’m afraid … we may have … lost an astronaut,” announced a teary Walter Cronkite before Carpenter was found.

From the wish to control a space capsule’s angle of attack on re-entry, to an unwillingness to get into a lift without an operator, the reluctance to have our decisions and actions replaced by automated systems extends through a range of human activity and decision-making. It took nearly 50 years for people to accept automated lifts. Today, over three quarters of Americans are afraid to ride in a self-driving vehicle.

Human resistance to relinquishing decision-making to automated decision aids has been the subject of detailed research (for simplicity, I’ll refer to “automated decision aids” as algorithms). Despite the evidence of the superiority of (often simple) algorithms to human decision makers in many contexts, from psychiatric and medical diagnoses to university admissions offices (see here, here and here for some reviews), we humans tend not to listen to the answers (see here, here and here for examples of this reluctance). When humans are given a choice between their own judgment and that of a demonstrably superior algorithm, they will generally choose the former, even when it comes at the expense of themselves or their performance.

Continue reading the article here.

18.1 Algorithm appreciation

In their article “Algorithm appreciation: People prefer algorithmic to human judgment,” Logg et al. (2019) write:

Even though computational algorithms often outperform human judgment, received wisdom suggests that people may be skeptical of relying on them (Dawes, 1979). Counter to this notion, results from six experiments show that lay people adhere more to advice when they think it comes from an algorithm than from a person. People showed this effect, what we call algorithm appreciation, when making numeric estimates about a visual stimulus (Experiment 1A) and forecasts about the popularity of songs and romantic attraction (Experiments 1B and 1C). Yet, researchers predicted the opposite result (Experiment 1D). Algorithm appreciation persisted when advice appeared jointly or separately (Experiment 2). However, algorithm appreciation waned when: people chose between an algorithm’s estimate and their own (versus an external advisor’s; Experiment 3) and they had expertise in forecasting (Experiment 4). Paradoxically, experienced professionals, who make forecasts on a regular basis, relied less on algorithmic advice than lay people did, which hurt their accuracy. These results shed light on the important question of when people rely on algorithmic advice over advice from people and have implications for the use of “big data” and algorithmic advice it generates.

The article in Behavioral Scientist focuses on algorithm aversion. Logg et al. (2019) identify algorithm appreciation. How could these two concepts be reconciled?