Tuesday, February 1, 2022

sources of power, Gary Klein, 1998

 

Klein, Gary, 1944-
Sources of power : how people make decisions / Gary Klein.
1. decision-making.

1998
685.403

   (Klein, Gary, Sources of power : how people make decisions / Gary Klein., 1. decision-making., 1998, 2001, 685.403, MIT Press, )

Gary Klein, Sources of power : how people make decisions, 1998               [ ]


MIT Press
Seventh printing, 2001


p.4
   Naturalistic decision making is concerned with high stakes. When a fireground commander makes a poor decision, lives can be lost. When a design engineer makes a poor decision, hundreds of thousands of dollars can be lost. 
   We are interested in experienced decision makers since only those who know something about the domain would usually be making high-stakes choices. Furthermore, we see experience as a basis for the sources of power we want to understand. 

p.5
   We want to know how people carry on even when faced with uncertainty because of inadequate information that may be missing, ambiguous, or unreliable--either because of errors in transmission or deception by an adversary. 

p.5
   Naturalistic decision making is concerned with poorly defined procedures. 

p.5
   Cue learning refers to the need to perceive patterns and make distinctions.

p.7
1984 
U.S. Army Research Institute for the Behavioral and Social Sciences, which is in charge of studying the human side of the battlefield equation. 

p.8
Turnover in the army is high, with people coming in for two- or four-year stints. Even officers who stay in for a full twenty years are rotated every few years. For example, a new tank commander may spend 6 months getting trained in the rudiments, then another year coming up to speed. That gives him little more than a year to help train other people before he is moved to his next rotation. 

p.8
The army has doctrine about how decisions should be made, but it seemed that soldiers usually did not follow the doctrine. 

p.8
We developed our recognitional model of decision making from this study and spent the next several years following up leads from our results, but almost every major design feature in my original plan was wrong. 

p.9
Only two of the six expectations worked out well. The others were wrong. 

p.9
Our later studies showed that military commanders use the same strategies as the fireground commanders. 7

p.10
By focusing on the nonroutine cases, we were asking them about the most interesting ones--the ones they come back to the station house and tell everybody else about. We were asking for their best stories, and they were happy to oblige. 

p.11
They were not actually making a decision; they were constructing a justification. 

p.13
People who are good at what they do relish the chance to explain it to an appreciative audience.

p.13
In fact, firefighters who watched what she was doing but were not on her list to be contacted would ask her for permission to be interviewed. They wanted to explain to her and to themselves what had happened at critical times. 

p.14
We tried to identify what we call decision points--times when several courses of action were open. 

p.15
To new firefighters, all roofs feel spongy. 

p.17
Listening to the Data 

p.17
   <skip the first sentence> It was not that the commanders were refusing to compare options; rather, they did not have to compare options. I had been so fixated on what they were not doing that I had missed the real finding: that the commanders could come up with a good course of action from the start. That was what the stories were telling us. Even when faced with a complex situation, the commanders could see it as familiar and know how to react.
   The commanders' secret was that their experience let them see a situation, even a nonroutine one, as an example of a prototype, so they knew the typical course of action right away. Their experience let them identify a reasonable reaction as the first one they considered, so they did not bother thinking of others. They were not being perverse. They were being skillful. We now call this strategy Recognition-primed Decision Making.

p.20
   The difference between singular and comparative evaluation is linked to the research of Herbert Simon, who won a Nobel Prize for economics. Simon (1957) identified a decision strategy he calls satisficing: selecting the first option that works. Satisficing is different from optimizing, which means trying to come up with the best strategy. Optimizing is hard, and it takes a long time. Satisficing is more efficient. The singular evaluation strategy is based on satisficing. Simon used the concept of satisficing to describe the decision behavior of businesspeople. 
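
([
A quick sketch of Simon's distinction as I read it; the options, scores, and "good enough" threshold below are invented for illustration, not from the book.

```python
# Illustrative sketch (not Klein's): satisficing picks the first option that
# clears a "good enough" bar; optimizing scores every option and keeps the best.

def evaluate(option):
    # Hypothetical worth of each course of action; higher is better.
    scores = {"interior attack": 4, "ladder rescue": 6, "defensive line": 8}
    return scores[option]

def satisfice(options, good_enough=5):
    for option in options:               # considered in the order they come to mind
        if evaluate(option) >= good_enough:
            return option                # first workable option wins
    return None

def optimize(options):
    return max(options, key=evaluate)    # requires evaluating them all

courses = ["interior attack", "ladder rescue", "defensive line"]
print(satisfice(courses))   # ladder rescue -- good enough, found quickly
print(optimize(courses))    # defensive line -- best, but only after comparing all
```
])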

p.20
If their first choice did not work out, they might consider others--not to find the best but to find the first one that works. 

p.21
   Before we did this study, we believed that novices impulsively jumped at the first option they could think of, whereas experts carefully deliberated about the merits of different courses of action. Now it seemed that it was the experts who could generate a single course of action, while novices needed to compare different approaches. 

p.23
Finally, they give up, abandon their pride, and call in some consultants. They call in the team of “Boots and Coots”, former colleagues of Red Adair, a world-famous fighter of oil well fires.
   Boots and Coots arrive, look briefly at the scene, and say that they will need a great deal more foam. “We don't have that much foam,” the volunteer fireground commander argues. “Of course not,” Boots and Coots answer. “We've already ordered it. It will be here tomorrow.”
   From that point on, under the direction of the experts, the fire operations go smoothly. The entire fire is extinguished within the next two days. Although no one is seriously injured, the cost of the fire is estimated at $10 to $15 million. 

p.23
It is what I do when I have to buy a house or a car. I have to start from scratch, identifying features I might want, looking at the choices. 

p.29
The advice is more helpful for beginners than for experienced decision makers. In most applied settings, beginners are not going to be put in a position to make critical decisions. 

p.30
   One final application involves training. The ideas set forth in this chapter imply that we do not make someone an expert through training in formal methods of analysis. Quite the contrary is true, in fact: we run the risk of slowing the development of skills. If the purpose is to train people in time-pressured decision making, we might require that the trainee make rapid responses rather than ponder all the implications. If we can present many situations an hour, several hours a day, for days or weeks, we should be able to improve the trainee's ability to detect familiar patterns. The design of the scenarios is critical, since the goal is to show many common cases to facilitate a recognition of typicality along with different types of rare cases so trainees will be prepared for these as well. 


p.32
Example 4.1
The Sixth Sense
-------------------------------------------------------------------------------
It is a simple house fire in a one-story house in a residential neighborhood. The fire is in the back, in the kitchen area. The lieutenant leads his hose crew into the building, to the back, to spray water on the fire, but the fire just roars back at them.
   “Odd”, he thinks. The water should have more of an impact. They try dousing it again, and get the same results. They retreat a few steps to regroup.
   Then the lieutenant starts to feel as if something is not right. He doesn't have any clues; he just doesn't feel right about being in that house, so he orders his men out of the building--a perfectly standard building with nothing out of the ordinary.
   As soon as his men leave the building, the floor where they had been standing collapses. Had they still been inside, they would have plunged into the fire below. 
-------------------------------------------------------------------------------

   “A sixth sense”, he assured us, and part of the makeup of every skilled commander. Some close questioning revealed the following facts:
  •  He had no suspicion that there was a basement in the house. 
  •  He did not suspect that the seat of the fire was in the basement, directly underneath the living room where he and his men were standing when he gave his order to evacuate. 
  •  But he was already wondering why the fire did not react as expected. 
  •  The living room was hotter than he would have expected for a small fire in the kitchen of a single-family home. 
  •  It was very quiet. Fires are noisy, and for a fire with this much heat, he would have expected a great deal of noise. 

   The whole pattern did not fit right. His expectations were violated, and he realized he did not quite know what was going on. That was why he ordered his men out of the building. With hindsight, the reasons for the mismatch were clear. Because the fire was under him and not in the kitchen, it was not affected by his crew's attack, the rising heat was much greater than he had expected, and the floor acted like a baffle to muffle the noise, resulting in a hot but quiet environment.
   This incident helped us understand how commanders make decisions by recognizing when a typical situation is developing. In this case, the events were not typical, and his reaction was to pull back, regroup, and try to get a better sense of what was going on. 

p.33
By the end of the interview, the commander could see how he had used the available information to make his judgment. (I think he was proud to realize how his experience had come into play. Even so, he was a little shaken since he had come to depend on his sixth sense to get him through difficult situations, and it was unnerving for him to realize that he might never have had ESP.) 

p.33
He may not have been able to articulate the patterns or describe their features, but he was relying on the pattern-matching process to let him feel comfortable that he had the situation scoped out. Nevertheless, he did not seem to be aware of how he was using his experience because he was not doing it consciously or deliberately. He did not realize there were other ways he could have sized the situation up. He would see what was going on in front of his eyes but not what was going on behind them, so he attributed his expertise to ESP.
   This is one basis for what we call intuition: recognizing things without knowing how we do the recognizing. 

p.33
We are drawn to certain cues and not others because of our situation awareness. (This must happen all the time. Try to imagine going through a day without making these automatic responses.) 

p.33
My claim in this chapter is that intuition grows out of experience. 
   We should not be surprised that the commander in this case was not aware of the way he used his experience. Rather than giving him specific facts from memory, the experience affected the way he saw the situation. 

pp.33-34
Another reason that he could not describe his use of experience was that he was reacting to things that were not happening rather than to things that were. 

p.34
   Now we can say that at least some aspects of intuition come from the ability to use experience to recognize situations and know how to handle them. 

p.34
Our experience will sometimes mislead us, and we will make mistakes that add to our experience base. 

p.35
The commander who thought he had ESP was so discomfited when his expectancies were violated that he pulled his crew out of the building. 

p.35
In February 1992, I heard about a curious incident in which the HMS Gloucester, a British Type 42 destroyer, was attacked by a Silkworm missile near the end of the Persian Gulf War.  


pp.35-39
Example 4.2
The Mystery of the HMS Gloucester 
-------------------------------------------------------------------------------
In February 1992, I heard about a curious incident in which the HMS Gloucester, a British Type 42 destroyer, was attacked by a Silkworm missile near the end of the Persian Gulf War. The officer in charge of air defense believed strongly that the radar contact was a hostile missile, not a friendly aircraft, seconds after first detection and before the identification procedure had been carried out--even though the radar blip was indistinguishable from an aircraft, and the U.S. Navy had been flying airplanes through the same area. The officer could not explain how he believed this was a Silkworm missile. The experts who looked at the recording later said there was no way to tell them apart. Nevertheless, he insisted that he knew. And he shot the object down.
   At the time, his captain was not so confident. We watched the videotape of the radar scope and listened to the voices. When the radar blip is destroyed, the captain asks hesitantly, “Whose bird was it?” (that is, who shot the missile that destroyed this unknown track?). The anti-air warfare officer nervously replies, “It was ours, sir.” For the next four hours the HMS Gloucester sweats out the possibility that they shot down an American plane. 
   The mystery of the HMS Gloucester was how the officer knew it was a Silkworm missile, not an aircraft.
   In July and August 1993, I conducted a workshop on cognitive task analysis interviews for George Brander, a human factors specialist at the Defense Research Agency in the United Kingdom. (The methods of using cognitive task analysis for interviewing are discussed in chapter 11.) Brander arranged to have us practice the methods with actual naval officers. One of them was Lieutenant Commander Michael Riley, the anti-air warfare officer on the Gloucester who spotted the Silkworm.
   We expected that Riley would be tired of going over the incident, but we found just the reverse. He was still puzzling it out, and he suggested to us that we focus our session around the Silkworm attack.
   The facts were simple. The Gloucester was stationed around 20 miles off the coast of Kuwait, near Kuwait City. The Silkworm missile was fired around 5:00 A.M. As soon as he saw it, Riley believed it was a missile. He watched it closely for around 40 seconds until he had gathered enough information to confirm his intuition. Then he fired the Gloucester's own missiles and brought the Silkworm down. The whole incident lasted only around 90 seconds, and the Gloucester almost did not get its shot off in time. Riley confessed that when he first saw the radar blip, “I believed I had one minute left to live.” The puzzle was how he knew it was a Silkworm instead of an American A-6 aircraft. The Silkworm travels at around 600 to 650 knots, the same speed as the American A-6s as they return from bombing runs. Both are around the same size and present the same profile on the radar scopes. They are the same size because of all the explosive the Silkworm carries. It is about as large as a single-decker bus, large enough to devastate a type 42 destroyer like the Gloucester.
   There are four ways to distinguish an American A-6 airplane from an Iraqi Silkworm missile.
   The first way is location. The Allied forces knew the location of the Iraqi Silkworm sites and the naval ships. Theoretically the airplanes should return to aircraft carriers by preestablished routes, but the American pilots returning from bombing runs were cutting corners, and flying over the Silkworm site in question. All the previous day they had done so. Even worse, the British Navy ships had recently moved closer to shore, and the pilots had not yet taken this changed position into account, so the A-6s were frequently overflying the ships. Riley and others had insisted that the practice of overflying ships be ended, but he had not seen any change. So the first cue, location, was useless for identifying the radar blip. 
   Radar is the second way to distinguish airplanes from missiles. The A-6s were fitted with identifiable radar, but most of them did not have their radar on when they were returning (the radar would make them more easily detectable by the enemy). Thus, the absence of radar was not conclusive.
   The third way is a special system, Identifying Friend or Foe (IFF), which allows an aircraft to be electronically interrogated to find out its status. Pilots obviously shut it down as they approach enemy territory, because it would be a homing beacon for hostile missiles. They are supposed to switch it back on when they leave enemy territory so their own forces will know not to shoot them down. Yet after completing a bombing run and avoiding enemy defenses, many A-6 pilots were late in turning their IFF back on. So the absence of IFF did not prove anything either.
   Finally, there is altitude. The Silkworm would fly at around 1,000 feet, and the A-6s at around 2,000 to 3,000 feet and climbing. Therefore, altitude was the primary cue for identification (unless an A-6 had damaged flaps and had to fly lower, but none had been seen coming in below 2,000 feet). Unfortunately, the Gloucester's 992 and 1022 radars do not give altitude information. In fact, it didn't have any radar that worked over land, so the first time it picked up a track was after the track went “feet wet” (i.e., flew off the coast and over water). The radars sweep vertically, through 360 degrees, until the radar operator spots a possible target. Only then can the Gloucester turn on the 909 radar that sweeps in horizontal bands, to determine roughly the altitude of the target. It takes about 30 seconds to get altitude information after the 909 is turned on. (Maddeningly, the Gloucester's weapons director failed in his first two attempts to type in the track number, first because the track number was changed just before he typed it in, and then because he transposed the digits.) As a result, it was not until around 44 seconds into the incident that the 909 informed Riley that the target was flying at 1,000 feet. Only then did he issue the order to fire missiles at the track. Yet he had felt it was a Silkworm almost from the instant he saw it, before the 909 radar was even initiated, and long before it gave him altitude information. Because there was no objective basis for his judgment, Riley confessed to us that he had come to believe it had been ESP. 
   You can see how little information there is. To make matters worse, clouds of smoke particles from the burning oil fields were adhering to moisture in the air and obscuring the radars. The Gloucester's mission was to protect a small battle force, including the USS Missouri, whose guns were pounding the Kuwait coast, some minesweepers clearing the way for the ships to get closer, and a few other ships as well. The Missouri wanted to get closer to the coast and on the day of the attack was only 20 miles off. And the closer it got, the less time the Gloucester had to react to a Silkworm attack. 
   Riley told us about the background of the incident. The war was ending, with American-led forces driving up the coast toward Kuwait City. Soon they would overrun the Silkworm site. The constant shelling from the Missouri was taking its toll. Also, the Allies had just flown a helicopter feint. Large numbers of helicopters, launched off carriers, staged a mock attack and then flew back. Riley had earlier run a mental simulation, putting himself in the minds of the Iraqi Silkworm operators. If they did not fire their missiles soon, they would lose any chance. There was nothing to save the missiles for. And they had a nice, fat target, the Missouri. If Riley were a Silkworm operator, this was when he would fire his missile. 
   The Gloucester's crew had been working more than a month on a six-hours-on, six-hours-off schedule. That meant six hours of staring at radar screens, then six hours to eat, perform other tasks, and grab some sleep. Fatigue had been building up during that time. Riley's shift had started at midnight, so his crew had been going for five hours. Because of Riley's imagining what he would do if he were running a Silkworm site, he believed they were under greater risk than at any time earlier. Perhaps an hour before the attack, he warned his crew to be on their highest alert, because this was when the Iraqis were likely to fire at them. Riley repeated his warning again, maybe at 4:55 A.M. As a result, the crew was ready when the missile came. 
   When we pressed Riley about what he was noticing when he first spotted the radar blip, he said that he knew it was a missile within the first five seconds. Since the radar sweep on the 992 is around four seconds, that means he identified it by the second sweep. Riley said he felt it was accelerating, almost imperceptibly. That was the clue. The A-6s flew at a constant speed, but this track seemed to be accelerating as it came off the coast. Fortunately, there were no other air tracks at the time, so he and his crew could give it their full attention. Otherwise, he doubts he would have noticed the acceleration. 
   That should have wrapped things up--except that after Riley left the interview, we discovered some inconsistencies in his account. First, he had no way to calculate acceleration until he saw at least three radar sweeps. He needed to compare the distance between sweeps 1 and 2 to the distance between 2 and 3, to see which was larger. Even more troubling, there was no difference at all among the distances the track traveled between successive sweeps during its entire course. We could not see any signs of acceleration, nor could the experts who analyzed the tape. So, using objective measures, there was no indication of acceleration. 
   We also wondered about Riley's sense that he knew it was a missile almost from the first contact. That first blip was recorded a little way off the coast, because the ground clutter had masked the missile until it flew far enough over water. This took one or two sweeps. Then the 992 radar picked up the track. What was there about that track that alerted Riley? We watched the tape again and again, trying to figure it out. Eventually we succeeded. Rob Ellis, from the Defence Research Agency at Farnborough, realized what it was. (Before reading on, you may want to reread the information and see what you come up with. All the relevant information has been presented.) 
   Ellis tried to figure out why a track would look as if it was accelerating, when it really was not, and before all the necessary information was in. He realized that the one difference between an A-6 and a Silkworm was altitude: 1,000 feet versus around 3,000 feet. Just as the track came off the coast, it was masked by ground clutter. The Gloucester was 20 miles away. Ellis reasoned that the 992 radar would pick up a track flying at 3,000 feet earlier than one flying at 1,000 feet. The lower track would be masked by ground for a longer time. Maybe that meant that the higher tracks, at 3,000 feet, could be spotted on radar on the second radar sweep, after they went feet wet (i.e., flew off the coast and over water), whereas the Silkworm, flying at 1,000 feet, would not give a radar return until the third radar sweep. Perceptually, the Silkworm would first be spotted farther off the coast than the A-6s had been. The Gloucester's crew, and Riley, were accustomed to A-6s. They knew the location of the Silkworm site, and they were looking for a radar blip coming from that direction, at a certain distance off the coast. Instead, Riley saw a blip farther off the coast than usual. That caught his attention and chilled him. The second radar return showed the usual distance for a track flying around 600 to 650 knots. Compared to how far that first track had come, it must have felt as if the track was moving really fast when it came off the coast. Then it seemed to slow down. Riley must have had a sense of great acceleration as he confounded altitude with speed.
   We asked Riley if he wanted to hear our hypothesis, and when we explained it, he agreed that we might have solved the riddle. Although the 992 radar does not scan for altitude, a skilled observer was able to infer altitude using the distance from the coast where the blip was first seen when the track went feet wet (i.e., flew off the coast and over water). 

-------------------------------------------------------------------------------
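
([
A small sketch, with invented numbers, of Ellis's explanation: at roughly 600 to 650 knots and a sweep of about four seconds, the spacing between successive returns never changes (so there is no real acceleration in the data), but a track masked by ground clutter for one extra sweep surfaces about one sweep's travel farther off the coast, which is what an eye calibrated on A-6 first blips would catch.

```python
# Sketch of Ellis's hypothesis with illustrative figures, not data from the
# actual recording: the lower-flying Silkworm stays hidden in ground clutter
# one sweep longer, so its first blip appears farther off the coast, even
# though its sweep-to-sweep spacing (hence its speed) is constant.

SWEEP_S = 4.0                     # roughly 4 seconds per sweep of the 992
SPEED_MS = 625 * 0.5144           # ~625 knots expressed in metres per second
STEP_M = SPEED_MS * SWEEP_S       # distance covered between sweeps (~1.3 km)

def visible_blips(masked_sweeps, total_sweeps=5):
    """Distance off the coast of each return that actually shows on the scope."""
    return [round(STEP_M * k) for k in range(1, total_sweeps + 1)][masked_sweeps:]

a6 = visible_blips(masked_sweeps=1)         # higher track clears the clutter sooner
silkworm = visible_blips(masked_sweeps=2)   # lower track masked one sweep longer

print("A-6 blips (m off coast):     ", a6)
print("Silkworm blips (m off coast):", silkworm)
print("spacing between returns (m): ", {b - a for a, b in zip(silkworm, silkworm[1:])})
```
])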

p.39
   In this example, as in the previous one, it is the mismatch or anomaly that the decision maker noticed. Perhaps such instances are difficult to articulate because they depend on a deviation from a pattern rather than the recognition of a prototype. 

pp.39-40
The Infected Babies

In this project, we studied the way nurses could tell when a very premature infant was developing a life-threatening infection. Beth Crandall, one of my coworkers, had gotten funding from the National Institutes of Health to study decision making and expertise in nurses. She arranged to work with the nurses in the neonatal intensive care unit (NICU) of a large hospital. These nurses cared for newly born infants who were premature or otherwise at risk. 
   Beth found that one of the difficult decisions the nurses had to make was to judge when a baby was developing a septic condition--in other words, an infection. These infants weighed only a few pounds--some of them, the microbabies, less than two pounds. When babies this small develop an infection, it can spread through their entire body and kill them before the antibiotics can stop it. Noticing the sepsis as quickly as possible is vital. 
   Somehow the nurses in the NICU could do this. They could look at a baby, even a microbaby, and tell the physician when it was time to start the antibiotic (Crandall and Getchell-Reiter 1993). Sometimes the hospital would do tests, and they would come back negative. Nevertheless, the baby went on antibiotics, and usually the next day the test would come back positive.
   This is the type of skilled decision making that interests us the most. Beth began by asking the nurses how they were able to make these judgments. “It's intuition”, she was told, or else “through experience.” And that was that. The nurses had nothing more to say about it. They looked. They knew. End of story.
   That was even more interesting: expertise that the person clearly has but cannot describe. Beth geared up the methods we had used with the firefighters. Instead of asking the nurses general questions, such as, “How do you make this judgment?” she probed them on difficult cases where they had to use the judgment skills. She interviewed nurses one at a time and asked each to relate a specific case where she had noticed an infant developing sepsis. The nurses recalled incidents, and in each case they could remember the details of what had caught their attention. The cues varied from one case to the next, and each nurse had experienced a limited number of incidents. Beth compiled a master list of sepsis cues and patterns of cues in infants and validated it with specialists in neonatology. 
   Some of the cues were the same as those in the medical literature, but almost half were new, and some cues were the opposite of sepsis cues in adults. For instance, adults with infections tend to become more irritable. Premature babies, however, become less irritable. If a microbaby cried every time it was lifted up to be weighed and then one day it did not cry, that would be a danger signal to the experienced nurse. Moreover, the nurses were not relying on any single cue. They often reacted to a pattern of cues, each one subtle, that together signaled an infant in distress. 

pp.40-41
The project with the NICU nurses was draining for our staff. The problem was not solely the minor shocks of inadvertently witnessing distressing medical procedures or the strain of seeing such tiny babies struggling to survive. It was the strain in the nurses themselves. More than half of the nurses interviewed cried at some point as they recalled infants who had not made it, signs they should have seen but missed, and even babies who had close calls. None of our other studies were as emotionally demanding as this one. 

p.41
One member of the research team was Marvin Thordsen. 
   In Idaho, Marvin was attached to a team in charge of the fire on one of the mountains. He dutifully tagged along, listening and looking agreeable. After a few days of this, Marvin was sitting in on one of their planning meetings. The team got to an issue and realized that they had already made this decision several days ago, but no one could remember what they decided. Marvin could listen only so long before he broke down. Knowing that he was violating the creed of observers just to watch and never to intervene, he flipped back a few pages in his notebook and read to them what their plan had been. Jaws dropped open as the team found out how helpful it is to have someone serving as their official memory. By the end of his stint with the Forest Service, he was included as part of the planning team. Before finishing their meetings, they would ask him if he had anything to add. 

p.42
If you want people to size up situations quickly and accurately, you need to expand their experience base. One way is to arrange for a person to receive more difficult cases. 

p.42
In contrast, firefighters in a large city with many old buildings can get a tremendous amount of experience in a short time. 

p.43
Because this type of perceptual expertise was not getting shared or compiled, it was a particularly hard thing for new nurses on the unit to master. 

p.43
Beth eventually developed training materials to illustrate all of the critical cues that the nurses could use to diagnose when an infant was in the early stages of an infection. There were different ways to present these materials, such as simple lists of cues; Beth embedded the cues in the stories themselves so that the nurses could see how the cue appeared in context. 

p.43
The paramedics we interviewed said they could judge whether the person was actually having a heart attack or just suffering from indigestion. They also said they could tell when a person was going to have a heart attack, days or even months ahead. 

p.44
Instead, it clogs up, like a pump. Sometimes it clogs up quickly, as when a clot lodges somewhere (here the balloon metaphor may come in). When it clogs up slowly, during congestive heart failure, there are signs. Areas of the body that are less important get less blood. By knowing what they are and by being alert to patterns in several of these areas, you can detect a problem in advance. The skin gets less blood and turns grayish. That is one of the best signs. The wrists and ankles show swelling. The mouth can look greenish. 

p.44
It should be possible to train ordinary citizens to look at each other and recognize when a friend or coworker is starting to show the signs of impending heart problems. 

p.44
In both places, the emphasis on pattern matching seemed more useful than lessons on formal analysis of alternate options. 

p.45
During a visit to the National Fire Academy we met with one of the senior developers of training programs. In the middle of the meeting, the man stood up, walked over to the door, and closed it. Then in a hushed voice he said, “To be a good fireground commander, you need to have a rich fantasy life.”
   He was referring to the ability to use the imagination, to imagine how the fire got started, how it was going to continue spreading, or what would happen using a new procedure. A commander who cannot imagine these things is in trouble.
   Why did the developer close the door before he revealed this ability? Because the idea of using fantasy as a source of power is as embarrassing as the idea of using intuition as a source of power. He was using the term fantasy to refer to a heuristic strategy decision researchers call mental simulation, that is, the ability to imagine people and objects consciously and to transform those people and objects through several transitions, finally picturing them in a different way than at the start. This process is not just building a static snapshot. Rather, it is building a sequence of snapshots to play out and to observe what occurs. 

p.51
   We found that as early as 1946 Adriaan de Groot had studied the mental simulation of chess masters. Two decision researchers, Kahneman and Tversky (1982), had written a paper on the simulation heuristic, based on laboratory studies. They described how a person might build a simulation to explain how something might happen; if the simulation required too many unlikely events, the person would judge it to be implausible.2 

p.51
Charles Perrow, Normal Accidents (1984)

p.52
Each seemed to rely on just a few factors--rarely more than three. It would be like designing a machine that had only three moving parts.

p.52
Also, there was another regularity: the mental simulations seemed to play out for around six different transition states, usually not much more than that. 

p.52
If you cannot keep track of endless transitions, it is better to make sure the mental simulation can be completed in approximately six steps. 

p.54
Andrzej (pronounced Andrei) Bloch, 

p.55
To put things in perspective for me, he noted that food shortages were the traditional source of unrest in Poland and Russia; people were more likely to protest food shortages than a lack of political freedom. If they could not afford to buy bread, that might cause the government to collapse. 

pp.55-56 
   Andrzej created wonderful simulations. Without prompting, he boiled it down to three variables: the rate of inflation, the rate of unemployment, and the rate of foreign exchange. I asked Andrzej to imagine how the Polish economy would do on these three variables by quarter for the year 1990. According to Andrzej, since the government was not going to fight inflation artificially, the inflation rate was going to zoom up from its (then) current rate of 80 percent a year to an annual rate of about 1,000 percent for a few months. (This meant prices would increase around 80 percent a month instead of 80 percent a year.) Goods were going to become quite expensive. Prices would rise faster than wages. Quickly, people would not be able to afford to buy very much, so demand would fall, and the prices would stabilize. He estimated that this would take about three months. To put things in perspective for me, he noted that food shortages were the traditional source of unrest in Poland and Russia; people were more likely to protest food shortages than a lack of political freedom. If they could not afford to buy bread, that might cause the government to collapse. Nevertheless, he felt that the euphoria over the Solidarity movement was high enough and that the period of sharp inflation would be short enough so there would not be problems on this score. When I reviewed his predictions with him a year later, we found that his predictions were accurate. He had accurately called the sharp increase up to 1,000 percent for January and February, as well as the downturn to around 20 to 25 percent by April and thereafter.
   Next, he considered unemployment. If the government had the courage to drop unproductive industries, many people would lose their jobs. This would start in about six months as the government sorted things out. The unemployment would be small by U.S. standards, rising from less than 1 percent to maybe 10 percent. For Poland, this increase would be shocking. Politically, it might be more than the government could tolerate and might force it to end the experiment with capitalism. When we reviewed his estimates, we found that unemployment had not risen as quickly as he expected, probably, Andrzej believed, because the government was not as ruthless as it said it would be in closing unproductive plants. Even worse, if a plant was productive in areas A, B, and C and was terrible in D and E, then as long as it made a profit, it continued its operations without shutting down areas D and E. So the system faced a built-in resistance to increased unemployment.
   Finally, he looked at foreign exchange, which he saw as a balancing force. As the exchange rate got worse, increasing from 700 zlotys per dollar to 1,500 zlotys per dollar, people would find foreign goods too expensive so they would buy more Polish items. Similarly, outsiders would find that Polish-made items were a bargain, so exports would boom, increasing employment and improving economic health. He thought this might take a few years to accomplish, if at all. He expected that during 1990, the exchange rate would continue to increase, eventually to 1,400 zlotys per dollar. He expected that the government would intervene at that point. During the year I noted that the zloty went up to around 900 per dollar and stayed there. Andrzej had been too pessimistic. In 1991, I discussed this with him, and he felt that the problem again was that the government was softening the blows. Had the full market economy shift been made as advertised, the rate would have increased much faster, and the shift would have been finished much quicker. 
   This mental simulation depended on three factors and on a few transitions (rapid inflation, reduced level of inflation, gradual rise in unemployment and loss in exchange rate, improved employment, and finally, stabilized exchange rate).
   Andrzej was not finished. He estimated the likelihood of success for this market economy experiment at 60 percent. A virtuoso at simulating Polish futures, he generated pessimistic mental simulations and showed how the experiment could fail. He switched to political simulations. 
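
([
The parenthetical about inflation above equates roughly 80 percent a month with an annual rate of about 1,000 percent, which works out only if the monthly rises are added rather than compounded; a quick check of both readings (my arithmetic, not Klein's):

```python
# 80% a month annualizes to roughly 1,000% only under simple (non-compounded)
# annualization; compounding the same monthly rise gives a vastly larger figure.

monthly = 0.80

simple_annual = monthly * 12                 # 9.6  -> about 960%, i.e. "about 1,000%"
compounded_annual = (1 + monthly) ** 12 - 1  # roughly 1,156x -> about 115,600%

print(f"simple annualization:   {simple_annual:.0%}")
print(f"compounded over a year: {compounded_annual:.0%}")
```
])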

p.57
   The implications of this minor sideline in an exploratory study are clear: without a sufficient amount of expertise and background knowledge, it may be difficult or impossible to build a mental simulation. The expert, despite his desire to see the market economy experiment work, could imagine different ways for it to fail and to anticipate early warning signs. He told me about several (e.g., if the rate of inflation does not come down below an annual rate of 50 percent by April, start worrying). 

p.57
   The example of the Polish economy shows how difficult it is to construct a useful mental simulation. But once it is constructed, it is impressive. We do this all the time in areas about which we are knowledgeable. 

p.58
   In assembling the action sequence, figure 5.3 reminds us that mental simulations generally move through six transitions, driven by around three causal factors. Once the person tries to assemble the action sequence, he or she evaluates it for coherence (Does it make sense?), applicability (Will it get what I need?), and completeness (Does it include too much or too little?). If everything checks out, the action sequence is run and applied to form an explanation, model, or projection. If the internal evaluation turns up difficulties, the person may reexamine the need and/or the parameters and try again. 
   The cases Beth and I reviewed fit into two major categories: the person was trying to explain what had happened in the past, or trying to imagine what was going to happen in the future. 
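
([
A rough sketch of the figure 5.3 loop described above: assemble an action sequence, check it for coherence, applicability, and completeness, run it if it passes, otherwise revise and try again. The checks and the car-rescue plans below are invented placeholders, not Klein's model.

```python
# Placeholder version of the assemble -> evaluate -> run/revise cycle.

def looks_workable(seq, goal):
    coherent   = len(seq) >= 2          # Does it make sense (steps hang together)?
    applicable = goal in seq[-1]        # Will it get what I need?
    complete   = len(seq) <= 6          # Not too much, not too little (~6 transitions)
    return coherent and applicable and complete

def mentally_simulate(goal, candidate_plans):
    """Assemble an action sequence, check it, and run it or revise and retry."""
    for seq in candidate_plans:          # each assembled from a few causal factors
        if looks_workable(seq, goal):
            return seq                   # "run" it: play the snapshots forward
    return None                          # no workable simulation could be built

plans = [
    ["pry the door", "door stays jammed"],                               # fails the check
    ["cut the roof posts", "fold back the roof", "lift the victim out"],
]
print(mentally_simulate("victim out", plans))
```
])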

p.58
For cases where a person was trying to explain what had happened in the past, the reason was either to make sense of a specific event (such as a juror's trying to figure out if the evidence showed that the defendant had committed the crime) or to make sense of a general class of events by deriving a model (such as Einstein's imagining how a beam of light shining through a hole in an elevator might seem to curve if the elevator was moving). 

p.62
Projecting into the Future
In many cases, decision makers try to project into the future, either to predict what is going to happen and perhaps to prepare for it (manufacturers bidding on a new part who try to imagine how they will make the part and how long that will take) or to watch a potential course of action to find out if it has any flaws (e.g., the car rescue).
   Figure 5.5 shows how you can try to build a bridge from your present condition to a future one. You know the initial state and you are trying to imagine the target state. Sometimes you also have a good picture of the target state, as in the truck example, and your job is to figure out how to convert one into the other. What is new here is the way you run and review the action sequence. Recall the car rescue. The team leader put his plan under a microscope, scrutinizing each step to see if there could be a problem. He was trying to find pitfalls in advance. In the end, he evaluated his plan based on the nature and severity of the problems that he found. 

p.69
Marvin Cohen (1997), snap-back 
   Marvin Cohen (1997) believes that mental simulation is usually self-correcting through a process he has called snap-back. Mental simulation can explain away disconfirming evidence, but Cohen has concluded that it is often wise to explain away mild discrepancies since the evidence itself might not be trustworthy. However, there is a point when we have explained away so much that the mental simulation becomes very complicated.6  We look at all the new evidence that had been explained away to see if maybe there is not another simulation that makes more sense. Cohen believes that until we have an alternate mental simulation, we will keep patching the original one. We will not be motivated to assemble an alternate simulation until there is too much to be explained away. 

([
p.136
   Thus the coping strategies of the two hemispheres are fundamentally different. The left hemisphere's job is to create a belief system or model and to fold new experiences into that belief system. If confronted with some new information that doesn't fit the model, it relies on Freudian defense mechanisms to deny, repress or confabulate--anything to preserve the status quo. The right hemisphere's strategy, on the other hand, is to play “Devil's Advocate”, to question the status quo and look for global inconsistencies. When the anomalous information  reaches a certain threshold, the right hemisphere decides that it is time to force a complete revision of the entire model and start from scratch. The right hemisphere thus forces a “Kuhnian  paradigm shift” in response to anomalies, whereas the left hemisphere always tries to cling tenaciously to the way things were. 
   Now consider what happens if the right hemisphere is damaged.6  The left hemisphere is then given free rein to pursue its denials, confabulations and other strategies, as it normally does. 

   (Ramachandran, V.S., Phantoms in the brain : probing the mysteries of the human mind / V. S. Ramachandran, and Sandra Blakeslee., 1. neurology--popular works., 2. brain--popular works., 3. neurosciences--popular works., 1998, 612.82, ) 
    ])

p.274
Decision makers noticed the signs of a problem but explained it away. They found a reason not to take seriously each piece of evidence that warned them of an anomaly. As a result, they did not detect the anomaly in time to prevent a problem.5   ([ Most of the time, you and the firm only get called in and compensated for solving problems, not so much for preventing them; for example, medicine and the healthcare system compensate for fixing problems but do not compensate for preventing them, assuming it is even possible to detect the early signs and symptoms of a creeping sickness. ])

p.70
This has also been called the garden path fallacy: taking one step that seems very straightforward, and then another, and each step makes so much sense that you do not notice how far you are getting from the main road. Cohen is developing training methods that will help people keep track of their thinking and become more aware of how much contrary evidence they have explained away so they can see when to start looking for alternate explanations or predictions. 

p.70
That was the moment of snap-back; the accumulated strain of pushing away inconvenient evidence caught up with me. 


pp.70-71
The crystal ball does not show why it was wrong. The officers have to sift through the evidence and come up with another explanation, and perhaps another. In doing so, they see that the same evidence can be interpreted in different ways. 

p.72
(The forecast was made before the 1973 oil crisis in which political events speeded up the adjustment.) Anticipating the jump in prices was the easy part. The hard part was to convey this change to the executives at Royal Dutch/Shell. 

p.72
Pierre Wack (1985a, 1985b)
“Scenarios,” wrote Wack (1985a), “must help decision makers develop their own feel for the nature of the system, the forces at work within it, the uncertainties that underlie the alternative scenarios, and the concepts useful for interpreting key data” (p. 140). 

p.73
This example shows how mental simulations can gain force when made explicit; the executives responded more favorably to decision scenarios than to forecasts based on statistics and error probabilities. They abandoned their incremental strategy, and their response successfully anticipated the sharp price increases.8 

p.73
We should be careful in assuming that consumers know how products work. Some were using the product inappropriately, getting unsatisfactory results, and blaming the product. 

p.83
He used this remote control indicator to query the aircraft's transponder, working with the IFF system. 

p.83
Once the Vincennes's crew became convinced that the track belonged to an F-14, that assumption colored the way they treated it and thought about it. 

p.84
The military airplane was squawking Mode II. So there was no decision error here, just a human error that let the Airbus become correlated with a Mode II signal reserved for military airplanes. 

p.84
The researcher concluded that the mistake about altitude seemed to match these data; subjects cannot be trusted to make accurate identifications because their expectancies get in the way. 

p.86
I asked Captain Rogers how long it might take to infer altitude. He said perhaps 5 to 10 seconds. 

p.100


p.105
But afterward, there is time to go over the game record to look for opportunities that were missed, early signals that were not noticed, or assessments and assumptions that were incorrect. In this way, an experience (even a single game) can be recycled and reused. In many field settings where there are limited opportunities for experience, developing the discipline of reviewing the decision-making processes for each incident can be valuable. 

p.105
   The decision requirements exercise is for the squad leaders to identify the key judgments and decisions facing them, why they are difficult, and where they can go wrong. These decision requirements are the high drivers, the specific decision skills that they need to polish. 

p.114
In 1952, Masaru Ibuka, working for Tokyo Tsushin Kogyo (later to become Sony Corporation), ...

p.115
   Peter Schwartz (1991) describes a project he performed for Royal Dutch/Shell. The question was about the Soviet Union's future policies on selling petroleum. Schwartz wondered whether there was any reason for the USSR to make dramatic policy changes. In reviewing some demographic data, he realized that the USSR could be headed for a sudden crisis. The population of elderly was increasing sharply, but the population of young adults entering the working class was decreasing. Schwartz wondered what this discontinuity might mean, and in exploring the implications, he was able to forecast, years ahead of the event, the possible destabilization of the USSR. He had discovered a leverage point for predicting a society under strain. 

p.116
Unless they had a sense of how the problem could be solved, they did not get engaged. The leverage point provided them with a sense of the solvability of the problem. 

p.116
   In another example, Lee Task, a specialist in optics, noticed that pilots using night vision goggles could not read their instrument panels while landing their aircraft. He believed that the key data elements on the instrument panel could be incorporated into the night vision goggle display, so he identified this to his sponsors as a work project. Within a few months, he had developed a successful prototype. 

p.116
Clausewitz referred to this capacity as the coup d'oeil, the rapid size-up that identifies the critical points in the terrain. 

  ‡coup d'oeil [Fr., lit., stroke of eye] a rapid glance; quick view or survey 

p.121
   This approach to problem solving can be traced back to the German research psychologist Karl Duncker, one of the central figures in the Gestalt school of psychology in Europe. The Gestalt school emphasized perceptual approaches to thought. Rather than treating thought as calculating ways of manipulating symbols, the Gestaltists viewed thought as learning to see better, using skills such as pattern recognition. 

p.124
Then comes a difficult judgment: the solvability of the problem.4  Somehow we use our experience to make this judgment even before we start working to come up with a solution. 

p.125
De Groot (1945) and Isenberg (1984) have suggested that what triggers active problem solving is the ability to recognize when a goal is reachable. 

p.129
According to Reitman (1965), a problem can be ill structured if the initial state is not defined, the terminal state is undefined, or the procedure for transforming the initial state into the terminal state is undefined. 

p.133
   Third, there are other strategies in addition to the means-ends analysis. Using means-ends analysis to reduce differences is not the same as noticing opportunities. When we solve problems, we are alert to opportunities that let us make progress, even if those opportunities do not correspond to the obstacles we are trying to eliminate.10  In addition, Voss, Greene, Post, and Penner (1983) studied ill-structured social science problems and found little evidence for straightforward means-ends analysis. 
   Fourth, in solving problems we do not reformulate goals merely by removing constraints. We sometimes make radical shifts. Think again about the car rescue in example 5.1. The commander of the rescue team was not eliminating unnecessary constraints as much as he was changing the nature of the goal: lifting the victim through the roof instead of getting him out of one of the doors.11
   The artificial intelligence program is not a technique for generating options. Instead, it is a procedure for setting up a search space and then using heuristics to achieve more efficient searches to find a good option. Conducting rapid searches is what digital computers do best. A computer does not have to do anything constructive to search through a space. It does not have to generate anything new. If the search space can be structured well enough, it can turn up findings that are novel. For example, if you give a computer a thousand different ice cream flavors, ten types of cones, and five hundred toppings, it will identify a range of options no one ever considered before.12 
   One of the primary mechanisms of artificial intelligence is to spread out the alternatives exhaustively and filter through them efficiently. This is the same strategy used in the analytical approaches to decision making. These approaches urge us to generate large numbers of options in order to be reasonably sure that the set will include a good option. Then we are supposed to search through these options, filtering out the inadequate ones, to find a successful one. Computational approaches try to reduce thinking to searching. Thus they show their greatest successes with tasks that can be transformed into searches. 
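
([
A sketch of that spread-out-and-filter strategy using the ice cream numbers from the passage; the "acceptable" test below is an invented stand-in for whatever evaluation would really be applied.

```python
# Exhaustive generate-and-filter search: 1,000 flavors x 10 cones x 500
# toppings = 5,000,000 combinations, enumerated mechanically and filtered.

from itertools import product

flavors  = [f"flavor_{i}"  for i in range(1000)]
cones    = [f"cone_{i}"    for i in range(10)]
toppings = [f"topping_{i}" for i in range(500)]

def acceptable(flavor, cone, topping):
    # Hypothetical filter standing in for a real preference test.
    return flavor.endswith("7") and cone == "cone_3" and topping.endswith("42")

options = (c for c in product(flavors, cones, toppings) if acceptable(*c))
print(next(options))   # first combination that survives the filter
```
])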
([

Michael Jordan and Dan Klein, our local experts in machine learning, found two dwarfs that should be added to support machine learning:

12. Dynamic Programming
Dynamic programming is an algorithmic technique that computes solutions by solving simpler overlapping subproblems. It is particularly applicable for optimization problems where the optimal result for a problem is built up from the optimal result for the subproblems.

11. Backtrack Branch and Bound
Backtrack and Branch-and-Bound: These involve solving various search and global optimization problems for intractably large spaces. Some implicit method is required in order to rule out regions of the search space that contain no interesting solutions. Branch and bound algorithms work by the divide and conquer principle: the search space is subdivided into smaller subregions (“branching”), and bounds are found on all the solutions contained in each subregion under consideration.
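
A small instance of the dynamic-programming dwarf described in item 12 above: the optimal answer for each amount is assembled from optimal answers to smaller, overlapping subproblems. The coin denominations are arbitrary example data.

```python
def min_coins(coins, amount):
    """Fewest coins needed to make 'amount' (None if it cannot be made)."""
    INF = float("inf")
    best = [0] + [INF] * amount              # best[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1    # reuse the subproblem's optimum
    return best[amount] if best[amount] != INF else None

print(min_coins([1, 5, 12, 19], 32))   # 3, e.g. 19 + 12 + 1
```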

<---------------------------------------------------------------------------->

A View of the Parallel Computing Landscape

Krste Asanovic, Rastislav Bodík, James Demmel, Tony Keaveny, Kurt Keutzer,
John Kubiatowicz, Nelson Morgan, David Patterson, Koushik Sen,
John Wawrzynek, David Wessel, and Katherine Yelick

© 2010 
One estimate is that it takes a decade for a new compiler optimization
to become part of production compilers. How can researchers innovate rapidly if compilers and operating systems evolve glacially?


The seven dwarfs are:
(The dwarfs were also called motifs, as some preferred we find a word other than dwarf.)
(A dwarf is an algorithmic method that captures a pattern of computation and communication.)

1. Structured Grids – including adaptive mesh refinement
2. Unstructured Grids
3. Fast Fourier Transform (later Spectral Methods)
4. Dense Linear Algebra
5. Sparse Linear Algebra
6. Particles (later N-body)
7. Monte Carlo
( The identification of these computational patterns in turn owes a debt to Phil Colella’s unpublished work on the “Seven Dwarfs of Parallel Computing.”; from “A Design Pattern Language for Engineering (Parallel) Software” by Kurt Keutzer and Tim Mattson, in the Intel Technology Journal 13 (2010): 4.)

 embedded computing

8. Finite State Machines—for control applications
9. Combinational Circuits—for security and error correction

 electronic design automation

10. Graph Algorithms
11. Backtrack Branch and Bound (machine learning, ala Artificial Intelligence)
                               ( ala - also label as )
                               (  ML - machine learning )
                               (  AI - artificial intelligence )
                               (  DS - data science )
(See Chapter 3 on Content-Based Image Retrieval.)

12. Dynamic Programming (machine learning, ala Artificial Intelligence)

computations in graphical models and the nature of computations in graph algorithms, particularly graph traversal

13. Graphical Models—for probabilistic reasoning in Machine Learning

     ])


p.151
   These missing events can be described as negative cues.3  Experience is important for allowing decision makers to form and use expectancies. 

p.151
Only through expectancies can someone notice that something did not happen. In “Silver Blaze”, the vital clue was a dog that did not bark at night. The dog usually barked when strangers came by. The fact that the murderer passed the dog in silence meant that the dog recognized him (Doyle 1905). 

p.153


p.154
   We recently studied weather forecasters, to try to learn how they predict changes in ceiling that allow aircraft to take off and land at airports. (Ceiling refers to the lowest layer of clouds on an overcast day.) One observation was made on a day when the ceiling was too low for aircraft operations.

p.154
He said his prediction was for the ceiling to lift above 1,000 feet by 2:00 P.M. that afternoon. (It was then about 10:00 A.M.) Probing for counterfactual thinking, we asked what sequence of events might occur that would result in the ceiling lifting earlier than that, by noon. He was unable to imagine such a possibility. He had followed a set of rules to generate his prediction and could not conceive of a different world. To us, this signaled the fact that he was not an expert. 

p.154
(A mindless information-gathering strategy is not likely to be useful.) Experienced decision makers appear to be able to spot opportunities where the information that can be helpful can be readily obtained. For example, a weather forecaster trying to predict when a ceiling will lift may notice that the ground temperature is not rising as rapidly as usual during the morning. The critical cue here is the trend in temperature increase, and the interpretation of this trend in relationship to the typical pattern of increase. Moreover, this trend can easily be tracked using one of the available displays. Skilled decision makers may be able to seek information more effectively than novices. This skill in information seeking would result in a more efficient search for data that clarify the status of the situation. 
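
([
A sketch of that ceiling-lift cue: compare this morning's observed temperature rise against a typical morning's rate. Every number here is invented for illustration.

```python
typical_rise_per_hr = 2.0                    # deg F per hour on a typical clearing morning
observed = {8: 51.0, 9: 51.8, 10: 52.3}      # hour of day -> temperature today

hours = sorted(observed)
observed_rate = (observed[hours[-1]] - observed[hours[0]]) / (hours[-1] - hours[0])

if observed_rate < 0.75 * typical_rise_per_hr:
    print(f"warming only {observed_rate:.1f} F/hr vs ~{typical_rise_per_hr} typical: "
          "expect the ceiling to lift later than usual")
else:
    print("warming near the typical rate: no reason to push back the lift time")
```
])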

p.155
Jacobs and Jaques (1991) use the term time horizon. Different tasks require different time horizons, referring to the amount of look-ahead needed. 

p.156
   Experts also experience the past. As we saw, a skilled designer can look at a part and perceive how it must have been manufactured and how the decisions were made to form it one way rather than another. 

p.157
Fine Discriminations

Experts can detect differences that novices cannot see, cannot even force themselves to see. 


p.157
This was not a matter of intelligence. It was a matter of experience.7 

p.157
That's because “it” is not a fact (the Civil War began in 1861) or an insight (dividing one number into another is like subtracting it several times). You cannot learn it just by being told or learn it all of a sudden. It takes lots of experience, and lots of variety in that experience, to notice differences. 

pp.158-159

p.158
   Four components of metacognition seem most important: memory limitations, having the big picture, self-critiques, and strategy selection.8 

p.158
They can also factor in their level of alertness, their ability to sustain concentration, and so forth. 

p.158
   Experts are not only better at forming situation awareness and seeing the big picture, but they can detect when they are starting to lose the big picture. Rather than waiting until they have become hopelessly confused, experts sense any slippage and make the necessary adaptations. 

p.158
Experts also seem more likely to critique their judgments and their plans, since they can use their experience to see where the judgments might be wrong and their plans weak. 

p.159
   Using these abilities, experts can think about their own thinking to change their strategies. Regardless of whether they want to avoid memory limitations, loss of the big picture, continued performance difficulties, or poor judgments and plans, experts try to find more robust strategies. 

p.159
Everyone needs some experience with a task before they can anticipate where they will run into trouble. They need to have some experience with different strategies for handling a task in order to learn about their own abilities, both strengths and weaknesses, so they can take these into account. 

p.161
We are often fooled into thinking that the procedures are going to be carried out easily. In fact, procedures often take much experience to interpret. Rules tell you that when a certain condition occurs, initiate a certain action. The trick is knowing when the first condition has occurred. The recipe can state, “When it is brown on top, take it out of the oven,” but brown on top is not so obvious. Brown on top stretches from “just starting to change color” all the way to “beginning to smoke”.

p.169
In chapter 7, I discussed a project for the U.S. Marine Corps to help squad leaders learn like experts, rather than trying to teach them to think like experts. 

pp.169-170
   Cognitive task analysis is the description of the expertise needed to perform complex tasks. The steps of cognitive task analysis are to locate sources of expertise (and acquire background knowledge in the process), evaluate the quality of the expertise, perform knowledge elicitation to get inside the head of the skilled decision makers, process the findings so they can be interpreted by others, and apply the findings. Traditional task analyses have concentrated on the procedures to be followed and have had relatively little to say about perception, judgment, and decision-making skills. As we move to more complex jobs, especially as information technologies place more demands on workers and their supervisors, we have to go beyond the traditional task analyses. 

p.170
   In the early 1800s, petroleum was a nuisance. It fouled the drinking water and got on farmers' boots. Then in 1854 a Canadian geologist figured out how to extract kerosene from petroleum, and the headlines read, “Good News for Whales”. All of a sudden, petroleum was a resource. Today we have a variety of petroleum engineering disciplines for making use of petroleum. We have ways to locate the petroleum, assay its quality, extract it, process it, and use it (see table 10.1). 

p.170
Therefore, we can talk about a discipline of knowledge engineering. 

p.175
Key Points
  •  Experts can perceive things that are invisible to novices: fine discriminations, patterns, alternate perspectives, missing events, the past and the future, and the process of managing decision-making activities.
  •  Skilled chess players make high-quality moves even under extreme time pressure, and the first moves they consider are typically high quality. ([ this bullet point should not be applied to a business-case decision-making model, or to a doctor or nurse making a diagnosis. ])
  •  Training to high-skill levels should emphasize perceptual skills, along with mastery of procedures. 

p.177
We see the world as patterns. Many of these patterns seem to be built into the way our eyes work. We have detectors to notice lines and boundaries. The world is organized in our eyes to highlight contrasts, before any information reaches our brains. We have other powerful organizers to frame the visual world into Gestalts, so we naturally group things together that are close to each other. If a flock of birds flies overhead, we see it as one flock, sharing a common fate. Each time the flock shifts direction, we do not have to track the trajectories of each bird individually. If one bird flies off on its own, that is the bird we notice. It has broken the pattern of common fate, and it commands our attention. Infants see the world in this way. Show 4-month-old infants several dots moving together, and they treat them as one unit. Send one dot off by itself, and the infant is surprised. We know it is surprised because it stops drinking its milk at that instant. It shows a startle reflex. Even infants organize the visual world through patterns. 

pp.177-178
   A story is a blend of several ingredients:1

  •  Agents--the people who figure in the story. 
  •  Predicament--the problem the agents are trying to solve. 
  •  Intentions--what the agents plan to do. 
  •  Actions--what the agents do to achieve their intentions. 
  •  Objects--the tools the agents will use. 
  •  Causality--the effects (both intended and unintended) of carrying out actions. 
  •  Context--the many details surrounding the agents and actions. 
  •  Surprises--the unexpected things that happen in the story. 

   In a simple form, a story ties these and other ingredients together. Here is a story we heard during the project Beth Crandall did with nurses.

   1. ... story grammar (a set of primitive features typical of all stories)  ... story grammar (Wilensky 1983).  ... Pennington and Hastie (1993) and Schank (1990). 

pp.178-179
Example 11.1
The Case of the Infant Whose Heart Wasn't Beating 80 times a minute
-------------------------------------------------------------------------------


-------------------------------------------------------------------------------


p.179
   This story is a warning not to trust machines because they can mislead. It is a warning that lifesaving methods, such as air tubes, can kill the infants they are supposed to sustain. It is a story about expertise. The nurse who had seen a baby die of pneumopericardium could recognize the symptoms better than the other nurse. This is also a permission story. It tells when it is all right to make a fuss, to refuse to be reassured even if you have to risk a friendship. It tells about the culture of the hospital, where the boundaries are, and what it takes to convince others. You may find additional lessons in the story.
   Stories like this contain many different lessons and are useful as a form of vicarious experience for people who did not witness the incident. They also help to preserve values, by showing newcomers the kind of environment they are entering. For our purposes as researchers, these kinds of stories also help us understand situations and relationships.
   We like to hear good stories retold. What is more interesting is our need to tell stories, again and again. Each telling helps us understand more about the lessons embedded in the story. I suspect that this need to tell stories starts very early, even before the beginning of language. I have even seen storytelling in a child who had not yet started to talk--my nephew Alexander.  


p.180
Features of Good Stories

A story about the infant whose heart almost strangled is effective for several reasons. It is dramatic. A child almost died; only a last-minute intervention saved him. It allows empathy. We can imagine ourselves being the nurse who ignored the warning, so if there are lessons to be learned, we want to learn them. It is instructive. We sense wisdom in this story, even if we are not sure of all the messages. Therefore, we want to keep it in the back of our heads as an analogue in case we ever wind up in a similar situation. Drama, empathy, and wisdom are key. Stories are remembered because they are dramatic. They are used because we can identify with one or more of the actors. They are told and retold because of the wisdom they contain--the lessons that keep emerging with each telling.

p.181
In contrast, a story records an event that happened within a natural context. It documents that under these conditions, these causes operating simultaneously  produce this result. In its way, a story is also a report of an experiment, linking cause and effect. It says, “Under these conditions, this is what happens.”

p.181
   Even the request for more details can be seen as an attempt to pin down the conditions more carefully, to understand after the fact what the causes really were. 

p.210
[System designers] draw on previous projects that they did or on other people's projects with which they are familiar. In studying the types of evidence and information on which designers rely, Klein and Brezovic (1986) found that design engineers prefer to gather firsthand evidence by running little demonstrations using mockups. When demonstrations are impractical, the design engineers looked for previous systems to serve as analogues. They used these analogues to tell them what tolerances to use, what configurations, and so forth. 

p.211
   The technique of comparability analysis has been used for many different functions during the past 20 years. 

p.211
Failures such as this will reduce the credibility of the method even though the failure was in the application of the method. The method was misused because the people applying it did not understand the logic behind it. 

p.217
The Goeben [a German battlecruiser stationed in the Mediterranean Sea at the start of World War I] also bottled up 95 percent of the Russian shipping (since their only warm water ports were on the Black Sea), helping to create the hardships that led to the Russian Revolution. 

p.218
We assume that living in a shared culture will provide us a basis of common referents. 

p.221
By telling the front office exactly what to do but not giving the rationale, he was leaving them vulnerable when the original plan fell apart.  

p.222
When you communicate intent, you are letting the other team members operate more independently and improvise as necessary. You are giving them a basis for reading your mind more accurately. 

p.223
   Larry Shattuck (1995), 

p.224
   It is easy to say we want to encourage improvisation and initiative and to make sure that people understand why they have been given certain assignments. In reality, this practice turns out to be difficult, because it means that the people at the higher echelons must give up some of their control.  

p.225
(Klein 1994)
There are seven types of information that a person could present to help the people receiving the request understand what to do (a minimal sketch follows the list):

1. The purpose of the task (the higher-level goals).
2. The objective of the task (an image of the desired outcome).
3. The sequence of steps in the plan. 
4. The rationale for the plan. 
5. The key decisions that may have to be made. 
6. Antigoals (unwanted outcomes). 
7. Constraints and other considerations. 
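Because the seven elements read like a checklist, here is a minimal sketch of how they might be captured as a simple structure and checked for completeness before a briefing. The class name, field names, and the completeness check are my own illustration and are not part of Klein's (1994) formulation.
-------------------------------------------------------------------------------
from dataclasses import dataclass, field
from typing import List

@dataclass
class IntentStatement:
    """Checklist structure for the seven elements of intent (after Klein 1994)."""
    purpose: str                                             # 1. higher-level goals
    objective: str                                           # 2. image of the desired outcome
    plan_steps: List[str] = field(default_factory=list)     # 3. sequence of steps in the plan
    rationale: str = ""                                      # 4. rationale for the plan
    key_decisions: List[str] = field(default_factory=list)  # 5. key decisions that may arise
    antigoals: List[str] = field(default_factory=list)      # 6. unwanted outcomes
    constraints: List[str] = field(default_factory=list)    # 7. constraints, other considerations

    def missing_elements(self) -> List[str]:
        """Name any element left empty, as a prompt to the person giving intent."""
        return [name for name, value in vars(self).items() if not value]

# Usage: a briefing that states only purpose and objective is flagged as incomplete.
brief = IntentStatement(purpose="Higher-level goal here", objective="Desired outcome here")
print(brief.missing_elements())  # ['plan_steps', 'rationale', 'key_decisions', 'antigoals', 'constraints']
-------------------------------------------------------------------------------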


pp.228-229 
   Karl Weick (1983) has described a streamlined version of a Commander's Intent statement. In Weick's version, there are five facets:
  •  Here's what I think we face. 
  •  Here's what I think we should do. 
  •  Here's why. 
  •  Here's what we should keep our eye on. 
  •  Now, talk to me. 

p.229
   The art of describing your intent is to give as little information as you can. The more details you pack in, the more you obscure your main points. However, if you leave out an important consideration, you run the risk that the person will become confused at a critical decision point. 

p.229
   The concept of intent can be applied to equipment as well as to people. Particularly in working with sophisticated computer equipment, we struggle to figure out what the machine is trying to do. 

p.230
One of these is the flight management system, which helps track the course that an airplane takes from the time it takes off until it lands. This replaces the autopilot and adds to the computerized capability of keeping the plane on course at the correct speed, vectored in the right direction. George Kaempf managed a project funded by NASA in which we studied Flight Management Systems (Kaempf, Klein, and Thordsen 1991). We found that computer systems also need to communicate intent.


Example 13.7
The Flight Mismanagement System
-------------------------------------------------------------------------------
An airplane is on a routine flight from the West Coast to the East Coast. It is a red eye special and is flying above 30,000 feet. It is 3:30 A.M.
   A company employee riding in the jumpseat of the cockpit kicks the rudder control blade by mistake, so that the rudder deflects to an extreme position. At the time of this incident, the switch was located close to the floor, behind a pedestal and thereby shielded from view. No one sees that the control has been displaced, and the flight management system does not notify the crew.
   Up to this point, the flight is routine. When the rudder control blade is kicked, the flight management system responds by compensating with other flight controls to keep the aircraft in straight and level flight. The crew notices no change, and the flight management system does not signal to the crew that anything unusual has happened. Because the rudder control blade has been left at an inappropriate setting, the flight management system continues to compensate.
   When the flight management system reaches its limit, it gives up. It turns off, handing the controls back to the unsuspecting pilots, with the aircraft in an out-of-tolerance condition. Without warning, the aircraft stalls and begins to plummet. The crew first believes it has an engine problem. To regain control, the crew takes a number of ineffective actions that only make the problem worse. The airplane is falling and increasing in speed.
   The story has a happy ending. The pilots are able to wrestle the plane back under control, pulling it out of a steep dive. They eventually figure out what has gone wrong and reset the rudder. But after that, it is hard for the pilots to trust the flight management system fully again.
-------------------------------------------------------------------------------


pp.230-231
   One problem with the flight management system was that the pilots could not anticipate what the system was going to do. According to Earl Wiener (1989), the questions asked most often about automated systems during nonroutine events are: “What is it doing? Why is it doing that? What is it going to do next?”

p.231
These systems have to be able to identify the appropriate times to convey intent and the appropriate level and format for doing it. Only then will the human team members feel that they can read the “minds” of their computer colleagues and feel comfortable working with them. 
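A minimal sketch of the design point being made here: a toy controller that announces what it is doing, why, and what it will do next as it approaches its authority limit, instead of compensating silently and then disengaging the way the flight management system did in Example 13.7. The thresholds, messages, and class name are invented for illustration and are not drawn from any real flight management system.
-------------------------------------------------------------------------------
class CompensatingController:
    """Toy controller that announces its intent instead of failing silently.

    Hypothetical sketch: the thresholds, messages, and method names are
    invented to illustrate conveying intent, not taken from a real system.
    """
    def __init__(self, authority_limit: float, announce=print):
        self.authority_limit = authority_limit
        self.correction = 0.0
        self.announce = announce

    def compensate(self, disturbance: float) -> None:
        self.correction += disturbance
        used = abs(self.correction) / self.authority_limit
        if used >= 1.0:
            # Instead of quietly disengaging, say what happens next.
            self.announce("Disengaging: correction limit reached; crew has control.")
        elif used >= 0.7:
            # Answer "what is it doing, why, and what next" before the limit.
            self.announce(
                f"Holding course by offsetting a persistent {self.correction:+.1f} deg "
                f"rudder demand ({used:.0%} of my authority); if this continues I will disengage."
            )

# A sustained, unnoticed disturbance now produces warnings well before the handoff.
ctrl = CompensatingController(authority_limit=10.0)
for _ in range(10):
    ctrl.compensate(1.0)
-------------------------------------------------------------------------------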


pp.236-238
Example 14.2
The Best Teams: Wildland Firefighters
-------------------------------------------------------------------------------

-------------------------------------------------------------------------------


p.238
In some industries, the security officer knows that the chance of a single emergency during a 20-year career is less than 50 percent. 

p.242
In contrast, the members of experienced teams want to know as much as possible about the overall status of the team. They realize that they may have to compensate for others, ask for help, or pitch in for team goals even if they have to abandon their own task temporarily. Experienced teams have integrated identities; the members identify themselves in relationship to the whole team. Inexperienced teams have fragmentary identities and focus on individual assignments more than team requirements.6 

p.243
Finally, when they have the basics down, they can free up attention to see the challenges facing the team as a whole. 

p.243
milk box

p.244
Team Metacognition
Metacognition refers to the concept of thinking about thinking. It emerged from research with children to describe how they learn to take their own thinking strategies into account. They learn the limits of their memory and acquire strategies for working around these limits, such as knowing when to reread something because they are not sure they understood it. Children cannot develop good skills for metacognition until their behaviors become sufficiently stable and predictable for them to anticipate what will happen and take the necessary steps. <skip last sentence of paragraph>

p.249
Most likely, some did notice the error but adhered to the U.S. Navy culture of not correcting a superior officer. 

p.250
What gets entered into collective consciousness is only a small part of what all the team members are thinking about. There are many good ideas that never get spoken--and many good ideas that could be combined into real breakthroughs. 

p.250
Marvin Thordsen
More important, Marvin found that once the interruption was dealt with, the team usually did not return to its original discussion. It moved on to a different topic. The flow of the discussion was driven by random associations people brought up, not by an agenda. 

p.251
Once the action started, the team members took their cues from what others were doing, adjusting and coordinating on the spot. 

p.252
Ideas That Control the Team
Skilled rowers refer to a phenomenon they call swing, in which all four or eight rowers catch the water at the same instant, and it feels as if the boat has gone flying out of the water. The rowers stop worrying about their individual actions and try to synchronize their movements, to gain the power of coherent focus like light waves that become lasers when they are brought into coherence. 
   During a team meeting when individuals are waiting for an opening to speak and preparing what they will say, it sometimes happens that an idea is articulated that captures everyone's attention and refocuses the discussion. We can say that the idea captured the team. It brought the thoughts of the team into coherence. This usually does not last very long, and in most meetings it does not happen at all. When it does occur, it offers a glimpse of the team mind. 

p.255
   In 1995 Duke Power Company hired Klein Associates to study the team decision making in the emergency response organization of one of its nuclear power generating stations.  
Dave Klinger (project leader), Doug Harrington (independent consultant)

p.256
At first, there were more than 80 people crowded into one room. 

p.256
Dave and Doug found that the heavy workload was caused, in part, by having too many unnecessary people. They tried cutting out assistants and nonrelevant staff, and performance got better. By the end of the project, the staff was down to 35, and workload had decreased, not increased. 

p.256
   They decided to institute a new room layout. They placed individuals who must share information next to each other. They moved the status board so that all the major players at the command table could see it. They reorganized the board to show plant status, the status of teams in the plant, the status of equipment, the most recent events, and current priorities. 

pp.256-257
There was a shared perception that the problems presented in the drill were easier than in previous drills, whereas in actuality the exercise was more challenging. The improved teamwork just made it seem easier because the team members were no longer getting in their own way. 

p.262
The goal of making the thinking explicit means that a community can arrive at a common perspective and that teams can be set up to work separately on different parts of a problem with some confidence that their work will fit together at the end. 

p.267
   The designers would not have done a good job if they insisted on a consistent principle, such as retaining a default option once someone selects it until it is changed. They had to understand how I was going to use the system and design around my needs. They had to preserve the consistency of function rather than consistency of feature. 

p.267
   Logic is indifferent to truth. The goal of logic is to root out inconsistent beliefs and generate new beliefs consistent with the original set. Logic does not consider whether our beliefs are true. A logical person can be wrong in everything she or he believes and still be consistent. 

p.267
We try to perceive inconsistencies in order to detect anomalies; the anomalies trigger our efforts to diagnose situations and initiate problem solving. We try to see the inconsistencies. 


pp.268-269


p.271
I am interested only in the cases where we regret the way we made the decision, not the outcome.

p.271
Simply knowing that the outcome was unfavorable should not matter. Knowing what you failed to consider would matter. 

p.271
The studies showed that in making judgments, we rely on information that is more readily available and appears more representative of the situation. 

p.273
Jim Reason, at the University of Manchester, finds that the operator of a system who is blamed for the error is often the victim of a series of problems of faulty design and practice (1990). Reason coined the term latent pathogens to refer to all the problems, such as poor design, poor training, and poor procedures, that may go undetected until the operator falls into the trap. It is easy to blame the operator for the mistake, yet all the earlier problems made the mistake virtually inevitable. 

p.274
Decision makers noticed the signs of a problem but explained it away. They found a reason not to take seriously each piece of evidence that warned them of an anomaly. As a result, they did not detect the anomaly in time to prevent a problem.5 

Example 6.1
The Missed Diagnosis
-------------------------------------------------------------------------------

-------------------------------------------------------------------------------

p.276
One definition of uncertainty (paraphrasing Lipshitz and Shaul 1997) is “doubt that threatens to block action”. Key pieces of information are missing, unreliable, ambiguous, inconsistent, or too complex to interpret, and as a result a decision maker will be reluctant to act. In many cases, the action will be delayed or will be overtaken by events as windows of opportunity close. 

p.277
   Schmitt and Klein (1996) identified four sources of uncertainty:

   1. Missing information. Information is unavailable. It has not been received or has been received but cannot be located when needed. 

   2. Unreliable information. The credibility of the source is low, or is perceived to be low even if the information is highly accurate. 

   3. Ambiguous or conflicting information. There is more than one reasonable way to interpret the information. 

   4. Complex information. It is difficult to integrate the different facets of the data. 

   We can also identify several different levels of uncertainty: the level of data; the level of knowledge, in which inferences are drawn about the data; and the level of understanding, in which the inferences are synthesized into projections of the future, into diagnoses and explanations of events. 


p.277
It is more likely that the information age will change the challenges posed by uncertainty. 

p.279
Previously information was missing because no one had collected it; in the future, information will be missing because no one can find it. 

p.279
By way of analogy, when radar was introduced into commercial shipping, the intent was to improve safety so that ships could avoid collisions when visibility was poor. The actual impact was that ships increased their speed, and accident rates stayed constant. On the decision front, we expect to find the same thing. Planning cycles will be expedited, and the plans will be made with the same level of uncertainty as there was before. 

p.279
On the battlefield, plans are vulnerable to the cascading probability of disruption. 

p.280
We can learn the wrong lessons from experience. 

p.282
Jim Shanteau (1992) has suggested that we will not build up real expertise when: 
  •  The domain is dynamic. 
  •  We have to predict human behavior. 
  •  We have less chance for feedback. 
  •  The task does not have enough repetition to build a sense of typicality. 
  •  We have fewer trials. 

p.282
Lia Di Bello
If she gave people a task that violated the rules they had been using, the experts would quickly notice the violation and find a way to work around it. 

p.283
A unit designed to reduce small errors helped to create a large one.

p.283
Jens Rasmussen (1974)
   Since defenses in depth do not seem to work, Rasmussen suggests a different approach: instead of erecting defenses, accept the malfunctions and errors, and make their existence more visible. 

p.283
Instead of trusting the systems (and, by extension, the cleverness of the design engineers) we can trust the competence of the operators and make sure they have the tools to maintain situation awareness throughout the incident.8  

p.284
  •  Experience does not translate directly into expertise if the domain is dynamic, feedback is inadequate, and the number and variety of experiences is too small. 


p.291
   Regarding the nature of our data, one weakness of our work is that most of the studies relied on interviews rather than formal experiments to vary one thing at a time and see its effect. There are sciences that do not manipulate variables, such as geology or astronomy or anthropology. Naturalistic decision making research may be closer to anthropology than psychology. Sometimes we observe decision makers in action, but we rely on introspection in nearly all our studies. We ask people to describe what they are thinking, and we analyze their responses. We do not know if the things they are telling us are true, or maybe just some ideas they are making up. We can repeat the studies or, better yet, other investigators can repeat the studies to see if they get the same results. Nevertheless, no one can confidently believe what the decision makers say. 
   The use of introspection raises questions about how much to trust the findings of studies. However, alternate methods of scientific inquiry have their own problems and limitations. Research on naturalistic decision making collects and reports data, and it can be used as a source of ideas and hypotheses. The think-aloud data are soft, and fuzzy, and they are difficult to interpret. Nevertheless, we can still learn a lot by observing and questioning people as they perform realistic tasks with natural contexts. 

NOTES
p.295
2. As far as I know, the term sources of power was originated in cognitive science by Doug Lenat (1984), a researcher in the area of artificial intelligence. Lenat used the term sources of power to designate the analytical abilities of breaking a problem down into elements and performing basic operations on these elements as a way of solving a problem. The sources of power discussed in this book go beyond analytical abilities and include abilities that have traditionally been difficult for the field of artificial intelligence to handle. The analytical sources of power that Lenat and other artificial intelligence researchers have emphasized are described in sources such as J. R. Anderson (1983) and Newell (1990). Dreyfus (1972) has presented a critique of conventional artificial intelligence approaches. The work described in this book has been heavily influenced by Dreyfus and the Heideggerian perspective he described. 

p.296
5. ... In other words, the firefighters seemed to be using functional categories rather than structural categories. They were organizing their world on the basis of the response patterns called for. 

p.297
10. Marvin Cohen (personal communication) suggested this criticism: if we do not trust people's ability to make large judgments, we should not trust people to make small judgments. Erev, Bornstein, and Wallsten (1993) have shown another problem with rational choice (e.g., multiattribute utility analyses) strategies, which is that people make worse decisions if they perform analyses first. But we should not be too worried. Jim Shanteau (1992) has noted that when people are dissatisfied with the results of a rational choice exercise, they often change their ratings to make it come out the way they want. 

p.297
3. ... Laboratory studies often find that naive subjects show confirmation bias, but Shanteau (1992) has found that experienced decision makers do not fall prey to confirmation bias. Rather, they search for evidence that would be incompatible with their interpretations. 

pp.299-300
2. Isenberg (1984), in studying business managers, noted the importance of finding problems (i.e., detecting subtle anomalies that can be early signs of pitfalls). The ability to find problems, notice anomalies, and identify opportunities is nontrivial (see also Shulman 1965; Shulman, Loupe, and Piper 1968). Failure to detect a problem [problem extraction] or opportunity can result from lack of experience in judging a rate of change or in not having a sense of typicality that is sufficiently strong to highlight anomalies when they are just emerging.
   A slow problem onset can also result in delayed problem detection. The slow onset is what Xiao, Milgram, and Doyle (1997), in their study of anesthesiologists, refer to as “going sour” incidents. There are no clear markers that the situation is degrading, and by the time the problem solver realizes it, the viable options may have disappeared. De Groot (1946/1965) has referred to the judgment of urgency in chess as the early detection that a threat needs to be taken seriously. 

p.302
2. It is easy to toss around concepts such as expertise. How much experience does it take to become an expert? Ericsson and Charness (1994) reviewed studies of expertise in a wide variety of tasks and found that the top performers appeared to practice around four hours a day, six to seven days a week, for approximately a decade. Individual differences in abilities, strength, or other factors did not have the same impact as sheer amount of time spent practicing. You might think that people who were better stayed with the task longer, so the initial skill level was a key to the amount of dedication shown. However, the data do not support this idea. The different studies that Ericsson and Charness reviewed failed to demonstrate that the eventual experts started out as gifted. Rather, it was their dedication that paid off. 
   One of the keys that Ericsson and Charness identified is the way experts practice. Often they set practice goals for themselves. For example, if a child's mother has insisted she spend an hour practicing the piano, she will be watching the clock the whole time. However, if she marks off an hour to play a certain piece through without any errors and almost has it at the end of the hour, she might consider trying it again--maybe once or twice--until she feels her hands getting too tired. This proactive attitude toward practice is different from simply putting time in. We can see this same difference between children doing spelling drills and children trying to master video games. Work such as the Ericsson and Charness study should open up more research into strategies for gaining expertise. 

p.303
4. .... The pilot's goals, the functions being performed, determine which data elements are relevant. 


p.169
   Hubert Dreyfus and Stuart Dreyfus (1986) have described how people move from the level of novices to experts. They claim that novices follow rules, whereas the experts do not. We would be mistaken to think that the experts had learned the rules so well they did not have to refer to them. Hubert Dreyfus uses the example of learning to ride a bicycle by using training wheels. As adults, we do not believe we learned to use those training wheels so well that they became an ingrained part of our bicycle riding perspective. We outgrew the need for training wheels. We developed a sense of bicycle dynamics.13 

p.304
13. For example, a tennis player might begin by swinging a foreign object, a racket, but after hundreds of hours, that racket will feel like part of his or her arm. The tennis players are not going to swing their rackets at the ball. They are going to feel as if they are hitting the ball themselves. The incorporation of the tool as a body part is like an infant's learning to incorporate an arm so that the strange blob that goes floating by becomes a tool that is used naturally and without any attention. 
   Radar operators who start by learning to operate a complex piece of equipment wind up seeing through the equipment, sensing the objects directly, and adjusting the set as naturally as they squint to look at something in the distance. An instructor pilot once told me that when he started to fly, he was nervous much of the time, afraid that he might make a mistake. After several months, he no longer felt that he was flying the airplane but that he himself was flying. That was the point where flying became enjoyable. 

p.169
   Presenting the procedures to trainees gives them a false sense of progress. This confidence dissipates when novices realize that applying the procedures depends on context and that no one can tell them what context is. Judgment and decision tasks in natural settings are rarely straightforward.  

p.169
We have to clarify their strategies and their ways of perceiving the situation. We have to elaborate on each aspect of expertise so that expertise becomes the guidepost for the training. In chapter 7, I discussed a project for the U.S. Marine Corps to help squad leaders learn like experts, rather than trying to teach them to think like experts. 

p.308
8. An additional direction is exemplified by the work of Richard Nisbett (1993). Nisbett has tried to identify and use natural reasoning patterns to teach people to do a better job of representing evidence and considering the implications of their actions. 

