Beware of switching onto automatic pilot

Despite using verbal checks to ensure patient safety, healthcare staff make mistakes. One explanation for this is that they respond automatically to questioning without matching their actions to their words, explained Brian Toft at a Health Informatics Forum meeting in July. In looking to eradicate such errors, technology has a part to play, albeit a fairly small one. Helen Boddy, BCS assistant editor, reports.

When you first learn to drive, you need to concentrate really hard on what you are doing, but after a while you can sail along on automatic pilot. You can even do something else at the same time, such as talking to a passenger, or changing radio stations.

In such cases automaticity is generally seen as an advantage, but in others it can be hazardous, for example when responses to verbal double-checking in the NHS become automatic. Automaticity can result in a respondent saying that an action has been taken even when it has not, according to Brian, who is a specialist in the investigation of adverse events.

He suggested that responses to verbal checklists can be likened to a priest calling out and the congregation giving back the answers. He has written a paper about the phenomenon, which he has called involuntary automaticity.

Independent enquiry

Brian wrote the paper after conducting an independent enquiry into a case where staff missed an error in radiotherapy treatment on multiple occasions. He was the first non-physician to chair an enquiry into the death of a patient.

In investigating the case, Brian considered what could have caused dedicated professionals to make such mistakes, given that he was convinced that they were neither negligent nor uncommitted.

He noted that studies had shown that the verbal witnessing procedure does not always prevent errors. In his paper he cites Krause et al, who observed that the error rate for drug administration by two nurses was 2.12 per 1,000, while for one nurse it was 2.98 per 1,000.

Even though there was an improvement with two nurses, the error rate was still greater than two per 1,000. As studies also showed that errors were made in a variety of healthcare settings, Brian suspected that there must be mechanisms that predispose people to make such mistakes.
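Put another way (treating both figures as rates per 1,000 administrations, as the comparison with 'two per 1,000' above implies):

    (2.98 - 2.12) / 2.98 ≈ 0.29, i.e. roughly a 29 per cent relative reduction,
    yet 2.12 errors per 1,000 still exceeds two per 1,000.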

He looked for clues beyond the NHS and found that automaticity was more widespread than healthcare. Barshi, for example, had written a paper with examples from the aviation industry where accidents had happened despite aircrew having completed verbal checklists:

Barshi said: 'Automaticity has… a cost that manifests itself in procedures that are highly routinized but require close attention, such as verbal checklist procedures. In such procedures, errors occur because the routine leads to automaticity.'

One example given by Barshi was the case of a Boeing 737 aircraft about to land in Wyoming in 1983. The aircrew went through their routine pre-landing checklist. The co-pilot who was flying the aircraft responded to the captain that he had lowered the wheels when in fact he had not.

The aircrew had gone through the checking process without being affected by it, according to Barshi. It was assumed that, since they had run the checks, everything must be OK and it was safe to proceed.

Similarly, according to Brian, there is a danger that, once quality procedures have been put in place, human beings assume that no mistakes will then be made. Indeed, in the radiotherapy case, safety regulations were being followed. Furthermore, colleagues believed each other to be professional, another factor that surfaced in Brian's research.

Cases of involuntary automaticity by aircrews suggested that accidents were more likely when the crew members knew each other well and trusted each other's professionalism. This trust meant that they believed the respondent's answer and did not question it further.

Another factor that could aggravate involuntary automaticity is an increase in the number of people carrying out checks. If there are two people, there is ambiguous accountability with each assuming the other is doing the check.

High stress levels would also provide an environment where involuntary automaticity would flourish, according to various researchers, including Nguyen and Bibbings. Brian's investigation into the above-mentioned radiotherapy case showed that the department in question suffered from a heavy workload, time pressures, recruitment problems and understaffing. The work environment was far from optimal.

Parallels with aircrew

Brian believes many parallels can be drawn between the working environments of the healthcare and aviation industries:

  • Use of verbal checklists;
  • High stress and fatigue levels;
  • Trust in the professionalism of colleagues.

If anything, Brian believes that the healthcare profession is likely to be more prone to involuntary automaticity than aircrews because the former use verbal checks more than the latter.

When he published his paper in 2005, Brian could find no others about involuntary automaticity, and no one has contradicted it to date. He feels it is important to stress that his findings show the value of an independent, in-depth investigation, since such an investigation can reveal new knowledge.

He believes that, now we know about involuntary automaticity, we should be looking to eradicate it, and he offered several suggestions for how to reduce it.

Remedial measures

Technology can be used to help reduce the need for verbal checks. For example, bar codes can be used to help identify patients, misidentification being one cause of the incorrect administration of drugs.
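As a rough illustration of the kind of check a bar-code system automates, here is a minimal sketch; the function name, identifier format and messages are hypothetical rather than taken from any particular NHS system:

    # Minimal sketch: compare the identity read from a patient's wristband
    # bar code with the identity on the medication order before the drug is
    # given. All names and formats here are illustrative only.
    def identity_matches(wristband_scan: str, order_patient_id: str) -> bool:
        """Return True only when the scanned ID matches the ID on the order."""
        return wristband_scan.strip() == order_patient_id.strip()

    scanned = "1234567890"    # value read by the bar-code scanner
    on_order = "1234567890"   # patient identifier printed on the drug order
    if identity_matches(scanned, on_order):
        print("Identity confirmed: proceed with administration.")
    else:
        print("MISMATCH: stop and escalate before giving the drug.")

The point is simply that the comparison is made the same way every time, removing one opportunity for an automatic 'yes' to a spoken question.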

However, Brian believes that technology cannot be the sole solution for several reasons. Technology can assist people, but it is neutral and only as good as the people who use it. 'Nothing is a substitute for original thinking and keeping your eye on the ball,' said Brian.

Furthermore, human beings can still make errors when technology is in place, because people will find ways of circumventing it.

Moreover, it is people who create, operate and maintain technology, and it is people who put the errors into technological systems. In the radiotherapy case above, staff input incorrect settings, and when others checked those settings they did not see the error.

Furthermore, there is a risk that humans take as gospel what comes out of a computer and act upon it without thinking, according to Brian. That is clearly risky if garbage has been put in. Even though silent checks by computers can be useful, humans still need to check the output themselves.

Solutions to involuntary automaticity must therefore also involve people, according to Brian. One measure he proposes is to have the challenger ask only one question and wait for the response before asking another. Another proposal is to have two independent checkers, both responsible but conducting their checks one at a time.

A tactile as well as an oral response could also help, for example NHS staff touching the lever, or whatever needs to be moved, when responding to a check.

Another suggestion by Brian, which he has thought of since publishing his paper, is to involve the patient in the checks. The patient could, for example, help check the label on the medication.

If the concept of involuntary automaticity is correct, it has other implications, such as its potential impact on negligence cases brought before the courts.

It would mean that management is responsible for errors, as it creates the environment and workload of employees. If it is management's responsibility because it has put staff in an unsatisfactory position, legislation could be turned on its head.
