Distributed cognition: designing for the expert and the machine

In 2013 the Journal of Patient Safety published a study estimating that as many as 440,000 Americans die each year from preventable medical error. In the three years that have elapsed since then, new technologies to prevent such errors have been developed – if not necessarily implemented.

But how far can throwing technology at the problem get us?

In a four-year study at a tertiary hospital in Hong Kong, a team analysed medication errors involving technology. Of these, just 1.9% were due to the technology itself. The remainder, 98.1%, were due to ‘socio-technological’ factors – the errors originated in the ‘knowledge coupling’ between the expert and the machine.


One example of such errors comes from my short time in anaesthetics. The oxygen saturation probe shows a line (a trace showing the pulse) and a number. If it falls off the patient’s finger or can’t get a good reading, the trace goes flat or wildly erratic and a question mark is displayed instead of the number. But there is a grey zone in its user interface design. Sometimes it is not detecting a signal, yet the trace continues to look relatively good, if slightly attenuated, and it continues to display the last recorded number. We had a situation where a patient was desaturating – the oxygen in their blood was rapidly dropping – but the trace looked fine and the number said 100%. My consultant, with years of experience, had a lower threshold of trust in the machine and decided to reposition the probe. The trace improved and the number refreshed at 80%.
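To make that grey zone concrete, here is a minimal sketch in Python. It is purely illustrative – the signal_quality score, the 0.8 cut-off and the five-second timeout are my assumptions, not how any real monitor is implemented – but it shows the difference between silently repeating the last number and explicitly flagging it as stale.

```python
import time

STALE_AFTER_S = 5  # assumed timeout; real monitors use device-specific limits


class SpO2Display:
    """Sketch of the grey zone: a display that silently repeats the last
    good number can mask a lost or poor-quality signal."""

    def __init__(self):
        self.last_value = None
        self.last_good_time = None

    def on_sample(self, value, signal_quality):
        # Accept a reading only when the probe reports an adequate signal
        # (signal_quality and the 0.8 cut-off are illustrative assumptions).
        if signal_quality >= 0.8:
            self.last_value = value
            self.last_good_time = time.monotonic()

    def render(self):
        if self.last_good_time is None:
            return "?"  # never had a good reading
        if time.monotonic() - self.last_good_time > STALE_AFTER_S:
            # Safer behaviour: tell the clinician the number is old,
            # rather than displaying it as if it were current.
            return f"{self.last_value}% (stale)"
        return f"{self.last_value}%"
```

The design choice is the point: the failure in the anaesthetic room came from the machine presenting stale information as if it were live, and the safer version simply refuses to do that.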


Well designed, integrated and intelligent software may go a long way towards reducing deaths due to medical error, but new kinds of error can arise, and are arising. It is not enough to design software that is usable, or enjoyable (not that the software in these cases necessarily was – it’s likely that it wasn’t). Experts using technology are subject to all sorts of cognitive and decision-making biases that also need to be taken into account.

Some examples are given in the figure below:

[Figure: Technology, cognition and error – examples from Coiera (2015)]

Coiera, E. (2015). Technology, cognition and error. BMJ Quality & Safety, 24(7), 417-422.

Dr Itiel Dror at UCL describes the use of technology in expert domains as a form of ‘distributed cognition’. There is, he argues, a spectrum from human-only cognition – for example, observing and diagnosing a skin infection – to machine-only cognition, recent examples of which are emerging from Stanford’s computational pathologist, Enlitic’s deep learning for radiology and Watson’s oncology recommendations.

Dror, I. E. & Harnad, S. (eds.) (2008). Cognition Distributed: How Cognitive Technology Extends Our Minds. (258 pp.) John Benjamins, Amsterdam.

As we progress towards the machine end of the spectrum, we’ll redefine the role of the clinician. New kinds of cognitive error may emerge, different in nature from those we face now.

Among the most concerning to me at the moment, given the emergence of artificial intelligence, are errors of omission and commission.

Omission occurs when the clinician doesn’t do something because the system didn’t tell her to. Commission occurs when the clinician does something simply because the machine said so.

A recent article in FastCoDesign highlighted that one of the biggest problems with AI is not some Terminator-style apocalypse (though Nick Bostrom would disagree), but the gradual attrition of our ability to make decisions.

We’ve already seen this in the erosion of pilots’ skills that contributed to the Air France crash in 2009. The French Bureau d’Enquêtes et d’Analyses (BEA) has since called for improved pilot training, even in the context of highly automated flying.

How might we design systems to fully benefit from the combined cognitive efforts of experts and their machines, and reduce the errors that arise from their interaction?

One answer perhaps lies in a new approach called Deep Design, coined by Sheldon Pacotti at Frog Design in a recent article. He argues that, until now, design in technology has been focussed on shutting the user out: when something goes wrong and crashes, we don’t want to know why, we just want it fixed.

With intelligent algorithms, however, we need to create a conversation. We, as consumers as well as experts, want to know why the algorithm came to a particular decision or recommendation. For experts, that transparency will be essential to maintaining their own decision-making abilities.

But also, the machines – at the moment – need us. An algorithm must be trained, and for an expert to do this effectively, transparency into its process is needed.
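As a rough illustration of what ‘showing the why’ could look like, here is a minimal sketch of a toy risk score that returns its per-feature contributions alongside its recommendation. The feature names, weights and threshold are entirely made up for the example; they are not drawn from any real clinical model or product.

```python
# Toy 'explainable' recommendation: the score is returned together with
# the factors that produced it, so the clinician can inspect the reasoning.
# All names, weights and the threshold below are illustrative assumptions.

FEATURE_WEIGHTS = {
    "age_over_65": 1.2,
    "on_anticoagulant": 0.8,
    "creatinine_elevated": 1.5,
}
THRESHOLD = 2.0  # assumed cut-off for flagging a patient


def recommend(patient: dict) -> dict:
    # Compute each feature's contribution so it can be shown, not hidden.
    contributions = {
        name: weight * patient.get(name, 0)
        for name, weight in FEATURE_WEIGHTS.items()
    }
    score = sum(contributions.values())
    return {
        "flag": score >= THRESHOLD,
        "score": round(score, 2),
        # The explanation is the point: which factors drove the recommendation.
        "because": {k: v for k, v in contributions.items() if v > 0},
    }


print(recommend({"age_over_65": 1, "creatinine_elevated": 1}))
# {'flag': True, 'score': 2.7, 'because': {'age_over_65': 1.2, 'creatinine_elevated': 1.5}}
```

Even this trivial level of transparency changes the interaction: the clinician can see which factors drove the flag, push back when one of them is wrong, and stay in the habit of making the decision rather than merely receiving it.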

I’m not a believer in the robot doctor concept. Healthcare is too human ever to be fully automated. If we’re receiving a cancer diagnosis, we want to be told the news by someone who is also going to die. And so, in carving out the future of healthcare technology one of the most important questions is, how does the human fit into all of this?

Cover image courtesy: Matteo Farinella, Neurocomic
