Human errors and the reliability of Coronavirus test results


Before agreeing to testing, either for evidence of current infection or to check whether the subject has antibodies, the UK Government insisted that the tests selected must produce 100% accurate results. It is claimed that this has been achieved, which is good news. However, it is only part of the problem. The tests might be perfect, but the accuracy of the output of mass testing carried out by human beings might be very different. There are many hazards and opportunities for error regardless of the efficacy of the tests themselves. This is a deep and complex subject. Since WW2 there have been many in-depth scientific studies related to mass testing, and their results are relevant. I have extracted here what I believe to be the most relevant findings. There is more.

There is the question of the testers themselves. Young and old, both sexes, introverts and extroverts, fit and less than fit, experienced and inexperienced: all of these are variables that might be relevant. Did the Government authorities identify the characteristics of the perfect tester? Do they know what those characteristics must be if they are to match the individual to the task? If the testers are to work in groups, have they considered the group dynamics of having different personality types working together, and the possible negative outcomes resulting from getting it wrong? The converse is also true: how do some individuals cope with working alone?

What thought have they given to task-related issues? Are they aware of the possible decline in performance due to task complexity, and to what are described as ‘multiple monitors’, that is, the fact that different individuals have a tendency to look for different things? Mass repetitive inspection has its own hazards. It is very difficult for humans to remain attentive in such situations; scientifically, this is referred to as ‘attention deficiency’ or ‘vigilance decrement’. There are many published experiments on this, the best known being the Mackworth Clock Test. These experiments show that human inspection performance can fall off rapidly and dramatically the longer the time spent on the task. They also show that while humans might identify the correct average number of positive results, further investigation can reveal that the ‘rejects’ will probably contain a mixture of both false positives and false negatives. There is also Broadbent’s single-channel theory, which began as an attempt to explain why, with 100% radar coverage of the sky by highly motivated technicians, so many flying bombs got through completely undetected.
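The effect of vigilance decrement on inspection results can be sketched with a simple simulation. The numbers below (baseline detection rate, rate of decline, prevalence) are illustrative assumptions, not figures from the Mackworth experiments:

```python
import random

def simulate_shift(n_samples=1000, base_detection=0.95, decay_per_hour=0.10,
                   prevalence=0.05, hours=4, seed=42):
    """Simulate an inspector whose probability of spotting a true positive
    falls linearly with time on task (vigilance decrement).
    All parameter values are hypothetical."""
    rng = random.Random(seed)
    caught = missed = 0
    for i in range(n_samples):
        hour = hours * i / n_samples                 # time on task for this sample
        p_detect = max(base_detection - decay_per_hour * hour, 0.0)
        if rng.random() < prevalence:                # this sample is truly positive
            if rng.random() < p_detect:
                caught += 1
            else:
                missed += 1                          # a false negative slips through
    return caught, missed

caught, missed = simulate_shift()
print(f"True positives caught: {caught}, missed over the shift: {missed}")
```

Even with a perfectly accurate test, the misses in this sketch come entirely from the human operator, and they grow the longer the shift runs.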

A further concern is the problem of workplace design. Factors such as noise, interruptions, lighting, background colours, and the ability to distinguish the wanted signal from its background (known scientifically as the ‘signal to noise ratio’) are all likely to be relevant, but it beggars belief that, in the rush to develop a perfect test, anywhere near enough thought has been given to these issues. It is very unlikely that the testers are all working in standardised, ergonomically ideal environments.
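The ‘signal to noise ratio’ idea is formalised in signal detection theory, where an observer’s sensitivity d′ is estimated from their hit rate and false-alarm rate. A minimal sketch, with hypothetical rates for a well-designed and a poorly designed workstation:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index from signal detection theory:
    d' = Z(hit rate) - Z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates for a quiet, well-lit workstation...
good = d_prime(0.95, 0.05)
# ...versus a noisy one where faint signals blend into the background.
poor = d_prime(0.70, 0.30)
print(f"d' good environment: {good:.2f}, poor environment: {poor:.2f}")
```

A higher d′ means the observer separates signal from noise more reliably; a poor environment degrades the observer even when the underlying test is unchanged.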

How big a problem might this be? Scientific data suggest that, simply by virtue of being human, and regardless of the factors above, people are likely to miss about 10% of signals that are perfectly visible. Study of the relevant scientific papers suggests that it is not at all unusual, in some situations of 100% inspection, for the inspector to miss as many as 90% of the unwanted characteristic. Food for thought.
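The arithmetic behind this is simple: a positive is reported only if the test detects it and the human operator does not then miss it, so a ‘perfect’ test is diluted by the human miss rate. A sketch using the 10% and 90% figures above as illustrative inputs:

```python
def effective_sensitivity(test_sensitivity, human_miss_rate):
    """Overall chance a true positive is actually reported: the test must
    detect it AND the human operator must not miss it."""
    return test_sensitivity * (1.0 - human_miss_rate)

# A 100% sensitive test combined with the ~10% baseline human miss rate:
print(round(effective_sensitivity(1.00, 0.10), 2))   # 0.9
# The same test in the worst-case 90% miss scenario:
print(round(effective_sensitivity(1.00, 0.90), 2))   # 0.1
```

In other words, even a laboratory-perfect test could report as few as one in ten true positives if the human inspection stage performs at the worst levels recorded in the literature.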

David Hutchins, Principal of DHI, Tutor and Consultant

Author of 9 best-selling books on quality-related topics, including Quality Beyond Borders, Quality Circles, Hoshin Kanri, TQM and Just in Time.
