It’s all too common to hear in the news that some service outage or leak of data is due to a ‘computer problem’. It’s enough to make you wonder if software testing services exist at all. Yet in many cases, the computer is a largely innocent party, and it’s human error in using the system that’s to blame.
The Human Factor
As the recent leak of data from a London clinic shows, it only takes a simple error – clicking the CC instead of the BCC button – to cause a major problem. Yet it’s impossible to cut people out of the process altogether. While computers are very good at doing repetitive tasks quickly, humans still have the edge – at the moment – when it comes to interpreting complex situations and carrying out creative tasks.
The trouble is that humans make mistakes, and that tendency is hard to eliminate entirely. It’s very difficult to make any system completely safe; the best you can do is make mistakes less likely and less damaging.
Learning from Mistakes
History is littered with mistakes, and they are an important part of developing any technology. Trains and aircraft are among the safest means of travel precisely because, over the years, each accident has led to an investigation that recommended improved safety procedures.
This reactive approach works to an extent in the technology industry too, but because there’s no central regulatory body, as there is with railways and airlines, it’s up to individual organizations to implement their own procedures. If an enterprise hasn’t suffered a problem itself, it’s easy for management to become complacent and develop an ‘it won’t happen to us’ attitude.
It’s human nature not to think too hard about what might go wrong, and that can be our major weakness. So what can we do about it? One answer is to design systems in ways that minimize the potential for error in the first place.
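To make that idea concrete, here is a minimal sketch of the principle applied to the CC/BCC mistake mentioned earlier: make the safe behaviour the default, and require a deliberate, explicit choice to do the risky thing. Everything here – `send_bulk_mail`, `RecipientVisibilityError` – is a hypothetical illustration, not a real mail library’s API.

```python
class RecipientVisibilityError(Exception):
    """Raised when a call would expose multiple recipients' addresses."""


def send_bulk_mail(sender, recipients, body, expose_recipients=False):
    """Build message headers; recipients go in BCC unless explicitly exposed.

    The safe option (hidden recipients) is the default, so the careless
    path and the safe path are the same path. Exposing more than one
    address requires a conscious override, and even then is refused,
    acting as a guard rail against the classic CC-instead-of-BCC leak.
    """
    if expose_recipients and len(recipients) > 1:
        # Force callers to confront what they are about to do.
        raise RecipientVisibilityError(
            "Refusing to expose multiple recipients; use BCC instead."
        )
    field = "To" if expose_recipients else "Bcc"
    headers = {"From": sender, field: ", ".join(recipients)}
    return headers, body


# The default call keeps every address hidden:
headers, _ = send_bulk_mail("clinic@example.com",
                            ["a@example.com", "b@example.com"],
                            "Your appointment is confirmed.")
```

The design choice is the point, not the code: a user who forgets the `expose_recipients` flag gets the safe outcome, whereas in a standard mail client the safe and unsafe buttons sit side by side and cost the same single click.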
Given the human factor, it’s unrealistic to expect errors to be eliminated entirely; in the end, a company is judged on how it responds when problems do occur.