I want to walk you through something I read a while ago and have been stewing on since. In an article from October 18th, Wired Magazine reported that a robotic cannon had killed nine people and wounded fourteen others.
They wrote that these machines are supposed to select and aim at a target, and “[wait] only for a human to pull the trigger.” Except that sometimes, “these machines start firing mysteriously on their own.”
They call it a software glitch. A malfunction. My problem with these terms is that, like the word “accident” applied to a traffic collision, they imply that no one is to blame.
I submit that somewhere, some programmer is to blame. If you write code, no matter how trivial, your job is to ensure the efficiency, the consistency, and, above all, the accuracy of your code. To fail in this regard can be tantamount to negligent homicide.
Coding errors carry costs. In the simplest projects that cost may only be time, but it quickly becomes money. In larger projects it might be property or public trust, and in the largest projects, lives.
This is not the first time something like this has happened. I am reminded of the infamous Therac-25, which between 1985 and 1987 delivered massive radiation overdoses to at least six patients, several of them fatally, due to poor interface design and a failure to validate operator input. Of the Northeast Blackout of 2003, caused by a race condition in power-monitoring software. Of the MIM-104 Patriot, whose time-synchronization error resulted in the deaths of 28 soldiers in Saudi Arabia in 1991.
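That Patriot failure is worth dwelling on, because the bug was so small. The system counted time in tenths of a second, and one tenth has no finite binary representation, so the stored constant was slightly short and every tick leaked a sliver of time. Here is an illustrative sketch (not the actual Patriot code; the register width and uptime are assumptions based on published accounts) of how that sliver compounds:

```python
# Illustrative sketch: fixed-point truncation of 0.1 s, as in the
# commonly cited analysis of the Patriot clock-drift failure.
# FRACTIONAL_BITS and hours_up are assumptions, not the real system's values.

FRACTIONAL_BITS = 23                        # assumed register precision
stored_tenth = int(0.1 * 2**FRACTIONAL_BITS) / 2**FRACTIONAL_BITS
error_per_tick = 0.1 - stored_tenth         # ~9.5e-8 s lost per 0.1 s tick

hours_up = 100                              # approximate uptime before failure
ticks = hours_up * 3600 * 10                # one tick per tenth of a second
drift = ticks * error_per_tick              # accumulated clock drift

print(f"0.1 s stored as {stored_tenth:.12f}")
print(f"clock drift after {hours_up} hours: {drift:.3f} s")
```

A third of a second of drift sounds harmless, until you remember the software was using that clock to predict where a missile traveling well over a kilometer per second would be. No single line of this code looks negligent; the negligence was in never asking what a hundred hours of uptime would do to it.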
These are the responsibilities laid upon us by virtue of our interest in the technologies that run our world today and will run it tomorrow: to provide value through technology by easing burdensome tasks or eliminating them altogether; and to protect ourselves and our fellow man from the very technologies we create, and, to the best of our abilities, from ourselves.