Justifying Human Involvement in the AI Decision-Making Loop

Despite their increasingly sophisticated decision-making abilities, AI systems still need human input.

In 1983, during a period of high Cold War tension, a Soviet early-warning system abruptly sounded an alarm warning of five incoming nuclear missiles from the United States. Stanislav Petrov, a lieutenant colonel in the Soviet Air Defense Forces, faced a difficult decision: Should he authorize a retaliatory attack? Fortunately, Petrov chose to question the system’s recommendation. He judged a real attack unlikely based on several outside factors, one of which was the small number of “missiles” the system reported; moreover, he reasoned that even if the attack were real, he did not want to be the one to complete the destruction of the planet. After his death in May 2017, a profile credited him with “quietly saving the world” by not escalating the situation.

As it happens, Petrov was right: the alarm was false, caused by the computer system’s failure to distinguish sunlight reflecting off clouds from the light signature of a missile launch. Keeping a human mind in that decision-making loop may have saved mankind.

As the potential for decision-making based on artificial intelligence (AI) grows, businesses face similar (though hopefully less consequential) questions about whether and when to remove humans from their decision-making processes.

There are no simple answers. As the 1983 incident demonstrates, a human can add value by scrutinizing a system’s results before acting on them. But people also play a foundational role long before that point: they develop the algorithms underlying a classification system and select the data used to train it and evaluate its efficacy. In the 1983 case, humans could have added more value upstream, by building a classification system less prone to this kind of misclassification. Yet this training and development role rarely makes the news the way the intervention role does. We don’t know how many times nuclear warning systems worked exactly as intended and raised no false alarms; we only know when they failed. People add value not just by second-guessing AI but by helping it learn in the first place.
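For readers who want a concrete picture of the intervention role, here is a minimal sketch in Python of one common pattern: a confidence gate that lets a system act alone only on high-confidence predictions and escalates everything else to a person. The function, names, and threshold are hypothetical illustrations, not anything described in the article.

```python
# Illustrative sketch only, not from the article: a common pattern for
# keeping a human in the loop is a confidence gate. The system acts on
# its own only when its confidence is high; everything else is escalated
# to a person. All names and thresholds here are hypothetical.

def decide(prediction: str, confidence: float, threshold: float = 0.99) -> str:
    """Return an automated action for high-confidence predictions;
    route borderline cases to a human reviewer."""
    if confidence >= threshold:
        return f"automated action: {prediction}"
    # Below the threshold, a person weighs context the system never saw,
    # which is the role Petrov played in 1983.
    return (f"escalated to human review "
            f"(system suggested '{prediction}' at {confidence:.0%})")

# A weak signal, like five reported missiles, stays with a human:
print(decide("incoming attack", confidence=0.62))
print(decide("routine transaction", confidence=0.998))
```

The training and development role, by contrast, happens upstream of this gate, in how the model and the threshold are built in the first place.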

Before we humans get too cozy in these roles, though, we should be careful about extrapolating from a sample size of one. If humans are looking for justification for our continued involvement, preventing calamity is certainly a valid one. The emotional appeal of an anecdote with unacceptable consequences (“think of the children!”) is compelling. But as guidance for everyday practice, the scenario may not have much in common with the way AI is actually used in business today.

A lot has changed in 34 years. While still far from perfect, AI has improved enormously. Trained on vast data sets, systems now make far more accurate predictions in many scenarios, and they would be much less likely to misinterpret sunlight on high-altitude clouds as incoming missiles. In business, accuracy continues to improve in areas such as assessing loan default risk and flagging fraudulent credit card transactions, and even in less concrete (but important) judgments such as predicting the performance of job candidates. AI continues to improve, and the clear advantage that humans once held is diminishing.

Read the entire article on MIT Sloan Management Review
