Robots will replace workers. Yawn. I’ve been reading variants of this storyline since around 1978. Since then, robots have increased in range and sophistication, yet employment levels have still kept going up. During that time, and especially in the last 10-15 years, automated intelligence systems have been making bigger and bigger mistakes, leading directly to the collapse of entire businesses. Yet in our technophile, misanthropic business culture they always escape blame, so that technophile, misanthropic futurists can project an even bigger future for artificial intelligence, ignoring the lessons of the recent past, determined to repeat the same mistakes over and over, on a bigger and bigger scale. Every time AI messes up, the wreckage has to be cleared up by humans, which partly explains why employment rates keep on rising.
These ‘futurists’ need stopping. They are dangerous.
How did it come to this? Their misjudgements rest on a fundamental misunderstanding of what intelligence is. Our technophile, misanthropic culture equates intelligence with computation, noting with pride, for example, how the best computers will always beat a grandmaster at chess. This game is perfect for AI: known parameters, known rules, and the ability to compute millions of permutations per second.
What a computer cannot do, however, is know how to react if the game is interrupted by a terrorist incident, or know that a couple searching for their lost toddler matters more than whether to move your queen to knight 4. This inability to respond to unexpected events, to know how to react to unknown unknowns, is one of AI’s most serious weaknesses. But there are others: the inability to empathize, to value the practical wisdom of an experienced employee, or to understand such concepts as family love, hatred or political extremism.
So what were these corporate mistakes, how did they come about, and why were they so damaging? Let’s look at three examples, in chronological order. Familiarize yourself with the thinking errors that caused them, because the pattern is set to be repeated.
MFI in 2003-04
The British furniture retailer had a highly effective decentralized order and delivery system, in which experienced local store managers knew the stock and took care of customer service. This was replaced by a centralized automated system, which started making errors; the designers had not built in any means for humans to intervene and correct them (a recurring thinking error in the design of automated customer service systems: the pretence that machines are infallible). Customer service plummeted, reputation followed, and business dried up. The company went into liquidation.
Investment banks 1990s-2008
As part of their lobbying to be allowed to police themselves, investment banks fooled regulators, and themselves, into believing that automated risk management systems were more advanced and sophisticated than experienced human oversight. To justify these systems, which rest on the Value at Risk model and its assumption that risk can be quantified, they pretended that market risk was a closed game with known parameters, like chess. They were therefore completely ill-equipped to cope with the asymmetric, unpredictable dynamics of a property asset bubble exploding. Lehman Brothers went bust, and many others had to be rescued by ‘unsophisticated’ politicians and central bankers.
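To see why that assumption matters, here is a minimal sketch of a parametric Value at Risk calculation. The return series, portfolio value and parameters are invented for illustration; the key point is the z-score, which assumes normally distributed returns, precisely the ‘closed game with known parameters’ that the crisis broke open.

```python
import statistics

# Invented daily portfolio returns (fractions), for illustration only.
daily_returns = [0.001, -0.002, 0.0015, -0.0005, 0.002, -0.001,
                 0.0008, -0.0012, 0.0005, -0.0018]

mu = statistics.mean(daily_returns)
sigma = statistics.stdev(daily_returns)

z_99 = 2.326                  # 99th-percentile z-score of a normal distribution
portfolio_value = 1_000_000   # assumed portfolio size, in dollars

# The model's claim: a loss worse than this occurs on only 1 day in 100.
var_99 = portfolio_value * (z_99 * sigma - mu)
print(f"99% one-day VaR: ${var_99:,.0f}")

# The flaw: under the normal assumption a -10% day is treated as all but
# impossible, yet moves of that size do happen when bubbles burst.
crisis_loss = portfolio_value * 0.10
print(f"Crisis-day loss: ${crisis_loss:,.0f} (far beyond the model's VaR)")
```

The calm-market data produces a reassuringly small VaR figure; the single crisis-day scenario dwarfs it, which is the whole problem with quantifying risk from parameters estimated in quiet times.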
Google and YouTube in 2017
Google, which owns YouTube, was starting to enjoy bumper revenues from online advertising, relatively cheap to administer compared with rival news media organizations that generate their own material. Far cheaper, and apparently more advanced, to let users produce the content and let algorithms place the ads. In 2017, however, reputable advertisers started to realize that their products were being promoted alongside hate videos by politically extremist organizations. Worse, their money was directly lining the pockets of these toxic groups, directed there by YouTube algorithms that generate income based on numbers of views. The advertisers began withdrawing millions of dollars and pounds and redirecting them to ‘old-fashioned’ media outlets.
It could be argued that these examples feature poor implementation of AI, and do not necessarily place a question mark over its wider validity. Well, yes, but making that point rather makes my point too: the technophile, misanthropic culture that has plagued the business world for decades, and that has led to such poor implementation, is also steering research priorities and decisions on applications. This means that such mistakes are very likely to recur. Google, for example, still refuses to put human judgement in charge of vetting politically extremist videos on YouTube. It would rather see its business model damaged, perhaps irreparably, than cease to worship at the altar of AI, which has become akin to a religion.
Of course, it may be the case, some centuries from now, that there will be machines capable of spontaneity, improvisation, sensitivity, empathy and good judgement. But why wait that long, given that we already have entities superbly geared to such areas of expertise? They’re called humans. To refuse to deploy people for what they are best at is profoundly unintelligent. It’s like preferring bottled cow’s milk to mother’s milk for a new-born baby, despite the multiple health benefits of the latter, simply on the grounds that it’s newer.
The bigger problem is that only a small minority of organizations are effective at deploying human intelligence. What you find is that such enlightened entities tend also to be smarter in their use of technology. AI will always be dumb in some areas; humans in others. The best partnerships harness the strengths of both.