‘Consider Phlebas’ by Iain M. Banks

‘Consider Phlebas’ is the first book of the groundbreaking Culture series. Through the eyes of a shape-shifting mercenary for hire, the book follows a war between an Artificial Intelligence-led civilization (the Culture) and a religious empire (the Idirans). The premise of the Culture is that AI is the overarching decision-making entity, setting out rules and behaviors based on its logical processes. In return, the AI uses technology to meet every possible want of its citizens, so long as they follow its decisions. This book doesn’t cover the intricacies of the Culture but explores the conflict between cold, calculating machines vs. passionate, unpredictable animals.

My mind went immediately to the current debate on how we should use today’s forms of machine learning and, in the future, Artificial Intelligence. Proponents of AI-led decision-making point to our growing understanding of our many inherent biases. We are becoming aware that our base programming, which has enabled us to function and evolve to date, may be ill-suited to the analysis needed to live in an increasingly complex society and address the wicked problems of our age.

The Visual Capitalist has developed an infographic of our behavioral biases – available in hi-res here.


Courtesy of the Visual Capitalist

The list of biases is astounding and growing. We have less and less claim to the ‘rational’, so AI proponents suggest that we develop and deploy rational systems that analyze, predict and decide in ways that reduce the impact of our biases.

I recently had a conversation with an asset manager based in Denver and, while discussing the current risks to companies from potential changes in how people see data privacy, he mentioned that he supported decreased data privacy because it would enable us to build highly informed AI that could make decisions for us. This is in line with the fictional Culture. The asset manager felt that a number of behavioral biases were hindering our progress on global issues, including climate change and income inequality. An AI-led decision-making process might help generate more progress on these issues.

There is more discussion to be had on the ethics, safety and design of decision-making AI deployed at large scale, but it’s hard to argue that we, as humanity, should retain sole decision-making capacity given our list of biases. Still, I feel uncomfortable thinking about losing control of my decisions. Maybe that feeling is itself a behavioral bias, and so we should do it anyway.
