Three questions to ask when reading articles about artificial intelligence

You may have noticed by now that there seem to be a couple of recurring themes in the plethora of articles and news programmes about artificial intelligence (AI). These themes can be summed up as a) “The dangers of AI” and b) “The limitations of AI”.

Articles addressing the dangers of AI tend to focus on issues such as the threat of widespread job losses to AI, the possibility of inherent bias (such as racism and sexism), the lack of transparency in decisions made by AI systems and, as a result, the inability to plead your case with AI (“Computer says ‘No’”).

The second theme focuses on how AI often struggles with tasks that humans find easy, such as recognising sarcasm, humour or unusual images. Furthermore, these articles point out that AI applications often need to be ‘trained’ against vast amounts of data to get anywhere near human levels of accuracy and can sometimes be easily fooled into making dumb mistakes.

The purpose of this post is not to explore the dangers or limitations of artificial intelligence but rather to draw attention to the dangers and limitations of the hundreds of stories written about AI.

With this in mind, when confronted with a news article about artificial intelligence, a reader might find it useful to ask themselves three questions.

Is this article really about AI?

Defining AI is tricky. Oxford Reference.com defines it as “The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Unfortunately for many authors, AI has become such an elastic term that it now encompasses everything from self-driving cars to estimating creditworthiness. The common denominator in these examples is the use of the term ‘algorithm’.

An algorithm is simply a process, or a set of rules or formulae, used to solve a problem. One could argue that traffic lights, washing machines and even wasps are ‘algorithmic’ in behaviour. The well-known cookery writer Delia Smith suggests an algorithm for estimating how long it takes to cook a chicken: “20 minutes per pound plus 20 minutes for good measure”. This can be expressed mathematically as y = mx + c, where x is the weight in pounds, m is 20 minutes per pound and c is the extra 20 minutes. That also happens to be the equation of a straight line, which in turn is the basis of one of the oldest predictive modelling techniques in the world: linear regression.
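
To make this concrete, here is a minimal sketch in Python (my own illustration, not from the article) of Delia Smith's rule written as a straight-line function; the function name and the example weight are purely illustrative.

```python
# Delia Smith's rule of thumb for roasting a chicken, written as the
# straight line y = m*x + c.

def roasting_time(weight_lb):
    """Estimated roasting time in minutes: 20 minutes per pound plus 20 minutes."""
    m = 20  # slope: minutes per pound
    c = 20  # intercept: the extra 20 minutes "for good measure"
    return m * weight_lb + c

print(roasting_time(4))  # a 4 lb chicken -> 100 minutes
```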

In fact, a huge number of machine learning techniques are basically more complex versions of linear regression. Nevertheless, the data models they generate are nothing like as sophisticated as a self-driving car or ‘digital assistants’ such as Amazon’s Alexa or Apple’s Siri. If the article mentions a system that is used to estimate an outcome or predict a value, then it’s probably not fair to describe it as an “AI platform”. It’s more likely to be a predictive model, and that’s not the same thing at all.
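
To see the difference in scale, here is a minimal sketch of a linear-regression ‘predictive model’ fitted with NumPy; the cooking-time data points are invented for illustration. The entire fitted model is just two numbers, a slope and an intercept, which is a long way from a self-driving car.

```python
# Fitting a simple linear-regression "predictive model" with NumPy.
# The data below is made up for illustration only.
import numpy as np

weights = np.array([2.0, 3.0, 4.0, 5.0, 6.0])          # chicken weight in pounds
times   = np.array([62.0, 79.0, 101.0, 118.0, 142.0])  # observed cooking times (minutes)

m, c = np.polyfit(weights, times, deg=1)  # least-squares fit of y = m*x + c
print(f"estimated rule: {m:.1f} minutes per pound plus {c:.1f} minutes")
print(f"prediction for a 4.5 lb bird: {m * 4.5 + c:.0f} minutes")
```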

Is the algorithm guilty?

The algorithm is often the bête noire of AI articles: an impersonal calculating machine that spits out decisions which, on closer inspection, turn out to be profoundly unfair or wildly inaccurate.

But most of these ‘algorithms’ are in fact just data models. In other words, they are the result of patterns that have been found in the data; you can think of them as simplifications of that data.

When reading these stories, can we replace the word ‘algorithm’ with the word ‘data’ without changing the meaning of the sentence? If so, then perhaps the issue is really about how we collect data and the dangers of getting that wrong. Because if the algorithm is a reflection of the data, then what is the data a mirror of? Us, of course.

What if we swap algorithms for people?

Another game to play is to swap the word ‘algorithm’ for the word ‘person’. There are plenty of news stories that illustrate how algorithms can make biased recommendations when trained against biased data. Examples include algorithms offering job interviews to people with European-sounding names, generating higher recidivism risk scores for African Americans so that they are more likely to be denied bail, and being less likely to offer university places or loans to women applicants.

Of course, in reality algorithms aren’t racist or sexist – but people are, and the data patterns that the algorithms are built on reflect this. The real danger is that algorithms might exaggerate and magnify these biases, especially when they are tasked with generating decisions at high volume.

Are there, however, situations where an algorithm might be less prejudiced than a human? When asking for a pay rise or applying for a loan, for example, might an algorithm give less weight to factors such as gender, ethnicity or age than a human decision maker would?

In fact, there are several examples: from a study which found that an algorithm was more likely to award mortgages to previously ‘underserved’ borrowers, to a machine learning system that selected more ‘non-traditional’ candidates for job interviews than human managers did.

Nevertheless, news articles about ‘algorithmic prejudice’ far outnumber stories like these, and although it remains an important issue, we should bear in mind that humans don’t exactly have a great reputation for impartiality.
