
Nigel Miller, MD of Cordless Group, tells us why we should be wary of AI

‘The current danger is not that machines become more intelligent than we are – the real danger is basically clueless machines being ceded authority beyond their competence.’ Discuss.

What do we mean by a clueless machine?

There are two main types of AI. Narrow AI operates within a defined range, handling a single task such as spam filtering, language assistance, parking sensors or playing chess. General AI goes further: it operates across domains, exhibits something like human intelligence and starts to challenge human thinking. Narrow AI is what we currently see most widely deployed. It is good at repetitive, robotic tasks, but not at creative thinking or forming new ideas, making it, essentially, clueless.
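To make "clueless" concrete, here is a minimal sketch of just how narrow a narrow-AI task can be: a toy rule-based spam filter (the keyword list and messages are invented for illustration) that follows its rules perfectly but has no understanding of what it reads.

```python
# Toy narrow-AI example: a rule-based spam filter.
# It applies its rules flawlessly but has no idea what the words mean --
# rephrase the spam slightly and it sails straight through.

SPAM_KEYWORDS = {"winner", "free", "prize", "urgent"}  # invented blacklist

def is_spam(message: str) -> bool:
    """Flag a message as spam if it contains any blacklisted keyword."""
    words = set(message.lower().split())
    return bool(words & SPAM_KEYWORDS)

print(is_spam("You are a WINNER, claim your free prize now"))  # True
print(is_spam("You have won; collect your reward today"))      # False
```

The second message is obviously spam to any human reader, but because none of the exact keywords appear, the filter waves it through. Rule-following without judgement is exactly the cluelessness at issue.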

AI is now everywhere we turn, giving us information quickly and easily, which makes it all too easy to trust it more than we should!

Think about the personalisation of search engines and the use of voice control technology such as Alexa or Siri. These bots can come across as more believable than the results of a search you run yourself. By naming the technology, we humanise it and trust it, although the information read out to us is simply plucked from the web via algorithms, so we don't even know how reputable the source is.

Another example: never trust a Sat Nav! No matter how expensive a GPS device is, when it comes to getting where you want to go, it’s only as good as the satellite network and its map data. There has been much media coverage around people driving into hazardous situations because their Sat Nav told them to. You have been warned!
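A minimal sketch of why map data matters, assuming a planner that simply searches its stored road graph (the place names and roads are invented): if the stored map still lists a closed road as open, the device will confidently route straight through it.

```python
# Toy illustration: a route is only as good as the map data behind it.
# The stale map still lists the "riverside" road as open, so the planner
# happily routes through it -- the device has no way to know it's flooded.

from collections import deque

def shortest_route(graph, start, goal):
    """Breadth-first search over the road graph; returns a list of nodes."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

STALE_MAP = {"home": ["riverside", "bypass"], "riverside": ["town"],
             "bypass": ["ring_road"], "ring_road": ["town"]}

print(shortest_route(STALE_MAP, "home", "town"))  # ['home', 'riverside', 'town']
```

The algorithm is flawless; the answer is still wrong, because the data it was ceded authority over is wrong.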

Automatic Number Plate Recognition (ANPR) is used in car parks all over the country to calculate what we owe for parking. However, it can go wrong! For example, if you type a character of your registration incorrectly at the payment machine, the system will not match you with its records and will fine you for not having a ticket! Or if the camera doesn't get a clear view of your number plate as you leave, the system will think you have never left!
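A hedged sketch of that failure mode, assuming the simplest possible system: an exact string comparison between the plate typed at the pay machine and the plate the camera read (the plate values are invented; real systems are more forgiving, but the brittleness is the same in kind).

```python
# Naive ANPR matching: the payment record must exactly match the camera read.
# A single mis-keyed or mis-read character ('O' vs '0') and the match fails,
# even though it is the same car.

def plates_match(entered: str, camera_read: str) -> bool:
    """Exact comparison, as a simplistic car-park system might do it."""
    return entered.replace(" ", "").upper() == camera_read.replace(" ", "").upper()

print(plates_match("ab12 cde", "AB12CDE"))   # True  -- same plate
print(plates_match("AB12 CDO", "AB12 CD0"))  # False -- letter O vs digit zero
```

One character of difference and, as far as the machine is concerned, you never paid.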

Talking of automobiles, let's touch on self-driving cars. How much would you really trust them? I personally think that if this industry adopts black box thinking, just as aviation does, reporting faults, errors and incidents so it can learn from mistakes and continuously improve, then we could put more faith in it. But we really are at the beginning of the self-driving road, as it were.

What about in the workplace? Facial recognition, RFID and location sensing can be very useful to help us navigate and find people and services. However, there is a risk that companies drill down into what people are doing and where they are going, breaking the trust of their users.

Certainly, if my trust in a system were lost in this way due to human intervention and I felt as though I was being spied upon, I wouldn’t be happy!

Machines are good at following rules, data analysis, speed, accuracy, repetition and are always on. Meanwhile, humans are good at judgement, empathy, creativity, improvising and leadership.

So blindly following a machine is human error. To put it another way, don’t just go when you see a green light. Stop, look and listen first.

To talk to Cordless about AI, say [email protected]
