Artificial intelligence is growing at an exponential rate, transforming industries and sectors around the planet. As these systems evolve and become more capable, the question on everyone’s lips is: how will we manage these intelligent machines in the future?
Tech giants such as Alphabet, Amazon, Facebook, IBM and Microsoft – as well as individuals like Mark Zuckerberg and Elon Musk – believe that now is the right time to talk about the ethical impact of AI. So what are the main issues that are keeping these AI experts awake at night?
How do machines affect our behaviours?
AI bots are becoming better at mimicking human conversation and relationships. In 2015 a bot won the Turing Challenge for the first time – tricking humans into believing they were speaking with a fellow human being.
This milestone is just the start. We will be interacting with machines far more frequently in the not-so-distant future, which raises a question: what happens when machines learn to trigger human emotions?
Used well, this could become an opportunity to nudge society towards more beneficial behaviour; in the wrong hands, it could prove detrimental. It also raises the issue of control, and of how much latitude self-learning machines should have in pursuing their end goals.
How do we humans remain in control?
The reason we are at the top of the food chain isn’t down to our brawn but our brains. This poses a serious question about AI: will it one day have the same advantage over us and leapfrog us to the top? We can’t rely on the classic ‘pull the plug’ move either, because in the future, advanced, self-learning AI machines will anticipate it. This is what experts call the “singularity”: the point in time when humans are no longer the most intelligent beings on planet Earth. A sobering thought!
How can we guard against mistakes?
As with humans, machine intelligence comes from learning. AI systems usually have a training phase in which they learn to detect the right patterns and act on their inputs. Once an AI machine is trained, it goes into a test phase, where it is evaluated on a new dataset.
Training phases cannot possibly cover every example that an AI system may deal with in the real world, which is why AI can be fooled in ways that humans can’t. For example, random dot patterns can lead a machine to “see” things that aren’t there.
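The train-then-test pattern, and its blind spot, can be sketched with a toy classifier. The data and labels below are invented for illustration: a 1-nearest-neighbour model handles a new input that resembles its training data, but it also assigns a confident label to an input far outside anything it was trained on, because it has no notion of “I don’t know”.

```python
# A toy 1-nearest-neighbour classifier (illustrative data only).
def nearest_neighbour(train, query):
    """Return the label of the training point whose feature is closest to `query`."""
    return min(train, key=lambda pair: abs(pair[0] - query))[1]

# Training phase: learn from (feature, label) examples.
training_data = [(1.0, "cat"), (1.5, "cat"), (8.0, "dog"), (8.5, "dog")]

# Test phase: a new but similar input is handled correctly.
print(nearest_neighbour(training_data, 1.2))     # cat

# An input far outside the training distribution still receives
# a confident label -- the model cannot flag it as unfamiliar.
print(nearest_neighbour(training_data, 1000.0))  # dog
```

Real systems use far richer models, but the failure mode is the same: behaviour on inputs unlike the training data is undefined by the training process.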
If we want to live in a world where important processes are powered by AI, we need to ensure that these machines perform as planned, and that humans can’t subvert them for their own personal ends.
How do we distribute the wealth created by automation and AI machines?
It is predicted that AI will result in large job losses as processes are automated. By using AI, a company can reduce its reliance on human workers, with the wealth flowing to the individuals who own AI-driven companies. We are already seeing a widening wealth gap between the C-suite and the rest of the workforce. How do we structure a fair post-labour economy?
How do we reduce AI bias?
AI cannot always be trusted to be fair and neutral. We shouldn’t forget that AI systems are (currently) created by humans, who can be biased and judgemental.
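Bias often enters not through the code but through the data. The following sketch, using entirely invented “hiring” records, shows how a model that simply learns from historic decisions will faithfully reproduce the bias baked into them.

```python
from collections import Counter

def train_majority_model(records):
    """Map each group to the most common outcome seen for it in training data."""
    outcomes = {}
    for group, outcome in records:
        outcomes.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

# Invented historic data that reflects past human bias, not candidate merit:
# group_a was mostly hired, group_b mostly rejected.
history = ([("group_a", "hire")] * 9 + [("group_a", "reject")] * 1
           + [("group_b", "hire")] * 2 + [("group_b", "reject")] * 8)

model = train_majority_model(history)
print(model)  # {'group_a': 'hire', 'group_b': 'reject'} -- the bias is learned
```

Nothing in the algorithm is “prejudiced”; it simply optimises for agreement with biased examples, which is exactly how real systems inherit human bias.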
Once again, if used right, artificial intelligence can become a catalyst for positive change, but in the wrong hands it could set us backwards in a world that is making progress towards equality.
How do we secure our AI?
Cybersecurity will become even more important in the future: the more powerful these AI machines become, the more they can be used for nefarious ends. This applies not only to robots manufactured to replace human soldiers but also to AI systems that can cause damage if used maliciously.