AI Beginners Guide

If you had mentioned ‘artificial intelligence’ in the boardroom ten years ago, there’s a good chance you would have got some rather odd looks. Fast forward a decade and AI is at the top of the buzzword bingo card!

Before we launch into what artificial intelligence actually is, and how it can be applied, we first need to look at how this change has come about over the last ten years.

Big Data

AI technology is the linchpin of digital transformation across a range of industries, and what underpins everything is data, which (if you haven’t read LinkedIn or a Forrester whitepaper in the last 12 months) is the new oil!

The amount of data being generated is growing exponentially. To give you an idea, 90% of the data in the world today was created in the last two years alone. According to research, our current output is roughly 2.5 quintillion bytes a day, and this is only set to grow as more and more devices become connected (the so-called internet of things, or IoT).

This glut of data has led to intensified research into ways it can be processed, analysed and actioned. Machines are far more efficient and cost-effective than humans at this type of work; the problem in the past was how to train machines to be ‘smart’.

As with any form of innovation, the increased interest in research both across industry and academia has led to breakthroughs and advances that can now be applied in a commercial setting. From self-driving cars to healthcare – the potential for change is huge.

What exactly is Artificial Intelligence?

Ask ten people and you will probably get ten very different answers – that’s because AI spans so many different industries and processes. What counts as AI has also changed over time; however, the theory behind it remains the idea of building machines that are capable of thinking like humans.

After all, humans have been interpreting the world and using information to make decisions for thousands of years. If we want to build machines that can help us become more efficient, then it makes sense to use ourselves as the blueprint.

‘Artificial Intelligence is the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making and translation between languages’

AI research and development work is split into two main fields. The first is ‘applied AI’, which uses the principles of simulating the human mind to carry out one specific task. The other is referred to as ‘generalised AI’, which looks at developing machine intelligence that can handle any task put in front of it, rather than a single specialised one.

Applied Artificial Intelligence

Research and development into specialised AI has already started to come to fruition and filter from academia into commercial settings. Examples range from medicine, where it is being used to diagnose patients based on genomic data, to finance, where data is analysed in real time to detect fraud or place trades. In manufacturing, AI is used to manage workforces, increase efficiency in the packing process and predict faults before they occur.

In the world of consumer electronics, AI technology is baked into everything from smartphone assistants like Apple’s Siri to autonomous cars, which many in the industry believe will outnumber manually driven cars within the next decade.

Artificial General Intelligence (AGI)

The AGI Society defines generalised AI as:

“An emerging field aiming at the building of ‘thinking machines’; that is, general-purpose systems with intelligence comparable to that of the human mind (and perhaps ultimately well beyond human general intelligence)”.

Generalised AI is a far less advanced field: replicating the human brain requires a deeper understanding of the organ itself, as well as more computing power than is commonly available to researchers.

However, given the speed at which computer technology is evolving, and taking Moore’s Law into account, many researchers believe it won’t be long before this branch of AI is established in processes across multiple sectors.

A new generation of computer chip is already in development: these chips, known as ‘neuromorphic processors’, are designed to run brain-simulation code in a far more efficient manner.

Large tech companies are also developing cognitive computing platforms that can run simulations of human neurological processes to carry out an ever-growing range of tasks. A well-known example is IBM’s Watson system, which came into the spotlight in 2011 when it won the U.S. quiz show Jeopardy!

Machine Learning and AI

One area of AI that is considerably more advanced is machine learning – an application of AI based on the idea that, rather than programming machines with explicit rules, we should simply give them access to data and let them learn for themselves.
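
To make that idea concrete, here is a minimal sketch of ‘learning from data’ in Python. It uses scikit-learn and its bundled iris flower dataset purely for illustration – these are our own choices, not tools mentioned anywhere in this guide – and any ML library with a labelled dataset would do just as well.

```python
# A minimal sketch of 'give the machine data and let it learn': no rules are
# written by hand; the model infers them from labelled examples.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # flower measurements -> species label
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The 'learning' step: the model works out its own rules from the examples
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print(f"Accuracy on flowers the model has never seen: {model.score(X_test, y_test):.2f}")
```

The point is that no classification logic is hand-coded anywhere: the model derives it from the examples it is shown.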

Machine learning (ML) has developed rapidly in the last few years and has become so integral to AI that the two terms are often used interchangeably.

Uber, Spotify and Google, to name but a few, all make use of ML to improve processes and provide a better customer experience. ML capabilities broadly fit into four main categories (the segmentation idea is sketched in code after the list):

Optimisation: For example, calculating the shortest route to a destination.
Anomaly detection: Identifying variables and values that aren’t in line with expected behaviour, such as in fraud detection.
Segmentation: Treating different groups of data differently. For example, targeting a specific segment with a specific message, based on their propensity to convert.
Object identification: Finding a particular thing within an image or classifying songs.
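
Here is a rough sketch of the segmentation idea, assuming Python with NumPy and scikit-learn. The ‘customers’ below are randomly generated for illustration, and KMeans is just one of many clustering techniques that could be used.

```python
# A rough sketch of 'segmentation': group similar records together so that
# each group can be treated differently. The customer data is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical features per customer: [monthly spend, visits per month]
customers = np.vstack([
    rng.normal(loc=[20, 2], scale=[5, 1], size=(100, 2)),    # occasional shoppers
    rng.normal(loc=[80, 10], scale=[10, 2], size=(100, 2)),  # regulars
    rng.normal(loc=[200, 4], scale=[20, 1], size=(100, 2)),  # big-ticket buyers
])

# Let the algorithm find three natural groupings in the data
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(customers)

for s in range(3):
    group = customers[segments == s]
    print(f"Segment {s}: {len(group)} customers, average monthly spend {group[:, 0].mean():.0f}")
```

Each resulting segment could then be targeted with its own message, as described above.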

Within the field of machine learning, artificial neural networks (ANNs) have been key to teaching computers to think and act like a human.

Neural Networks

Traditional computing uses a series of statements to perform a specified task. Neural networks, however, use a network of nodes (which act like neurons) and edges (which act like synapses) to process and manage data. Inputs are fed into the system and outputs are generated.

A neural network is designed to work by classifying information. The system is probabilistic: based on the data it is fed, the machine can make decisions with a degree of certainty. Adding a feedback loop (sending results back into the algorithm so the connection weights can be adjusted) enables ‘self-learning’.
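
As a rough illustration of those ideas, here is a toy neural network written in Python with NumPy. The task (learning the XOR function), the network size and the learning rate are all arbitrary choices made for this sketch; they are not taken from any particular system discussed in this guide.

```python
# A toy neural network: nodes connected by weighted edges, outputs read as
# probabilities, and a feedback loop that nudges the weights after each pass.
import numpy as np

def sigmoid(x):
    # Squash any value into the range (0, 1) so it can be read as a probability
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Inputs and the expected outputs (the XOR of each pair of inputs)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Edges (weights) and biases: 2 input nodes -> 4 hidden 'neurons' -> 1 output node
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

learning_rate = 0.5
for _ in range(10_000):
    hidden = sigmoid(X @ W1 + b1)       # forward pass through the hidden 'neurons'
    output = sigmoid(hidden @ W2 + b2)  # a probability-like value for each input

    # Feedback loop: push the error back through the network and adjust the weights
    error = y - output
    d_output = error * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 += learning_rate * hidden.T @ d_output
    b2 += learning_rate * d_output.sum(axis=0, keepdims=True)
    W1 += learning_rate * X.T @ d_hidden
    b1 += learning_rate * d_hidden.sum(axis=0, keepdims=True)

print(output.round(2))  # should end up close to [[0], [1], [1], [0]]
```

On each pass, the error between the network’s output and the expected answer is fed back to adjust the weights – this is the feedback loop, or ‘self-learning’, described above.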

What is the future of AI?

Although the future of AI is an exciting one, there are real fears amongst experts that the development of AI which equals or exceeds our own intelligence could have negative implications for the future of humanity. For example, if large numbers of processes can be automated using machines, the result could be widespread job losses, which in turn could lead to huge societal change.

Concerns about AI led to the ‘Partnership on AI’ foundation being set up last year – its members include some of the largest tech companies on the planet, including Amazon, Facebook, IBM, Google and Microsoft. The group’s aim is to research and provide advice on the ethical impact of AI and to produce guidelines for the future research and deployment of AI and robots.
