AI Glossary

Do you know your ML from your AI? Your ANN from your CNN? As artificial intelligence becomes more mainstream, so do its specialisms, acronyms and general marketing buzzwords. In this article we aim to distil some of the biggest terms in the world of AI.

A

Agents: Also known as bots, agents are autonomous software programs that respond to their environment and act on behalf of humans in order to achieve an end goal.

Algorithm: A set of instructions or rules given to an AI or machine to help it learn on its own. The most popular algorithms are regression, clustering, classification and recommendation.

Artificial Intelligence (AI): A system's ability to perform tasks and make decisions that are similar to human behaviour and intelligence.

Artificial neural network (ANN): A model created to replicate the human brain to solve tasks that are too difficult for a traditional computer to solve. ANNs require large amounts of computing power.
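
To make the idea concrete, here is a minimal sketch of a single forward pass through a tiny network with one hidden layer. The weights are made up for illustration; in practice they would be learned from data.

```python
import numpy as np

# A minimal sketch of a feed-forward artificial neural network:
# one hidden layer, sigmoid activations, illustrative (untrained) weights.

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Input: three features for a single example (values are illustrative)
x = np.array([0.5, -1.2, 3.0])

# Hypothetical weights and biases (in practice these are learned)
W_hidden = np.random.rand(4, 3)   # 4 hidden units, 3 inputs
b_hidden = np.zeros(4)
W_out = np.random.rand(1, 4)      # 1 output unit
b_out = np.zeros(1)

hidden = sigmoid(W_hidden @ x + b_hidden)   # hidden layer activations
output = sigmoid(W_out @ hidden + b_out)    # network output in (0, 1)
print(output)
```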

Autonomous: Autonomy means that an AI construct doesn’t need human intervention. Driverless cars illustrate the term “autonomous” in varying degrees.

B

Black box: An AI system often runs calculations that can't be understood by humans, or that would require extensive resource to unpick mathematically, yet the system still outputs useful information. When this happens, it's called black box learning. The real work that happens behind the scenes to arrive at a decision is unknown, and we typically don't seek to understand it – we accept the result without seeing the rules used to reach it.

Bayesian Network: A model that calculates the probabilistic relationships between a set of random variables. A Bayesian network, for example, can be used to calculate the probabilities of various diseases being present based on the given symptoms.
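
As a toy illustration of the disease-and-symptom example, the sketch below applies Bayes' rule to a two-node network (disease → symptom). All probabilities are made up.

```python
# Toy two-node Bayesian network: Disease -> Symptom.
# All probabilities are made up for illustration.

p_disease = 0.01                     # prior P(disease)
p_symptom_given_disease = 0.9        # P(symptom | disease)
p_symptom_given_no_disease = 0.05    # P(symptom | no disease)

# Total probability of observing the symptom
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_no_disease * (1 - p_disease))

# Bayes' rule: P(disease | symptom)
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(round(p_disease_given_symptom, 3))   # ~0.154
```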

C

Chatbots: A chatbot is designed to simulate conversation with a human through voice and/or text commands. They are often used as the interface for AI programmes such as Siri. You may also come across them on websites that operate ‘live chat’ channels.

Classification: Classification algorithms programme machines to assign a category to a data point based on a training data set.
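
A minimal sketch of classification, using made-up points rather than a real training set: this nearest-centroid classifier assigns a new data point to whichever class centre it is closest to.

```python
import numpy as np

# Minimal nearest-centroid classifier on made-up 2-D training data.
train = {
    "cat": np.array([[1.0, 1.2], [0.8, 1.0], [1.1, 0.9]]),
    "dog": np.array([[3.0, 3.2], [3.3, 2.9], [2.8, 3.1]]),
}

# "Training": compute the centre (mean) of each class.
centroids = {label: points.mean(axis=0) for label, points in train.items()}

def classify(point):
    # Assign the label whose centroid is closest to the point.
    return min(centroids, key=lambda label: np.linalg.norm(point - centroids[label]))

print(classify(np.array([1.0, 1.1])))   # -> "cat"
print(classify(np.array([3.1, 3.0])))   # -> "dog"
```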

Cluster analysis: A type of unsupervised learning that is used to find hidden correlations or groupings in data. Clusters are modelled with a measure of similarity defined by a metric such as probabilistic distance.

Clustering: Clustering algorithms let machines group data based on similar traits.
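
The sketch below shows the idea with a bare-bones k-means loop on made-up one-dimensional data: points are assigned to the nearest of k centres, and the centres are then recomputed.

```python
import numpy as np

# Bare-bones k-means on made-up 1-D data: two obvious groups.
data = np.array([1.0, 1.2, 0.9, 8.0, 8.3, 7.9])
centres = np.array([0.0, 10.0])          # initial guesses for k = 2 centres

for _ in range(10):
    # Assign each point to its nearest centre...
    labels = np.array([np.argmin(np.abs(x - centres)) for x in data])
    # ...then move each centre to the mean of its assigned points.
    centres = np.array([data[labels == k].mean() for k in range(2)])

print(centres)   # roughly [1.03, 8.07]
```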

Cognitive computing: A computerised model that attempts to mimic how the brain thinks. It requires self-learning through natural language processing (NLP), data mining and pattern recognition.

Convolutional neural network (CNN): A type of neural network used to identify features in images and make decisions based on them.
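
At the core of a CNN is the convolution operation: a small filter slides over the image and responds strongly where it matches a local pattern. The sketch below applies an illustrative 3×3 vertical-edge filter to a tiny made-up greyscale image.

```python
import numpy as np

# The core operation of a CNN: slide a small filter over an image.
# Tiny made-up greyscale image (6x6) with a bright right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# 3x3 vertical-edge filter (values chosen for illustration).
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])

# Valid convolution (strictly cross-correlation, as in most deep learning libraries).
out = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)

print(out)   # strong responses along the vertical edge
```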

D

Data mining: The examination of data to discover patterns that can be of further use.

Data science: A field that combines information transfer, statistics and computer science to provide insight from structured or unstructured data.

Decision tree: A branch-based model used to map decisions – similar to a flow chart.
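
Written out as code, a decision tree is simply a set of nested yes/no branches. The tree below is hand-built and purely illustrative; real decision trees are learned from data, but the structure is the same.

```python
# A hand-built, illustrative decision tree for deciding how to travel.

def choose_transport(distance_km, raining):
    if distance_km < 2:
        if raining:
            return "bus"
        return "walk"
    if distance_km < 50:
        return "car"
    return "train"

print(choose_transport(1.0, raining=False))   # walk
print(choose_transport(30.0, raining=True))   # car
```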

Descriptive Model: A summary of a dataset that describes the relationships in the data.

Deep learning: The ability for machines to mimic human thought patterns through artificial neural networks (ANN).

G

Game AI: A form of AI specific to gaming that uses algorithms to replace randomness. It is used to generate human-like intelligence and reactions to the actions taken by the player. Call of Duty uses game AI when you are playing against the computer.

Genetic Algorithm: A method for solving problems by mimicking the process of biological evolution and natural selection.
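
The sketch below is a toy genetic algorithm with a made-up fitness function and parameters: a population of candidate numbers is repeatedly selected, crossed over and mutated, evolving towards the value that maximises the fitness.

```python
import random

# Toy genetic algorithm: evolve numbers towards the maximum of a fitness function.
# Fitness peaks at x = 3; all parameters are illustrative.
def fitness(x):
    return -(x - 3) ** 2

population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(50):
    # Selection: keep the fittest half of the population.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    # Crossover + mutation: children are averages of two parents, plus noise.
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        children.append((a + b) / 2 + random.gauss(0, 0.1))
    population = parents + children

print(round(max(population, key=fitness), 2))   # close to 3
```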

H

Heuristic search techniques: Techniques that narrow down the search for solutions by eliminating options that are unlikely to be correct.
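
One common example is greedy best-first search. The sketch below runs it on a small made-up graph: at each step it expands the node whose heuristic estimate suggests it is closest to the goal, ignoring less promising branches.

```python
import heapq

# Greedy best-first search on a small made-up graph.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["G"],
    "E": ["G"],
    "G": [],
}
# Heuristic: estimated distance to the goal G (values are illustrative).
h = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 1, "G": 0}

def greedy_best_first(start, goal):
    frontier = [(h[start], start, [start])]      # (heuristic, node, path so far)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)  # expand the most promising node
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph[node]:
            heapq.heappush(frontier, (h[neighbour], neighbour, path + [neighbour]))
    return None

print(greedy_best_first("A", "G"))   # ['A', 'C', 'D', 'G']
```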

I

Inductive reasoning: The ability to derive generalised theories or conclusions by analysing patterns in large data sets.

K

Knowledge engineering: Focuses on building knowledge-based systems, including all social, technical and scientific aspects of it.

L

Logic programming: A type of programming in which computation is carried out based on a database of facts and rules.
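
Logic programming is usually associated with languages such as Prolog, but the idea can be sketched in Python: start from a database of facts and repeatedly apply rules until no new facts can be derived. The facts and rule below are made up for illustration.

```python
# Illustrative forward chaining over a database of facts and rules.
# (Logic programming languages such as Prolog do this natively.)

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def apply_rules(facts):
    new = set(facts)
    # Rule: parent(X, Y) and parent(Y, Z) => grandparent(X, Z)
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == "parent" and p2 == "parent" and y1 == y2:
                new.add(("grandparent", x, z))
    return new

# Keep applying the rules until no new facts appear.
while True:
    derived = apply_rules(facts)
    if derived == facts:
        break
    facts = derived

print(("grandparent", "alice", "carol") in facts)   # True
```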

M

Machine intelligence: A term that encompasses classical learning algorithms, machine learning and deep learning.

Machine learning (ML): A subset of AI that focuses on algorithms that allow machines to learn from data without being explicitly programmed. Such programs can use past performance data to predict and improve future performance.

N

Natural language generation (NLG): A machine learning task that attempts to generate language that is indiscernible from language generated by humans.

Natural language processing (NLP):  A machine learning task that aims to improve the interaction between humans and computers. NLP aims to recognise human communication as it is meant to be understood.

O

Optical character recognition: A computer system that takes images of handwritten or printed text and converts them into machine-readable text.

P

Predictive analytics: The act of analysing historic and current data to look for patterns that can help make predictions about the future.

R

Recurrent neural network (RNN): A type of neural network in which the output of each step is fed back into the network, forming a cycle that lets it retain information across a sequence of inputs.
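
The sketch below shows that cycle in a single recurrent unit, using made-up weights and a made-up input sequence: each input is combined with the hidden state left over from the previous step.

```python
import numpy as np

# Minimal recurrent step: the hidden state from the previous input
# is fed back in alongside the next input (weights are illustrative).
rng = np.random.default_rng(0)
W_input = rng.normal(size=(4, 3))    # 3 input features -> 4 hidden units
W_hidden = rng.normal(size=(4, 4))   # hidden state fed back into itself

sequence = [rng.normal(size=3) for _ in range(5)]   # made-up 5-step sequence
h = np.zeros(4)                                     # initial hidden state

for x in sequence:
    h = np.tanh(W_input @ x + W_hidden @ h)   # new state depends on the old state

print(h)   # final hidden state summarises the whole sequence
```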

Regression: A statistical model that is used to determine the strength of a relationship between variables.
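
A minimal sketch of the simplest case, linear regression: fitting a straight line to made-up points with NumPy's least-squares helper.

```python
import numpy as np

# Fit a straight line y = a*x + b to made-up data with least squares.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # roughly y = 2x

a, b = np.polyfit(x, y, deg=1)   # slope and intercept of the best-fit line
print(round(a, 2), round(b, 2))  # slope close to 2, intercept close to 0
```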

Reinforcement learning: A type of machine learning in which the machines are ‘taught’ to achieve the end goal through a process of reward. Think Pavlov’s dogs for machines.
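
The sketch below is a toy Q-learning loop with a made-up environment and parameters: an agent on a short corridor learns, through repeated rewards, that moving right leads to the goal.

```python
import random

# Toy Q-learning on a 5-cell corridor; reaching the right end gives a reward.
# All parameters are illustrative.
n_states, actions = 5, [-1, +1]           # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    state = 0
    while state != n_states - 1:
        # Mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: nudge the estimate towards reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print(max(actions, key=lambda a: Q[(0, a)]))   # learned best move from the start: +1
```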

S

Supervised learning: A type of machine learning in which human input and supervision are an integral part of the ML process. More common than unsupervised learning.

T

Target function: The specific task an AI system has been programmed to complete.

Training data set: The training data set is the data used to teach the machine during the learning phase, before the validation and testing phases begin.

Transfer learning: Once an AI has successfully learned one task, it can build on that knowledge to tackle a new, related task without starting from scratch.

Turing Test: A test developed by Alan Turing that is meant to identify true AI. The test is based on a process in which a series of judges attempt to tell interactions with a control (human) apart from interactions with the machine being tested.

U

Unsupervised learning: A type of machine learning algorithm used to draw conclusions from datasets consisting of input data that has no labelled responses. One of the most common unsupervised learning methods is cluster analysis.

V

Validation data set: In machine learning (ML), the validation data set is the data held back from training and given to the machine after the learning phase. It is used to check which of the relationships the model has learned are most effective at predicting future performance.
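
A minimal sketch of how a dataset is typically split: most of the data trains the model, and a held-back slice validates it. The proportions and data are illustrative.

```python
import random

# Illustrative split of a made-up dataset into training and validation sets.
data = list(range(100))          # stand-in for 100 labelled examples
random.shuffle(data)             # shuffle so the split is unbiased

split = int(0.8 * len(data))     # 80/20 split (proportions are illustrative)
training_set = data[:split]      # used to teach the model
validation_set = data[split:]    # held back to check how well it generalises

print(len(training_set), len(validation_set))   # 80 20
```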
