
UK Government to use AI to Fight Extremism

Amber Rudd, the UK Home Secretary, announced today that the government have partnered with a local AI firm to develop a machine learning tool that will automatically detect propaganda produced by the Islamic State terror group with an ‘extremely high degree of accuracy’.

The technology works in real time across a range of download and video-streaming platforms, and is intended to be integrated into the upload process, as the government wants propaganda videos to be blocked before they are even uploaded.

The concept of content moderation prior to upload through filtering is something the European Commission has been pushing hard for. It’s an approach that is highly controversial, with critics describing the concept as ‘censorship machines’ that damage the right to free speech.

Twelve months earlier, the UK government asked tech firms to reduce the time it takes them to moderate and remove extremist content from 36 hours to two. However, it would seem the government have now decided to take the lead by commissioning their own machine learning technology to demonstrate what is possible.

The government hired the AI and machine learning specialists ASI Data Science, paying the UK firm £600,000 in public funds to develop the tool, which, according to Rudd, uses “advanced machine learning” to analyse video and audio and determine whether footage is ISIS propaganda. The tool is claimed to automatically flag 94% of propaganda videos with 99.995% accuracy: if the platform were to process 1m randomly selected videos, only 50 would require additional human review, the Home Office said.
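The Home Office's headline figures are easy to sanity-check. Assuming the 99.995% accuracy figure refers to the rate at which non-propaganda videos are correctly passed (so 0.005% are wrongly flagged), the arithmetic works out as follows:

```python
# Sanity check of the reported Home Office figures.
# Assumption: 99.995% accuracy means 0.005% of benign videos are
# wrongly flagged and sent for human review.

def expected_false_flags(total_videos: int, accuracy: float) -> float:
    """Expected number of videos wrongly flagged for human review."""
    false_positive_rate = 1 - accuracy
    return total_videos * false_positive_rate

flagged = expected_false_flags(1_000_000, 0.99995)
print(round(flagged))  # 50 videos out of 1m, matching the Home Office claim
```

Note that this figure depends heavily on the mix of content: on a platform processing mostly benign uploads, even a 0.005% false-positive rate can still mean many wrongly blocked videos at scale.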

Understandably, the Home Office has not released the methodology behind the model, which it says was trained with 1,000 Islamic State videos, but Rudd did explain in a press conference that the government would share it with smaller companies in order to help them combat online extremism.

Many major tech companies, such as Facebook and Twitter, already use similar technology on their own platforms. The hope is that the tool can be applied to smaller video platforms, like Vimeo, Telegraph and pCloud, which have seen a large rise in ISIS propaganda. Research has found that the terror group used over 400 different websites to upload its content last year, highlighting the importance of technology that works universally across multiple platforms.

Speaking ahead of a two-day visit to Silicon Valley, Rudd said that she was not ruling out forcing tech firms to use the new AI tool, though at this point she is confident that tech firms both large and small will cooperate to help the fight against extremism.

Richard Young

Richard has been interested in the AI space for some years. He believes questions of ethics raise serious problems: what if AI could learn who is predisposed to cancer, and insurers then refused them health cover?
