AI News

Using Crowdsourcing to Develop Ethical Artificial Intelligence

Due to rapid advancements in AI over the last few years, experts have started to consider how best to give AI systems a moral backbone. One popular approach currently being explored is to teach AI to behave ethically by learning from the moral decisions of ordinary people.

Researchers from MIT are already testing this methodology with a system called the ‘Moral Machine’. To generate a dataset of moral views, visitors to a website were asked to choose what a self-driving car should do next when faced with some rather tricky scenarios. For example, if a driverless car was heading towards pedestrians, should it run over and kill three adults to spare two children?

The ‘Moral Machine’ collected vast amounts of data from a large sample of respondents, which was then fed into an AI system. The researchers then asked the system to predict how humans would want a self-driving car to react in similar but previously untested situations.
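To make that pipeline concrete, here is a minimal, purely illustrative sketch of the idea rather than MIT's actual system: crowdsourced choices between two outcomes are encoded as feature differences (the casualty counts and preference weights below are invented), and a simple classifier, in this case scikit-learn's logistic regression, learns to predict which outcome the crowd would prefer in a dilemma it has not seen before.

# Illustrative sketch only, not MIT's pipeline: train a classifier on
# synthetic crowdsourced dilemma choices and predict preferences in a new case.
import numpy as np
from sklearn.linear_model import LogisticRegression

def encode(outcome_a, outcome_b):
    """Feature vector: outcome A's casualties minus outcome B's.
    Features: [adults, children, passengers, pedestrians] killed."""
    return np.array(outcome_a) - np.array(outcome_b)

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(1000):
    a = rng.integers(0, 4, size=4)
    b = rng.integers(0, 4, size=4)
    # Synthetic "votes": respondents prefer fewer casualties,
    # weighting children more heavily than adults (assumed weights).
    weights = np.array([1.0, 2.0, 1.0, 1.0])
    prefers_a = (weights @ a) < (weights @ b)
    X.append(encode(a, b))
    y.append(int(prefers_a))

model = LogisticRegression().fit(np.array(X), np.array(y))

# Previously unseen dilemma: outcome A kills 3 adults, outcome B kills 2 children.
new_dilemma = encode([3, 0, 0, 0], [0, 2, 0, 0])
print(model.predict_proba([new_dilemma])[0, 1])  # estimated P(crowd prefers A)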

Discussing the research, Iyad Rahwan, a researcher at MIT, said: “This proof of concept, [shows] that democracy can help address the grand challenge of ethical decision making in AI.”

The theory behind having to choose between two morally problematic outcomes – the ‘double effect’ – isn’t new. However, applying the theory to an AI system isn’t something that mankind has had to worry about in the past.

Other experts in the field, including the data science team at Duke University, believe the best way forward is a ‘general framework’ that describes how AI will make ethical decisions. Their view, like that of the MIT team, is that aggregating collective moral views on different issues would produce a framework far more comprehensive than one created by a single individual or organisation.
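As a hypothetical illustration of what aggregating collective views could look like in code, the sketch below tallies individual respondents' choices per scenario and takes a simple majority; the scenario names, option labels and majority rule are assumptions for the example, not Duke's actual framework.

# Illustrative only: aggregate individual moral preferences into a collective
# choice per scenario using simple majority vote (hypothetical data).
from collections import Counter

def aggregate(votes):
    """votes: list of (scenario_id, preferred_option), one entry per respondent.
    Returns the majority-preferred option for each scenario."""
    tallies = {}
    for scenario, option in votes:
        tallies.setdefault(scenario, Counter())[option] += 1
    return {scenario: counts.most_common(1)[0][0]
            for scenario, counts in tallies.items()}

votes = [
    ("swerve_vs_stay", "swerve"), ("swerve_vs_stay", "swerve"),
    ("swerve_vs_stay", "stay"),
    ("adults_vs_children", "spare_children"),
    ("adults_vs_children", "spare_children"),
]
print(aggregate(votes))
# {'swerve_vs_stay': 'swerve', 'adults_vs_children': 'spare_children'}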

An alternative view of AI morality

The theory behind crowdsourced morality isn’t foolproof, however. James Grimmelmann of Cornell Law School believes the idea is inherently flawed:

“It doesn’t make the AI ethical, it makes the AI ethical or unethical in the same way that large numbers of people are ethical or unethical.”

Elon Musk, co-chairman of OpenAI, is of the view that creating ethical AI is a matter of having clear policies and guidelines to govern deployment. This view is prevalent across the industry: DeepMind, the AI lab owned by Google’s parent company Alphabet, now has an ethics and society unit whose sole function is governance.

The team at MIT acknowledge that their research is still at the proof-of-concept stage, but they believe their democratic approach could work:

“Democracy has its flaws but [we’re] a big believer in democracy. Even though people can make decisions we don’t agree with, overall democracy works.”

Richard Young

Richard has been interested in the AI space for some years. For him, questions of AI ethics raise serious concerns: what if an AI system can learn who is predisposed to cancer and then deny them health insurance?
