Oscar Chang, a PhD student at Columbia University and co-author of the paper, explained that the goal was to see whether AI could self-improve, without human intervention, by mimicking the biological process of self-replication.
“The primary motivation here is that AI agents are powered by deep learning, and a self-replication mechanism allows for Darwinian natural selection to occur, so a population of AI agents can improve themselves simply through natural selection – just like in nature – if there was a self-replication mechanism for neural networks.”
Chang drew inspiration from quines in computing – programmes that reproduce copies of their own source code. Instead of source code, however, their neural network replicates its weights: the parameters that determine the strength of the connections between neurons.
The two scientists put their new neural network into action by applying it to an image-classification task on the MNIST dataset, in which a computer must identify the correct digit from handwritten numbers 0 to 9.
Self-replicating AI doesn’t like to multi-task
The network was trained on 60,000 MNIST images and tested on a further 10,000, achieving an accuracy of 90.41%. Although this may sound impressive, the performance pales in comparison to other available image-recognition models.
The paper argues that the “self-replication occupies a significant portion of the neural network’s capacity.” In other words, the neural network is trying to split its effort across image recognition and self-replication.
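A toy sketch can illustrate that trade-off: when one set of weights must minimise a classification loss and a self-replication loss at the same time, capacity spent on one objective is unavailable to the other. The architecture and loss below are hypothetical simplifications for illustration, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "network": a single 4x4 weight matrix W.
W = rng.normal(size=(4, 4))

def classification_loss(W, x, y):
    # Cross-entropy of a linear classifier on one example.
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log(p[y])

def replication_loss(W, W_pred):
    # Mean squared error between the network's actual weights
    # and the weights it predicts for itself (its "self-copy").
    return np.mean((W - W_pred) ** 2)

x, y = rng.normal(size=4), 2
W_pred = W + rng.normal(scale=0.1, size=W.shape)  # an imperfect self-copy

# Training would minimise both terms with the same parameters W,
# which is the capacity split the paper describes.
total = classification_loss(W, x, y) + replication_loss(W, W_pred)
```

Because both terms pull on the same weight matrix, improving self-replication can only come at the expense of classification accuracy, and vice versa.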
“To our knowledge, we are the first to tackle the problem of building a self-replication mechanism in a neural network. As such, our work should be best viewed as a proof of concept,” Chang was quick to add.
Although the system may not be able to multi-task well, the researchers believe there are a number of applications in which self-replicating AI could be deployed, including the self-repair of damaged AI programmes.
Self-replication could be used as a last resort for detecting damage, or for returning a damaged AI system to normal.