‘Memory foam’ approach to machine learning
Many of us have been waiting for the rise of the machines ever since James Cameron’s action epic The Terminator stunned audiences with its apocalyptic visions of an Arnold-shaped artificial intelligence. The truth is that AI would need a sea change even to begin approaching the kind of crafty mechanical guile exhibited by the terminators in the movie. Though few experts have been willing to speculate about when such a singularity might occur, a recent publication titled “A ‘memory foam’ approach to unsupervised learning” suggests it could be sooner than most believe.
Recent expert systems have made great strides and caused a stir in artificial intelligence, most notably Google’s speech recognition algorithm and the Netflix recommendation system. But all of these systems are based on a model of artificial intelligence that is unlikely to achieve the generalized intelligence humans exhibit, because they require large sets of labeled training data. In some circumstances this can produce results that allow the AI to far outclass its human counterparts: given a large database of labeled tumor CAT scans, for instance, an AI can quickly become better than humans at recognizing cancerous growths.
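To make that supervised recipe concrete, here is a minimal sketch of my own (not code from any of the systems mentioned): a nearest-centroid classifier trained on a handful of invented, human-labeled points standing in for features extracted from scans.

```python
# Minimal sketch of supervised learning: a nearest-centroid classifier.
# The toy points and labels below are invented for illustration; real
# systems train on thousands of labeled examples (e.g. annotated scans).

from collections import defaultdict

def train(examples):
    """examples: list of ((x, y), label) pairs with human-provided labels."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in examples:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    # One centroid per class: the average of its labeled examples.
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}

def predict(centroids, point):
    """Assign the label of the nearest class centroid."""
    return min(centroids,
               key=lambda label: (point[0] - centroids[label][0]) ** 2
                               + (point[1] - centroids[label][1]) ** 2)

labeled = [((0.1, 0.2), "benign"), ((0.2, 0.1), "benign"),
           ((0.9, 0.8), "malignant"), ((0.8, 0.9), "malignant")]
model = train(labeled)
print(predict(model, (0.85, 0.75)))  # -> "malignant"
```

The point of the sketch is the dependency it exposes: without those human-supplied labels, this kind of system has nothing to learn from.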
The trouble arises when it comes to understanding an object or process as a whole and generalizing that knowledge across multiple domains. That kind of learning belongs to a still relatively undeveloped field of AI called unsupervised learning, and it is the kind of learning humans excel at.
Towards the goal of creating a more robust system of unsupervised learning, a team at Loughborough University in the UK has been perfecting an artificial intelligence model based on “memory foam.” The name hints at the nature of the model itself. Memory foam, which has become a popular component of mattresses, can take on an infinite variety of curvatures depending on the impression left by the person lying on it. In a similar vein, a computer employing the memory-foam approach learns to recognize stimuli by the overall impression that sensory input leaves upon it. Many believe this method more closely resembles the actual workings of the human brain than do the algorithms used in supervised machine learning.
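The paper’s actual implementation is not described here, but the intuition can be sketched with a toy model of my own: a system that keeps a running “impression” of whatever unlabeled stimuli it is exposed to, and later judges new input by how closely it matches that impression. The `Impression` class and its parameters are purely illustrative, not the Loughborough team’s method.

```python
# Illustrative toy only: an unlabeled "impression" accumulator, loosely
# analogous to memory foam taking the shape of whatever presses on it.

import math

class Impression:
    def __init__(self, size, rate=0.1):
        self.shape = [0.0] * size   # the "foam": starts flat
        self.rate = rate            # how quickly new stimuli reshape it

    def expose(self, stimulus):
        """Deform the impression toward the stimulus (no labels needed)."""
        self.shape = [(1 - self.rate) * s + self.rate * x
                      for s, x in zip(self.shape, stimulus)]

    def familiarity(self, stimulus):
        """Cosine similarity between the stimulus and the stored impression."""
        dot = sum(s * x for s, x in zip(self.shape, stimulus))
        norm = (math.sqrt(sum(s * s for s in self.shape)) *
                math.sqrt(sum(x * x for x in stimulus)))
        return dot / norm if norm else 0.0

foam = Impression(size=4)
for _ in range(50):                  # repeated exposure to one pattern
    foam.expose([1.0, 0.0, 1.0, 0.0])
print(foam.familiarity([1.0, 0.0, 1.0, 0.0]))  # close to 1.0: familiar
print(foam.familiarity([0.0, 1.0, 0.0, 1.0]))  # 0.0: novel
```

Nothing in this loop is told what the pattern “is”; the system simply ends up shaped by what it has experienced, which is the spirit of the memory-foam analogy.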
If early demonstrations are any indication, the model could represent the sea change the field of artificial intelligence has been waiting for. Like doting parents, the Loughborough team chose a nursery song as the first stimulus to expose their AI to. According to their study, the AI learned to recognize “Mary Had a Little Lamb,” assimilating and remembering the song’s musical structure, its likeliest frequencies, and other components. This suggests the computer gained a much more nuanced understanding of the song than it could have using supervised learning. But perhaps most importantly, the model could be combined with supervised learning algorithms, allowing an AI to benefit from the best of both methodologies. Such a combined approach might well lead to the kind of strong AI embodied by Arnold in The Terminator.
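How such a combination might work is not spelled out in the study, but one common pattern, assumed here purely for illustration, is to let an unsupervised stage summarize raw input (for example, how often each pitch occurs in a melody) and then hand those summaries to a small supervised classifier. The note sequences below are rough approximations of the tunes, invented for the sketch.

```python
# Toy sketch of combining unsupervised features with supervised labels.
# This is my assumption about the "best of both" idea, not the paper's pipeline.

from collections import Counter

def profile(notes):
    """Unsupervised step: how often each pitch occurs; no labels involved."""
    counts = Counter(notes)
    total = sum(counts.values())
    return {pitch: n / total for pitch, n in counts.items()}

def similarity(p, q):
    """Overlap between two pitch-frequency profiles."""
    return sum(min(p.get(k, 0.0), q.get(k, 0.0)) for k in set(p) | set(q))

# Supervised step: a handful of labeled melodies (toy approximations).
training = {
    "Mary Had a Little Lamb": profile("EDCDEEEDDDEGG"),
    "Hot Cross Buns":         profile("EDCEDCCCCCDDDDEDC"),
}

def classify(notes):
    p = profile(notes)
    return max(training, key=lambda song: similarity(p, training[song]))

print(classify("EDCDEEE"))  # -> "Mary Had a Little Lamb"
```

The unsupervised stage never sees a song title; the labels enter only at the final, supervised step, which is what lets the two methodologies complement each other.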