Many may look back at the end of 2017 as a defining moment in human history: the year an artificially intelligent system created its own “child”, another AI capable of performing a specific task.
A few weeks ago, Google’s AutoML, an artificial intelligence programme created to build other artificial intelligence programmes, spawned a “child” using reinforcement learning. The process resembles ordinary machine learning, except that it is almost entirely automated: AutoML, the “parent” controller network, proposes neural network architectures for its task-specific AI child and learns from how well each candidate performs.
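To make the parent-child relationship concrete, here is a deliberately simplified sketch of that search loop in Python. Everything in it is invented for illustration: the tiny search space, the stubbed-out training step and the keep-the-best rule stand in for Google’s actual controller network, which is trained with reinforcement learning over a far richer space of architectures.

```python
import random

# Toy sketch of the "parent proposes, child is trained, parent learns
# from the result" loop behind neural architecture search. The search
# space, reward function and update rule are all invented for
# illustration only.

SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "filters": [32, 64, 128],
    "kernel_size": [3, 5],
}

def propose_architecture():
    """Parent step: sample a candidate child architecture."""
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def train_and_evaluate(arch):
    """Child step: 'train' the proposed network and report a validation score.

    Stubbed out with a noisy made-up score so the loop runs end to end;
    in practice this is the expensive part (hours of GPU time per child).
    """
    score = 0.5 + 0.01 * arch["num_layers"] + 0.001 * arch["filters"]
    return min(score + random.uniform(-0.05, 0.05), 1.0)

best_arch, best_reward = None, float("-inf")
for trial in range(20):
    arch = propose_architecture()        # parent proposes an architecture
    reward = train_and_evaluate(arch)    # child is trained and scored
    if reward > best_reward:             # parent keeps what worked best
        best_arch, best_reward = arch, reward

print(f"best architecture found: {best_arch} (reward {best_reward:.3f})")
```

The real system replaces the random proposal and keep-the-best rule with a recurrent controller updated by policy gradients, but the overall propose-train-evaluate cycle is the same.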
Named NASNet, the AI child was then tasked with recognising objects, including people, cars, traffic lights, handbags and backpacks, in video in real time.
NASNet performed remarkably well on established image classification benchmarks. On ImageNet, it achieved a prediction accuracy of 82.7 percent on the validation set, surpassing every previous Inception model Google had built. That result was 1.2 percent better than all previously published results and on par with the best unpublished result at the time.
“NASNet may be resized to produce a family of models that achieve good accuracies while having very low computational costs. For example, a small version of NASNet achieves 74 percent accuracy, which is 3.1 percent better than equivalently-sized, state-of-the-art models for mobile platforms. The large NASNet achieves state-of-the-art accuracy while halving the computational cost,” said an official Google statement.
Google Brain researchers wrote in their blog post, “We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined.”
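For readers who want to experiment with the result rather than the search process, pretrained NASNet variants are available through common deep learning libraries. The sketch below assumes a TensorFlow/Keras environment, where NASNetMobile ships with ImageNet weights, and a local image file named example.jpg; the file name and the choice of the mobile variant are placeholders.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.nasnet import (
    NASNetMobile, preprocess_input, decode_predictions)

# Load the smaller NASNet variant with ImageNet weights.
model = NASNetMobile(weights="imagenet")  # expects 224x224 RGB input

# Classify a local image (the path is a placeholder).
img = tf.keras.preprocessing.image.load_img("example.jpg", target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)
x = preprocess_input(np.expand_dims(x, axis=0))

preds = model.predict(x)
for _, label, prob in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {prob:.3f}")
```

Swapping NASNetMobile for NASNetLarge trades higher inference cost for the higher accuracy described in the Google statement above.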
This new development has again brought a much-debated question to the forefront. Does artificial intelligence, including artificial intelligence that can create more artificial intelligence, spell mankind’s doom?
While many noted names such as physicist Stephen Hawking, Tesla and SpaceX CEO Elon Musk, and most recently former US Secretary of State Hillary Clinton, have already sounded alarm bells about artificial intelligence and its potential harm, others disagree.
Facebook’s Mark Zuckerberg, Microsoft founder Bill Gates and futurist Ray Kurzweil are among the famous names who believe that AI can actually do humans more good than harm.
Kurzweil, who currently runs a group at Google writing automatic responses to users’ emails in cooperation with the Gmail team, once famously said, “It was fire that kept us warm, cooked our food, but also burnt our houses down. Technology is always a double-edged sword.”