With the advent of AI-backed phones, most of us may soon not bother searching for the item, address or product we want. We will simply point our camera at it, and the phone will recognise what is in front of it.
While many brands have already incorporated the technology into their phones, the concept is still primarily used in artificial systems that extract information from images. The image data can take many forms — video sequences, views from multiple cameras, or even multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems.
But Is The System Perfect Yet?
The British Machine Vision Association and Society for Pattern Recognition defines computer vision as, “A system concerned with the automatic extraction, analysis and understanding of useful information from a single image or a sequence of images. It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding.”
From the perspective of engineering, computer vision seeks to automate tasks that the human visual system can do. With the help of Artificial Intelligence and Deep Neural Networks, recognising objects, faces and images has become increasingly easy. Yet image recognition, especially facial recognition technology, which involves training computer programs to recognise objects based on databases of images, has repeatedly caused problems in practice.
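At its core, recognition of this kind means mapping an image to a feature vector and comparing it against learned class representations. The sketch below illustrates that idea with a deliberately simple stand-in — a nearest-centroid classifier over flattened pixel vectors on synthetic 8×8 "images" — rather than an actual deep neural network; all data and class names here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_class(base, n=20, noise=0.05):
    """Generate n noisy 8x8 'images' (flattened to 64 pixels) around a base pattern."""
    return np.clip(base + rng.normal(0, noise, (n, 64)), 0, 1)

# Two synthetic classes: a bright-left pattern and a bright-right pattern.
left = np.zeros(64);  left[:32] = 1.0
right = np.zeros(64); right[32:] = 1.0

train = {"left": make_class(left), "right": make_class(right)}

# "Training" here is just averaging each class into a centroid;
# a real system would learn features with a deep network instead.
centroids = {label: imgs.mean(axis=0) for label, imgs in train.items()}

def predict(image):
    """Assign the label of the nearest class centroid in pixel space."""
    return min(centroids, key=lambda c: np.linalg.norm(image - centroids[c]))

test_image = np.clip(left + rng.normal(0, 0.05, 64), 0, 1)
print(predict(test_image))  # prints "left"
```

The failures collected below mostly stem from the gap between such learned representations and what a human actually sees in the image.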
Here Are Some Of The Major Fails In Image Recognition:
- In 2016, researchers were able to fool a commercial facial recognition system into thinking that they were somebody else just by wearing a pair of patterned glasses. A special (think, funky-looking) sticker overlay with a hallucinogenic print was stuck onto the frames of the specs. The twists and curves of the pattern looked random to humans, but to a computer designed to pick out noses, mouths, eyes, and ears, they resembled the contours of someone’s face — any face the researchers chose, in fact. The facial recognition system was so confused that it even recognised one of the researchers as the Pope!
- One of the most recent facial recognition fails that comes to mind occurred during the launch of the eagerly-awaited iPhone X. In September this year, Craig Federighi, Apple’s senior vice president of Software Engineering, struggled to unlock the brand-new phone while demonstrating Face ID, Apple’s new facial recognition software. “Unlocking it is as easy as looking at it and swiping up,” he said. But when the phone failed to unlock, he told the audience, “Let’s try that again.” After it instead prompted him for his passcode, Federighi was forced to pick up a backup device to continue the demonstration.
- Jeff Clune, co-author of a 2015 paper on images that fooled DNNs and other image recognition software, says, “Take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognisable to human eyes that DNNs believe with near certainty are familiar objects. Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.”
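The gradient-ascent technique Clune describes can be sketched in miniature: start from faint noise and repeatedly step the input pixels in the direction that raises one class's score, until the classifier is highly confident about an image that looks like static to a human. The linear softmax "model" with random fixed weights below is an assumption for illustration — the paper's experiments used trained convolutional networks on ImageNet and MNIST.

```python
import numpy as np

rng = np.random.default_rng(42)
n_pixels, n_classes = 64, 3
# Stand-in for a trained model: a random linear classifier (an assumption,
# not a real DNN). softmax(W @ x) gives class "confidences".
W = rng.normal(0, 1, (n_classes, n_pixels))

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def confidence(x, target):
    return softmax(W @ x)[target]

target = 0
x = rng.normal(0, 0.01, n_pixels)        # start from faint random noise
for _ in range(200):
    p = softmax(W @ x)
    # Gradient of log p(target) with respect to the input pixels.
    grad = W[target] - p @ W
    x = np.clip(x + 0.05 * grad, -1, 1)  # small ascent step, keep pixels bounded

print(confidence(x, target))  # confidence in the target class approaches 1.0
```

The resulting `x` is just structured noise, yet the classifier assigns it to the target class with near certainty — a toy version of the mismatch between DNN decision boundaries and human perception.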
- In 2014, Steve Talley, a Denver-based financial advisor, was charged with two bank robberies. The evidence against him was grainy CCTV footage and computer facial recognition software that matched his broad shoulders, skin colour, sex, age, hair, eyes, and square jaw with those of the actual criminal. “Typically, the forensics community relied on experts in a binary way: Is this the same guy or not the same guy?” Anil K Jain, one of the world’s leading pioneers of face recognition technology, explained in an interview. “The focus has shifted to ‘How can you be so sure? Give us some confidence level.’ The forensic community needs to accept that examiners can make mistakes, and they need to say, ‘How can we avoid that?’”
- In 2015, Google’s newly-launched Photos service, which uses machine learning to automatically tag photos, made a huge miscalculation when it tagged photos of two African-Americans as “gorillas.” The user, a US-based computer programmer, reported the problem via Twitter after finding that Google Photos had created an album labelled “gorillas” that exclusively featured photos of him and his African-American friend. Developers at Google immediately apologised for the gaffe and then worked to fix the app’s database.
- A few years ago, Flickr’s auto-recognition tool for sorting photos went awry and tagged a picture of a black man with the labels “ape” and “animal”. Though the racist implications were obvious, the tool also applied the same tags to a picture of a white woman.