How do Face Recognition and Object Recognition Work in Computers?

Having learned to recognize human faces, AI systems are now showing they can classify objects in videos and photographs. Businesses and government agencies, eager to build vision skills into all kinds of machines, have taken a strong interest in this ability. Those machines could be medical scanners that detect skin cancer, in-store cameras, personal robots, drones, or self-driving cars. Then there are smartphones that can be unlocked with a glance.

Face Recognition is a Thing of the Past, All Eyes on Object Recognition

Algorithms designed to recognize individual faces and detect facial features have become far more advanced than those used decades ago. A common approach in face recognition is to measure facial dimensions, for instance the distance from one corner of the eye to the other, or from the nose to the ear. These measurements are encoded as numbers and compared with corresponding data from other images; the closer the numbers, the better the match. Today, such analysis is supported by enormous stores of digital imagery and far greater computing power.
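
The comparison step can be illustrated with a few lines of code. The sketch below assumes the facial measurements have already been extracted and normalized into short feature vectors; the vector values, the distance function, and the 0.6 threshold are illustrative assumptions, not details of any particular system.

```python
# A minimal sketch of distance-based face matching, assuming measurements such
# as eye-corner or nose-to-ear distances have already been turned into numbers.
import numpy as np

def face_distance(features_a, features_b):
    """Euclidean distance between two facial feature vectors."""
    return np.linalg.norm(np.asarray(features_a) - np.asarray(features_b))

def is_same_person(features_a, features_b, threshold=0.6):
    """Smaller distances mean a better match; the threshold is illustrative."""
    return face_distance(features_a, features_b) < threshold

# Hypothetical normalized measurements taken from two photographs.
photo_1 = [0.42, 0.31, 0.77, 0.55]
photo_2 = [0.40, 0.33, 0.75, 0.56]

print(face_distance(photo_1, photo_2))   # small distance -> likely a match
print(is_same_person(photo_1, photo_2))  # True under the illustrative threshold
```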

However, brain-like neural networks have also been a focus of research for years. Such networks can automatically learn to recognize things in an image by searching for patterns in large data sets. In an annual image recognition competition held from 2010 to 2017, computers proved increasingly good at distinguishing between different Welsh corgi breeds, having absorbed enough examples to make such fine distinctions. They were still confused, however, by statues and other more abstract forms.
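
As a rough illustration of how such a network is used once trained, the sketch below runs a pretrained convolutional network from the torchvision library on a single photograph. The choice of model, the file name "corgi.jpg", and the preprocessing values are assumptions for the example, not details from the competition described above.

```python
# A minimal sketch of object recognition with a pretrained neural network,
# assuming PyTorch and torchvision are installed; "corgi.jpg" is a placeholder.
import torch
from torchvision import models, transforms
from PIL import Image

# Convolutional network pretrained on the ImageNet data set (1,000 categories,
# which include both Pembroke and Cardigan Welsh corgis).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Standard ImageNet-style preprocessing: resize, crop, tensorize, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("corgi.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    logits = model(batch)

# The highest-scoring index is the network's best guess among the categories.
print(logits.argmax(dim=1).item())
```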