While Microsoft researchers use “deep learning” models to understand human speech and build computer systems, those at Google are using the same techniques to recognize cats. At Google’s X lab, a team of scientists headed by the Stanford University computer scientist Andrew Y. Ng and the Google fellow Jeff Dean connected around 16,000 computer processors and, through machine learning, taught the system to recognize cats on its own. The software-based neural network, turned loose on the internet, studied about 10 million digital images taken from YouTube videos and learned to pick out the right object from a list of 20,000 distinct items.
The simulation supports the idea that artificial neurons, like biological ones, can learn to detect significant objects on their own. The Google network pooled its memory to find common features across millions of images and assembled a composite digital image of a cat, without ever being told what a cat was or given any labeled examples. The result also confirms that machine learning algorithms improve dramatically when exposed to very large amounts of data. The research could eventually support machine translation, speech recognition and better image search.
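The core idea, learning recurring structure from unlabeled data, can be illustrated with a minimal sketch: a tiny tied-weight autoencoder that compresses inputs and reconstructs them, with no labels involved. Everything here (the patch size, the synthetic data, the training settings) is an illustrative assumption, not the researchers’ actual setup, which involved a far larger distributed network.

```python
import numpy as np

# Minimal unsupervised feature learning sketch: a one-hidden-layer
# autoencoder with tied weights. It never sees labels; it simply learns
# to reconstruct its inputs, forcing the hidden layer to capture the
# shared structure in the data. (Illustrative only -- sizes and data
# are invented, not the Google experiment's configuration.)
rng = np.random.default_rng(0)

n_inputs, n_hidden = 64, 16              # e.g. tiny 8x8 image patches
W = rng.normal(0, 0.1, (n_inputs, n_hidden))
b_h = np.zeros(n_hidden)                 # hidden (encoder) bias
b_o = np.zeros(n_inputs)                 # output (decoder) bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic "patches": noisy mixtures of a few fixed prototypes,
# standing in for the recurring patterns found in real video frames.
prototypes = rng.normal(0, 1, (4, n_inputs))
def sample_batch(n=128):
    mix = rng.random((n, 4))
    return sigmoid(mix @ prototypes + 0.1 * rng.normal(0, 1, (n, n_inputs)))

def reconstruction_error(x):
    h = sigmoid(x @ W + b_h)
    return np.mean((sigmoid(h @ W.T + b_o) - x) ** 2)

init_err = reconstruction_error(sample_batch())

lr = 0.3
for step in range(500):
    x = sample_batch()
    h = sigmoid(x @ W + b_h)             # encode
    x_hat = sigmoid(h @ W.T + b_o)       # decode with tied weights
    err = x_hat - x
    # Backpropagate squared reconstruction error through both layers.
    d_out = err * x_hat * (1 - x_hat)
    d_hid = (d_out @ W) * h * (1 - h)
    grad_W = x.T @ d_hid + (h.T @ d_out).T
    W -= lr * grad_W / len(x)
    b_h -= lr * d_hid.mean(axis=0)
    b_o -= lr * d_out.mean(axis=0)

final_err = reconstruction_error(sample_batch())
print(f"reconstruction error: {init_err:.4f} -> {final_err:.4f}")
```

After training, reconstruction error on fresh unlabeled batches drops, showing the hidden units have extracted the shared structure without any supervision; scaling that principle up to millions of YouTube frames is, loosely, what let the Google network discover a “cat” feature on its own.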
The researchers will present their results at a conference in Edinburgh, Scotland.