Showing content from https://www.wired.com/2014/12/fb/ below:

Facebook Envisions AI That Keeps You From Uploading Embarrassing Pics

The technology has become so important to the internet's biggest names that we're seeing a kind of arms race for deep-learning talent. Google snapped up Geoff Hinton, the University of Toronto professor who founded the deep learning movement alongside LeCun and others. Chinese search giant Baidu recently nabbed Andrew Ng, who helped found the deep learning program at Google. And since he was hired last year to run FAIR, LeCun has stolen some notable names from the Mountain View search giant, including Jason Weston and Tomas Mikolov.

The Power of Language

Deep learning isn't really a new technology. LeCun, Hinton, and others have explored the basic concepts since the '80s, and according to John Platt, a longtime researcher at Microsoft, the software giant was using similar techniques to provide handwriting recognition on tablet PCs a good ten years ago. But as Platt points out, thanks to advances in computer hardware---and the internet's ability to generate the massive amounts of data needed to help train neural nets---the technology has recently taken off in enormous ways.

Across the industry, it's already reinventing image and speech recognition. But like Google, LeCun and FAIR are pushing for more. The next big frontier, he says, is natural language processing, which seeks to give machines the power to understand not just individual words but entire sentences and paragraphs.

Before coming to Facebook, Mikolov led the creation of a deep learning system called Word2Vec, which maps the relationships between words, and Google says it was used to improve the company's "knowledge graph," the system that helps its search engine map all those complex connections among websites. Now, he and Weston have brought this kind of expertise to the Facebook lab.
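The core idea behind Word2Vec is that words become vectors of numbers, and relationships between words become arithmetic on those vectors. Here is a minimal sketch of that idea using tiny hand-made vectors (real Word2Vec embeddings are learned from huge text corpora, and the numbers below are invented purely for illustration):

```python
import numpy as np

# Toy 4-dimensional "word vectors," invented for illustration only.
# Real Word2Vec embeddings are learned from large amounts of text.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.2, 0.1, 0.8]),
    "man":   np.array([0.5, 0.9, 0.0, 0.1]),
    "woman": np.array([0.5, 0.3, 0.0, 0.7]),
    "apple": np.array([0.0, 0.1, 0.9, 0.0]),
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy: king - man + woman should land near queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max(
    (w for w in vectors if w not in {"king", "man", "woman"}),
    key=lambda w: cosine(target, vectors[w]),
)
print(best)  # queen
```

With vectors like these, "closeness in meaning" becomes something a machine can compute, which is what makes them useful for tasks like improving search.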

In the short term, LeCun explains, Facebook aims to create systems that can automatically answer simple questions. The company recently demonstrated a tool that can ingest a summary of The Lord of the Rings and then answer questions about the books. And it's exploring a kind of artificial short-term memory that seeks to improve translation systems using what are called "recurrent neural nets." Just as you can think of a neural net as the cerebral cortex that handles the translation itself, he says, his team is building a system akin to the hippocampus that can serve as "scratch pad" memory for that cortex.
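To make the "scratch pad" idea concrete, here is a deliberately simple sketch: facts are stored in a memory, and a question is answered by retrieving the stored fact that overlaps it most. This word-overlap retrieval is a stand-in of my own, not Facebook's method; the systems described here learn both the storage and the retrieval with neural networks.

```python
from collections import Counter

# A toy "scratch pad": store facts as bags of words, then answer a
# question by retrieving the stored fact that shares the most words
# with it. Only an illustration of retrieve-from-memory, not the
# neural approach the article describes.
memory = []

def remember(fact: str) -> None:
    memory.append((fact, Counter(fact.lower().split())))

def answer(question: str) -> str:
    q = Counter(question.lower().split())
    # Score each stored fact by its word overlap with the question.
    best_fact, _ = max(memory, key=lambda m: sum((q & m[1]).values()))
    return best_fact

remember("Frodo carries the ring to Mordor")
remember("Gandalf is a wizard")
remember("Bilbo lives in the Shire")

print(answer("who carries the ring"))  # Frodo carries the ring to Mordor
```

The appeal of the real, learned version is that the "overlap" is computed in a space of meanings rather than literal words, so the system can match a question to a fact even when they share no vocabulary.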

'An AI-Complete Problem'

The larger aim, LeCun says, is to create things like his digital assistant, things that can closely analyze not only photos but all sorts of other stuff posted to Facebook. "You need a machine to really understand content and understand people and be able to hold all that data," he says. "That is an AI-complete problem."

But at the same time, the team is looking beyond this sort of thing, hoping to anticipate the ways that Facebook will evolve in the more distant future---five or ten years down the road. LeCun hints this might involve the Oculus Rift---the virtual reality headset that Facebook acquired earlier this year---saying his team has at least discussed research with the Oculus team.

Certainly, there are limits to the company's AI ambitions. At one point, LeCun indicates that Facebook is not yet exploring AI in combination with robotics. But he does say this is something he's interested in exploring with his academic research, under the aegis of NYU. It's the next logical step.

