Facebook’s chief artificial intelligence scientist Yann LeCun helped spearhead the rise of deep learning, the cutting-edge AI technology used by companies like Google and Amazon to quickly translate languages and identify objects in photos.
At the core of deep learning is software called a neural network, which sifts through enormous amounts of data so that it can notice patterns more quickly than humans can. But this technology requires tremendous computing power, prompting semiconductor makers like Intel, along with hardware startups, to explore radically new chip designs that consume less energy and run certain AI tasks more efficiently.
LeCun will present a new research paper on Monday at the International Solid-State Circuits Conference in San Francisco that outlines his vision for AI’s future. In particular, he’ll focus on how the chips and hardware that make it possible must evolve.
Here are a few highlights from his talk:
1. From translating languages to policing content
Although companies like Facebook, Google, and Microsoft are exploring specialized computer chips that reduce energy consumption, LeCun is blunt about why such innovation is important: new computer chips will let companies run even more neural networks inside their data centers than is possible today.
As a result, tasks like online speech translation could be supercharged to run in real time. Meanwhile, AI systems could analyze every frame of a video to identify people or objects, rather than just a few stills, significantly boosting accuracy.
LeCun also believes that content moderation, like scanning text for offensive language or fake news, could be improved using better computer chips. For a company like Facebook that struggles with deleting propaganda and abusive behavior from its service, those advancements couldn’t come soon enough.
2. A world of “smarter” vacuum cleaners and lawnmowers
One trend LeCun is closely watching is computer chips that can fit in everyday devices like vacuum cleaners and lawnmowers. Imagine a futuristic lawnmower loaded with neural networks that could recognize the difference between weeds and garden roses, he explains.
LeCun also envisions even more sophisticated mobile computing chips that can run neural networks directly on the devices themselves rather than having to send information back to data centers for processing. Already, some smartphones are designed with AI built in that can recognize a user’s face to unlock the device, but improved computer chips will be necessary for more advanced tasks.
Another hurdle for AI is today’s batteries, he says. The technology consumes a lot of energy, which limits its use on smaller devices.
3. Giving computers some common sense
Despite advances in deep learning, computers still lack common sense. A computer would need to review thousands of images of elephants before it could independently identify one in other photos.
In contrast, children quickly recognize elephants because they have a basic understanding about the animals. If challenged, they can extrapolate that an elephant is merely a different kind of animal—albeit a really big one.
LeCun believes that new kinds of neural networks will eventually be developed that gain common sense by sifting through a smorgasbord of data. It would be akin to teaching the technology basic facts that it can later reference, like an encyclopedia. AI practitioners could then refine these neural networks by further training them to recognize and carry out more advanced tasks than modern versions.
But it would only be possible using more powerful computer chips—ones that LeCun hopes are just around the corner.