Where's AI going next? There may be no better place to ask that question than this week's Conference and Workshop on Neural Information Processing Systems - better known as NIPS - in Long Beach, Calif.

And there may be no one more fun to ask than MILABOT - the AI developed by a team at the University of Montreal that's one of the two demonstrations we're hosting at our booth at the conference.

The bot's specialty: entangling users in open-ended conversations that can veer from cat puns to probing questions about your relationship with your mother.

'Some people wind up talking to it for 20-30 minutes about their personal lives,' Iulian Vlad Serban, one of the researchers who built it, said as he invited AI researchers to step up and put it to the test.

And when asked about the future of AI, MILABOT spat out the answer you'll hear from many of the more than 7,000 students and researchers engaged in freewheeling conversations spilling out into the hallways of the Long Beach Convention Center this week. 'I'm going to have to think about that one for a while,' MILABOT replied.

Like everyone at NIPS - one of the world's premier AI gatherings - NVIDIA is working to answer this question, too.

In part by supporting the work of researchers like Serban through our NVIDIA AI Labs, or NVAIL, program, which backs research at 20 top universities and institutes with technical assistance from our engineers, support for students and access to our DGX AI supercomputing systems.

Watch and Learn

One answer: deep learning will help machines interact with the physical world - and the humans who inhabit it - much more fluidly.

'I think that the next few years are going to be about autonomous machines,' said NVIDIA CEO Jensen Huang during a visit to our booth, where he stopped to talk with UC Berkeley's Sergey Levine, Chelsea Finn and Frederik Ebert about their work. 'You're at the intersection of AI and machines that can interact with the physical world.'

The team from the Berkeley AI Research Lab - or BAIR - brought a pair of demos to NIPS that show how new deep learning techniques are making this possible.

In the first of two demos from BAIR you'll place an object - such as your car keys or a pair of glasses - in a robot's workspace. You'll then click on a user interface to show the machine where the object you want moved is, and where you want it to go.

The robot will then imagine - through a video prediction users can watch - where the specified object will move based on the actions it might take, and use those predictions to plan its next moves.
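
Here's a rough Python sketch of the kind of planning loop that video-prediction approach implies: sample candidate action sequences, ask a learned model where the user-designated pixel would end up under each one, and keep the sequence that brings it closest to the goal. The function names, the random-shooting planner and the stand-in prediction model below are illustrative assumptions, not BAIR's actual code.

```python
import numpy as np

def predict_pixel(model, image, pixel, actions):
    """Stand-in for a learned video-prediction model: return the predicted
    (row, col) of the designated pixel after executing `actions`."""
    # A real model would roll out predicted frames; here we just nudge the pixel.
    return np.asarray(pixel, dtype=float) + actions.sum(axis=0)[:2]

def plan(model, image, start_pixel, goal_pixel, horizon=5, candidates=200, action_dim=4):
    """Random-shooting planner: pick the action sequence whose predicted
    outcome lands the designated pixel closest to the goal."""
    best_cost, best_actions = np.inf, None
    for _ in range(candidates):
        actions = np.random.uniform(-1.0, 1.0, size=(horizon, action_dim))
        cost = np.linalg.norm(predict_pixel(model, image, start_pixel, actions)
                              - np.asarray(goal_pixel, dtype=float))
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    return best_actions

# The user clicks the object (start) and where it should go (goal).
best = plan(model=None, image=np.zeros((48, 64, 3)),
            start_pixel=(10, 20), goal_pixel=(30, 40))
```

In practice a system like this would typically execute only the first action of the winning sequence, look at the new camera image and replan, model-predictive-control style.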

Thanks to an innovative convolutional neural network design, the robot's skill - picked up over the course of a single day of training last month - has surprised even some of the students who helped train it.

In the second demo, you'll demonstrate a task, such as putting something in a container, by guiding a robot arm. Using video of your demonstration, the robot will find the container and put the same item in it.
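
If you want a feel for how a single demonstration can steer a robot, here's a heavily hedged sketch: a policy conditioned on an embedding of the demo video. The Berkeley work meta-learns across many tasks so one new demonstration is enough to adapt; the embedding, linear policy and weights below are placeholders for illustration only.

```python
import numpy as np

def embed_demo(demo_frames):
    """Summarize the demonstration video as a fixed-length vector (placeholder)."""
    return np.mean([frame.mean(axis=(0, 1)) for frame in demo_frames], axis=0)

def policy(observation_image, demo_embedding, weights):
    """Map the current camera image plus the demo embedding to a robot action."""
    features = np.concatenate([observation_image.mean(axis=(0, 1)), demo_embedding])
    return weights @ features  # e.g. end-effector velocities

# Usage: one human-guided demonstration, then the robot acts in new scenes.
demo = [np.random.rand(48, 64, 3) for _ in range(20)]
w = np.random.rand(7, 6)  # hypothetical learned weights for a 7-DoF arm
action = policy(np.random.rand(48, 64, 3), embed_demo(demo), w)
```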

Talk to Us

MILABOT was another demo that captivated NIPS attendees, many of them researchers eager to find ways to trip up the chatbot.

It can be done - one researcher tripped it up with a tricky bit of pronoun disambiguation - but when it's not being tortured, MILABOT can keep a conversation going, even if it has to resort to a bad pun.

'I don't own a cat, but if I did, I would like her meowy much,' MILABOT replies when asked if it likes cats. (Cats, of course, are a running joke among AI researchers.)

Created by the Montreal Institute for Learning Algorithms to compete in the Amazon Alexa Prize competition, this chatbot doesn't rely on one conversational model, but on 22 of them.

Making small talk with people via speech or text is a challenge computer scientists have been grappling with at least since MIT's Joseph Weizenbaum created ELIZA - which spits out frustratingly superficial responses to human questioning - more than five decades ago.

Unlike ELIZA, MILABOT relies on what its creators describe as an 'ensemble' of models. They include template-based models, bag-of-words models, sequence-to-sequence neural network models and latent variable neural network models, as well as a variant of the original ELIZA.
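
To make the 'ensemble' idea concrete, here's a toy Python sketch in which a few stand-in response models each propose a candidate reply. MILABOT's real retrieval, template and neural models are far more sophisticated; treat these classes purely as illustrative assumptions.

```python
class TemplateModel:
    def respond(self, utterance):
        return "Tell me more about that."

class BagOfWordsModel:
    def __init__(self, corpus):
        self.corpus = corpus  # list of (context, reply) pairs

    def respond(self, utterance):
        words = set(utterance.lower().split())
        # Return the reply whose stored context shares the most words with the input.
        context, reply = max(self.corpus,
                             key=lambda pair: len(words & set(pair[0].lower().split())))
        return reply

class ElizaLikeModel:
    def respond(self, utterance):
        return "Why do you say that %s?" % utterance.lower()

def candidate_responses(models, utterance):
    """Each model in the ensemble proposes one candidate reply."""
    return [m.respond(utterance) for m in models]

models = [TemplateModel(),
          BagOfWordsModel([("do you like cats", "I would like her meowy much.")]),
          ElizaLikeModel()]
candidates = candidate_responses(models, "Do you like cats?")
```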

The real trick is picking which of these models to use for each response. To do that, MILABOT uses deep reinforcement learning - where software agents learn by taking a long string of actions to maximize a cumulative reward - applied to data crowdsourced from real interactions with people.
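
A minimal sketch of that selection step might look like the code below, assuming a toy linear scoring policy whose weights would have been learned offline from crowdsourced ratings. The features and weights here are made up, not MILABOT's.

```python
import numpy as np

def featurize(context, candidate):
    """Toy features: candidate length, word overlap with the context,
    and whether the candidate asks a question."""
    overlap = len(set(context.lower().split()) & set(candidate.lower().split()))
    return np.array([len(candidate.split()), overlap, float("?" in candidate)])

def select_response(context, candidates, weights):
    """Return the candidate the learned policy scores highest."""
    scores = [float(weights @ featurize(context, c)) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Hypothetical weights a reinforcement learner might have settled on.
weights = np.array([0.05, 0.6, 0.2])
print(select_response("do you like cats",
                      ["I don't own a cat, but if I did, I would like her meowy much.",
                       "Tell me more about that."],
                      weights))
```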

It's not perfect, but it's enough to draw a crowd - and keep them entertained as they throw one curveball after another at the software.

Stop By

To see these demos, and many more, stop by our booth at NIPS.
