Unsupervised learning of 3D structure from images

Reblogged from the morning paper


Unsupervised learning of 3D structure from images – Rezende et al. (Google DeepMind), NIPS 2016

Earlier this week we looked at how deep nets can learn intuitive physics given an input of objects and the relations between them. If only there were some way to look at a 2D scene (e.g., an image from a camera) and build a 3D model of the objects in it and their relationships… Today’s paper choice is a big step in that direction, learning the 3D structure of objects from 2D observations.

The 2D projection of a scene is a complex function of the attributes and positions of the camera, lights, and objects that make up the scene. If endowed with 3D understanding, agents can abstract away from this complexity to form stable, disentangled representations, e.g., recognizing that a chair is a chair whether seen from above or from…
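As a rough illustration of that dependence (and not the paper's generative model), the NumPy sketch below projects the same toy 3D point cloud through a simple pinhole camera placed at two different positions. The `project` helper, its `camera_pos` and `focal_length` parameters, and the point cloud itself are made up for this example.

```python
import numpy as np

def project(points_3d, camera_pos, focal_length=1.0):
    """Project Nx3 world points onto the image plane of a pinhole camera
    located at camera_pos and looking along the +z axis (illustrative only)."""
    # Express the points in camera coordinates (camera at the origin).
    p_cam = points_3d - camera_pos
    # Perspective divide: x' = f * x / z, y' = f * y / z.
    z = p_cam[:, 2:3]
    return focal_length * p_cam[:, :2] / z

# The same (hypothetical) chair-shaped point cloud seen from two viewpoints
# yields two very different 2D projections, even though the underlying 3D
# object is unchanged -- the ambiguity a learned 3D representation abstracts away.
chair = np.array([[0.0, 0.0, 5.0], [0.5, 1.0, 5.5], [-0.5, 1.0, 4.5]])
print(project(chair, camera_pos=np.array([0.0, 0.0, 0.0])))
print(project(chair, camera_pos=np.array([2.0, 0.0, 1.0])))
```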

