Intelligent robots require advanced vision capabilities to perceive and interact with the real physical world. While computer vision has made great strides in recent years, its predominant paradigm still focuses on analyzing image pixels to infer 2D output representations (bounding boxes, segmentations, etc.), which remain far from sufficient for real-world robotics applications.
In this talk, I will advocate the use of complete 3D scene representations that enable intelligent systems not only to recognize what is seen (e.g., Am I looking at a chair?), but also to predict contextual information about the complete 3D environment beyond visible surfaces (e.g., What could be behind the table? Where should I look to find an exit?). As examples, I will present a line of my recent work that demonstrates the power of these representations through amodal 3D object detection (Sliding Shapes and Deep Sliding Shapes), analyzing and synthesizing 3D scenes (Semantic Scene Completion), and predicting semantics and 3D structure outside the image field of view (Im2Pano3D). Finally, I will discuss some ongoing efforts on how these 3D scene representations can further enable and benefit from real-world robotic interactions (in the Amazon Robotics Challenge), shifting the way we view computer vision problems from the perspective of a passive observer to that of an active explorer.
Shuran Song received her Ph.D. from Princeton University in 2018 (advisor: Thomas Funkhouser) and will join Columbia University as an assistant professor in 2019; she is currently a visiting researcher at Google. Her research interests include computer graphics, computer vision, and robotics. She received her bachelor's degree from the Hong Kong University of Science and Technology, and during her Ph.D. she was awarded a Facebook Fellowship and a Siebel Scholarship, among other honors. Homepage: vision.princeton.edu/people/shurans/