May 20, 2020

VisionLabs to Hold Online ‘Machines Can See’ Summit

May 20, 2020 — VisionLabs will hold Machines Can See, the 4th annual summit on Artificial Intelligence, Computer Vision and Machine Learning. This year the summit will be held online on June 8-10. The main goal of the event is to connect world-leading scientists from academia and industry with a wide international audience of researchers and AI enthusiasts.

Machines Can See 2020 aims to share the latest results and scientific ideas from labs including CMU, Georgia Tech, Inria, Facebook, Google, Intel, Samsung and Yandex. Presentations will highlight advances in three main topics: neural networks and deep learning for image and video recognition; navigation, autonomous driving and robotics; and neural image generation and animated human avatars. After the presentations, the speakers will share their visions for the future of AI in live discussions.

The 4th summit will be held fully online to reach the widest possible audience during the COVID-19 period. Registration is free, and the summit will feature scientific talks, online Q&A sessions with the speakers and live panel discussions.

Main website: http://machinescansee.com

June 8

Speaker: Deva Ramanan (CMU / Argo AI)

Topic: Embodied perception in-the-wild

About speaker:

Deva Ramanan is an associate professor at the Robotics Institute at Carnegie Mellon University, a principal scientist at Argo AI, and the director of the CMU Argo AI Center for Autonomous Vehicle Research. Prior to joining CMU, he was an associate professor at UC Irvine. His research interests span computer vision and machine learning, with a focus on visual recognition. He was awarded the David Marr Prize in 2009, the PASCAL VOC Lifetime Achievement Prize in 2010, an NSF CAREER Award in 2010, the UCI Chancellor’s Award for Excellence in Undergraduate Research in 2011, and the IEEE PAMI Young Researcher Award in 2012; he was named one of Popular Science’s Brilliant 10 researchers in 2012 and a National Academy of Sciences Kavli Fellow in 2013, and won the Longuet-Higgins Prize in 2018 for fundamental contributions in computer vision. He is an associate editor of IJCV and PAMI and a regular area chair of CVPR, ICCV and ECCV.

Speaker: Cordelia Schmid (INRIA / Google)

Topic: Video understanding

About speaker:

Cordelia Schmid has held a permanent research position at Inria since 1997, where she is a research director. Since 2018 she has held a joint appointment with Google Research. She has published more than 300 articles, mainly in computer vision. She was editor-in-chief of IJCV (2013–2018), a program chair of IEEE CVPR 2005 and ECCV 2012, and a general chair of IEEE CVPR 2015, ECCV 2020 and ICCV 2023. In 2006, 2014 and 2016 she was awarded the Longuet-Higgins Prize for fundamental contributions in computer vision that have withstood the test of time. She is an IEEE Fellow. She was awarded an ERC Advanced Grant in 2013, the Humboldt Research Award in 2015 and the Inria & French Academy of Science Grand Prix in 2016. She was elected to the German National Academy of Sciences, Leopoldina, in 2017. In 2018 she received the Koenderink Prize for fundamental contributions in computer vision, and in 2020 she received the Royal Society Milner Award.

Speaker: Vladlen Koltun (Intel)

Topic: Machines that see in the real world

About speaker:

Vladlen Koltun is the Chief Scientist for Intelligent Systems at Intel. He directs the Intelligent Systems Lab, which conducts high-impact basic research in computer vision, machine learning, robotics, and related areas. He has mentored more than 50 PhD students, postdocs, research scientists, and PhD student interns, many of whom are now successful research leaders.

Speaker: Jitendra Malik (Berkeley / Facebook)

About speaker:

Jitendra Malik is the Arthur J. Chick Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley, and a Research Director at Facebook AI Research in Menlo Park. Jitendra’s group has worked on computer vision, computational modeling of biological vision, computer graphics and machine learning. Several well-known concepts and algorithms arose from this work, such as anisotropic diffusion, normalized cuts, high dynamic range imaging, shape contexts and R-CNN. His publications have received numerous best paper awards, including five test-of-time awards: the Longuet-Higgins Prize for papers published at CVPR (twice) and the Helmholtz Prize for papers published at ICCV (three times). He received the 2013 IEEE PAMI-TC Distinguished Researcher in Computer Vision Award, the 2014 K.S. Fu Prize from the International Association for Pattern Recognition, the 2016 ACM-AAAI Allen Newell Award, the 2018 IJCAI Award for Research Excellence in AI, and the 2019 IEEE Computer Society Computer Pioneer Award.

Discussion of the first day: Deva Ramanan, Cordelia Schmid, Vladlen Koltun, Jitendra Malik; moderated by Ivan Laptev (Inria / VisionLabs)

June 9

Speaker: Josef Sivic (INRIA / CTU)

Topic: Weakly supervised learning for visual recognition

About speaker:

Josef Sivic is a senior researcher at Inria in Paris and a distinguished senior researcher at the Czech Institute of Informatics, Robotics and Cybernetics at the Czech Technical University in Prague. He received his habilitation degree from Ecole Normale Superieure in Paris in 2014, his PhD from the University of Oxford in 2006 and his MSc degree from the Czech Technical University in 2002. Before joining Inria he was a post-doctoral associate at the Computer Science and Artificial Intelligence Lab at the Massachusetts Institute of Technology. He received the British Machine Vision Association Sullivan Thesis Prize, three test-of-time awards at major computer vision conferences (one at CVPR, two at ICCV), and an ERC Starting Grant. He holds a chair at the Paris AI Research Institute.

Speaker: Laurens van der Maaten (Facebook)

Topic: From Visual Recognition to Visual Understanding

About speaker:

Laurens van der Maaten is a Research Director at Facebook AI Research (FAIR), where he leads FAIR’s New York site. His research focuses on machine learning and computer vision. Previously, he worked as an Assistant Professor (with tenure) at Delft University of Technology, as a post-doctoral researcher at UC San Diego, and as a PhD student at Tilburg University. Currently, he is working on embedding models, large-scale weakly supervised learning, visual reasoning, and cost-sensitive learning. He shared a CVPR Best Paper Award with Gao Huang, Zhuang Liu and Kilian Q. Weinberger for their research on “Densely Connected Convolutional Networks,” which was conducted at Cornell University in collaboration with Tsinghua University and Facebook AI Research.

Speaker: James Hays (Georgia Tech)

Topic: Thermal imaging for grasp understanding

About speaker:

James Hays is an associate professor of computing at the Georgia Institute of Technology and a Principal Scientist at Argo AI. Previously, he was the Manning Assistant Professor of Computer Science at Brown University. James received his PhD from Carnegie Mellon University and was a postdoc at the Massachusetts Institute of Technology. His research interests span computer vision, computer graphics, robotics, and machine learning. His research often involves exploiting non-traditional data sources (e.g. internet imagery, crowdsourced annotations, thermal imagery, human sketches, autonomous vehicle sensor data) to explore new research problems (e.g. global geolocalization, sketch to real, hand-object contact prediction). James is the recipient of an NSF CAREER Award and a Sloan Fellowship.

Discussion of the second day: Josef Sivic, Laurens van der Maaten, James Hays; moderated by Manohar Paluri (Facebook)

June 10

Speaker: Yaser Sheikh (CMU / Facebook)

Topic: Photorealistic Telepresence

About speaker:

Yaser Sheikh is an Associate Professor at the Robotics Institute, Carnegie Mellon University, with an appointment in the Mechanical Engineering Department. He also leads Oculus Research Pittsburgh, a Facebook lab focused on Social VR. His research is focused on machine perception and rendering of social behavior, spanning sub-disciplines in computer vision, computer graphics, and machine learning. He has won Popular Science’s Best of What’s New Award, the Honda Initiation Award (2010), best paper awards at WACV (2012), SAP (2012), SCA (2010), and ICCV THEMIS (2009), and placed first in the MSCOCO Keypoint Challenge (2016). His research has been featured by various media outlets including The New York Times, The Verge, Popular Science, BBC, MSNBC, New Scientist, Slashdot, and WIRED.

Speaker: Victor Lempitsky (Samsung / Skoltech)

Topic: Neural image generation

About speaker:

Victor Lempitsky leads the Samsung AI Center in Moscow as well as the Vision, Learning, Telepresence (VIOLET) Lab at the center. He is also an associate professor at the Skolkovo Institute of Science and Technology (Skoltech). In the past, Victor was a researcher at Yandex, at the Visual Geometry Group (VGG) of Oxford University, and in the Computer Vision group at Microsoft Research Cambridge. He holds a PhD (“kandidat nauk”) degree from Moscow State University (2007). Victor’s research interests are in various aspects of computer vision and deep learning, in particular generative deep learning. He has served as an area chair for top computer vision and machine learning conferences (CVPR, ICCV, ECCV, ICLR, NeurIPS) on multiple occasions. His recent work on neural head avatars was recognized as the most-discussed research publication of 2019 in the Altmetric Top 100 rating.

Speaker: Artem Babenko (Yandex)

Topic: Unsupervised Discovery of Interpretable Directions in the GAN Latent Space

About speaker:

Artem Babenko received his MS degree in computer science from the Moscow Institute of Physics and Technology (MIPT) in 2012. Currently, he is a researcher at Yandex and also holds a teaching assistant position at the National Research University Higher School of Economics (HSE). Artem’s research is focused on problems of large-scale image retrieval and recognition.

Speaker: Abhinav Gupta (CMU / Facebook)

About speaker:

Abhinav Gupta is an Associate Professor at the Robotics Institute, Carnegie Mellon University, and a Research Manager at Facebook AI Research (FAIR). Abhinav’s research focuses on scaling up learning by building self-supervised, lifelong and interactive learning systems. Specifically, he is interested in how self-supervised systems can effectively use data to learn visual representations, common sense and representations for actions in robots. Abhinav is a recipient of several awards, including the ONR Young Investigator Award, the PAMI Young Researcher Award, a Sloan Research Fellowship, an Okawa Foundation Grant, a Bosch Young Faculty Fellowship, a YPO Fellowship, an IJCAI Early Career Spotlight, an ICRA Best Student Paper Award, and an ECCV Best Paper Runner-up Award. His research has also been featured in Newsweek, BBC, the Wall Street Journal, Wired and Slashdot.

Discussion of the third day: Yaser Sheikh, Victor Lempitsky, Artem Babenko, Abhinav Gupta; moderated by Ivan Laptev (Inria / VisionLabs)

“By organizing MCS summits over the last three years we have succeeded in creating a vibrant community of computer vision and AI professionals. Running an online event during the COVID-19 period is an excellent alternative in the present circumstances. We are dedicated to continuing to deliver a valuable experience for all attendees of the summit, akin to the physical events of previous years.” – Alexander Khanin, co-founder and chairman of the Board of VisionLabs.

About VisionLabs

VisionLabs is a Netherlands-based company with expertise in computer vision and machine learning. The company develops products and solutions in the areas of visual recognition, including face recognition, as well as virtual and augmented reality. VisionLabs products are based on state-of-the-art algorithms and technologies developed internally by the company. Over the years, VisionLabs has retained its leadership in the international NIST facial recognition competition. The quality of our products and technology is confirmed by numerous customer deployments and by the prizes the company regularly wins at independent third-party competitions. Together with our partners, we deliver solutions globally to the security, retail, banking, transportation and other industries. https://visionlabs.ai/


Source: VisionLabs
