Intel Labs has joined with academia and industry leaders in continual learning (CL) to co-organize the CVPR 2020 Workshop on Continual Learning in Computer Vision (CLVISION) on June 14. This one-day virtual workshop will be held during CVPR 2020, one of the top conferences for computer vision, machine learning and artificial intelligence (AI) trends.
CL is one of the most crucial milestones for computer vision and AI in general. It is an emerging direction for addressing the challenges that AI and deep learning (DL) algorithms face when only a small amount of data is available, especially for new tasks with limited training. Through this CL capability, a system autonomously accumulates knowledge, continually improving to a high level of accuracy without intensive human labeling effort.
Intel Labs, along with organizations including Element AI, MILA, University of Cambridge and Google Research, co-organized CLVISION to gather researchers and engineers to discuss the latest advances in CL. The workshop features regular paper presentations, invited speakers and technical benchmark challenges that present the current state of the art, as well as the limitations and future directions of CL in computer vision. Seven academic and industry thought leaders, including Razvan Pascanu of DeepMind, Cordelia Schmid of INRIA and Chelsea Finn of Stanford University, will give talks on continual, few-shot and meta learning research.
Intel Labs is proud to sponsor CLVISION awards for the best paper and the technical benchmark challenge. The comprehensive two-phase challenge track will assess novel CL solutions in the computer vision context based on three different CL protocols. The challenge provides the first opportunity for comprehensive evaluation on a shared hardware platform, enabling fair comparison across entries. Participants will use Intel Labs' OpenLORIS-Object dataset, one of two testbed datasets for CL algorithms and applications.
Intel Labs had one paper accepted at the workshop: CatNet: Class Incremental 3D ConvNets for Lifelong Egocentric Gesture Recognition by Zhengwei Wang, Tejo Chalasani and Aljosa Smolic (researchers at V-SENSE at Trinity College Dublin), and Qi She (senior research scientist at Intel Labs China). Video recognition systems for virtual reality/augmented reality (VR/AR) devices should ideally support incremental updates and customization of gestures. The research team demonstrated how a class incremental network (CatNet), a 3D convolutional framework capable of lifelong learning, would be beneficial for adding gestures to VR/AR systems. The incremental learning makes efficient use of memory, enabling fast learning of new gesture classes without forgetting previously learned ones.
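The idea of learning new classes without forgetting old ones can be illustrated with a minimal sketch. The example below is not CatNet itself (which uses 3D ConvNets on video); it is a hypothetical class-incremental classifier that keeps a small exemplar memory per class and predicts by nearest class mean, so adding a new class never touches the stored knowledge of earlier classes. The class name `IncrementalNCM` and its parameters are illustrative assumptions, not part of the paper.

```python
import numpy as np

class IncrementalNCM:
    """Hypothetical class-incremental sketch (not CatNet).

    Keeps a small exemplar memory per class and classifies by
    nearest class mean, so new classes can be added without
    retraining on, or forgetting, previously learned classes.
    """

    def __init__(self, exemplars_per_class=5):
        self.exemplars_per_class = exemplars_per_class
        self.memory = {}  # class label -> stored exemplar features

    def add_class(self, label, samples):
        """Register a new class from its feature samples."""
        samples = np.asarray(samples, dtype=float)
        # Keep only the exemplars closest to the class mean,
        # bounding memory use as the number of classes grows.
        mean = samples.mean(axis=0)
        order = np.argsort(np.linalg.norm(samples - mean, axis=1))
        self.memory[label] = samples[order[: self.exemplars_per_class]]

    def predict(self, x):
        """Assign x to the class with the nearest exemplar mean."""
        x = np.asarray(x, dtype=float)
        means = {c: ex.mean(axis=0) for c, ex in self.memory.items()}
        return min(means, key=lambda c: np.linalg.norm(x - means[c]))

# Classes arrive one at a time; earlier memories are untouched.
clf = IncrementalNCM()
clf.add_class("wave", [[0, 0], [0, 1], [1, 0]])
clf.add_class("swipe", [[5, 5], [5, 6], [6, 5]])
clf.add_class("pinch", [[10, 0], [10, 1], [11, 0]])
```

The bounded per-class memory is what makes this style of learner suitable for on-device use: cost grows with the number of classes, not with the total amount of data ever seen.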
Learn more about the virtual workshop through CVPR 2020.