The recent success of deep learning has shown that a deep architecture in conjunction with abundant quantities of labeled training data is the most promising approach for most vision tasks. However, annotating a large-scale dataset for training such deep neural networks is costly and time-consuming, even with the availability of scalable crowdsourcing platforms like Amazon’s Mechanical Turk. As a result, there are relatively few public large-scale datasets (e.g., ImageNet and Places2) from which it is possible to learn generic visual representations from scratch.
Thus, it is unsurprising that there is continued interest in developing novel deep learning systems that train on low-cost data for image and video recognition. Among different solutions, crawling data from the Internet and using the web as a source of supervision for learning deep representations has shown promising performance for a variety of important computer vision applications. However, the datasets and tasks differ in various ways, which makes it difficult to fairly evaluate different solutions and to identify the key issues in learning from web data.
This workshop aims to promote the advancement of learning state-of-the-art visual models directly from the web, and to bring together computer vision researchers interested in this field. To this end, we release a large-scale web image dataset named WebVision for visual understanding by learning from web data. The dataset consists of 2.4 million web images crawled from the Internet for 1,000 visual concepts. A validation set of 50K human-annotated images is also provided to facilitate algorithm development.
Based on this dataset, we also organize the first Challenge on Visual Understanding by Learning from Web Data. The final results will be announced at the workshop, and the winners will be invited to present their approaches at the workshop. An invited paper track will also be included in the workshop.
News 10.08.2017: Slides of the talks and presentations at the workshop have been uploaded. See the workshop schedule for the links.
News 22.07.2017: Prof. Jitendra Malik is unable to give a talk due to a schedule conflict. We are happy to welcome Dr. Chen Sun from Google to give a talk instead.
News 23.06.2017: Test phase has started!
News 16.05.2017: Google meta information updated
News 18.04.2017: Test images released
News 04.04.2017: Original training images released
News 01.04.2017: Development kit released
News 22.03.2017: README.txt added and Flickr & Google Metadata updated because of missing q1632.json files
News 07.03.2017: The workshop website is now online. The dataset and challenge development kit will be released soon!
|Opening Remarks, Rahul Sukthankar (Google Research & CMU)
|Invited Talk: Learning from Web-scale Image Data for Visual Recognition, Chen Sun (Google Research)
|Database Overview and Challenge Overview, Wen Li & Limin Wang (ETH Zurich)
|Participant Presentation by Malong AI Research
|Participant Presentation by SHTU_SIST
|Invited Talk: Exploiting Noisy Web Data for Large-scale Visual Recognition, Lamberto Ballan (Stanford University & University of Padova)
|Participant Presentation by VISTA
|Participant Presentation by CRCV
|Invited Talk: Towards Web-scale Video Understanding, Olga Russakovsky (Princeton University)
|Award Session & Closing Remarks
Researchers are invited to participate in the WebVision challenge, which aims to advance the learning of useful knowledge and effective representations from noisy web images and their meta information. The learned knowledge and representations can then be used to solve vision problems. In particular, we organize two tasks to evaluate the learned knowledge and representations: (1) WebVision Image Classification Task, and (2) Pascal VOC Transfer Learning Task. The second task is built upon the first. Researchers can participate in the first task only, or in both tasks.
The WebVision dataset is composed of training, validation, and test sets. The training set is downloaded from the web without any human annotation. The validation and test sets are human-annotated; the labels of the validation data are provided, while those of the test data are withheld. To imitate the setting of learning from web data, participants are required to learn their models solely on the training set and submit classification results on the test set. In this sense, the validation data and labels may only be used to tune hyper-parameters and cannot be used to learn the model weights.
This task is designed to verify the knowledge and representations learned from the WebVision training set on a new task. Hence, participants are required to submit results to the first task and to transfer only the models learned in that task. We choose the image classification task of Pascal VOC to test transfer learning performance. Participants can exploit different ways of transferring the knowledge learned in the first task to perform image classification on Pascal VOC, for example, treating the learned models as feature extractors and training SVM classifiers on the extracted features. The evaluation protocol strictly follows that of the previous Pascal VOC challenges.
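The feature-extractor approach mentioned above can be sketched as follows. This is a minimal illustration with scikit-learn, not the required pipeline: in practice the features would come from the penultimate layer of a network trained on the WebVision training set, whereas here random arrays stand in for them, and all sizes are placeholders.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Stand-in for features extracted from a network trained on WebVision;
# in a real pipeline these would be penultimate-layer activations.
rng = np.random.RandomState(0)
n_train, n_test, dim, n_classes = 200, 50, 128, 20  # Pascal VOC has 20 classes

train_features = rng.randn(n_train, dim)
train_labels = rng.randint(0, n_classes, size=n_train)
test_features = rng.randn(n_test, dim)

# Linear SVMs (one-vs-rest) trained on the fixed features; the network
# weights are not updated, only the classifier is learned.
clf = LinearSVC(C=1.0)
clf.fit(train_features, train_labels)
predictions = clf.predict(test_features)
print(predictions.shape)  # one predicted class per test image
```

Keeping the features fixed and learning only a linear classifier is a common way to measure how transferable the learned representation is, since the classifier itself adds little capacity.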
The WebVision dataset provides the web images and their corresponding meta information (e.g., query, title, comments, etc.); more information can be found on the dataset page. Learning from web data poses several challenges, and participants are encouraged to design new methods to address them.
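The meta information accompanying each image can be read as ordinary JSON. The sketch below only illustrates handling a record with the fields named above (query, title, comments); the actual file layout and field names may differ, so consult the dataset README for the real schema.

```python
import json

# Hypothetical metadata record mirroring the fields mentioned above;
# the real WebVision schema may differ.
record = {
    "query": "golden retriever",
    "title": "My dog playing in the park",
    "comments": ["so cute!", "what breed is this?"],
}

# Round-trip through JSON, as one would when reading a metadata file.
loaded = json.loads(json.dumps(record))
print(loaded["query"])
print(len(loaded["comments"]))
```

Such textual fields are noisy (the query used to crawl an image need not describe its content), which is precisely the kind of signal participants may try to exploit or filter.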
A poster session will be held at the workshop. The goal is to provide a stimulating space for researchers to share their work with scientific peers. We welcome researchers to submit their recent work on any topic related to learning from web data.
|Challenge Submissions Deadline
|June 30, 2017
|Challenge Award Notification
|July 10, 2017
|Paper Submission Deadline
|July 2, 2017
|Paper Acceptance Notification
|July 3, 2017
|Paper Camera-Ready Deadline
|July 15, 2017
|Workshop date (co-located with CVPR'17)
|July 26, 2017
All deadlines are at 23:59 Pacific Standard Time.