June 19, 2022, New Orleans, US

NTIRE 2022

New Trends in Image Restoration and Enhancement workshop

and challenges on image and video processing

in conjunction with CVPR 2022

Sponsors (TBU)




Call for papers

Image restoration, enhancement, and manipulation are key computer vision tasks that aim to restore degraded image content, fill in missing information, or transform and/or manipulate an image to achieve a desired target (with respect to perceptual quality, content, or the performance of applications operating on such images). Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but substantial progress has also been achieved.

Each step forward eases the use of images by people or computers for the fulfillment of further tasks, as image restoration, enhancement, and manipulation serve as an important front end. Not surprisingly, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, and medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

This workshop builds upon the success of the previous NTIRE editions at CVPR 2017, 2018, 2019, 2020, and 2021 and at ACCV 2016. Moreover, it relies on all the people associated with the CLIC 2018, 2019, 2020, and 2021, PIRM 2018, AIM 2019, 2020, and 2021, Mobile AI 2021, and NTIRE events, such as organizers, PC members, distinguished speakers, authors of published papers, challenge participants, and winning teams.

Papers addressing topics related to image restoration, enhancement, and manipulation are invited. The topics include, but are not limited to:

  • Image/video inpainting
  • Image/video deblurring
  • Image/video denoising
  • Image/video upsampling and super-resolution
  • Image/video filtering
  • Image/video de-hazing, de-raining, de-snowing, etc.
  • Demosaicing
  • Image/video compression
  • Removal of artifacts, shadows, glare and reflections, etc.
  • Image/video enhancement: brightening, color adjustment, sharpening, etc.
  • Style transfer
  • Hyperspectral imaging
  • Underwater imaging
  • Methods robust to changing weather conditions / adverse outdoor conditions
  • Image/video restoration, enhancement, manipulation on constrained settings
  • Visual domain translation
  • Multimodal translation
  • Perceptual enhancement
  • Perceptual manipulation
  • Depth estimation
  • Image/video generation and hallucination
  • Image/video quality assessment
  • Image/video semantic segmentation
  • Saliency and gaze estimation
  • Aerial and satellite imaging restoration, enhancement, manipulation
  • Studies and applications of the above.

NTIRE 2022 has the following associated groups of challenges:

  • image challenges
  • video/multi-frame challenges

The authors of the top methods in each category will be invited to submit papers to the NTIRE 2022 workshop.

The authors of the top methods will co-author the challenge reports.

The accepted NTIRE workshop papers will be published under the book title "CVPR 2022 Workshops" by the Computer Vision Foundation Open Access and the IEEE Xplore Digital Library.

For those with a keen interest in efficiency and the deployment of solutions on mobile devices, we refer to the Mobile AI 2022 workshop and challenges co-organized at CVPR 2022.



Contact:

Radu Timofte, radu.timofte@vision.ee.ethz.ch

Computer Vision Laboratory, ETH Zurich, Switzerland

Chair for Computer Vision, University of Wurzburg, Germany

NTIRE 2022 image challenges

To learn more, register, access the data, and participate in the challenge(s) of interest, check the corresponding CodaLab competition(s).

Important dates



Challenges (all deadlines at 5PM Pacific Time)

  • Site online: January 17, 2022
  • Release of train and validation data: January 21, 2022
  • Validation server online: January 31, 2022
  • Final test data release, validation server closed: March 23, 2022 (EXTENDED)
  • Test restoration results submission deadline: March 30, 2022 (EXTENDED)
  • Fact sheets, code/executable submission deadline: March 30, 2022 (EXTENDED)
  • Preliminary test results release to the participants: April 1, 2022 (EXTENDED)
  • Paper submission deadline for entries from the challenges: April 13, 2022 (EXTENDED)

Workshop (all deadlines at 11:59PM Pacific Time)

  • Paper submission server online: January 21, 2022
  • Paper submission deadline: March 20, 2022 (EXTENDED)
  • Paper submission deadline (only for methods from NTIRE 2022 challenges or papers reviewed elsewhere): April 13, 2022 (EXTENDED)
  • Paper decision notification: April 15, 2022 (EXTENDED)
  • Camera-ready deadline: April 19, 2022 (EXTENDED)
  • Workshop day: June 19, 2022

Submit



Instructions and Policies
Format and paper length

A paper submission must be written in English, in PDF format, and at most 8 pages (excluding references) in double-column format. The paper format must follow the same guidelines as all CVPR 2022 submissions:
https://cvpr2022.thecvf.com/author-guidelines

Double-blind review policy

The review process is double-blind: authors do not know the names of the chairs/reviewers of their papers, and reviewers do not know the names of the authors.

Dual submission policy

Dual submission is allowed with the CVPR 2022 main conference only. If a paper is submitted to both CVPR and the workshop and is accepted at CVPR, it cannot be published at both venues.

Submission site

https://cmt3.research.microsoft.com/NTIRE2022

Proceedings

Accepted and presented papers will be published after the conference in the CVPR Workshops proceedings, together with the CVPR 2022 main conference papers.

Author Kit

https://cvpr2022.thecvf.com/sites/default/files/2021-10/cvpr2022-author_kit-v1_1-1.zip
The author kit provides a LaTeX2e template for paper submissions. Please refer to the kit for detailed formatting instructions.
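As a rough sketch, a skeleton built on the author kit might look like the following. The package name, options, and bibliography style shown here are assumptions based on typical recent CVPR kits; verify them against the files in the kit linked above.

```latex
% Minimal submission skeleton -- package options and style names are
% assumptions from typical CVPR author kits; check the downloaded kit.
\documentclass[10pt,twocolumn,letterpaper]{article}
\usepackage[review]{cvpr}  % switch [review] to [final] for camera-ready

\begin{document}

\title{Your NTIRE 2022 Submission Title}
\author{Anonymous NTIRE 2022 submission}  % kept anonymous for double-blind review
\maketitle

\begin{abstract}
  Abstract text. The 8-page limit excludes references.
\end{abstract}

\section{Introduction}
% ...

{\small
\bibliographystyle{ieee_fullname}  % style shipped with recent CVPR kits
\bibliography{refs}
}

\end{document}
```

Keeping the `review` option active until acceptance ensures page numbers and anonymization behave as the double-blind policy requires.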

People



Organizers (TBU)

  • Radu Timofte, University of Wurzburg & ETH Zurich,
  • Luc Van Gool, KU Leuven & ETH Zurich,
  • Ming-Hsuan Yang, University of California at Merced & Google,
  • Kyoung Mu Lee, Seoul National University,
  • Eli Shechtman, Creative Intelligence Lab at Adobe Research,
  • Martin Danelljan, ETH Zurich,
  • Shuhang Gu, OPPO & University of Sydney,
  • Lei Zhang, Alibaba / Hong Kong Polytechnic University
  • Michael Brown, York University
  • Kai Zhang, ETH Zurich
  • Goutam Bhat, ETH Zurich
  • Chao Dong, SIAT
  • Cosmin Ancuti, UCL
  • Codruta Ancuti, University Politehnica Timisoara
  • Eduardo Perez Pellitero, Huawei Noah's Ark Lab
  • Ales Leonardis, Huawei Noah's Ark Lab & University of Birmingham
  • Andreas Lugmayr, ETH Zurich
  • Jinjin Gu, University of Sydney
  • Ren Yang, ETH Zurich
  • Andres Romero, ETH Zurich
  • Egor Ershov, IITP RAS
  • Marko Subasic, University of Zagreb
  • Boaz Arad, Voyage81
  • Yawei Li, ETH Zurich
  • Yulun Zhang, ETH Zurich
  • Yulan Guo, National University of Defense Technology
  • Siavash Arjomand Bigdeli, CSEM


PC Members (TBU)

  • Mahmoud Afifi, Apple
  • Codruta Ancuti, UPT
  • Cosmin Ancuti, Polytechnic University of Timisoara
  • Boaz Arad, Ben-Gurion University of the Negev
  • Siavash Arjomand Bigdeli, CSEM
  • Nick Barnes, Australian National University
  • Michael S. Brown, York University
  • Chia-Ming Cheng, MediaTek
  • Cheng-Ming Chiang, MediaTek
  • Martin Danelljan, ETH Zurich
  • Christophe De Vleeschouwer, Université Catholique de Louvain
  • Tali Dekel, Weizmann Institute of Science
  • Chao Dong, SIAT
  • Weisheng Dong, Xidian University
  • Touradj Ebrahimi, EPFL
  • Graham Finlayson, University of East Anglia
  • Corneliu Florea, University Politechnica of Bucharest
  • Peter Gehler, Amazon
  • Bastian Goldluecke, University of Konstanz
  • Shuhang Gu, OPPO & University of Sydney
  • Christine Guillemot, INRIA
  • Felix Heide, Princeton University & Algolux
  • Chiu Man Ho, OPPO,
  • Hiroto Honda, Mobility Technologies Co Ltd.
  • Zhe Hu, Hikvision Research
  • Andrey Ignatov, ETH Zurich
  • Sing Bing Kang, Zillow Group
  • Aggelos Katsaggelos, Northwestern University
  • Vivek Kwatra, Google
  • Samuli Laine, NVIDIA
  • Jean-Francois Lalonde, Laval University
  • Christian Ledig, VideaHealth
  • Seungyong Lee, POSTECH
  • Suyoung Lee, Seoul National University
  • Kyoung Mu Lee, Seoul National University
  • Victor Lempitsky, Skoltech & Samsung
  • Ales Leonardis, Huawei Noah's Ark Lab & University of Birmingham
  • Juncheng Li, The Chinese University of Hong Kong
  • Yawei Li, ETH Zurich
  • Stephen Lin, Microsoft Research
  • Ming-Yu Liu, NVIDIA Research
  • Chen Change Loy, Chinese University of Hong Kong
  • Guo Lu, Beijing Institute of Technology
  • Vladimir Lukin, National Aerospace University
  • Kede Ma, City University of Hong Kong
  • Vasile Manta, Technical University of Iasi
  • Rafal Mantiuk, University of Cambridge
  • Zibo Meng, OPPO
  • Rafael Molina, University of Granada
  • Yusuke Monno, Tokyo Institute of Technology
  • Hajime Nagahara, Osaka University
  • Vinay P. Namboodiri, IIT Kanpur
  • Federico Perazzi, Bending Spoons
  • Fatih Porikli, Qualcomm CR&D
  • Wenqi Ren, Chinese Academy of Sciences
  • Antonio Robles-Kelly, Deakin University
  • Andres Romero, ETH Zurich
  • Aline Roumy, INRIA
  • Yoichi Sato, University of Tokyo
  • Yoav Y. Schechner, Technion, Israel
  • Christopher Schroers, Disney Research | Studios
  • Nicu Sebe, University of Trento
  • Eli Shechtman, Creative Intelligence Lab at Adobe Research
  • Gregory Slabaugh, Queen Mary University of London
  • Sabine Süsstrunk, EPFL
  • Yu-Wing Tai, Kuaishou Technology & HKUST
  • Masayuki Tanaka, Tokyo Institute of Technology
  • Hao Tang, ETH Zurich
  • Jean-Philippe Tarel, IFSTTAR, France
  • Christian Theobalt, MPI Informatik
  • Qi Tian, Huawei Cloud & AI
  • Radu Timofte, University of Wurzburg & ETH Zurich
  • George Toderici, Google
  • Luc Van Gool, ETH Zurich & KU Leuven
  • Jue Wang, Tencent
  • Longguang Wang, National University of Defense Technology
  • Oliver Wang, Adobe Systems Inc
  • Ting-Chun Wang, NVIDIA
  • Yingqian Wang, National University of Defense Technology
  • Ming-Hsuan Yang, University of California at Merced & Google
  • Ren Yang, ETH Zurich
  • Wenjun Zeng, Microsoft Research
  • Kai Zhang, ETH Zurich
  • Richard Zhang, UC Berkeley & Adobe Research
  • Yulun Zhang, ETH Zurich
  • Ruofan Zhou, EPFL
  • Jun-Yan Zhu, Carnegie Mellon University
  • Wangmeng Zuo, Harbin Institute of Technology

Invited Talks (TBA)



Schedule (TBA)


A subset of the accepted NTIRE workshop papers will also have oral presentations.
All accepted NTIRE workshop papers will be published under the book title "2022 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops" by the Computer Vision Foundation Open Access and the IEEE Xplore Digital Library.