June 19, New Orleans, US (Hybrid)

NTIRE 2022

New Trends in Image Restoration and Enhancement workshop

and challenges on image and video processing

in conjunction with CVPR 2022

Join the NTIRE 2022 workshop on Zoom for live talks, Q&A, and interaction.

The event starts on 19.06.2022 at 8:00 CDT / 13:00 UTC / 21:00 China time.
Check the NTIRE 2022 schedule.
No registration required.


Call for papers

Image restoration, enhancement and manipulation are key computer vision tasks that aim to restore degraded image content, fill in missing information, or transform and/or manipulate an image to achieve a desired target (with respect to perceptual quality, content, or the performance of applications working on such images). Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but substantial progress has also been achieved.

Each step forward eases the use of images by people or computers for further tasks, as image restoration, enhancement and manipulation serve as an important frontend. Not surprisingly, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, and medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

This workshop builds upon the success of the previous NTIRE editions: at CVPR 2017, 2018, 2019, 2020, 2021 and at ACCV 2016. Moreover, it relies on all the people associated with the CLIC 2018, 2019, 2020, 2021, PIRM 2018, AIM 2019, 2020, 2021, Mobile AI 2021 and NTIRE events, such as organizers, PC members, distinguished speakers, authors of published papers, challenge participants and winning teams.

Papers addressing topics related to image restoration, enhancement and manipulation are invited. The topics include, but are not limited to:

  • Image/video inpainting
  • Image/video deblurring
  • Image/video denoising
  • Image/video upsampling and super-resolution
  • Image/video filtering
  • Image/video de-hazing, de-raining, de-snowing, etc.
  • Demosaicing
  • Image/video compression
  • Removal of artifacts, shadows, glare and reflections, etc.
  • Image/video enhancement: brightening, color adjustment, sharpening, etc.
  • Style transfer
  • Hyperspectral imaging
  • Underwater imaging
  • Methods robust to changing weather conditions / adverse outdoor conditions
  • Image/video restoration, enhancement, manipulation on constrained settings
  • Visual domain translation
  • Multimodal translation
  • Perceptual enhancement
  • Perceptual manipulation
  • Depth estimation
  • Image/video generation and hallucination
  • Image/video quality assessment
  • Image/video semantic segmentation
  • Saliency and gaze estimation
  • Aerial and satellite imaging restoration, enhancement, manipulation
  • Studies and applications of the above.

NTIRE 2022 has the following associated groups of challenges:

  • image challenges
  • video/multi-frame challenges

The authors of the top methods in each category will be invited to submit papers to the NTIRE 2022 workshop.

The authors of the top methods will co-author the challenge reports.

The accepted NTIRE workshop papers will be published under the book title "CVPR 2022 Workshops" by the Computer Vision Foundation Open Access and the IEEE Xplore Digital Library.

For those with a keen interest in restoration, enhancement and manipulation, or in the efficiency and deployment of solutions on mobile devices, we refer to the Mobile AI 2022 workshop and challenges co-organized at CVPR 2022 and the Advances in Image Manipulation (AIM) workshop and challenges co-organized at ECCV 2022.


Prof. Dr. Radu Timofte, radu.timofte@uni-wuerzburg.de

Chair for Computer Vision, University of Wurzburg, Germany

Computer Vision Laboratory, ETH Zurich, Switzerland

Important dates

Challenges (deadlines always at 5:00 PM Pacific Time)

  • Site online: January 17, 2022
  • Release of train data and validation data: January 21, 2022
  • Validation server online: January 31, 2022
  • Final test data release, validation server closed: March 23, 2022 (EXTENDED)
  • Test restoration results submission deadline: March 30, 2022 (EXTENDED)
  • Fact sheets, code/executable submission deadline: March 30, 2022 (EXTENDED)
  • Preliminary test results release to the participants: April 1, 2022 (EXTENDED)
  • Paper submission deadline for entries from the challenges: April 13, 2022 (EXTENDED)

Workshop (deadlines always at 11:59 PM Pacific Time)

  • Paper submission server online: January 21, 2022
  • Paper submission deadline: March 20, 2022 (EXTENDED)
  • Paper submission deadline (only for methods from NTIRE 2022 challenges or papers reviewed elsewhere!): April 13, 2022 (EXTENDED)
  • Paper decision notification: April 15, 2022 (EXTENDED)
  • Camera-ready deadline: April 19, 2022 (EXTENDED)
  • Workshop day: June 19, 2022 (hybrid)


Instructions and Policies
Format and paper length

A paper submission must be in English, in PDF format, and at most 8 pages (excluding references) in double-column format. Papers must follow the same formatting guidelines as all CVPR 2022 submissions.

Double-blind review policy

The review process is double blind. Authors do not know the names of the chair/reviewers of their papers. Reviewers do not know the names of the authors.

Dual submission policy

Dual submission is allowed with the CVPR 2022 main conference only. If a paper is also submitted to CVPR and accepted there, it cannot be published at both CVPR and the workshop.

Submission site



Accepted and presented papers will be published after the conference in CVPR Workshops proceedings together with the CVPR2022 main conference papers.

Author Kit

The author kit provides a LaTeX2e template for paper submissions. Please refer to the kit for detailed formatting instructions.


Organizers (TBU)

  • Radu Timofte, University of Wurzburg & ETH Zurich
  • Luc Van Gool, KU Leuven & ETH Zurich
  • Ming-Hsuan Yang, University of California at Merced & Google
  • Kyoung Mu Lee, Seoul National University
  • Eli Shechtman, Creative Intelligence Lab at Adobe Research
  • Martin Danelljan, ETH Zurich
  • Shuhang Gu, OPPO & University of Sydney
  • Lei Zhang, Alibaba / Hong Kong Polytechnic University
  • Michael Brown, York University
  • Kai Zhang, ETH Zurich
  • Goutam Bhat, ETH Zurich
  • Chao Dong, SIAT
  • Cosmin Ancuti, UCL
  • Codruta Ancuti, University Politehnica Timisoara
  • Eduardo Perez Pellitero, Huawei Noah's Ark Lab
  • Ales Leonardis, Huawei Noah's Ark Lab & University of Birmingham
  • Andreas Lugmayr, ETH Zurich
  • Jinjin Gu, University of Sydney
  • Ren Yang, ETH Zurich
  • Andres Romero, ETH Zurich
  • Egor Ershov, IITP RAS
  • Marko Subasic, University of Zagreb
  • Boaz Arad, Voyage81
  • Yawei Li, ETH Zurich
  • Yulun Zhang, ETH Zurich
  • Yulan Guo, National University of Defense Technology
  • Siavash Arjomand Bigdeli, CSEM

PC Members (TBU)

  • Mahmoud Afifi, Apple
  • Codruta Ancuti, UPT
  • Cosmin Ancuti, Polytechnic University of Timisoara
  • Boaz Arad, Ben-Gurion University of the Negev
  • Siavash Arjomand Bigdeli, CSEM
  • Nick Barnes, Australian National University
  • Michael S. Brown, York University
  • Chia-Ming Cheng, MediaTek
  • Cheng-Ming Chiang, MediaTek
  • Martin Danelljan, ETH Zurich
  • Christophe De Vleeschouwer, Université Catholique de Louvain
  • Tali Dekel, Weizmann Institute of Science
  • Chao Dong, SIAT
  • Weisheng Dong, Xidian University
  • Touradj Ebrahimi, EPFL
  • Graham Finlayson, University of East Anglia
  • Corneliu Florea, University Politehnica of Bucharest
  • Peter Gehler, Amazon
  • Bastian Goldluecke, University of Konstanz
  • Shuhang Gu, OPPO & University of Sydney
  • Christine Guillemot, INRIA
  • Felix Heide, Princeton University & Algolux
  • Chiu Man Ho, OPPO
  • Hiroto Honda, Mobility Technologies Co Ltd.
  • Zhe Hu, Hikvision Research
  • Andrey Ignatov, ETH Zurich
  • Sing Bing Kang, Zillow Group
  • Aggelos Katsaggelos, Northwestern University
  • Vivek Kwatra, Google
  • Samuli Laine, NVIDIA
  • Jean-Francois Lalonde, Laval University
  • Christian Ledig, VideaHealth
  • Seungyong Lee, POSTECH
  • Suyoung Lee, Seoul National University
  • Kyoung Mu Lee, Seoul National University
  • Victor Lempitsky, Skoltech & Samsung
  • Ales Leonardis, Huawei Noah's Ark Lab & University of Birmingham
  • Juncheng Li, The Chinese University of Hong Kong
  • Yawei Li, ETH Zurich
  • Stephen Lin, Microsoft Research
  • Ming-Yu Liu, NVIDIA Research
  • Chen Change Loy, Chinese University of Hong Kong
  • Guo Lu, Beijing Institute of Technology
  • Vladimir Lukin, National Aerospace University
  • Kede Ma, City University of Hong Kong
  • Vasile Manta, Technical University of Iasi
  • Rafal Mantiuk, University of Cambridge
  • Zibo Meng, OPPO
  • Rafael Molina, University of Granada
  • Yusuke Monno, Tokyo Institute of Technology
  • Hajime Nagahara, Osaka University
  • Vinay P. Namboodiri, IIT Kanpur
  • Federico Perazzi, Bending Spoons
  • Fatih Porikli, Qualcomm CR&D
  • Wenqi Ren, Chinese Academy of Sciences
  • Antonio Robles-Kelly, Deakin University
  • Andres Romero, ETH Zurich
  • Aline Roumy, INRIA
  • Yoichi Sato, University of Tokyo
  • Yoav Y. Schechner, Technion, Israel
  • Christopher Schroers, Disney Research | Studios
  • Nicu Sebe, University of Trento
  • Eli Shechtman, Creative Intelligence Lab at Adobe Research
  • Gregory Slabaugh, Queen Mary University of London
  • Sabine Süsstrunk, EPFL
  • Yu-Wing Tai, Kuaishou Technology & HKUST
  • Masayuki Tanaka, Tokyo Institute of Technology
  • Hao Tang, ETH Zurich
  • Jean-Philippe Tarel, IFSTTAR, France
  • Christian Theobalt, MPI Informatik
  • Qi Tian, Huawei Cloud & AI
  • Radu Timofte, University of Wurzburg & ETH Zurich
  • George Toderici, Google
  • Luc Van Gool, ETH Zurich & KU Leuven
  • Jue Wang, Tencent
  • Longguang Wang, National University of Defense Technology
  • Oliver Wang, Adobe Systems Inc
  • Ting-Chun Wang, NVIDIA
  • Yingqian Wang, National University of Defense Technology
  • Ming-Hsuan Yang, University of California at Merced & Google
  • Ren Yang, ETH Zurich
  • Wenjun Zeng, Microsoft Research
  • Kai Zhang, ETH Zurich
  • Richard Zhang, UC Berkeley & Adobe Research
  • Yulun Zhang, ETH Zurich
  • Ruofan Zhou, EPFL
  • Jun-Yan Zhu, Carnegie Mellon University
  • Wangmeng Zuo, Harbin Institute of Technology

Invited Talks

Michael Elad

Technion, Israel Institute of Technology

Title: The New Era of Image Denoising

Abstract: Image denoising is one of the oldest and most studied problems in image processing. Extensive work over several decades has led to thousands of papers on this subject, and to many well-performing algorithms for this task. As expected, the era of deep learning has brought yet another revolution to this subfield and has taken the lead in today's ability to suppress noise in images. This talk focuses on recently discovered abilities and opportunities of image denoisers. We expose the possibility of using image denoisers to serve other problems, such as regularizing general inverse problems and serving as the engine for image synthesis. We also unveil the (strange?) idea that denoising and other inverse problems might not have a unique solution, as common algorithms would have you believe. Instead, we describe constructive ways to produce randomized and diverse high perceptual quality results for inverse problems.
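One of the opportunities mentioned above, using a denoiser to regularize a general inverse problem, can be illustrated with a minimal RED-style (regularization by denoising) sketch. The box-filter "denoiser", the quadratic data term, and all parameter values below are illustrative assumptions for a 1-D toy problem, not the algorithms presented in the talk.

```python
import numpy as np

def box_denoiser(x, k=5):
    # Toy "denoiser": a simple moving-average (box) filter.
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def red_restore(y, lam=1.0, mu=0.2, n_iters=100):
    # RED-style gradient iteration for
    #   min_x 1/2 ||x - y||^2 + lam/2 * x^T (x - D(x)),
    # whose gradient (for a well-behaved denoiser D) is
    #   (x - y) + lam * (x - D(x)).
    x = y.copy()
    for _ in range(n_iters):
        grad = (x - y) + lam * (x - box_denoiser(x))
        x = x - mu * grad
    return x

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 256)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
restored = red_restore(noisy)
print(np.mean((noisy - clean) ** 2), np.mean((restored - clean) ** 2))
```

The same loop structure accepts any denoiser in place of `box_denoiser`, which is what makes the denoising engine reusable across different inverse problems.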

Bio: Michael Elad holds a B.Sc. (1986), M.Sc. (1988) and D.Sc. (1997) in Electrical Engineering from the Technion in Israel. Since 2003 he has held a faculty position in the Computer Science department at the Technion. Since February 2022 Prof. Elad has been on sabbatical, managing the research activity at Verily Israel. Prof. Elad works in the field of signal and image processing, specializing in inverse problems, sparse representations and deep learning. He has authored hundreds of publications in leading venues, many of which have had exceptional impact. Prof. Elad has served as an Associate Editor for IEEE-TIP, IEEE-TIT, ACHA, SIAM Imaging Sciences (SIIMS) and IEEE-SPL, and during 2016-2021 he served as the Editor-in-Chief of SIIMS. Michael has received numerous teaching and research awards and grants, including an ERC Advanced Grant in 2013, the 2008 and 2015 Henri Taub Prizes for academic excellence, the 2010 Hershel Rich Prize for innovation, the 2018 IEEE SPS Technical Achievement Award for contributions to sparsity-based signal processing, the 2018 IEEE SPS Sustained Impact Paper Award for his K-SVD paper, and the 2018 SPS Best Paper Award for his paper on the Analysis K-SVD. He has been an IEEE Fellow since 2012 and a SIAM Fellow since 2018.

Rakesh Ranjan and Harshit Khaitan

Meta/Facebook - Reality Labs

Title: Energy Efficient Image Restoration for Augmented Reality

Abstract: AR devices will be one of our gateways into the Metaverse. They need to capture, augment and replay the real and the virtual world, all in a very energy-efficient manner. In this talk we will present the system constraints that these devices face and how to mitigate them by designing energy-efficient AI-based image and video processing. While much of the literature in this area measures progress in efficiency by counting parameters and MACs, in this talk we make a case for energy efficiency as a holistic goal and present model architecture choices to achieve it.

Bio: Rakesh Ranjan is a Senior Research Scientist Manager at Facebook Reality Labs. Rakesh and his team pursue research in the areas of AI-based low-level computer vision, 3D reconstruction and scene understanding for Augmented and Virtual Reality devices. Prior to Facebook, Rakesh was a Research Scientist at Nvidia, where he worked on AI for real-time graphics (DLSS) and AI for cloud gaming (GeForce Now). Rakesh also spent 5 years at Intel Research as a PhD student and full-time researcher.
Harshit Khaitan is the Director of AI Accelerator at Meta, where he leads the building of AI accelerators for Reality Labs products. Prior to Meta, he was technical lead and co-founder of the edge machine learning accelerators at Google, responsible for the MLA in Google Pixel 4 (Neural Core) and Pixel 6 (Google Tensor SoC). He has also held individual and technical leadership positions on Google's first Cloud TPU, Nvidia Tegra SoCs and Nvidia GPUs. He holds 10+ US and international patents in on-device AI acceleration. He has a Master's degree in Computer Engineering from North Carolina State University and a Bachelor's degree in Electrical Engineering from the Manipal Institute of Technology, India.

Zhou Wang

University of Waterloo

Title: Image Restoration – Puzzles in Performance Evaluation and Problem Formulation

Abstract: Image restoration/enhancement is a long-standing research topic that has seen continuously renewed interest over the past decades. While a large number of image restoration methods have been developed for a wide variety of applications, somewhat surprisingly, limited thought has been put into clearly defining the performance evaluation criterion or accurately formulating the image restoration problem. In this talk, we share some of the puzzles we have for open discussion. Specifically, image restoration algorithms are often evaluated and compared either by some quality metric of the restored images or by certain signal fidelity/distortion measures between the original and restored images, but we argue that neither of them is precisely the desired target for image restoration. While some compromise between the two might be a better option, there is a lack of clean theory or methodology to determine the balancing point. Recently, there has been some interesting discussion on the idea of the “perception-distortion tradeoff”, aiming to find a theoretical compromise between quality (perception) and distortion, but here quality (perception) is defined by the divergence between the distributions of the original and restored images, which makes the problem even more perplexing. We hope this talk can invite some insightful discussions, which may help direct the future development of image restoration.
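For reference, the "perception-distortion tradeoff" mentioned above has a standard formulation (due to Blau and Michaeli, CVPR 2018); the notation below is the usual one and is not taken from the talk. With X the original image, Y its degraded observation, and X̂ the restored estimate:

```latex
% Perception-distortion function: the best achievable perceptual
% quality (distributional divergence) at distortion budget D.
P(D) \;=\; \min_{p_{\hat{X}\mid Y}} \; d\!\left(p_X,\, p_{\hat{X}}\right)
\quad \text{subject to} \quad
\mathbb{E}\!\left[\Delta(X, \hat{X})\right] \le D
```

Here Δ is a full-reference distortion measure and d a divergence between image distributions; the talk's point is that neither the expected distortion alone nor the divergence alone is the right target, and that defining quality through d is part of what makes the problem perplexing.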

Bio: Zhou Wang is a Canada Research Chair and Professor in the Department of Electrical and Computer Engineering, University of Waterloo. His research interests include image/video/multimedia processing, coding, communication, computational vision, and machine learning, with focuses on perceptual quality assessment and perceptually motivated processing. He has more than 200 publications in the fields with over 80,000 citations based on Google Scholar statistics. Dr. Wang is a Fellow of IEEE, a Fellow of Royal Society of Canada - Academy of Science, and a Fellow of Canadian Academy of Engineering. He is a recipient of 2014 Steacie Memorial Fellowship, and several paper awards by IEEE Signal Processing Society. He is also a two-time recipient of Engineering/Technology Emmy Awards, one in 2015 as an individual, and the other in 2021 by SSIMWAVE Inc. of which he is the Chief Scientist.

Richard Zhang

Adobe Research

Title: Anycost and Any-resolution Image Synthesis

Abstract: Generative models, such as GANs, have proliferated and enabled photorealistic image synthesis and editing. However, in some respects, they remain limited. First, due to the computational cost of high-quality generators, it takes seconds to see the results of a single edit, prohibiting interactive user experience. Inspired by modern rendering software, we propose AnycostGAN, a single generator that supports elastic computation. The model can be executed at various cost budgets (up to 10x computation reduction), providing “fast interactive previews” during user interaction. Secondly, scaling models to higher resolutions (beyond 1024) remains challenging. We observe that training pipelines currently operate on fixed-resolution datasets, even though natural images come in a variety of sizes. We argue that every pixel matters and construct datasets at their native resolutions. To take advantage of such data, we introduce continuous-scale training in our method, Any-resolution GANs. We demonstrate generation beyond 2k resolution on a variety of datasets. Together, these works add flexibility in computation and resolution to the generative modeling process -- both at the data curation stage for training efficiency and at execution time for improved user experiences.

Bio: Richard Zhang is a Senior Research Scientist at Adobe Research, with interests in computer vision, deep learning, machine learning, and graphics. He obtained his PhD in EECS, advised by Professor Alexei A. Efros, at UC Berkeley in 2018. He graduated summa cum laude with BS and MEng degrees from Cornell University in ECE. He is a recipient of the 2017 Adobe Research Fellowship. At Adobe, in addition to collaborating with student interns, he has helped bring synthesis technologies to products, such as Photoshop Neural Filters and Photoshop Elements. More information can be found on his webpage: http://richzhang.github.io/

Federico Perazzi

Bending Spoons

Title: Opening Remini AI - Image Restoration in Production

Abstract: Remini is a photo-enhancer app and a top seller on Google Play and the Apple App Store. It uses state-of-the-art image restoration techniques to jointly restore and upsample images. Remini shines on facial enhancement, being state of the art among approaches such as GPEN and GFPGAN. In this talk, we will discuss the challenges we faced and the solutions adopted to improve restoration quality since the app's acquisition a year ago. Beyond architectural and design choices, we will emphasize the importance of knowledge distillation, a Swiss Army knife for the productization of AI-powered features that improves quality while reducing latency.

Bio: Federico Perazzi is the head of AI at Bending Spoons, one of Europe's fastest-growing tech companies and a leading developer of mobile apps. Before joining Bending Spoons in Milan, Italy, Federico was a research scientist in the On-Device AI team at Facebook Reality Labs, working on image enhancement for Facebook's AI glasses. Prior to Facebook, he was a researcher in Adobe's Creative Intelligence Lab, where he co-authored several papers on denoising, generative models, and semantic image understanding. During his doctoral studies, Federico worked in the Imaging and Video Processing team at Disney Research Zurich, where he later remained as a postdoctoral researcher. He earned his Ph.D. in 2017 from ETH Zurich, with a dissertation on video object segmentation. Federico's current research endeavors lie at the intersection of computer graphics, computer vision, and machine learning. Highlights of his past research include co-designing the stitching algorithm for the Disney Parks attraction "Soarin' Around the World," enhancing facial details in Adobe Photoshop, and denoising Monte Carlo renderings in Adobe Dimension.

Kai Zhang

ETH Zurich

Title: Deep Blind Image Super-Resolution and Denoising for Practical Applications

Abstract: While recent years have witnessed a dramatic upsurge in exploiting deep neural networks for image super-resolution and denoising, existing methods mostly rely on simple degradation assumptions and do not generalize well to practical applications. In this talk, I will present a complex but practical degradation model to synthesize training data for deep blind image super-resolution and denoising models. I will then demonstrate that the new degradation model can significantly improve the practicability of deep blind image super-resolution and denoising models.
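The kind of degradation pipeline described above, synthesizing realistic low-quality training inputs from clean images, can be sketched as a chain of randomized blur, downsampling, and noise. The Gaussian kernel, scale factor, and noise ranges below are illustrative assumptions, not the degradation model from the talk.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    # Isotropic Gaussian blur kernel, normalized to sum to 1.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def blur(img, kernel):
    # Naive 2-D convolution with reflect padding (single-channel toy image).
    pad = kernel.shape[0] // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(
                padded[i:i + kernel.shape[0], j:j + kernel.shape[1]] * kernel
            )
    return out

def degrade(img, rng, scale=2):
    # Blur -> downsample -> additive noise, with randomized parameters
    # drawn per sample so the training degradations vary.
    x = blur(img, gaussian_kernel(sigma=rng.uniform(0.5, 2.5)))
    x = x[::scale, ::scale]                                 # simple decimation
    x = x + rng.normal(0, rng.uniform(0.01, 0.1), x.shape)  # Gaussian noise
    return np.clip(x, 0.0, 1.0)

rng = np.random.default_rng(0)
hr = rng.random((32, 32))    # stand-in for a high-resolution training crop
lr = degrade(hr, rng)
print(hr.shape, lr.shape)    # prints (32, 32) (16, 16)
```

Applying such a pipeline to clean high-resolution crops yields (LR, HR) training pairs whose degradations differ from sample to sample, which is what helps a blind model generalize beyond a single fixed degradation assumption.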

Bio: Kai Zhang is currently a postdoctoral researcher at the Computer Vision Lab, ETH Zurich, Switzerland, working with Prof. Luc Van Gool and Prof. Radu Timofte. He received his Ph.D. degree from the School of Computer Science and Technology, Harbin Institute of Technology, China, in 2019. He was a research assistant from July 2015 to July 2017 and from July 2018 to April 2019 in the Department of Computing of The Hong Kong Polytechnic University. His research interests include deep plug-and-play image restoration, deep unfolding image restoration, and blind image restoration.


Join the NTIRE 2022 Zoom meeting for live talks, Q&A, and interaction.
No registration required.

A subset of the accepted NTIRE workshop papers also have oral presentations.
All the accepted NTIRE workshop papers are published under the book title "2022 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops" by the Computer Vision Foundation Open Access and the IEEE Xplore Digital Library.

List of NTIRE 2022 papers (poster panel allocation)

papers (pdf, suppl. mat) available at https://openaccess.thecvf.com/CVPR2022_workshops/NTIRE

(Full Day Poster #86a) MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction
Yuanhao Cai (Tsinghua University, Tsinghua Shenzhen International Graduate School)*; Jing Lin (Tsinghua University, Tsinghua Shenzhen International Graduate School); Zudi Lin (Harvard University); Haoqian Wang (Tsinghua Shenzhen International Graduate School, Tsinghua University); Yulun Zhang (ETH Zurich); Hanspeter Pfister (Harvard University); Radu Timofte (University of Wurzburg & ETH Zurich); Luc Van Gool (ETH Zurich)
(Full Day Poster #87a) IMDeception: Grouped Information Distilling Super-Resolution Network
Mustafa Ayazoglu (Aselsan Research)*
(Full Day Poster #88a) Residual Local Feature Network for Efficient Super-Resolution
FangYuan Kong (ByteDance); Mingxi Li (ByteDance)*; Songwei Liu (ByteDance); Ding Liu (ByteDance); Jingwen He (ByteDance Inc); Bai Yang (ByteDance); Fangmin Chen (ByteDance); Lean Fu (ByteDance)
(Full Day Poster #89a) Edge-enhanced Feature Distillation Network for Efficient Super-Resolution
Yan Wang (Nankai University)*
(Full Day Poster #90a) NTIRE 2022 Challenge on Learning the Super-Resolution Space
Andreas Lugmayr (ETH Zurich)*; Martin Danelljan (ETH Zurich); Radu Timofte (University of Wurzburg & ETH Zurich)
[project] [poster] [slides]
(Full Day Poster #) Unpaired Real-World Super-Resolution with Pseudo Controllable Restoration
Andres Romero (ETH Zürich)*; Radu Timofte (University of Wurzburg & ETH Zurich); Luc Van Gool (ETH Zurich)
(Full Day Poster #92a) LAN: Lightweight Attention-Based Network for RAW-to-RGB Smartphone Image Processing
Daniel Wirzberger Raimundo (ETH Zurich)*; Andrey Ignatov (ETH Zurich); Radu Timofte (University of Wurzburg & ETH Zurich)
(Full Day Poster #93a) Efficient Image Super-Resolution with Collapsible Linear Blocks
Li Wang (Xilinx)*; Dong Li (Xilinx); Lu Tian (Xilinx, Inc.); Yi Shan (Xilinx)
(Full Day Poster #94a) A Lightweight Network for High Dynamic Range Imaging
Qingsen Yan (The University of Adelaide)*; Song Zhang (Xidian University); Weiye Chen (Xidian University); Yuhang Liu (The University of Adelaide); Zhen Zhang (University of Adelaide); Yanning Zhang (Northwestern Polytechnical University); Javen Qinfeng Shi (University of Adelaide); Dong Gong (The University of New South Wales)
(Full Day Poster #95a) Blueprint Separable Residual Network for Efficient Image Super-Resolution
Zheyuan Li (SIAT)*; Yingqi Liu (Shenzhen Institute of Advanced Technology); Xiangyu Chen (University of Macau); Haoming CAI (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences); Jinjin Gu (The University of Sydney); Yu Qiao (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences); Chao Dong (SIAT)
(Full Day Poster #96a) DRHDR: A Dual branch Residual Network for Multi-Bracket High Dynamic Range Imaging
Juan Marín-Vega (University of Southern Denmark)*; Michael MSL Sloth (Esoft); Peter Schneider-Kamp (SDU); Richard Röttger (University of Southern Denmark)
(Full Day Poster #97a) Fast and Memory-Efficient Network Towards Efficient Image Super-Resolution
Zongcai Du (Nanjing University)*; Ding Liu (ByteDance); Jie Liu (Nanjing University); Jie Tang (Nanjing University); Gangshan Wu (Nanjing University); Lean Fu (ByteDance)
(Full Day Poster #98a) NTIRE 2022 Spectral Recovery Challenge and Data Set
Boaz Arad (Ben-Gurion University of the Negev)*; Radu Timofte (University of Wurzburg & ETH Zurich); Rony Yahel (Voyage81); Nimrod Morag (Voyage81); Amir Bernat (Voyage81)
(Full Day Poster #99a) NTIRE 2022 Spectral Demosaicing Challenge and Data Set
Boaz Arad (Ben-Gurion University of the Negev)*; Radu Timofte (University of Wurzburg & ETH Zurich); Rony Yahel (Voyage81); Nimrod Morag (Voyage81); Amir Bernat (Voyage81)
(Full Day Poster #100a) Rendering Nighttime Image Via Cascaded Color and Brightness Compensation
Zhihao Li (Nanjing University)*; Yi Si (Nankai University); Zhan Ma (Nanjing University)
[poster][video] [project]
(Full Day Poster #101a) NTIRE 2022 Challenge on Stereo Image Super-Resolution: Methods and Results
Longguang Wang (National University of Defense Technology); Yulan Guo (National University of Defense Technology)*; Yingqian Wang (National University of Defense Technology ); Juncheng Li (The Chinese University of Hong Kong); Shuhang Gu (ETH Zurich, Switzerland); Radu Timofte (University of Wurzburg & ETH Zurich)
(Full Day Poster #102a) SwiniPASSR: Swin Transformer based Parallax Attention Network for Stereo Image Super-Resolution
Kai Jin (Bigo Technology Pte. Ltd.); Zeqiang Wei (Beijing University of Posts and Telecommunications ); Angulia Yang (Bigo Technology Pte. Ltd.); Sha Guo (Peking University); Mingzhi Gao (Bigo Technology Pte. Ltd.); Xiuzhuang Zhou (Beijing University of Posts and Telecommunications)*; Guodong Guo (IDL, Baidu Research)
[slides] [video]
(Full Day Poster #103a) Self-Calibrated Efficient Transformer for Lightweight Super-Resolution
Wenbin Zou (Fujian Normal University)*; Tian Ye (Jimei University); Weixin Zheng (Fuzhou University); Yunchen Zhang (China Design Group Ltd.Co); Liang Chen (Fujian Normal University); Yi Wu (Fujian Normal University)
(Full Day Poster #104a) Conformer and Blind Noisy Students for Improved Image Quality Assessment
Marcos V. Conde (University of Würzburg)*; Maxime Burchi (JMU-CVLab); Radu Timofte (University of Wurzburg & ETH Zurich)
(Full Day Poster #105a) NTIRE 2022 Challenge on Perceptual Image Quality Assessment
Jinjin Gu (The University of Sydney)*; Haoming CAI (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences); Chao Dong (SIAT); Jimmy Ren (SenseTime Research; Qing Yuan Research Institute, Shanghai Jiao Tong University); Radu Timofte (University of Wurzburg & ETH Zurich)
(Full Day Poster #106a) FS-NCSR: Increasing Diversity of the Super-Resolution Space via Frequency Separation and Noise-Conditioned Normalizing Flow
Ki-Ung Song (Seoul National University)*; Dongseok Shim (Seoul National University); Kang-wook Kim (Supertone Inc.); Jaeyoung Lee (Seoul National University); Younggeun Kim (MINDsLab Inc.)
(Full Day Poster #107a) Image Multi-inpainting via Progressive Generative Adversarial Networks
Jiayin Cai (Kuaishou)*; Changlin Li (Kuaishou); Xin Tao (Kuaishou); Yu-Wing Tai (Kuaishou Technology / HKUST)
(Full Day Poster #108a) Do What You Can, With What You Have: Scale-aware and High Quality Monocular Depth Estimation Without Real World Labels
Kunal Swami (Samsung Research India Bangalore and Indian Institute of Science)*; Amrit K Muduli (Samsung R & D Institute India - Bangalore); Uttam Gurram (Samsung Research Institute Bangalore); Pankaj Bajpai (Samsung R & D Institute India - Bangalore)
(Full Day Poster #109a) Blind Non-Uniform Motion Deblurring using Atrous Spatial Pyramid Deformable Convolution and Deblurring-Reblurring Consistency
Dong Huo (University of Alberta)*; Abbas Masoumzadeh (University of Alberta); Herbert Yang (University of Alberta)
(Full Day Poster #110a) Nonuniformly Dehaze Network for Visible Remote Sensing Images
Zhaojie Chen (Zhejiang University); Qi Li (Zhejiang University); Huajun Feng (Zhejiang University); Zhihai Xu (Zhejiang University); Yueting Chen (Zhejiang University)*
(Full Day Poster #111a) Transformer for Single Image Super-Resolution
Zhisheng Lu (Peking University Shenzhen Graduate School); Juncheng Li (The Chinese University of Hong Kong)*; Hong Liu (Peking University Shenzhen Graduate School); Chaoyan Huang (Nanjing University of Posts and Telecommunications); Linlin Zhang (Peking University Shenzhen Graduate School); Tieyong Zeng (The Chinese University of Hong Kong)
(Full Day Poster #112a) NL-FFC: Non-Local Fast Fourier Convolution for Image Super Resolution
Abhishek Kumar Sinha (Indian Space Research Organization)*; Manthira Moorthi S (ISRO); Debajyoti Dhar (ISRO)
(Full Day Poster #113a) Zoom-to-Inpaint: Image Inpainting with High-Frequency Details
Soo Ye Kim (KAIST); Kfir Aberman (Google); Nori Kanazawa (Google); Rahul Garg (Google); Neal Wadhwa (Google Inc.); Huiwen Chang (Google); Nikhil Karnad (Google Research); Munchurl Kim (Korea Advanced Institute of Science and Technology); Orly Liba (Google)*
(Full Day Poster #114a) Underwater Light Field Retention : Neural Rendering for Underwater Imaging
Sixiang Chen (Jimei University)*; Tian Ye (Jimei University); Erkang Chen (Jimei University); Yun Liu (Southwest University); Yuche Li (China University of Petroleum); Yi Ye (Jimei University)
(Full Day Poster #115a) Online Meta Adaptation for Variable-Rate Learned Image Compression
Wei Jiang (Alibaba Group)*; Wei Wang (Alibaba Group US); Songnan Li (Tencent); Shan Liu (Tencent America)
[poster][video 1m][video 10m]
(Full Day Poster #116a) Dual-Domain Image Synthesis using Segmentation-Guided GAN
Dena Bazazian (University of Bristol)*; Andrew Calway (University of Bristol); Dima Damen (University of Bristol)
(Full Day Poster #117a) Identity Preserving Loss for Learned Image Compression
Jiuhong Xiao (New York University)*; Lavisha Aggarwal (Amazon); Prithviraj Banerjee (Amazon); Manoj Aggarwal (Amazon); Gerard Medioni (USC)
(Full Day Poster #118a) A Closer Look at Blind Super-Resolution: Degradation Models, Baselines, and Performance Upper Bounds
Wenlong Zhang (The Hong Kong Polytechnic University)*; Guang-Yuan Shi (The Hong Kong Polytechnic University); Yihao Liu (University of Chinese Academy of Sciences); Chao Dong (SIAT); Xiao-Ming Wu (The Hong Kong Polytechnic University)
(Full Day Poster #119a) Exploiting Distortion Information for Multi-degraded Image Restoration
Wooksu Shin (Ajou University); Namhyuk Ahn (NAVER WEBTOON AI); Jeong-Hyeon Moon (Ajou University); Kyung-Ah Sohn (Ajou University)*
(Full Day Poster #120a) NTIRE 2022 Image Inpainting Challenge: Report
Andres Romero (ETH Zürich)*; Angela Castillo (Universidad de los Andes); Jose M Abril-Nova (Universidad de los Andes); Radu Timofte (University of Wurzburg & ETH Zurich)
(Full Day Poster #121a) Multiple Degradation and Reconstruction Network for Single Image Denoising via Knowledge Distillation
Juncheng Li (The Chinese University of Hong Kong)*; Hanhui YANG (The Chinese University of Hong Kong); Qiaosi Yi (East China Normal University); Faming Fang (East China Normal University); Guangwei Gao (Nanjing University of Posts and Telecommunications); Tieyong Zeng (The Chinese University of Hong Kong); Guixu Zhang (East China Normal University)
(Full Day Poster #122a) Dual Heterogeneous Complementary Networks for Single Image Deraining
Yuto Nanba (Yamaguchi University); Hikaru Miyata (Yamaguchi University); Xian-Hua Han (Yamaguchi University)*
(Full Day Poster #123a) Patch-wise Contrastive Style Learning for Instagram Filter Removal
Furkan Osman Kınlı (Özyeğin University)*; Barış Özcan (Özyeğin University); Furkan Kirac (Özyeğin University)
(Full Day Poster #124a) Arbitrary-Scale Image Synthesis
Evangelos Ntavelis (ETH Zurich & CSEM), Mohamad Shahbazi (ETH Zurich), Iason Kastanis (CSEM), Radu Timofte (University of Wurzburg & ETH Zurich), Martin Danelljan (ETH Zurich), Luc Van Gool (ETH Zurich)
(Full Day Poster #125a) RePaint: Inpainting Using Denoising Diffusion Probabilistic Models
Andreas Lugmayr (ETH Zurich), Martin Danelljan (ETH Zurich), Andres Romero (ETH Zurich), Fisher Yu (ETH Zurich), Radu Timofte (University of Wurzburg & ETH Zurich), Luc Van Gool (ETH Zurich)
[project] [slides] [poster]

(Full Day Poster #126a) DRT: A Lightweight Single Image Deraining Recursive Transformer
Yuanchu Liang (The Australian National University)*; Saeed Anwar (The Australian National University); Yang Liu (The Australian National University & Data61)
(Full Day Poster #127a) Towards Real-world Shadow Removal with a Shadow Simulation Method and a Two-stage Framework
Jianhao Gao (Wuhan University); Quanlong Zheng (City University of HongKong)*; Yandong Guo (OPPO Research Institute)
(Full Day Poster #128a) Deep Image Interpolation: A Unified Unsupervised Framework for Pansharpening
Jianhao Gao (Wuhan University); Jie Li (Wuhan University); Xin Su (Wuhan University); Menghui Jiang (Wuhan University); Qiangqiang Yuan (Wuhan University)*
[video] [poster]
(Full Day Poster #129a) Boundary-aware Image Inpainting with Multiple Auxiliary Cues
Yohei Yamashita (Toyota Technological Institute); Kodai Shimosato (Toyota Technological Institute); Norimichi Ukita (TTI-J)*
(Full Day Poster #130a) GenISP: Neural ISP for Low-Light Machine Cognition
Igor Morawski (National Taiwan University)*; Yu-An Chen (National Taiwan University); Yu-Sheng Lin (National Taiwan University); Shusil Dangi (Qualcomm Inc.); Kai He (Qualcomm Inc.); Winston H. Hsu (National Taiwan University)
(Full Day Poster #131a) Nighttime Image Dehazing Based on Variational Decomposition Model
Yun Liu (Southwest University)*; Zhongsheng Yan (Southwest University); Aimin Wu (Chongqing College of International Business and Economics); Tian Ye (Jimei University); Yuche Li (China University of Petroleum)
(Full Day Poster #132a) AnoDDPM: Anomaly Detection with Denoising Diffusion Probabilistic Models using Simplex Noise
Julian A Wyatt (Durham University)*; Adam Leach (Durham University); Sebastian M Schmon (Improbable); Chris G. Willcocks (Durham University)
(Full Day Poster #133a) VFHQ: A High-Quality Dataset and Benchmark for Video Face Super-Resolution
Liangbin Xie (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China)*; Xintao Wang (Tencent); Honglun Zhang (Applied Research Center, Tencent PCG); Chao Dong (SIAT); Ying Shan (Tencent)
(Full Day Poster #134a) Unpaired Face Restoration via Learnable Cross-Quality Shift
Yangyi Dong (Shanghai Jiao Tong University)*; Xiaoyun Zhang (Shanghai Jiao Tong University); Zhixin Wang (Shanghai Jiao Tong University); Ya Zhang (Cooperative Medianet Innovation Center, Shanghai Jiao Tong University); Siheng Chen (Shanghai Jiao Tong University); Yan-Feng Wang (Cooperative Medianet Innovation Center, Shanghai Jiao Tong University)
(Full Day Poster #91a) Exposure Correction Model to Enhance Image Quality
Fevziye Irem Eyiokur Yaman (Karlsruhe Institute of Technology); Dogucan Yaman (Karlsruhe Institute of Technology)*; Hazim Kemal Ekenel (Istanbul Technical University); Alexander Waibel (Karlsruhe Institute of Technology)
(Full Day Poster #135a) Complete and temporally consistent video outpainting
Loic Dehan (KU Leuven)*; Wiebe Van Ranst (KU Leuven); Patrick Vandewalle (KU Leuven); Toon Goedemé (KU Leuven - EAVISE)
(Full Day Poster #136a) Alpha Matte Generation from Single Input for Portrait Matting
Dogucan Yaman (Karlsruhe Institute of Technology)*; Hazim Kemal Ekenel (Istanbul Technical University); Alexander Waibel (Karlsruhe Institute of Technology)
(Full Day Poster #137a) A New Dataset and Transformer for Stereoscopic Video Super-Resolution
Hassan Imani (Bahcesehir University); Md Baharul Islam (Bahcesehir University)*; Lai-Kuan Wong (Multimedia University)
(Full Day Poster #138a) Comparison of CoModGANs, LaMa and GLIDE for Art Inpainting: Completing M.C. Escher’s Print Gallery
Lucia Cipolina-Kun (University of Bristol)*; Simone Caenazzo (RiskCare); Gaston Mazzei (Université Paris-Saclay)
(Full Day Poster #139a) Multi-encoder Network for Parameter Reduction of a Kernel-based Interpolation Architecture
Issa Khalifeh (Queen Mary University of London)*; Marc Gorriz Blanch (BBC); Ebroul Izquierdo (Queen Mary University of London); Marta Mrak (BBC)
(Full Day Poster #140a) A robust non-blind deblurring method using deep denoiser prior
Yingying Fang (Imperial College London); Hao Zhang (The Chinese University of Hong Kong); Hok Shing Wong (The Chinese University of Hong Kong); Tieyong Zeng (The Chinese University of Hong Kong)*
(Full Day Poster #141a) BSRT: Improving Burst Super-Resolution with Swin Transformer and Flow-Guided Deformable Alignment
Ziwei Luo (Megvii)*; Youwei Li (Megvii); Shen Cheng (Megvii); Lei Yu (Megvii); Qi Wu (Megvii); Zhihong Wen (Megvii Technology); Haoqiang Fan (Megvii Inc. (Face++)); Jian Sun (Megvii Technology); Shuaicheng Liu (UESTC & Megvii)
[video] [slides] [poster] [project]
(Full Day Poster #142a) NTIRE 2022 Challenge on High Dynamic Range Imaging: Methods and Results
Eduardo Pérez Pellitero (Huawei Noah's Ark Lab)*; Sibi Catley-Chandar (Huawei Noah's Ark Lab); Richard Shaw (Huawei London Research Centre); Ales Leonardis (Huawei Noah's Ark Lab); Radu Timofte (University of Wurzburg & ETH Zurich)
(Full Day Poster #143a) Progressive Training of A Two-Stage Framework for Video Restoration
Meisong Zheng (Alibaba Group); Qunliang Xing (Alibaba Group)*; Minglang Qiao (Alibaba Group); Mai Xu; Lai Jiang; Huaida Liu (Alibaba Group); Ying Chen (Alibaba Group)
(Full Day Poster #144a) Gamma-enhanced Spatial Attention Network for Efficient High Dynamic Range Imaging
Fangya Li (Communication University of China)*; Ruipeng Gang (Academy of Broadcasting Science, NRTA); Chenghua Li (Institute of Automation Chinese Academy of Sciences); Jinjing Li (Communication University of China); Sai Ma (Academy of Broadcasting Science, NRTA); Chenming Liu (Academy of Broadcasting Science, NRTA); Yizhen Cao (Communication University of China)
(Full Day Poster #145a) NTIRE 2022 Burst Super-Resolution Challenge
Goutam Bhat (ETH Zurich)*; Martin Danelljan (ETH Zurich); Radu Timofte (University of Wurzburg & ETH Zurich)
(Full Day Poster #146a) NTIRE 2022 Challenge on Efficient Super-Resolution: Methods and Results
Yawei Li (ETH Zurich)*; Kai Zhang (ETH Zurich); Radu Timofte (University of Wurzburg & ETH Zurich); Luc Van Gool (ETH Zurich)
[project] [poster] [video]
(Full Day Poster #147a) A Hybrid Network of CNN and Transformer for Lightweight Image Super-Resolution
Jinsheng Fang (Minnan Normal University); Hanjiang Lin (Minnan Normal University); Xinyu Chen (Minnan Normal University); Kun Zeng (Minjiang University)*
(Full Day Poster #148a) Motion Aware Double Attention Network for Dynamic Scene Deblurring
Dan Yang (Huawei); Mehmet Yamac (Huawei Technologies Oy (Finland) Co. Ltd)*
(Full Day Poster #149a) Efficient Progressive High Dynamic Range Image Restoration via Attention and Alignment Network
Gaocheng Yu (Ant Group)*; Jin Zhang (Ant Group); Zhe Ma (Ant Group); Hongbin Wang (Ant Group)
(Full Day Poster #150a) Fast-n-Squeeze: towards real-time spectral reconstruction from RGB images
Mirko Agarla (University of Milano-Bicocca); Simone Bianco (University of Milano-Bicocca)*; Marco Buzzelli (University of Milano-Bicocca); Luigi Celona (University of Milano-Bicocca); Raimondo Schettini (University of Milano-Bicocca)
(Full Day Poster #151a) Attentions Help CNNs See Better: Attention-based Hybrid Image Quality Assessment Network
Shanshan Lao (Tsinghua University); Yuan Gong (Tsinghua University); Shuwei Shi (Tsinghua University); Sidi Yang (Tsinghua University); Tianhe Wu (Tsinghua University); Jiahao Wang (Tsinghua University); Weihao Xia (University College London); Yujiu Yang (Tsinghua University)*
[slides] [poster] [video]
(Full Day Poster #152a) Multi-Bracket High Dynamic Range Imaging with Event Cameras
Nico Messikommer (University of Zurich & ETH Zurich)*; Stamatios Georgoulis (Huawei); Daniel Gehrig (University of Zurich & ETH Zurich); Stepan Tulyakov (Huawei); Julius Erbach (Huawei); Alfredo Bochicchio (Huawei); Yuanyou Li (Huawei); Davide Scaramuzza (University of Zurich & ETH Zurich, Switzerland)
(Full Day Poster #153a) Bidirectional Motion Estimation with Cyclic Cost Volume for High Dynamic Range Imaging
An Gia Vien (Dongguk University); Seonghyun Park (Dongguk University); Truong T.N Mai (Dongguk University); Gahyeon Kim (Dongguk University); Chul Lee (Dongguk University)*
(Full Day Poster #154a) MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment
Sidi Yang (Tsinghua University); Tianhe Wu (Tsinghua University); Shuwei Shi (Tsinghua University); Shanshan Lao (Tsinghua University); Yuan Gong (Tsinghua University); Mingdeng Cao (Tsinghua University); Jiahao Wang (Tsinghua University); Yujiu Yang (Tsinghua University)*
(Full Day Poster #155a) Image Quality Assessment with Gradient Siamese Network
Heng Cong (Interactive Entertainment Group of Netease Inc)*; Lingzhi Fu (Interactive Entertainment Group of Netease Inc); Rongyu Zhang (Interactive Entertainment Group of Netease Inc); Yusheng Zhang (Interactive Entertainment Group of Netease Inc); Hao Wang (Interactive Entertainment Group of Netease Inc); Jiarong He (NTES); Jin Gao (Interactive Entertainment Group of Netease Inc)
(Full Day Poster #156a) Deep-FlexISP: A Three-Stage Framework for Night Photography Rendering
Shuai Liu (Xiaomi)*; Chaoyu Feng (Xiaomi); Xiaotao Wang (Xiaomi); Hao Wang (Xiaomi); Ran Zhu (Xiaomi); Yongqiang Li (Xiaomi); Lei Lei (Xiaomi)
[poster] [video, slides]
(Full Day Poster #157a) NTIRE 2022 Challenge on Super-Resolution and Quality Enhancement of Compressed Video: Dataset, Methods and Results
Ren Yang (ETH Zurich)*; Radu Timofte (University of Wurzburg & ETH Zurich)
(Full Day Poster #158a) NAFSSR: Stereo Image Super-Resolution Using NAFNet
Xiaojie Chu (Peking University)*; Liangyu Chen (Megvii Technology); Wenqing Yu (Megvii Technology)
(Full Day Poster #159a) Asymmetric Information Distillation Network for Lightweight Super Resolution
Zhikai Zong (Shandong University)*; Lin Zha (Hisense); Jiande Jiang (Hisense); Xiaoxiao Liu (Hisense)
(Full Day Poster #160a) DRCR Net: Dense Residual Channel Re-calibration Network with Non-local Purification for Spectral Super Resolution
Jiaojiao Li (Xidian University)*; Songcheng Du (Xidian University); Chaoxiong Wu (Xidian University); Yihong Leng (Xidian University); Rui Song (Xidian University); Yunsong Li (Xidian University)
(Full Day Poster #161a) MSTRIQ: No Reference Image Quality Assessment Based on Swin Transformer with Multi-Stage Fusion
Jing Wang (ByteDance)*; Haotian Fan (ByteDance); Xiaoxia Hou (ByteDance); Yitian Xu (ByteDance); Tao Li (ByteDance); Xuechao Lu (ByteDance); Lean Fu (ByteDance)
(Full Day Poster #162a) Adaptive Feature Consolidation Network for Burst Super-Resolution
Nancy Mehta (Indian Institute of Technology Ropar, Punjab, India)*; Akshay Dudhane (Mohamed bin Zayed University of Artificial Intelligence); Subrahmanyam Murala (IIT Ropar); Syed Waqas Zamir (IIAI); Salman Khan (MBZUAI/ANU); Fahad Shahbaz Khan (MBZUAI)
(Full Day Poster #163a) NTIRE 2022 Challenge on Night Photography Rendering
Egor Ershov (IITP RAS)*; Alex Savchik (Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), Moscow, Russia); Denis Shepelev (Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute)); Nikola Banic (Gideon Brothers); Michael S. Brown (York University); Radu Timofte (University of Wurzburg & ETH Zurich)
(Full Day Poster #164a) GLaMa: Joint Spatial and Frequency Loss for General Image Inpainting
Zeyu Lu (Harbin Institute of Technology)*; Junjun Jiang (Harbin Institute of Technology); Junqin Huang (Beihang University); Gang Wu (Harbin Institute of Technology); Xianming Liu (Harbin Institute of Technology)
[poster] [video]
(Full Day Poster #165a) VideoINR: Learning Video Implicit Neural Representation for Continuous Space-Time Super-Resolution
Zeyuan Chen (UC San Diego); Yinbo Chen (UC San Diego); Jingwen Liu (UC San Diego); Xingqian Xu (UIUC & Picsart AI Research); Vidit Goel (Picsart AI Research); Zhangyang Wang (UT Austin); Humphrey Shi (Picsart AI Research & U of Oregon & UIUC); Xiaolong Wang (UC San Diego)

Papers (PDF, supplementary material) available at https://openaccess.thecvf.com/CVPR2022_workshops/NTIRE