June 19, 2021, VIRTUAL, US

NTIRE 2021

New Trends in Image Restoration and Enhancement workshop

and challenges on image and video processing

in conjunction with CVPR 2021

Join the Mobile AI 2021 workshop online on Zoom for LIVE talks, Q&A, and interaction

The event starts on 20.06.2021 at 7:00 PDT / 14:00 UTC / 22:00 China time.
Check the Mobile AI 2021 schedule.
No registration required.

Join the NTIRE 2021 workshop online on Zoom for LIVE talks, Q&A, and interaction

The event starts on 19.06.2021 at 7:00 PDT / 14:00 UTC / 22:00 China time.
Check the NTIRE 2021 schedule.
No registration required.


Sponsors




Call for papers

Image restoration, enhancement and manipulation are key computer vision tasks that aim to restore degraded image content, fill in missing information, or apply the transformation and/or manipulation needed to achieve a desired target (with respect to perceptual quality, content, or the performance of applications working on such images). Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but substantial progress has also been achieved.

Each step forward eases the use of images by people or computers for the fulfillment of further tasks, as image restoration, enhancement and manipulation serve as an important frontend. Not surprisingly, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, and medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

This workshop builds upon the success of the previous NTIRE editions at CVPR 2017, 2018, 2019 and 2020, and at ACCV 2016. Moreover, it relies on all the people associated with the CLIC 2018, 2019 and 2020, PIRM 2018, AIM 2019 and 2020, and NTIRE events, such as organizers, PC members, distinguished speakers, authors of published papers, challenge participants and winning teams.

Papers addressing topics related to image restoration, enhancement and manipulation are invited. The topics include, but are not limited to:

  • Image/video inpainting
  • Image/video deblurring
  • Image/video denoising
  • Image/video upsampling and super-resolution
  • Image/video filtering
  • Image/video de-hazing, de-raining, de-snowing, etc.
  • Demosaicing
  • Image/video compression
  • Removal of artifacts, shadows, glare and reflections, etc.
  • Image/video enhancement: brightening, color adjustment, sharpening, etc.
  • Style transfer
  • Hyperspectral imaging
  • Underwater imaging
  • Methods robust to changing weather conditions / adverse outdoor conditions
  • Image/video restoration, enhancement, manipulation in constrained settings
  • Visual domain translation
  • Multimodal translation
  • Perceptual enhancement
  • Perceptual manipulation
  • Depth estimation
  • Image/video generation and hallucination
  • Image/video quality assessment
  • Image/video semantic segmentation
  • Aerial and satellite imaging restoration, enhancement, manipulation
  • Studies and applications of the above.

NTIRE 2021 has the following associated groups of challenges:

  • image challenges
  • video challenges

The authors of the top methods in each category will be invited to submit papers to the NTIRE 2021 workshop.

The authors of the top methods will co-author the challenge reports.

The accepted NTIRE workshop papers will be published under the book title "CVPR 2021 Workshops" by the Computer Vision Foundation Open Access and the IEEE Xplore Digital Library.

For those with a keen interest in the efficiency and deployment of solutions on mobile devices, we refer to the Mobile AI 2021 workshop and challenges co-organized at CVPR 2021.



Contact:

Radu Timofte, radu.timofte@vision.ee.ethz.ch

Computer Vision Laboratory

ETH Zurich, Switzerland

Important dates



Challenges (all deadlines at 5 PM Pacific Time)

  • Site online: December 22, 2020
  • Release of train data and validation data: January 1, 2021
  • Validation server online: January 5, 2021
  • Final test data release, validation server closed: March 15, 2021
  • Test restoration results submission deadline: March 20, 2021
  • Fact sheets, code/executable submission deadline: March 20, 2021
  • Preliminary test results release to the participants: March 22, 2021
  • Paper submission deadline for entries from the challenges: April 4, 2021 (EXTENDED)

Workshop (all deadlines at 5 PM Pacific Time)

  • Paper submission server online: January 11, 2021
  • Paper submission deadline: March 15, 2021 (EXTENDED)
  • Paper submission deadline (only for methods from NTIRE 2021 challenges or CVPR 2021 rejected papers!): April 4, 2021 (EXTENDED)
  • Paper decision notification: April 8, 2021
  • Camera-ready deadline: April 18, 2021
  • Workshop day: June 19, 2021

Submit



Instructions and Policies
Format and paper length

A paper submission has to be in English, in PDF format, and at most 8 pages (excluding references) in double-column format. The paper format must follow the same guidelines as for all CVPR 2021 submissions:
http://cvpr2021.thecvf.com/node/33

Double-blind review policy

The review process is double-blind. Authors do not know the names of the chairs/reviewers of their papers, and reviewers do not know the names of the authors.

Dual submission policy

Dual submission is allowed with the CVPR 2021 main conference only. If a paper is also submitted to CVPR and accepted there, it cannot be published at both CVPR and the workshop.

Submission site

https://cmt3.research.microsoft.com/NTIRE2021

Proceedings

Accepted and presented papers will be published after the conference in the CVPR Workshops proceedings, together with the CVPR 2021 main conference papers.

Author Kit

http://cvpr2021.thecvf.com/sites/default/files/2020-09/cvpr2021AuthorKit_2.zip
The author kit provides a LaTeX2e template for paper submissions. Please refer to the kit for detailed formatting instructions.
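For orientation, a minimal LaTeX skeleton in the spirit of the kit's template is sketched below. The style file name (cvpr.sty), class options, and bibliography style are assumptions based on recent CVPR author kits; the kit itself remains the authoritative reference.

    % Minimal CVPR-style submission skeleton (a sketch; defer to the author kit)
    \documentclass[10pt,twocolumn,letterpaper]{article}
    \usepackage{cvpr}          % style file shipped with the author kit (assumed name)
    \usepackage{times}
    \usepackage{graphicx}
    \usepackage{amsmath,amssymb}
    \def\cvprPaperID{****}     % replace with your CMT paper ID
    \begin{document}
    \title{Your NTIRE 2021 Workshop Paper Title}
    \author{Anonymous NTIRE 2021 submission}  % double-blind: no author names
    \maketitle
    \begin{abstract}
    One paragraph summarizing the contribution.
    \end{abstract}
    \section{Introduction}
    Body text, at most 8 pages excluding references.
    {\small
    \bibliographystyle{ieee_fullname}  % .bst shipped with the kit (assumed)
    \bibliography{egbib}
    }
    \end{document}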

People



Organizers

  • Radu Timofte, ETH Zurich
  • Luc Van Gool, KU Leuven & ETH Zurich
  • Ming-Hsuan Yang, University of California at Merced & Google
  • Kyoung Mu Lee, Seoul National University
  • Eli Shechtman, Creative Intelligence Lab at Adobe Research
  • Martin Danelljan, ETH Zurich
  • Shuhang Gu, OPPO & University of Sydney
  • Seungjun Nah, Seoul National University
  • Sanghyun Son, Seoul National University
  • Suyoung Lee, Seoul National University
  • Ruofan Zhou, EPFL
  • Majed El Helou, EPFL
  • Sabine Süsstrunk, EPFL
  • Lei Zhang, Alibaba / Hong Kong Polytechnic University
  • Michael Brown, York University
  • Goutam Bhat, ETH Zurich
  • Chao Dong, SIAT
  • Cosmin Ancuti, UCL
  • Codruta Ancuti, University Politehnica Timisoara
  • Eduardo Perez Pellitero, Huawei Noah's Ark Lab
  • Ales Leonardis, Huawei Noah's Ark Lab & University of Birmingham
  • Oliver Nina, AF Research Lab
  • Abdullah Abuolaim, York University
  • Jimmy Ren, SenseTime
  • Andreas Lugmayr, ETH Zurich
  • Bob Lee, Wright Brothers Institute
  • Jinjin Gu, SNU
  • Ren Yang, ETH Zurich


PC Members

  • Codruta Ancuti, UPT
  • Cosmin Ancuti, Polytechnic University of Timisoara
  • Boaz Arad, Ben-Gurion University of the Negev
  • Siavash Arjomand Bigdeli, CSEM
  • Nick Barnes, Australian National University
  • Michael S. Brown, York University
  • Cheng-Ming Chiang, MediaTek
  • Sunghyun Cho, POSTECH
  • Martin Danelljan, ETH Zurich
  • Tali Dekel, Weizmann Institute of Science
  • Chao Dong, SIAT
  • Weisheng Dong, Xidian University
  • Touradj Ebrahimi, EPFL
  • Majed El Helou, EPFL
  • Graham Finlayson, University of East Anglia
  • Corneliu Florea, University Politechnica of Bucharest
  • Peter Gehler, Amazon
  • Bastian Goldluecke, University of Konstanz
  • Shuhang Gu, OPPO & University of Sydney
  • Christine Guillemot, INRIA
  • Felix Heide, Princeton University & Algolux
  • Chiu Man Ho, OPPO
  • Hiroto Honda, Mobility Technologies Co Ltd.
  • Zhe Hu, Hikvision Research
  • Zhiwu Huang, ETH Zurich
  • Andrey Ignatov, ETH Zurich
  • Seon Joo Kim, Yonsei University
  • In So Kweon, KAIST
  • Christian Ledig, VideaHealth
  • Suyoung Lee, Seoul National University
  • Kyoung Mu Lee, Seoul National University
  • Seungyong Lee, POSTECH
  • Victor Lempitsky, Skoltech & Samsung
  • Ales Leonardis, Huawei Noah's Ark Lab & University of Birmingham
  • Yawei Li, ETH Zurich
  • Stephen Lin, Microsoft Research
  • Ming-Yu Liu, NVIDIA Research
  • Vladimir Lukin, National Aerospace University
  • Kai-Kuang Ma, Nanyang Technological University, Singapore
  • Vasile Manta, Technical University of Iasi
  • Zibo Meng, OPPO
  • Rafael Molina, University of Granada
  • Yusuke Monno, Tokyo Institute of Technology
  • Seungjun Nah, Seoul National University
  • Hajime Nagahara, Osaka University
  • Vinay P. Namboodiri, IIT Kanpur
  • Federico Perazzi, Facebook
  • Fatih Porikli, Qualcomm CR&D
  • Hayder Radha, Michigan State University
  • Wenqi Ren, Chinese Academy of Sciences
  • Antonio Robles-Kelly, Deakin University
  • Andres Romero, ETH Zurich
  • Aline Roumy, INRIA
  • Yoichi Sato, University of Tokyo
  • Nicu Sebe, University of Trento
  • Eli Shechtman, Creative Intelligence Lab at Adobe Research
  • Gregory Slabaugh, Queen Mary University of London
  • Sanghyun Son, Seoul National University
  • Sabine Süsstrunk, EPFL
  • Yu-Wing Tai, Kuaishou Technology & HKUST
  • Hugues Talbot, Université Paris Est
  • Masayuki Tanaka, Tokyo Institute of Technology
  • Hao Tang, University of Trento
  • Jean-Philippe Tarel, IFSTTAR
  • Radu Timofte, ETH Zurich
  • George Toderici, Google
  • Luc Van Gool, ETH Zurich & KU Leuven
  • Ashok Veeraraghavan, Rice University
  • Jue Wang, Tencent
  • Oliver Wang, Adobe Systems Inc
  • Ting-Chun Wang, NVIDIA
  • Xintao Wang, The Chinese University of Hong Kong
  • Ming-Hsuan Yang, University of California at Merced & Google
  • Ren Yang, ETH Zurich
  • Wenjun Zeng, Microsoft Research
  • Kai Zhang, ETH Zurich
  • Richard Zhang, UC Berkeley & Adobe Research
  • Yulun Zhang, Northeastern University
  • Ruofan Zhou, EPFL
  • Jun-Yan Zhu, Adobe Research & CMU
  • Wangmeng Zuo, Harbin Institute of Technology

Invited Talks



Alan Bovik

University of Texas at Austin

Title: Getting “High” on Frame Rates

Abstract: Modern streaming video providers continuously seek to improve consumer experiences by delivering higher-quality, denser content. An important direction that bears study is high frame rate (HFR) video, which presents unique problems involving the balance between frame rate, video quality, and compression. I will describe new large-scale perceptual studies that we have conducted on these issues. I will also describe new computational video quality models that address highly practical questions, such as frame rate selection versus compression, and how to combine space-time sampling with compression. My hope is that these contributions will help further advance the global delivery of HFR video content.

Bio: Al Bovik is the Cockrell Family Regents Endowed Chair Professor at The University of Texas at Austin. His research interests land squarely at the nexus of visual neuroscience, deep learning, and digital streaming and social media. His many international honors include a 2020 Technology and Engineering Emmy® Award, the 2019 Progress Medal of the Royal Photographic Society, the 2019 IEEE Fourier Award, the 2017 OSA Edwin H. Land Medal, a 2015 Primetime Emmy® Award from the Academy of Television Arts and Sciences, and the Norbert Wiener and ‘Sustained Impact’ Awards of the IEEE Signal Processing Society.

Wangmeng Zuo

Harbin Institute of Technology

Title: Towards Guided and Generic Blind Face Restoration

Abstract: Blind restoration of unconstrained blurry, noisy, low-resolution, or compressed face images is a challenging low-level vision task with many real-world applications, such as faces in albums, old photos, and old films. While generic blind face restoration is difficult, we begin with a practically more feasible setting by exploiting both the degraded observation and a high-quality exemplar image for guided face restoration. Optical flow and moving least squares are subsequently introduced for spatial alignment of the guidance image, and multiple adaptive spatial feature fusion is deployed to incorporate guidance features in an adaptive and progressive manner. By introducing generic dictionaries, we further extend adaptive spatial fusion to generic blind face restoration, and finally present a general framework for handling both generic and guided blind face restoration.

Bio: Wangmeng Zuo is currently a Professor in the School of Computer Science and Technology, Harbin Institute of Technology. He received the Ph.D. degree in computer application technology from the Harbin Institute of Technology, Harbin, China, in 2007. His current research interests include image enhancement and restoration, face image restoration and editing, object detection, visual tracking, and image classification. He has published over 100 papers in top-tier academic journals and conferences, and his publications have received 20,000+ citations on Google Scholar. He has served as a Tutorial Organizer at ECCV 2016 and as an Associate Editor of IEEE Trans. on Pattern Analysis and Machine Intelligence and IEEE Trans. on Image Processing.

Rakesh Ranjan, Federico Perazzi

Facebook Reality Labs

Title: Enhancing AR Cameras Using Deep Learning

Abstract: AR glasses promise to be the next compute platform, enabling a frictionless way of interacting with the world around us by augmenting our perception of the real world. They must be lightweight, all-day wearable devices, which limits the size of the sensors that can go into them, and they must adhere to strict power and thermal limitations. In the first part of this talk we will present some of the constraints on camera imaging imposed by these limitations. In the second part we will present an efficient on-device neural network that employs a novel feature-align layer and a perceptual loss computed in RAW space to achieve state-of-the-art results at a fraction of the computational cost.

Bio: Rakesh Ranjan is a Research Scientist Manager at Facebook Reality Labs. Rakesh and his team pursue research in AI-based low-level computer vision and graphics for augmented and virtual reality devices. Prior to Facebook, Rakesh was a Research Scientist at Nvidia, where he worked on AI for real-time graphics (DLSS) and AI for cloud gaming (GeForce Now). Rakesh also spent 5 years at Intel Research as a Ph.D. and full-time researcher.

Federico Perazzi is a Research Scientist on the On-Device AI team at Facebook Reality Labs, working on image enhancement tasks for the camera of the next-gen Facebook AR Glasses. Before joining Facebook, Federico was part of the Creative Intelligence Lab at Adobe Research, where he co-authored several papers on denoising, generative models, and semantic image understanding. Federico spent eight years as an Intern, Ph.D. student, and Post-Doctoral Researcher at Disney Research Zurich, in the Imaging and Video Processing Group. He obtained his Ph.D. from ETH Zurich in 2017 on the topic of video object segmentation.

Ayush Tewari, Christian Theobalt

MPI for Informatics, Saarland University

Title: Synthesis of Portrait Images with 3D Control

Abstract: Photorealistic and semantically controllable synthesis of portrait images has many applications in movies, virtual reality, and casual photography. Recent generative models, such as StyleGAN, have demonstrated high-quality synthesis of portrait images. However, they lack intuitive control over the 3D scene parameters. In this talk, I will cover some recent methods where we introduce controllability in pretrained generative models, allowing for high-quality as well as controllable synthesis of portrait images. I will describe how a pretrained generative model can offer a lot of advantages, especially in the presence of limited supervised training data.

Bio: Ayush Tewari is a Ph.D. student in the 'Visual Computing and Artificial Intelligence' department at the Max Planck Institute for Informatics in Saarbruecken, Germany. He received his M.Sc. in Computer Science from Grenoble INP, and B.Tech. in Computer Science and Engineering from IIIT Hyderabad. His research interests are in computer vision, computer graphics, and machine learning, with a focus on self-supervised 3D reconstruction and synthesis problems.

Christian Theobalt is a Professor of Computer Science and the head of the research group 'Graphics, Vision & Video' at the Max Planck Institute for Informatics, Saarbruecken, Germany. He is also a professor at Saarland University. His research lies on the boundary between Computer Vision and Computer Graphics. For instance, he works on 4D scene reconstruction, marker-less motion and performance capture, machine learning for graphics and vision, and new sensors for 3D acquisition. Christian received several awards, for instance the Otto Hahn Medal of the Max-Planck Society (2007), the EUROGRAPHICS Young Researcher Award (2009), the German Pattern Recognition Award (2012), an ERC Starting Grant (2013) and an ERC Consolidator Grant (2017). In 2015, he was elected one of Germany's top 40 innovators under 40 by the magazine Capital. He is a co-founder of theCaptury.

Qi Tian, Lin Liu

Huawei Cloud & AI

Title: Challenges and Solutions for Intelligent Image Restoration

Abstract: In recent years, with the development of deep learning, image restoration and enhancement tasks such as image denoising and image super-resolution have attracted more and more attention. This talk will focus on the challenges faced by image restoration and propose some solutions: 1) For the problem that real data is difficult to obtain, we introduce methods for synthesizing more realistic data, self-supervised learning, and fine-tuning using pre-trained models. 2) For the problem that existing models are limited in mining useful information, we first introduce the concept of 'guidance restoration' and then introduce some self-guidance and external-guidance methods. 3) Finally, we introduce some video- and burst-based methods for image restoration.

Bio: Qi Tian is currently a Chief Scientist in Artificial Intelligence at Cloud BU, Huawei. From 2018 to 2020, he was the Chief Scientist in Computer Vision at Huawei Noah's Ark Lab. Before that, he was a Full Professor in the Department of Computer Science at the University of Texas at San Antonio (UTSA) from 2002 to 2019. During 2008-2009, he took a one-year faculty leave at Microsoft Research Asia (MSRA).
Dr. Tian received his Ph.D. in ECE from the University of Illinois at Urbana-Champaign (UIUC), his B.E. in Electronic Engineering from Tsinghua University, and his M.S. in ECE from Drexel University. His research interests include computer vision, multimedia information retrieval, and machine learning, and he has published 630+ refereed journal and conference papers. His Google Scholar citation count is over 28,500, with an H-index of 81. He was a co-author of best papers at IEEE ICME 2019, ACM CIKM 2018, ACM ICMR 2015, PCM 2013, MMM 2013, and ACM ICIMCS 2012, received a Top 10% Paper Award at MMSP 2011 and a Student Contest Paper award at ICASSP 2006, and co-authored Best Paper/Student Paper candidates at ACM Multimedia 2019, ICME 2015, and PCM 2007.
Dr. Tian received the 2017 UTSA President's Distinguished Award for Research Achievement, the 2016 UTSA Innovation Award, the 2014 Research Achievement Award from the College of Science, UTSA, a 2010 Google Faculty Award, and a 2010 ACM Service Award. He is an Associate Editor of IEEE TMM, IEEE TCSVT, ACM TOMM, and MMSJ, and is on the Editorial Board of the Journal of Multimedia (JMM) and the Journal of MVA. He has been a Guest Editor of IEEE TMM, the Journal of CVIU, etc. Dr. Tian is a Fellow of the IEEE (2016).
Lin Liu is a student in the Electronic Engineering and Information Science Department of the University of Science and Technology of China (USTC). He received his Bachelor's degree from the Information Security Department of USTC in 2019. Previously, he spent 8 months at Huawei's Noah's Ark Lab as a research intern. His research interests are computer vision, machine learning, and low-level vision.

Join the Mobile AI 2021 Zoom meeting for LIVE talks, Q&A, and interaction.
No registration required.
Join the NTIRE 2021 Zoom meeting for LIVE talks, Q&A, and interaction.
No registration required.

A subset of the accepted NTIRE workshop papers also have oral presentations.
All accepted NTIRE workshop papers are published under the book title "2021 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops" by the Computer Vision Foundation Open Access and the IEEE Xplore Digital Library.



List of NTIRE 2021 papers

Papers (PDF, supplementary material) are available at https://openaccess.thecvf.com/CVPR2021_workshops/NTIRE

Multi-Scale Self-Calibrated Network for Image Light Source Transfer
Yuanzhi Wang, Tao Lu, Yanduo Zhang, Yuntao Wu
[video][slides] Physically Inspired Dense Fusion Networks for Relighting
Amirsaeed Yazdani, Tiantong Guo, Vishal Monga
[video][slides] Noise Conditional Flow Model for Learning the Super-Resolution Space
Younggeun Kim, Donghee Son
[video] NTIRE 2021 Challenge on Image Deblurring
Seungjun Nah, Sanghyun Son, Suyoung Lee, Radu Timofte, Kyoung Mu Lee
Long-Tailed Recognition of SAR Aerial View Objects by Cascading and Paralleling Experts
Cheng-Yen Yang, Hung-Min Hsu, Jiarui Cai, Jenq-Neng Hwang
KernelNet: A Blind Super-Resolution Kernel Estimation Network
Mehmet Yamac, Baran Ataman, Aakif Nawaz
Edge Guided Progressively Generative Image Outpainting
Han Lin, Maurice Pagnucco, Yang Song
Robust Image-to-Image Color Transfer Using Optimal Inlier Maximization
Magnus Oskarsson
[video] EBSR: Feature Enhanced Burst Super-Resolution With Deformable Alignment
Ziwei Luo, Lei Yu, Xuan Mo, Youwei Li, Lanpeng Jia, Haoqiang Fan, Jian Sun, Shuaicheng Liu
Learning a Cascaded Non-Local Residual Network for Super-Resolving Blurry Images
Haoran Bai, Songsheng Cheng, Jinhui Tang, Jinshan Pan
NTIRE 2021 Learning the Super-Resolution Space Challenge
Andreas Lugmayr, Martin Danelljan, Radu Timofte
(ASNA) An Attention-Based Siamese-Difference Neural Network With Surrogate Ranking Loss Function for Perceptual Image Quality Assessment
Seyed Mehdi Ayyoubzadeh, Ali Royat
Deep Learning-Based Distortion Sensitivity Prediction for Full-Reference Image Quality Assessment
Sewoong Ahn, Yeji Choi, Kwangjin Yoon
Beyond Joint Demosaicking and Denoising: An Image Processing Pipeline for a Pixel-Bin Image Sensor
S M A Sharif, Rizwan Ali Naqvi, Mithun Biswas
[video] SRKTDN: Applying Super Resolution Method to Dehazing Task
Tianyi Chen, Jiahui Fu, Wentao Jiang, Chen Gao, Si Liu
Shadow Removal With Paired and Unpaired Learning
Florin-Alexandru Vasluianu, Andres Romero, Luc Van Gool, Radu Timofte
HDRUNet: Single Image HDR Reconstruction With Denoising and Dequantization
Xiangyu Chen, Yihao Liu, Zhengwen Zhang, Yu Qiao, Chao Dong
S3Net: A Single Stream Structure for Depth Guided Image Relighting
Hao-Hsiang Yang, Wei-Ting Chen, Sy-Yen Kuo
Region-Adaptive Deformable Network for Image Quality Assessment
Shuwei Shi, Qingyan Bai, Mingdeng Cao, Weihao Xia, Jiahao Wang, Yifan Chen, Yujiu Yang
Single Image Dehazing Using Bounded Channel Difference Prior
Xuan Zhao
DeepObjStyle: Deep Object-Based Photo Style Transfer
Indra Deep Mastan, Shanmuganathan Raman
Restoration of Video Frames From a Single Blurred Image With Motion Understanding
Dawit Mureja Argaw, Junsik Kim, Francois Rameau, Chaoning Zhang, In So Kweon
[video][slides] IQMA Network: Image Quality Multi-Scale Assessment Network
Haiyang Guo, Yi Bin, Yuqing Hou, Qing Zhang, Hengliang Luo
LTNet: Light Transfer Network for Depth Guided Image Relighting
Yu Zhu, Bosong Ding, Chenghua Li, Wanli Qian, Fangya Li, Yiheng Yao, Ruipeng Gang, Chunjie Zhang, Jian Cheng
Pixel-Guided Dual-Branch Attention Network for Joint Image Deblurring and Super-Resolution
Si Xi, Jia Wei, Weidong Zhang
[video][project page] Unifying Guided and Unguided Outdoor Image Synthesis
Muhammad Usman Rafique, Yu Zhang, Benjamin Brodie, Nathan Jacobs
[poster] Efficient Space-Time Video Super Resolution Using Low-Resolution Flow and Mask Upsampling
Saikat Dutta, Nisarg A. Shah, Anurag Mittal
A Two-Stage Deep Network for High Dynamic Range Image Reconstruction
S M A Sharif, Rizwan Ali Naqvi, Mithun Biswas, Sungjun Kim
EGB: Image Quality Assessment Based on Ensemble of Gradient Boosting
Dounia Hammou, Sid Ahmed Fezza, Wassim Hamidouche
[video] Improved Noise2Noise Denoising With Limited Data
Adria Font Calvarons
Variational AutoEncoder for Reference Based Image Super-Resolution
Zhi-Song Liu, Wan-Chi Siu, Li-Wen Wang
Self-Supervised Multi-Task Pretraining Improves Image Aesthetic Assessment
Jan Pfister, Konstantin Kobs, Andreas Hotho
[poster] Overparametrization of HyperNetworks at Fixed FLOP-Count Enables Fast Neural Image Enhancement
Lorenz K. Muller
[video] EDPN: Enhanced Deep Pyramid Network for Blurry Image Restoration
Ruikang Xu, Zeyu Xiao, Jie Huang, Yueyi Zhang, Zhiwei Xiong
NTIRE 2021 Challenge on Burst Super-Resolution: Methods and Results
Goutam Bhat, Martin Danelljan, Radu Timofte
[video] NTIRE 2021 NonHomogeneous Dehazing Challenge Report
Codruta O. Ancuti, Cosmin Ancuti, Florin-Alexandru Vasluianu, Radu Timofte
Guidance Network With Staged Learning for Image Enhancement
Luming Liang, Ilya Zharkov, Faezeh Amjadi, Hamid Reza Vaezi Joze, Vivek Pradeep
[video] NTIRE 2021 Challenge for Defocus Deblurring Using Dual-Pixel Images: Methods and Results
Abdullah Abuolaim, Radu Timofte, Michael S. Brown
Toward Interactive Modulation for Photo-Realistic Image Restoration
Haoming Cai, Jingwen He, Yu Qiao, Chao Dong
[poster] Generic Image Restoration With Flow Based Priors
Leonhard Helminger, Michael Bernasconi, Abdelaziz Djelouah, Markus Gross, Christopher Schroers
NTIRE 2021 Challenge on Quality Enhancement of Compressed Video: Dataset and Study
Ren Yang, Radu Timofte
[video] ADNet: Attention-Guided Deformable Convolutional Network for High Dynamic Range Imaging
Zhen Liu, Wenjie Lin, Xinpeng Li, Qing Rao, Ting Jiang, Mingyan Han, Haoqiang Fan, Jian Sun, Shuaicheng Liu
Adaptive Spatial-Temporal Fusion of Multi-Objective Networks for Compressed Video Perceptual Enhancement
He Zheng, Xin Li, Fanglong Liu, Lielin Jiang, Qi Zhang, Fu Li, Qingqing Dang, Dongliang He
A Two-Branch Neural Network for Non-Homogeneous Dehazing via Ensemble Learning
Yankun Yu, Huan Liu, Minghan Fu, Jun Chen, Xiyao Wang, Keyan Wang
[video] NTIRE 2021 Depth Guided Image Relighting Challenge
Majed El Helou, Ruofan Zhou, Sabine Susstrunk, Radu Timofte
Weighted Multi-Kernel Prediction Network for Burst Image Super-Resolution
Wooyeong Cho, Sanghyeok Son, Dae-Shik Kim
Cross Modality Knowledge Distillation for Multi-Modal Aerial View Object Classification
Lehan Yang, Kele Xu
[video] Dual Contrastive Learning for Unsupervised Image-to-Image Translation
Junlin Han, Mehrdad Shoeiby, Lars Petersson, Mohammad Ali Armin
Single Image HDR Synthesis Using a Densely Connected Dilated ConvNet
Akhil K. A., Jiji C. V.
[video] DW-GAN: A Discrete Wavelet Transform GAN for NonHomogeneous Dehazing
Minghan Fu, Huan Liu, Yankun Yu, Jun Chen, Keyan Wang
[video] NTIRE 2021 Challenge on Video Super-Resolution
Sanghyun Son, Suyoung Lee, Seungjun Nah, Radu Timofte, Kyoung Mu Lee
NTIRE 2021 Challenge on Quality Enhancement of Compressed Video: Methods and Results
Ren Yang, Radu Timofte
[video] PnG: Micro-Structured Prune-and-Grow Networks for Flexible Image Restoration
Wei Jiang, Wei Wang, Shan Liu, Songnan Li
[slides] HINet: Half Instance Normalization Network for Image Restoration
Liangyu Chen, Xin Lu, Jie Zhang, Xiaojie Chu, Chengpeng Chen
[video] NTIRE 2021 Multi-Modal Aerial View Object Classification Challenge
Jerrick Liu, Nathan Inkawhich, Oliver Nina, Radu Timofte
Three Gaps for Quantisation in Learned Image Compression
Shi Pan, Chris Finlay, Chri Besenbruch, William Knottenbelt
NTIRE 2021 Challenge on High Dynamic Range Imaging: Dataset, Methods and Results
Eduardo Perez-Pellitero, Sibi Catley-Chandar, Ales Leonardis, Radu Timofte
VSpSR: Explorable Super-Resolution via Variational Sparse Representation
Hangqi Zhou, Chao Huang, Shangqi Gao, Xiahai Zhuang
[video] Symmetric Parallax Attention for Stereo Image Super-Resolution
Yingqian Wang, Xinyi Ying, Longguang Wang, Jungang Yang, Wei An, Yulan Guo
Multi-Modal Bifurcated Network for Depth Guided Image Relighting
Hao-Hsiang Yang, Wei-Ting Chen, Hao-Lun Luo, Sy-Yen Kuo
Efficient CNN Architecture for Multi-Modal Aerial View Object Classification
Casian Miron, Alexandru Pasarica, Radu Timofte
[poster] Attention! Stay Focus!
Tu Vo
Multi-Scale Selective Residual Learning for Non-Homogeneous Dehazing
Eunsung Jo, Jae-Young Sim
NTIRE 2021 Challenge on Perceptual Image Quality Assessment
Jinjin Gu, Haoming Cai, Chao Dong, Jimmy S. Ren, Yu Qiao, Shuhang Gu, Radu Timofte
Instagram Filter Removal on Fashionable Images
Furkan Kinli, Baris Ozcan, Furkan Kirac
[video] Boosting the Performance of Video Compression Artifact Reduction With Reference Frame Proposals and Frequency Domain Information
Yi Xu, Minyi Zhao, Jing Liu, Xinjian Zhang, Longwen Gao, Shuigeng Zhou, Huyang Sun
Single-Image HDR Reconstruction With Task-Specific Network Based on Channel Adaptive RDN
Guannan Chen, Lijie Zhang, Mengdi Sun, Yan Gao, Pablo Navarrete Michelini, YanHong Wu
[video][slides] SRFlow-DA: Super-Resolution Using Normalizing Flow With Deep Convolutional Block
Younghyun Jo, Sejong Yang, Seon Joo Kim
[video] Perceptual Image Quality Assessment With Transformers
Manri Cheon, Sung-Jun Yoon, Byungyeon Kang, Junwoo Lee
Wide Receptive Field and Channel Attention Network for JPEG Compressed Image Deblurring
Donghyeon Lee, Chulhee Lee, Taesung Kim