June 15, Seattle, Washington

NTIRE 2020

New Trends in Image Restoration and Enhancement workshop

and challenges on image and video restoration and enhancement

in conjunction with CVPR 2020

Check the CVPR 2020 virtual NTIRE workshop landing page for LIVE sessions, Q&A, recordings, and interaction


Call for papers

Image restoration, enhancement and manipulation are key computer vision tasks that aim to restore degraded image content, fill in missing information, or transform and/or manipulate an image toward a desired target (with respect to perceptual quality, content, or the performance of applications working on such images). Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but substantial progress has also been achieved.

Each step forward eases the use of images by people or computers for the fulfillment of further tasks, as image restoration, enhancement and manipulation serve as an important front end. Not surprisingly, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, and medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

This workshop builds upon the success of the previous NTIRE editions at CVPR 2017, 2018, and 2019 and at ACCV 2016. Moreover, it relies on all the people associated with the CLIC 2018, 2019, and 2020, PIRM 2018, AIM 2019, and NTIRE events: organizers, PC members, distinguished speakers, authors of published papers, challenge participants, and winning teams.

Papers addressing topics related to image restoration, enhancement and manipulation are invited. The topics include, but are not limited to:

  • Image/video inpainting
  • Image/video deblurring
  • Image/video denoising
  • Image/video upsampling and super-resolution
  • Image/video filtering
  • Image/video de-hazing, de-raining, de-snowing, etc.
  • Demosaicing
  • Image/video compression
  • Removal of artifacts, shadows, glare and reflections, etc.
  • Image/video enhancement: brightening, color adjustment, sharpening, etc.
  • Style transfer
  • Hyperspectral imaging
  • Underwater imaging
  • Methods robust to changing weather conditions / adverse outdoor conditions
  • Image/video restoration, enhancement, manipulation on constrained settings
  • Image/video processing on mobile devices
  • Visual domain translation
  • Multimodal translation
  • Perceptual enhancement
  • Perceptual manipulation
  • Depth estimation
  • Image/video generation and hallucination
  • Image/video quality assessment
  • Image/video semantic segmentation, depth estimation
  • Studies and applications of the above.

NTIRE 2020 has the following associated groups of challenges:

  • image challenges
  • video challenges

The authors of the top methods in each category will be invited to submit papers to the NTIRE 2020 workshop.

The authors of the top methods will co-author the challenge reports.

The accepted NTIRE workshop papers will be published under the book title "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops" by Computer Vision Foundation Open Access and the IEEE Xplore Digital Library.


Radu Timofte, radu.timofte@vision.ee.ethz.ch

Computer Vision Laboratory

ETH Zurich, Switzerland

NTIRE 2020 video challenges

Important dates

Challenges (all deadlines at 5PM Pacific Time)

  • Site online: December 01, 2019
  • Release of train data and validation data: December 17, 2019
  • Validation server online: January 06, 2020
  • Final test data release, validation server closed: March 16, 2020
  • Test results submission deadline: March 26, 2020 (EXTENDED)
  • Fact sheets and code/executable submission deadline: March 26, 2020 (EXTENDED)
  • Preliminary test results release to the participants: March 28, 2020 (EXTENDED)
  • Paper submission deadline for entries from the challenges: April 09, 2020 (EXTENDED)

Workshop (all deadlines at 5PM Pacific Time)

  • Paper submission server online: January 20, 2020
  • Paper submission deadline (regular workshop papers): March 22, 2020 (EXTENDED)
  • Paper submission deadline (only for methods from challenges!): April 09, 2020 (EXTENDED)
  • Regular papers decision notification: April 10, 2020 (EXTENDED)
  • Camera ready deadline: April 19, 2020 (EXTENDED)
  • Workshop day: June 15, 2020


Instructions and Policies
Format and paper length

A paper submission must be in English, in PDF format, and at most 8 pages (excluding references) in double-column format. The paper must follow the same formatting guidelines as all CVPR 2020 submissions.

Double-blind review policy

The review process is double-blind: authors do not know the names of the chairs/reviewers of their papers, and reviewers do not know the names of the authors.

Dual submission policy

Dual submission is allowed with the CVPR 2020 main conference only. If a paper is also submitted to CVPR and accepted there, it cannot be published at both the main conference and the workshop.

Submission site



Accepted and presented papers will be published after the conference in CVPR Workshops proceedings together with the CVPR2020 main conference papers.

Author Kit

The author kit provides a LaTeX2e template for paper submissions. Please refer to the example egpaper_for_review.pdf for detailed formatting instructions.
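For orientation, a minimal submission skeleton in the style of the CVPR author kit is sketched below. This is an illustrative outline only, assuming the usual file names shipped with the kit (`cvpr.sty`, `egbib.bib`, the `ieee_fullname` bibliography style); always start from the official `egpaper_for_review.tex` rather than this sketch.

```latex
\documentclass[10pt,twocolumn,letterpaper]{article}

\usepackage{cvpr}      % style file from the CVPR author kit
\usepackage{times}
\usepackage{graphicx}
\usepackage{amsmath}

% \cvprfinalcopy       % uncomment for the camera-ready version

\begin{document}

\title{Your NTIRE 2020 Paper Title}
% Double-blind review: do not reveal author identities
\author{Anonymous NTIRE submission\\Paper ID XXXX}
\maketitle

\begin{abstract}
   Abstract text goes here.
\end{abstract}

\section{Introduction}
Body text; at most 8 pages in double column, excluding references.

{\small
\bibliographystyle{ieee_fullname}
\bibliography{egbib}
}

\end{document}
```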



Radu Timofte

Radu Timofte is a lecturer and research group leader at the Computer Vision Laboratory, ETH Zurich, Switzerland. He obtained a PhD degree in Electrical Engineering at the KU Leuven, Belgium in 2013, the MSc at the Univ. of Eastern Finland in 2007, and the Dipl. Eng. at the Technical Univ. of Iasi, Romania in 2006. He serves as a reviewer for top journals (such as TPAMI, TIP, IJCV, TNNLS, TCSVT, CVIU, PR) and conferences (ICCV, CVPR, ECCV, NeurIPS) and is associate editor for the Elsevier CVIU journal and, starting 2020, for IEEE Trans. PAMI and for the SIAM Journal on Imaging Sciences. He has served as area chair for ACCV 2018, ICCV 2019, and ECCV 2020. He received a NIPS 2017 best reviewer award. His work received the best student paper award at BMVC 2019, a best scientific paper award at ICPR 2012, the best paper award at the CVVT workshop (ECCV 2012), the best paper award at the ChaLearn LAP workshop (ICCV 2015), the best scientific poster award at EOS 2017, and the honorable mention award at FG 2017, and his team won a number of challenges including traffic sign detection (IJCNN 2013) and apparent age estimation (ICCV 2015). He is co-founder of Merantix and co-organizer of the NTIRE, CLIC, AIM and PIRM events. His current research interests include sparse and collaborative representations, deep learning, optical flow, image/video compression, restoration and enhancement.

Shuhang Gu

Shuhang Gu received the B.E. degree from the School of Astronautics, Beijing University of Aeronautics and Astronautics, China, in 2010, the M.E. degree from the Institute of Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, China, in 2013, and the Ph.D. degree from the Department of Computing, The Hong Kong Polytechnic University, in 2017. He currently holds a post-doctoral position at ETH Zurich, Switzerland. His research interests include image restoration, enhancement and compression.

Ming-Hsuan Yang

Ming-Hsuan Yang received the PhD degree in Computer Science from the University of Illinois at Urbana-Champaign. He is a full professor in Electrical Engineering and Computer Science at the University of California at Merced. He has published more than 120 papers in the field of computer vision. Yang served as program co-chair of ACCV 2014, general co-chair of ACCV 2016, and program co-chair of ICCV 2019. He serves as an editor for PAMI, IJCV, CVIU, IVC and JAIR. His research interests include object detection, tracking, recognition, image deblurring, super resolution, saliency detection, and image/video segmentation.

Lei Zhang

Lei Zhang (M'04, SM'14, F'18) received his B.Sc. degree in 1995 from Shenyang Institute of Aeronautical Engineering, Shenyang, P.R. China, and M.Sc. and Ph.D. degrees in Control Theory and Engineering from Northwestern Polytechnical University, Xi'an, P.R. China, in 1998 and 2001, respectively. From 2001 to 2002, he was a research associate in the Department of Computing, The Hong Kong Polytechnic University. From January 2003 to January 2006 he worked as a Postdoctoral Fellow in the Department of Electrical and Computer Engineering, McMaster University, Canada. In 2006, he joined the Department of Computing, The Hong Kong Polytechnic University, as an Assistant Professor. Since July 2017, he has been a Chair Professor in the same department. His research interests include computer vision, pattern recognition, image and video analysis, and biometrics. Prof. Zhang has published more than 200 papers in those areas. As of 2018, his publications have been cited more than 36,000 times in the literature. Prof. Zhang is an Associate Editor of IEEE Trans. on Image Processing, the SIAM Journal on Imaging Sciences, and Image and Vision Computing. He has been a "Clarivate Analytics Highly Cited Researcher" from 2015 to 2018.

Luc Van Gool

Luc Van Gool received a degree in electro-mechanical engineering at the Katholieke Universiteit Leuven in 1981. Currently, he is a full professor for Computer Vision at ETH Zurich and the Katholieke Universiteit Leuven in Belgium. He leads research and teaches at both places. He has authored over 300 papers. Luc Van Gool has been a program committee member of several major computer vision conferences (e.g., Program Chair of ICCV'05, Beijing, and General Chair of ICCV'11, Barcelona, and of ECCV'14, Zurich). His main interests include 3D reconstruction and modeling, object recognition, tracking, and gesture analysis. He received several best paper awards (e.g., the David Marr Prize '98, Best Paper CVPR'07, Tsuji Outstanding Paper Award ACCV'09, Best Vision Paper ICRA'09). In 2015 he received the 5-yearly Excellence Award in Applied Sciences from the Flemish Fund for Scientific Research, in 2016 a Koenderink Prize, and in 2017 a PAMI Distinguished Researcher award. He is a co-founder of more than 10 spin-off companies and was the holder of an ERC Advanced Grant (VarCity). Currently, he leads computer vision research for autonomous driving in the context of the Toyota TRACE labs in Leuven and at ETH, as well as image and video enhancement research for Huawei.

Cosmin Ancuti

Cosmin Ancuti received the PhD degree at Hasselt University, Belgium (2009). He was a post-doctoral fellow at IMINDS and the Intel Exascience Lab (IMEC), Leuven, Belgium (2010-2012) and a research fellow at Université catholique de Louvain, Belgium (2015-2017). Currently, he is a senior researcher/lecturer at University Politehnica Timisoara. He is the author of more than 50 papers published in international conference proceedings and journals. His research interests include image and video enhancement techniques, computational photography, and low-level computer vision.

Codruta O. Ancuti

Codruta O. Ancuti is a senior researcher/lecturer at University Politehnica Timisoara, Faculty of Electrical and Telecommunication Engineering. She obtained the PhD degree at Hasselt University, Belgium (2011), and between 2015 and 2017 she was a research fellow at the University of Girona, Spain (ViCOROB group). Her work received the best paper award at NTIRE 2017 (CVPR workshop). Her main research interests include image understanding and visual perception. She was the first to introduce several single-image enhancement techniques built on multi-scale fusion (e.g., color-to-grayscale conversion, image dehazing, underwater image and video restoration).

Kyoung Mu Lee

Kyoung Mu Lee received the B.S. and M.S. degrees from Seoul National University, Seoul, Korea, and the Ph.D. degree in Electrical Engineering from the University of Southern California in 1993. Currently he is with the Dept. of ECE at Seoul National University as a full professor. His primary research interests include scene understanding, object recognition, low-level vision, visual tracking, and visual navigation. He is currently serving as an AEIC (Associate Editor in Chief) of the IEEE TPAMI and an Area Editor of Computer Vision and Image Understanding (CVIU), and has served as an Associate Editor of the IEEE TPAMI, the Machine Vision and Applications (MVA) journal, the IPSJ Transactions on Computer Vision and Applications (CVA), and the IEEE Signal Processing Letters. He is an Advisory Board Member of the CVF (Computer Vision Foundation) and an Editorial Advisory Board Member for Academic Press/Elsevier. He has also served as an Area Chair of CVPR, ICCV, ECCV, and ACCV many times, and serves as a general co-chair of ACM MM 2018, ACCV 2018 and ICCV 2019. He was a Distinguished Lecturer of the Asia-Pacific Signal and Information Processing Association (APSIPA) for 2012-2013.

Michael S. Brown

Michael S. Brown obtained his BS and PhD in Computer Science from the University of Kentucky in 1995 and 2001, respectively. He is currently a professor and Canada Research Chair at York University in Toronto. Dr. Brown has served as an area chair multiple times for CVPR, ICCV, ECCV, and ACCV and was the general chair for CVPR 2018. He has served as an associate editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) and is currently on the editorial board of the International Journal of Computer Vision (IJCV). His research interests include computer vision, image processing, and computer graphics.

Eli Shechtman

Eli Shechtman is a Principal Scientist at the Creative Intelligence Lab at Adobe Research. He received the B.Sc. degree in Electrical Engineering (magna cum laude) from Tel-Aviv University in 1996. Between 2001 and 2007 he attended the Weizmann Institute of Science, where he received with honors his M.Sc. and Ph.D. degrees in Applied Mathematics and Computer Science. In 2007 he joined Adobe and started sharing his time as a post-doc with the University of Washington in Seattle. He has published over 60 academic publications and holds over 20 issued patents. He served as a Technical Papers Committee member at SIGGRAPH 2013 and 2014, as an Area Chair at CVPR'15, ICCV'15 and CVPR'17, and serves as an Associate Editor of TPAMI. He received several honors and awards, including the Best Paper prize at ECCV 2002, a Best Poster Award at CVPR 2004, and a Best Reviewer Award at ECCV 2014, and published two Research Highlights papers in the Communications of the ACM journal.

Zhiwu Huang

Zhiwu Huang is currently a postdoctoral researcher in the Computer Vision Lab, ETH Zurich, Switzerland. He received the PhD degree from the Institute of Computing Technology, Chinese Academy of Sciences, in 2015. His main research interest is in human-focused video analysis with Riemannian manifold networks and Wasserstein generative models.

Seungjun Nah

Seungjun Nah is a Ph.D. student at Seoul National University, advised by Prof. Kyoung Mu Lee. He received his BS degree from Seoul National University in 2014. He has worked on computer vision research topics including image/video deblurring, super-resolution, and neural network acceleration. He won the 1st place award in the NTIRE 2017 super-resolution challenge and workshop. He co-organized the NTIRE 2019 and AIM 2019 workshops and challenges on video quality restoration. He has reviewed conference (ICCV, CVPR, SIGGRAPH Asia) and journal (IJCV, TNNLS, TMM, TIP) paper submissions, and was selected as one of the best reviewers at ICCV 2019. His research interests include visual quality enhancement, realistic dataset construction, low-level computer vision, and efficient deep learning. He is currently a guest scientist at the Max Planck Institute for Intelligent Systems.

Kai Zhang

Abdelrahman Kamel Siddek Abdelhamed

Abdelrahman Abdelhamed is a PhD candidate at York University supervised by Prof. Michael S. Brown. He obtained an MSc degree in computer science from the National University of Singapore in 2016, and both MSc and BSc degrees in computer science from Assiut University in Egypt in 2014 and 2009, respectively. He received a best MSc thesis award from Assiut University and a best paper award from the Color and Imaging Conference 2019. He has volunteered as a reviewer for top conferences and journals (such as CVPR, ICCV, ECCV, AAAI, and TIP), as a website chair for CVPR 2018, and as a co-organizer of the NTIRE 2019 workshop. He has received a number of research awards including the Ontario Trillium Scholarship, the AdeptMind Scholarship, and the NUS Research Scholarship. His current research interests include computer vision, computational imaging, machine learning, and human-computer interaction.

Mahmoud Afifi

Mahmoud Afifi is a PhD candidate at York University supervised by Prof. Michael S. Brown. He obtained both MSc and BSc degrees in information technology from Assiut University in Egypt in 2015 and 2009, respectively. He received two best paper awards, from the Color and Imaging Conference 2019 and the IEEE International Conference on Mobile Data Management 2018. He has volunteered as a reviewer for several conferences and journals (such as BMVC, WACV, and IEEE Transactions on Intelligent Transportation Systems). He was an outstanding reviewer (honourable mention) at BMVC'19. His current research interests include computer vision and computational imaging.

Boaz Arad

Boaz Arad is the CTO of "Voyage 81", a Ben-Gurion University of the Negev (BGU) spin-off startup company. Technology developed during his Ph.D. studies at the BGU Interdisciplinary Computational Vision Lab now powers Voyage 81's core offerings. Alongside Prof. Ohad Ben-Shahar, Boaz collected and now curates the largest natural hyperspectral image database published to date. For his work on hyperspectral data recovery he was awarded the EMVA "Young Professional Award 2017" as well as the Zlotowski Center for Neuroscience "Best Research Project of 2016" award.

Martin Danelljan

Martin Danelljan received his Ph.D. degree from Linköping University, Sweden in 2018. He is currently a postdoctoral researcher at ETH Zurich, Switzerland. His main research interests are online machine learning methods for visual tracking and video object segmentation, probabilistic models for point cloud registration, and machine learning with no or limited supervision. His research in the field of visual tracking in particular has attracted much attention. In 2014, he won the Visual Object Tracking (VOT) Challenge and the OpenCV State-of-the-Art Vision Challenge. Furthermore, he achieved top ranks in the VOT2016 and VOT2017 challenges. He received the best paper award in the computer vision track at ICPR 2016.

Shanxin Yuan, Gregory Slabaugh, Ales Leonardis, Ohad Ben-Shahar, Graham Finlayson, Yi-Tun Lin, Dario Fuoli, Florin-Alexandru Vasluianu, Sanghyun Son, Andreas Lugmayr, Shai Givati

Program committee (TBU)

  • Abdelrahman Abdelhamed, York University, Canada
  • Mahmoud Afifi, York University, Canada
  • Timo Aila, NVIDIA Research
  • Codruta Ancuti, Universitatea Politehnica Timisoara, Romania
  • Cosmin Ancuti, UCL, Belgium
  • Boaz Arad, Voyage81, Israel
  • Nick Barnes, The Australian National University, Australia
  • Ohad Ben-Shahar, Ben Gurion University of the Negev, Israel
  • Yochai Blau, Technion, Israel
  • Michael S. Brown, Samsung Research/York University, Canada
  • Jianrui Cai, Hong Kong Polytechnic University
  • Subhasis Chaudhuri, IIT Bombay, India
  • Chia-Ming Cheng, MediaTek Inc., Taiwan
  • Cheng-Ming Chiang, MediaTek Inc., Taiwan
  • Sunghyun Cho, POSTECH, Korea
  • Christophe De Vleeschouwer, Universite Catholique de Louvain (UCL), Belgium
  • Chao Dong, SIAT, China
  • Weisheng Dong, Xidian University, China
  • Touradj Ebrahimi, EPFL, Switzerland
  • Graham Finlayson, University of East Anglia, UK
  • Corneliu Florea, University Politehnica of Bucharest, Romania
  • Alessandro Foi, Tampere University of Technology, Finland
  • Peter Gehler, University of Tuebingen, MPI Intelligent Systems, Amazon, Germany
  • Bastian Goldluecke, University of Konstanz, Germany
  • Luc Van Gool, ETH Zurich and KU Leuven, Belgium
  • Shuhang Gu, ETH Zurich, Switzerland
  • Christine Guillemot, INRIA, France
  • Michael Hirsch, Amazon
  • Chiu Man Ho, OPPO, China
  • Hiroto Honda, DeNA Co., Japan
  • Zhe Hu, Hikvision Research
  • Jia-Bin Huang, Virginia Tech, US
  • Zhiwu Huang, ETH Zurich, Switzerland
  • Andrey Ignatov, ETH Zurich, Switzerland
  • Michal Irani, Weizmann Institute, Israel
  • Sing Bing Kang, Zillow Group
  • Vivek Kwatra, Google
  • In So Kweon, KAIST, Korea
  • Christian Ledig, Imagen Technologies, US
  • Kyoung Mu Lee, Seoul National University, Korea
  • Seungyong Lee, POSTECH, Korea
  • Victor Lempitsky, Samsung AI & Skoltech, Russia
  • Ales Leonardis, Huawei Noah's Ark Lab
  • Stephen Lin, Microsoft Research Asia
  • Yi-Tun Lin, University of East Anglia, UK
  • Ming-Yu Liu, NVIDIA Research
  • Chen Change Loy, Chinese University of Hong Kong
  • Vladimir Lukin, National Aerospace University, Ukraine
  • Kede Ma, City University of Hong Kong
  • Vasile Manta, Technical University of Iasi, Romania
  • Yasuyuki Matsushita, Osaka University, Japan
  • Peyman Milanfar, Google and UCSC, US
  • Rafael Molina Soriano, University of Granada, Spain
  • Yusuke Monno, Tokyo Institute of Technology, Japan
  • Hajime Nagahara, Osaka University, Japan
  • Seungjun Nah, Seoul National University, Korea
  • Vinay P. Namboodiri, IIT Kanpur, India
  • Sylvain Paris, Adobe Research
  • Federico Perazzi, Adobe Research
  • Wenqi Ren, Chinese Academy of Sciences
  • Tobias Ritschel, University College London, UK
  • Antonio Robles-Kelly, Deakin University, Australia
  • Aline Roumy, INRIA, France
  • Yoichi Sato, University of Tokyo, Japan
  • Konrad Schindler, ETH Zurich, Switzerland
  • Nicu Sebe, University of Trento, Italy
  • Eli Shechtman, Adobe Research, US
  • Boxin Shi, Peking University, China
  • Wenzhe Shi, Twitter Inc.
  • Gregory Slabaugh, Huawei Noah's Ark Lab
  • Sabine Susstrunk, EPFL, Switzerland
  • Hugues Talbot, Universite Paris Est, France
  • Robby T. Tan, Yale-NUS College, Singapore
  • Masayuki Tanaka, Tokyo Institute of Technology, Japan
  • Jean-Philippe Tarel, IFSTTAR, France
  • Radu Timofte, ETH Zurich, Switzerland
  • George Toderici, Google, US
  • Jue Wang, Face++ (Megvii)
  • Oliver Wang, Adobe Systems Inc
  • Ting-Chun Wang, NVIDIA
  • Xintao Wang, The Chinese University of Hong Kong
  • Ming-Hsuan Yang, University of California at Merced, Google AI
  • Shanxin Yuan, Huawei Noah's Ark Lab
  • Wenjun Zeng, Microsoft Research
  • Kai Zhang, ETH Zurich, Switzerland
  • Lei Zhang, The Hong Kong Polytechnic University
  • Richard Zhang, Adobe
  • Jun-Yan Zhu, Adobe Inc., US
  • Wangmeng Zuo, Harbin Institute of Technology, China

Invited Talks

Jan Kautz


Title: Image to Image Translation [slides, video]

Abstract: Recent progress in generative models, and particularly generative adversarial networks (GANs), has been remarkable. They have been shown to excel at image-to-image translation problems as well as at other tasks such as image synthesis. I will present a number of our recent methods in this space, which, for instance, can translate images from one domain (e.g., day time) to another domain (e.g., night time) in an unsupervised fashion.

Bio: Jan Kautz is VP of Learning and Perception Research at NVIDIA. Jan and his team pursue fundamental research in the areas of computer vision and deep learning, including visual perception, geometric vision, generative models, and efficient deep learning. His and his team's work has been recognized with various awards and has been regularly featured in the media. Before joining NVIDIA in 2013, Jan was a tenured faculty member at University College London. He holds a BSc in Computer Science from the University of Erlangen-Nürnberg (1999), an MMath from the University of Waterloo (1999), received his PhD from the Max-Planck-Institut für Informatik (2003), and worked as a post-doctoral researcher at the Massachusetts Institute of Technology (2003-2006).

Zibo Meng


Title: Looking into the dark: from image to video [slides, video]

Abstract: In low-light conditions it is extremely difficult to acquire images of good quality due to the low signal-to-noise ratio. In this talk, an approach for low-light image enhancement is presented. It achieves state-of-the-art performance on a public dataset and has been delivered to the OPPO Reno2 and Reno3 series as the "Ultra Dark Mode". Furthermore, a discussion of applying the developed approach to video enhancement is presented.

Bio: Dr. Zibo Meng is a deep learning scientist at the OPPO US R&D center. He is working on developing novel algorithms for image/video enhancement on mobile devices. He received his Ph.D. degree in computer science from the University of South Carolina in 2018 under the supervision of Dr. Yan Tong. He has published over 20 papers and serves as a reviewer for more than 20 international conferences and journals.

Schedule: check the CVPR 2020 virtual NTIRE workshop landing page for LIVE sessions, Q&A, recordings, and interaction

All the accepted NTIRE workshop papers have a poster presentation. A subset also have an oral presentation.

List of NTIRE 2020 papers (poster session allocation)


(Poster session I #16) Multi-Step Reinforcement Learning for Single Image Super-Resolution
Kyle Vassilo; Cory Heatwole; Tarek Taha; Asif Mehmood
(Poster session I #18) A review of an old dilemma: demosaicking first, or denoising first?
Jin Qiyu; Gabriele Facciolo; Jean-Michel Morel
(Poster session I #26) LIDIA: Lightweight Learned Image Denoising with Instance Adaptation
Gregory Vaksman; Michael Elad; Peyman Milanfar
(Poster session I #30) FabSoften: Face Beautification via Dynamic Skin Smoothing, Guided Feathering, and Texture Restoration
Sudha Velusamy; Rishubh Parihar; Raviprasad Kini; Aniket Rege
(Poster session I #39) Replacing Mobile Camera ISP with a Single Deep Learning Model
Andrey Ignatov; Luc Van Gool; Radu Timofte
(Poster session I #46) L^2UWE: A Framework for the Efficient Enhancement of Low-Light Underwater Images Using Local Contrast and Multi-Scale Fusion
Tunai Porto Marques; Alexandra Branzan Albu
(Poster session I #49) Rendering Natural Camera Bokeh Effect with Deep Learning
Andrey Ignatov; Jagruti Patel; Radu Timofte
(Poster session I #50) Deep Wavelet Network with Domain Adaptation for Single Image Demoireing
Xiaotong Luo; Jiangtao Zhang; Ming Hong; Yanyun Qu; Yuan Xie; Li Cui-hua
(Poster session I #52) Hierarchical Regression Network for Spectral Reconstruction from RGB Images
Yuzhi Zhao; Po Lai Man; Qiong Yan; Wei Liu; Tingyu Lin
(Poster session I #54) Investigating Loss Functions for Extreme Super-Resolution
Younghyun Jo; Sejong Yang; Seon Joo Kim
(Poster session I #57) C3Net: Demoiréing Network Attentive in Channel, Color and Concatenation
Sangmin Kim; Hyungjoon Nam; Jisu Kim; Jechang Jeong
(Poster session I #60) Guided Frequency Separation Network for Real World Super-Resolution
Yuanbo Zhou; Wei Deng; Tong Tong; Qinquan Gao
(Poster session I #62) Trident Dehazing Network
Jing Liu; Haiyan Wu; Yuan Xie; Yanyun Qu; Lizhuang Ma
(Poster session I #72) NH-HAZE: An Image Dehazing Benchmark with Non-Homogeneous Hazy and Haze-Free Images
Codruta O. Ancuti; Cosmin Ancuti; Radu Timofte
(Poster session I #82) High-Resolution Dual-Stage Multi-Level Feature Aggregation for Single Image and Video Deblurring
Stephan Brehm; Sebastian Scherer; Rainer Lienhart
(Poster session I #83) NTIRE 2020 Challenge on Image Demoireing: Methods and Results
Shanxin Yuan; Radu Timofte; Ales Leonardis; Gregory Slabaugh et al.
(Poster session I #103) Superkernel Neural Architecture Search for Image Denoising
Marcin Możejko; Tomasz Latkowski; Łukasz Treszczotko; Michał Szafraniuk; Krzysztof Trojanowski
(Poster session I #104) Residual Pixel Attention Network for Spectral Reconstruction from RGB Images
Hao Peng; Xiaomei Chen; Jie Zhao
(Poster session I #105) FBRNN: feedback recurrent neural network for extreme image super-resolution
Jun Yeop Lee; Jaihyun Park; Kanghyu Lee; Jeongki Min; Gwantae Kim; Bokyeung Lee; Bonhwa Ku; David Han; Hanseok Ko
(Poster session I #107) NTIRE 2020 Challenge on NonHomogeneous Dehazing
Codruta O. Ancuti; Cosmin Ancuti; Florin-Alexandru Vasluianu; Radu Timofte et al.


(Poster session II #19) Sensor-realistic Synthetic Data Engine for Multi-frame High Dynamic Range Photography
Jinhan Hu; Gyeongmin Choe; Zeeshan Nadir; Osama Hassan; Seok-Jun Lee; Hamid Sheikh; Youngjun Yoo; Mike Polley
(Poster session II #20) ImagePairs: Realistic Super Resolution Dataset via Beam Splitter Camera Rig
Hamid Vaezi Joze; Ilya Zharkov; Karlton Powell; Carl Ringler; Luming Liang; Andy Roulston; Vivek Pradeep
(Poster session II #23) Identity Enhanced Residual Image Denoising
Saeed Anwar; Cong Huynh; Fatih Porikli
(Poster session II #24) Structure Preserving Compressive Sensing MRI Reconstruction using Generative Adversarial Networks
Puneesh Deora; Bhavya Vasudeva; Saumik Bhattacharya; Pyari Pradhan
(Poster session II #27) Sky Optimization: Semantically aware image processing of skies in low-light photography
Orly Liba; Longqi Cai; Yun-Ta Tsai; Elad Eban; Yair Movshovitz-Attias; Yael Pritch; Huizhong Chen; Jonathan Barron
(Poster session II #29) Fast and Flexible Image Blind Denoising via Competition of Experts
Shunta Maeda
(Poster session II #38) Semantic Pixel Distances for Image Editing
Josh Myers-Dean; Scott Wehrwein
(Poster session II #63) Densely Self-guided Wavelet Network for Image Denoising
Wei Liu; Qiong Yan; Yuzhi Zhao
(Poster session II #64) MMDM: Multi-frame and Multi-scale for Image Demoiréing
Shuai Liu; Chenghua LI; Nan Nan; Ziyao Zong; Ruixia Song
(Poster session II #65) Real-World Super-Resolution using Generative Adversarial Networks
Haoyu Ren; Amin Kheradmand; Mostafa El-Khamy; Shuangquan Wang; Dongwoon Bai; Jungwon Lee
(Poster session II #70) Perceptual Extreme Super-Resolution Network with Receptive Field Block
Taizhang Shang; Qiuju Dai; Shenchen Zhu; Tong Yang; Yandong Guo
(Poster session II #76) Real Image Denoising based on Multi-scale Residual Dense Block and Cascaded U-Net with Block-connection
Long Bao; Zengli Yang; Shuangquan Wang; Dongwoon Bai; Jungwon Lee
(Poster session II #78) Ensemble Dehazing Networks for Non-homogeneous Haze
Mingzhao Yu; Venkateswararao Cherukuri; Tiantong Guo; Vishal Monga
(Poster session II #81) Unsupervised Real-World Super Resolution with Cycle Generative Adversarial Network and Domain Discriminator
Gwantae Kim; Jaihyun Park; Kanghyu Lee; Jun Yeop Lee; Jeongki Min; Bokyeung Lee; David Han; Hanseok Ko
(Poster session II #87) Real-World Super-Resolution via Kernel Estimation and Noise Injection
Xiaozhong Ji; Yun Cao; Ying Tai; Chengjie Wang; Jilin Li; Feiyue Huang
(Poster session II #90) Unsupervised Image Super-Resolution with an Indirect Supervised Path
Shuaijun Chen; Zhen Han; Enyan Dai; Xu Jia; Liu Ziluan; Liu Xing; Xueyi Zou; Chunjing Xu; Jianzhuang Liu; Qi Tian
(Poster session II #91) Dual-domain Deep Convolutional Neural Networks for Image Demoireing
An Gia Vien; Hyunkook Park; Chul Lee
(Poster session II #97) NTIRE 2020 Challenge on Video Quality Mapping: Methods and Results
Dario Fuoli; Zhiwu Huang; Martin Danelljan; Radu Timofte et al.
(Poster session II #99) Knowledge Transfer Dehazing Network for NonHomogeneous Dehazing
Haiyan Wu; Jing Liu; Yuan Xie; Yanyun Qu; Lizhuang Ma
(Poster session II #110) NTIRE 2020 Challenge on Real Image Denoising: Dataset, Methods and Results
Abdelrahman Abdelhamed; Mahmoud Afifi; Radu Timofte; Michael Brown et al.
(Poster session II #108) NTIRE 2020 Challenge on Perceptual Extreme Super-Resolution: Methods and Results
Kai Zhang; Shuhang Gu; Radu Timofte et al.
(Poster session II #109) NTIRE 2020 Challenge on Real-World Image Super-Resolution: Methods and Results
Andreas Lugmayr; Martin Danelljan; Radu Timofte et al.


(Poster session III #1) DA-cGAN: A Framework for Indoor Radio Design Using a Dimension-Aware Conditional Generative Adversarial Network
Chun-Hao Liu; Hun Chang; Taesuh Park
(Poster session III #3) Joint Learning of Blind Video Denoising and Optical Flow Estimation
Songhyun Yu; Bumjun Park; Junwoo Park; Jechang Jeong
(Poster session III #5) Deploying Image Deblurring across Mobile Devices: A Perspective of Quality and Latency
Cheng-Ming Chiang; Yu Tseng; Yu-Syuan Xu; Hsien-Kai Kuo; Yi-Min Tsai; Guan-Yu Chen; Koan-Sin Tan; Wei-Ting Wang; Yu-Chieh Lin; Shou-Yao Tseng; Wei-Shiang Lin; Chia-Lin Yu; BY Shen; Kloze Kao; Chia-Ming Cheng; Hung-Jen Chen
(Poster session III #7) MSFSR: A Multi-Stage Face Super-Resolution with Accurate Facial Representation via Enhanced Facial Boundaries
Yunchen Zhang; Yi Wu; Liang Chen
(Poster session III #12) Color-wise Attention Network for Low-light Image Enhancement
Yousef Atoum; Mao Ye; Liu Ren; Ying Tai; Xiaoming Liu
(Poster session III #13) GradNet Image Denoising
Yang Liu; Saeed Anwar; Liang Zheng; Qi Tian
(Poster session III #15) Photosequencing of Motion Blur using Short and Long Exposures
Vijay Rengarajan; Shuo Zhao; Ruiwen Zhen; John Glotzbach; Hamid Sheikh; Aswin Sankaranarayanan
(Poster session III #33) Physically Plausible Spectral Reconstruction from RGB Images
Yi-Tun Lin; Graham Finlayson
(Poster session III #47) NTIRE 2020 Challenge on Image and Video Deblurring
Seungjun Nah; Sanghyun Son; Radu Timofte; Kyoung Mu Lee et al.
(Poster session III #68) Deep Generative Adversarial Residual Convolutional Networks for Real-World Super-Resolution
Rao Muhammad Umer; Gian Luca Foresti; Christian Micheloni
(Poster session III #71) Unsupervised Real Image Super-Resolution via Generative Variational AutoEncoder
Zhi-Song Liu; Zhisong Liu; Wan-Chi Siu; Marie-Paule Cani; Li-Wen Wang; Chu-Tak Li
(Poster session III #73) NTIRE 2020 Challenge on Spectral Reconstruction from an RGB Image
Boaz Arad; Radu Timofte; Yi-Tun Lin; Graham Finlayson; Ohad Ben-Shahar et al.
(Poster session III #79) NonLocal Channel Attention for NonHomogeneous Image Dehazing
Kareem Metwaly; Xuelu Li; Tiantong Guo; Vishal Monga
(Poster session III #80) Residual Channel Attention Generative Adversarial Network for Image Super-Resolution and Noise Reduction
Jie Cai; Zibo Meng; Chiu Man Ho
(Poster session III #84) Adaptive Weighted Attention Network with Camera Spectral Sensitivity Prior for Spectral Reconstruction from RGB Images
Jiaojiao Li; Chaoxiong Wu; Song Rui; Yunsong Li; Fei Liu
(Poster session III #85) Unsupervised Single Image Super-Resolution Network (USISResNet) for Real-World Data Using Generative Adversarial Network
Kalpesh Prajapati; Vishal Chudasama; Heena Patel; Kishor Upla; Raghavendra Ramachandra; Kiran Raja; Christoph Busch
(Poster session III #93) Moiré Pattern Removal via Attentive Fractal Network
Dejia Xu; Yihao Chu; Qingyan Sun
(Poster session III #95) SimUSR: A Simple but Strong Baseline for Unsupervised Image Super-resolution
Namhyuk Ahn; Jaejun Yoo; Kyung-Ah Sohn
(Poster session III #101) RGB to Spectral Reconstruction via Learned Basis Functions and Weights
Biebele Joslyn Fubara; Mohamed Sedky; David Dyke
(Poster session III #102) Fast Deep Multi-patch Hierarchical Network for Nonhomogeneous Image Dehazing
Sourya Dipta Das; Saikat Dutta