June 15, Seattle, Washington

NTIRE 2020

New Trends in Image Restoration and Enhancement workshop

and challenges on image and video restoration and enhancement

in conjunction with CVPR 2020

Sponsors








Call for papers

Image restoration, enhancement and manipulation are key computer vision tasks that aim to restore degraded image content, fill in missing information, or transform and/or manipulate an image to achieve a desired target (with respect to perceptual quality, content, or the performance of applications working on such images). Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but substantial progress has also been achieved.

Each step forward eases the use of images by people or computers for the fulfillment of further tasks, as image restoration, enhancement and manipulation serve as an important frontend. Not surprisingly, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, and medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

This workshop builds upon the success of the previous NTIRE editions at CVPR 2017, 2018 and 2019 and at ACCV 2016. Moreover, it relies on all the people associated with the CLIC 2018, 2019 and 2020, PIRM 2018, AIM 2019 and NTIRE events: organizers, PC members, distinguished speakers, authors of published papers, challenge participants and winning teams.

Papers addressing topics related to image restoration, enhancement and manipulation are invited. The topics include, but are not limited to:

  • Image/video inpainting
  • Image/video deblurring
  • Image/video denoising
  • Image/video upsampling and super-resolution
  • Image/video filtering
  • Image/video de-hazing, de-raining, de-snowing, etc.
  • Demosaicing
  • Image/video compression
  • Removal of artifacts, shadows, glare and reflections, etc.
  • Image/video enhancement: brightening, color adjustment, sharpening, etc.
  • Style transfer
  • Hyperspectral imaging
  • Underwater imaging
  • Methods robust to changing weather conditions / adverse outdoor conditions
  • Image/video restoration, enhancement, manipulation on constrained settings
  • Image/video processing on mobile devices
  • Visual domain translation
  • Multimodal translation
  • Perceptual enhancement
  • Perceptual manipulation
  • Depth estimation
  • Image/video generation and hallucination
  • Image/video quality assessment
  • Image/video semantic segmentation
  • Studies and applications of the above.

NTIRE 2020 has the following associated groups of challenges:

  • image challenges
  • video challenges

The authors of the top methods in each category will be invited to submit papers to the NTIRE 2020 workshop.

The authors of the top methods will co-author the challenge reports.

The accepted NTIRE workshop papers will be published under the book title "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops" by Computer Vision Foundation Open Access and IEEE Xplore Digital Library.

Contact:

Radu Timofte, radu.timofte@vision.ee.ethz.ch

Computer Vision Laboratory

ETH Zurich, Switzerland



NTIRE 2020 video challenges

Important dates



Challenges (all deadlines at 5PM Pacific Time)

  • Site online: December 01, 2019
  • Release of train data and validation data: December 17, 2019
  • Validation server online: January 06, 2020
  • Final test data release, validation server closed: March 16, 2020
  • Test results submission deadline: March 26, 2020 (EXTENDED)
  • Fact sheets and code/executable submission deadline: March 26, 2020 (EXTENDED)
  • Preliminary test results release to the participants: March 28, 2020 (EXTENDED)
  • Paper submission deadline for entries from the challenges: April 09, 2020 (EXTENDED)

Workshop (all deadlines at 5PM Pacific Time)

  • Paper submission server online: January 20, 2020
  • Paper submission deadline (regular workshop papers): March 22, 2020 (EXTENDED)
  • Paper submission deadline (only for methods from the challenges!): April 09, 2020 (EXTENDED)
  • Regular papers decision notification: April 10, 2020 (EXTENDED)
  • Camera-ready deadline: April 19, 2020 (EXTENDED)
  • Workshop day: June 15, 2020

Submit



Instructions and Policies
Format and paper length

A paper submission must be written in English, in PDF format, and at most 8 pages long (excluding references) in double-column format. The paper must follow the same guidelines as all CVPR 2020 submissions:
http://cvpr2020.thecvf.com/submission/main-conference/author-guidelines

Double-blind review policy

The review process is double blind: authors do not know the names of the chairs/reviewers handling their papers, and reviewers do not know the names of the authors.

Dual submission policy

Dual submission is allowed with the CVPR 2020 main conference only. If a paper is also submitted to CVPR and accepted there, it cannot be published at both CVPR and the workshop.

Submission site

https://cmt3.research.microsoft.com/NTIRE2020

Proceedings

Accepted and presented papers will be published after the conference in the CVPR 2020 Workshops proceedings, together with the CVPR 2020 main conference papers.

Author Kit

http://cvpr2020.thecvf.com/sites/default/files/2019-09/cvpr2020AuthorKit.zip
The author kit provides a LaTeX2e template for paper submissions. Please refer to the example egpaper_for_review.pdf for detailed formatting instructions.
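As a rough sketch of how a submission based on the kit is typically set up (assuming the 2020 kit follows the layout of earlier CVPR author kits; consult egpaper_for_review.tex in the kit itself for the authoritative version), the review-mode preamble looks roughly like this:

```latex
\documentclass[10pt,twocolumn,letterpaper]{article}

% Style files shipped with the CVPR author kit
\usepackage{cvpr}
\usepackage{times}
\usepackage{graphicx}

% Replace **** with the paper ID assigned by the CMT submission site
\def\cvprPaperID{****}
% Keep \cvprfinalcopy commented out for the anonymous review version;
% uncomment it only when preparing the camera-ready copy
%\cvprfinalcopy

\begin{document}

\title{Your NTIRE 2020 Submission Title}
\author{Anonymous NTIRE 2020 submission\\Paper ID \cvprPaperID}
\maketitle

\begin{abstract}
   Abstract text goes here.
\end{abstract}

\section{Introduction}
Body text goes here.

{\small
\bibliographystyle{ieee_fullname}
\bibliography{egbib}
}

\end{document}
```

Note that keeping the final-copy switch disabled is what keeps the submission anonymous, consistent with the double-blind review policy above.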

People



Organizers

Radu Timofte

Radu Timofte is a lecturer and research group leader in the Computer Vision Laboratory at ETH Zurich, Switzerland. He obtained a PhD degree in Electrical Engineering at KU Leuven, Belgium in 2013, an MSc at the Univ. of Eastern Finland in 2007, and a Dipl. Eng. at the Technical Univ. of Iasi, Romania in 2006. He serves as a reviewer for top journals (such as TPAMI, TIP, IJCV, TNNLS, TCSVT, CVIU, PR) and conferences (ICCV, CVPR, ECCV, NeurIPS) and is an associate editor for the Elsevier CVIU journal and, starting in 2020, for IEEE Trans. PAMI and the SIAM Journal on Imaging Sciences. He has served as area chair for ACCV 2018, ICCV 2019 and ECCV 2020. He received a NIPS 2017 best reviewer award. His work received the best student paper award at BMVC 2019, a best scientific paper award at ICPR 2012, the best paper award at the CVVT workshop (ECCV 2012), the best paper award at the ChaLearn LAP workshop (ICCV 2015), the best scientific poster award at EOS 2017 and an honorable mention award at FG 2017, and his team won a number of challenges, including traffic sign detection (IJCNN 2013) and apparent age estimation (ICCV 2015). He is a co-founder of Merantix and a co-organizer of the NTIRE, CLIC, AIM and PIRM events. His current research interests include sparse and collaborative representations, deep learning, optical flow, and image/video compression, restoration and enhancement.

Shuhang Gu

Shuhang Gu received the B.E. degree from the School of Astronautics, Beijing University of Aeronautics and Astronautics, China, in 2010, the M.E. degree from the Institute of Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, China, in 2013, and the Ph.D. degree from the Department of Computing, The Hong Kong Polytechnic University, in 2017. He currently holds a post-doctoral position at ETH Zurich, Switzerland. His research interests include image restoration, enhancement and compression.

Ming-Hsuan Yang

Ming-Hsuan Yang received the PhD degree in Computer Science from University of Illinois at Urbana-Champaign. He is a full professor in Electrical Engineering and Computer Science at University of California at Merced. He has published more than 120 papers in the field of computer vision. Yang serves as a program co-chair of ACCV 2014, general co-chair of ACCV 2016, and program co-chair of ICCV 2019. He serves as an editor for PAMI, IJCV, CVIU, IVC and JAIR. His research interests include object detection, tracking, recognition, image deblurring, super resolution, saliency detection, and image/video segmentation.

Lei Zhang

Lei Zhang (M'04, SM'14, F'18) received his B.Sc. degree in 1995 from Shenyang Institute of Aeronautical Engineering, Shenyang, P.R. China, and his M.Sc. and Ph.D. degrees in Control Theory and Engineering from Northwestern Polytechnical University, Xi'an, P.R. China, in 1998 and 2001, respectively. From 2001 to 2002, he was a research associate in the Department of Computing, The Hong Kong Polytechnic University. From January 2003 to January 2006 he worked as a Postdoctoral Fellow in the Department of Electrical and Computer Engineering, McMaster University, Canada. In 2006, he joined the Department of Computing, The Hong Kong Polytechnic University, as an Assistant Professor. Since July 2017, he has been a Chair Professor in the same department. His research interests include computer vision, pattern recognition, image and video analysis, and biometrics. Prof. Zhang has published more than 200 papers in those areas. As of 2018, his publications have been cited more than 36,000 times in the literature. Prof. Zhang is an Associate Editor of IEEE Trans. on Image Processing, the SIAM Journal on Imaging Sciences, and Image and Vision Computing. He was named a "Clarivate Analytics Highly Cited Researcher" from 2015 to 2018.

Luc Van Gool

Luc Van Gool received a degree in electro-mechanical engineering from the Katholieke Universiteit Leuven in 1981. Currently, he is a full professor of Computer Vision at ETH Zurich and the Katholieke Universiteit Leuven in Belgium, and he leads research and teaches at both places. He has authored over 300 papers. Luc Van Gool has been a program committee member of several major computer vision conferences (e.g., Program Chair of ICCV'05, Beijing, and General Chair of ICCV'11, Barcelona, and ECCV'14, Zurich). His main interests include 3D reconstruction and modeling, object recognition, tracking and gesture analysis. He has received several best paper awards (e.g., the David Marr Prize '98, Best Paper CVPR'07, Tsuji Outstanding Paper Award ACCV'09, and Best Vision Paper ICRA'09). In 2015 he received the 5-yearly Excellence Award in Applied Sciences from the Flemish Fund for Scientific Research, in 2016 a Koenderink Prize, and in 2017 a PAMI Distinguished Researcher award. He is a co-founder of more than 10 spin-off companies and was the holder of an ERC Advanced Grant (VarCity). Currently, he leads computer vision research for autonomous driving in the context of the Toyota TRACE labs in Leuven and at ETH, as well as image and video enhancement research for Huawei.

Cosmin Ancuti

Cosmin Ancuti received the PhD degree from Hasselt University, Belgium (2009). He was a post-doctoral fellow at iMinds and the Intel Exascience Lab (IMEC), Leuven, Belgium (2010-2012) and a research fellow at Universite Catholique de Louvain, Belgium (2015-2017). Currently, he is a senior researcher/lecturer at University Politehnica Timisoara. He is the author of more than 50 papers published in international conference proceedings and journals. His areas of interest include image and video enhancement techniques, computational photography, and low-level computer vision.

Codruta O. Ancuti

Codruta O. Ancuti is a senior researcher/lecturer at University Politehnica Timisoara, Faculty of Electrical and Telecommunication Engineering. She obtained the PhD degree from Hasselt University, Belgium (2011), and between 2015 and 2017 she was a research fellow at the University of Girona, Spain (ViCOROB group). Her work received the best paper award at NTIRE 2017 (CVPR workshop). Her main research interests include image understanding and visual perception. She was the first to introduce several single-image enhancing techniques built on multi-scale fusion (e.g. color-to-grayscale conversion, image dehazing, and underwater image and video restoration).

Kyoung Mu Lee

Kyoung Mu Lee received the B.S. and M.S. degrees from Seoul National University, Seoul, Korea, and the Ph.D. degree in Electrical Engineering from the University of Southern California in 1993. Currently he is a full professor in the Dept. of ECE at Seoul National University. His primary research interests include scene understanding, object recognition, low-level vision, visual tracking, and visual navigation. He is currently serving as an Associate Editor in Chief (AEIC) of the IEEE TPAMI and an Area Editor of Computer Vision and Image Understanding (CVIU), and has served as an Associate Editor of the IEEE TPAMI, the Machine Vision and Applications (MVA) journal, the IPSJ Transactions on Computer Vision and Applications (CVA), and the IEEE Signal Processing Letters. He is an Advisory Board Member of the CVF (Computer Vision Foundation) and an Editorial Advisory Board Member for Academic Press/Elsevier. He has also served as an Area Chair of CVPR, ICCV, ECCV, and ACCV many times, and served as a general co-chair of ACM MM 2018, ACCV 2018 and ICCV 2019. He was a Distinguished Lecturer of the Asia-Pacific Signal and Information Processing Association (APSIPA) for 2012-2013.

Michael S. Brown

Michael S. Brown obtained his BS and PhD in Computer Science from the University of Kentucky in 1995 and 2001, respectively. He is currently a professor and Canada Research Chair at York University in Toronto. Dr. Brown has served as an area chair multiple times for CVPR, ICCV, ECCV, and ACCV and was the general chair for CVPR 2018. He has served as an associate editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) and is currently on the editorial board of the International Journal of Computer Vision (IJCV). His research interests include computer vision, image processing, and computer graphics.

Eli Shechtman

Eli Shechtman is a Principal Scientist at the Creative Intelligence Lab at Adobe Research. He received the B.Sc. degree in Electrical Engineering (magna cum laude) from Tel-Aviv University in 1996. Between 2001 and 2007 he attended the Weizmann Institute of Science, where he received with honors his M.Sc. and Ph.D. degrees in Applied Mathematics and Computer Science. In 2007 he joined Adobe and started sharing his time as a post-doc with the University of Washington in Seattle. He has published over 60 academic publications and holds over 20 issued patents. He served as a Technical Paper Committee member at SIGGRAPH 2013 and 2014 and as an Area Chair at CVPR'15, ICCV'15 and CVPR'17, and serves as an Associate Editor at TPAMI. He has received several honors and awards, including the Best Paper prize at ECCV 2002, a Best Poster Award at CVPR 2004, and a Best Reviewer Award at ECCV 2014, and has published two Research Highlights papers in the Communications of the ACM journal.

Zhiwu Huang

Zhiwu Huang is currently a postdoctoral researcher in the Computer Vision Lab, ETH Zurich, Switzerland. He received the PhD degree from the Institute of Computing Technology, Chinese Academy of Sciences in 2015. His main research interest is in human-focused video analysis with Riemannian manifold networks and Wasserstein generative models.

Seungjun Nah

Seungjun Nah is a Ph.D. student at Seoul National University, advised by Prof. Kyoung Mu Lee. He received his BS degree from Seoul National University in 2014. He has worked on computer vision research topics including image/video deblurring, super-resolution, and neural network acceleration. He won the 1st place award in the NTIRE 2017 super-resolution challenge and workshop. He co-organized the NTIRE 2019 and AIM 2019 workshops and challenges on video quality restoration. He has reviewed conference (ICCV, CVPR, SIGGRAPH Asia) and journal (IJCV, TNNLS, TMM, TIP) paper submissions and was recognized as one of the best reviewers at ICCV 2019. His research interests include visual quality enhancement, realistic dataset construction, low-level computer vision, and efficient deep learning. He is currently a guest scientist at the Max Planck Institute for Intelligent Systems.

Kai Zhang

Abdelrahman Kamel Siddek Abdelhamed

Abdelrahman Abdelhamed is a PhD candidate at York University supervised by Prof. Michael S. Brown. He obtained an MSc degree in computer science from the National University of Singapore in 2016, and both MSc and BSc degrees in computer science from Assiut University in Egypt in 2014 and 2009, respectively. He received a best MSc thesis award from Assiut University and a best paper award from the Color and Imaging Conference 2019. He has volunteered as a reviewer for top conferences and journals (such as CVPR, ICCV, ECCV, AAAI, and TIP), a website chair for CVPR 2018, and a co-organizer for the NTIRE 2019 workshop. He has received a number of research awards, including the Ontario Trillium Scholarship, the AdeptMind Scholarship, and the NUS Research Scholarship. His current research interests include computer vision, computational imaging, machine learning, and human-computer interaction.

Mahmoud Afifi

Mahmoud Afifi is a PhD candidate at York University supervised by Prof. Michael S. Brown. He obtained both MSc and BSc degrees in information technology from Assiut University in Egypt in 2015 and 2009, respectively. He received two best paper awards, from the Color and Imaging Conference 2019 and the IEEE International Conference on Mobile Data Management 2018. He has volunteered as a reviewer for several conferences and journals (such as BMVC, WACV, and IEEE Transactions on Intelligent Transportation Systems). He was an outstanding reviewer (honourable mention) at BMVC'19. His current research interests include computer vision and computational imaging.

Boaz Arad

Boaz Arad is the CTO of "Voyage 81", a Ben-Gurion University of the Negev (BGU) spin-off startup company. Technology developed during his Ph.D. studies at the BGU Interdisciplinary Computational Vision Lab now powers Voyage 81's core offerings. Alongside Prof. Ohad Ben-Shahar, Boaz collected and now curates the largest natural hyperspectral image database published to date. For his work on hyperspectral data recovery he was awarded the EMVA "Young Professional Award 2017" as well as the Zlotowski Center for Neuroscience "Best Research Project of 2016" award.

Martin Danelljan

Martin Danelljan received his Ph.D. degree from Linköping University, Sweden in 2018. He is currently a postdoctoral researcher at ETH Zurich, Switzerland. His main research interests are online machine learning methods for visual tracking and video object segmentation, probabilistic models for point cloud registration, and machine learning with no or limited supervision. His research in the field of visual tracking in particular has attracted much attention. In 2014, he won the Visual Object Tracking (VOT) Challenge and the OpenCV State-of-the-Art Vision Challenge. Furthermore, he achieved top ranks in the VOT2016 and VOT2017 challenges. He received the best paper award in the computer vision track at ICPR 2016.

Program committee (TBU)

  • Abdelrahman Abdelhamed, York University, Canada
  • Mahmoud Afifi, York University, Canada
  • Timo Aila, NVIDIA Research
  • Codruta Ancuti, Universitatea Politehnica Timisoara, Romania
  • Cosmin Ancuti, UCL, Belgium
  • Boaz Arad, Voyage81, Israel
  • Nick Barnes, The Australian National University, Australia
  • Ohad Ben-Shahar, Ben Gurion University of the Negev, Israel
  • Yochai Blau, Technion, Israel
  • Michael S. Brown, Samsung Research/York University, Canada
  • Jianrui Cai, Hong Kong Polytechnic University
  • Subhasis Chaudhuri, IIT Bombay, India
  • Chia-Ming Cheng, MediaTek Inc., Taiwan
  • Cheng-Ming Chiang, MediaTek Inc., Taiwan
  • Sunghyun Cho, POSTECH, Korea
  • Christophe De Vleeschouwer, Universite Catholique de Louvain (UCL), Belgium
  • Chao Dong, SIAT, China
  • Weisheng Dong, Xidian University, China
  • Touradj Ebrahimi, EPFL, Switzerland
  • Graham Finlayson, University of East Anglia, UK
  • Corneliu Florea, University Politehnica of Bucharest, Romania
  • Alessandro Foi, Tampere University of Technology, Finland
  • Peter Gehler, University of Tuebingen, MPI Intelligent Systems, Amazon, Germany
  • Bastian Goldluecke, University of Konstanz, Germany
  • Luc Van Gool, ETH Zurich and KU Leuven, Belgium
  • Shuhang Gu, ETH Zurich, Switzerland
  • Christine Guillemot, INRIA, France
  • Michael Hirsch, Amazon
  • Chiu Man Ho, OPPO, China
  • Hiroto Honda, DeNA Co., Japan
  • Zhe Hu, Hikvision Research
  • Jia-Bin Huang, Virginia Tech, US
  • Zhiwu Huang, ETH Zurich, Switzerland
  • Andrey Ignatov, ETH Zurich, Switzerland
  • Michal Irani, Weizmann Institute, Israel
  • Sing Bing Kang, Zillow Group
  • Vivek Kwatra, Google
  • In So Kweon, KAIST, Korea
  • Christian Ledig, Imagen Technologies, US
  • Kyoung Mu Lee, Seoul National University, Korea
  • Seungyong Lee, POSTECH, Korea
  • Victor Lempitsky, Samsung AI & Skoltech, Russia
  • Ales Leonardis, Huawei Noah's Ark Lab
  • Stephen Lin, Microsoft Research Asia
  • Yi-Tun Lin, University of East Anglia, UK
  • Ming-Yu Liu, NVIDIA Research
  • Chen Change Loy, Chinese University of Hong Kong
  • Vladimir Lukin, National Aerospace University, Ukraine
  • Kede Ma, City University of Hong Kong
  • Vasile Manta, Technical University of Iasi, Romania
  • Yasuyuki Matsushita, Osaka University, Japan
  • Peyman Milanfar, Google and UCSC, US
  • Rafael Molina Soriano, University of Granada, Spain
  • Yusuke Monno, Tokyo Institute of Technology, Japan
  • Hajime Nagahara, Osaka University, Japan
  • Seungjun Nah, Seoul National University, Korea
  • Vinay P. Namboodiri, IIT Kanpur, India
  • Sylvain Paris, Adobe Research
  • Federico Perazzi, Adobe Research
  • Wenqi Ren, Chinese Academy of Sciences
  • Tobias Ritschel, University College London, UK
  • Antonio Robles-Kelly, Deakin University, Australia
  • Aline Roumy, INRIA, France
  • Yoichi Sato, University of Tokyo, Japan
  • Konrad Schindler, ETH Zurich, Switzerland
  • Nicu Sebe, University of Trento, Italy
  • Eli Shechtman, Adobe Research, US
  • Boxin Shi, Peking University, China
  • Wenzhe Shi, Twitter Inc.
  • Gregory Slabaugh, Huawei Noah's Ark Lab
  • Sabine Susstrunk, EPFL, Switzerland
  • Hugues Talbot, Universite Paris Est, France
  • Robby T. Tan, Yale-NUS College, Singapore
  • Masayuki Tanaka, Tokyo Institute of Technology, Japan
  • Jean-Philippe Tarel, IFSTTAR, France
  • Radu Timofte, ETH Zurich, Switzerland
  • George Toderici, Google, US
  • Jue Wang, Face++ (Megvii)
  • Oliver Wang, Adobe Systems Inc
  • Ting-Chun Wang, NVIDIA
  • Xintao Wang, The Chinese University of Hong Kong
  • Ming-Hsuan Yang, University of California at Merced, Google AI
  • Shanxin Yuan, Huawei Noah's Ark Lab
  • Wenjun Zeng, Microsoft Research
  • Kai Zhang, ETH Zurich, Switzerland
  • Lei Zhang, The Hong Kong Polytechnic University
  • Richard Zhang, Adobe
  • Jun-Yan Zhu, Adobe Inc., US
  • Wangmeng Zuo, Harbin Institute of Technology, China

Invited Talks (TBA)



Schedule (TBA)





NTIRE 2020 Awards


TBA

  • Best Paper Awards
  • Challenge Winners
  • Challenge Awards