July 21, 2017, Honolulu, Hawaii

NTIRE 2017

New Trends in Image Restoration and Enhancement workshop

and challenge on image super-resolution

in conjunction with CVPR 2017

Sponsors




Call for papers

Image restoration and image enhancement are key computer vision tasks, aiming at the restoration of degraded image content or the filling in of missing information. Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but also substantial progress has been achieved.

Each step forward eases the use of images by people or computers for the fulfillment of further tasks, with image restoration or enhancement serving as an important frontend. Not surprisingly then, there is an ever growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, or medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

Papers addressing topics related to image restoration and enhancement are invited. The topics include, but are not limited to:

  • Image inpainting
  • Image deblurring
  • Image denoising
  • Image upsampling and super-resolution
  • Image filtering
  • Image dehazing
  • Demosaicing
  • Image enhancement: brightening, color adjustment, sharpening, etc.
  • Style transfer
  • Image generation and image hallucination
  • Image-quality assessment
  • Video restoration and enhancement
  • Hyperspectral imaging
  • Methods robust to changing weather conditions
  • Studies and applications of the above.

IMPORTANT! The final results of the competition have been announced!

Jointly with NTIRE 2017 we organize an example-based single-image super-resolution challenge. The authors of the top methods in each category will be invited to submit papers to the NTIRE 2017 workshop.

The authors of the top methods co-author the NTIRE 2017 SR Challenge report:

@InProceedings{Timofte_2017_CVPR_Workshops,
author = {Timofte, Radu and Agustsson, Eirikur and Van Gool, Luc and Yang, Ming-Hsuan and Zhang, Lei and others},
title = {NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}


Contact:

Radu Timofte, radu.timofte@vision.ee.ethz.ch

Computer Vision Laboratory

ETH Zurich, Switzerland

NTIRE challenge on example-based single image super-resolution

In order to gauge the current state of the art in example-based single-image super-resolution, and to compare and promote different solutions, we are organizing an NTIRE challenge in conjunction with the CVPR 2017 conference. For this purpose we introduce the large DIV2K dataset with DIVerse 2K resolution images.

The challenge has 2 tracks:

  1. Track 1: bicubic uses bicubic downscaling (Matlab imresize), one of the most common settings in the recent single-image super-resolution literature; a sketch of this degradation is given below the list.
  2. Track 2: unknown assumes that the explicit forms of the degradation operators are unknown; only pairs of low- and high-resolution training images are available.
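
For Track 1, a low-resolution input is obtained from a high-resolution image by bicubic downscaling. A minimal Python sketch using Pillow follows; note that Pillow's BICUBIC filter only approximates Matlab's imresize (whose kernel and default antialiasing differ slightly), and the file names are illustrative.

from PIL import Image  # pip install Pillow

def bicubic_downscale(hr_path, lr_path, scale=4):
    # Generate a Track-1-style low-resolution image by bicubic downscaling.
    # Pillow's bicubic kernel approximates, but does not exactly match,
    # Matlab's imresize, so results will differ slightly from official data.
    hr = Image.open(hr_path)
    lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
    lr.save(lr_path)

# hypothetical file names, for illustration only
bicubic_downscale("0001.png", "0001x4.png", scale=4)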

To learn more about the challenge, to participate, and to access the newly collected DIV2K dataset with DIVerse 2K resolution images, everybody is invited to register for the corresponding track.

The training data is made available to the registered participants.

The top ranked participants co-author the challenge paper report.

NTIRE 2017 SR Challenge report:

@InProceedings{Timofte_2017_CVPR_Workshops,
author = {Timofte, Radu and Agustsson, Eirikur and Van Gool, Luc and Yang, Ming-Hsuan and Zhang, Lei and others},
title = {NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}

DIV2K dataset and study:

@InProceedings{Agustsson_2017_CVPR_Workshops,
author = {Agustsson, Eirikur and Timofte, Radu},
title = {NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}

Supplementary material (PSNR, SSIM, IFC, CORNIA results for top challenge methods, VDSR and A+ on DIV2K, Urban100, B100, Set14, Set5)
DIV2K dataset
NTIRE 2017 Challenge Factsheets
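
The primary fidelity measure in these comparisons is PSNR, with SSIM as a complementary structural measure. For reference, a minimal NumPy computation of PSNR is sketched below; the official evaluation scripts (e.g., any border cropping or color-space conversion) may differ, so the border handling here is an assumption.

import numpy as np

def psnr(hr, sr, data_range=255.0, border=0):
    # Peak signal-to-noise ratio between ground-truth and restored images.
    # `border` optionally crops image edges before scoring, a common SR
    # convention; the challenge's exact cropping rule is not assumed here.
    if border > 0:
        hr = hr[border:-border, border:-border]
        sr = sr[border:-border, border:-border]
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)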

Important dates



Challenge (all deadlines at 5PM Pacific Time):

  • Site online: January 21, 2017
  • Release of train data (low-res and high-res images) and validation data (low-res only): February 14, 2017
  • Validation server online: March 1, 2017
  • Final test data release (low-res only), validation data (high-res) released, validation server closed: April 10, 2017
  • Test high-res results submission deadline: April 17, 2017
  • Fact sheets submission deadline: April 17, 2017
  • Code/executable submission deadline: April 17, 2017
  • Final test results release to the participants: April 24, 2017
  • Paper submission deadline for entries from the challenge: May 4, 2017

Workshop (all deadlines at 5PM Pacific Time):

  • Paper submission server online: March 1, 2017
  • Paper submission deadline: April 24, 2017 (extended!)
  • Paper submission deadline (only for methods from the challenge!): May 4, 2017
  • Decision notification: May 8, 2017
  • Camera-ready deadline: May 19, 2017
  • Workshop day: July 21, 2017

Submit



Instructions and Policies
Format and paper length

A paper submission must be written in English, in PDF format, and be at most 8 pages (excluding references) in double-column layout. The paper format must follow the same guidelines as for all CVPR 2017 submissions:
http://cvpr2017.thecvf.com/submission/main_conference/author_guidelines

Double-blind review policy

The review process is double blind. Authors do not know the names of the chair/reviewers of their papers. Reviewers do not know the names of the authors.

Dual submission policy

Dual submission is allowed with the CVPR 2017 main conference only. If a paper is also submitted to CVPR and accepted there, it cannot be published at both CVPR and the workshop.

Submission site

https://cmt3.research.microsoft.com/NTIRE2017

Proceedings

Accepted and presented papers will be published after the conference in the CVPR Workshops proceedings, together with the CVPR 2017 main conference papers.

Author Kit

http://cvpr2017.thecvf.com/files/cvpr2017AuthorKit.zip
The author kit provides a LaTeX2e template for paper submissions. Please refer to the example egpaper_for_review.pdf for detailed formatting instructions.


Published papers
The 19 accepted NTIRE workshop papers were published under the book title "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops" by Computer Vision Foundation Open Access and the IEEE Xplore Digital Library.

People



Organizers

Radu Timofte

Radu Timofte obtained a PhD degree in Electrical Engineering at KU Leuven, Belgium in 2013, an MSc at the Univ. of Eastern Finland in 2007, and a Dipl. Eng. at the Technical Univ. of Iasi, Romania in 2006. Currently, he is a research group leader in the Computer Vision Lab at ETH Zurich, Switzerland. He serves as a reviewer for top journals (such as TPAMI, TIP, IJCV, TNNLS, TCSVT, CVIU, PRL) and conferences (ICCV, CVPR, ECCV, NIPS). His work received a best scientific paper award at ICPR 2012, the best paper award at the CVVT workshop (ECCV 2012), and the best paper award at the ChaLearn LAP workshop (ICCV 2015), and his team won a number of challenges, including traffic sign detection (IJCNN 2013) and apparent age estimation (ICCV 2015). He is a co-founder of Merantix. His current research interests include sparse and collaborative representations, classification, deep learning, optical flow, and image restoration and enhancement.

Eirikur Agustsson

Eirikur Agustsson received an MSc degree in Electrical Engineering and Information Technology from ETH Zurich and a double BSc degree in Mathematics and Electrical Engineering from the University of Iceland. Currently, he is a Research Assistant and PhD student at ETH Zurich under the supervision of Prof. Luc Van Gool. His main research interests include deep learning for regression and classification, and super-resolution.

Ming-Hsuan Yang

Ming-Hsuan Yang received the PhD degree in Computer Science from the University of Illinois at Urbana-Champaign. He is an associate professor in Electrical Engineering and Computer Science at the University of California, Merced. He has published more than 120 papers in the field of computer vision. Yang served as a program co-chair of ACCV 2014 and general co-chair of ACCV 2016, and serves as a program co-chair of ICCV 2019. He serves as an editor for PAMI, IJCV, CVIU, IVC and JAIR. His research interests include object detection, tracking, recognition, image deblurring, super-resolution, saliency detection, and image/video segmentation.

Lei Zhang

Lei Zhang (M’04, SM’14) received his B.Sc. degree in 1995 from Shenyang Institute of Aeronautical Engineering, Shenyang, P.R. China, and his M.Sc. and Ph.D. degrees in Control Theory and Engineering from Northwestern Polytechnical University, Xi’an, P.R. China, in 1998 and 2001, respectively. From 2001 to 2002, he was a research associate in the Department of Computing, The Hong Kong Polytechnic University. From January 2003 to January 2006 he worked as a Postdoctoral Fellow in the Department of Electrical and Computer Engineering, McMaster University, Canada. In 2006, he joined the Department of Computing, The Hong Kong Polytechnic University, as an Assistant Professor. Since July 2015, he has been a Full Professor in the same department. His research interests include computer vision, pattern recognition, image and video processing, and biometrics. Prof. Zhang has published more than 200 papers in those areas. As of 2016, his publications had been cited more than 20,000 times in the literature. Prof. Zhang is an Associate Editor of IEEE Trans. on Image Processing, SIAM Journal on Imaging Sciences, and Image and Vision Computing. He is a "Highly Cited Researcher" selected by Thomson Reuters.

Luc Van Gool

Luc Van Gool received a degree in electro-mechanical engineering at the Katholieke Universiteit Leuven in 1981. Currently, he is a full professor for Computer Vision at ETH Zurich and the Katholieke Universiteit Leuven in Belgium; he leads research and teaches at both places. He has authored over 200 papers in his field. Luc Van Gool has been a program committee member of several major computer vision conferences (e.g., Program Chair of ICCV'05, Beijing, and General Chair of ICCV'11, Barcelona, and of ECCV'14, Zurich). His main interests include 3D reconstruction and modeling, object recognition, tracking, and gesture analysis. He received several best paper awards (e.g., David Marr Prize '98, Best Paper CVPR'07, Tsuji Outstanding Paper Award ACCV'09, Best Vision Paper ICRA'09). He is a co-founder of 10 spin-off companies. In 2015 he received the five-yearly Excellence Award in Applied Sciences from the Flemish Fund for Scientific Research. He is the holder of an ERC Advanced Grant (VarCity).

Program committee

Invited Talks



Jan Kautz

Title: Unsupervised Image-to-Image Translation Networks

Abstract: Most of the existing image-to-image translation frameworks---mapping an image in one domain to a corresponding image in another---are based on supervised learning, i.e., pairs of corresponding images in two domains are required for learning the translation function. This largely limits their applications, because capturing corresponding images in two different domains is often a difficult task. To address the issue, we propose the UNsupervised Image-to-image Translation (UNIT) framework, which is based on variational autoencoders and generative adversarial networks. The proposed framework can learn the translation function without any corresponding images in two domains. We enable this learning capability by combining a weight-sharing constraint and an adversarial training objective. Through visualization results from various unsupervised image translation tasks, we verify the effectiveness of the proposed framework. An ablation study further reveals the critical design choices. Moreover, we apply the UNIT framework to the unsupervised domain adaptation task and achieve better results than competing algorithms do in benchmark datasets.
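
As a structural illustration of the weight-sharing constraint described in the abstract, the toy PyTorch sketch below ties two domain-specific encoder front-ends to a single shared latent block; the layer shapes are invented for the example, and this is not the speaker's actual code.

import torch
import torch.nn as nn

class SharedLatentEncoder(nn.Module):
    # Two domain-specific front-ends whose last stage is one shared module,
    # so both domains map into a common latent space (UNIT-style weight
    # sharing, in sketch form).
    def __init__(self):
        super().__init__()
        self.front_a = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU())
        self.front_b = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU())
        self.shared = nn.Sequential(nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU())

    def forward(self, x_a, x_b):
        return self.shared(self.front_a(x_a)), self.shared(self.front_b(x_b))

enc = SharedLatentEncoder()
z_a, z_b = enc(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))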

Bio: Jan leads the Visual Computing Research team at NVIDIA, working predominantly on computer vision problems (from low-level vision through geometric vision to high-level vision), as well as machine learning problems (including deep reinforcement learning, generative models, and efficient deep learning). Before joining NVIDIA in 2013, Jan was a tenured faculty member at University College London. He holds a BSc in Computer Science from University of Erlangen-Nürnberg (1999), an MMath from the University of Waterloo (1999), received his PhD from the Max-Planck-Institut für Informatik (2003), and worked as a post-doc at the Massachusetts Institute of Technology (2003-2006). Jan has chaired numerous conferences (Eurographics Symposium on Rendering 2007, IEEE Symposium on Interactive Ray-Tracing 2008, Pacific Graphics 2011, CVMP 2012, Eurographics 2014) and has been on several editorial boards (IEEE Transactions on Visualization & Computer Graphics, The Visual Computer, Computer Graphics Forum, International Journal of Image and Graphics).

Sabine Süsstrunk

Title: Near-Infrared for Image Enhancement and Restoration

Abstract: Given how most modern cameras capture images, the disambiguation of how much the illuminant(s) and the object reflectance contribute to a pixel value is mathematically ill-posed. Blur and limited depth-of-field may also introduce noise and unwanted artifacts. To solve these problems, experts have proposed modified hardware, smart algorithms using priors, and machine learning approaches. In our research, we use "extra information" in the form of near-infrared (NIR), the wavelength range adjacent to the visible spectrum and easily captured by conventional sensors. Introducing NIR can improve image enhancement and restoration tasks such as denoising, dehazing, deblurring and depth-of-field extension, as well as computer vision applications such as white-balancing, shadow detection, segmentation, and classification.

Bio: Sabine Süsstrunk is a full professor in the School of Information and Communication Sciences (IC) at the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, where she has led the Images and Visual Representation Lab since 1999. Her research areas are in computational photography, color computer vision and color image processing, image quality, and computational aesthetics. She has published over 150 scientific papers, of which 6 have received best paper/demo awards (ACM Multimedia 2010, IS&T CIC 2012, IEEE ICIP 2013, etc.), and holds 10 patents. In 2013, she received the IS&T/SPIE Electronic Imaging Scientist of the Year Award. She is a Fellow of IEEE and IS&T.

Peyman Milanfar

Title: Regularization by Denoising - "The little engine that could"

Abstract: Image denoising is the most fundamental problem in image enhancement, and it is largely solved: It has reached impressive heights in performance and quality -- almost as good as it can ever get. But interestingly, it turns out that we can solve many other problems using the image denoising "engine". I will describe the Regularization by Denoising (RED) framework: using the denoising engine in defining the regularization of any inverse problem. The idea is to define an explicit image-adaptive regularization functional directly using a high performance denoiser. Surprisingly, the resulting regularizer is guaranteed to be convex, and the overall objective functional is explicit, clear and well-defined. With complete flexibility to choose the iterative optimization procedure for minimizing this functional, RED is capable of incorporating any image denoising algorithm as a regularizer, treating general inverse problems very effectively, and converging to the globally optimal result. I will show examples of its utility, including state-of-the-art results in image deblurring and super-resolution problems.
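
In RED, the regularizer induced by a denoiser D is rho(x) = 0.5 * x'(x - D(x)), and under the framework's assumptions its gradient is simply x - D(x). The NumPy sketch below runs gradient descent on a denoising-type problem, with a Gaussian filter standing in for a high-performance denoiser purely for brevity; it is a toy under stated assumptions, not the speaker's implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def red_restore(y, lam=0.2, step=0.5, iters=100, sigma=1.5):
    # Minimize 0.5*||x - y||^2 + lam * 0.5 * x'(x - D(x)) by gradient
    # descent; per RED, the regularizer's gradient reduces to x - D(x).
    x = y.astype(np.float64).copy()
    for _ in range(iters):
        data_grad = x - y                          # fidelity-term gradient
        reg_grad = x - gaussian_filter(x, sigma)   # x - D(x)
        x -= step * (data_grad + lam * reg_grad)
    return x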

Bio: Peyman leads the Computational Imaging/Image Processing team in Google Research. Prior to this, he was a Professor of Electrical Engineering at UC Santa Cruz from 1999 to 2014, where he is now a visiting faculty member. He was Associate Dean for Research at the School of Engineering from 2010 to 2012. From 2012 to 2014 he was on leave at Google-x, where he helped develop the imaging pipeline for Google Glass. Peyman received his undergraduate education in electrical engineering and mathematics from the University of California, Berkeley, and the MS and PhD degrees in electrical engineering from the Massachusetts Institute of Technology. He holds 11 US patents, several of which are commercially licensed. He founded MotionDSP in 2005. He has been a keynote speaker at numerous technical conferences including the Picture Coding Symposium (PCS), SIAM Imaging Sciences, SPIE, and the International Conference on Multimedia and Expo (ICME). Along with his students, he has won several best paper awards from the IEEE Signal Processing Society. He is a Fellow of the IEEE "for contributions to inverse problems and super-resolution in imaging."

Liang Lin

Title: Attention-aware Face Hallucination via Deep Reinforcement Learning

Abstract: Face hallucination is a domain-specific super-resolution problem with the goal of generating high-resolution (HR) faces from low-resolution (LR) input images. In contrast to existing methods that often learn a single patch-to-patch mapping from LR to HR images and disregard the contextual interdependency between patches, we study a novel Attention-aware Face Hallucination (Attention-FH) framework which resorts to deep reinforcement learning for sequentially discovering attended patches and then performing the facial part enhancement by fully exploiting the global interdependency of the image. The Attention-FH approach jointly learns the recurrent policy network and the local enhancement network by maximizing a long-term reward that reflects the hallucination performance over the whole image. Therefore, our proposed Attention-FH is capable of adaptively personalizing an optimal searching path for each face image according to its own characteristics.

Bio: Liang Lin is the Executive R&D Director of SenseTime Group Limited and a full Professor at Sun Yat-sen University. He is an Excellent Young Scientist of the National Natural Science Foundation of China. He received his B.S. and Ph.D. degrees from the Beijing Institute of Technology (BIT), Beijing, China, in 2003 and 2008, respectively, and he was a joint Ph.D. student with the Department of Statistics, University of California, Los Angeles (UCLA). From 2008 to 2010, he was a Post-Doctoral Fellow at UCLA. From 2014 to 2015, he was a senior visiting scholar with The Hong Kong Polytechnic University and The Chinese University of Hong Kong. He currently leads the SenseTime R&D teams in developing cutting-edge, deliverable solutions in computer vision, data analysis and mining, and intelligent robotic systems. He has authored and co-authored more than 100 papers in top-tier academic journals and conferences (e.g., 10 papers in TPAMI/IJCV and 40+ papers in CVPR/ICCV/NIPS/IJCAI). He serves as an associate editor of IEEE Trans. Human-Machine Systems, The Visual Computer and Neurocomputing. He has served as an Area/Session Chair for numerous conferences such as ICME, ACCV, ICMR. He was the recipient of the Best Paper Runner-Up Award at ACM NPAR 2010, a Google Faculty Award in 2012, the Best Student Paper Award at IEEE ICME 2014, and the Hong Kong Scholars Award in 2014.

Wenzhe Shi & Christian Ledig

Title: Neural networks for image and video super resolution

Abstract: The most important considerations when applying neural networks to super resolution (SR) are the training data, the network architecture and the objective function(s). In this talk we will present our recent work on both network architectures and objective functions for SR. In 2016 we developed an innovative sub-pixel convolution layer which greatly increases the speed of using neural networks for super resolution. By leveraging the speed-up, it is now also possible to efficiently train deep residual networks for the task. Deep residual networks currently provide the most accurate reconstructions in terms of peak signal-to-noise ratio. However, they are limited by the pixel-wise objective functions (e.g., MSE/L2 distance) used in training and struggle to resolve all the high-frequency details, so results are still perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. By redefining the objective functions we achieved a step change in the perceived quality of super-resolved images in our more recent work with GANs. Finally, we will briefly comment on the link between image and video super-resolution and compression, and discuss the challenges moving forward.
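
The sub-pixel convolution layer mentioned above ends with a periodic rearrangement of a (C*r^2, H, W) feature map into a (C, H*r, W*r) output. Below is a minimal NumPy version of that rearrangement as a sketch, not the authors' implementation; PyTorch ships the same operation as nn.PixelShuffle.

import numpy as np

def pixel_shuffle(x, r):
    # Rearrange a (C*r^2, H, W) feature map into a (C, H*r, W*r) image:
    # each group of r*r channels fills an r x r block of output pixels.
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into the r x r grid
    x = x.transpose(0, 3, 1, 4, 2)    # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# example: 16 channels with upscaling factor 4 -> a single-channel image
out = pixel_shuffle(np.random.rand(16, 8, 8), r=4)   # shape (1, 32, 32)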

Bio: Wenzhe Shi works at Magic Pony of Twitter as a computer vision research lead. He received his Ph.D. training under Prof. Daniel Rueckert in the Biomedical Image Analysis group at Imperial College London from 2009 to 2012, where he stayed as a research associate from 2012 to 2014. His research interests include image/video super-resolution, compression, frame synthesis, motion estimation and segmentation.
Christian Ledig (@LedigChr) is a Computer Vision Researcher at Magic Pony, Twitter. He received a PhD from Imperial College London in 2015, where he was working on medical image analysis under the supervision of Prof. Daniel Rueckert. His current research focuses on deep learning approaches and generative models, in particular generative adversarial networks, for image and video super-resolution.

Eli Shechtman

Title: Image Stylization – from Patches to Neural Networks and Back

Abstract: Neural stylization methods became highly popular in the last couple of years following the work by Gatys et al. These methods showed impressive results of transforming real photos into paintings given just a single style example. However, neural stylization methods have some limitations: they do not work well for some combinations of photos and styles and do not provide any control to the user to fix or change the result. Furthermore, they are effective for painterly styles and often result in non-photorealistic outputs. A more established family of stylization methods is based on patch-based synthesis and goes back to 2001 with the seminal Image Analogies work by Hertzmann et al. These techniques require guiding channels in addition to the style example, but allow a finer level of control over the output.
I will discuss recent progress on both fronts. I will first show how simple changes to the Image Analogies framework lead to significant improvements in challenging problems like fluid animation, lighting-aware 3D model stylization and face stylization in video. I will then describe how control over spatial location, color information, and spatial scale can be introduced to neural stylization. Finally, I will show how photorealistic results can be obtained with neural stylization.

Bio: Eli Shechtman is a Principal Scientist at the Creative Intelligence Lab at Adobe Research. He received the B.Sc. degree in Electrical Engineering (magna cum laude) from Tel-Aviv University in 1996. Between 2001 and 2007 he attended the Weizmann Institute of Science, where he received his M.Sc. and Ph.D. degrees with honors in Applied Mathematics and Computer Science. In 2007 he joined Adobe and started sharing his time as a post-doc with the University of Washington in Seattle. He has published over 60 academic publications and holds over 20 issued patents. He served as a Technical Papers Committee member at SIGGRAPH 2013 and 2014 and as an Area Chair at CVPR'15, ICCV'15 and CVPR'17, and serves as an Associate Editor of TPAMI. He has received several honors and awards, including the Best Paper prize at ECCV 2002, a Best Poster Award at CVPR 2004, and a Best Reviewer Award at ECCV 2014, and has published two Research Highlights papers in the Communications of the ACM journal.

Phillip Isola

Title: Image-to-Image Translation using Adversarial Nets

Abstract: I will talk about our recent work on using adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. This framework can be applied in both the paired data setting, where example input-output pairs are provided at training time, and in the unpaired setting, where two domains, X and Y, are provided, but no information is given as to which instance in X maps to which instance in Y. Unlike traditional supervised objectives, adversarial approaches naturally extend to the unpaired setting because the adversarial loss evaluates whether an output is a valid member of a set, rather than forcing the output to match a specific instance. Still, the desired mapping is highly under-constrained in the unpaired setting. To further constrain the problem, we train translation functions in both directions, F: X-->Y and G: Y-->X, and introduce a cycle consistency loss to push G(F(X)) ≈ X (and vice versa). I will demonstrate that these approaches are effective at a wide range of problems, including standard ones like semantic segmentation and image colorization, as well as important new ones, such as synthesizing photos of cats from sketches, and turning horses into zebras.
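
The cycle-consistency term described above fits in a few lines. Here is a hedged PyTorch sketch, where f maps domain X to Y and g maps Y back to X; the networks themselves are placeholders, and in training this term is added to the adversarial losses.

import torch

def cycle_consistency_loss(f, g, x, y):
    # L1 cycle loss: g(f(x)) should reconstruct x, and f(g(y)) should
    # reconstruct y, constraining the otherwise under-determined mappings.
    return (torch.mean(torch.abs(g(f(x)) - x)) +
            torch.mean(torch.abs(f(g(y)) - y)))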

Bio: Phillip Isola is a postdoctoral scholar in the EECS department at UC Berkeley. He recently received his Ph.D. in the Brain & Cognitive Sciences department at MIT. He studies visual intelligence from the perspective of both minds and machines. He was the recipient of both the NSF Graduate Fellowship and, presently, the NSF Postdoctoral Fellowship.

Schedule





07:30
Poster setup (all papers have poster panels for the whole day)

Locally Adaptive Color Correction for Underwater Dehazing and Image Matching
Codruta O. Ancuti, Cosmin Ancuti, Christophe De Vleeschouwer, Rafael Garcia
Depth-Stretch: Enhancing depth perception without depth
Hagit Hel-Or, Yacov Hel-Or, Renato Keshet
FAST: A Framework to Accelerate Super-Resolution Processing on Compressed Videos
Zhengdong Zhang, Vivienne Sze
Fast external denoising using pre-learned transformations
Shibin Parameswaran, Enming Luo, Charles-Alban Deledalle, Truong Q. Nguyen
FormResNet: Formatted Residual Learning for Image Restoration
Jianbo Jiao, Wei-Chih Tu, Shengfeng He, Rynson W. H. Lau
Reflectional and Rotational Invariances in Single Image Superresolution
Simon Donné, Laurens Meeus, Hiep Quang Luong, Bart Goossens, Wilfried Philips
Image Super Resolution Based on Fusing Multiple Convolution Neural Networks
Haoyu Ren, Mostafa El-Khamy, Jungwon Lee
PaletteNet: Image Recolorization with Given Color Palette
Junho Cho, Sangdoo Yun, Kyoung Mu Lee, Jin Young Choi
SRHRF+: Self-Example Enhanced Single Image Super-Resolution Using Hierarchical Random Forests
Jun-Jie Huang, Tianrui Liu, Pier Luigi Dragotti, Tania Stathaki
Image Denoising via CNNs: An Adversarial Approach
Nithish Divakar, R. Venkatesh Babu
Multi-Resolution Data Fusion for Super-Resolution Electron Microscopy
Suhas Sreehari, S. V. Venkatakrishnan, Katherine L. Bouman, Jeff P. Simmons, Larry F. Drummy, Charles A. Bouman
NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study
Eirikur Agustsson, Radu Timofte
NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results
Radu Timofte, Eirikur Agustsson, Luc Van Gool, Ming-Hsuan Yang, Lei Zhang, et al.
Enhanced Deep Residual Networks for Single Image Super-Resolution
Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee
Beyond Deep Residual Learning for Image Restoration: Persistent Homology-Guided Manifold Simplification
Woong Bae, Jaejun Yoo, Jong Chul Ye
A Deep Convolutional Neural Network with Selection Units for Super-Resolution
Jae-Seok Choi, Munchurl Kim
Balanced Two-Stage Residual Networks for Image Super-Resolution
Yuchen Fan, Honghui Shi, Jiahui Yu, Ding Liu, Wei Han, Haichao Yu, Zhangyang Wang, Xinchao Wang, Thomas S. Huang
Fast and Accurate Image Super-Resolution Using A Combined Loss
Jinchang Xu, Yu Zhao, Yuan Dong, Hongliang Bai
Deep Wavelet Prediction for Image Super-resolution
Tiantong Guo, Hojjat Seyed Mousavi, Tiep Huu Vu, Vishal Monga

08:00
Invited Talk 1: Unsupervised Image-to-Image Translation Networks
Jan Kautz (NVIDIA)


11:00
Invited Talk 3: Regularization by Denoising - "The little engine that could"
Peyman Milanfar (Google)


NTIRE 2017 Awards



Best Paper Award (Challenge Track)
Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee
"Enhanced Deep Residual Networks for Single Image Super-Resolution"


Best Paper Award (Regular Track)
Codruta Ancuti, Cosmin Ancuti, Christophe De Vleeschouwer, Rafael Garcia
"Locally Adaptive Color Correction for Underwater Dehazing and Image Matching"

1st Place Award (Challenge on Single Image Super-Resolution)
Bee Lim, Sanghyun Son, Seungjun Nah, Heewon Kim, Kyoung Mu Lee
SNU_CVLab team

2nd Place Award (Challenge on Single Image Super-Resolution)
Xintao Wang, Yapeng Tian, Ke Yu, Yulun Zhang, Shixiang Wu, Chao Dong, Liang Lin, Yu Qiao, Chen Change Loy
HelloSR team

3rd Place Award (Challenge on Single Image Super-Resolution)
Woong Bae, Jaejun Yoo, Yoseob Han, Jong Chul Ye
Lab402 team


4th Place Award (Challenge on Single Image Super-Resolution)
Jae-Seok Choi, Munchurl Kim
VICLab team

5th Place Award (Challenge on Single Image Super-Resolution)
Yuchen Fan, Jiahui Yu, Wei Han, Ding Liu, Haichao Yu, Zhangyang Wang, Honghui Shi, Xinchao Wang, Thomas S. Huang
UIUC-IFP team