Journal of Computer Applications   2017, Vol. 37 Issue (4): 1174-1178  DOI: 10.11772/j.issn.1001-9081.2017.04.1174

Cite this article

OUYANG Ning, ZENG Mengping, LIN Leping. Parallel convolutional neural network for super-resolution reconstruction[J]. Journal of Computer Applications, 2017, 37(4): 1174-1178. DOI: 10.11772/j.issn.1001-9081.2017.04.1174.


Parallel convolutional neural network for super-resolution reconstruction
OUYANG Ning1,2, ZENG Mengping2, LIN Leping1,2
1. Key Laboratory of Cognitive Radio and Information Processing of Ministry of Education (Guilin University of Electronic Technology), Guilin Guangxi 541004, China;
2. School of Information and Communication, Guilin University of Electronic Technology, Guilin Guangxi 541004, China
Abstract: To extract more effective features and speed up the convergence of model training, a super-resolution reconstruction algorithm based on a parallel convolutional neural network was proposed. The network consists of two branches with different structures: one is a simple residual network, whose optimal residual mapping is easier to learn than the original mapping; the other is a convolutional neural network with nonlinear mapping layers, which increases the nonlinearity of the network. Since the parallel structure increases model complexity, convergence speed becomes the key issue. To address this problem, Local Response Normalization (LRN) layers were added after the convolutional layers to simplify the model parameters and enhance the feature-fitting ability, thereby accelerating convergence. Experimental results show that, compared with algorithms based on deep convolutional neural networks, the proposed method converges faster, improves visual quality, and raises the Peak Signal-to-Noise Ratio (PSNR) by at least 0.2 dB.
Key words: parallel convolutional neural network    residual network    nonlinear mapping    Local Response Normalization (LRN)    convergence speed
0 Introduction

The SRCNN model demonstrated the feasibility of directly learning an end-to-end mapping between LR and HR images, which suggests that stacking more convolutional layers to extract more features might improve SRCNN's reconstruction quality; deeper networks, however, are hard to train and slow to converge. This paper therefore introduces a parallel network structure whose two branches are trained independently, without interfering with each other. The two branches, with different structures, capture a richer variety of effective features, addressing the problem that SRCNN's features are few and homogeneous. Because the parallel structure widens the network, increasing both the number of parameters and the number of features, the reconstruction quality of the model improves.

1 Super-resolution reconstruction model based on a parallel network

Figure 1 Parallel network structure
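The parallel structure of Figure 1 can be sketched as follows: the interpolated LR input is fed to both branches, and their reconstructions are fused into the final HR estimate. The fusion rule (simple averaging here) and the function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def parallel_forward(x, branch_residual, branch_nonlinear,
                     fuse=lambda a, b: 0.5 * (a + b)):
    """Forward pass of a two-branch parallel network (sketch).

    x                : interpolated LR image, as a numpy array.
    branch_residual  : callable implementing the residual branch.
    branch_nonlinear : callable implementing the nonlinear-mapping branch.
    fuse             : how the two reconstructions are combined
                       (averaging is an assumption).
    """
    y1 = branch_residual(x)    # residual branch: learns x + F(x)
    y2 = branch_nonlinear(x)   # nonlinear-mapping branch
    return fuse(y1, y2)
```

Because the branches only meet at the fusion step, they can be trained independently, which matches the paper's claim that the two training processes do not interfere with each other.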

2 Branch network design and training

2.1 Local Response Normalization (LRN) in convolutional neural networks

$\boldsymbol{b}_{x,y}^{i}=\boldsymbol{a}_{x,y}^{i}\Big/\Big(k+\alpha \sum\limits_{j=\max (0,\,i-n/2)}^{\min (N-1,\,i+n/2)}{{\big(\boldsymbol{a}_{x,y}^{j}\big)}^{2}}\Big)^{\beta }$ (1)
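A direct numpy transcription of Eq. (1) may clarify how the normalization sums squared activations over a window of `n` adjacent channels; the default hyperparameter values are the common AlexNet choices, assumed here rather than quoted from the paper.

```python
import numpy as np

def lrn(a, k=2.0, alpha=1e-4, beta=0.75, n=5):
    """Local Response Normalization across channels, following Eq. (1).

    a : feature maps of shape (N, H, W), where N is the channel count;
        a[i] plays the role of a^i_{x,y} for all spatial positions.
    """
    N = a.shape[0]
    b = np.empty_like(a)
    for i in range(N):
        lo = max(0, i - n // 2)          # lower summation bound max(0, i-n/2)
        hi = min(N - 1, i + n // 2)      # upper summation bound min(N-1, i+n/2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
        b[i] = a[i] / denom
    return b
```

The effect is competitive: a large activation in one channel suppresses the normalized responses of its neighbouring channels, which encourages different feature maps to specialize.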

2.2 Residual network in the residual branch

Figure 2 Simple residual network

 $\boldsymbol{y}=F(\boldsymbol{x}, \{{{\boldsymbol{W}}_{i}}\})+\boldsymbol{x}$ (2)
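Eq. (2) says the branch only has to learn the residual mapping F(x, {W_i}); the input x is added back through an identity shortcut. A minimal sketch, using a toy fully connected F with square weight matrices (the paper's branch is convolutional, but the skip-connection principle is identical):

```python
import numpy as np

def residual_block(x, weights, activation=np.tanh):
    """Compute y = F(x, {W_i}) + x as in Eq. (2).

    x       : input vector.
    weights : list of square weight matrices {W_i} defining F;
              the last layer is linear, the others pass through
              the activation, mirroring the usual residual block.
    """
    h = x
    for W in weights[:-1]:
        h = activation(W @ h)
    h = weights[-1] @ h          # last layer linear before the addition
    return h + x                 # identity shortcut
```

When the optimal mapping is close to the identity, F only needs to be pushed toward zero, which is the sense in which the residual mapping is easier to optimize than the original one.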

2.3 Network training

3 Experiments and result analysis

Figure 3 Feature maps of the two branches

Figure 4 Comparison of the original HR bird image and the reconstruction results of each method
Figure 5 Comparison of the original HR ppt3 image and the reconstruction results of each method

Figure 6 PSNR comparison between a single branch with LRN and SRCNN

Figure 7 Convergence rate and PSNR curves on Set5 for the proposed algorithm and the comparison algorithms

4 Conclusion
