Journal of Computer Applications   2016, Vol. 36 Issue (9): 2486-2491  DOI: 10.11772/j.issn.1001-9081.2016.09.2486

### Cite this article

ZHANG Lei, ZHU Yixin, XU Chun, YU Kai. Software defect detection algorithm based on dictionary learning[J]. Journal of Computer Applications, 2016, 36(9): 2486-2491. DOI: 10.11772/j.issn.1001-9081.2016.09.2486.

Software defect detection algorithm based on dictionary learning
ZHANG Lei, ZHU Yixin, XU Chun, YU Kai
College of Computer Science and Engineering, Xinjiang University of Finance and Economics, Urumqi Xinjiang 830000, China
Background: This work is partially supported by the National Natural Science Foundation of China (71561025), the Xinjiang Social Science Foundation (13CTJ023) and the Xinjiang University Scientific Research Project (XJEDU2013I27)
ZHANG Lei, born in 1974, M.S., lecturer. Her research interests include computer networks, information security, data mining.
ZHU Yixin, born in 1974, Ph. D., lecturer. His research interests include computer network security, complex network communication.
XU Chun, born in 1977, Ph. D., associate professor. Her research interests include computer networks, natural language processing.
YU Kai, born in 1974, Ph. D., associate professor. His research interests include complex networks, information dissemination.
Abstract: Since existing dictionary learning methods cannot effectively construct a discriminative structured dictionary, a dictionary learning method with both discriminative and representative ability was proposed and applied to software defect detection. Firstly, the sparse representation model was redesigned to train a structured dictionary by adding a discriminant constraint term to the objective function, so that each class sub-dictionary has strong representative ability for samples of its own class but poor representative ability for samples of other classes. Secondly, a Fisher-criterion discriminant term was added so that the representation coefficients are discriminative across classes. Finally, the optimization of the designed dictionary learning model was solved to obtain a strongly structured and sparsely representative dictionary. The NASA defect dataset was selected as the experimental data; compared with Principal Component Analysis (PCA), Logistic Regression (LR), decision tree, Support Vector Machine (SVM) and a typical dictionary learning method, the proposed method improved both accuracy and F-measure. Experimental results indicate that the proposed method can increase detection accuracy while improving classifier performance.
Key words: dictionary learning; sparse representation; Fisher criterion; software defect detection; machine learning
0 Introduction

1 Related algorithms

2 The proposed algorithm

 Figure 1 Flow of software defect detection based on dictionary learning
3 Software defect detection algorithm based on dictionary learning
3.1 Sparse representation classifier

 ${\boldsymbol{x}_i} = \sum\limits_{j = 1}^N {{d_{i,j}}{x_j} = } \boldsymbol{X}{\boldsymbol{d}_i}$ (1)

$d_{i,j}$ are the representation coefficients, and $\boldsymbol{d}_i = [d_{i,1}, d_{i,2}, \ldots, d_{i,i-1}, 0, d_{i,i+1}, \ldots, d_{i,N}]^{\mathrm{T}}$ is the sparse coefficient vector, in which the 0 at position $i$ indicates that $\boldsymbol{x}_i$ does not take part in its own reconstruction. Solving for $\boldsymbol{d}_i$ can be cast as the following optimization problem:

 $\begin{gathered} \mathop {\min }\limits_{{\boldsymbol{d}_i}} {\left\| {{\boldsymbol{d}_i}} \right\|_0} \hfill \\ s.t.\;\;\;\;{\boldsymbol{x}_i} = \boldsymbol{X}{\boldsymbol{d}_i} \hfill \\ \end{gathered}$ (2)

 $\begin{gathered} \mathop {\min }\limits_{{\boldsymbol{d}_i}} {\left\| {{\boldsymbol{d}_i}} \right\|_1} \hfill \\ s.t.\;\;\;\;{\boldsymbol{x}_i} = \boldsymbol{X}{\boldsymbol{d}_i} \hfill \\ \end{gathered}$ (3)

 $\begin{gathered} \mathop {\min }\limits_{{\boldsymbol{d}_i}} {\left\| {{\boldsymbol{d}_i}} \right\|_1} \hfill \\ s.t.\;\;\;\;{\left\| {{\boldsymbol{x}_i} - \boldsymbol{X}{\boldsymbol{d}_i}} \right\|_2} < \varepsilon \hfill \\ \end{gathered}$ (4)
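As a concrete illustration of the relaxed problems (3) and (4), the ℓ1-penalized form can be solved with iterative soft-thresholding (ISTA). The following numpy sketch is illustrative only; the function name `sparse_code`, the penalty `lam`, and the iteration budget are assumptions, not taken from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    # Element-wise soft-thresholding: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, X, lam=0.1, n_iter=200):
    """Solve min_d 0.5*||x - X d||_2^2 + lam*||d||_1 by ISTA (a sketch)."""
    # Step size from the spectral norm of X (Lipschitz constant of the gradient).
    L = np.linalg.norm(X, 2) ** 2
    d = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ d - x)          # gradient of the quadratic term
        d = soft_threshold(d - grad / L, lam / L)
    return d
```

If `x` equals one column of `X`, the recovered coefficient vector concentrates its mass on that column, which is the behavior Eq. (1) relies on.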

3.2 鉴别字典学习方法

 $\boldsymbol{A} \approx \boldsymbol{DX}$ (5)

 ${\boldsymbol{J}_{\left( {\boldsymbol{D},\boldsymbol{X}} \right)}} = \mathop {\arg \min }\limits_{\left( {\boldsymbol{D},\boldsymbol{X}} \right)} \left\{ {r\left( {\boldsymbol{A},\boldsymbol{D},\boldsymbol{X}} \right) + \lambda {{\left\| \boldsymbol{X} \right\|}_1}} \right\}$ (6)

 ${\boldsymbol{R}_k} = {\boldsymbol{D}_k}\boldsymbol{X}_i^k$ (7)

 ${\boldsymbol{A}_i} \approx \boldsymbol{D}{\boldsymbol{X}_i} = {\boldsymbol{D}_1}\boldsymbol{X}_i^1 + {\boldsymbol{D}_2}\boldsymbol{X}_i^2 + \cdots + {\boldsymbol{D}_i}\boldsymbol{X}_i^i + \cdots + {\boldsymbol{D}_c}\boldsymbol{X}_i^c$ (8)

 $\begin{gathered} r\left( {\boldsymbol{A},\boldsymbol{D},\boldsymbol{X}} \right) = \sum\limits_{i = 1}^c {r\left( {{\boldsymbol{A}_i},\boldsymbol{D},{\boldsymbol{X}_i}} \right)} = \\ \sum\limits_{i = 1}^c {\left( {\left\| {{\boldsymbol{A}_i} - \boldsymbol{D}{\boldsymbol{X}_i}} \right\|_F^2 + \left\| {{\boldsymbol{A}_i} - {\boldsymbol{D}_i}\boldsymbol{X}_i^i} \right\|_F^2 + } \right.} \\ \left. {\sum\limits_{j = 1,j \ne i}^c {\left\| {{\boldsymbol{D}_j}\boldsymbol{X}_i^j} \right\|_F^2} } \right) \\ \end{gathered}$ (9)
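The discriminative fidelity term $r(\boldsymbol{A},\boldsymbol{D},\boldsymbol{X})$ of Eq. (9) can be evaluated directly from the class partitions. A minimal numpy sketch, where the function name and the list-of-blocks data layout are assumptions for illustration:

```python
import numpy as np

def disc_fidelity(A_parts, D_parts, X_parts):
    """Discriminative fidelity term of Eq. (9).

    A_parts[i]   : samples of class i (features x n_i)
    D_parts[j]   : sub-dictionary of class j (features x k_j)
    X_parts[i][j]: coding of class-i samples over sub-dictionary j (k_j x n_i)
    """
    c = len(A_parts)
    total = 0.0
    for i in range(c):
        # Full reconstruction D * X_i (sum over all sub-dictionaries).
        DXi = sum(D_parts[j] @ X_parts[i][j] for j in range(c))
        total += np.linalg.norm(A_parts[i] - DXi, 'fro') ** 2
        # The own-class sub-dictionary should represent A_i well ...
        total += np.linalg.norm(A_parts[i] - D_parts[i] @ X_parts[i][i], 'fro') ** 2
        # ... while the other sub-dictionaries should contribute little.
        total += sum(np.linalg.norm(D_parts[j] @ X_parts[i][j], 'fro') ** 2
                     for j in range(c) if j != i)
    return total
```

When each class is perfectly reconstructed by its own sub-dictionary and the cross-class codings are zero, the term vanishes, which is exactly the structure the constraint is designed to encourage.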

 Figure 2 Illustration of the discriminant term
3.3 Fisher discriminant term

 $\begin{gathered} {S_W}\left( \boldsymbol{X} \right) = \sum\limits_{i = 1}^c {\sum\limits_{{\boldsymbol{x}_k} \in {\boldsymbol{X}_i}} {\left( {{\boldsymbol{x}_k} - {\boldsymbol{m}_i}} \right){{\left( {{\boldsymbol{x}_k} - {\boldsymbol{m}_i}} \right)}^T}} } \hfill \end{gathered}$ (10)
 $\begin{gathered} {S_B}\left( \boldsymbol{X} \right) = \sum\limits_{i = 1}^c {{n_i}\left( {{\boldsymbol{m}_i} - \boldsymbol{m}} \right){{\left( {{\boldsymbol{m}_i} - \boldsymbol{m}} \right)}^T}} \hfill \end{gathered}$ (11)

 $f\left( \boldsymbol{X} \right) = tr\left( {{S_W}\left( \boldsymbol{X} \right)} \right) - a \cdot tr\left( {{\boldsymbol{S}_B}\left( \boldsymbol{X} \right)} \right)$ (12)

 $f\left( \boldsymbol{X} \right) = tr\left( {{S_W}\left( \boldsymbol{X} \right)} \right) - tr\left( {{S_B}\left( \boldsymbol{X} \right)} \right)$ (13)
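The Fisher terms of Eqs. (10)-(13) reduce to sums of squared deviations of the coding vectors from their class means and of the class means from the global mean. A small numpy sketch of $tr(S_W(\boldsymbol{X})) - tr(S_B(\boldsymbol{X}))$ (the function name and one-sample-per-column layout are assumptions):

```python
import numpy as np

def fisher_term(X, labels):
    """tr(S_W(X)) - tr(S_B(X)) for a coding matrix X, one sample per column."""
    m = X.mean(axis=1, keepdims=True)              # global mean vector
    sw = 0.0
    sb = 0.0
    for c in np.unique(labels):
        Xc = X[:, labels == c]
        mc = Xc.mean(axis=1, keepdims=True)        # class mean vector
        sw += ((Xc - mc) ** 2).sum()               # within-class scatter trace
        sb += Xc.shape[1] * ((mc - m) ** 2).sum()  # between-class scatter trace
    return sw - sb
```

Minimizing this quantity simultaneously tightens each class of coefficients around its own mean and pushes the class means apart, which is what makes the coefficients discriminative.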

 $\begin{gathered} \mathop {\min }\limits_{\left( {\boldsymbol{D},\boldsymbol{X}} \right)} \left\{ {r\left( {\boldsymbol{A},\boldsymbol{D},\boldsymbol{X}} \right) + {\lambda _1}{{\left\| \boldsymbol{X} \right\|}_1} + } \right. \hfill \\ \left. {{\lambda _2}\left( {tr\left( {{S_W}\left( \boldsymbol{X} \right)} \right) - tr\left( {{S_B}\left( \boldsymbol{X} \right)} \right) + \eta \left\| \boldsymbol{X} \right\|_F^2} \right)} \right\} \hfill \\ s.t.\;\;\;\;{\left\| {{\boldsymbol{d}_n}} \right\|_2} = 1,\forall n \hfill \\ \end{gathered}$ (14)
3.4 Model simplification

 $\begin{gathered} \mathop {\min }\limits_{\left( {\boldsymbol{D},\boldsymbol{X}} \right)} \left\{ {\sum\limits_{i = 1}^c {\left( {\left\| {{\boldsymbol{A}_i} - \boldsymbol{D}{\boldsymbol{X}_i}} \right\|_F^2 + \left\| {{\boldsymbol{A}_i} - {\boldsymbol{D}_i}\boldsymbol{X}_i^i} \right\|_F^2} \right)} + {\lambda _1}{{\left\| \boldsymbol{X} \right\|}_1} + } \right. \hfill \\ \left. {{\lambda _2}\left( {tr\left( {{S_W}\left( \boldsymbol{X} \right)} \right) - tr\left( {{S_B}\left( \boldsymbol{X} \right)} \right) + \eta \left\| \boldsymbol{X} \right\|_F^2} \right)} \right\} \hfill \\ s.t.\;\;\;\;{\left\| {{\boldsymbol{d}_n}} \right\|_2} = 1,\forall n;\;\left\| {{\boldsymbol{D}_j}\boldsymbol{X}_i^j} \right\|_F^2 \leqslant \varepsilon ,\forall i \ne j \hfill \\ \end{gathered}$ (15)

 $\begin{gathered} \mathop {\min }\limits_{\left( {\boldsymbol{D},\boldsymbol{X}} \right)} \left\{ {\sum\limits_{i = 1}^c {\left( {\left\| {{\boldsymbol{A}_i} - \boldsymbol{D}{\boldsymbol{X}_i}} \right\|_F^2 + \left\| {{\boldsymbol{A}_i} - {\boldsymbol{D}_i}\boldsymbol{X}_i^i} \right\|_F^2} \right)} + } \right. \hfill \\ \;\;\;\;\;{\lambda _1}{\left\| \boldsymbol{X} \right\|_1} + {\lambda _2}\left( {tr\left( {{S_W}\left( \boldsymbol{X} \right)} \right) - tr\left( {{S_B}\left( \boldsymbol{X} \right)} \right) + } \right. \hfill \\ \;\;\;\;\;\left. {\left. {\eta \left\| \boldsymbol{X} \right\|_F^2} \right)} \right\} \hfill \\ s.t.\;\;\;\;{\left\| {{\boldsymbol{d}_n}} \right\|_2} = 1,\forall n;\;\boldsymbol{X}_i^j = 0,\forall i \ne j \hfill \\ \end{gathered}$ (16)

 $\begin{gathered} tr\left( {{S_B}\left( \boldsymbol{X} \right)} \right){\text{ = }} \hfill \\ \;\;\;\;\sum\limits_{i = 1}^c {{n_i}/{n^2}\left\| {\left[ { - {n_1}\boldsymbol{m}_1^1; \cdots ;\left( {n - {n_i}} \right)\boldsymbol{m}_i^i; \cdots ; - {n_c}\boldsymbol{m}_c^c} \right]} \right\|_2^2 = } \hfill \\ \;\;\;\;\sum\limits_{i = 1}^c {{\kappa _i}{n_i}\left\| {\boldsymbol{m}_i^i} \right\|_2^2} \hfill \\ \end{gathered}$ (17)

 $\begin{gathered} \mathop {\min }\limits_{\left( {\boldsymbol{D},\boldsymbol{X}} \right)} \sum\limits_{i = 1}^c {\left( {\left\| {{\boldsymbol{A}_i} - {\boldsymbol{D}_i}\boldsymbol{X}_i^i} \right\|_F^2 + {\lambda _1}^\prime {{\left\| {\boldsymbol{X}_i^i} \right\|}_1} + } \right.} \hfill \\ \;\;\;\;\;\left. {{\lambda _2}^\prime \left\| {\boldsymbol{X}_i^i - \boldsymbol{M}_i^i} \right\|_F^2 + {\lambda _3}^\prime \left\| {\boldsymbol{X}_i^i} \right\|_F^2} \right) \hfill \\ s.t.\;\;\;\;{\left\| {{\boldsymbol{d}_n}} \right\|_2} = 1,\forall n \hfill \\ \end{gathered}$ (18)

 ${S_W}\left( \boldsymbol{X} \right) = \sum\limits_{i = 1}^c {\sum\limits_{{\boldsymbol{x}_k} \in {\boldsymbol{X}_i}} {\left( {\boldsymbol{x}_k^i - \boldsymbol{m}_i^i} \right){{\left( {\boldsymbol{x}_k^i - \boldsymbol{m}_i^i} \right)}^T}} }$

 $tr\left( {{S_W}\left( \boldsymbol{X} \right)} \right) = \sum\limits_{i = 1}^c {\sum\limits_{{\boldsymbol{x}_k} \in {\boldsymbol{X}_i}} {\left\| {\boldsymbol{x}_k^i - \boldsymbol{m}_i^i} \right\|_2^2} }$

 $\begin{gathered} f\left( \boldsymbol{X} \right) = \sum\limits_{i = 1}^c {\left( {\sum\limits_{{\boldsymbol{x}_k} \in {\boldsymbol{X}_i}} {\left\| {\boldsymbol{x}_k^i - \boldsymbol{m}_i^i} \right\|_2^2} + } \right.} \hfill \\ \;\;\;\;\;\left. {{\kappa _i}\left( {\left\| {\boldsymbol{X}_i^i} \right\|_F^2 - {n_i}\left\| {\boldsymbol{m}_i^i} \right\|_F^2} \right) + \left( {\eta - {\kappa _i}} \right)\left\| {\boldsymbol{X}_i^i} \right\|_F^2} \right) \hfill \\ \end{gathered}$ (19)

 $\begin{gathered} \left\| {\boldsymbol{X}_i^i} \right\|_F^2 - {n_i}\left\| {\boldsymbol{m}_i^i} \right\|_F^2 = \left\| {\boldsymbol{X}_i^i} \right\|_F^2 - \left\| {{{\left[ {\boldsymbol{m}_i^i} \right]}_{1 \times {n_i}}}} \right\|_F^2 = \hfill \\ \;\;\;\;tr\left( {\boldsymbol{X}_i^i\left( {\boldsymbol{I} - \boldsymbol{E}_i^i/{n_i}} \right){{\left( {\boldsymbol{I} - \boldsymbol{E}_i^i/{n_i}} \right)}^T}{{\left( {\boldsymbol{X}_i^i} \right)}^T}} \right) = \hfill \\ \;\;\;\;\left\| {\boldsymbol{X}_i^i - \boldsymbol{M}_i^i} \right\|_F^2 \hfill \\ \end{gathered}$ (20)

 $f\left( \boldsymbol{X} \right) = \sum\limits_{i = 1}^c {\left( {\left( {1 + {\kappa _i}} \right)\left\| {\boldsymbol{X}_i^i - \boldsymbol{M}_i^i} \right\|_F^2 + \left( {\eta - {\kappa _i}} \right)\left\| {\boldsymbol{X}_i^i} \right\|_F^2} \right)}$

When $i \ne j$, $\boldsymbol{X}_i^j = 0$; therefore $\left\| {{\boldsymbol{A}_i} - \boldsymbol{D}{\boldsymbol{X}_i}} \right\|_F^2 = \left\| {{\boldsymbol{A}_i} - {\boldsymbol{D}_i}\boldsymbol{X}_i^i} \right\|_F^2$.

3.5 Optimization of the discriminant dictionary learning model

3.5.1 Solving the coding coefficients with the dictionary fixed

 $\mathop {\min }\limits_{{\boldsymbol{X}_i}} \left\{ {r\left( {{\boldsymbol{A}_i},\boldsymbol{D},\boldsymbol{X}} \right) + {\lambda _1}{{\left\| {{\boldsymbol{X}_i}} \right\|}_1} + {\lambda _2}{f_i}\left( {{\boldsymbol{X}_i}} \right)} \right\}$ (21)
 ${f_i}\left( {{\boldsymbol{X}_i}} \right) = \left\| {{\boldsymbol{X}_i} - {\boldsymbol{M}_i}} \right\|_F^2 - \sum\limits_{k = 1}^c {\left\| {{\boldsymbol{M}_k} - \boldsymbol{M}} \right\|_F^2} + \eta \left\| {{\boldsymbol{X}_i}} \right\|_F^2$ (22)

 $\mathop {\min }\limits_{{\boldsymbol{X}_i}} \left\{ {Q\left( {{\boldsymbol{X}_i}} \right) + 2\tau {{\left\| {{\boldsymbol{X}_i}} \right\|}_1}} \right\}$ (23)

 $Q\left( {{\boldsymbol{X}_i}} \right) = r\left( {{\boldsymbol{A}_i},\boldsymbol{D},\boldsymbol{X}} \right) + {\lambda _2}{f_i}\left( {{\boldsymbol{X}_i}} \right)$

 $\begin{gathered} Q\left( {{\boldsymbol{X}_i}} \right) = \left\| {{\boldsymbol{A}_i} - {\boldsymbol{D}_i}\boldsymbol{X}_i^i} \right\|_F^2 + {\lambda _2}^\prime \left\| {\boldsymbol{X}_i^i - \boldsymbol{M}_i^i} \right\|_F^2 + \\ {\lambda _3}^\prime \left\| {\boldsymbol{X}_i^i} \right\|_F^2 \\ \end{gathered}$ (24)
 $\tau = {\lambda _1}/2$ (25)

 $\left[ {{S_{\tau /\sigma }}} \right] = \left\{ \begin{gathered} 0,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\left\| {{\boldsymbol{x}_j}} \right\| \leqslant \tau /\sigma \hfill \\ {\boldsymbol{x}_j} - \operatorname{sign} \left( {{\boldsymbol{x}_j}} \right)\tau /\sigma ,\;\;\;\;\left\| {{\boldsymbol{x}_j}} \right\| > \tau /\sigma \hfill \\ \end{gathered} \right.$ (26)

1) Input: σ > 0, τ > 0.

2) Initialization: Xi(1)=0, h=1.

3) Iterate:

① Termination condition: the change between successive iterates converges, or the maximum number of iterations is reached.

② Update the iteration count: h=h+1.

$\boldsymbol{X}_i^{\left( h \right)} = {S_{\tau /\sigma }}\left( {\boldsymbol{X}_i^{\left( {h - 1} \right)} - \frac{1}{{2\sigma }}\nabla Q\left( {\boldsymbol{X}_i^{\left( {h - 1} \right)}} \right)} \right)$, where ${\nabla Q\left( {\boldsymbol{X}_i^{\left( {h - 1} \right)}} \right)}$ denotes the derivative of Q with respect to Xi, and S is defined in Eq. (26).

4) Return Xi=Xi(h).
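The iteration above can be sketched in numpy for the simplified per-class objective of Eq. (24), using the thresholding operator of Eq. (26) and τ = λ1/2 from Eq. (25). The step-size choice for `sigma` and the fixed iteration budget are assumptions of this sketch; the paper's convergence test on successive differences is replaced by a fixed number of iterations:

```python
import numpy as np

def soft_threshold(V, t):
    # Operator S_t of Eq. (26): zero out small entries, shrink the rest by t.
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def solve_coding(A_i, D_i, M_i, lam1=0.1, lam2=0.1, lam3=0.1,
                 sigma=None, n_iter=100):
    """Iterative soft-thresholding for the simplified objective of Eq. (24):
    Q(X) = ||A_i - D_i X||_F^2 + lam2*||X - M_i||_F^2 + lam3*||X||_F^2,
    minimized together with lam1*||X||_1."""
    if sigma is None:
        # Step-size choice ensuring monotone decrease (an assumption, not
        # specified in the paper): 2*sigma bounds the Lipschitz constant of grad Q.
        sigma = np.linalg.norm(D_i, 2) ** 2 + lam2 + lam3
    tau = lam1 / 2.0                                 # Eq. (25)
    X = np.zeros((D_i.shape[1], A_i.shape[1]))       # X_i^(1) = 0
    for _ in range(n_iter):
        grad = (2 * D_i.T @ (D_i @ X - A_i)          # gradient of Q w.r.t. X
                + 2 * lam2 * (X - M_i)
                + 2 * lam3 * X)
        # Gradient step of length 1/(2*sigma), then shrink by tau/sigma.
        X = soft_threshold(X - grad / (2 * sigma), tau / sigma)
    return X
```

Because the shrinkage threshold τ/σ matches the step size, each iteration is a proximal-gradient step, so the penalized objective never increases from its starting value at X = 0.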

3.5.2 Solving the dictionary with the coding coefficients fixed

 $\begin{gathered} {J_{\left( {{\boldsymbol{D}_i}} \right)}} = \mathop {\arg \;\;\min }\limits_{\left( {{\boldsymbol{D}_i}} \right)} \left\{ {\left\| {\boldsymbol{A} - {\boldsymbol{D}_i}{\boldsymbol{X}^i} - \sum\limits_{j = 1,j \ne i}^2 {{\boldsymbol{D}_j}{\boldsymbol{X}^j}} } \right\|_F^2 + } \right. \hfill \\ \;\;\;\;\;\;\;\left. {\left\| {{\boldsymbol{A}_i} - {\boldsymbol{D}_i}\boldsymbol{X}_i^i} \right\|_F^2 + \sum\limits_{j = 1}^2 {\left\| {{\boldsymbol{D}_j}\boldsymbol{X}_i^j} \right\|_F^2} } \right\} \hfill \\ s.t.\;\;\;\;{\left\| {{\boldsymbol{d}_n}} \right\|_2} = 1 \end{gathered}$ (27)

1) Initialize the dictionary set D.

2) With the dictionary D fixed, iteratively solve for the coefficient sets X1 and X2 by the procedure of Section 3.5.1.

3) With X1 and X2 from step 2) fixed, solve for D1 and D2 according to reference [24], obtaining the final structured dictionary.

4) Reconstruct each sample with the SRC classifier and classify it by the reconstruction error.
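The four steps above can be outlined as follows. This is only a structural sketch: plain least squares stands in for both the sparse-coding solver of Section 3.5.1 and the metaface dictionary update of reference [24], and all names (`train`, `src_classify`, `n_atoms`, `n_outer`) are illustrative:

```python
import numpy as np

def normalize_cols(D):
    # Enforce the unit-norm atom constraint ||d_n||_2 = 1.
    return D / np.linalg.norm(D, axis=0, keepdims=True)

def train(A_parts, n_atoms=2, n_outer=3):
    # 1) Initialize each sub-dictionary from its class samples (an assumption).
    D = [normalize_cols(A[:, :n_atoms].copy()) for A in A_parts]
    for _ in range(n_outer):
        # 2) Fix D: code each class over its own sub-dictionary
        #    (least squares in place of the sparse solver of 3.5.1).
        X = [np.linalg.lstsq(D[i], A_parts[i], rcond=None)[0]
             for i in range(len(A_parts))]
        # 3) Fix X: update each sub-dictionary (in place of the update of [24]).
        for i in range(len(A_parts)):
            Di = np.linalg.lstsq(X[i].T, A_parts[i].T, rcond=None)[0].T
            D[i] = normalize_cols(Di)
    return D

def src_classify(x, D):
    # 4) Assign x to the class whose sub-dictionary reconstructs it best.
    residuals = []
    for Di in D:
        d, *_ = np.linalg.lstsq(Di, x.reshape(-1, 1), rcond=None)
        residuals.append(np.linalg.norm(x.reshape(-1, 1) - Di @ d))
    return int(np.argmin(residuals))
```

On well-separated classes the residual under the correct sub-dictionary is near zero while the residual under the other sub-dictionary stays large, which is the classification principle of step 4).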

4 Experimental analysis

4.1 Dataset description

4.2 Classifier performance metrics

1) Recall: re=TP/(TP+FN). Recall is an important performance indicator because, in practice, the defective modules deserve the most attention; it is the fraction of defective samples that are correctly identified, i.e., it measures how completely the defective samples are detected.

2) False positive rate: pf=FP/(FP+TN), the fraction of defect-free samples that are predicted as defective.

3) Precision: pre=TP/(TP+FP), which measures how accurate the detected defective samples are.

4) Accuracy: acc=(TP+TN)/(TP+FN+FP+TN), the fraction of all samples that are classified correctly.

 $F\text{-}measure = \left( {1 + {\beta ^2}} \right) \cdot re \cdot pre/\left( {re + {\beta ^2} \cdot pre} \right)$ (28)
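The four indicators and the F-measure can be computed directly from the confusion-matrix counts. The sketch below uses the standard F_β definition (the function name is an assumption):

```python
def defect_metrics(tp, fn, fp, tn, beta=1.0):
    """Indicators of Section 4.2 computed from confusion-matrix counts."""
    re = tp / (tp + fn)                    # recall
    pf = fp / (fp + tn)                    # false positive rate
    pre = tp / (tp + fp)                   # precision
    acc = (tp + tn) / (tp + fn + fp + tn)  # accuracy
    # Standard F_beta; with beta = 1 this is 2*pre*re/(pre + re).
    f = (1 + beta ** 2) * pre * re / (beta ** 2 * pre + re)
    return {"re": re, "pf": pf, "pre": pre, "acc": acc, "F": f}
```

For example, with TP=40, FN=10, FP=20, TN=30 this gives re=0.8, pf=0.4, pre=2/3, acc=0.7.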

4.3 Experimental results and analysis

4.4 Experimental analysis

5 Conclusion

[1] BAGGEN R, CORREIA J P, SCHILL K, et al. Standardized code quality benchmarking for improving software maintainability[J]. Software Quality Journal, 2012, 20(2): 287-307. doi: 10.1007/s11219-011-9144-9
[2] SHEPPERD M, SONG Q, SUN Z, et al. Data quality: some comments on the NASA software defect datasets[J]. IEEE Transactions on Software Engineering, 2013, 39(9): 1208-1215. doi: 10.1109/TSE.2013.11
[3] MA Y, LUO G, ZENG X, et al. Transfer learning for cross-company software defect prediction[J]. Information and Software Technology, 2012, 54(3): 248-256. doi: 10.1016/j.infsof.2011.09.007
[4] WANG S, YAO X. Using class imbalance learning for software defect prediction[J]. IEEE Transactions on Reliability, 2013, 62(2): 434-443. doi: 10.1109/TR.2013.2259203
[5] SONG Q, JIA Z, SHEPPERD M, et al. A general software defect-proneness prediction framework[J]. IEEE Transactions on Software Engineering, 2011, 37(3): 356-370. doi: 10.1109/TSE.2010.90
[6] PENG Y, KOU G, WANG G, et al. Ensemble of software defect predictors: an AHP-based evaluation method[J]. International Journal of Information Technology and Decision Making, 2011, 10(1): 187-206. doi: 10.1142/S0219622011004282
[7] ZHENG J. Cost-sensitive boosting neural networks for software defect prediction[J]. Expert Systems with Applications, 2010, 37(6): 4537-4543. doi: 10.1016/j.eswa.2009.12.056
[8] GRAY D, BOWES D, DAVEY N, et al. Reflections on the NASA MDP data sets[J]. IET Software, 2012, 6(6): 549-558. doi: 10.1049/iet-sen.2011.0132
[9] 姜慧研, 宗茂, 刘相莹. 基于ACO-SVM的软件缺陷预测模型的研究[J]. 计算机学报, 2011, 34(6): 1148-1154. (JIANG H Y, ZONG M, LIU X Y. Research of software defect prediction model based on ACO-SVM[J]. Chinese Journal of Computers, 2011, 34(6): 1148-1154.) doi: 10.3724/SP.J.1016.2011.01148
[10] ELISH K O, ELISH M O. Predicting defect-prone software modules using support vector machines[J]. Journal of Systems and Software, 2008, 81(5): 649-660. doi: 10.1016/j.jss.2007.07.040
[11] KHOSHGOFTAAR T M, SELIYA N. Software quality classification modeling using the SPRINT decision tree algorithm[C]// ICTAI '02: Proceedings of the 14th IEEE International Conference on Tools with Artificial Intelligence. Washington, DC: IEEE Computer Society, 2002: 365-374.
[12] ARAR Ö F, AYAN K. Software defect prediction using cost-sensitive neural network[J]. Applied Soft Computing, 2015, 33(C): 263-277.
[13] ABDI H, WILLIAMS L J. Principal component analysis[J]. Wiley Interdisciplinary Reviews: Computational Statistics, 2010, 2(4): 433-459. doi: 10.1002/wics.v2:4
[14] VIDAL R, MA Y, SASTRY S. Generalized Principal Component Analysis (GPCA)[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(12): 1945-1959. doi: 10.1109/TPAMI.2005.244
[15] SELIYA N, KHOSHGOFTAAR T M. Software quality analysis of unlabeled program modules with semisupervised clustering[J]. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 2007, 37(2): 201-211. doi: 10.1109/TSMCA.2006.889473
[16] BISHNU P S, BHATTACHERJEE V. Software fault prediction using quad tree-based K-means clustering algorithm[J]. IEEE Transactions on Knowledge and Data Engineering, 2012, 24(6): 1146-1150. doi: 10.1109/TKDE.2011.163
[17] MA Y, ZHU S, QIN K, et al. Combining the requirement information for software defect estimation in design time[J]. Information Processing Letters, 2014, 114(9): 469-474. doi: 10.1016/j.ipl.2014.03.012
[18] GAO K, KHOSHGOFTAAR T M, WANG H, et al. Choosing software metrics for defect prediction: an investigation on feature selection techniques[J]. Software: Practice and Experience, 2011, 41(5): 579-606. doi: 10.1002/spe.1043
[19] SMITH L N, ELAD M. Improving dictionary learning: multiple dictionary updates and coefficient reuse[J]. IEEE Signal Processing Letters, 2013, 20(1): 79-82. doi: 10.1109/LSP.2012.2229976
[20] YAN R, SHAO L, LIU Y. Nonlocal hierarchical dictionary learning using wavelets for image denoising[J]. IEEE Transactions on Image Processing, 2013, 22(12): 4689-4698. doi: 10.1109/TIP.2013.2277813
[21] MAIRAL J, ELAD M, SAPIRO G. Sparse representation for color image restoration[J]. IEEE Transactions on Image Processing, 2008, 17(1): 53-69. doi: 10.1109/TIP.2007.911828
[22] MARCHESINI S. Invited article: a unified evaluation of iterative projection algorithms for phase retrieval[J]. Review of Scientific Instruments, 2007, 78(1): 011301. doi: 10.1063/1.2403783
[23] LUISIER F, BLU T, UNSER M. A new SURE approach to image denoising: interscale orthonormal wavelet thresholding[J]. IEEE Transactions on Image Processing, 2007, 16(3): 593-606. doi: 10.1109/TIP.2007.891064
[24] YANG M, ZHANG L, YANG J, et al. Metaface learning for sparse representation based face recognition[EB/OL]. [2015-11-26]. http://www4.comp.polyu.edu.hk/~cslzhang/paper/conf/ICIP2010/ICIP10_3551_YM.pdf.
[25] JING X-Y, YING S, ZHANG Z-W, et al. Dictionary learning based software defect prediction[C]// ICSE 2014: Proceedings of the 36th International Conference on Software Engineering. New York: ACM, 2014: 414-423.