Journal of Computer Applications   2017, Vol. 37 Issue (4): 1164-1168  DOI: 10.11772/j.issn.1001-9081.2017.04.1164

### Cite This Article

TANG Song, CHEN Lijuan, CHEN Zhixian, YE Mao. Domain adaptation image classification based on target local-neighbor geometrical information[J]. Journal of Computer Applications, 2017, 37(4): 1164-1168. DOI: 10.11772/j.issn.1001-9081.2017.04.1164.


Domain adaptation image classification based on target local-neighbor geometrical information
TANG Song1,2, CHEN Lijuan1, CHEN Zhixian3, YE Mao1
1. School of Computer Science & Engineering, University of Electronic Science and Technology of China, Chengdu Sichuan 611731, China;
2. Center of Analysis and Control for Complex Systems, Chongqing University of Posts and Telecommunications, Chongqing 400065, China;
3. Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen Guangdong 518055, China
Abstract: In many real-world engineering applications, the distribution of training scenarios (source domain) and that of testing scenarios (target domain) are different, so classification performance drops sharply when a classifier trained in the source domain is applied directly to the target domain. Most existing domain adaptation methods are based on probabilistic inference. For the problem of domain adaptation image classification, an unsupervised method based on collaborative representation was proposed from the viewpoint of image representation. Firstly, all of the source samples were taken as the dictionary. Secondly, the three samples closest to each target sample in the target domain were exploited to robustly represent the local-neighbor geometrical information. Thirdly, the target sample was encoded by combining the dictionary and the local-neighbor information. Finally, classification was completed with a nearest-neighbor classifier. Since the collaborative representations absorb target local-neighbor information and thus gain stronger robustness and discriminative ability, the classification method based on the new representations achieves better classification performance. Experimental results on the domain adaptation dataset confirm the effectiveness of the proposed method.
Key words: domain adaptation; manifold; target local-neighbor information; collaborative representation; image classification
0 Introduction

1 Preliminaries

1.1 Problem Description

1.2 Collaborative Representation and the Nearest-Neighbor Classifier

 $\tilde{\boldsymbol{z}}_{i}=\underset{\boldsymbol{z}}{\arg \min }\, \frac{1}{2}\left\| \boldsymbol{v}_{i}-\boldsymbol{U}\boldsymbol{z} \right\|_{2}^{2}+\eta \left\| \boldsymbol{z} \right\|_{2}^{2}$ (1)

 $l=\underset{k}{\arg \min }\, \frac{\left\| \boldsymbol{v}_{i}-\boldsymbol{U}\varphi \left( \boldsymbol{z}, k \right) \right\|_{2}}{\left\| \varphi \left( \boldsymbol{z}, k \right) \right\|_{2}}$ (2)

1) Compute the projection residual of the test sample on each class:

 $e_{k}=\left\| \boldsymbol{v}_{i}-\boldsymbol{U}_{k}\boldsymbol{z}^{k} \right\|_{2};\ k=1, 2, \ldots, c$

2) According to {e_k | k=1, 2, …, c}, output the class with the smallest projection residual as the predicted class label.
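To make the two-step procedure above concrete, the following Python sketch (an illustration, not the authors' code; `crc_classify` and its argument names are hypothetical) implements the ridge-regularized coding of Eq. (1) and the class-wise residual rule of Eq. (2):

```python
import numpy as np

def crc_classify(v, U, labels, eta=0.01):
    """Collaborative-representation classification, Eqs. (1)-(2).

    v      : (d,) test sample
    U      : (d, m) dictionary whose columns are source samples
    labels : (m,) class label of each dictionary column
    eta    : ridge regularization weight
    """
    m = U.shape[1]
    # Eq. (1): ridge-regularized code, solved in closed form
    z = np.linalg.solve(U.T @ U + eta * np.eye(m), U.T @ v)
    best_class, best_err = None, np.inf
    for k in np.unique(labels):
        zk = np.where(labels == k, z, 0.0)   # phi(z, k): keep only class-k coefficients
        nz = np.linalg.norm(zk)
        if nz == 0:
            continue
        err = np.linalg.norm(v - U @ zk) / nz  # Eq. (2): regularized residual
        if err < best_err:
            best_class, best_err = k, err
    return best_class
```

Solving the normal equations with `np.linalg.solve` avoids forming an explicit matrix inverse and is the standard numerically stable choice.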

2 Target Local-Neighbor Collaborative Representation

 $F\left( \boldsymbol{z} \right)=\left\| \hat{\boldsymbol{y}}_{i}-\boldsymbol{X}\boldsymbol{z} \right\|_{2}^{2}+\alpha \left\| \boldsymbol{D}\boldsymbol{z} \right\|_{2}^{2}+\beta \left\| \boldsymbol{z} \right\|_{2}^{2}$ (3)

 $\boldsymbol{D}=\begin{bmatrix} \left\| \hat{\boldsymbol{y}}_{i}-\boldsymbol{x}_{1} \right\|_{2} & 0 & \cdots & 0 \\ 0 & \left\| \hat{\boldsymbol{y}}_{i}-\boldsymbol{x}_{2} \right\|_{2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \left\| \hat{\boldsymbol{y}}_{i}-\boldsymbol{x}_{m} \right\|_{2} \end{bmatrix}$

The partial derivative of F with respect to z is:

 $\frac{\partial F\left( \boldsymbol{z} \right)}{\partial \boldsymbol{z}}=-\boldsymbol{X}^{\mathrm{T}}\left( \hat{\boldsymbol{y}}_{i}-\boldsymbol{X}\boldsymbol{z} \right)+\alpha \boldsymbol{B}\boldsymbol{z}+\beta \boldsymbol{z}$

where $\boldsymbol{B}=\boldsymbol{D}^{\mathrm{T}}\boldsymbol{D}$ and the common constant factor 2 is omitted.

Setting the derivative to zero yields the closed-form solution:

 $\tilde{\boldsymbol{z}}={\left[ \boldsymbol{X}^{\mathrm{T}}\boldsymbol{X}+\alpha \boldsymbol{B}+\beta \boldsymbol{I} \right]}^{-1}\boldsymbol{X}^{\mathrm{T}}\hat{\boldsymbol{y}}_{i}$ (4)
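Under the same notation, Eq. (4) can be computed directly. The sketch below (illustrative only; `tclr_code` is a hypothetical name) builds B = DᵀD from the diagonal distance matrix D and solves the resulting linear system rather than forming an explicit inverse:

```python
import numpy as np

def tclr_code(y_hat, X, alpha=0.1, beta=0.01):
    """Closed-form coding of Eqs. (3)-(4).

    y_hat : (d,) locality-enhanced target sample
    X     : (d, m) dictionary whose columns are source samples
    Returns the code z minimizing
        ||y_hat - X z||^2 + alpha ||D z||^2 + beta ||z||^2.
    """
    m = X.shape[1]
    # Diagonal entries of D: distance from y_hat to each dictionary atom
    d = np.linalg.norm(X - y_hat[:, None], axis=0)
    B = np.diag(d ** 2)                       # B = D^T D
    A = X.T @ X + alpha * B + beta * np.eye(m)
    return np.linalg.solve(A, X.T @ y_hat)    # Eq. (4)
```

Atoms far from the target sample receive large diagonal weights in B and are therefore penalized more heavily, which is what injects the locality prior into the code.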

2.1 Search Method for Local-Neighbor Target-Domain Samples

 $k\left( \boldsymbol{a}, \boldsymbol{b} \right)={\left\langle \psi (\boldsymbol{a}), \psi (\boldsymbol{b}) \right\rangle }_{\mathcal{H}}=\psi (\boldsymbol{a})^{\mathrm{T}}\psi (\boldsymbol{b})$

 $\begin{aligned} rem\left( \boldsymbol{a}, \boldsymbol{b} \right)&=\left\| \psi (\boldsymbol{a})-\psi (\boldsymbol{b}) \right\|_{2}^{2} \\ &=k\left( \boldsymbol{a}, \boldsymbol{a} \right)+k\left( \boldsymbol{b}, \boldsymbol{b} \right)-2k\left( \boldsymbol{a}, \boldsymbol{b} \right) \end{aligned}$

 $cn\_k=\begin{cases} \underset{j}{\arg \min }\, \left\{ rem(\boldsymbol{v}_{i}, \boldsymbol{v}_{j}) \mid j\ne i \right\}, & k=1 \\ \underset{j}{\arg \min }\, \left\{ rem(\boldsymbol{v}_{i}, \boldsymbol{v}_{j}) \mid j\notin \left\{ i, cn\_1, \cdots, cn\_(k-1) \right\} \right\}, & k>1 \end{cases}$ (5)

 $\begin{cases} k\left( \boldsymbol{v}_{i}, \boldsymbol{v}_{j} \right)=\exp \left( -\varepsilon \left\| \boldsymbol{v}_{i}-\boldsymbol{v}_{j} \right\|_{2}^{2} \right) \\ \varepsilon =\dfrac{1}{\left\| \boldsymbol{v}_{i}-\bar{\boldsymbol{v}} \right\|_{2}^{2}}, \quad \bar{\boldsymbol{v}}=\dfrac{1}{n}\sum\limits_{i=1}^{n}{\boldsymbol{v}_{i}} \end{cases}$
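The greedy search of Eq. (5) with the sample-adaptive Gaussian kernel above can be sketched as follows (an illustration, not the authors' code; `local_neighbors` is a hypothetical name):

```python
import numpy as np

def local_neighbors(V, i, num=3):
    """Greedy nearest-neighbor search in kernel space, Eq. (5).

    V   : (n, d) target samples, one per row
    i   : index of the query sample v_i
    num : number of neighbors to return (the paper uses three)
    """
    n = len(V)
    v_bar = V.mean(axis=0)
    eps = 1.0 / np.linalg.norm(V[i] - v_bar) ** 2      # sample-adaptive bandwidth

    def k(a, b):                                       # Gaussian kernel
        return np.exp(-eps * np.linalg.norm(a - b) ** 2)

    # rem(a, b) = k(a, a) + k(b, b) - 2 k(a, b); here k(a, a) = k(b, b) = 1
    rem = np.array([2.0 - 2.0 * k(V[i], V[j]) for j in range(n)])
    excluded = {i}
    neighbors = []
    for _ in range(num):
        j = min((j for j in range(n) if j not in excluded), key=lambda j: rem[j])
        neighbors.append(j)
        excluded.add(j)
    return neighbors
```

Since the Gaussian kernel decreases monotonically with Euclidean distance, this greedy procedure returns the same indices as a plain Euclidean k-nearest-neighbor search; the kernel form matters when a non-Gaussian kernel is substituted.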
2.2 Classification Algorithm Based on TCLR Features

3 Experiments and Analysis

3.1 Dataset

 Figure 1 Some samples from the dataset for classification
3.2 Classification Results

 $r_{\text{tol}}=\left( \sum\limits_{i=1}^{n}{\frac{ctn\left( l_{i}, l_{i}' \right)}{n}} \right)\times 100\%$ (6)

 $r_{cls\_k}=\left( \sum\limits_{j=1}^{N}{\sum\limits_{i=1}^{n_{k}}{\frac{ctn\left( l_{ki}, l_{ki}' \right)}{\sum\limits_{j=1}^{N}{n_{k}}}}} \right)\times 100\%$
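For reference, the two accuracy measures can be computed as below (a simplified single-split reading of the formulas, where ctn counts label agreement; both function names are hypothetical):

```python
import numpy as np

def total_accuracy(pred, truth):
    """Overall accuracy r_tol of Eq. (6): fraction of correct labels, in percent."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    return 100.0 * np.mean(pred == truth)

def class_accuracy(pred, truth, k):
    """Per-class accuracy r_cls_k: correct predictions among samples of true class k."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    mask = truth == k
    return 100.0 * np.mean(pred[mask] == truth[mask])
```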
 Figure 2 Accuracies of TCLR and TCNNR on 10 classes

3.3 Analysis

 Figure 3 Relationship between strength of local information and TCLR's performance
4 Conclusion