Compressive sensing (CS), as a new means of information acquisition, exploits the sparsity of signal structure to reconstruct a signal accurately from a small number of measurements at sampling rates far below the Nyquist rate [1-2]. Because it effectively relieves the pressure of data transmission, storage, and processing, CS research has spread to many fields [6], including imaging [3], communications [4], and radar [5]. Sparse reconstruction algorithms are one of the core components of CS and, to a large extent, determine whether CS can be put into practical use [7]. Traditional reconstruction algorithms and the related research mostly target one-dimensional real signals [8]. In practical applications, however, such as array signal processing [9], SAR [10-11], ISAR (inverse synthetic aperture radar) [12], and magnetic resonance imaging [13], the signals to be processed are usually multidimensional and complex-valued. There are currently three main strategies for handling multidimensional signals: 1) vectorize the multidimensional signal into one dimension and then apply a one-dimensional reconstruction algorithm, which, however, sharply enlarges the sensing matrix and significantly increases the computational complexity; 2) follow the idea of Kronecker-product CS [14], which reduces complexity mainly by constructing separable sensing operators; in fact, sampling the vectorized multidimensional signal with the Kronecker product of small measurement matrices is equivalent to sampling column by column with the small measurement matrices; 3) exploit the joint structure of the signal for multidimensional processing. For example, based on the block-sparse structure of the signal, [15] divides a two-dimensional signal into smaller blocks and replaces the original dense measurement matrix with a block-diagonal one, effectively reducing storage and computational complexity, while [16] exploits the joint sparsity of two-dimensional signals and reconstructs them with a single shared sensing matrix. However, when the sparsity pattern of the two-dimensional signal is arbitrary, the performance of the algorithms in [15-16] degrades. To address this, [17] proposed a parallel CS scheme for parallel sensing and relaxed the corresponding RIP (restricted isometry property) condition. Yet [17] still reconstructs the two-dimensional signal column by column, so its computational complexity remains high; moreover, it is formulated for real signals, and applying it to complex signals requires the realification method of [10], which further increases storage and computation.
At present there are two main strategies for reconstructing complex signals: 1) arrange the real and imaginary parts of the complex signal into a realified signal and then reconstruct [10], which enlarges both the signal and the sensing matrix and thus increases storage and computation; 2) the approach of [8], which iteratively alternates between estimating the amplitude and the phase; this amounts to two optimization processes and increases the computational complexity.
To reconstruct two-dimensional complex sparse signals rapidly, this paper builds on the linearized Bregman iteration (LBI) and proposes a parallel fast linearized Bregman iteration (PFLBI) algorithm that reconstructs complex sparse signals in parallel. First, a structural model of two-dimensional complex sparse signals is constructed and its structural characteristics are analyzed in detail. Second, the parallel Bregman iteration scheme for reconstructing two-dimensional complex sparse signals in parallel is derived theoretically. Then, the convergence is accelerated by estimating the iteration step size, which improves the running speed. Finally, the convergence, noise robustness, and computational complexity of the PFLBI algorithm are analyzed, and simulations confirm the theoretical analysis. The PFLBI algorithm is applied to ISAR imaging, where imaging results on both simulated and measured data verify its good performance.
1. Two-dimensional Complex Sparse Signal Model
A two-dimensional complex sparse signal can be written in matrix form. Suppose the signal $X\in C^{N\times D}$ has only $K$ nonzero elements; $X$ is then said to be $K$-sparse. The sparsity of $X$ can be described by the sparsity vector $K=[K_1, K_2, \cdots, K_d, \cdots, K_D]$, where $K_d$ is the sparsity of the $d$-th column of $X$, so that $K=\|K\|_1$. Figure 1 illustrates a two-dimensional complex sparse signal: Fig. 1(a) shows the amplitudes and positions of the entries, and, to describe the structure more clearly, Fig. 1(b) shows the position pattern, where a black dot marks a nonzero element and a white dot a zero element. When the two-dimensional complex sparse signal has an arbitrary sparsity pattern, both the sparsity and the positions of the nonzero elements of each column are arbitrary. Reconstructing such a signal can then be formulated as the following mathematical problem:
$$Y=\Theta X \tag{1}$$
where $Y=[y_1, y_2, \cdots, y_D]$ is the two-dimensional measurement matrix and $X=[x_1, x_2, \cdots, x_D]$ is the two-dimensional sparse matrix to be reconstructed. Problem (1) can be solved with the parallel CS scheme [17]; indeed, it is equivalent to $D$ one-dimensional complex sparse reconstruction problems:
$$y_d=\Theta x_d, \qquad d=1, \cdots, D \tag{2}$$
where $y_d\in C^{M}$ is the measurement vector of the $d$-th column, $D$ is the total number of columns of the two-dimensional complex sparse signal, and $x_d\in C^{N}$ is the $d$-th one-dimensional complex sparse vector to be reconstructed.
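As a concrete illustration of the model in (1) and (2), the following minimal NumPy sketch generates such a signal and its measurements; the sizes `N`, `D`, `M`, the per-column sparsities, and the Gaussian sensing matrix are illustrative assumptions rather than values fixed by this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, M = 256, 8, 128  # column length, number of columns, measurements per column (assumed)

# 2-D complex sparse signal X with an arbitrary sparsity pattern in every column.
X = np.zeros((N, D), dtype=complex)
for d in range(D):
    K_d = int(rng.integers(5, 15))                    # per-column sparsity K_d is arbitrary
    support = rng.choice(N, size=K_d, replace=False)
    X[support, d] = rng.standard_normal(K_d) + 1j * rng.standard_normal(K_d)

# Shared sensing matrix and the 2-D measurements of Eq. (1).
Theta = rng.standard_normal((M, N)) / np.sqrt(M)
Y = Theta @ X  # equivalent to the D column systems y_d = Theta x_d of Eq. (2)
```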
Using the sparsity of $X$ as a constraint, the sparse solution of (1) can be found by solving the following optimization problem [18]:
$$\min_{X\in C^{N\times D}} \|X\|_{0,q} \quad {\rm s.t.}\quad Y=\Theta X \tag{3}$$
where $\|X\|_{0,q}=\left|{\rm supp}(X)\right|$ and ${\rm supp}(X)$ denotes the support set of $X$, so $\|X\|_{0,q}$ is the number of nonzero elements. Reference [17] proved that, under the parallel CS scheme, if the sensing matrix $\Theta$ satisfies the RIP condition, the two-dimensional complex sparse signal $X$ can be reconstructed exactly from the measurements $Y$. Since (3) is NP-hard, under the RIP condition on $\Theta$ it can be relaxed to the following convex problem [18]:
$$\min_{X\in C^{N\times D}} \|X\|_{1} \quad {\rm s.t.}\quad Y=\Theta X \tag{4}$$
where the matrix 1-norm is defined as $\|X\|_{1}=\sum_{i=1}^{N}\sum_{j=1}^{D}\left|X_{i,j}\right|$.
Since noise is present in practice, the constraint in (4) is relaxed so that the sparse signal $X$ can be recovered from noisy measurements $Y$; a regularization parameter $\mu$ balances sparsity against the residual, giving the regularized form [19]:
$$\hat{X}={\rm arg}\min_{X\in C^{N\times D}}\ \mu\|X\|_{1}+\frac{1}{2}\left\|\Theta X-Y\right\|_{\rm F}^{2} \tag{5}$$
where $\|\cdot\|_{\rm F}$ denotes the Frobenius norm (F-norm for short) of a matrix, defined as [20]:
$$\|A\|_{\rm F}=\left(\sum_{i,j}\left|a_{ij}\right|^{2}\right)^{\frac{1}{2}}=\left({\rm tr}\left(A^{\rm H}A\right)\right)^{\frac{1}{2}} \tag{6}$$
where ${\rm tr}(\cdot)$ is the trace of a matrix, i.e., the sum of its diagonal elements.
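The identity in (6) is easy to verify numerically. A small sketch, with shapes chosen arbitrarily, compares the entrywise form with the trace form:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

fro_entrywise = np.sqrt(np.sum(np.abs(A) ** 2))     # (sum_{i,j} |a_ij|^2)^(1/2)
fro_trace = np.sqrt(np.trace(A.conj().T @ A).real)  # (tr(A^H A))^(1/2)
assert np.isclose(fro_entrywise, fro_trace)
```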
2. The PFLBI Algorithm
To solve (5) efficiently and accurately, this paper proposes the PFLBI algorithm, which consists of two main parts: 1) the basic iteration scheme of PFLBI, i.e., extending LBI to the two-dimensional complex sparse signal model so that two-dimensional complex sparse signals are reconstructed in parallel; 2) a fast implementation, i.e., estimating the iteration step size to speed up convergence, avoiding redundant computation and increasing the running speed. Both parts are described and analyzed below.
2.1 Basic Iteration Scheme of the PFLBI Algorithm
Constructing the basic iteration scheme involves two steps: 1) using the Bregman distance to obtain a Bregman iteration that solves (5); 2) linearizing that Bregman iteration to obtain the LBI in complex matrix form. The basic iteration scheme is stated first and then proved.
Solving (5) with LBI leads to the final iteration:
$$\begin{cases} V^{k+1}=V^{k}+\Theta^{\rm H}\left(Y-\Theta X^{k}\right)\\ X^{k+1}=\delta\,{\rm soft}_{\mu}\left(V^{k+1}\right) \end{cases} \tag{7}$$
where $X^{k+1}$ is the iteration output, $V^{k+1}$ is an intermediate variable, $X^{0}=V^{0}=0$ at $k=0$, and ${\rm soft}_{\mu}(\cdot)$ is the soft-threshold operator for complex matrices, defined as:
$$\left[{\rm soft}_{\mu}(V)\right]_{ij}=\begin{cases} \dfrac{V_{ij}}{\left|V_{ij}\right|}\left(\left|V_{ij}\right|-\mu\right), & \left|V_{ij}\right|>\mu\\ 0, & \left|V_{ij}\right|\le\mu \end{cases} \tag{8}$$
where $V_{ij}$ is the element in row $i$, column $j$ of the complex matrix $V$ and $\left|V_{ij}\right|$ is its modulus. Before proving this scheme, two definitions are needed. 1) The subdifferential of a nondifferentiable convex function $J(X)$ at a point $X$ is defined as [21]:
$$\partial J(X):=\left\{P \mid J(V)\ge J(X)+\left\langle P, V-X\right\rangle,\ \forall V\in S\right\} \tag{9}$$
where $S$ is the feasible set of $J(X)$, $\langle\cdot,\cdot\rangle$ denotes the inner product, and a matrix $P\in\partial J(X)$ is called a subgradient of $J(X)$ at the point $X$.
2) The Bregman distance between points $X$ and $V$ of a convex function $J(X)$ is defined as [22]:
$$D_{J}^{P}\left(X, V\right)=J(X)-J(V)-\left\langle P, X-V\right\rangle \tag{10}$$
where the matrix $P\in\partial J(V)$ is a subgradient in the subdifferential of $J(X)$ at the point $V$.
Replacing $J(X)$ by its Bregman distance yields the Bregman iteration corresponding to (5):
$$X^{k+1}={\rm arg}\min_{X\in C^{N\times D}}\ D_{J}^{P^{k}}\left(X, X^{k}\right)+\frac{1}{2}{\rm tr}\left(\left(\Theta X-Y\right)^{\rm H}\left(\Theta X-Y\right)\right) \tag{11}$$
To linearize this matrix-form Bregman iteration, expand $\frac{1}{2}\|\Theta X-Y\|_{\rm F}^{2}$ in a first-order Taylor series at $X^{k}$ and drop the constant terms; (11) then becomes
$$X^{k+1}={\rm arg}\min_{X\in C^{N\times D}}\ D_{J}^{P^{k}}\left(X, X^{k}\right)+\left\langle X, \Theta^{\rm H}\left(\Theta X^{k}-Y\right)\right\rangle+\frac{1}{2\delta}\left\|X-X^{k}\right\|_{\rm F}^{2} \tag{12}$$
Let
$$F(X)=D_{J}^{P^{k}}\left(X, X^{k}\right)+\left\langle X, \Theta^{\rm H}\left(\Theta X^{k}-Y\right)\right\rangle+\frac{1}{2\delta}{\rm tr}\left(\left(X-X^{k}\right)^{\rm H}\left(X-X^{k}\right)\right) \tag{13}$$
By the subdifferential condition for a stationary point, $0\in\partial F(X)$, i.e.,
$$0\in\partial F(X)=\partial J(X)-P^{k}+\frac{1}{\delta}\left(X-\left(X^{k}-\delta\,\Theta^{\rm H}\left(\Theta X^{k}-Y\right)\right)\right) \tag{14}$$
At $X=X^{k+1}$ we have $P^{k+1}\in\partial J\left(X^{k+1}\right)$, so:
$$P^{k+1}=P^{k}-\frac{1}{\delta}\left(X^{k+1}-X^{k}\right)-\Theta^{\rm H}\left(\Theta X^{k}-Y\right) \tag{15}$$
Unrolling this recursion, (15) can be written as
$$P^{k+1}=P^{k}-\frac{1}{\delta}\left(X^{k+1}-X^{k}\right)-\Theta^{\rm H}\left(\Theta X^{k}-Y\right)=\cdots=\sum_{j=0}^{k}\Theta^{\rm H}\left(Y-\Theta X^{j}\right)-\frac{1}{\delta}X^{k+1} \tag{16}$$
Let
$$V^{k}=\sum_{j=0}^{k-1}\Theta^{\rm H}\left(Y-\Theta X^{j}\right) \tag{17}$$
so that
$$V^{k+1}=V^{k}+\Theta^{\rm H}\left(Y-\Theta X^{k}\right) \tag{18}$$
Substituting $J=\mu\|X\|_{1}$ into (12) and ignoring constant terms gives:
$$X^{k+1}={\rm arg}\min_{X\in C^{N\times D}}\ \mu\|X\|_{1}+\frac{1}{2\delta}\left\|X-\delta\left(P^{k}+\Delta V+\frac{1}{\delta}X^{k}\right)\right\|_{\rm F}^{2} \tag{19}$$
where $\Delta V=\Theta^{\rm H}\left(Y-\Theta X^{k}\right)$. Substituting $\Theta^{\rm H}\left(Y-\Theta X^{k}\right)=V^{k+1}-V^{k}$ and $P^{k}=V^{k}-X^{k}/\delta$ into (19) gives:
$$X^{k+1}={\rm arg}\min_{X\in C^{N\times D}}\ \mu\|X\|_{1}+\frac{1}{2\delta}\left\|X-\delta V^{k+1}\right\|_{\rm F}^{2} \tag{20}$$
Problem (20) is solved in closed form by
$$X^{k+1}=\delta\,{\rm soft}_{\mu}\left(V^{k+1}\right) \tag{21}$$
Combining (18) and (21) yields the basic iteration scheme of the PFLBI algorithm, which completes the proof of (7).
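A minimal sketch of the basic iteration (7) together with the complex matrix soft-threshold operator (8) follows; the function names are illustrative, and `Theta`, `Y` are assumed to come from the measurement model sketched in Section 1.

```python
import numpy as np

def soft_threshold(V, mu):
    """Complex soft-threshold operator of Eq. (8), applied entrywise to a matrix."""
    mag = np.abs(V)
    scale = np.maximum(mag - mu, 0.0) / np.where(mag > 0, mag, 1.0)
    return scale * V  # (V_ij / |V_ij|) (|V_ij| - mu) where |V_ij| > mu, else 0

def lbi_step(V, X, Theta, Y, mu, delta):
    """One parallel linearized Bregman step of Eq. (7), acting on the whole matrix."""
    V = V + Theta.conj().T @ (Y - Theta @ X)  # update the intermediate variable
    X = delta * soft_threshold(V, mu)         # shrink all columns simultaneously
    return V, X
```

Because the shrinkage acts on the whole matrix, all $D$ columns are updated in one step, which is the parallelism discussed in Section 3.4 below.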
2.2 Fast Implementation
Since LBI suffers from a stagnation phenomenon, so does the iteration (7), and a way to eliminate it is needed. Reference [23] analyzed the cause of stagnation in LBI: over one or several iterations, the accumulated increment of the intermediate variable is too small to break the shrinkage threshold, so the output stays unchanged during those iterations. To solve this, [23] proposed changing the iteration step size to eliminate stagnation for real signals; the idea worked well and effectively accelerated the convergence of LBI. This paper applies it to two-dimensional complex signal reconstruction, as analyzed below.
The key to eliminating stagnation is to estimate the step size needed for $V$ to break out of the closed interval $[-\mu, \mu]$. During a stagnation period the increment of $V$, $\Delta V=\Theta^{\rm H}\left(Y-\Theta X^{k}\right)$, can be regarded as fixed, so the iteration during stagnation can be written as
$$\begin{cases} X^{k+j}\equiv X^{k}\\ V^{k+j}=V^{k}+j\,\Delta V \end{cases},\qquad j=1, 2, \cdots \tag{22}$$
To compute the stagnation step size, the data are first preprocessed by vectorizing $X^{k}$, $V^{k}$, $X^{k+j}$, $V^{k+j}$, and $\Delta V$: $\tilde{X}^{k}={\rm vec}\left(X^{k}\right)$, $\tilde{V}^{k}={\rm vec}\left(V^{k}\right)$, $\tilde{X}^{k+j}={\rm vec}\left(X^{k+j}\right)$, $\tilde{V}^{k+j}={\rm vec}\left(V^{k+j}\right)$, and $\Delta\tilde{V}={\rm vec}\left(\Delta V\right)$.
Define $I_{0}$ as the index set of the zero elements of $\tilde{X}^{k}$ and $I_{1}=\bar{I}_{0}$ as the support set of its nonzero elements. Then (22) can be rewritten piecewise as:
$$\begin{cases} \tilde{X}_{i}^{k+j}=\tilde{X}_{i}^{k}, & \forall i\\ \tilde{V}_{i}^{k+j}=\tilde{V}_{i}^{k}+j\,\Delta\tilde{V}_{i}, & i\in I_{0}\\ \tilde{V}_{i}^{k+j}=\tilde{V}_{i}^{k}, & i\in I_{1} \end{cases} \tag{23}$$
A new nonzero element appears in $\tilde{X}^{k}$, ending the stagnation, if and only if some element of $\tilde{V}^{k}$ indexed by $I_{0}$ breaks out of the closed interval $[-\mu, \mu]$. For $i\in I_{0}$ with $\tilde{V}_{i}^{k}\in[-\mu, \mu]$, the number of accumulation steps needed for $\tilde{V}_{i}^{k}$ to break this limit can be estimated by (24):
$$s=\min_{i\in I_{0}}\ s_{i},\qquad s_{i}=\left\lceil \frac{\mu\,{\rm sgn}\left(\Delta\tilde{V}_{i}\right)-\tilde{V}_{i}^{k}}{\Delta\tilde{V}_{i}} \right\rceil \tag{24}$$
where $\lceil\cdot\rceil$ is the ceiling operator and $s$ is the required number of accumulation steps, i.e., the length of the stagnation. Once $s$ is known, the stagnation can be terminated via (25).
$$\begin{cases} X^{k+s}=X^{k}\\ V^{k+s}=V^{k}+s\,\Delta V \end{cases} \tag{25}$$
Thus, when $X^{k}$ stays unchanged over two successive iterations it is judged to be stagnating; the increment of $V^{k}$ is then enlarged so as to break out of $[-\mu, \mu]$, jumping $X^{k}$ directly to the critical point where the stagnation ends. This shortens the accumulation time and speeds up the algorithm.
Note that how well this method eliminates stagnation depends mainly on how accurately the stagnation state $X^{k+s}=X^{k}$ is detected. In practice, $X^{k+s}=X^{k}$ is declared when the difference between the two iterates is smaller than a small constant $\varepsilon$, whose value affects the convergence speed. If $\varepsilon$ is too small, the step size is estimated only when successive iterates of $X^{k}$ are extremely close, and convergence is slow; if $\varepsilon$ is larger, the step size is estimated even when successive iterates differ considerably, and convergence is faster. However, if $\varepsilon$ is too large the algorithm fails to converge. Therefore $\varepsilon$ should be chosen as large as the convergence of the algorithm allows, to obtain fast convergence and reduce the effect of stagnation.
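Combining (7) with the stagnation-step estimation (22)-(25) gives a sketch like the one below, built on `soft_threshold` and `lbi_step` above. The stagnation test with tolerance `eps` and the residual-based stopping rule are illustrative assumptions, and for complex entries the step estimate uses moduli, a simplification of the real-valued formula (24).

```python
import numpy as np

def pflbi(Theta, Y, mu, delta, eps=1e-6, tol=1e-5, max_iter=5000):
    """Sketch of PFLBI: iteration (7) plus stagnation skipping, cf. Eqs. (24)-(25)."""
    N, D = Theta.shape[1], Y.shape[1]
    V = np.zeros((N, D), dtype=complex)
    X = np.zeros((N, D), dtype=complex)
    X_prev = X.copy()
    for _ in range(max_iter):
        V, X = lbi_step(V, X, Theta, Y, mu, delta)
        if np.linalg.norm(Theta @ X - Y) <= tol * np.linalg.norm(Y):
            break                                  # residual small enough: done
        if np.linalg.norm(X - X_prev) < eps:       # stagnation detected
            dV = Theta.conj().T @ (Y - Theta @ X)  # Delta V, fixed while stagnating
            v, dv = V.ravel(), dV.ravel()
            idle = (np.abs(X.ravel()) == 0) & (np.abs(dv) > 0)  # index set I_0
            if idle.any():
                # Accumulation steps needed to break the threshold, cf. Eq. (24)
                # (modulus-based variant for complex entries):
                steps = np.ceil((mu - np.abs(v[idle])) / np.abs(dv[idle]))
                s = max(int(steps.min()), 1)
                V = V + s * dV                     # terminate the stagnation, Eq. (25)
        X_prev = X.copy()
    return X
```

Skipping ahead by $s$ steps only reproduces iterates that (7) would eventually have produced, which is why the output sequence remains a subsequence of (7), a fact used in the convergence analysis below.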
3. Algorithm Performance Analysis
3.1 Convergence Analysis
The convergence analysis focuses on whether (7) converges, because the output sequence of the PFLBI algorithm is a subsequence of the sequence generated by (7): if (7) converges, then PFLBI necessarily converges.
We now analyze the convergence of (7). Reference [24] gave a convergence result with proof for the linearized Bregman iteration; the iteration there can be regarded as the special case of (7) for real vectors, written as (26).
$$\begin{cases} v^{k+1}=v^{k}+\Theta^{\rm H}\left(y-\Theta x^{k}\right)\\ x^{k+1}=\delta\,{\rm soft}_{\mu}\left(v^{k+1}\right) \end{cases} \tag{26}$$
The convergence theorem of [24] for (26) is restated as Theorem 1.
Theorem 1. Let $\Theta\in R^{M\times N}$ be an arbitrary matrix with $M\le N$, and let $0<\delta<1/\left\|\Theta\Theta^{\rm T}\right\|$. Then the sequence $\left\{x^{k}\right\}$ generated by (26) converges to the unique solution of (27); if $\mu\to\infty$, the limit of $\left\{x^{k}\right\}$ converges to an optimal solution of (28).
$$\min_{x\in R^{N}}\left\{f(x):\ x={\arg\min_{x\in R^{N}}}\ g(x)\right\} \tag{27}$$
where $f(x)=\mu\|x\|_{1}+\|x\|^{2}/(2\delta)$ and $g(x)=\|\Theta x-y\|^{2}$.
$$\min_{x\in R^{N}}\left\{\|x\|_{1}:\ x={\arg\min_{x\in R^{N}}}\ g(x)\right\} \tag{28}$$
Proof. Rewrite (7) as:
$$\begin{cases} \left[v_{1,\cdots,D}^{k+1}\right]=\left[v_{1,\cdots,D}^{k}\right]+\Theta^{\rm H}\left(\left[y_{1,\cdots,D}\right]-\Theta\left[x_{1,\cdots,D}^{k}\right]\right)\\ \left[x_{1,\cdots,D}^{k+1}\right]=\delta\,{\rm soft}_{\mu}\left(\left[v_{1,\cdots,D}^{k+1}\right]\right) \end{cases} \tag{29}$$
where $\left[v_{1,\cdots,D}^{k+1}\right]=\left[v_{1}^{k+1},\cdots,v_{D}^{k+1}\right]$, $\left[x_{1,\cdots,D}^{k+1}\right]=\left[x_{1}^{k+1},\cdots,x_{D}^{k+1}\right]$, and $\left[y_{1,\cdots,D}\right]=\left[y_{1},\cdots,y_{D}\right]$.
The $d$-th column of (29) reads
$$\begin{cases} v_{d}^{k+1}=v_{d}^{k}+\Theta^{\rm H}\left(y_{d}-\Theta x_{d}^{k}\right)\\ x_{d}^{k+1}=\delta\,{\rm soft}_{\mu}\left(v_{d}^{k+1}\right) \end{cases} \tag{30}$$
In practice the algorithm operates on complex signals directly to preserve speed. For the convergence analysis, however, it is convenient to convert the complex quantities into real ones via (31) [10].
$$x_{d}=\begin{bmatrix} {\rm Re}\left(x_{d}\right)\\ {\rm Im}\left(x_{d}\right) \end{bmatrix},\qquad \Theta=\begin{bmatrix} {\rm Re}\left(\Theta\right) & -{\rm Im}\left(\Theta\right)\\ {\rm Im}\left(\Theta\right) & {\rm Re}\left(\Theta\right) \end{bmatrix},\qquad y_{d}=\begin{bmatrix} {\rm Re}\left(y_{d}\right)\\ {\rm Im}\left(y_{d}\right) \end{bmatrix} \tag{31}$$
After this realification, (30) satisfies Theorem 1, and hence so does (29); that is, (7) satisfies the convergence conclusion of Theorem 1. Since the output sequence of the PFLBI algorithm is a subsequence of (7), PFLBI satisfies the convergence conclusion of Theorem 1 as well.
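The realification (31) is mechanical; the short sketch below (names assumed) builds the real-valued equivalents and checks that they reproduce the complex product.

```python
import numpy as np

def realify(Theta, x):
    """Real-valued equivalent of a complex linear system, as in Eq. (31)."""
    Theta_r = np.block([[Theta.real, -Theta.imag],
                        [Theta.imag,  Theta.real]])
    x_r = np.concatenate([x.real, x.imag])
    return Theta_r, x_r

rng = np.random.default_rng(2)
Theta = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))
x_d = rng.standard_normal(6) + 1j * rng.standard_normal(6)

Theta_r, x_r = realify(Theta, x_d)
y_d = Theta @ x_d
assert np.allclose(Theta_r @ x_r, np.concatenate([y_d.real, y_d.imag]))
```

Note the doubled dimensions ($2M\times 2N$): this is exactly the storage and computation overhead that the direct complex-valued treatment of PFLBI avoids.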
3.2 Noise Robustness Analysis
The main body of the PFLBI algorithm is the iteration (7), so the analysis focuses on (7). Following [25], (7) is rewritten in the equivalent form (32) for convenience.
$$\begin{cases} Y^{k+1}=Y+Y^{k}-\Theta X^{k}\\ X^{k+1}=\delta\,{\rm soft}_{\mu}\left(\Theta^{\rm H}Y^{k+1}\right) \end{cases} \tag{32}$$
Under noise, $Y=\Theta\bar{X}+\Omega$, where $\bar{X}$ is the true noise-free sparse signal. As long as $\left\|Y-\Theta X^{k}\right\|\ge\left\|Y-\Theta\bar{X}\right\|$, $X^{k}$ approaches $\bar{X}$ monotonically in the Bregman distance $D_{J}^{P^{k}}\left(\bar{X}, X^{k}\right)$. With $k=0$ and $Y^{0}=0$, $X^{0}=0$, we have $Y^{1}=Y$. Decompose the noisy measurement input $Y^{1}$ of iteration (32) into two parts, $Y^{1}=\Theta X^{1}+\Theta B^{1}$, where $X^{1}$ can be regarded as part of the original clean signal $\bar{X}$: when $\mu$ is large, the shrinkage operator ${\rm soft}$ filters the small components out of $\Theta^{\rm H}Y^{1}$, so $X^{1}$ is over-smoothed and contains no noise. $B^{1}$ consists of two parts: 1) the still-unrecovered part $\bar{X}-X^{1}$ of the clean signal; 2) the noise component $\Omega$. That is, $\Theta B^{1}=\Theta\left(\bar{X}-X^{1}\right)+\Omega$, and since $Y^{1}=\Theta X^{1}+\Theta B^{1}$, we have $Y^{1}=\Theta\left(\bar{X}-X^{1}\right)+\Omega+\Theta X^{1}$. To recover the unrecovered signal $\bar{X}-X^{1}$ from $B^{1}$, $\Theta B^{1}$ must be fed back into the original noisy measurement $Y$ at the second iteration, so the new input $Y^{2}$ of the second iteration is
$$Y^{2}=Y+\Theta B^{1}=\Theta X^{1}+2\Theta B^{1}=2\Theta\left(\bar{X}-X^{1}\right)+2\Omega+\Theta X^{1} \tag{33}$$
Compared with the first noisy input $Y^{1}$, the unrecovered signal $\bar{X}-X^{1}$ is doubled, but so is the noise component contained in $Y^{2}$. The new noisy input can again be decomposed as $Y^{2}=\Theta X^{2}+\Theta B^{2}$; when $X^{2}$ is solved from $Y^{2}$, the signal component of $Y^{2}$ not only lets $X^{2}$ inherit $X^{1}$ but also reconstructs part of the unrecovered signal $\bar{X}-X^{1}$, so $X^{2}$ approximates $\bar{X}$ better than $X^{1}$. If the noise variance $\Sigma^{2}$ is known, the following can be used as the stopping criterion under noise:
$$\left\|Y-\Theta X^{k}\right\|>\left\|Y-\Theta\bar{X}\right\|=\Sigma^{2} \tag{34}$$
3.3 Computational Complexity Analysis
The reconstruction speeds are compared below through operation counts, with one addition or multiplication as the unit.
Consider (7) first: one iteration costs ${\rm O}\left(D\left(4MN+2N\right)\right)$ operations, so if the final solution is reached after $L$ iterations, the total cost of (7) is ${\rm O}\left(DL\left(4MN+2N\right)\right)$. The cost of the PFLBI algorithm is likewise dominated by the iteration (7); only the number of iterations differs. If PFLBI terminates after $L_{1}$ iterations, its total cost is ${\rm O}\left(DL_{1}\left(4MN+2N\right)\right)$. Because PFLBI estimates the stagnation step size, $L_{1}<L$; hence PFLBI requires fewer operations than (7) and runs faster.
3.4 Parallelism Analysis
Unlike traditional column-by-column reconstruction, the proposed algorithm reconstructs the whole matrix, i.e., all columns, at the same time. It embodies the idea of multiple events occurring simultaneously and is parallel in that sense. Note that this parallelism is not exactly the parallel computing of computer science, where several machines work simultaneously; only the underlying idea is shared. The details are as follows.
First, LBI is extended to the two-dimensional complex sparse signal model, so that matrices are processed directly and the two-dimensional complex sparse signal is reconstructed in parallel. Furthermore, a key point of the parallelism is that the original soft-threshold operator acting on vectors is generalized to one that shrinks a whole matrix at once, which is what enables the algorithm to reconstruct two-dimensional complex sparse signals in parallel.
4. Simulations and Experiments
The performance of the algorithm is first verified and analyzed through simulations and then further validated through ISAR imaging of simulated and measured data. The simulations were implemented in MATLAB on a computer with an Intel Core 2 Duo E7500 processor (2.93 GHz) and 2 GB of memory.
4.1 Algorithm Performance Verification and Analysis
Simulation 1. Feasibility verification
This simulation verifies the effectiveness of the proposed algorithm against existing complex-model methods, so it is compared mainly with the methods of [8] and [10]. A general complex signal $s=|a|\,{\rm exp}\left({\rm i}\theta\right)$ is considered, with amplitude $a\sim{\rm N}\left(0, 1\right)$ and phase $\theta\sim{\rm U}\left(-\pi, \pi\right)$; the length of $s$ is 256, the sparsity is 10, and the sensing matrix is a $128\times 256$ random Gaussian matrix. The stopping criterion is $\left\|\Theta\hat{s}^{k}-y\right\|_{2}/\left\|y\right\|_{2}\le 10^{-5}$ with at most 5 000 iterations, and the relative reconstruction error is defined as ${\rm error}=\|\hat{s}^{k}-s\|_{2}/\|s\|_{2}$. The results are shown in Fig. 2.
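For reference, the setup of Simulation 1 can be reproduced along the following lines; the random seed, the choice of $\mu$, and the call to the `pflbi` sketch of Section 2.2 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, K = 256, 128, 10

# K-sparse complex test signal: s = |a| exp(i*theta), a ~ N(0,1), theta ~ U(-pi, pi).
s = np.zeros(N, dtype=complex)
support = rng.choice(N, size=K, replace=False)
s[support] = np.abs(rng.standard_normal(K)) * np.exp(1j * rng.uniform(-np.pi, np.pi, K))

Theta = rng.standard_normal((M, N)) / np.sqrt(M)  # 128 x 256 random Gaussian matrix
y = Theta @ s

delta = 0.5 / np.linalg.norm(Theta @ Theta.T, 2)  # respects 0 < delta < 1/||Theta Theta^T||
s_hat = pflbi(Theta, y[:, None], mu=1.0, delta=delta, max_iter=5000)[:, 0]

error = np.linalg.norm(s_hat - s) / np.linalg.norm(s)  # relative reconstruction error
print(f"relative reconstruction error = {error:.2e}")
```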
Figure 2 shows that the methods of [8] and [10] and the proposed PFLBI algorithm all reconstruct the amplitude and phase of the sparse complex signal effectively, verifying the effectiveness of the proposed algorithm.
Simulation 2. Convergence verification. The convergence of the algorithm is verified by simulation with the same parameters as in Simulation 1; Fig. 3 plots the logarithm of the relative reconstruction error against the number of iterations.
Figure 3 shows that the relative reconstruction errors of the output sequences of (7) and of the PFLBI algorithm decrease with the iterations and finally reach the preset stopping threshold, verifying the convergence of PFLBI. It also shows that (7) suffers from stagnation, which PFLBI effectively removes by estimating the iteration step size, reducing the number of iterations and accelerating convergence; this demonstrates the advantage of PFLBI. In addition, the convergence speed depends on the value of $\varepsilon$: the smaller $\varepsilon$, the slower the convergence, and the larger $\varepsilon$, the faster the convergence; but when $\varepsilon$ is too large the algorithm does not converge. The simulation thus confirms the theoretical analysis.
Simulation 3. Noise robustness verification. This simulation examines how sensitive the reconstruction accuracy of PFLBI is to the signal-to-noise ratio (SNR), compared with the methods of [8] and [10]. The parameters are the same as in Simulation 1; the noise is complex white Gaussian, the SNR ranges from 0 dB to 35 dB in steps of 5 dB, and 100 Monte Carlo runs are performed. Figure 4 shows the relative reconstruction error versus SNR.
Figure 4 shows that at low SNR the relative reconstruction errors of all three algorithms are large and none reconstructs the original signal accurately; as the SNR increases, the errors of all three decrease. The proposed PFLBI algorithm outperforms the methods of [8] and [10].
Simulation 4. Running-time comparison. This simulation verifies the speed advantage of PFLBI against the methods of [8] and [10]. The stopping criterion and maximum number of iterations are as in Simulation 1; the signal length ranges from 256 to 1 024 in steps of 128, with 100 Monte Carlo runs. Figure 5 shows the running times of the different algorithms.
Figure 5 shows that the running time of every algorithm grows with the signal length. The method of [8] requires two optimization processes and is therefore the slowest; the method of [10] must realify the complex data, which enlarges the signal and the sensing matrix; the proposed algorithm handles complex signals directly and takes the least time, confirming its advantage.
4.2 ISAR Imaging with Simulated Data
To further demonstrate the advantage of the proposed algorithm, it is applied to ISAR cross-range imaging. The core idea of CS ISAR imaging is to exploit the sparsity of ISAR images to obtain high- or even super-resolution images from a small number of echoes. References [12, 26-27] have studied CS ISAR imaging thoroughly; the basis on which our algorithm is applied to cross-range imaging is the same and is not repeated here. The CS ISAR cross-range imaging of [12, 26-27] reconstructs range cell by range cell, whereas the proposed method uses PFLBI to reconstruct all range cells simultaneously, achieving parallel processing and improving imaging efficiency while preserving image quality. The simulation parameters are: linear frequency modulation (LFM) transmit signal, 10 GHz carrier frequency, 10 $\mu$s pulse width, 400 MHz bandwidth, 800 MHz sampling frequency, and 200 Hz pulse repetition frequency. The target is an aircraft model with 34 scattering points, assumed to fly at a constant speed of 300 m/s, with the reference position of the observation range gate at 50 km and 256 echo pulses. The target model and the result after pulse compression are shown in Fig. 6.
To verify the effectiveness and imaging quality of the algorithms at different SNRs, the PFLBI results are compared with those of the RD (range-Doppler) algorithm and the OMP (orthogonal matching pursuit) algorithm. For PFLBI, $\delta=1/\left(2\left\|\Theta\Theta^{\rm T}\right\|\right)$ and $\mu=300/\delta$. Figures 7(a)-7(c) show the ISAR images obtained by the RD, OMP, and PFLBI algorithms at different SNRs.
Figure 7 shows that, compared with the RD algorithm, the CS-based reconstruction yields ISAR images with higher resolution and lower sidelobes. As the SNR decreases, the imaging quality of all three algorithms degrades: the RD result suffers severely from noise at low SNR, while OMP and PFLBI, which use the noise variance as the stopping criterion, are less affected by the SNR. Comparing the images shows that PFLBI yields lower sidelobes and fewer false scattering points than OMP, verifying the good imaging performance of the proposed algorithm at different SNRs.
4.3 ISAR Imaging with Measured Data
To further verify the performance of the algorithm, experiments with measured data are conducted. Some of the radar parameters are: LFM transmit signal, 100 MHz bandwidth, S band, 400 Hz pulse repetition frequency, and 1 570 echo pulses. Imaging with the measured data is compared at different SNRs. For PFLBI, $\delta=1/\left(2\left\|\Theta\Theta^{\rm T}\right\|\right)$ and $\mu=300/\delta$. Figure 8 shows the pulse-compressed measured data at SNRs of 14 dB, 10 dB, 6 dB, and 2 dB, and Figs. 9(a)-(c) show the corresponding measured-data ISAR images produced by the RD, OMP, and PFLBI algorithms.
Figure 9 shows that as the SNR decreases, the imaging quality of all three algorithms degrades, with some scattering points lost and false ones appearing. The RD algorithm has low resolution and is severely affected by noise, producing many false scattering points at low SNR. OMP and PFLBI improve the resolution, lower the sidelobes, and are less affected by the SNR, but PFLBI outperforms OMP, with lower sidelobes and fewer false scattering points. The measured data further verify the good imaging performance of PFLBI at different SNRs.
5. Conclusion
To address the increased storage and computational complexity encountered when reconstructing two-dimensional complex signals, this paper first constructed a model of two-dimensional complex sparse signals with arbitrary sparsity patterns. A basic iteration scheme was then built on LBI, and the convergence was accelerated by estimating the iteration step size, yielding the PFLBI algorithm. Theoretical and simulation analysis show that PFLBI converges well, is robust to noise, and is fast when reconstructing two-dimensional complex sparse signals; it also effectively eliminates the stagnation phenomenon and improves computational efficiency. Finally, PFLBI was applied to ISAR imaging, and imaging results on simulated and measured data at different SNRs verified its good performance.