The control of nonholonomic systems has received a great deal of attention over the past twenty years[1-2]. In [3] and some references therein, it is shown that many systems with nonholonomic constraints can be transformed, either locally or globally, to chained systems by using coordinate and state-feedback transformations. Several new control strategies have been developed around these important nonholonomic chained models[4-6]. A wheeled mobile robot (WMR) is one of the well-known systems with nonholonomic constraints[7]. In the control of nonholonomic WMRs, it is usually assumed that the states are available from sensor measurements. In practice, however, there exist uncertainties such as uncalibrated parameters in the kinematic models, mechanical limitations, noise, and so on. In the past ten years, the study of nonholonomic systems with uncertainties has received considerable attention, and many strategies have been investigated to stabilize uncertain nonholonomic systems[8-11]. Adaptive strategies have often been used to control dynamic nonholonomic systems with modeling or parametric uncertainties[12-13].
Tracking control is a complicated problem because of the coupled and nonlinear system dynamics[12-16]. In [12], adaptive force tracking controllers were proposed that ensure not only that the entire state of the system asymptotically converges to the desired trajectory but also that the constraint force asymptotically converges to the desired force. In [13], Wang et al. proposed a robust adaptive tracking controller that guarantees robustness to parametric and dynamic uncertainties and also rejects bounded, unmeasurable disturbances entering the system. Based on Lyapunov's direct method and the backstepping technique, time-varying global adaptive controllers were presented in [6] and [15] that simultaneously solve both tracking and stabilization for mobile robots with unknown kinematic and dynamic parameters.
Visual feedback is an important approach to improving the control performance of robots and manipulators, since it mimics the human sense of vision and allows operation based on non-contact measurement in unstructured environments. Since the late 1980s, tremendous effort has been devoted to visual servoing[16-21] and vision-based manipulation. In [16], feedback from an uncalibrated, fixed (ceiling-mounted) camera was used to develop an adaptive tracking controller for a mobile robot that compensates for the parametric uncertainty in the camera and the mobile robot dynamics. In [18], a visual servo tracking controller was developed for a monocular camera system mounted on an underactuated WMR subject to nonholonomic motion constraints. In [19], a new controller was presented for controlling a number of feature points on a robot manipulator so that they track desired trajectories specified on the image plane of a fixed camera. Recently, [20] presented a dynamic feedback tracking controller for a nonholonomic WMR of unicycle type with unknown camera parameters. In [11], a series of new chained models of nonholonomic mobile robots with uncalibrated visual parameters was presented. In [21], the trajectory tracking control problem of another kind of uncertain dynamic nonholonomic mobile robot (the type (1, 1) robot) was addressed, and a new adaptive torque tracking controller was presented for the tracking error model. For the type (1, 2) robot, which has two steering wheels and one castor wheel, with unknown visual parameters, a new and simple robust stabilization controller[11, 22] was designed for a particular case. However, the corresponding tracking problem has not been discussed. Compared with [16], in this paper we design an adaptive dynamic feedback controller to compensate for the unknown camera parameter. Based on the Lyapunov direct method and the idea of backstepping, two transformations are chosen. The resulting controllers make the mobile robot track the desired trajectory in both the image space and the work-space.
The paper is organized as follows. Section 1 addresses the robot-camera system configuration. In Section 2, an uncertain chained form model is presented and the tracking problems are formulated. Section 3 addresses the design of adaptive and dynamic feedback tracking controllers for the uncertain kinematic error system and gives a rigorous proof of the asymptotic convergence of the closed-loop error system; the tracking problem in the work-space of the mobile robot is then treated. In Section 4, simulation results are provided to illustrate the effectiveness of the proposed control strategy. Finally, the major contributions of the paper are summarized in Section 5.
1. Robot-camera system
In this section, we address the robot-camera system configuration.
A robot-camera system is shown in Fig. 1. It is assumed that a pinhole camera is fixed to the ceiling and that the type (1, 2) mobile robot moves beneath it, so that the movement of the mobile robot can be measured by the fixed camera. It is assumed that the camera plane is parallel to the plane of the mobile robot and that the camera can capture images throughout the entire robot workspace. Three coordinate frames exist in the robot-camera system, namely the inertial frame $X-Y-Z$, the camera frame $i-j-k$ and the image frame $i_{1}-o_{1}-j_{1}$. Assume that the $i-j$ plane of the camera frame is parallel to the image plane and that the corresponding coordinate axes point in the same directions. The origin of the camera frame has coordinates $(O_{c1},O_{c2})$ with respect to the image frame, and $C(c_{x},c_{y})$ is the intersection point between the optical axis of the camera and the $X-Y$ plane.
1.1 Robot kinematic system
In Fig. 1, the type (1, 2) mobile robot[7] lies in the $X-Y$ plane and has two steering wheels (conventional centered orientable wheels) and one castor wheel (conventional off-centered orientable wheel). $P$ is the midpoint between the centers of the two steering wheels, with the $i_{2}$ axis aligned along the line joining their centers. $A$ and $B$ are the center points of the two steering wheels, respectively. $L$ is the distance between point $P$ and point $A$ (or between point $P$ and point $B$); it is also the distance between point $P$ and the joint point of the castor wheel. $\theta $ denotes the angle between the $i_{2}$ axis and the $X$ axis, and $\beta _{1}$ and $\beta _{2}$ denote the angles between the wheel planes of the two steering wheels and the $i_{2}$ axis, respectively. Assume that the geometric center and the mass center of the robot coincide. The nonholonomic constraints are described[7] by
$\begin{align} & (\cos {{\beta }_{1}},\sin {{\beta }_{1}},L\sin {{\beta }_{1}})G(\theta )\dot{q}=0 \\ & (-\cos {{\beta }_{2}},-\sin {{\beta }_{2}},L\sin {{\beta }_{2}})G(\theta )\dot{q}=0 \\ \end{align}$
where $q =(x,y,\theta )^{\rm T},$ and
$G(\theta )=\left[ \begin{matrix} \cos \theta & \sin \theta & 0 \\ -\sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \\ \end{matrix} \right]$
Then the nonholonomic kinematic system[7] can be given by
$ \begin{align} \begin{cases} \dot{x}=-Lv_{1}[\sin \beta _{1}\sin (\theta +\beta _{2})+\sin \beta _{2}\sin (\theta +\beta _{1})] \\ \dot{y}=Lv_{1}[\sin \beta _{1}\cos (\theta +\beta _{2})+\sin \beta _{2}\cos (\theta +\beta _{1})] \\ \dot{\theta}=v_{1}\sin (\beta _{2}-\beta _{1}) \\ \dot{\beta}_{1}=v_{2} \\ \dot{\beta}_{2}=v_{3} \end{cases} \label{h4} \end{align} $
(1) where $v_{1}$ is the velocity of the robot, and $v_{2}$ and $v_{3}$ are the steering angular velocities of the two steering wheels, respectively.
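To make the kinematic model (1) concrete, the following sketch (ours, not part of the original development; the value of $L$ and the simple Euler integration are assumptions for illustration only) evaluates the right-hand side of (1) and propagates the state numerically.

```python
import numpy as np

L = 0.5  # assumed value of the geometric parameter L (m); illustrative only

def type12_kinematics(state, inputs):
    """Right-hand side of system (1) for the type (1, 2) mobile robot.

    state  = (x, y, theta, beta1, beta2)
    inputs = (v1, v2, v3): driving velocity and the two steering rates.
    """
    x, y, theta, b1, b2 = state
    v1, v2, v3 = inputs
    dx = -L * v1 * (np.sin(b1) * np.sin(theta + b2) + np.sin(b2) * np.sin(theta + b1))
    dy = L * v1 * (np.sin(b1) * np.cos(theta + b2) + np.sin(b2) * np.cos(theta + b1))
    dtheta = v1 * np.sin(b2 - b1)
    return np.array([dx, dy, dtheta, v2, v3])

def euler_step(state, inputs, dt=1e-3):
    """One explicit Euler step of (1); a finer integrator can be substituted."""
    return np.asarray(state, dtype=float) + dt * type12_kinematics(state, inputs)
```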
For system (1), if $\sin(\beta_{2}-\beta_{1})=0$, then $\beta_{2}=\beta_{1}+k\pi$ $(k=0,\pm1,\pm2,\cdots)$. Considering the two nonholonomic constraints along the wheel planes given in [7], one obtains that the type (1, 2) mobile robot either is stopped (i.e., $v_{1}=0$) or degenerates into a type (2, 0) robot, whose tracking and stabilization problems have been discussed in many papers such as [20-21].
For system (1), if $\sin (\beta _{2}-\beta _{1})\neq0,$ choose the state-input transformation[3] as
$\left\{ \begin{array}{*{35}{l}} {{z}_{0}}=\theta \\ {{z}_{1}}=x\cos \theta +y\sin \theta \\ {{z}_{2}}=-x\sin \theta +y\cos \theta -2L\frac{\sin {{\beta }_{1}}\sin {{\beta }_{2}}}{\sin ({{\beta }_{2}}-{{\beta }_{1}})} \\ {{z}_{3}}=x\sin \theta -y\cos \theta \\ {{z}_{4}}=x\cos \theta +y\sin \theta -L\frac{\sin ({{\beta }_{1}}+{{\beta }_{2}})}{\sin ({{\beta }_{2}}-{{\beta }_{1}})} \\ {{\sigma }_{0}}={{v}_{1}}\sin ({{\beta }_{2}}-{{\beta }_{1}}) \\ \begin{align} & {{\sigma }_{1}}=-{{z}_{4}}{{v}_{1}}\sin ({{\beta }_{2}}-{{\beta }_{1}})-\frac{2L{{v}_{2}}{{\sin }^{2}}{{\beta }_{2}}}{{{\sin }^{2}}({{\beta }_{2}}-{{\beta }_{1}})}+\frac{2L{{v}_{3}}{{\sin }^{2}}{{\beta }_{1}}}{{{\sin }^{2}}({{\beta }_{2}}-{{\beta }_{1}})} \\ & {{\sigma }_{2}}={{z}_{2}}{{v}_{1}}\sin ({{\beta }_{2}}-{{\beta }_{1}})-\frac{L{{v}_{2}}\sin (2{{\beta }_{2}})}{{{\sin }^{2}}({{\beta }_{2}}-{{\beta }_{1}})}+\frac{L{{v}_{3}}\sin (2{{\beta }_{1}})}{{{\sin }^{2}}({{\beta }_{2}}-{{\beta }_{1}})} \\ \end{align} \\ \end{array} \right.$
(2) We obtain the following chained system
$\left\{ \begin{array}{*{35}{l}} {{{\dot{z}}}_{0}}={{\sigma }_{0}} \\ {{{\dot{z}}}_{1}}={{z}_{2}}{{\sigma }_{0}} \\ {{{\dot{z}}}_{2}}={{\sigma }_{1}} \\ {{{\dot{z}}}_{3}}={{z}_{4}}{{\sigma }_{0}} \\ {{{\dot{z}}}_{4}}={{\sigma }_{2}} \\ \end{array} \right.$
(3) System (3) is called the canonical chained form system[8]. Generally, $(x,y)$ in (1) needs to be measured for feedback. Encoders can be used for this purpose; however, overshoot and low precision are their main drawbacks. A camera is a convenient sensor for non-contact measurement in unstructured environments, and the data from a camera can be used for the robot tracking problem.
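As an illustration of how the coordinate and input transformation (2) can be evaluated, a minimal sketch is given below (ours; the function name and the value of $L$ are hypothetical). It assumes $\sin(\beta_{2}-\beta_{1})\neq 0$, as required.

```python
import numpy as np

L = 0.5  # assumed geometric parameter, as in the kinematics sketch above

def to_chained(state, inputs):
    """Evaluate transformation (2): (x, y, theta, beta1, beta2), (v1, v2, v3) -> (z, sigma)."""
    x, y, theta, b1, b2 = state
    v1, v2, v3 = inputs
    s = np.sin(b2 - b1)
    assert abs(s) > 1e-9, "transformation (2) requires sin(beta2 - beta1) != 0"
    z0 = theta
    z1 = x * np.cos(theta) + y * np.sin(theta)
    z2 = -x * np.sin(theta) + y * np.cos(theta) - 2 * L * np.sin(b1) * np.sin(b2) / s
    z3 = x * np.sin(theta) - y * np.cos(theta)
    z4 = x * np.cos(theta) + y * np.sin(theta) - L * np.sin(b1 + b2) / s
    sigma0 = v1 * s
    sigma1 = (-z4 * v1 * s - 2 * L * v2 * np.sin(b2) ** 2 / s ** 2
              + 2 * L * v3 * np.sin(b1) ** 2 / s ** 2)
    sigma2 = (z2 * v1 * s - L * v2 * np.sin(2 * b2) / s ** 2
              + L * v3 * np.sin(2 * b1) / s ** 2)
    return np.array([z0, z1, z2, z3, z4]), np.array([sigma0, sigma1, sigma2])
```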
1.2 Camera system model
In Fig. 1, $(x,y)$ is the coordinate of the mass center $P$ of the robot with respect to the $X-Y$ plane. Suppose that ${{P}_{m}}({{x}_{m}},{{y}_{m}})$ is the coordinate of $P$ relative to the image frame. The pinhole camera model yields
$ \begin{equation} \left[ \begin{array}{c} x_{m} \\ y_{m} \end{array} \right] =\left[ \begin{array}{cc} \alpha _{1} & 0 \\ 0 & \alpha _{2} \end{array} \right]H(\theta_{0}) \left[\left[ \begin{array}{c} x \\ y \end{array} \right] -\left[ \begin{array}{c} c_{x} \\ c_{y} \end{array} \right] \right] +\left[ \begin{array}{c} O_{c1} \\ O_{c2} \end{array} \right] \label{h1} \end{equation} $
(4) where $\alpha _{1}$ and $\alpha _{2}$ are positive constants depending on the depth information, focal length and scale factors[16], defined as follows
$ \begin{align*} \alpha _{1}=\rho_{1}\frac{f}{z},\ \ \alpha _{2}=\rho_{2}\frac{f}{z} \end{align*} $
where $z\in {\bf R}^{1}$ represents the constant height of the camera optical center with respect to the task-space plane, $f\in {\bf R}^{1}$ is a constant representing the camera's focal length, and the positive constants $\rho_{1}$, $\rho_{2}\in {\bf R}^{1}$ represent the camera's constant scale factors (in pixels/m) along the respective Cartesian directions[16]. Moreover,
$ \begin{align*} H(\theta_{0})=\left[ \begin{array}{cc} \cos \theta _{0} & \sin \theta _{0} \\ -\sin \theta _{0} & \cos \theta _{0} \end{array} \right] \end{align*} $
where $\theta _{0}$ denotes the angle between the $j$ axis and the $X$ axis, i.e., the constant anticlockwise rotation angle of the camera coordinate system with respect to the task-space coordinate system.
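For concreteness, a minimal sketch of the pinhole map (4) follows (ours; the numerical calibration values in the example call are placeholders, since $\alpha_{1}$, $\alpha_{2}$ and $\theta_{0}$ are treated as uncalibrated in the sequel).

```python
import numpy as np

def pinhole_projection(p_xy, alpha1, alpha2, theta0, c, o):
    """Map a task-space point (x, y) to image coordinates (x_m, y_m) via (4).

    alpha1, alpha2  : depth/focal-length/scale factors (uncalibrated in the paper)
    theta0          : anticlockwise rotation angle of the camera frame
    c = (c_x, c_y)  : intersection of the optical axis with the X-Y plane
    o = (O_c1, O_c2): camera-frame origin expressed in the image frame
    """
    H = np.array([[np.cos(theta0), np.sin(theta0)],
                  [-np.sin(theta0), np.cos(theta0)]])
    A = np.diag([alpha1, alpha2])
    return (A @ H @ (np.asarray(p_xy, dtype=float) - np.asarray(c, dtype=float))
            + np.asarray(o, dtype=float))

# Example call with placeholder calibration values (unknown in practice):
# pinhole_projection([1.0, 2.0], alpha1=200.0, alpha2=200.0,
#                    theta0=np.pi / 3, c=[0.0, 0.0], o=[320.0, 240.0])
```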
Therefore, the kinematic system in the image frame can be rewritten as
$ \begin{equation} \left[ \begin{array}{c} \dot{x}_{m} \\ \dot{y}_{m} \end{array} \right] =\left[ \begin{array}{cc} \alpha _{1} & 0 \\ 0 & \alpha _{2} \end{array} \right]\left[\begin{array}{cc} \cos \theta _{0} & \sin \theta _{0} \\ -\sin \theta _{0} & \cos \theta _{0} \end{array}\right] \left[ \begin{array}{c} \dot{x} \\ \dot{y} \end{array} \right] \label{h2} \end{equation} $
(5) In this paper, it is assumed that $(x,y)$ in (1) is measured by a camera with uncalibrated visual parameters, as shown in Fig. 1. The pose of the mobile robot in the workspace is $(x,y,\theta)$, and the pose of the robot in the image plane is $(x_{m},y_{m},\theta_{m})$. Then, by using the state and input transformations of Section 2, a kinematic model with unknown visual parameters will be deduced in the following section.
Remark 1. The matrix $H(\theta_{0})$ defined above is a rotation matrix that differs from the matrix denoted by $R(\theta_{0})$ in [16]: in our paper, $H(\theta_{0})$ is an anticlockwise rotation matrix, whereas $R(\theta_{0})$ in [16] is a clockwise rotation matrix.
2. Problem formulation
In this section, we present an uncertain chained system by using (5) together with state and input transformations for the type (1, 2) mobile robot with unknown visual parameters. Then, we formulate the tracking problems for the uncertain chained system and for the type (1, 2) mobile robot.
For system (1), suppose $\sin(\beta _{2}-\beta _{1})\neq0,$ and consider (5). We have[22]
$\left[ \begin{matrix} {{{\dot{x}}}_{m}} \\ {{{\dot{y}}}_{m}} \\ \end{matrix} \right]=\left[ \begin{matrix} -{{\alpha }_{1}}L{{v}_{1}}(\sin {{\beta }_{1}}{{s}_{\Delta 2}}+\sin {{\beta }_{2}}{{s}_{\Delta 1}}) \\ {{\alpha }_{2}}L{{v}_{1}}(\sin {{\beta }_{1}}{{c}_{\Delta 2}}+\sin {{\beta }_{2}}{{c}_{\Delta 1}}) \\ \end{matrix} \right]$
(6) where
$ \begin{align*} s_{\Delta i}=\sin (\theta -\theta _{0}+\beta _{i}),\ c_{\Delta i}=\cos (\theta -\theta _{0}+\beta _{i}),\ \ i=1,2 \end{align*} $
Considering kinematic system (1) in the robot workspace, we have
$ \tan\theta=\frac{\dot{y}}{\dot{x}}=-\frac{\sin\beta_{1}\cos(\theta+\beta_{2})+\sin\beta_{2}\cos(\theta+\beta_{1})}{\sin\beta_{1}\sin(\theta+\beta_{2})+\sin\beta_{2}\sin(\theta+\beta_{1})} $
Then,
$ \tan(\theta-\theta_{0})=-\frac{\sin\beta_{1}c_{\Delta 2}+\sin\beta_{2}c_{\Delta 1}}{\sin\beta_{1}s_{\Delta 2}+\sin\beta_{2}s_{\Delta 1}} $
Now, considering (6) in the image space, we have
$ \tan\theta_{m}=\frac{\dot{y}_{m}}{\dot{x}_{m}}=-\frac{\alpha_{2}}{\alpha_{1}}\displaystyle\frac{\sin\beta_{1}c_{\Delta 2}+\sin\beta_{2}c_{\Delta 1}}{\sin\beta_{1}s_{\Delta 2}+\sin\beta_{2}s_{\Delta 1}} $
Hence, we obtain the following relationships
$\begin{align} & \tan {{\theta }_{m}}=\frac{{{\alpha }_{2}}}{{{\alpha }_{1}}}\tan (\theta -{{\theta }_{0}}) \\ & {{\sec }^{2}}{{\theta }_{m}}=\frac{\alpha _{1}^{2}{{\cos }^{2}}(\theta -{{\theta }_{0}})+\alpha _{2}^{2}{{\sin }^{2}}(\theta -{{\theta }_{0}})}{\alpha _{1}^{2}{{\cos }^{2}}(\theta -{{\theta }_{0}})} \\ \end{align}$
(7) After taking the time derivative of (7), we have that
$ (\sec^{2}\theta_{m})\dot{\theta}_{m}=\left[\frac{\alpha_{2}}{\alpha_{1}}\sec^{2}(\theta-\theta_{0})\right]\dot{\theta} $
Therefore, we obtain
$ \dot{\theta}=\left[\frac{\alpha_{1}}{\alpha_{2}}\cos^{2}(\theta-\theta_{0})+\ \frac{\alpha_{2}}{\alpha_{1}}\sin^{2}(\theta-\theta_{0})\right]\dot{\theta}_{m} $
If $\alpha _{1}=\alpha _{2}=\alpha$, we have $\dot{\theta}_{m}=\dot{\theta}$ and $\theta_{m}=\theta-\theta_{0}+k\pi$ ($k=$ $0$, $\pm1$, $\pm2,\cdots$). Then, the nonholonomic kinematic system with uncalibrated parameters in the image-plane can be described by the following system
$ \begin{equation} \left[ \begin{array}{l} \dot{x}_{m} \\ \dot{y}_{m} \\ \dot{\theta} \\ \dot{\beta}_{1} \\ \dot{\beta}_{2} \end{array} \right] =\left[ \begin{array}{c} -\alpha Lv_{1}(\sin\beta_{1}s_{\Delta 2}+\sin\beta_{2}s_{\Delta 1}) \\ \alpha Lv_{1}(\sin\beta_{1}c_{\Delta 2}+\sin\beta_{2}c_{\Delta 1}) \\ v_{1}\sin(\beta_{2}-\beta_{1}) \\ v_{2} \\ v_{3} \end{array} \right] \label{h6} \end{equation} $
(8) where $\theta_{m}$ is expressed in terms of $\theta$. For $i=1,2$, denote
$\begin{align} & {{s}_{\Lambda i}}=\sin (2\theta -{{\theta }_{0}}+{{\beta }_{i}}),\ \ {{s}_{\Theta }}=\sin (2\theta -{{\theta }_{0}}) \\ & {{c}_{\Lambda i}}=\cos (2\theta -{{\theta }_{0}}+{{\beta }_{i}}),\ \ {{c}_{\Theta }}=\cos (2\theta -{{\theta }_{0}}) \\ \end{align}$
(9) and consider the following expressions
$\begin{align} & \sin \theta {{c}_{\Delta i}}=-\frac{1}{2}\sin ({{\beta }_{i}}-{{\theta }_{0}})+\frac{1}{2}{{s}_{\Lambda i}},\ \ i=1,2 \\ & \cos \theta {{c}_{\Delta i}}=\frac{1}{2}\cos ({{\beta }_{i}}-{{\theta }_{0}})+\frac{1}{2}{{c}_{\Lambda i}},\quad \ i=1,2 \\ \end{align}$
Then, note that
$\begin{align} & \sin {{\beta }_{1}}\sin ({{\beta }_{2}}-{{\theta }_{0}})+\sin {{\beta }_{2}}\sin ({{\beta }_{1}}-{{\theta }_{0}})= \\ & 2\sin {{\beta }_{1}}\sin {{\beta }_{2}}\cos {{\theta }_{0}}-\sin {{\theta }_{0}}\sin ({{\beta }_{1}}+{{\beta }_{2}}) \\ & \sin {{\beta }_{1}}\cos ({{\beta }_{2}}-{{\theta }_{0}})+\sin {{\beta }_{2}}\cos ({{\beta }_{1}}-{{\theta }_{0}})= \\ & 2\sin {{\beta }_{1}}\sin {{\beta }_{2}}\sin {{\theta }_{0}}+\cos {{\theta }_{0}}\sin ({{\beta }_{1}}+{{\beta }_{2}}) \\ \end{align}$
and
$\begin{align} & \sin {{\beta }_{1}}{{s}_{\Lambda 2}}+\sin {{\beta }_{2}}{{s}_{\Lambda 1}}= \\ & 2\sin {{\beta }_{1}}\sin {{\beta }_{2}}{{c}_{\Theta }}+\sin ({{\beta }_{1}}+{{\beta }_{2}}){{s}_{\Theta }} \\ & \sin {{\beta }_{1}}{{c}_{\Lambda 2}}+\sin {{\beta }_{2}}{{c}_{\Lambda 1}}= \\ & -2\sin {{\beta }_{1}}\sin {{\beta }_{2}}{{s}_{\Theta }}+\sin ({{\beta }_{1}}+{{\beta }_{2}}){{c}_{\Theta }} \\ \end{align}$
Hence, by taking the following state and input transformations
$\left\{ \begin{array}{*{35}{l}} {{x}_{0}}=\theta \\ {{x}_{1}}={{x}_{m}}\cos \theta +{{y}_{m}}\sin \theta \\ {{x}_{2}}=-{{x}_{m}}\sin \theta +{{y}_{m}}\cos \theta -2L\frac{\sin {{\beta }_{1}}\sin {{\beta }_{2}}}{\sin ({{\beta }_{2}}-{{\beta }_{1}})} \\ {{x}_{3}}={{x}_{m}}\sin \theta -{{y}_{m}}\cos \theta \\ {{x}_{4}}={{x}_{m}}\cos \theta +{{y}_{m}}\sin \theta -L\frac{\sin ({{\beta }_{1}}+{{\beta }_{2}})}{\sin ({{\beta }_{2}}-{{\beta }_{1}})} \\ {{u}_{0}}={{v}_{1}}\sin ({{\beta }_{2}}-{{\beta }_{1}}) \\ \begin{align} & {{u}_{1}}=-{{x}_{4}}{{v}_{1}}\sin ({{\beta }_{2}}-{{\beta }_{1}})-\frac{2L{{v}_{2}}{{\sin }^{2}}{{\beta }_{2}}}{{{\sin }^{2}}({{\beta }_{2}}-{{\beta }_{1}})}+\frac{2L{{v}_{3}}{{\sin }^{2}}{{\beta }_{1}}}{{{\sin }^{2}}({{\beta }_{2}}-{{\beta }_{1}})} \\ & {{u}_{2}}={{x}_{2}}{{v}_{1}}\sin ({{\beta }_{2}}-{{\beta }_{1}})-\frac{L{{v}_{2}}\sin (2{{\beta }_{2}})}{{{\sin }^{2}}({{\beta }_{2}}-{{\beta }_{1}})}+\frac{L{{v}_{3}}\sin (2{{\beta }_{1}})}{{{\sin }^{2}}({{\beta }_{2}}-{{\beta }_{1}})} \\ \end{align} \\ \end{array} \right.$
(10) One obtains the uncertain chained system[22]
$\left\{ \begin{align} & {{{\dot{x}}}_{0}}={{u}_{0}} \\ & {{{\dot{x}}}_{1}}={{x}_{2}}{{u}_{0}}+({{x}_{1}}-{{x}_{4}})(\alpha \sin {{\theta }_{0}}){{u}_{0}}- \\ & \ \ \ \ \ \ ({{x}_{2}}+{{x}_{3}})(1-\alpha \cos {{\theta }_{0}}){{u}_{0}} \\ & {{{\dot{x}}}_{2}}={{u}_{1}}-({{x}_{2}}+{{x}_{3}})(\alpha \sin {{\theta }_{0}}){{u}_{0}}- \\ & \ \ \ \ \ \ \ ({{x}_{1}}-{{x}_{4}})(1-\alpha \cos {{\theta }_{0}}){{u}_{0}} \\ & {{{\dot{x}}}_{3}}={{x}_{4}}{{u}_{0}}+({{x}_{2}}+{{x}_{3}})(\alpha \sin {{\theta }_{0}}){{u}_{0}}+ \\ & \ \ \ \ \ \ \ ({{x}_{1}}-{{x}_{4}})(1-\alpha \cos {{\theta }_{0}}){{u}_{0}} \\ & {{{\dot{x}}}_{4}}={{u}_{2}}+({{x}_{1}}-{{x}_{4}})(\alpha \sin {{\theta }_{0}}){{u}_{0}}- \\ & \ \ \ \ \ \ \ ({{x}_{2}}+{{x}_{3}})(1-\alpha \cos {{\theta }_{0}}){{u}_{0}} \\ \end{align} \right.$
(11) where $u_{0}=v_{1}\sin(\beta_{2}-\beta_{1})$ and $\sin(\beta_{2}-\beta_{1})\neq0$.
In contrast to the canonical chained form (3), model (11) contains two new parameters, $\alpha$ and $\theta _{0}$, which are usually uncalibrated in practice. Compared with model (3), the first term on the right-hand side of each equation of (11) is identical up to the uncertain coefficient gains. Note that the second and third terms on the right-hand side of the second equation depend on $x_{1}$, $x_{2}$, $x_{3}$ and $x_{4}$. Therefore, (11) does not satisfy the so-called triangular structure[8] required in many papers, and (11) is thus called an uncertain chained system.
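To make the structure of (11) explicit, the following sketch (ours, illustrative only) evaluates its right-hand side for given values of the uncalibrated parameters $\alpha$ and $\theta_{0}$.

```python
import numpy as np

def uncertain_chained_rhs(x, u, alpha, theta0):
    """Right-hand side of the uncertain chained system (11).

    x = (x0, x1, x2, x3, x4), u = (u0, u1, u2); alpha and theta0 are the
    (normally uncalibrated) visual parameters.
    """
    x0, x1, x2, x3, x4 = x
    u0, u1, u2 = u
    a_s = alpha * np.sin(theta0)          # alpha * sin(theta0)
    a_c = 1.0 - alpha * np.cos(theta0)    # 1 - alpha * cos(theta0)
    dx0 = u0
    dx1 = x2 * u0 + (x1 - x4) * a_s * u0 - (x2 + x3) * a_c * u0
    dx2 = u1 - (x2 + x3) * a_s * u0 - (x1 - x4) * a_c * u0
    dx3 = x4 * u0 + (x2 + x3) * a_s * u0 + (x1 - x4) * a_c * u0
    dx4 = u2 + (x1 - x4) * a_s * u0 - (x2 + x3) * a_c * u0
    return np.array([dx0, dx1, dx2, dx3, dx4])
```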
For the uncertain system (11), the tracking problem is to design $u_{0},$ $u _{1}$ and $u_{2}$ such that the trajectory $(x_{0},x_{1},$ $x_{2},x_{3},x_{4})$ tracks a desired reference trajectory $(x_{0r},$ $x_{1r},x_{2r},x_{3r},x_{4r})$. In the work-space of the type (1, 2) mobile robot, the adaptive dynamic tracking problem is to design an adaptive control law and a dynamic feedback controller such that the trajectory $q=(x,y,\theta)$ tracks a desired reference trajectory $q_{r}=(x_{r},y_{r},\theta_{r})$ in the work-space of the robot, with the help of the reference trajectories in the image space.
3. Tracking controller design
In this section, our objective is to design adaptive dynamic feedback tracking controllers to solve the tracking problem for the uncertain chained system (11) and for the type (1, 2) mobile robot in the work-space. In order to design the controllers, three assumptions and a lemma are needed.
Assumption 1. Uncertain chained system (11) satisfies ${{u}_{0}}\ne 0$.
Assumption 2. $\theta _{0}$ is known, and $\alpha_{1}=\alpha_{2}=\alpha$ are unknown. There exist two constants $\underline{\alpha}$ and $\bar{\alpha}$ such that $\underline{\alpha}\leq \alpha \leq \overline{\alpha}$.
Assumption 3. ${{x}_{ir}}(i=0,1,\cdots ,4)$ are bounded. ${{u}_{0r}}({{u}_{0r}}\ne 0)$, $u_{1r}$, $u_{2r}$ and their derivatives are all bounded too.
Remark 2. System (11) is based on the assumption $\sin ({{\beta }_{2}}-{{\beta }_{1}})\ne 0$. This means $u_{0}\neq0$ for (11).
Remark 3. In Assumption 2, $\alpha _{1}=\alpha _{2}=\alpha$ means that the scale factor along the $i_{1}$ axis is the same as that along the $j_{1}$ axis. Some CCD cameras are manufactured this way. However, the condition $\alpha _{1}=\alpha_{2}=\alpha$ is a limitation; the tracking problem for the case where $\alpha _{1}\neq \alpha_{2}$ and both are unknown will be investigated in the future.
Remark 4. Assumption 3 is reasonable. Commonly, the positive upper and lower bounds of the scale factor can be estimated in advance. In practice, the robot often has the same structural features as the reference target when it tracks the reference trajectory.
Remark 5. Consider system (11) under Assumptions 1~3. By substituting $\theta-\theta_{0}$ for $\theta$, (11) becomes the system with $\theta _{0}=0$, which means that the direction of the $j$ axis coincides with that of the $X$ axis. Hence, we only need to discuss the case $\theta _{0}=0$ with $\alpha_{1}=\alpha _{2}=\alpha$ unknown for (11).
Based on Assumptions 1~3 and the analysis above, system (11) can be rewritten as
$ \begin{align*} \begin{cases} \dot{x}_{0}=u_{0} \\ \dot{x}_{1}=[x_{2}- (x_{2}+x_{3})(1-\alpha )]u_{0} \\ \dot{x}_{2}=u_{1}- [(x_{1}-x_{4})(1-\alpha )]u_{0} \\ \dot{x}_{3}=[x_{4}+(x_{1}-x_{4})(1-\alpha) ]u_{0} \\ \dot{x}_{4}=u_{2}-[(x_{2}+x_{3})(1-\alpha)]u_{0}\ \end{cases} \end{align*} $
where $\alpha$ is the unknown camera parameter, and $u_{0}$, $u_{1}$ and $u_{2}$ are the control inputs to be designed. It can also be rewritten as
$ \begin{align} \begin{cases} \dot{x}_{0}=u_{0} \\ \dot{x}_{1}=-x_{3}u_{0}+\alpha x_{23}u_{0} \\ \dot{x}_{2}=u_{1}+(1-\alpha)x_{41}u_{0}\\ \dot{x}_{3}=x_{1}u_{0}+\alpha x_{41}u_{0}\\ \dot{x}_{4}=u_{2}-(1-\alpha)x_{23}u_{0} \end{cases} \label{a2} \end{align} $
(12) where
$ \begin{align*} x_{23}=x_{2}+x_{3},\ \ x_{41}=x_{4}-x_{1} \end{align*} $
The desired reference system for (12) is
$ \begin{align} \begin{cases} \dot{x}_{0r}=u_{0r} \\ \dot{x}_{1r}=-x_{3r}u_{0r}+\alpha x_{23r}u_{0r}\\ \dot{x}_{2r}=u_{1r}+(1-\alpha)x_{41r}u_{0r}\\ \dot{x}_{3r}=x_{1r}u_{0r}+\alpha x_{41r}u_{0r}\\ \dot{x}_{4r}=u_{2r}-(1-\alpha)x_{23r}u_{0r} \end{cases} \label{a3} \end{align} $
(13) where
$ \begin{align*} x_{23r}=x_{2r}+x_{3r},\ \ x_{41r}=x_{4r}-x_{1r} \end{align*} $
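Since (12) and the reference system (13) share the same structure, a single routine can evaluate both, as in the sketch below (ours; the reference system is obtained by substituting the reference states and inputs).

```python
import numpy as np

def chained_rhs_theta0_zero(x, u, alpha):
    """Right-hand side of (12); with reference states/inputs it also gives (13).

    Assumes theta0 = 0 (as in Remark 5) and a constant, unknown alpha.
    """
    x0, x1, x2, x3, x4 = x
    u0, u1, u2 = u
    x23 = x2 + x3
    x41 = x4 - x1
    return np.array([u0,
                     -x3 * u0 + alpha * x23 * u0,
                     u1 + (1.0 - alpha) * x41 * u0,
                     x1 * u0 + alpha * x41 * u0,
                     u2 - (1.0 - alpha) * x23 * u0])
```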
Denote
$ \begin{align} \begin{cases} e_{i}=x_{i}- x_{ir},& i=0,1,2,3,4 \\ e_{23}=e_{2}+e_{3},& e_{41}=e_{4}-e_{1} \end{cases} \label{a4} \end{align} $
(14) By using (12) and (13), the following kinematic tracking error system is obtained
$ \begin{align} \begin{cases} \dot{e}_{0}=p \\ \dot{e}_{1}=-(x_{3}p+e_{3}u_{0r})+\alpha (x_{23}p+e_{23}u_{0r})\\ \dot{e}_{2}=u_{1}- u_{1r}+(1-\alpha)(x_{41}p+e_{41}u_{0r} )\\ \dot{e}_{3}=(x_{1}p+ e_{1}u_{0r})+\alpha (x_{41}p+e_{41}u_{0r} )\\ \dot{e}_{4}=u_{2}- u_{2r}-(1-\alpha)(x_{23}p+e_{23}u_{0r} ) \end{cases} \label{a5} \end{align} $
(15) where $p := u_{0}-u_{0r}$.
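A sketch of the right-hand side of the tracking error system (15) is given below (ours, for later use in simulation; the function name is hypothetical).

```python
import numpy as np

def tracking_error_rhs(e, x, u, u_r, alpha):
    """Right-hand side of the kinematic tracking error system (15).

    e = (e0, ..., e4), x = current chained states (x0, ..., x4),
    u = (u0, u1, u2), u_r = (u0r, u1r, u2r); p = u0 - u0r.
    """
    e0, e1, e2, e3, e4 = e
    _, x1, x2, x3, x4 = x
    u0, u1, u2 = u
    u0r, u1r, u2r = u_r
    p = u0 - u0r
    x23, x41 = x2 + x3, x4 - x1
    e23, e41 = e2 + e3, e4 - e1
    de0 = p
    de1 = -(x3 * p + e3 * u0r) + alpha * (x23 * p + e23 * u0r)
    de2 = u1 - u1r + (1.0 - alpha) * (x41 * p + e41 * u0r)
    de3 = (x1 * p + e1 * u0r) + alpha * (x41 * p + e41 * u0r)
    de4 = u2 - u2r - (1.0 - alpha) * (x23 * p + e23 * u0r)
    return np.array([de0, de1, de2, de3, de4])
```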
Based on the idea of backstepping and the structure of model (15), choose two new transformations as
$ \begin{align} \xi=e_{23}+k_{1} e_{1}u_{0r},\ \ \eta=e_{41}+k_{3}e_{3}u_{0r} \label{a6} \end{align} $
(16) where $k_{1}$ and $k_{3}$ are positive constant control gains. Then, we have
$ \begin{align*} &e_{23}=-k_{1}e_{1}u_{0r}+\xi \\ &e_{41}=-k_{3}e_{3}u_{0r}+\eta\\ &\dot{e}_{23}=u_{1}-u_{1r}+x_{4}p+e_{4}u_{0r}\\ &\dot{e}_{41}=u_{2}-u_{2r}-x_{2}p-e_{2}u_{0r} \end{align*} $
Choose the Lyapunov function candidate
$ \begin{align} V=&\ V_{1}+V_{2}+V_{3}=\notag\\ &\ \displaystyle\frac{1}{2}\left(k_{0}e_{0}^{2}+e_{1}^{2}+e_{3}^{2}\right) +\displaystyle\frac{1}{2}\left(\xi^{2}+\eta^{2}\right)+\displaystyle\frac{1}{2}\left(p^{2}+\Lambda \widetilde{\alpha}^{2}\right) \label{a111} \end{align} $
(17) where $k_{0}$ is a positive constant gain, $\Lambda$ is a positive adaptation gain, $\tilde{\alpha}=\alpha-\hat{\alpha}$ is the parameter estimation error, and $\hat{\alpha}$ is the estimate of $\alpha$.
By using (16), we have
$\begin{align} & {{{\dot{V}}}_{1}}={{k}_{0}}{{e}_{0}}{{{\dot{e}}}_{0}}+{{e}_{1}}{{{\dot{e}}}_{1}}+{{e}_{3}}{{{\dot{e}}}_{3}}= \\ & {{k}_{0}}{{e}_{0}}p+{{e}_{1}}[(-{{x}_{3}}p-{{e}_{3}}{{u}_{0r}})+\alpha ({{x}_{23}}p+{{e}_{23}}{{u}_{0r}})]+ \\ & {{e}_{3}}[({{x}_{1}}p+{{e}_{1}}{{u}_{0r}})+\alpha ({{x}_{41}}p+{{e}_{41}}{{u}_{0r}})]= \\ & ({{k}_{0}}{{e}_{0}}-{{e}_{1}}{{x}_{3}}+{{e}_{3}}{{x}_{1}})p+\alpha ({{e}_{1}}{{x}_{23}}+{{e}_{3}}{{x}_{41}})p- \\ & {{k}_{1}}\alpha e_{1}^{2}u_{0r}^{2}-{{k}_{3}}\alpha e_{3}^{2}u_{0r}^{2}+\alpha {{u}_{0r}}({{e}_{1}}\xi +{{e}_{3}}\eta ) \\ & {{{\dot{V}}}_{2}}=\xi \dot{\xi }+\eta \dot{\eta }= \\ & \xi [({{{\dot{e}}}_{2}}+{{{\dot{e}}}_{3}})+{{k}_{1}}({{{\dot{e}}}_{1}}{{u}_{0r}}+{{e}_{1}}{{{\dot{u}}}_{0r}})]+ \\ & \eta [({{{\dot{e}}}_{4}}-{{{\dot{e}}}_{1}})+{{k}_{3}}({{{\dot{e}}}_{3}}{{u}_{0r}}+{{e}_{3}}{{{\dot{u}}}_{0r}})]= \\ & \xi [({{u}_{1}}-{{u}_{1r}}+{{x}_{4}}p+{{e}_{4}}{{u}_{0r}})+ \\ & {{k}_{1}}{{u}_{0r}}(-{{x}_{3}}p-{{e}_{3}}{{u}_{0r}}+\alpha {{x}_{23}}p+\alpha {{e}_{23}}{{u}_{0r}})+ \\ & {{k}_{1}}{{e}_{1}}{{{\dot{u}}}_{0r}}]+\eta [({{u}_{2}}-{{u}_{2r}}-{{x}_{2}}p-{{e}_{2}}{{u}_{0r}})+ \\ & {{k}_{3}}{{u}_{0r}}({{x}_{1}}p+{{e}_{1}}{{u}_{0r}}+\alpha {{x}_{41}}p+\alpha {{e}_{41}}{{u}_{0r}})+{{k}_{3}}{{e}_{3}}{{{\dot{u}}}_{0r}}] \\ & {{{\dot{V}}}_{3}}=p\dot{p}+\Lambda \tilde{\alpha }\dot{\tilde{\alpha }} \\ \end{align}$
Hence, the time derivative of V along the solution of (15) satisfies
$ \begin{align} & \dot{V}={{{\dot{V}}}_{1}}+{{{\dot{V}}}_{2}}+{{{\dot{V}}}_{3}}= \\ & -{{k}_{1}}\alpha e_{1}^{2}u_{0r}^{2}-{{k}_{3}}\alpha e_{3}^{2}u_{0r}^{2}+ \\ & ({{k}_{0}}{{e}_{0}}-{{e}_{1}}{{x}_{3}}+{{e}_{3}}{{x}_{1}})p+\alpha ({{e}_{1}}{{x}_{23}}+{{e}_{3}}{{x}_{41}})p+ \\ & \alpha {{u}_{0r}}({{e}_{1}}\xi +{{e}_{3}}\eta )+\xi [({{u}_{1}}-{{u}_{1r}}+{{x}_{4}}p+{{e}_{4}}{{u}_{0r}})+ \\ & {{k}_{1}}{{u}_{0r}}(-{{x}_{3}}p-{{e}_{3}}{{u}_{0r}}+\alpha {{x}_{23}}p+\alpha {{e}_{23}}{{u}_{0r}})+ \\ & {{k}_{1}}{{e}_{1}}{{{\dot{u}}}_{0r}}]+\eta [({{u}_{2}}-{{u}_{2r}}-{{x}_{2}}p-{{e}_{2}}{{u}_{0r}})+ \\ & {{k}_{3}}{{u}_{0r}}({{x}_{1}}p+{{e}_{1}}{{u}_{0r}}+\alpha {{x}_{41}}p+\alpha {{e}_{41}}{{u}_{0r}})+ \\ & {{k}_{3}}{{e}_{3}}{{{\dot{u}}}_{0r}}]+p\dot{p}+\Lambda \tilde{\alpha }\dot{\tilde{\alpha }}= \\ & -{{k}_{1}}\alpha e_{1}^{2}u_{0r}^{2}-{{k}_{3}}\alpha e_{3}^{2}u_{0r}^{2}+p[{{k}_{0}}{{e}_{0}}-{{e}_{1}}{{x}_{3}}+ \\ & {{e}_{3}}{{x}_{1}}+\hat{\alpha }({{e}_{1}}{{x}_{23}}+{{e}_{3}}{{x}_{41}})+({{x}_{4}}\xi -{{x}_{2}}\eta )+ \\ & {{u}_{0r}}(-{{k}_{1}}{{x}_{3}}\xi +{{k}_{3}}{{x}_{1}}\eta )+\hat{\alpha }{{u}_{0r}}({{k}_{1}}{{x}_{23}}\xi +{{k}_{3}}{{x}_{41}}\eta )+\dot{p}]+ \\ & \tilde{\alpha }[({{e}_{1}}{{x}_{23}}+{{e}_{3}}{{x}_{41}})p+{{u}_{0r}}({{e}_{1}}\xi +{{e}_{3}}\eta )+ \\ & {{u}_{0r}}p({{k}_{1}}{{x}_{23}}\xi +{{k}_{3}}{{x}_{41}}\eta )+u_{0r}^{2}({{k}_{1}}{{e}_{23}}\xi +{{k}_{3}}{{e}_{41}}\eta )- \\ & \Lambda \dot{\hat{\alpha }}]+\xi ({{u}_{1}}-{{u}_{1r}}+{{e}_{4}}{{u}_{0r}}+\hat{\alpha }{{e}_{1}}{{u}_{0r}}- \\ & {{k}_{1}}{{e}_{3}}u_{0r}^{2}+{{k}_{1}}\hat{\alpha }{{e}_{23}}u_{0r}^{2}+{{k}_{1}}{{e}_{1}}{{{\dot{u}}}_{0r}})+ \\ & \eta ({{u}_{2}}-{{u}_{2r}}-{{e}_{2}}{{u}_{0r}}+\hat{\alpha }{{e}_{3}}{{u}_{0r}}+ \\ & {{k}_{3}}{{e}_{1}}u_{0r}^{2}+{{k}_{3}}\hat{\alpha }{{e}_{41}}u_{0r}^{2}+{{k}_{3}}{{e}_{3}}{{{\dot{u}}}_{0r}}) \\ \end{align} $
where $\alpha$ is a constant and $\dot{\tilde{\alpha}}=-\dot{\hat{\alpha}}$.
Take the adaptive law and dynamic feedback controller as follows
$ \begin{align} \begin{cases} \dot{\hat{\alpha}}=\Lambda^{-1}[(e_{1}x_{23}+e_{3}x_{41})p+u_{0r}(e_{1}\xi+e_{3}\eta) +\\ ~~~~~~u_{0r}p(k_{1}x_{23}\xi+ k_{3}x_{41}\eta) +\\ ~~~~~~u^{2}_{0r}(k_{1} e_{23}\xi+k_{3}e_{41}\eta)]\\ \dot{p}=-k_{5}p-k_{0}e_{0}+e_{1}x_{3}-e_{3}x_{1} -\\ ~~~~~~\hat{\alpha }(e_{1}x_{23}+ e_{3}x_{41})-(x_{4}\xi - x_{2}\eta) -\\ ~~~~~~u_{0r}(-k_{1} x_{3}\xi+k_{3}x_{1}\eta) -\hat{\alpha}u_{0r}(k_{1}x_{23}\xi +k_{3}x_{41}\eta ) \end{cases} \label{a7} \end{align} $
(18) $ \begin{align} \begin{cases} u_{0}=u_{0r}+p\\ u_{1}=-k_{2}\xi+u_{1r}-e_{4}u_{0r}-\hat{\alpha}e_{1}u_{0r}+k_{1}e_{3}u^{2}_{0r} -\\ ~~~~~~~ k_{1}\hat{\alpha}e_{23}u^{2}_{0r}-k_{1}e_{1}\dot{u}_{0r} \\ u_{2}=-k_{4}\eta+u_{2r}+e_{2}u_{0r}-\hat{\alpha}e_{3}u_{0r}-k_{3}e_{1}u^{2}_{0r} -\\ ~~~~~~~ k_{3}\hat{\alpha}e_{41}u^{2}_{0r}-k_{3}e_{3}\dot{u}_{0r} \end{cases} \label{a8} \end{align} $
(19) One obtains
$ \begin{equation} \dot{V}=-k_{1}\alpha e^{2}_{1}u^{2}_{0r}-k_{3}\alpha e^{2}_{3}u^{2}_{0r}-k_{2}\xi^{2}-k_{4}\eta^{2}-k_{5}p^{2} \label{a112} \end{equation} $
(20) where ${{k}_{i}}(i=1,2,3,4,5)$ are positive constant control gains.
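The adaptive law (18) and the dynamic feedback controller (19) can be evaluated at each time step as in the following sketch (ours, with hypothetical function and variable names); note that $\hat{\alpha}$ and $p$ are additional controller states that must be integrated together with the error system.

```python
import numpy as np

def controller_step(e, x, p, alpha_hat, ref, gains, Lam):
    """Evaluate the adaptive law (18) and the dynamic feedback controller (19).

    e = (e0, ..., e4), x = (x0, ..., x4); p and alpha_hat are controller states,
    ref = (u0r, u1r, u2r, du0r), gains = (k0, k1, k2, k3, k4, k5).
    Returns (u0, u1, u2), d(alpha_hat)/dt and dp/dt.
    """
    e0, e1, e2, e3, e4 = e
    _, x1, x2, x3, x4 = x
    u0r, u1r, u2r, du0r = ref
    k0, k1, k2, k3, k4, k5 = gains
    e23, e41 = e2 + e3, e4 - e1
    x23, x41 = x2 + x3, x4 - x1
    xi = e23 + k1 * e1 * u0r            # transformation (16)
    eta = e41 + k3 * e3 * u0r

    # adaptive law and dynamic extension (18)
    alpha_hat_dot = ((e1 * x23 + e3 * x41) * p + u0r * (e1 * xi + e3 * eta)
                     + u0r * p * (k1 * x23 * xi + k3 * x41 * eta)
                     + u0r ** 2 * (k1 * e23 * xi + k3 * e41 * eta)) / Lam
    p_dot = (-k5 * p - k0 * e0 + e1 * x3 - e3 * x1
             - alpha_hat * (e1 * x23 + e3 * x41) - (x4 * xi - x2 * eta)
             - u0r * (-k1 * x3 * xi + k3 * x1 * eta)
             - alpha_hat * u0r * (k1 * x23 * xi + k3 * x41 * eta))

    # dynamic feedback controller (19)
    u0 = u0r + p
    u1 = (-k2 * xi + u1r - e4 * u0r - alpha_hat * e1 * u0r + k1 * e3 * u0r ** 2
          - k1 * alpha_hat * e23 * u0r ** 2 - k1 * e1 * du0r)
    u2 = (-k4 * eta + u2r + e2 * u0r - alpha_hat * e3 * u0r - k3 * e1 * u0r ** 2
          - k3 * alpha_hat * e41 * u0r ** 2 - k3 * e3 * du0r)
    return np.array([u0, u1, u2]), alpha_hat_dot, p_dot
```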
Now, in order to prove the convergence of ${{e}_{i}}\ (i=0,1,2,3,4)$, an important lemma, known as the extended Barbalat lemma, is introduced as follows; its proof was given in [23].
Lemma 1. If the differentiable function $f(t)$ has a finite limit as $t\rightarrow\infty$, and $\dot{f}(t)$ can be divided into two parts, one uniformly continuous and the other convergent to zero as $t\rightarrow\infty$, then $\dot{f}(t)\rightarrow0$ and the uniformly continuous part tends to zero as well[23].
Theorem 1. Under Assumptions 1~3, the adaptive law (18) and dynamic feedback controller (19) can guarantee that all the variables of the closed-loop system (15), (18) and (19) are bounded. In addition, $p$ and kinematic tracking errors ${{e}_{i}}(i=0,1,2,3,4)$ asymptotically converge to zero.
Proof. Considering (17), (20) and Assumptions 1~3, we find that the Lyapunov function $V(t)$ is nonincreasing and converges to a limiting value $(\lim V(t)\ge 0)$. This means that ${{e}_{1}},{{e}_{3}},\xi ,\eta $ and $p$ are all bounded. Then $e_{2}$ and $e_{4}$ are also bounded by (16), and ${{x}_{i}}\ (i=1,2,3,4)$ are further bounded by Assumption 3. In view of (15), (18) and (19) again, ${{\dot{e}}_{i}}\ (i=0,1,2,3),\dot{\xi },\dot{\eta },\dot{p},{{u}_{0}},{{u}_{1}},{{u}_{2}}$ and $\ddot{V}$ are all bounded too, so $\dot{V}$ is uniformly continuous. By using Lemma 1, we obtain that $\dot{V}$ tends to zero and
$ \lim\limits_{t\rightarrow\infty} (e_{1}u_{0r}, e_{3}u_{0r}, \xi, \eta, p)=\textbf{0} $
Since $u_{0r}$ is bounded and $u_{0r}\neq0$ by Assumption 3, we have $(e_{1},e_{3})\rightarrow\textbf{0}$ as $t\rightarrow\infty$. By the definitions of $\xi$ and $\eta$, one obtains $(e_{2},e_{4})\rightarrow\textbf{0}$ as $t\rightarrow\infty$. Applying the extended Barbalat lemma (Lemma 1) to the second equation of (18), we have $\dot{p}\rightarrow0$ and $ k_{0}e_{0}\rightarrow0$, where $k_{0}$ is a bounded control gain. Hence, $e_{i}\rightarrow0$ $(i= 0,1,2,3,4)$ asymptotically as $t\rightarrow\infty$. $\square$
Remark 6. For the uncertain chained system (12), the tracking problem can be solved by using the control laws $u_{0}$, $u_{1}$ and $u_{2}$. However, in practice, the actual control inputs applied to the robot in (1) or (8) are $v_{1}$, $v_{2}$ and $v_{3}$. By using (8) and (10), they can be deduced as follows.
$ \begin{align} \begin{cases} v_{1}=\frac{u_{0}}{\sin(\beta_{2}-\beta_{1})}\\ v_{2}=\frac{[u_{2}\sin\beta_{1}-u_{1}\cos\beta_{1}-(x_{2}\sin\beta_{1}+x_{4}\cos\beta_{1})u_{0}]\sin(\beta_{2}-\beta_{1})}{2L\sin\beta_{2}}\\ v_{3}=\frac{[u_{2}\sin\beta_{2}-u_{1}\cos\beta_{2}-(x_{2}\sin\beta_{2}+x_{4}\cos\beta_{2})u_{0}]\sin(\beta_{2}-\beta_{1})}{2L\sin\beta_{1}}\\ \dot{\beta}_{1}=v_{2}\\ \dot{\beta}_{2}=v_{3} \end{cases}\label{600} \end{align} $
(21)
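The mapping (21) from the chained-space inputs back to the physical inputs can be evaluated as in the sketch below (ours; it additionally assumes $\sin\beta_{1}\neq 0$ and $\sin\beta_{2}\neq 0$ so that (21) is well defined, and the value of $L$ is a placeholder).

```python
import numpy as np

L = 0.5  # assumed geometric parameter, as in the earlier sketches

def chained_to_physical(u, x2, x4, beta1, beta2):
    """Recover the physical inputs (v1, v2, v3) from (u0, u1, u2) via (21)."""
    u0, u1, u2 = u
    s = np.sin(beta2 - beta1)
    v1 = u0 / s
    v2 = ((u2 * np.sin(beta1) - u1 * np.cos(beta1)
           - (x2 * np.sin(beta1) + x4 * np.cos(beta1)) * u0) * s
          / (2 * L * np.sin(beta2)))
    v3 = ((u2 * np.sin(beta2) - u1 * np.cos(beta2)
           - (x2 * np.sin(beta2) + x4 * np.cos(beta2)) * u0) * s
          / (2 * L * np.sin(beta1)))
    return np.array([v1, v2, v3])
```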
Theorem 2. Under Assumptions 1~3, $e_{i}\rightarrow 0$ $(i=0,1,2,3,4)$ ensures that the trajectory $(x,y,\theta)$ of the type (1, 2) mobile robot in the task-space tracks the reference trajectory $(x_{r},y_{r},\theta_{r})$ by using the controllers (18) and (19), or by using the control laws (18), (19) and (21).
Proof. For (10), we have
$ \begin{align} {\small \begin{cases} x_{0r}=\theta_{r} \\ x_{1r}=x_{mr}\cos \theta_{r} +y_{mr}\sin \theta_{r}\\ x_{2r}=-x_{mr}\sin \theta_{r}+y_{mr}\cos \theta_{r}-2L\displaystyle\frac{\sin \beta _{1r}\sin \beta _{2r}}{\sin (\beta _{2r}-\beta _{1r})} \\ x_{3r}=x_{mr}\sin \theta_{r} -y_{mr}\cos \theta_{r} \\ x_{4r}=x_{mr}\cos \theta_{r} +y_{mr}\sin \theta_{r}-L\displaystyle\frac{\sin (\beta _{1r}+\beta _{2r})}{ \sin (\beta_{2r}-\beta _{1r})}\\ u_{0r}=v_{1r}\sin(\beta_{2r}-\beta_{1r}) \\ u_{1r}=-x_{4r}v_{1r}\sin (\beta _{2r}-\beta _{1r}) -\\ ~~~~~~~~ \displaystyle\frac{2Lv_{2r}\sin ^{2}\beta _{2r}}{\sin ^{2}(\beta _{2r}-\beta _{1r})}+\displaystyle\frac{2Lv_{3r}\sin ^{2}\beta _{1r}}{\sin ^{2}(\beta _{2r}-\beta _{1r})} \\ u_{2r}=x_{2r}v_{1r}\sin (\beta _{2r}-\beta _{1r}) -\\ ~~~~~~~~ \displaystyle\frac{Lv_{2r}\sin (2\beta _{2r})}{\sin ^{2}(\beta _{2r}-\beta _{1r})}+\displaystyle\frac{Lv_{3r}\sin (2\beta _{1r})}{\sin ^{2}(\beta _{2r}-\beta _{1r})} \end{cases} \label{h71}} \end{align} $
(22) Considering (4), we have
$ \begin{equation} \left[ \begin{array}{c} x_{mr} \\ y_{mr} \end{array} \right] =\alpha H(\theta_{0}) \left[\left[ \begin{array}{c} x_{r} \\ y_{r} \end{array} \right] -\left[ \begin{array}{c} c_{x} \\ c_{y} \end{array} \right] \right] +\left[ \begin{array}{c} O_{c1} \\ O_{c2} \end{array} \right] \label{h91} \end{equation} $
(23) Subtracting (23) from (4) gives
$ \begin{equation} \left[ \begin{array}{c} x-x_{r} \\ y-y_{r} \end{array} \right] =\alpha^{-1}H^{-1}(\theta_{0}) \left[ \begin{array}{c} x_{m}-x_{mr} \\ y_{m}-y_{mr} \end{array} \right] \label{h92} \end{equation} $
(24) where $H^{-1}(\theta_{0})=H^{\rm T}(\theta_{0})$.
Consider the second and fourth equations in (10). We have
$\left[ \begin{matrix} {{x}_{m}} \\ {{y}_{m}} \\ \end{matrix} \right]=\left[ \begin{matrix} \cos \theta & \sin \theta \\ \sin \theta & -\cos \theta \\ \end{matrix} \right]\left[ \begin{matrix} {{x}_{1}} \\ {{x}_{3}} \\ \end{matrix} \right]$
(25) $\left[ \begin{matrix} {{x}_{mr}} \\ {{y}_{mr}} \\ \end{matrix} \right]=\left[ \begin{matrix} \cos {{\theta }_{r}} & \sin {{\theta }_{r}} \\ \sin {{\theta }_{r}} & -\cos {{\theta }_{r}} \\ \end{matrix} \right]\left[ \begin{matrix} {{x}_{1r}} \\ {{x}_{3r}} \\ \end{matrix} \right]$
(26) It is obvious that
$ \begin{equation} \left[ \begin{array}{c} x_{1} \\ x_{3} \end{array} \right] =\left[ \begin{array}{c} x_{1r}+e_{1} \\ x_{3r}+e_{3} \end{array} \right]=\left[ \begin{array}{c} x_{1r} \\ x_{3r} \end{array} \right]+\left[ \begin{array}{c} e_{1} \\ e_{3} \end{array} \right] \label{h95} \end{equation} $
(27) Subtracting (26) from (25), we obtain the following relationship
$ \left[ \begin{matrix} {{x}_{m}}-{{x}_{mr}} \\ {{y}_{m}}-{{y}_{mr}} \\ \end{matrix} \right]=\left[ \begin{matrix} {{e}_{1}}\cos \theta +{{e}_{3}}\sin \theta \\ {{e}_{1}}\sin \theta -{{e}_{3}}\cos \theta \\ \end{matrix} \right]+2\sin \frac{{{e}_{0}}}{2}\left[ \begin{matrix} -{{x}_{1r}}\sin \left( {{\theta }_{r}}+\frac{{{e}_{0}}}{2} \right)+{{x}_{3r}}\cos \left( {{\theta }_{r}}+\frac{{{e}_{0}}}{2} \right) \\ {{x}_{1r}}\cos \left( {{\theta }_{r}}+\frac{{{e}_{0}}}{2} \right)+{{x}_{3r}}\sin \left( {{\theta }_{r}}+\frac{{{e}_{0}}}{2} \right) \\ \end{matrix} \right] $
(28) Then, for (24), we have
$ \begin{align} &\left[ \begin{array}{l} x-x_{r} \\ y-y_{r} \end{array} \right] =\displaystyle\frac{1}{\alpha}\left[ \begin{array}{c} e_{1}\cos(\theta-\theta_{0})+e_{3}\sin(\theta-\theta_{0}) \\ e_{1}\sin(\theta-\theta_{0})-e_{3}\cos(\theta-\theta_{0}) \end{array} \right]+\notag\\ &\ \ \displaystyle\frac{2}{\alpha}\sin\displaystyle\frac{e_{0}}{2}\left[\!\!\! \begin{array}{c} -x_{1r}\sin\left(\theta-\theta_{0}-\frac{e_{0}}{2}\right) + x_{3r}\cos\left(\theta-\theta_{0}-\frac{e_{0}}{2}\right)\\ x_{1r}\cos\left(\theta-\theta_{0}-\frac{e_{0}}{2}\right)+ x_{3r}\sin\left(\theta-\theta_{0}-\frac{e_{0}}{2}\right) \end{array} \!\!\!\right] \label{h98}\notag\\ \end{align} $
(29) Note that $\theta=x_{0}=x_{0r}+e_{0}=\theta_{r}+e_{0}$. Then, we have $\theta \to {{\theta }_{r}}$, $\sin\frac{e_{0}}{2}\rightarrow0$ and $|\frac{2}{\alpha}\sin\frac{e_{0}}{2}|\leq\frac{|e_{0}|}{\underline{\alpha}}\rightarrow0$ because ${{e}_{i}}\to 0\ (i=0,1,2,3)$. Moreover, $\sin(\theta_{r}+\frac{e_{0}}{2}) $, $\cos ({{\theta }_{r}}+\frac{{{e}_{0}}}{2}),{{x}_{1r}},{{x}_{3r}}$, $\cos(\theta-\theta_{0})$, $\sin(\theta-\theta_{0})$, $\sin(\theta-\theta_{0}-\frac{e_{0}}{2})$ and $\cos(\theta-\theta_{0}-\frac{e_{0}}{2})$ are all bounded. Therefore, $(x_{m},y_{m})\rightarrow(x_{mr},y_{mr})$ and $(x,y)\to ({{x}_{r}},{{y}_{r}})$ by relationships (28) and (29). $\square$
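As a small illustration of relationship (24) used in the proof above, the task-space position error can be recovered from the image-space error as follows (a sketch under the stated assumptions; the function name is ours).

```python
import numpy as np

def task_space_error(img_err, alpha, theta0):
    """Recover the task-space position error from the image-space error via (24).

    img_err = (x_m - x_mr, y_m - y_mr); alpha and theta0 are as in (4).
    H^{-1}(theta0) = H(theta0)^T is used, as noted after (24).
    """
    H = np.array([[np.cos(theta0), np.sin(theta0)],
                  [-np.sin(theta0), np.cos(theta0)]])
    return (H.T @ np.asarray(img_err, dtype=float)) / alpha
```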
To sum up, under Assumptions 1~3, the trajectory $(x,y,\theta)$ of the type (1, 2) mobile robot in the task-space can track the reference trajectory $(x_{r},y_{r},\theta_{r})$ by using controllers (18), (19) and (21). Simulation results are addressed in the next section.
4. Simulation
In this section, simulations are carried out mainly for the states ${{e}_{i}}\ (i=0,1,2,3,4)$ of the error system (15), the adaptive law (18), the control law (19), the tracking errors $(e_{x_m},e_{y_m},e_{\theta_m})=(x_{m}-x_{mr},y_{m}-y_{mr},\theta_{m}-\theta_{mr})$ in the image frame, and the tracking errors $({{e}_{x}},{{e}_{y}},{{e}_{\theta }})=(x-{{x}_{r}},y-{{y}_{r}},\theta -{{\theta }_{r}})$ in the task-space of the type (1, 2) mobile robot. Two cases are considered for different choices of the bounded control gains ${{k}_{i}}\ (i=0,1,2,3,4,5)$. Tracking simulation results with sensor noise on the velocities are also presented in Case 3.
Case 1. Consider (12), (15), (18) and (19). Take the initial error value $[{{e}_{0}}(0),{{e}_{1}}(0),{{e}_{2}}(0),{{e}_{3}}(0),{{e}_{4}}(0)]=[0.2,0.4,0.1,-0.2,0]$ for the configuration of (15). Further, choose the parameters as ${{\theta }_{0}}=\pi /3$, ${{u}_{0r}}=0.1$, ${{u}_{1r}}=1$, ${{u}_{2r}}=1.5$, ${{k}_{0}}=500$, ${{k}_{1}}=12$, ${{k}_{2}}=20$, ${{k}_{3}}=15$, ${{k}_{4}}=18$, ${{k}_{5}}=20$ and the adaptation gain $\Lambda=1$. The trajectories of the error states ${{e}_{i}}\ (i=0,1,2,3,4)$ and the control inputs ${{u}_{i}}\ (i=0,1,2)$ are plotted in Figs. 2~4, respectively. The estimate $\hat{\alpha}$ of the parameter $\alpha$ and the dynamic feedback state $p$ are plotted in Figs. 5 and 6, respectively. In addition, the control laws ${{v}_{1}},{{v}_{2}}$ and $v_{3}$ in the task-space are also plotted in Figs. 7 and 8 by using (12), (15), (19) and (21).
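A minimal closed-loop integration sketch for Case 1 is given below. It is ours and only illustrative: it assumes that the functions `tracking_error_rhs`, `controller_step` and `chained_rhs_theta0_zero` from the sketches in Sections 2 and 3 are in scope, that $\theta_{0}$ has been absorbed into the states as in Remark 5, and that the true value of $\alpha$ (not stated explicitly here) is a placeholder.

```python
import numpy as np

# Assumes tracking_error_rhs, controller_step and chained_rhs_theta0_zero
# from the sketches above are already defined.
def simulate_case1(T=20.0, dt=1e-3, alpha_true=1.5):
    """Closed-loop Euler integration of (13), (15), (18) and (19) for Case 1."""
    gains = (500.0, 12.0, 20.0, 15.0, 18.0, 20.0)   # k0, ..., k5
    Lam = 1.0                                       # adaptation gain
    u0r, u1r, u2r, du0r = 0.1, 1.0, 1.5, 0.0        # constant reference inputs
    e = np.array([0.2, 0.4, 0.1, -0.2, 0.0])        # initial tracking errors
    x_ref = np.zeros(5)                             # reference chained states
    p, alpha_hat = 0.0, 1.0                         # controller states
    for _ in range(int(T / dt)):
        x = x_ref + e                               # since e_i = x_i - x_ir
        u, a_dot, p_dot = controller_step(e, x, p, alpha_hat,
                                          (u0r, u1r, u2r, du0r), gains, Lam)
        e = e + dt * tracking_error_rhs(e, x, u, (u0r, u1r, u2r), alpha_true)
        alpha_hat += dt * a_dot
        p += dt * p_dot
        x_ref = x_ref + dt * chained_rhs_theta0_zero(x_ref, (u0r, u1r, u2r),
                                                     alpha_true)
    return e, alpha_hat, p
```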
Assume that the reference trajectory of the mobile robot in the image frame is chosen as $x_{mr}=2\cos\theta_{r}$, $y_{mr}=\sin\theta_{r}$. Then, $x_{1r}=\cos^{2}\theta_{r}$ and $x_{3r}=\cos\theta_{r}\sin\theta_{r}$ by (22). The tracking error trajectories of $e_{x_m}=x_{m}-x_{mr}$, ${{e}_{{{y}_{m}}}}={{y}_{m}}-{{y}_{mr}}$ and $e_{\theta_m}=\theta_{m}-\theta_{mr}$ in the image space are presented in Fig. 9. By using (4), (10), (15) and (29), the tracking error trajectories of $e_x=x-x_{r}$, $e_y=y-y_{r}$ and ${{e}_{\theta }}=\theta -{{\theta }_{r}}$ in the robot work-space are shown in Fig. 10.
Case 2. Take the same initial error values $[{{e}_{0}}(0),{{e}_{1}}(0),{{e}_{2}}(0),{{e}_{3}}(0),{{e}_{4}}(0)]=[0.2,0.4,0.1,-0.2,0]$. The parameters ${{\theta }_{0}},{{u}_{0r}},{{u}_{1r}},{{u}_{2r}},\alpha ,\Lambda $ and the reference trajectory of the mobile robot in the image frame are the same as in Case 1, but choose ${{k}_{0}}=900$, ${{k}_{1}}=22$, ${{k}_{2}}=30$, ${{k}_{3}}=25$, ${{k}_{4}}=38$ and $k_{5}=50$. The resulting ${{e}_{i}}\ (i=0,1,2,3,4)$ are plotted in Fig. 11. The tracking error trajectories of ${{e}_{{{x}_{m}}}},{{e}_{{{y}_{m}}}}$ and $e_{\theta_{m}}$ in the image space are presented in Fig. 12, and the tracking error trajectories of ${{e}_{x}},{{e}_{y}}$ and $e_\theta$ in the robot work-space are shown in Fig. 13.
Case 3. For the type (1, 2) mobile robot, assume ${{v}_{1}}={{v}_{2}}={{v}_{3}}=0$ in the initial state. The initial values for system (8) are $[{{x}_{m}}(0),{{y}_{m}}(0),\theta (0),{{\beta }_{1}}(0),{{\beta }_{2}}(0)]=[0,0,0,0.2,0.1]$. A sensor noise on the velocities is given as $\Delta v_{1}=1$, $\Delta v_{2}=2$, $\Delta v_{3} = 1$. By using (8), we have $[x_{m},y_{m},\theta$, $\beta_{1},\beta_{2}]=[0.0165,0.0042,-0.0046,0.4,0.2]$ when $t_{0}=0.1$ s (without loss of generality, $t_{0}$ is a finite constant). Then, $[x_{0},x_{1},x_{2},x_{3},x_{4}] = [-0.046,0.0165,0.7831,0.0041,5.7007]$ by (10). Take the reference trajectories in the image space as $x_{mr} = 2\cos\theta_{r}$, $y_{mr} =\sin\theta_{r}$, $\beta_{1r} = \theta_{r}$ and $\beta_{2r} =2\theta_{r}$. Considering (22), the reference states are $[{{x}_{0r}},{{x}_{1r}},{{x}_{2r}},{{x}_{3r}},{{x}_{4r}}]=[0.05,1.9975,0.1498,0.0499,7.9775]$ when $t_{0}=0.1$ s. So, at this moment, we get the new initial values $[{{e}_{0}}({{t}_{0}}),{{e}_{1}}({{t}_{0}}),{{e}_{2}}({{t}_{0}}),{{e}_{3}}({{t}_{0}}),{{e}_{4}}({{t}_{0}})]=[-0.056,-1.9810,0.6333,-0.0458,-2.2096]$. Choose controllers (18) and (19) with ${{k}_{0}}=1200$, ${{k}_{1}}=30$, ${{k}_{2}}=40$, ${{k}_{3}}=50$, ${{k}_{4}}=46$ and $k_{5}=100$. Then, ${{e}_{i}}\ (i=0,1,2,3,4)$ converge to zero asymptotically; the trajectories are plotted in Fig. 14. The tracking error trajectories of $e_{x_{m}}$, $e_{y_{m}}$ and $e_{\theta_{m}}$ in the image space are presented in Fig. 15, and the tracking error trajectories of ${{e}_{x}},{{e}_{y}}$ and $e_{\theta}$ in the robot work-space are shown in Fig. 16.
Remark 7. Comparing the tracking errors in Case 1 with those in Case 2, we find that larger gains ${{k}_{i}}\ (i=0,1,\cdots ,5)$ yield faster convergence. However, the gains cannot be made arbitrarily large in practice. For the tracking control problem in [16], an adaptive controller was designed to compensate for uncertain camera and mechanical parameters in the kinematic and dynamic systems of a type (2, 0) mobile robot, and the tracking errors of the $X$- and $Y$-coordinates converged to zero within ten seconds. In our paper, two transformations are exploited based on the idea of backstepping with the help of the camera-robot system, and an adaptive control law and dynamic feedback robust controllers are designed to track the desired trajectory for the type (1, 2) robot by using the Lyapunov direct method and the extended Barbalat lemma. The tracking errors of the $X$- and $Y$-coordinates also converge to zero within ten seconds (see Figs. 12 and 13). The simulation results (Figs. 2~13) demonstrate the feasibility of the proposed adaptive and dynamic feedback laws.
5. Conclusions and future work
Based on visual servoing feedback and the transformations for the canonical chained form of the type (1, 2) mobile robot, we have presented an uncertain chained model of the nonholonomic kinematic system. An adaptive law and a dynamic feedback controller have then been proposed for the kinematic error system of the nonholonomic mobile robot. The asymptotic convergence of the closed-loop error system is rigorously proved by Lyapunov stability theory and the extended Barbalat lemma. Simulation results illustrate the performance of the proposed controller.
In this paper, the adaptive dynamic feedback tracking controller is investigated for the case where $\theta_{0}$ is known and $\alpha_{1}=\alpha_{2}=\alpha$ is unknown. Other cases, such as ${{\theta }_{0}},{{\alpha }_{1}}$ and $\alpha_{2}$ all being unknown, will be dealt with in the future. In addition, dynamic tracking control problems with uncertain parameters will also be investigated further.
-
表 1 ImageNet竞赛历年来图像分类任务的部分领先结果
Table 1 Representative top ranked results in image classification task of "ImageNet Large Scale Visual Recognition Challenge"
表 2 部分具有代表性的图像分类和物体检测模型对比
Table 2 Comparison of representative image classification and object detection models
方法 输入 优点 缺点 AlexNet[8] 整张图像(需要对图像放缩到固定大小) 网络简单易于训练, 对图像分类有较强的鉴别力 网络输入图像要求固定大小, 容易破环物体的纵横比和上下文信息 GoogLeNet[11] 整张图像(需要对图像放缩到固定大小) 对图像分类拥有非常强的鉴别力, 参数相对AlexNet较少 网络复杂, 对样本数量要求较高, 训练耗时 VGG[12] 整张图像(需要对图像放缩到固定大小) 对图像分类拥有非常强的鉴别力 网络复杂, 对样本数量要求较高, 训练耗时, 需要多次对网络参数的微调 DPM[23] 整张图像 对物体检测拥有较强的鉴别力, 对形变和遮挡具有一定的处理能力 使用人工设计的HOG特征[26]; 对物体检测的精度通常比本表中其他的CNN网络低 R-CNN[9] 图像区域 对物体检测拥有很强的鉴别力; 比在图像金字塔上逐层滑动窗口的物体检测方法效率高;使用包围盒回归(Bounding box regression)提高物体的定位精度 依赖于区域选择算法; 网络输入图像要求固定大小, 容易破环物体的纵横比和上下文信息; 训练是多阶段过程:在特定检测数据集上对网络参数进行微调、提取特征、训练SVM (Sup-port vector machine)分类器、包围盒回归(Bounding box regression);训练时间耗时、耗存储空间 SPP-net[10] 整张图像(不要求固定大小) 对物体检测拥有很强的鉴别力, 输入图像可以任意大小, 可保证图像的比例信息训练速度比R-CNN快3倍左右, 测试比R-CNN快10~100倍 网络结构复杂时, 池化对图像造成一定的信息丢失; SPP层前的卷积层不能进行网络参数更新[24]; 训练是多阶段过程:在特定检测数据集上对网络参数进行微调、提取特征、训练SVM分类器、包围盒回归; 训练时间耗时、耗存储空间 Fast R-CNN[24] 整张图像(不要求固定大小) 训练和测试都明显快于SPP-net (除了候选区域提取以外的环节接近于实时), 对物体检测拥有很强的鉴别力, 输入图像可以任意大小, 保证图像比例信息, 同时进行分类与定位 依赖于候选区域选择, 它仍是计算瓶颈 Faster R-CNN[29] 整张图像(不要求固定大小) 比Fast R-CNN更加快速, 对物体检测拥有很强的鉴别力; 不依赖于区域选择算法; 输入图像可以任意大小, 保证图像比例信息, 同时进行区域选择算法、分类与定位 训练过程较复杂; 计算流程仍有较大优化空间; 难以解决被遮挡物体的识别问题 -
[1] Rumelhart D E, Hinton G E, Williams R J. Learning representations by back-propagating errors. Nature, 1986, 323(6088): 533-536 doi: 10.1038/323533a0 [2] Vapnik V N. Statistical Learning Theory. New York: Wiley, 1998. [3] 王晓刚. 图像识别中的深度学习. 中国计算机学会通讯, 2015, 11(8): 15-23Wang Xiao-Gang. Deep learning in image recognition. Communications of the CCF, 2015, 11(8): 15-23 [4] Hinton G E, Salakhutdinov R R. Reducing the dimensionality of data with neural networks. Science, 2006, 313(5786): 504-507 doi: 10.1126/science.1127647 [5] Deng J, Dong W, Socher R, Li L J, Li K, Li F F. ImageNet: a large-scale hierarchical image database. In: Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami, FL: IEEE, 2009. 248-255 [6] LeCun Y, Boser B, Denker J S, Henderson D, Howard R E, Hubbard W, Jackel L D. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1989, 1(4): 541-51 doi: 10.1162/neco.1989.1.4.541 [7] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, 86(11): 2278-2324 doi: 10.1109/5.726791 [8] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. In: Proceedings of Advances in Neural Information Processing Systems 25. Lake Tahoe, Nevada, USA: Curran Associates, Inc., 2012. 1097-1105 [9] Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus, USA: IEEE, 2014. 580-587 [10] He K M, Zhang X Y, Ren S Q, Sun J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9):1904-1916 doi: 10.1109/TPAMI.2015.2389824 [11] Szegedy C, Liu W, Jia Y Q, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A. Going deeper with convolutions. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA: IEEE, 2015. 1-9 [12] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition [Online], available: http://arxiv.org/abs/1409.1556, May 16, 2016 [13] Forsyth D A, Ponce J. Computer Vision: A Modern Approach (2nd Edition). Boston: Pearson Education, 2012. [14] 章毓晋. 图像工程(下册): III-图像理解. 第3版. 北京: 清华大学出版社, 2012.Zhang Yu-Jin. Image Engineering (Part 2): III-Image Understanding (3rd Edition). Beijing: Tsinghua University Press, 2012. [15] He K M, Zhang X Y, Ren S Q, Sun J. Deep residual learning for image recognition [Online], available: http://arxiv.org/abs/1512.03385, May 3, 2016 [16] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, 86(11): 2278-324 doi: 10.1109/5.726791 [17] Bouvrie J. Notes On Convolutional Neural Networks, MIT CBCL Tech Report, Cambridge, MA, 2006. [18] Duda R O, Hart P E, Stork DG [著], 李宏东, 姚天翔 [译]. 模式分类. 北京: 机械工业出版社, 2003.Duda R O, Hart P E, Stork D G [Author], Li Hong-Dong, Yao Tian-Xiang [Translator]. Pattern Classification. Beijing: China Machine Press, 2003. [19] Lin M, Chen Q, Yan S C. Network in network. In: Proceedings of the 2014 International Conference on Learning Representations. Banff, Canada: Computational and Biological Learning Society, 2014. [20] Zeiler M D, Fergus R. 
-