基于深度协同稀疏编码网络的海洋浮筏SAR图像目标识别

耿杰 范剑超 初佳兰 王洪玉

引用本文: 耿杰, 范剑超, 初佳兰, 王洪玉. 基于深度协同稀疏编码网络的海洋浮筏SAR图像目标识别. 自动化学报, 2016, 42(4): 593-604. doi: 10.16383/j.aas.2016.c150425
Citation: GENG Jie, FAN Jian-Chao, CHU Jia-Lan, WANG Hong-Yu. Research on Marine Floating Raft Aquaculture SAR Image Target Recognition Based on Deep Collaborative Sparse Coding Network. ACTA AUTOMATICA SINICA, 2016, 42(4): 593-604. doi: 10.16383/j.aas.2016.c150425


doi: 10.16383/j.aas.2016.c150425
基金项目: 

北戴河邻近海域典型生态灾害与污染监控海洋公益专项 201305003

国家自然科学基金 61273307, 61301130

中国博士后面上基金 2014M551082

详细信息
    作者简介:

    耿杰, 大连理工大学电子信息与电气工程学部博士研究生. 主要研究方向为SAR图像处理, 模式识别.E-mail:gengjie@mail.dlut.edu.cn

    初佳兰, 国家海洋环境监测中心工程师. 主要研究方向为海域动态卫星遥感应用.E-mail:jlchu@nmemc.org.cn

    王洪玉, 大连理工大学电子信息与电气工程学部教授. 主要研究方向为图像处理, 模式识别和无线传感器网络.E-mail:whyu@dlut.edu.cn

    通讯作者:

    范剑超, 国家海洋环境监测中心, 大连理工大学电子信息与电气工程学部副研究员. 主要研究方向为神经网络, 模式识别和遥感图像处理. 本文通信作者.E-mail:fjchaonmemc@163.com

Research on Marine Floating Raft Aquaculture SAR Image Target Recognition Based on Deep Collaborative Sparse Coding Network

Funds: 

the Public Welfare Project of Beidaihe Marine Ecological Disasters and Pollution Monitoring 201305003

National Natural Science Foundation of China 61273307, 61301130

the China Postdoctoral Science Foundation 2014M551082

More Information
    Author Bio:

    GENG Jie  Ph. D. candidate at the Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology. His research interest covers SAR image processing and pattern recognition

    CHU Jia-Lan  Engineer at National Marine Environment Monitoring Center. Her research interest covers sea dynamic surveillance remote sensing application

    WANG Hong-Yu  Professor at the Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology. His research interest covers image processing, pattern recognition, and wireless sensor networks

    Corresponding author: FAN Jian-Chao Associate professor at National Marine Environment Monitoring Center and Dalian University of Technology. His research interest covers neural network, pattern recognition, and remote sensing image processing. Corresponding author of this paper
  • Abstract: Floating raft aquaculture is widespread in China's offshore waters. Optical remote sensing images cannot fully and accurately capture the aquaculture targets, whereas synthetic aperture radar (SAR) remote sensing images, based on active imaging, can; SAR images are therefore adopted for marine floating raft target recognition. However, marine SAR images contain heavy speckle noise, and single SAR features are limited, which makes target recognition difficult. To solve these problems, a deep collaborative sparse coding network (DCSCN) is proposed for floating raft recognition. The method first extracts texture and contour features from the preprocessed image, then performs superpixel segmentation and feeds the feature group of each superpixel block into the network for collaborative representation, and finally obtains effective features for classification and recognition. Experiments on artificial SAR images and on SAR images of floating raft aquaculture in the Beidaihe sea area verify the effectiveness of the proposed model. The network not only has excellent feature representation ability, yielding features better suited to the classifier, but also effectively suppresses speckle noise through the neighborhood collaborative constraint, thus improving SAR image target recognition accuracy.
  • With increasingly complicated engineering problems in recent years, many researchers have devoted themselves to developing new intelligent optimization algorithms. In 2011, Pan [1] proposed a heuristic optimization algorithm named the fruit fly optimization algorithm (FOA), inspired by the foraging behavior of Drosophila. FOA is easy to understand and solves optimization problems quickly and accurately, but its results are strongly influenced by the initial solutions [2]. Based on the phototropic growth characteristics of plants, Li et al. proposed a global optimization method called the plant growth simulation algorithm, a bionic random algorithm suited to large-scale, multi-modal, and nonlinear integer programming [3]; owing to its complex calculation theory, however, it is not widely applied in industry or scientific research. The artificial bee colony algorithm [4], a swarm-intelligence method that simulates the social behavior of bees, suffers from slow convergence and is prone to becoming trapped in local optima [5].

    The mirror is a common necessity that plays an important role in daily life. Inspired by its optical function, a new algorithm called the specular reflection algorithm (SRA) is proposed in this paper. Like the genetic algorithm [6]-[8], particle swarm optimization [9]-[11], the simulated annealing algorithm [12], [13], and the differential evolution algorithm [14], [15], the SRA can be widely used in science and engineering. It has several outstanding advantages, such as a simple principle, easy programming, high precision, and fast calculation, and its unique non-population searching mode distinguishes it from conventional swarm algorithms. Furthermore, its global searching ability is significantly improved by a specific acceptance criterion for new solutions. To verify these features, extensive comparative experiments are conducted in this paper. Finally, reliability-based design and robust design are combined with the SRA in order to evaluate its ability in reliability-based robust optimization design.

    The mirror is a life necessity and a product of human civilization, and it changes the direction of propagation of light. With the help of mirrors and related optical instruments, a great deal of objects can be observed even when they are out of the range of visibility; for example, a submariner can catch sight of objects above the water through a periscope. It is this reflection property of the mirror that the SRA simulates.

    Object, suspected target, mirror and eyes are the four basic elements of the specular reflection system.

    The object is the optimum of the objective function; obtaining its exact coordinate is the purpose of the SRA. It takes no part in the optimization procedure, because its location is unknown in advance.

    The suspected target is the coordinate of the object as observed by the eyes, which approximates the optimal solution. There is an error between the suspected target and the object, because the observed coordinate is not exact. The suspected target lies around the object and is the element nearest to it.

    The mirror changes the direction of propagation of light and broadens the vision of the eyes. Anything that reflects light (glass, water, etc.) is regarded as a mirror.

    The eyes are the subject of the SRA, acquiring the approximate coordinate of the object; they are the element farthest from the object.

    $ \begin{align}\label{eq1} &\min f(X), \ X = (x^1, x^2, \ldots, x^N), \quad X \in \mathbb{R}^N \notag\\ & {\rm s.t.}\ \ g_j (x) = 0, \ \ j = 1, 2, \ldots, m \notag\\ &\qquad h_k (x) \le 0, \ \ k = 1, 2, \ldots, l. \end{align} $

    (1)

    Taking the constrained optimization problem shown in (1) as an example, the definitions used by the SRA are as follows:

    Set the specular reflection system in a $4\times N$-dimensional Euclidean space, where $N$ is the number of design variables. The elements of the system are defined as $X_i = (x_i^1, x_i^2, \ldots, x_i^N)$, $i = 0, 1, 2, 3$, with $X_{\rm Object} = X_0$, $X_{\rm Suspect} = X_1$, $X_{\rm Mirror} = X_2$, and $X_{\rm Eyes} = X_3$, where $x_i^n$ $(n = 1, 2, \ldots, N)$ is the position of the $i$th element in the $n$th dimension. The objective value of each element is $f(X_i)$, and the relationship among the four elements is $f(X_0)\leq f(X_1) \leq f(X_2) \leq f(X_3)$.

    Searching for the new coordinate: the candidate coordinates $X_{\rm New1}$ and $X_{\rm New2}$ are obtained by (2), and the new coordinate $X_{\rm New}$ is then selected by (3).

    $ \begin{align} \begin{cases} X_{\rm New1}^n = x_1^n + \xi (2{\rm rand} - 1)(x_1^n - x_3^n ) \\[2mm] X_{\rm New2}^n = x_1^n + \xi (2{\rm rand} - 1)(2x_1^n - x_2^n - x_3^n ) \end{cases} \end{align} $

    (2)

    where $\xi$ is a coefficient determined by (11), and ${\rm rand}$ is a uniform random number in $[0, 1]$.

    $ \begin{align} \label{eq3} \begin{cases} X_{\rm New} = X_{\rm New1}, f(X_{\rm New1} ) \leq f(X_{\rm New2} ) \\[2mm] X_{\rm New} = X_{\rm New2}, f(X_{\rm New1} ) \ge f(X_{\rm New2} ). \end{cases} \end{align} $

    (3)

    Updating the specular reflection system: once the coordinate of $X_{\rm New}$ is acquired, the eyes change place to continue searching for the object; the four elements of the system are then $X_0$, $X_1$, $X_2$ and $X_{\rm New}$. The system is adjusted by modifying the four elements according to the rules shown in Fig. 1.

    图 1  Coordinate update of the specular reflection system.
    Fig. 1  Coordinate update of the specular reflection system.

    The optimization steps of the SRA are shown as follows:

    Step 1: Define the initial values $X_i$, $i = 0, 1, 2, 3$, and the maximum iteration number $Iter_{\max}$.

    Step 2: If the precision or the iteration count meets the design requirement, output the coordinate of $X_{\rm Object}$ as the optimum solution; otherwise, proceed to the next step.

    Step 3: Search for the coordinate of $X_{\rm New}$ by (2) and (3) to begin a new iteration, then return to Step 2 and continue the calculation.

    The overall optimization flow chart of the SRA is given in Fig. 2.

    图 2  Optimization flow chart of the SRA.
    Fig. 2  Optimization flow chart of the SRA.
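The search loop of Steps 1-3 can be sketched in Python. This is a minimal sketch, not the authors' implementation: the system-update rule is only given graphically in Fig. 1, so the sketch assumes the worst of the four elements is replaced whenever the new candidate improves on it, and the function name `sra_minimize`, the box-clipping of candidates, and the stopping tolerance are all illustrative choices.

```python
import random

def sra_minimize(f, lo, hi, n_dim, iters=20000, tol=1e-9):
    """Minimal sketch of the specular reflection algorithm (SRA).

    f      : objective function to minimize
    lo, hi : scalar bounds applied to every coordinate
    n_dim  : number of design variables N

    The four elements X0..X3 (object, suspected target, mirror, eyes)
    are kept sorted so that f(X0) <= f(X1) <= f(X2) <= f(X3).
    """
    xi = 2.15 / n_dim + 0.84          # control parameter, Eq. (11)
    # Step 1: random feasible initial elements, sorted by objective value
    X = [[random.uniform(lo, hi) for _ in range(n_dim)] for _ in range(4)]
    X.sort(key=f)
    for _ in range(iters):
        x0, x1, x2, x3 = X
        # two candidate coordinates, Eq. (2)
        cand1 = [x1[n] + xi * (2 * random.random() - 1) * (x1[n] - x3[n])
                 for n in range(n_dim)]
        cand2 = [x1[n] + xi * (2 * random.random() - 1) *
                 (2 * x1[n] - x2[n] - x3[n]) for n in range(n_dim)]
        # keep the better of the two candidates, Eq. (3)
        new = min(cand1, cand2, key=f)
        new = [min(max(v, lo), hi) for v in new]   # clip to the feasible box
        # assumed Fig. 1 rule: replace the worst element if the candidate
        # improves on it, then restore the ordering f(X0) <= ... <= f(X3)
        if f(new) < f(x3):
            X[3] = new
            X.sort(key=f)
        # Step 2: stop once the required precision is reached
        if f(X[0]) <= tol:
            break
    return X[0], f(X[0])
```

Run on the weighted sphere test function (10) with $N = 2$, the sketch converges to the neighbourhood of the origin within the iteration budget.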

    Theorem 1: By the SRA, the constrained optimization problem presented in (1) converges to the global extremum with 100% probability.

    Proof: Suppose $X_{\rm Object} = \arg\min f(X)$, $X\in D$, is the global optimal solution, where $f(X_{\rm Object})$ is the optimal value of the objective function and $D$ is the feasible region, $D=\{X \mid g_j (X) = 0$, $j = 1, 2, \ldots, m$; $h_k (X) \leq 0$, $k = 1, 2, \ldots, l\}$, with $D\subseteq \mathbb{R}^N$.

    First, generate the feasible initial solutions $X_{\rm Suspect}^0$, $X_{\rm Mirror}^0$ and $X_{\rm Eyes}^0$ randomly in the searching space, with $X_{\rm Suspect}^0, X_{\rm Mirror}^0, X_{\rm Eyes}^0 \in D$; the corresponding values of the objective function $f (X_{\rm Suspect}^0)$, $f(X_{\rm Mirror}^0)$ and $f(X_{\rm Eyes}^0)$ can then be worked out, where $f(X_{\rm Suspect}^0) \leq f(X_{\rm Mirror}^0) \leq f(X_{\rm Eyes}^0)$.

    Second, the new solutions $X_{\rm Suspect}^k$, $X_{\rm Mirror}^k$ and $X_{\rm Eyes}^k$ and their objective values can be acquired according to the new specular reflection system, where the candidate solutions are produced randomly and are uniformly distributed in $[X_{\min}^k, X_{\max}^k]$; $X_{\rm Suspect}^k$ is the solution of the $k$th ($k \leq Iter_{\max}$) iteration, $X_{\min}^k$ and $X_{\max}^k$ are the boundaries of the design variables in the current iteration, and the maximum iteration number $Iter_{\max}$ should be large enough. Therefore, under the uniform distribution, the probability of generating a feasible solution inside $[X_{\rm Object} - \varepsilon, X_{\rm Object} + \varepsilon]$ is:

    $ \begin{align} p^k =&\ \int\nolimits_{X_{\rm Object} - \varepsilon }^{X_{\rm Object} + \varepsilon } \frac{1}{X_{\max }^k - X_{\min }^k }dX = \frac{2\varepsilon }{X_{\max }^k - X_{\min }^k } \nonumber\\[2mm] \ge&\ \frac{2\varepsilon }{X_{\max } - X_{\min } } > 0 \end{align} $

    (4)

    where $\varepsilon$ is a sufficiently small positive real number, and $X_{\max}$ and $X_{\min}$ are the extreme values of the $4\times N$-dimensional Euclidean space.

    The probability that the feasible solution $X_{\rm Suspect}^0$ is optimal is $P^1$, and the probability that it is not optimal is $Q^1$; both are expressed as follows:

    $ \begin{align} \begin{cases} P^1 = P\{X_{\rm Suspect}^0 \in [X_{\rm Object}-\varepsilon, X_{\rm Object} + \varepsilon]\} = P \\[2mm] Q^1 = P\{X_{\rm Suspect}^0 \notin [X_{\rm Object}-\varepsilon, X_{\rm Object} + \varepsilon]\} = 1 - P \end{cases} \end{align} $

    (5)

    where $X_{\rm Suspect}^0$ is the feasible solution obtained in the first iteration.

    The probability that the feasible solution obtained in the second iteration still fails to be optimal is:

    $ \begin{align} Q^2=Q^1(1-P)=(1-P)^2. \end{align} $

    (6)

    So, the probability that an optimal solution has been found after two iterations is:

    $ \begin{align} P^2=1-(1-P)^2. \end{align} $

    (7)

    After $n$ iterations, the probability of having obtained the optimum solution can be acquired by the following inference.

    $ \begin{align}\label{2} P^n& = 1 - (1 - P)^n = 1 - \prod _{i = 1}^n \left( {1 - \frac{2\varepsilon }{X_{\max }^i - X_{\min }^i }} \right) \nonumber\\[1mm] &\ge 1 - \left( {1 - \frac{2\varepsilon }{X_{\max } - X{ }_{\min }}} \right) ^n. \end{align} $

    (8)

    Taking the limit of (8):

    $ \begin{align} \lim _{n \to \infty } P^n& = \lim\limits_{n \to \infty } \left[{1- \prod _{i = 1}^n \left( {1-\frac{2\varepsilon }{X_{\max }^i- X_{\min }^i }} \right)} \right] \nonumber\\ &\ge \lim _{n \to \infty } \left[{1-\left( {1- \frac{2\varepsilon }{X_{\max }-X_{\min } }} \right)^n} \right] = 1. \end{align} $

    (9)

    As the iterations go on, it becomes more and more likely to achieve the optimum solution. When $n\rightarrow \infty$, $P^n \rightarrow 1$, which indicates that the searching process of the SRA converges to the global extremum with 100% probability.
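The bound in (4)-(9) is easy to check numerically. The helper below (an illustrative name, not from the paper) evaluates $P^n = 1-(1-P)^n$ with $P = 2\varepsilon/(X_{\max}-X_{\min})$ from (4):

```python
def hit_probability(eps, x_min, x_max, n):
    """P^n = 1 - (1 - P)**n with P = 2*eps/(x_max - x_min), Eqs. (4), (8)."""
    p = 2.0 * eps / (x_max - x_min)
    return 1.0 - (1.0 - p) ** n
```

For example, with $\varepsilon = 0.01$ on $[-5.12, 5.12]$ the single-draw probability is below $0.2\%$, yet after $10^4$ iterations the cumulative probability of having hit the $\varepsilon$-neighbourhood is essentially 1, in line with (9).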

    The control parameter is closely related to the space complexity of the optimized target and affects the capability of the algorithm. The control parameters of classical optimization algorithms are obtained by experience or experiment, such as the learning parameters $c_1 = c_2 = 2$ of PSO [16], [17] and the crossover and mutation probabilities of GA [18]; a parameter chosen by experience cannot suit every optimization problem. The SRA has only one control parameter, $\xi$, whose value has a prominent effect on its performance. In this section, a classical test function is used to determine the most appropriate value of $\xi$, and the results are listed in Table Ⅰ.

表 Ⅰ  JUDGEMENT OF $\xi$
Table Ⅰ  JUDGEMENT OF $\xi$

| $\xi$ | Solution ($10^{-6}$), $N=2$ | Iterations, $N=2$ | Solution ($10^{-6}$), $N=10$ | Iterations, $N=10$ | Solution ($10^{-6}$), $N=20$ | Iterations ($10^3$), $N=20$ | Solution ($10^{-6}$), $N=50$ | Iterations ($10^3$), $N=50$ | Solution ($10^{-6}$), $N=100$ | Iterations ($10^4$), $N=100$ |
|------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| 0.4 | 4.7776 | 402.70 | 7.1895 | 1103 | 7.2883 | 2.0369 | 8.9324 | 6.1015 | 9.5844 | 1.4945 |
| 0.5 | 3.3845 | 341.04 | 6.4267 | 940.12 | 7.3111 | 1.8001 | 8.4771 | 5.3149 | 9.6383 | 1.2586 |
| 0.6 | 3.9884 | 737.76 | 5.4844 | 936.46 | 7.2327 | 1.6802 | 9.0155 | 4.8292 | 9.3691 | 1.1344 |
| 0.7 | 3.5625 | 515.18 | 6.9587 | 810.24 | 7.2858 | 1.5971 | 8.5419 | 4.4544 | 9.5971 | 1.0679 |
| 0.8 | 4.2770 | 509.46 | 6.7379 | 747.90 | 7.5046 | 1.4992 | 8.8811 | 4.2697 | 9.3384 | 1.0741 |
| 0.9 | 4.0589 | 259.08 | 6.3850 | 732.90 | 7.4304 | 1.4562 | 8.3421 | 4.2036 | 9.4009 | 1.0976 |
| 1.0 | 4.9287 | 193.26 | 5.9257 | 694.18 | 6.8977 | 1.3677 | 8.3414 | 4.3603 | 9.5947 | 1.1404 |
| 1.1 | 4.6702 | 142.60 | 6.1496 | 674.28 | 8.0852 | 1.2946 | 9.4538 | 4.2854 | 9.4944 | 1.1889 |
| 1.2 | 4.6250 | 142.42 | 5.8875 | 626.54 | 7.7654 | 1.3608 | 8.6969 | 4.4775 | 9.6771 | 1.2434 |
| 1.3 | 5.1501 | 139.08 | 6.5208 | 654.72 | 7.2172 | 1.4050 | 8.9588 | 4.5342 | 9.5792 | 1.3215 |
| 1.4 | 5.4409 | 131.02 | 5.6072 | 695.40 | 6.9556 | 1.4699 | 8.9053 | 4.6930 | 9.6898 | 1.3675 |
| 1.5 | 4.7099 | 103.72 | 5.7050 | 675.02 | 7.6612 | 1.4740 | 9.0472 | 4.8329 | 9.5134 | 1.4173 |
| 1.6 | 4.7625 | 93.82 | 5.8038 | 713.20 | 6.4546 | 1.4700 | 8.9756 | 4.8634 | 9.7748 | 1.4768 |
| 1.7 | 4.9327 | 91.94 | 4.9871 | 783.90 | 5.8034 | 1.6036 | 9.1825 | 5.0851 | 9.6612 | 1.4985 |
| 1.8 | 5.9076 | 87.32 | 5.4104 | 856.30 | 7.1143 | 1.6917 | 8.7372 | 5.3202 | 9.4446 | 1.5536 |
| 1.9 | 4.9402 | 82.44 | 5.5724 | 832.12 | 6.4092 | 1.8641 | 8.7754 | 5.5962 | 9.6617 | 1.6423 |
| 2.0 | 4.7168 | 89.08 | 4.8307 | 998.30 | 5.7508 | 2.0544 | 8.1780 | 6.3700 | 9.4975 | 2.5117 |

    $ \begin{align} f (x_1, x_2, \ldots, x_N)=\sum\limits_{j=1}^N j\times x_j^2. \end{align} $

    (10)

    The test function is given by (10), and its three-dimensional surface is shown in Fig. 3. Its theoretical global minimum is 0 at $(0, 0, \ldots, 0)$, with the constraint $-5.12\leq x_j\leq 5.12$, $j=1, 2, \ldots, N$. For $N = (2, 10, 20, 50, 100, 500)$ and $\xi = (0.4, 0.5, \ldots, 2.0)$, the calculation is run 50 times for every combination of $N$ and $\xi$, and the average results are put in Table Ⅰ. The convergence condition is $Iter_{\max} = 10^5$ or $f (x_1, x_2, \ldots, x_N )\leq 10^{-5}$.

    图 3  Three-dimensional surface of test function.
    Fig. 3  Three-dimensional surface of test function.

    As shown in Table Ⅰ, all the optimal solutions fall between $10^{-5}$ and $10^{-6}$, so the influence of $\xi$ on optimization efficiency cannot be judged from the solutions alone; the iteration count is therefore the factor to consider.

    According to Table Ⅰ, the following conclusions can be drawn: when $N = 2$ and $\xi = 1.9$, the optimization efficiency is highest, with 82.44 iterations on average; when $N = 10$, $N = 20$, $N = 50$ and $N = 100$, the best $\xi$ and its corresponding iteration count are 1.3 and 654.72, 1.1 and $1.2946\times 10^3$, 0.9 and $4.2036\times 10^3$, and 0.7 and $1.0679 \times 10^4$, respectively. In addition, the best value of $\xi$ decreases gradually as $N$ increases, and the relationship between $\xi$ and $N$ shown in (11) can be obtained by data fitting.

    $ \begin{align} \xi=\frac{2.15}{N}+0.84. \end{align} $

    (11)

    To verify the global optimization ability of the SRA, four numerical test functions from [10] are used; they are listed in detail in Table Ⅱ. The maximum number of iterations is set to 2000. The SRA is executed 50 times and the average values are listed in Table Ⅲ; the other results are taken from [10]. Figs. 4-7 show the iteration curves of the objective function for each test function.
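The four benchmarks translate directly into code. The sketch below follows the formulas in Table Ⅱ (the source's "Restrigin" is the standard Rastrigin function):

```python
import math

def sphere(x):
    # f1: sum of squares, minimum 0 at the origin
    return sum(v * v for v in x)

def griewank(x):
    # f2: 1 + sum(x_i^2/4000) - prod(cos(x_i/sqrt(i))), minimum 0 at the origin
    s = sum(v * v for v in x) / 4000.0
    p = 1.0
    for i, v in enumerate(x, start=1):
        p *= math.cos(v / math.sqrt(i))
    return 1.0 + s - p

def rosenbrock(x):
    # f3: banana valley, minimum 0 at (1, 1, ..., 1)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    # f4: highly multi-modal, minimum 0 at the origin
    return sum(10.0 + v * v - 10.0 * math.cos(2.0 * math.pi * v) for v in x)
```

Any of these can be passed to an optimizer as the objective over the intervals of convergence given in Table Ⅱ.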

表 Ⅱ  NUMERICAL CALCULATION FUNCTION
Table Ⅱ  NUMERICAL CALCULATION FUNCTION

| Name | Expression | Interval of convergence | Global extreme | Dimension |
|------|------------|-------------------------|----------------|-----------|
| Sphere | $f_1 = \sum_{i = 1}^n x_i^2$ | $x_i\in [-50, 50]$ | 0 at $(0, 0, \ldots, 0)$ | $n = 30$, 100 |
| Griewank | $f_2 = 1 + \sum_{i = 1}^n \frac{x_i^2}{4000} - \prod_{i = 1}^n \cos\left(\frac{x_i}{\sqrt{i}}\right)$ | $x_i\in [-600, 600]$ | 0 at $(0, 0, \ldots, 0)$ | $n = 30$, 100 |
| Rosenbrock | $f_3 = \sum_{i = 1}^{n - 1} [100(x_{i + 1}-x_i^2 )^2 + (x_i-1)^2]$ | $x_i\in [-100, 100]$ | 0 at $(1, 1, \ldots, 1)$ | $n = 30$, 100 |
| Rastrigin | $f_4 = \sum_{i = 1}^n [10 + x_i^2-10\cos (2\pi x_i )]$ | $x_i\in [-5.0, 5.0]$ | 0 at $(0, 0, \ldots, 0)$ | $n = 30$, 100 |
表 Ⅲ  CALCULATION RESULTS OF TEST FUNCTION
Table Ⅲ  CALCULATION RESULTS OF TEST FUNCTION

| Name | PSO ($n=30$) [10] | Kalman swarm ($n=30$) [10] | Chaos ant colony optimization ($n=30$) [10] | Chaos PSO ($n=30$) [10] | New chaos PSO ($n=30$) [10] | SRA ($n=30$) | SRA ($n=100$) |
|------|------|------|------|------|------|------|------|
| Sphere | $3.7004\times 10^{2}$ | 4.723 | $3.815\times 10^{-1}$ | $2.4736\times 10^{-3}$ | $2.0729\times 10^{-9}$ | $1.1080\times 10^{-24}$ | $2.3160\times 10^{-12}$ |
| Griewank | $2.61\times 10^{7}$ | $3.28\times 10^{3}$ | 23.414 | $6.8481\times 10^{-2}$ | $9.9051\times 10^{-11}$ | $4.6629\times 10^{-15}$ | $2.7978\times 10^{-14}$ |
| Rosenbrock | 13.865 | $9.96\times 10^{-1}$ | $4.669\times 10^{-1}$ | $1.0404\times 10^{-2}$ | $2.9068\times 10^{-4}$ | $9.8730\times 10^{-7}$ | $6.1173\times 10^{-5}$ |
| Rastrigin | $1.0655\times 10^{2}$ | 53.293 | 22.6361 | $9.5258\times 10^{-1}$ | $4.3741\times 10^{-4}$ | $3.9373\times 10^{-21}$ | $8.7727\times 10^{-7}$ |
    图 4  Iteration curve of Sphere.
    Fig. 4  Iteration curve of Sphere.
    图 5  Iteration curve of Griewank.
    Fig. 5  Iteration curve of Griewank.
    图 6  Iteration curve of Rosenbrock.
    Fig. 6  Iteration curve of Rosenbrock.
    图 7  Iteration curve of Rastrigin.
    Fig. 7  Iteration curve of Rastrigin.

    The results in Table Ⅲ indicate that, when $n = 30$, the results of the four test functions calculated by the SRA are $1.1080 \times 10^{-24}$, $4.6629 \times 10^{-15}$, $9.8730 \times 10^{-7}$ and $3.9373 \times 10^{-21}$, respectively, which are about $1.87 \times 10^{15}$, $2.12 \times 10 ^4$, $2.90 \times 10^2$ and $1.11 \times 10^{17}$ times more accurate than the results of the new chaos PSO algorithm, the most accurate method in [10]. When $n = 100$, the results calculated by the SRA are $2.3160\times 10^{-12}$, $2.7978 \times 10^{-14}$, $6.1173\times10^{-5}$ and $8.7727\times 10^{-7}$, respectively, still $8.95\times 10^2$, $3.54\times 10^3$, $4.75$ and $4.99 \times 10^2$ times more accurate than the new chaos PSO results. All in all, the SRA is an efficient optimization algorithm.

表 Ⅳ  CALCULATION RESULTS
Table Ⅳ  CALCULATION RESULTS

| Algorithm | Design method | $x_1$ | $x_2$ | $x_3$ | $x_4$ | $x_5$ | $A$ (mm$^2$) | $R_v$ | $\partial R_v/\partial S$ | $\partial R_v/\partial F$ | $\partial R_v/\partial E$ | $\partial R_v/\partial \rho$ |
|------|------|------|------|------|------|------|------|------|------|------|------|------|
| SRA | Optimization | 6 | 6 | 205 | 635 | 257 | 10 704 | 0.5071 | 14.3985 | 0.0017 | $9.15\times 10^{-9}$ | 0.0011 |
| SRA | Reliability optimization | 6 | 6 | 258 | 632 | 310 | 11 304 | 0.9968 | 13.0816 | 0.0015 | $8.77\times 10^{-9}$ | 0.0010 |
| SRA | Robust reliability optimization | 6 | 6 | 324 | 595 | 376 | 11 652 | 0.9813 | 12.6270 | 0.0015 | $9.23\times 10^{-9}$ | 0.0010 |
| PSO | Optimization | 10 | 6 | 185 | 567 | 619 | 11 544 | 0.5314 | 13.0714 | 0.0015 | $9.8\times 10^{-9}$ | 0.0010 |
| PSO | Reliability optimization | 7 | 7 | 222 | 605 | 276 | 12 334 | 0.9806 | 12.9119 | 0.0015 | $9.73\times 10^{-9}$ | 0.0011 |
| PSO | Robust reliability optimization | 9 | 6 | 302 | 534 | 354 | 12 780 | 0.9810 | 11.5262 | 0.0013 | $1.01\times 10^{-8}$ | 0.0010 |
| FOA | Optimization | 9 | 6 | 190 | 581 | 633 | 11 328 | 0.5132 | 13.3500 | 0.0016 | $9.64\times 10^{-9}$ | 0.0010 |
| FOA | Reliability optimization | 6 | 6 | 491 | 532 | 543 | 13 068 | 0.9802 | 11.3697 | 0.0013 | $1.01\times 10^{-8}$ | $9.95\times 10^{-4}$ |
| FOA | Robust reliability optimization | 8 | 11 | 237 | 536 | 299 | 16 576 | 1.0 | 11.6479 | 0.0013 | $1.27\times 10^{-9}$ | 0.0013 |

Note: Design variables $x_1$-$x_5$ are in mm, and the reliability sensitivities are in units of $10^{-3}$. The reliability index $R_0 = 0.98$ is defined.

    According to the reliability design theory, the reliability can be calculated by (12):

    $ \begin{align} R=\int_{g(X)} f_x(X) dX \end{align} $

    (12)

    where $f_x (X)$ is the joint probability density of the basic random variables $X=(X_1, X_2, \ldots, X_n)^{\rm T}$, and $g(X)$ is the limit state function indicating the state of the component:

    $ \begin{align} \begin{cases} g(X)\leq 0, &{\rm failure}\\[2mm] g(X)>0, &{\rm safe.} \end{cases} \end{align} $

    (13)

    The basic random variables $X_i$ ($i = 1, 2, \ldots, n$) are independent of each other and follow certain distributions. The reliability index $\beta$ and the reliability $R=\Phi(\beta)$ can be calculated by the Monte Carlo method [19], where $\Phi(\cdot)$ is the standard normal distribution function.
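As a sketch of how (12) and (13) can be estimated in practice, the snippet below draws independent normal samples and counts the fraction falling in the safe region $g(X) > 0$. The function name, the example limit state, and its distributions are illustrative choices, not the paper's crane model:

```python
import random

def monte_carlo_reliability(g, dists, n_samples=100_000, seed=0):
    """Estimate R = P(g(X) > 0) by Monte Carlo sampling, cf. Eqs. (12), (13).

    g     : limit state function; g(x) > 0 means "safe", g(x) <= 0 "failure"
    dists : list of (mean, std) pairs for independent normal variables
    """
    rng = random.Random(seed)
    safe = 0
    for _ in range(n_samples):
        x = [rng.gauss(mu, sigma) for mu, sigma in dists]
        if g(x) > 0:
            safe += 1
    return safe / n_samples

# hypothetical limit state: capacity 3.0 minus a load X1 + X2,
# with X1 ~ N(1, 0.2^2) and X2 ~ N(1, 0.3^2)
```

For this example the exact reliability is $\Phi(1/\sqrt{0.13}) \approx 0.997$, and the sampled estimate lands close to it.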

    Robust design is a modern design technique that improves efficiency and quality while reducing product cost [20], [21]. Robust design of mechanical products makes them insensitive to changes in the design parameters: even if there is an error in a design parameter, the product still performs well. Reliability design is a method for eliminating weaknesses and failure modes and guarding against malfunction. Reliability robust optimization design combines robust design and reliability design and possesses the merits of both; products designed with it are both reliable and robust.

    $ \begin{align} &\min f(X)=\omega_1 f_1(X)+\omega_2 f_2(X)\notag\\ & {\rm s.t.} \ \ R\geq R_0\notag\\ &\qquad p_i(X)\geq 0, \ i=1, 2, \ldots, l\notag\\ &\qquad q_j(X)\geq 0, \ j=1, 2, \ldots, m \end{align} $

    (14)

    where $f_1(X)$ and $f_2(X)$ are the objective functions of the reliability robust optimization design: $f_1(X)=R$, and $f_2(X)$ is the robustness criterion obtained by (15); $R$ is the reliability, $R_0$ is its constraint value, and $p_i$ and $q_j$ are the equality and inequality constraints, respectively.

    $ \begin{align} f_2 (X) = \sqrt {\sum\limits_{i = 1}^n \left( {\frac{\partial R}{\partial X_i }} \right)^2} \end{align} $

    (15)

    where $\omega_1$ and $\omega_2$ are weighting coefficients reflecting the relative importance of $f_1(X)$ and $f_2(X)$; both are calculated by (16), with $\omega_1+\omega_2 = 1$.

    $ \begin{align} \begin{cases} \omega _1 = \dfrac{f_2 (X^{1\ast }) - f_2 (X^{2\ast })}{[f_1 (X^{2\ast })- f_1 (X^{1\ast })] + [f_2 (X^{1\ast })-f_2 (X^{2\ast })]} \\[4mm] \omega _2 = \dfrac{f_1 (X^{2\ast }) - f_1 (X^{1\ast })}{[f_1 (X^{2\ast })- f_1 (X^{1\ast })] + [f_2 (X^{1\ast })-f_2 (X^{2\ast })]} \end{cases} \end{align} $

    (16)

    where $X^{1*}$ and $X^{2*}$ are the optimal solutions obtained when $\min f(X) = f_1 (X)$ and $\min f(X)=f_2 (X)$, respectively.
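Equation (16) can be checked with a small helper (the name is illustrative). For two symmetric quadratics the two single-objective optima trade off equally and the weights come out as $\omega_1 = \omega_2 = 0.5$:

```python
def weights(f1, f2, x1_star, x2_star):
    """Weighting coefficients of Eq. (16).

    x1_star minimizes f1 alone; x2_star minimizes f2 alone.
    Returns (w1, w2) with w1 + w2 = 1.
    """
    df1 = f1(x2_star) - f1(x1_star)   # loss in f1 when optimizing f2
    df2 = f2(x1_star) - f2(x2_star)   # loss in f2 when optimizing f1
    w1 = df2 / (df1 + df2)
    w2 = df1 / (df1 + df2)
    return w1, w2

# example: f1(x) = x^2 (optimum x = 0), f2(x) = (x - 2)^2 (optimum x = 2)
```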

    The bridge crane is taken as an example to verify the ability of the SRA to solve engineering problems. The SRA is used to design the structure by optimization design, reliability optimization design and robust reliability optimization design, and the results are listed in Table Ⅳ together with those calculated by PSO and FOA, which are used for analysing the performance of the SRA.

    The mechanical model of the bridge crane is shown in Fig. 8, the uniform load $q$ and the concentrated load $F$ are exerted on the girder, where $q$ is caused by the structure deadweight and $F$ is related to the weight of the hoisted cargo.

    图 8  Mechanical model diagram and sectional dimension.
    Fig. 8  Mechanical model diagram and sectional dimension.

    The parameters $x_i$ $(i = 1, 2, 3, 4, 5)$ are taken as the design variables, where $6\leq x_1, x_2\leq 30$, $50\leq x_3, x_4 \leq 5000$, and $x_5 = x_3 + 2x_2 + 40$. The parameter $S$ is the span of the bridge crane; other parameters include the elasticity modulus $E$, the material density $\rho$, and $q = g(x_1, x_2, x_3, x_4, x_5)$. The parameters $S$, $F$, $E$ and $\rho$ are mutually independent normal random variables, $S \sim {\rm N}(12, 0.08^2)$, $F \sim$ , $E \sim {\rm N}(206\,000, 6180^2)$, $\rho\sim {\rm N}(7850, 5.6^2)$.

    Objective function: According to the characteristics of the structural optimization problem, the objective function can be defined as shown in (17).

    $ \begin{align} {\rm min} f(x_1, x_2, x_3, x_4, x_5)=2x_1x_5+2x_2x_4. \end{align} $

    (17)

    Constraint conditions: strength, stiffness and stability are the three basic failure modes of the bridge crane, so the constraints are defined as follows:

    1) Strength constraint: the maximum stress at the dangerous point of the mid-span section must be smaller than the ultimate stress $f_{rd}$:

    $ \begin{align} &h_1(x_1, x_2, x_3, x_4, x_5)=f_{rd}-\sigma\notag\\ &\qquad =f_{rd}-\frac{qS^2+2FS}{8I_Z}\left(\frac{x_4}{2}+x_1\right) \end{align} $

    (18)

    where $f_{rd}$ is determined by the limit state method, $f_{rd} = {f_{yk}}/{\gamma_m} = {235}/{1.1}=213.64$ MPa; $f_{yk} = 235$ MPa is the yield stress, $\gamma_m = 1.1$ is the resistance coefficient, $I_Z$ is the moment of inertia of the cross-section, and $q$ and $I_Z$ are functions of the design variables $x_i$ ($i$ = 1, 2, 3, 4, 5).

    2) Stiffness Constraint: The maximum deflection of the structure must be smaller than the allowable value $\gamma_0$ $=$ $S/400$ .

    $ \begin{align} &h_2(x_1, x_2, x_3, x_4, x_5)=\gamma_0-\gamma\notag\\ &\qquad =\gamma_0-\left(\frac{5qS^4}{384EI_Z}+\frac{FS^3}{48EI_Z}\right). \end{align} $

    (19)

    3) Stability constraint: the depth-width ratio of the cross-section must be smaller than 3:

    $ \begin{align} h_3(x_1, x_2, x_3, x_4, x_5)=3-\frac{x_4+2x_1}{x_3+2x_2}. \end{align} $

    (20)

    In conclusion, the optimization model of the bridge crane can be built as (21).

    $ \begin{align} & \min f(x_1, x_2, x_3, x_4, x_5) \notag\\ & {\rm s.t.} \ \ h_k(x_1, x_2, x_3, x_4, x_5)\geq 0, \quad k=1, 2, 3\notag \\ &\qquad 6\leq x_1, \ x_2\leq 30\notag \\ &\qquad 50\leq x_3, \ x_4\leq 5000. \end{align} $

    (21)
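Parts of model (21) can be written down directly. The sketch below (function names illustrative) covers only the pieces fully specified in the text: the objective (17), the coupling $x_5 = x_3 + 2x_2 + 40$, the variable bounds, and the stability constraint (20); the strength and stiffness constraints (18) and (19) are omitted because the expressions for $q$ and $I_Z$ are not reproduced here. Infeasible points are handled with a simple penalty.

```python
def crane_area(x):
    # objective (17): area of the box section, 2*x1*x5 + 2*x2*x4
    x1, x2, x3, x4, x5 = x
    return 2 * x1 * x5 + 2 * x2 * x4

def stability_margin(x):
    # constraint (20): h3 = 3 - (x4 + 2*x1)/(x3 + 2*x2) must stay >= 0
    x1, x2, x3, x4, x5 = x
    return 3.0 - (x4 + 2 * x1) / (x3 + 2 * x2)

def penalized(v, penalty=1e9):
    """Penalty form of the parts of model (21) spelled out in the text."""
    x1, x2, x3, x4 = v
    x5 = x3 + 2 * x2 + 40              # geometric coupling of the section
    x = (x1, x2, x3, x4, x5)
    in_box = (6 <= x1 <= 30 and 6 <= x2 <= 30
              and 50 <= x3 <= 5000 and 50 <= x4 <= 5000)
    if not in_box or stability_margin(x) < 0:
        return penalty                 # infeasible: return a large penalty
    return crane_area(x)
```

Evaluating the design variables reported for the SRA in Table Ⅳ, $(x_1, \ldots, x_4) = (6, 6, 205, 635)$, gives $x_5 = 257$, satisfies the stability constraint, and reproduces the tabulated area of 10 704 mm$^2$.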

    The reliability constraint of the structure is added to (21) to achieve the reliability optimization design. The failure of any mode results in failure of the structure, so the reliability $R_v$ is defined by (22), and the reliability optimization model of the bridge crane is established as (23).

    $ \begin{align} R_v=\prod\limits_{k=1}^3 R_k %(h_k\geq 0) \end{align} $

    (22)

    where $R_k$, $k=1, 2, 3$, is the reliability with respect to the $k$th failure mode, i.e., the probability that $h_k \geq 0$.

    $ \begin{align} & \min f(x_1, x_2, x_3, x_4, x_5)\notag \\ & {\rm s.t.}\ \ h_k(x_1, x_2, x_3, x_4, x_5)\geq 0, \quad k=1, 2, 3\notag\\ &\qquad 6\leq x_1, \ x_2\leq 30\notag\\ &\qquad 50\leq x_3, \ x_4\leq 5000\notag\\ &\qquad R_v-R_0\geq 0. \end{align} $

    (23)

    According to the robust reliability optimization design model shown in (14), the indices of reliability and robustness are both taken into account, and the multi-objective optimization model is built as (24):

    $ \begin{align} & \min \omega_1\times f(x_1, x_2, x_3, x_4, x_5)+\omega_2\times f'(x)\notag \\ & {\rm s.t.} \ \ h_k(x_1, x_2, x_3, x_4, x_5)\geq 0, \quad k=1, 2, 3\notag\\ &\qquad 6\leq x_1, \ x_2\leq 30\notag\\ &\qquad 50\leq x_3, \ x_4\leq 5000\notag\\ &\qquad R_v-R_0\geq 0 \end{align} $

    (24)

    where $f'(x)$ is the robustness criterion $f_2(X)$ defined by (15).

    The three optimization models shown in (21), (23) and (24) are solved by the SRA, PSO and FOA, respectively, and the results are presented in Table Ⅳ, from which the following conclusions can be drawn:

    1) For structural optimization, the sectional areas obtained by the three algorithms are 10 704, 11 544 and 11 328 mm$^2$; the optimum among the three, 10 704, is calculated by the SRA, which shows that its ability exceeds that of PSO and FOA. The reliabilities of the three designs are 0.5071, 0.5314 and 0.5132, respectively, which fail to meet the reliability design requirement because the reliability constraint is ignored.

    2) After reliability optimization design, the reliability of the structure is ensured and the robustness is improved; however, the cross-sectional areas increase to 11 652, 12 334 and 16 576 at the same time, and the best result is again calculated by the SRA.

    3) With the robustness requirement imposed, the reliability sensitivities of the design variables are significantly reduced, and the robustness of the structure is improved notably.

    In this paper, a new optimization algorithm, the specular reflection algorithm (SRA), is proposed, inspired by the optical property of the mirror. The SRA has a particular searching strategy that differs from swarm intelligence optimization algorithms. Its convergence is verified by traditional mathematical analysis: it converges to the global optimum with 100% probability. Reasonable values of the control parameter are analysed and a computational formula for it is deduced by data fitting, so that the parameter varies with the problem and the adaptability and operability of the SRA are improved. Four classical numerical test functions are analysed, and the results indicate that the SRA outperforms traditional intelligent optimization algorithms. Then, the theories of reliability optimization and robust design are combined to establish the optimization design, reliability optimization design and robust reliability optimization design models of a bridge crane as an example system, which are solved by the SRA and two other optimization methods (PSO and FOA). The simulations show that the structure designed by the SRA is reliable and robust, and that the results of the SRA are superior to those of PSO and FOA. All in all, the SRA is a new contribution to intelligent optimization with strong calculation capability; its ability for structure design is verified in this paper, and it can be applied in other fields as well.

  • 图  1  堆叠自动编码器结构图

    Fig.  1  The structure of SAE

    图  2  非下采样轮廓波变换

    Fig.  2  Nonsubsampled contourlet transform

    图  3  基于深层协同稀疏编码网络算法流程图

    Fig.  3  The flow chart of the proposed algorithm

    图  4  深度协同稀疏编码网络结构图

    Fig.  4  The structure of deep collaborative sparse coding network

    图  5  人工SAR 图像数据

    Fig.  5  The artificial SAR data

    图  6  DCSCN 网络参数设置

    Fig.  6  Parameter setting of the DCSCN

    图  7  人工SAR 图像目标识别结果

    Fig.  7  Target recognition results of the artificial SAR image

    图  8  北戴河区域SAR 数据

    Fig.  8  The original SAR data of the Beidaihe area

    图  9  SAR 图像1 浮筏识别结果

    Fig.  9  Target recognition results of the first SAR image

    图  10  SAR 图像2 浮筏识别结果

    Fig.  10  Target recognition results of the second SAR image

    表  1  各个算法在人工SAR 图像上识别结果对比

    Table  1  Recognition performance comparison of different algorithms on the artificial SAR image

    | 算法 | OA (%) | κ | 计算效率 (s) |
    | --- | --- | --- | --- |
    | SVM[25] | 83.574±0.132 | 0.585±0.006 | 166.623±1.854 |
    | SOMP[26] | 92.803±0.588 | 0.775±0.019 | 568.204±3.376 |
    | SAE[24] | 95.261±0.375 | 0.858±0.011 | 312.044±2.975 |
    | Lasso-Pooling[27] | 97.972±0.183 | 0.938±0.005 | 757.605±6.334 |
    | DCSCN | 99.457±0.079 | 0.983±0.002 | 192.630±4.481 |

    表  2  各个算法在SAR 图像1 浮筏识别结果对比

    Table  2  Recognition performance comparison of different algorithms on the first SAR image

    | 算法 | OA (%) | κ | 计算效率 (s) |
    | --- | --- | --- | --- |
    | SVM[25] | 79.804±0.375 | 0.653±0.004 | 528.170±4.266 |
    | SOMP[26] | 86.243±0.615 | 0.708±0.007 | 12 411.229±62.799 |
    | SAE[24] | 81.960±0.392 | 0.686±0.005 | 7 963.060±29.862 |
    | Lasso-Pooling[27] | 86.925±0.073 | 0.692±0.006 | 15 796.110±79.618 |
    | DCSCN | 89.046±0.433 | 0.759±0.005 | 458.050±2.145 |

    表  3  各个算法在SAR 图像2 浮筏识别结果对比

    Table  3  Recognition performance comparison of different algorithms on the second SAR image

    | 算法 | OA (%) | κ | 计算效率 (s) |
    | --- | --- | --- | --- |
    | SVM[25] | 87.714±1.026 | 0.693±0.007 | 794.139±7.148 |
    | SOMP[26] | 93.261±0.068 | 0.821±0.021 | 6 432.545±39.582 |
    | SAE[24] | 93.179±0.240 | 0.816±0.003 | 1 245.995±16.942 |
    | Lasso-Pooling[27] | 93.614±0.175 | 0.832±0.042 | 8 381.801±44.702 |
    | DCSCN | 98.762±0.172 | 0.966±0.003 | 268.498±1.633 |
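    The OA and κ columns in the tables above are the standard overall accuracy and Cohen's kappa coefficient computed from a classification confusion matrix. A minimal sketch of the two metrics (the 2-class confusion matrix below is made up for illustration, not taken from the experiments):

```python
def oa_and_kappa(cm):
    """Overall accuracy (OA) and Cohen's kappa from a square confusion matrix.

    cm[i][j] = number of samples whose true class is i and predicted class is j.
    """
    k = len(cm)
    n = sum(sum(row) for row in cm)
    diag = sum(cm[i][i] for i in range(k))
    oa = diag / n  # observed agreement
    # Expected chance agreement from the row (truth) and column (prediction) marginals.
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(k)) / (n * n)
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa

# Hypothetical 2-class confusion matrix (e.g., raft vs. background).
cm = [[90, 10],
      [5, 95]]
oa, kappa = oa_and_kappa(cm)
print(round(oa, 3), round(kappa, 3))  # prints: 0.925 0.85
```

Unlike OA, κ discounts the agreement expected by chance, which is why a classifier can show a high OA but a much lower κ on imbalanced scenes.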
  • [1] 余航, 焦李成, 刘芳. 基于上下文分析的无监督分层迭代算法用于SAR图像分割. 自动化学报, 2014, 40(1): 100-116 http://www.aas.net.cn/CN/abstract/abstract18271.shtml

    Yu Hang, Jiao Li-Cheng, Liu Fang. Context based unsupervised hierarchical iterative algorithm for SAR segmentation. Acta Automatica Sinica, 2014, 40(1): 100-116 http://www.aas.net.cn/CN/abstract/abstract18271.shtml
    [2] 赵明波, 何峻, 付强. SAR图像CFAR检测的快速算法综述. 自动化学报, 2012, 38(12): 1885-1895 doi: 10.3724/SP.J.1004.2012.01885

    Zhao Ming-Bo, He Jun, Fu Qiang. Survey on fast CFAR detection algorithms for SAR image targets. Acta Automatica Sinica, 2012, 38(12): 1885-1895 doi: 10.3724/SP.J.1004.2012.01885
    [3] 初佳兰, 赵冬至, 张丰收. 基于关联规则的裙带菜筏式养殖遥感识别方法. 遥感技术与应用, 2012, 27(6): 941-946 http://www.cnki.com.cn/Article/CJFDTOTAL-YGJS201206018.htm

    Chu Jia-Lan, Zhao Dong-Zhi, Zhang Feng-Shou. Wakame raft interpretation method of remote sensing based on association rules. Remote Sensing Technology and Application, 2012, 27(6): 941-946 http://www.cnki.com.cn/Article/CJFDTOTAL-YGJS201206018.htm
    [4] 范剑超, 张丰收, 赵冬至, 文世勇, 卫宝泉. 基于高分辨率卫星遥感SAR图像的海洋浮筏养殖信息提取. 见: 第二届中国沿海地区灾害风险分析与管理学术研讨会. 2014. 59-63

    Fan Jian-Chao, Zhang Feng-Shou, Zhao Dong-Zhi, Wen Shi-Yong, Wei Bao-Quan. Floating raft aquaculture extraction based on high resolution satellite remote sensing SAR images. In: Proceedings of the 2nd Symposium on Disaster Risk Analysis and Management in Chinese Littoral Regions. 2014. 59-63
    [5] Qazi W A, Emery W J, Fox-Kemper B. Computing ocean surface currents over the coastal california current system using 30-min-lag sequential SAR images. IEEE Transactions on Geoscience and Remote Sensing, 2014, 52(12): 7559-7580 doi: 10.1109/TGRS.2014.2314117
    [6] 潘德炉, 林明森, 毛志华. 海洋微波遥感与应用. 北京: 海洋出版社, 2013.

    Pan De-Lu, Lin Ming-Shen, Mao Zhi-Hua. Microwave Remote Sensing and Application of Ocean. Beijing: Ocean Press, 2013.
    [7] Collins M J, Allan J M. Modeling and simulation of SAR image texture. IEEE Transactions on Geoscience and Remote Sensing, 2009, 47(10): 3530-3546 doi: 10.1109/TGRS.2009.2021260
    [8] Liu C J, Wechsler H. Gabor feature based classification using the enhanced fisher linear discriminant model for face recognition. IEEE Transactions on Image Processing, 2002, 11(4): 467-476 doi: 10.1109/TIP.2002.999679
    [9] Do M N, Vetterli M. The contourlet transform: an efficient directional multiresolution image representation. IEEE Transactions on Image Processing, 2005, 14(12): 2091-2106 doi: 10.1109/TIP.2005.859376
    [10] Da Cunha A L, Zhou J P, Do M N. The nonsubsampled contourlet transform: theory, design, and applications. IEEE Transactions on Image Processing, 2006, 15(10): 3089-3101 doi: 10.1109/TIP.2006.877507
    [11] Yang X H, Jiao L C. Fusion algorithm for remote sensing images based on nonsubsampled contourlet transform. Acta Automatica Sinica, 2008, 34(3): 274-281 doi: 10.3724/SP.J.1004.2008.00274
    [12] Hinton G E, Osindero S, Teh Y W. A fast learning algorithm for deep belief nets. Neural Computation, 2006, 18(7): 1527-1554 doi: 10.1162/neco.2006.18.7.1527
    [13] Salakhutdinov R, Tenenbaum J B, Torralba A. Learning with hierarchical-deep models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(8): 1958-1971 doi: 10.1109/TPAMI.2012.269
    [14] Goh H, Thome N, Cord M, Lim J H. Learning deep hierarchical visual feature coding. IEEE Transactions on Neural Networks and Learning Systems, 2014, 25(12): 2212-2225 doi: 10.1109/TNNLS.2014.2307532
    [15] 郑胤, 陈权崎, 章毓晋. 深度学习及其在目标和行为识别中的新进展. 中国图象图形学报, 2014, 19(2): 175-184 http://www.cnki.com.cn/Article/CJFDTOTAL-ZGTB201402002.htm

    Zheng Yin, Chen Quan-Qi, Zhang Yu-Jin. Deep learning and its new progress in object and behavior recognition. Journal of Image and Graphics, 2014, 19(2): 175-184 http://www.cnki.com.cn/Article/CJFDTOTAL-ZGTB201402002.htm
    [16] Chen X Y, Xiang S M, Liu C L, Pan C H. Vehicle detection in satellite images by hybrid deep convolutional neural networks. IEEE Geoscience and Remote Sensing Letters, 2014, 11(10): 1797-1801 doi: 10.1109/LGRS.2014.2309695
    [17] Chen Y S, Lin Z H, Zhao X, Wang G, Gu Y F. Deep learning-based classification of hyperspectral data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2014, 7(6): 2094-2107 doi: 10.1109/JSTARS.2014.2329330
    [18] 冯博, 陈渤, 王鹏辉, 刘宏伟. 基于稳健深层网络的雷达高分辨距离像目标特征提取算法. 电子与信息学报, 2014, 36(12): 2949-2955 http://www.cnki.com.cn/Article/CJFDTOTAL-DZYX201412025.htm

    Feng Bo, Chen Bo, Wang Peng-Hui, Liu Hong-Wei. Feature extraction method for radar high resolution range profile targets based on robust deep networks. Journal of Electronics & Information Technology, 2014, 36(12): 2949-2955 http://www.cnki.com.cn/Article/CJFDTOTAL-DZYX201412025.htm
    [19] 李帅, 许悦雷, 马时平, 倪嘉成, 王坤. 基于小波变换和深层稀疏编码的SAR目标识别. 电视技术, 2014, 38(13): 31-35 http://www.cnki.com.cn/Article/CJFDTOTAL-DSSS201413010.htm

    Li Shuai, Xu Yue-Lei, Ma Shi-Ping, Ni Jia-Cheng, Wang Kun. SAR target recognition using wavelet transform and deep sparse autoencoders. Video Engineering, 2014, 38(13): 31-35 http://www.cnki.com.cn/Article/CJFDTOTAL-DSSS201413010.htm
    [20] Rai P, Khanna P. A gender classification system robust to occlusion using gabor features based 2D2PCA. Journal of Visual Communication and Image Representation, 2014, 25(5): 1118-1129 doi: 10.1016/j.jvcir.2014.03.009
    [21] 张强, 郭宝龙. 基于非采样Contourlet变换多传感器图像融合算法. 自动化学报, 2008, 34(2): 135-141 http://www.aas.net.cn/CN/abstract/abstract15976.shtml

    Zhang Qiang, Guo Bao-Long. Fusion of multi-sensor images based on the nonsubsampled contourlet transform. Acta Automatica Sinica, 2008, 34(2): 135-141 http://www.aas.net.cn/CN/abstract/abstract15976.shtml
    [22] Yan C M, Guo B L, Yi M. Fast algorithm for nonsubsampled contourlet transform. Acta Automatica Sinica, 2014, 40(4): 757-762 doi: 10.1016/S1874-1029(14)60007-0
    [23] Levinshtein A, Stere A, Kutulakos K N, Fleet D J, Dickinson S J, Siddiqi K. TurboPixels: fast superpixels using geometric flows. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(12): 2290-2297 doi: 10.1109/TPAMI.2009.96
    [24] Ng A. Sparse autoencoder [Online], available: http://web.stanford.edu/class/cs294a/sae/, November 8, 2015.
    [25] Chang C C, Lin C J. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2011, 2(3): Article No. 27
    [26] Chen Y, Nasrabadi N M, Tran T D. Simultaneous joint sparsity model for target detection in hyperspectral imagery. IEEE Geoscience and Remote Sensing Letters, 2011, 8(4): 676-680 doi: 10.1109/LGRS.2010.2099640
    [27] Mairal J, Bach F, Ponce J. Sparse modeling for image and vision processing. Foundations and Trends in Computer Graphics and Vision, 2014, 8(2-3): 85-283 doi: 10.1561/0600000058

图(10) / 表(3)
出版历程
  • 收稿日期:  2015-07-06
  • 录用日期:  2016-01-15
  • 刊出日期:  2016-04-01