Vision-based vessel detection for vessel-bridge collision warnings under complex scenes

Journal of Southeast University (English Edition) [ISSN: 1003-7985/CN: 32-1325/N]

Volume:
40
Issue:
1
Page:
33-40
Research Field:
Traffic and Transportation Engineering
Publishing date:
2024-03-20

Info

Title:
Vision-based vessel detection for vessel-bridge collision warnings under complex scenes
Author(s):
Liao Ruixuan, Wu Tong, Zhang Yiming, Mao Jianxiao, Wang Hao
Key Laboratory of Concrete and Prestressed Concrete Structures of Ministry of Education, Southeast University, Nanjing 211189, China
Keywords:
vessel detection; vessel-bridge collision; you-only-look-once version 5 (YOLOv5); squeeze-excitation attention mechanism; data augmentation
PACS:
U447;U69
DOI:
10.3969/j.issn.1003-7985.2024.01.004
Abstract:
To enable accurate vessel recognition for bridge collision avoidance and early warning, an image dataset of vessels in bridge channels is established using cameras and data augmentation. The dataset covers complex scenarios such as long distances, multiple targets, and low visibility. The you-only-look-once version 5 (YOLOv5) model is then employed as the base detector, and several modifications are applied to its network structure: the C3 modules in the backbone network are replaced with C2f modules, the squeeze-excitation attention mechanism is integrated into the feature fusion network, and the prior anchors for the dataset are optimized using the K-means++ clustering algorithm. Finally, the modified model is trained and validated using PyTorch as the deep learning framework. Results demonstrate that the mean average precision for crucial vessels reaches 99.4% with the modified YOLOv5 model, an 11.1% improvement over the original YOLOv5 model, while the inference speed reaches 102 frames/s. The established model thus provides a reliable and efficient foundation for vessel-bridge collision warnings in complex navigable scenes.
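
Illustrative sketches:
A minimal PyTorch sketch of a squeeze-excitation attention block of the kind the abstract describes as being integrated into the feature fusion network (PyTorch is the framework the paper reports using). This is not the authors' implementation; the class name, channel reduction ratio, and placement in the network are assumptions for illustration only.

    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        """Squeeze-and-excitation channel attention (Hu et al., CVPR 2018)."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average pooling
            self.fc = nn.Sequential(             # excitation: two-layer bottleneck
                nn.Linear(channels, channels // reduction, bias=False),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels, bias=False),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, _, _ = x.shape
            w = self.pool(x).view(b, c)          # (B, C) channel descriptors
            w = self.fc(w).view(b, c, 1, 1)      # per-channel weights in (0, 1)
            return x * w                         # rescale feature maps channel-wise

The prior-anchor optimization can be sketched as K-means++ clustering of ground-truth box sizes; the paper's exact procedure is not reproduced here, and YOLO-style pipelines often use an IoU-based distance rather than the Euclidean distance assumed in this scikit-learn sketch.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_anchors(wh: np.ndarray, n_anchors: int = 9) -> np.ndarray:
        """Cluster (width, height) pairs of labeled boxes into anchor sizes."""
        km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=0)
        km.fit(wh)                                        # wh: (N, 2) box sizes in pixels
        anchors = km.cluster_centers_
        return anchors[np.argsort(anchors.prod(axis=1))]  # sort by area, small to large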

References:

[1] Yang Y D, Wang X F, Pan J J. Improved CNN and its application in ship identification[J]. Computer Engineering and Design, 2018, 39(10): 3228-3233. DOI: 10.16208/j.issn1000-7024.2018.10.039. (in Chinese)
[2] Vagale A, Oucheikh R, Bye R T, et al. Path planning and collision avoidance for autonomous surface vehicles Ⅰ: A review[J]. Journal of Marine Science and Technology, 2021, 26(4): 1292-1306. DOI: 10.1007/s00773-020-00787-6.
[3] Zhang B, Xu Z F, Zhang J, et al. A warning framework for avoiding vessel-bridge and vessel-vessel collisions based on generative adversarial and dual-task networks[J]. Computer-Aided Civil and Infrastructure Engineering, 2022, 37(5): 629-649. DOI: 10.1111/mice.12757.
[4] Cui Z Y, Wang X Y, Liu N Y, et al. Ship detection in large-scale SAR images via spatial shuffle-group enhance attention[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(1): 379-391. DOI: 10.1109/TGRS.2020.2997200.
[5] Ren S Q, He K M, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149. DOI: 10.1109/TPAMI.2016.2577031.
[6] Liu W, Anguelov D, Erhan D, et al. SSD: Single shot multibox detector[C]//European Conference on Computer Vision. Berlin, Germany, 2016: 21-37. DOI: 10.1007/978-3-319-46448-0_2.
[7] Redmon J, Divvala S, Girshick R, et al. You only look once: Unified, real-time object detection[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA, 2016: 779-788. DOI: 10.1109/CVPR.2016.91.
[8] Shao Z F, Wu W J, Wang Z Y, et al. SeaShips: A large-scale precisely annotated dataset for ship detection[J]. IEEE Transactions on Multimedia, 2018, 20(10): 2593-2604. DOI: 10.1109/TMM.2018.2865686.
[9] Li H, Deng L B, Yang C, et al. Enhanced YOLOv3 tiny network for real-time ship detection from visual image[J]. IEEE Access, 2021, 9: 16692-16706. DOI: 10.1109/ACCESS.2021.3053956.
[10] Lee W J, Roh M I, Lee H W, et al. Detection and tracking for the awareness of surroundings of a ship based on deep learning[J]. Journal of Computational Design and Engineering, 2021, 8(5): 1407-1430. DOI: 10.1093/jcde/qwab053.
[11] Ni Y H, Mao J X, Wang H, et al. Toward high-precision crack detection in concrete bridges using deep learning[J]. Journal of Performance of Constructed Facilities, 2023, 37(3): 04023017. DOI: 10.1061/jpcfev.cfeng-4275.
[12] Zhou J C, Jiang P, Zou A R, et al. Ship target detection algorithm based on improved YOLOv5[J]. Journal of Marine Science and Engineering, 2021, 9(8): 908-922. DOI: 10.3390/jmse9080908.
[13] Hu J, Shen L, Sun G. Squeeze-and-excitation networks[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA, 2018: 7132-7141. DOI: 10.1109/CVPR.2018.00745.
[14] Xia Y, Chen L M, Wang J J, et al. Single shot multibox detector based vessel detection method and application for active anti-collision monitoring[J]. Journal of Hunan University (Natural Science), 2020, 47(3): 97-105. DOI: 10.16339/j.cnki.hdxbzkb.2020.03.012. (in Chinese)
[15] Ni Y H, Lu H, Ji C, et al. Comparative analysis on bridge corrosion damage detection based on semantic segmentation[J]. Journal of Southeast University (Natural Science Edition), 2023, 53(2): 201-209. DOI: 10.3969/j.issn.1001-0505.2023.02.003. (in Chinese)

Memo

Memo:
Biographies: Liao Ruixuan (1999—), male, Ph.D. candidate; Wang Hao (corresponding author), male, doctor, professor, wanghao1980@seu.edu.cn.
Foundation items: The National Natural Science Foundation of China (Nos. 51978155, 52208481).
Citation: Liao Ruixuan, Wu Tong, Zhang Yiming, et al. Vision-based vessel detection for vessel-bridge collision warnings under complex scenes[J]. Journal of Southeast University (English Edition), 2024, 40(1): 33-40. DOI: 10.3969/j.issn.1003-7985.2024.01.004.
Last Update: 2024-03-20