Image/Signal Processing and Machine Vision in Security and Industrial Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 April 2022) | Viewed by 14970

Special Issue Editors


Guest Editor
School of Mechanical Engineering, Zhejiang University, Hangzhou, China
Interests: machine vision; pattern recognition; image processing

Guest Editor
The State Key Laboratory of Fluid Power and Mechatronic Systems, Zhejiang University, Hangzhou 310027, China
Interests: advanced control of robotic and mechatronic systems; nonlinear adaptive robust control; motion control; trajectory planning; telerobotics; hydraulic system; precision mechatronic system; soft actuator and robot; mobile manipulator; underwater robot; exoskeleton

Guest Editor
School of Artificial Intelligence, Changchun University of Science and Technology, Changchun 130022, China
Interests: machine vision; image processing; computer vision

Special Issue Information

Dear Colleagues,

Machine vision applies industrial image processing, using cameras to automatically inspect products or guide robots in real time. Machine vision and image processing technologies are widely employed in security screening and industrial inspection.

This Special Issue provides a timely opportunity for scientists, researchers, and engineers to discuss and summarize the latest signal/image processing and machine vision methods in security and industrial applications. We invite papers on topics that include, but are not limited to:

  • artificial intelligence and pattern recognition/analysis for human analysis, behavior understanding, and biometrics in security applications
  • intelligent transportation systems based on computer vision methods
  • robotics and intelligent systems
  • pose recognition of industrial products
  • non-destructive testing and evaluation based on signal/image processing methods
  • precision measurement and metrology
  • hidden defect detection and classification methods

Both theoretical and experimental studies are welcome, as are comprehensive review and survey papers.

Prof. Dr. Xinyue Zhao
Prof. Dr. Zheng Chen
Prof. Dr. Ming Fang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image and signal processing
  • machine vision
  • security screening
  • industrial inspection

Published Papers (7 papers)


Research

14 pages, 2170 KiB  
Article
An Effectively Finite-Tailed Updating for Multiple Object Tracking in Crowd Scenes
by Biaoyi Xu, Dong Liang, Ling Li, Rong Quan and Mingguang Zhang
Appl. Sci. 2022, 12(3), 1061; https://doi.org/10.3390/app12031061 - 20 Jan 2022
Cited by 4 | Viewed by 1452
Abstract
Multiple Object Tracking (MOT) focuses on tracking all the objects in a video. Most MOT solutions follow a tracking-by-detection or a joint detection-tracking paradigm, generating object trajectories by exploiting the correlations between detected objects in consecutive frames. However, according to our observations, considering only the correlations between objects in the current frame and objects in the previous frame leads to an exponential information decay over time, resulting in misidentification of objects, especially in scenes with dense crowds and occlusions. To address this problem, we propose an effective finite-tailed updating (FTU) strategy that generates the appearance template of an object in the current frame by exploiting its local temporal context in videos. To be specific, we model the appearance template of the object in the current frame on the appearance templates of the objects in multiple earlier frames and dynamically combine them to obtain a more effective representation. Extensive experiments show that our tracker outperforms state-of-the-art methods on the MOT Challenge benchmark, achieving 73.7% and 73.0% IDF1, and 46.1% and 45.0% MT, on the MOT16 and MOT17 datasets, which are 0.9% and 0.7% IDF1 higher, and 1.4% and 1.8% MT higher, than FairMOT, respectively.
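The dynamic combination of appearance templates over multiple earlier frames can be sketched in a few lines. The recency-weighted average and the decay constant below are hypothetical illustrations of template smoothing over a finite window, not the paper's exact FTU formulation:

```python
def update_template(history, decay=0.6):
    """Combine appearance embeddings (lists of floats) from the last K
    frames into one template, newest frame last. The recency weighting
    and decay value are illustrative assumptions, not the FTU rule."""
    k = len(history)
    weights = [decay ** (k - 1 - i) for i in range(k)]  # newer => larger weight
    total = sum(weights)
    dim = len(history[0])
    template = [
        sum(w * frame[d] for w, frame in zip(weights, history)) / total
        for d in range(dim)
    ]
    norm = sum(x * x for x in template) ** 0.5 or 1.0
    return [x / norm for x in template]  # unit-normalize the template
```

Weighting recent frames more heavily preserves the local temporal context the abstract describes, while the finite window bounds how far back information propagates.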

16 pages, 17945 KiB  
Article
Vision Transformer-Based Tailing Detection in Videos
by Jaewoo Lee, Sungjun Lee, Wonki Cho, Zahid Ali Siddiqui and Unsang Park
Appl. Sci. 2021, 11(24), 11591; https://doi.org/10.3390/app112411591 - 07 Dec 2021
Cited by 1 | Viewed by 2517
Abstract
Tailing is defined as an event where a suspicious person follows someone closely. We define the problem of tailing detection from videos as an anomaly detection problem, where the goal is to find abnormalities in the walking pattern of the pedestrians (victim and follower). We therefore propose a modified Time-Series Vision Transformer (TSViT), a method for anomaly detection in video, specifically for tailing detection with a small dataset. We introduce an effective way to train TSViT with a small dataset by regularizing the prediction model. To do so, we first encode the spatial information of the pedestrians into 2D patterns and then pass them as tokens to the TSViT. Through a series of experiments, we show that tailing detection on a small dataset using TSViT outperforms popular CNN-based architectures, which tend to overfit on a small dataset of time-series images. We also show that when using time-series images, the performance of CNN-based architectures gradually drops as the network depth is increased to raise capacity. On the other hand, decreasing the number of heads in the Vision Transformer architecture yields good performance on time-series images, and performance increases further as the input resolution of the images is increased. Experimental results demonstrate that TSViT performs better than the handcrafted rule-based method and the CNN-based method for tailing detection. TSViT can be used in many applications for video anomaly detection, even with a small dataset.
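As a rough illustration of encoding pedestrian spatial information into 2D patterns before tokenization, one could rasterize trajectories onto a grid. The function and grid size below are hypothetical and only gesture at the kind of encoding the abstract describes:

```python
def encode_trajectory(points, grid=16):
    """Rasterize a pedestrian trajectory (list of (x, y) in [0, 1)) into
    a grid x grid occupancy pattern. A hypothetical stand-in for the
    paper's encoding of spatial information into 2D time-series images."""
    img = [[0] * grid for _ in range(grid)]
    for x, y in points:
        col = min(int(x * grid), grid - 1)  # clamp to the last cell
        row = min(int(y * grid), grid - 1)
        img[row][col] = 1
    return img
```

Each such pattern would then be flattened or patched into tokens for the transformer, one per time window.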

10 pages, 1465 KiB  
Article
Steel Surface Defect Classification Based on Small Sample Learning
by Shiqing Wu, Shiyu Zhao, Qianqian Zhang, Long Chen and Chenrui Wu
Appl. Sci. 2021, 11(23), 11459; https://doi.org/10.3390/app112311459 - 03 Dec 2021
Cited by 4 | Viewed by 1773
Abstract
The classification of steel surface defects plays a very important role in analyzing their causes so that the manufacturing process can be improved and defects eliminated. However, defective samples are very scarce in actual production, so constructing a good classifier from very few samples is a challenge to be addressed. If the layer number of a model with proper depth is increased, the model accuracy decreases (not caused by overfitting), and both the training error and the test error become very high; this is called the degradation problem. In this paper, we propose to classify steel surface defects using feature extraction + feature transformation + nearest neighbors. To address the degradation problem caused by network deepening, three feature extraction networks (ResNet, MobileNet, and DenseNet) are designed and analyzed. Experimental results show that, with a small number of samples, the Dense block solves the degradation problem caused by network deepening better than the Residual block. Moreover, when DenseNet is used as the feature extraction network and a nearest neighbor classifier based on the Euclidean metric is used in the new feature space, the defect classification accuracy reaches 92.33% with only five labeled images of each category as the training set. This paper offers guidance for surface defect classification when the number of samples is small.
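The final classification step, nearest neighbor with a Euclidean metric in the transformed feature space, can be sketched as follows. The feature vectors are assumed to come from a DenseNet backbone not shown here, and the helper itself is a hypothetical illustration:

```python
def nearest_neighbor_classify(query, support_feats, support_labels):
    """Classify a query feature vector by Euclidean distance to labeled
    support features (e.g. five embeddings per defect class extracted
    by a DenseNet backbone upstream). Vectors are plain float lists."""
    def euclidean(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    dists = [euclidean(query, s) for s in support_feats]
    return support_labels[dists.index(min(dists))]  # label of the closest support
```

With only five support images per class, a parameter-free classifier like this avoids fitting a decision head on scarce data.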

16 pages, 16141 KiB  
Article
A Hard Example Mining Approach for Concealed Multi-Object Detection of Active Terahertz Image
by Ling Li, Fei Xue, Dong Liang and Xiaofei Chen
Appl. Sci. 2021, 11(23), 11241; https://doi.org/10.3390/app112311241 - 26 Nov 2021
Cited by 6 | Viewed by 1853
Abstract
Concealed object detection in terahertz imaging is an urgent need for public security and counter-terrorism. So far, there is no public terahertz imaging dataset for the evaluation of object detection algorithms. This paper provides a public dataset for evaluating multi-object detection algorithms in active terahertz imaging. Due to high sample similarity and poor imaging quality, object detection on this dataset is much more difficult than on the public object detection datasets commonly used in the computer vision field. Since the traditional hard example mining approach is designed for two-stage detectors and cannot be directly applied to one-stage detectors, this paper designs an image-based Hard Example Mining (HEM) scheme based on RetinaNet. Several state-of-the-art detectors, including YOLOv3, YOLOv4, FRCN-OHEM, and RetinaNet, are evaluated on this dataset. Experimental results show that RetinaNet achieves the best mAP and that HEM further enhances the model's performance. The parameters affecting the detection metrics of individual images are summarized and analyzed in the experiments.
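Image-level hard example mining of the kind described can be sketched generically: rank training images by their loss from the previous epoch and oversample the hardest fraction. The function below is a hypothetical illustration of that idea, not the paper's RetinaNet-based scheme:

```python
def mine_hard_images(image_losses, frac=0.2):
    """Given per-image training losses from the previous epoch (a dict
    mapping image id -> loss), return the ids of the hardest fraction so
    they can be oversampled next epoch. Generic sketch; the paper's
    image-based HEM differs in detail."""
    ranked = sorted(image_losses.items(), key=lambda kv: kv[1], reverse=True)
    n_hard = max(1, int(len(ranked) * frac))    # always keep at least one image
    return [img_id for img_id, _ in ranked[:n_hard]]
```

Operating on whole images rather than region proposals is what makes the scheme compatible with one-stage detectors such as RetinaNet, which have no proposal stage to mine from.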

13 pages, 659 KiB  
Article
A Switched Approach to Image-Based Stabilization for Nonholonomic Mobile Robots with Field-of-View Constraints
by Yao Huang
Appl. Sci. 2021, 11(22), 10895; https://doi.org/10.3390/app112210895 - 18 Nov 2021
Cited by 2 | Viewed by 1255
Abstract
This paper presents a switched visual servoing strategy for maneuvering a nonholonomic mobile robot to the desired configuration while keeping the tracked image points within the camera's field of view. Firstly, a pure backward motion and a pure rotational motion are applied to the mobile robot in succession, so that the principal point and the scaled focal length in the x-direction of the camera are identified through visual feedback from a fixed onboard camera. Secondly, the identified parameters are used to build the system model in a polar-coordinate representation. An adaptive non-smooth controller is then designed to maneuver the mobile robot to the desired configuration under the nonholonomic constraint, and a switched strategy consisting of two image-based controllers is utilized to keep the features in the field of view. Simulation results are presented to validate the effectiveness of the proposed approach.

12 pages, 2355 KiB  
Article
A Novel Metric-Learning-Based Method for Multi-Instance Textureless Objects’ 6D Pose Estimation
by Chenrui Wu, Long Chen and Shiqing Wu
Appl. Sci. 2021, 11(22), 10531; https://doi.org/10.3390/app112210531 - 09 Nov 2021
Cited by 2 | Viewed by 1494
Abstract
6D pose estimation of objects is essential for intelligent manufacturing. Current methods mainly emphasize single-object pose estimation, which limits their use in real-world applications. In this paper, we propose a multi-instance framework of 6D pose estimation for textureless objects in an industrial environment, using a two-stage pipeline. In the detection stage, EfficientDet is used to detect target instances in the image. In the pose estimation stage, the cropped images are first interpolated to a fixed size and then fed into a pseudo-siamese graph matching network to calculate dense point correspondences. A modified circle loss is defined to measure the differences between positive and negative correspondences. Experiments on the antenna support demonstrate the effectiveness and advantages of the proposed method.

15 pages, 6071 KiB  
Article
Application of Lightweight Convolutional Neural Network for Damage Detection of Conveyor Belt
by Mengchao Zhang, Yuan Zhang, Manshan Zhou, Kai Jiang, Hao Shi, Yan Yu and Nini Hao
Appl. Sci. 2021, 11(16), 7282; https://doi.org/10.3390/app11167282 - 08 Aug 2021
Cited by 17 | Viewed by 3298
Abstract
To address the problem that mining conveyor belts are easily damaged under severe working conditions, this paper proposes a deep learning-based conveyor belt damage detection method. To further explore the applicability of lightweight CNNs to conveyor belt damage detection, the paper integrates MobileNet into the YOLOv4 network to obtain a lightweight YOLOv4, and evaluates it on an existing conveyor belt damage dataset containing 3000 images. The results show that the lightweight network can effectively detect damage to the conveyor belt, with a fastest test speed of 70.26 FPS and a highest test accuracy of 93.22%. Compared with the original YOLOv4, accuracy increased by 3.5% and speed by 188%. Comparison with other existing detection methods verifies the model's strong generalization ability, providing technical support and an empirical reference for the visual monitoring and intelligent development of belt conveyors.
