
Intelligent Vision Technology/Sensors for Industrial Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (22 December 2023) | Viewed by 5575

Special Issue Editors

  • Institute of Automation, Chinese Academy of Sciences, Beijing, China
    Interests: high-speed vision; 3D vision; intelligent sensors
  • Institute of Automation, Chinese Academy of Sciences, Beijing, China
    Interests: deep learning; electrode implantation robots for brain-computer interfaces; industrial vision detection
  • Institute of Automation, Chinese Academy of Sciences, Beijing, China
    Interests: mechatronics and robotics; measurement; deep learning

Special Issue Information

Dear Colleagues,

As an important branch of artificial intelligence, intelligent vision technology is widely used in industrial settings. Images and video captured by visual sensors increasingly support industrial tasks such as high-speed measurement, quality inspection, part grasping, and assembly. Applying computer vision, deep learning, and intelligent visual sensing to these tasks has become both necessary and urgent for improving their efficiency and accuracy. This Special Issue aims to provide a platform for exchanging research results, technical trends, and practical experience related to intelligent vision technology and sensors for industrial applications. Submissions may cover measurement instruments, surface defect inspection, applications in mechatronics and robotics, sensors and actuators, and applications of artificial intelligence in industrial electronic systems.

Topics of Interest

  1. Measurement science;
  2. Surface defect inspection;
  3. Applications in mechatronics and robotics;
  4. Applications of artificial intelligence in industrial electronic systems.

Dr. Qingyi Gu
Dr. Xian Tao
Dr. Hu Su
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensors
  • inspection and measurement
  • deep learning and computer vision
  • industrial electronic systems

Published Papers (3 papers)


Research

15 pages, 2581 KiB  
Article
Skill-Learning Method of Dual Peg-in-Hole Compliance Assembly for Micro-Device
by Yuting Wu, Juan Zhang, Yi Yang, Wenrong Wu and Kai Du
Sensors 2023, 23(20), 8579; https://doi.org/10.3390/s23208579 - 19 Oct 2023
Viewed by 665
Abstract
For the dual peg-in-hole compliance assembly task of upper and lower double-hole structural micro-devices, a skill-learning method is proposed. This method combines offline training in a simulation space with online training in a realistic space. In this paper, a dual peg-in-hole model is built according to the results of a force analysis, and contact-point searching methods are provided for calculating the contact force. A skill-learning framework is then built based on deep reinforcement learning. Both expert actions and incremental actions are used in training, the reward system considers both efficiency and safety, and a dynamic exploration method is provided to improve training efficiency. In addition, based on experimental data, an online training method is used to continuously optimize the skill-learning model, reducing the error caused by the deviation of the offline training data from reality. The final experiments demonstrate that the method can effectively reduce the contact force during assembly, improve efficiency, and reduce the impact of changes in position and orientation. Full article
(This article belongs to the Special Issue Intelligent Vision Technology/Sensors for Industrial Applications)
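The abstract describes a reward that balances efficiency against safety under a contact-force limit. The paper's exact reward terms and weights are not given here; the following is a minimal illustrative sketch, with all function names, weights, and thresholds assumed:

```python
# Hypothetical reward shaping for a compliant peg-in-hole policy.
# Efficiency: reward insertion progress and penalize elapsed steps.
# Safety: penalize contact force, with a large penalty past a limit.
# All weights and limits below are illustrative assumptions.

def assembly_reward(insertion_depth_mm, target_depth_mm, contact_force_n,
                    force_limit_n=5.0, w_progress=1.0, w_force=0.2,
                    step_penalty=0.01):
    """Return a scalar reward for one control step."""
    progress = w_progress * (insertion_depth_mm / target_depth_mm)
    force_cost = w_force * contact_force_n
    if contact_force_n > force_limit_n:   # unsafe contact: strong penalty
        force_cost += 10.0
    return progress - force_cost - step_penalty
```

Under this shaping, deeper insertion at the same force scores higher, and exceeding the force limit dominates any progress gain, which is one plausible way to encode the efficiency/safety trade-off the abstract mentions.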

14 pages, 3696 KiB  
Article
Micro-Vision Based High-Precision Space Assembly Approach for Trans-Scale Micro-Device: The CFTA Example
by Juan Zhang, Xi Dai, Wenrong Wu and Kai Du
Sensors 2023, 23(1), 450; https://doi.org/10.3390/s23010450 - 01 Jan 2023
Cited by 1 | Viewed by 1305
Abstract
For the assembly of trans-scale micro-device capsule fill tube assemblies (CFTAs) for inertial confinement fusion (ICF) targets, a high-precision space assembly approach based on micro-vision is proposed in this paper. The approach consists of three modules: (i) a posture alignment module based on a multi-vision monitoring model, designed to align two trans-scale micro-parts in 5 DOF, where one micro-part measures tens of microns and the other hundreds of microns; (ii) an insertion depth control module based on a proposed local deformation detection method to control micro-part insertion depth; (iii) a glue mass control module based on simulation research, designed to control the glue mass quantitatively and bond the micro-parts together. A series of experiments was conducted, and the results reveal that the attitude alignment control error is less than ±0.3°, the position alignment control error is less than ±5 μm, and the insertion depth control error is less than ±5 μm. The deviation of the glue spot diameter is controlled to less than 15 μm. A CFTA was assembled with the proposed approach; the position error in 3D space measured by computerized tomography (CT) is less than 5 μm, and the glue spot diameter at the joint is 56 μm. Treating the glue spot as a cone whose height is half its diameter, the glue mass computed by the cone volume formula is about 23 pL. Full article
(This article belongs to the Special Issue Intelligent Vision Technology/Sensors for Industrial Applications)
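The abstract's glue mass figure follows directly from the cone volume formula V = πr²h/3 with the stated 56 μm spot diameter and a height of half the diameter. A quick sketch of the arithmetic (unit conversion: 1 μm³ = 1 fL, so 1000 μm³ = 1 pL):

```python
import math

d_um = 56.0             # glue spot diameter from the abstract, in microns
r_um = d_um / 2.0       # cone base radius
h_um = d_um / 2.0       # cone height assumed to be half the diameter

v_um3 = math.pi * r_um**2 * h_um / 3.0   # cone volume in cubic microns
v_pl = v_um3 * 1e-3                      # 1000 um^3 = 1 pL

print(round(v_pl, 1))   # 23.0 -> matches the ~23 pL stated in the abstract
```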

16 pages, 7982 KiB  
Article
SimpleTrack: Rethinking and Improving the JDE Approach for Multi-Object Tracking
by Jiaxin Li, Yan Ding, Hua-Liang Wei, Yutong Zhang and Wenxiang Lin
Sensors 2022, 22(15), 5863; https://doi.org/10.3390/s22155863 - 05 Aug 2022
Cited by 24 | Viewed by 2713
Abstract
Joint detection and embedding (JDE) methods usually fuse target motion information and appearance information into the data association matrix, which can fail when a target is briefly lost or occluded in multi-object tracking (MOT). In this paper, we aim to solve this problem by proposing a novel association matrix, the Embedding and GIoU (EG) matrix, which combines the embedding cosine distance and the GIoU distance of objects. To improve the performance of data association, we develop a simple, effective, bottom-up fusion tracker for re-identification features, named SimpleTrack, and propose a new tracking strategy that mitigates the loss of detected targets. To show the effectiveness of the proposed method, experiments are carried out using five different state-of-the-art JDE-based methods. The results show that simply replacing the original association matrix with our EG matrix yields significant improvements in the IDF1, HOTA, and IDsw metrics and increases the tracking speed of these methods by around 20%. In addition, our SimpleTrack has the best data association capability among the JDE-based methods, e.g., 61.6 HOTA and 76.3 IDF1 on the MOT17 test set at 23 FPS on a single GTX2080Ti GPU. Full article
(This article belongs to the Special Issue Intelligent Vision Technology/Sensors for Industrial Applications)
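The EG matrix blends an appearance term (embedding cosine distance) with a geometric term (GIoU distance). The paper's exact fusion rule is not reproduced here; the sketch below assumes a simple convex blend with an illustrative weight `alpha`, and the helper names are hypothetical:

```python
import numpy as np

def giou(box_a, box_b):
    """Generalized IoU for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    iou = inter / union
    # smallest enclosing box
    c_area = ((max(ax2, bx2) - min(ax1, bx1))
              * (max(ay2, by2) - min(ay1, by1)))
    return iou - (c_area - union) / c_area

def eg_cost(track_emb, det_emb, track_box, det_box, alpha=0.5):
    """One entry of an EG-style cost matrix: a convex blend of the
    embedding cosine distance and the GIoU distance (alpha is assumed)."""
    cos_sim = np.dot(track_emb, det_emb) / (
        np.linalg.norm(track_emb) * np.linalg.norm(det_emb))
    cos_dist = 1.0 - cos_sim                     # 0 for identical embeddings
    giou_dist = 1.0 - giou(track_box, det_box)   # 0 for identical boxes
    return alpha * cos_dist + (1.0 - alpha) * giou_dist
```

A matching track/detection pair (same box, same embedding) gets cost 0; because GIoU stays informative for non-overlapping boxes, the geometric term still ranks candidates when a briefly lost target reappears nearby, which is the failure mode the abstract highlights.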
