Applications of Computer Vision in Automation and Robotics

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 July 2020) | Viewed by 40720

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Dr. Hab. Krzysztof Okarma
Guest Editor
Faculty of Electrical Engineering, West Pomeranian University of Technology, 70-313 Szczecin, Poland
Interests: applied computer science, particularly image processing and analysis; computer vision and machine vision in automation and robotics; image quality assessment; video and signal processing applications in intelligent transportation systems

Special Issue Information

Dear Colleagues,

Computer vision applications have become one of the most rapidly developing areas in automation and robotics, as well as in some other related areas of science and technology, e.g., mechatronics, intelligent transport and logistics, biomedical engineering, and even the food industry. Nevertheless, automation and robotics seem to be among the leading areas of practical application for recently developed artificial intelligence solutions, particularly computer and machine vision algorithms. One of the most relevant issues is the safety of human–computer and human–machine interaction in robotics, which requires the “explainability” of algorithms and often excludes the potential application of some deep learning based solutions, regardless of their performance in pattern recognition applications.

Considering the limited amount of training data typical for robotics, important challenges are related to unsupervised learning, as well as to no-reference image and video quality assessment methods, which may prevent the use of some distorted video frames in image analysis applied for further control of, e.g., robot motion. The use of image descriptors and features calculated for natural images captured by cameras in robotics, in both “out-hand” and “in-hand” configurations, may cause more problems than for artificial images, typically used for the verification of general-purpose computer vision algorithms, leading to a so-called “reality gap”.

This Special Issue on “Applications of Computer Vision in Automation and Robotics” will bring together the research communities interested in computer and machine vision from various departments and universities, focusing on automation and robotics as well as on computer science.

Topics of interest for this Special Issue include but are not limited to the following:

  • Video simultaneous localization and mapping (VSLAM) solutions
  • Image-based navigation of unmanned aerial vehicles (UAVs) and other mobile robots
  • Texture analysis and shape recognition
  • Novel image descriptors useful for image-based classification of objects
  • Feature extraction and image registration
  • Binarization algorithms and binary image analysis
  • Fast algorithms useful for embedded solutions, e.g., based on the Monte Carlo method
  • No-reference image quality assessment
  • Natural image analysis
  • Applications of computer vision in autonomous vehicles.

Dr. Hab. Krzysztof Okarma
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image analysis
  • machine vision
  • video analysis
  • visual inspection and diagnostics
  • industrial and robotic vision systems

Published Papers (10 papers)


Editorial


3 pages, 653 KiB  
Editorial
Applications of Computer Vision in Automation and Robotics
by Krzysztof Okarma
Appl. Sci. 2020, 10(19), 6783; https://doi.org/10.3390/app10196783 - 28 Sep 2020
Cited by 11 | Viewed by 4995
(This article belongs to the Special Issue Applications of Computer Vision in Automation and Robotics)

Research


19 pages, 3702 KiB  
Article
Quality Assessment of 3D Printed Surfaces Using Combined Metrics Based on Mutual Structural Similarity Approach Correlated with Subjective Aesthetic Evaluation
by Krzysztof Okarma, Jarosław Fastowicz, Piotr Lech and Vladimir Lukin
Appl. Sci. 2020, 10(18), 6248; https://doi.org/10.3390/app10186248 - 9 Sep 2020
Cited by 13 | Viewed by 2526
Abstract
Quality assessment of 3D printed surfaces is one of the crucial issues related to fast prototyping and the manufacturing of individual parts and objects using fused deposition modeling, especially in small series production. As some corrections of minor defects may be conducted during the printing process or just after manufacturing, an automatic quality assessment of object surfaces is highly desirable, preferably well correlated with subjective quality perception and considering aesthetic aspects. On the other hand, the presence of larger and denser distortions may indicate a reduced mechanical strength. In such cases, the manufacturing process should be interrupted to save time, energy, and filament. This paper focuses on the possibility of using general-purpose full-reference image quality assessment methods for the quality assessment of 3D printed surfaces. As the direct application of an individual (elementary) metric does not provide high correlation with the subjective perception of surface quality, modifications of similarity-based methods are proposed that utilize the calculation of the average mutual similarity, making it possible to use full-reference metrics without perfect-quality reference images. Combining individual metrics leads to a significant increase of correlation with subjective scores calculated for a specially prepared dataset. Full article
(This article belongs to the Special Issue Applications of Computer Vision in Automation and Robotics)
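
As a rough illustration of the average mutual similarity idea described in the abstract, the sketch below scores a single grayscale photograph of a printed surface by averaging SSIM over all pairs of non-overlapping patches; the patch size, the use of SSIM alone, and the 8-bit data range are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch: "no-reference" use of a full-reference metric via average mutual similarity.
# Assumes an 8-bit grayscale photo of the printed surface; 64-pixel patches are illustrative.
import numpy as np
from skimage.metrics import structural_similarity as ssim
from skimage.util import view_as_blocks

def average_mutual_ssim(surface: np.ndarray, patch: int = 64) -> float:
    """Average SSIM over all pairs of non-overlapping patches of one surface image.

    A regular, defect-free surface yields mutually similar patches (high score);
    visible printing distortions lower the mutual similarity.
    """
    h, w = surface.shape
    surface = surface[:h - h % patch, :w - w % patch]        # crop to a multiple of the patch size
    blocks = view_as_blocks(surface, (patch, patch)).reshape(-1, patch, patch)
    scores = [
        ssim(blocks[i], blocks[j], data_range=255)
        for i in range(len(blocks))
        for j in range(i + 1, len(blocks))
    ]
    return float(np.mean(scores))
```

Elementary scores obtained this way for several metrics can then be combined, e.g., as a weighted product optimized against the subjective scores, as the paper proposes.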

14 pages, 1419 KiB  
Article
Vision-Based Sorting Systems for Transparent Plastic Granulate
by Tadej Peršak, Branka Viltužnik, Jernej Hernavs and Simon Klančnik
Appl. Sci. 2020, 10(12), 4269; https://doi.org/10.3390/app10124269 - 22 Jun 2020
Cited by 19 | Viewed by 5639
Abstract
Granulate material sorting is a mature and well-developed topic, due to its presence in various fields, such as the recycling, mining, and food industries. However, sorting can be improved, and artificial intelligence has been used for this purpose. This paper presents the development of an efficient sorting system for transparent polycarbonate plastic granulate, based on machine vision and air separation technology. The developed belt-type system is composed of a transparent conveyor with an integrated vision camera to detect defects in passing granulates. The vision system incorporates an industrial camera and backlight illumination. Individual particle localization and classification with the k-Nearest Neighbors algorithm were performed to determine the position and condition of each particle. Particles with defects are further separated pneumatically as they fall from the conveyor belt. Furthermore, an experiment was conducted whereby the combined performance of our sorting machine and classification method was evaluated. The results show that the developed system exhibits promising separation capabilities, despite the numerous challenges accompanying the transparent granulate material. Full article
(This article belongs to the Special Issue Applications of Computer Vision in Automation and Robotics)
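
A minimal sketch of the localize-then-classify step, assuming the particles have already been segmented from the backlit image into a binary mask; the hand-crafted features and the number of neighbors are illustrative choices, not the exact configuration used in the paper.

```python
# Sketch: classify segmented granulate particles as good vs. defective with k-NN.
# The binary mask, the feature set and k = 5 are illustrative assumptions.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def particle_features(mask: np.ndarray, gray: np.ndarray) -> np.ndarray:
    """Localize particles as contours of a backlit image and compute simple per-particle features."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    feats = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        roi = gray[y:y + h, x:x + w]
        feats.append([cv2.contourArea(c), w / max(h, 1), float(roi.mean()), float(roi.std())])
    return np.array(feats)

# Training data would come from manually labelled particles (0 = good, 1 = defective).
knn = KNeighborsClassifier(n_neighbors=5)
# knn.fit(particle_features(train_mask, train_gray), train_labels)
# predictions = knn.predict(particle_features(mask, gray))   # drives the pneumatic ejection
```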

17 pages, 1402 KiB  
Article
Histogram-Based Descriptor Subset Selection for Visual Recognition of Industrial Parts
by Ibon Merino, Jon Azpiazu, Anthony Remazeilles and Basilio Sierra
Appl. Sci. 2020, 10(11), 3701; https://doi.org/10.3390/app10113701 - 27 May 2020
Cited by 7 | Viewed by 2302
Abstract
This article deals with the 2D image-based recognition of industrial parts. Methods based on histograms are well known and widely used, but it is hard to find the best combination of histograms, the most distinctive one for each situation, without a high level of user expertise. We propose a descriptor subset selection technique that automatically selects the most appropriate descriptor combination and that outperforms approaches involving single descriptors. Both backward and forward selection mechanisms have been considered. Furthermore, to recognize the industrial parts, a supervised classification is used with the global descriptors as predictors. Several classification approaches are compared. For our application, the best results are obtained with the Support Vector Machine using a combination of descriptors, increasing the F1 score by 0.031 with respect to the best descriptor alone. Full article
(This article belongs to the Special Issue Applications of Computer Vision in Automation and Robotics)
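
The forward variant of such a descriptor subset selection can be sketched as a greedy wrapper around a classifier; the descriptor dictionary, the RBF SVM, and the macro-F1 criterion below are assumptions made for this illustration.

```python
# Sketch: greedy forward selection of histogram-based global descriptors for an SVM.
# `descriptors` maps a descriptor name to its precomputed (n_images x dim) feature matrix.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def forward_select(descriptors: dict[str, np.ndarray], y: np.ndarray) -> list[str]:
    selected: list[str] = []
    best_f1 = -1.0
    while True:
        candidates = []
        for name in descriptors:
            if name in selected:
                continue
            X = np.hstack([descriptors[d] for d in selected + [name]])
            clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
            f1 = cross_val_score(clf, X, y, cv=5, scoring="f1_macro").mean()
            candidates.append((f1, name))
        if not candidates:
            return selected
        f1, name = max(candidates)
        if f1 <= best_f1:                 # stop once adding any descriptor no longer helps
            return selected
        selected.append(name)
        best_f1 = f1
```

The backward mechanism mentioned in the abstract works symmetrically, starting from all descriptors and removing the least useful one at each step.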

19 pages, 8077 KiB  
Article
Method for Volume of Irregular Shape Pellets Estimation Using 2D Imaging Measurement
by Andrius Laucka, Darius Andriukaitis, Algimantas Valinevicius, Dangirutis Navikas, Mindaugas Zilys, Vytautas Markevicius, Dardan Klimenta, Roman Sotner and Jan Jerabek
Appl. Sci. 2020, 10(8), 2650; https://doi.org/10.3390/app10082650 - 11 Apr 2020
Cited by 8 | Viewed by 4561
Abstract
A growing population and a decreasing amount of cultivated land drive the increase in fertilizer demand. With the advancements of computerized equipment, more complex methods can be used for solving complex mathematical problems. In the fertilizer industry, the granulometric composition of products matters as much as their chemical composition. The shape and size of pellets determine their distribution over cultivated land areas: the effective distance of field spreading is directly related to the size and shape parameters of a pellet. Therefore, the monitoring of production in production lines is essential. The standard direct methods of monitoring and controlling granulometric composition require too much time and human resources. These factors can be eliminated by using imaging measurement methods, which have a variety of benefits but require additional research in order to determine and assure the compliance of real-time results with the results of the control equipment. One of the fastest and most flexible methods, providing the largest amount of data, is the processing and analysis of digital images. However, 2D images provide only two dimensions of a pellet, length and width, which raises the question of their suitability for the evaluation of granulometric composition. This study proposes a method for evaluating an irregular pellet. Experimental research showed a discrepancy of less than 2% compared to the real volume of a pellet. Full article
(This article belongs to the Special Issue Applications of Computer Vision in Automation and Robotics)
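
The paper derives its own model for irregular pellets; the sketch below only illustrates the general idea of turning the two dimensions visible in a 2D image into a volume estimate, using a simple ellipsoid-of-revolution assumption.

```python
# Sketch: turn the length/width measured in a 2D image into a pellet volume estimate.
# The ellipsoid-of-revolution approximation is a simplifying assumption, not the paper's model.
import cv2
import numpy as np

def pellet_volumes(mask: np.ndarray, mm_per_px: float) -> list[float]:
    """Measure each pellet with a rotated bounding box and approximate it as an ellipsoid."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    volumes = []
    for c in contours:
        (_, _), (w, h), _ = cv2.minAreaRect(c)            # rotated box: side lengths in pixels
        length = max(w, h) * mm_per_px
        width = min(w, h) * mm_per_px
        a, b = length / 2.0, width / 2.0                  # semi-axes; depth assumed equal to width
        volumes.append(4.0 / 3.0 * np.pi * a * b * b)     # ellipsoid of revolution, in mm^3
    return volumes
```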

20 pages, 867 KiB  
Article
Semantic Component Association within Object Classes Based on Convex Polyhedrons
by Petra Đurović, Ivan Vidović and Robert Cupec
Appl. Sci. 2020, 10(8), 2641; https://doi.org/10.3390/app10082641 - 11 Apr 2020
Cited by 3 | Viewed by 2006
Abstract
Most objects are composed of semantically distinctive parts that are more or less geometrically distinctive as well. Points on the object relevant for a certain robot operation are usually determined by various physical properties of the object, such as its dimensions or weight distribution, and by the purpose of object parts. A robot operation defined for a particular part of a representative object can be transferred and adapted to other instances of the same object class by detecting the corresponding components. In this paper, a method for semantic association of the object’s components within the object class is proposed. It is suitable for real-time robotic tasks and requires only a few previously annotated representative models. The proposed approach is based on the component association graph and a novel descriptor that describes the geometrical arrangement of the components. The method is experimentally evaluated on a challenging benchmark dataset. Full article
(This article belongs to the Special Issue Applications of Computer Vision in Automation and Robotics)
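
A generic sketch of associating the components of two object instances by descriptor distance with a one-to-one assignment; the per-component descriptors are placeholders, whereas the paper builds a component association graph and a dedicated descriptor of the components' geometrical arrangement.

```python
# Sketch: associate components of two object instances by descriptor distance with a
# one-to-one assignment. The rows of desc_a / desc_b are per-component descriptors and
# stand in for the geometrical-arrangement descriptor proposed in the paper.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def associate_components(desc_a: np.ndarray, desc_b: np.ndarray) -> list[tuple[int, int]]:
    """Return index pairs (component of A, component of B) with minimum total descriptor cost."""
    cost = cdist(desc_a, desc_b)                  # pairwise descriptor distances
    rows, cols = linear_sum_assignment(cost)      # Hungarian-style minimum-cost matching
    return list(zip(rows.tolist(), cols.tolist()))
```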

24 pages, 1969 KiB  
Article
Data-Efficient Domain Adaptation for Semantic Segmentation of Aerial Imagery Using Generative Adversarial Networks
by Bilel Benjdira, Adel Ammar, Anis Koubaa and Kais Ouni
Appl. Sci. 2020, 10(3), 1092; https://doi.org/10.3390/app10031092 - 6 Feb 2020
Cited by 32 | Viewed by 3860
Abstract
Despite the significant advances noted in the semantic segmentation of aerial imagery, a considerable limitation is blocking its adoption in real cases: if we test a segmentation model on a new area that is not included in its initial training set, accuracy decreases remarkably. This is caused by the domain shift between the new target domain and the source domain used to train the model. In this paper, we address this challenge and propose a new algorithm that uses a Generative Adversarial Network (GAN) architecture to minimize the domain shift and increase the ability of the model to work on new target domains. The proposed architecture contains two GAN networks. The first GAN network converts the chosen image from the target domain into a semantic label. The second GAN network converts this generated semantic label into an image that belongs to the source domain but conserves the semantic map of the target image. This resulting image is then used by the semantic segmentation model to generate a better semantic label of the first chosen image. Our algorithm is tested on the ISPRS semantic segmentation dataset and improves the global accuracy by a margin of up to 24% when passing from the Potsdam domain to the Vaihingen domain. This margin can be increased further by the addition of other labeled data from the target domain. To minimize the cost of supervision in the translation process, we propose a methodology to use these labeled data efficiently. Full article
(This article belongs to the Special Issue Applications of Computer Vision in Automation and Robotics)
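
The inference chain of the two-GAN scheme can be made explicit with placeholder networks, as in the hypothetical sketch below; the stub modules stand in for the trained generators and the segmentation model, and the six output classes are assumed to match the ISPRS labels.

```python
# Sketch of the inference chain only: target image -> GAN 1 -> semantic label -> GAN 2 ->
# source-style image -> segmentation model -> refined label. All three networks are untrained
# 1x1-convolution stubs standing in for the trained models described in the paper.
import torch
import torch.nn as nn

class Stub(nn.Module):
    """Placeholder for a trained generator or segmentation network."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)

gan1 = Stub(3, 6)       # target-domain image  -> semantic label scores (6 ISPRS classes assumed)
gan2 = Stub(6, 3)       # semantic label       -> source-style image preserving the semantic map
segmenter = Stub(3, 6)  # source-domain segmentation model

target_image = torch.rand(1, 3, 256, 256)
coarse_label = gan1(target_image).softmax(dim=1)
source_style = gan2(coarse_label)
refined_label = segmenter(source_style).argmax(dim=1)    # improved per-pixel classes
```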

18 pages, 4680 KiB  
Article
Pedestrian Detection at Night in Infrared Images Using an Attention-Guided Encoder-Decoder Convolutional Neural Network
by Yunfan Chen and Hyunchul Shin
Appl. Sci. 2020, 10(3), 809; https://doi.org/10.3390/app10030809 - 23 Jan 2020
Cited by 33 | Viewed by 5744
Abstract
Pedestrian-related accidents are much more likely to occur during nighttime, when visible (VI) cameras are much less effective. Unlike VI cameras, infrared (IR) cameras can work in total darkness. However, IR images have several drawbacks, such as low resolution, noise, and thermal energy characteristics that can differ depending on the weather. To overcome these drawbacks, we propose an IR camera system to identify pedestrians at night that uses a novel attention-guided encoder-decoder convolutional neural network (AED-CNN). In the AED-CNN, encoder-decoder modules are introduced to generate multi-scale features, with new skip connection blocks incorporated into the decoder to combine the feature maps from the encoder and decoder modules. This new architecture increases context information, which is helpful for extracting discriminative features from low-resolution and noisy IR images. Furthermore, we propose an attention module to re-weight the multi-scale features generated by the encoder-decoder module. The attention mechanism effectively highlights pedestrians while eliminating background interference, which helps to detect pedestrians under various weather conditions. Experiments on two challenging datasets fully demonstrate that our method shows superior performance, significantly improving the precision of the state-of-the-art method by 5.1% and 23.78% on the Keimyung University (KMU) and Computer Vision Center (CVC)-09 pedestrian datasets, respectively. Full article
(This article belongs to the Special Issue Applications of Computer Vision in Automation and Robotics)
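
A minimal sketch of re-weighting one scale of the encoder-decoder features with a channel-attention block in the spirit of squeeze-and-excitation; this is an illustrative stand-in, not the exact attention module of the AED-CNN.

```python
# Sketch: re-weight one scale of the encoder-decoder features with a channel-attention block
# (squeeze-and-excitation style). Channel count, reduction ratio and feature size are assumed.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.weights = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global context per channel
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                  # per-channel weights in (0, 1)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return feats * self.weights(feats)                 # emphasize pedestrian-relevant channels

features = torch.rand(1, 64, 80, 60)        # one scale of multi-scale IR features (assumed shape)
reweighted = ChannelAttention(64)(features)
```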

19 pages, 3396 KiB  
Article
Off-Line Camera-Based Calibration for Optical See-Through Head-Mounted Displays
by Fabrizio Cutolo, Umberto Fontana, Nadia Cattari and Vincenzo Ferrari
Appl. Sci. 2020, 10(1), 193; https://doi.org/10.3390/app10010193 - 25 Dec 2019
Cited by 13 | Viewed by 5022
Abstract
In recent years, the entry into the market of self-contained optical see-through headsets with integrated multi-sensor capabilities has led the way to innovative and technology-driven augmented reality applications and has encouraged the adoption of these devices also across highly challenging medical and industrial settings. Despite this, the display calibration process of consumer-level systems is still sub-optimal, particularly for those applications that require high accuracy in the spatial alignment between computer-generated elements and the real-world scene. State-of-the-art manual and automated calibration procedures designed to estimate all the projection parameters are too complex for real application cases outside laboratory environments. This paper describes an off-line fast calibration procedure that only requires a camera to observe a planar pattern displayed on the see-through display. The camera that replaces the user's eye must be placed within the eye-motion-box of the see-through display. The method exploits standard camera calibration and computer vision techniques to estimate the projection parameters of the display model for a generic position of the camera. At execution time, the projection parameters can then be refined through a planar homography that encapsulates the shift and scaling effect associated with the estimated relative translation from the old camera position to the current user's eye position. Compared to classical SPAAM techniques, which still rely on the human element, and to other camera-based calibration procedures, the proposed technique is flexible and easy to replicate in both laboratory environments and real-world settings. Full article
(This article belongs to the Special Issue Applications of Computer Vision in Automation and Robotics)
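
One building block of such a procedure is the planar homography between the pattern rendered on the see-through display and its appearance in the calibration camera, which can be estimated with standard tools; the synthetic point sets below are placeholders for the rendered and detected pattern corners.

```python
# Sketch: planar homography between pattern points rendered on the see-through display and
# their positions detected by the calibration camera. The point sets are synthetic placeholders;
# in practice the detected positions come from a corner detector on the captured frame.
import cv2
import numpy as np

# Pattern corners in display pixel coordinates (an 8 x 5 grid with 100 px spacing, for illustration).
display_pts = np.mgrid[0:8, 0:5].T.reshape(-1, 2).astype(np.float32) * 100.0

# Placeholder "detections" in the camera image (here: a simple scale and shift of the grid).
camera_pts = display_pts * 0.75 + np.float32([40.0, 25.0])

H, inliers = cv2.findHomography(display_pts, camera_pts, method=cv2.RANSAC)
print("display -> camera homography:\n", H)
```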

17 pages, 3274 KiB  
Article
Automatic Feature Region Searching Algorithm for Image Registration in Printing Defect Inspection Systems
by Yajun Chen, Peng He, Min Gao and Erhu Zhang
Appl. Sci. 2019, 9(22), 4838; https://doi.org/10.3390/app9224838 - 12 Nov 2019
Cited by 9 | Viewed by 2340
Abstract
Image registration is a key step in printing defect inspection systems based on machine vision, and its accuracy depends to a great extent on the selected feature regions. To address the low efficiency and critical errors of manual, human-vision-based selection, this study proposes a new automatic feature region searching algorithm for printed image registration. First, all obvious shapes are extracted in a preliminary shape extraction process. Second, shape searching algorithms based on contour point distribution information and on edge gradient direction, respectively, are proposed. The two algorithms are combined into a relatively effective and discriminative feature region searching algorithm that can automatically detect shapes such as quasi-rectangles and ovals as feature regions. The whole-image and subregional experimental results show that the proposed method can extract ideal shape regions, which can serve as characteristic shape regions for image registration in printing defect detection systems. Full article
(This article belongs to the Special Issue Applications of Computer Vision in Automation and Robotics)
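
A simplified sketch of the shape-based candidate search: extract contours and keep compact, well-defined shapes as potential feature regions; the Canny thresholds, minimum area and circularity test are illustrative, whereas the paper combines contour point distribution information with edge gradient direction.

```python
# Sketch: extract candidate feature regions for registration by keeping compact, well-defined
# shapes from the printed image. Thresholds below are illustrative assumptions.
import cv2
import numpy as np

def candidate_regions(gray: np.ndarray, min_area: float = 400.0) -> list[tuple[int, int, int, int]]:
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue
        perimeter = cv2.arcLength(c, True)
        circularity = 4.0 * np.pi * area / max(perimeter ** 2, 1e-6)
        if circularity > 0.5:                       # keeps quasi-rectangular and oval shapes
            regions.append(cv2.boundingRect(c))     # (x, y, w, h) of a candidate feature region
    return regions
```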
