Article

Center of Gravity Coordinates Estimation Based on an Overall Brightness Average Determined from the 3D Vision System

1
Department of Production Devices and Systems, Faculty of Materials Science and Technology, Slovak University of Technology, Bottova 25, 91724 Trnava, Slovakia
2
Department of Industrial Automation and Mechatronics, Faculty of Mechanical Engineering, Technical University of Kosice, Letná 9, 04200 Kosice, Slovakia
*
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(1), 286; https://doi.org/10.3390/app12010286
Submission received: 29 October 2021 / Revised: 22 December 2021 / Accepted: 24 December 2021 / Published: 28 December 2021
(This article belongs to the Special Issue Advanced Manufacturing Technologies: Development and Prospect)

Abstract: In advanced manufacturing technologies (including complex automated processes), perception and evaluation of object parameters are critical factors. Many production machines and workplaces are now equipped as standard with high-quality sensing devices based on vision systems to detect these parameters. This article focuses on designing a reachable and fully functional vision system based on two standard CCD cameras, with emphasis on the RS 232C communication interface between the two sites (vision and robotic systems). To this end, we combine the principles of the 1D photogrammetric calibration method from two known points at a stable point field with the packages available inside the processing unit of the vision system (such as filtering, enhancing and extracting edges, and weak and robust smoothing). The correlation factor of the camera system (for reliable recognition of the sensed object) was set from 84 to 100%. Pilot communication between both systems was then proposed and tested through CREAD/CWRITE commands according to the 3964R protocol (used for the data transfer), and the system was proven by successful transmission of the data into the robotic system. Since research gaps in this field still exist and many vision systems are based on PC processing or intelligent cameras, our research aims to provide a price–performance solution for those who cannot regularly invest in the newest vision technology but still need to stay competitive.

1. Introduction

The vision system is a crucial sensor implemented in manufacturing processes to extend their peripheral abilities and support inclusion in the Industry 4.0 concept [1]. Current research in this field shows that determination of an object's parameters (with a 3D view orientation) is possible with the help of at least two single-camera sensors. The advantage of this principle is easy implementation for industrial purposes in order to sense, record, and transmit the obtained picture over an interface such as a serial link to a PC or an additional processing system. Images received in such a way are subsequently evaluated to extract the necessary information about the sensed product [2]. After this process, we can determine and compile the necessary process data of the sensed object.
This study investigates the classical approaches and techniques for determining the necessary characteristics of a sensed object when randomly oriented products are continuously moving, for example, on a conveyor at an automated workplace. The principles of calculating the displacements of two captured pictures using the DIC technique are also known; it is essential to note that this method is sensitive to the average gray intensity of the images [3]. However, intelligent cameras further support and simplify manufacturing engineering because the image analysis runs directly inside the CCD system [4]. Their central part usually consists of the processor and the algorithms needed to determine an object's properties. The CCD system also contains the image sensor, which can be described as: "the high-quality device that can recognize monochromatic pictures in the resolution we need and at the necessary speed" (60 frames per second, for example) [5].
Meanwhile, we can produce sharp pictures that increase the precision of algorithms such as edge detection and pattern recognition. The next step consists of providing reliable information (target coordinates for the robotic arm) to the advanced control system of the workplace. The final goal is to monitor and ensure a correct grasp of the product. Moreover, vision systems demonstrate their irreplaceability when it is necessary to verify the overall correctness of each sensed product (TQM). A good example is the implementation of an infrared camera system for measuring the temperature distribution of composites [6]. Inspection of objects and determination of parameters such as color and shape are their primary purpose; see Figure 1.
Due to the application attractiveness, many intelligent camera manufacturers have broadly comparable parameters [7]. The gradually increasing deployment of vision systems under the pressure of the emerging market has caused this issue to have been studied widely [8]. Yet, over the past few decades, their complexity and ability to control processes and monitor products in an automated production workplace have become attractive to many global manufacturers [9]. In recent years, the vision systems market has focused mainly on intelligent cameras with a high frame rate, excellent electrical coverage, and a powerful digital signal processor in one compact unit. Their advantage is, in particular, a more straightforward implementation of the application and their increased robustness. Parameters such as flux viscosity, dipping acceleration, dipping time, and flux stable time can be investigated using a high-speed camera system and MATLAB data processing [10].
Unfortunately, there are still few solutions focused on affordability and applicability to situations where pallet deployment cannot be used in industry [11]. Our current research is situated in this area, where we try to process the acquired image using the comprehensive vision system OMRON F150-3. This system consists of two CCD sensors placed next to each other [12]. The aim is to improve the existing classical approaches and techniques by determining some of the necessary characteristics of the scanned object. Such a solution could be developed by us and considered a price-affordable complex vision system (compared with commercial solutions) [13].
This paper intends to show an approach to a simple vision system solution in terms of price and applicability to several cross-sectoral areas (manufacturing, assembly, and automotive). Contrary to passive and closed intelligent camera systems, embedded complex vision systems cover most machine vision applications in a small footprint. They are usually provided as units that deliver extensive processing power suited for industrial applications [14]. As a result, identification and subsequent navigation of the robotic arm become optimal and are maintained throughout the entire smooth automated operation. Nevertheless, research and common practice show that the CCD sensor itself usually has, around the lens, a ring of red LEDs that serves as a passive light source [15]. However, additional lighting makes the solution more expensive on the one hand but increases the versatility and performance of the workplace on the other. Despite this factor, we decided not to use additional lighting and to realize a solution based on the serial communication interface connected to the robotic arm control system. Different lighting was not considered, and the system compares the current and saved scene in real time under the relevant lighting conditions. One crucial issue in the research of vision system usage is smartness, and fast calibration is mainly pursued for 3D measurement [16]. Therefore, this paper proposes a reachable vision system that determines sensed object properties via communication between two CCD sensors, a processing unit, and a robotic system.
This paper is divided into seven sections: Section 1 offers a review of the current deployment, research, and customer requirements for vision systems in general, as well as examples of their theoretical implementation. Existing technical solutions in this area focus on progressive trends oriented toward the sensing, following, and control of various scanned parts as a whole. Section 2 deals with the available methods and procedures responsible for image processing, followed by the mathematical determination (epipolar geometry) related to the use of 2D or 3D vision systems. Section 3 explains the proposed vision system, including the available communication interfaces, data transfer, and setup of the vision system itself. Section 4 describes the framework for communication between the vision and control systems of the robotic arm, based on the RS232C communication channel. Part of this section is also the configuration and setting of the communication protocols, their composition, and an example of mutual communication between both sides (vision system and control system of the robotic arm).
Section 5 deals with the experimental setup of a technical solution based on two CCD systems, determines its variables, and provides the experimental measurement for image processing. This section also includes the necessary steps for obtaining the object coordinates and the center of gravity of a sensed object, which is one of the leading research goals. The verification of the proposed solution is proven by graphical and theoretical tests of normality for each object coordinate. Section 6 focuses on summarizing and comparing our achieved results with relevant current research on vision systems. This part of the article explains the advantages and disadvantages of the presented solution with emphasis on its applicability and further development. At the same time, we compare some professional solutions with our research in terms of usability and deployment in actual conditions. Finally, the last section summarizes the realized development and research in this field and the recommendations for future work and improvements.

2. Materials and Methods

The methodology for processing the image information obtained from the sensor currently focuses on determining the position of individual parameters of the sensed product and comparing them with the reference values of a correct product [17]. After that, we can evaluate a geometric representation of the sensed object and its parameters [18]. The next step in the process supposes the movement of the reference plane to various horizontal spacings in "x" and simultaneous measurement at the same point, with the assignment of correct values for the translation (in pixels). From additional calculations, we can also state the precise distance between the sensing device (camera) and the sensed product. The correctness of this methodology is contingent on fixing the coordinates "x" and "z" with the use of (at least) two sensors fixed alongside each other. Finally, the targeted product is sensed, captured, and evaluated on condition that the sensed directions intersect while their axes are parallel [19]. The distance between the sensors and the sensed product can thus be determined as the ratio of the product of the camera spacing "b" and the reference-plane spacing "r" to the absolute difference between the measured translations "a" and "c", as given by Equation (1) (a short numerical sketch of this relation follows the list of symbols below):
Z = (b × r) / |a − c|,     (1)
where,
  • b—spacing between the sensors,
  • r—spacing of the reference and ocular plane,
  • a—value measured by sensor 1,
  • c—value measured by sensor 2.
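For illustration, here is a minimal numerical sketch of Equation (1); the sensor spacing, reference distance, and measured translations are hypothetical values chosen for the example, not data from the experiment.

```python
def depth_from_disparity(b, r, a, c):
    """Distance Z between the sensors and the object, Equation (1): Z = (b * r) / |a - c|."""
    disparity = abs(a - c)
    if disparity == 0:
        raise ValueError("Equal translations: the object cannot be resolved from the reference plane.")
    return (b * r) / disparity

# Hypothetical example values (same length unit for b, r, a, c)
b = 120.0   # spacing between the two sensors
r = 300.0   # spacing of the reference and ocular plane
a = 14.2    # translation measured by sensor 1
c = 9.7     # translation measured by sensor 2
print(depth_from_disparity(b, r, a, c))  # Z = 120 * 300 / 4.5 = 8000.0
```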
We must also consider the facts during the image processing:
  • Resolution of the sensed product,
  • Value of the images per second,
  • Picture intensity, color, and persistence.
Based on this, we can identify two basic principles for 3D visioning.
  • 2D sensors,
  • 3D sensors.

2.1. 2D Sensors

Initial calibration of both 2D sensors is needed, because only then is it possible to state the relevant values of the sensed products successfully. We achieve this by setting the 2D sensor values (calibration matrix, distortion factor) and the mutual spacing and orientation of the 2D sensors. Furthermore, epipolar geometry (which addresses the two-sensor issue) is needed to state the 3D points that we can estimate from their positions on the 2D sensors [20]. During the determination of the searched point m = [x′, y′, 1] using the obtained picture, the following equation applies:
m = P × M
where the dot "m" is the picture of the point "M" in the screening surface (therefore, in the surface that is sensed by the camera system for capture). The projection matrix "P" has dimension 3 × 4 and relates the actual dot to the sensed dot as:
P = K × [ I , 0 ]
where the calibration matrix "K" has dimension 3 × 3 and describes the parameters of the environment and of the 2D sensor. The matrix "I" is the 3 × 3 identity matrix, and "0" is a zero column.
K = [ a_x   c     t_x ]
    [ 0     a_y   t_y ]
    [ 0     0     1   ]
  • a_x, a_y—scaling in the "x" and "y" axes related to the 2D sensor.
  • t_x, t_y—coordinates of the intersection of the optical axis with the sensed (visual) surface.
  • c—distortion (skew) parameter (usually = 0).
The matrix "P" can also differ from the representation written above by including the translation and orientation with respect to the world coordinate system. Under this assumption, we can add the transformation matrix between the 2D sensor coordinates and the environment surface (a short code sketch assembling "K" and "P" follows the list below):
P = K × [ I , 0 ] × [ R   −R·C′ ]
                    [ 0      1  ]
where,
  • R—3 × 3 matrix describing the rotation.
  • C′—translation vector to the environment surface.
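As a sketch of how the calibration and projection matrices fit together, the following assembles K from assumed intrinsic values and composes P = K [I, 0] [[R, −R·C′], [0, 1]]; all numerical values are placeholders for illustration, not measured parameters of the presented system.

```python
import numpy as np

# Assumed intrinsic parameters (placeholders, not measured values)
ax, ay = 800.0, 800.0      # scaling in x and y (pixels)
tx, ty = 256.0, 242.0      # principal point, e.g., half of a 512 x 484 resolution
c = 0.0                    # skew/distortion parameter, usually 0

K = np.array([[ax, c,  tx],
              [0., ay, ty],
              [0., 0., 1.]])

# Assumed extrinsics: rotation R (identity here) and sensor position C' in the world frame
R = np.eye(3)
C_prime = np.array([[0.0], [0.0], [0.0]])

# P = K [I | 0] [[R, -R C'], [0, 1]]  ->  equivalently P = K [R | -R C']
P = K @ np.hstack([R, -R @ C_prime])

M = np.array([25.0, 10.0, 15.0, 1.0])    # homogeneous world point (calibration point 1 as an example)
m = P @ M
m = m / m[2]                              # normalize so that m = [x', y', 1]
print(m)
```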
In this way, we can relate the dot "m" from the first sensed picture to the corresponding dot "m′" from the second sensed picture. For these two dots, we introduce the 3 × 3 matrix "F":
m′ᵀ × F × m = 0
In the next step, we place one of the projection matrices at the origin of the coordinate system, so that no additional operations are needed for it. The remaining task lies in the determination of the second matrix for the dual-sensor setup, using the matrix "W":
W = [ 0   −1   0 ]
    [ 1    0   0 ]
    [ 0    0   1 ]
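The matrix W is commonly used to recover the second projection matrix by decomposing the essential matrix E = U diag(1, 1, 0) Vᵀ; the sketch below lists the four candidate [R | t] solutions and assumes calibrated (normalized) image coordinates. It is offered as a generic illustration of this standard technique, not as the exact routine of the F150-3 processing unit.

```python
import numpy as np

W = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])

def candidate_projections(E):
    """Four candidate second-camera matrices [R | t] from an essential matrix E."""
    U, _, Vt = np.linalg.svd(E)
    # Enforce a proper rotation (determinant +1)
    if np.linalg.det(U @ Vt) < 0:
        Vt = -Vt
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2].reshape(3, 1)
    return [np.hstack([R, s * t]) for R in (R1, R2) for s in (+1, -1)]

# The correct candidate is the one for which a triangulated test point
# ends up with positive depth in front of both sensors (cheirality check).
```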
To make this methodology succeed, we need to test the candidate projection matrices on any dot and choose the correct one; the selected dot must be positioned in front of both 2D sensors. After this, we can state the coordinates of the dots (with the help of the linear method for triangulation) and the relevant system of equations:
A = [ x·p₃ᵀ − p₁ᵀ   ]
    [ y·p₃ᵀ − p₂ᵀ   ]
    [ x′·p′₃ᵀ − p′₁ᵀ ]
    [ y′·p′₃ᵀ − p′₂ᵀ ]
By using the least squares method, we can state the resultant 3D coordinates of point "X". The variables "x" and "y" (and "x′", "y′") are the image coordinates of the corresponding dots, and pᵢᵀ is the i-th row of matrix "P" (p′ᵢᵀ of the second sensor's matrix).
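A minimal sketch of this linear triangulation step, solving A·X = 0 in the least-squares sense with an SVD; P1 and P2 stand for the two 3 × 4 projection matrices and (x, y), (xp, yp) for the corresponding image points.

```python
import numpy as np

def triangulate_point(P1, P2, x, y, xp, yp):
    """Linear triangulation: build the 4 x 4 matrix from the equation above and solve by SVD."""
    A = np.vstack([x  * P1[2] - P1[0],
                   y  * P1[2] - P1[1],
                   xp * P2[2] - P2[0],
                   yp * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                  # right singular vector of the smallest singular value
    return X[:3] / X[3]         # homogeneous -> Euclidean 3D coordinates
```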
For the sensed product and its 3D position, picture rectification is crucial, so that each corresponding dot can be compared at the same "y" coordinate. Therefore, after the corresponding dot and its row are found for the second sensor, preprocessing should be carried out first, and only afterwards the actual processing of the obtained image [21].

2.2. 3D Sensors

A sensing system that captures images with the help of one 3D sensor, as advanced and complex as a 3D camera system, usually consists of two side-by-side cameras in a single housing (3D-A5000 Cognex 3D camera, SICK IVC-3D, OMRON Xpectia FZD, etc.) [22]. The aim of such devices is to obtain the image and transform it into separate parameters (with the help of a decomposition process) to obtain the relevant information about a product [23]. The data captured in this way allow us to perform advanced calculations of 3D data, from which we can evaluate the sensed object's dimensions, thickness, position, and orientation. However, this method also has disadvantages. For example, we sometimes obtain a deformed image from the sensors, but we can measure the distorted images, repair them, and then evaluate the correction with the help of calibration. Usually, these types of 3D camera systems utilize algorithms for autocalibration or particular types of calibration, depending on the camera model, which include the influence of the actual lenses [24]. Since a dependence between the geometry and the sensed product exists here, by comparison with a 3D CAD model we can state the correct positioning and orientation [25].

3. Setup of the Vision System

To ensure the communication quality of both the robotic system and the sensing device, they need to be proven in conjunction with standard, known protocols. Currently, there are several interfaces whose principle and topology are based on Ethernet, RS232, PROFIBUS, and similar systems. In addition, the control system of the robotic arm needs the relevant libraries and functions to understand the commands.
The composition of the sensing principle is shown in Figure 2. The sensing process continues with calibration. Its purpose is to determine the internal and external mathematical values, including the lens distortion. This principle helps us state the correct dots, their selection, and their extraction from the obtained images. Transformation of the coordinate systems (robotic arm and sensing device) precedes this step [26].

4. Verification of Data Transfer (Communication)

Data transfer between the robotic arm and the vision system is carried out via the RS232C communication channel. Communication is assumed on both sides (through the serial communication channel), followed by configuration and specification of the transmitting processes [27]. In the next step, this configuration process is followed by a cold startup of the control system of the robotic arm. Finally, the serial interface must be assigned to the operating system for transmission with CREAD/CWRITE usage; see Table 1, Table 2 and Table 3. Usually, this can be configured for communication through CREAD/CWRITE according to the scheme shown in Figure 3.
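On the PC/vision-system side, the serial channel can be opened with parameters matching Table 1 (9600 baud, 8 data bits, even parity, 1 stop bit). The sketch below uses the pyserial package and a hypothetical port name and payload; it only illustrates the raw link, while the 3964R framing and the CREAD/CWRITE side live in the robot controller.

```python
import serial  # pyserial

# Hypothetical port name; parameters mirror Table 1 (BAUD = 9600, CHAR_LEN = 8, STOP_BIT = 1, PARITY = even)
port = serial.Serial(
    port="COM3",
    baudrate=9600,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_EVEN,
    stopbits=serial.STOPBITS_ONE,
    timeout=0.5,           # seconds, roughly matching the 500 ms timeouts of Table 2
)

# Example payload: object center-of-gravity coordinates as a plain ASCII record
port.write(b"25.0,10.0,7.5\r\n")
port.close()
```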

5. Experimental Testing (Setup)

The main aim regarding the vision system OMRON F150-3 is its experimental testing to verify and prove the possibilities and abilities of such a device, primarily to determine the chosen object characteristics. This system belongs to the so-called compact vision systems, consisting of an image information processing unit (independent of a PC) and one or several externally connected CCD cameras. Image information is processed and evaluated in an external unit outside the cameras. The unit's architecture is usually similar to an industrial or embedded PC built on a powerful processor. This system makes it possible to decide on a sensed object's position in a working envelope, followed by the parameters of the center of gravity (x, y, and z), which is one of the goals of this paper. The information obtained in this way is passed to the control system of the robotic arm for successive transposition [28]. The source point of the introduced solution is the realized technical setup with measurement of the object coordinates in order to determine its center of gravity in 3D space. By step sequencing, together with the help of two CCD camera systems, precise methodological verification of the measurement was reached. The key parameters of the vision system (OMRON F150-3) taken into account for the present solution are the resolution (512 × 484), the field of view (in our case = 0, because we did not use an additional source of light), and the focus (35 mm). The use of an auxiliary light source makes it possible to sense objects up to 50 × 50 mm at a distance of up to 76 mm. The experimental measurement requires implementing the complete control, input–output, and communication peripherals, mainly: a control system for post-processing, CCD camera systems with 35 mm lenses, a console, a monitor, and the corresponding cables. In addition, the CCD cameras must be fixed directly to an aluminum profile with modular construction (see Figure 4).
The presented and realized technical solution with two CCD cameras assumes a disposition that captures all object coordinate data. Thus, we try to avoid a technical solution and disposition in which the "z" coordinate axis would not be sufficient to determine the shape, depth, and height of the sensed object [29]. Concerning this requirement, fixation of the CCD cameras at a right angle to each other was also considered. With the vertical CCD camera system of the proposed technical solution, it is possible to determine the object coordinates in "x" and "y". We evaluate the coordinate "z" as the third supplementary axis with the help of the horizontal camera system. Table 4 shows an example of a symmetric object tested to obtain its coordinates (and subsequently the center of gravity). Correct and incorrect sensed objects are evaluated by the correlation factor of the camera system (inside the processing unit, with a value from 84 to 100%). When results below this range were obtained, the measurement of the camera system was considered incorrect (faulty) and the test had to be repeated.
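The correlation factor reported by the processing unit can be approximated off-line by normalized cross-correlation between a stored template of the object and the current scene. The sketch below uses OpenCV's matchTemplate as a stand-in for the F150-3's internal matching and treats scores below 0.84 (84%) as a failed detection, mirroring the threshold above; the file names are hypothetical.

```python
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)            # hypothetical file names
template = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation score in the range [-1, 1]
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_score, _, max_loc = cv2.minMaxLoc(result)

if max_score >= 0.84:                       # 84 % correlation threshold
    print("Object recognized at", max_loc, "score:", max_score)
else:
    print("Measurement considered faulty, repeat the test; score:", max_score)
```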
Suppose we need to obtain a relatively high accuracy of the scanned object position. In that case, we need to eliminate contrast problems of the object within the measured scene as much as possible. Otherwise, poorly sensed objects would be hardly identifiable, and the system would not be able to accurately determine their shape, as shown in the left part of Figure 5.
Obtaining the "x" and "y" coordinates depends on the calibration of the CCD cameras. Their calibration is necessary and required, especially when the triangulation method is used, since its aim is the reconstruction of sensed objects in order to obtain their shape properties. An example with non-calibrated CCD cameras can be seen in the right part of Figure 5. If the sensed object moves, the CCD cameras cannot determine the object's correct position. Therefore, it is appropriate to use information in the form of optical calibration tag markers that identify the sensed scene. Markers should be identifiable in the measurement scene by their brightness profile related to the correlation factor. Using the simple thresholding method, we assume that the intensity of the marker contrast will be higher than the contrast of all other points on the background of the measured scene [30]. A certain degree of change in the area of the marker, depending on its position, is natural and results from the physical nature of the method used. Our presented solution uses a 1D photogrammetric calibration method from two known points at a stable point field, as shown in Figure 6 (left part of the picture, with two white pins).

Necessary Steps for Coordinates of the Sensed Object

The following steps describe the activities and functions needed for the measurement realization. Firstly, the (initial) calibrations of both CCD camera systems are essential, as illustrated in Figure 6. This step contains several subactivities, such as the vision system startup, followed by CCD camera system registration and the specification of an additional light source. After these subactivities, it is also necessary to specify the processing mode required for the subsequent processing. Finally, by setting the CCD camera system exposure time (in the case of a more extended exposure time, the monitor will be blank during the setting), we achieve picture focusing under the actual light source conditions available in the working environment (together with the sensed object).
With the help of an accurate digital measurement instrument (in the form of a digital caliper), we determined the coordinates of the first calibration point, i.e., (x = 25 mm, y = 10 mm, z = 15 mm). Similarly, the coordinates of the second calibration point were (x = 30 mm, y = 15 mm, z = 15 mm). In terms of calibration, two calibration points are sufficient because the magnification scale in the "x" and "y" axes is the same. The additional step of the CCD camera system setting is object presence detection (whether it is in the working envelope of both CCD cameras); see the left part of Figure 7. We use the density averaging function (built into the vision system) to determine the measurement area [31]. This function compares the brightness of two pictures and evaluates them based on the grayscale. Following this, we determine an overall brightness average with the measurement (first for the empty area, and second for the area with the sensed object).
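The magnification scale can be derived from the two calibration points given above; in the sketch below only the world coordinates (in mm) come from the text, while the pixel coordinates are hypothetical placeholders.

```python
import numpy as np

# World coordinates of the two calibration points (mm), as measured with the digital caliper
P1_world = np.array([25.0, 10.0])
P2_world = np.array([30.0, 15.0])

# Hypothetical pixel coordinates of the same points in the vertical CCD image
p1_pix = np.array([201.0, 143.0])
p2_pix = np.array([239.0, 181.0])

# One scale factor for both axes (the magnification in x and y is the same)
scale_mm_per_px = np.linalg.norm(P2_world - P1_world) / np.linalg.norm(p2_pix - p1_pix)
print(scale_mm_per_px)   # mm per pixel, later applied to convert image coordinates to mm
```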
The measurement realization verifies the detection of the sensed object (concerning the actual light source conditions). First, it evaluates an overall density average over the selected (rectangular) area, which gave a value of 74.451. The limit values for the density averaging function are then set based on this process (according to the reliability of evaluating the presence or absence of the sensed object) [32]. Second, detection of the sensed object gives us qualitative information about errors in the measurement by the vision system. A failed measurement leads us back to the initial calibration to perform recalibration; a failed detection example of a sensed object is shown in the right part of Figure 7. Identification of the coordinate axes "x" and "y" of the sensed object is demonstrated in the next step (using the gravity and area function). The following fundamental step consists of calculating the area of the sensed object viewed from the top. The sensed object is evaluated as white pixels (mainly according to the light source), and the remaining area is black (see left part of Figure 8).
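A rough equivalent of the density averaging and gravity-and-area functions of the F150-3, written with NumPy/OpenCV for illustration: the limit of 74.451 is the value reported above, while the region of interest, file name, binarization threshold, and the direction of the presence test are hypothetical and depend on the actual lighting.

```python
import cv2
import numpy as np

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)     # hypothetical file name
roi = img[100:300, 150:400]                              # hypothetical measurement rectangle

# Density averaging: overall brightness average of the region of interest
brightness_avg = float(np.mean(roi))
# Limit value from the experiment; whether presence raises or lowers the average depends on the lighting
object_present = brightness_avg > 74.451

# Gravity and area: binarize (object appears as white pixels) and use image moments
_, binary = cv2.threshold(roi, 128, 255, cv2.THRESH_BINARY)
m = cv2.moments(binary, binaryImage=True)
area = m["m00"]
if area > 0:
    x_c, y_c = m["m10"] / area, m["m01"] / area          # centroid in pixels (x, y)
```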
The next step consists of determining the "z" axis of the sensed object's shape, depth, and height (see right part of Figure 8). The subactivities of this step also include a post-processing process applied to the picture captured by the vision system, so that the edge position function can successfully sense the object [33]. This function detects the sensed object's edges, provided there is sufficient contrast between the object and the environment. By applying this process, we can evaluate (in the appropriate direction) the edges of the sensed object using the functions "light to dark" (the sensed object is dark, and the background is bright) or "dark to light" (the sensed object is bright, and the background is dark).
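The edge position step for the "z" axis can be imitated by scanning a column of the horizontal camera's image and locating the first light-to-dark (or dark-to-light) transition; this is a minimal sketch under the assumption of a grayscale image and a fixed scan column, with a hypothetical difference threshold.

```python
import numpy as np

def edge_position(profile, mode="light_to_dark", threshold=40):
    """Index of the first edge along a 1D intensity profile.

    mode 'light_to_dark': bright background, dark object; 'dark_to_light' is the opposite case.
    threshold: minimum gray-level difference between neighboring pixels (hypothetical value).
    """
    diff = np.diff(profile.astype(np.int32))
    if mode == "light_to_dark":
        candidates = np.where(diff <= -threshold)[0]
    else:  # dark_to_light
        candidates = np.where(diff >= threshold)[0]
    return int(candidates[0]) if candidates.size else None

# Example: scan one image column of the horizontal CCD camera (hypothetical data)
# column = image[:, 256]
# z_edge_px = edge_position(column, mode="dark_to_light")
```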
Finally, the last step consists of determining the center of gravity parameters of the sensed object, which we obtain by a simple mathematical operation (x, y, z/2), because we consider the center of gravity to lie in the middle of the sensed object. Information obtained by this principle is then transferred to the computer via the RS232C serial interface for advanced processing (transformation of the coordinates from the external coordinate system to the coordinates of the robotic arm control system) [34]. The subactivities of this step include transmitting the coordinate information directly to the control system of the robotic arm. We can state that the lighting conditions (light uniformity) and the distance of the vertical camera system from the sensed object have a major influence on the captured results. Consequently, to verify the obtained results, we carried out a test of normality [35]. We used a graphical test of normality, the so-called quantile–quantile plot (Q-Q plot). If the obtained data come from the normal distribution, the points lie on a straight line; this should hold for the coordinate axis "x". The graph is shown in Figure 9.
Experimental measurements were performed 20 times for each axis of the sensed object. The measured values were statistically processed, and the normality of the data of the individual coordinates of the sensed object was verified first. The results are shown in the graphs in Figure 9, Figure 10 and Figure 11, where we can see that the individual points are, for the most part, at a sufficiently small distance from the line. This, therefore, implies the normality of the captured data.
We perform the test of normality based on the hypothesis that the sample with range n = 20 comes from the normal distribution. After the graphical test of normality, we chose the level of significance α = 0.05. We then tested the null hypothesis H0, that the sample comes from the normal distribution, against the alternative hypothesis H1, that it comes from another distribution. Finally, the result of the Shapiro–Wilk (S.W.) test confirms the null hypothesis, so the selected sample comes from the normal distribution (value of the test statistic = 0.9726, p-value = 0.4335).
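The graphical and numerical normality checks can be reproduced with SciPy; the 20 coordinate measurements below are placeholders, since the raw data are not listed in the text, while the test structure (Q-Q plot, Shapiro-Wilk at α = 0.05) follows the procedure described above.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Placeholder sample of n = 20 measurements of one coordinate axis (mm)
x = np.random.default_rng(0).normal(loc=25.0, scale=0.05, size=20)

# Q-Q plot against the normal distribution (graphical test of normality)
stats.probplot(x, dist="norm", plot=plt)
plt.title("Q-Q plot for the x coordinate")
plt.show()

# Shapiro-Wilk test: H0 = the sample comes from a normal distribution
statistic, p_value = stats.shapiro(x)
alpha = 0.05
print(statistic, p_value, "normal" if p_value > alpha else "not normal at alpha = 0.05")
```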

6. Discussion

The novelty of our proposed vision system (compared with other available solutions) lies in combining the principles of the 1D photogrammetric calibration method from two known points at a stable point field with the use of the packages available inside the processing unit of the vision system (such as filtering, enhancing and extracting edges, weak and robust smoothing, etc.). Combining these two approaches makes it possible to create modularly oriented recognition. We want to mention that the parameters we know are entirely sufficient for the 1D photogrammetric calibration method; however, this method also allows more than two points (e.g., three, four, or more). Traditionally, 2D or 3D calibration can be used depending on the sensed size of the object. These methodologies usually use a specific type of calibration field (e.g., a checkerboard for 2D) or 3D calibration objects (including a spatial object used to calibrate the camera system). The 1D photogrammetric method was chosen because of its efficiency in calibrating our type of camera system, the known reference coordinates of two points (point field), and the possibility of calibrating objects of our shape and size.
The ultimate goal is to determine the necessary properties of the sensed object. The experiments found that a sufficient correlation factor guaranteeing reliable recognition of the sensed object is a value between 84% and 100%. Pilot communication between both systems (vision and robotic systems) was proposed and then tested through CREAD/CWRITE commands according to the transmission protocol 3964R used for the data transfer in our solution. The successful transmission of the data into the robotic system after the object sensing process was verified and approved (see Table 3 with successful and verified communication commands). The reliable usage of the assistive commands inside the processing unit was most clearly illustrated by the clear and understandable image of the sensed object. Furthermore, the results obtained so far are well prepared for further and deeper processing in the next phase of development.
Currently, many vision systems are based on PC processing, complex vision systems, or intelligent cameras. Due to the growing popularity of advanced vision systems and their rapid advance, we attempted to develop an affordable solution that meets the requirement of (trouble-free) implementation into (mainly, but not only) automated manufacturing workplaces and the automotive industry [36]. The majority of current vision systems consist of closed professional processing algorithms, and as such they cannot be customized to specific situations or specific working conditions. Consequently, there is growing demand for price-affordable vision systems, such as the one proposed in this paper. On the other hand, the weaknesses of this project should also be noted. Although encouraging results were obtained from this research, there are still some significant limitations to consider. Hopefully, the potential of new modularities for our application will not be limited by the selection of the robotic system. Firstly, this project will be tested on the KUKA robotic system available in our laboratory.
To reduce single-purpose usage, it is also necessary to verify our proposed vision system on another robotic system (to further determine reliable communication and recognition of sensed objects). Secondly, the proposed vision system is significantly dependent on the existing light conditions of the location where it will be implemented, because we do not use additional lighting in our proposal; it is therefore also worth considering this aspect, as lighting is an essential factor. Thirdly, the serial communication interface should be replaced in the future with a more advanced system, such as Ethernet, OPC communication, FireWire, USB, etc. The configuration process for these newer communication protocols is much simpler because they are based on open communication standards. Usually, the user is only required to enter the IP address of the device and the appropriate port with which they want to communicate. In addition, serial communication has a low baud rate and is used over shorter distances.
Last but not least, we only performed experiments inside one of our laboratories. More experiments have to be realized to compare the use, further demonstrate, and verify the functionality of the proposed project in a diversity of conditions.

7. Conclusions

This paper proposes a reachable and fully functional vision system based on two standard CCD cameras to provide a price–performance solution for those who cannot often invest in the newest vision technology at their workplaces. The innovation of our proposed vision system (compared with other available solutions) lies in combining the principles of the 1D photogrammetric calibration method, the vision system, and standard RS 232C communication to determine the necessary properties of the sensed object. In addition, this approach makes it possible to create modularly oriented recognition (depending, of course, on the built-in functions of the camera system's processing unit). Furthermore, with the continued development of vision systems in advanced manufacturing technologies on the one hand, and the need to stay competitive on the other, we prove the elimination of the weaknesses of object recognition by using the packages available inside the processing unit of the vision system (such as filtering, enhancing and extracting edges, or weak and robust smoothing) [37]. Moreover, our new findings were validated by reliable testing of pilot communication through CREAD/CWRITE commands (according to the transmission protocol 3964R), by determining the sufficient correlation factor that guarantees reliable recognition of the sensed object (84% to 100%), and by probability plots. Furthermore, the results of our work are illustrated by experimental tests, where the feasibility of the design is proved by obtaining clear and stable images of the sensed object [38].
Future work consists of further and deeper image processing to determine advanced properties (information) about sensed objects, which will probably greatly influence the vision system performance. Consequently, exciting areas of additional research include reducing usage of the single-purpose serial communication system and testing other possible communication protocols (ETHERNET IP and OPC communication) with the KUKA robotic system to verify modularity and identify additional limitations of our proposed design.

Author Contributions

All authors discussed and agreed upon the idea and made scientific contributions: writing—original draft preparation, R.H. and M.V.; experiment designing, M.V. and R.H.; experiment performing, M.V.; data analysis and writing—review and editing, M.V. and R.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by research grant VEGA 1/0330/19; research and design of algorithms and systems for the fusion of heterogeneous data in multisensor architectures, KEGA 044TUKE-4/2021; remote access to laboratory exercises for industrial automation, erasmus+ 2019-1-RO01-KA203-063153; and development of mechatronics skills and innovative learning methods for Industry 4.0.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Distante, A.; Distante, C. Handbook of Image Processing and Computer Vision; Springer Nature: Berlin, Germany, 2020; 448p, ISBN 978-3-030-42373-0.
  2. Hasegawa, Y.; Shimon, Y.N. Springer Handbook of Automation; Springer: Berlin, Germany, 2020; 1812p, ISBN 978-3-540-78830-0.
  3. Zhang, W.; Gu, X.; Zhong, W.; Ma, Z.; Dong, X. A review of transparent soil model testing technique in underground construction: Ground visualization and result digitalization. Undergr. Space 2020.
  4. Xiang, Y.; Liu, H.; Zhang, W.; Chu, J.; Zhou, D.; Xiao, Y. Application of transparent soil model test and DEM simulation in study of tunnel failure mechanism. Tunn. Undergr. Space Technol. 2018, 74, 178–184.
  5. Kahily, H.M.; Sudheer, A.P.; Narayanan, M.D. RGB-D sensor-based human detection and tracking using an armed robotic system. In Proceedings of the 2014 International Conference on Advances in Electronics Computers and Communications, Bangalore, India, 10–11 October 2014; pp. 1–4.
  6. Zhan, L.; Junhui, L.; Xiaohe, L. Novel Functionalized BN Nanosheets/Epoxy Composites with Advanced Thermal Conductivity and Mechanical Properties. ACS Appl. Mater. Interfaces 2020, 12, 6503–6515.
  7. Saukkoriipi, J.; Heikkilä, T.; Ahola, J.M.; Seppälä, T.; Isto, P. Programming and control for skill-based robots. Open Eng. 2020, 10, 368–376, ISSN 2391-5439.
  8. Favorskaya, M.N.; Jain, L.C. Computer Vision in Control Systems-4: Real-Life Applications; Springer International Publishing AG: Berlin/Heidelberg, Germany, 2018; p. 317, ISBN 978-3-319-67993-8.
  9. Shallari, I.; O'Nils, M. From the Sensor to the Cloud: Intelligence Partitioning for Smart Camera Applications. Sensors 2019, 19, 5162.
  10. Li, J.; Xia, Y.; Wang, W.; Zhang, W.; Zhu, W. Dipping Process Characteristics Based on Image Processing of Pictures Captured by High-speed Cameras. Nano-Micro Lett. 2015, 7, 11.
  11. Tong, M.; Fan, L.; Nan, H.; Zhao, Y. Smart Camera Aware Crowd Counting via Multiple Task Fractional Stride Deep Learning. Sensors 2019, 19, 1346.
  12. Frontoni, E.; Mancini, A.; Zingaretti, P. Embedded Vision Sensor Network for Planogram Maintenance in Retail Environments. Sensors 2015, 15, 21114–21133.
  13. Popovic, V.; Seyid, K.; Cogal, Ö.; Akin, A.; Leblebici, Y. Design and Implementation of Real-Time Multi-Sensor Vision Systems; Springer International Publishing AG: Berlin/Heidelberg, Germany, 2017; p. 257, ISBN 978-3-319-59056-1.
  14. Campbell, F.C. Inspection of Metals: Understanding the Basics; ASM International: Almere, The Netherlands, 2013; p. 487, ISBN 978-1-62708-000-2.
  15. Garbacz, P.; Burski, B.; Czajka, P.; Mężyk, J.; Mizak, W. The Use of 2D/3D Sensors for Robotic Manipulation for Quality Inspection Tasks. Solid State Phenom. Adv. Manuf. Eng. II 2015, 237, 77–82, ISSN 1662-9779.
  16. Sinha, P.K. Image Acquisition and Preprocessing for Machine Vision Systems; SPIE Press: Bellingham, WA, USA, 2012; p. 750, ISBN 9780819482037.
  17. Chen, J.; Jing, L.; Hong, T.; Liu, H.; Glowacz, A. Research on a Sliding Detection Method for an Elevator Traction Wheel Based on Machine Vision. Symmetry 2020, 12, 1158, ISSN 1424-8220.
  18. Sukop, M.; Hajduk, M.; Baláž, V.; Semjon, J.; Vagaš, M. Increasing degree of automation of production systems based on intelligent manipulation. Acta Mech. Slov. 2011, 15, 58–63, ISSN 1335-2393.
  19. Tannoury, A.; Darazi, R.; Makhoul, A.; Guyeux, C. Wireless multimedia sensor network deployment for disparity map calculation. In Proceedings of the IEEE Middle East and North Africa Communications Conference (MENACOMM), Jounieh, Lebanon, 18–20 April 2018; pp. 1–6.
  20. Mohamed, A.; Culverhouse, P.; Cangelosi, A.; Yang, C. Active stereo platform: Online epipolar geometry update. EURASIP J. Image Video Process. 2018, 2018, 54.
  21. Deepa; Jyothi, K. A robust and efficient preprocessing technique for stereo images. In Proceedings of the International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), Mysuru, India, 15–16 December 2017; pp. 89–92.
  22. Van Baer, T. Introduction to Machine Vision. Cognex Confidential Webinar, 2018; p. 49. Available online: https://www.cognex.com/library/media/ondemandwebinars/slides/2018_intro_machine_vision-webinar.pdf (accessed on 26 August 2021).
  23. Collado, J.F. New Methods for Triangulation-based Shape Acquisition Using Laser Scanners. Ph.D. Thesis, Department d'Electronica, Informatica i Automatica, Universitat de Girona, Girona, Spain, 2004; ISBN 84-689-3091-1.
  24. Gil Ruiz, A.; Victores, J.G.; Łukawski, B.; Balaguer, C. Design of an Active Vision System for High-Level Isolation Units through Q-Learning. Appl. Sci. 2020, 10, 5927.
  25. Lv, Z.; Zhang, Z. Build 3D Scanner System based on Binocular Stereo Vision. J. Comput. 2012, 7, 399–404.
  26. Novák, P.; Špaček, P.; Mostýn, V. Stereovision system—Detection of the correspondings points. In Proceedings of the ICMT '11—International Conference on Military Technologies; University of Defence: Brno, Czech Republic, 2011; pp. 961–968, ISBN 978-80-7231-787-5.
  27. Vagaš, M.; Sukop, M.; Baláž, V.; Semjon, J. The calibration issues of the 3D vision system by using two 2D camera sensors. Int. Sci. Her. 2012, 3, 234–237, ISSN 2218-5348.
  28. Schneider, M.; Machacek, Z.; Martinek, R.; Koziorek, J.; Jaros, R. A System for the Detection of Persons in Intelligent Buildings Using Camera Systems—A Comparative Study. Sensors 2020, 20, 3558, ISSN 1424-8220.
  29. Olivka, P.; Mihola, M.; Novák, P.; Kot, T.; Babjak, J. The 3D laser range finder design for the navigation and mapping for the coal mine robot. In Proceedings of the 17th International Carpathian Control Conference (ICCC), High Tatras, Slovakia, 29 May–1 June 2016; pp. 533–538.
  30. Davies, R.E. Computer Vision: Principles, Algorithms, Applications, Learning; Academic Press: Cambridge, MA, USA, 2018; p. 900, ISBN 978-0-12-809284-2.
  31. Du, Y.C.; Muslikhin, M.; Hsieh, T.H.; Wang, M.S. Stereo Vision-Based Object Recognition and Manipulation by Regions with Convolutional Neural Network. Electronics 2020, 9, 210, ISSN 2079-9292.
  32. Matusek, O.; Zdenek, V.; Hotar, V. Detection of glass edge corrugation for cutting distance optimization. MM Sci. J. 2017, 1734–1737.
  33. Vachálek, J.; Čapucha, L.; Krasňanský, P.; Tóth, F. Collision-free manipulation of a robotic arm using the MS Windows Kinect 3D optical system. In Proceedings of the 20th International Conference on Process Control, Strbske Pleso, Slovakia, 9–12 June 2015; pp. 96–106.
  34. Holubek, R.; Ružarovský, R.; Delgado Sobrino, D.R. An innovative approach of industrial robot programming using virtual reality for the design of production systems layout. In Advances in Manufacturing; Springer Nature: Cham, Switzerland, 2019; pp. 223–235, ISBN 978-3-030-18714-9.
  35. Xu, X.; Yang, H. Vision Measurement of Tunnel Structures with Robust Modelling and Deep Learning Algorithms. Sensors 2020, 20, 4945.
  36. Hain, J. Comparison of Common Tests for Normality. Ph.D. Thesis, Julius-Maximilians-Universität Würzburg, Institut für Mathematik und Informatik, Würzburg, Germany, 2010; 235p.
  37. Pérez, L.; Rodríguez, Í.; Rodríguez, N.; Usamentiaga, R.; García, D.F. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review. Sensors 2016, 16, 335, ISSN 1424-8220.
  38. Halenár, I.; Juhás, M.; Juhásová, B.; Vladimirovič, D.B. Virtualization of production using digital twin technology. In Proceedings of the 17th International Carpathian Control Conference (ICCC), Krakow-Wieliczka, Poland, 26–29 May 2019; pp. 1–5, ISBN 978-1-7281-0702-8.
Figure 1. Purpose of cameras.
Figure 2. Composition of the sensing principle.
Figure 3. The setting of serial interface communication.
Figure 4. Realized technical solution experimental measurement stand with CCD cameras.
Figure 5. Incorrect procedures for determination of the sensed objects at the measured scene.
Figure 6. Initial calibration of CCD cameras.
Figure 7. Detection of sensed object presence.
Figure 8. Determination of the object's coordinates.
Figure 9. Normality test for "x" axis of sensing object.
Figure 10. Normality test for "y" axis of sensing object.
Figure 11. Normality test for "z" axis of sensing object.
Table 1. Configuration of the serial interface.
1. (COM3)
2. BAUD = 9600 (available: 110, 150, 300, 600, 1200, 2400, 4800, 9600, 19200, 38400, 57600)
3. CHAR_LEN = 8 (available: 7, 8)
4. STOP_BIT = 1 (available: 1, 2)
5. PARITY = 2 (EVEN = 2, ODD = 1, NONE = 0)
6. PROC = 1 (3964R = 1, SRVT = 2, WTC = 3, XON/XOFF = 4)
Table 2. Configuration of transmission protocol 3964R for the data transfer.
1. (3964R)
2. CHAR_TIMEOUT = 500 (msec; max. interval between two symbols)
3. QUITT_TIMEOUT = 500 (msec; max. waiting time of the robotic arm control system for the symbol DLE)
4. TRANS_TIMEOUT = 500 (msec)
5. MAX_TX_BUFFER = 2 (1–5; max. value of cache output)
6. MAX_RX_BUFFER = 10 (1–20; max. value of cache input)
7. SIZE_RX_BUFFER = 1–2048 (dimension of receiving memory input, in bytes)
8. PROTOCOL_PRIOR = 1 (HIGH = 1, LOW = 0; priority)
Table 3. An example of successful communication between both systems (vision system and control system of robotic arm).
1. DEFDAT SEND
2. DECLARATION
3. INT HANDLE
4. DECL STATE_T SW_T, SC_T
5. DECL MODUS_T, MW_T
6. ENDDAT
7. DEF SEND ()
8. INITIALIZATION
9. MW_T = #SYNC
10. INSTRUCTION
11. OPEN_P ()
12. WRITE ()
13. CLOSE_P ()
14. END
Table 4. Specification of the sensed object.
3D Object | Ground Projection | Correlation Factor
(image of the sensed object) | (image of its ground projection) | 84–100
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
