Article

Intelligent Robotic Palletizer System

Department of Automation Engineering, National Formosa University, Yunlin County 632, Taiwan
*
Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(24), 12159; https://doi.org/10.3390/app112412159
Submission received: 13 November 2021 / Revised: 9 December 2021 / Accepted: 10 December 2021 / Published: 20 December 2021

Abstract

In the global wave of automation, logistics and manufacturing are indispensable industries, and the associated automated warehousing systems are in urgent demand. Robotic arms are already used in many industrial cargo-stacking operations. In traditional operations, engineers must plan the stacking path for the robotic arm, and if the size of the object changes, extra time is needed to re-plan the work path. Considerable automatic palletizing software has therefore been developed in recent years; however, none of it includes a detection mechanism for stacking correctness and personnel safety. In this research, an intelligent robotic palletizer system is developed based on a self-developed symmetrical algorithm that stacks the goods in a staggered arrangement to keep the overall structure stable. A visual stack inspection that checks the arrangement status and issues warnings is innovatively proposed to ensure the correctness of the stacking process. In addition, an AI algorithm is introduced to keep personnel out of the designated dangerous area while the robotic arm is working, improving safety during stacking. The stacking process, combined with the database and vision system, uploads the relevant data to the cloud database in real time and provides users with real-time monitoring of system information.

1. Introduction

In recent years, with the wave of “Industry 4.0” and smart manufacturing, the structure of the global manufacturing industry has been greatly impacted. With rising labor costs, traditional manufacturing factories need to transform and develop towards digitization and intelligence. Most factories have introduced a great deal of automation equipment, since traditional production methods are no longer suitable. The logistics and warehousing industry is also a focus of Industry 4.0: whether in manufacturing, logistics, or e-commerce, manpower consumption can be greatly reduced by introducing warehousing automation. In order to expand its e-commerce market, Uniqlo has cooperated with the logistics company Daifuku to transform its traditional warehouse into a fully automated logistics center, in which all the processes of procurement, logistics, and distribution to the hands of consumers can be handled by artificial intelligence and robots [1]. At present, logistics labor costs account for 65% of the average operating budget of most warehouses. With such a high proportion, low gross profit, and huge competitive pressure, the logistics industry needs to find better ways to automate the warehouse process as much as possible. As a result, the market for warehouse automation equipment is constantly growing. According to MarketsandMarkets Research, the warehouse robotics market is expected to grow from USD 4.7 billion in 2021 to USD 9.1 billion in 2026, at a CAGR of 14.0% [2]. It is also becoming more and more common for factories to introduce robotic arms into automated stacking. Using resources effectively is all the more important amid fierce competition between enterprises. Warehousing logistics is one of the factors that affect the profit of the entire supply chain, and good cargo stacking planning helps to place more cargo and save on the costs of transportation and storage. The Pallet Loading Problem (PLP) focuses on space optimization to pack the maximum quantity of goods onto a pallet [3].
With regard to solving the Pallet Loading Problem, there are mainly two options. The first is an exact depth-first algorithm [4]. A search tree is used to simulate all the possible arrangements of objects on the pallet; therefore, this method can arrange the maximum number of objects on the pallet. Because the simulation is based on a search tree, the results of the arrangement are usually unpredictable. Moreover, the algorithm does not propose methods for stacking or object coordinate marking and only discusses the arrangement of two-dimensional planes, which makes it difficult to apply. As for Steudel’s heuristic algorithm [5] and the recursive partitioning algorithm [6], the pallet is regarded as a two-dimensional plane and then partitioned and arranged. This kind of algorithm can deduce the arrangement of the objects logically; however, the number of objects arranged according to the logical rules is relatively lower than that of the heuristic algorithm. Additionally, the heuristic packing algorithm [7], which is similar to the exact depth-first algorithm, also obtains the arrangement result by using a search tree. Nevertheless, not all objects can be arranged in reversed orientation in practical operation, which does not conform to the actual container packing principle; as a result, this method is not adopted in this study. Therefore, this article improves on the shortcomings of the above four algorithms and proposes an improved symmetric algorithm; the relevant comparison is shown in Table 1. All object coordinates are established by this algorithm following the system’s actual requirements to optimize the development of automated robotic arms. The improved symmetric algorithm reduces the large wasted area caused by the one-time arrangement planning of Steudel’s algorithm: the remaining area after each arrangement is planned again until no further arrangement is possible.
Although the recursive partitioning algorithm can increase the number of objects arranged, it leaves the remaining area in a corner of the pallet, so that when the objects are stacked there are doubts about whether the structure is stable enough to support multi-layer stacking. At the same time, although that algorithm can actually be used with robotic arms for stacking, there is no staggered stacking between layers, which again raises doubts about structural stability. Therefore, the improved symmetric algorithm proposed in this paper mainly addresses this problem. The algorithm is improved in the following two ways:
  • The objects are arranged symmetrically, and only the overlapping parts of the objects are removed, instead of the regional removal proposed in the recursive partitioning algorithm. In this way, the outer layer of the stack retains a complete rectangular structure; if overlap occurs, only a few overlapping objects are changed.
  • The improved symmetric algorithm staggers the arrangement between layers to maintain the stability of the stacking structure, in accordance with the cargo stacking rules, which resolves the structural problems that arise when stacking goods with the recursive partitioning algorithm.
The purpose of this study is to improve the algorithm and develop a complete system that can be used as a robotic palletizer system. There are quite a few robotic arm palletizing products in the industry, such as the palletizing software developed by TM, YASKAWA, and KUKA [8,9,10]. Each product has its own advantages in user interface and performance; however, no real-time monitoring module is currently integrated to ensure that goods are stacked correctly during palletizing operations. In the era of big data, equipment should digitize its data so that managers can understand the operating status of the factory more clearly, which requires appropriate forms of communication to collect the data. This function is established in the communication layer, where all equipment is connected: when data are received, they are stored in the database and analyzed, and the device receives feedback at the end. Moreover, to ensure that the system is widely applicable, different communication modes are introduced so that the system can be used with robotic arms of different brands. To ensure overall safety, an object palletizing visual inspection module is innovatively proposed in this study to confirm that there is no abnormality while the robotic arm is stacking and to feed the practical stacking status back to the system as a signal.

2. Improved Symmetric Algorithm

This section introduces the concept of the improved symmetric algorithm and how it is applied to the automated palletizing system. In recent years, robotic arms have gradually replaced manual object stacking. However, programming the robotic arm's operation takes time. When palletizing, the placed objects cannot overlap, and a safe distance must be maintained. The simplest way to palletize is to place all objects in the same direction until no more objects can be arranged. In most practical situations, however, placing all objects in the same direction wastes a large part of the effective area, and a same-direction arrangement is less stable when objects are stacked than a staggered arrangement. An improved symmetric algorithm that can be applied practically is therefore developed, with reference to other algorithm theories, to replace the manual programming and planning of the robotic arm's stacking requirements and to use a staggered arrangement to improve the stability of the stack. The algorithm developed in this paper is implemented on a real workstation.
The main purpose of this paper is to place the largest quantity of goods in the effective area; therefore, the concept and detailed calculation process of the algorithm used in this system are introduced. The goal of the algorithm is to replace the current built-in stacking program of the robotic arm to achieve better area utilization. Before developing the algorithm, the system requirements and user requirements need to be understood, and the palletizing algorithm is developed based on these requirements to determine the data input to and output from the system. The requirements for this system are as follows (a minimal data-structure sketch follows the list):
  • Input data: the length, width, and height of the effective area, and the size of the arranged objects;
  • Output data: object arrangement result, object arrangement coordinate list;
  • Combine the arrangement of vertical and horizontal to increase the utilization rate of effective area;
  • Use staggered stacking to increase the stability of objects when palletizing;
  • The size of the space between objects can be set.
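As a reading aid, the following is a minimal Python sketch of the input and output data implied by the requirements above; the class and field names are our own assumptions, not part of the published system.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PalletSpec:
    """Effective area of the pallet (input data)."""
    length: float     # corresponds to P_x
    width: float      # corresponds to P_y
    height: float     # usable stacking height

@dataclass
class BoxSpec:
    """Size of the arranged objects (input data)."""
    length: float     # corresponds to B_x
    width: float      # corresponds to B_y
    height: float
    gap: float = 0.0  # optional spacing between objects

@dataclass
class ArrangementResult:
    """Output data: each placement is (x, y, r), where r = 0 is horizontal, 1 is vertical."""
    placements: List[Tuple[float, float, int]] = field(default_factory=list)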

2.1. Variable Name and Basic Concepts

The main concept of the improved symmetrical arrangement algorithm proposed in this article is that the area in which cargo can be placed is divided into four areas, defined as shown in Figure 1. Starting from the lower-left corner, the areas are Block 1 to Block 4 in sequence. The objects in Block 1 and Block 3 are placed horizontally, and the objects in Block 2 and Block 4 are placed vertically. Placing the objects in different orientations increases the stability of the palletized objects. Many symbols are used when introducing the algorithm in this section, so all of them are explained first in Appendix A.

2.2. Basic Concepts of the Improved Symmetrical Arrangement Algorithm

The overall process of this algorithm is shown in Figure 2. First, it is determined whether the effective area is sufficient for the arrangement of objects. Then, the best arrangement of the object portfolio is planned according to the side lengths of the effective area. Each arrangement calculated by the algorithm has one of four results, as shown in Figure 3, classified according to how the objects are arranged along the side lengths. Based on these results, the subsequent actions are divided into two cases: objects overlapping and non-overlapping. One of four solution procedures is then chosen according to the overlap case. If there are overlapping objects after the first arrangement, the quantity of objects to be removed is based on the number of overlapping objects. The remaining effective area is then calculated again to determine whether the arrangement can be continued. Finally, after the arrangement is finished, the object coordinates are marked and fed back to the system.

2.3. Improved Symmetrical Arrangement Algorithm

According to the rules, the pallet is divided into four areas and arranged symmetrically, which means that Block 1 and Block 3 have the same size and area, and Block 2 and Block 4 have the same size and area. The symmetrical arrangement yields only four possible results, as shown in Figure 4, in which the slashed areas indicate overlap. The four cases are distinguished by two factors: whether Block 1 and Block 3 or Block 2 and Block 4 occupy the larger area, and whether overlap occurs. From the sizes of L_1, L_2, W_1, and W_2, it can be determined whether the arrangement results overlap. The conditional expressions are given in Algorithm 1, and a Python sketch of this classification follows it.
Algorithm 1. Overlap calculation
if (L_1 > L_2)
    if (W_1 ≥ W_2)
        Block 1 and Block 3 are larger in area and non-overlapping, as A1 shown in Figure 4a.
    else if (W_1 < W_2)
        Block 1 and Block 3 are larger in area and overlapping, as B1 shown in Figure 4b.
if (L_1 ≤ L_2)
    if (W_1 ≤ W_2)
        Block 2 and Block 4 are larger in area and non-overlapping, as A2 shown in Figure 4c.
    else if (W_1 > W_2)
        Block 2 and Block 4 are larger in area and overlapping, as B2 shown in Figure 4d.
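The same classification can be written compactly in Python; this is a sketch that mirrors the four cases of Algorithm 1, and the function name is our own.

def classify_arrangement(L1, L2, W1, W2):
    """Return the symmetrical-arrangement case of Figure 4."""
    if L1 > L2:
        if W1 >= W2:
            return "A1"  # Blocks 1 and 3 larger, no overlap (Figure 4a)
        return "B1"      # Blocks 1 and 3 larger, overlap (Figure 4b)
    if W1 <= W2:
        return "A2"      # Blocks 2 and 4 larger, no overlap (Figure 4c)
    return "B2"          # Blocks 2 and 4 larger, overlap (Figure 4d)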

2.3.1. Solution to Overlap

When the algorithm performs the combination operation, sometimes only one permutation and combination is possible, and it causes overlap. It is then necessary to deal with the overlap and remove the overlapping objects. The removal is also performed symmetrically: if Block 1 and Block 3 overlap, the object removal starts, and each removal step removes one object from Block 1 and one from Block 3 at the same time; these objects lie on the diagonal of the pallet. The overlapping problems are divided into three categories, namely horizontal multi-object overlap, vertical multi-object overlap, and single object overlap, as shown in Figure 5, Figure 6 and Figure 7. The following are the solutions to these three problems:
  • Horizontal multi-object overlap
    Horizontal multi-object overlap is shown in Figure 8. When Block 1 and Block 3 overlap horizontally, the number of overlapping objects in a single block is calculated first; as shown in Equations (1) and (2), n × m gives the number of overlapping objects. Because of the symmetry, Block 1 and Block 3 remove objects at the same time: Block 1 removes the overlapping objects column by column starting from the upper-right corner, while Block 3 removes them column by column starting from the lower-left corner. Owing to the symmetry, the number of columns removed is n/2 (rounded up). Equation (3) gives R, the number of objects that must be removed from a single block to eliminate the overlap (a Python sketch of Equations (1)–(6) follows this list).
    n = (2L_1 − P_x) / B_x (rounded up)    (1)
    m = (2W_1 − P_y) / B_y (rounded up)    (2)
    R = (n / 2) × m (rounded up)    (3)
  • Vertical multi-object overlap
    Vertical multi-object overlap is shown in Figure 9. When Block 2 and Block 4 overlap vertically, the number of overlapping objects in a single block is calculated first using Equations (4) and (5); n and m again indicate the amount of overlap. Block 2 removes objects row by row starting from the upper-left corner of the overlap, while Block 4 removes them row by row starting from the lower-right corner. Due to the symmetry, the number of rows removed is m/2 (rounded up). Equation (6) gives R, the number of objects that must be removed from a single block to eliminate the overlap.
    n = (2L_2 − P_x) / B_x (rounded up)    (4)
    m = (2W_2 − P_y) / B_y (rounded up)    (5)
    R = n × (m / 2) (rounded up)    (6)
  • Single object overlap
    Single object overlap can occur in both horizontal and vertical overlap, and this situation does not have to be resolved symmetrically; an independent method is used instead. The feature of single object overlap is that both n and m are 1, but they still need to be computed with the corresponding equations: Equations (1) and (2) for horizontal overlap, and Equations (4) and (5) for vertical overlap. Figure 10a shows a single object overlapping vertically. As can be seen from Figure 10a, there is no need to remove the same number of objects from both blocks as in the previous two methods; only one of the two overlapping objects needs to be removed. This algorithm gives priority to Block 1 and Block 2, so when a single object overlaps, the overlapping object of Block 3 or Block 4 is removed. When the object is removed, as shown in Figure 10b, the overlap problem is completely eliminated.
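The following short Python sketch expresses Equations (1)–(6); it is an illustrative reading of the equations rather than the authors' code, and the function names are our own.

import math

def horizontal_overlap_counts(L1, W1, Px, Py, Bx, By):
    # Equations (1)-(3): Blocks 1 and 3 overlap horizontally
    n = math.ceil((2 * L1 - Px) / Bx)   # overlapping columns
    m = math.ceil((2 * W1 - Py) / By)   # overlapping rows
    R = math.ceil((n / 2) * m)          # objects removed per block
    return n, m, R

def vertical_overlap_counts(L2, W2, Px, Py, Bx, By):
    # Equations (4)-(6): Blocks 2 and 4 overlap vertically
    n = math.ceil((2 * L2 - Px) / Bx)
    m = math.ceil((2 * W2 - Py) / By)
    R = math.ceil(n * (m / 2))
    return n, m, R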

2.3.2. Calculation of Remaining Effective Area

The remaining effective area calculation is the last step after each round of arrangement is completed. Its function is to determine whether the remaining effective area is large enough to be arranged again. If it is, a new effective area is generated and represented by P_x1 and P_y1, and the procedure returns to the object arrangement step to find the best arrangement and continue. If the calculation shows that the new effective area can no longer be arranged, the procedure moves to the next step, “Object Coordinate Marking”. Depending on the two factors determined at the beginning, namely whether the horizontal or the vertical area is larger and whether overlap occurs, the length and width of the remaining effective area are calculated with the following four sets of equations (a sketch of the non-overlapping cases follows the list):
  • Block 1 and Block 3 are larger in area and non-overlapped
    When the horizontal area is larger and does not overlap, the new effective area is determined by m_1 and m_2: m_1 determines the length P_x1, and m_2 determines the length P_y1, as given in Equations (7) and (8). After calculation, the area when Block 1 and Block 3 are non-overlapped is as shown in Figure 11.
    P_x1 = P_x − 2 × (m_1 × B_y)    (7)
    P_y1 = P_y − 2 × (m_2 × B_y)    (8)
  • Block 2 and Block 4 are larger in area and non-overlapped
    In the non-overlapping cases, the calculation of the new effective area is roughly the same; the difference lies in the key factors that determine P_x1 and P_y1. When the vertical area is larger and does not overlap, the new effective area is determined by n_1 and n_2: n_1 determines the length P_x1, and n_2 determines the length P_y1, as given in Equations (9) and (10). After calculation, the area when Block 2 and Block 4 are non-overlapped is as shown in Figure 12. Once the length and width of the new area are confirmed, it can be known whether the area can continue to be arranged.
    P_x1 = P_x − 2 × (n_1 × B_x)    (9)
    P_y1 = P_y − 2 × (n_2 × B_x)    (10)
  • Block 1 and Block 3 overlapped
    In the case of overlap, the remaining effective area is the area left after the overlapping objects are removed; when overlap occurs, some areas cannot be used. The remaining-area calculation determines the new area size according to n, m, n_1, and m_2. Taking P_x1 as an example, its effective length is the length remaining after the overlapping objects are removed, and P_y1 is the length remaining after m is subtracted from m_2, as given in Equations (11) and (12). After calculation, the area when Block 1 and Block 3 are overlapped is as shown in Figure 13.
    P_x1 = P_x − 2 × ((n_1 − n/2) × B_x)    (11)
    P_y1 = P_y − 2 × ((m_2 − m) × B_y)    (12)
  • Block 2 and Block 4 overlapped
    The area calculation for vertical overlap is roughly the same as that for horizontal overlap. The remaining-area calculation determines the new area size according to n, m, m_1, and n_2, as given in Equations (13) and (14), where m/2 is rounded up. After calculation, the area when Block 2 and Block 4 are overlapped is as shown in Figure 14. As can be seen in Figure 14, the remaining blank areas outside the frame are invalid areas and cannot be arranged.
    P_x1 = P_x − 2 × ((m_1 − n/2) × B_y)    (13)
    P_y1 = P_y − 2 × ((n_2 − m/2) × B_x)    (14)
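As an illustration, the two non-overlapping cases, Equations (7)–(10), can be expressed as the short Python sketch below; the function names are ours, and the overlapping cases of Equations (11)–(14) are omitted for brevity.

def remaining_area_blocks13_larger(Px, Py, m1, m2, By):
    # Equations (7) and (8): Blocks 1 and 3 larger, no overlap
    Px1 = Px - 2 * (m1 * By)
    Py1 = Py - 2 * (m2 * By)
    return Px1, Py1

def remaining_area_blocks24_larger(Px, Py, n1, n2, Bx):
    # Equations (9) and (10): Blocks 2 and 4 larger, no overlap
    Px1 = Px - 2 * (n1 * Bx)
    Py1 = Py - 2 * (n2 * Bx)
    return Px1, Py1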

2.3.3. Object Coordinate Marking

This system can be adopted by current robotic arms to perform the palletizing function, and the robotic arm ultimately receives coordinate data to perform its actions; therefore, the system finally transmits the coordinate data to the robotic arm. At the end of the algorithm, when the objects have been arranged in the effective area, the last step, object coordinate marking, is carried out. A further point is that the orientation of the objects placed in Block 2 and Block 4 differs from the horizontal placement; as a result, the final data transmitted to the robotic arm are X, Y, Z, and R (rotation), a total of four values.
  • Arrangement of each area
    At the beginning of the algorithm, the pallet is divided into four areas. Different quadrants are used by the coordinate marking according to different areas. As shown in Figure 15, P x and P y are the maximum distance that the object can be placed in the effective area, rather than the original size of the effective area. In order to simplify the marking method, each area is in a different quadrant. It can be seen from Table 2 that the objects of Block 1 are arranged in the first quadrant, Block 2 in the second quadrant, Block 3 in the third quadrant, and Block 4 in the fourth quadrant; therefore, the arrangement direction of each object will be different.
    However, each area is arranged from the origin to the center of the effective area. At the same time, there are different arrangements for horizontal overlap and vertical overlap. The coordinates are as shown in Figure 16.
  • Non-overlapping object coordinate marks
    Non-overlapping coordinate marking is relatively simple because there is no overlap problem; regardless of whether the horizontal area or the vertical area is larger, the equations are the same. When arranging horizontally, a vertical column is filled first before switching to the next column to continue the arrangement; when arranging vertically, a horizontal row is filled first before switching to the next row. After the loop calculation, the coordinates of Block 1 to Block 4 are obtained as shown in Algorithm 2.
    Algorithm 2. Non-overlapping coordinate calculation
    // Block 1
    for (i = 0; i < n1; i++)
        for (j = 0; j < m2; j++)
            x = Hx + i × Bx
            y = Hy + j × By
            B1_pos.Add(new List<int>() { x, y })
    // Block 2
    for (i = 0; i < m1; i++)
        for (j = 0; j < n2; j++)
            x = (Px − Vx) − i × By
            y = Vy + j × Bx
    // Block 3
    for (i = 0; i < n1; i++)
        for (j = 0; j < m2; j++)
            x = (Px − Hx) − i × Bx
            y = (Py − Hy) − j × By
    // Block 4
    for (i = 0; i < m1; i++)
        for (j = 0; j < n2; j++)
            x = Vx + i × By
            y = (Py − Vy) − j × Bx
    The index i represents the number of boxes counted along the horizontal axis, and j represents the number counted along the vertical axis. After substitution into the respective equations, the obtained coordinates are stored in the database. The arrangement sequence starts from Block 1, as shown in Figure 17. From the order of object placement, it can be seen that the placement order of each block differs slightly: Block 1 and Block 3 first fill a vertical column and then move to the next column to continue the arrangement, while Block 2 and Block 4 first fill a horizontal row and then move to the next row. At the end of the calculation for each object, its coordinates are written into the temporary storage list of the individual block, such as B1_pos.Add(new List<int>() {x, y}) in Algorithm 2. A Python rendering of the Block 1 loop, extended with the rotation flag, is given after this list.
  • Horizontal overlap object mark
    When marking objects with horizontal overlap, Block 1 and Block 3 must each be divided into two arrangements, columns with removed objects and columns without removed objects, because the arrangement moves to the next column only after a vertical column is completed. The number of columns without removals is determined by n, while the number of objects in the columns with removals is determined by m. Taking Block 1 as an example, the procedure is given in Algorithm 3:
    Algorithm 3. Horizontal overlap coordinate calculation
    for (i = 0; i < n1; i++)
        if (i < (n1 − n))
            // columns without removed objects
            for (j = 0; j < m2; j++)
                x = Hx + i × Bx
                y = Hy + j × By
        else
            // columns in which overlapping objects were removed
            for (j = 0; j < (m2 − m); j++)
                x = Hx + i × Bx
                y = Hy + j × By
    From (n_1 − n), the columns with and without removed objects can be distinguished, and from (m_2 − m), the number of objects that can still be arranged in the columns with removals can be known. In the example of Figure 18, there are two columns with removed objects: under normal circumstances five objects can be arranged per column, but the columns with removals can hold only four.
  • Vertical overlap object mark
    When marking objects with vertical overlap, Block 2 and Block 4 must each be divided into two arrangements, rows with removed objects and rows without removed objects, because the arrangement moves to the next row only after a horizontal row is completed. The number of rows without removals is determined by m, while the number of objects in the rows with removals is determined by n. Taking Block 2 as an example, the procedure is given in Algorithm 4:
    Algorithm 4. Vertical overlap coordinate calculation
    for (i = 0; i < n2; i++)
        if (i < (n2 − m))
            // rows without removed objects
            for (j = 0; j < m1; j++)
                x = (Px − Vx) − j × By
                y = Vy + i × Bx
        else
            // rows in which overlapping objects were removed
            for (j = 0; j < (m1 − n); j++)
                x = (Px − Vx) − j × By
                y = Vy + i × Bx
    From (n_2 − m), the rows with and without removed objects can be distinguished, and from (m_1 − n), the number of objects that can still be arranged in the rows with removals can be known. Some objects are removed because the dashed area in Block 2 overlaps. It can be seen that there is one row without removals and one row with removals: normally four objects can be arranged per row, but the row with removals can hold only one object, as shown in Figure 19.
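To connect Algorithm 2 with the X, Y, Z, R data described at the start of this subsection, the following Python sketch renders the Block 1 loop and attaches the rotation flag (0 for horizontal, 1 for vertical); the function name and the tuple format are our own choices, not the authors'.

def block1_coordinates(n1, m2, Hx, Hy, Bx, By):
    """Coordinates of Block 1 objects (placed horizontally, first quadrant)."""
    coords = []
    for i in range(n1):        # columns along the horizontal axis
        for j in range(m2):    # objects within one column
            x = Hx + i * Bx
            y = Hy + j * By
            coords.append((x, y, 0))   # R = 0: horizontal placement
    return coords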

2.3.4. Staggered Stacking

In a factory, all materials or goods undergo a period of stacking and storage in the process of production, delivery, and sales. Especially in the manufacturing and warehousing industries, warehouses or storage areas are often set up. However, if the goods are not properly managed when they are stacked, they may easily collapse and endanger the safety of personnel. Therefore, most factories use industrial plastic wrap or ropes to fix the objects after stacking. When the pallets are stacked, in order to maintain the stability of the pallet stacking, the layers are staggered, as shown in Figure 20, instead of being placed in the same direction. Therefore, the algorithm also imitates the actual cargo stacking action and performs mirroring processing, as shown in Figure 21. This is also the advantage of symmetrical arrangement. When staggered, there is no need to calculate another arrangement for staggering.
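A minimal sketch of the mirroring used for staggered layers is given below: odd-numbered layers reuse the even-layer coordinates mirrored about the pallet's vertical centre line. The mirroring formula is our formulation of the idea in Figure 21, not code from the paper.

def mirror_layer(coords, Px, Bx, By):
    """Mirror a layer of (x, y, r) placements about the pallet centre line."""
    mirrored = []
    for x, y, r in coords:
        footprint_x = Bx if r == 0 else By   # horizontal footprint depends on rotation
        mirrored.append((Px - x - footprint_x, y, r))
    return mirrored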

3. Vision System

The vision system includes two functions: inspecting the placing status (visual stack inspection module) and an AI working area safety mechanism (AI working area safety module). Figure 22 shows the process flowchart for the segmenting workstation. The main controller PC orders the robotic arm to pick up objects. After picking up, the robotic arm moves to the vision system to carry out CNN recognition, and the recognition result leads the robotic arm to place the object at a fixed point. When the object has been placed, the resulting data are uploaded to the database. The vision system activates when it receives data from the database and detects the object's placing status. If the placing status is correct, the result is uploaded to the database and shown to the user; otherwise, an error window informs the user of the placing-status error.

3.1. Visual Stack Inspection Module

Before the image can be identified, the input picture needs to be processed, as the original image cannot be used by the system directly; instead, compatible data are produced during image processing. Several steps are involved in analyzing the picture to acquire useful data. As shown in the flowchart in Figure 23, the original image is first cropped to a specific size, which allows the image processing to focus on the object. Next, gray scale, Gaussian blur, and edge detection functions are performed on the cropped image. These functions are outlined in the following sections.

3.1.1. Gray Scale

Gray scale is an image representation using 8-bit color, which allows 256 different gray levels. The full-color image captured by the web camera is converted to gray scale to reduce the number of similar colors and enhance the contrast and brightness; ignoring or omitting the gray scale process makes recognition more difficult. The gray scale process is shown in Figure 24.

3.1.2. Gaussian Blur

Gaussian blur is commonly used to reduce the noise present in an image. The technique 'fuzzes' the image, as though it were viewed through a semi-transparent screen. After the Gaussian blur, the image is effectively passed through a low-pass filter in the Fourier sense, which smooths the image and increases the accuracy of the edge extraction. An image treated with Gaussian blur is shown in Figure 25.

3.1.3. Edge Detection

When detecting objects, the intensity at the edge of an object is quite different from that of the rest of the image, so the object's edge can be distinguished from the edge of the placing area. As can be seen in Figure 25, the object and the placing area are contrasting colors (the object is white and the placing area is black), making it easy to find the object's edge. The OpenCV environment offers three methods for identifying edges: Laplacian, Sobel, and Canny. This study uses the Canny function to find the edge. Canny uses a gradient method that detects changes in the first derivative of the intensity after calculating the brightness of the pixels. The edge detection result is shown in Figure 26.

3.1.4. Fitting Square

The aim of the final step in the image processing is to fit a bounding square around the object once the Canny function is complete. The square provides the object's coordinates, length, width, and angle of rotation. The information provided by the square is depicted in Figure 27, and the process of boxing the object is shown in Figure 28.
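The whole inspection pipeline of Sections 3.1.1 to 3.1.4 can be sketched with standard OpenCV calls as follows; the crop window and the Canny thresholds are placeholder values, not the settings used by the authors.

import cv2

def fit_object_square(frame, roi=(100, 100, 400, 400)):
    x0, y0, w, h = roi
    cropped = frame[y0:y0 + h, x0:x0 + w]              # crop to the placing area
    gray = cv2.cvtColor(cropped, cv2.COLOR_BGR2GRAY)   # gray scale (Section 3.1.1)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # Gaussian blur (Section 3.1.2)
    edges = cv2.Canny(blurred, 50, 150)                # Canny edge detection (Section 3.1.3)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), (bw, bh), angle = cv2.minAreaRect(largest)   # fitting square (Section 3.1.4)
    return {"center": (cx, cy), "size": (bw, bh), "angle": angle}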

3.2. AI Working Area Safety Module

With the rapid evolution of CNNs in recent years, the accuracy and speed of object detection have grown significantly. R-CNN and Fast R-CNN first capture proposals (possible locations), then classify each proposal, and finally correct the position of the bounding box (framed object). Although their accuracy is excellent, their speed is poor and the training difficulty is quite high due to their structural characteristics. In this study, YOLO-v4 is used for the AI vision. Compared with the CNNs mentioned above, YOLO's approach is more straightforward: the input picture is split into S × S grids, and each grid determines the locations and classes of the bounding boxes, which avoids the high background-error detection rate of traditional CNN approaches. Object detection is effectively reformulated as a regression problem, which achieves good results in both accuracy and speed. The AI working area safety mechanism is implemented using Darknet or related packages in Python.
When controlling automated equipment in a laboratory or factory, aging equipment often leads to errors in the operation process, which in turn cause wasted time, machine malfunctions, industrial workplace accidents, and other negative effects. Therefore, the AI working area safety module architecture is introduced to monitor conditions on the production line. First, a large number of photos are collected for training; the ratio of training to validation data is 8:2. After a relatively good model is found, the model file is loaded by the compiled Python program for detection. The system architecture diagram is shown in Figure 29. The images from the webcam are received by a Raspberry Pi, which uses YOLO-v4 to determine whether personnel enter the processing range. If the module detects an abnormality, the robotic arm is stopped and the coordinate position is sent to the database to notify the user to check.
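As an illustration of the detection-and-stop logic, the following sketch uses OpenCV's DNN module with YOLOv4-tiny weights; the file names, the COCO class id 0 for "person", and the danger-zone rectangle are assumptions made for this example, not values from the paper.

import cv2

net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

DANGER_ZONE = (200, 150, 300, 250)   # x, y, w, h of the red box in Figure 40 (assumed)

def person_in_danger_zone(frame):
    classes, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    dx, dy, dw, dh = DANGER_ZONE
    for class_id, box in zip(classes, boxes):
        if int(class_id) != 0:       # keep only "person" detections
            continue
        x, y, w, h = box
        # axis-aligned rectangle intersection between the detection and the danger zone
        if x < dx + dw and x + w > dx and y < dy + dh and y + h > dy:
            return True              # signal the system to stop the robotic arm
    return False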

4. Experiment Results

The robotic palletizer system proposed in this study is divided into three parts: the palletizing system, the database (MySQL), and the vision system. Each system has been modularly designed and can be applied to different occasions, including the application of the palletizing system to different types of robotic arms, the use of the vision system in other palletizing systems, etc., which are all expected applications. The simulation of combining the entire software system with the palletizing workstation includes the operation, simulation, real-time movement of the human–machine interface, Modbus communication, the result of the practical arrangement of the robotic arm, and the database. The results of the experiment will be presented through the process of practical simulation. The size of the simulation object used in this demo is 80 × 50 × 17 (mm), and the size of the tray is 250 × 220 (mm). Each function will be explained below.

4.1. Communication

The communication structure is shown in Figure 30. Currently, Modbus ASCII and Modbus RTU are the most commonly used communication methods in industry, and the three systems in this study communicate over network transmission. The first step is to establish a connection between the database and the robotic arm. In the communication function of the robotic arm, the user can select the robotic arm model and the communication settings, such as IP and port, so that the robotic arm can be chosen according to the user's needs. Since the communication codes of different brands of robotic arms differ, this study formulates a unified communication rule, a Modbus abstract class named Absmodbus, to facilitate subsequent addition and maintenance. The database is responsible for recording production data to provide the system and users with references; therefore, the communication function also includes establishing a connection with the database.
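The idea of the Absmodbus abstraction can be sketched as follows; the method names and the UR10 subclass are illustrative assumptions rather than the authors' actual class design.

from abc import ABC, abstractmethod

class AbsModbus(ABC):
    """Common interface that every brand-specific robot driver implements."""

    def __init__(self, ip: str, port: int):
        self.ip = ip
        self.port = port

    @abstractmethod
    def connect(self) -> bool:
        """Open the Modbus connection to the robot controller."""

    @abstractmethod
    def send_target(self, x: float, y: float, z: float, r: int) -> None:
        """Write one placement coordinate (X, Y, Z, R) to the robot."""

class UR10Modbus(AbsModbus):
    def connect(self) -> bool:
        # open a Modbus TCP connection to self.ip:self.port (e.g. with pymodbus)
        return True

    def send_target(self, x, y, z, r):
        # map (x, y, z, r) onto the holding registers expected by the UR10 program
        pass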

4.2. Operation of User Interface (UI)

A user interface (UI) is the bridge of communication between the system and its users, so it is necessary to understand users' habits and how they interact with the system. An intuitive design meets users' needs, while at the same time the designer must have a basic knowledge and understanding of how the system works. Letting users operate intuitively and shaping the interface according to users' needs and preferences maximizes the benefits of the system. Therefore, when developing the user interface, all the functions of the system were divided into a Robot Status interface and a Palletizing interface. All relevant functions of the robotic arm, such as model selection, connection establishment, communication transmission, and control, are included in the arm control interface. To allow users to use the interface intuitively, small windows in the interface classify all the functions.
In the Palletizing interface, shown in Figure 31, the main functions are to input the effective area of the pallet, the size of the object, and the database table, and to issue the execution command. Although general pallets have standard specifications, special in-house specifications cannot be ruled out; therefore, manual data input is used instead of a drop-down menu. In addition, a palletizing simulation is included in the interface, so the user knows in advance how the objects are going to be palletized and can understand the resulting array of objects.
After opening the user interface, the user must first select the brand of the robotic arm and enter the IP and port numbers to establish a connection with the robotic arm. A UR10 robotic arm is used in this research, and Modbus TCP is used as the communication protocol. After the connection is established, the robotic arm's basic data and other information can be obtained through the interface. After the simulated sizes of the pallet and object are input into the palletizing interface, the single-layer simulation result in Figure 32 shows that the first arrangement starts at the coordinate (0, 0) and spans (240, 210). On the horizontal axis there are two horizontally arranged objects and one vertically arranged object, and on the vertical axis there are one vertically arranged object and two horizontally arranged objects. Overlap occurs in the first arrangement because the number of horizontal arrangements of Block 1 is 2 (N1 Overload = 2) and that of Block 2 is 1 (M1 Overload = 1). After calculation, the first arrangement adds eight objects to the pallet. The second arrangement covers the area from (90, 60) to (60, 90) and adds one more object to the pallet. The coordinates of each object are (X, Y, R), where R is the rotation flag: 0 is the horizontal arrangement and 1 is the vertical arrangement.
Figure 31. User interface of palletizing.
Figure 32. Simulation results.

4.3. Database

To allow users to track objects and backtrack operation records, or to make it easier for other developers to integrate data, a database is imported into the system to save the production records; choosing a suitable database is therefore important. MySQL is used in this system and is managed by PhpMyAdmin, a web-based database management tool. Through the web page, the user can operate the back-end database by setting the function options. Compared with writing MySQL commands directly, this approach allows users to get started quickly and then extend functions such as data extraction, analysis, and modification for the back-end database.
Different data can be seen in the designed database, such as the palletizing data shown in Figure 33 and the vision system data shown in Figure 34. If the Score is 0, the result of the visual inspection is unqualified; in that case a warning pops up and the robotic arm is stopped, which is also displayed on the user interface.
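For illustration, one way to write a vision-inspection record into the MySQL database could look like the following sketch using the mysql-connector-python package; the connection settings and the table and column names are assumptions, not the schema shown in Figures 33 and 34.

import mysql.connector

def log_vision_result(layer_index: int, score: int) -> None:
    conn = mysql.connector.connect(host="localhost", user="palletizer",
                                   password="secret", database="palletizer_db")
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO vision_result (layer, score) VALUES (%s, %s)",
        (layer_index, score),      # score = 0 means the visual inspection failed
    )
    conn.commit()
    conn.close()
    if score == 0:
        print("Palletizing failure: stop the robotic arm and warn the user")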

4.4. Experiments

The system architecture and a photograph of the workstation are shown in Figure 35 and Figure 36, respectively. The high flexibility and integration of the PC-based control system are utilized in development, which means the system can be connected to various robotic arms and devices via Ethernet; in addition, the coordinate data are transmitted to the robotic arm to move the arm to the target point or to start other devices. The workstation is mainly divided into the main controller, the feeding end, the robotic arm, the placing area, and the vision system. The main controller, a laptop, is mainly responsible for calculating the robotic arm position and for the overall communication interface. At the feeding end, the conveyor belt is responsible for determining whether a workpiece has entered and for preliminary alignment. The robotic arm receives the coordinates of the object, picks it up, and places it at the designated point. The placing area is where the workpieces are stacked. Furthermore, a vision system is added to determine whether the stacked workpieces are dislocated or offset; if an object is offset, the system stops the action and warns the user.
The practical process is shown in Figure 37. Before starting, the model and IP of the robotic arm must be selected on the Robot Status page of the user interface, as shown in the red box of (I). Then, the length, width, height, and other information of the pallet and the box are input on the Palletizing page, as shown in the red box of (II). After Confirm is pressed, the PLC controls the conveyor belt to move the wood block to the picking area of the robotic arm, as shown in (III). Following the pick-up motion, the object is placed in the specified position according to the calculated coordinates. When a single layer of palletizing is completed, the palletizing status is checked. If it is normal, the number of layers is displayed, as shown in the red box of (IV), and the palletizing operation continues. If there is an error in the palletizing process, a "Palletizing Failure" warning message pops up, as shown in (V), and the palletizing process is paused. One such paused situation is shown in Figure 38; at this point, the system waits for the staff to confirm. If the palletizing state is normal, as shown in Figure 39, the next palletizing process continues.
The AI working area safety module is introduced into the visual detection system in this study. Through modular design, different levels of equipment can be substituted according to the requirements: for example, the computer that processes the signal can be a Raspberry Pi, a laptop, or a computer with high computing power; either a 2D webcam or a 3D camera that captures depth values can be used; and the captured image can be processed with the YOLOv4 or YOLOv4-tiny AI algorithm. Moreover, sensors such as gyroscopes and microphones can also be installed for machine-vibration detection and fault prediction, which greatly improves the practicality. This system has been demonstrated in the laboratory and at a KUKA robotic arm grinding station. The left side of Figure 40 shows the physical AI working area safety module. As shown on the right side of Figure 40, the module monitors whether people enter the warning area (yellow box) or the danger area (red box) within the processing range of the KUKA robotic arm grinding station (black box). Once a person enters the danger area, a red light is turned on and the robotic arm is stopped immediately.

5. Conclusions

The intelligent robotic palletizer system for different robotic arms has been successfully developed in this research. It plans the object placement coordinates according to the improved symmetric algorithm to stack the objects, and it uses visual inspection after each layer is completed to prevent the risk of collapse or collision caused by incorrect stacking. For the safety of workers, an AI working area safety module has been innovatively constructed in the intelligent robotic palletizer system to prevent personnel from entering the robotic arm's working area. Finally, all relevant data of the intelligent robotic palletizer system are stored in a MySQL database to provide a real-time operating-status display and historical data analysis.

Author Contributions

Conceptualization, J.-D.L.; Methodology, J.-D.L., C.-H.C. and E.-S.C.; Software, C.-H.C. and E.-S.C.; Validation, C.-C.K. and C.-Y.H.; Writing—original draft, J.-D.L., C.-H.C. and E.-S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Technology of Taiwan under grant number MOST-110-2221-E-150-037 (from August 2021 to July 2022).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The author at the Mechatronics and Automation Laboratory, located at the Department of Automation Engineering, National Formosa University, Taiwan, was the subject of the experiments. The author consented to participate in this research study.

Acknowledgments

The authors would like to acknowledge the financial support of the Ministry of Science and Technology of Taiwan through grant number MOST 110-2221-E-150-037.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Variable name explanation.
Variable Name | Explanation
P_x | The length of the effective area in which objects can be arranged (the length of the pallet).
P_y | The width of the effective area in which objects can be arranged (the width of the pallet).
B_x | Object length.
B_y | Object width.
L_1 | The horizontal length of Block 1 and Block 3, as shown in Figure A1.
L_2 | The horizontal length of Block 2 and Block 4, as shown in Figure A1.
W_1 | The vertical length of Block 2 and Block 4, as shown in Figure A1.
W_2 | The vertical length of Block 1 and Block 3, as shown in Figure A1.
n_1 | The quantity of objects contained in the length L_1.
m_1 | The quantity of objects contained in the length L_2.
n_2 | The quantity of objects contained in the length W_1.
m_2 | The quantity of objects contained in the length W_2.
P_x1 | The horizontal length of the remaining area, as shown in Figure A1.
P_y1 | The vertical length of the remaining area, as shown in Figure A1.
n | Quantity of objects that overlap horizontally.
m | Quantity of objects that overlap vertically.
R | The number of objects that need to be removed.
P_x | The horizontal length of the objects arranged in the effective area, as shown in Figure A1.
P_y | The vertical length of the objects arranged in the effective area, as shown in Figure A1.
H_x | When the object is placed horizontally, the horizontal length to the center point, as shown in Figure A1.
H_y | When the object is placed horizontally, the vertical length to the center point, as shown in Figure A1.
V_x | When the object is placed vertically, the horizontal length to the center point, as shown in Figure A1.
V_y | When the object is placed vertically, the vertical length to the center point, as shown in Figure A1.
Figure A1. Schematic diagram of object.

References

  1. Uniqlo Replaced 90% of Staff at its Tokyo Warehouse with Robots. Available online: https://www.weforum.org/agenda/2018/10/uniqlo-replaced-90-of-staff-at-its-newly-automated-warehouse-with-robots (accessed on 12 October 2018).
  2. Warehouse Robotics Market by Type (Mobile, Articulated, Cylindrical, SCARA, Parallel, Cartesian), Software, Function (Pick & Place, Palletizing & Depalletizing, Transportation, Packaging), Payload Capacity, Industry, and Region—Global Forecast to 2022. Available online: https://www.marketsandmarkets.com/Market-Reports/warehouse-robotic-market-128876258.html%20 (accessed on 1 September 2021).
  3. Vargas-Osorio, S.; Zúñiga, C. A literature review on the pallet loading problem. Lámpsakos 2016, 15, 69–80.
  4. Laurent, D.; Iyengar, S. A heuristic algorithm for optimal placement of rectangular objects. Inf. Sci. 1982, 26, 127–139.
  5. Bhattacharya, S.; Roy, R.; Bhattacharya, S. An exact depth-first algorithm for the pallet loading problem. Eur. J. Oper. Res. 1998, 110, 610–625.
  6. Birgin, E.G.; Lobato, R.D.; Morabito, R. Generating unconstrained two-dimensional non-guillotine cutting patterns by a Recursive Partitioning Algorithm. J. Oper. Res. Soc. 2012, 63, 183–200.
  7. Wu, R.H.; Lin, H.Y. System Analysis and Design-Theory and Application; Bestwise Publishing Co.: Taipei, China, 2001.
  8. What is TM Palletizing Operator? Available online: https://www.tm-robot.com/zh-hant/tm-palletizing-operator (accessed on 1 September 2021).
  9. CMD Yaskawa Motoman Robotic Palletizer. Available online: https://www.youtube.com/watch?v=bhBqc6ocBeY&ab_channel=YaskawaAmerica%2CInc (accessed on 28 September 2018).
  10. A Strong Partner in Automation—KUKA Robot Makes Medium-Sized Malaysian Company Fit for the Future. Available online: https://www.youtube.com/watch?v=aTL_PS04AZ4&t=16s&ab_channel=KUKA-Robots%26Automation (accessed on 19 March 2020).
Figure 1. Symmetrical arrangement area division.
Figure 2. Improved symmetrical arrangement algorithm flow chart.
Figure 3. Object arrangement results.
Figure 4. Symmetrical arrangement results. (a) Blocks 1 and 3 are bigger than Blocks 2 and 4, respectively, and no overlap occurs. (b) Blocks 1 and 3 are bigger than Blocks 2 and 4, respectively, and overlap occurs. (c) Blocks 2 and 4 are bigger than Blocks 1 and 3, respectively, and no overlap occurs. (d) Blocks 2 and 4 are bigger than Blocks 1 and 3, respectively, and overlap occurs.
Figure 5. Horizontal multi-object overlap.
Figure 6. Vertical multi-object overlap.
Figure 7. Single object overlap.
Figure 8. Horizontal multi-object overlap. (a) Before removal. (b) After removal.
Figure 9. Vertical multi-object overlap. (a) Before removal. (b) After removal.
Figure 10. Single object overlap. (a) Before removal. (b) After removal.
Figure 11. The area after calculation when Block 1 and Block 3 are non-overlapped.
Figure 12. The area after calculation when Block 2 and Block 4 are non-overlapped.
Figure 13. The area after calculation when Block 1 and Block 3 are overlapped.
Figure 14. The area after calculation when Block 2 and Block 4 are overlapped.
Figure 15. Coordinate marking of different areas.
Figure 16. Each area in a different quadrant.
Figure 17. The order of the object arrangement.
Figure 18. Horizontal overlap object mark.
Figure 19. Vertical overlap object mark.
Figure 20. Staggered stacking.
Figure 21. Mirror arrangement.
Figure 22. Vision system process flowchart.
Figure 23. Flowchart of image processing.
Figure 24. Gray scale.
Figure 25. Gaussian blur.
Figure 26. Canny edge detector.
Figure 27. Information of the square.
Figure 28. Boxing objects.
Figure 29. AI working area safety module system architecture.
Figure 30. Communication structure.
Figure 33. Palletizing data.
Figure 34. Vision system data.
Figure 35. The system architecture of the workstation.
Figure 36. The workstation.
Figure 37. Practical arrangement action. (I) Robot status. (II) Palletizing parameters. (III) Pick-up motion. (IV) Single-layer palletizing complete. (V) Palletizing check.
Figure 38. Failed palletizing in workstation.
Figure 39. Normal palletizing in workstation.
Figure 40. AI working area safety module action.
Table 1. Algorithm comparison.
Title | This Study | Exact Depth-First Algorithm [4] | Steudel's Heuristic Algorithm [5] | Recursive Partitioning Algorithm [6] | Heuristic Packing Algorithm [7]
Peripheral complete rectangle | Yes | No | Yes | No | No
Space utilization | Medium | High | Low | Medium | High
Logic | Yes | No | Yes | Yes | No
Staggered stacking | Yes | No | No | No | Yes
Object coordinate generation | Yes | No | No | Yes | No
Computing speed | High | Low | High | High | Low
Practical application | Yes | No | No | Yes | No
Table 2. Arrangement of each area.
Area | Horizontal Axis | Vertical Axis
Block 1 | x = H_x + i × B_x | y = H_y + j × B_y
Block 2 | x = (P_x − V_x) − i × B_y | y = V_y + j × B_x
Block 3 | x = (P_x − H_x) − i × B_x | y = (P_y − H_y) − j × B_y
Block 4 | x = V_x + i × B_y | y = (P_y − V_y) − j × B_x
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

