Communication
Peer-Review Record

Automated Guided Vehicle (AGV) Driving System Using Vision Sensor and Color Code

Electronics 2023, 12(6), 1415; https://doi.org/10.3390/electronics12061415
by Jun-Yeong Jang, Su-Jeong Yoon and Chi-Ho Lin *
Submission received: 31 January 2023 / Revised: 28 February 2023 / Accepted: 11 March 2023 / Published: 16 March 2023
(This article belongs to the Special Issue Application Research Using AI, IoT, HCI, and Big Data Technologies)

Round 1

Reviewer 1 Report

The authors presented an AGV design and tracking using a vision sensor and color code. The article is within the scope of the journal, relevant to the related field, and an interesting topic for readers, but a number of major issues need to be resolved before proceeding with publication. The major and minor issues are listed here.

Abstract:

Should be rewritten. It does not reflect the overall summary of the article. In general, after reading the abstract, the reader must be able to figure out what the article is about, what methods were used, and what results were obtained.

Introduction: The introduction section must include the background of the study, related up-to-date articles, the research gap, the motivation of the study, your proposed method, etc. The introduction section does not even cover the basic requirements of an introduction in a research article. Is your proposed method the only one that uses color codes for AGV tracking? The authors do not cover state-of-the-art articles. I would suggest reviewing all the related state-of-the-art articles either in the introduction section or in a separate section. The authors mention the research gap but did not cite the latest related articles.

2. AGVA-Robot Driving System: The block diagram and circuit design presented in this section are interesting, and the operating flowchart is simple but interesting to readers. I would suggest describing the operating flowchart in a little more detail.

3. Color Model: The content in this section looks irrelevant. I don't think the authors should go into this much detail about the color models. Instead, I would suggest describing why the HSI color model is used. After the experiments, the authors must show that the system performs better using the HSI color model. I would suggest removing the 3.2 RGB to HSI subsection, as it is not very relevant. Section 3.3 describes the color code, but it is quite difficult to follow. Please add one or two paragraphs describing what a color code is, what its purpose is in this study, etc. The figure captions are very light; please elaborate on the figures a bit.

4. Experiment results: The experimental results section is incomplete, and the results are not presented well. An important component of a research article is a discussion that describes the worth of the study. Please completely rewrite the results and discussion section with more experimental results, and evaluate your proposed model.

5. Conclusion: I would recommend moving some text from the conclusion section to the discussion section.  

Author Response

Point 1. Abstract: Should be rewritten. It does not reflect the overall summary of the article. In general, after reading the abstract, the reader must be able to figure out what the article is about, what methods were used, and what results were obtained.

Introduction: The introduction section must include the background of the study, related up-to-date articles, the research gap, the motivation of the study, your proposed method, etc. The introduction section does not even cover the basic requirements of an introduction in a research article. Is your proposed method the only one that uses color codes for AGV tracking? The authors do not cover state-of-the-art articles. I would suggest reviewing all the related state-of-the-art articles either in the introduction section or in a separate section. The authors mention the research gap but did not cite the latest related articles.

Response 1: The comment has been addressed; the revision is reflected in the manuscript.

In this paper, we propose an AGV operating algorithm using color codes, which performs route recognition and executes driving commands through color recognition with a vision sensor instead of the conventional line recognition using an optical sensor. If this algorithm is applied to an AGV, the CMUcam5 Pixy2 camera can identify the path to follow and execute driving commands through color code recognition.

Point 2. AGVA-Robot Driving System: The block diagram and circuit design presented in this section are interesting, and the operating flowchart is simple but interesting to readers. I would suggest describing the operating flowchart in a little more detail.

Response 2: The comment has been addressed; the revision is reflected in the manuscript.

After extracting an image from a video, it is necessary to separate the objects contained in the image in order to obtain information from it. One of the representative methods for separating objects is binarization. In this paper, the Otsu method was used as the binarization algorithm for line recognition [30]. In addition, the color code function of the CMUcam5 Pixy2 was used to issue driving commands. The CMUcam5 Pixy2 uses the HSI color model for color detection. The order of the detected colors was encoded and matched with driving commands for use in the AGV driving algorithm.
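For reference, the Otsu binarization step mentioned in the response can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code: it selects the threshold that maximizes between-class variance and then binarizes a toy image.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the intensity threshold (0-255) that maximizes
    between-class variance, per Otsu's method."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    cum_prob = np.cumsum(prob)                    # class-0 weight w0(t)
    cum_mean = np.cumsum(prob * np.arange(256))   # cumulative mean
    mu_total = cum_mean[-1]
    best_t, best_var = 0, 0.0
    for t in range(256):
        w0, w1 = cum_prob[t], 1.0 - cum_prob[t]
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (mu_total - cum_mean[t]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy image: dark background with a bright "line" region
img = np.array([[10, 12, 200],
                [11, 210, 205],
                [9, 13, 198]], dtype=np.uint8)
t = otsu_threshold(img)
binary = (img > t).astype(np.uint8)   # 1 = bright class
```

On this toy image, any threshold between the dark and bright pixel clusters yields the same two-class separation, which is what the line-recognition step relies on.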

Point 3. Color Model: The content in this section looks irrelevant. I don't think the authors should go into this much detail about the color models. Instead, I would suggest describing why the HSI color model is used. After the experiments, the authors must show that the system performs better using the HSI color model. I would suggest removing the 3.2 RGB to HSI subsection, as it is not very relevant. Section 3.3 describes the color code, but it is quite difficult to follow. Please add one or two paragraphs describing what a color code is, what its purpose is in this study, etc. The figure captions are very light; please elaborate on the figures a bit.

Response 3: The comment has been addressed; the revision is reflected in the manuscript.

3. Color Model

An RGB color model and an HSI color model were used to enable the AGVA-Robot to recognize the color code in the image captured by the vision sensor. The HSI model expresses colors using three attributes: H (hue), S (saturation), and I (intensity). H is the attribute that expresses the pure color, S the saturation, and I the brightness value [27].
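The RGB-to-HSI conversion underlying this model can be sketched as follows; this is an illustrative implementation of the standard textbook formulas, not the authors' code or the Pixy2 firmware.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert RGB components in [0, 1] to (H in degrees, S, I)
    using the standard HSI conversion formulas."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0.0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0.0:
        h = 0.0                      # achromatic: hue is undefined
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:                    # hue lies in the lower half-circle
            h = 360.0 - h
    return h, s, i
```

For example, pure red maps to H = 0° with full saturation, and pure blue to H = 240°, which is why H is a convenient, lighting-robust quantity for color-code matching.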

 

3.1 RGB to HSI

 

Point 4. Experiment results: The experimental results section is incomplete, and the results are not presented well. An important component of a research article is a discussion that describes the worth of the study. Please completely rewrite the results and discussion section with more experimental results, and evaluate your proposed model.

Response 4: This paper proposes that more diverse command codes become available by using color codes in a system that previously issued driving commands with barcodes, and we tested whether operation according to this proposal is possible.

Point 5. Conclusion: I would recommend moving some text from the conclusion section to the discussion section.  

Response 5: The comment has been addressed; the revision is reflected in the manuscript.

 

Author Response File: Author Response.pdf

Reviewer 2 Report

The manuscript is overall well written and easy to follow, and the authors have thought out their main contributions in the field of Automated Guided Vehicle (AGV) driving systems well. Unfortunately, this manuscript has a few concerns, and the authors should work to improve the following:

[Abstract] What is the actual finding of the research? The results should be discussed in this section. The core objective (what you are trying to solve) is not discussed. The authors mention "This paper proposes an AGV operation algorithm"; what is meant by an AGV operation algorithm?

[Keywords] Provide meaningful keywords in alphabetical order. 

[Introduction] Abbreviations should be expanded at first use; recheck the manuscript. Moreover, this section is short and only includes content on sensors and AGVs. The authors should discuss the research gap in this section; it is recommended to state the specific contributions of the research. Mention the organization of the manuscript at the end of the introduction section.

[Related work] This section is missing; the authors should draw on recent papers to support the need for and importance of the proposed model. What are the limitations of the existing models? Explain in detail.

[Materials and Methods] Recheck typos and grammatical errors. Use only the required subsections. Preambles are missing. Apart from Figures 1 and 2, we couldn't find any related content. Sensor, motor, and microcontroller information is discussed only in thin layers, so the novelty of the research is weak and fragile. It is strongly suggested to improve the proposed content.

[Materials and Methods] Overall, the authors combined well-known sensors and controllers. The authors should explain the novelty of the proposed method.

[Result and discussion] Case-based analysis and analysis with different objects are required. The prototype is small and volatile. We recommend the authors work on a real-time basis. The logic is very simple; we can't take this as a novelty. An explanation is required.

[References] Include papers from recent years.

Author Response

[Abstract] What is the actual finding of the research? The results should be discussed in this section. The core objective (what you are trying to solve) is not discussed. The authors mention "This paper proposes an AGV operation algorithm"; what is meant by an AGV operation algorithm?

Response 1: The comment has been addressed; the revision is reflected in the manuscript.

In this paper, we propose an AGV operating algorithm using color codes, which performs route recognition and executes driving commands through color recognition with a vision sensor instead of the conventional line recognition using an optical sensor. If this algorithm is applied to an AGV, the CMUcam5 Pixy2 camera can identify the path to follow and execute driving commands through color code recognition.

[Keywords] Provide meaningful keywords in alphabetical order. 

Response 2: The comment has been addressed; the revision is reflected in the manuscript.

 

[Introduction] Abbreviations should be expanded at first use; recheck the manuscript. Moreover, this section is short and only includes content on sensors and AGVs. The authors should discuss the research gap in this section; it is recommended to state the specific contributions of the research. Mention the organization of the manuscript at the end of the introduction section.

Response 3: The comment has been addressed; the revision is reflected in the manuscript.

In this paper, we propose an AGV operating algorithm using color codes, which performs route recognition and executes driving commands through color recognition with a vision sensor instead of the conventional line recognition using an optical sensor. If this algorithm is applied to an AGV, the CMUcam5 Pixy2 camera can identify the path to follow and execute driving commands through color code recognition.

[Related work] This section is missing; the authors should draw on recent papers to support the need for and importance of the proposed model. What are the limitations of the existing models? Explain in detail.

Response 4:

This paper presents an alternative method, rather than a fix for a problem in the existing models.

[Materials and Methods] Recheck typos and grammatical errors. Use only the required subsections. Preambles are missing. Apart from Figures 1 and 2, we couldn't find any related content. Sensor, motor, and microcontroller information is discussed only in thin layers, so the novelty of the research is weak and fragile. It is strongly suggested to improve the proposed content.

[Materials and Methods] Overall, the authors combined well-known sensors and controllers. The authors should explain the novelty of the proposed method.

Response 5: The comment has been addressed; the revision is reflected in the manuscript.

2.1. Materials

2.1.1. CMUcam5 Pixy2

The vision sensor selected for this paper is the CMUcam5 Pixy2. The CMUcam5 Pixy2 is a fast image sensor that tracks objects, and it can connect directly to the Arduino Uno R3 through the ICSP port on the Arduino Uno R3 board. This SPI connection is faster than the I2C bus, and for a device capable of processing 60 frames of video per second, speed is an important factor. The Arduino SPI bus has a clock rate of 2 MHz, allowing data transfer at 2 Mbit/s. Using this connection, you can get data back from the camera and can also control the two servo motors used in the optional pan-and-tilt mount. In addition, the CMUcam5 Pixy2 has its own powerful processor: it processes the images captured by the sensor and extracts the useful information. The CMUcam5 Pixy2 also comes with a color algorithm to detect an object's color. Normally, RGB (red, green, and blue) is used to represent colors, but the CMUcam5 Pixy2 calculates the hue and saturation of each RGB pixel from the image sensor and uses these as the primary filtering parameters. Thus, an algorithm to convert RGB to an HSI color basis is not required in the programming, as it is already integrated in the CMUcam5 Pixy2 image sensor module.

 

2.1.2. Arduino Uno R3

 

The Arduino Uno R3 is a microcontroller board based on the ATmega328P. It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz ceramic resonator, a USB connection, a power jack, an ICSP header, and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable or power it with an AC-to-DC adapter or battery to get started.

 

2.1.3. L298N Motor Driver

 

This L298N-based motor driver module is a high-voltage, high-current dual full-bridge driver suitable for driving DC motors and stepper motors. It can control up to 4 DC motors, or 2 DC motors with direction and speed control. This motor driver is well suited to robotics and mechatronics projects and to controlling motors from microcontrollers, switches, relays, etc. An H-bridge is a circuit that can drive a current in either polarity and be controlled by pulse-width modulation (PWM). Two enable inputs are provided to enable or disable the device independently of the input signals. The emitters of the lower transistors of each bridge are connected together, and the corresponding external terminal can be used to connect an external sensing resistor. An additional supply input is provided so that the logic works at a lower voltage.
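The H-bridge direction logic described above can be illustrated with a small simulation. The sketch below uses hypothetical command names and is written in Python rather than Arduino C for testability; it is not the authors' firmware. It maps a driving command to the two direction inputs (IN1/IN2) and a PWM duty value for one L298N channel.

```python
def motor_channel_state(command, speed):
    """Return (IN1, IN2, PWM duty 0-255) for one L298N channel.

    Opposite direction-input levels drive current through the bridge in
    one polarity or the other; both inputs low stops the motor.
    """
    duty = max(0, min(255, speed))   # clamp to the 8-bit PWM range
    if command == "forward":
        return (1, 0, duty)
    if command == "reverse":
        return (0, 1, duty)
    return (0, 0, 0)  # stop: both inputs low, no PWM
```

On the actual AGV, the two returned levels would be written to the L298N's IN pins and the duty value to the enable pin via PWM.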

 

[Result and discussion] Case-based analysis and analysis with different objects are required. The prototype is small and volatile. We recommend the authors work on a real-time basis. The logic is very simple; we can't take this as a novelty. An explanation is required.

Response 6: The comment has been addressed; the revision is reflected in the manuscript.

The algorithm for recognizing lines in the extracted image used the Otsu method. To recognize the color code, RGB-to-HSI conversion was used to obtain an H value that is robust for color recognition. After indexing the recognized colors on the vision sensor board, we generated color codes according to the order of adjacent indexed colors and used them as traveling commands. In a test using the AGVA-Robot built to evaluate the proposed traveling system's algorithm, it was confirmed that the proposed traveling algorithm of the AGVA-Robot works as intended.
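The color-code scheme described in the response (indexing recognized colors and encoding their adjacency order) could be sketched as follows. The index values and the code-to-command table here are hypothetical illustrations, not the authors' actual assignments.

```python
# Hypothetical color indices, for illustration only (the paper's actual
# index assignments are not given here)
COLOR_INDEX = {"red": 1, "green": 2, "blue": 3}

def color_code(colors):
    """Combine an ordered sequence of adjacent detected colors
    into a single numeric code."""
    code = 0
    for name in colors:
        code = code * 10 + COLOR_INDEX[name]
    return code

# Hypothetical mapping from codes to driving commands
COMMANDS = {12: "turn_left", 21: "turn_right", 13: "stop"}
```

Because the code depends on the order of adjacent colors, a small palette yields many distinct commands, which is the claimed advantage over a single barcode symbol.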

[Reference] Include recent year papers.

Response 7: The comment has been addressed; the revision is reflected in the manuscript.

Jang, J.Y.; In, C.H. Design and Implementation of AGV-UNO-CAR Using a Line Scan Algorithm. J. Korean Inst. Commun. Inf. Sci. 2021, 46, 1346–1354. doi:10.7840/kics.2021.46.8.1346

 

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The article has not been improved enough to accept for publication. Most of the issues have not been addressed. I would not recommend accepting it for publication.

For example, in comment 1 the introduction section was to be improved, but the authors did not improve it or address the issue.

I would suggest improving the article according to the suggestions given in the previous review report.  

Author Response

Point 1. Abstract: Should be rewritten. It does not reflect the overall summary of the article. In general, after reading the abstract, the reader must be able to figure out what the article is about, what methods were used, and what results were obtained.

Introduction: The introduction section must include the background of the study, related up-to-date articles, the research gap, the motivation of the study, your proposed method, etc. The introduction section does not even cover the basic requirements of an introduction in a research article. Is your proposed method the only one that uses color codes for AGV tracking? The authors do not cover state-of-the-art articles. I would suggest reviewing all the related state-of-the-art articles either in the introduction section or in a separate section. The authors mention the research gap but did not cite the latest related articles.

Response 1: The comment has been addressed; the revision is reflected in the manuscript.

Various attempts have been made, such as a pattern recognition method using a simple barcode-type location recognition symbol with a line scan camera, or a method of recognizing the surrounding environment using a deep-learning-based vision system [30–33].

In this paper, we propose an AGV line scan algorithm that performs route recognition and executes driving commands through color code recognition, using an Arduino controller and a low-cost vision sensor instead of the existing method of recognizing and driving along a line with an optical sensor. The proposed method is a path recognition technique that reduces computation and is easy to maintain and change at low cost, compared with methods that use images or RFID tags, or pattern recognition using a line scan camera and a simple barcode-type recognition symbol. In addition, the proposed method has the advantage that the path-recognition driving problem, which used to require various sensors, can be solved using only a vision sensor.

When the proposed algorithm is applied to the AGV-CAR, the CMUcam5 Pixy2 camera identifies the driving path to follow by tracking the black line using the Otsu method. In addition, it can be confirmed that the driving command is executed via the proposed color code, using the color recognition function of the CMUcam5 Pixy2.

 

Point 2. AGVA-Robot Driving System: The block diagram and circuit design presented in this section are interesting, and the operating flowchart is simple but interesting to readers. I would suggest describing the operating flowchart in a little more detail.

Response 2: The comment has been addressed; the revision is reflected in the manuscript.

After extracting an image from a video, it is necessary to separate the objects contained in the image in order to obtain information from it. One of the representative methods for separating objects is binarization. In this paper, the Otsu method was used as the binarization algorithm for line recognition [30]. In addition, the color code function of the CMUcam5 Pixy2 was used to issue driving commands. The CMUcam5 Pixy2 uses the HSI color model for color detection. The order of the detected colors was encoded and matched with driving commands for use in the AGV driving algorithm.

Point 3. Color Model: The content in this section looks irrelevant. I don't think the authors should go into this much detail about the color models. Instead, I would suggest describing why the HSI color model is used. After the experiments, the authors must show that the system performs better using the HSI color model. I would suggest removing the 3.2 RGB to HSI subsection, as it is not very relevant. Section 3.3 describes the color code, but it is quite difficult to follow. Please add one or two paragraphs describing what a color code is, what its purpose is in this study, etc. The figure captions are very light; please elaborate on the figures a bit.

Response 3: The comment has been addressed; the revision is reflected in the manuscript.

3. Color Model

An RGB color model and an HSI color model were used to enable the AGVA-Robot to recognize the color code in the image captured by the vision sensor. The HSI model expresses colors using three attributes: H (hue), S (saturation), and I (intensity). H is the attribute that expresses the pure color, S the saturation, and I the brightness value [27].

 

3.1 RGB to HSI

 

Point 4. Experiment results: The experimental results section is incomplete, and the results are not presented well. An important component of a research article is a discussion that describes the worth of the study. Please completely rewrite the results and discussion section with more experimental results, and evaluate your proposed model.

Response 4: This paper proposes that more diverse command codes become available by using color codes in a system that previously issued driving commands with barcodes, and we tested whether operation according to this proposal is possible.

Point 5. Conclusion: I would recommend moving some text from the conclusion section to the discussion section.  

Response 5: The comment has been addressed; the revision is reflected in the manuscript.

5. Conclusions

The facts we learned by applying the traveling algorithm using the proposed color code to the AGVA-Robot are as follows. Conventional AGVs recognize lines mainly by using guide lines based on magnetic fields, whereas the AGVA-Robot with the proposed algorithm recognizes lines using a vision sensor. Consequently, conventional AGVs travel only on designated lines along the guide lines, and when the route at a branch point needs to be changed, the guide line itself must be changed, causing time delays and cost overruns. However, since the proposed AGVA-Robot recognizes traveling commands from color codes photographed by the vision sensor, traveling commands can be changed easily at each branch point. In addition, the color code can be used not only to indicate the direction of travel at branch points but also to execute various other commands along with it.

Improving the recognition rate of color codes in various environments, and at different AGV travel speeds when applied to industrial sites, will be tasks for future research. An algorithm is needed that can increase the recognition rate by checking the accuracy of color code recognition under the various lighting conditions of an actual site. In addition, research on an algorithm that can adjust the driving speed, by checking how much the color code recognition rate changes with the AGV's driving speed, should be conducted.

 

Author Response File: Author Response.pdf

Reviewer 2 Report

Contribution of the research is missing in the introduction section. Mention it in the introduction section.

Preamble information is missing in Sections 2, 2.1, and 2.1.1.

Create a separate section for discussion in the results section.

Papers from recent years (2022) are still missing; include them.

Author Response

Contribution of the research is missing in the introduction section. Mention it in the introduction section.

Response 1: The comment has been addressed; the revision is reflected in the manuscript.

In this paper, we propose an AGV line scan algorithm that performs route recognition and executes driving commands through color code recognition, using an Arduino controller and a low-cost vision sensor instead of the conventional optical sensor for recognizing and driving along a line. When the proposed algorithm is applied to the AGV-CAR, the CMUcam5 Pixy2 camera identifies the driving path to follow by tracking the black line using the Otsu method. In addition, it can be confirmed that the driving command is executed via the proposed color code, using the color recognition function of the CMUcam5 Pixy2.

Preamble information is missing in Sections 2, 2.1, and 2.1.1.

Response 2: The comment has been addressed; the revision is reflected in the manuscript.

2.1. Materials

This subsection introduces the composition of the AGVA-Robot used in this paper. The main controller of the AGVA-Robot is an Arduino, which is open-source and highly accessible; the highly versatile L298N is used as the driver for driving; and a CMUcam5 Pixy2 is used as the vision sensor.

 

2.1.1. CMUcam5 Pixy2

The vision sensor selected for this paper is the CMUcam5 Pixy2. The CMUcam5 Pixy2 is a fast image sensor that tracks objects, and it can connect directly to the Arduino Uno R3 through the ICSP port on the Arduino Uno R3 board. The CMUcam5 Pixy2 has its own powerful processor: it processes the images captured by the sensor and extracts the useful information. It also comes with a color algorithm to detect an object's color. Normally, RGB (red, green, and blue) is used to represent colors, but the CMUcam5 Pixy2 calculates the hue and saturation of each RGB pixel from the image sensor and uses these as the primary filtering parameters.

 

2.1.2. Arduino Uno R3

 

The Arduino Uno R3 is a microcontroller board based on the ATmega328P. It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz ceramic resonator, a USB connection, a power jack, an ICSP header, and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable or power it with an AC-to-DC adapter or battery to get started.

 

2.1.3. L298N Motor Driver

 

This L298N-based motor driver module is a high-voltage, high-current dual full-bridge driver suitable for driving DC motors and stepper motors. It can control up to 4 DC motors, or 2 DC motors with direction and speed control.

 

Create a separate section for discussion in the results section.

Response 3: The comment has been addressed; the revision is reflected in the manuscript.

An algorithm is needed that can increase the recognition rate by checking the accuracy of color code recognition under the various lighting conditions of an actual site. In addition, research on an algorithm that can adjust the driving speed, by checking how much the color code recognition rate changes with the AGV's driving speed, should be conducted.

Papers from recent years (2022) are still missing; include them.

Response 4: The comment has been addressed; the revision is reflected in the manuscript.

Various attempts have been made, such as a pattern recognition method using a simple barcode-type location recognition symbol with a line scan camera, or a method of recognizing the surrounding environment using a deep-learning-based vision system [30–33].

 

  1. Jang, J.Y.; In, C.H. Design and Implementation of AGV-UNO-CAR Using a Line Scan Algorithm. J. Korean Inst. Commun. Inf. Sci. 2021, 46, 1346–1354. doi:10.7840/kics.2021.46.8.1346
  2. Kim, S.H.; Lee, H.G. Implementation of Pattern Recognition Algorithm Using Line Scan Camera for Recognition of Path and Location of AGV. Korea Indust. Inf. Sci. 2018, 23, 13–21. doi:10.9723/jksiis.2018.23.1.013
  3. Lee, G.W.; Lee, H.; Cheong, H.W. Object Detection of AGV in Manufacturing Plants using Deep Learning. Korea Inst. Commun. En. Sci. 2021, 25, 36–43. doi:10.6109/jkiice.2021.25.1.36
  4. Kim, C.M.; Cho, H.Y.; Yun, T.S.; Shin, H.J.; Park, H.K. RFID-based Shortest Time Algorithm Linetracer. Korea Inst. Elec. Commun. Sci. 2022, 17, 1221–1228. doi:10.13067/JKIECS.2022.17.6.1221

Author Response File: Author Response.pdf

Round 3

Reviewer 1 Report

The authors improved the article, but it still needs minor revision.

I am still not convinced by the results section of this article. I would suggest including a performance evaluation of the experiments, for example the accuracy of the line detection algorithm.

Author Response

Point 1. I am still not convinced by the results section of this article. I would suggest including a performance evaluation of the experiments, for example the accuracy of the line detection algorithm.

Response 1:

This paper was motivated by the lack of diversity in the barcodes that issue driving commands, which we observed while experimenting with barcode-based driving commands during our research on the AGV driving algorithm.

Using a camera that recognizes color instead of the existing barcode, we studied whether the proposed color code could be recognized and could replace the barcode, and the results showed that it is possible.

The recognition rate of the color code in different environments, and the recognition rate under different driving conditions, will be addressed in a future project.

Author Response File: Author Response.pdf
