Article
Peer-Review Record

Task Scheduling Based on Adaptive Priority Experience Replay on Cloud Platforms

Electronics 2023, 12(6), 1358; https://doi.org/10.3390/electronics12061358
by Cuixia Li 1,2, Wenlong Gao 2, Li Shi 1,3, Zhiquan Shang 2 and Shuyan Zhang 2,*
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 6 February 2023 / Revised: 4 March 2023 / Accepted: 10 March 2023 / Published: 12 March 2023
(This article belongs to the Special Issue Advanced Techniques in Computing and Security)

Round 1

Reviewer 1 Report

This paper proposes a task scheduling algorithm based on adaptive priority experience replay in cloud computing environments. The topic is interesting.

There are some comments:
1. The optimization objective of the paper is unclear. In addition, the reward function in RL reflects how the goal of the whole paper is achieved. The reviewer suggests merging the 'Reward' paragraph on page 5 with Eq. (14) on page 8, and explaining why the immediate reward achieves the objective of the paper.

2. As shown in (8), the longer the links inside a DAG, the more information must be carried by the subsequent nodes; this seems inefficient. How do the authors explain this?

3. In the experiment part, the reviewer suggests adding more existing algorithms, such as GARLSched and Graphene*, to the performance comparison on various datasets in Section 5.2.

4. From the experimental results in Fig. 4(c), the proposed algorithm should be improved to better fit scenarios with small datasets and chained nodes. In addition, as seen from Fig. 6, the proposed algorithm is unstable; how do the authors explain this?

Author Response

Please see the attachment for details.

Author Response File: Author Response.pdf

Reviewer 2 Report

- The authors need to discuss some important experimental results from the TPC-H, Alibaba cluster data, and scientific workflow evaluations.

- The authors can add the structure of the paper at the end of Section 1.

- The literature review can be improved by discussing the following case studies as well:

1- Cost-aware job scheduling for cloud instances using deep reinforcement learning

2- A Trust-Aware and Authentication-Based Collaborative Method for Resource Management of Cloud-Edge Computing in Social Internet of Things

3- SAAS parallel task scheduling based on cloud service flow load algorithm

4- Energy-aware systems for real-time job scheduling in cloud data centers: A deep reinforcement learning approach

- Another point is that the authors need to describe the four main components from Figure 2, such as buffer management.

- What is the main simulation environment used to evaluate the experimental results?


Author Response

Please see the attachment for details.

Author Response File: Author Response.pdf

Reviewer 3 Report

This paper introduces a task scheduling technique based on the adaptive priority experience replay algorithm. Reinforcement learning (RL) is known to perform well on platforms with constantly changing environmental variables, such as games. In RL, an agent (corresponding to a game player) receives a positive reward when it achieves the game goal and a negative reward otherwise, so that the player's ability improves adaptively.
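The reward mechanism described above can be made concrete with a minimal sketch. This is not the paper's actual reward function; it is a common choice for scheduling RL (used, for example, in Decima-style schedulers), where each step is penalized by the number of jobs still in the system times the elapsed time, so that minimizing the cumulative penalty minimizes average job completion time. The function name and signature are hypothetical:

```python
def step_reward(num_active_jobs: int, dt: float) -> float:
    """Immediate reward for one scheduling step.

    Penalize the time `dt` elapsed since the last action, weighted by
    the number of jobs currently in the system. Summed over an episode,
    this approximates the negative of the total job completion time,
    so maximizing cumulative reward minimizes average JCT.
    """
    return -num_active_jobs * dt

# Three active jobs over a 2-second step incur a penalty of 6.
r = step_reward(3, 2.0)
```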

Specifically, with Algorithms 1 and 2 proposed in this paper, task scheduling is performed to improve Job Completion Time (JCT). Algorithm 2 determines the sampling rate, and Algorithm 1 adaptively updates the model parameters using the sampling rate and experience replay. By adaptively processing task graphs of various patterns, this method reduced JCT by 59% on the alidata dataset compared to the Decima method.
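To illustrate the kind of mechanism the two algorithms build on, here is a minimal sketch of a priority experience replay buffer. It is a deliberate simplification, not the authors' implementation: transitions are stored with a priority (e.g. the TD error), and a hypothetical exponent `alpha` stands in for the adaptively determined sampling rate (`alpha = 0` reduces to uniform sampling):

```python
import random


class PriorityReplayBuffer:
    """Minimal priority experience replay sketch (illustrative only)."""

    def __init__(self, capacity: int = 10000):
        self.capacity = capacity
        self.buffer = []  # (transition, priority) pairs
        self.pos = 0      # next overwrite position once full

    def add(self, transition, priority: float) -> None:
        # Store new experiences, overwriting the oldest when full.
        if len(self.buffer) < self.capacity:
            self.buffer.append((transition, priority))
        else:
            self.buffer[self.pos] = (transition, priority)
            self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size: int, alpha: float = 0.6):
        # Weight each transition by priority^alpha so that
        # high-priority experiences are replayed more often.
        weights = [p ** alpha for _, p in self.buffer]
        picked = random.choices(self.buffer, weights=weights, k=batch_size)
        return [t for t, _ in picked]


buf = PriorityReplayBuffer()
for i in range(100):
    buf.add({"task": i}, priority=float(i + 1))
batch = buf.sample(8, alpha=0.6)
```

An adaptive scheme in the spirit of the paper would adjust `alpha` (or the priorities themselves) during training rather than fixing them, which is what Algorithm 2's sampling-rate computation is responsible for.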

This paper lacks an explanation of how RL can be applied to task scheduling and how it improves the results. In order to understand this paper, it is also necessary to introduce in detail which techniques have been introduced in previous related studies and what their strengths and weaknesses are. Therefore, an example of how the proposed algorithms schedule randomly arriving tasks, together with an explanation of how the existing techniques operate, would help the reader's understanding.

Author Response

Please see the attachment for details.

Author Response File: Author Response.pdf

Reviewer 4 Report

The authors study the problem of task scheduling on cloud platforms using a reinforcement learning approach. Specifically, their goal is to guide the agent's behavior towards reducing the number of episodes by exploiting historical data, and to develop a task scheduling algorithm based on adaptive priority experience replay. The proposed algorithm exploits the performance metric to perform the scheduling and, to improve the network accuracy, the authors exploit the sampling optimization objective. Furthermore, the proposed algorithm exploits the sub-task execution in the workflow to improve scheduling efficiency. The manuscript is overall well written and easy to follow, and the authors have thought out their main contributions well. The provided theoretical analysis is concrete, complete, and correct, and the authors have provided all the intermediate steps needed for the average reader to follow it easily. Furthermore, the provided numerical results are rich enough to show the operation and performance of the proposed framework. The authors are highly encouraged to consider the following suggestions in order to improve the scientific depth of their manuscript, and to address the following minor comments in order to improve the quality of its presentation.

Initially, the related work in Sections 1 and 2 needs to be presented using more summative language in order to better identify the research contributions already made in the literature, as well as the research gap that the authors tried to address. Furthermore, in Section 2, the authors need to discuss reinforcement learning frameworks introduced in the literature, such as Price and Risk Awareness for Data Offloading Decision-Making in Edge Computing Systems, doi: 10.1109/JSYST.2022.3188997, which exploits the task and end-node characteristics in order to perform task scheduling and execution.

In Section 3, the authors need to include a table summarizing the main notation used in the paper and provide the units of the corresponding metrics. In Section 4, the authors need to include an additional subsection discussing the implementation cost of the proposed framework in a realistic environment, as well as an additional subsection providing a theoretical analysis of the computational complexity of the proposed model.

Based on the previous comment, in Section 5, the authors need to provide some indicative numerical results capturing the computational complexity of the proposed model and discuss whether it can be implemented in a real-time or close to real-time manner. Furthermore, in Section 5, the authors need to present a scalability analysis in order to capture the efficiency and robustness of the proposed framework. Finally, the overall manuscript needs to be checked for typos, syntax, and grammar errors in order to improve the quality of its presentation.

Author Response

Please see the attachment for details.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The issues have been addressed in this version.

The reviewer suggests accepting the manuscript.


Reviewer 4 Report

The authors have addressed the reviewers' comments.
