Article
Peer-Review Record

Batch Gradient Learning Algorithm with Smoothing L1 Regularization for Feedforward Neural Networks

by Khidir Shaib Mohamed 1,2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 6 December 2022 / Revised: 13 December 2022 / Accepted: 16 December 2022 / Published: 23 December 2022

Round 1

Reviewer 1 Report

1.      In the abstract, the authors mention that "Preliminary findings indicate that the BGSL1 has faster convergence and good generalization abilities when compared to BGL1/2, BGL1, and BGL2." It is suggested to make the abstract clearer by stating the proposed solution and the existing challenges.

2.      The quality of the figures is not very clear. It is suggested to revise the figures with higher resolution and a proper font for the text in the figures.

3.      Which data set is used for validation?

4.      It is suggested to enrich the literature review and the details about overfitting by including the latest studies, such as the optimal BRA-based electric demand prediction strategy considering instance-based learning of the forecast factors, and data-driven load forecasting of air conditioners for demand response using a Levenberg–Marquardt algorithm-based ANN.


5.      Comparison with state-of-the-art methods should be given.

Author Response

Response to Reviewer 1 Comments

Point 1: In the abstract, the authors mention that "Preliminary findings indicate that the BGSL1 has faster convergence and good generalization abilities when compared to BGL1/2, BGL1, and BGL2." It is suggested to make the abstract clearer by stating the proposed solution and the existing challenges.

Response 1: I have made the abstract clearer by stating the proposed solution and the existing challenges.

Point 2: The quality of the figures is not very clear. It is suggested to revise the figures with higher resolution and a proper font for the text in the figures.

Response 2: I have revised the figures with higher resolution and a proper font for the text in the figures.

Point 3: Which data set is used for validation?

Response 3: No separate validation data set was used.

Point 4: It is suggested to enrich the literature review and the details about overfitting by including the latest studies, such as the optimal BRA-based electric demand prediction strategy considering instance-based learning of the forecast factors, and data-driven load forecasting of air conditioners for demand response using a Levenberg–Marquardt algorithm-based ANN.

Response 4: I have enriched the literature review and the details about overfitting by including the latest studies, such as the optimal BRA-based electric demand prediction strategy considering instance-based learning of the forecast factors, and data-driven load forecasting of air conditioners for demand response using a Levenberg–Marquardt algorithm-based ANN.

Point 5: Comparison with state-of-the-art methods should be given.

Response 5: I have added a comparison with the smoothing L1/2 regularization method in the numerical experiments.

Reviewer 2 Report

In this manuscript, the authors propose a new Batch Gradient learning algorithm with Smoothing L1 regularization (BGSL1) for feedforward neural networks. The topic is interesting and worth investigating. The research goal is clearly formulated. The obtained results demonstrate the effectiveness of the proposed BGSL1 model for some parity and approximation problems. The main drawbacks of this manuscript are as follows: 1) its structure is not well-organized; 2) the proposed algorithm is not presented well.
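For readers unfamiliar with the approach under review: the smoothing idea behind BGSL1 is to replace the non-differentiable L1 term |w| near zero with a smooth piece, so that the batch gradient update is defined everywhere, including at w = 0. The manuscript's exact smoothing function is not reproduced here; the sketch below uses an illustrative Huber-style smoothing, and the constant `a`, the function names, and the update step are all chosen for illustration only:

```python
import numpy as np

def smooth_l1(w, a=0.1):
    # Smoothed |w|: quadratic inside [-a, a], equal to |w| outside.
    # Value and slope match at |w| = a, so the penalty is C^1.
    return np.where(np.abs(w) >= a, np.abs(w), w ** 2 / (2 * a) + a / 2)

def smooth_l1_grad(w, a=0.1):
    # Derivative of the smoothed penalty: sign(w) outside [-a, a], w/a inside.
    return np.where(np.abs(w) >= a, np.sign(w), w / a)

def batch_gradient_step(w, grad_loss, lam=0.01, lr=0.1, a=0.1):
    # One batch update on the penalized error E(w) = loss(w) + lam * sum(smooth_l1(w)).
    return w - lr * (grad_loss + lam * smooth_l1_grad(w, a))
```

Because the gradient of the penalty is continuous (unlike plain L1, whose subgradient jumps at 0), the boundedness and convergence arguments of the convergence theorem become tractable for batch gradient training.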


My remarks are as follows:

In the abstract and Introduction sections, please highlight your contributions. In the Introduction section, the advantages of the proposed method must be outlined.

“3. Main Results” section should be incorporated into the “Materials and Methods” section. Please explain the role of Proposition 1 – Proposition 3 and Theorem 1, and how they have been implemented in the proposed method. The new method must be explained using a flowchart and a step-by-step description of the authors' BGSL1 algorithm. At the end of this section, a discussion part should be added. Here you should clarify the novelty and efficacy of the new method.

Please include some details about your software implementation (development environment, programming language, libraries used). Link to the program code could be enclosed.

In “4. Experimental Results” section, I would like to recommend analysing the time complexity of the proposed approach in comparison with other existing regularization algorithms. At the end of this section, a short discussion must be added.


Technical remarks:

Figure 1 (a) – (b) is not described in the manuscript text. The picture resolution of Figure 1 (a) must be enhanced.

Please, check the specific guidance provided in author instructions and re-format the manuscript. For example, the sub-captions of figures /(a), (b), …, (n)/ are incorrect and should be edited.

l. 63: Variable “p” is undefined.

The whole manuscript should be edited, because there are some inconsistent and incomplete phrases:

l. 29: “silicon and wires”;

l. 98-99: “Finally, contains the proof of the convergence theorem in Appendix.”

l. 108: “odors”;

l. 112: “Final the output of the network is …”.

Some typos also should be removed (l. 16: “nods” -> “nodes”, “stander” -> “standard”, etc.).

Author Response

Response to Reviewer 2 Comments


In this manuscript, the authors propose a new Batch Gradient learning algorithm with Smoothing L1 regularization (BGSL1) for feedforward neural networks. The topic is interesting and worth investigating. The research goal is clearly formulated. The obtained results demonstrate the effectiveness of the proposed BGSL1 model for some parity and approximation problems. The main drawbacks of this manuscript are as follows:

Point 1: its structure is not well-organized;

Response 1: I have reorganized the structure of the manuscript.

 Point 2: the proposed algorithm is not presented well. 

Response 2: I have improved the presentation of the proposed algorithm.

My remarks are as follows:

Point 3: In the abstract and Introduction sections, please highlight your contributions. In the Introduction section, the advantages of the proposed method must be outlined.

Response 3: I have highlighted the contributions in the abstract and Introduction sections, and I have outlined the advantages of the proposed method in the Introduction section.

Point 4: “3. Main Results” section should be incorporated into the “Materials and Methods” section. Please explain the role of Proposition 1 – Proposition 3 and Theorem 1, and how they have been implemented in the proposed method. The new method must be explained using a flowchart and a step-by-step description of the authors' BGSL1 algorithm. At the end of this section, a discussion part should be added. Here you should clarify the novelty and efficacy of the new method.

Response 4: The “Main Results” section has been incorporated into the “Materials and Methods” section. I explained the role of Proposition 1 – Proposition 3 and Theorem 1, and how they have been implemented in the proposed method, in Remark 1. I also explained the new method using Algorithm 1 and added a discussion part. Finally, I clarified the novelty and efficacy of the new method.

Please include some details about your software implementation (development environment, programming language, libraries used). Link to the program code could be enclosed.

Point 5: In “Experimental Results” section, I would like to recommend analysing the time complexity of the proposed approach in comparison with other existing regularization algorithms. At the end of this section, a short discussion must be added.

Response 5: In the “Experimental Results” section, I added the smoothing L1/2 regularization algorithm for comparison, and I also added a short discussion of the comparison with our new method.
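For context on the regularizers being compared (BGL2, BGL1, BGL1/2), the penalty terms differ only in the exponent applied to the weight magnitudes. A minimal illustration with a made-up weight vector (values chosen here purely for demonstration):

```python
import numpy as np

w = np.array([-0.5, 0.01, 2.0])  # illustrative weight vector

l2_penalty   = np.sum(w ** 2)            # L2 (weight decay), smooth everywhere
l1_penalty   = np.sum(np.abs(w))         # L1 (sparsity-inducing), non-smooth at 0
l1_2_penalty = np.sum(np.abs(w) ** 0.5)  # L1/2, non-smooth and non-convex
```

The non-smoothness of the L1 and L1/2 penalties at zero is what motivates the smoothed variant: it keeps the sparsity-promoting behavior while making the gradient well-defined for the convergence analysis.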

Technical remarks:

Point 6: Figure 1 (a) – (b) is not described in the manuscript text. The picture resolution of Figure 1 (a) must be enhanced.

Response 6: I have enhanced the picture resolution of Figure 1 (a).

Point 7: Please, check the specific guidance provided in author instructions and re-format the manuscript. For example, the sub-captions of figures /(a), (b), …, (n)/ are incorrect and should be edited.

Response 7: I edited and corrected the sub-captions of figures /(a), (b), …, (n)/.

Point 8: l. 63: Variable “p” is undefined.

Response 8: I changed variable “p” to “q”.

Point 9: The whole manuscript should be edited, because there are some inconsistent and incomplete phrases:

Response 9: I edited the whole manuscript.

Point 10: l. 29: “silicon and wires”;

Response 10: I have rewritten this part.

Point 11: l. 98-99: “Finally, contains the proof of the convergence theorem in Appendix.”

Response 11: The proof of the convergence theorem is now contained in Appendix A.

Point 12: l. 108: “odors”;

Response 12: I have changed “odors” to “order”.

Point 13: l. 112: “Final the output of the network is …”.

Response 13: I have rewritten this sentence as: “For any given input …, the output of the hidden neuron is …, and the final output of the network is …”.

Point 14: Some typos also should be removed (l. 16: “nods” -> “nodes”, “stander” -> “standard”, etc.).

Response 14: I have checked these typos and corrected them.

Round 2

Reviewer 1 Report

I have no further comments.

Reviewer 2 Report

The quality of computers-2114469-v2 ‘Batch gradient learning algorithm with smoothing L1 regularization for feedforward neural networks’ has been improved by adding: 1) a new discussion section and 2) more details about the proposed method in the “3. Materials and Methods” section.

In my opinion, the manuscript meets the requirements of Computers Journal.

My recommendation is “Accept as is”.
