Peer-Review Record

A New Competitive Neural Architecture for Object Classification

Appl. Sci. 2022, 12(9), 4724; https://doi.org/10.3390/app12094724
by Mohammed Madiafi 1,*, Jamal Ezzahar 1,2, Kamal Baraka 1 and Abdelaziz Bouroumi 3
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 25 January 2022 / Revised: 21 April 2022 / Accepted: 25 April 2022 / Published: 7 May 2022
(This article belongs to the Section Computing and Artificial Intelligence)

Round 1

Reviewer 1 Report

In this paper, the authors propose a new neural architecture, built from a set of competitive neural networks, to accomplish classification tasks. Some suggestions are presented as follows.

  1. In Section 1, the authors should highlight the main problems of the published works related to classification tasks. The motivation of this paper is not clear.
  2. Figure 1 shows the framework of the proposed work; it should be explained in more detail.
  3. The simulation results should compare the proposed algorithm with some typical existing works.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Title:  New Competitive Neural Architecture for Classification

 

This research points to an important area of research, namely classification, an important component of any machine learning system for distinguishing various objects into distinct classes. More specifically, this research presents a new competitive neural architecture, or model, that can classify objects into different classes with a high degree of accuracy. Overall, this is a reasonable effort by the authors, and it can help other researchers in this domain. However, there are a few observations based on which this paper cannot be accepted in its current form until it is improved. The comments are as follows:

 

  1. The abstract needs improvement. It should start with a brief background of the domain, narrow down to the research pocket chosen, and then state the problem addressed in this research. Also, briefly mention the results for the datasets used and the proposed algorithm's quantitative superiority over the state of the art.
  2. The authors state on page 1, line 21, that the training data are not labelled, yet on page 2, line 38, they write that validation is done using the real labels of the elements of the training data. This is confusing; more detail is needed to explain it simply and clearly.
  3. Does zero classification error mean that the results will always be 100%? How? This needs more detail.
  4. There should be more explanation of what the maximum number of layers in the network should be, as mentioned on line 49, page 2, and of how to find it.
  5. There should be a separate section on datasets, explaining all the chosen datasets and the reasons for selecting them for experimentation.
  6. The authors write that the samples of each dataset are selected automatically, but there is no explanation of what is meant by automatic selection. Details are required.
  7. Figures 3 and 4 must include legends showing what the different-shaped markers in the graphs represent.
  8. What is meant by "prototype" here? Explain at the start what the authors call a prototype so that readers are not confused.
  9. The parameters of the different methods vary; is this a fair comparison?
  10. The computational complexity of the proposed algorithm must be analysed in comparison with state-of-the-art methods.

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

In this paper, a competitive neural network framework for classification tasks is proposed, in which unsupervised and supervised learning modes are combined for model training. The experimental results on public datasets show that the proposed model has good robustness and low sensitivity to parameter initialization. In general, the model is described in detail and fully verified by experiments, and the work has a certain academic value.
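For readers unfamiliar with the term, "competitive" training of the kind summarized above usually means winner-take-all prototype updates: the unsupervised pass moves only the winning prototype toward each sample, with any label-based validation handled separately. The following is a minimal generic sketch of that mode, not the authors' algorithm; the function name, initialization, and parameters are illustrative assumptions:

```python
import numpy as np

def competitive_train(X, n_prototypes, lr=0.1, epochs=20):
    """Generic unsupervised winner-take-all prototype training.

    For each sample, only the closest ("winning") prototype is
    moved toward it; all other prototypes are left untouched.
    This is the competitive, unsupervised mode; a supervised
    validation pass over real labels would follow separately.
    """
    prototypes = X[:n_prototypes].copy()  # simple deterministic init
    for _ in range(epochs):
        for x in X:
            winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
            prototypes[winner] += lr * (x - prototypes[winner])
    return prototypes
```

With two well-separated clusters and two prototypes, each prototype converges near one cluster centre.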

The following points need to be addressed:

  1. In the "0. Introduction" section of the article, it is suggested to add background on competitive neural networks;
  2. Abbreviations should be given in full at first use, e.g., FLVQ;
  3. Different variables should not be represented by the same symbol. For example, ψ represents both the membership degree and the distance measure;
  4. In Table 1, even if it is an average, reporting the number of network layers as a decimal is not rigorous;
  5. In the "3. Conclusion" section of the article, it is suggested to add quantitative results;
  6. The format of the references is inconsistent; please correct it.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

  • How does the clustering result of the FLVQ algorithm compare with the labels? The results of unsupervised clustering capture between-class differences and within-class similarity but do not carry semantic information.
  • If the number of neurons in the rth layer is not equal to the number of categories, how can the clustering results be compared?
  • The rejection operation mentioned in Section 1.4 does not appear in the pseudo-code or in the description that follows. The pseudo-code seems only to add neurons, never to "reject" them.
  • In Section 1.4 the authors mention that "During each validation step, all misclassified objects are grouped with those, if any, that present small membership degrees to all classes in order to form the training data for the next layer.". However, the previous treatment of the "strong elements" is to add a neuron in the same layer, which seems contradictory.
  • The input of the multi-layer competitive neural network is unclear except for the first layer. What is the input of each layer after the first: the output of the previous layer, or the p-dimensional vector of the input image? If it is the latter, does that mean the output of the previous layer has no effect on the output of later layers? That would seem more like an if-then decision cascade than a multi-layer network. It is best to draw a schematic of the multi-layer competitive neural network.
  • In Section 1.4 the authors mention that "In this case, is considered as a strong element that represent itself and of all similar objects.". How does this strong element represent the objects similar to it? Do the authors mean that those objects belong to the same category as it?
  • In Figure 2, the meaning of "p", "c", and "1" is not explained.
  • The paper does not present the feature-extraction method, i.e., the process of obtaining the p-dimensional vector from the input image.
  • The symbol mentioned in the interpretation of formula (10) first appears in formula (11), not in formula (10).

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

The authors have addressed my concerns, and this article is in better form than before. It is accepted in its current form from my side.

Author Response

The authors thank you very much for your efforts in processing our paper.

Spell check has been performed.
