Article
Peer-Review Record

JN-Logo: A Logo Database for Aesthetic Visual Analysis

Electronics 2022, 11(19), 3248; https://doi.org/10.3390/electronics11193248
by Nannan Tian 1, Yuan Liu 1,*,† and Ziruo Sun 2
Reviewer 1: Anonymous
Reviewer 3: Anonymous
Submission received: 22 August 2022 / Revised: 28 September 2022 / Accepted: 3 October 2022 / Published: 9 October 2022

Round 1

Reviewer 1 Report

The Authors have described a dataset of logotypes collected together with an aesthetic evaluation of students. Nevertheless, the dataset is currently not available and, in my opinion, in the papers where the development of a database is the main achievement, it is obligatory before sending the article for review.

The methodology of the paper is unclear. The Authors provide some examples of general-purpose image quality assessment databases, mixing some ideas without justification. Some of the presented datasets, e.g., LIVE and TID, may be useful both for the development of full-reference metrics and for no-reference ones. There are many papers related to this topic, as well as some newer datasets, that have not been mentioned. The statement in line 69 (about 8 databases) is not true.

The examples provided for the development and verification of the NR metrics are at least surprising.

The development of the logotypes dataset is not well motivated. In my opinion, it has nothing in common with image quality assessment, since in this case we may think only about the perceptual evaluation of style, colorfulness, etc. - not the technical quality addressed by IQA metrics (e.g., presence of noise, blocking artifacts, transmission errors, etc.).

It is unclear why the Authors state that the developed database contains photos (lines 39-41). Logotypes are typically artificial images generated by computers.

The statement in lines 76-77 "Users can choose the appropriate database according to their needs." is trivial and makes no sense from the scientific point of view since the idea of the IQA is to find the universal metric that is as highly correlated as possible with subjective scores provided in ALL available datasets.

The link provided in lines 147-148 does not work, the notation in line 80 is inappropriate, and the sentence in line 73 should be corrected as well. Line 240 should not begin with the URL (with space inside?).

The sentence in lines 78-79 "In the Blind method, there is no corresponding reference image, and the observer scores according to the image quality" makes no sense as well. First, what is the "blind method" (in the IQA only "blind metrics" are known)? Second, during the development of the blind metrics, subjective quality scores must be known, only the reference images are missing.

Captions in two parts of Fig. 6 are switched. There is a typo in equation (2) "Socre" instead of "Score".

The choice of the HSV color space is also not well motivated (e.g., CIELAB might be used instead). Why do the Authors use the word "vividness" instead of "saturation" in line 471?

The first sentence of the conclusion is also doubtful: "We created a larger database [...]" Larger than ... ?

Concluding the review, the idea of the paper is unclear, there are many mistakes in the paper and the scientific contribution is very low. Therefore, I cannot recommend the paper for publication in the journal.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Review Comments

The presented work introduces a larger-scale logo database named JN-Logo. JN-Logo provides 14,917 logo images from three well-known websites around the world and uses the votes of 150 graduate students. JN-Logo provides three types of annotations: aesthetic, style, and semantic. Its scoring system includes 6 scoring points, 6 style labels, and 11 semantic descriptions, and 57 primary shades were extracted from the data. However, the authors can consider the following minor changes to further improve the quality of the manuscript.

I have some minor corrections and suggestions below:

1. Authors must show and discuss the novel contribution of the work with proper justification of the outcomes.

2. The abstract must be short and precise and must declare what novel contribution is done by the authors.

3. The computational complexity of the algorithm can be added. The authors must compare the proposed method in terms of computational complexity across the various data sets used.

4. How much data is used for training and testing in the architecture implementation? Details of the training and testing data sets must be tabulated.

5. Layer details of the architectures must be elaborated, and timing analysis needs to be added.

6. Limitations of the proposed work can be added and discussed.

7. The data sets discussed in Tables 2 and 3 must be cited.

8. A comparative analysis of performance parameters such as precision, recall, F1 measure, etc. across the various data sets must be discussed and tested.

9. The results presented in Table 4 and the compared state-of-the-art methods must be supported by proper citations.

10. Results with respect to inference speed (fps) and real-time analysis are missing.

11. There are too few comparative experiments to convincingly demonstrate the effectiveness of the method.

12. How many epochs or iterations were used for the complete training of the architectures?

13. How is the subjective analysis, such as the color harmony analysis, performed? The authors must justify it with a proper numerical formula or explanation.

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

The paper describes the details of the design of a logo database. The work is technically sound and the procedures of collection, annotation, comparison with other datasets and applications are properly described.

My negative comment on the paper concerns the results of the classification tasks using machine learning techniques. For anyone working in the area (ML), accuracy lower than 0.6 is somewhat of a negative surprise. However, I understand that the peculiarity of the data may lead to this range of accuracy. As a suggestion, I think the authors should also include classification task experiments performed with other datasets, for purposes of comparison.

Other comments/questions:

- Were the logos selected by hand (page 3, lines 112-128)?

- How many participants took part in the labeling (page 3, lines 129-132)?

- In Table 2 it would be interesting to include the number of instances of each database.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
