Article
Peer-Review Record

Research on Traditional Mongolian-Chinese Neural Machine Translation Based on Dependency Syntactic Information and Transformer Model

Appl. Sci. 2022, 12(19), 10074; https://doi.org/10.3390/app121910074
by Ren Qing-dao-er-ji 1, Kun Cheng 1,2,* and Rui Pang 1,2
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 5 September 2022 / Revised: 28 September 2022 / Accepted: 1 October 2022 / Published: 7 October 2022
(This article belongs to the Section Computing and Artificial Intelligence)

Round 1

Reviewer 1 Report

The work is interesting but there are some flaws in the document:

1. The method description is rather short; please elaborate on your proposal. As it stands, it appears to be a small extension of the Transformer model. Also, the DP matrix is not defined.

2. The results do not appear to be considerably better. There is an improvement, but it would be more convincing if you added other SOTA models to the evaluation.

3. The flow of the document needs to be improved.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

The paper is quite interesting and potentially suitable to Applied Sciences. 

However, after the Introduction, a proper Literature Review would be not only welcome but necessary. The Literature Review cannot be scattered here and there throughout the paper; it needs to be available to readers, including a non-specialized audience, in a dedicated section.

The Introduction is too short. The Authors need to explain more clearly what their (commendable) research goal is and how they plan to achieve it, providing a short summary/preview of their methodology.

The methodology itself is explained here and there, not quite 'en passant' but in a continuum across sections that can appear confusing. After the (missing) Literature Review, a dedicated section entitled Methodology, developed step by step with reproducibility as its main aim, is necessary and essential.

The rest of the material, which is the 'meat' of the paper, is quite interesting and relatively good, but it should be organized simply as Results and Discussion, not in the confusing way it is now. As the titles and subtitles currently stand, it appears that the results are discussed in only half a page before the Conclusions, which is not entirely true and is certainly awkward.

The Conclusions need to be expanded and, mirroring the Introduction, should briefly summarize the original goal of the paper and how that goal has been achieved.

The references, currently few in number, should be expanded, and the Authors should not be afraid to provide readers with even very general sources, which can give a clearer and more comprehensive idea of the field of study of this article and definitely enrich and strengthen the Literature Review.

The English is not bad and is quite consistent. In places there are odd expressions that should be fixed. One example, at lines 283-284: "As we can see, Transformer_Base and Transformer_Enc_Imp are better with adding Squared Leaky ReLU". Why "with"? And so on.

All in all, the paper is valuable and deserves to be published in Applied Sciences, but before that it needs a thorough revision, especially at the level of structure and format.

Thank you very much. 

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

General:

Please explain the rationale behind using sentence dependency structures for two languages from such different language families.

Please compare your results to existing Chinese-Mongolian translation models, for example:

Wu, Jing, Hongxu Hou, Zhipeng Shen, Jian Du, and Jinting Li. "Adapting attention-based neural network to low-resource Mongolian-Chinese machine translation." In Natural Language Understanding and Intelligent Applications, pp. 470-480. Springer, Cham, 2016.

Li, Haoran, Hongxu Hou, Nier Wu, Xiaoning Jia, and Xin Chang. "Semantically Constrained Document-Level Chinese-Mongolian Neural Machine Translation." In 2021 International Joint Conference on Neural Networks (IJCNN), pp. 1-8. IEEE, 2021.

Wu, Jing, Hongxu Hou, Feilong Bao, and Yupeng Jiang. "Template-Based Model for Mongolian-Chinese Machine Translation." Journal of advanced computational intelligence and intelligent informatics 20, no. 6 (2016): 893-901.

A literature survey is missing, especially a survey of translation models for Mongolian and of Chinese-Mongolian translation models.

Tables and Figures often extend outside the page boundaries.

Detailed comments:

l.94-95 Are there punctuation and morphology issues in Mongolian that make tokenization non-trivial? How do you treat compound words?
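For context on this question: agglutinative morphology and compound words are commonly handled in NMT pipelines with subword segmentation rather than word-level tokenization. A minimal byte-pair-encoding-style sketch (purely illustrative; the paper under review may use a different segmentation scheme, and `learn_bpe` and its inputs are hypothetical names, not from the manuscript) is:

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across a corpus of segmented words."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with its concatenation."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = merged.get(tuple(out), 0) + freq
    return merged

def learn_bpe(corpus, num_merges):
    """Learn subword merges from whitespace-tokenized text."""
    words = dict(Counter(tuple(w) for w in corpus.split()))
    merges = []
    for _ in range(num_merges):
        pair = most_frequent_pair(words)
        if pair is None:
            break
        merges.append(pair)
        words = merge_pair(words, pair)
    return merges
```

Because frequent stems and suffixes become single units, such a scheme sidesteps both compound words and rich morphology without language-specific rules, which is one standard answer to the reviewer's question.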

l.183 Please supply a reference and a link to this corpus.

p.7 Table 1: Is there a rationale for the test set and validation set having the same size? This is a non-traditional split; please explain.
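For reference, the split layout the reviewer asks about can be produced by carving equal-sized validation and test sets out of a shuffled parallel corpus. A minimal sketch (the fractions, seed, and function name are illustrative assumptions, not taken from the paper):

```python
import random

def split_corpus(pairs, val_frac=0.1, test_frac=0.1, seed=13):
    """Shuffle sentence pairs and split into train/validation/test,
    with validation and test sets of equal size when the fractions match.
    Fractions and seed are illustrative, not from the paper under review."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)  # fixed seed for reproducibility
    n_val = int(len(pairs) * val_frac)
    n_test = int(len(pairs) * test_frac)
    train = pairs[n_val + n_test:]
    val = pairs[:n_val]
    test = pairs[n_val:n_val + n_test]
    return train, val, test
```

Equal-sized dev and test sets do appear in some MT benchmarks, but stating the rationale (and the exact counts) in the paper would resolve the reviewer's concern.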

p.7 Table 2: Were all of these transformers trained from scratch?

p.7 Table 3: Please mark the best scores by using bold font.
Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

The paper has been significantly improved. 

As it is, it can be considered for publication.

Thank you.

Regards.

Reviewer 3 Report

All my comments were adequately addressed.
