Article
Peer-Review Record

The Performance of a Lip-Sync Imagery Model, New Combinations of Signals, a Supplemental Bond Graph Classifier, and Deep Formula Detection as an Extraction and Root Classifier for Electroencephalograms and Brain–Computer Interfaces

Appl. Sci. 2023, 13(21), 11787; https://doi.org/10.3390/app132111787
by Ahmad Naebi * and Zuren Feng
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 3 August 2023 / Revised: 23 September 2023 / Accepted: 25 September 2023 / Published: 27 October 2023
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Image Processing)

Round 1

Reviewer 1 Report

This research article significantly contributes to the field of brain-computer interfaces (BCIs) and signal processing. The authors meticulously examine and address important aspects that have the potential to reshape BCI technology.

 

The novel communication imagery model, Lip-sync imagery, offers a practical solution to challenges posed by traditional speech imagery. It transcends language barriers, providing a versatile framework applicable to all characters across various languages. Implementing Lip-sync imagery for distinct sounds or letters exemplifies the practicality and broad applicability of this approach.

 

The proposal of new signal combinations, aiming to optimize feature extraction through selective frequency domain manipulation, is commendable. Utilizing restricted frequency ranges to create Fragmentary Continuous frequencies is an inventive approach promising efficient and precise brain signal processing. The analysis of filter bank intervals and the determination of optimal combinations within the frequency domain demonstrate a systematic approach.
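For concreteness, the general technique being praised here, extracting features from restricted frequency bands and combining them, can be sketched as follows. This is a generic filter-bank feature extractor, not the authors' exact "Fragmentary Continuous frequencies" method; the band boundaries, sampling rate, and log-variance features are illustrative assumptions only.

```python
# Illustrative sketch of filter-bank feature extraction for EEG.
# Bands, sampling rate, and the log-variance feature are assumptions,
# not the authors' exact method.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(sig, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter for one EEG channel."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, sig)

def filter_bank_features(sig, bands, fs):
    """One log-variance feature per restricted band; selected bands can
    then be concatenated into a composite frequency representation."""
    return np.array([np.log(np.var(bandpass(sig, lo, hi, fs)))
                     for lo, hi in bands])

fs = 250.0                          # assumed sampling rate in Hz
t = np.arange(0, 2, 1 / fs)         # 2 s of synthetic single-channel "EEG"
sig = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

bands = [(4, 8), (8, 12), (12, 30)]  # theta, alpha, beta (example ranges)
feats = filter_bank_features(sig, bands, fs)
print(feats.shape)                   # one feature per band
```

Searching over which subsets of such bands to combine mirrors the paper's analysis of filter bank intervals and optimal frequency combinations.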

 

The introduction of a supplementary bond graph classifier to enhance SVM classifier performance in noisy data environments showcases practicality. This augmentation could mitigate performance degradation in noisy datasets, expanding BCI robustness.
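The general pattern of pairing an SVM with a supplementary classifier for noisy data can be sketched generically. The k-NN below is a stand-in assumption, not the paper's bond graph classifier, and the fallback-on-uncertainty rule is one illustrative way to combine the two.

```python
# Illustrative sketch: an SVM plus a supplementary classifier on noisy data.
# A k-NN stands in for the paper's bond graph classifier (an assumption).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Synthetic noisy dataset: flip_y injects 20% label noise.
X, y = make_classification(n_samples=600, n_features=10,
                           flip_y=0.2, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf", probability=True).fit(Xtr, ytr)
aux = KNeighborsClassifier(n_neighbors=15).fit(Xtr, ytr)

# Fall back to the supplementary classifier where the SVM is uncertain.
proba = svm.predict_proba(Xte)
uncertain = np.abs(proba[:, 1] - 0.5) < 0.1
pred = svm.predict(Xte)
pred[uncertain] = aux.predict(Xte[uncertain])

print("SVM alone:", svm.score(Xte, yte))
print("SVM + supplementary:", float(np.mean(pred == yte)))
```

The combination rule here is a simple confidence gate; the point is only that a second classifier can cover regions where a noisy-data SVM degrades.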

 

The deep formula recognition model demonstrates dedication to expanding signal processing techniques. Converting data into a formula-based representation and reducing noise in subsequent layers presents an innovative approach to improving BCI precision. Extracting root intervals of formulas for diagnosis underscores practical significance.

 

The reported outcomes, with accuracies ranging from 55% to 98%, support the efficacy of the proposed methodologies. Although the deep formula detection model reaches only 55% accuracy, the new combined signals achieve an impressive 98%.

 

In conclusion, this research offers a comprehensive approach to advancing BCI technology. The novel communication model, precise frequency manipulation, supplementary classifier, and formula-based representation work together in a groundbreaking manner. The implications extend beyond academia, with the potential to enhance quality of life for BCI users. The authors' commitment and practicality deserve praise and further study.

 

One area requiring attention is the quality of the accompanying visual aids. While the concepts and methodologies are impactful, the current graphical representations fall short of conveying the proposed models and processes.

 

Including high-quality, vector-format illustrations is crucial for clarity. Complex ideas, like Lip-sync imagery and Fragmentary Continuous frequencies, demand precise depictions to aid readers' understanding.

Moderate editing of English language required.

Author Response

Thank you for your review.

 

Thank you for your detailed understanding of our ideas.

 

We used specialized editing software and a native English speaker to improve the quality of our paper.

Reviewer 2 Report

The authors presented an interesting study on BCI based on EEG signals.

1. However, the current form in which the authors have presented their work is very poor. It is very difficult to understand the flow of work done by them. It is difficult to understand exactly the contributions made by the authors through this work. There is no graphical depiction of their flow of work. There is no clear distinction between what was done previously and what the authors newly bring to the table. Also, the content has been repeatedly written under different sections.

2. The title is very long and confusing. Make the title short and precise.

3. Similar is the case with the abstract. It is very long and doesn't convey the work done by the authors in a crisp and compact way. Please rewrite or reorganize the abstract to make it more clear to understand.

4. The language of the paper is not up to the mark. There are a lot of grammatical errors in the manuscript. Please have it vetted by a professional English language editing service.

5. The results presented are very exhaustive. However, for better clarity kindly include the citation reference and the year of publication of the methods mentioned in the comparative performance analysis.

6. Overall, I would recommend that the authors refer to papers published in reputable journals related to their current manuscript to get a fair idea of how they can effectively present this work for a broader audience, including novice readers.

Author Response

  1. However, the current form in which the authors have presented their work is very poor. It is very difficult to understand the flow of work done by them. It is difficult to understand exactly the contributions made by the authors through this work. There is no graphical depiction of their flow of work. There is no clear distinction between what was done previously and what the authors newly bring to the table. Also, the content has been repeatedly written under different sections.

We believe that the current explanation is sufficient. If more explanation is required, please guide us.

  2. The title is very long and confusing. Make the title short and precise.

We have edited the title; it is now shorter and more precise.

  3. Similar is the case with the abstract. It is very long and doesn't convey the work done by the authors in a crisp and compact way. Please rewrite or reorganize the abstract to make it more clear to understand.

We have edited the abstract. If it needs further revision, please let us know.

  4. The language of the paper is not up to the mark. There are a lot of grammatical errors in the manuscript. Please have it vetted by a professional English language editing service.

We used specialized editing software and a native English speaker to improve the quality of the English in our paper.

  5. The results presented are very exhaustive. However, for better clarity kindly include the citation reference and the year of publication of the methods mentioned in the comparative performance analysis.

We have added the requested references. If more information is required, please tell us specifically so we can improve the paper.

  6. Overall, I would recommend that the authors refer to papers published in reputable journals related to their current manuscript to get a fair idea of how they can effectively present this work for a broader audience, including novice readers.

We used specialized editing software and a native English speaker to improve the quality of the English. We have also deleted some unclear paragraphs.

Reviewer 3 Report

The paper discusses brain-computer interfaces, where rapid brain signal processing is pivotal. Four key concepts emerge: the introduction of a novel "Lip-sync imagery" mental task for universal communication representation, a strategic approach to combining limited frequency ranges for enhanced feature extraction, the incorporation of a supplementary bond graph classifier to bolster SVM classifiers in noisy environments, and the development of a deep formula recognition model that reduces noise and enables diagnosis through root intervals.

The paper has major issues with the language. It is largely incomprehensible to me, and I had to make educated guesses about the content for the most part. I suggest a complete overhaul with regard to the language. Please use professional language editing software to revise the manuscript. I do have a few comments based on what I could understand.

1.      The methodology for the lip-synchronization is not clear. Key information regarding the type of EEG machine used and the experimental design is missing or is hard to find. A detailed description of the design will help.

2.      It is unclear whether all the electrodes were used for feature extraction or not

3.      The introduction tries to give a comprehensive view of research on motor imagery. However, the citations from 2020 till present are missing, making the information presented not comprehensive and slightly outdated. I suggest adding citations from recent research as well. For example, since the study is about lip-sync assisted speech imagery it might be useful for them to look into a study by Lakshminarayanan et al. about combining action observation with kinesthetic motor imagery.

·        Lakshminarayanan, K., Shah, R., Daulat, S. R., Moodley, V., Yao, Y., & Madathil, D. (2023). The effect of combining action observation in virtual reality with kinesthetic motor imagery on cortical activity. Frontiers in Neuroscience, 17, 1201865.

4.      Also since the paper talks about a different modality of imagery, it might be useful to include other modalities of imagery such as tactile imagery

·        Lakshminarayanan, K., Shah, R., Daulat, S. R., Moodley, V., Yao, Y., Sengupta, P., ... & Madathil, D. (2023). Evaluation of EEG Oscillatory patterns and classification of compound limb tactile imagery. Brain Sciences, 13(4), 656.

The authors are welcome to include these citations and more from other researchers to give a comprehensive view on the field.


Author Response

Thank you for engaging deeply with our ideas.

  1. The methodology for the lip-synchronization is not clear. Key information regarding the type of EEG machine used and the experimental design is missing or is hard to find. A detailed description of the design will help.

We have tried to clarify this: our study uses lip-sync imagery for the imagination task, together with a simple VR component to support the mental task.

  2. It is unclear whether all the electrodes were used for feature extraction or not

We have added the number of electrodes used in our experiment.

  3. The introduction tries to give a comprehensive view of research on motor imagery. However, the citations from 2020 till present are missing, making the information presented not comprehensive and slightly outdated. I suggest adding citations from recent research as well. For example, since the study is about lip-sync assisted speech imagery it might be useful for them to look into a study by Lakshminarayanan et al. about combining action observation with kinesthetic motor imagery.
  • Lakshminarayanan, K., Shah, R., Daulat, S. R., Moodley, V., Yao, Y., & Madathil, D. (2023). The effect of combining action observation in virtual reality with kinesthetic motor imagery on cortical activity. Frontiers in Neuroscience, 17, 1201865.
  4. Also since the paper talks about a different modality of imagery, it might be useful to include other modalities of imagery such as tactile imagery
  • Lakshminarayanan, K., Shah, R., Daulat, S. R., Moodley, V., Yao, Y., Sengupta, P., ... & Madathil, D. (2023). Evaluation of EEG Oscillatory patterns and classification of compound limb tactile imagery. Brain Sciences, 13(4), 656.

The authors are welcome to include these citations and more from other researchers to give a comprehensive view on the field.

We have added the two suggested papers for comparison with ours. Our approach could be further improved using this new TL idea.

 

Comments on the Quality of English Language

 

We used specialized editing software and a native English speaker to improve the quality of the English. We have also deleted some unclear paragraphs.

Round 2

Reviewer 2 Report

The authors have addressed all the suggestions. I recommend the manuscript for publication.

Reviewer 3 Report

Thank you for diligently addressing all the comments and making appropriate changes to the manuscript.
