Article
Peer-Review Record

A Multiscale Residual Attention Network for Multitask Learning of Human Activity Using Radar Micro-Doppler Signatures

Remote Sens. 2019, 11(21), 2584; https://doi.org/10.3390/rs11212584
by Yuan He, Xinyu Li * and Xiaojun Jing
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 19 September 2019 / Revised: 15 October 2019 / Accepted: 30 October 2019 / Published: 4 November 2019
(This article belongs to the Special Issue Radar Remote Sensing on Life Activities)

Round 1

Reviewer 1 Report

This is an interesting work that applies a multiscale residual attention network to joint activity recognition and person identification using micro-Doppler signatures from a UWB radar. The paper is well written and should be considered for publication. However, the reviewer would like the authors to clarify some concerns listed below.

1. The authors should consider changing the paper's title to convey the micro-Doppler signatures and the UWB radar.

2. It seems to me that the multitask classifier part of the MRA-Net is not well explained in Section 3. Please elaborate.

3. I would like the authors to include a section in which they discuss the performance of the MRA-Net in the presence of noise.

4. The paper demonstrates that the proposed approach outperforms state-of-the-art single-task approaches. It would be nice if the authors could also compare with state-of-the-art multitask approaches. One example could be: Lang, Q. Wang, Y. Yang, C. Hou, H. Liu and Y. He, "Joint Motion Classification and Person Identification via Multi-Task Learning for Smart Homes," IEEE Internet of Things Journal, doi: 10.1109/JIOT.2019.2929833.

Finally, I have some minor suggestions:

- The authors use both 3*3, 5*5 and 3x3, 5x5. Please stick with one notation.

- Please re-arrange the figures in a better way. For example:

   + Fig. 6 is referred to (in Section 3.1) right after Fig. 2.

   + Fig. 9 is referred to before Fig. 8.

   + Fig. 11 is referred to before Fig. 10.

- Please enlarge Fig. 2 and Fig. 6.


Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Dear authors, congratulations on the technical quality of the paper.
It is very interesting and well written. I believe the following suggestions may enhance the quality of the paper:

1. In the summary, the results obtained through the experiments should be explained adequately. Other factors, such as how the data were obtained, should also be explained.
2. In the introduction, put the subject before the citations, as in: "For example, [5] ...".
3. In the introduction, add a reference for the concept of "multitask learning (MTL)".
4. In the introduction, the text states that the contributions mainly include four aspects, but I could only identify three. Please review.
5. In related works, put the subject in the citation: "In [14], a fingerprint recognition ...".
6. Correct the typos and the missing subjects in the citations of Sub-Section 2.2.
7. There is a double representation for the value of f. In Equation 2, I suggest that the nonlinear mapping be represented in another way.
8. Present what S represents in Equation 2. Is it a vector of MD signatures? If it is, make it clear.
9. A tip: the figures are placed very far from their citations in the text, creating expectations that are not met during continuous reading of the paper. Within the template, try to place each figure close to its citation or explanation in the text.
10. A conceptual explanation of the data-collection flow with the devices in Figure 5 would make the paper more understandable to new readers or stakeholders in the topic.
11. Make clear the criteria for choosing the test persons.
12. Fill in the authors' contributions according to the journal's requirements.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Only minor comments:

Table 1: physical unit of PRF is missing

The abbreviations DCNN and LSTM are not spelled out.


I'm not an expert in machine learning; nevertheless, one question arises. You present results as percentages with two digits after the decimal point, based on only a few experiments. How realistic is that? Is it possible to make some statements about the variance of your results?
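One standard way to address this concern (a minimal sketch, not taken from the paper; the accuracy values below are hypothetical placeholders) is to repeat each experiment with different random seeds and report the mean accuracy together with its sample standard deviation, so the reported precision reflects the actual spread:

```python
import statistics

# Hypothetical accuracies (%) from five repeated runs with different
# random seeds; real values would come from re-running the experiments.
accuracies = [97.12, 96.85, 97.40, 96.98, 97.21]

mean = statistics.mean(accuracies)
std = statistics.stdev(accuracies)  # sample standard deviation (n - 1)

# Report only as many digits as the spread supports.
print(f"Accuracy: {mean:.2f} +/- {std:.2f} %")
```

With a spread of a few tenths of a percent across runs, two decimal places on a single run would overstate the precision; reporting mean ± std (or a confidence interval) makes the claim defensible.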

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

This paper is well written and contains interesting content on action recognition and person identification. Also, the performance was verified through comparative analysis.

1) It would be better to add some recent references, such as the following:

- Deep learning for sensor-based activity recognition: A survey, Pattern Recognition Letters, Vol. 119, 1 March 2019, pp. 3-11.

- Kun Liu, Wu Liu, Chuang Gan, Mingkui Tan, Huadong Ma, "T-C3D: Temporal Convolutional 3D Network for Real-Time Action Recognition," AAAI 2018.

- "deepGesture: Deep Learning-based Gesture Recognition Scheme using Motion Sensors," Displays (Elsevier), doi: 10.1016/j.displa.2018.08.001, Vol. 55, pp. 38-45, Dec. 2018.

2) There are some typographical errors, so the authors should check and correct them carefully.

I recommend this paper to be acceptable after minor revision.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
