Article
Peer-Review Record

Research on Campus Space Features and Visual Quality Based on Street View Images: A Case Study on the Chongshan Campus of Liaoning University

Buildings 2023, 13(5), 1332; https://doi.org/10.3390/buildings13051332
by Yumeng Meng 1, Qingyu Li 2, Xiang Ji 2, Yiqing Yu 2, Dong Yue 1, Mingqi Gan 1, Siyu Wang 2, Jianing Niu 1 and Hiroatsu Fukuda 3,*
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 31 March 2023 / Revised: 23 April 2023 / Accepted: 13 May 2023 / Published: 19 May 2023
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

Round 1

Reviewer 1 Report

The paper addresses an important issue: campus space features and visual quality based on street view images. Overall, the paper is interesting from both a practical and a theoretical point of view, as it argues that, among the research and study area, student living area, sports area, teacher living area, and surrounding area, the visual beauty quality (VBQ) of the student living area is the highest and that of the teacher living area is the lowest. The improvements required for the paper concern the following:

- The literature on campus space features and visual quality, and on machine learning for semantic segmentation and spatial perception prediction from street view images, is somewhat dated. More recent studies (2022, 2023, ...) could be included.

- A theoretical background could be created as Section 2, with Campus Space Features and Visual Quality Based on Street View Images (2.1) and Machine Learning Approaches (Deep Learning) in Semantic Segmentation and Spatial Perception (2.2) as sub-sections.

- It is not clear how machine learning approaches (deep learning) are applied to semantic segmentation and spatial perception. This could be elaborated in more detail in Section 2.2.

- A summary of key literature on machine learning approaches (deep learning) for semantic segmentation and spatial perception could be presented in a table.

- The authors could explain why they preferred the DeepLab v3+ deep convolutional neural network architecture over other deep learning approaches.

- The network structure of the DeepLab v3+ model (a deep convolutional encoder-decoder architecture) could be presented in a flow chart to better explain the process.

- The discussion gives little information on how the VBQ results for the areas, and the comparative analysis of the five types of areas, were integrated into the decision-making process.

- What are the implications for practice, particularly for managers? The practical implications of the performed analysis (spatial perception prediction) could be elaborated further.

- The limitations of the study and recommendations for future work could be further elaborated.

Minor editing of English language required.

Author Response

Point 1: The literature on campus space features and visual quality, and on machine learning for semantic segmentation and spatial perception prediction from street view images, is somewhat dated. More recent studies (2022, 2023, ...) could be included.

Response: Thank you for pointing this out. We updated the reference list with several recent studies and unified the citation format. The new references are highlighted in yellow in the manuscript (see page 15, lines 505-208; page 16, lines 512-513, 533-535, 538-539, 546-547, 554-557; page 17, lines 590-591, 594-599, 608-609; page 18, lines 625-626, 633-635, 639-643, 651-654).

 

Point 2: A theoretical background could be created as Section 2, with Campus Space Features and Visual Quality Based on Street View Images (2.1) and Machine Learning Approaches (Deep Learning) in Semantic Segmentation and Spatial Perception (2.2) as sub-sections.

Response: Thank you for pointing this out; we agree with your comment. To present the content of the study more clearly, the introduction was divided into Section 1 (Introduction) and Section 2 (Literature Review). Section 2 includes two sub-sections: 2.1 Campus Space Features and Visual Quality Based on Street View Images and 2.2 Machine Learning in Semantic Segmentation and Spatial Perception (see page 2, lines 60-94; page 3, lines 95-135).

 

Point 3: It is not clear how machine learning approaches (deep learning) are applied to semantic segmentation and spatial perception. This could be elaborated in more detail in Section 2.2.

Response: We agree with your comment. In this study, machine learning covers both semantic segmentation and computer vision models: semantic segmentation was used to analyze the landscape elements of the objective urban environment, while a computer vision model was used to quantify urban perception. A detailed description of these machine learning methods was added to the manuscript (see page 2, lines 86-94; page 3, lines 95-96, 104-105, 110-115, 119-135).
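To make this two-step pipeline concrete, the sketch below is a minimal illustration (not the authors' code): it turns a per-pixel label map from any semantic segmentation model into landscape element proportions and then feeds those proportions to a simple regressor standing in for a perception model trained on data such as Place Pulse 2.0. The class indices, synthetic data, and RandomForestRegressor are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical mapping from segmentation class index to landscape element.
ELEMENTS = {0: "sky", 1: "building", 2: "vegetation", 3: "road"}

def element_shares(label_map: np.ndarray) -> np.ndarray:
    """Fraction of image pixels assigned to each landscape element."""
    total = label_map.size
    return np.array([(label_map == idx).sum() / total for idx in ELEMENTS])

rng = np.random.default_rng(0)

# Step 1 (objective features): a per-pixel label map, here random numbers
# standing in for real semantic segmentation output.
label_map = rng.integers(0, len(ELEMENTS), size=(512, 512))
features = element_shares(label_map)

# Step 2 (subjective perception): a regressor standing in for a perception
# model; the training data below are synthetic, purely to show the interface.
X_train = rng.random((200, len(ELEMENTS)))
y_train = 2.0 * X_train[:, 2] - X_train[:, 1]   # made-up "beauty" target
perception_model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

print("element shares:", dict(zip(ELEMENTS.values(), features.round(3))))
print("predicted visual quality:", perception_model.predict([features])[0])
```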

 

Point 4: A summary of key literature on machine learning approaches (deep learning) for semantic segmentation and spatial perception could be presented in a table.

Response: Thank you for your suggestion; we agree. A summary table of the key literature was added to the text (see page 3, line 136, Table 1).

 

Point 5: The authors could explain why they preferred the DeepLab v3+ deep convolutional neural network architecture over other deep learning approaches.

Response: Thank you for your question. We added content to the methods section: we conducted a comparative analysis with other semantic segmentation algorithms and explained the reasons for using the DeepLab v3+ deep convolutional neural network architecture (see page 5, lines 166-175; page 6, line 177).

 

Point 6: The network structure of the DeepLab v3+ model (a deep convolutional encoder-decoder architecture) could be presented in a flow chart to better explain the process.

Response: Thank you for pointing this out. To express the steps of the experiment more clearly, we added a flow chart to the text (see page 6, Figure 2).

 

Point 7: The discussion gives little information on how the VBQ results for the areas, and the comparative analysis of the five types of areas, were integrated into the decision-making process.

Response: Thank you for your questions and suggestions. We added Section 4.1 to analyze the VBQ results for the areas and the comparative analysis of the five types of areas (see page 13, lines 365-384).

 

Point 8: What are the implications for practice, particularly for managers? The practical implications of the performed analysis (spatial perception prediction) could be elaborated further.

Response: Thank you for pointing this out. We provided a more detailed description of the practical applications and research significance in the conclusion (see page 15, lines 464-470).

First, the study not only helps identify the areas whose VBQ needs to be optimized, but also analyzes the reasons for low VBQ by combining perception results with physical features.

Second, the study helps provide strategies and schemes for improving space quality.

Third, the study provides theoretical and technical support for improving campus street space and coordinating it with the surrounding areas.

 

Point 9: The limitations of the study and recommendations for future work could be further elaborated.

Response: Thank you for pointing this out. We added Section 4.4 to describe the limitations of the study and future work (see page 14, lines 433-451).

Author Response File: Author Response.pdf

Reviewer 2 Report

Some obvious cases for improvement: 

Line 35: What is the meaning of "Corresponding" here?

Line 70: Why is this capitalized? In the next paragraph it is not capitalized. Why?

Line 97: Why is there no literature reference here? The same applies to line 100.

Line 301: The sentence construction seems wrong here.

Line 339: Add a reference here.

More clarity is needed on why you chose this ML model.

Add details on the model and the prediction accuracy of the ML model.

 

 

 

Comments for author File: Comments.pdf

There are obvious language errors that need correction.

Author Response

Point 1: Line 35: What is the meaning of "Corresponding" here?

Response: Thank you for pointing this out. The grammar and sentence structure of the introduction were revised in the manuscript so that it can be clearly understood (see page xx, line xx).

 

 

Point 2: Line 70: Why is this capitalized? In the next paragraph it is not capitalized. Why?

Response: Thank you for pointing this out. We corrected the mistake and also comprehensively checked and corrected the grammar throughout the manuscript (see page 1, line 35).

 

 

Point 3: Line 97: Why is there no literature reference here? The same applies to line 100.

Response: Thank you for pointing this out. The end of the introduction describes the organization of the manuscript. ArcGIS and the Place Pulse 2.0 (PP 2.0) data set are part of the methods used in this paper, which is why no literature reference is given at this point. We look forward to any further questions and comments you may have.

 

Point 4: Line 301: The sentence construction seems wrong here.

Response: Thank you for your question. To express the content of the study clearly, grammar and sentence structure problems were checked and revised in the manuscript (see page 12, lines 335-337).

 

Point 5: Line 339: Add a reference here.

Response: Thank you for pointing this out. Following your suggestion, references were added in the manuscript (see page 13, line 395; page 18, line 662).

 

Point 6: More clarity is needed on why you chose this ML model. Add details on the model and the prediction accuracy of the ML model.

Response: Thank you for pointing this out. We added a detailed description of the model and the prediction accuracy of the ML model to the manuscript (see page 7, lines 207-213).

 

Author Response File: Author Response.pdf

Reviewer 3 Report

1. In the introduction, the advantages of the existing methods should be highlighted.

2. The introduction should be improved by including recent literature.

3. The approaches used in the proposed methodology need to be rewritten more clearly. It would be better if the authors could provide pseudocode as an algorithm for the proposed work.

4. What difficulties did you meet when deriving the current results? The authors are suggested to add a remark after the main results. The proposed method should be compared with some more recent works.

5. For the developed method, did you consider the computational burden? I think this needs more discussion after the main results, and a remark would be helpful here.

6. In the experimental part, the detailed parameters used in the proposed methodology are not given. Kindly conduct a comparative study.

7. All figures should be clearly cited. 

8. References should be cited properly.  

Minor editing of English language is required.

Author Response

Point 1: In the introduction, the advantages of the existing methods should be highlighted.

Response: We agree with your comment. We added and emphasized the advantages of the existing methods in Section 2 (Literature Review) (see page 2, lines 86-94; page 3, lines 95-96).

 

Point 2: The introduction should be improved by including recent literature.

Response: Thank you for pointing this out. We updated the reference list with several recent studies and unified the citation format. The new references are highlighted in yellow in the manuscript (see page 15, lines 505-208; page 16, lines 512-513, 533-535, 538-539, 546-547, 554-557; page 17, lines 590-591, 594-599, 608-609; page 18, lines 625-626, 633-635, 639-643, 651-654).

 

Point 3: The approaches used in the proposed methodology need to be rewritten more clearly. It would be better if the authors could provide pseudocode as an algorithm for the proposed work.

Response: Thank you for pointing this out. To express the steps of the experiment more clearly, we added a flow chart to the text (see page 6, Figure 2). A detailed description of the research methodology was also added to the methods section so that readers can clearly understand the research methods (see page 7, lines 201-203).

 

Point 4: What difficulties did you meet when deriving the current results? The authors are suggested to add a remark after the main results. The proposed method should be compared with some more recent works.

Response: Thank you for your questions. We conducted a comparative analysis with other semantic segmentation algorithms. The DeepLab v3+ deep convolutional neural network architecture was used to recognize street landscapes in this study; DeepLab v3+ is known for its high accuracy in segmenting small street spaces and its efficiency with smaller training sets. The method has been widely used in previous studies on street view images and semantic segmentation, further highlighting its effectiveness and reliability in this area of research.
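As a rough illustration of this kind of segmentation step, the sketch below runs the pretrained DeepLabV3 (ResNet-50 backbone) shipped with torchvision on a single street view image to obtain the per-pixel class map that an element-proportion analysis would consume. This is a stand-in under stated assumptions: torchvision does not include the exact DeepLab v3+ encoder-decoder or the street-scene training data used in the paper, and the image path is hypothetical.

```python
import torch
from PIL import Image
from torchvision.models.segmentation import (
    deeplabv3_resnet50,
    DeepLabV3_ResNet50_Weights,
)

# Pretrained DeepLabV3 (ResNet-50) from torchvision; close to, but not the
# same as, the DeepLab v3+ encoder-decoder used in the paper.
weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("street_view.jpg").convert("RGB")   # hypothetical file
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"]             # [1, num_classes, H, W]
label_map = logits.argmax(dim=1).squeeze(0)  # per-pixel class indices

print("classes present in the image:", label_map.unique().tolist())
```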

 

Point 5: For the developed method, did you consider the computational burden? I think this needs more discussion after the main results, and a remark would be helpful here.

Response: Thank you for your question. We considered the computational burden. The computer vision model and semantic segmentation are well-established methods and operate with high performance.

 

Point 6: In the experimental part, the detailed parameters used in the proposed methodology are not given. Kindly conduct a comparative study.

Response: Thank you for pointing this out. We added a detailed description of the model and the prediction accuracy of the ML model to the manuscript (see page 6, lines 207-213).

 

Point 7:  All figures should be clearly cited.

Response: Thank you for your comment. We comprehensively checked the manuscript and ensured that all figures are clearly cited (see page 6, line 182).

 

Point 8: References should be cited properly. 

Response: Thank you for pointing this out; we agree. The missing information was added to the references, and we checked and unified the citation format.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

After revision, the originality, importance, added value, and potential contribution of the paper to the journal have been improved. The revised paper is well articulated and deals with sufficiently new and original concepts regarding campus space features and visual quality, and the use of machine learning for semantic segmentation and spatial perception prediction on street view images. The organization of the sections is clearer and easier to read. A detailed description of machine learning approaches (deep learning) in semantic segmentation and spatial perception has been included. A summary of key literature on machine learning approaches (deep learning) for semantic segmentation and spatial perception has been presented in a table. The reasons for using the DeepLab v3+ deep convolutional neural network architecture have been explained. The network structure of the DeepLab v3+ model has been presented in a flow chart to better explain the process. The VBQ results for the areas and the comparative analysis of the five types of areas have been explained. A more detailed description of the applications and research significance has been included in the conclusion. The limitations of the study and recommendations for future work have been included.

English language fine. No issues detected.

Reviewer 3 Report

It is acceptable. 

Minor editing of English language required.
