
Visual Sensing for Urban Intelligence

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 5765

Special Issue Editors


Guest Editor
Institute for Information Science and Technologies – National Research Council, Italy
Interests: artificial intelligence; computer vision; smart surveillance; intelligent transportation systems; human–computer interaction; robotics; 3D virtual agents and augmented reality

Guest Editor
Institute of Information Science and Technologies, National Research Council of Italy, Signals and Images Laboratory, Via Moruzzi, 1, 56124 Pisa, Italy
Interests: computational intelligence and intelligent systems; artificial intelligence; computer vision; multimedia information processing; signal processing; assistive technologies; interactive systems and augmented reality

Guest Editor
Kingston University
Interests: ambient assisted living; artificial intelligence and machine learning; computer vision; object detection; face recognition; plant classification; video surveillance; action and activity recognition; video segmentation and summarization

Special Issue Information

Dear Colleagues,

The focus of this Special Issue is to present the progress in video sensor network technologies and their applications to advance the design, implementation, and management of urban assets. The ultimate goal is to support city administrators and stakeholders in designing sustainable and personalized solutions for the autonomous, reliable, and efficient operation of modern, digital cities.

To this end, combining methods from computer vision, artificial intelligence, and pattern recognition with the progress in edge computing and embedded systems offers great potential that has yet to be fully realized. For instance, smart camera networks, i.e., networks of cameras equipped with high-level visual intelligence, can bring power and flexibility to the city to address wide-area monitoring problems.
Besides active real-time monitoring, smart cameras and video analytics can help in understanding the behavior of people in both public and private spaces, in outdoor and indoor scenarios, and how people perceive the environment around them.

With the critical issues raised by the COVID-19 outbreak, special attention will be paid to smart monitoring applications concerned with the adherence to precautions and social distancing along with new, unexplored ideas for the analysis of individuals and clusters of people during the pandemic.

This Special Issue will be of interest to researchers, data scientists, and practitioners working on models for the automatic understanding of visual information and aiming to gain more in-depth insight into the city for adaptive policy-making. We invite authors to submit innovative work connected, but not limited, to the following fields of interest:

  • Intelligent sensors for ecoscaping and sustainable urban development;
  • Data processing and analysis methods for the automatic detection of urban-related information;
  • Vision understanding applications in smart surveillance and the urban environment;
  • Human action recognition from video;
  • Decision-making in real urban scenarios;
  • Machine learning methods applied to smart surveillance and urban mobility;
  • Deep neural network architectures for computer vision;
  • Supervised, semi-supervised, and unsupervised learning from video;
  • Artificial intelligence in resource-constrained environments and on-device machine learning;
  • Embedded systems and edge computing;
  • Perception, sensing, and smart environment monitoring;
  • Sensor fusion for environment perception;
  • Smart sensor software architectures;
  • Video data representation, summarization, and visualization.

Dr. Giuseppe Riccardo Leone
Dr. Davide Moroni
Prof. Paolo Remagnino
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Intelligent sensors for ecoscaping and sustainable urban development
  • Data processing and analysis methods for the automatic detection of urban-related information
  • Vision understanding applications in smart surveillance and the urban environment
  • Human action recognition from video
  • Decision-making in real urban scenarios
  • Machine learning methods applied to smart surveillance and urban mobility
  • Deep neural network architectures for computer vision
  • Supervised, semi-supervised, and unsupervised learning from video
  • Artificial intelligence in resource-constrained environments and on-device machine learning
  • Embedded systems and edge computing
  • Perception, sensing, and smart environment monitoring
  • Sensor fusion for environment perception
  • Smart sensor software architectures
  • Video data representation, summarization, and visualization.

Published Papers (2 papers)


Research

17 pages, 2064 KiB  
Article
Transformers for Urban Sound Classification—A Comprehensive Performance Evaluation
by Ana Filipa Rodrigues Nogueira, Hugo S. Oliveira, José J. M. Machado and João Manuel R. S. Tavares
Sensors 2022, 22(22), 8874; https://doi.org/10.3390/s22228874 - 16 Nov 2022
Cited by 5 | Viewed by 2292
Abstract
Many relevant sound events occur in urban scenarios, and robust classification models are required to identify abnormal and relevant events correctly. These models need to identify such events within valuable time, being effective and prompt. It is also essential to determine for how much time these events prevail. This article presents an extensive analysis developed to identify the best-performing model to successfully classify a broad set of sound events occurring in urban scenarios. Analysis and modelling of Transformer models were performed using available public datasets with different sets of sound classes. The Transformer models’ performance was compared to the one achieved by the baseline model and end-to-end convolutional models. Furthermore, the benefits of using pre-training from image and sound domains and data augmentation techniques were identified. Additionally, complementary methods that have been used to improve the models’ performance and good practices to obtain robust sound classification models were investigated. After an extensive evaluation, it was found that the most promising results were obtained by employing a Transformer model using a novel Adam optimizer with weight decay and transfer learning from the audio domain by reusing the weights from AudioSet, which led to an accuracy score of 89.8% for the UrbanSound8K dataset, 95.8% for the ESC-50 dataset, and 99% for the ESC-10 dataset, respectively.
(This article belongs to the Special Issue Visual Sensing for Urban Intelligence)
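The "Adam optimizer with weight decay" the abstract refers to is commonly known as AdamW, whose defining feature is that the weight decay term is decoupled from the gradient-based update. A minimal, single-parameter sketch of one such update step is shown below; this is an illustration of the optimizer family, not the authors' implementation, and all names and hyperparameter values here are illustrative defaults:

```python
import math

def adamw_step(p, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=0.01):
    """One AdamW update for a single scalar parameter.

    Unlike plain Adam with L2 regularisation, the weight decay term is
    decoupled: it is applied directly to the parameter rather than being
    folded into the gradient before the moment estimates are computed.
    """
    m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment (uncentred variance)
    m_hat = m / (1 - b1 ** t)          # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    # Adaptive gradient step plus decoupled weight decay.
    p = p - lr * (m_hat / (math.sqrt(v_hat) + eps) + wd * p)
    return p, m, v

# A few steps on f(p) = p**2 (gradient 2*p): p moves toward the minimum at 0.
p, m, v = 1.0, 0.0, 0.0
for t in range(1, 101):
    p, m, v = adamw_step(p, 2 * p, m, v, t, lr=0.05)
print(p)
```

In practice one would use a library implementation (e.g., PyTorch's `torch.optim.AdamW`) applied to all model parameters; the sketch only makes the decoupled decay term explicit.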

30 pages, 615 KiB  
Article
Sound Classification and Processing of Urban Environments: A Systematic Literature Review
by Ana Filipa Rodrigues Nogueira, Hugo S. Oliveira, José J. M. Machado and João Manuel R. S. Tavares
Sensors 2022, 22(22), 8608; https://doi.org/10.3390/s22228608 - 08 Nov 2022
Cited by 6 | Viewed by 2593
Abstract
Audio recognition can be used in smart cities for security, surveillance, manufacturing, autonomous vehicles, and noise mitigation, just to name a few. However, urban sounds are everyday audio events that occur daily, presenting unstructured characteristics containing different genres of noise and sounds unrelated to the sound event under study, making it a challenging problem. Therefore, the main objective of this literature review is to summarize the most recent works on this subject to understand the current approaches and identify their limitations. Based on the reviewed articles, it can be realized that Deep Learning (DL) architectures, attention mechanisms, data augmentation techniques, and pretraining are the most crucial factors to consider while creating an efficient sound classification model. The best-found results were obtained by Mushtaq and Su, in 2020, using a DenseNet-161 with pretrained weights from ImageNet, and NA-1 and NA-2 as augmentation techniques, which were of 97.98%, 98.52%, and 99.22% for UrbanSound8K, ESC-50, and ESC-10 datasets, respectively. Nonetheless, the use of these models in real-world scenarios has not been properly addressed, so their effectiveness is still questionable in such situations.
(This article belongs to the Special Issue Visual Sensing for Urban Intelligence)
