
Selected Papers from the 9th International Conference on Imaging for Crime Detection and Prevention (ICDP-19)

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (10 October 2020) | Viewed by 13896

Special Issue Editors


Guest Editor
School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, UK
Interests: computer vision; intelligent transportation systems

Guest Editor
University of Westminster
Interests: explainable AI; semantic computing; information retrieval and search

Guest Editor
School of Computer Science and Engineering, University of Westminster, London, UK
Interests: computer vision; neural networks; computational intelligence; machine learning; multimodal human computer interaction; robotics

Guest Editor
University of Westminster
Interests: signal processing; sensors and sensor networks; image processing

Special Issue Information

Dear Colleagues,

The 9th International Conference on Imaging for Crime Detection and Prevention (ICDP-19), held 16–18 December 2019 in London, UK, sought contributions on the development of automated surveillance systems, adaptive behaviours and machine learning methods that harness data generated from very different sources and with very different characteristics, such as social networks and smart cities, to address the vulnerability of public spaces and individuals.

The aim of this Special Issue is to collect selected papers from the 2019 9th International Conference on Imaging for Crime Detection and Prevention (ICDP-19) describing research from both academia and industry on recent advances in the theory, application and implementation of crime detection and prevention concepts, technologies and applications. Authors of selected high-quality papers presented at the conference will be invited to submit extended versions of their original papers (extended by at least 50% in content, with a significantly different title, abstract and contents) to be fully peer-reviewed. Contributions are invited under the following conference topics:

  • Surveillance systems and solutions (system architecture aspects, operational procedures, usability, scalability)
  • Multicamera systems
  • Information fusion (e.g., from visible and infrared cameras, microphone arrays, etc.)
  • Learning systems, cognitive systems engineering and video mining
  • Robust computer vision algorithms (24/7 operation under variable conditions, object tracking, multicamera algorithms, behaviour analysis and learning, scene segmentation)
  • Human machine interfaces, human systems engineering and human factors
  • Wireless communications and networks for video surveillance, video coding, compression, authentication, watermarking, location-dependent services
  • Metadata generation, video database indexing, searching and browsing
  • Embedded systems, surveillance middleware
  • Gesture and posture analysis and recognition
  • Biometrics (including face recognition)
  • Forensics and crime scene reconstruction
  • X-Ray and terahertz scanning
  • Case studies, practical systems and testbeds
  • Data protection, civil liberties and social exclusion issues
  • Algorithmic bias and transparency for machine learning
  • AI ethics
  • Custom FPGA-based approximate computing

Accepted papers (after peer review) will be published immediately.

Prof. Dr. Sergio A. Velastin
Dr. Epaminondas Kapetanios
Dr. Anastasia Angelopoulou
Prof. Izzet Kale
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Surveillance systems
  • Wireless communications and networks for video surveillance
  • Machine learning
  • Artificial Intelligence
  • Computer vision
  • Security and privacy

Published Papers (4 papers)


Research

16 pages, 2558 KiB  
Article
Directed Gaze Trajectories for Biometric Presentation Attack Detection
by Asad Ali, Sanaul Hoque and Farzin Deravi
Sensors 2021, 21(4), 1394; https://doi.org/10.3390/s21041394 - 17 Feb 2021
Cited by 4 | Viewed by 1893
Abstract
Presentation attack artefacts can be used to subvert the operation of biometric systems by being presented to the sensors of such systems. In this work, we propose the use of visual stimuli with randomised trajectories to stimulate eye movements for the detection of such spoofing attacks. The presentation of a moving visual challenge is used to ensure that some pupillary motion is stimulated and then captured with a camera. Various types of challenge trajectories are explored on different planar geometries representing prospective devices where the challenge could be presented to users. To evaluate the system, photo, 2D mask and 3D mask attack artefacts were used and pupillary movement data were captured from 80 volunteers performing genuine and spoofing attempts. The results support the potential of the proposed features for the detection of biometric presentation attacks.
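The abstract above describes presenting a moving visual challenge along a randomised trajectory on a planar display. The paper's actual trajectory generation is not specified here; the following is a minimal illustrative sketch, assuming a stimulus that moves between random waypoints on a screen of hypothetical dimensions:

```python
import random

def random_challenge_trajectory(n_waypoints=5, steps_per_segment=30,
                                width=1920, height=1080, seed=None):
    """Generate a randomised 2D stimulus trajectory by linearly
    interpolating between random waypoints on a planar display.
    Illustrative only; the paper may use different trajectory shapes."""
    rng = random.Random(seed)
    waypoints = [(rng.uniform(0, width), rng.uniform(0, height))
                 for _ in range(n_waypoints)]
    path = []
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        for s in range(steps_per_segment):
            t = s / steps_per_segment
            # Linear interpolation between consecutive waypoints
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    path.append(waypoints[-1])
    return path
```

Each returned point would be rendered as the stimulus position for one frame while the camera records the subject's pupillary response.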

17 pages, 5638 KiB  
Article
Rectification and Super-Resolution Enhancements for Forensic Text Recognition
by Pablo Blanco-Medina, Eduardo Fidalgo, Enrique Alegre, Rocío Alaiz-Rodríguez, Francisco Jáñez-Martino and Alexandra Bonnici
Sensors 2020, 20(20), 5850; https://doi.org/10.3390/s20205850 - 16 Oct 2020
Cited by 4 | Viewed by 2943
Abstract
Retrieving text embedded within images is a challenging task in real-world settings. Multiple problems, such as low resolution and the orientation of the text, can hinder the extraction of information. These problems are common in environments such as Tor Darknet and Child Sexual Abuse images, where text extraction is crucial in the prevention of illegal activities. In this work, we evaluate eight text recognizers and, to increase the performance of text transcription, combine these recognizers with rectification networks and super-resolution algorithms. We test our approach on four state-of-the-art and two custom datasets (TOICO-1K and Child Sexual Abuse (CSA)-text, based on text retrieved from Tor Darknet and Child Sexual Exploitation Material, respectively). We obtained a score of 0.3170 for correctly recognized words on the TOICO-1K dataset when combining Deep Convolutional Neural Network (CNN) and rectification-based recognizers. For the CSA-text dataset, applying resolution enhancements achieved a final score of 0.6960. The highest performance increase was achieved on the ICDAR 2015 dataset, with an improvement of 4.83% when combining the MORAN recognizer and the Residual Dense resolution approach. We conclude that rectification outperforms super-resolution when applied separately, while their combination achieves the best average improvements on the chosen datasets.
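The abstract reports scores of "correctly recognized words" (e.g., 0.3170 on TOICO-1K). The exact metric definition is not given here; a plausible minimal version, assumed for illustration, is the fraction of predicted words that exactly match the ground truth:

```python
def word_recognition_score(predicted, ground_truth):
    """Fraction of ground-truth words that the recognizer transcribed
    exactly. Assumed metric; the papers' scoring may differ (e.g.,
    case folding or edit-distance-based matching)."""
    if not ground_truth:
        return 0.0
    correct = sum(p == g for p, g in zip(predicted, ground_truth))
    return correct / len(ground_truth)
```

For example, a recognizer that transcribes two of three words correctly would score about 0.667 under this definition.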

38 pages, 130648 KiB  
Article
Toward Mass Video Data Analysis: Interactive and Immersive 4D Scene Reconstruction
by Matthias Kraus, Thomas Pollok, Matthias Miller, Timon Kilian, Tobias Moritz, Daniel Schweitzer, Jürgen Beyerer, Daniel Keim, Chengchao Qu and Wolfgang Jentner
Sensors 2020, 20(18), 5426; https://doi.org/10.3390/s20185426 - 22 Sep 2020
Cited by 5 | Viewed by 3614
Abstract
Technical progress over the last decades has made photo and video recording devices omnipresent. This change has a significant impact on, among other things, police work. It is no longer unusual that a myriad of digital data accumulates after a criminal act, which must be reviewed by criminal investigators to collect evidence or solve the crime. This paper presents the VICTORIA Interactive 4D Scene Reconstruction and Analysis Framework ("ISRA-4D" 1.0), an approach for the visual consolidation of heterogeneous video and image data in a 3D reconstruction of the corresponding environment. First, by reconstructing the environment in which the materials were created, a shared spatial context of all available materials is established. Second, all footage is spatially and temporally registered within this 3D reconstruction. Third, a visualization of the resulting 4D reconstruction (3D scene + time) is provided, which can be analyzed interactively. Additional information on video and image content is also extracted and displayed and can be analyzed with supporting visualizations. The presented approach facilitates the process of filtering, annotating, analyzing, and getting an overview of large amounts of multimedia material. The framework is evaluated using four case studies which demonstrate its broad applicability. Furthermore, the framework allows users to immerse themselves in the analysis by entering the scenario in virtual reality. This feature is qualitatively evaluated by means of interviews with criminal investigators and outlines potential benefits such as improved spatial understanding and the initiation of new fields of application.

21 pages, 3366 KiB  
Article
Assessment and Estimation of Face Detection Performance Based on Deep Learning for Forensic Applications
by Deisy Chaves, Eduardo Fidalgo, Enrique Alegre, Rocío Alaiz-Rodríguez, Francisco Jáñez-Martino and George Azzopardi
Sensors 2020, 20(16), 4491; https://doi.org/10.3390/s20164491 - 11 Aug 2020
Cited by 15 | Viewed by 4589
Abstract
Face recognition is a valuable forensic tool for criminal investigators since it helps in identifying individuals in scenarios of criminal activity, such as fugitive tracking or child sexual abuse investigations. It is, however, a very challenging task, as it must handle low-quality images of real-world settings and fulfill real-time requirements. Deep learning approaches for face detection have proven very successful, but they require large computation power and processing time. In this work, we evaluate the speed–accuracy tradeoff of three popular deep-learning-based face detectors on the WIDER Face and UFDD data sets on several CPUs and GPUs. We also develop a regression model capable of estimating performance, both in terms of processing time and accuracy. We expect this to become a very useful tool for end users in forensic laboratories when estimating the performance of different face detection options. Experimental results showed that the best speed–accuracy tradeoff is achieved with images resized to 50% of the original size on GPUs and to 25% of the original size on CPUs. Moreover, performance can be estimated using multiple linear regression models with a Mean Absolute Error (MAE) of 0.113, which is very promising for the forensic field.
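The abstract above estimates detector performance with multiple linear regression and reports the fit quality as Mean Absolute Error. As a minimal sketch of that idea, the single-predictor case (e.g., image scale predicting processing time) can be fitted in closed form; the data values below are hypothetical illustrations, not the paper's measurements:

```python
def fit_simple_linear(xs, ys):
    """Ordinary least-squares fit y = slope * x + intercept
    (one predictor; the paper uses multiple linear regression)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, ys)) / var_x
    return slope, mean_y - slope * mean_x

def mean_absolute_error(ys, predictions):
    """MAE: the error measure reported in the abstract."""
    return sum(abs(y, ) if False else abs(y - p)
               for y, p in zip(ys, predictions)) / len(ys)

# Hypothetical measurements: image scale vs. detection time (ms)
scales = [0.25, 0.5, 0.75, 1.0]
times_ms = [30.0, 55.0, 82.0, 105.0]
slope, intercept = fit_simple_linear(scales, times_ms)
preds = [slope * s + intercept for s in scales]
mae = mean_absolute_error(times_ms, preds)
```

In practice one would extend this to several predictors (hardware type, resolution, detector) to reproduce the paper's MAE-based performance estimation.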
