Image and Video Forensics

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Biometrics, Forensics, and Security".

Deadline for manuscript submissions: closed (31 May 2021) | Viewed by 69036

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors

Dr. Irene Amerini
Guest Editor
Department of Computer, Control, and Management Engineering A. Ruberti, Sapienza University of Rome, 00185 Rome, Italy
Interests: multimedia forensics and security; machine learning; deep learning; computer vision
Special Issues, Collections and Topics in MDPI journals
Dr. Gianmarco Baldini
Guest Editor
European Commission, Joint Research Centre, Ispra, Italy
Interests: machine learning and deep learning in cybersecurity and automotive domains; physical layer identification and authentication
Dr. Francesco Leotta
Guest Editor
Department of Computer, Control, and Management Engineering, Sapienza Università di Roma, 00185 Roma, Italy
Interests: smart spaces; dataset generation; indoor localization and tracking systems; human-computer interaction

Special Issue Information

Dear Colleagues,

Nowadays, images and videos have become the main modalities of information exchanged in everyday life, and their pervasiveness has led the image forensics community to question their reliability, integrity, confidentiality, and security more and more. Multimedia content is generated in many different ways through consumer electronics and high-quality digital imaging devices, such as smartphones, digital cameras, tablets, wearables, and IoT devices. The ever-increasing convenience of image acquisition has facilitated the instant distribution and sharing of digital images on social platforms, producing a great amount of exchanged data. Moreover, the pervasiveness of powerful image editing tools has allowed the manipulation of digital images for malicious or criminal ends, up to the creation of synthesized images and videos with deep learning techniques.

In response to these threats, the multimedia forensics community has produced major research efforts regarding the identification of the source and the detection of manipulation. In all cases (e.g., forensic investigations, fake news debunking, information warfare, and cyberattacks) where images and videos serve as critical demonstrative evidence, forensic technologies that help to determine the origin, authenticity of sources, and integrity of multimedia content can become essential tools.

In detail, this Special Issue aims to collect a diverse and complementary set of articles that demonstrate new developments and applications in image and video forensics to tackle new and serious challenges to ensure media authenticity.

Dr. Irene Amerini
Dr. Gianmarco Baldini
Dr. Francesco Leotta
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image and video forensics
  • multimedia source identification
  • image and video forgery detection
  • image and video authentication
  • image and video provenance
  • electronic device identification (e.g., smartphone) through built-in sensors
  • deepfake detection
  • adversarial multimedia forensics

Published Papers (19 papers)


Editorial


3 pages, 182 KiB  
Editorial
Image and Video Forensics
J. Imaging 2021, 7(11), 242; https://doi.org/10.3390/jimaging7110242 - 17 Nov 2021
Cited by 1 | Viewed by 1913
Abstract
Nowadays, images and videos have become the main modalities of information being exchanged in everyday life, and their pervasiveness has led the image forensics community to question their reliability, integrity, confidentiality, and security more and more [...] Full article
(This article belongs to the Special Issue Image and Video Forensics)

Research


20 pages, 2342 KiB  
Article
An Automated Approach for Electric Network Frequency Estimation in Static and Non-Static Digital Video Recordings
J. Imaging 2021, 7(10), 202; https://doi.org/10.3390/jimaging7100202 - 02 Oct 2021
Cited by 3 | Viewed by 1843
Abstract
Electric Network Frequency (ENF) is embedded in multimedia recordings if the recordings are captured with a device connected to the power mains or placed near the power mains. It is exploited as a tool for multimedia authentication. ENF fluctuates stochastically around its nominal frequency of 50/60 Hz. In indoor environments, luminance variations captured by video recordings can also be exploited for ENF estimation. However, the various textures and different levels of shadow and luminance hinder ENF estimation in static and non-static video, making it a non-trivial problem. To address this problem, a novel automated approach is proposed for ENF estimation in static and non-static digital video recordings. The proposed approach is based on the exploitation of areas with similar characteristics in each video frame. These areas, called superpixels, have a mean intensity that exceeds a specific threshold. The performance of the proposed approach is tested on various videos of real-life scenarios that resemble surveillance footage from security cameras. These videos are of escalating difficulty, spanning from static recordings to recordings that exhibit continuous motion. The maximum correlation coefficient is employed to measure the accuracy of ENF estimation against the ground-truth signal. Experimental results show that the proposed approach improves ENF estimation over the state of the art, yielding statistically significant accuracy improvements. Full article
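The maximum correlation coefficient mentioned in the abstract can be illustrated with a minimal sketch: slide an estimated ENF trace over the ground-truth mains trace and keep the best Pearson correlation. The function names and the toy 50 Hz signals below are illustrative, not taken from the paper.

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def max_correlation(estimated, reference, max_lag=10):
    # Slide the estimate over the reference and keep the best match.
    best = -1.0
    n = len(estimated)
    for lag in range(max_lag + 1):
        window = reference[lag:lag + n]
        if len(window) < n:
            break
        best = max(best, pearson(estimated, window))
    return best

# Toy ENF traces fluctuating around a 50 Hz nominal frequency.
reference = [50 + 0.02 * math.sin(0.3 * t) for t in range(100)]
estimated = reference[5:65]  # the estimate matches the reference at lag 5
print(round(max_correlation(estimated, reference, max_lag=10), 3))  # → 1.0
```

A forged or re-encoded segment would typically show a noticeably lower maximum correlation against the mains reference.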
(This article belongs to the Special Issue Image and Video Forensics)

15 pages, 3009 KiB  
Article
Detection of Manipulated Face Videos over Social Networks: A Large-Scale Study
J. Imaging 2021, 7(10), 193; https://doi.org/10.3390/jimaging7100193 - 28 Sep 2021
Cited by 8 | Viewed by 2130
Abstract
The detection of manipulated videos represents a highly relevant problem in multimedia forensics, which has been widely investigated in recent years. However, a common trait of published studies is the fact that the forensic analysis is typically applied to data prior to their potential dissemination over the web. This work addresses the challenging scenario where manipulated videos are first shared through social media platforms and then subjected to forensic analysis. In this context, a large-scale performance evaluation has been carried out, involving general-purpose deep networks and state-of-the-art manipulated data, and studying different effects. Results confirm that a performance drop is observed in every case when unseen shared data are tested by networks trained on non-shared data; however, fine-tuning operations can mitigate this problem. We also show that the output of differently trained networks can carry useful forensic information for the identification of the specific technique used for visual manipulation, both for shared and non-shared data. Full article
(This article belongs to the Special Issue Image and Video Forensics)

16 pages, 507 KiB  
Article
Identification of Social-Media Platform of Videos through the Use of Shared Features
J. Imaging 2021, 7(8), 140; https://doi.org/10.3390/jimaging7080140 - 08 Aug 2021
Cited by 10 | Viewed by 2146
Abstract
Videos have become a powerful tool for spreading illegal content such as military propaganda, revenge porn, or bullying through social networks. To counter these illegal activities, it has become essential to try new methods to verify the origin of videos from these platforms. However, collecting datasets large enough to train neural networks for this task has become difficult because of the privacy regulations that have been enacted in recent years. To mitigate this limitation, in this work we propose two different solutions, based on transfer learning and multitask learning, to determine whether a video has been uploaded to or downloaded from a specific social platform, through the use of features shared with image models trained on the same task. By transferring features from the shallowest to the deepest levels of the network from the image task to videos, we measure the amount of information shared between these two tasks. Then, we introduce a model based on multitask learning, which learns from both tasks simultaneously. The promising experimental results show, in particular, the effectiveness of the multitask approach. To our knowledge, this is the first work that addresses the problem of social media platform identification of videos through the use of shared features. Full article
(This article belongs to the Special Issue Image and Video Forensics)

20 pages, 33352 KiB  
Article
CNN-Based Multi-Modal Camera Model Identification on Video Sequences
J. Imaging 2021, 7(8), 135; https://doi.org/10.3390/jimaging7080135 - 05 Aug 2021
Cited by 10 | Viewed by 2420
Abstract
Identifying the source camera of images and videos has gained significant importance in multimedia forensics. It allows tracing data back to their creator, thus making it possible to solve copyright infringement cases and expose the authors of heinous crimes. In this paper, we focus on the problem of camera model identification for video sequences, that is, given a video under analysis, detecting the camera model used for its acquisition. To this purpose, we develop two different CNN-based camera model identification methods, working in a novel multi-modal scenario. Unlike mono-modal methods, which use only the visual or audio information from the investigated video to tackle the identification task, the proposed multi-modal methods jointly exploit audio and visual information. We test our proposed methodologies on the well-known Vision dataset, which collects almost 2000 video sequences belonging to different devices. Experiments are performed considering native videos directly acquired by their acquisition devices and videos uploaded to social media platforms, such as YouTube and WhatsApp. The achieved results show that the proposed multi-modal approaches significantly outperform their mono-modal counterparts, representing a valuable strategy for the tackled problem and opening future research to even more challenging scenarios. Full article
(This article belongs to the Special Issue Image and Video Forensics)

17 pages, 4880 KiB  
Article
Fighting Deepfakes by Detecting GAN DCT Anomalies
J. Imaging 2021, 7(8), 128; https://doi.org/10.3390/jimaging7080128 - 30 Jul 2021
Cited by 36 | Viewed by 4645
Abstract
To properly counter the Deepfake phenomenon, new Deepfake detection algorithms need to be designed; the misuse of this formidable A.I. technology brings serious consequences to the private life of every person involved. The state of the art proliferates with solutions that use deep neural networks to detect fake multimedia content, but unfortunately these algorithms appear to be neither generalizable nor explainable. However, traces left by Generative Adversarial Network (GAN) engines during the creation of Deepfakes can be detected by analyzing ad hoc frequencies. For this reason, in this paper we propose a new pipeline able to detect the so-called GAN Specific Frequencies (GSF), which represent a unique fingerprint of the different generative architectures. Anomalous frequencies were detected by employing the Discrete Cosine Transform (DCT). The β statistics inferred from the distribution of the AC coefficients proved to be the key to recognizing GAN-generated data. Robustness tests were also carried out to demonstrate the effectiveness of the technique under different attacks on images, such as JPEG compression, mirroring, rotation, scaling, and the addition of randomly sized rectangles. Experiments demonstrated that the method is innovative, exceeds the state of the art, and also gives many insights in terms of explainability. Full article
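The β statistic mentioned in the abstract comes from fitting a generalized Gaussian to the distribution of AC DCT coefficients. As a hedged sketch, β can be estimated by inverting the moment ratio r = E[|x|]² / E[x²] = Γ(2/β)² / (Γ(1/β)·Γ(3/β)) by bisection; this moment-matching recipe is a standard estimator, not necessarily the authors' exact one.

```python
import math, random

def moment_ratio(beta):
    # r(beta) for a zero-mean generalized Gaussian; increasing in beta.
    return math.gamma(2 / beta) ** 2 / (math.gamma(1 / beta) * math.gamma(3 / beta))

def estimate_beta(coeffs, lo=0.1, hi=10.0, iters=60):
    # Invert the moment ratio by bisection to recover the shape parameter.
    n = len(coeffs)
    r = (sum(abs(c) for c in coeffs) / n) ** 2 / (sum(c * c for c in coeffs) / n)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if moment_ratio(mid) < r:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Gaussian samples (true beta = 2) should yield an estimate near 2;
# Laplacian-like coefficients (beta near 1) are typical of natural-image AC terms.
random.seed(0)
coeffs = [random.gauss(0.0, 1.0) for _ in range(20000)]
print(round(estimate_beta(coeffs), 2))
```

A detector along these lines would compare the β values estimated per DCT frequency against those expected for camera-captured images.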
(This article belongs to the Special Issue Image and Video Forensics)

16 pages, 54918 KiB  
Article
Forgery Detection in Digital Images by Multi-Scale Noise Estimation
J. Imaging 2021, 7(7), 119; https://doi.org/10.3390/jimaging7070119 - 17 Jul 2021
Cited by 12 | Viewed by 3334
Abstract
A complex processing chain is applied from the moment a raw image is acquired until the final image is obtained. This process transforms the originally Poisson-distributed noise into a complex noise model. Noise inconsistency analysis is a rich source for forgery detection, as forged regions have likely undergone a different processing pipeline or out-camera processing. We propose a multi-scale approach, which is shown to be suitable for analyzing the highly correlated noise present in JPEG-compressed images. We estimate a noise curve for each image block, in each color channel and at each scale. We then compare each noise curve to its corresponding noise curve obtained from the whole image by counting the percentage of bins of the local noise curve that are below the global one. This procedure yields crucial detection cues, since many forgeries create a local noise deficit. Our method is shown to be competitive with the state of the art. It outperforms all other methods when evaluated using the MCC score, as well as on sufficiently large forged regions and for colorization attacks, regardless of the evaluation metric. Full article
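The block-versus-global comparison described in the abstract reduces to a simple counting step, sketched below with illustrative noise-curve values (noise standard deviation per intensity bin); a high fraction of bins below the global curve flags a local noise deficit.

```python
def noise_deficit(local_curve, global_curve):
    # Fraction of intensity bins where the block's noise estimate
    # lies below the image-wide estimate.
    assert len(local_curve) == len(global_curve)
    below = sum(1 for l, g in zip(local_curve, global_curve) if l < g)
    return below / len(local_curve)

global_curve = [2.0, 2.5, 3.0, 3.5, 4.0]   # noise std per intensity bin, whole image
pristine     = [2.1, 2.4, 3.1, 3.6, 3.9]   # fluctuates around the global curve
forged       = [1.0, 1.2, 1.5, 1.8, 2.0]   # consistently lower: noise deficit

print(noise_deficit(pristine, global_curve))  # → 0.4
print(noise_deficit(forged, global_curve))    # → 1.0
```

In the multi-scale setting, this count would be repeated per block, per color channel, and per scale before thresholding.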
(This article belongs to the Special Issue Image and Video Forensics)

18 pages, 3180 KiB  
Article
Performance Evaluation of Source Camera Attribution by Using Likelihood Ratio Methods
J. Imaging 2021, 7(7), 116; https://doi.org/10.3390/jimaging7070116 - 15 Jul 2021
Cited by 3 | Viewed by 1751
Abstract
Performance evaluation of source camera attribution methods typically stops at the analysis of hard-to-interpret similarity scores. Standard analytic tools include Detection Error Trade-off or Receiver Operating Characteristic curves, or other scalar performance metrics, such as the Equal Error Rate or error rates at a specific decision threshold. However, the main drawback of similarity scores is their lack of probabilistic interpretation and thereby their lack of usability in forensic investigation, when assisting the trier of fact to make more sound and more informed decisions. The main objective of this work is to demonstrate a transition from similarity scores to likelihood ratios in the scope of digital evidence evaluation, which not only have probabilistic meaning but can also be immediately incorporated into forensic casework and combined with the rest of the case-related forensic evidence. Likelihood ratios are calculated from the Photo Response Non-Uniformity source attribution similarity scores. The experiments conducted aim to compare different strategies applied to both digital images and videos, considering their respective peculiarities. The results are presented in a format compatible with the guideline for validation of forensic likelihood ratio methods. Full article
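The score-to-likelihood-ratio transition described in the abstract can be sketched with the simplest calibration model: fit one Gaussian to same-camera similarity scores and one to different-camera scores, and report the log ratio of their densities at a new score. The toy PCE-like scores below are illustrative; the paper evaluates more elaborate strategies.

```python
import math, statistics

def log10_lr(score, same_scores, diff_scores):
    # Gaussian-calibrated log10 likelihood ratio of a new similarity score:
    # positive values support the same-source hypothesis, negative values
    # support the different-source hypothesis.
    mu_s, sd_s = statistics.mean(same_scores), statistics.stdev(same_scores)
    mu_d, sd_d = statistics.mean(diff_scores), statistics.stdev(diff_scores)

    def logpdf(x, mu, sd):
        return -((x - mu) ** 2) / (2 * sd ** 2) - math.log(sd * math.sqrt(2 * math.pi))

    return (logpdf(score, mu_s, sd_s) - logpdf(score, mu_d, sd_d)) / math.log(10)

# Illustrative PRNU similarity scores (e.g., PCE-like values).
same_scores = [80.0, 95.0, 110.0, 90.0, 105.0]   # same camera
diff_scores = [1.0, 3.0, 2.0, 4.0, 2.5]          # different cameras

print(log10_lr(100.0, same_scores, diff_scores) > 0)  # → True (supports same source)
print(log10_lr(2.0, same_scores, diff_scores) < 0)    # → True (supports different source)
```

Working in log space avoids the numerical underflow that a direct density ratio would suffer for scores far from one of the two distributions.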
(This article belongs to the Special Issue Image and Video Forensics)

29 pages, 1797 KiB  
Article
Media Forensics Considerations on DeepFake Detection with Hand-Crafted Features
J. Imaging 2021, 7(7), 108; https://doi.org/10.3390/jimaging7070108 - 01 Jul 2021
Cited by 18 | Viewed by 4211
Abstract
DeepFake detection is a novel task for media forensics and is currently receiving a lot of research attention due to the threat these targeted video manipulations pose to the trust placed in video footage. The current trend in DeepFake detection is the application of neural networks to learn feature spaces that allow manipulated videos to be distinguished from unmanipulated ones. In this paper, we discuss an alternative to this trend: features hand-crafted by domain experts. The main advantage that hand-crafted features have over learned features is their interpretability and the consequences this might have for the plausibility validation of decisions made. Here, we discuss three sets of hand-crafted features and three different fusion strategies to implement DeepFake detection. Our tests on three pre-existing reference databases show detection performance that is, under comparable test conditions (peak AUC > 0.95), on par with that of state-of-the-art methods using learned features. Furthermore, our approach shows a similar, if not better, generalization behavior than neural network-based methods in tests performed with different training and test sets. In addition to these pattern recognition considerations, first steps of a projection onto a data-centric examination approach for forensics process modeling are taken to increase the maturity of the present investigation. Full article
(This article belongs to the Special Issue Image and Video Forensics)

23 pages, 11458 KiB  
Article
Exposing Manipulated Photos and Videos in Digital Forensics Analysis
J. Imaging 2021, 7(7), 102; https://doi.org/10.3390/jimaging7070102 - 24 Jun 2021
Cited by 11 | Viewed by 6375
Abstract
Tampered multimedia content is being increasingly used in a broad range of cybercrime activities. The spread of fake news, misinformation, digital kidnapping, and ransomware-related crimes are amongst the most recurrent crimes in which manipulated digital photos and videos are the perpetrating and disseminating medium. Criminal investigation has been challenged in applying machine learning techniques to automatically distinguish between fake and genuine seized photos and videos. Despite the pertinent need for manual validation, easy-to-use platforms for digital forensics are essential to automate and facilitate the detection of tampered content and to help criminal investigators with their work. This paper presents a machine learning method based on Support Vector Machines (SVM) to distinguish between genuine and fake multimedia files, namely digital photos and videos, which may indicate the presence of deepfake content. The method was implemented in Python and integrated as new modules in the widely used digital forensics application Autopsy. The implemented approach extracts a set of simple features resulting from the application of a Discrete Fourier Transform (DFT) to digital photos and video frames. The model was evaluated with a large dataset of classified multimedia files containing both legitimate and fake photos and frames extracted from videos. Regarding deepfake detection in videos, the Celeb-DFv1 dataset was used, featuring 590 original videos collected from YouTube and covering different subjects. The results obtained with 5-fold cross-validation outperformed those of the SVM-based methods documented in the literature, achieving an average F1-score of 99.53%, 79.55%, and 89.10% for photos, videos, and a mixture of both types of content, respectively. A benchmark against state-of-the-art methods was also carried out, comparing the proposed SVM method with deep learning approaches, namely Convolutional Neural Networks (CNN). Despite CNNs outperforming the proposed DFT-SVM compound method, the competitiveness of the results attained by DFT-SVM and its substantially reduced processing time make it appropriate to be implemented and embedded into Autopsy modules, predicting the level of fakeness calculated for each analyzed multimedia file. Full article
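A hedged sketch of the kind of DFT feature extraction the abstract describes: take the 2-D DFT of a (tiny) grayscale frame and average the magnitude spectrum over rings of equal frequency radius, yielding a compact vector that a classifier such as an SVM could consume. The naive O(N⁴) DFT below is for illustration only; the actual features used in the paper may differ.

```python
import cmath, math

def dft2(img):
    # Naive 2-D DFT of a square grayscale image (list of equal-length rows).
    n = len(img)
    out = [[0j] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0j
            for x in range(n):
                for y in range(n):
                    s += img[x][y] * cmath.exp(-2j * math.pi * (u * x + v * y) / n)
            out[u][v] = s
    return out

def radial_profile(img):
    # Average DFT magnitude per integer radius from the zero-frequency bin,
    # folding frequencies so (n - u) and u count as the same distance.
    n = len(img)
    spec = dft2(img)
    sums, counts = {}, {}
    for u in range(n):
        for v in range(n):
            r = int(round(math.hypot(min(u, n - u), min(v, n - v))))
            sums[r] = sums.get(r, 0.0) + abs(spec[u][v])
            counts[r] = counts.get(r, 0) + 1
    return [sums[r] / counts[r] for r in sorted(sums)]

# A flat frame concentrates all its spectral energy at radius 0.
flat = [[1.0] * 8 for _ in range(8)]
profile = radial_profile(flat)
print(profile[0])                     # → 64.0 (DC term)
print(round(max(profile[1:]), 6))     # → 0.0 (no high-frequency energy)
```

Each frame's profile would then become one feature vector in the SVM's training set.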
(This article belongs to the Special Issue Image and Video Forensics)

15 pages, 2801 KiB  
Article
End-to-End Deep One-Class Learning for Anomaly Detection in UAV Video Stream
J. Imaging 2021, 7(5), 90; https://doi.org/10.3390/jimaging7050090 - 19 May 2021
Cited by 11 | Viewed by 2482
Abstract
In recent years, the use of drones for surveillance tasks has been on the rise worldwide. However, in the context of anomaly detection, only normal events are available for the learning process. Therefore, the implementation of a generative learning method in an unsupervised mode to solve this problem becomes fundamental. In this context, we propose a new end-to-end architecture capable of generating optical flow images from original UAV images and extracting compact spatio-temporal characteristics for anomaly detection purposes. It is designed with a custom loss function defined as the sum of three terms, the reconstruction loss (Rl), the generation loss (Gl), and the compactness loss (Cl), to ensure an efficient deep one-class classification. In addition, we propose to minimize the effect of UAV motion in video processing by applying background subtraction to the optical flow images. We tested our method on a very complex dataset, the mini-drone video dataset, and obtained results surpassing the performance of existing techniques, with an AUC of 85.3. Full article
(This article belongs to the Special Issue Image and Video Forensics)

16 pages, 1455 KiB  
Article
Copy-Move Forgery Detection (CMFD) Using Deep Learning for Image and Video Forensics
J. Imaging 2021, 7(3), 59; https://doi.org/10.3390/jimaging7030059 - 20 Mar 2021
Cited by 39 | Viewed by 5721
Abstract
With the exponential growth of high-quality fake images in social networks and media, it is necessary to develop recognition algorithms for this type of content. One of the most common types of image and video editing consists of duplicating areas of the image, known as the copy-move technique. Traditional image processing approaches manually look for patterns related to the duplicated content, limiting their use in mass data classification. In contrast, approaches based on deep learning have shown better performance and promising results, but they present generalization problems, with a high dependence on training data and the need for an appropriate selection of hyperparameters. To overcome this, we propose two deep learning approaches: a model with a custom architecture and a model based on transfer learning. In each case, the impact of the depth of the network is analyzed in terms of precision (P), recall (R), and F1 score. Additionally, the problem of generalization is addressed with images from eight different open-access datasets. Finally, the models are compared in terms of evaluation metrics and training and inference times. The transfer learning model based on VGG-16 achieves metrics about 10% higher than the custom-architecture model; however, it requires approximately twice as much inference time. Full article
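The "traditional" block-matching idea the abstract contrasts with deep learning can be illustrated in a few lines: slide a small window over the image, index each block by its content, and report pairs of distinct positions holding identical blocks. Real detectors match robust features rather than exact pixel values; the toy image below is illustrative.

```python
def find_duplicated_blocks(img, size=2):
    # Exact-match copy-move sketch: hash every size×size block by its
    # pixel tuple and report (first_position, duplicate_position) pairs.
    h, w = len(img), len(img[0])
    seen, matches = {}, []
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            block = tuple(tuple(img[y + dy][x + dx] for dx in range(size))
                          for dy in range(size))
            if block in seen:
                matches.append((seen[block], (y, x)))
            else:
                seen[block] = (y, x)
    return matches

# 4x6 toy image where the 2x2 patch at (0, 0) was copied to (2, 4).
img = [
    [9, 9, 0, 1, 2, 3],
    [9, 9, 4, 5, 6, 7],
    [10, 11, 12, 13, 9, 9],
    [14, 15, 16, 17, 9, 9],
]
print(find_duplicated_blocks(img))  # → [((0, 0), (2, 4))]
```

Exact matching breaks down as soon as the copied region is rescaled or recompressed, which is precisely the limitation that motivates learned detectors.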
(This article belongs to the Special Issue Image and Video Forensics)

23 pages, 9519 KiB  
Article
VIPPrint: Validating Synthetic Image Detection and Source Linking Methods on a Large Scale Dataset of Printed Documents
J. Imaging 2021, 7(3), 50; https://doi.org/10.3390/jimaging7030050 - 08 Mar 2021
Cited by 11 | Viewed by 3334
Abstract
The possibility of carrying out a meaningful forensic analysis on printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are gone after the images are printed and scanned. A problem hindering research in this area is the lack of large-scale reference datasets to be used for algorithm development and benchmarking. Motivated by this issue, we present a new dataset composed of a large number of synthetic and natural printed face images. To highlight the difficulties associated with the analysis of the images of the dataset, we carried out an extensive set of experiments comparing several printer attribution methods. We also verified that state-of-the-art methods to distinguish natural and synthetic face images fail when applied to printed and scanned images. We envision that the availability of the new dataset and the preliminary experiments we carried out will motivate and facilitate further research in this area. Full article
(This article belongs to the Special Issue Image and Video Forensics)

26 pages, 4262 KiB  
Article
Detecting and Locating Passive Video Forgery Based on Low Computational Complexity Third-Order Tensor Representation
J. Imaging 2021, 7(3), 47; https://doi.org/10.3390/jimaging7030047 - 05 Mar 2021
Cited by 5 | Viewed by 2253
Abstract
Great attention is paid to detecting video forgeries nowadays, especially with the widespread sharing of videos over social media and websites. Many video editing software programs are available and perform well in tampering with video contents or even creating fake videos. Forgery affects video integrity and authenticity and has serious implications. For example, digital videos for security and surveillance purposes are used as evidence in courts. In this paper, a newly developed passive video forgery scheme is introduced and discussed. The developed scheme is based on representing highly correlated video data with a low-computational-complexity third-order tensor tube-fiber mode. An arbitrary number of core tensors is selected to detect and locate two serious types of forgery: insertion and deletion. These tensor data are orthogonally transformed to achieve further data reduction and to provide good features for tracing forgery along the whole video. Experimental results and comparisons show the superiority of the proposed scheme, with a precision value of up to 99% in detecting and locating both types of attack for static as well as dynamic videos, quick-moving foreground items (single or multiple), and zooming-in and zooming-out datasets, which are rarely tested by previous works. Moreover, the proposed scheme offers a reduction in time and linear computational complexity. On the used computer configuration, an average time of 35 s is needed to detect and locate 40 forged frames out of 300. Full article
(This article belongs to the Special Issue Image and Video Forensics)

19 pages, 2111 KiB  
Article
No Matter What Images You Share, You Can Probably Be Fingerprinted Anyway
J. Imaging 2021, 7(2), 33; https://doi.org/10.3390/jimaging7020033 - 11 Feb 2021
Cited by 5 | Viewed by 2069
Abstract
The popularity of social networks (SNs), amplified by the ever-increasing use of smartphones, has intensified online cybercrime. This trend has in turn accelerated digital forensic investigation of SNs. One area that has received a great deal of attention is camera fingerprinting, through which each smartphone can be uniquely characterized. In this paper, we therefore compare classification-based methods for smartphone identification (SI) and user profile linking (UPL) within the same or across different SNs, both of which can provide investigators with significant clues. We validate the proposed methods on two datasets, our own and the VISION dataset, both of which include original images and images shared on SN platforms such as Google Currents, Facebook, WhatsApp, and Telegram. The results show that k-medoids achieves the best image-classification performance compared with k-means, hierarchical approaches, and several convolutional neural network (CNN) models, providing F1-measure values of up to 0.91 for the SI and UPL tasks. The results also demonstrate that the methods remain effective despite the loss of image detail caused by compression on the SNs, even for images from the same smartphone model. An important outcome of our work is the inter-layer UPL task, which is especially desirable in digital investigations as it can link user profiles across different SNs.
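As a toy illustration of the clustering step, here is a minimal, dependency-free k-medoids (PAM-style swaps with deterministic farthest-first seeding) applied to a correlation-distance matrix over synthetic "fingerprint" vectors. The real pipeline operates on PRNU-derived features from shared images; the function names and seeding strategy below are our own simplifications, not taken from the paper.

```python
import numpy as np

def correlation_distance_matrix(X):
    """1 - Pearson correlation between every pair of row vectors in X."""
    Xc = X - X.mean(axis=1, keepdims=True)
    Xn = Xc / np.linalg.norm(Xc, axis=1, keepdims=True)
    return 1.0 - Xn @ Xn.T

def k_medoids(D, k, n_iter=100):
    """PAM-style k-medoids on a precomputed n x n distance matrix D."""
    medoids = [0]  # deterministic farthest-first seeding
    while len(medoids) < k:
        medoids.append(int(np.argmax(D[:, medoids].min(axis=1))))
    medoids = np.array(medoids)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members):
                # swap in the member minimising total within-cluster distance
                new_medoids[c] = members[np.argmin(D[np.ix_(members, members)].sum(axis=0))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)
```

Unlike k-means, the cluster representative is always an actual sample, which suits a setting where only pairwise similarities between camera fingerprints are meaningful.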

16 pages, 2021 KiB  
Article
Factors that Influence PRNU-Based Camera-Identification via Videos
J. Imaging 2021, 7(1), 8; https://doi.org/10.3390/jimaging7010008 - 13 Jan 2021
Cited by 6 | Viewed by 2570
Abstract
The Photo Response Non-Uniformity pattern (PRNU pattern) can be used to identify the source of images or to indicate whether images were taken with the same camera. Because it is so highly characteristic, this pattern is often called the "fingerprint" of a camera. However, like a real fingerprint, it is sensitive to many different influences, such as camera settings. In this study, several previously investigated factors were reviewed, and three were selected for further investigation: resolution, video length, and compression. The computation and comparison methods were evaluated under variation of these factors, with all images taken with a single iPhone 6. It was found that a higher resolution yields a more reliable comparison, and that a (reference) video should be as long as possible to obtain a better PRNU pattern. It also became clear that compression (in this study, the compression applied by Snapchat) has a negative effect on the correlation value. Many different factors therefore play a part when comparing videos. Given the large number of controllable and non-controllable factors that influence the PRNU pattern, further research is needed to clarify the influence each factor exerts individually.
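To make the PRNU idea concrete, the sketch below estimates a crude camera fingerprint by averaging noise residuals over several images and scores a test image by normalised correlation against it. Real PRNU pipelines use wavelet-based denoising, a multiplicative sensor model, and peak-to-correlation-energy statistics; the 3x3 mean-filter residual and the function names here are simplified assumptions of ours, not the study's method.

```python
import numpy as np

def noise_residual(img):
    """Image minus a 3x3 local-mean denoised version (a crude high-pass)."""
    img = img.astype(np.float64)
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    local_mean = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return img - local_mean

def fingerprint(images):
    """Average the residuals so scene content and shot noise cancel out."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def ncc(a, b):
    """Normalised cross-correlation between two residual patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))
```

The factors the study varies map directly onto this sketch: longer reference videos mean more frames averaged into `fingerprint`, while lower resolution and recompression both corrupt the residuals and depress the correlation score.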

18 pages, 8460 KiB  
Article
Detecting Morphing Attacks through Face Geometry Features
J. Imaging 2020, 6(11), 115; https://doi.org/10.3390/jimaging6110115 - 29 Oct 2020
Cited by 8 | Viewed by 3757
Abstract
Face-morphing operations allow for the generation of digital faces that simultaneously carry the characteristics of two different subjects. It has been demonstrated that morphed faces strongly challenge face-verification systems, as they typically match two different identities. This poses serious security issues in machine-assisted border control applications and calls for techniques that automatically detect whether morphing operations have been applied to passport photos. While many proposed approaches analyze the suspect passport photo alone, our work operates in a differential scenario, i.e., the passport photo is analyzed in conjunction with the probe image of the subject acquired at border control to verify that the two correspond to the same identity. To this purpose, we analyze the locations of biologically meaningful facial landmarks identified in the two images, with the goal of capturing inconsistencies in the facial geometry introduced by the morphing process. We report the results of extensive experiments performed on images from various sources and under different experimental settings, showing that landmark locations detected by automated algorithms contain discriminative information for identifying pairs with morphed passport photos. The sensitivity of supervised classifiers to different compositions of the training and testing sets is also explored, together with the performance of different derived feature transformations.
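A minimal sketch of the differential idea: describe each landmark set by its normalised pairwise distances (invariant to translation, rotation, and scale) and hand a classifier the per-pair discrepancy between the passport photo and the live probe. Landmark extraction itself (e.g., with an off-the-shelf 68-point detector) is assumed to have happened upstream, and this particular feature choice is an illustration of ours, not the paper's exact feature set.

```python
import numpy as np

def shape_features(landmarks):
    """Scale- and pose-invariant description of a landmark configuration:
    all pairwise distances, normalised by their mean."""
    pts = np.asarray(landmarks, dtype=np.float64)
    i, j = np.triu_indices(len(pts), k=1)
    d = np.linalg.norm(pts[i] - pts[j], axis=1)
    return d / d.mean()

def geometry_difference(doc_landmarks, probe_landmarks):
    """Feature vector for the differential scenario: per-pair discrepancy
    between passport-photo geometry and live-probe geometry."""
    return np.abs(shape_features(doc_landmarks) - shape_features(probe_landmarks))
```

For a genuine pair the discrepancies stay near zero regardless of how the probe was framed, while a morphed document drags the geometry toward the second identity and inflates them.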

Review


39 pages, 20326 KiB  
Review
A Comprehensive Review of Deep-Learning-Based Methods for Image Forensics
J. Imaging 2021, 7(4), 69; https://doi.org/10.3390/jimaging7040069 - 03 Apr 2021
Cited by 31 | Viewed by 7338
Abstract
Seeing is not believing anymore. Various techniques have put the ability to modify an image at our fingertips, and the companies that create and sell these tools have focused on making them ever easier to use, lowering the need for specialized knowledge. Furthermore, image forgeries are now so realistic that the naked eye struggles to distinguish fake from real media. This raises problems ranging from misleading public opinion to the use of doctored evidence in court, which is why tools that help us discern the truth are important. This paper presents a comprehensive literature review of image forensics techniques, with a special focus on deep-learning-based methods. We cover a broad range of image forensics problems, including the detection of routine image manipulations, detection of intentional image falsifications, camera identification, classification of computer graphics images, and detection of emerging Deepfake images. The review shows that, even though image forgeries are becoming easy to create, there are several options for detecting each kind. A review of image databases and an overview of anti-forensic methods are also presented. Finally, we suggest future research directions that the community could consider to tackle the spread of doctored images more effectively.

56 pages, 11483 KiB  
Review
A Survey on Anti-Spoofing Methods for Facial Recognition with RGB Cameras of Generic Consumer Devices
J. Imaging 2020, 6(12), 139; https://doi.org/10.3390/jimaging6120139 - 15 Dec 2020
Cited by 26 | Viewed by 6113
Abstract
The widespread deployment of facial-recognition-based biometric systems has made facial presentation attack detection (face anti-spoofing) an increasingly critical issue. This survey thoroughly investigates facial Presentation Attack Detection (PAD) methods that require only the RGB cameras of generic consumer devices, covering the past two decades. We present an attack-scenario-oriented typology of existing facial PAD methods and review more than 50 of the most influential among them, together with their related issues, following the proposed typology and in chronological order. In doing so, we depict the main challenges, evolutions, and current trends in the field of facial PAD and provide insights into future research. From an experimental point of view, the survey offers a summarized overview of the available public databases and an extensive comparison of the results reported in the reviewed PAD papers.
