# Benford’s Law and Perceptual Features for Face Image Quality Assessment

## Abstract


## 1. Introduction

1. Subjective assessment: In this approach, human observers are involved in rating or scoring the quality of face images. These subjective ratings are collected through controlled experiments, where observers evaluate images based on specific quality attributes. The collected ratings are then used to create subjective quality databases or models.
2. Objective assessment: These methods aim to automate the process by developing computational algorithms that can predict image quality without human involvement. They utilize various features and metrics extracted from face images to quantify their quality. Some commonly used features include sharpness, contrast, noise, blur, and distortion. Machine learning techniques, such as regression or classification models, are often employed to train algorithms using annotated datasets [2].
3. Hybrid approaches: These methods combine subjective and objective methods to enhance the accuracy and reliability of face image quality assessment. They leverage both human ratings and computational metrics to create more robust quality models. Machine learning algorithms can be trained using subjective ratings as the ground truth, allowing them to learn from human perception.

1. First, I investigate the first digit distributions (FDDs) of different image domains for FIQA.
2. Second, I empirically corroborate that the FDD of a single image domain is a rather mediocre predictor of face image quality. However, fusing the FDDs of different domains results in a strong predictor whose performance can be further increased by considering several simple perceptual features, such as colorfulness, the global contrast factor, the dark channel feature, entropy, and phase congruency.
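The fusion idea above boils down to computing a 9-bin first digit distribution per transform domain and concatenating the results. The following is a minimal, illustrative pure-Python sketch (not the paper's exact implementation); the `domains` dictionary and its coefficient values are hypothetical stand-ins for the wavelet, DCT, singular value, and shearlet coefficients used in the paper:

```python
def first_digit(x):
    """Leading (most significant) decimal digit of a nonzero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while 0 < x < 1:
        x *= 10
    return int(x)

def fdd(coeffs):
    """9-bin first digit distribution over digits 1..9, normalized to sum to 1."""
    counts = [0] * 9
    for c in coeffs:
        if c != 0:  # zero has no leading digit
            counts[first_digit(c) - 1] += 1
    total = sum(counts) or 1
    return [n / total for n in counts]

# Fusing several domains: concatenate each domain's FDD into one feature vector.
domains = {"dct": [12.3, 0.004, 91.0, 1.7], "svd": [310.0, 22.5, 1.01]}
feature_vector = [p for name in domains for p in fdd(domains[name])]
```

Each additional domain simply appends nine more components, which is why the fused vector in Table 1 grows in blocks of nine.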

## 2. Literature Review

#### 2.1. Benford’s Law

- Financial auditing: Benford’s law is used as a tool for detecting anomalies and potential fraud in financial statements, such as identifying irregularities in tax returns, accounting records, or expense reports. Deviations from the expected distribution of leading digits can indicate data manipulation [13].
- Election fraud detection: Benford’s law has been used to analyze election results, particularly in detecting potential irregularities or fraud. Significant deviations from Benford’s law’s expected distribution could signal suspicious patterns in the reported vote counts [16].
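In both applications, the test is the same: compare observed leading digit counts against Benford's expected distribution P(d) = log10(1 + 1/d). A minimal sketch of this goodness-of-fit check (a simplified chi-square-style statistic, not the exact test used in the cited works):

```python
import math

def benford_pmf():
    """Benford's law: P(d) = log10(1 + 1/d) for leading digits d = 1..9."""
    return [math.log10(1 + 1 / d) for d in range(1, 10)]

def benford_deviation(observed_counts):
    """Chi-square-style divergence between observed leading digit counts and
    the counts expected under Benford's law; large values flag anomalies."""
    n = sum(observed_counts)
    expected = [p * n for p in benford_pmf()]
    return sum((o - e) ** 2 / e for o, e in zip(observed_counts, expected))
```

Note that the nine probabilities telescope to log10(10) = 1, so `benford_pmf()` is a proper distribution; a perfectly uniform digit histogram, by contrast, yields a large deviation.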

#### 2.2. Face Image Quality Assessment

## 3. Methodology

#### Features

## 4. Results

#### 4.1. Evaluation Protocol

#### 4.2. Parameter Study

#### 4.3. Comparison to the State-of-the-Art Methods

## 5. Conclusions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## Abbreviations

Abbreviation | Meaning |
---|---|
BDT | binary decision tree |
CF | colorfulness |
DCF | dark channel feature |
DWT | discrete wavelet transform |
E | entropy |
FDD | first digit distribution |
FIQA | face image quality assessment |
GAM | generalized additive model |
GFIQA-20k | generic face image quality assessment 20k database |
GPR | Gaussian process regression |
JPEG | joint photographic experts group |
IQA | image quality assessment |
KROCC | Kendall’s rank order correlation coefficient |
NN | neural network |
PC | phase congruency |
PLCC | Pearson’s linear correlation coefficient |
RBF | radial basis function |
SROCC | Spearman’s rank order correlation coefficient |
SVD | singular value decomposition |
SVR | support vector regressor |
YFCC100M | Yahoo Flickr creative commons 100 million dataset |

## References

- Khodabakhsh, A.; Pedersen, M.; Busch, C. Subjective versus objective face image quality evaluation for face recognition. In Proceedings of the 2019 3rd International Conference on Biometric Engineering and Applications, Stockholm, Sweden, 29–31 May 2019; pp. 36–42.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. **2012**, 25, 1097–1105.
- Boutros, F.; Fang, M.; Klemt, M.; Fu, B.; Damer, N. CR-FIQA: Face image quality assessment by learning sample relative classifiability. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 5836–5845.
- Sang, J.; Lei, Z.; Li, S.Z. Face image quality evaluation for ISO/IEC standards 19794-5 and 29794-5. In Proceedings of the Advances in Biometrics: Third International Conference, ICB 2009, Alghero, Italy, 2–5 June 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 229–238.
- Vignesh, S.; Priya, K.M.; Channappayya, S.S. Face image quality assessment for face selection in surveillance video using convolutional neural networks. In Proceedings of the 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Orlando, FL, USA, 14–16 December 2015; pp. 577–581.
- Sellahewa, H.; Jassim, S.A. Image-quality-based adaptive face recognition. IEEE Trans. Instrum. Meas. **2010**, 59, 805–813.
- Wasnik, P.; Raja, K.B.; Ramachandra, R.; Busch, C. Assessing face image quality for smartphone based face recognition system. In Proceedings of the 2017 5th International Workshop on Biometrics and Forensics (IWBF), Coventry, UK, 4–5 April 2017; pp. 1–6.
- Kuru, K.; Ansell, D. TCitySmartF: A comprehensive systematic framework for transforming cities into smart cities. IEEE Access **2020**, 8, 18615–18644.
- Thakur, N.; Han, C.Y. An intelligent ubiquitous activity aware framework for smart home. In Human Interaction, Emerging Technologies and Future Applications III, Proceedings of the 3rd International Conference on Human Interaction and Emerging Technologies: Future Applications (IHIET 2020), Paris, France, 27–29 August 2020; Springer: Berlin/Heidelberg, Germany, 2021; pp. 296–302.
- Mir, T.A. The Benford law behavior of the religious activity data. Phys. Stat. Mech. Appl. **2014**, 408, 1–9.
- Nigrini, M.J.; Miller, S.J. Benford’s law applied to hydrology data—Results and relevance to other geophysical data. Math. Geol. **2007**, 39, 469–490.
- Burke, J.; Kincanon, E. Benford’s law and physical constants: The distribution of initial digits. Am. J. Phys. **1991**, 59, 952.
- Alali, F.A.; Romero, S. Benford’s Law: Analyzing a decade of financial data. J. Emerg. Technol. Account. **2013**, 10, 1–39.
- Fu, D.; Shi, Y.Q.; Su, W. A generalized Benford’s law for JPEG coefficients and its applications in image forensics. In Proceedings of the Security, Steganography, and Watermarking of Multimedia Contents IX, SPIE, San Jose, CA, USA, 28 January 2007; Volume 6505, pp. 574–584.
- Kossovsky, A.E. Benford’s Law: Theory, the General Law of Relative Quantities, and Forensic Fraud Detection Applications; World Scientific: Singapore, 2014; Volume 3.
- Mebane, W.R., Jr. Election forensics: Vote counts and Benford’s law. In Proceedings of the Summer Meeting of the Political Methodology Society, UC-Davis, Davis, CA, USA, 20–22 July 2006; Volume 17.
- Gonzalez-Garcia, M.J.; Pastor, M.G.C. Benford’s Law and Macroeconomic Data Quality; International Monetary Fund: Washington, DC, USA, 2009.
- Li, F.; Han, S.; Zhang, H.; Ding, J.; Zhang, J.; Wu, J. Application of Benford’s law in Data Analysis. J. Phys. Conf. Ser. **2019**, 1168, 032133.
- Gottwald, G.A.; Nicol, M. On the nature of Benford’s Law. Phys. Stat. Mech. Appl. **2002**, 303, 387–396.
- Sambridge, M.; Tkalčić, H.; Jackson, A. Benford’s law in the natural sciences. Geophys. Res. Lett. **2010**, 37, 1–5.
- Zhao, X.; Ho, A.T.; Shi, Y.Q. Image forensics using generalised Benford’s law for accurate detection of unknown JPEG compression in watermarked images. In Proceedings of the 2009 16th International Conference on Digital Signal Processing, Santorini, Greece, 5–7 July 2009; pp. 1–8.
- Milani, S.; Tagliasacchi, M.; Tubaro, S. Discriminating multiple JPEG compressions using first digit features. APSIPA Trans. Signal Inf. Process. **2014**, 3, e19.
- Pasquini, C.; Boato, G.; Pérez-González, F. Multiple JPEG compression detection by means of Benford-Fourier coefficients. In Proceedings of the 2014 IEEE International Workshop on Information Forensics and Security (WIFS), Atlanta, GA, USA, 3–5 December 2014; pp. 113–118.
- Pasquini, C.; Boato, G.; Pérez-González, F. Statistical detection of JPEG traces in digital images in uncompressed formats. IEEE Trans. Inf. Forensics Secur. **2017**, 12, 2890–2905.
- Moin, S.S.; Islam, S. Benford’s law for detecting contrast enhancement. In Proceedings of the 2017 Fourth International Conference on Image Information Processing (ICIIP), Shimla, India, 21–23 December 2017; pp. 1–4.
- Makrushin, A.; Kraetzer, C.; Neubert, T.; Dittmann, J. Generalized Benford’s law for blind detection of morphed face images. In Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security, Innsbruck, Austria, 20–22 June 2018; pp. 49–54.
- Wiedemann, O.; Hosu, V.; Lin, H.; Saupe, D. Disregarding the big picture: Towards local image quality assessment. In Proceedings of the 2018 Tenth International Conference on Quality of Multimedia Experience (QoMEX), Cagliari, Italy, 29 May–1 June 2018; pp. 1–6.
- Götz-Hahn, F.; Hosu, V.; Lin, H.; Saupe, D. KonVid-150k: A Dataset for No-Reference Video Quality Assessment of Videos in-the-Wild. IEEE Access **2021**, 9, 72139–72160.
- Akkaya, E.; Özbek, N. Comparison of the State-of-the-Art Image and Video Quality Assessment Metrics. In Proceedings of the 2021 29th Signal Processing and Communications Applications Conference (SIU), Istanbul, Turkey, 9–11 June 2021; pp. 1–4.
- Men, H. Boosting for Visual Quality Assessment with Applications for Frame Interpolation Methods. Ph.D. Thesis, University of Konstanz, Konstanz, Germany, 2022.
- Jenadeleh, M. Blind Image and Video Quality Assessment. Ph.D. Thesis, University of Konstanz, Konstanz, Germany, 2018.
- Xu, L.; Lin, W.; Kuo, C.C.J. Visual Quality Assessment by Machine Learning; Springer: Berlin/Heidelberg, Germany, 2015.
- Schlett, T.; Rathgeb, C.; Henniger, O.; Galbally, J.; Fierrez, J.; Busch, C. Face image quality assessment: A literature survey. ACM Comput. Surv. **2022**, 54, 1–49.
- Gao, X.; Li, S.Z.; Liu, R.; Zhang, P. Standardization of face image sample quality. In Proceedings of the Advances in Biometrics: International Conference, ICB 2007, Seoul, Republic of Korea, 27–29 August 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 242–251.
- Terhorst, P.; Kolf, J.N.; Damer, N.; Kirchbuchner, F.; Kuijper, A. SER-FIQ: Unsupervised estimation of face image quality based on stochastic embedding robustness. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 5651–5660.
- Ou, F.Z.; Chen, X.; Zhang, R.; Huang, Y.; Li, S.; Li, J.; Li, Y.; Cao, L.; Wang, Y.G. SDD-FIQA: Unsupervised face image quality assessment with similarity distribution distance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 7670–7679.
- Babnik, Ž.; Peer, P.; Štruc, V. DifFIQA: Face Image Quality Assessment Using Denoising Diffusion Probabilistic Models. arXiv **2023**, arXiv:2305.05768.
- Prasad, L.; Iyengar, S.S. Wavelet Analysis with Applications to Image Processing; CRC Press: Boca Raton, FL, USA, 1997.
- Mallat, S.G. Multifrequency channel decompositions of images and wavelet models. IEEE Trans. Acoust. Speech Signal Process. **1989**, 37, 2091–2110.
- Cintra, R.J.; Bayer, F.M. A DCT approximation for image compression. IEEE Signal Process. Lett. **2011**, 18, 579–582.
- Brunton, S.L.; Kutz, J.N. Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control; Cambridge University Press: Cambridge, UK, 2022.
- Guo, K.; Kutyniok, G.; Labate, D. Sparse multidimensional representations using anisotropic dilation and shear operators. Wavelets Splines **2006**, 14, 189–201.
- Häuser, S.; Steidl, G. Fast finite shearlet transform. arXiv **2012**, arXiv:1202.1773.
- Yendrikhovskij, S.; Blommaert, F.J.; de Ridder, H. Optimizing color reproduction of natural images. In Proceedings of the Color and Imaging Conference, Society for Imaging Science and Technology, Scottsdale, AZ, USA, 17–20 November 1998; Volume 1998, pp. 140–145.
- Engeldrum, P.G. Extending image quality models. In Proceedings of the IS&T PICS Conference, Society for Imaging Science & Technology, Portland, OR, USA, 7–10 April 2002; pp. 65–69.
- Yue, G.; Hou, C.; Zhou, T. Blind quality assessment of tone-mapped images considering colorfulness, naturalness, and structure. IEEE Trans. Ind. Electron. **2018**, 66, 3784–3793.
- Peli, E. Contrast in complex images. JOSA A **1990**, 7, 2032–2040.
- Matkovic, K.; Neumann, L.; Neumann, A.; Psik, T.; Purgathofer, W. Global contrast factor—A new approach to image contrast. In Proceedings of the Computational Aesthetics 2005: Eurographics Workshop on Computational Aesthetics in Graphics, Visualization and Imaging 2005, Girona, Spain, 18–20 May 2005; pp. 159–167.
- Lee, S.; Yun, S.; Nam, J.H.; Won, C.S.; Jung, S.W. A review on dark channel prior based image dehazing algorithms. EURASIP J. Image Video Process. **2016**, 2016, 4.
- He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. **2010**, 33, 2341–2353.
- Gull, S.F.; Skilling, J. Maximum entropy method in image processing. IET **1984**, 131, 646–659.
- Kovesi, P. Image features from phase congruency. Videre J. Comput. Vis. Res. **1999**, 1, 1–26.
- Kovesi, P. Phase congruency: A low-level image invariant. Psychol. Res. **2000**, 64, 136–148.
- Morrone, M.C.; Ross, J.; Burr, D.C.; Owens, R. Mach bands are phase dependent. Nature **1986**, 324, 250–253.
- Kovesi, P. Phase congruency detects corners and edges. In Proceedings of the Australian Pattern Recognition Society Conference: DICTA, Sydney, Australia, 10–12 December 2003; Volume 2003.
- Su, S.; Lin, H.; Hosu, V.; Wiedemann, O.; Sun, J.; Zhu, Y.; Liu, H.; Zhang, Y.; Saupe, D. Going the Extra Mile in Face Image Quality Assessment: A Novel Database and Model. arXiv **2022**, arXiv:2207.04904.
- Thomee, B.; Shamma, D.A.; Friedland, G.; Elizalde, B.; Ni, K.; Poland, D.; Borth, D.; Li, L.J. YFCC100M: The new data in multimedia research. Commun. ACM **2016**, 59, 64–73.
- Saupe, D.; Hahn, F.; Hosu, V.; Zingman, I.; Rana, M.; Li, S. Crowd workers proven useful: A comparative study of subjective video quality assessment. In Proceedings of the QoMEX 2016: 8th International Conference on Quality of Multimedia Experience, Lisbon, Portugal, 6–8 June 2016.
- Seeger, M. Gaussian processes for machine learning. Int. J. Neural Syst. **2004**, 14, 69–106.
- Smola, A.J.; Schölkopf, B. A tutorial on support vector regression. Stat. Comput. **2004**, 14, 199–222.
- Lou, Y.; Caruana, R.; Gehrke, J. Intelligible models for classification and regression. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Beijing, China, 12–16 August 2012; pp. 150–158.
- Geurts, P.; Ernst, D.; Wehenkel, L. Extremely randomized trees. Mach. Learn. **2006**, 63, 3–42.
- Breiman, L. Random forests. Mach. Learn. **2001**, 45, 5–32.
- Loh, W.Y. Regression trees with unbiased variable selection and interaction detection. Stat. Sin. **2002**, 12, 361–386.
- Wright, S.; Nocedal, J. Numerical Optimization; Springer Science: Berlin/Heidelberg, Germany, 1999; Volume 35, p. 7.
- Moorthy, A.; Bovik, A. A modular framework for constructing blind universal quality indices. IEEE Signal Process. Lett. **2009**, 17, 7.
- Saad, M.A.; Bovik, A.C. Blind quality assessment of videos using a model of natural scene statistics and motion coherency. In Proceedings of the 2012 Conference Record of the Forty Sixth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Pacific Grove, CA, USA, 4–7 November 2012; pp. 332–336.
- Min, X.; Zhai, G.; Gu, K.; Liu, Y.; Yang, X. Blind image quality estimation via distortion aggravation. IEEE Trans. Broadcast. **2018**, 64, 508–517.
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. **2012**, 21, 4695–4708.
- Liu, L.; Dong, H.; Huang, H.; Bovik, A.C. No-reference image quality assessment in curvelet domain. Signal Process. Image Commun. **2014**, 29, 494–505.
- Xue, W.; Mou, X.; Zhang, L.; Bovik, A.C.; Feng, X. Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features. IEEE Trans. Image Process. **2014**, 23, 4850–4862.
- Zhang, L.; Zhang, L.; Bovik, A.C. A feature-enriched completely blind image quality evaluator. IEEE Trans. Image Process. **2015**, 24, 2579–2591.
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. **2012**, 20, 209–212.
- Liu, L.; Hua, Y.; Zhao, Q.; Huang, H.; Bovik, A.C. Blind image quality assessment by relative gradient statistics and adaboosting neural network. Signal Process. Image Commun. **2016**, 40, 1–15.
- Venkatanath, N.; Praneeth, D.; Bh, M.C.; Channappayya, S.S.; Medasani, S.S. Blind image quality evaluation using perception based features. In Proceedings of the 2015 Twenty First National Conference on Communications (NCC), Maharashtra, India, 27 February–1 March 2015; pp. 1–6.
- Liu, L.; Liu, B.; Huang, H.; Bovik, A.C. No-reference image quality assessment based on spatial and spectral entropies. Signal Process. Image Commun. **2014**, 29, 856–863.

**Figure 2.** Process for investigating the effectiveness of Benford’s law-inspired and perceptual features for face image quality assessment.

**Figure 4.** Measured empirical distribution of GFIQA-20k’s [56] quality ratings.

**Figure 5.** Sample images from GFIQA-20k [56]. Quality ratings are printed on the face images in the upper left corners.

**Figure 6.** PLCC values of different regression methods in the form of box plots, measured over 100 random train–test splits on GFIQA-20k [56]. In each box plot, the central mark denotes the median, the bottom and top edges correspond to the 25th and 75th percentiles, red plus signs represent outliers, and the whiskers extend to the most extreme values that are not considered outliers.

**Figure 7.** SROCC values of different regression methods in the form of box plots, measured over 100 random train–test splits on GFIQA-20k [56]. In each box plot, the central mark denotes the median, the bottom and top edges correspond to the 25th and 75th percentiles, red plus signs represent outliers, and the whiskers extend to the most extreme values that are not considered outliers.

**Figure 8.** KROCC values of different regression methods in the form of box plots, measured over 100 random train–test splits on GFIQA-20k [56]. In each box plot, the central mark denotes the median, the bottom and top edges correspond to the 25th and 75th percentiles, red plus signs represent outliers, and the whiskers extend to the most extreme values that are not considered outliers.

**Figure 9.** Performance comparison of FDD and perceptual features. Median SROCC values were measured over 100 random train–test splits on GFIQA-20k [56].

**Figure 10.** Performance of the proposed feature vector in cases where a part of the feature vector was removed. The performance of the whole feature vector is denoted by ’X’. Median SROCC values were measured over 100 random train–test splits on GFIQA-20k [56].

**Figure 11.** Ground-truth vs. predicted quality scores scatterplot on a GFIQA-20k [56] test set.

**Figure 12.** Radar graph for the visual comparison of median PLCC, SROCC, and KROCC values obtained on GFIQA-20k [56] after 100 random train–test splits.
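The three figures-of-merit used throughout the evaluation (PLCC, SROCC, KROCC) can be computed without any library support. A minimal pure-Python sketch, ignoring tied ranks for brevity (standard implementations apply tie corrections):

```python
def pearson(x, y):
    """PLCC: Pearson's linear correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    """1-based ranks of the values in x (ties not handled, for brevity)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """SROCC: Pearson correlation computed on rank-transformed data."""
    return pearson(ranks(x), ranks(y))

def kendall(x, y):
    """KROCC: (concordant - discordant) pairs divided by total pairs."""
    sign = lambda v: (v > 0) - (v < 0)
    n = len(x)
    s = sum(sign(x[i] - x[j]) * sign(y[i] - y[j])
            for i in range(n) for j in range(i + 1, n))
    return 2 * s / (n * (n - 1))
```

Since SROCC and KROCC depend only on ordering, any monotonically increasing distortion of the predictions leaves them at 1.0, whereas PLCC rewards only linear agreement with the ground-truth scores.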

Feature Number | Description | Number of Features |
---|---|---|
f1–f9 | FDD of horizontal wavelet coefficients | 9 |
f10–f18 | FDD of vertical wavelet coefficients | 9 |
f19–f27 | FDD of diagonal wavelet coefficients | 9 |
f28–f36 | FDD of DCT coefficients | 9 |
f37–f45 | FDD of singular values | 9 |
f46–f54 | FDD of absolute shearlet coefficients | 9 |
f55–f59 | Perceptual features (colorfulness, global contrast factor, dark channel feature, entropy, mean of phase congruency) | 5 |
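Of the perceptual features f55–f59, entropy is the simplest to make concrete. A hedged sketch treating the image as a flat list of 8-bit grayscale intensities (an illustrative stand-in, not the paper's exact implementation):

```python
import math

def shannon_entropy(pixels, levels=256):
    """Shannon entropy (in bits) of an 8-bit grayscale intensity histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    # Sum -p*log2(p) over the nonempty histogram bins.
    return -sum((c / n) * math.log2(c / n) for c in hist if c)
```

A constant image yields zero entropy, while a perfectly uniform intensity histogram attains the maximum of log2(256) = 8 bits, so the feature roughly tracks how much tonal detail the face image contains.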

**Table 2.** Comparison of different regression modules in terms of median PLCC, SROCC, and KROCC, which were measured on GFIQA-20k [56] over 100 random train–test splits. The standard deviation values are given in parentheses.

Regressor | PLCC | SROCC | KROCC |
---|---|---|---|
GPR | 0.816 (0.005) | 0.810 (0.006) | 0.619 (0.006) |
RBF SVR | 0.808 (0.005) | 0.805 (0.006) | 0.613 (0.006) |
GAM | 0.713 (0.007) | 0.706 (0.008) | 0.518 (0.007) |
Extra tree | 0.731 (0.007) | 0.723 (0.008) | 0.531 (0.007) |
LSBoost | 0.711 (0.008) | 0.713 (0.008) | 0.524 (0.007) |
BDT | 0.607 (0.012) | 0.597 (0.012) | 0.428 (0.009) |
NN | 0.545 (0.116) | 0.544 (0.069) | 0.382 (0.050) |

**Table 3.** Comparison to other state-of-the-art algorithms using the GFIQA-20k [56] database. Median PLCC, SROCC, and KROCC values were measured over 100 random train–test splits. The best results are typed in red, the second-best in green, and the third-best in blue.

Method | PLCC | SROCC | KROCC |
---|---|---|---|
BIQI [66] | 0.794 | 0.790 | 0.599 |
BLIINDS-II [67] | 0.685 | 0.674 | 0.491 |
BMPRI [68] | 0.673 | 0.662 | 0.481 |
BRISQUE [69] | 0.721 | 0.718 | 0.527 |
CurveletQA [70] | 0.799 | 0.779 | 0.591 |
GM-LOG-BIQA [71] | 0.740 | 0.732 | 0.543 |
IL-NIQE [72] | 0.728 | 0.714 | 0.518 |
NIQE [73] | 0.191 | 0.183 | 0.127 |
OG-IQA [74] | 0.747 | 0.735 | 0.546 |
PIQE [75] | 0.207 | 0.095 | 0.066 |
SSEQ [76] | 0.715 | 0.690 | 0.509 |
BL-IQA | 0.816 | 0.810 | 0.619 |

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Varga, D. Benford’s Law and Perceptual Features for Face Image Quality Assessment. *Signals* **2023**, *4*, 859–876. https://doi.org/10.3390/signals4040047
