Correction published on 12 July 2023, see Sensors 2023, 23(14), 6323.
Article

The Image Definition Assessment of Optoelectronic Tracking Equipment Based on the BRISQUE Algorithm with Gaussian Weights

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(3), 1621; https://doi.org/10.3390/s23031621
Submission received: 30 November 2022 / Revised: 12 January 2023 / Accepted: 28 January 2023 / Published: 2 February 2023 / Corrected: 12 July 2023

Abstract

Defocus is an important cause of image quality degradation in optoelectronic tracking equipment on the shooting range. In this paper, an improved blind/referenceless image spatial quality evaluator (BRISQUE) algorithm is formulated: image characteristic extraction technology is used to obtain a characteristic vector (CV) consisting of 36 characteristic values that effectively reflect the defocus condition of the corresponding image. Each image is also evaluated and scored subjectively by the human eyes. The subjective evaluation scores and CVs constitute a set of training samples for the defocus evaluation model, and an image database that contains sufficiently many such samples is constructed. The samples are trained by using the regression function of the support vector machine (SVM) to obtain the SVM evaluation model. In the experiments, the BRISQUE algorithm is used to obtain the image CVs. The method of establishing the image definition evaluation model via SVM is feasible and yields high subjective and objective consistency.

1. Introduction

The image, as an important carrier of information, is widely used in healthcare, medicine, consumer electronics, and other fields. However, distortions are inevitably introduced during image acquisition, transmission, processing, and display, and these distortions degrade image quality [1]. Effectively evaluating, comparing, and optimizing image quality has gradually become a research hotspot in many fields, such as visual psychology, image processing, pattern recognition, and artificial intelligence [2,3,4].
Image distortion occurs, to a certain extent, during acquisition, processing, compression, transmission, and display. Therefore, it is necessary to establish objective and effective methods for evaluating image quality [5,6,7]. At present, image quality assessment includes subjective assessment, in which image quality is judged by the perception of the human eyes, and objective assessment, in which mathematical models of image quality are established [8,9].
According to whether a reference image is needed, the objective methods of image quality assessment include full-reference image quality assessment (FR-IQA), reduced-reference image quality assessment (RR-IQA), and no-reference image quality assessment (NR-IQA). In this paper, NR-IQA is used to evaluate image quality [10,11].
The main factors that affect the quality of optical measurement images include atmospheric disturbance, atmospheric extinction, optical diffraction of the lens, defocus, image motion, camera jitter, image sensor noise, and so on. This paper mainly studies quality assessment of defocused images.
If external noise can be ignored, defocus is an important cause of image blur in the image tracking process of optoelectronic tracking equipment. To estimate the defocus severity of the equipment, image quality is evaluated objectively via an image quality evaluation algorithm [12,13]. At the same time, image characteristic values that reflect image quality are obtained. These values provide the basis for establishing a model for evaluating focus performance from the correlations between image characteristic values and defocus state parameters.
The optical system of optoelectronic tracking equipment can be regarded as a low-pass filter and an increase in the defocus is equivalent to a reduction in the filter cut-off frequency [14,15,16,17,18].
This paper mainly studies image evaluation indices in the defocused state of optoelectronic tracking equipment and a method for obtaining image characteristic values based on those indices. The characteristic values that are obtained via the image evaluation algorithm can be used to repair the image quality degradation caused by equipment defocus. The result of the image evaluation algorithm should be consistent with the subjective perception of the human eyes [19,20,21].
In addition to defocus of the imaging system, the causes of image blur include interference factors such as image motion of the equipment and data compression. Therefore, a general referenceless image evaluation algorithm should be selected instead of a referenceless algorithm designed for a known distortion type [22,23,24].
Comparisons are performed from two aspects: the theory and the performance of the evaluation algorithm. The main referenceless image quality evaluation algorithms that perform well are as follows: (1) Moorthy’s blind image quality index (BIQI) algorithm, which is implemented in the wavelet domain [25]; (2) Moorthy’s distortion-identification-based image verity and integrity evaluation (DIIVINE) algorithm, which builds on the BIQI algorithm [7]; (3) Saad’s DCT-statistics-based blind image quality index [26] and the improved BLIINDS-II algorithm [27]; (4) Mittal’s BRISQUE algorithm [28] and the referenceless natural image quality evaluator (NIQE) algorithm [29]; (5) Li’s general regression neural network (GRNN) algorithm [30]; and (6) Han’s Conv-Former network, which combines convolution and self-attention for image quality assessment [31].
Spatial distortion directly affects the visual quality of an image. By considering effective spatial characteristics, image quality evaluation can achieve increased consistency with subjective evaluation. At the same time, the characteristic values that are obtained via spatial characteristic extraction lay the foundation for the study of building an evaluation model for the defocused state.
Ruderman et al. found that the locally normalized luminance of natural images tends to follow a normal (Gaussian) distribution [32]. They posited that distortion changes the statistical characteristics of the normalized coefficients; by measuring these changes, the distortion type can be predicted and the visual quality of the image can be evaluated [33]. Based on this theory, Mittal et al. put forward the BRISQUE algorithm [28], which is based on the spatial statistical characteristics of the image. Varga applied a broad spectrum of statistics of local and global features to characterize the variety of possible video distortions [34].
Based on the image defocus characteristics of optoelectronic tracking equipment, an improved BRISQUE algorithm with image characteristic extraction technology is used in this paper to obtain a characteristic vector (CV). The CV consists of 36 characteristic values that effectively reflect the defocus condition of the image [35]. Each image is evaluated by the human eyes and scored subjectively. The subjective evaluation scores and CVs constitute a set of training data samples for the defocus evaluation model. A sufficient number of training samples is obtained by calculating the CVs of the image database. Then, the evaluation model is obtained by training the samples with a machine learning method based on SVM [36].
Many studies have employed machine learning models for prediction or classification. For example, a convolutional neural network (CNN) has been used for robust classification of PV panel faults [37]. The support vector machine (SVM) has become a common method of discrimination; in machine learning, it is usually used for pattern recognition, classification, and regression analysis. CNN- and SVM-based models can provide doctors with the detection of heart failure from electrocardiogram signals [38]. SVM and general regression neural networks (GRNN) have been used for malfunction diagnosis [39]. The adaptive support vector machine (A-SVM) was introduced for classification together with the ORICA-CSP method [40].
The defocused image sequences of the optoelectronic equipment are processed via the BRISQUE algorithm to obtain the CVs. The CVs are input into the evaluation model to calculate the prediction scores. The image sequences are also evaluated subjectively by the human eyes. The effectiveness of the evaluation algorithm is assessed in terms of the subjective and objective consistency of its results.

2. Acquiring the CV via the Improved BRISQUE Algorithm

The image database is built and the CVs of image samples from the image database are obtained via the improved BRISQUE algorithm, which is weighted by a Gaussian function. The image samples are evaluated subjectively by the human eyes and used as SVM model training samples.

2.1. Training Image Sample Selection and Database Establishment

Many preliminary studies and experiments have demonstrated that if an image sequence from the optoelectronic tracking equipment is used for training directly, the training model will be inaccurate, which leads to the failure of forecast evaluation. The main reason is that the tracked target and background are too monotonous to cover a sufficient variety of image details. Therefore, training on public database images is proposed. We used three public databases, namely, the Laboratory for Image and Video Engineering (LIVE) database, the Categorical Subjective Image Quality (CSIQ) database, and the Tampere Image Database (TID2013). Table 1 lists the databases that are used in this article and their data types.
According to the defocus characteristics of the device tracking image, an image database that includes images in a sequence that ranges from defocused to focused and back to defocused is established and each image is subjectively evaluated and scored. The scoring principle is that a severely defocused image is assigned a low score and a better focused image has a higher score. The results of model training demonstrate that the size of the database should exceed 1000 pictures and the quality of the database directly affects the application stability.

2.2. BRISQUE Algorithm

Two important advantages of using the BRISQUE algorithm are that the image definition evaluation score that is obtained by the algorithm can effectively reflect the defocus state, and the obtained image characteristic vector facilitates the subsequent training and evaluation of the machine learning model.
From an image, the BRISQUE algorithm extracts 36 characteristic values, which include statistics such as the mean and variance of the locally normalized image brightness. These features are called local normalized brightness statistical characteristics.
Given an intensity image I(i,j), local mean subtraction and divisive contrast normalization can be applied to obtain the mean subtracted contrast normalized (MSCN) image Î(i,j):
$$\hat{I}(i,j) = \frac{I(i,j) - \mu(i,j)}{\sigma(i,j) + C} \quad (1)$$
where i = 1, …, M and j = 1, …, N are spatial indices; M and N are the image height and width, respectively; C is a constant that prevents instabilities from occurring when the denominator tends to zero; and μ(i,j) and σ(i,j) are the local mean and standard deviation, respectively, of I(i,j).
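For concreteness, a minimal NumPy sketch of the MSCN computation in Equation (1) follows. The Gaussian window width (`sigma`) and the constant `C` are assumed values following common BRISQUE practice; the paper does not state its exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7/6, C=1.0):
    """MSCN coefficients of Equation (1). sigma and C are assumptions
    following common BRISQUE practice, not values stated in this paper."""
    image = np.asarray(image, dtype=np.float64)
    mu = gaussian_filter(image, sigma)                 # local mean mu(i,j)
    var = gaussian_filter(image * image, sigma) - mu * mu
    sigma_map = np.sqrt(np.maximum(var, 0.0))          # local std sigma(i,j)
    return (image - mu) / (sigma_map + C)
```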
We model the statistical relationship between neighboring pixels using the empirical distributions of the pairwise products of neighboring MSCN coefficients along four orientations: horizontal (H), vertical (V), main diagonal (D1), and secondary diagonal (D2).
$$H(i,j) = \hat{I}(i,j)\,\hat{I}(i+1,j) \quad (2)$$
$$V(i,j) = \hat{I}(i,j)\,\hat{I}(i,j+1) \quad (3)$$
$$D1(i,j) = \hat{I}(i,j)\,\hat{I}(i+1,j+1) \quad (4)$$
$$D2(i,j) = \hat{I}(i,j)\,\hat{I}(i+1,j-1) \quad (5)$$
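A sketch of Equations (2)–(5) using NumPy slicing is given below. Which array axis corresponds to the "horizontal" label depends on the index convention, which the text leaves ambiguous; the sketch pairs horizontal neighbors along adjacent columns, as in the usual BRISQUE convention.

```python
def pairwise_products(mscn_img):
    """Neighbor products of Equations (2)-(5). Axis 0 indexes rows and
    axis 1 indexes columns; the H/V axis pairing follows the common
    BRISQUE convention."""
    H  = mscn_img[:, :-1] * mscn_img[:, 1:]     # horizontal neighbors
    V  = mscn_img[:-1, :] * mscn_img[1:, :]     # vertical neighbors
    D1 = mscn_img[:-1, :-1] * mscn_img[1:, 1:]  # main diagonal
    D2 = mscn_img[:-1, 1:] * mscn_img[1:, :-1]  # secondary diagonal
    return H, V, D1, D2
```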
The statistical properties of the MSCN coefficients are affected by the presence of distortion. Quantifying these changes makes it possible to predict the type of distortion that affects an image and its perceptual quality. According to [24], a generalized Gaussian distribution (GGD) can effectively capture a broad spectrum of distorted image statistics. The zero-mean GGD is expressed as follows:
$$f(x;\alpha,\sigma^2) = \frac{\alpha}{2\beta\,\Gamma(1/\alpha)} \exp\left(-\left(\frac{|x|}{\beta}\right)^{\alpha}\right) \quad (6)$$
where
$$\beta = \sigma\sqrt{\frac{\Gamma(1/\alpha)}{\Gamma(3/\alpha)}} \quad (7)$$
and Γ(·) is the gamma function:
$$\Gamma(a) = \int_0^{\infty} t^{a-1} e^{-t}\,dt, \quad a > 0 \quad (8)$$
The shape parameter α controls the ‘shape’ of the distribution, while σ² controls the variance. The parameters of the GGD, (α, σ²), are estimated via the moment-matching-based approach that was proposed in [41].
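A sketch of the moment-matching estimate of (α, σ²) follows, assuming the standard inversion of the ratio r(α) = Γ(1/α)Γ(3/α)/Γ(2/α)² over a discrete grid; the exact numerical procedure of [41] may differ in detail.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def estimate_ggd(coeffs):
    """Moment-matching GGD fit returning (alpha, sigma^2), in the
    spirit of [41]: invert r(a) = G(1/a)G(3/a)/G(2/a)^2 on a grid."""
    x = np.ravel(coeffs)
    sigma_sq = np.mean(x ** 2)
    rho = sigma_sq / (np.mean(np.abs(x)) ** 2 + 1e-12)
    alphas = np.arange(0.2, 10.0, 0.001)
    r = gamma_fn(1 / alphas) * gamma_fn(3 / alphas) / gamma_fn(2 / alphas) ** 2
    alpha = alphas[np.argmin((r - rho) ** 2)]
    return alpha, sigma_sq
```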
The values of α and σ² calculated via the moment-matching-based method are two of the 36 characteristic values to be obtained. The parameters (ν, σl, σr) and η are calculated based on Equations (9)–(12) for each of the other four images: H, V, D1, and D2.
$$f(x;\nu,\sigma_l^2,\sigma_r^2) = \begin{cases} \dfrac{\nu}{(\beta_l+\beta_r)\,\Gamma(1/\nu)} \exp\left(-\left(\dfrac{-x}{\beta_l}\right)^{\nu}\right), & x < 0 \\[2ex] \dfrac{\nu}{(\beta_l+\beta_r)\,\Gamma(1/\nu)} \exp\left(-\left(\dfrac{x}{\beta_r}\right)^{\nu}\right), & x \geq 0 \end{cases} \quad (9)$$
where
$$\beta_l = \sigma_l\sqrt{\frac{\Gamma(1/\nu)}{\Gamma(3/\nu)}} \quad (10)$$
$$\beta_r = \sigma_r\sqrt{\frac{\Gamma(1/\nu)}{\Gamma(3/\nu)}} \quad (11)$$
$$\eta = (\beta_r - \beta_l)\,\frac{\Gamma(2/\nu)}{\Gamma(1/\nu)} \quad (12)$$
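Analogously, a hedged sketch of the asymmetric GGD fit returning (ν, σl², σr², η) is given below; the estimator follows the moment-matching procedure described for BRISQUE [28,35].

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def estimate_aggd(coeffs):
    """Moment-matching AGGD fit for Equation (9), returning
    (nu, sigma_l^2, sigma_r^2, eta)."""
    x = np.ravel(coeffs)
    left, right = x[x < 0], x[x >= 0]
    sigma_l = np.sqrt(np.mean(left ** 2)) if left.size else 1e-6
    sigma_r = np.sqrt(np.mean(right ** 2)) if right.size else 1e-6
    gamma_hat = sigma_l / sigma_r
    r_hat = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    R_hat = r_hat * (gamma_hat**3 + 1) * (gamma_hat + 1) / (gamma_hat**2 + 1) ** 2
    nus = np.arange(0.2, 10.0, 0.001)
    rho = gamma_fn(2 / nus) ** 2 / (gamma_fn(1 / nus) * gamma_fn(3 / nus))
    nu = nus[np.argmin((rho - R_hat) ** 2)]
    ratio = np.sqrt(gamma_fn(1 / nu) / gamma_fn(3 / nu))   # Eqs. (10)-(11)
    beta_l, beta_r = sigma_l * ratio, sigma_r * ratio
    eta = (beta_r - beta_l) * gamma_fn(2 / nu) / gamma_fn(1 / nu)  # Eq. (12)
    return nu, sigma_l ** 2, sigma_r ** 2, eta
```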
The details of the calculation process are presented in [24]. Via Equations (2)–(12), we obtain 16 + 2 = 18 characteristic values. The other 18 characteristic values are calculated from a down-sampled version of the image: the original image is down-sampled by a factor of 2 (the standard BRISQUE sampling factor), the characteristic values of the down-sampled image are calculated by following the same steps, and another 18 characteristic values are obtained. The calculation of the 36 characteristic values is then complete.
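Putting the pieces together, a sketch of the full 36-value CV follows, reusing the functions sketched above; the naive decimation used here stands in for whatever low-pass down-sampling the original implementation uses.

```python
import numpy as np

def brisque_features(image):
    """Assemble the 36-value CV: (alpha, sigma^2) of the MSCN map plus
    (nu, sigma_l^2, sigma_r^2, eta) for each of H, V, D1, D2, computed
    at the original scale and once more after down-sampling by 2."""
    feats = []
    for _ in range(2):
        m = mscn(image)
        feats.extend(estimate_ggd(m))              # 2 values
        for prod in pairwise_products(m):
            feats.extend(estimate_aggd(prod))      # 4 x 4 = 16 values
        image = image[::2, ::2]                    # naive 2x down-sampling
    return np.asarray(feats)                       # 2 x 18 = 36 values
```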

2.3. Improved BRISQUE Algorithm That Is Weighted by a Gaussian Function

Preliminary model training and prediction studies demonstrated that the characteristic values obtained directly via the BRISQUE algorithm cannot stably evaluate the defocused image sequence. To address this, an improved BRISQUE algorithm that is weighted by a Gaussian function is adopted in this paper.
The pixels of the training image are scanned with a Gaussian function template, and the center pixel value of the template is replaced with the weighted average gray value of the pixels in the neighborhood that the template covers. The template parameters of the Gaussian function are shown in Table 2. The image obtained by weighting the training image with the Gaussian function is denoted as VarI. The characteristic values of the new image are calculated by following the steps of Section 2.2, yielding the 36 characteristic values that are the input for machine learning training.
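As an illustration, this Gaussian-template weighting can be realized as a 2D convolution; the `kernel` argument would hold the 7 × 7 weights of Table 2, and the normalization step is a precaution added here, not something the paper specifies.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_weighted(image, kernel):
    """Replace each pixel with the Gaussian-weighted average of its
    neighborhood, using the 7 x 7 template of Table 2."""
    k = np.asarray(kernel, dtype=np.float64)
    k = k / k.sum()  # normalization added as a precaution
    return convolve(np.asarray(image, dtype=np.float64), k, mode='nearest')
```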

3. Support Vector Machine Model and Training

SVM is one of the basic methods of machine learning and an important branch of machine learning theory [42,43,44]. It plays an important role in the practical applications of machine learning. SVM, which is a supervised learning model, is commonly used for pattern recognition, classification, and regression analysis.
This paper uses the regression function of SVM. The improved BRISQUE algorithm is used to calculate the CVs of the images in the image database; these CVs are the independent variables, and the subjective evaluation scores are the dependent variables. Together they form the training samples from which the SVM model is obtained. The image CVs of the optoelectronic tracking equipment are then input into the SVM model to predict image evaluation scores. The accuracy and reliability of the evaluation are assessed by comparison with the subjective evaluation of the human eyes. If the evaluation result does not meet the requirements, the above process can be iterated until an SVM evaluation model with good subjective and objective consistency is obtained. Another image database can be used as needed: its characteristic vectors are calculated and its image quality is scored to provide the inputs of a new SVM training model.
This paper calls the LIBSVM library, developed by Chih-Jen Lin and colleagues [45], to train and test the SVM model. The LIBSVM version is libsvm-3.23. The support vector regression model “ε-SVR” is used.
The training sample can be represented as $\{(x_1,z_1),\ldots,(x_l,z_l)\}$, where $x_i \in \mathbb{R}^n$ is the characteristic vector, which is obtained via the improved BRISQUE algorithm and composed of 36 characteristic values, and $z_i \in \mathbb{R}$ denotes the subjective evaluation score of the image, which is the target output of the training model. With penalty parameter C > 0 and parameter ε > 0, the standard form of the SVR is as expressed in Equation (13):
$$\min_{w,b,\xi,\xi^*} \left(\frac{1}{2}w^T w + C\sum_{i=1}^{l}\xi_i + C\sum_{i=1}^{l}\xi_i^*\right) \quad (13)$$
s.t.
$$w^T\phi(x_i) + b - z_i \leq \varepsilon + \xi_i$$
$$z_i - w^T\phi(x_i) - b \leq \varepsilon + \xi_i^*$$
$$\xi_i,\ \xi_i^* \geq 0, \quad i = 1,\ldots,l$$
According to the principle of SVM, Equation (13) is converted to its dual problem to solve for the Lagrange multipliers α. The radial basis function (RBF) is selected as the kernel function, which is denoted as $K(x,z) = \phi^T(x)\,\phi(z)$; the form of the RBF is as follows:
$$K(x,z) = \exp\left(-\frac{\|x-z\|^2}{(2\times\sigma)^2}\right) \quad (14)$$
where σ is set to 0.5.
The training parameters of the LIBSVM library function are set as follows: penalty parameter C is set to 1024, the probability estimate is set to 1, and other parameters use the default parameter values of the LIBSVM function.
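A hedged sketch of this training call through LIBSVM’s Python interface (e.g., the `libsvm-official` package) with the parameters stated above is given below; `X_train`/`y_train` and `X_test`/`y_test` are placeholders for the database CVs and subjective scores, and γ = 1/(2σ)² = 1.0 for σ = 0.5 translates Equation (14) into LIBSVM’s exp(−γ‖x−z‖²) parameterization.

```python
from libsvm.svmutil import svm_train, svm_predict

# -s 3: epsilon-SVR; -t 2: RBF kernel; -c 1024: penalty parameter C;
# -g 1.0: gamma = 1/(2*sigma)^2 with sigma = 0.5; -b 1: probability estimates.
model = svm_train(y_train, X_train, '-s 3 -t 2 -c 1024 -g 1.0 -b 1')

# Predicted definition scores for held-out CVs:
scores, _, _ = svm_predict(y_test, X_test, model)
```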
The samples from the image database of Table 1 are input into the SVM model and model training is completed. The number of support vectors, which is denoted as total_sv, is 772, and the bias b is −118.247.

4. Defocused Image Acquisition and Image Evaluation Test

4.1. Defocused Image Sequence Acquisition

When tracking a real target with the optoelectronic tracking equipment, the focus state cannot be adjusted freely, because the target must remain effectively tracked. The acquired image samples therefore typically do not cover all image definition states, which makes it impossible to fully evaluate the performance of the SVM model.
To obtain test images that meet the requirements for evaluating the imaging quality of the optoelectronic tracking device, an imaging system was built for acquiring image samples in various defocus states. A photo of the system is shown in Figure 1. A Nikon 800 mm/F5.6 fixed-focus lens from the Nikon Corporation of Japan is used in the imaging system, together with a piA2400-17 visible-light camera from BASLER Corporation of Germany. The main properties of the camera are as follows: pixel size, 3.45 μm × 3.45 μm; number of pixels, 2448 × 2050.

4.2. Predictive Test of Definition Evaluation of Defocused Images

In this paper, a series of defocused and focused images with continuous change was obtained by manually controlling the defocus position of the optical lens in the imaging system. The images are used to test the effectiveness of the definition evaluation algorithm for defocused images and for algorithm comparison.
To acquire stable evaluation scores, static scenes are photographed with the imaging system. Therefore, the images within each sequence are very similar in content; the major differences among them are definition and edge sharpness. The serial numbers of the clear images are recorded in advance.
The image sequences are input into the trained SVM model, and the image definition evaluation scores of the defocused image sequences are the outputs of the SVM model. Because the image-focusing process and the serial numbers of the clear images are known, the image definition evaluation scores can be compared with the defocus states of the image sequences.
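As a usage sketch under the same assumptions as above, a defocused sequence could be scored end to end as follows (`sequence` is a list of grayscale arrays, `kernel` the Table 2 template, and `model` the trained ε-SVR from Section 3):

```python
# svm_predict requires a label vector; zeros serve as placeholders here.
cvs = [brisque_features(gaussian_weighted(img, kernel)).tolist()
       for img in sequence]
scores, _, _ = svm_predict([0.0] * len(cvs), cvs, model)
# Larger scores indicate clearer (better focused) images in the sequence.
```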
For a given image sequence, the larger the score, the clearer the image. Because the evaluation scores depend on the CVs obtained by the BRISQUE algorithm, they are not fixed values. However, the scores reflect the definition ordering of an image sequence with the same scene; the absolute scores vary greatly among image sequences with different scenes.

4.2.1. Single-Peak Defocused Image Test

The indoor image sequence obtained with the experimental imaging system is shown in Figure 2. The shooting process is from defocus to focus and back to defocus. The 9th of the 12 images in Figure 2 has the best visual effect. The predictive evaluation of the 12 pictures via the SVM model yields the curve shown in Figure 3, where the X-axis represents the serial numbers of the pictures and the Y-axis represents the corresponding image definition evaluation values. The first image has the largest defocus, and its evaluation score is only −3.34. The ninth image, which has the highest definition, has the highest score of 20. The curve is consistent with the clarity of the real images.

4.2.2. The Test of Algorithm Comparison

The structural similarity (SSIM) index is compared with the SVM model in this paper. As shown in Figure 4, the first image in the sequence has the largest defocus and is the most blurred to human visual perception. As the serial number increases, the definition increases as the defocus decreases. The 14th image is the clearest to human perception. The evaluation curves of SSIM and the trained SVM model are shown in Figure 5 and Figure 6, respectively. Because of the different calculation principles of the two algorithms, their evaluation scores cannot be compared directly.
As shown in Figure 5, the SSIM evaluation scores increase monotonically from the first image to the eleventh image, which is consistent with the subjective evaluation by human eyes. However, the scores start to fall from the 12th image, which is inconsistent with the subjective evaluation. As shown in Figure 6, the evaluation scores of the SVM model increase with the serial numbers of the images in the sequence. Image 1 has the lowest score of 10.4, and image 14 has the highest score of 65.5. The evaluation with SVM is completely consistent with human subjective evaluation.

4.2.3. Dual-Peak Defocused Image Test

The dual-peak defocused image sequence is shown in Figure 7. The shooting process runs from defocus to focus to defocus to focus and back to defocus, with the focus peaks at image 8 and image 22. The predictive evaluation of the 28 pictures with the SVM model yields the curve in Figure 8, where the X-axis represents the serial numbers of the pictures and the Y-axis represents the corresponding image definition evaluation values. The two focused peak images are marked with red hexagonal stars. The score of the 8th image is 58.3, and the score of the 22nd image is 55. The curve is consistent with the subjective evaluation of the test images by the human eyes. The curve also exhibits dual peaks, which demonstrates the convergence of the prediction model.

4.2.4. Repeatability Testing of Dual-Peak Defocused Image

Repeated tests were carried out to check the generalization performance of the SVM model. Another 29 images were acquired by changing the imaging scene and imaging process. The images were captured in the order focus, defocus, focus, defocus, and focus. Two randomly selected images from this sequence are shown in Figure 9, and the definition evaluation scores of the sequence with the SVM model are shown in Figure 10. The results show that the evaluation scores follow the focusing and defocusing order and that the definition evaluation with the SVM model is stably consistent with subjective evaluation. The SVM model thus has good generalization performance.
Across many test experiments, the image CVs calculated via the improved BRISQUE algorithm were input into the evaluation model established via the SVM algorithm for definition prediction. The evaluation results are highly consistent with the subjective evaluation results of the human eyes.

5. Discussion

To increase the effectiveness of model training, we improved the BRISQUE algorithm by weighting it with a Gaussian function; other weighting functions and parameters can be investigated in the future. The radial basis function (RBF) was selected as the kernel function in this paper, and other kernel functions can also be tried. Future research on training objective image evaluation models based on machine learning will focus on two aspects. First, new methods for characterizing the spatial statistical characteristics of images should be studied. Second, new machine learning algorithms, such as deep learning algorithms, can be introduced to obtain models with stronger self-learning ability.

6. Conclusions

Aiming at the problem of defocus in large-scale optoelectronic tracking equipment on the shooting range, the use of image definition indicators for evaluation is proposed in this paper. An improved BRISQUE algorithm is used to objectively evaluate a defocused image, and a CV that consists of 36 characteristic values is obtained. The CV is input into a previously trained SVM model to obtain an image definition evaluation score. Many image samples were obtained with the established imaging experimental system, and experimental tests were carried out. The experimental results demonstrate that the image definition evaluation method used in this paper can effectively evaluate the defocus condition of an optoelectronic tracking device and that the obtained image CV effectively reflects the image defocus state.

Author Contributions

Conceptualization, N.Z. and C.L.; methodology, N.Z.; software, N.Z. and C.L.; validation, N.Z. and C.L.; formal analysis, N.Z. and C.L.; investigation, N.Z.; resources, N.Z.; data curation, C.L.; writing—original draft preparation, N.Z.; writing—review and editing, C.L.; visualization, N.Z. and C.L.; supervision, N.Z. and C.L.; project administration, N.Z.; funding acquisition, N.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (NSFC) (Grant No. 61905243) and Jilin Province Science & Technology Development Program Project in China (Grant No. 20190103157JH).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xu, N.; Ma, D.; Ren, G.; Huang, Y. BM-IQE: An Image Quality Evaluator with Block-Matching for Both Real-Life Scenes and Remote Sensing Scenes. Sensors 2020, 20, 3472. [Google Scholar] [CrossRef]
  2. Takam Tchendjou, G.; Simeu, E. Visual Perceptual Quality Assessment Based on Blind Machine Learning Techniques. Sensors 2021, 22, 175. [Google Scholar] [CrossRef] [PubMed]
  3. Ponomarenko, N.; Lukin, V.; Zelensky, A.; Egiazarian, K.; Carli, M.; Battisti, F. TID2008—A Database for Evaluation of Full-Reference Visual Quality Assessment Metrics. Adv. Mod. Radioelectron. 2009, 10, 30–45. [Google Scholar]
  4. Wei, M.-S.; Xing, F.; You, Z. A Real-Time Detection and Positioning Method for Small and Weak Targets Using a 1D Morphology-Based Approach in 2D Images. Light Sci. Appl. 2018, 7, 18006. [Google Scholar] [CrossRef]
  5. Moorthy, A.K.; Bovik, A.C. Blind Image Quality Assessment: From Natural Scene Statistics to Perceptual Quality. IEEE Trans. Image Process. 2011, 20, 3350–3364. [Google Scholar] [CrossRef]
  6. Tran, V.L.; Lin, H.-Y. Extending and Matching a High Dynamic Range Image from a Single Image. Sensors 2020, 20, 3950. [Google Scholar] [CrossRef]
  7. Rahmani, B.; Loterie, D.; Konstantinou, G.; Psaltis, D.; Moser, C. Multimode Optical Fiber Transmission with a Deep Learning Network. Light Sci. Appl. 2018, 7, 69. [Google Scholar] [CrossRef]
  8. Stępień, I.; Obuchowicz, R.; Piórkowski, A.; Oszust, M. Fusion of Deep Convolutional Neural Networks for No-Reference Magnetic Resonance Image Quality Assessment. Sensors 2021, 21, 1043. [Google Scholar] [CrossRef] [PubMed]
  9. Xiao, Q.; Bai, X.; Gao, P.; He, Y. Application of Convolutional Neural Network-Based Feature Extraction and Data Fusion for Geographical Origin Identification of Radix Astragali by Visible/Short-Wave Near-Infrared and Near Infrared Hyperspectral Imaging. Sensors 2020, 20, 4940. [Google Scholar] [CrossRef]
  10. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A Feature Similarity Index for Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [PubMed]
  11. Zhang, X.-X.; Chen, B.; He, F.; Song, K.-F.; Zhang, P.; Wang, J.-S. Wide-Field Auroral Imager Onboard the Fengyun Satellite. Light Sci. Appl. 2019, 8, 47. [Google Scholar] [CrossRef]
  12. Capodiferro, L.; Jacovitti, G.; Di Claudio, E.D. Two-Dimensional Approach to Full-Reference Image Quality Assessment Based on Positional Structural Information. IEEE Trans. Image Process. 2012, 21, 505–516. [Google Scholar] [CrossRef] [PubMed]
  13. Golestaneh, S.A.; Chandler, D.M. No-Reference Quality Assessment of JPEG Images via a Quality Relevance Map. IEEE Signal Process. Lett. 2014, 21, 155–158. [Google Scholar] [CrossRef]
  14. Olson, J.T.; Espinola, R.L.; Jacobs, E.L. Comparison of Tilted Slit and Tilted Edge Superresolution Modulation Transfer Function Techniques. Opt. Eng. 2007, 46, 01640. [Google Scholar] [CrossRef]
  15. Bentzen, S.M. Evaluation of the Spatial Resolution of a CT Scanner by Direct Analysis of the Edge Response Function. Med. Phys. 1983, 10, 579–581. [Google Scholar] [CrossRef]
  16. Bao, Y.; Yu, Y.; Xu, H.; Guo, C.; Li, J.; Sun, S.; Zhou, Z.-K.; Qiu, C.-W.; Wang, X.-H. Full-Colour Nanoprint-Hologram Synchronous Metasurface with Arbitrary Hue-Saturation-Brightness Control. Light Sci. Appl. 2019, 8, 95. [Google Scholar] [CrossRef] [PubMed]
  17. Li, L.; Shuang, Y.; Ma, Q.; Li, H.; Zhao, H.; Wei, M.; Liu, C.; Hao, C.; Qiu, C.-W.; Cui, T.J. Intelligent Metasurface Imager and Recognizer. Light Sci. Appl. 2019, 8, 97. [Google Scholar] [CrossRef] [PubMed]
  18. Nijhawan, O.P.; Gupta, S.K.; Hradaynath, R. Polychromatic MTF of Electrostatic Point Symmetric Electron Lenses. Appl. Opt. 1983, 22, 2453–2455. [Google Scholar] [CrossRef]
  19. Seghir, Z.A.; Hachouf, F.; Morain-Nicolier, F. Blind Image Quality Metric for Blurry and Noisy Image. In Proceedings of the 2013 IEEE Second International Conference on Image Information Processing (ICIIP-2013), Shimla, India, 9–11 December 2013; pp. 193–197. [Google Scholar]
  20. Gu, K.; Zhai, G.; Liu, M.; Yang, X.; Zhang, W.; Sun, X.; Chen, W.; Zuo, Y. FISBLIM: A FIve-Step BLInd Metric for Quality Assessment of Multiply Distorted Images. In Proceedings of the SiPS 2013 Proceedings, Taipei, Taiwan, 16–18 October 2013; pp. 241–246. [Google Scholar]
  21. Ponomarenko, N.; Lukin, V.; Egiazarian, K. HVS-Metric-Based Performance Analysis of Image Denoising Algorithms. In Proceedings of the 3rd European Workshop on Visual Information Processing, Paris, France, 4–6 July 2011; pp. 156–161. [Google Scholar]
  22. Ye, P.; Doermann, D. No-Reference Image Quality Assessment Based on Visual Codebook. In Proceedings of the 2011 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 3089–3092. [Google Scholar]
  23. Wang, Z.; Sheikh, H.R.; Bovik, A.C. No-Reference Perceptual Quality Assessment of JPEG Compressed Images. In Proceedings of the International Conference on Image Processing, Rochester, NY, USA, 22–25 September 2002; Volume 1, pp. 477–480. [Google Scholar]
  24. Campisi, P.; Carli, M.; Giunta, G.; Neri, A. Blind Quality Assessment System for Multimedia Communications Using Tracing Watermarking. IEEE Trans. Signal Process. 2003, 51, 996–1002. [Google Scholar] [CrossRef]
  25. Moorthy, A.K.; Bovik, A.C. A Two-Step Framework for Constructing Blind Image Quality Indices. IEEE Signal Process. Lett. 2010, 17, 513–516. [Google Scholar] [CrossRef]
  26. Saad, M.A.; Bovik, A.C.; Charrier, C. A DCT Statistics-Based Blind Image Quality Index. IEEE Signal Process. Lett. 2010, 17, 583–586. [Google Scholar] [CrossRef]
  27. Saad, M.A.; Bovik, A.C.; Charrier, C. DCT Statistics Model-Based Blind Image Quality Assessment. In Proceedings of the 2011 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 3093–3096. [Google Scholar]
  28. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
  29. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “Completely Blind” Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212. [Google Scholar] [CrossRef]
  30. Li, C.; Bovik, A.C.; Wu, X. Blind Image Quality Assessment Using a General Regression Neural Network. IEEE Trans. Neural Netw. 2011, 22, 793–799. [Google Scholar] [PubMed]
  31. Han, L.; Lv, H.; Zhao, Y.; Liu, H.; Bi, G.; Yin, Z.; Fang, Y. Conv-Former: A Novel Network Combining Convolution and Self-Attetion for Image Quality Assessment. Sensors 2023, 23, 427. [Google Scholar] [CrossRef] [PubMed]
  32. Ruderman, D.L. The Statistics of Natural Images. Netw. Comput. Neural Syst. 1994, 5, 517–548. [Google Scholar] [CrossRef]
  33. Simoncelli, E.P.; Freeman, W.T.; Adelson, E.H.; Heeger, D.J. Shiftable Multiscale Transforms. IEEE Trans. Inf. Theory 1992, 38, 587–607. [Google Scholar] [CrossRef]
  34. Varga, D. No-Reference Video Quality Assessment Using the Temporal Statistics of Global and Local Image Features. Sensors 2022, 22, 9696. [Google Scholar]
  35. Lasmar, N.-E.; Stitou, Y.; Berthoumieu, Y. Multiscale Skewed Heavy Tailed Model for Texture Analysis. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 2281–2284. [Google Scholar]
  36. Adankon, M.M.; Cheriet, M.; Biem, A. Semisupervised Learning Using Bayesian Interpretation: Application to LS-SVM. IEEE Trans. Neural Netw. 2011, 22, 513–524. [Google Scholar] [CrossRef] [PubMed]
  37. Memon, S.A.; Javed, Q.; Kim, W.-G.; Mahmood, Z.; Khan, U.; Shahzad, M. A Machine-Learning-Based Robust Classification Method for PV Panel Faults. Sensors 2022, 22, 8515. [Google Scholar] [CrossRef]
  38. Botros, J.; Mourad-Chehade, F.; Laplanche, D. CNN and SVM-Based Models for the Detection of Heart Failure Using Electrocardiogram Signals. Sensors 2022, 22, 9190. [Google Scholar] [CrossRef]
  39. Chu, W.-L.; Lin, C.-J.; Kao, K.-C. Fault Diagnosis of a Rotor and Ball-Bearing System Using DWT Integrated with SVM, GRNN, and Visual Dot Patterns. Sensors 2019, 19, 4806. [Google Scholar] [CrossRef]
  40. Antony, M.J.; Sankaralingam, B.P.; Mahendran, R.K.; Gardezi, A.A.; Shafiq, M.; Choi, J.-G.; Hamam, H. Classification of EEG Using Adaptive SVM Classifier with CSP and Online Recursive Independent Component Analysis. Sensors 2022, 22, 7596. [Google Scholar] [CrossRef] [PubMed]
  41. Sharifi, K.; Leon-Garcia, A. Estimation of Shape Parameter for Generalized Gaussian Distributions in Subband Decompositions of Video. IEEE Trans. Circuits Syst. Video Technol. 1995, 5, 52–56. [Google Scholar] [CrossRef]
  42. Wu, J.; Yang, H. Linear Regression-Based Efficient SVM Learning for Large-Scale Classification. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2357–2369. [Google Scholar] [CrossRef] [PubMed]
  43. Baldeck, C.A.; Asner, G.P. Single-Species Detection with Airborne Imaging Spectroscopy Data: A Comparison of Support Vector Techniques. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2501–2512. [Google Scholar] [CrossRef]
  44. Sun, J.; Li, Q.; Lu, W.; Wang, Q. Image Recognition of Laser Radar Using Linear SVM Correlation Filter. Chin. Opt. Lett. 2007, 5, 549–551. [Google Scholar]
  45. Chang, C.-C.; Lin, C.-J. LIBSVM: A Library for Support Vector Machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27. [Google Scholar] [CrossRef]
Figure 1. Photo of the imaging system.
Figure 2. Indoor defocused image sequence. Sub-figures (1–12) represent the imaging results of the laboratory imaging system for the same target. The shooting process is from defocus to focus and back to defocus. The 9th picture shows the focus state.
Figure 3. Predictive scores of the defocused image sequence.
Figure 4. Defocused image sequence for comparison. Sub-figures (1–15) show the imaging effect of the same target at different degrees of defocus. The first image is the most defocused, and the 15th image is the clearest.
Figure 5. Evaluation scores of the image sequence in Figure 4 with SSIM.
Figure 6. Evaluation scores of the image sequence in Figure 4 with the SVM model.
Figure 7. Dual-peak defocused image sequence (1–28). The shooting process is defocus, focus, defocus, focus, and defocus. Image 8 and image 22 are in focus.
Figure 8. Predictive scores of the dual-peak defocused image sequence.
Figure 9. Defocused image sequence for repeatability testing (two randomly selected images).
Figure 10. Evaluation scores with the SVM model for repeatability testing.
Table 1. Image databases for training the model.

Name | Num. of Distorted Images | Num. of Reference Images | Image Type
LIVE | 235 | 10 | Grey and color images
TID2013 | 1700 | 25 | Color images
CSIQ | 866 | 30 | Color images
Table 2. Template of the weighted Gaussian function.

Weightiness | 1 | 2 | 3 | 4 | 5 | 6 | 7
1 | 0.000157 | 0.00099 | 0.003 | 0.0043 | 0.003 | 0.00099 | 0.000157
2 | 0.00099 | 0.0062 | 0.0187 | 0.027 | 0.0187 | 0.0062 | 0.00099
3 | 0.0043 | 0.027 | 0.0813 | 0.1174 | 0.0813 | 0.027 | 0.003
4 | 0.003 | 0.0187 | 0.0563 | 0.0813 | 0.0563 | 0.0187 | 0.003
5 | 0.00099 | 0.0062 | 0.0187 | 0.027 | 0.0187 | 0.0062 | 0.00099
6 | 0.000157 | 0.00099 | 0.003 | 0.0043 | 0.003 | 0.00099 | 0.000157
7 | 0.000157 | 0.00099 | 0.003 | 0.0043 | 0.003 | 0.00099 | 0.000157