# Improved Classification of Blurred Images with Deep-Learning Networks Using Lucy-Richardson-Rosen Algorithm


## Abstract


The Lucy-Richardson-Rosen algorithm (LR²A) was implemented to computationally refocus images recorded in the presence of spatio-spectral aberrations. The performance of LR²A was compared against the parent techniques: the Lucy-Richardson algorithm (LRA) and non-linear reconstruction (NLR). LR²A exhibited a superior deblurring capability even in extreme cases of spatio-spectral aberrations. Experimental results of deblurring a picture recorded using high-resolution smartphone cameras are presented. LR²A was also implemented to significantly improve the performance of widely used deep convolutional neural networks for image classification.

## 1. Introduction

The Lucy-Richardson algorithm (LRA) was recently combined with non-linear reconstruction (NLR) to form the Lucy-Richardson-Rosen algorithm (LR²A) [8,9,10]. LR²A was then successfully implemented to deblur images recorded using a refractive lens with focusing errors under a narrow-bandwidth incoherent light source (~20 nm) [11]. In this study, LR²A was applied to blurred images recorded with white light, which is both spatially and temporally incoherent with a broad bandwidth, using the latest smartphone cameras. This is a significant step, as the proposed method can be used to extend the LDDV of smartphones. In all the previous studies, the deblurring methods were implemented in invasive mode; here, we demonstrate both invasive and non-invasive modes of operation. There are, of course, numerous deblurring methods developed for different applications [12]. Since it was already established that NLR performs better than the commonly used deblurring methods, and LR²A evolved from NLR and LRA, in this manuscript the performance of LR²A was compared only with the parent methods, NLR and LRA.

## 2. Materials and Methods

LR²A was developed by replacing the correlation in LRA with NLR [10]. The schematic of the reconstruction method is shown in Figure 2. The algorithm begins with a convolution between the PSF and I, with I as the initial guess, which results in I′. In LRA, the ratio between the two matrices I and I′ is correlated with the PSF; in LR²A, this correlation is replaced by NLR. The resulting residue is multiplied by the current guess, and the process is repeated until a maximum-likelihood solution is obtained. The deblurred images obtained from NLR, LRA, and LR²A are then fed into the pretrained deep-learning networks, instead of the blurred images, for image classification.
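The iteration described above can be sketched in a few lines of NumPy (the study itself used MATLAB). This is a minimal, hypothetical implementation assuming the common Fourier-domain form of NLR, in which the magnitudes of the signal and PSF spectra are raised to tunable powers α and β; the function names are ours, not the authors'.

```python
import numpy as np

def nlr_correlate(a, psf, alpha, beta):
    """Non-linear reconstruction (NLR): cross-correlation with tunable
    magnitude powers alpha (signal spectrum) and beta (PSF spectrum)."""
    A = np.fft.fft2(a)
    P = np.fft.fft2(psf)
    F = (np.abs(A) ** alpha) * np.exp(1j * np.angle(A)) \
      * (np.abs(P) ** beta) * np.exp(-1j * np.angle(P))
    return np.abs(np.fft.ifft2(F))

def lr2a(blurred, psf, alpha=0.0, beta=0.9, iterations=8, eps=1e-12):
    """Lucy-Richardson-Rosen algorithm: the LRA iteration with the
    correlation step replaced by NLR."""
    guess = blurred.astype(float).copy()  # initial guess: the blurred image I
    for _ in range(iterations):
        # forward model: reblur the current guess with the PSF (I')
        reblurred = np.abs(np.fft.ifft2(np.fft.fft2(guess) * np.fft.fft2(psf)))
        ratio = blurred / (reblurred + eps)           # ratio I / I'
        residue = nlr_correlate(ratio, psf, alpha, beta)  # NLR instead of MF
        guess = guess * residue                       # multiplicative update
    return guess
```

With α = 1 and β = 1, the NLR step reduces to an ordinary matched-filter correlation and the loop reverts to plain LRA, which is why the two parent methods appear as limiting cases of the parameter space.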

## 3. Simulation Studies

LR²A was used to deblur images captured using lenses with a low NA and with spatial and spectral aberrations. Furthermore, we compared the results with the parent benchmarking algorithms, LRA and NLR, to verify their effectiveness. The simulation was carried out in MATLAB with a matrix size of 500 × 500 pixels, a pixel size of 8 µm, and a central wavelength of λ = 632.8 nm. The object and image distances were set to 30 cm. A diffractive lens was designed with a focal length of f = 15 cm for the central wavelength, with the radius of the zones given as ${r}_{s}=\sqrt{2sf\lambda}$, where s is the order of the zone. This diffractive lens is representative of both diffractive lenses and metalenses based on the Pancharatnam-Berry phase [20]. A standard test object, ‘Lena’, in greyscale was used for this study.
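The zone radii of such a diffractive lens follow directly from the formula above. A small sketch with the simulation parameters quoted in the text (f = 15 cm, λ = 632.8 nm):

```python
import numpy as np

# Zone radii of the diffractive (Fresnel zone) lens: r_s = sqrt(2 s f λ),
# using the focal length and central wavelength from the simulation.
f = 0.15             # focal length, m
lam = 632.8e-9       # central wavelength, m
s = np.arange(1, 6)  # first five zone orders
r = np.sqrt(2 * s * f * lam)
print(r * 1e6)       # radii in micrometres; the first zone is ~436 µm
```

The radii grow as the square root of the zone order, so the zones become progressively narrower towards the lens edge, which is what sets the lens NA for a given aperture.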

The deblurring results of NLR, LRA, and LR²A for the above cases are shown in Figure 3. The number of iterations p required for LRA was at least 10 times that of LR²A, with the last case requiring 100 iterations for LRA and only 8 for LR²A (α = 0, β = 0.9). Comparing the results of NLR, LRA, and LR²A, the performances of LRA and LR²A appeared similar, while NLR did not deblur due to a blurred autocorrelation function.

The deblurring results of NLR, LRA, and LR²A for different axial aberrations are shown in Figure 4. It was evident that the results of NLR were better than those of LRA, and the results of LR²A were better than both for large aberrations. However, when Δz = 25 mm, LRA was better than NLR, once again due to the blurred autocorrelation function.

The deblurring results of NLR, LRA, and LR²A for different chromatic aberrations are shown in Figure 5. As seen from the figures, the results obtained from LR²A were significantly better than those of LRA and NLR. Another interesting observation can be made from the results: when the intensity distribution was concentrated in a small area, LRA performed better than NLR, and vice versa. In all cases, the optimal value of LR²A aligned with one of the cases of NLR or LRA. For concentrated intensity distributions, the results of LR²A aligned towards LRA; in the other cases, they aligned with NLR. In all cases, LR²A performed better than both NLR and LRA. It must be noted that in the cases of focusing error due to changes in distance and wavelength, deblurring with the different methods improved the results. However, for the first case, when the lateral resolution was low due to the low NA, the deblurring methods did not improve the results as expected.

In all the above studies, LR²A was better than LRA and NLR. The number of possible solutions for LR²A is higher than that of NLR and LRA. The control parameter in the original LRA was limited to one, the number of iterations p. NLR has two control parameters, α and β, resulting in a total of m² solutions, where m is the number of states from (α_min, β_min) to (α_max, β_max). The number of possible solutions for LR²A, with its three control parameters, is therefore m²p. Quantitative studies were carried out next, using the structural similarity index (SSIM) and mean square error (MSE), for two cases that aligned towards the solutions of NLR and LRA.

The maximum SSIM values for LRA, NLR, and LR²A were 0.595, 0.75, and 0.86, respectively. The minimum MSE values for LRA, NLR, and LR²A were 0.028, 0.01, and 0.001, respectively. These values confirmed that LR²A performed better than both LRA and NLR, and that NLR performed better than LRA. The regions of overlap between SSIM and MSE reassured the validity of this analysis.
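The two figures of merit are simple to compute. Below is a minimal NumPy sketch (the study used MATLAB's built-in functions); note that `ssim_global` is a simplified single-window SSIM rather than the windowed version MATLAB's `ssim()` computes, so absolute values would differ.

```python
import numpy as np

def mse(a, b):
    """Mean square error between two equally sized images."""
    return np.mean((a - b) ** 2)

def ssim_global(a, b, L=1.0):
    """Global (single-window) SSIM over the whole image; a simplified
    stand-in for the windowed SSIM. L is the dynamic range of the data."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # standard stabilizers
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / \
           ((mu_a ** 2 + mu_b ** 2 + C1) * (va + vb + C2))
```

Scanning these metrics over the (α, β, p) grid is what produces maps like Figure 6, where the best parameters sit at the highest SSIM and lowest MSE.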

## 4. Optical Experiments

Optical experiments were carried out to evaluate LR²A. A preliminary invasive study was carried out on simple objects consisting of a few points, using a quasi-monochromatic light source and a single refractive lens [11]. However, the performance was not high due to the weak intensity distributions from a pinhole, scattering, and experimental errors. In this study, the method was evaluated again in both invasive and non-invasive modes. In invasive mode, the PSF was obtained from the recorded image of the object, from isolated points, and by creating a guide star in the form of a point object added to the object. In non-invasive mode, the PSF was synthesized within the computer for different spatio-spectral aberrations using Fresnel propagation, as described in Section 3. To examine the method for practical applications, we projected a test image (Lena) on a computer monitor and captured it using two smartphone cameras (a Samsung Galaxy A71 with a 64-megapixel (f/1.8) primary camera and a Oneplus Nord 2CE with a 64-megapixel (f/1.7) primary camera). The object projection was adjusted to a point where the device’s autofocus software could no longer adjust the focus. To record the PSF, a small white dot was added to the image and was then extracted manually. The images were recorded ~4 cm from the screen with different point sizes of 0.3, 0.4, and 0.5 cm. They were then fed into our algorithm and reconstructed using LR²A (α = 0.1, β = 0.98, and two iterations). The recorded raw images and the reconstructions of the hair region of the test sample are shown in Figure 7. Line data were extracted from the images, as indicated by the lines, and plotted for comparison. In all cases, LR²A improved the image resolution. What appeared in the recorded image as a single entity could be clearly discriminated as multiple entities using LR²A, indicating a significant performance improvement, as seen in the plots where a single peak was replaced by multiple peaks.

The deblurring results of NLR, LRA, and LR²A (α = 0, β = 1, and three iterations) are shown in Figure 8c–e, respectively. The image of the PSF synthesized using the scalar diffraction formulation is shown in Figure 8f. Two areas with interesting features are magnified for all three cases. The results indicate a better quality of reconstruction with LR²A. The image could not be completely deblurred, but it can be improved without invasively recording the PSF. The deblurred images were sharper and contained more information than the blurred image. The proposed method can be converted into an application for deblurring images recorded using smartphone cameras.
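The non-invasive mode needs a synthetic PSF. A minimal sketch of how one might be generated by scalar (Fresnel) diffraction, using the thin-lens model and the simulation parameters from Section 3, is shown below; the function names and the transfer-function propagator are our assumptions, not the authors' exact code.

```python
import numpy as np

def fresnel_propagate(field, wavelength, distance, pixel):
    """Fresnel propagation via the transfer-function (angular spectrum) method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * distance * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def synthetic_psf(n=128, pixel=8e-6, wavelength=632.8e-9,
                  f=0.15, z_obj=0.3, z_img=0.3):
    """Synthesize a PSF non-invasively: a point source at distance z_obj
    illuminates a thin lens of focal length f; the field is then
    propagated a distance z_img to the sensor plane."""
    x = (np.arange(n) - n // 2) * pixel
    X, Y = np.meshgrid(x, x)
    r2 = X ** 2 + Y ** 2
    k = 2 * np.pi / wavelength
    # diverging spherical wave from the point, times the thin-lens phase
    field = np.exp(1j * k * r2 / (2 * z_obj)) * np.exp(-1j * k * r2 / (2 * f))
    sensor = fresnel_propagate(field, wavelength, z_img, pixel)
    psf = np.abs(sensor) ** 2
    return psf / psf.sum()
```

Detuning `z_img`, the wavelength, or the aperture in this model reproduces the axial, spectral, and NA-limited blurring cases studied in the simulations.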

## 5. Pretrained Deep-Learning Networks

The performances of LRA, NLR, and LR²A were first compared. A “bell pepper” image was blurred with different spatio-spectral aberrations and deblurred using the above algorithms: LRA, NLR, and LR²A. The images obtained from LR²A were much clearer, with sharper edges and finer details, than those from the other two algorithms. The NLR method resulted in a reconstruction that was slightly oversmoothed, with some loss of fine detail. The LRA method produced a better result than NLR, but it was relatively slower than the other two methods, and the improvement was not as significant as with LR²A. Overall, LR²A was the most effective in removing the blur while preserving the fine details in the image.

Pretrained deep-learning networks were then tested with the images deblurred by LRA, NLR, and LR²A [21]. Pretrained networks are highly complex neural networks trained on enormous datasets of images, allowing them to recognize a wide range of objects and features in images. MATLAB offers a variety of these models for image classification, each with its own characteristics and strengths. One such model is Darknet53, a 53-layer neural network architecture used for object detection and known for its high accuracy; it first came to prominence in the YOLOv3 object detection algorithm. Another powerful model is EfficientNetB0, part of a family of neural networks specifically designed for performance and efficiency; it uses a scaling approach that balances model depth, width, and resolution, producing significant results in image classification tasks. The detailed classification results for images deblurred by LRA, NLR, and LR²A are given in the Supplementary Materials. GoogLeNet, a convolutional neural network developed by researchers at Google, is one of the most widely used methods for image classification [22].

## 6. Deep-Learning Experiments

The red, green, and blue channels of the test image were blurred separately using PSFs with spectral aberrations around the central wavelength ${\lambda}_{c}$ = 550 nm, and the resulting blurred channels were fused back into a color image. A significantly high aberration was applied to test the limits of the deblurring algorithm. The blurred color image can be written as ${I}_{c}=PSF\left(R\right)\otimes O\left(R\right)+PSF\left(G\right)\otimes O\left(G\right)+PSF\left(B\right)\otimes O\left(B\right)$, where R, G, and B denote the red, green, and blue channels. The blurred image was then analyzed using the pretrained GoogLeNet. The schematic of the process of applying blur to the different color channels, obtaining the blurred color image, and the analysis results from GoogLeNet are shown in Figure 9. GoogLeNet classified the blurred “bell pepper” image as a jellyfish with more than 70% probability, and as a spotlight, scuba diver, projector, and traffic light with lower probabilities.
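The channel-wise blurring above is just three independent 2D convolutions followed by a channel fuse. A minimal sketch, assuming FFT-based (circular) convolution and a hypothetical `blur_color` helper:

```python
import numpy as np

def blur_color(image, psfs):
    """Blur each color channel with its own PSF and fuse the channels back:
    each channel of the output is PSF(c) ⊗ O(c), computed via FFT."""
    out = np.empty_like(image, dtype=float)
    for c in range(3):  # R, G, B channels
        P = np.fft.fft2(psfs[c])
        O = np.fft.fft2(image[..., c])
        out[..., c] = np.abs(np.fft.ifft2(P * O))  # circular convolution
    return out
```

Because each channel gets a different PSF, the spectral aberration produces color fringing in the fused image, which is what confuses the classifier so badly.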

LRA, NLR, and LR²A were applied to the different color channels, and after deblurring, the channels were fused to obtain the deblurred color image. This process is shown in Figure 10. As seen, NLR and LRA reduced the blur to some extent, but the deblurred image was not classified as “bell pepper” even within 3% probability. However, the results obtained for LR²A had “bell pepper” as one of the possibilities, which shows that LR²A is a better candidate than LRA and NLR for improving the classification probabilities of pretrained deep-learning networks when the images are blurred.

Since LR²A performed better than NLR and LRA, in the next experiment, with different types of aberrations, only the classification results for the blurred image (BI) were compared with those obtained for LR²A. The blurred and LR²A-deblurred images of the test object were loaded, and classification was carried out. The results obtained for a typical case of blur and for the deblurred image are shown in Figure 11a,b, respectively. It can be seen from Figure 11a that GoogLeNet could not identify “bell pepper” and classified the image as a spotlight; in fact, the label ‘bell pepper’ did not appear in the top 25 classification labels (25th, with a probability of 0.3%), whereas for the LR²A reconstruction, the classification results improved multifold, and the classification probability was about 36%. A few other blurring cases were also considered (Figure 11c): A (R = 100 pixels), B (R = 200 pixels), C (Δz = 0.37 mm), D (Δz = 0.38 mm), E (Δz = 0.42 mm), F (Δλ = 32.8 nm), and G (Δλ = 132.8 nm). In all cases, the image classification probability was significantly improved by LR²A.

## 7. Discussion

LR²A is a recently developed deblurring algorithm that had so far been demonstrated only with spatially incoherent but temporally coherent illumination, and only invasively [10,11]. In this study, for the first time, LR²A was implemented for white-light illumination using the latest smartphones, in both invasive and non-invasive modes. In the non-invasive approach demonstrated here, the synthetic PSFs were obtained by the scalar diffraction formulation. The method was implemented on datasets with aberrations for many deep-learning networks. The performance of deblurring-assisted GoogLeNet for datasets with spatio-spectral aberrations is presented. The performances of about 19 pretrained deep-learning networks were evaluated for NLR, LRA, and LR²A, and the detailed results are presented in the Supplementary Materials. In all cases, it was evident that LR²A significantly improved the classification performance compared to NLR and LRA. It is true that there are numerous deblurring methods developed for various applications, and comparing all the existing methods is not realistic. Since it was well established that NLR performs better than commonly used deblurring methods such as MF, PoF, WF, and RFA, in this study the comparison was made only with the parent methods, LRA and NLR.

## 8. Summary and Conclusions

LR²A is a recently developed computational reconstruction method that integrates two well-known algorithms, LRA and NLR [10]. In this study, LR²A was implemented for the deblurring of images simulated with spatial and spectral aberrations and with a limited numerical aperture. In every case, LR²A was found to perform better than both LRA and NLR. In the cases where the energy was concentrated in a smaller area, LRA performed better than NLR, and vice versa; however, in all cases, LR²A aligned towards one of NLR or LRA and was better than both. The convergence rate of LR²A was also at least an order of magnitude better than that of LRA, and in some cases, the ratio of the number of iterations between LRA and LR²A was >50, which is significant. It must be noted that LR²A uses the same approach as LRA, namely estimation of the maximum-likelihood solution; however, the estimation is faster than LRA, and better, due to the replacement of the matched filter in LRA by NLR. In some cases of reconstruction, the signal strength appears weaker than the original signal due to the tuning of α and β to non-unity values. We believe that these preliminary results are promising, and LR²A can be an attractive tool for pattern recognition applications and image deblurring in incoherent linear shift-invariant imaging systems. There are certain challenges in implementing LR²A (and LRA) as opposed to NLR. In NLR, the optimal values of α and β can be blindly obtained at the minimum of entropy, which is not the case in LR²A. Additionally, NLR is quite stable, with unchanging values of α and β unless a significant change is made to the experiment. This was not the case with LR²A, which was highly sensitive to even minor changes in the PSF, the object, and the experimental conditions. Dedicated research is needed in the future to develop a fast method for finding the optimal values of α, β, and the number of iterations in LR²A. In the current study, the values of α and β were obtained intuitively.

LR²A was demonstrated for the first time with spatially and temporally incoherent illumination, making it applicable to wider applications. Optical experiments were carried out using smartphone cameras, and the recorded images were significantly enhanced with LR²A. This approach enables imaging beyond the LDDV of smartphone cameras. Deep-learning-based image classification is highly sensitive to image quality, as even a minor blur can affect the classification probability significantly. In this study, both invasive and non-invasive LR²A-based deblurring were shown to significantly improve the classification probability. The current research outcome also indicates the need for an integrated approach involving signal processing, optical methods, and deep learning to achieve optimal performance [29].

## Supplementary Materials

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

1. Peng, X.; Gao, L.; Wang, Y.; Kneip, L. Globally-Optimal Contrast Maximisation for Event Cameras. IEEE Trans. Pattern Anal. Mach. Intell. **2021**, 44, 3479–3495.
2. Li, P.; Liang, J.; Zhang, M. A degradation model for simultaneous brightness and sharpness enhancement of low-light image. Signal Process. **2021**, 189, 108298.
3. Kim, J.; Seong, J.; Yang, Y.; Moon, S.-W.; Badloe, T.; Rho, J. Tunable metasurfaces towards versatile metalenses and metaholograms: A review. Adv. Photon. **2022**, 4, 024001.
4. Pan, M.Y.; Fu, Y.F.; Zheng, M.J.; Chen, H.; Zang, Y.J.; Duan, H.G.; Li, Q.; Qiu, M.; Hu, Y.Q. Dielectric metalens for miniaturized imaging systems: Progress and challenges. Light Sci. Appl. **2022**, 11, 195.
5. Ng, S.H.; Anand, V.; Katkus, T.; Juodkazis, S. Invasive and Non-Invasive Observation of Occluded Fast Transient Events: Computational Tools. Photonics **2021**, 8, 253.
6. Vijayakumar, A.; Jayavel, D.; Muthaiah, M.; Bhattacharya, S.; Rosen, J. Implementation of a speckle-correlation-based optical lever with extended dynamic range. Appl. Opt. **2019**, 58, 5982–5988.
7. Rai, M.; Vijayakumar, A.; Rosen, J. Non-linear adaptive three-dimensional imaging with interferenceless coded aperture correlation holography (I-COACH). Opt. Express **2018**, 26, 18143–18154.
8. Lucy, L.B. An iterative technique for the rectification of observed distributions. Astron. J. **1974**, 79, 745.
9. Richardson, W.H. Bayesian-Based Iterative Method of Image Restoration. J. Opt. Soc. Am. **1972**, 62, 55–59.
10. Anand, V.; Han, M.; Maksimovic, J.; Ng, S.H.; Katkus, T.; Klein, A.; Bambery, K.; Tobin, M.J.; Vongsvivut, J.; Juodkazis, S.; et al. Single-shot mid-infrared incoherent holography using Lucy-Richardson-Rosen algorithm. Opto-Electron. Sci. **2022**, 1, 210006.
11. Praveen, P.A.; Arockiaraj, F.G.; Gopinath, S.; Smith, D.; Kahro, T.; Valdma, S.-M.; Bleahu, A.; Ng, S.H.; Reddy, A.N.K.; Katkus, T.; et al. Deep Deconvolution of Object Information Modulated by a Refractive Lens Using Lucy-Richardson-Rosen Algorithm. Photonics **2022**, 9, 625.
12. Wang, R.; Tao, D. Recent progress in image deblurring. arXiv **2014**, arXiv:1409.6838.
13. Bai, X.; Wang, X.; Liu, X.; Liu, Q.; Song, J.; Sebe, N.; Kim, B. Explainable deep learning for efficient and robust pattern recognition: A survey of recent developments. Pattern Recognit. **2021**, 120, 108102.
14. Rosen, J.; Vijayakumar, A.; Kumar, M.; Rai, M.R.; Kelner, R.; Kashter, Y.; Bulbul, A.; Mukherjee, S. Recent advances in self-interference incoherent digital holography. Adv. Opt. Photon. **2019**, 11, 1–66.
15. Padhy, R.P.; Chang, X.; Choudhury, S.K.; Sa, P.K.; Bakshi, S. Multi-stage cascaded deconvolution for depth map and surface normal prediction from single image. Pattern Recognit. Lett. **2019**, 127, 165–173.
16. Riad, S.M. The deconvolution problem: An overview. Proc. IEEE **1986**, 74, 82–85.
17. Starck, J.L.; Pantin, E.; Murtagh, F. Deconvolution in astronomy: A review. Publ. Astron. Soc. Pac. **2002**, 114, 1051.
18. Horner, J.L.; Gianino, P.D. Phase-only matched filtering. Appl. Opt. **1984**, 23, 812–816.
19. Smith, D.; Gopinath, S.; Arockiaraj, F.G.; Reddy, A.N.K.; Balasubramani, V.; Kumar, R.; Dubey, N.; Ng, S.H.; Katkus, T.; Selva, S.J.; et al. Nonlinear Reconstruction of Images from Patterns Generated by Deterministic or Random Optical Masks—Concepts and Review of Research. J. Imaging **2022**, 8, 174.
20. Zhang, L.; Liu, S.; Li, L.; Cui, T.J. Spin-Controlled Multiple Pencil Beams and Vortex Beams with Different Polarizations Generated by Pancharatnam-Berry Coding Metasurfaces. ACS Appl. Mater. Interfaces **2017**, 9, 36447–36455.
21. Arora, G.; Dubey, A.K.; Jaffery, Z.A.; Rocha, A. A comparative study of fourteen deep learning networks for multi skin lesion classification (MSLC) on unbalanced data. Neural Comput. Appl. **2022**, 1, 27.
22. Tang, P.; Wang, H.; Kwong, S. G-MS2F: GoogLeNet based multi-stage feature fusion of deep CNN for scene recognition. Neurocomputing **2017**, 225, 188–197.
23. Gassenmaier, S.; Küstner, T.; Nickel, D.; Herrmann, J.; Hoffmann, R.; Almansour, H.; Afat, S.; Nikolaou, K.; Othman, A.E. Deep Learning Applications in Magnetic Resonance Imaging: Has the Future Become Present? Diagnostics **2021**, 11, 2181.
24. Lee, J.; Jun, S.; Cho, Y.; Lee, H.; Kim, G.B.; Seo, J.B.; Kim, N. Deep Learning in Medical Imaging: General Overview. Korean J. Radiol. **2017**, 18, 570–584.
25. Barbastathis, G.; Ozcan, A.; Situ, G. On the use of deep learning for computational imaging. Optica **2019**, 6, 921.
26. Wei, W.; Tang, P.; Shao, J.; Zhu, J.; Zhao, X.; Wu, C. End-to-end design of metasurface-based complex-amplitude holograms by physics-driven deep neural networks. Nanophotonics **2022**, 11, 2921.
27. Ali, U.; Mahmood, M.T. Analysis of Blur Measure Operators for Single Image Blur Segmentation. Appl. Sci. **2018**, 8, 807.
28. Golestaneh, S.A.; Karam, L.J. Spatially-Varying Blur Detection Based on Multiscale Fused and Sorted Transform Coefficients of Gradient Magnitudes. arXiv **2017**, arXiv:1703.07478.
29. Lin, X.; Rivenson, Y.; Yardimci, N.T.; Veli, M.; Luo, Y.; Jarrahi, M.; Ozcan, A. All-optical machine learning using diffractive deep neural networks. Science **2018**, 361, 1004–1008.

**Figure 1.** Schematic of the deblurring-assisted deep-learning method. An object is imaged by a diffractive lens and recorded by a monochrome sensor. A lens with a small NA records an image with a low resolution, and spatio-spectral aberrations blur the image. The blurred image is deblurred and fed into a deep-learning network for image classification.

**Figure 2.** Schematic of the LR²A algorithm. OTF: optical transfer function; p: number of iterations; ⊗: 2D convolution operator; ${\Im}^{\ast}$: complex conjugate following a Fourier transform; $R^{n}$ is the nth solution; when n = 1, $R^{n} = I$.

**Figure 3.** Deblurring results of NLR, LRA, and LR²A for different radii of the aperture: R = 250, 125, 50, and 25 pixels.

**Figure 4.** Deblurring results of NLR, LRA, and LR²A for different axial aberration values: Δz = 0, 25, 50, 75, and 100 mm.

**Figure 5.** Deblurring results of NLR, LRA, and LR²A for different wavelengths, λ = 400, 500, 600, and 700 nm, resulting in different chromatic aberrations.

**Figure 6.** Comparison of SSIM for (**a**) LRA, (**c**) NLR, and (**e**) LR²A, and MSE for (**b**) LRA, (**d**) NLR, and (**f**) LR²A, for spectral aberration with Δλ = 232.8 nm, corresponding to the first column of Figure 5. The number of iterations is p. The dotted circles indicate the regions of highest SSIM and lowest MSE.

**Figure 7.** Images recorded using smartphones: (**a**) a Samsung Galaxy A71; (**b**,**c**) a Oneplus Nord 2CE with different point sizes (0.4 and 0.5 cm). (**d**–**f**) The corresponding LR²A-deblurred images. Line data (orange line) from the cropped images are plotted in (**g**–**i**).

**Figure 8.** Deblurring of blurred images using NLR, LRA, and LR²A with a synthetic PSF. (**a**) Reference image; (**b**) blurred image. Deblurring results of (**c**) NLR, (**d**) LRA, and (**e**) LR²A. (**f**) Synthetic PSF.

**Figure 9.**Method of applying chromatic aberrations to a color image and classification of the resulting blurred image using GoogLeNet.

**Figure 10.** Method of deblurring the different color channels using LRA, NLR, and LR²A, fusing them into a color image, and classifying the resulting deblurred image using GoogLeNet. The red circle indicates that the second most probable class is bell pepper when LR²A was used.

**Figure 11.** Image classification using the GoogLeNet deep-learning module: (**a**) blurred image at Δλ = 132.8 nm; (**b**) deblurred image using LR²A; (**c**) classification probabilities of the blurred images (BI) vs. the deblurred images (LR²A).

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Jayavel, A.; Gopinath, S.; Periyasamy Angamuthu, P.; Arockiaraj, F.G.; Bleahu, A.; Xavier, A.P.I.; Smith, D.; Han, M.; Slobozhan, I.; Ng, S.H.;
et al. Improved Classification of Blurred Images with Deep-Learning Networks Using Lucy-Richardson-Rosen Algorithm. *Photonics* **2023**, *10*, 396.
https://doi.org/10.3390/photonics10040396
