Article

A Neural Network Computational Spectrometer Trained by a Small Dataset with High-Correlation Optical Filters

1 Institute of Frontier and Interdisciplinary Science, Shandong University, Qingdao 266237, China
2 Institute of Space Sciences, Shandong University, Weihai 264209, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(5), 1553; https://doi.org/10.3390/s24051553
Submission received: 2 January 2024 / Revised: 19 February 2024 / Accepted: 24 February 2024 / Published: 28 February 2024
(This article belongs to the Section Optical Sensors)

Abstract

A computational spectrometer is a novel type of spectrometer well suited to portable in situ applications. In the encoding part of a computational spectrometer, filters with highly non-correlated transmittance are required for compressed sensing, which poses severe challenges for optical design and fabrication. In the reconstruction part, conventional iterative reconstruction algorithms suffer from limited efficiency and accuracy, which hinders their application to real-time in situ measurements. This study proposes a neural network computational spectrometer trained by a small dataset with high-correlation optical filters. We aim to change the paradigm in which the accuracy of a neural network computational spectrometer depends heavily on the amount of training data and on the non-correlation property of the optical filters. First, we propose a presumption that common large training datasets follow a distribution law, which emerges when the spectral correlation is calculated. Based on that, we sample the original dataset according to the distribution probability to form a small training dataset. A fully connected neural network architecture is then constructed to perform the reconstruction, and a group of thin film filters is introduced as the encoding layer. The neural network is trained by the small dataset under high-correlation filters and applied in simulation. Finally, experiments indicate that the neural network trained on the small dataset performs very well with the thin film filters. This study may provide a reference for computational spectrometers based on high-correlation optical filters.

1. Introduction

Conventional spectrometers are bulky because of complex optical paths and moving parts, which hinders their wider application in handheld, spaceborne, and airborne scenarios. In contrast, the computational spectroscopy concept, based on the principle of compressed sensing (CS), has significant potential for portable applications such as remote sensing, healthcare, and astronomical observation [1,2], which demand high spectral resolution, compactness, and low cost. The computational spectrometer can break free of hardware confinements and improve efficiency and accuracy through effective design techniques and efficient algorithms. In computational spectroscopy, an unknown signal is measured according to CS theory by projecting it onto a random basis. The filter structure, which acts as the encoding part, has a specific response function. The transmission spectrum exhibits a randomly distributed spectral signature owing to multiple reflections at the interfaces of thin film filters or nanostructures [3]. Each configuration of the filter structure produces a unique response function, which serves as the random basis [4].
The major problem is the technical complexity of fabricating optical filters with random transmittance spectra over a broadband range; the second problem is computational efficiency. The performance of CS relies on the randomness of the measuring bases, namely the encoding part. The transmittance curves of the optical filters should exhibit diverse spectral features over the broadband range; thus, a sizable group of filters with minimal mutual correlation is sought. Low correlation or non-correlation of the transmittance between any two spectrometer pixels is essentially a prerequisite for applying CS [5,6]. This challenges the design and fabrication of nanostructures such as photonic crystal slabs, metasurfaces, and thin film filters [7]. It also leads to poor performance of conventional iterative reconstruction algorithms such as gradient projection for sparse reconstruction (GPSR), orthogonal matching pursuit (OMP), and subspace pursuit (SP) [8,9,10]. Neural networks (NNs) have been introduced because of their various advantages. Bao [11] combined conventional algorithms with an NN, named the solver-informed NN, to achieve better alignment of the reconstructed spectra; however, the bulk of the required large training dataset hinders its widespread application. Many researchers have made progress with NNs in computational spectroscopy. Zhang [12] proposed a broadband encoding stochastic camera featuring fully connected NN layers. Ding [13] proposed an encoding and reconstruction convolutional NN (CNN), named the wide-spectrum encoding and reconstruction neural network (WER-Net). Both Zhang's and Ding's networks are trained on approximately 1,650,000 spectral curves from the Columbia Imaging and Vision Laboratory (CAVE) [14] and Interdisciplinary Computational Vision Laboratory (ICVL) [15] datasets. Kulkarni applied an NN to image compression reconstruction [16], the first study to solve the CS problem with an NN. Subsequently, Song introduced deep-learned broadband encoding stochastic filters for computational spectroscopic instruments [12], with the NN likewise trained on the heavy CAVE and ICVL datasets. Bao used an NN to improve the reconstruction accuracy of a conventional spectral reconstruction algorithm by fitting the original spectral curve and the reconstructed curve obtained from the conventional iterative algorithm [11]. The loss functions of almost all NNs used in computational spectrometers are based on the mean squared error (MSE) between the original and reconstructed spectral curves. This suggests the possibility of dispensing with the compulsory demand for non-correlation in computational spectroscopy. To address the aforementioned problems, this study introduces an NN computational spectrometer trained by a small dataset with high-correlation optical filters.
The remainder of this paper is organized as follows: Section 2 presents the NN architecture and methodology; Section 3 describes the training and simulation; Section 4 reports the experimental results; and Section 5 concludes this study.

2. Theoretical Model and Design Methodology

For traditional computational spectrometers, iterative algorithms based on CS theory have been developed for many years. The encoding procedure is completed by random optical filters and focal plane detectors, which are similar across various spectrometers. Satisfying the incoherence criterion then requires minimal correlation between any two of the spectrometer filters, which poses great challenges to the design and fabrication of broadband filters. In contrast, the training process of the filters in NNs attends only to the conformity between the reconstructed spectrum and the ground truth and does not require the incoherence criterion to be satisfied. The NN proposed in this study achieves very good reconstruction accuracy with highly correlated broadband optical filters. The decoding procedure, completed by the reconstruction algorithm of an NN, often requires megabytes of storage, and the reconstruction consumes computing resources, so the architecture should be as simple as possible.
Artificial NNs are built from neurons, the basic units of deep learning that simulate the working process of biological neural networks. The McCulloch–Pitts (M–P) neuron model, proposed by McCulloch and Pitts [17], is the most commonly used computational model of a neuron. Hopfield first used NNs to solve NP-hard problems [18], spurring rapid development of the field. LeCun proposed LeNet-5, the standard CNN, which greatly advanced NNs [19]. Since then, deep learning has sprung up in various fields. Kulkarni applied deep learning to image compression and reconstruction for solving CS problems [16], Zhang applied feedforward NNs with all fully connected layers to spectral reconstruction [12], and Ding proposed a lightweight CNN for computational spectroscopy [13].
Since the proposed neural network is characterized by training on a small training dataset, we refer to it as STD-Net. Its architecture is as follows: the first layer, serving as the encoding layer, is a matrix of known transmittance curves of 15 polymethyl methacrylate (PMMA) filters; the second to fifth layers are the reconstruction layers (referred to as decoding layers), each followed by a Leaky ReLU activation function. The overall reconstruction network structure is summarized as FC (encoding without bias) − (FC − LEAKY_RELU) × 4, as described in Figure 1.
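For concreteness, the following is a minimal PyTorch sketch of this structure. Only the 15-filter encoding layer and the 151-point spectral sampling (400–700 nm at 2 nm, per Section 3.1) come from the paper; the hidden-layer widths and all identifiers are illustrative assumptions.

```python
import torch
import torch.nn as nn

N_BANDS = 151    # 400-700 nm sampled at 2 nm (Section 3.1)
N_FILTERS = 15   # PMMA filters forming the encoding layer

class STDNet(nn.Module):
    """FC (encoding, no bias) - (FC - LeakyReLU) x 4, per Figure 1.
    Hidden widths are assumptions, not taken from the paper."""
    def __init__(self, hidden=256):
        super().__init__()
        # Encoding layer: fixed matrix of measured filter transmittances.
        self.encoder = nn.Linear(N_BANDS, N_FILTERS, bias=False)
        self.encoder.weight.requires_grad = False  # filters are known, not trained
        layers, dims = [], [N_FILTERS, hidden, hidden, hidden, N_BANDS]
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.LeakyReLU()]
        self.decoder = nn.Sequential(*layers)  # four FC + LeakyReLU layers

    def forward(self, spectrum):               # spectrum: (batch, N_BANDS)
        return self.decoder(self.encoder(spectrum))
```

In practice, `self.encoder.weight` would be loaded with the measured transmittance matrix before training.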

2.1. CNN and All FC Network

The CNN and the all-FC network are first compared, through analysis verified by the training process and reconstruction performance, before we settle on all FC layers in STD-Net. The CNN is well suited to image segmentation because it uses two strategies, local connectivity and weight sharing, to reduce model complexity. The CNN also extracts features that are, to a degree, invariant to changes in the input data such as translation, rotation, and scaling. In more depth, local connectivity means that each node in a convolutional layer is connected to only part of the preceding layer and learns only local features of the input data. This works for images because the correlation between pixels depends on their distance, being strong between nearby pixels. In computational spectroscopy, however, the data entering the decoding layer all represent features of the measured spectrum in each band and are mutually correlated, so learning only partial features cannot fully exploit the encoded data. In addition, weight sharing refers to convolving different regions of the input matrix with the same kernel to detect the same feature. Together, these two strategies make localized features of the input matrix independent of where the data composing them sit in the matrix: when the data are moved, the convolutional layer still finds the same feature, only at a different position.
However, this output invariance of the CNN causes a loss of reconstruction accuracy in the computed spectrum. In the encoding layer, the original spectral curve is compressed and sampled by the 15 filters into 15 values whose order matches the order of the filters. The same local data feature may represent different spectral information depending on its position in the matrix, yet the CNN treats such features as identical because of output invariance, which reduces the reconstruction accuracy. We therefore abandoned scanning the input matrix with a convolution kernel and set the kernel size equal to the input matrix, which computes exactly as a fully connected layer; in other words, we replaced the convolutional layers of the reconstruction algorithm with fully connected layers.
On the other hand, some studies suggest that convolutional layers reduce the number of parameters and thereby improve computational efficiency [13,20]. However, compared with a fully connected layer, the kernel-scanning scheme performs many more operations through repeated convolutions over the input matrix, which reduces computational speed and demands higher-performance equipment.

2.2. Small Training Dataset

A presumption proposed in this study is that common large spectral training datasets such as CAVE and ICVL obey a certain distribution law, approximating or matching the distribution law of natural spectra. Therefore, a small training dataset drawn randomly from a large training dataset in proportion should exhibit a distribution law similar to that of the entire dataset, meaning that the small dataset retains the main features of the whole. To analyze and find this distribution law, we introduce the Pearson product-moment correlation coefficient, whose mathematical expression is as follows:
$$\rho_{x,y} = \frac{\sum_{i=1}^{n}\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right)}{\sqrt{\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2}\,\sqrt{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)^2}}$$
where $x$ and $y$ each represent a spectral curve; $x_i$ and $y_i$ denote the intensity at the $i$-th wavelength of each curve; and $\bar{x}$ and $\bar{y}$ represent the average intensities of the curves.
The Pearson product-moment correlation coefficient takes values between −1 and 1; the closer its absolute value is to 1, the stronger the linear correlation between variables $x$ and $y$.
The whole dataset is a hybrid of CAVE and ICVL. The CAVE dataset covers the spectra of five scene types (objects, skin and hair, paint, food, and drink) and has been cited by more than 632 publications. The ICVL dataset covers the spectral information of nature, trees, and buildings and has been cited by more than 415 publications. Their mixture, totaling 1.65 million spectral curves, contains spectral information of the most commonly encountered scenes and is widely recognized and used in the image processing community, satisfying our experimental requirements. Since the spectral resolution of the dataset is only 10 nm, somewhat short of the ideal, we use least squares fitting to increase the spectral resolution to 2 nm and thereby improve the resolution of the spectral reconstruction algorithm. Then, based on the presumption above, we take the absolute average of the correlation coefficients between each spectral curve and all other curves in the dataset and regard it as that curve's distribution weight in the large training dataset. From these weights we derive the spectral distribution, divide the large dataset into several intervals, and draw from them the small datasets for training, testing, and simulation.
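As a concrete illustration, the following is a minimal NumPy sketch of this weighting-and-sampling procedure. The function names and the equal-width binning are our own assumptions; on the full 1.65-million-curve dataset the pairwise correlations would have to be computed in blocks rather than as one in-memory matrix.

```python
import numpy as np

def distribution_weights(spectra):
    """Mean absolute Pearson correlation of each curve against all others.
    spectra: (n_curves, n_bands) array; rows are spectral curves."""
    c = np.abs(np.corrcoef(spectra))     # pairwise Pearson coefficients
    np.fill_diagonal(c, 0.0)             # exclude self-correlation
    return c.sum(axis=1) / (len(spectra) - 1)

def sample_small_dataset(spectra, fraction, n_intervals=1000, seed=0):
    """Divide the curves into fixed-step weight intervals and draw `fraction`
    of each interval, preserving the overall distribution."""
    rng = np.random.default_rng(seed)
    w = distribution_weights(spectra)
    edges = np.linspace(w.min(), w.max(), n_intervals + 1)
    bins = np.clip(np.digitize(w, edges) - 1, 0, n_intervals - 1)
    picked = []
    for b in range(n_intervals):
        idx = np.flatnonzero(bins == b)
        k = round(fraction * len(idx))
        picked.extend(rng.choice(idx, size=k, replace=False))
    return spectra[np.array(picked, dtype=int)]
```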

2.3. Loss Function and Training Methods

To assess the conformity level between the reconstructed spectral curve and the ground truth, we adopt MSE as the objective function:
$$\Theta = \arg\min \left\| S - \hat{S} \right\|^2$$
where $S$ denotes the input actual spectral matrix and $\hat{S}$ represents the output reconstructed spectral matrix.
Because this loss function concerns only the alignment between the input and output spectral curves, it places no emphasis on the non-correlation property of the optical filters. Fortunately, filter non-correlation is a sufficient but not a necessary condition for computational spectroscopy, which explains why the NN computational spectrometer achieves good performance later in this study even with high-correlation optical filters.
A high learning rate accelerates learning in the early stage of optimization, helping the model approach a local or global optimum of the loss function, but it can cause excessive fluctuations later and prevent convergence to the optimal value. We therefore apply an adaptive learning rate decay mechanism throughout training, which takes the average reconstruction MSE of each training epoch as the index and reduces the learning rate when the MSE does not decrease appreciably. This mechanism effectively reduces model fluctuations in the middle and later stages of training, bringing the model closer to the optimal solution.
In addition, batch gradient descent updates the gradient with all samples in each iteration, yielding stable iterations but very slow progress on a sizable training dataset. In contrast, stochastic gradient descent iterates after computing the loss for a single training sample, which is fast per step but poorly stable. We used mini-batch stochastic gradient descent to balance stability and training speed: the loss is summed over a small batch and the gradient is applied immediately, giving a stable optimization direction while preserving speed.
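The sketch below ties these pieces together for the hypothetical STDNet sketch above: the MSE objective, plateau-based learning rate decay (halved after 2 stagnant epochs, per Section 3.3), and mini-batch SGD with batch size 64. The optimizer choice and initial learning rate are our assumptions; the paper does not specify them.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, train_spectra, epochs=200, batch_size=64, lr=1e-2):
    """model: an STDNet instance; train_spectra: (n, 151) float tensor."""
    loader = DataLoader(TensorDataset(train_spectra),
                        batch_size=batch_size, shuffle=True)  # mini-batch SGD
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    # Halve the learning rate when the epoch-average MSE stalls for 2 epochs.
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=2)
    loss_fn = torch.nn.MSELoss()
    for epoch in range(epochs):
        total = 0.0
        for (batch,) in loader:
            loss = loss_fn(model(batch), batch)  # conformity to ground truth
            opt.zero_grad()
            loss.backward()
            opt.step()
            total += loss.item() * len(batch)
        sched.step(total / len(train_spectra))   # epoch-average MSE as the index
```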

3. Training and Simulation

3.1. Small Training Dataset Establishment

The CAVE and ICVL datasets mentioned in Section 2.2 have a spectral resolution of only 10 nm, which falls short of the experimental accuracy requirements. Therefore, after extracting the spectral curve data in the visible band (400–700 nm), we increase the spectral resolution to 2 nm by average interpolation. Following the presumption in Section 2.2, the whole dataset of 1.65 million spectral curves is regarded as the large training dataset; the absolute values of the correlation coefficients between each spectral curve and all other curves are averaged and taken as that curve's distribution weight, representing its contribution to the richness of the whole dataset. The lower the distribution weight, the higher the contribution. The large training dataset is divided into 1000 intervals of fixed step, ordered by the distribution weight of each spectral curve. The overall distribution is shown in Figure 2.
We extract a certain number of spectral curves, according to the distribution proportion in each interval, to establish the small training dataset. The rest of the large training dataset serves as the testing dataset and the experimental dataset. The training dataset is used for NN model training; the testing dataset is used after each training epoch to observe the current model accuracy and to check for overfitting; and the experimental dataset is used to assess reconstruction accuracy after the NN parameters are fixed. Four groups of experiments were conducted with different proportions of training and testing data, as shown in Table 1.
The training datasets comprise 90%, 10%, 5%, and 3% of the entire dataset, drawn in proportion from each interval, with 1,450,000, 164,540, 82,021, and 49,007 spectral curves, respectively. Most of the remainder of each interval is assigned to the testing dataset, accounting for 9%, 89%, 94%, and 96% of the data, with 149,007, 1,469,467, 1,551,986, and 1,585,000 spectral curves, respectively. The experimental dataset is the same for all four groups: 15,993 spectral curves selected randomly from 1% of each interval of the entire dataset.

3.2. Encoding Layer

Figure 3 shows the transmittance curves of the 15 selected optical filters, and Table 2 gives their correlation coefficient matrix (the larger the correlation coefficient, the deeper its color in Table 2). The transmittance curves are smooth, with significant variation and high richness. Although the maximum correlation coefficient in Table 2 reaches 0.7, the reconstruction accuracy in the experimental results remains high. This is noteworthy because the NN helps break the constraint of demanding high non-correlation between any two filter transmittance curves, which greatly facilitates the design and fabrication of optical filters for computational spectroscopy.

3.3. Training and Simulation

In the training process, we take the average reconstruction MSE of each training epoch as the index and multiply the learning rate by 0.5 when the MSE does not decrease for 2 consecutive epochs; the model then reaches the optimal solution quickly. In addition, mini-batch stochastic gradient descent is applied, randomly selecting 64 samples without replacement from the training data for each batch, to balance model stability and training speed.
The training of the decoding network lasted 200 epochs. The MSE of training and testing is shown in Figure 4. By the 19th epoch, the training and testing MSE is already below 1 × 10−5. By mid-training, the downward trend of the testing loss has flattened to nearly 0, and subsequent training contributes little to reducing it, so the test error at that point can be taken as equal to the training error at the end of training. At this point, the testing MSE reaches 3 × 10−6, which the comparison in Table 3 shows to be a sufficiently high reconstruction accuracy.
Simulated spectra are reconstructed with NNs trained on each of the four datasets to investigate the effect of training dataset size on spectral reconstruction accuracy. To simulate errors in measuring the filter transmittance profiles and spectra, we add random Gaussian noise with mean 0 and standard deviation (σ) of 10−3 and 10−2 to the encoding network, namely the filter transmittance matrix. We then compare these with STD-Net without added Gaussian noise, deriving the average MSE, full width at half maximum (FWHM), peak amplitude error (PAE), peak wavelength position deviation (PWPD), and reconstruction speed of the reconstructed spectra versus the spectra in the simulated dataset, and record the data in Table 3 (a sketch of this noise-injection evaluation follows the list below). Some important conclusions are stated as follows.
(1) The larger the dataset, the better the performance in metrics such as MSE, FWHM, PAE, and PWPD, and the better the resistance against noise interference.
(2) Small datasets also show good accuracy, with shorter training processes, while still satisfying a given accuracy requirement.
(3) The reconstruction speed is approximately identical when the network architectures are the same and shows no correlation with the size of the training dataset.
This suggests that in new fields where building large datasets is difficult or training costs are extremely high, the rational use of small datasets matters greatly for cost saving.
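As promised above, here is a minimal sketch of the noise-injection evaluation, reusing the hypothetical STDNet sketch from Section 2; the function name and restore logic are our own.

```python
import torch

def noisy_eval(model, spectra, sigma):
    """Perturb the fixed encoding matrix with zero-mean Gaussian noise of
    standard deviation `sigma` to mimic transmittance-measurement error,
    then report the average reconstruction MSE."""
    clean = model.encoder.weight.data.clone()
    model.encoder.weight.data += sigma * torch.randn_like(clean)
    with torch.no_grad():
        mse = torch.mean((model(spectra) - spectra) ** 2).item()
    model.encoder.weight.data = clean   # restore the measured transmittances
    return mse

# e.g.: for sigma in (0.0, 1e-3, 1e-2): print(sigma, noisy_eval(model, test_set, sigma))
```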

4. Experiment and Discussion

4.1. Experiment Setup and Results

The prototype of the experimental computational spectrometer system is shown in Figure 5. It comprises the light source, the camera module, the lens, the filters, and the color card. In the experimental system, a standard color card serves as the sample; the camera is a MER-051-120U3M/C (DAHENG Image, Beijing, China) with a resolution of 808 (H) × 608 (V) and a pixel size of 4.8 μm × 4.8 μm, and the lens is a C02812F16-3MP (ANNAI Technology, Shenzhen, China) with an aperture of 1.6 and a focal length of 2.8 mm. The PMMA filters with known transmittance are placed in front of the detector lens to acquire gray-scale images, and the filters are switched manually between exposures. The measurement data are then transferred to the computer for reconstruction. Performance metrics include average MSE, FWHM, peak amplitude error, peak wavelength position deviation, and reconstruction speed.
The data in Table 4 show the accuracy and speed of spectral reconstruction for each group in this experiment:
(1) The accuracy metrics in Table 4 are about 10% worse than the simulation results in Section 3.3, but the reconstructed spectral curves essentially overlap with the ground truth, as seen in Figure 6, so the loss of accuracy is within acceptable limits.
(2) The training dataset of group-large is 9 times larger than that of group-α, yet their MSEs are of the same order of magnitude and within an acceptable range. This supports the small-training-dataset presumption, shows that a small dataset can learn the main features of the entire training dataset, provides a reliable dataset minimization method, and reduces the training cost of neural networks.
(3) The reconstruction speed is approximately identical when the network architectures are the same and shows no correlation with the size of the training dataset.

4.2. Comparison with Other Algorithms

We chose the classical GPSR and orthogonal matching pursuit (OMP) algorithms for comparison, measuring with both a Gaussian random matrix and the known encoding network; these reconstructions run on an Intel i5-12490F platform. For NN comparison, we chose the parameter constrained spectral encoder and decoder (PCSED) [12] and WER-Net, whose model parameters were trained using CAVE and ICVL; they run on an NVIDIA RTX3080 platform, as does STD-Net. The spectral profiles for the reconstruction experiments were spectra of natural scenes captured by an ISPECFIELD-HH spectrometer. The reconstruction results are provided in Table 5.
For the GPSR and OMP algorithms, the average MSE reaches the order of 1 × 10−2 and 1 × 10−3, respectively, based on ideal Gaussian random matrices. In contrast, the reconstruction accuracy of STD-Net (group-β) reaches the order of 1 × 10−6, which is 1863 times higher than that of GPSR and 723 times higher than that of OMP. In terms of computing time, the single-spectrum reconstruction time of STD-Net (group-β) is 1.79% of that of GPSR and 13.3% of that of OMP. This proves that STD-Net substantially outperforms the conventional GPSR and OMP algorithms in both reconstruction accuracy and reconstruction time. Furthermore, the compression matrix used in GPSR and OMP is a random Gaussian matrix, and with current industrial technology it is not possible to manufacture filters with completely randomly distributed transmittance curves; the transmittance curves used by STD-Net, however, are feasible for industrial production.
The reconstruction accuracy of PCSED and WER-Net is acceptable, with MSEs on the order of 1 × 10−4 and 1 × 10−5, respectively; the reconstruction accuracy of STD-Net is better still, reaching the order of 10−6. The filter transmittance curves obtained through WER-Net training show significant abrupt changes and lack smoothness, making them problematic for industrial production. In Section 2.1, we concluded that the characteristics of the CNN harm reconstruction accuracy because of its incompatibility with the spectral compression reconstruction task. Compared with FC networks, a CNN also occupies substantial video memory and demands high hardware performance owing to its large number of arithmetic operations. Taking WER-Net as an example, for a single spectral curve input, the computation of its three convolutional layers reaches 1.9 Mflops; after replacing them with a fully connected network, the computation of the whole decoding network is only 0.216 Mflops, a reduction of 88.8%. In terms of computing time, the single-spectrum reconstruction time of STD-Net (group-β) is 42.4% of that of WER-Net and 10.7% of that of PCSED, while its reconstruction MSE improves by a factor of 19.17 over WER-Net and 110.67 over PCSED. The experimental data demonstrate that STD-Net substantially improves both the speed and the accuracy of spectral reconstruction.
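To make the operation-count comparison above concrete, here is a small sketch of how such conv-versus-FC flop tallies are computed. The layer shapes below are placeholders of our own choosing; the actual WER-Net and STD-Net layer sizes are not reproduced here.

```python
def conv1d_flops(in_ch, out_ch, kernel, out_len):
    """Operations of one 1-D convolutional layer (2 ops per multiply-add)."""
    return 2 * out_ch * out_len * in_ch * kernel

def fc_flops(in_dim, out_dim):
    """Operations of one fully connected layer (2 ops per multiply-add)."""
    return 2 * in_dim * out_dim

# Placeholder shapes for illustration only.
conv_total = sum(conv1d_flops(ci, co, k, n) for ci, co, k, n in
                 [(1, 16, 9, 151), (16, 16, 9, 151), (16, 1, 9, 151)])
fc_total = sum(fc_flops(i, o) for i, o in [(15, 256), (256, 256), (256, 151)])
print(f"conv = {conv_total / 1e6:.2f} Mflops, fc = {fc_total / 1e6:.3f} Mflops")
```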
Regarding the proposed STD-Net, even group-γ, the worst performer, achieves an MSE of 9 × 10−6 with only 3% of the entire dataset, which is markedly superior in terms of network training time and data demand and demonstrates the advantages of a small training dataset.

5. Conclusions

This study proposes an NN computational spectrometer with highly correlated optical filters. It consists of high-correlation optical filters for encoding and a neural network called STD-Net, trained on a small training dataset, for decoding. First, we propose the presumption that spectra follow a specific distribution law, under which a small training dataset composed of data randomly extracted from the entire dataset can cover its main features. Based on this, targeting the CAVE and ICVL spectral datasets, the correlation coefficients between each spectral curve and all other curves are averaged in absolute value to give its distribution weight, and all data are divided into 1000 intervals. Then, 90%, 10%, 5%, and 3% of the data are randomly extracted for training, while portions of the dataset serve as the test dataset to guard against overfitting during training. In addition, a tailored loss function and an adaptive learning rate mechanism are introduced to improve training efficiency and reconstruction accuracy. Since this part focuses only on the MSE between the original and reconstructed curves, it places no compulsory requirement on the non-correlation of the optical filters; the non-correlation property is a sufficient but not necessary condition. The highly correlated PMMA filters are then applied as the encoding layer, and for reconstruction we propose a four-layer FC network. Multi-scale training datasets are established to train the constructed network, and four groups of comparison experiments with different data extraction ratios indicate that STD-Net achieves high reconstruction accuracy and robustness even when the training data are immensely limited. Finally, an experimental system was built, and the results indicate that the proposed NN computational spectrometer has good accuracy and efficiency. STD-Net frees computational spectrometry from the search for highly non-correlated filters, which reduces the difficulty of filter design and fabrication, and may provide a new method for the development of computational spectrometers.

Author Contributions

Methodology, L.Y.; Writing—original draft, H.L.; Writing—review & editing, Y.Z.; Visualization, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (No. 12203032), the National Key Research and Development Program of China (2023YFC2808902), and the Natural Science Foundation of Shandong Province (ZR2022QA030).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cheng, R.; Zou, C.L.; Guo, X.; Wang, S.; Tang, H. Broadband on-chip single-photon spectrometer. Nat. Commun. 2019, 10, 4104. [Google Scholar] [CrossRef] [PubMed]
  2. Tao, Y.; Cao, X.; Ho, H.P.; Zhu, Y.Y.; Huang, W. Miniature spectrometer based on diffraction in a dispersive hole array. Opt. Lett. 2015, 40, 3217–3220. [Google Scholar]
  3. Wu, X.; Gao, D.; Chen, Q.; Chen, J. Multispectral imaging via nanostructured random broadband filtering. Opt. Express 2020, 28, 4859. [Google Scholar] [CrossRef] [PubMed]
  4. Xiong, J.; Cai, X.; Cui, K.; Huang, Y.; Yang, J.; Zhu, H.; Li, W.; Hong, B.; Rao, S.; Zheng, Z.; et al. Dynamic brain spectrum acquired by a real-time ultraspectral imaging chip with reconfigurable metasurfaces. Optica 2022, 9, 461–468. [Google Scholar] [CrossRef]
  5. August, Y.; Stern, A. Compressive sensing spectrometry based on liquid crystal devices. Opt. Lett. 2013, 38, 4996–4999. [Google Scholar] [CrossRef] [PubMed]
  6. Wang, Z.; Yu, Z. Spectral analysis based on compressive sensing in nanophotonic structures. Opt. Express 2014, 22, 25608. [Google Scholar] [CrossRef] [PubMed]
  7. Gao, L.; Qu, Y.; Wang, L.; Yu, Z. Computational spectrometers enabled by nanophotonics and deep learning. Nanophotonics 2022, 11, 2507–2529. [Google Scholar] [CrossRef]
  8. Figueiredo, M.; Nowak, R.D.; Wright, S.J. Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems. IEEE J. Sel. Top. Signal Process. 2008, 1, 586–597. [Google Scholar] [CrossRef]
  9. Needell, D.; Vershynin, R. Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit. Found. Comput. Math. 2009, 9, 317–334. [Google Scholar] [CrossRef]
  10. Dai, W.; Milenkovic, O. Subspace Pursuit for Compressive Sensing Signal Reconstruction. IEEE Trans. Inf. Theory 2009, 55, 2230–2249. [Google Scholar] [CrossRef]
  11. Zhang, J.; Zhu, X.; Bao, J. Solver-informed neural networks for spectrum reconstruction of colloidal quantum dot spectrometers. Opt. Express 2020, 28, 33656. [Google Scholar] [CrossRef] [PubMed]
  12. Zhang, W.; Song, H.; He, X.; Huang, L.; Zhang, X.; Zheng, J.; Shen, W.; Hao, X.; Liu, X. Deeply learned broadband encoding stochastic hyperspectral imaging. Light Sci. Appl. 2021, 10, 108. [Google Scholar] [CrossRef] [PubMed]
  13. Ding, X.; Yang, L.; Yi, M.; Zhang, Z.; Liu, Z.; Liu, H. WER-Net: A New Lightweight Wide-Spectrum Encoding and Reconstruction Neural Network Applied to Computational Spectrum. Sensors 2022, 22, 6089. [Google Scholar] [CrossRef] [PubMed]
  14. Yasuma, F.; Mitsunaga, T.; Iso, D.; Nayar, S.K. Generalized Assorted Pixel Camera: Postcapture Control of Resolution, Dynamic Range, and Spectrum. IEEE Trans. Image Process. 2010, 19, 2241–2253. [Google Scholar] [CrossRef] [PubMed]
  15. Arad, B.; Ben-Shahar, O. Sparse Recovery of Hyperspectral Signal from Natural RGB Images. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Part VII. Springer International Publishing: Cham, Switzerland, 2016; pp. 19–34. [Google Scholar]
  16. Kulkarni, K.; Lohit, S.; Turaga, P.; Kerviche, R.; Ashok, A. Non-Iterative Reconstruction of Images from Compressively Sensed Measurements. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  17. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  18. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef] [PubMed]
  19. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  20. Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012, Lake Tahoe, NV, USA, 3–6 December 2012; Volume 25, pp. 1106–1114. [Google Scholar]
Figure 1. The architecture of STD-Net.
Figure 2. Spectral distribution of the entire dataset of CAVE and ICVL.
Figure 3. The fifteen selected optical filter transmittance curves.
Figure 4. Training loss and testing loss.
Figure 5. Experimental system.
Figure 6. Results of spectral curve reconstruction.
Table 1. Four groups of the dataset. (The simulation dataset is shared across all four groups.)

              Training                Testing                 Simulation
              Proportion  Amount      Proportion  Amount      Proportion  Amount
Group-large   90%         1,450,000   9%          149,007     1%          15,993
Group-α       10%         164,540     89%         1,469,467   1%          15,993
Group-β       5%          82,021      94%         1,551,986   1%          15,993
Group-γ       3%          49,007      96%         1,585,000   1%          15,993
Table 2. Correlation coefficients of filter transmittance curves after group-α training.

      1     2     3     4     5     6     7     8     9     10    11    12    13    14    15
1     1.00
2     0.34  1.00
3     0.57  0.42  1.00
4     0.51  0.00  0.84  1.00
5     0.49  0.25  0.88  0.95  1.00
6     0.49  0.12  0.76  0.90  0.86  1.00
7     0.18  0.89  0.52  0.15  0.38  0.03  1.00
8     0.53  0.84  0.78  0.51  0.69  0.34  0.82  1.00
9     0.06  0.41  0.67  0.41  0.55  0.42  0.69  0.56  1.00
10    0.44  0.30  0.22  0.24  0.16  0.24  0.05  0.14  0.55  1.00
11    0.49  0.93  0.71  0.36  0.58  0.23  0.88  0.97  0.54  0.19  1.00
12    0.61  0.32  0.16  0.31  0.33  0.34  0.03  0.12  0.46  0.70  0.16  1.00
13    0.53  0.49  0.34  0.58  0.38  0.71  0.49  0.13  0.09  0.02  0.25  0.10  1.00
14    0.54  0.55  0.12  0.47  0.29  0.60  0.63  0.24  0.44  0.25  0.33  0.15  0.88  1.00
15    0.52  0.58  0.17  0.51  0.31  0.64  0.63  0.24  0.37  0.18  0.34  0.10  0.92  0.99  1.00
Table 3. Simulation results of 4 groups.

Evaluation Index                     Group-large                              Group-α
                                     σ = 0        σ = 0.001    σ = 0.01       σ = 0        σ = 0.001    σ = 0.01
MSE                                  2.96 × 10−6  3.29 × 10−6  5.49 × 10−6    5.18 × 10−6  6.63 × 10−6  1.60 × 10−5
FWHM                                 0.4 nm       0.6 nm       1.2 nm         0.6 nm       0.8 nm       1.4 nm
Peak amplitude error                 1.75 × 10−3  3.43 × 10−3  1.88 × 10−2    2.93 × 10−3  2.91 × 10−3  4.81 × 10−3
Peak wavelength position deviation   1.85 nm      3 nm         7.5 nm         2.14 nm      3.29 nm      5.14 nm
Reconstruction speed                 31.01 μs     30.10 μs     30.39 μs       30.98 μs     30.62 μs     30.57 μs

Evaluation Index                     Group-β                                  Group-γ
                                     σ = 0        σ = 0.001    σ = 0.01       σ = 0        σ = 0.001    σ = 0.01
MSE                                  7.63 × 10−6  9.95 × 10−6  3.31 × 10−5    9.93 × 10−6  1.01 × 10−5  7.53 × 10−5
FWHM                                 0.8 nm       1.1 nm       1.5 nm         1.2 nm       1.3 nm       2.0 nm
Peak amplitude error                 3.03 × 10−3  2.84 × 10−3  7.02 × 10−3    2.10 × 10−3  2.31 × 10−3  6.83 × 10−3
Peak wavelength position deviation   2.57 nm      3.86 nm      7.43 nm        5.07 nm      4.93 nm      8.43 nm
Reconstruction speed                 30.38 μs     30.21 μs     30.84 μs       30.28 μs     30.45 μs     30.24 μs
Table 4. Experiment results of four groups.

Evaluation Index                     Group-large  Group-α       Group-β      Group-γ
MSE                                  3.25 × 10−6  5.975 × 10−6  8.39 × 10−6  1.83 × 10−5
FWHM                                 0.5 nm       0.7 nm        0.9 nm       1.4 nm
Peak amplitude error                 2.58 × 10−3  3.62 × 10−3   4.68 × 10−3  6.12 × 10−3
Peak wavelength position deviation   1.97 nm      3.35 nm       3.61 nm      5.93 nm
Reconstruction speed                 32.45 μs     30.73 μs      30.94 μs     30.48 μs
Table 5. Comparison of reconstruction algorithms.

Algorithm                               MSE           Reconstruction speed
GPSR (with Gaussian matrix)             1.95 × 10−2   1.67 ms
GPSR (with filter matrix of STD-Net)    9.116 × 10−3  3.42 ms
OMP (with Gaussian matrix)              3.54 × 10−3   224.37 μs
OMP (with filter matrix of STD-Net)     6.46 × 10−2   243.97 μs
PCSED                                   5.413 × 10−4  281.9 μs
WER-Net                                 9.374 × 10−5  71.4 μs
STD-Net (group-large)                   3.85 × 10−6   30.33 μs
STD-Net (group-α)                       4.369 × 10−6  30.22 μs
STD-Net (group-β)                       4.891 × 10−6  29.87 μs
STD-Net (group-γ)                       5.029 × 10−6  30.29 μs
