Article

Intelligent Identification of MoS2 Nanostructures with Hyperspectral Imaging by 3D-CNN

1 Department of Mechanical Engineering and Advanced Institute of Manufacturing with High Tech Innovations, National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi 62102, Taiwan
2 Department of Materials Science and Engineering, National Tsing Hua University, 101, Sec. 2, Kuang-Fu Road, Hsinchu 30013, Taiwan
3 Department of Applied Physics, National University of Kaohsiung, 700 Kaohsiung University Rd., Nanzih District, Kaohsiung 81148, Taiwan
4 Nikolaev Institute of Inorganic Chemistry, Siberian Branch of Russian Academy of Sciences, 630090 Novosibirsk, Russia
5 Department of Natural Sciences, Novosibirsk State University, 1, Pirogova str., 630090 Novosibirsk, Russia
* Author to whom correspondence should be addressed.
Nanomaterials 2020, 10(6), 1161; https://doi.org/10.3390/nano10061161
Submission received: 23 April 2020 / Revised: 11 June 2020 / Accepted: 11 June 2020 / Published: 13 June 2020

Abstract

Increasing attention has been paid to two-dimensional (2D) materials because of their superior performance and wafer-level synthesis methods. However, the large-area characterization, precision, intelligent automation, and high-efficiency detection of nanostructures for 2D materials have not yet reached an industrial level. Therefore, we use big data analysis and deep learning methods to develop a visible-light hyperspectral imaging technology for the automatic identification of few-layer MoS2. For the classification algorithm, we propose deep neural network, one-dimensional (1D) convolutional neural network, and three-dimensional (3D) convolutional neural network (3D-CNN) models to explore the correlation between model recognition accuracy and the optical characteristics of few-layer MoS2. The experimental results show that the 3D-CNN has better generalization capability than the other classification models and accepts feature inputs from both the spatial and spectral domains. In contrast with previous studies, the present approach does not require a specific substrate, and images of different dynamic ranges of a sample section can be acquired via the automatic shutter aperture. Therefore, the imaging quality does not need to be adjusted to the same color-contrast conditions, and no conventional image processing is required to achieve the maximum field-of-view recognition range of ~1.92 mm2. The image resolution can reach ~100 nm, and the detection time is approximately 3 min per image.

1. Introduction

Molybdenum disulfide (MoS2) is the most widely studied material among the two-dimensional (2D) transition metal dichalcogenides for the next generation of electronic and optoelectronic components [1,2,3,4,5,6]. Nanometer-scale electronic and optoelectronic components, such as field-effect transistors [7,8,9,10], prospective memory devices [11], light-emitting diodes [12,13], and sensors [14,15,16,17,18,19], have been produced owing to the excellent spin-valley coupling, flexibility, and optoelectronic properties of MoS2. However, developing high-performance, large-area characterization techniques remains a major obstacle to the basic and commercial applications of 2D nanostructures.
Among established optical-film measurement techniques, atomic force microscopy (AFM) has several disadvantages, such as a relatively limited scan range and long acquisition times; thus, it is unsuitable for large-area, quick measurements [20,21]. Raman spectroscopy is usually capable of only local characterization within the laser spot, which limits the measurement rate; hence, it is also unsuitable for large-area analysis. Transmission electron microscopy (TEM) and scanning tunneling microscopy can characterize samples at high spatial resolution, up to the atomic scale [22,23]. However, both techniques suffer from low throughput and complex sample preparation. Compared with the abovementioned techniques, machine learning is a mature tool for image and visual recognition, yet the integration of machine learning (SVM, KNN, BGMM-DP, and K-means) with optical microscopes has only begun in recent years. Thus, artificial intelligence has great potential in the recognition of microscopic images, especially of nanostructures [24,25]. In 2019, Hong et al. demonstrated a machine learning algorithm to identify local atomic structures from reactive molecular dynamics [26]. In 2020, Masubuchi et al. showed the real-time detection of 2D materials with a deep-learning-based image segmentation algorithm [27]. In the same year, Yang et al. presented the identification of 2D material flakes of different layer numbers from optical microscope images via a machine-learning-based model [28]. However, three shortcomings remain: (1) optical microscope image quality often depends on the user's experience and requires image processing; (2) color spaces with only a few feature dimensions risk underfitting; and (3) angular illumination asymmetry (ANILAS) across the field of view (FOV) is an important but largely ignored factor that results in a certain loss of pixel accuracy [29,30,31].
Here, we used big data analysis and deep learning methods, combined with the hyperspectral imaging common in the remote sensing field, to solve the difficulties encountered in the previous literature (e.g., uneven light intensity distribution, image dynamic range correction, and image noise filtering). We also made a first attempt to analyze feature values in dimensions not used previously (e.g., morphological features) to improve the prediction accuracy [29,30,31]. The result is an intelligent detection scheme that identifies the layer number of 2D materials.

2. Materials and Methods

2.1. Growth Mechanism and Surface Morphology of MoS2

The majority of 2D material layer-identification studies focus on films synthesized by mechanical exfoliation [32,33,34]. Although high-quality molybdenum disulfide can be obtained in this way, the method cannot synthesize large-area, few-layer molybdenum disulfide films. The chemical vapor deposition (CVD) method [35,36,37,38] can produce high-quality, large-area molybdenum disulfide on a suitable substrate surface under a stable gas flux and temperature environment, and it is suitable for current device fabrication [38,39,40]. In 2014, Wang et al. explored the sensitivity of MoS2 region growth within a relatively uniform temperature range [41]. In 2017, Wang et al. found that temperature is one of the main factors controlling MoS2 morphology [42], and Zheng et al. held the precursor at a constant temperature interval for a long time to observe the change of MoS2 shape [43]. In 2018, Zhou et al. discussed the nucleation and growth mechanism of MoS2 [44]. On the basis of their results, two film-growth dynamic paths were established: one is a central nanoparticle with a multilayered MoS2 structure, and the other is a single-layer-dominated triangular or double-layered structure. Reference [45] explained the effect of adjusting the growth temperature and carrier gas flux. Understanding the growth pattern mechanism helps us define the initial requirements and judgments for database collection and data tagging.

2.2. CVD Sample Preparation

The sample was a MoS2 film grown on a sapphire substrate by CVD. The precursors were sulfur (99.98%, Echo Chemical Co., Miaoli County, Taiwan) and MoO3 (99.95%, Echo Chemical Co., Miaoli County, Taiwan), each placed at the appropriate position inside the quartz tube. The substrate was placed over the MoO3 crucible at the center of the furnace tube, in an upstream (windward) position. During growth, parameters such as ventilation, heating rate, temperature holding time, and maximum temperature were set, and the MoS2 sample was obtained at the end of the growth process. Figure S1 shows the schematic of the experimental setup and the positions of the precursors. Some new pending data indicate the growth of a periodic MoS2 structure, which is crucial for large-scale controllable molybdenum disulfide synthesis and will greatly benefit the future production of electronic components. In contrast with other studies [45,46,47,48,49], this sample was fabricated via laser processing to make periodic holes (hole diameter and depth of approximately 10 μm and 300 nm, respectively), followed by CVD growth of MoS2.

2.3. Optical Microscope Image Acquisition

MoS2 on the sapphire substrate was observed through an optical microscope (MM40, Nikon, Lin Trading Co., Taipei, Taiwan) at 10×, 40×, and 100× magnification rates. For each experimental sample, we recorded the shooting-area code (Figure S1c) to systematically store the images captured via optical microscopy (OM) in the image database. Each image had a size of 1600 pixels × 1200 pixels with a depth of 32 bits. Portable network graphics (PNG) files, including gain images at different dynamic-range intervals, were acquired, but no color calibration or denoising was performed, because we aimed to replace the cumbersome image processing with deep learning. In this experiment, approximately 90 pieces of 2 × 2 cm2 growth samples were obtained, and ~2000 images were collected as the data source.

2.4. System Equipment and Procedures

This study aims to construct a system for the automated analysis of MoS2 films with different layer numbers. The system architecture is divided into four parts, as shown in Figure S2, and the program flow is as follows. (1) Database: the prepared molybdenum disulfide sample is measured with a Raman microscope to determine the layer distribution at each position, and an image of the same position is taken with an optical microscope and CCD (image capture system). (2) Offline training: the obtained CCD image is converted with hyperspectral imaging technology (VIS-HSI) into the spectral characteristics of each molybdenum disulfide layer, and data preprocessing is performed. (3) Model design: the data are then used to train the deep learning models, thereby completing the establishment of our classification algorithm. (4) Online service: when a new molybdenum disulfide sample is to be analyzed, it is placed under the optical microscope, and its surface image is captured with the CCD. The spectral characteristic value of each pixel is then obtained through the hyperspectral imaging technique and fed to the trained model for prediction. The number of layers of the molybdenum disulfide film at each pixel position is determined, and regions with different layer numbers are visualized in different colors.

2.5. Tag Analysis and Feature Extraction Workflow

Figure 1 shows the processing steps before the data enter the model. After the captured image is converted into a spectral image via hyperspectral imaging technology, layer-number labels are assigned by manually circling mask regions (Mask), and the data are partitioned; the labels are verified against the Raman spectra (Figure S3) [50,51,52,53,54]. The categories are substrate, monolayer, bilayer, trilayer, bulk, and residues, which serve as our ground truths. Two types of data are available for model training, namely, "feature" and "label". The feature has two types: a hyperspectral vector used as input for the deep neural network (DNN) and one-dimensional (1D) convolutional neural network (1D-CNN), and a spatial-domain hyperspectral cube used as input for the three-dimensional (3D) convolutional neural network (3D-CNN). After data preprocessing, we divide the dataset into three parts: we randomly select 80% of the labeled samples as training data and 20% as the validation set, and the remaining unlabeled parts form the test set. Because the training and validation sets do not fully cover the range of light intensity distributions, the test set is needed to measure whether the model generalizes to these conditions.
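To make this labeling and partitioning step concrete, the following is a minimal sketch (not the authors' released code) of how Raman-verified mask labels could be turned into per-pixel training, validation, and test splits; the array shapes and function names are assumptions based on the description above.

```python
# Minimal sketch, assuming hsi is an (H, W, 401) hyperspectral image and mask is an
# (H, W) integer map in which 0 marks unlabeled pixels and 1..6 mark the six
# Raman-verified ground-truth classes.
import numpy as np
from sklearn.model_selection import train_test_split

NUM_CLASSES = 6  # substrate, monolayer, bilayer, trilayer, bulk, residues

def split_labeled_pixels(hsi, mask, val_ratio=0.2, seed=0):
    rows, cols = np.nonzero(mask)              # coordinates of labeled pixels
    features = hsi[rows, cols, :]              # (n_labeled, 401) spectral vectors
    labels = mask[rows, cols] - 1              # shift labels to 0..5
    x_train, x_val, y_train, y_val = train_test_split(
        features, labels, test_size=val_ratio, stratify=labels, random_state=seed)
    test_coords = np.argwhere(mask == 0)       # unlabeled pixels form the test set
    return (x_train, y_train), (x_val, y_val), test_coords
```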

2.6. Visible Hyperspectral Imaging Algorithm

The VIS-HSI used in this study combines a CCD (Sentech STC-620PWT) with a visible hyperspectral algorithm (VIS-HSA). The calculated wavelength range is 380–780 nm, and the spectral resolution is 1 nm. The core concept of this technology is to convert the image captured by the CCD on the OM into spectral data, such that each pixel of the captured image carries spectrum information [55]. Figure S4 shows the flow of the technology; the custom algorithm was implemented in MATLAB.
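The VIS-HSA itself is a custom MATLAB algorithm whose internals are not given here. Purely to illustrate the idea that every CCD pixel is assigned a 380–780 nm spectrum, the sketch below assumes a regression-style conversion with a precomputed calibration matrix M (a hypothetical 401 × 3 array fitted beforehand against a reference spectrometer); this is an assumption, not necessarily how the authors' algorithm works.

```python
# Illustrative sketch only: maps each RGB pixel of an OM image to a 401-band spectrum
# via an assumed precomputed calibration matrix M of shape (401, 3).
import numpy as np

WAVELENGTHS = np.arange(380, 781)          # 401 bands, 1 nm spectral resolution

def rgb_to_hyperspectral(rgb_image, M):
    """rgb_image: (H, W, 3) floats in [0, 1]; returns an (H, W, 401) spectral image."""
    h, w, _ = rgb_image.shape
    flat = rgb_image.reshape(-1, 3)
    spectra = flat @ M.T                   # (H*W, 401) reconstructed spectra
    return spectra.reshape(h, w, WAVELENGTHS.size)
```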

2.7. Data Preprocessing and Partitioning

All of the MoS2 samples were imaged at different substrate locations and under uneven illumination distributions to form the dataset. Two types of features were available: one is the 1 × 401 hyperspectral vector of each labeled central pixel, extracted from the marked data as the DNN and 1D-CNN input, and the other is the 1 × 5 × 5 × 401 hyperspectral cube taking the spatial region centered on that pixel, with the category label of the central pixel extracted from the marked data as the 3D-CNN label. Taking a single hyperspectral image as an example (Figure S5), the marked area was divided into an 80% training set and a 20% validation set, whereas the unmarked area served as the test set. Finally, new pending data were generated to evaluate the generalization capability of the model. The training set was oversampled, and because the hyperspectral cubes contain spatial features, they were augmented via horizontal mirroring and left-right flipping.
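A minimal sketch of the cube extraction and the spatial augmentation described above follows; it assumes the same (H, W, 401) array layout as before and is not the authors' code.

```python
# Extract 5 x 5 x 401 hyperspectral cubes centered on labeled pixels for the 3D-CNN
# and augment them by horizontal mirroring and left-right flipping of the spatial axes.
import numpy as np

def extract_cubes(hsi, coords, half=2):
    """hsi: (H, W, 401); coords: iterable of (row, col); returns (n, 5, 5, 401) cubes."""
    padded = np.pad(hsi, ((half, half), (half, half), (0, 0)), mode="reflect")
    cubes = [padded[r:r + 2 * half + 1, c:c + 2 * half + 1, :] for r, c in coords]
    return np.stack(cubes)

def augment_cubes(cubes, labels):
    mirrored = cubes[:, ::-1, :, :]        # mirror along the row axis
    flipped = cubes[:, :, ::-1, :]         # flip along the column axis
    aug = np.concatenate([cubes, mirrored, flipped], axis=0)
    aug_labels = np.concatenate([labels, labels, labels], axis=0)
    return aug, aug_labels
```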

2.8. Software and Hardware

The models were implemented in the Microsoft Windows 10 operating system using TensorFlow Framework version 1.10.0 and Python version 3.6. Open-source data analysis tools, namely, Jupyter Notebook, SciPy, NumPy, Matplotlib, Pandas, Scikit-learn, Keras, and Spectral, were used to analyze the feature values. The training hardware was a consumer-grade desktop computer with a GeForce GTX 1070 graphics card (NVIDIA, Hong Kong, China) and a Core i5-3470 CPU @ 3.2 GHz (Intel, Taipei, Taiwan).

3. Results

Because the quality of each captured image cannot be guaranteed to be identical, the model must have a certain recognition capability under varying imaging conditions. Therefore, we include hyperspectral image data of different imaging qualities so that the model approaches practical classification performance. We also explore models at different magnification rates to discuss the spatial resolution and mixed-pixel issues [56,57].

3.1. Model Framework for Deep Learning

The classification prediction model uses three architectures, namely, DNN, 1D-CNN, and 3D-CNN, as shown in Figure 2. Among the model parameters, the learning rate is adjusted between 1 × 10−6 and 5 × 10−6 depending on the model and data; the batch size and dropout rate are 24 and 0.25, respectively; and the selected optimizer is RMSprop. Figure 2a illustrates the basic DNN architecture. The input is a hyperspectral vector feature, i.e., single-pixel spectral information, and the model contains only three fully connected layers. More neuron nodes in the outer layers are expected to extract relatively shallow features [58]. The six categories that we classify are the outputs. Figure 2b displays the 1D-CNN architecture. Its input is the same as that of the DNN. The model consists of four convolutional layers, two pooling layers, and two fully connected layers. The CNN convolution kernel has weight-sharing characteristics, and we regard convolving the spectral features as separating them into different frequency components. Given the 1 nm spectral resolution, the correlation between adjacent feature points is high; the pooling layers help to eliminate features that are too similar in the neighborhood and reduce redundant feature dimensions [59]. Figure 2c shows the 3D-CNN architecture. Its input is a hyperspectral cube in the space-spectral domain: a feature cube of d × d × N pixels in a small spatial neighborhood (not the entire image) is extracted along the entire spectral band and convolved with 3D kernels to learn spectral-spatial features. The use of neighboring pixels is based on the observation that pixels in a small spatial neighborhood often share similar characteristics [60]. This observation is supported by Refs. [55,61], which indicate that a small 3 × 3 kernel is the best option for spatial features; thus, the sample spatial size is set to 5 × 5, and only two convolution operations are needed to reduce the spatial dimension to 1 × 1 while extracting the spatial features of each layer. The first 3D convolutional layers, namely, C1 and C2, each contain one 3D kernel of size $K_1^1 \times K_2^1 \times K_3^1$, producing two 3D feature cubes of size $(d - K_1^1 + 1) \times (d - K_2^1 + 1) \times (N - K_3^1 + 1)$. These two feature cubes are used as inputs to the second 3D convolutional layer, namely, C3, which involves four 3D kernels (each of size $K_1^2 \times K_2^2 \times K_3^2$) and produces eight 3D data cubes, each of size $(d - K_1^1 - K_1^2 + 2) \times (d - K_2^1 - K_2^2 + 2) \times (N - K_3^1 - K_3^2 + 2)$ [62].
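As a concrete illustration of the 3D-CNN branch, the following Keras sketch builds a small spectral-spatial network along the lines described above; the filter counts, spectral kernel depth (here 3 × 3 × 7), and dense-layer width are assumptions for illustration, not the exact values used in the paper.

```python
# Hedged sketch of a 3D-CNN for 5 x 5 x 401 hyperspectral cubes with six output classes.
from tensorflow.keras import layers, models, optimizers

NUM_CLASSES = 6  # substrate, monolayer, bilayer, trilayer, bulk, residues

def build_3d_cnn(spatial=5, bands=401):
    model = models.Sequential([
        layers.Input(shape=(spatial, spatial, bands, 1)),   # hyperspectral cube + channel dim
        layers.Conv3D(2, (3, 3, 7), activation="relu"),     # spatial 5 -> 3, spectral shrinks
        layers.Conv3D(4, (3, 3, 7), activation="relu"),     # spatial 3 -> 1
        layers.Flatten(),
        layers.Dropout(0.25),                                # dropout rate from the text
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=optimizers.RMSprop(1e-6),        # RMSprop, learning rate 1e-6
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage sketch:
# model = build_3d_cnn()
# model.fit(train_cubes[..., None], y_train, batch_size=24,
#           validation_data=(val_cubes[..., None], y_val), epochs=200)
```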

3.2. Training Results under Three Deep Learning Models with 10× Magnification

Figure 3 shows the results for the sapphire-substrate sample calculated via the three models, namely, DNN, 1D-CNN, and 3D-CNN, under 10× magnification. Figure 3a–c exhibit the convergence curves of loss and accuracy and the training time for the three algorithms, respectively. The 3D-CNN requires a longer training time and more epochs than the other two models because its input features also contain the spatial domain; thus, it takes longer to begin converging. Figure 3d–f display the confusion matrices of the validation set for the three algorithms, respectively. Table 1 presents the evaluation results for each category. Precision measures how many of the samples that the classifier judges as a positive category are truly positive. Recall measures how many of the truly positive samples are judged as positive by the classifier. The F1-score is the harmonic mean of precision and recall. The macro-average is the arithmetic mean of each indicator over all categories. The micro-average builds a global confusion matrix over all samples in the dataset regardless of category and then calculates the corresponding indicators. By these indicators, the 3D-CNN accuracy is the best. Figures S6 and S7 exhibit the remaining training procedures at 40× and 100× magnification rates. Table 1 and Figures S6 and S7 show that the evaluation results at small magnification are relatively poor, likely because mixed pixels introduce additional mixing or noise factors.
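The indicators in Table 1 can be reproduced from the validation-set predictions with scikit-learn, as in the short sketch below (variable names are placeholders).

```python
# Per-class precision/recall/F1 plus macro, weighted, and micro averages for the
# six-class validation results; y_val and val_pred are integer class labels.
from sklearn.metrics import classification_report, precision_recall_fscore_support

CLASS_NAMES = ["substrate", "monolayer", "bilayer", "trilayer", "bulk", "residues"]

def report(y_val, val_pred):
    # Per-class indicators plus macro and weighted averages
    print(classification_report(y_val, val_pred, target_names=CLASS_NAMES, digits=4))
    # Micro average: pool all pixels into a single global confusion matrix first
    p, r, f1, _ = precision_recall_fscore_support(y_val, val_pred, average="micro")
    print(f"micro average: precision={p:.4f} recall={r:.4f} f1={f1:.4f}")
```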

3.3. Prediction Results at 10× Magnification

Figure 4 presents a randomly selected sample at 10× magnification predicted by the three models. Figure 4a displays the OM image of the training data within the corresponding prediction range, and Figure 4e exhibits the OM image of the new pending data. Figure 4b–d show the prediction results for the training data, and Figure 4f–h present the prediction results for the new pending data under the three models (color classification images), respectively. The results in the region of interest (ROI) of the training data indicate that the DNN and 1D-CNN models cannot accurately predict the damaged region where the MoS2 film has been damaged by external force and the crystal structure is missing, whereas the 3D-CNN clearly identifies the damaged region; Supplementary Figure S8 exhibits the remaining new pending data predictions. In the predictions at 40× and 100× magnification rates (Figures S9–S12), the color classification image (false-color composite) can be easily obtained for each model, but the corresponding FOV detection range is reduced.
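The false-color composites in Figure 4 are simply per-pixel class maps rendered with a fixed palette; a minimal sketch of that rendering step is shown below (the palette is an arbitrary assumption, not the colors used in the figures).

```python
# Map an (H, W) array of predicted class indices (0..5) to an (H, W, 3) RGB image.
import numpy as np

PALETTE = np.array([
    [40, 40, 40],     # substrate
    [0, 128, 255],    # monolayer
    [0, 200, 0],      # bilayer
    [255, 220, 0],    # trilayer
    [255, 128, 0],    # bulk
    [220, 0, 0],      # residues
], dtype=np.uint8)

def class_map_to_rgb(class_map):
    """Fancy indexing replaces each class index with its palette color."""
    return PALETTE[class_map]
```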

3.4. Differences in Models at Three Magnification Rates

In this experiment, the classification algorithm operates on pixel-level data in the image; thus, determining how to obtain a quantitative measure of precision for the OM is crucial. The majority of existing microscopes achieve uniform spatial irradiance through Köhler illumination [63]. However, some shortcomings remain for quantitative measurement and analysis; for example, angular illumination asymmetry (ANILAS) across the FOV [64,65] is an important factor that is largely ignored (Figure S13). This phenomenon leads to a certain loss in pixel accuracy. Thus, we attempt to evaluate the various feature data types through deep learning in the case of an uneven illumination distribution.
The FOV sizes at 10×, 40×, and 100× magnification rates are 1.6 mm × 1.2 mm, 0.4 mm × 0.3 mm, and 0.17 mm × 0.27 mm, respectively, which are the actual detection sizes at prediction time. Figure 5 presents the optimal loss values of the (a) training and (b) validation sets for the three models at the three magnification rates. The optimal training loss is higher than the validation loss partly because data augmentation in the training set increases data diversity and makes the training data harder to fit. Figure 5 shows that the three models have low loss values at 100× magnification. Considering the difference in spatial resolution, the pixel resolutions at 10×, 40×, and 100× magnification rates are 0.5, 0.25, and 0.1 μm, respectively; therefore, the mixed-pixel problem is more serious at small magnification. The 3D-CNN is the best among the three models at all magnification rates.
As the 3D-CNN demonstrates the best generalization capability at different magnification rates, we discuss only the results of this algorithm below. Figure 6a shows an OM image of large-area, periodically grown single-layer MoS2 on a sapphire substrate (also defined as ROI-3). The actual corresponding size is 1 × 1 mm, and the microscope magnification is 10×. A single layer of MoS2 is distributed in a star shape around the holes. Figure 6a–f display the analysis of the color classification images (false-color composites). Figure 6b–d present the OM images of the predicted regions at three magnification rates, namely, 100× (ROI-1), 40× (ROI-2), and 10× (ROI-3), respectively.
We obtain magnified views of the ROI-4 and ROI-5 regions from the color classification image at 10× magnification (Figure 6d), shown in Figure 6e,f, respectively. Comparing them with the same regions at the other magnification rates (Figure 6b,c), we observe that a small magnification is limited by the spatial resolution. Consequently, fine features become blurred, and impurity points become more difficult to identify. Supplementary Figure S14 discusses the class-prediction confidence in images at various magnification rates. This finding differs from previous research arguments [66]. Previous studies attributed poor results to noise points and surrounding blur at large magnification, or to fine impurities caused by deposition, which degrade traditional classification algorithms; such small impurities are simply ignored when the spatial resolution is low. In the present study, however, the learned representation of the deep learning model overcomes this previous bottleneck.

3.5. Instrument Measurement Verification

In the new pending data section, we verify the accuracy of the predictions against Raman spectroscopy mapping, as shown in Figure S15. The Raman mapping analysis shows the in-plane ($E_{2g}^1$) and out-of-plane ($A_{1g}$) vibration modes, and the layer number is classified by the difference between the two peak positions. Figure S16 presents the SEM measurement results, in which regions can be distinguished by gray level; however, this measurement cannot determine the number of layers and only provides relative contrast. The PL spectroscopy results (Figure S17), mapped at 625 and 667 nm, indicate that the periodically grown single-layer to multilayer MoS2 distribution has good uniformity. In Figure S18, the layered structure of the sample cross-section is resolved via HRTEM [67,68].
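For reference, the layer assignment from the Raman peak separation can be expressed as a simple threshold rule; the cut-off values below are typical literature values for MoS2 rather than the calibration used in this work, so they should be treated as illustrative assumptions.

```python
# Classify the MoS2 layer number from the separation (in cm^-1) between the in-plane
# E2g^1 and out-of-plane A1g Raman peaks; thresholds are assumed literature values.
def layers_from_raman(e2g_peak_cm1, a1g_peak_cm1):
    delta = a1g_peak_cm1 - e2g_peak_cm1
    if delta < 20.5:
        return "monolayer"
    elif delta < 22.5:
        return "bilayer"
    elif delta < 24.0:
        return "trilayer"
    return "bulk (four or more layers)"
```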

4. Conclusions

This study addresses the layer-number discrimination of molybdenum disulfide films on sapphire. In comparison with current measurement instruments, such as Raman, SEM, AFM, and TEM, the proposed equipment offers a larger detection area, shorter measurement time, and lower cost, and it can detect few-layer regions of the molybdenum disulfide film. Unlike previous approaches, we use deep learning rather than image processing for analysis and experimentally confirm that the 3D-CNN has the best precision and generalization capability. The reason is that the 3D-CNN model additionally incorporates the spatial domain of the morphological features of MoS2, which avoids misjudgments caused by imaging-quality differences due to noise and enables more accurate judgments of fuzzy regions between categories. For the problem that low magnification is limited by the spatial resolution, which blurs fine contaminants and edge morphology, a GAN model could be used to achieve super-resolution at low magnification [69,70]. In future research, we hope to integrate all types of 2D materials and various substrates, such as heterogeneous stacks, to easily distinguish different 2D materials and their layer numbers. We also aim to infer the growth pattern of MoS2 by detecting images in real time and to avoid machine shutdown under impurity intrusion, thereby reducing time and related costs.

Supplementary Materials

The following are available online at https://www.mdpi.com/2079-4991/10/6/1161/s1. Figure S1: Experimental device. (a) Three-zone furnace tube used in this experiment (Lindberg/Blue HTF55347C), (b) schematic of the experimental structure and position of the precursor, (c) area-coded position and image of the sample under OM shooting; Figure S2: Flowchart of the deep-learning-based detection system for the number of optical MoS2 layers. The gray, blue, orange, and green blocks correspond to (1) database, (2) offline training, (3) model design, and (4) online service, respectively; Figure S3: Data labeling assisted by the Raman spectroscopy instrument (MRI-1532A). This instrument injects a 532 nm laser into the sample; the laser photons interact with the molecules in the sample material, yielding the $E_{2g}^1$ and $A_{1g}$ vibration modes. The peak difference between these modes is the main signal for judging the MoS2 layer number, and both modes depend strongly on the MoS2 thickness. Each of the two 30 μm × 30 μm Raman mapping results selected for the database took ~45 min to acquire, which is a considerable time for ground-truth marking; Figure S4: Flowchart of the visible hyperspectral imaging algorithm; Figure S5: In the blue area, the data of the offline training section in Figure S2 are used. The ground truth and our label data are set as the training and validation sets, respectively, and the rest of the VIS-HSI feature data are used as the test set. The green area is the predicted result of the new pending data in the (4) online service architecture of Figure S2; Figure S6: OM image and prediction results of the new pending data at 10× magnification; Figure S7: At 40× magnification, (a) and (e) are the OM images of the training data and new pending data, respectively. The false-color composites predicted for the training data by (b) DNN, (c) 1D-CNN, and (d) 3D-CNN show that DNN and 1D-CNN fail to detect impurities in the single layer. The color classification images (false-color composites) of the new pending data predicted by (f) DNN, (g) 1D-CNN, and (h) 3D-CNN show that DNN and 1D-CNN misclassify impurities within the single layer; Figure S8: OM image and prediction results of the new pending data at 40× magnification; Figure S9: At 100× magnification, (a) and (e) are the OM images of the training data and new pending data. (b) DNN, (c) 1D-CNN, and (d) 3D-CNN indicate the prediction results of the color classification image (false-color composite) for the training data. (f) DNN, (g) 1D-CNN, and (h) 3D-CNN reflect the prediction results for the new pending data, in which DNN and 1D-CNN misclassify parts of the single layer as a double layer; Figure S10: OM image and prediction results of the new pending data at 100× magnification; Figure S11: At 40× magnification, the accuracy and loss of the convergence process in (a) DNN, (b) 1D-CNN, and (c) 3D-CNN, and the validation-set confusion matrix results of (d) DNN, (e) 1D-CNN, and (f) 3D-CNN; Figure S12: At 100× magnification, the accuracy and loss of the convergence process in (a) DNN, (b) 1D-CNN, and (c) 3D-CNN, and the validation-set confusion matrix results of (d) DNN, (e) 1D-CNN, and (f) 3D-CNN; Figure S13: Optical microscope light intensity distribution analysis. (a,d,g) OM images of the sapphire substrate taken at 10×, 40×, and 100× magnification rates (scale bars of 50, 15, and 10 μm in the lower right corner, respectively). (b,e,h) Light intensity distributions of the V (lightness) channel in the HSV color space of the microscope images. (c,f,i) Scatter plots of the RGB channels of each pixel in the OM images; Figure S14: Prediction rate of each classification type at shooting magnification rates of (a) 10×, (b) 40×, and (c) 100×; Figure S15: Raman measurement of the hyperspectral image of the new pending data. (a) OM image at 100× magnification, (b) prediction result compared with the OM image at 100× magnification, (c) Raman spectra at three specific points in (d), and (d) Raman measurement result corresponding to (b). The number of layers in the ROI circle is consistent: the triangle is a single-layer structure, and the intermediate core point is a double layer; Figure S16: SEM measurement of the hyperspectral image of the new pending data. (a,b) OM images at 100× and 40× magnification rates, (c,d) prediction results corresponding to the OM image ranges of (a,b), and (e,f) ROI measurement results corresponding to (c,d); Figure S17: (a) Photoluminescence spectrometry. (b) OM image of the selected PL mapping range. (c) PL mapping at a wavelength of 625 nm. (d) PL mapping at a wavelength of 667 nm. (e) PL measurement result at the blue mark in (d); Figure S18: (a) OM image after growing MoS2, (b) cross-sectional TEM image of the selected area in (a), (c) magnified TEM image at the red arrow in (b), (d) HRTEM image at the red box in (c), (e) HRTEM image at the orange box in (c), (f) HRTEM image at the yellow box in (c), (g) SAED pattern of (d), (h) SAED pattern of (e), and (i) SAED pattern of (f).

Author Contributions

Conceptualization, M.-Y.L. and H.-C.W.; methodology, H.-C.W.; software, K.-C.L.; validation, S.-W.F. and S.B.A.; resources, V.E.F. and H.-C.W.; data curation, K.-C.L. and M.-Y.L.; writing-original draft preparation, H.T.N.; writing-review and editing, V.E.F. and H.-C.W.; supervision, V.E.F. and H.-C.W.; project administration, H.-C.W.; funding acquisition, H.-C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Ministry of Science and Technology, The Republic of China under the Grants MOST 105-2923-E-194-003-MY3, 106-2112-M-194-002, and 108-2823-8-194-002.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Geim, A.K.; Novoselov, K.S. The rise of graphene. In Nanoscience and Technology: A Collection of Reviews from Nature Journals; World Scientific: Singapore, 2010; pp. 11–19. [Google Scholar]
  2. Sarma, S.D.; Adam, S.; Hwang, E.; Rossi, E. Electronic transport in two-dimensional graphene. Rev. Mod. Phys. 2011, 83, 407. [Google Scholar] [CrossRef] [Green Version]
  3. Lin, X.; Su, L.; Si, Z.; Zhang, Y.; Bournel, A.; Zhang, Y.; Klein, J.-O.; Fert, A.; Zhao, W. Gate-driven pure spin current in graphene. Phys. Rev. Appl. 2017, 8, 034006. [Google Scholar] [CrossRef]
  4. Wu, C.; Zhang, J.; Tong, X.; Yu, P.; Xu, J.Y.; Wu, J.; Wang, Z.M.; Lou, J.; Chueh, Y.L. A Critical Review on Enhancement of Photocatalytic Hydrogen Production by Molybdenum Disulfide: From Growth to Interfacial Activities. Small 2019, 15, 1900578. [Google Scholar] [CrossRef] [PubMed]
  5. Yang, D.; Wang, H.; Luo, S.; Wang, C.; Zhang, S.; Guo, S. Cut Flexible Multifunctional Electronics Using MoS2 Nanosheet. Nanomaterials 2019, 9, 922. [Google Scholar] [CrossRef] [Green Version]
  6. Zhang, Y.; Wan, Q.; Yang, N. Recent Advances of Porous Graphene: Synthesis, Functionalization, and Electrochemical Applications. Small 2019, 15, 1903780. [Google Scholar] [CrossRef]
  7. Radisavljevic, B.; Radenovic, A.; Brivio, J.; Giacometti, V.; Kis, A. Single-layer MoS2 transistors. Nat. Nanotechnol. 2011, 6, 147. [Google Scholar] [CrossRef]
  8. Han, T.; Liu, H.; Wang, S.; Chen, S.; Xie, H.; Yang, K. Probing the field-effect transistor with monolayer MoS2 prepared by APCVD. Nanomaterials 2019, 9, 1209. [Google Scholar] [CrossRef] [Green Version]
  9. Roh, J.; Ryu, J.H.; Baek, G.W.; Jung, H.; Seo, S.G.; An, K.; Jeong, B.G.; Lee, D.C.; Hong, B.H.; Bae, W.K. Threshold voltage control of multilayered MoS2 field-effect transistors via octadecyltrichlorosilane and their applications to active matrixed quantum dot displays driven by enhancement-mode logic gates. Small 2019, 15, 1803852. [Google Scholar] [CrossRef]
  10. Yang, K.; Liu, H.; Wang, S.; Li, W.; Han, T. A horizontal-gate monolayer MoS2 transistor based on image force barrier reduction. Nanomaterials 2019, 9, 1245. [Google Scholar] [CrossRef] [Green Version]
  11. Choi, M.S.; Lee, G.-H.; Yu, Y.-J.; Lee, D.-Y.; Lee, S.H.; Kim, P.; Hone, J.; Yoo, W.J. Controlled charge trapping by molybdenum disulphide and graphene in ultrathin heterostructured memory devices. Nat. Commun. 2013, 4, 1–7. [Google Scholar]
  12. Lu, G.Z.; Wu, M.J.; Lin, T.N.; Chang, C.Y.; Lin, W.L.; Chen, Y.T.; Hou, C.F.; Cheng, H.J.; Lin, T.Y.; Shen, J.L. Electrically pumped white-light-emitting diodes based on histidine-doped MoS2 quantum dots. Small 2019, 15, 1901908. [Google Scholar] [CrossRef] [PubMed]
  13. Lee, C.-H.; Lee, G.-H.; Van Der Zande, A.M.; Chen, W.; Li, Y.; Han, M.; Cui, X.; Arefe, G.; Nuckolls, C.; Heinz, T.F. Atomically thin p–n junctions with van der Waals heterointerfaces. Nat. Nanotechnol. 2014, 9, 676. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Sarkar, D.; Liu, W.; Xie, X.; Anselmo, A.C.; Mitragotri, S.; Banerjee, K. MoS2 field-effect transistor for next-generation label-free biosensors. ACS Nano 2014, 8, 3992–4003. [Google Scholar] [CrossRef] [PubMed]
  15. Zhao, J.; Li, N.; Yu, H.; Wei, Z.; Liao, M.; Chen, P.; Wang, S.; Shi, D.; Sun, Q.; Zhang, G. Highly sensitive MoS2 humidity sensors array for noncontact sensation. Adv. Mater. 2017, 29, 1702076. [Google Scholar] [CrossRef]
  16. Park, Y.J.; Sharma, B.K.; Shinde, S.M.; Kim, M.-S.; Jang, B.; Kim, J.-H.; Ahn, J.-H. All MoS2-based large area, skin-attachable active-matrix tactile sensor. ACS Nano 2019, 13, 3023–3030. [Google Scholar] [CrossRef]
  17. Shin, M.; Yoon, J.; Yi, C.; Lee, T.; Choi, J.-W. Flexible HIV-1 biosensor based on the Au/MoS2 nanoparticles/Au nanolayer on the PET substrate. Nanomaterials 2019, 9, 1076. [Google Scholar] [CrossRef] [Green Version]
  18. Yadav, V.; Roy, S.; Singh, P.; Khan, Z.; Jaiswal, A. 2D MoS2-based nanomaterials for therapeutic, bioimaging, and biosensing applications. Small 2019, 15, 1803706. [Google Scholar] [CrossRef] [Green Version]
  19. Zhang, P.; Yang, S.; Pineda-Gómez, R.; Ibarlucea, B.; Ma, J.; Lohe, M.R.; Akbar, T.F.; Baraban, L.; Cuniberti, G.; Feng, X. Electrochemically exfoliated high-quality 2H-MoS2 for multiflake thin film flexible biosensors. Small 2019, 15, 1901265. [Google Scholar] [CrossRef]
  20. Tu, Q.; Lange, B.R.; Parlak, Z.; Lopes, J.M.J.; Blum, V.; Zauscher, S. Quantitative subsurface atomic structure fingerprint for 2D materials and heterostructures by first-principles-calibrated contact-resonance atomic force microscopy. ACS Nano 2016, 10, 6491–6500. [Google Scholar] [CrossRef]
  21. Wastl, D.S.; Weymouth, A.J.; Giessibl, F.J. Atomically resolved graphitic surfaces in air by atomic force microscopy. ACS Nano 2014, 8, 5233–5239. [Google Scholar] [CrossRef]
  22. Zhao, W.; Xia, B.; Lin, L.; Xiao, X.; Liu, P.; Lin, X.; Peng, H.; Zhu, Y.; Yu, R.; Lei, P. Low-energy transmission electron diffraction and imaging of large-area graphene. Sci. Adv. 2017, 3, e1603231. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Meyer, J.C.; Geim, A.K.; Katsnelson, M.I.; Novoselov, K.S.; Booth, T.J.; Roth, S. The structure of suspended graphene sheets. Nature 2007, 446, 60–63. [Google Scholar] [CrossRef] [PubMed]
  24. Nolen, C.M.; Denina, G.; Teweldebrhan, D.; Bhanu, B.; Balandin, A.A. High-throughput large-area automated identification and quality control of graphene and few-layer graphene films. ACS Nano 2011, 5, 914–922. [Google Scholar] [CrossRef] [PubMed]
  25. Konstantopoulos, G.; Koumoulos, E.P.; Charitidis, C.A. Testing novel portland cement formulations with carbon nanotubes and intrinsic properties revelation: Nanoindentation analysis with machine learning on microstructure identification. Nanomaterials 2020, 10, 645. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Hong, S.; Nomura, K.-I.; Krishnamoorthy, A.; Rajak, P.; Sheng, C.; Kalia, R.K.; Nakano, A.; Vashishta, P. Defect healing in layered materials: A machine learning-assisted characterization of MoS2 crystal phases. J. Phys. Chem. Lett. 2019, 10, 2739–2744. [Google Scholar] [CrossRef]
  27. Masubuchi, S.; Watanabe, E.; Seo, Y.; Okazaki, S.; Sasagawa, T.; Watanabe, K.; Taniguchi, T.; Machida, T. Deep-learning-based image segmentation integrated with optical microscopy for automatically searching for two-dimensional materials. NPJ 2D Mater. Appl. 2020, 4, 1–9. [Google Scholar] [CrossRef]
  28. Yang, J.; Yao, H. Automated identification and characterization of two-dimensional materials via machine learning-based processing of optical microscope images. Extrem. Mech. Lett. 2020, 39, 100771. [Google Scholar] [CrossRef]
  29. Li, Y.; Kong, Y.; Peng, J.; Yu, C.; Li, Z.; Li, P.; Liu, Y.; Gao, C.-F.; Wu, R. Rapid identification of two-dimensional materials via machine learning assisted optic microscopy. J. Mater. 2019, 5, 413–421. [Google Scholar] [CrossRef]
  30. Masubuchi, S.; Machida, T. Classifying optical microscope images of exfoliated graphene flakes by data-driven machine learning. NPJ 2D Mater. Appl. 2019, 3, 1–7. [Google Scholar] [CrossRef]
  31. Lin, X.; Si, Z.; Fu, W.; Yang, J.; Guo, S.; Cao, Y.; Zhang, J.; Wang, X.; Liu, P.; Jiang, K. Intelligent identification of two-dimensional nanostructures by machine-learning optical microscopy. Nano Res. 2018, 11, 6316–6324. [Google Scholar] [CrossRef] [Green Version]
  32. Zhao, Y.; Luo, X.; Li, H.; Zhang, J.; Araujo, P.T.; Gan, C.K.; Wu, J.; Zhang, H.; Quek, S.Y.; Dresselhaus, M.S. Interlayer breathing and shear modes in few-trilayer MoS2 and WSe2. Nano Lett. 2013, 13, 1007–1015. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Li, H.; Wu, J.; Yin, Z.; Zhang, H. Preparation and applications of mechanically exfoliated single-layer and multilayer MoS2 and WSe2 nanosheets. Acc. Chem. Res. 2014, 47, 1067–1075. [Google Scholar] [CrossRef] [PubMed]
  34. Li, H.; Lu, G.; Yin, Z.; He, Q.; Li, H.; Zhang, Q.; Zhang, H. Optical identification of single-and few-layer MoS2 sheets. Small 2012, 8, 682–686. [Google Scholar] [CrossRef] [PubMed]
  35. Najmaei, S.; Liu, Z.; Zhou, W.; Zou, X.; Shi, G.; Lei, S.; Yakobson, B.I.; Idrobo, J.-C.; Ajayan, P.M.; Lou, J. Vapour phase growth and grain boundary structure of molybdenum disulphide atomic layers. Nat. Mater. 2013, 12, 754–759. [Google Scholar] [CrossRef]
  36. Van Der Zande, A.M.; Huang, P.Y.; Chenet, D.A.; Berkelbach, T.C.; You, Y.; Lee, G.-H.; Heinz, T.F.; Reichman, D.R.; Muller, D.A.; Hone, J.C. Grains and grain boundaries in highly crystalline monolayer molybdenum disulphide. Nat. Mater. 2013, 12, 554–561. [Google Scholar] [CrossRef] [Green Version]
  37. Lee, Y.H.; Zhang, X.Q.; Zhang, W.; Chang, M.T.; Lin, C.T.; Chang, K.D.; Yu, Y.C.; Wang, J.T.W.; Chang, C.S.; Li, L.J. Synthesis of large-area MoS2 atomic layers with chemical vapor deposition. Adv. Mater. 2012, 24, 2320–2325. [Google Scholar] [CrossRef] [Green Version]
  38. Jeon, J.; Jang, S.K.; Jeon, S.M.; Yoo, G.; Jang, Y.H.; Park, J.-H.; Lee, S. Layer-controlled CVD growth of large-area two-dimensional MoS2 films. Nanoscale 2015, 7, 1688–1695. [Google Scholar] [CrossRef]
  39. Dumcenco, D.; Ovchinnikov, D.; Marinov, K.; Lazic, P.; Gibertini, M.; Marzari, N.; Sanchez, O.L.; Kung, Y.-C.; Krasnozhon, D.; Chen, M.-W. Large-area epitaxial monolayer MoS2. ACS Nano 2015, 9, 4611–4620. [Google Scholar] [CrossRef]
  40. Yu, Y.; Li, C.; Liu, Y.; Su, L.; Zhang, Y.; Cao, L. Controlled scalable synthesis of uniform, high-quality monolayer and few-layer MoS2 films. Sci. Rep. 2013, 3, 1866. [Google Scholar] [CrossRef]
  41. Wang, S.; Rong, Y.; Fan, Y.; Pacios, M.; Bhaskaran, H.; He, K.; Warner, J.H. Shape evolution of monolayer MoS2 crystals grown by chemical vapor deposition. Chem. Mater. 2014, 26, 6371–6379. [Google Scholar] [CrossRef]
  42. Wang, L.; Chen, F.; Ji, X. Shape consistency of MoS2 flakes grown using chemical vapor deposition. Appl. Phys. Express 2017, 10, 065201. [Google Scholar] [CrossRef]
  43. Zheng, W.; Qiu, Y.; Feng, W.; Chen, J.; Yang, H.; Wu, S.; Jia, D.; Zhou, Y.; Hu, P. Controlled growth of six-point stars MoS2 by chemical vapor deposition and its shape evolution mechanism. Nanotechnology 2017, 28, 395601. [Google Scholar] [CrossRef] [PubMed]
  44. Zhou, D.; Shu, H.; Hu, C.; Jiang, L.; Liang, P.; Chen, X. Unveiling the growth mechanism of MoS2 with chemical vapor deposition: From two-dimensional planar nucleation to self-seeding nucleation. Cryst. Growth Des. 2018, 18, 1012–1019. [Google Scholar] [CrossRef]
  45. Zhu, D.; Shu, H.; Jiang, F.; Lv, D.; Asokan, V.; Omar, O.; Yuan, J.; Zhang, Z.; Jin, C. Capture the growth kinetics of CVD growth of two-dimensional MoS2. NPJ 2D Mater. Appl. 2017, 1, 1–8. [Google Scholar] [CrossRef]
  46. Sun, D.; Nguyen, A.E.; Barroso, D.; Zhang, X.; Preciado, E.; Bobek, S.; Klee, V.; Mann, J.; Bartels, L. Chemical vapor deposition growth of a periodic array of single-layer MoS2 islands via lithographic patterning of an SiO2/Si substrate. 2D Mater. 2015, 2, 045014. [Google Scholar] [CrossRef]
  47. Li, Y.; Hao, S.; DiStefano, J.G.; Murthy, A.A.; Hanson, E.D.; Xu, Y.; Wolverton, C.; Chen, X.; Dravid, V.P. Site-specific positioning and patterning of MoS2 monolayers: The role of Au seeding. ACS Nano 2018, 12, 8970–8976. [Google Scholar] [CrossRef]
  48. Han, G.H.; Kybert, N.J.; Naylor, C.H.; Lee, B.S.; Ping, J.; Park, J.H.; Kang, J.; Lee, S.Y.; Lee, Y.H.; Agarwal, R. Seeded growth of highly crystalline molybdenum disulphide monolayers at controlled locations. Nat. Commun. 2015, 6, 6128. [Google Scholar] [CrossRef] [Green Version]
  49. Wang, X.; Kang, K.; Chen, S.; Du, R.; Yang, E.-H. Location-specific growth and transfer of arrayed MoS2 monolayers with controllable size. 2D Mater. 2017, 4, 025093. [Google Scholar] [CrossRef]
  50. Lee, C.; Yan, H.; Brus, L.E.; Heinz, T.F.; Hone, J.; Ryu, S. Anomalous lattice vibrations of single-and few-layer MoS2. ACS Nano 2010, 4, 2695–2700. [Google Scholar] [CrossRef] [Green Version]
  51. Li, H.; Zhang, Q.; Yap, C.C.R.; Tay, B.K.; Edwin, T.H.T.; Olivier, A.; Baillargeat, D. From bulk to monolayer MoS2: Evolution of Raman scattering. Adv. Funct. Mater. 2012, 22, 1385–1390. [Google Scholar] [CrossRef]
  52. Lei, S.; Ge, L.; Najmaei, S.; George, A.; Kappera, R.; Lou, J.; Chhowalla, M.; Yamaguchi, H.; Gupta, G.; Vajtai, R. Evolution of the electronic band structure and efficient photo-detection in atomic layers of InSe. ACS Nano 2014, 8, 1263–1272. [Google Scholar] [CrossRef] [PubMed]
  53. Ye, G.; Gong, Y.; Lin, J.; Li, B.; He, Y.; Pantelides, S.T.; Zhou, W.; Vajtai, R.; Ajayan, P.M. Defects engineered monolayer MoS2 for improved hydrogen evolution reaction. Nano Lett. 2016, 16, 1097–1103. [Google Scholar] [CrossRef] [PubMed]
  54. Han, T.; Liu, H.; Wang, S.; Chen, S.; Li, W.; Yang, X.; Cai, M.; Yang, K. Probing the optical properties of MoS2 on SiO2/Si and sapphire substrates. Nanomaterials 2019, 9, 740. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  55. Tran, D.; Bourdev, L.; Fergus, R.; Torresani, L.; Paluri, M. Learning spatiotemporal features with 3D convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Araucano Park, Las Condes, Chile, 11–18 December 2015; pp. 4489–4497. [Google Scholar]
  56. Wang, Z.; Hu, G.; Yao, S. Decomposition mixed pixel of remote sensing image based on tray neural network model. In Proceedings of the International Symposium on Intelligence Computation and Applications, Wuhan, China, 21–23 September 2007; pp. 305–309. [Google Scholar]
  57. Foschi, P.G. A geometric approach to a mixed pixel problem: Detecting subpixel woody vegetation. Remote Sens. Environ. 1994, 50, 317–327. [Google Scholar] [CrossRef]
  58. Lin, Z.; Chen, Y.; Zhao, X.; Wang, G. Spectral-spatial classification of hyperspectral image using autoencoders. In Proceedings of the 9th International Conference on Information, Communications & Signal Processing, Tainan, Taiwan, 10–13 December 2013; pp. 1–5. [Google Scholar]
  59. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef] [Green Version]
  60. Shen, L.; Jia, S. Three-dimensional Gabor wavelets for pixel-based hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2011, 49, 5039–5046. [Google Scholar] [CrossRef]
  61. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  62. Li, Y.; Zhang, H.; Shen, Q. Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67. [Google Scholar] [CrossRef] [Green Version]
  63. Evennett, P. Kohler illumination: A simple interpretation. Proc. R. Microsc. Soc. 1983, 28, 10–13. [Google Scholar]
  64. Attota, R.K. Step beyond Kohler illumination analysis for far-field quantitative imaging: Angular illumination asymmetry (ANILAS) maps. Opt. Express 2016, 24, 22616–22627. [Google Scholar] [CrossRef]
  65. Attota, R.; Silver, R. Optical microscope angular illumination analysis. Opt. Express 2012, 20, 6693–6702. [Google Scholar] [CrossRef] [PubMed]
  66. Lee, M.-K. Large-area few-layered graphene film determination by multispectral imaging microscopy. Nanoscale 2015, 7, 9033–9039. [Google Scholar]
  67. Chen, J.; Wei, Q. Phase transformation of molybdenum trioxide to molybdenum dioxide: An in-situ transmission electron microscopy investigation. Int. J. Appl. Ceram. Technol. 2017, 14, 1020–1025. [Google Scholar] [CrossRef] [Green Version]
  68. Wang, Q.H.; Kalantar-Zadeh, K.; Kis, A.; Coleman, J.N.; Strano, M.S. Electronics and optoelectronics of two-dimensional transition metal dichalcogenides. Nat. Nanotechnol. 2012, 7, 699. [Google Scholar] [CrossRef] [PubMed]
  69. Wang, H.; Rivenson, Y.; Jin, Y.; Wei, Z.; Gao, R.; Günaydın, H.; Bentolila, L.A.; Kural, C.; Ozcan, A. Deep learning enables cross-modality super-resolution in fluorescence microscopy. Nat. Methods 2019, 16, 103–110. [Google Scholar] [CrossRef]
  70. Rivenson, Y.; Ceylan Koydemir, H.; Wang, H.; Wei, Z.; Ren, Z.; Günaydın, H.; Zhang, Y.; Gorocs, Z.; Liang, K.; Tseng, D. Deep learning enhanced mobile-phone microscopy. ACS Photonics 2018, 5, 2354–2364. [Google Scholar] [CrossRef]
Figure 1. Data feature and label processing.
Figure 2. Schematic of the model construction. (a) Deep neural network (DNN), (b) one-dimensional (1D) convolutional neural network (1D-CNN), and (c) three-dimensional (3D) convolutional neural network (3D-CNN), where the inputs in (a,b) are hyperspectral vectors, and the input in (c) is hyperspectral cube. For the outputs of the three models, Softmax is used as a classifier for six categories: substrate, monolayer, bilayer, trilayer, bulk, and residues.
Figure 3. At 10× magnification condition: accuracy (ACC) and loss in the convergence process in (a) DNN, (b) 1D-CNN, and (c) 3D-CNN; confusion matrix results of the validation set in (d) DNN, (e) 1D-CNN, and (f) 3D-CNN.
Figure 4. At 10× magnification: (a,e) optical microscopy (OM) images of the training (train data) and new test (new pending data) data, respectively; predicted results of the color classification image (false-color composite) under three models for the training data (b–d), namely, (b) DNN, (c) 1D-CNN, and (d) 3D-CNN, and for the new pending data (f–h), namely, (f) DNN, (g) 1D-CNN, and (h) 3D-CNN.
Figure 5. Optimal loss of the three models in the (a) training and (b) validation sets at 10×, 40×, and 100× magnification rates.
Figure 6. (a) OM images of the large-area periodic growth of single-layer MoS2 on sapphire substrates at (b) 100× (ROI-1), (c) 40× (ROI-2), and (d) 10× (ROI-3) magnification rates. Color classification map predicted at 10× magnification: (e,f) are the amplification results of ROI-4 and ROI-5 ranges, respectively, corresponding to ROI-1 and ROI-2 circle selection.
Table 1. Model evaluation indicators for the three models at 10× magnification.
Final Accuracy (Validation Data)

| Category | DNN Precision | DNN Recall | DNN F1-Score | 1D-CNN Precision | 1D-CNN Recall | 1D-CNN F1-Score | 3D-CNN Precision | 3D-CNN Recall | 3D-CNN F1-Score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Substrate | 0.9521 | 0.9825 | 0.9671 | 0.9641 | 0.9820 | 0.9729 | 0.9684 | 0.9904 | 0.9792 |
| Monolayer | 0.8778 | 0.7490 | 0.8083 | 0.8863 | 0.7874 | 0.8339 | 0.9345 | 0.8861 | 0.9097 |
| Bilayer | 0.5593 | 0.8049 | 0.6600 | 0.5661 | 0.7567 | 0.6476 | 0.7623 | 0.7781 | 0.7701 |
| Tri-layer | 0.6552 | 0.6477 | 0.6514 | 0.6341 | 0.8764 | 0.7358 | 0.6504 | 0.7339 | 0.6897 |
| Bulk | 0.9624 | 0.6702 | 0.7901 | 0.8873 | 0.7368 | 0.8051 | 0.8897 | 0.8776 | 0.8836 |
| Residues | 0.9571 | 0.8168 | 0.8814 | 0.9651 | 0.7943 | 0.8714 | 0.9516 | 0.9408 | 0.9462 |
| Micro average | 0.9023 | 0.9023 | 0.9023 | 0.9123 | 0.9123 | 0.9123 | 0.9296 | 0.9296 | 0.9296 |
| Macro average | 0.8273 | 0.7785 | 0.7930 | 0.8172 | 0.8223 | 0.8111 | 0.8595 | 0.8678 | 0.8631 |
| Weighted average | 0.9100 | 0.9023 | 0.9026 | 0.9190 | 0.9123 | 0.9134 | 0.9300 | 0.9296 | 0.9295 |

Li, K.-C.; Lu, M.-Y.; Nguyen, H.T.; Feng, S.-W.; Artemkina, S.B.; Fedorov, V.E.; Wang, H.-C. Intelligent Identification of MoS2 Nanostructures with Hyperspectral Imaging by 3D-CNN. Nanomaterials 2020, 10, 1161. https://doi.org/10.3390/nano10061161
