Communication

Deep-4mCGP: A Deep Learning Approach to Predict 4mC Sites in Geobacter pickeringii by Using Correlation-Based Feature Selection Technique

School of Life Science and Technology, Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu 610054, China
* Author to whom correspondence should be addressed.
Int. J. Mol. Sci. 2022, 23(3), 1251; https://doi.org/10.3390/ijms23031251
Submission received: 24 December 2021 / Revised: 19 January 2022 / Accepted: 20 January 2022 / Published: 23 January 2022
(This article belongs to the Special Issue Deep Learning and Machine Learning in Bioinformatics)

Abstract

4mC is a type of DNA modification that helps coordinate multiple biological processes, such as DNA replication, gene expression, and transcriptional regulation. Accurate prediction of 4mC sites can provide precise information about their hereditary functions. The purpose of this study was to establish a robust deep learning model to recognize 4mC sites in Geobacter pickeringii. In the proposed model, two kinds of feature descriptors, namely, binary and k-mer composition, were used to encode the DNA sequences of Geobacter pickeringii. The features obtained from their fusion were optimized by using a correlation- and gradient-boosting decision tree (GBDT)-based algorithm with the incremental feature selection (IFS) method. Then, these optimized features were fed into a 1D convolutional neural network (CNN) to classify 4mC sites from non-4mC sites in Geobacter pickeringii. The performance of the proposed model on independent data exhibited an accuracy of 0.868, which was 4.2% higher than that of the existing model.

1. Introduction

DNA modifications play a significant role in gene expression and regulation, DNA replication, and transcriptional regulation. Methylcytosine is a key epigenetic mark at the 5′-cytosine-phosphate-guanine-3′ (CpG) site and is closely correlated with cell growth and chromosomal protection [1,2]. 5-Hydroxymethylcytosine (5hmC), 5-methylcytosine (5mC), and 4-methylcytosine (4mC) are the familiar cytosine methylations in the genomes of prokaryotes and eukaryotes [3,4]. 5mC is the most frequent type of methylcytosine and is implicated in many neurodegenerative diseases and cancers [5]. 4mC is an important modification that protects genomic DNA from degradation by restriction enzymes [6].
Precise identification of 4mC sites can provide important clues for understanding the mechanism of gene regulation. At present, several experimental techniques can recognize 4mC sites, for example, single-molecule real-time sequencing [7], mass spectrometry [8], and bisulfite sequencing [9], but these techniques are time-consuming and expensive when applied to next-generation sequencing data. Hence, a computational model to identify 4mC sites is urgently needed. To date, a few computational and mathematical methods have been introduced to predict 4mC sites in multiple species. In 2017, Chen et al. [10] introduced the first computational model to predict 4mC sites in multiple species on the basis of a confirmed 4mC dataset. Subsequently, Wei et al. [11] designed a novel iterative feature representation algorithm for the prediction of 4mC sites. Tang et al. [12] introduced a linear integration method that merges existing models for the identification of 4mC sites. Afterwards, Manavalan et al. [13] established a new tool, Meta-4mCpred, to recognize 4mC sites in six different species. Khanal et al. [14] introduced the first deep learning model, 4mCCNN, which utilizes multiple feature combinations [15,16,17] for the prediction of 4mC sites in multiple genomes [18]. Although 4mCCNN yields good results, there is still room for further improvement.
To address these issues, we constructed a 1D CNN model to recognize 4mC sites in Geobacter pickeringii. Figure 1 illustrates the flowchart of the whole study. Binary and k-mer nucleotide composition descriptors were used to encode the DNA sequences of Geobacter pickeringii into feature vectors, and these features were then optimized by using a correlation- and gradient-boosting decision tree (GBDT)-based algorithm with the incremental feature selection (IFS) method. The optimized features were then fed into a 1D CNN-based classifier under 10-fold cross-validation, and we obtained the best model to classify 4mC from non-4mC.

2. Results and Discussion

2.1. Performance Evaluation

We constructed a 1D CNN-based model named Deep-4mCGP for the identification of 4mC sites in Geobacter pickeringii. In the first step, we converted the sequence data into feature vectors by using k-mer nucleotide composition and binary encodings. Subsequently, these feature vectors were optimized by means of a correlation- and GBDT-based algorithm with the IFS method: correlation was applied first, and then GBDT with IFS was used to pick the best features. Figure 2A,B displays the IFS curves of the top features. Afterward, these best features were fed into the 1D CNN under 10-fold cross-validation to classify 4mC sites from non-4mC sites in Geobacter pickeringii. In this work, 10-fold cross-validation was employed to examine the performance of the model: the data were randomly divided into 10 segments of equal size, each segment was tested by a model trained on the remaining nine segments, the procedure was repeated 10 times, and the average of the outcomes was taken as the final result. The AUROC of the proposed model was 0.986, which was 6.5% higher than that of the existing model. The accuracy, precision, recall, and F1 are shown in Table 1, and the ROC curve is shown in Figure 2C.
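As an illustration of this protocol, the following is a minimal sketch (not the authors' released code) of stratified 10-fold cross-validation with the AUROC averaged over folds; the model factory, array shapes, and variable names are assumptions made only for this example.

```python
# A minimal sketch of the 10-fold cross-validation protocol described above:
# split the data into 10 stratified folds, train on nine, test on the held-out
# fold, and average the AUROC over the 10 runs.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def cross_validate(X, y, build_model, epochs=80, batch_size=32):
    # build_model(n_features) should return a compiled Keras model,
    # e.g., the illustrative 1D CNN sketched in Section 3.3.
    aurocs = []
    for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True).split(X, y):
        model = build_model(X.shape[1])
        model.fit(X[train_idx][..., None], y[train_idx],
                  epochs=epochs, batch_size=batch_size, verbose=0)
        scores = model.predict(X[test_idx][..., None]).ravel()
        aurocs.append(roc_auc_score(y[test_idx], scores))
    return float(np.mean(aurocs))
```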

2.2. Sequence Composition Analysis

The sequence pattern around the modification site is crucial for recognizing and understanding genomic variation [19]. In this work, we used Two Sample Logo [20] to inspect the distribution of nucleotides around the 4mC site; Figure 2D illustrates this distribution. Nucleotides 'A' and 'T' were enriched upstream and downstream of the positive sequences, e.g., five consecutive 'A' nucleotides (positions 30–34) and four consecutive 'A' nucleotides (positions 15–18 and 24–27) appeared in positive sequences. Nucleotides 'C' and 'G' were abundant upstream and downstream of the negative sequences, e.g., five consecutive 'G' nucleotides (positions 30–34), four consecutive 'G' nucleotides (positions 3–6 and 24–27), and four consecutive 'C' nucleotides (positions 15–18) were observed in negative sequences. Figure 2D shows that there was a significant difference between 4mC and non-4mC sequences. These results suggest that the nucleotide distributions at different positions support the precise identification of 4mC.

2.3. Comparison on the Basis of Independent Data

The fused features were also fed into LSTM [21], GBDT [22], and RF [23,24] models for comparison with the CNN-based model [25]. For each predictor, the best model was selected on the basis of AUROC, as shown in Table 1 and Figure 2F. A comparison of the proposed model with 4mCCNN under 10-fold cross-validation is shown in Figure 2E. The performance of Deep-4mCGP was then evaluated on the independent data (200 positive and 200 negative sequences) and compared with the existing 4mCCNN. The accuracy, precision, recall, F1, and AUROC of 4mCCNN were 0.826, 0.818, 0.823, 0.825, and 0.920, respectively, whereas those of Deep-4mCGP were 0.868, 0.876, 0.773, 0.859, and 0.961, respectively. Thus, the accuracy of Deep-4mCGP on independent data was 0.868, which was 4.2% higher than that of 4mCCNN. The performance comparison is shown in Table 2.

3. Materials and Methods

Reliable data are an essential requirement for the construction of a machine learning-based model [26,27]. Thus, we acquired 1138 sequences (569 positive and 569 negative) of Geobacter pickeringii from the work of Chen et al. [10] for training and testing the model. Moreover, we obtained 400 sequences (200 positive and 200 negative) from the work of Manavalan et al. [13] for independent testing.

3.1. Feature Descriptors

Selecting useful and informative features is an important step in developing machine learning models [4,28,29,30,31,32,33,34,35,36,37]. Converting DNA sequences into numerical feature vectors is key to the recognition of functional elements; encodings such as physicochemical properties, natural vectors, binary composition, and k-mer nucleotide composition have been widely utilized in computational biology and bioinformatics [38,39]. In this study, binary and k-mer composition were used to encode the DNA sequences of Geobacter pickeringii.

3.1.1. k-mer

k-mer composition can capture the interactions between nucleotides in DNA sequences [40]. The k-mer counts are obtained by sliding a window of size k along the sequence with a fixed step. A sample F of sequence length n can be written as
$$F = S_1 S_2 S_3 \cdots S_i \cdots S_{n-1} S_n$$
where S_i indicates the i-th nucleotide of the DNA sequence. With the k-mer descriptor, the sequence can be converted into a 4^k-dimensional feature vector
$$F_k = \left[ d_1^{k\text{-tuple}}, d_2^{k\text{-tuple}}, \ldots, d_i^{k\text{-tuple}}, \ldots, d_{4^k}^{k\text{-tuple}} \right]^T$$
where d_i^(k-tuple) denotes the frequency of the i-th k-mer and T represents transposition. If k = 1, the DNA sequence is encoded into a 4-dimensional feature vector; if k = 2, into a 16-dimensional feature vector. In this work, k was set to 1, 2, 3, 4, 5, and 6. Consequently, each DNA sequence was converted into a (4^1 + 4^2 + 4^3 + 4^4 + 4^5 + 4^6 = 5460)-dimensional feature vector, formulated as
$$F = \left[ F_1, F_2, F_3, F_4, F_5, F_6 \right]$$
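The following is a minimal sketch (not the authors' released code) of this k-mer encoding; normalizing counts to frequencies and the iteration order over the ACGT alphabet are assumptions made for illustration.

```python
from itertools import product

def kmer_features(seq, k_max=6, alphabet="ACGT"):
    """Concatenated k-mer frequency vector for k = 1..k_max (4 + 16 + ... + 4096 = 5460 features for k_max = 6)."""
    seq = seq.upper()
    feats = []
    for k in range(1, k_max + 1):
        kmers = ["".join(p) for p in product(alphabet, repeat=k)]
        counts = dict.fromkeys(kmers, 0)
        windows = len(seq) - k + 1
        for i in range(windows):
            counts[seq[i:i + k]] += 1
        feats.extend(counts[km] / windows for km in kmers)  # frequency of each k-mer
    return feats

# A 41-bp sequence yields a 5460-dimensional vector.
print(len(kmer_features("ACGT" * 10 + "A")))  # 5460
```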

3.1.2. Binary

Binary encoding represents information using 0s and 1s, so a DNA sequence can be transformed into a string of 0s and 1s in which each nucleotide is represented by a 4-bit one-hot vector. In this work, each 41-bp DNA sequence of Geobacter pickeringii was encoded into a (4 × 41 = 164)-dimensional feature vector.
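A minimal sketch of this binary (one-hot) encoding is given below (not the authors' released code); the particular A/C/G/T bit ordering is an illustrative assumption.

```python
import numpy as np

# One-hot codes for the four nucleotides (ordering chosen for illustration only).
ONE_HOT = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
           "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}

def binary_features(seq):
    """Encode a 41-bp sequence into a 4 x 41 = 164-dimensional binary vector."""
    return np.array([bit for base in seq.upper() for bit in ONE_HOT[base]],
                    dtype=np.float32)

print(binary_features("ACGT" * 10 + "A").shape)  # (164,)
```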

3.2. Feature Selection

3.2.1. Correlation

Correlation measures the relationship between two features: if two features are uncorrelated, the correlation coefficient is zero, whereas perfectly (positively or negatively) correlated features yield ±1. Two approaches, classical linear correlation and correlation based on information theory, can be used to compute the correlation between two variables; the linear correlation coefficient is the most familiar and widely used. The linear correlation coefficient r for a pair of variables (p, q) is defined as
$$r = \frac{\sum_i \left( p_i - \bar{p} \right)\left( q_i - \bar{q} \right)}{\sqrt{\sum_i \left( p_i - \bar{p} \right)^2}\,\sqrt{\sum_i \left( q_i - \bar{q} \right)^2}}$$
The correlation coefficient performs well on small datasets, but on very large datasets it is also necessary to establish whether the relationship between features is statistically significant. We therefore used the t-test to assess the statistical significance of the correlation and retained only the significant features. The value of t is computed as
$$t = \frac{r\sqrt{n-2}}{\sqrt{1 - r^2}}$$
where r is the correlation coefficient, n is the number of samples, and n − 2 is the degree of freedom. The significance level was set to 0.05; a feature was selected if its t value exceeded the critical value at this significance level.
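As an illustration, a minimal sketch of this correlation filter is shown below (assuming NumPy/SciPy; not the authors' released code). The function name, the use of a two-sided critical value, and the assumption of non-constant feature columns are choices made only for this example.

```python
import numpy as np
from scipy import stats

def correlation_filter(X, y, alpha=0.05):
    """Keep feature columns of X whose Pearson correlation with the label y is significant at level alpha."""
    n = X.shape[0]
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)   # two-sided critical value
    keep = []
    for j in range(X.shape[1]):
        r, _ = stats.pearsonr(X[:, j], y)           # linear correlation with the class label
        t = abs(r) * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)
        if t > t_crit:
            keep.append(j)
    return keep
```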

3.2.2. GBDT with IFS

GBDT is a popular machine learning classifier that has been used in various mathematical, cheminformatics, and bioinformatics tools [41,42]. It can establish a scalable and reliable prediction model from non-linear combinations of weak learners [43]. Given training data
$$\left\{ (x_1, y_1), \ldots, (x_n, y_n) \right\}, \quad x_i \in X \subseteq S^n, \; y_i \in Y \subseteq S, \qquad q_K(x) := \sum_{k=1}^{K} D(x; \theta_k)$$
where θ_k denotes the parameters of the k-th decision tree D(x; θ_k). These parameters are estimated by minimizing the loss function P:
$$\hat{\theta}_k = \arg\min_{\theta_k} \sum_{i=1}^{n} P\left( y_i,\; q_{k-1}(x_i) + D(x_i; \theta_k) \right)$$
GBDT computes the final prediction in a forward, stage-wise manner:
$$q_k(x) = q_{k-1}(x) + D(x; \theta_k)$$
The negative gradient of the loss function with respect to q_(k−1) is used to compute the residuals:
$$S_{ki} = -\left[ \frac{\partial P\left( y_i, q(x_i) \right)}{\partial q(x_i)} \right]_{q(x) = q_{k-1}(x)}, \quad i = 1, 2, \ldots, n$$
Hence, each tree is fitted to the residuals S_ki to estimate θ_k. Such a tree partitions the input space X into J regions S_1, …, S_J and outputs a constant z_j for each region S_j:
$$D(x; \theta) = \sum_{j=1}^{J} z_j \, I\left( x \in S_j \right)$$
The IFS [44,45] method was implemented in this work to pick the best feature subset. IFS evaluates the performance of the top q-ranked features repeatedly for q ∈ {1, 2, 3, …, n}, where n is the total number of features. In practice, features were added incrementally starting from a randomly chosen initial feature, and the best result over several randomly restarted IFS runs was reported; the whole procedure is summarized in Algorithm 1 and illustrated by the sketch that follows it. A detailed explanation of the IFS technique can be found in [46].
Algorithm 1: Correlation- and GBDT-based feature selection algorithm
Input: training data Q = (L1, L2, …, Lk, Lc)
Output: Qbest

1st Round (correlation-based filtering)
   1   Begin
   2   for i = 1 to k do
   3       r_i = correlation coefficient between L_i and L_c
   4   end
   5   significance level p = 0.05
   6   ρ = 0 (null hypothesis: no correlation between L_i and L_c)
   7   for i = 1 to k do
   8       t_i = significance of (r_i, ρ) for L_i (t-test value from the equation above)
   9       if t_i > critical value then
   10          append L_i to Qbest
   11  end
   12  return Qbest

2nd Round (GBDT)
Input: Qbest = {(x_i, y_i)}, i = 1, …, n, where x_i is the data and y_i the label
Loss function: P(y_i, q(x))
  13   initialize the model: q_0(x) := argmin_z Σ_{i=1..n} P(y_i, z)
  14   for k = 1 to K do
  15       for i = 1 to n do
  16           compute the pseudo-residual S_ki = −[∂P(y_i, q(x_i)) / ∂q(x_i)] evaluated at q(x) = q_(k−1)(x)
  17       end
  18       build a decision tree D_k(x; θ_k) on the residuals S_ki, with regions θ_k = {S_kj, j = 1, …, J}
  19       for j = 1 to J do
  20           z_kj = argmin_z Σ_{x_i ∈ S_kj} P(y_i, q_(k−1)(x_i) + z)
  21       end
  22       update the model: q_k(x) = q_(k−1)(x) + Σ_{j=1..J} z_kj I(x ∈ S_kj)
  23   end
  24   q(x) = Σ_{k=1..K} Σ_{j=1..J} z_kj I(x ∈ S_kj)
Output: the decision tree ensemble q(x)
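The following is a minimal sketch of GBDT-based feature ranking followed by IFS (not the authors' released code). It uses scikit-learn's GradientBoostingClassifier with the n_estimators and learning-rate values from Table 3; the ranking by impurity-based feature importance, the accuracy scoring, and the cap on the number of candidate features are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def gbdt_ifs(X, y, max_features=200, cv=10):
    """Rank features by GBDT importance, then keep the top-q subset with the best 10-fold CV accuracy."""
    gbdt = GradientBoostingClassifier(n_estimators=120, learning_rate=0.01)
    gbdt.fit(X, y)
    order = np.argsort(gbdt.feature_importances_)[::-1]      # most important first
    best_acc, best_subset = 0.0, order[:1]
    for q in range(1, min(max_features, X.shape[1]) + 1):    # incremental feature selection
        subset = order[:q]
        acc = cross_val_score(gbdt, X[:, subset], y, cv=cv, scoring="accuracy").mean()
        if acc > best_acc:
            best_acc, best_subset = acc, subset
    return best_subset, best_acc
```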

3.3. Convolutional Neural Network

LeCun et al. [47] introduced the convolutional neural network, which has since been widely applied in biology and bioinformatics [48,49,50]. The fundamental principle of a CNN is to learn many filters that extract hidden local features from the data through layer-wise convolutions and pooling operations. CNNs perform exceptionally well on 2D data such as images and matrices [51], and 1D CNNs have subsequently been used for biomedical sequence identification and natural language processing tasks [41,52]. In this work, we implemented a 1D CNN to identify 4mC sites in Geobacter pickeringii. We employed Keras 2.3.1 [53], TensorFlow 2.1.0, and Python 3.5.4 to perform the experiments. The best tuning parameters are recorded in Table 3.
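A minimal Keras sketch consistent with the CNN hyperparameters listed in Table 3 is given below; it is not the authors' released code, and the input shape (the 50 selected features treated as a 1-channel sequence), the Flatten layer, the Adam optimizer, and the binary cross-entropy loss are assumptions made for illustration.

```python
from tensorflow.keras import layers, models, optimizers

def build_deep_4mc_model(n_features=50):
    """1D CNN with the Table 3 settings: Conv1D (32 filters, kernel 2), max pooling, dropout 0.5, sigmoid output."""
    model = models.Sequential([
        layers.Conv1D(32, kernel_size=2, strides=1, padding="valid",
                      activation="relu", input_shape=(n_features, 1)),
        layers.MaxPooling1D(pool_size=2, strides=2, padding="valid"),
        layers.Dropout(0.5),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Training with the reported epoch and batch-size settings:
# model = build_deep_4mc_model()
# model.fit(X_train[..., None], y_train, epochs=80, batch_size=32)
```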

3.4. Metrics Evaluation

Precision, accuracy, recall, and F1 [54,55,56] were employed to evaluate the effectiveness of the proposed prediction model; they are formulated as
$$\text{Precision} = \frac{TP}{TP + FP}, \quad \text{Recall} = \frac{TP}{TP + FN}, \quad \text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \quad F1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
where ‘TP’ symbolizes the accurately predicted 4mC sequences, ‘TN’ represents the perfectly predicted non-4mC sequences, ‘FP’ indicates the non-4mC sequences predicted as 4mC sequences, and ‘FN’ indicates the 4mC sequences predicted as non-4mC sequences.
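A minimal sketch computing these four metrics from confusion-matrix counts is shown below (not the authors' code); the example counts are hypothetical.

```python
def evaluate(tp, tn, fp, fn):
    """Compute Precision, Recall, Accuracy, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"Precision": precision, "Recall": recall, "Accuracy": accuracy, "F1": f1}

print(evaluate(tp=170, tn=177, fp=23, fn=30))  # hypothetical counts
```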

4. Conclusions

4mC is a type of DNA modification that helps coordinate multiple biological processes, such as DNA replication, gene expression, and transcriptional regulation. Accurate prediction of 4mC sites can provide precise information about their hereditary functions. Several machine learning models have been developed to predict 4mC sites in multiple genomes [10,12,13,57,58,59,60]; however, only one deep learning-based model, 4mCCNN [14], exists for Geobacter pickeringii. In this work, a deep learning model was constructed to recognize 4mC sites in Geobacter pickeringii. In the proposed model, two kinds of feature descriptors, namely, binary and k-mer composition, were used to encode the DNA sequences of Geobacter pickeringii. The features obtained from their fusion were optimized by using a correlation- and GBDT-based algorithm with the IFS method. Then, these optimized features were fed into a 1D CNN-based classifier under 10-fold cross-validation, and we obtained the best model to classify 4mC from non-4mC. The performance of the proposed Deep-4mCGP on independent data exhibited an accuracy of 0.868, which was 4.2% higher than that of 4mCCNN. The source code and data are available at GitHub: https://github.com/linDing-groups/Deep-4mCGP (accessed on 19 January 2022). In future work, we plan to release a web-based application to make the proposed model more convenient for users without programming or statistical knowledge.

Author Contributions

H.Z.: methodology, coding, data curation, visualization, writing—original draft preparation. Q.-L.H.: data curation, methodology. H.L. (Hao Lv): data curation, methodology, visualization. Z.-J.S.: data curation, methodology. F.-Y.D.: data curation, methodology, visualization. H.L. (Hao Lin): conceptualization, supervision, reviewing, editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Sichuan Provincial Science Fund for Distinguished Young Scholars (2020JDJQ0012) and the National Natural Science Foundation of China (62172078).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All the data are available at https://github.com/linDing-groups/Deep-4mCGP (accessed on 19 January 2022).

Acknowledgments

We are very thankful to Hui Ding (Center for Informational Biology, University of Electronic Science and Technology of China) for their constructive suggestions and support on this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Schübeler, D. Function and information content of DNA methylation. Nature 2015, 517, 321–326.
2. Ao, C.; Yu, L.; Zou, Q. Prediction of bio-sequence modifications and the associations with diseases. Brief. Funct. Genom. 2021, 20, 1–18.
3. Pataillot-Meakin, T.; Pillay, N.; Beck, S. 3-methylcytosine in cancer: An underappreciated methyl lesion? Epigenomics 2016, 8, 451–454.
4. Yalcin, D.; Otu, H.H. An Unbiased Predictive Model to Detect DNA Methylation Propensity of CpG Islands in the Human Genome. Curr. Bioinform. 2021, 16, 179–196.
5. Robertson, K.D. DNA methylation and human disease. Nat. Rev. Genet. 2005, 6, 597–610.
6. Iyer, L.M.; Abhiman, S.; Aravind, L. Natural history of eukaryotic DNA methylation systems. Prog. Mol. Biol. Transl. Sci. 2011, 101, 25–104.
7. Flusberg, B.A.; Webster, D.R.; Lee, J.H.; Travers, K.J.; Olivares, E.C.; Clark, T.A.; Korlach, J.; Turner, S.W. Direct detection of DNA methylation during single-molecule, real-time sequencing. Nat. Methods 2010, 7, 461.
8. Doherty, R.; Couldrey, C. Exploring genome wide bisulfite sequencing for DNA methylation analysis in livestock: A technical assessment. Front. Genet. 2014, 5, 126.
9. Boch, J.; Bonas, U. Xanthomonas AvrBs3 family-type III effectors: Discovery and function. Annu. Rev. Phytopathol. 2010, 48, 419–436.
10. Chen, W.; Yang, H.; Feng, P.; Ding, H.; Lin, H. iDNA4mC: Identifying DNA N4-methylcytosine sites based on nucleotide chemical properties. Bioinformatics 2017, 33, 3518–3523.
11. Wei, L.; Su, R.; Luan, S.; Liao, Z.; Manavalan, B.; Zou, Q.; Shi, X. Iterative feature representations improve N4-methylcytosine site prediction. Bioinformatics 2019, 35, 4930–4937.
12. Tang, Q.; Kang, J.; Yuan, J.; Tang, H.; Li, X.; Lin, H.; Huang, J.; Chen, W. DNA4mC-LIP: A linear integration method to identify N4-methylcytosine site in multiple species. Bioinformatics 2020, 36, 3327–3335.
13. Manavalan, B.; Basith, S.; Shin, T.H.; Wei, L.; Lee, G. Meta-4mCpred: A Sequence-Based Meta-Predictor for Accurate DNA 4mC Site Prediction Using Effective Feature Representation. Mol. Ther.-Nucleic Acids 2019, 16, 733–744.
14. Khanal, J.; Nazari, I.; Tayara, H.; Chong, K.T. 4mCCNN: Identification of N4-methylcytosine sites in prokaryotes using convolutional neural network. IEEE Access 2019, 7, 145455–145461.
15. Manavalan, B.; Basith, S.; Shin, T.H.; Lee, D.Y.; Wei, L.; Lee, G. 4mCpred-EL: An ensemble learning framework for identification of DNA N4-methylcytosine sites in the mouse genome. Cells 2019, 8, 1332.
16. Hasan, M.M.; Manavalan, B.; Shoombuatong, W.; Khatun, M.S.; Kurata, H. i4mC-Mouse: Improved identification of DNA N4-methylcytosine sites in the mouse genome using multiple encoding schemes. Comput. Struct. Biotechnol. J. 2020, 18, 906–912.
17. Zulfiqar, H.; Khan, R.S.; Hassan, F.; Hippe, K.; Hunt, C.; Ding, H.; Song, X.-M.; Cao, R. Computational identification of N4-methylcytosine sites in the mouse genome with machine-learning method. Math. Biosci. Eng. 2021, 18, 3348–3363.
18. Ye, P.; Luan, Y.; Chen, K.; Liu, Y.; Xiao, C.; Xie, Z. MethSMRT: An integrative database for DNA N6-methyladenine and N4-methylcytosine generated by single-molecular real-time sequencing. Nucleic Acids Res. 2016, 45, D85–D89.
19. Smith, Z.D.; Meissner, A. DNA methylation: Roles in mammalian development. Nat. Rev. Genet. 2013, 14, 204–220.
20. Vacic, V.; Iakoucheva, L.M.; Radivojac, P. Two Sample Logo: A graphical representation of the differences between two sets of sequence alignments. Bioinformatics 2006, 22, 1536–1537.
21. Gers, F.A.; Schmidhuber, J.; Cummins, F. Learning to forget: Continual prediction with LSTM. Neural Comput. 2000, 12, 2451–2471.
22. Ye, J.; Chow, J.-H.; Chen, J.; Zheng, Z. Stochastic gradient boosted distributed decision trees. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, Hong Kong, China, 2–6 November 2009; pp. 2061–2064.
23. Qi, Y. Random forest for bioinformatics. In Ensemble Machine Learning; Springer: Berlin/Heidelberg, Germany, 2012; pp. 307–323.
24. Ahmed, F.F.; Khatun, M.S.; Mosharaf, M.P.; Mollah, M.N.H. Prediction of Protein-protein Interactions in Arabidopsis thaliana Using Partial Training Samples in a Machine Learning Framework. Curr. Bioinform. 2021, 16, 865–879.
25. Zhang, Y.; Li, Y.; Wang, R.; Lu, J.; Ma, X.; Qiu, M. PSAC: Proactive Sequence-aware Content Caching via Deep Learning at the Network Edge. IEEE Trans. Netw. Sci. Eng. 2020, 7, 2145–2154.
26. Su, W.; Liu, M.L.; Yang, Y.H.; Wang, J.S.; Li, S.H.; Lv, H.; Dao, F.Y.; Yang, H.; Lin, H. PPD: A Manually Curated Database for Experimentally Verified Prokaryotic Promoters. J. Mol. Biol. 2021, 433, 166860.
27. Sharma, A.K.; Srivastava, R. Protein Secondary Structure Prediction Using Character bi-gram Embedding and Bi-LSTM. Curr. Bioinform. 2021, 16, 333–338.
28. Hasan, M.M.; Alam, M.A.; Shoombuatong, W.; Deng, H.W.; Manavalan, B.; Kurata, H. NeuroPred-FRL: An interpretable prediction model for identifying neuropeptide using feature representation learning. Brief. Bioinform. 2021, 22, bbab167.
29. Charoenkwan, P.; Chiangjong, W.; Nantasenamat, C.; Hasan, M.M.; Manavalan, B.; Shoombuatong, W. StackIL6: A stacking ensemble model for improving the prediction of IL-6 inducing peptides. Brief. Bioinform. 2021, 22, bbab172.
30. Zulfiqar, H.; Sun, Z.J.; Huang, Q.L.; Yuan, S.S.; Lv, H.; Dao, F.Y.; Lin, H.; Li, Y.W. Deep-4mCW2V: A sequence-based predictor to identify N4-methylcytosine sites in Escherichia coli. Methods 2021, in press.
31. Ju, Z.; Wang, S.-Y. Prediction of Neddylation Sites Using the Composition of k-spaced Amino Acid Pairs and Fuzzy SVM. Curr. Bioinform. 2020, 15, 725–731.
32. Zhang, D.; Chen, H.-D.; Zulfiqar, H.; Yuan, S.-S.; Huang, Q.-L.; Zhang, Z.-Y.; Deng, K.-J. iBLP: An XGBoost-based predictor for identifying bioluminescent proteins. Comput. Math. Methods Med. 2021, 2021, 6664362.
33. Lv, H.; Dao, F.-Y.; Zulfiqar, H.; Lin, H. DeepIPs: Comprehensive assessment and computational identification of phosphorylation sites of SARS-CoV-2 infection using a deep learning-based approach. Brief. Bioinform. 2021, 22, bbab244.
34. Zhang, L.; Huang, Z.; Kong, L. CSBPI Site: Multi-Information Sources of Features to RNA Binding Sites Prediction. Curr. Bioinform. 2021, 16, 691–699.
35. Lv, H.; Shi, L.; Berkenpas, J.W.; Dao, F.-Y.; Zulfiqar, H.; Ding, H.; Zhang, Y.; Yang, L.; Cao, R. Application of artificial intelligence and machine learning for COVID-19 drug discovery and vaccine design. Brief. Bioinform. 2021, 22, bbab320.
36. Zulfiqar, H.; Masoud, M.S.; Yang, H.; Han, S.G.; Wu, C.Y.; Lin, H. Screening of prospective plant compounds as H1R and CL1R inhibitors and its antiallergic efficacy through molecular docking approach. Comput. Math. Methods Med. 2021, 2021, 6683407.
37. Hasan, M.M.; Schaduangrat, N.; Basith, S.; Lee, G.; Shoombuatong, W.; Manavalan, B. HLPpred-Fuse: Improved and robust prediction of hemolytic peptide and its activity by fusing multiple feature representation. Bioinformatics 2020, 36, 3350–3356.
38. Govindaraj, R.G.; Subramaniyam, S.; Manavalan, B. Extremely-randomized-tree-based Prediction of N(6)-Methyladenosine Sites in Saccharomyces cerevisiae. Curr. Genom. 2020, 21, 26–33.
39. Li, Q.; Yu, J.; Yan, Y.; Chen, Y.; Tan, S. PsePSSM-based Prediction for the Protein-ATP Binding Sites. Curr. Bioinform. 2021, 16, 576–582.
40. Dao, F.-Y.; Lv, H.; Zulfiqar, H.; Yang, H.; Su, W.; Gao, H.; Ding, H.; Lin, H. A computational platform to identify origins of replication sites in eukaryotes. Brief. Bioinform. 2021, 22, 1940–1950.
41. Lv, H.; Dao, F.-Y.; Zulfiqar, H.; Su, W.; Ding, H.; Liu, L.; Lin, H. A sequence-based deep learning approach to predict CTCF-mediated chromatin loop. Brief. Bioinform. 2021, 22, 1–13.
42. Zulfiqar, H.; Yuan, S.-S.; Huang, Q.-L.; Sun, Z.-J.; Dao, F.-Y.; Yu, X.-L.; Lin, H. Identification of cyclin protein using gradient boost decision tree algorithm. Comput. Struct. Biotechnol. J. 2021, 19, 4123–4131.
43. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.-Y. Lightgbm: A highly efficient gradient boosting decision tree. Adv. Neural Inf. Process. Syst. 2017, 30, 3146–3154.
44. Yang, W.; Zhu, X.-J.; Huang, J.; Ding, H.; Lin, H. A Brief Survey of Machine Learning Methods in Protein Sub-Golgi Localization. Curr. Bioinform. 2019, 14, 234–240.
45. Tan, J.-X.; Li, S.-H.; Zhang, Z.-M.; Chen, C.-X.; Chen, W.; Tang, H.; Lin, H. Identification of hormone binding proteins based on machine learning methods. Math. Biosci. Eng. 2019, 16, 2466–2480.
46. Alim, A.; Rafay, A.; Naseem, I. PoGB-pred: Prediction of Antifreeze Proteins Sequences Using Amino Acid Composition with Feature Selection Followed by a Sequential-based Ensemble Approach. Curr. Bioinform. 2021, 16, 446–456.
47. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
48. Niu, M.; Lin, Y.; Zou, Q. sgRNACNN: Identifying sgRNA on-target activity in four crops using ensembles of convolutional neural networks. Plant Mol. Biol. 2021, 105, 483–495.
49. Zhang, Y.; Yan, J.; Chen, S.; Gong, M.; Gao, D.; Zhu, M.; Gan, W. Review of the Applications of Deep Learning in Bioinformatics. Curr. Bioinform. 2020, 15, 898–911.
50. Bukhari, S.A.S.; Razzaq, A.; Jabeen, J.; Khan, S.; Khan, Z. Deep-BSC: Predicting Raw DNA Binding Pattern in Arabidopsis thaliana. Curr. Bioinform. 2021, 16, 457–465.
51. Kwon, Y.-H.; Shin, S.-B.; Kim, S.-D. Electroencephalography based fusion two-dimensional (2D)-convolution neural networks (CNN) model for emotion recognition system. Sensors 2018, 18, 1383.
52. Mo, F.; Luo, Y.; Fan, D.A.; Zeng, H.; Zhao, Y.N.; Luo, M.; Liu, X.B.; Ma, X.L. Integrated Analysis of mRNA-seq and miRNA-seq to identify c-MYC, YAP1 and miR-3960 as Major Players in the Anticancer Effects of Caffeic Acid Phenethyl Ester in Human Small Cell Lung Cancer Cell Line. Curr. Gene Ther. 2020, 20, 15–24.
53. Chollet, F. Keras: Deep learning library for theano and tensorflow. Keras 2015, 7, T1. Available online: https://Keras.Io/ (accessed on 19 January 2022).
54. Cao, R.; Freitas, C.; Chan, L.; Sun, M.; Jiang, H.; Chen, Z. ProLanGO: Protein function prediction using neural machine translation based on a recurrent neural network. Molecules 2017, 22, 1732.
55. Gai, D.; Shen, X.; Chen, H. Effective Classification of Melting Curve in Real-time PCR Based on Dynamic Filter-based Convolutional Neural Network. Curr. Bioinform. 2021, 16, 820–828.
56. Ao, C.; Zou, Q.; Yu, L. RFhy-m2G: Identification of RNA N2-methylguanosine modification sites based on random forest and hybrid features. Methods 2021, in press.
57. He, W.; Jia, C.; Zou, Q. 4mCPred: Machine learning methods for DNA N4-methylcytosine sites prediction. Bioinformatics 2019, 35, 593–601.
58. Lv, H.; Dao, F.-Y.; Zhang, D.; Guan, Z.-X.; Yang, H.; Su, W.; Liu, M.-L.; Ding, H.; Chen, W.; Lin, H. iDNA-MS: An integrated computational tool for detecting DNA modification sites in multiple genomes. Iscience 2020, 23, 100991.
59. Zulfiqar, H.; Dao, F.Y.; Lv, H.; Yang, H.; Zhou, P.; Chen, W.; Lin, H. Identification of Potential Inhibitors Against SARS-CoV-2 Using Computational Drug Repurposing Study. Curr. Bioinform. 2021, 16, 1320–1327.
60. Liu, Q.; Chen, J.; Wang, Y.; Li, S.; Jia, C.; Song, J.; Li, F. DeepTorrent: A deep learning-based approach for predicting DNA N4-methylcytosine sites. Brief. Bioinform. 2021, 22, bbaa124.
Figure 1. Flowchart of the whole study.
Figure 2. (A,B) The IFS procedure for recognizing 4mC sites: 871 best features were first selected from the overall 5624 by the correlation measure (A), and 50 further optimized features were then obtained from these 871 by using GBDT on 10-fold CV, with the accuracy increasing from 0.894 to 0.908 (B). (C) AUROC curve of Deep-4mCGP on 10-fold CV. (D) Nucleotide distribution around the modification site. (E) Performance comparison of Deep-4mCGP with 4mCCNN on 10-fold cross-validation. (F) AUROC of the predictors on training and independent data.
Table 1. Outcomes of single encodings and their fusion-based models on training and independent data using different classification algorithms.
Algorithm | FS | Method | Training Data (Accuracy / Precision / Recall / F1 / AUROC) | Independent Data (Accuracy / Precision / Recall / F1 / AUROC)
LSTM | 5460 | k-mer | 0.861 / 0.872 / 0.861 / 0.811 / 0.943 | 0.825 / 0.820 / 0.812 / 0.819 / 0.882
LSTM | 164 | Binary | 0.834 / 0.828 / 0.837 / 0.838 / 0.875 | 0.801 / 0.804 / 0.798 / 0.801 / 0.872
LSTM | 5624 | Fusion | 0.868 / 0.865 / 0.859 / 0.862 / 0.937 | 0.810 / 0.814 / 0.808 / 0.813 / 0.902
LSTM | 871 | Fusion | 0.859 / 0.857 / 0.847 / 0.857 / 0.925 | 0.808 / 0.801 / 0.807 / 0.800 / 0.876
LSTM | 50 | Fusion | 0.884 / 0.878 / 0.881 / 0.879 / 0.959 | 0.841 / 0.842 / 0.839 / 0.842 / 0.921
RF | 5460 | k-mer | 0.831 / 0.862 / 0.758 / 0.664 / 0.936 | 0.809 / 0.838 / 0.761 / 0.648 / 0.909
RF | 164 | Binary | 0.772 / 0.763 / 0.755 / 0.770 / 0.863 | 0.753 / 0.748 / 0.753 / 0.756 / 0.832
RF | 5624 | Fusion | 0.844 / 0.847 / 0.839 / 0.845 / 0.891 | 0.795 / 0.788 / 0.783 / 0.794 / 0.887
RF | 871 | Fusion | 0.847 / 0.849 / 0.851 / 0.846 / 0.897 | 0.801 / 0.800 / 0.800 / 0.798 / 0.878
RF | 50 | Fusion | 0.866 / 0.858 / 0.861 / 0.854 / 0.915 | 0.812 / 0.808 / 0.814 / 0.812 / 0.898
GBDT | 5460 | k-mer | 0.848 / 0.881 / 0.776 / 0.676 / 0.962 | 0.828 / 0.861 / 0.770 / 0.669 / 0.931
GBDT | 164 | Binary | 0.827 / 0.821 / 0.823 / 0.827 / 0.895 | 0.782 / 0.778 / 0.779 / 0.781 / 0.862
GBDT | 5624 | Fusion | 0.835 / 0.832 / 0.830 / 0.832 / 0.893 | 0.786 / 0.780 / 0.786 / 0.786 / 0.882
GBDT | 871 | Fusion | 0.851 / 0.853 / 0.848 / 0.854 / 0.901 | 0.814 / 0.810 / 0.815 / 0.810 / 0.893
GBDT | 50 | Fusion | 0.875 / 0.874 / 0.868 / 0.860 / 0.945 | 0.836 / 0.835 / 0.830 / 0.841 / 0.920
CNN | 5460 | k-mer | 0.880 / 0.879 / 0.887 / 0.880 / 0.949 | 0.848 / 0.844 / 0.841 / 0.845 / 0.927
CNN | 164 | Binary | 0.868 / 0.836 / 0.834 / 0.832 / 0.928 | 0.798 / 0.802 / 0.807 / 0.790 / 0.881
CNN | 5624 | Fusion | 0.868 / 0.865 / 0.859 / 0.862 / 0.937 | 0.810 / 0.814 / 0.808 / 0.813 / 0.903
CNN | 871 | Fusion | 0.894 / 0.877 / 0.897 / 0.889 / 0.955 | 0.846 / 0.845 / 0.841 / 0.838 / 0.920
CNN | 50 | Fusion | 0.908 / 0.914 / 0.910 / 0.908 / 0.986 | 0.868 / 0.876 / 0.773 / 0.859 / 0.961
Table 2. Performance comparison of Deep-4mCGP with 4mCCNN.
Predictor | CV | Accuracy | Precision | Recall | F1 | AUROC | Reference
4mCCNN | 10-fold | 0.871 | 0.857 | 0.893 | 0.750 | 0.921 | [14]
Deep-4mCGP | 10-fold | 0.908 | 0.914 | 0.910 | 0.908 | 0.986 | This work
4mCCNN | Independent test | 0.826 | 0.818 | 0.823 | 0.825 | 0.920 | [14]
Deep-4mCGP | Independent test | 0.868 | 0.876 | 0.773 | 0.859 | 0.961 | This work
Table 3. Program in TensorFlow 2.1.0 with employed parameters.
Classifier | Parameters
RF | N-estimators = 100, learning rate = 0.001, mean absolute error = 0.143, mean squared error = 0.220
GBDT | N-estimators = 120, learning rate = 0.01, mean absolute error = 0.117, mean squared error = 0.212
LSTM | nn.LSTM(input_size = feature_size, hidden_size = 128); nn.Linear(in_features = 128, out_features = 1); nn.Sigmoid(); learning rate = 0.001, epochs = 100, batch size = 32
CNN | nn.Conv1d(in_channels = feature_size, out_channels = 32, padding = valid, strides = 1, kernel_size = 2); nn.ReLU(); nn.MaxPool1d(padding = valid, strides = 2, pool_size = 2); nn.Dropout(p = 0.5); nn.Sigmoid(); learning rate = 0.01, epochs = 80, batch size = 32
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
