Evolving CNN with Paddy Field Algorithm for Geographical Landmark Recognition
Abstract
1. Introduction
(i) We proposed a paddy field algorithm-based approach to evolve an optimized CNN architecture.
(ii) We validated the proposed approach on landmark recognition and its application.
2. Convolutional Neural Networks
3. Neural Architecture Search
4. Paddy Field Algorithm
Algorithm 1. Paddy Field Algorithm 
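The algorithm body is not reproduced here, but the paddy field procedure (sowing, selection, seeding, pollination, and dispersion, as introduced by Premaratne et al.) can be sketched roughly as follows. This is a minimal one-dimensional illustration with a toy objective; the function names and parameters are assumptions for the sketch, not the paper's implementation:

```python
import random

def paddy_field_search(fitness, low, high, pop=20, iters=30,
                       max_seeds=8, threshold=0.3, sigma=0.5, seed=0):
    """Minimal sketch of the paddy field algorithm over [low, high]."""
    rng = random.Random(seed)
    # Sowing: scatter an initial population of seeds at random.
    plants = [rng.uniform(low, high) for _ in range(pop)]
    for _ in range(iters):
        scored = sorted(plants, key=fitness, reverse=True)
        best, worst = fitness(scored[0]), fitness(scored[-1])
        span = (best - worst) or 1.0
        # Selection: keep only plants whose normalized health clears the threshold.
        survivors = [p for p in scored
                     if (fitness(p) - worst) / span >= threshold]
        offspring = []
        for p in survivors:
            # Seeding: healthier plants produce more seeds.
            n = max(1, int(max_seeds * (fitness(p) - worst) / span))
            # Pollination and dispersion: scatter viable seeds near the parent.
            offspring += [min(high, max(low, rng.gauss(p, sigma)))
                          for _ in range(n)]
        # Next generation: the best `pop` of parents and offspring.
        plants = sorted(survivors + offspring, key=fitness, reverse=True)[:pop]
    return max(plants, key=fitness)

# Toy objective with its maximum at x = 3.
best = paddy_field_search(lambda x: -(x - 3.0) ** 2, 0.0, 10.0)
```

In the paper's setting, each "plant" would be a full hyperparameter combination rather than a scalar, and the fitness would be the validation accuracy of the resulting CNN.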
5. Dataset Pre-Processing
6. Experimentation and Results
Kernel Frame Size: 3 × 3 kernel frames are generally considered optimal for CNNs. However, on varying the kernel frame size, we found that 7 × 7 produced the best-fit seed, performing best in combination with the other hyperparameters [35]. The candidate kernel frames were square matrices ranging from 1 × 1 to 11 × 11.
Number of Kernels: the number of kernels was varied between 22 and 42 to establish the best value. It has been suggested that 32 or 64 kernels work well but, for us, 42 was the variant giving the best seed [36]. Other values might have worked even better had we enlarged the search space.
Learning Rate: the learning rate governs how quickly the network trains. A slower learning rate can achieve better accuracy, but it increases the chance of getting stuck in a local minimum and lengthens training time. A faster learning rate speeds up training but risks deviating from the global minimum. A learning rate of 0.01 is considered optimal in the usual cases; indeed, the best value found was 0.0099, almost exactly that [37]. The learning rate was varied between 0.001 and 0.99.
Batch Size: the batch size is the number of images given to the network for training in one go. A batch size of 32 is considered good; in this case as well, 32 was found to be optimal [38]. Although the batch size was varied between 22 and 42, small deviations from 32, such as 33 and 34, also performed well.
Neurons: the number of neurons was also varied to check for the best types of connections. The initial 100 neurons were varied between 90 and 110; many values within this range did well, although the eventual best fit was 102 [39].
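Taken together, the ranges above define the search space over which the paddy field algorithm scatters its seeds. A minimal sketch of a candidate sampler for this space follows; the dictionary encoding and function name are illustrative assumptions, not the paper's code:

```python
import random

# Discrete ranges follow the text above (inclusive upper bounds).
SEARCH_SPACE = {
    "kernel_frame": range(1, 12),   # 1x1 .. 11x11 square kernel frames
    "num_kernels": range(22, 43),   # 22 .. 42 kernels
    "batch_size": range(22, 43),    # 22 .. 42 images per batch
    "neurons": range(90, 111),      # 90 .. 110 dense-layer neurons
}

def sample_seed(rng=random):
    """Draw one random hyperparameter combination (a 'seed') from the space."""
    seed = {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}
    # The learning rate is sampled continuously between 0.001 and 0.99.
    seed["learning_rate"] = rng.uniform(0.001, 0.99)
    return seed

candidate = sample_seed()
```

Each sampled seed would then be scored by training the corresponding CNN and reading off its validation accuracy.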
The best kernel frame length was 7, while the length normally used is 3. A kernel frame length of 5 also performed better than 3 in some combinations. One striking observation was that a kernel frame length of 4, which is not expected to do well because it is an even kernel frame, also performed better in many respects.
The number of kernels after optimization was 42, the upper end of the 22–42 range, meaning that the maximum number of kernels in the search space gave the best accuracy. However, more research would be required to establish how many kernels are optimal.
The learning rate usually used is 0.01; encouragingly, the best learning rate found was 0.0099, which is almost exactly 0.01.
The number of epochs was fixed manually at 100 so that the network did not require too much processing power at once.
The best batch size observed was the regularly used size of 32.
The number of neurons did not vary a great deal; the best value was 102 when initialized with 100.
The code was run for 8 h, and 18 seeds were checked over a wide variety of combinations.
In only 18 seeds, the accuracy improved considerably, from 53% with the default CNN to 76%.
The experiment showed that the paddy field algorithm is a highly viable evolutionary metaheuristic for searching for best-fit hyperparameters.
7. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
Radenović, F.; Tolias, G.; Chum, O. Fine-tuning CNN image retrieval with no human annotation. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 1655–1668.
LeCun, Y.; Bengio, Y. Convolutional networks for images, speech, and time series. Handb. Brain Theory Neural Netw. 1995, 3361, 1995.
Rafiq, M.Y.; Bugmann, G.; Easterbrook, D.J. Neural network design for engineering applications. Comput. Struct. 2001, 79, 1541–1552.
Liu, Y.; Sun, Y.; Xue, B.; Zhang, M.; Yen, G.G.; Tan, K.C. A survey on evolutionary neural architecture search. In IEEE Transactions on Neural Networks and Learning Systems; IEEE: Piscataway, NJ, USA, 2021.
Yang, X.S. Nature-Inspired Metaheuristic Algorithms; Luniver Press: Beckington, UK, 2010.
Hochba, D.S. Approximation algorithms for NP-hard problems. ACM Sigact News 1997, 28, 40–52.
Kong, X.; Chen, Y.L.; Xie, W.; Wu, X. A novel paddy field algorithm based on pattern search method. In Proceedings of the 2012 IEEE International Conference on Information and Automation, Shenyang, China, 6–8 June 2012; pp. 686–690.
Weyand, T.; Araujo, A.; Cao, B.; Sim, J. Google Landmarks Dataset v2: A large-scale benchmark for instance-level recognition and retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 2575–2584.
Bansal, K.; Rana, A.S. Landmark Recognition Using Ensemble-Based Machine Learning Models. In Machine Learning and Data Analytics for Predicting, Managing, and Monitoring Disease; IGI Global: Hershey, PA, USA, 2021; pp. 64–74.
Xu, D.; Tu, K.; Wang, Y.; Liu, C.; He, B.; Li, H. FCN-engine: Accelerating deconvolutional layers in classic CNN processors. In Proceedings of the International Conference on Computer-Aided Design, San Diego, CA, USA, 5–8 November 2018; pp. 1–6.
Ghosh, S.; Singh, A. Image Classification Using Deep Neural Networks: Emotion Detection Using Facial Images. In Machine Learning and Data Analytics for Predicting, Managing, and Monitoring Disease; IGI Global: Hershey, PA, USA, 2021; pp. 75–85.
Hernández, H.; Blum, C. Distributed graph coloring: An approach based on the calling behavior of Japanese tree frogs. Swarm Intell. 2012, 6, 117–150.
Lin, Y.S.; Lu, H.C.; Tsao, Y.B.; Chih, Y.M.; Chen, W.C.; Chien, S.Y. Gratetile: Efficient sparse tensor tiling for CNN processing. In Proceedings of the 2020 IEEE Workshop on Signal Processing Systems (SiPS), Coimbra, Portugal, 20–22 October 2020; pp. 1–6.
Wang, M.; Lu, S.; Zhu, D.; Lin, J.; Wang, Z. A high-speed and low-complexity architecture for softmax function in deep learning. In Proceedings of the 2018 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), Chengdu, China, 26–30 October 2018; pp. 223–226.
Bottou, L. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade; Springer: Berlin/Heidelberg, Germany, 2012; pp. 421–436.
Lydia, A.; Francis, S. Adagrad—An optimizer for stochastic gradient descent. Int. J. Inf. Comput. Sci. 2019, 6, 566–568.
Ruder, S. An overview of gradient descent optimization algorithms. arXiv 2016, arXiv:1609.04747.
Zhang, Z. Improved Adam optimizer for deep neural networks. In Proceedings of the 2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS), Banff, AB, Canada, 4–6 June 2018; pp. 1–2.
Wang, D.; Wang, X.; Kim, M.K.; Jung, S.Y. Integrated optimization of two design techniques for cogging torque reduction combined with analytical method by a simple gradient descent method. IEEE Trans. Magn. 2012, 48, 2265–2276.
Wichrowska, O.; Maheswaranathan, N.; Hoffman, M.W.; Colmenarejo, S.G.; Denil, M.; Freitas, N.; Sohl-Dickstein, J. Learned optimizers that scale and generalize. In Proceedings of the 34th International Conference on Machine Learning, Sydney, NSW, Australia, 6–11 August 2017; pp. 3751–3760.
Jafarian, A.; Nia, S.M.; Golmankhaneh, A.K.; Baleanu, D. On artificial neural networks approach with new cost functions. Appl. Math. Comput. 2018, 339, 546–555. Available online: https://EconPapers.repec.org/RePEc:eee:apmaco:v:339:y:2018:i:c:p:546555 (accessed on 20 February 2022).
Raja, M.A.Z.; Shah, F.H.; Tariq, M.; Ahmad, I. Design of artificial neural network models optimized with sequential quadratic programming to study the dynamics of nonlinear Troesch’s problem arising in plasma physics. Neural Comput. Appl. 2018, 29, 83–109.
Kumar, S.; Singh, A.; Walia, S. Parallel Big Bang–Big Crunch Global Optimization Algorithm: Performance and its Applications to routing in WMNs. Wirel. Pers. Commun. 2018, 100, 1601–1618.
Sabir, Z.; Raja, M.A.Z.; Guirao, J.L.G.; Saeed, T. Swarm Intelligence Procedures Using Meyer Wavelets as a Neural Network for the Novel Fractional Order Pantograph Singular System. Fractal Fract. 2021, 5, 277.
Boiarov, A.; Tyantov, E. Large scale landmark recognition via deep metric learning. arXiv 2019, arXiv:1908.10192v3.
Kumar, S.; Walia, S.S.; Singh, A. Parallel big bang–big crunch algorithm. Int. J. Adv. Comput. 2013, 46, 1330–1335.
Singh, A.; Kumar, S.; Walia, S.S.; Chakravorty, S. Face Recognition: A Combined Parallel BBBC & PCA Approach to Feature Selection. Int. J. Comput. Sci. Inf. Technol. 2015, 2, 1–5.
Singh, A.; Kumar, S.; Walia, S.S. Parallel 3-Parent Genetic Algorithm with Application to Routing in Wireless Mesh Networks. In Implementations and Applications of Machine Learning; Springer: Cham, Switzerland, 2020; pp. 1–28.
Singh, P.; Chaudhury, S.; Panigrahi, B.K. Hybrid MPSO-CNN: Multi-level Particle Swarm optimized hyperparameters of Convolutional Neural Network. Swarm Evol. Comput. 2021, 63, 100863.
He, X.; Wang, Y.; Wang, X.; Huang, W.; Zhao, S.; Chen, X. Simple-Encoded evolving convolutional neural network and its application to skin disease image classification. Swarm Evol. Comput. 2021, 67, 100955.
Muppala, C.; Guruviah, V. Detection of leaf folder and yellow stemborer moths in the paddy field using deep neural network with search and rescue optimization. Inf. Process. Agric. 2021, 8, 350–358.
Premaratne, U.; Samarabandu, J.; Sidhu, T. A new biologically inspired optimization algorithm. In Proceedings of the 2009 International Conference on Industrial and Information Systems (ICIIS), Peradeniya, Sri Lanka, 28–31 December 2009; pp. 279–284.
Zhao, L.; Kobayasi, K.; Hasegawa, T.; Wang, C.L.; Yoshimoto, M.; Wan, J.; Matsui, T. Traits responsible for variation in pollination and seed set among six rice cultivars grown in a miniature paddy field with free air at a hot, humid spot in China. Agric. Ecosyst. Environ. 2010, 139, 110–115.
Magliani, F.; Bidgoli, N.M.; Prati, A. A location-aware embedding technique for accurate landmark recognition. In Proceedings of the 11th International Conference on Distributed Smart Cameras, Stanford, CA, USA, 5–7 September 2017.
Ullah, F.U.M.; Ullah, A.; Muhammad, K.; Haq, I.U.; Baik, S.W. Violence detection using spatiotemporal features with 3D convolutional neural network. Sensors 2019, 19, 2472.
Li, Y.; Lin, S.; Zhang, B.; Liu, J.; Doermann, D.; Wu, Y.; Huang, F.; Ji, R. Exploiting kernel sparsity and entropy for interpretable CNN compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 2800–2809.
Zhuangzhuang, T.; Ronghui, Z.; Jiemin, H.; Jun, Z. Adaptive learning rate CNN for SAR ATR. In Proceedings of the 2016 CIE International Conference on Radar (RADAR), Guangzhou, China, 10–13 October 2016; pp. 1–5.
Radiuk, P.M. Impact of training set batch size on the performance of convolutional neural networks for diverse datasets. Inf. Technol. Manag. Sci. 2017, 20, 20–24.
Aizenberg, N.N.; Aizenberg, I.N. Fast-convergence learning algorithms for multilevel and binary neurons and solution of some image processing problems. In International Workshop on Artificial Neural Networks; Springer: Berlin/Heidelberg, Germany, 1993; pp. 230–236.
Dogra, V. Banking news-events representation and classification with a novel hybrid model using DistilBERT and rule-based features. Turk. J. Comput. Math. Educ. 2021, 12, 3039–3054.
Srivastava, A.; Verma, S.; Jhanjhi, N.Z.; Talib, M.N.; Malhotra, A. Analysis of Quality of Service in VANET. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2020; Volume 993, p. 012061.
Kumar, P.; Verma, S. Detection of Wormhole Attack in VANET. Natl. J. Syst. Inf. Technol. 2017, 10, 71–80.
Jhanjhi, N.Z.; Verma, S.; Talib, M.N.; Kaur, G. A Canvass of 5G Network Slicing: Architecture and Security Concern. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2020; Volume 993, p. 012060.
Gandam, A.; Sidhu, J.S.; Verma, S.; Jhanjhi, N.Z.; Nayyar, A.; Abouhawwash, M.; Nam, Y. An efficient post-processing adaptive filtering technique to rectifying the flickering effects. PLoS ONE 2021, 16, e0250959.
Puneeta, S.; Sahil, V. Analysis on Different Strategies Used in Blockchain Technology. J. Comput. Theor. Nanosci. 2019, 16, 4350–4355.
Kumar, K.; Verma, S.; Jhanjhi, N.Z.; Talib, M.N. A Survey of The Design and Security Mechanisms of The Wireless Networks and Mobile Ad-Hoc Networks. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2020; Volume 993, p. 012063.
Accuracy of the default CNN vs. PFA-NET (best fit) against training epochs and time.

S. No. | Epochs | Time (min) | Default CNN (Approx. %) | PFA-NET (Best Fit) (Approx. %)
1 | 13 | 5 | 8 | 12
2 | 25 | 10 | 16 | 24
3 | 38 | 15 | 24 | 35
4 | 50 | 20 | 31 | 45
5 | 63 | 25 | 37 | 55
6 | 75 | 30 | 43 | 63
7 | 88 | 35 | 48 | 70
8 | 100 | 40 | 53 | 76
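The trajectory in the table above can be encoded directly for a quick consistency check; this snippet is illustrative and not part of the original experiments:

```python
# (epochs, default CNN accuracy %, PFA-NET best-fit accuracy %) from the table.
RESULTS = [
    (13, 8, 12), (25, 16, 24), (38, 24, 35), (50, 31, 45),
    (63, 37, 55), (75, 43, 63), (88, 48, 70), (100, 53, 76),
]

# PFA-NET leads the default CNN at every logged checkpoint.
pfa_always_ahead = all(pfa > default for _, default, pfa in RESULTS)

# Absolute improvement after the full 100 epochs: 76 - 53 = 23 points.
final_gain = RESULTS[-1][2] - RESULTS[-1][1]
```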
Best-fit hyperparameters found by the paddy field algorithm.

Kernel Frame Length | Number of Kernels | Learning Rate | Batch Size | Neurons
7 | 42 | 0.0099 | 32 | 102
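As a quick sanity check on the best-fit values above, the weight count of a single convolutional layer with these settings can be computed directly. This assumes 3-channel RGB input and a single layer; it is an illustrative calculation, not the paper's full architecture:

```python
def conv_params(kernel_frame, num_kernels, in_channels=3):
    """Weights plus biases for one 2-D convolutional layer."""
    return kernel_frame * kernel_frame * in_channels * num_kernels + num_kernels

# Best-fit values from the table: 7x7 kernel frames, 42 kernels.
best_fit = {"kernel_frame": 7, "num_kernels": 42,
            "learning_rate": 0.0099, "batch_size": 32, "neurons": 102}

# 7*7*3*42 + 42 = 6216 trainable parameters for this layer.
n_params = conv_params(best_fit["kernel_frame"], best_fit["num_kernels"])
```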
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Bansal, K.; Singh, A.; Verma, S.; Kavita; Jhanjhi, N.Z.; Shorfuzzaman, M.; Masud, M. Evolving CNN with Paddy Field Algorithm for Geographical Landmark Recognition. Electronics 2022, 11, 1075. https://doi.org/10.3390/electronics11071075