Convolutional Neural Network and Its Applications in Image Detection and Recognition

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 September 2023) | Viewed by 4513

Special Issue Editors


Dr. Yongqiang Zhang
Guest Editor
School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
Interests: face detection; weakly/fully supervised object detection; activity detection; image and video understanding in the real world

Dr. Chen Zhao
Guest Editor
Artificial Intelligence Initiative, King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia
Interests: computer vision; image/video understanding

Prof. Dr. Mingli Ding
Guest Editor
School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
Interests: image detection and recognition; artificial intelligence; machine learning; deep learning

Special Issue Information

Dear Colleagues,

Object detection and recognition is a fundamental problem in computer vision, as it underpins advanced tasks such as object segmentation, object tracking, and action analysis and detection. Today, complex application scenarios pose new challenges to both the accuracy and speed of object detection methods. This Special Issue therefore presents new ideas and experimental results on object detection and recognition algorithms and architectures, from design and theory to practical use. Relevant areas include, but are not limited to, object detection and recognition applications, novel algorithms and architectures, large-scale datasets, artificial intelligence, machine learning, and deep learning. Well-designed network architectures are essential for achieving high detection performance, and techniques for resource management in parallel and distributed systems to achieve high detection speed are also of interest. This Special Issue will publish high-quality, original research papers in the following overlapping fields:

  • object detection algorithms, networks, architectures, applications, and datasets;
  • artificial intelligence, machine learning, and deep learning;
  • object detection in the wild/in real-world scenarios;
  • unsupervised/semi-supervised/weakly supervised/zero-shot/few-shot/open-world/supervised/long-tailed object detection;
  • degraded image object detection;
  • small/tiny object detection; 
  • general object/face/pedestrian/key point/car detection.

Dr. Yongqiang Zhang
Dr. Chen Zhao
Prof. Dr. Mingli Ding
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • object detection
  • convolutional neural network
  • algorithms
  • datasets
  • artificial intelligence
  • machine learning

Published Papers (2 papers)


Research


16 pages, 2888 KiB  
Article
Radish Growth Stage Recognition Based on GAN and Deep Transfer Learning
by Ximeng Zhou, Xinhao Yang and Songshi Luo
Appl. Sci. 2023, 13(14), 8306; https://doi.org/10.3390/app13148306 - 18 Jul 2023
Cited by 1 | Viewed by 2716
Abstract
Image recognition of plant growth states provides technical support for crop monitoring; this reduces labor costs and promotes efficient planting. However, difficulties in data collection, the high levels of algorithm efficiency required, and the lack of computing power resources create challenges for the development of intelligent agriculture. As a result, a deep transfer learning algorithm is proposed in this paper. The main motivation for this study was the lack of a dataset of plant growth stages. The key idea was to collect radish growth stage images in an experimental field using standardized equipment and to generate additional images using DCGAN. By improving the deep transfer learning model, radish growth stages can be identified much more accurately. In this study, five deep transfer learning models were compared, namely, Inception-v3, MobileNet, Xception, VGG-16, and VGG-19. Our experiment demonstrated that Inception-v3 was the most suitable model for the recognition of plant growth states. Based on Inception-v3, we propose three improved models. The test accuracies for the radish and Oxford Flower datasets were 99.5% and 99.3%, respectively. The model also achieved excellent performance on a pest and disease dataset, with an accuracy of 94.7%, 2.4% higher than previous results. These results demonstrate the wide applicability of our model and the value of constructing a radish growth stage dataset.
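The transfer-learning recipe this abstract describes — reuse a pretrained backbone as a frozen feature extractor and retrain only a small classification head — can be sketched in a few lines. The "backbone" below is a hypothetical stand-in (a fixed random projection), not a real Inception-v3, and the data are synthetic; only the shape of the workflow matches the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen, pretrained backbone (e.g. Inception-v3):
# a fixed projection from raw pixels to a feature vector, never updated.
W_backbone = rng.normal(size=(64, 16))  # 64 input "pixels" -> 16 features

def extract_features(x):
    """Frozen feature extractor with a ReLU nonlinearity."""
    return np.maximum(x @ W_backbone, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy 3-class "growth stage" data: 90 images of 64 pixels each
X = rng.normal(size=(90, 64))
y = rng.integers(0, 3, size=90)

# Transfer learning: train only a small softmax head on the frozen features
feats = extract_features(X)
W_head = np.zeros((16, 3))
onehot = np.eye(3)[y]
for _ in range(200):  # plain gradient descent on the cross-entropy loss
    probs = softmax(feats @ W_head)
    grad = feats.T @ (probs - onehot) / len(X)
    W_head -= 0.1 * grad

train_acc = (softmax(feats @ W_head).argmax(axis=1) == y).mean()
```

Because only `W_head` is updated, training is cheap and needs far less data than training a full network — which is exactly why transfer learning suits a small, newly collected dataset such as the radish growth stages.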

Review


23 pages, 21528 KiB  
Review
Enhancing 3D Lung Infection Segmentation with 2D U-Shaped Deep Learning Variants
by Anindya Apriliyanti Pravitasari, Mohammad Hamid Asnawi, Farid Azhar Lutfi Nugraha, Gumgum Darmawan and Triyani Hendrawati
Appl. Sci. 2023, 13(21), 11640; https://doi.org/10.3390/app132111640 - 24 Oct 2023
Viewed by 1241
Abstract
Accurate lung segmentation plays a vital role in generating 3D projections of lung infections, which contribute to the diagnosis and treatment planning of various lung diseases, including cases like COVID-19. This study capitalizes on the capabilities of deep learning techniques to reconstruct 3D lung projections from CT scans. In this pursuit, we employ well-established 2D architectural frameworks like UNet, LinkNet, Attention UNet, UNet 3+, and TransUNet. The dataset used comprises 20 3D CT scans from COVID-19 patients, resulting in over 2900 raw 2D slices. Following preprocessing, the dataset is refined to encompass 2560 2D slices tailored for modeling. Preprocessing procedures involve mask refinement, image resizing, contrast limited adaptive histogram equalization (CLAHE), and image augmentation to enhance data quality and diversity. Evaluation metrics, including Intersection over Union (IoU) and Dice scores, are used to assess the models' performance. Among the models tested, Attention UNet stands out, demonstrating the highest performance. Its use of attention mechanisms enhances its ability to focus on crucial features. This translates to exceptional results, with an IoU score of 85.36% and a Dice score of 91.49%. These findings provide valuable insights for selecting an appropriate architecture tailored to specific requirements, considering factors such as segmentation accuracy and computational resources, in the context of 3D lung projection reconstruction.
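The IoU and Dice metrics used in this review have a direct set-based definition on binary masks. A minimal NumPy implementation, with hypothetical toy masks standing in for real CT slices, looks like this:

```python
import numpy as np

def iou_score(pred, target, eps=1e-7):
    """Intersection over Union for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient (pixel-level F1) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: predicted infection region vs. ground truth
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[0, 1, 1, 1],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])

iou = iou_score(pred, gt)    # intersection = 6, union = 8 -> 0.75
dice = dice_score(pred, gt)  # 2*6 / (7 + 7) -> ~0.857
```

For a single binary mask pair, Dice and IoU are linked by Dice = 2·IoU / (1 + IoU), so the reported 85.36% IoU and 91.49% Dice are roughly consistent (scores averaged over many slices need not satisfy the identity exactly).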
