Recent Advances in Deep Transfer Learning Applications for Image Processing Problems and Big Data

A special issue of Big Data and Cognitive Computing (ISSN 2504-2289).

Deadline for manuscript submissions: 30 June 2024 | Viewed by 10701

Special Issue Editors


Guest Editor
Department of Information Technology, Satya Wacana Christian University, Salatiga 50711, Indonesia
Interests: database programming; advanced machine learning; feature selection; artificial neural networks; computer vision; object detection

Special Issue Information

Dear Colleagues,

Recently, deep learning for big data and image processing has become an increasingly popular area of discussion. This Special Issue is devoted to the topic of "Recent Advances in Deep Transfer Learning Applications for Image Processing Problems and Big Data". Transfer learning is a technique whereby a neural network model is first trained on a problem similar to the one being solved. Transfer learning has the advantage of decreasing the training time for a learning model and can result in lower generalization errors. It can also help when only limited labeled data are available, since the model has already been trained on a related task beforehand. This Special Issue aims to host original, unpublished, and breakthrough concepts in transfer learning applications and computer vision that use new algorithms and mechanisms, such as artificial intelligence, machine learning, and explainable artificial intelligence (XAI). The objective is to bring leading scientists and researchers together and to create an interdisciplinary platform for computational theories, methodologies, and techniques.
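As a rough illustration of this idea, the sketch below mimics transfer learning in plain NumPy: a fixed "pretrained" projection stands in for a frozen convolutional backbone, and only a small new classification head is trained on the target data. The shapes, learning rate, and synthetic data here are illustrative assumptions, not part of any specific model.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: a fixed (frozen) random projection that
# stands in for backbone layers trained on a large source dataset.
W_frozen = rng.normal(size=(8, 4))

def extract(x):
    # Frozen backbone: never updated during target-task training.
    return np.maximum(x @ W_frozen, 0.0)  # linear map + ReLU

# Small labeled target dataset (e.g., a new image-classification task).
X = rng.normal(size=(32, 8))
y = (X.sum(axis=1) > 0).astype(float)

# Only the new head (w, b) is trained; the backbone stays frozen.
w = np.zeros(4)
b = 0.0
for _ in range(200):
    feats = extract(X)
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid head
    grad = p - y                                # logistic-loss gradient
    w -= 0.1 * feats.T @ grad / len(X)
    b -= 0.1 * grad.mean()
```

Because only the small head is optimized, training is fast and the frozen representation carries over knowledge from the source task, which is the practical appeal of transfer learning described above.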

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  • Multi-class image classification and multi-label image classification;
  • Audio/video systems and signal processing;
  • Object detection and recognition systems and Big Data analysis;
  • Explainable artificial intelligence (XAI) and Big Data;
  • Embedded systems and transfer learning applications;
  • Image processing and vision computing;
  • Image/video-based object detection using deep learning;
  • Deep learning-based object detection for real-world applications and Big Data;
  • Image, video, and 3D scene processing;
  • Emerging techniques in learning for image, video, and 3D vision.

We look forward to receiving your contributions. 

Dr. Christine Dewi
Prof. Dr. Rung-Ching Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Big Data and Cognitive Computing is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research

19 pages, 5540 KiB  
Article
Transfer Learning Approach to Seed Taxonomy: A Wild Plant Case Study
by Nehad M. Ibrahim, Dalia G. Gabr, Atta Rahman, Dhiaa Musleh, Dania AlKhulaifi and Mariam AlKharraa
Big Data Cogn. Comput. 2023, 7(3), 128; https://doi.org/10.3390/bdcc7030128 - 4 Jul 2023
Cited by 6 | Viewed by 2106
Abstract
Plant taxonomy is the scientific study of the classification and naming of various plant species. It is a branch of biology that aims to categorize and organize the diverse variety of plant life on earth. Traditionally, plant taxonomy has been performed using morphological and anatomical characteristics, such as leaf shape, flower structure, and seed and fruit characters. Artificial intelligence (AI), machine learning, and especially deep learning can also play an instrumental role in plant taxonomy by automating the process of categorizing plant species based on the available features. This study investigated transfer learning techniques to analyze images of plants and extract features that can be used to cluster the species hierarchically using the k-means clustering algorithm. Several pretrained deep learning models were employed and evaluated. In this regard, two separate datasets were used in the study, comprising seed images of wild plants collected from Egypt. Extensive experiments using the transfer learning method (DenseNet201) demonstrated that the proposed methods achieved superior accuracy compared to traditional methods, with a highest accuracy of 93% and an F1-score and area under the curve (AUC) both of 95%. This is considerable in contrast to the state-of-the-art approaches in the literature.
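As a rough sketch of the clustering step described in this abstract, the code below implements a minimal k-means in NumPy and applies it to synthetic vectors standing in for pretrained DenseNet201 features. The feature dimension, the two synthetic blobs, and the farthest-first initialization are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def kmeans(features, k, iters=50):
    """Minimal k-means with deterministic farthest-first initialization."""
    centroids = [features[0]]
    for _ in range(1, k):
        # Pick the point farthest from all chosen centroids so far.
        dist = np.min([np.linalg.norm(features - c, axis=1) for c in centroids],
                      axis=0)
        centroids.append(features[dist.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):
        # Assign each feature vector to its nearest centroid.
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Refit each centroid as the mean of its assigned vectors.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return labels, centroids

# Stand-ins for DenseNet201 feature vectors of seed images: two separated blobs.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0.0, 0.1, (20, 16)),
                   rng.normal(5.0, 0.1, (20, 16))])
labels, _ = kmeans(feats, k=2)
```

In the study's pipeline, the feature vectors would come from a pretrained backbone rather than a random generator; the clustering step itself is unchanged.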

19 pages, 10184 KiB  
Article
Recognizing Road Surface Traffic Signs Based on Yolo Models Considering Image Flips
by Christine Dewi, Rung-Ching Chen, Yong-Cun Zhuang, Xiaoyi Jiang and Hui Yu
Big Data Cogn. Comput. 2023, 7(1), 54; https://doi.org/10.3390/bdcc7010054 - 22 Mar 2023
Cited by 7 | Viewed by 2749
Abstract
In recent years, there have been significant advances in deep learning and road marking recognition due to machine learning and artificial intelligence. Despite this progress, such work often relies heavily on unrepresentative datasets and limited situations. Drivers and advanced driver assistance systems rely on road markings to help them better understand their environment on the street. Road markings, also known as pavement markings, are signs and texts painted on the road surface, including directional arrows, pedestrian crossings, speed limit signs, zebra crossings, and other equivalent signs and texts. Our experiments briefly discuss convolutional neural network (CNN)-based object detection algorithms, specifically Yolo V2, Yolo V3, Yolo V4, and Yolo V4-tiny. In our experiments, we built the Taiwan Road Marking Sign Dataset (TRMSD) and made it a public dataset so that other researchers could use it. Further, we train the model to distinguish left and right objects as separate classes, and the Yolo V4 and Yolo V4-tiny results benefit from the "No Flip" setting. The best model in the experiment is Yolo V4 (No Flip), with a test accuracy of 95.43% and an IoU of 66.12%. In this study, Yolo V4 (without flipping) outperforms state-of-the-art schemes, achieving 81.22% training accuracy and 95.34% testing accuracy on the TRMSD dataset.
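This abstract reports detection quality partly as intersection over union (IoU). As a small illustrative sketch, not the paper's code, the function below computes IoU for two axis-aligned boxes in (x1, y1, x2, y2) form, the convention commonly used in YOLO-style evaluations.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two unit-overlap 2x2 boxes yield an IoU of 1/7; a detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.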

16 pages, 4964 KiB  
Article
Deep Learning for Highly Accurate Hand Recognition Based on Yolov7 Model
by Christine Dewi, Abbott Po Shun Chen and Henoch Juli Christanto
Big Data Cogn. Comput. 2023, 7(1), 53; https://doi.org/10.3390/bdcc7010053 - 22 Mar 2023
Cited by 17 | Viewed by 4788
Abstract
Hand detection is a key step in the pre-processing stage of many computer vision tasks because human hands are involved in the activity. Examples of such tasks include hand posture estimation, hand gesture recognition, and human activity analysis. Human hands have a wide range of motion and change their appearance in many different ways, which makes it hard to identify hands in crowded scenes. In this investigation, we provide a concise analysis of CNN-based object recognition algorithms, more specifically, the Yolov7 and Yolov7x models with 100 and 200 epochs. This study explores a vast array of object detectors, some of which are used in hand recognition applications. Further, we train and test our proposed method on the Oxford Hand Dataset with the Yolov7 and Yolov7x models. Important statistics, such as the number of GFLOPS, the mean average precision (mAP), and the detection time, are tracked via performance metrics. The results of our research indicate that Yolov7x trained for 200 epochs is the most stable approach compared to the other methods, achieving 84.7% precision, 79.9% recall, and 86.1% mAP during training. In addition, Yolov7x accomplished the highest average mAP score, 86.3%, during the testing stage.
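Since the results above are reported as mean average precision (mAP), a brief sketch of how per-class average precision (AP) is computed may be useful; mAP is then the mean of AP over all classes. The rectangle-rule AP below is a generic illustration, an assumption on our part rather than Yolov7's exact evaluation code.

```python
def average_precision(scores, is_tp, num_gt):
    """AP for one class as the area under the precision-recall curve.

    scores: confidence of each detection.
    is_tp:  whether each detection matched a ground-truth box (e.g., IoU > 0.5).
    num_gt: total number of ground-truth boxes for this class.
    """
    # Rank detections by descending confidence.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for i in order:
        if is_tp[i]:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / num_gt
        # Accumulate area under the curve with the rectangle rule.
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```

For instance, three detections with confidences 0.9, 0.8, 0.7, of which the first and third are true positives against two ground-truth boxes, give an AP of 5/6.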
