AI for Computational Vision, Natural Language Processing, and Geoinformatics

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 September 2023)

Special Issue Editors


Guest Editor
School of Automation, University of Electronic Science and Technology of China, Chengdu 610054, China
Interests: surgical robot; AI/ML; haptics; teleoperation; medical robotics; image fusion; surgical vision; 3D visualization; adaptive visualization; artificial neural network; geoinformatics (GIS); artificial intelligence; computer graphics; motion tracking; image processing; machine vision; 3D reconstruction; medical imaging; robotic surgery; data mining; earth surface process; cognitive intelligence; GIS/RS; visual reasoning; visual question answering; cloud computing; perception and cognition
Special Issues, Collections and Topics in MDPI journals

Guest Editor
College of Computer Science and Cyber Security, Chengdu University of Technology, Chengdu 610059, China
Interests: image and video processing; machine learning and deep learning; data mining and big data; intelligent information processing; information security; data science; artificial intelligence; blockchain; nuclear measurement and control technology; system control.
Special Issues, Collections and Topics in MDPI journals

Guest Editor
Department of Epidemiology and Biostatistics, College of Public Health and Social Justice, Saint Louis University, St. Louis, MO 63103, USA
Interests: geoinformatics; spatial computation and modeling of community resilience/sustainability; data science and statistics in land use; geo-simulation of human and environmental systems; GeoAI (artificial intelligence) frameworks; integrated geo-cyber-infrastructures; urban planning; GIS/RS; AI/ML; social equity; land development; urbanization; space value modelling; social sensing; GeoAI; land management; land policy
Special Issues, Collections and Topics in MDPI journals

Guest Editor
School of Public Affairs and Administration, University of Electronic Science and Technology of China, Chengdu 610054, China
Interests: geoinformatics; urban planning; urban renewal; real estate; GIS/RS; AI/ML; social equity; land development; urbanization; space value modelling; post-productivism transformation; social sensing; GeoAI; land management; land policy
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

The purpose of this Special Issue is to showcase recent advances in the application of artificial intelligence (AI) to computational vision, natural language processing, and geoinformatics. With the rapid progress of technology, these fields have become pivotal in various domains. Computational vision, for instance, empowers automatic detection and recognition algorithms in critical areas such as quality inspection, robotic guidance, and autonomous driving. It combines sophisticated software algorithms with dedicated hardware to enable tasks such as object sensing, image processing, and image understanding.

To address crucial applications such as medical prognosis and autonomous driving, where the cost of decision errors can be severe—even fatal—AI has been integrated into computational vision. The utilization of deep network architectures and other AI techniques enables efficient image processing and the extraction of subtle features, thereby enhancing image recognition and understanding.

Current research is actively exploring restoration and enhancement techniques that contribute to human activity recognition, surgical medicine, geoinformatics, and remote sensing analysis. These areas are essential in helping us perceive and comprehend the world more effectively.

Another promising avenue is intelligent reasoning facilitated by AI. Semantic and visual reasoning enable machines to perform tasks that resemble human intelligence, thus improving human–computer interactions and decision-making processes. This breakthrough has found applications in diverse domains, ranging from medical care and environmental analysis to autonomous driving, text classification, recommender systems, machine translation, and simulated dialogue. Although the integration of computer vision and natural language processing is still an area of ongoing exploration, it holds immense potential. Cross-modal applications such as visual question answering (VQA), visual reasoning, and video translation necessitate the processing of large-scale datasets involving visual, textual, and voice-based information. Achieving superior results requires the fusion of features and the representation of high-level knowledge.

In summary, the advent of AI has revolutionized multiple industries through its impact on computational vision, natural language processing, and geoinformatics. By integrating these technologies, previously arduous or seemingly impossible tasks have been made attainable. As researchers continue to delve into the capabilities of AI, new opportunities and applications will undoubtedly emerge, further augmenting intelligent systems’ ability to tackle complex problems.

We cordially invite you to contribute your original high-quality research and comprehensive review articles to this Special Issue. Your submissions should address the subject of the current issue, shedding light on the latest advances in AI for computational vision, natural language processing, and geoinformatics. Each submitted paper will undergo a rigorous evaluation process by two to three independent reviewers, focusing on relevance, the significance of the contribution, technical rigor, and the quality of the presentation.

This Special Issue welcomes research demonstrating new theories and methods, unique application strategies, and studies of AI in the above fields. Topics of interest include, but are not limited to:

  • Machine learning in image processing;
  • Neural networks in image processing;
  • Video-based activity recognition, sensor-based activity recognition;
  • Expert systems in image processing;
  • Knowledge engineering in image processing;
  • Medical image (e.g., CT, MRI, ultrasound) processing;
  • Intelligent agents and multi-agent systems in image processing;
  • Artificial intelligence for augmented perception;
  • Machine translation, text sentiment analysis, text classification;
  • Semantic reasoning, semantic representation, knowledge base;
  • Visual question answering (VQA) and visual reasoning;
  • Characterization inference, natural language reasoning;
  • Geospatial artificial intelligence, geospatial AI (GeoAI);
  • AI in geostatistics, remote sensing, and spatiotemporal simulation;
  • AI for geospatial data acquisition, analysis, planning, and prediction;
  • Visual augmentation and reconstruction, 3D reconstruction of deformable surfaces.

Dr. Wenfeng Zheng
Prof. Dr. Mingzhe Liu
Dr. Kenan Li
Prof. Dr. Xuan Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning in image processing
  • neural networks in image processing
  • video-based activity recognition, sensor-based activity recognition
  • expert systems in image processing
  • knowledge engineering in image processing
  • medical image (e.g., CT, MRI, ultrasound) processing
  • intelligent agents and multi-agent systems in image processing
  • artificial intelligence for augmented perception
  • machine translation, text sentiment analysis, text classification
  • semantic reasoning, semantic representation, knowledge base
  • visual question answering (VQA) and visual reasoning
  • characterization inference, natural language reasoning
  • geospatial artificial intelligence, geospatial AI (GeoAI)
  • AI in geostatistics, remote sensing, and spatiotemporal simulation
  • AI for geospatial data acquisition, analysis, planning, and prediction
  • visual augmentation and reconstruction, 3D reconstruction of deformable surfaces

Published Papers (12 papers)


Editorial


4 pages, 190 KiB  
Editorial
AI for Computational Vision, Natural Language Processing, and Geoinformatics
by Wenfeng Zheng, Mingzhe Liu, Kenan Li and Xuan Liu
Appl. Sci. 2023, 13(24), 13276; https://doi.org/10.3390/app132413276 - 15 Dec 2023
Abstract
The rapid development of artificial intelligence technology has had a huge impact on the fields of computer vision, natural language processing, and geographic information applications [...]

Research


21 pages, 11209 KiB  
Article
An Efficient Steganographic Protocol for WebP Files
by Katarzyna Koptyra and Marek R. Ogiela
Appl. Sci. 2023, 13(22), 12404; https://doi.org/10.3390/app132212404 - 16 Nov 2023
Cited by 1
Abstract
In this paper, several ideas of data hiding in WebP images are presented. WebP is a long-known but not very popular file format that provides lossy or lossless compression of data, in the form of a still image or an animation. A great number of WebP features are optional, so the structure of the image offers great opportunities for data hiding. The article describes distinct approaches to steganography, divided into two categories: format-based and data-based. Among format-based methods, we name simple injection, multi-secret steganography that uses thumbnails, and hiding a message in metadata or in a specific data chunk. Data-based methods achieve secret concealment with the use of a transparent, WebP-specific algorithm that embeds bits by choosing proper prediction modes and by altering the color indexing transform. The capacity of the presented techniques varies. It may be unlimited for injection, up to a few hundred megabytes for other format-based algorithms, or content-dependent in data-based techniques. These methods fit into the container modification branch of steganography. We also present a container selection technique which benefits from available WebP compression parameters. Images generated with the described methods were tested with three applications: the Firefox web browser, the GNU Image Manipulation Program, and ImageMagick. Some of the presented techniques can be combined in order to conceal more than one message in a single carrier.

19 pages, 3472 KiB  
Article
DaGATN: A Type of Machine Reading Comprehension Based on Discourse-Apperceptive Graph Attention Networks
by Mingli Wu, Tianyu Sun, Zhuangzhuang Wang and Jianyong Duan
Appl. Sci. 2023, 13(22), 12156; https://doi.org/10.3390/app132212156 - 09 Nov 2023
Cited by 1
Abstract
In recent years, with the advancement of natural language processing techniques and the release of models like ChatGPT, how language models understand questions has become a hot topic. When handling complex logical reasoning, however, the performance of pre-trained models still has room for improvement. Inspired by DAGN, we propose an improved model, DaGATN (Discourse-Apperceptive Graph Attention Networks). By constructing a discourse information graph to learn logical clues in the text, we decompose the context, question, and answer into elementary discourse units (EDUs) and connect them with discourse relations to construct a relation graph. The text features are learned through a discourse graph attention network and applied to downstream multiple-choice tasks. Our method was evaluated on the ReClor dataset and achieved an accuracy of 74.3%, surpassing the best-known methods that utilize DeBERTa-xlarge-level pre-trained models, and also performed better than ChatGPT (zero-shot).

14 pages, 13498 KiB  
Article
Within-Document Arabic Event Coreference: Challenges, Datasets, Approaches and Future Direction
by Mohammed Aldawsari, Manjur Kolhar and Omer Salih Dawood Omer
Appl. Sci. 2023, 13(19), 11004; https://doi.org/10.3390/app131911004 - 06 Oct 2023
Cited by 2
Abstract
Event coreference resolution is a crucial component in Natural Language Processing (NLP) applications, as it directly affects text summarization, machine translation, classification, and textual entailment. However, research on this task for the Arabic language is limited compared to other languages such as English, Chinese, and Spanish. This paper aims to review the state-of-the-art approaches in event coreference (EC) within the context of coreference resolution tasks, emphasizing the significance of EC in NLP. The focus is placed on the latest developments in Arabic language processing related to event coreference. To fill this gap, a comprehensive study of existing work is conducted, and new approaches are suggested. The paper highlights the challenges specific to Arabic event coreference resolution, such as the variability of verb forms, pronoun ambiguity, ellipsis and null arguments, lexical and morphological variation, the lack of annotated resources, discourse and pragmatic context, and cultural and contextual sensitivity. Addressing these challenges requires a deep understanding of Arabic linguistics, advanced NLP techniques, and the availability of annotated resources. Furthermore, this paper examines the existing datasets and methods for Arabic event coreference and proposes an annotation scheme. By leveraging existing NLP algorithms and developing event coreference resolution systems tailored for Arabic, the accuracy and performance of NLP tasks can be significantly improved.
25 pages, 1127 KiB  
Article
Comprehensive Study of Arabic Satirical Article Classification
by Fatmah Assiri and Hanen Himdi
Appl. Sci. 2023, 13(19), 10616; https://doi.org/10.3390/app131910616 - 23 Sep 2023
Cited by 2
Abstract
A well-known issue for social media sites is the hazy boundary between malicious false news and protected-speech satire. In addition to the protective measures that lessen the exposure of false material on social media, providers of fake news have started to pose as satire sites in order to escape being delisted. This may confuse readers, as satire can sometimes be mistaken for real news, especially when its context or intent is not clearly understood and it is written in a journalistic format imitating real articles. In this research, we tackle the issue of classifying Arabic satiric articles written in a journalistic format to detect satirical cues that aid in satire classification. To accomplish this, we compiled the first Arabic satirical articles dataset, extracted from real-world satirical news platforms. Then, a number of classification models that integrate a variety of feature extraction techniques with machine learning, deep learning, and transformers to detect the provenance of linguistic and semantic cues were investigated, including the first use of the ArabGPT model. Our results indicate that BERT is the best-performing model, with an F1-score reaching 95%. We also provide an in-depth lexical analysis of the formation of Arabic satirical articles, which offers insights into the satirical nature of the articles in terms of their linguistic word use. Finally, we developed a free open-source platform that automatically organizes satirical and non-satirical articles into their correct classes using the best-performing model in our study, BERT. In summary, the obtained results show that pretrained models give promising results in classifying Arabic satirical articles.

12 pages, 2828 KiB  
Article
Attention Block Based on Binary Pooling
by Chang Chen and Huaixiang Zhang
Appl. Sci. 2023, 13(18), 10012; https://doi.org/10.3390/app131810012 - 05 Sep 2023
Cited by 1
Abstract
Image classification has become highly significant in the field of computer vision due to its wide array of applications. In recent years, Convolutional Neural Networks (CNNs) have emerged as potent tools for addressing this task. Attention mechanisms offer an effective approach to enhance the accuracy of image classification. Although Global Average Pooling (GAP) is a crucial component of traditional attention mechanisms, it only computes the average of the spatial elements in each channel, failing to capture the complete range of feature information and resulting in fewer and less expressive features. To address this limitation, we propose a novel pooling operation named "Binary Pooling" and integrate it into the attention block. Binary pooling combines both GAP and Global Max Pooling (GMP), obtaining a more comprehensive feature vector by extracting average and maximum values, thereby enriching the diversity of extracted image features. Furthermore, to further enhance the extraction of image features, dilation operations and pointwise convolutions are applied channel-wise. The proposed attention block is simple yet highly effective. Upon integration into ResNet18/50 models, it leads to accuracy improvements of 2.02%/0.63% on ImageNet.
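The core of the binary pooling idea in the abstract above can be sketched in a few lines of NumPy: concatenate GAP and GMP descriptors, then use them for sigmoid-gated channel reweighting. The linear projection `w` is an illustrative stand-in for the paper's small conv/MLP layers, and the dilation and pointwise-convolution refinements are omitted.

```python
import numpy as np

def binary_pool(x):
    """'Binary pooling': concatenate global average pooling (GAP) and global
    max pooling (GMP) over the spatial dims of a (C, H, W) feature map,
    yielding a (2*C,) descriptor instead of GAP's (C,)."""
    gap = x.mean(axis=(1, 2))
    gmp = x.max(axis=(1, 2))
    return np.concatenate([gap, gmp])

def channel_attention(x, w):
    """Sigmoid-gated channel reweighting driven by the pooled descriptor.
    `w` is a hypothetical (C, 2*C) projection standing in for the attention
    block's learned layers."""
    s = 1.0 / (1.0 + np.exp(-(w @ binary_pool(x))))  # (C,) channel weights
    return x * s[:, None, None]
```

With a zero projection every channel weight is sigmoid(0) = 0.5, which makes the gating easy to sanity-check.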

16 pages, 2013 KiB  
Article
Marketing Insights from Reviews Using Topic Modeling with BERTopic and Deep Clustering Network
by Yusung An, Hayoung Oh and Joosik Lee
Appl. Sci. 2023, 13(16), 9443; https://doi.org/10.3390/app13169443 - 21 Aug 2023
Cited by 2
Abstract
The feedback shared by consumers on e-commerce platforms holds immense value in marketing, as it offers insights into their opinions and preferences, which are readily accessible. However, analyzing a large volume of reviews manually is impractical. Therefore, automating the extraction of essential insights from these data can provide more comprehensive and efficient information. This research focuses on leveraging clustering algorithms to automate the extraction of consumer intentions, related products, and the pros and cons of products from review data. To achieve this, a review dataset was created by performing web crawling on the Naver Shopping platform. The findings are expected to contribute to a more precise understanding of consumer sentiments, enabling marketers to make informed decisions across a wide range of products and services.

17 pages, 6740 KiB  
Article
SP-YOLOv8s: An Improved YOLOv8s Model for Remote Sensing Image Tiny Object Detection
by Mingyang Ma and Huanli Pang
Appl. Sci. 2023, 13(14), 8161; https://doi.org/10.3390/app13148161 - 13 Jul 2023
Cited by 6
Abstract
An improved YOLOv8s-based method is proposed to address the challenge of accurately recognizing tiny objects in remote sensing images during practical human-computer interaction. In detecting tiny targets, the accuracy of YOLOv8s is low because the downsampling module of the original algorithm causes the network to lose fine-grained feature information, and the feature information in the neck network is not sufficiently fused. In this method, the strided convolution module in YOLOv8s is replaced with the SPD-Conv module. By doing so, the feature map undergoes downsampling while preserving fine-grained feature information, thereby improving the learning and expressive capabilities of the network and enhancing recognition accuracy. Meanwhile, the path aggregation network is substituted with the SPANet structure, which provides richer gradient paths. This substitution enhances the fusion of feature maps at various scales, reduces model parameters, and further improves detection accuracy. Additionally, it enhances the network's robustness to complex backgrounds. Experimental verification is conducted on two intricate datasets containing tiny objects: AI-TOD and TinyPerson. A comparative analysis with the original YOLOv8s algorithm reveals notable enhancements in recognition accuracy. Specifically, under real-time performance constraints, the proposed method yields improvements of 4.9% and 9.1% in mAP0.5 recognition accuracy on the AI-TOD and TinyPerson datasets, respectively, and the recognition accuracy for mAP0.5:0.95 is enhanced by 3.4% and 3.2% on the same datasets. The results indicate that the proposed method enables rapid and accurate recognition of tiny objects in complex backgrounds, and it demonstrates better recognition precision and stability than other algorithms, such as YOLOv5s and YOLOv8s.
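The space-to-depth rearrangement at the heart of SPD-Conv can be sketched as the following minimal NumPy function; the non-strided convolution that follows the rearrangement in the actual module is omitted here:

```python
import numpy as np

def space_to_depth(x, scale=2):
    """Rearrange a (C, H, W) feature map into (C*scale**2, H/scale, W/scale).
    Unlike strided convolution, no pixel is discarded: each spatial offset
    within a scale x scale block becomes its own channel group, so the
    downsampled map keeps all fine-grained information."""
    c, h, w = x.shape
    assert h % scale == 0 and w % scale == 0
    x = x.reshape(c, h // scale, scale, w // scale, scale)
    # move the intra-block offsets next to the channel axis, then fold them in
    return x.transpose(0, 2, 4, 1, 3).reshape(c * scale * scale,
                                              h // scale, w // scale)
```

For a 4x4 single-channel input, the first output channel is the sub-grid of even rows and even columns, and all 16 input values survive the rearrangement.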

16 pages, 14420 KiB  
Article
ECGYOLO: Mask Detection Algorithm
by Wenyi Hu, Jinling Zou, Yuan Huang, Hongkun Wang, Kun Zhao, Mingzhe Liu and Shan Liu
Appl. Sci. 2023, 13(13), 7501; https://doi.org/10.3390/app13137501 - 25 Jun 2023
Cited by 1
Abstract
In recent years, wearing masks has become a necessity in daily life due to the rampant novel coronavirus and the increasing importance people place on health and life safety. However, current mask detection algorithms are difficult to run on low-computing-power hardware platforms and have low accuracy. To resolve this discrepancy, a lightweight mask detection algorithm, ECGYOLO, based on an improved YOLOv7tiny is proposed. This algorithm uses GhostNet to replace the original convolutional layers and an ECG module in place of the ELAN module, which greatly improves inspection efficiency and decreases the number of model parameters. In the meantime, the ECA (efficient channel attention) mechanism is introduced into the neck section to boost the feature extraction capability of the channels, and Mosaic and Mixup data augmentation techniques are adopted in training to obtain mask images under different viewpoints, improving the comprehensiveness and effectiveness of the model. Experiments show that the mAP (mean average precision) of the algorithm is raised by 4.4% to 92.75%, and the number of parameters is decreased by 1.14 M to 5.06 M compared with the original YOLOv7tiny. ECGYOLO is more efficient than other current algorithms and can meet the real-time and lightweight needs of mask detection.

23 pages, 2542 KiB  
Article
Multi-Label Classification of Chinese Rural Poverty Governance Texts Based on XLNet and Bi-LSTM Fused Hierarchical Attention Mechanism
by Xin Wang and Leifeng Guo
Appl. Sci. 2023, 13(13), 7377; https://doi.org/10.3390/app13137377 - 21 Jun 2023
Cited by 2
Abstract
Hierarchical multi-label text classification (HMTC) is a highly relevant and widely discussed topic in the era of big data, particularly for efficiently classifying extensive amounts of text data. This study proposes the HTMC-PGT framework for poverty governance’s single-path hierarchical multi-label classification problem. The framework simplifies the HMTC problem into training and combination problems of multi-class classifiers in the classifier tree. Each independent classifier in this framework uses an XLNet pretrained model to extract char-level semantic embeddings of text and employs a hierarchical attention mechanism integrated with Bi-LSTM (BiLSTM + HA) to extract semantic embeddings at the document level for classification purposes. Simultaneously, this study proposes that the structure uses transfer learning (TL) between classifiers in the classifier tree. The experimental results show that the proposed XLNet + BiLSTM + HA + FC + TL model achieves micro-P, micro-R, and micro-F1 values of 96.1%, which is 7.5~38.1% higher than those of other baseline models. The HTMC-PGT framework based on XLNet, BiLSTM + HA, and transfer learning (TL) between classifier tree nodes proposed in this study solves the hierarchical multi-label classification problem of poverty governance text (PGT). It provides a new idea for solving the traditional HMTC problem.
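The single-path classifier-tree routing described above can be sketched in plain Python. The tree, labels, and stub classifiers below are hypothetical examples; in the paper each node's classifier is an XLNet + BiLSTM + HA model rather than a simple callable.

```python
def classify_single_path(text, tree, classifiers):
    """Single-path hierarchical classification: walk the classifier tree from
    the root, letting the multi-class classifier at each internal node pick
    exactly one child label, until a leaf is reached. The returned path is
    the document's full hierarchical label."""
    path, node = [], "root"
    while node in tree:               # internal node: delegate to its classifier
        node = classifiers[node](text)
        path.append(node)
    return path                       # leaf reached: single root-to-leaf path

# Illustrative two-level tree with trivial stand-in classifiers.
tree = {"root": ["policy", "finance"], "policy": ["relocation", "education"]}
classifiers = {
    "root": lambda t: "policy",       # stub for a node-level multi-class model
    "policy": lambda t: "relocation",
}
```

This framing turns the HMTC problem into independently trainable per-node classifiers, which is what makes transfer learning between tree nodes straightforward.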

12 pages, 13397 KiB  
Article
Identifying Malignant Breast Ultrasound Images Using ViT-Patch
by Hao Feng, Bo Yang, Jingwen Wang, Mingzhe Liu, Lirong Yin, Wenfeng Zheng, Zhengtong Yin and Chao Liu
Appl. Sci. 2023, 13(6), 3489; https://doi.org/10.3390/app13063489 - 09 Mar 2023
Cited by 30
Abstract
Recently, the Vision Transformer (ViT) model has been used for various computer vision tasks, due to its advantages in extracting long-range features. To better integrate the long-range features useful for classification, the standard ViT adds a class token in addition to the patch tokens. Despite state-of-the-art results on some traditional vision tasks, the ViT model typically requires large datasets for supervised training, and thus it still faces challenges in areas where it is difficult to build large datasets, such as medical image analysis. In the ViT model, only the output corresponding to the class token is fed to a Multi-Layer Perceptron (MLP) head for classification, while the outputs corresponding to the patch tokens are left unused. In this paper, we propose an improved ViT architecture (called ViT-Patch), which adds a shared MLP head to the output of each patch token to balance the feature learning on the class and patch tokens. In addition to the primary task, which uses the output of the class token to discriminate whether the image is malignant, a secondary task is introduced, which uses the output of each patch token to determine whether the patch overlaps with the tumor area. Interestingly, due to the correlation between the primary and secondary tasks, the supervisory information added to the patch tokens helps improve the performance of the primary task on the class token. The introduction of secondary supervision also improves the attention interaction among the class and patch tokens, and in this way reduces the demand on dataset size. The proposed ViT-Patch is validated on a publicly available dataset, and the experimental results show its effectiveness for both malignant identification and tumor localization.
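The dual-head readout of ViT-Patch can be sketched as follows. Linear heads and the weight shapes are illustrative simplifications of the paper's MLP heads; the point is that the class token gets its own head while all patch tokens share one.

```python
import numpy as np

def vit_patch_heads(tokens, w_cls, w_patch):
    """tokens: (1 + N, D) array of encoder outputs, class token first.
    The class token feeds the primary (malignancy) head; every one of the
    N patch tokens feeds the same *shared* secondary head, which predicts
    whether that patch overlaps the tumor area."""
    cls_logit = float(tokens[0] @ w_cls)   # primary task: one logit per image
    patch_logits = tokens[1:] @ w_patch    # secondary task: one logit per patch
    return cls_logit, patch_logits
```

Because the secondary head is shared across patches, it adds only D extra parameters here (one weight vector), while supplying N supervision signals per image.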

13 pages, 1912 KiB  
Article
An Improved Passing Network for Evaluating Football Team Performance
by Wenxuan Zhou, Guo Yu, Songhui You and Zejun Wang
Appl. Sci. 2023, 13(2), 845; https://doi.org/10.3390/app13020845 - 07 Jan 2023
Cited by 3
Abstract
With the continuous development of sensor technology, the realization of football techniques and tactics comes with richer technical support. Among other approaches, network analysis has been widely used to analyze passing behavior, and some results have been achieved. However, most of these studies directly determine the weight of the passing edges between players by counting the number of passes, without carefully considering the potential contribution of a single pass. In view of this problem, we carried out the following work: (1) we map the football field to a coordinate system, calculate the endpoint coordinates of each pass, and take the coordinates as coefficients to obtain a weighted value for a single pass, then aggregate all passes to obtain a directed passing network; (2) on this network, for team evaluation that is difficult to quantify, we suggest that the ratio of the average clustering coefficient to the average betweenness centrality be taken as the overall network index to measure the coordination of the football team's performance; (3) we tested the proposed index against two scores. The index passed the correlation and sensitivity tests, which shows that it is helpful for explaining the coordination level of a team and has reference value for evaluating the competitiveness of a football team.
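Step (1) above, accumulating coordinate-weighted edges instead of raw pass counts, can be sketched in plain Python. The specific weighting below (forward progress toward the opponent's goal line, normalized by an assumed 105 m pitch length) is an illustrative stand-in, since the abstract does not fully specify the paper's coefficient:

```python
def add_pass(network, passer, receiver, x0, y0, x1, y1, goal_x=105.0):
    """Accumulate a directed pass edge (passer -> receiver) weighted by field
    position rather than a plain count. Every pass contributes at least 1, and
    forward progress toward the goal line x = goal_x adds up to 1 more."""
    progress = max(0.0, (x1 - x0) / goal_x)   # backward passes add no bonus
    network[(passer, receiver)] = network.get((passer, receiver), 0.0) + 1.0 + progress
    return network
```

Running a team's pass log through this function yields the weighted directed network on which clustering coefficients and betweenness centrality can then be computed.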
