
Information, Volume 14, Issue 7 (July 2023) – 73 articles

Cover Story (view full-size image): Health monitoring is crucial in hospitals and rehabilitation centres, as it plays a vital role in patient care. However, challenges affecting the reliability and accuracy of health data need to be addressed. To overcome these obstacles, we propose a non-intrusive smart sensing system that integrates a SensFloor smart carpet and an Inertial Measurement Unit (IMU) wearable sensor placed on the user's back to monitor position and gait characteristics. By employing Machine Learning (ML) algorithms, the system analyses the data collected from the SensFloor and IMU sensors, generating real-time data stored in the cloud. These real-time data are readily accessible to both physical therapists and patients, providing a comprehensive analysis of the user's gait and balance. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
18 pages, 3382 KiB  
Article
Employee Productivity Assessment Using Fuzzy Inference System
by Mohammad Nikmanesh, Ardalan Feili and Shahryar Sorooshian
Information 2023, 14(7), 423; https://doi.org/10.3390/info14070423 - 22 Jul 2023
Cited by 1 | Viewed by 1752
Abstract
The success of an organization hinges upon the effective utilization of its human resources, which serves as a crucial developmental factor and competitive advantage and sets the organization apart from others. Evaluating staff productivity involves considering various dimensions, notably structural, behavioral, and circumferential factors. These factors collectively form a three-pronged model that comprehensively encompasses the facets of an organization. However, assessing the productivity of employees poses challenges due to the inherent complexity of the humanities domain. Fuzzy logic offers a sound approach to this issue, and a fuzzy inference system (FIS) provides a sophisticated toolbox for measuring productivity. Fuzzy inference systems enhance flexibility, speed, and adaptability in soft computation; their applications, integration, hybridization, and adaptation are also introduced. They also provide an alternative solution for dealing with imprecise data. In this study, we endeavored to identify and measure the productivity of human resources within a case study by developing an alternative framework, an FIS. Our findings provided evidence to support the validity of this alternative approach. Thus, the utilized approach for assessing employee productivity may provide managers and businesses with a more realistic assessment. Full article
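The fuzzy inference pipeline named in the abstract — membership functions, a rule base, and defuzzification — can be sketched with a toy Mamdani-style example. All membership functions, rules, and scales below are illustrative assumptions, not the paper's actual model:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms on a 0..10 scale
def low(x):  return tri(x, -1, 0, 6)
def high(x): return tri(x, 4, 10, 11)

def productivity(structural, behavioral):
    # Rule 1: IF structural is high AND behavioral is high THEN output is high (peak 9)
    # Rule 2: IF structural is low  OR  behavioral is low  THEN output is low  (peak 2)
    w_high = min(high(structural), high(behavioral))
    w_low = max(low(structural), low(behavioral))
    if w_high + w_low == 0:
        return 5.0  # no rule fires: fall back to mid-scale
    # Weighted-average defuzzification over the rule consequents' peaks
    return (w_high * 9 + w_low * 2) / (w_high + w_low)

print(round(productivity(9, 8), 2))  # strong on both dimensions
print(round(productivity(2, 9), 2))  # one weak dimension drags the score down
```

A production FIS would use a fuller rule base and true centroid defuzzification over fuzzy output sets; the weighted-average shortcut above keeps the sketch self-contained.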

12 pages, 1558 KiB  
Article
Correction of Threshold Determination in Rapid-Guessing Behaviour Detection
by Muhammad Alfian, Umi Laili Yuhana, Eric Pardede and Akbar Noto Ponco Bimantoro
Information 2023, 14(7), 422; https://doi.org/10.3390/info14070422 - 21 Jul 2023
Cited by 1 | Viewed by 1004
Abstract
Assessment is one benchmark in measuring students’ abilities. However, assessment results cannot necessarily be trusted, because students sometimes cheat or even guess when answering the questions. Therefore, to obtain valid results, it is necessary to separate valid and invalid answers by considering rapid-guessing behaviour. We conducted a test to record exam log data from undergraduate and postgraduate students and modelled rapid-guessing behaviour by determining a threshold response time. Our rapid-guessing behaviour detection is inspired by the common k-second method. However, that method applies a single flat threshold to every item, which allows misclassification. The modified method considers item difficulty when determining the threshold. The evaluation results show that the system can identify students’ rapid-guessing behaviour with a success rate of 71%, which is superior to the previous method. We also analysed various aggregation techniques for response time and compared them to see the effect of selecting the aggregation technique. Full article
(This article belongs to the Special Issue International Database Engineered Applications)
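The contrast between the flat k-second rule and a difficulty-aware threshold can be sketched as follows. The scaling rule, the value of k, and the difficulty scale are illustrative assumptions, not the paper's exact formula:

```python
K_SECONDS = 5.0  # one flat cutoff for every item (common k-second method)

def flat_threshold(item):
    return K_SECONDS

def adjusted_threshold(item):
    # Harder items warrant more reading and thinking time, so the cutoff
    # grows with difficulty (difficulty assumed on a 0..1 scale).
    return K_SECONDS * (0.5 + item["difficulty"])

def is_rapid_guess(response_time, item, threshold_fn):
    """Flag a response as rapid guessing if it beats the item's cutoff."""
    return response_time < threshold_fn(item)

easy = {"difficulty": 0.1}
hard = {"difficulty": 0.9}

# A 6-second answer to a hard item: the flat rule accepts it,
# while the difficulty-aware rule flags it as a likely guess.
print(is_rapid_guess(6.0, hard, flat_threshold))      # False
print(is_rapid_guess(6.0, hard, adjusted_threshold))  # True
```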

17 pages, 4899 KiB  
Article
Combining Classifiers for Deep Learning Mask Face Recognition
by Wen-Chang Cheng, Hung-Chou Hsiao, Yung-Fa Huang and Li-Hua Li
Information 2023, 14(7), 421; https://doi.org/10.3390/info14070421 - 21 Jul 2023
Viewed by 1272
Abstract
This research proposes a single network model architecture for mask face recognition using the FaceNet training method. Three pre-trained convolutional neural networks of different sizes are combined, namely InceptionResNetV2, InceptionV3, and MobileNetV2. Each model is augmented by appending a fully connected network with a SoftMax output layer. We combine triplet loss and categorical cross-entropy loss to optimize the training process. In addition, the learning rate of the optimizer is dynamically updated using the cosine annealing mechanism, which improves the convergence of the model during training. Mask face recognition (MFR) experiments on a custom MASK600 dataset show that, with annealing, the proposed InceptionResNetV2 and InceptionV3 models need only 20 training epochs, and MobileNetV2 only 50, to achieve more than 93% accuracy, exceeding previous MFR works. In addition to reaching a practical level of accuracy, this shortens model training and effectively reduces energy costs. Full article
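The cosine annealing mechanism mentioned above has a standard closed form (Loshchilov and Hutter's SGDR schedule, here without restarts). The learning-rate bounds below are illustrative values, not the paper's; the 20-epoch budget matches the one quoted for the Inception backbones:

```python
import math

def cosine_annealing_lr(epoch, total_epochs, lr_max=1e-3, lr_min=1e-6):
    """Learning rate decayed from lr_max to lr_min along a half cosine."""
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1 + math.cos(math.pi * epoch / total_epochs))

total = 20  # e.g. the 20-epoch budget used for the Inception backbones
for epoch in (0, 10, 20):
    print(f"epoch {epoch:2d}: lr = {cosine_annealing_lr(epoch, total):.2e}")
```

The schedule starts at `lr_max`, halves smoothly around the midpoint, and lands at `lr_min`, which is what lets training converge in few epochs without a hand-tuned step decay.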

16 pages, 1690 KiB  
Article
Revisiting Softmax for Uncertainty Approximation in Text Classification
by Andreas Nugaard Holm, Dustin Wright and Isabelle Augenstein
Information 2023, 14(7), 420; https://doi.org/10.3390/info14070420 - 20 Jul 2023
Cited by 2 | Viewed by 1419
Abstract
Uncertainty approximation in text classification is an important area with applications in domain adaptation and interpretability. One of the most widely used uncertainty approximation methods is Monte Carlo (MC) dropout, which is computationally expensive as it requires multiple forward passes through the model. A cheaper alternative is to simply use a softmax based on a single forward pass without dropout to estimate model uncertainty. However, prior work has indicated that these predictions tend to be overconfident. In this paper, we perform a thorough empirical analysis of these methods on five datasets with two base neural architectures in order to identify the trade-offs between the two. We compare both softmax and an efficient version of MC dropout on their uncertainty approximations and downstream text classification performance, while weighing their runtime (cost) against performance (benefit). We find that, while MC dropout produces the best uncertainty approximations, using a simple softmax leads to competitive, and in some cases better, uncertainty estimation for text classification at a much lower computational cost, suggesting that softmax can in fact be a sufficient uncertainty estimate when computational resources are a concern. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence)
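The two estimators being compared can be sketched in a few lines: a single softmax pass scored by predictive entropy versus an average over several stochastic dropout passes. The toy logits and the way dropout is simulated here are illustrative assumptions, not the paper's setup:

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Predictive entropy: a common scalar uncertainty score."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def mc_dropout_probs(logits, n_passes=50, p_drop=0.3, seed=0):
    # Simulate inference-time dropout by randomly zeroing logits each pass
    # (with inverted-dropout rescaling), then average the softmax outputs.
    rng = random.Random(seed)
    mean = [0.0] * len(logits)
    for _ in range(n_passes):
        noisy = [0.0 if rng.random() < p_drop else z / (1 - p_drop)
                 for z in logits]
        for i, p in enumerate(softmax(noisy)):
            mean[i] += p / n_passes
    return mean

logits = [2.0, 0.5, 0.1]           # one forward pass worth of logits
single = softmax(logits)           # cheap: one pass
mc = mc_dropout_probs(logits)      # expensive: n_passes passes
print(f"softmax entropy:    {entropy(single):.3f}")
print(f"MC dropout entropy: {entropy(mc):.3f}")
```

The averaged distribution is flatter, i.e. less confident, which illustrates the paper's point: the single-pass softmax tends toward overconfidence but costs a fraction of the compute.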

16 pages, 2463 KiB  
Article
A Context Semantic Auxiliary Network for Image Captioning
by Jianying Li and Xiangjun Shao
Information 2023, 14(7), 419; https://doi.org/10.3390/info14070419 - 20 Jul 2023
Viewed by 1136
Abstract
Image captioning is a challenging task in which a sentence is generated for a given image. Earlier captioning methods mainly decode the visual features to generate caption sentences for the image. However, the visual features lack the context semantic information that is vital for generating an accurate caption sentence. To address this problem, this paper first proposes an Attention-Aware (AA) mechanism which can filter out erroneous or irrelevant context semantic information. AA is then used to constitute a Context Semantic Auxiliary Network (CSAN), which can capture effective context semantic information to regenerate or polish the image caption. Moreover, AA can capture the visual feature information needed to generate a caption. Experimental results show that our proposed CSAN outperforms the compared image captioning methods on the MS COCO “Karpathy” offline test split and the official online testing server. Full article
(This article belongs to the Section Artificial Intelligence)

10 pages, 1031 KiB  
Article
Natural Syntax, Artificial Intelligence and Language Acquisition
by William O’Grady and Miseon Lee
Information 2023, 14(7), 418; https://doi.org/10.3390/info14070418 - 20 Jul 2023
Viewed by 1932
Abstract
In recent work, various scholars have suggested that large language models can be construed as input-driven theories of language acquisition. In this paper, we propose a way to test this idea. As we will document, there is good reason to think that processing pressures override input at an early point in linguistic development, creating a temporary but sophisticated system of negation with no counterpart in caregiver speech. We go on to outline a (for now) thought experiment involving this phenomenon that could contribute to a deeper understanding both of human language and of the language models that seek to simulate it. Full article
16 pages, 8024 KiB  
Article
NARX Technique to Predict Torque in Internal Combustion Engines
by Federico Ricci, Luca Petrucci, Francesco Mariani and Carlo Nazareno Grimaldi
Information 2023, 14(7), 417; https://doi.org/10.3390/info14070417 - 20 Jul 2023
Cited by 4 | Viewed by 1259
Abstract
To carry out increasingly sophisticated checks, which comply with international regulations and stringent constraints, on-board computational systems are called upon to manipulate a growing number of variables, provided by an ever-increasing number of real and virtual sensors. The optimization phase of an ICE passes through the control of these numerous variables, which often exhibit rapidly changing trends over time. On the one hand, the amount of data to be processed, at narrow cyclical frequencies, demands ever more powerful computational equipment. On the other hand, computational strategies and techniques are required which allow actuation times that are useful for timely and optimized control. In the automotive industry, the machine learning approach is becoming one of the most used approaches for performing forecasting activities with reduced computational effort, due to both its cost-effectiveness and its simple and compact structure. In the present work, the nonlinear dynamic system we address is the torque estimation of an ICE through a nonlinear autoregressive with exogenous inputs (NARX) approach. Preliminary activities were performed to optimize the neural network in terms of neurons, hidden layers, and the number of input parameters to be assessed. A Shapley sensitivity analysis allowed quantification of the impact of each variable on the target prediction, and therefore a reduction in the amount of data to be processed by the architecture. In all cases analyzed, the optimized structure was able to achieve average percentage errors on the target prediction that were always lower than a critical threshold of 10%. In particular, when the dataset was augmented or the analyzed cases were merged, the architecture achieved average prediction errors of about 1%, highlighting its remarkable ability to reproduce the target with fidelity. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)
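The NARX structure itself is simple to sketch: the next torque sample is a function of lagged torque values (the autoregressive part) and lagged exogenous inputs such as throttle or engine speed. Here a linear map with made-up weights stands in for the paper's neural network; all lags, weights, and signals are illustrative:

```python
def narx_step(y_hist, u_hist, w_y, w_u, bias):
    """One-step-ahead prediction y(t) = f(y(t-1), y(t-2), ..., u(t-1), u(t-2), ...)."""
    return bias + sum(w * y for w, y in zip(w_y, reversed(y_hist))) \
                + sum(w * u for w, u in zip(w_u, reversed(u_hist)))

# Illustrative 2-lag model: torque persists (weights 0.6, 0.3) and
# responds to the last two throttle inputs (weights 0.8, 0.1).
w_y, w_u, bias = [0.6, 0.3], [0.8, 0.1], 0.0

torque = [10.0, 10.0]    # y(t-2), y(t-1)
throttle = [1.0, 2.0]    # u(t-2), u(t-1)
for u_next in [2.0, 2.0, 0.0]:
    # Closed-loop simulation: feed predictions back as the new lags.
    y_next = narx_step(torque[-2:], throttle[-2:], w_y, w_u, bias)
    torque.append(y_next)
    throttle.append(u_next)

print([round(y, 2) for y in torque])
```

In the paper the map f is a trained neural network and the lag orders and input set are chosen during the optimization phase; the feedback loop above is the same either way.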

11 pages, 1379 KiB  
Article
MSGAT-Based Sentiment Analysis for E-Commerce
by Tingyao Jiang, Wei Sun and Min Wang
Information 2023, 14(7), 416; https://doi.org/10.3390/info14070416 - 19 Jul 2023
Cited by 1 | Viewed by 1208
Abstract
Sentence-level sentiment analysis, as a research direction in natural language processing, has been widely used in various fields. To address the neglect of syntactic features in previous studies on sentence-level sentiment analysis, a multiscale graph attention network (MSGAT) sentiment analysis model based on dependency syntax is proposed. The model adopts RoBERTa_WWM as the text encoding layer, generates graphs on the basis of syntactic dependency trees, and obtains sentence sentiment features at different scales for text classification through a multilevel graph attention network. Compared with existing mainstream text sentiment analysis models, the proposed model achieves better performance on both a hotel review dataset and a takeaway review dataset, with accuracies of 94.8% and 93.7% and F1 scores of 96.2% and 90.4%, respectively. The results demonstrate the superiority and effectiveness of the model in Chinese sentence sentiment analysis. Full article

14 pages, 2658 KiB  
Article
Multi-Class Skin Cancer Classification Using Vision Transformer Networks and Convolutional Neural Network-Based Pre-Trained Models
by Muhammad Asad Arshed, Shahzad Mumtaz, Muhammad Ibrahim, Saeed Ahmed, Muhammad Tahir and Muhammad Shafi
Information 2023, 14(7), 415; https://doi.org/10.3390/info14070415 - 18 Jul 2023
Cited by 7 | Viewed by 3533
Abstract
Skin cancer, particularly melanoma, has been recognized as one of the most lethal forms of cancer. Detecting and diagnosing skin lesions accurately can be challenging due to the striking similarities between the various types of skin lesions, such as melanoma and nevi, especially when examining color images of the skin. However, early diagnosis plays a crucial role in saving lives and reducing the burden on medical resources. Consequently, the development of a robust autonomous system for skin cancer classification becomes imperative. Convolutional neural networks (CNNs) have been widely employed over the past decade to automate cancer diagnosis. Nonetheless, the Vision Transformer (ViT) has recently gained considerable popularity in the field and has emerged as a competitive alternative to CNNs. In light of this, the present study proposes an alternative method based on the off-the-shelf ViT for identifying various skin cancer diseases. To evaluate its performance, the proposed method was compared with 11 CNN-based transfer learning methods that have been known to outperform other deep learning techniques currently in use. Furthermore, this study addresses the issue of class imbalance within the dataset, a common challenge in skin cancer classification. In addressing this concern, the study leverages the vision transformer and the CNN-based transfer learning models to classify seven distinct types of skin cancer. Through our investigation, we found that the pre-trained vision transformer achieved an impressive accuracy of 92.14%, surpassing the CNN-based transfer learning models across several evaluation metrics for skin cancer diagnosis. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)

32 pages, 3421 KiB  
Article
Federated Edge Intelligence and Edge Caching Mechanisms
by Aristeidis Karras, Christos Karras, Konstantinos C. Giotopoulos, Dimitrios Tsolis, Konstantinos Oikonomou and Spyros Sioutas
Information 2023, 14(7), 414; https://doi.org/10.3390/info14070414 - 18 Jul 2023
Cited by 6 | Viewed by 1871
Abstract
Federated learning (FL) has emerged as a promising technique for preserving user privacy and ensuring data security in distributed machine learning contexts, particularly in edge intelligence and edge caching applications. Recognizing the prevalent challenges of imbalanced and noisy data impacting scalability and resilience, our study introduces two innovative algorithms crafted for FL within a peer-to-peer framework. These algorithms aim to enhance performance, especially in decentralized and resource-limited settings. Furthermore, we propose a client-balancing Dirichlet sampling algorithm with probabilistic guarantees to mitigate oversampling issues, optimizing data distribution among clients to achieve more accurate and reliable model training. Within the specifics of our study, we employed 10, 20, and 40 Raspberry Pi devices as clients in a practical FL scenario, simulating real-world conditions. The well-known FedAvg algorithm was implemented, enabling multi-epoch client training before weight integration. Additionally, we examined the influence of real-world dataset noise, culminating in a performance analysis that underscores how our novel methods and research significantly advance robust and efficient FL techniques, thereby enhancing the overall effectiveness of decentralized machine learning applications, including edge intelligence and edge caching. Full article
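The FedAvg step named above — multi-epoch local training on each client, then weight integration at the server — reduces to a dataset-size-weighted average of the client weight vectors. A minimal sketch with toy weights and client sizes (not results from the paper's Raspberry Pi testbed):

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of per-client weight vectors (the FedAvg server step)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three clients (think: three Raspberry Pis) with unequal data shares.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
print(fedavg(weights, sizes))  # the larger third client pulls the average toward it
```

Weighting by dataset size is what makes FedAvg sensitive to the imbalanced client data the study highlights, which is the motivation for its client-balancing Dirichlet sampling.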

20 pages, 6480 KiB  
Article
Arabic Mispronunciation Recognition System Using LSTM Network
by Abdelfatah Ahmed, Mohamed Bader, Ismail Shahin, Ali Bou Nassif, Naoufel Werghi and Mohammad Basel
Information 2023, 14(7), 413; https://doi.org/10.3390/info14070413 - 16 Jul 2023
Cited by 2 | Viewed by 1196
Abstract
The Arabic language has always been an immense source of attraction to people of various ethnicities by virtue of the significant linguistic legacy that it possesses. Consequently, a multitude of people from all over the world are eager to learn it. However, people with different mother tongues and cultural backgrounds might experience hardships with articulation, owing to certain letters that exist only in the Arabic language, which can hinder the learning process. As a result, an efficient speaker-independent, text-dependent system that aims to detect articulation disorders was implemented. In the proposed system, we emphasize the prominence of speech signal processing in diagnosing Arabic mispronunciation, using the Mel-frequency cepstral coefficients (MFCCs) as the optimum extracted features. In addition, long short-term memory (LSTM) was utilized for the classification process. Furthermore, the analytical framework was incorporated with a gender recognition model to perform two-level classification. Our results show that the LSTM network significantly enhances mispronunciation detection along with gender recognition. The LSTM models attained an average accuracy of 81.52% in the proposed system, reflecting a high performance compared to previous mispronunciation detection systems. Full article

18 pages, 2424 KiB  
Article
Health Monitoring Apps: An Evaluation of the Persuasive System Design Model for Human Wellbeing
by Asif Hussian, Abdul Mateen, Farhan Amin, Muhammad Ali Abid and Saeed Ullah
Information 2023, 14(7), 412; https://doi.org/10.3390/info14070412 - 16 Jul 2023
Viewed by 2872
Abstract
In the current era of ubiquitous computing and mobile technology, almost all human beings use various self-monitoring applications. Mobile applications can be excellent health assistants for safety and for adopting a healthy lifestyle. Therefore, persuasive design is a compulsory element in designing such apps. A popular model for persuasive design, the Persuasive System Design (PSD) model, is a generalized model for all persuasive technologies; any type of persuasive application can be designed using it. However, designing a specialized application with the PSD model can be difficult because of its generalized nature, which fails to provide moral support for users of health applications. There is a strong need for a customized and improved persuasive system design model for each category to overcome this issue. This study evaluates the PSD model and finds persuasive gaps among users of a Mobile Health Monitoring application developed by following the PSD model. Furthermore, this study finds that users misunderstand health-related problems when using such apps. A misunderstanding of this nature can, in some cases, have serious consequences for the user’s life. Full article
(This article belongs to the Special Issue Artificial Intelligence and Big Data Applications)

36 pages, 915 KiB  
Article
Assessing the Solid Protocol in Relation to Security and Privacy Obligations
by Christian Esposito, Ross Horne, Livio Robaldo, Bart Buelens and Elfi Goesaert
Information 2023, 14(7), 411; https://doi.org/10.3390/info14070411 - 16 Jul 2023
Cited by 2 | Viewed by 1892
Abstract
The Solid specification aims to empower data subjects by giving them direct access control over their data across multiple applications. As governments are manifesting their interest in this framework for citizen empowerment and e-government services, security and privacy represent pivotal issues to be addressed. By analysing the relevant legislation, with an emphasis on GDPR and officially approved documents such as codes of conduct and relevant security ISO standards, we formulate the primary security and privacy requirements for such a framework. The legislation places some obligations on pod providers, much like cloud services. However, what is more interesting is that Solid has the potential to support GDPR compliance of Solid apps and data users that connect, via the protocol, to Solid pods containing personal data. A Solid-based healthcare use case is illustrated in which identifying the controllers responsible for apps and data users is essential for the system to be deployed. Furthermore, we survey the current Solid protocol specifications regarding how they cover the highlighted requirements, and we draw attention to potential gaps between the specifications and requirements. We also point out the contribution of recent academic work presenting novel approaches to increase the degree of security and privacy provided by the Solid project. This paper makes a twofold contribution: it improves user awareness of how Solid can help protect their data, and it presents possible future research lines on Solid security and privacy enhancements. Full article
(This article belongs to the Special Issue Addressing Privacy and Data Protection in New Technological Trends)

14 pages, 2220 KiB  
Article
Breast Cancer Detection in Mammography Images: A CNN-Based Approach with Feature Selection
by Zahra Jafari and Ebrahim Karami
Information 2023, 14(7), 410; https://doi.org/10.3390/info14070410 - 16 Jul 2023
Cited by 8 | Viewed by 6842
Abstract
The prompt and accurate diagnosis of breast lesions, including the distinction between cancer, non-cancer, and suspicious cancer, plays a crucial role in the prognosis of breast cancer. In this paper, we introduce a novel method based on feature extraction and reduction for the detection of breast cancer in mammography images. First, we extract features from multiple pre-trained convolutional neural network (CNN) models, and then concatenate them. The most informative features are selected based on their mutual information with the target variable. Subsequently, the selected features can be classified using a machine learning algorithm. We evaluate our approach using four different machine learning algorithms: neural network (NN), k-nearest neighbor (kNN), random forest (RF), and support vector machine (SVM). Our results demonstrate that the NN-based classifier achieves an impressive accuracy of 92% on the RSNA dataset. This dataset is newly introduced and includes two views as well as additional features like age, which contributed to the improved performance. We compare our proposed algorithm with state-of-the-art methods and demonstrate its superiority, particularly in terms of accuracy and sensitivity. For the MIAS dataset, we achieve an accuracy as high as 94.5%, and for the DDSM dataset, an accuracy of 96% is attained. These results highlight the effectiveness of our method in accurately diagnosing breast lesions and surpassing existing approaches. Full article
(This article belongs to the Special Issue Advances in Object-Based Image Segmentation and Retrieval)
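The selection criterion described above — rank features by their mutual information with the target, keep the top scorers — can be sketched for discrete toy features. The paper's features are continuous CNN activations, which would first need discretization or a continuous MI estimator; the data below are made up:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) for two discrete sequences, in nats."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

labels = [0, 0, 0, 1, 1, 1]
informative = [0, 0, 0, 1, 1, 1]   # mirrors the label exactly
noisy = [0, 1, 0, 1, 0, 1]         # carries little label information

scores = {"informative": mutual_information(informative, labels),
          "noisy": mutual_information(noisy, labels)}
top_k = sorted(scores, key=scores.get, reverse=True)[:1]
print(top_k)  # the informative feature is selected
```

The perfectly informative feature scores I(X;Y) = H(Y) = ln 2 nats, while the noisy one scores near zero, so ranking by MI keeps the former — the same principle the paper applies to its concatenated CNN features before classification.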

14 pages, 1737 KiB  
Article
Artificial Intelligence Generative Tools and Conceptual Knowledge in Problem Solving in Chemistry
by Wajeeh Daher, Hussam Diab and Anwar Rayan
Information 2023, 14(7), 409; https://doi.org/10.3390/info14070409 - 16 Jul 2023
Cited by 7 | Viewed by 3887
Abstract
In recent years, artificial intelligence (AI) has emerged as a valuable resource for teaching and learning, and it has also shown promise as a tool to help solve problems. A tool that has gained attention in education is ChatGPT, which supports teaching and learning through AI. This research investigates the difficulties faced by ChatGPT in comprehending and responding to chemistry problems pertaining to the topic of Introduction to Material Science. By employing the theoretical framework proposed by Holme et al., encompassing categories such as transfer, depth, predict/explain, problem solving, and translate, we evaluate ChatGPT’s conceptual understanding difficulties. We presented ChatGPT with a set of thirty chemistry problems within the Introduction to Material Science domain and tasked it with generating solutions. Our findings indicated that ChatGPT encountered significant conceptual knowledge difficulties across various categories, with a notable emphasis on representations and depth, where difficulties in representations hindered effective knowledge transfer. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence for Sustainable Development)

20 pages, 3467 KiB  
Article
Analyzing Social Media Data Using Sentiment Mining and Bigram Analysis for the Recommendation of YouTube Videos
by Ken McGarry
Information 2023, 14(7), 408; https://doi.org/10.3390/info14070408 - 16 Jul 2023
Viewed by 1975
Abstract
In this work, we combine sentiment analysis with graph theory to analyze user posts and likes/dislikes across a variety of social media platforms and thereby provide recommendations for YouTube videos. We focus on the topic of climate change/global warming, which has caused much alarm and controversy over recent years. Our intention is to recommend informative YouTube videos to those seeking a balanced viewpoint of this area and its key arguments/issues. To this end, we analyze Twitter data; Reddit comments and posts; and user comments, view statistics, and likes/dislikes of YouTube videos. Combining sentiment analysis with raw statistics, and linking users with their posts, gives deeper insight into their needs and quest for quality information. Sentiment analysis provides insight into user likes and dislikes; graph theory provides the linkage patterns and relationships among users, posts, and sentiment. Full article
(This article belongs to the Special Issue Recommendation Algorithms and Web Mining)
21 pages, 882 KiB  
Article
Improved Spacecraft Authentication Method for Satellite Internet System Using Residue Codes
by Alexandr Anatolyevich Olenev, Igor Anatolyevich Kalmykov, Vladimir Petrovich Pashintsev, Nikita Konstantinovich Chistousov, Daniil Vyacheslavovich Dukhovnyj and Natalya Igorevna Kalmykova
Information 2023, 14(7), 407; https://doi.org/10.3390/info14070407 - 15 Jul 2023
Viewed by 987
Abstract
Low-orbit satellite internet (LOSI) expands the scope of the Industrial Internet of Things (IIoT) in the oil and gas industry (OGI) to include areas of the Far North. However, due to the large length of the communication channel, the number of threats and [...] Read more.
Low-orbit satellite internet (LOSI) expands the scope of the Industrial Internet of Things (IIoT) in the oil and gas industry (OGI) to include areas of the Far North. However, due to the large length of the communication channel, the number of threats and attacks increases. A special place among them is occupied by relay spoofing interference. In this case, an intruder satellite intercepts the control signal coming from the spacecraft (SC), delays it, and then imposes it on the receiver located on an unattended OGI object. This can disrupt the facility and even cause an environmental disaster. To prevent a spoofing attack, a satellite authentication method has been developed that uses a zero-knowledge authentication protocol (ZKAP). These protocols have high cryptographic strength without the use of encryption. However, they have a significant drawback: their low authentication speed, which is caused by calculations over a large modulus Q (128 bits or more). It is possible to reduce the time needed to determine the status of an SC by switching to parallel computing. To solve this problem, the paper proposes the use of residue codes (RC), in which addition, subtraction, and multiplication operations are performed in parallel. A correct choice of the set of RC moduli provides an operating range of calculations no smaller than the modulus Q. Therefore, developing a spacecraft authentication method for the satellite internet system that uses RC to reduce the authentication time is an urgent task. Full article
(This article belongs to the Special Issue Advances in Wireless Communications Systems)
20 pages, 290 KiB  
Article
Virtual Restaurants: Customer Experience Keeps Their Businesses Alive
by Maria I. Klouvidaki, Nikos Antonopoulos, Georgios D. Styliaras and Andreas Kanavos
Information 2023, 14(7), 406; https://doi.org/10.3390/info14070406 - 15 Jul 2023
Cited by 2 | Viewed by 3150
Abstract
Due to COVID-19 restrictions, many restaurants were forced to discontinue in-person service, either by locking down or finding alternative methods of operation. Despite the fact that, in the United States of America, digital restaurants have already been established for many years, in Greece, [...] Read more.
Due to COVID-19 restrictions, many restaurants were forced to discontinue in-person service, either by locking down or finding alternative methods of operation. Despite the fact that, in the United States of America, digital restaurants have already been established for many years, in Greece, this phenomenon became popular during the pandemic. These delivery-only companies operate exclusively online, allowing customers to place orders from restaurants without a physical location. This has revolutionized the process of ordering food, as customers can browse digital menus, view images, and utilize other options provided by digital food technology. As a result, customers have had new experiences with food thanks to digital eateries during the pandemic. This research study is quantitative and utilized a questionnaire distributed to 1097 participating consumers over the internet. The sample was selected using simple random sampling, where each member of the population had an equal and unique chance of participating in the survey. The data were collected over a period of 2 months. Full article
28 pages, 5371 KiB  
Article
Linguistic Communication Channels Reveal Connections between Texts: The New Testament and Greek Literature
by Emilio Matricciani
Information 2023, 14(7), 405; https://doi.org/10.3390/info14070405 - 14 Jul 2023
Cited by 1 | Viewed by 1027
Abstract
We studied two fundamental linguistic channels—the sentences and the interpunctions channels—and showed they can reveal deeper connections between texts. The applied theory does not follow the current paradigm of linguistic studies. As a case study, we considered the Greek New Testament, with the [...] Read more.
We studied two fundamental linguistic channels—the sentences and the interpunctions channels—and showed they can reveal deeper connections between texts. The applied theory does not follow the current paradigm of linguistic studies. As a case study, we considered the Greek New Testament, with the purpose of determining mathematical connections between its texts and possible differences in the writing style (mathematically defined) of the writers and in the reading skill required of their readers. The analysis was based on deep-language parameters and communication/information theory. To set the New Testament texts in the larger Greek classical literature, we considered texts written by Aesop, Polybius, Flavius Josephus, and Plutarch. The results largely confirmed what scholars have found about the New Testament texts, therefore giving credibility to the theory. The Gospel according to John is very similar to the fables written by Aesop. Surprisingly, the Epistle to the Hebrews and Apocalypse are each other’s “photocopies” in the two linguistic channels and are not linked to the other texts. These two texts deserve further study by historians of the early Christian church literature at the level of meaning, readers, and possible Old Testament texts that might have influenced them. The theory can guide scholars to study any literary corpus. Full article
(This article belongs to the Special Issue Editorial Board Members’ Collection Series: "Information Processes")
15 pages, 6561 KiB  
Article
Enhancing CSI-Based Human Activity Recognition by Edge Detection Techniques
by Hossein Shahverdi, Mohammad Nabati, Parisa Fard Moshiri, Reza Asvadi and Seyed Ali Ghorashi
Information 2023, 14(7), 404; https://doi.org/10.3390/info14070404 - 14 Jul 2023
Cited by 5 | Viewed by 1708
Abstract
Human Activity Recognition (HAR) has been a popular area of research in the Internet of Things (IoT) and Human–Computer Interaction (HCI) over the past decade. The objective of this field is to detect human activities through numeric or visual representations, and its applications [...] Read more.
Human Activity Recognition (HAR) has been a popular area of research in the Internet of Things (IoT) and Human–Computer Interaction (HCI) over the past decade. The objective of this field is to detect human activities through numeric or visual representations, and its applications include smart homes and buildings, action prediction, crowd counting, patient rehabilitation, and elderly monitoring. Traditionally, HAR has been performed through vision-based, sensor-based, or radar-based approaches. However, vision-based and sensor-based methods can be intrusive and raise privacy concerns, while radar-based methods require special hardware, making them more expensive. WiFi-based HAR is a cost-effective alternative, where WiFi access points serve as transmitters and users’ smartphones serve as receivers. The HAR in this method is mainly performed using two wireless-channel metrics: Received Signal Strength Indicator (RSSI) and Channel State Information (CSI). CSI provides more stable and comprehensive information about the channel compared to RSSI. In this research, we used a convolutional neural network (CNN) as a classifier and applied edge-detection techniques as a preprocessing phase to improve the quality of activity detection. We used CSI data converted into RGB images and tested our methodology on three available CSI datasets. The results showed that the proposed method achieved better accuracy and faster training times than using the simple RGB-represented data. In order to justify the effectiveness of our approach, we repeated the experiment by applying raw CSI data to long short-term memory (LSTM) and Bidirectional LSTM classifiers. Full article
26 pages, 3230 KiB  
Article
Exploitation of Vulnerabilities: A Topic-Based Machine Learning Framework for Explaining and Predicting Exploitation
by Konstantinos Charmanas, Nikolaos Mittas and Lefteris Angelis
Information 2023, 14(7), 403; https://doi.org/10.3390/info14070403 - 14 Jul 2023
Cited by 1 | Viewed by 1901
Abstract
Security vulnerabilities constitute one of the most important weaknesses of hardware and software security that can cause severe damage to systems, applications, and users. As a result, software vendors should prioritize the most dangerous and impactful security vulnerabilities by developing appropriate countermeasures. As [...] Read more.
Security vulnerabilities constitute one of the most important weaknesses of hardware and software security that can cause severe damage to systems, applications, and users. As a result, software vendors should prioritize the most dangerous and impactful security vulnerabilities by developing appropriate countermeasures. As we acknowledge the importance of vulnerability prioritization, in the present study, we propose a framework that maps newly disclosed vulnerabilities with topic distributions, via word clustering, and further predicts whether this new entry will be associated with a potential exploit Proof Of Concept (POC). We also provide insights on the current most exploitable weaknesses and products through a Generalized Linear Model (GLM) that links the topic memberships of vulnerabilities with exploit indicators, thus distinguishing five topics that are associated with relatively frequent recent exploits. Our experiments show that the proposed framework can outperform two baseline topic modeling algorithms in terms of topic coherence by improving LDA models by up to 55%. In terms of classification performance, the conducted experiments—on a quite balanced dataset (57% negative observations, 43% positive observations)—indicate that the vulnerability descriptions can be used as exclusive features in assessing the exploitability of vulnerabilities, as the “best” model achieves accuracy close to 87%. Overall, our study contributes to enabling the prioritization of vulnerabilities by providing guidelines on the relations between the textual details of a weakness and the potential application/system exploits. Full article
(This article belongs to the Special Issue Predictive Analytics and Data Science)
12 pages, 1902 KiB  
Article
Smart Wearable to Prevent Injuries in Amateur Athletes in Squats Exercise by Using Lightweight Machine Learning Model
by Ricardo P. Arciniega-Rocha, Vanessa C. Erazo-Chamorro, Paúl D. Rosero-Montalvo and Gyula Szabó
Information 2023, 14(7), 402; https://doi.org/10.3390/info14070402 - 14 Jul 2023
Cited by 1 | Viewed by 1182
Abstract
An erroneous squat movement might cause different injuries in amateur athletes who are not experts in workout exercises. Even when personal trainers watch out for the athletes’ workout performance, light variations in ankles, knees, and lower back movements might not be recognized. Therefore, [...] Read more.
An erroneous squat movement might cause different injuries in amateur athletes who are not experts in workout exercises. Even when personal trainers watch out for the athletes’ workout performance, light variations in ankles, knees, and lower back movements might not be recognized. Therefore, we present a smart wearable that alerts athletes as to whether their squat performance is correct. We collected data from people experienced with workout exercises and from learners, with supervising personal trainers annotating the data. We then used data preprocessing techniques to reduce noisy samples and trained Machine Learning models with a small memory footprint that can be exported to microcontrollers to classify squat movements. As a result, the k-Nearest Neighbors algorithm with k = 5 achieves 85% classification performance with a memory footprint of 40 KB of RAM. Full article
(This article belongs to the Special Issue Trends in Computational and Cognitive Engineering)
22 pages, 9658 KiB  
Article
Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System
by Mouadh Guesmi, Mohamed Amine Chatti, Shoeb Joarder, Qurat Ul Ain, Clara Siepmann, Hoda Ghanbarzadeh and Rawaa Alatrash
Information 2023, 14(7), 401; https://doi.org/10.3390/info14070401 - 14 Jul 2023
Cited by 2 | Viewed by 1348
Abstract
Significant attention has been paid to enhancing recommender systems (RS) with explanation facilities to help users make informed decisions and increase trust in and satisfaction with an RS. Justification and transparency represent two crucial goals in explainable recommendations. Different from transparency, which faithfully [...] Read more.
Significant attention has been paid to enhancing recommender systems (RS) with explanation facilities to help users make informed decisions and increase trust in and satisfaction with an RS. Justification and transparency represent two crucial goals in explainable recommendations. Different from transparency, which faithfully exposes the reasoning behind the recommendation mechanism, justification conveys a conceptual model that may differ from that of the underlying algorithm. An explanation is an answer to a question. In explainable recommendation, a user would want to ask questions (referred to as intelligibility types) to understand the results given by an RS. In this paper, we identify relationships between Why and How explanation intelligibility types and the explanation goals of justification and transparency. We followed the Human-Centered Design (HCD) approach and leveraged the What–Why–How visualization framework to systematically design and implement Why and How visual explanations in the transparent Recommendation and Interest Modeling Application (RIMA). Furthermore, we conducted a qualitative user study (N = 12) based on a thematic analysis of think-aloud sessions and semi-structured interviews with students and researchers to investigate the potential effects of providing Why and How explanations together in an explainable RS on users’ perceptions regarding transparency, trust, and satisfaction. Our study shows qualitative evidence confirming that the choice of the explanation intelligibility types depends on the explanation goal and user type. Full article
(This article belongs to the Special Issue Information Visualization Theory and Applications)
24 pages, 2082 KiB  
Review
The Application of Z-Numbers in Fuzzy Decision Making: The State of the Art
by Nik Muhammad Farhan Hakim Nik Badrul Alam, Ku Muhammad Naim Ku Khalif, Nor Izzati Jaini and Alexander Gegov
Information 2023, 14(7), 400; https://doi.org/10.3390/info14070400 - 13 Jul 2023
Cited by 6 | Viewed by 1849
Abstract
A Z-number is very powerful in describing imperfect information, in which fuzzy numbers are paired such that the partially reliable information is properly processed. During a decision-making process, human beings always use natural language to describe their preferences, and the decision information is [...] Read more.
A Z-number is very powerful in describing imperfect information, in which fuzzy numbers are paired such that the partially reliable information is properly processed. During a decision-making process, human beings always use natural language to describe their preferences, and the decision information is usually imprecise and partially reliable. The nature of the Z-number, which is composed of the restriction and reliability components, has made it a powerful tool for depicting certain decision information. Its strengths and advantages have attracted many researchers worldwide to further study and extend its theory and applications. The current research trend on Z-numbers has shown an increasing interest among researchers in the fuzzy set theory, especially its application to decision making. This paper reviews the application of Z-numbers in decision making, in which previous decision-making models based on Z-numbers are analyzed to identify their strengths and contributions. The decision making based on Z-numbers improves the reliability of the decision information and makes it more meaningful. Another scope that is closely related to decision making, namely, the ranking of Z-numbers, is also reviewed. Then, the evaluative analysis of the Z-numbers is conducted to evaluate the performance of Z-numbers in decision making. Future directions and recommendations on the applications of Z-numbers in decision making are provided at the end of this review. Full article
19 pages, 1603 KiB  
Article
Addiction and Spending in Gacha Games
by Nikola Lakić, Andrija Bernik and Andrej Čep
Information 2023, 14(7), 399; https://doi.org/10.3390/info14070399 - 13 Jul 2023
Viewed by 17202
Abstract
Gacha games are the most dominant games on the mobile market. These are free-to-play games with a lottery-like system, where the user pays with in-game currency to enter a draw in order to obtain the character or item they want. If a player [...] Read more.
Gacha games are the most dominant games on the mobile market. These are free-to-play games with a lottery-like system, where the user pays with in-game currency to enter a draw in order to obtain the character or item they want. If a player does not obtain what they hoped for, they have the option of paying with their own money for more draws, and this is the main way Gacha games are monetized. The purpose of this study is to show the playing and spending habits of Gacha players: the reasons they like such games, the reasons for spending, how much they spend, what they spend on, how long they have been spending, and whether they are aware of their spending. The paper also reviews studies by other researchers on various aspects of Gacha games. The aim of the paper is to test the hypothesis that players who have played the same game for a while and have a habit of playing it are willing to give more of their money to enter draws. To this end, two research questions and two hypotheses were analyzed. A total of 713 participants took part in the study. Full article
(This article belongs to the Special Issue Game Informatics)
13 pages, 2648 KiB  
Article
Trademark Similarity Evaluation Using a Combination of ViT and Local Features
by Dmitry Vesnin, Dmitry Levshun and Andrey Chechulin
Information 2023, 14(7), 398; https://doi.org/10.3390/info14070398 - 12 Jul 2023
Cited by 2 | Viewed by 1148
Abstract
The origin of the trademark similarity analysis problem lies within the legal area, specifically the protection of intellectual property. One of the possible technical solutions for this issue is the trademark similarity evaluation pipeline based on the content-based image retrieval approach. CNN-based off-the-shelf [...] Read more.
The origin of the trademark similarity analysis problem lies within the legal area, specifically the protection of intellectual property. One of the possible technical solutions for this issue is the trademark similarity evaluation pipeline based on the content-based image retrieval approach. CNN-based off-the-shelf features have proven to be a good baseline for trademark retrieval. However, in recent years, the computer vision area has been transitioning from CNNs to a new architecture, namely, Vision Transformer. In this paper, we investigate the performance of off-the-shelf features extracted with vision transformers and explore the effects of pre-, post-processing, and pre-training on big datasets. We propose the enhancement of the trademark similarity evaluation pipeline by joint usage of global and local features, which leverages the best aspects of both approaches. Experimental results on the METU Trademark Dataset show that off-the-shelf features extracted with ViT-based models outperform off-the-shelf features from CNN-based models. The proposed method achieves a mAP value of 31.23, surpassing previous state-of-the-art results. We believe that the use of an enhanced trademark similarity evaluation pipeline can improve the protection of intellectual property with the help of artificial intelligence methods. Moreover, this approach enables one to identify cases of unfair use of such data and form an evidence base for litigation. Full article
(This article belongs to the Section Information Processes)
17 pages, 400 KiB  
Article
Leveraging Satisfiability Modulo Theory Solvers for Verification of Neural Networks in Predictive Maintenance Applications
by Dario Guidotti, Laura Pandolfo and Luca Pulina
Information 2023, 14(7), 397; https://doi.org/10.3390/info14070397 - 12 Jul 2023
Cited by 2 | Viewed by 1025
Abstract
Interest in machine learning and neural networks has increased significantly in recent years. However, their applications are limited in safety-critical domains due to the lack of formal guarantees on their reliability and behavior. This paper shows recent advances in satisfiability modulo theory solvers [...] Read more.
Interest in machine learning and neural networks has increased significantly in recent years. However, their applications are limited in safety-critical domains due to the lack of formal guarantees on their reliability and behavior. This paper shows recent advances in satisfiability modulo theory solvers used in the context of the verification of neural networks with piece-wise linear and transcendental activation functions. An experimental analysis is conducted using neural networks trained on a real-world predictive maintenance dataset. This study contributes to the research on enhancing the safety and reliability of neural networks through formal verification, enabling their deployment in safety-critical domains. Full article
19 pages, 6886 KiB  
Article
Theme Mapping and Bibliometric Analysis of Two Decades of Smart Farming
by Tri Kushartadi, Aditya Eka Mulyono, Azhari Haris Al Hamdi, Muhammad Afif Rizki, Muhammad Anwar Sadat Faidar, Wirawan Dwi Harsanto, Muhammad Suryanegara and Muhamad Asvial
Information 2023, 14(7), 396; https://doi.org/10.3390/info14070396 - 11 Jul 2023
Cited by 6 | Viewed by 1840
Abstract
The estimated global population for 2050 is 9 billion, which implies an increase in food demand. Agriculture is the primary source of food production worldwide, and improving its efficiency and productivity through integration with information and communication technology systems, so-called “smart farming”, [...] Read more.
The estimated global population for 2050 is 9 billion, which implies an increase in food demand. Agriculture is the primary source of food production worldwide, and improving its efficiency and productivity through integration with information and communication technology systems, so-called “smart farming”, is a promising approach to optimizing food supply. This research employed bibliometric analysis techniques to investigate smart farming trends, identify their potential benefits, and analyze their research insights. Data were collected from 1141 publications in the Scopus database in the period 1997–2021 and were extracted using VOSviewer, which quantified the connections between the articles using the co-citation unit, resulting in a mapping of 10 clusters, ranging from agriculture to soil moisture. Finally, the analysis further focuses on the three major themes of smart farming, namely the IoT; blockchain and agricultural robots; and smart agriculture, crops, and irrigation. Full article
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
19 pages, 2705 KiB  
Article
Incorporating an Unsupervised Text Mining Approach into Studying Logistics Risk Management: Insights from Corporate Annual Reports and Topic Modeling
by David Olson and Bongsug (Kevin) Chae
Information 2023, 14(7), 395; https://doi.org/10.3390/info14070395 - 11 Jul 2023
Viewed by 1008
Abstract
This study examined the Securities and Exchange Commission (SEC) annual reports of selected logistics firms over the period from 2006 through 2021 for risk management terms. The purpose was to identify which risks are considered most important in supply chain logistics operations. Section [...] Read more.
This study examined the Securities and Exchange Commission (SEC) annual reports of selected logistics firms over the period from 2006 through 2021 for risk management terms. The purpose was to identify which risks are considered most important in supply chain logistics operations. Section 1A of the SEC reports includes risk factors. The COVID-19 pandemic has had a heavy impact on global supply chains. We also know that trucking firms have long had difficulties recruiting drivers. Fuel price has always been a major risk for airlines but can also impact shipping, trucking, and railroads. We were especially interested in pandemic, personnel, and fuel risks. We applied topic modeling, enabling us to identify some of the capabilities of unsupervised text mining as applied to SEC reports. We demonstrate how the topic model identifies terms, captures the time dimension, and reveals correlations across topics. Our analysis confirmed expectations about COVID-19’s impact, personnel shortages, and fuel. It also revealed common themes regarding the risks involved in international trade and perceived regulatory risks. We conclude with the supply chain management risks identified and discuss means of mitigation. Full article
(This article belongs to the Section Information Processes)
25 pages, 24280 KiB  
Article
Automatic 3D Building Model Generation from Airborne LiDAR Data and OpenStreetMap Using Procedural Modeling
by Robert Župan, Adam Vinković, Rexhep Nikçi and Bernarda Pinjatela
Information 2023, 14(7), 394; https://doi.org/10.3390/info14070394 - 11 Jul 2023
Cited by 4 | Viewed by 2385
Abstract
This research is primarily focused on utilizing available airborne LiDAR data and spatial data from the OpenStreetMap (OSM) database to generate 3D models of buildings for a large-scale urban area. The city center of Ljubljana, Slovenia, was selected for the study area due [...] Read more.
This research is primarily focused on utilizing available airborne LiDAR data and spatial data from the OpenStreetMap (OSM) database to generate 3D models of buildings for a large-scale urban area. The city center of Ljubljana, Slovenia, was selected for the study area due to data availability and the diversity of building shapes, heights, and functions, which presented a challenge for the automated generation of 3D models. To extract building heights, a range of data sources were utilized, including OSM attribute data, as well as georeferenced and classified point clouds and a digital elevation model (DEM) obtained from openly available LiDAR survey data of the Slovenian Environment Agency. A digital surface model (DSM) and digital terrain model (DTM) were derived from the processed LiDAR data. Building outlines and attributes were extracted from OSM and processed using QGIS. Spatial coverage of OSM data for buildings in the study area is excellent, whereas only 18% of buildings have attributes describing their external appearance and 6% their roof type. LAStools software (rapidlasso GmbH, Friedrichshafener Straße 1, 82205 Gilching, Germany) was used to derive and assign building heights from the 3D coordinates of the segmented point clouds. Various software options for procedural modeling were compared, and Blender was selected due to its ability to process OSM data, the availability of documentation, and its low computing requirements. Using procedural modeling, a 3D model with level of detail (LOD) 1 was created fully automatically. After analyzing roof types, a 3D model with LOD2 was created fully automatically for 87.64% of the buildings. For the remaining buildings, a comparison of procedural roof modeling and manual roof editing was performed. Finally, a visual comparison between the resulting 3D model and Google Earth’s model was performed.
The main objective of this study is to demonstrate an efficient modeling process using open data and free software, resulting in enhanced accuracy of the 3D building models compared to previous LOD2 iterations. Full article
(This article belongs to the Special Issue Big Data Visualization and Virtual Reality)