Analytics, Volume 1, Issue 2 (December 2022) – 8 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 767 KiB  
Review
Using Internet Search Data to Forecast COVID-19 Trends: A Systematic Review
by Simin Ma, Yan Sun and Shihao Yang
Analytics 2022, 1(2), 210-227; https://doi.org/10.3390/analytics1020014 - 02 Dec 2022
Cited by 1 | Viewed by 1962
Abstract
Since the outbreak of the coronavirus disease (COVID-19) pandemic at the end of 2019, many scientific groups have been working towards solutions to forecast outbreaks. Accurate forecasts of future waves could mitigate the devastating effects of the virus: they would allow healthcare organizations and governments to adjust public interventions, allocate healthcare resources accordingly, and raise public awareness. Many forecasting models have been introduced, harnessing different underlying mechanisms and data sources. This paper provides a systematic review of forecasting models that utilize internet search information. The success of these models provides strong support for treating public online search behavior as a big-data signal that can serve as an alternative to traditional surveillance systems and mechanistic compartmental models.
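
As a generic illustration of the approach surveyed above (not any particular model from the review), the sketch below regresses weekly case counts on lagged search-volume features; all column names and numbers are invented for the example.

```python
# Minimal sketch of search-based forecasting: regress reported case counts on
# lagged internet-search volumes. Data and column names are purely illustrative.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "cases":         [120, 150, 200, 260, 310, 400, 520, 610],
    "search_volume": [ 30,  45,  60,  75,  85, 100, 120, 130],
})

# Use search volume from the previous one and two weeks as predictors.
for lag in (1, 2):
    df[f"search_lag{lag}"] = df["search_volume"].shift(lag)
df = df.dropna()

model = LinearRegression().fit(df[["search_lag1", "search_lag2"]], df["cases"])
print(model.coef_, model.intercept_)  # fitted lag weights and intercept
```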

17 pages, 368 KiB  
Article
A Quantitative Analysis of Information Systems Management in the Educational Industry
by Juan Luis Rubio Sánchez
Analytics 2022, 1(2), 193-209; https://doi.org/10.3390/analytics1020013 - 01 Dec 2022
Cited by 2 | Viewed by 2250
Abstract
1. Purpose: One of the consequences of the COVID-19 pandemic was the migration of educational centers from face-to-face learning to e-learning. Most centers adapted their educational services and technological resources so that students could attend courses online and teachers (and the rest of the staff) could telework. As a result, technology departments have become critical to educational services and need to adapt their processes. The ITIL (Information Technology Infrastructure Library) standard guides companies through this transformation. If educational centers are involved in digital transformation, the question to answer is the following: How far are the processes used in the technology departments of educational centers from the ITIL standard adopted in the information technology industry? The purpose of this research was to investigate whether technology departments have implemented the necessary processes. 2. Methods: The research was conducted by means of an online form sent to educational organizations to gather information about their technological processes. The responses collected from the web forms were statistically analyzed. 3. Results and conclusion: The main finding of this paper is that technology departments in educational centers have yet to adopt the processes required for an intensive online service, revealing a weakness in educational institutions.
18 pages, 827 KiB  
Article
Speed Matters: What to Prioritize in Optimization for Faster Websites
by Christina Xilogianni, Filippos-Rafail Doukas, Ioannis C. Drivas and Dimitrios Kouis
Analytics 2022, 1(2), 175-192; https://doi.org/10.3390/analytics1020012 - 25 Nov 2022
Cited by 4 | Viewed by 2336
Abstract
Website loading speed matters when it comes to user engagement and conversion rate optimization, and the websites of libraries, archives, and museums (LAMs) are no exception. In this research paper, we propose a methodological assessment schema to evaluate the speed performance of LAM webpages for greater usability and navigability. The proposed methodology is composed of three stages. First, the speed data of the LAM webpages are retrieved: a sample of 121 LAM websites worldwide was collected using Google's PageSpeed Insights tool for both mobile and desktop performance. In the second stage, a statistical reliability and validity analysis is performed to propose a speed performance measurement system whose metrics exhibit internal cohesion and consistency. In the third stage, several predictive regression models are developed to discover which of the involved metrics most strongly affect the total speed score of the mobile or desktop versions of the examined webpages. The proposed methodology and the study's results could help LAM administrators set a data-driven framework for prioritizing the corrections needed to optimize webpage loading times.
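
For readers unfamiliar with the data source named in the abstract, here is a minimal sketch of querying Google's PageSpeed Insights API (v5) for a single page; the abstract does not state the authors' exact collection procedure, so the helper function and the example URL below are assumptions for illustration only.

```python
# Sketch: fetching a page's Lighthouse performance score from the public
# PageSpeed Insights API (v5). Not the authors' exact collection pipeline.
import requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def speed_score(url: str, strategy: str = "mobile") -> float:
    """Return the overall Lighthouse performance score (0-1) for a page."""
    resp = requests.get(API, params={"url": url, "strategy": strategy}, timeout=60)
    resp.raise_for_status()
    data = resp.json()
    return data["lighthouseResult"]["categories"]["performance"]["score"]

# Hypothetical usage:
# print(speed_score("https://www.example-library.org", strategy="desktop"))
```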

31 pages, 1311 KiB  
Article
A Foundation for Archival Engineering
by Kenneth Thibodeau
Analytics 2022, 1(2), 144-174; https://doi.org/10.3390/analytics1020011 - 18 Nov 2022
Cited by 3 | Viewed by 2883
Abstract
Archives comprise information that individuals and organizations use in their activities. Archival theory is the intellectual framework for organizing, managing, preserving, and providing access to archives, both while they serve the needs of those who produce them and later when researchers consult them for other purposes. Archival theory is sometimes called archival science, but it does not constitute a modern science in the sense of a coherent body of knowledge formulated in a way that is appropriate for empirical testing and validation. Both archival theory and practice are seriously challenged by the spread of and continuing changes in information technology and its increasing and increasingly diverse use in human activities. This article describes problems with and controversies in archival theory and advocates a reformulation of its concepts to address the digital challenge and to make the field more robust, both by addressing those problems and by enriching its capabilities with concepts from other fields such as taxonomy, semiotics, and systemic functional linguistics. The objective of this reformulation is to transform the discipline on the model of the modern scientific method, engendering a new discipline of archival engineering that is robust enough to guide the development of automated methods even in the face of continuing and unpredictable change in IT.

9 pages, 1116 KiB  
Article
Automated Segmentation and Classification of Aerial Forest Imagery
by Kieran Pichai, Benjamin Park, Aaron Bao and Yiqiao Yin
Analytics 2022, 1(2), 135-143; https://doi.org/10.3390/analytics1020010 - 14 Nov 2022
Viewed by 1544
Abstract
Monitoring the health and safety of forests has become an increasingly pressing problem with the advent of global wildfires, rampant logging, and reforestation efforts. This paper proposes a model for the automatic segmentation and classification of aerial forest imagery. The model is based on the U-Net architecture and relies on the Dice coefficient, binary cross-entropy, and accuracy as loss and evaluation functions. While models without autoencoder-based structures can only reach a Dice coefficient of 45%, the proposed model achieves a Dice coefficient of 79.85%. In addition, for classifying barren and dense forest imagery, the proposed model achieves 82.51% accuracy. This paper demonstrates how complex convolutional neural networks can be applied to aerial forest images to help preserve and save the forest environment.
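
Since the abstract reports results in terms of the Dice coefficient, a minimal generic soft-Dice sketch is given below; this is the standard formulation commonly paired with U-Net segmentation, not necessarily the authors' exact implementation.

```python
# Generic soft Dice coefficient/loss as commonly used with U-Net segmentation;
# not necessarily the authors' exact implementation.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|X ∩ Y| / (|X| + |Y|), computed on (soft) mask arrays."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def dice_loss(pred, target):
    return 1.0 - dice_coefficient(pred, target)

# A perfect prediction yields a Dice coefficient of 1.0 (loss 0).
mask = np.array([[0, 1], [1, 1]])
print(dice_coefficient(mask, mask))
```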

18 pages, 343 KiB  
Article
Comparison of Different Modeling Techniques for Flemish Twitter Sentiment Analysis
by Manon Reusens, Michael Reusens, Marc Callens, Seppe vanden Broucke and Bart Baesens
Analytics 2022, 1(2), 117-134; https://doi.org/10.3390/analytics1020009 - 18 Oct 2022
Cited by 1 | Viewed by 2163
Abstract
Microblogging websites such as Twitter have caused sentiment analysis research to increase in popularity over the last several decades. However, most studies focus on the English language, which leaves other languages underrepresented. Therefore, in this paper, we compare several modeling techniques for sentiment analysis using a new dataset containing Flemish tweets. The key contribution of our paper lies in its innovative experimental design: we compared different preprocessing techniques and vector representations to find the best-performing combination for a Flemish dataset. We compared models belonging to four different categories: lexicon-based methods, traditional machine-learning models, neural networks, and attention-based models. We found that more preprocessing leads to better results, but the best-performing vector representation approach depends on the model applied. Moreover, an immense gap was observed between the performance of the lexicon-based approaches and that of the other models. The traditional machine-learning approaches and the neural networks produced similar results, but the attention-based model was the best-performing technique. Nevertheless, a tradeoff should be made between computational expense and performance gains.
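
As a concrete illustration of the "traditional machine-learning" category compared in the paper, here is a minimal bag-of-words baseline; the tiny example tweets and labels are invented and are not taken from the authors' Flemish dataset.

```python
# Sketch of a traditional ML baseline (TF-IDF + logistic regression) of the kind
# compared in the paper. Example tweets and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["wat een prachtige dag", "dit is echt vreselijk",
          "ik ben zo blij vandaag", "slechtste service ooit"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["wat een vreselijke dag"]))
```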

19 pages, 3415 KiB  
Article
On Sense Making and the Generation of Knowledge in Visual Analytics
by Milena Vuckovic and Johanna Schmidt
Analytics 2022, 1(2), 98-116; https://doi.org/10.3390/analytics1020008 - 02 Oct 2022
Viewed by 2034
Abstract
Interactive visual tools and related visualization technologies, built to support explorative data analysis, ultimately lead to sense making and knowledge discovery from large volumes of raw data. These processes rely on human visual perception and cognition: human analysts perceive external representations (system structure, datasets, integral data visualizations) and form corresponding internal representations (internal cognitive imprints of external systems) that enable deeper comprehension of the employed system and the underlying data features. These internal representations evolve through continuous interaction with the external representations and also depend on the individual's own cognitive pathways. To date, there has been insufficient work on understanding how these internal cognitive mechanisms form and function. Hence, we aim to offer our own interpretations of such processes as observed in our daily data exploration workflows. This is accomplished by following specific explorative data science tasks while working with diverse interactive visual systems and related notebook-style environments that have different organizational structures and thus may entail different approaches to thinking about and shaping sense making and knowledge generation. In this paper, we deliberate on the cognitive implications for human analysts of interacting with such a diverse set of tools and approaches while performing the essential steps of an explorative visual analysis.

26 pages, 796 KiB  
Communication
Twitter Big Data as a Resource for Exoskeleton Research: A Large-Scale Dataset of about 140,000 Tweets from 2017–2022 and 100 Research Questions
by Nirmalya Thakur
Analytics 2022, 1(2), 72-97; https://doi.org/10.3390/analytics1020007 - 23 Sep 2022
Cited by 3 | Viewed by 3364
Abstract
Exoskeleton technology has been advancing rapidly in recent years due to its multitude of applications and diverse use cases in assisted living, the military, healthcare, firefighting, and Industry 4.0. The exoskeleton market is projected to grow to several times its current value within the next two years. It is therefore crucial to study the degree of and trends in user interest, views, opinions, perspectives, attitudes, acceptance, feedback, engagement, buying behavior, and satisfaction towards exoskeletons, which requires the availability of Big Data of conversations about exoskeletons. The Internet of Everything style of modern living, in which people spend more time on the internet than ever before and on social media platforms in particular, makes it possible to develop such a dataset by mining relevant social media conversations. Twitter, one such platform, is highly popular amongst all age groups, and the topics found in its conversations include emerging technologies such as exoskeletons. To address this research challenge, this work makes two scientific contributions to the field. First, it presents an open-access dataset of about 140,000 Tweets about exoskeletons posted over the 5-year period from 21 May 2017 to 21 May 2022. Second, it presents a total of 100 Research Questions for researchers to study, analyze, evaluate, ideate, and investigate based on this dataset, grounded in a comprehensive review of recent works in the fields of Big Data, Natural Language Processing, Information Retrieval, Data Mining, Pattern Recognition, and Artificial Intelligence that may be applied to relevant Twitter data for advancing research, innovation, and discovery in exoskeleton research.
