Big Data: Advanced Methods, Interdisciplinary Study and Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 September 2022) | Viewed by 69575

Special Issue Editor


Prof. Dr. Evgeny Nikulchev
Guest Editor

Special Issue Information

Dear Colleagues,

Big Data pose a new challenge to the reliability and capabilities of existing technologies and methods. Today, a large number of applications and interdisciplinary research projects use and generate Big Data. However, obtaining results from Big Data requires modifying methods, accounting for computational complexity, employing technologies for processing and transmitting Big Data, optimizing computational structures, and choosing reliable computational methods. On the one hand, Big Data are a source of new ideas about hidden dependencies, about the analysis of data from many sources, and about the collection and analysis of heterogeneous information; on the other hand, they yield important scientific results only through the efforts of specialists from different fields of science who previously interacted rarely with each other: psychologists and database specialists, economic analysts, mathematicians, artificial intelligence specialists, control systems specialists, linguists, and many others. Interacting with each other in different combinations, these specialists obtain new results that advance contemporary science and technology and form a deeper understanding of the phenomena of the natural sciences and humanities.

This Special Issue of the journal Applied Sciences aims to consolidate the efforts of interdisciplinary research groups to obtain new processing methods, new results, and new ideas and concepts in the field of Big Data.

Articles from any applied field of science that contain new scientific results based on the use of Big Data, as well as articles developing new methods and technologies for Big Data processing, are accepted.

Prof. Dr. Evgeny Nikulchev
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Big Data
  • interdisciplinary research
  • data analysis
  • contemporary science
  • artificial intelligence

Published Papers (16 papers)


Research


14 pages, 2193 KiB  
Article
Interaction Design Based on Big Data Community Home Care Service Demand Levels
by Fangyuan Jiang, Wan-Sok Jang and Young-Hwan Pan
Appl. Sci. 2023, 13(2), 848; https://doi.org/10.3390/app13020848 - 07 Jan 2023
Cited by 1 | Viewed by 2033
Abstract
Most of the contemporary models for meeting the majority of the needs of middle-aged and elderly people are community-based, in-home care. Therefore, this paper designs an interaction model that can meet the need for a rich spiritual and cultural life of the elderly at home. First, the questionnaire content of the Chinese Longitudinal Healthy Longevity Survey (CLHLS) sampling method was designed based on the content of community-based home care services. Then, using the CLHLS sampling method, the survey results of the home care group were collected to form a big data community consisting of four types of home care service needs. Finally, the interaction book model was designed based on the hierarchy of service needs obtained from Abraham Maslow's hierarchy of needs classification method. The experimental results showed that the mean values of the target population's ratings for the presentation and interface aesthetics of the interaction model were 4.34 and 4.19, respectively, the mean value for improving the learning effectiveness of the home-bound population was 4.57, and the mean value for their overall satisfaction was 4.31. This proves that the interaction model is ideal for practice and can meet the learning needs of the elderly, at-home population at different service demand levels, thus addressing the problem of education for the elderly. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)

30 pages, 2329 KiB  
Article
Conceptual Framework for Implementing Temporal Big Data Analytics in Companies
by Maria Mach-Król
Appl. Sci. 2022, 12(23), 12265; https://doi.org/10.3390/app122312265 - 30 Nov 2022
Cited by 3 | Viewed by 2179
Abstract
Considering the time dimension in big data analytics allows for a more complete insight into the analyzed phenomena and thus for gaining a competitive advantage on the market. The entrepreneurs also reported the need for temporal big data analytics, when interviewed by the author. Hence, the main goal of this article is to create a conceptual framework for applying temporal big data analytics (TBDA) in businesses. It is determined that a temporal framework is required. Existing big data implementation frameworks are discussed. The requirements for the successful implementation of temporal big data analytics are shown. Finally, the conceptual framework for organizational adoption of temporal big data analytics is offered and verified. The most important findings of this study are: proving that effective implementation of big data analytics in companies requires open consideration of time; demonstrating the usefulness of the leagile approach in the implementation of TBDA in companies; proposing a comprehensive conceptual framework for TBDA implementation; indicating possible success measures of the TBDA implementation in the company. The study has been conducted according to the Design Science Research in Information Systems (DSRIS) methodology. IT, business leaders, and policymakers can use the findings of this article to plan and develop temporal big data analytics in their enterprises. The report provides useful information on how to implement temporal big data in companies. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)

14 pages, 1205 KiB  
Article
PaperNet: A Dataset and Benchmark for Fine-Grained Paper Classification
by Tan Yue, Yong Li, Xuzhao Shi, Jiedong Qin, Zijiao Fan and Zonghai Hu
Appl. Sci. 2022, 12(9), 4554; https://doi.org/10.3390/app12094554 - 30 Apr 2022
Cited by 3 | Viewed by 2540
Abstract
Document classification is an important area in Natural Language Processing (NLP). Because a huge number of scientific papers have been published at an accelerating rate, it is beneficial to carry out intelligent paper classification, especially fine-grained classification, for researchers. However, a public scientific paper dataset for fine-grained classification is still lacking, so the existing document classification methods have not been put to the test. To fill this vacancy, we designed and collected the PaperNet-Dataset, which consists of multi-modal data (texts and figures). PaperNet 1.0 contains hierarchical categories of papers in the fields of computer vision (CV) and NLP: 2 coarse-grained and 20 fine-grained (7 in CV and 13 in NLP). We ran current mainstream models on the PaperNet-Dataset, along with a multi-modal method that we propose. Interestingly, none of these methods reaches an accuracy of 80% in fine-grained classification, showing plenty of room for improvement. We hope that the PaperNet-Dataset will inspire more work in this challenging area. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)
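A dataset with coarse- and fine-grained labels like the one described above is typically scored at both granularities. The sketch below illustrates this; the category names in the example are invented for illustration and are not taken from PaperNet itself:

```python
def hierarchical_accuracy(preds, golds):
    """Accuracy at the two label granularities of a hierarchical paper dataset.

    Each label is a (coarse, fine) pair; a fine-grained hit requires both
    parts to match, so fine accuracy can never exceed coarse accuracy.
    """
    coarse = sum(p[0] == g[0] for p, g in zip(preds, golds)) / len(golds)
    fine = sum(p == g for p, g in zip(preds, golds)) / len(golds)
    return coarse, fine

# Hypothetical predictions vs. gold labels (category names are illustrative)
preds = [("CV", "segmentation"), ("NLP", "ner"), ("NLP", "parsing")]
golds = [("CV", "detection"),    ("NLP", "ner"), ("CV", "detection")]
print(hierarchical_accuracy(preds, golds))  # coarse 2/3, fine 1/3
```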

25 pages, 8357 KiB  
Article
Digitization of Accounting: The Premise of the Paradigm Shift of Role of the Professional Accountant
by Dan Marius Coman, Constantin Aurelian Ionescu, Anișoara Duică, Mihaela Denisa Coman, Marilena Carmen Uzlau, Sorina Geanina Stanescu and Violeta State
Appl. Sci. 2022, 12(7), 3359; https://doi.org/10.3390/app12073359 - 25 Mar 2022
Cited by 16 | Viewed by 9741
Abstract
The current pandemic crisis has led to a paradigm shift in the economy. Expressions such as digital transformation and digitalization of business are common in the communication channels of economic entities, which want to benefit from the advantages of information technology (artificial intelligence, software robots, and blockchain) to streamline their business. The aim of this research is to highlight the impact of the digitalization of accounting on the business environment, the work style, and the role of professional accountants: the paradigm shift. The study is based on theoretical research as well as empirical research built on a questionnaire applied in economic entities, with respondents being both decision makers and professional accountants. The results obtained by the statistical analysis of the questionnaire (chi-square, cross-tabulation, Friedman test) suggest that digitization is more than a conventional change, being equally about technology and people. The orientation towards digitalization implies, in addition to a well-organized implementation plan, a change in the mentalities of the human factor corroborated with the evolution of the organizational culture of economic entities. At the same time, we are witnessing a change in the accounting paradigm, and the role of professional accountants is evolving from "transaction logger" to analyst and consultant for entrepreneurs. The research confirms that the digitalization of accounting is proving to be not only a modern solution, imposed by technological progress, but also timely, necessary, and even mandatory given the difficulty of anticipating the economic and social context due to the pandemic crisis. This study stands out both because of the innovative character of the approached subject, the digitalization of accounting, which represents a concept in full expansion, and because of its practical utility. This is proven by the analyses performed and the conclusions drawn in the context of an economic environment that is constantly looking for solutions. All operations can be moved to a controlled and accessible digital environment that can be accessed from any location. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)

23 pages, 18104 KiB  
Article
An Analysis of the Performance and Configuration Features of MySQL Document Store and Elasticsearch as an Alternative Backend in a Data Replication Solution
by Doina R. Zmaranda, Cristian I. Moisi, Cornelia A. Győrödi, Robert Ş. Győrödi and Livia Bandici
Appl. Sci. 2021, 11(24), 11590; https://doi.org/10.3390/app112411590 - 07 Dec 2021
Cited by 7 | Viewed by 6284
Abstract
In recent years, with the increase in the volume and complexity of data, choosing a suitable database for storing huge amounts of data is not easy, because the choice must consider aspects such as manageability, scalability, and extensibility. Nowadays, NoSQL databases have gained immense popularity for their efficiency in managing such datasets compared to relational databases. However, relational databases also exhibit some advantages in certain circumstances; therefore, many applications use a combined approach: relational and non-relational. This paper performs a comparative evaluation of two popular open-source DBMSs, MySQL Document Store and Elasticsearch, as non-relational DBMSs; the comparison is based on a detailed analysis of CRUD operations for different amounts of data, showing how the databases could be modeled and used in an application. A case-study application was developed for this purpose in the Java programming language with the Spring framework, using relational MySQL as well as non-relational Elasticsearch and MySQL Document Store for data storage. To model the real situation encountered in several developed applications that use both relational and non-relational databases, a data replication solution that imports data from the primary relational MySQL database into Elasticsearch and MySQL Document Store, as possible alternatives for more efficient data search, was proposed and implemented. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)
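The import step of such a replication pipeline essentially maps each relational row to a document for the secondary store. A minimal sketch of that mapping follows; the `_id` convention and field names are assumptions for illustration, and the actual driver calls to MySQL, Elasticsearch, or Document Store are omitted:

```python
def row_to_document(row, pk="id"):
    """Map a relational row (column -> value dict) to a document keyed by its
    primary key: the shape pushed to a document store during replication."""
    body = {col: val for col, val in row.items() if col != pk}
    return {"_id": row[pk], "body": body}

# Hypothetical row from the primary relational database
row = {"id": 42, "name": "sensor-A", "reading": 17.5}
print(row_to_document(row))
# {'_id': 42, 'body': {'name': 'sensor-A', 'reading': 17.5}}
```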

12 pages, 4030 KiB  
Communication
Evaluation of User Reactions and Verification of the Authenticity of the User’s Identity during a Long Web Survey
by Evgeny Nikulchev, Alexander Gusev, Dmitry Ilin, Nurziya Gazanova and Sergey Malykh
Appl. Sci. 2021, 11(22), 11034; https://doi.org/10.3390/app112211034 - 22 Nov 2021
Cited by 2 | Viewed by 2028
Abstract
Web surveys are very popular in the Internet space. Web surveys are widely incorporated for gathering customer opinion about Internet services, for sociological and psychological research, and as part of the knowledge testing systems in electronic learning. When conducting web surveys, one of the issues to consider is the respondents’ authenticity throughout the entire survey process. We took 20,000 responses to an online questionnaire as experimental data. The survey took about 45 min on average. We did not take into account the given answers; we only considered the response time to the first question on each page of the survey interface, that is, only the users’ reaction time was taken into account. Data analysis showed that respondents get used to the interface elements and want to finish a long survey as soon as possible, which leads to quicker reactions. Based on the data, we built two neural network models that identify the records in which the respondent’s authenticity was violated or the respondent acted as a random clicker. The amount of data allows us to conclude that the identified dependencies are widely applicable. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)
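The study detects random clickers with neural network models trained on these reaction times. As a much simpler illustration of the same signal, a median first-reaction time below any plausible reading speed already flags a suspicious session; the 1.0 s cutoff below is an assumption for the example, not a value from the paper:

```python
from statistics import median

def is_random_clicker(reaction_times_s, min_plausible_s=1.0):
    """Flag a respondent whose per-page first-reaction times are too fast to
    reflect actual reading. A deliberately simple threshold rule standing in
    for the neural network models the study trains on the same features."""
    return median(reaction_times_s) < min_plausible_s

print(is_random_clicker([0.3, 0.5, 0.4, 0.6]))  # True: clicking through pages
print(is_random_clicker([4.2, 6.1, 3.8, 5.0]))  # False: plausible reading pace
```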

17 pages, 55810 KiB  
Article
Automatic Asbestos Control Using Deep Learning Based Computer Vision System
by Vasily Zyuzin, Mikhail Ronkin, Sergey Porshnev and Alexey Kalmykov
Appl. Sci. 2021, 11(22), 10532; https://doi.org/10.3390/app112210532 - 09 Nov 2021
Cited by 7 | Viewed by 2511
Abstract
The paper discusses the results of the research and development of an innovative deep learning-based computer vision system for fully automatic estimation of asbestos content (productivity) in rock chunk (stone) veins in an open pit, within a time comparable with the work of specialists (about 10 min per open pit processing place). The discussed system is based on applying instance and semantic segmentation with artificial neural networks. A Mask R-CNN-based network architecture is applied to search for asbestos-containing rock chunks in images of an open pit. A U-Net-based network architecture is applied to the segmentation of asbestos veins in the images of selected rock chunks. The designed system automatically searches for and takes images of the asbestos rocks in an open pit in the near-infrared (NIR) range and processes the obtained images. The result of the system's work is the average asbestos content (productivity) estimation for each controlled open pit. Asbestos content is estimated as the graduated average ratio of the vein area to the selected rock chunk area, both determined by the trained neural networks. For both neural network training tasks, the training, validation, and test datasets were collected. The designed system demonstrates an error of about 0.4% under different weather conditions in an open pit when the asbestos content is about 1.5–4%. The obtained accuracy is sufficient to use the system as a geological service tool instead of the currently applied visual-based estimations. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)
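The final estimation step reduces to a ratio of segmented areas. A minimal sketch with binary masks as nested 0/1 lists; the paper's system additionally graduates (calibrates) this ratio and averages it over a controlled place, which is omitted here:

```python
def asbestos_content(vein_mask, chunk_mask):
    """Ratio of asbestos-vein area to rock-chunk area from two binary masks,
    as produced by the U-Net and Mask R-CNN segmentation stages."""
    vein_area = sum(map(sum, vein_mask))
    chunk_area = sum(map(sum, chunk_mask))
    return vein_area / chunk_area if chunk_area else 0.0

# Toy 3x3 masks: 2 vein pixels inside an 8-pixel chunk
vein  = [[0, 1, 0],
         [0, 1, 0],
         [0, 0, 0]]
chunk = [[1, 1, 1],
         [1, 1, 1],
         [1, 1, 0]]
print(asbestos_content(vein, chunk))  # 0.25
```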

9 pages, 1150 KiB  
Article
Experimental Characteristics Study of Data Storage Formats for Data Marts Development within Data Lakes
by Vladimir Belov, Alexander N. Kosenkov and Evgeny Nikulchev
Appl. Sci. 2021, 11(18), 8651; https://doi.org/10.3390/app11188651 - 17 Sep 2021
Cited by 3 | Viewed by 2360
Abstract
One of the most popular methods for building analytical platforms involves the use of the concept of data lakes. A data lake is a storage system in which the data are presented in their original format, making it difficult to conduct analytics or present aggregated data. To solve this issue, data marts are used: repositories of highly specialized information focused on the requests of employees of a certain department or line of an organization's work. This article presents a study of big data storage formats in the Apache Hadoop platform when used to build data marts. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)
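The study benchmarks storage formats on the Hadoop platform. As a self-contained illustration of why the format choice matters at all, the snippet below measures the footprint of identical records in three standard-library serializations; these stand in for Hadoop-ecosystem formats only conceptually, and the record schema is invented for the example:

```python
import csv, io, json, pickle

def serialized_sizes(records):
    """Byte footprint of the same list of flat dicts in three stdlib formats.

    Illustrates the effect the paper quantifies for Hadoop formats: the same
    data can occupy very different amounts of storage depending on format.
    """
    as_json = json.dumps(records).encode()
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=records[0].keys())
    writer.writeheader()
    writer.writerows(records)
    as_csv = buf.getvalue().encode()
    as_pickle = pickle.dumps(records)
    return {"json": len(as_json), "csv": len(as_csv), "pickle": len(as_pickle)}

# Hypothetical data mart rows
rows = [{"dept": "sales", "amount": i} for i in range(100)]
print(serialized_sizes(rows))
```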

23 pages, 4781 KiB  
Article
Performance Impact of Optimization Methods on MySQL Document-Based and Relational Databases
by Cornelia A. Győrödi, Diana V. Dumşe-Burescu, Robert Ş. Győrödi, Doina R. Zmaranda, Livia Bandici and Daniela E. Popescu
Appl. Sci. 2021, 11(15), 6794; https://doi.org/10.3390/app11156794 - 23 Jul 2021
Cited by 11 | Viewed by 8328
Abstract
Databases are an important part of today’s applications where large amounts of data need to be stored, processed, and accessed quickly. One of the important criteria when choosing to use a database technology is its data processing performance. In this paper, some methods for optimizing the database structure and queries were applied on two popular open-source database management systems: MySQL as a relational DBMS, and document-based MySQL as a non-relational DBMS. The main objective of this paper was to conduct a comparative analysis of the impact that the proposed optimization methods have on each specific DBMS when carrying out CRUD (CREATE, READ, UPDATE, DELETE) requests. To perform the analysis and performance evaluation of CRUD operations for different amounts of data, a case study testing architecture based on Java was developed and used to show how the databases’ proposed optimization methods can influence the performance of the application, and to highlight the differences in response time and complexity. The results obtained show the degree to which the proposed optimization methods contributed to the application’s performance improvement in the case of both databases; based on these, a detailed analysis and several conclusions are presented to support a decision for choosing a specific approach. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)
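A timing harness of the kind the paper describes can be sketched in a few lines. Here sqlite3 stands in for MySQL (the paper's case study is a Java application), and the table, sizes, and the secondary index are assumptions chosen to illustrate one structural optimization:

```python
import sqlite3
import time

def time_crud(n=5000, use_index=False):
    """Time bulk inserts and point reads on a toy table, with or without a
    secondary index, returning (insert_seconds, read_seconds)."""
    con = sqlite3.connect(":memory:")
    cur = con.cursor()
    cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
    if use_index:
        cur.execute("CREATE INDEX idx_val ON t(val)")  # the optimization under test
    t0 = time.perf_counter()
    cur.executemany("INSERT INTO t (val) VALUES (?)",
                    ((f"v{i}",) for i in range(n)))
    con.commit()
    t_insert = time.perf_counter() - t0
    t0 = time.perf_counter()
    for i in range(0, n, 50):  # point reads by non-key column
        cur.execute("SELECT id FROM t WHERE val = ?", (f"v{i}",)).fetchone()
    t_read = time.perf_counter() - t0
    con.close()
    return t_insert, t_read

print("no index:", time_crud(use_index=False))
print("indexed: ", time_crud(use_index=True))
```

The same skeleton extends to UPDATE and DELETE statements; comparing the two configurations over growing `n` reproduces, in miniature, the response-time analysis the paper performs.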

26 pages, 7320 KiB  
Article
From Classical Machine Learning to Deep Neural Networks: A Simplified Scientometric Review
by Ravil I. Mukhamediev, Adilkhan Symagulov, Yan Kuchin, Kirill Yakunin and Marina Yelis
Appl. Sci. 2021, 11(12), 5541; https://doi.org/10.3390/app11125541 - 15 Jun 2021
Cited by 21 | Viewed by 7174
Abstract
There are promising prospects on the way to the widespread use of AI, as well as problems that need to be overcome to adapt AI and ML technologies in industries. The paper systematizes the sections of AI and calculates the dynamics of changes in the number of scientific articles in machine learning sections according to Google Scholar. The method of data acquisition and calculation of dynamic indicators of changes in publication activity is described: the growth rate (D1) and acceleration of growth (D2) of scientific publications. Analysis of publication activity showed, in particular, a high interest in modern transformer models, the development of datasets for some industries, and a sharp increase in interest in methods of explainable machine learning. Relatively small research domains are receiving increasing attention, as evidenced by the negative correlation between the number of articles and the D1 and D2 scores. The results show that, despite the limitations of the method, it is possible to (1) identify fast-growing areas of research regardless of the number of articles, and (2) predict publication activity in the short term with accuracy satisfactory for practice (the average prediction error for the year ahead is 6%, with a standard deviation of 7%). This paper presents results for more than 400 search queries related to classified research areas and the application of machine learning models to industries. The proposed method evaluates the dynamics of growth and decline of scientific domains associated with certain key terms. It does not require access to large bibliometric archives and allows quantitative estimates of dynamic indicators to be obtained relatively quickly. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)
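Growth-rate and acceleration indicators over yearly article counts can be sketched as finite differences. The plain-difference form below is an assumption for illustration; the abstract does not give the paper's exact normalization of D1 and D2:

```python
def growth_rate(counts):
    """First difference of yearly publication counts (D1-style indicator)."""
    return [b - a for a, b in zip(counts, counts[1:])]

def acceleration(counts):
    """Second difference (D2-style indicator): change of the growth rate."""
    d1 = growth_rate(counts)
    return [b - a for a, b in zip(d1, d1[1:])]

# Hypothetical yearly counts for a single search query
counts = [10, 15, 25, 45, 80]
print(growth_rate(counts))   # [5, 10, 20, 35]
print(acceleration(counts))  # [5, 10, 15]
```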

11 pages, 2979 KiB  
Article
Resource Analysis of the Log Files Storage Based on Simulation Models in a Virtual Environment
by Shamil Magomedov, Dmitry Ilin and Evgeny Nikulchev
Appl. Sci. 2021, 11(11), 4718; https://doi.org/10.3390/app11114718 - 21 May 2021
Cited by 2 | Viewed by 1469
Abstract
In order to perform resource analyses, we offer an experimental testbed built on virtual machines. A concept for measuring the resources of each component is proposed. At the system design stage, one can estimate how many resources to reserve, and if external modules are installed in an existing system, one can assess whether there are enough resources and whether the system can scale. This is especially important for large software systems with web services. The dataset contains a set of experimental data and the configuration of the virtual servers of the experiment, enabling resource analyses of the logs. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)
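The kind of reserve estimate such simulation experiments refine can be sketched as a back-of-envelope calculation; all parameter values below are hypothetical:

```python
def log_storage_bytes(events_per_s, avg_event_bytes, retention_days):
    """Naive lower bound on storage to reserve for a log-file store:
    event rate x event size x retention window (86,400 s per day)."""
    return events_per_s * avg_event_bytes * retention_days * 86_400

# Hypothetical web service: 200 events/s, 300-byte log lines, 30-day retention
print(log_storage_bytes(200, 300, 30) / 1e9, "GB")  # 155.52 GB
```

Simulation on virtual machines, as in the paper, replaces these assumed constants with measured rates and sizes per component.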

9 pages, 725 KiB  
Article
Users’ Reaction Time for Improvement of Security and Access Control in Web Services
by Shamil Magomedov, Alexander Gusev, Dmitry Ilin and Evgeny Nikulchev
Appl. Sci. 2021, 11(6), 2561; https://doi.org/10.3390/app11062561 - 12 Mar 2021
Cited by 5 | Viewed by 1772
Abstract
This paper concerns the case of the development of a technology for increasing the efficiency of access control based on the user behavior monitoring built into a software system’s user interface. It is proposed to use the time of user reactions as individual indicators of psychological and psychophysical state. This paper presents the results and interpretation of user reactions collected during a mass web survey of students of the Russian Federation. The total number of users was equal to 22,357. To reveal the patterns in user reactions, both quantitative and qualitative approaches were applied. The analysis of the data demonstrated that the user could be characterized by their psychomotor reactions, collected during the answering of a set of questions. Those reactions reflected the personal skills of the interface interaction, the speed of reading, and the speed of answering. Thus, those observations can be used as a supplement to personal verification in information systems. The collection of the reaction times did not load the data volumes significantly nor transmit confidential information. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)
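As a supplement to personal verification, a stored per-user reaction-time profile can be compared against a new session. The z-score rule below is an illustrative stand-in for the paper's analysis; the threshold and the profile values in the example are assumptions:

```python
from statistics import mean

def matches_profile(session_times_s, profile_mean_s, profile_sd_s, z_max=3.0):
    """Accept the session if its mean reaction time lies within z_max
    standard deviations of the user's stored reaction-time profile."""
    z = abs(mean(session_times_s) - profile_mean_s) / profile_sd_s
    return z <= z_max

print(matches_profile([2.1, 1.9, 2.3], 2.0, 0.4))   # True: consistent with profile
print(matches_profile([9.5, 10.2, 9.8], 2.0, 0.4))  # False: likely a different person
```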

19 pages, 3595 KiB  
Article
Protected Network Architecture for Ensuring Consistency of Medical Data through Validation of User Behavior and DICOM Archive Integrity
by Shamil Magomedov and Artem Lebedev
Appl. Sci. 2021, 11(5), 2072; https://doi.org/10.3390/app11052072 - 26 Feb 2021
Cited by 5 | Viewed by 2581
Abstract
The problem of the consistency of medical data in Hospital Data Management Systems is considered in the context of the correctness of medical images stored in a PACS (Picture Archiving and Communication System) and the legality of actions that authorized users perform when accessing MIS (Medical Information System) facilities via web interfaces. The purpose of the study is to develop a SIEM-like (Security Information and Event Management) architecture for offline analysis of DICOM (Digital Imaging and Communications in Medicine) archive integrity and users' activity. To achieve acceptable accuracy when validating DICOM archive integrity, two aspects are taken into account: the correctness of the periodicity of the incoming data stream and the correctness of the image data (time series) itself for the considered modality. Validation of users' activity assumes the application of model-driven approaches using state-of-the-art machine learning methods. This paper proposes a network architecture with guard clusters to protect sensitive components such as the DICOM archive and the application server of the MIS. New server roles were designed to perform traffic interception, data analysis, and alert management without reconfiguration of production software components. The cluster architecture allows the analysis of incoming big data streams with high availability, providing horizontal scalability and fault tolerance. To minimize possible harm from spurious DICOM files, the approach should be considered as an addition to other securing techniques such as watermarking, encryption, and testing data conformance with the standard. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)
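One of the two integrity signals above, correctness of the incoming stream's periodicity, can be sketched as a check on inter-arrival gaps. The period and tolerance values below are assumptions for illustration, not values from the paper:

```python
def periodicity_ok(arrival_times_s, expected_period_s, tolerance=0.5):
    """Check that inter-arrival gaps of an incoming image stream stay within
    a relative tolerance of the expected period; a broken cadence may signal
    dropped, injected, or delayed DICOM files."""
    gaps = [b - a for a, b in zip(arrival_times_s, arrival_times_s[1:])]
    return all(abs(g - expected_period_s) <= tolerance * expected_period_s
               for g in gaps)

print(periodicity_ok([0, 10, 20, 30], 10))  # True: steady 10 s cadence
print(periodicity_ok([0, 10, 45, 50], 10))  # False: 35 s gap breaks cadence
```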

15 pages, 435 KiB  
Article
Advanced Data Mining of SSD Quality Based on FP-Growth Data Analysis
by Jieh-Ren Chang, You-Shyang Chen, Chien-Ku Lin and Ming-Fu Cheng
Appl. Sci. 2021, 11(4), 1715; https://doi.org/10.3390/app11041715 - 14 Feb 2021
Cited by 5 | Viewed by 2599
Abstract
Storage devices in the computer industry have gradually transformed from the hard disk drive (HDD) to the solid-state drive (SSD), whose key component is error correction in not-and (NAND) flash memory. While NAND flash memory is under development, it is still limited by the "program and erase" cycle (PE cycle). Therefore, the improvement of quality and the formulation of customer service strategy are topics worthy of discussion at this stage. This study is based on computer company A as the research object and collects more than 8000 items of SSD error data from its customers, which are then mined with the frequent pattern growth (FP-Growth) association rule algorithm to identify the association rules of errors, setting a minimum support of 90 and a minimum confidence of 10 as the thresholds. According to the rules, three improvement strategies of production control are suggested: (1) use of the association rules to speed up the judgment of the SSD error condition by customer service personnel, (2) a quality strategy, and (3) a customer service strategy. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)
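The rule-mining step described above can be illustrated with a brute-force frequent-itemset miner. FP-Growth reaches the same itemsets via a compressed prefix tree instead of exhaustive counting, so this is a stand-in sketch using hypothetical error codes and tiny thresholds, not the study's implementation or data.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Brute-force frequent-itemset mining (FP-Growth yields the same
    result more efficiently). Support is an absolute transaction count."""
    items = sorted({i for t in transactions for i in t})
    freq = {}
    for k in range(1, len(items) + 1):
        found = False
        for combo in combinations(items, k):
            count = sum(1 for t in transactions if set(combo) <= t)
            if count >= min_support:
                freq[combo] = count
                found = True
        if not found:
            break  # Apriori property: no larger frequent itemsets exist
    return freq

def rules(freq, transactions, min_conf):
    """Derive rules A -> B with confidence = support(A∪B) / support(A)."""
    out = []
    for itemset, sup in freq.items():
        if len(itemset) < 2:
            continue
        for i in range(1, len(itemset)):
            for lhs in combinations(itemset, i):
                lhs_sup = sum(1 for t in transactions if set(lhs) <= t)
                if sup / lhs_sup >= min_conf:
                    rhs = tuple(x for x in itemset if x not in lhs)
                    out.append((lhs, rhs, sup / lhs_sup))
    return out

# Hypothetical SSD error-log transactions (error codes seen per failed unit):
tx = [{"E1", "E2"}, {"E1", "E2", "E3"}, {"E1", "E2"}, {"E2", "E3"}]
fs = frequent_itemsets(tx, min_support=3)
print(rules(fs, tx, min_conf=0.9))  # -> [(('E1',), ('E2',), 1.0)]
```

Here the rule E1 → E2 holds with confidence 1.0, the kind of association the study uses to speed up error diagnosis.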

Review

Jump to: Research, Other

30 pages, 4044 KiB  
Review
Review of Some Applications of Unmanned Aerial Vehicles Technology in the Resource-Rich Country
by Ravil I. Mukhamediev, Adilkhan Symagulov, Yan Kuchin, Elena Zaitseva, Alma Bekbotayeva, Kirill Yakunin, Ilyas Assanov, Vitaly Levashenko, Yelena Popova, Assel Akzhalova, Sholpan Bastaubayeva and Laila Tabynbaeva
Appl. Sci. 2021, 11(21), 10171; https://doi.org/10.3390/app112110171 - 29 Oct 2021
Cited by 48 | Viewed by 5949
Abstract
The use of unmanned aerial vehicles (UAVs) in various spheres of human activity is a promising direction for countries with very different types of economies, including resource-rich ones. The peculiarity of such countries is their dependence on resource prices, since their economies show low diversification. The employment of new technologies is therefore one way of increasing the sustainability of such economies' development. In this context, the use of UAVs is a promising direction, since they are relatively cheap and reliable, and their use does not require a high-tech background. The most common use of UAVs is associated with various types of monitoring tasks. In addition, UAVs can be used for organizing communication, search, cargo delivery, field processing, etc. Combining UAVs with elements of artificial intelligence (AI) helps to solve these problems in automatic or semi-automatic mode. Such a combination is termed intelligent unmanned aerial vehicle technology (IUAVT), and its employment increases the efficiency of UAV-based technology. However, in order to adopt IUAVT in the sectors of the economy, it is necessary to overcome a range of limitations. This research is devoted to the analysis of opportunities for, and obstacles to, the adoption of IUAVT in the economy. The possible economic effect is estimated for Kazakhstan as one of the resource-rich countries. The review consists of three main parts. The first part describes the IUAVT application areas and the tasks it can solve. The following areas of application are considered: precision agriculture, monitoring of hazardous geophysical processes, environmental pollution monitoring, exploration of minerals, wild animal monitoring, monitoring of technical and engineering structures, and traffic monitoring. The economic potential of each application area of IUAVT in Kazakhstan is estimated. The second part reviews the technical, legal, and software-algorithmic limitations of IUAVT and modern approaches aimed at overcoming them. The third part, the discussion, considers the impact of these limitations and the unsolved tasks of IUAVT employment in the areas of activity under consideration, and assesses the overall economic effect. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)

Other

Jump to: Research, Review

19 pages, 745 KiB  
Systematic Review
The Role of Veracity on the Load Monitoring of Professional Soccer Players: A Systematic Review in the Face of the Big Data Era
by João Gustavo Claudino, Carlos Alberto Cardoso Filho, Daniel Boullosa, Adriano Lima-Alves, Gustavo Rejano Carrion, Rodrigo Luiz da Silva GianonI, Rodrigo dos Santos Guimarães, Fúlvio Martins Ventura, André Luiz Costa Araujo, Sebastián Del Rosso, José Afonso and Julio Cerca Serrão
Appl. Sci. 2021, 11(14), 6479; https://doi.org/10.3390/app11146479 - 14 Jul 2021
Cited by 9 | Viewed by 7543
Abstract
Big Data has real value when the veracity of the collected data has been previously identified. However, data veracity for load monitoring in professional soccer players has not been analyzed yet. This systematic review aims to evaluate the current evidence from the scientific literature related to data veracity for load monitoring in professional soccer. Systematic searches through the PubMed, Scopus, and Web of Science databases were conducted for reports on the data veracity of diverse load monitoring tools and the associated parameters used in professional soccer. Ninety-four studies were finally included in the review, with 39 different tools used and 578 associated parameters identified. The pooled sample consisted of 2066 footballers (95% male: 24 ± 3 years and 5% female: 24 ± 1 years). Seventy-three percent of these studies did not report veracity metrics for any of the parameters from these tools. Thus, data veracity was found for 54% of tools and 23% of parameters. The current information will assist in the selection of the most appropriate tools and parameters to be used for load monitoring with traditional and Big Data approaches, while identifying those still requiring the analysis of their veracity metrics or their improvement to acceptable veracity levels. Full article
(This article belongs to the Special Issue Big Data: Advanced Methods, Interdisciplinary Study and Applications)
