Bayesian Statistics on Artificial Intelligence: Theory, Methods and Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 July 2023) | Viewed by 11214

Special Issue Editor


Prof. Dr. Sunghae Jun
Guest Editor
Department of Big Data and Statistics, Cheongju University, Chungbuk 28503, Korea
Interests: Bayesian statistics; artificial intelligence; data science

Special Issue Information

Dear Colleagues,

We are inviting submissions to the Special Issue on Bayesian Statistics on Artificial Intelligence: Theory, Methods and Applications. Bayesian statistics is built on Bayesian inference, in which a prior distribution is combined with a likelihood to yield a posterior distribution. Through this inference, Bayesian learning represents the updating of beliefs about events as a probability distribution, which makes Bayesian statistics one of the most active fields in artificial intelligence (AI). Bayesian neural networks and Bayesian deep learning are direct results of applying Bayesian statistics to AI, and Bayesian methods continue to contribute to a growing range of AI domains. In this Special Issue, we therefore invite submissions on diverse methods and applications of Bayesian statistics in AI, welcoming not only theoretical studies of Bayesian statistics for artificial intelligence but also a wide range of applied studies.
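
To make the prior-likelihood-posterior update concrete, the following minimal Python sketch shows a Bayesian belief update for a success probability using the conjugate beta-binomial model; the prior parameters and data are purely illustrative and are not drawn from any submission.

```python
# Minimal sketch of a Bayesian belief update (beta-binomial conjugacy).
# Illustrative only; not taken from any paper in this Special Issue.
from scipy import stats

# Prior belief about a success probability p: Beta(2, 2), a mild belief that p is near 0.5.
alpha_prior, beta_prior = 2.0, 2.0

# Observed data: 7 successes in 10 Bernoulli trials (a binomial likelihood).
successes, trials = 7, 10

# Conjugate update: the posterior is again a Beta distribution.
alpha_post = alpha_prior + successes
beta_post = beta_prior + (trials - successes)
posterior = stats.beta(alpha_post, beta_post)

print(f"Posterior mean of p: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```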

Prof. Dr. Sunghae Jun
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Bayesian statistics for machine learning
  • Bayesian neural networks
  • Bayesian deep learning
  • cognitive artificial intelligence using Bayesian inference
  • Bayesian networks
  • regression models using Bayesian approaches
  • classification models using Bayesian approaches
  • reinforcement learning using Bayesian approaches
  • Bayesian mixture models for artificial intelligence
  • Markov Chain Monte Carlo (MCMC) for artificial intelligence
  • big data analysis and visualization
  • statistical models for artificial intelligence
  • patent big data analysis using statistics and machine learning

Published Papers (7 papers)


Research

15 pages, 2054 KiB  
Article
Advanced Bayesian Network for Task Effort Estimation in Agile Software Development
by Mili Turic, Stipe Celar, Srdjana Dragicevic and Linda Vickovic
Appl. Sci. 2023, 13(16), 9465; https://doi.org/10.3390/app13169465 - 21 Aug 2023
Cited by 1 | Viewed by 916
Abstract
Effort estimation is always a challenge, especially for agile software development projects. This paper describes the process of building a Bayesian network model for effort prediction in agile development. Very few studies have addressed the application of Bayesian networks to estimating agile development effort: some of that research has not been validated in practice, and some has been validated on only one or two projects. This paper aims to bring the implementation and use of Bayesian networks for effort prediction closer to practitioners. The process consists of two phases. In the first phase, a Bayesian network model for task effort estimation is constructed and validated on real agile projects. A relatively small model showed satisfactory estimation accuracy, but it used only five output intervals. The model proved useful in daily work, but the project manager wanted more output intervals, even though increasing the number of output intervals reduces prediction accuracy. The second phase therefore focuses on increasing the number of output intervals while maintaining satisfactory accuracy. The advanced model for task effort estimation is developed and tested on real projects of two software firms.
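
As a rough illustration of the kind of discrete Bayesian network described in this abstract, the sketch below encodes a toy task-effort network with two parent variables and queries the effort distribution by enumeration; the variables, intervals, and probabilities are invented for illustration and are not the authors' model.

```python
# Toy discrete Bayesian network for task effort estimation; the structure,
# variables, and probabilities are invented placeholders.
from itertools import product

# P(complexity), P(experience), and P(effort_interval | complexity, experience)
p_complexity = {"low": 0.6, "high": 0.4}
p_experience = {"junior": 0.5, "senior": 0.5}
p_effort = {  # keyed by (complexity, experience) -> distribution over effort intervals
    ("low", "junior"):  {"<4h": 0.5, "4-8h": 0.4, ">8h": 0.1},
    ("low", "senior"):  {"<4h": 0.7, "4-8h": 0.25, ">8h": 0.05},
    ("high", "junior"): {"<4h": 0.1, "4-8h": 0.4, ">8h": 0.5},
    ("high", "senior"): {"<4h": 0.2, "4-8h": 0.5, ">8h": 0.3},
}

def effort_posterior(evidence):
    """P(effort | evidence) by brute-force enumeration over the joint distribution."""
    scores = {}
    for c, e in product(p_complexity, p_experience):
        if evidence.get("complexity", c) != c or evidence.get("experience", e) != e:
            continue
        for effort, p in p_effort[(c, e)].items():
            scores[effort] = scores.get(effort, 0.0) + p_complexity[c] * p_experience[e] * p
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

print(effort_posterior({"complexity": "high"}))  # effort distribution for a complex task
```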

27 pages, 11028 KiB  
Article
Support Vector Machine-Assisted Importance Sampling for Optimal Reliability Design
by Chunyan Ling, Jingzhe Lei and Way Kuo
Appl. Sci. 2022, 12(24), 12750; https://doi.org/10.3390/app122412750 - 12 Dec 2022
Cited by 1 | Viewed by 1026
Abstract
A population-based optimization algorithm combining the support vector machine (SVM) and importance sampling (IS) is proposed to achieve a global solution to optimal reliability design. The proposed approach is a greedy algorithm that starts with an initial population. At each iteration, the population is divided into feasible and infeasible individuals by the given constraints; the feasible individuals are then classified as superior or inferior according to their fitness. SVM is used to construct one classifier separating the feasible and infeasible domains and another separating superior and inferior individuals. A quasi-optimal IS distribution is constructed from these classifiers, and a new population is generated from it to update the optimal solution. The iteration is repeated until a preset stopping condition is satisfied. The merit of the proposed approach is that the SVM surrogates avoid repeatedly invoking the reliability (objective) function and the constraint functions, which significantly reduces the computational burden when the actual functions are complicated. In addition, IS thoroughly explores the feasible domain, so the produced offspring cover almost the entire feasible domain and thus escape local optima. The presented examples showcase the promise of the proposed algorithm.
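
The sketch below is a hedged, toy rendering of one iteration of the loop described above: an SVM surrogate for the constraints, a second surrogate separating superior from inferior fitness, and resampling that keeps only candidates both surrogates accept. The objective, constraint, and population sizes are placeholders rather than the authors' benchmark problems.

```python
# One iteration of an SVM-assisted importance sampling loop on a toy problem;
# not the authors' implementation.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(x):          # toy objective: reliability-like score to maximize
    return -np.sum((x - 0.5) ** 2, axis=1)

def constraint_ok(x):    # toy feasibility constraint
    return np.sum(x, axis=1) <= 1.2

# Initial population
pop = rng.uniform(0, 1, size=(200, 2))
feasible = constraint_ok(pop)

# Classifier 1: feasible vs. infeasible (surrogate for the constraint functions)
clf_feas = SVC(kernel="rbf").fit(pop, feasible.astype(int))

# Classifier 2: superior vs. inferior among feasible individuals (surrogate for fitness)
fit_feas = fitness(pop[feasible])
superior = fit_feas >= np.median(fit_feas)
clf_sup = SVC(kernel="rbf").fit(pop[feasible], superior.astype(int))

# Quasi-optimal importance sampling: propose many candidates and keep those the
# surrogates label feasible and superior, avoiding calls to the true functions.
cand = rng.uniform(0, 1, size=(2000, 2))
keep = (clf_feas.predict(cand) == 1) & (clf_sup.predict(cand) == 1)
new_pop = cand[keep][:200]

if len(new_pop) > 0:
    best = new_pop[np.argmax(fitness(new_pop))]
    print("Best candidate this iteration:", best)
```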

12 pages, 1778 KiB  
Article
A Study on the Identification of Delayed Delivery Risk Transmission Paths in Multi-Variety and Low-Volume Enterprises Based on Bayesian Network
by Linchao Yang, Fan Zhang, Anying Liu, Shenghan Zhou, Xiangwei Wu and Fajie Wei
Appl. Sci. 2022, 12(23), 12024; https://doi.org/10.3390/app122312024 - 24 Nov 2022
Cited by 1 | Viewed by 1294
Abstract
Due to the prevalence of the order-based production mode, multi-variety, small-batch manufacturing enterprises frequently delay deliveries to downstream customers. To date, most studies on delayed delivery risk have focused on responding to the risk after it occurs, ignoring how the risk arises. For multi-variety, low-volume production companies, any part of the production process can lead to the ultimate risk of delayed delivery, and the risk is transmissible. The path of risk transmission therefore needs to be identified so that the risk of late delivery can be controlled effectively at key production stages. In this paper, from the perspective of risk transmission, a recognition method based on association rules and a Bayesian network is proposed to identify the risk conduction path. The method first determines strong association rules among the risk factors from historical data stored in the ERP system and builds the Bayesian network topology of the risk transmission path by combining the business process with expert experience. Next, the prior and conditional probabilities of each node are determined from data statistics, and the delayed delivery risk transmission path is identified using forward and backward reasoning over the Bayesian network. Finally, a case study is provided to verify the method, with the following conclusions: (1) delays in delivery to downstream customers are mainly due to the delayed delivery of upstream suppliers and sudden changes in customer demand, and (2) the adjustment of enterprise production plans is the key node of the delayed delivery risk transmission path. Through this research, production companies can identify the targets of risk management more scientifically and mitigate the risk by adjusting key links.
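
The forward and backward reasoning step mentioned above can be illustrated on an invented two-node chain (supplier delay leading to delivery delay); the probabilities below are placeholders, not values from the case study.

```python
# Forward (predictive) and backward (diagnostic) reasoning on a toy risk chain.
p_supplier_delay = 0.2
# P(delivery delay | supplier delay state)
p_delivery_given = {True: 0.7, False: 0.1}

# Forward reasoning: marginal probability of a delayed delivery
p_delivery = (p_supplier_delay * p_delivery_given[True]
              + (1 - p_supplier_delay) * p_delivery_given[False])

# Backward reasoning: given a delayed delivery was observed, how likely is it
# that the supplier was the cause? (Bayes' rule)
p_supplier_given_delivery = p_supplier_delay * p_delivery_given[True] / p_delivery

print(f"P(delivery delayed) = {p_delivery:.3f}")
print(f"P(supplier delayed | delivery delayed) = {p_supplier_given_delivery:.3f}")
```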

12 pages, 1542 KiB  
Article
Cognitive Artificial Intelligence Using Bayesian Computing Based on Hybrid Monte Carlo Algorithm
by Sangsung Park and Sunghae Jun
Appl. Sci. 2022, 12(18), 9270; https://doi.org/10.3390/app12189270 - 15 Sep 2022
Cited by 3 | Viewed by 1628
Abstract
Cognitive artificial intelligence (CAI) is an intelligent machine that thinks and behaves similarly to humans and has the ability to mimic human emotions. With the development of AI in various fields, the interest in and demand for CAI are continuously increasing. Most current AI research focuses on realizing intelligence that can make optimal decisions, and existing studies have not conducted in-depth research on human emotions and cognitive perspectives. In the future, however, the demand for AI that can imitate human emotions in fields such as healthcare and education will continue to grow. Therefore, we propose a method for building CAI in this paper. We use Bayesian inference and computing based on the hybrid Monte Carlo algorithm for CAI development. To show how the proposed method can be applied to practical problems, we carry out an experiment using simulation data.
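
For readers unfamiliar with the hybrid (Hamiltonian) Monte Carlo algorithm that the paper builds on, the following minimal sampler for a one-dimensional standard normal target shows the leapfrog integration and Metropolis acceptance steps; the target, step size, and trajectory length are illustrative and unrelated to the paper's CAI experiments.

```python
# Minimal hybrid (Hamiltonian) Monte Carlo sampler for a toy 1-D target.
import numpy as np

rng = np.random.default_rng(1)

def neg_log_prob(q):        # target: standard normal, so U(q) = q^2 / 2
    return 0.5 * q ** 2

def grad_neg_log_prob(q):
    return q

def hmc_step(q, step_size=0.2, n_leapfrog=20):
    p = rng.normal()                          # sample auxiliary momentum
    q_new, p_new = q, p
    # Leapfrog integration of the Hamiltonian dynamics
    p_new -= 0.5 * step_size * grad_neg_log_prob(q_new)
    for _ in range(n_leapfrog - 1):
        q_new += step_size * p_new
        p_new -= step_size * grad_neg_log_prob(q_new)
    q_new += step_size * p_new
    p_new -= 0.5 * step_size * grad_neg_log_prob(q_new)
    # Metropolis acceptance based on the change in total energy
    h_old = neg_log_prob(q) + 0.5 * p ** 2
    h_new = neg_log_prob(q_new) + 0.5 * p_new ** 2
    return q_new if rng.uniform() < np.exp(h_old - h_new) else q

samples, q = [], 0.0
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)

print("posterior mean ~", np.mean(samples), "posterior std ~", np.std(samples))
```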

18 pages, 3154 KiB  
Article
A Study on the Calibrated Confidence of Text Classification Using a Variational Bayes
by Juhyun Lee and Sangsung Park
Appl. Sci. 2022, 12(18), 9007; https://doi.org/10.3390/app12189007 - 08 Sep 2022
Cited by 3 | Viewed by 1200
Abstract
Recently, predictions based on big data have become more successful, and research using images or text can make a long-imagined future come true. However, the data often contain a lot of noise, or the model does not account for the data well, which increases uncertainty. Moreover, the gap between accuracy and likelihood is widening in modern predictive models, and this gap can increase the uncertainty of predictions. In particular, applications such as self-driving cars and healthcare can be directly threatened by these uncertainties. Previous studies have proposed methods for reducing uncertainty in applications using images or signals; however, although natural language processing is being actively studied, there has been insufficient discussion of uncertainty in text classification. Therefore, we propose a method that uses Variational Bayes to reduce the difference between accuracy and likelihood in text classification. This paper conducts an experiment using patent data in the field of technology management to confirm the proposed method's practical applicability. In the experiment, the calibrated confidence of the model was very small, ranging from a minimum of 0.02 to a maximum of 0.04. Furthermore, statistical tests at the 0.05 significance level showed that the proposed method was more effective at calibrating the confidence than before.
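
One common way to quantify the accuracy-confidence gap discussed in this abstract is the expected calibration error (ECE); the sketch below computes it over confidence bins with dummy predictions and is a generic illustration, not the paper's exact procedure.

```python
# Expected calibration error (ECE) over confidence bins, with dummy predictions.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap            # weight by the fraction of samples in the bin
    return ece

# Dummy example: predicted-class confidences and whether each prediction was correct
conf = [0.95, 0.80, 0.70, 0.99, 0.60, 0.85]
hit = [1, 1, 0, 1, 0, 1]
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```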

15 pages, 4702 KiB  
Article
Study on the Technology Trend Screening Framework Using Unsupervised Learning
by Junseok Lee, Sangsung Park and Juhyun Lee
Appl. Sci. 2022, 12(17), 8920; https://doi.org/10.3390/app12178920 - 05 Sep 2022
Cited by 2 | Viewed by 1422
Abstract
Outliers that deviate from a normal distribution are typically removed during the analysis process; in outlier detection, however, the patterns of outliers are treated as important information. This study proposes a technology trend screening framework based on a machine learning algorithm that uses outliers. The proposed method is as follows: first, we split the dataset by time into training and testing sets for training the Doc2Vec model. Next, we preprocess the patent documents using the trained model. The final outlier documents are then selected from the preprocessed documents by voting over the outliers extracted with the IQR rule, the three-sigma rule, and the Isolation Forest algorithm. Finally, the technical topics of the outlier documents are identified through a topic model. This study analyzes patent data on drones to illustrate the proposed method. The results show that, despite cumulative research on drone-related hardware and system technology, there is a general lack of research on the autonomous flight field.
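
The voting step over the three outlier detectors can be sketched as follows, with random vectors standing in for the Doc2Vec embeddings and conventional thresholds for the IQR and three-sigma rules; this is an assumption-laden illustration, not the authors' code.

```python
# Voting over three outlier detectors (IQR, three-sigma, Isolation Forest)
# applied to document vectors; the vectors are random stand-ins for Doc2Vec.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
doc_vecs = rng.normal(size=(300, 50))            # placeholder for Doc2Vec vectors

# Reduce each document to a single novelty score: distance from the centroid
scores = np.linalg.norm(doc_vecs - doc_vecs.mean(axis=0), axis=1)

# Rule 1: IQR rule on the scores
q1, q3 = np.percentile(scores, [25, 75])
iqr_flag = scores > q3 + 1.5 * (q3 - q1)

# Rule 2: three-sigma rule on the scores
sigma_flag = scores > scores.mean() + 3 * scores.std()

# Rule 3: Isolation Forest on the full vectors (-1 means outlier)
iso_flag = IsolationForest(random_state=0).fit_predict(doc_vecs) == -1

# Majority vote: a document is a final outlier if at least two detectors agree
votes = iqr_flag.astype(int) + sigma_flag.astype(int) + iso_flag.astype(int)
outlier_idx = np.where(votes >= 2)[0]
print("candidate trend documents:", outlier_idx)
```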

14 pages, 1105 KiB  
Article
Technology Commercialization Activation Model Using Imagification of Variables
by Youngho Kim, Sangsung Park and Jiho Kang
Appl. Sci. 2022, 12(16), 7994; https://doi.org/10.3390/app12167994 - 10 Aug 2022
Cited by 1 | Viewed by 2061
Abstract
Various institutions, such as universities and corporations, strive to commercialize technologies produced through R&D investment. The ideal way to commercialize a technology is to transfer it, recognizing the value of the developed technology. Technology transfer is the transfer of technology from R&D entities, such as universities, research institutes, and companies, to others, with the advantage of spreading research results and maximizing cost efficiency. In other words, if enough technology is transferred, it can be commercialized. Although many institutions have various support measures to assist in transferring technology, there is no substitute for quantitative, objective methods. To address this problem, this paper proposes a technology transfer prediction model based on the information found in patents. However, it is not realistic to include the information from all patents in such a method, so patterns related to technology transfer must be identified to select the patents that can be used in the predictive model. In addition, a method is needed to address the insufficient training data for the model: training data are limited because some technology transfer information is not disclosed, and little technology is transferred in new technology fields. The technology transfer prediction model proposed in this paper searches for hidden patterns related to technology transfer by imaging the patent information, which also allows image analysis models to be applied. Furthermore, augmenting the data can solve the problem of the lack of learning data for technology transfer. To examine whether the proposed model can be used in real industries, we collected patents related to artificial intelligence technology registered in the United States and conducted experiments. The experimental results show that the models trained on imaged patent information performed excellently. Moreover, it was shown that the data augmentation technique can be used when there are insufficient data on technology transfer.
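
As a loose illustration of what imaging (imagification of) patent variables might look like, the sketch below scales a hypothetical patent's numeric features into a small 2-D grid and creates noise-perturbed copies as naive augmentation; the feature list, grid layout, and augmentation scheme are all invented and are not the paper's method.

```python
# Toy "imagification" of patent variables plus naive noise-based augmentation.
import numpy as np

def imagify(features, side=4):
    """Scale features to [0, 1], pad to side*side values, and reshape to a 2-D grid."""
    x = np.asarray(features, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-9)
    n = min(len(x), side * side)
    padded = np.zeros(side * side)
    padded[:n] = x[:n]
    return padded.reshape(side, side)

def augment(image, n_copies=5, noise=0.05, seed=0):
    """Create perturbed copies of an image to enlarge a small training set."""
    rng = np.random.default_rng(seed)
    return [np.clip(image + rng.normal(0, noise, image.shape), 0, 1) for _ in range(n_copies)]

# Hypothetical patent variables: claims, citations, family size, IPC count, ...
patent_features = [12, 34, 5, 3, 2, 60, 8, 1]
img = imagify(patent_features)
augmented = augment(img)
print(img.shape, len(augmented))   # (4, 4) and 5 augmented copies
```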
