Machine Learning: From Tech Trends to Business Impact

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Processes".

Deadline for manuscript submissions: closed (24 August 2023) | Viewed by 10746

Special Issue Editors

Guest Editor
Engineering School of Carthage, University of Carthage, Carthage 1054, Tunisia
Interests: machine learning; data mining; big data architecture; spatial computing at scale

Guest Editor
1. NRL lab, University of Montreal, Montreal, QC 2900, Canada
2. Laboratoire Recherche Informatique Maisonneuve, College Maisonneuve, Montreal, QC 3800, Canada
Interests: internet of things; artificial intelligence; machine learning

Guest Editor
Department of Computer Engineering, Faculty of Sciences of Bizerta, University of Carthage, Carthage 1054, Tunisia
Interests: internet of things; artificial intelligence; wireless networks; QoS provisioning

Guest Editor
Laboratoire d’Informatique Paris Descartes, University of Paris-Descartes, 75006 Paris, France
Interests: large-scale data management and data quality; machine learning

Special Issue Information

Summary

Dear Colleagues,

This Special Issue will include extended versions of selected papers presented at the 2022 International Symposium on Networks, Computers and Communications (ISNCC 2022) and papers that originate from the public call for papers. 

ISNCC 2022 featured scientific papers presented in parallel tracks on Artificial Intelligence and Machine Learning, Data Science and Big Data Systems Engineering, Smart Applications, Security and Privacy, and Communications and Networking.

Overview

Machine learning (ML) is one of the most exciting fields of computing today. Over recent decades it has become an entrenched part of everyday life and has been applied successfully to practical problems in data mining, computer vision, biometrics, search engines, medical diagnostics, credit card fraud detection, securities market analysis, and many other domains across industry, finance, medicine, and business. Businesses incorporate machine learning into their core processes for a variety of strategic reasons: it can discover patterns and correlations, improve customer segmentation and targeting, and ultimately increase a business's revenue, growth, and market position. Twitter, for example, uses machine learning to curate better timelines for its users, and Facebook has introduced machine learning into its Messenger app so that chatbots can adapt based on user responses and interactions. As the technology continues to evolve, it will become more prevalent in business practice, and new applications will emerge that ultimately benefit even small businesses.

ML includes a wide range of learning algorithms, including linear regression, k-nearest neighbors, decision trees, support vector machines, neural networks and deep learning, boosted tree models, and so on. In practice, it is challenging to determine an appropriate architecture and parameters for an ML model so that the resulting model achieves sound performance in both learning and generalization. Practical applications of ML in business bring additional challenges, such as dealing with big, missing, distorted, and uncertain data. Interpretability is a paramount quality that ML methods should aim for if they are to be applied in practice: it allows us to understand how a model operates and increases confidence in its results. In addition, data visualization techniques are essential for communicating the insights obtained from analyzing large amounts of information.
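
As a concrete, minimal illustration of the algorithm families and interpretability concerns mentioned above, the following Python sketch (our own example, using scikit-learn on synthetic data rather than anything prescribed by this Special Issue) compares several classic learners by cross-validation and then inspects the feature importances of a boosted-tree model as a basic interpretability check.

```python
# Minimal sketch (illustration only): compare several classic ML algorithm families
# by cross-validation on synthetic data, then inspect the feature importances of a
# boosted-tree model as a simple interpretability check.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "support vector machine": SVC(kernel="rbf"),
    "boosted trees": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:25s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")

# A first, rough interpretability signal: which features drive the boosted-tree model?
gbt = GradientBoostingClassifier(random_state=0).fit(X, y)
top = sorted(enumerate(gbt.feature_importances_), key=lambda t: -t[1])[:5]
print("top features (index, importance):", top)
```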

Topics of Interest

This Special Issue is intended to bring together diverse, novel, and impactful research on machine learning and its effect on business performance. Potential topics include, but are not limited to, the following:

  • New algorithms with empirical and theoretical studies;
  • Experimental and/or theoretical studies yielding new insights into the design and behavior of learning in intelligent systems;
  • Accounts of applications of existing techniques that shed light on the strengths and weaknesses of the methods;
  • Development of new analytical frameworks handling big data or specific data formats;
  • Extremely well-written surveys of existing work.

Dr. Rim Moussa
Dr. Jihene Rezgui
Prof. Dr. Tarek Bejaoui
Dr. Soror Sahri
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)

Research

17 pages, 681 KiB  
Article
Can Triplet Loss Be Used for Multi-Label Few-Shot Classification? A Case Study
by Gergely Márk Csányi, Renátó Vági, Andrea Megyeri, Anna Fülöp, Dániel Nagy, János Pál Vadász and István Üveges
Information 2023, 14(10), 520; https://doi.org/10.3390/info14100520 - 23 Sep 2023
Cited by 1 | Viewed by 1301
Abstract
Few-shot learning is a subfield of deep learning that is currently a major focus of research. This paper addresses the research question of whether a triplet-trained Siamese network, initially designed for multi-class classification, can effectively handle multi-label classification. We conducted a case study to identify any limitations in its application. The experiments were conducted on a dataset containing Hungarian legal decisions of administrative agencies in tax matters, belonging to a major legal content provider. We also tested how different Siamese embeddings compare when classifying a previously unseen label in binary and multi-label settings. We found that triplet-trained Siamese networks can be applied to perform this classification, but with a sampling restriction during training. We also found that overlap between labels affects the results negatively. The few-shot model, seeing only ten examples for each label, provided results competitive with models trained on tens of thousands of court decisions using tf-idf vectorization and logistic regression.
(This article belongs to the Special Issue Machine Learning: From Tech Trends to Business Impact)
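
As a rough, hypothetical sketch of the core technique this paper investigates (and not the authors' implementation), the following PyTorch snippet trains a small Siamese-style encoder with a triplet margin loss on placeholder data; the architecture, dimensions, and batches are illustrative assumptions only.

```python
# Illustrative sketch only (not the paper's code): a small Siamese-style embedding
# network trained with a triplet margin loss in PyTorch.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an input feature vector (e.g., a document representation) to an embedding."""
    def __init__(self, in_dim=300, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim))

    def forward(self, x):
        # L2-normalise so distances are comparable across batches
        return nn.functional.normalize(self.net(x), dim=-1)

encoder = Encoder()
loss_fn = nn.TripletMarginLoss(margin=0.5)
optim = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Hypothetical mini-batch: anchor and positive share a label, negative does not.
anchor, positive, negative = (torch.randn(32, 300) for _ in range(3))

for _ in range(10):  # a few illustrative training steps
    optim.zero_grad()
    loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
    loss.backward()
    optim.step()

# At inference time, a new example is assigned the labels of its nearest embedded neighbours.
```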

11 pages, 448 KiB  
Article
Feature Selection Engineering for Credit Risk Assessment in Retail Banking
by Jaber Jemai and Anis Zarrad
Information 2023, 14(3), 200; https://doi.org/10.3390/info14030200 - 22 Mar 2023
Cited by 2 | Viewed by 3230
Abstract
In classification, feature selection engineering helps choose the most relevant data attributes to learn from. It determines the set of features to be rejected on the assumption that they contribute little to discriminating the labels. The effectiveness of a classifier depends largely on the set of selected features. In this paper, we identify the best features to learn from in the context of credit risk assessment in the financial industry. Financial institutions run the risk of approving the loan request of a customer who may default later, or of rejecting the request of a customer who could have repaid their debt without default. We propose a feature selection engineering approach to identify the main features to consider when assessing the risk of a loan request. We use different feature selection methods, including univariate feature selection (UFS), recursive feature elimination (RFE), feature importance using decision trees (FIDT), and the information value (IV). We implement two variants of the XGBoost classifier on the open dataset provided by the Lending Club platform to evaluate and compare the performance of the different feature selection methods. The research shows that the four feature selection techniques converge on the most relevant features.
(This article belongs to the Special Issue Machine Learning: From Tech Trends to Business Impact)
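
For readers unfamiliar with the feature selection families named in this abstract, the generic scikit-learn sketch below illustrates univariate selection, recursive feature elimination, and tree-based importance on synthetic data; it is our own illustration and does not reproduce the paper's code or the Lending Club dataset.

```python
# Generic illustration of the feature selection families mentioned above (not the
# paper's implementation; synthetic data stands in for the Lending Club dataset).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8, random_state=0)

# Univariate feature selection (UFS): score each feature independently.
ufs = SelectKBest(score_func=f_classif, k=8).fit(X, y)
print("UFS keeps:", np.flatnonzero(ufs.get_support()))

# Recursive feature elimination (RFE): repeatedly drop the weakest features of a linear model.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=8).fit(X, y)
print("RFE keeps:", np.flatnonzero(rfe.get_support()))

# Feature importance from trees (FIDT): rank features by impurity-based importances.
gbt = GradientBoostingClassifier(random_state=0).fit(X, y)
print("Tree-importance top 8:", np.argsort(gbt.feature_importances_)[::-1][:8])
```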

26 pages, 4075 KiB  
Article
A Method for UWB Localization Based on CNN-SVM and Hybrid Locating Algorithm
by Zefu Gao, Yiwen Jiao, Wenge Yang, Xuejian Li and Yuxin Wang
Information 2023, 14(1), 46; https://doi.org/10.3390/info14010046 - 12 Jan 2023
Cited by 3 | Viewed by 2431
Abstract
In this paper, aiming at the severe problems of UWB positioning in NLOS-interference circumstances, a complete method is proposed for NLOS/LOS classification, NLOS identification and mitigation, and a final accurate UWB coordinate solution, through the integration of two machine learning algorithms and a hybrid localization algorithm; the result is called the C-T-CNN-SVM algorithm. This algorithm consists of three basic processes: an LOS/NLOS signal classification method based on an SVM, an NLOS signal recognition and error elimination method based on a CNN, and an accurate coordinate solution based on the hybrid weighting of the Chan–Taylor method. Finally, the validity and accuracy of the C-T-CNN-SVM algorithm are demonstrated through a comparison with traditional and state-of-the-art methods. (i) Focusing on four main prediction errors (range measurements, maxNoise, stdNoise, and rangeError), the standard deviation decreases from 13.65 cm to 4.35 cm, while the mean error decreases from 3.65 cm to 0.27 cm, and the errors are approximately normally distributed, demonstrating that after training an SVM for LOS/NLOS signal classification and a CNN for NLOS recognition and mitigation, the accuracy of UWB range measurements can be greatly increased. (ii) After target positioning, the proposed method achieves a one-dimensional X-axis and Y-axis accuracy within 175 mm and a Z-axis accuracy within 200 mm, a 2D (X,Y) accuracy within 200 mm, and a 3D accuracy within 200 mm, with most errors falling within (100 mm, 100 mm, 100 mm). (iii) Compared with traditional algorithms, the proposed C-T-CNN-SVM algorithm performs better in location accuracy, cumulative distribution function (CDF) of the error, and root-mean-square error (RMSE): the 1D, 2D, and 3D accuracy of the proposed method is 2.5 times that of the traditional methods. When the location error is less than 10 cm, the CDF of the proposed algorithm only reaches a value of 0.17; when the positioning error reaches 30 cm, only the CDF of the proposed algorithm remains in an acceptable range. The RMSE of the proposed algorithm remains ideal when the distance error is greater than 30 cm. The results of this paper, and the idea of combining machine learning methods with classical locating algorithms for improved UWB positioning under NLOS interference, could meet the growing need for wireless indoor locating and communication, which indicates the possibility of practically deploying such a method in the future.
(This article belongs to the Special Issue Machine Learning: From Tech Trends to Business Impact)
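
The full C-T-CNN-SVM pipeline is beyond a short snippet, but the sketch below illustrates only its first stage, LOS/NLOS classification with an SVM, on hypothetical, synthetically generated range and noise features; the feature values and model settings are our assumptions, not the paper's.

```python
# Sketch of only the first stage of such a pipeline (LOS/NLOS classification with an
# SVM); the CNN-based error mitigation and Chan-Taylor solver are not reproduced here.
# Feature names and data distributions are hypothetical placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-measurement features: [range, max noise, std noise, range error]
los = rng.normal([5.0, 0.2, 0.05, 0.03], [2.0, 0.05, 0.01, 0.02], size=(n, 4))
nlos = rng.normal([5.5, 0.5, 0.15, 0.25], [2.0, 0.10, 0.03, 0.08], size=(n, 4))
X = np.vstack([los, nlos])
y = np.r_[np.zeros(n), np.ones(n)]  # 0 = LOS, 1 = NLOS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)
print("LOS/NLOS classification accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```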

13 pages, 549 KiB  
Article
CryptoNet: Using Auto-Regressive Multi-Layer Artificial Neural Networks to Predict Financial Time Series
by Leonardo Ranaldi, Marco Gerardi and Francesca Fallucchi
Information 2022, 13(11), 524; https://doi.org/10.3390/info13110524 - 02 Nov 2022
Cited by 6 | Viewed by 2363
Abstract
When analyzing a financial asset, it is essential to study the trend of its time series. It is also necessary to examine its evolution and activity over time in order to statistically analyze its possible future behavior. Both retail and institutional investors base their trading strategies on these analyses. One of the most widely used techniques for studying financial time series is to analyze their dynamic structure using auto-regressive models, simple moving average models (SMA), and mixed auto-regressive moving average models (ARMA). These techniques, unfortunately, do not always provide appreciable results, either at a statistical level or in terms of the Risk-Reward Ratio (RRR); above all, each system has its pros and cons. In this paper, we present CryptoNet, a system based on time series trend extraction that exploits the potential of artificial intelligence (AI) and machine learning (ML). Specifically, we focus on time series trend extraction by developing an artificial neural network, trained and tested on two well-known crypto-currencies: Bitcoin and Ether. CryptoNet's learning algorithm improved on the classic linear regression model by up to 31% in terms of MAE (mean absolute error). Results from this work should encourage the use of machine learning techniques in sectors classically reluctant to adopt non-standard approaches.
(This article belongs to the Special Issue Machine Learning: From Tech Trends to Business Impact)
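
As a minimal, illustrative sketch of auto-regressive prediction with a neural network (not CryptoNet itself), the following Python example builds lagged features from a synthetic random-walk price series, fits a small MLP, and compares its MAE against a linear-regression baseline; the data and hyperparameters are placeholders.

```python
# Minimal sketch of an auto-regressive neural predictor (illustration only):
# build lagged features from a synthetic price series, fit an MLP, and report MAE
# against a naive linear-regression baseline.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(0, 1, 1500)) + 100  # synthetic random-walk "price"

lags = 10
# Row j holds price[j] .. price[j+lags-1]; the target is the next value price[j+lags].
X = np.column_stack([price[i:len(price) - lags + i] for i in range(lags)])
y = price[lags:]

split = int(0.8 * len(y))  # simple chronological train/test split
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0).fit(X_tr, y_tr)
lin = LinearRegression().fit(X_tr, y_tr)

print("MLP MAE:   ", mean_absolute_error(y_te, mlp.predict(X_te)))
print("Linear MAE:", mean_absolute_error(y_te, lin.predict(X_te)))
```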
