Review

Recent Trends in AI-Based Intelligent Sensing

by Abhishek Sharma 1,†, Vaidehi Sharma 1,†, Mohita Jaiswal 1,†, Hwang-Cheng Wang 2,†, Dushantha Nalin K. Jayakody 3,*, Chathuranga M. Wijerathna Basnayaka 3,4,† and Ammar Muthanna 5
1 Department of Electronic and Communication Engineering, The LNM Institute of Information Technology, Jaipur 302031, India
2 Department of Electronic Engineering, National Ilan University, Yilan 260007, Taiwan
3 COPELABS, Lusófona University, 1749-024 Lisbon, Portugal
4 Centre for Telecommunication Research, School of Engineering, Sri Lanka Technological Campus, Padukka 10500, Sri Lanka
5 Department of Applied Probability and Informatics, Peoples’ Friendship University of Russia (RUDN University), Miklukho-Maklaya St, 117198 Moscow, Russia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Electronics 2022, 11(10), 1661; https://doi.org/10.3390/electronics11101661
Submission received: 31 January 2022 / Revised: 3 May 2022 / Accepted: 6 May 2022 / Published: 23 May 2022
(This article belongs to the Topic Artificial Intelligence in Sensors)

Abstract: In recent years, intelligent sensing has gained significant attention because of its autonomous decision-making ability to solve complex problems. Today, smart sensors complement and enhance the capabilities of human beings and have been widely embraced in numerous application areas. Artificial intelligence (AI) has made astounding growth in the domains of natural language processing, machine learning (ML), and computer vision. AI-based methods enable a computer to learn and monitor activities by sensing the source of information in a real-time environment. The combination of these two technologies provides a promising solution for intelligent sensing. This survey provides a comprehensive summary of recent research on AI-based algorithms for intelligent sensing. This work also presents a comparative analysis of algorithms, models, influential parameters, available datasets, applications, and projects in the area of intelligent sensing. Furthermore, we present a taxonomy of AI models along with cutting-edge approaches. Finally, we highlight challenges and open issues, followed by future research directions pertaining to this exciting and fast-moving field.

1. Introduction

The term “Smart Sensor” was coined in the 1970s [1]. The word “Smart” refers to the capability of microelectronic devices to exhibit operative intelligence features. The improvements of the 1980s, especially in sensor technology, brought better signal extraction, real-time data transfer, and adaptability to the physical environment, making it possible to acquire data that had previously been inaccessible. In the 1990s, intelligence was added to devices and more promising results were observed in this area. This evolution was driven by advances in computational technologies. Such intelligent devices possess three main features: (i) extraction of signal information, (ii) signal processing, and (iii) instruction execution. Applied intelligence advanced in parallel: machine learning progressed in the 1980s, followed by deep learning in the 1990s. Artificial intelligence covers all the important technological developments in this domain, including RNNs, CNNs, transfer learning, continual AI, and so on. Thus, smart sensors and AI are integrated to form intelligent sensing for the development of smart applications. It is worth noting that nowadays sensors are not just used to extract information but are also involved in more complex tasks, such as executing different instructions based on patterns in the data sequence. Indeed, we encounter a vast amount of data in different forms on a daily basis. To extract useful information from this plethora of data, smart sensors are designed to perceive the environment, make decisions, and draw conclusions. Intelligent sensing is important for various reasons and can be applied in areas such as self-driving cars, autonomous aerial drones, and Amazon Kiva systems.
In light of recent successes, AI is a trending field in the research areas of management science [2], operational research [3], and technology [4]. There is a broad array of applications of AI, ranging from expert systems to computer vision, which improve the everyday lives of ordinary people. For example, the authors of [5] investigated the application of machine learning to medicine and reported on the diagnostic performance of, and cautions about, machine learning in dermatology, radiology, pathology, and microscopy. The authors of [6] examined the serious issues of modern transport systems and how AI techniques can be used to tackle them. Some expect the recent improvements in AI algorithms and computer hardware to approach or exceed human performance on specific tasks in the near future. Current research on AI, including machine learning (ML) and deep learning (DL), uses real-time algorithms to enable machines to learn information from sensing parameters. Recently, several AI-based approaches have witnessed rapid growth due to their capability to learn feature representations from sensed data for decision-making and control problems [7]. Furthermore, a critical aspect of AI is the design of efficient learning algorithms that unlock new possibilities in the field of intelligent sensing. Algorithms based on AI have been successfully utilized in a myriad of areas such as mobile applications [8], social media analytics [9], healthcare [10], agriculture [11], manufacturing processes [12], logistics [13], and environmental engineering [14].

1.1. Related Works

Many researchers have conducted surveys of intelligent sensing models to tackle challenging issues in particular applications and provide solutions to cope with existing vulnerabilities. However, most existing survey articles on intelligent sensing have not explicitly focused on new AI- and ML/DL-based methods for real-time applications and the associated research challenges. The survey in [15] was conducted from two viewpoints: the first is intelligent AI-based approaches to solve issues related to wireless sensor networks (WSNs), and the second is the design of intelligent applications that incorporate sensor networks. In [16], the authors discussed the research directions of AI 2.0 and new models based on AI technology; new forms of intelligent manufacturing systems are also explored. In [17], various AI algorithms implemented as estimators (i.e., software sensors) in chemical operating units are reviewed, their advantages are shown, and practical implications and limitations for the proper design of AI-based estimators are discussed. In [18], the authors focused mainly on different intelligent techniques used in vehicular applications and listed research challenges and issues in the integration of AI and vehicular systems. In [19], the authors discussed AI algorithms coupled with gas sensor arrays (GSAs) embedded in robots as electronic noses to explore potential applications such as explosive gas detection, environmental monitoring, and beverage and food production and storage. They also discussed the types of gas sensors, their limitations, and possible solutions.
Other applications based on intelligent sensing were given in [20,21], which focused on the use of ML and AI technology to fight the coronavirus pandemic. These studies used AI-based embedded sensors to track the spread of COVID-19 infections and side effects, thereby helping health professionals diagnose common symptoms of the virus. The article [22] surveys the future of healthcare technologies for the healthcare Internet of Things (H-IoT) and summarizes the features of H-IoT systems based on generic IoT systems.
Several ML and DL methods were reviewed in [23] for big data applications, together with open issues and research directions. Different ML-based algorithms to address issues in WSNs (i.e., congestion control, synchronization, and energy harvesting) were surveyed in [24] and their drawbacks were discussed. An overview of current data mining and ML techniques employed for activity recognition (AR) was presented in [25]. The authors also discussed how an activity is captured using different sensors. In [26], the authors reviewed how recent ML and DL algorithms can be coupled with sensor technologies for particular sensing applications. They also compared a new ML-based smart sensing system with a conventional sensing system and discussed its future opportunities. A comprehensive survey of various DL algorithms that can be applied to sensor data for predictive maintenance was provided in [27]. Ref. [28] presents a comprehensive survey of the applications of DL models at different network layers, including the physical, data link, and routing layers. The literature review in [29] covers the ML algorithms used to solve WSN issues in the period 2002–2013 and investigates ML solutions for enhancing functional behaviors of WSNs, such as quality of service (QoS) and data integrity. Table 1 summarizes some recent survey articles in the field of intelligent sensing along with their advantages and limitations.

1.2. Overview of Intelligent Sensing Elements

A smart sensor is a sensor that can detect an object’s information and can learn, judge, and process the data in the form of signals. It can calibrate itself automatically, collect data, and apply compensation. In the 1980s, the effort was focused on integrating computer memory, a signal processing circuit, an interface circuit, and a microprocessor onto one chip so that the sensor could achieve a certain degree of AI capability [39]. Smart sensors have emerged due to technological demands and feasibility [40]. The primary source is the sensing element, which can trigger the sensing component to deliver a self-test facility; for this, a reference voltage is applied to monitor the response of the sensor. Amplification is necessary, as most sensors produce signals that are lower than the signal levels required by a digital processor. For example, a piezoelectric sensor requires charge amplification, while resistive sensors need instrumentation amplification. Analog filtering is used to prevent aliasing in the data conversion stage.
Data conversion is associated with the digitization process, wherein analog signals are converted into discrete signals [41]. In this stage, input from sensors is fed into the data conversion unit to implement different forms of compensation. Signals in the frequency domain, such as those from resonant sensors, do not need conversion and can be fed directly into a digital system. Digital processors are required to implement sensor compensation, such as cross-sensitivity correction, linearization, and offset removal, as well as pattern recognition methods. Finally, the data communication unit sends signals to the sensor bus and handles the transmission and reception of data.
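The following Python sketch illustrates the digitization and compensation stages described above, assuming an ideal ADC; the gain, resolution, and calibration coefficients are illustrative assumptions rather than values from any particular device.

```python
import numpy as np

def digitize(analog_signal, gain=100.0, v_ref=3.3, n_bits=12):
    """Amplify an analog sensor signal and quantize it with an ideal n-bit ADC."""
    amplified = np.clip(analog_signal * gain, 0.0, v_ref)
    levels = 2 ** n_bits
    return np.round(amplified / v_ref * (levels - 1)).astype(int)

def compensate(codes, offset_code=5, scale=1.02):
    """Apply simple offset removal and first-order sensitivity correction."""
    return (codes - offset_code) * scale

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 1000)
    raw = 0.01 * (1.0 + np.sin(2 * np.pi * 5 * t))  # millivolt-level sensor output
    codes = digitize(raw)
    corrected = compensate(codes)
    print(codes[:5], corrected[:5])
```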

1.3. Contributions of This Work

To our knowledge, few reviews or surveys of intelligent sensing have taken into consideration all of the key aspects, specifically projects, application areas, state-of-the-art approaches, datasets, and comparative analysis of existing research works. Thus, this is the first work that brings together the various algorithms, approaches, and applications in these domains. Since all these techniques are essential for the understanding of ML and AI, it is crucial to highlight their interconnection with intelligent sensing and its challenges. This work provides a systematic survey to understand and expand the perspective of AI technologies in intelligent sensing through different approaches, in order to inspire and promote further research in the relevant areas.
The main contributions of this work are listed as follows:
  • Comprehensive discussion of AI techniques, specifically ML and DL algorithms for intelligent sensing—The most promising AI techniques, ML and DL algorithms, are briefly reviewed in the context of intelligent sensing. We also discuss the key factors that affect the efficiency of intelligent sensing and algorithms. Furthermore, we highlight the lessons learned and pitfalls when ML and DL methods are used for intelligent sensing.
  • In-depth review of practical applications and datasets in intelligent sensing—We discuss a broad array of applications that have used ML and DL algorithms and also include a case study of intelligent sensing for pandemic monitoring and diagnosis. We present various publicly available datasets that can be used in different domains of intelligent sensing.
  • Noteworthy projects based on the trending technologies—We enumerate several ongoing research projects around the world that make use of and contribute toward intelligent sensing.
  • Challenges and future research directions—We highlight and discuss research challenges that need serious attention, along with possible future directions for the successful merging of AI and intelligent sensing technologies.
Table 2 shows the research and review works in the area of intelligent sensing alongside the novelty components presented in this work. It is observed that most of the work available in this domain is application-specific. The work presented in this paper covers an in-depth review of various components such as the essential elements of intelligent sensing, machine learning models, influential projects, datasets, and current technology trends such as future citizenship, explainable AI, 6G and beyond, healthcare, and the usefulness of intelligent sensing in the pandemic.
The paper is structured as follows. In Section 2, an overview of recent learning models based on AI and used for intelligent sensing applications is presented. Key parameters that affect the performance of intelligent sensing are also discussed. Section 3 describes the datasets used for intelligent sensing. Section 4 focuses on the numerous applications of intelligent sensing and also presents lessons learned in relation to AI techniques. The key challenges and future research opportunities are presented in Section 5. Several ongoing research projects based on AI technologies and intelligent sensing are summarized in Section 6. Finally, Section 7 concludes the paper. Figure 1 illustrates the structure of the paper.

2. AI Methods for Intelligent Sensing

In this section, an overview of ML and DL algorithms from an intelligent sensing perspective is presented. The aim of this section is to highlight learning algorithms that are widely used in many real-time applications. Furthermore, parameters affecting the performance of intelligent sensing are also discussed. This section concludes with lessons learned.

2.1. AI-Based Algorithms/Models in Intelligent Sensing

A machine that is able to make decisions on its own is said to possess AI. There is a broad spectrum of applications for AI, ranging from machine learning to robotics. By combining current advancements in machine and deep learning, huge amounts of data from various sources are analyzed with AI to identify patterns and make intelligent predictions [51]. However, recent advances in artificial intelligence systems and robotics still require more research to solve complex problems. The tremendous growth in AI has ushered in a wave of applications using sensors, and as a result, the market demand for intelligent sensing has increased. AI-based analysis of sensor signals provides robust predictions and classifications. Hence, intelligent sensing is a promising direction for AI, in which machines can recognize human behavior and emotions. Although some prior works have provided an in-depth summary of AI and ML techniques in particular application areas, this survey addresses AI- and ML-based intelligent sensing from a perspective that has not been explored in other works. We also identify current problems that have limited real-world implementations. This will provide helpful guidelines to researchers and practitioners interested in intelligent sensing.

2.2. Machine Learning Algorithms/Models in Intelligent Sensing

In the last few years, the tremendous growth of ML-based approaches has expanded the research area of intelligent sensing. Generally, ML can be considered a subset of AI that learns from data to solve a specific task. In this subsection, a brief overview of existing ML algorithms that improve the functioning of sensing systems is presented, together with their advantages and disadvantages. Various scenarios portraying how machine learning methods are applied in intelligent sensing are depicted in Figure 2. ML algorithms are divided into supervised, semi-supervised, unsupervised, and reinforcement learning; a minimal code sketch of supervised classification on sensor features is given after the list below.
  • Supervised learning-based intelligent sensing—Supervised learning deals with known, labeled data and is divided into two types: classification and regression. This approach has been successfully applied for many years in the fields of image classification, fraud detection, medical diagnosis, weather forecasting, market forecasting, and life expectancy estimation. In [52], ECG data are collected via wearable sensors, which detect heartbeats automatically, and a supervised learning approach is used for arrhythmia classification. An artificial haptic neuron system is fabricated in [53]. The system comprises a Nafion-based memristor and a piezoelectric sensor. The sensory receptor converts an external stimulus into an electric signal, and the memristor is used for further processing of the data collected from the sensor. A supervised learning method is implemented for the recognition of English letters by placing the sensor on a finger joint. A novel methodology proposed in [54] uses supervised learning to resolve cash-tag collisions and yields high classification accuracy for companies listed on the London Stock Exchange. A hybrid model that combines ML and game theory is proposed in [55] to solve issues related to network selection in ultra-dense heterogeneous networks.
    • K-Nearest Neighbors (K-NN) is an effective classification algorithm used for large datasets. Here, K represents the number of training samples that are near the test sample in the feature space [56]. In [57], a machine learning-based K-NN approach is used for load classification by collecting data from various smart plug sensors and other devices.
    • Support Vector Machine (SVM) is mainly used to categorize data attributes between classes by constructing separating hyperplanes that minimize the classification error [58]. For example, Ref. [59] introduces a danger-pose detection system based on Wi-Fi devices that is used to monitor a bathroom while preserving privacy. A machine learning-based detection approach usually requires a large amount of data collected in target scenarios, which is difficult to obtain for danger situations. This work therefore employed a machine learning-based anomaly-detection technique that requires only a small amount of data from anomalous conditions. The researchers first extracted the amplitude and phase shift from Wi-Fi Channel State Information (CSI) in order to detect low-frequency components associated with human activities. Static and dynamic features were then derived from the CSI changes over time. Finally, the static and dynamic characteristics are input into a one-class SVM, which is employed as an anomaly-detection method to determine whether a person is not in the bathtub, is bathing safely, or is in an unsafe situation.
    • A Decision Tree (DT) model consists of branches and nodes, wherein each node represents a test on a feature, and each branch carries a value of that test used to classify a sample [60]. A decision tree-based approach was presented in [61] for an intelligent transportation system (ITS). LIDAR sensors obtain point cloud data, which are then projected onto the XOY plane. After that, the images are classified into road and background grids for monitoring road traffic.
    • Ensemble Learning (EL) is a method based on combining the outputs of basic classification algorithms to boost classification performance. It is robust to the data overfitting problem and typically performs better than a single classifier [62]. This method is proposed in [63], where soft sensors are used to collect data to predict the composition, flow rate, and other features of the product, e.g., fatty acid methyl esters (FAME), in the production of biodiesel from vegetable oil.
    • Random Forest (RF) combines several randomly constructed DTs to form a model with improved overall results [64]. A random forest-based classifier is proposed in [65] for estimating the content of heavy metals in agricultural soil using hyperspectral sensor data and is shown to reduce computational cost and time.
  • Unsupervised learning-based intelligent sensing—Due to the large amount of unlabeled data in everyday life, researchers have emphasized unsupervised learning-based algorithms for intelligent sensing applications. This category includes dimensionality reduction, generative networks, and clustering. Unsupervised learning-based intelligent sensing is proposed in [66] and applied to real-time environment sensing to intelligently detect rare event instances. An unsupervised clustering-based method is introduced in [67] to describe an individual’s behavioral pattern by analyzing 100 days of unlabeled sensor data collected from the homes of 17 older adults and extracting information about their day-to-day activities at different times. To detect changes in Landsat images, unsupervised learning is used in [68] with mean-shift clustering and a hybrid wavelet transform under the Multi-Objective Particle Swarm Optimization (MO-PSO) framework.
  • Semi-supervised intelligent sensing—This method deals with a combination of labeled and unlabeled data. Semi-supervised methods are used to reduce the complexity of labeling all data in large datasets. A robust model based on a semi-supervised approach is proposed in [69] to warn of aircraft faults during UAV flight by sensing real-time data, such as angular velocity and pitch angle, from flight sensors, dramatically reducing manual work. To detect faults in Additive Manufacturing (AM) products, a semi-supervised method with a small amount of labeled data and a large amount of unlabeled data is explored in [70].
  • Reinforcement learning-based intelligent sensing—In the context of AI, reinforcement learning learns to make a sequence of decisions by interacting with its environment. One successful application of this approach is controlling autonomous cars by training the model. A deep reinforcement learning-based multi-sensor tracking fusion is proposed in [71] for vehicle tracking by learning on fused data from different sensors (camera and LIDAR). An intelligent sensing-based approach is introduced in [47] to autonomously monitor bridge conditions by collecting data from sensor nodes and making decisions using the reinforcement learning method. A novel approach based on YOLO V3 is proposed in [72] for multi-object tracking based on multi-agent deep reinforcement learning; this approach performs better in terms of precision, accuracy, and robustness. A routing protocol built on reinforcement learning is developed in [73] to find an optimal routing path for data transmission in a wireless network.
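As a minimal sketch of the supervised setting referred to above, the following Python example trains K-NN and SVM classifiers (via scikit-learn, assumed to be available) on synthetic features standing in for real sensor readings; the feature dimensions and labels are purely illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for labeled sensor features (e.g., statistics extracted
# from wearable or Wi-Fi CSI measurements); real data would replace this.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # two activity classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

for name, clf in [("K-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name} test accuracy: {acc:.2f}")
```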
Table 3 shows a comparison of several ML and DL algorithms used in different areas of intelligent sensing. ML is a branch of AI that advocates the idea of acquiring the right data so that a machine can learn how to solve a particular problem by itself. The rise of ML is due to the availability of large datasets, and ML algorithms are adopted in the field of intelligent sensing to create smart devices that can take actions based on what they sense from the environment. With the implementation of ML in sensors, the efficiency and robustness of smart sensing applications can reach the next level. Using sensor data, ML algorithms enable more robust predictions and classifications than conventional physics-based models, and AI is expected to be added to devices over time so that they can adapt to new circumstances. Therefore, the use of machine learning, including deep learning algorithms, is appropriate for performing challenging tasks in intelligent sensing, as shown in Figure 2.
The availability of datasets and the invention of new algorithms have increased the usage of ML and DL in the last few years. The supervised learning method has been used in numerous applications, such as object recognition, speech recognition, and spam detection. It predicts the value of one or more output variables (continuous or discrete) by observing input variables. The unsupervised learning method is generally used for gene clustering, social media analysis, and market research; its main focus is the analysis of unlabeled data. Semi-supervised learning is a hybrid of the supervised and unsupervised learning methods, used to solve problems in which a few data points are labeled and most of the data are unlabeled. Reinforcement learning (RL) is used in applications such as finance, inventory management, and robotics, where the purpose is to learn a policy, i.e., a mapping from states of the environment to appropriate actions.

2.3. Deep Learning Algorithms/Models in Intelligent Sensing

Deep learning now dominates industry and research in the development of a range of smart-world systems, for good reason. DL has shown considerable potential in distilling huge datasets into accurate predictive and transformational output, greatly facilitating human-centered smart systems. This section discusses deep learning models for intelligent sensing; a minimal sketch of a sequence model for sensor data is given after the list below.
  • Convolutional Neural Network—CNN is a robust supervised DL algorithm that often outperforms other DL algorithms on image-like data. IoT security is one of CNN’s applications, where features of the security data can be learned automatically from the sensors [82]. Deep CNN-based learning is proposed in [83] to recognize human emotions using electrodermal activity (EDA) sensors, which capture emotional patterns from a group of persons. The paper [84] proposed a system that detects the physical activity of older people from wearable sensors. For rotation-invariant features, each feature triplet is extracted from the X, Y, and Z axes and reduced to one feature represented by a 3D vector. Similar works also achieve high accuracy in studies of younger people.
  • Recurrent Neural Network—RNN is an important DL algorithm in which the output of the neural network depends on both present and past inputs. It is used to handle sequential inputs, which can be speech, text, or sensor data [85]. An RNN-based approach is discussed in [86] to interpolate sparse geomagnetic data from lost traces and reduce the time taken by linear interpolation approaches. The study in [87] discussed a mobile positioning method using RNNs to analyze the strength of received signals. The authors experimented with training two RNNs separately for estimating latitude and longitude, which resulted in overfitting. An RNN-based learning model is proposed in [88] to monitor underwater sensor networks in real time, which improves the delay and reduces the cost of packet transmission.
  • Generative Adversarial Network—A GAN comprises two models, a generator and a discriminator, which are trained in tandem via an adversarial process. These networks have been applied to the security of IoT systems [89]. A conditional GAN-based DL method is presented for the reconstruction of compressed sensing magnetic resonance imaging (CS-MRI) from compressed MR data [90]. In [91], the authors proposed a GAN-based method to generate X-ray prohibited-item images with different item poses. According to the paper, the quality of the generated images compares favorably with DC-GAN and WGAN-GP. After the images are generated, they are added to the real images, and the Fréchet inception distance (FID) is used to evaluate the performance of the GANs.
  • Long Short-Term Memory—LSTM is a type of recurrent neural network intended to model temporal sequences and their long-range dependencies more accurately than conventional RNNs [92]. The LSTM comprises units called memory blocks in the recurrent hidden layers. The memory blocks contain memory cells with self-connections that store the temporal state of the network, in addition to special multiplicative units called gates that control the flow of information. A DL-based approach is used in [93] for emotion classification, dealing with a large number of sensor signals from different modalities. The results show that ad hoc feature extraction may not be necessary, since DL models extract high-level features automatically.
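As a minimal sketch of the sequence models mentioned above, the following PyTorch example defines an LSTM classifier for windowed sensor time series; the number of channels, window length, hidden size, and class count are assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn

class SensorLSTM(nn.Module):
    """LSTM classifier for windowed time-series sensor data."""
    def __init__(self, n_channels=3, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, channels)
        _, (h_n, _) = self.lstm(x)     # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])      # class logits from the last hidden state

model = SensorLSTM()
dummy = torch.randn(8, 50, 3)          # 8 windows, 50 time steps, 3 sensor axes
print(model(dummy).shape)              # torch.Size([8, 2])
```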

2.4. Parameters Affecting the Performance of Intelligent Sensing

This subsection presents a review of some of the parameters that affect the performance of intelligent sensing. Intelligent sensing methods have been promising with state-of-the-art results in several areas, such as healthcare, image segmentation, agriculture, soft sensors, etc. The use of sensor systems in industrial, scientific, and consumer equipment is extensive and is continuously increasing in domains like automation. Essentially, industrial information revolutions need more sensors of every kind. The focus of the sensor system is to provide reliable signals and evaluate information. The smart sensing units include a sensing element and proper signal processing function within the same package.
Table 4 gives a list of parameters that affect the performance of intelligent sensing based on the results reported in the literature. Key information includes the title and year of publication of each paper and the parameters that influence the performance of the various intelligent sensing approaches, such as temperature, accuracy, cost, time, occupancy, and dependency. One of these parameters is feature extraction in image recognition. Several pre-processing techniques are used to enhance certain features and remove unnecessary data, including digital spatial filtering, contrast enhancement, gray level distribution linearization, and image subtraction [94]. Measurement of redundancy in test samples is attempted to minimize test loss, which can reduce test maintenance costs and also ensure the integrity of test samples [95]. Evaluating ML algorithms is an important part of any project, and accuracy is one of the essential parameters for judging the performance of a trained model. Classification accuracy is defined as the fraction of correct predictions relative to the total number of input samples.
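Written as a formula, this definition is

$$\text{Accuracy} = \frac{\text{number of correct predictions}}{\text{total number of input samples}}.$$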
The most crucial aspect of this matter is the collection of data from multiple sources. The data usually go through several stages of pre-processing to put them into a presentable form. Intelligent sensing approaches are generally tied to the technological application in which they are deployed; for example, the sensing approach in cognitive radio differs from that in a smart grid. The work in [102] presented an artificial intelligence-based approach for high-speed data delivery with latency regulation. Compared to CogMAC (Cognitive Medium Access Control) and AHP (Analytic Hierarchy Process) protocols, the decentralized approach helps in creating opportunistic methods for spectrum access and better design of channel selection mechanisms. The work presented in [103] proposed a method for integrating intelligence close to the sensor, which enables decision making in local nodes before the information is transferred to the cloud or a server. This local intelligence helps produce smart data that can be analyzed to achieve effective outcomes. Techniques such as normalization, linearization, and data cleaning can be performed at local nodes in a piconet. Such inclusion helps eliminate unnecessary steps that would otherwise have to be performed frequently before the data are used in artificial intelligence algorithms.
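A minimal sketch of such local-node pre-processing, assuming simple range-based cleaning and first-order calibration coefficients chosen only for illustration, could look as follows.

```python
import numpy as np

def clean(readings, low=-50.0, high=150.0):
    """Drop readings outside a plausible physical range (illustrative bounds)."""
    readings = np.asarray(readings, dtype=float)
    return readings[(readings >= low) & (readings <= high)]

def linearize(readings, a=1.05, b=-0.8):
    """First-order correction of a mildly non-linear response (assumed coefficients)."""
    return a * readings + b

def normalize(readings):
    """Zero-mean, unit-variance scaling prior to model input."""
    return (readings - readings.mean()) / (readings.std() + 1e-9)

raw = [21.4, 22.0, 999.0, 21.8, 22.5, -120.0, 23.1]  # two spurious samples
processed = normalize(linearize(clean(raw)))
print(processed)
```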
It is very important to identify data anomalies as data sometimes are collected from multiple platforms. In such cases, the source of data needs to be tracked for threat and irregularity. The work presented in [104] proposed scheduling and anomaly handling mechanisms in cross-platform IoT systems using cognitive tokens. The proposed methods use intelligent sensing with fair play and exponential growth procedures. In contrast to current technology trends in full-stack system development, a layered architecture-based approach was proposed in [105]. The proposed method will help to collect data, extract useful information, and transfer it for further processing. In the case of more sensitive data sensing, such as clinical or eHealth, Ref. [106] presented the implementation of gateway and scoring mechanisms to reduce the latency and to analyze the performance of systems. Such implementations have shown good performance in fog computing environments, where restricted resources are available at local nodes. The work presented in [107] shows the importance and challenges of IoT-based healthcare information sensing. The work presents challenges related to information acquisition, sensing, storage, processing, analytics, and presentation.
The studies reviewed in this section reveal that, although the new generation of intelligent devices reduces cost and helps present information more accurately for decision making, design and implementation, as well as communication technologies, still play important roles.

2.5. Lessons Learned

In this section, several AI-based approaches are reviewed that can analyze the complex characteristics of sensor data for various applications. Most ML- and DL-based algorithms work with numerous types of sensory data coming from different sources. Supervised classification algorithms (i.e., SVM, DT, and RF) are mainly recommended when the data have a complex feature space (for example, hyperspectral sensor data). In particular, for data fused from multiple devices, EL is more favorable because the fused data can be fed to an ensemble classifier for better results. For cases where the dataset is small, K-NN performs well compared with other algorithms. The task is more challenging when the sensor data are unlabeled, and the desired results can then be obtained using unsupervised learning algorithms. Classification based on semi-supervised algorithms requires only a limited set of annotated sensor data and performs well with time-series data. Another category is reinforcement learning, which works well with high-dimensional streams of input data; its integration with deep learning is applied in new areas of research such as drone navigation. Furthermore, DL-based algorithms are also discussed and several conclusions are drawn. Variants of CNN are preferred when the input sensor data have more than one dimension and are highly recommended due to their simultaneous feature extraction and classification capabilities. Recent architectures such as RNN and LSTM perform well with sequential sensing data (i.e., sequences of words, images, etc.), with LSTM often preferred because it captures long-term dependencies in the input data. For generating synthetic data that differ from the actual sensor data, GANs are considered and have proven successful in handling data privacy.
A few attempts were made to examine the parameters that affect the performance of intelligent sensing. Internal and external factors such as the collection of real-time environmental data from multiple sensors, the nature of datasets, the accuracy of the training datasets, optimization parameters, etc., may hinder the overall performance of intelligent sensing. Thus, to create an efficient and robust smart system, it is vital to identify anomalies in data and take appropriate measures to remove them.

3. Datasets in Intelligent Sensing

A dataset is an assemblage of information. Commonly, data are organized as a stream of bytes into a partitioned dataset, which may comprise multiple members, each containing a separate sub-dataset, similar to directories or folders. This organization is employed to meet application requirements and to optimize communications. Examples of classic datasets include the iris flower dataset [108] and the MNIST dataset [109,110]. Table 5 presents a variety of data sources with comments on the merits and demerits of the information. Intelligent sensing algorithms with appropriate datasets foster sensible and more accurate solutions.
Datasets can be categorized as
  • File-based datasets: These are datasets that are entirely stored in a single file.
  • Folder-based datasets: In this type of dataset, the dataset is a folder that holds the data.
  • Database datasets: This type of dataset is a set of data stored in a database, for example, the Oracle database.
  • Web datasets: Web datasets are the datasets that are stored on an internet site. An example is the WFS format.
Individual datasets are sets of data values in an organized way intended for automated analysis. The structure of a dataset can be as simple as a table of rows and columns or can be as complicated as a multidimensional structure. This section comprises different datasets that belong to the different fields of intelligent sensing. These datasets are used in various applications like image classification, gender recognition, speech recognition, obstacle detection, action detection, etc.
Datasets have played a vital role in the development of sophisticated machine learning and deep learning algorithms, as documented in [130]. The importance of datasets is that they represent the relationship of the individual data items. Datasets vary in the types of manipulations, feature analysis, and other functionality closely related to the domains. In some areas, for example, astronomy and genetics, domain-specific software may be supreme. Thus, the data can be incorporated into the cumulative knowledge base of the respective disciplines.
In machine learning projects, there is a need for training datasets. The datasets are used to train the model to perform a variety of actions. It is impossible for a machine learning algorithm to learn without data; data are the most crucial ingredient that makes algorithm training possible. Completeness and accuracy are the two necessities for any dataset [131]; in their absence, the final result is prone to wrong conclusions. Any investigation relies on the availability and quality of suitable datasets. For this reason, the dependability of data needs to be verified before they are converted into valuable information. AI development relies heavily on data for training, tuning, and testing. The three types of datasets are the training set, the validation set, and the testing set. The training set is employed to train an algorithm to learn and produce results. The validation set is used to tune the final ML model. The testing dataset is used to evaluate how well the algorithm was trained on the training dataset. With the growing acceptance of AI by companies across all industries around the world, developing a strategy for ML is vital to gain a competitive edge, and a significant component of this strategy is the data used to train machine learning-based algorithms.
It is very important to remember that good performance on datasets does not necessarily mean that a system with ML algorithms will perform well in real scenarios. Many practitioners in AI forget that the crucial part of building a new AI solution is not the AI or the algorithms; it is the data collection and labeling. Training data typically represent the majority (around 60%) of the total, with the validation and testing sets accounting for roughly 20% each. Other ratios, such as 70%:15%:15% among the three datasets, are also possible, depending on the application.
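As a minimal illustration of such a 60%/20%/20% split (using scikit-learn and random arrays standing in for a real sensing dataset), two successive calls to train_test_split can be chained as follows.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 16)            # placeholder sensor features
y = np.random.randint(0, 2, size=1000)  # placeholder labels

# First split off 40% of the data, then halve it into validation and test sets.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.4, random_state=42)            # 60% training
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=42)  # 20% validation, 20% test

print(len(X_train), len(X_val), len(X_test))          # 600 200 200
```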
Overfitting occurs when a model learns the training data too well: it learns the features of the training data, including the noise, to the extent that performance on fresh data suffers [132]. If training continues for too long on the dataset, performance may decline because the model overfits [133]. At the same time, the error on the test set begins to increase as the model’s ability to generalize decreases. Data augmentation [134] is an approach that allows practitioners to significantly increase the amount of data without actually collecting new data. It is a way of creating new ‘data’ with different orientations. The benefit of data augmentation is that it generates “more data” from a limited amount of data and prevents overfitting. Data augmentation techniques such as padding, cropping, and horizontal/vertical flipping are commonly used to train large neural networks. An underfitting machine learning model is not an appropriate model [134] and will have poor performance on the testing data; the remedy is to try alternative machine learning algorithms.
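The following sketch shows the padding, random cropping, and horizontal flipping mentioned above, implemented with plain NumPy so that the example stays self-contained; the pad width and flip probability are illustrative assumptions.

```python
import numpy as np

def augment(image, pad=4, rng=np.random.default_rng()):
    """Pad, randomly crop back to the original size, and randomly flip a 2-D image."""
    h, w = image.shape
    padded = np.pad(image, ((pad, pad), (pad, pad)), mode="reflect")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    crop = padded[top:top + h, left:left + w]
    if rng.random() < 0.5:
        crop = crop[:, ::-1]           # horizontal flip
    return crop

sample = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "image"
print(augment(sample).shape)                       # (8, 8)
```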

4. Practical Applications of Intelligent Sensing

In this section, a plethora of applications based on intelligent sensing such as agriculture, surveillance, traffic management, healthcare, and assistive services are summarized.

4.1. Applications of Intelligent Sensing

The amount of data that is available on the internet and in our daily life in different forms is growing fast because of the rapid development of sensing and computing technologies.
  • Smart Agriculture—Intelligent sensing is applied in this domain to fulfill the needs of farmers, who face problems on a daily basis such as crop disease infestations, weed management, and pesticide control [135]. The gradient descent-based technique is used in [136] to train a network on a real field dataset consisting of various tea gardens, and a radial basis function network is used to classify and identify tea pests. Ref. [137] combines expert system technology and an ANN to predict the nutrition level of a crop in order to help inexpert farmers. This system is developed as an Android application that can be installed on a smartphone [137]. The basic methodology is feed-forward and backpropagation: pattern recall is performed by the feed-forward pass, and training is done by the backpropagation algorithm. A study carried out in [138] considers the use of ANNs in various techniques to estimate evapotranspiration (ET). The methods applied include the Penman-Monteith method and Levenberg-Marquardt backpropagation. An increase in the number of hidden layers was observed to increase the variability of the ET estimation.
    Figure 3 illustrates how smart sensors can be used in different areas of agriculture, i.e., soil, crop growth, disease identification, supply control, environment sensing, and bio-surveillance. Some key reasons for using sensors are real-time monitoring to enable remedial measures, cost savings by reducing waste, remote sensing through wireless and IoT platforms, and automated agricultural produce monitoring.
  • Intruder Detection and Surveillance—In IoT devices, attacks and threats have become more prevalent because intrusion detection methodologies are hard to deploy. The most effective intrusion detection systems apply signature-matching methods to detect malicious activities [139]. These systems have low false alarm rates and perform well against various attacks. Another approach is anomaly detection, which maps ordinary behaviors to a certain baseline and detects deviations from it. To create a baseline profile, a supervised learning algorithm is used, trained on previous data samples. In [140], the knowledge discovery in databases (KDD) data are saved to an Oracle database server to extract a proper dataset for a set of classifiers. After preparing the dataset by removing the attacks, the most common experimental techniques for classification are the multilayer perceptron, the Bayesian algorithm, and J48 trees. In [141], the dataset from the 1998 DARPA intrusion detection program is pre-processed from the binary TCP dump format into a form readable by the neural network; backpropagation is the supervised learning method used to accomplish this task.
    An intelligent video surveillance system (IVSS) composed of an IP camera and a human-computer interface is presented in [142]. IVSS has modules for image analysis, image understanding, video capture, and event generation. In the video capture module, input data can be accessed from different IP addresses of cameras over a LAN. Image analysis comprises image processing tasks, for example, extracting relevant information, including tracking, motion detection, etc. Image understanding includes AI techniques to understand the significance of the scene captured by a camera. The abnormal behavior is then forwarded to an event generation module, which helps the user by generating an alarm. The use of intelligent sensing in intrusion detection and remote surveillance for monitoring applications is shown in Figure 4. The use of smart sensors will greatly help improve the existing systems in terms of cost, energy, and performance.
  • Intelligent Traffic Management—AI-based techniques have been applied in this field to control road traffic. To optimize traffic light cycles, a technique based on a genetic algorithm (GA) is used to improve the traffic light configuration [143]. Ref. [144] discusses the design of a traffic light controller that varies the cycle time according to the number of vehicles behind the red and green traffic lights. Another technique based on the extension neural network (ENN) is used in outdoor environments to recognize objects. A traffic light can be monitored by gathering data on the number of passing vehicles and then processing those data. Figure 5 shows how intelligent sensing is used in traffic management. With the emergence of smart sensors, various challenges faced by traffic management authorities, such as traffic congestion, optimum routing, travel cost, and average waiting time, can be addressed.
  • Smart Healthcare—Unsupervised learning algorithms such as clustering and principal component analysis (PCA) are used in [145]. In this technique, the clustering algorithm assigns labels by maximizing the resemblance of patients within a cluster and minimizing it between clusters. PCA focuses on dimensionality reduction, especially when the features span a considerable number of dimensions. In [146], SVM is applied to classify imaging biomarkers of nervous and psychiatric diseases. Recently, CNN has been successfully implemented in the healthcare domain, using knowledge from ocular images to assist in diagnosing congenital cataract disease [147]. Natural language processing (NLP) aims at better clinical decision making from narrative text [148]. In [149], NLP is used to read chest X-ray reports to alert physicians to a possible requirement for anti-infective therapy.
    In healthcare organizations like insurance companies, the use of sensors is to provide accurate and reliable diagnostic results, which can be monitored remotely irrespective of whether the patient is at a clinic, hospital, or home, thereby improving healthcare efficiency. Healthcare management uses intelligent sensing for different purposes, as shown in Figure 6.
    Mass-spreading diseases are not rare nowadays; in such cases, fast and reliable information helps stop the infection from spreading to the general public. Mitigation is facilitated by early detection, identification of the cause, and finding a cure. In healthcare, DL and AI have been implemented to control such diseases, and intelligent sensing is also implemented for vaccine detection. In the case of COVID-19, the WHO has recommended a swab-based SARS-CoV-2 test. From the swabbed samples, information related to the E gene of SARS-CoV-2 and the gene of the RNA-dependent RNA polymerase, the enzyme in charge of copying the viral RNA genome, plays a key role in identification. Many researchers have observed that real-time PCR methods are also effective for diagnosis [150,151,152]. In these approaches, the protein related to immunological defense is tagged with fluorescent probes to identify the potential targets. A CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats)-Cas13-based strategy for viral inhibition has been found to be effective for dealing with SARS-CoV-2, which caused COVID-19, and emerging pan-coronavirus pandemic strains [153].
    Intelligent sensing techniques can be employed to determine the diseases that cause the epidemic and pandemic. In [154], Santosh suggested active learning algorithms with cross-population datasets to test and train models that can compute the data with multiple mini-batches of information to help in detection and decision making. The work presented in [21] proposed an AI-based framework with the use of sensors already mounted on palmtop devices, like smartphones, cameras, inertial, microphone, temperature, and fingerprint sensors to collect information from patients. Deep Learning algorithms are implemented to do the multimodal analysis to detect the presence of COVID-19 symptoms.
    The use of intelligent sensing with edge computing and cloud services is also encouraging several corporations and startups to work in this area. UNet++ [155] was proposed for medical image segmentation and has been successfully applied to computed tomography scans, microscopy, and RGB video data. The UNet++ architecture is a newer version in the series of U-Net and wide U-Net architectures, and with deep supervision it has demonstrated better performance than its predecessors. A common issue in implementing intelligence for disease detection using sensors is that the raw data need to be segmented and labeled for further analysis. When urgent or fast results are required, a manual method is preferred, because incorporating intelligence into algorithms requires training on significant amounts of data. If the system is not trained, the accuracy of the model will be degraded, which affects the end result, i.e., the successful prediction of true positive cases.
    The nucleic acid test is a method used to diagnose gonococcal and other Neisserial infections, detect HIV RNA, and identify coronaviruses such as the one causing Severe Acute Respiratory Syndrome (SARS). A recent study shows that it is also being considered for the diagnosis of COVID-19 patients. This test helps detect specific nucleic acid sequences and organisms in blood, tissue, urine, or stool. The work in [156] proposed nucleic acid amplification tests (NAATs) using stacked denoising autoencoders (SDAE) for feature extraction. It has also been observed that DeepGene, a cancer type classifier, can be essential [157]. In another work [158], Wang et al. implemented a CNN to recognize the behavior of pulmonary nodules and to extract features from machine-generated endoscopy images in low light. Artificial intelligence algorithms are increasingly implemented in pathology devices with assistive methods to achieve highly accurate results. Intelligent sensing also plays a vital role in the monitoring of epidemics and pandemics. Thermography devices can easily be found in public places, especially during outbreaks of mass-spreading diseases; a thermal scanner is a common device for identifying fever by presenting the data (a heat map) in a human-readable form.
    Figure 7 presents a case study on the importance of intelligent sensing in epidemics and pandemics. To detect the symptoms related to the infection, multiple tests are proposed, and in most cases a combination of such tests needs to be performed. The WHO suggests a swab test, which requires sample collection from the nose and throat. CT scan images are also used, and thermography data are further analyzed to sort out potential patients based on heat analysis. The tissues collected by biopsy and bronchoscopy are examined to understand the symptoms. Urine and stool tests have also shown the presence of infection in patients; in the case of COVID-19, urine samples are not adequate, whereas stool analysis has helped detect the presence of infection, similar to SARS and MERS. Blood tests help analyze cell cultures, and multiple serology assays help identify virus growth and immune system status. Tests such as RDT, ELISA, and the neutralization assay indicate the presence of antibodies and the possibility of protection against infection. Data from the multiple test sources shown in Figure 8 are helpful for training and testing algorithms. Deep learning and artificial intelligence algorithms trained with such data will further help in symptom identification.
  • Smart Assistive Technology—In [159], a navigation system for visually impaired people is developed. This project focuses on how place cells, grid cells, and path integration, along with AI, can be helpful; artificial intelligence with grid cells uses deep Q-learning with an RNN-based ANN architecture. In [160], cash recognition for blind people is designed to allow them to identify banknotes correctly. In this project, an AI-powered application uses a smartphone to capture an image of a banknote, and after recognizing the note’s value, an audio sequence announces it. To work on realistic images taken from a smartphone, a VGG-16 model pre-trained on ImageNet is used, via transfer learning, for training deep neural networks and verifying the approach. Recently, ML algorithms have improved the intelligibility of speech for both hearing-impaired and normal-hearing listeners. In [161], speech separation is treated as a binary classification problem in which each unit is classified as noise-dominant (0) or speech-dominant (1). For speech recognition for normal-hearing persons, Gaussian mixture models have been used.
    Figure 9 shows how AI, ML, and DL techniques are used for the visually impaired by taking gestures as input and converting text to speech using algorithms. These technologies improve the way of communication between ordinary people and visually impaired people.
  • Smart Communication Networks—A recent trend in communication technologies, observed in 4G, 5G, and ongoing research on 6G networks, is that ubiquitous sensor networks are becoming a feature of intelligent sensing, meaning that information from sensor nodes can be easily retrieved remotely and processed. This also requires adopting new techniques related to nondestructive data transfer mechanisms, fast and lightweight computational nodes for signal and communication requirements, multichannel modulation schemes, and opportunistic channel sensing schemes. Intelligent sensing for the control, transfer, and supervision of sensor information can be observed in the Internet of Things, Industry 4.0, and related areas. Ref. [162] presents behavior analysis using Latent Dirichlet Allocation (LDA), Non-negative Matrix Factorization (NMF), and Probabilistic Latent Semantic Analysis (PLSA) in a comparative study on three different datasets from ubiquitous sensors. LDA, NMF, and PLSA have all been successfully used in text analysis tasks such as document clustering and are closely related to each other; in particular, [163] formally showed the equivalence between NMF and PLSA. PLSA, also known as probabilistic latent semantic indexing (PLSI), is a statistical approach used to analyze two-mode and co-occurrence data. Furthermore, PLSA can be treated as LDA with a uniform Dirichlet prior distribution. A semi-supervised learning approach was implemented in [164] for gait recognition for person identification using ubiquitous sensor data. Sparse labels and low-modality factors were analyzed in [165].
    For intelligent sensing in communication, the steps illustrated in Figure 8 can be followed. The initial steps cover basic signal processing: sensing, filtering, amplification, sampling, quantization, data acquisition, and conversion. Information processing and digital communication procedures are then adopted, in which edge computing plays an important role. The edge computing system includes a low-power compute unit tailored specifically to the requirements; hardware restrictions exist in such scenarios, so algorithm development should take these limitations into account. The gateway is an important medium for transferring data from local nodes to the main computational platform, i.e., the server node. Communication protocols such as 4G, 5G, UWB, and WiMAX can be implemented as per the design requirements.
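The banknote-recognition approach described above can be sketched in a few lines. The snippet below is a minimal, illustrative transfer-learning setup, assuming a TensorFlow/Keras environment and a hypothetical banknote_images/ directory with one sub-folder per denomination; it is not the exact pipeline of [160].

```python
# Minimal transfer-learning sketch (assumption: TensorFlow/Keras is available and a
# hypothetical "banknote_images/" folder holds one sub-directory per note denomination).
import tensorflow as tf

NUM_CLASSES = 7          # hypothetical number of note denominations
IMG_SIZE = (224, 224)    # VGG-16 expects 224x224 RGB inputs

# Load VGG-16 pre-trained on ImageNet and freeze its convolutional base.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=IMG_SIZE + (3,))
base.trainable = False

# Attach a small classification head that is trained on the banknote images.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical dataset of smartphone photos, one folder per denomination.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "banknote_images/", image_size=IMG_SIZE, batch_size=32)
model.fit(train_ds, epochs=5)
```

Freezing the convolutional base keeps the number of trainable parameters small, which is what makes this style of transfer learning practical when only a limited number of realistic smartphone photos is available.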
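To make the topic-model comparison in [162] concrete, the following sketch applies LDA and NMF to a handful of hypothetical sensor-event "documents" using scikit-learn; the event strings are placeholders rather than data from the cited study.

```python
# Sketch comparing LDA and NMF on sensor-event "documents" (assumes scikit-learn is
# installed); the event strings below are hypothetical stand-ins for ubiquitous-sensor logs.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation, NMF

docs = [
    "door_open kettle_on kettle_off door_close",        # hypothetical morning routine
    "tv_on sofa_pressure tv_off lights_off",             # hypothetical evening routine
    "door_open door_close motion_hall motion_kitchen",   # hypothetical movement pattern
]

# LDA works on raw term counts; NMF is usually applied to TF-IDF weights.
counts = CountVectorizer().fit_transform(docs)
tfidf = TfidfVectorizer().fit_transform(docs)

lda_topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)
nmf_topics = NMF(n_components=2, init="nndsvd", random_state=0).fit_transform(tfidf)

print("LDA document-topic weights:\n", lda_topics)
print("NMF document-topic weights:\n", nmf_topics)
```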

4.2. Lessons Learned

In this section, we have identified a few ML-based foundational services in a broad range of intelligent sensing applications and discussed how ML has been used to facilitate these services. The major contribution of this section is its coverage of intelligent sensing applications, which are gaining tremendous attention.
ML is a transformative technology whose algorithms and impressive results have permeated nearly every other technology. Agriculture represents one sector being reshaped by the future of computing and communications: intelligent sensing applied to this domain helps meet the needs of farmers and the population by efficiently utilizing limited resources. Smart agriculture involves incorporating information technology into traditional farming methods. When dealing with smart farming, factors such as population movement, weather conditions, and demographics play a significant role. Other important parameters in agriculture include field surveillance, supply control, environment sensing, analysis of crop growth duration, disease identification, and soil parameters. Intruder detection and surveillance are also very important and have attracted a great deal of attention; nowadays, nearly every shop, home, or office needs a surveillance system for intrusion detection. Signature-matching algorithms are among the most effective methods in intrusion detection systems for detecting malicious activities, while multilayer perceptron, Bayesian, and J48 algorithms are common experimental techniques in anomaly detection. Image and video recognition plays a vital role in adopting intelligent video surveillance systems (IVSS). The authors observed that sensors would greatly help in improving existing systems.
Healthcare has become a high priority since the pandemic became rampant globally in 2020, and AI and ML are widely used in smartizing healthcare systems. As mass-spreading diseases are no longer rare, fast and reliable information helps to stop infection from reaching the general public. Recently, CNNs have been successfully adapted for healthcare: they are used to classify X-ray images to diagnose heart diseases and ocular images to help diagnose cataracts. Intelligent sensing is widely used to monitor patient parameters remotely, such as pulse analysis and routine checkups. Real-time PCR diagnostics and CRISPR-based strategies have also proved effective for dealing with COVID-19. Hybrid approaches, such as intelligent sensing combined with ML, edge computing, and cloud services, are also gaining the attention of several corporations in this area. AI has likewise been applied to road traffic control; however, intelligent sensing has improved so much in this domain that most of the work is done by sensors. For example, a vehicle may have an intelligent system installed to help avoid accidents and recognize traffic, and some vehicles have cameras installed to monitor the surroundings. On the infrastructure side, speed-monitoring systems with video surveillance are installed at traffic-signal poles or toll stations on highways to prevent road accidents. AI and ML combined have proved successful in assistive technology, where ML algorithms have improved speech intelligibility for both speech- and hearing-impaired people. Intelligent sensing also affects communication networks positively, making them smarter and more reliable. The omnipresent sensor network will play a very important role as one of the features of intelligent sensing, with sensor nodes easily accessible remotely. Overall, intelligent sensing with ML algorithms has been widely used to improve the intelligence of sensors.

5. Challenges and Future Research Directions

With the advancement of sensor technology, research has been carried out to extract useful information in various domains [166]. The adoption of AI in smart sensing brings advantages such as forecast-based maintenance, adaptable manufacturing, and improved productivity [167]. In this section, we review numerous challenges associated with particular applications and AI approaches and briefly discuss possible future research directions.

5.1. Challenges

  • Data Security and Privacy: Despite the success of AI and ML models, they face the major challenge of data security. ML models extract features by learning patterns that contain information, which can be vulnerable to real-world attacks [168]. Data integrity is a legitimate concern in any real-time environment, as it affects the quality of datasets and the overall performance of the system. Consider, for example, a UAV-enabled intelligent transport system in a smart city, where information about vehicle location and speed can be leaked by malicious entities [169,170]. The sensors must gather and share only the essential information required to execute an operation, and standard rules and procedures must be applied to maintain data integrity (a minimal integrity-tagging sketch is given after this list).
    The information has to pass through several stages before it is presented. Most of the machines involved are connected, resource-constrained devices, although some operate as standalone computational units. The first step in intelligent sensing is to gather the collected data from the sensors, which are then merged with other information sources and processed accordingly. Where the information can be analyzed locally, processing takes place on the device; in most devices, however, the next step is to transfer the information to database storage or cloud services. Such a collection of information is then processed for data analysis and presented through queries specific to user requirements. From data collection to presentation, several types of security threats need to be handled. As illustrated in Figure 10, the process of intelligent sensing consists of multiple stages, from sensing to data analysis, and security needs to be addressed at each stage.
  • Data Storage and Management: The storage of enormous amounts of data in the form of audio, video, images, smart-device data, and social media has become a major hurdle for several applications and needs to be addressed. Mismanagement of data makes it difficult to assess the quality of the data collected by sensors and further affects the decision-making process [171]. The availability of a large amount of data motivates the adoption of ML and AI methods to enhance the overall performance of sensor-based systems. Therefore, to avoid redundancy, more advanced AI algorithms will be needed to extract meaningful data.
  • Power Consumption: Nowadays, the use of wearable flexible sensors has gained significant attention in medical applications [172,173,174]. These sensors are placed in contact with the clothes worn by a person to measure physiological signals such as temperature, ECG, EMG, and muscle activity and to monitor cardiovascular problems. The power consumption of these devices is an important issue that needs attention, and the production cost of a flexible sensor is a further challenge to be addressed [175]. Low-power platforms such as Shimmer and Telos should be used for health monitoring to reduce the power consumption of wearable flexible sensing systems.
  • Hardware Deployment: Despite the benefits of AI, deploying algorithms on hardware demands substantial computing resources and power and involves high computational complexity, which makes it a very challenging task [176]. Hence, the collaboration between AI and hardware components needs serious effort to enhance intelligent communication. The large memory footprint of a trained model and the enormous amount of sensor data affect training accuracy and computational speed on hardware. Moreover, due to the lack of hardware-specific libraries, trained models are often not properly deployed from a given framework to low-power devices (i.e., edge or mobile platforms) and FPGAs, which may delay product delivery by several weeks. Many researchers are therefore focusing on reducing the complexity of AI and ML algorithms from a hardware perspective, thus enhancing the overall performance of real-time inference models and making them memory-efficient [177,178] (a minimal quantization sketch is given after this list).
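As a concrete illustration of the data-integrity point in the Data Security and Privacy item above, the sketch below attaches an HMAC tag to each sensor payload before it leaves the device, so that tampering in transit can be detected at the gateway or cloud. The pre-shared key handling is deliberately simplified and purely illustrative.

```python
# Minimal sketch of protecting sensor-payload integrity with an HMAC tag before the
# gateway forwards it to the cloud; key provisioning is deliberately simplified here.
import hmac
import hashlib
import json

SHARED_KEY = b"replace-with-a-provisioned-device-key"   # assumption: pre-shared key

def sign_reading(reading: dict) -> dict:
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": reading, "tag": tag}

def verify_reading(message: dict) -> bool:
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading({"sensor_id": "ecg-07", "value": 72, "ts": 1650000000})
assert verify_reading(msg)   # any tampering with msg["payload"] would fail this check
```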
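For the hardware-deployment challenge, one widely used way to shrink a trained model's memory footprint is post-training quantization. The sketch below converts a small, hypothetical Keras model to a quantized TensorFlow Lite flat buffer; real deployments would additionally supply a representative dataset and tune the conversion options.

```python
# Sketch of shrinking a Keras model's memory footprint for an edge device via
# post-training quantization (assumes TensorFlow with the TFLite converter is installed).
import tensorflow as tf

# Hypothetical small model standing in for a trained sensing network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enables weight quantization
tflite_model = converter.convert()

# The resulting flat buffer can be copied to an edge board or microcontroller runtime.
with open("sensing_model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model)} bytes")
```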

5.2. Future Directions

  • Data Fusion: Recently, data fusion techniques have been gaining considerable attention in several respects. Data fusion with big data ensures the aggregation of data that are generated either independently or collectively and facilitates improved decision making through value extraction; the result of this fusion can be further manipulated, analyzed, and stored. Data fusion in IoT [32] makes it more efficient to integrate, manage, store, and manipulate large amounts of data. Data processing in IoT adds further data by extracting meaningful information, and data fusion can help reduce the volume of that data (a minimal fusion sketch is given after this list). Emerging technologies such as M2MC (machine-to-machine communication) allow data fusion to be performed at the edge [107]. M2MC has the ability to communicate over a dedicated medium, for example the internet, to enable information flow in an intelligent way through smart devices for smart homes, cities, and businesses.
  • Industry 4.0: The smartization of manufacturing industries has been perceived as Industry 4.0 (the fourth industrial revolution), a paradigm shift made possible by the development of new information and communication technologies (ICT) [179]. Industry 4.0 is a new industrial model that reflects how production evolves and diversifies over time. The emerging technology implies the digital factory, in which intelligent devices are inter-networked with semi-finished products, raw materials, robots, machines, and workers. Industry 4.0 is characterized by the efficient use of resources and the incorporation of customers and business partners into business processes [180]. The technologies of the future will be founded on the availability of data, and those data are becoming available in profusion thanks to Industry 4.0, which is digitally transforming industry. Digital resources such as Siemens' Digital Enterprise portfolio affect every phase of industrial production, from the design of a product to its production to its use. Future technologies make it possible to analyze and exploit these data pools in completely new ways, a development that will require technology and knowledge drawn from numerous other domains. Autonomous systems also need to build trust between humans and machines [181]. The IoT vision is rooted in the belief that the advancements in communications, information technology, and microelectronics observed in recent years will continue into the future. Due to their small size, decreasing energy consumption, and falling prices, communication modules, processors, and other electronic equipment are being progressively integrated into everyday objects. At present, cities are remotely monitored and data are collected intelligently through multiple sensors embedded in surveillance systems. Fifth-generation (5G) cellular wireless can connect numerous smart objects at the same time thanks to its capacity and high speed [182].
  • Industry 5.0: After Industry 4.0, intelligent sensing is reaching new heights with more strategic growth in industrial automation and control. The origin of Industry 5.0 was presented in [183]. The inclusion of an ecosystem for safe operation and the acceleration of innovation are core features of Industry 5.0. The communication technology used in Industry 5.0 is similar to that of Industry 4.0, but the emphasis is on collecting more dark data from the core components of the plant or manufacturing units and applying intelligence to it. Society 5.0 is an outcome of these industrial advancements and assists humans and machines in making intelligent decisions [184]. Industry 5.0 includes the implementation of IoT, big data, artificial intelligence, and communication technology for the digitalization of work environments. The work presented in [185] details the infrastructure involved in developing an Industry 5.0 work environment and its effects on business and industry through the involvement of information technology. The work presented in [186] shows the performance of Byzantine-tolerant machine learning algorithms in Industry 5.0 with the involvement of edge computing technology. The goal of Industry 5.0 is to empower rather than replace workers. Moreover, applications of Industry 5.0 extend well beyond industrial production; for instance, Industry 5.0 can provide customized therapy and treatment to COVID-19 patients if detailed information about the patient is available [187]. Industry 5.0-based secure UAV communication using AI was presented in [188], suggesting mass customization and the inclusion of cyber-physical systems in this area. In view of these developments, Industry 5.0 will open up ample opportunities for future research.
  • Explainable AI (XAI): One of the prominent future advancements is explainable AI, which addresses the complexity of models and enables users to understand how the models reach specific decisions and recommendations [189]. Users can then see how the workflow of an AI model leads to different conclusions in different cases and what the strengths and weaknesses of the model are. Black-box models such as ANNs and RFs are difficult to understand and interpret because of their complexity. Therefore, explanation interfaces such as data visualization and scenario analysis have been built that expose model behavior and help humans easily understand the relationship between inputs and predictions (a minimal feature-importance sketch is given after this list). Companies providing XAI interfaces for explaining complicated AI models include Google Cloud Platform, Flowcast, and Fiddler Labs [190].
  • Extended Reality and AI: One of the AI-enabled future technologies is extended reality (XR), which encompasses all forms of real and virtual environments, including augmented reality (AR), virtual reality (VR), and mixed reality (MR). XR is an immersive technology that can synthetically create training data for DNNs and build virtual environments [191]. XR environments include cameras, virtual machinery, sensors, human avatars, and control software, and provide much richer content than virtual reality alone. XR and AI unlock many opportunities in various domains [192], such as mobile XR, which uses a combination of smartphones, AR glasses, and mobile VR headsets. XR solutions are also used in industries and educational institutions to offer innovative and safe training to employees based on data collected by tracking the movements of humans and machines [193]. The healthcare industry leverages XR in medical procedures to improve surgical imaging [194]. Areas where XR solutions can be applied still need to be explored, including 5G communication networks, public services, real estate, defense, and military applications.
  • Convergence of AI and 6G: Future 6G networks combined with AI and ML methods will optimize network performance, support diverse services, and build seamless connectivity. Many researchers have started focusing on 6G with a vision of transmission over THz and mmWave bands and of integrating communication, sensing, and control functionalities toward a sustainable ecosystem. Studies have shown that 6G integrated with UAV-enabled networks leads to frequent handovers [38]. DRL, a powerful AI technique combining DL and RL, is capable of handling decision-making tasks [195] and can be adapted to provide efficient handovers, intelligent mobility, and reliable wireless connectivity (a minimal Q-learning sketch for handover decisions is given after this list). Moreover, in some complex networks, fuzzy Q-learning and LSTM-based AI techniques can be used to avoid connectivity or handover failures and to enable mobility management [196].
  • Channel Coding: Intelligent communication techniques extract the meaning of the information [197]. This fulfills two purposes: reducing the amount of data transmitted, and protecting the information from channel distortion and noise using error-control coding. Network coding (NC) has been suggested as a promising technique for improving vehicular wireless network throughput by reducing packet loss in transmission. In [198], an adaptive network coding method is proposed that uses a Hidden Markov Model (HMM) to regulate the coding rate according to the estimated packet-loss rate (a minimal redundancy-selection sketch is given after this list). In the near future, research combining multipath transmission with hierarchical edge computing in high-speed cellular-based vehicular networks will become a major focus.
    Recently, Q-learning (QL), an ML algorithm, has shown very promising results for learning problems on energy- and computation-constrained sensor devices. An intelligent collision-probability inference algorithm based on a Q-learning model was proposed in [199]; it optimizes the performance of sensor nodes by exploiting channel collision probability and network-layer ranking states with the help of an accumulated reward function. Future IoT networks will have an assortment of promising features that optimize network performance and communication efficiency. ML techniques that allow machine intelligence to be incorporated into IoT communication technologies are attracting much attention [200], and the MAC-layer and network-layer capabilities of future IoT networks can be enhanced with ML-based algorithms [199].
  • Latency Minimization: Latency minimization is a crucial factor in the deployment of real-time applications on energy-constrained platforms such as mobile devices. In the design of AI and computer-vision algorithms, latency is considered a primary requirement for resource-intensive tasks. Researchers are exploring ML- and DL-based methods for reducing latency and energy consumption in future 5G networks [201,202]. Some of the critical issues in intelligent 5G communication technologies include scheduling medium access control (MAC) layer resources among sensor devices, storing the large amount of data generated at the network edge, and assigning virtual network functions (vNFs) to hosting devices. These issues can be addressed by reducing the demand on network bandwidth, lowering latency, and improving QoS. Such 5G networks are capable of supporting critical tasks such as autonomous driving, remote drone control, and real-time AI on handheld devices according to their latency requirements [203].
  • Future Citizenship: Owing to government initiatives around the world on the digitization of identity and social-information documents, these resources are accessible to citizens through various secure online portals. Citizens no longer need to stand in queues to access them, as all information is available online. Technology is also involved in daily life in the form of smart clothing, smart homes, disease prevention, medicine, and so on; smart citizenship is thus a demand of the smart world, and intelligent sensing underpins much of the technology used by smart citizens. The work presented in [42] discusses the contributions of information provided by the local community, one major benefit of which is to strengthen the quality of government decision making. In the future, citizens will generate valuable data through intelligent sensing on mobile platforms, so the challenges related to theft prevention, forgery, and the right to access information become even more critical for future citizens.
  • Software Platforms in Intelligent Sensing: The platform on which algorithms are executed in an intelligent sensing environment requires multiple software applications. The three key steps in the development of such systems are (a) hardware-level integration, (b) middleware for feature enhancement, and (c) front-end development. For all three steps, multiple types of software are available that can be integrated with each other to create a single framework; the challenge in this domain is to find one platform that performs all three steps. Usually, the selection of an intelligent sensing platform is based on the developer's familiarity with the development environment. It has been observed that manufacturers provide development platforms but limit their use to certain levels. For example, the integration of middleware into a specific development environment depends on the compatibility of dependent libraries and the programming language, and such constraints leave developers facing challenges related to software integration and debugging. Figure 11 gives a brief overview of how intelligent sensing is applied in various domains and also lists several future research directions in intelligent sensing.
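As a minimal illustration of the data-fusion direction discussed above, the sketch below combines redundant readings of the same quantity by inverse-variance weighting, one of the simplest fusion rules; the readings and noise variances are hypothetical.

```python
# Minimal sketch of fusing redundant sensor readings by inverse-variance weighting;
# the readings and per-sensor noise variances below are hypothetical.
import numpy as np

readings = np.array([24.8, 25.3, 24.6])        # same quantity reported by three sensors
variances = np.array([0.40, 0.10, 0.25])       # assumed per-sensor noise variances

weights = (1.0 / variances) / np.sum(1.0 / variances)   # more trust in less noisy sensors
fused_estimate = np.sum(weights * readings)
fused_variance = 1.0 / np.sum(1.0 / variances)          # fused estimate is less uncertain

print(f"Fused value: {fused_estimate:.2f}, fused variance: {fused_variance:.3f}")
```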
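For the XAI direction, a simple post-hoc explanation of a black-box model can be obtained with permutation feature importance, as sketched below using scikit-learn on synthetic "sensor" features; dedicated XAI toolkits provide far richer interfaces than this.

```python
# Sketch of a simple post-hoc explanation for a black-box model using permutation
# feature importance (assumes scikit-learn); the synthetic features are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffling an important feature should noticeably degrade the model's accuracy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```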
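For the AI-and-6G and Q-learning directions, the following toy tabular Q-learning sketch learns when to stay on or switch away from a base station as link quality degrades; the states, rewards, and transition model are hypothetical stand-ins for a real handover or channel-access environment.

```python
# Toy tabular Q-learning sketch for a handover-style decision problem: in each
# signal-quality state the agent picks "stay" or "switch"; dynamics are hypothetical.
import random

states = ["good", "fair", "poor"]
actions = ["stay", "switch"]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    # Hypothetical environment: switching usually restores a good link but costs a handover.
    if action == "switch":
        return ("good" if random.random() < 0.8 else "fair"), -1.0
    nxt = {"good": "good", "fair": "poor", "poor": "poor"}[state]
    reward = {"good": 2.0, "fair": 0.5, "poor": -2.0}[nxt]
    return nxt, reward

state = "good"
for _ in range(5000):
    if random.random() < epsilon:                      # epsilon-greedy exploration
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    nxt, reward = step(state, action)
    best_next = max(Q[(nxt, a)] for a in actions)      # standard Q-learning update
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

print({k: round(v, 2) for k, v in Q.items()})
```

After training, the learned table typically favors "switch" in the poor state and "stay" in the good state, which is the intuition behind DRL-assisted handover management.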
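For the channel-coding direction, the sketch below adapts the number of repair packets to an estimated packet-loss rate so that a target delivery probability is met, in the spirit of loss-adaptive network coding; it assumes an idealized code in which any sufficiently large subset of packets allows decoding, and the loss estimate and reliability target are hypothetical.

```python
# Toy sketch of adapting coding redundancy to an estimated packet-loss rate; assumes an
# idealized (MDS-like) code where any n_data of the coded packets suffice for decoding.
from math import comb

def delivery_prob(n_data: int, n_repair: int, loss: float) -> float:
    """Probability that at least n_data of (n_data + n_repair) coded packets arrive."""
    n = n_data + n_repair
    return sum(comb(n, k) * (1 - loss) ** k * loss ** (n - k) for k in range(n_data, n + 1))

def choose_redundancy(n_data: int, loss: float, target: float = 0.99, max_repair: int = 20) -> int:
    # Smallest number of repair packets meeting the target delivery probability.
    for r in range(max_repair + 1):
        if delivery_prob(n_data, r, loss) >= target:
            return r
    return max_repair

estimated_loss = 0.15   # e.g., produced by an HMM-based channel-loss estimator
print("repair packets needed:", choose_redundancy(n_data=10, loss=estimated_loss))
```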

6. Noteworthy Projects Based on Intelligent Sensing

This section presents some noteworthy research projects and initiatives around the world that are contributing to the field of intelligent sensing. We attempt to cover recent technologies that can also be helpful in the future. The projects and their technical details are presented in Table 6. They belong to a variety of fields, including autonomous underwater vehicles (AUV), 6G, Industry 4.0, smart irrigation, smart farming, smart cities, smart healthcare, and smart homes. The technologies used in these projects are among the most recent, such as ML, computer vision, DL, MIMO, mmWave, ultra-massive MIMO, fog computing, cloud computing, artificial intelligence, IoT, and wireless communication. The projects are spread across the world and touch on many facets of intelligent sensing. Some are supported by government agencies, some are sponsored by enterprises, and others are pursued by academic institutions. Together, they attest to the vigorous development of intelligent sensing.

7. Conclusions

The continuous growth of intelligent sensing raises challenges related to the integration, communication, safety, and adaptation of algorithms at different stages and in different applications. This paper has presented a survey of AI-enabled intelligent sensing and its technology requirements, opportunities, and future directions. We began by outlining the role of AI technology in intelligent sensing and then summarized the contributions of the work, highlighting key areas in intelligent sensing. We reviewed various learning models with a comparative analysis and discussed the parameters that affect the performance of intelligent sensing based on the results of recent research. Available datasets for intelligent sensing were then presented to help the research community explore further; they represent a broad spectrum of datasets that have been used fruitfully in AI and intelligent sensing research, and their advantages and limitations, information formats, and explanatory notes were provided. Next, we reviewed practical applications, including intelligent sensing in healthcare, pandemic monitoring, assistive technology, and smart sensor networks, among others. The list is by no means exhaustive but serves to exemplify the ample applications of intelligent sensing. In addition, we elaborated on the challenges and future research directions in intelligent sensing, pointing out challenges related to data security and privacy, data storage, power consumption, and hardware deployment. We observe that intelligent sensing will grow even more rapidly alongside communication technology and edge computing; therefore, its involvement in data fusion, Industry 4.0, Industry 5.0, explainable AI, latency minimization, future citizenship, extended reality, the convergence of AI and 6G, and software platforms is discussed among the future research directions. Finally, we presented noteworthy projects in intelligent sensing, mentioning project names, sources, technologies used, and project aims. These projects are dispersed across many countries and represent the use of intelligent sensing in diverse areas globally. We believe this work will help researchers gain a deeper understanding of the different aspects of AI-enabled intelligent sensing.

Author Contributions

Conceptualization, A.S.; methodology, A.S.; writing—original draft preparation, A.S., V.S., M.J., H.-C.W., D.N.K.J. and C.M.W.B.; writing—review and editing, A.S., H.-C.W., D.N.K.J. and C.M.W.B.; funding acquisition, A.S., H.-C.W., D.N.K.J. and A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded, in part, by the Fundação para a Ciência e a Tecnologia, Portugal under Grant No. UIDB/04111/2020 (COPELABS), by the Ministry of Science and Technology, Taiwan, and Rebit Digital, grant number MOST 110-2622-E-197-001, by the International Cooperation project of the Sri Lanka Technological Campus, Sri Lanka, No. RRSG/19/5008, and Department of Science and Technology (DST), India under project name Sign Language to Regional Language Converter (SLRLC) with project number SEED/TIDE/063/2016.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Acronym: Definition
5G: Fifth-Generation (Mobile Telecommunications Technology)
6G: Sixth-Generation (Mobile Telecommunications Technology)
ADMS: Advanced Dimensional Measurement System
AHP: Analytic Hierarchy Process
AI: Artificial Intelligence
AM: Additive Manufacturing
ANN: Artificial Neural Network
AR: Augmented Reality or Activity Recognition
AUV: Autonomous Underwater Vehicle
BMI: Body Mass Index
BSI: Blind System Identification
CC: Common Criteria Process
CKD: Chronic Kidney Disease
CNN: Convolutional Neural Network
CoAP: Constrained Application Protocol
CogMAC: Cognitive Medium Access Control
COVID-19: Coronavirus Disease 2019
CRISPR: Clustered Regularly Interspaced Short Palindromic Repeats
DL: Deep Learning
DNP3: Distributed Network Protocol 3
ECG: Electrocardiogram
EDA: Electrodermal Activity
EL: Ensemble Learning
EMG: Electromyography
ENN: Ensemble Neural Network
ET: Evapotranspiration
FAME: Fatty Acid Methyl Esters
FID: Fréchet Inception Distance
FW: Feature Weights
GA: Genetic Algorithm
GAN: Generative Adversarial Network
GMM: Gaussian Mixture Model
GPRS: General Packet Radio Service
HMM: Hidden Markov Model
IB: Intelligent Beamforming
ICT: Information and Communication Technology
IoT: Internet of Things
ITS: Intelligent Transport System
IVSS: Intelligent Video Surveillance System
KDD: Knowledge Discovery and Data Mining
K-NN: K-Nearest Neighbors
LAN: Local Area Network
LDA: Latent Dirichlet Allocation
LIDAR: Light Detection and Ranging
LR: Linear Regression
LSTM: Long Short-Term Memory
M2MC: Machine-to-Machine Communication
MAS: Multi-Agent System
MFCC: Mel-Frequency Cepstral Coefficients
MIMO: Multiple Input Multiple Output
ML: Machine Learning
MO-PSO: Multi-Objective Particle Swarm Optimization
MQTT: Message Queuing Telemetry Transport
MR: Mixed Reality
NFV: Network Function Virtualization
NLP: Natural Language Processing
NMF: Non-negative Matrix Factorization
NN: Neural Network
PCA: Principal Component Analysis
PLSA: Probabilistic Latent Semantic Analysis
QL: Q-Learning
QoS: Quality of Service
RAN: Radio Access Network
RF: Random Forest
RFID: Radio Frequency Identification
RL: Reinforcement Learning
RNN: Recurrent Neural Network
SARS-CoV-2: Severe Acute Respiratory Syndrome Coronavirus 2
SCADA: Supervisory Control and Data Acquisition
SDAE: Stacked Denoising Auto Encoders
SDN: Software-Defined Networking
SME: Small to Mid-size Enterprise
SVM: Support Vector Machine
UUV: Unmanned Underwater Vehicle
VR: Virtual Reality
WSN: Wireless Sensor Network
XAI: Explainable AI
XR: Extended Reality

References

  1. Corsi, C. Smart sensors: Why and when the origin was and why and where the future will be. Proc. SPIE 2014, 8993, 899302. [Google Scholar] [CrossRef]
  2. Von Krogh, G. Artificial intelligence in organizations: New opportunities for phenomenon-based theorizing. Acad. Manag. Discov. 2018. [Google Scholar] [CrossRef] [Green Version]
  3. Kobbacy, K.A.H.; Vadera, S.; Rasmy, M.H. AI and OR in management of operations: History and trends. J. Oper. Res. Soc. 2007, 58, 10–28. [Google Scholar] [CrossRef]
  4. Shabbir, J.; Anwer, T. Artificial intelligence and its role in near future. arXiv 2018, arXiv:1804.01396. [Google Scholar]
  5. Nichols, J.A.; Chan, H.W.H.; Baker, M.A.B. Machine learning: Applications of artificial intelligence to imaging and diagnosis. Biophys. Rev. 2018, 11, 111–118. [Google Scholar] [CrossRef]
  6. Abduljabbar, R.; Dia, H.; Liyanage, S.; Bagloee, S.A. Applications of artificial intelligence in transport: An overview. Sustainability 2019, 11, 189. [Google Scholar] [CrossRef] [Green Version]
  7. Al-Sahaf, H.; Bi, Y.; Chen, Q.; Lensen, A.; Mei, Y.; Sun, Y.; Tran, B.; Xue, B.; Zhang, M. A survey on evolutionary machine learning. J. R. Soc. N. Z 2019, 49, 205–228. [Google Scholar] [CrossRef]
  8. Avola, D.; Foresti, G.L.; Piciarelli, C.; Vernier, M.; Cinque, L. Mobile applications for automatic object recognition. In Advanced Methodologies and Technologies in Network Architecture, Mobile Computing, and Data Analytics; IGI Global: Hershey, PA, USA, 2019; pp. 1008–1020. [Google Scholar] [CrossRef]
  9. Aloufi, S.; Zhu, S.; El Saddik, A. On the Prediction of Flickr Image Popularity by Analyzing Heterogeneous Social Sensory Data. Sensors 2017, 17, 631. [Google Scholar] [CrossRef] [Green Version]
  10. Bahri, S.; Zoghlami, N.; Abed, M.; Tavares, J.M.R.S. Big data for healthcare: A survey. IEEE Access 2019, 7, 7397–7408. [Google Scholar] [CrossRef]
  11. Liakos, K.; Busato, P.; Moshou, D.; Pearson, S.; Bochtis, D. Machine learning in agriculture: A review. Sensors 2018, 18, 2674. [Google Scholar] [CrossRef] [Green Version]
  12. Mohammadi, V.; Minaei, S. Artificial intelligence in the production process. In Engineering Tools in the Beverage Industry; Elsevier: Amsterdam, The Netherlands, 2019; pp. 27–63. [Google Scholar]
  13. Woschank, M.; Rauch, E.; Zsifkovits, H. A review of further directions for artificial intelligence, machine learning, and deep learning in smart logistics. Sustainability 2020, 12, 3760. [Google Scholar] [CrossRef]
  14. Zhong, S.; Zhang, K.; Bagheri, M.; Burken, J.G.; Gu, A.; Li, B.; Ma, X.; Marrone, B.L.; Ren, Z.J.; Schrier, J. Machine learning: New ideas and tools in environmental science and engineering. Environ. Sci. Technol. 2021, 55, 12741–12754. [Google Scholar] [CrossRef] [PubMed]
  15. Chang, F.C.; Huang, H.C. A survey on intelligent sensor network and its applications. J. Netw. Intell. 2016, 1, 1–15. [Google Scholar]
  16. Li, B.; Hou, B.; Yu, W.; Lu, X.; Yang, C. Applications of artificial intelligence in intelligent manufacturing: A review. Front. Inf. Technol. Electron. Eng. 2017, 18, 86–96. [Google Scholar] [CrossRef]
  17. Ali, J.M.; Hussain, M.A.; Tade, M.O.; Zhang, J. Artificial intelligence techniques applied as estimator in chemical process systems—A literature survey. Expert Syst. Appl. 2015, 42, 5915–5931. [Google Scholar]
  18. Tong, W.; Hussain, A.; Bo, W.X.; Maharjan, S. Artificial intelligence for vehicle-to-everything: A survey. IEEE Access 2019, 7, 10823–10843. [Google Scholar] [CrossRef]
  19. Chen, Z.; Chen, Z.; Song, Z.; Ye, W.; Fan, Z. Smart gas sensor arrays powered by artificial intelligence. J. Semicond. 2019, 40, 111601. [Google Scholar] [CrossRef]
  20. Kumar, A.; Gupta, P.K.; Srivastava, A. A review of modern technologies for tackling COVID-19 pandemic. Diabetes Metab. Syndr. Clin. Res. Rev. 2020, 14, 569–573. [Google Scholar] [CrossRef]
  21. Maghded, H.S.; Ghafoor, K.; Sadiq, A.; Curran, K.; Rawat, D.B.; Rabie, K. A Novel AI-enabled Framework to Diagnose Coronavirus COVID-19 using Smartphone Embedded Sensors: Design Study. In Proceedings of the IEEE 21st International Conference on Information Reuse and Integration for Data Science (IRI), Las Vegas, NV, USA, 11–13 August 2020; pp. 180–187. [Google Scholar] [CrossRef]
  22. Qadri, Y.A.; Nauman, A.; Zikria, Y.B.; Vasilakos, A.V.; Kim, S.W. The Future of Healthcare Internet of Things: A Survey of Emerging Technologies. IEEE Commun. Surv. Tutor. 2020, 22, 1121–1167. [Google Scholar] [CrossRef]
  23. Qiu, J.; Wu, Q.; Ding, G.; Xu, Y.; Feng, S. A survey of machine learning for big data processing. EURASIP J. Adv. Signal Process. 2016, 2016, 1. [Google Scholar]
  24. Kumar, D.P.; Amgoth, T.; Annavarapu, C.S.R. Machine learning algorithms for wireless sensor networks: A survey. Inf. Fusion 2019, 49, 1–25. [Google Scholar] [CrossRef]
  25. Ramasamy Ramamurthy, S.; Roy, N. Recent trends in machine learning for human activity recognition—A survey. WIREs Data Min. Knowl. Discov. 2018, 8, e1254. [Google Scholar] [CrossRef]
  26. Ha, N.; Xu, K.; Ren, G.; Mitchell, A.; Ou, J.Z. Machine Learning-Enabled Smart Sensor Systems. Adv. Intell. Syst. 2020, 2, 2000063. [Google Scholar] [CrossRef]
  27. Namuduri, S.; Narayanan, B.N.; Davuluru, V.S.P.; Burton, L.; Bhansali, S. Review—Deep Learning Methods for Sensor Based Predictive Maintenance and Future Perspectives for Electrochemical Sensors. J. Electrochem. Soc. 2020, 167, 037552. [Google Scholar] [CrossRef]
  28. Mao, Q.; Hu, F.; Hao, Q. Deep Learning for Intelligent Wireless Networks: A Comprehensive Survey. IEEE Commun. Surv. Tutor. 2018, 20, 2595–2621. [Google Scholar] [CrossRef]
  29. Alsheikh, M.A.; Lin, S.; Niyato, D.; Tan, H.P. Machine learning in wireless sensor networks: Algorithms, strategies, and applications. IEEE Commun. Surv. Tutor. 2014, 16, 1996–2018. [Google Scholar] [CrossRef] [Green Version]
  30. Morais, C.M.D.; Sadok, D.; Kelner, J. An IoT sensor and scenario survey for data researchers. J. Braz. Comput. Soc. 2019, 25, 4. [Google Scholar] [CrossRef] [Green Version]
  31. Deng, X.; Jiang, Y.; Yang, L.T.; Lin, M.; Yi, L.; Wang, M. Data fusion based coverage optimization in heterogeneous sensor networks: A survey. Inf. Fusion 2019, 52, 90–105. [Google Scholar] [CrossRef]
  32. Ding, W.; Jing, X.; Yan, Z.; Yang, L.T. A survey on data fusion in internet of things: Towards secure and privacy-preserving fusion. Inf. Fusion 2019, 51, 129–144. [Google Scholar] [CrossRef]
  33. Zhang, R.; Nie, F.; Li, X.; Wei, X. Feature selection with multi-view data: A survey. Inf. Fusion 2019, 50, 158–167. [Google Scholar] [CrossRef]
  34. Meher, B.; Agrawal, S.; Panda, R.; Abraham, A. A survey on region based image fusion methods. Inf.Fusion 2019, 48, 119–132. [Google Scholar] [CrossRef]
  35. Jones, D.O.B.; Gates, A.R.; Huvenne, V.A.I.; Phillips, A.B.; Bett, B.J. Autonomous marine environmental monitoring: Application in decommissioned oil fields. Sci. Total Environ. 2019, 668, 835–853. [Google Scholar] [CrossRef] [PubMed]
  36. Villa, M.; Gofman, M.; Mitra, S. Survey of biometric techniques for automotive applications. In Information Technology-New Generations; Springer: New York, NY, USA, 2018; pp. 475–481. [Google Scholar]
  37. Liu, Y.; Li, Z.; Liu, H.; Kan, Z.; Xu, B. Bioinspired Embodiment for Intelligent Sensing and Dexterity in Fine Manipulation: A Survey. IEEE Trans. Ind. Inform. 2020, 16, 4308–4321. [Google Scholar] [CrossRef]
  38. Yang, H.; Alphones, A.; Xiong, Z.; Niyato, D.; Zhao, J.; Wu, K. Artificial-Intelligence-Enabled Intelligent 6G Networks. IEEE Netw. 2020, 34, 272–280. [Google Scholar] [CrossRef]
  39. Apogeeweb. What is Intelligent Sensor and Its Applications. 2018. Available online: http://www.apogeeweb.net/article/75.html (accessed on 6 January 2022).
  40. White, N. Intelligent sensors: Systems or components? Integr. Vlsi. J. 2005, 3, 471–474. [Google Scholar]
  41. Powner, E.; Yalcinkaya, F. Intelligent sensors: Structure and system. Sens. Rev. 1995, 15, 31–35. [Google Scholar] [CrossRef]
  42. O’Grady, M.J.; Muldoon, C.; Carr, D.; Wan, J.; Kroon, B.; O’Hare, G.M.P. Intelligent sensing for citizen science. Mobile Netw. Appl. 2016, 21, 375–385. [Google Scholar] [CrossRef]
  43. Chen, Z.; Fan, K.; Wang, S.; Duan, L.; Lin, W.; Kot, A.C. Toward Intelligent Sensing: Intermediate Deep Feature Compression. IEEE Trans. Image Process. 2020, 29, 2230–2243. [Google Scholar] [CrossRef]
  44. Shokri-Ghadikolaei, H.; Fallahi, R. Intelligent Sensing Matrix Setting in Cognitive Radio Networks. IEEE Commun. Lett. 2012, 16, 1824–1827. [Google Scholar] [CrossRef]
  45. Li, J.Q.; Yu, F.R.; Deng, G.; Luo, C.; Ming, Z.; Yan, Q. Industrial Internet: A Survey on the Enabling Technologies, Applications, and Challenges. IEEE Commun. Surv. Tutor. 2017, 19, 1504–1526. [Google Scholar] [CrossRef]
  46. Reebadiya, D.; Rathod, T.; Gupta, R.; Tanwar, S.; Kumar, N. Blockchain-based Secure and Intelligent Sensing Scheme for Autonomous Vehicles Activity Tracking Beyond 5G Networks. Peer-to-Peer Netw. Appl. 2021, 14, 2757–2774. [Google Scholar] [CrossRef]
  47. Putra, S.A.; Trilaksono, B.R.; Riyansyah, M.; Laila, D.S.; Harsoyo, A.; Kistijantoro, A.I. Intelligent Sensing in Multiagent-Based Wireless Sensor Network for Bridge Condition Monitoring System. IEEE Internet Things J. 2019, 6, 5397–5410. [Google Scholar] [CrossRef]
  48. Zhao, W.; Wu, J.; Shi, P.; Wang, H. Intelligent sensing and decision making in smart technologies. Int. J. Distrib. Sens. Netw. 2018, 14, 1550147718813754. [Google Scholar] [CrossRef] [Green Version]
  49. Duan, Y.; Zhang, L.; Fan, X.; Hou, Q.; Hou, X. Smart city oriented Ecological Sensitivity Assessment and Service Value Computing based on Intelligent sensing data processing. Comput. Commun. 2020, 160, 263–273. [Google Scholar] [CrossRef]
  50. Mitseva, A.; Prasad, N.R.; Todorova, P.; Fokus, F.; Aguero, R.; Garcia Armada, A.; Panayiotou, C.; Timm-Giel, A.; Maccari, L. CRUISE research activities toward ubiquitous intelligent sensing environments. IEEE Wirel. Commun. 2008, 15, 52–60. [Google Scholar] [CrossRef]
  51. Lu, H.; Li, Y.; Chen, M.; Kim, H.; Serikawa, S. Brain intelligence: Go beyond artificial intelligence. Mob. Netw. Appl. 2018, 23, 368–375. [Google Scholar] [CrossRef] [Green Version]
  52. Scirè, A.; Tropeano, F.; Anagnostopoulos, A.; Chatzigiannakis, I. Fog-Computing-Based Heartbeat Detection and Arrhythmia Classification Using Machine Learning. Algorithms 2019, 12, 32. [Google Scholar] [CrossRef] [Green Version]
  53. Zhang, C.; Ye, W.B.; Zhou, K.; Chen, H.Y.; Yang, J.Q.; Ding, G.; Chen, X.; Zhou, Y.; Zhou, L.; Li, F.; et al. Bioinspired Artificial Sensory Nerve Based on Nafion Memristor. Adv. Funct. Mater. 2019, 29, 1808783. [Google Scholar] [CrossRef]
  54. Evans, L.; Owda, M.; Crockett, K.; Vilas, A.F. A methodology for the resolution of cashtag collisions on Twitter—A natural language processing & data fusion approach. Expert Syst. Appl. 2019, 127, 353–369. [Google Scholar] [CrossRef]
  55. Wang, X.; Li, J.; Wang, L.; Yang, C.; Han, Z. Intelligent user-centric network selection: A model-driven reinforcement learning framework. IEEE Access 2019, 7, 21645–21661. [Google Scholar] [CrossRef]
  56. Harrington, P. Machine Learning in Action; Manning: Shelter Island, NY, USA, 2012. [Google Scholar]
  57. Soe, W.T.; Belleudy, C. Load Recognition from Smart Plug Sensor for Energy Management in a Smart Home. In Proceedings of the IEEE Sensors Applications Symposium (SAS), Sophia Antipolis, France, 11–13 March 2019; pp. 1–6. [Google Scholar] [CrossRef]
  58. Cristianini, N.; Shawe-Taylor, J. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar] [CrossRef]
  59. Zhang, S. An overview of network slicing for 5G. IEEE Wirel. Commun. 2019, 26, 111–117. [Google Scholar] [CrossRef]
  60. Hlynsson, H. Transfer learning using the minimum description length principle with a decision tree application. Master’s Thesis, University of Amsterdam, Amsterdam, The Netherlands, 2007. [Google Scholar]
  61. Zheng, J.; Yang, S.; Wang, X.; Xia, X.; Xiao, Y.; Li, T. A decision tree based road recognition approach using roadside fixed 3d lidar sensors. IEEE Access 2019, 7, 53878–53890. [Google Scholar] [CrossRef]
  62. Zhou, Z.H. Ensemble Learning. In Encyclopedia of Biometrics; Li, S.Z., Jain, A., Eds.; Springer: Boston, MA, USA, 2009; pp. 270–273. [Google Scholar] [CrossRef]
  63. Ahmad, I.; Ayub, A.; Ibrahim, U.; Khattak, M.K.; Kano, M. Data-Based Sensing and Stochastic Analysis of Biodiesel Production Process. Energies 2019, 12, 63. [Google Scholar] [CrossRef] [Green Version]
  64. Zhang, J.; Zulkernine, M.; Haque, A. Random-Forests-Based Network Intrusion Detection Systems. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2008, 38, 649–659. [Google Scholar] [CrossRef]
  65. Tan, K.; Ma, W.; Wu, F.; Du, Q. Random forest–based estimation of heavy metal concentration in agricultural soils with hyperspectral sensor data. Environ. Monit. Assess. 2019, 191, 446. [Google Scholar] [CrossRef]
  66. Janjua, Z.H.; Vecchio, M.; Antonini, M.; Antonelli, F. IRESE: An intelligent rare-event detection system using unsupervised learning on the IoT edge. Eng. Appl. Artif. Intell. 2019, 84, 41–50. [Google Scholar] [CrossRef] [Green Version]
  67. Fiorini, L.; Cavallo, F.; Dario, P.; Eavis, A.; Caleb-Solly, P. Unsupervised machine learning for developing personalised behaviour models using activity data. Sensors 2017, 17, 1034. [Google Scholar] [CrossRef] [Green Version]
  68. Kusetogullari, H.; Yavariabdi, A. Unsupervised change detection in landsat images with atmospheric artifacts: A fuzzy multiobjective approach. Math. Probl. Eng. 2018, 2018, 7274141. [Google Scholar] [CrossRef]
  69. Wang, N.; Xu, Z.S.; Sun, S.W.; Liu, Y. Pattern recognition of UAV flight data based on semi-supervised clustering. J. Phys. Conf. Ser. 2019, 1195, 012001. [Google Scholar] [CrossRef]
  70. Okaro, I.A.; Jayasinghe, S.; Sutcliffe, C.; Black, K.; Paoletti, P.; Green, P.L. Automatic fault detection for laser powder-bed fusion using semi-supervised machine learning. Addit. Manuf. 2019, 27, 42–53. [Google Scholar] [CrossRef]
  71. Fang, Y.; Zhao, H.; Zha, H.; Zhao, X.; Yao, W. Camera and LiDAR Fusion for On-road Vehicle Tracking with Reinforcement Learning. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1723–1730. [Google Scholar] [CrossRef]
  72. Jiang, M.; Hai, T.; Pan, Z.; Wang, H.; Jia, Y.; Deng, C. Multi-agent deep reinforcement learning for multi-object tracker. IEEE Access 2019, 7, 32400–32407. [Google Scholar] [CrossRef]
  73. Guo, W.; Yan, C.; Lu, T. Optimizing the lifetime of wireless sensor networks via reinforcement-learning-based routing. Int. J. Distrib. Sens. Netw. 2019, 15, 1–20. [Google Scholar] [CrossRef] [Green Version]
  74. Li, M.; Xie, L.; Wang, Z. A transductive model-based stress recognition method using peripheral physiological signals. Sensors 2019, 19, 429. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  75. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. Deap: A database for emotion analysis; using physiological signals. IEEE Trans. Affect. Comput. 2011, 3, 18–31. [Google Scholar] [CrossRef] [Green Version]
  76. Abdelaziz, A.; Salama, A.S.; Riad, A.M.; Mahmoud, A.N. A Machine Learning Model for Predicting of Chronic Kidney Disease Based Internet of Things and Cloud Computing in Smart Cities. In Security in Smart Cities: Models, Applications, and Challenges; Hassanien, A.E., Elhoseny, M., Ahmed, S.H., Singh, A.K., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 93–114. [Google Scholar] [CrossRef]
  77. Palaniappan, R.; Sundaraj, K.; Sundaraj, S. A comparative study of the svm and k-nn machine learning algorithms for the diagnosis of respiratory pathologies using pulmonary acoustic signals. BMC Bioinform. 2014, 15, 223. [Google Scholar] [CrossRef] [Green Version]
  78. Pasterkamp, H. RALE Lung Sound Repository. Available online: http://www.rale.ca (accessed on 6 January 2022).
  79. Chua, T.S.; Tang, J.; Hong, R.; Li, H.; Luo, Z.; Zheng, Y. NUS-WIDE: A real-world web image database from National University of Singapore. In Proceedings of the ACM International Conference on Image and Video Retrieval, Santorini, Fira, Greece, 8–10 July 2009. [Google Scholar] [CrossRef]
  80. Thakur, S.S.; Abdul, S.S.; Chiu, H.Y.S.; Roy, R.B.; Huang, P.Y.; Malwade, S.; Nursetyo, A.A.; Li, Y.C.J. Artificial-Intelligence-Based Prediction of Clinical Events among Hemodialysis Patients Using Non-Contact Sensor Data. Sensors 2018, 18, 2833. [Google Scholar] [CrossRef] [Green Version]
  81. Yamada, Y.; Kobayashi, M. Detecting mental fatigue from eye-tracking data gathered while watching video: Evaluation in younger and older adults. Artif. Intell. Med. 2018, 91, 39–48. [Google Scholar] [CrossRef]
  82. Kotlar, M.; Bojic, D.; Punt, M.; Milutinovic, V. A survey of deep neural networks deployment location and underlying hardware. In Proceedings of the 14th Symposium on Neural Networks and Applications (NEUREL), Belgrade, Serbia, 20–21 November 2018. [Google Scholar] [CrossRef]
  83. Al Machot, F.; Elmachot, A.; Ali, M.; Al Machot, E.; Kyamakya, K. A Deep-Learning Model for Subject-Independent Human Emotion Recognition Using Electrodermal Activity Sensors. Sensors 2019, 19, 1659. [Google Scholar] [CrossRef] [Green Version]
  84. Papagiannaki, A.; Zacharaki, E.I.; Kalouris, G.; Kalogiannis, S.; Deltouzos, K.; Ellul, J.; Megalooikonomou, V. Recognizing Physical Activity of Older People from Wearable Sensors and Inconsistent Data. Sensors 2019, 19, 880. [Google Scholar] [CrossRef] [Green Version]
  85. Shiva Prakash, B.; Sanjeev, K.V.; Prakash, R.; Chandrasekaran, K. A Survey on Recurrent Neural Network Architectures for Sequential Learning. In Soft Computing for Problem Solving; Bansal, J.C., Das, K.N., Nagar, A., Deep, K., Ojha, A.K., Eds.; Springer: Singapore, 2019; pp. 57–66. [Google Scholar] [CrossRef]
  86. Liu, H.; Liu, Z.; Dong, H.; Ge, J.; Yuan, Z.; Zhu, J.; Zhang, H.; Zeng, X. Recurrent Neural Network-Based Approach for Sparse Geomagnetic Data Interpolation and Reconstruction. IEEE Access 2019, 7, 33173–33179. [Google Scholar] [CrossRef]
  87. Wu, L.; Chen, C.H.; Zhang, Q. A Mobile Positioning Method Based on Deep Learning Techniques. Electronics 2019, 8, 59. [Google Scholar] [CrossRef] [Green Version]
  88. Wei, X.; Liu, Y.; Gao, S.; Wang, X.; Yue, H. An RNN-based delay-guaranteed monitoring framework in underwater wireless sensor networks. IEEE Access 2019, 7, 25959–25971. [Google Scholar] [CrossRef]
  89. Ferdowsi, A.; Saad, W. Generative adversarial networks for distributed intrusion detection in the internet of things. In Proceedings of the IEEE Global Communications Conference (GLOBECOM), Waikoloa, HI, USA, 9–13 December 2019; pp. 1–6. [Google Scholar] [CrossRef] [Green Version]
  90. Yang, G.; Yu, S.; Dong, H.; Slabaugh, G.; Dragotti, P.L.; Ye, X.; Liu, F.; Arridge, S.; Keegan, J.; Guo, Y.; et al. DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction. IEEE Trans. Med. Imaging 2018, 37, 1310–1321. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  91. Yang, J.; Zhao, Z.; Zhang, H.; Shi, Y. Data Augmentation for X-Ray Prohibited Item Images Using Generative Adversarial Networks. IEEE Access 2019, 7, 28894–28902. [Google Scholar] [CrossRef]
  92. Sherstinsky, A. Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306. [Google Scholar] [CrossRef] [Green Version]
  93. Kanjo, E.; Younis, E.M.; Ang, C.S. Deep learning analysis of mobile physiological, environmental and location sensor data for emotion detection. Inf. Fusion 2019, 49, 46–56. [Google Scholar] [CrossRef]
  94. Hall, E.; Kruger, R.; Dwyer, S.; Hall, D.; Mclaren, R.; Lodwick, G. A Survey of Preprocessing and Feature Extraction Techniques for Radiographic Images. IEEE Trans. Comput. 1971, C-20, 1032–1044. [Google Scholar] [CrossRef]
  95. Koochakzadeh, N.; Garousi, V.; Maurer, F. Test Redundancy Measurement Based on Coverage Information: Evaluations and Lessons Learned. In Proceedings of the International Conference on Software Testing Verification and Validation, Denver, CO, USA, 1–4 April 2009; pp. 220–229. [Google Scholar] [CrossRef] [Green Version]
  96. Bi, L.; Feleke, A.G.; Guan, C. A review on EMG-based motor intention prediction of continuous human upper limb motion for human-robot collaboration. Biomed. Signal Process. Control 2019, 51, 113–127. [Google Scholar] [CrossRef]
  97. Guerra, A.; von Stosch, M.; Glassey, J. Toward biotherapeutic product real-time quality monitoring. Crit. Rev. Biotechnol. 2019, 39, 289–305. [Google Scholar] [CrossRef]
  98. Qiao, Y.; Jiao, L.; Yang, S.; Hou, B. A novel segmentation based depth map up-sampling. IEEE Trans. Multimed. 2019, 21, 1–14. [Google Scholar] [CrossRef]
  99. Abeykoon, C. Design and applications of soft sensors in polymer processing: A review. IEEE Sens. J. 2019, 19, 2801–2813. [Google Scholar] [CrossRef] [Green Version]
  100. Wei, Y.; Xia, L.; Pan, S.; Wu, J.; Zhang, X.; Han, M.; Zhang, W.; Xie, J.; Li, Q. Prediction of occupancy level and energy consumption in office building using blind system identification and neural networks. Appl. Energy 2019, 240, 276–294. [Google Scholar] [CrossRef]
  101. Fang, B.; Li, Y.; Zhang, H.; Chan, J. Semi-supervised deep learning classification for hyperspectral image based on dual-strategy sample selection. Remote Sens. 2018, 10, 574. [Google Scholar] [CrossRef] [Green Version]
  102. Marques, A.F.F.; Miranda, G.; Silva, L.M.; Ávila, R.S.; Correia, L.H.A. ISCRa—An Intelligent Sensing Protocol for Cognitive Radio. In Proceedings of the IEEE Symposium on Computers and Communication (ISCC), Messina, Italy, 27–30 June 2016; pp. 385–390. [Google Scholar] [CrossRef]
  103. Lee, W.; Sharma, A. Smart sensing for IoT applications. In Proceedings of the 13th IEEE International Conference on Solid-State and Integrated Circuit Technology (ICSICT), Hangzhou, China, 25–28 October 2016; pp. 362–364. [Google Scholar] [CrossRef]
  104. Sharma, V.; You, I.; Kumar, R. ISMA: Intelligent Sensing Model for Anomalies Detection in Cross Platform OSNs with a Case Study on IoT. IEEE Access 2017, 5, 3284–3301. [Google Scholar] [CrossRef]
  105. Ma, M.; Wang, P.; Chu, C.H. Data management for internet of things: Challenges, approaches and opportunities. In Proceedings of the IEEE International Conference on Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, Beijing, China, 20–23 August 2013; pp. 1144–1151. [Google Scholar] [CrossRef]
  106. Rahmani, A.M.; Gia, T.N.; Negash, B.; Anzanpour, A.; Azimi, I.; Jiang, M.; Liljeberg, P. Exploiting smart e-Health gateways at the edge of healthcare Internet-of-Things: A fog computing approach. Future Gener. Comput. Syst. 2018, 78, 641–658. [Google Scholar] [CrossRef]
  107. Hassanalieragh, M.; Page, A.; Soyata, T.; Sharma, G.; Aktas, M.; Mateos, G.; Kantarci, B.; Andreescu, S. Health Monitoring and Management Using Internet-of-Things (IoT) Sensing with Cloud-Based Processing: Opportunities and Challenges. In Proceedings of the IEEE International Conference on Services Computing, New York, NY, USA, 27 June–2 July 2015; pp. 285–292. [Google Scholar] [CrossRef] [Green Version]
  108. MathNerd. Iris Flower Dataset. Available online: https://www.kaggle.com/arshid/iris-flower-dataset (accessed on 6 January 2022).
  109. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
  110. Kussul, E.; Baidyk, T. Improved method of handwritten digit recognition tested on MNIST database. Image Vis. Comput. 2004, 22, 971–981. [Google Scholar] [CrossRef]
  111. Glover-Kapfer, P.; Soto-Navarro, C.A.; Wearn, O.R. Camera-trapping version 3.0: Current constraints and future priorities for development. Remote Sens. Ecol. Conserv. 2019, 5, 209–223. [Google Scholar] [CrossRef] [Green Version]
  112. Xiao, H.; Rasul, K.; Vollgraf, R. Fashion-mnist: A novel image dataset for benchmarking machine learning algorithms. arXiv 2017, arXiv:1708.07747. [Google Scholar]
  113. Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; Yang, Y. Random erasing data augmentation. In Proceedings of the AAAI conference on artificial intelligence, New York, NY, USA, 2–9 February 2021; pp. 13001–13008. [Google Scholar] [CrossRef]
  114. Koelstra, S. DEAP: A Dataset for Emotion Analysis Using Physiological and Audiovisual Signals. 2020. Available online: https://www.eecs.qmul.ac.uk/mmv/datasets/deap (accessed on 6 January 2022).
  115. Soleymani, M.; Lichtenauer, J.; Pun, T.; Pantic, M. A Multimodal Database for Affect Recognition and Implicit Tagging. IEEE Trans. Affect. Comput. 2012, 3, 42–55. [Google Scholar] [CrossRef] [Green Version]
  116. Polatidis, N.; Georgiadis, C.K. A multi-level collaborative filtering method that improves recommendations. Expert Syst. Appl. 2016, 48, 100–110. [Google Scholar] [CrossRef] [Green Version]
  117. Zhi, D. TRSPD: Toronto Rehab Stroke Posture Dataset. 2017. Available online: https://github.com/zhiderek/TRSPD (accessed on 5 December 2021).
  118. Zhi, Y.X.; Lukasik, M.; Li, M.H.; Dolatabadi, E.; Wang, R.H.; Taati, B. Automatic detection of compensation during robotic stroke rehabilitation therapy. IEEE J. Transl. Eng. Health Med. 2018, 6, 1–7. [Google Scholar] [CrossRef]
  119. Hartmann, A.K.; Marx, E.; Soru, T. Generating a large dataset for neural question answering over the DBpedia knowledge base. In Proceedings of the Workshop on Linked Data Management, co-located with the W3C WEBBR, Vienna, Austria, 17–18 April 2018; Available online: https://www.researchgate.net/publication/324482598_Generating_a_Large_Dataset_for_Neural_Question_Answering_over_the_DBpedia_Knowledge_Base (accessed on 6 January 2022).
  120. Soru, T.; Marx, E.; Moussallem, D.; Publio, G.; Valdestilhas, A.; Esteves, D.; Neto, C.B. SPARQL as a Foreign Language. arXiv 2017, arXiv:1708.07624. [Google Scholar]
  121. Zero Resource Speech Challenge. 2022. Available online: https://www.zerospeech.com (accessed on 6 January 2022).
  122. Versteegh, M.; Anguera, X.; Jansen, A.; Dupoux, E. The Zero Resource Speech Challenge 2015: Proposed Approaches and Results. Procedia Comput. Sci. 2016, 81, 67–72. [Google Scholar] [CrossRef] [Green Version]
  123. Lomonaco, V.; Maltoni, D. Why Core50? 2017. Available online: https://vlomonaco.github.io/core50 (accessed on 6 January 2022).
  124. Parisi, G.I.; Kemker, R.; Part, J.L.; Kanan, C.; Wermter, S. Continual lifelong learning with neural networks: A review. Neural. Netw. 2019, 113, 54–71. [Google Scholar] [CrossRef]
  125. Afifi, M. 11K hands: Gender recognition and biometric identification using a large dataset of hand images. Multimed. Tools. Appl. 2019, 78, 20835–20854. [Google Scholar] [CrossRef] [Green Version]
  126. Shu, Z.; Sahasrabudhe, M.; Guler, R.A.; Samaras, D.; Paragios, N.; Kokkinos, I. Deforming autoencoders: Unsupervised disentangling of shape and appearance. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 650–665. [Google Scholar]
  127. Xu, X.; Zhang, X.; Yu, B.; Hu, X.S.; Rowen, C.; Hu, J.; Shi, Y. DAC-SDC Low Power Object Detection Challenge for UAV Applications. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 392–403. [Google Scholar] [CrossRef] [Green Version]
  128. Sadoughi, N.; Liu, Y.; Busso, C. MSP-AVATAR Corpus: Motion Capture Recordings to Study the Role of Discourse Functions in the Design of Intelligent Virtual Agents. Available online: https://ecs.utdallas.edu/research/researchlabs/msp-lab/MSP-AVATAR.html (accessed on 6 January 2022).
  129. Dizaji, K.G.; Herandi, A.; Deng, C.; Cai, W.; Huang, H. Deep clustering via joint convolutional autoencoder embedding and relative entropy minimization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 5736–5745. [Google Scholar] [CrossRef] [Green Version]
  130. Gershgorn, D. The Data That Transformed AI Research—And Possibly the World. Available online: https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world (accessed on 6 January 2022).
  131. Linjordet, T.; Balog, K. Impact of Training Dataset Size on Neural Answer Selection Models. In Proceedings of the Advances in Information Retrieval: 41st European Conference on IR Research, ECIR 2019, Cologne, Germany, 14–18 April 2019; Azzopardi, L., Stein, B., Fuhr, N., Mayr, P., Hauff, C., Hiemstra, D., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 828–835. [Google Scholar] [CrossRef]
  132. Rao, M.R.; Prasad, V.; Teja, P.; Zindavali, M.; Reddy, O.P. A survey on prevention of overfitting in convolution neural networks using machine learning techniques. Int. J. Eng. Technol. (UAE) 2018, 7, 177–180. [Google Scholar] [CrossRef]
  133. Ghasemian, A.; Hosseinmardi, H.; Clauset, A. Evaluating overfit and underfit in models of network community structure. IEEE Trans. Knowl. Data Eng. 2020, 32, 1722–1735. [Google Scholar] [CrossRef] [Green Version]
  134. Mikołajczyk, A.; Grochowski, M. Data augmentation for improving deep learning in image classification problem. In Proceedings of the International Interdisciplinary PhD Workshop (IIPhDW), Swinoujscie, Poland, 9–12 May 2018; pp. 117–122. [Google Scholar] [CrossRef]
  135. Jha, K.; Doshi, A.; Patel, P.; Shah, M. A comprehensive review on automation in agriculture using artificial intelligence. Artif. Intell. Agric. 2019, 2, 1–12. [Google Scholar] [CrossRef]
  136. Banerjee, G.; Sarkar, U.; Ghosh, I. A radial basis function network based classifier for detection of selected tea pests. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 2017, 7, 665–669. [Google Scholar] [CrossRef]
  137. Ravichandran, G.; Koteeshwari, R.S. Agricultural crop predictor and advisor using ANN for smartphones. In Proceedings of the International Conference on Emerging Trends in Engineering, Technology and Science (ICETETS), Pudukkottai, India, 24–26 February 2016. [Google Scholar] [CrossRef]
  138. Nema, M.K.; Khare, D.; Chandniha, S.K. Application of artificial intelligence to estimate the reference evapotranspiration in sub-humid Doon valley. Appl. Water Sci. 2017, 7, 3903–3910. [Google Scholar] [CrossRef] [Green Version]
  139. Systique, H. Machine learning based network anomaly detection. Int. J. Recent Technol. Eng. 2019, 8, 542–548. [Google Scholar] [CrossRef]
  140. Alkasassbeh, M.; Almseidin, M. Machine learning methods for network intrusion detection. arXiv 2018, arXiv:1809.02610. [Google Scholar]
  141. Pradhan, M.; Pradhan, S.K.; Sahu, S.K. Anomaly detection using artificial neural network. Int. J. Eng. Sci. Emerg. Technol. 2012, 2, 29–36. [Google Scholar]
  142. Elarbi-Boudihir, M.; Al-Shalfan, K.A. Intelligent video surveillance system architecture for abnormal activity detection. In Proceedings of the International Conference on Informatics and Applications (ICIA2012), Terengganu, Malaysia, 3–5 June 2012; pp. 102–111. [Google Scholar]
  143. Sanchez, J.; Galan, M.; Rubio, E. Applying a Traffic Lights Evolutionary Optimization Technique to a Real Case: “Las Ramblas” Area in Santa Cruz de Tenerife. IEEE Trans. Evol. Comput. 2008, 12, 25–40. [Google Scholar] [CrossRef]
  144. Kaur, D.; Konga, E.; Konga, E. Fuzzy traffic light controller. In Proceedings of the 37th Midwest Symposium on Circuits and Systems, Lafayette, LA, USA, 3–5 August 1994. [Google Scholar] [CrossRef]
  145. Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y.; Dong, Q.; Shen, H.; Wang, Y. Artificial intelligence in healthcare: Past, present and future. Stroke Vasc. Neurol. 2017, 2, 230–243. [Google Scholar] [CrossRef]
  146. Orru, G.; Pettersson-Yeo, W.; Marquand, A.F.; Sartori, G.; Mechelli, A. Using support vector machine to identify imaging biomarkers of neurological and psychiatric disease: A critical review. Neurosci. Biobehav. Rev. 2012, 36, 1140–1152. [Google Scholar] [CrossRef]
  147. Long, E.; Lin, H.; Liu, Z.; Wu, X.; Wang, L.; Jiang, J.; An, Y.; Lin, Z.; Li, X.; Chen, J. An artificial intelligence platform for the multihospital collaborative management of congenital cataracts. Nat. Biomed. Eng. 2017, 1, 1–8. [Google Scholar] [CrossRef]
  148. Sharma, Z.; Chauhan, A.; Ashok, L.; D’Souza, A.; Malarout, N.; Kamath, R. The Impact of Artificial Intelligence on Healthcare. Indian J. Public Health Res. Dev. 2019, 10. [Google Scholar] [CrossRef]
  149. Fiszman, M.; Chapman, W.W.; Aronsky, D.; Evans, R.S.; Haug, P.J. Automatic detection of acute bacterial pneumonia from chest X-ray reports. J. Am. Med. Inform. Assoc. 2000, 7, 593–604. [Google Scholar] [CrossRef] [PubMed]
  150. Hadaya, J.; Schumm, M.; Livingston, E.H. Testing Individuals for Coronavirus Disease 2019 (COVID-19). JAMA 2020, 323, 1981. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  151. Chan, J.F.W.; Yip, C.C.Y.; To, K.K.W.; Tang, T.H.C.; Wong, S.C.Y.; Leung, K.H.; Fung, A.Y.F.; Ng, A.C.K.; Zou, Z.; Tsoi, H.W. Improved molecular diagnosis of COVID-19 by the novel, highly sensitive and specific COVID-19-RdRp/Hel real-time reverse transcription-PCR assay validated in vitro and with clinical specimens. J. Clin. Microbiol. 2020, 58, e00310-20. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  152. Lan, L.; Xu, D.; Ye, G.; Xia, C.; Wang, S.; Li, Y.; Xu, H. Positive RT-PCR Test Results in Patients Recovered From COVID-19. JAMA 2020, 323, 1502–1503. [Google Scholar] [CrossRef] [Green Version]
  153. Abbott, T.R.; Dhamdhere, G.; Liu, Y.; Lin, X.; Goudy, L.; Zeng, L.; Chemparathy, A.; Chmura, S.; Heaton, N.S.; Debs, R. Development of CRISPR as an antiviral strategy to combat SARS-CoV-2 and influenza. Cell 2020, 181, 865–876. [Google Scholar] [CrossRef]
  154. Santosh, K.C. AI-driven tools for coronavirus outbreak: Need of active learning and cross-population train/test models on multitudinal/multimodal data. J. Med. Syst. 2020, 44, 93. [Google Scholar] [CrossRef] [Green Version]
155. Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Cham, Switzerland, 2018; pp. 3–11. [Google Scholar] [CrossRef] [Green Version]
  156. Lee, S.H.; Park, S.; Kim, B.N.; Kwon, O.S.; Rho, W.Y.; Jun, B.H. Emerging ultrafast nucleic acid amplification technologies for next-generation molecular diagnostics. Biosens. Bioelectron. 2019, 141, 8. [Google Scholar] [CrossRef]
  157. Yuan, Y.; Shi, Y.; Li, C.; Kim, J.; Cai, W.; Han, Z.; Feng, D.D. DeepGene: An advanced cancer type classifier based on deep learning and somatic point mutations. BMC Bioinform. 2016, 17, 476. [Google Scholar] [CrossRef] [Green Version]
  158. Wang, S.; Kang, B.; Ma, J.; Zeng, X.; Xiao, M.; Guo, J.; Cai, M.; Yang, J.; Li, Y.; Meng, X. A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19). Eur. Radiol. 2021, 31, 6096–6104. [Google Scholar] [CrossRef]
  159. Ray, A. Navigation System for Blind People Using Artificial Intelligence. 2022. Available online: https://amitray.com/artificial-intelligence-for-assisting-blind-people (accessed on 6 January 2022).
  160. Rimal, K. Cash Recognition for the Visually Impaired: Part 1. Available online: https://software.intel.com/en-us/articles/cash-recognition-for-the-visually-impaired-part-1 (accessed on 6 January 2022).
  161. Healy, E.W.; Yoho, S.E.; Wang, Y.; Wang, D. An algorithm to improve speech recognition in noise for hearing-impaired listeners. J. Acoust. Soc. Am. 2013, 134, 3029–3038. [Google Scholar] [CrossRef] [Green Version]
  162. Sharma, M.; Tiwari, S.; Chakraborty, S.; Banerjee, D.S. Behavior Analysis through Routine Cluster Discovery in Ubiquitous Sensor Data. In Proceedings of the International Conference on COMmunication Systems NETworkS (COMSNETS), Bengaluru, India, 7–11 January 2020; pp. 267–274. [Google Scholar] [CrossRef]
  163. Gaussier, E.; Goutte, C. Relation between PLSA and NMF and Implications. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Salvador, Brazil, 15–19 August 2005; Association for Computing Machinery: New York, NY, USA, 2005; pp. 601–602. [Google Scholar] [CrossRef]
  164. Dhekane, S.G.; Vajra, K.; Banerjee, D.S. Semi-supervised Subject Recognition through Pseudo Label Generation in Ubiquitous Sensor Data. In Proceedings of the International Conference on COMmunication Systems NETworkS (COMSNETS), Bengaluru, India, 7–11 January 2020; pp. 184–191. [Google Scholar] [CrossRef]
  165. Dua, D.; Graff, C. UCI Machine Learning Repository. 2017. Available online: https://archive.ics.uci.edu/ml/datasets/opportunity+activity recognition (accessed on 6 January 2022).
  166. Indri, M.; Lachello, L.; Lazzero, I.; Sibona, F.; Trapani, S. Smart sensors applications for a new paradigm of a production line. Sensors 2019, 19, 650. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  167. Atlam, H.F.; Walters, R.J.; Wills, G.B. Intelligence of Things: Opportunities & Challenges. In Proceedings of the 3rd Cloudification of the Internet of Things (CIoT), Paris, France, 2–4 July 2018; pp. 1–6. [Google Scholar] [CrossRef] [Green Version]
  168. Fasano, G.; Accardo, D.; Tirri, A.E.; Moccia, A.; Lellis, E.D. Radar/electro-optical data fusion for non-cooperative UAS sense and avoid. Aerosp. Sci. Technol. 2015, 46, 436–450. [Google Scholar] [CrossRef] [Green Version]
  169. Abbas, A.; Khan, S.U. A review on the state-of-the-art privacy-preserving approaches in the e-health clouds. IEEE J. Biomed. Health Inform. 2014, 18, 1431–1441. [Google Scholar] [CrossRef] [PubMed]
  170. Menouar, H.; Guvenc, I.; Akkaya, K.; Uluagac, A.S.; Kadri, A.; Tuncer, A. UAV-enabled intelligent transportation systems for the smart city: Applications and challenges. IEEE Commun. Mag. 2017, 55, 22–28. [Google Scholar] [CrossRef]
  171. Kang, J.; Yin, S.; Meng, W. An Intelligent Storage Management System Based on Cloud Computing and Internet of Things. In Proceedings of the International Conference on Computer Science and Information Technology, Kunming, China, 21–23 September 2013; Patnaik, S., Li, X., Eds.; Springer: New Delhi, India, 2014; pp. 499–505. [Google Scholar] [CrossRef]
  172. Son, D.; Lee, J.; Qiao, S.; Ghaffari, R.; Kim, J.; Lee, J.E.; Song, C.; Kim, S.J.; Lee, D.J.; Jun, S.W.; et al. Multifunctional wearable devices for diagnosis and therapy of movement disorders. Nat. Nanotechnol. 2014, 9, 397–404. [Google Scholar] [CrossRef]
  173. Xu, S.; Zhang, Y.; Jia, L.; Mathewson, K.E.; Jang, K.I.; Kim, J.; Fu, H.; Huang, X.; Chava, P.; Wang, R.; et al. Soft Microfluidic Assemblies of Sensors, Circuits, and Radios for the Skin. Science 2014, 344, 70–74. [Google Scholar] [CrossRef]
  174. Kim, D.H.; Ghaffari, R.; Lu, N.; Rogers, J.A. Flexible and Stretchable Electronics for Biointegrated Devices. Annu. Rev. Biomed. Eng. 2012, 14, 113–128. [Google Scholar] [CrossRef] [Green Version]
  175. Nag, A.; Mukhopadhyay, S.C.; Kosel, J. Wearable Flexible Sensors: A Review. IEEE Sens. J. 2017, 17, 3949–3960. [Google Scholar] [CrossRef] [Green Version]
  176. Sze, V.; Chen, Y.H.; Emer, J.; Suleiman, A.; Zhang, Z. Hardware for machine learning: Challenges and opportunities. In Proceedings of the IEEE Custom Integrated Circuits Conference (CICC), Austin, TX, USA, 30 April–3 May 2017; pp. 1–8. [Google Scholar] [CrossRef] [Green Version]
  177. Talib, M.A.; Majzoub, S.; Nasir, Q.; Jamal, D. A systematic literature review on hardware implementation of artificial intelligence algorithms. J. Supercomput. 2021, 77, 1897–1938. [Google Scholar] [CrossRef]
  178. Karras, K.; Pallis, E.; Mastorakis, G.; Nikoloudakis, Y.; Batalla, J.M.; Mavromoustakis, C.X.; Markakis, E. A hardware acceleration platform for AI-based inference at the edge. Circuits Syst. Signal Process. 2019, 39, 1059–1070. [Google Scholar] [CrossRef]
  179. Diez-Olivan, A.; Ser, J.D.; Galar, D.; Sierra, B. Data fusion and machine learning for industrial prognosis: Trends and perspectives towards industry 4.0. Inf. Fusion 2019, 50, 92–111. [Google Scholar] [CrossRef]
  180. Lee, J.; Davari, H.; Singh, J.; Pandhare, V. Industrial artificial intelligence for industry 4.0-based manufacturing systems. Manuf. Lett. 2018, 18, 20–23. [Google Scholar] [CrossRef]
181. Bartodziej, C.J. The concept industry 4.0. In The Concept Industry 4.0; Springer: Wiesbaden, Germany, 2017; pp. 27–50. [Google Scholar] [CrossRef]
  182. Xu, L.D.; Xu, E.L.; Li, L. Industry 4.0: State of the art and future trends. Int. J. Prod. Res. 2018, 56, 2941–2962. [Google Scholar] [CrossRef] [Green Version]
  183. Özdemir, V.; Hekim, N. Birth of Industry 5.0: Making Sense of Big Data with Artificial Intelligence, “The Internet of Things” and Next-Generation Technology Policy. OMICS J. Integr. Biol. 2018, 22, 65–76. [Google Scholar] [CrossRef] [PubMed]
  184. Al Faruqi, U. Future Service in Industry 5.0: Survey Paper. J. Sist. Cerdas 2019, 2, 67–79. [Google Scholar] [CrossRef]
  185. Massaro, A. Information Technology Infrastructures Supporting Industry 5.0 Facilities. In Electronics in Advanced Research Industries: Industry 4.0 to Industry 5.0 Advances; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2022; pp. 51–101. [Google Scholar] [CrossRef]
  186. Du, A.; Shen, Y.; Zhang, Q.; Tseng, L.; Aloqaily, M. CRACAU: Byzantine Machine Learning Meets Industrial Edge Computing in Industry 5.0. IEEE Trans. Industr. Inform. 2022, 18, 5435–5445. [Google Scholar] [CrossRef]
  187. Javaid, M.; Haleem, A.; Singh, R.P.; Haq, M.I.U.; Raina, A.; Suman, R. Industry 5.0: Potential applications in COVID-19. J. Ind. Integr. Manag. 2020, 5, 507–530. [Google Scholar] [CrossRef]
  188. Jain, D.K.; Li, Y.; Er, M.J.; Xin, Q.; Gupta, D.; Shankar, K. Enabling Unmanned Aerial Vehicle Borne Secure Communication With Classification Framework for Industry 5.0. IEEE Trans. Industr. Inform. 2022, 18, 5477–5484. [Google Scholar] [CrossRef]
  189. Došilović, F.K.; Brčić, M.; Hlupić, N. Explainable artificial intelligence: A survey. In Proceedings of the 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 20–24 May 2018; pp. 0210–0215. [Google Scholar] [CrossRef]
  190. Arrieta, A.B.; Díaz-Rodríguez, N.; Ser, J.D.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; et al. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115. [Google Scholar] [CrossRef] [Green Version]
  191. Fast-Berglund, Å.; Gong, L.; Li, D. Testing and validating Extended Reality (xR) technologies in manufacturing. Procedia Manuf. 2018, 25, 31–38. [Google Scholar] [CrossRef]
  192. Köse, A.; Tepljakov, A.; Petlenkov, E. Real Time Data Communication for Intelligent Extended Reality Applications. In Proceedings of the IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), Tunis, Tunisia, 22–24 June 2020; pp. 1–6. [Google Scholar] [CrossRef]
  193. Ferreira, J.M.M.; Qureshi, Z.I. Use of XR technologies to bridge the gap between Higher Education and Continuing Education. In Proceedings of the IEEE Global Engineering Education Conference (EDUCON), Porto, Portugal, 27–30 April 2020; pp. 913–918. [Google Scholar] [CrossRef]
  194. Andrews, C.; Southworth, M.K.; Silva, J.N.A.; Silva, J.R. Extended reality in medical practice. Curr. Treat. Options Cardiovasc. Med. 2019, 21, 1–12. [Google Scholar] [CrossRef] [PubMed]
  195. Liu, C.H.; Chen, Z.; Tang, J.; Xu, J.; Piao, C. Energy-efficient uav control for effective and fair communication coverage: A deep reinforcement learning approach. IEEE J. Sel. Areas Commun. 2018, 36, 2059–2070. [Google Scholar] [CrossRef]
  196. Yang, H.; Xie, X.; Kadoch, M. Intelligent resource management based on reinforcement learning for ultra-reliable and low-latency IoV communication networks. IEEE Trans. Veh. Technol. 2019, 68, 4157–4169. [Google Scholar] [CrossRef]
  197. Asano, D.K.; Fujioka, S.; Matsunaga, M.; Kohno, R. Coding Techniques for Intelligent Communication over a Satellite Link. In Proceedings of the 5th COMETS Workshop, Tokyo, Japan, 10 December 1996. [Google Scholar]
  198. Yin, C.; Dong, P.; Du, X.; Zheng, T.; Zhang, H.; Guizani, M. An Adaptive Network Coding Scheme for Multipath Transmission in Cellular-Based Vehicular Networks. Sensors 2020, 20, 5902. [Google Scholar] [CrossRef] [PubMed]
  199. Musaddiq, A.; Nain, Z.; Ahmad Qadri, Y.; Ali, R.; Kim, S.W. Reinforcement Learning-Enabled Cross-Layer Optimization for Low-Power and Lossy Networks under Heterogeneous Traffic Patterns. Sensors 2020, 20, 4158. [Google Scholar] [CrossRef]
  200. Mahdavinejad, M.S.; Rezvan, M.; Barekatain, M.; Adibi, P.; Barnaghi, P.; Sheth, A.P. Machine learning for internet of things data analysis: A survey. Digit. Commun. Netw. 2018, 4, 161–175. [Google Scholar] [CrossRef]
  201. McClellan, M.; Cervelló-Pastor, C.; Sallent, S. Deep Learning at the Mobile Edge: Opportunities for 5G Networks. Appl. Sci. 2020, 10, 4735. [Google Scholar] [CrossRef]
  202. Haider, N.; Baig, M.Z.; Imran, M. Artificial Intelligence and Machine Learning in 5G Network Security: Opportunities, advantages, and future research trends. arXiv 2020, arXiv:2007.04490. [Google Scholar]
  203. Basnayaka, C.M.W.; Jayakody, D.N.K.; Perera, T.D.P.; Ribeiro, M.V. Age of Information in an URLLC-enabled Decode-and-Forward Wireless Communication System. In Proceedings of the IEEE 93rd Vehicular Technology Conference (VTC2021-Spring), Helsinki, Finland, 25–28 April 2021; pp. 1–6. [Google Scholar] [CrossRef]
  204. Meloa Project. 2022. Available online: https://www.ec-meloa.eu/ (accessed on 6 January 2022).
  205. Upadhyay, V.; Gupta, S.; Dubey, A.; Rao, M.; Siddhartha, P.; Gupta, V.; George, S.; Bobba, R.; Sirikonda, R.; Maloo, A.; et al. Design and motion control of Autonomous Underwater Vehicle, Amogh. In Proceedings of the IEEE Underwater Technology (UT), Chennai, India, 23–25 February 2015; pp. 1–9. [Google Scholar] [CrossRef]
  206. Completed Projects, Defence Research and Development Organisation, Ministry of Defence, Government of India. 2019. Available online: https://www.drdo.gov.in/naval-research-board/completed-projects (accessed on 6 January 2022).
  207. Brunel University. Beamforming Using Artificial Intelligence for 6G Networks. Available online: https://www.brunel.ac.uk/research/projects/beamforming-using-artificial-intelligence-for-6g-networks (accessed on 6 January 2022).
  208. Broadband Wireless Networking Lab. Intelligent Environments for Wireless Communication for 6G. Available online: https://ianakyildiz.com/bwn/projects/6G/index.html (accessed on 6 January 2022).
  209. University of Oulu. 6G Flagship. 2022. Available online: https://www.oulu.fi/6gflagship/ (accessed on 6 January 2022).
  210. SME 4.0 Consortium. Smart Manufacturing and Logistics for SMEs in an X-to-order and Mass Customization Environment. Available online: https://www.sme40.eu/ (accessed on 6 January 2022).
  211. Industry 4.0. Available online: https://www.gestamp.com/What-we-do/Industry-4-0 (accessed on 6 January 2022).
  212. MF2C Consortium. Towards an Open, Secure, Decentralized and Coordinated Fog-to-Cloud Management Ecosystem. Available online: https://www.mf2c-project.eu/ (accessed on 6 January 2022).
  213. CORDIS, European Commission. Waterbee Smart Irrigation Systems Demonstration Action. Available online: https://cordis.europa.eu/project/id/283638/it (accessed on 6 January 2022).
  214. Vinfinet Technologies. Introducing Mobile Motor Controller. Available online: http://kisanraja.com/ (accessed on 6 January 2022).
  215. The Better India. Meet the Man Whose ’Kisanraja’ Smart Irrigation Device Helps Over 34,200 Farmers! 2019. Available online: https://www.thebetterindia.com/204393/india-water-pumps-innovation-agritech-invention-irrigation-agriculture-farmers-kisanraja/ (accessed on 6 January 2022).
  216. Smart Cities Mission: A Step towards Smart India, National Portal of India. Available online: https://www.india.gov.in/spotlight///smart-cities-mission-step-towards-smart-india (accessed on 6 January 2022).
  217. IoT-Based Healthcare. Available online: https://cse.iitkgp.ac.in/~smisra/theme_pages/healthcare/hProject.html (accessed on 6 January 2022).
  218. Control Your Home Around the Globe. Available online: http://www.iplugcontrol.com/ (accessed on 6 January 2022).
  219. Hyperspectral Imaging Standards. 2021. Available online: https://www.nist.gov/programs-projects/hyperspectral-imaging-standards (accessed on 6 January 2022).
  220. Ocean Color. 2021. Available online: https://www.nist.gov/programs-projects/ocean-color (accessed on 6 January 2022).
  221. Advanced Dimensional Measurement Systems. 2020. Available online: https://www.nist.gov/programs-projects/advanced-dimensional-measurement-systems (accessed on 6 January 2022).
  222. Murphy, C. Meet Project N, China’s $60,000 Smart Car. 2015. Available online: https://blogs.wsj.com/chinarealtime/2015/04/24/meet-project-n-chinas-60000-smart-car (accessed on 6 January 2022).
223. Parliament of Australia. Smart Cities. Available online: https://www.aph.gov.au/Parliamentary_Business/Committees/House/ITC/DevelopmentofCities/Report/section?id=committees%2Freportrep%2F024151%2F25693 (accessed on 6 January 2022).
Figure 1. Structure of the Paper.
Figure 2. Various Scenarios Portraying ML and DL-based Intelligent Sensing.
Figure 3. Intelligent Sensing in Agriculture.
Figure 4. Intelligent Sensing in Intruder Detection and Surveillance.
Figure 5. Intelligent Sensing in Traffic Management.
Figure 6. Intelligent Sensing in Healthcare Management.
Figure 7. Intelligent Sensing in Pandemic Monitoring.
Figure 8. Intelligent Sensing in Communication Networks.
Figure 9. Intelligent Sensing for Visually Impaired.
Figure 10. Layer-wise Security Challenges.
Figure 11. Future Directions.
Table 1. Recent Survey Articles in Intelligent Sensing.
[30] (2019). Technology used: IoT scenario variables, sensor analysis, and application analysis. Elucidation and comments: The details of emerging IoT scenarios are discussed. Advantages: A classification for the analysis of variables and sensors in IoT scenarios is presented, which helps data analysts recognize the features of IoT applications more effectively. Limitations: The source (three publishers) and the number of papers reviewed (48) are the main limitations of the paper.
[31] (2019). Technology used: Coverage models and classification, network lifetime maximization, data fusion, and reinforcement learning-based coverage optimization. Elucidation and comments: Methods for tackling the network lifetime and coverage optimization issues of heterogeneous sensor networks in geographically scattered, resource-constrained environments are discussed. Advantages: Extension of network lifetime and optimization of coverage based on data fusion and sensor collaboration are summarized; coverage hole problems in realistic WSNs are also mitigated using reinforcement learning (RL) approaches. Limitations: Some topics need further elaboration, e.g., how to extend the lifetime and optimize the coverage of a wireless sensor network with RL methods such as cellular learning automata.
[32] (2019). Technology used: IoT data properties, fusion in IoT, data fusion requirements, smart grid, smart home, and smart transportation. Elucidation and comments: Data fusion helps to eliminate imperfect data. Advantages: IoT data fusion is treated as an essential requirement for evaluating the performance of existing data fusion techniques. Limitations: Differences in data resolution, which affect accuracy, reliability, and privacy to some degree, are not addressed.
[33] (2019). Technology used: Feature selection, feature fusion, adaptive fusion. Elucidation and comments: The survey focuses on feature fusion, feature selection, and adaptive multi-view problems. Advantages: Various feature selection approaches for tackling multi-view problems are discussed. Limitations: Unsupervised feature selection struggles to select the important features of unlabeled data.
[34] (2019). Technology used: Region-based fusion methods, objective evaluation of fusion performance. Elucidation and comments: The saliency map method is found to be an evolving technique for medical image fusion. Advantages: Region partition algorithms produce better fusion results in medical image fusion applications. Limitations: Image segmentation is imperfect in region-based image fusion methods; noise, misregistration, and blur are limiting factors.
[35] (2019). Technology used: Environmental monitoring, autonomous systems for decommissioning monitoring, MAS sensors, MAS data. Elucidation and comments: Autonomy has changed ocean-based science and the monitoring of the marine environment. Advantages: Marine autonomous systems reduce the human risk of seagoing operations. Limitations: The main drawback of autonomy is its inability to collect physical samples from seabed sediments.
[36] (2019). Technology used: Intelligent vehicle technologies, in-vehicle biometrics, cognitive and context-aware intelligence. Elucidation and comments: The paper focuses on improving the security of vehicles against theft through the selection of biometrics. Advantages: Traffic and vehicle data collection enhances decision-making in transport systems. Limitations: The survey is constrained to biometric techniques used in emerging applications such as vehicular ad hoc networks (VANETs) and self-driving cars.
[37] (2020). Technology used: Bio-inspired embodiment, design challenges, and planning. Elucidation and comments: Major challenges in intelligent sensing for bio-inspired embodiment are discussed in terms of dynamics, working mechanisms, and the technology involved. Advantages: Activity skills and implications for bio-inspired robots using deep reinforcement learning, CNNs, and other methods are discussed. Limitations: Implications for robotic hand grasping are discussed, with an explanation of challenges and limitations related to distortion from sensor nodes.
[38] (2020). Technology used: 6G networks with AI-enabled architectures for knowledge and decision-making in telecommunications. Elucidation and comments: Application areas for 6G-based intelligent networks and layer-based intelligent sensing networks for various applications. Advantages: Methods and applications for utilizing AI technology in 6G networks, including resource management and traffic and signal optimization. Limitations: The content focuses specifically on current 6G trends and challenges related to networks and resource utilization for 6G network applications.
Table 2. Comparative Analysis of Work Available in the Area of Intelligent Sensing and Contributions Presented in This Paper.
[37] Bioinspired Embodiment for Intelligent Sensing and Dexterity in Fine Manipulation: A Survey. Areas addressed: the operating mechanism, categorization, implication issues, and methods for the industrial embodiment of intelligent sensing based on bioinspired mechanisms. Areas not addressed: communication technology; dark data handling; performance analysis. Novel contributions of this work: implementation of digital twins, communication features, and a detailed discussion of AI approaches used in intelligent sensing are presented.
[42] Intelligent Sensing for Citizen Science. Areas addressed: well-presented work on mobile devices with embedded sensors using existing communication protocols. Areas not addressed: 5G and 6G communication protocols; AI-inspired communication protocols; projects and databases in the area of intelligent sensing. Novel contributions of this work: future-generation communication technology, AI-based algorithms and models, and future citizenship are reviewed in detail.
[43] Toward Intelligent Sensing: Intermediate Deep Feature Compression. Areas addressed: well-explained work on a compactly represented, layer-wise deep learning approach; result-based analysis of deep feature compression; major emphasis on visual data. Areas not addressed: nonvisual data; machine learning approaches. Novel contributions of this work: industrial communication protocols and ISM-band communication protocols for intelligent sensing; visual and nonvisual data; smart assistive technology; data security and privacy; intelligent sensing in healthcare data; both machine learning and deep learning approaches are reviewed for intelligent sensing algorithms.
[44] Intelligent Sensing Matrix Setting in Cognitive Radio Networks. Areas addressed: spectrum sensing; cognitive radio; sensing sequences; well-drafted work on matrix setting for cognitive radio, including timing analysis. Areas not addressed: future challenges related to intelligent sensing; application areas for intelligent sensing; 5G and 6G communication for intelligent sensing. Novel contributions of this work: learning models and an analysis of their advantages and limitations; a detailed review of influential parameters in intelligent sensing.
[45] Industrial Internet: A Survey on the Enabling Technologies, Applications, and Challenges. Areas addressed: Industrial Internet; functional safety; e-government; 5C architecture. Areas not addressed: general public utilities; beyond-5G communication; artificial intelligence in future challenges. Novel contributions of this work: Industry 4.0; communication applications in intelligent sensing; projects and datasets available for intelligent sensing.
[46] Blockchain-based Secure and Intelligent Sensing Scheme for Autonomous Vehicles Activity Tracking Beyond 5G Networks. Areas addressed: intelligent sensing and tracking based on blockchain using 5G and beyond communication; the application area is autonomous vehicles. Areas not addressed: other application areas such as assistive technology, healthcare, and smart cities. Novel contributions of this work: smart city environments, healthcare, and assistive technology are reviewed with respect to intelligent sensing.
[47] Intelligent Sensing in Multiagent-Based Wireless Sensor Network for Bridge Condition Monitoring System. Areas addressed: wireless sensor networks; multi-agent systems; artificial intelligence; performance analysis using a case study. Areas not addressed: review of practical applications of intelligent sensing; datasets in intelligent sensing. Novel contributions of this work: more emphasis on communication technology; reviews of projects and survey work in the area of intelligent sensing; coverage of all aspects of intelligent sensing, such as future directions, challenges, and learning models.
[48] Intelligent sensing and decision making in smart technologies. Areas addressed: editorial on various works such as beamforming, path selection, data compression, and intelligent sensing in healthcare. Areas not addressed: comparative analysis of machine learning algorithms and models; influential parameters in intelligent sensing. Novel contributions of this work: communication networks for intelligent sensing; smart communication networks; latency and Q-learning.
[49] Smart city-oriented Ecological Sensitivity Assessment and Service Value Computing based on Intelligent sensing data processing. Areas addressed: sensing in sustainable rural development; smart sensing and computational algorithms in territorial rural planning; smart city planning. Areas not addressed: communication technologies; application-oriented healthcare. Novel contributions of this work: convergence of AI and 6G; data security and planning; intelligent sensing in pandemic monitoring.
[50] CRUISE research activities toward ubiquitous intelligent sensing environments. Areas addressed: ubiquitous intelligent sensing environments; wireless sensor network research orientation and challenges; hardware deployment. Areas not addressed: explainable AI for intelligent sensing; next-generation communication protocols; extended reality and AI; channel coding. Novel contributions of this work: software platforms in intelligent sensing; lessons learned.
Table 3. Comparison of Machine Learning Algorithms/Models in Intelligent Sensing.
[74] Algorithm/model: epsilon-SVR (eSVR), linear regression (LR), convolutional neural network (CNN), STSVR, T-SVR. Dataset used: DEAP dataset [75]. Description: A framework is proposed for real-time stress recognition using peripheral physiological signals. Parameters influencing performance: BVP and GSR. Advantages: (1) low prediction error; (2) convenient for real-world applications. Limitations: The model is limited to slight movements of the physiological signals.
[76] Algorithm/model: Linear regression (LR) and neural network (NN). Dataset used: Chronic kidney disease (CKD) data from patients. Description: A hybrid intelligent model is proposed to predict chronic kidney disease from patient data in a cloud environment and thereby improve healthcare services in smart cities. Parameters influencing performance: Feature weights (FW). Advantages: The proposed model significantly improves accuracy compared to other models. Limitations: The hybrid model is limited to a small amount of patient-record data.
[77] Algorithm/model: K-NN and SVM classifiers. Dataset used: R.A.L.E lung sound database [78]. Description: The performance of K-NN and SVM classifiers is compared using pulmonary acoustic signals from the RALE database for diagnosing respiratory pathologies. Parameters influencing performance: Mel-frequency cepstral coefficients (MFCC). Advantages: Feature vectors are analyzed via ANOVA and separately fed into the SVM and K-NN classifiers. Limitations: (1) The amount of data used to train and test the classifiers is very small; (2) data collection was carried out in a controlled environment.
[9] Algorithm/model: Ranking SVM. Dataset used: NUS-WIDE dataset [79]. Description: The interaction between social images and online users is analyzed. Parameters influencing performance: Color, texture, and GIST features. Advantages: A powerful learning method and heterogeneous social sensory data improve performance. Limitations: External factors, such as the cultural and geographical context of images, are not considered in the prediction.
[80] Algorithm/model: K-NN, AdaBoost, SVM, RF, and logistic regression (LR). Dataset used: Non-contact sensor data. Description: A non-contact sensor device is designed to predict HR, RR, and HRV from patient records collected over 23 weeks of HD sessions. Parameters influencing performance: Age and BMI (body mass index) of patients. Advantages: High accuracy is obtained using machine learning-based predictive models. Limitations: The main limitations are predicting clinical events in advance and incorporating other parameters, such as BP and patient medical history, in a multi-class prediction model.
[81] Algorithm/model: Support vector machine (SVM). Dataset used: CRCNS-ORIG and DIEM. Description: A model is proposed to detect mental weakness in older and younger people from eye-tracking data collected while they watch a video. Parameters influencing performance: Pupil diameter, eye blinking, gaze allocation, and saccade mean velocity. Advantages: Detection accuracy is improved using an automated feature selection method. Limitations: (1) A limited number of participants; (2) eye-tracking data are collected in a controlled environment.
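As a concrete illustration of the SVM-based pipelines compared in Table 3, the short Python sketch below trains an RBF-kernel SVM on feature vectors of the kind extracted from peripheral physiological signals (e.g., statistics of BVP/GSR windows). It is a minimal sketch assuming scikit-learn and NumPy are available; the synthetic feature matrix and labels are placeholders, not data from the cited studies, and the pipeline is not the implementation used in [74] or [77].

```python
# Illustrative sketch only: SVM classification of hand-crafted physiological features,
# with synthetic placeholders standing in for real BVP/GSR window statistics.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_windows, n_features = 400, 12           # e.g., 12 statistical features per signal window
X = rng.normal(size=(n_windows, n_features))
y = rng.integers(0, 2, size=n_windows)    # binary label, e.g., "stressed" vs. "relaxed"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Standardization matters for SVMs because the RBF kernel is distance-based.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```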
Table 4. Parameters Influencing the Performance of Intelligent Sensing.
[96] (2019) A review on EMG based motor intention prediction of continuous human upper limb motion for human-robot collaboration. Parameters: EMG signal acquisition; pre-processing; feature extraction; accuracy of continuous motion; dependency on autonomy; redundancy. Description: Researchers have explored several approaches and models for motor intention prediction based on EMG signals to estimate continuous motion of the human upper limb, and motion parameters for measuring system performance are also discussed.
[97] (2019) Toward biotherapeutic product real time quality monitoring. Parameters: dynamic nature; adaptive model structure; high levels of noise; complexity; heterogeneity; real-time monitoring. Description: Close monitoring of Critical Quality Attributes (CQAs) of the product in real time is critical to increasing product quality and improving process control. A CQA is a physical, chemical, biological, or microbiological property or characteristic that should lie within an appropriate limit, range, or distribution to ensure the desired product quality. Various monitoring techniques are surveyed to detect CQA uncertainty and the subsequent reduction in end-product variability.
[98] (2019) A novel segmentation based depth map up-sampling. Parameters: depth maps; geodesic distances; superpixels; initial number of pixels; scale constant; splitting threshold. Description: Color image segmentation is performed under the guidance of depth, so the segmented regions follow the depth boundaries well.
[99] (2018) Design and applications of soft sensors in polymer processing: A review. Parameters: temperature; pressure; process speed; flow index; viscosity; product dimensions. Description: A comprehensive survey of soft sensing techniques applied to polymer processing and their importance for the growth of process monitoring, process control, and fault diagnostics. These techniques have replaced physical sensors for practical process measurements in industry.
[100] (2019) Prediction of occupancy level and energy consumption in an office building using blind system identification and neural networks. Parameters: occupancy; prediction accuracy; time factor; historical internal load; energy consumption; structure parameters. Description: A prediction model based on a feed-forward network, ensemble models, and an extreme learning machine (ELM) is established to measure the electricity consumption of the AC system, and the occupancy profile of an office building is estimated using a blind system identification (BSI) approach.
[101] (2019) Semi-supervised deep learning for hyperspectral image classification. Parameters: training samples; classification accuracy; bias parameters; kappa coefficient; weight decay; momentum. Description: A novel semi-supervised deep feature fusion network for classifying hyperspectral images that augments the original training set with pseudo-labeled samples to reduce overfitting during DNN training.
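Two of the evaluation parameters that recur in Table 4, overall classification accuracy and the kappa coefficient (listed, for example, for the hyperspectral classifier in [101]), can be computed directly from predicted and true labels. The sketch below is illustrative only and assumes scikit-learn; the label arrays are placeholders rather than results from the cited work.

```python
# Illustrative only: accuracy and Cohen's kappa for a toy 3-class prediction.
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

y_true = [0, 0, 1, 1, 1, 2, 2, 2, 2, 0]   # placeholder ground-truth labels
y_pred = [0, 1, 1, 1, 2, 2, 2, 2, 1, 0]   # placeholder classifier outputs

print("overall accuracy :", accuracy_score(y_true, y_pred))
print("kappa coefficient:", cohen_kappa_score(y_true, y_pred))  # chance-corrected agreement
print("confusion matrix :\n", confusion_matrix(y_true, y_pred))
```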
Table 5. Publicly Accessible Datasets for Intelligent Sensing.
LILA. Source of data: Labeled Information Library of Alexandria: Biology and Conservation. Format: Images. Exemplar work using the dataset: [111]. Elucidation and comments: Based on deep learning models; a CNN with the ResNet-18 architecture is used. Applications deployed: Image classification. Advantages/limitations: Accuracy on night-time images is lower than on daytime images.
Fashion-MNIST. Source of data: [112]. Format: Images. Exemplar work using the dataset: [113]. Elucidation and comments: More challenging than the original MNIST. Applications deployed: Image classification. Advantages/limitations: A more challenging classification task than MNIST.
DEAP. Source of data: [114]. Format: xls, csv, and ods spreadsheets. Exemplar work using the dataset: [115]. Elucidation and comments: In some cases, such as the arousal, valence, and liking scales, single-trial classification is performed. Applications deployed: Human affective states. Advantages/limitations: Individual physiological differences and noise make single-trial classification challenging.
MovieTweetings. Source of data: Text collected from Twitter and IMDb. Format: Text. Exemplar work using the dataset: [116]. Elucidation and comments: Automatically collects data from structured social media posts and covers recent and relevant movies. Applications deployed: Regression and classification of Twitter data and tweets. Advantages/limitations: Only well-structured tweets are considered.
Toronto Rehab Stroke Pose Dataset. Source of data: [117]. Format: CSV. Exemplar work using the dataset: [118]. Elucidation and comments: Dataset intended for developing and evaluating algorithms for monitoring post-stroke upper-body posture and motion. Applications deployed: Motion tracking, classification. Advantages/limitations: Kinect posture tracking is susceptible to noise and occasionally unstable.
DBpedia Neural Question Answering (DBNQA) dataset. Source of data: Neural SPARQL Machines (NSpM) templates extracted from QALD-7 training queries (QALD-7-train) in conjunction with the LC-QuAD dataset [119]. Format: Question-query pairs. Exemplar work using the dataset: [120]. Elucidation and comments: A reusable and efficient method to generate pairs of natural language questions and queries. Applications deployed: Question answering. Advantages/limitations: BLEU accuracy is affected over large vocabularies.
The Zero Resource Speech Challenge 2015. Source of data: [121]. Format: Sound. Exemplar work using the dataset: [122]. Elucidation and comments: Focused on two levels of linguistic structure: subword units and word units. Applications deployed: Unsupervised discovery of subword features/word units from speech. Advantages/limitations: NLP type and token metrics are not well suited to systems that do not attempt to optimize a lexicon.
CORe50. Source of data: [123]. Format: RGB-D images. Exemplar work using the dataset: [124]. Elucidation and comments: The complex acquisition setting makes the problem harder to solve when learning is done on the training data. Applications deployed: Classification, object recognition. Advantages/limitations: Noticeable accuracy decrease with respect to the cumulative approach.
11k Hands. Source of data: Biometric identification and gender recognition using a large hand-image database [125]. Format: Images with label files (.txt, .csv, .mat). Exemplar work using the dataset: [126]. Elucidation and comments: Gender recognition based on binary classification and biometric identification based on an SVM classifier. Applications deployed: Gender recognition and biometric identification. Advantages/limitations: Biometric identification and gender classification systems that depend on hand images can be constructed.
FieldSAFE. Source of data: Computer vision and biosystems signal processing group. Format: Images and 3D point clouds. Exemplar work using the dataset: [127]. Elucidation and comments: Supports object tracking, detection, classification, sensor fusion, and mapping. Applications deployed: Object detection in agriculture. Advantages/limitations: Projecting annotations to local sensor frames inevitably causes localization errors.
MSP-AVATAR. Source of data: A motion capture database of spontaneous improvisations [128]. Format: Motion-capture video and audio. Exemplar work using the dataset: [129]. Elucidation and comments: Captures the relationship between speech, discourse functions, and non-verbal behavior. Applications deployed: Classification, action detection. Advantages/limitations: Cleaning of the motion capture data was slower than expected.
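As an example of how the publicly accessible datasets in Table 5 can be consumed in practice, the sketch below loads Fashion-MNIST [112] through the Keras dataset API and fits a small dense classifier. This is a minimal illustration assuming TensorFlow/Keras is installed; the architecture and hyperparameters are arbitrary demonstration choices, not the models used in [113].

```python
# Illustrative sketch: load Fashion-MNIST and train a small dense classifier.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale 0-255 pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),   # 28x28 grayscale images
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),                    # light regularization against overfitting
    tf.keras.layers.Dense(10),                       # 10 clothing classes, raw logits
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
print(model.evaluate(x_test, y_test, verbose=2))
```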
Table 6. Some Noteworthy Projects in Intelligent Sensing.
MELOA [204]. Funding firm/agency: The European Union's Horizon 2020 Research and Innovation Programme. Technology used: Autonomous underwater technology, GPRS, satellite communications, and solar panels. Aim: The project designs WAVY drifter units for ocean observation and monitoring systems.
AMOGH [205]. Funding firm/agency: National Institute of Ocean Technology and IIT Madras, India. Technology used: Artificial intelligence, underwater navigation and imaging. Aim: The vehicle possesses intelligence for picking and placing underwater objects and processing audio signals.
Autonomous Underwater Vehicle (AUV). Funding firm/agency: CSIR-CMRI, India. Technology used: Autonomous underwater technology. Aim: The vehicle is used for underwater operations such as deep-sea mining and exploration, and for collecting scientific data ranging from habitat information on underwater biomass to oceanographic and bathymetric data.
SSB PANEL (Sonar Signal Behavior Panel) [206]. Funding firm/agency: Defence Research and Development Organisation (DRDO), India. Technology used: Deep learning, machine learning, and computer vision. Aim: Classification of sonar signals using deep convolutional neural networks.
Beamforming using AI for 6G Networks [207]. Funding firm/agency: Viavi Solutions and Brunel University London, UK. Technology used: Artificial intelligence, massive MIMO, and mmWave systems. Aim: An intelligent beamforming (IB) scheme is proposed to drive 6G.
Intelligent Environments for Wireless Communication for 6G [208]. Funding firm/agency: Broadband Wireless Networking Lab, Georgia Institute of Technology. Technology used: Millimeter-wave and terahertz-band communications, ultra-massive MIMO. Aim: The project treats the 6G wireless environment as an intelligent communication environment to improve communication distance and data rates in the mmWave and THz frequency bands.
6Genesis, the 6G-Enabled Wireless Smart Society and Ecosystem [209]. Funding firm/agency: University of Oulu, Finland. Technology used: Artificial intelligence, wireless connectivity and distributed intelligent computing, 5G/6G radio access network (RAN). Aim: The goal of this project is to explore the development of the 6G standard and the implementation of 5G mobile communication technology.
SME 4.0, Industry 4.0 for SMEs (Smart Manufacturing and Logistics for SMEs in an X-to-order and Mass Customization Environment) [210]. Funding firm/agency: European Union's Horizon 2020 R&I Programme under the Marie Skłodowska-Curie actions. Technology used: Smart logistics and smart manufacturing in Industry 4.0. Aim: The project identifies the needs and enablers for Industry 4.0 applications and implementation, and fosters SME-specific concepts and strategies in SME manufacturing and logistics.
SmartFactory: Cold 4.0 project [211]. Funding firm/agency: Gestamp, France. Technology used: Smart factory, Industry 4.0 data analytics, chassis quality project. Aim: The project envisions creating more efficient and flexible manufacturing plants and more consistent processes by analyzing data and adding intelligence to the processes.
mF2C Project [212]. Funding firm/agency: European Union's Horizon 2020 research and innovation programme. Technology used: Fog computing, cloud computing. Aim: The main goal of this project is to address the need for open and coordinated management of fog and cloud computing systems.
WATERBEE DA (WaterBee Smart Irrigation Systems Demonstration Action) [213]. Funding firm/agency: European Union's Horizon 2020 research and innovation programme. Technology used: Smart irrigation, intelligent irrigation modeling, soil sensor technology, web and smartphone user interfaces, operational sensors. Aim: The project demonstrates and evaluates a smart irrigation and water management system that exploits recent advances in wireless networking and environmental sensors.
KisanRaja Smart Irrigation Device [214]. Funding firm/agency: Ministry of Micro, Small and Medium Enterprises (MSME), Government of India. Technology used: IoT, data analytics, AI, ML, mobile pump controllers, wireless valve controllers, wireless sensors, and satellite data. Aim: It is designed to transform the way farmers interact with irrigation motors, allowing a farmer to manage the agricultural motor using a mobile or landline phone from the comfort of home [215].
Smart Cities Mission: Building a Smart India [216]. Funding firm/agency: Government of India. Technology used: Internet of Things (IoT), Information and Communication Technology (ICT), big data, 5G connectivity, sensor technology, geospatial technology, robotics. Aim: The Government of India started this project for urban areas that must have all the core infrastructure required for citizens to lead a civilized life in a sustainable environment, including guaranteed water and electricity supplies, proper sanitation, public transport, sufficient healthcare, education facilities, and affordable housing for economically weaker sections of society. Beyond these, such cities must also offer robust information technology connectivity, which improves local governance.
Ambulatory Sensing and Point-of-Care Recommendation for IoT-based Healthcare [217]. Funding firm/agency: Kalam Technology National Fellowship (INAE), India. Technology used: Cloud computing, fog computing. Aim: The project focuses on efficient decision delivery based on real-time monitoring of conditions such as patient health and road conditions; based on these decisions, the system finds the nearest hospital through a safer route.
SAFE: Secure And Usable IoT Ecosystem [217]. Funding firm/agency: UGC-UKIERI, India. Technology used: IoT, Raspberry Pi, sensor technology. Aim: The project explores the impact of IoT on an intelligent ecosystem from the perspective of end-to-end security and context-aware intelligent data access.
i-Plug Control [218]. Funding firm/agency: DoQuick Services Pvt. Ltd., India. Technology used: Smart home technologies, intelligent sensors, automatic speech recognition, mobile development, artificial intelligence, machine learning. Aim: The project focuses on smart home technology that puts control at the user's fingertips, from turning lights on and off and playing music to adjusting the room temperature with a tap on a smartphone.
Hyperspectral Microscopy [219]. Funding firm/agency: National Institute of Standards and Technology, USA. Technology used: Optical technology, photometry, laser metrology. Aim: The project aims at measuring the optical properties of materials through the use of commercial and custom hyperspectral imaging.
Ocean Color [220]. Funding firm/agency: National Institute of Standards and Technology, USA. Technology used: Marine science, optical physics, and calibration services. Aim: Ocean color radiometry provides essential data on phytoplankton concentration and dissolved organic matter, which allows analysis of primary productivity, global carbon cycling, and the influence of both on the global climate.
Advanced Dimensional Measurement Systems [221]. Funding firm/agency: National Institute of Standards and Technology, USA. Technology used: Dimensional metrology, calibration services, and documentary standards. Aim: ADMS furnishes the infrastructure needed for the adoption of new measurement technology.
Project N [222]. Funding firm/agency: Shanghai-based Pateo Group Co., Shanghai, China. Technology used: Wireless communication, artificial intelligence, automation. Aim: Smart cars: an electric-vehicle project with a range extender, a small gasoline motor that charges the battery. The car offers traffic forecasts and syncs with the driver's social networks.
Smart Cities, Australia [223]. Funding firm/agency: Australian Government, Australia. Technology used: IoT technologies, artificial intelligence, sensor technology, intelligent asset management. Aim: Smart cities leverage innovative technologies to enhance the quality and performance of services, reduce cost and resource consumption, and engage inhabitants more effectively and actively.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
