10th Anniversary of Electronics: Recent Advances in Computer Science & Engineering

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (31 May 2022) | Viewed by 220664

Special Issue Editors


Guest Editor
1. BISITE Research Group, University of Salamanca, 37007 Salamanca, Spain
2. Air Institute, IoT Digital Innovation Hub, 37188 Salamanca, Spain
3. Department of Electronics, Information and Communication, Faculty of Engineering, Osaka Institute of Technology, Osaka 535-8585, Japan
Interests: artificial intelligence; smart cities; smart grids

Guest Editor
School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK
Interests: image and video processing, analysis, coding, storage, retrieval; multimedia systems; computer graphics and virtual reality; artificial intelligence; neural networks; human–computer interaction; medical imaging

Guest Editor
Department of Computer Science, Karlstad University, 651 88 Karlstad, Sweden
Interests: cloud computing; edge computing; optimization; artificial intelligence; high-performance computing

Special Issue Information

Dear Colleagues,

It has now been ten years since the first paper was published in Electronics in 2011. The road has had many highs and lows, but we are extremely proud to have reached the important milestone of the journal's 10th anniversary. To celebrate this occasion, a Special Issue is being prepared that invites members of the Editorial Board as well as renowned past editors and authors to submit their high-quality work on the topic of “Computer Science & Engineering”.

Prof. Dr. Juan M. Corchado
Prof. Dr. Stefanos Kollias
Prof. Dr. Javid Taheri
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI
  • IoT
  • AIoT
  • Machine Learning
  • Industry 4.0
  • Smart Cities
  • Networks
  • Software Engineering
  • Intelligent Interaction
  • Edge Computing and Fog Computing

Published Papers (65 papers)


Research


11 pages, 13072 KiB  
Article
HeightNet: Monocular Object Height Estimation
by In Su Kim, Hyeongbok Kim, Seungwon Lee and Soon Ki Jung
Electronics 2023, 12(2), 350; https://doi.org/10.3390/electronics12020350 - 10 Jan 2023
Viewed by 3709
Abstract
Monocular depth estimation is a traditional computer vision task that predicts the distance of each pixel relative to the camera from one 2D image. Relative height information about objects lying on a ground plane can be calculated through several processing steps from the depth image. In this paper, we propose a height estimation method for directly predicting the height of objects from a 2D image. The proposed method utilizes an encoder-decoder network for pixel-wise dense prediction based on height consistency. We used the CARLA simulator to generate 40,000 training samples from different positions in five areas within the simulator. The experimental results show that the object’s height map can be estimated regardless of the camera’s location.
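
The paper's exact architecture and training setup are not reproduced above; the following minimal PyTorch sketch only illustrates the general encoder-decoder, pixel-wise dense-prediction pattern the abstract describes (layer sizes and input shape are invented placeholders, not the authors' model).

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    """Toy encoder-decoder for dense per-pixel regression (e.g., a height map)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 64
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),              # one height channel
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyEncoderDecoder()
rgb = torch.randn(1, 3, 128, 128)   # one 2D input image
height_map = model(rgb)             # dense height prediction, same spatial size
print(height_map.shape)             # torch.Size([1, 1, 128, 128])
```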

20 pages, 4569 KiB  
Article
Classification of Roads and Types of Public Roads Using EOG Smart Glasses and an Algorithm Based on Machine Learning While Driving a Car
by Rafał Doniec, Natalia Piaseczna, Frédéric Li, Konrad Duraj, Hawzhin Hozhabr Pour, Marcin Grzegorzek, Katarzyna Mocny-Pachońska and Ewaryst Tkacz
Electronics 2022, 11(18), 2960; https://doi.org/10.3390/electronics11182960 - 18 Sep 2022
Cited by 7 | Viewed by 1789
Abstract
Driving a car is an activity that has become a necessary part of life in the modern world. Research exploring the topic of safety on the roads has therefore become increasingly relevant. In this paper, we propose a recognition algorithm based on physiological signals acquired from JINS MEME ES_R smart glasses (electrooculography, acceleration and angular velocity) to classify four commonly encountered road types: city road, highway, housing estate and undeveloped area. Data from 30 drivers were acquired in real driving conditions. Hand-crafted statistical features were extracted from the physiological signals to train and evaluate a random forest classifier. We achieved an overall accuracy, precision, recall and F1 score of 87.64%, 86.30%, 88.12% and 87.08%, respectively, on the test dataset.
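
The pipeline in this abstract, hand-crafted statistical features from windowed signals feeding a random forest, can be sketched as follows; the channel count, window length, and feature set are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def window_features(window):
    # window: (n_samples, n_channels) of EOG / acceleration / angular-velocity data
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

rng = np.random.default_rng(0)
windows = rng.normal(size=(400, 250, 6))   # 400 windows, 250 samples, 6 channels
labels = rng.integers(0, 4, size=400)      # city, highway, housing estate, undeveloped
X = np.array([window_features(w) for w in windows])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```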

18 pages, 552 KiB  
Article
A Comprehensive Analysis of Proportional Intensity-Based Software Reliability Models with Covariates
by Siqiao Li, Tadashi Dohi and Hiroyuki Okamura
Electronics 2022, 11(15), 2353; https://doi.org/10.3390/electronics11152353 - 28 Jul 2022
Cited by 1 | Viewed by 1204
Abstract
This paper focuses on the so-called proportional intensity-based software reliability models (PI-SRMs), which are extensions of the common non-homogeneous Poisson process (NHPP)-based SRMs and describe the probabilistic behavior of the software fault-detection process by incorporating time-dependent software metrics data observed in the development process. The PI-SRM was proposed by Rinsaka et al. in the 2006 paper “PISRAT: Proportional Intensity-Based Software Reliability Assessment Tool”. Specifically, we generalize this seminal model by introducing eleven well-known fault-detection time distributions, and investigate their goodness-of-fit and predictive performances. In numerical illustrations with four data sets collected in real software development projects, we utilize maximum likelihood estimation to estimate model parameters with three time-dependent covariates (test execution time, failure identification work, and computer time for failure identification), and examine the performances of our PI-SRMs in comparison with the existing NHPP-based SRMs without covariates. It is shown that our PI-SRMs could give better goodness-of-fit and predictive performances in many cases.
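
The core modeling idea is a baseline NHPP intensity scaled by an exponential covariate term, lambda(t) = lambda0(t) * exp(beta . x(t)). A small numeric sketch follows; the baseline form and all parameter values are assumptions for illustration, not estimates from the paper.

```python
import numpy as np

def baseline_intensity(t, a=100.0, b=0.05):
    # an exponential (Goel-Okumoto-type) NHPP intensity: a * b * exp(-b * t)
    return a * b * np.exp(-b * t)

def proportional_intensity(t, covariates, beta):
    # proportional-intensity model: baseline scaled by exp(beta . x(t))
    return baseline_intensity(t) * np.exp(np.dot(beta, covariates))

t = 10.0
x_t = np.array([3.2, 1.5, 0.7])    # covariate values observed at time t
beta = np.array([0.1, 0.05, 0.2])  # regression coefficients (estimated by MLE in the paper)
print(proportional_intensity(t, x_t, beta))
```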

12 pages, 1955 KiB  
Article
Validating Syntactic Correctness Using Unsupervised Clustering Algorithms
by Sanguk Noh, Kihyun Chung and Jaebock Shim
Electronics 2022, 11(14), 2113; https://doi.org/10.3390/electronics11142113 - 06 Jul 2022
Viewed by 1326
Abstract
When developing a complex system in an open platform setting, users need to compose and maintain a systematic requirement specification. This paper proposes a solution to guarantee a syntactically accurate requirement specification that minimizes the ambiguity caused by ungrammatical sentences. Our system has a set of standard jargon and templates that are used as a guideline to write grammatically correct sentences. Given a database of standard technical Korean (STK) templates, the system that we have designed and implemented assigns a new sentence to a specific cluster. If the system finds an identical template in the cluster, it confirms the new sentence as a sound one. Otherwise, the system uses unsupervised clustering algorithms to return the template that most closely resembles the syntax of the input sentence. We tested our proposed system in the field of open platform development for a railway train. In the experiment, our system learned to partition templates into clusters while reducing null attributes of an instance using an autoencoding procedure. Given a set of clusters, the system was able to successfully recommend templates that were syntactically similar to the structure of the input sentence. Since the degree of similarity for 500 instances was 97.00% on average, we conclude that our robust system can provide an appropriate template that users can use to correct syntactically incorrect sentences.
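
A hedged sketch of the cluster-then-recommend step: the paper reduces templates with an autoencoder before clustering, whereas this illustration simply uses TF-IDF vectors, k-means, and cosine similarity (the templates and test sentence are invented).

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

templates = ["the system shall send <signal> within <time>",
             "the device shall stop when <condition> occurs",
             "the operator shall confirm <action> before departure"]
vec = TfidfVectorizer().fit(templates)
T = vec.transform(templates)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(T)

sentence = "the system shall send the brake signal within 2 seconds"
s = vec.transform([sentence])
cluster = km.predict(s)[0]                               # assign sentence to a cluster
members = [i for i, c in enumerate(km.labels_) if c == cluster]
best = max(members, key=lambda i: cosine_similarity(s, T[i])[0, 0])
print("recommended template:", templates[best])          # closest syntactic match
```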

12 pages, 2817 KiB  
Article
Deepsign: Sign Language Detection and Recognition Using Deep Learning
by Deep Kothadiya, Chintan Bhatt, Krenil Sapariya, Kevin Patel, Ana-Belén Gil-González and Juan M. Corchado
Electronics 2022, 11(11), 1780; https://doi.org/10.3390/electronics11111780 - 03 Jun 2022
Cited by 51 | Viewed by 18557
Abstract
The predominant means of communication is speech; however, there are persons whose speaking or hearing abilities are impaired, and communication presents a significant barrier for them. The use of deep learning methods can help to reduce these communication barriers. This paper proposes a deep learning-based model that detects and recognizes words from a person’s gestures. Deep learning models, namely LSTM and GRU (feedback-based learning models), are used to recognize signs from isolated Indian Sign Language (ISL) video frames. Four different sequential combinations of LSTM and GRU (two layers of each) were used with our own dataset, IISL2020. The proposed model, consisting of a single layer of LSTM followed by GRU, achieves around 97% accuracy over 11 different signs. This method may help persons who do not know sign language to communicate with persons whose speech or hearing is impaired.
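
The best-performing variant described above, a single LSTM layer followed by GRU with 11 output signs, might be sketched in Keras as follows; the sequence length, feature size, and layer widths are placeholders, not the published configuration.

```python
import tensorflow as tf

num_signs = 11  # classes in IISL2020
model = tf.keras.Sequential([
    tf.keras.Input(shape=(30, 258)),                 # e.g., 30 frames of keypoint features
    tf.keras.layers.LSTM(64, return_sequences=True), # LSTM layer first
    tf.keras.layers.GRU(64),                         # followed by GRU
    tf.keras.layers.Dense(num_signs, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```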

24 pages, 4851 KiB  
Article
Integration and Deployment of Cloud-Based Assistance System in Pharaon Large Scale Pilots—Experiences and Lessons Learned
by Andrej Grguric, Miran Mosmondor and Darko Huljenic
Electronics 2022, 11(9), 1496; https://doi.org/10.3390/electronics11091496 - 06 May 2022
Cited by 1 | Viewed by 1986
Abstract
The EU project Pharaon aims to support older European adults by integrating digital services, tools, interoperable open platforms, and devices. One of its objectives is to validate the integrated solutions in large-scale pilots. The integration of mature solutions and existing systems is one of the preconditions for the successful realization of the different aims of the pilots. One such solution is an intelligent, privacy-aware home-care assistance system, SmartHabits. After briefly introducing Pharaon and SmartHabits, the authors propose different Pharaon models in the Ambient/Active Assisted Living (AAL) domain, namely the Pharaon conceptual model, Pharaon reference logical architecture view, AAL ecosystem model, meta AAL ecosystem model, and Pharaon ecosystem and governance models. Building on the proposed models, the authors provide details of the holistic integration and deployment process of the SmartHabits system into the Pharaon ecosystem. Both technical and supporting integration challenges and activities are discussed. Technical activities, including syntactic and semantic integration and securing the transfer of sensitive Pharaon data, are among the priorities. Supporting activities include achieving legal and regulatory compliance, device procurement, and use-case co-designing under COVID-19 conditions.

19 pages, 6904 KiB  
Article
Digital Image Compression Using Approximate Addition
by Padmanabhan Balasubramanian, Raunaq Nayar and Douglas L. Maskell
Electronics 2022, 11(9), 1361; https://doi.org/10.3390/electronics11091361 - 25 Apr 2022
Cited by 2 | Viewed by 2002
Abstract
This paper analyzes the usefulness of approximate addition for digital image compression. The Discrete Cosine Transform (DCT) is an important operation in digital image compression. We used accurate addition and approximate addition individually while calculating the DCT to perform image compression. Accurate addition was performed using an accurate adder, and approximate addition was performed using different approximate adders individually. The accurate adder and approximate adders were implemented in an application-specific integrated circuit (ASIC)-type design environment using a 32/28 nm complementary metal oxide semiconductor (CMOS) standard cell library, and in a field-programmable gate array (FPGA)-based design environment using a Xilinx Artix-7 device. Error analysis was performed to calculate the error parameters of the various approximate adders by applying one million random input vectors. We observed that the approximate adders reduce the file size of compressed images better than the accurate adder, while simultaneously enabling reductions in design parameters. For the ASIC-type implementation using standard cells, an optimum approximate adder achieved a 27.1% reduction in delay, a 46.4% reduction in area, and a 50.3% reduction in power compared to a high-speed accurate carry look-ahead adder. For the FPGA-based implementation, an optimum approximate adder achieved an 8% reduction in delay and a 19.7% reduction in power while requiring 47.6% fewer look-up tables (LUTs) and 42.2% fewer flip-flops compared to the native accurate FPGA adder.
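
For readers unfamiliar with approximate adders, here is a generic lower-part-OR adder in Python (one classic design, not necessarily one of the adders evaluated in the paper): the k low bits are OR-ed instead of added, trading accuracy for simpler logic.

```python
def approx_add(a: int, b: int, k: int = 4) -> int:
    """Approximate addition: OR the k low bits, add the high bits exactly."""
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)       # approximate low part (no carries)
    high = ((a >> k) + (b >> k)) << k   # accurate high part, low carry dropped
    return high | low

exact = 1234 + 5678
approx = approx_add(1234, 5678)
print(exact, approx, "error:", exact - approx)   # 6912 6910 error: 2
```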

14 pages, 1556 KiB  
Article
AI Ekphrasis: Multi-Modal Learning with Foundation Models for Fine-Grained Poetry Retrieval
by Muhammad Shahid Jabbar, Jitae Shin and Jun-Dong Cho
Electronics 2022, 11(8), 1275; https://doi.org/10.3390/electronics11081275 - 18 Apr 2022
Cited by 1 | Viewed by 2153
Abstract
Artificial intelligence research in natural language processing applied to poetry struggles with the recognition of holistic content such as poetic symbolism, metaphor, and other fine-grained attributes. Given these challenges, multi-modal image–poetry reasoning and retrieval remain largely unexplored. Our recent accessibility study indicates that poetry is an effective medium for conveying visual artwork attributes and improving artwork appreciation among people with visual impairments. We therefore introduce a deep learning approach for the automatic retrieval of poetry suited to input images. The recent state-of-the-art CLIP model matches multi-modal visual and text features using cosine similarity; however, it lacks shared cross-modality attention features to model fine-grained relationships. The approach proposed in this work takes advantage of CLIP’s strong pre-training and overcomes this limitation by introducing shared attention parameters to better model the fine-grained relationship between both modalities. We test and compare our approach using the expertly annotated MultiM-Poem dataset, considered the largest public image–poetry pair dataset for English poetry. The proposed approach aims to solve the problems of image-based attribute recognition and automatic retrieval of fine-grained poetic verses. The test results show that the shared attention parameters improve fine-grained attribute recognition, and the proposed approach is a significant step towards automatic multi-modal retrieval for improved artwork appreciation by people with visual impairments.
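
Stripped of the shared-attention component, the retrieval backbone is cosine similarity between an image embedding and pre-encoded poem embeddings; a sketch with random stand-in vectors in place of CLIP features:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
image_embedding = rng.normal(size=512)          # stand-in for a CLIP image feature
poem_embeddings = rng.normal(size=(1000, 512))  # stand-ins for encoded poems
scores = [cosine(image_embedding, p) for p in poem_embeddings]
top5 = np.argsort(scores)[::-1][:5]             # highest-scoring poems first
print("best-matching poem indices:", top5)
```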

21 pages, 699 KiB  
Article
Preserving Privacy of High-Dimensional Data by l-Diverse Constrained Slicing
by Zenab Amin, Adeel Anjum, Abid Khan, Awais Ahmad and Gwanggil Jeon
Electronics 2022, 11(8), 1257; https://doi.org/10.3390/electronics11081257 - 15 Apr 2022
Cited by 2 | Viewed by 1486
Abstract
In the modern world of digitalization, data growth, aggregation and sharing have escalated drastically. Users share huge amounts of data due to the widespread adoption of Internet-of-Things (IoT) and cloud-based smart devices. Such data can contain confidential attributes about various individuals; therefore, privacy preservation has become an important concern. Many privacy-preserving data publication models have been proposed to ensure data sharing without privacy disclosures. However, publishing high-dimensional data with sufficient privacy is still a challenging task, and very little focus has been given to optimal privacy solutions for high-dimensional data. In this paper, we propose a novel privacy-preserving model to anonymize high-dimensional data (prone to various privacy attacks, including probabilistic, skewness, and gender-specific attacks). Our proposed model combines l-diversity with constrained slicing and vertical division. The proposed model can protect against the above-stated attacks with minimal information loss. Extensive experiments on real-world datasets show that our proposed model outperforms its counterparts.
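
One building block of the model, the l-diversity requirement itself, is easy to state in code: every bucket of published records must contain at least l distinct sensitive values. A minimal check (toy records; the constrained slicing and vertical division are not shown):

```python
from collections import defaultdict

def is_l_diverse(records, bucket_of, sensitive_of, l=3):
    """True if every bucket holds at least l distinct sensitive values."""
    buckets = defaultdict(set)
    for r in records:
        buckets[bucket_of(r)].add(sensitive_of(r))
    return all(len(values) >= l for values in buckets.values())

records = [("b1", "flu"), ("b1", "hiv"), ("b1", "cold"),
           ("b2", "flu"), ("b2", "flu"), ("b2", "cold")]
print(is_l_diverse(records, bucket_of=lambda r: r[0],
                   sensitive_of=lambda r: r[1], l=3))  # False: bucket b2 has only 2
```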

13 pages, 2513 KiB  
Article
Evolving CNN with Paddy Field Algorithm for Geographical Landmark Recognition
by Kanishk Bansal, Amar Singh, Sahil Verma, Kavita, Noor Zaman Jhanjhi, Mohammad Shorfuzzaman and Mehedi Masud
Electronics 2022, 11(7), 1075; https://doi.org/10.3390/electronics11071075 - 29 Mar 2022
Cited by 16 | Viewed by 2023
Abstract
Convolutional Neural Networks (CNNs) operate within a wide variety of hyperparameters, the optimization of which can greatly improve CNN performance on the task at hand. However, these hyperparameters can be very difficult to optimize, either manually or by brute force. Neural architecture search (NAS) methods have been developed to address this problem and are used to find the best architectures for the deep learning paradigm. In this article, a CNN is evolved with the paddy field algorithm (PFA), a well-known nature-inspired metaheuristic. We show that PFA can evolve the neural architecture using the Google Landmarks Dataset V2, one of the toughest datasets available in the literature. The CNN’s accuracy increases from 0.53 to 0.76, an improvement of more than 40%. The evolved architecture also shows some major improvements in hyperparameters that are normally considered the best suited for the task.

18 pages, 1481 KiB  
Article
Human–Machine Interaction Using Probabilistic Neural Network for Light Communication Systems
by Julian Webber, Abolfazl Mehbodniya, Rui Teng and Ahmed Arafa
Electronics 2022, 11(6), 932; https://doi.org/10.3390/electronics11060932 - 17 Mar 2022
Cited by 4 | Viewed by 1927
Abstract
Hand gestures are a natural and efficient means of controlling systems and are one of the promising but challenging areas of human–machine interaction (HMI). We propose a system that recognizes gestures by processing interrupted patterns of light in a visible light communications (VLC) system. Our solution is aimed at emerging light communication systems and can facilitate human–computer interaction for services in healthcare, robot systems, commerce and the home. The system exploits existing light communications infrastructure using low-cost and readily available components. Different finger sequences are detected using a probabilistic neural network (PNN) trained on light transitions between fingers. A novel pre-processing of the light sampled on a photodiode is described that facilitates the use of the PNN with limited complexity. The contributions of this work include the development of a sensing technique for light communication systems and a novel PNN pre-processing methodology that converts the light sequences into manageable-size matrices, along with a hardware implementation showing proof of concept under natural lighting conditions. Despite its modest complexity, our system correctly recognized gestures with an accuracy of 73%, demonstrating the potential of this technology. We show that the accuracy depends on the PNN pre-processing matrix size and the Gaussian spread function. The emerging IEEE 802.11bb ‘Li-Fi’ standard is expected to bring light communications infrastructure into virtually every room across the world, and a methodology for exploiting such systems for gesture sensing is expected to be of considerable interest and value to society.
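
A probabilistic neural network is compact enough to sketch directly: one Gaussian kernel per training pattern, summed per class, with the spread sigma playing the role of the Gaussian spread function mentioned above. Data shapes here are illustrative, not the light-transition matrices used in the paper.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Classify x by the largest class-wise mean of Gaussian kernels (Parzen window)."""
    scores = {}
    for c in np.unique(y_train):
        d = X_train[y_train == c] - x
        scores[c] = np.mean(np.exp(-np.sum(d * d, axis=1) / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(3, 1, (20, 8))])
y = np.array([0] * 20 + [1] * 20)
print(pnn_predict(X, y, x=np.full(8, 2.8)))  # expected: class 1
```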

19 pages, 20306 KiB  
Article
A Hybrid Method for the Fault Diagnosis of Onboard Traction Transformers
by Junmin Zhu, Shuaibing Li, Yang Liu and Haiying Dong
Electronics 2022, 11(5), 762; https://doi.org/10.3390/electronics11050762 - 01 Mar 2022
Cited by 4 | Viewed by 1593
Abstract
As vital equipment in high-speed train power supply systems, the failure of onboard traction transformers affects the safe and stable operation of the trains. To diagnose faults in onboard traction transformers, this paper proposes a hybrid optimization method that uses support vector machines (SVMs) as the fault diagnosis system, allowing faults to be located and analyzed quickly and accurately. Considering the limitations of traditional methods for identifying transformer faults, this study used kernel principal component analysis (KPCA) to analyze the feature quantity of dissolved gas analysis (DGA) data, electrical test data, and oil quality test data. The improved seagull optimization algorithm (ISOA) was used to optimize the SVM, and a Henon chaotic map was introduced to initialize the population. Combined with differential evolution (DE) based on an adaptive formula, the foraging formula of the seagull optimization algorithm (SOA) was improved to increase the diversity of the algorithm and enhance its ability to find the optimal SVM parameters, making the simulation results more accurate. Finally, the KPCA–ADESOA–SVM model was constructed and applied to fault diagnosis for the traction transformer. The example analysis compared the diagnosis results of the proposed model with those of traditional diagnosis models, showing further optimization of the feature quantity and improvements in diagnosis accuracy. This proves that the proposed diagnosis model has high generalization performance and can effectively increase the fault diagnosis accuracy and speed for traction transformers.
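
The diagnosis chain of KPCA feature reduction followed by an SVM classifier can be sketched as a scikit-learn pipeline; the seagull-optimizer parameter tuning is not reproduced, and all shapes and hyperparameters below are placeholders.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))    # DGA + electrical + oil-quality test features
y = rng.integers(0, 5, size=200)  # fault classes
clf = make_pipeline(KernelPCA(n_components=6, kernel="rbf"),
                    SVC(C=10, gamma="scale"))  # C and gamma: what ISOA would tune
clf.fit(X, y)
print(clf.predict(X[:3]))
```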

14 pages, 4389 KiB  
Article
Inverse Transform Using Linearity for Video Coding
by Hyeonju Song and Yung-Lyul Lee
Electronics 2022, 11(5), 760; https://doi.org/10.3390/electronics11050760 - 01 Mar 2022
Cited by 1 | Viewed by 1459
Abstract
In hybrid block-based video coding, the transform plays an important role in energy compaction. Transform coding converts residual data in the spatial domain into frequency-domain data, thereby concentrating energy in a lower frequency band. In VVC (versatile video coding), the primary transform is performed using DCT-II (discrete cosine transform type 2), DST-VII (discrete sine transform type 7), and DCT-VIII (discrete cosine transform type 8). Considering that DCT-II, DST-VII, and DCT-VIII are all linear transforms, an inverse transform is proposed that reduces the number of computations by exploiting the linearity of the transform. When the proposed inverse transform is applied to the VVC encoder and decoder, run-time savings can be achieved without decreasing coding performance relative to the VVC decoder. Under VVC common-test conditions (CTC), average decoding time savings of 4% and 10% are achieved for the all intra (AI) and random access (RA) configurations, respectively.
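
The property the proposal rests on is that these transforms are linear, so T(a + b) = T(a) + T(b): one transform of a sum can stand in for two separate transforms. A quick numerical check with SciPy's DCT-II:

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
a, b = rng.normal(size=16), rng.normal(size=16)  # two residual blocks
lhs = dct(a + b, type=2)
rhs = dct(a, type=2) + dct(b, type=2)
print(np.allclose(lhs, rhs))  # True: linearity lets one transform do the work of two
```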

27 pages, 952 KiB  
Article
An Efficient Adaptive Fuzzy Hierarchical Sliding Mode Control Strategy for 6 Degrees of Freedom Overhead Crane
by Hung Van Pham, Quoc-Dong Hoang, Minh Van Pham, Dung Manh Do, Nha Hoang Phi, Duy Hoang, Hai Xuan Le, Thai Dinh Kim and Linh Nguyen
Electronics 2022, 11(5), 713; https://doi.org/10.3390/electronics11050713 - 25 Feb 2022
Cited by 6 | Viewed by 2061
Abstract
The paper proposes a new approach to efficiently control a three-dimensional overhead crane with 6 degrees of freedom (DoF). Most works proposing a control law for a gantry crane assume that it has five output variables: the three positions of the trolley, bridge, and pulley, and the two swing angles of the hoisting cable. In fact, the elasticity of the hoisting cable, which causes oscillation in the cable direction, is not fully incorporated into such models. Our work therefore considers six under-actuated outputs in the crane system. To design an efficient controller for the 6 DoF crane, we first employ the hierarchical sliding mode control approach, which not only guarantees stability but also minimizes the sway and oscillation of the overhead crane when it transports a payload to a desired location. Moreover, the unknown and uncertain parameters of the system caused by actuator nonlinearity and external disturbances are adaptively estimated and inferred using a fuzzy inference rule mechanism, which results in efficient real-time operation of the crane. More importantly, the stability of the crane under the proposed algorithm is theoretically proved using a Lyapunov function. The proposed control approach was implemented in a synthetic environment for extensive evaluation, where the obtained results demonstrate its effectiveness.

15 pages, 2406 KiB  
Article
Robot Grasping Based on Stacked Object Classification Network and Grasping Order Planning
by Chenlu Liu, Di Jiang, Weiyang Lin and Luis Gomes
Electronics 2022, 11(5), 706; https://doi.org/10.3390/electronics11050706 - 25 Feb 2022
Cited by 2 | Viewed by 2026
Abstract
In this paper, robot grasping of stacked objects is studied based on object detection and grasping order planning. Firstly, a novel stacked object classification network (SOCN) is proposed to recognize stacked objects. The network takes into account the visible volume of the objects to further adjust its inverse density parameters, which makes the training process faster and smoother. SOCN adopts a transformer architecture with a self-attention mechanism for feature learning. Subsequently, a grasping order planning method is investigated that extracts the geometric relations and dependencies between stacked objects and calculates a security score based on object relations, classification, and size. The proposed method is evaluated using a depth camera and a UR-10 robot to complete grasping tasks. The results show that our method has high accuracy for stacked object classification, and that the planned grasping order executes safely and effectively.

14 pages, 365 KiB  
Article
Predictive Analysis of COVID-19 Symptoms in Social Networks through Machine Learning
by Clístenes Fernandes da Silva, Arnaldo Candido Junior and Rui Pedro Lopes
Electronics 2022, 11(4), 580; https://doi.org/10.3390/electronics11040580 - 15 Feb 2022
Cited by 2 | Viewed by 1852
Abstract
Social media is a great source of data for analysis, since it provides ways for people to share emotions, feelings, ideas, and even symptoms of diseases. By the end of 2019, a global pandemic alert was raised concerning a virus with a high contamination rate that could cause respiratory complications. To help identify those who may have the symptoms of this disease or to detect who is already infected, this paper analyzed the performance of eight machine learning algorithms (KNN, Naive Bayes, Decision Tree, Random Forest, SVM, simple Multilayer Perceptron, Convolutional Neural Networks and BERT) in the search for and classification of tweets that mention self-reported COVID-19 symptoms. The dataset was labeled using a set of disease symptom keywords provided by the World Health Organization. The tests showed that the Random Forest algorithm had the best results, closely followed by BERT and Convolutional Neural Networks, although traditional machine learning algorithms can also provide good results. This work could also aid in the selection of algorithms for identifying disease symptoms in social media content.
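
A minimal text-classification sketch in the spirit of the study's classical baselines (tiny invented tweets; the actual work labels real tweets with WHO symptom keywords and compares eight algorithms):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

tweets = ["lost my sense of smell and have a fever",
          "great match last night",
          "dry cough and feeling tired all week",
          "new phone arrived today"]
labels = [1, 0, 1, 0]  # 1 = self-reported COVID-19 symptoms
clf = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0))
clf.fit(tweets, labels)
print(clf.predict(["i have a headache and a fever"]))
```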

14 pages, 2255 KiB  
Article
Artificial Visual System for Orientation Detection
by Jiazhen Ye, Yuki Todo, Zheng Tang, Bin Li and Yu Zhang
Electronics 2022, 11(4), 568; https://doi.org/10.3390/electronics11040568 - 13 Feb 2022
Cited by 1 | Viewed by 1598
Abstract
The human visual system is one of the most important components of the nervous system, responsible for visual perception. Research on orientation detection, in which neurons of the visual cortex respond only to a line stimulus in a particular orientation, is an important driving force of computer vision and biological vision; however, the principle underlying orientation detection remains a mystery. To address it, we propose a completely new mechanism that explains planar orientation detection in a quantitative manner. First, we assume that there are planar orientation-detective neurons which respond only to a particular planar orientation locally, and that these neurons detect local planar orientation information based on nonlinear interactions that take place on their dendrites. We then propose an implementation of these local planar orientation-detective neurons based on their dendritic computations, use them to extract local planar orientation information, and infer the global planar orientation information from the local information. Furthermore, based on this mechanism, we propose an artificial visual system (AVS) for planar orientation detection and other visual information processing. To prove the effectiveness of our mechanism and the AVS, we conducted a series of experiments on images containing rectangles of various sizes, shapes and positions. Computer simulations show that the mechanism can perfectly perform planar orientation detection regardless of size, shape and position in all experiments. Furthermore, we compared the performance of the AVS with that of a traditional convolutional neural network (CNN) on planar orientation detection and found that the AVS completely outperformed the CNN in terms of identification accuracy, noise resistance, computation and learning cost, hardware implementation and reasonability.

14 pages, 1022 KiB  
Article
Dynamically-Tunable Dataflow Architectures Based on Markov Queuing Models
by Mattia Tibaldi, Gianluca Palermo and Christian Pilato
Electronics 2022, 11(4), 555; https://doi.org/10.3390/electronics11040555 - 12 Feb 2022
Cited by 2 | Viewed by 1341
Abstract
Dataflow architectures are fundamental to achieve high performance in data-intensive applications. They must be optimized to elaborate input data arriving at an expected rate, which is not always constant. While worst-case designs can significantly increase hardware resources, more optimistic solutions fail to sustain execution phases with high throughput, leading to system congestion or even computational errors. We present an architecture to monitor and control dataflow architectures that leverage approximate variants to trade off accuracy and latency of the computational processes. Our microarchitecture features online prediction based on queuing models to estimate the response time of the system and select the proper variant to meet the target throughput, enabling the creation of dynamically-tunable systems.
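
The controller idea can be illustrated with textbook M/M/1 formulas: predict each variant's response time from its service rate and the current arrival rate, then pick the most accurate variant that still meets the latency target. The variant numbers below are invented, and the paper's queuing models are richer than plain M/M/1.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue; infinite if the queue is unstable."""
    if arrival_rate >= service_rate:
        return float("inf")
    return 1.0 / (service_rate - arrival_rate)

# (name, accuracy, service rate in items/s) for one exact and two approximate variants
variants = [("exact", 1.00, 90.0), ("approx-1", 0.97, 140.0), ("approx-2", 0.90, 220.0)]
arrival_rate, target = 120.0, 0.05  # current load (items/s), latency target (s)

feasible = [(name, acc) for name, acc, mu in variants
            if mm1_response_time(arrival_rate, mu) <= target]
best = max(feasible, key=lambda v: v[1])  # most accurate variant meeting the target
print("selected variant:", best[0])       # approx-1
```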

36 pages, 6893 KiB  
Article
TAWSEEM: A Deep-Learning-Based Tool for Estimating the Number of Unknown Contributors in DNA Profiling
by Hamdah Alotaibi, Fawaz Alsolami, Ehab Abozinadah and Rashid Mehmood
Electronics 2022, 11(4), 548; https://doi.org/10.3390/electronics11040548 - 11 Feb 2022
Cited by 5 | Viewed by 3568
Abstract
DNA profiling involves the analysis of sequences of an individual or mixed DNA profiles to identify the persons that these profiles belong to. A critically important application of DNA profiling is in forensic science, to identify criminals by finding a match between their blood samples and the DNA profile found at a crime scene. Other applications include paternity tests, disaster victim identification, missing person investigations, and mapping genetic diseases. A crucial task in DNA profiling is determining the number of contributors in a DNA mixture profile, which is challenging due to issues that include allele dropout, stutter, blobs, and noise in DNA profiles; these issues negatively affect the estimation accuracy and the computational complexity. Machine-learning-based methods have been applied to estimating the number of unknowns; however, there is limited work in this area, and many more efforts are required to develop robust models and train them on large and diverse datasets. In this paper, we propose and develop a software tool called TAWSEEM that employs a multilayer perceptron (MLP) neural network deep learning model for estimating the number of unknown contributors in DNA mixture profiles using PROVEDIt, the largest publicly available dataset. We investigate the performance of our deep learning model using four metrics: accuracy, F1-score, recall, and precision. The novelty of our tool is that it provides the highest accuracy (97%) of any existing work on the most diverse dataset (in terms of profiles, loci, multiplexes, etc.). We also provide a detailed background on DNA profiling, a literature review, and a detailed account of the deep learning tool's development and performance investigation.
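
The estimator's overall shape, an MLP classifying the number of contributors from mixture-profile features, might look as follows in scikit-learn; the feature count, layer sizes, and class range are placeholders rather than TAWSEEM's actual configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))    # per-profile features (e.g., allele counts/heights)
y = rng.integers(1, 6, size=1000)  # number of contributors: 1-5
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
mlp.fit(X, y)
print(mlp.predict(X[:3]))
```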

16 pages, 1260 KiB  
Article
Deploying Efficiently Modern Applications on Cloud
by Damiano Perri, Marco Simonetti and Osvaldo Gervasi
Electronics 2022, 11(3), 450; https://doi.org/10.3390/electronics11030450 - 02 Feb 2022
Cited by 4 | Viewed by 2337
Abstract
This study analyses some of the leading technologies for the construction and configuration of IT infrastructures that provide services to users. For modern applications, guaranteeing service continuity, even under very high computational load or network problems, is essential. The main objectives of our configuration are high availability (HA) and horizontal scalability, that is, the ability to increase the computational resources delivered when needed and reduce them when they are no longer necessary. Various architectural possibilities are analysed, and the central schemes used to tackle problems of this type are also described in terms of disaster recovery. The benefits offered by virtualisation technologies are highlighted and combined with modern techniques for managing Docker containers, which are used to build the back-end of a sample infrastructure related to a use case we have developed. In addition, an in-depth analysis is reported of the central autoscaling policies that can help manage high loads of requests from users to the services provided by the infrastructure. The results show an average response time of 21.7 milliseconds with a standard deviation of 76.3 milliseconds, indicating excellent responsiveness. Some peaks are associated with high-stress events for the infrastructure, but even in these cases the response time does not exceed 2 s. The results of the use case, studied over nine months, are presented and discussed. During the study period, we improved the back-end configuration and defined the main metrics for deploying the web application efficiently.
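
A toy threshold-based horizontal autoscaling policy of the kind analysed above (the thresholds and replica bounds are assumptions, not the study's configuration):

```python
def scale_decision(cpu_utilization, replicas, low=0.3, high=0.7,
                   min_replicas=2, max_replicas=10):
    """Return the new replica count for one control step."""
    if cpu_utilization > high and replicas < max_replicas:
        return replicas + 1  # scale out under load
    if cpu_utilization < low and replicas > min_replicas:
        return replicas - 1  # scale in when idle
    return replicas

replicas = 3
for cpu in [0.82, 0.91, 0.55, 0.12, 0.10]:
    replicas = scale_decision(cpu, replicas)
    print(f"cpu={cpu:.2f} -> replicas={replicas}")
```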

28 pages, 70352 KiB  
Article
Swarm Intelligence Techniques for Mobile Wireless Charging
by Gerald K. Ijemaru, Kenneth Li-Minn Ang and Jasmine Kah Phooi Seng
Electronics 2022, 11(3), 371; https://doi.org/10.3390/electronics11030371 - 26 Jan 2022
Cited by 6 | Viewed by 2456
Abstract
This paper proposes energy-efficient swarm intelligence (SI)-based approaches for efficient mobile wireless charging in a distributed large-scale wireless sensor network (LS-WSN). The approach considers the use of multiple special mobile elements that traverse the network for the purpose of energy replenishment. Recent techniques have shown the advantages inherent in using a single mobile charger (MC) that periodically visits the network to replenish the sensor nodes. However, the single-MC technique is limited and is not feasible for LS-WSN scenarios. Other approaches have overlooked the need to comprehensively discuss some critical tradeoffs associated with mobile wireless charging, namely: (1) determining efficient coordination and charging strategies for the MCs, and (2) determining the optimal amount of energy available for the MCs, given the overall available network energy. These important tradeoffs are investigated in this study. This paper thus aims to investigate some of the critical issues affecting efficient mobile wireless charging for large-scale WSN scenarios, so that the network can be operated without limitations. We first formulate the multiple charger recharge optimization problem (MCROP) and show that it is NP-hard. To solve the complex problem of scheduling multiple MCs in LS-WSN scenarios, we propose a node-partition algorithm based on cluster centroids, which adaptively partitions the whole network into several clusters and regions and assigns an MC to each region. Finally, we provide detailed simulation experiments using SI-based routing protocols. The results show the performance of the proposed scheme in terms of different evaluation metrics, where SI-based techniques are presented as a veritable state-of-the-art approach for improved energy-efficient mobile wireless charging that extends the network's operational lifetime. The investigation also reveals the efficacy of partial charging over full charging strategies for the MCs.
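
The node-partition step, clustering sensor nodes by position and assigning one mobile charger per region, can be approximated with plain k-means; the paper's algorithm adds further criteria beyond this sketch, and the field size and node count below are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
node_positions = rng.uniform(0, 1000, size=(300, 2))  # 300 sensor nodes in a 1 km field
num_chargers = 4
km = KMeans(n_clusters=num_chargers, n_init=10, random_state=0).fit(node_positions)
for mc in range(num_chargers):
    region = node_positions[km.labels_ == mc]
    centroid = km.cluster_centers_[mc].round(1)
    print(f"MC {mc}: {len(region)} nodes, region centroid at {centroid}")
```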

11 pages, 448 KiB  
Article
A Critical Analysis of a Tourist Trip Design Problem with Time-Dependent Recommendation Factors and Waiting Times
by Cynthia Porras, Boris Pérez-Cañedo, David A. Pelta and José L. Verdegay
Electronics 2022, 11(3), 357; https://doi.org/10.3390/electronics11030357 - 25 Jan 2022
Cited by 6 | Viewed by 2736
Abstract
The tourist trip design problem (TTDP) is a well-known extension of the orienteering problem, where the objective is to obtain an itinerary of points of interest for a tourist that maximizes his/her level of interest. In several situations, the interest of a point depends on when the point is visited, and the tourist may delay arrival at a point to obtain a higher interest value. In this paper, we present and discuss two variants of the TTDP with time-dependent recommendation factors (TTDP-TDRF), which may or may not take waiting times into account in order to obtain a better recommendation value. Using a mixed-integer linear programming solver, we provide solutions to 27 real-world instances. Although reasonable at first sight, we observed that including waiting times is not justified: in both cases (allowing waiting times or not), the quality of the solutions is almost the same, and the use of waiting times led to a model with higher solving times. This fact highlights the need to properly evaluate the benefits of making the problem model more complex than is actually needed.

19 pages, 4517 KiB  
Article
Semantic Modeling of a VLC-Enabled Task Automation Platform for Smart Offices
by Sergio Muñoz, Carlos A. Iglesias, Andrei Scheianu and George Suciu
Electronics 2022, 11(3), 326; https://doi.org/10.3390/electronics11030326 - 20 Jan 2022
Cited by 1 | Viewed by 2558
Abstract
The evolution of ambient intelligence has introduced a range of new opportunities to improve people’s well-being. One of these opportunities is the use of these technologies to enhance workplaces and improve employees’ comfort and productivity. However, these technologies often entail two major challenges: the requirement for fast and reliable data transmission between the vast number of devices connected simultaneously, and the interoperability between these devices. Conventional communication technologies present some drawbacks in these kinds of systems, such as lower data rates and electromagnetic interference, which have prompted research into new wireless communication technologies. One of these technologies is visible light communication (VLC), which uses existing light in an environment to transmit data. Its characteristics make it an up-and-coming technology for IoT services but also aggravate the interoperability challenge. To facilitate the continuous communication of the enormous amount of heterogeneous data generated, highly agile data models are required. The semantic approach tackles this problem by switching from ad hoc application-centric representation models and formats to a formal definition of concepts and relationships. This paper aims to advance the state of the art by proposing a semantic vocabulary for an intelligent automation platform with VLC enabled, which benefits from the advantages of VLC while ensuring the scalability and interoperability of all system components. Thus, the main contributions of this work are threefold: (i) the design and definition of a semantic model for an automation platform; (ii) the development of a prototype automation platform based on a VLC-based communication system; and (iii) the integration and validation of the proposed semantic model in the VLC-based automation platform.

12 pages, 379 KiB  
Article
Polynomial Algorithm for Minimal (1,2)-Dominating Set in Networks
by Joanna Raczek
Electronics 2022, 11(3), 300; https://doi.org/10.3390/electronics11030300 - 19 Jan 2022
Cited by 1 | Viewed by 1393
Abstract
Dominating sets find application in a variety of networks. A subset of nodes D is a (1,2)-dominating set in a graph G=(V,E) if every node not in D is adjacent to a node in D and is also within distance 2 of another node from D. In networks, (1,2)-dominating sets have a higher fault tolerance and provide higher reliability of services in case of failure. However, finding the smallest such set is NP-hard. In this paper, we propose a polynomial-time algorithm, Minimal_12_Set, that finds a minimal (1,2)-dominating set. We test the proposed algorithm on network models such as trees, geometric random graphs, random graphs and cubic graphs, and we show that the sets of nodes returned by Minimal_12_Set are in general smaller than sets consisting of randomly chosen nodes.
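
The defining property is straightforward to verify from adjacency sets, which is useful when testing the output of any construction algorithm. This check is written directly from the definition above; it is not the paper's Minimal_12_Set algorithm.

```python
def is_12_dominating(adj, D):
    """Check: every node outside D has a neighbor in D and a second,
    distinct node of D within distance 2."""
    D = set(D)
    for v in adj:
        if v in D:
            continue
        dominators = adj[v] & D  # nodes of D at distance 1
        if not dominators:
            return False
        within_two = dominators | {u for n in adj[v] for u in adj[n] if u in D}
        if len(within_two) < 2:  # need a second node of D within distance 2
            return False
    return True

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}  # small test graph
print(is_12_dominating(adj, D={0, 3}))  # True
```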

29 pages, 6000 KiB  
Article
Network Slicing Security Controls and Assurance for Verticals
by Tomasz Wichary, Jordi Mongay Batalla, Constandinos X. Mavromoustakis, Jerzy Żurek and George Mastorakis
Electronics 2022, 11(2), 222; https://doi.org/10.3390/electronics11020222 - 11 Jan 2022
Cited by 16 | Viewed by 3478
Abstract
This paper focuses on the security challenges of network slice implementation in 5G networks. We propose that network slice controllers support security by enabling security controls at different network layers. The slice controller orchestrates multilevel domains with resources at a very high level but needs to understand how to define the resources at lower levels. In this context, the main outstanding security challenge is the compromise of several resources in the presence of an attack, due to weak resource isolation at different levels. We analysed the current standards and trends aimed at mitigating the vulnerabilities mentioned above, and we propose security controls and classify them by efficiency and applicability (ease of development). Security controls are a common way to secure networks, but they enforce security policies only in their respective areas. The security domains therefore allow the orchestration principles to be structured by considering the necessary security controls to be applied. This approach is common to both vendor-neutral and vendor-dependent security solutions. In our classification, we considered controls in the following fields: (i) fair resource allocation with dynamic security assurance, (ii) isolation in a multilayer architecture and (iii) response to DDoS attacks without service and security degradation.

25 pages, 5120 KiB  
Article
A Machine Learning Based Model for Energy Usage Peak Prediction in Smart Farms
by SaravanaKumar Venkatesan, Jonghyun Lim, Hoon Ko and Yongyun Cho
Electronics 2022, 11(2), 218; https://doi.org/10.3390/electronics11020218 - 11 Jan 2022
Cited by 8 | Viewed by 3308
Abstract
Context: Energy utilization is one of the factors most closely affecting many areas of the smart farm: plant growth, crop production, device automation, and energy supply. Recently, 4th industrial revolution technologies such as IoT, artificial intelligence, and big data have been widely used in smart farm environments to use energy efficiently and to control smart farm conditions. In particular, machine learning technologies with big data analysis are actively used as one of the most potent prediction methods supporting energy use in the smart farm. Purpose: This study proposes a machine learning-based prediction model for peak energy use, built by analyzing energy-related data collected from various environmental and growth devices in a smart paprika farm of the Jeonnam Agricultural Research and Extension Service in South Korea between 2019 and 2021. Scientific method: To find the most optimized prediction model, comparative evaluation tests were performed using representative ML algorithms such as artificial neural network, support vector regression, random forest, K-nearest neighbors, extreme gradient boosting and gradient boosting machine, and the time series algorithm ARIMA, with binary classification, for different numbers of input features. Validate: This article can provide an effective and viable way for smart farm managers or greenhouse farmers to better manage agricultural energy economically and environmentally; we hope that the recommended ML method will help improve the smart farm's energy use and energy policies in the various fields related to agricultural energy. Conclusion: Seven performance metrics, including R-squared, root mean squared error, and mean absolute error, were used to compare these algorithms. It is concluded that the RF-based model is more successful than the others, with a prediction accuracy of 92%. The proposed model may therefore contribute to the development of various applications for energy usage in a smart farm, such as a notification service for peak energy usage times or energy usage control for each device.

30 pages, 2107 KiB  
Article
Towards Human Stress and Activity Recognition: A Review and a First Approach Based on Low-Cost Wearables
by Juan Antonio Castro-García, Alberto Jesús Molina-Cantero, Isabel María Gómez-González, Sergio Lafuente-Arroyo and Manuel Merino-Monge
Electronics 2022, 11(1), 155; https://doi.org/10.3390/electronics11010155 - 04 Jan 2022
Cited by 14 | Viewed by 3194
Abstract
Detecting stress when performing physical activities is an interesting field that has received relatively little research interest to date. In this paper, we took a first step towards redressing this, through a comprehensive review and the design of a low-cost body area network [...] Read more.
Detecting stress when performing physical activities is an interesting field that has received relatively little research interest to date. In this paper, we took a first step towards redressing this, through a comprehensive review and the design of a low-cost body area network (BAN) made of a set of wearables that allow physiological signals and human movements to be captured simultaneously. We used four different wearables: OpenBCI and three other open-hardware custom-made designs that communicate via Bluetooth Low Energy (BLE) to an external computer—following the edge-computing concept—hosting applications for data synchronization and storage. We obtained a large number of physiological signals (electroencephalography (EEG), electrocardiography (ECG), breathing rate (BR), electrodermal activity (EDA), and skin temperature (ST)) with which we analyzed internal states in general, but with a focus on stress. The findings show the reliability and feasibility of the proposed BAN in terms of battery lifetime (greater than 15 h), packet loss rate (0% for our custom-made designs), and signal quality (signal-to-noise ratio (SNR) of 9.8 dB for the ECG circuit, and 61.6 dB for the EDA). Moreover, we conducted a preliminary experiment to gauge the main ECG features for stress detection during rest. Full article

18 pages, 905 KiB  
Article
Travel Time Prediction and Explanation with Spatio-Temporal Features: A Comparative Study
by Irfan Ahmed, Indika Kumara, Vahideh Reshadat, A. S. M. Kayes, Willem-Jan van den Heuvel and Damian A. Tamburri
Electronics 2022, 11(1), 106; https://doi.org/10.3390/electronics11010106 - 29 Dec 2021
Cited by 9 | Viewed by 3416
Abstract
Travel time information is used as input or auxiliary data for tasks such as dynamic navigation, infrastructure planning, congestion control, and accident detection. Various data-driven Travel Time Prediction (TTP) methods have been proposed in recent years. One of the most challenging tasks in TTP is developing and selecting the most appropriate prediction algorithm. The existing studies that empirically compare different TTP models only use a few models with specific features. Moreover, there is a lack of research on explaining TTPs made by black-box models. Such explanations can help to tune and apply TTP methods successfully. To fill these gaps in the current TTP literature, using three data sets, we compare three types of TTP methods (ensemble tree-based learning, deep neural networks, and hybrid models) and ten different prediction algorithms overall. Furthermore, we apply XAI (Explainable Artificial Intelligence) methods (SHAP and LIME) to understand and interpret the models’ predictions. The prediction accuracy and reliability of all models are evaluated and compared. We observed that the ensemble learning methods, i.e., XGBoost and LightGBM, are the best-performing models over the three data sets, and that XAI methods can adequately explain how various spatial and temporal features influence travel time. Full article
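
The XAI step can be sketched generically as follows; this is a standard SHAP usage example on a toy gradient-boosted model with hypothetical features, not the paper's pipeline:

```python
# Sketch: per-feature attribution of a travel-time model with SHAP.
import numpy as np
import shap                      # pip install shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 3))   # hypothetical features: distance, hour, day
y = 10 * X[:, 0] + 2 * np.sin(2 * np.pi * X[:, 1]) + rng.normal(scale=0.2, size=500)

model = GradientBoostingRegressor().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])  # contribution of each feature
print(shap_values.shape)                     # (10, 3): one attribution per feature
```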

12 pages, 4673 KiB  
Article
The Mechanism of Orientation Detection Based on Artificial Visual System
by Xiliang Zhang, Tang Zheng and Yuki Todo
Electronics 2022, 11(1), 54; https://doi.org/10.3390/electronics11010054 - 24 Dec 2021
Cited by 2 | Viewed by 2311
Abstract
As an important part of the nervous system, the human visual system provides visual perception for humans. Research on it is of great significance for improving our understanding of biological vision and the human brain. Orientation detection, in which visual cortex neurons respond only to linear stimuli in specific orientations, is an important driving force in computer vision and biological vision. However, the principle of orientation detection is still unknown. This paper proposes an orientation detection mechanism based on dendrite calculation of local orientation detection neurons. We hypothesized the existence of orientation detection neurons that respond only to specific orientations and designed eight neurons that can detect local orientation information. These neurons interact with each other based on the nonlinearity of dendrite generation. Local orientation detection neurons are then used to extract local orientation information, and global orientation information is deduced from the local orientation information. The effectiveness of the mechanism is verified by computer simulation, which shows that it performs orientation detection well in all experiments, regardless of the size, shape, and position of objects. This is consistent with most known physiological experiments. Full article
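
A toy sketch of the local-to-global idea follows, with eight orientation-selective masks standing in for the paper's dendritic neurons; the masks and the firing rule here are illustrative assumptions, not the authors' model:

```python
# Sketch: eight local "neurons", each firing only for its own orientation.
import numpy as np
from scipy.ndimage import correlate

def line_mask(angle_deg, size=3):
    """Binary mask with a line through the center at the given angle."""
    c = size // 2
    yy, xx = np.mgrid[0:size, 0:size]
    yy, xx = yy - c, xx - c
    theta = np.deg2rad(angle_deg)
    dist = np.abs(np.cos(theta) * yy - np.sin(theta) * xx)  # distance to the line
    return (dist < 0.5).astype(float)

angles = np.arange(0, 180, 22.5)               # eight orientation "neurons"
masks = [line_mask(a) for a in angles]

img = np.zeros((32, 32)); img[16, 4:28] = 1.0  # a thin horizontal bar (0 degrees)
# a local neuron fires where every cell under its line mask is active;
# the global orientation is the one whose neurons fire most often
responses = [(correlate(img, m) >= m.sum()).sum() for m in masks]
print("estimated orientation:", angles[int(np.argmax(responses))])
```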

22 pages, 7134 KiB  
Article
Design and Development of a Blockchain-Based System for Private Data Management
by Prasanth Varma Kakarlapudi and Qusay H. Mahmoud
Electronics 2021, 10(24), 3131; https://doi.org/10.3390/electronics10243131 - 16 Dec 2021
Cited by 10 | Viewed by 9023
Abstract
The concept of blockchain was introduced as the Bitcoin cryptocurrency in a 2008 whitepaper by the mysterious Satoshi Nakamoto. Blockchain has applications in many domains, such as healthcare, the Internet of Things (IoT), and data management. Data management is defined as obtaining, processing, safeguarding, and storing information about an organization to aid with making better business decisions for the firm. The collected information is often shared across organizations without the consent of the individuals who provided the information. As a result, the information must be protected from unauthorized access or exploitation. Therefore, organizations must ensure that their systems are transparent to build user confidence. This paper introduces the architectural design and development of a blockchain-based system for private data management, discusses the proof-of-concept prototype using Hyperledger Fabric, and presents evaluation results of the proposed system using Hyperledger Caliper. The proposed solution can be used in any application domain where managing the privacy of user data is important, such as in healthcare systems. Full article

26 pages, 2279 KiB  
Article
AI-Crime Hunter: An AI Mixture of Experts for Crime Discovery on Twitter
by Niloufar Shoeibi, Nastaran Shoeibi, Guillermo Hernández, Pablo Chamoso and Juan M. Corchado
Electronics 2021, 10(24), 3081; https://doi.org/10.3390/electronics10243081 - 10 Dec 2021
Cited by 4 | Viewed by 4443
Abstract
Maintaining a healthy cyber society is a great challenge due to users’ freedom of expression and behavior. This can be addressed by monitoring and analyzing users’ behavior and taking proper actions. This research presents a platform that monitors public content on Twitter by extracting tweet data. Once the data have been obtained, the users’ interactions are analyzed using graph analysis methods. Then, the users’ behavioral patterns are analyzed by applying metadata analysis, in which the timeline of each profile is obtained; the time-series behavioral features of users are also investigated. Then, in the abnormal behavior detection and filtering component, the profiles of interest are selected for further examination. Finally, in the contextual analysis component, the contents are analyzed using natural language processing techniques; a binary text classification model (SVM (Support Vector Machine) + TF-IDF (Term Frequency—Inverse Document Frequency) with 88.89% accuracy) is used to detect whether a tweet is related to crime or not. Then, a sentiment analysis method is applied to the crime-related tweets to perform aspect-based sentiment analysis (DistilBERT + FFNN (Feed-Forward Neural Network) with 80% accuracy), because sharing positive opinions about a crime-related topic can threaten society. This platform aims to provide the end-user (the police) with suggestions to control hate speech or terrorist propaganda. Full article
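
The crime/non-crime classification step can be illustrated with a standard TF-IDF + linear SVM pipeline; the toy corpus below is invented for the example and is not the authors' data:

```python
# Sketch: binary text classification with TF-IDF features and a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["robbery reported downtown", "lovely weather today",
         "suspect fled the scene", "great match last night"]
labels = [1, 0, 1, 0]                     # 1 = crime-related, 0 = not

clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(texts, labels)
print(clf.predict(["armed robbery near the bank"]))  # expected: [1]
```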

24 pages, 497 KiB  
Article
Enhancing Big Data Feature Selection Using a Hybrid Correlation-Based Feature Selection
by Masurah Mohamad, Ali Selamat, Ondrej Krejcar, Ruben Gonzalez Crespo, Enrique Herrera-Viedma and Hamido Fujita
Electronics 2021, 10(23), 2984; https://doi.org/10.3390/electronics10232984 - 30 Nov 2021
Cited by 7 | Viewed by 2428
Abstract
This study proposes an alternate data extraction method that combines three well-known feature selection methods for handling large and problematic datasets: the correlation-based feature selection (CFS), best first search (BFS), and dominance-based rough set approach (DRSA) methods. The aim is to enhance the classifier’s performance in decision analysis by eliminating uncorrelated and inconsistent data values. The proposed method, named CFS-DRSA, comprises several phases executed in sequence, with the main phases incorporating two crucial feature extraction tasks. The first is data reduction, which implements the CFS method with a BFS algorithm; the second is a data selection process that applies the DRSA to generate the optimized dataset. The study thereby aims to reduce computational time complexity and increase classification accuracy. Several datasets with various characteristics and volumes were used in the experimental process to evaluate the proposed method’s credibility. The method’s performance was validated using standard evaluation measures and benchmarked against other established methods such as deep learning (DL). Overall, the proposed method proved that it could assist the classifier in returning a significant result, with an accuracy rate of 82.1% for the neural network (NN) classifier, compared to 66.5% for the support vector machine (SVM) and 49.96% for DL. The one-way analysis of variance (ANOVA) statistical result indicates that the proposed method is an alternative extraction tool for those who have difficulty acquiring expensive big data analysis tools and those who are new to the data analysis field. Full article
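
For readers unfamiliar with CFS, a minimal sketch of its greedy, merit-based selection follows (synthetic data; the authors' CFS-DRSA pipeline additionally applies BFS and DRSA):

```python
# Sketch of the CFS idea: greedily add the feature maximizing the "merit"
#   merit = k * r_cf / sqrt(k + k*(k-1) * r_ff),
# where r_cf is the mean feature-class correlation and r_ff the mean
# feature-feature correlation of the current subset of size k.
import numpy as np

def merit(X, y, subset):
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if len(subset) == 1:
        r_ff = 0.0
    else:
        pairs = [(a, b) for i, a in enumerate(subset) for b in subset[i + 1:]]
        r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1]) for a, b in pairs])
    k = len(subset)
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
y = X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)

selected, remaining = [], list(range(6))
while remaining:
    best = max(remaining, key=lambda j: merit(X, y, selected + [j]))
    if selected and merit(X, y, selected + [best]) <= merit(X, y, selected):
        break                                  # no merit improvement: stop
    selected.append(best); remaining.remove(best)
print("selected features:", selected)          # expected to include 0 and 2
```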

19 pages, 858 KiB  
Article
Intelligent Cyber-Security System for IoT-Aided Drones Using Voting Classifier
by Rizwan Majeed, Nurul Azma Abdullah, Muhammad Faheem Mushtaq, Muhammad Umer and Michele Nappi
Electronics 2021, 10(23), 2926; https://doi.org/10.3390/electronics10232926 - 25 Nov 2021
Cited by 13 | Viewed by 2971
Abstract
Developments in drones, particularly small drones, have opened up new trends and opportunities in different fields. Drones provide interlocation services for navigation, and this interlink is provided by the Internet of Things (IoT). However, architectural issues make drone networks vulnerable to privacy and security threats, and it is critical to provide a safe and secure network to achieve the desired performance. Small drones are finding new paths for progress in the civil and defense industries, but they also pose new challenges for security and privacy. The basic design of the small drone requires a modification in its data transformation and data privacy mechanisms, and it does not yet fulfill domain requirements. This paper aims to investigate recent privacy and security trends that are affecting the Internet of Drones (IoD). This study also highlights the need for a safe and secure drone network that is free from interceptions and intrusions. The proposed framework mitigates cyber security threats by employing intelligent machine learning models in the design of IoT-aided drones, making them secure and adaptable. Finally, the proposed model is evaluated on a benchmark dataset and shows robust results. Full article

20 pages, 8308 KiB  
Article
Random Forest Similarity Maps: A Scalable Visual Representation for Global and Local Interpretation
by Dipankar Mazumdar, Mário Popolin Neto and Fernando V. Paulovich
Electronics 2021, 10(22), 2862; https://doi.org/10.3390/electronics10222862 - 20 Nov 2021
Cited by 3 | Viewed by 3279
Abstract
Machine Learning prediction algorithms have made significant contributions in today’s world, leading to increased usage in various domains. However, as ML algorithms surge, the need for transparent and interpretable models becomes essential. Visual representations have been shown to be instrumental in addressing this issue, allowing users to grasp models’ inner workings. Despite their popularity, visualization techniques still present visual scalability limitations, mainly when applied to analyze popular and complex models, such as Random Forests (RF). In this work, we propose Random Forest Similarity Map (RFMap), a scalable interactive visual analytics tool designed to analyze RF ensemble models. RFMap focuses on explaining the inner working mechanism of models through different views describing individual data instance predictions, providing an overview of the entire forest of trees, and highlighting instance input feature values. The interactive nature of RFMap allows users to visually interpret model errors and decisions, establishing the necessary confidence and user trust in RF models and improving performance. Full article
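
RFMap itself is a visualization tool, but the raw material such views build on, per-tree predictions for each instance, can be pulled out of a scikit-learn forest as follows (an illustrative sketch, not the authors' implementation):

```python
# Sketch: extract every tree's vote for every instance in a random forest.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)

votes = np.stack([tree.predict(X) for tree in rf.estimators_], axis=1)
print(votes.shape)   # (150, 25): each instance's vote from every tree
# instances with similar vote patterns would land close together
# in an RFMap-style similarity view
```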

20 pages, 5742 KiB  
Article
Cyberbullying Detection: Hybrid Models Based on Machine Learning and Natural Language Processing Techniques
by Chahat Raj, Ayush Agarwal, Gnana Bharathy, Bhuva Narayan and Mukesh Prasad
Electronics 2021, 10(22), 2810; https://doi.org/10.3390/electronics10222810 - 16 Nov 2021
Cited by 41 | Viewed by 5883
Abstract
The rise in web and social media interactions has resulted in the effortless proliferation of offensive language and hate speech. Such online harassment, insults, and attacks are commonly termed cyberbullying. The sheer volume of user-generated content has made it challenging to identify such illicit content. Machine learning has wide applications in text classification, and researchers are shifting towards deep neural networks for detecting cyberbullying due to the several advantages they have over traditional machine learning algorithms. This paper proposes a novel neural network framework with parameter optimization and an algorithmic comparative study of eleven classification methods: four traditional machine learning methods and seven shallow neural networks, on two real-world cyberbullying datasets. In addition, this paper examines the effect of feature extraction and word-embedding-based natural language processing techniques on algorithmic performance. Key observations from this study show that bidirectional neural networks and attention models provide high classification results. Logistic Regression was observed to be the best among the traditional machine learning classifiers used. Term Frequency–Inverse Document Frequency (TF-IDF) demonstrates consistently high accuracies with traditional machine learning techniques, while Global Vectors (GloVe) perform better with neural network models. Bi-GRU and Bi-LSTM worked best amongst the neural networks used. The extensive experiments performed on the two datasets establish the importance of this work by comparing eleven classification methods and seven feature extraction techniques. Our proposed shallow neural networks outperform existing state-of-the-art approaches for cyberbullying detection, with accuracy and F1-scores as high as ~95% and ~98%, respectively. Full article

25 pages, 67673 KiB  
Article
Optical Recognition of Handwritten Logic Formulas Using Neural Networks
by Vaios Ampelakiotis, Isidoros Perikos, Ioannis Hatzilygeroudis and George Tsihrintzis
Electronics 2021, 10(22), 2761; https://doi.org/10.3390/electronics10222761 - 12 Nov 2021
Viewed by 2173
Abstract
In this paper, we present a handwritten character recognition (HCR) system that aims to recognize first-order logic handwritten formulas and create editable text files of the recognized formulas. Dense feedforward neural networks (NNs) are utilized, and their performance is examined under various training conditions and methods. More specifically, after three training algorithms (backpropagation, resilient propagation and stochastic gradient descent) had been tested, we created and trained an NN with the stochastic gradient descent algorithm, optimized by the Adam update rule, which proved to be the best, using a training set of 16,750 handwritten image samples of 28 × 28 pixels each and a test set of 7947 samples. The final accuracy achieved is 90.13%. The general methodology followed consists of two stages: image processing, and NN design and training. Finally, an application has been created that implements the methodology and automatically recognizes handwritten logic formulas. An interesting feature of the application is that it allows for creating new, user-oriented training sets and parameter settings, and thus new NN models. Full article
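
A minimal sketch of the reported training setup (a dense NN trained by stochastic gradient descent with the Adam update rule), using scikit-learn on random stand-in data rather than the authors' 16,750-sample training set:

```python
# Sketch: dense feedforward classifier on flattened 28x28 inputs.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.uniform(size=(2000, 28 * 28))   # flattened 28x28 "images" (synthetic)
y = rng.integers(0, 10, size=2000)      # hypothetical symbol classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(128, 64), solver="adam",
                    max_iter=50, random_state=0)
net.fit(X_tr, y_tr)
print("test accuracy:", net.score(X_te, y_te))  # near chance on random data
```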

21 pages, 1907 KiB  
Article
Effective On-Chip Communication for Message Passing Programs on Multi-Core Processors
by Joonmoo Huh and Deokwoo Lee
Electronics 2021, 10(21), 2681; https://doi.org/10.3390/electronics10212681 - 03 Nov 2021
Viewed by 2070
Abstract
Shared memory is the most popular parallel programming model for multi-core processors, while message passing is generally used for large distributed machines. However, as the number of cores on a chip increases, the relative merits of shared memory versus message passing change, and we argue that message passing becomes a viable, high-performing parallel programming model. To test this hypothesis, we compare a shared memory architecture with a new message passing architecture on a suite of applications tuned for each system independently. Perhaps surprisingly, the fundamental behaviors of the applications studied in this work, when optimized for both models, are very similar to each other, and both could execute efficiently on multicore architectures despite many implementations being different from each other. Furthermore, if hardware is tuned to support message passing by supporting bulk message transfer and the elimination of unnecessary coherence overheads, and if effective support is available for global operations, then some applications perform much better on a message passing architecture. Leveraging our insights, we design a message passing architecture that supports both memory-to-memory and cache-to-cache messaging in hardware. With the new architecture, message passing is able to outperform its shared memory counterparts on many of the applications due to the unique advantages of the message passing hardware as compared to cache coherence. In the best case, message passing achieves up to a 34% increase in speed over its shared memory counterpart, and it achieves an average 10% increase in speed. In the worst case, message passing is slowed down in two applications—CG (conjugate gradient) and FT (Fourier transform)—because it could not handle their unique data-sharing patterns as well as its shared memory counterpart. Overall, our analysis demonstrates the importance of considering message passing as a high-performing and hardware-supported programming model on future multicore architectures. Full article

25 pages, 8771 KiB  
Article
Cluster-Based Memetic Approach of Image Alignment
by Catalina-Lucia Cocianu and Cristian Răzvan Uscatu
Electronics 2021, 10(21), 2606; https://doi.org/10.3390/electronics10212606 - 25 Oct 2021
Cited by 3 | Viewed by 1403
Abstract
The paper presents a new memetic, cluster-based methodology for image registration in the case of a geometric perturbation model involving translation, rotation and scaling. The methodology consists of two stages. First, using the sets of object pixels belonging to the target image and to the sensed image, respectively, the boundaries of the search space are computed. Next, the registration mechanism, a hybridization of a version of the firefly population-based search procedure and the two-membered evolutionary strategy computed on clustered data, is applied. In addition, a procedure designed to deal with the premature convergence problem is embedded. The fitness to be maximized by the memetic algorithm is defined by the Dice coefficient, a function implemented to evaluate the similarity between pairs of binary images. The proposed methodology is applied to both binary and monochrome images. In the case of monochrome images, a preprocessing step aimed at binarizing the inputs is performed before registration. The quality of the proposed approach is measured in terms of accuracy and efficiency. The success rate based on the Dice coefficient, normalized mutual information measures, and signal-to-noise ratio are used to establish the accuracy of the obtained algorithm, while efficiency is evaluated by running time. Full article
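
The Dice fitness the algorithm maximizes is simple to state; a minimal implementation for binary images follows (equal-shaped masks are assumed):

```python
# Sketch: Dice coefficient between two binary images.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice = 2|A and B| / (|A| + |B|) for boolean masks a, b."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

target = np.zeros((8, 8), bool); target[2:6, 2:6] = True
sensed = np.roll(target, 1, axis=1)       # the same object shifted by 1 pixel
print(f"{dice(target, sensed):.3f}")      # < 1.0 until alignment is recovered
```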

21 pages, 2681 KiB  
Article
Enhance the Language Ability of Humanoid Robot NAO through Deep Learning to Interact with Autistic Children
by Tianhao She and Fuji Ren
Electronics 2021, 10(19), 2393; https://doi.org/10.3390/electronics10192393 - 30 Sep 2021
Cited by 3 | Viewed by 2648
Abstract
Autism spectrum disorder (ASD) is a life-long neurological disability, and a cure has not yet been found. ASD begins early in childhood and lasts throughout a person’s life. Through early intervention, many actions can be taken to improve the quality of life of children. Robots are one of the best choices for accompanying children with autism. However, for most robots, the dialogue system uses traditional techniques to produce responses, so the robots cannot produce meaningful answers when a conversation has not been recorded in a database. The main contribution of our work is the incorporation of a conversation model into an actual robot system for supporting children with autism. We present the use of a neural network model as the generative conversational agent, aimed at generating meaningful and coherent dialogue responses given the dialogue history. The proposed model shares an embedding layer between the encoding and decoding processes. The model differs from the canonical Seq2Seq model, in which the encoder output is used only to set up the initial state of the decoder, in order to avoid favoring short and unconditional responses with high prior probability. In order to improve sensitivity to context, we changed the input method of the model to better adapt to the utterances of children with autism. We adopted transfer learning to make the proposed model learn the characteristics of dialogue with autistic children and to address the insufficient corpus of such dialogue. Experiments showed that the proposed method was superior to the canonical Seq2Seq model and the GAN-based dialogue model in both automatic evaluation indicators and human evaluation, pushing the BLEU precision to 0.23, the greedy matching score to 0.69, the embedding average score to 0.82, the vector extrema score to 0.55, the skip-thought score to 0.65, the KL divergence score to 5.73, and the EMD score to 12.21. Full article

46 pages, 1777 KiB  
Article
Human Face Detection Techniques: A Comprehensive Review and Future Research Directions
by Md Khaled Hasan, Md. Shamim Ahsan, Abdullah-Al-Mamun, S. H. Shah Newaz and Gyu Myoung Lee
Electronics 2021, 10(19), 2354; https://doi.org/10.3390/electronics10192354 - 26 Sep 2021
Cited by 41 | Viewed by 10012
Abstract
Face detection, which is an effortless task for humans, is complex to perform on machines. The recent proliferation of computational resources is paving the way for rapid advancement of face detection technology. Many astutely developed algorithms have been proposed to detect faces. However, little attention has been paid to making a comprehensive survey of the available algorithms. This paper provides a fourfold discussion of face detection algorithms. First, we explore a wide variety of the available face detection algorithms in five steps: history, working procedure, advantages, limitations, and use in other fields alongside face detection. Secondly, we include a comparative evaluation among the different algorithms within each method. Thirdly, we provide detailed comparisons among the algorithms across methods to give an all-inclusive outlook. Lastly, we conclude this study with several promising research directions to pursue. Earlier survey papers on face detection algorithms are limited to just technical details and popularly used algorithms. In our study, however, we cover detailed technical explanations of face detection algorithms and various recent sub-branches of the neural network. We present detailed comparisons among the algorithms both overall and within sub-branches. We provide the strengths and limitations of these algorithms and a novel literature survey that includes their use besides face detection. Full article

19 pages, 4797 KiB  
Article
Parameter Estimation of Modified Double-Diode and Triple-Diode Photovoltaic Models Based on Wild Horse Optimizer
by Abdelhady Ramadan, Salah Kamel, Ibrahim B. M. Taha and Marcos Tostado-Véliz
Electronics 2021, 10(18), 2308; https://doi.org/10.3390/electronics10182308 - 19 Sep 2021
Cited by 28 | Viewed by 2973
Abstract
The increase in industrial and commercial applications of photovoltaic (PV) systems has significantly increased interest in improving the efficiency of these systems. Estimating the efficiency of PV is considered one of the most important problems facing those in charge of manufacturing these systems, which makes it interesting to many researchers. The difficulty in estimating the efficiency of PV is due to the highly non-linear current–voltage and power–voltage characteristics. In addition, the absence of ample efficiency information in the manufacturers’ datasheets has led to the development of effective electrical mathematical equivalent models necessary to simulate the PV module. In this paper, an application of an optimization algorithm named the Wild Horse Optimizer (WHO) is proposed to extract the parameters of a double-diode PV model (DDM), modified double-diode PV model (MDDM), triple-diode PV model (TDM), and modified triple-diode PV model (MTDM). This study focuses on two main objectives. The first concerns comparing the original models (DDM and TDM) and their modifications (MDDM and MTDM). The second concerns the algorithm’s behavior on the optimization problem, compared with that of other recent algorithms. The evaluation process uses different methods, such as the Root Mean Square Error (RMSE) for accuracy and statistical analysis for robustness. Based on the results obtained, the parameters estimated by the WHO are more accurate than those obtained by the other studied optimization algorithms; furthermore, the MDDM and MTDM modifications enhanced the original DDM and TDM efficiencies. Full article

18 pages, 1845 KiB  
Article
Remote Laboratory for E-Learning of Systems on Chip and Their Applications to Nuclear and Scientific Instrumentation
by Maria Liz Crespo, François Foulon, Andres Cicuttin, Mladen Bogovac, Clement Onime, Cristian Sisterna, Rodrigo Melo, Werner Florian Samayoa, Luis Guillermo García Ordóñez, Romina Molina and Bruno Valinoti
Electronics 2021, 10(18), 2191; https://doi.org/10.3390/electronics10182191 - 07 Sep 2021
Cited by 3 | Viewed by 2308
Abstract
Configuring and setting up a remote access laboratory for an advanced online school on fully programmable System-on-Chip (SoC) proved to be an outstanding challenge. The school, jointly organized by the International Centre for Theoretical Physics (ICTP) and the International Atomic Energy Agency (IAEA), focused on SoC and its applications to nuclear and scientific instrumentation and was mainly addressed to physicists, computer scientists and engineers from developing countries. The use of e-learning tools, some adopted and others developed for the school, allowed the participants to directly access both integrated development environment software and programmable SoC platforms. This facilitated the follow-up of all proposed exercises and the final project. During the four weeks of the training activity, we faced and overcame different technology and communication challenges, whose solutions we describe in detail together with the dedicated tools and design methodology. We finally present a summary of the experience gained and an assessment of the results achieved, addressed to those who plan to organize similar initiatives using e-learning for advanced training with remote access to SoC platforms. Full article

21 pages, 561 KiB  
Article
Automatic Multilingual Stopwords Identification from Very Small Corpora
by Stefano Ferilli
Electronics 2021, 10(17), 2169; https://doi.org/10.3390/electronics10172169 - 05 Sep 2021
Cited by 5 | Viewed by 2225
Abstract
Tools for Natural Language Processing work using linguistic resources that are language-specific. The complexity of building such resources causes many languages to lack them, so learning them automatically from sample texts would be a desirable solution. This usually requires huge training corpora, which are not available for many local languages and jargons lacking a wide literature. This paper focuses on stopwords, i.e., terms in a text which do not contribute to conveying its topic or content. It provides two main, inter-related and complementary methodological contributions: (i) it proposes a novel approach based on term and document frequency to rank candidate stopwords, which works also on very small corpora (even single documents); and (ii) it proposes an automatic cutoff strategy to select the best candidates in the ranking, thus addressing one of the most critical problems in stopword identification practice. Nice features of these approaches are that (i) they are generic and applicable to different languages, (ii) they are fully automatic, and (iii) they do not require any previous linguistic knowledge. Extensive experiments show that both are extremely effective and reliable. The former outperforms all comparable approaches in the state of the art, both in performance (Precision stays at 100% or nearly so for a large portion of the top-ranked candidate stopwords, while Recall is quite close to the theoretical maximum) and in smooth behavior (Precision is monotonically decreasing and Recall monotonically increasing, allowing the experimenter to choose the preferred balance). The latter is more flexible than existing solutions in the literature, requiring just one parameter intuitively related to the balance between Precision and Recall one wishes to obtain. Full article
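
A toy sketch of frequency-based stopword ranking in this spirit (the paper's exact scoring and cutoff strategy differ; the particular combination of term and document frequency below is an illustrative assumption):

```python
# Sketch: rank candidate stopwords by term frequency weighted by how many
# documents the term spreads across.
from collections import Counter

docs = ["the cat sat on the mat", "the dog and the cat", "a dog on a mat"]
tokenized = [d.split() for d in docs]

tf = Counter(t for doc in tokenized for t in doc)       # term frequency
df = Counter(t for doc in tokenized for t in set(doc))  # document frequency
score = {t: tf[t] * df[t] / len(docs) for t in tf}

for term, s in sorted(score.items(), key=lambda kv: -kv[1])[:4]:
    print(term, round(s, 2))   # "the" ranks first, as a stopword should
```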

11 pages, 2134 KiB  
Article
Communication Cost Reduction with Partial Structure in Federated Learning
by Dongseok Kang and Chang Wook Ahn
Electronics 2021, 10(17), 2081; https://doi.org/10.3390/electronics10172081 - 27 Aug 2021
Cited by 7 | Viewed by 2231
Abstract
Federated learning is a distributed learning algorithm designed to train a single server model on a server using different clients and their local data. To improve the performance of the server model, continuous communication with clients is required, and since the number of clients is very large, the algorithm must be designed with the cost of communication in mind. In this paper, we propose a method for distributing a model with a structure different from that of the server model, distributing models suited to clients with different data sizes, and training the server model using the reconstructed models trained by the clients. In this way, the server deploys only a subset of the sequential model, collects gradient updates, and selectively applies them to the server model. This method of delivering the server model at a lower cost to clients who only need smaller models can reduce the communication cost of training server models compared to standard methods. An image classification model was designed to verify the effectiveness of the proposed method across three data distribution situations and two datasets, and it was confirmed that training was accomplished at only 0.229 times the cost of the standard method. Full article
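
The partial-structure idea can be sketched as follows; this simplified numpy example, with stand-in "layers" and a fake local-training step, is an assumption-laden illustration rather than the authors' algorithm:

```python
# Sketch: clients receive only a prefix of the server model's layers,
# and the server averages updates only over the layers it actually sent.
import numpy as np

rng = np.random.default_rng(4)
server = [rng.normal(size=(4, 4)) for _ in range(3)]  # 3 "layers" of weights

def client_update(layers):
    # stand-in for local training: a small gradient-like perturbation
    return [w - 0.01 * rng.normal(size=w.shape) for w in layers]

# clients with smaller data get smaller sub-models (fewer layers)
client_depths = [3, 2, 1]
updates = [client_update(server[:k]) for k in client_depths]

for i in range(len(server)):
    received = [u[i] for u, k in zip(updates, client_depths) if i < k]
    if received:                       # average only what was distributed
        server[i] = np.mean(received, axis=0)
print([w.shape for w in server])
```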

16 pages, 763 KiB  
Article
Motor Unit Discharges from Multi-Kernel Deconvolution of Single Channel Surface Electromyogram
by Luca Mesin
Electronics 2021, 10(16), 2022; https://doi.org/10.3390/electronics10162022 - 21 Aug 2021
Cited by 2 | Viewed by 1528
Abstract
Surface electromyogram (EMG) finds many applications in the non-invasive characterization of muscles. Extracting information on the control of motor units (MU) is difficult when using single channels, e.g., due to the low selectivity and large phase cancellations of MU action potentials (MUAPs). In this paper, we propose a new method to face this problem in the case of a single differential channel. The signal is approximated as a sum of convolutions of different kernels (adapted to the signal) and firing patterns, whose sum is the estimate of the cumulative MU firings. Three simulators were used for testing: muscles of parallel fibres with either two innervation zones (IZs; thus, with MUAPs of different phases) or one IZ, and a model with fibres inclined with respect to the skin. Simulations were prepared for different fat thicknesses, distributions of conduction velocity, maximal firing rates, synchronizations of MU discharges, and variabilities of the inter-spike interval. The performances were measured in terms of cross-correlations of the estimated and simulated cumulative MU firings in the range of 0–50 Hz and compared with those of a state-of-the-art single-kernel algorithm. The median cross-correlations for the multi-kernel/single-kernel approaches were 92.2%/82.4%, 98.1%/97.6%, and 95.0%/91.0% for the models with two IZs, one IZ (parallel fibres), and inclined fibres, respectively (all differences statistically significant, and larger when the MUAP shapes differed more). Full article

21 pages, 519 KiB  
Article
A Metaheuristic Based Approach for the Customer-Centric Perishable Food Distribution Problem
by Hanane El Raoui, Mustapha Oudani, David A. Pelta and Ahmed El Hilali Alaoui
Electronics 2021, 10(16), 2018; https://doi.org/10.3390/electronics10162018 - 20 Aug 2021
Cited by 4 | Viewed by 1662
Abstract
High transportation costs and poor quality of service are common vulnerabilities in various logistics networks, especially in food distribution. Here we propose a many-objective Customer-centric Perishable Food Distribution Problem that focuses on cost, product quality, and service level improvement by considering not only time windows but also the customers’ target times and their priority. Recognizing the difficulty of solving such a model, we propose a General Variable Neighborhood Search (GVNS) metaheuristic-based approach that efficiently solves a subproblem while allowing us to obtain a set of solutions. These solutions are evaluated over some non-optimized criteria and then ranked using an a posteriori approach that requires minimal information about decision maker preferences. The computational results show that (a) GVNS achieved solutions of the same quality as an exact solver (CPLEX) on the subproblem; (b) GVNS can generate a large number of candidate solutions; and (c) the a posteriori approach makes it easy to model different decision maker profiles, which in turn yields different rankings of the solutions. Full article
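
A bare-bones GVNS skeleton on a stand-in one-dimensional objective (the paper's neighborhoods, routing model, and local search are of course far richer):

```python
# Sketch: General Variable Neighborhood Search = shake in neighborhood k,
# descend locally, restart from the smallest neighborhood on improvement.
import random

def local_search(x, objective, step=0.1):
    for _ in range(20):
        cand = x + random.uniform(-step, step)
        if objective(cand) < objective(x):
            x = cand
    return x

def gvns(x0, objective, neighborhoods, iters=100):
    best = x0
    for _ in range(iters):
        k = 0
        while k < len(neighborhoods):
            x = neighborhoods[k](best)          # shake in neighborhood k
            x = local_search(x, objective)      # then descend locally
            if objective(x) < objective(best):
                best, k = x, 0                  # improvement: back to N_0
            else:
                k += 1                          # else try a larger neighborhood
    return best

f = lambda x: (x - 3.0) ** 2
shakes = [lambda x, s=s: x + random.uniform(-s, s) for s in (0.5, 2.0, 8.0)]
print(round(gvns(0.0, f, shakes), 3))           # close to 3.0
```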

16 pages, 3021 KiB  
Article
Multiple-Searching Genetic Algorithm for Whole Test Suites
by Wanida Khamprapai, Cheng-Fa Tsai, Paohsi Wang and Chi-En Tsai
Electronics 2021, 10(16), 2011; https://doi.org/10.3390/electronics10162011 - 19 Aug 2021
Cited by 2 | Viewed by 1793
Abstract
A test suite is a set of test cases that evaluate the quality of software. The aim of whole test suite generation is to create test cases with the highest coverage scores possible. This study investigated the efficiency of a multiple-searching genetic algorithm (MSGA) for whole test suite generation. In previous works, the MSGA has been effectively used in multicast routing of a network system and in the generation of test cases on individual coverage criteria for small- to medium-sized programs. The performance of the algorithms varies depending on the problem instances. In this experiment, whole test suites were generated for complex programs. The MSGA was integrated into the EvoSuite test generation tool and compared with the algorithms available in EvoSuite in terms of the number of test cases, the number of statements, mutation score, and coverage score. All algorithms were evaluated on 14 problem instances with different corpora to satisfy multiple coverage criteria. The problem instances were Java open-source projects. Findings demonstrate that the MSGA-generated test cases reached higher coverage scores and detected a larger number of faults in the classes under test when compared with the others. Full article

19 pages, 996 KiB  
Article
Algorithms for Finding Vulnerabilities and Deploying Additional Sensors in a Region with Obstacles
by Kibeom Kim and Sunggu Lee
Electronics 2021, 10(12), 1504; https://doi.org/10.3390/electronics10121504 - 21 Jun 2021
Cited by 4 | Viewed by 1927
Abstract
Consider a two-dimensional rectangular region guarded by a set of sensors, which may be smart networked surveillance cameras or simpler sensor devices. In order to evaluate the level of security provided by these sensors, it is useful to find and evaluate the path with the lowest level of exposure to the sensors. Then, if desired, additional sensors can be placed at strategic locations to increase the level of security provided. General forms of these two problems are presented in this paper. Next, the minimum exposure path is found by first using the sensing limits of the sensors to compute an approximate “feasible area” of interest, and then using a grid within this feasible area to search for the minimum exposure path in a systematic manner. Two algorithms are presented for the minimum exposure path problem, and an additional subsequently executed algorithm is proposed for sensor deployment. The proposed algorithms are shown to require significantly lower computational complexity than previous methods, with the fastest proposed algorithm requiring O(n^2.5) time, as compared to O(mn^3) for a traditional grid-based search method, where n is the number of sensors, m is the number of obstacles, and certain assumptions are made on the parameter values. Full article
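
A grid-based sketch of the minimum exposure path computation follows; the exposure model, grid size, and omission of obstacles are simplifying assumptions, not the paper's algorithms:

```python
# Sketch: Dijkstra over a grid where each cell's cost is its sensor exposure.
import heapq

SENSORS = [(2, 2), (7, 6)]
N = 10

def exposure(x, y):
    # toy exposure: falls off with squared distance to each sensor
    return sum(1.0 / (1.0 + (x - sx) ** 2 + (y - sy) ** 2) for sx, sy in SENSORS)

def min_exposure_path(start, goal):
    dist = {start: exposure(*start)}
    heap = [(dist[start], start)]
    while heap:
        d, (x, y) = heapq.heappop(heap)
        if (x, y) == goal:
            return d
        if d > dist.get((x, y), float("inf")):
            continue
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < N and 0 <= ny < N:
                nd = d + exposure(nx, ny)
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    heapq.heappush(heap, (nd, (nx, ny)))
    return float("inf")

print(f"{min_exposure_path((0, 0), (9, 9)):.3f}")  # total accumulated exposure
```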

39 pages, 3681 KiB  
Article
RYEL: An Experimental Study in the Behavioral Response of Judges Using a Novel Technique for Acquiring Higher-Order Thinking Based on Explainable Artificial Intelligence and Case-Based Reasoning
by Luis Raúl Rodríguez Oconitrillo, Juan José Vargas, Arturo Camacho, Álvaro Burgos and Juan Manuel Corchado
Electronics 2021, 10(12), 1500; https://doi.org/10.3390/electronics10121500 - 21 Jun 2021
Cited by 5 | Viewed by 2771
Abstract
Studies connecting machine explainability with human behavior are essential, especially for a detailed understanding of a human’s perspective, thoughts, and sensations according to a context. A novel system called RYEL was developed based on Subject-Matter Experts (SME) to investigate new techniques for acquiring higher-order thinking and perception, the use of new computational explanatory techniques, decision-making support, and the judge’s cognition and behavior. Thus, a new spectrum is covered that promises to be a new area of study, called Interpretation-Assessment/Assessment-Interpretation (IA-AI), consisting of explaining machine inferences and of their interpretation and assessment by a human. It allows expressing semantic, ontological, and hermeneutical meaning related to the psyche of a human (judge). The system has an interpretative and explanatory nature and, in the future, could be used in other domains of discourse. More than 33 experts in Law and Artificial Intelligence validated the functional design. More than 26 judges, most of them specialized in psychology and criminology, from Colombia, Ecuador, Panama, Spain, Argentina, and Costa Rica, participated in the experiments. The results of the experimentation have been very positive. This research represents a paradigm shift in legal data processing. Full article

15 pages, 849 KiB  
Article
Remote Laboratory for Online Engineering Education: The RLAB-UOC-FPGA Case Study
by Carlos Monzo, Germán Cobo, José Antonio Morán, Eugènia Santamaría and David García-Solórzano
Electronics 2021, 10(9), 1072; https://doi.org/10.3390/electronics10091072 - 01 May 2021
Cited by 26 | Viewed by 4392
Abstract
Practical experiments are essential in engineering studies. For the acquisition of practical and professional competences in a completely online scenario, technology that allows students to carry out practical experiments is important. This paper presents a remote laboratory designed and developed by the Open University of Catalonia (RLAB-UOC), which allows engineering students studying online to carry out practical experiments anywhere and anytime with real electronic and communications equipment. The features of the remote laboratory and students’ satisfaction with its use are analyzed in real subjects across six semesters, using a self-administered questionnaire, in an FPGA-based case study. The results present students’ perception of and satisfaction with the proposed remote laboratory for acquiring the subjects’ competences and content. Full article

24 pages, 913 KiB  
Article
Machine Learning Methods for Preterm Birth Prediction: A Review
by Tomasz Włodarczyk, Szymon Płotka, Tomasz Szczepański, Przemysław Rokita, Nicole Sochacki-Wójcicka, Jakub Wójcicki, Michał Lipa and Tomasz Trzciński
Electronics 2021, 10(5), 586; https://doi.org/10.3390/electronics10050586 - 03 Mar 2021
Cited by 17 | Viewed by 5693
Abstract
Preterm births affect around 15 million children a year worldwide. Current medical efforts focus on mitigating the effects of prematurity, not on preventing it. Diagnostic methods are based on parent traits and transvaginal ultrasound, during which the length of the cervix is examined. Approximately 30% of preterm births are not correctly predicted due to the complexity of this process and its subjective assessment. Based on recent research, there is hope that machine learning can be a helpful tool to support the diagnosis of preterm births. The objective of this study is to present various machine learning algorithms applied to preterm birth prediction. The wide spectrum of analysed data sets is an advantage of this survey; they range from electrohysterogram signals through electronic health records to transvaginal ultrasounds. Reviews of works on preterm birth already exist; however, this is the first review to include works based on transvaginal ultrasound examination. In this work, we present a critical appraisal of popular methods that have employed machine learning for preterm birth prediction. Moreover, we summarise the most common challenges encountered and discuss possible future applications. Full article

12 pages, 416 KiB  
Article
Machine Learning for Predictive Modelling of Ambulance Calls
by Miao Yu, Dimitrios Kollias, James Wingate, Niro Siriwardena and Stefanos Kollias
Electronics 2021, 10(4), 482; https://doi.org/10.3390/electronics10040482 - 18 Feb 2021
Cited by 5 | Viewed by 2884
Abstract
A novel machine learning approach is presented in this paper, based on extracting latent information and using it to assist decision making on ambulance attendance and conveyance to a hospital. The approach includes two steps: in the first, a forward model analyzes the clinical and, possibly, non-clinical factors (explanatory variables), predicting whether positive decisions (response variables) should be given to the ambulance call or not; in the second, a backward model analyzes the latent variables extracted from the forward model to infer the decision-making procedure. The forward model is implemented through a machine or deep learning technique, whilst the backward model is implemented through unsupervised learning. An experimental study is presented, which illustrates the obtained results by investigating emergency ambulance calls to people in nursing and residential care homes over a one-year period, using an anonymized data set provided by the East Midlands Ambulance Service in the United Kingdom. Full article

24 pages, 6780 KiB  
Article
On the Selection of Process Mining Tools
by Panagiotis Drakoulogkonas and Dimitris Apostolou
Electronics 2021, 10(4), 451; https://doi.org/10.3390/electronics10040451 - 11 Feb 2021
Cited by 8 | Viewed by 4021
Abstract
Process mining is a research discipline that applies data analysis and computational intelligence techniques to extract knowledge from event logs of information systems. It aims to provide new means to discover, monitor, and improve processes. Process mining has gained particular attention over recent years and new process mining software tools, both academic and commercial, have been developed. This paper provides a survey of process mining software tools. It identifies and describes criteria that can be useful for comparing the tools. Furthermore, it introduces a multi-criteria methodology that can be used for the comparative analysis of process mining software tools. The methodology is based on three methods, namely ontology, decision tree, and Analytic Hierarchy Process (AHP), that can be used to help users decide which software tool best suits their needs. Full article
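
The AHP step the methodology relies on can be illustrated in a few lines: criteria weights are derived from a pairwise comparison matrix via its principal eigenvector. The judgments below are toy values, not those of the paper:

```python
# Sketch: AHP priority weights from a pairwise comparison matrix.
import numpy as np

# pairwise judgments for three hypothetical criteria (cost, usability, features);
# A[i, j] = how much more important criterion i is than criterion j
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                 # normalized priority weights
print(np.round(w, 3))           # roughly [0.63, 0.26, 0.11]
```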

23 pages, 969 KiB  
Article
Creating Customized CGRAs for Scientific Applications
by George Charitopoulos, Ioannis Papaefstathiou and Dionisios N. Pnevmatikatos
Electronics 2021, 10(4), 445; https://doi.org/10.3390/electronics10040445 - 11 Feb 2021
Cited by 2 | Viewed by 1924
Abstract
Executing complex scientific applications on Coarse Grain Reconfigurable Arrays (CGRAs) offers improvements in execution time and/or energy consumption when compared to optimized software implementations or even fully customized hardware solutions. In this work, we explore the potential of application analysis methods in such customized hardware solutions. We offer analysis metrics from various scientific applications and tailor the results to be used by MC-Def, a novel Mixed-CGRA Definition Framework targeting a Mixed-CGRA architecture that leverages the advantages of CGRAs and those of FPGAs by utilizing a customized cell-array along with a separate LUT array used for adaptability. Additionally, we present implementation results for the VHDL-based hardware implementations of our CGRA cell for various scientific applications. Full article

14 pages, 1263 KiB  
Article
Construction and Evaluation of QOL Specialized Dictionary SqolDic Using Vocabulary Meaning and QOL Scale
by Satoshi Nakagawa, Huang Minlie and Yasuo Kuniyoshi
Electronics 2021, 10(4), 417; https://doi.org/10.3390/electronics10040417 - 08 Feb 2021
Cited by 2 | Viewed by 2059
Abstract
Agents that build interactive relationships with people can provide appropriate support and generate behaviors by accurately grasping the state of the person. This study focuses on the quality of life (QOL), which can be assessed multidimensionally, and aims to estimate QOL scores in the process of human interaction. Although vision-based estimation has been the main method for QOL estimation, we propose a new text-based estimation method. We created a QOL-specific dictionary called SqolDic, which is based on large-scale Japanese textual data. To evaluate the effectiveness of SqolDic, we implemented a system that outputs the time-series variation of a user’s conversation content and the QOL scores based on it. In an experiment estimating the content of user conversations on a QOL scale from actual human conversation data, we achieved a maximum estimation accuracy of 91.2%. Additionally, in an experiment estimating QOL score variability, we successfully estimated the mental health state and one of the QOL scales with a smaller error distribution than in previous studies. The experimental results demonstrated the effectiveness of our system in estimating conversation content and QOL scores, as well as the effectiveness of our newly proposed QOL dictionary. Full article

14 pages, 684 KiB  
Article
WRGAN: Improvement of RelGAN with Wasserstein Loss for Text Generation
by Ziyun Jiao and Fuji Ren
Electronics 2021, 10(3), 275; https://doi.org/10.3390/electronics10030275 - 25 Jan 2021
Cited by 6 | Viewed by 2662
Abstract
Generative adversarial networks (GANs) were first proposed in 2014 and have been widely used in computer vision for image generation and other tasks. However, GANs for text generation have made slow progress. One reason is that the discriminator's guidance for the generator is too weak: the generator only receives a "true or false" probability in return. Compared with the usual loss function, the Wasserstein distance can provide more information to the generator, but RelGAN does not work well with the Wasserstein distance in experiments. In this paper, we propose an improved neural network based on RelGAN and the Wasserstein loss, named WRGAN. Unlike RelGAN, we modified the discriminator network structure with 1D convolutions of multiple kernel sizes. Correspondingly, we also changed the loss function of the network to a gradient-penalty Wasserstein loss. Our experiments on multiple public datasets show that WRGAN outperforms most existing state-of-the-art methods, and Bilingual Evaluation Understudy (BLEU) scores are improved with our method. Full article
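
The two ingredients the paper combines can be sketched as follows; shapes and layer sizes are illustrative rather than the paper's actual configuration: a critic built from parallel 1D convolutions with several kernel sizes, trained with the gradient-penalty Wasserstein loss:

```python
# Sketch (illustrative sizes, not the paper's exact architecture):
# a critic with parallel 1D convolutions of several kernel sizes,
# plus the WGAN-GP loss terms.
import torch
import torch.nn as nn

class MultiKernelCritic(nn.Module):
    def __init__(self, emb_dim=64, channels=32, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, channels, k, padding=k // 2) for k in kernel_sizes
        )
        self.head = nn.Linear(channels * len(kernel_sizes), 1)

    def forward(self, x):              # x: (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)          # -> (batch, emb_dim, seq_len)
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.head(torch.cat(feats, dim=1))  # unbounded critic score

def gradient_penalty(critic, real, fake, lam=10.0):
    eps = torch.rand(real.size(0), 1, 1, device=real.device)
    inter = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(critic(inter).sum(), inter, create_graph=True)[0]
    return lam * ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

critic = MultiKernelCritic()
real = torch.randn(8, 20, 64)          # stand-in for real-text embeddings
fake = torch.randn(8, 20, 64)          # stand-in for generator output
d_loss = critic(fake).mean() - critic(real).mean() \
         + gradient_penalty(critic, real, fake)
d_loss.backward()
print(d_loss.item())
```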

36 pages, 19116 KiB  
Article
Privacy-Preserving Surveillance as an Edge Service Based on Lightweight Video Protection Schemes Using Face De-Identification and Window Masking
by Alem Fitwi, Yu Chen, Sencun Zhu, Erik Blasch and Genshe Chen
Electronics 2021, 10(3), 236; https://doi.org/10.3390/electronics10030236 - 21 Jan 2021
Cited by 24 | Viewed by 4624
Abstract
With a myriad of edge cameras deployed in urban and suburban areas, many people are seriously concerned about the constant invasion of their privacy. There is mounting pressure from the public to make the cameras privacy-conscious. This paper proposes a Privacy-preserving Surveillance as an Edge service (PriSE) method with a hybrid architecture comprising a lightweight foreground-object scanner and a video protection scheme that operate on edge cameras, together with fog/cloud-based models that detect privacy attributes like windows, faces, and perpetrators. The Reversible Chaotic Masking (ReCAM) scheme is designed to ensure end-to-end privacy, while the simplified foreground-object detector helps reduce resource consumption by discarding frames containing only background objects. A robust window-object detector was developed to prevent peeping via windows, and human faces are detected using a multi-task cascaded convolutional neural network (MTCNN) to ensure de-identification. Extensive experimental studies and comparative analysis show that the PriSE scheme (i) efficiently detects foreground objects and scrambles frames containing them at the edge cameras, and (ii) detects and denatures window and face objects and identifies perpetrators at a fog/cloud server, thereby preventing unauthorized viewing via windows, ensuring the anonymity of individuals, and deterring criminal activities. Full article
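
A toy sketch of the edge-side pipeline described above, with a frame-difference foreground check and a logistic-map XOR mask standing in for the (considerably more elaborate) ReCAM scheme:

```python
# Toy sketch of the edge-side pipeline: a frame-difference foreground
# check, and a reversible chaotic XOR mask as a simplified stand-in
# for ReCAM (the real scheme is more elaborate).
import numpy as np

def has_foreground(frame, background, thresh=25, min_pixels=500):
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > thresh).sum() > min_pixels

def logistic_keystream(n, x0=0.663489, r=3.99):
    # Logistic map x -> r*x*(1-x); the same (x0, r) key regenerates
    # the stream, so XOR-masking is reversible at the receiver.
    ks = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        ks[i] = int(x * 256) % 256
    return ks

def chaotic_mask(frame, key=(0.663489, 3.99)):
    ks = logistic_keystream(frame.size, *key).reshape(frame.shape)
    return frame ^ ks  # applying the same mask twice restores the frame

background = np.zeros((120, 160), dtype=np.uint8)
frame = background.copy()
frame[40:80, 60:100] = 200           # synthetic "person" entering the scene

if has_foreground(frame, background):
    protected = chaotic_mask(frame)          # scrambled before leaving the camera
    restored = chaotic_mask(protected)       # authorized de-scrambling
    assert (restored == frame).all()
```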

19 pages, 2152 KiB  
Article
EdgeAvatar: An Edge Computing System for Building Virtual Beings
by Neftali Watkinson, Fedor Zaitsev, Aniket Shivam, Michael Demirev, Mike Heddes, Tony Givargis, Alexandru Nicolau and Alexander Veidenbaum
Electronics 2021, 10(3), 229; https://doi.org/10.3390/electronics10030229 - 20 Jan 2021
Cited by 5 | Viewed by 2786
Abstract
Dialogue systems, also known as conversational agents, are computing systems that use algorithms for speech and language processing to engage in conversation with humans or with other conversation-capable systems. A chatbot is a conversational agent whose primary goal is to maximize the length of the conversation without any specific targeted task. When a chatbot is embellished with an artistic approach meant to evoke an emotional response, it is called a virtual being. Conversational agents that interact with the physical world, on the other hand, require specialized hardware to sense and process the captured information. In this article we describe EdgeAvatar, a system based on Edge Computing principles for the creation of virtual beings. The objective of the EdgeAvatar system is to provide a streamlined and modular framework for virtual being applications that are to be deployed in public settings. We also present two implementations built on EdgeAvatar that are inspired by historical figures and interacted with visitors of the Venice Biennale 2019. EdgeAvatar can adapt to fit different approaches to AI-powered conversations. Full article
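
A minimal sketch of the modular structure such a framework implies; all class and method names here are hypothetical, not EdgeAvatar's actual API:

```python
# Hypothetical sketch of a modular virtual-being pipeline: speech-to-text,
# dialogue, and text-to-speech stages sit behind narrow interfaces so any
# one backend (e.g., the AI conversation engine) can be swapped out.
from typing import Protocol

class SpeechToText(Protocol):
    def transcribe(self, audio: bytes) -> str: ...

class DialogueEngine(Protocol):
    def reply(self, utterance: str) -> str: ...

class TextToSpeech(Protocol):
    def synthesize(self, text: str) -> bytes: ...

class EchoDialogue:
    # Placeholder; a deployment would wrap an AI conversation model.
    def reply(self, utterance: str) -> str:
        return f"You said: {utterance}"

class IdentitySTT:
    def transcribe(self, audio: bytes) -> str:
        return audio.decode("utf-8")

class IdentityTTS:
    def synthesize(self, text: str) -> bytes:
        return text.encode("utf-8")

class Avatar:
    def __init__(self, stt: SpeechToText, engine: DialogueEngine, tts: TextToSpeech):
        self.stt, self.engine, self.tts = stt, engine, tts

    def respond(self, audio: bytes) -> bytes:
        # Each stage only sees its neighbor's narrow interface.
        return self.tts.synthesize(self.engine.reply(self.stt.transcribe(audio)))

avatar = Avatar(IdentitySTT(), EchoDialogue(), IdentityTTS())
print(avatar.respond(b"hello"))
```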

23 pages, 1502 KiB  
Article
Muon–Electron Pulse Shape Discrimination for Water Cherenkov Detectors Based on FPGA/SoC
by Luis Guillermo Garcia, Romina Soledad Molina, Maria Liz Crespo, Sergio Carrato, Giovanni Ramponi, Andres Cicuttin, Ivan Rene Morales and Hector Perez
Electronics 2021, 10(3), 224; https://doi.org/10.3390/electronics10030224 - 20 Jan 2021
Cited by 7 | Viewed by 3709
Abstract
The distinction of secondary particles in extensive air showers, specifically muons and electrons, is one of the requirements for a good measurement of the composition of primary cosmic rays. We describe two methods for pulse shape detection and discrimination of muons and electrons implemented on FPGA. One uses an artificial neural network (ANN) algorithm; the other exploits a correlation approach based on finite impulse response (FIR) filters. The novel hls4ml package is used to build the ANN inference model. Both methods were implemented and tested on Xilinx FPGA System-on-Chip (SoC) devices: ZU9EG Zynq UltraScale+ and ZC7Z020 Zynq. The data set used for the analysis was captured with a data acquisition system at an experimental site based on a water Cherenkov detector. A comparison of the detection accuracy, resource utilization, and power consumption of both methods is presented. The results show an overall particle discrimination accuracy of 96.62% for the ANN and 92.50% for the FIR-based correlation, with execution times of 848 ns and 752 ns, respectively. Full article
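
The FIR/correlation approach can be illustrated with a matched-filter sketch; the pulse templates below are synthetic, whereas the paper's templates come from detector data:

```python
# Matched-filter sketch of the FIR/correlation method: correlate a
# captured pulse against normalized muon and electron templates and
# classify by the larger correlation peak. Templates are synthetic.
import numpy as np

t = np.arange(64, dtype=float)
muon_tpl = np.exp(-t / 4.0)          # fast-decaying synthetic muon pulse
electron_tpl = np.exp(-t / 12.0)     # slower synthetic electron pulse
muon_tpl /= np.linalg.norm(muon_tpl)           # normalize so the peaks
electron_tpl /= np.linalg.norm(electron_tpl)   # are directly comparable

def classify(pulse):
    m = np.max(np.correlate(pulse, muon_tpl, mode="same"))
    e = np.max(np.correlate(pulse, electron_tpl, mode="same"))
    return "muon" if m > e else "electron"

rng = np.random.default_rng(0)
pulse = np.exp(-t / 12.5) + 0.05 * rng.standard_normal(t.size)
print(classify(pulse))   # electron-like decay -> "electron"
```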

21 pages, 4174 KiB  
Article
A Non-Linear Convolution Network for Image Processing
by Stefano Marsi, Jhilik Bhattacharya, Romina Molina and Giovanni Ramponi
Electronics 2021, 10(2), 201; https://doi.org/10.3390/electronics10020201 - 17 Jan 2021
Cited by 9 | Viewed by 3156
Abstract
This paper proposes a new neural network structure for image processing whose convolutional layers, instead of using kernels with fixed coefficients, use space-variant coefficients. This strategy allows the system to adapt its behavior to the spatial characteristics of the input data. As we demonstrate, this type of layer performs a non-linear transfer function. The features generated by these layers, compared to those generated by canonical CNN layers, are more complex and better suited to the local characteristics of the images. Networks composed of these non-linear layers offer performance comparable with or superior to that of canonical convolutional networks, while using fewer layers and a significantly lower number of features. Several applications of these newly conceived networks to classical image-processing problems are analyzed. In particular, we consider: Single-Image Super-Resolution (SISR), Edge-Preserving Smoothing (EPS), Noise Removal (NR), and JPEG Artifacts Removal (JAR). Full article
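
One plausible way to realize a space-variant convolution, offered as a sketch rather than the paper's exact formulation: a small generator branch predicts a distinct k x k kernel at every pixel, which is then applied to the corresponding input neighborhood. Because the kernels depend on the input itself, the layer as a whole computes a non-linear transfer function:

```python
# Sketch of a space-variant convolution (not necessarily the paper's
# exact formulation): a generator conv predicts k*k coefficients per
# pixel; each pixel's neighborhood is weighted by its own kernel.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpaceVariantConv(nn.Module):
    def __init__(self, channels=1, k=3):
        super().__init__()
        self.k = k
        # Predicts k*k coefficients per pixel from the input itself.
        self.kernel_gen = nn.Conv2d(channels, channels * k * k, 3, padding=1)

    def forward(self, x):                       # x: (B, C, H, W)
        B, C, H, W = x.shape
        k = self.k
        kernels = self.kernel_gen(x).view(B, C, k * k, H * W)
        patches = F.unfold(x, k, padding=k // 2).view(B, C, k * k, H * W)
        out = (kernels * patches).sum(dim=2)    # per-pixel inner product
        return out.view(B, C, H, W)

x = torch.randn(2, 1, 16, 16)
print(SpaceVariantConv()(x).shape)              # torch.Size([2, 1, 16, 16])
```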

Review

15 pages, 1784 KiB  
Review
A Survey of Network-Based Hardware Accelerators
by Iouliia Skliarova
Electronics 2022, 11(7), 1029; https://doi.org/10.3390/electronics11071029 - 25 Mar 2022
Cited by 6 | Viewed by 2731
Abstract
Many practical data-processing algorithms fail to execute efficiently on general-purpose CPUs (Central Processing Units) due to the sequential nature of their operations and memory bandwidth limitations. To achieve the desired performance levels, reconfigurable (FPGA (Field-Programmable Gate Array)-based) hardware accelerators are frequently explored, as they permit the processing units' architectures to be better adapted to the specific problem/algorithm requirements. In particular, network-based data-processing algorithms are very well suited to implementation in reconfigurable hardware because several data-independent operations can easily and naturally be executed in parallel over as many processing blocks as actually required and technically possible. GPUs (Graphics Processing Units) have also demonstrated good results in this area, but they tend to use significantly more power than FPGAs, which can be a limiting factor in embedded applications. Moreover, GPUs employ a Single Instruction, Multiple Threads (SIMT) execution model and are therefore optimized for SIMD (Single Instruction, Multiple Data) operations, whereas in FPGAs fully custom datapaths can be built, eliminating much of the control overhead. This review paper aims to analyze, compare, and discuss different approaches to implementing network-based hardware accelerators in FPGAs and programmable SoCs (Systems-on-Chip). The analysis and the derived recommendations should be useful to designers of future network-based hardware accelerators. Full article
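
Why network-based algorithms map so well to reconfigurable hardware is easy to see on a small example: in an odd-even transposition sorting network, every compare-exchange within a stage is data-independent, so in an FPGA each stage becomes a bank of parallel comparators and the stages form a pipeline. A software sketch of the network's structure:

```python
# Odd-even transposition sorting network: within each stage, all
# compare-exchanges touch disjoint pairs, so in hardware each stage
# is a bank of parallel comparators and the stages form a pipeline.
def compare_exchange(a, i, j):
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def odd_even_sort(a):
    n = len(a)
    for stage in range(n):          # n stages guarantee a sorted result
        start = stage % 2           # alternate odd/even pairings
        # Every pair below is independent: an FPGA would execute all
        # of them in the same clock cycle.
        for i in range(start, n - 1, 2):
            compare_exchange(a, i, i + 1)
    return a

print(odd_even_sort([7, 3, 9, 1, 8, 2, 5, 4]))
```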

13 pages, 1015 KiB  
Review
An Analysis of Hardware Design of MLWE-Based Public-Key Encryption and Key-Establishment Algorithms
by Tuy Tan Nguyen, Tram Thi Bao Nguyen and Hanho Lee
Electronics 2022, 11(6), 891; https://doi.org/10.3390/electronics11060891 - 12 Mar 2022
Cited by 2 | Viewed by 2860
Abstract
This paper presents a review of module learning with errors-based (MLWE-based) public-key encryption and key-establishment algorithms. In particular, we introduce the preliminaries of public-key cryptography, MLWE-based algorithms, and arithmetic operations in post-quantum cryptography. We then focus on analyzing state-of-the-art hardware architecture designs for CRYSTALS-Kyber at different security levels, including hardware architectures for Kyber-512, Kyber-768, and Kyber-1024. This analysis aims to provide complete guidelines for selecting the most suitable CRYSTALS-Kyber hardware architecture for real-world post-quantum security systems with different security-level and hardware-efficiency requirements. Full article
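
The arithmetic core that such Kyber accelerators optimize is multiplication in R_q = Z_q[x]/(x^n + 1). A schoolbook sketch with toy parameters; real Kyber uses n = 256 and q = 3329, and hardware designs implement a Number Theoretic Transform (NTT) rather than this O(n^2) loop:

```python
# Schoolbook negacyclic polynomial multiplication in Z_q[x]/(x^n + 1),
# with toy parameters. Kyber uses n = 256, q = 3329, and an NTT; this
# quadratic loop only illustrates the underlying arithmetic.
def poly_mul_negacyclic(a, b, n=8, q=17):
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = (i + j) % n
            sign = -1 if i + j >= n else 1  # x^n = -1: wrap-around negates
            c[k] = (c[k] + sign * a[i] * b[j]) % q
    return c

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [8, 7, 6, 5, 4, 3, 2, 1]
print(poly_mul_negacyclic(a, b))
```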

29 pages, 1066 KiB  
Review
Service Robots: A Systematic Literature Review
by In Lee
Electronics 2021, 10(21), 2658; https://doi.org/10.3390/electronics10212658 - 30 Oct 2021
Cited by 25 | Viewed by 13368
Abstract
A service robot performs various professional and domestic/personal services useful to organizations and humans in many application domains. Currently, the service robot industry is growing rapidly along with the technological advances of the Fourth Industrial Revolution. In light of the great interest in and potential of service robots, this study conducts a systematic review of past and current research on service robots. It examines the development activities for service robots across applications and industries and categorizes service robots into four types. The categorization provides insights into the unique research activities and practices in each category. The study then analyzes the technological foundation that applies to all four categories of service robots. Finally, it discusses opportunities and challenges that are understudied but potentially important for future research on service robots. Full article

23 pages, 8068 KiB  
Review
Challenges and Opportunities in Industry 4.0 for Mechatronics, Artificial Intelligence and Cybernetics
by Vasiliki Liagkou, Chrysostomos Stylios, Lamprini Pappa and Alexander Petunin
Electronics 2021, 10(16), 2001; https://doi.org/10.3390/electronics10162001 - 19 Aug 2021
Cited by 17 | Viewed by 7221
Abstract
Industry 4.0 has emerged as an integrated digital manufacturing environment, creating a novel research perspective that has pushed research toward interdisciplinarity and the exploitation of ICT advances. This work presents and discusses the main aspects of Industry 4.0 and how intelligence can be embedded in manufacturing to create the smart factory. It briefly describes the main components of Industry 4.0 and focuses on the security challenges that the fully interconnected ecosystem of Industry 4.0 has to meet, along with the threats to each component. Preserving security plays a crucial role in Industry 4.0 and is vital to its existence, so the main research directions on ensuring the confidentiality and integrity of the information shared among Industry 4.0 components are presented. The security issues that arise from enabling new technologies are also considered. Full article

42 pages, 1350 KiB  
Review
Machine Learning Methods for Histopathological Image Analysis: A Review
by Jonathan de Matos, Steve Tsham Mpinda Ataky, Alceu de Souza Britto, Luiz Eduardo Soares de Oliveira and Alessandro Lameiras Koerich
Electronics 2021, 10(5), 562; https://doi.org/10.3390/electronics10050562 - 27 Feb 2021
Cited by 29 | Viewed by 4646
Abstract
Histopathological images (HIs) are the gold standard for evaluating some types of tumors for cancer diagnosis. The analysis of such images is time- and resource-consuming and very challenging even for experienced pathologists, resulting in inter-observer and intra-observer disagreements. One way of accelerating such an analysis is to use computer-aided diagnosis (CAD) systems. This paper presents a review of machine learning methods for histopathological image analysis, covering both shallow and deep learning methods. We also cover the most common tasks in HI analysis, such as segmentation and feature extraction. In addition, we present a list of publicly available and private datasets that have been used in HI research. Full article
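
As a pointer to what the deep learning branch of this literature often looks like in practice, here is a common transfer-learning baseline, offered as a sketch rather than a method proposed by the review: fine-tuning an ImageNet-pretrained CNN head on HI patches:

```python
# Common deep-learning baseline for HI patch classification (a sketch,
# not a method from the review): ImageNet-pretrained ResNet-18 with a
# new classification head; the backbone is frozen, only the head trains.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2                      # e.g., benign vs. malignant patches

# Downloads ImageNet weights on first use.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

for p in model.parameters():         # freeze the backbone...
    p.requires_grad = False
for p in model.fc.parameters():      # ...and train only the new head
    p.requires_grad = True

patches = torch.randn(4, 3, 224, 224)   # stand-in for normalized HI patches
print(model(patches).shape)              # torch.Size([4, 2])
```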
