Information Technology and Its Applications 2021

A special issue of Symmetry (ISSN 2073-8994). This special issue belongs to the section "Computer".

Deadline for manuscript submissions: closed (31 August 2022) | Viewed by 211995

Special Issue Editors


Guest Editor
Department of Information Management, Chaoyang University of Technology, Taichung 41349, Taiwan
Interests: information hiding; steganography; image processing; interactive game design; 3D modeling
Special Issues, Collections and Topics in MDPI journals

Guest Editor
Department of Electrical and Electronic Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Sungai Long Campus, Jalan Sungai Long, Bandar Sungai Long Cheras 43000, Kajang, Selangor, Malaysia
Interests: information security; cryptography; biometric security; machine learning

Guest Editor
Department of Computer Science, Vidyasagar University, Paschim Medinipur, West Bengal, India
Interests: computer vision; data hiding; cryptography and steganography; digital watermarking; image processing

Special Issue Information

Dear Colleagues,

This Special Issue aims to provide a forum for presentations and discussions on recent methodological advances in information and multimedia technology and its applications. This Special Issue covers pure research and applications within novel scopes related to multimedia techniques and applications. In addition, it deals with information technologies, such as information hiding, security, IoT, cloud computing, and so on. The topics of this Special Issue include, but are not limited to:

  • Multimedia Applications
  • Image Related
  • Pattern Recognition
  • Multimedia Related Issues
  • Information Hiding
  • Security
  • IoT
  • Cloud Computing
  • Machine Learning
  • Artificial Intelligence
  • Data Mining
  • Software Engineering
  • Information Technology Related Issues

Dr. Tzu-Chuen Lu
Dr. Wun-She Yap
Dr. Biswapati Jana
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Symmetry is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multimedia Applications
  • Image Related
  • Pattern Recognition
  • Multimedia Related Issues
  • Information Hiding
  • Security
  • IoT
  • Cloud Computing
  • Machine Learning
  • Artificial Intelligence
  • Data Mining
  • Software Engineering
  • Information Technology Related Issues

Published Papers (57 papers)


Research


11 pages, 255 KiB  
Article
Ignoring Internal Utilities in High-Utility Itemset Mining
by Damla Oguz
Symmetry 2022, 14(11), 2339; https://doi.org/10.3390/sym14112339 - 07 Nov 2022
Viewed by 1056
Abstract
High-utility itemset mining discovers a set of items that are sold together and have utility values higher than a given minimum utility threshold. The utilities of these itemsets are calculated by considering their internal and external utility values, which correspond, respectively, to the quantity sold of each item in each transaction and profit units. Therefore, internal and external utilities have symmetric effects on deciding whether an itemset is high-utility. The symmetric contributions of both utilities cause two major related challenges. First, itemsets with low external utility values can easily exceed the minimum utility threshold if they are sold extensively. In this case, such itemsets can be found more efficiently using frequent itemset mining. Second, a large number of high-utility itemsets are generated, which can result in interesting or important high-utility itemsets being overlooked. This study presents an asymmetric approach in which the internal utility values are ignored when finding high-utility itemsets with high external utility values. The experimental results on two real datasets reveal that the external utility values have fundamental effects on the high-utility itemsets. The results of this study also show that this effect tends to increase for high values of the minimum utility threshold. Moreover, the proposed approach reduces the execution time. Full article
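The utility computation described in the abstract can be sketched as follows; the item names, quantities, and profit table are hypothetical, and real high-utility itemset miners use pruning strategies rather than this brute-force scan:

```python
def itemset_utility(itemset, transactions, profits):
    """Utility of an itemset: over every transaction that contains the whole
    itemset, sum quantity (internal utility) * unit profit (external utility)."""
    total = 0
    for txn in transactions:
        if all(item in txn for item in itemset):
            total += sum(txn[item] * profits[item] for item in itemset)
    return total

# Hypothetical toy data: each transaction maps item -> quantity sold.
transactions = [
    {"bread": 3, "milk": 1},
    {"bread": 1, "milk": 2, "caviar": 1},
    {"caviar": 2},
]
profits = {"bread": 1, "milk": 2, "caviar": 50}  # external utility per unit

# A cheap but frequently bought pair vs. a rarely bought expensive item:
print(itemset_utility({"bread", "milk"}, transactions, profits))  # 10
print(itemset_utility({"caviar"}, transactions, profits))         # 150
```

The second itemset illustrates the paper's point: an item with high external utility can dominate the utility ranking even when sold in small quantities.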
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

25 pages, 3513 KiB  
Article
Improving the Performance and Explainability of Indoor Human Activity Recognition in the Internet of Things Environment
by Ayse Betul Cengiz, Kokten Ulas Birant, Mehmet Cengiz, Derya Birant and Kemal Baysari
Symmetry 2022, 14(10), 2022; https://doi.org/10.3390/sym14102022 - 26 Sep 2022
Cited by 5 | Viewed by 2019
Abstract
Traditional indoor human activity recognition (HAR) has been defined as a time-series data classification problem and requires feature extraction. Current indoor HAR systems still lack transparent, interpretable, and explainable approaches that can generate human-understandable information. This paper proposes a new approach, called Human Activity Recognition on Signal Images (HARSI), which defines the HAR problem as an image classification problem to improve both explainability and recognition accuracy. The proposed HARSI method collects sensor data from the Internet of Things (IoT) environment and transforms the raw signal data into visually interpretable images to take advantage of the strengths of convolutional neural networks (CNNs) in handling image data. This study focuses on the recognition of symmetric human activities, including walking, jogging, moving downstairs, moving upstairs, standing, and sitting. The experiments carried out on a real-world dataset showed that a significant improvement (13.72%) was achieved by the proposed HARSI model compared to traditional machine learning models. The results also showed that our method (98%) outperformed the state-of-the-art methods (90.94%) in terms of classification accuracy. Full article
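The signal-to-image idea can be illustrated with a minimal sketch (not the authors' HARSI pipeline): normalize a 1-D sensor window and reshape it into a small grayscale image that a CNN could consume. The window length and image side are arbitrary choices for illustration:

```python
import numpy as np

def signal_to_image(window, side=16):
    """Min-max normalize a 1-D signal window to [0, 255] and reshape it
    into a side x side grayscale image."""
    w = np.asarray(window, dtype=float)
    w = np.resize(w, side * side)                 # trim or repeat to a square
    lo, hi = w.min(), w.max()
    scaled = (w - lo) / (hi - lo + 1e-9) * 255.0  # guard against flat signals
    return scaled.reshape(side, side).astype(np.uint8)

window = np.sin(np.linspace(0, 8 * np.pi, 300))   # stand-in accelerometer trace
img = signal_to_image(window)
print(img.shape)  # (16, 16) -- ready for a small image classifier
```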
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

29 pages, 5821 KiB  
Article
A Continuous Region-Based Skyline Computation for a Group of Mobile Users
by Ghoncheh Babanejad Dehaki, Hamidah Ibrahim, Ali A. Alwan, Fatimah Sidi, Nur Izura Udzir and Ma'aruf Mohammed Lawal
Symmetry 2022, 14(10), 2003; https://doi.org/10.3390/sym14102003 - 24 Sep 2022
Viewed by 953
Abstract
Skyline queries, which are based on the concept of Pareto dominance, filter the objects from a potentially large multi-dimensional collection of objects by keeping the best, most favoured objects in satisfying the user's preferences. With today's advancement of technology, ad hoc meetings or impromptu gatherings involving a group of people are becoming more and more common. Intuitively, deciding on an optimal meeting point is not a straightforward task, especially when conflicting criteria are involved and the number of criteria to be considered is vast. Moreover, a point that is near to a user might not meet all the various users' preferences, while a point that meets most of the users' preferences might be located far away from these users. The task becomes more complicated when these users are on the move. In this paper, we present the Region-based Skyline for a Group of Mobile Users (RSGMU) method, which aims to resolve the problem of continuously finding the optimal meeting points, herein called skyline objects, for a group of users while they are on the move. RSGMU assumes a centroid-based movement, where users are assumed to be moving towards a centroid that is identified based on the current locations of each user in the group. Meanwhile, to limit the search space in identifying the objects of interest, a search region is constructed. However, changes in the users' locations cause the search region of the group to be reconstructed. Unlike existing methods that require users to frequently report their latest locations, RSGMU utilises a dynamic motion formula, which abides by the laws of classical physics that are fundamentally symmetrical with respect to time, in order to predict the locations of the users at a specified time interval. As a result, the skyline objects are continuously updated, and the ideal meeting points can be decided upon ahead of time. Hence, the users' locations as well as the spatial and non-spatial attributes of the objects are used as the skyline evaluation criteria. Meanwhile, to avoid re-computation of skylines at each time interval, the objects of interest within a Single Minimum Bounding Rectangle that is formed based on the current search region are organized in a Kd-tree data structure. Several experiments have been conducted, and the results show that our proposed method outperforms previous work with respect to CPU time. Full article
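The Pareto-dominance test underlying skyline queries can be sketched as follows, assuming smaller values are better on every criterion (e.g. distance to the group and price); the candidate tuples are hypothetical:

```python
def dominates(a, b):
    """a dominates b if a is no worse on every criterion and strictly
    better on at least one (smaller is better here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(objects):
    """Keep every object that no other object dominates."""
    return [o for o in objects
            if not any(dominates(other, o) for other in objects if other != o)]

# Hypothetical (distance_to_group, price) tuples for candidate meeting points.
candidates = [(2, 30), (5, 10), (3, 25), (6, 40)]
print(skyline(candidates))  # [(2, 30), (5, 10), (3, 25)]
```

The last candidate is dropped because (2, 30) beats it on both criteria; the other three trade distance against price, which is exactly the conflict the paper's method resolves continuously for moving users.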
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

24 pages, 3659 KiB  
Article
Machine Learning Models for the Prediction of Energy Consumption Based on Cooling and Heating Loads in Internet-of-Things-Based Smart Buildings
by Bita Ghasemkhani, Reyat Yilmaz, Derya Birant and Recep Alp Kut
Symmetry 2022, 14(8), 1553; https://doi.org/10.3390/sym14081553 - 28 Jul 2022
Cited by 4 | Viewed by 2677
Abstract
In this article, the consumption of energy in Internet-of-Things-based smart buildings is investigated. The main goal of this work is to predict cooling and heating loads as the parameters that impact the amount of energy consumption in smart buildings, some of which have the property of symmetry. For this purpose, it proposes novel machine learning models that were built by using the tri-layered neural network (TNN) and maximum relevance minimum redundancy (MRMR) algorithms. Each feature related to buildings was investigated in terms of skewness to determine whether its distribution is symmetric or asymmetric. The best features were determined as the essential parameters for energy consumption. The results of this study show that the properties of relative compactness and glazing area have the most impact on energy consumption in the buildings, while orientation and glazing area distribution are less correlated with the output variables. In addition, the best mean absolute error (MAE) was calculated as 0.28993 for heating load (kWh/m2) prediction and 0.53527 for cooling load (kWh/m2) prediction. The experimental results showed that our method outperformed the state-of-the-art methods on the same dataset. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

13 pages, 318 KiB  
Article
Hiding Sensitive Itemsets Using Sibling Itemset Constraints
by Baris Yildiz, Alp Kut and Reyat Yilmaz
Symmetry 2022, 14(7), 1453; https://doi.org/10.3390/sym14071453 - 15 Jul 2022
Cited by 1 | Viewed by 1315
Abstract
Advances in data collection and processing have made data mining a popular tool among organizations in recent decades. Sharing information between companies could make this tool more beneficial for each party. However, there is a risk of sensitive knowledge disclosure. Shared data should be modified in such a way that sensitive relationships are hidden. Since the discovery of frequent itemsets is one of the most effective data mining tools that firms use, privacy-preserving techniques are necessary for continuing frequent itemset mining. Algorithmically, there are two types of approaches: heuristic and exact. This paper presents an exact itemset hiding approach, which uses constraints for a better solution in terms of side effects and minimum distortion of the database. This distortion creates an asymmetric relation between the original and the sanitized database. To lessen the side effects of itemset hiding, we introduce the sibling itemset concept, which is used for generating constraints. Additionally, our approach does not require frequent itemset mining to be executed before the hiding process. This gives our approach an advantage in total running time. We evaluate our algorithm on several benchmark datasets. Our results show the effectiveness of our hiding approach and that eliminating the prior mining of itemsets is time efficient. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
13 pages, 3039 KiB  
Article
Verifiable (2, n) Image Secret Sharing Scheme Using Sudoku Matrix
by Yi-Hui Chen, Jia-Ye Lee, Min-Hsien Chiang and Shih-Hsin Chen
Symmetry 2022, 14(7), 1445; https://doi.org/10.3390/sym14071445 - 14 Jul 2022
Cited by 1 | Viewed by 1352
Abstract
As Internet technology continues to profoundly impact our lives, techniques for information protection have become increasingly advanced and a common discussion topic. With the aim of protecting private images, this paper splits a secret image into n individual shares using a Sudoku matrix with authentication features. Later, the shares can be compiled to completely reconstruct the secret image. The shares are meaningful in order to avoid detection and suspicion among malicious users. Our proposed matrix is unique because the embedding rate of the secret data is very high, while the visual quality of the shares is well guaranteed. In addition, the embedded authentication codes can be retrieved to authenticate the integrity of the secret image. Experimental results prove the advantages of our approach in terms of visual quality and authentication ability. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

36 pages, 10950 KiB  
Article
JLcoding Language Tool for Early Programming Learning
by Wei-Ying Li and Tzu-Chuen Lu
Symmetry 2022, 14(7), 1405; https://doi.org/10.3390/sym14071405 - 08 Jul 2022
Viewed by 1963
Abstract
This paper proposes a block-based symmetry language to design a novel educational programming system called JLcoding. The JLcoding system helps students transition from a block-based language to a text-based programming language. The interface and functions of the system resemble block-based programs such as Scratch, but it is designed with a text-based architecture. The system contains graphic teaching to teach the basic knowledge of programming, such that students can maintain interest and confidence when learning computational thinking. The system simultaneously combines the advantages of block-based and text-based programming. This research engaged 41 students who had learned a block-based programming language as the research subjects. The experimental results show that the students obtained higher post-test scores than pre-test scores after learning the JLcoding system. The degree of learning progress was not affected by their gender. Additionally, it was discovered that male students have higher confidence in their programming abilities, and students who have learning interests are more motivated to continue learning the program. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

18 pages, 1999 KiB  
Article
Effects of Adversarial Training on the Safety of Classification Models
by Handong Kim and Jongdae Han
Symmetry 2022, 14(7), 1338; https://doi.org/10.3390/sym14071338 - 28 Jun 2022
Cited by 1 | Viewed by 1309
Abstract
Artificial intelligence (AI) is one of the most important topics that implements symmetry in computer science. Like humans, most AI learns by a trial-and-error approach, which requires appropriate adversarial examples. In this study, we show that adversarial training can be useful for verifying the safety of a classification model in the early stages of development. We experimented with various amounts of adversarial data and found that safety can be significantly improved by an appropriate ratio of adversarial training. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

13 pages, 10168 KiB  
Article
Separable Reversible Watermarking in Encrypted Images for Privacy Preservation
by Ya-Fen Chang and Wei-Liang Tai
Symmetry 2022, 14(7), 1336; https://doi.org/10.3390/sym14071336 - 28 Jun 2022
Cited by 1 | Viewed by 1161
Abstract
We propose a separable, reversible watermarking scheme in encrypted images for privacy preservation. The Paillier cryptosystem is used for separable detection and decryption. Users may want to use cloud services without exposing their content. To preserve privacy, the image owner encrypts the original image using a public key cryptosystem before sending it to the cloud. Cloud service providers can embed the watermark into encrypted images by using a data-hiding key without knowing and destroying the original image. Even though the cloud service providers do not know the original image content, they can use the data-hiding key to detect the watermark from the encrypted image for authentication. Besides, the image owner can use the private key to directly decrypt the watermarked encrypted image to get the original image without any distortion due to its homomorphic property. Experimental results show the feasibility of the proposed method, which can provide efficient privacy-preserving authentication without degrading security. Full article
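The homomorphic property such schemes rely on can be demonstrated with a toy Paillier implementation (demo-sized primes, entirely insecure): the product of two ciphertexts decrypts to the sum of the plaintexts, so an untrusted party can combine values it cannot read. This is a generic illustration of Paillier, not the paper's watermarking construction:

```python
import math, random

p, q = 17, 19                        # demo-only primes; real keys use ~2048 bits
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
L = lambda x: (x - 1) // n           # Paillier's L function
mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse used in decryption

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be invertible mod n
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return L(pow(c, lam, n2)) * mu % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
c1, c2 = encrypt(12), encrypt(30)
print(decrypt(c1 * c2 % n2))  # 42
```

Requires Python 3.9+ for `math.lcm` and the modular inverse form of `pow`. In the paper's setting, this property is what lets the cloud embed data into an encrypted image and still allows the owner to recover the original without distortion.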
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

21 pages, 4199 KiB  
Article
Modelling the Impact of Transit Media on Information Spreading in an Urban Space Using Cellular Automata
by Krzysztof Małecki, Jarosław Jankowski and Mateusz Szkwarkowski
Symmetry 2019, 11(3), 428; https://doi.org/10.3390/sym11030428 - 22 Mar 2019
Cited by 10 | Viewed by 3920
Abstract
Information spreading processes are the key drivers of marketing campaigns. Activity on social media delivers more detailed information compared to viral marketing in traditional media. Monitoring the performance of outdoor campaigns that are carried out using the transportation system is even more complicated because of the lack of data. The approach that is presented in this paper is based on cellular automata and enables the modelling of the information-spreading processes that are initiated by transit advertising within an urban space. The evaluation of classical and graph cellular automata models and a coverage analysis of transit advertising based on tram lines were performed. The results demonstrated how the number of lines affects the performance in terms of coverage within an urban space and the differences between the proposed models. While the research is based on an exemplary dataset taken from Szczecin (Poland), the presented framework can be used together with data from the public transport system for modelling advertising resource usage and coverage within the urban space. Full article
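A much-simplified cellular-automaton sketch of such spreading (not the paper's model, which uses classical and graph cellular automata seeded along transit routes): each informed cell informs each of its four neighbours with some probability at every step, and coverage is the fraction of informed cells:

```python
import random

def step(grid, p=0.3, rng=random):
    """One CA step: every informed cell (1) informs each 4-neighbor with
    probability p; informed cells stay informed."""
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(m):
            if grid[i][j]:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, b = i + di, j + dj
                    if 0 <= a < n and 0 <= b < m and rng.random() < p:
                        new[a][b] = 1
    return new

grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1                      # the ad is first seen at the center cell
for _ in range(10):
    grid = step(grid)
print(sum(map(sum, grid)) / 25)     # fraction of the urban grid "covered"
```

In the paper's setting, the seed cells would follow tram lines through the urban grid rather than a single fixed point.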
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

21 pages, 4824 KiB  
Article
Energy Efficiency and Coverage Trade-Off in 5G for Eco-Friendly and Sustainable Cellular Networks
by Mohammed H. Alsharif, Anabi Hilary Kelechi, Jeong Kim and Jin Hong Kim
Symmetry 2019, 11(3), 408; https://doi.org/10.3390/sym11030408 - 20 Mar 2019
Cited by 29 | Viewed by 5258
Abstract
Recently, cellular networks' energy efficiency has garnered research interest from academia and industry because of its considerable economic and ecological effects in the near future. This study proposes an approach to cooperation between the Long-Term Evolution (LTE) and next-generation wireless networks. The fifth-generation (5G) wireless network aims to negotiate a trade-off between wireless network performance (sustaining the demand for high-speed packet rates during busy traffic periods) and energy efficiency (EE) by switching 5G base stations (BSs) off/on based on the instantaneous traffic load condition and, at the same time, guaranteeing network coverage for mobile subscribers through the remaining active LTE BSs. The particle swarm optimization (PSO) algorithm was used to determine the optimum criteria of the active LTE BSs (transmission power, total antenna gain, spectrum/channel bandwidth, and signal-to-interference-noise ratio) that achieve maximum coverage for the entire area during the switch-off session of 5G BSs. Simulation results indicate that the energy savings can reach 3.52 kW per day, with a maximum data rate of up to 22.4 Gbps at peak traffic hours and 80.64 Mbps during a 5G BS switched-off session, along with guaranteed full coverage over the entire region by the remaining active LTE BSs. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

17 pages, 1750 KiB  
Article
Handling Semantic Complexity of Big Data using Machine Learning and RDF Ontology Model
by Rauf Sajjad, Imran Sarwar Bajwa and Rafaqut Kazmi
Symmetry 2019, 11(3), 309; https://doi.org/10.3390/sym11030309 - 01 Mar 2019
Cited by 3 | Viewed by 3556
Abstract
Business information required for applications and business processes is extracted using systems such as business rule engines. Since the advent of Big Data, such rule engines have been producing rules in large quantities, and more rules lead to greater complexity in semantic analysis and understanding. This paper introduces a method to handle semantic complexity in rules and to support the automated generation of a Resource Description Framework (RDF) metadata model of rules; this model is used to assist in querying and analysing Big Data. In practice, dynamic changes in rules can be a source of conflict among rules stored in a repository. The literature review identified the need for a method that can semantically analyse rules and help business analysts in testing and validating the rules once a change is made to a rule. This paper presents a robust method that not only supports semantic analysis of rules but also generates an RDF metadata model of rules and provides querying support for the sake of semantic interpretation of the rules. The results of the experiments show that consistency checking of a set of big data rules is possible through automated tools. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

11 pages, 1181 KiB  
Article
Analysis of Open Source Operating System Evolution: A Perspective from Package Dependency Network Motif
by Jing Wang, Youguo Li, Yusong Tan, Qingbo Wu and Quanyuan Wu
Symmetry 2019, 11(3), 298; https://doi.org/10.3390/sym11030298 - 27 Feb 2019
Cited by 1 | Viewed by 2699
Abstract
The complexity of open source operating systems constantly increases on account of their widespread application. It is increasingly difficult to understand the collaboration between components in the system. Extant research on open source operating system evolution is mainly based on Lehman's laws and is conducted by analyzing characteristics such as lines of source code. Networks, which are utilized to demonstrate relationships among entities, are an adequate model for exploring the cooperation of the units that form a software system. The software network has become a research hotspot in the field of software engineering, offering a new viewpoint for estimating the evolution of open source operating systems. A motif, a connected subgraph that occurs frequently in a network, is extensively used in other scientific fields, such as bioscience, to detect evolutionary rules. Thus, this paper constructs the software package dependency networks of open source operating systems and investigates their evolutionary discipline from the perspective of the motif. The results of our experiments, which took Ubuntu Kylin as a study example, indicate a stable evolution of motif change as well as revealing a structural defect in that system. Full article
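Motif analysis can be illustrated on a tiny hypothetical dependency graph by counting the simplest nontrivial motif, the triangle; real motif detection considers larger, directed subgraphs and compares counts against randomized networks:

```python
from itertools import combinations

def count_triangles(edges):
    """Count triangles in an undirected graph given as a list of edges."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return sum(1 for a, b, c in combinations(sorted(adj), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

# Hypothetical package-dependency edges.
deps = [("libc", "bash"), ("libc", "coreutils"), ("bash", "coreutils"),
        ("libc", "python3")]
print(count_triangles(deps))  # 1: the libc-bash-coreutils triangle
```

Tracking how such motif counts change across releases is, in miniature, the kind of evolutionary signal the paper studies on Ubuntu Kylin's package network.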
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

22 pages, 326 KiB  
Article
A Generic Framework for Accountable Optimistic Fair Exchange Protocol
by Jia-Ch’ng Loh, Swee-Huay Heng and Syh-Yuan Tan
Symmetry 2019, 11(2), 285; https://doi.org/10.3390/sym11020285 - 22 Feb 2019
Cited by 3 | Viewed by 2254
Abstract
The Optimistic Fair Exchange protocol was designed for two parties to exchange items fairly, where an arbitrator always remains offline and is referred to only if a dispute happens. There are various optimistic fair exchange protocols with different security properties in the literature. Most optimistic fair exchange protocols satisfy resolution ambiguity, where a signature signed by the signer is computationally indistinguishable from one resolved by the arbitrator. Huang et al. proposed the first generic framework for an accountable optimistic fair exchange protocol in the random oracle model, where it possesses resolution ambiguity and is able to reveal the actual signer when needed. Ganjavi et al. later proposed the first generic framework in the standard model. In this paper, we propose a new generic framework for an accountable optimistic fair exchange protocol in the standard model using an ordinary signature, a convertible undeniable signature, and a ring signature scheme as the underlying building blocks. We also provide an instantiation using our proposed generic framework to obtain an efficient pairing-based accountable optimistic fair exchange protocol with short signatures. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

11 pages, 699 KiB  
Article
Kernel Ridge Regression Model Based on Beta-Noise and Its Application in Short-Term Wind Speed Forecasting
by Shiguang Zhang, Ting Zhou, Lin Sun and Chao Liu
Symmetry 2019, 11(2), 282; https://doi.org/10.3390/sym11020282 - 22 Feb 2019
Cited by 6 | Viewed by 3451
Abstract
The kernel ridge regression (KRR) model aims to find hidden nonlinear structure in raw data. It assumes that the noise in the data follows a Gaussian model. However, it has been pointed out that the noise in wind speed/power forecasting obeys the Beta distribution, so classic regression techniques are not applicable to this case. Hence, we derive the empirical risk loss for the Beta distribution and propose a kernel ridge regression model based on Beta noise (BN-KRR). Numerical experiments are carried out on real-world data. The results indicate that the proposed technique obtains good performance on short-term wind speed forecasting.
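As a point of reference for the model being modified, a minimal Gaussian-noise KRR baseline can be sketched as follows. This is a generic sketch, not the paper's BN-KRR: the Beta-noise variant replaces the squared-error empirical risk, which is not reproduced here, and all parameter values and the toy series are illustrative.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Pairwise squared distances -> RBF (Gaussian) kernel matrix
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=0.1, gamma=1.0):
    # Closed-form dual solution: alpha = (K + lam*I)^-1 y
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# toy wind-speed-like series: predict the next value from the previous two
rng = np.random.default_rng(0)
t = np.linspace(0, 6, 60)
s = np.sin(t) + 0.05 * rng.standard_normal(60)
X = np.stack([s[:-2], s[1:-1]], axis=1)   # lagged features
y = s[2:]
alpha = krr_fit(X, y, lam=0.1, gamma=2.0)
pred = krr_predict(X, alpha, X, gamma=2.0)
print(float(np.mean((pred - y) ** 2)))     # small in-sample MSE
```

The BN-KRR of the paper keeps this kernel machinery but minimizes a Beta-distribution-derived loss instead of the squared error implicit in the closed form above.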
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

17 pages, 383 KiB  
Article
Data Source Selection Based on an Improved Greedy Genetic Algorithm
by Jian Yang and Chunxiao Xing
Symmetry 2019, 11(2), 273; https://doi.org/10.3390/sym11020273 - 20 Feb 2019
Cited by 7 | Viewed by 3994
Abstract
The development of information technology has led to a sharp increase in data volume. This tremendous amount of data has become strategic capital that allows businesses to derive superior market intelligence or improve existing operations. People expect to consolidate and utilize as much data as possible. However, too much data brings a huge integration cost, such as the cost of purchasing and cleaning, so under limited resources we aim to obtain greater data-integration value. In addition, the uneven quality of data sources makes the multi-source selection task more difficult: low-quality data sources can seriously affect integration results without the desired quality gain. In this paper, we study how to balance data gain and cost in source selection; specifically, we maximize the gain of data under a given budget. We propose an improved greedy genetic algorithm (IGGA) to solve the source selection problem and carry out a wide range of experimental evaluations on real and synthetic datasets. The empirical results show considerable performance in favor of the proposed algorithm in terms of solution quality.
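The greedy component of budget-constrained source selection can be illustrated with a small sketch. The paper's IGGA embeds greedy choices inside a genetic algorithm, which is not reproduced here; the gain/cost figures and source names below are invented for illustration.

```python
def greedy_source_selection(sources, budget):
    """Greedy seed for source selection: repeatedly pick the source with the
    best gain/cost ratio that still fits the remaining budget.
    `sources` maps name -> (gain, cost)."""
    chosen, total_gain, remaining = [], 0.0, budget
    candidates = dict(sources)
    while candidates:
        # best affordable candidate by gain-per-cost
        affordable = {n: gc for n, gc in candidates.items() if gc[1] <= remaining}
        if not affordable:
            break
        name = max(affordable, key=lambda n: affordable[n][0] / affordable[n][1])
        gain, cost = candidates.pop(name)
        chosen.append(name)
        total_gain += gain
        remaining -= cost
    return chosen, total_gain

srcs = {"A": (9.0, 3.0), "B": (7.0, 4.0), "C": (4.0, 1.0), "D": (6.0, 5.0)}
print(greedy_source_selection(srcs, budget=8.0))  # -> (['C', 'A', 'B'], 20.0)
```

In the IGGA, solutions like the one above would serve as individuals whose crossover and mutation explore trade-offs the pure greedy choice misses.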
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

23 pages, 2031 KiB  
Article
Tenant-Oriented Monitoring for Customized Security Services in the Cloud
by Huaizhe Zhou, Haihe Ba, Yongjun Wang, Zhiying Wang, Jun Ma, Yunshi Li and Huidong Qiao
Symmetry 2019, 11(2), 252; https://doi.org/10.3390/sym11020252 - 18 Feb 2019
Cited by 5 | Viewed by 2983
Abstract
The dramatic proliferation of cloud computing makes it an attractive target for malicious attacks. An increasing number of solutions resort to virtual machine introspection (VMI) to deal with security issues in the cloud environment. However, existing works do not allow tenants to flexibly customize individual security services based on their security requirements. Additionally, adopting VMI-based security solutions puts tenants at risk of exposing sensitive information to attackers. To alleviate the security and privacy concerns of tenants, we present SECLOUD, a framework for monitoring VMs in the cloud for security analysis. By extending VMI techniques, SECLOUD provides remote tenants or their authorized security service providers with flexible interfaces for monitoring runtime information of guest virtual machines (VMs) in a non-intrusive manner. The proposed framework enhances the effectiveness of monitoring by taking advantage of the architectural symmetry of the cloud environment. Moreover, we harden our framework with a privacy-preserving capability for tenants. The flexibility and effectiveness of SECLOUD are demonstrated through a prototype implementation based on the Xen hypervisor, which incurs acceptable performance overhead.
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

20 pages, 1307 KiB  
Article
EasyStego: Robust Steganography Based on Quick-Response Barcodes for Crossing Domains
by Zhenhao Luo, Wei Xie, Baosheng Wang, Yong Tang and Qianqian Xing
Symmetry 2019, 11(2), 222; https://doi.org/10.3390/sym11020222 - 14 Feb 2019
Cited by 5 | Viewed by 4083
Abstract
Despite greater attention being paid to sensitive-information leakage in the cyberdomain, the sensitive-information problem of the physical domain remains neglected. Anonymous users can easily access the sensitive information of other users, such as transaction information, health status, and addresses, without any advanced technologies. Ideally, secret messages should be protected not only in the cyberdomain but also in the complex physical domain. However, popular steganography schemes only work in the traditional cyberdomain and are useless when physical distortions of messages are unavoidable. This paper first defines the concept of cross-domain steganography and then proposes EasyStego, a novel cross-domain steganography scheme. EasyStego uses QR barcodes as carriers; therefore, it is robust to physical distortions in the complex physical domain. Moreover, EasyStego has a large capacity for embeddable secrets and strong scalability in various scenarios. EasyStego uses the AES encryption algorithm to control permissions on secret messages, which further reduces the possibility of sensitive-information leakage. Experiments show that EasyStego has perfect robustness and good efficiency. Compared with the best current barcode-based steganography scheme, EasyStego has greater steganographic capacity and less impact on barcode data. In robustness tests, EasyStego successfully extracts secret messages at different angles and distances. When natural textures are added or quantitative error bits are introduced, other related steganography techniques fail, whereas EasyStego extracts secret messages with a success rate of nearly 100%.
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

14 pages, 2647 KiB  
Article
Cross-Language End-to-End Speech Recognition Research Based on Transfer Learning for the Low-Resource Tujia Language
by Chongchong Yu, Yunbing Chen, Yueqiao Li, Meng Kang, Shixuan Xu and Xueer Liu
Symmetry 2019, 11(2), 179; https://doi.org/10.3390/sym11020179 - 02 Feb 2019
Cited by 17 | Viewed by 5385
Abstract
To rescue and preserve an endangered language, this paper studies an end-to-end speech recognition model based on sample transfer learning for the low-resource Tujia language. At the Tujia-language International Phonetic Alphabet (IPA) label layer, a Chinese corpus is used as an extension of the Tujia language to effectively address the insufficient Tujia corpus, constructing a cross-language corpus and an IPA dictionary unified between Chinese and Tujia. A convolutional neural network (CNN) and a bi-directional long short-term memory (BiLSTM) network are used to extract cross-language acoustic features and train shared hidden-layer weights for the Tujia and Chinese phonetic corpora. In addition, automatic speech recognition of the Tujia language is realized using an end-to-end method that consists of symmetric encoding and decoding. Furthermore, transfer learning is used to establish the cross-language end-to-end Tujia language recognition system. The experimental results show that the recognition error rate of the proposed model is 46.19%, which is 2.11% lower than that of the model trained only on Tujia language data. Therefore, this approach is feasible and effective.
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

20 pages, 1314 KiB  
Article
Detecting Word-Based Algorithmically Generated Domains Using Semantic Analysis
by Luhui Yang, Jiangtao Zhai, Weiwei Liu, Xiaopeng Ji, Huiwen Bai, Guangjie Liu and Yuewei Dai
Symmetry 2019, 11(2), 176; https://doi.org/10.3390/sym11020176 - 02 Feb 2019
Cited by 18 | Viewed by 4255
Abstract
In highly sophisticated network attacks, command-and-control (C&C) servers often use domain generation algorithms (DGAs) to dynamically produce candidate domains instead of static hard-coded lists of IP addresses or domain names. Distinguishing DGA-generated domains from legitimate ones is critical for detecting malware or further locating hidden attackers. The word-based DGAs disclosed in recent network attack events have shown significantly stronger stealthiness than traditional character-based DGAs. In word-based DGAs, two or more words are randomly chosen from one or more specific dictionaries to form a dynamic domain; these generated domains mimic the characteristics of legitimate domains. Existing DGA detection schemes, including the state-of-the-art one based on deep learning, still cannot identify these domains accurately while maintaining an acceptable false alarm rate. In this study, we exploit inter-word and inter-domain correlations using semantic analysis approaches, taking word embeddings and part-of-speech into consideration. We then propose a detection framework for word-based DGAs that incorporates the frequency distributions of words and of part-of-speech tags into the design of the feature set. Using an ensemble classifier constructed from Naive Bayes, Extra-Trees, and Logistic Regression, we benchmark the proposed scheme with malicious and legitimate domain samples extracted from public datasets. The experimental results show that the proposed scheme achieves significantly higher detection accuracy for word-based DGAs than three state-of-the-art DGA detection schemes.
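A toy sketch of the dictionary-segmentation step and the word/POS frequency features described above. The six-word lexicon, its POS tags, and the sample domains are invented for illustration; the paper's actual dictionaries, word embeddings, and ensemble classifier are not reproduced.

```python
from collections import Counter

# tiny illustrative dictionary with assumed part-of-speech tags
LEXICON = {"cloud": "NOUN", "secure": "ADJ", "mail": "NOUN",
           "fast": "ADJ", "update": "VERB", "green": "ADJ"}

def segment(label):
    """Greedy longest-match segmentation of a domain label into dictionary words."""
    words, i = [], 0
    while i < len(label):
        for j in range(len(label), i, -1):
            if label[i:j] in LEXICON:
                words.append(label[i:j]); i = j; break
        else:
            return None  # cannot segment -> likely not word-based
    return words

def features(domains):
    """Frequency distribution of words and of POS tags across a domain set."""
    wc, pc = Counter(), Counter()
    for d in domains:
        ws = segment(d.split(".")[0])
        if ws:
            wc.update(ws)
            pc.update(LEXICON[w] for w in ws)
    return wc, pc

wc, pc = features(["securemail.com", "fastcloud.net", "greenupdate.org"])
print(pc.most_common(1))  # -> [('ADJ', 3)]
```

Such word and POS frequency vectors would then be fed, per the paper, to an ensemble of Naive Bayes, Extra-Trees, and Logistic Regression classifiers.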
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

20 pages, 4243 KiB  
Article
Package Network Model: A Way to Capture Holistic Structural Features of Open-Source Operating Systems
by Jing Wang, Kedi Zhang, Xiaoli Sun, Yusong Tan, Qingbo Wu and Quanyuan Wu
Symmetry 2019, 11(2), 172; https://doi.org/10.3390/sym11020172 - 01 Feb 2019
Cited by 3 | Viewed by 2463
Abstract
Open-source software has become a powerful engine for the development of the software industry. Its production mode, based on large-scale group collaboration, allows for the rapid and continuous evolution of open-source software on demand. As an important branch of open-source software, open-source operating systems are commonly used in modern service industries such as finance, logistics, education, medical care, e-commerce, and tourism, and their reliability is increasingly valued. However, a self-organizing and loosely coupled development approach complicates the structural analysis of open-source operating system software. Traditional methods focus on analysis at the local level, and there is a lack of research on the relationship between internal attributes and external overall characteristics. Consequently, conventional methods are difficult to adapt to complex software systems, especially for the structural analysis of open-source operating systems. It is therefore of great significance to capture the holistic structure and behavior of a software system. Complex network theory, which is well suited to this task, can make up for the deficiency of traditional software structure evaluation methods that focus only on local structure. In this paper, we propose a package network model, a directed graph structure, to describe the dependencies among open-source operating system software packages. Based on the Ubuntu Kylin Linux operating system, we construct a software package dependency network for each distributed version and analyze the structural evolution along the dimensions of scale, density, connectivity, cohesion, and heterogeneity of each network.
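A minimal sketch of the package-network idea: packages are nodes, a directed edge points from a dependent package to its dependency, and holistic metrics such as density follow directly. The dependency edges below are hypothetical, not taken from any Ubuntu Kylin release.

```python
def density(edges, nodes):
    """Density of a directed graph: |E| / (|N| * (|N| - 1))."""
    n = len(nodes)
    return len(edges) / (n * (n - 1)) if n > 1 else 0.0

# hypothetical package dependency edges: (dependent, dependency)
edges = {("bash", "libc"), ("coreutils", "libc"), ("apt", "libc"),
         ("apt", "gnupg"), ("gnupg", "libc")}
nodes = {p for e in edges for p in e}

# in-degree = how many packages depend on each node
in_deg = {n: 0 for n in nodes}
for _, dep in edges:
    in_deg[dep] += 1

print(len(nodes), density(edges, nodes), max(in_deg, key=in_deg.get))
```

Scale (node count), density, and degree heterogeneity as computed here are the kinds of per-release dimensions the paper tracks across distribution versions.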
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

17 pages, 4989 KiB  
Article
Reliability Enhancement of Edge Computing Paradigm Using Agreement
by Shu-Ching Wang, Wei-Shu Hsiung, Chia-Fen Hsieh and Yao-Te Tsai
Symmetry 2019, 11(2), 167; https://doi.org/10.3390/sym11020167 - 01 Feb 2019
Cited by 7 | Viewed by 2784
Abstract
Driven by the vision of the Internet of Things (IoT), there has been a dramatic shift in mobile computing in recent years from centralized mobile cloud computing (MCC) to mobile edge computing (MEC). The main feature of MEC is to push mobile computing, network control, and storage to the edge of the network in order to enable computationally intensive and latency-critical applications on resource-constrained mobile devices. MEC thus enables computing directly at the edge of the network, which can deliver new applications and services, especially for the IoT. In order to provide a highly flexible and reliable platform for the IoT, a MEC-based IoT platform (MIoT) is proposed in this study. Through the MIoT, the information asymmetry between consumers and producers can be reduced to a certain extent. For an IoT platform, fault tolerance is an important research topic. In order to deal with the impact of a faulty component, it is important to reach an agreement in the event of a failure before performing certain special tasks; for example, the initial time of all devices and the time stamps of all applications in a smart city should agree before further processing. However, previous protocols for distributed computing are not sufficient for the MIoT. Therefore, in this study, a new polynomial-time and optimal algorithm is proposed to revisit the agreement problem. The algorithm makes all fault-free nodes decide on the same initial value with a minimal number of rounds of message exchange, while tolerating the maximal number of allowable faulty components in the MIoT.
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

19 pages, 903 KiB  
Article
Spatial Correlation-Based Motion-Vector Prediction for Video-Coding Efficiency Improvement
by Xiantao Jiang, Tian Song, Takafumi Katayama and Jenq-Shiou Leu
Symmetry 2019, 11(2), 129; https://doi.org/10.3390/sym11020129 - 23 Jan 2019
Cited by 3 | Viewed by 3333
Abstract
H.265/HEVC achieves an average bitrate reduction of 50% at fixed video quality compared with the H.264/AVC standard, but its computational complexity is significantly increased. The purpose of this work is to improve coding efficiency for next-generation video-coding standards. By developing a novel spatial neighborhood subset, an efficient spatial correlation-based motion vector prediction (MVP) scheme with a coding-unit (CU) depth-prediction algorithm is proposed to improve coding efficiency. Firstly, by exploiting the reliability of neighboring candidate motion vectors (MVs), the spatial candidate MVs are used to determine the optimized MVP for motion-data coding. Secondly, spatial correlation-based coding-unit depth prediction is presented to achieve a better trade-off between coding efficiency and computational complexity for inter-prediction. This approach satisfies the extreme requirement of high coding efficiency with moderate requirements for real-time processing. The simulation results demonstrate that overall bitrates can be reduced by 5.35% on average, and by up to 9.89%, compared with the H.265/HEVC reference software in terms of the Bjøntegaard metric.
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

22 pages, 597 KiB  
Article
Reusing Source Task Knowledge via Transfer Approximator in Reinforcement Transfer Learning
by Qiao Cheng, Xiangke Wang, Yifeng Niu and Lincheng Shen
Symmetry 2019, 11(1), 25; https://doi.org/10.3390/sym11010025 - 29 Dec 2018
Cited by 3 | Viewed by 2774
Abstract
Transfer Learning (TL) has received a great deal of attention because of its ability to speed up Reinforcement Learning (RL) by reusing learned knowledge from other tasks. This paper proposes a new transfer learning framework, referred to as Transfer Learning via Artificial Neural Network Approximator (TL-ANNA). It builds an artificial neural network (ANN) transfer approximator to transfer related knowledge from the source task to the target task and reuses the transferred knowledge with a probabilistic policy reuse (PPR) scheme. Specifically, the transfer approximator maps the state of the target task symmetrically to states of the source task with a certain mapping rule and activates the related knowledge (components of the action-value function) of the source task as the input of the ANNs; it then predicts the quality of the actions in the target task with the ANNs. The target learner uses the PPR scheme to bias the RL with the suggested action from the transfer approximator. In this way, the transfer approximator builds a symmetric knowledge path between the target task and the source task. In addition, two mapping rules for the transfer approximator are designed, namely, the full mapping rule and the group mapping rule. Experiments performed on the RoboCup soccer Keepaway task verified that the proposed transfer learning methods outperform two other transfer learning methods in both the jumpstart and time-to-threshold metrics and are more robust to the quality of source knowledge. In addition, TL-ANNA with the group mapping rule exhibits slightly worse performance than with the full mapping rule, but with less computation and space cost when an appropriate grouping method is used.
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

27 pages, 8853 KiB  
Article
Pixel-Value-Ordering based Reversible Information Hiding Scheme with Self-Adaptive Threshold Strategy
by Tzu-Chuen Lu, Chun-Ya Tseng, Shu-Wen Huang and Thanh Nhan Vo
Symmetry 2018, 10(12), 764; https://doi.org/10.3390/sym10120764 - 17 Dec 2018
Cited by 18 | Viewed by 4026
Abstract
The pixel-value-ordering (PVO) hiding scheme is a data embedding technique that hides a secret message in the difference between the largest and second-largest pixels of a block. Researchers subsequently improved the PVO scheme by using a threshold to determine whether a block is smooth or complex, so that only smooth blocks are used to hide information. They analyzed all possible thresholds to find the proper one for hiding the secret message; however, this is time consuming. Other researchers decompose a smooth block into four smaller blocks to hide more messages and increase image quality, yet the complexity of a block matters more than its size. Hence, this study proposes an ameliorated method. The proposed scheme analyzes the variation of the region to judge the complexity of the block and applies a quantization strategy to the pixels to ensure reversibility. It adopts an adaptive threshold-generation mechanism to find the proper threshold for different images. The results show that the image quality of the proposed scheme is higher than that of the other methods. The proposed scheme also lets the user adjust the hiding rate to achieve higher image quality or hiding capacity.
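For readers unfamiliar with PVO, the basic max-side embedding rule can be sketched as follows. This is the original PVO idea (sort the block, compute e = max − second max, embed when e = 1, shift when e > 1), not the proposed self-adaptive-threshold variant; the block values are illustrative and tie-breaking details are simplified.

```python
def pvo_embed(block, bit):
    """Embed one bit in the largest value of a block (max-side PVO).
    e = max - second_max: e == 1 -> embed, e > 1 -> shift, e == 0 -> skip."""
    idx = sorted(range(len(block)), key=lambda i: block[i])
    out = list(block)
    e = block[idx[-1]] - block[idx[-2]]
    if e == 1:
        out[idx[-1]] += bit          # marked value carries the bit
    elif e > 1:
        out[idx[-1]] += 1            # shifted only, carries no bit
    return out

def pvo_extract(block):
    """Return (restored_block, bit_or_None) from a marked block."""
    idx = sorted(range(len(block)), key=lambda i: block[i])
    out = list(block)
    e = block[idx[-1]] - block[idx[-2]]
    if e == 1:
        return out, 0
    if e == 2:
        out[idx[-1]] -= 1
        return out, 1
    if e > 2:
        out[idx[-1]] -= 1
        return out, None
    return out, None                 # e == 0: block was skipped

blk = [52, 55, 54, 56]               # flattened smooth 2x2 block
marked = pvo_embed(blk, 1)
restored, bit = pvo_extract(marked)
print(marked, restored == blk, bit)  # -> [52, 55, 54, 57] True 1
```

The proposed scheme decides *which* blocks enter this routine by judging regional variation against an adaptively generated threshold rather than a fixed one.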
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

18 pages, 1762 KiB  
Article
The Augmented Approach towards Equilibrated Nexus Era into the Wireless Rechargeable Sensor Network
by Ahmad Ali, Yu Ming, Sagnik Chakraborty, Saima Iram and Tapas Si
Symmetry 2018, 10(11), 639; https://doi.org/10.3390/sym10110639 - 15 Nov 2018
Cited by 1 | Viewed by 2663
Abstract
Recent research in the domain of wireless sensor networks (WSNs) has shown that the energy constraints of sensor nodes (SNs) limit their long-term performance. Lately, advances in wireless power transfer (WPT) technology have attracted widespread attention from both industry and academia as a means of addressing sensor node failures in wireless rechargeable sensor networks (WRSNs). The fundamental notion of wireless power transfer is to replenish the energy of sensor nodes using one or more wireless charging devices (WCDs). Herein, we present a joint optimization model to maximize the charging efficiency subject to the routing constraints of the wireless charging device (WCD). First, we design a charging-path algorithm to compute the route of the wireless charging device. Moreover, a particle swarm optimization (PSO) algorithm with a virtual clustering technique is designed for the routing process to balance the network lifetime; in this clustering algorithm, the residual energy of the sensor nodes is an indispensable parameter for the selection of the cluster head (CH). Furthermore, the proposed approach is compared with a benchmark algorithm in diverse scenarios to corroborate its superiority. The simulation results show that the proposed work improves the network lifetime, charging performance, and residual energy of the sensor nodes.
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

18 pages, 338 KiB  
Article
A Novel Vertical Fragmentation Method for Privacy Protection Based on Entropy Minimization in a Relational Database
by Tie Hong, SongZhu Mei, ZhiYing Wang and JiangChun Ren
Symmetry 2018, 10(11), 637; https://doi.org/10.3390/sym10110637 - 14 Nov 2018
Cited by 2 | Viewed by 2522
Abstract
Many scholars have attempted to use encryption to resolve the problem of data leakage in outsourced data storage. However, encryption reduces data availability and is inefficient. Vertical fragmentation solves this problem: it was first used to improve the access performance of relational databases, and nowadays some researchers employ it for privacy protection. However, some problems remain to be solved in vertical fragmentation methods for privacy protection in relational databases. First, current methods require the user to manually define privacy constraints, which is difficult to achieve in practice. Second, many vertical fragmentation solutions can meet the privacy constraints, but there are currently no quantitative criteria for evaluating which solutions protect privacy more effectively. In this article, we introduce the concept of information entropy to quantify privacy in vertical fragmentation, so that privacy constraints can be discovered automatically. Based on this, we propose a privacy protection model with a minimum-entropy fragmentation algorithm to achieve minimal privacy disclosure under vertical fragmentation. Experimental results show that our method is suitable for privacy protection with low overhead.
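The entropy measure underlying this idea can be sketched on a hypothetical two-column relation: a column's Shannon entropy quantifies how much its values narrow down individual rows. How the paper maps such entropies onto privacy constraints and fragmentation decisions is more involved and not reproduced here; the table below is invented.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (in bits) of a column's empirical value distribution."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# hypothetical relation: a unique identifier column has maximal entropy,
# while a repetitive attribute column leaks less on its own
rows = [("alice", "flu"), ("bob", "flu"), ("carol", "cold"), ("dave", "flu")]
name_col = [r[0] for r in rows]
diag_col = [r[1] for r in rows]
print(entropy(name_col), entropy(diag_col))  # name column is more identifying
```

A minimum-entropy fragmentation would then try to place high-entropy identifying columns and sensitive columns into different fragments.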
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

14 pages, 2084 KiB  
Article
Vector Field Convolution-Based B-Spline Deformation Model for 3D Segmentation of Cartilage in MRI
by Jinke Wang, Changfa Shi, Yuanzhi Cheng, Xiancheng Zhou and Shinichi Tamura
Symmetry 2018, 10(11), 591; https://doi.org/10.3390/sym10110591 - 04 Nov 2018
Viewed by 3017
Abstract
In this paper, a novel 3D vector field convolution (VFC)-based B-spline deformation model is proposed for accurate and robust cartilage segmentation. Firstly, the anisotropic diffusion method is utilized for noise reduction, and the sinc interpolation method is employed for resampling. Then, to extract the rough cartilage, features derived from the Hessian matrix are chosen to enhance the cartilage, followed by binarizing the images via an optimal thresholding method. Finally, the proposed VFC-based B-spline deformation model is used to refine the rough segmentation. In the experiments, the proposed method was evaluated on 46 magnetic resonance imaging (MRI) datasets (20 hip joints and 26 knee joints), and the results were compared with three state-of-the-art cartilage segmentation methods. Both qualitative and quantitative segmentation results indicate that the proposed method can be deployed for accurate and robust cartilage segmentation. Furthermore, from the segmentation results, patient-specific 3D models of the patient's anatomy can be derived, which can then be utilized in a wide range of clinical applications, such as 3D visualization for surgical planning and guidance.
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

21 pages, 1154 KiB  
Article
Understanding User Behavioral Intention to Adopt a Search Engine that Promotes Sustainable Water Management
by Ana Reyes-Menendez, Jose Ramon Saura, Pedro R. Palos-Sanchez and Jose Alvarez-Garcia
Symmetry 2018, 10(11), 584; https://doi.org/10.3390/sym10110584 - 02 Nov 2018
Cited by 46 | Viewed by 5228
Abstract
An increase in users' online searches, social concern for efficient management of resources such as water, and the appearance of more and more digital platforms with sustainable purposes lead us to reflect on users' behavioral intention with respect to search engines that support sustainable projects, such as water management projects, and on the factors that determine the adoption of such search engines. In the present study, we aim to identify the factors that determine the intention to adopt a search engine, such as Lilo, that favors sustainable water management. To this end, a model based on the Theory of Planned Behavior (TPB) is proposed. The methodology used is Structural Equation Modeling (SEM) analysis with the Analysis of Moment Structures (AMOS). The results demonstrate that individuals who intend to use a search engine are influenced by hedonic motivations, which drive their feeling of contentment with the search. Similarly, the success of search engines is found to be closely related to the ability a search engine grants its users to generate a social or environmental impact, rather than to users' trust in the engine or its results. However, according to our results, habit is also an important factor that has both a direct and an indirect impact on users' behavioral intention to adopt different search engines.
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

14 pages, 4233 KiB  
Article
A Filtering Method for Grain Flow Signals Using EMD Thresholds Optimized by Artificial Bee Colony Algorithm
by He Wang and Hua Song
Symmetry 2018, 10(11), 575; https://doi.org/10.3390/sym10110575 - 02 Nov 2018
Cited by 1 | Viewed by 2718
Abstract
To reduce noise in grain flow signals, this paper proposes a filtering method based on empirical mode decomposition (EMD) and the artificial bee colony (ABC) algorithm. First, the noisy signal is adaptively decomposed into intrinsic mode functions (IMFs). Then, the ABC algorithm is utilized to determine a proper threshold for shrinking the IMF coefficients, instead of using a traditional threshold function. Furthermore, a neighborhood search strategy is introduced into the ABC algorithm to balance its exploration and exploitation abilities. Simulation experiments are conducted on four benchmark signals, and a comparative study of the proposed method against state-of-the-art methods is carried out. The results demonstrate that the proposed method obtains better signal-to-noise ratio (SNR) and root-mean-square error (RMSE). The method is also applied to an actual noisy grain flow signal to demonstrate its effectiveness in practice.
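The coefficient-shrinking step can be sketched as follows, with hand-made stand-in IMFs and a fixed soft threshold. In a real pipeline the IMFs would come from an EMD of the noisy signal and the threshold would be searched by the ABC algorithm; both are assumed away here, and all signal parameters are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    # Shrink coefficients toward zero by t; zero out the small ones
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(imfs, thresholds):
    """Threshold each IMF (noise concentrates in the high-frequency IMFs),
    then reconstruct by summing the shrunken IMFs."""
    return sum(soft_threshold(imf, t) for imf, t in zip(imfs, thresholds))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * 3 * t)
noise = 0.3 * rng.standard_normal(200)
# stand-in "IMFs": simply the noise and the clean trend; a real EMD would
# produce these adaptively from the observed noisy signal
imfs = [noise, clean]
out = denoise(imfs, thresholds=[0.4, 0.0])
print(np.mean((out - clean) ** 2) < np.mean(noise ** 2))  # error reduced
```

In the paper, the ABC search replaces the fixed `0.4` with a threshold chosen to optimize the denoising objective per IMF.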
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
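The thresholding step at the core of this method can be sketched in a few lines. The toy below is a simplification under stated assumptions: the EMD step is faked by handing the reconstruction a "noise-dominated IMF" and a "signal-dominated IMF" directly (a real implementation would decompose the noisy signal, e.g. with the PyEMD package), and the universal threshold stands in for the threshold the ABC algorithm would search for.

```python
import numpy as np

def soft_threshold(x, t):
    """Shrink coefficients toward zero (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def snr_db(clean, estimate):
    """Signal-to-noise ratio of an estimate, in dB."""
    return 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - estimate) ** 2))

rng = np.random.default_rng(0)
n = 1024
t_axis = np.linspace(0.0, 1.0, n)
clean = np.sin(2 * np.pi * 5 * t_axis)
noise = 0.3 * rng.standard_normal(n)
noisy = clean + noise

# Stand-in for EMD: pretend the decomposition perfectly separated a
# noise-dominated IMF from a signal-dominated IMF.
imfs = [noise, clean]

# Threshold only the noise-dominated IMF; the universal threshold below
# stands in for the ABC-optimized threshold of the paper.
sigma = np.median(np.abs(imfs[0])) / 0.6745
threshold = sigma * np.sqrt(2.0 * np.log(n))
denoised = soft_threshold(imfs[0], threshold) + imfs[1]

print(f"SNR noisy: {snr_db(clean, noisy):.1f} dB -> denoised: {snr_db(clean, denoised):.1f} dB")
```

In the paper, the ABC algorithm replaces the fixed universal threshold with a searched one, which is where the SNR/RMSE gains over classical thresholding come from.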

15 pages, 1484 KiB  
Article
A Change Recommendation Approach Using Change Patterns of a Corresponding Test File
by Jungil Kim and Eunjoo Lee
Symmetry 2018, 10(11), 534; https://doi.org/10.3390/sym10110534 - 23 Oct 2018
Cited by 2 | Viewed by 2159
Abstract
Change recommendation improves the development speed and quality of software projects. Through change recommendation, software project developers can find the relevant source files that they must change for their modification tasks. In existing change-recommendation approaches based on the change history of source files, the reliability of the change patterns recommended for a source file is determined by the change history of that file. If a source file has insufficient change history to identify its change patterns, or has frequently been changed together with unrelated source files, the existing change-recommendation approach cannot identify meaningful change patterns for it. In this paper, we propose a novel change-recommendation approach to resolve this limitation. The basic idea is to consider the change history of the test file corresponding to a given source file. First, the proposed approach identifies the test file corresponding to a given source file by using a source–test traceability linking method based on popular naming conventions. Then, the change patterns of the source and test files are identified from their change histories. Finally, a set of change recommendations is constructed using the identified change patterns. In an experiment involving six open-source projects, the accuracy of the proposed approach is evaluated. The results show that the proposed approach improves accuracy significantly, from 21% to 62%, compared with the existing approach. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
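The naming-convention-based traceability linking mentioned in the abstract can be sketched directly. This is a minimal illustration, not the paper's implementation: the repository paths are hypothetical, and the set of conventions checked (FooTest / TestFoo / FooTests) is a common Java default.

```python
import os

def link_test_file(source_path, repo_files):
    """Identify the test file for a source file using common Java
    naming conventions (FooTest.java / TestFoo.java / FooTests.java)."""
    base = os.path.splitext(os.path.basename(source_path))[0]
    candidates = {f"{base}Test.java", f"Test{base}.java", f"{base}Tests.java"}
    for path in repo_files:
        if os.path.basename(path) in candidates:
            return path
    return None

# Hypothetical repository layout for illustration.
repo = [
    "src/main/java/app/Parser.java",
    "src/main/java/app/Renderer.java",
    "src/test/java/app/ParserTest.java",
]
print(link_test_file("src/main/java/app/Parser.java", repo))
# -> src/test/java/app/ParserTest.java
print(link_test_file("src/main/java/app/Renderer.java", repo))
# -> None (no matching test file)
```

Once the test file is linked, its change history can be mined for co-change patterns alongside the source file's own history, which is the step the paper builds on.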

17 pages, 3052 KiB  
Article
Deep Refinement Network for Natural Low-Light Image Enhancement in Symmetric Pathways
by Lincheng Jiang, Yumei Jing, Shengze Hu, Bin Ge and Weidong Xiao
Symmetry 2018, 10(10), 491; https://doi.org/10.3390/sym10100491 - 12 Oct 2018
Cited by 19 | Viewed by 3569
Abstract
Due to the cost limitation of camera sensors, images captured in low-light environments often suffer from low contrast and multiple types of noise. A number of algorithms have been proposed to improve contrast and suppress noise in low-light input images. In this paper, a deep refinement network, LL-RefineNet, is built to learn from synthetic dark and noisy training images, and to perform image enhancement for natural low-light images in symmetric forward and backward pathways. The proposed network utilizes all the useful information from the down-sampling path to produce the high-resolution enhancement result, where global features captured from deeper layers are gradually refined using local features generated by earlier convolutions. We further design the training loss for mixed noise reduction. The experimental results show that the proposed LL-RefineNet outperforms comparative methods both qualitatively and quantitatively, with fast processing speed, on both synthetic and natural low-light image datasets. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

20 pages, 1619 KiB  
Article
A Two-Stage Big Data Analytics Framework with Real World Applications Using Spark Machine Learning and Long Short-Term Memory Network
by Muhammad Ashfaq Khan, Md. Rezaul Karim and Yangwoo Kim
Symmetry 2018, 10(10), 485; https://doi.org/10.3390/sym10100485 - 11 Oct 2018
Cited by 44 | Viewed by 6034
Abstract
Every day we experience unprecedented data growth from numerous sources, which contributes to big data in terms of volume, velocity, and variability. These datasets impose great challenges on analytics frameworks and computational resources, making it difficult to extract meaningful information in a timely manner. Developing an efficient big data analytics framework to meet these challenges is therefore an important research topic. To exploit non-linear relationships in very large, high-dimensional datasets, machine learning (ML) and deep learning (DL) algorithms are being used in analytics frameworks. Apache Spark is widely used as a fast big data processing engine that supports iterative ML tasks through its distributed ML library, Spark MLlib. For real-world problems, DL architectures such as the Long Short-Term Memory (LSTM) network are effective at overcoming practical issues such as reduced accuracy, long-term sequence dependency, and vanishing and exploding gradients in conventional deep architectures. In this paper, we propose an efficient analytics framework that cascades Spark-based linear models, a Multilayer Perceptron (MLP), and an LSTM in a two-stage structure to enhance predictive accuracy. The proposed architecture enables us to organize big data analytics in a scalable and efficient way. To show the effectiveness of our framework, we applied the cascading structure to two different real-life datasets to solve a multiclass and a binary classification problem, respectively. Experimental results show that our analytical framework outperforms state-of-the-art approaches with a high level of classification accuracy. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
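One common way to wire a two-stage cascade is to let a cheap first-stage model decide the confident cases and defer borderline ones to a heavier second stage. The sketch below illustrates that pattern only; the models, gating thresholds, and exact wiring are stand-ins, not the paper's Spark/MLP/LSTM configuration.

```python
def stage1_score(x):
    """Cheap first-stage scorer (stand-in for the Spark-based linear models)."""
    return 0.8 * x[0] + 0.2 * x[1]

def stage2_predict(x):
    """Heavier second-stage model, consulted only for uncertain samples
    (stand-in for the MLP/LSTM stage)."""
    return 1 if x[0] + x[1] > 1.0 else 0

def cascade_predict(x, low=0.35, high=0.65):
    """Two-stage cascade: confident stage-1 scores decide immediately;
    borderline samples fall through to stage 2."""
    s = stage1_score(x)
    if s >= high:
        return 1
    if s <= low:
        return 0
    return stage2_predict(x)

samples = [(0.9, 0.9), (0.1, 0.2), (0.5, 0.7)]
print([cascade_predict(x) for x in samples])  # -> [1, 0, 1]
```

The appeal of the cascade is that the expensive model runs only on the hard fraction of the data, which is what makes the structure scalable for big data workloads.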

17 pages, 414 KiB  
Article
Practical, Provably Secure, and Black-Box Traceable CP-ABE for Cryptographic Cloud Storage
by Huidong Qiao, Haihe Ba, Huaizhe Zhou, Zhiying Wang, Jiangchun Ren and Ying Hu
Symmetry 2018, 10(10), 482; https://doi.org/10.3390/sym10100482 - 11 Oct 2018
Cited by 7 | Viewed by 2966
Abstract
Cryptographic cloud storage (CCS) is a secure architecture built on top of a public cloud infrastructure. In a CCS system, a user can define and manage access control over the data by himself, without the help of the cloud storage service provider. Ciphertext-policy attribute-based encryption (CP-ABE) is considered the critical technology for implementing such access control. However, a large security obstacle remains for the implementation of CP-ABE in CCS: how to identify a malicious cloud user who illegally shares his private keys with others, or who uses his keys to construct a decryption device (black-box) and provides a decryption service. Although several CP-ABE schemes with black-box traceability have been proposed to address this problem, most of them are not practical in CCS systems, due to their lack of scalability and their expensive computation costs, especially the cost of tracing. Thus, we present a new black-box traceable CP-ABE scheme that is scalable and highly efficient. To achieve better performance, our scheme is designed on prime-order bilinear groups, which greatly improves the efficiency of group operations, and the cost of tracing is greatly reduced, to O(N) or O(1), where N is the number of users in the system. Furthermore, our scheme is proved secure in a selective standard model. To the best of our knowledge, this work is the first such practical and provably secure CP-ABE scheme for CCS that is black-box traceable. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

16 pages, 285 KiB  
Article
JDriver: Automatic Driver Class Generation for AFL-Based Java Fuzzing Tools
by Zhijian Huang and Yongjun Wang
Symmetry 2018, 10(10), 460; https://doi.org/10.3390/sym10100460 - 03 Oct 2018
Cited by 2 | Viewed by 2928
Abstract
AFL (American Fuzzy Lop) is a powerful fuzzing tool that has discovered hundreds of real-world vulnerabilities. Recent efforts have ported AFL to fuzz Java programs and have proven effective in Java testing. However, these tools require humans to write driver classes, which is not feasible for testing large-scale software. In addition, AFL generates files as input, which limits it to testing methods that process files. In this paper, we present JDriver, an automatic driver class generation framework for AFL-based fuzzing tools, which can build driver code both for methods that process files and for ordinary methods that do not. Our approach consists of three parts: a dependency-analysis-based method to generate method sequences that change an instance's status so as to exercise more paths, a knowledge-assisted method to construct instances for the method sequences, and an input-file-oriented driver class assembling method to handle the method parameters of ordinary methods. We evaluate JDriver on commons-imaging, a widely used image library provided by the Apache organization. JDriver successfully generated 149 helper methods, which can be used to construct instances for 110 classes, and built 99 driver classes covering 422 methods. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

14 pages, 3782 KiB  
Article
Multi-Scale Adversarial Feature Learning for Saliency Detection
by Dandan Zhu, Lei Dai, Ye Luo, Guokai Zhang, Xuan Shao, Laurent Itti and Jianwei Lu
Symmetry 2018, 10(10), 457; https://doi.org/10.3390/sym10100457 - 01 Oct 2018
Cited by 17 | Viewed by 4063
Abstract
Previous saliency detection methods usually focused on extracting powerful discriminative features to describe images with complex backgrounds. Recently, the generative adversarial network (GAN) has shown a great ability in feature learning for synthesizing high-quality natural images. Motivated by this superior feature learning ability, we present a new multi-scale adversarial feature learning (MAFL) model for image saliency detection. The model is composed of two convolutional neural network (CNN) modules: a multi-scale G-network, which takes natural images as inputs and generates the corresponding synthetic saliency maps, and a D-network, in which we design a novel correlation layer to determine whether an image is a synthetic or a ground-truth saliency map. Quantitative and qualitative comparisons on several public datasets show the superiority of our approach. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

25 pages, 647 KiB  
Article
Astrape: An Efficient Concurrent Cloud Attestation with Ciphertext-Policy Attribute-Based Encryption
by Haihe Ba, Huaizhe Zhou, Songzhu Mei, Huidong Qiao, Tie Hong, Zhiying Wang and Jiangchun Ren
Symmetry 2018, 10(10), 425; https://doi.org/10.3390/sym10100425 - 21 Sep 2018
Cited by 4 | Viewed by 2515
Abstract
Cloud computing represents a change in the business paradigm that offers pay-as-you-go computing capability and brings enormous benefits, but numerous organizations hesitate to adopt it due to security concerns. Remote attestation has been proven to boost confidence in clouds by guaranteeing the integrity of hosted cloud applications. However, state-of-the-art attestation schemes do not handle the case in which multiple requesters raise challenges simultaneously, which leads to large performance overheads on the attester side. To address this, we propose Astrape, an efficient and trustworthy concurrent attestation architecture for multi-requester scenarios, which improves both integrity and confidentiality protection to generate an unforgeable and encrypted attestation report. Specifically, we propose two key techniques in this paper. The first, an aggregated attestation signature, reliably protects the attestation content from being compromised even in the presence of adversaries who have full control of the network, thereby providing attestation integrity. The second, a delegation-based controlled report, introduces a third-party service to distribute the attestation report to requesters in order to reduce computation and communication overhead on the attested party. The report is encrypted under an access policy using attribute-based encryption and can be accessed only by a limited number of qualified requesters, hence supporting attestation confidentiality. Experimental results show that Astrape takes no more than 0.4 s to generate an unforgeable and encrypted report for 1000 requesters and delivers a throughput speedup of approximately 30× compared with existing attestation systems. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

12 pages, 3975 KiB  
Article
A Vertex-Based 3D Authentication Algorithm Based on Spatial Subdivision
by Yuan-Yu Tsai, Yu-Shiou Tsai, I-Ting Chi and Chi-Shiang Chan
Symmetry 2018, 10(10), 422; https://doi.org/10.3390/sym10100422 - 20 Sep 2018
Cited by 4 | Viewed by 2458
Abstract
The study proposed a vertex-based authentication algorithm based on spatial subdivision. A binary space partitioning tree was employed to subdivide the bounding volume of the input model into voxels. Each vertex could then be encoded into a series of binary digits, denoted as its authentication code, by traversing the constructed tree. Finally, the above authentication code was embedded into the corresponding reference vertex by modulating its position within the located subspace. Extensive experimental results demonstrated that the proposed algorithm provided high embedding capacity and high robustness. Furthermore, the proposed algorithm supported controllable distortion and self-recovery. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
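The idea of deriving a vertex's authentication code from spatial subdivision can be sketched as follows. This is a simplified stand-in under stated assumptions: it bisects the bounding box while cycling the split axis x, y, z, whereas the paper traverses a constructed binary space partitioning tree; the vertex and depth are illustrative.

```python
def authentication_code(vertex, lo, hi, depth=9):
    """Encode a 3D vertex as a series of binary digits by recursively
    bisecting the bounding box, cycling the split axis (a simplified
    stand-in for traversing the paper's BSP tree)."""
    lo, hi = list(lo), list(hi)
    bits = []
    for level in range(depth):
        axis = level % 3
        mid = (lo[axis] + hi[axis]) / 2.0
        if vertex[axis] >= mid:      # vertex lies in the upper half-space
            bits.append(1)
            lo[axis] = mid
        else:
            bits.append(0)
            hi[axis] = mid
    return bits

code = authentication_code((0.7, 0.2, 0.9), lo=(0.0, 0.0, 0.0), hi=(1.0, 1.0, 1.0))
print("".join(map(str, code)))  # -> 101001111
```

Each extra bit halves the located subspace along one axis, so the code both identifies the vertex's voxel and bounds the distortion introduced when the code is embedded by moving the vertex within that subspace.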

23 pages, 2400 KiB  
Article
BGPcoin: Blockchain-Based Internet Number Resource Authority and BGP Security Solution
by Qianqian Xing, Baosheng Wang and Xiaofeng Wang
Symmetry 2018, 10(9), 408; https://doi.org/10.3390/sym10090408 - 17 Sep 2018
Cited by 28 | Viewed by 5607
Abstract
Lacking inherent security by design, the Border Gateway Protocol (BGP) is vulnerable to prefix/subprefix hijacks and other attacks. Although many BGP security approaches have been proposed to prevent or detect such attacks, their unsatisfactory cost-effectiveness frustrates their deployment. In fact, the currently deployed BGP security infrastructure leaves room for misconfiguration and abuse by a centralized authority, which makes the logging and auditing of misbehaviors and attacks a critical requirement for BGP security deployments. We propose BGPcoin, a blockchain-based Internet number resource authority and trustworthy management solution that facilitates the transparency of BGP security. BGPcoin provides a reliable source of origin advertisements for origin authentication by dispensing resource allocations and revocations compliantly, thereby countering IP prefix hijacking. We perform and audit resource assignments on the tamper-resistant Ethereum blockchain by means of a set of smart contracts, which also interact to provide trustworthy origin route examination for BGP. Compared with RPKI, BGPcoin yields significant benefits in securing origin advertisements and in building a dependable infrastructure for the object repository. We demonstrate this through an Ethereum prototype implementation, deployed and evaluated on both a locally simulated network and the official Ethereum test network. The extensive experiments and evaluation demonstrate the incentives to deploy BGPcoin, and show that the enhanced security provided by BGPcoin is technically and economically feasible. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
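The origin-authentication check at the heart of such a system reduces to a registry lookup. In BGPcoin the registry lives in Ethereum smart contracts; the sketch below substitutes a plain dictionary to illustrate the lookup logic. The prefixes are documentation examples and the AS numbers are from the private range, not real allocations.

```python
# Stand-in for the on-chain registry of prefix allocations.
registry = {
    "203.0.113.0/24": 64500,
    "198.51.100.0/24": 64501,
}

def validate_origin(prefix, origin_as):
    """Origin authentication: accept a BGP announcement only if the
    registry records this AS as the prefix's legitimate origin."""
    authorized = registry.get(prefix)
    if authorized is None:
        return "unknown"   # no allocation recorded for this prefix
    return "valid" if origin_as == authorized else "hijack"

print(validate_origin("203.0.113.0/24", 64500))  # -> valid
print(validate_origin("203.0.113.0/24", 64666))  # -> hijack
```

Putting this mapping on a tamper-resistant blockchain is what distinguishes the approach from RPKI: allocations and revocations become auditable transactions rather than records held by a single authority.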

16 pages, 3486 KiB  
Article
3D Spatial Pyramid Dilated Network for Pulmonary Nodule Classification
by Guokai Zhang, Xiao Liu, Dandan Zhu, Pengcheng He, Lipeng Liang, Ye Luo and Jianwei Lu
Symmetry 2018, 10(9), 376; https://doi.org/10.3390/sym10090376 - 01 Sep 2018
Cited by 4 | Viewed by 4118
Abstract
Lung cancer mortality is currently the highest among all kinds of fatal cancers. With the help of computer-aided detection systems, timely detection of malignant pulmonary nodules at an early stage could efficiently improve the patient survival rate. However, pulmonary nodules vary widely in size, and nodules with small diameters are more difficult to detect. Traditional convolutional neural networks use pooling layers to reduce the resolution progressively, which hampers the network's ability to capture the tiny but vital features of pulmonary nodules. To tackle this problem, we propose a novel 3D spatial pyramid dilated convolution network to classify the malignancy of pulmonary nodules. Instead of using pooling layers, we use 3D dilated convolutions to learn the detailed characteristic information of the pulmonary nodules. Furthermore, we show that fusing multiple receptive fields from different dilated convolutions can further improve the classification performance of the model. Extensive experimental results demonstrate that our model achieves an accuracy of 88.6%, which outperforms other state-of-the-art methods. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
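Why dilated convolutions enlarge the receptive field without pooling can be shown in one dimension. The sketch below is illustrative only (the paper uses 3D dilated convolutions inside a network); the kernel and input are arbitrary.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """1D dilated convolution (valid mode): kernel taps sit `dilation`
    samples apart, so the receptive field grows without any pooling."""
    k = len(kernel)
    span = (k - 1) * dilation + 1   # receptive field of one output sample
    out = np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])
    return out, span

x = np.arange(16, dtype=float)
spans = []
for d in (1, 2, 4):
    _, span = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=d)
    spans.append(span)
    print(f"dilation={d}: receptive field = {span} samples")
# A 3-tap kernel covers 3, 5, and 9 samples at dilations 1, 2, and 4.
```

Fusing the outputs at several dilations, as the paper does across a spatial pyramid, combines fine detail (small dilation) with context (large dilation) at full resolution, which matters for small-diameter nodules.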

13 pages, 4427 KiB  
Article
Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks
by Zhen He and Hangen He
Symmetry 2018, 10(9), 375; https://doi.org/10.3390/sym10090375 - 01 Sep 2018
Cited by 8 | Viewed by 3425
Abstract
Nowadays, video surveillance has become ubiquitous with the quick development of artificial intelligence. Multi-object detection (MOD) is a key step in video surveillance and has been widely studied for a long time. The majority of existing MOD algorithms follow the “divide and conquer” pipeline and utilize popular machine learning techniques to optimize algorithm parameters. However, this pipeline is usually suboptimal since it decomposes the MOD task into several sub-tasks and does not optimize them jointly. In addition, the frequently used supervised learning methods rely on the labeled data which are scarce and expensive to obtain. Thus, we propose an end-to-end Unsupervised Multi-Object Detection framework for video surveillance, where a neural model learns to detect objects from each video frame by minimizing the image reconstruction error. Moreover, we propose a Memory-Based Recurrent Attention Network to ease detection and training. The proposed model was evaluated on both synthetic and real datasets, exhibiting its potential. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

20 pages, 7976 KiB  
Article
Multimedia Data Modelling Using Multidimensional Recurrent Neural Networks
by Zhen He, Shaobing Gao, Liang Xiao, Daxue Liu and Hangen He
Symmetry 2018, 10(9), 370; https://doi.org/10.3390/sym10090370 - 01 Sep 2018
Viewed by 3045
Abstract
Modelling the multimedia data such as text, images, or videos usually involves the analysis, prediction, or reconstruction of them. The recurrent neural network (RNN) is a powerful machine learning approach to modelling these data in a recursive way. As a variant, the long short-term memory (LSTM) extends the RNN with the ability to remember information for longer. Whilst one can increase the capacity of LSTM by widening or adding layers, additional parameters and runtime are usually required, which could make learning harder. We therefore propose a Tensor LSTM where the hidden states are tensorised as multidimensional arrays (tensors) and updated through a cross-layer convolution. As parameters are spatially shared within the tensor, we can efficiently widen the model without extra parameters by increasing the tensorised size; as deep computations of each time step are absorbed by temporal computations of the time series, we can implicitly deepen the model with little extra runtime by delaying the output. We show by experiments that our model is well-suited for various multimedia data modelling tasks, including text generation, text calculation, image classification, and video prediction. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

30 pages, 5186 KiB  
Article
A Dynamic Adjusting Novel Global Harmony Search for Continuous Optimization Problems
by Chui-Yu Chiu, Po-Chou Shih and Xuechao Li
Symmetry 2018, 10(8), 337; https://doi.org/10.3390/sym10080337 - 12 Aug 2018
Cited by 12 | Viewed by 3014
Abstract
The novel global harmony search (NGHS) algorithm, proposed in 2010, is an improved algorithm that combines harmony search (HS), particle swarm optimization (PSO), and a genetic algorithm (GA). However, the NGHS algorithm uses a fixed mutation probability, whereas appropriate parameter settings can enhance the searching ability of a metaheuristic algorithm, as many studies have described. Inspired by the adjustment strategy of the improved harmony search (IHS) algorithm, this paper introduces a dynamic adjusting novel global harmony search (DANGHS) algorithm, which combines NGHS with dynamic adjustment strategies for the genetic mutation probability. Extensive computational experiments and comparisons are carried out on 14 benchmark continuous optimization problems. The results show that the proposed DANGHS algorithm performs better than other HS algorithms on most problems, and that it is more efficient than previous methods. Finally, different strategies are suitable for different situations; the most interesting is the periodic dynamic adjustment strategy, which for a specific problem can outperform monotonically decreasing or increasing strategies. These results inspire us to further investigate this kind of periodic dynamic adjustment strategy in future experiments. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
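The contrast between a monotonic and a periodic adjustment of the mutation probability can be sketched with two schedules. These formulas are illustrative stand-ins, not the paper's exact strategies; the parameter values (pm range, period) are arbitrary.

```python
import math

def pm_linear(t, t_max, pm_min=0.01, pm_max=0.2):
    """Linearly decreasing mutation probability (IHS-style adjustment)."""
    return pm_max - (pm_max - pm_min) * t / t_max

def pm_periodic(t, period=100, pm_min=0.01, pm_max=0.2):
    """Periodic adjustment: the mutation probability oscillates between
    pm_max and pm_min over the course of the search."""
    phase = 0.5 * (1.0 + math.cos(2.0 * math.pi * t / period))
    return pm_min + (pm_max - pm_min) * phase

for t in (0, 25, 50, 100):
    print(f"t={t:3d}  linear={pm_linear(t, 100):.3f}  periodic={pm_periodic(t):.3f}")
```

A decreasing schedule trades exploration for exploitation once; a periodic one re-injects exploration at regular intervals, which is one plausible reason the periodic strategy helped on some benchmarks.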

17 pages, 23610 KiB  
Article
Detectability Improved Tamper Detection Scheme for Absolute Moment Block Truncation Coding Compressed Images
by Wien Hong, Xiaoyu Zhou, Der-Chyuan Lou, Xiaoqin Huang and Cancan Peng
Symmetry 2018, 10(8), 318; https://doi.org/10.3390/sym10080318 - 02 Aug 2018
Cited by 11 | Viewed by 2693
Abstract
As digital media gains popularity, people are increasingly concerned about its integrity protection and authentication, since tampered media may cause unexpected problems. This paper proposes an efficient tamper detection scheme for absolute moment block truncation coding (AMBTC) compressed images. In AMBTC, each image block is represented by two quantization levels (QLs) and a bitmap. Because it requires insignificant computation cost, AMBTC attracts not only a wide range of application developers but also a variety of studies investigating the authentication of its codes. While existing methods protect the AMBTC codes to a large extent, leaving some codes unprotected may render detection insensitive to intentional tampering. The proposed method fully protects the AMBTC codes by embedding authentication codes (ACs) into the QLs. The most significant bits of the QLs are symmetrically perturbed to generate candidate ACs, and the ACs that cause the minimum distortion are embedded into the least significant bits of the QLs. Compared with prior works, the experimental results reveal that the proposed method offers significantly better tamper sensitivity while providing comparable image quality. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
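The AMBTC representation the scheme protects is simple to reproduce. The sketch below shows only the standard AMBTC encoding of one block (the bitmap plus the two quantization levels); the paper's contribution, perturbing the QLs' most significant bits to derive authentication codes and embedding them in the QLs' least significant bits, is not reproduced here. The block values are arbitrary.

```python
import numpy as np

def ambtc_encode(block):
    """AMBTC compression of one block: the bitmap marks pixels at or
    above the block mean; the two quantization levels (QLs) are the
    means of the pixels below and at-or-above the mean."""
    mean = block.mean()
    bitmap = block >= mean
    high = block[bitmap].mean() if bitmap.any() else mean
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    return low, high, bitmap

def ambtc_decode(low, high, bitmap):
    """Reconstruct the block from the two QLs and the bitmap."""
    return np.where(bitmap, high, low)

block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 208],
                  [ 9, 14, 199, 211],
                  [10, 12, 202, 209]], dtype=float)
low, high, bitmap = ambtc_encode(block)
rec = ambtc_decode(low, high, bitmap)
print(f"QLs: low={low:.3f}, high={high:.1f}, bitmap ones={int(bitmap.sum())}")
```

Because a block survives only as (low, high, bitmap), any authentication code must ride inside those values, which is why the scheme works on the QLs' bits rather than on pixels.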

15 pages, 2412 KiB  
Article
No-Reference Image Blur Assessment Based on Response Function of Singular Values
by Shanqing Zhang, Pengcheng Li, Xianghua Xu, Li Li and Ching-Chun Chang
Symmetry 2018, 10(8), 304; https://doi.org/10.3390/sym10080304 - 01 Aug 2018
Cited by 13 | Viewed by 3540
Abstract
Blur is an important factor affecting image quality. This paper presents an efficient no-reference (NR) image blur assessment method based on a response function of singular values. For an input image, the grayscale image is computed to acquire spatial information, the gradient map is computed to acquire shape information, and a saliency map is obtained using the scale-invariant feature transform (SIFT). The grayscale image, the gradient map, and the saliency map are then divided into blocks of the same size. The blocks of the gradient map are converted into discrete cosine transform (DCT) coefficients, from which the response function of singular values (RFSV) is generated. The sum of the RFSV is then used to characterize the image blur. The variance of the grayscale image and the DCT-domain entropy of the gradient map are used to reduce the impact of image content, and SIFT-dependent weights calculated in the saliency map are assigned to the image blocks. Finally, the blur score is the normalized sum of the RFSV. Extensive experiments are conducted on four synthetic databases and two real blur databases. The experimental results indicate that the blur scores produced by our method are highly correlated with the subjective evaluations, and that the proposed method is superior to six state-of-the-art methods. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
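The intuition behind singular-value-based blur measures can be checked numerically: blurring weakens the gradient map, which shrinks its singular values. The toy below demonstrates only that intuition; it is not the paper's RFSV, which additionally works on DCT coefficients of gradient blocks with saliency weighting. The image is synthetic random texture.

```python
import numpy as np

def box_blur3(img):
    """3x3 box blur via shifted copies (edge rows/cols replicated)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def gradient_sv_sum(img):
    """Sum of singular values (nuclear norm) of the horizontal gradient
    map; blur weakens gradients, shrinking the singular values."""
    gx = np.diff(img, axis=1)
    return np.linalg.svd(gx, compute_uv=False).sum()

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))     # toy high-detail "image"
blurred = box_blur3(sharp)
print(f"sharp: {gradient_sv_sum(sharp):.1f}  blurred: {gradient_sv_sum(blurred):.1f}")
```

A blur score built on such a quantity is "no-reference" because it needs no pristine original: the singular-value response of the degraded image alone carries the blur signature.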

19 pages, 6249 KiB  
Article
Game-Theoretic Solutions for Data Offloading in Next Generation Networks
by Muhammad Asif, Shafi Ullah Khan, Rashid Ahmad and Dhananjay Singh
Symmetry 2018, 10(8), 299; https://doi.org/10.3390/sym10080299 - 25 Jul 2018
Cited by 1 | Viewed by 2942
Abstract
In recent years, global mobile data traffic has increased at an unprecedented rate, owing to the worldwide usage of smart devices, the availability of fast Internet connections, and the popularity of social media. Mobile Network Operators (MNOs) therefore face problems in handling this huge traffic flow. Each type of traffic, including real-time video, audio, and text, has its own Quality of Service (QoS) requirements which, if not met, may cause a significant loss of profit. Offloading this traffic more efficiently can enhance the QoS parameters. In this work, we propose an incentive-based game-theoretic framework for data offloading, in which the offloading of each type of traffic earns an incentive determined by a two-stage Stackelberg game. We model the communication between a single Mobile Base Station (MBS) and multiple Access Points (APs) in a crowded metropolitan environment: the leader offers an economic incentive based on the traffic type, and the followers respond to the incentive and offload traffic accordingly. The model optimizes the strategies of both the MBS and the APs to maximize their utilities. For the analysis, we use a combination of analytical and experimental methods. The numerical results characterize the optimal offloading ratio and validate the efficiency of the proposed game, which achieves optimal incentives and optimal offloading. We implemented the model in MATLAB, and the experimental results show that a maximum payoff is achieved and the proposed scheme reaches a Nash equilibrium. Full article
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
14 pages, 2557 KiB  
Article
The Application of a Double CUSUM Algorithm in Industrial Data Stream Anomaly Detection
by Guang Li, Jie Wang, Jing Liang and Caitong Yue
Symmetry 2018, 10(7), 264; https://doi.org/10.3390/sym10070264 - 05 Jul 2018
Cited by 8 | Viewed by 7051
Abstract
The performance of machine learning on data streams is affected by concept drift, drift deviation, and noise interference. This paper proposes a data stream anomaly detection algorithm that combines control charts with sliding windows. The algorithm is named DCUSUM-DS (Double CUSUM Based on Data Stream) because it uses a dual cumulative sum of mean values. DCUSUM-DS is based on nested sliding windows to handle the concept drift problem: it calculates the average of the data within the window twice, extracts new features, and then computes cumulative sum control charts to avoid being misled by interference points. The new algorithm is evaluated on industrial drilling engineering data. Compared with automatic outlier detection for data streams (A-ODDS) and with sliding nest window chart anomaly detection based on data streams (SNWCAD-DS), DCUSUM-DS accounts for concept drift and shields against the small amount of interference that deviates from the overall data. Although the running time increased from 0.1 s to 0.19 s, the classification accuracy measured by the receiver operating characteristic (ROC) increased from 0.89 to 0.95. This meets the needs of oil drilling industry data streams with a sampling frequency of 1 Hz while improving classification accuracy.
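As a rough illustration of the two ingredients the abstract combines, the sketch below pairs a one-sided CUSUM control chart with sliding-window means. The parameters (`target`, slack `k`, threshold `h`, window size) and the single averaging stage are simplified assumptions, not the DCUSUM-DS algorithm itself.

```python
from collections import deque

def cusum(values, target, k, h):
    """One-sided CUSUM control chart: alarm when the cumulative
    positive deviation from `target` (minus slack k) exceeds h."""
    s, alarms = 0.0, []
    for i, x in enumerate(values):
        s = max(0.0, s + (x - target - k))
        if s > h:
            alarms.append(i)
            s = 0.0  # reset after an alarm
    return alarms

def window_means(stream, win):
    """Sliding-window means (the averaging stage in this sketch)."""
    buf, means = deque(maxlen=win), []
    for x in stream:
        buf.append(x)
        if len(buf) == win:
            means.append(sum(buf) / win)
    return means

stream = [0.0] * 20 + [3.0] * 10   # a step change (drift) at index 20
alarms = cusum(window_means(stream, 4), target=0.0, k=0.5, h=2.0)
```

Averaging before accumulating is what lets a scheme like this ignore a single interfering spike while still reacting to a sustained shift.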
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
13 pages, 1165 KiB  
Article
Iterative Group Decomposition for Refining Microaggregation Solutions
by Laksamee Khomnotai, Jun-Lin Lin, Zhi-Qiang Peng and Arpita Samanta Santra
Symmetry 2018, 10(7), 262; https://doi.org/10.3390/sym10070262 - 04 Jul 2018
Cited by 3 | Viewed by 2473
Abstract
Microaggregation refers to partitioning n given records into groups of at least k records each so as to minimize the sum of the within-group squared error. Because microaggregation is non-deterministic polynomial-time hard for multivariate data, most existing approaches are heuristics that derive a solution within a reasonable time. We propose an algorithm for refining the solutions generated by existing microaggregation approaches. The proposed algorithm refines a solution by iteratively either decomposing or shrinking the groups in the solution. Experimental results demonstrate that the proposed algorithm effectively reduces the information loss of a solution.
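The refinement idea of decomposing a group when doing so lowers the within-group squared error can be sketched for one-dimensional records as follows. The halving split, the 1-D data, and the group sizes are illustrative assumptions only; the paper's algorithm also considers shrinking groups.

```python
def sse(group):
    """Within-group sum of squared errors for 1-D records (sketch)."""
    centroid = sum(group) / len(group)
    return sum((x - centroid) ** 2 for x in group)

def decompose_if_better(group, k):
    """Try splitting a group of >= 2k records into two sorted halves;
    keep the split only if it lowers total SSE while preserving the
    minimum group size k (the k-anonymity constraint)."""
    if len(group) < 2 * k:
        return [group]
    g = sorted(group)
    left, right = g[:len(g) // 2], g[len(g) // 2:]
    if len(left) >= k and len(right) >= k and sse(left) + sse(right) < sse(g):
        return [left, right]
    return [group]

# Two well-separated clusters accidentally merged into one group:
groups = decompose_if_better([1, 2, 3, 101, 102, 103], k=3)
```

Splitting this group drops its SSE from about 15,000 to 4, which is exactly the kind of information-loss reduction the refinement targets.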
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
23 pages, 1575 KiB  
Article
RIM4J: An Architecture for Language-Supported Runtime Measurement against Malicious Bytecode in Cloud Computing
by Haihe Ba, Huaizhe Zhou, Huidong Qiao, Zhiying Wang and Jiangchun Ren
Symmetry 2018, 10(7), 253; https://doi.org/10.3390/sym10070253 - 02 Jul 2018
Cited by 2 | Viewed by 3746
Abstract
While cloud customers can benefit from migrating applications to the cloud, they are concerned about the security of the hosted applications, a concern compounded by the fact that customers cannot tell whether their cloud applications are working as expected. Although the memory-safe Java Virtual Machine (JVM) can alleviate this anxiety through control-flow integrity, applications remain prone to violations of bytecode integrity. Analysis of several Java exploits indicates that such violations result primarily from excessive sandbox permissions, loading flaws in Java class libraries and third-party middleware, and abuse of the sun.misc.Unsafe API. To this end, we design an architecture, called RIM4J, that enforces runtime integrity measurement of Java bytecode within a cloud system and can attest this to a cloud customer in an unforgeable manner. The RIM4J architecture is portable: it can be quickly deployed and adopted for real-world purposes without modifying the underlying systems or requiring access to application source code. Moreover, RIM4J is the first architecture to measure dynamically generated bytecode. We apply our runtime measurement architecture to a messaging server application, where we show how RIM4J detects undesirable behaviors such as arbitrary file uploads and remote code execution. This paper also reports an experimental evaluation of a RIM4J prototype using both macro- and micro-benchmarks; the results indicate that RIM4J is a practical solution for real-world applications.
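The core of a runtime bytecode measurement, as RIM4J's attestation relies on, can be caricatured in a few lines: hash each class as it is loaded into an append-only log, then compare against known-good digests. The class name, log format, and whitelist below are hypothetical, and a real deployment would sign the log (e.g., with trusted hardware) rather than trust it as plain data.

```python
import hashlib

def measure(class_name, bytecode, log):
    """Sketch of runtime measurement: hash each class as it is
    loaded and append the digest to an append-only measurement log."""
    digest = hashlib.sha256(bytecode).hexdigest()
    log.append((class_name, digest))
    return digest

def attest(log, known_good):
    """Report classes whose measured digest does not match the
    whitelist; a remote verifier would check a signed copy of this."""
    return [name for name, digest in log if known_good.get(name) != digest]

log = []
measure("com.example.Service", b"\xca\xfe\xba\xbe...", log)  # toy bytecode
suspicious = attest(log, known_good={"com.example.Service": "deadbeef"})
```

Measuring at load time is what lets such a scheme also cover dynamically generated bytecode, which static file hashing would miss.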
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
12 pages, 1374 KiB  
Article
A Cluster-Based Boosting Algorithm for Bankruptcy Prediction in a Highly Imbalanced Dataset
by Tuong Le, Le Hoang Son, Minh Thanh Vo, Mi Young Lee and Sung Wook Baik
Symmetry 2018, 10(7), 250; https://doi.org/10.3390/sym10070250 - 02 Jul 2018
Cited by 63 | Viewed by 6696
Abstract
Bankruptcy prediction has been a popular and challenging research topic in both computer science and economics in recent years because of its importance to financial institutions, fund managers, lenders, governments, and other economic stakeholders. In bankruptcy datasets, the class imbalance problem, in which the number of bankrupt companies is much smaller than the number of normal companies, means that standard classification algorithms do not work well. This study therefore proposes a cluster-based boosting algorithm, CBoost, and a robust framework combining CBoost with the Instance Hardness Threshold (RFCI) for effective bankruptcy prediction on financial datasets. The framework first resamples the imbalanced dataset by undersampling with the Instance Hardness Threshold (IHT), which removes noisy instances with large IHT values from the majority class. CBoost then handles the class imbalance: the majority class is clustered into a number of clusters, and the distance from each sample to its closest centroid is used to initialize its weight. The algorithm performs several iterations to find weak classifiers and combines them into a strong classifier. The resampled set from the previous module is used to train CBoost, which then predicts bankruptcy on the validation set. The proposed framework is verified on the Korean bankruptcy dataset (KBD), which has a very small balancing ratio in both the training and testing phases. The experimental results show that the proposed framework achieves an AUC (area under the ROC curve) of 86.8% and outperforms several methods for dealing with imbalanced data in bankruptcy prediction, such as the GMBoost algorithm, the oversampling-based method using SMOTEENN, and the clustering-based undersampling method, on the experimental dataset.
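The cluster-distance weight initialization that CBoost builds on can be sketched in one dimension. The linear `1 + distance` weighting, the single centroid, and the choice to start minority samples at the maximum majority weight are illustrative assumptions, not the paper's exact scheme.

```python
# Sketch: initialize boosting weights from each majority sample's
# distance to its nearest cluster centroid, so hard (far) samples
# start with more weight; minority samples start at the maximum.

def nearest_centroid_distance(x, centroids):
    return min(abs(x - c) for c in centroids)

def init_weights(majority, centroids, minority):
    d = [nearest_centroid_distance(x, centroids) for x in majority]
    w = [1.0 + di for di in d] + [1.0 + max(d)] * len(minority)
    total = sum(w)
    return [wi / total for wi in w]  # normalized boosting weights

w = init_weights(majority=[0.0, 0.1, 5.0], centroids=[0.0], minority=[9.0])
```

Here the outlying majority sample (5.0) and the minority sample receive equal, maximal initial weight, which is the intended bias before boosting iterations begin.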
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
13 pages, 464 KiB  
Article
An Efficient Object Detection Algorithm Based on Compressed Networks
by Jianjun Li, Kangjian Peng and Chin-Chen Chang
Symmetry 2018, 10(7), 235; https://doi.org/10.3390/sym10070235 - 22 Jun 2018
Cited by 7 | Viewed by 4007
Abstract
For a long time, object detection has been a popular but difficult research problem in the field of pattern recognition. In recent years, object detection algorithms based on convolutional neural networks have achieved excellent results. However, these networks are computationally intensive and have redundant parameters, so they are difficult to deploy on resource-limited embedded devices. For two-stage detectors in particular, the operations and parameters are concentrated in the feature fusion of proposals after the region of interest (ROI) pooling layer, and they are enormous. To address these problems, we propose a subnetwork, the efficient feature fusion module (EFFM), to reduce the number of operations and parameters of a two-stage detector. In addition, we propose a multi-scale dilation region proposal network (RPN) to further improve detection accuracy. Our accuracy is higher than that of Faster R-CNN based on VGG16, with only half the operations and one third of the parameters.
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
23 pages, 1066 KiB  
Article
Multi-Source Stego Detection with Low-Dimensional Textural Feature and Clustering Ensembles
by Fengyong Li, Kui Wu, Xinpeng Zhang, Jingsheng Lei and Mi Wen
Symmetry 2018, 10(5), 128; https://doi.org/10.3390/sym10050128 - 24 Apr 2018
Cited by 2 | Viewed by 2918
Abstract
This work tackles a recent challenge in digital image processing: identifying the steganographic images produced by a steganographer who is unknown among multiple innocent actors. The method does not need a large number of samples to train a classification model, and thus differs significantly from traditional steganalysis. The proposed scheme combines textural features with clustering ensembles. Local ternary patterns (LTP) are employed to design low-dimensional textural features, which are considered more sensitive to steganographic changes in the textured regions of an image. We use the extracted low-dimensional textural features to train a number of hierarchical clustering results, which are integrated into an ensemble based on a majority voting strategy. Finally, the ensemble makes the optimal decision for each suspected image. Extensive experiments show that the proposed scheme is effective and efficient, outperforming state-of-the-art steganalysis methods with an average gain of 4% to 6%.
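A local ternary pattern for a single 3x3 patch, the building block of such low-dimensional textural features, might look like the following sketch. The threshold `t` and the raw ternary code (before any histogram or split coding) are simplifications for illustration.

```python
def local_ternary_pattern(patch, t):
    """Local ternary code of a 3x3 patch (sketch): each neighbour is
    coded +1 / 0 / -1 depending on whether it exceeds the centre
    pixel by more than t, stays within t, or falls below it by t."""
    c = patch[1][1]
    codes = []
    for i in range(3):
        for j in range(3):
            if (i, j) == (1, 1):
                continue  # skip the centre pixel itself
            d = patch[i][j] - c
            codes.append(1 if d > t else (-1 if d < -t else 0))
    return codes

patch = [[52, 55, 61],
         [59, 50, 48],
         [40, 42, 50]]
code = local_ternary_pattern(patch, t=5)
```

The dead zone of width 2t around the centre value is what makes LTP less noise-sensitive than a plain local binary pattern, and hence a plausible detector of subtle steganographic changes.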
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
25 pages, 10524 KiB  
Article
Efficient Algorithms for Real-Time GPU Volumetric Cloud Rendering with Enhanced Geometry
by Carlos Jiménez de Parga and Sebastián Rubén Gómez Palomo
Symmetry 2018, 10(4), 125; https://doi.org/10.3390/sym10040125 - 20 Apr 2018
Cited by 5 | Viewed by 12610
Abstract
This paper presents several new techniques for volumetric cloud rendering using efficient algorithms and data structures based on ray-tracing methods for cumulus generation, achieving an optimal balance between realism and performance. These techniques target applications such as flight simulators, computer games, and educational software, even on conventional graphics hardware. The contours of the clouds are defined by implicit mathematical expressions or triangulated structures inside which volumetric rendering is performed. Novel techniques reproduce the asymmetrical nature of clouds and the effects of light scattering at low computing cost. The work includes a new method for creating randomized fractal clouds using a recursive grammar. The graphical results are comparable to those produced by state-of-the-art hyper-realistic algorithms, while the methods deliver real-time performance superior to particle-based systems. These outcomes suggest that our methods offer a good balance between realism and performance and are suitable for use in the standard graphics industry.
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
13 pages, 1502 KiB  
Article
Application of Sliding Nest Window Control Chart in Data Stream Anomaly Detection
by Guang Li, Jie Wang, Jing Liang and Caitong Yue
Symmetry 2018, 10(4), 113; https://doi.org/10.3390/sym10040113 - 17 Apr 2018
Cited by 12 | Viewed by 8020
Abstract
Since data stream anomaly detection algorithms based on sliding windows are sensitive to abnormal deviations of individual interference data, this paper presents sliding nest window chart anomaly detection based on the data stream (SNWCAD-DS), which employs the concepts of the sliding window and the control chart. By nesting a small sliding window within a large sliding window and analyzing the deviation distance between the two windows, the algorithm increases the out-of-bounds detection ratio and classifies concept-drifting data streams online. The algorithm is evaluated on an industrial drilling engineering data stream and compared with Automatic Outlier Detection for Data Streams (A-ODDS) and Distance-Based Outlier Detection for Data Streams (DBOD-DS). The experimental results show that the new algorithm obtains higher detection accuracy than the compared algorithms. Furthermore, it shields against the influence of individual interference data and satisfies actual engineering needs.
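The nested-window deviation idea can be sketched as follows: a small recent window is compared against a large context window, and points where the two means diverge are flagged. The window sizes, threshold, and toy stream below are illustrative assumptions, not the paper's settings.

```python
from collections import deque

def nest_window_deviation(stream, big, small, threshold):
    """Flag points where the small (recent) window mean deviates from
    the big (context) window mean by more than `threshold`."""
    big_buf = deque(maxlen=big)
    small_buf = deque(maxlen=small)
    flags = []
    for i, x in enumerate(stream):
        big_buf.append(x)
        small_buf.append(x)
        if len(big_buf) == big:  # wait until the context window fills
            dev = abs(sum(small_buf) / small - sum(big_buf) / big)
            if dev > threshold:
                flags.append(i)
    return flags

stream = [1.0] * 10 + [1.1, 0.9, 8.0, 1.0, 1.0]  # one spike at index 12
flags = nest_window_deviation(stream, big=10, small=2, threshold=1.0)
```

Small fluctuations around the context mean stay below the threshold, while the spike at index 12 (and the small window still containing it at index 13) stands out.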
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
16 pages, 5418 KiB  
Article
A Watermarking Method for 3D Printing Based on Menger Curvature and K-Mean Clustering
by Giao N. Pham, Suk-Hwan Lee, Oh-Heum Kwon and Ki-Ryong Kwon
Symmetry 2018, 10(4), 97; https://doi.org/10.3390/sym10040097 - 04 Apr 2018
Cited by 10 | Viewed by 4834
Abstract
Nowadays, 3D printing is widely used in many areas of life. As a result, 3D printing models are often used illegally, without any payment to the original providers, so providers need a solution to identify and protect the copyright of 3D printing. This paper presents a novel watermarking method for the copyright protection of 3D printing based on the Menger facet curvature and K-means clustering. The facets of a 3D printing model are classified into groups based on their Menger curvature values using K-means clustering, and the mean Menger curvature of each group is then computed for embedding the watermark data. The watermark data are embedded into the groups of facets by changing the mean Menger curvature of each group according to the corresponding bit of the watermark. In each group, we select the facet whose Menger curvature is closest to the changed mean, and then transform the vertices of the selected facet according to the changed curvature to generate the watermarked 3D printing model. The watermark data are extracted from 3D-printed objects produced from the watermarked models by a 3D printer. Experimental results after embedding the watermark verified that the proposed method is invisible and robust against geometric attacks such as rotation, scaling, and translation. In experiments with an XYZ Printing Pro 3D printer and a 3D scanner, the accuracy and performance of the proposed method were higher than those of two previous methods in the 3D printing watermarking domain. The proposed method thus provides a better solution for the copyright protection of 3D printing.
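Menger curvature, the quantity used to group facets, has a closed form for three points: four times the area of the triangle they span, divided by the product of its three side lengths. The sketch below computes it for 3-D points and checks the standard identity that points on a circle of radius R have Menger curvature 1/R; how the paper maps facets to point triples is not reproduced here.

```python
import math

def menger_curvature(a, b, c):
    """Menger curvature of three 3-D points:
    4 * area(triangle abc) / (|ab| * |bc| * |ca|)."""
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    cross = [ab[1] * ac[2] - ab[2] * ac[1],
             ab[2] * ac[0] - ab[0] * ac[2],
             ab[0] * ac[1] - ab[1] * ac[0]]
    twice_area = math.sqrt(sum(v * v for v in cross))  # |ab x ac| = 2*area
    denom = math.dist(a, b) * math.dist(b, c) * math.dist(c, a)
    return 2 * twice_area / denom if denom else 0.0

# Three points on the unit circle in the xy-plane: curvature = 1/R = 1.
k = menger_curvature((1, 0, 0), (0, 1, 0), (-1, 0, 0))
```

Because the curvature depends only on relative point positions, it is unchanged by rotation and translation, which is consistent with the robustness to geometric attacks reported above.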
(This article belongs to the Special Issue Information Technology and Its Applications 2021)
15 pages, 56204 KiB  
Article
Anti-3D Weapon Model Detection for Safe 3D Printing Based on Convolutional Neural Networks and D2 Shape Distribution
by Giao N. Pham, Suk-Hwan Lee, Oh-Heum Kwon and Ki-Ryong Kwon
Symmetry 2018, 10(4), 90; https://doi.org/10.3390/sym10040090 - 31 Mar 2018
Cited by 10 | Viewed by 7046
Abstract
With the development of 3D printing, weapons can easily be printed without any restriction from production managers. Detecting 3D weapon models is therefore a necessary issue for safe 3D printing. In this paper, we propose an anti-3D weapon model detection algorithm that prevents the printing of 3D weapon models, based on the D2 shape distribution and improved convolutional neural networks (CNNs). The purpose of the proposed algorithm is to detect 3D weapon models when they are submitted for 3D printing. The D2 shape distribution is computed from random points on the surface of a 3D weapon model and their geometric features in order to construct a D2 vector. The D2 vectors constructed from 3D weapon models are then used to train the improved CNNs, which detect 3D weapon models for safe 3D printing. Experiments with 3D weapon models showed that the D2 shape distributions of 3D weapon models in the same class are the same. Training and testing results also verified that the accuracy of the proposed algorithm is higher than that of conventional works. The proposed algorithm was applied in a small application and could detect 3D weapon models for safe 3D printing.
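The D2 shape distribution itself is straightforward to sketch: sample random pairs of surface points and build a normalized histogram of their distances, which then serves as a fixed-length feature vector for a classifier. The unit-cube corner points, pair count, and bin count below are illustrative stand-ins for a real 3D model's surface samples.

```python
import random

def d2_descriptor(points, pairs, bins, max_dist):
    """D2 shape distribution (sketch): normalized histogram of
    Euclidean distances between randomly chosen point pairs."""
    hist = [0] * bins
    for _ in range(pairs):
        p, q = random.sample(points, 2)
        d = sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
        idx = min(int(d / max_dist * bins), bins - 1)  # clamp top bin
        hist[idx] += 1
    return [h / pairs for h in hist]

random.seed(0)  # deterministic sampling for the illustration
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
vec = d2_descriptor(cube, pairs=2000, bins=8, max_dist=3 ** 0.5)
```

Because the histogram depends only on pairwise distances, models of the same shape class yield very similar D2 vectors regardless of orientation, which is what makes the vector a usable CNN input.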
(This article belongs to the Special Issue Information Technology and Its Applications 2021)