Topic Editors

Prof. Dr. Sanjay Misra
Sr. Scientist, Department of Applied Data Science, Institute for Energy Technology, 1777 Halden, Norway
Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
Prof. Dr. Bharti Suri
University School of Information and Communication Technology, GGS Indraprastha University, Delhi 110078, India

Software Engineering and Applications

Abstract submission deadline
31 July 2024
Manuscript submission deadline
31 October 2024
Viewed by
118057

Topic Information

Dear Colleagues,

This Topic on software engineering and applications (SEA) aims to provide a forum for scientists, researchers, engineers, practitioners, and academics to share their ideas, experiences, and research on the various aspects and applications of software engineering. SEA covers frontier issues and trends in modern software development processes. These include process models; development processes for different software platforms (e.g., social networks, clouds); processes for adaptive, dependable, and embedded systems; agile development; software engineering practices; requirements, systems, and design engineering, including architectural design, component-level design, formal methods, and software modeling; testing strategies and tactics; process and product metrics; web engineering; project management; risk management; and configuration management. This Topic will also consider papers on the development process of software and software systems (including AI-based systems) in various application areas, e.g., agriculture, the aviation industry, business, cybercrime, education, government, and the military.

The following topics are welcome:

  • Software development fundamentals;
  • Software process models;
  • Software standardization and certification;
  • Advanced topics in software engineering;
  • Agile, DevOps models, practices, challenges;
  • Software requirement engineering;
  • Software process assessment (SPA) and software process improvement (SPI);
  • Software maintenance and testing;
  • Software quality management;
  • Artifacts, software verification and validation;
  • Software project management;

Software engineering practices and applications

  • Knowledge-based systems and formal methods;
  • Algorithms and programming languages;
  • Search engines and information retrieval;
  • AI for software systems;
  • Multimedia and visual software engineering;
  • Formal methods;
  • Web engineering and its applications;
  • Web-based systems in various application areas;
  • Knowledge-based systems;
  • Software gaming;
  • Software modeling and simulation;
  • Various application software in cloud computing, green computing, cybersecurity, IoT, big data, distributed systems, supercomputing, and quantum computing;
  • Software, algorithms, and software systems in agriculture, the aviation industry, business, management, education, health, government, the military, etc.

Prof. Dr. Sanjay Misra
Prof. Dr. Robertas Damaševičius
Prof. Dr. Bharti Suri
Topic Editors

Keywords

  • software
  • software engineering (SE)
  • advanced SE applications
  • web engineering
  • artificial intelligence and SE

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Applied Sciences (applsci) | 2.7 | 4.5 | 2011 | 16.9 days | CHF 2400
Electronics (electronics) | 2.9 | 4.7 | 2012 | 15.6 days | CHF 2400
Informatics (informatics) | 3.1 | 4.8 | 2014 | 30.3 days | CHF 1800
Information (information) | 3.1 | 5.8 | 2010 | 18 days | CHF 1600
Software (software) | - | - | 2022 | 19.3 days | CHF 1000

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (49 papers)

14 pages, 1136 KiB  
Article
NICE: A Web-Based Tool for the Characterization of Transient Noise in Gravitational Wave Detectors
by Nunziato Sorrentino, Massimiliano Razzano, Francesco Di Renzo, Francesco Fidecaro and Gary Hemming
Software 2024, 3(2), 169-182; https://doi.org/10.3390/software3020008 - 18 Apr 2024
Viewed by 296
Abstract
NICE—Noise Interactive Catalogue Explorer—is a web service developed for rapid-qualitative glitch analysis in gravitational wave data. Glitches are transient noise events that can smother the gravitational wave signal in data recorded by gravitational wave interferometer detectors. NICE provides interactive graphical tools to support detector noise characterization activities, in particular, the analysis of glitches from past and current observing runs, passing from glitch population visualization to individual glitch characterization. The NICE back-end API consists of a multi-database structure that brings order to glitch metadata generated by external detector characterization tools so that such information can be easily requested by gravitational wave scientists. Another novelty introduced by NICE is the interactive front-end infrastructure focused on glitch instrumental and environmental origin investigation, which uses labels determined by their time–frequency morphology. The NICE domain is intended for integration with the Advanced Virgo, Advanced LIGO, and KAGRA characterization pipelines and it will interface with systematic classification activities related to the transient noise sources present in the Virgo detector. Full article
(This article belongs to the Topic Software Engineering and Applications)

22 pages, 341 KiB  
Article
Detection Techniques for DBI Environment in Windows
by Seongwoo Park and Yongsu Park
Electronics 2024, 13(5), 871; https://doi.org/10.3390/electronics13050871 - 23 Feb 2024
Viewed by 720
Abstract
Dynamic binary instrumentation (DBI) is a technique that enables the monitoring and analysis of software, providing enhanced performance compared to other analysis tools. However, to provide the robust dynamic analysis capabilities, it commonly requires the setup of separate environments for analysis, thereby increasing the contrast with normal execution and the distinctive features that may reveal the presence of the DBI environment. Malware adapts to detect the presence of DBI environments, and it consequently leads to the expansion of the attack surface. In this paper, we provide an in-depth exploration of anti-instrumentation techniques that can be exploited by malware, with a specific focus on the Windows operating system. Leveraging the unique features of the DBI environment, we introduce and categorize DBI detection techniques. Additionally, we conduct a comprehensive analysis of the techniques through the implementation algorithms with bypassing methods for the techniques. Our experiments showcase the effectiveness of these techniques on the latest versions of several DBI frameworks. Furthermore, we address associated concerns with the aim of contributing to the development of enhanced tools to combat malicious activities exploiting DBI and propose directions for future research. Full article
(This article belongs to the Topic Software Engineering and Applications)

18 pages, 4425 KiB  
Article
A Novel Non-Intrusive Load Monitoring Algorithm for Unsupervised Disaggregation of Household Appliances
by D. Criado-Ramón, L. G. B. Ruiz, J. R. S. Iruela and M. C. Pegalajar
Information 2024, 15(2), 87; https://doi.org/10.3390/info15020087 - 05 Feb 2024
Viewed by 1062
Abstract
This paper introduces the first completely unsupervised methodology for non-intrusive load monitoring that does not rely on any additional data, making it suitable for real-life applications. The methodology includes an algorithm to efficiently decompose the aggregated energy load from households in events and algorithms based on expert knowledge to assign each of these events to four types of appliances: fridge, dishwasher, microwave, and washer/dryer. The methodology was developed to work with smart meters that have a granularity of 1 min and was evaluated using the Reference Energy Disaggregation Dataset. The results show that the algorithm can disaggregate the refrigerator with high accuracy and the usefulness of the proposed methodology to extract relevant features from other appliances, such as the power use and duration from the heating cycles of a dishwasher. Full article
(This article belongs to the Topic Software Engineering and Applications)
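
As a rough illustration of the event-based idea described in this abstract, the sketch below detects step changes in a 1-min aggregate power series and labels them with simple hand-written rules. The thresholds, rules, and sample data are illustrative assumptions, not the authors' algorithm.

```python
# Minimal sketch of event-based disaggregation on 1-min aggregate power data.
# Thresholds and labeling rules are illustrative assumptions, not the paper's algorithm.
import numpy as np

def detect_events(power_w, min_step=50.0):
    """Return (index, delta) pairs where aggregate power jumps by >= min_step watts."""
    deltas = np.diff(power_w)
    idx = np.where(np.abs(deltas) >= min_step)[0] + 1
    return [(int(i), float(deltas[i - 1])) for i in idx]

def label_event(delta_w):
    """Toy expert rules mapping a step size (watts) to an appliance label."""
    magnitude = abs(delta_w)
    if magnitude < 300:
        return "fridge"          # small, periodic compressor cycles
    if magnitude < 1200:
        return "microwave"       # short, medium-power bursts
    if magnitude < 2500:
        return "dishwasher"      # heating cycles
    return "washer/dryer"        # largest loads

aggregate = np.array([120, 130, 125, 1350, 1340, 130, 2400, 2390, 140], dtype=float)
for i, delta in detect_events(aggregate):
    print(f"t={i} min  step={delta:+.0f} W  ->  {label_event(delta)}")
```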

34 pages, 4263 KiB  
Article
An Open-Source Software Reliability Model Considering Learning Factors and Stochastically Introduced Faults
by Jinyong Wang and Ce Zhang
Appl. Sci. 2024, 14(2), 708; https://doi.org/10.3390/app14020708 - 14 Jan 2024
Viewed by 613
Abstract
In recent years, software development models have undergone changes. In order to meet user needs and functional changes, open-source software continuously improves its software quality through successive releases. Due to the iterative development process of open-source software, open-source software testing also requires continuous learning to understand the changes in the software. Therefore, the fault detection process of open-source software involves a learning process. Additionally, the complexity and uncertainty of the open-source software development process also lead to stochastically introduced faults when troubleshooting in the open-source software debugging process. Considering the phenomenon of learning factors and the random introduction of faults during the testing process of open-source software, this paper proposes a reliability modeling method for open-source software that considers learning factors and the random introduction of faults. Least squares estimation and maximum likelihood estimation are used to determine the model parameters. Four fault data sets from Apache open-source software projects are used to compare the model performances. Experimental results indicate that the proposed model is superior to other models. The proposed model can accurately predict the number of remaining faults in the open-source software and be used for actual open-source software reliability evaluation.
(This article belongs to the Topic Software Engineering and Applications)
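
The paper's exact mean value function is not given in this abstract, so the sketch below fits a generic learning-factor (inflection S-shaped) reliability growth model to cumulative fault counts with least squares; the functional form, the toy data, and the initial guesses are assumptions.

```python
# Least-squares fit of a generic learning-factor SRGM to cumulative fault counts.
# The mean value function below is an illustrative inflection S-shaped form,
# not necessarily the model proposed in the paper.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b, c):
    """m(t) = a * (1 - exp(-b t)) / (1 + c * exp(-b t)); c acts as a learning factor."""
    return a * (1.0 - np.exp(-b * t)) / (1.0 + c * np.exp(-b * t))

# Toy cumulative fault data (weeks vs. faults detected so far).
t = np.arange(1, 13, dtype=float)
faults = np.array([4, 9, 16, 25, 35, 44, 52, 58, 62, 65, 67, 68], dtype=float)

(a, b, c), _ = curve_fit(mean_value, t, faults, p0=[80.0, 0.2, 1.0], maxfev=10000)
print(f"estimated total faults a={a:.1f}, detection rate b={b:.3f}, learning factor c={c:.2f}")
print(f"predicted remaining faults after week 12: {a - mean_value(12.0, a, b, c):.1f}")
```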

20 pages, 806 KiB  
Article
The Potential of AI-Driven Assistants in Scaled Agile Software Development
by Vasilka Saklamaeva and Luka Pavlič
Appl. Sci. 2024, 14(1), 319; https://doi.org/10.3390/app14010319 - 29 Dec 2023
Viewed by 1902
Abstract
Scaled agile development approaches are now used widely in modern software engineering, allowing businesses to improve teamwork, productivity, and product quality. The incorporation of artificial intelligence (AI) into scaled agile development methods (SADMs) has emerged as a potential strategy in response to the ongoing demand for simplified procedures and the increasing complexity of software projects. This paper explores the intersection of AI-driven assistants within the context of the scaled agile framework (SAFe) for large-scale software development, as it stands out as the most widely adopted framework. Our paper pursues three principal objectives: (1) an evaluation of the challenges and impediments encountered by organizations during the implementation of SADMs, (2) an assessment of the potential advantages stemming from the incorporation of AI in large-scale contexts, and (3) the compilation of aspects of SADMs that AI-driven assistants enhance. Through a comprehensive systematic literature review, we identified and described 18 distinct challenges that organizations confront. In the course of our research, we pinpointed seven benefits and five challenges associated with the implementation of AI in SADMs. These findings were systematically categorized based on their occurrence either within the development phase or the phases encompassing planning and control. Furthermore, we compiled a list of 15 different AI-driven assistants and tools, subjecting them to a more detailed examination, and employing them to address the challenges we uncovered during our research. One of the key takeaways from this paper is the exceptional versatility and effectiveness of AI-driven assistants, demonstrating their capability to tackle a broader spectrum of problems. In conclusion, this paper not only sheds light on the transformative potential of AI, but also provides invaluable insights for organizations aiming to enhance their agility and management capabilities. Full article
(This article belongs to the Topic Software Engineering and Applications)

22 pages, 8370 KiB  
Article
PV-OPTIM: A Software Architecture and Functionalities for Prosumers
by Adela Bâra and Simona-Vasilica Oprea
Electronics 2024, 13(1), 161; https://doi.org/10.3390/electronics13010161 - 29 Dec 2023
Viewed by 569
Abstract
The future development of the energy sector is influenced by Renewable Energy Sources (RES) and their integration. The main hindrance with RES is that their output is highly volatile and less predictable. However, the utility of the RES can be further enhanced by prediction, optimization, and control algorithms. The scope of this paper is to disseminate a smart Adaptive Optimization and Control (AOC) software for prosumers, namely PV-OPTIM, that is developed to maximize the consumption from local Photovoltaic (PV) systems and, if the solar energy is not available, to minimize the cost by finding the best operational time slots. Furthermore, PV-OPTIM aims to increase the Self-Sustainable Ratio (SSR). If storage is available, PV-OPTIM is designed to protect the battery lifetime. AOC software consists of three algorithms: (i) PV Forecast algorithm (PVFA), (ii) Day Ahead Optimization Algorithm (DAOA), and (iii) Real Time Control Algorithm (RTCA). Both software architecture and functionalities, including interactions, are depicted to promote and replicate its usage. The economic impact is related to cost reduction and energy independence reflected by the SSR. The electricity costs are reduced after optimization and further significantly decrease in case of real-time control, the percentage depending on the flexibility of the appliances and the configuration parameters of the RTCA. By optimizing and controlling the load, prosumers increase their SSR to at least 70% in the case of small PV systems with less than 4 kW and to more than 85% in the case of PV systems over 5 kW. By promoting free software applications to enhance RES integration, we estimate that pro-environmental attitude will increase. Moreover, the PV-OPTIM provides support for trading activities on the Local Electricity Markets (LEM) by providing the deficit and surplus quantities for the next day, allowing prosumers to set-up their bids. Full article
(This article belongs to the Topic Software Engineering and Applications)
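
The Self-Sustainable Ratio (SSR) reported above can be illustrated with a minimal computation: self-consumed PV energy divided by total consumption. This common definition and the hourly profiles below are assumptions and may differ in detail from the PV-OPTIM implementation.

```python
# Minimal sketch: Self-Sustainable Ratio (SSR) from hourly PV generation and load.
# SSR is computed here as self-consumed PV energy over total consumption (assumed definition).
pv_kwh   = [0, 0, 0, 0.2, 0.8, 1.5, 2.1, 2.4, 2.2, 1.6, 0.7, 0.1]  # assumed profile
load_kwh = [0.4, 0.3, 0.3, 0.5, 0.9, 1.2, 1.0, 1.1, 1.4, 1.3, 1.0, 0.8]

self_consumed = sum(min(p, l) for p, l in zip(pv_kwh, load_kwh))
ssr = self_consumed / sum(load_kwh)
print(f"self-consumed PV: {self_consumed:.2f} kWh, SSR = {ssr:.1%}")
```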

18 pages, 584 KiB  
Article
Evaluating the Usability and Functionality of Intelligent Source Code Completion Assistants: A Comprehensive Review
by Tilen Hliš, Luka Četina, Tina Beranič and Luka Pavlič
Appl. Sci. 2023, 13(24), 13061; https://doi.org/10.3390/app132413061 - 07 Dec 2023
Viewed by 1092
Abstract
As artificial intelligence advances, source code completion assistants are becoming more advanced and powerful. Existing traditional assistants are no longer up to all the developers’ challenges. Traditional assistants usually present proposals in alphabetically sorted lists, which does not make a developer’s tasks any easier (i.e., they still have to search and filter an appropriate proposal manually). As a possible solution to the presented issue, intelligent assistants that can classify suggestions according to relevance in particular contexts have emerged. Artificial intelligence methods have proven to be successful in solving such problems. Advanced intelligent assistants not only take into account the context of a particular source code but also, more importantly, examine other available projects in detail to extract possible patterns related to particular source code intentions. This is how intelligent assistants try to provide developers with relevant suggestions. By conducting a systematic literature review, we examined the current intelligent assistant landscape. Based on our review, we tested four intelligent assistants and compared them according to their functionality. GitHub Copilot, which stood out, allows suggestions in the form of complete source code sections. One would expect that intelligent assistants, with their outstanding functionalities, would be one of the most popular helpers in a developer’s toolbox. However, through a survey we conducted among practitioners, the results, surprisingly, contradicted this idea. Although intelligent assistants promise high usability, our questionnaires indicate that usability improvements are still needed. However, our research data show that experienced developers value intelligent assistants highly, highlighting their significant utility for the experienced developers group when compared to less experienced individuals. The unexpectedly low net promoter score (NPS) for intelligent code assistants in our study was quite surprising, highlighting a stark contrast between the anticipated impact of these advanced tools and their actual reception among developers. Full article
(This article belongs to the Topic Software Engineering and Applications)
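
The net promoter score mentioned at the end of this abstract follows the standard formula (share of promoters scoring 9-10 minus share of detractors scoring 0-6); the sketch below computes it over made-up survey responses.

```python
# Standard net promoter score (NPS) computation over 0-10 survey answers.
# The sample responses are made up for illustration.
responses = [10, 9, 8, 7, 7, 6, 5, 9, 3, 8, 6, 10, 4, 7, 9]

promoters  = sum(1 for r in responses if r >= 9)
detractors = sum(1 for r in responses if r <= 6)
nps = 100.0 * (promoters - detractors) / len(responses)
print(f"promoters={promoters}, detractors={detractors}, NPS={nps:+.0f}")
```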

22 pages, 430 KiB  
Article
TCCCD: Triplet-Based Cross-Language Code Clone Detection
by Yong Fang, Fangzheng Zhou, Yijia Xu and Zhonglin Liu
Appl. Sci. 2023, 13(21), 12084; https://doi.org/10.3390/app132112084 - 06 Nov 2023
Viewed by 941
Abstract
Code cloning is a common practice in software development, where developers reuse existing code to accelerate programming speed and enhance work efficiency. Existing clone-detection methods mainly focus on code clones within a single programming language. To address the challenge of code clone instances in cross-platform development, we propose a novel method called TCCCD, which stands for Triplet-Based Cross-Language Code Clone Detection. Our approach is based on machine learning and can accurately detect code clone instances between different programming languages. We used the pre-trained model UniXcoder to map programs written in different languages into the same vector space and learn their code representations. Then, we fine-tuned TCCCD using triplet learning to improve its effectiveness in cross-language clone detection. To assess the effectiveness of our proposed approach, we conducted thorough comparative experiments using the dataset provided by the paper titled CLCDSA (Cross Language Code Clone Detection using Syntactical Features and API Documentation). The experimental results demonstrated a significant improvement of our approach over the state-of-the-art baselines, with precision, recall, and F1-measure scores of 0.96, 0.91, and 0.93, respectively. In summary, we propose a novel cross-language code-clone-detection method called TCCCD. TCCCD leverages the pre-trained model UniXcoder for source code representation and fine-tunes the model using triplet learning. In the experimental results, TCCCD outperformed the state-of-the-art baselines in terms of precision, recall, and F1-measure.
(This article belongs to the Topic Software Engineering and Applications)
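
A minimal sketch of the triplet-learning step described above: embeddings of an anchor snippet, a cross-language clone (positive), and an unrelated snippet (negative) are pushed through a triplet margin loss. The placeholder encoder stands in for UniXcoder, and the margin, batch construction, and distance are assumptions, not the paper's training setup.

```python
# Minimal sketch of triplet-based fine-tuning over code embeddings.
# The encoder below is a placeholder standing in for a pre-trained code encoder
# such as UniXcoder; margin and batching are illustrative assumptions.
import torch
import torch.nn as nn

class PlaceholderEncoder(nn.Module):
    """Stands in for a pre-trained cross-language code encoder."""
    def __init__(self, vocab_size=1000, dim=256):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)

    def forward(self, token_ids):
        return self.embed(token_ids)

encoder = PlaceholderEncoder()
loss_fn = nn.TripletMarginLoss(margin=1.0)   # pulls clones together, pushes non-clones apart
optimizer = torch.optim.AdamW(encoder.parameters(), lr=2e-5)

# Toy batch: anchor (e.g., Java), positive (Python clone), negative (unrelated snippet).
anchor   = torch.randint(0, 1000, (8, 64))
positive = torch.randint(0, 1000, (8, 64))
negative = torch.randint(0, 1000, (8, 64))

loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
optimizer.step()
print(f"triplet loss: {loss.item():.4f}")
```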

29 pages, 9898 KiB  
Article
Developing Web-Based Process Management with Automatic Code Generation
by Burak Uyanık and Ahmet Sayar
Appl. Sci. 2023, 13(21), 11737; https://doi.org/10.3390/app132111737 - 26 Oct 2023
Viewed by 793
Abstract
Automated code generation and process flow management are central to web-based application development today. This database-centric approach targets the form and process management challenges faced by corporate companies. It minimizes the time losses caused by managing hundreds of forms and processes, especially in large companies. Shortening development times, optimizing user interaction, and simplifying the code are critical advantages offered by this methodology. These low-code systems accelerate development, allowing organizations to adapt to the market quickly. This approach simplifies the development process with drag-and-drop features and enables developers to produce more effective solutions with less code. Automatic code generation with flow diagrams allows one to manage inter-page interactions and processes more intuitively. The interactive Process Design Editor developed in this study makes code generation more user-friendly and accessible. The case study results show that a 98.68% improvement in development processes, a 95.84% improvement in test conditions, and a 36.01% improvement in code size were achieved with this system. In conclusion, automated code generation and process flow management represent a significant evolution in web application development processes. This methodology both shortens development times and improves code quality. In the future, the demand for these technologies is expected to increase even more. Full article
(This article belongs to the Topic Software Engineering and Applications)

49 pages, 11733 KiB  
Article
Transpiler-Based Architecture Design Model for Back-End Layers in Software Development
by Andrés Bastidas Fuertes, María Pérez and Jaime Meza
Appl. Sci. 2023, 13(20), 11371; https://doi.org/10.3390/app132011371 - 17 Oct 2023
Cited by 1 | Viewed by 1482
Abstract
The utilization of software architectures and designs is widespread in software development, offering conceptual frameworks to address recurring challenges. A transpiler is a tool that automatically converts source code from one high-level programming language to another, ensuring algorithmic equivalence. This study introduces an innovative software architecture design model that integrates transpilers into the back-end layer, enabling the automatic transformation of business logic and back-end components from a single source code (the coding artifact) into diverse equivalent versions using distinct programming languages (the automatically produced code). This work encompasses both abstract and detailed design aspects, covering the proposal, automated processes, layered design, development environment, nest implementations, and cross-cutting components. In addition, it defines the main target audiences, discusses pros and cons, examines their relationships with prevalent design paradigms, addresses considerations about compatibility and debugging, and emphasizes the pivotal role of the transpiler. An empirical experiment involving the practical application of this model was conducted by implementing a collaborative to-do list application. This paper comprehensively outlines the relevant methodological approach, strategic planning, precise execution, observed outcomes, and insightful reflections while underscoring the model's pragmatic viability and highlighting its relevance across various software development contexts. Our contribution aims to enrich the field of software architecture design by introducing a new way of designing multi-programming-language software.
(This article belongs to the Topic Software Engineering and Applications)

30 pages, 26327 KiB  
Article
Software Operation Anomalies Diagnosis Method Based on a Multiple Time Windows Mixed Model
by Tao Shi, Zhuoliang Zou and Jun Ai
Appl. Sci. 2023, 13(20), 11349; https://doi.org/10.3390/app132011349 - 16 Oct 2023
Viewed by 698
Abstract
The detection of anomalies in software systems has become increasingly crucial in recent years due to their impact on overall software quality. However, existing integrated anomaly detectors usually combine the results of multiple detectors in a clustering manner and do not consider the changes in data anomalies in the time dimension. This paper investigates the limitations of existing anomaly detection methods and proposes an improved integrated anomaly detection approach based on time windows and a voting mechanism. By utilizing multiple time windows, the proposed method overcomes the challenges of cumulative anomalies and achieves enhanced performance in capturing anomalies that accumulate gradually over time. Additionally, two hybrid models are introduced, based on accuracy and sensitivity, respectively, to optimize performance metrics such as AUC, precision, recall, and F1-score. The proposed method demonstrates remarkable performance, achieving either the highest or only a marginal 3% lower performance compared to the optimal model. Full article
(This article belongs to the Topic Software Engineering and Applications)
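
A minimal sketch of the multi-window voting idea: the same series is screened with detectors over several window sizes, and a point is reported only when enough windows agree. The window sizes, the per-window z-score rule, and the voting threshold are illustrative assumptions rather than the paper's hybrid models.

```python
# Sketch of anomaly detection with multiple time windows and majority voting.
# Window sizes, the z-score rule, and the voting threshold are assumptions.
import numpy as np

def zscore_flags(series, window):
    """Flag points whose deviation from the trailing-window mean exceeds 3 sigma."""
    flags = np.zeros(len(series), dtype=bool)
    for i in range(window, len(series)):
        hist = series[i - window:i]
        std = hist.std()
        if std > 0 and abs(series[i] - hist.mean()) > 3 * std:
            flags[i] = True
    return flags

def vote(series, windows=(10, 30, 60), min_votes=2):
    votes = sum(zscore_flags(series, w).astype(int) for w in windows)
    return votes >= min_votes

rng = np.random.default_rng(0)
metric = rng.normal(100, 2, 300)
metric[200:210] += np.linspace(5, 40, 10)     # slowly accumulating anomaly
print("anomalous indices:", np.where(vote(metric))[0])
```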

20 pages, 364 KiB  
Article
A Differential Datalog Interpreter
by Matthew James Stephenson
Software 2023, 2(3), 427-446; https://doi.org/10.3390/software2030020 - 21 Sep 2023
Viewed by 6284
Abstract
The core reasoning task for datalog engines is materialization, the evaluation of a datalog program over a database alongside its physical incorporation into the database itself. The de facto method of computing it is through the recursive application of inference rules. Because this is a costly operation, it is a must for datalog engines to provide incremental materialization; that is, to adjust the computation to new data instead of restarting from scratch. One of the major caveats is that deleting data is notoriously more involved than adding it, since one has to take into account all possible data that has been entailed from what is being deleted. Differential dataflow is a computational model that provides efficient incremental maintenance, notably with equal performance between additions and deletions, and work distribution of iterative dataflows. In this paper, we investigate the performance of materialization with three reference datalog implementations, out of which one is built on top of a lightweight relational engine, and the two others are differential-dataflow and non-differential versions of the same rewrite algorithm with the same optimizations. Experimental results suggest that monotonic aggregation is more powerful than merely ascending the powerset lattice.
(This article belongs to the Topic Software Engineering and Applications)
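
The materialization task described above can be illustrated with the textbook semi-naive evaluation of a transitive-closure program; this is the classic batch algorithm, not the differential-dataflow incremental engine studied in the paper.

```python
# Semi-naive materialization of a classic datalog program (transitive closure):
#   path(x, y) :- edge(x, y).
#   path(x, z) :- path(x, y), edge(y, z).
# Illustrates the core reasoning task only; it is not an incremental/differential engine.
def materialize_paths(edges):
    path = set(edges)          # facts derived so far
    delta = set(edges)         # facts new in the previous round
    while delta:
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2} - path
        path |= new
        delta = new            # only join on newly derived facts (semi-naive)
    return path

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(materialize_paths(edges)))
```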

25 pages, 1679 KiB  
Article
SUCCEED: Sharing Upcycling Cases with Context and Evaluation for Efficient Software Development
by Takuya Nakata, Sinan Chen, Sachio Saiki and Masahide Nakamura
Information 2023, 14(9), 518; https://doi.org/10.3390/info14090518 - 21 Sep 2023
Cited by 1 | Viewed by 1083
Abstract
Software upcycling, a form of software reuse, is a concept that efficiently generates novel, innovative, and value-added development projects by utilizing knowledge extracted from past projects. However, how to integrate the materials derived from these projects for upcycling remains uncertain. This study defines a systematic model for upcycling cases and develops the Sharing Upcycling Cases with Context and Evaluation for Efficient Software Development (SUCCEED) system to support the implementation of new upcycling initiatives by effectively sharing cases within the organization. To ascertain the efficacy of upcycling within our proposed model and system, we formulated three research questions and conducted two distinct experiments. Through surveys, we identified motivations and characteristics of shared upcycling-relevant development cases. Development tasks were divided into groups, those that employed the SUCCEED system and those that did not, in order to discern the enhancements brought about by upcycling. As a result of this research, we accomplished a comprehensive structuring of both technical and experiential knowledge beneficial for development, a feat previously unrealizable through conventional software reuse, and successfully realized reuse in a proactive and closed environment through construction of the wisdom of crowds for upcycling cases. Consequently, it becomes possible to systematically perform software upcycling by leveraging knowledge from existing projects for streamlining of software development. Full article
(This article belongs to the Topic Software Engineering and Applications)

24 pages, 1901 KiB  
Article
A Machine Learning Python-Based Search Engine Optimization Audit Software
by Konstantinos I. Roumeliotis and Nikolaos D. Tselikas
Informatics 2023, 10(3), 68; https://doi.org/10.3390/informatics10030068 - 25 Aug 2023
Viewed by 2850
Abstract
In the present-day digital landscape, websites have increasingly relied on digital marketing practices, notably search engine optimization (SEO), as a vital component in promoting sustainable growth. The traffic a website receives directly determines its development and success. As such, website owners frequently engage the services of SEO experts to enhance their website’s visibility and increase traffic. These specialists employ premium SEO audit tools that crawl the website’s source code to identify structural changes necessary to comply with specific ranking criteria, commonly called SEO factors. Working collaboratively with developers, SEO specialists implement technical changes to the source code and await the results. The cost of purchasing premium SEO audit tools or hiring an SEO specialist typically ranges in the thousands of dollars per year. Against this backdrop, this research endeavors to provide an open-source Python-based Machine Learning SEO software tool to the general public, catering to the needs of both website owners and SEO specialists. The tool analyzes the top-ranking websites for a given search term, assessing their on-page and off-page SEO strategies, and provides recommendations to enhance a website’s performance to surpass its competition. The tool yields remarkable results, boosting average daily organic traffic from 10 to 143 visitors. Full article
(This article belongs to the Topic Software Engineering and Applications)
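
As a small illustration of on-page auditing, the sketch below fetches a page and extracts a few common on-page factors with requests and BeautifulSoup. The URL is a placeholder and the factor list is an assumed subset; this is not the tool released by the authors.

```python
# Tiny illustration of extracting a few on-page SEO factors from a page's HTML.
# The URL is a placeholder and the factors are a small assumed subset.
import requests
from bs4 import BeautifulSoup

def on_page_factors(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    title = soup.title.string.strip() if soup.title and soup.title.string else ""
    meta = soup.find("meta", attrs={"name": "description"})
    return {
        "title_length": len(title),
        "has_meta_description": meta is not None,
        "h1_count": len(soup.find_all("h1")),
        "link_count": len(soup.find_all("a", href=True)),
    }

print(on_page_factors("https://example.com"))
```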

24 pages, 1471 KiB  
Article
A Multimodal Deep Learning Model Using Text, Image, and Code Data for Improving Issue Classification Tasks
by Changwon Kwak, Pilsu Jung and Seonah Lee
Appl. Sci. 2023, 13(16), 9456; https://doi.org/10.3390/app13169456 - 21 Aug 2023
Viewed by 3430
Abstract
Issue reports are valuable resources for the continuous maintenance and improvement of software. Managing issue reports requires a significant effort from developers. To address this problem, many researchers have proposed automated techniques for classifying issue reports. However, those techniques fall short of yielding reasonable classification accuracy. We notice that those techniques rely on text-based unimodal models. In this paper, we propose a novel multimodal model-based classification technique to use heterogeneous information in issue reports for issue classification. The proposed technique combines information from text, images, and code of issue reports. To evaluate the proposed technique, we conduct experiments with four different projects. The experiments compare the performance of the proposed technique with text-based unimodal models. Our experimental results show that the proposed technique achieves a 5.07% to 14.12% higher F1-score than the text-based unimodal models. Our findings demonstrate that utilizing heterogeneous data of issue reports helps improve the performance of issue classification. Full article
(This article belongs to the Topic Software Engineering and Applications)
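
A minimal sketch of one way to combine the three modalities: per-modality embeddings (text, image, code) are concatenated and fed to a small classification head. The embedding sizes and the fusion head are assumptions; the paper's architecture may differ.

```python
# Sketch of a late-fusion classifier over text, image, and code embeddings.
# Embedding sizes and the fusion head are assumptions.
import torch
import torch.nn as nn

class IssueFusionClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, code_dim=768, num_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + image_dim + code_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(256, num_classes),
        )

    def forward(self, text_emb, image_emb, code_emb):
        fused = torch.cat([text_emb, image_emb, code_emb], dim=-1)  # simple concatenation fusion
        return self.head(fused)

model = IssueFusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 512), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 2])
```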

27 pages, 10016 KiB  
Article
Challenges and Solutions for Engineering Applications on Smartphones
by Anthony Khoury, Mohamad Abbas Kaddaha, Maya Saade, Rafic Younes, Rachid Outbib and Pascal Lafon
Software 2023, 2(3), 350-376; https://doi.org/10.3390/software2030017 - 18 Aug 2023
Cited by 1 | Viewed by 1684
Abstract
This paper starts by presenting the concept of a mobile application. A literature review is conducted, which shows that there is still a certain lack with regard to smartphone applications in the domain of engineering as independent simulation applications and not only as extensions of smartphone tools. The challenges behind this lack are then discussed. Subsequently, three case studies of engineering applications for both smartphones and the internet are presented, alongside their solutions to the challenges presented. The first case study concerns an engineering application for systems control. The second case study focuses on an engineering application for composite materials. The third case study focuses on the finite element method and structure generation. The solutions to the presented challenges are then described through their implementation in the applications. The three case studies show a new system of thought concerning the development of engineering smartphone applications. Full article
(This article belongs to the Topic Software Engineering and Applications)

13 pages, 3045 KiB  
Article
Empirical Comparison of Higher-Order Mutation Testing and Data-Flow Testing of C# with the Aid of Genetic Algorithm
by Eman H. Abd-Elkawy and Rabie Ahmed
Appl. Sci. 2023, 13(16), 9170; https://doi.org/10.3390/app13169170 - 11 Aug 2023
Cited by 1 | Viewed by 763
Abstract
Data-Flow and Higher-Order Mutation are white-box testing techniques. To our knowledge, no work has been proposed to compare data flow and Higher-Order Mutation. This paper compares the all def-uses Data-Flow and second-order mutation criteria. The comparison will support testing decision-making, especially when choosing a suitable criterion. This comparison investigates the subsumption relation between these two criteria and evaluates the effectiveness of test data developed for each. To compare the two criteria, a set of test data satisfying each criterion is generated using genetic algorithms; the set is then used to explore whether one criterion subsumes the other criterion and assess the effectiveness of the test set that was developed for one methodology in terms of the other. The results showed that the mean mutation coverage ratio of the all du-pairs adequate test cover is 80.9%, and the mean data flow coverage ratio of the second-order mutant adequate test cover is 98.7%. Consequently, second-order mutation “ProbSubsumes” the all du-pairs data flow. The failure detection efficiency of the mutation (98%) is significantly better than the failure detection efficiency of data flow (86%). Consequently, second-order mutation testing is “ProbBetter” than all du-pairs data flow testing. In contrast, the test suite for second-order mutation is larger than the test suite for all du-pairs.
(This article belongs to the Topic Software Engineering and Applications)

24 pages, 608 KiB  
Article
GTMesh: A Highly Efficient C++ Template Library for Numerical Schemes on General Topology Meshes
by Tomáš Jakubec and Pavel Strachota
Appl. Sci. 2023, 13(15), 8748; https://doi.org/10.3390/app13158748 - 28 Jul 2023
Viewed by 1257
Abstract
This article introduces GTMesh, an open-source C++ library providing data structures and algorithms that facilitate the development of numerical schemes on general polytopal meshes. After discussing the features and limitations of the existing open-source alternatives, we focus on the theoretical description of geometry and the topology of conforming polytopal meshes in an arbitrary-dimensional space, using elements from graph theory. The data structure for mesh representation is explained. The main part of the article focuses on the implementation of data structures and algorithms (computation of measures, centers, normals, cell coloring) by using State-of-the-Art template metaprogramming techniques for maximum performance. The geometrical algorithms are designed to be valid regardless of the dimension of the underlying space. As an integral part of the library, a template implementation of class reflection in C++ has been created, which is sufficiently versatile and suitable for the development of numerical and data I/O algorithms working with generic data types. Finally, the use of GTMesh is demonstrated on a simple example of solving the heat equation by the finite volume method. Full article
(This article belongs to the Topic Software Engineering and Applications)

22 pages, 586 KiB  
Article
Comparing Measured Agile Software Development Metrics Using an Agile Model-Based Software Engineering Approach versus Scrum Only
by Moe Huss, Daniel R. Herber and John M. Borky
Software 2023, 2(3), 310-331; https://doi.org/10.3390/software2030015 - 26 Jul 2023
Viewed by 2101
Abstract
This study compares the reliability of estimation, productivity, and defect rate metrics for sprints driven by a specific instance of the agile approach (i.e., scrum) and an agile model-based software engineering (MBSE) approach called the integrated Scrum Model-Based System Architecture Process (sMBSAP) when developing a software system. The quasi-experimental study conducted ten sprints using each approach. The approaches were then evaluated based on their effectiveness in helping the product development team estimate the backlog items that they could build during a time-boxed sprint and deliver more product backlog items (PBI) with fewer defects. The commitment reliability (CR) was calculated to compare the reliability of estimation, with a measured average scrum-driven value of 0.81 versus a statistically different average sMBSAP-driven value of 0.94. Similarly, the average sprint velocity (SV) for the scrum-driven sprints was 26.8 versus 31.8 for the sMBSAP-driven sprints. The average defect density (DD) for the scrum-driven sprints was 0.91, while that of the sMBSAP-driven sprints was 0.63. The average defect leakage (DL) for the scrum-driven sprints was 0.20, while that of the sMBSAP-driven sprints was 0.15. The t-test analysis concluded that the sMBSAP-driven sprints were associated with statistically significant differences in mean CR, SV, DD, and DL compared with the scrum-driven sprints. The overall results demonstrate formal quantitative benefits of an agile MBSE approach compared to agile alone, thereby strengthening the case for considering agile MBSE methods within the software development community. Future work might include comparing agile and agile MBSE methods using alternative research designs and further software development objectives, techniques, and metrics.
(This article belongs to the Topic Software Engineering and Applications)
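
The four metrics compared in this study can be computed from simple sprint records; the sketch below uses their common definitions (delivered over committed items for CR, delivered items for SV, defects per KLOC for DD, leaked over total defects for DL), which are assumptions rather than formulas quoted from the paper.

```python
# Computing the four sprint metrics compared in the study from simple sprint records.
# The formulas are common definitions (assumed), not copied from the paper.
def sprint_metrics(committed_pbi, delivered_pbi, defects_in_sprint, defects_leaked, size_kloc):
    return {
        "commitment_reliability": delivered_pbi / committed_pbi,
        "sprint_velocity": delivered_pbi,
        "defect_density": defects_in_sprint / size_kloc,
        "defect_leakage": defects_leaked / (defects_in_sprint + defects_leaked),
    }

# Toy sprint record (all numbers are made up).
print(sprint_metrics(committed_pbi=16, delivered_pbi=14,
                     defects_in_sprint=9, defects_leaked=2, size_kloc=12.0))
```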

16 pages, 1165 KiB  
Article
Enhancing Traceability Link Recovery with Fine-Grained Query Expansion Analysis
by Tao Peng, Kun She, Yimin Shen, Xiangliang Xu and Yue Yu
Information 2023, 14(5), 270; https://doi.org/10.3390/info14050270 - 02 May 2023
Cited by 1 | Viewed by 1351
Abstract
Requirement traceability links are an essential part of requirement management software and are a basic prerequisite for software artifact changes. The manual establishment of requirement traceability links is time-consuming. When faced with large projects, requirement managers spend a lot of time in establishing relationships from numerous requirements and codes. However, existing techniques for automatic requirement traceability link recovery are limited by the semantic disparity between natural language and programming language, resulting in many methods being less accurate. In this paper, we propose a fine-grained requirement-code traceability link recovery approach based on query expansion, which analyzes the semantic similarity between requirements and codes from a fine-grained perspective, and uses a query expansion technique to establish valid links that deviate from the query, so as to further improve the accuracy of traceability link recovery. Experiments showed that the approach proposed in this paper outperforms state-of-the-art unsupervised traceability link recovery methods, not only specifying the obvious advantages of fine-grained structure analysis for word embedding-based traceability link recovery, but also improving the accuracy of establishing requirement traceability links. The experimental results demonstrate the superiority of our approach. Full article
(This article belongs to the Topic Software Engineering and Applications)
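
A minimal information-retrieval baseline for requirement-to-code traceability with a simple query-expansion step (pseudo-relevance feedback over TF-IDF vectors). The artifacts are made-up strings, and the paper's fine-grained analysis is considerably more involved.

```python
# Minimal IR baseline for requirement-to-code traceability with simple query expansion
# (pseudo-relevance feedback). Artifacts are made-up strings for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

code_docs = [
    "class LoginController validate user credentials password session",
    "class ReportExporter export csv pdf monthly report",
    "def reset_password send email token verify user",
]
requirement = "the system shall allow a user to log in with a password"

vec = TfidfVectorizer()
matrix = vec.fit_transform(code_docs + [requirement])
code_m = matrix[: len(code_docs)]
query = matrix[len(code_docs):]

scores = cosine_similarity(query, code_m)[0]
best = scores.argmax()

# Expand the query with the top terms of the best initial match, then re-rank.
top_terms = [vec.get_feature_names_out()[i] for i in code_m[best].toarray()[0].argsort()[-3:]]
expanded = requirement + " " + " ".join(top_terms)
expanded_scores = cosine_similarity(vec.transform([expanded]), code_m)[0]
print(dict(zip(range(len(code_docs)), expanded_scores.round(3))))
```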

21 pages, 2648 KiB  
Article
A UML Activity Flow Graph-Based Regression Testing Approach
by Pragya Jha, Madhusmita Sahu and Takanori Isobe
Appl. Sci. 2023, 13(9), 5379; https://doi.org/10.3390/app13095379 - 25 Apr 2023
Cited by 1 | Viewed by 2514
Abstract
Regression testing is a crucial process that ensures that changes made to a system do not affect existing functionalities. However, there is currently no adequate technique for selecting test cases that consider changes in Unified Modeling Language (UML) activity flow graphs. This paper proposes a novel approach to regression testing of UML diagrams, focusing on healthcare management systems. We provide a formal definition of sequence and activity diagrams and their relationship and construct corresponding activity flow graphs, which are used to develop a regression testing algorithm. The proposed algorithm categorizes test cases into reusable, retestable, obsolete, and newly generated categories by comparing old and new versions of UML activity flow graphs. The methodology is evaluated using a custom-designed hospital management system website as the test case, and the results demonstrate a significant reduction in time and resources required for regression testing. Our study provides valuable insights into the application of UML diagrams and activity flow graphs in regression testing, making it an important contribution to software testing research. Full article
(This article belongs to the Topic Software Engineering and Applications)
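
The reusable/retestable/obsolete classification can be sketched by comparing the activity-flow-graph edges each test traverses against the old and new graph versions, as below. The edge sets are toy data, and deriving newly required tests (the fourth category) is omitted.

```python
# Sketch of classifying regression tests by comparing the activity-flow-graph edges
# each test traverses against the old and new graph versions. Edge sets are toy data;
# deriving newly generated tests is omitted.
old_edges = {("start", "login"), ("login", "dashboard"), ("dashboard", "report"), ("report", "export")}
new_edges = {("start", "login"), ("login", "otp"), ("otp", "dashboard"),
             ("dashboard", "report"), ("report", "export")}

tests = {
    "T1": {("start", "login"), ("login", "dashboard")},   # uses a removed edge
    "T2": {("dashboard", "report")},                       # touches a changed region
    "T3": {("report", "export")},                          # untouched region
}

changed = (old_edges - new_edges) | (new_edges - old_edges)
affected_nodes = {n for e in changed for n in e}

def classify(path_edges):
    if not path_edges <= new_edges:
        return "obsolete"                                  # traverses edges that no longer exist
    visited = {n for e in path_edges for n in e}
    return "retestable" if visited & affected_nodes else "reusable"

for name, path in tests.items():
    print(name, classify(path))
```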

24 pages, 5313 KiB  
Article
An Agile Model-Based Software Engineering Approach Illustrated through the Development of a Health Technology System
by Moe Huss, Daniel R. Herber and John M. Borky
Software 2023, 2(2), 234-257; https://doi.org/10.3390/software2020011 - 17 Apr 2023
Cited by 5 | Viewed by 3695
Abstract
Model-Based Software Engineering (MBSE) is an architecture-based software development approach. Agile, on the other hand, is a light system development approach that originated in software development. To bring together the benefits of both approaches, this article proposes an integrated Agile MBSE approach that adopts a specific instance of the Agile approach (i.e., Scrum) in combination with a specific instance of an MBSE approach (i.e., Model-Based System Architecture Process—“MBSAP”) to create an Agile MBSE approach called the integrated Scrum Model-Based System Architecture Process (sMBSAP). The proposed approach was validated through a pilot study that developed a health technology system over one year, successfully producing the desired software product. This work focuses on determining whether the proposed sMBSAP approach can deliver the desired Product Increments with the support of an MBSE process. The interaction of the Product Development Team with the MBSE tool, the generation of the system model, and the delivery of the Product Increments were observed. The preliminary results showed that the proposed approach contributed to achieving the desired system development outcomes and, at the same time, generated complete system architecture artifacts that would not have been developed if Agile had been used alone. Therefore, the main contribution of this research lies in introducing a practical and operational method for merging Agile and MBSE. In parallel, the results suggest that sMBSAP is a middle ground that is more aligned with federal and state regulations, as it addresses the technical debt concerns. Future work will analyze the results of a quasi-experiment on this approach focused on measuring system development performance through common metrics. Full article
(This article belongs to the Topic Software Engineering and Applications)

19 pages, 1183 KiB  
Article
A Maturity Model for Trustworthy AI Software Development
by Seunghwan Cho, Ingyu Kim, Jinhan Kim, Honguk Woo and Wanseon Shin
Appl. Sci. 2023, 13(8), 4771; https://doi.org/10.3390/app13084771 - 10 Apr 2023
Cited by 1 | Viewed by 2884
Abstract
Recently, AI software has been rapidly growing and is widely used in various industrial domains, such as finance, medicine, robotics, and autonomous driving. Unlike traditional software, in which developers need to define and implement specific functions and rules according to requirements, AI software learns these requirements by collecting and training relevant data. For this reason, if unintended biases exist in the training data, AI software can create fairness and safety issues. To address this challenge, we propose a maturity model for ensuring trustworthy and reliable AI software, known as AI-MM, by considering common AI processes and fairness-specific processes within a traditional maturity model, SPICE (ISO/IEC 15504). To verify the effectiveness of AI-MM, we applied this model to 13 real-world AI projects and provide a statistical assessment on them. The results show that AI-MM not only effectively measures the maturity levels of AI projects but also provides practical guidelines for enhancing maturity levels. Full article
(This article belongs to the Topic Software Engineering and Applications)

20 pages, 1088 KiB  
Systematic Review
A Review to Find Elicitation Methods for Business Process Automation Software
by Thiago Menezes
Software 2023, 2(2), 177-196; https://doi.org/10.3390/software2020008 - 29 Mar 2023
Cited by 2 | Viewed by 2491
Abstract
Several organizations have invested in business process automation software to improve their processes. Unstandardized processes with high variance and unstructured data encumber the requirements elicitation for business process automation software. This study conducted a systematic literature review to discover methods to understand business processes and elicit requirements for business process automation software. The review revealed many methods used to understand business processes, but only one was employed to elicit requirements for business process automation software. In addition, the review identified some challenges and opportunities. The challenges of developing a business process automation software include dealing with business processes, meeting the needs of the organization, choosing the right approach, and adapting to changes in the process during the development. These challenges open opportunities for proposing specific approaches to elicit requirements in this context. Full article
(This article belongs to the Topic Software Engineering and Applications)

13 pages, 352 KiB  
Article
Guidelines for Future Agile Methodologies and Architecture Reconciliation for Software-Intensive Systems
by Fábio Gomes Rocha, Sanjay Misra and Michel S. Soares
Electronics 2023, 12(7), 1582; https://doi.org/10.3390/electronics12071582 - 28 Mar 2023
Cited by 2 | Viewed by 2679
Abstract
Background: Several methodologies have been proposed since the first days of software development, from what is now named traditional/heavy methodologies, and later their counterpart, the agile methodologies. The whole idea behind agile methodologies is to produce software at a faster pace than what was considered with plan-based methodologies, which had a greater focus on documenting all tasks and activities before starting the proper software development. Problem: One issue here is that strict agilists are often against fully documenting the software architecture in the first phases of a software process development. However, architectural documentation cannot be neglected, given the well-known importance of software architecture to the success of a software project. Proposed Solution: In this article, we describe the past and current situation of agile methodologies and their relation to architecture description, as well as guidelines for future Agile Methodologies and Architecture Reconciliation. Method: We propose a literature review to understand how agile methodologies and architecture reconciliation can help in providing trends towards the success of a software project and supporting software development at a faster pace. This work was grounded in General Systems Theory as we describe the past, present, and future trends for rapid systems development through the integration of organizations, stakeholders, processes, and systems for software development. Summary of results: As extensively discussed in the literature, we found that there is a false dichotomy between agility and software architecture, and then we describe guidelines for future trends in agile methodologies and reconciliation of architecture to document agile architectures with both architectural decisions and agile processes for any system, as well as future trends to support organizations, stakeholders, processes, and systems. Full article

40 pages, 3234 KiB  
Systematic Review
Transpilers: A Systematic Mapping Review of Their Usage in Research and Industry
by Andrés Bastidas Fuertes, María Pérez and Jaime Meza Hormaza
Appl. Sci. 2023, 13(6), 3667; https://doi.org/10.3390/app13063667 - 13 Mar 2023
Cited by 2 | Viewed by 2483
Abstract
Transpilers perform a special type of compilation that translates source code into source code in a target language. This technique has been used in a range of implementations reported in scientific studies, and reviewing the research areas in which transpilers are used helps clarify the direction of this branch of knowledge. The objective was to carry out an exhaustive, extended mapping of the usage and implementation of transpilers in research studies over the last 10 years. A systematic mapping review was carried out to answer the five research questions proposed, using the PSALSAR method as a guide for the steps of the review. In total, from 1181 articles collected, 683 primary studies were selected, reviewed, and analyzed; proposals from industry were also analyzed. A new method for automatic data tabulation, based on a relational database and SQL, was proposed to support the mapping objective. The most common uses of transpilers were found to relate to performance optimization, parallel programming, embedded systems, compilers, testing, AI, graphics, and software development. In conclusion, it was possible to determine the extent of the research sub-areas and their impact on the usage of transpilers. Future research could examine the use of transpilers in transactional software, migration strategies for legacy systems, AI, mathematics, multiplatform games and apps, automatic source code generation, and networking.
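The automatic data tabulation step mentioned in the abstract can be pictured with a small sketch. The snippet below is not the authors' tool; it only illustrates, under assumed table and column names, how extracted mapping records might be loaded into a relational database and tabulated with SQL, as the review describes.

```python
import sqlite3

# Hypothetical mapping records: (study_id, year, research_area, usage);
# the real review extracted such fields from 683 primary studies.
records = [
    (1, 2019, "parallel programming", "performance optimization"),
    (2, 2021, "embedded systems", "code migration"),
    (3, 2022, "web development", "language translation"),
]

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE studies (
    study_id INTEGER PRIMARY KEY, year INTEGER,
    research_area TEXT, usage TEXT)""")
conn.executemany("INSERT INTO studies VALUES (?, ?, ?, ?)", records)

# Tabulate studies per research area, the kind of mapping table the review reports.
for area, count in conn.execute(
        "SELECT research_area, COUNT(*) FROM studies "
        "GROUP BY research_area ORDER BY COUNT(*) DESC"):
    print(f"{area}: {count}")
```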

30 pages, 889 KiB  
Article
Approach to Formalizing Software Projects for Solving Design Automation and Project Management Tasks
by Aleksey Filippov, Anton Romanov, Anton Skalkin, Julia Stroeva and Nadezhda Yarushkina
Software 2023, 2(1), 133-162; https://doi.org/10.3390/software2010006 - 08 Mar 2023
Cited by 1 | Viewed by 1499
Abstract
GitHub and GitLab host many project repositories, each containing numerous design artifacts and specific project management features. With the approach proposed in this paper, developers can automate design and project management processes. We describe a knowledge base model and a diagnostic analytics method for solving design automation and project management tasks, and we present example use cases for applying the proposed approach.

22 pages, 2456 KiB  
Article
Development of Evaluation Criteria for Robotic Process Automation (RPA) Solution Selection
by Seung-Hee Kim
Electronics 2023, 12(4), 986; https://doi.org/10.3390/electronics12040986 - 16 Feb 2023
Cited by 3 | Viewed by 4588
Abstract
When introducing a robotic process automation (RPA) solution for business automation, selecting an RPA solution that suits the automation target and goals is extremely difficult for customers. One reason for this difficulty is that standardised evaluation items and indicators to support the evaluation of RPA have not been defined. The broad adoption of RPA is still in its infancy, and only a few studies have addressed this subject. In this study, an evaluation breakdown structure for RPA selection was developed by deriving evaluation items from prior studies related to RPA selection, and a feasibility study was conducted. A questionnaire was administered three times, and the coefficients of variation, content validity, consensus, and convergence of the factors and criteria were measured from the survey results. These measurements are reflected in a final suitability value calculated to verify the stability of the evaluation system and its criteria and indicators. This study is the first to develop an evaluation standard for RPA solution selection, and the proposed evaluation breakdown structure provides useful evaluation criteria and a checklist for successful RPA adoption and introduction.
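The abstract does not spell out the formulas behind the reported indicators, so the sketch below uses the standard definitions of the coefficient of variation and of Lawshe's content validity ratio, which are commonly applied in Delphi-style questionnaire analyses of this kind; the ratings and panel size are hypothetical, and the paper's own computations may differ.

```python
import statistics

def coefficient_of_variation(ratings):
    """Standard deviation relative to the mean; lower values suggest consensus."""
    return statistics.stdev(ratings) / statistics.mean(ratings)

def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR = (n_e - N/2) / (N/2), ranging from -1 to 1."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical responses for one evaluation item on a 5-point scale.
ratings = [4, 5, 4, 3, 5, 4, 4]
print(round(coefficient_of_variation(ratings), 3))
# 6 of 7 panelists rated the item "essential".
print(round(content_validity_ratio(6, 7), 3))
```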

20 pages, 2314 KiB  
Article
Timed-SAS: Modeling and Analyzing the Time Behaviors of Self-Adaptive Software under Uncertainty
by Deshuai Han, Yanping Cai, WenJie Chen, Zhigao Cui and Aihua Li
Appl. Sci. 2023, 13(3), 2018; https://doi.org/10.3390/app13032018 - 03 Feb 2023
Cited by 1 | Viewed by 1396
Abstract
Self-adaptive software (SAS) is gaining popularity because it can handle dynamic changes in its operational context or in itself. Time behaviors are of vital importance for SAS systems, as the self-adaptation loops introduce additional overhead time. However, early modeling and quantitative analysis of time behaviors for SAS systems are challenging, especially in uncertain environments. To tackle this problem, this paper proposes an approach called Timed-SAS to define, describe, analyze, and optimize the time behaviors within SAS systems. Concretely, Timed-SAS: (1) provides a systematic definition of the deterministic time constraints, the uncertain delay time constraints, and the time-based evaluation metrics for SAS systems; (2) creates a set of formal modeling templates for the self-adaptation processes, the time behaviors, and the uncertain environment to consolidate design knowledge for reuse; and (3) provides a set of statistical model checking-based quantitative analysis templates to analyze and verify the self-adaptation properties and the time properties under uncertainty. To validate its effectiveness, we present an example application and a subject-based experiment. The results demonstrate that the Timed-SAS approach can effectively reduce the difficulty of modeling and verifying time behaviors and can help to optimize the self-adaptation logic.

28 pages, 3594 KiB  
Article
Roadmap to Reasoning in Microservice Systems: A Rapid Review
by Amr S. Abdelfattah and Tomas Cerny
Appl. Sci. 2023, 13(3), 1838; https://doi.org/10.3390/app13031838 - 31 Jan 2023
Cited by 9 | Viewed by 2276
Abstract
Understanding software systems written by others is often challenging. When we want to assess systems to reason about them, i.e., to understand dependencies, analyze evolution trade-offs, or verify conformance to the original blueprint, we must invest considerable effort. This becomes harder still for decentralized systems. Microservice-based systems are mainstream these days; however, to observe, understand, and manage these systems and their properties, we lack fundamental tools that would derive simplified abstract perspectives of the system. The characteristics of the microservices architecture yield many advantages for system operation, but they bring challenges to development and deployment lifecycles. Microservices call for a system-centric perspective to better reason about system evolution and its quality attributes. This process review paper considers current system analysis approaches and their possible alignment with automated system assessment or with human-centered approaches. We outline the steps necessary to accomplish holistic reasoning in decentralized microservice systems. As a contribution, we provide a roadmap for analysis and reasoning in microservice-based systems and suggest that various process phases can be decoupled through the introduction of a system intermediate representation, providing various system-centered perspectives from which to analyze different system aspects. Furthermore, we cover technical reasoning strategies and metrics as well as human-centered reasoning addressed through alternative visualization approaches. Finally, system evolution is discussed from the perspective of such a reasoning process to illustrate impact analysis over system changes.

20 pages, 2229 KiB  
Article
Optimal Feature Selection through Search-Based Optimizer in Cross Project
by Rizwan bin Faiz, Saman Shaheen, Mohamed Sharaf and Hafiz Tayyab Rauf
Electronics 2023, 12(3), 514; https://doi.org/10.3390/electronics12030514 - 19 Jan 2023
Cited by 1 | Viewed by 1581
Abstract
Cross-project defect prediction (CPDP) is a key method for estimating the defect-prone modules of software products. CPDP is attractive because it provides predicted-defect information for projects in which data are insufficient. Recent studies describe how to pick training data from large datasets using a feature selection (FS) process, which contributes most to the end results; a classifier then assigns the selected data to classes in order to predict defective and non-defective modules. The aim of our research is to select the optimal set of features from multi-class data through a search-based optimizer for CPDP. We used an explanatory research type and a quantitative approach for our experimentation. The F1 measure is our dependent variable, while the KNN filter, ANN filter, random forest ensemble (RFE) model, genetic algorithm (GA), and classifiers are the manipulated independent variables. Our experiment follows a one-factor, one-treatment (1F1T) design for RQ1 and a one-factor, two-treatment (1F2T) design for RQ2, RQ3, and RQ4. We first carried out exploratory data analysis (EDA) to understand the nature of our dataset and then pre-processed the data to resolve the issues identified. Because the data are multi-class, we first rank features and select multiple feature sets using the information gain algorithm to obtain maximum variation in features. To remove noise, we use the ANN filter, obtaining significantly better results (40% to 60%) than the NN filter of the baseline study (all, ckloc, IG). We then apply a search-based optimizer, the random forest ensemble (RFE), to obtain the best feature set for the prediction model, with significantly better results (30% to 50%) than genetic instance selection (GIS). Finally, we use a classifier to predict defects for CPDP and, compared with the baseline classifier on the F1 measure, obtain an improvement of almost 35%. We validate the experiment using the Wilcoxon test and Cohen's d.
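As a rough illustration of the pipeline the abstract describes (rank features by information gain, refine the set with a search-based selector, then classify and score with F1), here is a minimal scikit-learn sketch on synthetic data. Note that the paper's "RFE" denotes a random forest ensemble used as the search-based optimizer; the sketch substitutes scikit-learn's recursive feature elimination wrapped around a random forest as an analogous search-based selector, so it is an approximation of the idea, not the authors' method.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, mutual_info_classif
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for cross-project defect data (the paper uses real CPDP datasets).
X, y = make_classification(n_samples=600, n_features=30, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: rank features by information gain (mutual information) as a first filter.
info_gain = mutual_info_classif(X_train, y_train, random_state=0)
top = info_gain.argsort()[::-1][:15]

# Step 2: search-based refinement over a random forest, keeping 8 features;
# the fitted forest on the selected features then serves as the defect classifier.
selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
               n_features_to_select=8)
selector.fit(X_train[:, top], y_train)

# Step 3: evaluate the prediction on the held-out data with the F1 measure.
pred = selector.predict(X_test[:, top])
print("F1:", round(f1_score(y_test, pred), 3))
```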

24 pages, 2851 KiB  
Article
Models and Methods of Designing Data-Centric Microservice Architectures of Digital Enterprises
by Sergey Deryabin, Igor Temkin, Ulvi Rzazade and Egor Kondratev
Informatics 2023, 10(1), 4; https://doi.org/10.3390/informatics10010004 - 05 Jan 2023
Cited by 2 | Viewed by 2670
Abstract
The article is devoted to methods and models for designing systems for the digital transformation of industrial enterprises within the framework of the Industry 4.0 concept. The purpose of this work is to formalize a new notation for graphical modeling of the architecture of complex, large-scale systems with data-centric microservice architectures and to present a variant of a reference model of such an architecture for creating an autonomously functioning industrial enterprise. The paper provides a list of the functional components of a data-centric microservice architecture, and a justification for their use, based on an analysis of modern approaches to building such systems and on the authors' own results obtained during the implementation of a number of projects. The problems of using traditional graphical modeling notations to represent a data-centric microservice architecture are considered, and examples of designing a model of such an architecture for a mining enterprise are given.

34 pages, 9725 KiB  
Article
Enhancing UML Connectors with Behavioral ALF Specifications for Exogenous Coordination of Software Components
by Alper Tolga Kocatas and Ali Hikmet Dogru
Appl. Sci. 2023, 13(1), 643; https://doi.org/10.3390/app13010643 - 03 Jan 2023
Cited by 2 | Viewed by 1976
Abstract
Connectors are powerful architectural elements that allow the specification of interactions between software components. Since connectors do not include behavior in UML, the components themselves must include the behavior for coordinating other components, complicating component design and decreasing reusability. In this study, we propose enriching UML connectors with behavioral specifications. The goal is to provide separation of concerns for the components so that they are freed from coordination duties; the reusability of the components increases as a result of such exogenous coordination. Additionally, using the associated behaviors, we aim to resolve the ambiguities that arise when n-ary connectors are used. We use a series of QVTo transformations to transform UML models that include connector behaviors as ALF specifications into UML models that include fUML activities as connector behavior specifications. We present a set of example connectors specified using the proposed method and execute the QVTo transformations on them to produce models that represent platform-independent definitions of the coordination behaviors. We also present and discuss cases from real-life, large-scale avionics software projects in which the proposed approach results in simpler and more flexible designs and increases component reusability.

19 pages, 438 KiB  
Article
VoiceJava: A Syntax-Directed Voice Programming Language for Java
by Tao Zan and Zhenjiang Hu
Electronics 2023, 12(1), 250; https://doi.org/10.3390/electronics12010250 - 03 Jan 2023
Cited by 1 | Viewed by 2410
Abstract
About 5–10% of software engineers suffer from repetitive strain injury, and it would be better to provide an alternative way to write code instead of using a mouse and keyboard while sitting in a chair all day. Coding by voice is an attractive approach, and quite a bit of work has been done in that direction. At the same time, dictating plain Java text through existing voice recognition engines suffers from low accuracy, and voice-controlled panels make the coding process even more cumbersome. We argue that current programming languages are suitable for programming by hand, not by mouth. We address this problem by designing a new programming language, VoiceJava, that is suitable for dictation. A Java program is constructed in a syntax-directed way through a sequence of VoiceJava commands. As a result, users do not need to dictate spaces, parentheses, or commas, reducing the vocal load.

15 pages, 1459 KiB  
Article
Patch It If You Can: Increasing the Efficiency of Patch Generation Using Context
by Jinseok Heo, Hohyeon Jeong and Eunseok Lee
Electronics 2023, 12(1), 179; https://doi.org/10.3390/electronics12010179 - 30 Dec 2022
Viewed by 1193
Abstract
Although program repair is an important aspect of maintaining a software system, it can be extremely challenging. Automated Program Repair (APR) techniques have been proposed to address this problem, and among them, template-based APR shows good performance. One of the key properties of template-based APR for practical use is its efficiency. However, because existing techniques mainly focus on improving performance, they do not sufficiently consider efficiency. In this study, we propose EffiGenC, which efficiently explores the patch-ingredient search space to improve the overall efficiency of template-based APR. EffiGenC defines a context using the concept of extended reaching definitions from compiler theory and constructs the search space by collecting the ingredients required for patching within that context. We evaluated EffiGenC on the Defects4J benchmark. EffiGenC decreases the number of candidate patches by 27% to 86% compared to existing techniques and correctly/plausibly fixes 47/72 bugs. In future work, we will address the search-space problem that exists for multiline bugs using context.
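EffiGenC's "extended" reaching definitions are not defined in the abstract; the sketch below only illustrates the classic compiler concept it builds on, computing, for straight-line code, which definitions reach a (hypothetical) faulty statement so that patch ingredients could be drawn from that context.

```python
def reaching_definitions(stmts):
    """Reaching definitions for straight-line code.

    stmts: list of (defined_var, used_vars) pairs, one per statement.
    Returns, for each statement, the map {var: index of the definition
    that reaches it}.
    """
    reaching = {}
    reach_in = []
    for defined, _uses in stmts:
        reach_in.append(dict(reaching))
        if defined is not None:
            reaching[defined] = len(reach_in) - 1  # a new definition kills the old one
    return reach_in

# Hypothetical faulty method:  0: x = read();  1: y = x + 1;  2: z = y * x  (buggy)
stmts = [("x", []), ("y", ["x"]), ("z", ["y", "x"])]
faulty = 2
context = reaching_definitions(stmts)[faulty]
# Patch ingredients would be collected from the definitions reaching the faulty line.
print(context)  # {'x': 0, 'y': 1}
```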

8 pages, 1943 KiB  
Communication
A New Fast Logarithm Algorithm Using Advanced Exponent Bit Extraction for Software-Based Ultrasound Imaging Systems
by Seongjun Park and Yangmo Yoo
Electronics 2023, 12(1), 170; https://doi.org/10.3390/electronics12010170 - 30 Dec 2022
Viewed by 1569
Abstract
Ultrasound B-mode imaging provides anatomical images of the body with a high resolution and frame rate. Recently, to improve flexibility, most ultrasound signal and image processing modules in modern B-mode imaging systems have been implemented in software. In a software-based B-mode imaging system, an efficient technique for computing the logarithm is required to support the high computational burden. In this paper, we present a new method to efficiently implement the logarithm operation based on exponent bit extraction. In the proposed method, the exponent bit field is first extracted, and some algebraic operations are then applied to improve its precision. To evaluate the performance of the proposed method, the peak signal-to-noise ratio (PSNR) and the execution time were measured. The proposed logarithm operation reduced the execution time by a factor of eight compared to direct computation while providing a PSNR of over 50 dB. These results indicate that the proposed logarithm computation method can lower the computational burden in software-based ultrasound B-mode imaging systems while maintaining or improving image quality.
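The paper's exact algebraic refinement is not given in the abstract, but the core idea of exponent bit extraction can be sketched: read the IEEE-754 single-precision bit pattern, take the exponent field as a coarse log2, and use the mantissa as a first-order correction. The refinement step below is deliberately simple and is only an assumption about the general shape of such methods, not the authors' algorithm.

```python
import math
import struct

def fast_log2(x: float) -> float:
    """Approximate log2(x) from the IEEE-754 single-precision bit pattern of x > 0."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    exponent = ((bits >> 23) & 0xFF) - 127          # extracted exponent field
    mantissa = (bits & 0x7FFFFF) / float(1 << 23)   # fractional part in [0, 1)
    # First-order refinement: log2(1 + m) is roughly m; the paper applies further
    # algebraic corrections to reach a PSNR above 50 dB.
    return exponent + mantissa

for x in (0.5, 3.0, 1000.0):
    print(x, round(fast_log2(x), 4), round(math.log2(x), 4))
```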

34 pages, 954 KiB  
Article
BIPMIN: A Gamified Framework for Process Modeling Education
by Kylie Bedwell, Giacomo Garaccione, Riccardo Coppola, Luca Ardito and Maurizio Morisio
Information 2023, 14(1), 3; https://doi.org/10.3390/info14010003 - 21 Dec 2022
Cited by 1 | Viewed by 2246
Abstract
Business process modeling is a skill that is increasingly sought after in computer engineers, with Business Process Modeling Notation (BPMN) being one example of the tools used in modeling activities. Students of the Master of Computer Engineering course at Politecnico di Torino learn about BPMN in dedicated courses but often underperform on BPMN-related exercises because of difficulties in understanding how to model processes. In recent years, there has been a surge of studies that employ gamification (using game elements in non-recreational contexts to obtain benefits) as a tool in computer engineering education to increase students' engagement with the learning process. This study uses the principles of gamification to design a supplementary learning tool for teaching information systems technology, and in particular to improve students' understanding and use of BPMN diagrams. It also analyzes the usability of different game elements and their effect on student motivation and performance. As part of the study, a prototype web application was developed that implemented three different designs, each incorporating game elements relating to progress, competition, or rewards. An evaluation was then conducted on the prototype to assess the participants' performance on BPMN modeling tasks with the gamified tool, the usability of the proposed mechanics, and the enjoyment of the individual game mechanics that were implemented. With the gamified tool, users in the experimental sample were able to complete BPMN modeling tasks with performance compatible with estimates made through expert judgement (i.e., gamification had no negative effect on performance) and were motivated to check the correctness of their models many times during task execution. The system was evaluated as highly usable (a System Usability Scale score of 85.8); the most enjoyed game elements were rewards, levels, progress bars, and aesthetics.
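The reported 85.8 follows the standard System Usability Scale scoring rule (odd items contribute their score minus one, even items contribute five minus their score, and the sum is scaled by 2.5). A minimal sketch with hypothetical responses:

```python
def sus_score(responses):
    """System Usability Scale score for one respondent (ten 1-5 Likert answers)."""
    odd = sum(responses[i] - 1 for i in range(0, 10, 2))   # items 1, 3, 5, 7, 9
    even = sum(5 - responses[i] for i in range(1, 10, 2))  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5

# Hypothetical answers from two participants.
participants = [
    [5, 2, 4, 1, 5, 2, 5, 1, 4, 2],
    [4, 1, 5, 2, 4, 1, 4, 2, 5, 1],
]
scores = [sus_score(r) for r in participants]
print(scores, sum(scores) / len(scores))
```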

12 pages, 1179 KiB  
Article
Bi-LSTM-Based Neural Source Code Summarization
by Sarah Aljumah and Lamia Berriche
Appl. Sci. 2022, 12(24), 12587; https://doi.org/10.3390/app122412587 - 08 Dec 2022
Cited by 2 | Viewed by 1562
Abstract
Code summarization is a task often performed by software developers when fixing or reusing code, and software documentation is essential for software maintenance. The highest cost in software development goes to maintenance because of the difficulty of code modification. To help reduce the cost and time spent on software development and maintenance, we introduce an automated summarization and commenting technique using state-of-the-art summarization methods. We use deep neural networks, specifically bidirectional long short-term memory (Bi-LSTM), combined with an attention model to enhance performance. We propose two scenarios: one that uses the code text together with the structure of the code represented as an abstract syntax tree (AST), and another that uses only the code text. For the first scenario, we propose two encoder-based models that encode the code text and the AST independently. Previous works have used different deep neural network techniques to generate comments; the methodologies proposed in this study scored higher than previous works based on a gated recurrent unit encoder. We conducted our experiment on a dataset of 2.1 million pairs of Java methods and comments. Additionally, we show that the code structure is beneficial for methods whose signatures feature unclear words.
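The full attention-based encoder-decoder (and the AST encoder of the first scenario) is not reproduced here; the sketch below only shows the kind of Bi-LSTM encoder over code-token ids that such a model is built around. The framework (PyTorch), dimensions, and vocabulary size are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CodeEncoder(nn.Module):
    """Bi-LSTM encoder over code-token ids; a summarization model would pair this
    with an attention-based decoder (and, in one scenario, a second encoder for the AST)."""

    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> outputs: (batch, seq_len, 2 * hidden_dim)
        outputs, _ = self.bilstm(self.embed(token_ids))
        return outputs  # per-token states the decoder attends over

# Hypothetical batch of two tokenized Java methods, padded to length 6.
tokens = torch.randint(0, 10000, (2, 6))
print(CodeEncoder()(tokens).shape)  # torch.Size([2, 6, 512])
```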

19 pages, 1378 KiB  
Article
MSDeveloper: A Variability-Guided Methodology for Microservice-Based Development
by Betul Kuruoglu Dolu, Anil Cetinkaya, M. Cagri Kaya, Selma Nazlioglu and Ali H. Dogru
Appl. Sci. 2022, 12(22), 11439; https://doi.org/10.3390/app122211439 - 11 Nov 2022
Viewed by 1875
Abstract
This article presents a microservice-based development approach, MSDeveloper (Microservices Developer), that employs variability management for product configuration through a low-code development environment. The purpose of this approach is to offer a general-purpose environment for easier development of families of products for different domains: a domain-oriented development environment is suggested, in which domain developers and product developers can use the environment as a software ecosystem. Genericity is thus offered through support for different domains. A domain is populated with feature models, process models, and microservices in a layered architecture. Feature models drive the product configuration, which affects the process model and the microservice layer. An experimental study was conducted to validate the applicability of the approach and the usability of the development environment: students from different courses were assigned system modeling projects in which they used helper tools supporting the provided methodology, and professional software developers were consulted about the recommended domain-oriented development environment. Feedback from the student projects and the professionals' remarks are analyzed and discussed.

23 pages, 1172 KiB  
Article
Security Requirements Prioritization Techniques: A Survey and Classification Framework
by Shada Khanneh and Vaibhav Anu
Software 2022, 1(4), 450-472; https://doi.org/10.3390/software1040019 - 28 Oct 2022
Cited by 2 | Viewed by 4337
Abstract
Security requirements engineering (SRE) is an activity conducted during the early stages of the SDLC that involves eliciting, analyzing, and documenting security requirements. Thorough SRE can help software engineers incorporate countermeasures against malicious attacks into the software's source code itself. Even though all security requirements are considered relevant, implementing every security mechanism that protects against every possible threat is not feasible. Security requirements must compete not only with time and budget but also with the constraints they inflict on a software's availability, features, and functionality. Thus, security requirements prioritization becomes an integral task in the discipline of risk analysis and trade-off analysis. A sound prioritization technique gives software engineers guidance for making educated decisions about which security requirements are of topmost importance. Even though previous research has proposed various security requirement prioritization techniques, no existing effort has provided a detailed survey and comparative analysis of these techniques. This paper uses a literature survey approach to first define security requirements engineering and then identify the state-of-the-art techniques that can be adopted to impose a well-established prioritization criterion for security requirements. Our survey identified, summarized, and compared seven (7) security requirements prioritization approaches proposed in the literature.

18 pages, 937 KiB  
Article
A Novel Framework to Detect Irrelevant Software Requirements Based on MultiPhiLDA as the Topic Model
by Daniel Siahaan and Brian Rizqi Paradisiaca Darnoto
Informatics 2022, 9(4), 87; https://doi.org/10.3390/informatics9040087 - 27 Oct 2022
Viewed by 1603
Abstract
Noise in requirements is a known defect in software requirements specifications (SRS), and detecting defects at an early stage is crucial in software development. Noise can take the form of irrelevant requirements included within an SRS. A previous study attempted to detect noise in SRS by treating noise as an outlier; however, the resulting method demonstrated only moderate reliability because unique actor words were overshadowed by unique action words in the topic–word distribution. In this study, we propose a framework to identify irrelevant requirements based on the MultiPhiLDA method. The proposed framework separates the topic–word distribution of actor words and action words into two topic–word distributions with two multinomial probability functions, and weights are used to maintain a proportional contribution of actor and action words. We also explore the use of two outlier detection methods, percentile-based outlier detection (PBOD) and angle-based outlier detection (ABOD), to distinguish irrelevant requirements from relevant ones. The experimental results show that the proposed framework performs better than previous methods. Furthermore, the combination of ABOD as the outlier detection method and topic coherence as the approach for estimating the optimal number of topics and iterations outperformed the other combinations, obtaining sensitivity, specificity, F1-score, and G-mean values of 0.59, 0.65, 0.62, and 0.62, respectively.
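The MultiPhiLDA model itself is beyond a short example, but the percentile-based outlier detection (PBOD) step can be sketched simply: score each requirement (for instance by how poorly the topic model explains it) and flag those above a chosen percentile. The scores and threshold below are hypothetical, not the paper's settings.

```python
import numpy as np

def percentile_outliers(scores, percentile=75):
    """Flag requirements whose anomaly score exceeds the given percentile."""
    threshold = np.percentile(scores, percentile)
    return scores > threshold

# Hypothetical anomaly scores (e.g., negative log-likelihood under the topic model)
# for eight requirements; higher means less well explained by the SRS topics.
scores = np.array([1.2, 0.9, 1.1, 4.8, 1.0, 1.3, 0.8, 5.2])
print(percentile_outliers(scores, percentile=75))  # the two high-scoring requirements are flagged
```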

28 pages, 3244 KiB  
Article
Game Development Topics: A Tag-Based Investigation on Game Development Stack Exchange
by Farag Almansoury, Segla Kpodjedo and Ghizlane El Boussaidi
Appl. Sci. 2022, 12(21), 10750; https://doi.org/10.3390/app122110750 - 24 Oct 2022
Viewed by 1960
Abstract
Video game development, despite being a multi-billion-dollar industry, has not attracted sustained attention from software engineering researchers and remains understudied from a software engineering perspective. We aim to uncover, from game developers' perspectives, which video game development topics are asked about the most and which are best supported, in order to provide insights into the technological and conceptual challenges game developers and managers may face on their projects. To do so, we turned to the Game Development Stack Exchange (GDSE), a prominent question-and-answer forum dedicated to game development, where users ask questions and tag them with keywords recognized as important categories by the community. Our study relies on those tags, which we classify as either technology or concept topics. We then analysed these topics for their levels of community attention (number of questions, views, upvotes, etc.) and community support (whether their questions are answered and how long it takes). Regarding community attention, we found that the topics with the most questions include concepts such as 2D and collision detection and technologies such as Unity and C#, whereas questions touching on concepts such as video and augmented reality and technologies such as iOS, Unreal-4, and Three.js generally lack satisfactory answers. Moreover, by pairing topics, we uncovered early clues that, from a community support perspective, (i) some technology pairings appear more challenging (e.g., questions mixing HLSL and MonoGame receive relatively less support); (ii) some concepts may be more difficult to handle conjointly (e.g., rotation and movement); and (iii) some technologies may prove more challenging for addressing a given concept (e.g., Java for 3D). Our findings provide insights for video game developers on the topics and challenges they might encounter and highlight tool selection and integration for video game development as a promising research direction.

21 pages, 1894 KiB  
Article
A Study on Selective Implementation Approaches for Soft Error Detection Using S-SWIFT-R
by Mohaddaseh Nikseresht, Jens Vankeirsbilck and Jeroen Boydens
Electronics 2022, 11(20), 3380; https://doi.org/10.3390/electronics11203380 - 19 Oct 2022
Viewed by 1220
Abstract
This article analyzes diverse criteria for effectively implementing selective hardening against soft errors through software-based strategies. The goal is to obtain maximum fault coverage with the least amount of overhead for each specific application. To achieve this objective, the analysis is conducted in two phases: pre-selection and selective hardening of registers. In the pre-selection phase, we examine the impact of the two most commonly used selection metrics: (1) selecting registers based on their memory interaction versus (2) selecting registers based on their fault injection vulnerability. In the selective hardening phase, we examine the impact of gradually increasing the number of registers to protect. Experiments were conducted on eight academic case studies and one industrial case study, with faults injected using our in-house fault injector. The results indicate that selecting registers based on the faults injected into the system performs about 10% better overall than selecting registers based on memory interaction in six of the eight academic case studies as well as in the industrial case study. Additionally, there is a significant improvement in reliability when increasing the number of registers to protect, at the expense of rising overhead. These comparisons and analyses are presented in this work.

26 pages, 5146 KiB  
Article
A Framework for Designing Usability: Usability Redesign of a Mobile Government Application
by Pinnaree Kureerung, Lachana Ramingwong, Sakgasit Ramingwong, Kenneth Cosh and Narissara Eiamkanitchat
Information 2022, 13(10), 470; https://doi.org/10.3390/info13100470 - 30 Sep 2022
Cited by 2 | Viewed by 2362
Abstract
Existing usability models have been used primarily for evaluation, not for usability engineering. These models tend to be too general for specific mobile applications and lack appropriate guidelines for applying them to m-government applications. Earthquake information is an example of critical information delivered to citizens via m-government applications, and usability design is a very important factor in the success of such applications. This research addresses the challenges of finding the usability factors important to m-government applications and choosing appropriate factors for specific m-government applications. A questionnaire was administered to 49 citizens, yielding six usability factors: learnability, simplicity, satisfaction, security, privacy, and memorability. Descriptions of the usability factors were later added to provide a clearer definition of each. This paper proposes a usability design framework for m-government applications and illustrates its use through the user interface redesign of the EarthquakeTMD application, with the main aim of demonstrating the framework's applicability. The quality of the application's original UI design was assessed with a questionnaire administered to 57 Thai citizens who lived in areas affected by the disasters. Four designers participated in the UI redesign and produced four different UI designs, which were evaluated via two usability tests on two sample groups of representative users. The first usability test was conducted with 24 participants using twenty-four test cases; the second was conducted with 351 representative users. After the tests, both sample groups were given a questionnaire based on the System Usability Scale (SUS). The two UI designs produced by the experienced and inexperienced designers who used the framework received the highest scores: 89.58 and 87.60 on the first usability test, and 89.10 and 90.88 on the second. The results reveal that the citizens preferred the new user interfaces designed using the framework, and that the scores of the UI designed by inexperienced designers who used the framework were as high as those of the UI designed by experienced designers, whereas the UI designs from the designers who did not use the framework received the lowest scores: 63.23 and 54.27 on the first usability test and 59.34 and 46.53 on the second.

33 pages, 1222 KiB  
Article
BPM2DDD: A Systematic Process for Identifying Domains from Business Processes Models
by Carlos Eduardo da Silva, Eduardo Luiz Gomes and Soumya Sankar Basu
Software 2022, 1(4), 417-449; https://doi.org/10.3390/software1040018 - 29 Sep 2022
Viewed by 3355
Abstract
Domain-driven design is one of the most widely used approaches for identifying microservice architectures, which should be built around business capabilities, and there is a substantial body of documentation on principles and patterns for its application. However, despite its increasing use, there is still a lack of systematic approaches for creating the context maps that will be used to design the microservices. This article presents BPM2DDD, a systematic approach for identifying bounded contexts and their relationships based on the analysis of business process models, which provide a business view of an organisation. We present an example of its application to a real business process, which was also used for a comparative application with external analysts. The technique has been applied to a real project in the department of transport of a Brazilian state capital and has been incorporated into the software development process they employ to develop their new system.

16 pages, 1996 KiB  
Article
Semantic Annotation of Legal Contracts with ContrattoA
by Michele Soavi, Nicola Zeni, John Mylopoulos and Luisa Mich
Informatics 2022, 9(4), 72; https://doi.org/10.3390/informatics9040072 - 20 Sep 2022
Cited by 2 | Viewed by 2526
Abstract
The aim of this research is to semi-automate the process of generating formal specifications from legal contracts in natural-language text form. Towards this end, the paper presents a tool, named ContrattoA, that semi-automatically conducts semantic annotation of legal contract text using an ontology for legal contracts. ContrattoA was developed through two iterations in which lexical patterns were defined for legal concepts and their effectiveness was evaluated experimentally. The first iteration was based on a handful of sample contracts and resulted in lexical patterns for recognizing concepts in the ontology; these were evaluated in an empirical study in which one group of subjects annotated legal text manually while a second group edited the annotations generated by ContrattoA. The second iteration focused on the lexical patterns for the core contract concepts of obligation and power, for which the results of the first iteration were mixed. On the basis of an extended set of sample contracts, new lexical patterns were derived and shown to substantially improve the performance of ContrattoA, nearing the quality of expert annotations. The experiments suggest that good-quality annotations can be generated for a broad range of contracts with minor refinements to the lexical patterns.
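The ontology and the actual lexical patterns are not given in the abstract; the toy sketch below only illustrates the general idea of pattern-based semantic annotation, mapping modal phrases to the obligation and power concepts with made-up regular expressions that stand in for the tool's patterns.

```python
import re

# Toy lexical patterns (not the paper's); modal phrases hint at contract concepts.
PATTERNS = {
    "Obligation": re.compile(r"\b(shall|must|agrees to)\b", re.IGNORECASE),
    "Power": re.compile(r"\b(may|is entitled to|reserves the right)\b", re.IGNORECASE),
}

def annotate(sentence):
    """Return the contract concepts whose patterns match the sentence."""
    return [concept for concept, pattern in PATTERNS.items() if pattern.search(sentence)]

contract = [
    "The Supplier shall deliver the goods within 30 days.",
    "The Buyer may terminate this agreement upon written notice.",
]
for sentence in contract:
    print(annotate(sentence), "-", sentence)
```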

20 pages, 1247 KiB  
Article
Evidence-Based Software Engineering: A Checklist-Based Approach to Assess the Abstracts of Reviews Self-Identifying as Systematic Reviews
by Alvine Boaye Belle and Yixi Zhao
Appl. Sci. 2022, 12(18), 9017; https://doi.org/10.3390/app12189017 - 08 Sep 2022
Cited by 5 | Viewed by 2220
Abstract
A systematic review synthesizes the state of knowledge related to a clearly formulated research question and clarifies the correlations between exposures and outcomes. A systematic review usually leverages explicit, reproducible, and systematic methods that reduce the potential bias that may arise when conducting a review; when properly conducted, it yields reliable findings from which conclusions and decisions can be made. Systematic reviews are increasingly popular and have several stakeholders, to whom they allow recommendations on how to act based on the review findings, and they help support future research prioritization. A systematic review usually has several components, and the abstract is one of the most important because it usually reflects the content of the review. It may be the only part of the review most readers consult when forming an opinion on a given topic, and it may help more motivated readers decide whether the review is worth reading. But abstracts are sometimes poorly written and may therefore give a misleading and even harmful picture of the review's contents. To assess the extent to which a review's abstract is well constructed, we used a checklist-based approach to propose a measure that quantifies the systematicity of review abstracts, i.e., the extent to which they exhibit good reporting quality. Experiments conducted on 151 reviews published in the software engineering field showed that the abstracts of these reviews had suboptimal systematicity.
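The checklist items themselves are not listed in the abstract, but a systematicity measure of this kind reduces to the proportion of reporting items an abstract satisfies. A minimal sketch with hypothetical checklist items:

```python
def systematicity(abstract_checks):
    """Fraction of checklist items an abstract satisfies, in [0, 1]."""
    return sum(abstract_checks.values()) / len(abstract_checks)

# Hypothetical assessment of one review abstract against reporting items.
checks = {
    "states objectives": True,
    "describes search strategy": False,
    "reports number of included studies": True,
    "summarizes synthesis method": False,
    "states limitations": True,
    "gives funding/registration details": False,
}
print(round(systematicity(checks), 2))  # 0.5
```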

17 pages, 6665 KiB  
Article
An Automated Tool for Upgrading Fortran Codes
by Lesley Mak and Pooya Taheri
Software 2022, 1(3), 299-315; https://doi.org/10.3390/software1030014 - 13 Aug 2022
Cited by 2 | Viewed by 4801
Abstract
Software built with archaic coding techniques will eventually need to be modernized, yet redeveloping out-of-date code can be a time-consuming task when dealing with a multitude of files. To reduce the amount of reassembly required for Fortran-based projects, in this paper we develop a prototype for automating the manual labor of refactoring individual files. The ForDADT (Fortran Dynamic Autonomous Diagnostic Tool) project is a Python program designed to reduce the amount of refactoring necessary when compiling Fortran files. We demonstrate how ForDADT is used to automate the process of upgrading Fortran code, process the files, and automate the cleaning of compilation errors. The tool automatically updates thousands of files and builds the software to find and fix errors using pattern matching and data masking algorithms. These modifications address code readability, type safety, portability, and adherence to modern programming practices.
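ForDADT itself is not reproduced here; the sketch below only illustrates, in Python, the pattern-match-and-rewrite loop such a tool automates, using made-up rules that upgrade legacy Fortran relational operators across a hypothetical project folder.

```python
import re
from pathlib import Path

# Illustrative rewrite rules for legacy Fortran operators (not ForDADT's rules).
RULES = [
    (re.compile(r"\.EQ\.", re.IGNORECASE), "=="),
    (re.compile(r"\.NE\.", re.IGNORECASE), "/="),
    (re.compile(r"\.GT\.", re.IGNORECASE), ">"),
]

def upgrade_file(path: Path) -> int:
    """Apply each rewrite rule in place and return the number of replacements."""
    text = path.read_text()
    total = 0
    for pattern, replacement in RULES:
        text, count = pattern.subn(replacement, text)
        total += count
    path.write_text(text)
    return total

for source in Path("legacy_project").rglob("*.f90"):  # hypothetical project folder
    print(source, upgrade_file(source), "replacements")
```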

11 pages, 466 KiB  
Article
The Impact of Agile Development Practices on Project Outcomes
by Dipendra Ghimire and Stuart Charters
Software 2022, 1(3), 265-275; https://doi.org/10.3390/software1030012 - 05 Aug 2022
Cited by 5 | Viewed by 7169
Abstract
Agile software development methods were introduced to minimize the problems faced when using traditional software development approaches. Several agile approaches are used in developing software projects, including Scrum, Extreme Programming, and Kanban. An agile approach focuses on collaboration between customers and developers and encourages development teams to be self-organizing. To achieve this, teams choose different agile practices to use in their projects; some use only one practice, whilst others use a combination. The most common practices are stand-ups, user stories, burndown/burnup charts, pair programming, and epics. This paper reports on the analysis of data collected from people involved in agile software development teams and identifies that the combination of practices used in agile software development has an impact on communication in the team, project requirements, and project priorities, with the adoption of more practices correlating with better project outcomes.
