Article

Evaluation Methodology of Interoperability for the Industrial Domain: Standardization vs. Mediation

by Yuhan Chen 1,2,*, David Annebicque 1, Alexandre Philippot 1, Véronique Carré-Ménétrier 1 and Thierry Daneau 2

1 CReSTIC, UFR Sciences Exactes et Naturelles, Moulin de la Housse, University of Reims Champagne Ardenne, 51100 Reims, France
2 Industrial Strategy and Engineering Department, Renault Group, 1 Avenue du Golf, 78084 Guyancourt, France
* Author to whom correspondence should be addressed.
Processes 2023, 11(4), 1274; https://doi.org/10.3390/pr11041274
Submission received: 4 March 2023 / Revised: 7 April 2023 / Accepted: 14 April 2023 / Published: 19 April 2023

Abstract: With the arrival of Industry 4.0, interoperability has become a major subject for companies worldwide. It is a crucial asset that enables new technologies and possibilities (Industrial Internet of Things, predictive maintenance or traceability solutions). With the increasing importance of data in business use cases, companies are faced with a choice between two interoperability approaches to deal with the challenge of reconciling different domains: standardization and mediation. This paper presents an analysis of each approach and proposes a decision-making methodology based on the Analytic Hierarchy Process (AHP) that aims to help companies in choosing the most suitable solution to resolve interoperability challenges.

1. Introduction

Industry 4.0, the fourth and latest industrial revolution, was born in Germany; also called the Industry of the Future, it marks the arrival of digital technologies in the manufacturing environment [1].
The main assets of Industry 4.0 are factory digitization with the implementation of the Industrial Internet of Things (IIoT), flexibility with the ability to customize production, new means for production simulation (digital twins) and a reliance on Big Data [2].
All these modern technologies make industrial data collection and exploitation a central pillar of the Industry of the Future. Industrial data have numerous use cases across various domains, ranging from predictive maintenance to traceability [3]. The heterogeneity of data usage and data origins creates new interoperability issues that can cripple business performance if not addressed correctly. In fact, the data reconciliation and data cleansing that these heterogeneities require are time-consuming and do not create any added value. Renault has engaged in several Industry 4.0 projects during the past few years and is now facing these problems, especially at the industrial data model level. Therefore, the research on Industrial Data Model Conception and Propagation aims to enhance digital continuity through business layers by implementing sustainable rules for data model creation.
Interoperability research is not new: interoperability has always been a major challenge for enterprises faced with a multiplication of different technologies. Regarding industrial interoperability, we have conducted a thorough review of the scientific literature and found only a limited number of studies that specifically address this topic within the context of real-world industrial constraints. It is worth noting that [4] compared the two solutions in question; however, that work may now be considered outdated.
Thanks to the flexibility of the Information Technology (IT) world, various solutions have emerged to solve data heterogeneity with each evolution of technology. In contrast, the manufacturing domain, where Operational Technology (OT) is prominent, has entirely different constraints, with software and hardware life cycles lasting far longer than in the IT domain (industrial robots are designed to last for decades). This raises the question of the compatibility of IT interoperability solutions with the manufacturing world.
Industrial interoperability is a great concern for current manufacturers, who use many different solutions to achieve it. Among these solutions, two main strategies can be observed: standardization and mediation. The two solutions aim to achieve interoperability in different ways, but few scientific works currently compare them. The lack of scientific consensus on the better choice is very problematic for industry, as a wrong decision can impact a company's future. In our work, we aim to compare standardization and mediation and find the best solution for manufacturers based on their needs. The assessment involves establishing metrics to characterize each potential solution and using the Analytic Hierarchy Process (AHP) methodology to assign scores to each solution based on the needs of the organization. This ensures that the selected solution is the most appropriate one, given the organization's goals and constraints.
Section 2 will focus on the state of the art of the two interoperability solutions, whereas, in Section 3 and Section 4, we develop our methodology of decision making based on the Analytic Hierarchy Process (AHP). Finally, the methodology’s application in an industrial example is presented in Section 5, and we end with a discussion and a conclusion.

2. Interoperability State of the Art

2.1. Interoperability Definition

Interoperability is a crucial aspect of Industry 4.0 for manufacturers, given the need for different domains with varying characteristics to collaborate seamlessly. The purpose of this section is to provide a comprehensive definition of interoperability that can be applied to different domains, each with its specific requirements and goals [5].
Heterogeneity in the industrial domain is multidimensional, and it becomes imperative to represent it in a global architecture [6]. In the scientific literature, the first reference model for this architecture is the Reference Architectural Model Industrie 4.0 (RAMI 4.0), part of the German strategy to modernize industry for the integration of new technologies that first appeared in 2011 at the Hannover Industrial Fair. It aims to improve the flexibility, efficiency and quality of industrial production by formalizing the ecosystem into branches and layers. The architecture is visualized along three axes. The X axis shows the stages of a product life cycle and value chain in accordance with the IEC 62890 standard, including product development, production and maintenance. The Y axis determines the layers of the industrial ecosystem that interact with the product. Finally, the Z axis embodies the hierarchical layers of an enterprise according to IEC 62264 and IEC 61512. In the context of researching inter-domain interactions in a global ecosystem, the RAMI 4.0 architecture is particularly interesting because it provides a global understanding of the organization and interactions between domains throughout the product life cycle (Figure 1).
Globally speaking, the term interoperability refers to the capacity of two systems to communicate and exchange information without any comprehension issues that would impact performance or functionality. Systems are sets of entities working together as part of a mechanism or an interconnecting network. As interoperability is a wide theme, the literature has proposed many frameworks that organize enterprise interoperability in a specific structure for a better understanding. In the work of [7], the authors performed a literature review covering multiple frameworks. Thus, the ATHENA [8] and INTEROP [9] projects, the latter of which later became ISO 11354-1 [10], both define reference architectures (the ATHENA Interoperability Framework and the Framework for Enterprise Interoperability) to analyze interoperability in a standard and structured way. Furthermore, the European Interoperability Framework (EIF) brings the organization to a wider perspective with a global architecture that covers multiple domains related to an enterprise, from a microscopic level to a macroscopic level, such as governance (Table 1). One application of the EIF is the interoperability between public entities from various governments around Europe [11]. In the literature, EIF studies show relevance to cyber-physical manufacturing enterprises [12] and demonstrate the completeness of the framework for other domains.
Although the framework defined by the EIF is complete, it is not entirely suitable for an industrial ecosystem that is starting a digital transformation, because layers such as the governance layer and the legal layer are not necessarily a priority. The evolution of traditional industry towards Industry 4.0 is not always conducted in a structured multi-layer manner but often starts from a small use case implementation that scales over time. Additionally, for better practical application in an industrial context such as Renault, the interoperability framework should avoid complexity so that it can be understood by everyone in the organization, including non-experts. Defining interoperability at various levels is important because it enables a more precise and comprehensive understanding of the concept; this has been the focus of works such as [13], which aim to enhance the standardization of interoperability.
In the work of [14], the authors defined a framework that is similar to the three lower layers of the EIF and adds the human factor of interoperability (informal interoperability). The latter is particularly interesting because of its impact in a wide ecosystem, where the level of resistance to change, for instance, plays a massive role in the success of an interoperability project. The authors define interoperability at the following levels.
  • Technical specific interoperability is the lowest level of our scope. It represents the communication ability between heterogeneous protocols (for instance, OPC UA communication protocol and MQTT). Information exchanges between two protocols involve sub-elements such as format compatibility, data quality or semantic preservation. Interoperability increases with the lesser intervention needed to enable protocols to “understand” each other. In most cases, technical interoperability can be achieved thanks to various technological means, especially with the help of Information Technology (IT) solutions that serve as translators/mediators.
  • Formal interoperability is a higher level of interoperability that is important for corporations to work efficiently. It involves the organizational capability of a team or a group of people, such as communication, role clarity, objective clarity or business process modeling. In a high-interoperability operating environment, each worker has a clear idea of the goal and knows whom to address in case of problems. This results in a great increase in global efficiency and performance.
  • Informal interoperability is the highest level of interoperability and takes into consideration the human factors of a team or the whole company. Such factors could be change acceptability, cultures, traditions or motivation. Though informal interoperability is more subjective, it remains a valuable piece of information to assess when implementing major changes within a company or an ecosystem (new project, strategy change).
Considering the industrial scope that this paper is aiming for (industrial asset data collection), semantic interoperability is only limited to the understanding of machine variables throughout the plant floor from the enterprise’s data scientists and is often linked to the technical level. Therefore, similarly to [14], we have decided to merge the technical and semantic levels.
As we can see, interoperability is a multidimensional subject, and each barrier must be thoroughly studied to improve it. To this end, interoperability assessment is a good approach for informal and organizational barriers, whereas data analysis can help solve technical interoperability.
Furthermore, there is a huge amount of knowledge from international organizations that work exclusively on interoperability standardization and improvement across many industrial domains. As far as manufacturing is concerned, we can mention the Open Platform Communications Unified Architecture (OPC UA), an industrial communication and information standard created by the OPC Foundation that aims to achieve interoperability among industrial assets (programmable logic controllers, robots, automated guided vehicles) [15].

2.2. Interoperability Impacts

The lack of interoperability is a great issue for major manufacturers as it could have huge impacts on business performance. Interoperability barriers come in different forms (technical, informal, organizational) and cause miscommunication, misalignment or data quality losses within an organization. Reconciling heterogeneity requires a great amount of effort and the appropriate expertise.
Particularly with the arrival of the Fourth Industrial Revolution, where data are considered crucial, data interoperability between systems based on heterogeneous protocols or technologies is more critical than ever. In the manufacturing world, data quality preservation is a serious challenge because of historical factors and the coexistence of heterogeneous technologies. One of the causes is the silo working approach in which different business domains develop their own protocols and technologies without any consideration of interoperability.
The term data quality is composed of six main parts [16], illustrated by the short sketch after the list:
  • Data uniqueness: Covers the existence of a unique value for a specific data attribute within a table.
  • Data consistency: Assures logical coherence within a system that frees them of contradiction.
  • Data integrity: Existence of data values in reference tables from different systems.
  • Data completeness: Existence of data in a specific data attribute or field.
  • Data timeliness: Degree to which data are representative of current business conditions (updated and available).
  • Data conformity: Data are valid if they conform to the syntax of their definition.
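As a minimal sketch of how some of these dimensions can be measured in practice, the following assumes a pandas DataFrame of collected machine variables; the column names, values and reference time are hypothetical.

```python
import pandas as pd

# Hypothetical sample of collected machine variables.
df = pd.DataFrame({
    "equipment_id": ["R01", "R02", "R02", "R04"],
    "torque_nm": [12.1, 11.8, None, 13.0],
    "timestamp": pd.to_datetime([
        "2023-01-05 08:00", "2023-01-05 08:01",
        "2023-01-05 08:01", "2023-01-05 08:02",
    ]),
})

# Uniqueness: share of distinct values in a key attribute.
uniqueness = df["equipment_id"].nunique() / len(df)

# Completeness: share of non-missing values in a field.
completeness = df["torque_nm"].notna().mean()

# Conformity: share of values matching the syntax of their definition
# (here: torque must be a positive number).
conformity = df["torque_nm"].dropna().gt(0).mean()

# Timeliness: age of the freshest record against a reference time.
age = pd.Timestamp("2023-01-05 08:05") - df["timestamp"].max()

print(uniqueness, completeness, conformity, age)
```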
After being gathered from the factory floor, data are transmitted to higher levels of the IT infrastructure using a variety of protocols, such as Modbus, MQTT and OPC UA. This heterogeneity causes a serious decrease in data quality over time because of multiple data handovers, and it negatively impacts data exploitation for end users (semantic ambiguities, lack of data correspondence). For instance, in the manufacturing domain, the information models of equipment suppliers have historically been very heterogeneous. This causes situations in which data scientists encounter two variables with the same name but with different meanings, leading to poor interpretations. When a variable is misinterpreted, it can compromise the final result, producing poor-quality data. In Figure 2, the quality of the data from system A has degraded by the time they are received in system C. Data scientists then need to invest a significant amount of time and resources into cleaning these data before they can use them effectively, an effort that adds no real value. However, if a solution is implemented to address interoperability issues within the architecture, this cleansing effort can be greatly reduced: collected data can be understood by the different interoperable systems with consistency and conformity, minimizing the need for extensive data cleansing.

3. Towards Industrial-Oriented Interoperability Solutions

3.1. Operational Technology (OT) vs. Information Technology (IT)

Interoperability is a critical subject that is relevant to all aspects of an organization. However, when conducting an interoperability assessment, it is essential to focus on a specific domain as each ecosystem may differ from the others. Therefore, it is crucial to identify the differences within each specific domain of the entire ecosystem.
In a large manufacturing organization such as an automotive manufacturer, OT and IT work in a very complementary way to achieve various business goals [17]. The IT domain of an organization refers to computer-related technologies, including cloud computing. As the backbone of the upper layers of the company, it guarantees crucial functionalities related to monitoring, managing and securing data.
The OT domain, on the other hand, focuses on connecting, monitoring, managing and securing industrial operations. It also encompasses all physical industrial assets, such as robots, industrial control systems (ICS), programmable logic controllers (PLC) and computer numerical control (CNC) machines. In the world of Industry 4.0, which heavily revolves around data collection and exploitation, IT and OT convergence is essential. However, while both technologies use data and communication protocols, they remain very different in many aspects.
  • IT devices are easily replaceable, with a 3–5-year lifespan. This limits the company's commitment to a specific technology or device. In contrast, the OT domain relies on industrial equipment that is designed to operate for extended periods, often spanning several decades. Changes to the equipment and technology used in OT are carefully evaluated and planned well in advance, with an emphasis on maintenance and scalability over the long term.
  • Service continuity and reliability are crucial elements for OT because productivity and operator safety are involved. Real time is measured in milliseconds for OT, for instance, whereas it is measured in seconds or minutes for IT.
  • Given that our study focuses on the manufacturing domain, it is important to take into account all the specific constraints of OT, such as asset life cycle or scalability perspectives, in the final assessment.
For companies, choosing the right solution is a real challenge, as it needs to be sustainable in the long run and there is no obvious choice. The economic factor of the digital transformation is a key criterion, and it is linked to the ease of implementation and the efficiency of each solution.

3.2. Interoperability Solutions

3.2.1. Standardization

To achieve technical interoperability, researchers and companies have searched for solutions that can reconcile heterogeneous systems. In this paper, we identify two main strategies: standardization and mediation. Each strategy has its own advantages and disadvantages with regard to the target ecosystem. Standardization may ultimately solve interoperability issues, but if the company cannot implement or use it correctly, it can also become a money sink without solving all the problems. A full analysis of both the solution and the target ecosystem must be performed to find the most suitable solution for the enterprise.
A standard is a technical document designed to be used as a rule, guideline or definition. It is a consensus-built, repeatable way of doing something. To create a standard, working groups and organizations bring together all interested parties, such as manufacturers, end users or domain-specific experts. Standards aim to bring down silo working approaches by reconciling multiple domains together.
A standard can be expressed in many forms, such as format and syntax conformity, protocols, or model and process unification [17]. Many international domain-specific standards exist throughout the world. We can mention some widely used ones, such as those of the International Organization for Standardization (ISO), an independent, non-governmental, international organization that develops standards to ensure the quality, safety and efficiency of products, services and systems [18].
In the manufacturing field, OPC UA is a fast-growing standard supported by over 750 member organizations and thousands of pieces of OPC-UA-compliant equipment following a server/client logic. The main goal of OPC UA is to achieve interoperability within the automation world through global standardization, in such a way that all pieces of industrial equipment can fully understand each other despite the heterogeneity of communication protocols [19].
The OPC UA protocol has its roots in the OPC Classic protocol, which provided a standardized server interface through which clients could access industrial data. It created standards for three main assets: Data Access (DA), Historical Data Access (HDA) and Alarms and Events (AE). Although capable, OPC Classic presents some major flaws. Firstly, it relies on Microsoft Windows COM/DCOM technology; the future evolution of OPC Classic is thereby directly dependent on MS COM/DCOM. Secondly, server/client connections were difficult to establish and there was no native security configuration. Due to these limitations and the inconveniences associated with the separation of the three assets, the OPC Unified Architecture (UA) was developed as an improvement over the earlier OPC Classic protocol. OPC UA was designed to be platform-independent and not reliant on Microsoft technologies. Additionally, it includes security features and integrates the functionalities of DA, HDA and AE.
Another strength of OPC UA is its information model standards. In a time when data models are becoming increasingly crucial, the OPC foundation created a layered and hierarchical information model based on industrial assets. The model is devised with three main specification parts, the Core Specification, the Access Type Specification and the Utility Specification. The Core Specification part defines the main functionalities of the OPC UA kernel by structuring the main Address Space, whereas the Access Type specification defines specific Access Types. Finally, the Utility Specification part takes in data discovery and aggregation mechanics (Figure 3). Standardized data models can be used beyond the factory floor and throughout other business layers such as data clients with minimum data cleansing needed.
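As an illustration of the client/server logic, the following is a minimal sketch of reading one standardized variable, assuming the community Python asyncua library; the endpoint URL and node identifier are hypothetical.

```python
import asyncio
from asyncua import Client  # community OPC UA library (pip install asyncua)

async def main():
    # Hypothetical endpoint and node identifier of a standardized variable.
    url = "opc.tcp://udc-server.example:4840"
    node_id = "ns=2;s=TighteningSystem/Results/MeasuredTorque"
    async with Client(url=url) as client:  # connects and disconnects
        node = client.get_node(node_id)
        value = await node.read_value()    # same call for any compliant server
        print(node_id, "->", value)

asyncio.run(main())
```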
International working groups composed of industrial suppliers have been actively working on a unique data structuration standard. The outcome is a standardized data model called “Companion Specification” (CS) that is shared by all equipment using the OPC UA standard. The Verband Deutscher Maschinen und Anlagenbau (VDMA) [20], which is the German machinery association, has, for instance, created numerous machinery companion specifications, including the Tightening CS, the Robotic CS or the CNC CS.
Standardization is often viewed as an effective approach to achieving interoperability across various business layers. However, its implementation can be a complex and far-reaching process that may require significant changes throughout the entire ecosystem. Additionally, the adoption of standardization can have a substantial impact on current business processes, which may require careful planning and management to mitigate any potential disruptions.

3.2.2. Mediation

Standards require a certain amount of effort to build and maintain. Moreover, standardization is a collective task that mobilizes the expertise of many domains. This can be challenging for some manufacturers, who fear the impact and the cost of standardization due to its initial complexity. Therefore, a second approach to achieving interoperability must be considered, relying on IT solutions [21].
Data mediation aims to solve interoperability issues by translating information between two different systems. It is often a data mapping software that receives untranslated data from system A and sends translated data to system B. Translated data sending can be initiated by a demanding client system to the mediator, which will request the source data from the source information system (Figure 4). The final processed data must be fully comprehensible for the demanding system and data quality must be preserved. As a matter of fact, the mediator acts as a semantic gateway between the two systems.
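As a minimal sketch of this semantic gateway, the following shows a mediator translating a system A payload into system B vocabulary; the variable names, units and mapping are hypothetical.

```python
# Per-pair mapping: system A variable names -> system B names, with optional
# unit conversions so that data quality is preserved across the handover.
A_TO_B = {
    "MOT_SPD_RPM": ("motor_speed_rpm", lambda v: v),            # same unit
    "TEMP_C":      ("motor_temperature_k", lambda v: v + 273.15),
}

def mediate(payload_a: dict) -> dict:
    """Translate a system A payload into system B vocabulary."""
    payload_b = {}
    for name_a, value in payload_a.items():
        if name_a not in A_TO_B:
            continue  # unmapped fields are dropped (or logged) by the gateway
        name_b, convert = A_TO_B[name_a]
        payload_b[name_b] = convert(value)
    return payload_b

print(mediate({"MOT_SPD_RPM": 1450, "TEMP_C": 61.5}))
# -> {'motor_speed_rpm': 1450, 'motor_temperature_k': 334.65}
```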
The effectiveness of mediation solutions is closely tied to the semantic gap between systems, and greater semantic disparity can significantly increase the effort required to create such solutions. Similar approaches to addressing interoperability challenges can be found in many commercial applications that rely heavily on IT assets [22].

4. Evaluation Methodology of Interoperability Approach Selection

Heterogeneity is an increasing concern for industries because of the financial losses caused by miscommunication between business and technical systems. Choosing between standardization and mediation to address heterogeneity can be a delicate decision for companies, as it usually induces significant changes and impacts the whole business [23]. Moreover, very few scientific works compare the two approaches and help companies make the right decision based on their current situation.
In our work, we aim to create a methodology that first compares the two solutions and then reaches a decision regarding which is more sustainable for a company, based on its current situation and future strategies.

4.1. Preliminary Study of the Target Ecosystem

Interoperability is a broad term that can cover systems at various levels of granularity [24]. For instance, the term can simply involve the communication between two PLCs, or a global comprehension between two business processes. Therefore, it is essential to define the limits of the case study of every interoperability operation to ensure its completeness and accuracy.
The first step is to define the main interoperability objectives of the study by identifying heterogeneous systems that are required to be interoperable. This can be, for example, business processes or information systems. When the main targets are identified, it is important to then determine related dependencies that may exist in the whole ecosystem, to anticipate any form of impact. Formalization by using System Engineering Modeling can be helpful to highlight critical dependencies between the studied systems and identify the stakeholders that are involved.
Once the scope and stakeholders are clearly identified, the next step is to assess the initial interoperability level. We suggest evaluating the global interoperability at the business level through an assessment based on interviews of ecosystem representatives and operatives. The strength of this methodology is the completeness of the information gathered, thanks to the breadth of the assessment scope. In fact, the study includes multiple aspects of interoperability, covering both technical and human factors, which is essential to study the whole ecosystem. Implementing a new solution, whether standardization or mediation, may greatly impact current systems, and success rests on both technical and human challenges.
For [14], interoperability is classified into three categories: informal, formal and technical. Informal interoperability is related to human aspects, such as tradition or change acceptance. Formal interoperability involves organizational, strategic and business goal alignment. Finally, technical interoperability includes data formats, protocols and semantic alignment.
The main goal of the assessment is to evaluate the current interoperability state of the target ecosystem and the change readiness. To be able to do so, several interviews must be conducted with stakeholders that represent each specific organization of the whole ecosystem. This can be done in four steps.
  • Respondent Selection: Respondent profiles (responsibility and experience) must be diverse in order to capture the whole spectrum of opinions. Each selection, however, must be coherent with the main interoperability study.
  • Question Selection: As mentioned before, interoperability is a global term that incorporates informal, organizational and technical aspects. Therefore, the interview questions must also cover these three categories. We decided not to limit the number or the specificity of the questions, so as to encourage open discussion.
  • Interview: Once the profiles and questions are selected, interviews can be scheduled and must serve to assemble information for further analysis. In our case study, interviews are occasions for respondents to express their opinions fully. There should not be any restrictions, so as to prevent any form of self-censorship.
  • Information Analysis: Collected information must be processed and analyzed. According to [14], each interoperability level can be divided into subcategories. Based on the interview answers, a score (0–5) and a weight can be assigned to each subcategory, with the possibility of creating a graphic chart.
Interoperability level scores can then be calculated using the metrics from each subcategory and will reflect the interoperability readiness of the target system, as in the sketch below.
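Here is a minimal sketch of this scoring for one level, assuming hypothetical subcategory scores and weights for the informal level.

```python
# Subcategory: (score 0-5 from the interviews, weight). Values are invented.
informal = {
    "change_acceptance": (2, 0.5),
    "motivation":        (4, 0.3),
    "culture":           (3, 0.2),
}

def level_score(subcategories: dict) -> float:
    """Weighted average of the subcategory scores for one level."""
    total_weight = sum(w for _, w in subcategories.values())
    return sum(s * w for s, w in subcategories.values()) / total_weight

print(f"Informal interoperability readiness: {level_score(informal):.2f}/5")
# -> 2.80/5
```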

4.2. Solution Overview Analysis

The first part of the preliminary work focuses on the industrial ecosystem of the case study, whereas the second part is aimed at a comparison between the interoperability solutions. A first decision analysis can be performed by studying the strengths and weaknesses of standardization and mediation. This study is especially efficient for creating a quick overview and allows for further specification afterward. Moreover, interoperability solutions are used in the industrial and corporate domains. Therefore, it is critical to take the business strategy and goals into consideration in the decision-making process when representing the strengths and weaknesses.
This study can be performed using a SWOT analysis, a widely used evaluation tool in industry that depicts the strengths, weaknesses, opportunities and threats of a product or a solution:
  • The strength of the solution depicts what it does well and how it differentiates itself from others.
  • The weakness of the solution depicts what it lacks (resource and performance limitations) and the cases in which other solutions perform better.
  • The opportunity covers the industrial environment, the ease of implementation (change acceptance) and sustainability.
  • The threat covers internal and external environmental obstacles (technology evolution and standard modifications) that undermine the sustainability of the solution.
The SWOT tool is interesting in our case study because of its ability to mix solution-specific criteria with industrial business strategies. To use it to its full potential, business goals must be clearly identified and essential criteria or metrics formalized (sustainability, level of interoperability and total cost). For instance, in our industrial application, the studied company gives major importance to solution viability over extended periods of time; therefore, sustainability will be a major deciding factor in our work. Finally, a metric hierarchy must be established according to the company's strategy, risks, sustainability and impacts. The results of the evaluation of mediation and standardization are presented in Figure 5 and Figure 6.

4.3. Deeper Industrial Analysis: Metric Implementation

SWOT allows a general overview of the solution and can give users a global picture of standardization or mediation within the business process. This is, however, incomplete because of the lack of measurable metrics and scenario projection. A deeper analysis is thus required. In our work, we propose a three-step analysis that will result in the refinement of the impact study of standardization and mediation based on industrial use cases and needs. The steps are as follows:
  • Use case identification;
  • Metric selection and hierarchization;
  • Solution impact analysis.
Standardization and mediation usually revolve around communication heterogeneities between different systems, with communication essentially composed of data exchanges. It is consequently relevant to identify business use cases involving data exchange and to study the data behaviors or changes that occur during the data life cycle. Identified use cases can be categorized by business affiliation to isolate recurring data patterns or behaviors.
By analyzing business use cases, it is possible to link data behavior and modification with Key Performance Indicators (KPIs). This will allow us to introduce specific KPIs that will help us to evaluate the business impacts of standardization and mediation. To precisely measure the impact of the implementation of each interoperability solution, a representative metric is important. As we have already identified the data use cases, we can base the following work on them.
In the industrial and business domains, there exists a huge variety of KPIs [25], and selecting the most relevant ones for the study is essential. As mentioned before, interoperability solutions are heavily linked to data exchanges and industrial/business processes, with all business levels included. For example, there can be heterogeneity between factory floor PLCs, but also between information systems higher up in the business layers, such as predictive maintenance systems [26]. The KPI choice must thus not be domain-specific and must cover a larger spectrum of business layers.
In the work of [27], the authors introduced cloud KPIs that cover not only system information metrics but also business performance metrics. This is particularly suitable for our case study because of the need to measure data performance alongside business process performance. However, it still lacks classification, and some KPIs are not directly relevant to interoperability evaluation. Therefore, it is necessary to choose specific KPIs relevant to the study.
Categorizing them into main parts gives a better visualization of the global business strategy.
  • Reusability: the capability to reuse the solution.
  • Usability: the ease of use of the solution.
  • Scalability: the capability of the solution to scale.
  • Interoperability: the interoperability level of the solution.
  • Economics: financial aspects of the solution.
To evaluate performance with the selected metrics, it is essential to first contextualize the environment, as metric scores can vary depending on the current industrial situation. In our case, we identified two main scenarios. The first is the implementation phase, which represents the introduction of the interoperability solution into the business process; it can be considered a proof-of-concept situation. The second is the maturation phase, which represents the scalability of the initial solution. The two phases must be studied separately and depict, respectively, a short-term and a long-term scenario. Section 5 details the methodology with a use case application to Renault's industrial processes.

5. Metric Analysis in Regard to Renault’s Ecosystem

5.1. Renault Industrial Data Management 4.0 and Industrial Data Capture and Publish

Renault has been making substantial progress in digitalizing its factories for the past ten years [28]. The reason behind this decision is to satisfy its current industrial and operational needs. It has conducted numerous projects to capture and store data from devices at the plant level (Figure 7). As the volume of data continues to increase, it has become increasingly important to establish a comprehensive data management project that can aggregate all relevant data entities. This need led to the creation of Renault Industrial Data Management (IDM 4.0), which is designed to address these challenges and provide a centralized approach to data management.
According to Renault, data management comprises three main components, running from the bottom to the top:
  • Data acquisition;
  • Data provision;
  • Data exploitation.
Data acquisition is the first step and applies to the plant level. Its role is to collect as much data as possible from industrial devices such as PLCs, robots and sensors. Data sources are multiple and heterogeneous, which greatly complicates the task.
Data provisioning is the second part of the data management project and is responsible for data storage and accessibility. It has a key role in data mapping by contextualizing and interconnecting different data sources. Data provisioning should allow a single point of access.
Finally, the last component of data management is data exploitation. It serves an essential role in data capitalization thanks to its capability to explore and visualize stored data. Data exploitation also uses analytic and machine learning tools to monitor and make decisions.
“Collect once, use many” is the main philosophy behind this project, as highly granular and structured data are collected only once to be used in multiple use cases. Industrial Data Capture and Publish (IDACAP) is Renault’s core project for data capture and data publication and is part of IDM 4.0. It plays a key role in data acquisition and harmonization thanks to the standardization tools provided by OPC UA. Although OPC UA’s basic models provide a structured solution for making heterogeneous data understandable by data users, scalability, evolution and change management must still be addressed, which raises questions about the sustainability of the standardized OPC UA model approach. By 2021, IDACAP was already implemented in 19 plants across the world, with over 900 OPC UA servers and more than 800 billion messages.
IDACAP’s architecture allows data from multiple heterogeneous sources to be captured and routed toward the central data lake. At this point, data will be processed and published to make them available for other client projects. At the plant level, captured data are first routed to local OPC UA servers (Unified Data Connector—UDC). The heterogeneity of data sources calls for many communication protocols (Modbus TCP/IP, FTP, TGC, OPC DA and proprietary protocols). In these local servers are embedded various data models for each specific use case. Moreover, UDC servers are responsible for other tasks, such as data historization and data pre-treatment. Data from a UDC are then routed to a local digital flow platform before traveling to the corporate level.
IDM 4.0 is facing interoperability issues due to the heterogeneity of information systems across the global ecosystem. This is due to historical silo working logic and business dissimilarities. Information reconciliation is a tedious process and requires data engineers to commit fully to the task, creating a loss of money and time. The use of our methodology in the Renault industrial case aims to help the company to better visualize the impact of standardization and mediation on its current process.
In a preliminary study, we first identified the different stakeholders of Renault’s digital ecosystem. Within IDACAP, there are three interdependent business levels working together (Figure 8):
  • Data end users (e.g., CBM, traceability, process engineering) that use data from the corporate cloud (Google Cloud Platform—GCP);
  • Information technology/information System (IT/IS);
  • Operational technology (factory floor).

5.2. Establishing Metrics

We conducted several interviews with stakeholders of IDM 4.0, including project leaders, project owners and experts, to evaluate the interoperability readiness of the current state of IDM 4.0. These led to the initial conclusion that the ecosystems are aware of the interoperability issues and of the necessity to solve them. However, resistance to change remains, as employees are not keen to change their working habits. Therefore, standardization seems at first glance to be a difficult task to implement, because it implies consequential changes.
To get a better overview of the implications of standardization and mediation, a SWOT analysis was conducted, based on currently existing industrial solutions. The standardization components are based on OPC UA standards, whereas mediation is based on commercial solutions such as OSIsoft [22] or Mimosa [29]. The SWOT indicates the better scalability capability of standardization but shows, at the same time, a very complex implementation phase. On the other side, mediation can be easy to implement but has poor scalability. The analysis is, however, done at a high level, taking into account the information gathered in the previous interviews. Deciding which solution is best based on this information alone is certainly insufficient from a scientific point of view. To go further in our analysis, the use of precise metrics combined with a multi-criteria decision methodology will allow us a deeper understanding of the advantages and disadvantages of each interoperability solution.
Metrics are widely used in numerous domains to better characterize a solution or a process. Industrial digitalization has enabled metrics that are more suitable for the digital ecosystem. In the work of [27], for instance, the selected criteria are described and structured to fit the entire cloud domain. As the cloud domain is composed of different fields of expertise, metrics must not be overly precise, so as to fit all the areas. The selection of metrics for the interoperability analysis is conducted in the same way because of the number of domains composing Renault's IDM 4.0 project. Furthermore, metrics can also be classified into categories for better readability and coherence. In [27], cloud metrics are classified into three main parts: technological, non-functional and economic. Technological metrics represent all characteristics bound to the specific technologies driving the cloud system; this can, for example, be the CPU brand of a computer for a specific realization. Non-functional metrics are the opposite of technological ones and represent the attributes of a product rather than technological requirements. Finally, economic refers to all the aspects involving the direct or indirect costs of the product. Following [27], we have decided, in our case, to create three categories that are better suited to the interoperability study (Table 2).
  • Economic: As the name implies, economic is the direct and indirect costs of the interoperability solution, along its life cycle.
  • Technical: Technical represents all the technical aspects of the solution, including maintenance, usability and performance efficiency.
  • Interoperability: This category is based on the previous state of the art of interoperability (informal, formal and technical). It measures how well the solution can achieve these three aspects.
Inside the three categories lie metrics, with some chosen from the cloud metrics mentioned above. For the economic category, there are the following:
  • Implementation cost: the cost of implementing the solution in the digital ecosystem (material cost and study cost). This part only concerns the initial implementation and does not involve scalability perspectives.
  • Maintenance cost: the cost of maintenance of the solution during its life cycle.
For the technical category, there are the following:
  • Efficiency: It measures the resources employed for the service demanded. It also takes into account the quality of the service, which is, in the case of the study, data quality.
  • Ease of implementation: This measures the effort required to implement the solution (time, resources, complexity).
  • Ease of maintenance: This measures how difficult and complex the solution is to maintain.
  • Usability: This takes into consideration the human factor by measuring how complex and difficult it is for an operator to use the solution.
Finally, the last category, interoperability, is composed of the three classifications mentioned in Section 2:
  • Informal interoperability: Is the solution viable with the organization’s traditions? Are the stakeholders willing to adopt the new solution (change resistance)?
  • Formal interoperability: Is the solution well aligned with the enterprise’s strategy? Will adopting the solution help to achieve an interoperable organization?
  • Technical interoperability: How well does the solution solve data and protocol interoperability within the entire organization (format, variable names and structure)?
With the metrics established, they are now ready to be exploited to push the analysis further with a formal multi-criteria methodology.

5.3. Analytical Hierarchy Process (AHP)

Among the numerous methodologies that exist in the state of the art, the Analytical Hierarchy Process (AHP) is commonly used for multi-criteria decision making [30]. The strength of AHP is that it is suitable for almost all applications related to decision making; thanks to this, the methodology has proven its effectiveness over time, with constant updates and variations. As the methodology of this paper is to be used not only by scientific experts but also by industrial representatives, we focus on the simplest version of the AHP so that everyone is able to understand it. In this research case, the methodology is considered particularly well suited to the task at hand, given that the established metrics are primarily qualitative in nature.
In layman’s terms, AHP is composed of a goal, choices (solutions) and factors (metrics) (Figure 9). The goal is what the study aims to achieve (for example, deciding on the best solution to choose) and the choices are the different options available, with factors associated with each.
Most of the time, factors are not equally important. Factor importance can be represented as weights, with higher weights being more decisive than lower ones. As the number of factors grows, weight distribution can become a tedious task, prone to inaccuracy and inconsistency. AHP offers a way to overcome this barrier by performing pairwise factor comparisons. In other words, two factors are put together and we ask the following question: is factor A more or less important than factor B? To help answer the question, the methodology uses a formal scale from 1 (equally important) to 9 (extremely important). When the answer is a reciprocal value below 1, A is less important than B; as the value approaches 1/9, the significance of B relative to A increases (Figure 10).
Once every pair combination of factors is questioned, it is possible to establish the pairwise comparison based on the different answers to the survey (Table 3).
The comparison matrix serves two primary purposes. Firstly, it allows for the calculation of the weight of each factor, based on the numerical values entered into the matrix. Secondly, the comparison matrix is used to assess the consistency of the answers provided, determined by calculating the consistency index. Inconsistent answers are a major threat to this methodology, as they falsify the entire analysis. For instance, if a person considers that factor A is more important than factor B and factor B is more important than factor C, then, by deduction, factor A naturally carries more weight than factor C. However, if the same person claims that factor C is more important than factor A, the answers are considered inconsistent. The consistency index helps to verify the answers, and it is generally agreed that an index below 10% (0.1) is acceptable. In this paper, we do not detail the calculation of the weights or the consistency, but further information can be found in [30]; a minimal computational sketch is given below. Once the weights and consistency are verified, a score can be attributed to each choice (solution) by multiplying each factor by its associated weight.
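For readers who want to reproduce the computation, here is a minimal sketch assuming NumPy and an illustrative 3 × 3 judgment matrix (the values are not those of the paper's tables): the weights come from the principal eigenvector, and the consistency ratio uses Saaty's random index.

```python
import numpy as np

# Illustrative 3x3 pairwise comparison matrix (not the paper's values):
# A[i][j] is how much more important factor i is than factor j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

# Weights: normalized principal eigenvector of the matrix.
eigenvalues, eigenvectors = np.linalg.eig(A)
k = np.argmax(eigenvalues.real)
weights = np.abs(eigenvectors[:, k].real)
weights /= weights.sum()

# Consistency index CI = (lambda_max - n) / (n - 1); the consistency ratio
# CR = CI / RI uses Saaty's random index (RI = 0.58 for n = 3). CR < 0.1
# is generally considered consistent.
n = A.shape[0]
CI = (eigenvalues.real[k] - n) / (n - 1)
CR = CI / 0.58

print("weights:", weights.round(3), "CR:", round(CR, 3))
```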
Typically, an AHP analysis is done in four steps (Figure 11). The first step is the establishment of the AHP diagram with the chosen metrics and the goal. The second and third steps are to build the pairwise comparison matrices of the metrics firstly, and then the solution with respect to each metric to find the weight. Finally, metrics and solution weights are put together in a synthesis table to establish the solutions’ final scores.
By following AHP principles and using the previous metrics, we can design our own AHP diagram. However, the SWOT analysis demonstrated that mediation and standardization have strengths and weaknesses based on the application conditions, which are the initial launch implementation and the scalability perspective. To evaluate the interoperability solutions in both conditions, two AHPs are conducted, representing each condition with their respective diagrams.

5.4. AHP Application for Best Solution at Launch

  • Step 1: AHP diagram design
The first evaluation is focused on the initial launch condition, in which case the scalability criterion will not be considered. Using the previous metrics, it is possible to build the following AHP diagram.
The diagram (Figure 12) is structured in three layers. The first is the goal layer, which strives to identify the optimal solution for the launch condition at hand. The next is the metric category layer, followed by a third layer dedicated to the metrics linked to these categories.
  • Step 2: Metrics pairwise comparison matrix
In this step, the answers provided by Renault’s domain experts allowed us to build the pairwise comparison matrices and the following weights for the metrics of layers 2 and 3 (Table 4, Table 5, Table 6 and Table 7).
For layer 2 (Table 4), experts in the field of Industry 4.0 from the Renault Group carried out the assessment. The primary factor taken into consideration when selecting a solution was its economic impact: the company has internal validation committees responsible for determining whether a solution is financially viable. While the interoperability and technical metrics were also considered, they were not deemed as critical as the economic factor, but still played a significant role in the decision-making process.
In the launch scenario for layer 3 (Table 5), the maintenance cost is not as crucial as it is in the scalability scenario. As a result, we decided to give equal importance to the initial implementation cost and the maintenance cost.
In the launch scenario, the efficiency metric is the most critical factor in the technical metrics category (Table 6). The chosen solution must be efficient in carrying out its tasks and ensure interoperability between different domains. This is particularly crucial in the context of industrial data capture, where real-time process monitoring and traceability use cases require high efficiency. Data quality is also essential for successful data exploitation, making it a key consideration in the decision-making process.
As previously stated, selecting a solution with technical interoperability is of the utmost importance in the short term (Table 7), as it allows for immediate use cases of the data. However, in the long run, formal interoperability also becomes crucial. In order to optimize cross-domain interoperability, the entire ecosystem must be organized accordingly. The consistency index for each matrix is inferior to 10%, which is acceptable.
  • Step 3: Solution pairwise comparison matrix with respect to the metrics
In the same way as in step 2, we conducted a pairwise comparison of the solutions, standardization and mediation, for each metric above by obtaining the opinions of several Renault experts. The weights are gathered in Table 8.
  • Step 4: Synthesis and results
To calculate the final contribution of each metric using the AHP methodology, we multiply a solution's weight from the solution pairwise comparison matrix by the weight of the metric from the metric pairwise comparison. For example, the contribution of the initial implementation cost metric for standardization is obtained by multiplying its weight from Table 8 by the metric weight from Table 5: 0.143 × 0.313 ≈ 0.0447.
In the end, the final score for the standardization solution in the launch condition is 0.3685, whereas that for mediation is 0.6294. With the AHP methodology, we can conclude that mediation is a better solution than standardization for the initial launch situation, as it is far less complex and costly for the organization. The next step of the study is to conduct the same analysis for the scalability situation, which is also the focus of Renault (Table 9).
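As a sketch of the synthesis step, the snippet below reproduces the worked pair above; all other metric and solution weights would be filled in from the corresponding tables and the per-metric contributions summed per solution.

```python
# Only the worked pair from the text is real: implementation cost has metric
# weight 0.313 (Table 5) and standardization's local weight is 0.143
# (Table 8), hence 0.857 for mediation, since pairwise weights sum to 1.
metric_weights = {"implementation_cost": 0.313}
solution_weights = {
    "implementation_cost": {"standardization": 0.143, "mediation": 0.857},
}

def final_score(solution: str) -> float:
    """Sum of (metric weight x solution local weight) over all metrics."""
    return sum(metric_weights[m] * solution_weights[m][solution]
               for m in metric_weights)

print(round(final_score("standardization"), 4))
# -> 0.0448 (the text reports 0.0447, presumably from unrounded weights)
```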

5.5. AHP Application for Best Solution for Scalability

Scalability is the main focus of Renault for the next few years. The multiplication of use cases and the rise in industrial data needs require the company to find a sustainable and scalable interoperability solution that can be deployed in its plants all over the world. Here, again, mediation competes with standardization as the best solution. If we look back at the metrics for the launch situation, very few involve scalability. Therefore, it is necessary to add specific criteria that address the scalability challenge to our analysis. Two metrics provide such information: in the economic category, the scalability cost, and in the technical category, the scalability capability (both metrics are highlighted in Table 10).
The scalability cost includes all direct and indirect costs linked to the solution's scalability; for instance, solution prices are usually not linear as the solution scales (see the illustrative sketch below). The scalability capability evaluates whether the solution is designed to be scalable: how does the solution react and adapt to a scaling environment? Is it sustainable over time?
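Purely as an illustration of this non-linearity, the sketch below compares hypothetical cost curves for the two solutions across n plants; the numbers are invented and only encode the SWOT shapes, not actual Renault figures.

```python
# Invented cost curves that only encode the SWOT shapes: standardization has
# a heavy upfront alignment cost but a low marginal cost per plant, whereas
# mediation is cheap to start but needs a new mapping per plant and use case.
def standardization_cost(n_plants: int) -> float:
    return 500 + 20 * n_plants

def mediation_cost(n_plants: int, use_cases_per_plant: int = 3) -> float:
    return 50 + 60 * n_plants * use_cases_per_plant

for n in (1, 5, 20):
    print(n, standardization_cost(n), mediation_cost(n))
# Under these assumptions, mediation is cheaper for a single plant, but
# standardization wins well before the 19-plant scale of IDACAP.
```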
  • Step 1: AHP diagram design
The AHP diagram (Figure 13) for scalability includes the addition of the two metrics and, equivalently to the previous analysis, we need to find the weights of layers 2 and 3 to calculate the final solution score.
It is also important to mention that, compared to the initial launch situation, the weights are bound to change, as the company's goals and priorities are not the same in the solution scalability situation.
  • Step 2: Solution pairwise comparison matrix with respect to the metrics
In this step, layer 2 remains the same as for the initial launch situation. In layer 3, the interoperability category also remains the same. Because of the new metrics introduced previously, the economic and technical categories have been modified (Table 11 and Table 12).
  • Step 3: Synthesis and results
In the end, the final score of mediation is 0.189, whereas the final standardization score is 0.811. We can safely conclude that standardization is the better solution for scalability (Table 13). To push the analysis further with a practical example, Section 6 simulates the impact of each interoperability solution on Renault's data collection processes.

6. Solution Application in Renault’s Data Collection Processes

To practically demonstrate the effects of standardization and mediation on the real system, a comparative implementation simulation is carried out. This simulation tracks the entire journey of data, from its inception to its utilization, to showcase the practical implications of both methods. Our decision to apply each solution to the data collection process was based on the need to address the problem of data heterogeneity at the machine level. In the past, each device or industrial asset had its own protocols, which resulted in a diversity of variable naming and semantics. This issue has become even more significant with the emergence of Industry 4.0, where use cases rely heavily on data exploitation. Therefore, implementing interoperability solutions is essential to overcome this challenge. It is possible to identify three main steps in the data life cycle:
  • Data preparation;
  • Data collection;
  • Data exploitation.
Data preparation (Figure 14) is the first part of the cycle, where, based on industrial use cases, a list of data that need to be collected is decided. A use case can be provided by the factory level (industrial process monitoring, maintenance) or from a data client from the upper layer (traceability). The list is then incorporated into a data model and processed by IT servers.
In the case of standardization, all stakeholders must align on standardized variable names, which takes the form of a unified variable dictionary. This process can be challenging because of the heterogeneity of each domain, but it prevents any further heterogeneity issues later in the life cycle.
Mediation, on the other hand, does not require such alignment but requires a data mapping for each specific use case. This results in lower complexity and less impact on current systems than standardization. Nevertheless, each new use case may require a new data mapping, which can create a great diversity of situations across the ecosystem, as the sketch below illustrates.
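A minimal Python sketch of the contrast, assuming hypothetical variable names (none of the tags or dictionary entries below are Renault’s actual ones): standardization maintains one dictionary that every stakeholder aligns on once, whereas mediation accumulates one mapping per use case.

```python
# Standardization: a single unified variable dictionary, agreed on
# once by all stakeholders; producers and consumers conform to it.
UNIFIED_DICTIONARY = {
    "TighteningTorque": {"unit": "N.m", "type": "float"},
    "TighteningAngle": {"unit": "deg", "type": "float"},
}

# Mediation: one mapping per use case (and often per data source),
# translating local device naming into what each consumer expects.
TRACEABILITY_MAPPING = {"trq_final": "TighteningTorque",
                        "ang_final": "TighteningAngle"}
MAINTENANCE_MAPPING = {"couple": "TighteningTorque"}  # another device

def mediate(raw: dict, mapping: dict) -> dict:
    """Rename a device's raw variables for one specific use case."""
    return {mapping[key]: value for key, value in raw.items() if key in mapping}

print(mediate({"trq_final": 52.1, "ang_final": 34.0}, TRACEABILITY_MAPPING))
# {'TighteningTorque': 52.1, 'TighteningAngle': 34.0}
```

Each new use case adds another mapping of this kind, which is exactly the diversity of situations the paragraph above warns about.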
The second part of the process is data collection (Figure 15). Industrial data produced by the OT layer (e.g., PLCs, robots, sensors) are collected by servers and sent to other business layers to be published and exploited.
In the standardized case, the normalization and structuring of data and data models across the business layers ensure data quality in case of data handovers. Mediation solutions, on the other side, fit a specific heterogeneity scenario and do not provide a global solution for the whole ecosystem. Since the architecture involves multiple types of data, different from one another and processed at various layers, various approaches will be necessary to maintain interoperability. However, the quality of the data may not be guaranteed due to their heterogeneity [31]. Moreover, this may also mean relying on an external supplier for maintenance, which, from a business strategy point of view, can become an issue.
Finally, in data exploitation (Figure 16), data are consumed by data end users. Standardized and structured data allow data analysts to elaborate algorithms based on generic data models, which facilitates replication and thus improves scaling capabilities. On the contrary, unstructured data from mediation are unsuitable for replication and will need algorithm re-adaptation for every new model, as the short sketch below illustrates for the standardized case.
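A minimal sketch of the standardized case, assuming a hypothetical generic schema (TighteningResult and nok_rate are illustrative names, not part of any standard): the same analysis code can be replicated unchanged wherever data conform to the shared model.

```python
from dataclasses import dataclass

# Hypothetical generic data model shared across plants.
@dataclass
class TighteningResult:
    torque_nm: float
    angle_deg: float
    ok: bool

def nok_rate(results: list) -> float:
    """Share of failed tightenings -- reusable anywhere the model holds."""
    return sum(not r.ok for r in results) / len(results)

batch = [TighteningResult(52.1, 34.0, True),
         TighteningResult(47.8, 29.5, False)]
print(f"NOK rate: {nok_rate(batch):.0%}")  # -> NOK rate: 50%
```

With mediation, by contrast, each differently structured source would require its own adaptation of nok_rate before the algorithm could be reused.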

7. Discussion

Throughout the previous analysis, standardization emerged as the best option for its scalability capability, but it is legitimate to ask whether this is always the case. There are indeed many standards, and choosing the right one for a specific ecosystem can be a challenge in itself, as the implementation capability of the standard is a concern of its own. Standardization assets are especially valuable in the OT domain, as standards provide a sustainable, long-term interoperability solution. However, standards are meant to evolve and grow over time, and creating them is a complex task, especially when they cover multiple heterogeneous domains. Moreover, even within the same standard, there can be different visions of how it should evolve.
One example involves the standardized data models provided by the OPC Foundation. As mentioned before, the machinery models are provided by VDMA, which is solely composed of equipment suppliers and therefore has a different point of view from data end users. This model divergence between the standard and end users such as Renault results in the end users implementing their own data models instead of those provided by the standard in order to fulfill their requirements.
In the Tightening System Companion Specification (CS) (Figure 17), VDMA proposed a model structure that facilitates the modularity of industrial assets, with tightening results retrieved through method invocations [32].
However, the CS does not contain any part- or process-related information, which is crucial for end users to analyze the data. Although the addition of new data is within its scope, the long waiting time between releases is a major barrier to implementation for end users.
In comparison, Renault’s own Tightening Model is more functionally oriented, with a model structure that mirrors the factory organization. This structure provides data analysts with an organizational view that helps them recontextualize data (Figure 18).
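The difference in structuring philosophy can be sketched with two purely illustrative browse paths (neither reproduces the actual VDMA or Renault node structure): the asset-centric path organizes nodes around the machine, while the organization-centric path mirrors the factory so that the context of a result can be read directly off its location.

```python
# Asset-centric structure (VDMA-style): organized around the machine.
vdma_style = "Machines/TighteningSystem_1/ResultManagement/Results/Result_42"

# Organization-centric structure (Renault-style): mirrors the factory,
# so an analyst can recontextualize a result without extra lookups.
renault_style = "Plant_X/Workshop_3/Line_2/Station_12/Tightening/Result_42"

def organizational_context(path: str) -> dict:
    """Read plant/workshop/line/station straight from the path levels."""
    plant, workshop, line, station, *_ = path.split("/")
    return {"plant": plant, "workshop": workshop,
            "line": line, "station": station}

print(organizational_context(renault_style))
# {'plant': 'Plant_X', 'workshop': 'Workshop_3',
#  'line': 'Line_2', 'station': 'Station_12'}
```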
This model heterogeneity needs to be resolved to build a sustainable standard used by every stakeholder of an ecosystem. Alongside our research, we held discussions with the OPC Foundation to align standard creators and standard users.

8. Conclusions and Future Work

Interoperability is an essential and well-identified concern for most present-day companies, and the problem will only grow as Industry 4.0 pushes heterogeneous domains to work together. Standardization and mediation have emerged as the main solutions, but there is no clear industrial or scientific consensus in the manufacturing domain once the scalability factor is taken into account. Compared to the Information Technology world, where mediation can readily be applied, the automation world has different constraints and paradigms. An assessment was therefore necessary to point out the main features of each approach while taking the business strategy into consideration; in fact, a company’s industrial and business goals represent major decisive factors.
While standardization may be an effective approach for Renault, it may not necessarily be the optimal solution for other companies with distinct constraints and strategies. There is no universally applicable method for achieving interoperability, and the most appropriate approach will depend on the specific objectives of the organization. There may be certain situations where the traditional learning process is too complex and challenging to navigate, making mediation a more suitable and effective solution [33]. The scope of the interoperability challenge is another important factor, as, in some situations, it may be necessary to employ both mediation and standardization. This is especially relevant for large ecosystems where the different domains are too diverse to be encompassed by a single standard. In the case study at hand, the focus is on the manufacturing domain (OT) and, more specifically, on machine data.
Renault’s previous IDACAP studies suggest that mediation provides faster and less disruptive implementation with minimal business impact, while standardization is more challenging and has a greater impact. However, as the business expands, standardization becomes a more attractive option due to its scalability capabilities. On the other hand, mediation solutions may face difficulties in a scalability scenario due to maintenance issues and specific requirements. Since scalability is a major focus for Renault in the coming years, standardization appears to be the more viable and long-term solution.
The choice of standardization also raises deeper questions. Data modeling is the vital centerpiece of data propagation and represents the reconciliation of the main heterogeneous domains, such as OT and IT. Although existing standards help to formalize technical assets, numerous disparities remain between these two domains at a global and informal level, and an impact assessment is needed to highlight the most critical issues. Furthermore, domain-specific differences are also expressed through different data model specifications. The industrial thesis conducted by Renault and CReSTIC on the Conception and Propagation of Industrial Data Models is essential to address these scientific obstacles. This work aims to create model conception rules that take interoperability, life cycles and data quality into consideration, and will thereby help to implement digital continuity in a standardized and uniform ecosystem.

Author Contributions

Conceptualization, Y.C.; methodology, Y.C.; validation, T.D.; writing—original draft, Y.C.; writing—review and editing, D.A. and A.P.; supervision, V.C.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

This research and development undertaking was conducted in the context of an industrial and academic collaboration (CIFRE convention) between the Renault Group and the CReSTIC Laboratory of the University of Reims Champagne Ardenne, financed by the Renault Group and the National Association of Research and Technology (ANRT).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Ghobakhloo, M. Industry 4.0, digitization, and opportunities for sustainability. J. Clean. Prod. 2020, 252, 119869. [Google Scholar] [CrossRef]
  2. Massive Internet of Things for Industrial Applications—CityU Scholars | A Research Hub of Excellence. Available online: https://scholars.cityu.edu.hk/en/publications/massive-internet-of-things-for-industrial-applications(bfc81625-39cc-4ec4-b346-1354384d9306).html (accessed on 14 March 2021).
3. Drissi, A.; Iung, B.; Voisin, A.; Galinier, V. Coupling Prognostics and Decision Making (P&DM) Processes in PHM: A Literature Review and Proposal of a Residual Performance Lifetime Concept. IFAC-PapersOnLine 2022, 55, 145–150. [CrossRef]
  4. Renner, S.A.; Rosenthal, A.S.; Scarano, J.G. Data interoperability: Standardization or mediation. In Proceedings of the 1st IEEE Metadata Conference, Silver Spring, MD, USA, 16–18 April 1996. [Google Scholar]
  5. Hankel, M.; Rexroth, B. The reference architectural model industrie 4.0 (rami 4.0). Zvei 2015, 2, 4–9. [Google Scholar]
  6. Nakagawa, E.Y.; Antonino, P.O.; Schnicke, F.; Capilla, R.; Kuhn, T.; Liggesmeyer, P. Industry 4.0 reference architectures: State of the art and future trends. Comput. Ind. Eng. 2021, 156, 107241. [Google Scholar] [CrossRef]
  7. da Silva Serapião Leal, G.; Guédria, W.; Panetto, H. Interoperability assessment: A systematic literature review. Comput. Ind. 2019, 106, 111–132. [Google Scholar] [CrossRef]
  8. ATHENA. Available online: http://interop-vlab.eu/athena/ (accessed on 15 November 2022).
  9. INTEROP. Available online: http://interop-vlab.eu/interop/ (accessed on 15 November 2022).
  10. Geraci, A.; Katki, F.; McMonegal, L.; Meyer, B.; Lane, J.; Wilson, P.; Radatz, J.; Yee, M.; Porteous, H.; Springsteel, F. IEEE Standard Computer Dictionary: Compilation of IEEE Standard Computer Glossaries; IEEE Press: Piscataway, NJ, USA, 1991. [Google Scholar]
  11. 6 Interoperability Layers | Joinup. Available online: https://joinup.ec.europa.eu/collection/nifo-national-interoperability-framework-observatory/solution/eif-toolbox/6-interoperability-layers (accessed on 15 November 2022).
  12. Weichhart, G.; Panetto, H.; Molina, A. Interoperability in the cyber-physical manufacturing enterprise. Annu. Rev. Control 2021, 51, 346–356. [Google Scholar] [CrossRef]
  13. Burns, T.; Cosgrove, J.; Doyle, F. A Review of Interoperability Standards for Industry 4.0. Procedia Manuf. 2019, 38, 646–653. [Google Scholar] [CrossRef]
  14. Liu, L.; Li, W.; Aljohani, N.R.; Lytras, M.D.; Hassan, S.U.; Nawaz, R. A framework to evaluate the interoperability of information systems—Measuring the maturity of the business process alignment. Int. J. Inf. Manag. 2020, 54, 102153. [Google Scholar] [CrossRef]
  15. Almeida, C. OPC Foundation. Available online: https://opcfoundation.org/ (accessed on 6 September 2022).
  16. Cai, L.; Zhu, Y. The Challenges of Data Quality and Data Quality Assessment in the Big Data Era. Data Sci. J. 2015, 14, 2. [Google Scholar] [CrossRef]
  17. Leitner, S.H.; Mahnke, W. OPC UA—Service-Oriented Architecture for Industrial Applications. p. 6. Available online: https://picture.iczhiku.com/resource/paper/SHiwgroEUDwQIcxm.pdf (accessed on 20 March 2021).
  18. Kosanke, K. ISO Standards for Interoperability: A Comparison. In Interoperability of Enterprise Software and Applications; Konstantas, D., Bourrières, J.P., Léonard, M., Boudjlida, N., Eds.; Springer: London, UK, 2006; pp. 55–64. [Google Scholar] [CrossRef]
  19. OPC UA Unified Architecture. Available online: https://opcfoundation.org/about/opc-technologies/opc-ua/ (accessed on 21 July 2022).
  20. Home—vdma.org—VDMA. Available online: https://www.vdma.org/ (accessed on 18 January 2023).
  21. Halevy, A.; Ives, Z.; Suciu, D.; Tatarinov, I. Schema mediation in peer data management systems. In Proceedings of the 19th International Conference on Data Engineering (Cat. No.03CH37405), Bangalore, India, 5–8 March 2003; pp. 505–516. [Google Scholar] [CrossRef]
  22. OSIsoft | Intelligence Opérationnelle | PI System. Available online: https://www.osisoft.fr/ (accessed on 21 July 2022).
  23. Blind, K. The impact of standardisation and standards on innovation. In Handbook of Innovation Policy Impact; Edward Elgar Publishing Section: Cheltenham, UK, 2016; pp. 423–449. ISBN 9781784711856. [Google Scholar]
24. Rezaei, R.; Chiew, T.K.; Lee, S.P. A review of interoperability assessment models. J. Zhejiang Univ.-Sci. C 2013, 14, 663–681. [Google Scholar] [CrossRef]
  25. Amrina, E.; Vilsi, A.L. Key Performance Indicators for Sustainable Manufacturing Evaluation in Cement Industry. Procedia CIRP 2015, 26, 19–23. [Google Scholar] [CrossRef]
  26. Wu, Z.; Luo, H.; Yang, Y.; Lv, P.; Zhu, X.; Ji, Y.; Wu, B. K-PdM: KPI-Oriented Machinery Deterioration Estimation Framework for Predictive Maintenance Using Cluster-Based Hidden Markov Model. IEEE Access 2018, 6, 41676–41687. [Google Scholar] [CrossRef]
  27. Bardsiri, A.K.; Hashemi, S.M. QoS Metrics for Cloud Computing Services Evaluation. IJISA 2014, 6, 27–33. [Google Scholar] [CrossRef]
  28. Hoppe, S. Digital Transformation at Groupe Renault with GoogleCloud and OPC UA. Available online: https://opcfoundation.org/news/opc-foundation-news/digital-transformation-at-groupe-renault-with-googlecloud-and-opc-ua/ (accessed on 21 July 2022).
  29. MIMOSA—Open Standards for Physical Asset Management. Available online: https://www.mimosa.org/ (accessed on 21 July 2022).
  30. Vaidya, O.S.; Kumar, S. Analytic hierarchy process: An overview of applications. Eur. J. Oper. Res. 2006, 169, 1–29. [Google Scholar] [CrossRef]
  31. Mimouni, L.; Zellou, A.; Idri, A. Quality of Data in mediation systems. In Proceedings of the 2015 10th International Conference on Intelligent Systems: Theories and Applications (SITA), Rabat, Morocco, 20–21 October 2015; pp. 1–5. [Google Scholar] [CrossRef]
  32. OPC UA Information Models. Available online: https://opcfoundation.org/developer-tools/specifications-opc-ua-information-models (accessed on 13 September 2022).
  33. Zhao, K.; Xia, M. Forming interoperability through interorganizational systems standards. J. Manag. Inf. Syst. 2014, 30, 269–298. [Google Scholar] [CrossRef]
Figure 1. Protocol heterogeneity.
Figure 2. Protocol heterogeneity.
Figure 3. OPC UA architecture.
Figure 4. Data mediation.
Figure 5. Mediation SWOT.
Figure 6. Standardization SWOT.
Figure 7. IDACAP architecture.
Figure 8. IDACAP architecture.
Figure 9. AHP diagram example.
Figure 10. AHP scale.
Figure 11. AHP analysis steps.
Figure 12. Initial launch diagram.
Figure 13. AHP diagram for scalability.
Figure 14. Data preparation.
Figure 15. Data collection.
Figure 16. Data exploitation.
Figure 17. VDMA Tightening System CS.
Figure 18. Renault’s Tightening Model.
Table 1. Interoperability layers from EIF.

EIF Layers
----------
Interoperability governance
Integrated public service governance
Legal interoperability
Organizational interoperability
Semantic interoperability
Technical interoperability
Table 2. Metric classification.

Categories       | Metrics
-----------------|-------------------------------------------------------------------
ECONOMIC         | Implementation cost; Maintenance cost
TECHNICAL        | Efficiency; Ease of implementation; Ease of maintenance; Usability
INTEROPERABILITY | Formal; Informal; Technical
Table 3. Pairwise comparison matrix.

Factor | A   | B   | C
-------|-----|-----|----
A      | 1   | 3   | 9
B      | 1/3 | 1   | 1/5
C      | 1/9 | 5   | 1
Table 4. Layer 2 pairwise comparison matrix.

                 | Economy | Technical | Interoperability | Priorities
-----------------|---------|-----------|------------------|-----------
Economy          | 1.000   | 4.000     | 3.000            | 0.625
Technical        | 0.250   | 1.000     | 0.500            | 0.137
Interoperability | 0.300   | 2.000     | 1.000            | 0.238
Table 5. Layer 3 economy pairwise comparison matrix.

                            | Initial Implementation Cost | Maintenance Cost | Priorities | Global Weight
----------------------------|-----------------------------|------------------|------------|--------------
Initial implementation cost | 1.000                       | 1.000            | 0.500      | 0.313
Maintenance cost            | 1.000                       | 1.000            | 0.500      | 0.313
Table 6. Layer 3 technical pairwise comparison matrix.

                       | Efficiency | Ease of Implementation | Ease of Maintenance | Usability | Priorities | Global Weight
-----------------------|------------|------------------------|---------------------|-----------|------------|--------------
Efficiency             | 1.000      | 3.000                  | 4.000               | 3.000     | 0.520      | 0.071
Ease of implementation | 0.330      | 1.000                  | 2.000               | 0.330     | 0.138      | 0.019
Ease of maintenance    | 0.250      | 0.500                  | 1.000               | 0.330     | 0.090      | 0.012
Usability              | 0.330      | 3.000                  | 3.000               | 1.000     | 0.270      | 0.037
Table 7. Layer 3 interoperability pairwise comparison matrix.

          | Informal | Formal | Technical | Priorities | Global Weight
----------|----------|--------|-----------|------------|--------------
Informal  | 1.000    | 0.330  | 0.200     | 0.101      | 0.024
Formal    | 3.000    | 1.000  | 0.250     | 0.226      | 0.054
Technical | 5.000    | 4.000  | 1.000     | 0.674      | 0.160
Table 8. Solution pairwise comparison matrix with respect to the metrics.

Metric                      | Standardization | Mediation
----------------------------|-----------------|----------
Initial implementation cost | 0.143           | 0.857
Maintenance cost            | 0.330           | 0.667
Efficiency                  | 0.167           | 0.833
Ease of implementation      | 0.125           | 0.675
Ease of maintenance         | 0.333           | 0.667
Usability                   | 0.250           | 0.750
Informal interoperability   | 0.833           | 0.167
Formal interoperability     | 0.833           | 0.167
Technical interoperability  | 0.800           | 0.200
Table 9. Synthesis table for initial launch. The first two metric columns belong to the economy category, the next four to the technical category and the last three to the interoperability category.

Solution        | Initial Impl. Cost | Maintenance Cost | Efficiency | Ease of Impl. | Ease of Maint. | Usability | Informal | Formal | Technical | Final Score
----------------|--------------------|------------------|------------|---------------|----------------|-----------|----------|--------|-----------|------------
Standardization | 0.0447             | 0.1031           | 0.0119     | 0.0024        | 0.0041         | 0.0092    | 0.0200   | 0.0448 | 0.1283    | 0.3685
Mediation       | 0.2678             | 0.2084           | 0.0593     | 0.0128        | 0.0082         | 0.0277    | 0.0040   | 0.0090 | 0.0321    | 0.6294
Table 10. Metrics for scalability. Scalability cost and scalability capability are the two metrics added for this analysis.

Categories       | Metrics
-----------------|--------------------------------------------------------------------------------------------
Cost             | Implementation cost; Maintenance cost; Scalability cost
Technical        | Efficiency; Ease of implementation; Ease of maintenance; Usability; Scalability capability
Interoperability | Informal; Formal; Technical
Table 11. Layer 3 economy pairwise comparison matrix.

                            | Initial Impl. Cost | Maintenance Cost | Scalability Cost | Priorities | Global Weight
----------------------------|--------------------|------------------|------------------|------------|--------------
Initial implementation cost | 1.000              | 0.200            | 0.140            | 0.072      | 0.045
Maintenance cost            | 5.000              | 1.000            | 0.330            | 0.279      | 0.174
Scalability cost            | 7.000              | 3.000            | 1.000            | 0.649      | 0.406
Table 12. Layer 3 technical pairwise comparison matrix.

                       | Efficiency | Ease of Impl. | Ease of Maint. | Scalability Cap. | Usability | Priorities | Global Weight
-----------------------|------------|---------------|----------------|------------------|-----------|------------|--------------
Efficiency             | 1.000      | 4.000         | 3.000          | 0.500            | 3.000     | 0.277      | 0.038
Ease of implementation | 0.250      | 1.000         | 0.200          | 0.140            | 0.250     | 0.044      | 0.006
Ease of maintenance    | 0.330      | 5.000         | 1.000          | 0.330            | 0.100     | 0.137      | 0.019
Scalability capability | 2.000      | 7.000         | 3.000          | 1.000            | 4.000     | 0.420      | 0.058
Usability              | 0.330      | 4.000         | 1.000          | 0.250            | 1.000     | 0.122      | 0.017
Table 13. Synthesis table for scalability. The first three metric columns belong to the economy category, the next five to the technical category and the last three to the interoperability category.

Solution        | Init. Impl. Cost | Maint. Cost | Scal. Cost | Efficiency | Ease of Impl. | Ease of Maint. | Usability | Scal. Capability | Informal | Formal | Technical | Final Score
----------------|------------------|-------------|------------|------------|---------------|----------------|-----------|------------------|----------|--------|-----------|------------
Standardization | 0.006            | 0.145       | 0.355      | 0.032      | 0.001         | 0.016          | 0.013     | 0.051            | 0.020    | 0.045  | 0.128     | 0.811
Mediation       | 0.039            | 0.029       | 0.051      | 0.006      | 0.005         | 0.003          | 0.004     | 0.006            | 0.004    | 0.009  | 0.032     | 0.189