
Towards Flexible and Cognitive Production—Addressing the Production Challenges

Pro2Future GmbH, 4040 Linz, Austria
Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(17), 8696;
Submission received: 1 July 2022 / Revised: 24 August 2022 / Accepted: 25 August 2022 / Published: 30 August 2022
(This article belongs to the Special Issue Industry 5.0: Current Status, Challenges, and New Strategies)


Globalization in the field of industry is fostering the need for cognitive production systems. Before modern concepts, tools, and systems for such cognitive production can be implemented, several challenges at the shop floor level must first be resolved. This paper discusses the implementation of selected cognitive technologies in a real industrial case study of a construction machine manufacturer. The partner company follows the concept of mass customization but relies on manual labour for its high-variety assembly stations and lines. Sensing and guidance devices are used to provide information to the worker as well as to capture and monitor the work performed, while respecting data privacy policies. Next, a dedicated process of data contextualization, visual analytics, and causal discovery is used to extract useful information from the data retrieved via sensors. Communication and safety systems are then explained to complete the implementation loop of cognitive entities on a manual assembly line. This deepened involvement of cognitive technologies is human-centered rather than purely automated: the described cognitive technologies enhance human interaction with the processes and ease the production methods. Together, these concepts form a quintessential vision for an effective assembly line. The paper thus extends existing Industry 4.0 practice with an even more intensive human–machine interaction, moving towards cognitive production.

1. Introduction

Many industries are currently experiencing a transformation of their production processes, in a development which is often termed the 4th industrial revolution or 'Industry 4.0' for short [1]. The reasons for this transformation are manifold. On the one hand, globalization often requires decentralized manufacturing processes and consequently the automation which supports and controls such processes. On the other hand, due to the customization of goods and products, manufacturers increasingly need to take their customers' needs into account and tailor their products accordingly towards adaptive, flexible and intelligent production [2,3].
The Internet of Things [4], data analytics, machine learning (ML) [5], and other emerging technologies can be seen as major drivers behind the transition of traditional manufacturing processes. They enable the realization of intelligence within the automation components of the manufacturing process itself, the collection and provisioning of data in a distributed environment, and consequently the realization of higher-level services. Examples of references leveraging these technologies in different domains include machine monitoring [6], system security monitoring [7,8], chemical plant optimization [9,10], fault diagnosis [11], etc. Amidst this revolution, humans remain integral as strategic decision makers rather than operators [12], interacting with the system and shaping its final decisions.
Based on a real industrial use-case (explained in Section 2), this paper demonstrates how cognition resolves the major challenges of real-world assembly systems. These challenges include monitoring the high flexibility of assembly workers, the optimal planning and configuration of assembly lines, managing the large amount of data generated from different sensors and departments and ensuring efficient communication and overall safety on the shopfloor.
The paper is structured as follows: Section 1 elaborates on the motivation and vision behind this paper. Then, Section 2 describes the use-case this paper is based on, together with the research expectations. Afterwards, Section 3 illustrates the various specifics involved in building an ideal cognitive assembly line, which is further illustrated in Figure 1. The subsequent sections describe a step-by-step procedure to follow when moving towards cognitive production, addressing the relevant challenges in each specific field.
We begin with Section 4, which describes a solution to assembly line balancing in the case of missing relevant data. The next section, Section 5, describes the use of cognition to support flexible assembly systems through process monitoring and perception systems. The topics of data contextualization, predictive maintenance, visual analytics, and causal discovery are elaborated in Section 6. The process followed here is an independently developed approach, although it partially follows principles similar to those of established data analysis methodologies (e.g., KDD, CRISP-DM, SEMMA [13]). Section 7 expands the horizon into the field of communications and safety on the shop floor. Thus, this paper reviews the existing state of the art and paints a holistic picture of the different technologies that have been developed to facilitate an intensified human–machine interaction. Additionally, the article indicates a roadmap for industries as well as researchers for the consideration of the addressed topics in a cognitive production system.

2. Use Case

We propose different approaches to support production flexibility and cognition. These approaches are grounded in a real industrial use case of construction machine assembly by our industrial partner Wacker Neuson, a leading manufacturer of compact construction machines. This use case will subsequently be used in the remainder of the paper to demonstrate the proposed approaches and to depict "the added values of cognitive production".
Each plant hosts multiple assembly lines where different products are assembled. The assembly lines are laid out in a "fish-bone" structure with multiple pre-assembly workstations feeding into the main line. The buffer zone is embedded in the pre-assembly stations, and the stations are tightly coupled by a fixed tact time, i.e., every 't' minutes. Each station hosts a team of workers who are responsible for carrying out their assigned assembly tasks. The assembly work is mainly manual, with the assistance of large tools such as cranes to move parts, or lighter equipment such as e-screwdrivers or human assistive devices.
As mentioned above, the manually intensive assembly line is prone to errors, and monitoring the assembly line can mitigate these. The errors are difficult to monitor or to trace back to their source due to the complexity of the product and the many assembly tasks that must be performed across various stations. Moreover, the assemblies in the use-case scenario heavily involve screwing operations. The authors of this paper have therefore developed a monitoring approach that indicates the correctness of the screwing operations and flags faulty assemblies or operations.

3. Cognitive Entities

The following sections provide an overview of the key parameters, or "tendons", of an assembly line. An overview of the assembly process in the use case is depicted in Figure 1, from conception to maintenance, along with a top-down view of the involved cognitive technologies. These are explained in detail in the subsequent sections of the paper. Moreover, the adaptability of the assembly line is also increased by building a cognitive assembly line. Abdul Hadi et al. [14,15] describe the use of basic adaptive technologies to enhance flexibility and reduce errors in an assembly line.
With the great variety of products in the Wacker Neuson production portfolio, the complexity of the assembly line increases as well, which progressively leads to higher costs. The flexibility of a completely automated system is limited by its complexity; a manual assembly line, by contrast, offers the highest flexibility but induces human and quality errors. To counter the drawbacks and leverage the benefits of both automated and manual assembly, a newly defined approach is described in this paper. In this approach, human–machine interaction and innovative monitoring approaches with knowledge feedback loops are used. Sensory and actuatory devices, and the data-driven approach in particular, enable this interaction for building a cognitive assembly line. Combining several cognitive concepts under one umbrella is also deemed necessary and constitutes an innovative approach.

4. Cognitive Assembly Balancing

The thorough planning and configuration of assembly systems is critical for manufacturers to maximize production efficiency and profit [16]. The assembly line balancing problem (ALBP) is an optimization problem dealing with the partitioning of assembly work among stations with respect to prioritized objectives [17]. Rapid assembly balancing is of utmost importance for meeting ever-changing market demands and mitigating the effects of supply disruptions. Currently, most solutions of the ALBP rely on the assembly precedence graph, a directed acyclic graph describing all the dependencies between assembly tasks. Due to the lack of this necessary input data in real industrial settings, the majority of the proposed solutions to the ALBP remain inapplicable [18]. These data are often outdated, incomplete or altogether unavailable, and the creation and maintenance of the ever-changing precedence relations require extensive time and effort [19]. For example, in the automotive and related industries (including our industry partner), experts rely on their tacit knowledge of precedence relations to manually create an assembly line balance [20]. In [21], the authors design cognitive assistance systems for manual assembly using so-called quality function deployment (QFD). In their research, Pokorni et al. introduce six phases through which a cognitive assistance system is developed iteratively. Many factors are considered in the approach, such as the workers' information needs. As suggested by Pokorni et al., there should be several replications of case studies, covering many different assembly configurations, employees, quantities, complexities, and products. In this work, we deliver results and data for another case study, which adds to these prior investigations regarding the automation of assembly processes.
To this end, Ouijdane et al. [22] proposed an approach that assists manual assembly balancing by providing station assignment recommendations without relying on precedence relations. The approach is based on historical data from prior assembly balances (Figure 2) of different products and derives station assignment information from calculated similarities among tasks.
For each assembly task, the recommender system suggests a station assignment. The recommendations are based on similarities to tasks in prior assembly balances of similar products (e.g., those belonging to the same product family). In the evaluation, the recommendations for assembly line balancing achieve a score of over 91%, while the precision averages 82%. Hence, this approach can be utilized for building a cognitive assembly line with a great variety of products.
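The similarity-based recommendation idea can be illustrated with a minimal sketch. Note that the data model (attribute keyword sets per task) and the Jaccard similarity are assumptions for illustration only; the actual approach in [22] may use different task features and similarity measures.

```python
# Sketch of a similarity-based station recommender (hypothetical data model):
# each historical task carries a set of attribute keywords and the station
# it was assigned to in a prior assembly balance.

def jaccard(a, b):
    """Jaccard similarity between two attribute sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def recommend_station(new_task_attrs, history):
    """Recommend the station of the most similar task from prior balances.

    history: list of (attribute_set, station) tuples from similar products.
    Returns (station, similarity), or (None, 0.0) if history is empty.
    """
    best_station, best_sim = None, 0.0
    for attrs, station in history:
        sim = jaccard(new_task_attrs, attrs)
        if sim > best_sim:
            best_station, best_sim = station, sim
    return best_station, best_sim

# Hypothetical prior balances of a related product family
history = [
    ({"screwing", "chassis", "m8"}, "station_2"),
    ({"wiring", "cabin"}, "station_5"),
]
print(recommend_station({"screwing", "chassis", "m10"}, history))
```

In practice, the recommendation would be presented to the planner as a suggestion rather than applied automatically, consistent with the human-in-the-loop theme of the paper.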

5. Cognitive and Flexible Shop Floors

Most of the tasks in a high-variety assembly line are generally performed manually, making the line error-prone. A high variety is hard to tackle with fully autonomous systems, and OEMs (original equipment manufacturers) are thus again decreasing the level of automation [23,24] in order to gain more flexibility. Moreover, the level of quality expected by product customers today is very high, and the complexity of manufacturing puts a strain on assembly personnel. This disparity can be mitigated using assistance systems. These systems should be a companion technology for workers [25] and follow the principles of cognitive systems [26], supporting workers not only with the process but also according to their strengths and weaknesses [27], as summarized in the architecture in Figure 3.
This architecture consists of several layers, each of which provides cognitive abilities. Perception and awareness can be extracted using infrastructural and wearable sensors. The actuators on the other hand provide information related to autonomous acting. The linking of these domains is performed with skill and cognitive state detection, as explained by Haslgrübler et al. in [27,28].
To further improve flexibility in the context of assembly systems, decision support systems can be applied. When using collaborative robots (cobots), productivity can be increased on the shop floors. However, it is always a question of whether those additional support systems ease the work stress and workload of human workers or if they have a negative impact on human flexibility as an outcome. In general, those factors and assistive technologies are assessed in detail in [29]. Peron et al. describe how a fitting decision support system can utilize assistive technologies to support human workers effectively. For this purpose, the authors developed a decision tree based on cost models of four different assembly system configurations. We also consider other approaches, e.g., the Cloud Material Handling System (CMHS), which can further increase the flexibility and productivity of a manufacturing system [30]. IoT, Cloud Computing, and ML can all be integrated with CMHS, and proper scheduling of the manufacturing tasks improves overall flexibility in material-handling activities within manufacturing environments. Those are just a few examples of how flexibility can be increased in manufacturing processes.
In the following, we present a review of solutions found in the literature for assistance systems in the workplace, as sensing and acting need to be tailored to a specific use case and setting (Figure 4). Moreover, visual analytics also forms a crucial part of human–machine interaction, as it provides a real-time user interface for the workers; this is explained in Section 6.2.

5.1. Sensing

Sensors are a crucial element in the development of a cognitive assembly line. Two deployment modalities for sensing in an assembly line can be found in the literature: infrastructure-based sensors and mobile or wearable sensors. These are based on sensory information in the visual domain [31,32], the auditory domain [33,34], and the haptic/mechanical domain [35,36,37]. In addition, these sensors not only monitor humans and their behaviour [38,39,40] but also equipment [41,42], individual parts of the product [43,44] and machines [45]. Based on these sensory devices, not only can the assistance functionality be built, but monitoring of a whole shop floor can also be provided, as described by the following examples.
The manufacturing assembly steps involve macro and micro worksteps, where a macro workstep is a collection of micro worksteps. The objective is to distinguish activities consisting of short-duration hand motions, such as screwing, from movements such as walking or running. Data from multiple screwing activities were recorded, and extensive experiments were conducted on the architecture, window size, sliding rate, and weighting techniques, resulting in models capable of recognizing micro activities and macro worksteps [46].
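The micro-to-macro relation can be sketched as follows. The matching rule (overlap of micro-activity counts against an expected pattern) and all workstep names are illustrative assumptions; the models in [46] learn this mapping from data rather than using a fixed rule.

```python
# Sketch: inferring a macro workstep from recognized micro activities
# (pattern-matching rule and workstep names are hypothetical).
from collections import Counter

def micro_to_macro(micro_labels, macro_patterns):
    """Return the macro workstep whose expected micro activities best
    overlap the observed window of micro-activity predictions."""
    observed = Counter(micro_labels)
    best, best_score = None, -1
    for macro, expected in macro_patterns.items():
        score = sum(min(observed[m], c) for m, c in Counter(expected).items())
        if score > best_score:
            best, best_score = macro, score
    return best

# Hypothetical workflow: mounting a cover involves one pick and four screwings.
patterns = {
    "mount_cover": ["pick"] + ["screwing"] * 4,
    "route_cable": ["pick", "clip", "clip"],
}
window = ["pick", "screwing", "screwing", "walking", "screwing"]
print(micro_to_macro(window, patterns))  # best overlap: mount_cover
```

Noise labels such as "walking" simply contribute nothing to any pattern, so short irrelevant movements between hand motions do not change the inferred macro workstep.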
An infrastructure-deployed, computer-vision-based approach using a depth camera [36] tracks changes during the assembly of an ATM and classifies worksteps. Based on the presented solution, users can, for example, be provided with the correct documentation for each phase of the procedure, or inefficiencies in product manufacturing can be identified from significant variation in the time spent by each worker. Depth images collected by a RealSense camera [47] were used to overcome some of the privacy concerns and protect the personal data of the employees. Single-frame images were extracted from the collected video material. These were labeled and split into different classes representing the numerous worksteps of the workflow.
In combination with the depth sensors, inertial measurement units (IMUs) were deployed to identify micro activities of workers during the assembly of an excavator and provide confirmation of each complete macro-workstep or provide notifications that help workers focus on possible errors.
Data acquisition from the IMU sensors was unidirectional, and the device was required to be attached to the wrist of the user, similar to the approaches in [48,49]. This sensor position is generally accepted by employees in manufacturing as less disturbing and intrusive, while providing descriptive data for complex small-scale hand activities. The IMU sensors contain only an accelerometer, a gyroscope and a magnetometer, which keeps costs low, ensures minimal interference, and allows fast data processing. They can work as standalone devices in terms of mobility, since they function without cables, which is important when the worker moves around the working space to collect parts required for the assembly. Their ability to connect to the main system via Bluetooth supported the desired freedom of movement.
Another crucial issue, however, is the privacy and personal data protection of the user. In this respect, IMUs are considered to be more compliant with the General Data Protection Regulation (GDPR) and less invasive and obtrusive in the daily work of the operator compared to other sensor devices. Furthermore, IMUs are able to maintain continuous data recordings while on the move. The selected device, shown in Figure 3 (wearable sensors), is from the wireless wearable sensor producer Shimmer. It is a low-cost platform; the model used was the Shimmer3 GSR+ Unit, which contains, besides the sensors, an MSP430 microcontroller (24 MHz MSP430 CPU), an RN-42 Bluetooth radio, an integrated 8 GB microSD card, and a 450 mAh rechargeable Li-ion battery.
The received data for training the models were transmitted and monitored: the link to the IMUs was established via Bluetooth, and to the eye-tracker via cables. The activities were recorded at a sampling rate of 60 Hz and stored in CSV (comma-separated values) files. Each sample in a CSV file contains the accelerometer, gyroscope and magnetometer data in X, Y, Z for each IMU sensor, as well as the timestamp and the ground truth. In the next step, the raw time-series data were used as input for training machine learning and deep learning models.
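Preparing such recordings for model training typically means cutting the sample stream into fixed-length overlapping windows. The sketch below assumes a hypothetical column layout (timestamp, nine sensor channels, label) for a single IMU; the actual files contain columns for each deployed sensor.

```python
# Sketch: cutting the 60 Hz CSV recordings into overlapping training windows.
# Column layout is an assumption: timestamp, ax, ay, az, gx, gy, gz,
# mx, my, mz, label.
import csv
import io

def load_windows(csv_text, window_len=120, step=60):
    """Parse IMU samples and cut them into overlapping windows.

    At 60 Hz, window_len=120 gives a 2 s window with a 1 s sliding step
    (both values are illustrative, not the tuned parameters from the study).
    """
    rows = [
        ([float(v) for v in r[1:10]], r[10])   # 9 sensor channels + label
        for r in csv.reader(io.StringIO(csv_text))
    ]
    windows = []
    for start in range(0, len(rows) - window_len + 1, step):
        chunk = rows[start:start + window_len]
        features = [c[0] for c in chunk]
        # ground truth of the window: label of its last sample (assumption)
        windows.append((features, chunk[-1][1]))
    return windows
```

Each returned window pairs a `window_len × 9` feature matrix with a label, which is the shape expected by common time-series classifiers.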

5.2. Assembly Monitoring

Developing an assembly line in which workers and machines coexist smoothly is one of the crucial pillars of a cognitive assembly system. With a great variety of products, implementing supporting devices for human workers adds cognitive capability. In this section, techniques for assembly monitoring are explained along with a use case. Monitoring of assembly lines is, however, interlinked with the feedback systems explained in Section 5.3.
Human workers are an integral component in the assembly of several products, assuring flexibility on the shop floor [50,51]. The workers, however, tend to apply intuitive techniques in response to minor disturbances on the shop floor. These techniques include reordering of tasks, impromptu assembly collaborations, executing assembly tasks ahead or behind the allocated tact and outside the scope of the allocated station, etc. In this case, monitoring the assembly line is necessary for the timely and accurate tracking of the progress the assembly process is making. Assembly monitoring is important for the detection of severe deviations on the shop floor such as forgotten tasks, tasks taking longer than the estimated duration or stations not finishing within the allocated tact time [52]. Timely detection is crucial before such deviations are amplified and cause quality defects or even production disruption. Assembly process monitoring is also required to provide dynamically adequate assistance to workers [53,54], dynamically plan human–machine interactions or serve as feedback input for the assembly design and logistics.
Privacy concerns limit the types of sensors to be used as input for the assembly monitoring [55]. Indirect observations from the shop floor are a privacy-respecting alternative. Thus, data from body-worn sensors, depth sensors, part picking sensors or tool usage sensors, etc. can be collected. Guiza et al. in [56] describe an approach to timely and accurately monitor the progress of human-intensive assembly processes. This approach is based on indirect and incomplete observations from the shop floor. Figure 5 outlines our approach composed of two main phases: modeling (A) and monitoring (B).
We start by modeling the prescribed assembly process (1). The process is a succession of assembly sequences, each including the fine-granular assembly steps arranged in the recommended order. However, when facing unforeseen situations on the shop floor, workers tend to deviate from the prescribed process. We therefore model stricter constraints that cannot be violated; these construct precedence links between the steps, defining the priority of execution. Next, we instantiate the assembly process instances (2) according to the order schedule (where an order describes the feature configuration of the product). Lastly, we define a finite state machine (FSM) describing the task life cycle in order to track the state of the assembly steps and thus the progress of the overall assembly. The FSM describes the possible task states and the valid transitions between them; its memory is limited to the current state. Transitions indicate, for example, the activation of a task, signaling the beginning of its execution, or its completion.
The monitoring approach feeds on indirect shop floor observations (3). It is important to note that these observations do not directly identify a task, a station or a particular process instance. We therefore define a set of heuristics to identify the corresponding task and map each shop floor observation to an FSM task-state transition. We determine the task candidates associated with the indirect observations (tasks assembling the detected picked part, tasks using the detected tool, or tasks associated with a detected activity such as screwing, drilling, picking, etc.). Heuristics are then applied to determine the most suitable task candidate. The low-level assembly floor observation events are thus converted into high-level, process-task-specific events (4). We then trigger the FSM transition based on the task's previous state. The states of tasks that are not directly linked to an observation, i.e., unobservable tasks, are inferred from the precedence graph. An updated run-time representation of the assembly process is then available (5). Note that the precedence graph can be manually created for portions of the assembly process where, for example, a bottleneck has previously been identified.
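The task life cycle and the observation-to-task mapping described above can be sketched as follows. The state names, transitions, and the "first matching candidate" heuristic are simplifying assumptions; the approach in [56] uses a richer set of heuristics and states.

```python
# Minimal sketch of the task life-cycle FSM and the heuristic mapping of
# indirect observations to tasks (states and heuristics are assumptions).
VALID = {
    ("pending", "activate"): "active",
    ("active", "complete"): "done",
}

class TaskFSM:
    def __init__(self):
        self.state = "pending"   # memory is limited to the current state

    def fire(self, transition):
        """Apply a transition if it is valid from the current state."""
        nxt = VALID.get((self.state, transition))
        if nxt is None:
            raise ValueError(f"invalid transition {transition!r} from {self.state!r}")
        self.state = nxt
        return self.state

def map_observation(observation, task_index):
    """Heuristic: map an indirect observation (a picked part or a tool
    signal) to the task candidate that uses that part or tool."""
    candidates = [t for t, meta in task_index.items()
                  if observation in meta["parts"] or observation in meta["tools"]]
    return candidates[0] if candidates else None  # simplest rule: first match

# Hypothetical task index for one process instance
task_index = {
    "fix_cover": {"parts": {"cover_plate"}, "tools": {"e-screwdriver"}},
    "route_cable": {"parts": {"cable_harness"}, "tools": set()},
}
task = map_observation("e-screwdriver", task_index)
fsm = TaskFSM()
fsm.fire("activate")
print(task, fsm.state)
```

The invalid-transition check is what lets the monitor notice inconsistent observations (e.g., a completion event for a task that was never activated) rather than silently accepting them.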
The monitored data are later used as input to a deviation detection approach that detects three types of serious deviations which could harm the efficiency of the assembly and the quality of the assembled product [57]: delaying tasks, where tasks take longer than the estimated duration; delaying stations; and sequence deviations, where the recommended order of execution is altered. These detected deviations then need to be communicated to the assembly workers, line leaders or the logistics department for appropriate mitigation actions.
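The three deviation types can be checked with simple rules over the monitored task records, as in the sketch below. The record structure and thresholds are assumptions for illustration; the actual approach in [57] works on the run-time process representation.

```python
# Sketch: flagging the three deviation types on monitored task records
# (record keys and threshold logic are hypothetical).
def detect_deviations(tasks, tact_time):
    """tasks: list of dicts with keys id, station, start, end,
    estimated (duration), planned_rank (recommended execution order)."""
    deviations = []
    # 1. delaying task: actual duration exceeds the estimate
    for t in tasks:
        if t["end"] - t["start"] > t["estimated"]:
            deviations.append(("delaying_task", t["id"]))
    # 2. delaying station: total station workload exceeds the allocated tact
    stations = {}
    for t in tasks:
        stations[t["station"]] = stations.get(t["station"], 0) + (t["end"] - t["start"])
    for s, load in stations.items():
        if load > tact_time:
            deviations.append(("delaying_station", s))
    # 3. sequence deviation: execution order differs from the recommended order
    executed = sorted(tasks, key=lambda t: t["start"])
    if [t["planned_rank"] for t in executed] != sorted(t["planned_rank"] for t in tasks):
        deviations.append(("sequence_deviation", None))
    return deviations

tasks = [
    {"id": "a", "station": "s1", "start": 0, "end": 5, "estimated": 4, "planned_rank": 1},
    {"id": "b", "station": "s1", "start": 5, "end": 8, "estimated": 4, "planned_rank": 2},
]
print(detect_deviations(tasks, tact_time=10))  # [('delaying_task', 'a')]
```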

5.3. Feedback

Feedback systems are necessary in order to provide assistance to workers in production or alert them to detected deviations. These feedback systems are based on the actuators deployed throughout an assembly line. Again, we see two deployment modalities, namely infrastructure-based actuators and mobile and wearable actuators. The most prominent modality for feedback is the visual channel using projectors or static screens [58,59], peripheral displays [60], smart watches [61], and augmented reality glasses [62,63]. Furthermore, the auditory channel is also often engaged using speakers [64], wearable speakers [65] and earplugs [66]. Lastly, further feedback is also provided on the haptic channel using haptic displays [67], head-worn tactors [68,69], and wrist-worn devices [64].
We see that feedback for production can be performed on any channel [64], whereas each feedback device has limitations either in terms of environment (e.g., audio in loud environment) or human perception (e.g., not looking into the direction of the display) and needs careful consideration. Similar to sensing, the selection of actuators depends to a high degree on the use case and the placement of these devices. For example, the use of wrist worn devices when interacting with dangerous machines is rendered unsafe.
Researchers [70], for example, showed the deployment of feedback using wearable (haptic) and infrastructure (visual and auditory) feedback devices. They not only showed the inter-linkage with workflow sensing and the presentation of video snippets for each workstep but also how cognitive state and attention orientation must affect actuator selection, so that critical information gets through.
In this use case, the selected feedback system includes a wearable smartwatch since the product is moving down the manufacturing line during its assembly. The feedback device is connected to a server device where the reasoning, the monitoring, and the decision-making takes place, and it receives vibration signals after the completion of worksteps. Detailed information about completed macro and micro-worksteps will be displayed on its screen.
The assistive and monitoring process begins with the initialization of an application on the smartwatch that enables the collection of the IMU and depth data. The collected data are then sent to a CPU via Bluetooth and used as input to the deep learning models, which predict the number of activities that constitute each module (the micro activities contained in macro-worksteps) and the current module of the workflow in which the employees are working (the macro-workstep). The data from the sensors are only processed electronically, without being stored, during use of the system under real conditions. To this end, the models do not rely on any worker identification, assuring that privacy is preserved; the only data stored are those recorded to train the models.

6. Cognitive Data-Driven Approach to Improve the Assembly Line

Large volumes of data, or big data, are continuously generated from the shop floor. In our use case, data are generated from three specific sensors: an IMU sensor, a depth sensor, and a head-worn camera. The generated data must first be filtered, contextualized, and analyzed before final results can be derived. As depicted in Figure 6, this section presents a step-by-step approach for handling the data generated from the shop floor in a meaningful form.

6.1. Data Contextualization

Most of the information that manufacturers acquire from the assembly line during production is stored in its raw form without adequate contextual information. Manufacturers use NoSQL technologies [71], comma-separated-values files or even plain text files to store these data. The data volume in their data warehouses is large and constantly increasing, but the lack of proper metadata makes analysing and exploiting this material very difficult. Raw process information often conceals much valuable knowledge, and the key to this issue is to give raw data a context (i.e., a meaning) and to interpret it well so that the contained knowledge can be satisfactorily utilized. In order to contextualize process information and use the obtained knowledge for building a cognitive assembly line, we need to link raw data with prior contextual knowledge. This leads to more automated and accurate production systems.
Data contextualization was carried out using a semantic-driven approach for the three sensory devices: the IMU sensor, the depth sensor, and the head-worn camera. This approach is presented in detail by the authors in [72]. The raw data are stored in a NoSQL database, which preserves the information but without metadata or contextual information; the data are accessible through a hyperlink to a web service. All the contextual knowledge about the assembly line is simultaneously preserved in a dynamic knowledge management system, whose creation is enabled by semantic technologies and the open semantic framework tool [73]. The raw information is then retrieved into the open semantic framework via the hyperlink, enriched with the existing contextual knowledge, and this contextualization is used later when processing and analyzing the data (Figure 7). This approach greatly facilitates further data processing.
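The enrichment step can be pictured with a minimal sketch: a raw record referencing its payload by a hyperlink is joined with the contextual knowledge held for its sensor. The record structure, sensor ids, and the link are hypothetical and stand in for the actual open semantic framework schema.

```python
# Sketch: enriching a raw sensor record with contextual knowledge
# (record/context structure and the URL are illustrative assumptions).
context_kb = {
    "imu_03": {"sensor_type": "IMU", "worn_by_role": "assembly worker",
               "station": "pre-assembly 2", "product_family": "excavator"},
}

def contextualize(raw_record, kb):
    """Attach contextual metadata to a raw record via its sensor id."""
    ctx = kb.get(raw_record["sensor_id"], {})
    return {**raw_record, "context": ctx}

raw = {"sensor_id": "imu_03", "timestamp": 1656662400.0,
       "data_url": "https://example.org/data/imu_03/42"}  # hypothetical link
enriched = contextualize(raw, context_kb)
print(enriched["context"]["station"])
```

Once enriched, downstream analysis can filter or group the data by station, role, or product family instead of by opaque sensor ids.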

6.2. Visual Analytics

Automated data analysis offers powerful tools to analyze data throughout the whole industrial life cycle, promising valuable insights into industrial processes [74]. Visualization for smart manufacturing in several industrial sectors has been worked out in an extensive survey by Zhou et al. [75]. Visual analytics research offers techniques to intertwine automated data analysis and the rich understanding of the underlying industrial processes of domain experts through human perception and interactive visualization user interfaces [76].
Suschnigg et al. [77] researched a visual analytics approach to adapting a production process in an online manner, which has been implemented in the use case. Automated data analysis models predict the quality of products in an ongoing assembly process, while a dashboard visualization highlights interesting data points, which can be interactively explored to support engineers in their decision-making, e.g., in modifying a production process, promising increased efficiency.
The visualizations of the data received from the three sensory devices indicate the patterns of different activities, as seen in Figure 8. The depicted patterns refer to hand screwing, manual-tool screwing, electrical-screwdriver screwing and wrench screwing. The accelerometer, gyroscope and magnetometer data in X, Y, Z coordinates are shown in different colors, and the rectangular box indicates the detection of an activity. The number of peaks inside the selected region designates the number of repetitions of the activity in that sequence of data. The visualization highlights interesting data and offers interactive exploration of that data to support engineers in their decision-making. These techniques help in developing a cognitive assembly line, or production system, that supports flexibility and adaptiveness. Moreover, the interactive visualizations can provide initial insights for causal analytics, which improves the assembled product. The Inception-V3 and VGG-19 deep learning topologies are then used for the analysis of the screwing operations/activities. A detailed description and the results of these models are given in Section 8.
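Counting repetitions from peaks in the selected region can be sketched with a simple local-maximum rule, as below. The threshold and the toy trace are assumptions; an interactive tool would let the engineer adjust the selected region and threshold, and a library routine such as `scipy.signal.find_peaks` would be a natural production choice.

```python
# Sketch: counting peaks in a selected signal region to estimate the
# number of activity repetitions (threshold and data are hypothetical).
def count_peaks(signal, threshold=1.0):
    """Count local maxima above a threshold; each peak is taken as one
    repetition of the screwing activity within the selected region."""
    peaks = 0
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            peaks += 1
    return peaks

# Toy accelerometer-magnitude trace with three bursts
trace = [0.1, 0.2, 1.5, 0.3, 0.2, 1.8, 0.4, 0.1, 1.2, 0.2]
print(count_peaks(trace))  # 3
```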

6.3. Predictive Maintenance

The success of an overall assembly line largely depends on quality, reliability, and production costs, which are directly affected by the complexity of the manufacturing process. Implementing a predictive maintenance (PdM) strategy can bring all these benefits together by ensuring that the product operates in an ideal condition, avoiding unexpected breakdowns. PdM approaches primarily detect early signs of defects and also foresee when a defect is about to occur [79].
The quality of PdM approaches depends to a great extent on the quality of the available data [80]. The importance of this aspect has been clearly shown in the previous subsection (data contextualization). The challenges in ensuring successful and applicable PdM are threefold. First, various types of data, such as assembly data (IMU, depth sensor, and head-worn camera data), process sensory data, logs, and service data, are collected throughout the production process [81]. In our use-case, the sensor data are heterogeneous, including time series, numerical, categorical, and unstructured data. Secondly, some of these data are often missing, either completely or partially, which hinders the extraction of knowledge that predictive models could later build upon [80,82]. Finally, the high variety of produced products leads to the well-known small-data challenge [74]. In [82,83,84], PdM in the context of multi-component systems (MCS) has been analysed, which is a promising approach for handling the use-case challenges; in particular, MCS appears to be an encouraging solution to the small-data challenge and the high variety of products. Additionally, Thalmann et al. [74] support the argument that a great variety of products combined with small sets of reference measurements is a challenge for data-driven approaches.
In an industrial setting of End-of-Line (EoL) testing, where the last check of complex products is carried out after the complete assembly line, multiple tests are performed on the product in the course of the quality test. Devices that pass the quality test are considered devices with correct functions (see Figure 9). The decision on the state of a product after a quality test was previously left to the experience of domain experts [85]. However, with the introduction of mass customization, shorter product life cycles, and digital supply chains, this approach has reached its limits, and the need for new approaches has become clear. PdM appears promising in this context, and several studies have addressed these challenges by employing PdM solutions. Gashi et al. [86] introduced a new data-driven predictive approach, which has also been implemented in the use-case. In particular, this approach demonstrates how EoL test-based quality control can be enhanced in the case of missing usage data. Moreover, an MCS perspective in the context of PdM is proposed to remedy the small-data challenge. Finally, defect prediction of low-quality products over time is conducted when contextual information is available. In this context, engineers are assisted in effectively performing quality control and fault diagnosis. However, the application of these approaches requires a high level of explainability. One way to tackle this challenge is to use the previously described interactive visual approaches [87] as a tool to increase understanding, trust, and, as a result, the acceptance rate among engineers.
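As an illustration of the general idea of EoL test-based quality prediction (a generic sketch, not the specific approach of Gashi et al. [86]), a classifier can be trained on historical test measurements to flag likely-defective devices. The feature names and the failure rule below are synthetic assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# synthetic EoL measurements per device: vibration, temperature, torque deviation
X = rng.normal(size=(600, 3))
# hypothetical failure rule: jointly high vibration and torque deviation
y = (X[:, 0] + X[:, 2] > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"hold-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

A real deployment would face exactly the challenges listed above, i.e., heterogeneous features, missing values, and few samples per product variant, which is where the MCS perspective comes in.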

6.4. Causal Discovery

While predictive maintenance methodologies mitigate the errors occurring in an assembly line, causal discovery analyses the root causes of these errors. Simply stated, these approaches aim to provide conclusions similar to traditional engineering methods such as the Five Whys, but they extend their applicability to complex production systems where traditional approaches would be time-consuming or impossible to perform. In our use-case, causal discovery has a dual role. In the first role, a causal-reasoning perspective focused on the assembly process is taken, i.e., discovering causal relationships in the data collected from the worker (IMU sensor, depth sensor, and camera) to better understand the root causes of quality properties in the context of a single worker. In the second role, causal discovery takes a global perspective and utilizes data beyond the assembly stations to discover the causal interdependence of different production processes and their impact on overall product quality. In the following subsection, the role of causal discovery in the context of industrial processes is described and compared to other data-driven and traditional, domain-driven approaches.
The applicability of data-driven approaches in industrial environments is on the rise due to the increased availability of raw and contextualized data. Compared with the domain-driven approaches for process improvement, data-driven approaches can address three important aspects of industrial data [88]:
  • ML approaches can learn to model nonlinear and complex relationships;
  • Once trained, the model can capture possible hidden relationships, which enables better predictions on unseen data in the future;
  • These approaches do not impose any restrictions on the input variables and their distribution.
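The first point can be illustrated with a toy comparison on synthetic data: a linear model fails on a quadratic relationship that a tree-based ensemble captures without any manual feature engineering.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(400, 1))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=400)   # quadratic, i.e., nonlinear target

lin = LinearRegression().fit(X, y)
gbr = GradientBoostingRegressor(random_state=1).fit(X, y)
print(f"linear R^2:  {lin.score(X, y):.2f}")    # near 0: the linear model misses the shape
print(f"boosted R^2: {gbr.score(X, y):.2f}")    # close to 1
```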
With the increase in their complexity, however, especially with approaches such as deep learning, ML models are becoming extremely difficult to explain and are hence often referred to as black-box models [89].
The increasing complexity and the limited explainability and interpretability of complex ML models make it difficult to address the emerging requirements for the acceptance of these models and hinder their application in industrial and mission-critical scenarios. Furthermore, a significant number of these methods are based on associational patterns and correlation, which do not provide any insight into causal relationships, i.e., the underlying driving forces and generative mechanisms [90]. These aspects can be addressed through causal discovery, which goes beyond statistical dependency and focuses on cause-and-effect relationships. An advantage of knowing causal relationships rather than statistical associations is that the former enable prediction of the effects of actions that perturb the observed system [91] and can drive knowledge discovery through the detection of underlying driving forces in the assembly process.
While the gold standard for identifying causal relationships is controlled experimentation, the required experiments are often very expensive, time-consuming, and, as a result, frequently impossible to perform. Understanding causal relationships allows generalizations to be made in the absence of test data [92]. This is especially prominent in our assembly-line use-case, where, due to mass customization and the high flexibility of the assembly line, process stability is crucial.
There are many possible applications of causal discovery for process improvement in an assembly process. An approach presented by Vukovic et al. [93] focuses on discovering the parameters influencing quality in a production scenario. The discovery combines the development of a predictive model with its verification through discovery interviews with domain experts, visual analytics, and predictive-model analysis. Other application tasks include root cause analysis [94], fault (anomaly) detection [95], detection of plant-wide oscillations [96], causal discovery of alarm floods [97], quality improvement [98], and modeling of temporal patterns and their influence on different key performance indicators [99].
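A minimal sketch of the conditional-independence reasoning that underlies constraint-based causal discovery, on synthetic data with hypothetical variable names: quality correlates with station temperature, but the dependence vanishes once the mediating torque is controlled for, suggesting that temperature acts on quality only through torque.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# simulated causal chain with hypothetical names: temp -> torque -> quality
temp = rng.normal(size=n)
torque = 0.8 * temp + rng.normal(size=n)
quality = 0.8 * torque + rng.normal(size=n)     # quality depends on torque only

def partial_corr(a, b, given):
    """Correlation of a and b after regressing out 'given' (OLS residuals)."""
    G = np.column_stack([np.ones(len(given)), given])
    res_a = a - G @ np.linalg.lstsq(G, a, rcond=None)[0]
    res_b = b - G @ np.linalg.lstsq(G, b, rcond=None)[0]
    return np.corrcoef(res_a, res_b)[0, 1]

print(f"corr(temp, quality)          = {np.corrcoef(temp, quality)[0, 1]:.2f}")
print(f"corr(temp, quality | torque) = {partial_corr(temp, quality, torque):.2f}")
```

The marginal correlation is clearly nonzero while the partial correlation is close to zero; algorithms such as PC automate exactly this kind of test over all variable triples to orient a causal graph.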

7. Cognitive Communication and Safety on Shop Floors

The previous section explained how data are involved and how the relevant information is extracted from them. This section addresses the shop floor level, aiming at efficient communication and safety across the assembly line at Wacker Neuson.

7.1. Communication Systems

In a labour-intensive production process, many sequential steps must be performed to obtain the final product. If each machine, device, and component is connected, they can communicate specific information that optimizes the overall production process. An industrial environment is a mixture of fixed and movable components, meaning that a wireless communication system is required for most movable parts, e.g., AGVs. In the presented use case, moreover, the scenario is highly dynamic, and workers equipped with sensors (head-worn cameras) can move freely within the environment. The network topology must therefore be flexible enough to cover the entire area in which workers are moving. The direct communication path will often be blocked by various objects in the environment, in most cases of a metallic nature, which strongly attenuate wireless signals, making link failures likely at specific positions. Additionally, mobile components can distort and block signals during movement, rendering working links useless.
When creating a wireless communication system for industrial applications, all these issues have to be considered in advance to prevent the implementation of an inoperable system. Through simulations, many scenarios can be evaluated before the actual implementation in the environment. Such simulations can be used to optimize the architecture and minimize the costs of fulfilling the requirements of reliability, safety, and security. The shopfloor environment is represented by a CAD model, which is imported into the various simulation environments, in our case the CST Studio Suite. By defining material properties and interfering sources within the environment, a very detailed representation, including material behaviour, can be achieved to estimate most of the impacts on the wireless communication link. This includes effects such as scattering, multi-path propagation, reflections, and interference from other devices, all of which occur in industrial environments. These physical effects are scarcely considered in modern industrial scenarios.
Taking the harsh environment of our shopfloor into consideration, we identified the physical effects that have the biggest impact in such an environment. By means of simulation, most of the effects mentioned are considered. Based on the initial results derived from this simulation model, it is possible to decide on several communication parameters, such as topology, protocol, devices, and their antennas, to achieve the maximum throughput at each position in the scenario [100]. In this context, this means that, for each position of a wireless device, the optimum communication path can be derived. A detailed description of the model is provided in [101].
Antennas are a critical component of wireless communication. In most cases, the antennas of sensor nodes should have an omnidirectional radiation pattern so that their position can be changed arbitrarily in an unobstructed environment. In a heavily obstructed environment, such properties are only useful if the object of interest moves through the entire environment and the receiving stations are distributed over the whole area. In previous work, the limiting properties of the antennas of commercial off-the-shelf (COTS) devices have been thoroughly investigated [100,102]. The achievable distance with low-power devices is relatively small, and, if a large area has to be covered, many central communication nodes might be required. There are several approaches to topology, which are discussed later in this section.
The structure of such an assembly line must be dynamic, and thus the communication system must support this dynamic behavior by being adaptable. Together with appropriate antennas at the individual locations, the topology and protocol have the greatest impact on an efficient communication system. The 5G standard is touted as the universal solution for industrial applications; the problems mentioned above, however, remain the same due to physical constraints. Implementing a fully working 5G network in a defined and confined environment such as an assembly line is the most likely scenario for 5G communications, as the distances are short enough to utilize even the high frequency ranges of 5G (up to 60 GHz), which is difficult in any other scenario. Viewed realistically, a 5G network is only useful if there are devices or applications that require high bandwidths (e.g., raw image data) or low latency [103]. Low latency is required, e.g., when drones are controlled in real time. If only states and simple information are being exchanged (steering, checks, etc.), a 5G system still fits well. Much more interesting, however, are adaptive technologies that identify the best communication channel in real time. The mobile components in the context of the assembly line are the IMU, the depth sensor, and the workers with head-worn devices, all of which move around in the environment; some follow a fixed path, while others move around “randomly”. Considering all these deliberations, a system architecture for a cognitive assembly line can be created.
As a first step, a technology must be identified that satisfies the demands for reliability, latency, safety, security, battery life, and data rate. The most promising technology at present appears to be Bluetooth Low Energy (BLE), with high data rates and low energy consumption. BLE devices are very cheap and, with a fitting topology and adaptability, can meet most requirements for indoor wireless systems. With their easy deployment and low costs, they are very attractive for most companies. By adjusting the antennas on the COTS PCB (printed circuit board) devices, the throughput can be further increased [102]. In [104], an antenna recommendation system was designed and verified to identify fitting antennas for specific positions within a harsh environment. This approach is also applicable to industrial environments, which have properties similar to the investigated engine compartment. In Figure 10, two common architectures are depicted that represent wireless communication system solutions for cognitive assembly lines.
On the left side is a meshed topology, which can be very useful in such a harsh environment, where the direct communication path to another node might be blocked or strongly attenuated by the movement of objects or persons. The communication is then rerouted over the next best path to maintain a reliable connection. In the scenario on the right, a typical star topology with a central communication hub, the individual communication paths might fail at times. Star topology networks are not flexible in structure and must be defined at the outset. If a smart, adaptive assembly line is to be achieved, where parts and stations might change over time, the only solution is a meshed topology. Even though the effort required for developing a meshed topology is comparatively higher, the benefits increase to the same degree. The main benefit of a meshed topology is clearly that a connection can be routed over other devices for which a connection exists. If a large number of devices are present in the environment, there will always be an active connection over some route to the main transceiver station. This is not the case with a star topology, where the link will simply fail. The drawback of a meshed network is that the power consumption of the individual sending nodes increases, since each device might be used to relay the information of other devices. Not all devices will require a battery to power the communication hardware, however, and movable objects are able to charge themselves at charging stations. Once again, it is worth mentioning that simulations can already give a good indication of what type of topology might be required.
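The rerouting property of a meshed topology can be sketched as a hop-minimal search over the link graph; the node names and the link set below are hypothetical, and a breadth-first search stands in for a real mesh routing protocol.

```python
from collections import deque

def route(links, src, dst, failed=frozenset()):
    """Breadth-first search for a hop-minimal path, skipping failed links."""
    graph = {}
    for a, b in links:
        if frozenset((a, b)) in failed:
            continue
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route: the failure partitioned the network

# hypothetical mesh: one wearable node, two relay stations, one central hub
links = [("wearable", "relay1"), ("wearable", "relay2"),
         ("relay1", "hub"), ("relay2", "hub")]
print(route(links, "wearable", "hub"))
# a metal obstruction blocks wearable-relay1: traffic is rerouted via relay2
print(route(links, "wearable", "hub", failed={frozenset(("wearable", "relay1"))}))
```

In a star topology, the failed link would simply disconnect the wearable; the mesh keeps an alternative path alive as long as one exists.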
In the presented use case, where wearable sensors and depth sensors are applied to the workers and the environment, there will be situations in which the connection to the devices is disrupted. In the case of the wearable devices, the data connection could be disrupted for one of the previously mentioned reasons. There are several approaches to overcoming a connection loss. The easiest solution is to equip the wearable device with a small memory/storage in which the data recordings are stored until communication is re-established. The communication protocol has to periodically check whether the connection can be resumed and then re-transmit lost or not-yet-transmitted data packages. The data stream from depth sensors might be disrupted if a visual blockage occurs, which is also the case with camera sensors. In those scenarios, it makes sense to have visual algorithms in place that continue to work even if the view is blocked.
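The store-and-forward behaviour described above can be sketched as follows; the class, its callback interface, and the simulated link are illustrative assumptions, and a real node would additionally trigger the flush from a periodic timer.

```python
from collections import deque

class StoreAndForwardSensor:
    """Buffers readings locally while the wireless link is down and
    flushes them in order once the connection is re-established."""

    def __init__(self, transmit):
        self.transmit = transmit      # callback returning True on confirmed delivery
        self.buffer = deque()

    def record(self, sample):
        self.buffer.append(sample)
        self.flush()

    def flush(self):
        # in a real node this would also be triggered by a periodic timer
        while self.buffer:
            if not self.transmit(self.buffer[0]):
                break                 # link still down; keep the sample buffered
            self.buffer.popleft()     # drop only after confirmed delivery

# simulated link with a temporary outage
link_up = True
delivered = []

def transmit(sample):
    if link_up:
        delivered.append(sample)
        return True
    return False

node = StoreAndForwardSensor(transmit)
node.record("s1")     # delivered immediately
link_up = False
node.record("s2")     # buffered: link is down
node.record("s3")     # buffered
link_up = True
node.record("s4")     # link restored: backlog flushed in order
print(delivered)      # -> ['s1', 's2', 's3', 's4']
```

Removing a sample only after a confirmed delivery is what guarantees that no reading is lost, at the cost of local storage proportional to the outage length.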

7.2. Safety Systems

The desire for better and faster production, as in our use-case scenario with Wacker Neuson, has created an industrial environment in which people and machines collaborate without separation in the same physical space, e.g., a cognitive assembly line. As a result of this configuration, the safety of people and equipment has emerged as a major concern for engineers in human–robot collaborative work settings. To fully implement safety, the system needs to be monitored and, when necessary, there must be an override function for the control commands to ensure safe operation. A safety-related system thus implements the required safety functions by detecting hazardous conditions and bringing the operation to a safe state, ensuring that the desired action takes place.
Consequently, future safety systems should adapt to, or ideally anticipate, these dynamics in order to guarantee safety properties at all times [106]. Predictive fail-safe is the ability of a system to adapt its fail-safe measures to new configurations and situations that arise dynamically in smart factory environments, with the goal of protecting itself and its working environment, including human workers. Overall, safety in the future assembly line can be divided into two groups.

7.2.1. Safety of the Machines and Workers

The system safety for dealing with failures is termed functional safety. Functional safety denotes that the system is free of machine-related failures. Safety-certified components should be used in such a manner that they are capable of detecting anomalies and can adequately reach a safe state in the case of internal and external failures. Integrated safety measures such as over/under-temperature and over/under-voltage monitoring [107], memory protection [108], and code and timing consistency checks ensure the reliability, credibility, and availability of the safety PLC.
Safety in the system also means that the system is fault tolerant via redundancy. The most common redundant architectures are shown in Figure 11. In the context of sensors, redundancy denotes having multiple sensors that measure the same physical phenomenon and then deciding which of the values, if any, is valid. Moreover, future production lines will rely to a great extent on the data provided by sensors. This implies that sensory systems need to be reliable, since they are the main source of information for process control and safety functions.
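Such redundancy-based validation can be sketched as a 2-out-of-3 voter, a generic pattern rather than the specific architecture of Figure 11; the tolerance value is an assumed parameter.

```python
def vote_2oo3(readings, tolerance):
    """Median voting for triple-redundant sensors: return the median if at
    least two of the three readings agree within 'tolerance', otherwise None,
    signalling the safety function to enter a safe state."""
    a, b, c = sorted(readings)
    if b - a <= tolerance or c - b <= tolerance:
        return b          # the median always lies within the agreeing pair
    return None

print(vote_2oo3([20.1, 20.2, 20.1], tolerance=0.5))   # healthy: 20.1
print(vote_2oo3([20.1, 35.0, 20.2], tolerance=0.5))   # one faulty sensor: 20.2
print(vote_2oo3([5.0, 20.0, 40.0], tolerance=0.5))    # no agreement: None
```

Returning `None` rather than a best guess reflects the fail-safe principle: when the redundant channels disagree, the safety function must transition to a safe state instead of acting on unreliable data.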
Basic information on worker movement and behavior can be obtained via the IMU sensors in order to optimize the safety functions of the system. Localization in the current use-case is more dynamic and sophisticated: the head-worn camera with visual processing is used to detect the position and gestures of the workers. It is also important to mention that the health status (e.g., heart rate, blood pressure, breathing) of the workers should play an important role in future smart factories. These data indicate the condition of the workers and show how much stress they are experiencing. Human factors play a huge role in workplace safety, with fatigue and stress contributing readily to the number of accidents. With proper analysis of the health data, this number can be minimized.

7.2.2. Dynamic Future

As mentioned above, highly dynamic movement is expected in our use-case assembly line, in addition to stationary machines and robots. Intensive movement of robots or cobots will create safety risks in the physical, cognitive, and social realms. Therefore, all moving parts should be able to reason and must make workers feel safe. To achieve this, these machines (e.g., robots or cobots) must demonstrate the perception of objects versus humans, the ability to predict collisions, behavioural adaptability, sufficient memory to facilitate machine learning, and decision-making autonomy [109]. Along with static detection systems that track all further movements in the environment, future systems will require additional localization able to detect movements in real time. The localization system should prevent collisions between AGVs and workers, while additionally initiating a safe state of the machine in the event of a safety violation. These systems must be reliable, as most decisions will be made based on the information they provide [110]. Further research on localization systems in the direction of safety, energy efficiency, and development cost is in progress. Localization can be implemented with different technologies, including camera vision, wireless, or Bluetooth technology [111].

8. Conclusions and Outlook

This paper explains the various facets of an assembly line. All of these aspects and the sub-technologies involved must be considered in detail in the course of building a cognitive assembly line.
This paper divides the building blocks of the assembly line into four main sections, as shown in Figure 1. Three sensory devices have been used to retrieve the sensor data, i.e., perception and awareness. They are:
  • IMU sensor;
  • Depth sensor;
  • Head worn camera.
The recognition of activities, especially the patterns in screwing, is performed via the sensory devices. Depth sensors also track the movement of workers as simulated heat patterns, in such a manner that privacy issues are avoided. The screwing patterns and the related modelling are covered under the umbrella of ‘understanding and modelling’, as seen in Figure 4.
Furthermore, an independent model of reasoning and decision-making is established to include the concepts of predictive algorithms in making the right decisions for the assembly line and workers. The monitoring approach shown in this paper is a novel approach to assembly monitoring in the complex assembly line of Wacker Neuson, which also has a very high variety of products. Finally, the decisions, directions, or notifications to the workers are sent via autonomously acting actuators. A personal actuator, i.e., a smart watch, is used to transfer important one-to-one information, whereas a public actuator is used for the station-wise transfer of information. Static screens are used as public actuators, as they have high reliability at the shopfloor level [15]. The overall loop is controlled via an IoT messaging-broker system, which handles all the transfer of information and data.
The data generated by the sensors are analysed sequentially, as seen in Figure 6. From the analysis, inter-dependencies that could enhance the process can be derived. The process of data contextualization adds meaning to the extracted data; raw data are a concealed source of valuable information that can be extracted using a semantics-driven approach. Visual analytics offer a way to view the inter-dependencies in various plots. The screwing patterns are drawn using visualization approaches, as seen in Figure 8. With the help of these techniques, several insights can be investigated at the shop floor level, as visual analytics make the underlying machine learning models visible. Strategies involving predictive maintenance (PdM) techniques ensure that the product operates in an ideal condition by avoiding unexpected breakdowns. PdM methodologies can be implemented at EoL testing stations, which thoroughly analyze and certify the product. However, much research must yet be conducted in the direction of faster PdM techniques.
The overall performance of the activity detection is measured via the accuracy score. Two different configurations have been considered: baseline and optimized. Baseline refers to a model that has been pre-trained and fine-tuned, whereas, in the optimized model, all parameters have been trained from scratch and fine-tuned. The optimized model shows the better accuracy, of more than 91.4% [78]. As shown in Table 1, two main deep learning network topologies are used: Inception-V3 and VGG-19. These two were chosen for their comparatively high accuracy rates [112].
Communication and safety systems play a crucial role in the ideal functioning of an assembly line at the shop floor level. These systems can be designed via a CAD model in the CST Studio Suite for simulation. The use-case assembly line utilizes Bluetooth Low Energy devices for communication, as they are cheap and operate with low energy consumption. A meshed topology is utilized as the communication structure for the assembly line, as it fits well in a labour-intensive assembly process. The authors explain the predictive fail-safe methodology, in which the system is capable of adapting to new configurations in the assembly line. Moreover, the system must also be fault-tolerant, which is achieved through redundancy. Finally, localization techniques represent the dynamic future of safety systems on the shop floor and can be implemented with the help of camera vision, wireless, or Bluetooth technology.

9. Further Research

Advanced data analytics tools, such as machine learning and artificial intelligence, have the potential to greatly increase the safety of systems. The question arises, however, of whether artificial intelligence can be developed to such an extent that it will be capable of fully taking over such responsible tasks as ensuring the safety of both workers and machines. AI is already the subject of extensive discussion in research circles, where it is already in use for safety purposes [113]. On the industrial front, however, there is at present only partial integration and acceptance of AI in the safety context. The contribution of AI and ML to safety is currently focused mostly on fault detection algorithms [114,115].
Further research must also be performed in the direction of implementing sensor-fusion technology for communication systems and thereby implementing it in an assembly line. Research must also focus on deeper PdM techniques as well as on causal discovery with reference to cognitive assembly lines. Future work on each aspect has been outlined briefly in the respective sections. Thus, this paper can be regarded as a well-developed road-map for addressing future production challenges. The methodologies are summarized as an enhanced human–machine interaction, an evolution from Industry 4.0.
Perception and awareness sensors can also be expanded to include several other sensory input devices. However, the communication implications must be kept in mind, as an increase in the number of cognitive devices will require an even stronger communication network. Further research must also include sustainable processes in assembly and production [116], addressing the ever-increasing sustainable development goals and the requirements of climate neutrality. These issues are also the subject of ongoing research at Pro2Future, with the focus on attaining cognition in production via sustainable methods.

Author Contributions

Conceptualization, M.A.H. and D.K.; methodology, M.A.H.; software, M.A.H., D.K., A.K., J.S., O.G., M.G., G.S., M.V., K.M. and M.H.; validation, M.A.H., M.B. and K.D.; resources, O.G. and G.S.; data curation, O.G. and G.S.; writing—original draft preparation, M.A.H.; writing—review and editing, M.A.H.; visualization, M.A.H., O.G. and G.S.; supervision, K.D.; project administration, M.A.H. All authors have read and agreed to the published version of the manuscript.


Funding

Open Access Funding by the Graz University of Technology.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.


Acknowledgments

This work has been supported by the FFG, Contract No. 881844: Pro2Future is funded within the Austrian COMET Program Competence Centers for Excellent Technologies under the auspices of the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology, the Austrian Federal Ministry for Digital and Economic Affairs and of the Provinces of Upper Austria and Styria. COMET is managed by the Austrian Research Promotion Agency FFG.

Conflicts of Interest

The authors declare no conflict of interest.


References

  1. Vaidya, S.; Ambad, P.; Bhosle, S. Industry 4.0—A glimpse. Procedia Manuf. 2018, 20, 233–238. [Google Scholar] [CrossRef]
  2. Sanders, A.; Elangeswaran, C.; Wulfsberg, J.P. Industry 4.0 implies lean manufacturing: Research activities in industry 4.0 function as enablers for lean manufacturing. J. Ind. Eng. Manag. (JIEM) 2016, 9, 811–833. [Google Scholar] [CrossRef]
  3. Zhong, R.Y.; Xu, X.; Klotz, E.; Newman, S.T. Intelligent manufacturing in the context of industry 4.0: A review. Engineering 2017, 3, 616–630. [Google Scholar] [CrossRef]
  4. Xu, L.D.; Xu, E.L.; Li, L. Industry 4.0: State of the art and future trends. Int. J. Prod. Res. 2018, 56, 2941–2962. [Google Scholar] [CrossRef]
  5. Lee, J.; Kao, H.A.; Yang, S. Service innovation and smart analytics for industry 4.0 and big data environment. Procedia Cirp 2014, 16, 3–8. [Google Scholar] [CrossRef]
  6. Bacci di Capaci, R.; Scali, C. A Cloud-Based Monitoring System for Performance Assessment of Industrial Plants. Ind. Eng. Chem. Res. 2020, 59, 2341–2352. [Google Scholar] [CrossRef]
  7. Tran, M.Q.; Elsisi, M.; Liu, M.K.; Vu, V.; Mahmoud, K.; Darwish, M.M.F.; Abdelaziz, A.; Lehtonen, M. Reliable Deep Learning and IoT-Based Monitoring System for Secure Computer Numerical Control Machines Against Cyber-Attacks With Experimental Verification. IEEE Access 2022, 10, 23186–23197. [Google Scholar] [CrossRef]
  8. Elsisi, M.; Tran, M.Q.; Mahmoud, K.; Mansour, D.E.; Lehtonen, M.; Darwish, M.M.F. Towards Secured Online Monitoring for Digitalized GIS Against Cyber-Attacks Based on IoT and Machine Learning. IEEE Access 2021, 9, 78415–78427. [Google Scholar] [CrossRef]
  9. Vaccari, M.; Bacci di Capaci, R.; Brunazzi, E.; Tognotti, L.; Pierno, P.; Vagheggi, R.; Pannocchia, G. Implementation of an Industry 4.0 system to optimally manage chemical plant operation. IFAC-PapersOnLine 2020, 53, 11545–11550. [Google Scholar] [CrossRef]
  10. Vaccari, M.; Bacci di Capaci, R.; Tognotti, L.; Pierno, P.; Vagheggi, R.; Pannocchia, G. Optimally Managing Chemical Plant Operations: An Example Oriented by Industry 4.0 Paradigms. Ind. Eng. Chem. Res. 2021, 60, 7853–7867. [Google Scholar] [CrossRef]
  11. Elsisi, M.; Tran, M.Q.; Mahmoud, K.; Mansour, D.E.; Lehtonen, M.; Darwish, M.M.F. Effective IoT-based deep learning platform for online fault diagnosis of power transformers against cyberattacks and data uncertainties. Measurement 2022, 190, 110686. [Google Scholar] [CrossRef]
  12. Hermann, M.; Pentek, T.; Otto, B. Design principles for industrie 4.0 scenarios. In Proceedings of the 2016 49th Hawaii International Conference on System Sciences (HICSS), Koloa, HI, USA, 5–8 January 2016; pp. 3928–3937. [Google Scholar]
  13. Azevedo, A.I.R.L.; Santos, M.F. KDD, SEMMA and CRISP-DM: A parallel overview. IADS-DM 2008. [Google Scholar]
  14. Abdul Hadi, M.; Brillinger, M.; Haas, F. Adaptive Assembly Approach for E-Axles. In Proceedings of the 4th EAI International Conference on Management of Manufacturing Systems; Knapcikova, L., Balog, M., Perakovic, D., Perisa, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2020; pp. 249–260. [Google Scholar]
  15. Abdul Hadi, M.; Brillinger, M.; Weinzerl, M. Parametric evaluation and cost analysis in an e-axle assembly layout. In Proceedings of the 5th EAI International Conference on Management of Manufacturing Systems; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1–15. [Google Scholar]
  16. Boysen, N.; Fliedner, M.; Scholl, A. A classification of assembly line balancing problems. Eur. J. Oper. Res. 2007, 183, 674–693. [Google Scholar] [CrossRef]
  17. Scholl, A.; Becker, C. State-of-the-art exact and heuristic solution procedures for simple assembly line balancing. Eur. J. Oper. Res. 2006, 168, 666–693. [Google Scholar] [CrossRef]
  18. Boysen, N.; Schulze, P.; Scholl, A. Assembly line balancing: What happened in the last fifteen years? Eur. J. Oper. Res. 2021. [Google Scholar] [CrossRef]
  19. Otto, C.; Otto, A. Multiple-source learning precedence graph concept for the automotive industry. Eur. J. Oper. Res. 2014, 234, 253–265. [Google Scholar] [CrossRef]
  20. Klindworth, H.; Otto, C.; Scholl, A. On a learning precedence graph concept for the automotive industry. Eur. J. Oper. Res. 2012, 217, 259–269. [Google Scholar] [CrossRef]
  21. Pokorni, B.; Popescu, D.; Constantinescu, C. Design of Cognitive Assistance Systems in Manual Assembly Based on Quality Function Deployment. Appl. Sci. 2022, 12, 3887. [Google Scholar] [CrossRef]
  22. Guiza, O.; Mayr-Dorn, C.; Mayhofer, M.; Egyed, A.; Rieger, H.; Brandt, F. Recommending Assembly Work to Station Assignment Based on Historical Data. In Proceedings of the 2021 IEEE 26th International Conference on Emerging Technologies and Factory Automation (ETFA), Vasteras, Sweden, 7–10 September 2021. [Google Scholar]
  23. Behrmann, E.; Rauwald, C. Mercedes Boots Robots From the Production Line. 2016. Available online: (accessed on 1 February 2017).
  24. Hull, D. Musk Says Excessive Automation Was ‘My Mistake’. 2018. Available online: (accessed on 19 June 2018).
  25. Wendemuth, A.; Biundo, S. A companion technology for cognitive technical systems. In Cognitive Behavioural Systems; Springer: Berlin/Heidelberg, Germany, 2012; pp. 89–103. [Google Scholar]
  26. Trendafilov, D.; Zia, K.; Ferscha, A.; Abbas, A.; Azadi, B.; Selymes, J.; Haslgrübler, M. Cognitive Products: System Architecture and Operational Principles. In Proceedings of the Cognitive 2019 Proceedings, Venice, Italy, 5–9 May 2019; Franova, C.M., Sennersten, J.T.D., Eds.; 2019; p. 62. [Google Scholar]
  27. Haslgrübler, M.; Gollan, B.; Ferscha, A. A Cognitive Assistance Framework for Supporting Human Workers in Industrial Tasks. IT Prof. 2018, 20, 8. [Google Scholar] [CrossRef]
28. Haslgrübler, M.; Gollan, B.; Tomay, C.; Ferscha, A.; Heftberger, J. Towards Skill Recognition using Eye-Hand Coordination in Industrial Production. In Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Rhodes, Greece, 5–7 June 2019; ACM: New York, NY, USA, 2019. [Google Scholar]
  29. Peron, M.; Sgarbossa, F.; Strandhagen, J.O. Decision support model for implementing assistive technologies in assembly activities: A case study. Int. J. Prod. Res. 2022, 60, 1341–1367. [Google Scholar] [CrossRef]
  30. Sgarbossa, F.; Peron, M.; Fragapane, G. Cloud Material Handling Systems: Conceptual Model and Cloud-Based Scheduling of Handling Activities. In Scheduling in Industry 4.0 and Cloud Manufacturing; Sokolov, B., Ivanov, D., Dolgui, A., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 87–101. [Google Scholar] [CrossRef]
  31. Rude, D.J.; Adams, S.; Beling, P.A. A Benchmark Dataset for Depth Sensor Based Activity Recognition in a Manufacturing Process. IFAC-PapersOnLine 2015, 48, 668–674. [Google Scholar] [CrossRef]
  32. Bleser, G.; Damen, D.; Behera, A.; Hendeby, G.; Mura, K.; Miezal, M.; Gee, A.; Petersen, N.; Maçães, G.; Domingues, H.; et al. Cognitive learning, monitoring and assistance of industrial workflows using egocentric sensor networks. PLoS ONE 2015, 10, e0127769. [Google Scholar] [CrossRef] [PubMed]
  33. Cheng, C.F.; Rashidi, A.; Davenport, M.A.; Anderson, D. Audio Signal Processing for Activity Recognition of Construction Heavy Equipment. In Proceedings of the ISARC International Symposium on Automation and Robotics in Construction, Auburn, AL, USA, 18–21 July 2016; Volume 33, p. 1. [Google Scholar]
  34. Lenz, C.; Sotzek, A.; Röder, T.; Radrich, H.; Knoll, A.; Huber, M.; Glasauer, S. Human workflow analysis using 3D occupancy grid hand tracking in a human-robot collaboration scenario. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 3375–3380. [Google Scholar] [CrossRef]
  35. Malaisé, A.; Maurice, P.; Colas, F.; Charpillet, F.; Ivaldi, S. Activity Recognition With Multiple Wearable Sensors for Industrial Applications. In Proceedings of the ACHI 2018—Eleventh International Conference on Advances in Computer-Human Interactions, Rome, Italy, 25–29 March 2018. [Google Scholar]
  36. Maekawa, T.; Nakai, D.; Ohara, K.; Namioka, Y. Toward Practical Factory Activity Recognition: Unsupervised Understanding of Repetitive Assembly Work in a Factory. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Heidelberg, Germany, 12–16 September 2016; pp. 1088–1099. [Google Scholar] [CrossRef]
  37. Campbell, T.; Harper, J.; Hartmann, B.; Paulos, E. Towards Digital Apprenticeship: Wearable Activity Recognition in the Workshop Setting. Technical Report No. UCB/EECS-2015-172. 2015. Available online: (accessed on 19 June 2018).
  38. Reining, C.; Schlangen, M.; Hissmann, L.; ten Hompel, M.; Moya, F.; Fink, G.A. Attribute Representation for Human Activity Recognition of Manual Order Picking Activities. In Proceedings of the 5th International Workshop on Sensor-based Activity Recognition and Interaction, Berlin, Germany, 20–21 September 2018. [Google Scholar] [CrossRef]
  39. Tao, W.; Lai, Z.H.; Leu, M.C.; Yin, Z. Worker Activity Recognition in Smart Manufacturing Using IMU and sEMG Signals with Convolutional Neural Networks. Procedia Manuf. 2018, 26, 1159–1166. [Google Scholar] [CrossRef]
  40. Avrahami, D.; Patel, M.; Yamaura, Y.; Kratz, S. Below the Surface: Unobtrusive Activity Recognition for Work Surfaces Using RF-radar Sensing. In Proceedings of the 23rd International Conference on Intelligent User Interfaces, Tokyo, Japan, 7–11 March 2018; pp. 439–451. [Google Scholar] [CrossRef]
  41. Al-Naser, M.; Ohashi, H.; Ahmed, S.; Nakamura, K.; Akiyama, T.; Sato, T.; Nguyen, P.; Dengel, A. Hierarchical Model for Zero-shot Activity Recognition using Wearable Sensors. In Proceedings of the 10th International Conference on Agents and Artificial Intelligence—Volume 2: ICAART, Madeira, Portugal, 16–18 January 2018; pp. 478–485. [Google Scholar] [CrossRef]
  42. Yang, J.; Shi, Z.; Wu, Z. Vision-based action recognition of construction workers using dense trajectories. Adv. Eng. Inform. 2016, 30, 327–336. [Google Scholar] [CrossRef]
  43. Makantasis, K.; Doulamis, A.; Doulamis, N.; Psychas, K. Deep learning based human behavior recognition in industrial workflows. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 1609–1613. [Google Scholar] [CrossRef]
  44. Voulodimos, A.; Grabner, H.; Kosmopoulos, D.; Van Gool, L.; Varvarigou, T. Robust Workflow Recognition Using Holistic Features and Outlier-Tolerant Fused Hidden Markov Models. In Proceedings of the Artificial Neural Networks—ICANN 2010, Thessaloniki, Greece, 15–18 September 2010; pp. 551–560. [Google Scholar]
  45. Akhavian, R.; Behzadan, A.H. Construction equipment activity recognition for simulation input modeling using mobile sensors and machine learning classifiers. Adv. Eng. Inform. 2015, 29, 867–877. [Google Scholar] [CrossRef]
  46. Sopidis, G.; Ahmad, A.; Michael, H.; Ferscha, A. Micro-Activities Recognition and Macro Worksteps Classification for Industrial IoT Processes. In Proceedings of the 11th International Conference on the Internet of Things (IoT’21), St. Gallen, Switzerland, 8–12 November 2021; ACM: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
  47. Keselman, L.; Iselin Woodfill, J.; Grunnet-Jepsen, A.; Bhowmik, A. Intel realsense stereoscopic depth cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 1–10. [Google Scholar]
  48. Azadi, B.; Haslgrübler, M.; Sopidis, G.; Murauer, M.; Anzengruber, B.; Ferscha, A. Feasibility analysis of unsupervised industrial activity recognition based on a frequent micro action. In Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Rhodes, Greece, 5–7 June 2019; pp. 368–375. [Google Scholar]
  49. Koskimäki, H.; Huikari, V.; Siirtola, P.; Röning, J. Behavior modeling in industrial assembly lines using a wrist-worn inertial measurement unit. J. Ambient. Intell. Humaniz. Comput. 2013, 4, 187–194. [Google Scholar] [CrossRef]
  50. Bannat, A.; Bautze, T.; Beetz, M.; Blume, J.; Diepold, K.; Ertelt, C.; Geiger, F.; Gmeiner, T.; Gyger, T.; Knoll, A.; et al. Artificial Cognition in Production Systems. IEEE Trans. Autom. Sci. Eng. 2011, 8, 148–174. [Google Scholar] [CrossRef]
  51. Fantini, P.; Tavola, G.; Taisch, M.; Barbosa, J.; Leitao, P.; Liu, Y.; Sayed, M.S.; Lohse, N. Exploring the integration of the human as a flexibility factor in CPS enabled manufacturing environments: Methodology and results. In Proceedings of the IECON 2016—42nd Annual Conference of the IEEE Industrial Electronics Society, Florence, Italy, 23–26 October 2016; pp. 5711–5716. [Google Scholar] [CrossRef]
  52. Srewil, Y.; Scherer, R.J. Effective Construction Process Monitoring and Control through a Collaborative Cyber-Physical Approach. In Collaborative Systems for Reindustrialization; Camarinha-Matos, L.M., Scherer, R.J., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 172–179. [Google Scholar]
  53. Aehnelt, M.; Bader, S. Tracking Assembly Processes and Providing Assistance in Smart Factories. In Proceedings of the ICAART 2014: International Conference on Agents and Artificial Intelligence, Angers, France, 6–8 March 2014; Volume 1, pp. 161–168. [Google Scholar]
  54. Tarallo, A.; Mozzillo, R.; Di Gironimo, G.; De Amicis, R. A cyber-physical system for production monitoring of manual manufacturing processes. Int. J. Interact. Des. Manuf. (IJIDeM) 2018, 12, 1235–1241. [Google Scholar] [CrossRef]
  55. Yerby, J. Legal and ethical issues of employee monitoring. Online J. Appl. Knowl. Manag. 2013, 1, 44–55. [Google Scholar]
  56. Guiza, O.; Mayr-Dorn, C.; Weichhart, G.; Mayhofer, M.; Zangi, B.B.; Egyed, A.; Fanta, B.; Gieler, M. Monitoring of Human-Intensive Assembly Processes Based on Incomplete and Indirect Shopfloor Observations. In Proceedings of the 2021 IEEE 19th International Conference on Industrial Informatics (INDIN), Palma de Mallorca, Spain, 21–23 July 2021. [Google Scholar]
  57. Guiza, O.; Mayr-Dorn, C.; Weichhart, G.; Mayhofer, M.; Zangi, B.B.; Egyed, A.; Fanta, B.; Gieler, M. Automated Deviation Detection for Partially-Observable Human-Intensive Assembly Processes. In Proceedings of the 2021 IEEE 19th International Conference on Industrial Informatics (INDIN), Palma de Mallorca, Spain, 21–23 July 2021. [Google Scholar]
  58. Funk, M.; Bächler, A.; Bächler, L.; Korn, O.; Krieger, C.; Heidenreich, T.; Schmidt, A. Comparing projected in-situ feedback at the manual assembly workplace with impaired workers. In Proceedings of the 8th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Corfu, Greece, 1–3 July 2015. [Google Scholar]
  59. Funk, M.; Kosch, T.; Kettner, R.; Korn, O.; Schmidt, A. Motioneap: An overview of 4 years of combining industrial assembly with augmented reality for industry 4.0. In Proceedings of the Conference on Knowledge Technologies and Datadriven Business, Graz, Austria, 18 October 2016. [Google Scholar]
  60. Dingler, T.; Schmidt, A. Peripheral displays to support human cognition. In Peripheral Interaction; Springer: Berlin/Heidelberg, Germany, 2016; pp. 167–181. [Google Scholar]
  61. Ziegler, J.; Heinze, S.; Urbas, L. The potential of smartwatches to support mobile industrial maintenance tasks. In Proceedings of the Conference on Emerging Technologies & Factory Automation, Luxembourg, 8–11 September 2015. [Google Scholar]
  62. Ong, S.K.; Nee, A.Y.C. Virtual and Augmented Reality Applications in Manufacturing; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  63. Büttner, S.; Funk, M.; Sand, O.; Röcker, C. Using head-mounted displays and in-situ projection for assistive systems: A comparison. In Proceedings of the 9th ACM International Conference on Pervasive Technologies Related to Assistive Environments, Corfu Island, Greece, 29 June–1 July 2016. [Google Scholar]
  64. Funk, M.; Heusler, J.; Akcay, E.; Weiland, K.; Schmidt, A. Haptic, Auditory, or Visual?: Towards Optimal Error Feedback at Manual Assembly Workplaces. In Proceedings of the 9th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Corfu Island, Greece, 29 June 2016–1 July 2016. [Google Scholar]
  65. Petzold, B.; Zaeh, M.F.; Faerber, B.; Deml, B.; Egermeier, H.; Schilp, J.; Clarke, S. A study on visual, auditory, and haptic feedback for assembly tasks. Presence Teleoperators Virtual Environ. 2004, 13, 16–21. [Google Scholar] [CrossRef]
  66. Wilson, J.; Walker, B.N.; Lindsay, J.; Cambias, C.; Dellaert, F. Swan: System for wearable audio navigation. In Proceedings of the 2007 11th IEEE International Symposium on Wearable Computers, Boston, MA, USA, 11–13 October 2007; pp. 91–98. [Google Scholar]
  67. Carter, T.; Seah, S.A.; Long, B.; Drinkwater, B.; Subramanian, S. UltraHaptics: Multi-point mid-air haptic feedback for touch surfaces. In Symposium on User Interface Software and Technology; ACM: New York, NY, USA, 2013. [Google Scholar]
  68. Berning, M.; Braun, F.; Riedel, T.; Beigl, M. ProximityHat: A head-worn system for subtle sensory augmentation with tactile stimulation. In Proceedings of the 2015 ACM International Symposium on Wearable Computers, Osaka, Japan, 7–11 September 2015. [Google Scholar]
  69. Diener, V.; Beigl, M.; Budde, M.; Pescara, E. VibrationCap: Studying vibrotactile localization on the human head with an unobtrusive wearable tactile display. In Proceedings of the 2017 ACM International Symposium on Wearable Computers, Maui, HI, USA, 11–15 September 2017. [Google Scholar]
  70. Haslgrübler, M.; Fritz, P.; Gollan, B.; Ferscha, A. Getting Through—Modality Selection in a Multi-Sensor-Actuator Industrial IoT Environment. In Proceedings of the 7th International Conference on the Internet of Things, Linz, Austria, 22–25 October 2017; p. 8. [Google Scholar] [CrossRef]
  71. Madison, M.; Barnhill, M.; Napier, C.; Godin, J. NoSQL Database Technologies. J. Int. Technol. Inf. Manag. 2015, 24, 1. [Google Scholar]
  72. Milenkovic, K.; Mayer, S.; Diwold, K.; Zehetner, J. Enabling Knowledge Management in Complex Industrial Processes Using Semantic Web Technology. In Proceedings of the 2019 International Conference on Theory and Applications in the Knowledge Economy, TAKE 2019, Vienna, Austria, 3–5 July 2019. [Google Scholar]
  73. Mayer, S.; Hodges, J.; Yu, D.; Kritzler, M.; Michahelles, F. An Open Semantic Framework for the Industrial Internet of Things. IEEE Intell. Syst. 2017, 32, 96–101. [Google Scholar] [CrossRef]
  74. Thalmann, S.; Gursch, H.G.; Suschnigg, J.; Gashi, M.; Ennsbrunner, H.; Fuchs, A.K.; Schreck, T.; Mutlu, B.; Mangler, J.; Kappl, G.; et al. Cognitive Decision Support for Industrial Product Life Cycles: A Position Paper. In Proceedings of the Cognitive 2019: The Eleventh International Conference on Advanced Cognitive Technologies and Applications, IARIA, Venice, Italy, 5–9 May 2019; pp. 3–9. [Google Scholar]
  75. Zhou, F.; Lin, X.; Liu, C.; Zhao, Y.; Xu, P.; Ren, L.; Xue, T.; Ren, L. A survey of visualization for smart manufacturing. J. Vis. 2019, 22, 419–435. [Google Scholar] [CrossRef]
  76. Sacha, D.; Stoffel, A.; Stoffel, F.; Kwon, B.C.; Ellis, G.; Keim, D.A. Knowledge generation model for visual analytics. IEEE Trans. Vis. Comput. Graph. 2014, 20, 1604–1613. [Google Scholar] [CrossRef] [PubMed]
  77. Suschnigg, J.; Ziessler, F.; Brillinger, M.; Vukovic, M.; Mangler, J.; Schreck, T.; Thalmann, S. Industrial Production Process Improvement by a Process Engine Visual Analytics Dashboard. In Proceedings of the 53rd Hawaii International Conference on System Sciences, Maui, HI, USA, 7–10 January 2020. [Google Scholar]
  78. Abbas, A.; Haslgrübler, M.; Dogar, A.M.; Ferscha, A. Micro Activities Recognition in Uncontrolled Environments. Appl. Sci. 2021, 11, 10327. [Google Scholar] [CrossRef]
  79. Hashemian, H.M. State-of-the-art predictive maintenance techniques. IEEE Trans. Instrum. Meas. 2010, 60, 226–236. [Google Scholar] [CrossRef]
  80. Gashi, M.; Gursch, H.; Hinterbichler, H.; Pichler, S.; Lindstaedt, S.; Thalmann, S. MEDEP: Maintenance Event Detection for Multivariate Time Series Based on the PELT Approach. Sensors 2022, 22, 2837. [Google Scholar] [CrossRef]
  81. Sipos, R.; Fradkin, D.; Moerchen, F.; Wang, Z. Log-based predictive maintenance. In Proceeding of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 24–27 August 2014; pp. 1867–1876. [Google Scholar]
  82. Gashi, M.; Thalmann, S. Taking Complexity into Account: A Structured Literature Review on Multi-component Systems in the Context of Predictive Maintenance. In Proceedings of the European, Mediterranean, and Middle Eastern Conference on Information Systems, Dubai, United Arab Emirates, 9–10 December 2019; pp. 31–44. [Google Scholar]
83. Gashi, M.; Mutlu, B.; Lindstaedt, S.; Thalmann, S. Decision support for multi-component systems: Visualizing interdependencies for predictive maintenance. In Proceedings of the 55th Hawaii International Conference on System Sciences, Online, 3–7 January 2022; Accepted. [Google Scholar]
  84. Gashi, M.; Mutlu, B.; Lindstaedt, S.; Thalmann, S. No Time to Crash: Visualizing Interdependencies for Optimal Maintenance Scheduling. In Proceedings of the Cognitive 2022: The Fourteenth International Conference on Advanced Cognitive Technologies and Applications, IARIA, Barcelona, Spain, 24–28 April 2022; pp. 11–16. [Google Scholar]
  85. Leitner, L.; Lagrange, A.; Endisch, C. End-of-line fault detection for combustion engines using one-class classification. In Proceedings of the 2016 IEEE International Conference on Advanced Intelligent Mechatronics (AIM), Banff, AB, Canada, 12–15 July 2016; pp. 207–213. [Google Scholar]
  86. Gashi, M.; Ofner, P.; Ennsbrunner, H.; Thalmann, S. Dealing with missing usage data in defect prediction: A case study of a welding supplier. Comput. Ind. 2021, 132, 103505. [Google Scholar] [CrossRef]
  87. Gashi, M.; Mutlu, B.; Suschnigg, J.; Ofner, P.; Pichler, S.; Schreck, T. Interactive Visual Exploration of defect prediction in industrial setting through explainable models based on SHAP values. In Proceedings of the IEEE VIS Poster Program, Virtual, 25–30 October 2020. [Google Scholar]
  88. Vuković, M.; Thalmann, S. Causal Discovery in Manufacturing: A Structured Literature Review. J. Manuf. Mater. Process. 2022, 6, 10. [Google Scholar] [CrossRef]
  89. Olden, J.D.; Jackson, D.A. Illuminating the “black box”: A randomization approach for understanding variable contributions in artificial neural networks. Ecol. Model. 2002, 154, 135–150. [Google Scholar] [CrossRef]
  90. Maier, M. Causal Discovery for Relational Domains: Representation, Reasoning, and Learning. Ph.D. Thesis, University of Massachusetts Amherst, Amherst, MA, USA, 2014. [Google Scholar]
  91. Mooij, J.M.; Peters, J.; Janzing, D.; Zscheischler, J.; Schölkopf, B.; Guyon, I.; Statnikov, A.; Mooij, M.; Mooij, S. Distinguishing Cause from Effect Using Observational Data: Methods and Benchmarks. arXiv 2016, arXiv:1412.3773. [Google Scholar]
  92. Hund, L.; Schroeder, B. A causal perspective on reliability assessment. Reliab. Eng. Syst. Saf. 2020, 195, 106678. [Google Scholar] [CrossRef]
  93. Vukovic, M.; Dhanoa, V.; Jäger, M.; Walchshofer, C.; Küng, J.; Krahwinkler, P.; Mutlu, B.; Thalmann, S. A Forecasting Model-Based Discovery of Causal Links of Key Influencing Performance Quality Indicators for Sinter Production Improvement. In Proceedings of the 2020 AISTech Conference Proceedings, Cleveland, OH, USA, 31 August–3 September 2020; pp. 2028–2040. [Google Scholar] [CrossRef]
  94. Li, G.; Qin, S.J.; Yuan, T. Data-driven root cause diagnosis of faults in process industries. Chemom. Intell. Lab. Syst. 2016, 159, 1–11. [Google Scholar] [CrossRef]
  95. Verron, S.; Li, J.; Tiplica, T. Fault detection and isolation of faults in a multivariate process with Bayesian network. J. Process Control 2010, 20, 902–911. [Google Scholar] [CrossRef]
  96. Duan, P.; Chen, T.; Shah, S.L.; Yang, F. Methods for root cause diagnosis of plant-wide oscillations. AIChE J. 2014, 60, 2019–2034. [Google Scholar] [CrossRef]
  97. Wang, J.; Li, H.; Huang, J.; Su, C. A data similarity based analysis to consequential alarms of industrial processes. J. Loss Prev. Process Ind. 2015, 35, 29–34. [Google Scholar] [CrossRef]
  98. Li, J.; Shi, J. Knowledge discovery from observational data for process control using causal Bayesian networks. IIE Trans. 2007, 39, 681–690. [Google Scholar] [CrossRef]
  99. Kühnert, C.; Beyerer, J. Data-Driven Methods for the Detection of Causal Structures in Process Technology. Machines 2014, 2, 255–274. [Google Scholar] [CrossRef]
  100. Kraus, D.; Diwold, K.; Leitgeb, E. Poster: RSSI-Based Antenna Evaluation for Robust BLE Communication in in-Car Environments. In Proceedings of the 2021 International Conference on Embedded Wireless Systems and Networks, Delft, The Netherlands, 17–19 February 2021; pp. 167–168. [Google Scholar]
101. Kraus, D.; Diwold, K.; Leitgeb, E. Getting on Track – Simulation-aided Design of Wireless IoT Sensor Systems. In Proceedings of the 2020 International Conference on Broadband Communications for Next Generation Networks and Multimedia Applications (CoBCom), Graz, Austria, 7–9 July 2020; pp. 1–6. [Google Scholar] [CrossRef]
  102. Kraus, D.; Priller, P.; Diwold, K.; Leitgeb, E. Achieving Robust and Reliable Wireless Communications in Hostile In-Car Environments. In Proceedings of the 9th International Conference on the Internet of Things (IoT), Bilbao, Spain, 22–25 October 2019; pp. 1–4. [Google Scholar] [CrossRef]
  103. Schulz, P.; Matthe, M.; Klessig, H.; Simsek, M.; Fettweis, G.; Ansari, J.; Ashraf, S.A.; Almeroth, B.; Voigt, J.; Riedel, I.; et al. Latency Critical IoT Applications in 5G: Perspective on the Design of Radio Interface and Network Architecture. IEEE Commun. Mag. 2017, 55, 70–78. [Google Scholar] [CrossRef]
  104. Kraus, D.; Diwold, K.; Pestana, J.; Priller, P.; Leitgeb, E. Towards a Recommender System for In-Vehicle Antenna Placement in Harsh Propagation Environments. Sensors 2022, 22, 6339. [Google Scholar] [CrossRef]
  105. Macrovector: Freepik. Quality Control Isometric Composition. Available online: (accessed on 10 June 2020).
106. Kajmakovic, A.; Zupanc, R.; Mayer, S.; Kajtazovic, N.; Höffernig, M.; Vogl, H. Predictive Fail-Safe Improving the Safety of Industrial Environments through Model-based Analytics on hidden Data Sources. In Proceedings of the 13th IEEE International Symposium on Industrial Embedded Systems, Graz, Austria, 6–8 June 2018; pp. 1–6. [Google Scholar] [CrossRef]
  107. Bjetak, R.; Diwold, K.; Kajmaković, A. Retrofit: Creating Awareness in Embedded Systems—A Usecase for PLCs. In Proceedings of the 9th International Conference on the Internet of Things (IoT), Bilbao, Spain, 22–25 October 2019; pp. 1–4. [Google Scholar] [CrossRef]
  108. Kajmakovic, A.; Diwold, K.; Kajtazovic, N.; Zupanc, R. Challenges in Mitigating Soft Errors in Safety-critical Systems with COTS Microprocessors. In Proceedings of the PESARO 2020, The Tenth International Conference on Performance, Safety and Robustness in Complex Systems and Applications, IARIA, Lisbon, Portugal, 23–27 February 2020; pp. 1–6. [Google Scholar]
109. Phoebe, V.M. Artificial Intelligence: Occupational Safety and Health and the Future of Work; School of Business, University of Leicester: Leicester, UK, 2019. [Google Scholar]
  110. Bostelman, R.; Hong, T.; Eastman, R. Safety and performance standard developments for automated guided vehicles. Mob. Serv. Robot. 2014, 487–494. [Google Scholar] [CrossRef]
  111. Botler, L.; Diwold, K.; Römer, K. E-SALDAT: Efficient Single-Anchor Localization of Dual-Antenna Tags. In Proceedings of the 2019 16th Workshop on Positioning, Navigation and Communications (WPNC), Bremen, Germany, 23–24 October 2019; pp. 1–6. [Google Scholar]
  112. Canziani, A.; Paszke, A.; Culurciello, E. An analysis of deep neural network models for practical applications. arXiv 2016, arXiv:1605.07678. [Google Scholar]
  113. Lou, J.; Xie, S.; Zhang, W.; Yang, Y.; Liu, N.; Peng, Y.; Pu, H.; Dai, W.; Cao, N.; Chen, H.; et al. Artificial Intelligence and Safety Control. In Reconstructing Our Orders: Artificial Intelligence and Human Society; Jin, D., Ed.; Springer: Berlin/Heidelberg, Germany, 2018; pp. 163–193. [Google Scholar] [CrossRef]
  114. Lo, N.G.; Flaus, J.M.; Adrot, O. Review of Machine Learning Approaches In Fault Diagnosis applied to IoT System. In Proceedings of the International Conference on Control, Automation and Diagnosis ICCAD’19, Grenoble, France, 2–4 July 2019; pp. 1–6. [Google Scholar]
  115. Mohapatra, D.; Subudhi, B.; Daniel, R. Real-time sensor fault detection in Tokamak using different machine learning algorithms. Fusion Eng. Des. 2020, 151, 111401. [Google Scholar] [CrossRef]
116. Hadi, M.A.; Brillinger, M.; Wuwer, M.; Schmid, J.; Trabesinger, S.; Jäger, M.; Haas, F. Sustainable peak power smoothing and energy-efficient machining process through analysis of high-frequency data. J. Clean. Prod. 2021, 318, 128548. [Google Scholar]
Figure 1. Specifics of a cognitive assembly line. The figure shows the holistic top-down approach of the paper; the various cognitive areas are explained in the following sections.
Figure 2. Cognitive assembly line balancing approach based on similarity detection with historic balancing data.
Figure 3. Supporting workers on the shop floor based on a three-layered cognitive architecture.
Figure 4. Sensor and actuator interaction via an IoT messaging broker.
Figure 5. Proposed monitoring approach for the cognitive shop floor. The figure depicts the step-by-step representation of the assembly process, where (A) represents modeling and (B) represents monitoring.
Figure 6. The model developed for cognitive data analysis, representing the sequence of steps for a data-driven approach in the assembly line.
Figure 7. Contextualization of assembly line data. The figure gives a holistic view of contextualizing the data from its raw form using a semantic framework.
Figure 8. Exemplary patterns describing micro-activities of macro-worksteps that occurred during the assembly of a product. The figures on the right are reproduced with permission from [78], IEEE Transactions on Instrumentation and Measurement, 2010. From top to bottom: wrenching, electrical screwing, hand screwing, manual screwing.
Figure 9. An overview of the EoL testing process, reproduced with the permission of [86], Elsevier, 2021.
Figure 10. Network architectures for industrial applications [105]. The adapted image shows the mesh (left) and star (right) topologies, which can be utilized depending on the production environment.
Figure 11. Different safety architectures describing the redundancy in a system.
Table 1. Accuracy comparison of the detection of activities.

Network Name                           Accuracy
Baseline Inception v3                  66.88%
Baseline Inception v3 + RNN (LSTM)     88.96%
Optimized Inception v3                 78.60%
Optimized Inception v3 + RNN (LSTM)    91.40%
Baseline VGG19                         74.62%
Baseline VGG19 + RNN (LSTM)            79.57%
Optimized VGG19                        81.32%
Optimized VGG19 + RNN (LSTM)           83.69%
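The consistent gain of the RNN (LSTM) variants over the per-frame CNN baselines comes from aggregating frame-level features over time before classifying the activity. The following is a minimal NumPy sketch of that aggregation step only; the toy dimensions, random weights, and single-layer LSTM cell are illustrative assumptions, not the networks evaluated in Table 1 (where the features would come from Inception v3 or VGG19).

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates computed from the input x and the previous hidden state h."""
    z = W @ x + U @ h + b              # stacked pre-activations for i, f, o, g
    H = h.size
    i = 1 / (1 + np.exp(-z[:H]))       # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))    # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))  # output gate
    g = np.tanh(z[3*H:])               # candidate cell state
    c = f * c + i * g                  # new cell state
    h = o * np.tanh(c)                 # new hidden state
    return h, c

def classify_sequence(frame_features, W, U, b, W_out):
    """Run the LSTM over per-frame CNN features; classify from the last hidden state."""
    H = W_out.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in frame_features:
        h, c = lstm_step(x, h, c, W, U, b)
    logits = W_out @ h
    return int(np.argmax(logits))

# Toy dimensions: 8-dim frame features, 4 hidden units, 3 activity classes.
D, H, K = 8, 4, 3
W = rng.normal(size=(4 * H, D)) * 0.1
U = rng.normal(size=(4 * H, H)) * 0.1
b = np.zeros(4 * H)
W_out = rng.normal(size=(K, H))

seq = rng.normal(size=(20, D))  # 20 frames of (stand-in) CNN features
pred = classify_sequence(seq, W, U, b, W_out)
print(pred)  # class index in 0..2
```

A per-frame baseline, by contrast, would classify each frame independently (or average the frame features), discarding the temporal order that distinguishes, for example, hand screwing from electrical screwing.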
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Abdul Hadi, M.; Kraus, D.; Kajmakovic, A.; Suschnigg, J.; Guiza, O.; Gashi, M.; Sopidis, G.; Vukovic, M.; Milenkovic, K.; Haslgruebler, M.; et al. Towards Flexible and Cognitive Production—Addressing the Production Challenges. Appl. Sci. 2022, 12, 8696.
