Article

Detecting the Flame Front Evolution in Spark-Ignition Engine under Lean Condition Using the Mask R-CNN Approach

Engineering Department, University of Perugia, Via Goffredo Duranti, 93, 06125 Perugia, Italy
*
Author to whom correspondence should be addressed.
Vehicles 2022, 4(4), 978-995; https://doi.org/10.3390/vehicles4040053
Submission received: 24 August 2022 / Revised: 20 September 2022 / Accepted: 21 September 2022 / Published: 26 September 2022
(This article belongs to the Special Issue Recent Advances in Automotive Engines)

Abstract

In the wake of previous works, the authors propose a new approach for the identification and evolution of the flame front in an optical SI engine. Currently, it is an essential prerogative to characterize the capability of innovative igniters to guarantee earlier flame development in critical operating conditions, such as ultra-lean mixtures, towards which automotive research is moving to deal with the ever more stringent regulations on pollutant emissions. The core of the new approach lies in the Mask R-CNN method, a conceptually simple and general framework for object instance segmentation that can efficiently detect objects contained in an image while simultaneously generating a high-quality segmentation mask for each instance. In particular, the aim of this work is to develop an automated algorithm for detecting, as objectively as possible, the flame front evolution of lean/ultra-lean mixtures ignited by low-temperature plasma-based ignition systems. The capability of the Mask R-CNN algorithm to automatically estimate the binarized area, without setting a defined binarization threshold, allows the analysis of the flame front evolution to be completely independent of the user's interpretation. Compared with other traditional approaches, Mask R-CNN can detect the kernel earlier and can identify events as regular combustions instead of misfires or anomalies. These features make the proposed method the most suitable option for analyzing the real behavior of innovative ignition systems under critical operating conditions.

1. Introduction

The ever more stringent regulations on pollutant emissions are forcing the entire research community to design cleaner and more efficient internal combustion engines (ICEs) [1,2]. The investigation of the combustion process is an essential requirement to develop innovative solutions that are able to address this challenge [3,4]. The synergy between computational and experimental methods has allowed in-depth analysis of the physical phenomena occurring in spark-ignition (SI) engines [5,6]. Optical diagnostic techniques [7,8,9] proved to be valid tools for examining the spatial and temporal evolution of the flame front produced by innovative ignition systems called ACISs (Advanced Corona Ignition Systems), which represent alternative solutions to the traditional spark plug for future high-efficiency SI engines [10,11,12]. Such systems guarantee stable ignitions and strong combustion processes characterized by low cycle-to-cycle variability even in critical operating conditions, such as, for instance, highly diluted and/or extremely lean mixtures [13,14,15]. Idicheria [16] and Marko [17] performed morphological and indicating analyses, on optical access engines, of the flame front produced by corona-based ignition systems. They found relevant improvements in EGR tolerance with respect to the traditional spark. The research group of the Department of Engineering (University of Perugia) also recorded extensions of the lean stable limit at different engine operating conditions [18,19,20] and using different fuels [14]. The spatial and temporal analysis of the flame fronts allowed the velocity and repeatability of the combustion process to be correlated with the robustness of the kernel formation event in these critical operating conditions [14,19]. The lean extension may be an effective technology to meet the target of reducing pollutant emissions and fuel consumption [18,21], and the correct detection of the first moment of kernel formation is crucial for characterizing the capability of an igniter to initiate robust combustions. In ultra-lean conditions, the low luminosity of the recorded events hinders the correct identification of the early flame development [22]. For such a reason, powerful tools are required to overcome this issue and accomplish this task. Currently, artificial intelligence (AI) is increasingly used for engine parameter control [23,24], image classification [25,26], background noise removal [27,28], and object and edge detection [29,30,31]. Concerning the latter operation, the literature shows the promising results of deep learning algorithms based on YOLO [32,33], SSD [34] and Mask R-CNN approaches [35,36]. Focusing on the latter, Mask R-CNN [37] is a convolutional neural network based on Faster R-CNN [38]. This neural network can detect targets and perform semantic segmentation at the same time. Nie et al. [39] proposed a ship detection and segmentation method based on an improved Mask R-CNN model, starting from an Airbus ship image dataset.
Compared with other tested methods such as PANet [40] and SCRDet [41], the proposed network achieved the greatest improvement, up to 6%, in detection and segmentation accuracy. Vuola et al. [42] compared the capability of the U-Net approach and Mask R-CNN when segmenting the nuclei instances of microscopy images deriving from the biomedical field. The results show that Mask R-CNN is characterized by better recall and precision, indicating that it can detect nuclei more accurately. Leger et al. [43] proposed a pretrained Mask R-CNN to improve the quality control of wheel suspensions in the automotive field. The aim was to avoid issues deriving from operator inattention, overly short inspection times, and undetected defects, which can lead to high supplementary costs.
Within this context, the present work proposes to evaluate the capability of a Mask R-CNN approach in detecting the flame front evolution of combustion processes started on an optical access engine by a corona ignition system.
Starting from the results of [44], the proposed method is first tested on a weakly lean case to determine the feasibility and effectiveness of the Mask R-CNN approach. After that, the neural structure is tested up to the lean stable limit achieved by the corona device. The obtained results are compared with the performance of the algorithm described in [44], used as a Base Reference (BR), to quantify any differences and improvements.
The results show that Mask R-CNN is able to reproduce both the area and shape of the flame front, with percentage errors lower than 6% and accuracy levels higher than 90% in the richest case analyzed. In the leaner cases, Mask R-CNN detects the kernel formation about 1500 μs earlier, on average, than BR. In the leanest case, with respect to the BR method, Mask R-CNN identifies events as regular combustions instead of misfires or anomalies. Even in such a case, Mask R-CNN reproduces the flame front evolution with an accuracy level higher than 90%, showing improvements of over 200% compared with BR. Moreover, the capability of the Mask R-CNN algorithm to automatically estimate the binarized area, without setting a defined threshold, allows the authors to perform an analysis of the flame front evolution completely independent of the user's interpretation. The results of the present approach suggest the application of the proposed method to other combustion systems operating in ultra-lean conditions such as, for instance, gas-turbine combustors [45,46].

2. Experimental Setup

2.1. Optical Access Engine

Tests are carried out on a 500 cc optically accessible single-cylinder research engine (Figure 1), with four valves, a pent-roof combustion chamber, and a reverse tumble intake port. The cylinder bore is 85 mm, the piston stroke is 88 mm and the compression ratio is 8.8:1 (Table 1). Piston rings are realized in a Teflon–graphite mix, a self-lubricant material, since dry contact between rings and cylinder liner is required. The engine speed control is guaranteed by an AVL 5700 dynamic brake both in motored and firing conditions. A Mitsubishi KSN230B port fuel injector, placed in the intake manifold, injects standard European market gasoline (E5, with RON = 95 and MON = 85) at a pressure of 3.8 bar. The air–fuel ratio (λ) is modified by varying the fuel amount at a fixed throttle position to maintain the same turbulence level inside the combustion chamber. The injector energizing time and the ignition timing are controlled by an Athena GET HPUH4 engine control unit (ECU). A piezoresistive transducer (Kistler 4075A5) records the intake port pressure, while the in-cylinder pressure is measured by a piezoelectric transducer (Kistler 6061B). The indicated analysis is performed through a Kistler Kibox combustion analysis system (temporal resolution of 0.1 CAD), which acquires the λ measured by a fast lambda probe at the exhaust pipe (Horiba MEXA-720, accuracy of ±2.5%), the pressure signals, the ignition signal from the ECU, the absolute crank angular position measured by an optical encoder (AVL 365C), and the trigger signal used to synchronize indicating and imaging data. The imaging is performed by a Vision Research Phantom V710 high-speed CMOS camera coupled with a Nikon 55 mm f/2.8 lens.

2.2. Imaging System

The natural luminosity of streamers and flames is recorded by a Vision Research Phantom V710 high-speed CMOS camera coupled with a Nikon 55 mm f/2.8 lens. For each point tested, a maximum of 63 consecutive combustion events can be recorded. A common trigger signal, derived from an automotive camshaft position sensor (Bosch 0232103052), ensures the synchronization between imaging and indicating data, thus allowing the matching of the 2D flame development information (on a swirl plane) with the in-cylinder pressure trace of the same cycle (Figure 2). The high-speed camera starts recording when the rising edge of the trigger signal is detected. A tunable pre-trigger length allows several frames to be acquired even before the rising edge. According to the characteristics of the optical apparatus, each frame is composed of 512 × 512 pixels to detect the whole flame evolution inside the optical limit. The maximum allowable sampling rate of 25 kHz is used, corresponding to a temporal resolution of 0.24 CAD/frame at 1000 rpm. A summary of the main optical parameters is shown in Table 2. An in-house MATLAB code is used by the research group to extract quantitative information from the grayscale combustion images acquired by the high-speed camera. In the following sections, a detailed description of the algorithms used in [44] can be found.
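As a quick check of the figures above, the temporal resolution follows directly from the engine speed and the camera sampling rate; the short derivation below simply restates the values of Table 2.

\dot{\theta} = \frac{1000 \times 360}{60}\ \mathrm{CAD/s} = 6000\ \mathrm{CAD/s}, \qquad \Delta\theta = \frac{6000\ \mathrm{CAD/s}}{25{,}000\ \mathrm{frames/s}} = 0.24\ \mathrm{CAD/frame}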

2.3. Igniter

The experimental campaign is developed by using a radio-frequency (RF) advanced corona ignition system, i.e., the Barrier Discharge Igniter (BDI), provided by Federal Mogul Powertrain, a Tenneco group company [47,48,49] (Figure 3). The igniter generates ionization waves called streamers that start the combustion process through thermal, kinetic, and transport effects [50,51]. When the applied electric field overcomes a critical threshold, such streamers start from the annular grounded electrode placed on the base circumference of the igniter and propagate volumetrically over the surface of the dielectric material that covers the counter electrode (Figure 3) [52]. The presence of the dielectric layer is crucial to prevent the streamer-to-arc transition, and therefore to maintain the discharge in low-temperature plasma mode [52]. The absence of a prominent ground electrode reduces heat losses and avoids hot spots that could trigger pre-ignition. Moreover, the power electrode is not directly exposed to the action of the excited species produced during the discharge.
Upon receiving the trigger signal from the ECU, the igniter is powered by a dedicated electronic system (ACIS Box) at 1.04 MHz, corresponding to the resonance frequency of the equivalent RLC circuit [47]. The corona behavior is controlled by managing two setting parameters: the duration (ton) and the driving voltage (Vd) [53,54]. The first one represents the activation time of the igniter and plays an important role in reducing the cycle-to-cycle variability [55]. The second one, proportional to the electrode voltage [49], is responsible for the corona development around the igniter cupola. Once Vd is set, the electronic system magnifies the voltage up to a proportional value (supplied voltage, Vs) and provides it to the coil. The latter amplifies the voltage at the firing end up to Ve to produce the discharge.

3. Test Campaign

The algorithms compared in this work are tested on the image dataset of [44] (Table 3). In that work, BDI proved to be able to extend the lean stable limit of the traditional spark (λ = 1.6) up to λ = 1.8. Tests were carried out at low load and with the engine operating at 1000 rpm. The igniter was operated, with a safety margin to prevent device issues [47,49], using extreme control parameters, i.e., ton = 1500 μs and Vd = 60 V. All tests were carried out with the ignition timing set to achieve the Maximum Brake Torque (MBT), which was found with the APmax occurring around 11–14 CAD aTDC. Each tested point was considered stable if it featured a CoVIMEP < 4% [14]. First, the Mask R-CNN approach was tested on a weakly lean case (λ = 1.4). After that, the neural structure was tested up to the lean stable limit achieved by the corona device.

4. Methods

In this section, the structures and functionalities of the algorithm used in [44] and of the one proposed in the present work are discussed.

4.1. Base Reference

The algorithm dedicated to the post-processing of the combustion images performs the operations of ignition detection, image filtering, contouring and binarization of the frames (Figure 4) [14].
Each recorded image is filtered by a 3 × 3 pixel median spatial filter in order to reduce the salt-and-pepper noise. In a centrally located sub-area of 220 × 220 pixels, the determination of the average maximum gray level (MGLavg) over the first 50 frames allows the start of the ignition to be detected. MGLavg (Equation (1)) and the maximum deviation MGLmax,dev (Equation (2)) are determined by the following formulas:
MGL_{avg} = \frac{1}{n} \sum_{j=1}^{n} MGL_j
MGL_{max,dev} = \max_{1 \le j \le n} \left( MGL_j - MGL_{avg} \right)
where n is the dimension of the statistic window and j is the frame index in the statistic window (j ≤ n). The detection condition of the first ignition event can therefore be expressed as follows (Equation (3)):
MGL_i > MGL_{avg} + K \times MGL_{max,dev}
where i is the frame index after the statistics window (i > n) and K is an arbitrary constant. The final process of binarization converts the grayscale images into black (unburned area with pixel values equal to zero) and white (burned area with pixel values equal to one) ones, to extract quantitative information expressed through the equivalent flame area Aeq in mm2 (Equation (4)) and equivalent flame radius Req in mm.
A_{eq} = \pi r_{eq}^{2} = n_b \, sc^{2}
where nb is the number of white pixels and sc the scaling factor [mm/pixel]. The binarization threshold is evaluated with a semi-automatic algorithm, proposed by Shawal et al. [56]. From the first ignition event detected, the threshold TH of each subsequent image is set proportionally to the average gray level AVGj of the previous image (Equation (5)):
TH_j = AVG_j \times K_1 + K_2
The constants K1 and K2 are preliminarily chosen by the user on the basis of the algorithm output, i.e., Req, to correctly reproduce the flame front evolution, minimizing any underestimates and overestimates of the flame front. Once the choice has been made, the method is applied to all the combustions analyzed. At ultra-lean conditions, where the brightness is very low, the instant of kernel formation can be difficult to identify [22] (Figure A1). Different operators could produce quite different results; therefore, such results are of questionable value. As a consequence, innovative algorithms are mandatory to overcome these issues.
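The following is a minimal NumPy/SciPy sketch of the processing chain just described (median filtering, statistical ignition detection per Equations (1)–(3), adaptive-threshold binarization per Equation (5), and computation of Aeq and Req per Equation (4)). The authors' actual implementation is an in-house MATLAB code; the constants K, K1 and K2 below are placeholders that, in the real method, are tuned by the user, and the function names are purely illustrative.

```python
# Illustrative sketch of the Base Reference pipeline described above.
import numpy as np
from scipy.ndimage import median_filter

SC = 0.124                   # scaling factor [mm/pixel], from Table 2 (124 um/pixel)
K, K1, K2 = 3.0, 0.9, 5.0    # K is arbitrary; K1, K2 are user-chosen (placeholder values)

def detect_ignition(frames, n=50, win=220, k=K):
    """Index of the first frame whose max gray level in the central 220 x 220 window
    exceeds MGL_avg + k * MGL_max_dev, both computed over the first n frames."""
    h, w = frames[0].shape
    r0, c0 = (h - win) // 2, (w - win) // 2
    mgl = np.array([median_filter(f, size=3)[r0:r0 + win, c0:c0 + win].max()
                    for f in frames])
    mgl_avg = mgl[:n].mean()
    mgl_max_dev = (mgl[:n] - mgl_avg).max()
    above = np.nonzero(mgl[n:] > mgl_avg + k * mgl_max_dev)[0]
    return n + above[0] if above.size else None

def binarize_and_measure(frame, prev_avg, k1=K1, k2=K2, sc=SC):
    """Adaptive-threshold binarization (Eq. 5) and equivalent area/radius (Eq. 4)."""
    th = prev_avg * k1 + k2                  # threshold from the previous frame's mean level
    binary = median_filter(frame, size=3) > th
    a_eq = binary.sum() * sc ** 2            # equivalent flame area [mm^2]
    r_eq = np.sqrt(a_eq / np.pi)             # equivalent flame radius [mm]
    return binary, a_eq, r_eq
```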

4.2. Mask R-CNN

The Mask R-CNN approach used in this work to detect the evolution process of the flame front follows the structure reported in [57], which efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method starts from the Faster R-CNN architecture, which presents two outputs for each candidate object: a class label and a bounding-box offset [38]. In addition, a third branch that outputs the object mask in a much finer spatial layout, together with a pixel-to-pixel alignment operation, has been added to the base structure. Faster R-CNN consists of two stages: the first one (the Region Proposal Network (RPN)) proposes candidate object bounding boxes, while the second one performs classification and bounding-box regression, starting from the features extracted from each candidate box using RoIPool [58,59]. The RoIPool operation extracts a small feature map from each RoI, quantizing the floating-number RoI to the discrete granularity of the feature map. The quantized RoI is then subdivided into spatial bins, and the feature values covered by each bin are aggregated. This quantization process has a negative effect on predicting pixel-accurate masks [57]. Mask R-CNN adopts the same first RPN stage and, in a second stage, predicts the class and box offset in parallel and generates a binary m × m mask for each RoI using an FCN [60]. Thus, unlike the class labels and box offsets, each layer in the mask branch maintains the explicit m × m object spatial layout without collapsing it into short output vectors by fully connected layers. To avoid the RoIPool issues previously described, a RoIAlign layer removes the harsh quantization, aligning the extracted features with the input [57]. The neural network structure tested herein is based on a convolutional ResNet-101-FPN backbone architecture, used for feature extraction over the entire image, and on a network head for bounding-box recognition (classification and regression) and mask prediction, separately applied to each RoI [57]. ResNet-101 is a convolutional neural network that is 101 layers deep. Such a pretrained network can classify images into over 1000 object categories [61]. The Feature Pyramid Network (FPN) uses a top-down architecture with lateral connections to build an in-network feature pyramid from a single-scale input [62]. A ResNet-101-FPN backbone for feature extraction with Mask R-CNN gives excellent gains in both accuracy and speed [62]. The network head architecture includes the fifth stage of ResNet used in [63], to which a mask branch is added. In other words, starting from the pretrained backbone [62], the training is realized on the three layers of the network head [57] based on images of the flame front evolution.
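As an illustration of how such a two-stage network with interchangeable heads can be assembled in practice, the sketch below builds a Mask R-CNN with an FPN backbone and replaces its box and mask heads for a single foreground class ("flame"). It is not the authors' implementation: the paper uses a ResNet-101-FPN backbone, whereas torchvision's readily available constructor uses ResNet-50-FPN, so the snippet only exemplifies the head-replacement idea described above.

```python
# Minimal torchvision (>= 0.13) sketch of a Mask R-CNN with an FPN backbone.
# Assumptions: one foreground class ("flame") plus background; the paper's
# backbone is ResNet-101-FPN, while this constructor uses ResNet-50-FPN.
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_flame_maskrcnn(num_classes: int = 2):
    model = maskrcnn_resnet50_fpn(weights="DEFAULT")   # pretrained backbone and heads
    # Replace the box classification/regression head for the two classes of this task.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # Replace the FCN mask head accordingly (256 hidden channels, as in the default head).
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
    return model
```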
The training session is realized as follows:
  • For each analyzed case (s items), n images of p combustion events (i.e., events X and Y of Figure 5b) are extracted from the high-speed camera recordings and used as a dataset for training the three layers of the network head. In other words, the neural architecture is trained with s × p × n = 5 × 2 × 250 = 2500 items.
  • Each item (Figure 5a(A)), i.e., each image portraying the flame front, is segmented (Figure 5a(B)) by the user via human perception on MakeSense.AI (https://www.makesense.ai/) and then labeled.
  • The output of each item is then imported into Google Colaboratory (https://colab.research.google.com/) to train the neural architecture. GPU Tesla K80 with CUDA 11.2 is used.
  • The fifth of 10 epochs is selected and its weights are exported, since it shows the best performance in terms of loss and val_loss [64].
The test session is realized, for each operating point, on combustion events (Figure 5a(D)) different from the ones used for the training session (i.e., event Z of Figure 5b). To compare the performance of Mask R-CNN with that of BR, a binarization process is applied to estimate Req without setting any binarization threshold. The process is directly realized on the obtained mask, as reported in Figure 5a(F).
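A short sketch of this last step is given below: the predicted instance mask is used directly as the binarized flame area, so no gray-level threshold is involved. The dictionary layout of the prediction (keys 'masks' and 'scores') follows the torchvision detection interface and is an assumption; the paper does not state which Mask R-CNN implementation is used.

```python
# Hypothetical post-processing of a Mask R-CNN prediction into the equivalent
# flame radius, mirroring step (F) of Figure 5a. 'pred' is assumed to be a
# torchvision-style dict with 'masks' of shape (N, 1, H, W) and 'scores' of shape (N,).
import numpy as np

SC = 0.124  # scaling factor [mm/pixel], from Table 2

def req_from_prediction(pred, score_thr=0.5, sc=SC):
    keep = pred["scores"] > score_thr
    if not keep.any():
        return 0.0                               # no flame detected (e.g. a misfire frame)
    masks = pred["masks"][keep, 0] > 0.5         # soft masks -> binary masks
    flame = masks.any(dim=0).cpu().numpy()       # merge instances, if several
    a_eq = flame.sum() * sc ** 2                 # equivalent flame area [mm^2], Eq. (4)
    return float(np.sqrt(a_eq / np.pi))          # equivalent flame radius [mm]
```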

5. Results

The proposed method is preliminarily validated on a specific combustion event at λ = 1.4. In the test session, the output of the algorithm is compared with binarized images obtained via human perception and used as the Target. Figure 6 shows an example of the main steps performed in the MATLAB environment. The grayscale raw image (Figure 6a(A)) is converted into a colored one (Figure 6a(B)) using the jet colormap with a number of levels equal to the bit depth of the original image (255). This action proved to be of pivotal importance for the determination of the flame contour. The latter is determined by the user, on the colored image, by contouring the flame front in red. To improve the capability of the following step in binarizing the image, the pixels not belonging to the flame front are black-filled (Figure 6a(C)) based on a critical threshold. Such a value is proportional to the average level (i.e., noise) of an image with no flame evolution occurring (Figure 6b). Finally, the pixels inside the perimeter defined in this way are white-filled using the imbinarize function, thus obtaining the image in Figure 6a(D).
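A compact sketch of the automatic part of this target preparation is given below; the contouring itself is done by hand, so the user-drawn contour is assumed to be already available as a binary mask, and the proportionality constant k_noise, like the function name, is a placeholder.

```python
# Illustrative NumPy sketch of the target-image preparation of Figure 6:
# the noise level is estimated from a frame with no flame, pixels below it
# are black-filled, and the region inside the user-drawn contour is binarized.
# Function and argument names are hypothetical.
import numpy as np

def make_target(raw_frame, no_flame_frame, user_contour_mask, k_noise=1.0):
    noise_level = k_noise * no_flame_frame.mean()               # average "no combustion" level
    cleaned = np.where(raw_frame > noise_level, raw_frame, 0)   # black-fill the background
    target = (cleaned > 0) & user_contour_mask.astype(bool)     # white-fill inside the contour
    return target.astype(np.uint8)
```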
The binarized images used as the Target and the ones obtained from the proposed algorithm are compared through Aeq by considering characteristic moments of the combustion development, from the kernel formation until Req = 20 mm is reached. To this end, the absolute relative percentage errors (%ErrAeq = |(Aeq,Mask − Aeq,Target)/Aeq,Target| × 100) made by the methods in estimating the equivalent flame area of the Target are used. Moreover, the performance of the proposed method is quantified using the evaluation metrics [65], based on the raster values of the binarized Target images. The purpose is to evaluate any overestimation and/or underestimation performed by the algorithm. True positive (TP) refers to the pixels correctly detected as belonging to the flame front; false positive (FP) refers to the pixels mistakenly detected as belonging to it; false negative (FN) refers to the pixels of the flame front that the algorithm has mistakenly not detected; and true negative (TN) refers to the pixels correctly indicated as not belonging to the flame front. Based on those metrics, the accuracy, sensitivity, and specificity of the model are computed as follows:
\text{Sensitivity} = \frac{TP}{TP + FN}
\text{Specificity} = \frac{TN}{TN + FP}
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
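For reference, a minimal sketch of this pixel-wise evaluation is reported below; it computes the confusion-matrix counts between a predicted binary mask and the human-segmented Target, together with |%ErrAeq| (the scaling factor sc² cancels in the area ratio, so pixel counts suffice). The function name is illustrative.

```python
# Pixel-wise evaluation of a predicted binary mask against the Target mask:
# confusion-matrix counts, sensitivity, specificity, accuracy and |%ErrAeq|.
import numpy as np

def evaluate(pred_mask, target_mask):
    pred, tgt = pred_mask.astype(bool), target_mask.astype(bool)
    tp = np.sum(pred & tgt)          # flame pixels correctly detected
    fp = np.sum(pred & ~tgt)         # background pixels wrongly marked as flame
    fn = np.sum(~pred & tgt)         # flame pixels missed by the algorithm
    tn = np.sum(~pred & ~tgt)        # background pixels correctly rejected
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "err_aeq_percent": abs(pred.sum() - tgt.sum()) / tgt.sum() * 100,
    }
```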
In Figure 7, the overlap of the flame fronts is represented in white, the overestimates of the flame in green, and the underestimates made by the method in violet. The |%ErrAeq| values are colored following the same criteria. Generally speaking, the Mask algorithm reproduces both the shapes and areas of the Target images. Considering the equivalent flame area, up to 5 CAD aEoD Mask R-CNN overestimates the front, while from 7 CAD aEoD it tends to underestimate it. However, at each analyzed crank angle, the %ErrAeq is always lower than 6% and, in three of them, it assumes values under 4%. The confusion matrix provides more detailed information about the ability of the predicted images to reproduce the Target shape. In other words, it allows any over- and underestimations to be quantified. Specificity levels, always higher than 98.8%, indicate an excellent capability of Mask R-CNN to correctly detect all the pixels not belonging to the flame edge, i.e., the overestimations can be considered negligible. Sensitivity levels, always higher than 93%, provide information about the capability of Mask R-CNN to detect pixels inside the Target boundary; the lower the sensitivity, the higher the underestimation produced. As reported, Mask R-CNN tends to increase its underestimation by about 3.5% as CAD aEoD increases. To summarize, the proposed method produces a slight overestimation in proximity to the kernel formation and tends to slightly underestimate the front across its evolution. Accuracy levels that are always higher than 97% prove Mask R-CNN's effectiveness in correctly detecting pixels belonging and not belonging to the flame front. Based on these promising results, the analysis is extended to the other cases displayed in the Test Campaign section.
First, at each λ analyzed, a random case is chosen from the 63 recorded ones to evaluate the forecasting performance of Mask R-CNN against that of BR. Figure 8 displays the equivalent flame radius obtained from both structures (blue curve for BR and red curve for Mask R-CNN) compared with the Target values, used as a yardstick (black circles). Such Req values have been determined every five frames, according to the procedure previously described. Generally speaking, it is possible to observe that, as the mixture leans out, slower combustion processes take place due to the reduced amount of available fuel [18]. Moreover, the leaner the mixture, the lower the equivalent flame radius at the end of discharge, indicative of a combustion process that has just been initiated but is not yet developed, as it is in richer cases. For instance, to reach the optical limit, the λ = 1.8 case takes about 55 CAD more than the richest case tested. While the λ = 1.5 case reaches Req = 20 mm at about 18 CAD aEoD, the λ = 1.8 case needs more than 40 CAD aEoD to reach the same radius value. At λ = 1.5, the flame front is already well developed at the igniter deactivation, as is visible from the Req, which is higher than 8 mm regardless of the algorithm used. For each case analyzed, Mask R-CNN is able to detect the kernel formation earlier. This quality is an essential prerogative for characterizing the ACIS igniters' ability to guarantee earlier flame development. At λ = 1.5, considering the higher values of luminosity [22], no appreciable differences are found between the compared approaches. With slight underestimations, both algorithms prove to be capable of reproducing the Target trend, especially after Req = 20 mm. However, when leaning out the mixture, the BR method tends to amplify the underestimation of the flame front in the initial part of the kernel formation, while Mask R-CNN satisfactorily reproduces the Target. As mentioned in the Methods section, the low luminosity complicates the determination of the correct binarization threshold for the BR algorithm. Therefore, the corresponding Req curves feature delayed growths. The %ERR on the estimation of Req highlights these differences. %ERRReq is intentionally reported without the absolute value to better distinguish overestimations from underestimations. For the sake of clarity, the results of the extreme operating conditions tested are reported, i.e., λ = 1.5 and λ = 1.8. In the cases reported, both algorithms perform, on average, underestimations, especially in the first part of the kernel formation, where the detected luminosity is very low. However, it is evident that Mask R-CNN can limit these setbacks with respect to BR.
To visualize the effectiveness of the proposed algorithm in recognizing the front evolution earlier, a complementary analysis is carried out, for the λ = 1.6 case, by overlapping the corresponding binarized images (Figure 9). The areas (red contour for the Target and white for Mask R-CNN and BR) are superimposed to visualize the flame front evolution at each CAD. This additional analysis is necessary to highlight how the proposed method, despite a slight underestimation of the front with respect to the Target, still obtains a much better result than the BR method, which, for about 9 CAD, practically does not detect the combustion phenomenon. At the end (30 CAD aEoD), as previously reported, both algorithms are able to reproduce the Target front.
Considering that the aim of the research group is to investigate the igniter potential in the leanest stable operating conditions, the two compared methods are tested on 30 events at λ = 1.8 (Figure 10). The aim is to determine whether each analyzed event can be correctly detected by the proposed method.
Generally speaking, Mask R-CNN detects the flame formation earlier, with a slight delay in reaching 30 mm compared with BR. Due to flame wrinkling, distortion and convection, the average flame radius that can be correctly detected without reaching the optical boundary is about 20 mm [19]. Therefore, Mask R-CNN is suitable for providing detailed information about the igniter's performance in the first moments of kernel formation. Moreover, with respect to the BR method, Mask R-CNN identifies events as regular combustions instead of misfires or anomalies. In BR, the binarization threshold chosen by the user could be unsuitable for all the combustion events at the same λ because of the different brightness featuring such events. Figure 11 reports, at different CAD aEoD, the original images of the flame front evolution for cases detected as anomalies by BR. As can be observed, in localized regions of the chamber, the high luminosity level, probably due to the unburned mixture and/or reflection phenomena, makes the threshold (Equation (5)) unsuitable for detecting the evolution of the combustion process. On the contrary, Mask R-CNN can identify such cases. Its object and contour identification capabilities make the Mask R-CNN approach highly suitable for the kind of analysis proposed in the present work.
For the sake of completeness, the Sensitivity (Equation (6)), Specificity (Equation (7)), and Accuracy (Equation (8)) levels of a random case at λ = 1.8, correctly identified by BR, are reported to better highlight the differences between the outputs of the compared approaches. As observed, Mask R-CNN shows higher levels than BR, testifying to a greater sensitivity in identifying the flame front evolution. In particular, the proposed method always shows confusion-matrix values close to 90% or higher.

6. Conclusions

The present work proposes evaluating the capability of a Mask R-CNN approach in detecting the flame front evolution of combustion processes started, on an optical access engine, by a corona ignition system. The performance of the proposed method is compared with that of the base reference algorithm, used in a previous work by the same research group, to quantify any differences and improvements. The outputs of both algorithms are compared with binarized images obtained via human perception and used as a yardstick. The proposed method is first tested on a weakly lean case (λ = 1.4) to determine the feasibility and effectiveness of the Mask R-CNN approach. After that, the neural structure is tested up to the lean stable limit achieved by the corona device (λ = 1.8). The results show that:
At λ = 1.4, Mask R-CNN can reproduce both the area and shape of the target images. The percentage error made by the algorithm in estimating the equivalent flame area is always lower than 6%. The confusion matrix shows values higher than 93%, thus testifying to the capability of Mask R-CNN to reproduce the shape of the flame front with high accuracy.
From λ = 1.5 to λ = 1.8, in each case analyzed, Mask R-CNN can detect the kernel formation earlier than BR. For instance, at λ = 1.6, for about 9 CAD (about 1500 μs), BR practically does not detect the combustion phenomenon. This quality is an essential prerogative for characterizing the corona igniter's capability to guarantee an earlier flame development. Moreover, this feature allows an almost perfect correspondence between indicating and imaging analysis to be approached. At λ = 1.5, considering the higher values of luminosity, no appreciable differences are found between the compared approaches. Instead, as the mixture leans out and the luminosity levels consequently decrease, BR tends to amplify the underestimation of the flame front in the initial part of the kernel formation, while Mask R-CNN reproduces the Target, showing percentage errors 50% lower than BR at this initial stage.
The dispersion analysis, performed at λ = 1.8 on 30 combustion cases, highlights that Mask R-CNN detects the flame formation earlier, with a slight delay in reaching 30 mm compared with BR. Due to flame wrinkling, distortion and convection, the average flame radius that can be correctly detected without reaching the optical boundary is about 20 mm. Therefore, Mask R-CNN can be considered more suitable than BR for this kind of analysis and for providing detailed information on the igniter's performance in the first moments of kernel formation. Moreover, again with respect to the BR method, Mask R-CNN identifies events as regular combustions instead of misfires or anomalies.
The confusion matrix at λ = 1.8 certifies the capability of Mask R-CNN to reproduce the flame front evolution with an accuracy level higher than 95%. Moreover, in the cases shown, Mask R-CNN improves the Sensitivity level of BR by over 200%.
The capability of the Mask R-CNN algorithm to automatically estimate the binarized area, without setting a defined binarization threshold, permits an analysis of the flame front evolution that is completely decoupled from the user's interpretation.

Author Contributions

Conceptualization, L.P. and F.R.; methodology, F.R.; software, L.P.; validation, L.P., F.R. and R.M.; formal analysis, L.P. and F.R.; investigation, R.M.; resources, F.M.; data curation, F.R.; writing—original draft preparation, F.R.; writing—review and editing, L.P.; visualization, L.P.; supervision, F.M.; project administration, F.M.; funding acquisition, F.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available from the corresponding author. The data are not publicly available due to privacy-related choices.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

%ERR Percentage Errors.
ACIS Advanced Corona Ignition System.
AI Artificial Intelligence.
aEoD After End of Discharge.
BDI Barrier Discharge Igniter.
BR Base Reference method.
CAD Crank Angle Degree.
CoVIMEP Covariance of IMEP.
CSI Corona Streamer-Type Igniter.
DI Direct Injection.
ECU Engine Control Unit.
EGR Exhaust Gas Recirculation.
FPN Feature Pyramid Network.
IMEP Indicated Mean Effective Pressure.
IT Ignition Timing.
MBT Maximum Brake Torque.
MFB Mass Fraction Burned.
MON Motor Octane Number.
PFI Port Fuel Injection.
R-CNN Region-based Convolutional Neural Network.
Req Equivalent flame radius.
RF Radio Frequency.
RoI Region of Interest.
RON Research Octane Number.
RPN Region Proposal Network.
SI Spark Ignition.
TDC Top Dead Center.
ton Activation time of the igniter.
FN False Negative.
FP False Positive.
ICE Internal Combustion Engine.
TN True Negative.
TP True Positive.
Vd Driving Voltage of the igniter.

Appendix A

Figure A1. (a) Examples of Req curves not suitable for reproducing the increasing trend of the flame radius during the development of the combustion process, and Req curves (solid lines) that may be considered potentially suitable for reproducing the same physical combustion phenomenon. (b) The figures on the right depict the contouring process, for the potentially suitable cases, applied to assess the capability of BR in reproducing the flame front of the original images. The (K13, K23) choice allows the flame front to be better reproduced, and can therefore be considered the best choice.

References

  1. Jeon, J. Spatiotemporal flame propagations, combustion and solid particle emissions from lean and stoichiometric gasoline direct injection engine operation. Energy 2020, 210, 118652. [Google Scholar] [CrossRef]
  2. Distaso, E.; Amirante, R.; Cassone, E.; De Palma, P.; Sementa, P.; Tamburrano, P.; Vaglieco, B. Analysis of the combustion process in a lean-burning turbulent jet ignition engine fueled with methane. Energy Convers. Manag. 2020, 223, 113257. [Google Scholar] [CrossRef]
  3. Nakata, K.; Nogawa, S.; Takahashi, D.; Yoshihara, Y.; Kumagai, A.; Suzuki, T. Engine Technologies for Achieving 45% Thermal Efficiency of S.I. Engine. SAE Int. J. Engines 2015, 9, 179–192. [Google Scholar] [CrossRef]
  4. Johnson, T. Vehicular Emissions in Review. SAE Int. J. Engines 2014, 7, 1207–1227. [Google Scholar] [CrossRef]
  5. Gao, J.; Tian, G.; Sorniotti, A.; Karci, A.E.; di Palo, R. Review of thermal management of catalytic converters to decrease engine emissions during cold start and warm up. Appl. Therm. Eng. 2019, 147, 177–187. [Google Scholar] [CrossRef]
  6. Alshammari, M.; Alshammari, F.; Pesyridis, A. Electric Boosting and Energy Recovery Systems for Engine Downsizing. Energies 2019, 12, 4636. [Google Scholar] [CrossRef]
  7. Merola, S.S.; Marchitto, L.; Tornatore, C.; Valentino, G.; Irimescu, A. UV-visible Optical Characterization of the Early Combustion Stage in a DISI Engine Fuelled with Butanol-Gasoline Blend. SAE Int. J. Engines 2013, 6, 2638. [Google Scholar] [CrossRef]
  8. Jeon, J.; Bock, N.; Northrop, W.F. In-cylinder Flame Luminosity Measured from a Stratified Lean Gasoline Direct Injection Engine. Results Eng. 2019, 1, 100005. [Google Scholar] [CrossRef]
  9. Battistoni, M.; Zembi, J.; Casadei, D.; Ricci, F.; Martinelli, R.; Grimaldi, C.; Milani, E. Burner Development for Light-Off Speed-Up of Aftertreatment Systems in Gasoline SI Engines (No. 2022-37-0033); SAE Technical Paper; SAE International: Warrendale, PA, USA, 2022. [Google Scholar]
  10. Singleton, D.; Pendleton, S.J.; Gundersen, M.A. The Role of Non-Thermal Transient Plasma for Enhanced Flame Ignition in C2H4-Air. J. Phys. D Appl. Phys. 2011, 44, 022001. [Google Scholar] [CrossRef]
  11. Padala, S.; Le, M.K.; Nishiyama, A.; Ikeda, Y. Ignition of Propane-Air Mixtures by Miniaturized Resonating Microwave Flat-Panel Plasma Igniter; SAE Technical Paper 2017-24-0150; SAE International: Warrendale, PA, USA, 2017. [Google Scholar] [CrossRef]
  12. Scarcelli, R.; Biswas, S.; Ekoto, I.; Breden, D.; Karpatne, A. Numerical Simulation of a Nano-Pulsed High-Voltage Discharge and Impact on Low-Temperature Plasma Igni- Tion Processes for Automotive Applications. In Proceedings of the Ignition Systems for Gasoline Engines: 4th International Conference, Berlin, Germany, 6–7 December 2018; pp. 329–339. [Google Scholar] [CrossRef]
  13. Nishiyama, A.; Ikeda, Y. Improvement of Lean Limit and Fuel Consumption Using Microwave Plasma Ignition Technology; SAE Technical Paper 2012-01-1139; SAE International: Warrendale, PA, USA, 2012. [Google Scholar] [CrossRef]
  14. Cruccolini, V.; Discepoli, G.; Cimarello, A.; Battistoni, M.; Mariani, F.; Grimaldi, C.N.; Dal Re, M. Lean combustion analysis using a corona discharge igniter in an optical engine fueled with methane and a hydrogen-methane blend. Fuel 2020, 259, 116290. [Google Scholar] [CrossRef]
  15. Wyczalek, F.A.; Frank, D.L.; Nueman, J.G. Plasma Jet Ignition of Lean Mixtures; SAE Technical Paper 750349; SAE International: Warrendale, PA, USA, 1975. [Google Scholar] [CrossRef]
  16. Idicheria, C.A.; Najt, P.M. Potential of Advanced Corona Ignition System (ACIS) for Future Engine Applications. In International Conference on Ignition Systems for Gasoline Engines; Springer International Publishing: Cham, Switzerland, 2017; pp. 315–331. [Google Scholar] [CrossRef]
  17. Marko, F.; König, G.; Schöffler, T.; Bohne, S.; Dinkelacker, F. Comparative Optical and Thermodynamic Investigations of High Frequency Corona- and Spark-Ignition on a CV Natural Gas Research Engine Operated with Charge Dilution by Exhaust Gas Recirculation. In International Conference on Ignition Systems for Gasoline Engines; Springer International Publishing: Cham, Switzerland, 2017; pp. 293–314. [Google Scholar] [CrossRef]
  18. Ricci, F.; Petrucci, L.; Cruccolini, V.; Discepoli, G.; Grimaldi, C.N.; Papi, S. Investigation of the Lean Stable Limit of a Barrier Discharge Igniter and of a Streamer-Type Corona Igniter at Different Engine Loads in a Single-Cylinder Research Engine. Proceedings 2020, 58, 11. [Google Scholar] [CrossRef]
  19. Cruccolini, V.; Discepoli, G.; Ricci, F.; Petrucci, L.; Grimaldi, C.; Papi, S.; Dal Re, M. Comparative Analysis between a Barrier Discharge Igniter and a Streamer-Type Radio-Frequency Corona Igniter in an Optically Accessible Engine in Lean Operating Conditions; SAE Technical Paper; SAE International: Warrendale, PA, USA, 2020. [Google Scholar] [CrossRef]
  20. Ricci, F.; Zembi, J.; Battistoni, M.; Grimaldi, C.; Discepoli, G.; Petrucci, L. Experimental and Numerical Investigations of the Early Flame Development Produced by a Corona Igniter; SAE Technical Paper; SAE International: Warrendale, PA, USA, 2019. [Google Scholar] [CrossRef]
  21. Ricci, F.; Martinelli, R.; Petrucci, L.; Discepoli, G.; Nazareno, C.; Papi, S. Streamers Variability Investigation of a Radio- Frequency Corona Discharge in an Optical Access Engine at Different Speeds and Loads. E3S Web Conf. 2021, 312, 07021. [Google Scholar] [CrossRef]
  22. Petrucci, L.; Ricci, F.; Mariani, F.; Discepoli, G. A Development of a New Image Analysis Technique for Detecting the Flame Front Evolution in Spark Ignition Engine under Lean Condition. Vehicles 2022, 4, 145–166. [Google Scholar] [CrossRef]
  23. Petrucci, L.; Ricci, F.; Mariani, F.; Cruccolini, V.; Violi, M. Engine Knock Evaluation Using a Machine Learning Approach; SAE Technical Paper; SAE International: Warrendale, PA, USA, 2020. [Google Scholar] [CrossRef]
  24. Petrucci, L.; Ricci, F.; Mariani, F.; Grimaldi, C.N.; Discepoli, G.; Violi, M.; Matteazzi, N. Performance analysis of artificial neural networks for control in internal combustion engines. AIP Conf. Proc. 2019, 2191, 020129. [Google Scholar] [CrossRef]
  25. Goyal, M.; Knackstedt, T.; Yan, S.; Hassanpour, S. Artificial intelligence-based image classification methods for diagnosis of skin cancer: Challenges and opportunities. Comput. Biol. Med. 2020, 127, 104065. [Google Scholar] [CrossRef]
  26. Affonso, C.; Sassi, R.J.; Marques Barreiros, R. Biological image classification using rough-fuzzy artificial neural network. Expert Syst. Appl. 2015, 42, 9482–9488. [Google Scholar] [CrossRef]
  27. Kwan, M.F.Y.; Cheung, K.C.; Gibson, I.R. Automatic boundary extraction and rectification of bony tissue in CT images using artificial intelligence techniques. Med. Imaging Image Process. 2000, 3979, 96–905. [Google Scholar]
  28. Sara, A.; Usman Tariq, M. Handwriting recognition using artificial intelligence neural network and image processing. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 7. [Google Scholar]
  29. Park, K.; Chae, M.; Cho, J.H. Image Pre-Processing Method of Machine Learning for Edge Detection with Image Signal Processor Enhancement. Micromachines 2021, 12, 73. [Google Scholar] [CrossRef]
  30. Znamenskaya, I.A.; Doroshchenko, I.A. Edge detection and machine learning for automatic flow structures detection and tracking on schlieren and shadowgraph images. J. Flow Vis. Image Process. 2021, 28, 1–26. [Google Scholar] [CrossRef]
  31. Nazish, T.; Hamzah, R.A.; Ng, T.F.; Wang, S.L.; Ibrahim, H. Quality assessment methods to evaluate the performance of edge detection algorithms for digital image: A systematic literature review. IEEE Access 2021, 9, 87763–87776. [Google Scholar]
  32. Xu, X.; Zhang, X.; Zhang, T.; Shi, J.; Wei, S.; Li, J. On-Board Ship Detection in SAR Images Based on L-YOLO. In Proceedings of the 2022 IEEE Radar Conference (RadarConf22), New York, NY, USA, 21–25 March 2022. [Google Scholar]
  33. Chang, Y.-L.; Anagaw, A.; Chang, L.; Wang, Y.C.; Hsiao, C.-Y.; Lee, W.-H. Ship Detection Based on YOLOv2 for SAR Imagery. Remote Sens. 2019, 11, 786. [Google Scholar] [CrossRef]
  34. Liu, W.D.; Anguelov, D.; Erhan, C.; Szegedy, S.E.; Fu, R.C.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 17 October 2016; pp. 21–37. [Google Scholar]
  35. Xu, Y.; Li, D.; Xie, Q.; Wu, Q.; Wang, J. Automatic defect detection and segmentation of tunnel surface using modified Mask R-CNN. Measurement 2021, 178, 109316. [Google Scholar] [CrossRef]
  36. Suh, S.; Park, Y.; Ko, K.; Yang, S.; Ahn, J.; Shin, J.-K.; Kim, S. Weighted Mask R-CNN for Improving Adjacent Boundary Segmentation. J. Sens. 2021, 2021, 1–8. [Google Scholar] [CrossRef]
  37. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R.B. Mask R-CNN, CoRR,vol. abs/1703.06870, 2017. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22 October 2017. [Google Scholar]
  38. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
  39. Nie, X.; Duan, M.; Ding, H.; Hu, B.; Wong, E.K. Attention Mask R-CNN for Ship Detection and Segmentation From Remote Sensing Images. IEEE Access 2020, 8, 9325–9334. [Google Scholar] [CrossRef]
  40. Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path Aggregation Network for Instance Segmentation. In Proceedings of the 2018, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 8759–8768. [Google Scholar]
  41. Yang, X.; Fu, K.; Sun, H.; Yang, J.; Guo, Z.; Yan, M.; Zhang, T.; Xian, S. R2CNN++: Multi-dimensional attention based rotation invariant detector with robust anchor strategy. arXiv 2018, arXiv:1811.07126. [Google Scholar]
  42. Oskar, A.V.; Akram, S.U.; Kannala, J. Mask-RCNN and U-net ensembled for nuclei segmentation. In Proceedings of the 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019. [Google Scholar]
  43. Leger, A.; Le Goic, G.; Fofi, D.; Kornalewski, R. R-CNN based automated visual inspection system for engine parts quality assessment. In Proceedings of the 15th International Conference on Quality Control by Artificial Vision, SPIE, Mulhouse, France, 15–17 May 2021; Volume 11794. [Google Scholar]
  44. Martinelli, R.; Ricci, F.; Zembi, J.; Battistoni, M.; Grimaldi, C.; Papi, S. Lean Combustion Analysis of a Plasma-Assisted Ignition System in a Single Cylinder Engine Fueled with E85; SAE Technical Paper 2022-24-0034; SAE International: Warrendale, PA, USA, 2022. [Google Scholar]
  45. Canepa, E.; Nilberto, A. Experimental flame front characterisation in a lean premix burner operating with syngas simplified model fuel. Energies 2019, 12, 2377. [Google Scholar] [CrossRef]
  46. Melo, M.J.; Sousa, J.M.M.; Costa, M.; Levy, Y. Flow and combustion characteristics of a low-NOx combustor model for gas turbines. J. Propuls. Power 2011, 27, 1212–1217. [Google Scholar] [CrossRef]
  47. Ricci, F.; Discepoli, G.; Cruccolini, V.; Petrucci, L.; Papi, S.; Di Giuseppe, A.; Grimaldi, C. Energy characterization of an innovative non-equilibrium plasma ignition system based on the dielectric barrier discharge via pressure-rise calorimetry. Energy Convers. Manag. 2021, 244, 114458. [Google Scholar] [CrossRef]
  48. Zembi, J.; Ricci, F.; Grimaldi, C.; Battistoni, M. Numerical Simulation of the Early Flame Development Produced by a Barrier Discharge Igniter in an Optical Access Engine. In Proceedings of the 15th International Conference on Engines & Vehicles, Capri, Italy, 12–16 September 2021; pp. 1–12. [Google Scholar] [CrossRef]
  49. Ricci, F.; Cruccolini, V.; Discepoli, G.; Petrucci, L.; Grimaldi, C.; Papi, S. Luminosity and Thermal Energy Measurement and Comparison of a Dielectric Barrier Discharge in an Optical Pressure-Based Calorimeter at Engine Relevant Conditions. SAE Tech. Pap. Ser. 2021, 1, 1–11. [Google Scholar] [CrossRef]
  50. Starikovskaia, S.M. Assisted ignition and combustion. J. Phys. D Appl. Phys. 2006, 39, R265. [Google Scholar] [CrossRef]
  51. Starikovskii, A. Plasma supported combustion. Proc. Combust. Inst. 2005, 30, 2405–2417. [Google Scholar] [CrossRef]
  52. Idicheria, C.A.; Yun, H.; Najt, P.M. An Advanced Ignition System for High Efficiency Engines. In Proceedings of the Ignition Systems for Gasoline Engines: 4th International Conference, Berlin, Germany, 6–7 December 2018; pp. 40–54. [Google Scholar]
  53. Discepoli, G.; Cruccolini, V.; Ricci, F.; Di Giuseppe, A.; Papi, S.; Grimaldi, C. Experimental characterisation of the thermal energy released by a Radio-Frequency Corona Igniter in nitrogen and air. Appl. Energy 2020, 263, 114617. [Google Scholar] [CrossRef] [Green Version]
  54. Cruccolini, V.; Grimaldi, C.N.; Discepoli, G.; Ricci, F.; Petrucci, L.; Papi, S. An Optical Method to Characterize Streamer Variability and Streamer-to-Flame Transition for Radio-Frequency Corona Discharges. Appl. Sci. 2020, 10, 2275. [Google Scholar] [CrossRef]
  55. Cruccolini, V.; Discepoli, G.; Ricci, F.; Grimaldi, C.N.; Di Giuseppe, A. Optical and Energetic Investigation of an Advanced Corona Ignition System in a Pressure-Based Calorimeter. E3S Web Conf. 2020, 197, 06019. [Google Scholar] [CrossRef]
  56. Shawal, S.; Goschutz, M.; Schild, M.; Kaiser, S.; Neurohr, M.; Pfeil, J.; Koch, T. High-speed imaging of early flame growth in spark-ignited engines using different imaging systems via endoscopic and full optical access. SAE Int. J. Engines 2016, 9, 704–718. [Google Scholar] [CrossRef]
  57. Kaiming, H.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017. [Google Scholar]
  58. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015. [Google Scholar]
  59. Huang, J.; Rathod, V.; Sun, C.; Zhu, M.; Korattikara, A.; Fathi, A.; Fischer, I.; Wojna, Z.; Song, Y.; Guadarrama, S.; et al. Speed/accuracy trade-offs for modern convolutional object detectors. In Proceedings of the Computer Vision and Pattern Recognition 2017, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  60. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the Computer Vision and Pattern Recognition 2015, Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  61. Ghosal, P.; Nandanwar, L.; Kanchan, S.; Bhadra, A.; Chakraborty, J.; Nandi, D. Brain tumor classification using ResNet-101 based squeeze and excitation deep neural network. In Proceedings of the Second International Conference on Advanced Computational and Communication Paradigms (ICACCP), Gangtok, India, 25–28 February 2019. [Google Scholar]
  62. Tsung-Yi, L.; Dollar, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  63. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the Computer Vision and Pattern Recognition 2016, Las Vegas, NV, USA, 26 June–1 July 2016. [Google Scholar]
  64. Safa, M.; Ejbali, R.; Zaied, M. Denoising autoencoder with dropout based network anomaly detection. In Proceedings of the International Conference on Sustainable Environment and Agriculture 2019, Honolulu, HI, USA, 18–20 October 2019. [Google Scholar]
  65. Islam, M.Z.; Islam, M.M.; Asraf, A. A combined deep CNN-LSTM network for the detection of novel coronavirus (COVID-19) using X-ray images. Inform. Med. Unlocked 2020, 20, 100412. [Google Scholar] [CrossRef]
Figure 1. The main components of the optical access engine; focus on the optical configuration, which allows acquisition of the flame front evolution inside the combustion chamber.
Figure 2. Area of the combustion chamber detected by the high-speed camera.
Figure 3. Barrier Discharge Igniter with focus on the streamers’ evolution around the firing end.
Figure 4. Representation of the main post-processing steps applied to the original image.
Figure 5. (a) Extraction of the region of interest during the training session and determination of the binarized image from the Mask. This latter is obtained during the test session starting from the outputs of the training session. (b) Representation of data division to perform training and test of the proposed algorithm.
Figure 6. (a) Representation of the main steps used to obtain the target image to be compared to the proposed algorithm output. (b) Definition of the noise level to be subtracted from (B) to obtain the image (C). (A′) represents the image with no flame occurring used as a sample to define the noise level. Figure (B′) shows, in false colors, image (A′), while (C′) is reported to emphasize, from the minimum to the maximum level, the average level of (A′). The histogram on the right side shows the level distribution of the pixels of (C′). Image (D) shows the result of the binarization process.
Figure 7. Qualitative and quantitative comparison, at different CAD after the end of the discharge, between the original images and the output of Mask R-CNN, carried out to evaluate the effectiveness of the proposed method. Overestimations of the flame front are indicated in green, while underestimations in purple.
Figure 8. Equivalent flame radius at different λ (a–d) produced by BR (blue curve) and Mask R-CNN (red curve) to be compared with the Target (black round markers). The error in the evaluation of the equivalent flame radius is intentionally reported, at λ = 1.5 and λ = 1.8, without the absolute value to better distinguish overestimations and underestimations produced by the compared algorithms.
Figure 9. Comparison between Mask R-CNN outputs and BR ones at different instants after the end of the discharge at λ = 1.6. The red lines on the images indicate the contour of the Target flame front, whereas the white areas indicate the flame front computed by the algorithms (Mask R-CNN on top, BR on the bottom).
Figure 10. Dispersion of the equivalent flame radius at λ = 1.8 for both analyzed algorithms ((a) for base reference and (b) for Mask R-CNN). The blue curve and red curve indicate the corresponding average value, while the green curve is the case detected as an anomaly from the BR Method. Images on the right represent the flame front evolution of such a case that Mask R-CNN correctly detects.
Figure 11. Evaluation of the metrics at λ = 1.8 for two cases. The green area represents the overestimation performed by the tested method in detecting the target flame area; the purple area, instead, represents the underestimation.
Table 1. Optical access engine features.
Feature                  Value and Unit
Displaced volume         500 cc
Stroke                   88 mm
Bore                     85 mm
Connecting Rod           139 mm
Compression ratio        8.8:1
Number of Valves         4
Exhaust Valve Open       13 CAD bBDC
Exhaust Valve Close      25 CAD aTDC
Inlet Valve Open         20 CAD bTDC
Intake Valve Close       24 CAD aBDC
Table 2. Main acquisition parameters of the imaging system.
Feature                            Value        Unit
Image resolution                   512 × 512    pixel
Sampling rate                      25           kHz
Exposure time                      49           µs
Bit depth                          12           bit
Spatial resolution                 124          µm/pixel
Temporal resolution (@1000 rpm)    0.24         CAD/frame
Table 3. Main technical characteristics of the experimental points chosen to develop and test the methodology proposed in the present work.
λ      IT, CAD aTDC    IMEP, bar    CoVIMEP, %
1.4    26              3.19         1.21
1.5    32              2.95         1.21
1.6    38              2.93         1.14
1.7    44              2.70         1.71
1.8    56              2.52         3.52
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
