Article

Lightweight AI Framework for Industry 4.0 Case Study: Water Meter Recognition

1 CES Lab, ENIS, Sfax University, Sfax 3029, Tunisia
2 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
3 Faculty of Engineering, Université de Moncton, Moncton, NB E1A3E9, Canada
4 Spectrum of Knowledge Production & Skills Development, Sfax 3027, Tunisia
5 International Institute of Technology and Management, Libreville BP1989, Gabon
6 Department of Electrical and Electronic Engineering Science, School of Electrical Engineering, University of Johannesburg, Johannesburg 2006, South Africa
* Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2022, 6(3), 72; https://doi.org/10.3390/bdcc6030072
Submission received: 9 June 2022 / Revised: 27 June 2022 / Accepted: 28 June 2022 / Published: 1 July 2022
(This article belongs to the Special Issue Advancements in Deep Learning and Deep Federated Learning Models)

Abstract

The evolution of applications in telecommunications, networking, computing, and embedded systems has led to the emergence of the Internet of Things and Artificial Intelligence. Combining these technologies improves productivity by optimizing consumption and facilitating access to real-time information. This work focuses on the Industry 4.0 and Smart City paradigms and proposes a new approach to monitoring and tracking water consumption using OCR together with an artificial intelligence algorithm, in particular the YOLOv4 machine learning model. The goal of this work is to provide optimized results in real time. The recognition rate obtained with the proposed algorithms is around 98%.

1. Introduction

Nowadays, smart cities are conceived to integrate multiple information types and technologies to offer more and better services. The research progress in this domain solved many complex problems in recent years, such as pollution, e-health, transport, and remote data collection.
Moreover, the use of real-time data and Artificial Intelligence (AI) has increased efficiency and offered flexibility and ease of use. In fact, deep learning and machine learning enable systems to correctly interpret external data, to learn from such data, and to use this knowledge to achieve specific goals and tasks through flexible adaptation [1,2,3].
The Internet of Things (IoT) is used in many areas: not only smart cities, smart agriculture, and Industry 4.0 [4], but also sports and e-health (in this case called IoMT [5,6,7]). When data are collected from different sensors via IoT, artificial intelligence is used to process them, generate forecasts, and achieve considerable savings in consumption.
In this paper, we focus on remote automatic data treatment using artificial intelligence for water monitoring. This task will be performed automatically without an agent, which allows a low-cost solution. In addition, this real time data helps customers to obtain more comprehensive visibility of their water consumption.
In this work, we start from the real case of the water consumption management infrastructure in Tunisia and apply AI to facilitate management and minimize consumption for users.
The contributions of this paper may be summarized as follows.
  • An AI- and OCR-based model is proposed to detect and extract water meter numbers.
  • This model can be implemented on smartphones.
  • The model enables detecting, extracting, and calculating pertinent data, such as consumption and date, and storing them in a database.
  • The accuracy obtained from the object detection model is about 98%.
In light of these results, several perspectives are proposed.
This paper is organized into four parts:
- The state of the art, which covers the application of AI in the context of smart cities, particularly in consumption management.
- The proposed approach, which facilitates data collection, storage, and estimation of consumption.
- The results of the implementation of the proposed approach.
- A conclusion and perspectives of the proposed work.

2. Related Works

2.1. Industry 4.0

The concept of Industry 4.0 was launched in 2011 by the government of Germany. The aim was to increase and maintain the productivity and flexibility of the German manufacturing sector [8]. It is about promoting smart production by machines and humans communicating with each other.
Although Industry 4.0 is preceded by three industrial revolutions, it is considered disruptive since it aims to make the manufacturing system intelligent through factories, products, and services that are themselves intelligent and interconnected. It is about making all the objects and stakeholders of a factory interconnected throughout the value chain.
Industry 4.0, therefore, involves contemporary societies and organizations and is the subject of research in the academic and industrial world [9]. The transdisciplinarity of the concept, translated by the strong interest given to said concept, leads to the emergence of a diversity of terminology, such as “future industry”, “digital industry”, “smart industry”, “industrial internet”, or “digital transformation” [10]. Some authors characterize Industry 4.0 as “systems that communicate and cooperate with each other, but also with humans, to decentralize decision making” [11].
The definition given to the term industrial internet by General Electric confirms the transdisciplinary nature of the Industry 4.0 concept. It describes the integration of machines, computers, and humans with sensors, connected objects, and software, enabling the prediction, planning, and control of industrial operations and generating transformational organizational results [12]. It is recognized that a long period of time is needed for a change, restructuring, and even an industrial revolution to develop and adjust. Thus, Qin [11] states that, in parallel with the implementation of change, the definition of the Industry 4.0 concept will be refined and adapted to the advances of the field [13]. Indeed, for Blanchet [14], it is a new paradigm of inserting these technologies into industries. Companies are driven to invest in integrating new information technologies, automating processes through robotics, cyber-physical systems, and embedded systems, and coordinating supply chains [14]. This paradigm ranges from optimizing physical assets to optimizing how data and information are leveraged throughout the product lifecycle. This digital optimization is based on an information flow, represented by a “digital thread”, that spans the entire product lifecycle.
To optimize the manufacturing ecosystem, it is important to use information well. The technologies used in Industry 4.0 provide the means for smart connected devices and sensors to better utilize data, which helps optimize productivity and efficiency [15]. For example, advanced analytics transform information into results that support decision makers, 3D printing converts digital data into tangible parts, and captured information helps plan the ideal maintenance time. In other words, the key to seizing new opportunities and boosting performance is to actively manage information along the value chain to avoid information leakage [15]. These leakages represent lost information that may affect a stakeholder in the value chain. Moreover, machines and goods are a major cost category for manufacturing companies. Therefore, the optimal use of information from sensors and smart, connected devices will have a significant effect on optimizing productivity, life cycle management, and organizational design.
The introduction of remote monitoring and steering to reduce downtime, by making the best use of all machine information, can improve asset utilization, and thus generate value. For example, for [16], “Industry 4.0 refers to recent technological advances in which the Internet and associated technologies (e.g., embedded systems) serve as a fulcrum for integrating physical objects, human actors, smart machines, production lines, and processes across organizational boundaries to form a new, more agile, intelligent, and connected value chain” [16].

2.2. Water Monitoring

One of the smart cities components is “Smart City Services”. It includes the activities that sustain a city’s population; these involve municipal tasks, such as supply of water, waste management, environmental control, and both monitoring and billing meters, etc. In this paper, we will apply the basics of industry 4.0 to the management of water consumption in Tunisia.
Tunisia is one of the Mediterranean countries with scarce water resources. The mobilizable potential is estimated at 4.6 billion m3, the regulatable resources amount to 4.1 billion m3, and the current mobilization rate is 74%. The volume currently available per capita per year is 450 m3, against 556 in Morocco, 776 in Syria, and 2200 in Turkey [17].
In this context, automatic meter reading is an important subject, which refers to automatically recording the consumption of electric energy, gas, and water for both monitoring and billing. Despite the existence of smart readers, they are not widespread in many countries, especially in the underdeveloped ones, and the reading is still performed manually on site by an operator who manually writes the meter number on a piece of paper, which could be easily lost, with no reading proof, such as an image.
Since this operation is subject to human errors, unfortunately there is no checking process to confirm correct data reading before saving the values in a database, and even if there is a process, such as having two operators visit the site, this way of checking information consumes human effort and time. Moreover, it shows low efficiency. Furthermore, due to the large number of meters to be evaluated, the inspection is usually done by another operator, and errors might go unnoticed. Performing the meter inspection automatically would reduce mistakes introduced by the human factor and save manpower. Furthermore, the reading could also be executed automatically using a mobile application installed on the operator's smartphone. In summary, image-based automatic meter reading has advantages, such as lower cost and fast installation, since it does not require renewal or replacement of existing meters.
The work carried out in this paper was realized within the framework of an industrial collaboration with the Company of Production and Management of Water in Tunisia.
This prototype will be developed and used in the context of the company digitization and governance.
In the method currently used in Tunisia, errors and data-recording problems may result from several causes:
- Errors in collecting information from the meter.
- Errors in recording data in the register.
- Errors in saving the data in the database.
- Lack of clarity in the data obtained manually.
- Loss of the paper register on which the data are collected.
This system connects the water management company, the employees, and the customers, leading to the following outcomes:
- For the Tunisian water management company: continuously obtaining updated consumption values of the different customers.
- For the customers: real-time access to consumption values as well as the saved invoices.
- For the collaborators: avoiding errors in the manual entry of values.
To uncover the benefits of real-time data in water monitoring, we look at how they are collected and analyzed and the kind of insights they can provide for water providers. In fact, intelligent access to consumption data is part of the end-to-end connectivity that is increasingly important to their business processes.
Before presenting our approach, it is important to briefly present object detection/recognition algorithms, which are based on image classification, object localization, and object detection/segmentation. Figure 1 shows these algorithms. Top-performing deep learning models are R-CNN (region-based convolutional neural network), Fast R-CNN, Faster R-CNN, SSD (single shot multibox detector), and YOLO.
Several works have addressed the problem of AMR (automatic meter reading). In some cases, the authors integrate the process in a single step, while others divide it into three main steps:
(1) Meter detection;
(2) Digit segmentation;
(3) Digit recognition.
The authors in Refs. [18,19,20] limited themselves to images of meters with specific characteristics (position and colors of the digits, etc.). The major drawback of this technique is that it may only work on certain types of water meters under specific conditions.
The authors in Refs. [21,22] use deep learning approaches, which require a large image base to obtain efficient results.
In [23], the authors perform three steps of electric meter recognition: preprocessing, segmentation of individual digits, and reading recognition. The results were obtained with 21 images used.
Many works focus on a single step of the AMR pipeline [24,25], which makes it difficult to evaluate the presented methods accurately from end to end, particularly in terms of execution time and the hardware used.
The authors in [26] focus on the problem of water meter recognition in smart city applications. The experimental results show high accuracy while requiring fewer parameters and less computation. They also implemented a system for real-time database management on a Cloud platform. The system sends the image of the meter reading, which increases the amount of data stored.
In [27,28], the authors explore the angle between the pointer and the dial to perform the reading. Therefore, they do not work on digit counters but rather on dial counters.
Based on the different approaches used in the state of the art and given the technological and infrastructural constraints (limited network connection and coverage, GSM and tablets with minimal resolution and resources, etc.), we propose in this paper a hybrid approach. This approach allows for recognition, computation and minimization of consumption.

3. Proposed Approach

We present our proposed system based on deep learning. The software architecture allows us to describe in a symbolic and schematic way the different elements of the computer system, their interrelations, and their interactions.
Our approach comprises three units (Figure 2):
- Display unit: the mobile application;
- Image processing unit: the AI model, which is integrated within the mobile application;
- Water provider data storage unit: the database.

3.1. Specification

It is important to identify and specify the functionalities that will be implemented. This will determine what we expect from our application. Indeed, our system will be modeled using diagrams that respect the UML modeling language.
So, to satisfy the needs of users, our system must provide the main services illustrated in Figure 3. In Figure 4, we propose the use case diagram of the online detection system.

3.2. Image Processing

The image processing unit is a class containing methods to detect the contours of water meter numbers, extract these numbers, and calculate the monthly consumption of each meter. Once the outline of the meter is detected, an object named meter is created. This object will then be processed by a fast OCR algorithm. Once we have obtained the water meter number, the consumption of the meter will be calculated automatically, and the image, the detected number, and the consumption figure, as well as the GPS location, the current date, and the name of the field, must be recorded in our database. Meter is a class; all meters are objects generated by the object detection process.
The following diagram (Figure 5) is used to represent the triggering of events according to the system states and to model parallelizable behaviors. It is used to describe a workflow.
In our approach, image processing goes through several steps: first, the image is inserted; then, each image undergoes the two phases of the program: meter detection and number extraction.
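The image processing unit described above might be sketched as a class. The `Meter` class name comes from the text, while the field and method names (`monthly_consumption`, `to_record`) are illustrative assumptions, not the authors' actual implementation:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Meter:
    """One detected water meter (hypothetical sketch of the unit above)."""
    meter_id: str
    reading: int                 # digits extracted by the OCR step
    previous_reading: int = 0
    read_date: date = field(default_factory=date.today)
    gps: tuple = (0.0, 0.0)

    def monthly_consumption(self) -> int:
        # Consumption is the difference between two successive readings.
        return self.reading - self.previous_reading

    def to_record(self) -> dict:
        # Flat record ready to be stored in the database (e.g., Firebase).
        return {
            "meter_id": self.meter_id,
            "reading": self.reading,
            "consumption": self.monthly_consumption(),
            "date": self.read_date.isoformat(),
            "gps": self.gps,
        }

m = Meter("TN-001", reading=1534, previous_reading=1498)
print(m.monthly_consumption())  # 36
```

Each object produced by the detection step would populate such a record before it is pushed to the database.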

3.2.1. Yolo Meter Detection

The object detection model will be integrated into a mobile application, so we need to choose the fastest, lightest, and most accurate one. A standard convolutional network followed by a fully connected layer cannot solve object detection directly because the length of the output layer is variable, not constant: the number of occurrences of the objects of interest is not fixed. A straightforward approach would be to take different regions of interest from the image and use a CNN to classify the presence of the object within each region.
The problem with this approach is that the objects of interest might have different spatial locations within the image and different aspect ratios. Hence, one would have to select a huge number of regions, which could become computationally prohibitive. Therefore, algorithms such as R-CNN, YOLO, etc., have been developed to find these occurrences rapidly.
Moreover, in YOLOv4, features are predicted for each layer using a Feature Pyramid Network, which solved the problem of missing small objects by exploiting high-resolution features. The authors in [27] compare the precision of Faster R-CNN, YOLOv4, and SSD after several object detection experiments and conclude that YOLOv4 shows the highest precision, as shown in Figure 6.

3.2.2. Yolo Implementation

The approach is based on the Darknet neural network framework for training and testing. The framework uses multi-scale training, massive data augmentation, and batch normalization. It is an open-source neural network framework written in C and CUDA.
For deep learning detection, a dataset is needed. It generally integrates several types of data (video files, images, texts, sounds, or even statistics), whose grouping enables automatic learning and model creation. The first step is thus to collect images, if necessary by exploiting data augmentation or image enhancement (1100 images). The next step is data annotation/labeling (Figure 7). Our dataset is in the Darknet YOLO format, so we can train YOLOv4 on Darknet with our custom dataset after dividing the data into three folders: training was performed on 70% of the images, validation on 10%, and testing on 20%.
To be able to integrate the model in mobile applications, the weights are converted to TensorFlow Lite.
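The 70/10/20 split described above can be reproduced with a few lines; the `split_dataset` helper and the file names are hypothetical, used only to show the bookkeeping:

```python
import random

def split_dataset(items, train=0.7, val=0.1, seed=42):
    """Shuffle and split items into train/val/test lists (70/10/20 here)."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

# The paper's dataset contains 1100 images (names here are placeholders).
images = [f"img_{i:04d}.jpg" for i in range(1100)]
train_set, val_set, test_set = split_dataset(images)
print(len(train_set), len(val_set), len(test_set))  # 770 110 220
```

Each list would then be written to its own Darknet folder before launching training.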

3.3. Meter Number Extraction

OCR methods use algorithms to recognize characters, and there are two variants. In pattern recognition, the algorithm is trained with examples of characters in different fonts and then uses this training to recognize characters in the input. In feature recognition, the algorithm has a specific set of rules regarding the features of characters, for example the number of angles and crossed lines, and uses these rules to recognize the text [28,29,30].
In our approach, the open-source OCR engine Tesseract is used and deployed in the mobile application. The Tesseract process flow is presented in Figure 8.
Image processing starts with eliminating image noise using non-local means denoising and Gaussian blur. Next, four different thresholds are used to preprocess the images. This binarization is based on Niblack's algorithm, which creates a threshold image: a rectangular window glides across the image, and the threshold value for the center pixel is computed from the mean and variance of the gray values in this window.
Another method is based on Otsu's histogram thresholding. Using this setup, we develop an effective thresholding technique for diverse test situations. The results are provided in Figure 9.
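The two binarization schemes just described can be sketched in pure NumPy; this is a didactic approximation (a production pipeline would use OpenCV primitives, and the denoising steps are omitted here):

```python
import numpy as np

def niblack_threshold(img, window=15, k=-0.2):
    """Local Niblack threshold: T = mean + k * std over a sliding window.
    A slow didactic sketch; real code would use an integral image."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.zeros_like(img, dtype=bool)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            win = padded[y:y + window, x:x + window]
            out[y, x] = img[y, x] > win.mean() + k * win.std()
    return out

def otsu_threshold(img):
    """Global Otsu threshold: maximize between-class variance of the histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

The binarized crops are what get handed to Tesseract in the following step.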
After having realized the architecture of our system, the next step will be dedicated to the implementation and realization of the mobile application.

3.4. Mobile Application

To facilitate data access and recording for the agents who read the meters, it is important to use their smartphones. Since these smartphones have different characteristics, we proposed to use lightweight mobile applications and chose Android as the mobile platform.
Indeed, since the majority of smartphones used in this work run Android, we decided to use Android Studio as the development environment. We could have used cross-platform frameworks, but given the need for an optimized, lightweight application, we chose to build a native Android application.
On this platform, we used a lightweight AI framework to perform digit recognition. The application then allows us to save the data on the phone.
As soon as the system is connected via 3G and/or Wi-Fi, the data are saved in the main database. In the results section, screenshots display the result of the implementation.

4. Obtained Results

4.1. Counter Detection

In this part, we will present the results of the implementation of the application. Figure 10 illustrates the object detection result.
Using a smartphone, we detect the counter number, as described in the proposed approach part.
After training the custom tiny-YOLOv4 object detector and saving the obtained weights, we repeated the process, modifying the configuration file to obtain the weights that achieved the highest mAP score on our training set.
Once training is finished, we use our trained custom tiny-YOLOv4 detector to run inference on test images. When we run this detector on a test image, we successfully obtain the bounding box of the detected water meter number.
Finally, we converted the weights to TensorFlow's .pb representation and then to TensorFlow Lite to prepare the model for integration into the mobile application.
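As a hedged illustration of what is done with the bounding box, the sketch below crops the detected region from an image array, assuming a YOLO-style normalized (center-x, center-y, width, height) box; the `crop_detection` helper is ours, not part of the paper's code:

```python
import numpy as np

def crop_detection(image, box):
    """Crop a YOLO-style detection from an image.
    `box` = (cx, cy, w, h), normalized to [0, 1] as YOLO predicts;
    returns the pixel region to hand to the OCR step."""
    H, W = image.shape[:2]
    cx, cy, w, h = box
    x0 = max(int((cx - w / 2) * W), 0)
    y0 = max(int((cy - h / 2) * H), 0)
    x1 = min(int((cx + w / 2) * W), W)
    y1 = min(int((cy + h / 2) * H), H)
    return image[y0:y1, x0:x1]

img = np.zeros((100, 200), dtype=np.uint8)  # placeholder grayscale image
roi = crop_detection(img, (0.5, 0.5, 0.4, 0.2))
print(roi.shape)  # (20, 80)
```

Clamping to the image bounds keeps partially out-of-frame detections from raising indexing errors.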

4.2. Overview Process

After implementing the water meter counter detection process, the obtained result, including number recognition, is illustrated in Figure 11. The proposed process is as follows:
  • Detect the requested area from the image: water meter counter;
  • Perform an image processing on the images;
  • Pass the images to Tesseract;
  • Store the results of Tesseract in the desired format.
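The four steps above can be sketched as a single pipeline function; `detect_counter`, `preprocess`, and `run_tesseract` are hypothetical stand-ins for the tiny-YOLOv4 detector, the preprocessing chain, and the Tesseract call, stubbed here for illustration:

```python
def amr_pipeline(image, detect_counter, preprocess, run_tesseract):
    """End-to-end sketch of the AMR process: detect -> preprocess -> OCR -> record."""
    roi = detect_counter(image)      # 1. detect the water meter counter area
    clean = preprocess(roi)          # 2. denoise / threshold the cropped region
    digits = run_tesseract(clean)    # 3. pass the image to Tesseract
    return {"reading": int(digits)}  # 4. store the result in the desired format

# Stub components standing in for the real detector and OCR engine.
record = amr_pipeline(
    image="raw.jpg",
    detect_counter=lambda img: "roi",
    preprocess=lambda roi: roi,
    run_tesseract=lambda roi: "001534",
)
print(record)  # {'reading': 1534}
```

Keeping each stage as an injected callable makes it straightforward to swap the detector or OCR engine without touching the rest of the pipeline.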

4.3. Application Realization

After creating both the AI model and the Android mobile application and integrating the model within the application, we have a functional mobile application that fulfills all the project goals. The welcome interface (Figure 12) is the first window encountered after launching the mobile application. This interface welcomes the user and summarizes the role of the application. It lasts for 10 s, after which one of two scenarios arises:
  • If the user is not authenticated, he will be directed to the Sign in interface.
  • If not, he will be directed to the Home interface.
The authentication interface, as depicted in Figure 13, is the second window encountered after launching the application and seeing the welcome interface. Its role is to secure access to the application; it allows the entry of a username and a password. After validation of the entered information via the sign-in button, two scenarios are presented:
  • If the information is validated, the user will be redirected to the main interface of the application, which is the Home interface.
  • If not, an error message will be displayed.
Figure 14 shows the general and main menu of the application. It has four sections: “Online detection”, “Take picture”, “Real time detection”, and “Show uploads”.
Once the operator opens the “Online detection” interface, as shown in Figure 15, they can enter the field name and the zip code name and pick a picture either from the gallery or from the camera. Once the user has picked a picture, the water meter number will be detected and extracted, and both the image and the number will be displayed. They can then press the register meter button to go to the second interface, where they can enter the current water units, see the monthly water consumption, see the recording date, and locate the water meter by clicking the location view button. Finally, the operator presses the submit button to save all the water meter data in Firebase.
When the user opens the “Real time detection” interface (Figure 16), they can test the Yolo detection model and its accuracy to know the limits of the AI model and to have the best experience possible with this mobile application to perform their job successfully.
When the user opens the “Show uploads” interface (Figure 17), they will be able to both view and delete any saved record.

5. Discussion

The present work was carried out within the framework of an industrial collaboration with the Company of Production and Management of Water in Tunisia. We used a dataset of 1100 images.
The training was performed on 70% of the images, validation on 10%, and testing on 20%. We then conducted a test on 150 real images photographed from water meters. The detection process was applied to the 150 images, including some cases where the digits were between two positions, and resulted in 2 erroneous and 148 correct values.
Thanks to the learning approach, the proposed system allowed choosing the lower value, which is stored in the database. The obtained recognition rate was 98.67%.
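The reported rate follows directly from the test counts:

```python
# 148 correct readings out of 150 test images.
correct, total = 148, 150
rate = 100 * correct / total
print(f"{rate:.2f}%")  # 98.67%
```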

6. Conclusions

The present work was carried out within the framework of an industrial collaboration with the Company of Production and Management of Water in Tunisia. This prototype will be developed and used in the context of the company digitization and governance.
The objective of this paper was to develop an AI model based on deep learning and OCR algorithms that allows us to detect and extract water meter numbers. Moreover, this model was integrated into an Android mobile application. In fact, the meter images are taken by the cameras of the operators' smartphones. Then, our application allows them to detect, extract, and calculate the monthly consumption of each meter, and finally save all relevant information, such as the meter number, location, and date, in Firebase.
The accuracy obtained from the object detection model with tiny-YOLOv4 is 98%. The results obtained and the studies and experiments carried out have also enabled us to highlight certain areas of improvement for our algorithm, such as enriching the dataset and optimizing the speed and efficiency of our system.
Despite the results obtained, several perspectives of this work are being developed.
The future work aims at making a diagnosis of intelligent consumption, on the one hand, and a secure data backup using blockchain technology, on the other hand.
The blockchain will create a system allowing traceability not only of the data but also of the different transactions that take place.

Author Contributions

This paper is the result of collaboration between different authors from different universities. Conceptualization, T.F., J.K.; methodology, H.H., T.F.; software; validation, J.K. and H.H.; formal analysis, M.H.; investigation, H.E.; resources, M.H.; data curation, H.H.; writing—original draft preparation, J.K.; writing—review and editing, T.F.; visualization, M.H.; supervision, H.H.; project administration, T.F.; funding acquisition, M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R125), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

No data availability.

Acknowledgments

The authors thank Natural Sciences and Engineering Research Council of Canada (NSERC) and New Brunswick Innovation Foundation (NBIF) for the financial support of the global project. These granting agencies did not contribute to the design of the study and collection, analysis, or interpretation of data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Balouch, S.; Abrar, M.; Abdul Muqeet, H.; Shahzad, M.; Jamil, H.; Hamdi, M.; Malik, A.S.; Hamam, H. Optimal Scheduling of Demand Side Load Management of Smart Grid Considering Energy Efficiency. Energy Res. 2022, 18, 861571. [Google Scholar] [CrossRef]
  2. Masood, B.; Guobing, S.; Nebhen, J.; Rehman, A.U.; Iqbal, M.N.; Rasheed, I.; Bajaj, M.; Shafiq, M.; Hamam, H. Investigation and Field Measurements for Demand Side Management Control Technique of Smart Air Conditioners located at Residential, Commercial, and Industrial Sites. Energies 2022, 15, 2482. [Google Scholar] [CrossRef]
  3. Asif, M.; Ali, I.; Ahmad, S.; Irshad, A.; Gardezi, A.A.; Alassery, F.; Hamam, H.; Shafiq, M. Industrial Automation Information Analogy for Smart Grid Security. CMC-Comput. Mater. Contin. 2022, 71, 3985–3999. [Google Scholar] [CrossRef]
  4. Boyes, H.; Hallaq, B.; Cunningham, J.; Watson, T. The industrial internet of things (IIoT): An analysis framework. Comput. Ind. 2018, 101, 1–12. [Google Scholar] [CrossRef]
  5. França, R.P.; Monteiro, A.C.B.; Arthur, R.; Iano, Y. An Overview of the Internet of Medical Things and Its Modern Perspective. In Efficient Data Handling for Massive Internet of Medical Things; Internet of Things (Technology, Communications and Computing); Chakraborty, C., Ghosh, U., Ravi, V., Shelke, Y., Eds.; Springer: Cham, Switzerland, 2021.
  6. Frikha, T.; Chaari, A.; Chaabane, F.; Cheikhrouhou, O.; Zaguia, A. Healthcare and Fitness Data Management Using the IoT-Based Blockchain Platform. J. Healthc. Eng. 2021, 2021, 9978863.
  7. Frikha, T.; Chaabane, F.; Aouinti, N.; Cheikhrouhou, O.; Ben Amor, N.; Kerrouche, A. Implementation of Blockchain Consensus Algorithm on Embedded Architecture. Secur. Commun. Netw. 2021, 2021, 9918697.
  8. Kagermann, H.; Wahlster, W.; Helbig, J. Securing the Future of German Manufacturing Industry: Recommendations for Implementing the Strategic Initiative INDUSTRIE 4.0; Final Report of the Industrie 4.0 Working Group; Forschungsunion im Stifterverband für die Deutsche Wirtschaft e.V.: Berlin, Germany, 2013.
  9. Duan, L.; Da Xu, L. Data Analytics in Industry 4.0: A Survey. Inf. Syst. Front. 2021, 1–17.
  10. Perrier, N.; Bled, A.; Bourgault, M.; Cousin, N.; Danjou, C.; Pellerin, R.; Roland, T. Construction 4.0: A survey of research trends. J. Inf. Technol. Constr. 2020, 25, 416–437.
  11. Serpanos, D.; Wolf, M. Industrial Internet of Things. In Internet-of-Things (IoT) Systems; Springer: Cham, Switzerland, 2018.
  12. Qin, J.; Liu, Y.; Grosvenor, R. A Categorical Framework of Manufacturing for Industry 4.0 and Beyond. Procedia CIRP 2016, 52, 173–178.
  13. Blanchet, M.; Rinn, T. The Industrie 4.0 Transition Quantified; Roland Berger Think Act: Munich, Germany, 2016. Available online: www.rolandberger.com/publications/publication_pdf/roland_berger_industry_40_20160609.pdf (accessed on 15 May 2022).
  14. Preuveneers, D.; Ilie-Zudor, E. The intelligent industry of the future: A survey on emerging trends, research challenges and opportunities in Industry 4.0. J. Ambient. Intell. Smart Environ. 2017, 9, 287–298.
  15. Schumacher, A.; Erol, S.; Sihn, W. A Maturity Model for Assessing Industry 4.0 Readiness and Maturity of Manufacturing Enterprises. Procedia CIRP 2016, 52, 161–166.
  16. Fathalli, A.; Romdhane, M.S.; Vasconcelos, V.; Ben Rejeb Jenhani, A. Biodiversity of cyanobacteria in Tunisian freshwater reservoirs: Occurrence and potent toxicity—A review. J. Water Supply Res. Technol.-Aqua 2015, 64, 755–772.
  17. Gallo, I.; Zamberletti, A.; Noce, L. Robust Angle Invariant GAS Meter Reading. In Proceedings of the 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Adelaide, SA, Australia, 23–25 November 2015.
  18. Quintanilha, D.B.P.; Costa, R.W.S.; Diniz, J.O.B.; de Almeida, J.D.S.; Braz, G.; Silva, A.C.; de Paiva, A.C.; Monteiro, E.M.; Froz, B.R.; Pinheiro, L.P.A.; et al. Automatic consumption reading on electromechanical meters using HoG and SVM. In Proceedings of the 7th Latin American Conference on Networked and Electronic Media (LACNEM 2017), Valparaiso, Chile, 6–7 November 2018.
  19. Gonçalves, J.C.; Centeno, T.M. Utilização de técnicas de processamento de imagens e classificação de padrões no reconhecimento de dígitos em imagens de medidores de consumo de gás natural [Use of image processing and pattern classification techniques for digit recognition in natural gas meter images]. Abakos 2017, 5, 59–78.
  20. Cerman, M.; Shalunts, G.; Albertini, D. A mobile recognition system for analog energy meter scanning. In International Symposium on Visual Computing; Springer: Cham, Switzerland, 2016.
  21. Gomez, L.; Rusinol, M.; Karatzas, D. Cutting Sayre’s Knot: Reading Scene Text without Segmentation. Application to Utility Meters. In Proceedings of the 2018 13th IAPR International Workshop on Document Analysis Systems (DAS), Vienna, Austria, 24–27 April 2018.
  22. Elrefaei, L.A.; Bajaber, A.; Natheir, S.; Abusanab, N.; Bazi, M. Automatic electricity meter reading based on image processing. In Proceedings of the 2015 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT), Amman, Jordan, 3–5 November 2015.
  23. Tsai, C.M.; Shou, T.D.; Chen, S.C.; Hsieh, J.W. Use SSD to Detect the Digital Region in Electricity Meter. In Proceedings of the 2019 International Conference on Machine Learning and Cybernetics (ICMLC), Kobe, Japan, 7–10 July 2019.
  24. Yang, F.; Jin, L.; Lai, S.; Gao, X.; Li, Z. Fully convolutional sequence recognition network for water meter number reading. IEEE Access 2019, 7, 11679–11687.
  25. Li, C.; Su, Y.; Yuan, R.; Chu, D.; Zhu, J. Light-weight spliced convolution network-based automatic water meter reading in smart city. IEEE Access 2019, 7, 174359–174367.
  26. Salomon, G.; Laroca, R.; Menotti, D. Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020.
  27. Zuo, L.; He, P.; Zhang, C.; Zhang, Z. A robust approach to reading recognition of pointer meters based on improved Mask-RCNN. Neurocomputing 2020, 388, 90–101.
  28. Kim, J.-A.; Sung, J.-Y.; Park, S.-H. Comparison of Faster-RCNN, YOLO, and SSD for Real-Time Vehicle Type Recognition. In Proceedings of the 2020 IEEE International Conference on Consumer Electronics—Asia (ICCE-Asia), Seoul, Korea, 1–3 November 2020.
  29. Forsberg, A.; Lundqvist, M. A Comparison of OCR Methods on Natural Images in Different Image Domains; Degree Project in Technology; KTH Royal Institute of Technology: Stockholm, Sweden, 2020.
  30. Allouche, M.; Frikha, T.; Mitrea, M.; Memmi, G.; Chaabane, F. Lightweight Blockchain Processing. Case Study: Scanned Document Tracking on Tezos Blockchain. Appl. Sci. 2021, 11, 7169.
Figure 1. Object detection algorithms.
Figure 2. Software architecture.
Figure 3. General use case diagram.
Figure 4. Online detection use case diagram.
Figure 5. Flow chart of the program.
Figure 6. Comparison of the performance of deep learning models.
Figure 7. Data labeling.
Figure 8. Tesseract process flow.
Figure 9. Image processing result.
Figure 10. Object detection result.
Figure 11. Process overview.
Figure 12. The welcome interface.
Figure 13. The sign interface.
Figure 14. The home interface.
Figure 15. The online detection interfaces.
Figure 16. The real-time detection interface.
Figure 17. The show uploads interface.
Ktari, J.; Frikha, T.; Hamdi, M.; Elmannai, H.; Hmam, H. Lightweight AI Framework for Industry 4.0 Case Study: Water Meter Recognition. Big Data Cogn. Comput. 2022, 6, 72. https://doi.org/10.3390/bdcc6030072
