Article

Aerial Surveillance Leveraging Delaunay Triangulation and Multiple-UAV Imaging Systems

Ahad Alotaibi *, Chris Chatwin and Phil Birch
School of Engineering and Informatics, University of Sussex, Brighton BN1 9RH, UK
* Author to whom correspondence should be addressed.
Appl. Syst. Innov. 2024, 7(2), 23; https://doi.org/10.3390/asi7020023
Submission received: 12 January 2024 / Revised: 23 February 2024 / Accepted: 6 March 2024 / Published: 11 March 2024

Abstract

In aerial surveillance systems, achieving optimal object detection precision is of paramount importance for effective monitoring and reconnaissance. This article presents a novel approach to enhance object detection accuracy through the integration of Delaunay triangulation with multi-unmanned aerial vehicle (UAV) systems. The methodology involves positioning multiple UAVs at pre-specified locations using the Delaunay triangulation algorithm, which runs in O(n log n) time. This is compared with a conventional single-UAV approach operating at close range. Our findings reveal that the collaborative efforts of multiple UAVs, guided by Delaunay triangulation, significantly improve object detection accuracy, especially when compared to a single UAV operating in close proximity. This research employs advanced image processing techniques to identify objects in the area under surveillance. Results indicate a substantial enhancement in the collective surveillance capabilities of the multi-UAV system, demonstrating its efficacy in unconstrained scenarios. This research not only contributes to the optimization of aerial surveillance operations but also underscores the potential of spatially informed UAV networks for applications demanding heightened object detection accuracy. The integration of Delaunay triangulation with multi-UAV systems emerges as a promising strategy for advancing the capabilities of aerial surveillance in scenarios ranging from security and emergency response to environmental monitoring.

1. Introduction

Achieving optimal object detection precision is crucial for effective monitoring and reconnaissance using aerial surveillance systems. With the proliferation of unmanned aerial vehicles (UAVs), there is a growing need to enhance the accuracy of object detection methodologies. This article presents a novel approach that integrates Delaunay triangulation with multiple-UAV systems, aiming to improve object detection accuracy and overcome the limitations of conventional single-UAV approaches. The use of UAVs for aerial surveillance has gained significant attention in recent years due to their ability to navigate challenging terrains, provide large-scale coverage, and capture high-resolution imagery. However, the effectiveness of a traditional single-UAV surveillance system is restricted by its limited capability to cover large areas and detect objects with high precision, especially when operating in close proximity [1,2]. To address these limitations, this research explores the benefits of employing multiple UAVs in collaborative efforts guided by Delaunay triangulation.
Delaunay triangulation is a geometric algorithm that divides a set of points in a plane into a set of non-overlapping triangles, such that no point is inside the circumcircle (the circle that passes through all three vertices) of any triangle [3]. This triangulation has several desirable properties and has notable applications in positioning and localization, particularly in the context of spatial optimization and efficient coverage [4]. In this research, Delaunay triangulation is used as a geometric approach that involves positioning multiple UAVs at pre-specified locations, creating a network that maximizes coverage and optimizes the collective surveillance capabilities. This approach allows for the systematic division of the surveillance area into smaller triangles, enhancing the precision of object detection.
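To make the empty-circumcircle property concrete, the following minimal Kotlin sketch tests whether a point lies inside the circumcircle of a triangle; a triangulation is Delaunay exactly when this test fails for every point and every triangle. The `Point` class and function names are illustrative and are not taken from the DT Drones implementation.

```kotlin
data class Point(val x: Double, val y: Double)

// Returns true if p lies strictly inside the circumcircle of triangle (a, b, c).
// A triangulation of a point set is Delaunay when this returns false for every
// triangle and every other point of the set.
fun inCircumcircle(a: Point, b: Point, c: Point, p: Point): Boolean {
    // Circumcenter (ux, uy) from the standard closed-form expression.
    val d = 2.0 * (a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y))
    if (d == 0.0) return false                    // degenerate (collinear) triangle
    val aSq = a.x * a.x + a.y * a.y
    val bSq = b.x * b.x + b.y * b.y
    val cSq = c.x * c.x + c.y * c.y
    val ux = (aSq * (b.y - c.y) + bSq * (c.y - a.y) + cSq * (a.y - b.y)) / d
    val uy = (aSq * (c.x - b.x) + bSq * (a.x - c.x) + cSq * (b.x - a.x)) / d
    val rSq = (a.x - ux) * (a.x - ux) + (a.y - uy) * (a.y - uy)   // squared circumradius
    val pSq = (p.x - ux) * (p.x - ux) + (p.y - uy) * (p.y - uy)   // squared distance to p
    return pSq < rSq
}
```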
The current state of research in the field of aerial surveillance indicates a significant interest in enhancing object detection accuracy. Several studies have focused on developing advanced image processing techniques to identify objects accurately and efficiently [5,6,7,8]. Controversial and diverging hypotheses regarding the optimal strategies for aerial surveillance also exist [9,10]. Some researchers argue that single UAV approaches with specialized sensors and algorithms yield satisfactory results, while others advocate for the scalability and collaboration benefits offered by multiple UAV systems [11,12]. This research aims to address these diverging viewpoints by investigating the performance of a multiple-UAV system integrated with Delaunay triangulation.
The principal aim of this study is to evaluate the impact of the collaborative efforts of multiple UAVs, guided by Delaunay triangulation, on object detection accuracy. Through advanced image processing techniques and experimental assessment, this research seeks to quantify the improvements achieved by this approach compared to traditional single UAV surveillance systems operating at close proximity. In essence, this study aims to answer the question: how does the integration of Delaunay triangulation with multiple-unmanned aerial vehicle systems enhance object detection accuracy in comparison to traditional single UAV approaches? The image processing methodology in this research leverages Amazon Rekognition, a robust and scalable solution for object detection in aerial surveillance [13]. Amazon Rekognition offers a pre-trained deep learning model capable of identifying and locating objects within images, making it suitable for real-time analysis of UAV-captured data. Amazon Rekognition allows the system to automatically identify and locate objects of interest in the surveillance area. This includes vehicles, individuals, and other relevant items depending on the application scenario. The deep learning models within Amazon Rekognition are continuously updated, ensuring that the system benefits from the latest advancements in object detection research. Amazon Rekognition’s real-time processing capabilities enable swift analysis of images, facilitating rapid decision-making in surveillance scenarios. The system can promptly identify and report detected objects, aiding in timely response and intervention. To align with the latest developments in image processing for object detection, this research incorporates insights from recent advancements in deep learning models, transfer learning techniques, and optimization strategies. The integration of Amazon Rekognition represents a practical application of cutting-edge technology in the context of aerial surveillance [14].
The significance of this research extends beyond the optimization of aerial surveillance operations. The integration of Delaunay triangulation with multiple-UAV systems provides the potential to enhance object detection accuracy in various domains of interest, including security, emergency response, and environmental monitoring. In practical scenarios, constraints in aerial surveillance operations can manifest in various forms, significantly impacting the effectiveness and scope of surveillance activities. These constraints can include challenges related to limited coverage, especially in vast or intricate terrains, restricting the ability to achieve comprehensive surveillance and detect objects or events. Additionally, constraints tied to the payload capacity of UAVs can impact the deployment of necessary sensors and equipment. Operational limitations such as battery life, range, and adverse weather conditions pose challenges, affecting the duration, distance, and effectiveness of surveillance efforts. Regulatory constraints pertaining to airspace usage, flight altitude, and privacy considerations, as well as the need for efficient communication and coordination among multiple UAVs, further contribute to the complex landscape of constraints encountered in practice. Understanding and addressing these constraints is essential for designing and implementing effective aerial surveillance systems. By providing enhanced surveillance capabilities in unconstrained scenarios, this approach contributes to the development of more efficient and effective surveillance strategies.

2. Materials and Methods

2.1. Delaunay Triangulation

One of the well-known problems in computational geometry involves computing the Delaunay triangulation of a given set of points. This mathematical challenge was first explored by the Russian mathematician Boris Nikolaevich Delone (whose name is also transliterated as Delaunay) in 1934 [15]. Since its inception, numerous algorithms have been developed to address this problem. Notably, in 1985, Guibas and Stolfi [16] introduced a divide-and-conquer algorithm that achieved the optimal bound of O(n log n), meaning that its running time grows in proportion to n log n, where n is the number of input points. Another widely employed method is incremental construction, which builds the triangulation by adding points one at a time and adjusting the existing triangulation accordingly; the Bowyer–Watson and incremental insertion algorithms are examples of this category [17]. Moreover, advancements in randomized algorithms, such as randomized incremental construction, have brought probabilistic techniques to efficiently compute the Delaunay triangulation. Each algorithm presents unique trade-offs in terms of computational complexity, memory usage, and applicability to specific scenarios, catering to the diverse needs of computational geometry practitioners and researchers.
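The DT Drones app described below uses the divide-and-conquer algorithm; as an illustration of the incremental family mentioned above, the sketch below shows a minimal Bowyer–Watson insertion loop in Kotlin (worst-case O(n²), with no robustness hardening), reusing the `Point` and `inCircumcircle` helpers from the previous sketch. All names are illustrative and do not come from the paper's implementation.

```kotlin
data class Triangle(val a: Point, val b: Point, val c: Point)

// An undirected edge: normalised so that (p, q) and (q, p) compare equal.
data class Edge(val p: Point, val q: Point) {
    private val key = if (p.x < q.x || (p.x == q.x && p.y < q.y)) p to q else q to p
    override fun equals(other: Any?) = other is Edge && other.key == key
    override fun hashCode() = key.hashCode()
}

// Bowyer–Watson incremental insertion: start from a super-triangle enclosing all
// points, insert points one at a time, and locally repair the triangulation.
fun bowyerWatson(points: List<Point>): List<Triangle> {
    val big = 1e6                                   // large enough for typical coordinates
    val s1 = Point(-big, -big); val s2 = Point(big, -big); val s3 = Point(0.0, big)
    val triangles = mutableListOf(Triangle(s1, s2, s3))

    for (p in points) {
        // 1. Triangles whose circumcircle contains p ("bad" triangles).
        val bad = triangles.filter { inCircumcircle(it.a, it.b, it.c, p) }
        // 2. Boundary of the cavity = edges belonging to exactly one bad triangle.
        val edgeCount = mutableMapOf<Edge, Int>()
        for (t in bad) {
            for (e in listOf(Edge(t.a, t.b), Edge(t.b, t.c), Edge(t.c, t.a))) {
                edgeCount[e] = (edgeCount[e] ?: 0) + 1
            }
        }
        val boundary = edgeCount.filterValues { it == 1 }.keys
        // 3. Remove the cavity and re-triangulate it by fanning out from p.
        triangles.removeAll(bad)
        boundary.forEach { e -> triangles.add(Triangle(e.p, e.q, p)) }
    }
    // 4. Discard triangles that still touch the artificial super-triangle.
    val superVertices = setOf(s1, s2, s3)
    return triangles.filter {
        it.a !in superVertices && it.b !in superVertices && it.c !in superVertices
    }
}
```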

DT App

The Android application “DT Drones” was developed as part of this research using Kotlin and Java in Android Studio. The app implements the Divide and Conquer algorithm for Delaunay triangulation, which has an optimal time complexity of O(n log n). This choice of algorithm keeps the app efficient, particularly when dealing with large point sets. Through the combination of Kotlin and Java, the app offers a user-friendly experience, making it a valuable tool for tasks involving point sets and triangulation, especially in the context of aerial surveillance. The application interface and screens are shown in Figure 1 below. The development of “DT Drones” also contributes to the advancement of mobile applications in the field of computational geometry.
The DT Drones application, shown above, streamlines the process of leveraging Delaunay triangulation for effective aerial surveillance. The user-friendly interface guides users through three simple steps. First, by tapping to add drones, users can input any number of drones they have at their disposal. Next, tapping to add a target location provides a straightforward way to specify the area of interest. The final step involves pressing the DT icon to initiate the Delaunay triangulation calculation. The application then displays the triangulated result for the target location, along with suggested positions for the drones. This step-by-step approach ensures ease of use, allowing users to efficiently harness Delaunay triangulation for optimal drone deployment in the context of surveillance and reconnaissance. The flowchart of the DT Drones application is shown in Appendix A.

2.2. Dronelink

After determining the optimal locations for drones with the DT Drones application, the next crucial step involves mission planning using the customized Dronelink SDK. Dronelink stands out as a powerful tool for automating data capture across various industries and use cases. This SDK allows for the seamless creation of missions tailored to specific needs, enabling the capture of data for orthomosaics, point clouds, 3D models, inspections, site documentation, and videography. The versatility of Dronelink is particularly valuable, providing a comprehensive solution for automating the data capture process with precision and efficiency. This integration ensures that the drones not only reach their designated locations but also execute missions with a high level of automation, contributing to the effectiveness of the overall aerial surveillance and reconnaissance strategy in our research.
Dronelink stands out as a comprehensive software development kit (SDK) that empowers users to exercise precise control over unmanned aerial vehicles (UAVs) during mission planning and execution. This robust platform offers an array of features that includes the manipulation of key parameters such as latitude, longitude, and altitude, allowing users to finely tune the drone’s navigation to specific coordinates. Dronelink’s versatility extends to controlling the drone’s speed, defining a radius of orbit for circular flight paths, and specifying the number of rotations around a target point. Additionally, users can choose between capturing images or videos of designated targets during the mission, enhancing flexibility for diverse applications. A notable feature is the ability to set the direction of flight, enabling users to opt for either clockwise or anticlockwise trajectories. Furthermore, Dronelink provides a mechanism for defining actions to be taken once the drone completes its mission, adding an extra layer of automation and efficiency to the overall data capture process. Overall, Dronelink’s rich feature set positions it as a valuable tool for mission customization, offering fine-grained control and automation for a wide range of UAV applications. Figure 2 below shows a screenshot from the Dronelink dashboard used to control UAV missions.
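The parameters listed above can be collected in a small configuration type before being handed to a mission planner. The Kotlin sketch below models only the parameters described in this section; it is a hypothetical structure for illustration and is not a type from the actual Dronelink SDK.

```kotlin
enum class Direction { CLOCKWISE, ANTICLOCKWISE }
enum class CaptureMode { PHOTOS, VIDEO }

// Hypothetical container for the mission parameters described above
// (not a Dronelink SDK type): target coordinates, altitude, speed,
// orbit geometry, capture mode, and a completion action.
data class OrbitMissionConfig(
    val latitude: Double,                        // degrees
    val longitude: Double,                       // degrees
    val altitude: Double,                        // metres above the take-off point
    val speed: Double,                           // km/h
    val orbitRadius: Double,                     // metres
    val rotations: Int,                          // passes around the target point
    val circumference: Double = 360.0,           // degrees of arc per rotation
    val direction: Direction = Direction.CLOCKWISE,
    val capture: CaptureMode = CaptureMode.PHOTOS,
    val onComplete: String = "return-to-home"    // action once the mission ends
)
```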
The software is designed to be versatile, accommodating popular drone models across different brands. Some of the well-known UAV brands that are typically compatible with Dronelink include DJI, Autel Robotics, Skydio, Yuneec, and Parrot, among others [18]. Dronelink’s support for multiple drone models allows users to choose the UAV that best fits their needs and operational requirements. The software often provides a comprehensive list of supported drones on its official website, along with any specific integration requirements or considerations. However, it is crucial to note that the landscape of UAV models and their compatibility with Dronelink may evolve over time as new drone models are released, and the software is updated accordingly [19].

2.3. Image Processing

Image processing is a multidisciplinary field that involves the manipulation and analysis of images to extract valuable information or enhance specific features. In the context of this research, image processing serves as a pivotal component in interpreting the visual data captured by drones during surveillance missions.
The image processing pipeline typically begins with the acquisition of images captured by drones equipped with cameras. These images form the raw data that undergoes a series of processing steps to derive meaningful insights. The first significant phase is preprocessing, where acquired images undergo cleaning and enhancement procedures. Operations such as noise reduction, contrast adjustment, and normalization are employed to refine the quality of the images, ensuring a consistent and reliable dataset for subsequent analysis. Following preprocessing, image segmentation becomes a crucial step. This process involves dividing the image into meaningful regions or objects [20]. Segmentation is particularly valuable for isolating areas of interest within the surveillance context, aiding in the identification of distinct objects or regions within the aerial imagery. Once segmentation is complete, the focus shifts to feature extraction. Features represent distinctive characteristics inherent in objects or regions within the image [21]. These features can span various attributes, including color, texture, shape, and spatial relationships. Extracting relevant features is essential for subsequent analysis and the identification of specific elements within the surveillance area. The culmination of these image processing steps contributes to a refined and analytically rich dataset.
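As a concrete illustration of the preprocessing and segmentation steps just described, the sketch below applies noise reduction, contrast adjustment, normalization, and a coarse segmentation to one aerial frame using OpenCV’s Java bindings from Kotlin. This local pipeline is purely illustrative; the paper’s own detection relies on Amazon Rekognition rather than OpenCV, and the file path and kernel sizes are assumptions.

```kotlin
import org.opencv.core.Core
import org.opencv.core.Mat
import org.opencv.core.Size
import org.opencv.imgcodecs.Imgcodecs
import org.opencv.imgproc.Imgproc

// Minimal preprocessing + segmentation sketch for a single aerial frame.
// Requires the OpenCV native library to be loaded beforehand, e.g.
// System.loadLibrary(Core.NATIVE_LIBRARY_NAME).
fun preprocessAndSegment(path: String): Mat {
    val src = Imgcodecs.imread(path)                              // acquisition

    val gray = Mat()
    Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY)           // single channel

    val denoised = Mat()
    Imgproc.GaussianBlur(gray, denoised, Size(5.0, 5.0), 0.0)     // noise reduction

    val equalized = Mat()
    Imgproc.equalizeHist(denoised, equalized)                     // contrast adjustment

    val normalized = Mat()
    Core.normalize(equalized, normalized, 0.0, 255.0, Core.NORM_MINMAX)  // normalization

    val mask = Mat()
    Imgproc.threshold(normalized, mask, 0.0, 255.0,
        Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU)              // coarse segmentation
    return mask   // candidate regions of interest for feature extraction
}
```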
Image recognition, a subset of computer vision, focuses on enabling machines to comprehend and interpret the content within images. The fundamental goal is to develop algorithms and models that possess the ability to identify and categorize objects, scenes, or patterns present in visual data. The process involves teaching a machine to recognize specific features or characteristics in images and associating them with predefined classes or categories. This recognition capability empowers machines to autonomously classify images and make informed decisions based on their content. The image recognition workflow consists of several key components. It commences with data collection, where a diverse and representative dataset of images is gathered, covering the various classes the model is intended to recognize. Subsequently, preprocessing techniques are applied to clean and enhance the acquired images, ensuring consistency and improving overall quality [22]. Feature extraction follows, where relevant characteristics within the images are identified, such as color, texture, and shapes, forming the basis for classification. The training phase involves the use of machine learning algorithms, such as convolutional neural networks (CNNs), to learn the relationships between the extracted features and the corresponding image classes [23]. Testing and evaluation assess the model’s performance on new, unseen images, gauging its ability to generalize. Finally, the trained model is deployed in real-world applications to automatically analyze and classify images. Image processing is an essential precursor to image recognition, as it encompasses a set of techniques designed to manipulate and enhance images. During preprocessing, image processing techniques are employed to ready the images for analysis, including tasks such as normalization, contrast adjustment, and noise reduction. Furthermore, image processing plays a pivotal role in feature extraction, aiding in the identification and enhancement of relevant aspects within images [24]. The synergy between these fields ensures that the images used for training and deploying image recognition models are optimized, standardized, and enriched, contributing to the overall accuracy and effectiveness of the recognition process. In essence, image processing lays the groundwork for image recognition by refining raw visual data and facilitating the extraction of meaningful features critical to the classification and interpretation of images by machine learning models.
In the context of this research, the image processing phase is a critical component, serving as the analytical backbone for the results obtained through the combined use of Drones DT and Dronelink. The image processing application employed in this research relies on the Amazon Rekognition API, a sophisticated image and video analysis service powered by advanced deep learning algorithms. Amazon Rekognition is designed to excel in various image recognition tasks, including object and scene detection, and face analysis.
Achieving a level of comprehension in computers akin to human understanding has long been a challenging task for computer scientists. Over past decades, various approaches have been explored to address this issue. The consensus that has emerged today is that deep learning, utilizing a combination of feature abstraction and neural networks, offers a powerful solution. While this approach can produce results that seem almost magical, it comes with a significant computational cost, especially during the intensive training phase. In the training phase, a deep learning network is presented with a diverse set of labeled examples so that it can correlate features in images with specific labels, such as identifying a dog or a pet. This phase is computationally expensive due to the size and multi-layered nature of neural networks. Once trained, the network can efficiently evaluate new images against the learned features, expressing results as confidence levels (0 to 100%) rather than absolute facts, allowing for adaptable precision in various applications [25].
Amazon Rekognition, a fully managed service powered by deep learning, has been developed by Amazon’s computer vision team over many years. Analyzing billions of images daily, it has been trained on thousands of objects and scenes and is available for use in a wide range of applications. Designed to run at scale, Rekognition comprehends scenes, objects, and faces, returning lists of labels for images and bounding boxes for faces, along with attributes. In one of Amazon’s demonstrations, Rekognition accurately labeled an image of a dog named Luna as an animal, a dog, a pet, and a golden retriever with high confidence. Notably, the labels are independent, reflecting the model’s correlation of features without an explicit understanding of the relationships between labels [26].
Rekognition’s capabilities extend to facial recognition. In another demonstration image containing two people, the service identified the faces, set up bounding boxes, and even detected emotions such as happiness. Rekognition supports face comparison and recognition tasks, and its functionality is accessible through a set of API functions, allowing for programmable interaction. Functions such as DetectLabels and DetectFaces enable users to replicate the demonstrated examples, while IndexFaces extracts features known as face vectors for later recognition. Moreover, Rekognition seamlessly processes images stored in Amazon S3 and can be integrated with AWS Lambda functions for efficient and scalable image processing, while AWS Identity and Access Management (IAM) controls access to the Rekognition APIs, ensuring secure usage.
Rekognition finds diverse applications, particularly in scenarios where large photo collections need tagging and indexing. Its scalability allows for the processing of millions of photos daily without concerns about infrastructure setup. Visual search, tag-based browsing, and interactive discovery models become feasible with Rekognition. The service is also valuable in authentication and security contexts, enabling tasks such as face comparison for secure-zone access or visual surveillance to inspect photos for objects or people of interest or concern [27].
The comprehensive set of features provided by Amazon Rekognition makes it a versatile tool for developers and businesses seeking advanced image and video analysis capabilities in the cloud.
The primary objective of leveraging Amazon Rekognition is to automatically identify and locate objects within the aerial imagery captured during automated drone flights. This encompasses a broad range of objects, such as vehicles, individuals, and other relevant items based on the specific application scenario. The real-time processing capabilities of Amazon Rekognition are paramount, allowing for swift analysis of images and facilitating rapid decision making in dynamic surveillance scenarios.
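A minimal sketch of this kind of call, using the AWS SDK for Java v2 from Kotlin, is shown below; the region, file path, and thresholds are placeholder assumptions. The Recognize application described later uses Amazon Rekognition Custom Labels, which follows an analogous request/response pattern against a trained project version.

```kotlin
import software.amazon.awssdk.core.SdkBytes
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.services.rekognition.RekognitionClient
import software.amazon.awssdk.services.rekognition.model.DetectLabelsRequest
import software.amazon.awssdk.services.rekognition.model.Image
import java.nio.file.Files
import java.nio.file.Paths

// Send one drone image to Amazon Rekognition and print each detected label
// with its confidence score (0-100%).
fun detectObjects(imagePath: String) {
    val client = RekognitionClient.builder()
        .region(Region.US_EAST_1)          // placeholder region
        .build()

    val bytes = SdkBytes.fromByteArray(Files.readAllBytes(Paths.get(imagePath)))
    val request = DetectLabelsRequest.builder()
        .image(Image.builder().bytes(bytes).build())
        .maxLabels(10)                     // keep only the ten strongest labels
        .minConfidence(70f)                // discard weak detections
        .build()

    val response = client.detectLabels(request)
    response.labels().forEach { label ->
        println("${label.name()}: ${"%.1f".format(label.confidence())}%")
    }
    client.close()
}
```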
Integral to the success of this research is the continuous evolution of Amazon Rekognition’s deep learning models. The system benefits from the latest advancements in object detection research, ensuring that the image processing phase remains at the forefront of technological innovation. Moreover, the image processing methodology adopted in this research extends beyond basic object detection. It incorporates insights from recent advancements in deep learning models, transfer learning techniques, and optimization strategies. This meticulous approach ensures that the image processing algorithms are not only accurate but also adaptive to varying environmental conditions and scenarios encountered during aerial surveillance.
The culmination of this research effort includes the development of an Android application named “Recognize,” implemented using Kotlin and Java within Android Studio. Positioned subsequent to the DT Drones and Dronelink phases, Recognize plays a crucial role in the experimental framework. Upon deploying a specified fleet of drones to capture images of a common target, the application seamlessly integrates the Amazon Rekognition API, specifically utilizing Amazon Rekognition Custom Labels. Users input the number of drones utilized and are prompted to upload images captured by these drones. The application orchestrates a sophisticated analysis of these images through Amazon Rekognition Custom Labels, facilitating the detection of objects with a customized focus. In the final phase, Recognize presents detailed results, showcasing confidence levels associated with each detected object. The confidence levels are calculated using the confidence interval of the proportion method, ensuring a statistically rigorous evaluation of object detection accuracy (Appendix B). This application represents a significant stride in the integration of cutting-edge technology, contributing to the advancement of aerial surveillance and image processing paradigms. The application interface and screens are shown in Figure 3a below.
Figure 3b above illustrates the Recognize application architecture. By integrating the “Recognize” application into the workflow, this research makes a practical application of cutting-edge technology within the domain of aerial surveillance. By combining automated drone flights, precise location control through Dronelink, and advanced image analysis with the “Recognize” application, the research constructs a highly effective and evolving surveillance system. This integration measures the accuracy of the Delaunay triangulation approach, enabling timely responses and interventions based on the analyzed results. Overall, the research marks a significant advancement in the field of aerial surveillance and reconnaissance through the seamless integration of cutting-edge technologies.

3. Results

In this section, the focus shifts towards the culmination of the exploration, presenting a concise and precise overview of the experimental outcomes derived from the DT Drones, Dronelink, and Recognize phases. The analysis undertaken interprets the empirical findings, providing insights into the implications and conclusions drawn from the integration of advanced technologies, including Delaunay triangulation, UAV systems, and the Amazon Rekognition API. By systematically examining the experimental data, the layers of information derived from each application are unfolded, contributing to a holistic understanding of the research’s overarching objectives and outcomes.

3.1. Single Drone for Aerial Surveillance

As part of the survey exploration, a DJI Mini 3 Pro drone was utilized to execute a targeted mission at geographical coordinates (29.452177, 47.761308). Precision in navigation was achieved by directing the drone to a specific location (29.452263, 47.761576), where it captured an image of the designated area, depicted in Figure 4.
Importantly, the drone maintained an approximate distance of 27.65 m from the survey target, as visually represented in Figure 5. The distance between them has been calculated using the Haversine method [28]. It is a mathematical formula used in navigation and geography to calculate the distance between two points on the surface of a sphere, such as the Earth. Details of the formula are explained in Appendix C.
The navigation of the single drone to the designated point was orchestrated seamlessly through the Dronelink application, employing an automatic mission with predefined properties, as illustrated in Figure 6.
The properties for the automatic mission for the single drone are as follows:
  • Latitude: 29.452263;
  • Longitude: 47.761576;
  • Altitude: 11 m;
  • Speed: 30 km/h;
  • Radius: 1 m;
  • Direction: Clockwise;
  • Rotations: 1;
  • Circumference: 360°;
  • Automatic Capture: photos.
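Expressed with the hypothetical OrbitMissionConfig type sketched in Section 2.2 (not a Dronelink SDK object), the single-drone mission above would look as follows; this is illustrative only, since the actual mission was configured through the Dronelink interface.

```kotlin
// The single-drone mission above, written with the illustrative
// OrbitMissionConfig type from Section 2.2 (not a Dronelink SDK object).
val singleDroneMission = OrbitMissionConfig(
    latitude = 29.452263,
    longitude = 47.761576,
    altitude = 11.0,       // metres
    speed = 30.0,          // km/h
    orbitRadius = 1.0,     // metres
    rotations = 1,
    circumference = 360.0, // degrees
    direction = Direction.CLOCKWISE,
    capture = CaptureMode.PHOTOS
)
```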
This strategic integration of drone technology and automated mission planning enhances the efficiency and accuracy of geographical surveys, contributing to a robust methodology for data collection and analysis. The captured image underwent detailed analysis using the “Recognize” application, revealing identified objects within the surveyed picture. Results from processing the above image with the “Recognize” application are summarized in Table 1 below.

3.2. Multiple Drones for Aerial Surveillance

Following this initial single-drone mission, the study was expanded to include the deployment of five drones. The positioning of these drones was strategically orchestrated using the DT Drones Application, implementing Delaunay triangulation for an optimal spatial arrangement. Each of the five drones then captured images of the same target location, and these images were subsequently processed using the “Recognize” application. The results from both experiments, comparing the outcomes of the single-drone and multiple-drone approaches, are scrutinized and discussed, providing insights into the effectiveness of employing multiple drones and advanced image processing techniques for comprehensive location surveys.

3.2.1. DT Drones Application

The DT Drones application is now employed to obtain the precise locations of five drones strategically positioned around a designated target with coordinates (29.452177, 47.761308). Leveraging the computational power of Delaunay triangulation, the application systematically distributes the drones in an optimized configuration, ensuring effective coverage of the specified area. Figure 7 below shows the result obtained from the DT Drones application for the distribution of five drones around the target.
The drones have been successfully distributed around the target location, with their respective distances ranging from approximately 32 m to 88 m. The strategic deployment ensures comprehensive coverage of the specified area, providing a well-distributed spatial arrangement. Each drone’s precise distance from the target has been accurately determined. This strategic distribution, facilitated by the DT Drones application using Delaunay triangulation, optimizes the spatial efficiency of the drone network. The varying distances contribute to a comprehensive and systematic approach, enhancing the drones’ ability to capture data effectively and ensuring a thorough survey of the target location. Details of the drones’ positions are shown in Table 2 below:
Utilizing this geometric algorithm, DT Drones orchestrates the spatial arrangement of the drones, enhancing their coordination for efficient surveying and data capture. This methodology aims to improve the overall effectiveness and precision of the drone deployment, allowing for comprehensive and well-distributed data acquisition around the specified target location. The distribution of drones around the target with corresponding distance is shown in Figure 8 below.

3.2.2. Dronelink Automatic Mission for Multiple Drones

Following the determination of the positions of the five drones utilizing the DT Drones application and Delaunay triangulation, each drone was subsequently assigned a specific and tailored automatic mission using the Dronelink application. The individualized nature of these missions highlights the adaptability and versatility of the Dronelink application, allowing for precise and effective coordination of the swarm of drones in accordance with the designated goals. These missions were crafted with meticulous attention to detail, incorporating properties that intricately defined the objectives and functionalities of each mission. The properties outlined the mission’s scope, parameters, and operational intricacies, ensuring a comprehensive and efficient execution. Properties of each mission for the five drones are shown in Table 3 below:
The details of missions for the five drones from the Dronelink application can be found in Appendix E.

3.2.3. Recognize Application for Images Taken from Multiple Drones

After capturing images of the designated target from each of the five drones, the photographs were systematically processed using the Recognize application. The aim was to discern and identify objects present in the images, shedding light on the environmental elements captured by the drones. The Recognize application, leveraging the power of Amazon Rekognition Custom Labels, employed advanced image analysis techniques to provide insightful results. These outcomes, derived from the processed images showcased in Appendix D, reveal valuable information about the detected objects, contributing to a comprehensive understanding of the surveyed area. The subsequent analysis and interpretation of these results play a pivotal role in drawing meaningful conclusions from the experimental data gathered by the drone imaging and recognition process. Results from the Recognize application are shown in Table 4 below.

4. Discussion

The results obtained from the survey mission using both a single drone and a swarm of five drones present intriguing insights into the effectiveness of drone deployment and mission planning. Notably, the single DJI Mini 3 Pro drone, orchestrated through the Dronelink application, demonstrated efficiency in capturing the target at a close range (27.65 m). The subsequent image processing using the ‘Recognize’ app revealed noteworthy findings, successfully identifying objects within the surveyed area. In contrast, the deployment of five drones, strategically positioned using Delaunay triangulation facilitated by the DT Drones application, yielded enhanced results in terms of detected objects. The distributed arrangement of drones, spanning distances from approximately 32 m to 88 m, provided comprehensive coverage of the target area. The Delaunay triangulation approach optimized the spatial distribution of drones, ensuring efficient coverage and minimizing potential gaps in the surveyed region. These findings align with the working hypotheses, suggesting that the collaborative utilization of multiple drones, guided by geometric algorithms such as Delaunay triangulation, can significantly improve the efficacy of survey missions. The comparative advantage of the five-drone approach is evident in the broader coverage achieved, ultimately enhancing the detection capabilities of the ‘Recognize’ app.
From a broader perspective, these results contribute to the growing body of literature on optimal drone deployment for geographical surveys. The successful integration of automated mission planning, as exemplified by Dronelink, and geometric algorithms for drone distribution, exemplified by Delaunay triangulation, holds promise for future research and applications. The discussion underscores the potential of such integrated methodologies in various domains, including environmental monitoring, infrastructure inspection, and disaster response.
Future research directions may explore further refinements to optimize the balance between the number of drones, their spatial distribution, and the specific algorithms employed for mission planning. Additionally, investigations into the scalability of these approaches for larger geographical areas and more complex terrains could offer valuable insights. Overall, this study lays a foundation for advancing the capabilities of drone-based surveys, contributing to the evolving landscape of remote sensing and spatial data collection technologies.

5. Conclusions

In conclusion, this research embarked on an exploration of drone-based survey missions, leveraging advanced technologies and methodologies to enhance the efficiency and effectiveness of data collection processes. The deployment of a single DJI Mini 3 Pro drone, orchestrated through the Dronelink application, demonstrated the capability to capture precise images of a designated area with remarkable accuracy. The subsequent analysis using the ‘Recognize’ app showcased the successful identification of objects within the surveyed region.
Building upon this foundation, the research extended its inquiry to the collaborative use of multiple drones. The DT Drones application facilitated the strategic distribution of five drones around a specified target through the implementation of Delaunay triangulation. This geometric algorithm optimized the spatial arrangement of drones, resulting in comprehensive coverage and superior detection capabilities. The comparative analysis revealed that this collaborative approach outperformed the single drone deployment, especially in terms of object identification. The findings underscore the significance of automated mission planning and geometric algorithms in optimizing drone-based surveys. The integration of Dronelink for mission automation and Delaunay triangulation for spatial distribution holds promise for a myriad of applications, ranging from environmental monitoring to infrastructure inspection. The success of this approach suggests its potential scalability and adaptability to diverse terrains and survey objectives.
As the research journey concludes, the implications of these findings extend beyond the current study. The methodologies employed offer valuable insights for researchers, practitioners, and industries seeking to harness the full potential of drone technology in data collection and analysis. Future research endeavors may delve deeper into refining the proposed methodologies, exploring variations in terrain, and investigating the scalability of these approaches for large-scale surveys.
In essence, this research contributes to the evolving landscape of remote sensing and spatial data collection, providing a framework for the thoughtful integration of advanced technologies in drone-based surveys. The success of both single and collaborative drone deployments highlights the dynamic capabilities of these platforms, paving the way for continued advancements in the field of geographical data acquisition.

Author Contributions

Writing—original draft preparation, A.A.; writing—review and editing, C.C.; supervision, C.C. and P.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Acknowledgments

The authors acknowledge the allocation of computing resources by The University of Sussex.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Figure A1. Flowchart of DT Drones application.

Appendix B

Confidence interval: The confidence interval is the range in which the population parameter is most likely to be found. The degree of certainty for which it is likely to be within that range is called the confidence level. When gathering sample data, the precise value of the parameter is unknown.
Confidence level: The confidence level is the required degree of certainty that the population parameter will be in the confidence interval. This is the probability that the calculated confidence interval contains the population parameter. Researchers frequently employ a confidence level of 0.95. The 95% confidence interval postulates that if one were to compute the confidence interval for an infinite number of samples, then 95% of the calculated ranges would encompass the population parameter.
When the population standard deviation (σ) is known, the normal distribution is used: the sample mean $\bar{X}$ is normally distributed with mean μ and standard deviation σ/√n. If the population standard deviation is unknown, the t distribution with n − 1 degrees of freedom is used together with the sample standard deviation S, since $(\bar{X} - \mu)/(S/\sqrt{n})$ follows a t distribution.
The mean confidence interval formula:
  • The population standard deviation is known:
    $\bar{X} \pm Z_{\alpha/2} \cdot \dfrac{\sigma}{\sqrt{n}}$
  • The population standard deviation is unknown:
    $\bar{X} \pm T_{\alpha/2}(df) \cdot \dfrac{S}{\sqrt{n}}$
The standard deviation confidence interval formula:
$\dfrac{(n-1)S^2}{\chi^2_{1-\alpha/2}(df)} \le \sigma^2 \le \dfrac{(n-1)S^2}{\chi^2_{\alpha/2}(df)}$
where:
  • $\bar{X}$: the sample mean.
  • σ: the population standard deviation; typically, the population standard deviation is unknown and may be obtained from other research as a sample standard deviation with a larger sample size. In this scenario, it is permissible to treat it as the population standard deviation.
  • S: the sample standard deviation.
  • n: the sample size (the number of observations).
  • CL: the confidence level.
  • α = 1 − CL.
  • $Z_{\alpha/2}$: the z-score from the standard normal distribution satisfying $P(Z > Z_{\alpha/2}) = \alpha/2$.
  • $T_{\alpha/2}(df)$: the t-score from the t distribution satisfying $P(T > T_{\alpha/2}(df)) = \alpha/2$.
  • df: degrees of freedom, df = n − 1.
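As a small numerical illustration of the known-σ case above (see the formulas following this list), the Kotlin sketch below computes a 95% mean confidence interval; the 1.96 quantile is hard-coded for brevity and the input values are made-up examples.

```kotlin
import kotlin.math.sqrt

// 95% confidence interval for the mean when the population standard
// deviation sigma is known: xBar ± z(alpha/2) * sigma / sqrt(n).
fun meanConfidenceInterval95(xBar: Double, sigma: Double, n: Int): Pair<Double, Double> {
    val z = 1.96                                   // upper alpha/2 quantile of N(0,1) for CL = 0.95
    val margin = z * sigma / sqrt(n.toDouble())
    return (xBar - margin) to (xBar + margin)
}

fun main() {
    // Example: sample mean 95.0, known sigma 4.0, n = 5 observations.
    val (lower, upper) = meanConfidenceInterval95(95.0, 4.0, 5)
    println("95%% CI: [%.2f, %.2f]".format(lower, upper))   // roughly [91.49, 98.51]
}
```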

Appendix C

The distance between the points has been computed using the Haversine method, which is a mathematical formula employed in navigation and geography to determine the distance between two points on the surface of the Earth. The formula is as follows:
$a = \sin^2\!\left(\dfrac{\Delta lat}{2}\right) + \cos(lat_1)\cdot\cos(lat_2)\cdot\sin^2\!\left(\dfrac{\Delta lon}{2}\right)$
$c = 2 \cdot \operatorname{atan2}\!\left(\sqrt{a},\ \sqrt{1-a}\right)$
$d = R \cdot c$
where:
  • d: the distance between the two points (along the surface of the sphere).
  • R: the radius of the sphere (in this case, the radius of the Earth).
  • $lat_1$: the latitude of the first point.
  • $lat_2$: the latitude of the second point.
  • Δlat: the difference between the latitudes.
  • Δlon: the difference between the longitudes.
  • atan2: the two-argument arctangent function, which computes the arctangent of the quotient of its arguments.
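A direct Kotlin translation of the formula is sketched below; the latitudes and longitudes must be converted to radians before the formula is applied. As a usage check, the target and single-drone coordinates from Section 3.1 give a distance of roughly 27.7 m, in line with the reported 27.65 m.

```kotlin
import kotlin.math.atan2
import kotlin.math.cos
import kotlin.math.sin
import kotlin.math.sqrt

// Haversine great-circle distance in metres between two (lat, lon) points given in degrees.
fun haversine(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double {
    val r = 6_371_000.0                               // mean Earth radius in metres
    val dLat = Math.toRadians(lat2 - lat1)
    val dLon = Math.toRadians(lon2 - lon1)
    val a = sin(dLat / 2) * sin(dLat / 2) +
            cos(Math.toRadians(lat1)) * cos(Math.toRadians(lat2)) *
            sin(dLon / 2) * sin(dLon / 2)
    val c = 2 * atan2(sqrt(a), sqrt(1 - a))
    return r * c
}

fun main() {
    // Target and single-drone positions from Section 3.1.
    val d = haversine(29.452177, 47.761308, 29.452263, 47.761576)
    println("Distance: %.2f m".format(d))             // ~27.7 m
}
```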

Appendix D

Images captured for the target from drone 1, drone 2, drone 3, drone 4, and drone 5 are shown, respectively, in Figure A2 below:
Figure A2. Images taken from five drones. (a) Image taken by drone 1. (b) Image taken by drone 2. (c) Image taken by drone 3. (d) Image taken by drone 4. (e) Image taken by drone 5.

Appendix E

The details of missions for the five drones created in the Dronelink application can be found in Figure A3 below:
Figure A3. Dronelink missions for five drones. (a) Drone 1 mission. (b) Drone 2 mission. (c) Drone 3 mission. (d) Drone 4 mission. (e) Drone 5 mission.

References

  1. Li, X.; Savkin, A.V. Networked unmanned aerial vehicles for surveillance and monitoring: A survey. Future Internet 2021, 13, 174. [Google Scholar] [CrossRef]
  2. Shakhatreh, H.; Sawalmeh, A.H.; Al-Fuqaha, A.; Dou, Z.; Almaita, E.; Khalil, I.; Othman, N.S.; Khreishah, A.; Guizani, M. Unmanned aerial vehicles (UAVs): A survey on civil applications and key research challenges. IEEE Access 2019, 7, 48572–48634. [Google Scholar] [CrossRef]
  3. Musin, O.R. Properties of the Delaunay triangulation. In Proceedings of the Thirteenth Annual Symposium on Computational geometry (SCG ’97). Association for Computing Machinery, New York, NY, USA, 4–6 June 1997; pp. 424–426. [Google Scholar] [CrossRef]
  4. Li, Q.; Nevalainen, P.; Peña Queralta, J.; Heikkonen, J.; Westerlund, T. Localization in Unstructured Environments: Towards Autonomous Robots in Forests with Delaunay Triangulation. Remote Sens. 2020, 12, 1870. [Google Scholar] [CrossRef]
  5. Srivastava, S.; Divekar, A.V.; Anilkumar, C.; Naik, I.; Kulkarni, V.; Pattabiraman, V. Comparative analysis of deep learning image detection algorithms. J. Big Data 2021, 8, 66. [Google Scholar] [CrossRef]
  6. Sharma, V.K.; Mir, R.N. A Comprehensive and Systematic Look up into Deep Learning Based Object Detection Techniques: A Review. Comput. Sci. Rev. 2020, 38, 100301. [Google Scholar] [CrossRef]
  7. Li, P.; Zhao, W. Image fire detection algorithms based on convolutional neural networks. Case Stud. Therm. Eng. 2020, 19, 100625. [Google Scholar] [CrossRef]
  8. Pi, Y.; Nath, N.D.; Behzadan, A.H. Convolutional neural networks for object detection in aerial imagery for disaster response and recovery. Adv. Eng. Inform. 2020, 43, 101009. [Google Scholar] [CrossRef]
  9. Doitsidis, L.; Weiss, S.; Renzaglia, A.; Achtelik, M.W.; Kosmatopoulos, E.; Siegwart, R.; Scaramuzza, D. Optimal surveillance coverage for teams of micro aerial vehicles in GPS-denied environments using onboard vision. Auton. Robot. 2012, 33, 173–188. [Google Scholar] [CrossRef]
  10. Beard, R.; McLain, T.; Nelson, D.; Kingston, D.; Johanson, D. Decentralized Cooperative Aerial Surveillance Using Fixed-Wing Miniature UAVs. Proc. IEEE 2006, 94, 1306–1324. [Google Scholar] [CrossRef]
  11. Pitre, R.R.; Li, X.R.; Delbalzo, R. UAV Route Planning for Joint Search and Track Missions—An Information-Value Approach. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 2551–2565. [Google Scholar] [CrossRef]
  12. Nigam, N.; Bieniawski, S.; Kroo, I.; Vian, J. Control of Multiple UAVs for Persistent Surveillance: Algorithm and Flight Test Results. IEEE Trans. Control Syst. Technol. 2012, 20, 1236–1251. [Google Scholar] [CrossRef]
  13. Sharma, V. Object Detection and Recognition using Amazon Rekognition with Boto3. In Proceedings of the 2022 6th International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 28–30 April 2022; pp. 727–732. [Google Scholar] [CrossRef]
  14. Zahid, S.M.; Najesh, T.N.; K, S.; Ameen, S.R.; Ali, A. A Multi Stage Approach for Object and Face Detection using CNN. In Proceedings of the 2023 8th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 1–3 June 2023; pp. 798–803. [Google Scholar] [CrossRef]
  15. Vavilov, N.A. Saint Petersburg School of the Theory of Linear Groups. I. Prehistory. Vestn. St. Petersburg Univ. Math. 2023, 56, 273–288. [Google Scholar] [CrossRef]
  16. Guibas, L.; Stolfi, J. Primitives for the manipulation of general subdivisions and the computation of Voronoi. ACM Trans. Graph. 1985, 4, 74–123. [Google Scholar] [CrossRef]
  17. Dwyer, R.A. A faster divide-and-conquer algorithm for constructing delaunay triangulations. Algorithmica 1987, 2, 137–151. [Google Scholar] [CrossRef]
  18. Prill, F.; Zängl, G. A compact parallel algorithm for spherical Delaunay triangulations. Concurr. Comput. Pract. Exp. 2017, 29, e3971. [Google Scholar] [CrossRef]
  19. Kamarudin, K.R.; Wei, Y.J. Production of drone orthomosaic map of UTHM Wetland Conservation Research Station using UAV photogrammetry. IOP Conf. Ser. Earth Environ. Sci. 2022, 1064, 012012. [Google Scholar] [CrossRef]
  20. dos Santos Boente, A.; de Oliveira, T.E.A.; Baldivieso, T.J.; da Fonseca, V.P.; Rosa, P.F. Small Scale Unmanned Aircraft System and Photogrammetry Applied for 3D Modeling of Historical Buildings. In Proceedings of the Twelfth International Conference on Sensor Device Technologies and Applications SENSORDEVICES, Athens, Greece, 14–18 November 2021. [Google Scholar]
  21. Kovasznay, L.S.G.; Joseph, H.M. Image Processing. Proc. IRE 1955, 43, 560–570. [Google Scholar] [CrossRef]
  22. Egmont-Petersen, M.; de Ridder, D.; Handels, H. Image processing with neural networks—A review. Pattern Recognit. 2002, 35, 2279–2301. [Google Scholar] [CrossRef]
  23. Wu, M.; Chen, L. Image recognition based on deep learning. In Proceedings of the 2015 Chinese Automation Congress (CAC), Wuhan, China, 27–29 November 2015; pp. 542–546. [Google Scholar] [CrossRef]
  24. Shin, H.-C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans. Med. Imaging 2016, 35, 1285–1298. [Google Scholar] [CrossRef]
  25. Prewitt, J.M.S. Object enhancement and extraction. Pict. Process. Psychopictorics 1970, 10, 15–19. [Google Scholar]
  26. Leibe, B.; Leonardis, A.; Schiele, B. Robust object detection with interleaved categorization and segmentation. Int. J. Comput. Vis. 2008, 77, 259–289. [Google Scholar] [CrossRef]
  27. Paola, P.; Concetti, R.; Belli, A.; Palma, L. Amazon, Google and Microsoft solutions for IoT: Architectures and a performance comparison. IEEE Access 2019, 8, 5455–5470. [Google Scholar]
  28. Winarno, E.; Hadikurniawati, W.; Rosso, R.N. Location based service for presence system using haversine method. In Proceedings of the 2017 International Conference on Innovative and Creative Information Technology (ICITech), Salatiga, Indonesia, 2–4 November 2017; pp. 1–4. [Google Scholar] [CrossRef]
Figure 1. The screens and interface of the DT Drones application. The points in red indicate the drones and the point in green shows the target.
Figure 2. Dronelink dashboard.
Figure 3. (a). The screens and interface of Recognize application. (b). Recognize application architecture.
Figure 4. Picture of the target taken using a single DJI Mini 3 Pro drone.
Figure 5. Distance between the target and the single drone.
Figure 6. The Dronelink mission for the single drone.
Figure 7. Result from computing the Delaunay triangulation around the target with the DT Drones application. The grey points indicate the drones, while the green point indicates the target object.
Figure 8. Distance between target and the five drones.
Table 1. Recognition results for the image of the target captured by the single drone.
Recognize Results    Confidence Level (%)
Car                  97
Pickup Truck         97
Person               95
Outdoors             87
Wheel                77
Table 2. Details of the drones’ positions and their distances from the target.
Drone      Latitude (°)    Longitude (°)    Distance to Target (m)
Drone 1    29.451793       47.761793        63.47
Drone 2    29.451702       47.761088        56.95
Drone 3    29.452174       47.762022        69.13
Drone 4    29.452291       47.760406        88.25
Drone 5    29.452417       47.761484        31.66
Table 3. The properties of the automatic missions for the five drones.
Property               Drone 1      Drone 2      Drone 3      Drone 4      Drone 5
Latitude (°)           29.451793    29.451702    29.452174    29.452291    29.452417
Longitude (°)          47.761793    47.761088    47.762022    47.760406    47.761484
Altitude (m)           11           11           11           6            11
Speed (km/h)           30           35           40           32           45
Radius (m)             1            1            1            1            1
Direction              Clockwise    Clockwise    Clockwise    Clockwise    Clockwise
Number of Rotations    0            0            0            0            0
Circumference          360°         360°         360°         360°         360°
Automatic Capture      photos       photos       photos       photos       photos
Table 4. Recognition results for the images of the target captured by the five drones.
Recognize Results    Confidence Level (%)
Car                  98
Pickup Truck         100
Person               100
Outdoors             100
Wheel                100
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

