Article

Analysis of UAV Flight Patterns for Road Accident Site Investigation

1 Department of Automotive Technologies, Faculty of Transportation Engineering and Vehicle Engineering (KJK), Budapest University of Technology and Economics (BME), 1111 Budapest, Hungary
2 Ipsum-Tech Kft., 1096 Budapest, Hungary
3 Centre of Modern Languages, Faculty of Economic and Social Sciences (GTK INYK), Budapest University of Technology and Economics (BME), 1111 Budapest, Hungary
* Author to whom correspondence should be addressed.
Vehicles 2023, 5(4), 1707-1726; https://doi.org/10.3390/vehicles5040093
Submission received: 27 September 2023 / Revised: 14 November 2023 / Accepted: 25 November 2023 / Published: 27 November 2023
(This article belongs to the Topic Vehicle Safety and Automated Driving)

Abstract

Unmanned Aerial Vehicles (UAVs) offer a promising solution for road accident scene documentation. This study seeks to investigate the occurrence of systematic deformations, such as bowling and doming, in the 3D point cloud and orthomosaic generated from images captured by UAVs along a horizontal road segment, while exploring how adjustments in flight patterns can rectify these errors. Four consumer-grade UAVs were deployed, all flying at an altitude of 10 m while acquiring images along two different routes. Processing solely nadir images resulted in significant deformations in the outputs. However, when additional images from a circular flight around a designated Point of Interest (POI), captured with an oblique camera axis, were incorporated into the dataset, these errors were notably reduced. The resulting measurement errors remained within the 0–5 cm range, well below the customary error margins in accident reconstruction. Remarkably, the entire procedure was completed within 15 min, which is half the estimated minimum duration for scene investigation. This approach demonstrates the potential for UAVs to efficiently record road accident sites for official documentation, obviating the need for pre-established Ground Control Points (GCPs) or the adoption of Real-Time Kinematic (RTK) drones or Post Processed Kinematic (PPK) technology.

1. Introduction

Although the safety of road traffic is expected to increase with the advancement of autonomous vehicles [1], road accidents continue to occur. Except in the case of minor collisions, the police investigate the accident site [2] in order to document available data, including skid marks and the final resting positions of the vehicles [3,4]. The accuracy of the records is contingent upon a multitude of factors, ranging from the tools and methods applied to the professional knowledge of the police personnel [5]. Upon the completion of the on-site investigation, a final report is compiled. This report encompasses various essential details, including a detailed sketch of the accident scene, complete with measurements. Subsequently, the comprehensive documentation compiled by the police is handed over to the accident reconstructionist, a forensic expert well-versed in discerning the accident process. The accident reconstructionist employs specialized software programs to simulate the accident process [6]. The reconstructed representation of the accident serves as one of the most important pillars of the legal procedure aiming to establish liability for the accident.
In the course of a standard investigative procedure, the relevant road section is partially or entirely closed. This inevitably leads to traffic congestion, which has negative psychological effects on drivers [7], increases emissions [8] and, in some cases, leads to further, secondary accidents [9,10]. Thus, it is of primary importance to shorten incident duration [11] and also clearance time, i.e., the time required for the collision investigator to complete the on-site accident investigation procedure [12]. The reported values for average clearance times exhibit a notable degree of variability and span a wide range, being significantly influenced by the specific regulatory requirements of the respective country [11,12,13]. No official statistics on clearance time were found for Hungary. However, reference [14] provides a substantial estimate, indicating a broad range from 30 to 180 min.
To expedite clearance time, it is imperative to streamline the on-site data recording process. Currently, certain steps of the on-site investigation procedure cannot be sped up or omitted (such as closing off the accident scene or highlighting certain marks). Taking photographs at accident scenes is now part of the standard procedure [15], which is a step forward in reducing the duration of the recording process, compared with traditional methods solely applying mechanical measurement tools and sketching the scene manually [16]. Also, camera footage recorded by highway surveillance systems [17] or video recordings obtained by the scene investigator [18] might be added to the data set the forensic expert relies on. However, the addition of such recordings does not shorten the on-site investigation procedure.
Through the application of an unmanned aerial vehicle (UAV), i.e., a drone [19,20,21,22,23], accident data recording time can be decreased. This is because certain phases of the traditional data recording process can be completely replaced by a UAV flight. Thus, the time required for the recording procedure can be diminished significantly [24], even compared with the methodology applying Close Range Photogrammetry [25] without using drones [26]. Ideally, the result of the UAV recording procedure and subsequent image processing is an accurate, high-resolution 2D orthomosaic, which correctly reflects true distances. Thus, the orthomosaic can replace the site sketch, which is traditionally prepared by the police officer manually. Moreover, the photos taken by the UAV can also be used to create an accurate 3D point cloud of the scene. This 3D model of the original accident site can be used by the accident reconstructionist as a simulation environment (e.g., [27]).
Our ongoing research project is dedicated to investigating the feasibility of employing Unmanned Aerial Vehicles (UAVs) for the precise documentation of road accident sites. UAVs have been applied in diverse fields, including mapping [28], seeding [29] and traffic monitoring [30], and also have been reported to record road accident sites accurately [31]. Our primary goal is to find the simplest possible method for this task, which is cost- and time-effective, and, at the same time, does not require high levels of professional expertise. The widespread application of such a method would enable police officers to speed up the on-site crash investigation procedure and open the affected road section for traffic sooner. As time is of primary concern, the methods developed here do not apply Ground Control Points (GCPs), which are frequently utilized to increase accuracy [32,33]. Also, for the sake of simplicity, the default camera settings are applied, as opposed to [34]. RTK drones [35,36] or Post Processed Kinematic (PPK) technology [37] are not used in our experiments either. Thus, our calculations only rely on the Global Navigation Satellite System (GNSS) locations of camera positions recorded by the UAV to georeference the results [38,39].
Earlier studies have shown that neither the use of RTK drones [40] nor the application of GCPs improved horizontal accuracy significantly [41], and the differences typically remained below 5 cm. This level of accuracy is satisfactory for road accident investigation [31] (also see the calculation below in Section 3.4).
Accuracy with respect to flight pattern was investigated in [42]. Data obtained from a sample road accident site by a UAV following a circular path (point of interest (POI) technique) were compared with data from a single grid path (waypoint technique) combined with a circular path. The results suggest that the accuracy (which was in the cm range) did not improve if the circular path was combined with the grid technique. On the contrary, Sanz-Ablanedo et al. [43] suggest that while taking nadir images is an excellent source of data for the orthomosaic, if photos are taken solely with a vertical camera axis, this will result in serious deformations of the point cloud, especially concerning the vertical axis. To solve this problem, [43] suggest that the camera should be facing the center of the investigated site, while the UAV follows a grid pattern. Similarly, [44,45] propose that an oblique camera angle can reduce systematic deformations.
Road alignments can be very diverse, thus road accidents also occur in very diverse environments [46], e.g., at road junctions or straight road sections. In the latter case, the skid marks and the affected vehicles are typically found along a long stretch of the road, i.e., within a long and narrow rectangle. In the preliminary stages of our project, it was found that when recording long straight road sections, the output was not accurate enough if the UAV flew above the road section in parallel lines (in single grid mode), taking nadir images. Even with high percentages of overlap between individual images, vertical alignment was faulty: the horizontal road section seemed to tilt (i.e., the road surface was not horizontal but sloped to one side) or to bend (convex image—doming, concave image—bowling), as illustrated in Figure 1.
As discussed in [43,47], such systematic deformations tend to occur in the 3D model for a variety of reasons, including, among other factors, the calibration of the camera, the geometry of the image acquisition, the number and overlap of the images and flight patterns.
The aim of the present study is to explore whether doming and bowling deformations occur irrespective of the UAV type applied, and to test a simple method to eliminate these vertical alignment errors through experiments with parallel grid paths with the nadir camera setting and circular flight patterns with oblique camera settings (POI), similarly to [43,48]. The overall aim of the research is to find a simple and cost-effective method for road accident site recording, thus the output point cloud and orthomosaic should be suitable for road accident forensic analysis.
The experiments were conducted at a given long, horizontal, public road segment in Hungary, applying four different consumer-grade non-RTK UAVs. Data processing was standardized, employing identical Python software and settings and applying the relevant functions of the Agisoft Metashape 2.0.1 program [49]. It was found that systematic deformations occurred if the test site was photographed solely with nadir camera positions, irrespective of the type of the drone. However, if photos taken along a circular path over a section of the test site with the camera aimed at the center of the circle were added to the nadir images, the resulting 2D orthomosaic and the 3D point cloud accurately corresponded to the real terrain. In the latter scenario, horizontal and vertical accuracy was satisfactory, which makes the method suitable for road accident site investigation.
The following sections of this article are organized as follows. In Section 2, an elaborate description of the experimental setting is provided, encompassing details about the employed Unmanned Aerial Vehicles (UAVs), the associated software and a comprehensive exposition of the flight patterns. Section 3 is dedicated to the presentation of the experimental results. Section 4 serves as a platform for the discussion of the findings and offers insights into potential directions for future research. The article is brought to its conclusion in Section 5.

2. Materials and Methods

For the test flights, four different, commercially available, non-RTK DJI drones were used (Table 1, Figure 2). All drones flew along the same paths at an altitude of 10 m, so that differences between them could be revealed. Where possible (UAVs b and c), the flight route was pre-programmed with the Litchi software [50]. In the other cases, the flight was performed under manual control, taking photos at approximately the same positions as in the programmed flights. Images were processed in an identical program environment, using the Python module [51] of the Agisoft Metashape 2.0.1 program [49].
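For illustration, a minimal sketch of the batch-processing pipeline is given below. It assumes the standard method names of the Agisoft Metashape 2.x Python API documented in [51]; the folder path and the quality parameters (downscale values) are placeholders, not the exact settings used in this study, and should be checked against the API reference before use.

```python
# Minimal processing sketch assuming the Agisoft Metashape 2.x Python API [51].
# Paths and quality settings are placeholders, not the study's exact parameters.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()

# Load all images of one mission (hypothetical folder path)
chunk.addPhotos(glob.glob("/data/mission_blocks_1_2/*.JPG"))

# Sparse reconstruction: feature matching and camera alignment
chunk.matchPhotos(downscale=1, generic_preselection=True, reference_preselection=True)
chunk.alignCameras()

# Dense reconstruction; a larger downscale gives the faster, low-quality preliminary run
chunk.buildDepthMaps(downscale=4)
chunk.buildPointCloud()

# Elevation model and orthomosaic for the 2D site sketch
chunk.buildDem(source_data=Metashape.PointCloudData)
chunk.buildOrthomosaic(surface_data=Metashape.ElevationData)

doc.save("/data/mission_blocks_1_2.psx")
```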
The experiment was conducted along a straight, horizontal section of the lower ranking Road 5207 (47.23501099292472, 19.11899903515487) in Hungary. The site was selected because the road runs on a straight embankment, no trees interfere with the view and there are road markings (Figure 3). At the end of the section there is a junction with a dirt road, but the junction itself is paved (Figure 4).
Based on our preliminary tests, a ground resolution of 3.3 mm/px was targeted; this image resolution ensures that the resulting orthomosaic and point cloud can be used well for accident reconstruction purposes. Flight altitudes that ensure this ground resolution were calculated using the following Formulas (1) and (2) [56] (p. 289).
AGL1 = f · GSD · HR / SW,  (1)

AGL2 = f · GSD · VR / SH,  (2)
where
  • AGL1 is the flight altitude above ground level (AGL) [m], calculated from the horizontal resolution;
  • AGL2 is the flight altitude above ground level (AGL) [m], calculated from the vertical resolution;
  • f is the focal distance [mm];
  • GSD is the ground sample distance [m/px];
  • HR is the horizontal resolution of the sensor [px];
  • VR is the vertical resolution of the sensor [px];
  • SW is the sensor width [mm];
  • SH is the sensor height [mm].
Thus, the flight altitudes AGL1 and AGL2 could be calculated from the relevant data of the UAVs applied (Table 2). As the values are between 7.0 and 11.5 m, the uniform altitude of the test flights was set to 10 m.
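As a quick illustration (not part of the original workflow), Formulas (1) and (2) can be evaluated with a few lines of Python; the parameters below are those of the DJI Mavic Air 2 from Table 2.

```python
# Flight altitude required for a target ground sample distance, Formulas (1) and (2).
# Example parameters: DJI Mavic Air 2 (Table 2).
f = 4.5              # focal length [mm]
gsd = 0.0033         # target ground sample distance [m/px]
hr, vr = 4000, 2250  # horizontal and vertical sensor resolution [px]
sw, sh = 7.1, 4.0    # sensor width and height [mm]

agl1 = f * gsd * hr / sw  # altitude from horizontal resolution [m]
agl2 = f * gsd * vr / sh  # altitude from vertical resolution [m]
print(f"AGL1 = {agl1:.1f} m, AGL2 = {agl2:.1f} m")  # both ~8.4 m
```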
In order to check vertical alignment, two technologies were applied. First of all, measurements of a certain horizontal distance on the orthomosaic and the corresponding point cloud were compared. If the two measurements were equal, there was no height difference in the point cloud. However, if the point cloud distance was longer than the corresponding distance on the orthomosaic, then one of the points in the point cloud was at a different elevation.
Also, a custom-made experimental device was set up, on which two plumb bobs were suspended. Colored plastic balls, threaded through their diameter, were strung onto the plumb-bob strings in order to make the position of the strings visible on the photos. The distance between the centers of the large purple balls was 1 m. The device was set up on the side road, in a location visible to the drone in all flight patterns (Figure 5). The device could not be set up on the main road itself, as it would have disturbed the traffic.
After data processing was complete, spheres were fit onto the points representing the large balls in the point cloud, and the position of their centers was compared with the vertical axis (Figure 6).
The centers of the balls were on a vertical line, in reality. If the point cloud is accurate, the centers of the fitted virtual spheres should also be positioned on the same vertical line. However, if the point cloud is not accurate vertically, an angle error α arises (Figure 7).
The angle error α can be calculated with the following Formula (3), if the distance between the two center points is taken to be 1 unit, and x1 = 0 and y1 = 0.
α = arcsin(√(x2² + y2²)),  (3)
where
  • α—angle error;
  • x2—deviation of the center of the lower sphere from the origin along axis x;
  • y2—deviation of the center of the lower sphere from the origin along axis y.
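A short numerical sketch of Formula (3) follows; the deviation values are hypothetical examples, not measurements from the experiment.

```python
import math

# Angle error of the plumb-bob check, Formula (3).
# The upper sphere center is the origin and the distance between the two
# centers is 1 unit (1 m in the experiment).
x2, y2 = 0.004, 0.003  # hypothetical deviations of the lower sphere center [m]

alpha = math.asin(math.hypot(x2, y2))  # angle error [rad]
print(f"angle error = {math.degrees(alpha):.2f} deg")
```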
To check horizontal accuracy, white arrows were placed at the side of the main road, where they could be seen on both the point cloud and the orthomosaic. The distance between the points of the arrows was measured with a laser distance meter (7.42 m). The results were compared with the measurements on the point cloud and the orthomosaic. Furthermore, the distance between the road marks along the sides of the road was measured in the same way.
Each UAV was started from the side road and followed the same flight pattern (Figure 8). Block 1 consisted of long parallel L-shaped routes along the main road and above a small section of the side road (path length: 371 m), with vertical (nadir) camera alignment. In Block 2, the UAV followed a circular path around a virtual point of interest (POI) in the junction (path length: 52 m), with the camera facing a point 1 m above ground level in the center of the circle (camera angle: 38°, nadir: 0°). The two routes (Blocks 1 + 2) were combined, thus adding up to a 423 m long path (Figure 6).
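For illustration, the circular Block 2 path can be generated programmatically. The sketch below is a generic geometric example in a local metric frame (it is not the Litchi mission format used in the study, and the function and parameter names are our own); it places waypoints on a circle around the POI and points the camera at a target 1 m above the ground.

```python
import math

def poi_circle_waypoints(poi_x, poi_y, radius, altitude, target_height=1.0, n_points=12):
    """Waypoints on a circle around a POI in a local metric frame.

    Returns (x, y, altitude, heading_deg, gimbal_pitch_deg) tuples; the heading
    (measured from the x axis of the local frame) points at the POI, and the
    gimbal pitch is measured downwards from the horizon.
    """
    waypoints = []
    for i in range(n_points):
        theta = 2 * math.pi * i / n_points
        x = poi_x + radius * math.cos(theta)
        y = poi_y + radius * math.sin(theta)
        heading = math.degrees(math.atan2(poi_y - y, poi_x - x))
        # pitch below the horizon so that the optical axis hits the target height
        pitch = -math.degrees(math.atan2(altitude - target_height, radius))
        waypoints.append((x, y, altitude, heading, pitch))
    return waypoints

# Example: circle of 10 m radius at 10 m altitude, camera aimed 1 m above the ground
for wp in poi_circle_waypoints(0.0, 0.0, radius=10.0, altitude=10.0):
    print([round(v, 1) for v in wp])
```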
Block 1 images were processed separately from Block 2 images. Also, images from both Block 1 and Block 2 were processed together.

3. Results

Each UAV followed the same path, at the planned altitude of 10 m above the road section. The flying time was 10 min in each case. Table 3 shows the actual flight altitudes, the number of images captured during each mission and the size of the image set.

3.1. General Properties of Orthomosaics and Point Clouds

The photos were processed, resulting in an orthomosaic and a point cloud for each part of the mission (1) Block 1; (2) Block 2; (3) Blocks 1 + 2. Table 4 shows the main characteristics of the processing and its results.
Photos were taken approximately every 2 m, which ensured that each designated point was visible on at least 9 different photos within a mission (blue area in Figure 9 and Figure 10).

3.2. Block 1 (Nadir Images)—Deformations

As shown in Figure 11, the expected deformations occurred in the point clouds generated from Block 1 images, i.e., solely nadir images taken along a long stretch of a horizontal road section, irrespective of the type of the UAV. Although the deformations varied, none of the resulting point clouds could have been used for accident reconstruction due to the lack of accuracy.

3.3. Blocks 1 + 2

In the next step of the experiment, Block 1 images and Block 2 images taken during the same mission were combined. Figure 12 shows the point cloud and the corresponding camera images. Block 2 images were also processed separately for further analysis. However, as those did not include the whole of the road section discussed in the present study, the results of the circular missions alone (i.e., Block 2) are not discussed here.
As shown in Figure 13, if images from the two blocks were combined, the resulting orthomosaic and point cloud were correctly oriented in space. Bowling, doming and tilting deformations were minimized.

3.4. Accuracy

The vertical accuracy assessment in this experiment, which did not depend on a GNSS receiver, involved measuring the discrepancies between points assumed to be at identical altitudes. These points were situated on opposing sides of the horizontal road, and a comparison was made between the orthomosaic and the corresponding point cloud.
The processing software applied shows two distance values between two marked points of the point cloud (Figure 14, A–B and C–D). The values marked on the red line as “3D” show the distance measured by the software between the respective points in the point cloud, using their coordinates. The values marked as “2D” show how long the distance is between the images of the two points projected onto a common horizontal surface. This is illustrated in Figure 14. The distance between points C and D in the point cloud is 2.14 m (3D value). When these two points are projected onto a horizontal plane (Points C and E), the distance between C and E is 1.92 m (2D value). The processing software provides both measurements for each distance measured in the point cloud. If the 3D and 2D values are the same (as in the case of the A–B distance), it means that the 3D distance and the distance of the projection are equal, i.e., the two points are on the same horizontal plane. In the present case, if the point cloud was generated solely from nadir images, a tilting effect was detected (Figure 11c). However, if oblique images were added to the processed image set, vertical orientation was corrected (Figure 14). In all four examined cases, the vertical alignment is satisfactory (Table 5).
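The comparison can be expressed as a simple computation: the 3D distance uses the full coordinates, while the 2D distance drops the height component. The point coordinates below are illustrative values chosen to roughly reproduce the C–D example in Figure 14, not measurements from the study.

```python
import math

def distance_3d(p, q):
    return math.dist(p, q)          # straight-line distance between two 3D points

def distance_2d(p, q):
    return math.dist(p[:2], q[:2])  # distance of the horizontal projections

# Illustrative (x, y, z) coordinates in metres
C = (0.00, 0.00, 0.00)
D = (1.90, 0.25, 0.95)  # a point at a different elevation

print(f"3D = {distance_3d(C, D):.2f} m, 2D = {distance_2d(C, D):.2f} m")
# Equal values mean both points lie on the same horizontal plane;
# a 3D value larger than the 2D value indicates a height difference.
```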
Another method was also tested to check the vertical accuracy of the point cloud. The custom-made device (see Figure 5 in Section 2 above) had two vertical strings with plumb bobs, one enhanced with small and the other with larger balls. Only the larger balls were well visible on the point clouds, thus the string with the smaller balls could not be taken into account.
The fitting of spheres to the images of the balls on the point cloud was successful. In spite of this, this method could not be applied for determining vertical accuracy. The reason for this was that the ground resolution values of the resulting point clouds were between 2.8 and 5.9 mm/px. This means that if a sphere is fitted with a 1 px error, the resulting angle error corresponds to a width difference of 20.72–43.66 mm when examining the width of the road (7.42 m). This error value is much greater than the one produced by the first method (Table 5). Thus, the error margin of the second method is too large for this experiment.
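This rejection of the sphere-fitting method can be checked numerically. The short sketch below propagates a one-pixel fitting error on the 1 m plumb line to the 7.42 m road width and approximately reproduces the values quoted above (small rounding differences remain).

```python
import math

road_width = 7.42   # [m]
plumb_length = 1.0  # distance between the sphere centers [m]

for ground_res_mm in (2.8, 5.9):                      # ground resolution range [mm/px]
    offset = ground_res_mm / 1000.0                   # 1 px fitting error [m]
    alpha = math.asin(offset / plumb_length)          # angle error, Formula (3)
    width_error_mm = road_width * math.tan(alpha) * 1000
    print(f"{ground_res_mm} mm/px -> {width_error_mm:.1f} mm width error")
# ~20.8 mm and ~43.8 mm, close to the 20.72-43.66 mm range quoted above
```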
Horizontal accuracy was checked on both the orthomosaics and the point clouds (Figure 15). On the site, the distance between the heads of the arrows was 7.42 m according to our measurements with a laser distance meter.
Table 6 shows the errors in the horizontal measurements. The error is below 1% in each case, which is much lower than the expected accuracy of an accident reconstruction diagram.
The reason for this relatively large error margin in accident reconstruction is practical. For example, the beginning of a skid mark can usually be determined only with an error of several centimeters, as it is not always clearly visible. However, such an error only minimally modifies the results of a forensic investigation. To illustrate this, consider the speed calculation. The speed of a vehicle at the beginning of a skid mark can be determined by the following Formula (4).
v = √(2 · a · s),  (4)
where
  • v —initial speed [m/s];
  • a —average deceleration [m/s2];
  • s —length of skid mark [m].
If a skid mark is measured to be 15 m at the accident scene, and the average deceleration of the vehicle is 7.5 m/s2, the speed of the vehicle at the beginning of the skid mark is 54 km/h. Table 7 gives the arising speed values if we suppose a ±1% error in the length measurement.
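Formula (4) and the sensitivity figures in Table 7 can be reproduced directly with a short script:

```python
import math

def speed_from_skid(length_m, decel=7.5):
    """Initial speed at the start of a skid mark, Formula (4), converted to km/h."""
    return math.sqrt(2 * decel * length_m) * 3.6

s = 15.0  # measured skid mark length [m]
for factor in (0.99, 1.00, 1.01):  # -1%, nominal, +1% length error
    v = speed_from_skid(s * factor)
    print(f"s = {s * factor:.2f} m -> v = {v:.3f} km/h")
# 53.729, 54.000 and 54.269 km/h, matching Table 7
```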

4. Discussion

This study presented the outcomes of an experimental investigation involving four consumer-grade non-RTK UAVs, which took photos following the same path over the same public road section, in an imitation of road accident scene documentation. The study pursued a two-fold objective. The first aim was to test whether systematic deformations occur in the point clouds created from the images for all UAVs, as expected based on the literature (i.e., [43,44,45,47]) and our earlier experience. Secondly, if such deformations were identified, the research aimed to test a straightforward and expedient method for their mitigation.
Concerning deformations observed in point clouds, as illustrated by Figure 11, it became evident that irrespective of the type of UAV employed, the processing of solely nadir images captured in single grid mode yields point clouds characterized by pronounced deformations. Consequently, doming, bowling and tilting effects could be reproduced in the experiment. This result confirms our earlier experience when such distortions occurred during accident scene documentation (Figure 1). Furthermore, these findings corroborate the widely discussed findings in the literature. If photos are acquired in the conventionally applied single grid flight pattern, i.e., when the drone flies in parallel lines over an elongated rectangular area with the camera directed vertically (i.e., capturing solely nadir images), the resulting point cloud will exhibit systematic deformations [43,44,45,47].
Furthermore, our results underpinned earlier findings [43,45] that with flight pattern modification, especially with the combination of images taken with vertical and oblique camera axes, the above effects can be minimized. If the nadir image set was complemented with images taken during the same mission with oblique camera angles around a POI on a circular route (Figure 8 and Figure 12), deformations such as doming, bowling and tilting were successfully diminished (Figure 13). However, this result contradicts the claim of [42], in whose experiment “combining POI and waypoint techniques did not improve accuracy” (p. 12). The reasons for this difference cannot be determined, as [42] did not provide data about the camera angle during their circular mission.
On the topic of camera angles, [45] suggested gentle forward inclinations between −5 and 15° (nadir: 0°), which resulted in accurate point clouds. Authors of [48] experimented with camera angles between 0 and 35°. We applied a 38° angle for the circular mission (diameter 20 m, altitude 10 m). This inclination also proved to be successful in mitigating deformations and thus increasing accuracy. This angle ensured that the POI around which the circular mission was carried out was at 1 m above the ground. This POI altitude is ideal for road accident recording, as car deformations typically occur at around this height.
As regards flight altitudes, following the calculation proposed in [56], the flights in our experiment were conducted at 10 m. This contradicts the suggestions of [31], who surveyed road accidents from much higher altitudes in urban settings (20, 25 or 30 m) in order to avoid obstacles such as public illumination poles. Even higher altitudes (60 and 80 m) were tested by [38], while [42] suggested flying the drone at 5, 7 and 10 m. Our experiment was conducted in an “ideal” setting, where no road-side obstacles were present. Thus, the lower flight altitude (10 m) posed no problems, and this altitude also ensured a high-enough resolution of the point cloud and the orthomosaic. Shadowing effects did not result in serious distortions either.
While [38] demonstrated that the direct georeferencing (DG) approach can successfully be applied to gain data for topographic surveying, our study reaffirms the conclusion reached by [42] that utilizing UAVs for accident scene recording provides feasible and accurate results, characterized by errors at a centimeter scale. This accuracy is satisfactory for police documentation and accident reconstruction purposes. The error margin arising from this method is considerably lower than the error thresholds commonly employed by forensic experts for road accident reconstruction.
Similar to the present study, [34] also aimed at devising a quick method for applying UAVs for accident reconstruction. However, their study focused on camera calibration, in a similar manner to [38]. In the present study, however, default camera settings were applied and no pre-calibration of the camera was performed. The reason for this is that the long-term objective of our research is to create a methodology that can be used by police personnel to record accident data. In order to be applied reliably and easily, the system should be as automatic as possible, requiring minimum intervention from the controller of the UAV.
As for the length of the recording procedure, in the present investigation, flying time was 10 min. In the proposed system, when the UAV lands, the photos are uploaded to a server, where preliminary, low-quality processing is carried out. The quality of processing can be set by choosing low values for Accuracy and Depth map quality. The aim of such low-quality processing is solely to determine whether the photos can be used to create an orthomosaic and a point cloud. In the event that these processed images prove inadequate, a new mission with a higher overlap percentage should be carried out before the accident scene is modified or cleared. High-quality processing of the same image set takes 3–4 times longer and can be carried out any time after data upload. Our experiments have revealed that the low-quality processing of the acquired image set took 3–6 min, while the flying time was 10 min. Consequently, the entire process, including data capture and preliminary analysis, consumes approximately a quarter of an hour. This time frame is notably shorter than the minimal estimates (30 min) for accident scene clearance in Hungary [14]. However, it is essential to acknowledge that some additional minutes are also necessary both before and after the UAV flight to complete the on-site investigation procedures comprehensively.
The proposed scheme for road accident site recording with a UAV is presented in a flowchart (Figure 16).
As Figure 16 illustrates, the following steps should be followed when documenting an accident scene with a UAV.
  • Step 1: Delineate the area to be photographed. The boundaries of the relevant accident site must be identified. Establish the POI, i.e., the point around which the circular part of the mission is to be executed. Typically, this corresponds to the location where the vehicles involved in the accident are situated.
  • Step 2: Obtain nadir images with the UAV following a grid path. The images should cover the whole area, with suitable longitudinal and transversal overlap (generally 60%). The number of images depends on their resolution and the dimensions of the site. In an average case, 60–100 images should be taken. A flight altitude of 10 m results in a point cloud and an orthomosaic with an adequate resolution (see the spacing sketch after this list).
  • Step 3: Obtain oblique images around a POI. Images should be taken following a circular path, with the camera facing a characteristic point of the scene (e.g., a vehicle in its final rest position), with an oblique camera angle.
  • Step 4: Upload data; process images. The images are to be uploaded to the processing site. Preliminary processing takes place.
  • Step 5: Quality check of point cloud and orthomosaic. The point cloud and orthomosaic should be checked.
  • Step 6: Modify parameters. If the quality of the point cloud and orthomosaic is not satisfactory (e.g., it is distorted or fragmented or has faulty spatial orientation), the flight path should be modified to increase overlap. A new image set should be obtained.
  • Step 7: Mission complete. If the quality of the resulting point cloud and orthomosaic is satisfactory, the accident documentation process is complete.
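As a rough planning aid for Step 2, the spacing between exposures can be estimated from the image footprint and the chosen forward overlap. The sketch below assumes a simple pinhole footprint model (cf. Formulas (1) and (2)) and that the sensor height lies along the flight direction; it is an illustration, not a prescription from the study.

```python
# Rough estimate of photo spacing for a given forward overlap,
# assuming a simple pinhole footprint model and the DJI Mavic Air 2 camera.
f_mm, sh_mm = 4.5, 4.0  # focal length and sensor height [mm] (assumed along track)
agl_m = 10.0            # flight altitude above ground [m]

footprint_m = agl_m * sh_mm / f_mm      # ground footprint along track [m], ~8.9 m
spacing_60 = footprint_m * (1 - 0.60)   # spacing for 60% forward overlap [m], ~3.6 m
overlap_2m = 1 - 2.0 / footprint_m      # overlap implied by the ~2 m spacing used here, ~78%
print(f"footprint = {footprint_m:.1f} m, spacing at 60% overlap = {spacing_60:.1f} m, "
      f"overlap at 2 m spacing = {overlap_2m:.0%}")
```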
For the purpose of accident scene documentation, any UAV equipped with a camera featuring a resolution similar to that of the drones tested here is deemed suitable. It is essential to ensure that the GNSS coordinates of the drone are recorded and saved within the metadata of the captured images. All four UAVs under evaluation in this experiment were found to be applicable for accident scene documentation. However, considering costs and ease of operation, small and economically viable drones, such as the DJI Mavic Air 2 and DJI Air 2S, are recommended for this specific application.
In the future, analogous experiments at other (higher and lower) altitudes are planned. This approach is essential, because the presence of illumination poles, overhead electricity and telecommunication lines, as well as road-side trees, may present challenges for accident investigators, especially in urban areas. However, higher altitudes result in lower image resolution, potentially diminishing the overall accuracy of the data. Hence, we anticipate the need for an optimized solution for such urban accident scenes in the long term. Given that road accidents frequently occur in urban settings, this research direction carries substantial relevance. Additionally, exploring the employment of RTK drones is another potential way of increasing accuracy in accident scene documentation.

5. Conclusions

This article has presented the findings of an Unmanned Aerial Vehicle (UAV) experiment conducted along a lengthy, straight, horizontal section of road featuring a crossing at one end, emblematic of a common road accident site. The UAVs maintained a consistent altitude of 10 m while capturing images with default camera settings. The flight path encompassed two segments: for Block 1 images, the camera was oriented vertically (0°), and the UAV followed parallel L-shaped trajectories, whereas for Block 2 images, the drones circled a virtual Point of Interest (POI) positioned 1 m above ground level, employing oblique camera settings (38°). The number of images captured during the 10 min flying time for each mission was 162 or 163. The total size of images per mission varied between 639 MB and 1.8 GB. The preliminary processing time ranged from 3 min 2 sec to 6 min. Consequently, the documentation of the scene with a UAV took approximately 15 min.
The experiments described here have revealed that taking solely nadir images in single grid mode with a UAV along an extensive road section (Block 1) does not yield satisfactory or accurate results for accident reconstruction. This is due to the presence of systematic errors in the point cloud, resulting in deformations characterized by bowling, doming and tilting. These deformations occurred despite the high number of images taken and the substantial overlap between the photos. Notably, relevant road points were represented in nine different images (Figure 9).
Conversely, when images taken with an oblique camera setting around a point of interest (POI) on the same road section within the same flight mission were added to the processed image set (Blocks 1 + 2), these deformations were effectively minimized. Thus, combining nadir images taken along the relevant road section with images around a POI may be a successful, accurate and quick method for recording road accidents via UAV technology.
In this test, the circular mission was executed at one end of the relevant road section at 10 m altitude. Nevertheless, positioning the POI at any other part of the road section affected by the accident is also expected to produce equally satisfactory results. To capture additional vehicle details relevant to the accident, selecting the final resting positions of the involved vehicles as the center of the circular mission is a rational choice.
As regards accuracy, the error of the horizontal measurements remained in the 0–5 cm range (Table 6), which is well below the error margin generally employed in accident reconstruction. Vertical accuracy was also satisfactory (Table 5). Consequently, the method delineated in this study can be applied to document road accidents accurately and quickly, offering immediate feedback on the success of the UAV flight. Considering that the entire recording process, encompassing preliminary image processing, consumed approximately 15 min, it is anticipated that the utilization of UAVs in road accident reconstruction can substantially reduce on-site investigation times and consequently expedite road clearance procedures.

Author Contributions

Conceptualization, G.V., G.M., Á.S. and N.W.; methodology, G.V., G.M. and N.W.; software, G.V.; validation, G.M. and Á.T.; formal analysis, G.V. and G.M.; investigation, G.V., Á.S. and N.W.; resources, G.V. and N.W.; data curation, G.V.; writing—original draft preparation, N.W.; writing—review and editing, G.V. and Á.T.; visualization, G.V.; supervision, G.M. and Á.T.; project administration, G.V.; funding acquisition, G.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by NKFIH, grant number 2021-1.1.4-GYORSÍTÓSÁV-2022-00023 “Proper documentation and evaluation of an accident scene using high-performance image capture technology. Development and market introduction of an intelligent Geographic Information System (GIS) [Baleseti helyszín megfelelő dokumentálása és kiértékelése nagyteljesítményű képi adatrögzítő technológia alkalmazásával. Intelligens Térinformatikai Rendszer (ITR) kifejlesztése, piaci bevezetése]”.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because they may be part of an ongoing or future research project, and releasing them could jeopardize the integrity of subsequent studies.

Acknowledgments

The authors thank police officers István Harmat and Pál Varga for their permission to use their images in Figure 1.

Conflicts of Interest

Author Árpád Süveges was employed by the company Ipsum-Tech. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Dixit, A.; Kumar Chidambaram, R.; Allam, Z. Safety and Risk Analysis of Autonomous Vehicles Using Computer Vision and Neural Networks. Vehicles 2021, 3, 595–617. [Google Scholar] [CrossRef]
  2. Mohammed, S.I. An Overview of Traffic Accident Scene Investigation Using Different Techniques. Autom. Exp. 2023, 6, 68–79. [Google Scholar] [CrossRef]
  3. Agent, K.R.; Pigman, J.G. Traffic Accident Investigation; University of Kentucky, Kentucky Transportation Center: Lexington, KY, USA, 1993. [Google Scholar] [CrossRef]
  4. ENFSI. Best Practice Manual for Road Accident Reconstruction. ENFSI. 2015. Available online: http://enfsi.eu/wp-content/uploads/2016/09/4._road_accident_reconstruction_0.pdf (accessed on 16 September 2023).
  5. Topolšek, D.; Herbaj, E.A.; Sternad, M. The Accuracy Analysis of Measurement Tools for Traffic Accident Investigation. J. Transp. Tech. 2014, 4, 84–92. [Google Scholar] [CrossRef]
  6. Lengyel, H.; Maral, S.; Kerebekov, S.; Szalay, Z.; Török, Á. Modelling and Simulating Automated Vehicular Functions in Critical Situations—Application of a Novel Accident Reconstruction Concept. Vehicles 2023, 5, 266–285. [Google Scholar] [CrossRef]
  7. Li, G.; Lai, W.; Sui, X.; Li, X.; Qu, X.; Zhang, T.; Li, Y. Influence of traffic congestion on driver behavior in post-congestion driving. Accid. Anal. Prev. 2020, 141, 105508. [Google Scholar] [CrossRef]
  8. Ferencz, C.; Zöldy, M. Road traffic queue length estimation with artificial intelligence (AI) methods. Cogn. Sustain. 2023, 2. [Google Scholar] [CrossRef]
  9. Wang, J.; Liu, B.; Fu, T.; Liu, S.; Stipancic, J. Modeling when and where a secondary accident occurs. Accid. Anal. Prev. 2019, 130, 160–166. [Google Scholar] [CrossRef]
  10. Goodall, N.J. Probability of Secondary Crash Occurrence on Freeways with the Use of Private-Sector Speed Data. Transp. Res. Record 2017, 2635, 11–18. [Google Scholar] [CrossRef]
  11. Nam, D.; Mannering, F. An exploratory hazard-based analysis of highway incident duration. Transp. Res. A-Pol. 2000, 34, 85–102. [Google Scholar] [CrossRef]
  12. Alkaabi, A.M.; Dissanayake, D.; Bird, R. Analyzing Clearance Time of Urban Traffic Accidents in Abu Dhabi, United Arab Emirates, with Hazard-Based Duration Modeling Method. Transp. Res. Record 2011, 2229, 46–54. [Google Scholar] [CrossRef]
  13. Ghosh, I. Examination of the Factors Influencing the Clearance Time of Freeway Incidents. J. Transp. Sys. Eng. IT 2012, 12, 75−89. [Google Scholar] [CrossRef]
  14. Major, R. Közúti közlekedési balesetek miatt kialakult torlódások okozta időveszteség csökkentésének lehetőségei, avagy miként gyorsítható a helyszíni eljárás [The possibilities of reducing time loss in road congestion caused by accidents, or how to speed up on-site investigation]. Belügyi Szemle 2016, 69, 1009–1026. [Google Scholar] [CrossRef]
  15. Weiss, S.L. (Ed.) Handbook of Forensic Photography; Taylor Francis: Boca Raton, FL, USA, 2002. [Google Scholar]
  16. U.S. Department of Transportation. Crash Investigation and Reconstruction Technologies and Best Practices; Federal Highway Administration: Washington, DC, USA, 2015. Available online: https://rosap.ntl.bts.gov/view/dot/50639/dot_50639_DS1.pdf (accessed on 3 September 2023).
  17. Cristiani, A.L.; Immich, R.; Akabane, A.T.; Madeira, E.R.; Villas, L.A.; Meneguette, R.I. ATRIP: Architecture for Traffic Classification Based on Image Processing. Vehicles 2020, 2, 303–317. [Google Scholar] [CrossRef]
  18. Saveliev, A.; Izhboldina, V.; Letenkov, M.; Aksamentov, E.; Vatamaniuk, I. Method for automated generation of road accident scene sketch based on data from mobile device camera. Transp. Res. Proc. 2020, 50, 608–613. [Google Scholar] [CrossRef]
  19. The Johns Hopkins University. Operational Evaluation of Unmanned Aircraft Systems for Crash Scene Reconstruction, Operational Evaluation Report, Version 1.0; Applied Physics Laboratory, National Criminal Justice Reference Service: Laurel, MD, USA, 2018. Available online: https://www.ojp.gov/pdffiles1/nij/grants/251628.pdf (accessed on 3 September 2023).
  20. Outay, F.; Mengash, H.A.; Abdan, M. Applications of unmanned aerial vehicle (UAV) in road safety, traffic and highway infrastructure management: Recent advances and challenges. Transp. Res. A-Pol. 2020, 141, 116–129. [Google Scholar] [CrossRef] [PubMed]
  21. Saveliev, A.; Lebedeva, V.; Lebedev, I.; Uzdiaev, M. An Approach to the Automatic Construction of a Road Accident Scheme Using UAV and Deep Learning Methods. Sensors 2022, 22, 4728. [Google Scholar] [CrossRef] [PubMed]
  22. PixD. A New Protocol of CSI for The Royal Canadian Mounted Police. (n.d.). Available online: https://assets.ctfassets.net/go54bjdzbrgi/2ufTRuEockU20mWCmosUmE/88ad2758f8c1042ab696c974627239e6/RCMP_Pix4D_collision_crime_scene_investigation_protocol.pdf (accessed on 2 September 2023).
  23. Kamnik, R.; Perc, M.N.; Topolšek, D. Using the scanners and drone for comparison of point cloud accuracy at traffic accident analysis. Accid. Anal. Prev. 2020, 135, 105391. [Google Scholar] [CrossRef]
  24. Desai, J.; Sakhare, R.; Rogers, S.; Mathew, J.K.; Habib, A.; Bullock, D. Using Connected Vehicle Data to Evaluate Impact of Secondary Crashes on Indiana Interstates. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems, Indianapolis, IN, USA, 19–22 September 2021; IEEE: Indianapolis, IN, USA, 2021; pp. 4057–4063. [Google Scholar] [CrossRef]
  25. Osman, M.R.; Tahar, K.N. 3D accident reconstruction using low-cost imaging technique. Adv. Eng. Softw. 2016, 100, 231–237. [Google Scholar] [CrossRef]
  26. Arnold, E.D. Use of Photogrammetry as a Tool for Accident Investigation and Reconstruction: A Review of the Literature and State of the Practice; Virginia Transportation Research Council: Charlottesville, VA, USA, 2007. Available online: https://rosap.ntl.bts.gov/view/dot/19853 (accessed on 3 September 2023).
  27. Wang, J.; Li, Z.; Ying, F.; Zou, D.; Chen, Y. Reconstruction of a real-world car-to-pedestrian collision using geomatics techniques and numerical simulations. J. Forensic Legal Med. 2022, 91, 102433. [Google Scholar] [CrossRef]
  28. Török, Á.; Bögöly, G.; Somogyi, Á.; Lovas, T. Application of UAV in Topographic Modelling and Structural Geological Mapping of Quarries and Their Surroundings—Delineation of Fault-Bordered Raw Material Reserves. Sensors 2020, 20, 489. [Google Scholar] [CrossRef]
  29. Castro, J.; Morales-Rueda, F.; Alcaraz-Segura, D.; Tabik, S. Forest restoration is more than firing seeds from a drone. Restor. Ecol. 2023, 31, e13736. [Google Scholar] [CrossRef]
  30. Bisio, I.; Garibotto, C.; Haleem, H.; Lavagetto, F.; Sciarrone, A. A Systematic Review of Drone Based Road Traffic Monitoring System. IEEE Access 2022, 10, 101537–101555. [Google Scholar] [CrossRef]
  31. Pádua, L.; Sousa, J.; Vanko, J.; Hruška, J.; Adão, T.; Peres, E.; Sousa, A.; Sousa, J.J. Digital Reconstitution of Road Traffic Accidents: A Flexible Methodology Relying on UAV Surveying and Complementary Strategies to Support Multiple Scenarios. Int. J. Environ. Res. Public Health 2020, 17, 1868. [Google Scholar] [CrossRef] [PubMed]
  32. Hastaoglu, K.O.; Kapicioglu, H.S.; Gül, Y.; Poyraz, F. Investigation of the effect of height difference and geometry of GCP on position accuracy of point cloud in UAV photogrammetry. Surv. Rev. 2023, 55, 325–337. [Google Scholar] [CrossRef]
  33. Zhang, K.; Okazawa, H.; Hayashi, K.; Hayashi, T.; Fiwa, L.; Maskey, S. Optimization of Ground Control Point Distribution for Unmanned Aerial Vehicle Photogrammetry for Inaccessible Fields. Sustainability 2022, 14, 9505. [Google Scholar] [CrossRef]
  34. Su, S.; Liu, W.; Li, K.; Yang, G.; Feng, C.; Ming, J.; Liu, S.; Yin, Z. Developing an unmanned aerial vehicle-based rapid mapping system for traffic accident investigation. Aust. J. Forensic Sci. 2016, 48, 454–468. [Google Scholar] [CrossRef]
  35. Stott, E.; Williams, R.D.; Hoey, T.B. Ground Control Point Distribution for Accurate Kilometre-Scale Topographic Mapping Using an RTK-GNSS Unmanned Aerial Vehicle and SfM Photogrammetry. Drones 2020, 4, 55. [Google Scholar] [CrossRef]
  36. Ekaso, D.; Nex, F.; Kerle, N. Accuracy assessment of real-time kinematics (RTK) measurements on unmanned aerial vehicles (UAV) for direct geo-referencing. Geo-Spat. Inf. Sci. 2020, 23, 165–181. [Google Scholar] [CrossRef]
  37. Liu, X.; Lian, X.; Yang, W.; Wang, F.; Han, Y.; Zhang, Y. Accuracy Assessment of a UAV Direct Georeferencing Method and Impact of the Configuration of Ground Control Points. Drones 2022, 6, 30. [Google Scholar] [CrossRef]
  38. Carbonneau, P.E.; Dietrich, J.T. Cost-Effective Non-Metric Photogrammetry from Consumer-Grade sUAS: Implications for direct georeferencing of structure from motion photogrammetry. Earth Surf. Processes 2017, 42, 473–486. [Google Scholar] [CrossRef]
  39. Cook, K.L.; Dietze, M. Short Communication: A simple workflow for robust low-cost UAV-derived change detection without ground control points. Earth Surf. Dynam. 2019, 7, 1009–1017. [Google Scholar] [CrossRef]
  40. Desai, J.; Mathew, J.K.; Zhang, Y.; Hainje, R.; Horton, D.; Hasheminasab, S.M.; Habib, A.; Bullock, D.M. Assessment of Indiana Unmanned Aerial System Crash Scene Mapping Program. Drones 2022, 6, 259. [Google Scholar] [CrossRef]
  41. Norahim, M.N.; Tahar, K.N.; Maharjan, G.R.; Matos, J.C. Reconstructing 3D model of accident scene using drone image. Int. J. Electr. Comp. Eng. 2023, 13, 4087–4100. [Google Scholar] [CrossRef]
  42. Zulkifli, M.H.; Tahar, K.N. The Influence of UAV Altitudes and Flight Techniques in 3D Reconstruction Mapping. Drones 2023, 7, 227. [Google Scholar] [CrossRef]
  43. Sanz-Ablanedo, E.; Chandler, J.H.; Ballestros-Pérez, P.; Rodriguez-Pérez, J.R. Reducing systematic dome errors in digital elevation models through better UAV flight design. Earth Surf. Process. 2020, 45, 2143–2147. [Google Scholar] [CrossRef]
  44. James, M.R.; Robson, S. Mitigating systematic error in topographic models derived from UAV and ground-based image networks. Earth Surf. Process. 2014, 39, 1413–1420. [Google Scholar] [CrossRef]
  45. James, M.R.; Antoniazza, G.; Robson, S.; Lane, S.N. Mitigating systematic error in topographic models for geomorphic change detection: Accuracy, precision and considerations beyond off-nadir imagery. Earth Surf. Process. Landf. 2020, 45, 2251–2271. [Google Scholar] [CrossRef]
  46. Liu, Z.; He, J.; Zhang, C.; Xing, L.; Zhou, B. The impact of road alignment characteristics on different types of traffic accidents. J. Transp. Saf. Secur. 2020, 12, 697–726. [Google Scholar] [CrossRef]
  47. Mueller, M.; Dietenberger, S.; Nestler, M.; Hese, S.; Ziemer, J.; Bachmann, F.; Leiber, J.; Dubois, C.; Thiel, C. Novel UAV Flight Designs for Accuracy Optimization of Structure from Motion Data Products. Remote Sens. 2023, 15, 4308. [Google Scholar] [CrossRef]
  48. Nesbit, P.R.; Hugenholtz, C.H. Enhancing UAV–SfM 3D Model Accuracy in High-Relief Landscapes by Incorporating Oblique Images. Remote Sens. 2019, 11, 239. [Google Scholar] [CrossRef]
  49. Agisoft. Agisoft Metashape 2.0.1. 2023. Available online: https://www.agisoft.com/features/professional-edition/ (accessed on 2 February 2023).
  50. Litchi. Litchi for DJI Drones. 2023. Available online: https://flylitchi.com/ (accessed on 2 February 2023).
  51. Agisoft. Metashape Python Reference. Release 2.0.0. 2022. Available online: https://www.agisoft.com/pdf/metashape_python_api_2_0_0.pdf (accessed on 2 February 2023).
  52. DJI. DJI Mavic Air 2, Specs. n.d. Available online: https://www.dji.com/hu/mavic-air-2/specs (accessed on 2 September 2023).
  53. DJI. DJI Air 2 S, Specs. n.d. Available online: https://www.dji.com/hu/air-2s/specs (accessed on 2 September 2023).
  54. DJI. DJI Phantom 4, Specs. n.d. Available online: https://www.dji.com/hu/phantom-4-pro/info (accessed on 2 September 2023).
  55. DJI. DJI Inspire 1, Specs. n.d. Available online: https://www.dji.com/hu/inspire-1/info (accessed on 2 September 2023).
  56. Jiménez-Jiménez, S.I.; Ojeda-Bustamante, W.; Marcial-Pablo, M.d.J.; Enciso, J. Digital Terrain Models Generated with Low-Cost UAV Photogrammetry: Methodology and Accuracy. ISPRS Int. J. Geo-Inf. 2021, 10, 285. [Google Scholar] [CrossRef]
Figure 1. Deformations detected in the results of image processing. (a) Three-dimensional point cloud of a real accident scene taken in a single grid path with vertical camera—tilted image (Source: Varga, P.); (b) Three-dimensional point cloud of a real accident scene taken in a single grid path with vertical camera—seemingly correct (Source: Harmat, I.); (c) Point cloud (b) from above (Source: Harmat, I.); (d) Deformed orthomosaic corresponding to (c) (Source: Harmat, I.).
Figure 2. The drones used for the experiment. (a) DJI Mavic Air 2; (b) DJI Air 2S; (c) DJI Phantom 4; (d) DJI Inspire 1 v2.0.
Figure 3. The experiment site—a long straight road section (photo taken from the ground).
Figure 4. The experiment site—junction at the end of the straight section (photo taken from the ground).
Figure 5. Custom-made device for showing the vertical set up at the experiment site (photo taken from the ground).
Figure 6. (a) The device in the point cloud. (b) Spheres (green) fit to the points representing the large balls (purple) in the point cloud.
Figure 7. Calculation of angle error. (a) If the balls are not on the same vertical axis, the angle error is α; (b) View from above.
Figure 8. Flight paths. Yellow lines indicate the route of the UAV. Purple markers mark the positions where the UAV turned. (a) Block 1: L-shaped path (vertical camera setting); (b) Blocks 1 + 2: L-shaped (vertical camera setting) and circular (camera facing the POI) paths combined.
Figure 9. Image overlap analysis in the processing report of Block 1 images. The black dots represent the camera positions along the flight path. Colors indicate the number of images on which a given data point is captured.
Figure 10. Image overlap analysis in the processing report of Block 1 + 2 images. The black dots represent the camera positions along the flight path. Colors indicate the number of images on which a given data point is captured.
Figure 11. Deformations could be detected on the point clouds created from the nadir images from the experiment. (a) DJI Mavic—strong bowling; (b) DJI Air2S—slight doming; (c) DJI Phantom—tilting and doming (red line—distance measured on point cloud; green line—vertical and horizontal components); (d) DJI Inspire—strong doming and tilting.
Figure 12. Results of the processing Block 1 + 2 images of DJI Mavic. (a) Three-dimensional point cloud; (b) the same point cloud with camera positions.
Figure 13. Result of processing Block 1 (nadir) and Block 2 (POI) images together. (a) DJI Mavic; (b) DJI Air2S; (c) DJI Phantom; (d) DJI Inspire.
Figure 14. Distances measured on the 2D orthomosaic and the 3D point cloud. A, B, C—points on the outer side of the road marking line. D—point at the top of the verge marker post. E—projection of Point D on the horizontal plane of Point C.
Figure 15. Distances measured between the two arrow heads on the 2D orthomosaic and the 3D point cloud. (a) left side; (b) right side; (c) whole distance.
Figure 16. Flowchart for road accident site recording with UAV.
Table 1. Basic data of the four UAVs applied in the experiment [52,53,54,55].

| UAV | Camera Model | CMOS Sensor [inch] | Resolution [px] | Pixel Size [µm] | Focal Length [mm] |
|---|---|---|---|---|---|
| DJI Mavic Air 2 | DJI FC3170 | 1/2 | 4000 × 2250 | 1.77 × 1.77 | 4.5 |
| DJI Air 2S | DJI FC3411 | 1 | 5472 × 3648 | 2.51 × 2.51 | 8.38 |
| DJI Phantom 4 Pro+ | DJI FC6310 | 1 | 5472 × 3078 | 2.53 × 2.53 | 8.8 |
| DJI Inspire 1 v2.0 | DJI FC350 | 1/2.3 | 4000 × 2250 | 1.7 × 1.7 | 3.61 |
Table 2. Flight altitudes calculated for 3.3 mm/px resolution [52,53,54,55].

| UAV | f [mm] | GSD [m/px] | HR [px] | VR [px] | SW [mm] | SH [mm] | AGL1 [m] | AGL2 [m] |
|---|---|---|---|---|---|---|---|---|
| DJI Mavic Air 2 | 4.5 | 0.0033 | 4000 | 2250 | 7.1 | 4.0 | 8.4 | 8.4 |
| DJI Air 2S | 8.38 | 0.0033 | 5472 | 3648 | 13.7 | 9.2 | 11.0 | 11.0 |
| DJI Phantom 4 Pro+ | 8.8 | 0.0033 | 5472 | 3078 | 13.8 | 7.8 | 11.5 | 11.5 |
| DJI Inspire 1 v2.0 | 3.61 | 0.0033 | 4000 | 2250 | 6.8 | 3.8 | 7.0 | 7.0 |
Table 3. Mission data (Blocks 1 + 2).

| UAV | Flying Altitude [m] | Flying Time [min] | No. of Images | Media Size |
|---|---|---|---|---|
| DJI Mavic Air 2 | 10.5 | 10 | 162 | 988 MB |
| DJI Air 2S | 11 | 10 | 163 | 1.8 GB |
| DJI Phantom 4 Pro+ | 10.3 | 10 | 162 | 1.2 GB |
| DJI Inspire 1 v2.0 | 13.9 | 10 | 163 | 639 MB |
Table 4. Main characteristics of the orthomosaic and the point cloud for Block 1 + 2 images, with high- and low-quality processing.

| Aircraft | Orthomosaic Resolution [px] | Orthomosaic Size | Ground Resolution [mm/px] | Point Cloud Points (High Quality) | Processing Time (High Quality) | Point Cloud Points (Low Quality) | Processing Time (Low Quality) |
|---|---|---|---|---|---|---|---|
| DJI Mavic Air 2 | 24,738 × 28,994 | 534 MB | 3.58 | 48,594,498 | 21 min 6 s | 4,029,374 | 3 min 6 s |
| DJI Air 2S | 29,146 × 34,624 | 769 MB | 3.17 | 61,258,189 | 21 min 5 s | 4,890,043 | 5 min 38 s |
| DJI Phantom 4 Pro+ | 45,152 × 52,514 | 1.06 GB | 2.81 | 83,873,498 | 17 min 42 s | 6,584,051 | 6 min 0 s |
| DJI Inspire 1 v2.0 | 23,624 × 25,920 | 348 MB | 5.9 | 37,971,065 | 9 min 31 s | 2,620,706 | 3 min 2 s |
Table 5. Road width measured on the point cloud and its horizontal projection compared.

| UAV | 3D [m] | 2D [m] | Difference [m] | Error [%] |
|---|---|---|---|---|
| DJI Mavic Air 2 | 7.40 | 7.39 | 0.01 | 0.135 |
| DJI Air 2S | 7.37 | 7.37 | 0.00 | 0.00 |
| DJI Phantom 4 Pro+ | 7.42 | 7.41 | 0.01 | 0.135 |
| DJI Inspire 1 v2.0 | 7.40 | 7.40 | 0.00 | 0.00 |
Table 6. Horizontal accuracy. Distance measured by a laser distance meter: 7.42 m.

| UAV | Distance Measured on Point Cloud [m] | Difference [m] | Error [%] |
|---|---|---|---|
| DJI Mavic Air 2 | 7.40 | 0.02 | 0.270 |
| DJI Air 2S | 7.37 | 0.05 | 0.674 |
| DJI Phantom 4 Pro+ | 7.42 | 0.00 | 0.000 |
| DJI Inspire 1 v2.0 | 7.40 | 0.02 | 0.270 |
Table 7. Effect of 1% measurement error of the skid mark length on the calculated speed of the vehicle at the beginning of the skid mark.

| s [m] | v [km/h] | Δv [km/h] | Error [%] |
|---|---|---|---|
| 15 − 1% | 53.729 | −0.271 | 0.501 |
| 15 | 54 | — | — |
| 15 + 1% | 54.269 | 0.269 | 0.499 |
As the data in Table 7 show, a 1% difference in the length of the skid marks results only in a 0.5% difference in the calculated speed (0.3 km/h in the above example). The forensic expert, however, determines the speed with a much greater margin, generally as 53–55 km/h in this case.