Article

Object Perceptibility Prediction for Transmission Load Reduction in Vehicle-Infrastructure Cooperative Perception †

Pin Lv, Jinlei Han, Yuebin He, Jia Xu and Taoshen Li
1 School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
2 Guangxi Key Laboratory of Multimedia Communications and Network Technology, Nanning 530004, China
3 China-ASEAN International Joint Laboratory of Integrated Transport, Nanning University, Nanning 541699, China
* Authors to whom correspondence should be addressed.
† This paper is an extension of our conference paper: Lv, P.; He, Y.; Han, J.; Xu, J. Objects Perceptibility Prediction Model Based on Machine Learning for V2I Communication Load Reduction. In Proceedings of the International Conference on Wireless Algorithms, Systems, and Applications, Nanjing, China, 25–27 June 2021; Liu, Z., Wu, F., Das, S.K., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 521–528.
Sensors 2022, 22(11), 4138; https://doi.org/10.3390/s22114138
Submission received: 10 May 2022 / Revised: 25 May 2022 / Accepted: 27 May 2022 / Published: 30 May 2022
(This article belongs to the Special Issue Recent Advances in Connected and Autonomous Internet of Vehicles)

Abstract

Vehicle-infrastructure cooperative perception is an ingenious way to eliminate the environmental perception blind areas of connected and autonomous vehicles (CAVs). However, if the infrastructure transmits all environmental information to the nearby CAVs, the transmission load is so heavy that it wastes network resources, such as time and bandwidth, because part of the information is redundant for the CAVs. A more efficient approach is for the infrastructure to transmit only the information about objects which the CAVs cannot perceive by themselves. Therefore, the infrastructure needs to predict whether an object is perceptible to a CAV. In this paper, a machine-learning-based model is established to solve this problem, and a data filter is also designed to enhance the prediction accuracy in various scenarios. Based on the proposed model, the infrastructure transmits the environmental information selectively, which significantly reduces the transmission load. The experiments prove that the prediction accuracy of the model reaches up to 95%, and the transmission load is reduced by 55%.

1. Introduction

In recent years, connected and autonomous vehicles (CAVs) have attracted much attention from both academic and industrial circles. CAVs are expected to become mainstream in future transportation systems, improving traffic safety and efficiency [1,2].
Environment perception is a critical and fundamental task for CAVs because they need to make behavioral decisions according to the environmental information. However, environment perception is easily affected by obstacles and distance, so CAVs can hardly obtain complete environmental information by themselves. To solve such problems and to help CAVs perceive hidden vehicles and pedestrians blocked by obstacles such as buildings, cooperative perception between vehicles and roadside infrastructure has been proposed. The environment perception accuracy of CAVs can be improved significantly if roadside nodes transmit environmental information to them [3]. Nevertheless, with such methods the transmission load can easily exceed the channel capacity due to the limited bandwidth of the wireless channels [4], which imposes many limitations on CAVs. For example, in reference [5], the vehicle speed is limited to ensure safe data rates for V2I (vehicle-to-infrastructure) communications. To reduce the transmission load of environmental information and to avoid redundant perception information, a practical solution is for the roadside infrastructure to transmit object information selectively. Specifically, the infrastructure only needs to broadcast information about objects which the autonomous vehicles cannot perceive by themselves. As demonstrated in Figure 1, Vehicles 1 and 2 are autonomous vehicles, and the rest are traditional vehicles. In traditional methods, the roadside infrastructure broadcasts the information of all nonautonomous vehicles (Vehicles 3, 4, and 5) to improve traffic safety. With selective broadcasting, the roadside infrastructure only needs to broadcast the information of Vehicle 5, but it can do so only once it knows that Vehicle 5 cannot be perceived by the autonomous vehicles themselves. Hence, it is necessary for the roadside infrastructure to predict whether the objects are perceptible to the CAVs.
In [6], we propose a neural network model to predict the perceptibility of objects. Based on the model, the roadside infrastructure selectively broadcasts the unperceived information to autonomous vehicles. The model is trained and verified on a real dataset, and its effectiveness is evaluated using the IOU (intersection-over-union) as an indicator. In this paper, we further explore the factors affecting the perceptibility of objects. It is found that inaccurate predictions are more likely to be made in some special scenarios (such as crowded roads). Therefore, we propose a data filter which filters out the vehicles in such special scenarios, and their information is broadcast by the roadside infrastructure. The information of other vehicles is broadcast selectively according to the prediction results of our previously proposed perceptibility prediction model. Extensive experiments are carried out, and the results demonstrate that the newly introduced data filter further improves the accuracy of the perceptibility prediction model while reducing the communication load.
Vehicle-infrastructure cooperative perception has been studied in recent years. One type of method improves the sensing capability of CAVs directly by data fusion. Duan et al. [7] propose an environment perception framework in which CAVs fuse their own sensor data with the data of roadside infrastructure to improve sensing capability. A centralized fusion scheme is also proposed in [8], in which the sensing data are sent by CAVs and fused by the roadside infrastructure. Another type of method indirectly improves the perception capability by sending assistance messages via V2I communication. For example, Rebsamen et al. [9] set up roadside infrastructure that sends messages to self-driving cars and lets them slow down when pedestrians appear at an intersection. Liu et al. [10] evaluate the risk level through dedicated roadside infrastructure, so autonomous vehicles can effectively avoid danger by combining their own perception information with the risk level. In the studies above, the roadside infrastructure sends messages to every self-driving car rather than sending them selectively, which leads to a large amount of data redundancy and a heavy communication load.
To address the problems above, various approaches have been proposed. Garlichs et al. [11] contribute a set of generation rules for the collective perception service, specifying when to generate a message and which data to include in order to reduce the network load. In [12], the information is sent selectively by considering both the impact of the information on the perception accuracy and its size. In reference [13], the V2I communication load is reduced by a trajectory prediction approach. Noguchi et al. [14] use V2I communication for target tracking tasks: the tracker broadcasts a handover request message to the surrounding vehicles and infrastructure before the object is lost. However, these works barely notice that in many cases only the information that CAVs cannot perceive by themselves is needed; the perceptibility of objects is not considered in any of them.
The contributions of this paper are as follows:
  • A neural network model is proposed to predict the perceptibility of objects for autonomous vehicles.
  • A data filter is designed to screen out vehicles which are prone to be incorrectly predicted, and the accuracy of the prediction model is improved by handling these vehicles separately.
  • The model is trained and the accuracy is measured on real datasets. The effectiveness of the model is proven as the perceptual capability of CAVs is improved and the communication load is reduced.
The rest of the paper is organized as follows. The system model is introduced in Section 2, and the data preprocessing is described in Section 3. In Section 4, we propose a machine-learning-based model to predict whether an object can be perceived by vehicles, verify its prediction accuracy, and then design a data filter to improve the model. The effectiveness of the model is evaluated in Section 5. Finally, we draw conclusions in Section 6.

2. System Model

As described in Figure 2, the work of this paper is carried out in three steps.
  • Step 1: Data preprocessing.
    We first obtain the appropriate information from relevant open datasets and reproduce a perception algorithm on them. Second, the perception results are matched with the original data, and the perceptibility of each vehicle (whether it can be perceived or not) is recorded. Finally, the factors that may affect perceptibility are screened out and processed.
  • Step 2: Model training and data filter setting.
    In Step 2, a suitable perceptibility prediction model is built; it receives the processed data from Step 1 and gives a prediction of perceptibility. The model is trained with several comparative methods to verify the relationships between input factors and prediction accuracy. In addition, a data filter is proposed according to the prediction result of the model, and the data filter in turn improves the model accuracy.
  • Step 3: Evaluation of the proposed model.
    In Step 3, the effectiveness of the model is evaluated. Firstly, we evaluate the effectiveness of the model in improving the perceptual capability of vehicles by the IOU (intersection-over-union) metric. Secondly, extensive simulations are conducted under several comparative methods to evaluate the effectiveness of the model regarding the reduction in the communication load.

3. Data Preprocessing

The work in this paper is based on the KITTI dataset [15], which was created jointly by the Karlsruhe Institute of Technology in Germany and the Toyota Technological Institute. This dataset is currently one of the most important public datasets in the field of autonomous driving and is widely used in 3D object detection, visual odometry, and stereo evaluation. The KITTI dataset provides a large amount of data obtained from real scenes such as urban roads, villages, and highways; the sensing data include camera pictures, lidar point cloud files, coordinate conversion information, and data labels. The labels include the type of object (car, cyclist, pedestrian, etc.) and the degree of occlusion and truncation of the object, among others. The data of the KITTI dataset are expressed in the Velodyne coordinate system, and Figure 3 shows its settings.
In this work, we reimplement the object detection algorithm PointRCNN [16] on the KITTI dataset to obtain the predicted object positions, the self-rotation angles of objects, and other information. To match each object in the KITTI labels with the detection results, we constrain the deviation δ of the object center position on the 2D plane. First, the label of each object in the KITTI dataset is traversed to record its Velodyne coordinates x_a and z_a. The object is considered detected if the predicted coordinates x_p and z_p satisfy Formula (1).
$$ |x_a - x_p| < \delta, \qquad |z_a - z_p| < \delta \tag{1} $$
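To make the matching step concrete, below is a minimal sketch of the δ-matching procedure, assuming NumPy arrays of object centers; the concrete value of δ is not given in the text, so the default used here is purely illustrative.

```python
import numpy as np

def match_detections(labels_xz, preds_xz, delta=1.0):
    """Mark each labeled object as detected when some predicted center lies
    within delta of the label center on the 2D (x, z) plane, per Formula (1).
    labels_xz: (N, 2) ground-truth Velodyne (x, z) centers.
    preds_xz:  (M, 2) predicted (x, z) centers from the detector.
    Returns an (N,) boolean array: True = detected (perceptible)."""
    detected = np.zeros(len(labels_xz), dtype=bool)
    for i, (xa, za) in enumerate(labels_xz):
        for xp, zp in preds_xz:
            if abs(xa - xp) < delta and abs(za - zp) < delta:
                detected[i] = True
                break
    return detected
```

These boolean flags serve as the perceptibility labels used to train the prediction model in Section 4.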
Next, we analyze the data in the KITTI dataset carefully and clean them. After many attempts, we screen out the factors which have a great impact on the perceptibility of the objects: the coordinate information (x, y, z), the vehicle size information (h, w, l), the degree of occlusion of the vehicle o, the self-rotation angle of the vehicle θ, and the degree of truncation of the vehicle t. These nine factors are used as the input of the prediction model, and their details are shown in Table 1.
Because the values of the Velodyne coordinates (x, y, z) are considerably larger than the vehicle size values (w, h, l), the inputs need to be normalized. In this paper, z-score standardization is used as the normalization method. The normalization formula is shown below, where μ represents the mean and σ represents the standard deviation.
$$ x_{\mathrm{normal}} = (x - \mu) / \sigma \tag{2} $$
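A minimal sketch of this standardization is shown below; applying it column-wise, once per input factor, is our assumption of how the statistics are computed.

```python
import numpy as np

def normalize(features):
    """Column-wise z-score standardization: x_normal = (x - mu) / sigma."""
    mu = features.mean(axis=0)     # per-factor mean
    sigma = features.std(axis=0)   # per-factor standard deviation
    return (features - mu) / sigma
```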

4. Method and Verification

In this section, a perceptibility prediction model (PPM) is built to predict whether the objects can be perceived, and the effects of each input factor on the prediction results are verified. In addition, a data filter is proposed to improve the performance of the PPM.

4.1. Training and Verification of the Model

As shown in Figure 4, the PPM is built as a neural network which consists of four layers. The first layer is the input layer, which receives the external factors, such as the size of the object, as input. Two hidden layers follow the input layer; the output of each hidden layer passes through the ReLU activation function before entering the next layer. The output layer maps the output to “0” or “1” through the Sigmoid function, where “1” represents that the object is perceptible and “0” denotes that the object is imperceptible. After many attempts, we found the best parameter settings for the model; the specific parameter settings of each layer in the PPM are shown in Table 2.
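For reference, the following sketch expresses the architecture of Table 2 in PyTorch; the framework choice is an assumption, as the paper does not name one.

```python
import torch.nn as nn

class PPM(nn.Module):
    """Perceptibility prediction model with the layer sizes of Table 2:
    9 input factors -> 64 -> 32 -> 1 (Sigmoid)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(9, 64), nn.ReLU(),    # hidden layer 1
            nn.Linear(64, 32), nn.ReLU(),   # hidden layer 2
            nn.Linear(32, 1), nn.Sigmoid()  # output in (0, 1)
        )

    def forward(self, x):
        return self.net(x)
```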
The KITTI dataset is divided into two parts: 80% as the training set and 20% as the testing set. The model is trained for 120 epochs with a learning rate of 0.005; the mean-square error (MSE) is used as the loss function, and the model is verified on the testing set every 5 epochs. The prediction accuracy and the MSE loss are recorded during the training process and are depicted in Figure 5a,b, respectively.
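A training loop matching this setup might look as follows; full-batch updates and the Adam optimizer are assumptions, since the text specifies only the epoch count, learning rate, loss function, and evaluation interval.

```python
import torch

def train_ppm(model, x_tr, y_tr, x_te, y_te, epochs=120, lr=0.005):
    """Train the PPM with MSE loss and evaluate on the testing set every 5 epochs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for epoch in range(1, epochs + 1):
        model.train()
        opt.zero_grad()
        loss = loss_fn(model(x_tr).squeeze(1), y_tr)  # y in {0.0, 1.0}
        loss.backward()
        opt.step()
        if epoch % 5 == 0:
            model.eval()
            with torch.no_grad():
                pred = (model(x_te).squeeze(1) > 0.5).float()
                acc = (pred == y_te).float().mean().item()
            print(f"epoch {epoch}: loss={loss.item():.4f}, test acc={acc:.3f}")
```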
The results reveal that the external factors have a great influence on the perceptibility of the objects. After training, the prediction accuracy of the testing set and training set reached 93%. It is possible to accurately predict whether an object can be detected according to these factors.
Although the factors we screen out are highly related to the perceptibility of the objects, their effects on the prediction result may vary. Therefore, to figure out the effect of each factor on the prediction accuracy, the six factor groups below are removed from the input of the PPM separately to set up control groups (a sketch of this column-dropping step follows the list):
  • Coordinates of the object (x, y, z).
  • Size of the object (w, h, l).
  • Degree of occlusion (o).
  • Self-rotation angle (θ).
  • Degree of truncation (t).
  • The set (o, θ, t).
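The sketch below illustrates the column-dropping step behind these control groups; the column order is a hypothetical layout of the nine input factors, and the PPM input layer must be resized to the remaining width before retraining.

```python
import numpy as np

# Hypothetical column order for the nine input factors.
COLUMNS = ["x", "y", "z", "w", "h", "l", "o", "theta", "t"]

CONTROL_GROUPS = {
    "remove_xyz": ["x", "y", "z"],
    "remove_whl": ["w", "h", "l"],
    "remove_o": ["o"],
    "remove_theta": ["theta"],
    "remove_t": ["t"],
    "remove_o_theta_t": ["o", "theta", "t"],
}

def drop_factors(features, group):
    """Remove the columns of one control group from the (N, 9) feature matrix."""
    cols = [COLUMNS.index(name) for name in CONTROL_GROUPS[group]]
    return np.delete(features, cols, axis=1)
```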
The PPM is trained with each input configuration separately, and the prediction accuracy on the training set and the testing set is recorded, as shown in Figure 6a,b. If the proportion of positive samples were much larger (or much smaller) than that of negative samples, a high accuracy rate could be obtained trivially by predicting all samples as positive (or negative); therefore, the proportion of positive samples, 0.678, is used as the baseline. The accuracy of the last training epoch on the training set and the testing set is shown in Table 3.
According to the comparative experiment, the following conclusions can be drawn. First, the size information of the object has the greatest impact on the prediction result: after removing the object size information (w, h, l), the prediction accuracy drops to about 85%. Second, the position of the object also affects the prediction result significantly; the prediction accuracy drops below 90% after removing the position information (x, y, z). Furthermore, the occlusion degree, the self-rotation angle, and the truncation degree individually have little effect on the prediction accuracy, but removing all three factors at the same time results in a slight decrease in the prediction accuracy.

4.2. Setting and Evaluation of the Data Filter

Although the average prediction accuracy of the PPM reaches 93%, its performance is imbalanced across different scenarios. For instance, the prediction accuracy of the PPM reaches 97% in most scenarios, while it is lower than 83% in a few scenarios, which indicates an unsatisfactory performance in certain situations. Therefore, we further explore the factors which affect the perceptibility of objects and find that, due to the limitations of the sensors and the perception algorithm, wrong predictions are more likely to be made in some special scenarios, such as crowded roads. In this section, we try to figure out in which scenarios inaccurate predictions are likely to be made; the information of all vehicles in such scenarios is broadcast, while the information of vehicles in other scenarios is broadcast selectively according to the prediction results of the PPM. In this way, the accuracy of the PPM can be improved, and its performance across various scenarios can be balanced.

4.2.1. Statistical Analysis

Scenarios which contain inaccurately predicted samples are separated from the KITTI dataset and named the vulnerable dataset; the rest of the dataset is named the stable dataset. To determine the difference between these two datasets, we count the number of vehicles in each scenario separately, and the statistics are shown in Table 4. It is obvious that the number of vehicles per scenario in the vulnerable dataset is significantly larger than in the stable dataset.
In addition, we visualize the point clouds and find that scenarios in which multiple vehicles approach each other are relatively common in the vulnerable dataset, as illustrated in Figure 7. Therefore, a data filter can be built to separate the scenarios which are prone to inaccurate predictions based on these distinguishing characteristics of the vulnerable dataset.

4.2.2. Data Filter Setting and Evaluation

The structure of the data filter is sketched in Figure 8. Firstly, each scenario is converted to a tensor with n dimensions, where n is equal to the number of vehicles it contains. Secondly, the scenarios in which n < 5 are assigned to the stable dataset. Then, a judgment model is built to judge whether vehicles are approaching each other in a scenario, and the remaining data are divided into the vulnerable dataset or the stable dataset accordingly. In the judgment model, a boundary threshold and an area threshold are set to determine whether a scenario such as the one in Figure 7 appears.
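The following sketch shows one way such a filter could be organized. The paper specifies only the n < 5 rule and the existence of a boundary threshold and an area threshold, so the concrete proximity criterion and the numeric threshold values below are assumptions.

```python
import numpy as np

def is_vulnerable(vehicle_xz, min_vehicles=5, boundary_thresh=3.0, area_thresh=50.0):
    """Route one scenario to the vulnerable dataset (True) or stable dataset (False).
    vehicle_xz: (n, 2) array of vehicle positions on the 2D plane."""
    pts = np.asarray(vehicle_xz, dtype=float)
    if len(pts) < min_vehicles:
        return False  # scenarios with fewer than five vehicles are stable
    # Vehicles whose nearest neighbour lies within the boundary threshold.
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    close = pts[dists.min(axis=1) < boundary_thresh]
    if len(close) < min_vehicles:
        return False
    # The close vehicles form a tight cluster if their bounding box is small.
    extent = close.max(axis=0) - close.min(axis=0)
    return extent[0] * extent[1] < area_thresh
```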
After screening out the vulnerable dataset, the information of the vehicles in it is broadcast by the roadside infrastructure, and the perceptibility prediction accuracy on the rest of the dataset can be improved. Ideally, most of the vulnerable data and few other data are filtered out, and the perceptibility prediction accuracy reaches a high value. Therefore, we set three metrics to evaluate the data filter:
  • p1: the filtered-out vulnerable data as a percentage of the total vulnerable data.
  • p2: the filtered-out stable data as a percentage of the total stable data.
  • acc: the accuracy of the perceptibility prediction model after filtering out the data.
We expect a large value of p1 and a small value of p2. However, these two metrics are positively correlated; hence, we set some restrictions and adjust the threshold parameters of the judgment model to obtain the best performance for each metric, as listed in Table 5. The setting fix_p1 minimizes p2 subject to p1 being larger than 0.5; fix_p2 maximizes p1 subject to p2 being less than 0.2; and Max_acc yields the global maximum accuracy of the perceptibility prediction model. The prediction accuracy reaches more than 0.95 in all three situations.
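For concreteness, p1 and p2 can be computed from two boolean arrays as sketched below (the array names are hypothetical); tuning then amounts to sweeping the judgment-model thresholds and keeping the setting that satisfies each restriction.

```python
import numpy as np

def filter_rates(filtered, vulnerable):
    """p1: fraction of vulnerable scenarios the filter removes;
    p2: fraction of stable scenarios the filter removes."""
    filtered = np.asarray(filtered, dtype=bool)
    vulnerable = np.asarray(vulnerable, dtype=bool)
    p1 = filtered[vulnerable].mean()
    p2 = filtered[~vulnerable].mean()
    return p1, p2
```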
In addition, experiments are conducted to evaluate the effectiveness of the data filter in improving the accuracy of the PPM in the worst case. Traffic scenarios in the KITTI dataset are randomly selected as the input of the PPM, and the prediction accuracy with and without the data filter is recorded, as shown in Figure 9.
The experimental results clearly show that the addition of the data filter not only improves the prediction accuracy of the PPM but also effectively improves the performance of the PPM in the worst case. Before adding the data filter, the average prediction accuracy of the PPM is approximately 93%, and the best performance of prediction accuracy is up to 97%, while in the worst case the prediction accuracy is only approximately 83%. After adding the data filter, the average prediction accuracy of the PPM is approximately 95%, and the variations of prediction accuracy are less than 1% in various scenarios.

5. Evaluation of the PPM

In Section 4, it was verified that the PPM predicts the perceptibility of an object with high accuracy, so cooperative messages can be broadcast selectively based on the prediction results. Specifically, the information of an object is broadcast if it is predicted to be imperceptible to the CAVs; conversely, the information is not broadcast if the object is predicted to be perceptible. This approach effectively reduces the communication load; however, the reduction in broadcast cooperative messages may at the same time degrade the perceptual capability of CAVs. In this section, we first evaluate the effectiveness of our method in improving the perceptual capability of CAVs. Then, a simulation experiment is conducted to verify the reduction in the communication load.

5.1. Effectiveness in Improving Perceptual Capability

For different CAVs, the effectiveness of cooperative messages in improving the perceptual capability differs. For example, an object may be perceived precisely by some CAVs but not by others. For the CAVs that can perceive the object, the cooperative perception messages are unnecessary; for the CAVs that cannot, the messages are useful for improving their perceptual capability. Therefore, to evaluate the effectiveness of a cooperative perception message about an object, it is essential to estimate the perceived accuracy of the object. In this section, the intersection-over-union (IOU) [17] is used as a metric to describe the perceived accuracy of objects. The IOU is demonstrated in Figure 10: the occupied area of the object in the actual label of the dataset is marked as GroundTruth, the occupied area of the predicted object position is marked as DetectionResult, and their intersection is marked as InterArea. The IOU calculation formula is as follows:
$$ IOU = \frac{\mathrm{InterArea}}{\mathrm{DetectionResult} + \mathrm{GroundTruth} - \mathrm{InterArea}} \tag{3} $$
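A minimal sketch of this computation for axis-aligned 2D boxes follows; KITTI boxes also carry a rotation angle, so an exact evaluation needs a polygon intersection, which this simplified version omits.

```python
def iou_2d(box_a, box_b):
    """IOU of two axis-aligned boxes given as (x_min, z_min, x_max, z_max)."""
    ix_min = max(box_a[0], box_b[0])
    iz_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iz_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iz_max - iz_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)  # union = A + B - intersection
```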
We use all the data (including the training set and the testing set) as the input of the PPM and obtain the prediction of whether each object can be detected; we then compare the predictions with the actual labels of the KITTI dataset and obtain two sets, True Positive (TP) and False Positive (FP). Finally, we calculate the average IOU of the object detection in the two sets separately. The process of evaluation is sketched in Figure 11.
The TP set includes detected objects which are predicted as positive samples by the PPM, whereas the FP set includes undetected objects which are predicted as positive samples. According to the calculation results, the average IOU value of the FP set is 0.284, while the average IOU value of the TP set is 0.741. Two conclusions can be drawn from these results:
  • In our method, the information of the objects in the TP set is not broadcast by the roadside infrastructure because they can be perceived by CAVs according to the prediction results of the PPM. Such a strategy does not reduce the perceptual capability of the CAVs because the objects in the TP set are perceived with high IOU accuracy, which means that CAVs are able to accurately perceive these objects by themselves, and additional cooperative messages are not helpful.
  • In our method, the information of the objects that are predicted to be imperceptible is broadcast by the roadside infrastructure. These objects can be divided into two types. In the first type, the objects are indeed not perceived by the CAVs, and the additional cooperative messages about them are obviously helpful in improving the perceptual capability of the CAVs. In the second type, the objects can be perceived by the CAVs (although they are predicted to be imperceptible); these objects belong to the FP set, and they are perceived with low IOU accuracy. Therefore, the cooperative perception messages about them are also useful in improving the perceptual capability of the CAVs.
In summary, our method is effective in improving the perceptual capability of CAVs while the communication load can also be reduced.

5.2. Effectiveness in Reducing the Communication Load

5.2.1. Experimental Settings

In this section, we evaluate the effectiveness of the PPM in reducing the communication load by conducting simulation experiments. The simulations are based on the SUMO [18] traffic simulator, and the simulation scenario is the one depicted in Figure 1 in Section 1. The road has four lanes, and vehicles appear randomly at the starting points of the lanes at each simulation step. At the beginning, there are no vehicles on the road; the number of vehicles gradually increases to the maximum road capacity as the experiment proceeds.
In the simulations, the cooperative perception messages are broadcast according to the following three strategies:
  • All: the information of all vehicles in the coverage area of the infrastructure is broadcast.
  • PPM: the information of the vehicles which are predicted to be imperceptible by the PPM is broadcast.
  • PPM+filter: the information of the vehicles which are assigned to the vulnerable dataset or are predicted to be imperceptible by the PPM is broadcast.
In addition, we use the traffic of the cooperative perception messages (CPMs) as a metric to evaluate the reduction in communication load. According to the latest standards developed by the European Telecommunications Standards Institute (ETSI) [19], a CPM can be composed of multiple message containers. The cooperative message is contained in the perceived object container (POC), which contains 128 fields of perceptual information about the object (including ID, distance, speed, acceleration, etc.). In the simulation, the information about each vehicle is contained in a single CPM, which is broadcast by the roadside infrastructure.
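The per-step CPM count under the three strategies can be expressed as sketched below; predicts_perceptible and in_vulnerable are hypothetical callables standing in for the PPM prediction and the data filter decision.

```python
def cpm_count(vehicles, strategy, predicts_perceptible, in_vulnerable):
    """Number of CPMs the infrastructure broadcasts in one simulation step."""
    if strategy == "All":
        return len(vehicles)
    if strategy == "PPM":
        return sum(1 for v in vehicles if not predicts_perceptible(v))
    if strategy == "PPM+filter":
        return sum(1 for v in vehicles
                   if in_vulnerable(v) or not predicts_perceptible(v))
    raise ValueError(f"unknown strategy: {strategy}")
```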

5.2.2. Experimental Results

In Figure 12, the average numbers of CPMs broadcast under the different methods are shown. In the All method, information about all the vehicles is broadcast by the roadside infrastructure; therefore, the number of broadcast CPMs equals the number of vehicles on the road. For the PPM method, CPMs are broadcast selectively based on the prediction results of the PPM; thus, the average number of CPMs is significantly reduced compared with the All method, by approximately 55%. For the PPM+filter method, the reduction is approximately 45% compared with the All method. The number of broadcast CPMs in the PPM+filter method is generally the same as that in the PPM method. However, the curves in Figure 12 indicate that the PPM+filter method broadcasts more CPMs than the PPM method when the number of vehicles on the road exceeds 60 and is increasing (such as Epochs 10–20 and Epochs 35–45 in Figure 12). Under this condition, traffic is crowded and more vehicles are about to enter the road, so congestion such as that shown in Figure 7 can easily occur. The filter is triggered under this condition, which leads to an increase in the broadcast CPMs.
It is worth noting that although the PPM+filter method is weaker than the PPM method at reducing the communication load, as discussed in Section 4.2.2, it is better at improving the perceptual capability of CAVs. In addition, compared with the PPM method, the PPM+filter method consumes more computational resources due to the introduction of the data filter. Therefore, the trade-off among communication load reduction, perception capability improvement, and computational resource consumption should be considered in practical applications. For example, the PPM method can be deployed when reducing the communication load is the top priority; the PPM+filter method is a better choice when improving the perception capability of CAVs is more important; and the PPM+filter method should be given lower priority than the PPM method when computational resources are insufficient.

6. Conclusions

In this paper, we built a perceptibility prediction model (PPM) to predict whether an object can be detected by an autonomous vehicle. Experiments proved that the model achieves high accuracy. In addition, a data filter was proposed which improves the accuracy of the model at the cost of additional computational resources. When the roadside infrastructure predicted, according to the model, that an autonomous vehicle could not perceive an object, it sent the object information to the vehicle, which extended the perception range of the autonomous vehicle and reduced the communication load of the roadside infrastructure. Finally, the effectiveness of the model was proven by extensive experiments.

Author Contributions

Conceptualization, P.L., J.H. and Y.H.; Data curation, Y.H.; Formal analysis, J.H. and Y.H.; Funding acquisition, P.L., J.X. and T.L.; Investigation, P.L. and J.H.; Methodology, P.L., J.H. and Y.H.; Project administration, P.L. and J.X.; Resources, P.L., J.H., J.X. and T.L.; Software, J.H. and Y.H.; Supervision, P.L., J.X. and T.L.; Validation, J.H.; Visualization, J.H.; Writing—original draft, P.L., J.H., Y.H. and J.X.; Writing—review & editing, P.L., J.H. and J.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China [62062008], the Special Funds for Guangxi BaGui Scholars, the Guangxi Science and Technology Plan Project of China [AD20297125], the Guangxi Natural Science Foundation [2019JJA170045], and the Innovation Project of Guangxi Graduate Education [YCSW2022104].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of the study are available from the first author upon request.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

References

  1. Min, K.; Han, S.; Lee, D.; Choi, D.; Sung, K.; Choi, J. SAE Level 3 Autonomous Driving Technology of the ETRI. In Proceedings of the 2019 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Korea, 16–18 October 2019; pp. 464–466. [Google Scholar] [CrossRef]
  2. Okuyama, T.; Gonsalves, T.; Upadhay, J. Autonomous driving system based on deep Q learnig. In Proceedings of the 2018 International Conference on Intelligent Autonomous Systems (ICoIAS), Singapore, 1–3 March 2018; pp. 201–205. [Google Scholar]
  3. Aoki, S.; Higuchi, T.; Altintas, O. Cooperative perception with deep reinforcement learning for connected vehicles. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 328–334. [Google Scholar]
  4. Delooz, Q.; Festag, A. Network load adaptation for collective perception in V2X communications. In Proceedings of the 8th IEEE International Conference on Connected Vehicles and Expo (ICCVE 2019), Graz, Austria, 4–8 November 2019; pp. 1–6. [Google Scholar] [CrossRef]
  5. Fukatsu, R.; Sakaguchi, K. Automated Driving with Cooperative Perception Using Millimeter-wave V2I Communications for Safe and Efficient Passing Through Intersections. In Proceedings of the 2021 IEEE 93rd Vehicular Technology Conference (VTC2021-Spring), Helsinki, Finland, 25–28 April 2021; pp. 1–5. [Google Scholar] [CrossRef]
  6. Lv, P.; He, Y.; Han, J.; Xu, J. Objects Perceptibility Prediction Model Based on Machine Learning for V2I Communication Load Reduction. In Proceedings of the International Conference on Wireless Algorithms, Systems, and Applications, Nanjing, China, 25–27 June 2021; Liu, Z., Wu, F., Das, S.K., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 521–528. [Google Scholar]
  7. Duan, X.; Jiang, H.; Tian, D.; Zou, T.; Zhou, J.; Cao, Y. V2I based environment perception for autonomous vehicles at intersections. China Commun. 2021, 18, 1–12. [Google Scholar] [CrossRef]
  8. Calvo, J.Á.L.; Mathar, R. A multi-level cooperative perception scheme for autonomous vehicles. In Proceedings of the 2017 15th International Conference on ITS Telecommunications (ITST), Warsaw, Poland, 29–31 May 2017; pp. 1–5. [Google Scholar] [CrossRef]
  9. Rebsamen, B.; Bandyopadhyay, T.; Wongpiromsarn, T.; Kim, S.; Chong, Z.; Qin, B.; Ang, M.; Frazzoli, E.; Rus, D. Utilizing the infrastructure to assist autonomous vehicles in a mobility on demand context. In Proceedings of the TENCON 2012 IEEE Region 10 Conference, Cebu, Philippines, 19–22 November 2012; pp. 1–5. [Google Scholar]
  10. Liu, W.; Muramatsu, S.; Okubo, Y. Cooperation of V2I/P2I communication and roadside radar perception for the safety of vulnerable road users. In Proceedings of the 16th International Conference on Intelligent Transportation Systems Telecommunications (ITST), Lisbon, Portugal, 14–17 October 2018; pp. 1–7. [Google Scholar]
  11. Garlichs, K.; Gunther, H.J.; Wolf, L.C. Generation Rules for the Collective Perception Service. In Proceedings of the IEEE Vehicular Networking Conference (VNC), Los Angeles, CA, USA, 4–6 December 2019. [Google Scholar] [CrossRef]
  12. Allig, C.; Wanielik, G. Dynamic Dissemination Method for Collective Perception. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 3756–3762. [Google Scholar]
  13. Boquet, G.; Vicario, J.L.; Correa, A.; Morell, A.; Rashdan, I.; Díaz, E.M.; de Ponte Müller, F. Trajectory prediction to avoid channel congestion in V2I communications. In Proceedings of the IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), Montreal, QC, Canada, 8–13 October 2017; pp. 1–6. [Google Scholar]
  14. Noguchi, T.; Ting, Y.C.; Yoshida, M.; Ramonet, A.G. Real-time cooperative vehicle tracking in vanets. In Proceedings of the 29th International Conference on Computer Communications and Networks (ICCCN), Honolulu, HI, USA, 3–6 August 2020; pp. 1–6. [Google Scholar]
  15. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar] [CrossRef]
  16. Shi, S.; Wang, X.; Li, H.P. PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 16–20. [Google Scholar]
  17. Zhou, D.; Fang, J.; Song, X.; Guan, C.; Yin, J.; Dai, Y.; Yang, R. IoU Loss for 2D/3D Object Detection. In Proceedings of the 2019 International Conference on 3D Vision (3DV), Quebec City, QC, Canada, 16–19 September 2019; pp. 85–94. [Google Scholar]
  18. Krajzewicz, D.; Erdmann, J.; Behrisch, M.; Bieker, L. Recent development and applications of SUMO-Simulation of Urban MObility. Int. J. Adv. Syst. Meas. 2012, 5, 128–138. [Google Scholar]
  19. ETSI. Intelligent Transport System (ITS); Vehicular Communications; Basic Set of Applications; Analysis of the Collective-Perception Service (CPS); Draft TR 103 562 V0.0.15; ETSI: Sophia Antipolis, France, 2019. [Google Scholar]
Figure 1. Scenario of broadcasting information selectively.
Figure 2. The system model. The model is built and evaluated in three steps.
Figure 3. The Velodyne coordinate system of the KITTI dataset. The z-axis points in the direction of motion of the vehicle, the x-axis is perpendicular to it, and the y-axis points upward.
Figure 4. Structure of the PPM.
Figure 5. The accuracy and loss of the PPM.
Figure 6. Training and validation accuracy of the control group.
Figure 7. Scenarios of multiple vehicles. The vehicles approach each other, and such scenarios often contain incorrect prediction samples.
Figure 8. Structure of the data filter.
Figure 9. Prediction accuracy of the PPM. The bars on the histogram indicate the range of prediction accuracy in different scenarios.
Figure 10. Area definition in the IOU calculation.
Figure 11. The process of evaluation. Since negative samples have no IOU value, only the IOUs of positive samples are counted.
Figure 12. Results of the simulation.
Table 1. Details of the input factors.

| Factor | Physical Meaning | Range |
| --- | --- | --- |
| x, y, z | Coordinates of the object | [0, 100] |
| w, h, l | Width, height, and length of the object | [0, 20] |
| o | Degree of occlusion | {0, 1, 2, 3} |
| θ | Self-rotation angle of the object | [−π, π] |
| t | Degree of truncation | {0, 1} |
Table 2. Parameters of the PPM.

| Layer | Parameters | Activation Function |
| --- | --- | --- |
| Input Layer | neurons: 9 | None |
| Hidden Layer 1 | neurons: 64 | ReLU |
| Hidden Layer 2 | neurons: 32 | ReLU |
| Output Layer | neurons: 1 | Sigmoid |
Table 3. Training and validation accuracy of the control group in the final epoch.

| Parameter | All | Remove (x, y, z) | Remove (w, h, l) | Remove o | Remove θ | Remove t | Remove (o, θ, t) | Baseline |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| acc | 0.932 | 0.894 | 0.853 | 0.926 | 0.928 | 0.928 | 0.922 | 0.678 |
| val_acc | 0.930 | 0.894 | 0.850 | 0.928 | 0.927 | 0.929 | 0.926 | 0.678 |
Table 4. Proportion of scenarios with multiple vehicles in the different datasets.

| Number of Vehicles | All | Vulnerable | Stable |
| --- | --- | --- | --- |
| ≥5 | 0.531 | 0.773 | 0.477 |
| ≥6 | 0.423 | 0.656 | 0.372 |
| ≥7 | 0.344 | 0.569 | 0.293 |
| ≥8 | 0.270 | 0.504 | 0.218 |
| ≥9 | 0.197 | 0.411 | 0.149 |
Table 5. Best performance of the metrics.

| Metric | fix_p1 | fix_p2 | Max_acc |
| --- | --- | --- | --- |
| p1 | 0.501 | 0.401 | 0.657 |
| p2 | 0.265 | 0.194 | 0.378 |
| acc | 0.952 | 0.951 | 0.956 |