# PESO: A Seq2Seq-Based Vessel Trajectory Prediction Method with Parallel Encoders and Ship-Oriented Decoder


## Abstract


## 1. Introduction

- We propose a novel deep learning model, PESO, based on a Seq2Seq network for vessel trajectory prediction, which aims to capture richer features from previous information and better represent the spatial correlation of historical trajectory points.
- We develop Parallel Encoders, including the Location Encoder and the Sailing Status Encoder, to capture more information from longitude, latitude, COG, SOG, and sailing distance.
- We develop the Ship-Oriented Decoder and the Semantic Location Vector (SLV). The Ship-Oriented Decoder can utilize the SLV to generate accurate prediction results, which better represent the spatial correlation of historical track points.
- We implement comparative experiments with several baseline models. The experimental results show that our model is superior to them both quantitatively and qualitatively.

## 2. Related Works

#### 2.1. Seq2Seq Model

#### 2.2. Vessel Trajectory Prediction

## 3. Proposed Method

#### 3.1. Definitions and Problem Statements

**Definition 1.**

**Definition 2.**

**Definition 3.**

#### 3.2. Data Preprocessing

#### 3.3. PESO

#### 3.3.1. Semantic Location Vector

#### 3.3.2. Parallel Encoders

#### 3.3.3. Ship-Oriented Decoder

#### 3.3.4. Objective Function

## 4. Experiments

#### 4.1. Experiment Settings

#### 4.1.1. Dataset

#### 4.1.2. Hyperparameters and Experimental Environment

#### 4.1.3. Baselines

1. LSTM. An RNN variant composed of five layers.
2. BiLSTM. An RNN variant composed of five bidirectional layers.
3. GRU. Similar to LSTM.
4. BiGRU. Similar to BiLSTM.
5. LSTM–LSTM. A Seq2Seq-based model with five LSTM layers in both the encoder and the decoder. LSTM–LSTM and PESO both use an LSTM encoder and an LSTM decoder; the difference is that the baseline LSTM–LSTM takes only longitude and latitude as input, whereas PESO's input includes multiple semantic features and an oriented vector.
6. BiLSTM–LSTM. A Seq2Seq-based model with five BiLSTM layers in the encoder and five LSTM layers in the decoder.
7. GRU–GRU. Similar to LSTM–LSTM.
8. BiGRU–GRU. Similar to BiLSTM–LSTM.

1. ARIMA. A statistical time-series forecasting model.
2. Kalman Filter. A linear optimal estimation model.
3. VAR. A statistical model for multivariate time-series prediction.
4. ST-Norm. A deep learning model for time-series forecasting.

#### 4.1.4. Evaluation Metrics
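The result tables below report RMSE, MAE, ADE, and FDE over predicted (latitude, longitude) sequences. A minimal sketch, assuming the common definitions of these metrics (ADE as the mean pointwise Euclidean displacement, FDE as the displacement of the final predicted point):

```python
import math

def rmse(pred, true):
    # root-mean-square error over all latitude/longitude components
    se = [(p - t) ** 2 for pp, tt in zip(pred, true) for p, t in zip(pp, tt)]
    return math.sqrt(sum(se) / len(se))

def mae(pred, true):
    # mean absolute error over all latitude/longitude components
    ae = [abs(p - t) for pp, tt in zip(pred, true) for p, t in zip(pp, tt)]
    return sum(ae) / len(ae)

def ade(pred, true):
    # average displacement error: mean Euclidean distance per track point
    d = [math.dist(pp, tt) for pp, tt in zip(pred, true)]
    return sum(d) / len(d)

def fde(pred, true):
    # final displacement error: Euclidean distance at the last track point
    return math.dist(pred[-1], true[-1])
```

Each function takes two equal-length lists of (lat, lon) pairs; the paper reports these values directly in degree units, which matches the magnitudes in the tables.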

#### 4.2. Model Performance Comparison

#### 4.2.1. Comparison Results with Baselines

#### 4.2.2. Exploration on Seq2Seq Structure of PESO

#### 4.2.3. Quantitative Analysis

#### 4.3. Ablation Study

- *Without SOG, COG, and DIS*. Remove the speed, course, and sailing distance from the input of the Sailing Status Encoder of the Parallel Encoders;
- *Without SOG*. Remove the speed from the input of the Sailing Status Encoder of the Parallel Encoders;
- *Without COG*. Remove the course from the input of the Sailing Status Encoder of the Parallel Encoders;
- *Without DIS*. Remove the sailing distance from the input of the Sailing Status Encoder of the Parallel Encoders;
- *Without SLV*. Remove the Semantic Location Vector from the input of the Ship-Oriented Decoder.
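The sailing distance (DIS) removed in the variants above is the distance travelled between consecutive AIS points. A minimal sketch, assuming it is computed as the great-circle (haversine) distance; the paper's exact formula may differ:

```python
import math

def haversine_km(p1, p2, r=6371.0):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def sailing_distances(track):
    """DIS feature: distance between each pair of consecutive AIS points."""
    return [haversine_km(track[i - 1], track[i]) for i in range(1, len(track))]
```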

#### 4.4. Case Study

#### 4.4.1. Visual Result Comparing with Baselines

#### 4.4.2. Visual Result of Exploration on Seq2Seq Structure of PESO

#### 4.4.3. Qualitative Ablation Results

## 5. Conclusions

## 6. Future Works

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## Abbreviations

AIS | Automatic Identification System |
---|---|
Seq2Seq | Sequence-to-Sequence Network |
PESO | Parallel Encoders and Ship-Oriented Decoder model |
COG | Course Over Ground |
SOG | Speed Over Ground |
SLV | Semantic Location Vector |
CBOW | Continuous Bag-of-Words |

## References

- Capobianco, S.; Forti, N.; Millefiori, L.M.; Braca, P.; Willett, P. Uncertainty-Aware Recurrent Encoder-Decoder Networks for Vessel Trajectory Prediction. In Proceedings of the 2021 IEEE 24th International Conference on Information Fusion (FUSION), Sun City, South Africa, 1–4 November 2021.
- Lee, H.T.; Lee, J.S.; Yang, H.; Cho, I.S. An AIS Data-Driven Approach to Analyze the Pattern of Ship Trajectories in Ports Using the DBSCAN Algorithm. Appl. Sci. 2021, 11, 799.
- Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient Estimation of Word Representations in Vector Space. arXiv 2013, arXiv:1301.3781.
- Cho, K.; van Merrienboer, B.; Gülçehre, Ç.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. arXiv 2014, arXiv:1406.1078.
- Wilms, H.; Cupelli, M.; Monti, A. Combining auto-regression with exogenous variables in sequence-to-sequence recurrent neural networks for short-term load forecasting. In Proceedings of the 2018 IEEE 16th International Conference on Industrial Informatics (INDIN), Porto, Portugal, 18–20 July 2018.
- Razghandi, M.; Zhou, H.; Erol-Kantarci, M.; Turgut, D. Short-Term Load Forecasting for Smart Home Appliances with Sequence to Sequence Learning. In Proceedings of the ICC 2021—IEEE International Conference on Communications, Montreal, QC, Canada, 14–23 June 2021.
- Ahmad, T.; Zhang, D. A data-driven deep sequence-to-sequence long-short memory method along with a gated recurrent neural network for wind power forecasting. Energy 2022, 239, 122109.
- Wang, K.; Zhong, H.; Yu, N.; Xia, Q. Nonintrusive Load Monitoring based on Sequence-to-sequence Model With Attention Mechanism. Zhongguo Dianji Gongcheng Xuebao/Proc. Chin. Soc. Electr. Eng. 2019, 39, 75–83.
- Fang, Z.; Crimier, N.; Scanu, L.; Midelet, A.; Delinchant, B. Multi-zone indoor temperature prediction with LSTM-based sequence to sequence model. Energy Build. 2021, 245, 111053.
- Sehovac, L.; Nesen, C.; Grolinger, K. Forecasting Building Energy Consumption with Deep Learning: A Sequence to Sequence Approach. In Proceedings of the IEEE International Congress on Internet of Things, Milan, Italy, 8–13 July 2019.
- Wang, G.; Zhang, F. A Sequence-to-Sequence Model With Attention and Monotonicity Loss for Tool Wear Monitoring and Prediction. IEEE Trans. Instrum. Meas. 2021, 70, 3525611.
- Yin, H.; Zhang, X.; Wang, F.; Zhang, Y.; Jin, J. Rainfall-Runoff Modeling Using LSTM-based Multi-State-Vector Sequence-to-Sequence Model. J. Hydrol. 2021, 598, 126378.
- Mootha, S.; Sridhar, S.; Seetharaman, R.; Gopalan, C. Stock Price Prediction using Bi-Directional LSTM based Sequence to Sequence Modeling and Multitask Learning. In Proceedings of the 2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), New York, NY, USA, 28–31 October 2020.
- Bauer, J.; Jannach, D. Improved Customer Lifetime Value Prediction with Sequence-To-Sequence Learning and Feature-Based Models. ACM Trans. Knowl. Discov. Data 2021, 15, 80.
- Li, X.; Tang, J.; Yin, C. Sequence-to-Sequence Learning for Prediction of Soil Temperature and Moisture. IEEE Geosci. Remote Sens. Lett. 2022, 19, 3005605.
- Zaytar, M.A.; Amrani, C.E. Sequence to Sequence Weather Forecasting with Long Short-Term Memory Recurrent Neural Networks. Int. J. Comput. Appl. 2016, 143, 7–11.
- Yin, H.; Guo, Z.; Zhang, X.; Chen, J.; Zhang, Y. Runoff predictions in ungauged basins using sequence-to-sequence models. J. Hydrol. 2021, 603, 126975.
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
- Tang, H.; Yin, Y.; Shen, H. A model for vessel trajectory prediction based on long short-term memory neural network. J. Mar. Eng. Technol. 2022, 21, 136–145.
- Gao, M.; Shi, G.; Li, S. Online Prediction of Ship Behavior with Automatic Identification System Sensor Data Using Bidirectional Long Short-Term Memory Recurrent Neural Network. Sensors 2018, 18, 4211.
- Wang, C.; Fu, Y. Ship Trajectory Prediction Based on Attention in Bidirectional Recurrent Neural Networks. In Proceedings of the 2020 5th International Conference on Information Science, Computer Technology and Transportation (ISCTT), Shenyang, China, 13–15 November 2020.
- Mehri, S.; Alesheikh, A.A.; Basiri, A. A Contextual Hybrid Model for Vessel Movement Prediction. IEEE Access 2021, 9, 45600–45613.
- Ding, M.; Su, W.; Liu, Y.; Zhang, J.; Wu, J. A Novel Approach on Vessel Trajectory Prediction Based on Variational LSTM. In Proceedings of the 2020 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA), Dalian, China, 27–29 June 2020.
- Wang, C.; Ren, H.; Li, H. Vessel trajectory prediction based on AIS data and bidirectional GRU. In Proceedings of the 2020 International Conference on Computer Vision, Image and Deep Learning (CVIDL), Chongqing, China, 10–12 July 2020.
- Capobianco, S.; Forti, N.; Millefiori, L.M.; Braca, P.; Willett, P. Recurrent Encoder-Decoder Networks for Vessel Trajectory Prediction with Uncertainty Estimation. IEEE Trans. Aerosp. Electron. Syst. 2022.
- Nguyen, D.D.; Chan, L.V.; Ali, M.I. Vessel Trajectory Prediction using Sequence-to-Sequence Models over Spatial Grid. In Proceedings of the 12th ACM International Conference, Hamilton, New Zealand, 25–29 June 2018.
- Forti, N.; Millefiori, L.M.; Braca, P.; Willett, P.K. Prediction of vessel trajectories from AIS data via sequence-to-sequence recurrent neural networks. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020.
- Sekhon, J.; Fleming, C. A Spatially and Temporally Attentive Joint Trajectory Prediction Framework for Modeling Vessel Intent. Learn. Dyn. Control 2020, 318–327.
- You, L.; Xiao, S.; Peng, Q.; Claramunt, C.; Zhang, J. ST-Seq2Seq: A Spatio-Temporal Feature-Optimized Seq2Seq Model for Short-Term Vessel Trajectory Prediction. IEEE Access 2020, 8, 218565–218574.
- Capobianco, S.; Millefiori, L.M.; Forti, N.; Braca, P.; Willett, P. Deep Learning Methods for Vessel Trajectory Prediction based on Recurrent Neural Networks. IEEE Trans. Aerosp. Electron. Syst. 2021, 57, 4329–4346.
- Zhang, S.; Wang, L.; Zhu, M.; Chen, S.; Zeng, Z. A Bi-directional LSTM Ship Trajectory Prediction Method based on Attention Mechanism. In Proceedings of the 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 12–14 March 2021.
- Wang, S.; He, Z. A prediction model of vessel trajectory based on generative adversarial network. J. Navig. 2021, 74, 1161–1171.
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014.
- Nguyen, D.; Fablet, R. TrAISformer—A generative transformer for AIS trajectory prediction. arXiv 2021, arXiv:2109.03958.
- Dyer, S.A.; Dyer, J.S. Cubic-spline interpolation. 1. IEEE Instrum. Meas. Mag. 2001, 4, 44–46.
- Shekhar, S.; Xiong, H. Root-Mean-Square Error. In Encyclopedia of GIS; Springer: Berlin/Heidelberg, Germany, 2008; p. 979. Available online: https://link.springer.com/referencework/10.1007/978-3-319-17885-1 (accessed on 19 February 2023).
- Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA, May 2015. Available online: http://arxiv.org/abs/1412.6980 (accessed on 19 February 2023).
- Box, G.E.P.; Jenkins, G.M. Time series analysis: Forecasting and control. J. Time 2010, 31, 93–135.
- Harvey, A.C. Forecasting, Structural Time Series Models and the Kalman Filter; Cambridge University Press: Cambridge, UK, 1990; pp. 100–167.
- Tson, J.C.R.; Parker, R. Vector Autoregressions: Forecasting and Reality. Econom. Rev. 1999, 84, 4.
- Deng, J.; Chen, X.; Jiang, R.; Song, X.; Tsang, I. ST-Norm: Spatial and Temporal Normalization for Multi-variate Time Series Forecasting. In Proceedings of the KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual, 14–18 August 2021.

**Figure 1.** An example of vessel trajectory prediction. The black ships denote the historical AIS trajectories. The grey ships denote the prediction results, while the blue ships denote the actual trajectories.

**Figure 2.** PESO is composed of a Semantic Location Vector (SLV), Parallel Encoders, and a Ship-Oriented Decoder. The SLV of a particular ship represents the spatial correlation of its trajectories. The Parallel Encoders are designed to capture more features by using two different encoders, which include the Location Encoder and the Sailing Status Encoder. The Ship-Oriented Decoder is designed to utilize the SLV to guide the decoding process.
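The data flow in the figure can be illustrated roughly as follows. This is not the paper's implementation: the real encoders and decoder are five-layer LSTMs, and every function below is a simplified stand-in with assumed shapes:

```python
def encode(seq, dim):
    # stand-in for an LSTM encoder: pool the feature vectors into one
    # fixed-size hidden state (the real model keeps recurrent state)
    h = [0.0] * dim
    for x in seq:
        for i, v in enumerate(x):
            h[i % dim] += v
    return [v / len(seq) for v in h]

def peso_forward(locations, statuses, slv, horizon):
    """locations: [(lat, lon)]; statuses: [(sog, cog, dis)]; slv: 8-dim list."""
    h_loc = encode(locations, 8)       # Location Encoder
    h_status = encode(statuses, 8)     # Sailing Status Encoder (parallel)
    context = h_loc + h_status + slv   # fuse both encodings with the SLV
    preds, last = [], locations[-1]
    for _ in range(horizon):           # Ship-Oriented Decoder, step by step
        # stand-in decoder step: nudge the last position using the context
        delta = sum(context) / len(context) * 1e-3
        last = (last[0] + delta, last[1] + delta)
        preds.append(last)
    return preds
```

The point of the sketch is the wiring: two encoders run in parallel over different feature groups, their states are fused with the SLV, and the decoder unrolls one predicted track point per step.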

**Figure 3.** Workflow for obtaining the Semantic Location Vector. First, we divide the sea area covered by our dataset into grid cells of 0.1° latitude × 0.1° longitude. Then, we use a sliding window to construct a training set for a CBOW model. After training, this model maps every grid cell to an 8-dimensional semantic vector. Finally, after collecting all the grid cells visited by a vessel, we average their semantic vectors to obtain the Semantic Location Vector (SLV).
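The workflow in the caption can be sketched in a few lines. The 0.1° grid and the 8-dimensional vectors come from the caption; `embedding` is a deterministic stub standing in for the trained CBOW table (the CBOW training itself is omitted):

```python
import math

CELL = 0.1   # grid resolution from the caption: 0.1° latitude x 0.1° longitude
DIM = 8      # dimensionality of each grid cell's semantic vector

def grid_id(lat, lon, cell=CELL):
    # map a position to the index of its grid cell
    return (math.floor(lat / cell), math.floor(lon / cell))

def embedding(gid, dim=DIM):
    # stub for the trained CBOW lookup: in the paper these vectors are learned
    # from grid-id "sentences" built with a sliding window over trajectories
    return [math.sin(hash((gid, k)) % 1000) for k in range(dim)]

def semantic_location_vector(track):
    """Average the semantic vectors of all grid cells a vessel has visited."""
    vecs = [embedding(grid_id(lat, lon)) for lat, lon in track]
    return [sum(component) / len(vecs) for component in zip(*vecs)]
```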

**Figure 4.** The details of training loss and testing loss during the training process. The X-axis represents the number of training epochs, and the Y-axis represents the RMSE loss value.

**Figure 5.** The visual comparison of different baselines and PESO. The difficulty of prediction increases gradually from (**a**) to (**c**). PESO outperforms the other models in all scenarios.

**Figure 6.** The visual comparison of different structures of PESO. The difficulty of prediction increases gradually from (**a**) to (**c**). As the figures show, PESO (with LSTM–LSTM) obtains the best prediction results.

**Figure 7.** The influence of the sailing status information. The yellow and green lines are the prediction results of PESO with and without sailing status information, respectively.

**Figure 8.** The influence of the COG information. The yellow and green lines are the prediction results of PESO with and without COG information, respectively.

**Figure 9.** The influence of the SOG information. The yellow and green lines are the prediction results of PESO with and without SOG information, respectively.

**Figure 10.** The influence of the distance information. The yellow and green lines are the prediction results of PESO with and without distance information, respectively.

**Figure 11.** The influence of the SLV. The yellow and green lines are the prediction results of PESO with and without the SLV, respectively.

**Table 1.** Exploration of the number of LSTM layers under the RMSE, MAE, ADE, and FDE metrics. Note that "1 layer" means both the encoder and the decoder of PESO use one LSTM layer.

Model | RMSE | MAE | ADE | FDE |
---|---|---|---|---|
1 layer | 0.000525 | 0.000369 | 0.000581 | 0.000865 |
2 layers | 0.000614 | 0.000452 | 0.000699 | 0.001008 |
3 layers | 0.000476 | 0.000325 | 0.000565 | 0.000774 |
4 layers | 0.000469 | 0.000316 | 0.000542 | 0.000695 |
5 layers | 0.000466 | 0.000327 | 0.000523 | 0.000681 |

**Table 2.** Comparison results with baselines under the metric of RMSE. Here, 10→5 denotes the RMSE of predicting 5 track points from 10 historical track points.

Model Name | 10→1 | 10→2 | 10→3 | 10→4 | 10→5 |
---|---|---|---|---|---|
LSTM | 0.000389 | 0.000539 | 0.000728 | 0.000929 | 0.001130 |
BiLSTM | 0.000499 | 0.000643 | 0.000823 | 0.001015 | 0.001210 |
GRU | 0.000395 | 0.000542 | 0.000730 | 0.000929 | 0.001130 |
BiGRU | 0.000434 | 0.000570 | 0.000750 | 0.000944 | 0.001141 |
LSTM-LSTM | 0.000380 | 0.000499 | 0.000658 | 0.000844 | 0.001054 |
GRU-GRU | 0.000326 | 0.000559 | 0.000848 | 0.001163 | 0.001494 |
BiGRU-GRU | 0.000730 | 0.000819 | 0.000938 | 0.001136 | 0.001447 |
BiLSTM-LSTM | 0.000571 | 0.000596 | 0.000662 | 0.000747 | 0.000864 |
PESO | 0.000333 | 0.000351 | 0.000378 | 0.000417 | 0.000466 |

**Table 3.** Comparison results with baselines under the metric of MAE. Here, 10→5 denotes the MAE of predicting 5 track points from 10 historical track points.

Model Name | 10→1 | 10→2 | 10→3 | 10→4 | 10→5 |
---|---|---|---|---|---|
LSTM | 0.000319 | 0.000415 | 0.000532 | 0.000656 | 0.000780 |
BiLSTM | 0.000380 | 0.000478 | 0.000593 | 0.000714 | 0.000836 |
GRU | 0.000324 | 0.000419 | 0.000532 | 0.000653 | 0.000774 |
BiGRU | 0.000341 | 0.000430 | 0.000541 | 0.000660 | 0.000781 |
LSTM-LSTM | 0.000299 | 0.000379 | 0.000483 | 0.000605 | 0.000740 |
GRU-GRU | 0.000257 | 0.000403 | 0.000581 | 0.000774 | 0.000978 |
BiGRU-GRU | 0.000579 | 0.000640 | 0.000718 | 0.000841 | 0.001021 |
BiLSTM-LSTM | 0.000458 | 0.000470 | 0.000510 | 0.000559 | 0.000623 |
PESO | 0.000259 | 0.000267 | 0.000283 | 0.000303 | 0.000327 |

**Table 4.** Comparison results with baselines under the metric of ADE. Here, 10→5 denotes the ADE of predicting 5 track points from 10 historical track points.

Model Name | 10→1 | 10→2 | 10→3 | 10→4 | 10→5 |
---|---|---|---|---|---|
LSTM | 0.000493 | 0.000651 | 0.000842 | 0.001043 | 0.001245 |
BiLSTM | 0.000611 | 0.000769 | 0.000956 | 0.001152 | 0.001350 |
GRU | 0.000495 | 0.000650 | 0.000839 | 0.001037 | 0.001238 |
BiGRU | 0.000528 | 0.000677 | 0.000860 | 0.001056 | 0.001254 |
LSTM-LSTM | 0.000462 | 0.000587 | 0.000748 | 0.000935 | 0.001139 |
GRU-GRU | 0.000399 | 0.000649 | 0.000953 | 0.001290 | 0.001649 |
BiGRU-GRU | 0.000914 | 0.000995 | 0.001096 | 0.001256 | 0.001499 |
BiLSTM-LSTM | 0.000740 | 0.000753 | 0.000818 | 0.000900 | 0.001007 |
PESO | 0.000412 | 0.000429 | 0.000453 | 0.000484 | 0.000523 |

**Table 5.** Comparison results with time-series forecasting models under the four metrics RMSE, MAE, ADE, and FDE. The experiments were conducted in the most typical scenario in this paper: predicting the following 5 track points from the previous 10.

Model Name | RMSE | MAE | ADE | FDE |
---|---|---|---|---|
PESO | 0.000466 | 0.000327 | 0.000523 | 0.000681 |
ARIMA | 0.001977 | 0.001675 | 0.002708 | 0.003318 |
Kalman Filter | 0.000783 | 0.000643 | 0.000989 | 0.001664 |
VAR | 0.004251 | 0.002924 | 0.004701 | 0.010152 |
ST-Norm | 0.000992 | 0.000720 | 0.001133 | 0.001498 |

Model Name | Enc | Dec | RMSE | MAE | ADE | FDE |
---|---|---|---|---|---|---|
PESO | LSTM | LSTM | 0.000466 | 0.000327 | 0.000523 | 0.000681 |
PESO-GRU-GRU | GRU | GRU | 0.000552 | 0.000399 | 0.000646 | 0.000817 |
PESO-BiGRU-GRU | BiGRU | GRU | 0.000531 | 0.000387 | 0.000617 | 0.000766 |
PESO-BiLSTM-LSTM | BiLSTM | LSTM | 0.000511 | 0.000380 | 0.000562 | 0.000704 |

Model Name | First | Second | Third | Fourth | Fifth |
---|---|---|---|---|---|
LSTM | 0.000389 | 0.000649 | 0.001000 | 0.001359 | 0.001710 |
BiLSTM | 0.000499 | 0.000757 | 0.001092 | 0.001441 | 0.001785 |
GRU | 0.000395 | 0.000651 | 0.001000 | 0.001358 | 0.001708 |
BiGRU | 0.000434 | 0.000672 | 0.001011 | 0.001366 | 0.001712 |
LSTM-LSTM | 0.000380 | 0.000592 | 0.000893 | 0.001245 | 0.001639 |
GRU-GRU | 0.000326 | 0.000719 | 0.001235 | 0.001801 | 0.002396 |
BiGRU-GRU | 0.000730 | 0.000897 | 0.001133 | 0.001574 | 0.002280 |
BiLSTM-LSTM | 0.000571 | 0.000618 | 0.000772 | 0.000952 | 0.001210 |
PESO | 0.000333 | 0.000367 | 0.000425 | 0.000511 | 0.000620 |

Model Name | First | Second | Third | Fourth | Fifth |
---|---|---|---|---|---|
LSTM | 0.000319 | 0.000511 | 0.000767 | 0.001026 | 0.001276 |
BiLSTM | 0.000380 | 0.000577 | 0.000823 | 0.001075 | 0.001322 |
GRU | 0.000324 | 0.000513 | 0.000760 | 0.001013 | 0.001261 |
BiGRU | 0.000341 | 0.000519 | 0.000763 | 0.001017 | 0.001266 |
LSTM-LSTM | 0.000299 | 0.000458 | 0.000693 | 0.000971 | 0.001279 |
GRU-GRU | 0.000257 | 0.000549 | 0.000937 | 0.001354 | 0.001795 |
BiGRU-GRU | 0.000579 | 0.000701 | 0.000874 | 0.001211 | 0.001738 |
BiLSTM-LSTM | 0.000458 | 0.000482 | 0.000589 | 0.000709 | 0.000877 |
PESO | 0.000259 | 0.000278 | 0.000314 | 0.000362 | 0.000426 |

Model Name | First | Second | Third | Fourth | Fifth |
---|---|---|---|---|---|
LSTM | 0.000493 | 0.000809 | 0.001225 | 0.001646 | 0.002052 |
BiLSTM | 0.000611 | 0.000927 | 0.001330 | 0.001740 | 0.002140 |
GRU | 0.000495 | 0.000805 | 0.001216 | 0.001634 | 0.002040 |
BiGRU | 0.000528 | 0.000825 | 0.001228 | 0.001643 | 0.002046 |
LSTM-LSTM | 0.000462 | 0.000711 | 0.001071 | 0.001494 | 0.001955 |
GRU-GRU | 0.000399 | 0.000899 | 0.001561 | 0.002299 | 0.003087 |
BiGRU-GRU | 0.000914 | 0.001077 | 0.001296 | 0.001737 | 0.002469 |
BiLSTM-LSTM | 0.000740 | 0.000766 | 0.000947 | 0.001147 | 0.001436 |
PESO | 0.000412 | 0.000446 | 0.000501 | 0.000578 | 0.000681 |

Ablation | RMSE | MAE | ADE | FDE |
---|---|---|---|---|
PESO | 0.000466 | 0.000327 | 0.000523 | 0.000681 |
*w/o COG & SOG & DIS* | 0.000801 | 0.000576 | 0.000921 | 0.001344 |
*w/o COG* | 0.000554 | 0.000385 | 0.000621 | 0.000841 |
*w/o SOG* | 0.000486 | 0.000342 | 0.000540 | 0.000728 |
*w/o DIS* | 0.000474 | 0.000343 | 0.000542 | 0.000711 |
*w/o SLV* | 0.000492 | 0.000355 | 0.000563 | 0.000719 |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Zhang, Y.; Han, Z.; Zhou, X.; Zhang, L.; Wang, L.; Zhen, E.; Wang, S.; Zhao, Z.; Guo, Z. PESO: A Seq2Seq-Based Vessel Trajectory Prediction Method with Parallel Encoders and Ship-Oriented Decoder. *Appl. Sci.* **2023**, *13*, 4307.
https://doi.org/10.3390/app13074307
