Article

Large Area High-Resolution 3D Mapping of the Von Kármán Crater: Landing Site for the Chang’E-4 Lander and Yutu-2 Rover

Yu Tao, Jan-Peter Muller, Susan J. Conway, Siting Xiong, Sebastian H. G. Walter and Bin Liu
1 Mullard Space Science Laboratory, Department of Space and Climate Physics, University College London, Holmbury St Mary, Surrey RH5 6NT, UK
2 Planetary Sciences and Remote Sensing Group, Department of Earth Sciences, Freie Universität Berlin, Malteserstr. 74-100, 12249 Berlin, Germany
3 Laboratoire de Planétologie et Géodynamique, CNRS, UMR 6112, Université de Nantes, 44300 Nantes, France
4 Guangdong Laboratory of Artificial Intelligence and Digital Economy, Shenzhen 518107, China
5 State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(10), 2643; https://doi.org/10.3390/rs15102643
Submission received: 6 April 2023 / Revised: 14 May 2023 / Accepted: 17 May 2023 / Published: 18 May 2023

Abstract:
We demonstrate the creation of a large-area, high-resolution (260 × 209 km at 1 m/pixel) DTM mosaic from Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images over the Chang’E-4 landing site at Von Kármán crater, using an in-house deep learning-based 3D modelling system developed at University College London, called MADNet, trained with lunar orthorectified images and digital terrain models (DTMs). The resultant 1 m DTM mosaic is co-aligned with the Chang’E-2 (CE-2) DTM and the blended Lunar Orbiter Laser Altimeter (LOLA) and SELenological and Engineering Explorer (SELENE) DTM product (SLDEM), providing high spatial and vertical congruence. In this paper, technical details are briefly discussed, along with visual and quantitative assessments of the resultant DTM mosaic product. The LROC NAC MADNet DTM mosaic was compared with three independent DTM datasets; the mean differences and standard deviations are as follows: −0.019 ± 1.09 m against the PDS photogrammetric DTM at 5 m grid-spacing, −0.048 ± 1.791 m against the CE-2 DTM at 20 m, and 0.577 ± 94.940 m against the SLDEM at 59 m. The resultant LROC NAC MADNet DTM mosaic, alongside a blended LROC NAC and CE-2 MADNet DTM mosaic and a separate LROC NAC orthorectified image mosaic, is made publicly available via the ESA planetary science archive’s guest storage facility.

1. Introduction

Three-dimensional (3D) mapping is not only essential for performing key science investigations of the lunar surface, subsurface and interior but also crucial for planning and supporting lunar robotic and human exploration missions. Over the last two decades, large-area 3D mapping of the lunar surface has been pursued intensively through laser altimetry and stereo imaging via a variety of lunar orbital missions from space agencies around the world. These include the Japanese Selenological and Engineering Explorer (SELENE; Kaguya) mission (launched in 2007) [1], the Chinese Chang’E-1 (CE-1; launched in 2007) and Chang’E-2 (CE-2; launched in 2010) missions [2,3,4], the Indian Chandrayaan-1 (launched in 2008) [5] and Chandrayaan-2 (launched in 2019) [6] missions, the U.S. Lunar Reconnaissance Orbiter (LRO) mission (launched in 2009) [7], and the Korean KPLO (Korea Pathfinder Lunar Orbiter) Danuri mission (launched in 2022) [8].
Following the successful orbital insertion of these lunar orbiters, several global-scale digital terrain models (DTMs; sometimes referred to as DEMs, digital elevation models, in different publications) have been produced based either on laser altimetry, stereo photogrammetry, or a mixture of the two. The most widely used global lunar DTM products are the 118 m/pixel LRO Lunar Orbiter Laser Altimeter (LOLA) DTM [9,10,11], the merged 59 m/pixel SELENE stereo Terrain Camera (TC) and LRO LOLA DTM (SLDEM or SLDEM2015) [12], and the 20 m/pixel CE-2 photogrammetric DTM (CE2TMap2015; hereafter referred to as the CE-2 DTM) [13,14,15]. These DTM products provide important global topographic information about the Moon but are mainly used for large-scale studies or as a global geodetic baseline. For detailed studies of a particular site or small-scale lunar surface features, e.g., [16,17], higher-resolution DTMs are generally required.
Currently, the highest possible resolution orbital DTMs of the lunar surface are generated using the 0.5–2 m/pixel Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) stereo images [18]. However, due to the limited coverage of suitable stereo images (stereo image coverage is ~6% of the lunar surface and stereo DTM coverage is ~0.5%; M. Henriksen, private communication, 2023) [19] and the high computational cost of traditional photogrammetric (e.g., [19,20]), photoclinometric (e.g., [21,22,23]) and/or multi-image photometric (e.g., [24]) processing, LROC NAC based DTM products have not been produced for very large areas (e.g., larger than 10,000 km²). In this work, we present a large-area (54,340 km²; 260 × 209 km), high-resolution (1 m/pixel) LROC NAC DTM mosaic produced using a previously developed single-input-image-based 3D estimation network called MADNet (Multi-scale generative Adversarial u-net with Dense convolutional and up-projection blocks) [25], covering the Chinese Chang’E-4 (CE-4) Yutu-2 [26,27] rover’s landing site at the Von Kármán crater [28,29]. It should be noted that the MADNet method is a monocular image-to-height estimation network. Other deep learning methods for 3D reconstruction based on single-image shape-from-shading networks (e.g., [30]) and/or multi-image photometric networks (e.g., [31,32]), which could potentially produce similar or comparable DTM results, are not discussed in this paper due to the lack of general/open-source implementations of such methods.
In this work, we train MADNet with 5 m/pixel LROC NAC DTMs and 5 m/pixel orthorectified images (ORIs) that are publicly available from the Planetary Data System (PDS). The pre-trained MADNet model is then used to process a total of 370 LROC NAC input images (consisting of 252 images at 0.5–1 m/pixel and 118 images at 1–1.5 m/pixel) that are pre-processed, co-registered with the 7 m/pixel CE-2 ORI, and orthorectified using the 59 m/pixel SLDEM as the base map. The resultant single-strip LROC NAC MADNet DTMs are then mosaiced using the Ames Stereo Pipeline (ASP) [33] to create a 1 m/pixel DTM mosaic of the Von Kármán crater (186 km diameter, centred at 176.2°E, 44.5°S). The 20 m/pixel CE-2 DTM is refined using MADNet to a higher resolution of 14 m/pixel (twice the pixel resolution of the corresponding 7 m ORI mosaic) and is used to fill in the gaps of the LROC NAC MADNet DTM mosaic. The final LROC NAC MADNet DTM mosaics (with and without CE-2 for gap filling), alongside a separate LROC NAC ORI 50 cm mosaic created at JPL (Jet Propulsion Laboratory), are all made publicly available through the ESA Guest Storage Facility (GSF) [34] at https://doi.org/10.57780/esa-fb921t3 (accessed on 1 March 2023).

2. Materials and Methods

2.1. Reference DTMs

The baseline referencing data of this work is the 59 m/pixel SLDEM that is available from the USGS (the United States Geological Survey) site: https://astrogeology.usgs.gov/search/map/Moon/LRO/LOLA/Lunar_LRO_LOLAKaguya_DEMmerge_60N60S_512ppd (accessed on 17 December 2022). The SLDEM is based on the GRAIL (Gravity Recovery and Interior Laboratory mission) controlled LOLA DTM [35], which has a horizontal accuracy of ~10 m and a vertical accuracy of ~0.5 m [36], but with improved spatial coverage using photogrammetric DTMs that are independently derived from the SELENE TC stereo images [37]. The SLDEM has a slightly lower vertical accuracy of about 3–4 m but an improved spatial resolution of 59 m/pixel compared to the 118 m/pixel LOLA DTM [12]. The SLDEM provides the most accurate geodetic framework of the Moon to date. However, the resolution gap between the SLDEM and the input LROC NAC images and DTMs is too large to achieve sensible co-registration and co-alignment directly. In this work, the CE-2 global photogrammetric DTM, i.e., the CE-2 DTM, produced by the National Astronomical Observatories, Chinese Academy of Sciences (NAOC), is used as an intermediate referencing dataset to bridge the resolution gap between the SLDEM and LROC NAC.
The 20 m/pixel CE-2 DTM is publicly available from the NAOC site (https://moon.bao.ac.cn/ce5web/moonGisMap.search (accessed on 17 December 2022)). The CE-2 DTM was produced using 384 selected single-strip 7 m/pixel CE-2 stereo images (with forward and backward viewing angles of 7.98° and −17.2°, respectively) [14,15]. The CE-2 DTM is currently the highest-resolution global DTM of the lunar surface. However, it is reported in [15] that there are large geometric inconsistencies (an average difference of 183.1 m with a standard deviation of 101.2 m spatially) between the CE-2 DTM and the SLDEM. In this work, we co-aligned a portion of the CE-2 DTM (tile ID from the ORI: CE2_GRAS_DOM_07m_K136_45S175E_A) with the SLDEM using our in-house 3D co-alignment pipeline that is described in [38,39]. Subsequently, the co-aligned CE-2 DTM and corresponding ORI are used as the intermediate referencing data for the production of the LROC NAC DTMs and ORIs. Figure 1 shows the CE-2 DTM tile that covers the von Kármán crater area before and after co-alignment with the SLDEM. The mean height difference between the raw CE-2 DTM and the SLDEM over this area (see Figure 1) is 4428.52 m, which is reduced to 0.194 m after co-alignment with the SLDEM.
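To illustrate the kind of vertical comparison reported above, the following is a minimal sketch of how the mean height difference and its standard deviation between two co-registered DTMs might be computed with GDAL and NumPy. It is not the in-house B-spline co-alignment pipeline of [38,39]; the file names are placeholders, and it assumes both rasters have already been resampled onto the same grid.

```python
# Sketch: mean/standard deviation of the height difference between two
# co-registered DTMs (e.g., CE-2 DTM vs. SLDEM), assuming a common grid.
# File names are placeholders; this is not the in-house co-alignment pipeline.
import numpy as np
from osgeo import gdal

def read_dtm(path):
    """Read band 1 of a DTM as a float array, masking nodata values."""
    ds = gdal.Open(path)
    band = ds.GetRasterBand(1)
    arr = band.ReadAsArray().astype(np.float64)
    nodata = band.GetNoDataValue()
    if nodata is not None:
        arr[arr == nodata] = np.nan
    return arr

ce2 = read_dtm("ce2_dtm_resampled_59m.tif")   # placeholder file name
sldem = read_dtm("sldem_subset_59m.tif")      # placeholder file name

diff = ce2 - sldem
valid = ~np.isnan(diff)
print("mean difference [m]:", np.mean(diff[valid]))
print("standard deviation [m]:", np.std(diff[valid]))

# A crude "co-alignment" limited to removing the constant vertical bias;
# the actual pipeline of [38,39] applies a spatially varying B-spline correction.
ce2_debiased = ce2 - np.mean(diff[valid])
```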

2.2. Input LROC NAC Images

The input data of this work are the LROC NAC single-strip images. The LROC NAC instrument [18] captures repeat-pass line-scanning panchromatic images at 0.5–2 m/pixel resolution over a swath width of 2.5–10 km (~10,000 pixels wide) and a swath length of ~25 km (~52,000 pixels long). Each of the two LROC NAC cameras has a 700 mm focal length telescope and a 5064-pixel CCD (charge-coupled device) line array providing a cross-track field of view of ~2.85°. A cross-track overlap of ~135 pixels between the two LROC NAC CCD arrays doubles the observation swath width. The raw LROC NAC records have a bit depth of 12 bits and are compressed to 8-bit images in a scheme designed to preserve the signal-to-noise ratio [18]. The LROC NAC EDR (Engineering Data Record) images are publicly available from the LROC PDS archive (https://pds.lroc.asu.edu/data/LRO-L-LROC-2-EDR-V1.0/ (accessed on 17 December 2022)).
LROC NAC was not designed as a stereo imager. However, with suitable repeat-pass observations (subject to stereo intersection angles and solar illumination conditions [40]), stereo-derived DTMs are possible for limited areas. Even though it is reported in [19] that, as of December 2015, the LROC NAC had collected over 2400 sets of stereo observations covering ~2.9% of the lunar surface, many of these do not meet the criteria for the production of high-quality photogrammetric DTMs due to large differences in solar incidence angles. The LROC NAC stereo observation shapefile can be extracted from https://wms.lroc.asu.edu/lroc/view_rdr/SHAPEFILE_STEREO_OBSERVATIONS_EQ (accessed on 17 December 2022) and the LROC NAC EDR coverage shapefile can be found at https://ode.rsl.wustl.edu/moon/datafile/derived_products/coverageshapefiles/moon/lro/lroc/Edrnac/ (accessed on 17 December 2022). Figure 2 shows the LROC NAC non-repeat single image coverage (100%) and all available stereo coverage (3.8%) as of 17 December 2022 over the target Von Kármán crater area.
A pre-selection of input LROC NAC images is achieved using a set of image metadata screening criteria, including a bounding box that is within the extent of the reference CE-2 DTM tile (top-left: 169.982°E, 41.987°S; bottom-right: 179.955°E, 49.004°S), a solar incidence angle threshold of 70° to avoid heavy shadowing effects, and a suitable overlap range of 250–750 pixels between individual images. Initially, 507 LROC NAC single-strip images were found. After manual screening to remove shadowed and noisy images, the number of input images was reduced to 399, of which 370 were successfully co-registered with the reference CE-2 ORI. The LROC NAC images that failed to co-register with the CE-2 ORI did so mainly because of shading and shadowing effects, e.g., completely different shading orientations and lengths between the target and reference images. The final down-selected 370 input LROC NAC single-strip images consist of 118 images at 1 m/pixel resolution (upsampled from native image resolutions of 1–1.5 m/pixel) and 252 images at 0.5 m/pixel resolution (upsampled from native image resolutions of 0.5–1 m/pixel; see Figure 3 for the distribution and coverage of the 370 screened and co-registered input LROC NAC images).
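The metadata-based part of this pre-selection can be reproduced, in spirit, with a simple filter. The sketch below assumes a hypothetical CSV export of the LROC NAC image index with columns for corner coordinates and the solar incidence angle; the file name, column names, and thresholds are illustrative only, and the overlap check between adjacent strips as well as the manual shadow/noise screening are omitted.

```python
# Sketch: metadata-based pre-selection of LROC NAC images.
# The CSV file and its column names are hypothetical placeholders for an
# export of the LROC NAC EDR index; overlap screening between adjacent
# strips and the manual shadow/noise screening are not reproduced here.
import pandas as pd

# Reference CE-2 DTM tile extent used in this work (degrees).
WEST, EAST = 169.982, 179.955
NORTH, SOUTH = -41.987, -49.004
MAX_INCIDENCE = 70.0  # degrees, to avoid heavily shadowed images

meta = pd.read_csv("lroc_nac_edr_index.csv")  # hypothetical metadata export

inside_extent = (
    (meta["min_lon"] >= WEST) & (meta["max_lon"] <= EAST) &
    (meta["min_lat"] >= SOUTH) & (meta["max_lat"] <= NORTH)
)
well_lit = meta["incidence_angle"] <= MAX_INCIDENCE

candidates = meta[inside_extent & well_lit]
print(f"{len(candidates)} candidate images before manual screening")
```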

2.3. Overview of the MADNet Network

The processing core of this work is the MADNet deep learning-based single-image DTM estimation system described in [25]. MADNet is based on the relativistic Generative Adversarial Network (GAN) framework [41,42]. For the MADNet generator network, a fully convolutional U-net [43] architecture is employed, consisting of four stacks of dense convolution blocks [44] as the encoder and five stacks of up-projection blocks [45] as the decoder. The network architecture of the MADNet single-input-image-based 3D estimation network that is used in this work is shown in Figure 4. For training of the model, we use the same total loss function that is proposed in [25,46], which is a weighted sum of the gradient loss, the Berhu loss [47], and the adversarial loss under the GAN framework. In the split training and testing experiment, the same weights (0.5, 5 × 10−2, 5 × 10−3) as described in [46] for the three loss terms worked well for a 1000-pair subset of the LROC NAC PDS DTM and ORI samples (see Section 2.4 for details).
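For concreteness, the following PyTorch-style sketch shows one way such a weighted total loss could be assembled from a gradient term, a Berhu term [47], and an adversarial term. MADNet itself is not distributed with this paper, so the Berhu cut-off rule, the use of a standard (non-relativistic) adversarial term, and the pairing of the weights (0.5, 5 × 10−2, 5 × 10−3) with the three terms are assumptions rather than a reproduction of the original implementation.

```python
# Sketch of a weighted total loss (gradient + Berhu + adversarial) in PyTorch.
# The Berhu cut-off, the non-relativistic adversarial term, and the pairing of
# the weights with the terms are assumptions, not the original MADNet code.
import torch
import torch.nn.functional as F

def berhu_loss(pred, target):
    """Reverse Huber (Berhu) loss: L1 below a cut-off c, quadratic above it."""
    abs_err = (pred - target).abs()
    c = 0.2 * abs_err.max().detach()          # common heuristic cut-off
    quadratic = (abs_err ** 2 + c ** 2) / (2.0 * c + 1e-12)
    return torch.where(abs_err <= c, abs_err, quadratic).mean()

def gradient_loss(pred, target):
    """L1 difference of horizontal and vertical height gradients."""
    dx_p = pred[..., :, 1:] - pred[..., :, :-1]
    dx_t = target[..., :, 1:] - target[..., :, :-1]
    dy_p = pred[..., 1:, :] - pred[..., :-1, :]
    dy_t = target[..., 1:, :] - target[..., :-1, :]
    return F.l1_loss(dx_p, dx_t) + F.l1_loss(dy_p, dy_t)

def total_loss(pred, target, disc_fake_logits,
               w_grad=0.5, w_berhu=5e-2, w_adv=5e-3):
    """Weighted sum of the three loss terms for the generator."""
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return (w_grad * gradient_loss(pred, target)
            + w_berhu * berhu_loss(pred, target)
            + w_adv * adv)
```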

2.4. Network Training and Testing

The training dataset for the MADNet Moon model consists of 22,084 ORI and DTM pairs (512 × 512 pixel subsets at 5 m/pixel), which were formed from 392 pairs of downsampled LROC NAC PDS ORIs and DTMs. We follow the same methods that are described in [25] to form the training dataset. These include a downsampling process (from 2–3 m/pixel to 5 m/pixel for DTMs and from 0.5–1 m/pixel to 5 m/pixel for ORIs) in order to average out high-frequency photogrammetric artefacts, a structural similarity index measurement (SSIM) [48] assisted manual screening process to remove low-quality ORI and DTM pairs, and data augmentation using horizontal and vertical flipping. The raw LROC NAC PDS ORIs and DTMs were downloaded from https://pds.lroc.asu.edu/data/LRO-L-LROC-5-RDR-V1.0/LROLRC_2001/DATA/SDP/NAC_DTM/ (accessed on 17 December 2022). The MADNet network was trained with all available LROC NAC PDS ORIs, which cover a wide range of solar incidence angles (from 58.81° to 89.09°) and azimuth angles (from 84.18° to 284.15°). The training coverage of different azimuth angles is further augmented with horizontal and vertical flipping. The wide coverage of different combinations of solar incidence and azimuth angles in the training images is essential to the robustness of the trained model when it is tested on images with different solar altitudes and azimuth angles. During the mapping process, the input LROC NAC images have solar incidence angles between 45.11° and 74.95° and azimuth angles between 4.49° and 357.95°.
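The dataset preparation described above (tiling into 512 × 512 patches, SSIM-assisted screening, and flip augmentation) can be sketched as follows. The SSIM threshold, the stride, and the direct ORI-vs-DTM comparison are illustrative assumptions; scikit-image's structural_similarity is used as a stand-in for whatever implementation was actually used, and in practice a shaded-relief rendering of the DTM would be a better comparator than the raw heights.

```python
# Sketch: tiling a co-registered ORI/DTM pair into 512x512 training patches
# with horizontal/vertical flip augmentation and a simple SSIM screen.
# The SSIM threshold, stride, and direct ORI-vs-DTM comparison are
# illustrative assumptions, not the screening used in the actual work.
import numpy as np
from skimage.metrics import structural_similarity as ssim

TILE, STRIDE = 512, 512

def normalise(a):
    """Rescale an array to [0, 1] for SSIM comparison."""
    a = a.astype(np.float64)
    return (a - a.min()) / (a.max() - a.min() + 1e-12)

def make_patches(ori, dtm, ssim_threshold=0.3):
    """Yield (image, height) patch pairs, with flips, from one ORI/DTM pair."""
    h, w = ori.shape
    for r in range(0, h - TILE + 1, STRIDE):
        for c in range(0, w - TILE + 1, STRIDE):
            img = ori[r:r + TILE, c:c + TILE]
            hgt = dtm[r:r + TILE, c:c + TILE]
            # Crude quality screen comparing the normalised ORI and heights;
            # a stand-in for the SSIM-assisted manual screening in the text.
            score = ssim(normalise(img), normalise(hgt), data_range=1.0)
            if score < ssim_threshold:
                continue
            for p_img, p_hgt in [(img, hgt),
                                 (np.fliplr(img), np.fliplr(hgt)),
                                 (np.flipud(img), np.flipud(hgt))]:
                yield p_img, p_hgt
```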
At the initial training and tuning stage, we left out 1000 training pairs to form the test dataset. For testing purposes, root mean squared errors (RMSEs) and mean SSIMs [48] are used as evaluation metrics and are periodically monitored throughout the initial training process. SSIM is the locally computed structural similarity index metric derived using patterns of pixel intensities among neighbouring pixels with normalised brightness and contrast [48]. RMSE measures the pixel-wise differences between the inference results and the ground-truth DTMs, while SSIM complementarily measures the differences in structural features between the inference results and the ground-truth DTMs. Both RMSE and mean SSIM are widely used as loss functions and evaluation metrics in monocular depth/height estimation studies (e.g., [49,50,51]). Figure 5 shows four randomly selected examples from the test results in comparison to the input images and ground-truth height maps from the test dataset. The corresponding RMSE and mean SSIM measurements of the exemplars are shown in Figure 5. The overall averaged RMSE and mean SSIM for the 1000 test pairs are 0.987 m and 0.944, respectively.
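A minimal sketch of the two evaluation metrics is given below, again using scikit-image for SSIM; the inferred and ground-truth heights are assumed to be NumPy arrays on the same grid and in the same units (metres).

```python
# Sketch: RMSE and mean SSIM between inferred and ground-truth DTM patches.
# Inputs are assumed to be NumPy arrays on the same grid, in metres.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def rmse(pred, truth):
    """Root mean squared height error in the units of the inputs (metres)."""
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def mean_ssim(pred, truth):
    """Mean structural similarity over the patch; SSIM normalises brightness/contrast."""
    data_range = float(max(pred.max(), truth.max()) - min(pred.min(), truth.min()))
    return float(ssim(pred, truth, data_range=data_range))

# Example over a test set of (pred, truth) pairs:
# scores = [(rmse(p, t), mean_ssim(p, t)) for p, t in test_pairs]
# print(np.mean([s[0] for s in scores]), np.mean([s[1] for s in scores]))
```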

2.5. Overall Processing Chain

The overall processing chain consists of four main steps, including (1) the reference data processing; (2) input data pre-processing; (3) LROC NAC to CE-2 image co-registration; and (4) LROC NAC MADNet processing and DTM mosaicing. A flow diagram of these steps is shown in Figure 6.
In step (1), the higher-resolution referencing data, i.e., the CE-2 ORI and DTM, are reprojected to the same coordinate system as the SLDEM (Equidistant Cylindrical). The CE-2 ORI shows good co-alignment with the hillshaded SLDEM (less than 1 pixel) from visual inspection. However, there is a large vertical difference between the CE-2 DTM and the SLDEM of about 2–5 km (mean difference 4428.52 m; standard deviation 1379.20 m) for the selected area of interest (refer to Figure 1), which is subsequently corrected (mean difference 1.11 m, standard deviation 104.42 m) using our in-house 3D co-alignment pipeline that is described in [38,39]. The co-aligned 20 m/pixel CE-2 DTM is then refined to 14 m/pixel using the same MADNet Moon model with the 7 m CE-2 ORI (refer to Section 2.4).
In step (2), a series of USGS-ISIS (Integrated Software for Imagers and Spectrometers) based pre-processing functions are applied to the raw input LROC NAC PDS images. These include data format conversion (lronac2isis), radiometric calibration (lronaccal), echo effect removal (lronacecho), ancillary information initialisation (spiceinit), and map projection and orthorectification with respect to the SLDEM (cam2map). After pre-processing, the data cubes (in USGS-ISIS format) are converted to GeoTiff images using GDAL (Geospatial Data Abstraction Library; refer to https://github.com/OSGeo/gdal/releases/tag/v3.6.2 (accessed on 17 December 2022)).
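The ISIS pre-processing sequence listed above can be scripted along the following lines. This is a simplified sketch: the file names and the map template passed to cam2map are placeholders, and the exact parameter sets should be checked against the USGS-ISIS documentation.

```python
# Sketch: driving the USGS-ISIS pre-processing of one LROC NAC EDR image and
# converting the result to GeoTIFF with GDAL. File names and the cam2map map
# template are placeholders; consult the ISIS documentation for full options.
import subprocess

def run(cmd):
    """Run one ISIS/GDAL command and fail loudly if it returns an error."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

edr = "M1303619844LE.IMG"            # placeholder EDR product
cube = "nac.cub"

run(["lronac2isis", f"from={edr}", f"to={cube}"])           # ingest PDS EDR
run(["lronaccal", f"from={cube}", "to=nac_cal.cub"])        # radiometric calibration
run(["lronacecho", "from=nac_cal.cub", "to=nac_echo.cub"])  # remove echo effects
run(["spiceinit", "from=nac_echo.cub"])                     # attach SPICE geometry
run(["cam2map", "from=nac_echo.cub", "to=nac_map.cub",
     "map=equirectangular_moon.map", "pixres=map"])         # project/orthorectify
run(["gdal_translate", "-of", "GTiff", "nac_map.cub", "nac_map.tif"])
```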
In step (3), the pre-processed LROC NAC images are automatically co-registered with the CE-2 image using the ENVI® Modeler software (https://www.l3harrisgeospatial.com/Software-Technology/ENVI (accessed on 17 December 2022)) and are then manually inspected for co-registration quality. In this process, 370 out of 399 LROC NAC images were successfully co-registered with respect to the 7 m/pixel CE-2 ORI. The LROC NAC images that failed to co-register with the CE-2 ORI did so mainly because of large shading and shadowing differences. Alternative repeat-pass LROC NAC images are available for some of the missing areas but were not used in this work.
In step (4), the MADNet inference process is applied to the 370 co-registered LROC NAC images. The MADNet processing includes image pyramiding and tiling, relative height inference, absolute height rescaling using the referencing CE-2 MADNet DTM, multi-scale 3D co-alignment of the height map tiles with respect to the CE-2 MADNet DTM, and mosaicing of heightmap tiles. It should be noted that, due to each adjacent height map tile being normalised with respect to the same referencing DTM before blending, the height inconsistency on the edge of adjacent tiles is minor (up to 10 cm when processing images with similar spatial resolutions [39]). This minor height variation is then smoothed out (averaged) when blending adjacent height map tiles across overlaps, resulting in a seamlessly mosaiced height map for each input LROC NAC image. A final DTM mosaicing process is then performed on the resultant LROC NAC MADNet DTMs using the ASP’s DTM mosaicing pipeline (dem_mosaic). The LROC NAC ORI mosaic was processed separately at NASA JPL using ArcGIS® (see Section 3.3 for data access information), which involves brightness/contrast adjustment and blending.
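To illustrate the rescaling step, below is a much-simplified sketch of turning a relative (0–1) MADNet height tile into absolute heights using the reference DTM over the tile footprint, followed by a call to the ASP dem_mosaic tool. Mapping each tile onto the reference DTM's local height range is a simplification of the multi-scale 3D co-alignment actually used, and the file names are placeholders.

```python
# Sketch: rescale a relative (0..1) MADNet height tile to absolute heights
# using the reference DTM over the same footprint, then mosaic with ASP.
# The per-tile linear rescaling is a simplification of the multi-scale 3D
# co-alignment used in the actual pipeline; file names are placeholders.
import subprocess
import numpy as np

def rescale_tile(rel_tile, ref_dtm_tile):
    """Map relative heights onto the reference DTM's local height range."""
    lo, hi = np.nanpercentile(ref_dtm_tile, [1, 99])  # robust local range
    return lo + rel_tile * (hi - lo)

# rel = MADNet inference output for one 512x512 tile, values in [0, 1]
# ref = the CE-2 MADNet DTM resampled over the same tile footprint
# abs_tile = rescale_tile(rel, ref)

# Final DTM mosaicing of all single-strip MADNet DTMs with the Ames Stereo
# Pipeline; dem_mosaic blends overlapping DTMs into a single output.
subprocess.run(["dem_mosaic", "strip1_dtm.tif", "strip2_dtm.tif",
                "-o", "lroc_nac_madnet_mosaic"], check=True)
```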

3. Results

3.1. Data Products Overview

This work contributes the first large-area, high-resolution 3D model of the landing site of the Chang’E-4 lander and Yutu-2 rover at the von Kármán crater. The von Kármán crater is a large lunar impact crater located in the southern hemisphere on the far side of the Moon. The crater is about 180 km in diameter and lies within an immense impact basin known as the South Pole–Aitken basin, roughly 2500 km in diameter and 13 km deep.
The final outputs of the described 3D mapping work include two 1 m/pixel LROC NAC DTM mosaics, with and without the CE-2 MADNet DTM for gap filling, 370 single-strip LROC NAC ORIs, and an LROC NAC ORI mosaic that was separately produced at NASA JPL. The area covered is about 260 × 209 km over the von Kármán crater (see Figure 7 for the DTM coverage and Figure 3 for the ORI coverage). A 14 m/pixel CE-2 MADNet DTM covering a larger area of 302 × 213 km over the same region is also produced as the reference DTM for the LROC NAC MADNet processing. It should be noted that all final outputs are 3D co-aligned with the reference SLDEM using the B-spline fitting-based 3D co-alignment method described in [38,39]. The standard Equirectangular projection (with a central longitude of 180°) and the standard Moon reference radius of 1,737,400.00 m are used for all intermediate and final data products described in this work.
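For reference, this projection can be expressed as a PROJ string; the sketch below builds the corresponding spatial reference with GDAL/OSR. The exact datum and ellipsoid naming conventions used in the released products may differ from this illustrative definition.

```python
# Sketch: the Equirectangular lunar projection (central longitude 180 deg,
# sphere radius 1,737,400 m) expressed as a PROJ string via GDAL/OSR.
# The datum/ellipsoid naming in the released products may differ slightly.
from osgeo import osr

srs = osr.SpatialReference()
srs.ImportFromProj4(
    "+proj=eqc +lat_ts=0 +lat_0=0 +lon_0=180 "
    "+x_0=0 +y_0=0 +a=1737400 +b=1737400 +units=m +no_defs"
)
print(srs.ExportToWkt())
```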

3.2. Qualitative and Quantitative Assessments

For qualitative assessments of the resultant LROC NAC DTM mosaic, we compare it with the existing LROC NAC PDS DTM product as well as the lower resolution CE-2 DTM and SLDEM. As shown in Figure 2, there are four possible stereo pairs within the area of interest; however, there is only one PDS DTM available so far (DTM ID: NAC_DTM_CHANGE4_E458S1775; ORI ID: NAC_DTM_CHANGE4_M1303619844_140CM.IMG). Consequently, our comparison between the LROC NAC MADNet DTM mosaic and LROC NAC PDS DTM is limited to small areas within the existing PDS DTM extent. In order to make comparisons in the same geographical context, the LROC NAC PDS DTM and ORI are projected, co-registered and co-aligned with the CE-2 DTM and ORI as well as the SLDEM. Figure 8 shows profile measurements of the LROC NAC MADNet DTM mosaic, LROC NAC PDS DTM, and CE-2 DTM. We can observe from the profile lines in Figure 8 that there is good agreement at the large scale for the three datasets, but significantly more topographic features are shown on profiles of the resultant LROC NAC MADNet DTM mosaic.
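The profile comparisons in Figure 8 amount to sampling each DTM along a line; the sketch below bilinearly samples one DTM between two map-projected endpoints. The endpoint coordinates and file name are placeholders, and nodata handling is omitted for brevity.

```python
# Sketch: sample a DTM along a straight profile line between two map-projected
# endpoints (bilinear interpolation). Endpoints and file name are placeholders.
import numpy as np
from osgeo import gdal
from scipy.ndimage import map_coordinates

ds = gdal.Open("lroc_nac_madnet_dtm_mosaic.tif")        # placeholder file
gt = ds.GetGeoTransform()                               # (x0, dx, 0, y0, 0, dy)
dtm = ds.GetRasterBand(1).ReadAsArray().astype(np.float64)

def world_to_pixel(x, y):
    """Convert projected coordinates to fractional (row, col) indices."""
    col = (x - gt[0]) / gt[1]
    row = (y - gt[3]) / gt[5]
    return row, col

# Placeholder endpoints in the projected coordinate system (metres).
(x_start, y_start), (x_end, y_end) = (0.0, 0.0), (10000.0, 0.0)
n = 500
xs = np.linspace(x_start, x_end, n)
ys = np.linspace(y_start, y_end, n)
rows, cols = world_to_pixel(xs, ys)

heights = map_coordinates(dtm, [rows, cols], order=1)   # bilinear sampling
distance = np.hypot(xs - xs[0], ys - ys[0])
# Plotting heights against distance gives one profile as in Figure 8.
```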
Figure 9, Figure 10 and Figure 11 show three zoomed-in comparisons between the PDS DTM and the MADNet DTM mosaic (the locations are indicated as three green dots in Figure 8), demonstrating significantly more topographic detail and fewer artefacts across different areas. In general, we can observe from the colourised DTMs an excellent agreement for large-scale topographic features between the PDS DTM and the MADNet DTM, even though the MADNet DTM was referenced to the much lower-resolution CE-2 MADNet DTM. Such large-scale features include the larger craters (e.g., craters with ~500 m diameter and ~25 m depth in Figure 9 and ~750 m width and ~85 m depth in Figure 11), local hills, and the flat areas. Meanwhile, we can also observe a much larger number of fine-scale topographic features in the MADNet DTM compared to the PDS DTM, such as small craters with less than 20 m diameter and less than 5 m depth. From the shaded relief images, we can observe that the MADNet DTM appears to have fewer artefacts compared to the PDS DTM. Such artefacts take the form of blocky squares on the shaded relief images of the PDS DTM in Figure 10 and Figure 11, as well as incorrect local high-relief features on the rim of the largest crater and at the centre of the second largest crater in Figure 11. The shaded relief images produced from the MADNet DTM show much better qualitative agreement with the LROC NAC PDS ORI compared to those from the PDS DTM.
For quantitative assessments, we can only compare the resultant LROC NAC MADNet DTM mosaic (without gap filling from the CE-2 MADNet DTM) against much lower-resolution reference DTMs, i.e., the co-aligned CE-2 DTM and the SLDEM, except for the small area covered by the existing LROC NAC PDS DTM, where a comparison against a slightly lower-resolution DTM is possible. Figure 12 shows the difference map between the LROC NAC MADNet DTM mosaic and the SLDEM co-aligned version of the CE-2 DTM, the difference map between the LROC NAC MADNet DTM mosaic and the SLDEM, and the difference map between the single-strip LROC NAC MADNet DTM and the LROC NAC PDS DTM. We also show a difference map between the SLDEM and the SLDEM co-aligned version of the CE-2 DTM. The means and standard deviations of the differences are summarised in Table 1. We can observe from the difference map between the LROC NAC MADNet DTM mosaic and the SLDEM co-aligned version of the CE-2 DTM that good agreement is shown in the “flat” regions at the centre of the von Kármán crater, while the differences mostly appear as small-scale features or around the rim of the crater. The differences between the LROC NAC MADNet DTM and the SLDEM are comparatively larger. This is due to the remaining differences between the SLDEM co-aligned version of the CE-2 DTM and the SLDEM. On the other hand, the large-scale features show excellent agreement between the LROC NAC MADNet DTM and the LROC NAC PDS DTM, while small-scale features appear as differences within ±5 m. We also note from Table 1 that the mean differences for all four comparisons are small, but the standard deviations are larger for the comparison between the LROC NAC MADNet DTM and the SLDEM and for the comparison between the SLDEM co-aligned version of the CE-2 DTM and the SLDEM, while they are much smaller for the comparison between the LROC NAC MADNet DTM and the SLDEM co-aligned version of the CE-2 DTM, as well as for the comparison between the LROC NAC MADNet DTM and the existing LROC NAC PDS DTM.
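The difference maps in Figure 12 are per-pixel subtractions of two co-registered rasters, written back out with the same georeferencing. A minimal sketch follows; the file names are placeholders, and both inputs are assumed to be on the same grid, as in the sketch of Section 2.1.

```python
# Sketch: write a per-pixel difference map between two co-registered DTMs
# as a GeoTIFF carrying the georeferencing of the first input.
# File names are placeholders; both inputs are assumed to share a grid.
import numpy as np
from osgeo import gdal

a_ds = gdal.Open("lroc_nac_madnet_dtm_mosaic.tif")
b_ds = gdal.Open("ce2_dtm_coaligned_resampled.tif")
a = a_ds.GetRasterBand(1).ReadAsArray().astype(np.float64)
b = b_ds.GetRasterBand(1).ReadAsArray().astype(np.float64)
diff = a - b

driver = gdal.GetDriverByName("GTiff")
out = driver.Create("difference_map.tif", a_ds.RasterXSize, a_ds.RasterYSize,
                    1, gdal.GDT_Float32)
out.SetGeoTransform(a_ds.GetGeoTransform())
out.SetProjection(a_ds.GetProjection())
out.GetRasterBand(1).WriteArray(diff.astype(np.float32))
out.FlushCache()

print("mean difference [m]:", np.nanmean(diff))
print("standard deviation [m]:", np.nanstd(diff))
```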

3.3. Data and Products Access

The final LROC NAC MADNet DTM mosaics (with and without CE-2 for gap filling), alongside a separate LROC NAC ORI mosaic created at JPL (Jet Propulsion Laboratory), are all made publicly available through the ESA Guest Storage Facility (GSF) [34] at https://doi.org/10.57780/esa-fb921t3 (accessed on 14 May 2023). The data products are also viewable and downloadable through NASA’s Moon Trek interactive web-based Geographic Information System (webGIS) system for planetary data visualisation and analysis https://trek.nasa.gov/moon/ (accessed on 17 December 2022).

4. Discussion

The original MADNet work [25] was developed and demonstrated with different Mars orbital imaging datasets for large-area, high-resolution topographic mapping of the Martian surface [39,52]. In this paper, we retrain the same network with 392 pairs of the publicly available LROC NAC PDS DTMs and ORIs and demonstrate that the same DTM production system can be robustly applied to a large number (370 in total) of LROC NAC images in order to create a large-area (covering 260 × 209 km), high-resolution (1 m/pixel) DTM mosaic. The main challenge is the long shadows that are present in some of the LROC NAC images. This is not only a major issue for LROC NAC to CE-2 image co-registration but also for the MADNet image-to-height inference process. Since the finest-scale inference is based on tiled small image patches (512 × 512 pixels), patches that are fully covered by shadow generally do not contain enough information for height inference.
Theoretically, with sufficient staffing and GPU resources, the same method could be extended to create a semi-global 1 m/pixel topographic map of the Moon using the LROC NAC images as inputs and the CE-2 DTM and SLDEM as reference datasets. However, the same issue with shadowing would still exist. Pre-filtering the input images with a stricter incidence angle threshold would increase the success rates of the co-registration and DTM inference processes but would result in much sparser coverage of the area, i.e., more gaps or slivers. A future solution to this may be the use of the new ShadowCam instrument that was developed by NASA and KARI (Korea Aerospace Research Institute) [53] and/or the application of fully trained de-shading networks to improve the LROC NAC images prior to the image-to-height inference process.
Regarding the assessment of the resultant LROC NAC MADNet DTM mosaic, only lower-resolution DTMs from CE-2 and the SLDEM are currently available for whole-area comparisons. The mean difference between the LROC NAC MADNet DTM mosaic and the CE-2 DTM is −0.048 m with a standard deviation of 1.791 m, while the mean difference and standard deviation between the LROC NAC MADNet DTM mosaic and the SLDEM are 0.577 m and 94.940 m, respectively. The latter is much larger because of the residual difference between the CE-2 DTM and the SLDEM (mean of 0.194 m, standard deviation of 100.382 m), which remains even after the co-alignment process that corrected the major offsets (the mean difference was 4428.52 m before co-alignment). It should be noted that more recent high-resolution LOLA DTMs (at 5 m/pixel) have become available for the south polar region (up to 85.5°S) and a variety of nearby regions, which have been selected as candidate NASA Artemis landing sites [54]. These new LOLA DTMs have greatly reduced orbital geolocation errors and interpolation errors [55]. In the future, we plan to produce LROC NAC MADNet DTM mosaics in the south polar region and perform a comparison with the new 5 m/pixel LOLA DTMs.
In the future, we plan to update the MADNet model to take lower-resolution DTMs together with the images as inputs for training and inference of higher-resolution DTMs. In this way, we should be able to tackle the issue of not being able to fully co-align two DTMs that come from different sources and have different resolutions. The MADNet prediction code and trained models are planned to be released alongside a major data product release for Mars soon.

5. Conclusions

In this paper, we demonstrate the use of a retrained MADNet model to produce a large-area, high-resolution LROC NAC DTM mosaic over the CE-4 landing site at the von Kármán crater. The resultant 1 m/pixel MADNet DTM mosaic is co-aligned with the 20 m/pixel CE-2 DTM and the 59 m/pixel SLDEM, providing high spatial and vertical congruence. Technical details are provided, along with a visual evaluation and quantitative assessments of the resultant DTM mosaic product. The resultant LROC NAC MADNet DTM mosaic, alongside a blended LROC NAC and CE-2 MADNet DTM mosaic and a separate LROC NAC orthorectified image mosaic, has been made publicly available via the ESA PSA GSF site as well as the NASA Moon Trek webGIS system. An initial evaluation was performed over a small area where a PDS DTM was available, showing almost zero bias and a standard deviation of about ±1 m; comparisons with the lower-resolution SLDEM and CE-2 DTMs yield consistent difference statistics.

Author Contributions

Conceptualization, Y.T. and J.-P.M.; methodology, Y.T. and J.-P.M.; software, Y.T. and S.X.; validation, Y.T. and J.-P.M.; formal analysis, Y.T. and J.-P.M.; investigation, Y.T. and J.-P.M.; resources, Y.T., J.-P.M., S.H.G.W. and B.L.; data curation, S.J.C., Y.T., B.L. and J.-P.M.; writing—original draft preparation, Y.T.; writing—review and editing, J.-P.M., Y.T., S.X., S.H.G.W. and B.L.; visualization, Y.T.; supervision, J.-P.M. and S.H.G.W.; project administration, Y.T. and J.-P.M.; funding acquisition, J.-P.M., S.H.G.W., and Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

The research leading to these results received initial funding from the UKSA Aurora program (2018–2021) under grant ST/S001891/1, as well as partial funding from the STFC MSSL Consolidated Grant ST/K000977/1. The processing was supported by JPL contract no. 1668434.

Data Availability Statement

All resultant products have been published in the ESA GSF: https://doi.org/10.57780/esa-fb921t3.

Acknowledgments

The research leading to these results received initial funding from the UKSA Aurora program (2018–2021) under grant ST/S001891/1. The lunar processing was supported by JPL contract no. 1668434. The follow-up work is supported by the German Space Agency (DLR Bonn), grant 50 OO 2204 (Koregistrierung), on behalf of the German Federal Ministry for Economic Affairs and Climate Action. We thank Emily Law and the following team members of the NASA Solar System Treks: Bach Bui, Richard Kim, Heather Lethcoe and Catherine Suh. CE-2 dataset acquisition was kindly supported by Kaichang Di of the State Key Laboratory of Remote Sensing Science, Chinese Academy of Sciences.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Kato, M.; Sasaki, S.; Tanaka, K.; Iijima, Y.; Takizawa, Y. The Japanese lunar mission SELENE: Science goals and present status. Adv. Space Res. 2008, 42, 294–300. [Google Scholar] [CrossRef]
  2. Li, C.; Liu, J.; Ren, X.; Mou, L.; Zou, Y.; Zhang, H.; Lü, C.; Liu, J.; Zuo, W.; Su, Y.; et al. The global image of the Moon obtained by the Chang’E-1: Data processing and lunar cartography. Sci. China Earth Sci. 2010, 53, 1091–1102. [Google Scholar] [CrossRef]
  3. Zhao, B.; Yang, J.; Wen, D.; Gao, W.; Chang, L.; Song, Z.; Xue, B.; Zhao, W. Overall scheme and on-orbit images of Chang’E-2 lunar satellite CCD stereo camera. Sci. China Technol. Sci. 2011, 54, 2237–2242. [Google Scholar] [CrossRef]
  4. Zuo, W.; Li, C.; Zhang, Z. Scientific data and their release of Chang’E-1 and Chang’E-2. Chin. J. Geochem. 2014, 33, 24–44. [Google Scholar] [CrossRef]
  5. Goswami, J.N.; Annadurai, M. Chandrayaan-1: India’s first planetary science mission to the Moon. Curr. Sci. 2009, 25, 486–491. [Google Scholar]
  6. Sundararajan, V. Overview and technical architecture of India’s Chandrayaan-2 mission to the Moon. In Proceedings of the 2018 AIAA Aerospace Sciences Meeting, Kissimmee, FL, USA, 8–12 January 2018; p. 2178. [Google Scholar]
  7. Chin, G.; Brylow, S.; Foote, M.; Garvin, J.; Kasper, J.; Keller, J.; Litvak, M.; Mitrofanov, I.; Paige, D.; Raney, K.; et al. Lunar reconnaissance orbiter overview: The instrument suite and mission. Space Sci. Rev. 2007, 129, 391–419. [Google Scholar] [CrossRef]
  8. Song, Y.J.; Bae, J.; Hong, S.; Bang, J. Korea Pathfinder Lunar Orbiter Flight Dynamics Simulation and Rehearsal Results for Its Operational Readiness Checkout. J. Astron. Space Sci. 2022, 39, 181–194. [Google Scholar] [CrossRef]
  9. Smith, D.E.; Zuber, M.T.; Jackson, G.B.; Cavanaugh, J.F.; Neumann, G.A.; Riris, H.; Sun, X.; Zellar, R.S.; Coltharp, C.; Connelly, J.; et al. The lunar orbiter laser altimeter investigation on the lunar reconnaissance orbiter mission. Space Sci. Rev. 2010, 150, 209–241. [Google Scholar] [CrossRef]
  10. Smith, D.E.; Zuber, M.T.; Neumann, G.A.; Mazarico, E.; Lemoine, F.G.; Head, J.W., III; Lucey, P.G.; Aharonson, O.; Robinson, M.S.; Sun, X.; et al. Summary of the results from the lunar orbiter laser altimeter after seven years in lunar orbit. Icarus 2017, 283, 70–91. [Google Scholar] [CrossRef]
  11. Mazarico, E.; Neumann, G.A.; Barker, M.K.; Goossens, S.; Smith, D.E.; Zuber, M.T. Orbit determination of the Lunar Reconnaissance Orbiter: Status after seven years. Planet. Space Sci. 2018, 162, 2–19. [Google Scholar] [CrossRef] [PubMed]
  12. Barker, M.K.; Mazarico, E.; Neumann, G.A.; Zuber, M.T.; Haruyama, J.; Smith, D.E. A new lunar digital elevation model from the Lunar Orbiter Laser Altimeter and SELENE Terrain Camera. Icarus 2016, 273, 346–355. [Google Scholar] [CrossRef]
  13. Li, C.; Liu, J.; Ren, X.; Yan, W.; Zuo, W.; Mou, L.; Zhang, H.; Su, Y.; Wen, W.; Tan, X.; et al. Lunar Global High-precision Terrain Reconstruction Based on Chang’e-2 Stereo Images. Geomat. Inf. Sci. Wuhan Univ. 2018, 43, 485–495. [Google Scholar]
  14. Ren, X.; Liu, J.; Li, C.; Li, H.; Yan, W.; Wang, F.; Wang, W.; Zhang, X.; Gao, X.; Chen, W. A global adjustment method for photogrammetric processing of Chang’E-2 stereo images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6832–6843. [Google Scholar] [CrossRef]
  15. Xin, X.; Liu, B.; Di, K.; Yue, Z.; Gou, S. Geometric Quality Assessment of Chang’E-2 Global DEM Product. Remote Sens. 2020, 12, 526. [Google Scholar] [CrossRef]
  16. Hu, T.; Yang, Z.; Kang, Z.; Lin, H.; Zhong, J.; Zhang, D.; Cao, Y.; Geng, H. Population of Degrading Small Impact Craters in the Chang’E-4 Landing Area Using Descent and Ground Images. Remote Sens. 2022, 14, 3608. [Google Scholar] [CrossRef]
  17. Zhao, S.; Qian, Y.; Xiao, L.; Zhao, J.; He, Q.; Huang, J.; Wang, J.; Chen, H.; Xu, W. Lunar Mare Fecunditatis: A Science-Rich Region and a Concept Mission for Long-Distance Exploration. Remote Sens. 2022, 14, 1062. [Google Scholar] [CrossRef]
  18. Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; et al. Lunar reconnaissance orbiter camera (LROC) instrument overview. Space Sci. Rev. 2010, 150, 81–124. [Google Scholar] [CrossRef]
  19. Henriksen, M.R.; Manheim, M.R.; Burns, K.N.; Seymour, P.; Speyerer, E.J.; Deran, A.; Boyd, A.K.; Howington-Kraus, E.; Rosiek, M.R.; Archinal, B.A.; et al. Extracting accurate and precise topography from LROC narrow angle camera stereo observations. Icarus 2017, 283, 122–137. [Google Scholar] [CrossRef]
  20. Wu, B.; Hu, H.; Liu, W.C. Photogrammetric processing of LROC NAC images for precision lunar topographic mapping. In Planetary Remote Sensing and Mapping; CRC Press: Boca Raton, FL, USA, 2018; pp. 125–147. [Google Scholar]
  21. Grumpe, A.; Belkhir, F.; Wöhler, C.W. Construction of lunar DEMs based on reflectance modelling. Adv. Space Res. 2014, 53, 1735–1767. [Google Scholar] [CrossRef]
  22. Wu, B.; Liu, W.C.; Grumpe, A.; Wöhler, C. Construction of pixel-level resolution DEMs from monocular images by shape and albedo from shading constrained with low-resolution DEM. ISPRS J. Photogramm. Remote Sens. 2018, 140, 3–19. [Google Scholar] [CrossRef]
  23. Liu, W.C.; Wu, B. An integrated photogrammetric and photoclinometric approach for illumination-invariant pixel-resolution 3D mapping of the lunar surface. ISPRS J. Photogramm. Remote Sens. 2020, 159, 153–168. [Google Scholar] [CrossRef]
  24. Woodham, R.J. Photometric method for determining surface orientation from multiple images. Opt. Eng. 1980, 19, 139–144. [Google Scholar] [CrossRef]
  25. Tao, Y.; Muller, J.-P.; Xiong, S.; Conway, S.J. MADNet 2.0: Pixel-Scale Topography Retrieval from Single-View Orbital Imagery of Mars Using Deep Learning. Remote Sens. 2021, 13, 4220. [Google Scholar] [CrossRef]
  26. Jia, Y.; Zou, Y.; Ping, J.; Xue, C.; Yan, J.; Ning, Y. The scientific objectives and payloads of Chang’E-4 mission. Planet. Space Sci. 2018, 162, 207–215. [Google Scholar] [CrossRef]
  27. Wu, W.; Li, C.; Zuo, W.; Zhang, H.; Liu, J.; Wen, W.; Su, Y.; Ren, X.; Yan, J.; Yu, D.; et al. Lunar farside to be explored by Chang’e-4. Nat. Geosci. 2019, 12, 222–223. [Google Scholar] [CrossRef]
  28. Qiao, L.; Ling, Z.; Fu, X.; Li, B. Geological characterization of the Chang’e-4 landing area on the lunar farside. Icarus 2019, 333, 37–51. [Google Scholar] [CrossRef]
  29. Huang, J.; Xiao, Z.; Flahaut, J.; Martinot, M.; Head, J.; Xiao, X.; Xie, M.; Xiao, L. Geological characteristics of Von Kármán crater, northwestern south pole-Aitken Basin: Chang’E-4 landing site region. J. Geophys. Res. Planets 2018, 123, 1684–1700. [Google Scholar] [CrossRef]
  30. Chen, H.; Hu, X.; Oberst, J. Pixel-Resolution DTM Generation for the Lunar Surface Based on a Combined Deep Learning and Shape-From-Shading (Sfs) Approach. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, V-3-2022, 511–516. [Google Scholar] [CrossRef]
  31. Chen, G.; Han, K.; Wong, K.Y.K. PS-FCN: A flexible learning framework for photometric stereo. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–18. [Google Scholar]
  32. Ju, Y.; Peng, Y.; Jian, M.; Gao, F.; Dong, J. Learning conditional photometric stereo with high-resolution features. Comput. Vis. Media 2022, 8, 105–118. [Google Scholar] [CrossRef]
  33. Beyer, R.; Alexandrov, O.; McMichael, S. The Ames Stereo Pipeline: NASA’s Opensource Software for Deriving and Processing Terrain Data. Earth Space Sci. 2018, 5, 537–548. [Google Scholar] [CrossRef]
  34. Masson, A.; de Marchi, G.; Merin, B.; Sarmiento, M.H.; Wenzel, D.L.; Martinez, B. Google dataset search and DOI for data in the ESA space science archives. Adv. Space Res. 2021, 67, 2504–2516. [Google Scholar] [CrossRef]
  35. Zuber, M.T.; Smith, D.E.; Watkins, M.M.; Asmar, S.W.; Konopliv, A.S.; Lemoine, F.G.; Melosh, H.J.; Neumann, G.A.; Phillips, R.J.; Solomon, S.C.; et al. Gravity field of the Moon from the Gravity Recovery and Interior Laboratory (GRAIL) mission. Science 2013, 339, 668–671. [Google Scholar] [CrossRef]
  36. Mazarico, E.; Goossens, S.J.; Lemoine, F.G.; Neumann, G.A.; Torrence, M.H.; Rowlands, D.D.; Smith, D.E.; Zuber, M.T. Improved orbit determination of lunar orbiters with lunar gravity fields obtained by the GRAIL mission. In Proceedings of the 44th Annual Lunar and Planetary Science Conference, The Woodlands, TX, USA, 18–22 March 2013; No. 1719. p. 2414. [Google Scholar]
  37. Haruyama, J.; Ohtake, M.; Matsunaga, T.; Morota, T.; Yokota, Y.; Honda, C.; Hirata, N.; Demura, H.; Iwasaki, A.; Nakamura, R.; et al. Planned radiometrically calibrated and geometrically corrected products of lunar high-resolution Terrain Camera on SELENE. Adv. Space Res. 2008, 42, 310–316. [Google Scholar] [CrossRef]
  38. Tao, Y.; Michael, G.; Muller, J.-P.; Conway, S.J.; Putri, A.R.D. Seamless 3D Image Mapping and Mosaicing of Valles Marineris on Mars Using Orbital HRSC Stereo and Panchromatic Images. Remote Sens. 2021, 13, 1385. [Google Scholar] [CrossRef]
  39. Tao, Y.; Muller, J.-P.; Conway, S.J.; Xiong, S. Large Area High-Resolution 3D Mapping of Oxia Planum: The Landing Site for the ExoMars Rosalind Franklin Rover. Remote Sens. 2021, 13, 3270. [Google Scholar] [CrossRef]
  40. Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Xiong, S.-T.; Putri, A.R.D.; Walter, S.H.G.; Veitch-Michaelis, J.; Yershov, V. Massive Stereo-based DTM Production for Mars on Cloud Computers. Planet. Space Sci. 2018, 154, 30–58. [Google Scholar] [CrossRef]
  41. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. arXiv 2014, arXiv:1406.2661. [Google Scholar] [CrossRef]
  42. Jolicoeur-Martineau, A. The relativistic discriminator: A key element missing from standard GAN. arXiv 2018, arXiv:1807.00734. [Google Scholar]
  43. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 18 May 2015; pp. 234–241. [Google Scholar]
  44. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Donostia, Spain, 5–8 June 2017; pp. 4700–4708. [Google Scholar]
  45. Laina, I.; Rupprecht, C.; Belagiannis, V.; Tombari, F.; Navab, N. Deeper depth prediction with fully convolutional residual networks. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 239–248. [Google Scholar]
  46. Tao, Y.; Xiong, S.; Conway, S.J.; Muller, J.-P.; Guimpier, A.; Fawdon, P.; Thomas, N.; Cremonese, G. Rapid Single Image-Based DTM Estimation from ExoMars TGO CaSSIS Images Using Generative Adversarial U-Nets. Remote Sens. 2021, 13, 2877. [Google Scholar] [CrossRef]
  47. Zwald, L.; Lambert-Lacroix, S. The berhu penalty and the grouped effect. arXiv 2012, arXiv:1207.6868. [Google Scholar]
  48. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  49. Eigen, D.; Puhrsch, C.; Fergus, R. Depth map prediction from a single image using a multi-scale deep network. arXiv 2014, arXiv:1406.2283. [Google Scholar]
  50. Eigen, D.; Fergus, R. Predicting depth, surface normal and semantic labels with a common multi-scale convolutional architecture. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2650–2658. [Google Scholar]
  51. Godard, C.; Mac Aodha, O.; Firman, M.; Brostow, G.J. Digging into self-supervised monocular depth estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3828–3838. [Google Scholar]
  52. Tao, Y.; Xiong, S.; Muller, J.-P.; Michael, G.; Conway, S.J.; Paar, G.; Cremonese, G.; Thomas, N. Subpixel-Scale Topography Retrieval of Mars Using Single-Image DTM Estimation and Super-Resolution Restoration. Remote Sens. 2022, 14, 257. [Google Scholar] [CrossRef]
  53. Robinson, M.S.; Mahanti, P.; Carter, L.M.; Denevi, B.W.; Estes, N.M.; Ravine, M.A.; Speyerer, E.J.; Wagner, R.V. ShadowCam—Seeing in the dark. In Proceedings of the European Planetary Science Congress, Riga, Latvia, 17–22 September 2017; Volume 11, p. 506. [Google Scholar]
  54. Smith, M.; Craig, D.; Herrmann, N.; Mahoney, E.; Krezel, J.; McIntyre, N.; Goodliff, K. The artemis program: An overview of nasa’s activities to return humans to the moon. In Proceedings of the 2020 IEEE Aerospace Conference, Big Sky, MT, USA, 7–14 March 2020; IEEE: Washington, DC, USA, 2020; pp. 1–10. [Google Scholar]
  55. Barker, M.K.; Mazarico, E.; Neumann, G.A.; Smith, D.E.; Zuber, M.T.; Head, J.W. Improved LOLA elevation maps for south pole landing sites: Error estimates and their impact on illumination conditions. Planet. Space Sci. 2021, 203, 105119. [Google Scholar] [CrossRef]
Figure 1. A tile of the colourised and hillshaded 20 m/pixel CE-2 DTM, before (left) and after (right) co-alignment with the reference SLDEM, that covers the von Kármán crater area, superimposed on top of the colourised and hillshaded 59 m/pixel SLDEM.
Figure 2. LROC NAC image footprints of non-repeat single image coverage (green) and all existing stereo image coverage (red), superimposed on the 7 m/pixel CE-2 ORI over the target site of von Kármán crater.
Figure 3. The final screened and co-registered (with the CE-2 ORI) LROC NAC single-strip images used as the input data of this work ((left): all 370 LROC NAC images; (right): the 118 1 m/pixel and the 252 0.5 m/pixel LROC NAC images shown separately). The background image is the colourised, hillshaded, and co-aligned CE-2 DTM (w.r.t. SLDEM).
Figure 4. Network architecture of the MADNet [25] single-input-image-based 3D estimation network.
Figure 5. Examples of MADNet inference results (2nd row), i.e., relative heights in the range from 0 (black) to 1 (white), in comparison to the input images (1st row) and ground-truth height maps (3rd row) from the test dataset.
Figure 6. Overall workflow of the large-area high-resolution 3D mapping system using LROC NAC images.
Figure 7. An overview of the resultant LROC NAC MADNet DTM mosaics (colourised and hillshaded) superimposed on the CE-2 ORI. (Upper-left): DTM mosaic of all 1 m/pixel LROC NAC MADNet DTMs (produced from 0.5–1 m/pixel LROC NAC images); (upper-right): DTM mosaic of all 2 m/pixel LROC NAC MADNet DTMs (produced from 1–1.5 m/pixel LROC NAC images); (lower-left): final DTM mosaic of all 1 m/pixel and 2 m/pixel LROC NAC MADNet DTMs (resampled to 1 m/pixel; higher-resolution DTMs are blended on top of the lower-resolution DTMs); (lower-right): final 1 m/pixel DTM mosaic with 14 m/pixel CE-2 MADNet DTM for gap filling.
Figure 8. Examples of profile measurements of the LROC NAC MADNet DTM mosaic (red), LROC NAC PDS DTM (blue), and CE-2 DTM (black). The locations of the profile lines (white lines) and follow-up zoom-in comparisons (green dots) are shown on the left.
Figure 9. Example-1 of the zoom-in views of the LROC NAC PDS ORI (NAC_DTM_CHANGE4_M1303619844_140CM.IMG), the LROC NAC PDS DTM (NAC_DTM_CHANGE4_E458S1775), and the resultant LROC NAC MADNet DTM mosaic. The DTMs are shown as colourised (top) and shaded relief (bottom) images (azimuth: 315°; altitude: 45°; vertical exaggeration: 1).
Figure 10. Example-2 of the zoom-in views of the LROC NAC PDS ORI (NAC_DTM_CHANGE4_M1303619844_140CM.IMG), the LROC NAC PDS DTM (NAC_DTM_CHANGE4_E458S1775), and the resultant LROC NAC MADNet DTM mosaic. The DTMs are shown as colourised (top) and shaded relief (bottom) images (azimuth: 315°; altitude: 45°; vertical exaggeration: 1).
Figure 11. Example-3 of the zoom-in views of the LROC NAC PDS ORI (NAC_DTM_CHANGE4_M1303619844_140CM.IMG), the LROC NAC PDS DTM (NAC_DTM_CHANGE4_E458S1775), and the resultant LROC NAC MADNet DTM mosaic. The DTMs are shown as colourised (top) and shaded relief (bottom) images (azimuth: 315°; altitude: 45°; vertical exaggeration: 1).
Figure 12. Difference maps between (a) the resultant LROC NAC MADNet DTM mosaic and the SLDEM co-aligned version of the CE-2 DTM; (b) the resultant LROC NAC MADNet DTM mosaic and the SLDEM; (c) the SLDEM co-aligned version of the CE-2 DTM and the SLDEM; and (d) the resultant LROC NAC MADNet DTM mosaic and the only available LROC NAC PDS DTM which is rotated 90° clockwise (ID: NAC_DTM_CHANGE4_E458S1775).
Table 1. Summary of the mean and standard deviations of the differences between different DTM sources at von Kármán crater.
Comparison Inputs | Mean Difference | Standard Deviation
LROC NAC MADNet DTM mosaic vs. SLDEM | 0.577 m | 94.940 m
LROC NAC MADNet DTM mosaic vs. CE-2 DTM | −0.048 m | 1.791 m
LROC NAC MADNet DTM mosaic vs. LROC NAC PDS DTM | −0.019 m | 1.09 m
CE-2 DTM vs. SLDEM | 0.194 m | 100.382 m