# Mushroom Detection and Three Dimensional Pose Estimation from Multi-View Point Clouds


## Abstract


## 1. Introduction

- Explore multi-view vision systems for geometry-preserving point clouds.
- Explore RANSAC-based (RANdom SAmple Consensus [6]) solutions for template matching.
- Propose a two-step pipeline for detecting mushrooms and estimating their poses. The first step is a novel instance-segmentation approach, while the second introduces a modified version of the Iterative Closest Point algorithm for template matching. The whole pipeline is learning-free, in the sense that no training based on annotation was involved.
- Introduce an ellipsoid-based lightweight alternative for pose estimation, relying on the resemblance of the mushroom cap shape to ellipsoids.
- Develop a pipeline for creating synthetic mushroom point cloud scenes that approximate realistic scenes. This pipeline was used for creating a validation dataset for quantitative evaluation of the proposed approach.

## 2. Related Work

## 3. Problem Statement: Vision Setup and Limitations

The vision setup relies on Intel^{®} RealSense™ depth cameras (D435 version), which use active stereo technology for depth estimation. An indicative setup is depicted in Figure 1, where two RealSense cameras are mounted on a 3D-printed extension that can be rotated (about the z-axis) in order to enrich the point cloud information with multiple viewpoints. Both the angle of the cameras and their distance can be modified.

- A proof of concept setup with three RGB-D cameras placed on the perimeter of a circle above the mushroom scene (one camera every 120 degrees). The mushroom scenes explored with this setup were simplistic and contained 3D printed mushrooms, as shown in Figure 2 (left). While developing our methods, we used this setup to validate the significance of each step, providing visual examples throughout this manuscript.
- A similar rotating-camera setup, developed by our project collaborators at TWI Hellas (THL), was used to create a collection of multi-view point clouds. The goal was to recreate the actual 3D mushroom scene as accurately as possible, using a vision system of two cameras that could move along the perimeter of a circle. To scan a specific mushroom scene, the vision system captured nine snapshots, rotating 20° between snapshots. The RGB-D snapshots were converted to point clouds and stitched together (a preliminary stitching estimate was obtained with an Iterative Closest Point (ICP) registration algorithm, initialized from the camera parameters and the camera orientation). Figure 2 (right) shows an example of a point cloud obtained from this setup.

## 4. Point Cloud Preprocessing

The preprocessing steps described below were applied to the point clouds captured by the Intel^{®} RealSense™ cameras.

#### 4.1. Plane Segmentation

- We removed the background irrelevant to mushroom detection and pose estimation, considerably simplifying the scene and reducing the point cloud size.
- We obtained auxiliary information about the point cloud’s direction in space, namely the direction of the normal of the extracted plane. This could be helpful to quickly initialize the forthcoming point cloud alignment step, when multiple point clouds from different views were available.
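Plane segmentation of this kind is typically implemented as a RANSAC plane fit. The following is a minimal self-contained sketch of the step (the paper does not spell out its implementation; the function name and thresholds here are illustrative assumptions):

```python
import numpy as np

def segment_plane(points, dist_thresh=0.01, iters=200, seed=0):
    """Minimal RANSAC plane segmentation.
    Returns (normal, offset, inlier_mask) for the plane n.x + d = 0."""
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, (None, None)
    for _ in range(iters):
        # fit a candidate plane to 3 random points
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:   # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = -n @ p0
        mask = np.abs(points @ n + d) < dist_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    return best_model[0], best_model[1], best_mask
```

Removing the inlier points (`points[~mask]`) discards the ground, while the recovered normal gives the scene's "up" direction used to initialize the alignment step.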

#### 4.2. Alignment of Multiple Point Clouds

#### Mesh Reconstruction

- Vertex Estimation: Sample the initial point cloud with a voxel-based downsampling approach. The resulting points constitute the initial estimation of the vertices of the mesh.
- Manifold Embedding: Project the 3D points into a 2D space while retaining the local geometry of the 3D structure. This is a typical case of manifold embedding. We used the ISOMAP (ISOmetric feature MAPping [26]) algorithm. Projecting the whole point cloud has two major disadvantages: (1) if the geometry is complex, local details cannot always be retained, and (2) the computational complexity is high, since ISOMAP computes shortest paths (Dijkstra's algorithm) and subsequently an eigenvalue decomposition. Both disadvantages can be effectively addressed by breaking the point cloud into segments, which we did with k-means clustering. In practice, we used a large number of clusters to speed up the procedure (K = 200). To handle cross-segment regions, we augmented each cluster with a number of neighboring points from adjacent clusters.
- Face Estimation: Given the projected 2D point set for each cluster, perform a Delaunay triangulation. The generated triangles are the faces of the 3D shape. The need for this triangulation step is what motivated unfolding the 3D manifold into a 2D space.
- Post-processing: Remove triangles with unnaturally long edges in the 3D space (cases of violated local structure into the projected space). Remove duplicate faces that may occur due to the neighboring points in each cluster. Finally, apply Laplacian smoothing to the generated mesh.
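The embed-and-triangulate core of these steps can be sketched per cluster as follows. As a lightweight stand-in for the ISOMAP embedding we use a plain PCA projection (adequate for a gently curved patch); the edge-length filter mirrors the post-processing step. Function and parameter names are our own:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_patch(points, edge_factor=3.0):
    """Mesh a roughly surface-like 3D patch. PCA projection stands in for
    the per-cluster ISOMAP embedding; candidate faces with unnaturally
    long 3D edges are then discarded (post-processing step)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    uv = centered @ vt[:2].T                 # "unfolded" 2D coordinates
    faces = Delaunay(uv).simplices           # candidate triangle faces
    edge = lambda a, b: np.linalg.norm(points[a] - points[b], axis=1)
    lengths = np.stack([edge(faces[:, 0], faces[:, 1]),
                        edge(faces[:, 1], faces[:, 2]),
                        edge(faces[:, 2], faces[:, 0])])
    # keep faces whose longest 3D edge is not an outlier
    keep = lengths.max(axis=0) < edge_factor * np.median(lengths)
    return faces[keep]
```

Running this per k-means cluster and merging the face lists (with duplicate removal and Laplacian smoothing) yields the full mesh.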

**Limitations:** Despite the effectiveness of the approach and the reduced complexity of the k-means-based modification, the overall processing time was approximately 10 s for the depicted detailed meshes (25K vertices) and could be reduced to 1–2 s for a cruder mesh of 9K vertices. Such running times rule out real-time use, although this step need not be performed at high frequency. Moreover, the algorithms that follow operate on point clouds rather than mesh representations, so the reconstructed mesh must be re-sampled. Future work should therefore explore more efficient ways to obtain this surface-like effect directly on the point cloud representation, without the extra overhead.

## 5. First Attempt: RANSAC-Based Template Matching

#### 5.1. The 3D Features

#### 5.2. Template Matching Pipeline

- Subsample both template and scene point clouds using a voxel-based downsampling method. This step speeds up the whole procedure.
- Extract 3D features for the downsampled point clouds. Each point corresponds to a feature. Two options are available, FPFH and FCGF features, as mentioned above.
- Perform RANSAC matching: impose strict criteria for validating a set of correspondences (e.g., corresponding points must have very similar normals). Optionally, the RANSAC matching step can be re-run and the run with the highest fitness value selected; this helps given the relatively large scene and the probabilistic nature of RANSAC.
- Fine-tune the detected template transform using ICP. To assist the ICP algorithm, we kept only the scene points within a radius threshold of the template points transformed by the RANSAC estimate. Note that ICP is rather sensitive, so a good initialization from the RANSAC step is necessary.
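The final fine-tuning step can be sketched as a minimal point-to-point ICP: alternate nearest-neighbour pairing with a closed-form (Kabsch) rigid update. This numpy sketch stands in for the library registration routine used in practice; the brute-force neighbour search is for clarity only:

```python
import numpy as np

def best_rigid(src, dst):
    """Least-squares rigid transform (Kabsch) mapping paired src -> dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP; returns (R, t) with aligned = src @ R.T + t."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in the scene for each template point
        nn = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1).argmin(1)
        R, t = best_rigid(cur, dst[nn])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

As the text notes, convergence depends heavily on the initialization supplied by the RANSAC step.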

## 6. Proposed Pipeline

- It required an estimate of the minimum number of mushrooms to search for if all of them were to be detected, and the search process could be rather slow.
- It could be significantly inaccurate in some cases, especially considering that an off-the-shelf feature extractor was used and that several “valid” matches could occur due to the inherent symmetry of the mushroom cap.

#### 6.1. Mushroom Instance Segmentation

- Randomly sample a different point cloud from the initial template mesh.
- Use a modified voxel size by multiplying the initial user-defined voxel size with a value uniformly sampled from the range $[0.8,1.2]$.
- Perform a 3D affine transformation using a constrained set of possible 3D rotations, as well as a constrained set of possible scales.
- Perform a local deformation of the mushroom surface along the surface normals. This step does not significantly affect the shape of the mushroom; it only helps to avoid “overfitting” of the model to very specific surface patterns.
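The augmentation steps above can be sketched as follows. The paper specifies only that rotations and scales are constrained; the concrete ranges, subsampling ratio and deformation magnitude below are illustrative assumptions:

```python
import numpy as np

def rot(angle, axis):
    """Rotation matrix about a coordinate axis (0 = x, 1 = y, 2 = z)."""
    c, s = np.cos(angle), np.sin(angle)
    i, j = [(1, 2), (0, 2), (0, 1)][axis]
    R = np.eye(3)
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = -s, s
    return R

def augment_template(points, normals, rng):
    """Produce one augmented copy of the template cap (all ranges assumed)."""
    # random subsample (stands in for re-sampling the template mesh)
    idx = rng.choice(len(points), int(0.8 * len(points)), replace=False)
    p, n = points[idx], normals[idx]
    # constrained rotation about x/y, unconstrained about z (cap symmetry)
    R = (rot(rng.uniform(0, 2 * np.pi), 2)
         @ rot(rng.uniform(-np.pi / 6, np.pi / 6), 1)
         @ rot(rng.uniform(-np.pi / 6, np.pi / 6), 0))
    # constrained per-axis scale, then rotation; the mild anisotropy is
    # ignored for the normals in this sketch
    p = (p * rng.uniform(0.9, 1.1, 3)) @ R.T
    n = n @ R.T
    # small local deformation along the surface normals
    p = p + n * rng.normal(0.0, 0.01, (len(p), 1))
    return p
```

Repeating this with fresh random draws yields the set of augmented templates whose 3D features are later clustered.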

#### 6.2. Pose Estimation via Template Iterative Fine-Tuning

**An Ellipsoid Alternative:** The mushroom cap has a very specific shape that resembles a 3D ellipsoid. Therefore, one might avoid the template matching scheme, which requires a matching step between points of two different point clouds, and instead apply a much faster ellipsoid fitting procedure. The resulting translation, scale and rotation of the ellipsoid correspond to the respective transformation parameters of the template matching approach. To prevent outliers from contributing to the fitting process, we followed a re-weighted iterative least-squares approach, as Equation (2) suggests: we iteratively estimated $\mathbf{M}\in {\mathbb{R}}^{3\times 3}$ and $\mathbf{H}\in {\mathbb{R}}^{1\times 3}$ and then used these estimates to update the weight ${w}_{i}$. Specifically, Equation (2) describes a (normalized) general formulation of an ellipsoid, where $\mathbf{M}$ is a positive definite matrix.

## 7. Experimental Evaluation

#### 7.1. Synthetic Point Clouds

1. Use a realistic non-smooth ground plane. For this, a ground dirt mesh was used. For each scene, small local deformations were applied to the ground mesh.
2. Select a number K of mushrooms to be placed on the ground mesh. The whole mushroom template of Figure 3 was used in this step, not only the cap. For each mushroom to be placed, follow these transformation sub-steps:
   - Translate a mushroom template anywhere in the domain (over the xy-axes) defined by the ground mesh.
   - Apply a scale factor within the range $[1-1/3,1+1/3]$. Then, a finer per-axis re-scaling step is performed in a constrained range of $[0.9,1.1]$, providing extra variability in the shape of the mushroom. The mushroom is then translated along the z-axis so that its bottom point lies in the proximity of the ground plane.
   - Apply a 3D rotation according to randomly selected axis angles. Rotation over the x- and y-axes was constrained to the range $[-{30}^{\circ},{30}^{\circ}]$. Due to symmetry over the z-axis, we chose to leave this rotation unconstrained.
   - Apply local deformations to the mushroom mesh (along the surface normals) without significantly altering its surface. Implementation-wise, a small number of vertices were randomly selected to be translated along their corresponding normal. Small values were used in order to retain the surface shape. Neighboring vertices were also deformed following an interpolation rationale.
   - Apply basic collision checking. If a newly created mushroom collided with an existing one, we discarded the new mushroom and created another until no collision was detected.
3. Create the final point cloud, using a slightly different number of sampled points and different voxel sizes ($\times 0.8$–$\times 1.2$) for the subsequent down-sampling, in order to provide extra variability in the created scenes.
4. Lastly, simulate realistically collected point clouds by performing a hidden point removal step [31]. This step approximates the visibility of a point cloud from a given view and removes the occluded points. Possible views were randomly selected using a constrained radius range and a constrained set of axis angles that did not diverge considerably from an overhead viewpoint.
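The placement and collision-check sub-steps can be sketched as follows. The bounding-sphere collision test and the function name are our own simplifications, and the rotation and surface-deformation sub-steps are omitted for brevity:

```python
import numpy as np

def place_mushrooms(template, k, extent=1.0, rng=None, max_tries=100):
    """Scatter k non-colliding scaled copies of a template point cloud
    over a square ground domain of half-width `extent`."""
    if rng is None:
        rng = np.random.default_rng(0)
    placed, centers, radii = [], [], []
    while len(placed) < k:
        for _ in range(max_tries):
            s = rng.uniform(2 / 3, 4 / 3)        # global scale (paper's range)
            xy = rng.uniform(-extent, extent, 2)  # random xy translation
            pts = template * s + np.array([xy[0], xy[1], 0.0])
            c = pts.mean(axis=0)
            r = np.linalg.norm(pts - c, axis=1).max()
            # bounding-sphere collision check against mushrooms placed so far
            if all(np.linalg.norm(c - cj) > r + rj
                   for cj, rj in zip(centers, radii)):
                placed.append(pts)
                centers.append(c)
                radii.append(r)
                break
        else:
            break  # gave up: no collision-free position found
    return placed
```

In the full pipeline, the hidden point removal step [31] is then applied to each generated scene from a randomly selected near-overhead viewpoint.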

#### 7.2. Quantitative Evaluation

#### 7.2.1. RANSAC-Based vs. Proposed Pipeline

#### 7.2.2. Ablation over Implementation Choices

#### 7.2.3. Quantify Pose Estimation

#### 7.2.4. Comparison to Existing Methods

#### 7.3. Qualitative Evaluation

## 8. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

1. Shamshiri, R.; Weltzien, C.; Hameed, I.A.; Yule, I.J.; Grift, T.E.; Balasundram, S.K.; Pitonakova, L.; Ahmad, D.; Chowdhary, G. Research and development in agricultural robotics: A perspective of digital farming. Int. J. Agric. Biol. Eng. 2018, 11, 1–14.
2. Yang, W.; Gong, C.; Luo, X.; Zhong, Y.; Cui, E.; Hu, J.; Song, S.; Xie, H.; Chen, W. Robotic Path Planning for Rice Seeding in Hilly Terraced Fields. Agronomy 2023, 13, 380.
3. Zhang, J.; Karkee, M.; Zhang, Q.; Zhang, X.; Yaqoob, M.; Fu, L.; Wang, S. Multi-class object detection using faster R-CNN and estimation of shaking locations for automated shake-and-catch apple harvesting. Comput. Electron. Agric. 2020, 173, 105384.
4. Sari, S. Comparison of Camera-Based and LiDAR-Based Object Detection for Agricultural Robots. In Proceedings of the 2021 International Conference on Information Technology and Applications (ICITA), Dubai, United Arab Emirates, 13–14 November 2021.
5. Baisa, N.L.; Al-Diri, B. Mushrooms Detection, Localization and 3D Pose Estimation using RGB-D Sensor for Robotic-picking Applications. arXiv 2022, arXiv:2201.02837.
6. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
7. Bargoti, S.; Underwood, J. Deep fruit detection in orchards. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017.
8. Salazar-Gomez, A.; Darbyshire, M.; Gao, J.; Sklar, E.I.; Parsons, S. Towards practical object detection for weed spraying in precision agriculture. arXiv 2021, arXiv:2109.11048.
9. Navas, E.; Fernandez, R.; Sepúlveda, D.; Armada, M.; Gonzalez-de Santos, P. Soft grippers for automatic crop harvesting: A review. Sensors 2021, 21, 2689.
10. Wosner, O.; Farjon, G.; Bar-Hillel, A. Object detection in agricultural contexts: A multiple resolution benchmark and comparison to human. Comput. Electron. Agric. 2021, 189, 106404.
11. Yin, H.; Yi, W.; Hu, D. Computer vision and machine learning applied in the mushroom industry: A critical review. Comput. Electron. Agric. 2022, 198, 107015.
12. Lin, X.; Xu, J.L.; Sun, D.W. Investigation of moisture content uniformity of microwave-vacuum dried mushroom (Agaricus bisporus) by NIR hyperspectral imaging. LWT 2019, 109, 108–117.
13. Gowen, A.; O'Donnell, C.; Taghizadeh, M.; Cullen, P.; Frias, J.; Downey, G. Hyperspectral imaging combined with principal component analysis for bruise damage detection on white mushrooms (Agaricus bisporus). J. Chemom. 2008, 22, 259–267.
14. Dong, J.E.; Zhang, J.; Zuo, Z.T.; Wang, Y.Z. Deep learning for species identification of bolete mushrooms with two-dimensional correlation spectral (2DCOS) images. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2021, 249, 119211.
15. Tarsoly, S.; Karoly, A.I.; Galambos, P. Lessons Learnt with Traditional Image Processing Techniques for Mushroom Detection. In Proceedings of the IEEE 10th Jubilee International Conference on Computational Cybernetics and Cyber-Medical Systems (ICCC), Reykjavík, Iceland, 6–9 July 2022.
16. Lu, C.P.; Liaw, J.J. A novel image measurement algorithm for common mushroom caps based on convolutional neural network. Comput. Electron. Agric. 2020, 171, 105336.
17. Wang, Y.; Yang, L.; Chen, H.; Hussain, A.; Ma, C.; Al-gabri, M. Mushroom-YOLO: A deep learning algorithm for mushroom growth recognition based on improved YOLOv5 in agriculture 4.0. In Proceedings of the IEEE 20th International Conference on Industrial Informatics (INDIN), Perth, Australia, 25–28 July 2022.
18. Wei, B.; Zhang, Y.; Pu, Y.; Sun, Y.; Zhang, S.; Lin, H.; Zeng, C.; Zhao, Y.; Wang, K.; Chen, Z. Recursive-YOLOv5 network for edible mushroom detection in scenes with vertical stick placement. IEEE Access 2022, 10, 40093–40108.
19. Ciarfuglia, T.A.; Motoi, I.M.; Saraceni, L.; Nardi, D. Pseudo-label Generation for Agricultural Robotics Applications. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 1686–1694.
20. Fei, Z.; Olenskyj, A.G.; Bailey, B.N.; Earles, M. Enlisting 3D crop models and GANs for more data efficient and generalizable fruit detection. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 1269–1277.
21. Le Louedec, J.; Montes, H.A.; Duckett, T.; Cielniak, G. Segmentation and detection from organised 3D point clouds: A case study in broccoli head detection. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020.
22. Wang, L.; Zheng, L.; Wang, M. 3D Point Cloud Instance Segmentation of Lettuce Based on PartNet. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 1647–1655.
23. Guo, R.; Qu, L.; Niu, D.; Li, Z.; Yue, J. LeafMask: Towards greater accuracy on leaf segmentation. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 1249–1258.
24. Qian, Y.; Jiacheng, R.; Pengbo, W.; Zhan, Y.; Changxing, G. Real-time detection and localization using SSD method for oyster mushroom picking robot. In Proceedings of the 2020 IEEE International Conference on Real-Time Computing and Robotics (RCAR), Asahikawa, Japan, 28–29 September 2020.
25. Park, J.; Zhou, Q.Y.; Koltun, V. Colored point cloud registration revisited. In Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017.
26. Tenenbaum, J.B.; de Silva, V.; Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 2000, 290, 2319–2323.
27. Rusu, R.B.; Blodow, N.; Beetz, M. Fast point feature histograms (FPFH) for 3D registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009.
28. Choy, C.; Park, J.; Koltun, V. Fully convolutional geometric features. In Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 8958–8966.
29. Xiao, J.; Owens, A.; Torralba, A. SUN3D: A database of big spaces reconstructed using SfM and object labels. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013.
30. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of KDD-96, Portland, OR, USA, 2–4 August 1996; Volume 96, pp. 226–231.
31. Katz, S.; Tal, A.; Basri, R. Direct visibility of point sets. In Proceedings of ACM SIGGRAPH 2007, San Diego, CA, USA, 5–9 August 2007.

**Figure 1.** Indicative vision setup for obtaining point clouds from different views. Two RealSense depth cameras are used, with the option of rotating the cameras around the z-axis. In the simplest scenario, two point clouds, one from each camera, are collected and merged into a single point cloud of the scene.

**Figure 2.** Three different settings of collected validation data: merged point cloud from 3 views using 3D printed mushrooms (**left**), merged point cloud from 2 views using real mushrooms (**center**) and merged point cloud from multiple views (18) using 3D printed mushrooms (**right**).

**Figure 3.** The 3D mesh of the mushroom template. In practice, we used only the 3D model of the mushroom cap (**right**), instead of the whole mushroom mesh (**left**).

**Figure 4.** Overview of the preprocessing steps. The input is a set of point clouds from different views. The surface reconstruction step is optional.

**Figure 5.** Plane segmentation; points belonging to the detected ground plane are visualized in red.

**Figure 6.** Upper row: examples of the initial alignment using the camera parameters. Bottom row: examples of the finer alignment of the developed method. Red boxes point out misalignments.

**Figure 9.** Visualization of the RANSAC-based estimation results for FPFH (first column) and FCGF (second column) features.

**Figure 10.** Overview of the proposed pipeline. The two main functionalities are mushroom detection and mushroom pose estimation. Visualization examples of each step/sub-step are provided.

**Figure 11.** Examples of mushroom segmentation; points belonging to a mushroom are depicted in red, background points in blue. Two variants are depicted: (**a**) only one template point cloud is considered and (**b**) a set of augmented template point clouds is considered.

**Figure 12.** Overview of foreground/background separation. Augmentation of the template (only the cap), and clustering of the 3D features of the augmented set with a k-medoids algorithm, was applied offline. For the "online" processing of a new point cloud, we simply compared the 3D feature of each point with the already computed medoids. If cosine similarity was above a user-defined threshold, the point was considered a mushroom point (i.e., foreground point).

**Figure 13.** Example of foreground/background separation of neighboring mushrooms. Note that regions of "contact", denoted with green boxes, were not recognized as mushrooms. Red points are classified as mushroom points (foreground), while blue points are background.

**Figure 14.** Visualization of the density clustering step over the foreground/background separation results. Different colors denote different clusters.

**Figure 15.** Visualization of three distinct (starting, intermediate and ending) steps of the developed ICP variant.

**Figure 20.** Two bounding boxes of very similar pose that report an overlap IoU value of 60%. The green box denotes the ground truth box, while the blue box denotes the predicted bounding box.

**Figure 21.** Visualization of the proposed steps on real data. From left to right: (1) initial point cloud, (2) foreground/background separation, (3) instance segmentation, (4) template matching, (5) fitted templates overlaid on the initial point cloud. The first three rows correspond to point clouds from multiple viewpoints of a rotating vision system, while the last two correspond to point clouds of two opposite views.

**Table 1.** Detection performance of the RANSAC-based approach vs. the proposed pipeline.

| Method | MAP @ 25% IoU | MAP @ 50% IoU |
|---|---|---|
| RANSAC-based Approach | 90.89% | 53.62% |
| Proposed Approach | 99.80% | 96.31% |

**Table 2.** Impact of the choice of 3D features (FCGF vs. FPFH) on the proposed pipeline.

| 3D Features | MAP @ 25% IoU | MAP @ 50% IoU |
|---|---|---|
| FCGF | 99.63% | 24.75% |
| FPFH | 99.80% | 96.31% |

**Table 3.** Impact of the ICP template fine-tuning step. Typical ICP and the proposed modification are compared.

| Template Alignment | MAP @ 25% IoU | MAP @ 50% IoU |
|---|---|---|
| basic ICP | 98.33% | 45.92% |
| modified ICP | 99.80% | 96.31% |

**Table 4.** Impact of the optional surface reconstruction step.

| | w/ Surface Reconstruction | w/o Surface Reconstruction |
|---|---|---|
| MAP @ 50% IoU | 96.36 ± 2.02% | 96.31 ± 0.92% |

**Table 5.** Comparison between the ellipsoid-based pose estimation and the proposed template matching ICP variant.

| | Ellipsoid Variant | Modified ICP |
|---|---|---|
| MAP @ 50% IoU | 89.09% | 96.31% |

**Table 6.** Pose estimation in terms of cosine similarity between rotation vectors. Mean and median cosine metrics were computed for detections over a specified IoU threshold.

| | 25% IoU | 50% IoU |
|---|---|---|
| mean cos. similarity | 0.9885 | 0.9920 |
| mean angle error | 8.70° | 7.27° |
| median cos. similarity | 0.9927 | 0.9948 |
| median angle error | 6.93° | 5.85° |

**Table 7.** Detection and pose estimation results for a standard registration pipeline (RANSAC+ICP) applied on segmented mushroom regions. We report both MAP and angle error for detections over 25% IoU.

| | MAP | (Mean) Angle Error |
|---|---|---|
| segmented RANSAC+ICP | 91.78% | 13.77° |
| proposed | 99.80% | 8.70° |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Retsinas, G.; Efthymiou, N.; Anagnostopoulou, D.; Maragos, P. Mushroom Detection and Three Dimensional Pose Estimation from Multi-View Point Clouds. *Sensors* **2023**, *23*, 3576.
https://doi.org/10.3390/s23073576
