Article

Accelerating Surface Tension Calculation in SPH via Particle Classification and Monte Carlo Integration

Fernando Zorilla, Marcel Ritter, Johannes Sappl, Wolfgang Rauch and Matthias Harders
1 Interactive Graphics and Simulation Group, University of Innsbruck, 6020 Innsbruck, Austria
2 Department of Environmental Engineering, University of Innsbruck, 6020 Innsbruck, Austria
* Author to whom correspondence should be addressed.
Computers 2020, 9(2), 23; https://doi.org/10.3390/computers9020023
Submission received: 10 February 2020 / Revised: 23 March 2020 / Accepted: 25 March 2020 / Published: 29 March 2020
(This article belongs to the Special Issue Computer Graphics & Visual Computing (CGVC 2019))

Abstract

Surface tension has a strong influence on the shape of fluid interfaces. We propose a method to calculate the corresponding forces efficiently. To this end, and in contrast to several previous approaches, we discriminate between surface and non-surface SPH particles. Our method effectively smooths the fluid interface, minimizing its curvature. We make use of an approach inspired by Monte Carlo integration to estimate local normals as well as curvatures, based on which the force can be calculated. We compare different sampling schemes for the Monte Carlo approach, among which a Halton sequence performed best. Our overall technique is applicable to, but not limited to, 2D and 3D simulations, and can be coupled with any common SPH formulation. It outperforms prior approaches with regard to total computation time per time step in dynamic scenes. Additionally, it is adjustable for higher quality in small-scale scenes with dominant surface tension effects.

1. Introduction

Surface tension is a phenomenon appearing at the interface of differing media, typically involving a liquid and a gas, such as a water-air interface. It results from cohesive forces attracting the molecules of the liquid towards each other. Formally, surface tension is defined as the ratio between the surface force and the distance along which it acts. These forces lead, for instance, to smoothing of fluid surfaces, wherefore they play a vital role in the visual appearance of a fluid. Accordingly, computational fluid simulations should include estimations of these processes.
In smoothed particle hydrodynamics (SPH) [1], fluids are discretized into particles. Due to this, the interface, e.g., between water and air, is not exactly defined. Therefore, proper ways of approximating the surface tension forces are required. In SPH approaches, these forces are often computed per particle, based on an estimate of the local normal direction as well as of the local curvature. However, some state-of-the-art methods generalize such force calculations to all particles in the fluid, not taking into consideration whether they are located at the surface or not. Technically, this should not introduce any artifacts, since the forces obtained for non-surface particles would be calculated as zero. Nevertheless, computational resources are being spent in the process, without having any effect on the fluid behavior.
Instead of the above, it could be advantageous to first classify particles into surface and non-surface ones, as for instance suggested in [2,3]. Assuming this can be done efficiently, the subsequent surface tension calculation could be accelerated, leading in total to a reduced computation time per simulation step. Related to this notion, a method for SPH surface detection in 2D has been presented in [4]. There, a particle is classified as part of the surface if a circle centered at the particle position is not fully overlapped by circles associated with neighboring particles. Inspired by this idea, we propose an extension of the method, with which we first classify particles (in 2D or 3D). For this step, we employ a linear classifier, obtained with machine learning techniques. Once the particles are classified, the local normal and curvature have to be obtained. This is realized with a Monte Carlo approach, in which the geometry is locally sampled to determine local coverage. The approach only requires the neighborhood geometry, wherefore it is applicable to currently existing SPH algorithms for fluid simulation. Furthermore, we also suggest adaptive adjustment of the sample resolution, according to the time step. Comparing our approach to state-of-the-art methods for surface tension force estimation, the total simulation runtimes could be consistently reduced. As an initial example of our method, see Figure 1: the evolution of two particle sets, initially arranged as two cubes, is depicted, following our surface tension calculation. Note the coalescence of both parts, including oscillatory movement over time, while also exhibiting concavities. The final equilibrium shape, which minimizes the surface tension energy, is, as expected, a spherical droplet. In addition to the above extensions, we also explored the use of different sampling techniques; employing a pseudo-random Halton sequence instead of a uniform random distribution could yield better performance. Nevertheless, it should be mentioned that minor drift in zero gravity test cases could possibly appear, due to numerical inaccuracies. This may be reduced by further improvement of the normal estimation, but will require additional investigation.

2. Related Work

Various approaches to calculate surface tension forces in SPH fluids have been proposed in the past. Earlier work attempted to represent surfaces with a smoothed color field, as seen in [8,9,10,11]. The latter is a scalar field, which is initially set to one at particle locations, and zero everywhere else. This permits obtaining estimates of surface normals and curvatures, calculated as the gradient of the field and as the divergence of that gradient, respectively. However, the technique usually leads to a random assignment of normals for particles far from the surface. Moreover, errors in the curvature values result, and conservation of the fluid momentum is not ensured. The local nature of our method reduces the problem of random normals at locations far from the surface, as well as the curvature error.
In [5], the surface tension force is modeled as a sum of cohesion forces between particles in the same fluid phase. However, the equilibrium of these cohesion forces, as found in simulations, does not always correspond to the correct minimal surface area, as one would expect from a surface-tension-dominated fluid. The method is also prone to clustering of particles on the fluid surface. To avoid such particle clustering, it was suggested in [12] to introduce a repulsion force when particles are too close to each other. This was achieved by manually tuning a force profile according to particle separation distance. In our method, particle clustering is avoided, since we do not use cohesion forces. Related to this, it was stated in [6] that the surface tension force cannot be estimated as a summation of cohesion forces alone, as observed in nature, since SPH particles represent a fluid on a macroscopic level. Instead, they suggest combining a cohesion term with a surface minimization term. Thus, their force term minimizes fluid surface area, conserves momentum, and prevents clustering. However, forces are manually tuned to attract particles in a certain distance range, while repulsing particles that are too close. In contrast, in [13] the curvature minimization problem is first solved on a mesh that is reconstructed from the SPH particles, and the results are later transferred back to the particles. The authors encountered surface waves that could appear due to a mismatch between mesh vertices and underlying SPH particles; the effect could be reduced in a post-processing step.
All the methods above treat all particles equally. However, for non-surface particles the resulting force will be zero; time is thus spent on calculations that do not have an effect on the simulation. It may therefore be beneficial to classify particles initially, and then compute forces only for surface particles. One of the first methods that distinguishes between surface and non-surface particles is [2]. The force is modeled based on the asymmetric neighborhood of particles close to the surface, which leads to asymmetries in the summation of Van der Waals interactions. This yields a force acting on surface particles, proportional to surface curvature. The work in [3] introduces a method for surface particle classification based on visual occlusion of particles from different viewpoints. However, the method is computationally intensive and cannot accommodate false positives. In [14], surface tension was computed for long, thin objects.
In addition to the above, curvature estimation in general point clouds is also a widely studied topic. In related work, magnitudes proportional to the surface curvature are sometimes computed, but not the exact value itself. This problem has a similar cause: also in general point clouds, the surface curvature may not be exactly defined. Moreover, most existing work already assumes the availability of a surface-only point cloud, e.g., [15,16]. In contrast, our work starts with arbitrary particle locations in a volume. Finally, also note the relation of the problem to SPH surface reconstruction, e.g., via distance fields, such as [17,18].

3. Methods

Following the idea of modeling surface tension with a continuum method in [19], we calculate the surface tension force via $f_{st}^i = \sigma \kappa^i \hat{n}^i$, where $\kappa^i$ and $\hat{n}^i$ are the surface curvature and normal at SPH particle i (superscripts denote particle indices). Furthermore, $\sigma$ is a constant surface tension parameter, measured in N/m, which depends on the simulated fluid. As mentioned above, we thus have to approximate curvature as well as normal direction per particle. Our proposed method is organized in three major algorithmic steps. First, particles in an SPH simulation are classified into two groups, surface and non-surface particles. Second, the normal vector is estimated for all surface particles. This makes use of a Monte Carlo technique to locally estimate an integral, taking into account neighboring particles. Due to the probabilistic nature of Monte Carlo computations, the resulting normal vectors are additionally smoothed. Third, following a similar Monte Carlo strategy, we estimate the local curvature, again only for the classified surface particles, also with a subsequent smoothing step. The described process can be applied to 2D or 3D scenes. In addition, the number of random samples, and thus the accuracy, can automatically be adjusted according to the simulation time step. Employing the computed data, the surface tension forces per particle can be determined. The individual steps are described in detail in the following.
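As a high-level illustration of these steps, the following Python sketch shows how the per-particle forces could be assembled once the three sub-procedures are available. It is a minimal sketch under the assumption of NumPy arrays for particle data; the three callables are placeholders for the procedures of Sections 3.1-3.4, not functions of any existing SPH library.

```python
import numpy as np

def surface_tension_forces(x, m, h, sigma,
                           classify_surface, estimate_normals, estimate_curvatures):
    """Assemble f_st^i = sigma * kappa^i * n^i, for surface particles only.

    x: (P, 3) particle positions, m: (P,) masses, h: SPH kernel radius.
    The three callables stand in for the steps described in Sections 3.1-3.4.
    """
    is_surface = classify_surface(x, m, h)            # boolean mask, Section 3.1
    normals = estimate_normals(x, h, is_surface)      # (P, 3), zero for non-surface
    kappa = estimate_curvatures(x, h, is_surface)     # (P,), zero for non-surface
    f_st = np.zeros_like(x)
    f_st[is_surface] = sigma * kappa[is_surface, None] * normals[is_surface]
    return f_st
```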

3.1. Particle Classification

The main idea of classifying particles is to reduce the computation time of the surface tension calculation. We aim to achieve this by excluding (ideally all) non-surface particles from the calculation. Doing so should not affect the overall result, since for these particles the surface tension force should be zero anyhow. In contrast, it is crucial for the correctness of the result that no surface particle be misclassified (i.e., there should be no false negatives). Incorrect classification of non-surface particles as surface ones (i.e., false positives) should be minimized, but does not affect the correctness of the fluid dynamics.
To properly classify the particles, we experimented with various feature spaces. Optimally, these should depend only on the local geometry. As possible features, we examined, for instance, the summation of neighborhood masses, using various weighting kernels. However, it turned out that good results could already be achieved by mapping fluid particles into a simple 2-dimensional feature space. The first component of this space is given by the mass-weighted average distance of particles in a local neighborhood:
$$\frac{X^i}{h} = \frac{1}{h}\,\frac{\sum_j \left\| m^i x^j - m^j x^i \right\|}{\sum_j m^j},$$
where m and x are the masses and positions of particles i and j, respectively. The neighborhood is defined by the (user-selected) SPH kernel radius h; thus, each particle i has associated neighbor particles j at distances smaller than h. Please note that we normalize by dividing the mass-weighted average by h, thus making the feature independent of the kernel size. For the second feature dimension, we simply employ the number of neighbors $N^i$ of particle i.
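As a concrete illustration, the feature extraction could be sketched as follows in Python. This is a minimal sketch under the formula reconstruction given above, using a brute-force neighbor search for clarity; a production implementation would use the SPH neighbor lists already available in the solver.

```python
import numpy as np

def classification_features(x, m, h):
    """Per-particle 2D feature vector (X^i/h, N^i).

    x: (P, dim) particle positions, m: (P,) particle masses, h: SPH kernel radius.
    Brute-force O(P^2) neighbor search, for illustration only.
    """
    P = len(x)
    feats = np.zeros((P, 2))
    for i in range(P):
        d = np.linalg.norm(x - x[i], axis=1)
        nbr = np.where((d < h) & (d > 0.0))[0]      # neighbors j of particle i
        if len(nbr) == 0:
            continue
        pairwise = np.linalg.norm(m[i] * x[nbr] - m[nbr, None] * x[i], axis=1)
        feats[i, 0] = pairwise.sum() / (h * m[nbr].sum())   # X^i / h
        feats[i, 1] = len(nbr)                              # N^i
    return feats
```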
Next, in order to train a classifier, we have to generate fluid simulation data and determine for each particle which class it belongs to. This classification of the training data is done with a strategy similar to the one followed for our normal and curvature estimation, as outlined in Section 3.2 and Section 3.4: we randomly generate samples on a sphere enclosing a particle and determine the coverage of these samples by neighboring spheres. To achieve high accuracy in this classification, we employ a very large number of samples. In addition, any incorrect classification of a particle as surface (i.e., false positives) in this step can potentially be remedied by checking for full sphere coverage. Further details of the underlying Monte Carlo strategy will be presented below. Simulations to create the training data have been performed using the SPlisHSPlasH framework [7]. Scenes with particle numbers between 2K and 30K are employed, with different particle configurations, obstacles, boundaries, and gravity forces, to ensure broad coverage. The particles generated and initially classified in this way are plotted in our 2-dimensional feature space in Figure 2 (left). Please note that, using the described features, the two particle classes already exhibit a reasonable separation. It becomes apparent that a linear classifier may already suffice for the classification task.
For the classification step, machine learning strategies can be employed. Since we initially worked in higher-dimensional feature spaces, we decided to use a neural network classifier. Nevertheless, as discussed above, moving to a 2-dimensional feature space turned out to be sufficient for our purpose. We still employ a neural network as a linear classifier; however, a simpler approach, such as a support vector machine, would also be adequate. In this context, it should be noted that some recent work explored the use of machine learning in fluid simulation, albeit for obtaining solutions to the Navier–Stokes equations, instead of performing the task of classifying particles (e.g., [20,21,22,23]). With our technique, we effectively obtain a line separating the two classes in the feature space. However, since we strive to minimize (i.e., optimally avoid) false negatives, we opted to shift the line by a distance d along the ordinate, such that no false negatives remain with respect to the training data. The obtained linear classifier (a line of the form $y = kx + d$ in the feature space) is then applied in any new fluid simulation, dividing particles into surface and non-surface ones, progressively per time step. As visualized in Figure 2 (right), the selection of parameter d affects the particle classification, which will be discussed further below.
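Applying the trained classifier then reduces to evaluating one linear inequality per particle. The sketch below assumes the convention that surface particles lie below the line in the (X^i/h, N^i) plane (fewer neighbors), and that the slope k and offset d have been obtained as described above; both the orientation of the inequality and the parameter values are assumptions for illustration.

```python
def classify_surface(feats, k, d):
    """Apply the shifted linear classifier in the (X^i/h, N^i) feature space.

    feats: (P, 2) array as returned by classification_features().
    Returns a boolean mask that is True for (assumed) surface particles.
    """
    X_over_h, N = feats[:, 0], feats[:, 1]
    return N <= k * X_over_h + d   # assumed orientation: few neighbors => surface
```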
Applying this approach in our later tests, we did not encounter any false negatives in those simulations, also with different particle configurations and geometries. Still, false positives do result. In the experiments outlined in Section 4, the method yielded on average 0.014% false positives in the Droplet, 5.66% in the Dam break, and 4.75% in the Crown splash test scenario. Still, the method proved to work fast and to be robust with regard to false negatives. The performance of the method is three orders of magnitude faster than the timings reported by [3] for a double dam break scenario.
As a further well-defined test case, we also examined the classification for the example of a hyperbolic paraboloid. For this shape, the normals and curvatures are analytically known, with both positive and negative curvatures present. To obtain a matching SPH particle system for the analytic surface, we employed the paraboloid function to define a fluid container as an OBJ triangle-mesh geometry; the bottom of this container geometry is set to the shape of the paraboloid. Given this boundary mesh, we then carried out an SPH simulation using Implicit Incompressible SPH [24], with a constant step size of t = 0.001. The simulation was run until the system converged to a steady state, providing particles in the container, including the bottom. The particles classified as surface particles in the resulting SPH system are extracted and cropped for further comparison and analysis. Snapshots of the simulation, as well as the cropped region of surface particles at the container's bottom, are visualized in Figure 3 (note that the curvature computed for the particles is also indicated). To visualize the curvature, we employ a blue-white-red color map, with an optional atan scale revealing variations in regions of small values.
On this dataset, we examined the effect of the shift parameter d in the linear classifier. Figure 4 illustrates how points located near the surface in the cropped section were classified. Results are shown for the paraboloid setup with increasing d: the smaller the parameter, the fewer surface points are correctly classified (note that all should be classified as surface points for this cropped part). In this special case, a value of about d = 13 performed well, since the number of classified surface points as well as the computation time per simulation time step plateaued at this value. Please note that a lower value leads to faster computation times by avoiding re-classification in the Monte Carlo process. Nevertheless, low values may lead to incorrect classifications, wherefore a higher value should be preferable.

3.2. Normal Calculation

Once the classification has been finalized, we have to compute the surface normals, as well as the curvatures, per particle. Since we do not make use of any smoothed field in the fluid, we have to calculate both values using only the geometry as input. Both calculations follow a similar notion, wherefore the general idea will be outlined first. The following addresses the 3D case, but the concept applies analogously to 2D. The key idea in both cases is to first assume a sphere $S_1$ of radius $r_1$ around a considered surface particle at position $x^i$. The radius will always be selected equal to the SPH kernel size h. Next, additional spheres $S_2^j$ of radius $r_2$ are considered, with their respective centers given by all neighboring particles at positions $x^j$ (i.e., all surface and non-surface ones combined). For this, the neighborhood of a particle is again given according to kernel radius h. Also, note that $r_2$ is usually smaller than $r_1$. The neighboring spheres $S_2^j$ will overlap the initial sphere $S_1$, located at particle i, thus leaving a smaller spherical area $A_1$ that is not overlapped, i.e., not within the neighbor spheres.
Since we work with incompressible or weakly compressible SPH particle distributions, the density of the point cloud has to be nearly constant. Thus, it can be conjectured that the surface normal $n^i$ at the particle will point towards the centroid of the non-overlapped spherical area on the sphere. In addition, as will be discussed in more detail below, we also found that the fraction of the sphere that has not been covered is related to the surface curvature at that point. The area of the sphere that is not overlapped by the neighboring ones can be calculated with a spherical integral. However, this integral can be computationally very expensive to determine exactly, wherefore we propose to estimate it using a Monte Carlo integration strategy. With regard to the normal computation, we first generate $N^i$ random sample points $p_k$ on the surface of sphere $S_1$ of particle i. Initially, the positions are determined according to a uniform random distribution; below we will also examine other sampling strategies. Of the generated sample points, we next only consider those that are not inside any neighboring sphere $S_2^j$. For our following derivations, we will represent this with a binary function:
$$S(p_k) = \begin{cases} 0 & \text{if } p_k \text{ is overlapped,} \\ 1 & \text{if } p_k \text{ is not overlapped.} \end{cases}$$
Based on this, we obtain a first estimate of the surface normal:
$$\tilde{n}^i = \mathrm{nrm}\!\left( \sum_{k=1}^{N^i} \left( p_k - x^i \right) S(p_k) \right),$$
where $\mathrm{nrm}(\cdot)$ denotes normalization to unit length.
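A minimal sketch of this Monte Carlo normal estimate is given below, assuming precomputed unit sample directions (uniform here; quasi-random alternatives follow in Section 3.3) and NumPy arrays; the radii follow the notation above with $r_1 = h$.

```python
import numpy as np

def uniform_sphere_samples(n, seed=0):
    """n uniformly distributed unit directions (normalized Gaussian vectors)."""
    v = np.random.default_rng(seed).normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def mc_normal(x_i, x_nbr, h, r2, samples):
    """Raw Monte Carlo normal estimate at a surface particle.

    x_i: (3,) particle position; x_nbr: (K, 3) neighbor positions;
    samples: (N, 3) unit directions; h: radius of S_1; r2: radius of S_2^j.
    Returns a unit vector, or zeros if the sphere S_1 is fully covered.
    """
    p = x_i + h * samples                                       # sample points on S_1
    # S(p_k) = 1 if p_k lies outside every neighbor sphere S_2^j
    dist2 = np.sum((p[:, None, :] - x_nbr[None, :, :]) ** 2, axis=2)
    uncovered = np.all(dist2 > r2 * r2, axis=1)
    n = np.sum(p[uncovered] - x_i, axis=0)
    norm = np.linalg.norm(n)
    return n / norm if norm > 0.0 else np.zeros(3)
```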
Elements of this computation process are visualized in Figure 5b. Also note that non-surface particles are assigned a zero vector. Due to the probabilistic nature of our method, discontinuities in the estimated normal field may be encountered, especially at lower sample counts. However, the normal field should be as smooth as possible on the surface of the fluid. Therefore, we propose to carry out an additional smoothing step. First, we compute a weighted average of all neighboring surface particle normals, based on the results obtained in the previous step:
$$\tilde{n}^i_{Nei} = \sum_{j=1}^{N^i} W\!\left( \| x^j - x^i \| \right) \tilde{n}^j,$$
employing a weight kernel W, again with kernel radius h:
$$W(x) = \begin{cases} 0 & \text{if } x > h, \\ 1 - x/h & \text{if } x \le h. \end{cases}$$
Also note again that the normals of non-surface particles have been set to zero in the previous step. The final smoothed surface particle normal is then obtained by a weighted average of both computed temporary normals:
$$\hat{n}^i = \mathrm{nrm}\!\left( (1-\tau)\, \tilde{n}^i + \tau\, \tilde{n}^i_{Nei} \right),$$
where τ is a user-selected interpolation parameter. For all our computations we have set it to 0.5; this yielded, for instance for the paraboloid test case, stable and plausible curvature values in the calculations below. Also, for this geometry, varying τ did not strongly influence the angular error or the computation time. The outcome of the normal smoothing process is also illustrated in Figure 5c. Further note that this smoothing step can be repeated several times.
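The smoothing pass could be sketched as follows; this is a minimal, brute-force version under the assumption that non-surface particles carry zero normals (as stated above), with the question of whether particle i itself contributes to the neighbor average left as an implementation choice (excluded here).

```python
import numpy as np

def smooth_normals(x, n_tilde, h, tau=0.5):
    """One smoothing pass over the raw Monte Carlo normals.

    x: (P, dim) positions; n_tilde: (P, dim) raw normals (zero for non-surface);
    h: kernel radius; tau: blend factor between raw and neighbor-averaged normal.
    """
    n_hat = np.zeros_like(n_tilde)
    for i in range(len(x)):
        if not np.any(n_tilde[i]):                 # keep non-surface normals at zero
            continue
        d = np.linalg.norm(x - x[i], axis=1)
        w = np.where(d <= h, 1.0 - d / h, 0.0)     # linear weight kernel W
        w[i] = 0.0                                 # exclude particle i itself
        n_nei = (w[:, None] * n_tilde).sum(axis=0)
        n = (1.0 - tau) * n_tilde[i] + tau * n_nei
        norm = np.linalg.norm(n)
        n_hat[i] = n / norm if norm > 0.0 else n_tilde[i]
    return n_hat
```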
To evaluate the accuracy of the proposed normal estimation process, we carried out comparisons between analytically defined and estimated normals. As error measure, we determine the angle between the vectors via $\arccos(\hat{n}^i \cdot n_a)$, where $\hat{n}^i$ is the estimated normal and $n_a$ the analytically correct one. In our first test case, we obtained the latter for the geometry of a 2D circle; a random 2D point cloud is generated by uniformly sampling the geometry. Next, due to the random nature of our estimation process, we determine the final error value as the average of 100 computations. The results of this study are summarized in Figure 6 (left). As can be seen, the average error depends both on the number of samples and on the number of smoothing steps. The higher the number of sampling points, the more accurate the approximation becomes, while additional smoothing improves the estimates by filtering out noise incurred by the representation as a point cloud.
A second, more comprehensive analysis in 3D has also been carried out, using the previously described hyperbolic paraboloid test case. We compared the estimated normals to the analytically defined ones. In addition, we also compute the error metric for an alternative sampling approach using pseudo-random Halton sampling (described in detail below). Finally, for further evaluation, we also compute normals on the point cloud via a principal component analysis (PCA) in a local neighborhood, similar to [25]. For the latter, the mean center c of a surface point neighborhood is computed and used to determine the normal $\tilde{n}^i_{Pca}$ as the minor eigenvector of the local (weighted) covariance matrix $\mathbf{C}$:
$$\mathbf{C} = \frac{\sum_{j=1}^{N} W(d_j)\, v_j v_j^{\mathsf{T}}}{\sum_{j=1}^{N} W(d_j)}, \quad \text{with } v_j = p_j - c,\; d_j = |v_j| / r,$$
with W the cubic support kernel and r the support radius. Please note that, in contrast to the Monte Carlo normal, the orientation is not known; for our experiment, we set the orientation according to the Monte Carlo normals. Figure 6 (right) shows the error mean and standard deviation for three sampling methods, examining also the effect of smoothing and the number of samples, for the paraboloid test case. The angular error is low, at below 3° for high sample counts. Employing a PCA normal computation also leads to good accuracy, but takes about 1.6 ms of additional computation time in the paraboloid experiment.
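For reference, the PCA normal used in this comparison could be sketched as below. The cubic weight $W(d_j) = \max(0, 1-d_j)^3$ is an assumption standing in for the cubic support kernel mentioned above, and the sign is aligned with a given reference normal (e.g., the Monte Carlo estimate), as described in the text.

```python
import numpy as np

def pca_normal(points, query, r, reference_normal):
    """Weighted-PCA normal of the surface point 'query' within support radius r.

    points: (P, 3) surface point positions. Returns the eigenvector of the
    weighted covariance matrix with the smallest eigenvalue, sign-aligned
    with reference_normal.
    """
    d = np.linalg.norm(points - query, axis=1)
    nbrs = points[d <= r]
    c = nbrs.mean(axis=0)                           # mean center of the neighborhood
    v = nbrs - c
    dj = np.linalg.norm(v, axis=1) / r
    w = np.maximum(1.0 - dj, 0.0) ** 3              # assumed cubic support kernel W
    C = (w[:, None, None] * (v[:, :, None] * v[:, None, :])).sum(axis=0) / w.sum()
    eigvals, eigvecs = np.linalg.eigh(C)            # eigenvalues in ascending order
    n = eigvecs[:, 0]                               # minor eigenvector
    return n if np.dot(n, reference_normal) >= 0.0 else -n
```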

3.3. Sampling Scheme

As indicated above, akin to the notion of importance sampling, we explored possible improvements of the naive, uniformly distributed random Monte Carlo sampling. The first straightforward improvement is to employ pre-computed instead of run-time generated random spherical direction vectors; the former can be stored in look-up tables. We employed 16,384 single-precision pre-computed direction vectors, accessed with a modulo index operation. Please note that the quality of the results correlates with precision and table size. Secondly, we also tested pseudo-random schemes that provide more homogeneous sample distributions. We tested various Halton sequences [26] as well as blue noise sampling (see Figure 7). For the former, the 2-3 scheme in 2D provided the best results; for the latter, we employed the implementation of [27], which realizes a Poisson distribution on Wang tiles [28].
As the best performing scheme, we selected the Halton 2-3 ($H_{23}$) scheme for further analysis. Using a look-up table, as well as reducing the number of samples, results in a speedup; the reduction is possible mainly due to the more homogeneous sample distribution. We further examined the effect of the number of samples in this context. For our paraboloid test geometry, we calculated the mean angular error of the Monte Carlo (MC) normals with respect to the analytic normals; in addition, the surface computation time was measured. Both are averaged over 100 steady-state simulation steps. In Figure 8, the results are shown for the uniform distribution ($Rnd$), blue noise ($BN$), and $H_{23}$. To reach a small mean angular error below 5°, $Rnd$ requires about 360 samples (note the values marked by dashed boxes). Using $BN$, the speedup due to the look-up table can be seen; nevertheless, obtaining errors below 5° still requires 360 samples. Switching to $H_{23}$ allows reducing the samples to 120 for a comparable error, yielding a good speedup of 8.4.
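A sketch of generating such quasi-random directions is given below. The Halton radical inverse follows the standard construction; the particular square-to-sphere mapping (cylindrical equal-area) is an assumption, as any area-preserving mapping of the 2D Halton points onto the sphere would serve the same purpose.

```python
import numpy as np

def radical_inverse(index, base):
    """Van der Corput radical inverse of 'index' in the given base (Halton component)."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def halton23_sphere_directions(n):
    """n unit directions from the Halton 2-3 sequence, mapped to the sphere."""
    dirs = np.empty((n, 3))
    for k in range(n):
        u = radical_inverse(k + 1, 2)          # start at index 1
        v = radical_inverse(k + 1, 3)
        z = 2.0 * u - 1.0                      # uniform in [-1, 1] (equal-area mapping)
        phi = 2.0 * np.pi * v
        rho = np.sqrt(max(0.0, 1.0 - z * z))
        dirs[k] = (rho * np.cos(phi), rho * np.sin(phi), z)
    return dirs

# The directions can be precomputed once into a look-up table and then indexed
# with a modulo operation per particle, as described above.
```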

3.4. Curvature Calculation

The curvature of a 3D surface is locally given by two values, the principal curvatures [29]. These are defined as the eigenvalues of the shape operator at a point on the surface; by averaging the two, we obtain the mean curvature κ. Standard Gaussian and mean curvature estimation tends to fail on point clouds that include noise. We have found that it is possible to establish a direct relation between the mean curvature and the fraction of a sphere that is not covered by neighboring ones, via the process outlined above. We begin by noting that the fraction of the uncovered surface area of a sphere, using the binary coverage function S introduced above, is given by:
$$\lambda = \frac{1}{4\pi r^2} \int\!\!\int S\big(x(\theta, \varphi)\big)\, r^2 \sin(\varphi)\, d\theta\, d\varphi,$$
where $x(\theta, \varphi)$ is a location on the sphere surface with spherical coordinates $\theta$ and $\varphi$. As before, instead of attempting to compute this value exactly, we will approximate it, for a particle i, based on random samples following a Monte Carlo integration strategy:
$$\lambda \approx \frac{1}{N^i} \sum_{k=1}^{N^i} S(p_k).$$
As will be seen later, it is possible to estimate κ from λ, which itself can be determined from the randomly sampled points $p_k$. Please note that $\lambda \in [0, 1]$. We will first derive the underlying relationship in 2D, and later extend it to 3D.

3.4.1. Relationship in 2D

The following derivation closely refers to the illustration and notation in Figure 9. We start by assuming, in 2D, a circular outline (shown at the bottom in gray), representing a shape for which the curvature should be determined. The circle has a radius of R, and thus the sought curvature κ is in this case given analytically by the reciprocal 1/R. However, the formalism should later be applicable to any arbitrary shape or curve, based on randomly sampled locations.
First, in order to render our derivation independent of particle size, we will estimate an adjusted curvature parameter $\tilde{\kappa} = h\kappa$, considering correspondingly also an adapted $\tilde{R} = R/h$. With this in mind, as a starting point for examining the relationship between λ and $\tilde{\kappa}$, we derive a lower bound $\lambda_{min}$, i.e., in 2D the minimal arc that would not be covered by neighboring circles. For this, first consider a particle i on the circular outline. We associate with this particle again a circle $C_1$, with radius $r_1$ and center $x^i$. Next, consider additional neighboring particles j, akin to what was discussed above; to these correspond circles $C_2^j$ with centers $x^j$, all with the same radius $r_2 < r_1$, overlapping circle $C_1$. Please note that the maximal overlap will result for those particles j that are also located on the circular outline; in 2D there would be two of these, next to particle i. Thus, we have to find the geometrical configuration in which the circle $C_2^j$ around such a particle j covers a maximal arc on $C_1$. When the circles overlap, we can find two intersection points; denoting the outer one as $x_I$, the maximum coverage results when the vector between $x_I$ and $x^j$ is perpendicular to the vector between $x^i$ and $x^j$; see also Figure 9 (right). In this situation, the angle between the normal at particle i and the vector between $x^i$ and $x_I$ is given as φ. Also note that this angle can be obtained via:
$$\varphi = \begin{cases} \pi - \alpha - \beta & \text{if } \tilde{\kappa} > 0, \\ \beta - \alpha & \text{if } \tilde{\kappa} \le 0, \end{cases}$$
where angles α and β are defined based on the chord between the particle positions, as depicted. Also note that the distance between the latter is given as:
$$d = \sqrt{r_1^2 - r_2^2} = h \sqrt{1 - (r_2/r_1)^2}.$$
According to the geometric configuration, both angles can be obtained via:
$$\alpha = \sin^{-1}\!\left( r_2/r_1 \right),$$
$$\beta = \cos^{-1}\!\left( \frac{d}{2R} \right) = \cos^{-1}\!\left( \frac{\tilde{\kappa}}{2} \sqrt{1 - (r_2/r_1)^2} \right).$$
Finally, due to having two neighboring particles in a symmetric configuration, we have to consider 2φ for the non-covered arc. Overall, we obtain $\lambda_{min} = 2\varphi / (2\pi)$. Using the previous equations, we obtain a closed-form solution, independent of the sign of the curvature:
$$\tilde{\kappa} = 2 \left[ 1 - (r_2/r_1)^2 \right]^{-1/2} \cos\!\left( \lambda_{min} \pi + \alpha \right),$$
where the adjusted curvature is related to the ratio of the radii $r_2/r_1$ and to the minimal uncovered fraction $\lambda_{min}$, which we approximate via random sampling.

3.4.2. Relationship in 3D

In 3D, we follow a similar derivation. As before, we attempt to do this by estimating the ratio of a minimal, uncovered spherical surface area to the complete surface of a sphere. Again, we assume a particle i on this surface, surrounded by several particles j, for which local spheres of radius $r_1$ and $r_2$ are again defined. In 3D, the uncovered spherical surface area A will be a spherical cap, which is given analytically. A cap on a sphere with radius R, defined by an opening angle φ, is given as:
$$A = \int_0^{\varphi} \!\! \int_0^{2\pi} R^2 \sin(\varphi')\, d\theta\, d\varphi' = 2\pi R^2 \left( 1 - \cos(\varphi) \right).$$
As in the 2D case, we compute $\lambda_{min}$ based on the non-covered surface area:
$$\lambda_{min} = \frac{2\pi R^2 (1 - \cos(\varphi))}{4\pi R^2} = 0.5\, \big( 1 - \cos(\varphi) \big).$$
Thus, rearranging the terms, we can express the adjusted curvature analytically in 3D as well:
$$\tilde{\kappa} = 2 \left[ 1 - (r_2/r_1)^2 \right]^{-1/2} \cos\!\left( \cos^{-1}\!\left( 1 - 2\lambda_{min} \right) + \alpha \right),$$
again depending on the ratio $r_2/r_1$ and on $\lambda_{min}$, which we can estimate. For our implementation and the later test scenarios, we employed the ratio $r_2/r_1 = 0.8$, which generally yielded optimal performance. In line with this, in another example scene, i.e., the zero gravity droplet, artifacts were encountered when reducing the value to 0.7; particles occasionally became disconnected from the surface and remained as outliers. In contrast, the paraboloid case did allow for lower ratios of $r_2/r_1$. Nevertheless, using the above ratio was the most robust over all test cases.
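Putting the pieces together, the curvature estimate per surface particle could be sketched as follows. This is a minimal sketch under the equation reconstructions above, reusing the same coverage test as for the normals; the clipping of the arccos argument is an added numerical safeguard, not part of the original formulation.

```python
import numpy as np

def mc_adjusted_curvature(x_i, x_nbr, h, ratio, samples):
    """Monte Carlo estimate of the adjusted curvature kappa_tilde = h * kappa.

    x_i: (3,) surface particle position; x_nbr: (K, 3) neighbor positions;
    ratio: r2/r1 (0.8 in the paper); samples: (N, 3) unit directions.
    """
    r1, r2 = h, ratio * h
    p = x_i + r1 * samples                                      # sample points on S_1
    dist2 = np.sum((p[:, None, :] - x_nbr[None, :, :]) ** 2, axis=2)
    lam = np.all(dist2 > r2 * r2, axis=1).mean()                # uncovered fraction lambda
    alpha = np.arcsin(ratio)
    phi = np.arccos(np.clip(1.0 - 2.0 * lam, -1.0, 1.0))        # clip: numerical safeguard
    return 2.0 / np.sqrt(1.0 - ratio ** 2) * np.cos(phi + alpha)

# Dividing the result by h yields the mean curvature kappa used in the force term.
```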
To evaluate our curvature estimation method, we compare our approximations with analytically defined mean curvature values. The latter can, for instance, be obtained in closed form for any point on an ellipsoid [30]. Thus, we create an ellipsoidal point cloud, obtain our estimate for it, and compare to the ground truth. Due to the stochastic nature of our method, we determine the mean and the standard deviation over 40 measurements. Moreover, since accuracy again depends on the number of samples, we also tested our method for different sample counts. The results of this validation are compiled in Figure 10. As can be seen, for smaller curvatures our estimation approaches the correct solution, independent of the curvature sign. Moreover, even for a small number of samples, our estimated average curvature is close to the correct solution. Nevertheless, the standard deviation is large for small sample numbers, but can be reduced by increasing the number of samples. In addition, we found that larger curvatures also resulted in larger errors in the mean curvature; this is due to the sphere radius $r_1$ becoming comparable in size to actual surface features.
Finally, it should be noted that our initial development of the method was in 2D. There, the circle segment shown in Figure 9 (left) can efficiently be computed analytically. Nevertheless, in 3D the matching computation of a spherical segment becomes more difficult and costly, wherefore we introduced the Monte Carlo approach.

3.4.3. Curvature Smoothing and Adaptive Sampling

Similar to the normal field, the curvature field should also be smooth along the surface, and the probabilistic nature of the estimation process may again introduce artifacts. Thus, we suggest applying one or more smoothing steps, averaging the computed curvatures in a local neighborhood. Furthermore, as already seen in Figure 6, the accuracy of Monte Carlo approaches depends on the number of samples. A straightforward approach could be to employ a constant number at all times; however, we have found it advantageous to adapt the number according to the underlying numerical simulation. The idea is inspired by the Courant–Friedrichs–Lewy (CFL) condition [1], which relates numerical time step, spatial discretization, and propagation velocity. According to this, solution time steps in SPH algorithms are often adaptively adjusted, commonly based on forces or velocities of the fluid particles. Along this line, we propose to adjust the number of random sampling points used per time step as $N = t_s\, C_{SD}$, with time step $t_s$ and user-selected proportionality constant $C_{SD}$. The latter can be considered a sampling density factor, its value representing a trade-off between accuracy and computation speed. We have achieved good results with setting this parameter to 10,000–100,000. Our adaptive sampling makes the total number of samples per particle over a simulation time period independent of the numerical time step size.
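For illustration, this adaptive rule is a one-liner; the lower bound below is an added assumption to keep the estimate usable for very small time steps, and C_SD = 10,000 is one of the values reported above.

```python
def adaptive_sample_count(time_step, c_sd=10_000, n_min=10):
    """Number of Monte Carlo samples per particle for the current time step (N = t_s * C_SD)."""
    return max(n_min, int(round(time_step * c_sd)))

# Example: a 1 ms time step with C_SD = 100,000 yields 100 samples per particle.
```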

4. Results

To further evaluate our method, we compare it to approaches employed in prior work for the computation of surface tension, specifically the works by Becker and Teschner (2007) [5] and by Akinci, Akinci, and Teschner (2013) [6]. Furthermore, the surface tension calculations are integrated into different full SPH solvers, specifically weakly compressible SPH (WCSPH) [5], predictive-corrective incompressible SPH (PCISPH) [31], and divergence-free SPH (DFSPH) [32,33], in a reference implementation. As a framework for the comparisons, we employ SPlisHSPlasH [7], an open source environment for physically-based simulation of fluids, which provides the implementations of the mentioned comparison algorithms. All computations were performed using only the CPU; i.e., no optimizations such as GPU calculations were employed. Furthermore, we obtain computation times for three different common test scenes, as also suggested by [34]: Droplet, Crown splash, and Dam break. These scenes cover various dynamic behaviors and also require different time step intervals, listed in Table 1, according to the CFL condition.
The timing results of the comparison experiment are provided in Table 2. As can be seen, in all cases our method resulted in reduced average total computation times. Please note that in all cases visually highly similar fluid simulation results were obtained, and no instabilities were encountered. Moreover, the smaller the time step, the better our method performed in the computation of $f_{st}$ compared to the other two surface tension calculation methods; this becomes especially evident for the Crown splash scene (Figure 11), which employed the smallest time step and for which significant improvements resulted in this step. However, for larger time steps, the advantage of employing our proposed approach for calculating $f_{st}$ is reduced; for instance, in the Droplet scene, for both DFSPH and PCISPH, the surface tension calculation times turned out to be slower for our method; nevertheless, the total computation time per time step still remained better. We assume this to be due to non-surface particles also experiencing non-zero surface tension forces in the other methods, which requires additional iterations of the pressure solver to achieve the correct fluid density.

5. Conclusions

We presented a method to accelerate the calculation of surface tension forces in SPH fluid animations. In contrast to most other approaches, we discriminate between surface and non-surface particles. This leads to an improvement in the computation time, since the forces are calculated for just a fraction of the particles. Based on this, we can effectively smooth and minimize the surface of the fluid. The accuracy of our method can be tuned by adjusting the value of $C_{SD}$. When the time step is reduced, e.g., according to the CFL condition, the number of sampling points $N^i$ is also adjusted. Surprisingly, we found that even for a low number of samples, around $N^i = 10$, the simulation remains stable. It is in cases when the time step is small (< 1 ms) that our method offers a considerably improved performance over the other tested methods. However, when the simulation runs with larger time steps (around $t_s = 5$ ms), the advantage is diminished; still, the computation time of our method remains comparable to the other algorithms.
A disadvantage we encountered when using a very low resolution in the sampling is the possible creation of incorrect momentum. The sum of the forces around a closed fluid surface should vanish, e.g., in the Droplet scenario, but for low resolution sampling of the integral this is not ensured. This drawback could not be alleviated by employing the pseudo-random Halton 2–3 sampling scheme. However, we noticed that selection of the parameter d may have an effect on this. Still, more analysis is required to characterize the influence of the parameters on minimizing the momentum effect while staying efficient and physically correct. An improvement may be possible by combining the Monte Carlo normals with normals obtained from PCA methods, at an increased computational cost. This idea was implemented and will be explored further in future work.
Our approach also includes an estimation of the local mean curvature at the fluid particles; here, the particle set can be considered a general point cloud. Since the surface interface in any point cloud is not clearly defined, neither is the curvature. Related works, e.g., [8,9,10], compute the divergence of the gradient of the color field to estimate the curvature; this leads to a quantity that is only proportional to the exact curvature. In our method, we obtain an approximation of the curvature based on a spherical integral. The procedure is similar to searching for a spherical surface locally best fitting the point cloud. Our method can effectively be coupled with any other SPH algorithm, since it only takes the geometry as input for the computation. It can be employed to improve the overall SPH computation time when smoothing fluid surfaces in computer graphics applications. It is left for future work to explore the possibility of applying this type of procedure in other contexts of fluid simulations. As a possible extension, we will explore the performance of the method in the context of multi-scale SPH models, i.e., when different sampling densities are employed. Finally, the source code as well as the test scenes of our method are provided via a pull request to the SPlisHSPlasH git repository [7].

Author Contributions

Conceptualization, F.Z. and M.H.; methodology, F.Z. and M.R.; software, F.Z. and M.R.; validation, F.Z., J.H. and M.R.; investigation, F.Z., J.H., and M.R.; writing–original draft preparation, F.Z., J.S. and M.R.; writing–review and editing, M.H. and W.R.; visualization, F.Z. and M.R.; supervision, M.H. and W.R.; project administration, M.H.; funding acquisition, M.H. and W.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded through the Vice Rectorate of Research of the University of Innsbruck; and has been carried out in the scope of the doctoral school DK-CIM.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Monaghan, J.J. Smoothed particle hydrodynamics. Annu. Rev. Astron. Astrophys. 1992, 30, 543–574.
2. He, X.; Wang, H.; Zhang, F.; Wang, H.; Wang, G.; Zhou, K. Robust simulation of sparsely sampled thin features in SPH-based free surface flows. ACM Trans. Graph. 2014, 34, 7.
3. Sandim, M.; Cedrim, D.; Nonato, L.G.; Pagliosa, P.; Paiva, A. Boundary Detection in Particle-based Fluids. Comput. Graph. Forum 2016, 35, 215–224.
4. Dilts, G.A. Moving least-squares particle hydrodynamics II: Conservation and boundaries. Int. J. Numer. Methods Eng. 2000, 48, 1503–1524.
5. Becker, M.; Teschner, M. Weakly compressible SPH for free surface flows. In Proceedings of the 2007 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, San Diego, CA, USA, 2–4 August 2007; pp. 209–217.
6. Akinci, N.; Akinci, G.; Teschner, M. Versatile surface tension and adhesion for SPH fluids. ACM Trans. Graph. 2013, 32, 182.
7. Bender, J. SPlisHSPlasH, 2019. Available online: https://github.com/InteractiveComputerGraphics//SPlisHSPlasH (accessed on 20 March 2020).
8. Morris, J.P. Simulating surface tension with smoothed particle hydrodynamics. Int. J. Numer. Methods Fluids 2000, 33, 333–353.
9. Müller, M.; Charypar, D.; Gross, M. Particle-based fluid simulation for interactive applications. In Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, San Diego, CA, USA, 26–27 July 2003; pp. 154–159.
10. Keiser, R.; Adams, B.; Gasser, D.; Bazzi, P.; Dutré, P.; Gross, M. A unified Lagrangian approach to solid-fluid animation. In Proceedings of the 2005 Eurographics/IEEE VGTC Symposium on Point-Based Graphics, Stony Brook, NY, USA; pp. 125–148.
11. Kelager, M. Lagrangian fluid dynamics using smoothed particle hydrodynamics. University of Copenhagen, Department of Computer Science, 2006, 24–26.
12. Tartakovsky, A.; Meakin, P. Modeling of surface tension and contact angles with smoothed particle hydrodynamics. Phys. Rev. E 2005, 72, 026301.
13. Yu, J.; Wojtan, C.; Turk, G.; Yap, C. Explicit mesh surfaces for particle based fluids. Comput. Graph. Forum 2012, 31, 815–824.
14. Zhu, B.; Quigley, E.; Cong, M.; Solomon, J.; Fedkiw, R. Codimensional Surface Tension Flow on Simplicial Complexes. ACM Trans. Graph. 2014, 33, 111:1–111:11.
15. Foorginejad, A.; Khalili, K. Umbrella curvature: A new curvature estimation method for point clouds. Procedia Technol. 2014, 12, 347–352.
16. Mérigot, Q.; Ovsjanikov, M.; Guibas, L. Robust Voronoi-based curvature and feature estimation. In Proceedings of the 2009 SIAM/ACM Joint Conference on Geometric and Physical Modeling, San Francisco, CA, USA, 5–8 October 2009; pp. 1–12.
17. Zhu, Y.; Bridson, R. Animating Sand As a Fluid. ACM Trans. Graph. 2005, 24, 965–972.
18. Yu, J.; Turk, G. Reconstructing Surfaces of Particle-based Fluids Using Anisotropic Kernels. ACM Trans. Graph. 2013, 32, 5:1–5:12.
19. Brackbill, J.U.; Kothe, D.B.; Zemach, C. A continuum method for modeling surface tension. J. Comput. Phys. 1992, 100, 335–354.
20. Tompson, J.; Schlachter, K.; Sprechmann, P.; Perlin, K. Accelerating Eulerian fluid simulation with convolutional networks. In Proceedings of the 34th International Conference on Machine Learning (Volume 70), Sydney, Australia, 6–11 August 2017; pp. 3424–3433.
21. Chu, M.; Thuerey, N. Data-driven synthesis of smoke flows with CNN-based feature descriptors. ACM Trans. Graph. 2017, 36, 1–14.
22. Wiewel, S.; Becher, M.; Thuerey, N. Latent-space Physics: Towards Learning the Temporal Evolution of Fluid Flow. Comput. Graph. Forum 2019, 38, 71–82.
23. Jeong, S.; Solenthaler, B.; Pollefeys, M.; Gross, M. Data-driven fluid simulations using regression forests. ACM Trans. Graph. 2015, 34, 199.
24. Ihmsen, M.; Cornelis, J.; Solenthaler, B.; Horvath, C.; Teschner, M. Implicit Incompressible SPH. IEEE Trans. Vis. Comput. Graph. 2014, 20, 426–435.
25. Hoppe, H.; DeRose, T.; Duchamp, T.; McDonald, J.; Stuetzle, W. Surface Reconstruction from Unorganized Points. SIGGRAPH Comput. Graph. 1992, 26, 71–78.
26. Halton, J.H. On the Efficiency of Certain Quasi-Random Sequences of Points in Evaluating Multi-Dimensional Integrals. Numer. Math. 1960, 2, 84–90.
27. Esteve, J. Stippling, 2019. Available online: https://github.com/joesfer/Stippling (accessed on 20 March 2020).
28. Kopf, J.; Cohen-Or, D.; Deussen, O.; Lischinski, D. Recursive Wang Tiles for Real-Time Blue Noise. In ACM SIGGRAPH 2006 Papers; Association for Computing Machinery: New York, NY, USA, 2006; pp. 509–518.
29. Goldman, R. Curvature formulas for implicit curves and surfaces. Comput. Aided Geom. Des. 2005, 22, 632–658.
30. Bektas, S. Generalized Euler Formula for Curvature. Int. J. Res. Eng. Appl. Sci. 2016, 6, 292–304.
31. Solenthaler, B.; Pajarola, R. Predictive-corrective incompressible SPH. In Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference, New Orleans, LA, USA, 3–7 August 2009; pp. 1–6.
32. Bender, J.; Koschier, D. Divergence-free SPH for incompressible and viscous fluids. IEEE Trans. Vis. Comput. Graph. 2017, 23, 1193–1206.
33. Bender, J.; Koschier, D. Divergence-free smoothed particle hydrodynamics. In Proceedings of the 14th ACM SIGGRAPH/Eurographics Symposium on Computer Animation, Los Angeles, CA, USA, 7–9 August 2015; pp. 147–155.
34. Huber, M.; Reinhardt, S.; Weiskopf, D.; Eberhardt, B. Evaluation of Surface Tension Models for SPH-Based Fluid Animations Using a Benchmark Test. In Proceedings of the VRIPHYS 2015: 12th Workshop on Virtual Reality Interaction and Physical Simulation, Lyon, France, 4–5 November 2015; pp. 41–50.
Figure 1. SPH time evolution of droplets in 3D (initially cuboid) in zero gravity, coalescing into a single spherical droplet. Results of our method (left) and comparison to final results obtained with alternative approaches [2,5,6] from the literature (right). Surface tension calculation is accelerated using our method; convex and concave regions can be robustly handled. The shape evolution develops as physically expected. For the comparisons we employed implementations available in the SPlisHSPlasH framework [7]. Please note that the alternative methods did not converge to a spherical droplet; the illustrated final steady state configuration was reached at the denoted simulation time $t_{eq}$.
Figure 2. Left: 2D feature space for particle classification. Each point represents a surface (blue) or non-surface particle (green), for the test simulation data. The dashed black line indicates the linear classifier, which was shifted such as to result in no false negatives by adjusting the constant parameter d in the linear classifier. Please note that some false positives are still encountered. Right: Effect of different values of d used in the classifier on a snapshot of the double droplet case, cut by a clipping plane in half to visualize the interior. Selecting d = 13 leads in this example to misclassifications in negative curvature regions, while setting d = 28 avoids this. Moreover, a high value of d = 35 yields false positives, which can, however, be captured by checking for full sphere coverage (white particles).
Figure 3. A hyperbolic paraboloid was used as ground truth geometry for measurements. Left to right: snapshot of the simulation filling up the geometry in SPlisHSPlasH; exported surface particles at the cropped bottom boundary; cross section of the surface particles with close-up showing surface normals: analytic (black), Monte Carlo (red), principal component analysis (PCA) (blue); particle positions of the simulation (large points) and analytically projected positions (small points).
Figure 4. Effect of varying the constant parameter d of the linear classifier for the paraboloid test scene (left to right: d = 0, 6, 13); points classified as surface (blue) / non-surface (green), and number of surface points and computation time over d. In this special case, a lower value of d = 13 yielded a plateau of the number-of-points graph (blue) with an optimized computation time (gray).
Figure 5. Visualization of normal estimation and smoothing in 2D. (a): point cloud with surface (blue) and non-surface (green) particles. (b): samples p k on circle around surface particle i; black crosses are overlapped by neighbor circles; light blue dots not, thus these are used for normal estimation. (c): initially estimated normals (light blue) and smoothed Monte Carlo normals (red).
Figure 6. Left: Surface normal estimation error for a 2D circle test case. The average error depends on the number of samples in the Monte Carlo integration as well as on the number of times a smoothing algorithm is applied. Right: Normal estimation error for the 3D paraboloid experiment: high (240) and low (30) sample rate, smoothing, and principal component analysis (PCA) normal. The $H_{23}$ Halton sampling yields better results than the uniformly distributed random sampling ($Rnd$). In general, the smoothing also reduces the errors effectively.
Figure 7. Different sampling schemes were tested for the Monte Carlo integration, e.g. (left to right): uniform distribution ($Rnd$), blue noise ($BN$), Halton 2D samples with the 2-3 scheme ($H_{23}$). The order/index of 4096 samples is illustrated by the intensity ramp. Here, blue noise results in the most regular distribution. Blue noise and Halton numbers cover the space well within small index intervals (similarly colored dots distributed homogeneously).
Figure 8. Angular error measures (desaturated) and timing (saturated colors) on the paraboloid test case, using uniform distribution ($Rnd$), blue noise ($BN$), and Halton 2-3 ($H_{23}$). The striped box highlights similar angular errors of just below 5° for the different sampling methods. Employing the $H_{23}$ scheme allows reducing the sample count down to 120 while staying below 5°.
Figure 9. Configuration for maximal arc coverage of a circle around a neighbor particle j on a circle around particle i. Left: A second neighbor is indicated as light grey dotted circle. The arc coverage λ is related to the curvature κ expressed as 1 / R . Right: Geometric details for the neighbor particle j.
Figure 10. Comparison of analytically defined (black) with our estimated (colored) curvatures, obtained for an ellipsoid with semi-axes ( a = 100 , b = 200 , c = 400 ); the latter is approximated for our method with a random point cloud. Measurements given both for negative (inside) and positive (outside) curvature. Average estimated curvatures and standard deviations are provided, based on 40 measurements, for different numbers of random samples N.
Figure 11. Left: Example of the crown splash simulation using DFSPH and our surface tension force estimation (adaptive sampling with $C_{SD}$ = 10,000, adaptive time step 0.1–1 ms). Right: Example of the dam break simulation using DFSPH and our surface tension force estimation (adaptive sampling with $C_{SD}$ = 10,000, adaptive time step 0.5–2 ms). The color change illustrates differences in velocity.
Table 1. Time step configurations according to the CFL condition for the experiments shown in Table 2.
Time Step Size
Scene      DFSPH       PCISPH      WCSPH
Droplet    5 ms        5 ms        1 ms
Crown      0.1–1 ms (adaptive, all solvers)
Dam Break  0.5–2 ms (adaptive, all solvers)
Table 2. Average computation times in milliseconds per simulation time step, for surface tension calculation as well as the complete SPH solutions. The lowest values are printed in bold font. Three different SPH solvers were employed (DFSPH, PCISPH and WCSPH), as well as three different test scenes (Droplet, Crown, Dam Break), all computed in SPlisHSPlasH. Our proposed adaptive adjustment of sample numbers is employed; also time steps are adapted dynamically according to the CFL condition.
Solver   Scene       N       Becker 2007 [5]         Akinci 2013 [6]         Our Method
                             f_st [ms]   Total [ms]  f_st [ms]   Total [ms]  f_st [ms]   Total [ms]
DFSPH    Droplet     10.1k   6.60        41.89       10.20       49.22       8.95        38.43
         Crown       145k    62.19       478.44      103.89      508.48      17.25       424.81
         Dam Break   26.1k   16.13       116.05      19.49       111.48      5.44        74.17
PCISPH   Droplet     10.1k   7.15        98.91       11.10       129.17      9.62        51.09
         Crown       145k    87.18       1089.16     115.36      961.35      19.61       859.49
         Dam Break   26.1k   18.01       435.23      22.11       353.40      6.86        337.70
WCSPH    Droplet     10.1k   7.46        26.64       11.28       30.22       3.96        23.82
         Crown       145k    83.93       312.13      112.73      349.34      18.28       258.49
         Dam Break   26.1k   15.67       61.90       22.38       69.13       9.67        56.71
