Absolute Eigenvalue Filtering for Projected Newton (2024)


Honglin Chen (Columbia University, USA; honglin.chen@columbia.edu), Hsueh-Ti Derek Liu (Roblox Research & University of British Columbia, Canada; hsuehtil@gmail.com), David I.W. Levin (University of Toronto & NVIDIA, Canada; diwlevin@cs.toronto.edu), Changxi Zheng (Columbia University, USA; cxz@cs.columbia.edu), and Alec Jacobson (University of Toronto & Adobe Research, Canada; jacobson@cs.toronto.edu)


Abstract.

Volume-preserving hyperelastic materials are widely used to model near-incompressible materials such as rubber and soft tissues. However, the numerical simulation of volume-preserving hyperelastic materials is notoriously challenging in this regime due to the non-convexity of the energy function. In this work, we identify the pitfalls of the popular eigenvalue clamping strategy for projecting Hessian matrices to positive semi-definiteness during Newton's method. We introduce a novel eigenvalue filtering strategy for projected Newton's method to stabilize the optimization of Neo-Hookean energy and other volume-preserving variants under high Poisson's ratio (near 0.5) and large initial volume change. Our method only requires a single line of code change in the existing projected Newton framework, while achieving significant improvement in both stability and convergence speed. We demonstrate the effectiveness and efficiency of our eigenvalue projection scheme on a variety of challenging examples and over different deformations on a large dataset.

Neo-Hookean elasticity, Projected Newton, eigenvalue filtering.

Submission ID: 486. Journal year: 2024. Copyright: ACM licensed. Conference: Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers '24 (SIGGRAPH Conference Papers '24), July 27-August 1, 2024, Denver, CO, USA. DOI: 10.1145/3641519.3657433. ISBN: 979-8-4007-0525-0/24/07. CCS: Computing methodologies, Physical simulation.

1. Introduction

Volume-preserving hyperelastic energies play a crucial role in accurately capturing the near-incompressible nature of soft tissues and rubber-like materials. Among the common hyperelastic material models, the Neo-Hookean model and its variants are usually the de facto choice for modeling such materials with high Poisson's ratios $\nu \in [0.45, 0.5)$ (see Table 1). Unfortunately, the numerical optimization of Neo-Hookean materials is notoriously challenging in this near-incompressible regime due to the non-convexity of the volume-preserving term, especially in the presence of large volume change. We will show that existing projection approaches for ensuring positive-definiteness of Hessian matrices during Newton's method effectively assume that the initial shape has small volume change, restraining the user from freely editing the initial deformation.

Table 1. Poisson's ratios (PR) of common near-incompressible materials.

Material         PR
Rubber           0.4999
Tongue           0.49
Soft palate      0.49
Fat              0.49
Skin             0.48
Muscle           0.47
Saturated clay   0.4–0.49

We seek to stabilize and accelerate the optimization of Neo-Hookean energies with high Poisson's ratios under the projected Newton framework in a robust and efficient way. For robustness, we target scenarios with large initial deformation and volume change, allowing the user to freely deform the initial shape without worrying about the optimization blowing up. For efficiency, we aim to improve the convergence speed of the projected Newton method while keeping the per-step computation cost unchanged: filtering the eigenvalues of per-element Hessian contributions à la Teran et al. (2005). Surprisingly, this leads to an extremely simple and elegant solution, requiring only a one-line code change in the existing projected Newton framework without any additional parameter (TL;DR):

    # eigenvalue clamping (Teran et al., 2005)
    U,S,V = eig(Hi)
    Hi_proj = U*max(S,0)*V

    # → ours: absolute eigenvalue filtering
    U,S,V = eig(Hi)
    Hi_proj = U*abs(S)*V


While our proposed code change is small, the technical analysis behind it is non-trivial and interesting:

  • Our simple solution is supported by a thorough analysis. We show that the high non-convexity of the Neo-Hookean energy stems from high Poisson's ratio and large volume change (Sec. 5.1). We analyze the behavior of the projected Newton method under such scenarios and identify potential pitfalls of the traditional eigenvalue-clamping projection strategy (Teran et al., 2005) (Sec. 4).

  • In response, we propose a novel eigenvalue projection strategy for Newton's method to stabilize the optimization of Neo-Hookean-type energies under high Poisson's ratio and large initial volume change (Sec. 5).

Our paper is a nice complement to the "Stable Neo-Hookean Flesh Simulation" paper (Smith et al., 2018), to which our title alludes, improving numerical stability by dissecting its non-convexity in the projected Newton optimization. We illustrate the effectiveness and efficiency of our eigenvalue projection scheme on a wide range of challenging examples, including different deformations, geometries, mesh resolutions, elastic energies, and physical parameters. Through extensive experiments, we show that in the case of large volume change, our method outperforms the state-of-the-art eigenvalue projection strategy and other alternatives in terms of stability and convergence speed, while still, on average, being comparable for simulations with moderate volume changes. Our method is robust across different mesh resolutions and to extreme volume change and inversion. It achieves a stable and fast convergence rate in the presence of a high Poisson's ratio and large volume change.


2. Related Work

While researchers have studied first-order methods for simulating volumetric objects, our approach focuses on improving the robustness and efficiency of second-order methods, to which we limit our discussion.

Volume-preserving hyperelastic energies play a crucial role in capturing the volumetric behavior of deformable objects. Many hyperelastic energies are modeled as functions of the deformation gradient $\mathbf{F}$. As hyperelastic materials (e.g., rubber) resist volume changes, these energy functions often contain a volume term, a function of the determinant of the deformation gradient $\det(\mathbf{F})$, to penalize volume changes. In practice, Neo-Hookean energy (Ogden, 1997) and other volume-preserving variants (e.g., volume-preserving ARAP (Lin et al., 2022)) have been widely used for modeling materials with high Poisson's ratios. Other alternatives such as the St. Venant-Kirchhoff model (Picinbono et al., 2004) and the co-rotational model (Müller et al., 2002) approximate or linearize the volume term and thus compromise its volume-preserving property and visual effects, particularly in the cases of large deformation and high Poisson's ratios (see Sec. 6.2 of (Kim and Eberle, 2022)).

More recently, several works have focused on improving the stability and computational efficiency of volume-preserving hyperelastic energies. To improve robustness to mesh inversion and rest-state stability, Smith et al. (2018) proposed the stable Neo-Hookean energy and further applied their analysis to stabilize other hyperelastic energies such as Fung and Arruda-Boyce elasticity (Smith et al., 2018) and Mooney-Rivlin elasticity (Kim and Eberle, 2022). To improve computational efficiency, a series of works (Smith et al., 2018, 2019; Lin et al., 2022) derive analytical eigendecompositions for isotropic distortion energies to accelerate the positive semi-definite (PSD) projection of per-element Hessian contributions (following (Teran et al., 2005)).


(Element-wise) projected Newton’s method.

When the Hessian matrix is not positive definite, steps in Newton's method may not always find a direction leading to energy decrease (Nocedal and Wright, 2006). Several Hessian approximation strategies, a.k.a. projected Newton's methods, have been proposed. Directly computing the eigenvalues and eigenvectors of a global Hessian matrix is often too expensive. For many energy functions, the Hessian matrix can be decomposed into a summation over the sub-Hessians of the mesh elements. Thereby, one can project each sub-Hessian onto the PSD cone by clamping negative eigenvalues to a small positive number (or zero) (Teran et al., 2005), or by adding a diagonal matrix (Fu and Liu, 2016). These approaches guarantee that the sub-Hessians are PSD, and thus the resulting global Hessian is also PSD. While improving robustness, the projected Newton's method may suffer from a worse convergence rate compared to the classic Newton's method (Longva et al., 2023). What constitutes a better projection strategy remains an open question (Nocedal and Wright, 2006).

In this work, we analyze the limitations of existing per-element projected Newton strategies, supported by extensive empirical studies. In the optimization and machine learning literature, Gill et al. (1981) (Sec. 4.4.2.1), Paternain et al. (2019) and Dauphin et al. (2014) suggest projecting the negative eigenvalues of the global Hessian to their absolute values. We demonstrate that projecting the negative eigenvalues of the per-element Hessian to their absolute values, similar to (Paternain et al., 2019) and (Dauphin et al., 2014), performs the best in hyperelastic simulations.


Newton-type methods.


Alternatively, there are Newton-type methods that do not rely on per-element PSD projection. One strategy is to add a multiple of the identity matrix to the local or global Hessian (Nocedal and Wright, 2006), but this tends to overdamp the convergence (see (Liu et al., 2017) Fig. 8 and (Shtengel et al., 2017) Fig. 2). Shtengel et al. (2017) proposed Composite Majorization, a tight convex majorizer used as an analytic PSD approximation of the Hessian. While this approximation is efficient to compute, it remains unclear how to extend it beyond 2D problems and to different types of energies. Instead of directly approximating the Hessian matrix, Lan et al. (2023) proposed a strategy to adjust the search direction derived from the (possibly non-PSD) Hessian when performing stencil-wise Newton-CG. Chen et al. (2014) applied the asymptotic numerical method to the (inverse) static equilibrium problem and demonstrated its advantages over traditional Newton-type methods. When using incremental potentials in dynamic settings, Longva et al. (2023) proposed two alternative strategies: one is to use the original Hessian matrix whenever it is positive definite and an approximated Hessian matrix otherwise; the other is to add a multiple of the mass matrix to the original Hessian until it becomes positive definite. Both of these strategies may require one or several additional Cholesky factorizations per Newton iteration whenever the original Hessian is not PSD. Moreover, this method still inherits the flaws of those fallback Hessians whenever they are invoked. In contrast, our method maintains the same per-iteration computational cost as the element-wise projected Newton's method (Teran et al., 2005) while achieving a better convergence rate compared to projected Newton (Teran et al., 2005) and other alternatives (Longva et al., 2023).

3. Background


Our approach focuses on quasi-static simulation, which amounts to solving a minimization problem for a (typically nonlinear) energy $f$ with respect to parameters $\mathbf{x}$:

(1)   $\min_{\mathbf{x}} f(\mathbf{x}).$

Perhaps the most popular way of solving this problem is Newton's method. Given a set of parameters $\mathbf{x}$ at the current iteration, Newton's method approximates the energy $f$ locally with a quadratic function

(2)   $\tilde{f}(\mathbf{x} + \mathbf{d}) \approx f(\mathbf{x}) + \mathbf{g}^{T}\mathbf{d} + \frac{1}{2}\mathbf{d}^{T}\mathbf{H}\mathbf{d},$

where $\mathbf{g} = \nabla f(\mathbf{x})$ and $\mathbf{H} = \nabla^{2} f(\mathbf{x})$ are the gradient and the Hessian of the energy $f$ evaluated at $\mathbf{x}$, and $\mathbf{d}$ denotes the update vector for the parameters $\mathbf{x}$. To obtain the optimal $\mathbf{d}$, one can set the derivative $\partial\tilde{f}/\partial\mathbf{d}$ to zero and obtain the update

(3)   $\mathbf{d} = -\mathbf{H}^{-1}\mathbf{g}.$

Because $\tilde{f}$ is merely an approximation of the energy, the resulting $\mathbf{d}$ may not guarantee an energy decrease at the end of the iteration. Thus, a line search is needed to find a step size $\alpha$ such that $\mathbf{x} + \alpha\mathbf{d}$ achieves a sufficient decrease in the energy $f$. Newton's method iterates between these steps until convergence.
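For concreteness, this loop can be sketched as follows (a minimal Python/NumPy illustration, not the paper's implementation; `f`, `grad`, and `hess` are assumed to be user-supplied callables for the energy, its gradient, and its Hessian):

    import numpy as np

    def newton_solve(f, grad, hess, x, max_iters=200, tol=1e-5):
        """Newton's method with a backtracking line search (sketch)."""
        for _ in range(max_iters):
            g = grad(x)
            H = hess(x)
            d = np.linalg.solve(H, -g)          # Newton direction, Eq. (3)
            if 0.5 * np.dot(d, -g) < tol:       # Newton-decrement stopping criterion
                break
            # backtracking line search for sufficient decrease
            alpha, f0 = 1.0, f(x)
            while f(x + alpha * d) > f0 + 1e-4 * alpha * np.dot(g, d):
                alpha *= 0.5
            x = x + alpha * d
        return x

As written, the linear solve assumes $\mathbf{H}$ is positive definite; the next paragraphs explain why that assumption fails in practice and how the projected Newton's method addresses it.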


However, the process summarized above only works in the ideal situation where the Hessian matrix $\mathbf{H}$ is positive definite. Geometrically, a positive definite Hessian $\mathbf{H}$ corresponds to a convex energy landscape, which ensures that the update direction $\mathbf{d}$ towards the critical point $\partial\tilde{f}/\partial\mathbf{d} = 0$ is a descent direction towards the minimum of $\tilde{f}$ (see inset). An indefinite and a negative definite Hessian $\mathbf{H}$, however, correspond to a saddle and a concave shape, respectively (see inset). In both cases, the direction $\mathbf{d}$ towards the critical point could be an energy ascent direction, and thus Newton's method may fail to converge (see inset). To address this issue, a popular approach is the (per-element) projected Newton's method.

3.1. Projected Newton

The idea of the projected Newton's method is to approximate a non-positive definite Hessian matrix with a positive definite one. In finite-element simulations, the (global) Hessian matrix $\mathbf{H}$ can be assembled from the per-element Hessian matrices $\mathbf{H}_{i}$ of each mesh element $i$ (e.g., each tetrahedron):

(4)   $\mathbf{H} = \sum_{i} \mathbf{P}_{i}^{\top} \mathbf{H}_{i} \mathbf{P}_{i},$

where $\mathbf{P}_{i}$ is a selection matrix that selects element $i$'s degrees of freedom from the global degrees of freedom.

The de facto way of making the global Hessian $\mathbf{H}$ positive definite is to project each per-element Hessian $\mathbf{H}_{i}$ to positive (semi-)definiteness (PSD) by performing an eigendecomposition of $\mathbf{H}_{i}$, numerically or analytically, followed by a clamping strategy that sets the negative eigenvalues $\lambda_{k}$ to zero (or a small positive number $\epsilon$) (Teran et al., 2005):

(5)   $\lambda_{k}^{+} = \begin{cases} \epsilon & \text{if } \lambda_{k} \leq \epsilon, \\ \lambda_{k} & \text{otherwise.} \end{cases}$

Then, one can use the clamped eigenvalues $\lambda_{k}^{+}$ with the original eigenvectors of $\mathbf{H}_{i}$ to obtain a PSD-projected per-element Hessian $\mathbf{H}^{+}_{i}$. Although the projection happens locally, it guarantees that the approximated global Hessian

(6)   $\mathbf{H}^{+} = \sum_{i} \mathbf{P}_{i}^{\top} \mathbf{H}^{+}_{i} \mathbf{P}_{i}$

assembled from the $\mathbf{H}^{+}_{i}$ is PSD (Rockafellar, 1970), and thus yields a more robust simulation than using $\mathbf{H}$ directly. This per-element Hessian modification strategy is also scalable because it only requires an eigendecomposition of each (small) per-element Hessian $\mathbf{H}_{i}$, instead of the global matrix.
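To make Eqs. (4)-(6) concrete, a per-element clamping projection with scatter-gather assembly can be sketched as follows (a hedged NumPy/SciPy illustration, not the paper's C++ implementation; `element_hessian` and the per-element index lists `dofs` are hypothetical stand-ins for the FEM-specific pieces):

    import numpy as np
    import scipy.sparse as sp

    def clamp_project(Hi, eps=0.0):
        """Project a symmetric per-element Hessian to PSD by eigenvalue clamping, Eq. (5)."""
        lam, Q = np.linalg.eigh(Hi)                    # Hi = Q diag(lam) Q^T
        return Q @ np.diag(np.maximum(lam, eps)) @ Q.T

    def assemble_projected_hessian(elements, dofs, x, element_hessian, n):
        """Assemble the global PSD Hessian of Eq. (6) from projected element Hessians."""
        rows, cols, vals = [], [], []
        for e, idx in zip(elements, dofs):             # idx plays the role of the selector P_i
            Hi = clamp_project(element_hessian(e, x[idx]))
            r, c = np.meshgrid(idx, idx, indexing="ij")
            rows.append(r.ravel()); cols.append(c.ravel()); vals.append(Hi.ravel())
        # duplicate (row, col) entries are summed on conversion, i.e. H = sum_i P_i^T H_i^+ P_i
        return sp.coo_matrix((np.concatenate(vals),
                              (np.concatenate(rows), np.concatenate(cols))),
                             shape=(n, n)).tocsr()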

4. Pitfalls of Eigenvalue Clamping


Although the projected Newton method has improved robustness over the vanilla Newton's method, the widely used eigenvalue clamping strategy described in Sec. 3.1 still suffers from poor convergence when the simulated object undergoes large volume changes. The cause lies in the operation in Eq. (5), which clamps the minimum eigenvalue $\lambda_{\text{min}}$ to zero (or a small positive number). This operation effectively turns the local quadratic approximation from a saddle shape (see inset in Sec. 3.1) to a "valley" shape (see inset). In this situation, the Newton update direction $\mathbf{d}$ could point toward any point on the bottom of the valley, including points that are far away from the current parameter location $\mathbf{x}$ and even points leading to an energy increase.

To solidify this intuition, let us consider a minimal example which exemplifies the extreme failure case of eigenvalue clamping. The function of two variables visualized in Fig. 8 is defined as

(7)   $f(x,y) = \left(\sqrt{(x+1)^{2} + y^{2}} - 1\right)^{2} + \left(\sqrt{(x-1)^{2} + y^{2}} - 1\right)^{2}.$

At the point $(x, y) = (1 - 10^{-6}, 10^{-8})$ (indicated by the white point), we have

(8)   $f = 2, \quad \nabla f = \begin{bmatrix} 3.99 \\ -0.02 \end{bmatrix}, \quad \mathbf{H} = \Phi \begin{bmatrix} -1.99\times 10^{6} & 0 \\ 0 & 3.99 \end{bmatrix} \Phi^{\top}$

with eigenvectors $\Phi = \begin{bmatrix} 0.01 & -0.99 \\ 0.99 & 0.01 \end{bmatrix}$.

If we project the negative eigenvalue to a small positive value $\epsilon$ (e.g., $\epsilon = 10^{-3}$), the corresponding Newton direction can be computed as

(9)   $-(\mathbf{H}^{+})^{-1}\nabla f = -\left(\Phi \begin{bmatrix} \epsilon & 0 \\ 0 & 3.99 \end{bmatrix} \Phi^{\top}\right)^{-1} \begin{bmatrix} 3.99 \\ -0.02 \end{bmatrix} = -\begin{bmatrix} 1.19 \\ 19.99 \end{bmatrix}.$

For small $\epsilon$, the update direction is dominated by the y-coordinate. In extreme cases such as $\epsilon = 10^{-9}$, the update direction even blows up along the y-coordinate. Although $f$ technically decreases along the search direction, this projection violates the spirit of Newton's method, which relies on the quadratic approximation of the energy function in a local region.
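To make the arithmetic above easy to reproduce, the following sketch (assuming NumPy) rebuilds the Hessian from the eigenpairs in Eq. (8) and compares the clamped Newton direction of Eq. (9) with the absolute-value filter proposed in Sec. 5. Since $\Phi$ as printed is only approximately orthonormal, the numbers differ slightly from the rounded values in the text:

    import numpy as np

    Phi = np.array([[0.01, -0.99],
                    [0.99,  0.01]])        # eigenvectors of H from Eq. (8), as columns
    lam = np.array([-1.99e6, 3.99])        # eigenvalues of H
    g   = np.array([3.99, -0.02])          # gradient at (1 - 1e-6, 1e-8)

    def newton_direction(filtered_eigs):
        H = Phi @ np.diag(filtered_eigs) @ Phi.T
        return -np.linalg.solve(H, g)

    eps = 1e-3
    d_clamp = newton_direction(np.maximum(lam, eps))  # roughly (-1.2, -20.7): dominated by y
    d_abs   = newton_direction(np.abs(lam))           # roughly (-1.0, 0.01): small, sensible step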


The issue of eigenvalue clamping also appears in 3D. This is especially common when simulating large deformations on materials with high Poisson's ratios (see Sec. 6).

In our 2D example, we can drive this bad eigenvalue arbitrarily negative by moving the evaluation point toward $(x, y) = (1, 0)$. A more negative eigenvalue means the function along the corresponding eigendirection is more concave, and thus more poorly approximated by a convex quadratic. Unfortunately, clamping to near-zero effectively prefers this direction (because after inversion the coefficient explodes, see Eq. (3)). We would rather like a filtering method which avoids this direction. It is much more reasonable to make the effect of non-convex directions directly proportional to their eigenvalue magnitude. This immediately motivates using the absolute value as a filter.

5. Absolute Value Eigenvalue Projection

The analysis above suggests that projecting a large negative eigenvalue to a small positive value may lead to a poor estimate of the descent direction. The optimization may get stuck in a local minimum or even diverge in this case. Inspired by (Gill et al., 1981) (Sec. 4.4.2.1) and (Paternain et al., 2019), we propose to project each negative eigenvalue of the local Hessian $\mathbf{H}_{i}$ to its absolute value:

(10)   $\lambda_{k}^{+} = \left|\lambda_{k}\right|.$

The resulting projected Newton step can also be interpreted as the result of a generalized trust-region method where the model is a first-order Taylor expansion and the trust region is defined using the Hessian metric (see (Dauphin et al., 2014)).

Implementation. For implementations already using per-element projection and scatter-gather assembly of Hessians following Teran et al. (2005), our method is a single-line change (see Page 1).
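As a concrete (hedged) NumPy sketch, the element-level projection from the clamping sketch in Sec. 3.1 simply swaps the clamp for an absolute value:

    import numpy as np

    def abs_project(Hi):
        """Filter a symmetric per-element Hessian with absolute eigenvalues, Eq. (10)."""
        lam, Q = np.linalg.eigh(Hi)                # Hi = Q diag(lam) Q^T
        return Q @ np.diag(np.abs(lam)) @ Q.T      # max(lam, 0) becomes abs(lam)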

For completeness, the full per-element absolute eigenvalue projection Hessian construction is:

    initialize H_proj to empty sparse matrix
    foreach element i:
        # Pi*x selects element i's local variables from x
        either:
            Hi = constructLocalHessian(i, Pi*x)
            U,S,V = eig(Hi)
        or:
            U,S,V = analyticDecomposition(i, Pi*x)
        # Our one-line change
        Hi_proj = U*abs(S)*V
        # accumulate into global Hessian
        H_proj += Pi'*Hi_proj*Pi

Applying our projection to the minimal example above, the Newton direction now becomes

(11)   $\mathbf{d} = -(\mathbf{H}^{+})^{-1}\mathbf{g} = -\left(\Phi \begin{bmatrix} \left|-1.99\times 10^{6}\right| & 0 \\ 0 & 3.99 \end{bmatrix} \Phi^{\top}\right)^{-1} \begin{bmatrix} 3.99 \\ -0.02 \end{bmatrix} = -\left(\Phi \begin{bmatrix} 5\times 10^{-7} & 0 \\ 0 & 0.25 \end{bmatrix} \Phi^{\top}\right) \begin{bmatrix} 3.99 \\ -0.02 \end{bmatrix} = -\begin{bmatrix} 0.99 \\ -0.01 \end{bmatrix},$

which is more aligned with the direction towards the global minimum (see Fig. 8).

Figure: Our absolute projection strategy is parameter-free and achieves consistent speedup over the eigenvalue clamping strategy in the cases of high Poisson's ratio and large volume change. On the contrary, for the eigenvalue clamping strategy, the optimal $\epsilon$ varies across different examples, and picking the optimal one requires manual tuning.

5.1. Eigenanalysis of Volume-preserving Energy

While the two-variable example may seem contrived, we show that the existence of large negative eigenvalues is closely connected to a large Poisson's ratio and large volume change, due to the volume-preserving term in Neo-Hookean energy.

When the Hessian contains large negative eigenvalues, the absolute value projection gives a better estimate of the descent direction, making the optimization more stable and faster to converge. The question then becomes: when does the Hessian contain large negative eigenvalues? It turns out that this happens more often than one might think.

Figure: As Poisson's ratio $\nu \rightarrow 1/2$, Lamé's second parameter $\lambda \rightarrow \infty$.

The class of Neo-Hookean energies usually takes the form

(12)   $\Psi = \frac{\mu}{2}(I_{C} - 3) - \mu\log(J) + \frac{\lambda}{2}(J-1)^{2},$

where $\mu$ and $\lambda$ are the first and second Lamé parameters, $I_{C} = \text{tr}(\mathbf{F}^{\top}\mathbf{F})$ is the first right Cauchy-Green invariant, and $J = \det(\mathbf{F})$ is the determinant of the deformation gradient $\mathbf{F}$.

To ensure inversion stability and rest stability, Smith et al. (2018) propose the stable Neo-Hookean energy:

(13)   $\Psi = \frac{\mu}{2}(I_{C} - 3) + \frac{\lambda}{2}(J - \alpha)^{2}, \quad \text{where } \alpha = 1 + \frac{\mu}{\lambda}.$

As shown by the eigenanalysis in (Smith et al., 2018; Kim and Eberle, 2022), aside from the three eigenvalues corresponding to scaling, the other six twist and flip eigenvalues of the local Hessian matrix of the stable Neo-Hookean energy can be written as

(14)   $\Lambda_{3} = \mu + \sigma_{z}(\lambda(J-1) - \mu)$,  $\Lambda_{4} = \mu + \sigma_{x}(\lambda(J-1) - \mu)$,  $\Lambda_{5} = \mu + \sigma_{y}(\lambda(J-1) - \mu)$,
       $\Lambda_{6} = \mu - \sigma_{z}(\lambda(J-1) - \mu)$,  $\Lambda_{7} = \mu - \sigma_{x}(\lambda(J-1) - \mu)$,  $\Lambda_{8} = \mu - \sigma_{y}(\lambda(J-1) - \mu)$,

where $\mu$ and $\lambda$ are the Lamé parameters, and $\sigma_{x}$, $\sigma_{y}$, $\sigma_{z}$ are the singular values of the deformation gradient $\mathbf{F}$.

Figure: Energy and smallest eigenvalues of the stable Neo-Hookean energy for different Poisson's ratios $\nu$ and deformations. Here the deformation is created by dragging the top vertex of a regular tetrahedron around; the center of the plot corresponds to zero displacement.

When the Poisson's ratio $\nu$ is close to 0.5, Lamé's second parameter $\lambda = \frac{2\nu}{1-2\nu}\mu$ is very large (see the figure above). Thus the magnitude of the potentially negative eigenvalues $\Lambda_{6}$–$\Lambda_{8}$ largely depends on $\lambda$ and the volume change $(J-1)$. For instance, a Poisson's ratio of 0.495 corresponds to $\lambda \approx 100\mu$. This gives the eigenvalues $\Lambda_{6}$–$\Lambda_{8}$ large negative values whenever there is a relatively large volume change for a specific element (i.e., when $(J-1)$ is large; see the figure above).

When the Hessian contains large negative eigenvalues, projecting them to a small positive value may lead to a poor estimate of the descent direction that is strongly biased towards the negative eigenvectors (see Sec. 4). In contrast, the absolute value projection gives a better estimate of the descent direction, making the optimization more stable and faster to converge.
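As a concrete illustration of how quickly the flip eigenvalues $\Lambda_{6}$–$\Lambda_{8}$ turn negative, the following sketch (assuming NumPy; the Lamé parameters are obtained from Young's modulus and Poisson's ratio via the standard conversion) evaluates them for a uniform stretch $\mathbf{F} = \text{diag}(\sigma)$:

    import numpy as np

    def lame_parameters(E, nu):
        """Standard conversion from Young's modulus and Poisson's ratio to Lame parameters."""
        mu  = E / (2.0 * (1.0 + nu))
        lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))   # lam/mu = 2*nu/(1 - 2*nu)
        return mu, lam

    def flip_eigenvalues(sigma, mu, lam):
        """Twist/flip eigenvalues Lambda_6..8 of the stable Neo-Hookean element Hessian."""
        J = np.prod(sigma)                               # determinant of F
        return mu - sigma * (lam * (J - 1.0) - mu)

    mu, lam = lame_parameters(E=1e8, nu=0.495)           # lam is roughly 100*mu
    sigma = np.array([1.5, 1.5, 1.5])                    # uniform 1.5x stretch, J is about 3.4
    print(flip_eigenvalues(sigma, mu, lam))              # three large negative eigenvalues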

6. Results


Figure 9. Our abs projection strategy stabilizes the optimization under large volume change and high Poisson's ratio, and achieves faster convergence rates than the traditional eigenvalue clamping strategy (Teran et al., 2005). Here we stretch a cylinder to 3.0x (top) and compress a cylinder to 0.5x (bottom) by moving the topmost vertices.
We evaluate our method by comparing it against the traditional eigenvalue clamping strategy (Teran et al., 2005) and other alternatives on a wide range of challenging examples, including different deformations, geometries, elastic energies and physical parameters. Furthermore, we experiment on the TetWild Thingi10k dataset (Hu et al., 2018; Zhou and Jacobson, 2016) with diverse deformations to test the robustness and scalability of our method. Unless stated otherwise, we use the stable Neo-Hookean model (Smith et al., 2018) with Young's modulus $E = 10^{8}$ and Poisson's ratio $\nu = 0.495$ for all experiments, and use 0 as the threshold for the eigenvalue clamping strategy, as suggested by Sec. 8 of (Teran et al., 2005). We use a direct sparse Cholesky solver to solve the linear system at each Newton iteration. The convergence criterion is that the Newton decrement $0.5\,\mathbf{d}^{\top}\mathbf{g}$ is less than $10^{-5}$ (times $\lambda$ to account for the scale of the gradient). The initial deformation is created by moving selected vertices of the mesh. We run Newton's method for a maximum of 200 iterations with a classical backtracking line search strategy (Nocedal and Wright, 2006). As inversion is extremely common in the presence of large initial deformation (see the inset), we do not use an inversion-aware line search. We implement our method in C++ with libigl (Jacobson et al., 2018) and use TinyAD (Schmidt et al., 2022) for automatic differentiation. Experiments are performed on a MacBook Pro with an Apple M2 processor and 24 GB of RAM.

Figure 10. Our abs projection strategy generalizes over various volume-preserving hyperelastic models. Here we add the volume term $(J-1)^{2}$ to different strain energies, including Mooney-Rivlin (Smith et al., 2018), ARAP (Lin et al., 2022) and Symmetric Dirichlet (Smith and Schaefer, 2015) energy.

Figure 11. The speedup of our abs projection strategy over the eigenvalue clamping strategy increases with the mesh resolution. Here we stretch the rightmost vertices of a cube (of 4 different resolutions) by a factor of 3.0x.

Figure 12. Our method stabilizes and accelerates the optimization in the presence of large rotations. Here we stretch and twist the topmost vertices of the doll by 360 degrees (divided into four 90° subsolves to avoid ambiguity).

Figure 13. Our method can also accelerate the optimization of volume-preserving parameterization, where we minimize the energy $0.5\,E_{\text{MIPS}} + 0.5\,(J-1)^{2}$. Here we start from the Tutte embedding with the same total area as the original surface mesh (the cut is visualized in black), but nevertheless the per-element area distortion can still be large.

Convergence and Stability.
We compare our abs projection strategy to (Teran et al., 2005) under a variety of deformations, including stretching, compression, shearing and twisting, in Fig. 14, Fig. 9, Fig. 2, Fig. 3 and Fig. 12. Our abs projection strategy converges swiftly and smoothly under large local volume change and high Poisson's ratio, while (Teran et al., 2005) suffers from a much slower convergence rate due to instability in the optimization. The energies and shapes for both methods are generally consistent within each example when converged, except in a few cases where the other method converges to a different local minimum after "blowing up" mid-optimization (e.g., Fig. 1 (clamp) and Fig. 16 (global abs)). We also compare our local abs approach to the global abs approach (Dauphin et al., 2014; Paternain et al., 2019) in Fig. 16. Compared to our method, the global abs approach is computationally intractable due to a full eigendecomposition of the global Hessian (it takes more than 3 hours for one 38k×38k Hessian in Fig. 15), and it leads to damped convergence and suboptimal configurations (Fig. 16). We further stress test our method under extreme volume change (stretch to 11x) and an even higher Poisson's ratio ($\nu = 0.4999$) in Fig. 7. Our method still maintains a stable and fast convergence rate, while (Teran et al., 2005) fails to converge after 200 iterations.


Figure 14. Our abs projection strategy enables stability and acceleration over diverse deformations with large volume change, including stretching (top), shearing (middle) and twisting (bottom).
Generalization. We further evaluate the generality of our method over different mesh resolutions, Poisson's ratios, mesh deformations and hyperelastic models. Under the same deformation, the speedup we gain over (Teran et al., 2005) increases with the mesh resolution (Fig. 11). Our abs projection approach maintains a relatively stable convergence rate across different resolutions, while the traditional eigenvalue clamping method requires drastically more iterations as the resolution increases. In Fig. 4, we show the Newton iteration counts of our method and (Teran et al., 2005) under different Poisson's ratios and volume changes. For relatively small volume change (top), the speedup of our method over (Teran et al., 2005) appears only once the Poisson's ratio exceeds roughly 0.47. But for sufficiently large volume change (bottom), our method can accelerate convergence even when the Poisson's ratio is low, e.g., $\nu = 0.3$. We further demonstrate the growth of our speedup as the initial volume change increases in Fig. 5. Our abs projection strategy also generalizes to other tasks, such as surface parameterization in Fig. 13, stable Neo-Hookean energy with collisions in Fig. 18, and other volume-preserving hyperelastic models in Fig. 10, including Mooney-Rivlin (Smith et al., 2018), ARAP (Lin et al., 2022) and Symmetric Dirichlet energy (each with an additional volume term $(J-1)^{2}$ to make it volume-preserving).

Figure 15. We compare our abs projection strategy with the eigenvalue clamping strategy (Teran et al., 2005), adding a multiple of the identity matrix to the local Hessian or the global Hessian (until it becomes PSD) (Martin et al., 2011), and the Projection-on-Demand and Kinetic strategies (Longva et al., 2023). Note that the Projection-on-Demand strategy (Longva et al., 2023) requires an additional Cholesky factorization to check the positive-definiteness of the original Hessian at each Newton iteration.

Figure 16. Compared to our local approach, the global abs approach (Dauphin et al., 2014; Paternain et al., 2019) is computationally intractable due to a full eigendecomposition of the global Hessian (it takes more than 3 hours for one 38k×38k Hessian in Fig. 15), and it leads to damped convergence and suboptimal configurations (right). Here, on average, the global abs approach takes 10.2 minutes per Newton iteration for one 8.8k×8.8k Hessian, while our local abs approach takes only 0.25 seconds per Newton iteration.

Figure 17. We visualize the first 20 eigenvalues of the Hessian and the resulting descent directions using our abs projection strategy (red) and the eigenvalue clamping strategy (blue) under different volume changes and Poisson's ratios. Here we uniformly scale the mesh by 1.05x, 1.2x and 1.5x. The eigenvalues of the original Hessian are shown in yellow.

Comparison.
We evaluate our method and (Teran et al., 2005) (with threshold $10^{-3}$) on a total of 593 closed, genus-0, high-resolution tetrahedral meshes (with more than 5,000 vertices) from the TetWild Thingi10k dataset (Hu et al., 2018; Zhou and Jacobson, 2016) with diverse deformations, including stretching, stretching (2x), compression, twisting and bending. As shown by the histogram in Fig. 6, our method is at least comparable with eigenvalue clamping and offers, on average, a 2.5x speedup for large deformations. Since the eigenvalue clamping strategy contains an additional user-picked parameter $\epsilon$ to control the clamping threshold, we also compare the convergence rate of our method and (Teran et al., 2005) with different $\epsilon$ (see the threshold-comparison figure in Sec. 5); our method is parameter-free and achieves consistent speedup over (Teran et al., 2005). We further compare our abs projection scheme with other alternatives in Fig. 15, including adding a multiple of the identity matrix to the local Hessian or the global Hessian (until it becomes PSD) (Martin et al., 2011), and the Projection-on-Demand and Kinetic strategies (Longva et al., 2023). Adding a multiple of the identity or mass matrix to the Hessian eventually makes the optimization closer to (preconditioned) gradient descent, and thus damps the convergence too much for large deformation problems. Note that the Projection-on-Demand strategy (Longva et al., 2023) requires an additional Cholesky factorization to check the positive-definiteness of the original Hessian at each Newton iteration.

7. Conclusion & Future Work

Figure 18. We perform a collision experiment using Incremental Potential Contact (IPC) (Li et al., 2020), where we place a cylinder (orange) above the back of a horse. Our method is still able to achieve a speedup with collisions, even though IPC's intersection-aware line search clamps down the step size and dominates convergence after collisions happen.

We propose a Hessian modification strategy to improve the robustness and efficiency of hyperelastic simulations. Our method is a simple one-line change to the existing projected Newton's method, making it extremely reproducible and widely applicable in many simulation frameworks. We primarily evaluate our method on Neo-Hookean simulations with large deformations. For small deformations, especially with compression (see the inset), our method can sometimes slightly damp the convergence compared to (Teran et al., 2005). Extending our analysis to a wider range of finite element simulations (e.g., collisions, Fig. 18) could better identify the applicability of our approach, and potentially lead to an even better adaptive PSD projection method. Exploring other Hessian modification strategies, such as adding higher-order regularization terms to the elastic energy (Kim and Eberle, 2022) or a combination with (Longva et al., 2023), could also be an interesting future direction. Another promising direction is to consider a volume-preserving term that does not introduce large negative eigenvalues in the presence of large deformations.
Acknowledgements.

This work is funded in part by two NSERC Discovery grants, the Ontario Early Research Award program, the Canada Research Chairs Program, a Sloan Research Fellowship, the DSI Catalyst Grant program, SoftBank, and gifts by Adobe Research and Autodesk. We thank Danny Kaufman for early discussions; Yuta Noma for testing the code; all the artists for sharing the 3D models; and the anonymous reviewers for their helpful comments and suggestions.

References

  • Xiang Chen, Changxi Zheng, Weiwei Xu, and Kun Zhou. 2014. An Asymptotic Numerical Method for Inverse Elastic Shape Design. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2014) 33, 4 (Aug. 2014).
  • Yann N. Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. 2014. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2 (Montreal, Canada) (NIPS'14). MIT Press, Cambridge, MA, USA, 2933–2941.
  • Xiao-Ming Fu and Yang Liu. 2016. Computing inversion-free mappings by simplex assembly. ACM Trans. Graph. 35, 6 (2016), 216:1–216:12.
  • P.E. Gill, W. Murray, and M.H. Wright. 1981. Practical Optimization. Academic Press.
  • Yixin Hu, Qingnan Zhou, Xifeng Gao, Alec Jacobson, Denis Zorin, and Daniele Panozzo. 2018. Tetrahedral Meshing in the Wild. ACM Trans. Graph. 37, 4, Article 60 (July 2018), 14 pages. https://doi.org/10.1145/3197517.3201353
  • Alec Jacobson, Daniele Panozzo, et al. 2018. libigl: A simple C++ geometry processing library. https://libigl.github.io/.
  • Theodore Kim and David Eberle. 2022. Dynamic Deformables: Implementation and Production Practicalities (Now with Code!). In ACM SIGGRAPH 2022 Courses (Vancouver, British Columbia, Canada) (SIGGRAPH '22). Association for Computing Machinery, New York, NY, USA, Article 7, 259 pages. https://doi.org/10.1145/3532720.3535628
  • Lei Lan, Minchen Li, Chenfanfu Jiang, Huamin Wang, and Yin Yang. 2023. Second-Order Stencil Descent for Interior-Point Hyperelasticity. ACM Trans. Graph. 42, 4, Article 108 (July 2023), 16 pages. https://doi.org/10.1145/3592104
  • Minchen Li, Zachary Ferguson, Teseo Schneider, Timothy Langlois, Denis Zorin, Daniele Panozzo, Chenfanfu Jiang, and Danny M. Kaufman. 2020. Incremental Potential Contact: Intersection- and Inversion-free Large Deformation Dynamics. ACM Trans. Graph. (SIGGRAPH) 39, 4, Article 49 (2020).
  • Huancheng Lin, Floyd M. Chitalu, and Taku Komura. 2022. Isotropic ARAP Energy Using Cauchy-Green Invariants. ACM Trans. Graph. 41, 6, Article 275 (Nov. 2022), 14 pages. https://doi.org/10.1145/3550454.3555507
  • Tiantian Liu, Sofien Bouaziz, and Ladislav Kavan. 2017. Quasi-Newton Methods for Real-Time Simulation of Hyperelastic Materials. ACM Transactions on Graphics (TOG) 36, 3 (2017), 23.
  • Andreas Longva, Fabian Löschner, José Antonio Fernández-Fernández, Egor Larionov, Uri M. Ascher, and Jan Bender. 2023. Pitfalls of Projection: A study of Newton-type solvers for incremental potentials. arXiv:2311.14526.
  • Sebastian Martin, Bernhard Thomaszewski, Eitan Grinspun, and Markus Gross. 2011. Example-based elastic materials. In ACM SIGGRAPH 2011 Papers (Vancouver, British Columbia, Canada) (SIGGRAPH '11). Association for Computing Machinery, New York, NY, USA, Article 72, 8 pages. https://doi.org/10.1145/1964921.1964967
  • Matthias Müller, Julie Dorsey, Leonard McMillan, Robert Jagnow, and Barbara Cutler. 2002. Stable Real-Time Deformations. In Proceedings of the 2002 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (San Antonio, Texas) (SCA '02). Association for Computing Machinery, New York, NY, USA, 49–54. https://doi.org/10.1145/545261.545269
  • Jorge Nocedal and Stephen J. Wright. 2006. Numerical Optimization. Springer.
  • Raymond W. Ogden. 1997. Non-linear elastic deformations. Courier Corporation.
  • Santiago Paternain, Aryan Mokhtari, and Alejandro Ribeiro. 2019. A Newton-Based Method for Nonconvex Optimization with Fast Evasion of Saddle Points. SIAM Journal on Optimization 29, 1 (2019), 343–368. https://doi.org/10.1137/17M1150116
  • Guillaume Picinbono, Hervé Delingette, and Nicholas Ayache. 2004. Real-Time Large Displacement Elasticity for Surgery Simulation: Non-linear Tensor-Mass Model. MICCAI, LNCS 1935. https://doi.org/10.1007/978-3-540-40899-4_66
  • Ralph Tyrell Rockafellar. 1970. Convex Analysis. Princeton University Press, Princeton. https://doi.org/10.1515/9781400873173
  • Patrick Schmidt, Janis Born, David Bommes, Marcel Campen, and Leif Kobbelt. 2022. TinyAD: Automatic Differentiation in Geometry Processing Made Simple. Computer Graphics Forum 41, 5 (2022).
  • Anna Shtengel, Roi Poranne, Olga Sorkine-Hornung, Shahar Z. Kovalsky, and Yaron Lipman. 2017. Geometric Optimization via Composite Majorization. ACM Trans. Graph. 36, 4, Article 38 (July 2017), 11 pages. https://doi.org/10.1145/3072959.3073618
  • Breannan Smith, Fernando De Goes, and Theodore Kim. 2018. Stable Neo-Hookean Flesh Simulation. ACM Trans. Graph. 37, 2, Article 12 (March 2018), 15 pages. https://doi.org/10.1145/3180491
  • Breannan Smith, Fernando De Goes, and Theodore Kim. 2019. Analytic Eigensystems for Isotropic Distortion Energies. ACM Trans. Graph. 38, 1, Article 3 (Feb. 2019), 15 pages. https://doi.org/10.1145/3241041
  • Jason Smith and Scott Schaefer. 2015. Bijective parameterization with free boundaries. ACM Trans. Graph. 34, 4, Article 70 (July 2015), 9 pages.
  • Joseph Teran, Eftychios Sifakis, Geoffrey Irving, and Ronald Fedkiw. 2005. Robust Quasistatic Finite Elements and Flesh Simulation. In ACM/Eurographics Symposium on Computer Animation (SCA), K. Anjyo and P. Faloutsos (Eds.). 181–190.
  • Qingnan Zhou and Alec Jacobson. 2016. Thingi10K: A Dataset of 10,000 3D-Printing Models. arXiv preprint arXiv:1605.04797 (2016).
