Strain-Based Dynamics

Matthias Müller, Nuttapong Chentanez, Tae-Yong Kim, Miles Macklin

We propose a new set of constraints within the Position Based Dynamics (PBD) framework that allow the control of strain in directions that are independent of the edge directions of the simulation mesh. Instead of constraining distances between points, we constrain the entries of the Green–St. Venant strain tensor. Varying the stiffness values corresponding to the individual strain coefficients lets us simulate anisotropic behavior. By working with Green’s rotation-independent, non-linear strain tensor directly we do not have to perform a polar decomposition of the deformation gradient as in most strain limiting approaches. In addition, we propose a modification of the constraints corresponding to the diagonal entries of the strain tensor such that they can be solved in a single step and a modification of the constraints corresponding to the off-diagonal entries to decouple stretch from shear resistance. By formulating the constraints within the PBD framework, they can be used not only for strain limiting but to perform the actual simulation of the deformable object, whereas traditional strain limiting methods have to be paired with a separate simulation method.
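
To make the object of these constraints concrete, here is a minimal sketch (Python with NumPy, not the paper's code) of the Green–St. Venant strain entries for a single triangle. The diagonal entries measure stretch along the material axes and the off-diagonal entry measures shear; it is these entries, rather than edge lengths, that the proposed constraints drive toward their rest values.

```python
import numpy as np

def green_strain_2d(x0, x1, x2, Dm_inv):
    """Green - St. Venant strain of a triangle.

    x0, x1, x2 : current 2D vertex positions
    Dm_inv     : precomputed inverse of the rest-shape edge matrix
    """
    Ds = np.column_stack((x1 - x0, x2 - x0))   # current edge matrix
    F = Ds @ Dm_inv                            # deformation gradient
    # rotation-independent, non-linear strain: no polar decomposition needed
    return 0.5 * (F.T @ F - np.eye(2))
```

At rest the tensor vanishes, and because the tensor is rotation-independent it also vanishes for any rigidly rotated copy of the rest shape; a stiffness-weighted PBD projection would then reduce nonzero entries toward zero.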

A Reduced Model for Interactive Hairs

Menglei Chai, Changxi Zheng, Kun Zhou

Realistic hair animation is a crucial component in depicting virtual characters in interactive applications. While much progress has been made in high-quality hair simulation, the overwhelming computation cost hinders similar fidelity in realtime simulations. To bridge this gap, we propose a data-driven solution. Building upon precomputed simulation data, our approach constructs a reduced model to optimally represent hair motion characteristics with a small number of guide hairs and the corresponding interpolation relationships. At runtime, utilizing such a reduced model, we only simulate guide hairs that capture the general hair motion and interpolate all remaining strands. We further propose a hair correction method that corrects the resulting hair motion with a position-based model to resolve hair collisions and thus captures motion details. Our method enables the simulation of a full head of hair with over 150K strands in realtime. We demonstrate the efficacy and robustness of our method with various hairstyles and driving motions (e.g., head movement and wind force), and compare against full simulation results for motions that do not appear in the training data.
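
The runtime interpolation step can be pictured with a small sketch (hypothetical Python, not the authors' code): only the guide strands are simulated, and every other strand is reconstructed as a fixed weighted combination of a few guides, with the weights coming from the precomputed reduced model.

```python
import numpy as np

def interpolate_strand(guide_strands, guide_ids, weights):
    """Reconstruct one normal strand from simulated guide strands.

    guide_strands : (num_guides, verts, 3) simulated guide vertex positions
    guide_ids     : indices of the guides this strand depends on
    weights       : precomputed interpolation weights (sum to 1)
    """
    return np.einsum("k,kvd->vd", np.asarray(weights, dtype=float),
                     guide_strands[np.asarray(guide_ids)])
```

Because the weights are fixed at precomputation time, the runtime cost per normal strand is a handful of vector adds, which is what makes 150K strands feasible.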

Unified Particle Physics for Real-Time Applications

Miles Macklin, Matthias Müller, Nuttapong Chentanez, and Tae-Yong Kim

We present a unified dynamics framework for real-time visual effects. Using particles connected by constraints as our fundamental building block allows us to treat contact and collisions in a unified manner, and we show how this representation is flexible enough to model gases, liquids, deformable solids, rigid bodies and clothing with two-way interactions. We address some common problems with traditional particle based methods and describe a parallel constraint solver based on position based dynamics that is efficient enough for real-time applications.
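
The solver structure the paper builds on (position based dynamics) can be sketched as follows. This is a minimal version in Python: a Gauss-Seidel loop rather than the paper's parallel solver, with the classic distance-constraint projection standing in for the unified contact, fluid, and solid constraints.

```python
import numpy as np

def project_distance(p, w, i, j, rest):
    # project one distance constraint between particles i and j
    d = p[i] - p[j]
    ln = np.linalg.norm(d)
    wsum = w[i] + w[j]
    if ln < 1e-9 or wsum == 0.0:
        return
    corr = (ln - rest) / (wsum * ln) * d
    p[i] -= w[i] * corr          # inverse-mass-weighted position change
    p[j] += w[j] * corr

def pbd_step(x, v, w, constraints, dt, iters=4):
    g = np.array([0.0, -9.8, 0.0])
    v = v + dt * g * (w[:, None] > 0)   # external forces; static particles skipped
    p = x + dt * v                       # predicted positions
    for _ in range(iters):               # iterative constraint projection
        for (i, j, rest) in constraints:
            project_distance(p, w, i, j, rest)
    v = (p - x) / dt                     # velocity from position change
    return p, v
```

Replacing the distance constraint with density, shape-matching, or contact constraints, all acting on the same particle set, is what yields the two-way coupling described above.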

Robust Hair Capture Using Simulated Examples

Liwen Hu, Chongyang Ma, Linjie Luo, and Hao Li

We introduce a data-driven hair capture framework based on example strands generated through hair simulation. Our method can robustly reconstruct faithful 3D hair models from unprocessed input point clouds with large amounts of outliers. Current state-of-the-art techniques use geometrically-inspired heuristics to derive global hair strand structures, which can yield implausible hair strands for hairstyles involving large occlusions, multiple layers, or wisps of varying lengths. We address this problem using a voting-based fitting algorithm to discover structurally plausible configurations among the locally grown hair segments from a database of simulated examples. To generate these examples, we exhaustively sample the simulation configurations within the feasible parameter space constrained by the current input hairstyle. The number of necessary simulations can be further reduced by leveraging symmetry and constrained initial conditions. The final hairstyle can then be structurally represented by a limited number of examples. To handle constrained hairstyles, such as a ponytail, for which realistic simulation is more difficult, we allow the user to sketch a few strokes through an intuitive interface to generate strand examples. Our approach focuses on robustness and generality. Since our reconstructions are structurally plausible by construction, we ensure improved control during hair digitization and avoid implausible hair synthesis for a wide range of hairstyles.

Inverse-Foley Animation: Synchronizing Rigid-Body Motions to Sound

Timothy R. Langlois and Doug L. James

In this paper, we introduce Inverse-Foley Animation, a technique for optimizing rigid-body animations so that contact events are synchronized with input sound events. A precomputed database of randomly sampled rigid-body contact events is used to build a contact-event graph, which can be searched to determine a plausible sequence of contact events synchronized with the input sound’s events. To more easily find motions with matching contact times, we allow transitions between simulated contact events using a motion blending formulation based on modified contact impulses. We fine-tune synchronization by slightly retiming ballistic motions. Given a sound, our system can synthesize synchronized motions using graphs built with hundreds of thousands of precomputed motions, and millions of contact events. Our system is easy to use, and has been used to plan motions for hundreds of sounds, and dozens of rigid-body models.
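
A much-simplified sketch of the matching idea (hypothetical Python; the paper searches a contact-event graph, not a flat list): for each sound event, find the nearest precomputed contact time. The residual offset is what the slight retiming of ballistic motions would absorb.

```python
import bisect

def nearest_contacts(sound_times, contact_times):
    """Pair each sound event with its closest contact event.

    Returns (sound_time, contact_time, offset) triples; the offset is
    the retiming a ballistic segment would need to absorb.
    """
    contact_times = sorted(contact_times)
    out = []
    for t in sound_times:
        k = bisect.bisect_left(contact_times, t)
        cands = contact_times[max(0, k - 1):k + 1]  # neighbors on either side
        best = min(cands, key=lambda c: abs(c - t))
        out.append((t, best, best - t))
    return out
```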

Space-Time Editing of Elastic Motions through Material Optimization and Reduction

Siwang Li, Jin Huang, Fernando de Goes, Xiaogang Jin, Hujun Bao, and Mathieu Desbrun

We present a novel method for elastic animation editing with spacetime constraints. In a sharp departure from previous approaches, we not only optimize control forces added to a linearized dynamic model, but also optimize material properties to better match user constraints and provide plausible and consistent motion. Our approach achieves efficiency and scalability by performing all computations in a reduced rotation-strain (RS) space constructed with both cubature and geometric reduction, leading to two orders of magnitude improvement over the original RS method. We demonstrate the utility and versatility of our method in various applications, including motion editing, pose interpolation, and estimation of material parameters from existing animation sequences.
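
The geometric-reduction half of the speed-up can be illustrated with a tiny sketch (hypothetical Python): the optimization works on a handful of reduced coordinates per frame, and full-space displacements are only reconstructed from a subspace basis when needed.

```python
import numpy as np

def reconstruct(U, q_traj):
    """Expand reduced coordinates back to full vertex displacements.

    U      : (3n, r) subspace basis with r << 3n
    q_traj : sequence of length-r reduced coordinate vectors, one per frame
    """
    return np.stack([U @ q for q in q_traj])
```

Optimizing over the small vectors q (and material parameters) rather than 3n vertex coordinates is what makes spacetime optimization tractable at this scale.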

SIGGRAPH 2014 papers

Here they are thus far:

TOG:

Multimaterial Mesh-Based Surface Tracking

Fang Da, Christopher Batty, Eitan Grinspun

We present a triangle mesh-based technique for tracking the evolution of three-dimensional multimaterial interfaces undergoing complex deformations. It is the first non-manifold triangle mesh tracking method to simultaneously maintain intersection-free meshes and support the proposed broad set of multimaterial remeshing and topological operations. We represent the interface as a non-manifold triangle mesh with material labels assigned to each half-face to distinguish volumetric regions. Starting from application-dependent vertex velocities, we deform the mesh, seeking a non-intersecting, watertight solution. This goal necessitates development of various collision-safe, label-aware non-manifold mesh operations: multimaterial mesh improvement; T1 and T2 processes, topological transitions arising in foam dynamics and multi-phase flows; and multimaterial merging, in which a new interface is created between colliding materials. We demonstrate the robustness and effectiveness of our approach on a range of scenarios including geometric flows and multiphase fluid animation.
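
The labeling scheme can be sketched concretely (hypothetical Python, not the authors' data structure): each triangle carries one material label per half-face, and the interface between two given materials is just the set of triangles whose two sides see those two labels.

```python
def interface_faces(triangles, half_face_labels, mat_a, mat_b):
    """Triangles separating materials mat_a and mat_b.

    half_face_labels[i] = (front, back) material labels of triangle i,
    identifying the volumetric region on either side of the face.
    """
    return [tri for tri, (front, back) in zip(triangles, half_face_labels)
            if {front, back} == {mat_a, mat_b}]
```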

Blending Liquids

Karthik Raveendran, Chris Wojtan, Nils Thuerey, Greg Turk

We present a method for smoothly blending between existing liquid animations. We introduce a semi-automatic method for matching two existing liquid animations, which we use to create new fluid motion that plausibly interpolates the input. Our contributions include a new space-time non-rigid iterative closest point algorithm that incorporates user guidance, a subsampling technique for efficient registration of meshes with millions of vertices, and a fast surface extraction algorithm that produces 3D triangle meshes from a 4D space-time surface. Our technique can be used to instantly create hundreds of new simulations, or to interactively explore complex parameter spaces. Our method is guaranteed to produce output that does not deviate from the input animations, and it generalizes to multiple dimensions. Because our method runs at interactive rates after the initial precomputation step, it has potential applications in games and training simulations.
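
The guarantee that output never deviates from the input animations follows from the final blend being a convex combination of corresponded surfaces. A minimal sketch (hypothetical Python, after the registration step has matched vertices):

```python
import numpy as np

def blend(matched_a, matched_b, alpha):
    """Interpolate two registered liquid surfaces.

    matched_a, matched_b : (n, 3) corresponded vertex positions
    alpha                : blend weight in [0, 1]
    """
    return (1.0 - alpha) * matched_a + alpha * matched_b
```

Since each blended vertex lies on the segment between its two matched positions, sweeping alpha instantly produces in-between simulations without re-running any fluid solve.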

Adaptive Tearing and Cracking of Thin Sheets

Tobias Pfaff, Rahul Narain, Juan Miguel de Joya, and James F. O’Brien

This paper presents a method for adaptive fracture propagation in thin sheets. A high-quality triangle mesh is dynamically restructured to adaptively maintain detail wherever it is required by the simulation. These requirements include refining where cracks are likely to either start or advance. Refinement ensures that the stress distribution around the crack tip is well resolved, which is vital for creating highly detailed, realistic crack paths. The dynamic meshing framework allows subsequent coarsening once areas are no longer likely to produce cracking. This coarsening allows efficient simulation by reducing the total number of active nodes and by preventing the formation of thin slivers around the crack path. A local reprojection scheme and a substepping fracture process help to ensure stability and prevent a loss of plasticity during remeshing. By including bending and stretching plasticity models, the method is able to simulate a large range of materials with very different fracture behaviors.
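
The refinement decision can be sketched as a simple criterion (hypothetical Python; the paper's sizing field is more elaborate): keep the mesh fine where stress is high or near an active crack tip, and allow coarsening elsewhere.

```python
import numpy as np

def needs_refinement(face_stress, face_center, crack_tips,
                     stress_thresh=1.0, tip_radius=0.1):
    # refine where cracks are likely to start (high stress) ...
    if face_stress > stress_thresh:
        return True
    # ... or advance (near a crack tip), so the stress distribution
    # around the tip stays well resolved
    return any(np.linalg.norm(np.asarray(face_center) - np.asarray(t)) < tip_radius
               for t in crack_tips)
```

Faces failing both tests are candidates for coarsening, which is what keeps the active node count, and the sliver count along crack paths, low.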
