Communication is hard – especially communicating an artistic vision for visual effects. The Shareplay Foundation sponsored a project investigating new software-based approaches to helping artists with such communication – and this post explains our findings.
Together with our project partners Ja Film, Sunday Studio, Redeye Film and WilFilm, we have come up with ways to improve the communication of animated volumetric effects like smoke, fumes, explosions and fluids. The production of shots for film, commercials and TV goes through several phases – each of these increases the visual quality while the artistic choices get locked down. Very early on, the camera movement, animation timing and effect timing are locked down. This typically involves very crude assets which are later replaced by more detailed ones. This is fairly straightforward for normal “surface assets” like a typical character or scene prop. It is much more difficult when it involves simulation-based volumetric effects like smoke, fumes, explosions or fire.

These types of effects are very expensive to produce due to computationally intensive simulations and long rendering times – as well as the high level of artist experience required to make them look good. Because of this, the early pre-visualization often looks very crude – and poorly communicates the artistic vision of the creative artist. Take for example an explosion – to save time, the general timing of the explosion is often blocked out with a rapidly expanding phong-shaded sphere. This is great for communicating the animation timing, but it conveys nothing of the visual aesthetics of the explosion (rolling smoke, balls of fire, the pressure wave, the contrast between fire and smoke, etc.).
We have identified two themes for our experiments:
- Pushing the visual decisions much further down the shot production pipeline.
- Procedural animation of volumetric effects.
We go into each of these themes under the next two headings.
Pushing the visual decisions much further down the shot production pipeline.
Last year, we did a project on how fast procedural volumetric effects could empower artists and re-envision the way they work with volumetric effects. This project was also funded by the Shareplay Foundation, with participation from our Computer Graphics Lab and Sunday Studio. More information on this project here. In the current project on visual communication, we chose to build on the experiments and knowledge from that project.
One of the major issues when it comes to using volumetric effects in production is the workflow. The effects are generally made in isolation, pre-rendered and then later integrated into the final shot. We wanted to keep the flexibility all the way into the compositing programs, without locking down the special effect. Our post-processing program of choice was Adobe AfterEffects. We realized a prototype implementation in AfterEffects, and found that it makes perfect sense to be able to create procedural volumetric effects as late as post-processing. A perfect example is a cloud backdrop that would usually be composited from photographs of clouds – our approach allows this base content to be generated directly within the compositing program itself.

Some challenges also arose that could be a source of future research. First of all, it turns out to be very difficult to impose a 3D workflow on a predominantly 2D application. All our previous experiments had been done in Autodesk Maya, where all tools are meant to work in 3D – and this is naturally not the case for AfterEffects. The difference is especially obvious in the handling of the camera and the construction of base geometry for the special effect. At a later point we would like to investigate efficient user interfaces for 3D navigation in a post-processing program – and for efficiently generating base geometry within AfterEffects.
Procedural animation of volumetric effects.
Special effects such as smoke or fluids often evolve over time – splashing water, drifting clouds or expanding explosions – and timing plays a major role in the communication. In a traditional workflow the timing is decided by the input parameterization of the underlying simulations. Experimenting with those parameters is a very time-consuming, trial-and-error process. We wanted to build a procedural animation system for quick-and-dirty volumetric animations without any waiting. Specifically, we wanted a system in which the artist is able to:
- Key-frame the shape of effects with exact timing.
- Control the shape of the effect through geometry.
- Change every keyframe at any time without waiting for a simulation.
- Create the rolling motions of smoke.
All of these requirements are designed to make it easy for the artists to get the results they intend – quickly.
We needed a way to calculate the intermediate frames in between the artist-defined keyframes. Our first approach was to do a full flow-based registration between shapes. This had to be done each time the geometries were updated – but only once for each pair of keyframes. Unfortunately, this didn’t turn out as well as we had hoped: the calculations needed to derive a good-quality registration were too time consuming. The vector field between two shapes, computed with the Demons framework for flow registration, can be seen below.
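For reference, the core of the Demons approach can be sketched in a few lines. The following is a minimal 2-D illustration of Thirion’s classic demons update – the function name and toy grid are our own, and a real registration would iterate this step many times with Gaussian smoothing of the field, which is exactly what made it too slow for interactive use:

```python
import numpy as np

def demons_force(fixed, moving):
    """One Thirion demons update: a displacement field (ux, uy) that
    pushes `moving` toward `fixed` (2-D scalar images on the same grid)."""
    diff = moving - fixed
    gy, gx = np.gradient(fixed)        # image gradient of the target shape
    denom = gx**2 + gy**2 + diff**2
    denom[denom == 0.0] = 1.0          # flat, identical regions get zero force
    return diff * gx / denom, diff * gy / denom
```

Repeating this update (with smoothing) over full 3D volumes, every time a keyframe geometry changed, is where the cost added up.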
Instead, we came up with the idea of converting the geometry from each keyframe into a signed distance field and doing a standard linear interpolation (or morph) between those distance fields. In order to have control of the behavior, we defined two positions in each keyframe – an “entry” and an “exit”. The entry and exit points of two consecutive keyframes would overlap in the interpolation itself – but shifted by the true offset in world space. The results of this interpolation scheme for volumetric effects can be seen in the animation below.
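The distance-field morph itself is straightforward. Below is a minimal sketch under our own simplifying assumptions (both shapes sampled on a shared grid, and the entry/exit alignment step omitted): two keyframe shapes become signed distance fields, and a plain linear blend moves the zero level set smoothly from one shape to the other:

```python
import numpy as np

def sphere_sdf(grid, center, radius):
    """Signed distance to a sphere, sampled on a (..., 3) position grid."""
    return np.linalg.norm(grid - center, axis=-1) - radius

def box_sdf(grid, center, half):
    """Signed distance to an axis-aligned box with half-extents `half`."""
    q = np.abs(grid - center) - half
    return (np.linalg.norm(np.maximum(q, 0.0), axis=-1)
            + np.minimum(q.max(axis=-1), 0.0))

def morph(sdf_a, sdf_b, t):
    """Linear interpolation between two distance fields; the surface is
    wherever the blended field crosses zero."""
    return (1.0 - t) * sdf_a + t * sdf_b

# Sample a sphere keyframe and a box keyframe on a 64^3 lattice over [-1, 1]^3.
n = 64
axis = np.linspace(-1.0, 1.0, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
a = sphere_sdf(grid, np.zeros(3), 0.6)
b = box_sdf(grid, np.zeros(3), np.full(3, 0.5))
mid = morph(a, b, 0.5)                 # the in-between frame at t = 0.5
```

Because the blend is a per-voxel weighted sum, updating a keyframe costs one resampling rather than a new simulation – which is what makes the scheme interactive.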
Next we approached the problem of rendering believable animated rolling smoke without any underlying simulation. The basic idea is to distribute points inside a volume and then, for each point, splat a volumetric particle into a common field. Instead of using explicit fields, we decided to do this implicitly, using a variant of world-space Worley noise to distribute the points. At each ray step we evaluate the distance to each nearby point and use that to evaluate the particle splat data. The rolling motion of the particles is achieved by rotating the particle noise splat lookup according to a vector field from the interpolation routine – in our test case below it was simply a rotation away from the middle axis. This gives a believable rolling motion. It is also crucial to note that the particles are stationary in world space; the notion of movement is created by moving the particle noise field in the opposite direction of the motion. The effect can be seen in the following video:
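A sketch of the implicit evaluation could look as follows. Everything here is illustrative – the hash, the falloff radius and the sine-based angular detail are stand-ins for the actual particle noise – but it shows the two key ideas: one jittered particle per Worley cell, found on the fly per sample, and motion created by shifting the lookup position against the wind (and rotating the noise phase) while the particles themselves never move:

```python
import numpy as np

def _hash(ix, iy, iz, seed):
    """Cheap deterministic integer hash mapped to [0, 1]."""
    h = (ix * 73856093) ^ (iy * 19349663) ^ (iz * 83492791) ^ (seed * 2654435761)
    return (h & 0xffffffff) / 0xffffffff

def feature_point(cell):
    """One jittered particle centre per unit grid cell (Worley-style)."""
    ix, iy, iz = (int(c) for c in cell)
    return np.array([ix + _hash(ix, iy, iz, 1),
                     iy + _hash(ix, iy, iz, 2),
                     iz + _hash(ix, iy, iz, 3)])

def density(p, t, wind=(0.0, 0.0, 0.0), spin=1.0):
    """Implicit smoke density at world position p and time t.

    The particles are stationary: apparent motion comes from shifting the
    lookup position against the wind, and rolling comes from rotating the
    splat's angular noise phase over time."""
    q = p - np.asarray(wind) * t          # advect the lookup, not the points
    base = np.floor(q).astype(int)
    best, offset = np.inf, np.zeros(3)
    for dx in (-1, 0, 1):                 # nearest particle in the 27 cells
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                c = feature_point(base + (dx, dy, dz))
                d = float(np.linalg.norm(q - c))
                if d < best:
                    best, offset = d, q - c
    fall = max(0.0, 1.0 - (best / 0.75) ** 2)   # radial splat falloff
    ang = np.arctan2(offset[2], offset[0]) + spin * t
    detail = 0.5 + 0.5 * np.sin(3.0 * ang)      # stand-in for real noise
    return fall * detail
```

The nested 27-cell search per ray step is also where the cost shows up, which matches the performance problem described below.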
This scheme does indeed give a believable smoke motion and allows the artists to control the animation precisely as intended. Unfortunately, it doesn’t perform as well as we had hoped: the implicit evaluation of the closest particles, combined with the evaluation of the particle noise, made it very slow. Future work would include pre-calculating the particle noise to alleviate the slow runtime performance.
The project ended in November 2014 and has pointed out some important trends in the search for quality improvements with smaller budgets within special effects. Through experiments and prototypes we have found that current commercial software is viable as a framework for special effects design late in the production pipeline – and that there is reason to believe we can replace at least part of the simulation-based tools with purely procedural tools for animating visual effects. Furthermore, we have established a good model for project collaboration between creative animation companies and R&D institutes such as the Alexandra Institute.