Jesper Mosegaard – Visual Computing Lab
Computer Graphics, Computer Vision and High Performance Computing

While we wait – Approaching Zero Driver Overhead
Sun, 11 Jan 2015

As I (ed.: Jesper Børlum, a former employee) was looking through the presentations from Siggraph Asia 2014, one presentation in particular caught my eye: Tristan Lorach's presentation on Nvidia's upcoming Command-List OpenGL extension. With all the focus this past year on reducing CPU-side driver overhead in the current graphics APIs, and with the new rendering APIs on the way (AMD's Mantle, Microsoft's DirectX 12, Apple's Metal), I decided to make an overview of the current recommendations for scene rendering using core OpenGL and to take a poke at Nvidia's new extension. This first article looks at the core OpenGL recommendations; the next article will cover Nvidia's new extension. I am writing this article because I wanted to get a better grasp of the implementation details in the excellent GTC / Siggraph performance presentations found here:
http://on-demand.gputechconf.com/gtc/2013/presentations/S3032-Advanced-Scenegraph-Rendering-Pipeline.pdf
http://on-demand.gputechconf.com/gtc/2014/presentations/S4379-opengl-44-scene-rendering-techniques.pdf
http://www.slideshare.net/tlorach/opengl-nvidia-commandlistapproaching-zerodriveroverhead

For performance results and shader code please refer to the Nvidia presentations.

Disclaimer – This post is a simplification of a complex topic. If you feel I have left out important details, please add them in the comments at the end or write to me.


Modern GPUs are absolute beasts. It never ceases to amaze me how much raw processing power they can handle – even standard gaming hardware. However, scene requirements are getting increasingly complex: more geometry, more types of materials, and new and more complex render effects. The GPU driver often ends up being a serious performance bottleneck when handling this complexity, which means that no matter how much GPU power you throw at the rendering, the overall performance is not going to increase.
A lot of stuff eats up CPU performance: scenegraph traversal, animation, render-list generation, sorting by state, all the driver interactions, etc.
Current driver performance culprits are:

  • Frequent GPU state changes (shader, parameters, textures, framebuffer etc.).
  • Draw commands.
  • Geometry stream changes.
  • Data transfers (uploads / read-backs).

All of these boil down to the driver eating up your precious CPU clock cycles.
Using the techniques below, most of this CPU driver overhead can be reduced to almost zero. In the following sections, I will look at several methods for reducing the overhead. Most achieve this simply by calling the driver less. Seems simple enough, but handling material changes, texture changes, buffer changes and state changes between the draw calls can get tricky. Also, note that most of these methods require a newer version of OpenGL. Some of the functions only just made it into the core specification (OpenGL 4.4 / 4.5).

A scene, in the context of this post, is a collection of objects, each consisting of sub-objects. A sub-object is a material plus a draw command. Objects are logical collections of sub-objects, each object with its own world transform matrix. A material consists of a shader program, parameters for that program and an OpenGL render state collection.
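In code, the terminology could look something like this – an illustrative sketch, with placeholder types (ParameterBlock, RenderState, DrawCommand) standing in for whatever your engine uses:

struct Material
{
    GLuint program;         // Shader program.
    ParameterBlock params;  // Parameters for the shader program.
    RenderState state;      // OpenGL render state collection.
};

struct SubObject
{
    Material material;      // Shading setup.
    DrawCommand draw;       // Ranges into the vertex / index buffers.
};

struct Object
{
    mat4 transform;         // World transform matrix.
    SubObject subobjects[]; // Logical collection of sub-objects.
};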

I have provided two naïve approaches below – one to scene rendering and one to uploading shader parameters – the two areas we will be focusing on.


Naïve scene rendering
This will act as the baseline for performance, and is what each improvement will try to improve on.

foreach(object in scene)
{
    foreach(subobject in object)
    {
        // Attaches the vertex and index buffers to the pipeline.
        SetupGeometry(subobject.geometry);

        // Updates active shaders if changed.
        // Uploads the material parameters.
        // Uploads the world transform parameter.
        SetMaterialIfChanged(subobject.material, object.transform);

        // Dispatch the draw call.
        Draw();
    }
}

This method imposes a large number of driver interactions:

  • Geometry streams are changed per sub-object.
  • Shaders are changed per sub-object, if different from current.
  • Shader parameters are uploaded per draw.
  • A draw call per sub-object.

Naïve parameter update
Uploading parameters, also known as uniform parameters, to shaders can impose a significant number of driver calls – especially if they are uploaded “the old-fashioned way”, where each parameter upload is a separate call to glUniform. This will act as the baseline for performance, and is what each improvement will try to improve on.

foreach(object in scene)
{
    ...
    foreach(batch in object.materialBatches)
    {
        if (batch.material != currentMaterial)
        {
            // Apply the active program to the pipeline.
            glUseProgram(batch.material.program);

        // Uniforms are program object state, which needs to be updated for each program!
        glUniform(transformLoc, object.transform);
            glUniform(diffuseColorLoc, batch.material.diffuseColor);
            glUniform(...);
            ...
        }

        // Dispatch draw.
    }
}

This technique has several weaknesses. It makes many separate driver calls, which the driver cannot predict. To make it even worse, we need to re-upload all the parameters each time we change the shader program – uniform values are stored in the shader program object, not in the general OpenGL state. In the past, I have solved this by maintaining a CPU-side parameter state cache (a proxy) per shader program. The proxy is then responsible for re-uploading a uniform when it becomes dirty. This is a workable solution if you cannot use buffer objects, which trivialize the sharing of parameter data across shader programs, as seen later in this post.
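A minimal sketch of such a proxy, assuming a single four-float parameter for illustration:

struct UniformProxy
{
    GLint location;  // Uniform location in the owning program.
    float value[4];  // CPU-side copy of the parameter value.
    bool dirty;      // Set when the value changes, or when the program state is unknown.
};

void FlushUniform(UniformProxy& proxy)
{
    if (proxy.dirty)
    {
        glUniform4fv(proxy.location, 1, proxy.value);
        proxy.dirty = false;
    }
}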


Improvement 1 – Single buffer per object
The obvious improvement to the naïve scene rendering is to move the buffers from the sub-objects into a collection of collapsed buffers in the containing object. This allows us to move the buffer bind call from the inner loop to the outer loop, which dramatically lowers the number of geometry driver calls in a scene where each object contains many sub-objects. Each sub-object now needs to know the correct stream offset into the collapsed buffers to be able to draw correctly. When loading geometry, you will need to collapse all sub-object buffers and offset the vertex indices to reflect the new positions in the collapsed buffer.
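A sketch of that load-time collapse – the container names are illustrative:

// Collapse all sub-object geometry into one vertex / index buffer pair.
foreach(subobject in object)
{
    uint baseVertex = collapsedVertices.count;

    // Append the vertices, then append the indices offset by the new base.
    collapsedVertices.append(subobject.vertices);
    subobject.indexOffset = collapsedIndices.count * sizeof(uint); // Byte offset used by the draw call.
    foreach(index in subobject.indices)
        collapsedIndices.append(index + baseVertex);
}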

foreach(object in scene)
{
    // Attaches the vertex and index buffers to the pipeline.
    SetupGeometry(object.geometry);

    foreach(subobject in object)
    {
        // Updates active shaders if changed.
        // Uploads the material parameters.
        // Uploads the world transform parameter.
        SetMaterialIfChanged(subobject.material, object.transform);

        // Dispatch the draw call.
        Draw();
    }
}


Improvement 2 – Sort sub-objects by material
Sorting by complete materials (same shaders, render state and material parameters – for now) achieves two things: we can draw several sub-objects at a time, and we avoid costly shader changes.
The main difference to the render loop is that instead of looping over each sub-object, we now loop over material batches. A material batch contains the material information, along with information about which parts of the geometry are to be rendered using that material setup.
During geometry load, you will need to sort by material so that each batch contains enough information to render all the sub-objects it contains.
You can opt to rearrange the vertex buffer data so that the draw command ranges can be “grown” to draw several sub-objects in a single command.
When drawing you can choose between two different ways:

  • Looping over each of the sub-object buffer ranges in the batch, drawing each with glDrawElements.
  • Submitting all the draw calls in one call using the slightly improved glMultiDrawElements (sketched after the render loop below).

The second, multi-draw approach just executes the loop for you inside the driver – hence only a slight improvement.

foreach(object in scene)
{
    // Attaches the vertex and index buffers to the pipeline.
    SetupGeometry(object.geometry);

    foreach(batch in object.materialBatches)
    {
        // Updates active shaders if changed.
        // Uploads the material parameters.
        // Uploads the world transform parameter.
        SetMaterialIfChanged(batch.material, object.transform);

        // Dispatch the draw call.
        foreach(range in batch.ranges)
            glDrawElements(GL_TRIANGLES, range.count, GL_UNSIGNED_INT, range.offset);
    }
}
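The multi-draw variant replaces the inner loop with a single call – a sketch, assuming the same batch.ranges as above:

// Gather the ranges into the arrays glMultiDrawElements expects.
GLsizei counts[MAX_RANGES];
const void* offsets[MAX_RANGES];
GLsizei numRanges = 0;

foreach(range in batch.ranges)
{
    counts[numRanges] = range.count;
    offsets[numRanges] = (const void*)range.offset; // Byte offset into the index buffer.
    numRanges++;
}

// One driver call instead of one per range.
glMultiDrawElements(GL_TRIANGLES, counts, GL_UNSIGNED_INT, offsets, numRanges);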


Improvement 3 – Buffers for uniforms
Instead of uploading each uniform separately as shown in the naïve parameter update, OpenGL allows you to store uniforms in buffer objects – so-called Uniform Buffer Objects (UBOs). Instead of a glUniform call per parameter, you upload a chunk of uniforms using a buffer upload such as glBufferData or glBufferSubData. When uploading data into buffers, it is important to group uniforms according to their frequency of change. A practical grouping could look something like the following:

  • Scene globals – camera etc.
  • Active lights.
  • Material parameters.
  • Object specifics – transform etc.

Grouping parameters allows you to leave infrequently changed data on the GPU, while only the dynamic data is re-uploaded. A key UBO feature is that, unlike glUniform state, UBOs allow parameter sharing across shader programs. I am not going to write a full usage guide on UBOs – one can be found here.
There are different ways to use Uniform Buffer Objects. The recommended way depends on whether the data you are using is fairly static or dynamic. Below are examples of both. Note – you can mix the methods as best fits your use case.
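For completeness, a one-time setup sketch – the block name MaterialBlock is illustrative, and since OpenGL 4.2 the binding can also be set directly in GLSL with layout(binding = ...):

// Create the uniform buffer and allocate storage.
GLuint uboMaterials;
glGenBuffers(1, &uboMaterials);
glBindBuffer(GL_UNIFORM_BUFFER, uboMaterials);
glBufferData(GL_UNIFORM_BUFFER, bufferSize, NULL, GL_DYNAMIC_DRAW);

// Associate the shader's uniform block with a binding slot.
GLuint blockIndex = glGetUniformBlockIndex(program, "MaterialBlock");
glUniformBlockBinding(program, blockIndex, UBO_MAT_SLOT);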

Static buffer data:
If the data changes infrequently, upload the parameters for all the sub-objects in one go into a large UBO, then target the correct parameters using glBindBufferRange calls. Note that the offsets passed to glBindBufferRange must respect the GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT limit (often 256 bytes), so pad each entry accordingly:

#define UBO_GLOBAL_SLOT 0
#define UBO_TRANS_SLOT 1
#define UBO_MAT_SLOT 2

// Update combined uniform buffers for all objects.
UpdateUniformBuffers();

// Bind global uniform buffers.
glBindBufferBase(GL_UNIFORM_BUFFER, UBO_GLOBAL_SLOT, uboGlobal);

foreach(object in scene)
{
    ...
    // Bind object uniform buffer.
    glBindBufferRange(GL_UNIFORM_BUFFER, UBO_TRANS_SLOT, uboTransforms, object.transformOffset, matrixSize);

    foreach(batch in object.materialBatches)
    {
        // Bind material uniform buffer.
        glBindBufferRange(GL_UNIFORM_BUFFER, UBO_MAT_SLOT, uboMaterials, batch.materialOffset, mtlSize);

        if (batch.material.program != currentProgram)
        {
            // Apply the active program to the pipeline.
            glUseProgram(batch.material.program);
        }

        // Draw.
    }
}

Dynamic buffer data:
If the data changes frequently, upload the parameters into small UBOs right before each draw. The example below takes advantage of the direct state access (DSA) functions introduced in OpenGL 4.5 and shows how such a render loop could look.

#define UBO_GLOBAL_SLOT 0
#define UBO_TRANS_SLOT 1
#define UBO_MAT_SLOT 2

// Bind buffers to their respective slots.
glBindBufferBase(GL_UNIFORM_BUFFER, UBO_GLOBAL_SLOT, uboGlobal);
glBindBufferBase(GL_UNIFORM_BUFFER, UBO_TRANS_SLOT, uboTransforms);
glBindBufferBase(GL_UNIFORM_BUFFER, UBO_MAT_SLOT, uboMaterials);

foreach(object in scene)
{
    ...
    // Upload object transform.
    glNamedBufferSubData(uboTransforms, 0, matrixSize, object.transform);

    foreach(batch in object.materialBatches)
    {
        // Upload batch material.
        glNamedBufferSubData(uboMaterials, 0, mtlSize, batch.material);

        if (batch.material.program != currentProgram)
        {
            // Apply the active program to the pipeline.
            glUseProgram(batch.material.program);
        }

        // Draw.
    }
}


Note – Uploading scattered data changes to a static buffer using compute + SSBO
Nvidia mentioned a cute way to scatter data into a buffer. Normally, if the changes are non-contiguous in memory, you need to upload them using a series of smaller glBufferSubData calls – or alternatively re-upload the entire buffer from scratch – which can degrade performance significantly. They suggest placing all the changes in an SSBO and performing the scatter-write using a compute shader. A shader storage buffer object (SSBO) is just a user-defined OpenGL buffer object that shaders can read and write. I have yet to try this technique out, so I cannot comment on whether the performance makes it feasible. I really like the idea though.
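The CPU side might look roughly like this – the compute shader itself (which reads index/value pairs from the SSBO and writes them into the target buffer) is left out, and all names are illustrative:

// Upload the packed list of (index, value) changes in one contiguous transfer.
glNamedBufferSubData(updateSSBO, 0, numUpdates * updateSize, updateData);

// Bind the change list and the target buffer for the compute shader.
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, updateSSBO);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, targetBuffer);

// One thread per update, assuming a local workgroup size of 128.
glUseProgram(scatterProgram);
glDispatchCompute((numUpdates + 127) / 128, 1, 1);

// Make the writes visible before the target buffer is read as a UBO.
glMemoryBarrier(GL_UNIFORM_BUFFER_BARRIER_BIT);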


Improvement 4 – Shader-based material / transform lookup
Improvement 3 introduced UBOs to improve uniform upload performance. Unfortunately, there are still many glBindBufferRange calls. It is possible to remove those binds by binding the entire buffer once and having the shader index into it. The index is communicated through a generic vertex attribute, as shown below.

#define UBO_GLOBAL_SLOT 0
#define UBO_TRANS_SLOT 1
#define UBO_MAT_SLOT 2

// Update combined uniform buffers for all objects.
UpdateUniformBuffers();

// Bind buffers to their respective slots.
glBindBufferBase(GL_UNIFORM_BUFFER, UBO_GLOBAL_SLOT, uboGlobal);
glBindBufferBase(GL_UNIFORM_BUFFER, UBO_TRANS_SLOT, uboTransforms);
glBindBufferBase(GL_UNIFORM_BUFFER, UBO_MAT_SLOT, uboMaterials);

foreach(object in scene)
{
    ...

    foreach(batch in object.materialBatches)
    {
        if (batch.material.program != currentProgram)
        {
            // Apply the active program to the pipeline.
            glUseProgram(batch.material.program);
        }

        // Set buffer indices - shader program specific location!
        glVertexAttribI2i(indexAttribLoc, object.transformLoc, batch.materialLoc);

        // Draw.
    }
}

Inside the shader, you read a generic vertex attribute like any other vertex attribute. Since no attribute array is enabled for this location, the value set with glVertexAttribI2i acts as a constant for every vertex in the draw.


Improvement 5 – Bindless resources
Changing texture state has until recently been a major headache for efficient batching. Sure, it is possible to store several textures inside an array texture and then index into the different layers, but there are several limitations and it is generally a pain to work with. OpenGL normally requires the application to bind textures to texture slots prior to dispatching the draw calls. Texture names are merely CPU-side handles, like all other OpenGL objects, but the new ARB_bindless_texture extension allows the application to retrieve a unique 64-bit GPU handle that the shader can use to look up texture data without binding first. Unlike the CPU-side handles, these GPU handles can be stored in uniform buffers. GPU handles can be set like any other uniform using glUniformHandleui64ARB, but it is strongly recommended to use UBOs (or similar – see Improvement 3). It is the application's responsibility to make sure textures are resident before dispatching the draw call. More information can be found in the extension spec here.
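Basic usage looks like this (sketch):

// Retrieve the unique 64-bit GPU handle for the texture.
GLuint64 handle = glGetTextureHandleARB(texture);

// The texture must be resident before any draw call that samples it.
glMakeTextureHandleResidentARB(handle);

// The handle can now be stored in a UBO / SSBO and used directly by the shaders.
...

// Release residency once no in-flight draws need the texture.
glMakeTextureHandleNonResidentARB(handle);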
Nvidia has an extension that allows bindless buffers as well – More information can be found here. This is something we will have a look at when looking at the new Nvidia commandlist extension in the next article.


Improvement 6 – The indirect draw commands
A recent addition to the numerous ways to draw in OpenGL is the indirect draw command. Rather than submitting each draw call from the CPU, it is now possible to store all the draw information inside a buffer, which the GPU then loops through when drawing. The buffer contains an array of predefined structures, which in the case of glMultiDrawElementsIndirect looks like this:

typedef struct
{
    GLuint count;          // Number of indices to draw.
    GLuint instanceCount;  // Number of instances (1 for plain draws).
    GLuint firstIndex;     // First index in the index buffer.
    GLuint baseVertex;     // Added to each index before fetching vertices.
    GLuint baseInstance;   // Offsets instanced attribute fetches - otherwise free for our own use.
} DrawElementsIndirectCommand;

Using an indirect draw command works much like the glMultiDrawElements call described in Improvement 2. An added benefit is that you can create your GPU work list directly on the GPU – for example, culling your scene from a compute shader rather than on the CPU.

There is a special bind target for indirect buffers called GL_DRAW_INDIRECT_BUFFER; the driver reads the draw data from the buffer bound there. It is illegal to submit an indirect draw call using client memory.
Using indirect draws, you no longer need a separate draw command submission for each sub-object in a material batch as described in Improvement 2. To draw efficiently, you only have to fill a buffer with the structs that describe the ranges of the objects you wish to draw using the active shader. This can be a huge draw-command improvement. I have yet to test whether you get improved performance by growing the draw ranges through physically rearranging the vertex buffers.
Which material parameters and matrix to use when drawing each sub-object can be handled much like in Improvement 4, through a matrix / material array index. However, the method has to differ a bit, as we are no longer able to set a generic vertex attribute between each drawn sub-object. The indirect struct contains more information than we need for plain drawing – the baseInstance member, for example. By using it, we can communicate both the material and the matrix index (read in the shader, e.g. via an instanced vertex attribute or the gl_BaseInstanceARB input from ARB_shader_draw_parameters), so the shader program can get the data it needs. How you choose to split the bits comes down to how much you need to draw.
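A load-time sketch of filling the command buffer, packing the two indices into baseInstance – the 16/16 bit split and the member names are illustrative:

foreach(subobject in batch)
{
    DrawElementsIndirectCommand cmd;
    cmd.count         = subobject.indexCount;
    cmd.instanceCount = 1; // Plain, non-instanced draw.
    cmd.firstIndex    = subobject.firstIndex;
    cmd.baseVertex    = subobject.baseVertex;

    // Matrix index in the high bits, material index in the low bits.
    cmd.baseInstance  = (subobject.transformIndex << 16) | subobject.materialIndex;

    commands.append(cmd);
}

// Upload the commands for the entire scene in one go.
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, scene.indirectBuffer);
glBufferData(GL_DRAW_INDIRECT_BUFFER, commands.count * sizeof(DrawElementsIndirectCommand), commands.data, GL_STATIC_DRAW);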

#define UBO_GLOBAL_SLOT 0
#define UBO_TRANS_SLOT 1
#define UBO_MAT_SLOT 2

// Update combined uniform buffers for all objects.
UpdateUniformBuffers();

// Bind buffers to their respective slots.
glBindBufferBase(GL_UNIFORM_BUFFER, UBO_GLOBAL_SLOT, uboGlobal);
glBindBufferBase(GL_UNIFORM_BUFFER, UBO_TRANS_SLOT, uboTransforms);
glBindBufferBase(GL_UNIFORM_BUFFER, UBO_MAT_SLOT, uboMaterials);

// Bind indirect buffer for entire scene.
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, scene.indirectBuffer);

foreach(object in scene)
{
    ...
    
    foreach(batch in object.materialBatches)
    {
        if (batch.material.program != currentProgram)
        {
            // Apply the active program to the pipeline.
            glUseProgram(batch.material.program);
        }

        // Draw the batch - indirectOffset is a byte offset into the bound indirect buffer.
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, (const void*)batch.indirectOffset, batch.numIndirects, 0);
    }
}

Unfortunately, it is not yet possible to change state (render state and shaders) from within the indirect draw commands. This is something I will look at in the next article on the Nvidia CommandList extension.


This post turned out bigger than I had first anticipated, but efficient drawing is tricky. If you made it this far – good for you! I hope to get time to write the follow-up article as soon as real life allows.

Communicating through visual effects
Wed, 26 Nov 2014

Communication is hard – especially communicating an artistic vision for visual effects. The Shareplay Foundation sponsored a project investigating new software-based approaches to aiding artists in such communication – and this post explains our findings.

Together with our project partners Ja Film, Sunday Studio, Redeye Film and WilFilm, we have come up with ways to improve the communication of animated volumetric effects like smoke, fumes, explosions and fluids. The production of shots for film, commercials and TV goes through several phases – each phase increases the visual quality while the artistic choices get locked down. Very early on, the camera movement, animation timing and effect timing are locked down. This typically involves very crude assets, which are later replaced by more detailed ones. That is fairly straightforward for normal “surface assets” like a typical character or scene prop. It is much more difficult when it involves simulation-based volumetric effects like smoke, fumes, explosions or fire. These types of effects are very expensive to produce due to computationally intensive simulations and long rendering times – as well as the high level of artist experience required to make them look good. Because of this, the early pre-visualization often looks very crude – and poorly communicates the artistic vision of the creative artist. Take for example an explosion: to save time, the general timing of the explosion is often blocked out with a rapidly expanding phong-shaded sphere. This is great for communicating the animation timing, but says nothing about the visual aesthetics of the explosion (rolling smoke, balls of fire, pressure wave, contrast between fire and smoke etc.).

We have identified two themes for our experiments:

  • Pushing the visual decisions much further down the shot production pipeline.
  • Procedural animation of volumetric effects.

We go into each of these themes under the next two headlines.

Pushing the visual decisions much further down the shot production pipeline.

Last year, we did a project on how fast procedural volumetric effects could empower the artist and re-envision the way artists work with volumetric effects. That project was also funded by the Shareplay Foundation, with participation from our Computer Graphics Lab and Sunday Studio – more information on it here. In the current project on visual communication, we chose to build on the experiments and knowledge from that project.

Volumetric effects generated directly within Adobe AfterEffects

One of the major issues when it comes to using volumetric effects in production is the workflow. The effects are generally made in isolation, pre-rendered and then later integrated into the final shot. We wanted to keep the flexibility all the way into the compositing programs, without locking down the special effect. Our post-processing program of choice was Adobe AfterEffects. We realized a prototype implementation in AfterEffects and found that it makes perfect sense to create procedural volumetric effects as late as post-processing. A perfect example is a cloud backdrop, which would usually be composited from photographs of clouds – whereas our approach allows for custom generation of this content within the compositing program itself. Some challenges also arose that could be a source of future research. First of all, it turns out to be very difficult to impose a 3D workflow on a predominantly 2D application. All our previous experiments had been done in Autodesk Maya, where all tools are meant to work in 3D – and this was naturally not the case for AfterEffects. The difference is especially obvious in the handling of the camera and the construction of base geometry for the special effect. At a later point we would like to investigate efficient user interfaces for 3D navigation in a post-processing program – and for efficiently generating base geometry within AfterEffects.

Procedural animation of volumetric effects.

Special effects such as smoke or fluids often evolve over time – splashing water, drifting clouds or expanding explosions – and timing plays a major role in the communication. In a traditional workflow, the timing is decided by the input parameterization of the underlying simulations. Experimenting with those parameters is a very time-consuming process marred by trial-and-error. We wanted to build a procedural animation system for quick-and-dirty volumetric animations without any waiting. Specifically, we wanted a system where the artist is able to:

  • Key-frame the shape of effects with exact timing.
  • Control the shape of the effect through geometry.
  • Change every key frame at any time without waiting for a simulation.
  • Create the rolling motions of smoke.

All of these requirements are designed to make it easy for the artists to get the results they intend – quickly.

We needed a way to calculate the intermediate frames in between the artist-defined key frames. Our first approach was a full flow-based registration between shapes. This had to be done each time the geometries were updated – but only once per pair of key frames. Unfortunately, it did not turn out as well as we had hoped: the calculations needed to derive a good-quality registration were too time consuming. The vector field between two shapes, computed using the Demons framework for flow registration, can be seen below.

Instead, we came up with the idea of converting the geometry of each keyframe into a signed distance field and doing a standard linear interpolation (or morph) between those distance fields. To retain control of the behaviour, we defined two positions in each keyframe – an “entry” and an “exit”. The entry and exit of two consecutive keyframes overlap in the interpolation itself – but moved by the true offset in world space. The results of this interpolation scheme for volumetric effects can be seen in the animation below.
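The core of the morph is just a per-voxel blend of the two fields – a sketch, leaving out the entry/exit offsetting:

// Interpolate two signed distance fields sampled on the same grid, t in [0, 1].
foreach(voxel v in grid)
    sdfOut[v] = (1.0f - t) * sdfA[v] + t * sdfB[v];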

Next, we approached the problem of rendering believable animated rolling smoke without any underlying simulation. The basic idea is to distribute points inside a volume and, for each point, splat a volumetric particle into a common field. Instead of using fields, we decided to do this implicitly, using a variant of world-space Worley noise to distribute the points. For each ray step we evaluate the distance to each point and use that to evaluate the particle splat data. The rolling motion of the particles is achieved by rotating the particle noise splat lookup according to a vector field from the interpolation routine – in our test case below, simply rotating away from the middle axis. This gives a believable rolling motion. It is also crucial to note that the particles are stationary in world space; the notion of movement is created by moving the particle noise field in the opposite direction of the motion. The effect can be seen in the following video:

This scheme does indeed give a believable smoke motion, which allows the artists to control the animation precisely as intended. Unfortunately, it does not perform as well as we had hoped: the implicit evaluation of the closest particles, along with the evaluation of the particle noise, makes it very slow. Future work would be pre-calculating the particle noise to alleviate the slow runtime performance.

Conclusion

The project ended in November 2014 and has pointed out some important trends in the search for quality improvements with smaller budgets within special effects. Through experiments and prototypes we have found that current commercial software is viable as a framework for special effects design late in the production pipeline – and that there is reason to believe we can replace at least part of the simulation-based tools with purely procedural tools for animating visual effects. Furthermore, we have established a good model for project collaboration between creative animation companies and R&D institutes such as the Alexandra Institute.


Xcelgo case – Custom real-time rendering optimization
Tue, 25 Nov 2014

In this project we helped Xcelgo with a brand new custom DirectX 11 renderer as a replacement for their existing fixed-function DirectX 9.0 renderer.

Xcelgo provides virtual automation software for 3D modelling across the life cycle of automated material handling systems – like airport baggage handling or large warehouse storage systems. The purpose of their product is to eliminate the risks involved in building these large and very expensive systems by allowing the system to be simulated and modelled up front.

Experior, the 3D modelling system by Xcelgo, is built around a fixed-function DirectX 9.0 pipeline programmed in C# through wrapper code. DirectX 9.0 is characterized by a lack of scalability because of the driver overhead imposed by the dated rendering paradigm. The 3D simulation is built from a large number of user-generated primitives which are able to move freely around the scene. Each of these is rendered individually, which causes the GPU and CPU to run in lockstep.

The fixed-function rendering pipeline supports only very limited lighting techniques, limiting the visual appeal of the presented scenes. Even in engineering-type visualizations, visual quality gets attention and opens the door to an expanded customer base.

Xcelgo wanted to prepare for future scenarios with larger models and a more easily maintained rendering framework – and decided to update the rendering pipeline to a modern shader-based DirectX 11 pipeline. In close collaboration we have designed and implemented a completely new rendering pipeline.

Integration

The new pipeline supports a lot of features which will help Xcelgo further push the limits of virtual automation:

* DirectX 11 rendering pipeline written from the ground up, based on Xcelgo's domain knowledge about their customers' wishes.
* Intelligent optimization of scene rendering, so that expert rendering knowledge is not needed when designing the scene geometry.
* Threaded rendering, freeing the rest of the workstation to run the simulation.
* Massive increase in the number of dynamic objects the system can handle – from hundreds of primitives to tens of thousands of skinned and textured models.
* Support for instanced rendering of skinned robots.
* Support for fully detailed CAD line renderings in full resolution, to better guide modelling engineers when building systems.
* Modern cascaded shadow mapping fully enveloping the scene in crisp shadows.
* Rasterization-based pixel-perfect picking of objects in the scene, vastly improving runtime performance when selecting objects.
* Modern surface shading, much improving the visual aesthetics of the scene.

The project is now completed, and Xcelgo is hard at work finishing the integration of the new renderer, which should be complete in time for Experior 6.0.

Afternoon on Visual Communication of Spatial Special Effects
Wed, 05 Nov 2014

On the 14th of November at 15:00 in Aarhus, Denmark, we will give an afternoon talk about an applied research project on visually based communication of spatial special effects such as clouds and smoke. Prototypes were developed and integrated into After Effects and Autodesk Maya.

Sign-up here on the (Danish) invitation.

Clouds procedurally rendered directly in After Effects

Fast simulation of smoke based only on morphing of individual volumetric frames

The project was sponsored by the Shareplay Foundation and carried out together with the animation companies Ja-Film, Redeye, Wil-film and Sunday Studio.

Going to SIGGRAPH 2014
Fri, 08 Aug 2014

I am attending the premier computer graphics conference, Siggraph, this year – together with some colleagues. Let me know if you want to meet up in Vancouver between the 10th and 14th of August 2014, or if you want an overview of current trends once I am back in Denmark.

Launch of Elementacular Maya plug-in
Fri, 08 Aug 2014

I am very happy to announce that the Elementacular Maya plug-in for interactive modelling and rendering of high-quality clouds has been released in an Early Adopters version (64-bit Windows, Maya 2013-2015, OpenGL 4.4). The entire lab is very excited to get our first experience launching a commercial off-the-shelf software product.

Procedural Cell Generation – Results
Fri, 21 Mar 2014

As outlined in a previous post, we have developed a customized plug-in for Maya which assists MediaFarm in creating compelling cellular surface geometry. We have just received some initial results from one of their productions.

The photo above shows a still from the final result. The contributions of our plug-in are highlighted in the image below.

MediaFarm on the results:
“All in all, cooperation with the Alexandra Institute has provided us with some tools, which in certain situations have enabled us to produce a better and faster result than we would normally be able to. We expect that we will continue to benefit from the tools in future projects, and we look forward to discovering new ways to take advantage of them.
[…]
We can highly recommend a collaboration with the Alexandra Institute to other companies.”


Material from the Future of 3D conference
Thu, 02 Jan 2014

On the 29th of November 2013, the Future of 3D conference was held in Aarhus, Denmark, with 84 engaged participants and 8 exciting lectures. The following material is available from the day:

Elementacular Beta start!
Tue, 10 Dec 2013

We would like to invite you to participate in the Elementacular beta test. We will start sending out beta mails to the people who have already signed up on our project webpage http://www.elementacular.com by the end of tomorrow, Wednesday.

We hope that you will enjoy working with procedurally generated effects as much as we do!

Elementacular videos and Beta application opened
Wed, 27 Nov 2013

We have just released some videos showcasing our upcoming plugin for Autodesk Maya. Working with a visual effects professional and film maker does have its privileges.

Come take a look at how the tool is used on our Vimeo channel at http://vimeo.com/channels/elementacular

Interested in trying it out? Apply for the beta, starting soon, at http://www.elementacular.com
