Noe’s tutorial on deforming 3D geometry using RBFs
https://viscomp.alexandra.dk/?p=312 (Fri, 14 Aug 2009)

In the following I will present a method for deforming three dimensional geometry using a technique relying on radial basis functions (RBFs). These are mathematical functions that take a real number as input and return a real number. RBFs can be used to create a smooth interpolation between values known only at a discrete set of positions. The term radial is used because the input argument is typically the distance between a fixed position in 3D space and another position at which we would like to evaluate a certain quantity.

The tutorial will give a short introduction to the linear algebra involved. However, the source code contains a working implementation of the technique, which may be used as a black box.

Source code with a Visual Studio 2005 solution can be found here. The code should also compile on other platforms.

Interpolation by radial basis functions

Assume that the value of a scalar valued function $$F : \mathbb{R}^3 \rightarrow \mathbb{R}$$ is known in $$M$$ distinct discrete points $$\mathbf{x}_i$$ in three dimensional space. Then RBFs provide a means for creating a smooth interpolation function of $$F$$ in the whole domain of $$\mathbb{R}^3$$. This function is written as a sum of $$M$$ evaluations of a radial basis function $$g(r_i) : \mathbb{R} \rightarrow \mathbb{R}$$, where $$r_i$$ is the distance between the point $$\mathbf{x} = (x, y, z)$$ to be evaluated and $$\mathbf{x}_i$$:

$$!F(\mathbf{x}) = \sum_{i=1}^M a_i g(||\mathbf{x} - \mathbf{x}_i||) + c_0 + c_1 x + c_2 y + c_3 z, \quad \mathbf{x} = (x,y,z) \qquad \mathbf{(1)}$$

Here $$a_i$$ are scalar coefficients and the last four terms constitute a first degree polynomial with coefficients $$c_0$$ to $$c_3$$. These terms describe an affine transformation which cannot be realised by the radial basis functions alone. From the $$M$$ known function values $$F( x_i, y_i, z_i ) = F_i$$ we can assemble a system of $$M+4$$ linear equations: $$\mathbf{G} \mathbf{A} = \mathbf{F}$$,
where $$\mathbf{F} = (F_1, F_2, \ldots, F_M, 0, 0, 0, 0)^T$$, $$\mathbf{A} = (a_1, a_2, \ldots, a_M, c_0, c_1, c_2, c_3)^T$$ and $$\mathbf{G}$$ is an $$(M+4) \times (M+4)$$ matrix:

$$! \mathbf{G} = \left[\begin{array}{cccccccc} g_{11} & g_{12} & \cdots & g_{1M} & 1 & x_1 & y_1 & z_1 \\ g_{21} & g_{22} & \cdots & g_{2M} & 1 & x_2 & y_2 & z_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots \\ g_{M1} & g_{M2} & \cdots & g_{MM} & 1 & x_M & y_M & z_M \\ 1 & 1 & \cdots & 1 & 0 & 0 & 0 & 0 \\ x_1 & x_2 & \cdots & x_M & 0 & 0 & 0 & 0 \\ y_1 & y_2 & \cdots & y_M & 0 & 0 & 0 & 0 \\ z_1 & z_2 & \cdots & z_M & 0 & 0 & 0 & 0 \end{array}\right] $$

Here $$g_{ij} = g(|| \mathbf{x}_i - \mathbf{x}_j ||)$$. A number of choices for $$g$$ will result in a unique solution of the system. In this tutorial we use the shifted log function:

$$! g(t) = \sqrt{\log(t^2 + k^2)}, \quad k^2 \geq 1$$
with $$k = 1$$. Solving the equation system for $$\mathbf{A}$$ gives us the coefficients to use in equation $$\mathbf{(1)}$$ when interpolating between known values.
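
As a concrete illustration, here is a minimal sketch (not the tutorial's actual source) of the shifted log basis with $$k = 1$$ and the evaluation of equation $$\mathbf{(1)}$$, assuming the coefficients have already been solved for. All names are illustrative only:

#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

double g(double t)
{
	const double k = 1.0;                       // shift parameter, k^2 >= 1
	return std::sqrt(std::log(t * t + k * k));  // shifted log basis
}

// F(x) = sum_i a_i g(||x - x_i||) + c0 + c1*x + c2*y + c3*z
double evaluateRBF(const std::vector<Vec3>& centers,  // the x_i
                   const std::vector<double>& a,      // M radial coefficients
                   const double c[4],                 // polynomial coefficients
                   const Vec3& p)
{
	double sum = c[0] + c[1] * p.x + c[2] * p.y + c[3] * p.z;
	for (std::size_t i = 0; i < centers.size(); ++i)
	{
		const double dx = p.x - centers[i].x;
		const double dy = p.y - centers[i].y;
		const double dz = p.z - centers[i].z;
		sum += a[i] * g(std::sqrt(dx * dx + dy * dy + dz * dz));
	}
	return sum;
}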

 

Interpolating displacements

How can RBFs be used for deforming geometry? Well, assume that the deformation is known for $$M$$ 3D positions $$\mathbf{x}_i$$, and that this information is represented by a vector $$\mathbf{u}_i$$ describing the 3D displacement of the geometry that was positioned at $$\mathbf{x}_i$$ in the original, undeformed state. You can think of the positions $$\mathbf{x}_i$$ as control points that have been moved to positions $$\mathbf{x}_i + \mathbf{u}_i$$. The RBF interpolation method can now be used for interpolating these displacements to other positions.

Using the notation $$\mathbf{x}_i = (x_i, y_i, z_i)$$ and $$\mathbf{u}_i = (u^x_i, u^y_i, u^z_i)$$, three linear systems are set up as above, letting the displacement components $$u$$ play the role of the known function values $$F$$ from the previous section:

$$!\mathbf{G} \mathbf{A}_x = (u^x_1, u^x_2, \ldots, u^x_M, 0, 0, 0, 0)^T$$
$$!\mathbf{G} \mathbf{A}_y = (u^y_1, u^y_2, \ldots, u^y_M, 0, 0, 0, 0)^T$$
$$!\mathbf{G} \mathbf{A}_z = (u^z_1, u^z_2, \ldots, u^z_M, 0, 0, 0, 0)^T$$

where $$\mathbf{G}$$ is assembled as described above. Solving for $$\mathbf{A}_x$$, $$\mathbf{A}_y$$, and $$\mathbf{A}_z$$ involves a single matrix inversion and three matrix-vector multiplications, and gives us the coefficients for interpolating displacements in all three directions by the expression $$\mathbf{(1)}$$.
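
To make the assembly step concrete, here is a sketch of building $$\mathbf{G}$$ from the control points, with the shifted log basis ($$k = 1$$) inlined. The row-major flat storage and the name assembleG are illustrative only; the tutorial's own code builds and solves the system with the newmat library:

#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y, z; };

// Assemble the (M+4)x(M+4) matrix G, stored row-major in a flat vector.
std::vector<double> assembleG(const std::vector<Point>& pts)
{
	const std::size_t M = pts.size();
	const std::size_t n = M + 4;
	std::vector<double> G(n * n, 0.0);  // bottom-right 4x4 block stays zero
	for (std::size_t i = 0; i < M; ++i)
	{
		for (std::size_t j = 0; j < M; ++j)
		{
			const double dx = pts[i].x - pts[j].x;
			const double dy = pts[i].y - pts[j].y;
			const double dz = pts[i].z - pts[j].z;
			const double t2 = dx * dx + dy * dy + dz * dz;
			G[i * n + j] = std::sqrt(std::log(t2 + 1.0));  // g_ij with k = 1
		}
		// polynomial columns of row i, and the mirrored bottom rows
		G[i * n + M]     = 1.0;       G[ M      * n + i] = 1.0;
		G[i * n + M + 1] = pts[i].x;  G[(M + 1) * n + i] = pts[i].x;
		G[i * n + M + 2] = pts[i].y;  G[(M + 2) * n + i] = pts[i].y;
		G[i * n + M + 3] = pts[i].z;  G[(M + 3) * n + i] = pts[i].z;
	}
	return G;
}

Since $$\mathbf{G}$$ depends only on the control point positions, it can be inverted (or factored) once; when the displacements change, only the three matrix-vector products need to be recomputed. This is what makes the UpdateFunctionValues method in the class below cheap.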

 

The source code

In the source code accompanying this tutorial you will find the class RBFInterpolator which has an interface like this:

class RBFInterpolator
{
public:
	RBFInterpolator();
	~RBFInterpolator();

	// create an interpolation function f that obeys F_i = f(x_i, y_i, z_i)
	RBFInterpolator(vector<float> x, vector<float> y, vector<float> z, vector<float> F);

	// specify new function values F_i while keeping the same positions (x_i, y_i, z_i)
	void UpdateFunctionValues(vector<float> F);

	// evaluate the interpolation function f at the 3D position (x,y,z)
	float interpolate(float x, float y, float z);

private:
            ...
};

This class implements the interpolation method described above using the newmat matrix library. It is quite easy to use: just fill std::vectors with the $$x_i$$, $$y_i$$ and $$z_i$$ components of the positions where the value of $$F$$ is known, and another std::vector with the $$F_i$$ values. Then pass these vectors to the RBFInterpolator constructor, and it will be ready to interpolate. The $$F$$ value at any position is then evaluated by calling the ‘interpolate’ function. If some of the $$F_i$$ values change at any time, the interpolator can be quickly updated using the ‘UpdateFunctionValues’ method.

We want to deform a triangle surface mesh. Meshes are stored in a class TriangleMesh and loaded from OBJ files.
In the source code, the allocation of the std::vectors describing the control points and the initialisation of the RBFInterpolators looks like this:

void loadMeshAndSetupControlPoints()
{
	// open an OBJ file to deform
	string sourceOBJ = "test_dragon.obj";
	undeformedMesh = new TriangleMesh(sourceOBJ);
	deformedMesh = new TriangleMesh(sourceOBJ);

	// we want 11 control points which we place at different vertex positions
	const int numControlPoints = 11;

	const int verticesPerControlPoint = ((int)undeformedMesh->getParticles().size())/numControlPoints;

	for (int i = 0; i<numControlPoints; i++)
	{
		Vector3 pos = undeformedMesh->getParticles()[i*verticesPerControlPoint].getPos();
		controlPointPosX.push_back(pos[0]);
		controlPointPosY.push_back(pos[1]);
		controlPointPosZ.push_back(pos[2]);
	}

	// allocate vectors for storing displacements
	for (unsigned int i = 0; i<controlPointPosX.size();  i++)
	{
		controlPointDisplacementX.push_back(0.0f);
		controlPointDisplacementY.push_back(0.0f);
		controlPointDisplacementZ.push_back(0.0f);
	}

	// initialize interpolation functions
	rbfX = RBFInterpolator(controlPointPosX, controlPointPosY, 
                                           controlPointPosZ, controlPointDisplacementX );
	rbfY = RBFInterpolator(controlPointPosX, controlPointPosY, 
                                           controlPointPosZ, controlPointDisplacementY );
	rbfZ = RBFInterpolator(controlPointPosX, controlPointPosY, 
                                           controlPointPosZ, controlPointDisplacementZ );
}

Now all displacements are set to zero vectors – not terribly exciting! To make it a bit more fun we can vary the displacements with time:

// move control points
	for (unsigned int i = 0; i < controlPointPosX.size(); i++ )
	{
		controlPointDisplacementX[i] = displacementMagnitude*cosf(time+i*timeOffset);
		controlPointDisplacementY[i] = displacementMagnitude*sinf(2.0f*(time+i*timeOffset));
		controlPointDisplacementZ[i] = displacementMagnitude*sinf(4.0f*(time+i*timeOffset));
	}

	// update the interpolation functions based on the new displacements
	rbfX.UpdateFunctionValues(controlPointDisplacementX);
	rbfY.UpdateFunctionValues(controlPointDisplacementY);
	rbfZ.UpdateFunctionValues(controlPointDisplacementZ);

	// deform the object to render
	deformObject(deformedMesh, undeformedMesh);

Here the function ‘deformObject’ looks like this:

// Code for deforming the mesh 'initialObject' based on the current interpolation functions (global variables). 
// The deformed vertex positions will be stored in the mesh 'res'
// The triangle connectivity is assumed to be already correct in 'res'  
void deformObject(TriangleMesh* res, TriangleMesh* initialObject)
{
	for (unsigned int i = 0; i < res->getParticles().size(); i++)
	{
		Vector3 oldpos = initialObject->getParticles()[i].getPos();

		Vector3 newpos;
		newpos[0] = oldpos[0] + rbfX.interpolate(oldpos[0], oldpos[1], oldpos[2]);
		newpos[1] = oldpos[1] + rbfY.interpolate(oldpos[0], oldpos[1], oldpos[2]);
		newpos[2] = oldpos[2] + rbfZ.interpolate(oldpos[0], oldpos[1], oldpos[2]);

		res->getParticles()[i].setPos(newpos);
	}
}

That’s it! Now I encourage you to download the source code and play with it. Perhaps you can experiment with other radial basis functions? Or make the dragon crawl like a caterpillar? If you code something interesting based on this tutorial, send me a link and we will link to it from this page 🙂

Karsten Noe

I got a mail from Woo Won Kim of Yonsei University in South Korea, who has made a head modeling program that can generate 3D human heads from two pictures of a person using code from this RBF tutorial. Check out a video of this here.

Easy GPGPU program – OpenGL and CG
https://viscomp.alexandra.dk/?p=103 (Wed, 29 Apr 2009)

As part of the GPGPU course at the University of Aarhus in 2005, we developed a very simple set of base classes for General Purpose Computation on the Graphics Processing Unit (GPGPU) through OpenGL, Nvidia Cg, and either framebuffer objects or pbuffers for render-to-texture functionality. Today you should ideally use Nvidia CUDA or OpenCL for GPGPU, but the code might still be of interest for older hardware or for a pure OpenGL/Cg based approach to GPGPU:

SimpleReactionDiffusion (framebuffer_object).zip

The archive file includes the EasyGPUProgram class, which has methods to initialise data in a 2D grid layout, perform computation (as a Cg fragment shader), and retrieve the data. We have included a reaction-diffusion example based on GPU Gems 2, chapter 31, using the EasyGPUProgram class.
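
As a rough, CPU-side illustration of the kind of per-texel update such a fragment shader performs, here is one step of a Gray–Scott reaction-diffusion simulation. This is a sketch only; the model and parameters in the actual example may differ:

#include <vector>

// CPU reference of one Gray-Scott reaction-diffusion step over a 2D grid.
// On the GPU the same per-texel update runs as a fragment shader drawn over
// a full-screen quad, with the U and V grids stored in textures.
void reactionDiffusionStep(std::vector<float>& U, std::vector<float>& V,
                           int w, int h)
{
	const float Du = 0.16f, Dv = 0.08f;  // diffusion rates (illustrative)
	const float f = 0.035f, k = 0.065f;  // feed and kill parameters
	std::vector<float> U2(U), V2(V);
	for (int y = 1; y < h - 1; ++y)
	{
		for (int x = 1; x < w - 1; ++x)
		{
			const int i = y * w + x;
			// 4-neighbour Laplacian: the neighbouring texture fetches
			// of the shader version
			const float lapU = U[i-1] + U[i+1] + U[i-w] + U[i+w] - 4.0f * U[i];
			const float lapV = V[i-1] + V[i+1] + V[i-w] + V[i+w] - 4.0f * V[i];
			const float uvv = U[i] * V[i] * V[i];
			U2[i] = U[i] + Du * lapU - uvv + f * (1.0f - U[i]);
			V2[i] = V[i] + Dv * lapV + uvv - (f + k) * V[i];
		}
	}
	U.swap(U2);
	V.swap(V2);
}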

GPU Raycasting Tutorial
https://viscomp.alexandra.dk/?p=107 (Tue, 28 Apr 2009)

The famous Stanford dragon rendered using GPU raycasting.

This post will explain how to implement a GPU based raycasting renderer using OpenGL and Nvidia’s Cg. The tutorial assumes some experience with OpenGL and vertex/fragment shaders.

First of all, why do we need this algorithm? Because it is a smart way to achieve high quality volume rendering, and raycasting is well suited for modern GPUs, especially the new 8800 series with its unified shader architecture.

The purpose of this tutorial is to help people get started with GPU raycasting, since there are some technical difficulties that have to be addressed in order to render volumetric data like in the picture above.

The core of the algorithm is to send one ray per screen pixel and trace it through the volume. This can be implemented in a fragment program, and the rendering runs in real time. The technique is quite flexible; for instance, effects like shadows can be implemented with a few lines of code.


Here is a conceptual image of the raycasting algorithm, where one ray per pixel is spawned and traced through the volume.

In order to generate the necessary rays we use a clever trick based on OpenGL’s ability to render geometry. How can this help us, you might ask? Well, listen up, my young apprentice. First we define a ray:

  • A ray is just an origin point o and a direction vector dir.
  • A ray describes a line in 3D space by the formula P(t) = o + dir * t.
  • So to generate a ray we need to find the origin point and the direction vector.

This can be done by rendering a cube whose colors represent coordinates, and letting OpenGL’s interpolation take care of the rest. The way to do this is to render the front and the back side of a unit cube, as illustrated just below.

[texcoords.png: the unit cube rendered with colors encoding the coordinates; front faces on the left, back faces on the right]

If we subtract the backface values (on the right) from the frontface values, we get a direction vector for each pixel. This is the direction of our ray. The origin is just the frontface value of the cube. So we have to do two render passes: one for the front and one for the back. To render the back side we enable frontface culling. In my implementation I use an OpenGL framebuffer object (FBO) to store the result of rendering the back side, and use the frontface rendering to generate the fragments that start the raycasting process. If you are unfamiliar with framebuffer objects, check out this link.
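
Here is a minimal sketch of the two passes, assuming an active GL context, a framebuffer object backfaceFBO with a color texture attached, and a hypothetical helper drawColorCube() that renders the unit cube with vertex colors equal to vertex positions. None of these names come from the demo source:

#include <GL/glew.h>

void drawColorCube();  // hypothetical helper: unit cube, colors = positions

void renderRayEntryExit(GLuint backfaceFBO)
{
	glEnable(GL_CULL_FACE);

	// Pass 1: render only the back faces into the FBO; each pixel now
	// stores the ray exit point, encoded as a color.
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, backfaceFBO);
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
	glCullFace(GL_FRONT);
	drawColorCube();

	// Pass 2: render only the front faces to the screen with the raycasting
	// fragment shader bound; each fragment gets its entry point directly
	// and looks up its exit point in the backface buffer.
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
	glCullFace(GL_BACK);
	drawColorCube();
}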

So to perform the raycasting we need to create the ray and then step through the volume texture. This is all done in a single fragment program executed on the GPU. The fragment program is fairly simple; the only real issue is to calculate the texture coordinates used to index the backface buffer in order to get the ray exit point. These texture coordinates are referred to as normalized device coordinates, and in the implementation we find the corresponding pixel in the backface buffer with this calculation:

float2 texc = ((IN.Pos.xy / IN.Pos.w) + 1) / 2;

where IN.Pos is the model-view-projected position. This calculation gives us the fragment’s screen position in the interval [0,1]. The ray exit position is then found by using texc to index into the backface buffer like this:

float4 exit_position  = tex2D(backface_buffer, texc);

Now we create the ray and use the shader model 3.0 looping capabilities to write a for loop. This loop steps through the volume with a certain step size delta, and we can accumulate opacity and color values according to the nature of our volume data set. In the demo implementation the ray terminates when it leaves the volume or when the accumulated opacity reaches a high enough value, but there are many possibilities. For instance, if we terminate the ray when a certain opacity threshold is first encountered, the result will be a kind of isosurface rendering.
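
To make the loop concrete, here is a CPU-side sketch of what the fragment program computes for one pixel, with front-to-back compositing and early ray termination. sampleVolume is a hypothetical stand-in for the volume texture fetch plus transfer function:

#include <cmath>

struct Vec3f { float x, y, z; };
struct Vec4f { float r, g, b, a; };

// Hypothetical volume: a soft sphere of constant orange color.
Vec4f sampleVolume(const Vec3f& p)
{
	const float dx = p.x - 0.5f, dy = p.y - 0.5f, dz = p.z - 0.5f;
	const float d = std::sqrt(dx * dx + dy * dy + dz * dz);
	const Vec4f c = { 1.0f, 0.5f, 0.1f, d < 0.4f ? 0.05f : 0.0f };
	return c;
}

Vec4f raycast(const Vec3f& entry, const Vec3f& exit, float delta)
{
	Vec3f dir = { exit.x - entry.x, exit.y - entry.y, exit.z - entry.z };
	const float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
	Vec4f dst = { 0.0f, 0.0f, 0.0f, 0.0f };
	if (len == 0.0f)
		return dst;
	dir.x /= len; dir.y /= len; dir.z /= len;

	for (float t = 0.0f; t < len; t += delta)   // march until the ray exits
	{
		const Vec3f p = { entry.x + dir.x * t,
		                  entry.y + dir.y * t,
		                  entry.z + dir.z * t };
		const Vec4f src = sampleVolume(p);

		// front-to-back "over" compositing
		dst.r += (1.0f - dst.a) * src.a * src.r;
		dst.g += (1.0f - dst.a) * src.a * src.g;
		dst.b += (1.0f - dst.a) * src.a * src.b;
		dst.a += (1.0f - dst.a) * src.a;

		if (dst.a > 0.95f)  // early ray termination: opaque enough
			break;
	}
	return dst;
}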

Here is a link to the raycasting shader.

The raycasting technique can be used for many types of rendering problems where polygon-based rendering has a hard time. For instance, effects like smoke and glass can be rendered in real time. Maybe I will do a tutorial on these subjects in the near future.

To get you started with this cool rendering algorithm, I have made a simple GPU raycasting implementation that hopefully will clear up the details. That is just the kind of guy I am :-)

The demo contains a Windows executable and source code implementing the GPU raycasting algorithm. To compile it you need Nvidia’s Cg and GLEW.

The demo just shows a volume of colors from which some spheres have been subtracted. The step size of the ray can be adjusted by pressing the “w” and “e” keys. Note that this demo might be hard on your graphics card; pressing “w” a lot will result in a big step size, and the raycasting will update more rapidly.


To download the Windows demo and source code, click here.

As a last comment: this is just a tutorial to get you started with GPU raycasting; the technique was introduced back in 2003 by these guys:

J. Krüger and R. Westermann. Acceleration techniques for GPU-based volume rendering. In Proceedings of IEEE Visualization 2003, pages 287–292, 2003.

GPU raycasting is an active research area, and if you want to learn more about it, here are a couple of references:

A more recent article that explains a shader model 3.0 implementation can be found at this URL: http://www.vis.uni-stuttgart.de/ger/research/fields/current/spvolren/

VRVis has posted a lot of really good papers on the subject, so they have been my greatest resource. Visit them at this address: http://medvis.vrvis.at/home/

If you find any errors or make some cool improvements, please let me know at peter.trier@alexandra.dk. Have fun!
