Thomas Kim Kjeldsen – Visual Computing Lab
https://viscomp.alexandra.dk
Computer Graphics, Computer Vision and High Performance Computing
Thu, 04 Apr 2019 12:03:42 +0000

WebGL Virtual Texturing
Fri, 18 Nov 2016 14:05:36 +0000
https://viscomp.alexandra.dk/?p=4014

[Screenshot: virtual texturing demo]

We visualize Denmark’s Digital Elevation Model in real-time, directly in a browser using WebGL. A virtual texturing technique is applied, which enables us to handle a virtual raster size of 1048576 x 1048576 pixels. Hence, it is possible to cover the whole country, except Bornholm, at 40 cm horizontal resolution.
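The core bookkeeping behind virtual texturing is mapping a coordinate in the huge virtual raster onto the small physical page that contains it. As a toy illustration in plain JavaScript (the page size and all names below are our own assumptions, not the demo's actual implementation):

```javascript
// Toy page-table lookup for a virtual texture: map a texel coordinate in the
// 1048576 x 1048576 virtual raster to the page that contains it at a given
// mip level. PAGE_SIZE and all names are illustrative assumptions.
var VIRTUAL_SIZE = 1048576;  // virtual raster side length in texels
var PAGE_SIZE = 256;         // texels per page side at the base level

function pageOf(x, y, level) {
  // Each page covers PAGE_SIZE * 2^level virtual texels per side at `level`.
  var scale = PAGE_SIZE << level;
  return { x: Math.floor(x / scale), y: Math.floor(y / scale), level: level };
}
```

At the base level the page table is VIRTUAL_SIZE / PAGE_SIZE = 4096 pages on a side; only the pages actually touched by the current view need to be resident.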

Online demo:

http://denmark3d.alexandra.dk

 

References:

Making Digital Elevation Models Accessible, Comprehensible, and Engaging through Real-Time Visualization

Papers published in the 2014 IEEE International Ultrasonics Symposium proceedings
Thu, 27 Nov 2014 08:08:10 +0000
https://viscomp.alexandra.dk/?p=3631

In our Advanced Technology Foundation project “FutureSonic”, we recently presented two papers at the 2014 IEEE International Ultrasonics Symposium together with our research partners at the Technical University of Denmark.

[Figure: multicore beamforming benchmark]
The first paper [1] presents how ultrasound images can be computed efficiently on GPUs and on multicore CPUs that support Single Instruction Multiple Data (SIMD) extensions. We were able to accelerate a reference implementation in C from around 700 ms/frame to 5.4 ms/frame using the same multicore CPU. The speedup was achieved primarily by optimizing the memory access patterns and by utilizing AVX instructions. On a high-end GPU the fastest computation time was less than 0.5 ms/frame.

[Figure: SASB hand-held benchmark]

The results obtained above were utilized in the second paper [2], where the GPU implementation was ported to mobile devices. We showed that modern mobile GPUs provide enough computing power to produce ultrasound images in real-time. Furthermore, we showed that the WiFi throughput is sufficient for real-time reception of raw data from a wireless ultrasound transducer.

References
[1] Synthetic Aperture Sequential Beamforming implemented on multi-core platforms
IEEE International Ultrasonics Symposium (IUS), pp. 2181–2184 (2014)

[2] Implementation of synthetic aperture imaging on a hand-held device
IEEE International Ultrasonics Symposium (IUS), pp. 2177–2180 (2014)

Denmark’s new elevation model visualized in WebGL
Tue, 18 Nov 2014 12:03:27 +0000
https://viscomp.alexandra.dk/?p=3636

Recently, the Danish Geodata Agency released a new high-resolution LiDAR pointcloud dataset of parts of Denmark.
We have developed a real-time terrain visualization that runs entirely in a web-browser using WebGL. The terrain model was generated from the pointcloud (17.6 billion points) to a raster map with 40 cm lateral and longitudinal resolution and 1 cm height resolution. In the final visualization, we have added overlays from satellite photos and from OpenStreetMap as shown in the screenshots.

[Figures: DHM Hindsgavl and DHM Kalundborg]

A demo video is available here.

Unfortunately, the interaction is not yet perfectly smooth when the camera is moved around and the map updates. We expect to address this issue in the near future.

First ultrasound images presented
Tue, 03 Dec 2013 15:59:40 +0000
https://viscomp.alexandra.dk/?p=1961

We have successfully demonstrated the first ultrasound images in our Advanced Technology Foundation project. The demonstration was presented today at an event at the Center for Fast Ultrasound Imaging at the Technical University of Denmark.

[Figure: ultrasound demo]

The picture shows images acquired and processed in real time using our newly developed software (left) and the corresponding raw image produced by a commercially available scanner from BK Medical (right). We will continue working on improving the image quality. Details about our software implementation will be published in a future post.

WebGL raytracing presented at Visionday
Mon, 10 Jun 2013 08:37:50 +0000
https://viscomp.alexandra.dk/?p=1748

We presented our WebGL raytracer at the Visionday 2013 event.

Material presented in the talk:
Slides (pdf)
Demo 1: Motorcycle
Demo 2: Cornell box Xmas theme

Usage
Use the left mouse button and the keys “wasdqe” to control the camera.
Use the right mouse button to select objects in the scene. The active object can be translated with the gizmo and the material can be changed in the column on the right hand side.

Requirements
You should ensure that you have a web browser that supports WebGL with the OES_texture_float extension.
If you use Windows you will need a recent version of Firefox (v17 has been tested) or Chrome (v23 has been tested) due to some optimizations in the shader compiler in the ANGLE layer. Alternatively, you can enable native OpenGL in your browser (in Firefox, open about:config and set webgl.prefer-native-gl=true; in Chrome, use the command line argument --use-gl=desktop).

 

You can also load your own 3D models. This feature was used for our Xmas competition, which was won by this nice image created by Jonas Raagaard.

GLSL and WebGL pathtracing benchmark
Fri, 21 Dec 2012 14:27:27 +0000
https://viscomp.alexandra.dk/?p=1616

We recently published a pathtracer that runs in JavaScript and WebGL (link). The WebGL pathtracer is inspired by a pathtracer that we previously implemented in C++ with OpenGL shaders written in the OpenGL Shading Language (GLSL) 3.3. However, since WebGL 1.0 uses a simpler shading language, the OpenGL ES Shading Language 1.0, we encountered some language constructs that were not supported. This post describes some of the challenges involved in implementing our pathtracer in WebGL. Finally, we benchmark how our WebGL pathtracer performs compared to the OpenGL version.

We use a standard pathtracing algorithm with a binary bounding volume hierarchy (BVH) acceleration structure. We store one triangle in each leaf node of the BVH tree. Probably the most common and efficient way to do ray-triangle intersection tests with the BVH tree is to maintain a stack of pointers to nodes that need to be tested. The basic tree traversal algorithm is outlined in the code listing below. A nice property of this algorithm is that each node is never visited more than once.

closest_hit = infinity
stack.clear()
stack.push(root)
while(stack.size() > 0)
{
  currentnode = stack.pop()
  if (currentnode.is_leaf == true)
  {
    // Do ray primitive intersection and update hit record
    hit = ray_triangle_intersection(ray,currentnode.triangle)
    if ( hit < closest_hit )
      closest_hit = hit
  }
  else if ( ray_box_intersection(ray,currentnode.bbox) < closest_hit )
  {
    stack.push(currentnode.leftchild)
    stack.push(currentnode.rightchild)
  }
}

In GLSL it is not possible to implement a fully dynamic stack. However, in GLSL 3.3 it is straightforward to implement a fixed-size stack of pointers as an integer array with a stack counter, e.g.

int stack[16];
int stackcounter;
...
int currentnode = stack[stackcounter];
...
stack[stackcounter] = child;

Unfortunately, the OpenGL ES Shading Language 1.0 does not allow us to access an array element with a variable index, so a stack-based approach is not feasible in WebGL. Thus, we implemented the stackless BVH traversal proposed in Ref. [1], which was reported to be about 30% slower than the stack-based traversal.
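The general idea behind stackless traversal can be illustrated with skip ("miss") links: nodes are stored in depth-first order, each node carries the index to jump to when its box is missed (or after a leaf has been tested), and a box hit simply advances to the next array element. The sketch below, in plain JavaScript, is a simplified cousin of the parent-link traversal of Ref. [1], not our actual shader code; all names are illustrative.

```javascript
// Stackless BVH traversal with precomputed skip ("miss") links.
// Nodes are laid out in depth-first order: on a box hit we advance to the
// next element (the first child); on a miss, or after testing a leaf, we
// jump to the node's "miss" index. An index of -1 terminates the walk.
function traverseStackless(nodes, hitsBox, intersectLeaf) {
  var closest = Infinity;
  var i = 0;
  while (i !== -1) {
    var node = nodes[i];
    if (hitsBox(node.bbox, closest)) {
      if (node.triangle !== undefined) {       // leaf: test the primitive
        var t = intersectLeaf(node.triangle);
        if (t < closest) closest = t;
        i = node.miss;                         // leaves always follow the miss link
      } else {
        i = i + 1;                             // inner hit: descend to first child
      }
    } else {
      i = node.miss;                           // box missed: skip the whole subtree
    }
  }
  return closest;
}
```

The intersection tests are passed in as callbacks here purely to keep the sketch self-contained; in a shader they would of course be inlined ray-box and ray-triangle routines.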

An additional problem with WebGL is that Windows browsers by default translate OpenGL calls to DirectX through a layer called ANGLE. Our experience with this translation is that loops in shaders get unrolled, and hence, if we have shaders with very long loops, the shader compiler may run out of resources and fail to compile. With the typical scenes and BVH trees that we have tested, the ANGLE shader compiler can only compile a shader that traverses the tree a single time, i.e., we can only implement a pathtracer with a single bounce. Our solution to this problem is to run each bounce in a separate pass and save the state between the passes. This method potentially involves some overhead because intermediate results must be read from and written to a texture. Additionally, we must issue an additional draw call for each trace pass.

Linux and Mac browsers use the native OpenGL shader compiler. The Nvidia compiler that we have tested compiles the full pathtracer in a single shader without any problems.

Test setup
Nvidia GeForce GTX 470
Intel E5620 2.4 GHz Quad Core
Linux Nvidia drivers version 304.43
Windows Nvidia drivers version 306.97

Our test scene consists of 8728 triangles. The figure shows the final pathtraced image.

Results
Our benchmark results are shown in the table and figure below. If we compare the stackless and stack-based versions in GLSL 3.3, we see that the stackless version is almost 50% slower, which is somewhat disappointing compared to the results reported in [1]. WebGL/GLSL ES 1.0 seems to be slightly slower than GLSL 3.3 when we do the full pathtracing in a single shader. The WebGL multipass version does not seem to be affected much by storing intermediate results between the passes. In fact, the fastest multipass results are comparable with the results obtained with GLSL 3.3.

Table 1: Samples per second for a 512×512 pixel image of the test scene. The maximum trace depth is 4. (Blank cells: configuration not available, e.g. the singlepass shader does not compile through ANGLE.)

                       C++/GLSL 3.3  WebGL Chrome  WebGL Firefox  WebGL Chrome      WebGL Firefox     WebGL Chrome     WebGL Firefox
                       Linux         Linux         Linux          Windows (native)  Windows (native)  Windows (ANGLE)  Windows (ANGLE)
Singlepass with stack  34.4
Singlepass stackless   18.2          15.0          13.7           15.8              14.5
Multipass stackless                  15.1          15.1           17.4              15.1              19.2             18.2
[1] Efficient Stack-less BVH Traversal for Ray Tracing, M. Hapala et al., SCCG (2011).

WebGL Tutorial: Optimizing Data Transfer for WebGL Applications
Mon, 26 Nov 2012 09:33:46 +0000
https://viscomp.alexandra.dk/?p=1386

Introduction

WebGL is a technology that has pushed the limits of content that can be published on the web. For example, Fig. 1 shows that with a modern graphics card, WebGL allows us to render very complex scenes with hundreds of thousands of polygons directly in a browser window. The challenge is now that large amounts of geometric data must be transferred over a network connection before such complex scenes can be rendered.

With current broadband connections, users expect a smooth browsing experience where web pages load almost immediately. If the loading time exceeds a few seconds, the audience often loses interest and proceeds to another website [1]. This timeframe turns out to be hard to reach for some WebGL applications.

Figure 1 (video): The Stanford dragon consists of 871414 triangles. Models with this polygon count are straightforwardly rendered in real time in WebGL. A big challenge, however, is how to load the massive amounts of vertex data for such detailed models. For example, the model shown in the figure uses 60 megabytes of vertex position and normal data.

As shown in the video above (Fig. 1), the size of raw vertex data can easily be on the order of tens of megabytes. With a reasonably fast connection with a 10 Mbit/s bandwidth, it takes around one second to load each megabyte of vertex data. On the other hand, JavaScript engines used in modern browsers can process data at a much higher rate, and hence we can obtain faster loading times if we can somehow decrease the amount of data transferred over the network at the expense of some postprocessing on the client side.
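The arithmetic behind that estimate is simple; a helper of our own, ignoring latency and protocol overhead:

```javascript
// One megabyte is 8 megabits, so at 10 Mbit/s each megabyte takes 0.8 s --
// "around one second per megabyte". Latency and protocol overhead are ignored.
function transferSeconds(megabytes, mbitPerSecond) {
  return megabytes * 8 / mbitPerSecond;
}

// The 60 MB dragon model over a 10 Mbit/s line:
// transferSeconds(60, 10) -> 48 seconds
```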

In this tutorial we will go through various techniques that can be used to optimize the loading time for large amounts of vertex data. Furthermore, we will demonstrate how data can be cached so that it does not need to be reloaded between browser sessions. We assume that the reader has basic knowledge about OpenGL vertexbuffers, JavaScript, and Ajax. We will make extensive use of features that are still W3C working drafts, so the code may not run on older browsers and functionality may change in the future. We have tested all our code samples with Google Chrome 20 and Mozilla Firefox 16.

Basic WebGL

In the rest of the tutorial, we will use the code structure listed below. When the page loads, window.onload is triggered and sets up the WebGL context, creates the vertexbuffer, and compiles shaders. Next, the vertex data is loaded with an asynchronous XMLHttpRequest. We attach event handlers to the request in order to show the load progress. When the request completes, we use the response to fill the vertexbuffer, and, finally, we draw the scene.

// Globals
var gl;             // GL context
var vertexbuffer;   // GL vertexbuffer object

function openProgress() {
    /* Open a progress dialog */
};

function runProgress(e) {
    /* Update progress dialog */
};

function closeProgress() {
    /* Close progress dialog */
};

function loadData(filename) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", filename);
    xhr.onload = function(){
        var vertexdata;
        /* use this.response to construct vertexdata */
        gl.bindBuffer(gl.ARRAY_BUFFER, vertexbuffer);
        gl.bufferData(gl.ARRAY_BUFFER, vertexdata, gl.STATIC_DRAW);
        drawScene();
        closeProgress();
    };
    xhr.onprogress = runProgress;
    xhr.onloadstart = openProgress;
    xhr.send();
};

function drawScene() {
    /* Render the scene */
};

window.onload = function(){
    /* Setup GL context, create vertexbuffer, and compile shaders */
    loadData("path/to/vertexdatafile");
};

The complete source code is available here.

The main topic of the present tutorial is how to write the loadData function in order to load the vertexbuffer most efficiently. If we have stored the vertex data as a raw binary file on our webserver, we can easily construct the vertexbuffer as follows

function loadBinaryData(filename)
{
    var xhr = new XMLHttpRequest();
    xhr.open('GET', filename);
    xhr.responseType = "arraybuffer";
    xhr.onload = function(){
        // this.response is now a generic binary buffer which
        // we can interpret as 32 bit floating point numbers.
        var vertexdata = new Float32Array(this.response);
        gl.bindBuffer(gl.ARRAY_BUFFER, vertexbuffer);
        gl.bufferData(gl.ARRAY_BUFFER, vertexdata, gl.STATIC_DRAW);
        drawScene();
    };
    xhr.onprogress = runProgress;
    xhr.onloadstart = openProgress;
    xhr.send();
}

Notice that we set the response type to "arraybuffer" to indicate that we expect binary data, and, correspondingly, this.response will be a generic binary buffer. It is not strictly necessary to create the floating point view of the buffer. We could just use the generic buffer in the bufferData call. The actual specification of how the content of the vertexbuffer should be interpreted is set with the vertexAttribPointer method.
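For example, for an interleaved buffer with three position floats, three normal floats, and one texture coordinate per vertex (the layout used in the municipality example later in this tutorial), the attribute setup could look as follows. The helper name and attribute locations are our own illustration:

```javascript
// Hypothetical helper: describe an interleaved vertex layout of
// 3 position floats + 3 normal floats + 1 texcoord float = 28 bytes/vertex.
// posLoc, normLoc, and texLoc would be obtained with gl.getAttribLocation.
function setupVertexLayout(gl, posLoc, normLoc, texLoc) {
  var stride = (3 + 3 + 1) * 4;  // bytes per vertex
  gl.enableVertexAttribArray(posLoc);
  gl.vertexAttribPointer(posLoc,  3, gl.FLOAT, false, stride, 0);   // bytes 0..11
  gl.enableVertexAttribArray(normLoc);
  gl.vertexAttribPointer(normLoc, 3, gl.FLOAT, false, stride, 12);  // bytes 12..23
  gl.enableVertexAttribArray(texLoc);
  gl.vertexAttribPointer(texLoc,  1, gl.FLOAT, false, stride, 24);  // bytes 24..27
}
```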

One note about the method listed above is that we must ensure that the client uses the same byte ordering (endianness) as is used for the binary file. One workaround for this problem is to convert the binary file to a text file with a string representation of one floating point number on each line. We can load such a datafile with the following method.

function loadAsciiData(filename)
{
    var xhr = new XMLHttpRequest();
    xhr.open("GET", filename);
    xhr.onload = function(){
        var vertexdata = new Float32Array(this.response.split("\n"));
        gl.bindBuffer(gl.ARRAY_BUFFER, vertexbuffer);
        gl.bufferData(gl.ARRAY_BUFFER, vertexdata, gl.STATIC_DRAW);
        drawScene();
        closeProgress();
    };
    xhr.onprogress = runProgress;
    xhr.onloadstart = openProgress;
    xhr.send();
}

This, however, is likely to increase the size of the datafile by a factor of 2–3, depending on the number of digits printed on each line. Another way to solve the byte ordering problem is to create a DataView of the arraybuffer and use getFloat32(offset, true) to read the data with an explicit byte ordering [3].
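A sketch of the DataView approach, assuming the file was written in little-endian order (the function name is ours):

```javascript
// Convert a binary buffer of 32-bit floats stored little-endian into a
// Float32Array, regardless of the client's native byte order.
function toFloat32LE(arraybuffer) {
  var view = new DataView(arraybuffer);
  var out = new Float32Array(arraybuffer.byteLength / 4);
  for (var i = 0; i < out.length; i++) {
    out[i] = view.getFloat32(4 * i, true);  // true = interpret as little-endian
  }
  return out;
}
```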

Example: Danish municipalities

We recently published a WebGL demo that allows the user to inspect data about Danish municipalities interactively [4].


Figure 2: Interactive information visualization of Danish municipalities. The map consists of 74157 polygons.

The geometric model used in Fig. 2 consists of 74157 triangles, i.e., 222471 vertices. Each vertex has the following attributes: position (three floats), a normal used for shading (three floats), and one texture coordinate used to associate the vertex with a specific municipality (one float). The size of the vertex buffer is then 222471 · (3 + 3 + 1) · 4 B = 5.94 MB.

As mentioned in the introduction, it can easily take a couple of seconds to load this amount of data. If we want a faster loading time, we need to decrease the network data transfer. Since almost every vertex is shared between three triangles, a first idea would perhaps be to store only the unique vertices and use an index buffer to draw the triangles. This would roughly decrease the size of the vertexbuffer to one third, with a minor additional cost for the indexbuffer. One problem is that WebGL only supports 16 bit indexbuffers, i.e., it is impossible to index vertexbuffers with more than 65536 vertices. Additionally, an indexbuffer does not take advantage of the fact that many vertices share the same normal and the same texture coordinate. However, with this much duplicated data, it should be possible to reduce the data transfer with standard compression methods, which will be discussed in the following sections.
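The deduplication step itself is straightforward; a sketch using a Map keyed on the full attribute tuple (all names are ours, and as noted above the 65536-vertex limit would still force large meshes to be split into chunks):

```javascript
// Deduplicate vertices and build an index list. Each vertex is an array of
// attribute floats (position, normal, texcoord, ...); identical tuples are
// merged and referenced by index.
function deduplicate(vertices) {
  var unique = [];
  var indices = [];
  var seen = new Map();  // attribute tuple (as string key) -> index
  for (var i = 0; i < vertices.length; i++) {
    var key = vertices[i].join(",");
    var idx = seen.get(key);
    if (idx === undefined) {
      idx = unique.length;
      seen.set(key, idx);
      unique.push(vertices[i]);
    }
    indices.push(idx);
  }
  return { unique: unique, indices: indices };
}
```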

Serverside zlib compression

A simple way to compress the vertexdata is to let the webserver compress the data on the fly prior to the network transfer. It is possible to enable serverside zlib compression on an Apache server using the deflate module [5]. The module must be configured to compress binary data as follows

deflate.conf:

    AddOutputFilterByType DEFLATE application/octet-stream

Furthermore, we must ensure that the server interprets the datafile as the MIME type application/octet-stream. Usually it is sufficient to set the filename extension to ".bin". All modern browsers have built-in support for zlib decompression.

Using this technique, the amount of data transferred over the network is reduced from 5.9 MB to 767 KB, with negligible overhead for compressing and decompressing. The code listed in the previous section does not need to be modified in any way. The main drawback of this method is that you need administrator access to your webserver, or must convince your admin that enabling the compression module is a good idea. Another problem is that the server does not know a priori the number of bytes it needs to send, because the compression happens on the fly in several chunks. This may not seem to be a major issue; however, the transfer size must be known if we want to implement a reliable progress counter.

One might suggest gzipping the binary data, putting the gzip’ed file on the server, and loading the data with something like

xhr.open("GET", "filename.gz");
xhr.onload = function(){
    // Requires a gunzip implementation ported to JavaScript
    var vertexdata = new Float32Array(gunzip(this.response));
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexbuffer);
    gl.bufferData(gl.ARRAY_BUFFER, vertexdata, gl.STATIC_DRAW);
    drawScene();
};

Unfortunately, this would require porting zlib to JavaScript and running the decompression within JavaScript, which would probably be somewhat slower than the browser’s native zlib support.

PNG compression

In this section we will show how a PNG image can be used to transfer binary data. This method relies on the facts that the PNG format is lossless, that it inherently applies zlib compression, and that all modern browsers have built-in support for PNG decompression.
The basic idea is to create a PNG image from the floating point vertex data by encoding the raw bytes as pixel colors. Examples of how to create the image with graphics libraries such as GD and ImageMagick are provided on our website. The size of the image corresponding to the 5.9 MB vertex data used in the previous example is just 757 KB. We upload the image to our webserver and specify its path to the loadData function. Information about how to create an html image element from an XMLHttpRequest can be found in Ref. [6].

When the image has been loaded, we must convert the pixel colors back to the original vertex data. The main steps in the conversion are outlined below

  • Create a new canvas element and resize it to fit the image size.
  • Draw the image to the canvas.
  • Read back the canvas pixels to an arraybuffer.
  • Upload the arraybuffer to graphics card.

A minor note about the readback is that the canvas has an alpha channel even if the image that we draw does not have an alpha channel. Consequently, we must remove every fourth entry of the readback buffer in order to restore the original byte sequence. The complete code for loading the png encoded vertex data is listed below.

function loadPNGData(filename)
{
  // browser prefixing needed for cross-browser compatibility
  window.URL = window.URL || window.webkitURL;

  var xhr = new XMLHttpRequest();
  xhr.open("GET", filename);
  xhr.responseType = "blob";
  xhr.onload = function(){

    var img = document.createElement("img");

    img.onload = function(){

      // Create a new canvas element and resize it
      var canvas2d = document.createElement("canvas");
      canvas2d.width = img.width;
      canvas2d.height = img.height;

      var ctx2d = canvas2d.getContext("2d");

      // Draw the image to the canvas
      ctx2d.drawImage(img,0,0);

      // Read back the canvas pixels
      var imagedata = ctx2d.getImageData(0, 0, img.width, img.height).data;

      // imagedata is now a Uint8ClampedArray of length 4*img.width*img.height
      // which contains the RGBA pixel values read from the canvas.
      // Remove alpha channel from each pixel. Reuse the imagedata array.
      for (var i = 0; i < img.width*img.height; i++)
      {
        imagedata[3*i] = imagedata[4*i];
        imagedata[3*i+1] = imagedata[4*i+1];
        imagedata[3*i+2] = imagedata[4*i+2];
      }
      // The first 3*img.width*img.height elements in imagedata are now
      // exactly equal to the raw bytes of the original vertex data
      var vertexdata = imagedata.subarray(0,3*img.width*img.height);

      gl.bindBuffer(gl.ARRAY_BUFFER, vertexbuffer);
      gl.bufferData(gl.ARRAY_BUFFER, vertexdata, gl.STATIC_DRAW);

      drawScene();
      closeProgress();

      // Explicit destruction is required
      window.URL.revokeObjectURL(img.src);
    };
    img.src = window.URL.createObjectURL(this.response);

  };

  xhr.onprogress = runProgress;
  xhr.onloadstart = openProgress;
  xhr.send();
};

The method listed above may appear to require a lot of post-processing on the client side. However, as stated in the introduction and shown in the benchmark below, the PNG conversion actually turns out to be very fast compared to the time that we save on network transfer.

Using web storage

In the previous section, we showed how to encode data in a PNG image to reduce the amount of network transfer. Going a step further, we can utilize the HTML5 web storage functionality to store the server response. With this technique, it is only necessary to request data from the server the first time a user visits the page. If the user returns to the page at a later time, data will be fetched from the web storage on the client side. One may question the relevance of web storage since all major browsers already have built-in support for caching. The advantage of web storage over browser caching, however, is that web storage provides much more control to the programmer.

We can store the server response in the previous section by calling the following function somewhere in the xhr.onload handler

function savePNGData(blob)
{
  var reader = new FileReader();
  reader.onload = function(e)
  {
      localStorage.setItem("PNGData", e.target.result);
  };
  reader.readAsDataURL(blob);
};

This will store the image in a slot called "PNGData" in localStorage, where it will exist until it is explicitly removed. An alternative to localStorage is sessionStorage, which is cleared when the browser session ends. The content of the web storage can be listed, modified, and deleted, e.g. in Chrome’s developer tools as shown in Fig. 3.


Figure 3: Using Chrome’s developer tools to inspect the web storage.

The image data stored in localStorage can be loaded with the following code

if ( localStorage.PNGData )
{
    var img = document.createElement("img");
    img.onload = function(e) {
        /* Convert image to vertex buffer as in the previous example */
    };
    img.src = localStorage.PNGData;
}
else
{
    /* Get the image from the server as in the previous example */
}

One disadvantage of web storage is that the storage limit is not guaranteed by any specification. Currently, a 5 MB limit per domain seems to be standard.
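Because the limit is browser dependent, writes should be guarded: setItem throws (e.g. a QuotaExceededError) when the quota is exceeded. A defensive wrapper of our own, written against any Storage-like object so it works with both localStorage and sessionStorage:

```javascript
// Try to cache a value; report failure instead of throwing when the
// (browser-dependent) storage quota is exceeded.
function trySetItem(storage, key, value) {
  try {
    storage.setItem(key, value);
    return true;
  } catch (e) {        // e.g. QuotaExceededError on overflow
    return false;
  }
}
```

The caller can then simply fall back to re-fetching from the server on the next visit when caching fails.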

Benchmark

Figure 4 shows the loading times for the vertex data used for the map shown in Fig. 2 using the various techniques described in this tutorial. We have limited the upload speed from the webserver to 4 Mbit/s and 10 Mbit/s. The test is available at our website [2].


Figure 4: Loading times for the vertexbuffer used in Fig. 2. The test setup used Chrome 20 on an Intel Xeon E5620 2.4 GHz Quad Core CPU running linux.

We see that the compression methods efficiently reduce the loading time by a factor of six to seven compared to raw binary data. A two-second timeframe has been identified as the tolerable threshold for web page loading time for the average online shopper [1]. Hence, using a compression scheme is essential for keeping our audience, if we assume that 10 Mbit/s is a typical bandwidth for our visitors. Browser caching effectively eliminates the loading time for returning visitors. If the browser cache is cleared or disabled, web storage still provides almost immediate response.

Summary

In this tutorial we demonstrated how one can optimize the loading time for large amounts of vertex data used in WebGL applications. We showed that the loading time is often limited by the speed of the network connection. Thus, using data compression, such as serverside zlib compression or data encoding in a PNG image, can lead to significantly increased performance. Finally, we showed how to use web storage to cache data between browser sessions.

Source code and downloads

PDF version of this document
Complete source code for the benchmark
Conversion tool from binary to PNG written in C using libgd
Conversion tool from binary to PNG written in C++ using Magick++

Bibliography

[1] http://www.akamai.com/html/about/press/releases/2009/press_091409.html
[2] http://daimi.au.dk/~thomaskj/tutorials/WebGL-VBODemo/
[3] https://developer.mozilla.org/en-US/docs/JavaScript_typed_arrays/DataView
[4] http://viscomp.alexandra.dk/2012/10/12/interactive-infographics-in-webgl/
[5] http://httpd.apache.org/docs/2.2/mod/mod_deflate.html
[6] http://www.html5rocks.com/en/tutorials/file/xhr2/
Interactive infographics in WebGL
Fri, 12 Oct 2012 13:10:11 +0000
https://viscomp.alexandra.dk/?p=1284

In a previous post we presented a bar diagram as a quick overview of data from Danish municipalities. We have now released a live demo that utilizes some of the newest HTML5 features, e.g. WebGL.

Click here to launch the demo. You need a recent Chrome, Firefox, Safari (non-iOS), or Opera browser. Internet Explorer is not supported.

Use the keys “wasdqe” and the mouse to navigate, and change between datasets in the right column.

[Figure: Befolkningstal (population figures)]

 

Merry Christmas glsl-pathtracing-demo
Thu, 22 Dec 2011 13:43:14 +0000
https://viscomp.alexandra.dk/?p=1149

Merry Christmas to you all out there. Do you want to stress your computer a bit during the holidays?

Then please try our new Christmas path-tracing demo.

Cozy path tracing graphics from our lab.

Download the Tech Demo
