Implementation of synthetic aperture imaging on a hand-held device

This paper presents several implementations of Synthetic Aperture Sequential Beamforming (SASB) on commercially available hand-held devices. The implementations include real-time wireless reception of ultrasound radio frequency signals and GPU processing for B-mode imaging. The proposed implementation demonstrates that SASB can be executed fast enough for real-time ultrasound imaging. The wireless communication between probe and processing device satisfies the bandwidth required for real-time data transfer with current 802.11ac technology. The implementation is evaluated using four hand-held devices, each with a different chipset, and a BK Medical UltraView 800 ultrasound scanner emulating a wireless probe. The wireless transmission is benchmarked using an imaging setup consisting of 269 scan lines × 1472 complex samples (1.58 MB per frame, 16 frames per second). The measured data throughput reached an average of 28.8 MB/s on an LG G2 mobile device, which exceeds the required data throughput of 25.3 MB/s. Benchmarking the processing performance for B-mode imaging showed a total processing time of 18.9 ms (53 frames/s), which is less than the acquisition time (62.5 ms).
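The required bandwidth follows directly from the frame size and frame rate quoted above; a quick sanity check of the arithmetic (assuming 4 bytes per complex sample, i.e. 16-bit I and Q components, which matches the stated frame size):

```python
# Sanity check of the throughput figures quoted in the abstract.
scan_lines = 269
samples_per_line = 1472        # complex samples per scan line
bytes_per_sample = 4           # assumption: 16-bit I + 16-bit Q per complex sample
frames_per_second = 16

frame_mb = scan_lines * samples_per_line * bytes_per_sample / 1e6
required_mb_s = frame_mb * frames_per_second

print(round(frame_mb, 2))      # 1.58 MB per frame
print(round(required_mb_s, 1)) # 25.3 MB/s required; the LG G2 measured 28.8 MB/s
```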
Synthetic Aperture Sequential Beamforming implemented on multi-core platforms

This paper compares several computational approaches to Synthetic Aperture Sequential Beamforming (SASB) targeting consumer-level parallel processors such as multi-core CPUs and GPUs. The proposed implementations demonstrate that ultrasound imaging using SASB can be executed in real-time with significant headroom for post-processing. The CPU implementations are optimized using Single Instruction Multiple Data (SIMD) instruction extensions and multithreading, and the GPU computations are performed using the OpenCL and OpenGL APIs. The implementations include refocusing (dynamic focusing) of a set of fixed-focus scan lines received from a BK Medical UltraView 800 scanner and subsequent image processing for B-mode imaging and rendering to screen. The benchmarking is performed using a clinically evaluated imaging setup consisting of 269 scan lines × 1472 complex samples (1.58 MB per frame, 16 frames per second) on an Intel Core i7 2600 CPU with an AMD HD7850 and an NVIDIA GTX680 GPU. The fastest CPU and GPU implementations use 14% and 1.3% of the real-time budget of 62 ms/frame, respectively. The maximum achieved processing rate is 1265 frames/s.
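The second-stage refocusing that these implementations accelerate is, at its core, a delay-and-sum over the fixed-focus lines produced by the first stage. A minimal sketch of that computation, with simplified 2-D virtual-source geometry; all names, the sample indexing, and the geometry are illustrative assumptions, not the scanner's data format:

```python
import numpy as np

def sasb_second_stage(lines, z_axis, x_lines, z_vf, c, fs):
    """Minimal sketch of second-stage SASB refocusing (simplified 2-D geometry).

    lines  : (n_lines, n_samples) complex fixed-focus scan lines (first stage)
    z_vf   : depth of the virtual source (the fixed focus), in metres
    c, fs  : speed of sound and sampling frequency
    """
    n_lines, n_samples = lines.shape
    image = np.zeros((n_lines, len(z_axis)), dtype=complex)
    for i, x_img in enumerate(x_lines):          # image line
        for k, z in enumerate(z_axis):           # image depth
            for j, x_src in enumerate(x_lines):  # contributing first-stage line
                # two-way path via the virtual source at (x_src, z_vf)
                d = z_vf + np.sign(z - z_vf) * np.hypot(x_img - x_src, z - z_vf)
                idx = int(round(2 * d / c * fs))
                if 0 <= idx < n_samples:
                    image[i, k] += lines[j, idx]
    return np.abs(image)
```

A real implementation also applies apodization and restricts the sum to lines whose virtual-source aperture covers the image point; the SIMD and GPU versions parallelize the two outer loops over image points.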
Multiphase Flow of Immiscible Fluids on Unstructured Meshes

In this paper, we present a method for animating multiphase flow of immiscible fluids using unstructured moving meshes. Our underlying discretization is an unstructured tetrahedral mesh, the deformable simplicial complex (DSC), that moves with the flow in a Lagrangian manner. Mesh optimization operations improve element quality and avoid element inversion. In the context of multiphase flow, we guarantee that every element is occupied by a single fluid and, consequently, the interface between fluids is represented by a set of faces in the simplicial complex. This approach ensures that the underlying discretization matches the physics and avoids the additional book-keeping required in grid-based methods where multiple fluids may occupy the same cell. Our Lagrangian approach naturally leads us to adopt a finite element approach to simulation, in contrast to the finite volume approaches adopted by a majority of fluid simulation techniques that use tetrahedral meshes. We characterize fluid simulation as an optimization problem allowing for full coupling of the pressure and velocity fields and the incorporation of a second-order surface energy. We introduce a preconditioner based on the diagonal Schur complement and solve our optimization on the GPU. We provide the results of parameter studies as well as a performance analysis of our method.
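The diagonal-Schur-complement preconditioner mentioned above can be illustrated on a toy saddle-point system: A stands in for the velocity block, B for the constraint (divergence) block, and the Schur complement is approximated cheaply from the diagonal of A. All names and sizes here are illustrative, not the paper's actual discretization:

```python
import numpy as np

# Toy saddle-point system [A  B^T; B  0] [u; p] = [f; g], the structure that
# arises when pressure and velocity are fully coupled in one optimization.
rng = np.random.default_rng(0)
n, m = 8, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # SPD "velocity" block
B = rng.standard_normal((m, n))        # "divergence" constraint block

# Diagonal Schur complement: approximate S = B A^{-1} B^T by
# S_hat = B diag(A)^{-1} B^T, which is cheap to form and factor.
D_inv = np.diag(1.0 / np.diag(A))
S_hat = B @ D_inv @ B.T
S_exact = B @ np.linalg.inv(A) @ B.T   # for comparison only; never formed in practice
```

S_hat is symmetric positive definite whenever B has full row rank, so it can serve as a preconditioner for the pressure block in an iterative solve.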
SSLPV: Subsurface Light Propagation Volumes

This paper presents the Subsurface Light Propagation Volume (SSLPV) method for real-time approximation of subsurface scattering effects in dynamic scenes with changing mesh topology and lighting. SSLPV extends the Light Propagation Volume (LPV) technique for indirect illumination in video games. We introduce a new consistent method for injecting flux from point light sources into an LPV grid, a new rendering method which consistently converts light intensity stored in an LPV grid into incident radiance, as well as a model for light scattering and absorption inside heterogeneous materials. Our scheme does not require any precomputation and handles arbitrarily deforming meshes. We show that SSLPV provides visually pleasing results in real-time at the expense of a few milliseconds of added rendering time.
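The flux-injection step can be sketched with a 2-band (four-coefficient) real spherical-harmonics grid. The SH basis constants below are the standard ones; the grid layout and function names are illustrative assumptions, not the paper's actual scheme:

```python
import numpy as np

C0 = 0.5 * np.sqrt(1.0 / np.pi)        # Y_0^0 basis constant
C1 = 0.5 * np.sqrt(3.0 / np.pi)        # Y_1^{-1,0,1} basis constant

def sh_evaluate(d):
    """Evaluate the first 4 real SH basis functions in unit direction d."""
    x, y, z = d
    return np.array([C0, -C1 * y, C1 * z, -C1 * x])

def inject_point_light(grid, cell, direction, flux):
    """Accumulate a directional flux contribution into one LPV cell."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    grid[cell] += flux * sh_evaluate(d)

lpv = np.zeros((4, 4, 4, 4))           # tiny 4^3 grid, 4 SH coefficients per cell
inject_point_light(lpv, (1, 2, 3), (0.0, 0.0, 1.0), flux=10.0)
```

After injection, the propagation step iteratively spreads these coefficients to neighboring cells, and a per-cell scattering/absorption model modulates them for heterogeneous materials.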
Teknologiske udfordringer i urbane miljøer (Technological Challenges in Urban Environments)

We took part in a collaborative publication that wraps up the Digital Urban Living initiative. The chapter we wrote covers the technological challenges of urban settings: topics such as urban platforms, sensors, and the urban installations that were created as part of the project.
Out-of-Core Computations of High-Resolution Level Sets by Means of Code Transformation

We propose a storage-efficient, fast, and parallelizable out-of-core framework for streaming computations of high-resolution level sets. The fundamental techniques are skewing and tiling transformations of streamed level set computations which allow for the combination of interface propagation, re-normalization and narrow-band rebuild into a single pass over the data stored on disk. When combined with a new data layout on disk, this improves the overall performance when compared to previous streaming level set frameworks that require multiple passes over the data for each time-step. As a result, streaming level set computations are now CPU bound and consequently the overall performance is unaffected by disk latency and bandwidth limitations. We demonstrate this with several benchmark tests that show sustained out-of-core throughputs close to that of in-core level set simulations.
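The single-pass idea can be illustrated with a toy streaming computation: rather than sweeping the whole data set once per stage, tiles are streamed through all stages while resident in memory. For the pointwise stand-in stages below the fusion is exact; the stencil-dependent stages of a real level set solver are what require the skewing and tiling transformations described above. All stage functions and tile sizes here are illustrative:

```python
import numpy as np

def propagate(x):                 # stand-in for interface propagation
    return x - 0.1

def renormalize(x):               # stand-in for re-normalization
    return np.clip(x, -1.0, 1.0)

def multi_pass(data):
    """Baseline: one full sweep over the data per stage (two passes)."""
    data = propagate(data)
    data = renormalize(data)
    return data

def single_pass(data, tile=4):
    """Fused: stream tiles once, applying every stage while the tile is resident."""
    out = np.empty_like(data)
    for i in range(0, len(data), tile):   # one streaming pass over "disk"
        t = data[i:i + tile]
        out[i:i + tile] = renormalize(propagate(t))
    return out

a = np.linspace(-2.0, 2.0, 16)
assert np.allclose(multi_pass(a.copy()), single_pass(a.copy()))
```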
The Visible Ear Simulator: A Public PC Application for GPU-Accelerated Haptic 3D Simulation of Ear Surgery Based on the Visible Ear Data

Existing virtual simulators for middle ear surgery are based on 3-dimensional (3D) models from computed tomographic or magnetic resonance imaging data in which image quality is limited by the lack of detail (maximum ~50 voxels/mm³), natural color, and texture of the source material. Virtual training often requires the purchase of a program, a customized computer, and expensive peripherals dedicated exclusively to this purpose.
GPU Accelerated Surgical Simulators for Complex Morphology

Surgical training in virtual environments, in other words surgical simulation, has previously had difficulties simulating the deformation of complex morphology in real-time. Even fast spring-mass based systems had slow convergence rates for large models. This paper presents two methods to accelerate a spring-mass system in order to simulate a complex organ such as the heart. Computations are accelerated by taking advantage of modern graphics processing units (GPUs). Two GPU implementations are presented. They vary in the generality of spring connections and in the speedup factor they achieve.
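The per-spring and per-mass computations that such a system maps onto the GPU can be sketched as one explicit integration step; all parameters and names here are illustrative:

```python
import numpy as np

def step(pos, vel, springs, rest, k, mass, dt, damping=0.98):
    """One explicit spring-mass integration step (CPU sketch).

    pos, vel : (n, 3) positions and velocities
    springs  : list of (i, j) index pairs; rest : matching rest lengths
    """
    force = np.zeros_like(pos)
    for (i, j), L0 in zip(springs, rest):
        d = pos[j] - pos[i]
        L = np.linalg.norm(d)
        f = k * (L - L0) * d / L        # Hooke's law along the spring direction
        force[i] += f
        force[j] -= f
    vel = damping * (vel + dt * force / mass)
    return pos + dt * vel, vel
```

On the GPU, the force loop is what gets parallelized, one thread per spring or per mass; the generality of the spring connectivity determines the memory-access pattern and thus the achievable speedup.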
Real-time Deformation of Detailed Geometry Based on Mappings to a Less Detailed Physical Simulation on the GPU

Modern graphics processing units (GPUs) can be effectively used to solve physical systems. To use the GPU optimally, the discretization of the physical system is often restricted to a regular grid. When grid values represent spatial positions, a direct visualization can result in a jagged appearance. In this paper we propose to decouple computation and visualization of such systems. We define mappings that enable the deformation of a high-resolution surface based on a physical simulation on a lower resolution uniform grid. More specifically we investigate new approaches for the visualization of a GPU based spring-mass simulation.
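A mapping of this general kind can be sketched as trilinear interpolation of the coarse grid's displacement field at each vertex of the detailed surface (unit grid spacing and illustrative names assumed; the paper's actual mappings may differ):

```python
import numpy as np

def trilinear(field, p):
    """Trilinearly interpolate a vector field sampled on an integer grid at point p."""
    i, j, k = np.floor(p).astype(int)
    fx, fy, fz = p - np.floor(p)
    c = 0.0
    for di, wi in ((0, 1 - fx), (1, fx)):
        for dj, wj in ((0, 1 - fy), (1, fy)):
            for dk, wk in ((0, 1 - fz), (1, fz)):
                c = c + wi * wj * wk * field[i + di, j + dj, k + dk]
    return c

def deform_surface(vertices, displacement):
    """Move each high-resolution vertex by the interpolated coarse-grid displacement."""
    return np.array([v + trilinear(displacement, v) for v in vertices])
```

Because the lookup is independent per vertex, it parallelizes naturally on the GPU, so the detailed surface follows the coarse simulation without inheriting its jagged grid resolution.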