Jesper Børlum | Brian Bunch Christensen | Thomas Kim Kjeldsen | Peter Trier Mikkelsen | Karsten Østergaard Noe | Jens Rimestad | Jesper Mosegaard
Alexandra Institute
Abstract:
This paper presents the Subsurface Light Propagation Volume (SSLPV) method for real-time approximation of subsurface scattering effects in dynamic scenes with changing mesh topology and lighting. SSLPV extends the Light Propagation Volume (LPV) technique for indirect illumination in video games. We introduce a new consistent method for injecting flux from point light sources into an LPV grid, a new rendering method that consistently converts the light intensity stored in an LPV grid into incident radiance, and a model for light scattering and absorption inside heterogeneous materials. Our scheme requires no precomputation and handles arbitrarily deforming meshes. We show that SSLPV produces visually pleasing results in real time at the expense of a few milliseconds of added rendering time.
Paper:
Preprint
Video:
Download Video
Code:
Demo + GLSL shaders
ejulien
Jun 7, 2011
Very interesting, but unfortunately broken on an ATI Radeon HD 5750 with the latest released AMD drivers.
roxlu
Jun 8, 2011
Congrats! Great work, and I really appreciate you sharing the code with the community!
Jon
Jun 9, 2011
Tried the demo on my AMD Radeon HD 5700 (Win7) and am seeing strange rainbow patterns instead of the subsurface “glow”. This is using the 4/19/11 build of the drivers.
admin
Jun 10, 2011
We are currently investigating the ATI issues people are reporting, and will post a fix as soon as possible. All development was done on Nvidia hardware.
Michael Schøler
Jun 16, 2011
Congratulations on the results and on getting the paper published!
Ming
Jun 6, 2012
Hi guys, I have a question about the zonal harmonic coefficients in the shader InjectGS.glsl. I saw that you are using the values:
vec2(0.886226925452758, 1.023326707946488);
Shouldn't it be: vec2(0.282094791, 0.488602511);
The coefficients are 1/(2*sqrt(PI)) and sqrt(3)/(2*sqrt(PI)). I assume there might be some compensation elsewhere that offsets the values back to the originals. I presumed that the division by PI would compensate for the 0th coefficient, but that would also mean the attenuation factor in equation (1) of your paper might be negated.
admin
Jul 23, 2012
Ming:
The numbers are the two coefficients when expanding the clamped
cosine function (centred along the z axis) on the spherical harmonics.
I.e.,
c_00 = int_4pi Y_00(theta,phi) * max(0, cos(theta)) * sin(theta) dtheta dphi = sqrt(pi) / 2
c_10 = int_4pi Y_10(theta,phi) * max(0, cos(theta)) * sin(theta) dtheta dphi = sqrt(pi / 3)
where Y_00(theta,phi) = 1 / sqrt(4*pi) and Y_10(theta,phi) = sqrt(3 / (4*pi)) * cos(theta).
The usual complex conjugation of the Y_lm's can be omitted for m = 0, and c_1,1 and c_1,-1 are zero.
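A quick numerical check of these two integrals (a NumPy sketch, with an arbitrary integration resolution) reproduces the two constants from InjectGS.glsl:
=================Python Code=======================
import numpy

# The integrands are independent of phi, so the phi integral is a factor 2*pi.
theta = numpy.linspace(0.0, numpy.pi, 100001)
clamped = numpy.maximum(0.0, numpy.cos(theta))
Y_00 = 1.0 / numpy.sqrt(4.0 * numpy.pi)
Y_10 = numpy.sqrt(3.0 / (4.0 * numpy.pi)) * numpy.cos(theta)
c_00 = 2.0 * numpy.pi * numpy.trapz(Y_00 * clamped * numpy.sin(theta), theta)
c_10 = 2.0 * numpy.pi * numpy.trapz(Y_10 * clamped * numpy.sin(theta), theta)
print(c_00, numpy.sqrt(numpy.pi) / 2.0)   # both ~0.886226925452758
print(c_10, numpy.sqrt(numpy.pi / 3.0))   # both ~1.023326707946488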
The injection step replaces the incoming refracted light with a virtual point light whose angular dependence is assumed to be proportional to the clamped cosine function centred along the refracted direction. If you expand this function in spherical harmonics, the coefficients are obtained by applying a rotation matrix (corresponding to the refracted direction) to the c_00 and c_10 coefficients above.
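For a zonal lobe that rotation is cheap. A minimal sketch, assuming the real SH basis where the band-1 functions are proportional to (y, z, x) of a unit direction: the rotated band-l coefficients are sqrt(4*pi/(2l+1)) * c_l0 * Y_lm(d), which for band 1 collapses to c_10 times the direction components.
=================Python Code=======================
import numpy

c_00 = numpy.sqrt(numpy.pi) / 2.0   # 0.886226925452758, rotation-invariant
c_10 = numpy.sqrt(numpy.pi / 3.0)   # 1.023326707946488

# Example unit refracted direction (illustrative, not from the shaders).
d = numpy.array([0.6, 0.0, 0.8])

# Band-1 coefficients of the rotated clamped-cosine lobe, in the real basis
# ordering (Y_1,-1, Y_1,0, Y_1,1) ~ (y, z, x).
coeffs = numpy.concatenate(([c_00], c_10 * numpy.array([d[1], d[2], d[0]])))
print(coeffs)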
Phi
Apr 2, 2014
Hi,
Cool stuff. While reading the paper, I came to think the final (right-most) equation for the projected solid angle w.r.t. the pixel is wrong. I wrote some code to check the values and can't seem to make them agree.
I checked "RSMFS.glsl" and my derivation almost matches it, except I can't see how the cos(theta_p) disappeared:
cos(theta_p) = dot(normalized_forward_vector, r_p) / norm(r_p)
This would explain the pow-1.5 factor.
Thank you,
Phi
=================Python Code=======================
import numpy

n_x = 512.0
n_y = 512.0
h_fov = numpy.radians(75.0)
alpha = n_y / n_x
area_pixel = 1.0 / (n_x * n_y)
near = 1.0
forward = numpy.array([0.0, 0.0, -near])
r = numpy.array([0.5, 0.5, -near])
norm_r = numpy.linalg.norm(r)
r /= norm_r
# paper formula vs. direct projected-solid-angle formula
print(4.0 * alpha * (numpy.tan(h_fov / 2.0) ** 2) * (numpy.dot(r, forward) ** 3) * area_pixel)
print(area_pixel * numpy.dot(r, forward) / (norm_r ** 2))
Thomas Kim Kjeldsen
Apr 2, 2014
Hello Phi,
The pixel area is
(area of shadow map) / (number of pixels),
i.e., you must scale your pixel area by the total area of the shadow map, which can be computed from the field of view and the distance to the virtual plane.
=================Python Code=======================
import numpy

n_x = 512.0
n_y = 512.0
h_fov = numpy.radians(75.0)
alpha = n_y / n_x
near = 1.0
# total shadow-map area from the field of view and the near-plane distance
shadowmap_width = 2.0 * near * numpy.tan(h_fov / 2.0)
shadowmap_height = alpha * shadowmap_width
area_pixel = shadowmap_width * shadowmap_height / (n_x * n_y)
forward = numpy.array([0.0, 0.0, -near])
r = numpy.array([0.5, 0.5, -near])
norm_r = numpy.linalg.norm(r)
r /= norm_r
print(4.0 * alpha * (numpy.tan(h_fov / 2.0) ** 2) * (numpy.dot(r, forward) ** 3) / (n_x * n_y))
print(area_pixel * numpy.dot(r, forward) / (norm_r ** 2))
Phi
Apr 2, 2014
D’oh! Thanks Thomas. That would have taken me ages to debug.
For future reference, the paper formula assumes near = 1.
It should be divided by near^2 otherwise, unless I'm overlooking something again.
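A quick confirmation of that near^2 factor (a sketch reusing Thomas's variable names, just with near != 1):
=================Python Code=======================
import numpy

n_x = 512.0
n_y = 512.0
h_fov = numpy.radians(75.0)
alpha = n_y / n_x
near = 2.5  # anything other than 1

shadowmap_width = 2.0 * near * numpy.tan(h_fov / 2.0)
shadowmap_height = alpha * shadowmap_width
area_pixel = shadowmap_width * shadowmap_height / (n_x * n_y)

forward = numpy.array([0.0, 0.0, -near])
r = numpy.array([0.5, 0.5, -near])
norm_r = numpy.linalg.norm(r)
r /= norm_r

paper = 4.0 * alpha * (numpy.tan(h_fov / 2.0) ** 2) * (numpy.dot(r, forward) ** 3) / (n_x * n_y)
direct = area_pixel * numpy.dot(r, forward) / (norm_r ** 2)
print(paper / near ** 2, direct)  # the two values agree once the paper formula is divided by near^2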