
# Introduction:

Check here for Part 1 or Part 3

Last time, in Part 1, we established the assumptions we are going to make about our raindrops: their shape, the different rays we will consider, refraction, reflection, total internal reflection, and the Fresnel effect. This time we will discuss our first approach to solving the problem.

Our first attempt to optimize the rendering of raindrops is an interpolation-based approach. Interpolation may sound like a complex word at first, but it's really just a weighted average. Suppose you have two points along an axis, A and B, and a third point M halfway between them. M is affected equally by A and B, so the value of M is A * 0.5 + B * 0.5. When M is only 10% of the way to B, and therefore much closer to A, the value of M is A * 0.9 + B * 0.1.
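To make that concrete, here is a minimal sketch of the weighted average in Python (the name `lerp` is just the conventional label for linear interpolation, not something from our renderer):

```python
def lerp(a, b, t):
    """Weighted average of a and b; t is the fraction of the way from a to b."""
    return a * (1.0 - t) + b * t

# M halfway between A = 10 and B = 20:
print(lerp(10.0, 20.0, 0.5))  # 15.0
# M only 10% of the way toward B, so much closer to A:
print(lerp(10.0, 20.0, 0.1))  # 11.0
```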

So when we first wanted to address rendering thousands of raindrops, we wanted to use this idea of interpolation to interpolate the points where rays intersect the environment map. Below is a series of images and explanations that will hopefully explain it in better detail.

# Solution Explained:

When we first began, the idea was that we could interpolate the XYZ values (positions in 3-dimensional space) where the refracted rays of light intersect the environment map. The illustration below will hopefully capture the idea in a 2-dimensional space. Figure 1: The camera is at the bottom in dark-purple, with a view vector. The raindrop is in blue. The orange is the environment map.

The first step is depicted above. Basically, we are going to ray-trace a raindrop. The key here is that for this specific view vector we end up with a certain I(x,y,z): the point at which the view vector for this raindrop intersects the environment map. N1 and N2 are the surface normals where the light vector intersects the surface of the raindrop. For our research we treated all raindrops as spheres.
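As a rough illustration of this step, here is a sketch in plain Python of tracing one view vector through a spherical drop: refract at the entry point (normal N1), refract again at the exit (normal N2), then intersect the outgoing ray with a spherical environment map to read off I(x,y,z). The scene layout, the names, and the environment radius are all made up for the example; the index of refraction of water (about 1.33) is the only real constant, and the sketch ignores the reflected rays and Fresnel split discussed in Part 1.

```python
import math

# Tiny 3-vector helpers on plain tuples.
def dot(a, b):   return sum(x * y for x, y in zip(a, b))
def add(a, b):   return tuple(x + y for x, y in zip(a, b))
def sub(a, b):   return tuple(x - y for x, y in zip(a, b))
def scale(v, s): return tuple(x * s for x in v)
def norm(v):     return scale(v, 1.0 / math.sqrt(dot(v, v)))

def ray_sphere(origin, d, center, radius):
    """Smallest positive t with |origin + t*d - center| = radius, or None."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, d)                 # d is assumed unit length
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    if t < 1e-9:                         # ignore hits at the ray origin
        t = (-b + math.sqrt(disc)) / 2.0
    return t if t > 1e-9 else None

def refract(d, n, eta):
    """Snell's law in vector form; d, n unit vectors, eta = n_from / n_to."""
    cos_i = -dot(d, n)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                      # total internal reflection
    return add(scale(d, eta), scale(n, eta * cos_i - math.sqrt(k)))

# Made-up scene: unit-sphere drop at the origin, camera below it,
# environment map as a big sphere of radius 50 around everything.
cam  = (0.0, 0.0, -3.0)
view = norm((0.0, 0.1, 1.0))
center, radius = (0.0, 0.0, 0.0), 1.0
N_WATER = 1.33

t1 = ray_sphere(cam, view, center, radius)
p1 = add(cam, scale(view, t1))           # entry point on the drop
n1 = norm(sub(p1, center))               # this is N1
d1 = refract(view, n1, 1.0 / N_WATER)    # air -> water

t2 = ray_sphere(p1, d1, center, radius)  # exit hit on the far side
p2 = add(p1, scale(d1, t2))
n2 = norm(sub(center, p2))               # N2, flipped to face the ray
d2 = refract(d1, n2, N_WATER)            # water -> air

t_env = ray_sphere(p2, d2, center, 50.0)
I = add(p2, scale(d2, t_env))            # the stored I(x, y, z)
```

A production tracer would also follow the reflected branches and handle total internal reflection instead of returning `None`, but the chain of two refractions plus one environment lookup is the whole per-ray cost we are trying to avoid.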

Continuing on from Figure 1, we can calculate other view vectors and how they refract through the raindrop. Figure 2: The camera is at the bottom in dark-purple, with a view vector. The raindrop is in blue. The orange is the environment map.

Looking at Figure 2, we can calculate all the vectors that intersect the raindrop, which is done through ray-tracing. The next task is to store all of the I(x,y,z) data in a map for the raindrop. Figure 3: Same as Figure 2, but this time we have the mask where we are going to store our data from I(x,y,z).

What we have done in Figure 3, above, is define the “screen” for the raindrop, depicted by the green section. In the primary view we are looking at the scene from above, looking down. The green grid is the view from the camera, and these green screens are always aligned to the camera. Within each “pixel” of the green screen we are going to store I(x,y,z), depending on where that pixel's ray intersects the environment map. Normally an environment map stores color data, that is, RGB information; for us, however, each pixel stores X,Y,Z positional data. Later we will grab the color data that corresponds to the position of the intersection with the environment map. Let's take a look at what we are interpolating. Figure 4: Here we have raindrops 1 and 2 with their view-aligned screens. Raindrop 3 is going to have its values interpolated.
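As a sketch of that storage scheme, a raindrop's screen can be modeled as a small grid whose “pixels” hold I(x,y,z) positions instead of RGB. The names here are invented for the example, and `trace_fn` stands in for the real per-pixel ray tracer:

```python
# A raindrop "screen": a camera-aligned grid whose pixels store the
# environment-map hit position I(x, y, z) rather than an RGB colour.
def build_position_map(width, height, trace_fn):
    return [[trace_fn(px, py) for px in range(width)]
            for py in range(height)]

# Toy trace_fn for illustration only: pretends every ray hits a point
# whose coordinates encode the pixel indices.
toy_trace = lambda px, py: (float(px), float(py), 0.0)
screen = build_position_map(4, 4, toy_trace)
print(screen[2][3])  # stored I(x, y, z) for pixel column 3, row 2
```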

So for raindrops 1 and 2 we go ahead and calculate the values for their screens using ray-tracing, storing I(x,y,z) data in each “pixel”. Raindrop 3 is going to have the values of its screen interpolated from the values found in raindrops 1 and 2.

In Figure 5, we have the same “pixel” in each screen. That pixel for raindrop 3 is going to have its value interpolated between raindrops 1 and 2. The result should be that we ray-trace raindrops 1 and 2, while for raindrop 3 we do not need to perform the expensive work of ray-tracing, primarily calculating intersection, refraction, and reflection.
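The interpolation step itself is just the weighted average from earlier, applied componentwise to each pixel's stored position. A minimal sketch, with the screen layout and all names invented for the example:

```python
def lerp_pixel(i1, i2, t):
    """Componentwise weighted average of two stored I(x, y, z) samples."""
    return tuple(a * (1.0 - t) + b * t for a, b in zip(i1, i2))

def interpolate_screen(screen1, screen2, t):
    """Build raindrop 3's screen from raindrops 1 and 2, no ray-tracing."""
    return [[lerp_pixel(p1, p2, t) for p1, p2 in zip(row1, row2)]
            for row1, row2 in zip(screen1, screen2)]

# One-pixel screens from drops 1 and 2; drop 3 sits halfway between them.
s1 = [[(10.0, 0.0, 50.0)]]
s2 = [[(20.0, 0.0, 50.0)]]
print(interpolate_screen(s1, s2, 0.5))  # [[(15.0, 0.0, 50.0)]]
```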

Unfortunately, we got mixed results. When the raindrops lie along the X-axis we receive decent results. However, as the raindrops move along the Y-axis, especially near the poles, the solution collapses. Here are some examples of the results.

# Results:

When we interpolate along the X-axis as shown in this image:

We achieve the following results. Figure 7: The image on the right is directly to the right of the camera, the image on the left is to the left of the camera. The image in the middle is the drop with interpolated intersection (X,Y,Z) positional data used to access the environment map. The drop on top is the desired result which was ray-traced and acts as our ground-truth.

Looking at this image, you might think: hey, not bad, looks pretty accurate. However, an issue arises when you then decide to incorporate the Y-axis. So we put two drops in the scene, one directly above the camera and one directly below it, and then interpolated the same middle drop. Here are the results. Figure 8: The image above is located above the camera, and the bottom image, below the camera. The drop on the right was ray-traced. The drop on the left is the interpolated result. Looks like we might have some artifacts...

Yeah, so, umm, not quite right. Looks like we have some issues with the result. Obviously there is a problem with how the values are being interpolated, but although we can see where the problem shows up, we never found a way to solve it, and I am not aware of its mathematical cause. My guess is that it comes from using a spherical environment map while our interpolation process ignores that fact.
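One way to see why a spherical map and a purely linear interpolation might clash (a sketch of my guess, not a confirmed diagnosis): averaging two points that lie on a sphere produces a point inside the sphere, so the interpolated I(x,y,z) no longer corresponds to any real location on the map, and the error grows with the angle between the two hit points. All numbers here are made up for the illustration:

```python
import math

def lerp3(a, b, t):
    """Componentwise linear interpolation of two 3D points."""
    return tuple(x * (1.0 - t) + y * t for x, y in zip(a, b))

def length(v):
    return math.sqrt(sum(x * x for x in v))

R = 50.0                 # hypothetical environment-sphere radius
a = (R, 0.0, 0.0)        # hit point off to the camera's right
b = (0.0, R, 0.0)        # hit point straight up, 90 degrees away
m = lerp3(a, b, 0.5)     # the linearly interpolated "hit point"
print(length(m))         # ~35.36: well inside the radius-50 sphere
```

For hit points only a few degrees apart, the chord stays close to the sphere's surface, which may be why the X-axis test looked acceptable while the near-pole cases, where the hit directions change rapidly, fell apart.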

# Conclusions:

Of course there is a mathematical explanation for the interpolation results. Sadly, I was never able to discover what that explanation is, and I remain very interested in one day figuring it out. I am hoping that these posts will bring those intelligent individuals forward so that they can explain it to me. This is one example of how doing research raises more questions than it answers. Nevertheless, the fight continues, and you will see how we solved this problem next time.

There are some other issues that this approach does not solve. What happens when the raindrops are not along the same axis, or if one drop is placed behind the camera? Another challenge is that refraction is based on the vector between the viewpoint (the camera) and the raindrop, so raindrops at different distances will have different refraction vectors, in turn affecting the results of the interpolated drops. All in all, it can become quite complicated. So, for the time being, we abandoned interpolating intersection points with the environment map.

Next time, you’ll see what we did to solve this problem and get wonderful raindrops, and it begins with none other than the power of the GPU.