In Review

In this tutorial, you have learned the following:

  • Textures can be projected onto scene geometry, by generating texture coordinates in the clip space of a projector and letting the texture access perform the perspective divide.

  • Projective texturing can be used to build a spotlight whose intensity pattern comes from a texture.

  • Cube maps take a 3D direction as their texture coordinate, which makes them suited to projecting light outward in all directions from a point.

  • Texture data can be stored in compressed image formats and uploaded with glCompressedTexImage2D.

Further Study

Try doing these things with the given programs.

  • In the spotlight project, change the projection texture coordinate from a full 4D coordinate to a 2D one. Do this by performing the divide-by-W step directly in the vertex shader, and simply pass the ST coordinates to the fragment shader. Just use texture instead of textureProj in the fragment shader; a sketch of this change appears after this list. See how that affects things. Also, try doing the perspective divide in the fragment shader and see how this differs from doing it in the vertex shader.

  • In the spotlight project, change the interpolation style from smooth to noperspective. See how non-perspective-correct interpolation changes the projection.

  • Instead of using a projective texture, build a lighting system for spot lights entirely within the shader. It should have a maximum angle; the larger the angle, the wider the spotlight. It should also have an inner angle that is smaller than the maximum angle. This is the angle where the light starts falling off. At the maximum angle, the light intensity goes to zero; at the inner angle, the light intensity is full. The key here is remembering that the dot product between the spotlight's direction and the direction from the surface to the light is the cosine of the angle between the two vectors. The acos function can be used to compute the angle (in radians) from the cosine. A sketch of such a falloff computation also appears below.
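A minimal sketch of the first exercise (the uniform and variable names here are assumptions, not necessarily the identifiers the tutorial's code uses). The vertex shader performs the divide-by-W itself and passes only the 2D ST coordinate along:

    #version 330

    layout(location = 0) in vec3 position;

    uniform mat4 modelToCameraMatrix;      // model space -> camera space
    uniform mat4 cameraToClipMatrix;       // camera space -> clip space
    uniform mat4 cameraToLightProjMatrix;  // camera space -> projector's clip space

    smooth out vec2 lightProjCoord;        // ST coordinate, already divided by W

    void main()
    {
        vec4 cameraPos = modelToCameraMatrix * vec4(position, 1.0);
        gl_Position = cameraToClipMatrix * cameraPos;

        vec4 projCoord = cameraToLightProjMatrix * cameraPos;
        lightProjCoord = projCoord.st / projCoord.w;  // per-vertex perspective divide
    }

The fragment shader then samples with texture(lightProjTex, lightProjCoord) instead of using textureProj. Interpolating an already-divided ST coordinate is not equivalent to interpolating the full 4D coordinate and dividing per-fragment; seeing that difference is the point of the exercise. (For the second exercise, swapping smooth for noperspective on the interpolated value is likewise a one-word change.)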
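For the last exercise, a sketch of the angle-based falloff as a fragment shader helper (again, all names are assumptions; maxAngle and innerAngle are uniforms in radians):

    uniform vec3 spotDirection;  // direction the spotlight points, normalized
    uniform float maxAngle;      // intensity reaches zero at this angle
    uniform float innerAngle;    // full intensity inside this angle

    float SpotFalloff(vec3 surfaceToLight)
    {
        // The dot product of two unit vectors is the cosine of the angle
        // between them. spotDirection points away from the light, so compare
        // it against the light-to-surface direction.
        float cosAngle = dot(spotDirection, -normalize(surfaceToLight));
        float angle = acos(cosAngle);

        // 1.0 at or inside innerAngle, 0.0 at or beyond maxAngle,
        // and a linear ramp in between.
        return clamp((maxAngle - angle) / (maxAngle - innerAngle), 0.0, 1.0);
    }

Multiply the light's intensity by this term. A linear ramp in the angle is only one choice; ramping in the cosine (which avoids acos entirely) or using smoothstep are common alternatives.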

Further Research

Cube maps are fairly old technology. The version used in GPUs today derives from the RenderMan standard and earlier works. However, before hardware that allowed cube maps became widely available, there were alternative techniques used to achieve similar effects.

The basic idea behind all of these is to transform a 3D direction vector into a 2D texture coordinate. Note that converting a 3D direction onto a 2D plane is a problem that was encountered long before computer graphics. It is effectively the cartographer's map projection problem: how to create a 2D map of a 3D spherical surface. All of these techniques introduce some distance distortion into the 2D map. Some distortion is more acceptable in certain circumstances than others.

One of the more common pre-cube-map techniques was sphere mapping. This required a very heavily distorted 2D texture, so the results left something to be desired. But the 3D-to-2D computation was simple enough to be encoded into early graphics hardware, or performed quickly on the CPU, so it was acceptable as a stop-gap. Other techniques, such as dual paraboloid mapping, were also used. Dual paraboloid mapping used a pair of textures, so it ate up more resources. But it required less severe distortion of the texture, so in some cases it was a better tradeoff.
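As an illustration of how cheap that 3D-to-2D computation was, the sphere-map coordinate generation that fixed-function OpenGL exposed (GL_SPHERE_MAP texture coordinate generation) reduces to a few operations. Written as a GLSL helper, with r an eye-space reflection vector:

    vec2 SphereMapCoord(vec3 r)
    {
        // m is twice the length of the vector (r.x, r.y, r.z + 1).
        float m = 2.0 * sqrt(r.x * r.x + r.y * r.y + (r.z + 1.0) * (r.z + 1.0));
        return vec2(r.x / m + 0.5, r.y / m + 0.5);
    }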

OpenGL Functions of Note

glCompressedTexImage2D

Allocates a 2D image of the given size and mipmap level for the current texture, using the given compressed image format, and uploads compressed pixel data. The pixel data must exactly match the format defined by the compressed image format.
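A hedged sketch of such a call (the variable names and the choice of S3TC are assumptions; S3TC formats require the GL_EXT_texture_compression_s3tc extension). DXT5 encodes each 4x4 block of pixels in 16 bytes, and imageSize must match that layout exactly:

    GLsizei blocksWide = (width + 3) / 4;
    GLsizei blocksHigh = (height + 3) / 4;
    GLsizei imageSize = blocksWide * blocksHigh * 16;  // 16 bytes per DXT5 block

    glBindTexture(GL_TEXTURE_2D, textureObject);
    glCompressedTexImage2D(GL_TEXTURE_2D,
        0,                                 // mipmap level
        GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,  // compressed image format
        width, height,
        0,                                 // border: must be 0
        imageSize,
        compressedPixels);                 // data in exactly the DXT5 layout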

GLSL Functions of Note

vec4 textureProj(sampler texSampler, vec texCoord);

Accesses the texture associated with texSampler, using the post-projective texture coordinate specified by texCoord. The sampler can be most of the sampler types, but not samplerCube, among a few others. The texture coordinate is in homogeneous space, so it has one more component than the number of dimensions of the texture. Thus, texCoord for a sampler of type sampler1D is a vec2. For sampler2D, it is a vec3.
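For illustration (identifiers assumed), the two accesses below compute the same color for a sampler2D; textureProj simply folds the divide by the coordinate's last component into the texture access:

    #version 330

    uniform sampler2D projTex;
    in vec3 projCoord;  // homogeneous 2D texture coordinate
    out vec4 outputColor;

    void main()
    {
        outputColor = textureProj(projTex, projCoord);
        // Equivalent manual form:
        // outputColor = texture(projTex, projCoord.st / projCoord.p);
    }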
