As you can see, doing point lighting is quite simple. Unfortunately, the visual results are not.

For example, use the controls to display the position of the point light source, then position it near the ground plane. See anything wrong?

If everything were working correctly, one would expect to see a bright area directly under the light. After all, geometrically, this situation looks like this:

The surface normals for the areas directly under the light point in almost the same direction as the direction towards the light. This means that the angle of incidence is small, so the cosine of this angle is close to 1. That should translate to having a bright area under the light, but darker areas farther away. What we see is nothing of the sort. Why is that?

Well, consider what we are doing. We are computing the lighting at every triangle's
*vertex*, and then interpolating the results across the surface
of the triangle. The ground plane is made up of precisely four vertices: the four
corners. And those are all very far from the light position and have a very large angle
of incidence. Since none of them have a small angle of incidence, none of the colors
that are interpolated across the surface are bright.
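The effect on the ground plane can be sketched numerically. The following plain-Python snippet (with made-up positions and plane size, not taken from the tutorial's code) computes the diffuse cosine term at the four corners of a large plane and interpolates to the center, then compares against computing the term at the center directly:

```python
# A minimal numeric sketch of why a 4-vertex ground plane stays dark
# under a nearby point light. All positions are hypothetical.
import math

def diffuse(surface_point, normal, light_pos):
    """Cosine of the angle of incidence, clamped to zero."""
    to_light = [l - p for l, p in zip(light_pos, surface_point)]
    length = math.sqrt(sum(c * c for c in to_light))
    to_light = [c / length for c in to_light]
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))

normal = (0.0, 1.0, 0.0)          # the ground plane faces straight up
light = (0.0, 1.0, 0.0)           # light 1 unit above the plane's center
corners = [(-10, 0, -10), (10, 0, -10), (-10, 0, 10), (10, 0, 10)]

# Per-vertex lighting: compute at the corners, then interpolate.
corner_values = [diffuse(c, normal, light) for c in corners]
interpolated_center = sum(corner_values) / 4.0  # all weights are 0.25 at the center

# Computing at the center directly instead:
true_center = diffuse((0, 0, 0), normal, light)

print(interpolated_center)  # small: every corner sees a large angle of incidence
print(true_center)          # 1.0: the light is directly overhead
```

Every interpolated value on the plane is a weighted average of the four dim corner values, so no point on the surface can be brighter than the brightest corner.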

You can see evidence of this by putting the light position next to the cylinder. If the light is at the top or bottom of the cylinder, then the area near the light will be bright. But if you move the light to the middle of the cylinder, far from the top or bottom vertices, then the illumination will be much dimmer.

This is not the only problem with doing per-vertex lighting. For example, run the tutorial again and do not move the light. Just watch how the light behaves on the cylinder's surface as it animates around. Unlike with directional lighting, you can very easily see the triangles on the cylinder's surface. Though the per-vertex computations are not helping matters, the main problem here has to do with interpolating the values.

If you move the light source farther away, you can see that the triangles smooth out and become indistinct from one another. But this is simply because, if the light source is far enough away, the results are indistinguishable from a directional light. Each vertex's direction to the light is almost the same as each other vertex's direction to the light.
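This convergence is easy to check numerically. A quick plain-Python sketch (positions chosen arbitrarily for illustration) measures the angle between two vertices' directions to the light as the light moves away:

```python
# As a point light recedes, the per-vertex directions to it converge on
# one shared direction, which is why a distant point light behaves like
# a directional light. All positions here are hypothetical.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

pa, pb = (-5.0, 0.0, 0.0), (5.0, 0.0, 0.0)   # two vertices 10 units apart

cosines = []
for height in (2.0, 20.0, 200.0):
    light = (0.0, height, 0.0)
    La = normalize([l - p for l, p in zip(light, pa)])
    Lb = normalize([l - p for l, p in zip(light, pb)])
    cosines.append(dot(La, Lb))   # cosine of the angle between the two directions

print(cosines)   # approaches 1.0 as the light moves away
```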

Per-vertex lighting was reasonable when dealing with directional lights. But it simply is not a good idea for point lighting. The question arises: why was per-vertex lighting good with directional lights to begin with?

Remember that our diffuse lighting equation has two parameters: the direction to the light and the surface normal. In directional lighting, the direction to the light is always the same. Therefore, the only value that changes over a triangle's surface is the surface normal.

Linear interpolation of vectors looks like this:

${V}_{a}\alpha +{V}_{b}\left(1-\alpha \right)$

The α in the equation is the factor of interpolation between the two values. When α is one, we get $V_a$, and when it is zero, we get $V_b$. The two values, $V_a$ and $V_b$, can be scalars or vectors.
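As a concrete illustration, here is that formula as a small plain-Python helper (not part of the tutorial's code), applied componentwise to vectors:

```python
# Linear interpolation of two vectors: Va*alpha + Vb*(1 - alpha),
# applied to each component.
def lerp(va, vb, alpha):
    return [a * alpha + b * (1.0 - alpha) for a, b in zip(va, vb)]

va, vb = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
print(lerp(va, vb, 1.0))   # [1.0, 0.0, 0.0] -- alpha = 1 yields Va
print(lerp(va, vb, 0.0))   # [0.0, 1.0, 0.0] -- alpha = 0 yields Vb
print(lerp(va, vb, 0.5))   # [0.5, 0.5, 0.0] -- halfway between
```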

Our diffuse lighting equation is this:

$D*I*\left(\hat{N}\cdot \hat{L}\right)$

If the surface normal N is being interpolated, then at any particular point on the surface, we get this equation for a directional light (the light direction L does not change):

$D*I*\left(\hat{L}\cdot \left(\hat{N_a}\alpha +\hat{N_b}\left(1-\alpha \right)\right)\right)$

The dot product is distributive over vector addition, just as scalar multiplication is. So we can distribute the L to both terms of the sum:

$D*I*\left(\left(\hat{L}\cdot \left(\hat{N_a}\alpha \right)\right)+\left(\hat{L}\cdot \left(\hat{N_b}\left(1-\alpha \right)\right)\right)\right)$

We can extract the scalar terms from the dot product. Remember that the dot product is the cosine of the angle between two vectors, times the lengths of those vectors. The two scaling terms directly modify the length of the vectors. So they can be pulled out to give us:

$D*I*\left(\alpha \left(\hat{L}\cdot \hat{N_a}\right)+\left(1-\alpha \right)\left(\hat{L}\cdot \hat{N_b}\right)\right)$

Recall that vector/scalar multiplication is distributive. We can distribute the multiplication by the diffuse color and light intensity to both terms. This gives us:

$\begin{array}{c}\left(D*I*\alpha \left(\hat{L}\cdot \hat{N_a}\right)\right)+\left(D*I*\left(1-\alpha \right)\left(\hat{L}\cdot \hat{N_b}\right)\right)\\ \left(D*I*\left(\hat{L}\cdot \hat{N_a}\right)\right)\alpha +\left(D*I*\left(\hat{L}\cdot \hat{N_b}\right)\right)\left(1-\alpha \right)\end{array}$

This means that if L is constant, linearly interpolating N is exactly equivalent to linearly interpolating the results of the lighting equation. And the addition of the ambient term does not change this, since it is a constant and would not be affected by linear interpolation.
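The equivalence can be verified numerically. This plain-Python check (with arbitrary unit vectors of my choosing) interpolates two normals and dots the result with a fixed L, then compares against interpolating the two per-vertex dot products:

```python
# With a fixed light direction L, dotting the interpolated normal equals
# interpolating the two dot products. Vectors here are made up.
L = (0.0, 0.8, 0.6)         # a unit-length light direction
Na = (1.0, 0.0, 0.0)
Nb = (0.0, 1.0, 0.0)
alpha = 0.3

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

lerped_normal = [a * alpha + b * (1 - alpha) for a, b in zip(Na, Nb)]
lhs = dot(L, lerped_normal)                          # interpolate, then light
rhs = alpha * dot(L, Na) + (1 - alpha) * dot(L, Nb)  # light, then interpolate
print(abs(lhs - rhs) < 1e-12)
```

This is just the linearity of the dot product: the result holds for any choice of normals and interpolation factor, which is why per-vertex lighting with a directional light looks the same as per-pixel lighting (ignoring normalization of the interpolated normal).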

When doing point lighting, you would have to interpolate both N and L. And that does not yield the same results as linearly interpolating the two colors you get from the lighting equation. This is a big part of the reason why the cylinder does not look correct.
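A small numeric counterexample (plain Python, positions chosen for illustration) makes the difference concrete: lighting each vertex with its own direction and interpolating the results gives a very different answer from computing the direction at the interpolated point:

```python
# With a point light, interpolating the two per-vertex results does not
# match lighting with the direction computed at the interpolated point.
# All positions are hypothetical.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

light_pos = (0.0, 1.0, 0.0)
pa, pb = (-5.0, 0.0, 0.0), (5.0, 0.0, 0.0)   # two vertices of one edge
N = (0.0, 1.0, 0.0)                          # flat surface, constant normal

La = normalize([l - p for l, p in zip(light_pos, pa)])
Lb = normalize([l - p for l, p in zip(light_pos, pb)])

# Per-vertex: light at each vertex, then interpolate the results.
per_vertex = 0.5 * dot(N, La) + 0.5 * dot(N, Lb)

# Per-point: compute the direction at the midpoint, then light.
mid = [(a + b) * 0.5 for a, b in zip(pa, pb)]
Lm = normalize([l - p for l, p in zip(light_pos, mid)])
per_point = dot(N, Lm)

print(per_vertex)   # ~0.196: both vertices see a grazing angle
print(per_point)    # 1.0: the midpoint is directly under the light
```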

The more physically correct method of lighting is to perform lighting at every rendered pixel. To do that, we would have to interpolate the lighting parameters across the triangle, and perform the lighting computation in the fragment shader.