2. Setting pixels using the image buffer.
In render.h, we have the following:

    typedef int Pixel;
    extern Pixel currColor;
    extern GLint winWidth;
    extern GLint winHeight;
    extern Pixel *buffer;

A Pixel is defined to be a 32-bit quantity holding 4 8-bit components in the format 0xRRGGBBAA, with 0xRR (a hexadecimal number: 0 to 255) representing the red component, 0xGG representing the green component, 0xBB representing the blue component, and 0xAA representing the alpha component (which will be unused for this assignment).

buffer is a pointer to winWidth * winHeight Pixels laid out in row-major order. It is the image buffer that will be used for display.
Let's say you want to set pixel (x, y) to (r, g, b), where r, g, and b are integer values between 0 and 255. You would type:
    buffer[x + y * winWidth] = (r << 24) | (g << 16) | (b << 8);

Let's say you want to bitwise xor pixel (x, y) with currColor. You would type:

    buffer[x + y * winWidth] ^= currColor;

Note that it is very inefficient to do that integer multiply by winWidth on every pixel, and that the calculation can be incrementalized very easily.
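To make the incremental idea concrete, here is a minimal sketch, assuming the render.h declarations above; the function name fill_rows and its arguments are illustrative, not part of the assignment API:

    #include "render.h"   /* assumed to provide Pixel, buffer, winWidth (as described above) */

    /* Sketch: fill rows y0..y1-1 with one color while doing only one
       multiply by winWidth per call instead of one per pixel. */
    static void fill_rows(int y0, int y1, int r, int g, int b)
    {
        Pixel color = (r << 24) | (g << 16) | (b << 8);
        Pixel *row = buffer + y0 * winWidth;   /* single multiply */
        for (int y = y0; y < y1; y++) {
            for (int x = 0; x < winWidth; x++)
                row[x] = color;                /* no per-pixel multiply */
            row += winWidth;                   /* step to the next row */
        }
    }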
3. Getting a texel from the texture map.
In render.h, we have the following:

    typedef struct Texture_st {
        Pixel *texel;
        int width, height;
        int log2width, log2height;
        int widthmask, heightmask;
    } Texture;

    extern Texture *texture;

texture is a pointer to the current texture map. texel is a pointer to width * height Pixels. If you want to grab a texel (u, v), you refer to it as:

    texture->texel[u + v * texture->width]

Because integer multiplies are very expensive, and there is no way to incrementalize texture access using finite differences, width and height are required to be powers of two. This allows us to rewrite the texture access without any integer multiply:
    texture->texel[u + (v << texture->log2width)]

You will also want to see if your texture coordinates are beyond the bounds of the texture map. Let's say you have a texture map that is 256 wide. This means that u can be any number from 0 to 255. An easy way to check whether this value is outside of that range (and thus needs clamping) is to do the following test:

    if (u & ~texture->widthmask) clamp(u);

The reason this works is that widthmask is 255, which in binary is just the 8 least significant bits set to 1. Thus, ~widthmask is 0 in the 8 least significant bits, and 1 in all the other bits. Now if u is larger than 255 or smaller than 0, one of these more significant bits will be 1, and the test will give a non-zero number. Thus, by using powers of two, we can do a quick bit test to check if we need clamping.
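Putting the shift and the mask test together, a point fetch might look like the sketch below (fetch_texel is an illustrative name, not part of the assignment API; the in-line clamp replaces the clamp() call above):

    #include "render.h"   /* assumed to provide Pixel, Texture, texture (as described above) */

    /* Sketch: fetch texel (u, v) from the current texture, clamping
       out-of-range coordinates using the mask test described above. */
    static Pixel fetch_texel(int u, int v)
    {
        if (u & ~texture->widthmask)              /* any out-of-range bit set?  */
            u = (u < 0) ? 0 : texture->widthmask; /* clamp to [0, width - 1]    */
        if (v & ~texture->heightmask)
            v = (v < 0) ? 0 : texture->heightmask;
        return texture->texel[u + (v << texture->log2width)];
    }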
4. RasterizeXor().

In RasterizeXor(), you are supposed to xor the pixel value with the value in currColor. This is a bitwise xor, and is intended to help you check for double hits.
For example, if a pixel is cleared to black to begin with, and then you xor it with white, it will turn white. Then if you xor it with white again, it will turn black. Thus if you double-hit a pixel with white, it will turn black. In fact if you double-hit a pixel with any color, it will always go back to the color it was before you touched the pixel.
On the other hand, if you first xor with red, and then xor with green, you will get yellow back because red and green cover different bits:
( (0x00000000 ^ 0xFF000000) ^ 0x00FF0000 ) => ( 0xFF000000 ^ 0x00FF0000 ) => ( 0xFFFF0000 )
5. Mapping to screen coordinates.
    X = ((x/w) + 1.0f) * winWidth * 0.5f;
    Y = ((y/w) + 1.0f) * winHeight * 0.5f;

Note that this will map the left edge to the left edge of the image buffer, etc. You should, however, take your samples at pixel centers, not at the bottom-left corner of the pixels. This means that your samples should be at (1/2, 1/2), (1/2, 3/2), etc. But any code you write will probably have a much easier time taking samples at integer locations. You can fix this by shifting everything and taking samples at (0, 0), (0, 1), etc. Doing this will mean that the code in the lecture notes will work as is. Thus you should transform the vertices as follows to move pixel centers to integer coordinates:

    X = (((x/w) + 1.0f) * winWidth * 0.5f) - 0.5f;
    Y = (((y/w) + 1.0f) * winHeight * 0.5f) - 0.5f;
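As a small sketch of this mapping (assuming the render.h globals; project_to_screen is an illustrative name):

    #include "render.h"   /* assumed to provide winWidth, winHeight */

    /* Sketch: map clip-space (x, y, w) to screen coordinates, with pixel
       centers shifted onto integer locations as described above. */
    static void project_to_screen(float x, float y, float w, float *X, float *Y)
    {
        *X = (((x / w) + 1.0f) * winWidth  * 0.5f) - 0.5f;
        *Y = (((y / w) + 1.0f) * winHeight * 0.5f) - 0.5f;
    }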
6. Mapping to texture coordinates.
For point sampling, you want to view the texture as being split into 256 equal parts, with each texel representing a sample taken at the center of the texel. In order to get the nearest neighbor with point sampling, with u and v in the range [0, 1), you would just access texel:
    ( Floor(u * width), Floor(v * height) )

Note that you don't have to do those multiplies per pixel; you can just scale the texture coordinates to be in the range [0, 256) by multiplying only at the vertices.
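As a sketch (point_sample is an illustrative name; u and v are assumed to have already been scaled to [0, width) and [0, height) at the vertices and interpolated, and the clamping discussed below is omitted here):

    #include <math.h>
    #include "render.h"   /* assumed to provide Pixel, texture */

    /* Sketch: nearest-neighbor lookup. The scaling by width/height was
       done at the vertices; only the floor happens per pixel. */
    static Pixel point_sample(float u, float v)
    {
        int iu = (int)floorf(u);
        int iv = (int)floorf(v);
        return texture->texel[iu + (iv << texture->log2width)];
    }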
For bilinear interpolation, the situation gets a bit more tricky. You want to find the four nearest sample locations, and find the fractional distance to each in order to do a bilinear interpolation. Remembering that texels represent samples taken at the center of the texel area, the correct access for the four texels is:
    ( Floor(u * width - 0.5f)    , Floor(v * height - 0.5f)     )
    ( Floor(u * width - 0.5f) + 1, Floor(v * height - 0.5f)     )
    ( Floor(u * width - 0.5f)    , Floor(v * height - 0.5f) + 1 )
    ( Floor(u * width - 0.5f) + 1, Floor(v * height - 0.5f) + 1 )

And the correct fractional distance in each direction is:

    ufrac = (u * width - 0.5f) - (float) Floor(u * width - 0.5f)
    vfrac = (v * height - 0.5f) - (float) Floor(v * height - 0.5f)

Again, you can do the (u * width - 0.5f) at the vertices instead of at every pixel.
Remember that in either point sampling or bilinear interpolation, you must clamp the texture coordinates. This means that if you are trying to sample (256, -1), you should clamp that value to (255, 0). In some texturing applications, you might want to wrap the values (i.e., do a mod), but this isn't one of them. Clamping adds complications to finding the actual samples and the fractional distances, and you must do these correctly to get the edges of the texture-mapped cube to fit each other seamlessly.
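Here is a rough sketch of a bilinear fetch that follows the formulas above and clamps as described (bilinear_sample, clampi, and chan are illustrative helpers; the per-channel blend assumes the 0xRRGGBBAA packing from section 2):

    #include <math.h>
    #include "render.h"   /* assumed to provide Pixel, texture */

    /* Clamp an integer texel index into [0, max]. */
    static int clampi(int i, int max) { return i < 0 ? 0 : (i > max ? max : i); }

    /* Extract one 8-bit channel (shift = 24 for R, 16 for G, 8 for B). */
    static int chan(Pixel p, int shift) { return ((unsigned)p >> shift) & 0xFF; }

    /* Sketch: bilinear sample with u, v in [0, 1]. The clamping here is
       what keeps the cube-face edges matching up seamlessly. */
    static Pixel bilinear_sample(float u, float v)
    {
        float su = u * texture->width  - 0.5f;
        float sv = v * texture->height - 0.5f;
        int   iu = (int)floorf(su), iv = (int)floorf(sv);
        float uf = su - iu, vf = sv - iv;     /* ufrac, vfrac from above */

        int u0 = clampi(iu,     texture->width  - 1);
        int u1 = clampi(iu + 1, texture->width  - 1);
        int v0 = clampi(iv,     texture->height - 1);
        int v1 = clampi(iv + 1, texture->height - 1);

        Pixel t00 = texture->texel[u0 + (v0 << texture->log2width)];
        Pixel t10 = texture->texel[u1 + (v0 << texture->log2width)];
        Pixel t01 = texture->texel[u0 + (v1 << texture->log2width)];
        Pixel t11 = texture->texel[u1 + (v1 << texture->log2width)];

        unsigned out = 0;
        for (int shift = 24; shift >= 8; shift -= 8) {
            float top = chan(t00, shift) + uf * (chan(t10, shift) - chan(t00, shift));
            float bot = chan(t01, shift) + uf * (chan(t11, shift) - chan(t01, shift));
            out |= (unsigned)(top + vf * (bot - top)) << shift;
        }
        return (Pixel)out;
    }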
7. Scissoring.

Scissoring implements clipping by ignoring pixels that lie outside the clipping region. It is an image-space algorithm as opposed to an object-space algorithm. In its simplest form, it can be implemented by slightly modifying your scan conversion code.
Instead of writing:

    draw_pixel(x, y);

you write:

    if ((x >= 0) && (x < winWidth) && (y >= 0) && (y < winHeight))
        draw_pixel(x, y);

This, however, is extremely inefficient. The problem is that if large portions of a triangle lie outside the clipping region, then you spend time on them. You also have to do this testing for all the pixels inside the clipping region. Unfortunately, because you are standing in the middle of a cube for an environment map viewer, the faces that need to be rasterized extend very far beyond the borders of the screen. Thus, you need to do something better, based on the algorithm you are using.
Also note that clamping the input vertex locations to the range (-1, 1) is the wrong thing to do. This moves the vertices, ruining the geometric interpretation.
8. Inside Test Algorithm.

Scissoring is very easy to implement efficiently if you are using Inside Testing for scan conversion. Remember that as the first step of the algorithm, you take the bounding box of the triangle to be rasterized. You then test all the pixels inside this bounding box. In order to implement scissoring, all you have to do is make sure that the bounding box is contained within the screen.
Thus, if { xmin, xmax, ymin, ymax } hold your bounding box coordinates, all you have to do is modify this bounding box:

    xmin = MAX(xmin, 0);
    ymin = MAX(ymin, 0);
    xmax = MIN(xmax, winWidth);
    ymax = MIN(ymax, winHeight);

You will also need to implement linear interpolation across the triangle. For this linear interpolation discussion, let us assume you are doing a linear interpolation of z across the triangle. Note that z could be anything: it could be a depth value, it could be a color value (r, g, b), etc. You just want linear interpolation of some value.
Let's say that your triangle vertices are (x0, y0, z0), (x1, y1, z1), and (x2, y2, z2). We know that in three dimensions, any 3 points define a plane, and moving along this plane is equivalent to doing linear interpolation.
The plane equation is simply:
    A x + B y + C z + D = 0

We can rewrite this as:

    z = A' x + B' y + C'

Thus, given this plane equation, we can find z for any (x, y).
In order to find (A', B', C'), we can do many things. I will explain a geometric solution. The three vertices of the triangle represent a plane. Making these three vertices into two vectors and taking the cross product will give the plane normal, which is just [A, B, C, 0] in homogeneous coordinates. A' and B' can easily be derived from these. In order to get C', you just plug (x0, y0, z0) into the equation for z, and solve for C'.
Note that this is equivalent to doing Cramer's rule on a 2x2 matrix representing the vectors. An alternative algorithm is to do Gaussian elimination on this 2x2 matrix. Another alternative algorithm is to use an approach based on areas and barycentric coordinates.
Once you have your equation for z, you solve for it at the bottom left pixel, and then do incremental updates of the value (i.e., add A' every time you increase x by 1) for the rest of the pixels.
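A sketch of that geometric setup, in case it helps (plane_from_triangle and PlaneEq are illustrative names; a real rasterizer would also guard against C being zero, i.e. a triangle seen edge-on):

    /* Sketch: form two edge vectors, take their cross product to get the
       plane normal (A, B, C), then A' = -A/C, B' = -B/C, and C' comes
       from plugging vertex 0 into z = A'x + B'y + C'. */
    typedef struct { float A, B, C; } PlaneEq;   /* z = A*x + B*y + C */

    static PlaneEq plane_from_triangle(float x0, float y0, float z0,
                                       float x1, float y1, float z1,
                                       float x2, float y2, float z2)
    {
        float e1x = x1 - x0, e1y = y1 - y0, e1z = z1 - z0;
        float e2x = x2 - x0, e2y = y2 - y0, e2z = z2 - z0;
        float A = e1y * e2z - e1z * e2y;   /* cross product = plane normal */
        float B = e1z * e2x - e1x * e2z;
        float C = e1x * e2y - e1y * e2x;

        PlaneEq p;
        p.A = -A / C;
        p.B = -B / C;
        p.C = z0 - p.A * x0 - p.B * y0;    /* solve for C' at vertex 0 */
        return p;
    }

Per pixel you then just add p.A as x increases by 1, and p.B as y increases by 1.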
9. Scanline Algorithm (Crow's Algorithm).
First, let me give a quick reminder on how to do linear interpolation. For a given interpolant, say z, you first do linear interpolation across the edges using the values at the vertices, and then you do linear interpolation across a scanline using the values at the edges.
Now, don't forget to do fractional updates of any values you are interpolating. Look at this diagram, for example:
    . . . . / . . . . . \ . . .

The '/' represents the left edge, say, and the '\' the right edge, and a '.' represents a pixel center at which samples are to be taken. If you are doing linear interpolation of any value, you must update the value by some fractional amount to get the interpolant to lie on the first sample. Then you can use finite differencing to incrementalize the computation. Note that this fractional update is required in both the X and Y directions, and is required for both interpolants and edge information.
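For instance, along a single scanline the fractional update might look like this sketch (raster_span, xLeft, zLeft, and dzdx are illustrative names; samples are taken at integer coordinates, per the -0.5 shift in section 5):

    #include <math.h>

    /* Sketch: xLeft and zLeft come from stepping the left edge, dzdx is
       the per-pixel step of the interpolant. Advance z from the edge
       crossing to the first sample, then step by whole pixels. */
    static void raster_span(float xLeft, float xRight, float zLeft, float dzdx)
    {
        int   xStart = (int)ceilf(xLeft);                 /* first sample right of the edge */
        int   xEnd   = (int)ceilf(xRight);                /* one past the last sample       */
        float z      = zLeft + (xStart - xLeft) * dzdx;   /* fractional prestep             */

        for (int x = xStart; x < xEnd; x++) {
            /* ... write the pixel using z here ... */
            z += dzdx;                                    /* incremental update */
        }
    }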
Here are a few things you can do for optimizing your code:
    . . . . / . . | . . \ . . .

The '/' represents the left edge, the '\' represents the right edge, a '.' represents integer grid values, and the '|' represents the left clipping edge. That is, everything to the left of the '|' needs to be scissored. We need to make a quick update from the '/' to the '|'. But we already have the machinery for this from doing fractional updates. The only difference is that we have to detect that our first sample location is to the left of the '|', and that our "fraction" from the fractional update is no longer a fraction.
10. Texture coordinate interpolation.
Given three vertices of the form (x, y, z, w, u, v), first do the following on each vertex:
    U = u / w;
    V = v / w;
    Q = 1 / w;

Then, put (U, V, Q) through your linear interpolation machinery. For any given pixel, the perspective-corrected texture map coordinate will be ( U / Q, V / Q ).
Note that you can do any affine transformation (i.e., scale, translate, rotate, etc.) on the original (u, v)'s, and all this will still work. This means that you can put (u, v) in the appropriate range at the vertices before you begin interpolation and still get the same results you would have gotten by putting (u, v) in the appropriate range at every pixel.
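A per-pixel sketch of the divide (perspective_texcoord is an illustrative name):

    /* Sketch: recover perspective-correct texture coordinates at a pixel,
       given the linearly interpolated U = u/w, V = v/w, Q = 1/w. */
    static void perspective_texcoord(float U, float V, float Q,
                                     float *u, float *v)
    {
        float w = 1.0f / Q;   /* one divide per pixel */
        *u = U * w;           /* equals U / Q */
        *v = V * w;           /* equals V / Q */
    }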