(B) At what rate (bits per second) does the video controller
need to read from the frame buffer?
(B) If you could build a shadow mask with a pitch of 0.16mm,
how small could you make a color TV CRT display (width and height)
while maintaining a 350x350 dot resolution?
How much memory would be required to store
a frame buffer that provided 24-bit color directly?
Short derivation. The decision to be made is between N and NE.
The midpoint to be tested is at (xp+0.5, yp+1):

Quantity          | Variable | Expression       | Value
------------------|----------|------------------|-------------------------
Decision variable | dp       | F(xp+0.5, yp+1)  | a(xp+0.5) + b(yp+1) + c
Initial value     | d0       | F(x0+0.5, y0+1)  | a/2 + b
N chosen          | dp+1     | F(xp+0.5, yp+2)  | a(xp+0.5) + b(yp+2) + c
                  | incrN    | -                | b (= -dx)
NE chosen         | dp+1     | F(xp+1.5, yp+2)  | a(xp+1.5) + b(yp+2) + c
                  | incrNE   | -                | a+b (= dy-dx)
Here is code for lines with slopes between 1 and infinity:
void MidpointLine (int x0, int y0, int x1, int y1, int value)
{
    int dx = x1 - x0;
    int dy = y1 - y0;
    int d = dy - 2 * dx;           /* initial value of d */
    int incrN = -2 * dx;           /* increment used for a move to N */
    int incrNE = 2 * (dy - dx);    /* increment used for a move to NE */
    int x = x0;
    int y = y0;

    WritePixel (x, y, value);      /* the start pixel */
    while (y < y1) {
        if (d <= 0) {              /* choose NE */
            d += incrNE;
            x++;
            y++;
        } else {                   /* choose N */
            d += incrN;
            y++;
        }
        WritePixel (x, y, value);  /* the selected pixel */
    }
}
(B) If d > 0, which pixel (E or NE) is highlighted next?
An increase in y will increase d. Based on this test, d > 0
implies that the midpoint is above the line (the line passes below the
midpoint), and so we should choose E.
(C) An iterative algorithm can be used to increment the value of
the decision variable at each step. Let d' be the decision variable
for the point following (xp, yp). Define deltaE
as (d'-d) when pixel E is chosen next and deltaNE as (d'-d) when pixel
NE is chosen next. Write expressions for deltaE and deltaNE as
functions of a, b, c, xp, and yp.
deltaE  = (d'-d)E  = F(xp+2, yp+0.5) - F(xp+1, yp+0.5) = b + c(2xp + 3)
deltaNE = (d'-d)NE = F(xp+2, yp+1.5) - F(xp+1, yp+0.5) = b + c(2xp + 3) + 1
(D) Although the first order differences deltaE and deltaNE
depend on xp, the second order differences depend only on
coefficients a, b, and c. Let d'' be the decision parameter for the
second point following (xp, yp). The second
order difference is defined as [(d''-d') - (d'-d)]. Find the changes in deltaE
and deltaNE when E or NE is chosen as the second point (4 cases in
total). Hint: this is very similar to the midpoint circle algorithm
in Section 3.3.2.
E chosen: next pixel is (xp+1, yp)
(deltaEnew - deltaE)   = [b + c(2(xp+1) + 3)] - [b + c(2xp + 3)] = 2c
(deltaNEnew - deltaNE) = [b + c(2(xp+1) + 3) + 1] - [b + c(2xp + 3) + 1] = 2c

NE chosen: next pixel is (xp+1, yp+1)
(deltaEnew - deltaE)   = [b + c(2(xp+1) + 3)] - [b + c(2xp + 3)] = 2c
(deltaNEnew - deltaNE) = [b + c(2(xp+1) + 3) + 1] - [b + c(2xp + 3) + 1] = 2c

All four cases give 2c, because the deltas depend only on xp and both E
and NE advance x by 1.
Assume that we have only two intensity values: on and off, and our application wants to draw a line of width w. The equation for the line is ax + by + c = 0. Each processor knows its x and y location. Coefficients a, b, and c are broadcast to all processors.
(A) Describe (in words) a simple algorithm that a processor
can use to determine whether the pixel it represents should be on or
off. Do not worry about clipping the line at its endpoints. One
solution would be for each processor to compute its distance from the
ideal (zero width) line, and to highlight itself if and only if this
distance is less than w/2.
(B) Derive an expression as a function of w, a, b, c, x, and y
that must be true for the pixel at (x,y) to be turned on. You may
want to use the expression for distance to the line:
(ax + by + c) / sqrt(a^2 + b^2)
The pixel at (x, y) should be on when
|ax + by + c| / sqrt(a^2 + b^2) <= w/2,
i.e., when |ax + by + c| <= (w/2) * sqrt(a^2 + b^2).
(C) Computations performed on the main computer are usually
much faster than computations performed on the individual processors.
It is also expensive to broadcast information to the processors. What
computations could you do on the main computer that would reduce the
amount of information transferred and reduce the amount of work each
processor has to do?
If the speed difference is very large, reformulate the inequality so
that the main computer does the sqrt:
Let g = [(w/2) * sqrt(a^2 + b^2) - c] be computed on
the main computer and broadcast a, b, and g.
If the speed difference is not as great, and we wish to avoid
fractional arithmetic and the sqrt operation, it might be better to
reformulate the inequality with a squared test:
Let g = w^2 * (a^2 + b^2) be computed on
the main computer and broadcast a, b, c, and g.
/* get circle-registered, positive x value */
px = x - xc;
while (px < 0)
    px += M;

/* get circle-registered, positive y value */
py = y - yc;
while (py < 0)
    py += N;

/* write the appropriate pattern element
   (assume P is an M x N array of color values) */
WritePixel(x, y, P[px % M][py % N]);
To get a thick line, one option is to use the "replicated pixels"
technique. For lines with |slope| <= 1, the midpointLine algorithm
iterates over columns. This means that for each column, the extra
pixels should be stacked above and below the pixel selected by the
midpointLine algorithm. To incorporate the replicating pixels
technique into the midpointLine algorithm for lines where |slope| <=
1, replace the following call:
WritePixel (x, y, value); /* the selected pixel */
with this code, where the lineMask pattern is used to make the line
appear dashed:
int lineMask[8] = {0,0,0,0,1,1,1,1};
if (lineMask[x % 8]) {
WritePixel(x, y, value);
WritePixel(x, y+1, value);
WritePixel(x, y+2, value);
WritePixel(x, y-1, value);
}
For lines with |slope| > 1, the midpointLine algorithm iterates
over rows, so pixels are duplicated in rows. Note that the mask must
now be indexed by y, since y is the variable that advances on every
iteration. The corresponding code is:
if (lineMask[y % 8]) {
WritePixel(x, y, value);
WritePixel(x+1, y, value);
WritePixel(x+2, y, value);
WritePixel(x-1, y, value);
}
Pros: Both the use of the line mask and the technique of
replicating pixels are very fast.
Cons: The apparent thickness of the line varies with its slope: a
45-degree line comes out thinner (by a factor of about sqrt(2)) than a
horizontal or vertical line drawn with the same number of replicated
pixels. The ends of the line are also always vertical or horizontal,
rather than perpendicular to the line direction.
Another way to do fast weighted area sampling is to subsample the area covered by each pixel and use a pixel-weighting mask located at each pixel to determine the desired pixel intensity value...
(A) Define a pixel weighting mask to approximate a cone filter
with a radius equal to 1 grid spacing. Assume that pixel intensity
values can range from 0 to 256.
Here is one approximation:
(B) Estimate the intensity value for the pixel shown above
using your filter.
The pixel intensity is set to the sum of the subpixel intensities; in
this case, 78.
(C) What are some of the pros and cons of this approach
compared to the Gupta-Sproull algorithm?
Pros: You can create many different types of masks very easily, by
simply adjusting the weighting applied to each subpixel. This would
allow control, for example, over the amount of blur added to an
image.
Cons: The major disadvantage is that pixel subsampling is computationally expensive.
(A) Why does this occur?
When the midpointLine algorithm is used, the number of pixels highlighted for a line depends only on the number of rows or the number of columns spanned by that line. For lines with |slope| <= 1, for example, the number of pixels highlighted depends on (xmax - xmin). Lines that are at a 45 degree angle will have the same number of pixels highlighted as horizontal lines having endpoints at the same x values. The 45 degree line, however, will be sqrt(2) times as long as the horizontal line. The pixels of the angled line will be more stretched out, making the line appear dimmer.
(B) Does weighted area sampling fix the problem? Why or why not?
Unweighted area sampling is sufficient to fix this problem by making intensity of the line proportional to the area covered by that line and thus proportional to its length.
Weighted area sampling has a similar effect, because the area of overlap of a pixel with the line is used in computing pixel intensity. There will be some variation in intensity of a line, depending on the exact location and angle of the line, but this variation will be very small compared to that introduced by the midpoint scan line algorithm.
We also showed that two scale matrices with scale parameters (sx1, sy1) and (sx2, sy2) can be multiplied to obtain a matrix that scales each point by (sx1*sx2, sy1*sy2).
(A) Rederive the form for the rotation matrix as a function of
angle theta.
(B) Show that two successive rotation matrices representing
rotations of theta1 and theta2 can be multiplied to yield a rotation
matrix representing rotation by (theta1 + theta2).
PART 1:
PART 2:
Plug in endpoints:
Subtract the first equation from the second:
Plug this result into the first equation:
Set a=10 to obtain integer values:
The line equation is:
(A) Draw the four stages of the Sutherland-Hodgman clipping algorithm as the polygon shown below is clipped by the right, top, left, and bottom clipping planes. Number the vertices of the polygon in counterclockwise order at each stage.
(B) The result has extra edges. Describe an algorithm for
cleaning up these extra edges to create two separate polygons.
Here is one possible algorithm, based on identifying independent
loops: walk the output vertex list in order; whenever the current
vertex coincides with a vertex visited earlier in the walk, the
vertices between the two occurrences form a closed loop, so split
them off as a separate polygon and continue walking the remaining
list.
These are the polygons we want.