Hidden Lines and Surfaces
also called visible line and surface determination (chapter 13)
we have polygon faces defined for our object; we need to draw these faces so that faces in front overwrite faces in the back
this is a central problem in graphics and computational geometry, and the challenge is speed!
various algorithms have been devised, suited to different needs and devices
o eg. a technique for a video display may differ substantially from one for a pen plotter
algorithms fall into 2 categories:
1. image-precision (device-precision) algorithms: work on a pixel basis by determining visibility at each pixel
o "sampling": samples object geometry at device-dependent increments
o algorithm complexity is proportional to device resolution
o because of sampling, can get aliasing problems (ie. "jagged line" errors)
2. object-precision algorithms: determine visibility from the geometry of the model
o "continuous": finds answers as precise as possible
o compares n graphical objects with each other: n*n complexity
o however, the mathematics is more complex, so this is usually slower than image precision
Types of algorithms
1. Visible-line determination: examine edge geometry, and determine which edge segments are visible and which are hidden
2. Z-buffer: a device-precision algorithm that records the visible object found at each pixel
3. List-priority algorithms: object-precision algorithms that sort objects such that, when drawn in that order, a correct rendering of hidden surfaces results
4. Scan-line algorithms: image-precision algorithms that determine visibility one scan line at a time
5. Area subdivision algorithms: divide-and-conquer applied to areas of the image
6. Ray tracing: determine the visible object by tracing a ray from the viewer's eye through each pixel into the scene
... and others
Coherence
some general principles apply to ALL computer graphics algorithms
coherence: the degree to which parts of an object exhibit local similarities
most parts of the objects in a drawing are more similar than not
o if you account for this, then changes can be done incrementally & therefore quickly
face coherence: face shading is gradual across a face
scan line coherence: each scan line typically changes little from those above and below it
area coherence: adjacent pixels are typically on the same face
frame coherence: in animation, each frame differs very slightly from the previous frame
many hidden line techniques exploit these facts
Hidden lines
if you are drawing wireframes, then the hidden line problem is to find the lines or line segments that are obstructed by interceding polygons, and not draw them in full or in part
eg. functions of 2 variables: y = f(x,z)
simple approach: for constant z values, compute values at x intervals, and draw lines between them
o need to remove the line portions that are obstructed
quick & dirty: if on a CRT, then start at the minimum z value, compute the y's for the x's at that z, rotate & convert to 2D (screen coordinates), and draw it
o works from screen top to bottom, left to right
o but each time you draw a new y value, erase everything BELOW that point
o effectively removes obstructed parts of the drawing
o disadvantage: won't work for plotters, and won't work for x-y meshes
better: horizon line algorithm: keep track of the minimum and maximum y coordinates drawn so far in a "silhouette array"
o draw from front to back
o when you draw a new line segment, see what portions of it lie outside these min and max values: those are the parts you see
o don't draw any part of a line within these boundaries
o also, need to set new y min and y max for the parts of lines outside the boundaries
o problem: aliasing - when you use device y coordinates, aliasing can occur
o solution is to use an image-precise table instead
Example
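A minimal sketch of the horizon-line (silhouette array) idea, drawing curves of constant z from front to back. The surface f, the simplified projection, the screen width, and the draw_point stub are assumptions for illustration, not part of the notes.

    #include <stdio.h>
    #include <math.h>

    #define WIDTH 640               /* screen width in pixels (assumed) */

    /* stand-in for an actual drawing call (device dependent) */
    static void draw_point(int x, int y) { printf("plot %d %d\n", x, y); }

    /* example surface y = f(x, z); any two-variable function will do */
    static double f(double x, double z) { return 50.0 * cos(x * 0.02) * cos(z * 0.02); }

    int main(void)
    {
        double ymin[WIDTH], ymax[WIDTH];          /* the "silhouette" arrays */
        int x;

        for (x = 0; x < WIDTH; x++) {             /* nothing drawn yet */
            ymin[x] =  1e30;
            ymax[x] = -1e30;
        }

        /* draw curves of constant z, from front (nearest) to back */
        for (double z = 0.0; z < 400.0; z += 20.0) {
            for (x = 0; x < WIDTH; x++) {
                /* project (x, f, z) to a screen y; a real program would
                   rotate and apply the full viewing transform here */
                double y = f((double)x, z) - 0.3 * z + 300.0;

                /* visible only if outside everything drawn so far */
                if (y > ymax[x] || y < ymin[x]) {
                    draw_point(x, (int)y);
                    if (y > ymax[x]) ymax[x] = y;   /* raise upper horizon */
                    if (y < ymin[x]) ymin[x] = y;   /* lower lower horizon */
                }
            }
        }
        return 0;
    }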
Hidden lines
can also do this using constant x and z increments: Example
o split the drawing into sections left & right of the most nearly vertical line, and draw from back to front
finally, can combine the two to create a grid effect: Example
o to combine, draw horizontal lines left to right, but draw the vertical lines between each horizontal section you come to
Hidden surfaces
hidden line algorithms only work for strict 2-variable functions
the more general problem is when you have assorted volumes in 3D space
o these volumes are not necessarily mathematical functions
general problem: given points P1=(x1,y1,z1), P2=(x2,y2,z2), are P1 and P2 on the same projector to the viewer's eye?
o and if so, which one is closer (ie. the one the eye sees) and should be drawn?
note that this takes into account perspective, as well as colouring, shading, etc, since we draw the actual colour of that object point
hidden surface detection is done after the model is transformed and perspective applied
o parallel projection: compare the projections (x1, y1) and (x2, y2)
o perspective projection: apply the perspective transformation, then compare the projected points (x1', y1') and (x2', y2')
different algorithms attempt to make this comparison as efficient as possible
o eg. prune the number of points/surfaces to compare
Efficiency techniques
a) perform volume clipping: then there are fewer objects to compare
b) check if object extents overlap
o object extent: the minimum and maximum screen X and Y values of an object
o a quick way to rule out hidden line checking for a pair of objects (a minimal test is sketched below)
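A minimal sketch of the extent-overlap test in (b); the Extent struct and field names are assumed here for illustration.

    typedef struct {
        double xmin, xmax;   /* screen-space extent of an object */
        double ymin, ymax;
    } Extent;

    /* If the screen extents of two objects do not overlap, neither can
       obscure the other, so no further hidden-surface test is needed. */
    int extents_overlap(const Extent *a, const Extent *b)
    {
        if (a->xmax < b->xmin || b->xmax < a->xmin) return 0;  /* disjoint in x */
        if (a->ymax < b->ymin || b->ymax < a->ymin) return 0;  /* disjoint in y */
        return 1;                                              /* may overlap   */
    }

Because the test can only rule pairs out, a pair that passes it still has to go through the full hidden-surface comparison.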
Efficiency:
c) back-facing polygon elimination:
polygon normal: the perpendicular ray (direction) from the plane containing the polygon
the equation of this plane determines which side of the plane the normal points to
in graphics, it is useful to make the normal point towards the front of the polygon
o if an object is defined in terms of planar polygons, and the object is closed, then we can disregard drawing polygons whose normals point away from the user (negative values)
o graphics routines: compute the normal for the polygon; if it points towards the user then draw it; otherwise ignore it
Backface elimination
this can be expensive: have to compute the normal's direction with respect to the user's eye
however, there's a trick: when you use simple convex planar polygons, you can use the right-hand rule to determine the direction of the normal
define the polygon edges in counterclockwise order
when you wrap your right hand's fingers in this direction, the thumb points along the front normal
then, the graphics routine simply checks whether the polygon being drawn appears in counterclockwise order on the screen
o --> if yes, it is front-facing; if not, it is back-facing
OpenGL:
o glFrontFace(GL_CCW); (or GL_CW) specifies counterclockwise or clockwise front faces
o glCullFace(GL_BACK); (or GL_FRONT) specifies back- or front-face culling
o glEnable(GL_CULL_FACE); turns backface elimination on
o significant speedup on O2's: cuts the number of polygons to process roughly in half
o make sure that you define your polygons in counterclockwise order if GL_CCW is set
o (if you see very little of the object, you've defined the polygons in clockwise order)
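A sketch of the winding-order check in software terms: for a convex polygon already projected to screen coordinates (y up assumed), the sign of its signed area says whether its vertices appear counterclockwise (front facing, matching GL_CCW) or clockwise (back facing). The Vec2 type and function names are illustrative, not from the notes.

    typedef struct { double x, y; } Vec2;   /* projected (screen-space) vertex */

    /* Signed area of a simple polygon (shoelace formula):
       positive when the vertices are in counterclockwise order. */
    double signed_area(const Vec2 *v, int n)
    {
        double a = 0.0;
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n;
            a += v[i].x * v[j].y - v[j].x * v[i].y;
        }
        return 0.5 * a;
    }

    /* With counterclockwise front faces (GL_CCW), a polygon whose projected
       vertices come out clockwise is back facing and can be culled. */
    int is_back_facing(const Vec2 *v, int n)
    {
        return signed_area(v, n) < 0.0;
    }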
Z Buffering
z buffering (or depth buffering) is one of the simplest hidden surface algorithms
via hardware or software, an extra "z" buffer is maintained along with the frame buffer; it keeps track of the nearest object at each pixel location
initialized to the minimum z value (eg. most negative)
then, when an object is being drawn, if its z coordinate at a point is greater (more positive, ie. less distance to the viewer) than the z buffer value, it is drawn and the new z coordinate is saved; otherwise, it is not drawn
if a line segment in 3D is being drawn, the intermediate z values between the endpoint z coordinates are interpolated: linear interpolation works for planar polygons, and z can be computed for more complex surfaces
some problems:
o aliasing: precision is limited by the size of the z buffer word per pixel; can have imprecision at extremely far distances --> important to have as small a viewing volume as possible
it is possible to write into and manipulate z buffer memory directly, eg. for cursors
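A minimal software sketch of the z buffer comparison, using the convention above (larger z = closer to the viewer). The buffer sizes and the clear_buffers()/plot() names are assumptions for illustration.

    #include <float.h>

    #define W 640
    #define H 480

    static float    zbuf[H][W];     /* depth of nearest object seen so far */
    static unsigned colour[H][W];   /* frame buffer                        */

    void clear_buffers(unsigned background)
    {
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                zbuf[y][x] = -FLT_MAX;    /* "minimum" z: farthest possible */
                colour[y][x] = background;
            }
    }

    /* Called once per pixel covered by a polygon, with z interpolated
       (linearly, for planar polygons) across the polygon's surface. */
    void plot(int x, int y, float z, unsigned c)
    {
        if (z > zbuf[y][x]) {   /* closer than what is already there */
            zbuf[y][x] = z;
            colour[y][x] = c;
        }
    }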
Z buffer
OpenGL: Z buffer
SGIs support the z buffer directly: the O2 has an integrated-memory z buffer (older systems used software)
to initialize (GLUT):
glClear(GL_DEPTH_BUFFER_BIT);
and that's it! you must remember to clear the z buffer whenever you draw a new frame
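In a typical GLUT program the depth buffer also has to be requested in the display mode and depth testing enabled (the notes may assume this is set up elsewhere). A minimal sketch, with the window size, title, and drawing itself as placeholders:

    #include <GL/glut.h>

    static void display(void)
    {
        /* clear colour AND depth every frame, or stale depths remain */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        /* ... draw the scene here ... */

        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        /* ask for a depth buffer along with the colour buffers */
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutInitWindowSize(512, 512);
        glutCreateWindow("z buffer demo");

        glEnable(GL_DEPTH_TEST);     /* turn the z test on */

        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }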
List-Priority algorithms
these algorithms sort objects into an ordering that, when drawn, causes hidden surface removal
if objects don't overlap, then simply sorting on the "z" coordinate and drawing them from farthest to nearest z will work
however, in general, objects can overlap, so more complex criteria are needed
Depth-sort
1. sort all polygons according to the smallest z coordinate of each (ie. distance to the viewer)
2. resolve overlaps (split polygons if necessary)
3. scan convert the polygons from farthest to nearest
(a simplified sketch of steps 1 and 3 follows at the end of this subsection)
the algorithm works immediately for objects that lie in planes of constant z (eg. windowing systems), or those whose z extents do not overlap
o --> step 2 is not required then
otherwise, step 2 must test whether pairs of polygons actually obscure one another; two polygons cannot obscure each other if any of these tests are true:
o i) the polygons' x extents do not overlap
o ii) the polygons' y extents do not overlap
o iii) their projection extents on the x-y plane do not overlap
o iv) one polygon lies entirely on the far side of the other's plane, relative to the viewer
and if none of these tests hold, split the polygons up using clipping
with simple objects and meshes (eg. closed solid volumes), this technique is often quite efficient, because objects can be drawn from furthest to closest without worrying about overlapping polygons.
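To make steps 1 and 3 concrete (ignoring the overlap resolution of step 2), here is a minimal painter's-algorithm sketch. The Polygon struct, its precomputed depth field, and the placeholder draw_polygon scan converter are assumptions for illustration.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        double depth;   /* distance of the polygon's farthest point from the viewer */
        int id;         /* stand-in for vertices, colour, etc. */
    } Polygon;

    /* placeholder for the real scan converter */
    static void draw_polygon(const Polygon *p) { printf("draw polygon %d\n", p->id); }

    /* sort so that farther polygons come first and are drawn first */
    static int farther_first(const void *a, const void *b)
    {
        double da = ((const Polygon *)a)->depth;
        double db = ((const Polygon *)b)->depth;
        return (da < db) - (da > db);
    }

    void painters_algorithm(Polygon *polys, int n)
    {
        qsort(polys, n, sizeof(Polygon), farther_first);
        for (int i = 0; i < n; i++)
            draw_polygon(&polys[i]);   /* nearer polygons overwrite farther ones */
    }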
Area subdivision
idea: try to make an easy decision about which polygon is visible in a section of the image (area coherence)
o if a decision cannot be made, subdivide the area recursively until one can be made
o can be device-precise (stop at pixel resolution) or object-precise (stop at an error range)
recursive algorithm: divide the image into 4 equal areas, then for each area (see tests below):
o 1. are all polygons disjoint from the area? (test d) yes --> display the background colour
o 2. only one intersecting or contained polygon? (tests b, c) yes --> fill with the background colour, then draw the contained polygon or the intersecting portion
o 3. one single surrounding polygon, no intersecting or contained polygons? (test a) yes --> draw the area with that polygon's colour
o 4. more than one polygon is intersecting, contained in, or surrounding the area, but only one polygon surrounds the area and is in front of the others? (test a) yes --> draw the area with that front polygon's colour
o ELSE --> subdivide the area into 4 equal areas and recurse (a skeleton of this recursion is sketched below)
4 test cases (shaded polygon is to be rendered):
clipping and drawing the polygons at step 2 can be done at an object-precise or image-precise level
quitting: if none of the test cases has terminated the recursion, you can stop at either image-precise or device-precise resolution
o eg. if stopping at pixel resolution, find the nearest polygon at the centre of the area and colour the pixel with that polygon's colour
o eg. for antialiasing, you could compute the relative area covered by the different polygons and the background, and give the pixel an average colour
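A skeleton of the recursive subdivision above, stopping at pixel resolution. The classification helpers (classify_area, fill_background, draw_single_polygon, draw_surrounding_polygon, draw_nearest_polygon_at_centre) stand in for tests a-d against the polygon list and are stubs here, not the notes' API; SURROUNDED covers cases 3 and 4.

    #include <stdio.h>

    typedef struct { int x, y, w, h; } Area;

    /* possible outcomes of the per-area tests described above */
    enum decision { EMPTY, ONE_POLYGON, SURROUNDED, TOO_COMPLEX };

    /* placeholders for the real tests against the polygon list */
    static enum decision classify_area(Area a)         { (void)a; return TOO_COMPLEX; }
    static void fill_background(Area a)                { printf("bg  %d %d %dx%d\n", a.x, a.y, a.w, a.h); }
    static void draw_single_polygon(Area a)            { (void)a; }
    static void draw_surrounding_polygon(Area a)        { (void)a; }
    static void draw_nearest_polygon_at_centre(Area a)  { printf("pix %d %d\n", a.x, a.y); }

    static void warnock(Area a)
    {
        switch (classify_area(a)) {
        case EMPTY:        fill_background(a);                          return;
        case ONE_POLYGON:  fill_background(a); draw_single_polygon(a);  return;
        case SURROUNDED:   draw_surrounding_polygon(a);                 return;
        case TOO_COMPLEX:  break;                  /* cannot decide: subdivide */
        }

        if (a.w <= 1 && a.h <= 1) {                /* pixel resolution reached */
            draw_nearest_polygon_at_centre(a);
            return;
        }

        /* divide the area into four quadrants and recurse on each */
        int hw = a.w > 1 ? a.w / 2 : 1;
        int hh = a.h > 1 ? a.h / 2 : 1;
        Area q[4] = {
            { a.x,      a.y,      hw,       hh       },
            { a.x + hw, a.y,      a.w - hw, hh       },
            { a.x,      a.y + hh, hw,       a.h - hh },
            { a.x + hw, a.y + hh, a.w - hw, a.h - hh },
        };
        for (int i = 0; i < 4; i++)
            if (q[i].w > 0 && q[i].h > 0)
                warnock(q[i]);
    }

    int main(void)
    {
        Area screen = { 0, 0, 8, 8 };    /* tiny screen for demonstration */
        warnock(screen);
        return 0;
    }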
Ray Tracing
for each pixel, compute the colour that the viewer's eye will see in the scene
o hidden surface removal is a by-product of this
ray tracing is also effective for photorealistic rendering
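A minimal ray-casting sketch showing how hidden surface removal falls out of the per-pixel test: the ray from the eye through each pixel is intersected with every object, and only the nearest hit is kept. The two-sphere scene, the tiny text-mode image, and the helper names are purely illustrative.

    #include <stdio.h>
    #include <math.h>

    typedef struct { double x, y, z; } Vec3;
    typedef struct { Vec3 centre; double radius; int colour; } Sphere;

    static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
    static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    /* ray-sphere intersection: nearest positive t, or -1 if the ray misses */
    static double hit_sphere(Vec3 origin, Vec3 dir, const Sphere *s)
    {
        Vec3 oc = sub(origin, s->centre);
        double a = dot(dir, dir);
        double b = 2.0 * dot(oc, dir);
        double c = dot(oc, oc) - s->radius * s->radius;
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0) return -1.0;
        double t = (-b - sqrt(disc)) / (2.0 * a);
        return t > 0.0 ? t : -1.0;
    }

    int main(void)
    {
        enum { W = 40, H = 20 };
        Sphere scene[2] = {
            { { 0.0, 0.0, -5.0 }, 1.5, 1 },   /* nearer sphere  */
            { { 1.0, 0.5, -9.0 }, 2.5, 2 },   /* farther sphere */
        };
        Vec3 eye = { 0.0, 0.0, 0.0 };

        for (int py = 0; py < H; py++) {
            for (int px = 0; px < W; px++) {
                /* ray from the eye through this pixel on a simple image plane */
                Vec3 dir = { (px - W / 2) / (double)W,
                             (H / 2 - py) / (double)H, -1.0 };

                int colour = 0;                 /* background */
                double nearest = 1e30;
                for (int i = 0; i < 2; i++) {   /* keep only the nearest hit:
                                                   this IS the hidden surface removal */
                    double t = hit_sphere(eye, dir, &scene[i]);
                    if (t > 0.0 && t < nearest) { nearest = t; colour = scene[i].colour; }
                }
                putchar(" .#"[colour]);
            }
            putchar('\n');
        }
        return 0;
    }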
Consider the depth-buffer comparison again, this time in the convention where z increases with distance from the viewer: if the new z is greater than the previously stored value, the new polygon is at a farther distance than the earlier one and no changes need be made; the pixel continues to represent the previous polygon. At the end of processing all the polygons, every pixel holds the intensity value of the object it should display, and the result can be displayed.

This simple algorithm, as can be expected, works in image space. The scene should be properly projected and clipped before the algorithm is used. The basic limitation of the algorithm is its computational intensiveness: on a 1024 x 1024 screen it will have to evaluate the status of each of these pixels in the limiting case, and in its present form it does not use coherence or other geometric properties to reduce the computational effort.

To reduce storage, the screen is sometimes divided into smaller regions, say 50 x 50 or 100 x 100 pixels; the computations are made for each of these regions, the region is displayed on the screen, and then the next region is undertaken. This can be both advantageous and disadvantageous: such a division of the screen requires each polygon to be processed for each of the regions, increasing the computational effort, but when smaller regions are considered it is possible to make use of various coherence tests, reducing the number of pixels that must be handled explicitly.

Properties that help in reducing the effort

Several features can be used to identify the polygons that are totally or partially covered, so that the actual effort of eliminating hidden surfaces is reduced. A few popularly used tests are as follows.

i) Use of geometric properties: the depth-buffer algorithm reduces the objects to a series of polygons and tests them for visibility. A polygon, in geometric terms, is a surface represented by the equation ax + by + cz + d = 0 for any point (x, y, z) that lies on the surface (or plane, in geometric terminology). A point that does not satisfy the equation lies off the plane, and a point that gives a negative value for the equation lies on the back side of the plane (with the normal (a, b, c) chosen to point towards the viewer's side). Back faces, like the back of a cube or a pyramid or some similar shape, can be removed from the calculations entirely, reducing the effort considerably.

ii) Overlap tests: common sense gives us one simple idea. An object can obscure another only if one of them is at a farther distance than the other; obviously, two objects standing side by side cannot obscure each other.

iii) Even if they are at unequal distances, they can obscure each other only if their x or y extents overlap. For example, the two polygons P1 and P2 in the figure cannot obscure each other irrespective of how far or near each of them is. The minimax tests guarantee just this: if the minimum x coordinate of one polygon is larger than the maximum x coordinate of the other (P2 and P1 respectively in the figure), that is, the maximum x coordinate of one is less than the minimum x coordinate of the other, then the two polygons cannot overlap no matter what their z coordinates are.
This test allows us to trivially avoid testing a few pairs of polygons for obscuring each other. However, it should be noted that failure to pass the minimax test does not always imply that the polygons obscure each other; consider the case in the second figure, where the extents of P1 and P2 overlap yet the polygons still do not obscure each other. Testing such instances needs more elaborate computations.
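A small sketch of test (i): evaluate the plane equation ax + by + cz + d at the viewpoint. Assuming the plane's normal (a, b, c) is chosen to point out of the closed solid, a non-positive value means the viewpoint is behind the plane, so the polygon lying in it is a back face. The Plane struct and function name are illustrative.

    typedef struct { double a, b, c, d; } Plane;   /* ax + by + cz + d = 0 */

    /* Evaluate the plane equation at the viewpoint (ex, ey, ez).  If the
       outward normal (a, b, c) points away from the viewer, the value is
       non-positive and the polygon in this plane is a back face. */
    int is_back_face(const Plane *p, double ex, double ey, double ez)
    {
        double value = p->a * ex + p->b * ey + p->c * ez + p->d;
        return value <= 0.0;
    }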
Scan line coherence algorithms

Scan line algorithms solve the hidden surface problem one scan line at a time. They traverse the picture space from top to bottom, one line at a time, removing all the hidden surfaces along that scan line. The simplest of them can be the depth-buffer algorithm itself: since the algorithm has to consider every pixel, it can take care of the pixels along each scan line and then go on to the next line, and so on.

Scan line coherence algorithm: for each scan line, perform the following steps.
a. For every pixel on the scan line, set the depth value to 1.0 and the intensity to the background value.
b. For each polygon in the scene, find all pixels of the scan line under consideration that lie within the polygon. For each of them:
   i. find the depth z of the polygon at that point;
   ii. if z < depth[x], set depth[x] to z and the intensity to the intensity of the polygon.
c. Once all polygons have been taken care of, the pixels contain the intensity values that are to be displayed.
This algorithm, when used in conjunction with Y-X scan conversion, forms the simplest of the scan line coherence algorithms.

Span coherence

Another property of coherence that can be made use of in scan conversion is the one-dimensional form of area coherence, called span coherence: if a pixel is inside a polygon, its neighbours also lie inside the polygon, up to the ends of a span. Once the span is detected, all pixels within the span can be set to the intensity value of the polygon, and the next comparison can take place at the end of the span. This reduces the computation by a very large amount, especially if the number of polygons is limited.

The concept of spans can be seen in a simple way from the following example: in 3-dimensional space, each scan line corresponds to a plane. That is, each scan line is for a particular value of y, and holding this y constant over different values of x and z forms a plane.
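A minimal sketch of the per-scan-line depth buffer described above, assuming each polygon's coverage of the current scan line has already been reduced to a span (a pixel run with depth interpolated between its ends). The Span struct, WIDTH, and scan_line() are simplifying assumptions, not from the source.

    #define WIDTH 640

    typedef struct {
        int xl, xr;          /* pixel run covered by a polygon on this scan line */
        double zl, zr;       /* depth at the two ends, interpolated in between   */
        int intensity;
    } Span;

    /* process one scan line: only a one-line depth array is needed */
    void scan_line(const Span *spans, int nspans, int *out_intensity)
    {
        double depth[WIDTH];

        for (int x = 0; x < WIDTH; x++) {       /* step (a): initialise the line  */
            depth[x] = 1.0;                     /* 1.0 = farthest representable z */
            out_intensity[x] = 0;               /* background intensity           */
        }

        for (int s = 0; s < nspans; s++) {      /* step (b): every polygon's span */
            const Span *sp = &spans[s];
            for (int x = sp->xl; x <= sp->xr && x < WIDTH; x++) {
                double t = (sp->xr == sp->xl) ? 0.0
                         : (double)(x - sp->xl) / (double)(sp->xr - sp->xl);
                double z = sp->zl + t * (sp->zr - sp->zl);   /* linear interpolation */
                if (z < depth[x]) {             /* nearer than what is stored */
                    depth[x] = z;
                    out_intensity[x] = sp->intensity;
                }
            }
        }
        /* step (c): out_intensity[] now holds the values to display for this line */
    }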