The second renderer implemented is supposed to resemble a quick sketch, such as an engineer might make to describe the approximate shape of an object. No shading should be used, giving the rendered object a somewhat flat appearance. Instead, Hidden Line Removal (HLR) should be used to convey a certain degree of depth information. Edges should not be straight, but should deviate from the ideal lines, and some lines should be drawn repeatedly, as if the artist were uncertain about the correct way of drawing a specific line. The imagined output can be seen in Figure 1.
Figure 1 - A quick Sketch (we decided against shading in
the actual implementation)
Most of the implementation details for this renderer had already been dealt with in the course of this project, namely the silhouette detection (see Comic-style Rendering) and the generation of semi-random lines with pre-determinable end-points (see Generating Coast-lines). The result of the first implementation can be seen in Figure 2. We decided that even when the object is not animated (i.e. rotated, scaled, or translated) we would still like to see it in a more lively form. For this reason the randomisation of the object was performed several times per second. We found that changing the shape of the object in this way could be rather distracting if done too rapidly, and would seem jerky when performed too seldom. A pleasing compromise was found at a reshape rate of about 10 per second.
Figure 2 - "Normal" OpenGL rendering (smooth shaded) on the left and
with the sketch appearance on the right.
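The jagged lines reuse the midpoint-displacement idea referenced above from the coast-line generator. A minimal sketch of that recursion (function and parameter names are ours, not the project's actual implementation): the end-points stay fixed while interior points are randomly displaced perpendicular to the segment.

```python
import random

def jagged_line(p0, p1, depth, wobble=0.08, rng=None):
    """Recursively displace segment midpoints to get a hand-drawn look.

    p0 and p1 are (x, y) end-points, which stay fixed; 'wobble' scales
    the random offset relative to the current segment length.
    """
    rng = rng or random.Random()
    if depth == 0:
        return [p0, p1]
    mx, my = (p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = (dx * dx + dy * dy) ** 0.5
    if length > 0:
        # Displace the midpoint perpendicular to the segment direction.
        off = rng.uniform(-wobble, wobble) * length
        mx += -dy / length * off
        my += dx / length * off
    left = jagged_line(p0, (mx, my), depth - 1, wobble, rng)
    right = jagged_line((mx, my), p1, depth - 1, wobble, rng)
    return left[:-1] + right  # drop duplicated midpoint

pts = jagged_line((0.0, 0.0), (1.0, 0.0), depth=3)
```

Re-running this with fresh random numbers several times per second produces exactly the periodic reshaping described above.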
The problems that remained were:
- Hidden Line Removal
- Optimisations for medium to large objects
Their solutions are discussed in detail below.
Hidden Line Removal was simply achieved by scaling (shrinking) the original object by a certain percentage towards the geometric centre of the bounding box of the object. Figure 3 shows the sketch-rendering in white and the scaled object in transparent red. In the proper implementation the red would be substituted with the background-colour to remove hidden lines. Z-Buffer testing is used to prevent deletion of visible lines.
Figure 3 - Hidden Line Removal for the
Sketch-renderer (detail of HLR flaws on the right)
Even though this method works well in practice, there are shortcomings that need to be mentioned. Firstly, it is not guaranteed that the geometric centre of the bounding box of an object will coincide with the geometric centre of the object itself. The result is that the HLR-object might be inadequately offset with respect to the sketch-rendering (this may produce artefacts like the one depicted in Figure 3, mark a, where the HLR-object fails to delete part of a line that should logically be hidden). Secondly, some lines deviating from ideal edges towards the centre could be clipped by the HLR-object (see Figure 3, mark b and detail: the left leg of the stairs cuts through almost all of the steps). Both of these restrictions are tolerable, seeing as a sketchy rendering is not expected to exhibit a flawless appearance.
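The scaling step itself is straightforward; a minimal sketch (our own illustration, not the project code) of shrinking a vertex list towards its bounding-box centre:

```python
def hlr_object(vertices, shrink=0.95):
    """Scale a list of (x, y, z) vertices towards the centre of their
    axis-aligned bounding box.

    Drawn in the background colour with depth testing enabled, this
    shrunken copy masks out edges lying behind front faces.
    """
    xs, ys, zs = zip(*vertices)
    centre = ((min(xs) + max(xs)) / 2,
              (min(ys) + max(ys)) / 2,
              (min(zs) + max(zs)) / 2)
    return [tuple(c + shrink * (v - c) for v, c in zip(vert, centre))
            for vert in vertices]

mask = hlr_object([(-1.0, -1.0, -1.0), (1.0, 1.0, 1.0)], shrink=0.9)
```

Because the centre is taken from the bounding box rather than from the vertex distribution, the mask can end up offset with respect to the object, exactly as discussed above.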
As we already observed in the comic-style renderer, the overhead to generate and maintain silhouette information for a moving object can be quite considerable. This is especially true for our sketch-renderer, as it breaks each edge up into one or more segments and draws each edge one or more times. The only restriction on the optimisations we looked for was that they had to look acceptable under rotation (because the silhouette information does not change under a scaling transformation and varies only slowly under translation). There are several aspects of the sketch-renderer that determine the rendering time of an object (apart from object complexity). These are (in no particular order):
- calculation of two dot products per edge (to obtain silhouette information)
- number of times each silhouette edge is drawn
- recursion depth used to generate jagged lines
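The first aspect, the two-dot-product silhouette test, can be sketched as follows (a hypothetical helper of our own; it assumes the normals of the two faces adjacent to the edge and a vector from the edge towards the eye are available):

```python
def is_silhouette_edge(n1, n2, view):
    """An edge is a silhouette edge when one adjacent face points towards
    the viewer and the other points away, i.e. the two dot products of
    the face normals n1, n2 with the view vector differ in sign."""
    d1 = sum(a * b for a, b in zip(n1, view))
    d2 = sum(a * b for a, b in zip(n2, view))
    return d1 * d2 < 0

# Cube seen slightly from above: front/top edge is not on the silhouette,
# top/back edge is.
front, top, back = (0, 0, 1), (0, 1, 0), (0, 0, -1)
eye = (0, 0.5, 1)
```

These two dot products must normally be recomputed for every edge in every frame, which is what the caching scheme below avoids.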
As the last two aspects make up the very essence of the renderer, we first looked for ways to optimise the first one. An attempt that proved very fruitful was to cache silhouette information for several frames (i.e. only updating a certain percentage of the silhouette information per frame, or updating the entire silhouette information only every x frames). We opted for the second option, as this allows us to use and re-use display lists. We found that the silhouette information could be re-used for up to 10 frames without any artefacts becoming noticeable. There are several facts that should be noted about this: the above-mentioned optimisation holds for simple as well as complex objects (see below for an attempted explanation). It also holds for slow and fast rotations. We explain this as follows: for slow rotations, the silhouette information does not change rapidly, so that slight inaccuracies can be neglected. For fast rotations, the observer cannot follow the object in all its detail anyway, and imprecise rendering remains largely unnoticed. All of the above is supported by the fact that the rendering is supposed to look sketchy (an approximation) in any case.
We put these observations to use by implementing three display lists, which are displayed three times in random order (e.g. 122132313). As creating a display list takes more time than merely displaying one, the result is a certain degree of randomisation in the time domain of the shape transformation. This actually looks nice for small to medium-sized objects, but for large objects the ratio of time spent on creating display lists vs. displaying them becomes unacceptably large.
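The resulting schedule can be sketched like this (our own illustration; the actual implementation stores the randomised line sets in OpenGL display lists, which are merely re-issued here):

```python
import random

def sketch_schedule(frames, lists=3, repeats=3, rng=None):
    """Yield the index of the cached display list to show in each frame.

    Every 'lists * repeats' frames a fresh random order is generated
    (e.g. 1,2,2,1,3,2,3,1,3), so each of the cached lists is shown
    'repeats' times before the cycle restarts.
    """
    rng = rng or random.Random(0)
    order = []
    for _ in range(frames):
        if not order:
            order = list(range(lists)) * repeats
            rng.shuffle(order)
        yield order.pop()
```

Only when a full cycle has been consumed do the lists need to be rebuilt, so the expensive re-randomisation happens once per cycle instead of once per frame.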
Another way to reduce rendering time is to decrease the number of lines rendered. This can happen in one of several ways:
- Decrease the number of segments that each line may be split into
- Decrease the number of times a line may be redrawn
- Decrease the total number of lines that may be drawn
Whereas the first two points can be varied by adjusting already existing variables in the rendering implementation, the third one involves a more elaborate approach. We first tried sampling the object (i.e. setting an upper bound on the edge complexity that can be rendered in a reasonable amount of time and then rendering only every ith edge so that this criterion is fulfilled), but had to realise that performing linear sampling of the object produced very unpleasant results, as the distribution of geometric detail is in most cases far from linear. In order to solve this problem we suggest any established level-of-detail (LOD) algorithm as a pre-rendering pass, performed when the object is loaded.
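For completeness, the linear sampling we tried (and rejected) amounts to nothing more than a stride over the edge list. A hypothetical sketch, with names of our own choosing:

```python
def sample_stride(edge_count, max_edges):
    """Linear sampling: choose a stride i so that drawing every i-th edge
    leaves at most 'max_edges' edges.

    This is the approach we found unsatisfactory: it discards edges
    uniformly, ignoring where the geometric detail actually is.
    """
    return max(1, -(-edge_count // max_edges))  # ceiling division

edges = list(range(10000))        # stand-in for an object's edge list
stride = sample_stride(len(edges), 2500)
drawn = edges[::stride]
```

An LOD pre-pass replaces this uniform stride with a selection that preserves regions of high detail, which is why we recommend it instead.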
We will first talk about a very simple object: a cube. We assume it is centred at the origin and has an edge length of sqrt(2) (i.e., in the side view, unit length from the centre to any corner), and that the eye is situated at a distance d from the origin, looking down the negative z-axis. This set-up is depicted in Figure 4.
Figure 4 - Side-view of explanation set-up
The two edges we will investigate are located on the top side of the cube. We now perform a rotation of the cube about the x-axis from -20° to +20° as depicted in Figure 5 and watch the blue and red edge.
Figure 5 - Sequence of rotation of the cube
The following animated GIF (Figure 6) consists of two graphs. The left graph shows the y-coordinates of the two edges after a perspective projection that takes into account the distance of the viewer from the origin; the x-axis shows the rotation angle as in Figure 5 and the y-axis the projected coordinate. The right graph shows the absolute difference between the blue and the red edge as a percentage of the side length of the cube. As time progresses, the observer moves away from the origin (from 2 on the z-axis to 6).
Figure 6 - Edge-relation as Observer moves away from Cube
What we can see here is the following: with increasing absolute angle, the difference between the two edges increases as expected, but this difference decreases rapidly as we move away from the object. This can be translated as follows: by updating silhouette (or edge) information only every so often, we run the risk of rendering an incorrect edge (i.e. continuing to render the red edge instead of switching to the blue one in Figure 5). The visible effect of this grows the longer we keep working with the wrong silhouette information and the closer we are to the object.
If we assume an average viewing distance of several cube lengths and an update rate of 10 per rotation (a conservative estimate; this translates to a rotation span of 36 degrees), the relative error is less than 5%. The fact that the uncertainty of the jagged lines drawn by the sketch-renderer is of the same magnitude explains why the degradation of rendering quality is barely noticeable.
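This behaviour is easy to reproduce. The following script is our own reconstruction of the set-up in Figure 4, with the image plane placed one unit in front of the eye (the normalisation used in Figure 6 may differ), and computes the projected difference between the two top edges:

```python
import math

def projected_y(y, z, d):
    """Perspective projection of the point (y, z) for an eye at (0, d)
    looking down the negative z-axis, image plane one unit from the eye."""
    return y / (d - z)

def edge_difference(theta_deg, d, h=math.sqrt(2) / 2):
    """Projected-y difference between the front-top (red) and back-top
    (blue) edges of the cube after rotating it by theta about the x-axis."""
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    # Side view: front-top corner at (y, z) = (h, h), back-top at (h, -h).
    y1, z1 = h * c - h * s, h * s + h * c   # front-top edge
    y2, z2 = h * c + h * s, h * s - h * c   # back-top edge
    return projected_y(y1, z1, d) - projected_y(y2, z2, d)
```

At zero rotation the closer (red) edge projects higher, and at a fixed angle the magnitude of the difference shrinks as the viewing distance d grows, which is the effect visible in the right graph of Figure 6.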
Objects of higher complexity are assumed to exhibit a higher spatial redundancy, i.e. edges under rotation are more closely spaced, so that the effect discussed above is even more pronounced.