A graphics or image rendering system, such as a map image rendering system, receives image data from an image database in the form of vector data that defines various image objects, such as roads, boundaries, etc., which are to be rendered as straight lines within an image. The image rendering system renders the image objects by applying an anti-aliasing technique that determines varying pixel color values at or near the edges of each straight line to be rendered on the image, so as to obtain a pleasing visual effect when rendering a road or other boundary in any orientation on the image screen. The anti-aliasing technique determines a scaling vector having values dependent on the location of a particular pixel in the image along the normal to the straight line forming a road and determines a pixel color value at each pixel location associated with the road based on this scaling vector, such that the pixel color value at each pixel in or near the line is proportional to a component of the scaling vector. This technique produces a more gradual transition in pixel color values from a non-road location to a road location in the image, and thus provides a non-aliased rendering of the road regardless of the orientation or direction in which the road is being rendered in the image.
|
27. A method of rendering a straight line within an image being rendered on a display device, comprising:
receiving vector image data identifying a straight line to be rendered within the image on the display device, the vector image data including a pixel value indicative of a pixel color value for pixel positions within the straight line;
determining a desired width of the straight line in the direction normal to the edges of the straight line;
determining one or more preliminary scaling vectors, the values of which are linearly related to the distance from the center of the straight line in the direction of the normal to the straight line over at least a portion of the extended width of the straight line;
determining a scaling vector to be applied to pixel positions along the normal to the line within the extended width of the straight line, wherein the scaling vector is derived from the one or more preliminary scaling vectors such that the scaling vector has a value that differs from the value of each of the one or more preliminary scaling vectors for at least one distance from the center of the straight line;
using the scaling vector and the pixel value indicative of a pixel color value for pixel positions within the straight line to determine a pixel color value for each of the pixel positions along the normal to the straight line near the edges of the straight line; and
rendering the straight line in the image on the display device using the determined pixel values at each of the pixel positions along the normal to the straight line near the edges of the straight line.
1. A computer-implemented method for rendering a straight line image object on a display device, comprising:
receiving at a computer device line vector image data identifying a straight line to be rendered within an image on a display device;
determining, using a computer device, a desired width for the straight line based on the line vector image data, the desired width of the straight line being determined in the direction normal to the straight line;
determining, using a computer device, an extended width of the straight line to be used in anti-aliasing the straight line when the straight line is rendered in the image on the display device, the extended width of the straight line being determined in the direction of the normal to the straight line and extending the width of the straight line equally on either side of the straight line;
determining one or more preliminary scaling vectors, the values of which are linearly related to the distance from the center of the straight line in the direction of the normal to the straight line over at least a portion of the extended width of the straight line;
determining a scaling vector to be applied to pixel positions along the normal to the straight line within the extended width of the straight line, wherein the scaling vector is derived from the one or more preliminary scaling vectors such that the scaling vector has a value that differs from the value of each of the one or more preliminary scaling vectors for at least one distance from the center of the straight line;
using the scaling vector to determine a pixel value for each of the pixel positions along the normal to the straight line within the extended width of the straight line; and
rendering, using the computer device, the straight line in the image on the display device using the determined pixel values at each of the pixel positions along the normal to the straight line within the extended width of the straight line.
18. An image rendering engine, comprising:
a communications network interface;
a processor;
a memory coupled to the processor;
a display device coupled to the processor;
a first routine, stored in the memory, that executes on the processor to receive, via the communications network interface, a set of vector data comprising data defining one or more straight line image objects;
a second routine, stored in the memory, that executes on the processor to determine an extended width of the straight line to be used in anti-aliasing the straight line when the straight line is rendered in the image on the display device, the extended width of the straight line being determined in the direction of the normal to the straight line and extending the width of the straight line equally on either side of the straight line;
a third routine, stored in the memory, that executes on the processor to determine a scaling vector to be applied to pixel positions along the normal to the line within the extended width of the straight line, wherein determining the scaling vector comprises:
determining one or more preliminary scaling vectors, the values of which are linearly related to the distance from the center of the straight line in the direction of the normal to the straight line over at least a portion of the extended width of the straight line; and
deriving the scaling vector from the one or more preliminary scaling vectors such that the scaling vector has a value that differs from the value of each of the one or more preliminary scaling vectors for at least one distance from the center of the straight line;
a fourth routine, stored in the memory, that executes on the processor to use the scaling vector to determine a pixel value for each of the pixel positions along the normal to the straight line within the extended width of the straight line; and
a fifth routine, stored in the memory, that executes on the processor to render the straight line in the image on the display device using the determined pixel values at each of the pixel positions along the normal to the straight line within the extended width of the straight line.
2. The computer-implemented method of
3. The computer-implemented method of
4. The computer-implemented method of
5. The computer-implemented method of
6. The computer-implemented method of
7. The computer-implemented method of
9. The computer-implemented method of
10. The computer-implemented method of
11. The computer-implemented method of
12. The computer-implemented method of
13. The computer-implemented method of
14. The computer-implemented method of
15. The computer-implemented method of
16. The computer-implemented method of
17. The computer-implemented method of
19. The image rendering engine of
20. The image rendering engine of
21. The image rendering engine of
22. The image rendering engine of
23. The image rendering engine of
24. The image rendering engine of
25. The image rendering engine of
26. The image rendering engine of
28. The method of rendering a straight line within an image of
29. The method of rendering a straight line within an image of
30. The method of rendering a straight line within an image of
31. The method of rendering a straight line within an image of
32. The method of rendering a straight line within an image of
33. The method of rendering a straight line within an image of
34. The method of rendering a straight line within an image of
35. The method of rendering a straight line within an image of
36. The method of rendering a straight line within an image of
37. The method of rendering a straight line within an image of
38. The method of rendering a straight line within an image of
39. The method of rendering a straight line within an image of
|
The present disclosure relates to image rendering systems, such as electronic map display systems, and more specifically to an image rendering engine that performs anti-aliasing on straight lines within a map image.
Digital maps are found in and may be displayed by a wide variety of devices, including mobile phones, car navigation systems, hand-held GPS units, computers, and many websites. Although digital maps are easy to view and to use from an end-user's perspective, creating a digital map is a difficult task and can be a time-consuming process. In particular, every digital map begins with storing, in a map database, a set of raw data corresponding to millions of streets and intersections and other features to be displayed as part of a map. The raw map data that is stored in the map database and that is used to generate digital map images is derived from a variety of sources, with each source typically providing different amounts and types of information. This map data must therefore be compiled and stored in the map database before being accessed by map display or map rendering applications and hardware.
There are, of course, different manners of digitally rendering map images (referred to as digital map images) based on map data stored in a map database. One method of rendering a map image is to store map images within the map database as sets of raster or pixelated images made up of numerous pixel data points, with each pixel data point including properties defining how a particular pixel in an image is to be displayed on an electronic display device. While this type of map data is relatively easy to create and store, the map rendering technique using this data typically requires a large amount of storage space for comprehensive digital map images, and the digital map images, once displayed on a display device, are difficult to manipulate in many useful manners.
Another, more flexible methodology of rendering images uses what is traditionally called vector image data. Vector image data is typically used in high-resolution and fast-moving imaging systems, such as those associated with gaming systems, and in particular three-dimensional gaming systems. Generally speaking, vector image data (or vector data) includes data that defines specific image objects or elements (also referred to as primitives) to be displayed as part of an image via an image display device. In the context of a map image, such image elements or primitives may be, for example, individual roads or road segments, text labels, areas, text boxes, buildings, points of interest markers, terrain features, bike paths, map or street labels, etc. Each image element is generally made up of or drawn as a set of one or more triangles (of different sizes, shapes, colors, fill patterns, etc.), with each triangle including three vertices interconnected by lines. Thus, for any particular image element, the image database stores a set of vertex data points, with each vertex data point defining a particular vertex of one of the triangles making up the image element. Generally speaking, each vertex data point includes data pertaining to a two-dimensional or a three-dimensional position of the vertex (in an X, Y or an X, Y, Z coordinate system, for example) and various vertex attributes defining properties of the vertex, such as color properties, fill properties, line width properties for lines emanating from the vertex, etc.
During the image rendering process, the vertices defined for various image elements of an image to be rendered are provided to and are processed in one or more image shaders which operate in conjunction with a graphics processing unit (GPU), such as a graphics card or a rasterizer, to produce a two-dimensional image on a display screen. Generally speaking, an image shader is a set of software instructions used primarily to calculate rendering effects on graphics hardware with a high degree of flexibility. Image shaders are well known and various types of image shaders are available in various application programming interfaces (APIs) provided by, for example, OpenGL and Direct3D, to define special shading functions. Basically, image shaders are simple programs in a high level programming language that describe or determine the traits of either a vertex or a pixel. Vertex shaders, for example, define the traits (e.g., position, texture coordinates, colors, etc.) of a vertex, while pixel or fragment shaders define the traits (color, z-depth and alpha value) of a pixel. A vertex shader is called for each vertex in an image element or primitive so that, for each vertex input into the vertex shader, the vertex shader produces one (updated) vertex output. Each vertex output by the vertex shader is then rendered as a series of pixels onto a block of memory that will eventually be sent to a display screen. As another example, fragment shaders use the vertices output by the vertex shaders to pixelate the image, i.e., to determine pixel color values of the image being created. Fragment shaders may fill in or render pixels based on the vertex attribute values of the vertices produced by the vertex shaders by interpolating between the vertex attribute values of different vertices of an image object. In other cases, fragment shaders may use predefined textures in the form of pixel color maps to fill in or to pixelate particular areas defined by the vertices of the image object. In this case, the textures define pixel values for various images to be rendered, and are generally used to apply a material texture (e.g., fur, wood, etc.) to objects or to display pictures on an image screen.
As a more particular example of image shader technology, Direct3D and OpenGL graphic libraries use three basic types of shaders including vertex shaders, geometry shaders, and pixel or fragment shaders. Vertex shaders are run once for each vertex given to the graphics processor. As noted above, the purpose of a vertex shader is to transform a position of a vertex in a virtual space to the two-dimensional coordinate at which it appears on the display screen (as well as a depth value for the z-buffer of the graphics processor). Vertex shaders can manipulate properties such as position, color, and texture coordinates by setting vertex attributes of the vertices, but cannot create new vertices. The output of the vertex shader is provided to the next stage in the processing pipeline, which is either a geometry shader if present or the rasterizer. Geometry shaders can add and remove vertices from a mesh of vertices and can be used to generate image geometry procedurally or to add volumetric detail to existing images that would be too costly to process on a central processing unit (CPU). If geometry shaders are being used, the output is then sent to the rasterizer. Pixel shaders, which are also known as fragment shaders, calculate the color and light properties of individual pixels in an image. The input to this stage comes from the rasterizer, and the fragment shaders operate to fill in the pixel values of the polygons being sent through the graphics pipeline and may use textures to define the pixel values within a particular image object. Fragment shaders are typically used for scene lighting and related effects such as color toning. There is not a one-to-one relationship between calls to the fragment shader and pixels on the screen as fragment shaders are often called many times per pixel because they are called for every image element or object that is in the corresponding space, even if that image object is occluded. However, if the occluding object is drawn first, the occluded pixels of other objects will generally not be processed in the fragment shader.
The use of vector graphics can be particularly advantageous in a mobile map system in which image data is sent from a centralized map database via a communications network (such as the Internet, a wireless communications network, etc.) to one or more mobile or remote devices for display. In particular, vector data, once sent to the receiving device, may be more easily scaled and manipulated (e.g., rotated, etc.) than pixelated raster image data. However, the processing of vector data is typically much more time consuming and processor intensive on the image rendering system that receives the data. Moreover, using vector image data that provides a higher level of detail or information to be displayed in a map leads to a higher number of vector data or vertices that need to be sent to the map rendering system from the map database that stores this information, which can result in higher bandwidth requirements or downloading time in some cases.
In the case of both rasterized map images and vector data generated images, image features that have long straight edges, such as roads, are in many cases disposed at or extend within the image at an angle with respect to the edges of a linear array of pixels used on the display screen to render the image. In some cases, such as in cases in which the edges of the image feature (e.g., line) are long and very straight and extend at only a slight angle to the edges of the pixel field, a phenomenon called aliasing occurs. Generally speaking, aliasing results in an image in which the straight edges of lines have sudden or abrupt changes therein, making the edges of the lines look jagged or non-straight, typically in a repetitive manner. While many types of anti-aliasing techniques are known, it can be difficult to perform consistent anti-aliasing of lines within an image, such as in the lines forming a road in a map image, in a manner that can be implemented easily and quickly using image vector data.
A computer-implemented method for rendering a straight line image object on a display device includes using a computer device to receive vector image data identifying a straight line to be rendered within an image on a display device and to determine a desired width for the straight line based on the vector image data, wherein the desired width of the straight line is determined in the direction normal to the straight line. The method also determines an extended width of the straight line to be used in anti-aliasing the straight line when the straight line is rendered in the image on the display device, the extended width of the straight line also being determined in the direction of the normal to the straight line and extending the width of the straight line equally on either side of the straight line. The method additionally determines a scaling vector to be applied to pixel positions along the normal to the line within the extended width of the straight line, uses the scaling vector to determine a pixel value for each of the pixel positions along the normal to the straight line within the extended width of the straight line, and renders the straight line in the image on the display device using the determined pixel values at each of those pixel positions.
If desired, using the scaling vector to determine a pixel value for each of the pixel positions along the normal to the straight line within the extended width of the straight line may include multiplying a component of the scaling vector by a pixel color value for a pixel within boundaries of the straight line to determine the pixel color value at a particular pixel position. Moreover, determining a scaling vector may include determining first and second scaling components extending normal to the straight line but in opposite directions, wherein each of the first and second scaling components has values that range from a first value to a second value, and further including combining the first and second scaling components to produce the scaling vector. In one case, combining the first and second scaling components may include one or more of determining a minimum value of the first and second scaling components at each of a set of locations along the first and second scaling components, limiting the determined minimum value at each of the set of locations along the first and second scaling components between a high value and a low value, such as 0.5 and −0.5, and translating the limited minimum values at each of the set of locations by a predetermined amount, such as 0.5, to produce the scaling vector.
In another case, determining the first and second scaling components extending normal to the straight line but in opposite directions includes assigning values to the first and second scaling components ranging from a first pre-determined amount at the start of each of the first and second scaling components to a second predetermined amount at the end of each of the first and second scaling components. If desired, the first predetermined amount may be a negative number and the second predetermined amount may be a positive number. Also, determining the first and second scaling components extending normal to the straight line but in opposite directions may include determining a scaling factor and assigning values to the first and second scaling components based on the scaling factor and the width of the straight line. Thus, for example, assigning values to the first and second scaling components may include assigning values to each of the first and second scaling components ranging from the negation of the scaling factor to three times the scaling factor plus the square root of the width of the straight line. If desired, the scaling factor may be set as one-half of the inverse square root of the width of the straight line. In this case, combining the first and second scaling components to produce the scaling vector may include multiplying the values of the first and second scaling components at each of the set of locations along the first and second scaling components to produce a scaling vector component at each of the set of locations. If desired, combining the first and second scaling components to produce the scaling vector may also include limiting the multiplied values of the first and second scaling components at each of the set of locations along the first and second scaling components to be between zero and one.
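For purposes of illustration only, these two ways of producing the scaling vector may be written compactly as follows, where c1(d) and c2(d) denote the values of the first and second scaling components at a signed position d along the normal to the straight line, w denotes the width of the straight line, s denotes the scaling factor, and v(d) denotes the resulting scaling vector value; this notation is an illustrative restatement and does not appear in the claims or the description above.

```latex
% Minimum / limit / translate combination (with example limits of -0.5 and 0.5
% and a translation of 0.5):
v(d) = \operatorname{clamp}\bigl(\min\bigl(c_1(d),\, c_2(d)\bigr),\, -\tfrac{1}{2},\, \tfrac{1}{2}\bigr) + \tfrac{1}{2}

% Multiplicative combination, with scaling factor s = \frac{1}{2\sqrt{w}} and with
% c_1 and c_2 each ramping linearly, in opposite directions, from -s to 3s + \sqrt{w}:
v(d) = \operatorname{clamp}\bigl(c_1(d)\, c_2(d),\, 0,\, 1\bigr)
```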
Moreover, the method may include performing one or more transformations on the line vector image data prior to making a determination of a width of the straight line to be rendered in the image on the display device and/or may include performing one or more transformations on the vector image data after determining the extended width of the straight line.
In another embodiment, an image rendering engine includes a communications network interface, a processor, a memory coupled to the processor, a display device coupled to the processor and a plurality of routines stored in the memory and executable on the processor to perform anti-aliasing of a straight line image object. In particular, a first routine executes on the processor to receive, via the communications network interface, a set of vector data comprising data defining one or more straight line image objects and a second routine executes on the processor to determine an extended width of the straight line to be used in anti-aliasing the straight line when the straight line is rendered in the image on the display device. In this case, the extended width of the straight line is determined in the direction of the normal to the straight line and extends the width of the straight line equally on either side of the straight line. A third routine executes on the processor to determine a scaling vector to be applied to pixel positions along the normal to the line within the extended width of the straight line and a fourth routine executes on the processor to use the scaling vector to determine a pixel value for each of the pixel positions along the normal to the straight line within the extended width of the straight line. A fifth routine executes on the processor to render the straight line in the image on the display device using the determined pixel values at each of the pixel positions along the normal to the straight line within the extended width of the straight line. If desired, the second and third routines may be performed in a vertex shader while the fourth and fifth routines may be performed in a fragment shader.
In another embodiment, a method of rendering a straight line within an image being rendered on a display device includes receiving vector image data identifying a straight line to be rendered within the image on the display device, the vector image data including a pixel value indicative of a pixel color value for pixel positions within the straight line, and determining a desired width of the straight line in the direction normal to the edges of the straight line. The method also includes determining a scaling vector to be applied to pixel positions along the normal to the straight line near the edges of the straight line, using the scaling vector and the pixel value indicative of a pixel color value for pixel positions within the straight line to determine a pixel color value for each of the pixel positions along the normal to the straight line near the edges of the straight line and rendering the straight line in the image on the display device using the determined pixel values at the pixel positions along the normal to the straight line near the edges of the straight line.
A graphics or image rendering system, such as a map image rendering system, receives image data from an image database in the form of vector data that defines various image objects, such as roads, boundaries, etc., which are to be rendered as straight lines within an image. The image rendering system renders the image objects by applying an anti-aliasing technique that determines varying pixel color values to use at or near the edges of each straight line to be rendered on the image, so as to obtain a pleasing visual effect when rendering a road or other boundary in any orientation on the image screen. The anti-aliasing technique determines a scaling vector having values dependent on the location of a particular pixel in the image along the normal to the straight line forming a road and determines a pixel color value at each pixel location associated with the road based on this scaling vector, such that the pixel color value at each pixel in or near the line is proportional to a component of the scaling vector. This technique produces a more gradual transition in pixel color values from a non-road location to a road location in the image, and thus provides a non-aliased rendering of the road regardless of the orientation or direction in which the road is being rendered in the image on the display device.
Referring now to
The map database 12 may store any desired types or kinds of map data including raster image map data and vector image map data. However, the image rendering systems described herein are best suited for use with vector image data which defines or includes a series of vertices or vertex data points for each of numerous sets of image objects, elements or primitives within an image to be displayed. Generally speaking, each of the image objects defined by the vector data will have a plurality of vertices associated therewith and these vertices will be used to display a map related image object, such as a road object, to a user via one or more of the client devices 16-22.
As will also be understood, each of the client devices 16-22 includes an image rendering engine having one or more processors 30, one or more memories 32, a display device 34, and in many cases a rasterizer or graphics card 36 which are generally programmed and interconnected in known manners to implement or to render graphics (images) on the associated display device 34. The display device 34 for any particular client device 16-22 may be any type of electronic display device such as a liquid crystal display (LCD), a light emitting diode (LED) display, a plasma display, a cathode ray tube (CRT) display, or any other type of known or suitable electronic display.
Generally speaking, the map-related imaging system 10 of
Referring now to
During operation, the map logic of the map application 48 executes on the processor 30 to determine the particular image data needed for display to a user via the display device 34 using, for example, user input, GPS signals, prestored logic or programming, etc. The display logic or map logic of the application 48 interacts with the map database 12, using the communications routine 43, by communicating with the server 14 through the network interface 42 to obtain map data, preferably in the form of vector data or compressed vector data that is stored in the map database 12. This vector data is returned via the network interface 42 and may be decompressed and stored in the data memory 49 by the routine 43. In particular, the data downloaded from the map database 12 may be a compact, structured, or otherwise optimized version of the ultimate vector data to be used, and the map application 48 may operate to transform the downloaded vector data into specific vertex data points using the processor 30A. In one embodiment, the image data sent from the server 14 includes vector data generally defining data for each of a set of vertices associated with a number of different image elements or image objects to be displayed on the screen 34, including vector data defining one or more roads or road segments or other image objects having relatively long expanses of straight lines associated therewith. More particularly, the vector data for each straight line or road image element or image object may include multiple vertices associated with one or more triangles making up the particular element or object of an image. Each such triangle includes three vertices (defined by vertex data points) and each vertex data point has vertex data associated therewith. In one embodiment, each vertex data point includes vertex location data defining a two-dimensional or a three-dimensional position or location of the vertex in a reference or virtual space, as well as one or more vertex attribute values and/or an attribute reference pointing to or defining a set of vertex attribute values. In the case of roads, one or more of the vertex attribute values may define a pixel color value associated with the interior or inside of the line or road, a width of the line to illustrate as a road, etc. Each vertex data point may additionally include other information, such as an object type identifier that identifies the type of image object with which the vertex data point is associated.
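For purposes of illustration only, a vertex data point of the kind just described might be organized as in the following sketch; the field names and types are hypothetical and are not drawn from any particular vector data format used by the map database 12.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class VertexDataPoint:
    """Illustrative layout of one vertex data point for a road or other straight line object."""
    position: Tuple[float, float]              # two-dimensional location in the reference or virtual space (x, y)
    color: Tuple[float, float, float, float]   # pixel color value for the interior of the line (e.g., RGBA)
    line_width: float                          # width of the line to illustrate as a road, in pixels
    object_type: str = "road"                  # object type identifier for the associated image object
    attribute_ref: Optional[int] = None        # optional reference pointing to a shared set of vertex attributes
```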
As illustrated in
Next, a block 104 performs any pre-width transformations on the line or the vertex data defining the line that need to be performed on the image object. More particularly, the vertex shader 44 may apply various transformation matrices to the vertex points of the line before the width of the line is computed or determined. It is important for the anti-aliasing technique described herein that certain of the transformations, e.g., those that do not affect or transform the width characteristics of the line, be applied to the line or the vertices defining the line prior to the width of the line being calculated and determined. For example, the transformations associated with the orientation of the line on the image screen may need to be performed prior to the anti-aliasing routine described herein being applied. However, other transformations may need to be applied after the width of the line is calculated or determined, such as transformations that determine the edges of the line and pixel values for pixels at or near the edges of the line. As a general matter, the vertex shader 44 may compute the transformed data points using any transformations that are to be applied before the width of the line is computed, and these same transformations may be applied to the normal of the line, so that the line is normalized to the image or screen size in which the line is to be rendered.
Next, a block 106 determines, e.g., computes, the width of the line, as defined by the image object data or as computed by the map application program 48 based on desired display features or characteristics of the line to be displayed. This width may be used to define the locations of the vertices of the triangles defining the line. At this point, the triangles for the image object, e.g., the road, define a line of constant and predefined width (in the direction of the normal to the line) and anti-aliasing can be performed.
To perform anti-aliasing, a block 108, which may be performed in the vertex shader 44, first transforms the vertex coordinates of the line into viewport coordinates. This operation requires knowing the size of the viewport, which could be passed to the vertex shader 44 as a uniform variable. As an example only, the viewport coordinates may be obtained for one side of a line defined by a set of vertex points by dividing the computed coordinate by its perspective (also referred to as width) component, adding one to the x component, subtracting the y component from one, and multiplying by half of the viewport size to get the viewport coordinates. Using the same technique, the vertex shader 44 can compute the viewport coordinates for the vertices on the other side of the line. The distance between these vertices gives the true width of the line after vertex transformations.
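For purposes of illustration only, the conversion to viewport coordinates described above might be sketched as follows, assuming a clip-space vertex coordinate (x, y, z, w) and a viewport size given in pixels; the function name and argument layout are illustrative rather than taken from any particular shader implementation.

```python
def to_viewport(x: float, y: float, w: float,
                viewport_width: float, viewport_height: float) -> tuple[float, float]:
    """Convert one clip-space vertex coordinate to viewport (pixel) coordinates."""
    # Divide by the perspective component to obtain normalized device coordinates.
    ndc_x = x / w
    ndc_y = y / w
    # Add one to the x component, subtract the y component from one, and
    # multiply by half of the viewport size to obtain viewport coordinates.
    viewport_x = (ndc_x + 1.0) * 0.5 * viewport_width
    viewport_y = (1.0 - ndc_y) * 0.5 * viewport_height
    return viewport_x, viewport_y
```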
Referring to
Once the line vertices 202 are converted to the viewport coordinates, a block 110 of
A block 112 then determines extended line vertices 202A to define a bounding box over which a scaling vector will be computed and applied to the line 200 to perform anti-aliasing. Extended vertices 202A are illustrated in
Next, if needed, a block 114 of
In any event, blocks 116 and 118 may be used to determine a scaling vector for the line 200 to be used when rendering the line 200 to perform anti-aliasing. More particularly, this scaling vector, once computed, is used to determine the pixel color values within the bounding box defined by the extended vertices 202A so as to render the line 200 in a manner that performs anti-aliasing. While the scaling vector may be calculated in any number of manners, the general purpose of the scaling vector is to provide a scale disposed normal to the line 200 that indicates how much of the pixel color value of the line to use at each pixel position along the normal to the line (for each pixel column of the line along the length of the line) to calculate pixel color values at each such pixel position. More particularly, the scaling vector is formed such that pixels at pixel locations firmly within the original bounds or edges 205 and 207 of the line 200 are rendered using the pixel color value of the line, that pixels at pixel locations firmly outside of the original bounds or edges 205 and 207 of the line 200 are rendered such that a non-line pixel color value is used at these pixel locations, and such that pixels near the original edges 205 and 207 of the line 200 use scaled pixel color values depending on how close a particular pixel is to one of the edges 205 and 207 of the line 200. This scaling vector is used by a fragment shader 46 to determine a pixel color value for each pixel within the bounding box defined by the vertices 202A to perform anti-aliasing. In one instance, the pixels near the edges 205 and 207 of the line 200 (e.g., both outside the line 200 and inside the line 200) are scaled in value based on their distance from the edge of the line 200.
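For purposes of illustration only, the manner in which the fragment shader 46 might apply a scaling vector value at a single pixel can be sketched as follows; the blend against a background color and the helper name are illustrative assumptions rather than a verbatim description of the fragment shader 46.

```python
def shade_pixel(scale: float,
                line_color: tuple[float, float, float],
                background_color: tuple[float, float, float]) -> tuple[float, float, float]:
    """Scale the line's pixel color value by the scaling vector component at this pixel."""
    s = min(max(scale, 0.0), 1.0)  # keep the scaling component between 0 and 1
    # Pixels firmly inside the line (s == 1) take the line's pixel color value, pixels
    # firmly outside (s == 0) take the non-line color, and pixels near an edge are blended.
    return tuple(s * lc + (1.0 - s) * bc for lc, bc in zip(line_color, background_color))
```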
To compute the scaling vector, the block 116 determines first and second scaling components 210A and 210B for the line 200, wherein the scaling components 210A and 210B span the extended line width wₑ defined by the vertices 202A in the direction of the normal 203 but do so in opposite directions. Example scaling components 210A and 210B are illustrated with thicker lines in
If desired, the scaling components 210A and 210B may be formed by forming a box or extending the vertices 202 of the line 200 by a certain amount in each direction, based on, for example, the width w of the line 200. Thus, for example, the vertices 202A may be formed as extending the vertices 202 in each direction above and below the line 200 in the direction of the normal 203 to the line 200 (for the vertices above the line 200) and in the direction opposite to the normal 203 of the line 200 (180 degrees from the direction of the normal) for the vertices below the line 200.
Now, values for different locations along the scaling components 210A and 210B may be determined or set dependent on the particular location along the normal 203 to the line 200 at which the scaling component is analyzed. In one example, the value of the components 210A and 210B may vary linearly from a first value to a second value. If desired, the first value may be a negative number and the second value may be a positive number with the value being zero at one of the edges 205 or 207 of the line 200. In a more particular example, the values of the scaling components 210A and 210B at any location along these components may be a pixel count or a pixel distance defining the distance that that location is away from one of the edges 205 or 207. For example, as illustrated in
In one case, the vertices 202A may be extended by a fixed number of pixels on either side of the line 200, e.g., two pixels, or may be extended by a number of pixels based on the width of the line, e.g., one fifth of the line width. In either case, in the illustration of
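For purposes of illustration only, one way to sample such scaling components across the extended width of the line is sketched below, where positions are signed pixel distances measured along the normal 203, component A is zero at the edge 205, and component B is zero at the edge 207; the fixed two-pixel extension is one of the options mentioned above, and the helper name and parameterization are illustrative assumptions.

```python
def scaling_components(width: float, extension: float = 2.0, samples: int = 11):
    """Sample two linear scaling components across the extended width of a line.

    Positions run from -extension (just outside edge 205) to width + extension
    (just outside edge 207) along the normal to the line.
    """
    step = (width + 2.0 * extension) / (samples - 1)
    positions = [-extension + i * step for i in range(samples)]
    component_a = [p for p in positions]          # signed pixel distance from edge 205
    component_b = [width - p for p in positions]  # signed pixel distance from edge 207, opposite direction
    return positions, component_a, component_b
```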
After the block 116 determines the scaling components 210A and 210B, a block 118 determines the scaling vector by combining the values of the two scaling components 210A and 210B at each position along the components 210A and 210B so as to produce a single scaling vector having a value at each pixel position along the normal 203 to the line 200.
In one example, the block 118 may combine the two scaling components 210A and 210B by first determining the minimum value of the two scaling components 210A and 210B at each position or location along these components (e.g., in the direction of the normal 203 to the line 200). This operation is illustrated in
Next, the block 118 may shift or translate the entire line of
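For purposes of illustration only, this combination (the minimum of the two components, limited to a range such as −0.5 to 0.5 as noted in the summary above, and then translated by 0.5) might be sketched as follows, operating on components sampled as in the earlier sketch; the helper below is an assumption about the arithmetic, not a listing of the block 118.

```python
def combine_min_clamp_translate(component_a, component_b,
                                low: float = -0.5, high: float = 0.5, shift: float = 0.5):
    """Combine two scaling components into a single scaling vector with values in [0, 1]."""
    scaling_vector = []
    for a, b in zip(component_a, component_b):
        m = min(a, b)                     # minimum of the two components at this location
        m = max(low, min(high, m))        # limit the minimum between the low and high values
        scaling_vector.append(m + shift)  # translate so the result lies between 0 and 1
    return scaling_vector
```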
Referring again to
Of course, the method of determining a scaling vector that is illustrated in and described with respect to
In this case, the fragment shader 46 (or the vertex shader 44 if so desired) may combine the two scaling components by multiplying the values of the two scaling components together (on a location-by-location basis) to produce the scaling vector. If the value of the scaling vector is zero or less at a particular pixel location along the normal to the line 200, the line is not drawn at that pixel location (i.e., the pixel color value is set to be a non-line pixel color value). If the value of the multiplied component or scaling vector at a particular pixel location is one or more, the line is drawn fully at that pixel location (i.e., the pixel color value is set to be the line pixel color value). If the value of the scaling vector at a particular pixel is between zero and one, the value of the scaling vector at that particular location is used as a percentage of the line pixel color value to be drawn at that pixel, and can be used to determine the percentage of the pixel color value of the line to be rendered at that pixel location to perform anti-aliasing. In one example, the two components have the property that their product will be between zero and one for only 1 viewport pixel on either side of the line 200, and will be one for the desired width of the line.
As an example of this technique, if the line has a width of 4 and has no transformations applied, the scaling factor is computed as ¼. The two scaling components will thus vary from negative ¼ to 2¾, and from 2¾ to negative ¼. Ten percent (10%) of the way through the line (in the direction of the normal to the line) these components would interpolate to 0 and 2.5 yielding a product of zero. Twenty percent (20%) of the way through the line in the direction of the normal to the line, these components interpolate to 0.25 and 2.25, yielding a product of 0.5625. Thirty percent (30%) of the way through the line in the direction of the normal to the line, these components interpolate to 0.5 and 2, yielding a product of 1. Seventy percent (70%) of the way through the line in the direction normal to the line, these components interpolate back down to values that produce a product of one. Using this factor, twenty percent of the line on either side of the line is anti-aliased, and forty percent of the line is solid. If the line is drawn over five pixels, in the direction normal to the line, the line will be four pixels wide with one pixel of anti-aliasing on either side. In such a case, it may be desirable to push the extended vertices a half of a pixel further from the line in the vertex shader in order to accomplish better anti-aliasing. Of course, while two specific manners of computing and combining scaling components are described herein, other manners of developing and combining scaling components can be used to develop a scaling vector, and other manners of developing scaling vectors can be used as well.
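For purposes of illustration only, the arithmetic of this example can be reproduced with the short script below; the component values are the ones quoted above for a line of width 4, and the clamping of the product between zero and one follows the description of the multiplicative combination.

```python
import math

width = 4.0
scale = 0.5 / math.sqrt(width)  # one-half of the inverse square root of the width -> 0.25

# (fraction of the way through the line, component A, component B), as quoted above
samples = [(0.10, 0.0, 2.5), (0.20, 0.25, 2.25), (0.30, 0.5, 2.0)]

for fraction, a, b in samples:
    product = a * b
    coverage = min(max(product, 0.0), 1.0)  # limit the product between zero and one
    # coverage == 0: line not drawn; coverage == 1: full line pixel color value;
    # otherwise: that fraction of the line's pixel color value is used.
    print(f"{fraction:.0%} through the line: product = {product}, coverage = {coverage}")
```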
Thus, if for example, anti-aliasing is not provided by an OpenGL implementation, it may be important to anti-alias lines to be rendered in an image. To do so, the system first computes the vertices for triangles covering the lines of a road or other lines of constant thickness in a vertex shader so that the vertex shader can render the lines with widths that are independent of some transformations, but dependent on others. The system then computes a pair of carefully crafted scaling components which can be combined to interpolate across the line so as to be able to perform anti-aliasing of the line with minimal additional processing cost or time. Using the technique described herein for applying anti-aliasing to lines in an OpenGL shader, the lines remain constant width regardless of some transformations, but change width based on other transformations.
Of course, the anti-aliasing techniques described herein may be altered or varied in any number of manners to provide an image rendering system, such as a map rendering system, having the ability to efficiently render individual lines with sufficient anti-aliasing to produce a pleasing rendering of a line at any orientation.
Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
For example, the network 25 may include but is not limited to any combination of a LAN, a MAN, a WAN, a mobile network, a wired or wireless network, a private network, or a virtual private network. Moreover, while only four client devices are illustrated in
Additionally, certain embodiments are described herein as including logic or a number of components, modules, or mechanisms. Modules may constitute either software modules (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware modules. A hardware module is a tangible unit capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein.
In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
Accordingly, the term hardware should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time.
Hardware and software modules can provide information to, and receive information from, other hardware and/or software modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple of such hardware or software modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware or software modules. In embodiments in which multiple hardware modules or software are configured or instantiated at different times, communications between such hardware or software modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware or software modules have access. For example, one hardware or software module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware or software module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware and software modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information).
The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules.
Similarly, the methods or routines described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented hardware modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations.
The one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., application program interfaces (APIs)).
The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations.
Some portions of this specification are presented in terms of algorithms or symbolic representations of operations on data stored as bits or binary digital signals within a machine memory (e.g., a computer memory). These algorithms or symbolic representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. As used herein, an “algorithm” or a “routine” is a self-consistent sequence of operations or similar processing leading to a desired result. In this context, algorithms, routines and operations involve physical manipulation of physical quantities. Typically, but not necessarily, such quantities may take the form of electrical, magnetic, or optical signals capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by a machine. It is convenient at times, principally for reasons of common usage, to refer to such signals using words such as “data,” “content,” “bits,” “values,” “elements,” “symbols,” “characters,” “terms,” “numbers,” “numerals,” or the like. These words, however, are merely convenient labels and are to be associated with appropriate physical quantities.
Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information.
As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
Some embodiments may be described using the expression “coupled” and “connected” along with their derivatives. For example, some embodiments may be described using the term “coupled” to indicate that two or more elements are in direct physical or electrical contact. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other. The embodiments are not limited in this context.
As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
In addition, the articles “a” or “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the description. This description should be read to include one or at least one, and the singular also includes the plural unless it is obvious that it is meant otherwise.
Still further, the figures depict preferred embodiments of a map rendering system for purposes of illustration only. One skilled in the art will readily recognize from the foregoing discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for a system and a process for rendering map or other types of images using the principles disclosed herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims.