A 3D-graphics processing method for processing 3D objects in a computer system defines a visible region having a far plane and a near plane. A clipping process is performed for a first object lying across the near plane, while a second object lying across the far plane is exempted from the clipping process. Instead, the second object is rendered as a whole to obtain a plurality of pixels. The depth values of the pixels are then compared with a depth value of the far plane. Any pixel having a depth value greater than the depth value of the far plane is discarded from display. On the other hand, any pixel of the first object that is not blocked by other pixels in front of it and has a depth value smaller than the depth value of the far plane is outputted for display.
15. A 3D-graphics processing method for processing 3D objects in a computer system, comprising steps of:
defining a visible region, said visible region having a far plane and a near plane;
performing a rendering process for a selected object which is at least partially in said visible region; and
defining a visible depth on display (Zs) of each pixel of said selected object according to the following Z-conversion formula:
Zs=((Zf+e1)/((Zf+e1)−(Zn−e2)))*(1−((Zn−e2)/Z)), where Z is an actual visible depth of the pixel of interest, Zf is a depth value of said far plane, Zn is a depth value of said near plane, e1 and e2 are modifying coefficients, and e1 and e2 are not equal to zero at the same time.
1. A 3D-graphics processing method for processing 3D objects in a computer system, comprising steps of:
defining a visible region, said visible region having a far plane and a near plane;
performing a rendering process for a first object among said 3D objects, which lies across said far plane, to obtain a plurality of pixels;
comparing depth values of said pixels with a depth value of said far plane; and
discarding any of said pixels having a depth value greater than said depth value of said far plane from display;
wherein a visible depth on display (Zs) of a pixel in said visible region is defined according to the following Z-conversion formula:
Zs=((Zf+e)/((Zf+e)−Zn))*(1−(Zn/Z)), where Z is an actual visible depth of said pixel, Zf is said depth value of said far plane, Zn is said depth value of said near plane, and e is a positive modifying coefficient.
8. A 3D-graphics processing method for processing 3D objects in a computer system, comprising steps of:
defining a visible region, said visible region having a far plane and a near plane;
performing a rendering process for a first object among said 3D objects, which lies across said near plane, to obtain a plurality of pixels;
comparing depth values of said pixels with a depth value of said near plane; and
discarding any of said pixels having a depth value smaller than said depth value of said near plane from display;
wherein a visible depth on display (Zs) of a pixel in said visible region is defined according to the following Z-conversion formula:
Zs=(Zf/(Zf−(Zn−e)))*(1−((Zn−e)/Z)), where Z is an actual visible depth of said pixel of interest, Zf is said depth value of said far plane, Zn is said depth value of said near plane, and e is a positive modifying coefficient.
2. The method according to
3. The method according to
4. The method according to
5. The method according to
6. The method according to
7. The method according to
9. The method according to
10. The method according to
11. The method according to
12. The method according to
13. The method according to
14. The method according to
16. The 3D-graphics processing method according to
The present invention relates to a three-dimensional (3D) graphics processing method, and more particularly to a 3D-graphics clipping process implemented with a computer system.
In 3D computer graphics, the image of an object is projected onto a projection plane and then recorded into a display memory so that 3D graphics can be shown on a planar display. Please refer to
To solve this problem, a 3D-graphics clipping process has been developed. As is known to those skilled in the art, a 3D-graphics clipping process is generally a time-consuming task in the 3D image-processing pipeline. A polygon is clipped against the view volume based on six clipping planes. For each clipping plane, an intersection with the polygon is computed. For each intersection, new polygons must be determined based on the intersection points. Therefore, for an application involving hundreds of thousands of polygons, the clipping process will be one of the bottlenecks of the 3D image-processing pipeline. Given the object 144 shown in
To enhance processing efficiency and reduce cost, various techniques have been developed to deal with the clipping process. For example, a so-called “guardband clipping” process is widely adopted, in which a guardband range is defined outside the clipping window. According to the guardband clipping process, an object that partially intersects the clipping window is processed with a fast pixel-rasterization mechanism to remove the pixels outside the clipping window. Please refer to
However, as mentioned above, a polygon is generally clipped against the view volume based on six clipping planes. In other words, in addition to the aforementioned four side clipping planes 391, 392, 393 and 394, there are two other clipping planes, e.g. a near plane defined by points 121, 122, 123 and 124 and a far plane defined by points 131, 132, 133 and 134 as illustrated in
Therefore, the present invention provides a 3D-graphics processing method capable of performing 3D-graphics image processing with the far and/or near plane.
The present invention relates to a 3D-graphics processing method for processing 3D objects in a computer system. The method comprises steps of: defining a visible region, the visible region having a far plane and a near plane; performing a rendering process for a first object among the 3D objects, which lies across the far plane, to obtain a plurality of pixels; comparing depth values of the pixels with a depth value of the far plane; and discarding any of the pixels having a depth value greater than the depth value of the far plane from display.
In an embodiment, the method further comprises a step of performing a clipping process for a second object among the 3D objects, which lies across the near plane, to obtain data of a first portion having depth values smaller than a depth value of the near plane and data of a second portion having depth values greater than the depth value of the near plane. The second portion is subjected to a rendering process while the first portion is exempted from the rendering process. The resulting pixels associated with the second portion of the second object are then outputted for display after the rendering process. Any pixel of the first object having a depth value smaller than the depth value of the far plane is also outputted for display if it is not blocked by other pixels in front of it.
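The near-plane clipping step described above, which splits an object into a discarded portion and a kept portion at the plane, can be sketched as a single-plane Sutherland-Hodgman clip. This is a minimal illustration under assumed conventions (vertices as (x, y, z) tuples, larger z meaning deeper); the function name and data layout are hypothetical, not taken from the disclosure.

```python
def clip_to_near_plane(vertices, z_near):
    """Clip a convex polygon against the near plane, keeping depth >= z_near.

    Single-plane Sutherland-Hodgman pass: walk each edge, keep inside
    vertices, and emit an interpolated vertex wherever an edge crosses
    the plane. Vertices are (x, y, z) tuples (illustrative layout).
    """
    out = []
    n = len(vertices)
    for i in range(n):
        a, b = vertices[i], vertices[(i + 1) % n]
        a_in, b_in = a[2] >= z_near, b[2] >= z_near
        if a_in:
            out.append(a)
        if a_in != b_in:
            # Edge crosses the plane: interpolate the intersection point.
            t = (z_near - a[2]) / (b[2] - a[2])
            out.append(tuple(a[k] + t * (b[k] - a[k]) for k in range(3)))
    return out

# A triangle straddling the near plane (z_near = 1.0) clips to a quadrilateral
# lying entirely at or behind the plane.
tri = [(0.0, 0.0, 0.0), (0.0, 0.0, 2.0), (2.0, 0.0, 2.0)]
clipped = clip_to_near_plane(tri, 1.0)
assert len(clipped) == 4 and all(v[2] >= 1.0 for v in clipped)
```

The per-edge intersection and re-polygonization shown here is exactly the work the disclosed method avoids for the exempted object by deferring to a per-pixel depth comparison instead.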
For example, the 3D objects are a plurality of polygons defined with primitive data.
The present invention also relates to a 3D-graphics processing method for processing 3D objects in a computer system, which comprises steps of: defining a visible region, the visible region having a far plane and a near plane; performing a rendering process for a first object among the 3D objects, which lies across the near plane, to obtain a plurality of pixels; comparing depth values of the pixels with a depth value of the near plane; and discarding any of the pixels having a depth value smaller than the depth value of the near plane from display. Preferably, any pixel having a depth value greater than the depth value of the near plane is outputted for display if it is not blocked by other pixels in front of it.
In an embodiment, the alternative method further comprises steps of performing a clipping process for a second object among the 3D objects, which lies across the far plane, to obtain data of a first portion having depth values smaller than a depth value of the far plane and data of a second portion having depth values greater than the depth value of the far plane; performing a rendering process for the first portion but exempting the second portion from the rendering process; and outputting the resulting pixels associated with the first portion of the second object for display after the rendering process.
The present invention further provides a 3D-graphics processing method for processing 3D objects in a computer system, which comprises steps of: defining a visible region, the visible region having a far plane and a near plane; performing a rendering process for a selected object which is at least partially in the visible region; and defining a visible depth on display (Zs) of each pixel of the selected object according to the following Z-conversion formula:
Zs=((Zf+e1)/((Zf+e1)−(Zn−e2)))*(1−((Zn−e2)/Z)),
where Z is an actual visible depth of the pixel of interest, Zf is a depth value of the far plane, Zn is a depth value of the near plane, e1 and e2 are modifying coefficients, and e1 and e2 are not equal to zero at the same time.
For example, e1=0 and e2>0; e1>0 and e2=0; or e1>0 and e2>0.
The present invention may best be understood through the following description with reference to the accompanying drawings, in which:
The present invention will now be described more specifically with reference to the following embodiments. It is to be noted that the following descriptions of preferred embodiments of this invention are presented herein for purpose of illustration and description only; it is not intended to be exhaustive or to be limited to the precise form disclosed.
In addition to the simple clipping process shown in
Please refer to
The object 44, having partial depth values greater than the depth value of the far end 42, although remaining unclipped, is processed with calculating and comparing operations before being outputted for display. In other words, the primitive data of the object 44, and of any other object across the far end 42 of the visible depth region 4, directly enter the subsequent rendering process without clipping. The resulting pixels, before being outputted to the display of the computer system, are compared with the depth value of the far end 42 to determine which pixels can be outputted for display and which should be discarded. Any pixel having a depth value greater than or equal to that of the far end 42 is discarded and will not be shown on the display. On the other hand, pixels having depth values smaller than that of the far end 42 can be outputted for display. In this way, the object 44 can be partially shown without clipping, and the adverse effect of the clipping process on 3D-graphics image processing can be efficiently avoided.
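The calculating and comparing operations against the far end can be sketched as a per-pixel test applied after rendering. A minimal illustration, assuming pixels are (x, y, depth, color) tuples; the function name and layout are hypothetical, not from the disclosure.

```python
def far_plane_test(pixels, z_far):
    """Keep only pixels whose depth is strictly less than the far-end depth.

    Pixels at or beyond the far end are discarded from display, so an object
    straddling the far plane can be rendered whole and trimmed per pixel.
    """
    visible = []
    for x, y, depth, color in pixels:
        if depth >= z_far:      # at or beyond the far end: discard
            continue
        visible.append((x, y, depth, color))
    return visible

# An object across the far plane (z_far = 100.0) rendered without clipping,
# then filtered pixel by pixel:
pixels = [(0, 0, 95.0, "red"), (1, 0, 100.0, "red"), (2, 0, 120.0, "red")]
assert far_plane_test(pixels, 100.0) == [(0, 0, 95.0, "red")]
```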
In another embodiment, it is the object 44 across the far end 42 of the visible depth region 4 that is clipped, while the object 45 across the near end 41 of the visible depth region 4 remains unclipped. In this case, the object 44 is subjected to a clipping process while the object 45 is processed with calculating and comparing operations. The primitive data of the object 45, and of any other object across the near end 41 of the visible depth region 4, are subjected to the subsequent rendering process without clipping. The resulting pixels, instead of being outputted directly to the display of the computer system, are first compared with the depth value of the near end 41 to determine which pixels can be outputted for display and which should be discarded. Any pixel having a depth value smaller than or equal to that of the near end 41 is discarded and will not be shown on the display. On the other hand, pixels having depth values greater than that of the near end 41 can be outputted for display. In other words, the object 45 can be partially shown without clipping. Of course, it is also possible to process objects across both ends with the aforementioned calculating and comparing operations. In this fashion, the adverse effect of the clipping process on 3D-graphics image processing can be efficiently avoided.
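The near-end counterpart is the mirror image of the far-end test: pixels at or in front of the near end are discarded. Again a minimal sketch with a hypothetical (x, y, depth, color) pixel layout.

```python
def near_plane_test(pixels, z_near):
    """Discard pixels at or in front of the near end; keep the rest."""
    return [(x, y, depth, color)
            for x, y, depth, color in pixels
            if depth > z_near]   # depth <= z_near: discarded from display

# An object across the near plane (z_near = 1.0) rendered without clipping:
pixels = [(0, 0, 0.5, "blue"), (1, 0, 1.0, "blue"), (2, 0, 3.0, "blue")]
assert near_plane_test(pixels, 1.0) == [(2, 0, 3.0, "blue")]
```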
Although the present invention suggests processing objects across either or both of the end planes with the aforementioned calculating and comparing operations after the rendering process, in current applications it is more practical to process the objects across the far plane with the present calculating and comparing operations.
Although the present invention has efficiently minimized the adverse effect of the clipping process on 3D-graphics image processing, the above embodiments can be further improved to avoid possible abnormal display.
Zs=Zc/Wc,
where Wc is a non-linear conversion parameter, and where
Zc=Zf(Z−Zn)/(Zf−Zn), and
Wc=Z.
Thus, it is derived that
Zs=Zc/Wc=(Zf/(Zf−Zn))*(1−(Zn/Z)),
where Zf is the largest actual depth value in the visible region, Zn is the smallest actual depth value in the visible region, and Z is the actual depth value of the point of interest, as exemplified in
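The derived conversion Zs = (Zf/(Zf−Zn))*(1−Zn/Z) can be checked numerically at the two boundary planes. The plane values below are chosen so the arithmetic is exact in binary floating point; they are purely illustrative.

```python
def z_convert(z, z_near, z_far):
    """Display depth per the unmodified formula: Zs = (Zf/(Zf-Zn)) * (1 - Zn/Z)."""
    return (z_far / (z_far - z_near)) * (1.0 - z_near / z)

z_near, z_far = 1.0, 2.0
assert z_convert(z_near, z_near, z_far) == 0.0   # near plane maps to Zs = 0
assert z_convert(z_far, z_near, z_far) == 1.0    # far plane maps to Zs = 1
```

Between the planes the mapping is non-linear in Z (it is linear in 1/Z), which is why precision is distributed unevenly across the depth range.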
It is understood from the above formulae that for the near plane, the value Z can be set to Zn, and thus the value Zs is equal to 0. On the other hand, for the far plane, the value Z can be set to Zf, and thus the value Zs is equal to 1. As for the objects located within the visible range, i.e. between the near and far planes, their actual depth value Z will lie between Zn and Zf, and their display depth value Zs is supposed to lie between 0 and 1. However, if the calculation precision is not as high as required, the calculated display depth value Zs may erroneously become less than 0 or greater than 1. Accordingly, the nearest or the farthest pixels may be undesirably discarded from display. Particularly, according to the plot of
In order to avoid this possible defect, it is preferred to shift the largest actual depth value in the visible region to (Zf+e) in lieu of Zf, where e is a positive modifying coefficient, while setting the value Z of the far plane as Zf, as shown in
Zc=(Zf+e)(Z−Zn)/(Zf+e−Zn),
Wc=Z, and
Zs=Zc/Wc=((Zf+e)/((Zf+e)−Zn))*(1−(Zn/Z)).
In this way, the largest depth value Zs on display will be slightly smaller than the threshold value 1. Therefore, the background-associated pixels are assured of lying inside the visible region and can be successfully displayed.
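The effect of the far-end modifying coefficient e can be verified numerically: with the far bound shifted to Zf+e while Z of the far plane stays Zf, the far plane lands strictly below 1. The values below are illustrative (e chosen as an exact binary fraction).

```python
def z_convert_far_shift(z, z_near, z_far, e):
    """Zs = ((Zf+e)/((Zf+e) - Zn)) * (1 - Zn/Z), with e > 0 widening the far bound."""
    return ((z_far + e) / ((z_far + e) - z_near)) * (1.0 - z_near / z)

z_near, z_far, e = 1.0, 2.0, 0.125
zs_far = z_convert_far_shift(z_far, z_near, z_far, e)
# Far-plane pixels map strictly inside [0, 1), so rounding error cannot
# push them past the far threshold and out of the displayable range.
assert 0.0 < zs_far < 1.0
```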
Likewise, in order to avoid the nearest pixels being undesirably discarded due to low calculation precision, the smallest actual depth value in the visible region can be shifted to (Zn−e) in lieu of Zn, where e is a positive modifying coefficient, while setting the value Z of the near plane as Zn, as shown in
Zc=Zf(Z−(Zn−e))/(Zf−(Zn−e)),
Wc=Z, and
Zs=Zc/Wc=(Zf/(Zf−(Zn−e)))*(1−((Zn−e)/Z)).
In this way, the smallest depth value Zs on display will be slightly greater than the threshold value 0. Therefore, the front pixels are assured of lying inside the visible region and can be successfully displayed.
Of course, it is also possible to adjust the boundary values of both near and far ends so that the Z-conversion formulae become expressed as
Zc=(Zf+e1)(Z−(Zn−e2))/((Zf+e1)−(Zn−e2)),
Wc=Z, and
Zs=Zc/Wc=((Zf+e1)/((Zf+e1)−(Zn−e2)))*(1−((Zn−e2)/Z)),
where e1 and e2 are positive modifying coefficients and can be equal or different.
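With both bounds adjusted, the two boundary planes both map strictly inside (0, 1), which can again be checked numerically. A sketch with illustrative, exactly representable values for e1 and e2:

```python
def z_convert_both(z, z_near, z_far, e1, e2):
    """Zs = ((Zf+e1)/((Zf+e1)-(Zn-e2))) * (1 - (Zn-e2)/Z)."""
    zf, zn = z_far + e1, z_near - e2   # shifted bounds of the conversion
    return (zf / (zf - zn)) * (1.0 - zn / z)

z_near, z_far, e1, e2 = 1.0, 2.0, 0.125, 0.125
zs_near = z_convert_both(z_near, z_near, z_far, e1, e2)
zs_far = z_convert_both(z_far, z_near, z_far, e1, e2)
# Both the nearest and the farthest pixels keep a margin against the 0 and 1
# thresholds, so limited calculation precision cannot discard them.
assert 0.0 < zs_near < zs_far < 1.0
```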
According to the 3D-graphics image processing method of the present invention, the objects can, if necessary, be processed against the near and far planes. Further, direct rendering of selected objects, followed by calculating and comparing operations with one or both of the far and near planes, is executed instead of the clipping process so as to minimize the adverse effect of the clipping process on 3D-graphics image processing.
While the invention has been described in terms of what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention need not be limited to the disclosed embodiment. On the contrary, it is intended to cover various modifications and similar arrangements included within the spirit and scope of the appended claims, which are to be accorded the broadest interpretation so as to encompass all such modifications and similar structures.
Wang, Yu-Chang, Lee, Ruen-Rone, Wang, Cai-Sheng
Patent | Priority | Assignee | Title |
7817126, | Aug 23 2005 | LG DISPLAY CO , LTD | Liquid crystal display device and method of driving the same |
Patent | Priority | Assignee | Title |
4888712, | Nov 04 1987 | APPLICON, INC | Guardband clipping method and apparatus for 3-D graphics display system |
6774895, | Feb 01 2002 | Nvidia Corporation | System and method for depth clamping in a hardware graphics pipeline |
6864893, | Jul 19 2002 | Nvidia Corporation | Method and apparatus for modifying depth values using pixel programs |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Dec 28 2004 | LEE, RUEN-RONE | Via Technologies, INC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 016176 | /0143 | |
Dec 28 2004 | WANG, CAI-SHENG | Via Technologies, INC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 016176 | /0143 | |
Dec 28 2004 | WANG, YU-CHANG | Via Technologies, INC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 016176 | /0143 | |
Jan 07 2005 | VIA Technologies, Inc. | (assignment on the face of the patent) | / |