A method and apparatus for performing multiple frame image compression and decompression of motion video data are provided. In one embodiment, a plurality of sequential frames in a motion video sequence are collected and digitally filtered as a single image. At least some of the results of the digital filtering are then encoded to generate compressed data. In another embodiment, a plurality of sequential frames are filtered as if the boundary of each frame is adjacent to a boundary in the same spatial location of another of the plurality of sequential frames.
48. A method comprising:
storing a part of a compressed motion video sequence that represents a group of frames that were compressed as a single image, such that each frame boundary that is on the interior of the single image is adjacent to the same boundary in another frame in the group of frames; decompressing the part of the compressed motion video sequence to generate a plurality of wavelet coefficients; and performing the inverse of a filter used during compression on the plurality of wavelet coefficients to generate a single image.
34. A method of compressing multiple frames of a motion video sequence, comprising:
storing a plurality of frames of a motion video sequence as a single image to generate a plurality of parts; processing the plurality of frames such that at least one boundary of at least one of the plurality of frames that is on the interior of said single image is adjacent to a boundary at the same spatial location in another of said plurality of frames; encoding at least some of the plurality of parts to generate compressed data; and performing at least one wavelet decomposition.
1. A method of compressing frames of a motion video sequence, said method comprising:
digitally filtering a plurality of frames in said motion video sequence as a single image to generate a plurality of parts, wherein said digitally filtering includes processing said plurality of frames such that at least one boundary of at least one of said plurality of frames that is on the interior of said single image is adjacent to a boundary at the same spatial location in another of said plurality of frames; and encoding at least some of said plurality of parts to generate compressed data.
37. A machine-readable medium that provides instructions, which when executed by a machine, cause the machine to perform operations comprising:
storing a plurality of frames of a motion video sequence as a single image to generate a plurality of parts; processing the plurality of frames such that at least one boundary of at least one of the plurality of frames that is on the interior of said single image is adjacent to a boundary at the same spatial location in another of said plurality of frames; encoding at least some of the plurality of parts to generate compressed data; and performing at least one wavelet decomposition.
52. A machine-readable medium that provides instructions, which when executed by a machine, cause the machine to perform operations comprising:
storing a part of a compressed motion video sequence that represents a group of frames that were compressed as a single image, such that each frame boundary that is on the interior of the single image is adjacent to the same boundary in another frame in the group of frames; decompressing the part of the compressed motion video sequence to generate a plurality of wavelet coefficients; and performing the inverse of a filter used during compression on the plurality of wavelet coefficients to generate a single image.
7. A machine readable medium having stored thereon sequences of instructions, which when executed by a processor, cause the processor to perform operations comprising:
digitally filtering a plurality of frames in a motion video sequence as a single image to generate a plurality of parts, wherein said digitally filtering includes processing said plurality of frames such that at least one boundary of at least one of the plurality of frames that is on the interior of said single image is adjacent to a boundary at the same spatial location in another of said plurality of frames; and encoding at least some of said plurality of parts to generate compressed data.
12. A method of compressing frames of a motion video sequence, said method comprising:
storing groups of frames of said motion video sequence in a frame buffer, wherein said storing includes orienting each group of frames such that at least one boundary of said frames that is on the interior of said single image is adjacent to a boundary at the same spatial location in another of said frames; decomposing each group of frames stored in said frame buffer as a frame image to generate a plurality of wavelet coefficients; and compressing at least some of said plurality of wavelet coefficients to generate compressed data representing said motion video sequence.
29. A computer system comprising:
a processor; and a memory, coupled to said processor, to provide a buffer to decompress a plurality of motion video frames that were compressed as a single image using wavelet based encoding; a plurality of instructions, which when executed by said processor, cause said processor to perform the inverse of the wavelet based encoding to generate a decompressed version of said single image, wherein the inverse of the wavelet based encoding is performed such that at least one frame boundary that is on the interior of said single image is adjacent to the same boundary of another frame in the decompressed version of said single image.
32. An apparatus comprising:
a decoder unit coupled to receive compressed data representing a group of frames of a motion video sequence that was compressed as a single image; a quantization unit coupled to said decoder unit to dequantize the output of the decoder unit into a plurality of wavelet coefficients; and a digital filter coupled to said quantization unit to perform the inverse of the wavelets used during compression to generate said single image, wherein the inverse of the wavelets is performed such that at least one frame boundary that is on the interior of said single image is adjacent to the same boundary of another frame of said single image.
21. A method comprising:
storing a part of a compressed motion video sequence that represents a group of frames that were compressed as a single image; decompressing said part to generate a plurality of wavelet coefficients; and performing the inverse of a filter used during compression on said plurality of wavelet coefficients to generate the single image, wherein said performing the inverse of the filter used during compression includes processing said group of frames such that at least one boundary of said group of frames that is on the interior of said single image is adjacent to a boundary at the same spatial location in another frame in said group of frames.
40. A method of compressing multiple frames of a motion video sequence, comprising:
storing groups of frames of the motion video sequence in a frame buffer; orienting each group of frames such that each boundary of the frames that is on the interior of the single image is adjacent to a boundary at the same spatial location in another of the frames; decomposing each group of frames stored in the frame buffer as a frame image to generate a plurality of wavelet coefficients; compressing at least some of the plurality of wavelet coefficients to generate compressed data representing the motion video sequence; quantizing at least some of the plurality of wavelet coefficients; and encoding the results of the quantization.
56. A machine readable medium having stored thereon sequences of instructions, which when executed by a processor, cause the processor to perform operations comprising:
storing groups of frames of a motion video sequence in a frame buffer, wherein said storing includes orienting each group of frames such that at least one boundary of said frames that is on the interior of said single image is adjacent to a boundary at the same spatial location in another of said frames; decomposing each group of frames stored in said frame buffer as a frame image to generate a plurality of wavelet coefficients; and compressing at least some of said plurality of wavelet coefficients to generate compressed data representing said motion video sequence.
17. A computer system comprising:
a processor; and a memory, coupled to said processor, to include a buffer, the buffer to store a plurality of frames of a motion video sequence; a plurality of instructions, which when executed by said processor, cause said processor to: decompose said plurality of frames as a single multiple frame image to generate a plurality of wavelet coefficients, wherein the decomposition comprises processing said plurality of frames wherein at least one boundary of said plurality of frames that is on the interior of said single image is adjacent to a boundary at the same spatial location in another of said plurality of frames; and compress at least some of said plurality of wavelet coefficients to generate compressed data representing said motion video sequence.
25. A machine readable medium having stored thereon sequences of instructions, which when executed by a processor, cause the processor to perform operations comprising:
storing a part of a compressed motion video sequence that represents a group of frames that were compressed as a single image; decompressing said part to generate a plurality of wavelet coefficients; and performing the inverse of a filter used during compression on said plurality of wavelet coefficients to generate the single image, wherein said performing the inverse of the filter used during compression comprises processing said group of frames such that at least one frame boundary that is on the interior of said single image is adjacent to a boundary at the same spatial location in another of said group of frames.
44. A machine-readable medium that provides instructions, which when executed by a machine, cause the machine to perform operations comprising:
storing groups of frames of a motion video sequence in a frame buffer; orienting each group of frames such that each boundary of the frames that is on the interior of the single image is adjacent to a boundary at the same spatial location in another of the frames; decomposing each group of frames stored in the frame buffer as a frame image to generate a plurality of wavelet coefficients; compressing at least some of the plurality of wavelet coefficients to generate compressed data representing the motion video sequence; quantizing at least some of the plurality of wavelet coefficients; and encoding the results of the quantization.
19. An apparatus comprising:
a frame buffer for storing a plurality of frames of a motion video sequence; a digital filter coupled to said frame buffer, the digital filter to decompose the contents of said frame buffer as a single image to generate a plurality of wavelet coefficients, wherein the decomposition comprises processing said plurality of frames wherein at least one boundary of said plurality of frames that is on the interior of the single image is adjacent to a boundary at the same spatial location in another of said plurality of frames; a quantization unit coupled to said digital filter, the quantization unit to quantize the plurality of wavelet coefficients; and an encoder unit coupled to the output of said quantization unit, the encoder unit to compress the quantized plurality of wavelet coefficients.
2. The method of
3. The method of
4. The method of
5. The method of
8. The machine readable medium of
9. The machine readable medium of
10. The machine readable medium of
11. The machine readable medium of
13. The method of
14. The method of
15. The method of
16. The method of
quantizing at least some of said plurality of wavelet coefficients; and encoding the results of said quantizing.
18. The computer system of
20. The apparatus of
22. The method of
23. The method of
24. The method of
26. The machine readable medium of
27. The machine readable medium of
28. The machine readable medium of
30. The computer system of
31. The computer system of
33. The computer system of
35. The method of
36. The method of
38. The machine-readable medium of
39. The machine-readable medium of
41. The method of
42. The method of
43. The method of
45. The machine-readable medium of
46. The machine-readable medium of
47. The machine-readable medium of
49. The method of
50. The method of
51. The method of
53. The machine-readable medium of
54. The machine-readable medium of
55. The machine-readable medium of
57. The machine readable medium of
58. The machine readable medium of
59. The machine readable medium of
60. The machine readable medium of
quantizing at least some of said plurality of wavelet coefficients; and encoding the results of said quantizing.
61. The computer system of
62. The computer system of
63. The apparatus of
64. The apparatus of
65. The computer system of
66. The apparatus of
Not Applicable.
1. Field of the Invention
The invention relates to the field of motion video compression and decompression.
2. Background Information
Motion video data usually consists of a sequence of frames that, when displayed at a particular frame rate, will appear as "real-time" motion to the human eye. A frame of motion video comprises a number of frame elements referred to as pixels (e.g., a 640×480 frame comprises over 300,000 pixels). Each pixel is represented by a binary pattern that describes that pixel's characteristics (e.g., color, brightness, etc.). Given the number of pixels in a typical frame, storing and/or transmitting uncompressed motion video data requires a relatively large amount of computer storage space and/or bandwidth. Additionally, in several motion video applications, processing and displaying a sequence of frames must be performed fast enough to provide real-time motion (typically between 15 and 30 frames per second).
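To make the storage and bandwidth burden concrete, the following back-of-the-envelope calculation is a minimal sketch; the 24-bit color depth and 30 frames-per-second rate are assumptions chosen only for the arithmetic, not values stated above.

```python
# Uncompressed data rate of a motion video sequence (illustrative figures only).
width, height = 640, 480        # frame size used in the example above
bytes_per_pixel = 3             # assumed 24-bit color
frame_rate = 30                 # assumed upper end of the 15-30 frames/s range

pixels_per_frame = width * height                      # 307,200 pixels
bytes_per_frame = pixels_per_frame * bytes_per_pixel   # 921,600 bytes (~0.9 MB)
bytes_per_second = bytes_per_frame * frame_rate        # ~27.6 MB/s uncompressed

print(f"{pixels_per_frame:,} pixels/frame, "
      f"{bytes_per_frame:,} bytes/frame, "
      f"{bytes_per_second / 1e6:.1f} MB/s")
```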
Techniques have been developed to compress the amount of data required to represent motion video, making it possible for more computing systems to process motion video data. Typical compression techniques compress motion video data based on either: individual pixels (referred to as pixel compression); blocks or regions of pixels in a frame (referred to as block compression); individual frames; or some combination of these techniques.
Pixel compression techniques tend to be easier to implement and provide relatively high quality (U.S. patent application Ser. No. 08/866,193, filed May 30, 1997). However, pixel compression techniques suffer from lower compression ratios (e.g., large encoding bit rates) because pixel compression techniques consider, encode, transmit, and/or store individual pixels.
In contrast to pixel compression, block compression systems operate by dividing each frame into blocks or regions of pixels. Block compression is typically based on a discrete Fourier transform (DFT) or a discrete cosine transform (DCT). In particular, each region of pixels in the first frame in a sequence of frames is DFT or DCT encoded. Once encoded, the first frame becomes the "base frame." To achieve a higher degree of compression, block compression systems attempt to compress the next new frame (i.e., the second frame) and all subsequent frames in terms of previously DFT/DCT encoded regions where possible (referred to as interframe encoding). Thus, the primary aim of interframe compression is to eliminate the repetitive DFT/DCT encoding and decoding of substantially unchanged regions of pixels between successive frames in a sequence of motion video frames.
To perform interframe compression on a new frame, the pixels in each region of the new frame are compared to the corresponding pixels (i.e., at the same spatial location) in the base frame to determine the degree of similarity between the two regions. If the degree of similarity between the two regions is high enough, the region in the new frame is classified as "static." A static region is encoded by storing the relatively small amount of data required to indicate that the region should be drawn based on the previously encoded corresponding region of the base frame. In addition to classifying regions as "static," interframe compression techniques typically also perform motion estimation and compensation. The principle behind motion estimation and compensation is that the best match for a region in a new frame may not be at the same spatial location in the base frame, but may be slightly shifted due to movement of the image(s)/object(s) portrayed in the frames of the motion video sequence. If a region in a new frame is found to be substantially the same as a region at a different spatial location in the base frame, only the relatively small amount of data required to store an indication (referred to as a motion compensation vector) of the change of location of the region in the new frame relative to the base frame is stored (U.S. patent application Ser. No. 08/719,834, filed Sep. 30, 1996). By way of example, MPEG (a standard for block compression) performs a combination of: 1) intraframe compression on the first frame and on selected subsequent frames (e.g., every four frames); and 2) interframe compression on the remaining frames.
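A minimal sketch of the block-matching search behind motion estimation and compensation follows; the ±4-pixel search range and the sum-of-absolute-differences criterion are illustrative assumptions, not values taken from any particular standard.

```python
import numpy as np

def find_motion_vector(base, region, top, left, search=4):
    """Search +/-`search` pixels around (top, left) in `base` for the block that
    best matches `region`, using the sum of absolute differences (SAD)."""
    h, w = region.shape
    best_offset, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > base.shape[0] or x + w > base.shape[1]:
                continue  # candidate block falls outside the base frame
            candidate = base[y:y + h, x:x + w].astype(np.int32)
            sad = np.abs(candidate - region.astype(np.int32)).sum()
            if sad < best_sad:
                best_sad, best_offset = sad, (dy, dx)
    return best_offset, best_sad
```

If the best SAD falls below a threshold, the region can be encoded as just the motion compensation vector (dy, dx); an offset of (0, 0) with a near-zero SAD corresponds to the "static" case described above.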
In contrast to both pixel compression and block compression, frame compression systems operate on one entire frame at a time. Typical frame compression systems are based on decomposing a frame into its different components using a digital filter, and then encoding each component using the coding technique best suited to that component's characteristics. To provide an example, subband coding is a technique by which each frame is decomposed into a number of frequency subbands, which are then encoded using the coding technique best suited to that subband's characteristics. As another example, various references describe different frame compression systems that are based on using wavelets to decompose a frame into its constituent components (e.g., U.S. Pat. Nos. 5,661,822; 5,600,373).
When using a digital filter in frame compression systems, a problem arises due to the lack of input data along the boundaries of a frame. For example, when a digital filter begins processing at the left boundary of a frame, some of the filter inputs required by the filter do not exist (i.e., some of the filter inputs are beyond the left boundary of the frame). Several techniques have been developed in an attempt to solve the problem of the digital filter input requirements extending beyond the boundaries of a frame. As an example, one technique uses zeros for the nonexistent filter inputs. Another technique, called circular convolution, joins the spatially opposite boundaries of the image together (e.g., the digital filtering is performed as if the left boundary of the frame is connected to the right boundary of the frame). In another system, a different wavelet is used at the boundaries (see U.S. Pat. No. 5,661,822). In still another system, the image data at the boundaries is mirrored as illustrated below in Table 1.
TABLE 1 (mirroring of image data at a frame boundary)
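The three boundary-handling techniques mentioned above can be sketched in a few lines; the one-dimensional row of pixel values and the two-sample pad width (as a hypothetical 5-tap filter might need) are assumptions used only to show the three extension patterns.

```python
import numpy as np

row = np.array([10, 12, 13, 90, 95, 97])   # hypothetical pixel values along one row
pad = 2                                    # extra samples a 5-tap filter would need

zero_fill = np.pad(row, pad, mode="constant")  # [ 0  0 10 12 13 90 95 97  0  0]
circular  = np.pad(row, pad, mode="wrap")      # [95 97 10 12 13 90 95 97 10 12]
mirrored  = np.pad(row, pad, mode="reflect")   # [13 12 10 12 13 90 95 97 95 90]
```

The zero-filled case corresponds to substituting zeros for the nonexistent filter inputs, the wrap case to circular convolution, and the mirrored case to the scheme of Table 1.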
A method and apparatus for performing multiple frame image (or super frame) compression and decompression of motion video data is described. According to one aspect of the invention, a plurality of sequential frames in a motion video sequence are collected and digitally filtered as a single image. At least some of the results of the digital filtering are then encoded to generate compressed data. According to another aspect of the invention, the plurality of sequential frames are filtered as if the boundary of each frame is adjacent to a boundary in the same spatial location of another of the plurality of sequential frames.
According to another aspect of the invention, a computer system is described including a processor and a memory. The memory provides a buffer for processing a plurality of frames of a motion video sequence as a single image. In addition, a plurality of instructions are provided, which when executed by the processor, cause the processor to generate compressed data representing the motion video sequence by decomposing the single image and compressing at least some of the resulting wavelet coefficients. According to another aspect of the invention, the decomposition is performed such that at least one boundary of a frame in the single image is processed as if it were adjacent to the same boundary of another frame in the single image.
The invention may best be understood by referring to the following description and accompanying drawings which are used to illustrate embodiments of the invention. In the drawings:
In the following description, numerous specific details are set forth to provide a thorough understanding of the invention. However, it is understood that the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the invention.
A method and apparatus for performing multiple frame image (or super frame) compression and decompression of motion video data is described. According to one aspect of the invention, multiple frames of a motion video sequence are grouped to form a multiple frame image that is compressed as a single image (or super frame). This is in contrast to the previously described techniques that operated on at most one frame of a motion video sequence at a time. According to another aspect of the invention, the frames in each multiple frame image are oriented/processed appropriately to provide inputs for the digital filter at the boundaries of the frames.
While one embodiment is shown in
Furthermore, while one embodiment is shown in which each frame is placed in a multiple frame image, alternative embodiments can compress certain frames as part of multiple frame images and compress other frames individually. Additionally, while one embodiment is illustrated in
In addition, while one embodiment is described in which the same compression technique is used to compress the frames of a motion video sequence as multiple frame images, alternative embodiments can use different compression techniques for multiple frame images and/or for selected individual frames. For example, one alternative embodiment uses a combination of: 1) subband decomposition using wavelets on selected multiple frame images formed with non-sequential frames; and 2) block compression on the remaining frames. As another example, one alternative embodiment uses a combination of: 1) pixel compression on the first frame; 2) block compression on the next three frames; 3) subband decomposition using wavelets on selected multiple frame images; and 4) block compression on the remaining frames. Thus, the concept of compressing a group of frames as a multiple frame image can be used in any number of different compression schemes that use any combination of different compression techniques.
In summary, one aspect of the invention described with reference to
As previously described, encoding schemes that use digital filters and/or wavelets have difficulty at the boundaries of the frame being compressed due to a lack of data values required beyond the boundary of that frame. While any technique (e.g., those described in the background sections) may be used for processing the boundaries of the frames in the multiple frame images when using digital filters and/or wavelets, one embodiment of the invention uses another aspect of the invention described here.
It was determined that the pixels along a given boundary of a given frame are often similar to or the same as the corresponding pixels along the same boundary of a different frame in the motion video sequence. To provide an example, the boundary pixels between different frames of a video conference are often the same because the camera and background image are fixed. As a result, the left boundary pixels of one frame will often be quite similar to or the same as the left boundary pixels of other frames of a video conference. To provide an example,
In particular,
Similarly, the third frame is rotated 180 degrees about its horizontal axis and placed in the lower left-hand corner of the multiple frame image (illustrated by the upside-down 1-4 corner labels). As a result, the bottom boundaries of the first and third frames are adjacent. In addition, the fourth frame is placed in the lower right-hand corner and is rotated 180 degrees about its horizontal and vertical axes (illustrated by the upside-down and backwards 1-4 corner labels). As a result, the bottom boundaries of the second and fourth frames are adjacent, and the left boundaries of the third and fourth frames are adjacent.
In the typical case where the pixels along the boundaries of the frames in the motion video sequence are similar (if not the same), there will be a smooth transition between the adjacent boundaries of the first, second, third, and fourth frames as oriented in FIG. 2B. Thus, the boundaries of the frames that lie on the interior of the multiple frame image no longer have the problem previously associated with a lack of data values for the digital filter.
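A minimal sketch of how four frames might be tiled into one 2×2 multiple frame image with the orientation described above follows; the specific flips are one reading of the description, and the helper name is hypothetical.

```python
import numpy as np

def build_multiple_frame_image(f1, f2, f3, f4):
    """Tile four equally sized frames into a 2x2 multiple frame image so that
    every interior seam joins the same boundary of two frames (bottom against
    bottom, right against right)."""
    top = np.hstack([f1, np.fliplr(f2)])     # f2 mirrored about its vertical axis
    bottom = np.hstack([np.flipud(f3),       # f3 mirrored about its horizontal axis
                        np.rot90(f4, 2)])    # f4 rotated 180 degrees
    return np.vstack([top, bottom])
```

With this orientation, the exterior edges of the multiple frame image also pair up (for example, the left boundaries of the first and second frames sit on spatially opposite outer edges), which is what the circular treatment described next exploits.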
Furthermore, the filter is performed on the multiple frame image such that the boundaries of the multiple frame image are treated as if they are adjacent to the spatially opposite boundary of the multiple frame image. As illustrated in
In summary, by both grouping frames together into a single multiple frame image and orienting/processing the frames in that multiple frame image properly, the pixels along the boundary of a given frame are treated as adjacent to the pixels along the same boundary of a different frame. This is advantageous over the techniques described in the background because: 1) using zeros for the nonexistent filter inputs adversely affects the filter's results; 2) the pixels along a given frame boundary are more likely to be similar to the pixels along the same boundary of a different frame than to the pixels on the opposite boundary of the same frame (the circular convolution scheme described in the background); 3) different wavelet calculations are not required along the boundaries of the frames; and 4) additional mirrored pixels need not be compressed as in the prior art technique described with reference to Table 1.
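Combining the two ideas, one level of the row-wise analysis filtering over the multiple frame image could look like the sketch below; the 5/3-style kernels are stand-ins chosen for brevity rather than the wavelet the description has in mind, and the circular padding supplies the samples needed beyond the outer edges from the spatially opposite edge.

```python
import numpy as np

LOWPASS = np.array([-1.0, 2.0, 6.0, 2.0, -1.0]) / 8.0   # example analysis lowpass
HIGHPASS = np.array([-1.0, 2.0, -1.0]) / 2.0            # example analysis highpass

def analyze_rows(image, taps):
    """Filter every row of the multiple frame image with `taps` (odd length
    assumed), wrapping circularly at the left/right edges, then keep every
    second sample. The column pass is analogous."""
    pad = len(taps) // 2
    wrapped = np.pad(image, ((0, 0), (pad, pad)), mode="wrap")
    filtered = np.apply_along_axis(np.convolve, 1, wrapped, taps, mode="valid")
    return filtered[:, ::2]
```

Applied to the output of build_multiple_frame_image above, analyze_rows(image, LOWPASS) and analyze_rows(image, HIGHPASS) produce the coarse and detail row subbands; repeating the pass on the columns and recursing on the coarse band yields the usual multi-level wavelet decomposition.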
As similarly described with reference to
Regardless of the manner in which the multiple frame image buffer unit 305 is implemented, certain embodiments of the invention include logic in the multiple frame image buffer unit 305 to store the frames in the correct orientation (e.g., to orient the frames as illustrated in FIG. 2B). Of course, alternative embodiments can store the frames in the buffer in any number of different ways. For example, an alternative embodiment can store all the frames in the same orientation and use addressing logic to determine the order for inputting the pixels to the filter.
As shown in
The results of the quantization unit 315 are then processed by an encoder unit 320. As with the quantization unit 315, any number of different coding techniques may be supported and/or selected from as part of the operation of the encoder unit 320. Since the encoders supported by the encoder unit 320 and the technique for selecting from those encoders are not critical to the invention, these concepts will not further be described herein. The output of the encoder unit 320 is a compressed motion video sequence 325.
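As a rough sketch of how the quantization unit 315 and the encoder unit 320 relate, the following uses a uniform step size and a general-purpose lossless compressor as arbitrary stand-ins; neither is the quantizer or coder the description contemplates.

```python
import numpy as np
import zlib

def quantize(coefficients, step=8.0):
    """Uniform scalar quantization of wavelet coefficients; a larger step
    discards more detail and yields smaller compressed data."""
    return np.round(np.asarray(coefficients) / step).astype(np.int16)

def encode(quantized):
    """Stand-in encoder: the quantized coefficients (mostly zeros in the detail
    subbands) are packed losslessly."""
    return zlib.compress(quantized.tobytes())
```

In this sketch, the compressed motion video sequence 325 would simply be the concatenation of the encoded subbands plus whatever header information is needed to invert the steps.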
While any number of different operations can be performed on the compressed motion video sequence 325 (e.g., storage, transmission, etc.), the compressed motion video sequence 325 will often be decompressed.
In
The inverse of the filtering operation performed during compression is then performed on the dequantized data stored in the multiple frame image buffer unit 340. Since
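The inverse path can be sketched by reversing the toy steps above; the names mirror the earlier sketches and are hypothetical, and the inverse wavelet filtering itself is elided.

```python
import numpy as np
import zlib

def dequantize(payload, shape, step=8.0):
    """Reverse the stand-in encoder and uniform quantizer from the compression
    sketch, recovering approximate wavelet coefficients."""
    q = np.frombuffer(zlib.decompress(payload), dtype=np.int16).reshape(shape)
    return q.astype(np.float64) * step

def split_multiple_frame_image(image):
    """Undo the 2x2 tiling and frame orientation of build_multiple_frame_image,
    recovering the four frames in their original orientation."""
    h, w = image.shape[0] // 2, image.shape[1] // 2
    f1 = image[:h, :w]
    f2 = np.fliplr(image[:h, w:])
    f3 = np.flipud(image[h:, :w])
    f4 = np.rot90(image[h:, w:], 2)
    return f1, f2, f3, f4
```

Between the two steps, the synthesis filters corresponding to the analysis filters would be applied with the same circular boundary treatment, reproducing the single multiple frame image that is then split back into individual frames.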
While a wavelet based compression scheme is described which uses quantization and encoding, alternative embodiments could use other wavelet coefficient compression techniques. Furthermore, while a wavelet based compression scheme is shown in
Different embodiments of the invention can implement the different units illustrated in
While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described. The method and apparatus of the invention can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting on the invention.
Patent Citations
U.S. Pat. No. 3,922,493
U.S. Pat. No. 5,315,670 (priority Nov. 12, 1991), General Electric Company: Digital data compression system including zerotree coefficient coding
U.S. Pat. No. 5,412,741 (priority Jan. 22, 1993), MediaTek Inc.: Apparatus and method for compressing information
U.S. Pat. No. 5,506,866 (priority Nov. 15, 1993), Delaware Radio Technologies, LLC: Side-channel communications in simultaneous voice and data transmission
U.S. Pat. No. 5,546,477 (priority Mar. 30, 1993), Creative Technology Ltd.: Data compression and decompression
U.S. Pat. No. 5,600,373 (priority Jan. 14, 1994), Foto-Wear, Inc.: Method and apparatus for video image compression and decompression using boundary-spline-wavelets
U.S. Pat. No. 5,602,589 (priority Aug. 19, 1994), Xerox Corporation: Video image compression using weighted wavelet hierarchical vector quantization
U.S. Pat. No. 5,604,824 (priority Sep. 22, 1994), Foto-Wear, Inc.: Method and apparatus for compression and decompression of documents and the like using splines and spline-wavelets
U.S. Pat. No. 5,638,068 (priority Apr. 28, 1994), Intel Corporation: Processing images using two-dimensional forward transforms
U.S. Pat. No. 5,661,822 (priority Mar. 30, 1993), Creative Technology Ltd.: Data compression and decompression
U.S. Pat. No. 5,798,726 (priority Feb. 3, 1995), Harris Corporation: Air traffic surveillance and communication system
U.S. Pat. No. 5,909,251 (priority Apr. 10, 1997), Cognitech, Inc.: Image frame fusion by velocity estimation using region merging