A method of rendering an image of three-dimensional laser scan data is described. The method includes providing a set of laser scan data for a given scan as a spherical displacement map and generating a tessellation pattern by sampling the spherical displacement map. A graphics processing unit may generate the tessellation pattern.
1. A method of rendering an image of three-dimensional laser scan data, the method comprising:
obtaining a plurality of sets of laser scan data for a set of scans as spherical displacement maps to a graphics processing unit, wherein each set of laser scan data corresponds to a different respective scan; and
for each set of laser scan data, the graphics processing unit generating an image corresponding to the respective scan by:
generating a tessellation pattern comprising a set of triangles by sampling a spherical displacement map corresponding to the respective set of laser scan data; and
converting each triangle into raster data comprising pixels, each pixel including a value of depth;
from different images each corresponding to a different respective set of laser scan data, the graphics processing unit generating a combined rendered image by:
writing pixel data generated from a respective set of laser scan data to a depth buffer;
comparing data of a pixel generated from a different respective set of laser scan data to pixel data written to the depth buffer; and
rendering one image by combining an image generated from the respective set of laser scan data with an image generated from a different respective set of laser scan data.
11. A computer system comprising:
memory;
at least one processing unit comprising at least one graphics processing unit;
wherein the memory stores a plurality of sets of laser scan data for a set of scans as spherical displacement maps, wherein each set of laser scan data corresponds to a different respective scan and for each set of laser scan data, the at least one graphics processing unit is (are) configured to generate an image corresponding to the respective scan by:
generating a tessellation pattern comprising a set of triangles by sampling a spherical displacement map corresponding to a set of laser scan data; and
converting each triangle into raster data comprising pixels, each pixel including a value of depth;
from different images, each corresponding to a different respective set of laser scan data, the graphics processing unit generating a combined rendered image by:
writing pixel data generated from a respective set of laser scan data to a depth buffer;
comparing a pixel generated from a different respective set of laser scan data to pixel data written to the depth buffer; and
rendering one image by combining an image generated from the respective set of laser scan data with an image generated from a different respective set of laser scan data.
2. A method according to
receiving a set of laser scan data generated by a laser scanner at a given location; and
copying a value of range for a given laser scan point at a given azimuth and a given elevation from the laser scan data into a respective texel of an at least two-dimensional texture at a texel position corresponding to the given azimuth and the given elevation.
3. A method according to
identifying discontinuities between adjacent points in each spherical displacement map; and
marking said adjacent points.
4. A method according to
generating a set of normal maps in dependence upon said set of spherical displacement maps.
5. A method according to
6. A method according to
generating a blending texture in dependence upon each said spherical displacement map.
7. A method according to
generating a patch map for a given scan, the patch map comprising a plurality of patches.
8. A method according to
setting the relative tessellation level of a given patch in dependence upon discontinuities.
9. A method according to
determining whether a vertex forming part of a triangle in the tessellation pattern is marked as being on or adjacent to a discontinuity;
in dependence upon determining that the vertex is invalid, culling the triangle.
10. A non-transitory computer readable medium which stores a computer program for performing a method according to
12. A computer system according to
This application is the National Stage of International Application No. PCT/GB2014/053616, filed on Dec. 5, 2014, which claims the benefit of United Kingdom Patent Application No. GB1322113.0, filed Dec. 13, 2013. The contents of all of these applications are incorporated herein by reference.
Various embodiments of the present invention relate to a method of, and a system for, rendering an image of laser scan data.
A three-dimensional laser scanner can be used to survey an environment such as a process plant, vessel or other facility. A typical scanner includes a laser rangefinder which can measure a distance between the scanner and a point on a surface which is in view. By sweeping through a field of view (typically 360 degrees horizontally and nearly 180 degrees vertically), the scanner can capture a set of ranges (herein referred to as "laser scan data") for the surrounding environment. These can be used to generate a set of points in three-dimensional space, often referred to as a "point cloud". An example of a point cloud is described in EP 1 176 393 A2.
Multiple scans can be performed at different positions in an environment and point clouds from different scans can be combined to produce a combined (or “aggregated”) point cloud covering a wider area. An example of combining point cloud data can be found in WO 2004/003844 A1.
In addition to acquiring range data, the scanner can also capture images of the surrounding environment by measuring intensity of reflected laser light or using a camera.
The point cloud(s) and images can be used to visualize and/or analyze an environment using a point cloud viewer application or a three-dimensional computer-aided design (CAD) application. Typically, these applications fall into two categories, namely those that work with points from individual scans and those that work with points combined from multiple scans.
One of the simplest applications of laser scanning is to display an image captured by an individual scan. Because the image from a laser scan is spherical, covering the area around the laser scanner, the software application can map the image onto the inside of a sphere. The application can display a portion of the sphere on a computer screen. The user can rotate the view in order to view different portions of the entire image. This presentation is called a “bubble view”.
In bubble view, the user can select a spot on the image and retrieve the three-dimensional coordinate of that location using the point cloud data for that laser scan. By selecting two points, the user can measure distances.
One type of application can overlay a three-dimensional CAD model in a bubble view. Because the application knows the three-dimensional location of the points in bubble view, it can obscure the appropriate portions of the CAD model behind the bubble view. This combined image can be useful when designing new areas of the facility.
An appealing feature of a bubble view is that it looks realistic. Realism comes from the image captured at the scanner location. A limitation of bubble views is, however, that they can only be produced for the locations at which a laser scanner was positioned. A user can select a bubble view and rotate to the left and right or up and down, but he/she cannot move forward, backward, horizontally or vertically in order to view the environment from a different perspective.
To allow free roaming, some software applications work with a combined point cloud from multiple scans. Using such an application, a user chooses a location within a facility and a viewing direction. The application then displays each point in the combined point cloud around that location from the point of view of the user. The user can move the viewing location and direction to see the points from different perspectives.
Some applications can display a CAD model in the same three-dimensional space as the combined point cloud. A user can then measure distances between locations in the CAD model and points in the point cloud. The user can also determine if portions of the point cloud intersect portions of the CAD model.
Although displaying a combined point cloud allows the user to view points from more than one perspective, this approach can have one or more drawbacks.
Displaying individual points tends to be computationally expensive. Gaps can appear in a representation of a scanned surface at close distances and so it can become difficult to discern the surfaces.
According to a first aspect of the present invention there is provided a method of rendering an image of three-dimensional laser scan data. The method comprises providing a set of laser scan data for a given scan as a spherical displacement map, generating a tessellation pattern by sampling the spherical displacement map, and rendering the image using the tessellation pattern.
By preserving laser scan data for a given scan as a set of points (as opposed to aggregating laser scan data for multiple scans) and by taking advantage of the fact that the laser scan data can be provided in the form of a displacement map which can be handled directly by a graphics system, an image of the laser scan data can be rendered efficiently and/or quickly. This can be particularly helpful when combining images from multiple scans since each scan can be processed independently and the images from different scans can be easily combined in a common buffer. This allows efficient/fast rendering not only of static images, but also moving images, for example, as a user “walks through” the environment.
Providing the set of laser scan data for the given scan as the spherical displacement map may comprise receiving a set of laser scan data generated by a laser scanner at a given location and copying a value of range for a given laser scan point at a given azimuth and a given elevation from the laser scan data into a respective texel of a two-dimensional texture (or higher-dimensional texture) at a texel position corresponding to the given azimuth and the given elevation.
The method may comprise a graphics processing unit (GPU) generating the tessellation pattern. However, the method may comprise a central processing unit (CPU) generating the tessellation pattern.
The processing unit may be configured using Microsoft® DirectX® 11 (or later) or OpenGL 4.4 (or later) application programming interface (API).
The method may further comprise identifying discontinuities between adjacent points in the spherical displacement map and marking the adjacent points. Marking the adjacent points may comprise setting the displacement value to a predefined number, such as 0 or −1.
The method may further comprise generating a normal map in dependence upon said spherical displacement map. The displacement map and normal map may be combined in one texture. The one texture may comprise at least four channels.
Generating the normal map may comprise calculating a normal for each point in the spherical displacement map and storing the normal in the normal map. The normal may comprise first, second and third vector component values.
The method may further comprise generating a blending texture in dependence upon the spherical displacement map. The blending texture may comprise an array of blending texels, each blending texel comprising a value which depends on distance from a discontinuity.
The method may further comprise generating a patch map for a given scan, the patch map comprising a plurality of patches.
The patch map may comprise polygonal patches, each patch having three or more vertices. The patches may be the same shape. The patches may be the same size. The patches may be rectangles. If the patches are rectangles, then the patch map may comprise positions of opposite vertices.
Position, shape and/or size of the patches may depend on discontinuities in the spherical displacement map.
The patch map may include a relative tessellation level for each patch. The method may comprise setting the relative tessellation level of a given patch in dependence upon discontinuities. The method may comprise setting the relative tessellation level of a given patch in dependence upon normal variance across the given patch.
The method may comprise calculating an absolute tessellation level for a given patch in dependence upon a view position and/or visibility of the given patch.
The method may further comprise determining whether a vertex forming part of a triangle in the tessellation pattern is marked as being on or adjacent to a discontinuity and, in dependence upon determining that the vertex is invalid, culling (or “discarding”) the triangle.
The method may further comprise generating a set of pixels for the scan and performing a depth test.
The method may comprise colouring a pixel in dependence upon the normal of the pixel. The method may comprise colouring a pixel in dependence upon intensity and/or colour in a corresponding part of an image.
The method may comprise providing more than one set of laser scan data, each set of laser scan data corresponding to a respective scan. Each set of laser scan data is provided as a respective spherical displacement map. The method may comprise combining rendered images from different scans. Combining rendered images from different scans may comprise using a depth buffer.
According to a second aspect of the present invention there is provided a computer program which comprises instructions for performing the method.
According to a third aspect of the present invention there is provided a computer readable medium or non-transitory computer-readable medium which stores the computer program.
According to a fourth aspect of the present invention there is provided a computer system comprising memory and at least one processing unit. The memory stores a set of laser scan data for a given scan as a spherical displacement map and the at least one processing unit is (are) configured to generate a tessellation pattern by sampling the spherical displacement map.
The at least one processing unit may comprise at least one graphical processing unit. The at least one processing unit may comprise at least one central processing unit. The at least one processing unit may comprise one processing unit, for example, one graphical processing unit.
The at least one processing unit may be configurable using a Microsoft® DirectX® 11 (or later) application programming interface. The at least one processing unit may be configurable using OpenGL application programming interface.
Certain embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings.
System Overview
Referring to
The system 1 includes one or more three-dimensional laser scanners 2 for surveying an environment 3 which includes a number of target surfaces 4. The, or each, laser scanner 2 includes a laser scanning unit 5 which generates laser scan data 6 (herein referred to simply as "scan data"), an optional camera 7 which can be used to generate image data 8, for example in the form of a JPEG file, and on-board storage 9 for storing data 6, 8. The, or each, laser scanner 2 includes processor(s) 10 and memory 11 which can be used to process the laser scan data 6, for example, to format the data. A separate computer system (not shown) can be used to process the data.
The laser scanning unit 5 generates an element of scan data 6 for a point by emitting a pulsed laser beam 12 in a given direction (i.e. at a given horizontal angle and a given vertical angle), sensing the reflected beam 13 that is reflected off a target surface 4, back to the laser scanner 2, and determining a range to the target surface 4 based on time of flight of the laser beam 12, 13. A set of scan data 6 can be acquired by scanning the laser beam 12 horizontally and vertically so as to build up a set of points around the scanner 2. Each point in the scan data 6 can be provided in the form of a set of Cartesian coordinates, i.e. each point is expressed in (x, y, z). Points in a set of data 6 can be ordered by azimuth and elevation.
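The conversion between the scanner's native spherical measurements and Cartesian points can be illustrated with a short sketch. The following C++ fragment is a minimal, hedged illustration: the axis convention (z pointing up, azimuth measured in the horizontal plane) and the function name are assumptions made for the example rather than details taken from any particular scanner.

```cpp
#include <cmath>

struct CartesianPoint { double x, y, z; };

// Convert one measurement (azimuth, elevation, range) into Cartesian
// coordinates. Azimuth is the horizontal angle and elevation the vertical
// angle, both in radians; range is the measured distance.
CartesianPoint sphericalToCartesian(double azimuth, double elevation, double range)
{
    const double horizontal = range * std::cos(elevation); // projection onto the horizontal plane
    return CartesianPoint{
        horizontal * std::cos(azimuth), // x
        horizontal * std::sin(azimuth), // y
        range * std::sin(elevation)     // z (up)
    };
}
```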
The scan data 6 and, optionally, image data 8 are supplied to a computer system 15 which includes a scan data processing module 16 which generates, for each scan, a set 17 of textures 18, 19, 20, 21 (some of which are combined into a single texture 22) and a patch map 23. The computer system 15 also includes storage 24 in the form of one or more hard drives for storing data. The hard drive(s) may be hard disk drives, solid-state drives, optical drives or other forms of suitable storage.
As will be explained in more detail later, the set of textures 17 includes a spherical displacement map 18 (herein also referred to as a "depth map") and a normal map 19 obtained from scan data 6. As will also be explained in more detail later, the displacement map 18 and the normal map 19 are combined in a single, 4-channel texture 22 (herein referred to as a "combined texture").
The set of textures 17 can include a blending map 20. The set of textures 17 can include a colour map 21 obtained from the image data 8.
The computer system 15 includes user input devices 25 (such as a mouse and/or keyboard), a rendering system 26 and a display 27 for displaying an image 28 from a view point 29 which is supplied by the user via a user input device 25. The rendering system 26 produces triangulated three-dimensional surfaces using the set of textures 17 and the patch map 23 obtained from one or more different scans and renders the surfaces in real time, from any view point, combining surfaces obtained from the scan(s) in an image 28.
The scan data processing module 16 and the rendering system 26 may be implemented in different computer systems. For example, the scan data processing module 16 may be implemented all or in part by a laser scanner 2. Alternatively, the laser scanner 2 may generate the displacement map 18 and a first computer system may generate the other texture(s) 19, 20, 21 and the patch map 23, or just the patch map 23, and supply texture(s) 18, 19, 20, 21 and patch map 23 to a second computer system which implements the rendering system 26.
Referring also to
The computer system 15 includes one or more central processing units (CPUs) 31 having respective memory caches (not shown), system memory 32, a graphics module 33, for example in the form of a graphics card, which includes a graphics processing unit (GPU) 34 and graphics memory 35 (which may be referred to as "video RAM"), and an input/output (I/O) interface 36 operatively connected by a bus 37. An example of a suitable graphics module 33 is an NVIDIA® GeForce 460 GPU with 1 GB of video RAM.
The I/O interface 36 is operatively connected to bus and/or network interface(s) 38 (such as USB interface or WLAN interface) for receiving scan data 6 from the, or each, scanner 2. The I/O interface 36 is also operatively connected to user input devices 25 and the storage 24, for example, in the form of one or more hard disk drives and/or solid-state drives. Some peripheral devices, such as removable storage, and other computer components are not shown. The computer system 15 may have a different configuration from that shown in
The scan processing module 16 is implemented in software. Computer code 39 for implementing the scan processing module 16 is held in storage 24 and loaded into memory 32 for execution by the CPU(s) 31.
The rendering system 26 is preferably implemented using a GPU so as to take advantage of the enhanced graphics processing capabilities of a GPU, in particular tessellation. However, the rendering system 26 can be implemented using one or more CPUs.
As will be explained in more detail later, the rendering system 26 implements a graphics pipeline 60 (
Scan Processing
Referring to
The module 16 loads a set of scan data 6 for a scan from a scanner 2 via a bus or network (not shown) or portable storage (not shown) and/or from storage 24 (step S1). The scan data 6 is in, for example, Cartesian coordinates and so the module 16 converts and stores the scan data 6 as a spherical displacement map 18 (step S2).
Referring also to
Scan data 6 for a scan is copied into a texture 22. The texture 22 takes the form of a (u,v) map which includes a two-dimensional array of texture elements (or "texels") 46. Each texel 46 comprises first, second, third and fourth elements 46₁, 46₂, 46₃, 46₄.
As explained earlier, the texture 22 has four channels. The first channel stores the displacement map 18 and the second, third and fourth channels store the normal map 19.
The horizontal axis, u, of the texture 22 is proportional to the horizontal angle (or "azimuth") of the laser scanner 2. The vertical axis, v, of the texture is proportional to the vertical angle ("elevation") of the laser scanner 2. Each texel 46 stores a value of range for a point in the scan as a depth, dᵢ,ⱼ, at the azimuth and elevation corresponding to its (u,v) location. Thus, the values of range, when stored in the texture 22, provide a spherical displacement map 18 of the laser scan.
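A sketch of step S2 (writing range values into the (u, v) texture) may make this mapping concrete. In the following C++ fragment, the texture size, the single-channel float storage, the angular ranges and the helper names are assumptions made for illustration; only the idea that u tracks azimuth, v tracks elevation and each texel holds a range is taken from the description.

```cpp
#include <cmath>
#include <vector>

// Illustrative single-channel depth texture; in practice this would be one
// channel of the 4-channel combined texture 22.
struct DepthTexture {
    int width, height;
    std::vector<float> depth;
    DepthTexture(int w, int h) : width(w), height(h), depth(w * h, -1.0f) {} // -1 marks "no return"
    float& at(int u, int v) { return depth[v * width + u]; }
};

// Write one Cartesian scan point into the texel whose (u, v) position
// corresponds to its azimuth and elevation.
void writeSample(DepthTexture& tex, double x, double y, double z)
{
    const double range = std::sqrt(x * x + y * y + z * z);
    if (range <= 0.0)
        return;
    const double azimuth   = std::atan2(y, x);      // -pi .. pi
    const double elevation = std::asin(z / range);  // -pi/2 .. pi/2

    const double pi = 3.14159265358979323846;
    const int u = static_cast<int>((azimuth + pi) / (2.0 * pi) * (tex.width - 1));
    const int v = static_cast<int>((elevation + pi / 2.0) / pi * (tex.height - 1));
    tex.at(u, v) = static_cast<float>(range);       // the depth d(i,j)
}
```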
Image data 8 in the form of RGB component values can be copied into a colour map 21 in a similar way.
Referring to
The displacement map 18 is spherical and is taken from the point of view of the laser scanner 2. Thus, a discontinuity between adjacent texels 46 in the spherical displacement map 18 represents a region or line 47 where the laser scanner 2 has moved from one surface 48₁ to another surface 48₂. Therefore, it is preferable that no surface is drawn in this area.
For example, as shown in
Adjacent texels 46 in u- and/or v-directions can be compared.
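A minimal sketch of that comparison follows, assuming a simple fixed depth threshold (the description does not specify how large a difference counts as a discontinuity) and using −1 as the predefined marker value mentioned in the summary.

```cpp
#include <cmath>
#include <vector>

// Mark texels on either side of a depth discontinuity so that no surface is
// later drawn across them. The threshold and the -1 marker are assumptions.
void markDiscontinuities(std::vector<float>& depth, int width, int height, float threshold)
{
    std::vector<float> marked = depth; // write marks into a copy so comparisons use the original values
    auto at   = [&](int u, int v) { return depth[v * width + u]; };
    auto mark = [&](int u, int v) { marked[v * width + u] = -1.0f; };

    for (int v = 0; v < height; ++v)            // neighbours in the u-direction
        for (int u = 0; u + 1 < width; ++u)
            if (std::fabs(at(u, v) - at(u + 1, v)) > threshold) { mark(u, v); mark(u + 1, v); }

    for (int v = 0; v + 1 < height; ++v)        // neighbours in the v-direction
        for (int u = 0; u < width; ++u)
            if (std::fabs(at(u, v) - at(u, v + 1)) > threshold) { mark(u, v); mark(u, v + 1); }

    depth.swap(marked);
}
```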
Referring to
For each texel 46 in the displacement map 18, the module 16 calculates the Cartesian coordinates from the spherical coordinates represented by the texel 46. The module calculates a normal 49 for a given texel 46 (which is shown shaded in
The module 16 stores the x-, y- and z-components (nx, ny, nz) of the normal 49 in the texel 46 in the second, third and fourth elements 46₂, 46₃, 46₄.
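A hedged sketch of the normal calculation follows, assuming the normal is taken from the cross product of vectors to the immediate neighbours in the u- and v-directions (the description says only that a normal is calculated for each texel from the Cartesian positions).

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalize(const Vec3& v) {
    const double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// p, neighbourU and neighbourV are the Cartesian positions of the texel and
// of its neighbours in the u- and v-directions (obtained with the
// spherical-to-Cartesian conversion sketched earlier).
Vec3 texelNormal(const Vec3& p, const Vec3& neighbourU, const Vec3& neighbourV)
{
    const Vec3 du{ neighbourU.x - p.x, neighbourU.y - p.y, neighbourU.z - p.z };
    const Vec3 dv{ neighbourV.x - p.x, neighbourV.y - p.y, neighbourV.z - p.z };
    return normalize(cross(du, dv)); // (nx, ny, nz), stored in channels two to four
}
```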
Referring to
Each texel 50 in the blending texture 20 contains a blending value, s, which lies in a range between 0 and 1.
The scan data processing module 16 identifies texels 46 in the displacement map 18 which correspond to points along the discontinuity and sets corresponding texels 50 in the blending texture 20 (i.e. having the same values of u and v) to have a blending value of 0. The scan data processing module 16 generates further blending values which gradually propagate through the blending texture 20, i.e. values which gradually increase with distance from the discontinuity.
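One way to generate such values is a flood fill outwards from the marked texels, with the blending value rising linearly to 1 over a fixed number of texels. The breadth-first fill and the falloff length in the following C++ sketch are assumptions; the description only requires values that start at 0 on the discontinuity and gradually increase away from it.

```cpp
#include <queue>
#include <vector>

std::vector<float> buildBlendingTexture(const std::vector<bool>& onDiscontinuity,
                                        int width, int height, int falloffTexels)
{
    std::vector<int> dist(width * height, -1); // texel distance to the nearest discontinuity
    std::queue<int> frontier;
    for (int i = 0; i < width * height; ++i)
        if (onDiscontinuity[i]) { dist[i] = 0; frontier.push(i); }

    while (!frontier.empty()) {                // 4-connected breadth-first flood fill
        const int i = frontier.front(); frontier.pop();
        const int u = i % width, v = i / width;
        const int  next[4]  = { i - 1, i + 1, i - width, i + width };
        const bool valid[4] = { u > 0, u + 1 < width, v > 0, v + 1 < height };
        for (int k = 0; k < 4; ++k)
            if (valid[k] && dist[next[k]] < 0) { dist[next[k]] = dist[i] + 1; frontier.push(next[k]); }
    }

    std::vector<float> blend(width * height, 1.0f);
    for (int i = 0; i < width * height; ++i)
        if (dist[i] >= 0 && dist[i] < falloffTexels)
            blend[i] = static_cast<float>(dist[i]) / falloffTexels; // 0 on the discontinuity, towards 1 away from it
    return blend;
}
```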
Referring to
Each patch 51 defines a region of the scan textures in (u,v) coordinates. Each patch has a set of vertices 52.
The patches 51 can be any shape and can be regularly or irregularly distributed in the scan. The shape and position of the patches 51 can be predetermined or based on any aspect of the scan, such as locations of discontinuities or variance of normals.
For example,
In another example,
In yet another example,
Referring also to
Referring to
The relative tessellation level 55 represents how much a patch 51 will be tessellated relative to the other patches 51. An absolute tessellation level is calculated later, after view-dependent factors or other factors are applied. The relative tessellation level 55 can be predetermined or based on any aspect of the patch such as the location of discontinuities or the variance of normals.
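A sketch of one possible patch map follows, assuming a regular grid of equally sized rectangular patches (the simplest of the layouts allowed above), where each patch stores its opposite corners and a relative tessellation level supplied by a caller-provided measure such as normal variance or the presence of discontinuities.

```cpp
#include <vector>

struct Patch {
    float u0, v0;            // one corner, in normalised (u, v) coordinates
    float u1, v1;            // the opposite corner
    float relativeTessLevel; // how finely this patch is tessellated relative to the others
};

// LevelFn: (u0, v0, u1, v1) -> float; e.g. a function returning higher values
// for regions with large normal variance or nearby discontinuities.
template <typename LevelFn>
std::vector<Patch> buildPatchMap(int patchesU, int patchesV, LevelFn levelForRegion)
{
    std::vector<Patch> patches;
    for (int j = 0; j < patchesV; ++j)
        for (int i = 0; i < patchesU; ++i) {
            Patch p;
            p.u0 = static_cast<float>(i) / patchesU;
            p.v0 = static_cast<float>(j) / patchesV;
            p.u1 = static_cast<float>(i + 1) / patchesU;
            p.v1 = static_cast<float>(j + 1) / patchesV;
            p.relativeTessLevel = levelForRegion(p.u0, p.v0, p.u1, p.v1);
            patches.push_back(p);
        }
    return patches;
}
```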
The displacement map 18, the normal map 19, the blending texture 20 and the patch map 23 can be produced once for each scan and stored for later use.
Rendering
Referring to
The rendering system 26 employs Microsoft® DirectX® 11 running on the GPU 34. However, other graphics systems, such as OpenGL 4.x, can be used.
The textures 18, 19, 20 and patch map 23 are stored in a form which can be efficiently processed by the rendering system 26.
The textures 18, 19, 20 are stored as two-dimensional texture resources and the patch map 23 is stored as a vertex buffer with a topology of a one-control-point patch list. As explained earlier, the displacement and normal maps 18, 19 are combined into a combined texture 22 with four channels.
The textures 18, 19, 20 and the vertex buffer 23, i.e. patch map 23, are sent to the GPU 34.
Referring to
The graphics pipeline 60 includes a vertex shader 61, a hull shader 62, a tessellation shader 63 (herein also referred to simply as a “tessellator”), a domain shader 64, a geometry shader 65, a rasterizer 66, a pixel shader 67 and an output merger stage 68 which, among other things, carries out a depth test in conjunction with a depth buffer 69 (also referred to as a “z-buffer”).
In OpenGL, a hull shader is referred to as a “tessellation control shader”, a tessellation shader is referred to as a “primitive generator”, a domain shader is referred to as a “tessellation evaluation shader” and pixel shader is referred to as a “fragment shader”.
Referring to
In this example, the absolute tessellation level 70 for each patch 51 is calculated based on the view point 29 and the relative tessellation level 55. The hull shader 62 also adjusts the tessellation level around the edges of a patch 51 so that it matches the tessellation level of the neighbouring patches 51. This produces continuous tessellation across patches 51.
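The following CPU-side C++ sketch mirrors that idea (the actual implementation is a hull shader running on the GPU). The distance-based view factor and the choice of the larger of the two neighbouring levels for a shared edge are assumptions; the description requires only a view-dependent absolute level and matching levels along shared patch edges.

```cpp
#include <algorithm>
#include <cmath>

// Absolute tessellation level for one patch: the relative level scaled by a
// view-dependent factor (coarser when the patch is further from the view
// point), clamped to the hardware maximum (64 in DirectX 11).
float absoluteTessLevel(float relativeLevel, float distanceToViewPoint, float maxLevel = 64.0f)
{
    const float viewFactor = 1.0f / std::max(distanceToViewPoint, 1.0f);
    return std::clamp(relativeLevel * viewFactor * maxLevel, 1.0f, maxLevel);
}

// Give a shared edge the same level in both patches so that tessellation is
// continuous (crack-free) across the patch boundary.
float sharedEdgeLevel(float levelPatchA, float levelPatchB)
{
    return std::max(levelPatchA, levelPatchB);
}
```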
Referring also to
The tessellation shader 63 generates a tessellation pattern 71 (herein referred to as a “tessellated patch”) based on the absolute tessellation level 70 calculated by the hull shader 62. The tessellation pattern 71 is made up of triangles 72. Points 73 in the tessellation pattern 71 contain their (u,v) coordinates within the combined texture 22.
As shown in
Referring also to
The domain shader 64 calculates the position of each point 73 in the tessellation pattern 71 in a patch 51. The domain shader 64 samples the combined depth/normal texture 22 at the (u, v) coordinate of the point 73 to retrieve values of depth and normal. If the sample includes a texel 46 that is next to a discontinuity, the point is marked invalid.
The domain shader 64 calculates the azimuth and elevation from the (u, v) coordinate using information in the patch 51. The domain shader 64 then converts the spherical coordinates (i.e. azimuth, elevation, depth) into view coordinates 74 either directly or (as shown in
The domain shader 64 effectively displaces points on a spherical surface by the depth specified in the (spherical) displacement map 18.
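A CPU-side C++ sketch of this evaluation follows (the actual implementation is a domain shader). The (u, v)-to-angle mapping and the world-space convention match the earlier sketches and are assumptions; sampleDepth stands in for the texture fetch from the combined texture 22 and is taken to return the marker value for texels next to a discontinuity.

```cpp
#include <cmath>

struct Vertex { float x, y, z; bool valid; };

template <typename SampleFn> // SampleFn: (u, v) -> float depth
Vertex evaluateTessellationPoint(float u, float v, SampleFn sampleDepth)
{
    const float pi = 3.14159265358979f;
    const float azimuth   = u * 2.0f * pi - pi;  // full horizontal sweep
    const float elevation = v * pi - pi / 2.0f;  // near-180-degree vertical sweep

    const float depth = sampleDepth(u, v);
    Vertex out;
    out.valid = depth > 0.0f;                    // marked texels produce invalid vertices
    // Displace a point on the unit sphere by the sampled depth.
    out.x = depth * std::cos(elevation) * std::cos(azimuth);
    out.y = depth * std::cos(elevation) * std::sin(azimuth);
    out.z = depth * std::sin(elevation);
    // A view/projection transform would be applied before rasterization.
    return out;
}
```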
Referring to
The geometry shader 65 determines if a triangle 72 contains any invalid vertices 77. A vertex 74 is invalid if it is next to a discontinuity. If an invalid vertex 77 is present, then the geometry shader 65 discards the triangle 72, i.e. discards the invalid triangle 78. The geometry shader 65 also determines if the triangle 72 is at an angle which is too oblique to the laser scanner position. If so, the triangle 72 is also discarded. The geometry shader 65 outputs an updated vertex list 79.
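A sketch of the two tests follows, assuming an illustrative obliqueness limit of 80 degrees (the description does not give a threshold).

```cpp
#include <cmath>

struct Vec3f { float x, y, z; };

static float dot(const Vec3f& a, const Vec3f& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3f sub(const Vec3f& a, const Vec3f& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3f cross(const Vec3f& a, const Vec3f& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float length(const Vec3f& v) { return std::sqrt(dot(v, v)); }

// Returns false if the triangle should be discarded: either one of its
// vertices lies on or next to a discontinuity, or its face is too oblique to
// the laser scanner position.
bool keepTriangle(const Vec3f p[3], const bool valid[3], const Vec3f& scannerPos)
{
    if (!valid[0] || !valid[1] || !valid[2])
        return false;

    const Vec3f n = cross(sub(p[1], p[0]), sub(p[2], p[0]));  // face normal
    const Vec3f toScanner = sub(scannerPos, p[0]);            // direction back towards the scanner
    const float denom = length(n) * length(toScanner);
    if (denom <= 0.0f)
        return false;                                         // degenerate triangle
    const float cosAngle = std::fabs(dot(n, toScanner)) / denom;
    const float cosLimit = std::cos(80.0f * 3.14159265f / 180.0f);
    return cosAngle >= cosLimit;                              // discard when viewed too obliquely
}
```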
Referring to
Each rasterized triangle 72 is then processed by the pixel shader 67.
Referring to
Referring to
The pixel shader 67 can measure quality of the depth by sampling, at a given location and offset, the depth 82. The pixel shader 67 can favour higher-quality samples at the same screen coordinate. The pixel shader 67 can use information in the blending texture 20 and/or the combined depth/normal texture 22 to determine the quality of the sample.
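As noted in the summary, a pixel may be coloured in dependence upon its normal. A simple Lambert-style shading of that kind is sketched below; the light direction, ambient term and base colour are assumptions made for the example, and the real pixel shader (which may also weigh samples by the blending texture) runs in HLSL.

```cpp
#include <algorithm>

struct Rgb { float r, g, b; };

// Shade a pixel from its interpolated normal (nx, ny, nz) and an assumed,
// normalised light direction (lx, ly, lz).
Rgb shadeFromNormal(float nx, float ny, float nz,
                    float lx, float ly, float lz,
                    const Rgb& baseColour)
{
    const float lambert = std::max(0.0f, nx * lx + ny * ly + nz * lz);
    const float ambient = 0.2f; // illustrative ambient term
    const float k = std::min(1.0f, ambient + lambert);
    return { baseColour.r * k, baseColour.g * k, baseColour.b * k };
}
```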
The output merger stage 68 performs, among other things, a depth test.
Referring also to
Pixel data generated from several scans can be written to the same depth buffer 69 and, thus, an image comprising data from several scans can be formed.
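The depth test that makes this combination possible can be sketched on the CPU as follows (on the GPU it is part of the output merger stage). The far-plane initialisation and the packed colour buffer are assumptions used to keep the example self-contained.

```cpp
#include <limits>
#include <vector>

struct FrameBuffers {
    int width, height;
    std::vector<float> depth;     // the depth (z-) buffer, shared by all scans
    std::vector<unsigned> colour; // packed RGBA, for illustration
    FrameBuffers(int w, int h)
        : width(w), height(h),
          depth(w * h, std::numeric_limits<float>::max()),
          colour(w * h, 0) {}
};

// Called once per rasterized pixel of every scan: a pixel only replaces what
// is already in the buffers if it is closer to the view point, so whichever
// scan produced the nearest surface at a screen coordinate wins.
void depthTestAndWrite(FrameBuffers& fb, int px, int py, float pixelDepth, unsigned pixelColour)
{
    const int i = py * fb.width + px;
    if (pixelDepth < fb.depth[i]) {
        fb.depth[i] = pixelDepth;
        fb.colour[i] = pixelColour;
    }
}
```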
Referring to
The image 91 shows the interior of a process plant which includes, among other things, a floor 92, a ceiling 93, storage tanks 94, a row of three reaction vessels 95, various pipes 96, a row of four control panels 97, cable trays 98 suspended from the ceiling 93, lighting units 99 (or “luminaries”) and a row of four pallet-type liquid storage containers 100.
As shown in
Referring to
Objects which should be hidden, such as the reaction vessels 95, are still transparent since the triangles 72 have not been rasterized and, thus, filled in.
As shown in
As explained earlier, the rendering system 26 (
As shown in
A comparison of the point cloud image 91 (
By not aggregating scan data for individual scans into one, larger set of scan data, the system can take advantage of the order in which scan data are collected. Each set of scan data can be treated as a spherical displacement map and can be sampled to generate a tessellation pattern using computationally-efficient techniques. Further graphical processes, such as rasterization, can be performed and only then are images from different scans combined. Thus, the system allows an image of the scan data to be rendered efficiently and/or quickly, particularly if the graphics capabilities of a GPU are used.
It will be appreciated that various modifications may be made to the embodiments hereinbefore described. Such modifications may involve equivalent and other features which are already known in the design, manufacture and use of laser scan systems and/or graphics processing systems, and component parts thereof, and which may be used instead of or in addition to features already described herein. Features of one embodiment may be replaced or supplemented by features of another embodiment.
Although claims have been formulated in this application to particular combinations of features, it should be understood that the scope of the disclosure of the present invention also includes any novel features or any novel combination of features disclosed herein either explicitly or implicitly or any generalization thereof, whether or not it relates to the same invention as presently claimed in any claim and whether or not it mitigates any or all of the same technical problems as does the present invention. The applicants hereby give notice that new claims may be formulated to such features and/or combinations of such features during the prosecution of the present application or of any further application derived therefrom.
References Cited

U.S. Patent Documents
US 5,847,717 (priority Jun. 8, 1995), Hewlett-Packard Development Company, L.P., "Data synchronization between a plurality of asynchronous data renderers"
US 5,920,320 (priority Jun. 13, 1996), Fujitsu Ltd., "Three-dimensional shape data transforming apparatus"
US 6,420,698 (priority Apr. 24, 1997), Leica Geosystems AG, "Integrated system for quickly and accurately imaging and modeling three-dimensional objects"
US 8,509,520 (priority Nov. 17, 2009), Institute For Information Industry, "System and method for establishing association for a plurality of images and recording medium thereof"
US 2001/0002131
US 2005/0216237
US 2010/0079454
US 2012/0294534
US 2014/0028678
US 2014/0176535

Foreign Patent Documents
EP 1 176 393
JP 2005-208868
JP 2011-192214
JP 2012-230594
JP 2012-527025
RU 2168192
RU 2497194
WO 2004/003844
WO 2009/047287
WO 2010/130650
WO 2010/130987
WO 2013/142819
WO 2015/087055
WO 96/21171