A system for capturing a virtual model of a site includes a range scanner for scanning the site to generate range data indicating distances from the range scanner to real-world objects. The system also includes a global positioning system (GPS) receiver coupled to the range scanner for acquiring GPS data for the range scanner at a scanning location. In addition, the system includes a communication interface for outputting a virtual model comprising the range data and the GPS data.
27. A method for modeling an object including one or more occluded surfaces when viewed from any vantage point, the method comprising:
automatically scanning an object from a plurality of fixed vantage points to generate a plurality of separate range images, each range image comprising a three-dimensional model of the object from a different perspective, wherein at least one range image includes a surface of the object that is occluded in at least one other range image;
obtaining digital images of the object from each vantage point;
obtaining a bearing of the scanner at each vantage point;
acquiring global positioning system (GPS) readings for the range scanner at each vantage point using a GPS receiver that accesses a separate base station to achieve sub-meter accuracy;
transforming the range images from local coordinate systems relative to the vantage points to a single coordinate system independent of the vantage points using the GPS readings associated with each range image, as well as information about the range scanner's bearing at each vantage point; and
automatically co-registering the transformed range images into a single virtual model of the object that includes the one or more occluded surfaces.
17. A method for capturing a virtual model of a site including one or more occluded surfaces when viewed from any given perspective, the method comprising:
automatically scanning a site from a plurality of different fixed locations to generate a separate set of range data at each scanning location indicating distances from a range scanner to real-world objects within the site, each set of range data comprising a three-dimensional model of the same site from a different perspective, wherein at least one set of range data includes a surface of a real-world object that is occluded in at least one other set of range data;
obtaining digital images of the real-world objects scanned by the range scanner at each location;
acquiring global positioning system (GPS) data for the range scanner at each scanning location using a GPS receiver that interacts with a separate base station to achieve sub-meter accuracy;
obtaining orientation information for the scanner at each scanning location;
automatically transforming the separate sets of range data from individual scanning coordinate systems to a modeling coordinate system using the GPS data together with the orientation information for the range scanner at each scanning location; and
automatically co-registering the transformed sets of range data into a single virtual model of the site that includes the one or more occluded surfaces.
12. A system for modeling an object including one or more occluded surfaces when viewed from any vantage point, the system comprising:
a range scanner for automatically scanning an object from a plurality of fixed vantage points to generate a plurality of separate range images, each range image comprising a three-dimensional model of the object from a different perspective, wherein at least one range image includes a surface of the object that is occluded in at least one other range image;
a digital camera coupled to the range scanner for obtaining digital images of the object from each vantage point;
a global positioning system (GPS) receiver for obtaining GPS readings for the range scanner at each vantage point, wherein the GPS receiver interacts with a separate base station to achieve sub-meter accuracy;
a bearing indicator coupled to the range scanner for indicating a bearing of the range scanner at each vantage point;
a transformation module for using the GPS readings associated with each range image, as well as information about the range scanner's bearing at each vantage point, to automatically transform the range images from local coordinate systems relative to the vantage points to a single coordinate system independent of the vantage points; and
a co-registration module for automatically co-registering the transformed range images into a single virtual model of the object that includes the one or more occluded surfaces.
36. A computer program product comprising program code for performing a method for capturing a virtual model of a site including one or more occluded surfaces when viewed from any given perspective, the computer program product comprising:
program code for automatically scanning a site from a plurality of different fixed locations to generate a separate set of range data at each scanning location indicating distances from a range scanner to real-world objects within the site, each set of range data comprising a three-dimensional model of the same site from a different perspective, wherein at least one set of range data includes a surface of a real-world object that is occluded in at least one other set of range data;
program code for obtaining digital images of the real-world objects scanned by the range scanner at each location;
program code for acquiring global positioning system (GPS) data for the range scanner at each scanning location using a GPS receiver that interacts with a separate base station to achieve sub-meter accuracy;
program code for obtaining orientation information for the scanner at each scanning location;
program code for automatically transforming the separate sets of range data from individual scanning coordinate systems to a modeling coordinate system using the GPS data together with the orientation information for the range scanner at each scanning location; and
program code for automatically co-registering the transformed sets of range data into a single virtual model of the site that includes the one or more occluded surfaces.
1. A system for capturing a virtual model of a site including one or more occluded surfaces when viewed from any given perspective, the system comprising:
a range scanner for automatically scanning a site from a plurality of different fixed locations to generate a separate set of range data at each scanning location indicating distances from the range scanner to real-world objects within the site, each set of range data comprising a three-dimensional model of the same site from a different perspective, wherein at least one set of range data includes a surface of a real-world object that is occluded in at least one other set of range data;
a digital camera coupled to the range scanner for obtaining digital images of the real-world objects scanned by the range scanner at each location;
a global positioning system (GPS) receiver coupled to the range scanner for acquiring GPS data for the range scanner at each scanning location, wherein the GPS receiver interacts with a separate base station to achieve sub-meter accuracy;
an orientation indicator coupled to the range scanner for indicating an orientation of the range scanner at each scanning location;
a transformation module for using the GPS data together with the orientation information for the range scanner at each scanning location to automatically transform the sets of range data from individual scanning coordinate systems based on the scanning locations to a single modeling coordinate system; and
a co-registration module for automatically co-registering the transformed sets of range data into a single virtual model of the site that includes the one or more occluded surfaces.
35. An apparatus for capturing a virtual model of a site including one or more occluded surfaces when viewed from any given perspective, the apparatus comprising:
scanning means for automatically scanning a site from a plurality of different fixed locations to generate a separate set of range data at each scanning location indicating distances from the scanning means to real-world objects within the site, each set of range data comprising a three-dimensional model of the same site from a different perspective, wherein at least one set of range data includes a surface of a real-world object that is occluded in at least one other set of range data;
camera means coupled to the scanning means for obtaining digital images of the real-world objects scanned by the scanning means at each location;
position detection means coupled to the scanning means for acquiring global positioning system (GPS) data for the scanning means at each scanning location, wherein the position detection means interacts with a separate base station to achieve sub-meter accuracy;
orientation detection means coupled to the scanning means for indicating an orientation of the scanning means at each scanning location;
transformation means for using the GPS data together with the orientation information for the scanning means at each scanning location to automatically transform the sets of range data from individual scanning coordinate systems based on the scanning locations to a single modeling coordinate system; and
co-registration means for automatically co-registering the transformed sets of range data into a single virtual model of the site that includes the one or more occluded surfaces.
10. A system for capturing a virtual model of a site including one or more occluded surfaces when viewed from any given perspective, the system comprising:
a range scanner for automatically scanning the site to generate a first set of range data indicating distances from the range scanner at a first location to real-world objects in the site, wherein the range scanner is to automatically re-scan the site to generate a second set of range data indicating distances from the range scanner at a second scanning location to real-world objects in the site, each set of range data comprising a three-dimensional model of the same site from a different perspective, wherein the second set of range data includes a surface of a real-world object that is occluded in the first set of range data;
a digital camera coupled to the range scanner for obtaining digital images of the real-world objects scanned by the range scanner at each location;
a global positioning system (GPS) receiver coupled to the range scanner for acquiring a first set of GPS data for the range scanner at the first scanning location and a second set of GPS data for the range scanner at the second scanning location, wherein the GPS receiver interacts with a separate base station to achieve sub-meter accuracy;
an orientation indicator for indicating an orientation of the range scanner at each scanning location;
a transformation module for using the first and second sets of GPS data together with the orientation information for the range scanner at the scanning locations to automatically transform the first and second sets of range data from local coordinate systems referenced to the scanning locations to a single coordinate system independent of the scanning locations;
a co-registration module for automatically co-registering the first and second sets of range data into a single virtual model of the site that includes the one or more occluded surfaces; and
a merging module for merging at least two points represented within the co-registered virtual model that correspond to the same physical location within the site.
26. A method for capturing a virtual model of a site including one or more occluded surfaces when viewed from any given perspective, the method comprising:
automatically scanning the site to generate a first set of range data indicating distances from a range scanner at a first location to real-world objects in the site, wherein the first set of range data comprises a three-dimensional model of the site from a first perspective;
obtaining digital images of the real-world objects scanned by the range scanner at the first location;
acquiring a first set of global positioning system (GPS) data for the range scanner at the first location using a GPS receiver that interacts with a base station to achieve sub-meter accuracy;
determining orientation information for the range scanner at the first location;
scanning the same site from a second perspective to generate a second set of range data indicating distances from the range scanner at a second location to real-world objects in the site, wherein the second set of range data comprises a three-dimensional model of the site from a second perspective, wherein the second set of range data includes a surface of a real-world object that is occluded in the first set of range data;
obtaining digital images of the real-world objects scanned by the range scanner at the second location;
acquiring a second set of GPS data for the range scanner at the second location;
determining orientation information for the range scanner at the second location;
automatically transforming the first and second sets of range data from individual local coordinate systems to a single coordinate system independent of the range scanner locations using the first and second sets of GPS data together with the orientation information;
automatically co-registering the first and second sets of range data into a single virtual model of the site that includes the one or more occluded surfaces;
converting the co-registered virtual model of the site into a polygon mesh; and
applying textures derived from the digital images to the polygon mesh to create a visualization of the site that is substantially free of occlusions.
2. The system of
a visualization module for converting the co-registered virtual model of the site into a polygon mesh and for applying textures derived from the digital images to the polygon mesh to create a visualization of the site that is substantially free of occlusions.
3. The system of
a merging module for merging at least two points represented within the co-registered virtual model that correspond to the same physical location within the site.
5. The system of
6. The system of
7. The system of
8. The system of
9. The system of
a servo for continuously changing an orientation of the range scanner with respect to a fixed location to scan the site; and
a lidar to obtain range measurements to real-world objects along a changing path of the range scanner responsive to the servo.
11. The system of
a visualization module for converting the co-registered virtual model of the site into a polygon mesh and for applying textures derived from the digital images to the polygon mesh to create a visualization of the site that is substantially free of occlusions.
13. The system of
a visualization module for converting the co-registered virtual model of the object into a polygon mesh and for applying textures derived from the digital images to the polygon mesh to create a visualization of the object that is substantially free of occlusions.
14. The system of
the range scanner comprises
a servo for continuously changing an orientation of the range scanner with respect to a fixed location to scan the object; and
a lidar to obtain range measurements of the object along a changing path of the range scanner responsive to the servo.
15. The system of
16. The system of
a merging module for merging at least two points represented within the co-registered range images that correspond to the same physical location on the object.
18. The method of
converting the co-registered virtual model of the site into a polygon mesh; and
applying textures derived from the digital images to the polygon mesh to create a visualization of the site that is substantially free of occlusions.
19. The method of
merging at least two points represented within the co-registered virtual model that correspond to the same physical location within the site.
20. The method of
21. The method of
determining the bearing of the range scanner.
22. The method of
23. The method of
24. The method of
associating the digital images of the real-world objects with the corresponding range data.
25. The method of
continuously changing an orientation of the range scanner with respect to a fixed location to scan the site; and
obtaining range measurements to real-world objects along a changing path of the range scanner.
28. The method of claim 27, further comprising:
converting the co-registered virtual model of the object into a polygon mesh; and
applying textures derived from the digital images to the polygon mesh to create a visualization of the object that is substantially free of occlusions.
29. The method of
continuously changing an orientation of the range scanner with respect to a fixed location to scan the object; and
obtaining range measurements of the object along the changing path of the range scanner.
30. The method of
31. The method of
associating the digital images with the corresponding range images within the virtual model.
32. The method of
wherein at least two of the range images depict the same physical location within the site.
33. The system of
34. The method of
wherein at least two of the range images depict the same physical location on the object.
37. The system of
38. The system of
39. The system of
40. The system of
41. The method of
42. The method of
43. The method of
44. The method of
Y = R sin φ  Eq. 2
Z = R cos φ sin θ  Eq. 3
In certain embodiments, the geometry of the range scanner 102 (e.g., the axis of rotation, offset, etc.) may result in a polar-like coordinate system that requires different transformations, as will be known to those of skill in the art. In general, the origin of each of the scanning coordinate systems 402a-c is the light-reception point of the lidar 103.
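For illustration, the conversion from a range sample to scanner-local Cartesian coordinates can be sketched as follows. Note that Eq. 1 for X is not reproduced in the text above; the conventional X = R cos φ cos θ is assumed here.

```python
import math

def scan_point_to_cartesian(r, theta, phi):
    """Convert one lidar range sample to scanner-local Cartesian coordinates.

    r     -- measured range R to the object
    theta -- horizontal scan angle, in radians
    phi   -- vertical scan angle, in radians

    Eq. 1 is not shown in the text; X = R cos(phi) cos(theta) is assumed.
    """
    x = r * math.cos(phi) * math.cos(theta)  # assumed Eq. 1
    y = r * math.sin(phi)                    # Eq. 2
    z = r * math.cos(phi) * math.sin(theta)  # Eq. 3
    return x, y, z
```

Each scanning location yields points in its own such coordinate system, with the origin at the lidar's light-reception point.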
Referring to
In one embodiment, the modeling coordinate system 602 is based on a geographic coordinate system, such as Universal Transverse Mercator (UTM), Earth-Centered/Earth-Fixed (ECEF), or longitude/latitude/altitude (LLA). GPS receivers 104 are typically able to display Earth-location information in one or more of these coordinate systems. UTM is used in the following examples because it provides convenient Cartesian coordinates in meters. In the following examples, the UTM zone is not shown since the range data 302 will typically be located within a single zone.
As depicted in
X1 = X cos(b) − Z sin(b)  Eq. 4
Z1 = Z cos(b) + X sin(b)  Eq. 5
These equations assume that the range scanner 102 was level at the time of scanning, such that the XZ planes of the scanning coordinate system 402 and modeling coordinate system 602 are essentially co-planar. If, however, the range scanner 102 was tilted with respect to the X and/or Z axes, the transformations could be modified by one of skill in the art.
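Eqs. 4 and 5 rotate each point about the vertical (Y) axis by the scanner's bearing; a minimal sketch, assuming the scanner was level and the bearing is given in radians:

```python
import math

def rotate_by_bearing(x, z, b):
    """Rotate scanner-local X/Z coordinates about the vertical axis by the
    scanner's bearing b (radians), aligning the data with geographic
    directions. Implements Eqs. 4 and 5; the Y coordinate is unchanged."""
    x1 = x * math.cos(b) - z * math.sin(b)  # Eq. 4
    z1 = z * math.cos(b) + x * math.sin(b)  # Eq. 5
    return x1, z1
```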
Next, as shown in
X2 = X1 + GPSE  Eq. 6
Y2 = Y1 + GPSH  Eq. 7
Z2 = Z1 + GPSN  Eq. 8
where GPSE, GPSN, and GPSH are, respectively, the UTM easting, northing, and height components of the GPS data 304 for the scanning location.
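Eqs. 6 through 8 then translate the rotated coordinates into the modeling coordinate system by adding the scanner's UTM position; a minimal sketch (parameter names are illustrative):

```python
def translate_to_utm(x1, y1, z1, gps_e, gps_n, gps_h):
    """Translate rotated scanner-local coordinates into the modeling
    coordinate system (Eqs. 6-8). gps_e, gps_n, and gps_h are the UTM
    easting, northing, and height of the scanning location, in meters."""
    x2 = x1 + gps_e  # Eq. 6
    y2 = y1 + gps_h  # Eq. 7
    z2 = z1 + gps_n  # Eq. 8
    return x2, y2, z2
```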
Those of skill in the art will recognize that the invention is not limited to UTM coordinates and that transformations exist for other coordinate systems, such as ECEF and LLA. In certain embodiments, the modeling coordinate system 602 may actually be referenced to a local landmark or a point closer to the range data 302, but will still be geographically oriented.
In the preceding example, the units of the range data 302 and GPS data 304 are both in meters. For embodiments in which the units differ, a scaling transformation will be needed. Furthermore, while
When the transformation is complete, the co-registration module 228 co-registers, or combines, the range data 302a-c from the various views into a co-registered model 702 of the entire site 104. This may involve, for example, combining the sets of range data 302a-c into a single data structure, while still preserving the ability to access the individual sets.
In one embodiment, the co-registered model 702 includes GPS data 304 for at least one point. This allows the origin of the modeling coordinate system 602 to be changed to any convenient location, while still preserving a geographic reference.
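As an illustrative sketch of such a structure (class and method names here are hypothetical, not from the source), a container can hold the transformed sets in one model while keeping each set individually accessible and carrying GPS data for a reference point:

```python
class CoRegisteredModel:
    """Combines transformed range data sets into one structure while
    preserving access to each individual set. Stores GPS data for at
    least one reference point so the model remains geographically
    anchored even if the origin is later moved."""

    def __init__(self, gps_reference):
        self.gps_reference = gps_reference  # e.g., (easting, northing, height)
        self._scans = {}                    # scan id -> list of (x, y, z)

    def add_scan(self, scan_id, points):
        self._scans[scan_id] = list(points)

    def scan(self, scan_id):
        return self._scans[scan_id]         # an individual set, preserved

    def all_points(self):
        for points in self._scans.values():
            yield from points
```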
As illustrated in
Referring to
In one embodiment, the merging module 230 incorporates the Scanalyze™ product available from Stanford University. Scanalyze™ is an interactive computer graphics application for viewing, editing, aligning, and merging range images to produce dense polygon meshes.
Scanalyze™ processes three kinds of files: triangle-mesh PLY files (extension .ply), range-grid PLY files (also with extension .ply), and SD files (extension .sd). Triangle-mesh PLY files encode general triangle meshes as lists of arbitrarily connected 3D vertices, whereas range-grid PLY files and SD files encode range images as rectangular arrays of points. SD files also contain metadata that describe the geometry of the range scanner 102 used to acquire the data. This geometry is used by Scanalyze™ to derive line-of-sight information for various algorithms. PLY files may also encode range images (in polygon mesh form), but they do not include metadata about the range scanner and thus do not provide line-of-sight information.
Once the PLY or SD files have been loaded, they can be pairwise aligned using a variety of techniques—some manual (i.e. pointing and clicking) and some automatic (using a variant of the ICP algorithm).
Pairs of scans can be selected for alignment either automatically (so-called all-pairs alignment) or manually, by choosing two scans from a list. These pairwise alignments can optionally be followed by a global registration step whose purpose is to spread the alignment error evenly across the available pairs. The new positions and orientations of each PLY or SD file can be stored as a transform file (extension .xf) containing a 4×4 matrix.
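As noted above, each .xf transform file contains a 4×4 matrix. A sketch of reading such a matrix and applying it to a point in homogeneous coordinates follows; a plain-text, row-major layout of 16 numbers is assumed here and should be verified against the Scanalyze documentation:

```python
def parse_xf(text):
    """Parse a .xf transform file: 16 whitespace-separated numbers
    forming a 4x4 homogeneous transform (row-major layout assumed)."""
    vals = [float(v) for v in text.split()]
    assert len(vals) == 16, "expected a 4x4 matrix"
    return [vals[i * 4:(i + 1) * 4] for i in range(4)]

def apply_transform(m, point):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    x, y, z = point
    p = (x, y, z, 1.0)
    return tuple(sum(m[r][c] * p[c] for c in range(4)) for r in range(3))
```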
Referring to
The visualization module 232 also decomposes the digital images 306 into textures 904, which are then applied to the polygon mesh 902. In essence, the digital images 306 are “draped” upon the polygon mesh 902. Due to the relatively higher resolution of the digital images 306, the textures 904 add a high degree of realism to the visualization 112. Techniques and code for applying textures 904 to polygon meshes 902 are known to those of skill in the art.
In one embodiment, the mesh 902 and textures 904 are used to create the visualization 112 of the site 104 using a standard modeling representation, such as the virtual reality modeling language (VRML). Thereafter, the visualization 112 can be viewed using a standard VRML browser, or a browser equipped with a VRML plugin, such as the Microsoft™ VRML Viewer. Of course, the visualization 112 could also be created using a proprietary representation and viewed using a proprietary viewer.
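As an illustrative sketch of the VRML representation mentioned above, the following emits an untextured polygon mesh as a minimal VRML 2.0 scene (node and field names per the VRML97 specification; texture draping is omitted for brevity):

```python
def mesh_to_vrml(points, faces):
    """Serialize a polygon mesh as a minimal VRML 2.0 (VRML97) scene.
    points: list of (x, y, z) vertices; faces: lists of vertex indices.
    Each face's index list is terminated with -1, per the VRML spec."""
    coords = ", ".join("%g %g %g" % p for p in points)
    idx = ", ".join(", ".join(str(i) for i in f) + ", -1" for f in faces)
    return (
        "#VRML V2.0 utf8\n"
        "Shape {\n"
        "  geometry IndexedFaceSet {\n"
        "    coord Coordinate { point [ %s ] }\n"
        "    coordIndex [ %s ]\n"
        "  }\n"
        "}\n" % (coords, idx)
    )
```

The resulting text can be opened in any VRML-capable browser or viewer.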
As depicted in
After the range scanner 102 is moved to a second location, the method 1000 continues by scanning 1008 the site 104 to generate a second set of range data 302 indicating distances from the range scanner 102 at the second location to real-world objects in the site 104. In addition, the GPS receiver 116 acquires 1010 a second set of GPS data 304 relative to the range scanner 102 at the second location, after which the range scanner 102 outputs 1012 a second virtual model 234 comprising the second set of range data 302 and the second set of GPS data 304.
In one configuration, a transformation module 229 then uses 1014 the sets of GPS data 304 to transform the sets of range data 302 from scanning coordinate systems 402 to a single modeling coordinate system 602. Thereafter, the transformed range data 302 can be merged and visualized using standard applications.
As illustrated in
The site models 1104a-b may be co-registered models 702 or merged models 802, as previously shown and described. Furthermore, as previously noted, a site model 1104a-b may include GPS data 304.
In one embodiment, the transformation module 229 uses the sets of GPS data 304a-b to combine the individual site models 1104a-b into a single area model 1106. This may be done in the same manner as the virtual models 302a-c of
The resulting area model 1106 may then be used to produce an interactive, three-dimensional visualization 112 of the entire area 1102 that may be used for many purposes. For example, a user may navigate from one site 104 to another within the area 1102. Also, when needed, a user may remove any of the site models 1104 from the area model 1106 to visualize the area 1102 without the objects from the removed site model 1104. This may be helpful in the context of architectural or land-use planning.
While specific embodiments and applications of the present invention have been illustrated and described, it is to be understood that the invention is not limited to the precise configuration and components disclosed herein. Various modifications, changes, and variations apparent to those skilled in the art may be made in the arrangement, operation, and details of the methods and systems of the present invention disclosed herein without departing from the spirit and scope of the invention.
Bunger, James W., Vashisth, Robert M., Jensen, James U.