A discrete linear space sampling method and system for generating digital 3D models is disclosed. A plurality of digital images are acquired of a subject from a respective plurality of image sensor positions near an image sensor location. Candidate 3d-spels are identified, each 3d-spel being an image pixel corresponding to a common point on said subject. Candidate 3d-spels are rejected based on a differential analysis of the candidate 3d-spels, the remaining 3d-spels forming a set of accepted 3d-spels. 3D coordinates are calculated for each accepted 3d-spel, thereby forming a point-cloud of the subject.
1. A discrete linear space sampling method for generating digital 3d models comprising:
acquiring a plurality of digital images of a subject from a respective plurality of image sensor positions near to an image sensor location;
identifying candidate 3d-spels from the acquired digital images, each 3d-spel being an image pixel corresponding to a common point on said subject, and each 3d-spel corresponding to a pair of acquired pixels, the first acquired pixel of the pair of acquired pixels being selected from one of the plurality of acquired images and the second acquired pixel of the pair of acquired pixels being selected from one of the remaining of the plurality of acquired images;
rejecting candidate 3d-spels based on a differential analysis of the candidate 3d-spels, the remaining 3d-spels forming a set of accepted 3d-spels; and,
calculating 3d coordinates for each accepted 3d-spel, thereby forming a point-cloud of the subject.
24. A system for generating digital 3d models comprising:
means for acquiring a plurality of digital images of a subject from a respective plurality of image sensor positions near to an image sensor location;
means for identifying candidate 3d-spels from the acquired digital images, each 3d-spel being an image pixel corresponding to a common point on said subject, and each 3d-spel corresponding to a pair of acquired pixels, the first acquired pixel of the pair of acquired pixels being selected from one of the plurality of acquired images and the second acquired pixel of the pair of acquired pixels being selected from one of the remaining of the plurality of acquired images;
means for rejecting candidate 3d-spels based on a differential analysis of the candidate 3d-spels, the remaining 3d-spels forming a set of accepted 3d-spels; and,
means for calculating 3d coordinates for each accepted 3d-spel, thereby forming a point-cloud of the subject.
35. A system for generating digital 3d models comprising:
means for acquiring a plurality of digital images of a subject from a respective plurality of image sensor positions near to an image sensor location;
means for identifying candidate 3d-spels, each 3d-spel being an image pixel corresponding to a common point on said subject;
means for rejecting candidate 3d-spels based on a differential analysis of the candidate 3d-spels, the remaining 3d-spels forming a set of accepted 3d-spels;
means for calculating 3d coordinates for each accepted 3d-spel, thereby forming a point-cloud of the subject;
means for repeating the acquiring, identifying, rejecting and calculating for one or more additional image sensor locations, thereby providing a plurality of sets of digital images and a plurality of sets of 3d-spels; and,
means for registering the 3d-spels of the plurality of sets of 3d-spels, thereby creating a merged set of 3d-spels having a common origin.
12. A discrete linear space sampling method for generating digital 3d models comprising:
acquiring a plurality of digital images of a subject from a respective plurality of image sensor positions near to an image sensor location;
identifying candidate 3d-spels, each 3d-spel being an image pixel corresponding to a common point on said subject;
rejecting candidate 3d-spels based on a differential analysis of the candidate 3d-spels, the remaining 3d-spels forming a set of accepted 3d-spels;
calculating 3d coordinates for each accepted 3d-spel, thereby forming a point-cloud of the subject;
repeating the acquiring, identifying, rejecting and calculating for one or more additional image sensor locations with respect to said subject, thereby providing a plurality of sets of digital images and a plurality of sets of 3d-spels; and,
registering the 3d-spels of the plurality of sets of 3d-spels, thereby creating a merged set of 3d-spels having a common origin.
7. A discrete linear space sampling method for generating digital 3d models comprising:
acquiring a plurality of digital images of a subject from a respective plurality of image sensor positions near to an image sensor location;
identifying candidate 3d-spels, each 3d-spel being an image pixel corresponding to a common point on said subject;
rejecting candidate 3d-spels based on a differential analysis of the candidate 3d-spels, the remaining 3d-spels forming a set of accepted 3d-spels;
calculating 3d coordinates for each accepted 3d-spel, thereby forming a point-cloud of the subject; and,
organizing said acquired digital images into a three-dimensional array of pixels having a plurality of rows and a plurality of columns, each row and column corresponding to a two-dimensional pixel position for each of said images, wherein each image occupies a respective position in the third dimension of the array, and wherein each position in the three-dimensional array contains at least one pixel value for the respective pixel position.
30. A system for generating digital 3d models comprising:
means for acquiring a plurality of digital images of a subject from a respective plurality of image sensor positions near to an image sensor location;
means for identifying candidate 3d-spels, each 3d-spel being an image pixel corresponding to a common point on said subject;
means for rejecting candidate 3d-spels based on a differential analysis of the candidate 3d-spels, the remaining 3d-spels forming a set of accepted 3d-spels;
means for calculating 3d coordinates for each accepted 3d-spel, thereby forming a point-cloud of the subject; and,
means for organizing said acquired digital images into a three-dimensional array of pixels having a plurality of rows and a plurality of columns, each row and column corresponding to a two-dimensional pixel position for each of said images, wherein each image occupies a respective position in the third dimension of the array, and wherein each position in the three-dimensional array contains at least one pixel value for the respective pixel position.
2. The method as set forth in
3. The method as set forth in
8. The method as set forth in
9. The method as set forth in
generating a geometric mesh of the scene or object based on the accepted 3d-spels, including determining a visible face of each geometric entity comprising the geometric mesh;
mapping a texture to the geometric mesh including determining which of the plurality of acquired digital images each geometric entity corresponds to and determining for each geometric entity a selected geometric region of the corresponding digital image; and,
pasting the texture onto the geometric mesh including pasting digital image data from each selected geometric region onto the visible face of the geometric entity of the geometric mesh.
10. The method as set forth in
11. The method as set forth in
replacing a plurality of accepted 3d-spels within a sphere of a first predetermined spherical radius with a single representative 3d-spel; and,
removing accepted 3d-spels determined to be noise and/or determined to be sparsely populated within a sphere of a second predetermined spherical radius.
13. The method as set forth in
organizing said acquired plurality of sets of digital images into a plurality of sets of three-dimensional arrays of pixels, each array having a plurality of rows and a plurality of columns, each row and column corresponding to a two-dimensional pixel position for each of said images, wherein each image occupies a respective position in the third dimension of the array, and wherein each position in the three-dimensional array contains at least one pixel value for the respective pixel position.
14. The method as set forth in
15. The method as set forth in
generating a geometric mesh of the scene or object based on the merged set of accepted 3d-spels, including determining a visible face of each geometric entity comprising the geometric mesh;
mapping a texture to the geometric mesh including determining which of the plurality of acquired digital images each geometric entity corresponds to and determining for each geometric entity a selected geometric region of the corresponding digital image; and,
pasting the texture onto the geometric mesh including pasting digital image data from each selected geometric region onto the visible face of the geometric entity of the geometric mesh.
16. The method as set forth in
17. The method as set forth in
replacing a plurality of 3d-spels of the merged set of 3d-spels within a sphere of a first predetermined spherical radius with a single representative 3d-spel; and,
removing 3d-spels of the merged set of 3d-spels determined to be noise and/or determined to be sparsely populated within a sphere of a second predetermined spherical radius.
18. The method as set forth in
19. The method as set forth in
20. The method as set forth in
21. The method as set forth in
determining a first set of three points from a first image sensor location and a second set of three matching points from a second image sensor location;
aligning respective image sensor coordinate systems according to the first and second sets of three matching points; and,
repeating the determining and aligning steps at remaining image sensor locations.
22. The method as set forth in
25. The system as set forth in
26. The system as set forth in
27. The system as set forth in
31. The system as set forth in
32. The system as set forth in
means for generating a geometric mesh of the scene or object based on the accepted 3d-spels, including means for determining a visible face of each geometric entity comprising the geometric mesh;
means for mapping a texture to the geometric mesh including determining which of the plurality of acquired digital images each geometric entity corresponds to and means for determining for each geometric entity a selected geometric region of the corresponding digital image; and,
means for pasting the texture onto the geometric mesh including pasting digital image data from each selected geometric region onto the visible face of the geometric entity of the geometric mesh.
33. The system as set forth in
34. The system as set forth in
means for replacing a plurality of accepted 3d-spels within a sphere of a first predetermined spherical radius with a single representative 3d-spel; and,
means for removing accepted 3d-spels determined to be noise and/or determined to be sparsely populated within a sphere of a second predetermined spherical radius.
36. The system as set forth in
means for organizing said acquired plurality of sets of digital images into a plurality of sets of three-dimensional arrays of pixels, each array having a plurality of rows and a plurality of columns, each row and column corresponding to a two-dimensional pixel position for each of said images, wherein each image occupies a respective position in the third dimension of the array, and wherein each position in the three-dimensional array contains at least one pixel value for the respective pixel position.
37. The system as set forth in
38. The system as set forth in
means for generating a geometric mesh of the scene or object based on the merged set of accepted 3d-spels, including means for determining a visible face of each geometric entity comprising the geometric mesh;
means for mapping a texture to the geometric mesh including means for determining which of the plurality of acquired digital images each geometric entity corresponds to and determining for each geometric entity a selected geometric region of the corresponding digital image; and,
means for pasting the texture onto the geometric mesh including pasting digital image data from each selected geometric region onto the visible face of the geometric entity of the geometric mesh.
39. The system as set forth in
40. The system as set forth in
means for replacing a plurality of 3d-spels of the merged set of 3d-spels within a sphere of a first predetermined spherical radius with a single representative 3d-spel; and,
means for removing 3d-spels of the merged set of 3d-spels determined to be noise and/or determined to be sparsely populated within a sphere of a second predetermined spherical radius.
41. The system as set forth in
42. The system as set forth in
43. The system as set forth in
44. The system as set forth in
means for determining a first set of three points from a first image sensor location and a second set of three matching points from a second image sensor location;
means for aligning respective image sensor coordinate systems according to the first and second sets of three matching points; and,
means for repeating the determining and aligning steps at remaining image sensor locations.
45. The system as set forth in
This application claims the benefit of U.S. Provisional Application No. 60/388,605, filed Jun. 12, 2002, incorporated herein by reference in its entirety.
This invention relates in general to imaging and, more particularly, to a method and apparatus of using a moving camera or fixed sensor array for 3-dimensional scanning of an object.
In many computer applications, it is desirable to have the ability to generate a file describing a three dimensional object. For example, in order to reverse engineer a mechanical part for which drawings are no longer available, it is desirable to be able to efficiently and accurately generate a digital file describing the part. One method is to scan the part with a 3D scanner such as a laser scanning device, generating a digital model therefrom. The models can then be manipulated using computer-aided design/computer-aided manufacturing (CAD/CAM) processing techniques to reproduce the desired part. 3D scanners are used in a variety of fields including medical imaging, topography, CAD/CAM, architecture, reverse engineering and computer animation/virtual reality.
Early 3D scanners used mechanical probes moving across an object's surface. A mechanical arm connected to the probe moves in accordance with the contours of the surface, and the arm movements are translated into information describing the location of the probe at multiple points. These early digital systems are slow, since they must touch each position on the object at which a measurement reading is taken, and are not suitable for scanning soft objects such as clay. Further, they are not suitable for modeling large 3D spaces or scenes, such as the interior of a room or building.
More recently, optical 3D scanners have become available. 3D scanners of this type project a laser pattern on an object and determine the location of points on the object using triangulation techniques. These systems can be extremely complicated and, therefore, costly. Further, laser systems are limited to a monochromatic light source. Still further, some materials may be opaque or transparent at the frequency of the laser, further limiting their usefulness.
It is the principal objective of the present invention to provide an object and space sampling method and apparatus that overcomes the above-described limitations.
Another object of the present invention is to provide an accurate object modeling method and apparatus that is multi-spectrum, able to provide models of night scenes and able to provide color models.
It is another object of the present invention to provide an accurate object modeling method and apparatus requiring a minimum of human intervention and setup.
In accordance with one aspect of the present invention, a discrete linear space sampling method for generating digital 3D models is provided. The method comprises acquiring a plurality of digital images of a subject from a respective plurality of image sensor positions near to an image sensor location, identifying candidate 3d-spels, wherein each 3d-spel is an image pixel corresponding to a common point on the subject, rejecting candidate 3d-spels based on a differential analysis of the candidate 3d-spels, wherein the remaining 3d-spels form a set of accepted 3d-spels, and calculating 3D coordinates for each accepted 3d-spel. The resulting 3D coordinates form a point-cloud of the subject.
In accordance with another aspect of the present invention, a system for generating digital 3D models is provided. The system comprises a means for acquiring a plurality of digital images of a subject from a plurality of image sensor positions near an image sensor location, a means for identifying candidate 3d-spels, a means for rejecting candidate 3d-spels based on a differential analysis of the candidate 3d-spels, and a means for calculating 3D coordinates for each remaining 3d-spel, thereby forming a point-cloud of the subject.
The invention may take physical form in certain parts and arrangements of parts, a preferred embodiment of which will be described in detail in the specification and illustrated in the accompanying drawings which form a part hereof and wherein:
It will become evident from the following discussion that embodiments of the present application set forth herein are suited for use in a wide variety of object modeling systems, and are not necessarily limited in application to the particular systems illustrated.
With reference to the drawings, where the showings are for the purpose of illustrating exemplary embodiments of the invention and not for the purpose of limiting same, one goal of a Discrete Linear Space Sampling (DLSS) technique in accordance with aspects of the present invention is to build a spatially accurate, photo-realistic model of a 3D space. An exemplary DLSS process comprises five main steps:
1. Data acquisition
2. Generation of 3D points
3. Registration of the 3D points
4. Triangulation
5. Texture generation
In the following description, a detailed discussion is given of an exemplary DLSS system and process. For clarity and completeness, the description is divided into several subsections as follows:
1. DLSS Theory
In this section, a discussion of some mathematical techniques that are used in the creation of a 3D model by DLSS is presented. Essentially this section describes how:
Parallax, as used herein, is the change in the apparent position of an object due to a shift in the position of an observer. Parallax has been widely used to find the distances to stars and to find large distances on earth that are too difficult to measure directly.
In DLSS terminology, the “object” is a point in space. The two positions of the observer are simply two image sensor positions a known distance apart. For example, the image sensor optionally comprises a digital camera including a CCD, lens, etc. as is known in the art. Alternately the image sensor comprises a fixed linear array of sensors. While the various embodiments of the present invention are discussed with respect to movable cameras, it is to be appreciated that fixed arrays of sensors are also suitably adapted to, and included in the scope of, the present invention. This arrangement is shown in
Using similar triangles, it is seen that
D/F=(B+C1+C2)/(C1+C2)
and, if P is defined as P=(C1+C2), then
D/F=(B+P)/P.
Rearranging the terms yields
D=F*(B+P)/P. Equation 1.1.1
Note that D is actually the z coordinate of the point in space with respect to an origin 42 located at O in
z=F*(B+P)/P. Equation 1.1.1a
Similarly, expressions can be found for the x and y coordinates of the point in space. Let Yoffset denote the actual distance the image of the point is offset from the center of either image sensor. Then
x=z*C1/F Equation 1.1.2
y=z*Yoffset/F. Equation 1.1.3
From the discussion above, it is clear that, if it is ensured that the same point has been located from the two image sensor positions, the point's spatial coordinates can be reliably computed and hence a 3d-spel obtained. Several methods according to the present invention of ensuring that such a point has been located are described herein.
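As a minimal sketch of Equations 1.1.1a through 1.1.3, the following Python function computes the 3D coordinates of a matched point. The function name and the assumption that F, B and the offsets are already expressed in the same physical units are illustrative and not part of the DLSS specification.

```python
def spel_coordinates(F, B, c1, c2, y_offset):
    """Compute (x, y, z) for a matched point (Equations 1.1.1a through 1.1.3).

    F        -- focal length of the image sensor
    B        -- baseline: distance between the two image sensor positions
    c1, c2   -- offsets of the point's image from the center of each sensor
    y_offset -- vertical offset of the point's image from the sensor center
    All quantities are assumed to be in the same physical units.
    """
    P = c1 + c2                      # total parallax
    if P == 0:
        raise ValueError("zero parallax: point is at infinity")
    z = F * (B + P) / P              # Equation 1.1.1a
    x = z * c1 / F                   # Equation 1.1.2
    y = z * y_offset / F             # Equation 1.1.3
    return x, y, z

# Example: 8 mm focal length, 100 mm baseline, offsets measured in mm.
print(spel_coordinates(F=8.0, B=100.0, c1=0.5, c2=0.3, y_offset=0.2))
```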
It is often advantageous to use more than one image sensor, or more than one image sensor location, to accurately model an object or scene. In DLSS, the different image sensor locations are preferably obtained in either one of two ways: in Case A, one image sensor location is obtained from another by a known series of translations and/or rotations; in Case B, the relationship between the image sensor locations is not known in advance.
The 3d-spels generated from different image sensor locations should have accurate 3D coordinates, but these coordinates are with respect to different origins. Sets of 3d-spels are, therefore, combined into a single set with a common origin. If the 3d-spels are generated as in Case A above, 3d-spels are transformed to a common coordinate system by applying the inverse transformation. For Case B above, 3d-spels are registered to a common origin by the use of Euler angles.
For Case A, Linear Transformations (LT) are used to map one vector space into another. Linear transformations satisfy two properties: T(u + v) = T(u) + T(v) for all vectors u and v, and T(cu) = cT(u) for all vectors u and scalars c.
There is a theorem from Linear Algebra that an LT from one finite dimensional vector space to another has a matrix representation. Thus, if T is an LT from a vector space X of dimension m to another vector space Y of dimension n, then there is an n×m matrix A such that T(a)=Aa.
It is also well known that the transformation T is invertible if and only if its matrix A is invertible. Furthermore, the matrix representation of the inverse of T is given by A⁻¹.
An affine transformation is built from the three basic operations of rotation, scaling, and translation. If these operations are viewed as mappings of 3-space (3-dimensional space) to 3-space, translation has no matrix representation, since it fails the second property of LTs.
To remedy this, homogeneous coordinates are used. To form these coordinates, a point (x,y,z) in 3-space is identified with the point (x,y,z,1) in 4-space. By using these coordinates, it is possible to find a matrix representation of a LT that operates on 4-space but effectively translates points in 3-space. Furthermore, it is known that any affine transformation is invertible, hence the associated matrix is also invertible.
In the current context, it is not necessary to scale points, only to translate them or rotate them. The 4×4 matrix TR that translates points by Tx, Ty, and Tz in the x, y, and z directions respectively is given by
The matrix Rz that rotates by an angle θ about the z-axis is given by
It is easy to see that the inverse matrix for TR(Tx,Ty,Tz) is given by TR(−Tx,−Ty,−Tz) and that the inverse of Rz(θ) is given by Rz(−θ). Matrices to rotate about the x and y axes are similar to Rz. They are generally denoted by Rx and Ry.
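For illustration, a minimal numpy sketch of the standard homogeneous-coordinate forms of TR(Tx,Ty,Tz) and Rz(θ), together with a check of the inverse relationships stated above, is given below; the helper names simply mirror the notation of this section and are not taken from any DLSS implementation.

```python
import numpy as np

def TR(tx, ty, tz):
    """Homogeneous 4x4 translation by (tx, ty, tz)."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def Rz(theta_deg):
    """Homogeneous 4x4 rotation by theta (degrees) about the z-axis."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

# The inverse of TR(Tx,Ty,Tz) is TR(-Tx,-Ty,-Tz); the inverse of Rz(t) is Rz(-t).
assert np.allclose(TR(1, 2, 3) @ TR(-1, -2, -3), np.eye(4))
assert np.allclose(Rz(60) @ Rz(-60), np.eye(4))
```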
Using homogeneous coordinates, it is known that:
In DLSS, the use of transformations and their inverses applies when one image sensor position is obtained from another by known translations and/or rotations. Suppose that image sensor position B is obtained from image sensor position A by a transformation T that is some sequence of n translations and/or rotations. Then T has the form:
T=T1●T2●T3● . . . ●Tn
where ● denotes function composition.
From the above, the matrix for T is given by
M=M1×M2×M3× . . . ×Mn
where Mi is the matrix representation for Ti.
To register the points obtained from image sensor location B to those obtained at location A, the inverse transformation is applied to the 3d-spels from B. This means that each 3d-spel S collected at location B can be registered to a 3d-spel S′ in the coordinate system of image sensor location A by
S′=Mn⁻¹×M(n−1)⁻¹× . . . ×M3⁻¹×M2⁻¹×M1⁻¹×S Equation 1.2.1
As a simple example, suppose that image sensor location B is obtained by rotating image sensor location A by 60 degrees about the y axis and translating by 0, 0, and 10 in the x, y, and z directions respectively. A point S, at B, can be registered with a point S′ at A by rotating −60 degrees about the y axis and translating by −10 in the z direction. Using matrix notation, M1=TR(0,0,10) and M2=Ry(60). Consequently the matrix representation for T is Ry(60)*TR(0,0,10). From the discussion above, the inverse of M1 is TR(0,0,−10) and the inverse of M2 is Ry(−60). Applying equation 1.2.1 yields
S′=TR(0,0,−10)*Ry(−60)*S.
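A sketch of Equation 1.2.1 applied to this example follows; it repeats the TR helper from the previous sketch, defines Ry analogously, and treats the 3d-spels as rows of an (n, 3) array. This is an illustrative rendering under those assumptions, not the patent's implementation.

```python
import numpy as np

def TR(tx, ty, tz):
    """Homogeneous 4x4 translation (same helper as in the previous sketch)."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def Ry(theta_deg):
    """Homogeneous 4x4 rotation by theta (degrees) about the y-axis."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[ c, 0, s, 0],
                     [ 0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [ 0, 0, 0, 1]])

def register_case_a(spels_b, inverse_matrices):
    """Register 3d-spels collected at location B to location A (Equation 1.2.1).

    spels_b          -- (n, 3) array of 3d-spels collected at location B
    inverse_matrices -- the inverse transformation matrices, in the order in
                        which they appear in the product of Equation 1.2.1
    """
    m = np.eye(4)
    for inv in inverse_matrices:
        m = m @ inv
    homogeneous = np.hstack([spels_b, np.ones((len(spels_b), 1))])
    return (homogeneous @ m.T)[:, :3]

# The example above: undo the +10 z-translation and the 60-degree rotation about y.
spels_b = np.array([[0.0, 0.0, 20.0]])
print(register_case_a(spels_b, [TR(0, 0, -10), Ry(-60)]))
```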
In some cases, it is not known exactly how one image sensor location is obtained from another. In this case, Case B above, DLSS employs another method of registering points. This method is based on the following fact: performing one translation and 3 rotations can align any two coordinate systems.
In DLSS embodiments, the two coordinate systems are established by locating the same set of 3 points in two separate image sensor locations. With reference to FIG. 3, at each image sensor position, the three points 44, 46, 48 are located and given 3D coordinates as previously described.
Two vectors 50, 52, labeled V1 and V2, are formed from the 3 points 44-48, and the normal vector 54 to their plane, denoted by N, is calculated by taking the cross product of V1 with V2.
N=V1×V2 Equation 1.2.2
The direction represented by N becomes the positive y-axis for the new coordinate system. One of the vectors, V1 or V2, is selected as the direction of the positive x-axis. These vectors are perpendicular to N, but not necessarily to each other. The positive z-axis is determined by crossing N with the vector selected for the x-axis.
A normalized, mutually orthogonal set of vectors that form a basis for the new coordinate system is given by
(V1/∥V1∥, (V1×N)/∥V1×N∥, N/∥N∥)
where ∥ ∥ denotes vector norm.
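The construction of this local coordinate system can be sketched as follows; the choice of the first of the three points as the common tail of V1 and V2 is an assumption made here only for illustration.

```python
import numpy as np

def basis_from_three_points(p1, p2, p3):
    """Orthonormal basis from three located points, following the formula above.

    V1 and V2 run from the first point to the second and third points, and
    N = V1 x V2 (Equation 1.2.2).  Returns the triple
    (V1/|V1|, (V1 x N)/|V1 x N|, N/|N|); in the text V1 gives the positive
    x-axis and N the positive y-axis of the new coordinate system.
    """
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    v1 = p2 - p1
    v2 = p3 - p1
    n = np.cross(v1, v2)                 # normal to the plane of the three points
    e1 = v1 / np.linalg.norm(v1)
    v1xn = np.cross(v1, n)
    e2 = v1xn / np.linalg.norm(v1xn)
    e3 = n / np.linalg.norm(n)
    return e1, e2, e3

# Three points in the x-z plane: the normal N (third vector) comes out along -y.
print(basis_from_three_points([0, 0, 0], [1, 0, 0], [0, 0, 1]))
```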
This process is repeated at all remaining image sensor positions. After the points collected at each position are combined into one set of 3d-spels, all of the coordinate systems can then be aligned. With reference now to
To align one coordinate system with another, the procedure described above prescribes that a translation be performed that shifts the origin 60, O′=(O′x,O′y,O′z), to the origin 62, O=(Ox,Oy,Oz). In matrix notation, this step is described by
TR(Ox−O′x,Oy−O′y,Oz−O′z) Equation 1.2.3.
With the coordinate system 56 at the origin O′ having axes labeled x′ (64), y′ (66) and z′ (68), and the coordinate system 58 at the origin O having axes labeled x (70), y (72) and z (74), all points are rotated about the y axis so that the z′ axis of the system with origin O′ is rotated into the x-y plane of the system with origin O. In matrix notation, this step is described by
Ry(ε1) Equation 1.2.4
where ε1 is the first Euler angle.
The z′ and z axes are made to align by rotating points about the x-axis so that the z′ axis is in the x-z plane. In matrix notation
Rx(ε2) Equation 1.2.5
where ε2 is the second Euler angle.
A rotation is now performed about the z axis so that the x′ and x axes and the y′ and y axes are aligned. In matrix notation
Rz(ε3) Equation 1.2.6
where ε3 is the third Euler angle.
When the above transformations are done on the 3d-spels with an origin at O′, the 3d-spels are registered with the points having an origin at O. In practice, the method of Euler is more general than the method of inverse transformations. The latter method may be treated as a special case of the former.
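As a sketch of the overall registration, the combined effect of the translation of Equation 1.2.3 and the three Euler rotations of Equations 1.2.4 through 1.2.6 can be expressed by building the rotation directly from the two orthonormal bases. This direct-basis formulation is a substitute for extracting the individual Euler angles and yields the same alignment; the argument layout is an illustrative assumption.

```python
import numpy as np

def register_case_b(spels_b, origin_a, basis_a, origin_b, basis_b):
    """Register 3d-spels from the coordinate system at O' to the one at O.

    origin_a, basis_a -- coordinates of the common anchor point and the 3x3
                         matrix whose rows are the basis vectors built from
                         the three matching points, expressed at location A
    origin_b, basis_b -- the same quantities expressed at location B
    The rotation basis_a.T @ basis_b has the same effect as the three Euler
    rotations of Equations 1.2.4-1.2.6 applied after the translation of
    Equation 1.2.3.
    """
    rotation = np.asarray(basis_a).T @ np.asarray(basis_b)
    shifted = np.asarray(spels_b, dtype=float) - np.asarray(origin_b, dtype=float)
    return shifted @ rotation.T + np.asarray(origin_a, dtype=float)

# Sanity check: at B the matching-point frame appears rotated 90 degrees about z,
# so a point lying along the frame's first axis registers to (1, 0, 0) at A.
basis_a = np.eye(3)
basis_b = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 1]], dtype=float)
print(register_case_b(np.array([[0.0, 1.0, 0.0]]), [0, 0, 0], basis_a,
                      [0, 0, 0], basis_b))
```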
In embodiments of a DLSS triangulation system, a tangent plane is established at each 3D point on the surface of the object. These tangent planes are used in triangulation and, since the texture will be pasted on the visible surface, the normal vector to each plane is assigned such that it points away from the visible surface. This prevents the texture from being pasted on the inside surface of the object being modeled.
For each point (x,y,z) on the surface, the least squares plane is constructed. This plane serves as the tangent plane. Curve fitting is done on all points in a neighborhood of the point (x,y,z). The size of the neighborhoods can vary to make the number of planes larger or smaller.
Using all points from the neighborhood, the covariance matrix, C, is created. This matrix is defined to be the sum of the outer products of vectors of the form (v−O) with themselves, where v ranges over the points in the neighborhood and O is the point about which the plane is being fitted.
From statistical theory it is known that C is a positive definite matrix and that an eigenvector, N, corresponding to the smallest eigenvalue of C is a normal vector to the regression plane. However, it is not known which vector, N or −N, points away from the visible surface.
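A minimal sketch of this normal estimation follows: it forms the covariance of the neighborhood about the query point and returns the eigenvector of the smallest eigenvalue, i.e. the direction of least spread. The sign ambiguity between N and −N is deliberately left unresolved here; it is handled by the spanning-tree step described next.

```python
import numpy as np

def estimate_normal(neighborhood, query_point):
    """Estimate the tangent-plane normal at query_point from nearby 3d-spels.

    Builds the covariance (scatter) matrix of the neighborhood about the query
    point and returns the unit eigenvector of its smallest eigenvalue, i.e. the
    least-squares plane normal.  The sign of the result (N or -N) is ambiguous.
    """
    pts = np.asarray(neighborhood, dtype=float)
    diffs = pts - np.asarray(query_point, dtype=float)
    c = diffs.T @ diffs                      # sum of outer products (v - O)(v - O)^T
    eigenvalues, eigenvectors = np.linalg.eigh(c)
    return eigenvectors[:, 0]                # eigh returns eigenvalues in ascending order

# Points scattered in the z = 0 plane: the estimated normal is (0, 0, +/-1).
rng = np.random.default_rng(0)
flat = np.column_stack([rng.normal(size=(50, 2)), np.zeros(50)])
print(estimate_normal(flat, flat.mean(axis=0)))
```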
To determine the proper orientation of the normal vector, a minimal spanning tree is built. For a set of edges and vertices such as those collected by DLSS, a minimal spanning tree is simply a graph in which the sum of the lengths of the edges is minimal and that contains all the vertices.
During a traversal of the minimal spanning tree, a normal vector is expected to be oriented similarly to that for neighboring tangent planes. For example, a cluster of 10 planes would not generally be expected to have 9 normal vectors pointing in one direction while the 10th points in the opposite direction. Based on threshold values, a normal vector is negated based on its neighbors.
A surface-fitting algorithm, known as Marching Cubes, is implemented for visualizing the surface of a 3D object. The Marching Cubes algorithm was developed in 1987 by Lorensen and Cline. {W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. Computer Graphics, 21(4):163-169, July 1987.} It uses a specified threshold value to determine the surface to render. The basic idea is based on this principle: "If a point inside the desired volume has a neighboring point outside the volume, the iso-surface must be between these points."
The three-dimensional case works analogously but, as expected, it is more complicated. It has been observed that cubes rather than squares approximate the boundary of the surface. Note that it is important to know whether a point is inside or outside the surface. This is accomplished by using the signed distance previously described. Recall that the signed distance to the surface is the distance from the point to the surface with a negative sign affixed if the point is inside the surface and a positive sign affixed if the point is outside the surface.
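The inside/outside classification drives the case selection in Marching Cubes: the signs of the distances at a cube's eight corners are packed into an index that selects a triangle configuration from the case table. The sketch below shows only that classification step; the bit ordering of the corners and the omission of the case table itself are simplifications made here for illustration.

```python
def cube_case_index(corner_distances, threshold=0.0):
    """Pack the inside/outside status of a cube's 8 corners into one index.

    corner_distances -- signed distances at the 8 cube corners (negative
                        means inside the surface, as described above)
    Returns an integer in [0, 255] that would be used to look up the triangle
    configuration for this cube in the Marching Cubes case table.
    """
    index = 0
    for bit, d in enumerate(corner_distances):
        if d < threshold:                 # this corner lies inside the surface
            index |= 1 << bit
    return index

# One corner inside, seven outside: one of the simple single-triangle cases.
print(cube_case_index([-0.2, 0.5, 0.4, 0.6, 0.7, 0.3, 0.8, 0.5]))  # -> 1
```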
With reference to
Another of the 16 cases is shown in
For the other two vertices, two triangles 100, 102 are added. The first added triangle 100 has, as its base, the edge between the two vertices known to lie inside the surface. The third vertex 104 is chosen as the midpoint of either of the edges on the opposite face. The second added triangle 102 is chosen with its base as the midpoints 104, 106 of the two edges on the opposite face and its vertex 96 as one of the 2 vertices known to lie in the cube. This essentially creates a plane that passes through the two vertices 94, 96 and the midpoints 104, 106 of the opposite face. A side view of the process is illustrated in
With reference now to
With reference now to
Texture mapping or pattern mapping is the process by which a texture or pattern in a plane is mapped to a 3D object. In DLSS embodiments, the objective is to map the texture or pattern from a subset of a 2D bitmap to the triangle that represents that subset in 3-space.
The 3D modeling process begins with pixel coordinates from a set of 2D images. These pixels are passed through a process that assigns them 3D coordinates. Furthermore, a 3d-spel is connected to the pixel coordinates from which it originated. When the triangulation process forms a triangle, it is thus possible to determine a triangular region from a bitmap that corresponds to the 3D triangle. The texture can then be transferred from the bitmap to the 3D model.
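Because every 3d-spel retains the image and pixel coordinates from which it originated, matching a mesh triangle to a triangular bitmap region is essentially a lookup. The sketch below assumes a hypothetical per-vertex record of (image id, u, v); the record layout and function name are illustrative and not taken from the patent.

```python
def triangle_texture_region(triangle, pixel_records):
    """Find the bitmap region corresponding to a mesh triangle.

    triangle      -- a tuple of three 3d-spel identifiers (the triangle vertices)
    pixel_records -- dict mapping a 3d-spel identifier to (image_id, u, v),
                     the source image and pixel coordinates it came from
                     (a hypothetical record kept from 3d-spel generation onward)
    Returns the source image id and the three 2D pixel corners of the
    triangular region whose texture is pasted onto the visible face.
    """
    records = [pixel_records[v] for v in triangle]
    image_ids = {image_id for image_id, _, _ in records}
    if len(image_ids) != 1:
        raise ValueError("vertices came from different images; pick one image")
    image_id = image_ids.pop()
    corners = [(u, v) for _, u, v in records]
    return image_id, corners

records = {0: ("img_03", 120, 45), 1: ("img_03", 160, 48), 2: ("img_03", 140, 90)}
print(triangle_texture_region((0, 1, 2), records))
```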
2. Digital Data Capture
The first step in a preferred embodiment of the DLSS process is digital data capture. Typically, data is captured from several image sensor positions, or several image sensors, and combined by the previously described methods.
Suitably, the image sensor is mounted on a slider 130 of length L as shown in
The following assumptions are made in selected DLSS embodiments:
In the following description, it is shown how DLSS embodiments use these images and one of two candidate-point algorithms to generate 3d-spels. One of these algorithms assumes that the images are color images. The other assumes only that the image used for texture mapping is a color image.
3. Generating 3d-spels
Generating 3d-spels involves 2 steps. The first step is to find the same point in two of the images that were collected in the Data Capture Step. This means that pairs of pixels are found, one from image 1 and one from image n, that correspond to the same point in space. Such points are referred to as candidate 3d-spels. The candidate 3d-spels are accepted or rejected based on criteria described below. The second step is to calculate the coordinates for an accepted candidate 3d-spel by the methods previously described in the section on DLSS theory. As indicated in that section, calculating the (x,y,z) coordinates is straightforward provided a proper point is located. However, care is taken to verify that the pixels in image 1 and image n do in fact represent the same point in space, so that accurate 3D models result.
Suitably, DLSS embodiments optionally use one of two methods to generate candidate 3d-spels, and to accept or reject them as actual 3d-spels. One method is based on color differential in the center image. This method is called Color Differential Analysis (CDA). The second method is based on edge detection in gray scale images and is called Gray-Scale Differential Analysis (GDA). A description is provided herein for each method.
The first step in CDA is to form a three dimensional array from the sequence of images 140 previously generated as shown in
Factors used for candidate 3D-spel location by color differential and DLSS methods are the cross sections 142 of the array of images. These cross sections 142 are simply y-slices of the array E that represent the points in the images located at a fixed y-value in space. It is now apparent why the assumption was made that there are no changes in y-values for an object as the image sensor moves along the slider during the data capture step. Since analysis is done on horizontal cross sections, it follows that the y-values should be consistent from image 0 to image N.
In
Most cross sections are not as simple as the one above. A typical cross section 154 may look more like that shown in
CDA matches pixels by first looking for color differential in the aforementioned center image. By color differential, what is meant is an abrupt change in color moving across a set of pixels for a fixed y-value. In
In the next step, CDA tests a family of lines 164 that pass through the points of color differential and that pass from the first image 166 to the last 168. Such a family 170 is illustrated in
For all such lines in the sequence, the dispersion or color variance of the pixel line is calculated. Suitably, this is simply the variation in the Red, Green, and Blue components of the pixels along this line. From the diagram, it is seen that the minimum variance will be on the line that represents a definite boundary 172 between objects of different color. In fact, if noise were not present (due to aliasing, inexact lighting or instrument error for example), the variance would be zero along such a line. For points not on the boundary line 172, the color variance will be non-zero.
In the CDA method, the line with the minimum color variance is tested. If the minimum variance exceeds a pre-set threshold, the point is rejected and analysis proceeds to the next point. If it is below the threshold, the point is accepted as a 3d-spel. It is easily seen that, should a point be accepted, the needed pixel offsets may be found so that the (x,y,z) coordinates of the point may be calculated as previously described.
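A sketch of the CDA acceptance test over a single cross section is given below; the slice layout, the linear parametrization of the candidate lines, and the scalar variance measure (summed over the R, G and B channels) are illustrative assumptions consistent with, but not copied from, the description above.

```python
import numpy as np

def cda_accept(cross_section, center_x, max_offset, threshold):
    """Color Differential Analysis acceptance test for one candidate pixel.

    cross_section -- array of shape (num_images, width, 3): one y-slice of the
                     image stack, i.e. RGB values of a fixed row in every image
    center_x      -- x position of the color-differential pixel in the center image
    max_offset    -- largest total pixel shift to test from first to last image
    threshold     -- maximum allowed color variance along an accepted line
    Returns (accepted, best_offset): whether the candidate becomes a 3d-spel
    and the pixel offset (parallax) of the best-fitting line.
    """
    num_images, width, _ = cross_section.shape      # assumes at least two images
    best_variance, best_offset = np.inf, None
    for offset in range(max_offset + 1):
        # A straight line through the stack: the pixel shifts linearly, passing
        # through center_x in the middle and by +/- offset/2 at the two ends.
        fractions = np.arange(num_images) / (num_images - 1) - 0.5
        xs = np.round(center_x + offset * fractions).astype(int)
        if xs.min() < 0 or xs.max() >= width:
            continue
        line = cross_section[np.arange(num_images), xs]   # RGB values along the line
        variance = line.var(axis=0).sum()                 # R + G + B variance
        if variance < best_variance:
            best_variance, best_offset = variance, offset
    return best_variance < threshold, best_offset
```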
In addition to the CDA analysis described above, DLSS methods also identify candidate 3d-spels by a second method that performs Gray Scale Differential Analysis on gray scale images. This method allows considerable performance gains over CDA by using simpler methods for locating candidate 3d-spels and for searching.
The image capture process takes grayscale images, rather than the color images used by CDA, as the image sensor slides along horizontally. These pictures are first passed through an edge detection algorithm. Edge detection takes the place of the analysis of rows of pixels in the center image. The result of the edge detection process is a second series of grayscale pictures. The intensity value in these images indicates the degree of “strength” or “hardness” of the edge, 0 for no edge, 255 for a large discontinuity for example.
The edge-detected images are arranged in a 3 dimensional array, just as the color images were in CDA, as shown in
If it is possible to find a line of edges through a cross section, this implies the existence of a 3d-spel whose distance from the image sensor can be calculated by the previously described methods. Just as in CDA, the number of pixels the point travels from the first image 178 to the last 180 is readily available and determines the distance to the point.
To find 3d-spels, edges 176 are found that can be followed as they move through the images taken by adjacent image sensors. The process of finding all lines in all cross images is a highly computationally intensive task. In order to minimize the time required for these computations, the search algorithm tries to minimize the search time and reduce the search space. The primary means of minimizing line searching is to look in the bottom row 178 for a strong edge, and when one is found, try to find an edge in the top row 180 that matches it. If such a match is found, a candidate 3d-spel has been located and an attempt is made to follow the edge through the other images.
The degree of fit is measured by finding all pixels on the line 176 drawn between strong edges identified on the top and bottom images. If all pixels on the line exceed a certain threshold, and if they are all of roughly the same strength, then it is concluded that the line represents a point in 3-space. The (x,y,z) location of the point can then be determined from the image as described previously.
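The corresponding GDA test can be sketched the same way: the line between a strong edge in the bottom row and a matched strong edge in the top row is accepted only if every pixel along it is a sufficiently strong edge of roughly uniform strength. The threshold values below are placeholders, not values specified by the patent.

```python
import numpy as np

def gda_line_is_edge(edge_cross_section, bottom_x, top_x,
                     min_strength=128, max_spread=40):
    """Test whether a candidate line through an edge-detected cross section
    corresponds to a point in 3-space.

    edge_cross_section -- (num_images, width) array of edge strengths (0-255)
    bottom_x, top_x    -- x positions of the matched strong edges in the
                          bottom (first) and top (last) images
    The line is accepted if every pixel on it exceeds min_strength and all
    strengths lie within max_spread of one another (placeholder thresholds).
    """
    num_images, width = edge_cross_section.shape
    xs = np.round(np.linspace(bottom_x, top_x, num_images)).astype(int)
    if xs.min() < 0 or xs.max() >= width:
        return False
    strengths = edge_cross_section[np.arange(num_images), xs]
    return bool(strengths.min() >= min_strength and
                strengths.max() - strengths.min() <= max_spread)
```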
With reference now to
4. Point Registration
After the 3d-spels are calculated at all of the image sensor positions, they are merged into a single data set of 3d-spels with a common origin. This allows generation of a point cloud of the entire space or object.
As previously described, one image sensor position may be obtained from another either by a known series of translations and/or rotations (Case A) or in a manner that is not known in advance (Case B).
If Case A applies, the inverse transformation is applied for each transformation in the known series. For example, suppose image sensor position B is obtained from an image sensor location by translating points by +10 units in the y-direction and rotating by 45 degrees about the y-axis. To register the points from image sensor B to the origin for the image sensor location, the transformation is applied that rotates by −45 degrees about the y-axis and translates by −10 in the y-direction.
If Case B applies, DLSS methods use routines that locate the same three points in each of camera A and camera B. Suitably, the same set of 3 points is located from each of the camera positions. Optionally, this step is separate from actual data capture from a given camera location. Using techniques previously summarized, DLSS embodiments proceed as follows:
When point registration is complete, a 3D cloud of points is available for the entire space or object.
5. Triangulation and Mesh Generation
The next step in the DLSS preferred embodiment is to build a mesh, in particular a triangle mesh, from the set of 3d-spels gathered as described above. After the points are loaded in an appropriate data structure, they are filtered.
There are several reasons why the filtering step is preferable. First, the 3d-spel generating program generates a large number of points, often far too many to allow the triangulation procedure to work efficiently. In other technologies, such as laser technology, the number and approximate location of 3D points are controllable by the user. Optionally, in DLSS, all 3d-spels identified by CDA or GDA are included. In areas of high contrast or in areas replete with edges, many extra or superfluous points may be generated.
To improve the efficiency of the triangulation algorithm, the filtering algorithm seeks a single, representative point to replace points that are "too close" together. By "too close", it is meant that the points all lie within a sphere of radius R, where R is an input parameter to the triangulation subsystem. As R increases, more and more points are removed. Conversely, as R decreases, the number of points retained is increased. Clearly the size of R affects the final 3D resolution of the surface.
Another problem is that some 3D-spels may simply be noise. Due to the lighting, reflections, shadows, or other anomalies, points are generated that are not part of the surface. These points are preferably located and removed so that the surface is modeled correctly. Generally, noisy points are characterized by being “far away from any supporting points”, i.e. they are very sparsely distributed within some sphere.
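A brute-force sketch of the two filtering passes just described follows: nearby points are collapsed onto a single representative within the first radius, and sparsely supported points are removed using the second radius. The greedy merge order and the all-pairs distance tests are simplifications made for clarity; a practical implementation would use a spatial index.

```python
import numpy as np

def filter_point_cloud(points, merge_radius, noise_radius, min_neighbors):
    """Thin and de-noise a 3d-spel point cloud (brute-force sketch).

    merge_radius  -- points within a sphere of this radius are replaced by
                     their centroid (first filtering pass)
    noise_radius, min_neighbors -- a surviving point is kept only if at least
                     min_neighbors other points lie within noise_radius of it
    """
    pts = np.asarray(points, dtype=float)

    # Pass 1: greedily merge clusters of nearby points into one representative.
    merged, used = [], np.zeros(len(pts), dtype=bool)
    for i in range(len(pts)):
        if used[i]:
            continue
        close = np.linalg.norm(pts - pts[i], axis=1) <= merge_radius
        close &= ~used
        merged.append(pts[close].mean(axis=0))
        used |= close
    merged = np.array(merged)

    # Pass 2: drop sparsely supported (noisy) points.
    kept = []
    for p in merged:
        dists = np.linalg.norm(merged - p, axis=1)
        if np.count_nonzero(dists <= noise_radius) - 1 >= min_neighbors:
            kept.append(p)
    return np.array(kept)
```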
In a preferred embodiment, the construction of the triangle mesh itself is done by the method of Marching Cubes. Marching Cubes is a well-documented way of generating a triangle mesh from a set of points. The Method of Marching Cubes was developed by Lorensen and Cline in 1987 and has since been expanded to include tetrahedra and other solids. The input to this algorithm is the filtered set of points and a signed distance from the object being modeled to each of the points.
The notions of signed distance and a summary of the Marching cubes have been previously described. Together, these concepts allow the generation of triangles. The triangle generating process, like the 3d-spel generation process, is one of the most computationally expensive steps in the DLSS process.
To generate the triangle mesh, a set of cubes with controllable volume is superimposed over the set of 3d-spels. Controllable volume simply means that the length of one edge of the cube is an input parameter to the triangulation subsystem. It is easily seen that as the volume of the cubes decreases, the surface is modeled more closely.
Triangles are chosen by the methods of Marching Cubes. Hence the cubes that have 3d-spels inside, as well as close neighboring 3d-spels outside the cube, are identified. The triangles are formed according to the 16 possibilities specified by the Marching Cubes algorithm and illustrated in the diagrams of
In addition to determining the vertices of the triangles, each triangle is oriented, i.e., the visible face of the triangle is determined. Using the signed distance and the vertices of the triangle, it is possible to determine a normal vector to each triangle that points away from the visible side of the triangle. The orientation step is performed since the texture of the surface is pasted on the visible side of the triangle. An incorrect orientation would result in a partially or completely inverted surface. When the triangulation process is finished, it is possible to display the object or space being modeled as a wire frame or wire mesh 184 as shown in
6. Texture Mapping
The last step in the preferred DLSS method is to map texture to each of the triangles generated in the Triangulation process. The input to this step is
Since DLSS methods begin with pixels and end with 3d-spels, it is possible to invert that process and find the pixel from which a 3d-spel was generated. Using the input data above, each of the 3d-spels that form the vertices of a triangle is converted to pixel coordinates. Furthermore, the image from which the triangle was generated is known.
Using these pixel coordinates, a triangle with 3d-spels as vertices can be matched with a triangular region in the image from which it was taken. Consequently, the triangular region from that image corresponding to the given triangle can be extracted and pasted onto the triangle.
After texture mapping is done, a complete, photo-realistic, spatially accurate model 186 of the space is available as shown in
Alternative Configurations for DLSS Systems
Using the processes described in the previous sections, three exemplary embodiments of DLSS systems and/or methods are now disclosed. The first two primarily model objects or sets of objects, and the third is used primarily to model large spaces. A description of each embodiment is provided below.
Single Sensor Imager
The Single Sensor Imager is appropriate for modeling lateral surfaces of an object or objects placed on a rotating table. A typical single sensor arrangement 188 is shown in
To use the exemplary single image sensor object modeler, the user specifies three parameters:
A user may elect to project a grid of lines or points onto the object being modeled. This has the effect of introducing artificial edges and/or artificial points of color differential. Using these artificially generated edges, more candidate 3d-spels can be located on the object and, consequently, the model is more accurate.
When the user is satisfied with the setup, the following process is followed:
As indicated above, the exemplary single image sensor object modeler is suited to providing lateral views of an object. Due to the fixed position of the image sensor, it is possible that the top or bottom of an object will be insufficiently modeled. In the next section, a DLSS setup is described that can model an object from multiple image sensor positions.
Multi-Sensor Imager
The Multi-Sensor Imager is an extension of the single image sensor modeler. In fact, each individual image sensor in the multi-sensor imager is simply an instance of a single sensor imager 198. The purpose of adding additional image sensors is to capture data from regions such as the top or bottom of the object being modeled.
An exemplary two-image sensor setup 200 is shown in
To use the multi-sensor system, the user first registers the various image sensor positions, i.e., the user establishes an origin that is common to the data collected from each image sensor. This is done by locating the same three points in each sensor and applying the coordinate transformations previously described.
After the coordinate systems are registered, in the exemplary embodiment, the user specifies the parameters R, N, and B described above. These parameters may vary for each image sensor or they may all be assigned common values. As in the single image sensor case, the user may elect to project a grid on the object to introduce artificial edges and points of color differential.
When the user is satisfied with the set up,
This model is appropriate for modeling objects that are in the middle of or surrounded by image sensors located at various positions. It is an improvement over the single image sensor method since it allows the possibility of collecting additional 3d-spels to model the top or bottom of an object.
Pan and Tilt Imager
With reference to
The pan and tilt imager method distinguishes itself from the other two methods in several ways:
To use the exemplary pan and tilt modeler, the user specifies which subset of the space is going to be mapped. Due to the varying distances to the boundaries of the space, the field of view may change. For example, if the image sensor is in its initial position, the distance to the enclosure boundaries and the size of the image sensor lens will determine the field of view. If the image sensor is then tilted up or down, or rotated left or right, the distance to the boundary may change. If this distance increases, the field of view becomes larger. If the distance decreases, the field of view becomes smaller.
To accommodate the change in field of view, the DLSS system automatically determines a sequence of pans and tilts that will cover the entire area to be modeled. The sequence that generates the model is given here:
Viewing and Imaging Tool for DLSS 3D Models
The preferred DLSS embodiments model objects or scenes (spaces) and generate spatially accurate, photo-realistic 3D models of the objects or spaces. It is thus advantageous to have a viewing tool for examining and manipulating the generated 3D models.
The viewing tool has file menu selections 214 for opening 3D models, opening 3D point clouds, importing other 3D model types, exporting to other 3D model types, and exiting the viewer. Edit menu selections 216 are provided for copying selected sets of points, cutting selected sets of points, pasting sets of points from previous copy or cut operations, and for deleting sets of points. A selection is also provided for setting user preferences. A view menu 218 provides selections for setting a navigation mode, for adjusting the field of view, for centering the viewed model in the view area, and for selecting various viewpoints. Provision is also made for setting the view area to a full screen mode, for displaying in either wire-frame or texture-mapped modes, for showing the X, Y and Z axes, or for showing the X, Y and Z planes in the view area. Optional tool bars 220 and status bars 222 are also provided.
While this overview of the viewing and imaging tool provides a basic description of the tool, it is not an exhaustive description, and additional features and menus may be provided with the tool as are well known in the art.
Navigation Modes
The exemplary viewing and imaging tool provides four ways to view 3D models. A fly mode provides flexible navigation in the view area. This mode is similar to the interactive modes used on many interactive video game systems. A spin mode, the default mode, permits rotating the 3D model 212 in the view area on each of its axes so the model can be viewed from any angle. A pan mode allows the user to pan around the 3D model 212 in the view area. A zoom mode provides for zooming in towards the 3D model 212 or out from the 3D model 212. While the aforementioned modes provide the essential requirements for viewing 3D models, the viewing and imaging tool is not limited in scope to these modes and other modes may be provided.
While particular embodiments have been described, alternatives, modifications, variations, improvements, and substantial equivalents that are, or may be presently unforeseen, may arise to applicants or others skilled in the art. Accordingly, the appended claims as filed, and as they may be amended, are intended to embrace all such alternatives, modifications, variations, improvements and substantial equivalents.