Methods and apparatus are disclosed for accurately surveying and determining the physical location of objects in a scene, using image data captured by one or more cameras together with three points from the scene which may either be measured after the images are captured or may be included in a calibrated target placed in the scene at the time of image capture. Objects are located with respect to a three dimensional coordinate system defined with reference to the three points. The methods and apparatus permit rapid set up and capture of precise location data using simple apparatus and simple image processing. The precise location and orientation of the camera utilized to capture each scene are determined from image data, from the three point locations and from optical parameters of the camera.

Patent: 5699444
Priority: Mar 31 1995
Filed: Mar 31 1995
Issued: Dec 16 1997
Expiry: Mar 31 2015
14. A method of measuring the absolute three dimensional location O of a camera with respect to a coordinate system defined using three points, A, B and C, separated by known distances using image data comprising:
a. capturing an image of a scene containing the points A, B, and C, using a camera of known principal distance,
b. determining the location of said camera at the time said image was captured with reference to said coordinate system using 3 to 5 points from said image, principal distance and said known distances.
13. A method of measuring the absolute three dimensional location O of a camera with respect to a coordinate system defined using three points, A, B and C, separated by known distances using image data comprising:
a. capturing an image of a scene containing the points A, B, and C, using a camera,
b. determining the principal distance of said camera,
c. determining the location of said camera at the time said image was captured with reference to said coordinate system using 3 to 5 points from said image, principal distance and said known distances.
1. A method of measuring the absolute three dimensional location of a point D with respect to a coordinate system defined using three points, A, B and C, separated by known distances using image data comprising:
a. capturing two images of a scene containing the points A, B, C and D, using one or more cameras of known principal distance,
b. determining the location and orientation of said one or more cameras at the time each of said images was captured with reference to said coordinate system using 3 to 5 points from said images, principal distance and said known distances,
c. using the locations of the one or more cameras at the time the images were captured to determine the location of said point D from image data.
17. Apparatus for measuring the absolute three dimensional location of a point D with respect to a coordinate system defined using three points, A, B and C, separated by known distances using image data comprising:
a. one or more cameras for capturing images of a scene containing the points A, B, C and D,
b. means for storing images captured by said one or more cameras,
c. means for processing stored images to determine the location and orientation of said one or more cameras at the time each of said images was captured with reference to said coordinate system, using 3 to 5 points from said images, principal distance and said known distances,
d. means for using the locations of said one or more cameras at the time the images were captured to determine the location of said point D from image data.
15. A method of measuring distance including vertical height comprising:
a. measuring the absolute three dimensional location of points D, E and F with respect to a coordinate system defined using three points, A, B and C, separated by known distances using image data by:
a1. capturing two images of a scene containing the points A, B, C, D, E and F, using one or more cameras of known principal distance,
a2. determining the location and orientation of said one or more cameras at the time each of said images was captured with reference to said coordinate system using points A, B, and C from said images, principal distance and said known distances,
a3. using the locations of the one or more cameras at the time the images were captured to determine the locations of said points D, E and F from image data,
b. determining distances between points D, E and F, and
c. using the location of said points D, E and F and the location of one or more cameras at the time images were captured to determine the location of other points.
2. A method of measuring the absolute three dimensional location of a point D with respect to a coordinate system defined using three points, A, B and C, separated by known distances using image data comprising:
a. capturing two images of a scene containing the points A, B, C and D, using one or more cameras of known principal distance,
b. determining the location and orientation of said one or more cameras at the time each of said images was captured with reference to said coordinate system using information derived from said images, principal distance and said known distances, and
c. using the locations of the one or more cameras at the time the images were captured to determine the location of said point D from image data by
c1. defining an auxiliary coordinate system with origin along the line joining the locations of the cameras,
c2. defining the center point of each image as an origin of a set of image reference axes pointing in X', Y' and Z' directions, respectively,
c3. measuring offset in at least one of the X' and Y' directions of a point on the first image and of a corresponding point of a second image,
c4. determining the angles formed between a line joining point D, the principal point of the objective and the image of point D on one of the X' or Y' planes for each of the images,
c5. determining a distance h representing a distance of point D to a line joining the principal points of said one or more cameras used to capture said two images using the measured offsets, the focal length and the angles,
c6. determining the X' and Y' coordinates of point D in the auxiliary coordinate system, and
c7. transforming coordinates (X', Y', h) of the auxiliary coordinate system to a representation in said coordinate system defined using said three points, A, B and C.
3. A method of measuring the absolute three dimensional location of a point D with respect to a coordinate system defined using three points, A, B and C, separated by known distances using image data comprising:
a. capturing two images of a scene containing the points A, B, C and D, using one or more cameras of known principal distance,
b. determining the location and orientation of said one or more cameras at the time each of said images was captured with reference to said coordinate system using information derived from said images, principal distance and said known distances by,
b1. representing the distances between points A, B and C and the principal point of a camera O as a viewing pyramid,
b2. modifying the representation of the pyramid to a three triangle flattened representation,
b3. selecting a low estimate Ob1 for one interior side of a first triangle of said flattened representation,
b4. solving the first triangle using image data, principal distance and said known distances, yielding, inter alia, a first calculated value for length OA, given estimate Ob1,
b5. solving the second triangle using results obtained,
b6. solving the third triangle using results obtained, yielding, inter alia, a second calculated value for length OA,
b7. subtracting the second calculated value for length OA from the first calculated value for length OA to produce a difference value,
b8. revising the value of estimate Ob1 by adding said difference value to achieve a revised estimate,
b9. iterating steps b4 through b8 using the revised estimate until said difference value is less than a desired accuracy, and
b10. deriving values for camera location using one or more sets of values for distances OA, OB and OC, and
c. using the locations of the one or more cameras at the time the images were captured to determine the location of said point D from image data.
4. The method of claim 3 in which the step of deriving values for camera location using one or more sets of values for distances OA, OB and OC comprises solving simultaneously equations for spheres centered at points A, B and C with respective radii of OA, OB and OC.
5. The method of claim 3, further comprising:
k. determining the orientation of one or more of said cameras by calculating the azimuthal and elevational adjustment required to direct the camera to the location of point A.
6. The method of claim 5, further comprising:
l. determining the orientation of one or more of said cameras by calculating the amount of rotation about the optical axis required to align point B once the camera points at point A.
7. The method of claim 5 further comprising iterating steps k and l until the degree of alignment is within the desired degree of accuracy.
8. The method of claim 1 used to measure the distance between two points.
9. The method of claim 1 used to measure distances in a vertical direction.
10. The method of claim 1 used to accurately locate the physical position of objects visible in said images.
11. The method of claim 1 used to create a three dimensional wireframe representation or a three dimensional surface model comprising surface elements of 3 or 4 vertices.
12. The method of claim 1 used to document the as built condition of an object.
16. The method of claim 15 in which the locations of points D, E and F are used to determine the location of said other points using image data from images different from those used to determine the location of points D, E and F.
18. Apparatus as claimed in claim 17 in which the location of point D is stored in a database utilized to store a three dimensional wireframe representation.
19. Apparatus as claimed in claim 18 in which the location of point D is stored in a database of locations of points surveyed.

The invention relates to the field of image processing and more particularly to methods and apparatus for determining camera position and orientation from an image captured with that camera and to accurate surveying using such methods and apparatus.

Since the invention of the stereoscope in 1847, inventors have attempted to replicate three dimensional (3D) images found in nature. Two dimensional images lack realism due to the absence of depth cues. Many techniques have been devised for producing 3D images with varying degrees of success.

Stereoscopic photographic cameras are known which utilize a single camera body and two objective lenses separated by a fixed distance, usually corresponding to the interocular distance. Other such cameras use a single objective and external arrangements which form two image areas on film positioned on the camera's image plane. Still other arrangements use two separate cameras separated by a fixed distance to form images corresponding to a left and right eye view of the scene being photographed.

Once stereoscopic photographic images of the prior art are developed, they are often viewed through separate eye pieces, one for each eye. Each eye piece projects a view of a respective one of the developed images which the user's eyes would have seen had the eyes viewed the scene directly. Depth is clearly discernable when viewing a stereoscopic image.

There are several problems with prior art techniques for generating three dimensional images. First, the requirement that there be a fixed camera to camera or objective to objective separation limits flexibility in the construction of cameras. The requirement for two objective lenses or two cameras dictates special apparatus in order to capture stereoscopic images.

Another problem with the prior art is that complicated lens arrangements are necessary to view stereoscopic images. Further, in the stereoscopic photographic systems of the prior art, depth was not readily quantifiable.

Calculating depth is a difficult task when using images captured from different positions vis-a-vis the scene being photographed because the planar relationships which result from projection of a three dimensional scene onto a two dimensional plane do not undergo a linear transformation or mapping compared with the same points projected onto a different image plane. Different portions of a scene viewed from one point relate differently to corresponding points from the same scene viewed from another point. As one changes viewing position, some portions of a scene become hidden. Planar surfaces which are viewed normally in one view are reduced in extent when viewed obliquely.

In the prior art, methods and apparatus are known for surveying a plot of land to identify the locations of significant features of the plot. Typically, this involves a team of surveyors who go to the plot and make physical measurements of distance and angle using a surveyor's transit theodolite and calibrated standards for measuring distance. Surveys using these techniques are typically baselined against a national grid of survey markers. This technique is subject to errors of various kinds in reading the instruments and in performing calculations.

Aerial surveying is also known. Images are captured from an airplane or other vehicle in transit over an area to be surveyed at positions which are precisely known by modern navigation techniques. Position of significant ground features can then be calculated using sophisticated image processing techniques which often require supercomputers. Aerial surveying techniques have the advantage that they can be accomplished without the need to place people on the ground in the area to be surveyed. Inaccessible terrain can also be surveyed in this way. However, expensive image capture equipment is required and even with very good optics and image processing, the resolution is not always as good as one might like. Also, accurate measurements in the vertical direction are even more difficult to take using aerial techniques.

In forensic investigations such as those of a crime scene or archeological dig, spatial relationships are very important. Such investigations often occur under conditions where some urgency or public necessity exists to vacate the scene of the investigation in a short period of time. If a freeway is blocked for an investigation during rush hour, the need to resume traffic flow is a political necessity. In crime scene analysis, if details are not observed and recorded immediately, valuable evidence may be lost. In such circumstances, there is not time for a careful manual survey and aerial techniques generally lack needed resolution or are too expensive for general application to police investigations.

In a manufacturing environment, it is often desirable to determine, with substantial accuracy, the physical details of a product "as built," either for inspection purposes or for documentation.

In manufacturing, it is often desirable to capture the physical dimensions of complex objects for purposes of creating a three dimensional (3-D) representation, such as a wireframe, for use in computer assisted design or computer assisted manufacturing (CAD/CAM). In entertainment, it is desirable to use such a 3-D representation for creating animations which result in changes to the position or viewing perspective of a 3-D object.

There is thus a need to accurately capture 3-D information about objects and scenes in ways which are convenient and economical and which don't require sophisticated computing equipment. There is also a need to accurately capture physical dimensions of objects in the vertical direction which might be inaccessible to a physical survey.

Every recorded image, whether it be a photograph, a video frame, a true perspective drawing or other form of recorded image, has associated with it a viewing location and viewing look angles that exactly describe the orientation of the recording mechanism relative to the recorded scene.

When making distance calculations from images captured using cameras, it is necessary to know the location of the camera at the time the picture was taken, or more precisely the front principal point of the camera lens or system of lenses at the time the picture was taken. To calculate distances accurately, it is also desirable to know the azimuth, elevation and rotation angle of the optical axis of the lens or lens system as it emerges from the camera.

In the prior art, camera location was either estimated or known a priori by locating the position from which the picture was taken using surveying techniques. Typically, rotation angle was assumed to be 0 (horizontal) and elevation and azimuth were either measured with varying degrees of accuracy or estimated. Clearly, such surveying and measurement increase the set up time required before capturing images for analysis, often to the point where any hope of accurate measurements would be abandoned in favor of qualitative information which could be gleaned from images captured under uncontrolled conditions.

The need for accurate viewing parameters is being expressed by an ever increasing population of computer users who use digital and analog images for a wide range of purposes, from engineering measurement applications to marketing and sales presentations.

For example, stereo photographs are frequently used to investigate and document accident or crime scenes. The accuracy of the documentation depends to a high degree on knowing exactly the viewing parameters of the cameras at the time the photographs were taken.

Computer-generated renderings are often merged with actual photographs to convey an image of a completed construction project while still in the planning and review stages. In order to make the computer rendering blend into and match the photograph in a visually convincing manner, it is necessary for the viewing parameters of the computer rendering to be exactly the same as the viewing parameters of the camera that took the photograph.

Typically, the viewing parameters for any given recorded image are unknown and difficult to determine with a high degree of accuracy, even when the camera positions are physically measured relative to some established coordinate system. The difficulties arise from the fact that the camera lens principal points are usually located inside the lens structure and therefore inaccessible for purposes of direct measurement. The measurement of viewing angles is even more difficult to accomplish without the use of surveying type tripods, levels and transits.

Photogrammetry is a science that deals with measurements made from photographs. Generally, photogrammetrists use special camera equipment that generates fiducial marks on the photographs to assist in determining the viewing parameters. Non-photogrammetric cameras can be used in some analyses; however, the associated techniques generally require knowing the locations of a large number of calibration points (five or more) that are identifiable in the recorded scene. Generally, the three-dimensional locations of five or more calibration points need to be known in terms of some orthogonal reference coordinate system in order to determine the viewing parameters. The Direct Linear Transform (DLT) is a five-point calibration procedure that is sometimes employed by photogrammetrists. It is usually difficult and expensive to establish the locations of these points, and it is certainly complicated enough to deter a non-technical person from attempting to determine the viewing parameters. Unless a tightly controlled calibration coordinate system is established prior to taking the photographs, it is necessary for the user to know a minimum of nine linear dimensions between the five points. This requirement limits the use of the technique considerably.

In some specialized cases, such as certain aerial surveying applications, conventional photogrammetry can be employed to determine camera parameters using as few as three calibration points. In particular, the Church resection model may be used when the optical axis of an aerial camera lens is within four or five degrees of looking vertically down on the terrain. Angular displacements from the vertical of more than a few degrees result in noticeable mathematical nonlinearities that are associated with transcendental trigonometric functions. Under these conditions, the Church resection model is no longer valid and the three-point calibration procedure no longer applies.

All of the calibration techniques discussed above suffer from a number of disadvantages:

(a) They require calibrated camera equipment;

(b) They require calibration targets consisting of too many points to make the procedures practical for common everyday use by non-professionals;

(c) Techniques which use a three-point calibration target are valid only over a very limited range of off normal camera look angles; and

(d) All of the previous methods for solving viewing parameters employ matrix operations operating on all point data at the same time, thus allowing one poorly defined measurement parameter to inject errors in a relatively unknown and indeterminable sense due to parameter cross-talk effects.

The problems of the prior art are overcome in accordance with the invention by automatically identifying camera location and orientation based on image content. This can be done either by placing a calibrated target within the field of the camera or by measuring the distances among three relatively permanent points in the scene of images previously captured. Using the points, the location and orientation of a camera at the time a picture was taken can be precisely identified for each picture. Once the location and orientation of the camera are known precisely for each of two or more pictures, accurate 3-D positional information can be calculated for all other identifiable points on the images, thus permitting an accurate survey of the scene or object. The images can be captured by a single camera and then used to generate stereo images or stereo wireframes.

Accordingly, besides the advantages of the simple three-point calibration target described above, several additional objects and advantages of the present invention are:

(a) to provide a decoupling of error terms such that Azimuth, Elevation and Tilt terms do not affect the accuracy of X, Y, and Z terms;

(b) to provide simple procedures that can be applied successfully by non-technical personnel;

(c) to provide an iterative solution such that all viewing parameters are determined to an accuracy in excess of 12 decimal places or the limitations of pixellation error, whichever is larger;

(d) to provide a test of all possible solutions prior to selecting the solution with the least error, and

(e) to provide a surveying system which permits capture of 3-D information at large angles off normal.

The above and other objects and advantages of the invention are achieved by providing a method of measuring the absolute three dimensional location of points, such as point D of FIG. 1 with respect to a coordinate system defined using three points, A, B and C, separated by known distances using image data. The image data is captured by using one or more cameras of known focal length to capture two images of a scene containing the points A, B, C and D. The location and orientation of the camera(s) at the time each of said images was captured is determined with reference to said coordinate system by using information derived from said images, the known focal length and the known distances. The locations of the cameras at the time the images were captured is then utilized with other image data, to determine the location of points such as point D.

The step of using the locations of the cameras at the time the images were captured to determine the location of said point D from image data includes defining an auxiliary coordinate system with origin along the line joining the locations of the cameras, defining the center point of each image as an origin of a set of image reference axes pointing in X', Y' and Z' directions, respectively, measuring offset in at least one of the X' and Y' directions of a point on the first image and of a corresponding point of a second image, determining the angles formed between a line joining point D, the focal point of the objective and the image of point D on one of the X' or Y' planes for each of the images, determining a distance h of point D from the line joining the camera locations using the measured offsets, the focal length and the angles, determining the X' and Y' coordinates of point D in the auxiliary coordinate system, and transforming coordinates (X', Y', h) of the auxiliary coordinate system to a representation in said coordinate system defined using said three points, A, B and C.

The step of determining the location and orientation of said one or more cameras at the time said images were captured with reference to said coordinate system using image data, known focal length and said known distances includes representing the distances between points A, B and C and the focal point of a camera O as a viewing pyramid, modifying the representation of the pyramid to a joined three triangle flattened representation, selecting a low estimate Ob1 for one interior side of a first triangle of said flattened representation, solving the first triangle using image data, known focal length and said known distances, yielding, inter alia, a first calculated value for length OA given estimate Ob1, solving the second triangle using results obtained, and solving the third triangle using results obtained, yielding, inter alia, a second calculated value for length OA. The second calculated value for length OA is subtracted from the first calculated value for length OA to produce a difference value, the estimate Ob1 is revised by adding said difference value, the solution is iterated using the revised estimate until said difference value is less than a desired accuracy, and values for camera location are derived using distances OA, OB and OC.

The process of deriving values for camera location using distances OA, OB and OC comprises solving simultaneously equations for spheres centered at points A, B and C with respective radii of OA, OB and OC.

When one determines the orientation of one or more of the cameras, one calculates the azimuthal and elevational adjustment required to direct the camera to the location of point A and calculates the amount of rotation about the optical axis required to align point B once the camera points at point A. This is done iteratively until the degree of alignment is within the desired degree of accuracy.

The invention can be used to measure the distance between two points especially in a vertical direction, to locate the physical position of objects visible in images accurately, to create a three dimensional wireframe representation and to document the "as built" condition of an object.

The invention is also directed to a method of measuring the absolute three dimensional location O of a camera with respect to a coordinate system defined using three points, A, B and C, separated by known distances using image data by capturing an image of a scene containing the points A, B, and C, using a camera, determining or knowing a priori the focal length of said camera, determining the location of said camera at the time said image was captured with reference to said coordinate system using information derived from said image, known focal length and said known distances.

The invention is also directed to a method of measuring distance including vertical height by measuring the absolute three dimensional location of points D, E and F with respect to a coordinate system defined using three points, A, B and C, separated by known distances using image data using techniques described above, by determining distances between points D, E and F, and by using the location of said points D, E and F and the location of cameras at the time images were captured to determine the location of other points. The other points may be optionally located on images different from those used to determine the location of points D, E and F.

The invention is also directed to apparatus for measuring the absolute three dimensional location of a point D with respect to a coordinate system defined using three points, A, B and C, separated by known distances using image data including one or more cameras for capturing images of a scene containing the points A, B, C and D, a memory interfaced to the camera(s) for storing images captured by the camera(s), a computer for processing stored images to determine the location and orientation of the camera(s) at the time each of said images was captured with reference to said coordinate system, using information derived from said images, known focal length and said known distances, and for using the locations of said one or more cameras at the time the images were captured to determine the location of said point D from image data. Location information can be stored in a database which can be used for different purposes. For example, it can be used to store a three dimensional wireframe representation or the locations of points surveyed.

Still other objects and advantages of the present invention will become readily apparent to those skilled in this art from the following detailed description, wherein only the preferred embodiment of the invention is shown and described, simply by way of illustration of the best mode contemplated of carrying out the invention. As will be realized, the invention is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the invention. Accordingly, the drawing and description are to be regarded as illustrative in nature, and not as restrictive.

FIG. 1 is an illustration of the capture of two images of a scene, including a building, according to the invention.

FIG. 2 is an illustration of a viewing pyramid of three calibration points as projected through the focal point of a camera.

FIG. 3 is an illustration of a flattened pyramid used for calculation of camera distance.

FIG. 4 is an illustration of viewing angle determination used in calculation of camera distance.

FIG. 5 is an illustration of near, mid and far ambiguity.

FIG. 6 is an illustration of how to resolve near, mid and far ambiguity.

FIG. 7 is an illustration of azimuthal and elevational correction.

FIG. 8 is a flow chart of the algorithm used to determine camera distance and orientation.

FIG. 9 is a flow chart of the algorithm used to calculate camera location.

FIG. 10 is an illustration of how to calculate the distance of a point from a line joining the principal points of two cameras.

FIG. 11 is an illustration of the calculation of the location of a point in the X direction.

FIG. 12 is an illustration of the calculation of the location of a point in the Y direction.

FIG. 13 is an illustration of how to calculate point location generally given a determination of the location and orientation of the camera at the time when two images were captured.

FIG. 14 is an illustration of hardware utilized in accordance with the invention.

FIG. 1 illustrates a building 100 in front of which is located a calibrated target such as a builder's square 110. Pictures of the building are taken from two positions. The first from point f1 and the second from point f2. f1 is the location of the principal point of the lens or lens system of a camera and the image projected through that point falls on image plane fp1. A second image of the scene is captured from position f2 and the image through principal point f2 is cast upon image plane fp2. The positioning of the cameras is arbitrary. In some circumstances, it is desirable to capture images from two locations using the same camera. In other circumstances, it may be desirable to capture the images using different cameras.

Typically, the camera is aimed so as to center the object of interest within the viewing frame. In the picture shown, both cameras are pointed at center point T which means that the images of points A, B and C on the builder's square are not in the center of the image.

Once images are available in viewable form for analysis, knowing the distance between the principal point and the image plane of the camera (principal distance) and the physical displacement of the points on the reproduced image, one may calculate the angles Af1 B, Bf1 C and Cf1 A because the angles subtended by pairs of points vis-a-vis the principal point are identical whether they are measured in the real scene or on the image plane side of the focal point.

In the implementation of this invention, a real world coordinate system is defined with the Y axis running through points A and C and an X axis defined perpendicular to the Y axis through point A in the plane of A, B and C, thus forming an origin 0 at point A. A Z axis is defined perpendicular to the XY plane and running through point A. By convention, the +Y direction runs from the origin at A to point C, the +X direction runs to the right when standing at the origin and facing the +Y direction and the +Z direction proceeds in a vertical direction from the origin out of the XY plane in a direction indicated by the cross product of a vector in the +X direction with a vector in the +Y direction.

Given this coordinate system, it is desirable to calculate the location of the camera, namely, the location of the principal point of the camera from which an image was captured. Thus, principal point f1 is located at (X1, Y1, Z1). Likewise, the principal point f2 is located at (X2, Y2, Z2).

With respect to that coordinate system, one can see that a camera directed at target point T has both an azimuth and an elevation which can be specified utilizing the coordinate system. In addition, the camera may be rotated about the optical axis of the camera differently when the two pictures were taken. In short, there is no guarantee that the camera was horizontal to the XY plane when the picture was taken and thus, the orientation of the images may require correction prior to processing.

FIG. 2 illustrates a viewing pyramid formed by the three points A, B and C vis-a-vis the point O (the principal point of a camera). The viewing pyramid can be viewed as having three surfaces, each corresponding to a surface triangle, namely, triangles AOB, BOC and COA. If one were to view the pyramid shown in FIG. 2 as hollow and made of paper and if one were to cut along the line OA and flatten the resulting pattern, one would achieve a flattened pyramid such as shown in FIG. 3.

FIG. 3 will be utilized to describe the process by which camera position is determined in accordance with the invention. The distance OA represents the distance from point A which is at the origin of the coordinate system to point O which is at the principal point of the lens.

At the beginning of the determination, one knows values for angles AOB, AOC and BOC by virtue of knowing the distance between the principal point and the image plane and the measured distance separating two points on the image plane.

FIG. 4 assists in illustrating how this is done. In FIG. 4, the XY plane constitutes the image plane of the camera. F0 is the principal point of the lens. Images of points A and B are formed on the image plane after passing through the principal point at locations A and B shown on the XY plane. The incoming rays from points A and B are respectively shown at 400 and 410 of FIG. 4. For purposes of image plane analysis, an image plane origin FP0 is defined and an X axis is defined as parallel to the longest dimension of the image aspect ratio. The Y axis is formed perpendicular thereto, and the origin FP0 lies directly under the principal point. Rays from points A and B form an angle alpha (<α) as they pass through the focal point. The projections of those rays beyond the focal point also diverge at <α. <α corresponds to <AOB of FIG. 3.

By taking careful measurements from the image capture medium (e.g. photographic film, digital array etc.), one can determine the distances AFP0 and BFP0.

Calculating the distances AF0 and BF0 using the Pythagorean theorem from the known distance F0FP0 (the distance between the principal point and the focal plane) and the measured distances AFP0 and BFP0, one may determine angle α using the law of cosines as follows:

AB² = (F0A)² + (F0B)² − 2(F0A)(F0B)·cos α    (1)

α = arccos[((F0A)² + (F0B)² − (AB)²) / (2(F0A)(F0B))]    (2)

Thus, by analyzing points in the focal plane, the angles separating points A, B and C can be determined in the manner just described.
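
A minimal sketch in Python of this angle computation, assuming image-plane offsets (x, y) measured from FP0 and expressed in the same units as the principal distance (the function name is illustrative, not taken from the patent):

```python
import math

def subtended_angle(p1, p2, principal_distance):
    """Angle (radians) between the rays to two image-plane points, as seen
    from the lens principal point.  p1 and p2 are (x, y) offsets measured
    from the image-plane origin FP0."""
    # Distance from the principal point to each image point (Pythagorean theorem).
    r1 = math.sqrt(p1[0] ** 2 + p1[1] ** 2 + principal_distance ** 2)
    r2 = math.sqrt(p2[0] ** 2 + p2[1] ** 2 + principal_distance ** 2)
    # Separation of the two image points on the image plane.
    d = math.dist(p1, p2)
    # Law of cosines, as in equations (1) and (2).
    return math.acos((r1 ** 2 + r2 ** 2 - d ** 2) / (2.0 * r1 * r2))
```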

The distances separating points A, B and C are also known, either a priori by placing a calibrated target, such as a carpenter's square in the scene being photographed, or by measuring the distances between three relatively permanent points in the scene previously captured after the images have been formed.

In FIG. 3, the distance OA represents the distance from the principal point of the camera (O) to point A which is the origin of the coordinate system utilized to define camera position. At a high level, this is done by first assuming a very low estimate for the distance OB, such as the distance Ob1, then with that assumption, triangle AOB is solved. "Solving a triangle" means establishing (e.g. calculating) values for the length of each side and for each of the angles within the triangle. With the distance Ob1 assumed, the first triangle is solved using known, assumed or calculated values. In the process, a value for distance OA is calculated. Using the estimate Ob1, the second triangle BOC is solved and the derived distance OC is then utilized to solve the third triangle COA. When the third triangle is solved, the calculated value for OA of the third triangle is compared with the calculated value of OA of the first triangle and the estimate Ob1 is revised by adding the difference between the values for OA from the third triangle and the value for OA from the first triangle to the estimate Ob1 and the process is repeated. By successive iterations, the estimate Ob1 will be improved until the difference between the calculated values of OA reduces to a value less than ε. When ε is low enough for the accuracy needed, the iterations cease and the true value of OA is assumed to lie between the values calculated for the first and third triangles.

A calculation of one iteration will illustrate in detail how this is done.

From the law of sines, one knows:

AB / sin(<AOB) = Ob1 / sin(<OAB) = OA / sin(<OBA)

Distance Ob1 is the estimate of the length of OB, which, at the outset, is set to be low. The distance AB is known because the dimensions of a calibrated target are known or because the distance AB has been measured after the images are captured. The value for <AOB is calculated from measurements from the image plane as illustrated in FIG. 4 and discussed in connection with equations 1-7. Therefore, <OAB can be calculated as follows:

<OAB = arcsin[(Ob1 · sin <AOB) / AB]

Once the first estimate of <OAB is known, the first estimate of <OBA can be calculated as follows:

<OBA = 180° − <AOB − <OAB    (5)

At this point, one knows all three angles of the first triangle of FIG. 3 and is in a position to calculate a value for OA of the first triangle. Again using the law of sines, OA can be determined as follows:

OA = AB · sin(<OBA) / sin(<AOB)

At this point, the first triangle is entirely solved under the assumption that the distance Ob1 is the actual value of length OB.

Turning to the second triangle, Ob1 is assumed to be the distance OB. Distance BC is known from the target or measurements and angle BOC is known from measurements from the image plane. Thus, there is enough information to solve the second triangle completely:

<OCB = arcsin[(Ob1 · sin <BOC) / BC]

<OBC = 180° − <BOC − <OCB

OC = BC · sin(<OBC) / sin(<BOC)

With the distance OC calculated as shown above, the same information is available with respect to the third triangle that was available at the beginning of the solution of the second triangle. Therefore, the third triangle can be solved in a manner completely analogous to the solution of the second triangle by substituting the corresponding lengths and angles of the third triangle into the equations above.

One result of the solution of the third triangle is the distance OA which has been calculated as set forth above. This distance OA from the third triangle will have been derived based on calculations from the first, second and third triangles. Note, however, that the distance OA from the third triangle and the distance OA from the first triangle should be identical if the assumed value Ob1 were equal in fact to the real length OB. Since Ob1 was initially assumed to be of very low value, there will generally be a difference between the value of OA from the third triangle as compared with that from the first triangle. The difference between the two calculated lengths is added to the original estimate Ob1 to form an estimate Ob2 for the second iteration.

With the distance assumed to be Ob2, the calculations set forth above for the solution of the first, second and third triangles are repeated and the resulting values for OA from the first and third triangles are compared once again and an adjustment made to the estimate Ob2 based on the difference between the lengths as set forth above.

By successive iteration, the estimate for the distance OB can be made accurate to whatever degree of resolution one desires by continuing the iterative process until the difference between OA from the first triangle and that from the third triangle is reduced to an acceptable level, ε. The distance OA which results from the iterative process is then equal to the distance of the principal point of the camera shown at O in FIG. 3 to point A which is the origin of the coordinate system defined for this set of measurements.

If the values for OA from the first and third triangles agree within ε, all of the triangles are solved and therefore the entire viewing pyramid is solved.
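
A minimal Python sketch of this iteration for one assumed near/mid/far ordering; the starting estimate, helper name and iteration cap are illustrative assumptions, not taken from the patent:

```python
import math

def solve_viewing_pyramid(AB, BC, CA, angle_AOB, angle_BOC, angle_COA, eps=1e-12):
    """Iteratively solve the flattened viewing pyramid for OA, OB and OC.
    Angles are in radians; an infeasible near/mid/far ordering typically
    shows up as a math domain error in asin, as noted in the text."""
    def solve_triangle(known_side, base, apex):
        # Law of sines: angle opposite the known lateral side, then the
        # remaining angle, then the other lateral side of the triangle.
        angle_opposite_known = math.asin(known_side * math.sin(apex) / base)
        angle_remaining = math.pi - apex - angle_opposite_known
        return base * math.sin(angle_remaining) / math.sin(apex)

    ob = 0.01 * AB                                  # deliberately low first estimate Ob1
    for _ in range(10000):                          # bounded iteration for the sketch
        oa_first = solve_triangle(ob, AB, angle_AOB)   # first triangle AOB
        oc = solve_triangle(ob, BC, angle_BOC)         # second triangle BOC
        oa_third = solve_triangle(oc, CA, angle_COA)   # third triangle COA
        diff = oa_third - oa_first
        if abs(diff) < eps:
            return oa_first, ob, oc                 # OA, OB, OC
        ob += diff                                  # revise the estimate by the difference
    raise RuntimeError("viewing pyramid iteration did not converge")
```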

Turning to FIG. 5, when viewing the points A, B and C from the principal point of the camera, one cannot necessarily determine which of points A, B and C are closest and next closest to the camera. For example, in FIG. 5, given that point B1 is closest to the camera, it is possible that either point A is closer and point C farther, or alternatively, that point C is closer and point A farther. These differences are reflected in triangles A1 B1 C1 as compared with triangle A2 B1 C2. The table shown in FIG. 5 illustrates that the relationship between points A, B and C may in general result in six different permutations. There will always be these combinations of near, mid and far when working toward a solution. Right at the start, one doesn't know which point is closest to the camera and which is furthest and which is midpoint.

To avoid incorrect answers, it is desirable to try all combinations. For each of the combinations one assumes that one knows which one is which and then tries the calculation. If the calculation converges to a potential solution, then one holds that solution over for further analysis. If one is close to the plane of a particular triangle, there can be as many as five potential solutions or orientations of the triangle that will give you the same relationship of side lengths and viewing pyramid apex angles.

If a particular combination of near, mid and far is not feasible, the calculations do not converge and the process blows up, usually terminating in a math error, typically in a trigonometric function. However, if the calculations proceed normally, then potential solutions are realized and each potential solution is retained for further investigation.

In FIG. 5, it is clear that sometimes there may be degeneracy in which two or more points are located at exactly the same distance from the focal point. That reduces the number of different possible solutions.

During the iterative process, in the example shown above, the difference between OA of the first and third triangles is added to the estimate Ob1 to determine the estimate to be utilized in the next iteration. It is, of course, possible to utilize a factor other than 1 to 1 and to adjust the estimate by a fraction or a multiple of the difference between the values of OA for the first and third triangles. The preferred adjustment, however, is 1 to 1.

When utilizing a calibrated target, it is preferred that a right angle calibration target be used, like an 8 1/2×11 piece of paper or a carpenter's square.

The six potential arrangements of near, mid and far for points A, B, C can be viewed as different ways of flattening the pyramid. Three sets of flattened pyramids can be formed by using each edge OA, OB and OC as the edge which is "opened" (e.g. if the pyramid were formed by folding paper into a pyramid shape, and one edge were cut open and the pyramid unfolded into a pattern like that shown in FIG. 3, three different sets of flattened pyramids are formed, each by cutting a different edge). Each set has two members corresponding to the two orders in which the triangles may occur. As illustrated in FIG. 3, for example, the triangles are solved in 1-2-3 order. This ordering represents one of the 2 members. The other member is formed by flipping the flattened pyramid over on its face so that triangle 3, as shown in FIG. 3, is put in the triangle 1 position. This member of the set is solved in 3-2-1 order as labeled.

The 1-2-3 ordering of the solution of the triangles of a flattened pyramid implicitly assumes that the left (and right) exterior edge (OA in the figure) is the farthest, the next (OB) is intermediate (mid) and OC is closest.

When searching for a solution for each of the possible arrangements of near, mid and far, the algorithm converges only for that (those) solution(s) which are "possible". Usually only one of the 6 combinations is possible. However, sometimes degeneracy occurs when 2 (or 3) points are exactly the same distance away. In such a case, multiple solutions are possible but they will yield the same result.

Thus convergent solutions will uniquely define the X, Y and Z locations of the camera in the coordinate system defined by the points A, B and C as set forth above.

The techniques described herein are applicable to images photographed without a calibrated target. By selecting 3 convenient points on the image and physically measuring the distance between them after the image has been captured, the same effect can be achieved as is achieved using a calibrated target at the time the image is captured.

To resolve the near, mid and far ambiguities, as shown in FIG. 6, one notes that the principal point of the camera is going to be where the known lengths of OA, OB and OC coincide at point O. For each of the possible solutions for the location of point O, one can then write an equation for a sphere about the point A, about point B and then about point C. The intersection of the spheres can be understood by visualizing two soap bubbles coming together. As they get progressively closer, they can touch at one point and then as one penetrates the other it will generate a circle which will be a locus of points that is common to the two spheres. As long as the spheres are not identically the same size, one bubble will go inside of the other and as it goes inside it will, at worst case, touch again at one point. As it goes out the other side, it will touch at a point, form a circle, and then as it leaves it will touch a diametrically opposite point.

By writing equations for spheres centered at points A, B and C with radii respectively of length OA, OB and OC, one obtains three equations in three unknowns (assuming a rectangular coordinate system).

Each of the possible solutions for near, mid and far is utilized to generate a set of spheres which are then solved for common points of intersection. Looking at FIG. 6, one can see that in addition to the intersection at point O of the three spheres in the +Z plane, there will be a symmetrical solution in the -Z plane. By convention, one assumes that the horizontal control grid established by the XY plane is viewed from the +Z direction looking down on the XY plane. By that convention, there is only one solution, namely the one in the +Z space, and the -Z space solution is eliminated. That then determines the XYZ location of the principal point of the camera.
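
A minimal Python sketch of the sphere-intersection step under the coordinate convention defined earlier (A at the origin, C on the +Y axis, B in the XY plane); the argument layout is an illustrative assumption:

```python
import math

def camera_location(B_xy, C_y, OA, OB, OC):
    """Solve the three sphere equations for the camera principal point.
    A is at the origin, C at (0, C_y, 0), B at (B_x, B_y, 0).
    The +Z solution is returned; the -Z mirror solution is discarded."""
    bx, by = B_xy
    cy = C_y
    # Subtracting the sphere about C from the sphere about A eliminates x and z.
    y = (OA ** 2 - OC ** 2 + cy ** 2) / (2.0 * cy)
    # Subtracting the sphere about B from the sphere about A then gives x.
    x = (OA ** 2 - OB ** 2 + bx ** 2 + by ** 2 - 2.0 * y * by) / (2.0 * bx)
    # The remaining coordinate follows from the sphere about A, taking +Z.
    z = math.sqrt(OA ** 2 - x ** 2 - y ** 2)
    return x, y, z
```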

Once the camera position is determined, there are three possible orientations for the camera which need to be specified. They are (1) the azimuthal rotation, (2) the elevation rotation and (3) the tilt about the optical axis. FIG. 7 illustrates how azimuthal and elevational corrections are determined. FIG. 7 illustrates the image plane. Points ABC are the same points ABC utilized to define a coordinate system and to calculate the distance of the camera in that coordinate system. Points A, B and C are illustrated as part of the image shown in the image plane. A center of the plane (i.e. the center of the picture) is typically placed on the object of interest so that the object of interest appears in the center of the image. A calibrated target or the three points utilized to establish a coordinate system, A, B and C, are typically not at the center of the photograph. The azimuthal correction is essentially that required to displace point A, the image of the origin of the external world coordinate system so that it lies exactly on top of the photographic location of point A shown to the right of axis 710 of the coordinate system of the image plane. The elevational correction is the angle of elevation or declination required to place the image of point A exactly on top of the photographic location of point A shown below the abscissa of the image plane coordinate system 700. In short, azimuthal and elevational corrections are determined such that if they were applied to the camera, point A, the origin of the real world coordinate system would coincide with point A, the origin as captured on the photograph.

Mathematically, the differential offset angles that place the image of the origin of the real world coordinate system exactly on point A in the image plane are calculated as follows: ##EQU5##

The corrections required to coalign or superimpose points A are shown in FIG. 7.
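
The patent's own expressions for these offset angles are not reproduced above; purely as an assumed sketch, if each correction is taken to be the arctangent of the corresponding image-plane offset of point A divided by the principal distance, it could be written as:

```python
import math

def offset_corrections(ax_image, ay_image, principal_distance):
    """Assumed form of the azimuthal and elevational corrections: the angles
    through which the camera would have to be swung so that the image of the
    world origin falls on the photographed location of point A.  ax_image and
    ay_image are the image-plane offsets of point A from the image center."""
    azimuth = math.atan2(ax_image, principal_distance)
    elevation = math.atan2(ay_image, principal_distance)
    return azimuth, elevation
```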

FIG. 7 assumes that if A is correctly located, points B and C will be correctly located. However, this is generally not true because of tilt of the camera about the optical axis. Once points A have been superimposed, one knows where point B should be because of the axis definitions in the real world coordinate system. If the origin of the real world coordinate system is centered on A, and the origin of the image plane coordinate system is now also centered on A by virtue of the azimuthal and elevational corrections applied in connection with FIG. 7, then point B on the image plane should be located where point B in the real world coordinate system is located. This would be the case if the camera were absolutely horizontal when the picture was taken. However, if there is tilt, B will be displaced off the axis. On the image plane, one knows the actual angle that the line AB makes to the X axis of the image plane by measurement from the image plane. By taking the viewing pyramid and projecting it onto a projection plane, as is commonly done when projecting three dimensional images onto a two dimensional surface, one can determine what angle BAC should be on the image plane. To correct for camera tilt, one must rotate the image plane about the optical axis. However, doing so potentially changes the location of points A, B and C, requiring another iteration of corrections in which points A are superimposed and the amount of tilt recalculated until the points converge to within an arbitrary amount of error ε1.

Using these techniques, convergence can commonly be achieved to an accuracy of 1 part in 10¹⁴. If there is more than one convergent candidate, the B point residual error and the C point residual error are utilized as discriminators.

FIG. 8 illustrates the process utilized to fully determine the location and orientation of a camera from the image. At step 800, one determines the location of the calibration points A, B and C and either knows or measures the distances between them (810). The camera location in XYZ coordinates is determined using the technique set forth in FIG. 9 (820). Once the XYZ camera location is determined, corrections are made to azimuth and elevation (830) and then to tilt (840). With the azimuth, elevation and tilt corrections made, one determines whether the points are correctly located within a desired accuracy ε (850). If they are, the location and orientation of the camera is fully determined (860) and the process ends. If they are not, another iteration of steps 830 and 840 is undertaken to bring the location determination within the desired accuracy.

FIG. 9 illustrates the details of block 820 of FIG. 8. Knowing the principal distance of the camera, one measures the three angles AOB, BOC and COA from the image plane (900). A viewing pyramid is constructed with distance OA assumed as the longest dimension (905). The pyramid is flattened and a value estimated for line segment OB which is known to be low (910). Using the estimate for OB, the first triangle is solved (915). The second and third triangles are then sequentially solved using the results of the prior calculations (920 and 925). If the value for OA calculated in connection with the first triangle differs from the value for OA calculated from the third triangle (930) by an amount greater than ε (940), the difference ΔOA is added to the prior estimate of OB to form a new estimate and a new iteration of steps 915, 920, 925, 930 and 940 occurs. If ΔOA < ε (940), then the viewing pyramid is solved (950) and it is only necessary to resolve the near, mid and far ambiguity (960) before the objective of totally determining the position and orientation of the camera (970) is achieved.

If the images had been captured with two cameras aligned as shown in FIG. 10, the location of the point X1, Y1, Z1 would be calculated as follows:

Assume a set of axes with origin at 0, the X and Z axes as shown in FIG. 10 and the Y axis being perpendicular to the plane of the page. Assume that the images are captured with an objective at point C and an objective at point F in FIG. 10, the distance between C and F being d1 + d2. The camera capturing the image will have a known focal length f, and the image plane corresponding to each of the points at which the image is captured is shown in a heavy line on the X axis. The distance of the point labeled D from the line joining the focal points of the camera (C and F) can be calculated as follows:

Triangles ABC and CED are similar in a geometric sense and triangles DEF and FHG are also similar.

Because they are similar, ##EQU6##

Equating (20) and (21) as shown in (23) and then subtracting the right-hand term from both sides of the equation results in: ##EQU7##

For (24) to be true, the numerator must equal 0:

d12·ΔXR − (d2 + d11)·ΔXL = 0    (20)

Solving equation 22 for d11, substituting in equation 25 and moving the right term to the right side of the equation results in:

d12·ΔXR = (d2 + d1 − d12)·ΔXL    (21)

d12·(ΔXR + ΔXL) = (d2 + d1)·ΔXL    (22) ##EQU8##

Once h is known, the coordinates X0 and Y0 of the point D can be defined with respect to a camera axis by the following (see FIGS. 11 and 12):

αx = tan⁻¹(f/ΔX)    (26)

αy = tan⁻¹(f/ΔY)    (27)

X0 = −h·cot αx    (28)

Y0 = −h·cot αy    (29)
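
A minimal Python sketch of equations 26-29, assuming h, f and the image-plane offsets ΔX and ΔY are supplied in consistent units (names illustrative):

```python
import math

def lateral_offsets(h, f, delta_x, delta_y):
    """Given the distance h of the point from the camera baseline, the
    principal distance f and the image-plane offsets, return the X0 and Y0
    coordinates of the point relative to the camera axis."""
    alpha_x = math.atan(f / delta_x)     # equation (26)
    alpha_y = math.atan(f / delta_y)     # equation (27)
    x0 = -h / math.tan(alpha_x)          # equation (28): X0 = -h * cot(alpha_x)
    y0 = -h / math.tan(alpha_y)          # equation (29): Y0 = -h * cot(alpha_y)
    return x0, y0
```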

In capturing images under field conditions, the positioning of cameras as shown in FIG. 10 is rarely so cleanly defined.

FIG. 13 illustrates a typical real world situation. In FIG. 13 the points A, B and C represent the calibrated target or the points measured subsequent to image capture. The coordinate system X, Y and Z is established in accordance with the conventions set forth above with A as the origin. Camera positions 1 and 2, illustrated only by their principal points O1 and O2 and their image planes IP1 and IP2, are positioned with their optical axes pointed at point T, which would be the center of the field on the image plane. One desires to obtain the coordinates (X1, Y1, Z1) for an arbitrary point P.

This can be accomplished by a two-stage transformation. If one were to draw a line between the focal points O1 and O2 and define a mid-point Om(Xm, Ym, Zm) at the center of that line, and then if one were to perform an azimuthal rotation of camera 1 about its focal point O1 and apply the same kind of rotation to camera 2 about focal point O2, the cameras would be oriented as shown in FIG. 10 and the coordinates for point P could be calculated using equations 15-19 as shown above. However, the coordinates calculated are with reference to point O of FIG. 10, which corresponds to point Om of FIG. 13. To obtain the coordinates of point P with reference to the world coordinate system defined for measurements then requires only a simple coordinate transformation to change the representation from a coordinate system centered at Om to one centered at point A. This is done routinely using well-known mathematics.
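
A minimal sketch of that final coordinate transformation, assuming a 3x3 rotation matrix from the auxiliary axes to the world axes and the world-frame position of Om are already known (both are inputs here, not derived):

```python
import numpy as np

def auxiliary_to_world(p_aux, R_aux_to_world, om_world):
    """Re-express a point given in the auxiliary system (origin at Om, axes
    aligned with the rotated camera pair) in the world system defined by
    points A, B and C.  The rotation matrix and the world-frame location of
    Om are assumed to have been obtained from the camera solutions above."""
    return R_aux_to_world @ np.asarray(p_aux, dtype=float) + np.asarray(om_world, dtype=float)
```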

FIG. 14 illustrates hardware utilized to carry out certain aspects of the invention. Camera 1400 is used to capture images to be analyzed in accordance with the invention. Camera 1400 may be a digital still camera or a video camera with a frame grabber. Images from the camera are loaded onto computer 1420 using camera interface 1410. Normally, images loaded through interface 1410 would be stored on hard drive 1423 and later retrieved for processing in video RAM 1430. However, images can be loaded directly into video RAM if desired. Video RAM 1430 preferably contains sufficient image storage to permit the simultaneous processing of two images from the camera. Video display 1440 is preferably a high resolution video display such as a cathode ray tube or a corresponding display implemented in semiconductor technology. Display 1440 is interfaced to the computer bus through display interface 1424 and may be utilized to display individual images, both images simultaneously, or three dimensional wire frames created in accordance with the invention. Keyboard 1450 is interfaced to the bus over keyboard interface 1422 in the usual manner.

When utilizing a computer implementation such as that shown in FIG. 14, distances may be conveniently measured as numbers of pixels in the vertical and horizontal directions, which may be translated into linear measurements on the display screen knowing the resolution of the display in the vertical and horizontal directions. Numbers of pixels may be readily determined by pointing and clicking on the points under consideration and obtaining the addresses of the pixels clicked upon from the cursor addresses.
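For example, if the display resolution is known in pixels per unit length, a pair of clicked pixel addresses converts to on-screen distances as follows (a minimal sketch; names are illustrative):

```python
def pixels_to_length(pixel_a, pixel_b, pixels_per_unit_x, pixels_per_unit_y):
    """Convert the horizontal and vertical pixel separation of two clicked
    points into linear on-screen distances, given the display resolution."""
    dx_pixels = abs(pixel_b[0] - pixel_a[0])
    dy_pixels = abs(pixel_b[1] - pixel_a[1])
    return dx_pixels / pixels_per_unit_x, dy_pixels / pixels_per_unit_y

# Example: two clicked points on a 40 pixel-per-millimetre display.
dx_mm, dy_mm = pixels_to_length((120, 300), (520, 180), 40.0, 40.0)
```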

Thus, by knowing the position and orientation of the cameras or other image capture devices, as determined from the images analyzed after capture, one can calculate the precise position of points of interest in terms of the real-world XYZ coordinate system centered at point A, thereby specifying the position of those points with great accuracy relative to the real-world coordinate system.

The techniques set forth herein permit accurate forensic surveying of accident or crime scenes as well as accurate surveying of buildings or construction sites, particularly in the vertical direction, which had heretofore been practically impossible.

In this disclosure, there is shown and described only the preferred embodiment of the invention, but, as aforementioned, it is to be understood that the invention is capable of use in various other combinations and environments and is capable of changes or modification within the scope of the inventive concepts as expressed herein.

Palm, Charles S.
