An imaging sensor system, having a view of a target area, comprising: a rigid mount unit having at least two imaging sensors disposed within the mount unit, wherein a first imaging sensor and a second imaging sensor each has a focal axis passing through an aperture in the mount unit, wherein the first imaging sensor generates a first image area comprising a first data array of pixels and the second imaging sensor generates a second image area comprising a second data array of pixels, wherein the first and second imaging sensors are offset to have a first image overlap area in the target area, wherein the first sensor's image data bisects the second sensor's image data in the first image overlap area.

Patent: RE49105
Priority: Sep 20, 2002
Filed: Oct 23, 2019
Issued: Jun 14, 2022
Expiry: Sep 18, 2023 (subject to a terminal disclaimer)
43. An imaging sensor system comprising:
a mount unit, having first and second imaging sensors disposed within the mount unit, wherein the first and second imaging sensors each have a focal axis passing through an aperture in the mount unit, wherein the first imaging sensor generates a first image area comprising a first data array of pixels and the second imaging sensor generates a second image area comprising a second data array of pixels, wherein the first and second data arrays of pixels are at least two dimensional.
0. 51. An imaging sensor system comprising:
a rigid mount unit in alignment with a target area;
a first imaging sensor rigidly connected to the mount unit, wherein the first imaging sensor generates a first image area comprising a first data array of pixels;
a second imaging sensor rigidly connected to the mount unit, wherein the second imaging sensor generates a second image area comprising a second data array of pixels;
wherein the first and second imaging sensors are offset to have a first image overlap area in the target area,
wherein the first data array of pixels bisects the second data array of pixels in the first image overlap area.
16. An imaging sensor system comprising:
a mount unit in alignment with a target area, having at least two imaging sensors disposed within the mount unit, wherein a first imaging sensor and a second imaging sensor each has a focal axis passing through an aperture in the mount unit, wherein the first imaging sensor generates a first image area comprising a first data array of pixels and the second imaging sensor generates a second image area comprising a second data array of pixels, wherein the first and second imaging sensors are offset to have a first image overlap area in the target area, wherein the first sensor's image data bisects the second sensor's image data in the first image overlap area.
0. 63. A system for generating an image of a surface, comprising:
a global position receiver;
a first imaging sensor adapted to view a surface and disposed at least partially in a mount unit, wherein the mount unit flexes less than 100th of a degree during operation,
wherein the first imaging sensor generates a first image area comprising a first data array of pixels,
wherein the first data array of pixels is at least two dimensional; and
a computer connected to the global position receiver and the first imaging sensor, wherein a calculated longitude value and a calculated latitude value are generated for at least one pixel in the first data array of pixels based on input from the global position receiver.
0. 72. An imaging sensor system comprising:
a rigid mount unit;
a first imaging sensor disposed at least partially within the rigid mount unit, wherein the first imaging sensor generates a first image area comprising a first two-dimensional data array of pixels;
a second imaging sensor disposed at least partially within the rigid mount unit, wherein the second imaging sensor generates a second image area comprising a second two-dimensional data array of pixels;
wherein the first imaging sensor and the second imaging sensor are offset such that the first image area overlaps with the second image area to form a first image overlap area;
a computer in communication with the first imaging sensor and the second imaging sensor; and
an intensity balancing module associated with the computer for balancing the intensity of the first two-dimensional data array of pixels and the second two-dimensional data array of pixels using a balancing correlation matrix.
0. 66. A system for generating an image, comprising:
a rigid mount unit;
a first imaging sensor disposed at least partially within the rigid mount unit, wherein the first imaging sensor generates a first image area comprising a first two-dimensional data array of pixels;
a second imaging sensor disposed at least partially within the rigid mount unit, wherein the second imaging sensor generates a second image area comprising a second two-dimensional data array of pixels;
wherein the first imaging sensor and the second imaging sensor are offset such that the first image area overlaps with the second image area to form a first image overlap area;
a computer in communication with the first imaging sensor and the second imaging sensor;
a mosaicking module associated with the computer for balancing the color of the second two-dimensional data array of pixels based on the average intensity of green-dominant pixels in the first two-dimensional data array of pixels.
41. A system for generating a map of a target area, comprising:
a global position receiver;
a global positioning antenna;
an imaging sensor system, having a view of the target area, comprising:
a mount unit, having a first and a second imaging sensor disposed within the mount unit, wherein the first and second imaging sensors each have a focal axis passing through an aperture in the mount unit, wherein the first imaging sensor generates a first image area comprising a first data array of pixels and the second imaging sensor generates a second image area comprising a second data array of pixels, wherein the first and second data arrays of pixels are at least two dimensional; and
a computer in communication with the global positioning antenna, the first imaging sensor, and the second imaging sensor; correlating at least a portion of the image area from the first imaging sensor and the second imaging sensor to a portion of the target area based on input from the global positioning antenna.
0. 45. A system for generating a map of a target area, comprising:
a global positioning receiver;
an imaging sensor system having a view of the target area, comprising:
a rigid mount unit having a first imaging sensor and a second imaging sensor,
wherein the first imaging sensor generates a first image area comprising a first data array of pixels and the second imaging sensor generates a second image area comprising a second data array of pixels,
wherein the first and second imaging sensors are offset to have a first image overlap area in the target area,
wherein the first data array of pixels bisects the second data array of pixels in the first image overlap area; and
a computer in communication with the global positioning receiver, the first imaging sensor, and the second imaging sensor, wherein at least a portion of the first image area from the first imaging sensor is correlated to a portion of the target area based on input from the global positioning receiver or other geographical positioning technique.
1. A system for generating a map of a target area, comprising:
a global positioning receiver;
an imaging sensor system having a view of the target area, comprising:
a rigid mount unit having at least two imaging sensors disposed within the mount unit,
wherein a first imaging sensor and a second imaging sensor each has a focal axis passing through an aperture in the mount unit,
wherein the first imaging sensor generates a first image area comprising a first data array of pixels and the second imaging sensor generates a second image area comprising a second data array of pixels,
wherein the first and second imaging sensors are offset to have a first image overlap area in the target area,
wherein the first sensor's image data bisects the second sensor's image data in the first image overlap area; and
a computer in communication with a global positioning antenna, the first imaging sensor, and the second imaging sensor; correlating at least a portion of the image areas from the first imaging sensor and the second imaging sensor to a portion of the target area based on input from the global positioning antenna.
0. 59. A system for generating an image of a surface, comprising:
a global position receiver;
an imaging array, having a view of the surface, comprising:
a mount unit;
a first imaging sensor, coupled to the mount unit, having a first focal axis passing through an aperture in the mount unit,
wherein the first image sensor generates a first image area of the surface comprising a first data array of pixels,
wherein the first data array of pixels is at least two dimensional; and
a second imaging sensor, coupled to the mount unit and offset from the first imaging sensor, having a second focal axis passing through an aperture in the mount unit,
wherein the second imaging sensor generates a second image area of the surface comprising a second data array of pixels,
wherein the second data array of pixels is at least two dimensional; and
a computer, connected to the global position receiver, the first imaging sensor, and the second imaging sensor, wherein at least a portion of the first image area from the first imaging sensor is correlated to a portion of the surface based on input from the global position receiver.
0. 54. A method of calibrating imaging sensors comprising the steps of:
performing an initial calibration of the imaging sensors comprising:
determining the position of an attitude measurement unit (AMU) selected from the group consisting of a gyroscope, an inertial measurement unit (IMU), and a global positioning system (GPS);
determining the position of a first imaging sensor relative to the AMU;
calibrating the first imaging sensor against a target area and determining a boresight angle of the first imaging sensor; and
determining the position of the second imaging sensor relative to the first imaging sensor; and
calibrating the second imaging sensor using the boresight angle of the first imaging sensor; and
using oversampling techniques to update at least one initial calibration parameter of the first imaging sensor against a target area and the boresight angle of the first imaging sensor;
using oversampling techniques to update the position of the second imaging sensor relative to the first imaging sensor; and
updating at least one calibration parameter of the second imaging sensor using the updated boresight angle of the first imaging sensor.
36. A system for generating a map of a surface, comprising:
a global position receiver;
a global positioning antenna;
an imaging array, having a view of the surface, comprising:
a mount unit;
an aperture, formed in the mount unit;
a first imaging sensor, coupled to the mount unit, having a first focal axis passing through the aperture, wherein the first image sensor generates a first image area of the surface comprising a first data array of pixels, wherein the first data array of pixels is at least two dimensional; and
a second imaging sensor, coupled to the mount unit and offset from the first imaging sensor, having a second focal axis passing through the aperture and intersecting the first focal axis, wherein the second imaging sensor generates a second image area of the surface comprising a second data array of pixels, wherein the second data array of pixels is at least two dimensional; and
a computer, connected to the global positioning antenna, and first and second imaging sensors; correlating at least a portion of the image area from the first and second imaging sensors to a portion of the surface based on input from the global positioning antenna.
31. A method of calibrating imaging sensors comprising the steps of:
performing an initial calibration of the imaging sensors comprising:
determining the position of an attitude measurement unit (AMU) selected from the group consisting of a gyroscope, an inertial measurement unit (IMU), and a global positioning system (GPS);
determining the position of a first imaging sensor within a rigid mount unit relative to the AMU;
determining the position of a second imaging sensor within the rigid mount unit relative to the AMU;
calibrating the first imaging sensor against a target area and determining a boresight angle of the first imaging sensor; and
calculating the position of one or more subsequent imaging sensors within the rigid mount unit relative to the first imaging sensor; and
calibrating the one or more subsequent imaging sensors using the boresight angle of the first imaging sensor; and
using oversampling techniques to update at least one initial calibration parameter of the first imaging sensor against a target area and the boresight angle of the first imaging sensor;
using oversampling techniques to update the position of one or more subsequent imaging sensors within the rigid mount unit relative to the first imaging sensor; and
updating at least one calibration parameter of one or more subsequent imaging sensors within the rigid mount using the updated boresight angle of the first imaging sensor.
2. The system of claim 1 further comprising:
a third imaging sensor disposed within the mount unit, wherein the third imaging sensor has a focal axis passing through the aperture in the mount unit, wherein the third imaging sensor generates a third image area comprising a third data array of pixels.
3. The system of claim 2, further comprising:
a fourth imaging sensor disposed within the mount unit, wherein the fourth imaging sensor has a focal axis passing through the aperture in the mount unit, wherein the fourth imaging sensor generates a fourth image area comprising a fourth data array of pixels, wherein the third and fourth imaging sensors are offset to have a second image overlap area in the target area, wherein the third sensor's image data bisects the fourth sensor's image data in the second image overlap area.
4. The system of claim 3, wherein a first sensor array comprising the first and second image sensors and a second sensor array comprising the third and fourth image sensors are offset to have a third image overlap area in the target area, wherein the first sensor array's image data bisects the second sensor array's image data in the third overlap area.
5. The system of claim 3, wherein the first sensor array's image data completely overlaps the second sensor array's image data.
6. The system of claim 3, wherein third and fourth imaging sensors are selected from the group consisting of digital cameras, light detection and ranging (LIDAR), infrared, heat-sensing and gravitometers.
7. The system of claim 3, wherein the first and second imaging sensors are a digital camera and the third imaging sensor is a light detection and ranging (LIDAR).
8. The system of claim 2, wherein the third imaging sensor is selected from the group consisting of digital cameras, light detection and ranging (LIDAR), infrared, heat-sensing and gravitometers.
9. The system of claim 2, wherein the third imaging sensor is selected from the group consisting of a digital camera having a hyperspectral filter and a light detection and ranging (LIDAR).
10. The system of claim 2, wherein the first and second imaging sensors are a digital camera and the third imaging sensor is a light detection and ranging (LIDAR).
11. The system of claim 1, wherein the mount unit flexes less than 100th of a degree during operation.
12. The system of claim 11, wherein the mount unit flexes less than 1,000th of a degree during operation.
13. The system of claim 12, wherein the mount unit flexes less than 10,000th of a degree during operation.
14. The system of claim 1, wherein the first imaging sensor is calibrated relative to one or more attitude measuring devices selected from the group consisting of a gyroscope, an inertial measurement unit (IMU), and a global positioning system (GPS).
15. The system of claim 1, wherein the first and second imaging sensors are selected from the group consisting of digital cameras, light detection and ranging (LIDAR), infrared, heat-sensing and gravitometers.
17. The system of claim 16 further comprising:
a third imaging sensor disposed within the mount unit, wherein the third imaging sensor has a focal axis passing through the aperture in the mount unit, wherein the third imaging sensor generates a third image area comprising a third data array of pixels.
18. The system of claim 17 further comprising:
a fourth imaging sensor disposed within the mount unit, wherein the fourth imaging sensor has a focal axis passing through the aperture in the mount unit, wherein the fourth imaging sensor generates a fourth image area comprising a fourth data array of pixels, wherein the third and fourth imaging sensors are offset to have a second image overlap area in the target area, wherein the third sensor's image data bisects the fourth sensor's image data in the second image overlap area.
19. The system of claim 18, wherein a first sensor array comprising the first and second image sensors and a second sensor array comprising the third and fourth image sensors are offset to have a third image overlap area in the target area, wherein the first sensor array's image data bisects the second sensor array's image data in the third image overlap area.
20. The system of claim 18, wherein the first sensor array's image data completely overlaps the second sensor array's image data.
21. The system of claim 18, wherein the third and fourth imaging sensors are selected from the group consisting of digital cameras, light detection and ranging (LIDAR), infrared, heat-sensing and gravitometers.
22. The system of claim 18, wherein the first and second imaging sensors are a digital camera and the third imaging sensor is a light detection and ranging (LIDAR).
23. The system of claim 17, wherein the third imaging sensor is selected from the group consisting of digital cameras, light detection and ranging (LIDAR), infrared, heat-sensing and gravitometers.
24. The system of claim 17, wherein the third imaging sensor is selected from the group consisting of a digital camera having a hyperspectral filter and a light detection and ranging (LIDAR).
25. The system of claim 17, wherein the first and second imaging sensors are a digital camera and the third imaging sensor is a light detection and ranging (LIDAR).
26. The system of claim 16, wherein the mount unit flexes less than 100th of a degree during operation.
27. The system of claim 26, wherein the mount unit flexes less than 1,000th of a degree during operation.
28. The system of claim 27, wherein the mount unit flexes less than 10,000th of a degree during operation.
29. The system of claim 16, wherein the first imaging sensor is calibrated relative to one or more attitude measuring devices selected from the group consisting of a gyroscope, an inertial measurement unit (IMU), and a global positioning receiver (GPS).
30. The system of claim 16, wherein the first and second imaging sensors are selected from the group consisting of digital cameras, light detection and ranging (LIDAR), infrared, heat-sensing and gravitometers.
32. The method of claim 31, wherein the initial calibration step further comprises the step of:
calibrating the second imaging sensor using the updated boresight angle of the first imaging sensor.
33. The method of claim 32, further comprising the step of:
using oversampling techniques to update the position of the second imaging sensor within the rigid mount unit relative to the first imaging sensor.
34. The method of claim 31, further comprising the steps of:
using flight line oversampling techniques to update the calibration of the first imaging sensor against a target area and the boresight angle of the first imaging sensor; and
using flight line oversampling techniques to update the position of one or more subsequent imaging sensors within the rigid mount unit relative to the first imaging sensor.
35. The method of claim 34, further comprising the steps of:
using flight line oversampling techniques to update the position of the second imaging sensor within the rigid mount unit relative to the first imaging sensor;
using flight line oversampling techniques to update the position of one or more subsequent imaging sensors within the rigid mount unit relative to the first imaging sensor; and
updating at least one calibration parameter of one or more subsequent imaging sensors within the rigid mount using the updated boresight angle of the first imaging sensor.
37. The system of claim 36, further comprising a third imaging sensor, coupled to the mount unit and offset from the first imaging sensor, having a third focal axis passing through the aperture and intersecting the first focal axis within an intersection area.
38. The system of claim 37, wherein the focal axis of the third imaging sensor lies in a common plane with the focal axes of the first and second imaging sensors.
39. The system of claim 37, wherein the focal axes of the first and second imaging sensors lie in a first common plane and the focal axis of the third imaging sensor lies in a plane orthogonal to the first common plane.
0. 40. A system for generating a map of a surface, comprising:
a global position receiver;
a global positioning antenna;
a first imaging sensor, having a view of the surface, having a focal axis disposed in the direction of the surface, wherein the first imaging sensor generates an image area comprising a first data array of pixels, wherein the first data array of pixels is at least two dimensional; and
a computer, connected to the global positioning antenna, and the first imaging sensor; generating a calculated longitude and calculated latitude value for a coordinate corresponding to at least one pixel in the array based on input from the global positioning antenna.
42. The system of claim 41, further comprising a third imaging sensor disposed within the mount unit, wherein the third imaging sensor has a focal axis passing through an aperture in the mount unit, wherein the third imaging sensor generates a third image area comprising a third data array of pixels.
44. The system of claim 43, further comprising a third imaging sensor disposed within the mount unit, wherein the third imaging sensor has a focal axis passing through an aperture in the mount unit, wherein the third imaging sensor generates a third image area comprising a third data array of pixels.
0. 46. The system of claim 45, wherein the first image overlap area comprises at least one oversampling pattern.
0. 47. The system of claim 46, wherein the at least one oversampling pattern comprises a plurality of pixels that have been positioned using an oversampling technique to a precision that is less than one pixel in magnitude.
0. 48. The system of claim 47, wherein the mount unit flexes less than 100th of a degree during operation.
0. 49. The system of claim 47, wherein the mount unit flexes less than 1,000th of a degree during operation.
0. 50. The system of claim 47, wherein the mount unit flexes less than 10,000th of a degree during operation.
0. 52. The system of claim 51, wherein the first image overlap area comprises at least one oversampling pattern.
0. 53. The system of claim 52, wherein the at least one oversampling pattern comprises a plurality of pixels that have been positioned using an oversampling technique to a precision that is less than one pixel in magnitude.
0. 55. The method of claim 54, wherein the oversampling techniques comprise flight line oversampling.
0. 56. The method of claim 54, wherein the oversampling techniques comprise lateral oversampling.
0. 57. The method of claim 54, wherein the oversampling techniques comprise generating at least one pixel that has been positioned using the oversampling techniques to a precision that is less than one pixel in magnitude.
0. 58. The method of claim 54, wherein the oversampling techniques comprise generating a plurality of pixels that have been positioned using the oversampling techniques to a precision that is less than one pixel in magnitude.
0. 60. The system of claim 59, wherein the mount unit flexes less than 100th of a degree during operation.
0. 61. The system of claim 59, wherein the mount unit flexes less than 1,000th of a degree during operation.
0. 62. The system of claim 59, wherein the mount unit flexes less than 10,000th of a degree during operation.
0. 64. The system of claim 63, wherein the mount unit flexes less than 1,000th of a degree during operation.
0. 65. The system of claim 63, wherein the mount unit flexes less than 10,000th of a degree during operation.
0. 67. The system of claim 66, wherein the first image overlap area comprises at least one oversampling pattern.
0. 68. The system of claim 67, wherein the at least one oversampling pattern comprises a plurality of pixels that have been positioned using an oversampling technique to a precision that is less than one pixel in magnitude.
0. 69. The system of claim 66, wherein the mount unit flexes less than 100th of a degree during operation.
0. 70. The system of claim 66, wherein the mount unit flexes less than 1,000th of a degree during operation.
0. 71. The system of claim 66, wherein the mount unit flexes less than 10,000th of a degree during operation.
0. 73. The system of claim 72, wherein the first image overlap area comprises at least one oversampling pattern.
0. 74. The system of claim 73, wherein the at least one oversampling pattern comprises a plurality of pixels that have been positioned using an oversampling technique to a precision that is less than one pixel in magnitude.
0. 75. The system of claim 72, wherein the mount unit flexes less than 100th of a degree during operation.
0. 76. The system of claim 72, wherein the mount unit flexes less than 1,000th of a degree during operation.
0. 77. The system of claim 72, wherein the mount unit flexes less than 10,000th of a degree during operation.

This application is a
where ƒ(x) is a function of the form:
ƒ(x) = cos⁴(off-axis angle).
The off-axis angle 514 is: zero for center column 504; larger for columns 502 and 506; and larger still for columns 500 and 508. The overall field of view angle 516 (FOVx angle) is depicted between columns 504 and 508.

The function ƒ(x) can be approximated by a number of line segments between columns. For a point falling within a line segment between any given columns c1 and c2, an adjustment factor is computed as follows:
<adjustment factor for c>=ƒ(c1)+[ƒ(c2)−ƒ(c1)]*(c−c1)/(c2−c1);

where ƒ(c1) and ƒ(c2) are the ƒ function values of the off-axis angles at column c1 and c2, respectively.
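As a non-authoritative illustration of this interpolation, the short sketch below (Python; the column positions and off-axis angles are hypothetical values chosen for the example, not taken from the patent) evaluates ƒ at the bracketing columns and linearly interpolates the adjustment factor for an arbitrary column c.

import math

# Hypothetical reference columns and their off-axis angles in degrees
# (illustrative stand-ins for columns 500 through 508 in the figure).
columns = [0, 250, 500, 750, 1000]
off_axis_deg = [-20.0, -10.0, 0.0, 10.0, 20.0]

def f(angle_deg):
    # Anti-vignetting falloff: f = cos(off-axis angle) to the fourth power.
    return math.cos(math.radians(angle_deg)) ** 4

def adjustment_factor(c):
    # Piecewise-linear interpolation of f between the bracketing columns c1 and c2.
    for (c1, a1), (c2, a2) in zip(zip(columns, off_axis_deg),
                                  zip(columns[1:], off_axis_deg[1:])):
        if c1 <= c <= c2:
            f1, f2 = f(a1), f(a2)
            return f1 + (f2 - f1) * (c - c1) / (c2 - c1)
    raise ValueError("column outside the calibrated range")

print(adjustment_factor(625))  # adjustment factor for a column between two reference columns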

Each set of input images needs to be stitched into a mosaic image. Even though the exposure control module regulates the amount of light each camera or sensor receives, the resulting input images may still differ in intensity. The present invention provides an intensity-balancing module that compares the overlapping areas between adjacent input images to further balance their relative intensities. Because adjoining input images are taken simultaneously, the overlapping areas should, in theory, have identical intensity in both input images. However, due to various factors, the intensity values are usually not the same. Some factors causing this intensity difference could include, for example, the exposure control module being biased by unusually bright or dark objects present in the field of view of only a particular camera, or the boresight angles of cameras being different (i.e., cameras that are more slanted receive less light than those that are more vertical).

To balance two adjacent images, one is chosen as the reference image and the other as the secondary image. A correlation vector (FR, FG, FB) is determined using, for example, the following process. Let V be a 3×1 vector representing the values (R, G and B) of a pixel:

V = | R |
    | G |
    | B |.
A correlation matrix C may be derived as:

C = | FR  0   0  |
    | 0   FG  0  |
    | 0   0   FB |;
where FR=AvgIr/AvgIn; AvgIr=Red average intensity of overlapped region in reference image; AvgIn=Red average intensity of overlapped region in new image; and FG and FB are similarly derived.

The correlation matrix scales pixel values of the secondary image so that the average intensity of the overlapping area of the secondary image becomes identical to the average intensity of the overlapping area of the reference image. The secondary image can then be balanced to the reference image by multiplying its pixel values by the correlation matrix.
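A minimal sketch of this per-channel scaling, assuming the images are held as numpy arrays of RGB values and that a boolean mask selects the overlapped region (function and variable names are illustrative, not from the patent):

import numpy as np

def balance_to_reference(reference, secondary, overlap_mask):
    # Per-channel factors FR, FG, FB: average intensity of the overlapped
    # region in the reference image divided by that of the secondary image.
    avg_ref = reference[overlap_mask].reshape(-1, 3).mean(axis=0)   # AvgIr per channel
    avg_new = secondary[overlap_mask].reshape(-1, 3).mean(axis=0)   # AvgIn per channel
    factors = avg_ref / avg_new                                     # diagonal of C
    # Applying the diagonal correlation matrix C is equivalent to scaling
    # each channel of every pixel of the secondary image by its factor.
    balanced = secondary.astype(np.float64) * factors
    return np.clip(balanced, 0, 255).astype(reference.dtype)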

Thus, in one embodiment of a balancing process according to the present invention, a center image is considered the reference image. The reference image is first copied to the compound image (or mosaic). Overlapping areas between the reference image and an adjoining image (e.g., the near left image) are correlated to compute a balancing correlation matrix (BCM). The BCM will be multiplied with vectors representing pixels of the adjoining image to make the intensity of the overlapping area identical in both images. One embodiment of this relationship may be expressed as:

Let I(center)=Average intensity of overlapping area in center image;

I(adjoining)=Average intensity of overlap in adjoining image; then
Balancing factor=I(center)/I(adjoining).

The balancing factor for each color channel (i.e., red, green and blue) is independently computed. These three values form the BCM. The now-balanced adjoining image is copied to the mosaic. Smooth transitioning at the border of the copied image is provided by "feathering" with a mask. This mask has the same dimensions as the adjoining image and comprises a number of elements. Each element in the mask indicates the weight of the corresponding adjoining image pixel in the mosaic. The weight is zero for pixels at the boundary (i.e., the output value is taken from the reference image), and increases gradually in the direction of the adjoining image until it becomes unity after a chosen blending width has been reached. Beyond the blending area, the mosaic is entirely determined by the pixels of the adjoining image. Similarly, the overlaps between all the other constituent input images are analyzed and processed to compute the correlation vectors and to balance the intensities of the images.
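The sketch below (Python/numpy; names are illustrative, and a left-edge overlap is assumed) shows one way the balancing correlation matrix and a linear feathering mask could be formed, consistent with the description above.

import numpy as np

def balancing_correlation_matrix(center_overlap, adjoining_overlap):
    # BCM: diagonal matrix of per-channel balancing factors
    # I(center)/I(adjoining), computed over the shared overlap pixels.
    factors = (center_overlap.reshape(-1, 3).mean(axis=0) /
               adjoining_overlap.reshape(-1, 3).mean(axis=0))
    return np.diag(factors)

def feather_mask(height, width, blend_width):
    # Weight of each adjoining-image pixel in the mosaic: zero at the
    # boundary column, rising linearly to one after blend_width columns.
    ramp = np.clip(np.arange(width) / float(blend_width), 0.0, 1.0)
    return np.tile(ramp, (height, 1))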

A correlation matrix is determined using, for example, the following process with reference to FIG. 6. FIG. 6 depicts a strip 600 being formed in accordance with the present invention. A base mosaic 602 and a new mosaic 604, added along path (or track) 606, overlap each other in region 608. Let V be a vector that represents the R, G and B values of a pixel:

V = | R |
    | G |
    | B |
Let h be the transition width of region 608, and y be the along-track 606 distance from the boundary 610 of the overlapped region to a point A, whose pixel values are represented by V. Let C be the correlation matrix:

C = | FR  0   0  |
    | 0   FG  0  |
    | 0   0   FB |
The balanced value of V, called V′ is:
V′=[y/h·I+(1−y/h)·C]×V, for 0<y<h;
V′=V, for y>=h;

where I is the identity matrix:

I = | 1  0  0 |
    | 0  1  0 |
    | 0  0  1 |.
Note that the “feathering” technique is also used in combination with the gradient to minimize seam visibility.
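A minimal sketch of the gradient inside the transition region, assuming the pixels of interest are flattened to an (N, 3) array of RGB vectors with a per-pixel along-track distance y from the boundary 610, and that C is the diagonal correlation matrix above (names are illustrative):

import numpy as np

def apply_gradient(pixels, y, h, C):
    # V' = [y/h * I + (1 - y/h) * C] x V for 0 < y < h, and V' = V for y >= h.
    w = np.clip(y / float(h), 0.0, 1.0)[:, None, None]   # blend weight per pixel
    M = w * np.eye(3) + (1.0 - w) * C                    # per-pixel 3x3 blend matrix
    return np.einsum('nij,nj->ni', M, pixels)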

When mosaics are long, differences in intensity at the overlap may change from one end of the mosaic to the other. Computing a single correlation vector to avoid creating visible seams may not be possible. The mosaic can be divided into a number of segments corresponding to the position of the original input images that make up the mosaic. The process described above is applied to each segment separately to provide better local color consistency.

Under this refined algorithm, pixels at the border of two segments may create vertical seams (assuming north-south flight lines). To avoid this problem, balancing factors for pixels in this area have to be “transitioned” from that of one segment to the other. This is explained now with reference to FIG. 7.

FIG. 7 depicts a strip 700 being formed in accordance with the present invention. A base mosaic 702 and a new segment 704 overlap in area 706. Mosaic 702 and another new segment 708 overlap in area 710. Segments 704 and 708 overlap in area 712, and areas 706, 710 and 712 all overlap and coincide at area 714. For explanation purposes, point 716 serves as an origin for y-axis 718 and x-axis 720. Movement along y-axis 718 represents movement along the flight path of the imaging system. Point 716 is located at the lower left of area 714.

According to the present invention, the dimensions of a strip are determined by the minimum and maximum x and y values of the constituent mosaics. An output strip is initialized to a background color. A first mosaic is transferred to the strip. The next mosaic (along the flight path) is processed next. Intensity values of the overlapping areas of the new mosaic and the first mosaic are correlated, separately for each color channel. The new mosaic is divided into a number of segments corresponding to the original input images that made up the mosaic. A mask matrix, comprising a number of mask elements, is created for the new mosaic. A mask element contains the correlation matrix for a corresponding pixel in the new mosaic. All elements in the mask are initialized to unity. The size of the mask can be limited to just the transition area of the new mosaic. The correlation matrix is calculated for the center segment. The mask area corresponding to the center segment is processed. The values of the elements at the edge of the overlap area are set to the correlation vector. Then, gradually moving away from the first mosaic along the strip, the components of the correlation matrix are either increased or decreased (whether they are less or more than unity, respectively) until they become unity at a predetermined transition distance. The area of the mask corresponding to a segment adjoining the center segment is then processed similarly. However, the area 714 formed by the first mosaic and the center and adjoining segments of the new image requires special treatment. Because the correlation matrix for the adjoining segment may not be identical to that of the center segment, a seam may appear at the border of the two segments in the overlap area 714 with the first mosaic. Therefore, the corner is influenced by the correlation matrices from both segments. For a mask cell A at distance x to the border with the center segment and distance y to the overlap edge, its correlation matrix is the distance-weighted average of the two segments, evaluated as follows:
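The expression itself is not reproduced in this excerpt. Purely as a hypothetical illustration of a distance-weighted average consistent with the mask behavior described above, one plausible form is sketched below; the weighting scheme, names and parameters are assumptions, not the patent's stated formula.

import numpy as np

def corner_correlation(C_center, C_adjoining, x, y, d, h):
    # Hypothetical blend for a mask cell A in area 714: mix the two segments'
    # correlation matrices by the distance x to the segment border (transition
    # distance d), then relax toward the identity with the distance y from the
    # overlap edge (transition distance h), mirroring the gradient used above.
    wx = np.clip(1.0 - x / float(d), 0.0, 1.0)   # influence of the center segment
    C_mix = wx * C_center + (1.0 - wx) * C_adjoining
    wy = np.clip(y / float(h), 0.0, 1.0)
    return wy * np.eye(3) + (1.0 - wy) * C_mix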

Further according to the present invention, a color fidelity (i.e., white-balance) filter is applied. This filter multiplies the R and B components by a determinable factor to enhance color fidelity. The factor may be determined by calibrating the cameras and lenses. The color fidelity filter ensures that the colors in an image retain their fidelity, as perceived directly by the human eye. Within the image capture apparatus, the Red, Green and Blue light receiving elements may have different sensitivities to the color they are supposed to capture. A "white-balance" process is applied, in which an image of a white object is captured. Theoretically, pixels in the image of that white object should have equivalent R, G and B values. In reality, however, due to different sensitivities and other factors, the average color values for each of R, G and B may be avgR, avgG and avgB, respectively. To equalize the color components, the R, G and B values of the pixels are multiplied by the following ratios:

R values are multiplied by the ratio avgG/avgR; and

B values are multiplied by the ratio avgG/avgB.

The end result is that the image of the white object is set to have equal R G B components.
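A minimal sketch of this equalization, assuming the average channel values avgR, avgG and avgB have already been measured from a captured image of a white object (numpy; names illustrative):

import numpy as np

def white_balance(image, avg_r, avg_g, avg_b):
    # Multiply R by avgG/avgR and B by avgG/avgB; G is left unchanged, so a
    # white object ends up with equal R, G and B components.
    factors = np.array([avg_g / avg_r, 1.0, avg_g / avg_b])
    balanced = image.astype(np.float64) * factors
    return np.clip(balanced, 0, 255).astype(image.dtype)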

In most applications, a strip usually covers a large area of non-water surface. Thus, average intensity for the strip is unlikely to be skewed by anomalies such as highly reflecting surfaces. The present invention provides an intensity normalization module that normalizes the average intensity of each strip so that the mean and standard deviation are of a desired value. For example, a mean of 127 is the norm in photogrammetry. A standard deviation of 51 helps to spread the intensity value over an optimal range for visual perception of image features. Each strip may have been taken in different lighting conditions and, therefore, may have different imaging data profiles (i.e., mean intensity and standard deviation). This module normalizes the strips, such that all have the same mean and standard deviation. This enables the strips to be stitched together without visible seams.

This intensity normalization comprises a computation of the mean intensity for each channel R, G and B, and for all channels. The overall standard deviation is then computed. Each R, G and B value of each pixel is transformed to the new mean and standard deviation:
new value=new mean+(old value−old mean)*(new std/old std).
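A minimal sketch of this normalization for a strip stored as a numpy array, applied per channel toward a target mean of 127 and standard deviation of 51 (the patent also computes combined statistics across channels; per-channel statistics are used here for simplicity, and names are illustrative):

import numpy as np

def normalize_strip(strip, new_mean=127.0, new_std=51.0):
    # new value = new mean + (old value - old mean) * (new std / old std)
    out = strip.astype(np.float64)
    for ch in range(out.shape[-1]):
        old_mean = out[..., ch].mean()
        old_std = out[..., ch].std()
        out[..., ch] = new_mean + (out[..., ch] - old_mean) * (new_std / old_std)
    return np.clip(out, 0, 255).astype(strip.dtype)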

Next, multiple adjacent strips are combined to produce tiled mosaics for an area of interest. Finished tiles can correspond to the USGS quads or quarter-quads. Stitching strips into mosaics is similar to stitching mosaics together to generate strips, with strips now taking the role of the mosaics. At the seam line between two strips, problems may arise if the line crosses elevated structures such as buildings, bridges, etc. This classic problem in photogrammetry arises from the parallax caused by the same object being looked at from two different perspectives. During imaging of a building, for example, one strip may present a view from one side of the building while another strip presents a view from another side of the building. After the images are stitched together, the resulting mosaic may look like a tepee. In order to address this, a terrain-guided mosaicing process may be implemented to guide the placement of a seam line. For example, LIDAR or DEM data collected with, or analyzed from, image data may be processed to determine the configuration and shaping of images as they are mosaiced together. Thus, in some mosaiced images, a seam line may not be a straight line—instead comprising a seam line that shifts back and forth to snake through elevated structures.

Referring now to FIG. 8, one embodiment of an imaging process 800 is illustrated in accordance with the present invention as described above. Process 800 begins with a series 802 of one or more raw collected images. Images 802 are then processed through a white-balancing process 804, transforming them into a series of intermediate images. Series 802 is then processed through anti-vignetting function 806 before progressing to the orthorectification process 808. As previously noted, orthorectification may rely on position and attitude data 810 from the imaging sensor system or platform, and on DTM data 812. DTM data 812 may be developed from position data 810 and from, for example, USGS DTM data 814 or LIDAR data 816. Series 802 is now orthorectified and processing continues with color balancing 818. After color balancing, series 802 is converted by mosaicing module 820 into compound image 822. Module 820 performs the mosaicing and feathering processes during this conversion. Now, one or more compound images 822 are further combined in step 824, by mosaicing with a gradient and feathering, into image strip 826. Image strips are processed through intensity normalization 828. The now-normalized strips are mosaiced together in step 830, again by mosaicing with a gradient and feathering, rendering a finished tiled mosaic 832. The mosaicing performed in step 830 may comprise a terrain-guided mosaicing, relying on DTM data 812 or LIDAR data 816.

FIG. 9 illustrates diagrammatically how photos taken with the camera array assembly may be aligned to make an individual frame. This embodiment shows a photo pattern illustration looking down from a vehicle, using data ortho-rectified from five cameras.

FIG. 10 is a block diagram of the processing logic according to certain embodiments of the present invention. As shown in block diagram 1000, the processing logic accepts one or more inputs, which may include elevation measurements 1002, attitude measurements 1004 and/or photo and sensor imagery 1006. Certain inputs may be passed through an initial processing step prior to analysis, as is shown in block 1008, wherein the attitude measurements are combined with data from ground control points. Elevation measurements 1002 and attitude measurements 1004 may be combined to generate processed elevation data 1010. Processed elevation data 1010 may then be used to generate elevation DEM 1014 and DTM 1016. Similarly, attitude measurements 1004 may be combined with photo and sensor imagery 1006 to generate georeferenced images 1012, which then undergo image processing 1018, which may include color balancing and gradient filtering.

Depending on the data set to be used (1020), either DTM 1016 or a USGS DEM 1022 is combined with processed images 1018 to generate orthorectified imagery 1024. Orthorectified imagery 1024 then feeds into self-locking flightlines 1026. Balancing projection mosaicing 1028 then follows, to generate final photo output 1030.

The present invention may employ a certain degree of lateral oversampling to improve output quality. FIG. 11 is an illustration of a lateral oversampling pattern 1100 looking down from a vehicle according to certain embodiments of the present invention showing minimal lateral oversampling. In this illustration, the central nadir region 1102 assigned to the center camera overlaps only slightly with the left nadir region 1104 and right nadir region 1106, so that overlap is minimized. FIG. 12 is an illustration of a lateral oversampling pattern 1200 looking down from a vehicle according to certain embodiments of the present invention showing a greater degree of lateral oversampling. In this illustration, the central nadir region 1202 shows a high degree of overlap with left nadir region 1204 and right nadir region 1206.

In addition to the use of lateral oversampling as shown in FIGS. 11 and 12, the present invention may employ flight line oversampling as well. FIG. 13 is an illustration of a flight line oversampling pattern 1300 looking down from a vehicle according to certain embodiments of the present invention showing a certain degree of flight line oversampling but minimal lateral oversampling. Central nadir regions 1302 and 1304 overlap one another along the flight line, but do not overlap laterally with left nadir regions 1306 and 1308 or with right nadir regions 1310 and 1312.

FIG. 14 is an illustration of flight line oversampling looking down from a vehicle according to certain embodiments of the present invention showing significant flight line oversampling as well as significant lateral oversampling. It can be seen that each of the central nadir regions 1402 through 1406 is significantly overlapped with the others as well as with left nadir regions 1408 through 1412 and right nadir regions 1414 through 1418. Left nadir regions 1408 through 1412 are overlapped with one another, as are right nadir regions 1414 through 1418. Accordingly, each point on the surface is sampled at least twice, and in some cases as many as four times. This technique uses the fact that in the area of an image that is covered twice, or more, by different camera sensors, a doubling of the image resolution is possible in both the lateral (across path) and flight line (along path) directions for an overall quadrupling of the resolution. In practice, the improvement in image/sensor resolution is somewhat less than doubled in each of the dimensions, approximately 40% in each dimension, or 1.4×1.4 ≈ 2 times. This is due to the statistical variations of the sub-pixel alignment/orientation. In effect, the pixel grid is rarely exactly equidistant from the overlaid pixel grid. If extremely precise lateral camera sensor alignments were made at the sub-pixel level, a quadrupling of image resolution could be realized.

FIG. 15 is an illustration of a progressive magnification pattern 1500 looking down from a vehicle according to certain embodiments of the present invention. Central nadir region 1502 is bounded on its left and right edges by inner left nadir region 1504 and inner right nadir region 1506, respectively. Inner left nadir region 1504 is bounded on its left edge by outer left nadir region 1508, while inner right nadir region 1506 is bounded on its right edge by outer right nadir region 1510. Note that these regions exhibit a minimal degree of overlap and oversampling from one to another.

FIG. 16 is an illustration of a progressive magnification pattern 1600 looking down from a vehicle according to certain embodiments of the present invention. Central nadir region 1602 is bounded on its left and right edges by inner left nadir region 1604 and inner right nadir region 1606, respectively. Inner left nadir region 1604 is bounded on its left edge by outer left nadir region 1608, while inner right nadir region 1606 is bounded on its right edge by outer right nadir region 1610. Note that, as above, these regions exhibit a minimal degree of overlap and oversampling from one to another. Within each of the nadir regions 1604 through 1610, there is a central image region 1614 through 1620 shown shaded in grey.

FIG. 17 is an illustration of a progressive magnification pattern 1700 looking down from a vehicle according to certain embodiments of the present invention. In the center of pattern 1700, a left inner nadir region 1702 and a right inner nadir region 1704 overlap in the center. A left intermediate nadir region 1706 and a right intermediate nadir region 1708 are disposed partly outside of regions 1702 and 1704, respectively, each sharing an overlapping area with the respective adjacent area by approximately 50%. An outer left nadir region 1710 and an outer right nadir region 1712 are disposed partly outside of regions 1706 and 1708, respectively, each sharing an overlapping area with the respective adjacent area by approximately 50%. A central image region 1714 is disposed in the center of pattern 1700, comprised of the central portions of nadir regions 1702 through 1712.

FIG. 18 depicts a schematic of the architecture of a system 1800 according to certain embodiments of the present invention. System 1800 may include one or more GPS satellites 1802 and one or more SATCOM satellites 1804. One or more GPS location systems 1806 may also be included, operably connected to one or more modules 1808 collecting LIDAR, GPS and/or X, Y, Z location data and feeding such information to one or more data capture system applications 1812. One or more data capture system applications 1812 may also receive spectral data from a camera array 1822. A DGPS 1810 may communicate with one or more SATCOM satellites 1804 via a wireless communications link 1826. One or more SATCOM satellites 1804 may, in turn, communicate with one or more data capture system applications 1812.

One or more data capture system applications 1812 may interface with an autopilot 1816, an SSD 1814 and/or a RealTime StitchG system 1820, which may also interact with one another. SSD 1814 may be operably connected to RealTime DEM 1818. Finally, RealTime DEM 1818 and RealTime StitchG 1820 may be connected to a storage device, such as disk array 1824.

The present invention may employ a certain degree of co-mounted, co-registered oversampling to overcome physical pixel resolution limits. FIG. 19 is an illustration of a lateral co-mounted, co-registered oversampling configuration 1900 for a single camera array 112 looking down from a vehicle according to certain embodiments of the present invention showing minimal lateral oversampling. The cameras overlap a few degrees in the vertical sidelap areas 1904 and 1908. Whereas FIG. 19 depicts a 3-camera array, these subpixel calibration techniques work equally well when utilizing a camera array with any number of camera sensors, from two upward.

Similar to the imaging sensors in FIGS. 3 and 4, the camera sensors may be co-registered to calibrate the physical mount angle offset of each sensor relative to each other and/or to the nadir camera. This provides an initial, “close” calibration. These initial calibration parameters may be entered into an onboard computer system 104 in the system 100, and updated during flight using oversampling techniques.

Referring now to FIG. 19, the rectangles labeled A, B, and C represent image areas 1902, 1906 and 1910 from a 3-camera array C-B-A (not shown). Images of areas 1902, 1906 and 1910 taken by cameras A through C (not shown), respectively, are illustrated from an overhead view. Again, similar to FIGS. 3 and 4, because of the “cross-eyed” arrangement, the image of area 1902 is taken by right camera A, the image of area 1906 is taken by center/nadir camera B, and the image of area 1910 is taken by left camera C. Cameras A through C form an array (not shown) that is, in most applications, pointed down vertically.

In FIG. 19, the hatched areas labeled A/B and B/C sidelaps represent image overlap areas 1904 and 1908, respectively. The left image overlap area 1904 is where right camera A overlaps with the center/nadir camera B, and the right image overlap area 1908 is where the left camera C overlaps with the center/nadir camera B. In these sidelap areas 1904 and 1908, the camera sensor grid bisects each pixel in the overlap areas 1904 and 1908, which effectively quadruples the image resolution in these areas 1904 and 1908 via the mechanism of co-mounted, co-registered over-sampling. In effect, the improvement in image/sensor resolution is doubled in each dimension, or 2×2=4 times. This quadrupling of the image resolution also quadruples the alignment precision between adjacent cameras.

Further, this quadrupling of alignment precision between adjacent cameras improves the system 100's alignment precision for all sensors affixed to a rigid mount plate. The cameras and sensors are affixed to a rigid mount unit, which is affixed to the rigid mount plate, as discussed above. In particular, when the angular alignment of adjacent cameras affixed to the rigid mount unit is improved, the angular alignment of the other sensors is also enhanced. This enhancement of alignment precision for the other sensors affixed to the rigid mount plate also improves the image resolution for those sensors.

A lateral co-mounted, co-registered oversampling configuration 2000 for two overlapping camera arrays 112 is illustrated in FIG. 20. In particular, FIG. 20 is an illustration of a lateral co-mounted, co-registered oversampling configuration 2000 for two overlapping camera arrays 112 looking down from a vehicle according to certain embodiments of the present invention showing maximum lateral oversampling. The adjacent cameras overlap a few degrees in the vertical sidelap areas 2006, 2008, 2014 and 2016, and the corresponding cameras overlap completely in the image areas 2002, 2010, 2018 and 2004, 2012, 2020. Whereas FIG. 20 depicts two 3-camera arrays, these subpixel calibration techniques work equally well when utilizing two overlapping camera arrays with any number of camera sensors each, from two upward.

Similar to the imaging sensors in FIGS. 3 and 4, the camera sensors may be co-registered to calibrate the physical mount angle offset of each sensor relative to each other and/or to the nadir camera. In this embodiment, multiple, i.e., at least two, rigid mount units are affixed to a rigid mount plate and are co-registered. This provides an initial, “close” calibration. These initial calibration parameters may be entered into an onboard computer system 104 in the system 100, and updated during flight.

Referring now to FIG. 20, the rectangles labeled A, B, and C represent image areas 2002, 2010, 2018, and 2004, 2012, 2020 from two overlapping 3-camera arrays C-B-A (not shown), respectively. Images of areas 2002, 2010, 2018, and 2004, 2012, 2020 taken by cameras A through C (not shown) and overlapping cameras A′ through C′ (not shown), respectively, are illustrated from an overhead view. Again, similar to FIGS. 3 and 4, because of the “cross-eyed” arrangement, the image of area 2002 is taken by right camera A, the image of area 2010 is taken by center/nadir camera B, and the image of area 2018 is taken by left camera C. Further, the image of area 2004 is taken by right camera A′, the image of area 2012 is taken by center camera B′, and the image of area 2020 is taken by left camera C′. Cameras A through C and overlapping cameras A′ through C′ form arrays (not shown) that are, in most applications, pointed down vertically.

In FIG. 20, the hatched areas labeled A/B and B/C sidelaps represent two overlapping image overlap areas 2006, 2008 and 2014, 2016, respectively. The left image overlap areas 2006 and 2008 are where right camera A overlaps with the center/nadir camera B, and where right camera A′ overlaps with the center camera B′, respectively. The right image overlap areas 2014 and 2016 are where the left camera C overlaps with the center/nadir camera B, and where the left camera C′ overlaps with the center camera B′. In these sidelap areas 2006, 2008 and 2014, 2016, respectively, the camera sensor grid bisects each pixel in the overlap areas 2006, 2008 and 2014, 2016, which effectively quadruples the image resolution in these areas 2006, 2008 and 2014, 2016 via the mechanism of co-mounted, co-registered oversampling. In effect, the improvement in image/sensor resolution is doubled in each dimension, or 2×2=4 times. This quadrupling of the image resolution quadruples the alignment precision between adjacent cameras, as discussed above.

By having two overlapping camera arrays, the image resolution is effectively quadrupled again for the overlapping sidelap overlap areas 2006, 2008 and 2014, 2016. This results in an astounding overall 64 times improvement in system 100 calibration and camera alignment.

In the overlapping sidelap areas 2006 and 2008, the overlapping camera sensor grids bisect each pixel in the sidelap areas 2006 and 2008, which effectively quadruples the image resolution in these areas 2006 and 2008 via the mechanism of co-mounted, co-registered oversampling. Similarly, in the overlapping sidelap areas 2014 and 2016, the overlapping camera sensor grids bisect each pixel in the sidelap areas 2014 and 2016, which effectively quadruples the image resolution in these areas 2014 and 2016. In effect, the improvement in image/sensor resolution is again doubled in each dimension, or 2×2×2×2×2×2=64 times. This overall 64 times improvement of the image resolution also enhances alignment precision by 64 times between adjacent cameras.

This 64 times improvement of alignment precision between adjacent and corresponding cameras enhances the system 100's alignment precision for all sensors affixed to a rigid mount plate. Cameras A through C and, optionally, other sensors are affixed to a first rigid mount unit, and cameras A′ through C′ and, optionally, other sensors are affixed to a second rigid mount unit, which are each affixed to a rigid mount plate. In particular, when the angular alignment of adjacent and/or corresponding cameras affixed to the first and/or second rigid mount units is improved, the angular alignment of the other sensors is also enhanced. This enhancement of alignment precision for the other sensors affixed to the rigid mount plate also improves the image resolution for those sensors.

By having two overlapping camera arrays, the image resolution is effectively quadrupled for the entire image, not just for the A/B and B/C sidelap overlap areas. Referring now to FIG. 20, the overlapping grid detail labeled "OVERLAPPING GRID 4X" represents overlapping areas 2022 and 2024 in right image areas 2018 and 2020, respectively. In the overlapping areas 2022 and 2024, the overlapping camera sensor grids bisect each pixel in the overlapping areas 2022 and 2024, which effectively quadruples the image resolution in these areas 2022 and 2024 via the mechanism of co-mounted, co-registered oversampling. In effect, the improvement in image resolution is doubled in each dimension, or 2×2=4 times.

In a preferred embodiment, one camera array is monochrome, and another camera array is red-green-blue. Even though each array covers different color bands, simple image processing techniques are used so that all color bands realize the benefit of this increased resolution. Another advantage provided by these techniques is that, in the case where one camera array is red-green-blue and the other, overlapping camera array is infrared or near infrared (or covers some other bandwidth), the result is a superior multi-spectral image.

Accordingly, all of the improvements (i.e., 4 times) identified for the embodiment of FIG. 19 discussed above apply to the embodiment of FIG. 20; however, additional significant enhancements (i.e., 64 times) to the system 100 calibration precision and overall image resolution may be realized through the two overlapping camera arrays.

FIG. 21 is an illustration of a fore and lateral co-mounted, co-registered oversampling configuration 2100 for two overlapping camera arrays 112 looking down from a vehicle according to certain embodiments of the present invention, showing minimal fore and minimal lateral oversampling. The adjacent cameras overlap a few degrees in the vertical sidelap areas 2104, 2108, 2124 and 2128, and the corresponding cameras overlap a few degrees along the horizontal forelap areas 2112, 2116 and 2120. Although FIG. 21 depicts two 3-camera arrays, these subpixel calibration techniques work equally well when utilizing two overlapping camera arrays with any number of camera sensors, from two upward.

Similar to the imaging sensors in FIGS. 3 and 4, the camera sensors may be co-registered to calibrate the physical mount angle offset of each sensor relative to each other and/or to the nadir camera. In this embodiment, multiple, i.e., at least two, rigid mount units are affixed to a rigid mount plate and are co-registered. This provides an initial, “close” calibration. These initial calibration parameters may be entered into an onboard computer system 104 in the system 100, and updated during flight.
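
As one illustration of how such initial calibration parameters might be represented and applied, the following sketch stores per-sensor mount-angle offsets relative to the nadir camera and converts them into rotation matrices. The field names, angle values and rotation order are assumptions for illustration only, not the system's actual calibration format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MountOffset:
    """Illustrative record of one sensor's physical mount-angle offset,
    in degrees, relative to the nadir camera (field names are assumed)."""
    roll_deg: float
    pitch_deg: float
    yaw_deg: float

def offset_to_rotation(off: MountOffset) -> np.ndarray:
    """Build a 3x3 rotation matrix (Z-Y-X order) from the angular offsets,
    so a ray in the sensor frame can be expressed in the nadir-camera frame."""
    r, p, y = np.radians([off.roll_deg, off.pitch_deg, off.yaw_deg])
    rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    return rz @ ry @ rx

# Initial "close" calibration for a 3-camera array, later refined in flight.
initial_calibration = {
    "A": MountOffset(roll_deg=0.02, pitch_deg=-0.01, yaw_deg=16.0),
    "B": MountOffset(roll_deg=0.00, pitch_deg=0.00, yaw_deg=0.0),   # nadir reference
    "C": MountOffset(roll_deg=-0.03, pitch_deg=0.02, yaw_deg=-16.0),
}
ray_in_nadir_frame = offset_to_rotation(initial_calibration["A"]) @ np.array([0.0, 0.0, 1.0])
```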

Referring now to FIG. 21, the rectangles labeled A, B, and C represent image areas 2102, 2106 and 2110 from a 3-camera array C-B-A (not shown), and the rectangles labeled D, E, and F represent image areas 2122, 2126 and 2130 from a 3-camera array F-E-D (not shown), respectively. Images of areas 2102, 2106 and 2110 taken by cameras A through C (not shown), and images of areas 2122, 2126 and 2130 taken by cameras D through F (not shown), respectively, are illustrated from an overhead view. Again, similar to FIGS. 3 and 4, because of the “cross-eyed” arrangement, the rear, left image of area 2102 is taken by rear, right camera A, the rear, center image of area 2106 is taken by rear, center/nadir camera B, and the rear, right image of area 2110 is taken by rear, left camera C. Further, the forward, left image of area 2122 is taken by forward, right camera D, the forward, center image of area 2126 is taken by forward, center camera E, and the forward, right image of area 2130 is taken by forward, left camera F. Cameras A through C and overlapping cameras D through F form arrays (not shown) that are, in most applications, pointed down vertically.

In FIG. 21, the vertical hatched areas represent four image overlap areas 2104, 2108, 2124 and 2128. The rear, left image overlap area 2104 is where rear, right camera A overlaps with the center/nadir camera B, and the rear, right image overlap area 2108 is where rear, left camera C overlaps with the center/nadir camera B. The forward, left image overlap area 2124 is where forward, right camera D overlaps with the center/nadir camera E, and the forward, right image overlap area 2128 is where forward, left camera F overlaps with the center camera E.

Referring now to FIG. 21, the overlapping grid detail labeled “SIDELAP AREA 4:1” represents overlapping sidelap overlap areas 2104, 2108, 2124 and 2128. In these sidelap overlap areas 2104, 2108, 2124 and 2128, the camera sensor grids bisect each pixel in the overlap areas, which effectively quadruples the image resolution in these areas via the mechanism of co-mounted, co-registered oversampling. In effect, the improvement in image/sensor resolution is doubled in each dimension, or 2×2=4 times. This quadrupling of the image resolution quadruples the alignment precision between adjacent cameras, as discussed above.

This quadrupling of alignment precision between adjacent cameras improves the system 100 alignment precision for all sensors affixed to a rigid mount plate. Cameras A through C and, optionally, other sensors are affixed to a first rigid mount unit, and cameras D through F and, optionally, other sensors are affixed to a second rigid mount unit, each of which is affixed to a rigid mount plate. In particular, when the angular alignment of adjacent cameras affixed to the first or second rigid mount units is improved, the angular alignment of the other sensors affixed to the mount unit is also enhanced. This enhancement of alignment precision for the other sensors affixed to the rigid mount plate also improves the image resolution for those sensors.

Similarly, the horizontal hatched areas represent three image overlap areas 2112, 2116 and 2120. The left image overlap area 2112 is where rear, right camera A overlaps with the forward, right camera D; the center image overlap area 2116 is where rear, center/nadir camera B overlaps with the forward, center camera E; and the right image overlap area 2120 is where rear, left camera C overlaps with forward, left camera F.

Referring now to FIG. 21, the overlapping grid detail labeled “FORELAP AREA 4:1” represents overlapping forelap overlap areas 2112, 2116 and 2120. In these forelap overlap areas 2112, 2116 and 2120, the camera sensor grids bisect each pixel in the overlap areas, which effectively quadruples the image resolution in these areas via the mechanism of co-mounted, co-registered oversampling. In effect, the improvement in image/sensor resolution is doubled in each dimension, or 2×2=4 times. This quadrupling of the image resolution quadruples the alignment precision between corresponding cameras.

This quadrupling of alignment precision between corresponding cameras improves the system 100 alignment precision for all sensors affixed to a rigid mount plate. Cameras A through C and, optionally, other sensors are affixed to a first rigid mount unit, and cameras D through F and, optionally, other sensors are affixed to a second rigid mount unit, each of which is affixed to a rigid mount plate. In particular, when the angular alignment of corresponding cameras affixed to the first or second rigid mount units is improved, the angular alignment of the other sensors is also enhanced. This enhancement of alignment precision for the other sensors affixed to the rigid mount plate also improves the image resolution for those sensors.

Similar to the overlapping sidelap overlap areas 2006, 2008 and 2014, 2016 in FIG. 20, the intersecting forelap and sidelap overlap areas 2114 and 2118 in FIG. 21 result in an astounding overall 64 times improvement in system calibration and camera alignment. Referring now to FIG. 21, the intersecting grid detail labeled “QUAD OVERLAP AREA 64:1” represents intersecting forelap and sidelap overlap area 2118. In the intersecting forelap and sidelap overlap areas 2114 and 2118, the overlapping camera sensor grids bisect each pixel in the intersecting areas 2114 and 2118, which effectively quadruples the image resolution in these areas via the mechanism of co-mounted, co-registered oversampling. In effect, the improvement in image/sensor resolution is again doubled in each dimension, for a cumulative 2×2×2×2×2×2=64 times. This overall 64 times improvement of the image resolution also enhances alignment precision by 64 times between adjacent cameras.

This 64 times improvement of alignment precision between adjacent and corresponding cameras enhances the system 100 alignment precision for all sensors affixed to a rigid mount plate. Cameras A through C and, optionally, other sensors are affixed to a first rigid mount unit, and cameras D through F and, optionally, other sensors are affixed to a second rigid mount unit, each of which is affixed to a rigid mount plate. In particular, when the angular alignment of adjacent and/or corresponding cameras affixed to the first and/or second rigid mount units is improved, the angular alignment of the other sensors is also enhanced. This enhancement of alignment precision for the other sensors affixed to the rigid mount plate also improves the image resolution for those sensors.

In a preferred embodiment, one camera array is monochrome, and another camera array is red-green-blue. Even though each array covers different color bands, simple image processing techniques are used so that all color bands realize the benefit of this increased resolution. Another advantage provided by these techniques arises where one camera array is red-green-blue and the other, overlapping camera array is infrared or near infrared (or some other bandwidth), which results in a superior multi-spectral image.

As shown in FIGS. 19-21, these techniques may be used to overcome the resolution limits imposed on camera systems by the inability of optical glass to resolve “very small” objects. In particular, there are known physical limits to the ability of optical glass in camera lenses to resolve very small objects. This is often called “the resolving limit of glass”. For example, if 1 millimeter pixels are required from 10,000 feet of altitude, an extremely high magnification telescopic lens would be required, yielding a ground swath of only about 100 feet. This is because no matter how many pixels a charge-coupled device sensor can produce (e.g., 1 billion pixels), the resolving power of the purest glass would not permit image resolution to 1 millimeter pixels at 10,000 feet of altitude. This example is used to make the point that there are physical limits for pixel resolution in glass as well as pixel density limits for an imaging sensor.
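
A short, hedged calculation makes the geometry of this example concrete. The sensor width used below is an assumed round number chosen so that the swath comes out near the roughly 100 foot figure mentioned above; the altitude and pixel size follow the example in the text.

```python
import math

altitude_m = 10_000 * 0.3048          # 10,000 ft expressed in metres
gsd_m = 0.001                          # desired 1 mm ground sample distance
pixels_across = 30_000                 # assumed sensor width in pixels (illustrative)

# Angle subtended by one 1 mm ground pixel from altitude (small-angle regime).
pixel_angle_rad = gsd_m / altitude_m
pixel_angle_arcsec = math.degrees(pixel_angle_rad) * 3600

# Ground swath covered by the whole sensor at that ground sample distance.
swath_m = gsd_m * pixels_across
print(f"per-pixel angle ~ {pixel_angle_arcsec:.4f} arcsec, swath ~ {swath_m:.0f} m")
# Roughly 0.07 arcsec per pixel and a swath of only about 30 m (~100 ft):
# far beyond what practical lens glass can resolve, regardless of how many
# pixels the detector itself provides.
```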

The system 100 imaging sensor alignment in the rigid mount unit(s) affixed to the rigid mount plate and related calibration techniques provide a unique solution to this problem, as described above. By using these techniques, the resolving limitations of glass can effectively be overcome. For example, a single camera array results in 1 times (or no) oversampling benefit. However, two overlapping camera arrays result in a 4 times overall improvement in both image resolution and overall geospatial horizontal and vertical accuracy. Further, three overlapping camera arrays result in a 16 times overall improvement, four overlapping camera arrays result in a 64 times overall improvement, and so on.

As can be deduced from these examples, the equation for overall improvement is as follows:
overall improvement = 4^N
where N is the number of overlapping camera arrays.

If there are four camera arrays, then there are three overlapping camera arrays (i.e., N=3). Accordingly, four camera arrays provide a 64 times (i.e., 4^3=64 times) overall improvement in both the image resolution and overall geospatial horizontal and vertical accuracy.
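
A minimal sketch of this relationship, using the convention from the preceding paragraph that N is one less than the total number of camera arrays, is shown below; the function name is illustrative.

```python
def overall_improvement(total_arrays: int) -> int:
    """Overall resolution/accuracy improvement factor, per the relation
    stated above: 4**N, where N = total_arrays - 1 overlapping arrays."""
    n_overlapping = max(total_arrays - 1, 0)
    return 4 ** n_overlapping

for arrays in (1, 2, 3, 4):
    print(arrays, "camera array(s) ->", overall_improvement(arrays), "times")
# 1 -> 1, 2 -> 4, 3 -> 16, 4 -> 64, matching the examples in the text.
```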

Further, these subpixel calibration techniques may be combined with the self-locking flight path techniques, as disclosed in U.S. Publication No. 2004/0054488A1, now U.S. Pat. No. 7,212,938B2, the disclosure of which is hereby incorporated by reference in full.

In addition to fore and/or lateral co-mounted, co-registered oversampling as shown in FIGS. 19-21, the present invention may also employ flight line oversampling to further improve the image resolution, as shown in FIGS. 13-17. As shown in FIGS. 13-17, the flight lines overlap each other in an image region because the flight lines are parallel to one another. These overlapping image regions may be used to calibrate the sensors by along-track and cross-track parallax of images in adjacent flight lines using stereographic techniques.

In an embodiment, the self-locking flight path may comprise any pattern that produces at least three substantially parallel travel lines out of a group of three or more travel lines. Further, at least one of the travel lines should be in an opposing direction to the other substantially parallel travel lines. In a preferred embodiment, the travel pattern comprises at least one pair of travel lines in a matching direction and at least one pair of travel lines in an opposing direction.
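
One possible travel pattern meeting these conditions can be sketched as follows. The serpentine North/South layout, spacing and waypoint format are assumptions for illustration; the invention is not limited to this particular pattern.

```python
def self_locking_pattern(num_lines: int, spacing_m: float, length_m: float):
    """Illustrative generator of a serpentine travel pattern: parallel
    North/South lines, alternating direction, spaced across the target
    area.  Returns (start, end) waypoints in a local East/North frame.
    This is only one pattern satisfying the constraints stated above."""
    lines = []
    for i in range(num_lines):
        east = i * spacing_m
        if i % 2 == 0:                       # northbound line
            lines.append(((east, 0.0), (east, length_m)))
        else:                                # opposing, southbound line
            lines.append(((east, length_m), (east, 0.0)))
    return lines

# Four lines give at least three substantially parallel lines, at least one
# matching-direction pair and at least one opposing-direction pair.
for start, end in self_locking_pattern(4, spacing_m=800.0, length_m=5000.0):
    print(start, "->", end)
```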

When using the self-locking flight path in opposite directions, the observable positional error may be doubled in some image regions. Accordingly, the self-locking flight path technique includes an algorithm to significantly reduce these positional errors. This reduction in positional errors is especially important in the outside, or far left and far right, “wing” image areas where the greatest positional errors occur.

In an embodiment, these positional improvements may be realized by using a pattern matching technique to automatically match a pixel pattern area obtained from a flight line (e.g., North/South) with the same pixel pattern area obtained from an adjacent flight line (e.g., North/South). In a preferred embodiment, the latitude/longitude coordinates from one or more GPS location systems may be used to accelerate this pattern matching process.
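
A hedged sketch of such a pattern-matching step is shown below. It uses generic normalized cross-correlation over a small search window centered on the location predicted from the GPS solution; the actual matcher, window size and scoring used by the system are not specified here, so treat the function and its parameters as assumptions.

```python
import numpy as np

def match_patch(patch: np.ndarray, search: np.ndarray):
    """Find the offset of `patch` inside `search` by normalized
    cross-correlation (a generic matcher; the system may use another).
    `search` is a small window centred on the location predicted from
    the GPS/IMU solution, which keeps the search area, and hence the
    run time, small."""
    ph, pw = patch.shape
    best_score, best_off = -np.inf, (0, 0)
    p = (patch - patch.mean()) / (patch.std() + 1e-12)
    for r in range(search.shape[0] - ph + 1):
        for c in range(search.shape[1] - pw + 1):
            w = search[r:r + ph, c:c + pw]
            wn = (w - w.mean()) / (w.std() + 1e-12)
            score = float((p * wn).mean())
            if score > best_score:
                best_score, best_off = score, (r, c)
    return best_off, best_score

rng = np.random.default_rng(0)
scene = rng.random((64, 64))                      # synthetic adjacent-flight-line image
offset, score = match_patch(scene[20:36, 24:40], scene[12:48, 16:56])
print(offset, round(score, 3))                    # recovered row/col offset inside the window
```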

Similarly, these subpixel calibration and self-locking flight path techniques may be combined with stereographic techniques, because stereographic techniques rely heavily on the positional accuracy of each pixel relative to all other pixels. In particular, these techniques improve the stereographic image resolution and overall geospatial horizontal and vertical accuracy, especially in the far left and far right “wing” image areas, where the greatest positional errors occur. Further, stereographic techniques are used to match known elevation data with the improved stereographic datasets. Accordingly, the combined subpixel calibration, self-locking flight path and stereographic techniques provide a greatly improved Digital Elevation Model, which results in a superior image.

Further, these subpixel calibration and self-locking flight path techniques may be used to provide a dynamic, RealTime calibration of the system 100. In particular, these techniques provide the ability to rapidly “roll on” one or more camera array assemblies 112 onto the system 100, to immediately begin collecting image data of a target area and to quickly produce high-quality images because the individual sensors have been initially calibrated in the rigid mount unit(s) affixed to the rigid mount plate, as discussed above. In particular, the camera sensors are co-registered to calibrate the physical mount angle offset of each sensor relative to each other and/or to the nadir camera. In an embodiment, multiple, i.e., at least two, rigid mount units are affixed to a rigid mount plate and are co-registered. This provides an initial, “close” calibration. These initial calibration parameters may be entered into an onboard computer system 104 in the system 100, and updated during flight using oversampling techniques, as discussed above.

In an embodiment, the system 100 comprises a RealTime, self-calibrating system to update the calibration parameters. In particular, the onboard computer 104 software comprises a RealTime software “daemon” (i.e., a background closed-loop monitoring software) to constantly monitor and update the calibration parameters using the co-mounted, co-registered oversampling and flight line oversampling techniques, as discussed above. In a preferred embodiment, the RealTime daemon combines subpixel calibration, self-locking flight path and stereographic techniques to improve the stereographic image resolution and overall geospatial horizontal and vertical accuracy. In particular, stereographic techniques are used to match known elevation data to the improved stereographic datasets. Accordingly, the combined subpixel calibration, self-locking flight path and stereographic techniques provide a greatly improved Digital Elevation Model, which results in a superior image.
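
The following sketch shows one way such a background daemon could be structured: a closed loop that periodically pulls residual angular errors measured in the overlap areas and blends a fraction of each correction into the current calibration parameters. The class name, update period, gain and parameter layout are assumptions, not the system's actual software design.

```python
import threading

class CalibrationDaemon(threading.Thread):
    """Illustrative background loop (one possible realization of the
    'daemon' described above): it periodically pulls residual angular
    errors measured in the overlap areas and blends a fraction of each
    correction into the current mount-angle parameters."""

    def __init__(self, params: dict, measure_residuals, period_s: float = 5.0, gain: float = 0.2):
        super().__init__(daemon=True)
        self.params = params                  # e.g. {"A": {"yaw_deg": 16.0}, ...}
        self.measure_residuals = measure_residuals
        self.period_s = period_s
        self.gain = gain                      # small gain keeps the closed loop stable
        self._stop = threading.Event()

    def run(self):
        while not self._stop.is_set():
            for sensor, residual in self.measure_residuals().items():
                for axis, err in residual.items():
                    self.params[sensor][axis] -= self.gain * err
            self._stop.wait(self.period_s)

    def stop(self):
        self._stop.set()

# Example (hypothetical residual source):
# daemon = CalibrationDaemon(params, lambda: {"A": {"yaw_deg": 0.004}})
# daemon.start(); ...; daemon.stop()
```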

In an embodiment, the system 100 comprises a RealTime GPS data system to provide GPS input data. Calibration accuracy is driven by input data from electronic devices such as a GPS receiver and an IMU, and by calibration software that is augmented by industry standard GPS and IMU software systems. Accordingly, a key component of this RealTime, self-calibrating system is RealTime GPS input data delivered via a potentially low-bandwidth communication channel such as a satellite phone, cell phone, RF modem, or similar device. Potential sources for the RealTime GPS input data include project-controlled ad-hoc stations, fixed broadcast GPS locations (or similar), or inertial navigation via an onboard IMU.

The modules, algorithms and processes described above can be implemented in a number of technologies and configurations. Embodiments of the present invention may comprise functional instances of software or hardware, or combinations thereof. Furthermore, the modules and processes of the present invention may be combined together in a single functional instance (e.g., one software program), or may comprise operatively associated separate functional devices (e.g., multiple networked processor/memory blocks). All such implementations are comprehended by the present invention.

The embodiments and examples set forth herein are presented to best explain the present invention and its practical application and to thereby enable those skilled in the art to make and utilize the invention. However, those skilled in the art will recognize that the foregoing description and examples have been presented for the purpose of illustration and example only. The description as set forth is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching without departing from the spirit and scope of the following claims.

Smitherman, Chester L.

Patent Priority Assignee Title
1699136,
1910425,
2036062,
2104976,
2433534,
2720029,
2747012,
2896501,
2955518,
2988953,
3109057,
3518929,
3527880,
4217607, May 07 1974 Societe Anonyme de Telecommunications Process and device for the instantaneous display of a countryside scanned by a camera of the single line scanning type
4313678, Sep 24 1979 The United States of America as represented by the Secretary of the Automated satellite mapping system (MAPSAT)
4322741, Aug 26 1980 Image dividing system for use in television
4398195, Jul 02 1979 Del Norte Technology, Inc. Method of and apparatus for guiding agricultural aircraft
4504914, Nov 19 1980 LFK-LENKFLUGKORPER SYSTEME GMBH Photogrammetric device for aircraft and spacecraft for producing a digital terrain representation
4543603, Nov 30 1982 Societe Nationale Industrielle et Aerospatiale Reconnaissance system comprising an air-borne vehicle rotating about its longitudinal axis
4583703, Mar 17 1982 The United States of America as represented by the Secretary of the Army One fin orientation and stabilization device
4650305, Dec 19 1985 HINESLAB, A FIRM OF CA Camera mounting apparatus
4686474, Apr 05 1984 DESERET RESEARCH, INC Survey system for collection and real time processing of geophysical data
4689748, Oct 09 1979 LFK-LENKFLUGKORPER SYSTEME GMBH Device for aircraft and spacecraft for producing a digital terrain representation
4708472, May 19 1982 LFK-LENKFLUGKORPER SYSTEME GMBH Stereophotogrammetric surveying and evaluation method
4712010, Jan 30 1986 Hughes Aircraft Company Radiator scanning with image enhancement and noise reduction
4724449, Mar 25 1986 KAEMMLEIN, HANS J , ACTING AS AGENT FOR INVESTORS SEE DOCUMENT Method and apparatus for stereoscopic photography
4750810, Nov 08 1985 British Telecommunications plc Camera optics for producing a composite image from two scenes
4754327, Mar 20 1987 Honeywell, Inc.; HONEYWELL INC , A CORP OF DE Single sensor three dimensional imaging
4757378, Sep 30 1986 BOEING COMPANY THE, A CORP OF DE Monocular scene generator for biocular wide field of view display system
4764008, Nov 19 1987 Surveillance housing assembly
4814711, Apr 05 1984 Deseret Research, Inc.; DESERET RESEARCH, INC , A CORP OF UT Survey system and method for real time collection and processing of geophysicals data using signals from a global positioning satellite network
4887779, Dec 01 1987 The Boeing Company Roll drum sensor housing having sliding window
4935629, Oct 24 1988 LORAL INFRARED & IMAGING SYSTEMS, INC Detector array for high V/H infrared linescanners
4951136, Jan 26 1988 DEUTSCHE FORSCHUNGS- UND VERSUCHSANSTALT FUR LUFT- UND RAUMFAHRT E V ; Ingenieurburo Burkhard Braumer Method and apparatus for remote reconnaissance of the earth
4956705, Mar 10 1989 KAEMMLEIN, HANS J , ACTING AS AGENT FOR INVESTORS SEE DOCUMENT Electronic method and apparatus for stereoscopic photography
4964721, Oct 12 1989 KAMAN AEROSPACE CORPORATION, A CORP OF DE Imaging lidar system
4965572, Jun 10 1988 Turbulence Prediction Systems Method for producing a warning of the existence of low-level wind shear and aircraftborne system for performing same
5013917, Jul 07 1988 Kaman Aerospace Corporation Imaging lidar system using non-visible light
5027199, Sep 20 1988 THERMO INSTRUMENT SYSTEMS INC Image pickup system capable of obtaining a plurality of stereo images with different base height ratios
5029009, May 08 1989 Kaman Aerospace Corporation Imaging camera with adaptive range gating
5045937, Aug 25 1989 Space Island Products & Services, Inc. Geographical surveying using multiple cameras to obtain split-screen images with overlaid geographical coordinates
5104217, Mar 17 1986 GeoSpectra Corporation; GEOSPECTRA CORPORATION, P O BOX 1387, 333 PARKLAND PLAZA, ANN ARBOR, MI 48106, A CORP OF MI System for determining and controlling the attitude of a moving airborne or spaceborne platform or the like
5138444, Sep 05 1991 NEC TOSHIBA SPACE SYSTEMS, LTD Image pickup system capable of producing correct image signals of an object zone
5166789, Aug 25 1989 Space Island Products & Services, Inc. Geographical surveying using cameras in combination with flight computers to obtain images with overlaid geographical coordinates
5187754, Apr 30 1991 Intel Corporation Forming, with the aid of an overview image, a composite image from a mosaic of images
5193124, Jun 15 1990 RESEARCH FOUNDATION OF STATE UNIVERSITY OF NEW YORK, THE, Computational methods and electronic camera apparatus for determining distance of objects, rapid autofocusing, and obtaining improved focus images
5198657, Feb 05 1992 General Atomics; GENERAL ATOMICS - A CORP OF CA Integrated imaging and ranging lidar receiver
5231401, Aug 10 1990 Kaman Aerospace Corporation Imaging lidar system
5247356, Feb 14 1992 PICTOMETRY INTERNATIONAL CORP Method and apparatus for mapping and measuring land
5249034, Jan 29 1991 Toyo Glass Co., Ltd. Method of and apparatus for inspecting end of object for defect
5259037, Feb 07 1991 L-3 Communications Corporation Automated video imagery database generation using photogrammetry
5262953, Oct 31 1989 Agence Spatiale Europeenne Method of rectifying images from geostationary meteorological satellites in real time
5266799, Sep 15 1989 STATE OF ISRAEL, MINISTRY OF ENERGY & INFRASTRUCTURE, THE, GEOLOGICAL SURVEY OF ISRAEL, JERUSALEM, ISRAEL; STATE OF ISRAEL, ATOMIC ENERGY COMMISSION, THE, SOREQ NUCLEAR RESEARCH CENTER, YAVNE 70 600 ISRAEL; Israel Aircraft Industries Ltd Geophysical survey system
5276321, Apr 15 1991 INTERNAL REVENUE SERVICE Airborne multiband imaging spectrometer
5308022, Apr 30 1982 Cubic Corporation Method of generating a dynamic display of an aircraft from the viewpoint of a pseudo chase aircraft
5317394, Apr 30 1992 Northrop Grumman Systems Corporation Distributed aperture imaging and tracking system
5332968, Apr 21 1992 University of South Florida Magnetic resonance imaging color composites
5347539, Apr 15 1991 Motorola, Inc High speed two wire modem
5371358, Apr 15 1991 INTERNAL REVENUE SERVICE Method and apparatus for radiometric calibration of airborne multiband imaging spectrometer
5379065, Jun 22 1992 The United States of America as represented by the Administrator of the Programmable hyperspectral image mapper with on-array processing
5414462, Feb 11 1993 Method and apparatus for generating a comprehensive survey map
5426476, Nov 16 1994 Aircraft video camera mount
5448936, Aug 23 1994 HE HOLDINGS, INC , A DELAWARE CORP ; Raytheon Company Destruction of underwater objects
5450125, Apr 24 1991 Kaman Aerospace Corporation Spectrally dispersive imaging lidar system
5467271, Dec 17 1993 Northrop Grumman Corporation Mapping and analysis system for precision farming applications
5471056, Sep 25 1992 Texaco Inc. Airborne scanner image spectrometer
5517419, Jul 22 1993 Synectics Corporation Advanced terrain mapping system
5555018, Apr 25 1991 Large-scale mapping of parameters of multi-dimensional structures in natural environments
5557397, Sep 21 1994 AIRBORNE REMOTE MAPPING, INC Aircraft-based topographical data collection and processing system
5596494, Nov 14 1994 EOTEK INC Method and apparatus for acquiring digital maps
5604534, May 24 1995 IMAGEAMERICA, INC ; TOPEKA AIRCRAFT, INC Direct digital airborne panoramic camera system and method
5625409, Oct 14 1992 Matra Cap Systemes High resolution long-range camera for an airborne platform
5633946, May 19 1994 Geospan Corporation Method and apparatus for collecting and processing visual and spatial position information from a moving platform
5639964, Oct 24 1994 FIDELITY TECHNOLOGIES CORPORATION Thermal anemometer airstream turbulent energy detector
5647015, Dec 11 1991 Texas Instruments Incorporated Method of inferring sensor attitude through multi-feature tracking
5668593, Jun 07 1995 GOODRICH CORPORATION Method and camera system for step frame reconnaissance with motion compensation
5721611, Feb 15 1993 E.M.S. Technik, GmbH Photogrammetric camera, in particular for photogrammetric measurements of technical objects
5734507, Nov 29 1993 DRS TECHNOLOGIES UK LIMITED Optical beam splitter and electronic high speed camera incorporating such a beam splitter
5765044, Dec 13 1993 LEICA GEOSYSTEMS GIS & MAPPING, LLC Airborne photographing apparatus
5790188, Sep 07 1995 Keyw Corporation Computer controlled, 3-CCD camera, airborne, variable interference filter imaging spectrometer system
5798786, May 07 1996 GOODRICH CORPORATION Electro-optical imaging detector array for a moving vehicle which includes two axis image motion compensation and transfers pixels in row directions and column directions
5815314, Dec 27 1993 Canon Kabushiki Kaisha Image display apparatus and image display method
5872590, Nov 11 1996 Fujitsu Limited Image display apparatus and method for allowing stereoscopic video image to be observed
5878356, Jun 14 1995 Agrometrics, Inc. Aircraft based infrared mapping system for earth based resources
5886821, Oct 02 1997 Fresnel Technologies, Inc. Lens assembly for miniature motion sensor
5894323, Mar 22 1996 E-CONAGRA COM, INC , A CORP OF DELAWARE Airborne imaging system using global positioning system (GPS) and inertial measurement unit (IMU) data
5937212, Nov 15 1996 Canon Kabushiki Kaisha Image pickup apparatus
5953054, May 31 1996 Geo-3D Inc. Method and system for producing stereoscopic 3-dimensional images
5963664, Jun 22 1995 Sarnoff Corporation Method and system for image combination using a parallax-based technique
5982951, May 28 1996 Canon Kabushiki Kaisha Apparatus and method for combining a plurality of images
5999211, May 24 1995 IMAGEAMERICA, INC ; TOPEKA AIRCRAFT, INC Direct digital airborne panoramic camera system and method
6002815, Jul 16 1997 GEMALTO SA Linear sensor imaging method and apparatus
6005987, Oct 17 1996 Sharp Kabushiki Kaisha Picture image forming apparatus
6055012, Dec 29 1995 THE CHASE MANHATTAN BANK, AS COLLATERAL AGENT Digital multi-view video compression with complexity and compatibility constraints
6075905, Jul 17 1996 Sarnoff Corporation Method and apparatus for mosaic image construction
6078701, Aug 01 1997 Sarnoff Corporation Method and apparatus for performing local to global multiframe alignment to construct mosaic images
6087984, May 04 1998 Trimble Navigation Limited GPS guidance system for use with circular cultivated agricultural fields
6125329, Jun 17 1998 MDA INFORMATION SYSTEMS LLC Method, system and programmed medium for massive geodetic block triangulation in satellite imaging
6130705, Jul 10 1998 GOODRICH CORPORATION Autonomous electro-optical framing camera system with constant ground resolution, unmanned airborne vehicle therefor, and methods of use
6173087, Nov 13 1996 Sarnoff Corporation Multi-view image registration with application to mosaicing and lens distortion correction
6204799, May 27 1980 Harris Corporation Three dimensional bistatic imaging radar processing for independent transmitter and receiver flightpaths
6209834, Apr 12 1999 Verimap Plus Inc. Optical imaging mount apparatus
6211906, Jun 09 1998 ROYAL BANK OF CANADA, AS ADMINISTRATIVE AGENT Computerized component variable interference filter imaging spectrometer system method and apparatus
6281970, Mar 12 1998 SYNERGISTIX, LLC Airborne IR fire surveillance system providing firespot geopositioning
6282301, Apr 08 1999 The United States of America as represented by the Secretary of the Army Ares method of sub-pixel target detection
6282362, Nov 07 1995 Trimble Navigation Limited Geographical position/image digital recording and display system
6323858, May 12 1999 IMMERSIVE LICENSING, INC System for digitally capturing and recording panoramic movies
6353409, May 04 1998 Trimble Navigation Limited GPS guidance system for use with circular cultivated agricultural fields
6356646, Feb 19 1999 Method for creating thematic maps using segmentation of ternary diagrams
6393163, Nov 14 1994 SRI International Mosaic based image processing system
6422508, Apr 05 2000 GALILEO GROUP, INC System for robotic control of imaging data having a steerable gimbal mounted spectral sensor and methods
6434280, Nov 10 1997 SODIMENSION CORP System and method for generating super-resolution-enhanced mosaic images
6456938, Jul 23 1999 LASALLE BUSINESS CREDIT, LLC Personal dGPS golf course cartographer, navigator and internet web site with map exchange and tutor
6473119, Apr 08 1997 Leica Geosystems AG Photogrammetic camera
6526352, Jul 19 2001 AMERICAN VEHICULAR SCIENCES LLC Method and arrangement for mapping a road
6542831, Apr 18 2001 Desert Research Institute Vehicle particulate sensor system
6553311, Dec 08 2000 Trimble Navigation Limited Navigational off- line and off-heading indication system and method
6570612, Sep 21 1998 Bank One, NA, as Administrative Agent System and method for color normalization of board images
6597818, May 09 1997 SRI International Method and apparatus for performing geo-spatial registration of imagery
6597991, Mar 28 2001 Agrosense Ltd. System and method for remote monitoring of water stress status of growing crops
6611289, Jan 15 1999 Digital cameras using multiple sensors with multiple lenses
6664529, Jul 19 2000 Utah State University 3D multispectral lidar
6694064, Nov 19 1999 CORVEQ LLC IMAGING Digital aerial image mosaic method and apparatus
6694094, Aug 31 2000 GOODRICH CORPORATION Dual band framing reconnaissance camera
6711475, Mar 16 2000 The Johns Hopkins University Light detection and ranging (LIDAR) mapping system
6747686, Oct 05 2001 GOODRICH CORPORATION High aspect stereoscopic mode camera and method
6766226, May 16 2002 THOMPSON, DOUGLAS B Method of monitoring utility lines with aircraft
6771208, Apr 24 2002 AUTOBRILLIANCE, LLC Multi-sensor system
6781707, Mar 22 2002 ORASEE CORP Multi-spectral display
6826358, Aug 31 2000 GOODRICH CORPORATION Dual band hyperspectral framing reconnaissance camera
6834163, Jul 14 2000 Intergraph Technologies Company Camera system having at least two first cameras and two second cameras
6839972, Jun 15 2001 Snap-On Incorporated Self-calibrating position determination system
6954310, Sep 25 2003 International Technology Center High resolution multi-lens imaging device
7006132, Feb 25 1998 California Institute of Technology Aperture coded camera for three dimensional imaging
7006709, Jun 15 2002 Microsoft Technology Licensing, LLC System and method deghosting mosaics using multiperspective plane sweep
7009638, May 04 2001 VEXCEL IMAGING US, INC Self-calibrating, digital, large format camera with single or multiple detector arrays and single or multiple optical systems
7019777, Apr 21 2000 ROYAL BANK OF CANADA, AS ADMINISTRATIVE AGENT Multispectral imaging system with spatial resolution enhancement
7127348, Sep 20 2002 VI TECHNOLOGIES, LLC Vehicle based data collection and processing system
7184072, Jun 15 2000 Power View Company, L.L.C. Airborne inventory and inspection system and apparatus
7212938, Sep 17 2002 VI TECHNOLOGIES, LLC Method of using a self-locking travel pattern to achieve calibration of remote sensors using conventionally collected data
7339614, May 04 2001 VEXCEL IMAGING US, INC Large format camera system with multiple coplanar focusing systems
7365774, Dec 13 2002 TELEDYNE DIGITAL IMAGING, INC Device with camera modules and flying apparatus provided with such a device
7424133, Nov 08 2002 PICTOMERTRY INTERNATIONAL CORPORATION Method and apparatus for capturing, geolocating and measuring oblique images
7437062, Nov 10 2005 INTERGRAPH GOVERNMENT SOLUTIONS CORPORATION; Intergraph Corporation Remote sensing system capable of coregistering data from sensors potentially having unique perspectives
7725258, Sep 20 2002 VI TECHNOLOGIES, LLC Vehicle based data collection and processing system and imaging sensor system and methods thereof
7787659, Nov 08 2002 PICTOMETRY INTERNATIONAL CORP Method and apparatus for capturing, geolocating and measuring oblique images
7995799, Nov 08 2002 PICTOMETRY INTERNATIONAL CORP Method and apparatus for capturing geolocating and measuring oblique images
8068643, Nov 08 2002 PICTOMETRY INTERNATIONAL CORP Method and apparatus for capturing, geolocating and measuring oblique images
8462209, Jun 26 2009 ROYAL BANK OF CANADA, AS ADMINISTRATIVE AGENT Dual-swath imaging system
20020060784,
20020085094,
20020101438,
20020163582,
20030048357,
20030081827,
20030138247,
20030169259,
20030198364,
20030210336,
20040041914,
20040054488,
20040257441,
20070046448,
20080278828,
20090295924,
20100235095,
20110091076,
20120020571,
CA2268611,
CA2534968,
CN101344391,
CN102506868,
CN103038761,
DE10341822,
DE19714396,
DE2811428,
EP494700,
EP1069547,
EP1178283,
EP1189021,
EP1231780,
GB2284273,
JP2005333336,
JP2006217131,
JP2007323615,
JP2008109477,
JP2009501350,
JP2010085719,
JP201085719,
JP7028400,
JP8030194,
JP8335298,
WO2006892,
WO2012830,
WO2065155,
WO206892,
WO212830,
WO199934346,
WO2004021692,
WO2004028134,
WO9918732,
WO9934346,
Executed on | Assignor | Assignee | Conveyance | Reel/Frame
Apr 10 2010 | SMITHERMAN, CHESTER L | M7 VISUAL INTELLIGENCE, L.P. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 050808/0063
Sep 02 2010 | M7 VISUAL INTELLIGENCE LP | Visual Intelligence LP | CHANGE OF NAME (SEE DOCUMENT FOR DETAILS) | 050810/0325
Oct 23 2019 | | VI TECHNOLOGIES, LLC | (assignment on the face of the patent) |
Nov 24 2020 | Visual Intelligence, LP | VI TECHNOLOGIES, LLC | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 054600/0663
Mar 12 2021 | Visual Intelligence, LP | VI TECHNOLOGIES, LLC | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 055646/0409