Image acquisition systems are described herein. One image acquisition system includes an image recording device configured to determine and record a tracking error associated with a raw image of a moving subject, and a computing device configured to deblur the raw image using the tracking error.
6. A method of operating an image acquisition system, comprising:
determining and recording a tracking error associated with a raw image of a moving subject, wherein determining the tracking error includes comparing an actual location of the moving subject at a particular point in time with a predicted location of the moving subject at the particular point in time;
determining a convolution kernel based, at least in part, on the tracking error; and
applying the convolution kernel to the raw image to produce a deblurred image.
1. An image acquisition system, comprising:
an image recording device configured to determine and record a tracking error associated with a raw image of a moving subject, wherein the raw image is expressed in x-y coordinates; and
a computing device configured to deblur the raw image using the tracking error by:
determining a convolution kernel based, at least in part, on the tracking error, wherein:
the convolution kernel is constructed from a residual motion trajectory approximation resulting from the tracking error; and
constructing the convolution kernel includes transforming the residual motion trajectory approximation into the x-y coordinates in which the raw image is expressed; and
applying the convolution kernel to the raw image.
16. An image acquisition system, comprising:
an image recording device configured to determine and record a tracking error associated with a raw image of a moving subject; and
a computing device configured to deblur the raw image using the tracking error by:
determining a convolution kernel based, at least in part, on the tracking error, wherein:
the convolution kernel is constructed from a residual motion trajectory approximation resulting from the tracking error;
the residual motion trajectory approximation includes segments of different lengths; and
constructing the convolution kernel includes assigning the segments of the residual motion trajectory approximation weights scaling the different lengths of the segments to a same sampling period; and
applying the convolution kernel to the raw image.
11. An image acquisition system, comprising:
an image recording device, wherein the image recording device includes:
a sensing element configured to capture a motion-stabilized image of a moving subject; and
a controller configured to:
determine and record a tracking error associated with the image;
predict a location of the subject at a particular point in time;
shift the sensing element such that projection of the moving subject at the particular point in time coincides with the predicted location on the sensing element; and
determine an actual location of the subject at the particular point in time after the image recording device captures the image of the moving subject at the predicted location at the particular point in time; and
a computing device configured to remove a blur from the image using the tracking error.
2. The image acquisition system of
the residual motion trajectory approximation includes segments of different lengths; and
constructing the convolution kernel includes assigning the segments of the residual motion trajectory approximation weights scaling the different lengths of the segments to a same sampling period.
3. The image acquisition system of
4. The image acquisition system of
vector summing the weighted segments of the residual motion trajectory approximation results in a matrix; and
constructing the convolution kernel includes normalizing the matrix so that all kernel entries of the matrix together add up to one.
5. The image acquisition system of
7. The method of
9. The method of
10. The method of
12. The image acquisition system of
13. The image acquisition system of
14. The image acquisition system of
15. The image acquisition system of
The subject matter of this disclosure was made with government support under Contract Number W911NF-10-C-0022 awarded by the Intelligence Advanced Research Projects Activity (IARPA). Accordingly, the U.S. Government has certain rights to subject matter disclosed herein.
The present disclosure relates to image acquisition systems.
Some previous approaches to acquiring high quality iris and/or face images of moving subjects include freezing the subject motion by using extremely short exposures. In such approaches, the subject may need to be brightly illuminated by a flash in order to obtain a well exposed image. However, such previous approaches may break down over large distances because the flash power used to obtain an acceptable image may become unsafe for the subject's eyes.
Producing a well exposed image without flash illumination, however, may require extending the image exposure, which can degrade the image quality. For example, extending the image exposure can introduce blur (e.g., motion blur for lateral motion in a plane perpendicular to the camera's optical axis) into the image unless the relative motion between the subject and the camera is reduced and/or eliminated.
Image acquisition systems using orthogonal transfer charge-coupled device (OTCCD) sensors can stabilize the projection of a moving subject onto the sensor by shifting the sensor's pixels' potential wells so as to counter the motion. This image projection stabilization can be performed by a control system that optically measures the actual location of the moving subject in the image and shifts the OTCCD array in real time to track the motion of the subject as closely as possible. However well such a control system may track the motion of the subject, in practice it cannot operate without a tracking error. The tracking error is manifested as motion blur and degrades the quality of the resulting OTCCD image.
Image acquisition systems are described herein. For example, one or more embodiments include an image recording device configured to determine and record a tracking error associated with a raw image of a moving subject, and a computing device configured to deblur the raw image using the tracking error.
Image acquisition systems in accordance with one or more embodiments of the present disclosure can effectively deblur (e.g., reduce or remove motion blur from) images of moving subjects. As used herein, a deblurred image is an image that has reduced blurring or no blurring as compared to the originally captured image.
For example, image acquisition systems that include orthogonal transfer charge-coupled device (OTCCD) sensors in accordance with one or more embodiments of the present disclosure can reduce or remove the blur from images of moving subjects caused by tracking error. Accordingly, image acquisition systems in accordance with one or more embodiments of the present disclosure can produce increased quality images as compared with previous image acquisition systems.
Further, image acquisition systems in accordance with one or more embodiments of the present disclosure can deblur images of moving subjects without the addition of any new components or elements to the image acquisition system. For example, image acquisition systems that include OTCCD sensors in accordance with one or more embodiments of the present disclosure can deblur images of moving subjects using the tracking error associated with the image, which is already determined by, present in, and/or readily available in the image acquisition system. Accordingly, image acquisition systems in accordance with one or more embodiments of the present disclosure can deblur images of moving subjects at no additional camera hardware cost.
In the following detailed description, reference is made to the accompanying drawings that form a part hereof. The drawings show by way of illustration how one or more embodiments of the disclosure may be practiced.
These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice one or more embodiments of this disclosure. It is to be understood that other embodiments may be utilized and that process, electrical, and/or structural changes may be made without departing from the scope of the present disclosure.
As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, combined, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. The proportion and the relative scale of the elements provided in the figures are intended to illustrate the embodiments of the present disclosure, and should not be taken in a limiting sense.
As used herein, “a” or “a number of” something can refer to one or more such things. For example, “a number of image frames” can refer to one or more image frames.
As shown in
As shown in
Second image recording device 113 (e.g., OTCCD sensing element 116) can record a target image of the subject. For example, as shown in
The subject can be located a distance of, for example, up to 50 feet away from OTCCD sensing element 116 when the target image of the subject is recorded. However, embodiments of the present disclosure are not limited to a particular distance between the subject and OTCCD sensing element 116.
As shown in
For example, controller 117 can predict a location of the subject at a particular point in time based on the lateral velocity vector estimates, and then shift the array of pixels 118 of the OTCCD sensing element 116 so that it arrives at the predicted location at the particular point in time. Controller 117 can also determine, based on the lateral velocity vector data, the actual location of the subject at the particular point in time after enough time has elapsed for the predicted time to become the present time.
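As an illustration only, a minimal Python sketch of one such predict-then-compare step is given below; the variable names, values, and the constant-velocity assumption are hypothetical and not part of the disclosed controller interface.

```python
import numpy as np

def predict_location(current_xy, lateral_velocity_xy, dt):
    """Predict the subject's x-y location dt seconds from now,
    assuming approximately constant lateral velocity over dt."""
    return np.asarray(current_xy) + np.asarray(lateral_velocity_xy) * dt

# Hypothetical values for a single control step.
current_xy = (120.0, 64.0)    # subject location on the sensor, in pixels
velocity_xy = (30.0, -12.0)   # estimated lateral velocity, in pixels per second
dt = 0.01                     # time until the next helper-camera frame, in seconds

predicted_xy = predict_location(current_xy, velocity_xy, dt)
# The controller would shift the pixel array so the subject's projection
# lands on predicted_xy at that instant.

# When the frame for that instant arrives, the actually observed location
# is compared with the prediction; the difference is the tracking error
# that gets recorded for later deblurring.
actual_xy = np.array([120.4, 63.9])   # hypothetical measurement
tracking_error = actual_xy - predicted_xy
print(tracking_error)
```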
In some embodiments, first image recording device 112 can be a camera. However, embodiments of the present disclosure are not so limited, and first image recording device 112 can be any device capable of sending a number (e.g., a series) of images of the subject to lateral velocity vector estimator 114. For example, in some embodiments, first image recording device 112 can be a scanning laser range finder measuring the velocity vector of the subject relative to image acquisition system 100 in real time (e.g., as the subject moves).
Lateral velocity vector estimator 114 can be any device capable of receiving a number of images of the subject from first image recording device 112. For example, lateral velocity vector estimator 114 can use data from a scanning laser range finder, or depth images from a camera whose sensor has pixels capable of real-time time-of-flight ranging.
Image acquisition system 100 can include a closed loop control such that estimates are applied immediately after they are computed (e.g., in real time) while the OTCCD image is still being exposed. The rate of updates may be dictated by the frame rate of first image recording device 112, and may, for example, number tens to thousands per exposure. Further, the number of updates may be the same as the number of images in the series of images.
Using OTCCD sensor element 116 as the image stabilizing element can improve the performance of image acquisition system 100 because, unlike previous image stabilization concepts and/or systems, OTCCD sensor element 116 involves no movable mechanical parts (e.g., lenses, mirrors, or sensor chips). Rather, OTCCD sensor element 116 moves the potential wells that correspond to pixel array 118 and accumulate the photoelectrons.
Since the wells do not have any inertia, the wells can be moved extremely fast by manipulating, in real time, the voltages that define the operation of OTCCD sensor element 116. With no movable mechanical parts, OTCCD sensor element 116 can offer an extremely rugged solution that is well suited for security and/or military applications, among other applications.
Further, OTCCD sensor element 116 can offer an appealing alternative to mechanically delicate tip-tilt mirrors and/or movable stabilizing lenses. For example, OTCCD sensor element 116 can stabilize the image not by mechanically reconfiguring the optics and/or moving the sensing chip, but rather by electronically changing the location of pixel array 118.
Since image acquisition system 100 employs an image stabilization concept, image acquisition system 100 needs to estimate the relative velocity between the subject and image acquisition system 100 in order to properly drive OTCCD sensor element 116. The velocity vector can be estimated before or during the image exposure. Estimating the velocity vector before the image exposure can be done in simple scenarios (e.g., scenarios involving only the physical motion of the subject). When the velocity vector is instead estimated during the image exposure, the series of images provided to lateral velocity vector estimator 114 during the exposure can be continuously evaluated as its frames arrive to determine the velocity vector. The velocity vector updates can then be used to drive, in real time, the potential well movements in OTCCD sensor element 116.
In some embodiments, controller 117 can issue updates to OTCCD sensor element 116 at rates on the order of 100 updates per second, for example. The update can include, for example, calls for shifting the pixels of pixel array 118.
In some embodiments, OTCCD sensor element 116 can execute array shifts at rates on the order of 100,000 updates per second, for example. Further, OTCCD sensor element 116 may be able to execute single step shifts (e.g., one pixel left, right, or none and/or one pixel up, down, or none).
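As a rough illustration of how a lower-rate controller command might be realized by the much faster single-step shifts, the sketch below spreads a desired shift over a sequence of one-pixel moves. This is an assumed scheme shown only for illustration, not the device's actual drive logic.

```python
def single_step_shifts(shift_x, shift_y, n_steps):
    """Spread a desired (x, y) pixel shift over n_steps single-step shifts,
    each limited to -1, 0, or +1 pixel per axis."""
    steps = []
    done_x, done_y = 0, 0
    for k in range(1, n_steps + 1):
        # Ideal cumulative shift after k of n_steps steps.
        target_x = round(shift_x * k / n_steps)
        target_y = round(shift_y * k / n_steps)
        dx = max(-1, min(1, target_x - done_x))
        dy = max(-1, min(1, target_y - done_y))
        done_x += dx
        done_y += dy
        steps.append((dx, dy))
    return steps

# Hypothetical example: one controller update asking for a (3, -2) pixel
# shift, executed as 10 single-step shifts by the sensor.
print(single_step_shifts(3, -2, 10))
```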
In the embodiment illustrated in
For example, in some embodiments, first image recording device 112 and second image recording device 113 can at least partially record their respective images along the same optical path from the subject. In such embodiments, the light can travel along a common optical path through a common lens. The light can then be split (e.g., using a splitter), with some light traveling along optical path 121 to first image recording device 112 and some light traveling along optical path 122 to second image recording device 113.
Depending on the hardware used in image acquisition system 100, first image recording device 112 and second image recording device 113 may record light at different, the same, or overlapping wavelengths. For example, in some embodiments, light having wavelengths in the visible range of the spectrum can be directed to first image recording device 112, and light having wavelengths in the near infrared range of the spectrum can be directed to second image recording device 113.
As shown in
Memory 230 can store executable instructions, such as, for example, computer readable instructions (e.g., software), that can be executed by processor 232 to perform various embodiments of the present disclosure. For example, memory 230 can be a non-transitory computer readable medium having computer readable instructions (e.g., computer program instructions) stored thereon that are executable by processor 232 to perform various embodiments of the present disclosure.
Memory 230 can be volatile or nonvolatile memory. Memory 230 can also be removable (e.g., portable) memory, or non-removable (e.g., internal) memory. For example, memory 230 can be random access memory (RAM) (e.g., dynamic random access memory (DRAM) and/or phase change random access memory (PCRAM)), read-only memory (ROM) (e.g., electrically erasable programmable read-only memory (EEPROM) and/or compact-disc read-only memory (CD-ROM)), flash memory, a laser disc, a digital versatile disc (DVD) or other optical disk storage, and/or a magnetic medium such as magnetic cassettes, tapes, or disks, among other types of memory.
In the embodiment illustrated in
As shown in
The blur in image 234 can be, for example, a residual motion blur that results from the uncompensated motion of the subject relative to the OTCCD sensor array (e.g., from an inability of OTCCD sensing element 116 to perfectly track the motion of the subject). That is, tracking errors 238 associated with image 234 can correspond to (e.g., be manifested as and/or cause) the motion blur in image 234.
Tracking error record 238 associated with the raw OTCCD image 234 can be, for example, a list of pairs of the predicted and actually observed locations of a subject at different instants during the raw OTCCD image exposure. Those instants are the times at which first image recording device 112 previously described in connection with
As an example, tracking error 238 at a particular time can be the difference (e.g., discrepancy) between a location where the subject is predicted by the controller to be found in a particular future frame and a location where the subject was eventually actually found when the frame became available. That is, tracking error 238 can be determined by comparing the actual location of the subject at the particular point in time with the predicted location of the subject at the particular point in time. The predicted location of the subject at the particular point in time can be determined, for example, by controller 217 based on lateral velocity vector estimates, as previously described in connection with
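A minimal sketch, assuming a simple list-of-dictionaries layout and made-up values, of what such a record of predicted and actually observed locations might look like and how the per-frame error vectors could be derived from it:

```python
import numpy as np

# Hypothetical tracking error record: one pair of predicted and actually
# observed x-y locations per helper-camera frame captured during the
# raw image exposure.
tracking_error_record = [
    {"t": 0.00, "predicted": (120.0, 64.0), "actual": (120.4, 63.9)},
    {"t": 0.01, "predicted": (121.2, 63.5), "actual": (121.1, 63.7)},
    {"t": 0.02, "predicted": (122.4, 63.0), "actual": (122.9, 62.6)},
]

# The tracking error at each instant is the discrepancy between where the
# subject actually was and where it was predicted to be.
error_vectors = np.array(
    [np.subtract(entry["actual"], entry["predicted"]) for entry in tracking_error_record]
)
print(error_vectors)
```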
For instance, for circular motion, tracking error 238 can be understood by breaking each error vector into its amplitude and phase error components, as shown in plot 360 illustrated in
As shown in
Plot 580 illustrates both the amplitude error component and the phase error component of the tracking error. The amplitude error is measured along the y-axis of plot 580, and the phase error is measured along the x-axis of plot 580. That is, the magnitude (e.g., length) of the y-component of each vector measured along the y-axis (e.g., in the vertical direction) represents the magnitude (e.g., amount) of the amplitude error associated with the different image frames, and the magnitude of the x-component of each vector measured along the x-axis (e.g., in the horizontal direction) represents the magnitude of the phase error associated with the different image frames.
As an example, consider the tracking error associated with the second, fifth, eleventh, and fifteenth image frames (e.g., the vectors originating from the second, fifth, eleventh, and fifteenth dots from the left on the x-axis). The tracking error associated with these image frames is relatively smaller than the tracking error associated with the other image frames, as represented by the shorter length of these vectors. Further, as represented by their respective vectors, the magnitudes of the amplitude and phase errors associated with the second image frame are approximately equal (e.g., the amplitude and phase error components of its tracking error are approximately equal); the magnitudes of the amplitude errors associated with the fifth and eleventh image frames are relatively smaller than the magnitudes of the corresponding phase errors (e.g., phase error constitutes the majority of the tracking error associated with those frames); and the magnitude of the amplitude error associated with the fifteenth image frame is relatively larger than the magnitude of its phase error (e.g., amplitude error constitutes the majority of the tracking error associated with that frame).
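One way to carry out such a decomposition for circular motion is to project each x-y error vector onto the radial and tangential directions at the predicted location, treating the radial component as the amplitude error and the tangential component as the phase error. The sketch below is illustrative only and makes that assumption explicit; the function and values are hypothetical.

```python
import numpy as np

def amplitude_phase_components(error_xy, predicted_xy, center_xy):
    """Split an x-y tracking error vector into an amplitude (radial) and a
    phase (tangential) component for circular motion about center_xy.
    Assumes the predicted location is not at the center of the circle."""
    radial = np.subtract(predicted_xy, center_xy)
    radial = radial / np.linalg.norm(radial)        # unit radial direction
    tangential = np.array([-radial[1], radial[0]])  # unit tangential direction
    amplitude_error = np.dot(error_xy, radial)
    phase_error = np.dot(error_xy, tangential)
    return amplitude_error, phase_error

# Hypothetical example: subject moving on a circle centered at (100, 100).
amp, phase = amplitude_phase_components(
    error_xy=(0.4, -0.1), predicted_xy=(130.0, 100.0), center_xy=(100.0, 100.0)
)
print(amp, phase)
```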
In the example illustrated in
As shown in
Convolution kernel 242 can be, for example, a square matrix that is 3 pixels×3 pixels, 5×5, or 7×7, among other arrangements. However, embodiments of the present disclosure are not limited to a particular shape, size, or arrangement for convolution kernel 242. Its size can be determined, for instance, by data present in tracking error record 238. Further, convolution kernel 242 can be any shape or arrangement, such as, for instance, a square, a rectangle, or a more complicated shape or arrangement, as may be desired.
Convolution kernel 242 can be applied to raw OTCCD image 234, whose row and column directions can define the x-y coordinates; because convolution kernel 242 is applied to that image, the kernel must be expressed in the same x-y coordinates (e.g., not in the amplitude-phase coordinates). Convolution kernel 242 can be constructed from the motion trajectory shown in
In some embodiments, connecting the error vector end points by straight line segments as in
The size (e.g., dimensions) of convolution kernel 242 can be based, at least in part, on the size (e.g., the magnitude and/or amount) of tracking error 238. For example, the greater the tracking error 238 associated with image 234, the greater the size of the convolution kernel 242.
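The sketch below shows one plausible way to rasterize such a residual motion trajectory (the error vector end points connected by straight line segments) into a convolution kernel: each segment is sampled densely and given weights so that segments of different lengths contribute as if they spanned the same sampling period, and the resulting matrix is normalized so that all of its entries add up to one. The sampling density, kernel size, and helper function are assumptions for illustration, not the disclosed implementation.

```python
import numpy as np

def kernel_from_trajectory(points, size):
    """Rasterize a residual motion trajectory (a polyline of x-y error end
    points, in pixels, roughly centered on the origin) into a size x size
    convolution kernel."""
    assert size % 2 == 1, "use an odd kernel size so it has a center pixel"
    kernel = np.zeros((size, size))
    half = size // 2
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        # Sample the segment densely; every sample carries a weight such that
        # the whole segment contributes one sampling period of exposure
        # regardless of its length.
        n_samples = max(int(np.ceil(np.hypot(x1 - x0, y1 - y0) * 10)), 1)
        for t in np.linspace(0.0, 1.0, n_samples, endpoint=False):
            col = int(round(x0 + t * (x1 - x0))) + half
            row = int(round(y0 + t * (y1 - y0))) + half
            if 0 <= row < size and 0 <= col < size:
                kernel[row, col] += 1.0 / n_samples
    # Normalize so all kernel entries together add up to one, preserving the
    # overall image brightness when the kernel is applied.
    return kernel / kernel.sum()

# Hypothetical residual trajectory (error end points, in pixels).
trajectory = [(0.0, 0.0), (1.2, 0.3), (1.8, -0.5), (0.4, -0.2), (0.0, 0.0)]
print(kernel_from_trajectory(trajectory, size=5))
```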
As shown in
Reconstructing an image in accordance with one or more embodiments of the present disclosure should not be viewed as an inexpensive substitute for an expensive precision tracking control system. For example, a good reconstruction may not be achieved unless there are at least short intervals during the exposure when the tracking errors drop to zero, or at least close to zero, in both magnitude and phase (e.g., as the error vector lengths approach zero). That is, the tracking control may not be perfect at all times, but may be perfect or close to perfect for at least short parts of the overall exposure. The longer those parts, the better the reconstruction.
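For completeness, a minimal sketch of one way the kernel could then be used to deblur the raw image, here via a regularized (Wiener-style) inverse filter in the frequency domain. The disclosure does not prescribe a particular deconvolution algorithm, so the approach, the regularization constant, and the image variable name are assumptions for illustration.

```python
import numpy as np

def deblur(raw_image, kernel, eps=1e-2):
    """Deblur an image with a regularized inverse filter: divide the image
    spectrum by the kernel's frequency response where the response is
    strong, and suppress it (via eps) where it is weak."""
    # Embed the small kernel in an image-sized array and roll its center to
    # (0, 0) so its phase matches the convolution that caused the blur.
    padded = np.zeros(raw_image.shape, dtype=float)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    H = np.fft.fft2(padded)                      # kernel frequency response
    G = np.fft.fft2(raw_image.astype(float))     # blurred image spectrum
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)  # regularized inversion
    return np.real(np.fft.ifft2(F))

# Hypothetical usage, assuming raw_otccd_image is a 2-D numpy array holding
# the raw OTCCD image and kernel was built as in the sketch above:
# deblurred_image = deblur(raw_otccd_image, kernel)
```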
Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that any arrangement calculated to achieve the same techniques can be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments of the disclosure. In particular, circular motion of the subject was selected only for the sake of explanation. Embodiments of the present disclosure are not limited to any particular motion.
It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combination of the above embodiments, and other embodiments not specifically described herein will be apparent to those of skill in the art upon reviewing the above description.
The scope of the various embodiments of the disclosure includes any other applications in which the above structures and methods are used. Therefore, the scope of various embodiments of the disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.
In the foregoing Detailed Description, various features are grouped together in example embodiments illustrated in the figures for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the embodiments of the disclosure require more features than are expressly recited in each claim.
Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.