Methods, systems and apparatuses are disclosed for approximating vertical and horizontal correction values in a pixel value correction calculation. Multiple vertical and/or horizontal correction curves are used and may be employed for one or more color channels of the imager. The use of multiple correction curves allows for a more accurate approximation of the desired correction values for image pixels.
19. An imaging system comprising:
at least one stored first correction curve associated with columns of pixels of an image pixel array, the number of stored first correction curves being less than the number of columns of the pixel array;
at least one stored second correction curve associated with rows of pixels of the image pixel array, the number of stored second correction curves being less than the number of rows of the pixel array, wherein at least one of the columns and rows has a plurality of associated correction curves; and
an image processor for determining a location of a pixel in the image pixel array and applying correction values thereto based on one or more of the first stored correction curves and one or more of the second stored correction curves.
10. A method of adjusting a value for a pixel of an image, comprising:
inputting an image pixel signal value associated with a pixel to be adjusted;
determining a location in a pixel array of the pixel to be adjusted; and
adjusting the image pixel signal value, wherein the adjusting further comprises:
determining a correction value for the pixel to be adjusted, the correction value being based on the location in the pixel array of the pixel to be adjusted, and at least one of a vertical correction value portion and a horizontal correction value portion; and
applying the correction value to the pixel signal value,
wherein the vertical correction value portion for the pixel to be adjusted is determined from at least one of a plurality of vertical correction value curves, the number of vertical correction value curves being less than the number of columns in the pixel array, each of the plurality of vertical correction value curves representing vertical correction value portions for all pixels in a particular column of the pixel array with which the respective vertical correction value curve is associated, and
wherein the horizontal correction value portion for the pixel to be adjusted is determined from at least one of a plurality of horizontal correction value curves, the number of horizontal correction value curves being less than the number of rows in the pixel array, each of the plurality of horizontal correction value curves representing horizontal correction value portions for all pixels in a particular row of the pixel array with which the respective horizontal correction value curve is associated.
1. An imaging device comprising:
a pixel array comprising a plurality of pixels arranged in columns and rows, the pixel array outputting a plurality of pixel signals, each pixel signal corresponding to a particular pixel of the pixel array;
an adjustment circuit coupled to the pixel array and configured to correct at least one of the plurality of pixel signals by:
determining a correction value for a pixel signal to be adjusted, the correction value being based on a respective row and a respective column of the pixel corresponding to the pixel signal to be adjusted, and at least one of a vertical correction value portion and a horizontal correction value portion; and
applying the correction value to the pixel signal,
wherein the vertical correction value portion for the pixel signal to be adjusted is determined from at least one of a plurality of vertical correction value curves, the number of vertical correction value curves being less than the number of columns in the pixel array, each of the plurality of vertical correction value curves representing vertical correction value portions for all pixels in a particular column of the pixel array with which the respective vertical correction value curve is associated, and
wherein the horizontal correction value portion for the pixel signal to be adjusted is determined from at least one of a plurality of horizontal correction value curves, the number of horizontal correction value curves being less than the number of rows in the pixel array, each of the plurality of horizontal correction value curves representing horizontal correction value portions for all pixels in a particular row of the pixel array with which the respective horizontal correction value curve is associated.
2. The imaging device of
3. The imaging device of
4. The imaging device of
5. The imaging device of
6. The imaging device of
7. The imaging device of
8. The imaging device of
9. The imaging device of
11. The method of
12. The method of
13. The method of
14. The method of
15. The method of
16. The method of
17. The method of
18. The method of
20. The imaging system as in
21. The imaging system as in
22. The imaging system as in
23. The imaging system as in
24. The imaging system as in
25. The imaging system as in
26. The imaging system as in
This application is a continuation of U.S. patent application Ser. No. 11/889,214, filed on Aug. 9, 2007, now U.S. Patent No. 8,463,068, the subject matter of which is incorporated in its entirety by reference herein.
Embodiments relate generally to generation of pixel value correction surfaces accounting for pixel value variations caused by parameters causing spatial variation.
Imagers, for example CCD, CMOS and others, are widely used in imaging applications, for example, in digital still and video cameras.
It is well known that, for a given optical lens used with a digital still or video camera, the pixels of the pixel array will generally have varying signal values even if the imaged scene is uniform. The varying responsiveness often depends on a pixel's spatial location within the pixel array. One source of such variation is lens shading. Roughly, lens shading causes pixels located farther away from the center of the pixel array to have a lower value than pixels located closer to the center, when the camera is exposed to the same level of light stimulus. Other sources may also contribute to variations in pixel value with spatial location. These variations can be compensated for by adjusting, for example, the gain applied to the pixel values based on spatial location in a pixel array. For lens shading correction, for example, the farther away a pixel is from the center of the pixel array, the more gain may need to be applied to the pixel value. Different color channels of an imager may be affected differently by various sources of shading. In addition, sometimes an optical lens is not centered with respect to the optical center of the image sensor; the effect is that lens shading may not be centered at the center of the imager pixel array. Each color channel may also have a different center, i.e., the pixel with the highest response.
Variations in the shape and orientation of photosensors used in the pixels may contribute to a non-uniform spatial response across the pixel array. Spatial non-uniformity may also be caused by optical crosstalk or other interactions among the pixels in a pixel array. Further, changes in the optical state of a given lens, such as changes in iris opening or focus position, may affect the spatial pattern of non-uniformity across the pixel array. Different lenses and/or cameras will generally produce different patterns of non-uniform spatial response from a given pixel array.
Variations in a pixel signal caused by the spatial position of a pixel in a pixel array can be measured and the pixel response value can be corrected with a pixel value gain adjustment. Lens shading, for example, can be corrected using a set of positional gain adjustment values, which adjust pixel values in post-image capture processing. With reference to positional gain adjustment to correct for shading variations with a fixed optical state configuration, gain adjustments across the pixel array can typically be provided as pixel signal correction values, one corresponding to each of the pixels. For color sensors, the set of pixel correction values for the entire pixel array forms a gain adjustment surface for each of a plurality of color channels. The gain adjustment surface is then applied to the pixels of the corresponding color channel during post-image capture processing to correct for variations in pixel value due to the spatial location of the pixels in the pixel array. For monochrome sensors, a single gain adjustment surface is applied to all the pixels of the pixel array.
“Positional gain adjustments” across the pixel array are provided as correction values, one correction value corresponding to each of the pixels and applied to the pixel values during post-image capture processing.
One method of determining spatial pixel correction values is to approximate the desired vertical and horizontal correction values for each pixel in a pixel array using one vertical and one horizontal correction value curve determined for the pixel array. A vertical correction value curve (Fy(y)) provides a correction value for each row. A horizontal correction value curve (Fx(x)) provides a correction value for each column.
For each color channel, the correction value may be represented in a manner similar to that in co-pending U.S. patent application Ser. No. 10/915,454, entitled CORRECTION OF NON-UNIFORM SENSITIVITY IN AN IMAGE ARRAY, filed on Aug. 11, 2004 ("the '454 application"), and Ser. No. 11/514,307, entitled POSITIONAL GAIN ADJUSTMENT AND SURFACE GENERATION FOR IMAGE PROCESSING, filed on Sep. 1, 2006 ("the '307 application"), which are incorporated herein in their entirety by reference. The correction value may be determined as shown in Equation (1):
Fc(x, y) = Fxc(x) + Fyc(y) + kc*Fxc(x)*Fyc(y) + 1    (1)
where c is the channel, where Fxc(x) is a horizontal correction value curve for that channel, where Fyc(y) is a vertical correction value curve for that channel, and where kc*Fxc(x)*Fyc(y) (the “cross term”) is used to increase/decrease the lens correction values in the pixel array corners. In the '454 application, the value of the vertical correction value curve at a given row is the vertical correction value for that row and is the same for each pixel in the row. Likewise, the value of the horizontal correction value curve at a given column is the horizontal correction value for that column and is the same for each pixel in the column. The total correction is a function of the vertical and horizontal correction, as shown in Equation (1).
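As a concrete illustration, the following is a minimal sketch of evaluating Equation (1) for one channel. The Python function name and the sample quadratic curves are illustrative assumptions, not taken from the '454 or '307 applications.

```python
# A minimal sketch of Equation (1) for one color channel; names and
# sample curves are illustrative assumptions.
def correction_value(fx_c, fy_c, k_c, x, y):
    """Evaluate Fc(x, y) = Fxc(x) + Fyc(y) + kc*Fxc(x)*Fyc(y) + 1."""
    fx = fx_c(x)  # horizontal correction value portion at column x
    fy = fy_c(y)  # vertical correction value portion at row y
    return fx + fy + k_c * fx * fy + 1.0

# Curves calibrated so the channel center (0, 0) needs no gain: Fc(0, 0) = 1.
fx_c = lambda x: 2e-6 * x * x
fy_c = lambda y: 3e-6 * y * y
print(correction_value(fx_c, fy_c, k_c=0.5, x=-400, y=250))
```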
One problem with using one vertical and one horizontal correction value curve as above to approximate the desired vertical and horizontal correction values for an entire pixel array is that, typically, the farther a particular pixel is located from the column/row on which the vertical/horizontal correction value curve is centered, the less closely the value calculated in accordance with Equation (1) models the desired column/row correction values for that particular pixel.
In the '454 application, typically, the single vertical correction value curve for a pixel array corresponds to the correction value curve for a reference center column of the pixel array and, also typically, the single horizontal correction value curve corresponds to the correction value curve for a reference center row. That is, for a given channel, Fxc(x) and Fyc(y) are usually calibrated such that the gain required at the brightest pixel (generally, the center pixel) is equal to one. Thus, assuming the center of color channel c is located at (0, 0), then Fxc(0) = Fyc(0) = 0. Generally, then, at the center row the correction value is calculated as Fc = Fxc(x) + Fyc(y) + 1 = Fxc(x) + 1, and at the center column the correction value is calculated as Fc = Fxc(x) + Fyc(y) + 1 = Fyc(y) + 1. (The cross term does not appear because, along the center row and along the center column, the cross term (kc*Fxc(x)*Fyc(y)) is also equal to zero.) It follows that Fc(0, 0) = 1. (Note that multiplying by a correction factor of one has no effect on the pixel value.) This means that Fxc(x) and Fyc(y) are typically calibrated such that Fc(x, y) provides vertical correction values for pixels along the centermost column of the pixel array and horizontal correction values for pixels along the centermost row with sufficient accuracy. However, a vertical correction value curve that would correct, for example, a column on the right side of the pixel array with sufficient accuracy could be much different from the vertical correction value curve desired for correcting the center reference column. Accordingly, a correction surface based on a single vertical correction value curve may not correct the entire pixel array to a desired accuracy, because the vertical correction values determined from the single correction value curve, even if adjusted by the cross term and Fxc(x), will not always yield a correction value that closely maps to desired values on columns positioned relatively far from the center column, for example, on either side of the pixel array. Analogously, there may be no choice of values for the single horizontal correction value curve provided in the '454 application that will correct the entire pixel array to a desired accuracy.
Accordingly, methods, systems and apparatuses for more accurately representing and calculating desired correction values for the entire pixel array for use in pixel value correction are desired.
In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration specific disclosed embodiments. These disclosed embodiments are described in sufficient detail to enable those skilled in the art to make and use them, and it is to be understood that structural, logical or procedural changes may be made. Particularly, in the description below, processing is described by way of flowchart. In some instances, steps which follow other steps may be in reverse or in a different sequence except where a following procedural step requires the presence of a prior procedural step. The processes illustrated in the flowcharts can be implemented using hardware components including an ASIC, a processor executing a program, or other signal and/or image processing hardware and/or processor structures or any combination thereof. For example, these processes may be implemented as a pixel processing circuit which may be provided in an image processor of a solid state imager device, but are not limited to such an implementation.
For purposes of simplifying description, the disclosed embodiments are described in connection with performing positional gain adjustment on the pixels of a captured image affected by lens shading. However, the disclosed embodiments may also be used for any other spatial pixel value corrections, or more generally, corrections required due to interaction among imperfections or practical limitations or variations in the designs and layout of imaging device components, etc. For example, a given lens or filter may contribute to the pixel circuits' having varying degrees of sensitivity depending upon their geographic locations in the array. Further, variations in the shape and orientation of photosensors and other elements of the pixel circuits may also contribute to non-uniformity of pixel sensitivity across the imager.
Positional gain adjustment (PGA) refers to the process of compensating for non-uniform pixel values depending on pixel positions in an array, when the camera is exposed to a scene having uniform irradiance, in each channel, throughout the scene. As stated, for purposes of description, disclosed embodiments are described with reference to positional gain adjustment, used as a non-limiting example. Finally, the disclosed embodiments are generally described with reference to a single color channel; however, it should be understood that each color channel may be separately corrected with a separate correction function, Fc(x, y), or the embodiments may be applied to a monochrome imager.
Correction values are determined by correction functions, which compute a corrected pixel value, P(x, y), based upon the pixel's location in the pixel array. In the following equations, "x" represents a column number and "y" represents a row number. A corrected pixel value, P(x, y), is equal to a readout pixel value output from the sensor array, PIN(x, y), multiplied by the correction function, F(x, y), as shown in Equation (2) below:
P(x, y) = PIN(x, y) * F(x, y)    (2)
The corrected pixel value P(x, y) represents a value of the pixel corrected for the pixel's location in the pixel array. In the context of positional gain adjustment, the correction values are gain adjustments applied to each pixel value. The readout pixel value PIN(x, y) is the value for the pixel that is acquired by the pixel array, where "x" and "y" define the location of the pixel in the pixel array, x being the column location of the pixel and y being the row location of the pixel. Thus, F(x, y) is a correction surface for the entire pixel array. It should be noted that a corrected pixel value is not limited to the readout pixel value multiplied by the value of a correction function; it may also be a function of more than one correction value and/or of the pixel values of neighboring pixels, in addition to the pixel's own value. Each correction value is determined from a correction surface, in accordance with disclosed embodiments.
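A minimal sketch of Equation (2) follows, assuming the correction surface has already been computed and stored as an array of per-pixel gains; the array shapes and values are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of Equation (2): the corrected value is the readout
# value multiplied by the correction surface, elementwise.
p_in = np.full((480, 640), 100.0)   # readout pixel values PIN(x, y)
f = np.ones((480, 640))             # correction surface F(x, y), one gain per pixel
f[:, :64] *= 1.2                    # e.g., extra gain toward the left edge
p = p_in * f                        # Equation (2), applied per pixel
```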
One possible representation of a correction function for a particular pixel is described in the '454 application and the '307 application and shown in Equation (3):
F(x, y) = Fx(x) + Fy(y) + k*Fx(x)*Fy(y) + G    (3)
where Fx(x) represents a piecewise-quadratic correction function in the x-direction, Fy(y) represents a piecewise-quadratic correction function in the y-direction, k*Fx(x)*Fy(y) is used to increase or decrease the lens correction values toward the array corners, and G represents a "global" constant offset (increase or decrease) applied to every pixel in the pixel array, regardless of pixel location. The expression k*Fx(x)*Fy(y) is referred to as a "cross-term," while k is sometimes referred to as the "corner factor." The value of G is typically +1. It should be noted that disclosed embodiments are not limited to the correction function in Equation (3); alternative suitable correction functions may be used. It should further be noted that while the value of G is typically +1, especially for positional gain adjustment applications, the value of G is not limited as such and may be the same or may vary among different color channels.
The Fx(x) and Fy(y) functions may also be referred to as horizontal and vertical correction value curves, respectively. As above, horizontal and vertical correction value curves are used to determine pixel correction values F(x, y) for each pixel in a pixel array. For simplicity, descriptions herein are in terms of a monochrome image, for which the same correction applies to each color channel; however, disclosed embodiments may also apply to each channel of multiple color channel applications. Correction values may be calculated from the Fx(x) and Fy(y) functions by evaluating the function F(x, y) at a particular (x, y) location. The horizontal curve (Fx(x)) is a function in the x (column number or location) direction, which determines a horizontal correction value for each pixel based on the x value (column location) of the pixel. The vertical curve (Fy(y)) is a function in the y (row number or location) direction, which determines a vertical correction value for each pixel based on the y value (row location) of the pixel. As noted, the sum of a single vertical and a single horizontal correction value curve, plus the cross term, plus G (or 1), has been used to approximate desired correction values for all pixels in a pixel array.
Disclosed embodiments provide methods of storing and evaluating correction surfaces that yield improved correction values, when compared to correction values based on single stored vertical and horizontal correction value curves, in post-image capture processing (or during scan-out of the array, pixel by pixel). This improvement is made possible by providing multiple stored vertical and/or horizontal correction value curves. The horizontal and vertical correction value curves are pre-stored and may be represented in memory as the actual values of the multiple Fx(x) and/or Fy(y) functions or as parameters which can be used to generate the functions as needed during image processing. The stored vertical and/or horizontal correction value curves may be used to interpolate/extrapolate additional curves from which correction values for pixel values may be determined. It should also be noted that although the correction value curves Fx(x) and Fy(y) are described throughout as piecewise-quadratic functions, they may take the form of any other suitable function, such as piecewise-cubic, polynomial, etc., or any combination thereof. Also, it may not be necessary to include the cross term, depending on the surface desired to be generated.
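One possible storage layout for a piecewise-quadratic curve, sketched below, is a sorted list of pieces, each holding a start position and quadratic coefficients; the curve is then evaluated and combined per Equation (3). The layout, coefficient values, and constants are illustrative assumptions, not mandated by the text.

```python
# A sketch of one assumed parameter layout: (start, a, b, c) pieces,
# sorted by start, each contributing a*(t-start)**2 + b*(t-start) + c.
def eval_piecewise_quadratic(pieces, t):
    start, a, b, c = pieces[0]
    for piece in pieces:
        if piece[0] > t:          # pieces are sorted; keep the last piece
            break                 # whose start position does not exceed t
        start, a, b, c = piece
    dt = t - start
    return a * dt * dt + b * dt + c

# Illustrative curves and constants; k is the corner factor, G the offset.
fx_pieces = [(0, 1e-6, -8e-4, 0.20), (400, 1e-6, 0.0, 0.0)]
fy_pieces = [(0, 2e-6, -1e-3, 0.15), (300, 2e-6, 0.0, 0.0)]
k, G = 0.5, 1.0
fx = eval_piecewise_quadratic(fx_pieces, 650)
fy = eval_piecewise_quadratic(fy_pieces, 120)
F = fx + fy + k * fx * fy + G     # Equation (3)
```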
Corrections for pixel spatial location in a pixel array may also be separately performed for each color channel of the pixel array. For example, for an RGB array, the red, green and blue pixels form red, green and blue color channels, respectively. Each of these color channels may be adjusted separately. Thus, for each channel of an RGB array, different stored vertical and horizontal correction value curves may be used to determine appropriate correction values for the pixels of the channel. A separate correction value may be calculated for each channel, based on its own Fx(x) and Fy(y) as represented by stored parameters. Channels could also represent signals outside of the visible spectrum. As in the '307 application, two green channels may be separately corrected. Also, channels other than the typical RGB channels may be used.
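As an illustration of per-channel correction, the sketch below applies a separate correction surface to each plane of a Bayer mosaic. The GRBG layout, the channel offsets, and the gain surfaces are hypothetical assumptions chosen only to show the subsampling pattern.

```python
import numpy as np

# Hypothetical per-channel correction on an assumed GRBG Bayer mosaic.
H, W = 480, 640
raw = np.full((H, W), 100.0)
surfaces = {                                  # Fc(x, y), one per channel
    "Gr": lambda x, y: 1.0 + 1e-6 * (x**2 + y**2),
    "R":  lambda x, y: 1.0 + 2e-6 * (x**2 + y**2),
    "B":  lambda x, y: 1.0 + 3e-6 * (x**2 + y**2),
    "Gb": lambda x, y: 1.0 + 1e-6 * (x**2 + y**2),
}
offsets = {"Gr": (0, 0), "R": (1, 0), "B": (0, 1), "Gb": (1, 1)}  # (dx, dy)

corrected = raw.copy()
for ch, (dx, dy) in offsets.items():
    ys, xs = np.mgrid[dy:H:2, dx:W:2]         # coordinates of this channel's pixels
    corrected[dy::2, dx::2] = raw[dy::2, dx::2] * surfaces[ch](xs, ys)
```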
One disclosed embodiment is now described with reference to
For a Bayer color array of one possible disclosed embodiment, depicted in
Referring to
Referring to
Referring now to
If the pixel is located on a column with a corresponding vertical correction value curve or if there is only one vertical correction value curve, the vertical correction value for the pixel is the value of the vertical correction value curve (Fy(y)), evaluated at that pixel's row, row y, and is determined at step S42. The vertical correction value for the particular pixel is the value of the corresponding vertical correction value curve evaluated at the row in which the particular pixel is located.
If there are multiple columns, for pixels which lie to one side of all columns with corresponding vertical correction value curves, the vertical correction value of the particular pixel is determined by combining vertical correction values for that pixel's row, as determined for the vertical correction value curves associated with the two columns closest to the pixel, as follows. In step S43, the two columns with associated vertical correction value curves located closest to the particular pixel are determined. Then, at step S44, the vertical correction values for each of these vertical correction curves are determined for the particular pixel's row. In step S45, the vertical correction value for the pixel itself is determined by an extrapolation of the vertical correction values determined in step S44.
For pixels that lie between two columns with corresponding vertical correction value curves, the vertical correction value of the particular pixel is determined by combining vertical correction values for that pixel's row. The vertical correction values that are combined are those associated with the two columns with corresponding vertical correction value curves that are the closest, one to the right and one to the left, to the particular pixel, as follows. First, the closest column with a corresponding vertical correction value curve lying to the right of the pixel and the closest column with a corresponding vertical correction value curve lying to the left are identified at step S46. Then the value of each of these vertical correction value curves is determined at step S47 or S48, respectively, based on the row location of the pixel. In step S49, the vertical correction value for the pixel itself is determined by an interpolation of the vertical correction values determined in steps S47 and S48.
A weighted linear interpolation/extrapolation may be employed to determine a vertical correction value from the two contributing vertical correction values. One possible method of weighting may be based upon the ratio of (a) the number of pixels between the column of the particular pixel and one of the closest columns to (b) the number of pixels between the column of the particular pixel and the other of the closest columns. However, it should be understood that non-linear interpolation/extrapolation functions may also be used. One benefit of using linear interpolation/extrapolation is that it is very cost effective.
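The sketch below is one hedged reading of steps S42 through S49: given vertical curves keyed by their associated columns, it interpolates between the nearest such columns on either side of the pixel, or extrapolates from the two nearest columns when the pixel lies to one side of them all. The dict-of-callables representation and the sample curves are assumptions; the same helper applies symmetrically to horizontal curves keyed by rows.

```python
# One assumed reading of steps S42-S49, with vertical curves stored in
# a dict keyed by their associated column index.
def vertical_correction(curves_by_col, x, y):
    cols = sorted(curves_by_col)
    if x in curves_by_col or len(cols) == 1:      # S42: direct lookup
        key = x if x in curves_by_col else cols[0]
        return curves_by_col[key](y)
    left = [c for c in cols if c < x]
    right = [c for c in cols if c > x]
    if left and right:                            # S46-S49: interpolate
        c0, c1 = left[-1], right[0]
    elif right:                                   # S43-S45: extrapolate, pixel left of all
        c0, c1 = cols[0], cols[1]
    else:                                         # S43-S45: extrapolate, pixel right of all
        c0, c1 = cols[-2], cols[-1]
    v0, v1 = curves_by_col[c0](y), curves_by_col[c1](y)
    w = (x - c0) / (c1 - c0)                      # weight from pixel-count ratio
    return (1.0 - w) * v0 + w * v1

curves = {0: lambda y: 1e-6 * y * y, 600: lambda y: 1.5e-6 * y * y}
print(vertical_correction(curves, x=300, y=200))  # between curves: interpolated
print(vertical_correction(curves, x=800, y=200))  # outside curves: extrapolated
```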
Referring now to
If there are multiple rows, for pixels which lie to one side of all rows with corresponding horizontal correction value curves, the horizontal correction value of the particular pixel is determined by combining horizontal correction values for that pixel's column, as determined for the horizontal correction value curves associated with the two rows closest to the pixel, as follows. In step S53, the two rows with associated horizontal correction value curves that are located closest to the particular pixel are determined. Then, at step S54, the horizontal correction values for each of these horizontal correction value curves are determined for the particular pixel's column. In step S55, the horizontal correction value for the pixel itself is determined by an extrapolation of the horizontal correction values determined in step S54.
For pixels that lie between two rows with corresponding horizontal correction value curves, the horizontal correction value of the particular pixel is determined by combining horizontal correction values for that pixel's column. The horizontal correction values that are combined are those associated with the two rows with corresponding horizontal correction value curves that are the closest above and the closest below the particular pixel, as follows. First, the closest row with a corresponding horizontal correction value curve above the pixel and the closest row with a corresponding horizontal correction value curve below the pixel are identified at step S56. Then the horizontal correction value of each of these horizontal correction value curves is determined at step S57 or S58, respectively, based on the column location of the pixel. In step S59, the horizontal correction value for the pixel itself is determined by an interpolation of the horizontal correction values determined in steps S57 and S58.
As with determination of the vertical correction values, a weighted linear interpolation/extrapolation may be employed to determine a horizontal correction value from the two contributing horizontal correction values. One possible method of weighting may be based upon the ratio of (a) the number of pixels between the row of the particular pixel and one of the closest rows to (b) the number of pixels between the row of the particular pixel and the other of the closest rows. However, it should be understood that non-linear interpolation/extrapolation functions may also be used. One benefit of using linear interpolation/extrapolation is that it is very cost effective.
Typically each Fy(y) would be evaluated only once per row, since it does not change for a given row, from one pixel to the next. Determination of such a value at a given pixel may consist of reading a previously computed value from a register; this register need be updated only once per scanned line. For calculation in real time, as pixels are scanned out from a sensor array, one row being scanned out before the next, disclosed embodiments may compute vertical correction value curves (updating the value of each vertical correction value curve only once per scanned row) using the same value of each curve for every pixel in a row, for efficiency. When computing horizontal correction value curves, the value of the horizontal correction value curve must be updated for each pixel in the scanned row. This difference occurs because the vertical correction value curve calculations are dependent on the row number of the pixel, which remains the same for all pixels within a given row, whereas the horizontal correction value curve calculations are dependent on the column number of the pixel, which is different for every pixel in a given row. Accordingly, when rows are corrected one after the other, it is generally more efficient to use more vertical correction value curves than horizontal correction value curves because more or faster circuitry, more power, or a faster processor may be required for calculating horizontal correction values as rapidly as pixels are scanned out along a row than for calculating vertical values that change only for each row.
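The per-row efficiency described above can be sketched as a scan-out loop in which the vertical value is computed once per row (as if read from a register) while the horizontal value is recomputed for every pixel; the curve shapes and constants below are illustrative assumptions.

```python
import numpy as np

# A sketch of scan-out order evaluation: Fy(y) refreshed once per row,
# Fx(x) recomputed per pixel, combined per Equation (3).
H, W, K, G = 480, 640, 0.5, 1.0
fy = lambda y: 2e-6 * (y - H // 2) ** 2      # vertical correction value curve
fx = lambda x: 1e-6 * (x - W // 2) ** 2      # horizontal correction value curve

raw = np.full((H, W), 100.0)
corrected = np.empty_like(raw)
for y in range(H):
    fy_val = fy(y)                           # updated once per scanned row
    for x in range(W):
        fx_val = fx(x)                       # must be updated every pixel
        corrected[y, x] = raw[y, x] * (fx_val + fy_val + K * fx_val * fy_val + G)
```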
For images from different cameras, the ideal number and configuration of vertical and horizontal correction value curves may vary. For example, three vertical correction value curves and one horizontal correction value curve are implemented in the example shown in
To determine if an adequate combination of correction value curves is being utilized, the results obtained using the corrected pixel values P(x, y) (from Equation (2) using the correction function Equation (3) based on the vertical/horizontal correction values calculated from the stored correction value curves as described above with reference to
The number and placement of row and column curves may be selected to try to optimize cost and expected performance quality for a particular camera, or in the case of an image sensor processor, a family of cameras to be supported by the image sensor processor. Testing a candidate design may be as simple as application of the design, for example, in simulation, to an image of a gray card using uniform illumination captured by a subject camera.
In some cases, it may be beneficial to use more than one horizontal correction value curve, notwithstanding the additional circuitry cost required. This determination is the result of a balancing between the benefit of getting the best approximation of the desired corrected pixel values and the cost of the pixel processing circuit, the amount of power required and/or the amount of processing time required. For example, three horizontal and five vertical correction value curves may be more desirable than one horizontal and 17 vertical correction value curves, even though horizontal correction value curve circuitry, in general, is more expensive, because increasing the number of horizontal correction value curves may achieve a better approximation of desired pixel correction values. This may depend on the camera to be used, the circuitry technology available, processor technology and/or other variables.
Typically, to determine the curves to be used for a particular camera, optimal central row and column curves are first developed/calibrated, for example, with all other curves neutralized or turned off, while imaging a uniform field with the camera. This can be done as discussed in the '454 application or the '307 application.
Suppose, however, that significant shading remains in a corner of a captured image, which is a common occurrence for systems with only one vertical and one horizontal correction curve per channel. An additional vertical curve is defined at or near the image edge, near the corner, with the same values as the central vertical curve previously optimized for the center column. The values of the new curve are then varied, with the new curve enabled to contribute to the final correction results, in accordance with the invention, until the desired values are obtained along the associated column, rendering the processed image more uniform along that column, and in its vicinity, and in particular in the corner of concern.
Similarly, another column may be added if there remains shading in another region. Analogously, rows may be added and adjusted. Various combinations are tried until a sufficiently cost-effective design is achieved for a given camera or family of cameras for which the design is intended. Then the number and placement of extra rows and columns is fixed. The design of disclosed embodiments may permit the placement of one or more rows to be programmable.
A similar procedure is used, once a design is fixed, to determine the best curve values to be used for a particular camera. Typically, a central row and column are first calibrated for each color channel. Each subsequently calibrated curve is calibrated while holding fixed the parameters and curves of the previously calibrated curves. As previously discussed, for positional gain adjustment applications, typically Fx(x) and Fy(y) are equal to zero along the center column and row, respectively, such that the cross term is equal to zero along the central axes. In other embodiments, the cross term may be omitted if sufficient accuracy of correction is achieved, such as if sufficiently many curves are provided to enable the desired accuracy of correction without the cross terms.
In a further embodiment, a vertical curve may be stored as a function that defines the difference at each pixel between the desired correction along the associated column and the correction calculated from some other vertical curve having an associated column. For example, if a representation of a vertical curve for the center column is stored, a left-hand column could have a corresponding stored function, where the actual vertical curve that would be used for the left-hand column would be the sum of the vertical curve stored for the center column and the stored function for the left-hand column. The description of this embodiment applies to horizontal curves as well. This implementation may save storage space, as only the differences need to be stored for the left-hand column, in this example, rather than the actual curve itself. More computation may be required, however, since as the correction curve is generated, two curves must first be generated and then summed, to get the final correction value at the particular pixel.
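A minimal sketch of this difference-curve embodiment follows, assuming the center column's curve and the left-hand column's difference function are both stored; the quadratic forms and names are placeholders.

```python
# A minimal sketch of the difference-curve embodiment: the left-hand
# column's vertical curve is reconstructed as the stored center-column
# curve plus a stored per-row difference function (names illustrative).
center_fy = lambda y: 1e-6 * y * y        # stored: vertical curve, center column
left_diff = lambda y: 2e-7 * y * y        # stored: difference, left-hand column

def left_fy(y):
    # Two stored curves are generated/evaluated and then summed to get
    # the final vertical correction value for the left-hand column.
    return center_fy(y) + left_diff(y)

print(left_fy(120))
```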
The imager 100 comprises a sensor core 200 that communicates with an image processor 110 that is connected to an output interface 130. A phase lock loop (PLL) 244 is used as a clock for the sensor core 200. The image processor 110, which is responsible for image and color processing, includes interpolation line buffers 112, decimator line buffers 114, and a color processing pipeline 120. One of the functions of the color processing pipeline 120 is the performance of positional gain adjustment in accordance with the disclosed embodiments, discussed above.
The output interface 130 includes an output first-in-first-out (FIFO) parallel buffer 132 and a serial Mobile Industry Processor Interface (MIPI) output 134, particularly where the imager 100 is used in a camera in a mobile telephone environment. The user can select either a serial output or a parallel output by setting registers in a configuration register within the imager 100 chip. An internal bus 140 connects read only memory (ROM) 142, a microcontroller 144, and a static random access memory (SRAM) 146 to the sensor core 200, image processor 110, and output interface 130. The read only memory (ROM) 142 may serve as a storage location for the correction values or the parameters used to generate the correction value curves and associated correction values as needed.
While disclosed embodiments have been described for use in correcting positional gains for an acquired image, disclosed embodiments may be used for other pixel corrections as well.
When employed in a video camera, pixel corrections may be employed in real time for each captured frame of the video image.
As noted, disclosed embodiments may be implemented as part of an image processor 110 and can be implemented using hardware components including an ASIC, a processor executing a program, or other signal processing hardware and/or processor structure or any combination thereof.
Disclosed embodiments may be implemented as part of a camera such as e.g., a digital still or video camera, or other image acquisition system, and may also be implemented as stand-alone software or as a plug-in software component for use in a computer, such as a personal computer, for processing separate images. In such applications, the process can be implemented as computer instruction code contained on a storage medium for use in the computer image-processing system.
For example,
The camera system 800 is an example of a processor system having digital circuits that could include image sensor devices. Without being limiting, such a system could also include a computer system, cell phone system, scanner system, machine vision system, vehicle navigation system, video phone, surveillance system, star tracker system, motion detection system, image stabilization system, and other image processing systems.
Although the disclosed embodiments employ a pixel processing circuit, e.g., image processor 110, which is part of an imager 100, the pixel processing described above and illustrated in
As described above, the disclosed embodiments of the invention describe a method providing more accurate approximations of desired correction surfaces for use in calculating pixel correction values at a given cost of circuit area, power, memory, bandwidth, processing time required, etc.
While several embodiments have been described in detail, it should be readily understood that the invention is not limited to the disclosed embodiments. Rather the disclosed embodiments can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described.
Patent | Priority | Assignee | Title |
4731652, | Mar 25 1986 | Kabushiki Kaisha Toshiba | Shading correction signal generating device for a television camera apparatus |
4970598, | May 30 1989 | Eastman Kodak Company | Method for correcting shading effects in video images |
5157497, | Feb 25 1991 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for detecting and compensating for white shading errors in a digitized video signal |
5272536, | Mar 13 1990 | Sony Corporation | Dark current and defective pixel correction apparatus |
5493334, | Dec 20 1993 | Matsushita Electric Corporation of America | Automatic digital black shading for cameras |
5510851, | Mar 29 1994 | AUTODESK, Inc | Method and apparatus for dynamic purity correction |
5548332, | Apr 18 1994 | Matsushita Electric Corporation of America | Apparatus and method for black shading correction |
6094221, | Jan 02 1997 | Apple Computer, Inc | System and method for using a scripting language to set digital camera device features |
6734905, | Oct 20 2000 | U S BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT | Dynamic range extension for CMOS image sensors |
6747757, | May 20 1998 | FUJIFILM Corporation | Image processing method and apparatus |
6912307, | Feb 07 2001 | RAMOT AT TEL AVIV UNIVERSITY LTD | Method for automatic color and intensity contrast adjustment of still and video images |
6937777, | Jan 17 2001 | Canon Kabushiki Kaisha | Image sensing apparatus, shading correction method, program, and storage medium |
6989872, | Jul 25 2000 | MATSUSHITA ELECTRIC INDUSTRIAL CO , LTD | Image distortion correcting device and image distortion correcting method |
7391450, | Aug 16 2002 | Qualcomm Incorporated | Techniques for modifying image field data |
7457478, | Aug 20 2002 | Sony Corporation | Image processing apparatus, image processing system, and image processing method |
7499082, | Feb 10 2004 | SOCIONEXT INC | Distortion correction circuit for generating distortion-corrected image using data for uncorrected image |
7782380, | Sep 01 2006 | Aptina Imaging Corporation | Positional gain adjustment and surface generation for image processing |
8078001, | May 11 2007 | U S BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT | Methods, apparatuses and systems for piecewise generation of pixel correction values for image processing |
8331722, | Jan 08 2008 | Aptina Imaging Corporation | Methods, apparatuses and systems providing pixel value adjustment for images produced by a camera having multiple optical states |
8463068, | Aug 09 2007 | U S BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT | Methods, systems and apparatuses for pixel value correction using multiple vertical and/or horizontal correction curves |
8478066, | Oct 31 2003 | Mitsubishi Denki Kabushiki Kaisha | Image-correction method and image pickup apparatus |
8620102, | May 11 2007 | U S BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT | Methods, apparatuses and systems for piecewise generation of pixel correction values for image processing |
20020094131
20020135705
20030234864
20030234872
20040027451
20040032952
20040155970
20050041806
20050053307
20050174437
20050179793
20050213159
20060033005
20070025625
20070164925
20070211154
20080002037
20080055430
20080175514
20080191985
20080279471
20080297816
20080309824
20090040371
20090175556
20090190006
20100135595
20110298943
20130257919
20130258146