In an embodiment, the computational complexity of estimating the actual illuminant of a scene is reduced by examining only a subset of the pixel values generated for a received image frame. In another embodiment, the number of rotations of color values is minimized by selecting an area which contains the color cue values of a color in an original/unrotated coordinate space and has boundaries parallel to the axes of the original coordinate space, and rotating a color value only if the color value is within the selected area. In another embodiment, such an area is used in conjunction with a histogram-based approach to determine the actual illuminant.
1. A method of determining a scene illuminant illuminating a scene of interest, said method comprising:
receiving an image frame representing said scene of interest and information representing how each of a set of colors would be manifested under each of a plurality of potential illuminants, wherein said image frame contains a plurality of color values;
determining a current illuminant from a plurality of illuminants by comparing only a subset of color values with said information corresponding to said plurality of illuminants for a match, wherein said subset of color values is contained in said plurality of color values, said subset of color values being stored as an original map comprising a bit plane; and
providing said current illuminant as said scene illuminant,
wherein a closest matching illuminant can be determined based on an aggregate count and closeness measures calculated for each detectable color-potential illuminant combination.
2. The method of
maintaining a plurality of original maps, wherein each original map indicates whether the corresponding one of said subset of color values at a corresponding position in said image frame matches a corresponding color for a corresponding illuminant; and
processing said plurality of original maps to identify said current illuminant.
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
forming an area, wherein all hue values of said set of colors lie within said area;
and checking whether each of said subset of color values is in said area, wherein said determining performs said comparing for a color value only if the color value is in said area.
8. The method of
9. The method of
1. Field of Disclosure
The present disclosure relates generally to image processing in image capture devices such as video cameras and still cameras, and more specifically to reducing computational complexity in determining an illuminant of a scene in such devices.
2. Related Art
A scene refers to any area/object, the image of which is sought to be captured using an image capture device. An image capture device (ICD) in turn refers to a device such as a still camera or a video camera which is designed to receive light signals from a scene and represent the corresponding image in a suitable format (analog or digital).
In general, light (“incident light”) originating from a light source (e.g., the Sun, a light bulb, reflection from an object) is incident on a scene, and emanates from the scene due to interactions with the objects present in the scene. The interactions include acts such as reflection, absorption, dispersion, diffraction, etc., as is well known in the arts. Sometimes, a light source itself may be part of a scene. The light (“received light”) from the scene is eventually received at an ICD and the image of the scene is captured as an image frame.
The nature of incident light generally depends on various factors such as any intervening medium (e.g., clouds, glass) present between a light source and a scene, the colors and brightness with which the light source or sources generate light, etc. Incident light is generally a combination of different colors of the same or different brightness. For example, incident light has brighter characteristics on clear-sky days than in cloudy conditions.
The general type of light incident on a scene is referred to as an illuminant, which is typically dependent on the light source/light sources as well as the factors noted above. Such illuminant is henceforth referred to as “actual illuminant” to differentiate from “potential illuminants” described in the sections below.
There is often a need to determine an actual illuminant of a scene by examining the image of a scene. For example, in an ICD, there are various corrections that may need to be performed based on a determination of actual illuminant. Auto-white balance (AWB) correction is one such example.
As is well known in the relevant arts, AWB correction generally refers to a color correction that may need to be performed on an image representation, with the nature of the correction depending on the actual illuminant. The AWB correction often parallels the correction that the human eye performs for the different illuminants under which light is received.
One approach to determining an actual illuminant is to store (in an ICD) data representing how a color in a scene would be represented (in the received light) under various illuminants (pre-calibrated colors referred to as reference colors), and to compare the image content with these reference colors for a match. Such approaches often require substantial processing resources.
In general, there is a need to reduce the computational complexity (or resource requirements, in general) in determining actual illuminants, without substantially compromising accuracy.
Example embodiments will be described with reference to the following accompanying drawings, which are described briefly below.
In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.
1. Overview
An aspect of the present invention determines an actual illuminant from multiple illuminants by comparing only a subset of color values (respectively representing the corresponding pixels) of an image frame with information representing how each detectable color manifests under different illuminants. Due to the comparison of only a subset of the color values, the computational requirements are reduced.
Another aspect of the present invention forms an area covering the cue values of a set of (one or more) detectable colors (for respective illuminants), and checks whether a color value is within the area before checking whether the color value matches any of the detectable colors. In an embodiment, the area is chosen to have a set of boundary lines which parallel the axes defining the chromaticity values in a first coordinate space. To check for a match with a detectable color, each color value may be rotated to a new coordinate space in which the boundaries of the cue information for the detectable color are parallel to the axes of the new coordinate space.
Due to the use of the area, some of the unneeded rotations of color values (to the new coordinate space) may be avoided.
One more aspect of the present invention applies a similar concept of an area when using a histogram approach to determining the actual illuminant. In the histogram-based approach, the frequency of occurrence of each color candidate in the chromaticity space is determined. An area covering the cue values (of a detectable color), with boundaries parallel to the axes of the coordinate space (the unrotated coordinate space, i.e., the same space in which the color values are received), is formed. Only the color candidates (and their frequency counts) falling within such an area are then used to determine the level of match for the detectable color.
Several aspects of the invention are described below with reference to examples for illustration. It should be understood that numerous specific details, relationships, and methods are set forth to provide a full understanding of the invention. One skilled in the relevant art, however, will readily recognize that the invention can be practiced without one or more of the specific details, or with other methods, etc. In other instances, well known structures or operations are not shown in detail to avoid obscuring the features of the invention.
2. Image Capture Device (Camera)
Lens enclosure 105 (denoted by dotted lines) is shown housing lens assembly 115 and image sensor array 120, and is generally designed to prevent extraneous light (i.e., light other than that received via the lens assembly) from being incident on image sensor array 120 (in general, the capturing medium). Lens assembly 115 may contain one or more lenses, which can be configured to focus light rays (denoted by arrow 101) from a scene to impinge on image sensor array 120.
Image sensor array 120 may contain an array of sensors, with each sensor generating an output value representing the corresponding point (small portion or pixel) of the image, proportionate to the amount of light that is allowed to fall on the sensor. The output of each sensor is converted to a corresponding digital value (for example, in RGB format). The digital values produced by the sensors are forwarded on path 123 to image processor 130 for further processing.
Display 140 displays an image frame in response to the corresponding display signals received from image processor 130 on path 134. Display 140 may also receive various control signals (not shown) from image processor 130 indicating, for example, which image frame is to be displayed, the pixel resolution to be used etc. Display 140 may also contain memory internally for temporary storage of pixel values for image refresh purposes, and is implemented in an embodiment to include an LCD display.
Input interface 160 provides a user with the facility to provide inputs, for example, to select features such as whether auto-white balance (AWB) correction is to be enabled/disabled. The AWB correction may be performed only if the feature is enabled. The user may be provided the facility to provide additional inputs, as described in sections below.
RAM 190 stores program (instructions) and/or data used by image processor 130. Specifically, pixel values that are to be processed and/or to be used later, may be stored in RAM 190 via path 139 by image processor 130.
Non-volatile memory 150 stores image frames received from image processor 130 via path 135. The image frames may be retrieved from non-volatile memory 150 by image processor 130 and provided to display 140 for display. In an embodiment, non-volatile memory 150 is implemented as a flash memory. Alternatively, non-volatile memory 150 may be implemented as a removable plug-in card, thus allowing a user to move the captured images to another system for viewing or processing or to use other instances of plug-in cards.
Non-volatile memory 150 may contain an additional memory unit (e.g., ROM, EEPROM, etc.), which stores various instructions which, when executed by image processor 130, provide various features of the invention described herein. In general, such memory units (including RAMs and non-volatile memory, removable or not) from which instructions can be retrieved and executed by processors are referred to as a computer (or, in general, machine) readable medium.
Image processor 130 forwards pixel values received on path 123 to path 134 to enable a user to view the scene at which the camera is presently pointed. Further, when the user ‘clicks’ a button (indicating intent to record the captured image on non-volatile memory 150), image processor 130 causes the pixel values representing the present (at the time of clicking) image to be stored in non-volatile memory 150.
It may be appreciated that the image frames thus captured may need to be corrected based on the illuminant of the scene captured, auto-white balance (AWB) being one such correction. Such correction requires the determination of the actual illuminant of the scene captured.
According to several aspects of the present invention, image processor 130 may operate to determine the actual illuminant of a scene by comparing the stored color characteristics of a predetermined number of illuminants with the color characteristics of the captured image frame and identifying the closest match. The description is accordingly continued with the internal details of image processor 130 in one embodiment.
3. Image Processor
ISP pipeline 220 receives a stream of pixel values representing an entire image frame (row-wise) on path 123. The pixel values may be received directly from image sensor array 120 (described above).
Sub-window logic 230 receives control inputs from CPU 250 specifying dimensions and locations of one or more sub-windows (e.g., in the form of rectangular areas) in the captured image that are to be subsampled. For each group (for example, nine adjacent pixels) of pixel values in a sub-window, sub-window logic 230 computes the average of the pixel values in the group, and generates a corresponding single pixel value having the computed average value. The ‘averaged’ pixels thus generated form a subsampled version of the portion of the image in the sub-window, and the subsampled version is provided by sub-window logic 230 to CPU 250. In an embodiment, sub-window logic 230 receives an 800×600 image frame and generates one pixel for each 8×8 pixel area. The resulting 100×75 image frame may be provided to CPU 250.
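As an illustration of the subsampling described above, the following is a minimal sketch (hypothetical Python/NumPy; sub-window logic 230 is hardware, and the function name and frame shapes here are assumptions for illustration only) of averaging each 8×8 block of an 800×600 frame into one pixel:

```python
import numpy as np

def subsample(frame: np.ndarray, block: int = 8) -> np.ndarray:
    """Average each block x block pixel area into a single pixel.

    frame is assumed to have shape (H, W, 3), with H and W divisible
    by block (e.g., 600 x 800 x 3 -> 75 x 100 x 3).
    """
    h, w, c = frame.shape
    # Split the frame into blocks, then average within each block.
    blocks = frame.reshape(h // block, block, w // block, block, c)
    return blocks.mean(axis=(1, 3))

# Example: an 800x600 RGB frame becomes a 100x75 subsampled frame.
frame = np.random.randint(0, 256, size=(600, 800, 3)).astype(np.float32)
print(subsample(frame).shape)  # (75, 100, 3)
```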
CPU 250 may operate on the subsampled image frame to determine an illuminant, from a set of potential illuminants, providing the closest match to the illuminant of the scene, as described in detail with examples below. Operation on such subsampled images reduces the computational requirements in CPU 250. Alternative embodiments can be implemented to operate in different ways (e.g., on the original image frame, without subsampling). Irrespective of the approach, CPU 250 may determine an illuminant from a set of potential illuminants, providing the closest match to the illuminant of the scene, by examining/processing the image data (either in subsampled form or otherwise), as described with examples below.
As noted above, in an embodiment, image processor 130 may operate to determine the actual illuminant of a scene by comparing the stored color characteristics of a predetermined number of potential illuminants with the color characteristics of the captured image frame and identifying the closest match. Some of the potential illuminants in an example embodiment are described below.
4. Potential Illuminants
It may be appreciated that some of the colors in a scene may not be detected in CPU 250 under certain illuminants. For example, when an image frame from a scene illuminated by a white illuminant (such as Sunlight) is received, all colors may be detected. On the other hand, when an illuminant has lower color temperature (more reddish hue) or higher color temperature (more bluish hue) than white light, some of the colors may have very little spectral power density. Accordingly, such colors may not be detected for the corresponding color temperatures.
While the examples herein are described with this set of illuminants and color temperatures merely for illustration, it may be appreciated that several features of the invention can be practiced with more or fewer illuminants, with other color temperatures or other characteristics, etc., as will be apparent to one skilled in the relevant arts by reading the disclosure provided herein.
Thus, for each potential illuminant in the set of illuminants, data representing how each of the corresponding detectable colors appears when illuminated by that illuminant is made available to the image processor in an ICD. In an embodiment, the set of colors comprises skin, green, and white.
For illustration, it is assumed that all three colors in the set are detectable for each potential illuminant. Thus, determination of an actual illuminant may entail comparing the color values of the pixels in an image frame with 36 different sets of data (corresponding to 12 illuminants and 3 detectable colors). Such comparison and other related computations may cause excessive computational load. Several aspects of the present invention reduce the computational complexity in determining an illuminant of a scene, as described below with examples.
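A minimal sketch of such a comparison follows (hypothetical Python; representing the stored reference data as axis-aligned boxes in chromaticity space, and the names used, are assumptions for illustration, not the format stored in an actual ICD). It counts, for each of the 36 illuminant/color combinations, how many pixel color values fall within the corresponding reference data:

```python
from typing import Dict, List, Tuple

# Hypothetical cue data: for each (illuminant, color) combination, an
# axis-aligned box (x_min, x_max, y_min, y_max) in chromaticity space.
CueBox = Tuple[float, float, float, float]

def count_matches(pixels: List[Tuple[float, float]],
                  cues: Dict[Tuple[str, str], CueBox]) -> Dict[Tuple[str, str], int]:
    """Count pixels whose chromaticity falls inside each cue region."""
    counts = {key: 0 for key in cues}
    for x, y in pixels:
        # Each pixel is compared against all 36 (12 illuminants x 3
        # colors) sets of data -- the source of the computational load.
        for key, (x0, x1, y0, y1) in cues.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[key] += 1
    return counts
```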
5. Reducing Computational Complexity in Determining an Illuminant of a Scene
Alternative embodiments in other environments, using other components, and different sequence of steps can also be implemented without departing from the scope and spirit of several aspects of the present invention, as will be apparent to one skilled in the relevant arts by reading the disclosure provided herein. The flowchart starts in step 401, in which control passes immediately to step 410.
In step 410, image processor 130 receives a set of illuminants. The set of illuminants may represent the illuminants likely to be encountered by target ICDs. In an embodiment, the set of illuminants consists of the 12 illuminants noted above.
In step 420, image processor 130 receives an image frame representing a scene illuminated by an illuminant. The image frame may be received in the form of pixel values, with each pixel value indicating both color and intensity of the corresponding point (small portion of the image). In the description below, only the color values (in chromaticity space) are described as being used in determining an actual illuminant. However, alternative embodiments can be implemented using the intensity as well as color values in different representations, without departing from the scope and spirit of several aspects of the present invention.
In step 430, image processor 130 selects a subset of pixels of the image frame. The subset may be selected in specific portions of an image frame and/or according to a pattern within such portions. The patterns can be fixed or adaptive. An adaptive pattern can be generated by using area-growing techniques. Some example fixed patterns are described in sections below. However, other approaches, as suited to the specific environments and requirements, may be used for selecting the subset without departing from the scope and spirit of several aspects of the present invention.
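As one hedged illustration of a fixed pattern (hypothetical Python; the stride and the checkerboard-style offset are arbitrary example choices, not prescribed patterns):

```python
def select_subset(height: int, width: int, stride: int = 2):
    """Yield (row, col) positions forming a fixed checkerboard-like
    pattern: every stride-th pixel in each row, with the starting
    column offset on alternating rows."""
    for row in range(height):
        for col in range(row % stride, width, stride):
            yield (row, col)

# Example: selects half of the pixels of a 100x75 subsampled frame.
subset = list(select_subset(75, 100))
```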
In step 440, image processor 130 examines the color values of the selected pixels to determine their match with detectable color and potential illuminant combinations. As noted above, data representing how detectable colors would be represented when illuminated by corresponding potential illuminants may be stored in the ICD (or otherwise made available, externally or by appropriate logic). The set of detectable colors may be chosen such that they are likely to be present in a large proportion of the images captured by ICDs under normal use. Image processor 130 may then compare the color values of the selected pixels with the stored color values to conclude whether a specific pixel matches a color.
In step 450, image processor 130 checks whether a “selective growing flag” is set. The selective growing flag is used to decide whether all the potential illuminants are to be considered for identifying the closest matching illuminant of the scene or not. If the “selective growing flag” is set, control passes to step 470. Otherwise, control passes to step 460.
In step 460, image processor 130 assigns the set of potential illuminants received in step 410 as the set of probable illuminants. The set of probable illuminants represent the set of illuminants from which image processor 130 may identify the closest matching illuminant of the scene. As the “selective growing flag” was not set in step 450, all the potential illuminants will be considered for identifying the closest matching illuminant.
In step 470, image processor 130 selects a subset of the set of potential illuminants received in step 410 as the set of probable illuminants. The subset is selected to contain those illuminants from the set of potential illuminants which are likely to provide a close match to the illuminant of the scene. The subset is selected based on the match information determined in step 440. Factors such as the number/location of color values matching a detectable color, the extent to which the comparison is close, etc., may be considered in selecting the set of probable illuminants. Several approaches to such selection will be apparent to one skilled in the relevant arts by reading the disclosure provided herein.
In step 480, image processor 130 generates match information, corresponding to each of the probable illuminants, for pixels of the image frame not in the selected subset of pixels. In other words, the match information of step 440 is extrapolated to other positions (not considered in step 440). Such match information may be generated in a number of ways, at least some of which are described in sections below.
In step 490, image processor 130 identifies the closest matching illuminant as the scene illuminant based on the match information of step 480. Various factors such as the number/location of color values matching a detectable color, the extent to which the comparison is close, etc., may be considered in determining the closest matching illuminant. Several approaches to such identification will be apparent to one skilled in the relevant arts by reading the disclosure provided herein. The flowchart ends in step 499.
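As one hedged illustration of such identification (hypothetical Python; scoring by a simple sum of per-color match counts is an assumption, and an actual embodiment may additionally weigh the location and closeness factors noted above):

```python
def closest_illuminant(match_counts):
    """match_counts: {(illuminant, color): number_of_matching_pixels}.

    Scores each probable illuminant by the total matches across its
    detectable colors, and returns the best-scoring illuminant.
    """
    scores = {}
    for (illuminant, _color), count in match_counts.items():
        scores[illuminant] = scores.get(illuminant, 0) + count
    return max(scores, key=scores.get)
```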
It should be appreciated that the features described above can be implemented in various embodiments. The description is continued with respect to example implementation of the above noted features.
6. Example Implementation Using a Subset of Pixels
As noted above with respect to step 430, a subset of the received pixels is first selected (within an area of a received frame). Example approaches to such selection are depicted in the accompanying drawings.
The match information corresponding to step 440 may be logically represented as shown in the accompanying drawings.
The match information thus generated is processed to determine the closest matching illuminant in step 490. In an embodiment, the non-shaded areas are ignored in determining the closest matching illuminant. Alternatively, various well-known techniques such as area growing, dilation, and propagation may be used to predict the likely values of each white area (from the match information of the surrounding pixels) and to enhance the accuracy of the indications in shaded areas, and the resulting 8×8 values can then be used in determining the closest matching illuminant. Predicting the likely values of each white area can be done for all or a selected subset of potential illuminants.
The resulting 8×8 values may be referred to as a larger map, while the map formed (not shown) just by the shaded area is referred to as an original map. A pixel in the white area is generally set to 1 if the surrounding pixels (exceeding a pre-specified match count) are also set to 1, or else to 0. Alternatively, if the color match count of the surrounding pixels is greater than or equal to 1 but less than the pre-specified count, the pixel of the white area is examined for a color match before being set to 1 or 0 in the larger map. Each of the bits of the original/larger map may be conveniently stored as a bit of a random access memory (RAM).
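A minimal sketch of predicting the white-area values follows (hypothetical Python; the 3×3 neighborhood and the threshold value are illustrative assumptions, and the alternative re-examination branch described above is omitted for brevity):

```python
import numpy as np

def grow_map(original: np.ndarray, known: np.ndarray,
             match_threshold: int = 3) -> np.ndarray:
    """Fill the 'white' (unexamined) positions of the larger map.

    original: 0/1 match bits; known: boolean mask marking the shaded
    (actually examined) positions. An unexamined position is set to 1
    when at least match_threshold of its known neighbors are 1.
    """
    h, w = original.shape
    larger = original.copy()
    for r in range(h):
        for c in range(w):
            if known[r, c]:
                continue  # shaded position: keep the examined bit
            r0, r1 = max(r - 1, 0), min(r + 2, h)
            c0, c1 = max(c - 1, 0), min(c + 2, w)
            neighbors = original[r0:r1, c0:c1][known[r0:r1, c0:c1]]
            larger[r, c] = 1 if neighbors.sum() >= match_threshold else 0
    return larger
```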
By using the subset of pixels in determining match information, the computational complexity is reduced. The reduction can be appreciated by considering the comparisons that may need to be performed in an embodiment. Accordingly, the description is continued to illustrate the comparisons needed in an example scenario.
7. Comparisons Required
The color cue values are assumed to be contained in rectangle 710 merely as an illustration. However, the color cue values can be modeled as any other shape suited to the specific environment. The area covering the color cue values is referred to as a cue area.
In general, a color value is deemed to match if the value falls in rectangle 710. The color value may be compared with each point in rectangle 710, but such comparison may consume substantial processing power. Alternatively, the color value may be checked against the boundaries, but there would be many such points, given that the boundaries of the rectangle are not aligned (parallel) with the X and Y axes.
To reduce the number of points for comparison on the boundaries, rectangle 710 may be mapped to a new coordinate space shown with new axes X′ and Y′. X′ and Y′ are chosen to be parallel to the boundaries of rectangle 710. Accordingly, the values of each color cue in rectangle 710 also need to be rotated. For the present purpose, given that the boundaries of the rotated rectangle are parallel to the new axes X′ and Y′, it may be sufficient to compute the values of the four corners of the rotated rectangle, and such rotated values may be stored in the ICD for determining the actual illuminant.
However, prior to checking against the boundaries of rectangle 710 in the new coordinate space (X′ and Y′ axes), a color value also needs to be rotated by the same angle (701). Though only a single rectangle (detectable color) is shown for illustration, many environments contain multiple detectable colors, and the corresponding rectangles (cue areas, in general) may need to be rotated by correspondingly different degrees. Thus, each color value may also need to be rotated by the corresponding degrees before comparison with the corresponding rotated (in general, transformed) cue area.
Once rotated, the X′ and Y′ coordinates of the rotated color value can be compared with the X′ and Y′ coordinates of the boundaries of the rotated rectangle. A maximum of four comparisons may need to be performed to determine a match.
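A minimal sketch of this check follows (hypothetical Python; the rotation convention and parameter names are assumptions). The color value is rotated by the cue angle, after which at most four comparisons against the stored X′/Y′ extents decide the match:

```python
import math

def matches_cue(x: float, y: float, angle_rad: float,
                x_min: float, x_max: float,
                y_min: float, y_max: float) -> bool:
    """Rotate (x, y) into the cue's (X', Y') space and bound-check it.

    (x_min, x_max, y_min, y_max) are the precomputed X'/Y' extents of
    the rotated cue rectangle (derived from its four corners).
    """
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    xp = x * cos_a + y * sin_a   # X' coordinate of the rotated value
    yp = -x * sin_a + y * cos_a  # Y' coordinate of the rotated value
    return x_min <= xp <= x_max and y_min <= yp <= y_max
```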
However, rotation of each color value is also computationally intensive (particularly given that there are multiple detectable colors and multiple potential illuminants), and it may thus be desirable to avoid rotations of color values. Several aspects of the present invention minimize such rotations, as described with examples below.
8. Minimizing Unneeded Rotations
Alternative embodiments in other environments, using other components, and different sequence of steps can also be implemented without departing from the scope and spirit of several aspects of the present invention, as will be apparent to one skilled in the relevant arts by reading the disclosure provided herein. The flowchart starts in step 801, in which control passes immediately to step 810.
In step 810, image processor 130 receives a set of illuminants and color cue data for each color that can be used in determining each illuminant as a matching illuminant. The color cue data may merely specify the four corners of the cue area in the rotated coordinate system (illustrated above) and the angle of rotation. The set of illuminants may represent the potential illuminants, or a subset of these from which the closest matching illuminant may be determined, as described above with respect to step 410.
In step 820, image processor 130 receives an image frame representing a scene illuminated by an illuminant. Image processor 130 may use all the pixels in the frame or select a subset of pixels through techniques well known in the relevant arts, including those described before.
In step 830, image processor 130 forms an area in the non-rotated coordinate space covering the color cue values of a color of an illuminant. The area may be advantageously chosen to cover the color cue values of potentially all detectable colors for all illuminants, or it may be chosen to cover the color cue values of one or more detectable colors for one or more illuminants. The area can be defined using as many lines (or curves, as suited in the specific environment) as needed.
In case a single line is used, all the color cue values lie on one side of the line (forming an area). In case two lines are used, the color cue values lie (in an area formed) between the two lines. Each line may be chosen to be parallel to one of the non-rotated axes X and Y, in addition to covering a minimum area. An illustration of such an area is shown in the accompanying drawings.
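A sketch of forming such an area follows (hypothetical Python; representing each cue area by its corner points in the unrotated space is an assumption). It computes the minimal axis-aligned box covering all of the supplied cue corner values:

```python
def covering_area(cue_corners):
    """cue_corners: iterable of (x, y) corner points of the cue areas
    (for one or more detectable color/illuminant combinations) in the
    unrotated coordinate space.

    Returns the minimal axis-aligned box (x_min, x_max, y_min, y_max)
    whose boundary lines parallel the X and Y axes and cover all the
    supplied cue values.
    """
    points = list(cue_corners)
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), max(xs), min(ys), max(ys))
```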
In step 840, image processor 130 sets the current pixel to the first pixel of the image frame. The current pixel represents the pixel currently being matched against the detectable colors whose cues are covered by the area.
In step 850, image processor 130 checks whether the color value (in the non-rotated coordinate space) of the current pixel falls in the area formed in step 830. Assuming the boundaries of the area are parallel to the X/Y axes, such a check reduces to a few simple comparisons against the boundary coordinates.
If the color value falls in the area, there is a possibility that it matches a detectable color, and hence it may be processed further by transferring control to step 860. If it does not fall in the area, there is no likelihood of it matching a color; hence it may be ignored, and control passes to step 870.
In step 860, image processor 130 checks whether the color value of the current pixel matches a detectable color. Here, the color value may be rotated to the X′, Y′ coordinate space and then compared against the boundary coordinates of the rotated rectangle (cue area). If the rotated color value falls in the rotated rectangle, the pixel (or color value) is deemed to match the detectable color.
If the color value of the current pixel matches the detectable color, control passes to step 880. If the color value of the current pixel does not match the detectable color, control passes to step 870.
In step 870, image processor 130 checks whether there are any more pixels of the image frame to be processed. If there are pixels of the image frame to be processed, control passes to step 875 for a next iteration. If there are no more pixels of the image frame to be processed, control passes to step 890.
In step 875, image processor 130 sets the current pixel to the next pixel of the image frame and processing is continued from step 850.
In step 880, image processor 130 adds the pixel as a matching pixel to the respective color mask for the respective illuminant. The color mask is a representation of the match information for a color under an illuminant, described before in section 7. The match information may be shown as a 0 (no match) or 1 (matched). In step 880, image processor 130 sets the value in the color mask for the respective color under the respective illuminant corresponding to the current pixel (for which a match was found in step 860) to a 1. Control then passes to step 870.
In step 890, image processor 130 identifies the closest matching illuminant as the scene illuminant based on the match information of step 880. Such identification can be performed using various well known approaches, once the match information for individual pixels is determined. The flowchart ends in step 899.
It should be appreciated that rotation of a pixel value is avoided when control passes from step 850 to step 870. Accordingly, computational requirements may be reduced at least in some circumstances.
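Condensing steps 840 through 880, the following sketch (hypothetical Python, reusing the matches_cue helper sketched in section 7; the cue-data layout is an assumption) shows how the cheap area pre-check of step 850 gates the costly rotation of step 860:

```python
def build_color_masks(pixels, cue_area, cues):
    """pixels: list of (x, y) color values in the unrotated space.
    cue_area: (x_min, x_max, y_min, y_max) covering area from step 830.
    cues: {(illuminant, color): (angle_rad, x_min, x_max, y_min, y_max)}.

    Returns one 0/1 mask per (illuminant, color) combination.
    """
    ax0, ax1, ay0, ay1 = cue_area
    masks = {key: [0] * len(pixels) for key in cues}
    for i, (x, y) in enumerate(pixels):
        # Step 850: if the pre-check fails, no rotation is performed
        # for this pixel at all (control passes to step 870).
        if not (ax0 <= x <= ax1 and ay0 <= y <= ay1):
            continue
        # Step 860: rotate and bound-check against each cue rectangle.
        for key, (angle, x0, x1, y0, y1) in cues.items():
            if matches_cue(x, y, angle, x0, x1, y0, y1):
                masks[key][i] = 1  # step 880: record the match
    return masks
```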
It should be appreciated that there are a substantial number of rotations in step 860 above, particularly given that a rotation may need to be performed for each color-potential illuminant combination (assuming a color value falls within the corresponding area of step 850). Such rotations can be avoided by using a histogram approach, in which the frequency count of occurrences of each color candidate (each Kx, Ky combination) is determined, and the counts and other closeness measures related to the corresponding Kx, Ky combination are then examined to determine the actual illuminant.
An aspect of the present invention uses some of the techniques above to reduce computational requirements even in such a context, as described below with an example.
9. One More Example Approach for Reducing Computational Complexity
Alternative embodiments in other environments, using other components, and different sequence of steps can also be implemented without departing from the scope and spirit of several aspects of the present invention, as will be apparent to one skilled in the relevant arts by reading the disclosure provided herein. The flowchart starts in step 1001, in which control passes immediately to step 1010.
Steps 1010, 1020, and 1030 may be performed similarly to steps 810, 820, and 830, respectively, described above.
In step 1040, image processor 130 generates a 2D chromaticity histogram of the image frame. The 2D chromaticity histogram is created from the chromaticity values in a color space (for example, the Kx, Ky color space) and may be logically represented in a three-dimensional space, with each point on the X-Y coordinates representing a color candidate (coordinates representing points on the Kx, Ky plane), and counters on the Z axis holding each candidate's count (the number of times the color candidate equals a color value in the image frame).
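A minimal sketch of building such a histogram follows (hypothetical Python; quantizing the Kx, Ky plane into a fixed grid of bins is an assumption made so the candidates are countable):

```python
from collections import Counter

def chromaticity_histogram(pixels, bins: int = 64) -> Counter:
    """Count occurrences of each quantized (Kx, Ky) color candidate.

    pixels holds (kx, ky) chromaticity values, each assumed in [0, 1);
    the plane is quantized into a bins x bins grid of candidates. The
    returned counts correspond to the Z-axis counters described above.
    """
    hist = Counter()
    for kx, ky in pixels:
        hist[(int(kx * bins), int(ky * bins))] += 1
    return hist
```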
In step 1050, image processor 130 selects the color candidates (histogram values) which fall within the area formed in step 1030. An example area, formed in the manner described, is shown as area 1120 in the accompanying drawings.
In step 1060, image processor 130 processes the selected histogram values to identify the scene illuminant. In an embodiment, the color candidates selected in step 1050 are rotated (while maintaining the associated counts/Z coordinates) so that they align with a coordinate space having axes parallel to the boundaries of the rectangle covering all the color cue values for a detectable color-illuminant combination. It may be appreciated that an aggregate count of the number of pixels matching a detectable color can easily be generated (e.g., using two for-loop constructs) from the rotated information, since only the counters in a rectangle having boundaries parallel to the rotated axes (X′, Y′) need to be added. The flowchart ends in step 1099.
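A sketch of the aggregation follows (hypothetical Python; the rotation convention matches the matches_cue sketch above, and treating each candidate's coordinates as its chromaticity values is an assumption). Only the pre-selected candidates are rotated, and their counts are summed when they fall within the rotated cue rectangle:

```python
import math

def aggregate_count(selected, angle_rad,
                    x_min, x_max, y_min, y_max) -> int:
    """selected: {(kx, ky): count} for the color candidates that fell
    within the unrotated area of step 1050.

    Rotates each candidate into the (X', Y') space and sums the counts
    of those falling inside the rotated cue rectangle.
    """
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    total = 0
    for (kx, ky), count in selected.items():
        xp = kx * cos_a + ky * sin_a
        yp = -kx * sin_a + ky * cos_a
        if x_min <= xp <= x_max and y_min <= yp <= y_max:
            total += count
    return total
```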
The closest matching illuminant can be determined based on such aggregate counts and closeness measures calculated for each detectable color-potential illuminant combination. However, alternative approaches also can be employed in identifying the actual scene illuminant from the histogram information.
Due to the use of the formed area and the chromaticity histogram, the number of color candidates that need to be considered and rotated may be reduced while determining an actual illuminant. The computational complexity may be reduced further by combining this histogram-based approach with the pixel-subset and area-based techniques described in the sections above.
Though described in specific Figures/flowcharts merely for illustration, it should be appreciated that the individual features described above may be combined in different embodiments as suited for the corresponding environments. Such combinations are contemplated to be covered by various features of the present invention.
10. Conclusion
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of the present invention should not be limited by any of the above described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.