According to various embodiments, the system and method of the present invention process light-field image data so as to reduce color artifacts, reduce projection artifacts, and/or increase dynamic range. These techniques operate, for example, on image data affected by sensor saturation and/or microlens modulation. Flat-field images are captured and converted to modulation images, and then applied on a per-pixel basis, according to techniques described herein.
5. A method for adjusting pixel sensitivities in an image capture device, comprising:
in a processor, receiving image data representing a scene, the image data comprising a plurality of pixels, each pixel having a plurality of values associated with different colors;
in the processor, estimating a chrominance of the scene illumination;
in the processor, based on the estimated chrominance, determining maximum sensor values for each of the different colors;
in the processor, clamping pixel values to corresponding maximum sensor values, to generate a processed image;
performing white-point adjustment on the processed image; and
outputting the processed image on a display device.
6. A method for adjusting pixel sensitivities in an image capture device, comprising:
in a processor, receiving image data representing a scene, the image data comprising a plurality of pixels, each pixel having a plurality of values associated with different colors;
in the processor, estimating a chrominance of the scene illumination;
in the processor, based on the estimated chrominance, determining maximum sensor values for each of the different colors;
in the processor, clamping pixel values to corresponding maximum sensor values, to generate a processed image;
converting the processed image to a chrominance component and a luminance component; and
outputting the converted image on a display device.
1. A method for adjusting pixel sensitivities in an image capture device, comprising:
capturing a frame of light-field image data representing a scene, the light-field image data comprising a plurality of pixels, each pixel having a plurality of values associated with different colors;
for each color, determining a flat-field response contour for each of at least one region of an image sensor;
for each color, generating a modulation image based on the at least one flat-field response contour;
for each color, generating a demodulation image from the modulation image;
applying the generated demodulation images to the captured light-field image data to generate a demodulated light-field image;
applying a demosaicing operation to the demodulated light-field image;
applying an automatic white-balance correction algorithm to the demodulated light-field image;
estimating a chrominance of the scene illumination;
adjusting sensor gain individually for each color, based on the estimated chrominance;
capturing a subsequent frame of light-field image data, using the adjusted sensor gain for each of the colors; and
storing the captured subsequent frame.
3. A method for adjusting pixel sensitivities in an image capture device, comprising:
in a processor, receiving light-field image data representing a scene, the light-field image data comprising a plurality of pixels, each pixel having a plurality of values associated with different colors;
for each color, determining a flat-field response contour for each of at least one region of an image sensor;
for each color, generating a modulation image based on the at least one flat-field response contour;
for each color, generating a demodulation image from the modulation image;
applying the generated demodulation images to the received light-field image data to generate a demodulated light-field image;
applying a demosaicing operation to the demodulated light-field image;
applying an automatic white-balance correction algorithm to the demodulated light-field image;
in a processor, estimating a chrominance of the scene illumination;
in the processor, based on the estimated chrominance, determining maximum sensor values for each of the different colors;
in the processor, clamping pixel values to corresponding maximum sensor values, to generate a processed image; and
outputting the processed image on a display device.
16. A system for adjusting pixel sensitivities in an image capture device, comprising:
circuitry configured to perform the steps of:
receiving image data representing a scene, the image data comprising a plurality of pixels, each pixel having a plurality of values associated with different colors;
for each color, determining a flat-field response contour for each of at least one region of an image sensor;
for each color, generating a modulation image based on the at least one flat-field response contour;
for each color, generating a demodulation image from the modulation image;
applying the generated demodulation images to the received light-field image data to generate a demodulated light-field image;
applying a demosaicing operation to the demodulated light-field image;
applying an automatic white-balance correction algorithm to the demodulated light-field image;
estimating a chrominance of the scene illumination;
based on the estimated chrominance, determining maximum sensor values for each of the different colors; and
clamping pixel values to corresponding maximum sensor values, to generate a processed image; and
a display device, communicatively coupled to the circuitry, configured to output the processed image.
15. A non-transitory computer-readable medium for adjusting pixel sensitivities in an image capture device, comprising instructions stored thereon, that when executed by a processor, perform the steps of:
receiving image data representing a scene, the image data comprising a plurality of pixels, each pixel having a plurality of values associated with different colors;
for each color, determining a flat-field response contour for each of at least one region of an image sensor;
for each color, generating a modulation image based on the at least one flat-field response contour;
for each color, generating a demodulation image from the modulation image;
applying the generated demodulation images to the received light-field image data to generate a demodulated light-field image;
applying a demosaicing operation to the demodulated light-field image;
applying an automatic white-balance correction algorithm to the demodulated light-field image;
estimating a chrominance of the scene illumination;
based on the estimated chrominance, determining maximum sensor values for each of the different colors;
clamping pixel values to corresponding maximum sensor values, to generate a processed image; and
causing a display device to output the processed image.
2. The method of
4. The method of
reapplying the generated demodulation images to the processed image to generate a demodulated processed image; and
reapplying a demosaicing operation to the demodulated processed image.
7. The method of
8. The method of
for each pixel in the image data, estimating the severity of the pixel's saturation and the likelihood that the chrominance of the saturation matches the estimated chrominance of the scene illumination; and
responsive to the estimated severity of the pixel's saturation exceeding a predetermined threshold severity level, and further responsive to the likelihood that the chrominance of the saturation matches the estimated chrominance of the scene illumination exceeding a predetermined threshold likelihood, replacing the chrominance of the pixel with the chrominance of the scene illumination.
9. The method of
responsive to the estimated severity of the pixel's saturation not exceeding the predetermined threshold severity level or the likelihood that the chrominance of the saturation matches the estimated chrominance of the scene illumination not exceeding a predetermined threshold likelihood, replacing the chrominance of the pixel with a linear interpolation between the original chrominance of the pixel and the chrominance of the scene illumination.
10. The method of
11. The method of
applying a variable blur/sharpen filter to the chrominance component, based on a filter-control computation; and
applying a variable blur/sharpen filter to the luminance component, based on a filter-control computation.
12. The method of
13. The method of
determining a luminance gain function;
determining a chrominance gain function;
for each pixel in the luminance component of the processed image, multiplying the pixel value by the luminance gain function; and
for each pixel in the chrominance component of the processed image, multiplying the pixel value by the chrominance gain function.
14. The method of
applying a blurring filter to the luminance component of the processed image, to generate a blurred image;
determining a histogram of luminance values from the blurred image; and
computing a gain function from the determined histogram.
The present application claims priority as a divisional of U.S. Utility application Ser. No. 13/774,925 for “Compensating For Sensor Saturation and Microlens Modulation During Light-Field Image Processing”, filed on Feb. 22, 2013, now U.S. Pat. No. 8,948,545, the disclosure of which is incorporated herein by reference in its entirety.
U.S. Utility application Ser. No. 13/774,925 claims priority from U.S. Provisional Application Ser. No. 61/604,155 for “Compensating for Sensor Saturation and Microlens Modulation During Light-Field Image Processing”, filed on Feb. 28, 2012, the disclosure of which is incorporated herein by reference in its entirety.
U.S. Utility application Ser. No. 13/774,925 further claims priority from U.S. Provisional Application Ser. No. 61/604,175 for “Compensating for Variation in Microlens Position During Light-Field Image Processing”, filed on Feb. 28, 2012, the disclosure of which is incorporated herein by reference in its entirety.
U.S. Utility application Ser. No. 13/774,925 further claims priority from U.S. Provisional Application Ser. No. 61/604,195 for “Light-Field Processing and Analysis, Camera Control, and User Interfaces and Interaction on Light-Field Capture Devices”, filed on Feb. 28, 2012, the disclosure of which is incorporated herein by reference in its entirety.
U.S. Utility application Ser. No. 13/774,925 further claims priority from U.S. Provisional Application Ser. No. 61/655,790 for “Extending Light-Field Processing to Include Extended Depth of Field and Variable Center of Perspective”, filed on Jun. 5, 2012, the disclosure of which is incorporated herein by reference in its entirety.
U.S. Utility application Ser. No. 13/774,925 further claims priority as a continuation-in-part of U.S. Utility application Ser. No. 13/688,026 for “Extended Depth of Field and Variable Center of Perspective In Light-Field Processing”, filed on Nov. 28, 2012, the disclosure of which is incorporated herein by reference in its entirety.
The present application is related to U.S. Utility application Ser. No. 11/948,901 for “Interactive Refocusing of Electronic Images,” filed Nov. 30, 2007, the disclosure of which is incorporated herein by reference in its entirety.
The present application is related to U.S. Utility application Ser. No. 12/703,367 for “Light-field Camera Image, File and Configuration Data, and Method of Using, Storing and Communicating Same,” filed Feb. 10, 2010, the disclosure of which is incorporated herein by reference in its entirety.
The present application is related to U.S. Utility application Ser. No. 13/027,946 for “3D Light-field Cameras, Images and Files, and Methods of Using, Operating, Processing and Viewing Same”, filed on Feb. 15, 2011, the disclosure of which is incorporated herein by reference in its entirety.
The present application is related to U.S. Utility application Ser. No. 13/155,882 for “Storage and Transmission of Pictures Including Multiple Frames,” filed Jun. 8, 2011, the disclosure of which is incorporated herein by reference in its entirety.
The present application is related to U.S. Utility application Ser. No. 13/664,938 for “Light-field Camera Image, File and Configuration Data, and Method of Using, Storing and Communicating Same,” filed Oct. 31, 2012, the disclosure of which is incorporated herein by reference in its entirety.
The present application is related to U.S. Utility application Ser. No. 13/774,971 for “Compensating for Variation in Microlens Position During Light-Field Image Processing,” filed Feb. 22, 2013, the disclosure of which is incorporated herein by reference in its entirety.
The present application is related to U.S. Utility application Ser. No. 13/774,986 for “Light-Field Processing and Analysis, Camera Control, and User Interfaces and Interaction on Light-Field Capture Devices,” filed Feb. 22, 2013, the disclosure of which is incorporated herein by reference in its entirety.
The present invention relates to systems and methods for processing and displaying light-field image data.
According to various embodiments, the system and method of the present invention process light-field image data so as to reduce color artifacts, reduce projection artifacts, and/or increase dynamic range. These techniques operate, for example, on image data affected by sensor saturation and/or microlens modulation. Flat-field images are captured and converted to modulation images, and then applied on a per-pixel basis, according to techniques described herein.
The accompanying drawings illustrate several embodiments of the invention and, together with the description, serve to explain the principles of the invention according to the embodiments. One skilled in the art will recognize that the particular embodiments illustrated in the drawings are merely exemplary, and are not intended to limit the scope of the present invention.
For purposes of the description provided herein, the following definitions are used:
In addition, for ease of nomenclature, the term “camera” is used herein to refer to an image capture device or other data acquisition device. Such a data acquisition device can be any device or system for acquiring, recording, measuring, estimating, determining and/or computing data representative of a scene, including but not limited to two-dimensional image data, three-dimensional image data, and/or light-field data. Such a data acquisition device may include optics, sensors, and image processing electronics for acquiring data representative of a scene, using techniques that are well known in the art. One skilled in the art will recognize that many types of data acquisition devices can be used in connection with the present invention, and that the invention is not limited to cameras. Thus, the use of the term “camera” herein is intended to be illustrative and exemplary, but should not be considered to limit the scope of the invention. Specifically, any use of such term herein should be considered to refer to any suitable device for acquiring image data.
In the following description, several techniques and methods for processing light-field images are described. One skilled in the art will recognize that these various techniques and methods can be performed singly and/or in any suitable combination with one another.
Architecture
In at least one embodiment, the system and method described herein can be implemented in connection with light-field images captured by light-field capture devices including but not limited to those described in Ng et al., Light-field photography with a hand-held plenoptic capture device, Technical Report CSTR 2005-02, Stanford Computer Science. Referring now to
In at least one embodiment, camera 800 may be a light-field camera that includes light-field image data acquisition device 809 having optics 801, image sensor 803 (including a plurality of individual sensors for capturing pixels), and microlens array 802. Optics 801 may include, for example, aperture 812 for allowing a selectable amount of light into camera 800, and main lens 813 for focusing light toward microlens array 802. In at least one embodiment, microlens array 802 may be disposed and/or incorporated in the optical path of camera 800 (between main lens 813 and sensor 803) so as to facilitate acquisition, capture, sampling of, recording, and/or obtaining light-field image data via sensor 803. Referring now also to
In at least one embodiment, light-field camera 800 may also include a user interface 805 for allowing a user to provide input for controlling the operation of camera 800 for capturing, acquiring, storing, and/or processing image data.
In at least one embodiment, light-field camera 800 may also include control circuitry 810 for facilitating acquisition, sampling, recording, and/or obtaining light-field image data. For example, control circuitry 810 may manage and/or control (automatically or in response to user input) the acquisition timing, rate of acquisition, sampling, capturing, recording, and/or obtaining of light-field image data.
In at least one embodiment, camera 800 may include memory 811 for storing image data, such as output by image sensor 803. Such memory 811 can include external and/or internal memory. In at least one embodiment, memory 811 can be provided at a separate device and/or location from camera 800.
For example, camera 800 may store raw light-field image data, as output by sensor 803, and/or a representation thereof, such as a compressed image data file. In addition, as described in related U.S. Utility application Ser. No. 12/703,367 for “Light-field Camera Image, File and Configuration Data, and Method of Using, Storing and Communicating Same,” filed Feb. 10, 2010, memory 811 can also store data representing the characteristics, parameters, and/or configurations (collectively “configuration data”) of device 809.
In at least one embodiment, captured image data is provided to post-processing circuitry 804. Such circuitry 804 may be disposed in or integrated into light-field image data acquisition device 809, as shown in
Overview
Light-field images often include a plurality of projections (which may be circular or of other shapes) of aperture 812 of camera 800, each projection taken from a different vantage point on the camera's focal plane. The light-field image may be captured on sensor 803. The interposition of microlens array 802 between main lens 813 and sensor 803 causes images of aperture 812 to be formed on sensor 803, each microlens in array 802 projecting a small image of main-lens aperture 812 onto sensor 803. These aperture-shaped projections are referred to herein as disks, although they need not be circular in shape. The term “disk” is not intended to be limited to a circular region, but can refer to a region of any shape.
Light-field images include four dimensions of information describing light rays impinging on the focal plane of camera 800 (or other capture device). Two spatial dimensions (herein referred to as x and y) are represented by the disks themselves. For example, the spatial resolution of a light-field image with 120,000 disks, arranged in a Cartesian pattern 400 wide and 300 high, is 400×300. Two angular dimensions (herein referred to as u and v) are represented as the pixels within an individual disk. For example, the angular resolution of a light-field image with 100 pixels within each disk, arranged as a 10×10 Cartesian pattern, is 10×10. This light-field image has a 4-D (x, y, u, v) resolution of (400, 300, 10, 10). Referring now to
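By way of illustration only, the following Python sketch shows how the 4-D (x, y, u, v) indexing described above might be realized in software for the example dimensions given (400×300 disks of 10×10 pixels each). The array layout, constants, and function names are hypothetical and are not taken from any embodiment described herein.

import numpy as np

# Hypothetical dimensions matching the example above: 400x300 disks, 10x10 pixels per disk.
DISKS_X, DISKS_Y = 400, 300
DISK_W, DISK_H = 10, 10

def sensor_coords(x, y, u, v):
    """Map a 4-D light-field sample (x, y, u, v) to a 2-D sensor pixel location,
    assuming an idealized Cartesian disk layout with no disk-center offsets."""
    return y * DISK_H + v, x * DISK_W + u   # (row, column)

def extract_subaperture(sensor_image, u, v):
    """Gather one pixel per disk (fixed u, v) to form a DISKS_Y x DISKS_X sub-aperture view."""
    return sensor_image[v::DISK_H, u::DISK_W][:DISKS_Y, :DISKS_X]

sensor = np.zeros((DISKS_Y * DISK_H, DISKS_X * DISK_W), dtype=np.float32)
view = extract_subaperture(sensor, 5, 5)    # one 300 x 400 view through near-center disk pixels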
In at least one embodiment, the 4-D light-field representation may be reduced to a 2-D image through a process of projection and reconstruction. As described in more detail in related U.S. Utility application Ser. No. 13/774,971 for “Compensating for Variation in Microlens Position During Light-Field Image Processing”, filed on the same date as the present application, the disclosure of which is incorporated herein by reference in its entirety, a virtual surface of projection may be introduced, and the intersections of representative rays with the virtual surface can be computed. The color of each representative ray may be taken to be equal to the color of its corresponding pixel.
Sensor Saturation
As described above, digital sensor 803 in light-field image data acquisition device 809 may capture an image as a two-dimensional array of pixel values. Each pixel 203 may report its value as an n-bit integer, corresponding to the aggregated irradiance incident on that pixel 203 during its exposure to light. Typical pixel representations are 8, 10, or 12 bits per pixel, corresponding to 256, 1024, or 4096 equally spaced aggregated irradiances. In some devices 809, the sensitivity of sensor 803 may be adjusted, but, in general, such adjustment affects all pixels 203 equally. Thus, for a given sensitivity, sensor 803 may capture images whose aggregated irradiances vary between zero and the aggregated irradiance that drives a pixel 203 to its maximum integer representation. Aggregated irradiances greater than this value may also drive the pixel 203 upon which they are incident to its maximum representation, so these irradiances may not be distinguishable in subsequent processing of the captured image. Such a pixel 203 is referred to herein as being saturated. Sensor saturation refers to a condition in which sensor 803 has one or more saturated pixels 203.
It is well known to provide capability, within an image capture device such as a digital camera, to adjust the sensitivity (ISO value) of digital sensor 803, the duration of its exposure to light captured by main lens 813 (shutter speed), and/or the size of aperture 812 (f-stop) to best capture the information in the scene. The net sensitivity resulting from ISO, shutter speed, and f-stop is referred to as exposure value, or EV. If EV is too low, information may be lost due to quantization, because the range of aggregated irradiances uses only a small portion of the available pixel representations. If EV is too high, information may be lost due to saturation, because pixels 203 with high aggregated irradiances have values that are indistinguishable from one another. While an EV that avoids sensor saturation is appropriate for some images, many images are best sampled with an EV that results in some saturated pixels 203. For example, a scene for which most aggregated pixel irradiances fall in a small range, but a few pixels 203 experience much greater aggregated irradiances, is best sampled with an EV that allows the high-aggregated-irradiance pixels 203 to be saturated. If EV were adjusted such that no pixel 203 saturated, the scene would be highly quantized. Thus, in some lighting conditions, sensor saturation may not be avoided without significant compromise.
As described above, digital sensors 803 may represent pixel values with differing numbers of bits. Pixel values may be normalized such that integer value zero corresponds to real value 0.0, and integer value 2^n − 1 (the maximum pixel value for a pixel represented with n bits) corresponds to real value 1.0. For purposes of the description provided herein, other factors such as black-level offset, noise other than that due to quantization, and pixels that do not operate correctly may be ignored.
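As a purely illustrative aid, the following Python sketch normalizes n-bit integer pixel codes to the real range [0.0, 1.0] and flags saturated pixels, as described above. The function names are hypothetical.

import numpy as np

def normalize_pixels(raw, bits):
    """Map n-bit integer sensor codes to normalized real values in [0.0, 1.0],
    ignoring black-level offset and defective pixels as in the text above."""
    max_code = (1 << bits) - 1          # 2^n - 1
    return raw.astype(np.float32) / max_code

def is_saturated(raw, bits):
    """A pixel is saturated when it reports the maximum representable code."""
    return raw >= (1 << bits) - 1

raw = np.array([0, 512, 1023], dtype=np.uint16)     # example 10-bit codes
print(normalize_pixels(raw, 10))                    # approximately [0.0, 0.5, 1.0]
print(is_saturated(raw, 10))                        # [False, False, True]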
Bayer Pattern
Ideally a digital sensor 803 would capture full chromatic information describing the aggregated irradiance at each pixel 203. In practice, however, each pixel 203 often captures a single value indicating the aggregate irradiance across a specific range of spectral frequencies. This range may be determined, for example, by a spectral filter on the surface of digital sensor 803, which restricts the range of light frequencies that is passed through to the pixel sensor mechanism. Because humans may distinguish only three ranges of spectra, in at least one embodiment, sensor 803 is configured so that each pixel 203 has one of three spectral filters, thus capturing information corresponding to three spectral ranges. These filters may be arranged in a regular pattern on the surface of digital sensor 803.
Referring now to
In alternative embodiments, other color filters can be represented, such as those that include additional primary colors. In various embodiments, the system of the present invention can also be used in connection with multi-spectral systems.
In alternative embodiments, the filters can be integrated into microlens array 802 itself.
Modulation
Pixels 203 within a disk 102 may not experience equal irradiance, even when the scene being imaged has uniform radiance (i.e., radiance that is the same in all directions and at all spatial locations). For example, pixels 203 located near the center of a disk 102 may experience greater irradiance, and pixels near or at the edge of the disk 102 may experience lower irradiance. In some situations, the ratio of the greatest pixel irradiance to the lowest pixel irradiance may be large, for example, 100:1. Referring now to
Vignetting is a related phenomenon, in which an image's brightness or saturation is reduced at the periphery as compared to the image center.
As depicted in
In graph 400, ten discrete values 404 are plotted, corresponding to normalized pixel values along a contour segment 401 drawn horizontally through the (approximate) center of disk 102. Although these ten values 404 are discrete, a continuous flat-field contour 402 is also plotted. Contour 402 describes the values pixels 203 would have if their centers were located at each position along the x-axis.
It may be a good approximation to predict that all of the light that is incident on microlens array 802 also reaches digital sensor 803—assuming that microlens array 802 may refract light, but does not occlude light. Referring now to
A modulation image, having pixel values that are the modulation values corresponding to each pixel 203 in a light-field image, may be computed by imaging a scene with uniform radiance. To ensure numerically accurate results, EV and scene radiance may be adjusted so that pixels with maximum irradiance have normalized values near 0.5. Such a light-field image is referred to herein as a flat-field image. The average pixel value of this flat-field image may be computed. The modulation value for each pixel in the modulation image may then be computed as the value of the corresponding pixel in the flat-field image, divided by the average pixel value of the flat-field image.
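The computation of a modulation image from a flat-field image described above can be summarized in a short, purely illustrative Python sketch; the synthetic flat-field data and the function name are hypothetical.

import numpy as np

def modulation_image(flat_field):
    """Convert a captured flat-field image into a modulation image by dividing every
    pixel by the average flat-field pixel value, per the procedure described above."""
    return flat_field / flat_field.mean()

# Illustrative use: a synthetic flat field whose pixel values cluster near 0.5.
flat = np.clip(np.random.default_rng(0).normal(0.5, 0.05, (30, 30)), 0.01, 1.0)
mod = modulation_image(flat)        # values near 1.0 where irradiance is average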
Bayer Consequences
As described above, digital sensor 803 may include pixels 203 with different spectral filters, which are sensitive to different ranges of visible spectra. These pixels 203 may be arranged in a regular pattern, such as the Bayer pattern described above in connection with
Sampling and Interpolation
Modulation may differ as a function of several parameters of light-field camera 800. For example, modulation may differ as the focal length and focus distance of main lens 813 are changed, and as the exposure duration of a mechanical shutter is changed. In some embodiments, it may be impractical to compute and retain a modulation image for each possible combination of such parameters.
For example, there may be n camera parameters that affect modulation. These n parameters may be thought of as defining an n-dimensional space. This space may be sampled at points (n-tuples) that are distributed throughout the space. Each sample may be taken by 1) setting camera parameters to the values specified by the sample coordinates, and 2) capturing a flat-field image. All camera parameters other than the n parameters, and all consequential external variables (for example, the scene radiance) may retain the same values during the entire sampling operation. The sample locations may be selected so that there is minimal difference between the values in corresponding pixels 203 of flat-field images that are adjacent in the n-dimensional space. Under these circumstances, the flat-field image for a point in the n-dimensional space for which no sample was computed may be computed by interpolating or extrapolating from samples in the n-dimensional space. Such an interpolation or extrapolation may be computed separately for each pixel 203 in the flat-field image. After the flat-field image for the desired coordinate in the n-dimensional space has been computed, the modulation image for this coordinate may be computed from the flat-field image as described above.
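The following Python sketch illustrates one plausible form of the per-pixel interpolation described above, restricted to a single camera parameter for brevity; the data structure and function name are hypothetical, and a real implementation would extend the same idea across all n parameters.

import numpy as np

def interpolate_flat_field(samples, query):
    """Per-pixel linear interpolation of flat-field images along one camera parameter
    (e.g. focus distance). `samples` is a list of (parameter_value, flat_field_image)
    pairs sorted by parameter value; values outside the sampled range are clamped to
    the nearest sample rather than extrapolated."""
    params = [p for p, _ in samples]
    if query <= params[0]:
        return samples[0][1]
    if query >= params[-1]:
        return samples[-1][1]
    hi = next(i for i, p in enumerate(params) if p >= query)
    lo = hi - 1
    t = (query - params[lo]) / (params[hi] - params[lo])
    return (1.0 - t) * samples[lo][1] + t * samples[hi][1]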
Storage
Flat-field images may be captured during the manufacture and calibration of camera 800, or at any time thereafter. They may be stored by any digital means, including as files in custom formats or any standard digital-image format, or in a data base (not shown). Data storage size may be reduced using compression, either lossless (sufficient for an exact reconstruction of the original data) or lossy (sufficient for a close but not exact reconstruction of the original data.) Flat-field data may be stored locally or remotely. Examples of such storage locations include, without limitation: on camera 800; in a personal computer, mobile device, or any other personal computation appliance; in Internet storage; in a data archive; or at any other suitable location.
Demodulation
It may be useful to eliminate the effects of modulation on a light-field image before processing the pixels 203 in that image. For example, it may be useful to compute a ratio between the values of two pixels 203 that are near each other. Such a ratio is meaningless if the pixels 203 are modulated differently from one another, but it becomes meaningful after the effects of modulation are eliminated. The process of removing the effects of modulation on a light-field image is referred to herein as a demodulation process, or as demodulation.
According to various embodiments of the present invention, flat-field images are captured and converted to modulation images, then applied on a per-pixel basis, according to techniques described herein.
The techniques described herein can be used to correct the effects of vignetting and/or modulation due to microlens arrays 802.
Each pixel in a modulation image describes the effect of modulation on a pixel in a light-field image as a simple factor m, where
p_mod = m · p_ideal
To eliminate the effect of modulation, p_mod can be scaled by the reciprocal of m:
p_ideal = p_mod / m
Using this relationship, a demodulation image is computed as an image with the same dimensions as its corresponding modulation image, wherein each pixel has a value equal to the reciprocal of the value of the corresponding pixel in the modulation image. A light-field image is demodulated by multiplying it, pixel by pixel, with the demodulation image. Pixels in the resulting image have values that nearly approximate the values in an ideal (unmodulated) light-field image. Referring now to
In some cases, noise sources other than quantization may cause a pixel 203 whose aggregate illumination is very low (such as a highly modulated pixel 203) to have a negative value. In at least one embodiment, when performing demodulation, the system of the present invention clamps pixels in the computed modulation image to a very small positive value, so as to ensure that pixels in the demodulation image (the reciprocals of the modulation values) are never negative, and in fact never exceed a chosen maximum value (the reciprocal of the clamp value).
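As a minimal, purely illustrative sketch of the demodulation procedure described above, including the clamp on modulation values, and assuming a hypothetical clamp value of 1/64:

import numpy as np

# Hypothetical clamp value; the text requires only that it be a very small positive number.
MODULATION_CLAMP = 1.0 / 64.0

def demodulation_image(modulation):
    """Reciprocal of the (clamped) modulation image, so demodulation factors are
    always positive and never exceed 1 / MODULATION_CLAMP."""
    return 1.0 / np.maximum(modulation, MODULATION_CLAMP)

def demodulate(light_field, modulation):
    """Multiply the light-field image pixel by pixel with the demodulation image."""
    return light_field * demodulation_image(modulation)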
Demodulation can be used to correct for any type of optical modulation effect, and is not restricted to correcting for the effects of modulation resulting from the use of disks. For example, the techniques described herein can be used to correct modulation due to main-lens vignetting, and/or to correct modulation due to imperfections in microlens shape and position.
Demodulation can be performed at any suitable point (or points) in the image processing path of digital camera 800 or other image processing equipment. In some cases, such as when using a light field digital camera 800, existing hardware-accelerated operations (such as demosaicing) may operate more effectively if demodulation is performed earlier along the image processing path.
Demosaicing
In at least one embodiment, pixels 203 in the demodulated image may have single values, each corresponding to one of three spectral ranges: red, green, or blue. The red, green, and blue pixels may be arranged in a mosaic pattern, such as the Bayer pattern depicted in
In other embodiments, any number of spectral ranges can be used; thus, the above example (in which three spectral ranges are used) is merely exemplary.
One demosaicing approach is to estimate unknown pixel values from known values that are spatially near the unknown value in the image. For these estimations to give meaningful results, the values they operate on must be commensurate, meaning that their proportions are meaningful. However, pixel values in a modulated image are not commensurate—their proportions are not meaningful, because they have been scaled by different values. Thus, demosaicing a modulated image (specifically, demosaicing a light-field image that has not been demodulated) may give unreliable results.
Because modulation in a light-field camera can have higher amplitude and higher spatial frequency than vignetting in a conventional camera (i.e., pixel modulation varies more dramatically), it can have a more significant effect on demosaicing. Accordingly, the techniques of the present invention are particularly effective in connection with demosaicing efforts for light-field cameras.
Estimation of the Color of Scene Illumination
The three color-channel values of a demosaiced pixel may be understood to specify two distinct properties: chrominance and luminance. In general, chrominance is a mapping from n-valued to (n−1)-valued tuples, while luminance is a mapping from n-valued tuples to single values. More particularly, where three color channel values are available, chrominance is a two-value mapping of the three color channel values into the perceptual properties of hue (chrominance angle) and saturation (chrominance magnitude); luminance is a single-valued reduction of the three pixel values. Perceptual properties such as apparent brightness are specified by this value. For example, luminance may be computed as a weighted sum of the red, green, and blue values. The weights may be, for example, 0.2126 (for red), 0.7152 (for green), and 0.0722 (for blue).
Many algorithms that map and reduce a three-channel RGB signal to separate chrominance and luminance signals are known in the art. For example, chrominance may be specified as the ratios of the RGB channels to one another. These ratios may be computed using any of the values as a base. For example, the ratios r/g and b/g may be used. Regardless of which value is used as the base, exactly (n−1) values (i.e. two values, if there are three color channel values) are required to completely specify pixel chrominance with such a representation.
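For illustration only, the following Python sketch computes the luminance weighting and the r/g and b/g chrominance ratios described above; the epsilon guard against division by zero is an added assumption, not part of the description.

import numpy as np

# Luminance weights cited in the text (red, green, blue).
LUMA_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def luminance(rgb):
    """Single-valued reduction of an RGB tuple: a weighted sum of the channels."""
    return rgb @ LUMA_WEIGHTS

def chrominance_ratios(rgb, eps=1e-6):
    """Two-valued chrominance expressed as the r/g and b/g channel ratios."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g = np.maximum(g, eps)    # guard against division by zero; an added assumption
    return np.stack([r / g, b / g], axis=-1)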
The illumination in a scene may be approximated as having a single chrominance (ratio of spectral components) that varies in amplitude throughout the scene. For example, illumination that appears red has a higher ratio of low-frequency spectral components to mid- and high-frequency spectral components. Scaling all components equally changes the luminance of the illumination without changing the ratios of its spectral components (its color channels).
The apparent chrominance or color constancy of an object in a scene is determined by the interaction of the surface of the object with the light illuminating it. In a digital imaging system, the chrominance of scene illumination may be estimated from the apparent chrominance of objects in the scene, if the light-scattering properties of some objects in the scene are known or can be approximated. Algorithms that make such approximations and estimations are known in the art as Automatic White Balance (AWB) algorithms.
While the colors in a captured image may be correct, in the sense that they accurately represent the colors of light captured by the camera, in some cases an image having these colors may not look correct to an observer. Human observers maintain color constancy, which adjusts the appearance of colors based on the color of the illumination in the environment and relative spatial location of one patch of color to another. When a picture is viewed by a human observer in an environment with different illumination than was present when the picture was captured, the observer maintains the color constancy of the viewing conditions by adjusting the colors in the image using the color of the illumination of the viewing environment, instead of the illumination of the scene captured in the picture. As a result, the viewer may perceive the colors in the captured picture to be unnatural.
To avoid the perception of unnatural colors in captured images, AWB algorithms may compute white-balance factors, in addition to their estimate of illuminant color. For example, one factor can be used for each of red, green, and blue, although other arrangements such as 3×3 matrices are also possible. Such factors are used to white-balance the image by scaling each pixel's red, green, and blue components. White-balance factors may be computed such that achromatic objects in the scene (i.e., objects that reflect all visible light frequencies with equal efficiency) appear achromatic, or gray, in the final picture. In this case, the white-balance factors may be computed as the reciprocals of the red, green, and blue components of the estimated color of the scene illuminant. These factors may all be scaled by a single factor such that their application to a color component changes only its chrominance, leaving luminance unchanged. It may be more visually pleasing, however, to compute white-balance factors that push gray objects nearer to achromaticity, without actually reaching that goal. For example, a scene captured at sunset may look more natural with some yellow remaining, rather than being compensated such that gray objects become fully achromatic.
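The following Python sketch illustrates one plausible way to derive white-balance factors from an estimated illuminant color, per the description above; the luminance-preserving normalization and the strength parameter (which leaves some of the illuminant color in place, as in the sunset example) are illustrative assumptions.

import numpy as np

def white_balance_factors(illuminant_rgb, strength=1.0):
    """Per-channel white-balance gains as the reciprocals of the estimated illuminant
    color, rescaled so that the luminance of a gray patch is approximately unchanged.
    strength < 1.0 blends the gains toward unity, pushing gray objects nearer to
    achromaticity without fully reaching it."""
    gains = 1.0 / np.asarray(illuminant_rgb, dtype=np.float64)
    gains /= gains @ np.array([0.2126, 0.7152, 0.0722])   # keep gray-patch luminance roughly constant
    return strength * gains + (1.0 - strength) * 1.0

# Example: a warm (reddish) illuminant estimate.
print(white_balance_factors([1.0, 0.8, 0.6]))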
Because AWB algorithms operate on colors, and because colors may be reliably available from sensor image data only after those data have been demosaiced, it may be advantageous, in some embodiments, to perform AWB computation on demosaiced image data. Any suitable methodology for sampling the Bayer pattern may be used. In particular, the sampling used for AWB statistical analysis need not be of the same type as is used for demosaicing. It may further be advantageous, in some embodiments, for the AWB algorithm to sample the demosaiced image data only at, or near, disk centers. In some situations, sampling the demosaiced image in highly modulated locations, such as near the edges of disks, may result in less reliable AWB operation, due, for example, to greater quantization noise.
Referring now to
Improving Accuracy of Pixel Values in the Presence of Saturation
The above-described sequence of demodulation followed by demosaicing is intended to generate pixels with accurate chrominance and luminance. Accuracy in these calculations presumes that the pixel values in the sensor image are themselves accurate. However, in cases where sensor saturation has taken place, the pixel values themselves may not be accurate. Specifically, sensor saturation may corrupt both pixel chrominance and luminance when they are computed as described above.
According to various embodiments of the present invention, the accuracy of pixel values can be improved, even when they are computed in the presence of sensor saturation. The following are two examples of techniques for improving the accuracy of pixel values; they can be applied either singly or in combination: (1) clamping sensor values to the color of the scene illumination, and (2) advanced compensation of chrominance and luminance. Both techniques are described in the sections that follow.
For example, consider the case of complete sensor saturation, where all pixels 203—red, green, and blue—in a region are saturated. In such a situation, it is known that the luminance in the region is high, but chrominance is not known, because all r/g and b/g ratios are possible. However, an informed guess can be made about chrominance, which is that it is likely to be the chrominance of the scene illumination, or an approximation of it. This informed guess can be made because exceptionally bright objects in the scene are likely to be the light source itself, or specular reflections of the light source. Directly imaged light sources exhibit the chrominance of the scene illumination itself. The chrominance of specular reflections (reflections at high grazing angles, or off mirror-like surfaces at any angle) may also be the chrominance of the light source, even when the object's diffuse reflectivity has spectral variation (that is, when the object is colored). While objects of any chrominance may be bright enough to cause sensor saturation, gray objects, which reflect all visible light equally, are more likely to saturate all three color channels simultaneously, and will also take on the chrominance of the scene illumination.
If a sensor region is only partially saturated, then some information about chromaticity may be inferred. The pattern of saturation may rule out saturation by the scene illumination chrominance, if, for example, red pixels 203 are saturated and green pixels 203 are not, but the r/g ratio of the scene illumination color is less than one. But the presence of signal noise, spatial variation in color, and, especially in light-field cameras, high degrees of disk modulation, make inferences about chrominance uncertain even in such situations. Thus the chrominance of the scene illumination remains a good guess for both fully and partially saturated sensor regions.
Clamping Sensor Values to the Color of the Scene Illumination
The sensitivity of digital sensor 803 (its ISO) may be adjusted independently for its red, green, and blue pixels 203. In at least one embodiment, it may be advantageous to adjust relative sensitivities of these pixels 203 so that each color saturates at a single specific luminance of light corresponding to the chrominance of the scene illumination. Thus, no pixels 203 are saturated when illuminated with light of the scene-illumination chrominance at intensities below this threshold, and all pixels 203 are saturated when illuminated with light of the scene-illumination chrominance at intensities above this threshold.
An advantage of such an approach is that quantization error may be reduced, because all pixels 203 utilize their full range prior to typical saturation conditions. Another advantage is that, at least in sensor regions that experience relatively constant modulation, sensor saturation effectively clamps chrominance to the chrominance of the illuminant. Thus, subsequent demosaicing will infer the chrominance of the illuminant in clamped regions, because the r/g and b/g ratios will imply this chrominance. Even when modulation does change rapidly, as it may in a light-field image, the average demosaiced chrominance approximates the chrominance of the scene illumination, even while the chrominances of individual pixels 203 depart from this average.
Referring now to
A feedback loop as depicted in
Pixels 203 for the current frame are captured 1901. The captured pixels 203 are processed through demodulation, demosaicing, and AWB to estimate 1902 the chrominance of the scene illumination. Maximum red, green, and blue sensor values are computed 2001 by scaling the red, green, and blue components of the scene-illumination chrominance equally, such that the largest component is equal to 1.0. The value of each pixel 203 in the sensor image is clamped 2002 to the corresponding maximum value. As an optimization, pixels of the color channel whose maximum is 1.0 need not be processed, because they have already been limited to this maximum by the mechanics of sensor saturation.
After the sensor image has been clamped 2002 in this manner, it may be demodulated 2003 and demosaiced 2004 again before subsequent processing is performed. As an optimization, the color channel that was not clamped (because its maximum was already 1.0) need not be demodulated again, but the other two color channels may be.
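A minimal Python sketch of the clamping operation described above (steps 2001 and 2002), assuming the sensor data are available as separate normalized color planes; the dictionary layout and function name are hypothetical.

import numpy as np

def clamp_to_illuminant(bayer_planes, illuminant_rgb):
    """Clamp each color plane to a per-channel maximum obtained by scaling the
    estimated scene-illumination color so that its largest component equals 1.0.
    `bayer_planes` is a hypothetical dict of normalized sensor planes keyed 'r', 'g', 'b'."""
    illum = np.asarray(illuminant_rgb, dtype=np.float64)
    max_values = illum / illum.max()         # largest component becomes 1.0
    clamped = {}
    for key, limit in zip(('r', 'g', 'b'), max_values):
        # The channel whose limit is 1.0 is already clamped by sensor saturation itself.
        clamped[key] = bayer_planes[key] if limit >= 1.0 else np.minimum(bayer_planes[key], limit)
    return clamped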
Referring now to
A control path is also depicted in
In at least one embodiment, a simpler technique for pre-projection light-field image processing is used, wherein the chrominance of the illuminant is actually computed twice, first as a rough approximation (which does not require that the image be first demodulated and demosaiced), and then again after the image is clamped, demodulated, and demosaiced, when it can be computed more accurately for subsequent use. Referring now to
The technique of
Advanced Compensation
While the above-described light-field clamping technique substantially reduces false-color artifacts in images projected from the light field, some artifacts may remain. In one embodiment, additional techniques can be applied in order to further reduce such artifacts, especially to the extent that they result from sensor saturation.
Referring now to
Referring now to
Referring now to
Because proportionality is violated by this uneven signal reconstruction, subsequent demosaicing may result in incorrect chrominances, causing artifacts. Artifacts in luminance may also occur, depending on the 2-D pattern of ray intersections with the plane of projection. In one embodiment, such saturation-related artifacts are minimized by subsequent processing, referred to herein as advanced compensation. Referring now to
In the advanced compensation method depicted in
Color-space conversion step 1202 is then performed, wherein each pixel's 203 red, green, and blue components are converted into chrominance 1004 and luminance 1003 signals. As described above, chrominance may be represented as a 2-component tuple, while luminance may be represented as a single component. Any known technique can be used for converting red, green, and blue components into chrominance 1004 and luminance 1003 signals, and any known representations of chrominance 1004 and luminance 1003 can be used. Examples include YUV (Y representing luminance 1003, U and V representing chrominance 1004) and L*a*b* (L* representing luminance 1003, a* and b* representing chrominance 1004). Some representations, such as YUV, maintain a linear relationship between the intensity of the RGB value (such an intensity may be computed as a weighted sum of red, green, and blue) and the intensity of luminance 1003 value. Others, such as L*a*b*, may not maintain such a linear relationship. It may be desirable for there to be such a linear relationship for chrominance 1004 and/or for luminance 1003. For example, luminance value 1003 may be remapped so that it maintains such a linear relationship.
In at least one embodiment, three additional operations, named chrominance compensation 1203, spatial filtering 1204, and tone mapping 1205, are performed separately on chrominance 1004 and luminance 1003 signals, as described in more detail below.
Chrominance Compensation 1203
Referring now to
Each pixel 203 in chrominance light-field image 1004 is considered individually. Lerp-factor computation 1301 estimates the severity of each pixel's 203 saturation, and the likelihood that the chrominance of that saturation matches (or approximates) the estimated chrominance of the scene illumination. For example, if a pixel's luminance value is near saturation, it is more likely that the chrominance value is wrong. Accordingly, in at least one embodiment, the system of the present invention uses a weighting between saturation and near saturation to determine how much to shift the chrominance value.
When a pixel's 203 saturation is severe, and there is high likelihood that the pixel's chrominance is equal to the chrominance of the scene illumination, the pixel's 203 chrominance is replaced with the chrominance of the scene illumination. When there is no saturation, the pixel's 203 chrominance is left unchanged. When the pixel's 203 saturation is moderate, and there is an intermediate probability that the saturation is equal to the estimated chrominance of the scene illumination, Lerp-factor computation 1301 produces an output that is intermediate between 0.0 and 1.0. This intermediate value causes the pixel's 203 chrominance to be replaced with a linear combination (such as a linear interpolation, or “Lerp”) 1304 between the pixel's 203 original chrominance and the chrominance of the scene illumination. For example, if the computed Lerp factor was 0.25, and the pixel's 203 chrominance representation was UV, then the output of the linear interpolation would be
U′ = (1.0 − 0.25) · U + 0.25 · U_illumination (Eq. 1)
V′ = (1.0 − 0.25) · V + 0.25 · V_illumination (Eq. 2)
Any of a variety of Lerp-factor computation algorithms may be used. For example, a simple calculation might combine the red (R), green (G), and blue (B) components of the pixel 203, prior to its color-space conversion, as follows:
f_lerp = G · (1 − |R − B|) · D (Eq. 3)
In another embodiment, the Lerp factor can be computed by look-up into a two-dimensional table, indexed in one dimension by an estimation of the severity of saturation, and in the other dimension by an estimation of how closely the saturation chrominance approximates the estimated chrominance of the scene illumination. These indexes can be derived from any functions of the pixel's 203 pre-color-space-conversion R, G, and B values, and its post-color-space-conversion luminance 1003A and chrominance 1004A values (as derived from color-space conversion step 1302).
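By way of illustration, the following Python sketch applies Eq. 1 and Eq. 2 to a chrominance light-field image and shows a hypothetical table-based Lerp-factor lookup of the kind described above; the construction of the table itself is left unspecified, as in the text.

import numpy as np

def compensate_chrominance(chroma_uv, lerp_factor, illuminant_uv):
    """Blend each pixel's chrominance toward the estimated scene-illumination
    chrominance by the per-pixel Lerp factor (Eq. 1 and Eq. 2 above).
    `chroma_uv` has shape (..., 2); `lerp_factor` values lie in [0.0, 1.0]."""
    f = np.asarray(lerp_factor)[..., None]
    return (1.0 - f) * chroma_uv + f * np.asarray(illuminant_uv)

def lerp_factor_table_lookup(severity, match, table):
    """Hypothetical table-based Lerp-factor computation: `severity` (of saturation) and
    `match` (how closely the saturation chrominance approximates the estimated
    scene-illumination chrominance), both in [0.0, 1.0], index a precomputed 2-D table."""
    si = np.clip((np.asarray(severity) * (table.shape[0] - 1)).astype(int), 0, table.shape[0] - 1)
    mi = np.clip((np.asarray(match) * (table.shape[1] - 1)).astype(int), 0, table.shape[1] - 1)
    return table[si, mi]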
Although linear interpolation is described herein for illustrative purposes, one skilled in the art will recognize that any other type of blending or interpolation can be used.
It may be desirable to blur the chrominance light-field image 1004 prior to linear interpolation 1304 with the estimated chrominance of the scene illumination. Blurring filter 1303 may thus be applied to chrominance light-field image 1004 before it is provided to linear interpolation step 1304.
Spatial Filtering 1204
Referring now to
In one embodiment, spatial filtering 1204 is applied separately to both the luminance 1003 and chrominance 1004 light-field images. An individualized variable blur/sharpen filter kernel 1402 is used to compute each output pixel's 203 value. This kernel 1402 may either sharpen or blur the image, as specified by a continuous value generated by filter control computation 1401.
In at least one embodiment, input to filter control computation 1401 is a single pixel 203 of a blurred version of luminance light-field image 1003, as generated by blurring filter 1303. In at least one embodiment, filter control computation 1401 estimates the likelihood and severity of pixel saturation, without consideration for the chrominance of that saturation. When saturation is present, filter control computation 1401 generates a value that causes kernel 1402 to blur the light-field images. Such blurring may serve to smooth uneven demodulated values. When saturation is not present, filter control computation 1401 generates a value that causes kernel 1402 to sharpen the images. Such sharpening may compensate for blurring due to imperfect microlenses and due to diffraction. Intermediate pixel conditions result in intermediates between blurring and sharpening of the light-field images.
In one embodiment, two filtered versions of the light-field image are generated: an unsharp mask, and a thresholded unsharp mask in which the positive high-pass image detail has been boosted and the negative high-pass detail has been eliminated. The system then interpolates between these versions of the image using filter control computation 1401. When filter control computation 1401 has a low value (in regions that are not saturated), the unsharp mask is preferred, with the effect of sharpening the image. When filter control computation 1401 has a high value (in regions that are likely to be saturated), the thresholded unsharp mask is preferred. Thresholding “throws out” negative values in the high-pass image, thus removing clamped demodulated pixel values in the saturated region, and leaving valuable demodulated interstitial pixel values.
In various embodiments, any of a variety of filter control computation 1401 algorithms may be used.
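As one purely illustrative interpretation of the spatial filtering described above, the following Python sketch blends between an unsharp mask and a thresholded unsharp mask under control of the filter-control value; the Gaussian blur, the boost factor, and all parameter names are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter   # assumed available; any blurring filter would do

def spatial_filter(luma, control, amount=1.0, sigma=1.5):
    """Blend between an unsharp mask (sharpening, control near 0.0, no saturation) and a
    thresholded unsharp mask that keeps only boosted positive high-pass detail
    (control near 1.0, likely saturation), as described above."""
    low_pass = gaussian_filter(luma, sigma)
    high_pass = luma - low_pass
    unsharp = luma + amount * high_pass                                  # sharpened version
    thresholded = low_pass + 2.0 * amount * np.maximum(high_pass, 0.0)   # negative detail removed
    return (1.0 - control) * unsharp + control * thresholded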
Tone Mapping 1205
Referring now to
In one embodiment, tone mapping 1205 is applied separately to both the luminance 1003 and chrominance 1004 light-field images. Before any light-field pixels 203 are processed, two gain functions are computed: a luminance gain function 1504 and a chrominance gain function 1503. Functions 1503, 1504 may have any of a variety of representations. For example, they may be represented as one-dimensional tables of values. Each function 1503, 1504 maps an input luminance value 1003 to an output scale factor. After the functions have been created, pixels 203 in the incoming chrominance and luminance light-field images 1004, 1003 are processed individually, in lock step with one another. The luminance pixel value is presented as input to both gain functions 1503, 1504, generating two scale factors: one for chrominance and one for luminance. Both components of the chrominance pixel value are multiplied 1505 by the chrominance scale factor determined by gain function 1503, in order to generate the output chrominance pixel values for output chrominance light-field image 1004C. The luminance pixel value from luminance light-field image 1003 is multiplied 1506 by the luminance scale factor determined by gain function 1504, in order to generate the output luminance pixel values for output luminance light-field image 1003C.
Gain functions 1503, 1504 may be generated with any of a variety of algorithms. For example, in at least one embodiment, gain functions 1503, 1504 may be generated by applying a blurring filter 1303 to luminance light-field image 1003, then determining 1501 a histogram of luminance values taken from the blurred version, and computing 1502 gain functions 1503, 1504 therefrom. For example, gain-function computation 1502 may be performed by using the histogram data from step 1501 to shape the gain functions such that the values of pixels 203 processed by gain functions 1503, 1504 are more evenly distributed in the range from 0.0 to 1.0. Thus, in effect, the gain function weights the luminance channel so that the scene has an appropriate amount of dynamic range.
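The following Python sketch illustrates one plausible gain-function computation of the kind described above: a histogram of the blurred luminance image drives an equalization-style mapping, and the resulting per-pixel scale factors are applied to the luminance and chrominance light-field images. The specific mapping and all parameter names are assumptions, not the prescribed algorithm.

import numpy as np
from scipy.ndimage import gaussian_filter   # reusing the blurring filter from the sketch above

def luminance_gain_function(luma, bins=256, sigma=2.0):
    """Build a gain lookup table from a histogram of the blurred luminance light-field
    image, shaped so that processed pixel values spread more evenly over [0.0, 1.0]."""
    blurred = gaussian_filter(luma, sigma)
    hist, edges = np.histogram(np.clip(blurred, 0.0, 1.0), bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                   # target output level for each luminance bin
    centers = 0.5 * (edges[:-1] + edges[1:])
    gain = cdf / np.maximum(centers, 1e-6)           # scale factor: output level / input level
    return centers, gain

def apply_tone_mapping(luma, chroma_uv, centers, luma_gain, chroma_gain):
    """Look up per-pixel scale factors from the luminance value and apply them to the
    luminance and chrominance light-field images; a chrominance gain table could be
    built analogously to the luminance one."""
    g_l = np.interp(luma, centers, luma_gain)
    g_c = np.interp(luma, centers, chroma_gain)
    return luma * g_l, chroma_uv * g_c[..., None]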
Synergy
The above-described techniques can be implemented singly or in any suitable combination. In at least one embodiment, they are implemented in combination so that they work synergistically to reduce color artifacts due to sensor saturation. For example, consider a scene with a region of increasing luminance but constant chrominance. The sensor region corresponding to this scene region may be divided into three adjacent sub-regions: an unsaturated sub-region, in which no pixels are saturated; a transition sub-region, in which some but not all pixels are saturated; and a blown-out sub-region, in which all pixels are saturated.
Pixel values in the unsaturated sub-region will not be changed by clamping, and their chrominance will not be changed by interpolation. Pixels in the blown-out sub-region will be clamped such that subsequent demosaicing gives them chrominances clustered around the estimated illumination color, with variation introduced by demodulation. The advanced compensation techniques described above may then be used to reduce these variations in pixel chrominance by interpolating toward the chrominance of the estimated scene-illumination color. Interpolation is enabled because 1) the sub-region is obviously blown out, and 2) the pixel chrominances do not vary too much from the chrominance of the estimated scene-illumination color.
If the chrominance of the scene region matches the chrominance of the estimated scene-illumination color, there will be no transition sub-region; rather, the unsaturated sub-region will be adjacent to the blown-out sub-region. If the chrominance of the scene region differs somewhat from the chrominance of the estimated scene-illumination color, there will be a transition sub-region. In this transition sub-region, clamping ensures that any large difference between pixel chrominance and the chrominance of the estimated scene-illumination color is the result of a true difference in the scene region, and not the result of sensor saturation (which, depending on gains, could substantially alter chrominance). Small differences will then be further reduced by the advanced compensation techniques described above as were small differences in the saturated sub-region. Large differences, which correspond to true differences in the saturated sub-region, will not be substantially changed by advanced compensation techniques, allowing them to be incorporated in the final image.
Variations
The techniques described herein can be extended to include any or all of the following, either singly or in any combination.
The present invention has been described in particular detail with respect to possible embodiments. Those of skill in the art will appreciate that the invention may be practiced in other embodiments. First, the particular naming of the components, capitalization of terms, the attributes, data structures, or any other programming or structural aspect is not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, formats, or protocols. Further, the system may be implemented via a combination of hardware and software, as described, or entirely in hardware elements, or entirely in software elements. Also, the particular division of functionality between the various system components described herein is merely exemplary, and not mandatory; functions performed by a single system component may instead be performed by multiple components, and functions performed by multiple components may instead be performed by a single component.
In various embodiments, the present invention can be implemented as a system or a method for performing the above-described techniques, either singly or in any combination. In another embodiment, the present invention can be implemented as a computer program product comprising a nontransitory computer-readable storage medium and computer program code, encoded on the medium, for causing a processor in a computing device or other electronic device to perform the above-described techniques.
Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment of the invention. The appearances of the phrase “in at least one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
Some portions of the above are presented in terms of algorithms and symbolic representations of operations on data bits within a memory of a computing device. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps (instructions) leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared and otherwise manipulated. It is convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. Furthermore, it is also convenient at times, to refer to certain arrangements of steps requiring physical manipulations of physical quantities as modules or code devices, without loss of generality.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “displaying” or “determining” or the like, refer to the action and processes of a computer system, or similar electronic computing module and/or device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Certain aspects of the present invention include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present invention can be embodied in software, firmware and/or hardware, and when embodied in software, can be downloaded to reside on and be operated from different platforms used by a variety of operating systems.
The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computing device. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, flash memory, solid state drives, magnetic or optical cards, application-specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. Further, the computing devices referred to herein may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
The algorithms and displays presented herein are not inherently related to any particular computing device, virtualized system, or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will be apparent from the description provided herein. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references above to specific languages are provided for disclosure of enablement and best mode of the present invention.
Accordingly, in various embodiments, the present invention can be implemented as software, hardware, and/or other elements for controlling a computer system, computing device, or other electronic device, or any combination or plurality thereof. Such an electronic device can include, for example, a processor, an input device (such as a keyboard, mouse, touchpad, trackpad, joystick, trackball, microphone, and/or any combination thereof), an output device (such as a screen, speaker, and/or the like), memory, long-term storage (such as magnetic storage, optical storage, and/or the like), and/or network connectivity, according to techniques that are well known in the art. Such an electronic device may be portable or nonportable. Examples of electronic devices that may be used for implementing the invention include: a mobile phone, personal digital assistant, smartphone, kiosk, server computer, enterprise computing device, desktop computer, laptop computer, tablet computer, consumer electronic device, television, set-top box, or the like. An electronic device for implementing the present invention may use any operating system such as, for example: Linux; Microsoft Windows, available from Microsoft Corporation of Redmond, Wash.; Mac OS X, available from Apple Inc. of Cupertino, Calif.; iOS, available from Apple Inc. of Cupertino, Calif.; and/or any other operating system that is adapted for use on the device.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of the above description, will appreciate that other embodiments may be devised which do not depart from the scope of the present invention as described herein. In addition, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the claims.