An imaging device includes an image sensor that includes imaging pixels that are disposed in a two-dimensional array and focus detection pixels disposed in part of the array of the imaging pixels, a coefficient setting circuit that sets a conversion coefficient to be used to convert an output from each focus detection pixel to an image output at the focus detection pixel, in correspondence to an aperture value set at an imaging optical system, and an estimating circuit that estimates the image output at the focus detection pixel based upon the conversion coefficient and the output from the focus detection pixel.
13. An image processing method comprising:
providing an image sensor that includes imaging pixels that are disposed in a two-dimensional array and focus detection pixels disposed in part of the array of the imaging pixels;
setting a conversion coefficient to be used to convert an output from each focus detection pixel to a calculated image output at the focus detection pixel in correspondence to an aperture value set at an imaging optical system; and
calculating the calculated image output at the focus detection pixel based upon the conversion coefficient having been set and the output from the focus detection pixel, wherein
the conversion coefficient is calculated as a ratio of an output from an imaging pixel to the output from the focus detection pixel.
1. An imaging device comprising:
an image sensor that includes imaging pixels that are disposed in a two-dimensional array and focus detection pixels disposed in part of the array of the imaging pixels;
a coefficient setting circuit that sets a conversion coefficient to be used to convert an output from each focus detection pixel to a calculated image output at the focus detection pixel, in correspondence to an aperture value set at an imaging optical system via which an image is formed on the image sensor; and
an arithmetic circuit that calculates the calculated image output at the focus detection pixel based upon the conversion coefficient set by the coefficient setting circuit and the output from the focus detection pixel, wherein
the conversion coefficient is calculated as a ratio of an output from an imaging pixel to the output from the focus detection pixel.
12. A camera comprising:
an image sensor that includes two-dimensionally arrayed imaging pixels and focus detection pixels, and captures an image formed via an imaging optical system with light fluxes that enter the imaging pixels via the imaging optical system and light fluxes that enter the focus detection pixels via the imaging optical system, passing through an exit pupil of the imaging optical system over areas different in size from each other; and
a correction circuit that corrects an image output generated based upon outputs from the imaging pixels and an image output generated based upon outputs from the focus detection pixels, with a conversion coefficient in correspondence to an aperture value set at the imaging optical system, wherein
the conversion coefficient is calculated as a ratio of an output from an imaging pixel to an output from a focus detection pixel.
17. A digital camera for converting a subject image into image data, comprising:
an imaging optical system including an aperture, an aperture value of which is adjustable;
an image sensor that includes imaging pixels that are disposed in a two-dimensional array and focus detection pixels disposed in part of the array of the imaging pixels, the imaging pixels outputting image output signals and the focus detection pixels outputting focus detection output signals;
a coefficient setting circuit that sets a conversion coefficient in correspondence to the aperture value of the aperture; and
an arithmetic circuit that calculates the image output signals at the focus detection pixels based upon the conversion coefficient set by the coefficient setting circuit and the focus detection output signals from the focus detection pixels, wherein:
the image data are generated based upon the image output signals from the imaging pixels and the image output signals calculated by the arithmetic circuit, wherein
the conversion coefficient is calculated as a ratio of an image output signal from an imaging pixel to a focus detection output signal from a focus detection pixel.
14. An image processing method comprising:
providing an image sensor that includes imaging pixels that are disposed in a two-dimensional array and focus detection pixels disposed in part of the array of the imaging pixels;
setting a conversion coefficient to be used to convert an output from each focus detection pixel to a calculated image output at the focus detection pixel in correspondence to an aperture value set at an imaging optical system; and
determining through arithmetic operation an output structure of the calculated image at the focus detection pixel based upon outputs from the imaging pixels around the focus detection pixel, and calculating the calculated image output at the focus detection pixel based upon the output structure, the output from the focus detection pixel and the conversion coefficient having been set, wherein
the conversion coefficient is calculated as a ratio of an output from an imaging pixel to the output from the focus detection pixel, and
the output structure is determined as an output composition ratio of a subset of a predetermined number of the outputs from the imaging pixels around the focus detection pixel, relative to the predetermined number of the outputs from the imaging pixels around the focus detection pixel.
9. An imaging device comprising:
an image sensor that includes imaging pixels that are disposed in a two-dimensional array and focus detection pixels disposed in part of the array of the imaging pixels;
a coefficient setting circuit that sets a conversion coefficient to be used to convert an output from each focus detection pixel to a calculated image output at the focus detection pixel, in correspondence to an aperture value set at an imaging optical system via which an image is formed on the image sensor; and
an arithmetic circuit that determines through arithmetic operation an output structure of the calculated image at the focus detection pixel based upon outputs from the imaging pixels around the focus detection pixel and calculates the calculated image output at the focus detection pixel based upon the output structure, the output from the focus detection pixel and the conversion coefficient set by the coefficient setting circuit, wherein
the conversion coefficient is calculated as a ratio of an output from an imaging pixel to the output from the focus detection pixel, and
the output structure is determined as an output composition ratio of a subset of a predetermined number of the outputs from the imaging pixels around the focus detection pixel, relative to the predetermined number of the outputs from the imaging pixels around the focus detection pixel.
15. An image processing method comprising:
providing an image sensor that includes imaging pixels that are disposed in a two-dimensional array and focus detection pixels disposed in part of the array of the imaging pixels;
setting a conversion coefficient to be used to convert an output from each focus detection pixel to a first calculated image output at the focus detection pixel in correspondence to an aperture value set at an imaging optical system via which an image is formed on the image sensor;
executing a first arithmetic processing to calculate a second calculated image output at the focus detection pixel by statistically processing outputs from the imaging pixels around the focus detection pixel;
executing a second arithmetic processing to determine through arithmetic operation an output structure of outputs from the imaging pixels around the focus detection pixel and calculate the first calculated image output at the focus detection pixel based upon the output structure, the output from the focus detection pixel and the conversion coefficient having been set; and
determining a third calculated image output at the focus detection pixel through weighted addition executed by individually weighting the second calculated image output calculated through the first arithmetic processing and the first calculated image output calculated through the second arithmetic processing, wherein
the conversion coefficient is calculated as a ratio of an output from an imaging pixel to the output from the focus detection pixel, and
the output structure is determined as an output composition ratio of a subset of a predetermined number of the outputs from the imaging pixels around the focus detection pixel, relative to the predetermined number of the outputs from the imaging pixels around the focus detection pixel.
10. An imaging device comprising:
an image sensor that includes imaging pixels that are disposed in a two-dimensional array and focus detection pixels disposed in part of the array of the imaging pixels;
a coefficient setting circuit that sets a conversion coefficient to be used to convert an output from each focus detection pixel to a first calculated image output at the focus detection pixel, in correspondence to an aperture value set at an imaging optical system via which an image is formed on the image sensor;
a first arithmetic circuit that calculates a second calculated image output at the focus detection pixel by statistically processing outputs from the imaging pixels around the focus detection pixel;
a second arithmetic circuit that calculates through arithmetic operation an output structure of outputs from the imaging pixels around the focus detection pixel and calculates the first calculated image output at the focus detection pixel based upon the output structure, the output from the focus detection pixel and the conversion coefficient set by the coefficient setting circuit; and
a determining circuit that determines a third calculated image output at the focus detection pixel through weighted addition executed by individually weighting the second calculated image output calculated by the first arithmetic circuit and the first calculated image output calculated by the second arithmetic circuit, wherein
the conversion coefficient is calculated as a ratio of an output from an imaging pixel to the output from the focus detection pixel, and
the output structure is determined as an output composition ratio of a subset of a predetermined number of the outputs from the imaging pixels around the focus detection pixel, relative to the predetermined number of the outputs from the imaging pixels around the focus detection pixel.
16. An image processing method comprising:
providing an image sensor that includes a plurality of types of imaging pixels with spectral sensitivity characteristics different from one another, disposed in a two-dimensional array, and focus detection pixels with sensitivity over a range containing all the spectral sensitivity characteristics, which are disposed in part of the array of the imaging pixels;
setting a conversion coefficient to be used to convert an output from each focus detection pixel to a first calculated image output at the focus detection pixel in correspondence to an aperture value set at an imaging optical system via which an image is formed on the image sensor;
executing a first arithmetic processing to correct outputs from imaging pixels disposed at low density based upon outputs from imaging pixels disposed at high density among the imaging pixels present around the focus detection pixel and calculate a second calculated image output at the focus detection pixel based upon the corrected outputs of the imaging pixels disposed at low density;
executing a second arithmetic processing to determine through arithmetic operation an output structure of outputs from the imaging pixels around the focus detection pixel and calculate the first calculated image output at the focus detection pixel based upon the output structure, the output from the focus detection pixel and the conversion coefficient having been set; and
determining a third calculated image output at the focus detection pixel based upon the second calculated image output calculated through the first arithmetic processing and the first calculated image output calculated through the second arithmetic processing, wherein
the conversion coefficient is calculated as a ratio of an output from an imaging pixel to the output from the focus detection pixel, and
the output structure is determined as an output composition ratio of a subset of a predetermined number of the outputs from the imaging pixels around the focus detection pixel, relative to the predetermined number of the outputs from the imaging pixels around the focus detection pixel.
11. An imaging device comprising:
an image sensor that includes a plurality of types of imaging pixels with spectral sensitivity characteristics different from one another, which are disposed in a two-dimensional array, and focus detection pixels with sensitivity over a range containing all the spectral sensitivity characteristics, which are disposed in part of the array of the imaging pixels;
a coefficient setting circuit that sets a conversion coefficient to be used to convert an output from each focus detection pixel to a first calculated image output at the focus detection pixel, in correspondence to an aperture value set at an imaging optical system via which an image is formed on the image sensor;
a first arithmetic circuit that corrects outputs of the imaging pixels disposed at low density based upon outputs from the imaging pixels disposed at high density among the imaging pixels present around the focus detection pixel, and calculates a second calculated image output at the focus detection pixel based upon the corrected outputs of the imaging pixels disposed at low density;
a second arithmetic circuit that determines through arithmetic operation an output structure of outputs from the imaging pixels around the focus detection pixel and calculates the first calculated image output at the focus detection pixel based upon the output structure, the output from the focus detection pixel and the conversion coefficient set by the coefficient setting circuit; and
a determining circuit that determines a third calculated image output at the focus detection pixel based upon the second calculated image output calculated by the first arithmetic circuit and the first calculated image output calculated by the second arithmetic circuit, wherein
the conversion coefficient is calculated as a ratio of an output from an imaging pixel to the output from the focus detection pixel, and
the output structure is determined as an output composition ratio of a subset of a predetermined number of the outputs from the imaging pixels around the focus detection pixel, relative to the predetermined number of the outputs from the imaging pixels around the focus detection pixel.
2. An imaging device according to claim 1, wherein
the image sensor includes a plurality of two-dimensionally arrayed pixel units each made up with a plurality of types of imaging pixels with different spectral sensitivity characteristics from one another disposed with a specific rule and also includes focus detection pixels with sensitivity over a range containing all spectral sensitivity characteristics of the pixel units, disposed within the array of the imaging pixels.
3. An imaging device according to claim 2, wherein
the pixel units each include three different types of pixels sensitive to red, green and blue, disposed in a Bayer array.
4. An imaging device according to claim 3, wherein
the focus detection pixels are disposed on the image sensor at positions corresponding to a horizontal row or a vertical row in which the imaging pixels sensitive to blue and green would otherwise be disposed along a straight line.
5. An imaging device according to claim 1, wherein
the imaging pixels and the focus detection pixels each include a micro-lens and a photoelectric conversion unit.
6. An imaging device according to claim 1, wherein
the focus detection pixels detect a pair of images formed with a pair of light fluxes passing through a pair of areas at an exit pupil of the imaging optical system.
7. An imaging device according to claim 1, wherein
the coefficient setting circuit stores in advance a table of conversion coefficients corresponding to the aperture value set at the imaging optical system and selects a conversion coefficient corresponding to the aperture value set at the imaging optical system by referencing the table.
8. An imaging device according to claim 7, wherein
the image sensor includes the focus detection pixels in correspondence to each of a plurality of focus detection areas set at a predetermined imaging plane of the imaging optical system; and
the coefficient setting circuit stores the table of conversion coefficients corresponding to the aperture value set at the imaging optical system in correspondence to each of the focus detection areas.
The disclosure of the following priority application is herein incorporated by reference:
Japanese Patent Application No. 2006-108955 filed Apr. 11, 2006
1. Field of the Invention
The present invention relates to an imaging device equipped with an image sensor that includes imaging pixels and focus detection pixels, a camera equipped with the imaging device and an image processing method adopted in an image sensor that includes imaging pixels and focus detection pixels.
2. Description of the Related Art
There is an imaging device known in the related art equipped with an image sensor having imaging pixels and focus detection pixels disposed together on a single substrate, which captures an image formed on the image sensor and also detects the focus adjustment state of the image (see Japanese Laid Open Patent Publication No. 2000-305010).
In the imaging device in the related art described above, the image output at a given focus detection pixel position, i.e., a virtual imaging pixel output corresponding to the focus detection pixel position, is obtained simply by averaging the outputs from the imaging pixels present around the focus detection pixel, or by simply averaging the focus detection pixel output and the outputs from the imaging pixels around the focus detection pixel. The virtual imaging pixel output obtained through such simple averaging therefore deviates from the output that an imaging pixel occupying the focus detection pixel position would have provided in an image captured with the aperture of the imaging optical system altered. This, in turn, may lead to the occurrence of color artifacts, a false pattern or a loss of pattern, resulting in poor image quality.
According to the 1st aspect of the invention, an imaging device comprises an image sensor that includes imaging pixels that are disposed in a two-dimensional array and focus detection pixels disposed in part of the array of the imaging pixels, a coefficient setting circuit that sets a conversion coefficient to be used to convert an output from each focus detection pixel to an image output at the focus detection pixel, in correspondence to an aperture value set at an imaging optical system via which an image is formed on the image sensor, and an estimating circuit that estimates the image output at the focus detection pixel based upon the conversion coefficient set by the coefficient setting circuit and the output from the focus detection pixel.
The image sensor may include a plurality of two-dimensionally arrayed pixel units each made up with a plurality of types of imaging pixels with different spectral sensitivity characteristics from one another disposed with a specific rule and also includes focus detection pixels with sensitivity over a range containing all spectral sensitivity characteristics of the pixel units, disposed within the array of the imaging pixels.
The pixel units may each include three different types of pixels sensitive to red, green and blue, disposed in a Bayer array.
The focus detection pixels can be disposed on the image sensor at positions corresponding to a horizontal row or a vertical row in which the imaging pixels sensitive to blue and green would otherwise be disposed along a straight line.
The imaging pixels and the focus detection pixels may each include a micro-lens and a photoelectric conversion unit.
The focus detection pixels can detect a pair of images formed with a pair of light fluxes passing through a pair of areas at an exit pupil of the imaging optical system.
The coefficient setting circuit may store in advance a table of conversion coefficients corresponding to the aperture value set at the imaging optical system and select a conversion coefficient corresponding to the aperture value set at the imaging optical system by referencing the table.
The image sensor may include the focus detection pixels in correspondence to each of a plurality of focus detection areas set at a predetermined imaging plane of the imaging optical system, and the coefficient setting circuit may store the table of conversion coefficients corresponding to the aperture value set at the imaging optical system in correspondence to each of the focus detection areas.
According to the 2nd aspect of the invention, an imaging device comprises an image sensor that includes imaging pixels that are disposed in a two-dimensional array and focus detection pixels disposed in part of the array of the imaging pixels, a coefficient setting circuit that sets a conversion coefficient to be used to convert an output from each focus detection pixel to an image output at the focus detection pixel, in correspondence to an aperture value set at an imaging optical system via which an image is formed on the image sensor, and an estimating circuit that determines through arithmetic operation an image output structure at the focus detection pixel based upon outputs from the imaging pixels around the focus detection pixel and estimates the image output at the focus detection pixel based upon the image output structure, the output from the focus detection pixel and the conversion coefficient set by the coefficient setting circuit.
According to the 3rd aspect of the invention, an imaging device comprises an image sensor that includes imaging pixels that are disposed in a two-dimensional array and focus detection pixels disposed in part of the array of the imaging pixels, a coefficient setting circuit that sets a conversion coefficient to be used to convert an output from each focus detection pixel to an image output at the focus detection pixel, in correspondence to an aperture value set at an imaging optical system via which an image is formed on the image sensor, a first estimating circuit that estimates an image output at the focus detection pixel by statistically processing outputs from the imaging pixels around the focus detection pixel, a second estimating circuit that determines through arithmetic operation an image output structure of outputs from the imaging pixels around the focus detection pixel and estimates an image output at the focus detection pixel based upon the image output structure, the output from the focus detection pixel and the conversion coefficient set by the coefficient setting circuit, and a determining circuit that determines an image output at the focus detection pixel through weighted addition executed by individually weighting the image output estimated by the first estimating circuit and the image output estimated by the second estimating circuit.
According to the 4th aspect of the invention, an imaging device comprises an image sensor that includes a plurality of types of imaging pixels with spectral sensitivity characteristics different from one another, which are disposed in a two-dimensional array, and focus detection pixels with sensitivity over a range containing all the spectral sensitivity characteristics, which are disposed in part of the array of the imaging pixels, a coefficient setting circuit that sets a conversion coefficient to be used to convert an output from each focus detection pixel to an image output at the focus detection pixel, in correspondence to an aperture value set at an imaging optical system via which an image is formed on the image sensor, a first estimating circuit that corrects outputs of the imaging pixels disposed at low density based upon outputs from the imaging pixels disposed at high density among the imaging pixels present around the focus detection pixel, and estimates an image output at the focus detection pixel based upon the corrected outputs of the imaging pixels disposed at low density, a second estimating circuit that determines through arithmetic operation an image output structure at the imaging pixels around the focus detection pixel and estimates an image output at the focus detection pixel based upon the image output structure, the output from the focus detection pixel and the conversion coefficient set by the coefficient setting circuit, and a determining circuit that determines the image output at the focus detection pixel based upon the image output estimated by the first estimating circuit and the image output estimated by the second estimating circuit.
According to the 5th aspect of the invention, a camera comprises an image sensor that includes two-dimensionally arrayed imaging pixels and focus detection pixels, and captures an image formed via an imaging optical system with light fluxes that enter the imaging pixels via the imaging optical system and light fluxes that enter the focus detection pixels via the imaging optical system, passing through an exit pupil of the imaging optical system over areas different in size from each other, and a correction circuit that corrects an image output generated based upon outputs from the imaging pixels and an image output generated based upon outputs from the focus detection pixels in correspondence to an aperture value set at the imaging optical system.
According to the 6th aspect of the invention, an image processing method comprises providing an image sensor that includes imaging pixels that are disposed in a two-dimensional array and focus detection pixels disposed in part of the array of the imaging pixels, setting a conversion coefficient to be used to convert an output from each focus detection pixel to an image output at the focus detection pixel in correspondence to an aperture value set at an imaging optical system, and estimating the image output at the focus detection pixel based upon the conversion coefficient having been set and the output from the focus detection pixel.
According to the 7th aspect of the invention, an image processing method comprises providing an image sensor that includes imaging pixels that are disposed in a two-dimensional array and focus detection pixels disposed in part of the array of the imaging pixels, setting a conversion coefficient to be used to convert an output from each focus detection pixel to an image output at the focus detection pixel in correspondence to an aperture value set at an imaging optical system, and determining through arithmetic operation an image output structure at the focus detection pixel based upon outputs from the imaging pixels around the focus detection pixel, and estimating the image output at the focus detection pixel based upon the image output structure, the output from the focus detection pixel and the conversion coefficient having been set.
According to the 8th aspect of the invention, an image processing method comprises providing an image sensor that includes imaging pixels that are disposed in a two-dimensional array and focus detection pixels disposed in part of the array of the imaging pixels, setting a conversion coefficient to be used to convert an output from each focus detection pixel to an image output at the focus detection pixel in correspondence to an aperture value set at an imaging optical system via which an image is formed on the image sensor, executing a first estimation processing to estimate an image output at the focus detection pixel by statistically processing outputs from the imaging pixels around the focus detection pixel, executing a second estimation processing to determine through arithmetic operation an image output structure at the imaging pixels around the focus detection pixel and estimate an image output at the focus detection pixel based upon the image output structure, the output from the focus detection pixel and the conversion coefficient having been set, and determining the image output at the focus detection pixel through weighted addition executed by individually weighting the image output estimated through the first estimation processing and the image output estimated through the second estimation processing.
According to the 9th aspect of the invention, an image processing method comprises providing an image sensor that includes a plurality of types of imaging pixels with spectral sensitivity characteristics different from one another, disposed in a two-dimensional array, and focus detection pixels with sensitivity over a range containing all the spectral sensitivity characteristics, which are disposed in part of the array of the imaging pixels, setting a conversion coefficient to be used to convert an output from each focus detection pixel to an image output at the focus detection pixel in correspondence to an aperture value set at an imaging optical system via which an image is formed on the image sensor, executing a first estimation processing to correct outputs from imaging pixels disposed at low density based upon outputs from imaging pixels disposed at a high density among the imaging pixels present around the focus detection pixel and estimating an image output at the focus detection pixel based upon the corrected outputs of the imaging pixels disposed at low density, executing a second estimation processing to determine through arithmetic operation an image output structure at the imaging pixels around the focus detection pixel and estimate an image output at the focus detection pixel based upon the image output structure, the output from the focus detection pixel and the conversion coefficient having been set, and determining the image output at the focus detection pixel based upon the image output estimated through the first estimation processing and the image output estimated through the second estimation processing.
An embodiment in which the present invention is adopted in a digital still camera serving as an imaging device is now explained.
The exchangeable lens unit 202 includes lenses 205-207, an aperture 208 and a lens drive control device 209. It is to be noted that the lens 206 is a zooming lens and that the lens 207 is a focusing lens. The lens drive control device 209, constituted with a CPU and its peripheral components, controls the drive of the focusing lens 207 and the aperture 208, detects the positions of the zooming lens 206, the focusing lens 207 and the aperture 208 and transmits lens information and receives camera information by communicating with a control device in the camera body 203.
An image sensor 211, a camera drive control device 212, a memory card 213, an LCD driver 214, an LCD 215, an eyepiece lens 216 and the like are mounted at the camera body 203. The image sensor 211, set at the predetermined imaging plane (predetermined focal plane) of the exchangeable lens unit 202, captures a subject image formed through the exchangeable lens unit 202 and outputs image signals. At the image sensor 211, pixels used for imaging (hereafter simply referred to as imaging pixels) are disposed two-dimensionally, and rows of pixels used for focus detection (hereafter simply referred to as focus detection pixels), instead of imaging pixels, are disposed in the two-dimensional array over areas corresponding to focus detection positions.
The camera drive control device 212, constituted with a CPU and its peripheral components, controls the drive of the image sensor 211, processes the captured image, executes focus detection and focus adjustment for the exchangeable lens unit 202, controls the aperture 208, controls display operation at the LCD 215, communicates with the lens drive control device 209 and controls the overall operational sequence in the camera. It is to be noted that the camera drive control device 212 communicates with the lens drive control device 209 via an electrical contact point 217 at the mount unit 204.
The memory card 213 is an image storage device in which captured images are stored. The LCD 215 is used as a display unit of a liquid crystal viewfinder (EVF: electronic viewfinder). The photographer is able to visually check a captured image displayed at the LCD 215 via the eyepiece lens 216.
The subject image formed on the image sensor 211 after passing through the exchangeable lens unit 202 undergoes photoelectric conversion at the image sensor 211 and the post-photoelectric conversion output is provided to the camera drive control device 212. The camera drive control device 212 determines through arithmetic operation the defocus amount at a focus detection position based upon the outputs from the focus detection pixels and transmits the defocus amount to the lens drive control device 209. In addition, the camera drive control device 212 provides image signals generated based upon the outputs from the imaging pixels to the LCD driver 214 so as to display the captured image at the LCD 215 and also stores the image signals into the memory card 213.
The lens drive control device 209 detects the positions of the zooming lens 206, the focusing lens 207 and the aperture 208 and obtains through arithmetic operation the lens information based upon the detected positions. It is to be noted that the lens information corresponding to the detected positions may be selected from a lookup table prepared in advance. The lens information is then provided to the camera drive control device 212. In addition, the lens drive control device 209 calculates a lens drive amount indicating the extent to which the lens is to be driven based upon the defocus amount received from the camera drive control device 212, and controls the drive of the focusing lens 207 based upon the lens drive amount.
The overall spectral characteristics of the green pixels, the red pixels and the blue pixels, integrating the spectral sensitivities of the color filters corresponding to the individual colors, the spectral sensitivity of the photodiodes engaged in photoelectric conversion and the spectral characteristics of infrared clipping filters (not shown), are indicated in the graph presented in
It is to be noted that in order to obtain a sufficient light amount, no color filters are disposed at the focus detection pixels and thus, the focus detection pixels have the spectral characteristics shown in
As shown in
As shown in
It is to be noted that the color filters may be arranged in an array other than the Bayer array shown in
Next, in reference to
An exit pupil 90 of the exchangeable lens unit 202 is set at a position assumed over a distance d4 to the front of the micro-lenses 50 and 60 disposed on the predetermined imaging plane of the exchangeable lens unit 202. The distance d4 takes a value determined in correspondence to the curvature and the refractive index of the micro-lenses 50 and 60, the distance between the micro-lenses 50 and 60 and the photoelectric conversion units 52/53 and 62/63 and the like. In the description, the distance d4 is referred to as a range-finding pupil distance.
The micro-lenses 50 and 60 are set at the predetermined imaging plane of the exchangeable lens unit 202. The shapes of the pair of photoelectric conversion units 52 and 53 are projected via the micro-lens 50 set on the optical axis 91 onto the exit pupil 90 set apart from the micro-lens 50 by the projection distance d4 and the projected shapes define range-finding pupils 92 and 93. The shapes of the pair of photoelectric conversion units 62 and 63 are projected via the micro-lens 60 set off the optical axis 91 onto the exit pupil 90 set apart by the projection distance d4 and the projected shapes define range-finding pupils 92 and 93. Namely, the projecting direction for each pixel is determined so that the projected shapes (range-finding pupils 92 and 93) of the photoelectric conversion units in the individual pixels are aligned on the exit pupil 90 set over the projection distance d4.
The photoelectric conversion unit 52 outputs a signal corresponding to the intensity of an image formed on the micro-lens 50 with a focus detection light flux 72 having passed through the range-finding pupil 92 and having advanced toward the micro-lens 50. The photoelectric conversion unit 53 outputs a signal corresponding to the intensity of an image formed on the micro-lens 50 with a focus detection light flux 73 having passed through the range-finding pupil 93 and having advanced toward the micro-lens 50. Also, the photoelectric conversion unit 62 outputs a signal corresponding to the intensity of an image formed on the micro-lens 60 with a focus detection light flux 82 having passed through the range-finding pupil 92 and having advanced toward the micro-lens 60. The photoelectric conversion unit 63 outputs a signal corresponding to the intensity of an image formed on the micro-lens 60 with a focus detection light flux 83 having passed through the range-finding pupil 93 and having advanced toward the micro-lens 60. It is to be noted that the focus detection pixels 311 are arrayed in a direction matching the direction along which the pair of range-finding pupils are separated from each other.
Numerous focus detection pixels each structured as described above are arranged in a straight row and the outputs from the pairs of photoelectric conversion units at the individual pixels are integrated into output groups each corresponding to the range-finding pupils 92 and 93. Thus, information related to the intensity distribution of the pair of images formed on the focus detection pixel row with the individual focus detection light fluxes passing through the range-finding pupil 92 and the range-finding pupil 93 is obtained. Next, image shift detection calculation processing (correlational processing, phase difference detection processing) to be detailed later is executed by using the information thus obtained so as to detect the image shift amount between the pair of images through the pupil division-type detection method. The image shift amount is then multiplied by a predetermined conversion coefficient and, as a result, the extent of deviation (defocus amount) of the current image forming plane (the image forming plane on which the image is formed at the focus detection position corresponding to a specific micro-lens array position on the predetermined imaging plane) relative to the predetermined imaging plane can be calculated.
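Expressed as a minimal Python sketch, and assuming the paired photoelectric conversion outputs of one focus detection pixel row are available as a list of (pupil 92 output, pupil 93 output) tuples, the pair of image data sets used by the image shift detection calculation could be assembled as follows; the function name and interface are illustrative assumptions, not part of the embodiment.

```python
# Illustrative sketch: gathering the paired photoelectric conversion outputs
# of a focus detection pixel row into the two image data sets corresponding
# to the range-finding pupils 92 and 93. The interface is an assumption.

def build_image_pair(focus_detection_row):
    e = [pair[0] for pair in focus_detection_row]  # image via range-finding pupil 92
    f = [pair[1] for pair in focus_detection_row]  # image via range-finding pupil 93
    return e, f
```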
It is to be noted that
An area 95 corresponds to an aperture (F 1) at the brightest setting, whereas an area 96 corresponds to an aperture (F 5.6) at a dark setting. The area 94 is greater than the area 95 or 96 and thus, the light flux received at the imaging pixel 310 is restricted at the aperture. In other words, the output from the imaging pixel 310 changes in correspondence to the aperture value. The range-finding pupils 92 and 93 are smaller than the area 95 but are larger than the area 96. This means that the outputs from the focus detection pixels 311 remain unchanged even when the aperture value changes, as long as the aperture value corresponds to a setting brighter than F 2.8. However, if the aperture value corresponds to a setting darker than F 2.8, the outputs of the focus detection pixels 311 change in correspondence to the aperture value.
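Because the imaging pixel output varies with the aperture value while the focus detection pixel output stays constant at settings brighter than F 2.8, the ratio between the two outputs, and hence the conversion coefficient described later in reference to expression (10), must be set per aperture value. The following is a minimal sketch of such a table lookup, assuming linearly interpolated entries; the tabulated ratios are placeholders, not measured values.

```python
# Minimal sketch of selecting an output-ratio conversion coefficient in
# correspondence to the aperture value set at the imaging optical system.
# The numeric ratios below are illustrative placeholders.

KS_TABLE = {1.0: 0.7, 1.4: 0.8, 2.0: 0.9, 2.8: 1.0, 4.0: 1.3, 5.6: 1.7}

def select_ks(f_value):
    """Return the conversion coefficient Ks for the given F value,
    interpolating linearly between tabulated aperture values."""
    keys = sorted(KS_TABLE)
    if f_value <= keys[0]:
        return KS_TABLE[keys[0]]
    if f_value >= keys[-1]:
        return KS_TABLE[keys[-1]]
    for lo, hi in zip(keys, keys[1:]):
        if lo <= f_value <= hi:
            t = (f_value - lo) / (hi - lo)
            return KS_TABLE[lo] + t * (KS_TABLE[hi] - KS_TABLE[lo])
```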
In step S120 following step S110, data are read out from the focus detection pixel row. It is to be noted that the focus detection area has been selected by the photographer via a selection means (not shown). In step S130 following step S120, the defocus amount is calculated by executing image shift detection calculation processing based upon the pair of image data corresponding to the focus detection pixel row.
The image shift detection calculation processing (correlational algorithm) is now explained in reference to
C(L)=Σ|e(i+L)−f(i)| (1)
L in expression (1) is an integer representing the relative shift amount indicated in units corresponding to the pitch at which the pair of sets of data are detected. In addition, L assumes a value within a range Lmin to Lmax (−5 to +5 in the figure). Σ indicates total sum calculation over a range expressed as i=p to q, with p and q satisfying a conditional expression 1≦p<q≦m. The specific values assumed for p and q define the size of the focus detection area.
As shown in
x=kj+D/SLOP (2)
C(x)=C(kj)−|D| (3)
D={C(kj−1)−C(kj+1)}/2 (4)
SLOP=MAX{C(kj+1)−C(kj), C(kj−1)−C(kj)} (5)
In addition, a defocus amount DEF representing the extent of defocusing of the subject image plane relative to the predetermined imaging plane can be determined as expressed in (6) below based upon the shift amount x having been calculated as expressed in (2).
DEF=KX·PY·x (6)
PY in expression (6) represents the detection pitch, whereas KX in expression (6) represents the conversion coefficient that is determined in correspondence to the opening angle formed with the gravitational centers of the pair of range-finding pupils.
The judgment as to whether or not the calculated defocus amount DEF is reliable is made as follows. As shown in
If the level of correlation between the pair of sets of data is low and the correlation quantity C(L) does not dip at all over the shift range Lmin to Lmax, as shown in
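Gathering expressions (1) through (6) and the reliability judgment just described into one place, a compact sketch might look as follows; the threshold values and the treatment of a minimum falling at the edge of the shift range are illustrative assumptions.

```python
# Sketch of the image shift detection calculation, expressions (1)-(6).
# e, f: pair of data sets from the focus detection pixel row.
# PY: detection pitch; KX: conversion coefficient determined by the opening
# angle formed with the gravitational centers of the range-finding pupils.
# The reliability thresholds c_max and slop_min are assumed values.

def correlation_quantity(e, f, L_min=-5, L_max=5):
    """C(L) of expression (1), with the summation range chosen so that
    e[i + L] stays within bounds for every shift amount L."""
    p = max(0, -L_min)
    q = min(len(e) - L_max, len(f))
    return {L: sum(abs(e[i + L] - f[i]) for i in range(p, q))
            for L in range(L_min, L_max + 1)}

def defocus_amount(e, f, PY, KX, c_max=100.0, slop_min=10.0):
    C = correlation_quantity(e, f)
    kj = min(C, key=C.get)                    # shift giving the smallest C(L)
    if kj - 1 not in C or kj + 1 not in C:
        return None                           # minimum at the range edge
    D = (C[kj - 1] - C[kj + 1]) / 2           # expression (4)
    SLOP = max(C[kj + 1] - C[kj], C[kj - 1] - C[kj])  # expression (5)
    if SLOP <= 0:
        return None
    x = kj + D / SLOP                         # minimal-value shift, expression (2)
    Cx = C[kj] - abs(D)                       # interpolated minimum, expression (3)
    if Cx > c_max or SLOP < slop_min:
        return None                           # focus detection not possible
    return KX * PY * x                        # defocus amount DEF, expression (6)
```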
To continue the explanation of the camera operations in reference to
It is to be noted that if it is decided in step S140 that the focus detection is not possible, too, the operation proceeds to step S150. In this situation, a scan drive instruction is transmitted to the lens drive control device 209 in the exchangeable lens unit 202. In response, the lens drive control device 209 drives the focusing lens 207 in the exchangeable lens unit 202 to scan through the infinity to close-up range. Once the processing in step S150 is executed, the operation returns to step S110 to repeatedly execute the operations described above.
If, on the other hand, it is decided in step S140 that the focusing lens 207 is set near the focus match position, the operation proceeds to step S160. In step S160, a decision is made as to whether or not a shutter release has been executed through an operation at a shutter release button (not shown). If it is decided that a shutter release has not been executed, the operation returns to step S110 to repeatedly execute the operations described above. If, on the other hand, it is decided that a shutter release has been executed, the operation proceeds to step S170.
In step S170, an aperture adjustment instruction is transmitted to the lens drive control device 209 in the exchangeable lens unit 202 to adjust the aperture value at the exchangeable lens unit 202 to a control F value (an F value set by the user or an automatically selected F value). Once the aperture control ends, the image sensor 211 is engaged in imaging operation and image data from the imaging pixels 310 and all the focus detection pixels 311 at the image sensor 211 are read out.
In step S180 following step S170, image data at pixel positions occupied by the focus detection pixels in the focus detection pixel row are interpolated based upon the data output from the focus detection pixels 311 and the data output from the surrounding imaging pixels 310. This interpolation processing is to be described in detail later. In step S190 following step S180, image data constituted with the data output from the imaging pixels 310 and the interpolation data at the focus detection pixels 311 are saved into the memory card 213. Once the image data are saved, the operation returns to step S110 to repeatedly execute the operations described above.
After starting the interpolation processing in step S200, information related to the characteristics of the image surrounding the focus detection pixel row is obtained through calculation in step S210. Namely, the parameters (Gau, Gad, Rau, Rad, Bau and Bad) indicating the average values of the individual colors on the two sides above and below the focus detection pixel row, the parameters (Gnu, Gnd, Rnu, Rnd, Bnu and Bnd) indicating the extents of variance with regard to the individual colors on the two sides above and below the focus detection pixel row, and the color composition ratios (Kg, Kr and Kb) around the focus detection pixels are calculated.
Gau=(G1+G2+G3+G4+G5)/5,
Gad=(G6+G7+G8+G9+G10)/5,
Rau=(R1+R2)/2,
Rad=(R3+R4)/2,
Bau=(B1+B2+B3)/3,
Bad=(B4+B5+B6)/3,
Gnu=(|G3−G1|+|G1−G4|+|G4−G2|+|G2−G5|)/(4×Gau),
Gnd=(|G6−G9|+|G9−G7|+|G7−G10|+|G10−G8|)/(4×Gad),
Rnu=|R1−R2|/Rau,
Rnd=|R3−R4|/Rad,
Bnu=(|B1−B2|+|B2−B3|)/(2×Bau),
Bnd=(|B4−B5|+|B5−B6|)/(2×Bad),
Kg=(Gau+Gad)/(Gau+Gad+Rau+Rad+Bau+Bad),
Kr=(Rau+Rad)/(Gau+Gad+Rau+Rad+Bau+Bad),
Kb=(Bau+Bad)/(Gau+Gad+Rau+Rad+Bau+Bad) (7)
It is to be noted that the color composition ratios Kg, Kr and Kb each indicate the output composition ratio of the imaging pixel output that would be provided by the specific color imaging pixel that would otherwise be positioned at the target focus detection pixel, relative to the outputs from the imaging pixels corresponding to all the colors, calculated based upon the outputs from the imaging pixels surrounding the focus detection pixel.
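A direct transcription of expression (7) into Python is shown below for concreteness; the dictionary keyed by the pixel labels G1 to G10, R1 to R4 and B1 to B6 sampled above and below the focus detection pixel row is an assumed interface.

```python
# Sketch of expression (7): image characteristics around a focus detection
# pixel row. px maps the pixel labels used in the text to pixel outputs.

def image_characteristics(px):
    # average values of each color above (u) and below (d) the row
    Gau = (px['G1'] + px['G2'] + px['G3'] + px['G4'] + px['G5']) / 5
    Gad = (px['G6'] + px['G7'] + px['G8'] + px['G9'] + px['G10']) / 5
    Rau = (px['R1'] + px['R2']) / 2
    Rad = (px['R3'] + px['R4']) / 2
    Bau = (px['B1'] + px['B2'] + px['B3']) / 3
    Bad = (px['B4'] + px['B5'] + px['B6']) / 3
    # extents of variance of each color above and below the row
    Gnu = (abs(px['G3'] - px['G1']) + abs(px['G1'] - px['G4'])
           + abs(px['G4'] - px['G2']) + abs(px['G2'] - px['G5'])) / (4 * Gau)
    Gnd = (abs(px['G6'] - px['G9']) + abs(px['G9'] - px['G7'])
           + abs(px['G7'] - px['G10']) + abs(px['G10'] - px['G8'])) / (4 * Gad)
    Rnu = abs(px['R1'] - px['R2']) / Rau
    Rnd = abs(px['R3'] - px['R4']) / Rad
    Bnu = (abs(px['B1'] - px['B2']) + abs(px['B2'] - px['B3'])) / (2 * Bau)
    Bnd = (abs(px['B4'] - px['B5']) + abs(px['B5'] - px['B6'])) / (2 * Bad)
    # color composition ratios around the focus detection pixels
    total = Gau + Gad + Rau + Rad + Bau + Bad
    Kg, Kr, Kb = (Gau + Gad) / total, (Rau + Rad) / total, (Bau + Bad) / total
    return dict(Gau=Gau, Gad=Gad, Rau=Rau, Rad=Rad, Bau=Bau, Bad=Bad,
                Gnu=Gnu, Gnd=Gnd, Rnu=Rnu, Rnd=Rnd, Bnu=Bnu, Bnd=Bnd,
                Kg=Kg, Kr=Kr, Kb=Kb)
```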
In step S220 following step S210, the data from the imaging pixels 310 surrounding the focus detection pixel 311 are statistically processed to calculate image data S1 at the particular focus detection pixel 311. The image data S1, which may be either image data S1 (X) corresponding to the focus detection pixel X or image data S1 (Y) corresponding to the focus detection pixel Y, can be obtained as expressed in (8) below.
S1(X)=(B2+B5)/2,
S1(Y)=(G3+G4+G6+G7)/4 (8)
The blue pixels, disposed at a lower arraying density compared to the green pixels, are located further away from the focus detection pixel X (output AF 3). For this reason, a pattern with fine lines or the like may be present between the focus detection pixel X (output AF 3) and the surrounding blue pixels, greatly altering the image pattern and resulting in an error in S1 (X) calculated as expressed in (8).
This problem may be prevented by first correcting the blue pixel outputs to the blue pixel outputs that would be provided at positions near the focus detection pixel X (output AF 3), in correspondence to the outputs from the green pixels near the blue pixels and the outputs from the green pixels near the focus detection pixel X (output AF 3), and then taking the average of the corrected blue pixel outputs, as expressed in (9) below.
S1(X)=B2×G4/(G1+G2)+B5×G7/(G9+G10) (9)
The extent of the error in the data generated as expressed in (9) is reduced, since the average is taken after converting the blue pixel outputs from the blue pixels disposed at low density to blue pixel outputs that would be provided at positions in the vicinity of the focus detection pixel X (output AF 3) in correspondence to the changes occurring in the outputs from the green pixels disposed at higher density.
The method adopted when calculating through statistical processing the image data S1 at the focus detection pixel X or Y is not limited to the averaging expressed in (8) and (9). For instance, image data corresponding to the focus detection pixel X or Y may be obtained through linear interpolation by using the outputs from nearby pixels assuming a plurality of positions along the vertical direction, through interpolation executed by using expressions of multiple degrees, i.e., expressions of at least second degrees, or through median processing.
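A minimal sketch of the statistical estimates of expressions (8) and (9), under the same assumed pixel-label interface as above:

```python
# Sketch of the statistical image data S1 of expressions (8) and (9).
# X: focus detection pixel at a blue pixel position (output AF 3);
# Y: focus detection pixel at a green pixel position.

def s1_green(px):
    """Expression (8): average of the four surrounding green pixels."""
    return (px['G3'] + px['G4'] + px['G6'] + px['G7']) / 4

def s1_blue_simple(px):
    """Expression (8): simple average of the two nearest blue pixels."""
    return (px['B2'] + px['B5']) / 2

def s1_blue_corrected(px):
    """Expression (9): each low-density blue output is first scaled by the
    change in the surrounding high-density green outputs, and the scaled
    outputs are then combined."""
    return (px['B2'] * px['G4'] / (px['G1'] + px['G2'])
            + px['B5'] * px['G7'] / (px['G9'] + px['G10']))
```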
In step S230, the output from the focus detection pixel 311 is corrected in correspondence to the color composition ratios calculated by using the data from the imaging pixels 310 surrounding the focus detection pixel 311, and thus, image data S2 at the focus detection pixel 311 are calculated. The image data S2 may be either image data S2 (X) (a virtual imaging pixel output calculated in correspondence to the position occupied by the focus detection pixel X) corresponding to the focus detection pixel X or image data S2 (Y) (a virtual imaging pixel output calculated in correspondence to the position occupied by the focus detection pixel Y) corresponding to the focus detection pixel Y, and are calculated as expressed in (10) below.
S2(X)=AF3×Kb×Ks×Kc,
S2(Y)=AF2×Kg×Ks×Kc (10)
The coefficient Ks in expression (10) is the output ratio coefficient having been explained in reference to
The coefficient Kc in expression (10) is an adjustment coefficient used to adjust the discrepancy in the quantity of received light, which is attributed to the difference between the spectral characteristics of the focus detection pixels 311 and the spectral characteristics of the imaging pixels 310. A value obtained through advance measurement is stored as the coefficient Kc in the camera drive control device 212. The adjustment coefficient Kc may be calculated as expressed in (11) below with Sg, Sr, Sb and Sa respectively representing the outputs from the green pixels, the red pixels, the blue pixels and the focus detection pixels generated as an image of a planar light source achieving flat light emission characteristics over the visible light range is captured at the image sensor 211 via an imaging optical system with an aperture at a setting darker than F 2.8.
Kc=(Sg+Sr+Sb)/Sa (11)
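Expressed as code, the conversion of expression (10) and the adjustment coefficient of expression (11) reduce to the following sketch; the function names are assumptions.

```python
# Sketch of expressions (10) and (11): converting a focus detection pixel
# output to a virtual imaging pixel output.

def s2_blue(AF3, Kb, Ks, Kc):
    """S2 (X): virtual blue-pixel output at the focus detection pixel X."""
    return AF3 * Kb * Ks * Kc

def s2_green(AF2, Kg, Ks, Kc):
    """S2 (Y): virtual green-pixel output at the focus detection pixel Y."""
    return AF2 * Kg * Ks * Kc

def adjustment_coefficient(Sg, Sr, Sb, Sa):
    """Expression (11): Kc measured in advance with a planar light source
    captured at an aperture setting darker than F 2.8."""
    return (Sg + Sr + Sb) / Sa
```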
In step S240, a decision is made, as expressed in (12) below, as to whether or not the image area around the focus detection pixel 311 is uniform. When the focus detection pixel X is the target pixel, the surrounding image area is judged to be uniform if the following conditions (12) are satisfied.
Bnu<T1 and Bnd<T1 (12)
T1 in expression (12) represents a predetermined threshold value. If, on the other hand, the focus detection pixel Y is the target pixel, the surrounding image is judged to be uniform if the following conditions (13) are satisfied.
Gnu<T2 and Gnd<T2 (13)
T2 in expression (13) represents a predetermined threshold value. If the surrounding image is judged to be nonuniform, the operation proceeds to step S290 to designate the data S1 having been obtained through the statistical processing executed in step S220 as the image data S at the focus detection pixel 311. Namely, if the image is not uniform, the data S1 obtained through the statistical processing, which is simple averaging processing, are used, since the processing for calculating the color composition ratio data S2 would be complicated under such circumstances. Even if there is an error in the data S1 generated through the statistical processing, the error will not be noticeable, since the surrounding image area is not uniform and a significant change occurs over the surrounding image area.
If, on the other hand, it is decided that the surrounding image is uniform, the operation proceeds to step S250. In step S250, a decision is made by comparing the sets of information sampled around the focus detection pixel row, i.e., from the sides above and below the focus detection pixel row, as to whether or not there is an edge pattern that indicates a change in the pixel outputs along the direction perpendicular to the focus detection pixel row. For the target focus detection pixel X, it is determined that an edge pattern indicating a change in the pixel outputs along the direction perpendicular to the focus detection pixel row is present if the condition expressed in (14) is satisfied.
|Bau−Bad|>T3 (14)
T3 in expression (14) represents a predetermined threshold value. For the target focus detection pixel Y, it is determined that an edge pattern indicating a change in the pixel outputs along the direction perpendicular to the focus detection pixel row is present if the condition expressed in (15) is satisfied.
|Gau−Gad|>T4 (15)
T4 in expression (15) represents a predetermined threshold value.
If it is decided that there is an edge pattern indicating a change in the imaging pixel outputs along the direction perpendicular to the focus detection pixel row, the operation proceeds to step S260. In step S260, the data S1 resulting from the statistical processing and the color composition ratio data S2 are weighted in correspondence to an edge level Kbe or Kge and image data S at the focus detection pixel 311 are obtained through weighted addition. Once the processing in step S260 is executed, the operation returns from step S300 to the program shown in
The edge levels Kbe and Kge, each indicating the steepness of the edge slope and the height of the step formed at the edge, are calculated as follows. If the focus detection pixel X is the target pixel, the edge level Kbe is calculated as;
Kbe=|Bnu−Bnd|/(T5−T3),
IF Kbe>1 THEN Kbe=1,
S=(1−Kbe)×S1(X)+Kbe×S2(X) (16)
T5 in expression (16) represents a predetermined threshold value (>T3). If the edge level Kbe is high (=1), S=S2 (X). If the focus detection pixel Y is the target pixel, the edge level Kge is calculated as;
Kge=|Gnu−Gnd|/(T6−T4),
IF Kge>1 THEN Kge=1,
S=(1−Kge)×S1(Y)+Kge×S2(Y) (17)
T6 in expression (17) represents a predetermined threshold value (>T4). If the edge level Kge is high (=1), S=S2 (Y).
Over the range in which the edge level expressed in (16) or (17) shifts from low to high, the image data S at the focus detection pixel 311 are obtained as the sum of the data S1 resulting from the statistical processing and the data S2 calculated based upon the color composition ratio, which are individually weighted by using the edge level Kbe or Kge. As a result, stable image data can be obtained, since the image data S do not change abruptly regardless of whether the edge judgment is affirmative or negative. Values that will provide the optimal image quality are selected for the predetermined threshold values T3 to T6 in correspondence to the characteristics and the like of the optical low pass filter (not shown) mounted at the image sensor 211. For instance, if an optical low pass filter achieving a significant filtering effect is mounted, the edge pattern will be blurred and accordingly, less rigorous (smaller) values should be selected for the predetermined threshold values T3 to T6.
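As a minimal sketch, the edge-level weighted addition of expression (16) can be written as follows for the focus detection pixel X; expression (17) for the pixel Y is analogous with Gnu, Gnd, T4 and T6, and the threshold values shown are illustrative assumptions.

```python
# Sketch of expression (16): edge-level weighted addition for the focus
# detection pixel X. T3 and T5 are assumed threshold values.

def blend_on_edge(S1_X, S2_X, Bnu, Bnd, T3=0.05, T5=0.25):
    Kbe = min(abs(Bnu - Bnd) / (T5 - T3), 1.0)   # edge level, clipped at 1
    return (1 - Kbe) * S1_X + Kbe * S2_X         # image data S
```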
If it is decided that there is no edge pattern indicating a change in the imaging pixel outputs along the direction perpendicular to the focus detection pixel row, the operation proceeds to step S270. In step S270, the focus detection pixel output and the information sampled from the surrounding pixels present above and below the focus detection pixel are compared to make a decision as to whether or not a fine line pattern is present over the focus detection pixel row or over an area near the focus detection pixel row. The term “fine line pattern” in this context refers to a peak pattern showing a pixel output spiking upward, away from the average value of the pixel output, or a bottom pattern showing a pixel output spiking downward, away from the average value of the pixel output. If the focus detection pixel X is the target pixel, a fine line pattern is judged to be present when the condition in (18) is satisfied.
|S1(X)−S2(X)|>T7 (18)
T7 in expression (18) represents a predetermined threshold value. Alternatively, the decision may be made as expressed in (19) below instead of expression (18).
|(Bau+Bad)/2−S2(X)|>T7 (19)
T7 in expression (19) represents a predetermined threshold value.
If the focus detection pixel Y is the target pixel, a fine line pattern is judged to be present when the condition in (20) is satisfied.
|S1(Y)−S2(Y)|>T8 (20)
T8 in expression (20) represents a predetermined threshold value. Alternatively, the decision may be made as expressed in (21) below instead of expression (20).
|(Gau+Gad)/2−S2(Y)|>T8 (21)
T8 in expression (21) represents a predetermined threshold value.
If it is decided that there is no fine line pattern, the operation proceeds to step S290 to designate the data S1 obtained through the statistical processing executed in step S220 as the image data at the focus detection pixel 311, since the data obtained through the statistical processing do not manifest a significant error as long as the image surrounding the focus detection pixel 311 is uniform and no image pattern is present over the focus detection pixel row.
If, on the other hand, it is decided that a fine line pattern is present, the operation proceeds to step S280. In step S280, the data S1 obtained through the statistical processing and the color composition ratio data S2 are weighted in correspondence to a peak/bottom level Kbp or Kgp and the image data S at the focus detection pixel 311 are obtained as the sum of the weighted data S1 and S2. Subsequently, the operation returns from step S300 to the program shown in
The peak/bottom levels Kbp and Kgp, each indicating the level of the peak or the bottom and also the steepness of the peak or the bottom, are calculated as follows. If the focus detection pixel X is the target pixel, Kbp is calculated as:
Kbp=|S1(X)−S2(X)|/(T9−T7),
or
Kbp=|(Bau+Bad)/2−S2(X)|/(T9−T7),
IF Kbp>1 THEN Kbp=1,
S=(1−Kbp)×S1(X)+Kbp×S2(X) (22)
T9 in (22) represents a predetermined threshold value (>T7). As indicated above, if the peak/bottom level Kbp is high (=1), S=S2(X).
If the focus detection pixel Y is the target pixel, Kgp is calculated as:
Kgp=|S1(Y)−S2(Y)|/(T10−T8),
or
Kgp=|(Gau+Gad)/2−S2(Y)|/(T10−T8),
IF Kgp>1 THEN Kgp=1,
S=(1−Kgp)×S1(Y)+Kgp×S2(Y) (23)
T10 in (23) represents a predetermined threshold value (>T8). As indicated above, if the peak/bottom level Kgp is high (=1), S=S2(Y).
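A minimal sketch of (22) and (23), assuming S1, S2, and the threshold pair are supplied as plain numbers (T7/T9 for pixel X, T8/T10 for pixel Y):

    def peak_bottom_blend(s1, s2, t_low, t_high):
        # Peak/bottom level: K = |S1 - S2| / (T_high - T_low), clamped
        # at 1, mirroring the IF K > 1 THEN K = 1 step in (22)/(23).
        k = min(abs(s1 - s2) / (t_high - t_low), 1.0)
        # S = (1 - K) * S1 + K * S2; when K reaches 1 the output
        # equals the color composition estimate S2.
        return (1.0 - k) * s1 + k * s2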
As indicated in expressions (22) and (23), over the range in which the peak/bottom level shifts from low to high, the image data S at the focus detection pixel 311 are obtained as the sum of the data S1 resulting from the statistical processing and the data S2 calculated based upon the color composition ratio, which are individually weighted by using the peak/bottom level Kbp or Kgp. As a result, stable image data can be obtained since the image data S do not change abruptly regardless of whether the fine line judgment is affirmative or negative.
Values that will provide the optimal image quality are selected for the predetermined threshold values T7 to T10 in correspondence to the characteristics and the like of the optical low pass filter (not shown) mounted at the image sensor 211. For instance, if an optical low pass filter achieving a significant filtering effect is mounted, the fine line pattern will be blurred and accordingly, less rigorous (smaller) values should be selected for the predetermined threshold values T7 to T10.
In step S290, the data S1 obtained by statistically processing the data from only the imaging pixels 310 around the focus detection pixel 311 are designated as the image data S at the focus detection pixel, and the operation proceeds to step S300. In step S300, the interpolation processing ends and the operation returns to the program shown in
While the image sensor 211 in
At the image sensor 211A in
For instance, at the image sensor shown in
S1(X)=(B2×(G11+G12)/(G1+G2)+B5×(G11+G12)/(G9+G10))/2 (24)
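Expression (24) scales each neighboring blue output by a ratio of green levels before averaging; a minimal sketch, assuming the pixel outputs named in (24) are passed in as plain numbers:

    def statistical_estimate_b(b2, b5, g1, g2, g9, g10, g11, g12):
        # Expression (24): B2 and B5 are rescaled by the ratio of the
        # green level near the focus detection row (G11 + G12) to the
        # green level near each blue pixel, and the two rescaled
        # values are averaged.
        upper = b2 * (g11 + g12) / (g1 + g2)
        lower = b5 * (g11 + g12) / (g9 + g10)
        return (upper + lower) / 2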
In the image sensor 211B shown in
At an image sensor 211C shown in
In the image sensor 211 shown in
While the focus detection pixels 311 at the image sensor 211 shown in
S2(Y)=AF2×Ks×Kc (25)
The image data at a focus detection pixel 311 that would otherwise be occupied by a blue pixel may be generated through interpolation as expressed in (26) below instead of expression (10).
S2(X)=AF3×((Bau+Bad)/(Gau+Gad))×Ks×Kc (26)
The focus detection pixels 311 in the image sensor 211 shown in
S2(X)=AF3×Ks×Kc (27)
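A minimal sketch of expressions (25) through (27), assuming the focus detection outputs AF2/AF3 and the coefficients Ks and Kc are available as plain numbers:

    def s2_green(af2, ks, kc):
        # Expression (25): S2(Y) = AF2 * Ks * Kc.
        return af2 * ks * kc

    def s2_blue_with_ratio(af3, bau, bad, gau, gad, ks, kc):
        # Expression (26): scale AF3 by the blue-to-green ratio taken
        # from the pixels above and below the focus detection pixel row.
        return af3 * ((bau + bad) / (gau + gad)) * ks * kc

    def s2_blue(af3, ks, kc):
        # Expression (27): S2(X) = AF3 * Ks * Kc.
        return af3 * ks * kc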
While the corrected image data are saved into the memory card 213 in the operational flow shown in
While the information related to the characteristics of the image around the focus detection pixel row is obtained through calculation as expressed in (7), the range of the pixels to be used for the calculation is not limited to this example and the pixel range size may be adjusted as necessary. For instance, if an optical low pass filter with an intense filtering effect is mounted, the extent of image blur becomes significant and, accordingly, the range of the pixels used in the calculation expressed in (7) should be increased.
An example in which the image data at the focus detection pixel 311 that would otherwise be occupied by a green pixel are obtained by averaging the outputs from the four green pixels set on the diagonals around the focus detection pixel 311 is explained in reference to expression (8). However, this method gives rise to a problem in that if an edge pattern indicating a pixel output change along the direction in which the focus detection pixels 311 are arrayed is present over the focus detection pixel 311, a significant error manifests. Accordingly, if the relationship expressed in (28) below is satisfied, it may be judged that an edge pattern indicating a pixel output change along the focus detection pixel 311 arraying direction is present over the focus detection pixels 311 and the following processing may be executed. Namely, the image data for the position that would otherwise be occupied by a green pixel may be obtained by averaging the outputs from the green pixels present above and below the focus detection pixel 311, as expressed in (29) below.
|(G3+G6)−(G6+G7)|>T11 (28)
T11 in expression (28) represents a predetermined threshold value.
S1(Y)=(G1+G9)/2 (29)
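A minimal sketch of the edge check (28) and the fallback average (29); the grouping of terms in (28) follows the text as printed, and the function names are illustrative:

    def edge_along_row(g3, g6, g7, t11):
        # Condition (28): an edge along the arraying direction is judged
        # present when the grouped green outputs differ by more than T11.
        return abs((g3 + g6) - (g6 + g7)) > t11

    def green_estimate_vertical(g1, g9):
        # Expression (29): average the green pixels directly above (G1)
        # and below (G9) the focus detection pixel.
        return (g1 + g9) / 2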
It is to be noted that the image sensors 211, 211A, 211B, 211C and 211D may each be constituted with a CCD image sensor or a CMOS image sensor. In addition, while an explanation is given above in reference to the embodiment on an example in which the imaging device according to the present invention is a digital still camera 201 with the exchangeable lens unit 202 mounted at its camera body 203, the present invention may also be adopted in a digital still camera with an integrated lens or in a video camera. Furthermore, the present invention may be adopted in a compact camera module built into a portable telephone or the like, or in a surveillance camera.
As described above, the imaging device in the embodiment includes an image sensor equipped with imaging pixels that are disposed in a two-dimensional array and focus detection pixels disposed in part of the array of the imaging pixels. A conversion coefficient used when converting the output from a focus detection pixel to an image output at the focus detection pixel is set in correspondence to the aperture value setting selected at the imaging optical system via which an image is formed on the image sensor, and the image output at the focus detection pixel is estimated based upon the conversion coefficient having been set and the output from the focus detection pixel. As a result, even when an image is captured after adjusting the aperture at the imaging optical system, the output from the imaging pixel that would otherwise occupy the position occupied by the focus detection pixel can be estimated with a high level of accuracy, and the occurrence of color artifacts, false patterns, or pattern loss can be effectively prevented. Thus, a desirable level of image quality can be assured.
In the imaging device in the embodiment, a table of conversion coefficients corresponding to the aperture value settings that may be selected at the imaging optical system is stored in advance and the conversion coefficient corresponding to the aperture value setting at the imaging optical system is selected by referencing the table. Thus, the image output at each focus detection pixel can be estimated quickly and accurately.
Furthermore, the image sensor in the imaging device in the embodiment includes focus detection pixels set in correspondence to each of the plurality of focus detection areas set at the predetermined imaging plane of the imaging optical system. A table of conversion coefficients corresponding to specific aperture value settings that may be selected at the imaging optical system is prepared in correspondence to each focus detection area. Thus, the effect of the eclipse attributable to the aperture at the imaging optical system can be minimized even at the focus detection pixels disposed in correspondence to a focus detection area set in the periphery of the imaging plane, which, in turn, makes it possible to accurately estimate the output from the imaging pixel that would otherwise occupy the position occupied by each focus detection pixel.
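A rough sketch of the table lookup described above; the table keys, the placeholder coefficient values, and the per-area structure are assumptions for illustration only, not values from the embodiment:

    # Hypothetical table of conversion coefficients, keyed by focus
    # detection area and aperture value (F-number); all numbers are
    # placeholders.
    CONVERSION_TABLE = {
        ("center", 2.8): 0.95,
        ("center", 5.6): 0.90,
        ("periphery", 2.8): 0.88,
        ("periphery", 5.6): 0.85,
    }

    def image_output_at_af_pixel(af_output, area, aperture):
        # Select the conversion coefficient prepared for this focus
        # detection area and aperture setting, then convert the focus
        # detection pixel output to an estimated image output.
        coefficient = CONVERSION_TABLE[(area, aperture)]
        return af_output * coefficient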