The present invention relates to an image processing apparatus which can restore, from a color and sensitivity mosaic image acquired using a CCD image sensor of the single plate type or the like, a color image signal of a wide dynamic range wherein the sensitivity characteristics of pixels are uniformized and each of the pixels has all of a plurality of color components. A sensitivity uniformization section uniformizes the sensitivities of pixels of a color and sensitivity mosaic image to produce a color mosaic image, and a color interpolation section interpolates color components of the pixels of the color mosaic image M to produce output images R, G and B. The present invention can be applied to a digital camera which converts a picked up optical image into a color image signal of a wide dynamic range.
0. 15. An image pickup method comprising:
a step for sensing light by a plurality of pixels, each of the plurality of pixels having one of a plurality of color components and one of a plurality of sensitivity characteristics to light intensity, wherein pixels having a same color component and a same sensitivity characteristic are arranged in a lattice arrangement, and pixels having the same color component irrespective of the sensitivity characteristic are arranged in a lattice arrangement;
a step for producing a color and sensitivity mosaic image based on the light sensed by the plurality of pixels;
using the color and sensitivity mosaic image as an input, a step for scaling each of the pixels having different sensitivity characteristics to a same light intensity as the pixels having a same sensitivity characteristic to create sensitivity compensated pixel information for each of the pixels having different sensitivity characteristics;
using the color and sensitivity mosaic image as an input, a step for comparing each of the pixels having different sensitivity characteristics with a threshold value to discriminate the validity of a pixel value to create discrimination information for each of the pixels having different sensitivity characteristics; and
a step for interpolating the sensitivity compensated pixel information based on the discrimination information.
0. 16. A program stored on a non-transitory computer readable storage medium, which when executed, causes a computer to:
sense light by a plurality of pixels, each of the plurality of pixels having one of a plurality of color components and one of a plurality of sensitivity characteristics to light intensity, wherein pixels having a same color component and a same sensitivity characteristic are arranged in a lattice arrangement, and pixels having the same color component irrespective of the sensitivity characteristic are arranged in a lattice arrangement;
produce a color and sensitivity mosaic image based on the light sensed by the plurality of pixels;
use the color and sensitivity mosaic image as an input for scaling each of the pixels having different sensitivity characteristics to a same light intensity as the pixels having a same sensitivity characteristic to create sensitivity compensated pixel information for each of the pixels having different sensitivity characteristics;
use the color and sensitivity mosaic image as an input for comparing each of the pixels having different sensitivity characteristics with a threshold value to discriminate the validity of a pixel value to create discrimination information for each of the pixels having different sensitivity characteristics; and
interpolate the sensitivity compensated pixel information based on the discrimination information.
0. 5. An image pickup device comprising:
a plurality of pixels configured to sense light, each of the plurality of pixels having one of a plurality of color components and one of a plurality of sensitivity characteristics to light intensity, wherein pixels having a same color component and a same sensitivity characteristic are arranged in a lattice arrangement, and pixels having the same color component irrespective of the sensitivity characteristic are arranged in a lattice arrangement;
a photo-electric conversion unit configured to produce a color and sensitivity mosaic image based on the light sensed by the plurality of pixels; and
a restoration unit configured to (i) use the color and sensitivity mosaic image as an input for scaling each of the pixels having different sensitivity characteristics to a same light intensity as the pixels having a same sensitivity characteristic to create sensitivity compensated pixel information for each of the pixels having different sensitivity characteristics, (ii) use the color and sensitivity mosaic image as an input for comparing each of the pixels having different sensitivity characteristics with a threshold value to discriminate the validity of a pixel value to create discrimination information for each of the pixels having different sensitivity characteristics, and (iii) interpolate the sensitivity compensated pixel information based on the discrimination information.
0. 27. An image pickup method comprising:
a step for sensing light by a plurality of pixels, each of the plurality of pixels having one of a plurality of color components and one of a plurality of sensitivity characteristics to light intensity, wherein pixels having a same color component and a same sensitivity characteristic are arranged in a lattice arrangement, and pixels having the same color component irrespective of the sensitivity characteristic are arranged in a lattice arrangement;
a step for producing a color and sensitivity mosaic image based on the light sensed by the plurality of pixels; and
producing a restoration image based on the color and sensitivity mosaic image by (i) using the color and sensitivity mosaic image as an input, a step for scaling each of the pixels having different sensitivity characteristics to a same light intensity as the pixels having a same sensitivity characteristic to create sensitivity compensated pixel information for each of the pixels having different sensitivity characteristics, (ii) using the color and sensitivity mosaic image as an input, a step for comparing each of the pixels having different sensitivity characteristics with a threshold value to discriminate the validity of a pixel value to create discrimination information for each of the pixels having different sensitivity characteristics, and (iii) a step for interpolating the sensitivity compensated pixel information based on the discrimination information,
wherein the restoration image has uniform sensitivities of the pixels.
0. 28. A program stored on a non-transitory computer readable storage medium, which when executed, causes a computer to:
sense light by a plurality of pixels, each of the plurality of pixels having one of a plurality of color components and one of a plurality of sensitivity characteristics to light intensity, wherein pixels having a same color component and a same sensitivity characteristic are arranged in a lattice arrangement, and pixels having the same color component irrespective of the sensitivity characteristic are arranged in a lattice arrangement;
produce a color and sensitivity mosaic image based on the light sensed by the plurality of pixels; and
produce a restoration image based on the color and sensitivity mosaic image by (i) using the color and sensitivity mosaic image as an input for scaling each of the pixels having different sensitivity characteristics to a same light intensity as the pixels having a same sensitivity characteristic to create sensitivity compensated pixel information for each of the pixels having different sensitivity characteristics, (ii) using the color and sensitivity mosaic image as an input for comparing each of the pixels having different sensitivity characteristics with a threshold value to discriminate the validity of a pixel value to create discrimination information for each of the pixels having different sensitivity characteristics, and (iii) interpolating the sensitivity compensated pixel information based on the discrimination information,
wherein the restoration image has uniform sensitivities of the pixels.
0. 17. An image pickup apparatus comprising:
a plurality of pixels configured to sense light, each of the plurality of pixels having one of a plurality of color components and one of a plurality of sensitivity characteristics to light intensity, wherein pixels having a same color component and a same sensitivity characteristic are arranged in a lattice arrangement, and pixels having the same color component irrespective of the sensitivity characteristic are arranged in a lattice arrangement;
a photo-electric conversion unit configured to produce a color and sensitivity mosaic image based on the light sensed by the plurality of pixels; and
a restoration unit configured to produce a restoration image based on the color and sensitivity mosaic image by (i) using the color and sensitivity mosaic image as an input for scaling each of the pixels having different sensitivity characteristics to a same light intensity as the pixels having a same sensitivity characteristic to create sensitivity compensated pixel information for each of the pixels having different sensitivity characteristics, (ii) using the color and sensitivity mosaic image as an input for comparing each of the pixels having different sensitivity characteristics with a threshold value to discriminate the validity of a pixel value to create discrimination information for each of the pixels having different sensitivity characteristics, and (iii) interpolating the sensitivity compensated pixel information based on the discrimination information,
wherein the restoration image has uniform sensitivities of the pixels.
0. 39. An image pickup method comprising:
a step for sensing light by a plurality of pixels, each of the plurality of pixels having one of a plurality of color components and one of a plurality of sensitivity characteristics to light intensity, wherein pixels having a same color component and a same sensitivity characteristic are arranged in a lattice arrangement, and pixels having the same color component irrespective of the sensitivity characteristic are arranged in a lattice arrangement;
a step for producing a color and sensitivity mosaic image based on the light sensed by the plurality of pixels; and
producing a mosaic image based on the color and sensitivity mosaic image by (i) using the color and sensitivity mosaic image as an input, a step for scaling each of the pixels having different sensitivity characteristics to a same light intensity as the pixels having a same sensitivity characteristic to create sensitivity compensated pixel information for each of the pixels having different sensitivity characteristics, (ii) using the color and sensitivity mosaic image as an input, a step for comparing each of the pixels having different sensitivity characteristics with a threshold value to discriminate the validity of a pixel value to create discrimination information for each of the pixels having different sensitivity characteristics, and (iii) a step for interpolating the sensitivity compensated pixel information based on the discrimination information,
wherein the mosaic image has uniform sensitivities of the pixels and each of the pixels has one of the plurality of color components.
0. 40. A program stored on a non-transitory computer readable storage medium, which when executed, causes a computer to:
sense light by a plurality of pixels, each of the plurality of pixels having one of a plurality of color components and one of a plurality of sensitivity characteristics to light intensity, wherein pixels having a same color component and a same sensitivity characteristic are arranged in a lattice arrangement, and pixels having the same color component irrespective of the sensitivity characteristic are arranged in a lattice arrangement;
produce a color and sensitivity mosaic image based on the light sensed by the plurality of pixels; and
produce a mosaic image based on the color and sensitivity mosaic image by (i) using the color and sensitivity mosaic image as an input for scaling each of the pixels having different sensitivity characteristics to a same light intensity as the pixels having a same sensitivity characteristic to create sensitivity compensated pixel information for each of the pixels having different sensitivity characteristics, (ii) using the color and sensitivity mosaic image as an input for comparing each of the pixels having different sensitivity characteristics with a threshold value to discriminate the validity of a pixel value to create discrimination information for each of the pixels having different sensitivity characteristics, and (iii) interpolating the sensitivity compensated pixel information based on the discrimination information,
wherein the mosaic image has uniform sensitivities of the pixels and each of the pixels has one of the plurality of color components.
0. 29. An image pickup apparatus comprising:
a plurality of pixels configured to sense light, each of the plurality of pixels having one of a plurality of color components and one of a plurality of sensitivity characteristics to light intensity, wherein pixels having a same color component and a same sensitivity characteristic are arranged in a lattice arrangement, and pixels having the same color component irrespective of the sensitivity characteristic are arranged in a lattice arrangement;
a photo-electric conversion unit configured to produce a color and sensitivity mosaic image based on the light sensed by the plurality of pixels; and
an image processing unit configured to produce a mosaic image based on the color and sensitivity mosaic image by (i) using the color and sensitivity mosaic image as an input for scaling each of the pixels having different sensitivity characteristics to a same light intensity as the pixels having a same sensitivity characteristic to create sensitivity compensated pixel information for each of the pixels having different sensitivity characteristics, (ii) using the color and sensitivity mosaic image as an input for comparing each of the pixels having different sensitivity characteristics with a threshold value to discriminate the validity of a pixel value to create discrimination information for each of the pixels having different sensitivity characteristics, and (iii) interpolating the sensitivity compensated pixel information based on the discrimination information,
wherein the mosaic image has uniform sensitivities of the pixels and each of the pixels has one of the plurality of color components.
0. 1. An image processing apparatus comprising:
restoration means for generating a restoration image based on a color-and-sensitivity mosaic image wherein
each of a plurality of pixels has one of first to third color components and one of a plurality of sensitivity characteristics with respect to intensity of light,
in terms of the sensitivity characteristics of the pixels, the pixels disposed in a same line have a same sensitivity characteristic, and the pixels disposed in different lines, which are adjacent to each other, have different sensitivity characteristics,
in terms of the color components of the pixels, the pixels having the first color component are arranged in a checker pattern irrespective of the pixels' sensitivity characteristics, the pixels having the second color component are arranged so as to be disposed in adjacent different lines and adjacent in a diagonal direction, and the pixels having the third color component are arranged so as to be disposed in adjacent different lines and adjacent in a diagonal direction,
in the restoration image, for each color component, the sensitivities of the pixels are uniformized, using as inputs (i) color mosaic pattern information, (ii) a color and sensitivity mosaic image, and (iii) sensitivity mosaic pattern information, so that each of the pixels having different sensitivity characteristics is (i) scaled to the same light intensity as the pixels having the same sensitivity characteristics to create sensitivity compensated pixel information for each of the pixels having different sensitivity characteristics and (ii) compared with a threshold value to discriminate the validity of the pixel value to create discrimination information for each of the pixels having different sensitivity characteristics, and each of the uniformized pixels has all of the plurality of color components.
0. 2. An image processing method comprising:
a restoration step of generating a restoration image based on a color-and-sensitivity mosaic image wherein,
each of a plurality of pixels has one of first to third color components and one of a plurality of sensitivity characteristics with respect to intensity of light,
in terms of sensitivity characteristics of the pixels, the pixels disposed in a same line have a same sensitivity characteristic, and the pixels disposed in different lines, which are adjacent to each other, have different sensitivity characteristics,
in terms of color components of the pixels, the pixels having the first color component are arranged in a checker pattern irrespective of the pixels' sensitivity characteristics, the pixels having the second color component are arranged so as to be disposed in adjacent different lines and adjacent in a diagonal direction, and the pixels having the third color component are arranged so as to be disposed in adjacent different lines and adjacent in a diagonal direction,
in the restoration image, for each color component, the sensitivities of the pixels are uniformized, using as inputs (i) color mosaic pattern information, (ii) a color and sensitivity mosaic image, and (iii) sensitivity mosaic pattern information, so that each of the pixels having different sensitivity characteristics is (i) scaled to the same light intensity as the pixels having the same sensitivity characteristics to create sensitivity compensated pixel information for each of the pixels having different sensitivity characteristics and (ii) compared with a threshold value to discriminate the validity of the pixel value to create discrimination information for each of the pixels having different sensitivity characteristics, and each of the uniformized pixels has all of the plurality of color components.
0. 3. An image processing apparatus comprising:
restoration means for generating a restoration image based on a color-and-sensitivity mosaic image wherein
each of a plurality of pixels has one of first to third color components and one of a plurality of sensitivity characteristics with respect to intensity of light,
the pixels having the first to third color components are arranged in a Bayer pattern with their color components,
the pixels having the first color component have different sensitivity characteristics from each other in different lines,
the pixels having the second color component are arranged so as to form a checker pattern with their sensitivity characteristics,
the pixels having the third color component are arranged so as to form a checker pattern with their sensitivity characteristics, and
in the restoration image, for each color component, the sensitivities of the pixels are uniformized, using as inputs (i) color mosaic pattern information, (ii) a color and sensitivity mosaic image, and (iii) sensitivity mosaic pattern information, so that each of the pixels having different sensitivity characteristics is (i) scaled to the same light intensity as the pixels having the same sensitivity characteristics to create sensitivity compensated pixel information for each of the pixels having different sensitivity characteristics and (ii) compared with a threshold value to discriminate the validity of the pixel value to create discrimination information for each of the pixels having different sensitivity characteristics, and each of the uniformized pixels has all of the plurality of color components.
0. 4. An image processing method comprising:
a restoration step of generating a restoration image based on a color-and-sensitivity mosaic image wherein
each of a plurality of pixels has one of first to third color components and one of a plurality of sensitivity characteristics with respect to intensity of light,
the pixels having the first to third color components are arranged in a Bayer pattern with their color components,
the pixels having the first color component have different sensitivity characteristics from each other in different lines,
the pixels having the second color component are arranged so as to form a checker pattern with their sensitivity characteristics,
the pixels having the third color component are arranged so as to form a checker pattern with their sensitivity characteristics, and
in the restoration image, for each color component, the sensitivities of the pixels are uniformized, using as inputs (i) color mosaic pattern information, (ii) a color and sensitivity mosaic image, and (iii) sensitivity mosaic pattern information, so that each of the pixels having different sensitivity characteristics is (i) scaled to the same light intensity as the pixels having the same sensitivity characteristics to create sensitivity compensated pixel information for each of the pixels having different sensitivity characteristics and (ii) compared with a threshold value to discriminate the validity of the pixel value to create discrimination information for each of the pixels having different sensitivity characteristics, and each of the uniformized pixels has all of the plurality of color components.
0. 6. The image pickup device according to claim 5, wherein the plurality of color components include three color components.
0. 7. The image pickup device according to claim 5, wherein the plurality of color components include red, green, and blue color components.
0. 8. The image pickup device according to claim 5, wherein the plurality of color components include green and blue color components.
0. 9. The image pickup device according to claim 5, wherein the plurality of color components include red and blue color components.
0. 10. The image pickup device according to claim 5, wherein the plurality of color components include red and green color components.
0. 11. The image pickup device according to claim 5, wherein the plurality of color components include at least a color component besides red, green and blue.
0. 12. The image pickup device according to claim 5, wherein the plurality of color components include red, green, and blue color components and a fourth color component.
0. 13. The image pickup device according to claim 5, wherein the plurality of sensitivity characteristics include two patterns.
0. 14. The image pickup device according to claim 5, wherein the image pickup device changes between a normal mode and a high dynamic range mode.
0. 18. The image pickup apparatus according to claim 17, wherein the plurality of color components include three color components.
0. 19. The image pickup apparatus according to claim 17, wherein the plurality of color components include red, green, and blue color components.
0. 20. The image pickup apparatus according to claim 17, wherein the plurality of color components include green and blue color components.
0. 21. The image pickup apparatus according to claim 17, wherein the plurality of color components include red and blue color components.
0. 22. The image pickup apparatus according to claim 17, wherein the plurality of color components include red and green color components.
0. 23. The image pickup apparatus according to claim 17, wherein the plurality of color components include at least a color component besides red, green and blue.
0. 24. The image pickup apparatus according to claim 17, wherein the plurality of color components include red, green, and blue color components and a fourth color component.
0. 25. The image pickup apparatus according to claim 17, wherein the plurality of sensitivity characteristics include two patterns.
0. 26. The image pickup apparatus according to claim 17, wherein the image pickup apparatus changes between a normal mode and a high dynamic range mode.
0. 30. The image pickup apparatus according to claim 29, wherein the plurality of color components include three color components.
0. 31. The image pickup apparatus according to claim 29, wherein the plurality of color components include red, green, and blue color components.
0. 32. The image pickup apparatus according to claim 29, wherein the plurality of color components include green and blue color components.
0. 33. The image pickup apparatus according to claim 29, wherein the plurality of color components include red and blue color components.
0. 34. The image pickup apparatus according to claim 29, wherein the plurality of color components include red and green color components.
0. 35. The image pickup apparatus according to claim 29, wherein the plurality of color components include at least a color component besides red, green and blue.
0. 36. The image pickup apparatus according to claim 29, wherein the plurality of color components include red, green, and blue color components and a fourth color component.
0. 37. The image pickup apparatus according to claim 29, wherein the plurality of sensitivity characteristics include two patterns.
0. 38. The image pickup apparatus according to claim 29, wherein the image pickup apparatus changes between a normal mode and a high dynamic range mode.
If it is discriminated at step S73 that the type of the color of the noticed pixel is G, then the processing advances to step S75. At step S75, the luminance calculation section 91 applies the pixel value of the modulated color mosaic image Mg and the pixel values of the color difference images C and D corresponding to the noticed pixel to the following expression (2) to calculate the pixel value of the luminance candidate image Lc corresponding to the noticed pixel:
Lc=3Mg+C+D (2)
If it is discriminated at step S73 that the type of the color of the noticed pixel is B, then the processing advances to step S76. At step S76, the luminance calculation section 91 applies the pixel value of the modulated color mosaic image Mg and the pixel values of the color difference images C and D corresponding to the noticed pixel to the following expression (3) to calculate the pixel value of the luminance candidate image Lc corresponding to the noticed pixel:
Lc=3Mg+C−2D (3)
It is to be noted that, in the expressions (1) to (3), Lc, Mg, C and D represent the pixel values of the luminance candidate image Lc, the modulated color mosaic image Mg, the color difference image C and the color difference image D corresponding to the noticed pixel, respectively.
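For reference, a minimal Python sketch of this per-pixel luminance candidate calculation (steps S72 to S76) is given below. The function name and the scalar arguments are illustrative assumptions; the R-pixel expression (1) is not reproduced in this excerpt and its form is inferred here so that Lc equals R+G+B for every color type, consistent with expressions (2) and (3).

```python
def luminance_candidate(color, mg, c, d):
    """Luminance candidate value Lc for one noticed pixel (steps S72-S76).

    mg is the pixel value of the modulated color mosaic image Mg; c and d are
    the pixel values of the color difference images C and D.  The R branch is
    an assumption inferred from expressions (2) and (3)."""
    if color == "G":
        return 3 * mg + c + d        # expression (2)
    if color == "B":
        return 3 * mg + c - 2 * d    # expression (3)
    if color == "R":
        return 3 * mg - 2 * c + d    # assumed form of expression (1)
    raise ValueError("unknown color type")
```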
The processing returns to step S71 so that the processing at steps S71 to S76 is repeated until it is discriminated at step S71 that all pixels have been used as a noticed pixel. When it is discriminated at step S71 that all pixels have been used as a noticed pixel, the processing advances to step S77.
The luminance candidate image Lc produced by the processing at steps S71 to S76 described above is supplied to the noise removal section 92.
At step S77, the noise removal section 92 discriminates whether or not all pixels of the modulated color mosaic image Mg have been used as a noticed pixel. If the noise removal section 92 discriminates that all pixels have not been used as a noticed pixel, then the processing advances to step S78. At step S78, the noise removal section 92 determines one by one pixel as a noticed pixel beginning with the left lowermost pixel and ending with the right uppermost pixel of the modulated color mosaic image Mg.
At step S79, the noise removal section 92 applies the pixel values (luminance candidate values) of the pixels positioned upwardly, downwardly, leftwardly and rightwardly of the noticed pixel to the following expression (4) to calculate a gradient ∇ corresponding to the noticed pixel. It is to be noted that the gradient ∇ is a vector whose factors are linear differential coefficients in the horizontal direction and the vertical direction of the image. Further, the pixel values (luminance candidate values) of the pixels positioned upwardly, downwardly, leftwardly and rightwardly of the noticed pixel are represented by Lc(U), Lc(D), Lc(L) and Lc(R), respectively.
gradient ∇=(Lc(R)−Lc(L),Lc(U)−Lc(D)) (4)
At step S80, the noise removal section 92 applies the pixel values (luminance candidate values) of the pixels positioned leftwardly, rightwardly, upwardly and downwardly of the noticed pixel to the following expressions (5) and (6) to calculate a smoothed component Hh in the horizontal direction and a smoothed component Hv in the vertical direction corresponding to the noticed pixel:
Hh=(Lc(L)+Lc(R))/2 (5)
Hv=(Lc(U)+Lc(D))/2 (6)
At step S81, the noise removal section 92 calculates a smoothing contribution wh in the horizontal direction and a smoothing contribution wv in the vertical direction corresponding to the noticed pixel based on the absolute value ∥∇∥ of the gradient ∇ calculated at step S79.
More particularly, where the absolute value of the gradient ∇ is higher than 0, the absolute value of the inner product of the normalized gradient ∇/∥∇∥ and the vector (1, 0) is subtracted from 1 as given by the following expression (7) to obtain the smoothing contribution wh in the horizontal direction. Further, as given by the following expression (8), the absolute value of the inner product of the normalized gradient ∇/∥∇∥ and the vector (0, 1) is subtracted from 1 to obtain the smoothing contribution wv in the vertical direction:
wh=1−|(∇/∥∇∥,(1,0))| (7)
wv=1−|(∇/∥∇∥,(0,1))| (8)
Where the absolute value of the gradient ∇ is 0, the smoothing contribution wh in the horizontal direction and the smoothing contribution wv in the vertical direction are both set to 0.5.
At step S82, the noise removal section 92 uses the following expression (9) to calculate the pixel value (luminance value) of the luminance image L corresponding to the noticed pixel:
L=Lc+(wh·Hh+wv·Hv)/(wh+wv) (9)
It is to be noted that Lc and L in the expression (9) represent the pixel values of the luminance candidate image Lc and the luminance image L corresponding to the noticed pixel.
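A compact sketch of the noise removal for one noticed pixel (steps S79 to S82), following expressions (4) to (9), might look as follows; the function signature is an illustrative assumption, and the zero-gradient case follows the rule stated above.

```python
import math

def denoised_luminance(lc, lc_u, lc_d, lc_l, lc_r):
    """Directionally smoothed luminance value L for one noticed pixel.

    lc is the luminance candidate of the noticed pixel; lc_u, lc_d, lc_l and
    lc_r are the candidates of the pixels above, below, left and right of it."""
    gh, gv = lc_r - lc_l, lc_u - lc_d            # expression (4): gradient
    hh = (lc_l + lc_r) / 2.0                     # expression (5)
    hv = (lc_u + lc_d) / 2.0                     # expression (6)
    norm = math.hypot(gh, gv)
    if norm > 0:
        wh = 1.0 - abs(gh) / norm                # expression (7)
        wv = 1.0 - abs(gv) / norm                # expression (8)
    else:
        wh = wv = 0.5                            # zero-gradient case
    return lc + (wh * hh + wv * hv) / (wh + wv)  # expression (9)
```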
The processing returns to step S77 so that the processing at steps S77 to S82 is repeated until it is discriminated at step S77 that all pixels have been used as a noticed pixel. When it is discriminated at step S77 that all pixels have been used as a noticed pixel, the processing returns to step S54 of
At step S54, the color space conversion section 75 performs a color space conversion process for the color difference images C and D and the luminance image L to produce modulated images in each of which each pixel has an R, G or B component and supplies the modulated images to the gradation reverse conversion sections 76 to 78, respectively.
Details of the color space conversion process are described with reference to a flow chart of
At step S93, the color space conversion section 75 applies the pixel values of the luminance image L, color difference image C and color difference image D corresponding to the noticed pixel to the following expressions (10), (11) and (12) to calculate the value Rg of the R component, the value Gg of the G component and the value Bg of the B component of the modulated images corresponding to the noticed pixel:
Rg=(L+2C−D)/3 (10)
Gg=(L−C−D)/3 (11)
Bg=(L−C+2D)/3 (12)
It is to be noted that, in the expressions (10) to (12), L, C and D are the pixel values of the luminance image L, the color difference image C and the color difference image D corresponding to the noticed pixel, respectively.
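Because C and D behave as the color differences R−G and B−G and L as R+G+B (consistent with expressions (2) and (3)), the conversion of expressions (10) to (12) recovers the three components exactly. A short sketch, with names chosen only for illustration:

```python
def lcd_to_rgb(l, c, d):
    """Step S93: R, G and B component values of the modulated images."""
    rg = (l + 2 * c - d) / 3.0    # expression (10)
    gg = (l - c - d) / 3.0        # expression (11)
    bg = (l - c + 2 * d) / 3.0    # expression (12)
    return rg, gg, bg
```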
The processing returns to step S91 so that the processing at steps S91 to S93 is repeated until it is discriminated at step S91 that all pixels have been used as a noticed pixel. When it is discriminated at step S91 that all pixels have been used as a noticed pixel, the processing returns to step S55 of
At step S55, the gradation reverse conversion section 76 performs a gradation reverse conversion process corresponding to the gradation conversion process at step S51 (more particularly, to raise pixel values to the 1/γth power) for the R component of each pixel of the modulated image supplied from the color space conversion section 75 to produce an output image R. Similarly, the gradation reverse conversion section 77 performs a gradation reverse conversion process corresponding to the gradation conversion process at step S51 for the G component of each pixel of the modulated image supplied from the color space conversion section 75 to produce an output image G. The gradation reverse conversion section 78 performs a gradation reverse conversion process corresponding to the gradation conversion process at step S51 for the B component of each pixel of the modulated image supplied from the color space conversion section 75 to produce an output image B. Through such a color interpolation process as described above, the output images R, G and B are produced.
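The gradation reverse conversion of step S55 amounts to raising each component value to the 1/γth power. A minimal sketch follows; the value of γ and the clamping of negative inputs are assumptions of the sketch, not values fixed by the description.

```python
def gradation_reverse(value, gamma=2.2):
    """Step S55: undo the gradation conversion of step S51 by raising the
    component value to the 1/gamma-th power.  gamma=2.2 is a placeholder."""
    return max(value, 0.0) ** (1.0 / gamma)
```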
Description of the first demosaic process by the first example of the configuration of the sensitivity uniformization section 51 shown in
Now, a second example of the configuration of the sensitivity uniformization section 51 which can be used in place of the first example of the configuration of the sensitivity uniformization section 51 shown in
The second example of the configuration is an example of the configuration wherein the second sensitivity uniformization process in the first demosaic process described with reference to
It is assumed that, in the color and sensitivity mosaic image described below, the color of each pixel is one of the three primary colors of R, G and B and the sensitivity is one of four stages S0, S1, S2 and S3 as in the color and sensitivity mosaic pattern P10 of
In the second example of the configuration of the sensitivity uniformization section 51, a color and sensitivity mosaic image from the image pickup system, color mosaic pattern information and sensitivity mosaic pattern information are supplied to interpolation sections 101-1 to 101-4.
The interpolation section 101-1 performs an interpolation process of the sensitivity S0 without changing the color of each pixel of the color and sensitivity mosaic image and outputs an interpolation value corresponding to the resulting sensitivity S0 to an adder 102. The interpolation section 101-2 performs an interpolation process of the sensitivity S1 without changing the color of each pixel of the color and sensitivity mosaic image and outputs an interpolation value corresponding to the resulting sensitivity S1 to the adder 102. The interpolation section 101-3 performs an interpolation process of the sensitivity S2 without changing the color of each pixel of the color and sensitivity mosaic image and outputs an interpolation value corresponding to the resulting sensitivity S2 to the adder 102. The interpolation section 101-4 performs an interpolation process of the sensitivity S3 without changing the color of each pixel of the color and sensitivity mosaic image and outputs an interpolation value corresponding to the resulting sensitivity S3 to the adder 102.
The adder 102 adds, for each pixel, the interpolation values for the sensitivities S0 to S3 inputted thereto from the interpolation sections 101-1 to 101-4 and supplies the sum as a pixel value of a color mosaic candidate image to a synthetic sensitivity compensation section 103.
The synthetic sensitivity compensation section 103 collates the pixel value of the color mosaic candidate image supplied thereto from the adder 102 with a synthetic sensitivity compensation LUT 104 to produce a color mosaic image M wherein the resulting value is used as a pixel value and supplies the color mosaic image M to the color interpolation section 52. The synthetic sensitivity compensation LUT 104 is configured so as to acquire a pixel value of the color mosaic image M using a pixel value of the color mosaic candidate image as an index.
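For one pixel, this second configuration reduces to summing the four interpolation values and looking the sum up in the synthetic sensitivity compensation LUT. The sketch below assumes a simple list-based LUT indexed by the rounded candidate value; the rounding and clamping are assumptions, since the description does not state how the index is formed.

```python
def uniformize_pixel(interp_s0, interp_s1, interp_s2, interp_s3, synthetic_lut):
    """One pixel of the second sensitivity uniformization process.

    The four arguments are the interpolation values from the interpolation
    sections 101-1 to 101-4 (sensitivities S0 to S3); synthetic_lut stands in
    for the synthetic sensitivity compensation LUT 104."""
    candidate = interp_s0 + interp_s1 + interp_s2 + interp_s3   # adder 102
    index = min(max(int(round(candidate)), 0), len(synthetic_lut) - 1)
    return synthetic_lut[index]   # pixel value of the color mosaic image M
```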
The second sensitivity uniformization process in the first demosaic process by the second example of the configuration of the sensitivity uniformization section 51 shown in
At step S101, the interpolation sections 101-1 to 101-4 discriminate whether or not all pixels of the color and sensitivity mosaic image have been used as a noticed pixel. If the interpolation sections 101-1 to 101-4 discriminate that all pixels have not been used as a noticed pixel, then the processing advances to step S102. At step S102, the interpolation sections 101-1 to 101-4 determine one by one pixel as a noticed pixel beginning with the left lowermost pixel and ending with the right uppermost pixel of the color and sensitivity mosaic image.
At step S103, the interpolation sections 101-1 to 101-4 perform an interpolation process without changing the color of each pixel of the color and sensitivity mosaic image to produce interpolation values corresponding to the sensitivities S0, S1, S2 and S3, respectively, and output the interpolation values to the adder 102.
The interpolation process for the sensitivity S0 by the interpolation section 101-1 is described with reference to a flow chart of
It is to be noted that, since the interpolation processes for the sensitivities S1 to S3 by the interpolation sections 101-2 to 101-4 are similar to the interpolation process for the sensitivity S0 by the interpolation section 101-1 described above, description of the interpolation processes is omitted.
At step S104, the adder 102 adds the interpolation values for the sensitivities S0 to S3 corresponding to the noticed pixel inputted from the interpolation sections 101-1 to 101-4 and supplies the sum as a pixel value of a color mosaic candidate image corresponding to the noticed pixel to the synthetic sensitivity compensation section 103.
At step S105, the synthetic sensitivity compensation section 103 collates the pixel value of the color mosaic candidate image supplied thereto from the adder 102 with the synthetic sensitivity compensation LUT 104 and determines a detected value as a pixel value of a color mosaic image M corresponding to the noticed pixel.
The processing returns to step S101 so that the processing at steps S101 to S105 is repeated until it is discriminated at step S101 that all pixels have been used as a noticed pixel. When it is discriminated at step S101 that all pixels have been used as a noticed pixel, the second sensitivity uniformization process of the first demosaic process is ended.
It is to be noted that, after the second sensitivity uniformization process, the color interpolation process described hereinabove with reference to the flow chart of
Now, a second process for producing a color difference image C which can be executed by the color difference image production section 72 in place of the first process (
At step S121, the smoothing sections 81 and 82 discriminate whether or not all pixels of the modulated color mosaic image Mg have been used as a noticed pixel. If the smoothing sections 81 and 82 discriminate that all pixels have not been used as a noticed pixel, then the processing advances to step S122. At step S122, the smoothing sections 81 and 82 determine one by one pixel as a noticed pixel beginning with the left lowermost pixel and ending with the right uppermost pixel of the modulated color mosaic image Mg.
At step S123, the smoothing section 81 arithmetically operates an image gradient vector g corresponding to the noticed pixel.
Details of the image gradient vector arithmetic operation process are described with reference to a flow chart of
It is to be noted that, although a predetermined single type of a color may be selected arbitrarily, for example, where the color mosaic pattern of the color mosaic image Mg has a Bayer arrangement, since the number of pixels having a G component is equal to twice that of pixels having an R component or pixels having a B component, the single type of a color is reasonably set to G. Accordingly, the following description proceeds assuming that the color mosaic pattern of the color mosaic image Mg has a Bayer arrangement and that G is selected as the predetermined single type of a color.
At step S141, the smoothing section 81 discriminates whether or not the color of the noticed pixel is G. If the smoothing section 81 discriminates that the color of the noticed pixel is G, then the processing advances to step S142. In this instance, the colors of the four pixels positioned upwardly, downwardly, leftwardly and rightwardly of the noticed pixel are not G, and the colors of the four pixels positioned in the oblique directions from the noticed pixel are G.
At step S142, the smoothing section 81 interpolates the values G(U), G(D), G(L) and G(R) of G components corresponding to the four pixels positioned upwardly, downwardly, leftwardly and rightwardly of the noticed pixel, respectively, by applying the pixel value G(LU) of the pixel neighboring leftwardly upwards of the noticed pixel and having a G component, the pixel value G(LD) of the pixel neighboring leftwardly downwards of the noticed pixel and having a G component, the pixel value G(RU) of the pixel neighboring rightwardly upwards of the noticed pixel and having a G component and the pixel value G(RD) of the pixel neighboring rightwardly downwards of the noticed pixel and having a G component to the following expressions (13) to (16):
G(U)=(G(LU)+G(RU))/2 (13)
G(D)=(G(LD)+G(RD))/2 (14)
G(L)=(G(LU)+G(LD))/2 (15)
G(R)=(G(RU)+G(RD))/2 (16)
At step S143, the smoothing section 81 applies the values G(U), G(D), G(L) and G(R) of the G components corresponding to the four pixels positioned upwardly, downwardly, leftwardly and rightwardly of the noticed pixel to the following expressions (17) to (19) to calculate a vector g′ and normalize the vector g′ in accordance with the following expression (20) to calculate a gradient vector g:
gh=G(R)−G(L) (17)
gv=G(U)−G(D) (18)
g′=(gh,gv) (19)
g=g′/∥g′∥ (20)
It is to be noted that, if it is discriminated at step S141 that the color of the noticed pixel is not G, then the processing advances to step S144. In this instance, the colors of the four pixels positioned upwardly, downwardly, leftwardly and rightwardly of the noticed pixel are G.
At step S144, the smoothing section 81 acquires the pixel values of the four pixels positioned upwardly, downwardly, leftwardly and rightwardly of the noticed pixel and substitutes them into the values G(U), G(D), G(L) and G(R), respectively.
The image gradient vector g corresponding to the noticed pixel is arithmetically operated in such a manner as described above. It is to be noted that, also where the color mosaic pattern of the color mosaic image Mg does not have a Bayer arrangement, a similar process can be applied to arithmetically operate the image gradient vector g.
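A sketch of the image gradient vector arithmetic operation (steps S141 to S144) for a Bayer arrangement follows. The dictionary-based neighbor access and the handling of a vanishing g′ (which expression (20) leaves undefined) are assumptions of the sketch.

```python
import math

def gradient_vector(noticed_is_g, g_neighbors):
    """Normalized image gradient vector g of the noticed pixel.

    For a G noticed pixel, g_neighbors holds the diagonal G values under the
    keys 'LU', 'RU', 'LD', 'RD'; otherwise it holds the axial G values under
    'U', 'D', 'L', 'R' (Bayer arrangement)."""
    if noticed_is_g:
        g_u = (g_neighbors['LU'] + g_neighbors['RU']) / 2.0   # expression (13)
        g_d = (g_neighbors['LD'] + g_neighbors['RD']) / 2.0   # expression (14)
        g_l = (g_neighbors['LU'] + g_neighbors['LD']) / 2.0   # expression (15)
        g_r = (g_neighbors['RU'] + g_neighbors['RD']) / 2.0   # expression (16)
    else:
        g_u, g_d = g_neighbors['U'], g_neighbors['D']          # step S144
        g_l, g_r = g_neighbors['L'], g_neighbors['R']
    gh, gv = g_r - g_l, g_u - g_d          # expressions (17)-(19): g'
    norm = math.hypot(gh, gv)
    # Expression (20): g = g'/||g'||; a zero vector is returned if g' vanishes.
    return (gh / norm, gv / norm) if norm > 0 else (0.0, 0.0)
```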
The processing returns to step S124 of
At step S124, the smoothing section 81 refers to the color mosaic pattern information to detect those of pixels neighboring with the noticed pixel (for example, 5×5 pixels centered at the noticed pixel) which have an R component, and extracts the pixel values of the detected pixels (hereinafter referred to as reference pixels). Meanwhile, also the smoothing section 82 similarly refers to the color mosaic pattern information to detect those of pixels neighboring with the noticed pixel which have a G component, and extracts the pixel values of the detected pixels.
At step S125, the smoothing section 81 calculates the position vectors n from the noticed pixel to the reference pixels which have an R component and normalizes them. Meanwhile, also the smoothing section 82 similarly calculates the position vectors n from the noticed pixel to the reference pixels which have a G component and normalizes them.
At step S126, as shown in the following expression (21), the smoothing section 81 subtracts, for each of the reference pixels having an R component, the absolute value of the inner product of the gradient vector g of the noticed pixel and the position vector n from 1 and raises the difference to the ρth power to calculate a significance ω of the reference pixel. Meanwhile, also the smoothing section 82 similarly calculates a significance ω for each of the reference pixels having a G component. Here, ρ is a constant for adjusting the sharpness of direction selection and is set in advance.
ω=(1−|(n,g)|)^ρ (21)
At step S127, the smoothing section 81 acquires a number of filter coefficients set in advance corresponding to relative positions of the reference pixels having an R component to the noticed pixel, the number being equal to the number of the reference pixels. Meanwhile, also the smoothing section 82 similarly acquires a number of filter coefficients set in advance corresponding to relative positions of the reference pixels having a G component to the noticed pixel, the number being equal to the number of the reference pixels.
At step S128, the smoothing section 81 multiplies the pixel values of the reference pixels having an R component by the corresponding filter coefficients and significances ω and arithmetically operates the sum total of the products. Further, the smoothing section 81 multiplies the filter coefficients and the significances ω corresponding to the reference pixels and arithmetically operates the sum total of the products. Meanwhile, also the smoothing section 82 similarly multiplies the pixel values of the reference pixels having a G component by the corresponding filter coefficients and significances ω and arithmetically operates the sum total of the products. Further, the smoothing section 82 multiplies the filter coefficients and the significances ω corresponding to the reference pixels and arithmetically operates the sum total of the products.
At step S129, the smoothing section 81 divides the sum total of the products of the pixel values of the reference pixels having an R component and the corresponding filter coefficients and significances ω by the sum total of the products of the filter coefficients and the significances ω corresponding to the reference pixels calculated at step S128 and determines the quotient as a pixel value corresponding to the noticed pixel of the image R′ which includes only smoothed R components. Meanwhile, also the smoothing section 82 divides the sum total of the products of the pixel values of the reference pixels having a G component and the corresponding filter coefficients and significances ω by the sum total of the products of the filter coefficients and the significances ω corresponding to the reference pixels calculated at step S128 and determines the quotient as a pixel value corresponding to the noticed pixel of the image G′ which includes only smoothed G components.
At step S130, the subtractor 83 subtracts the pixel value corresponding to the noticed pixel of the image G′, which only includes smoothed G components, from the smoothing section 82 from the pixel value corresponding to the noticed pixel of the image R′, which only includes smoothed R components, from the smoothing section 81, and determines the difference as a pixel value of the noticed pixel of the color difference image C.
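Steps S124 to S130 can be sketched per color component as a significance-weighted smoothing followed by a subtraction. The triple-based representation of the reference pixels and the zero fallback for an empty denominator are assumptions of the sketch.

```python
import math

def smoothed_component(grad, reference_pixels, rho=2.0):
    """Direction-selective smoothing of one color component (steps S124-S129).

    grad is the normalized gradient vector of the noticed pixel; each entry of
    reference_pixels is a (pixel_value, position_vector, filter_coefficient)
    triple; rho is the sharpness constant of expression (21)."""
    numerator = denominator = 0.0
    for value, pos, coeff in reference_pixels:
        norm = math.hypot(pos[0], pos[1])
        nx, ny = (pos[0] / norm, pos[1] / norm) if norm > 0 else (0.0, 0.0)
        w = (1.0 - abs(nx * grad[0] + ny * grad[1])) ** rho   # expression (21)
        numerator += value * coeff * w                        # step S128
        denominator += coeff * w
    return numerator / denominator if denominator > 0 else 0.0  # step S129

def color_difference_c(r_smoothed, g_smoothed):
    """Step S130: pixel value of the color difference image C."""
    return r_smoothed - g_smoothed
```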
The processing returns to step S121 so that the processing at steps S121 to S130 is repeated until it is discriminated at step S121 that all pixels have been used as a noticed pixel. When it is discriminated at step S121 that all pixels have been used as a noticed pixel, the color difference image production process is ended and the processing returns to step S53 of
It is to be noted that, since the process of the color difference image production section 73 when it produces a color difference image D is similar to the second process of the color difference image production section 72 when it produces the color difference image C described above, description of it is omitted.
In the second process for producing a color difference image C, since a contour of an object in an image is detected and smoothing is executed in parallel to the contour, occurrence of a color moire effect can be suppressed when compared with that in the first process for producing the color difference image C.
Subsequently, a second example of a configuration of the image processing section 7 which principally executes the second demosaic process is described with reference to
The sensitivity uniformization section 111 performs a sensitivity uniformization process for the color and sensitivity mosaic image based on the color mosaic pattern information and the sensitivity mosaic pattern information and outputs a resulting color mosaic image M having a uniformized sensitivity to the color interpolation section 112. It is to be noted, however, that, since the color mosaic arrangement of the resulting color mosaic image M is not necessarily the same as the color mosaic arrangement of the original color and sensitivity mosaic image, the sensitivity uniformization section 111 updates the color mosaic pattern information and supplies it to the color interpolation section 112.
The color interpolation section 112 performs, similarly to the color interpolation section 52 of
In the first example of the configuration of the sensitivity uniformization section 111, a color and sensitivity mosaic image from the image pickup system is supplied to a sensitivity compensation section 121 and a validity discrimination section 123. Color mosaic pattern information is supplied to a missing interpolation section 124. Sensitivity mosaic pattern information is supplied to the sensitivity compensation section 121 and the validity discrimination section 123.
The sensitivity compensation section 121 performs sensitivity compensation for the color and sensitivity mosaic image based on a relative sensitivity value S obtained from a relative sensitivity value LUT 122 and outputs the resulting color and sensitivity mosaic image to the missing interpolation section 124. The relative sensitivity value LUT 122 is a lookup table which outputs a relative sensitivity value S using a sensitivity of a pixel as an index.
The validity discrimination section 123 compares the pixel value of each of the pixels of the color and sensitivity mosaic image with the threshold value θH of the saturation level and the threshold value θL of the noise level to discriminate the validity of the pixel value and supplies a result of the discrimination as discrimination information to the missing interpolation section 124. In the discrimination information, information representative of “valid” or “invalid” regarding the pixel value of each pixel is described.
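For one pixel, the sensitivity compensation section 121 and the validity discrimination section 123 can be sketched together as shown below. Dividing by the relative sensitivity value S is one plausible realization of sensitivity compensation based on S, and the inclusive comparisons are likewise assumptions; the description only states that the pixel value is compared with θH and θL.

```python
def compensate_and_discriminate(pixel_value, relative_sensitivity, theta_h, theta_l):
    """Sensitivity compensation (section 121) and validity discrimination
    (section 123) for one pixel of the color and sensitivity mosaic image."""
    compensated = pixel_value / relative_sensitivity   # assumed compensation by S
    valid = theta_l <= pixel_value <= theta_h          # discrimination information
    return compensated, valid
```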
Based on the discrimination information from the validity discrimination section 123, the missing interpolation section 124 uses the pixel values of those pixels of the sensitivity-compensated color and sensitivity mosaic image whose discrimination information is valid as they are, and, for each pixel whose discrimination information is invalid, interpolates a pixel value from the pixel values of neighboring pixels having the color which is included most in the sensitivity-compensated color and sensitivity mosaic image. Using the pixels of the most frequent color in this manner facilitates restoration of high frequency components. Further, the missing interpolation section 124 updates the color mosaic pattern information so that it corresponds to the color mosaic arrangement of the produced color mosaic image M and outputs the updated color mosaic pattern information to the color interpolation section 112.
Now, a second demosaic process executed principally by the second example of the configuration of the image processing section 7 shown in
At step S151, the missing interpolation section 124 discriminates whether or not all pixels of the sensitivity-compensated color and sensitivity mosaic image have been used as a noticed pixel. If the missing interpolation section 124 discriminates that all pixels have not been used as a noticed pixel, then the processing advances to step S152. At step S152, the missing interpolation section 124 determines one by one pixel as a noticed pixel beginning with the left lowermost pixel and ending with the right uppermost pixel of the sensitivity-compensated color and sensitivity mosaic image.
At step S153, the missing interpolation section 124 discriminates whether or not the discrimination information of the noticed pixel is invalid. If the missing interpolation section 124 discriminates that the discrimination information is invalid, then the processing advances to step S154.
At step S154, the missing interpolation section 124 refers to the color mosaic pattern information to detect those pixels neighboring with the noticed pixel (for example, 5×5 pixels centered at the noticed pixel) which have a G component and whose discrimination information is valid, and extracts the pixel values of the detected pixels (hereinafter referred to as reference pixels). Further, the missing interpolation section 124 acquires a number of filter coefficients set in advance corresponding to relative positions of the reference pixels to the noticed pixel, the number being equal to the number of the reference pixels. Furthermore, the missing interpolation section 124 multiplies the pixel values of the reference pixels and the corresponding filter coefficients and arithmetically operates the sum total of the products. Further, the missing interpolation section 124 divides the sum total of the products by the sum total of the used filter coefficients and determines the quotient as a pixel value of the noticed pixel of the color mosaic image M.
At step S155, the missing interpolation section 124 updates the color of the noticed pixel in the color mosaic pattern information to G.
It is to be noted that, if it is discriminated at step S153 that the discrimination information of the noticed pixel is not invalid, then the processes at steps S154 and S155 are skipped.
The processing returns to step S151 so that the processing at steps S151 to S155 is repeated until it is discriminated at step S151 that all pixels have been used as a noticed pixel. When it is discriminated at step S151 that all pixels have been used as a noticed pixel, the missing interpolation process is ended, and the obtained color mosaic image M and the updated color mosaic pattern information are supplied to the color interpolation section 112 in the following stage.
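The interpolation of step S154 for an invalid noticed pixel reduces to a coefficient-weighted average over the valid G reference pixels. A sketch under the assumption that the neighborhood is handed over as parallel lists:

```python
def interpolate_missing(values, valid_flags, coefficients):
    """Step S154: pixel value of the color mosaic image M for an invalid pixel.

    values, valid_flags and coefficients describe the neighboring pixels that
    have a G component: their pixel values, their discrimination results and
    the preset filter coefficients for their relative positions."""
    numerator = sum(v * c for v, c, ok in zip(values, coefficients, valid_flags) if ok)
    denominator = sum(c for c, ok in zip(coefficients, valid_flags) if ok)
    return numerator / denominator if denominator > 0 else 0.0
```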
Now, a second example of a configuration of the sensitivity uniformization section 111 which can be used in place of the first example of the configuration of the sensitivity uniformization section 111 shown in
The second example of the configuration is an example of a configuration for allowing the sensitivity uniformization section 111 to execute the second sensitivity uniformization process of the second demosaic process described hereinabove with reference to
The following description proceeds assuming that, in the color and sensitivity mosaic image, the color of each pixel is one of the three primary colors of R, G and B as in the color and sensitivity mosaic pattern P10 of
In the second example of the configuration of the sensitivity uniformization section 111, a color and sensitivity mosaic image from the image pickup system, color mosaic pattern information and sensitivity mosaic pattern information are supplied to interpolation sections 132-1 to 132-4. The color mosaic pattern information is supplied also to an interpolation color determination section 131.
The interpolation color determination section 131 designates the color (interpolation color) of interpolation values to be interpolated by the interpolation sections 132-1 to 132-4 based on the color mosaic pattern information. Further, the interpolation color determination section 131 updates the color mosaic pattern information in accordance with the determination of the interpolation colors.
The interpolation section 132-1 performs an interpolation process of the sensitivity S0 for the color and sensitivity mosaic image in accordance with the designation of an interpolation color from the interpolation color determination section 131 and outputs a resulting interpolation value corresponding to the sensitivity S0 to an adder 133. The interpolation section 132-2 performs an interpolation process of the sensitivity S1 for the color and sensitivity mosaic image in accordance with the designation of the interpolation color from the interpolation color determination section 131 and outputs a resulting interpolation value corresponding to the sensitivity S1 to the adder 133. The interpolation section 132-3 performs an interpolation process of the sensitivity S2 for the color and sensitivity mosaic image in accordance with the designation of the interpolation color from the interpolation color determination section 131 and outputs a resulting interpolation value corresponding to the sensitivity S2 to the adder 133. The interpolation section 132-4 performs an interpolation process of the sensitivity S3 for the color and sensitivity mosaic image in accordance with the designation of the interpolation color from the interpolation color determination section 131 and outputs a resulting interpolation value corresponding to the sensitivity S3 to the adder 133.
The adder 133 adds the interpolation values of the sensitivities S0 to S3 inputted thereto from the interpolation sections 132-1 to 132-4 for each pixel and supplies the sum as a pixel value of a color mosaic candidate image to a synthetic sensitivity compensation section 134.
The synthetic sensitivity compensation section 134 collates the pixel value of the color mosaic candidate image supplied thereto from the adder 133 with a synthetic sensitivity compensation LUT 135, produces a color mosaic image M wherein the resulting value is used as a pixel value, and supplies the color mosaic image M to the color interpolation section 112. The synthetic sensitivity compensation LUT 135 is a lookup table which allows a pixel value of the color mosaic image M to be acquired using a pixel value of the color mosaic candidate image as an index.
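For illustration only, the following Python sketch (not part of the original disclosure) outlines how the summation by the adder 133 and the table lookup by the synthetic sensitivity compensation section 134 might be realized, assuming that the per-sensitivity interpolation values are held in floating-point arrays and that the LUT 135 is a one-dimensional numpy array indexed by the rounded candidate pixel value; these names, shapes and indexing details are assumptions.

import numpy as np

def compensate_with_lut(interp_s0, interp_s1, interp_s2, interp_s3, lut):
    # Adder 133: sum the interpolation values of the sensitivities S0 to S3
    # for each pixel to obtain the color mosaic candidate image.
    candidate = interp_s0 + interp_s1 + interp_s2 + interp_s3
    # Synthetic sensitivity compensation section 134: use the candidate pixel
    # value as an index into the LUT 135 to obtain the pixel value of the
    # color mosaic image M (rounding and clipping are assumptions).
    index = np.clip(np.rint(candidate).astype(int), 0, len(lut) - 1)
    return lut[index]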
A second sensitivity uniformization process in the second demosaic process by the second example of the configuration of the sensitivity uniformization section 111 shown in
At step S161, the interpolation sections 132-1 to 132-4 discriminate whether or not all pixels of the color and sensitivity mosaic image have been used as a noticed pixel. If the interpolation sections 132-1 to 132-4 discriminate that all pixels have not been used as a noticed pixel, then the processing advances to step S162. At step S162, the interpolation sections 132-1 to 132-4 determine one by one pixel as a noticed pixel beginning with the left lowermost pixel and ending with the right uppermost pixel of the color and sensitivity mosaic image.
At step S163, the interpolation color determination section 131 executes an interpolation color determination process based on the color mosaic pattern information and issues a notification of a resulting interpolation color of the noticed pixel to the interpolation sections 132-1 to 132-4.
Details of the interpolation color determination process of the interpolation color determination section 131 are described with reference to a flow chart of
At step S171, the interpolation color determination section 131 refers to the color mosaic pattern information to discriminate the color of the noticed pixel.
If it is discriminated at step S171 that the color of the noticed pixel is G, then the processing advances to step S172. In this instance, also the colors of the four pixels neighboring in the oblique directions with the noticed pixel are G. At step S172, the interpolation color determination section 131 determines the interpolation color of the noticed pixel as G and issues a notification of this to the interpolation sections 132-1 to 132-4. Further, the interpolation color determination section 131 updates the color mosaic pattern information corresponding to the noticed pixel to G.
If it is discriminated at step S171 that the color of the noticed pixel is R, then the processing advances to step S173. In this instance, the colors of the four pixels neighboring in the oblique directions with the noticed pixel are B. At step S173, the interpolation color determination section 131 determines the interpolation color of the noticed pixel as B and issues a notification of this to the interpolation sections 132-1 to 132-4. Further, the interpolation color determination section 131 updates the color mosaic pattern information corresponding to the noticed pixel to B.
If it is discriminated at step S171 that the color of the noticed pixel is B, then the processing advances to step S174. In this instance, also the colors of the four pixels neighboring in the oblique directions with the noticed pixel are R. At step S174, the interpolation color determination section 131 determines the interpolation color of the noticed pixel as R and issues a notification of this to the interpolation sections 132-1 to 132-4. Further, the interpolation color determination section 131 updates the color mosaic pattern information corresponding to the noticed pixel to R.
With the interpolation color determination process described above, the interpolation color of the noticed pixel is designated so that R and B of the color and sensitivity mosaic image whose color mosaic arrangement is a Bayer arrangement are exchanged for each other. Therefore, also the updated color mosaic pattern information maintains the Bayer arrangement.
The processing returns to step S164 of
More particularly, for example, the interpolation section 132-1 detects, from among pixels positioned in the neighborhood of the noticed pixel of the color and sensitivity mosaic image (for example, from among 5×5 pixels centered at the noticed pixel), those pixels which have the color designated from the interpolation color determination section 131 and whose sensitivity is S0, and extracts the pixel values of the detected pixels (hereinafter referred to as reference pixels). Further, the interpolation section 132-1 acquires a number of filter coefficients set in advance corresponding to relative positions of the detected reference pixels to the noticed pixel, the number being equal to the number of the reference pixels. Furthermore, the interpolation section 132-1 multiplies the pixel values of the reference pixels and the corresponding filter coefficients and arithmetically operates the sum total of the products. Further, the interpolation section 132-1 divides the sum total of the products by the sum total of the used filter coefficients and determines the quotient as an interpolation value corresponding to the sensitivity S0 of the noticed pixel.
It is to be noted that the interpolation processes for the sensitivities S1 to S3 by the interpolation sections 132-2 to 132-4 are similar to the interpolation process for the sensitivity S0 by the interpolation section 132-1, and therefore, description of them is omitted.
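For illustration only, a minimal Python sketch of the interpolation performed by each interpolation section 132-i is given below, assuming a 5×5 neighborhood as in the description above; the array names, the representations of the color and sensitivity maps and the filter coefficient values are assumptions and not part of the original disclosure.

def interpolate_for_sensitivity(mosaic, color_map, sens_map, y, x,
                                color, sensitivity, coeffs):
    # Among the pixels in the neighborhood of the noticed pixel (y, x), detect
    # the reference pixels which have the designated interpolation color and
    # the sensitivity handled by this section, weight them by the
    # position-dependent filter coefficients 'coeffs', and normalize by the
    # sum of the coefficients actually used.
    h, w = mosaic.shape
    r = coeffs.shape[0] // 2        # e.g. 2 for a 5x5 filter
    num = den = 0.0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            yy, xx = y + dy, x + dx
            if (0 <= yy < h and 0 <= xx < w
                    and color_map[yy, xx] == color
                    and sens_map[yy, xx] == sensitivity):
                c = coeffs[dy + r, dx + r]
                num += c * mosaic[yy, xx]
                den += c
    return num / den if den else 0.0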
At step S165, the adder 133 adds the interpolation values for the sensitivities S0 to S3 corresponding to the noticed pixel inputted thereto from the interpolation sections 132-1 to 132-4 and supplies the sum as a pixel value of the color mosaic candidate image corresponding to the noticed pixel to the synthetic sensitivity compensation section 134.
At step S166, the synthetic sensitivity compensation section 134 collates the pixel value of the color mosaic candidate image supplied thereto from the adder 133 with the synthetic sensitivity compensation LUT 135 and determines a resulting value as a pixel value of the color mosaic image M corresponding to the noticed pixel.
The processing returns to step S161 so that the processing at steps S161 to S166 is repeated until it is discriminated at step S161 that all pixels have been used as a noticed pixel. When it is discriminated at step S161 that all pixels have been used as a noticed pixel, the second sensitivity uniformization process in the second demosaic process is ended.
It is to be noted that, although a color interpolation process is performed by the color interpolation section 112 for the color mosaic image M obtained by the second sensitivity uniformization process of the second demosaic process, since the process is similar to the color interpolation process described hereinabove with reference to the flow chart of
The third demosaic process includes, as seen in
The by-sensitivity-basis color interpolation process of the third demosaic process includes an extraction process for extracting only those pixels which have the same sensitivity from the color and sensitivity mosaic image, a color interpolation process for interpolating the pixel values of the RGB components of the pixels extracted by the extraction process, and an insertion process for synthesizing the pixel values interpolated by the color interpolation process for each of the RGB components to produce sensitivity mosaic images.
For example, in the extraction process, only the pixels which have the sensitivity S1 are extracted from the color and sensitivity mosaic image to produce a color mosaic image McS1 wherein the pixels are disposed in a checkered manner. In the color interpolation process, an image Rs1 wherein the pixels which have the sensitivity S1 and have an R component are disposed in a checkered manner, another image Gs1 wherein the pixels which have the sensitivity S1 and have a G component are disposed in a checkered manner and a further image Bs1 wherein the pixels which have the sensitivity S1 and have a B component are disposed in a checkered manner are produced from the color mosaic image McS1.
For example, in the insertion process, an image Rs0 and another image Rs1 produced by the color interpolation process are combined to produce a sensitivity mosaic image MsR.
Subsequently, a third example of a configuration of the image processing section 7 which principally executes the third demosaic process is described with reference to
In the third example of the configuration of the image processing section 7, a color and sensitivity mosaic image from the image pickup system is supplied to a by-sensitivity-basis color interpolation section 151. Color mosaic pattern information representative of a color mosaic arrangement of the color and sensitivity mosaic image is supplied to the by-sensitivity-basis color interpolation section 151. Sensitivity mosaic pattern information representative of a sensitivity mosaic arrangement of the color and sensitivity mosaic image is supplied to the by-sensitivity-basis color interpolation section 151 and sensitivity uniformization sections 152 to 154.
It is to be noted that, in the following description, unless otherwise specified, the color and sensitivity mosaic image has the color and sensitivity mosaic pattern P3 of
However, the configuration and the operation described below can be applied also to another color and sensitivity mosaic image having three colors other than R, G and B or a further color and sensitivity mosaic image which has four colors.
The by-sensitivity-basis color interpolation section 151 performs a by-sensitivity-basis color interpolation process for the color and sensitivity mosaic image and supplies resulting sensitivity mosaic image MsR for an R component, sensitivity mosaic image MsG for a G component and sensitivity mosaic image MsB for a B component to corresponding ones of the sensitivity uniformization sections 152 to 154, respectively.
The sensitivity uniformization section 152 performs a sensitivity uniformization process for the sensitivity mosaic image MsR for an R component to produce an output image R. The sensitivity uniformization section 153 performs a sensitivity uniformization process for the sensitivity mosaic image MsG for a G component to produce an output image G. The sensitivity uniformization section 154 performs a sensitivity uniformization process for the sensitivity mosaic image MsB for a B component to produce an output image B.
The extraction section 161 performs an extraction process of the sensitivity Si (in the present case, i=0 or 1) for the color and sensitivity mosaic image and supplies a resulting color mosaic image McSi which includes pixels of the sensitivity Si to a color interpolation section 162. It is to be noted that the color mosaic image McSi is an image represented using an st coordinate system different from the xy coordinate system of the original color and sensitivity mosaic image (details are hereinafter described with reference to
The color interpolation section 162 interpolates RGB components of all pixels of the color mosaic image McSi from the extraction section 161 and supplies resulting images Rsi, Gsi and Bsi to the corresponding insertion sections 163 to 165, respectively. The image Rsi is an image composed of pixel values of R components corresponding to the pixels of the color mosaic image McSi. The image Gsi is an image composed of pixel values of G components corresponding to the pixels of the color mosaic image McSi. The image Bsi is an image composed of pixel values of B components corresponding to the pixels of the color mosaic image McSi. Further, the images Rsi, Gsi and Bsi are represented using a coordinate system same as that of the color mosaic image McSi. It is to be noted that the color interpolation section 162 is configured in a similar manner as in the example of the configuration of the color interpolation section 52 shown in
The insertion section 163 combines a number of images Rsi of an R component equal to the number of kinds of sensitivities supplied from the color interpolation section 162 based on the original position information of the sensitivity Si supplied from the extraction section 161 to produce a sensitivity mosaic image MsR, and supplies the sensitivity mosaic image MsR to the sensitivity uniformization section 152. The insertion section 164 combines a number of images Gsi of a G component equal to the number of kinds of sensitivities supplied from the color interpolation section 162 based on the original position information of the sensitivity Si supplied from the extraction section 161 to produce a sensitivity mosaic image MsG, and supplies the sensitivity mosaic image MsG to the sensitivity uniformization section 153. The insertion section 165 combines a number of images Bsi of a B component equal to the number of kinds of sensitivities supplied from the color interpolation section 162 based on the original position information of the sensitivity Si supplied from the extraction section 161 to produce a sensitivity mosaic image MsB, and supplies the sensitivity mosaic image MsB to the sensitivity uniformization section 154.
It is to be noted that examples of configurations of the sensitivity uniformization sections 153 and 154 are similar to the example of the configuration of the sensitivity uniformization section 152 shown in
Subsequently, a third demosaic process by the third example of the configuration of the image processing section 7 shown in
At step S181, the by-sensitivity-basis color interpolation section 151 performs a by-sensitivity-basis color interpolation process for the color and sensitivity mosaic image to produce an R component sensitivity mosaic image MsR, a G component sensitivity mosaic image MsG and a B component sensitivity mosaic image MsB and supplies them to the sensitivity uniformization sections 152 to 154, respectively.
Details of the by-sensitivity-basis color interpolation process of the by-sensitivity-basis color interpolation section 151 are described with reference to a flow chart of
At step S192, the extraction section 161 designates one of all the kinds of sensitivities included in the sensitivity mosaic pattern information. The designated sensitivity is represented by Si.
At step S193, the extraction section 161 extracts only pixels of the sensitivity Si from among all pixels of the color and sensitivity mosaic image to produce a color mosaic image McSi of the sensitivity Si and supplies the color mosaic image McSi to the color interpolation section 162. Further, the extraction section 161 produces original position information of the sensitivity Si which keeps a positional relationship between the color mosaic image McSi and the original color and sensitivity mosaic image and supplies the original position information to the insertion sections 163 to 165. Further, the extraction section 161 produces color mosaic pattern information of the sensitivity Si representative of a color mosaic arrangement of the color mosaic image McSi and supplies the color mosaic pattern information to the color interpolation section 162.
Details of the process at step S193 are described with reference to
Since the extracted pixels of the sensitivity Si do not keep the pixel distance of the original color and sensitivity mosaic image, the color mosaic image McSi of the sensitivity Si produced is formed in a grating wherein the pixel distance, the origin and the direction are different from those of the original color and sensitivity mosaic image. Therefore, the extraction section 161 produces, simultaneously with production of the color mosaic image McSi, original position information which allows, for each pixel, information of the original position to be referred to based on a corresponding relationship between the coordinate system of the original color and sensitivity mosaic image and the coordinate system of the color mosaic image McSi.
The corresponding relationship between the coordinate systems of the original color and sensitivity mosaic image and the color mosaic image McSi to be produced is such as illustrated in
Extraction of pixels of the sensitivity S0 represented by ▪ of the color and sensitivity mosaic image is described with reference to
sA={(xA−1)+yA}/2
tA={(xmax−1−xA)+yA}/2 (22)
The extraction section 161 applies the coordinates (xA, yA) of the pixel of the sensitivity S0 of the original color and sensitivity mosaic image to the expression (22) to calculate the coordinates (sA, tA) on the color mosaic image McSi and uses the value of the pixel for the coordinates to produce a color mosaic image McSi. Simultaneously, the extraction section 161 places the coordinates (xA, yA) in a corresponding relationship to the coordinates (sA, tA) into the original position information of the sensitivity S0.
Extraction of a pixel of the sensitivity S1 represented by □ of the color and sensitivity mosaic image is described with reference to
sB=(xB+yB)/2
tB={(xmax−1−xB)+yB}/2 (23)
The extraction section 161 applies the coordinates (xB, yB) of the pixel of the sensitivity S1 of the original color and sensitivity mosaic image to the expression (23) to calculate the coordinates (sB, tB) on the color mosaic image McSi and uses the value of the pixel for the coordinates to produce a color mosaic image McSi. Simultaneously, the extraction section 161 places the coordinates (xB, yB) in a corresponding relationship to the coordinates (sB, tB) into the original position information of the sensitivity S1.
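For illustration only, the coordinate conversion of expressions (22) and (23) may be sketched in Python as follows; the function name and the use of integer division are assumptions (on the checkered arrangements concerned both numerators are even), and the sketch is not part of the original disclosure.

def to_st_coordinates(x, y, x_max, sensitivity):
    # Maps the xy coordinates of an extracted pixel of the original color and
    # sensitivity mosaic image to the st coordinates of the color mosaic image
    # McSi, following expression (22) for the sensitivity S0 and expression
    # (23) for the sensitivity S1.
    if sensitivity == 0:
        s = ((x - 1) + y) // 2          # expression (22)
    else:
        s = (x + y) // 2                # expression (23)
    t = ((x_max - 1 - x) + y) // 2      # common to expressions (22) and (23)
    return s, t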
Referring back to
The processing returns to step S191 so that the processing at steps S191 to S194 is repeated until it is discriminated at step S191 that all sensitivities included in the sensitivity mosaic pattern information have been designated. When it is discriminated at step S191 that all sensitivities included in the sensitivity mosaic pattern information have been designated, the processing advances to step S195.
At step S195, the insertion section 163 combines a number of images Rsi of an R component (in the present case, the images Rs0 and Rs1) equal to the number of kinds of sensitivities supplied from the color interpolation section 162 based on all of the original position information supplied from the extraction section 161 to produce a sensitivity mosaic image MsR, and supplies the sensitivity mosaic image MsR to the sensitivity uniformization section 152. Similarly, the insertion section 164 produces and supplies a sensitivity mosaic image MsG to the sensitivity uniformization section 153, and the insertion section 165 produces and supplies a sensitivity mosaic image MsB to the sensitivity uniformization section 154.
The processing returns to step S182 of
The sensitivity uniformization process of the sensitivity uniformization section 152 is described with reference to a flow chart of
At step S203, the local sum calculation section 171 calculates a local sum corresponding to the noticed pixel and supplies it to the synthetic sensitivity compensation section 172. More particularly, the pixel values of 5×5 pixels (hereinafter referred to as reference pixels) centered at the noticed pixel are extracted, and the pixel values are multiplied by such filter coefficients set in advance corresponding to relative positions of the reference pixels to the noticed pixel as seen in
At step S204, the synthetic sensitivity compensation section 172 collates the local sum with the synthetic sensitivity compensation LUT 173 to acquire a corresponding compensation value and determines the compensation value as a pixel value of the output image R corresponding to the noticed pixel.
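For illustration only, the following Python sketch combines the local sum calculation of step S203 and the table lookup of step S204, assuming a 5×5 filter and a one-dimensional LUT indexed by the rounded local sum; the coefficient values, the border handling and the indexing details are assumptions and not part of the original disclosure.

import numpy as np

def uniformize_sensitivity(sensitivity_mosaic, coeffs, lut):
    # Local sum calculation section 171: for each noticed pixel, multiply the
    # 5x5 reference pixels by the position-dependent filter coefficients
    # 'coeffs' and sum the products.
    # Synthetic sensitivity compensation section 172: look the local sum up in
    # the LUT 173 to obtain the compensated output pixel value.
    h, w = sensitivity_mosaic.shape
    r = coeffs.shape[0] // 2
    padded = np.pad(sensitivity_mosaic, r, mode='edge')
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * r + 1, x:x + 2 * r + 1]
            local_sum = float((window * coeffs).sum())
            idx = int(np.clip(round(local_sum), 0, len(lut) - 1))
            out[y, x] = lut[idx]
    return out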
The processing returns to step S201 so that the processing at steps S201 to S204 is repeated until it is discriminated at step S201 that all pixels have been used as a noticed pixel. When it is discriminated at step S201 that all pixels have been used as a noticed pixel, the sensitivity uniformization process is ended, and the processing returns to
It is to be noted that, although also the sensitivity uniformization sections 153 and 154 execute a similar sensitivity uniformization process in parallel to the sensitivity uniformization process of the sensitivity uniformization section 152, detailed description of it is omitted.
Description of the third demosaic process by the third example of the configuration of the image processing section 7 is ended therewith.
Subsequently, an outline of a fourth demosaic process of the image processing system including the image processing section 7 as a principal component is described.
The fourth demosaic process includes a luminance image production process for producing a luminance image from a color and sensitivity mosaic image obtained by processing of the image pickup system, and a monochromatic image process for producing output images R, G and B using the color and sensitivity mosaic image and the luminance image.
In the fourth example of the configuration of the image processing section 7, a color and sensitivity mosaic image from the image pickup system, color mosaic pattern information which indicates a color mosaic arrangement of the color and sensitivity mosaic image and sensitivity mosaic pattern information which indicates a sensitivity mosaic arrangement of the color and sensitivity mosaic image are supplied to a luminance image production section 181 and monochromatic image production sections 182 to 184.
It is to be noted that, in the following description, unless otherwise specified, the color and sensitivity mosaic image has the color and sensitivity mosaic pattern P2 of
However, the configuration and the operation described below can be applied also to another color and sensitivity mosaic image which includes three colors other than R, G and B or a further color and sensitivity mosaic image which includes four colors.
The luminance image production section 181 performs a luminance image production process for the color and sensitivity mosaic image supplied thereto and supplies a resulting luminance image to the monochromatic image production sections 182 to 184.
The monochromatic image production section 182 produces an output image R using the color and sensitivity mosaic image and the luminance image supplied thereto. The monochromatic image production section 183 produces an output image G using the color and sensitivity mosaic image and the luminance image supplied thereto. The monochromatic image production section 184 produces an output image B using the color and sensitivity mosaic image and luminance image supplied thereto.
The estimation section 191 performs an R component estimation process for the color and sensitivity mosaic image and supplies an estimation value R′ of an R component for each pixel obtained by the process to a multiplier 194. The estimation section 192 performs a G component estimation process for the color and sensitivity mosaic image and supplies an estimation value G′ of a G component for each pixel obtained by the process to another multiplier 195. The estimation section 193 performs a B component estimation process for the color and sensitivity mosaic image and supplies an estimation value B′ of a B component for each pixel obtained by the process to a further multiplier 196.
The multiplier 194 multiplies the estimation value R′ supplied from the estimation section 191 by a color balance coefficient kR and outputs the product to an adder 197. The multiplier 195 multiplies the estimation value G′ supplied from the estimation section 192 by a color balance coefficient kG and outputs the product to the adder 197. The multiplier 196 multiplies the estimation value B′ supplied from the estimation section 193 by a color balance coefficient kB and outputs the product to the adder 197.
The adder 197 adds the product R′·kR inputted from the multiplier 194, the product G′·kG inputted from the multiplier 195 and the product B′·kB inputted from the multiplier 196, and produces a luminance candidate image wherein the resulting sum is used as a pixel value and supplies the luminance candidate image to a noise removal section 198.
Here, the color balance coefficients kR, kG and kB are values set in advance and, for example, kR=0.3, kG=0.6 and kB=0.1. It is to be noted that, basically, the color balance coefficients kR, kG and kB may have any values only if they can be used to calculate, as a luminance candidate value, a value having a correlation to a luminance variation. Accordingly, for example, the color balance coefficients may be kR=kG=kB.
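For illustration only, the operation of the multipliers 194 to 196 and the adder 197 may be sketched as the following Python function, which applies equally to single estimation values and to whole arrays of them; the default coefficient values are those given above, and the function itself is not part of the original disclosure.

def luminance_candidate(r_est, g_est, b_est, k_r=0.3, k_g=0.6, k_b=0.1):
    # Multipliers 194 to 196 and adder 197: weight the estimation values R',
    # G' and B' by the color balance coefficients kR, kG and kB and add the
    # products to obtain the luminance candidate value.
    return k_r * r_est + k_g * g_est + k_b * b_est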
The noise removal section 198 performs a noise removal process for the luminance candidate image supplied from the adder 197 and supplies the resulting luminance image to monochromatic image production sections 182 to 184.
The interpolation section 201 performs an interpolation process for the color and sensitivity mosaic image and outputs an R candidate image wherein all resulting pixels have pixel values of an R component to the ratio value calculation section 202. The ratio value calculation section 202 calculates a low-frequency component of an intensity ratio (the low-frequency component is hereinafter referred to merely as an intensity ratio) between corresponding pixels of the R candidate image and the luminance image and produces ratio value information which represents an intensity ratio corresponding to each pixel, and supplies the ratio value information to the multiplier 203.
The multiplier 203 multiplies the pixel value of each pixel of the luminance image by the corresponding intensity ratio and produces an output image R having the product as a pixel value.
It is to be noted that, since also examples of a configuration of the monochromatic image production sections 183 and 184 are similar to the example of the configuration of the monochromatic image production section 182, description of them is omitted.
Now, the fourth demosaic process by the fourth example of the configuration of the image processing section 7 is described with reference to a flow chart of
At step S211, the luminance image production section 181 performs a luminance image production process for the color and sensitivity mosaic image to produce a luminance image and supplies the luminance image to the monochromatic image production sections 182 to 184.
The luminance image production process of the luminance image production section 181 is described with reference to a flow chart of
At step S221, the estimation sections 191 to 193 discriminate whether or not all pixels of the color and sensitivity mosaic image have been used as a noticed pixel. If the estimation sections 191 to 193 discriminate that all pixels have not been used as a noticed pixel, then the processing advances to step S222. At step S222, the estimation sections 191 to 193 determine one by one pixel as a noticed pixel beginning with the left lowermost pixel and ending with the right uppermost pixel of the color and sensitivity mosaic image.
At step S223, the estimation section 191 performs an R component estimation process for the color and sensitivity mosaic image to estimate an estimation value R′ corresponding to the noticed pixel and supplies the estimation value R′ to the multiplier 194. The estimation section 192 performs a G component estimation process for the color and sensitivity mosaic image to estimate an estimation value G′ corresponding to the noticed pixel and supplies the estimation value G′ to the multiplier 195. The estimation section 193 performs a B component estimation process for the color and sensitivity mosaic image to estimate an estimation value B′ corresponding to the noticed pixel and supplies the estimation value B′ to the multiplier 196.
The R component estimation process of the estimation section 191 is described with reference to a flow chart of
At step S232, the estimation section 191 acquires a number of such R component interpolation filter coefficients set in advance corresponding to relative positions of the reference pixels to the noticed pixel as shown in
At step S233, the estimation section 191 refers to the color mosaic pattern information and the sensitivity mosaic pattern information to detect those of pixels neighboring with the noticed pixel (for example, 15×15 pixels centered at the noticed pixel) which have an R component and have the sensitivity S1, and extracts the pixel values of the detected pixels (hereinafter referred to as reference pixels).
At step S234, the estimation section 191 acquires a number of R component interpolation filter coefficients corresponding to relative positions of the reference pixels to the noticed pixel, the number being equal to the number of the reference pixels. Further, the estimation section 191 multiplies the pixel values of the reference pixels and the corresponding filter coefficients and arithmetically operates the sum total of the products. Furthermore, the estimation section 191 divides the sum total of the products by the sum total of the used interpolation filter coefficients to acquire a second quotient.
At step S235, the estimation section 191 adds the first quotient acquired at step S232 and the second quotient acquired at step S234. At step S236, the estimation section 191 collates the sum of the first quotient and the second quotient arithmetically operated at step S235 with a synthetic sensitivity compensation LUT (hereinafter described) built therein to acquire a compensation value of a compensated sensitivity characteristic. The acquired compensation value is determined as an estimation value R′ corresponding to the noticed pixel. The processing returns to step S224 of
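For illustration only, the following Python sketch summarizes the R component estimation of steps S232 to S236, assuming that a single set of interpolation filter coefficients is used for both sensitivities and that the LUT is a one-dimensional table indexed by the rounded sum; in the description above the quotients for the sensitivities S0 and S1 may use different neighborhoods and coefficients, so this is a simplification and not part of the original disclosure.

def estimate_r_component(mosaic, color_map, sens_map, y, x, coeffs, lut):
    # Estimation section 191: the first quotient is a normalized weighted sum
    # of the neighboring R pixels of the sensitivity S0, the second quotient is
    # the same for the sensitivity S1, and the sum of the two quotients is
    # compensated through the synthetic sensitivity compensation LUT.
    def quotient(target_sensitivity):
        h, w = mosaic.shape
        r = coeffs.shape[0] // 2
        num = den = 0.0
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                yy, xx = y + dy, x + dx
                if (0 <= yy < h and 0 <= xx < w
                        and color_map[yy, xx] == 'R'
                        and sens_map[yy, xx] == target_sensitivity):
                    c = coeffs[dy + r, dx + r]
                    num += c * mosaic[yy, xx]
                    den += c
        return num / den if den else 0.0

    total = quotient(0) + quotient(1)                # step S235: first + second quotient
    idx = int(min(max(round(total), 0), len(lut) - 1))
    return lut[idx]                                   # step S236: estimation value R'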
It is to be noted that, since the G component estimation process of the estimation section 192 and the B component estimation process of the estimation section 193 are similar to the R component estimation process of the estimation section 191, description of them is omitted. It is to be noted, however, that in the G component estimation process of the estimation section 192, reference pixels are detected from among 7×7 pixels centered at the noticed pixel, and further, the G component interpolation filter coefficients illustrated in
Here, the synthetic sensitivity compensation LUT used by the estimation section 191 is described with reference to
In the estimation process, a first quotient calculated from a pixel of the sensitivity S0 measured with such a characteristic as indicated by the characteristic curve b of
While the synthesized characteristic curve c exhibits a characteristic of a wide dynamic range from a low luminance to a high luminance, since it has a shape of a polygonal line, an original linear characteristic is restored using a characteristic curve reverse to the sensitivity characteristic curve c. More particularly, the sum of the first quotient and the second quotient is applied to a reverse characteristic curve d to the sensitivity characteristic curve c of
In particular, the synthetic sensitivity compensation LUT is obtained by converting the reverse characteristic curve d of
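For illustration only, one possible way of preparing such a table is sketched below, assuming that monotonically increasing samples of the synthesized characteristic curve c (light intensity versus measured sum) are available; the table size, the sampling and the use of linear interpolation are assumptions and not part of the original disclosure.

import numpy as np

def build_synthetic_sensitivity_lut(intensity, measured_sum, table_size=4096):
    # Approximates the reverse characteristic curve d: for every possible
    # measured sum used as an index, the table returns the restored linear
    # intensity obtained by inverting the synthesized characteristic curve c.
    indices = np.arange(table_size)
    return np.interp(indices, measured_sum, intensity)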
Description is given with reference back to
The processing returns to step S221 so that the processing at steps S221 to S224 is repeated until it is discriminated at step S221 that all pixels have been used as a noticed pixel. When it is discriminated at step S221 that all pixels have been used as a noticed pixel, the processing advances to step S225. It is to be noted that the luminance candidate image produced by the processes at steps S221 to S224 is supplied to the noise removal section 198.
At step S225, the noise removal section 198 performs a noise removal process for the luminance candidate image supplied thereto from the adder 197 to produce a luminance image and supplies the luminance image to the monochromatic image production sections 182 to 184.
The noise removal process of the noise removal section 198 is described with reference to a flow chart of
At step S243, the noise removal section 198 acquires the pixel values (luminance candidate values) of the pixels positioned upwardly, downwardly, leftwardly and rightwardly of the noticed pixel and substitutes the acquired luminance candidate values of the pixels positioned upwardly, downwardly, leftwardly and rightwardly of the noticed pixel into variables a3, a0, a1 and a2, respectively.
At step S244, the noise removal section 198 executes a direction selective smoothing process to acquire a smoothed value corresponding to the noticed pixel.
The direction selective smoothing process of the noise removal section 198 is described with reference to a flow chart of
At step S251, the noise removal section 198 applies the variables a0 to a3 to the following expression (24) to arithmetically operate a luminance gradient vector g:
luminance gradient vector g = (a2−a1, a3−a0)   (24)
At step S252, the noise removal section 198 arithmetically operates the magnitude (absolute value) ∥g∥ of the luminance gradient vector g.
At step S253, the noise removal section 198 applies the variables a0 to a3 to the following expressions (25) and (26) to calculate a smoothed component Hh in the horizontal direction and a smoothed component Hv in the vertical direction corresponding to the noticed pixel:
Hh=(a1+a2)/2 (25)
Hv=(a3+a0)/2 (26)
At step S254, the noise removal section 198 arithmetically operates a significance wh in the horizontal direction and a significance wv in the vertical direction corresponding to the absolute value ∥g∥ of the luminance gradient vector g.
More particularly, where the absolute value ∥g∥ of the luminance gradient vector g is higher than 0, the absolute value of the inner product of the normalized luminance gradient vector g/∥g∥ and the vector (1, 0) is subtracted from 1 to obtain the significance wh in the horizontal direction as given by the following expression (27). Further, the absolute value of the inner product of the normalized luminance gradient vector g/∥g∥ and the vector (0, 1) is subtracted from 1 to obtain the significance wv in the vertical direction as given by the following expression (28).
wh=1−|(g/∥g∥,(1,0))| (27)
wv=1−|(g/∥g∥,(0,1))| (28)
Where the absolute value ∥g∥ of the luminance gradient vector g is 0, the significance wh in the horizontal direction and the significance wv in the vertical direction are both set to 0.5.
At step S255, the noise removal section 198 arithmetically operates a smoothed value α corresponding to the noticed pixel using the following expression (29):
α=(wh·Hh+wv·Hv)/(wh+wv) (29)
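For illustration only, expressions (24) to (29) may be collected into the following Python function; the argument order follows the assignment of step S243 (a0: downward, a1: leftward, a2: rightward, a3: upward pixel), and the function itself is not part of the original disclosure.

import math

def direction_selective_smoothing(a0, a1, a2, a3):
    # Direction selective smoothing of the noise removal section 198.
    gx, gy = a2 - a1, a3 - a0                    # luminance gradient vector g, expression (24)
    mag = math.hypot(gx, gy)                     # magnitude (absolute value) of g
    hh = (a1 + a2) / 2.0                         # smoothed component in the horizontal direction, (25)
    hv = (a3 + a0) / 2.0                         # smoothed component in the vertical direction, (26)
    if mag > 0:
        wh = 1.0 - abs(gx) / mag                 # significance in the horizontal direction, (27)
        wv = 1.0 - abs(gy) / mag                 # significance in the vertical direction, (28)
    else:
        wh = wv = 0.5
    return (wh * hh + wv * hv) / (wh + wv)       # smoothed value alpha, expression (29)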
The processing returns to step S245 of
The processing returns to step S241 so that the processing at steps S241 to S245 is repeated until it is discriminated at step S241 that all pixels have been used as a noticed pixel. When it is discriminated at step S241 that all pixels have been used as a noticed pixel, the noise removal process is ended and also the luminance image production process is ended, and the processing returns to step S212 of
At step S212, the monochromatic image production sections 182 to 184 produce the output images R, G, and B, respectively by using the supplied color and sensitivity mosaic image and the luminance image.
A first monochromatic image production process of the monochromatic image production section 182 is described with reference to a flow chart of
At step S261, the interpolation section 201 performs an interpolation process for the color and sensitivity mosaic image to produce an R candidate image wherein all pixels have pixel values of an R component and outputs the R candidate image to the ratio value calculation section 202.
It is to be noted that the interpolation process of the interpolation section 201 is similar to the R component estimation process of the estimation section 191 which composes the luminance image production section 181 described hereinabove with reference to the flow chart of
At step S262, the ratio value calculation section 202 performs a ratio value calculation process to calculate an intensity ratio and further produces ratio value information representative of the intensity ratio corresponding to each pixel, and supplies the intensity ratio and the ratio value information to the multiplier 203.
The ratio value calculation process of the ratio value calculation section 202 is described with reference to a flow chart of
At step S273, the ratio value calculation section 202 refers to those pixels which are positioned in the neighborhood of the noticed pixel (for example, 7×7 pixels centered at the noticed pixel; hereinafter referred to as reference pixels) to acquire the pixel values (monochromatic candidate values of R components) of the pixels. Further, the ratio value calculation section 202 extracts the pixel values (luminance values) of the pixels of the luminance image which are positioned at the same coordinates as those of the reference pixels.
At step S274, the ratio value calculation section 202 acquires a number of smoothing filter coefficients set in advance as shown in
At step S275, the ratio value calculation section 202 multiplies the monochromatic candidate values for an R component of the reference pixels and the corresponding filter coefficients, divides the products by the corresponding luminance values and arithmetically operates the sum total of the quotients. Further, the ratio value calculation section 202 divides the sum total of the quotients by the sum total of the used smoothing filter coefficients and determines the quotient as an intensity ratio corresponding to the noticed pixel to produce ratio value information.
The processing returns to step S271 so that the processing at steps S271 to S275 is repeated until it is discriminated at step S271 that all pixels of the R candidate image have been used as a noticed pixel. When it is discriminated at step S271 that all pixels of the R candidate image have been used as a noticed pixel, the ratio value information produced is supplied to the multiplier 203, and the processing returns to step S263 of
At step S263, the multiplier 203 multiplies the pixel values of the pixels of the luminance image by the corresponding intensity ratios to produce an output image R wherein the products are used as pixel values.
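For illustration only, the ratio value calculation of the ratio value calculation section 202 and the multiplication by the multiplier 203 may be sketched in Python as follows, assuming a 7×7 smoothing filter and a small guard value against division by zero; the coefficient values and the guard value are assumptions and not part of the original disclosure.

import numpy as np

def produce_monochromatic_image(candidate, luminance, coeffs, eps=1e-6):
    # Ratio value calculation section 202: for each noticed pixel, weight the
    # monochromatic candidate values of the reference pixels by the smoothing
    # filter coefficients, divide by the corresponding luminance values, sum
    # the quotients and normalize by the coefficients actually used.
    # Multiplier 203: multiply the luminance image by the intensity ratio.
    h, w = candidate.shape
    r = coeffs.shape[0] // 2        # e.g. 3 for a 7x7 smoothing filter
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        c = coeffs[dy + r, dx + r]
                        num += c * candidate[yy, xx] / (luminance[yy, xx] + eps)
                        den += c
            ratio = num / den if den else 0.0
            out[y, x] = luminance[y, x] * ratio
    return out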
It is to be noted that, simultaneously with the first monochromatic image production process of the monochromatic image production section 182, also the monochromatic image production sections 183 and 184 execute similar processes.
Description of the fourth demosaic process by the fourth example of the configuration of the image processing section 7 is ended therewith.
In the second example of the configuration of the luminance image production section 181, a color and sensitivity mosaic image, color mosaic pattern information and sensitivity mosaic pattern information are supplied to the estimation section 211.
The estimation section 211 performs a component estimation process for the color and sensitivity mosaic image and supplies an estimation value R′ of an R component, an estimation value G′ of a G component and an estimation value B′ of a B component for each pixel obtained by the component estimation process to the corresponding multipliers 194 to 196, respectively.
It is to be noted that the elements from the multiplier 194 to the noise removal section 198 included in the second example of the configuration of the luminance image production section 181 are similar to the elements from the multiplier 194 to the noise removal section 198 included in the first example of the configuration of the luminance image production section 181 shown in
Now, the estimation process for RGB components by the estimation section 211 is described with reference to a flow chart of
At step S281, the estimation section 211 calculates an estimated pixel value C0 corresponding to the noticed pixel through an estimated pixel value C0 interpolation process wherein the pixel values of such four pixels centered at the noticed pixel as shown in
At step S291, the estimation section 211 substitutes the pixel values of the four pixels positioned upwardly, downwardly, leftwardly and rightwardly of the noticed pixel indicated by ◯ each with a space of one pixel left therebetween into variables a3, a0, a1 and a2 and applies a direction selective smoothing process described hereinabove with reference to
The process of substituting the pixel values of four pixels positioned upwardly, downwardly, leftwardly and rightwardly of a designated pixel into the variables a3, a0, a1 and a2 and applying the direction selective smoothing process described hereinabove is hereinafter referred to as the vertical direction selective smoothing process.
At step S292, the estimation section 211 adds the smoothed value α obtained at step S291 to the pixel value of the noticed pixel and determines the sum as the estimated pixel value C0 of the noticed pixel. The processing returns to step S282 of
At step S282, the estimation section 211 calculates an estimated pixel value C1 corresponding to the noticed pixel through an estimated pixel value C1 interpolation process wherein such 12 pixels centered at the noticed pixel as shown in
At step S301, the estimation section 211 discriminates whether or not the color of the noticed pixel is G. If the estimation section 211 discriminates that the color of the noticed pixel is G, then the processing advances to step S302. At step S302, the estimation section 211 substitutes the pixel values of four pixels positioned leftwardly downwards, leftwardly upwards, rightwardly downwards and rightwardly upwards in the neighborhood of the noticed pixel represented by ◯ as shown in
The process of substituting the pixel values of four pixels positioned leftwardly downwards, leftwardly upwards, rightwardly downwards and rightwardly upwards in the neighborhood of a designated pixel into the variables a0, a1, a2 and a3, respectively, and applying the direction selective smoothing process described hereinabove is hereinafter referred to as the oblique direction selective smoothing process.
At step S303, the estimation section 211 multiplies the smoothed value α obtained at step S302 by 2 and determines the product as an estimated pixel value C1 of the noticed pixel. The processing returns to step S283 of
It is to be noted that, if it is discriminated at step S301 that the color of the noticed pixel is not G, then the processing advances to step S304.
At step S304, the estimation section 211 executes the vertical direction selective smoothing process using four pixels positioned with a space of one pixel left from the pixel neighboring leftwardly upwards of the noticed pixel to calculate a smoothed value α and substitutes the smoothed value α into the variable a1. At step S305, the estimation section 211 executes the vertical direction selective smoothing process using four pixels positioned with a space of one pixel left from the pixel neighboring rightwardly downwards of the noticed pixel to calculate a smoothed value α and substitutes the smoothed value α into the variable a2. At step S306, the estimation section 211 substitutes the pixel value of the pixel neighboring leftwardly downwards of the noticed pixel into the variable a0 and substitutes the pixel value of the pixel neighboring rightwardly upwards of the noticed pixel into the variable a3.
At step S307, the estimation section 211 applies the variables a0, a1, a2 and a3 whose values have been set at steps S304 to S306 to the direction selective smoothing process described hereinabove with reference to
At step S308, the estimation section 211 executes the vertical direction selective smoothing process using four pixels positioned with a space of one pixel left from the pixel neighboring leftwardly downwards of the noticed pixel to calculate a smoothed value α and substitutes the smoothed value α into the variable a0. At step S309, the estimation section 211 executes the vertical direction selective smoothing process using four pixels positioned with a space of one pixel left from the pixel neighboring rightwardly upwards of the noticed pixel to calculate a smoothed value α and substitutes the smoothed value α into the variable a3. At step S310, the estimation section 211 substitutes the pixel value of the pixel neighboring leftwardly upwards of the noticed pixel into the variable a1 and substitutes the pixel value of the pixel neighboring rightwardly downwards of the noticed pixel into the variable a2.
At step S311, the estimation section 211 applies the variables a0, a1, a2 and a3 whose values have been set at steps S308 to S310 to the direction selective smoothing process described hereinabove with reference to
At step S312, the estimation section 211 adds the smoothed value α′ obtained at step S307 and the smoothed value α″ obtained at step S311 and determines the sum as an estimated pixel value C1 corresponding to the noticed pixel. The processing returns to step S283 of
At step S283, the estimation section 211 calculates an estimated pixel value C2 corresponding to the noticed pixel through an estimated pixel value C2 interpolation process wherein such four pixels centered at the noticed pixel as shown in
At step S321, the estimation section 211 discriminates whether or not the color of the noticed pixel is G. If the estimation section 211 discriminates that the color of the noticed pixel is G, then the processing advances to step S322.
At step S322, the estimation section 211 executes the vertical direction selective smoothing process using four pixels positioned with a space of one pixel left from the pixel neighboring upwardly of the noticed pixel to calculate a smoothed value α and determines it as a smoothed value α′.
At step S323, the estimation section 211 executes the vertical direction selective smoothing process using four pixels positioned with a space of one pixel left from the pixel neighboring downwardly of the noticed pixel to calculate a smoothed value α and determines it as a smoothed value α″.
At step S324, the estimation section 211 adds an average value of the pixel value of the pixel neighboring downwardly of the noticed pixel and the smoothed value α′ obtained at step S322 and an average value of the pixel value of the pixel neighboring upwardly of the noticed pixel and the smoothed value α″ obtained at step S323 and determines the sum as an estimated pixel value C2 corresponding to the noticed pixel. The processing returns to step S284 of
It is to be noted that, if it is discriminated at step S321 that the color of the noticed pixel is not G, then the processing advances to step S325.
At step S325, the estimation section 211 executes the oblique direction selective smoothing process using four pixels positioned obliquely in the neighborhood of the pixel neighboring leftwardly of the noticed pixel to calculate a smoothed value α and substitutes it into the variable a1. At step S326, the estimation section 211 executes the oblique direction selective smoothing process using four pixels positioned obliquely in the neighborhood of the pixel neighboring rightwardly of the noticed pixel to calculate a smoothed value α and substitutes it into the variable a2. At step S327, the estimation section 211 substitutes the pixel value of the pixel neighboring downwardly of the noticed pixel into the variable a0 and substitutes the pixel value of the pixel neighboring upwardly of the noticed pixel into the variable a3.
At step S328, the estimation section 211 applies the variables a0, a1, a2 and a3 whose values have been set at steps S325 to S327 to the direction selective smoothing process described hereinabove with reference to
At step S329, the estimation section 211 executes the oblique direction selective smoothing process using four pixels positioned obliquely in the neighborhood of the pixel neighboring downwardly of the noticed pixel to calculate a smoothed value α and substitutes it into the variable a0. At step S330, the estimation section 211 executes the oblique direction selective smoothing process using four pixels positioned obliquely in the neighborhood of the pixel neighboring upwardly of the noticed pixel to calculate a smoothed value α and substitutes it into the variable a3. At step S331, the estimation section 211 substitutes the pixel value of the pixel neighboring leftwardly of the noticed pixel into the variable a1 and substitutes the pixel value of the pixel neighboring rightwardly of the noticed pixel into the variable a2.
At step S332, the estimation section 211 applies the variables a0, a1, a2 and a3 whose values have been set at steps S329 to S331 to the direction selective smoothing process described hereinabove with reference to
At step S333, the estimation section 211 adds the smoothed value α′ obtained at step S328 and the smoothed value α″ obtained at step S332 and determines the sum as an estimated pixel value C2 corresponding to the noticed pixel. The processing returns to step S284 of
At step S284, the estimation section 211 calculates an estimated pixel value C3 corresponding to the noticed pixel through an estimated pixel value C3 interpolation process wherein such eight pixels centered at the noticed pixel as shown in
At step S341, the estimation section 211 discriminates whether or not the color of the noticed pixel is G. If the estimation section 211 discriminates that the color of the noticed pixel is G, then the processing advances to step S342.
At step S342, the estimation section 211 executes the vertical direction selective smoothing process using four pixels positioned with a space of one pixel left from the pixel neighboring rightwardly of the noticed pixel to calculate a smoothed value α and determines it as a smoothed value α′.
At step S343, the estimation section 211 executes the vertical direction selective smoothing process using four pixels positioned with a space of one pixel left from the pixel neighboring leftwardly of the noticed pixel to calculate a smoothed value α and determines it as a smoothed value α″.
At step S344, the estimation section 211 adds an average value of the pixel value of the pixel neighboring leftwardly of the noticed pixel and the smoothed value α′ obtained at step S342 and an average value of the pixel value of the pixel neighboring rightwardly of the noticed pixel and the smoothed value α″ obtained at step S343 and determines the sum as an estimated pixel value C3 corresponding to the noticed pixel. The processing returns to step S285 of
It is to be noted that, if it is discriminated at step S341 that the color of the noticed pixel is not G, then the processing advances to step S345. At step S345, the estimation section 211 sets the estimated pixel value C3 corresponding to the noticed pixel to 0. The processing returns to step S285 of
At step S285, the estimation section 211 refers to the color mosaic pattern information and the sensitivity mosaic pattern information to discriminate the color and the sensitivity of the noticed pixel, and applies, based on a result of the discrimination, the estimated pixel values C0 to C3 corresponding to the noticed pixel obtained at steps S281 to S284 to a synthetic sensitivity compensation LUT (similar to the synthetic sensitivity compensation LUT described hereinabove with reference to
In particular, where the color of the noticed pixel is G and the sensitivity is S0, a value LUT(C2) when the estimated pixel value C2 is applied to the synthetic sensitivity compensation LUT is determined as the estimated value R′, and a value LUT((C0+C1)/2) when an average value of the estimated pixel values C0 and C1 is applied to the synthetic sensitivity compensation LUT is determined as the estimated value G′ while a value LUT(C3) when the estimated pixel value C3 is applied to the synthetic sensitivity compensation LUT is determined as the estimated value B′.
Where the color of the noticed pixel is G and the sensitivity is S1, a value LUT(C3) when the estimated pixel value C3 is applied to the synthetic sensitivity compensation LUT is determined as the estimated value R′, and a value LUT((C0+C1)/2) when an average value of the estimated pixel values C0 and C1 is applied to the synthetic sensitivity compensation LUT is determined as the estimated value G′ while a value LUT(C2) when the estimated pixel value C2 is applied to the synthetic sensitivity compensation LUT is determined as the estimated value B′.
Where the color of the noticed pixel is R, a value LUT(C0) when the estimated pixel value C0 is applied to the synthetic sensitivity compensation LUT is determined as the estimated value R′, and a value LUT(C2) when the estimated pixel value C2 is applied to the synthetic sensitivity compensation LUT is determined as the estimated value G′ while a value LUT(C1) when the estimated pixel value C1 is applied to the synthetic sensitivity compensation LUT is determined as the estimated value B′.
Where the color of the noticed pixel is B, a value LUT(C1) when the estimated pixel value C1 is applied to the synthetic sensitivity compensation LUT is determined as the estimated value R′, and a value LUT(C2) when the estimated pixel value C2 is applied to the synthetic sensitivity compensation LUT is determined as the estimated value G′ while a value LUT(C0) when the estimated pixel value C0 is applied to the synthetic sensitivity compensation LUT is determined as the estimated value B′.
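For illustration only, the selection described above may be sketched as the following Python function; the representation of the color and sensitivity of the noticed pixel and the LUT indexing (rounding and clipping) are assumptions and not part of the original disclosure.

def estimate_rgb_from_candidates(color, sensitivity, c0, c1, c2, c3, lut):
    # Step S285: selects which of the estimated pixel values C0 to C3 is fed to
    # the synthetic sensitivity compensation LUT for each of the estimated
    # values R', G' and B', depending on the color and sensitivity of the
    # noticed pixel.
    def apply_lut(value):
        idx = int(min(max(round(value), 0), len(lut) - 1))
        return lut[idx]

    if color == 'G' and sensitivity == 0:
        return apply_lut(c2), apply_lut((c0 + c1) / 2), apply_lut(c3)
    if color == 'G' and sensitivity == 1:
        return apply_lut(c3), apply_lut((c0 + c1) / 2), apply_lut(c2)
    if color == 'R':
        return apply_lut(c0), apply_lut(c2), apply_lut(c1)
    return apply_lut(c1), apply_lut(c2), apply_lut(c0)   # color == 'B'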
Since, in the estimation process of RGB components by the estimation section 211, the estimated pixel values C0 to C3 produced making use of the direction selective smoothing process are used in such a manner as described above, deterioration of the resolution of an image signal is suppressed.
Description of the estimation process for RGB components by the estimation section 211 is ended therewith.
Incidentally, it is described in the foregoing description that the monochromatic image production sections 183 and 184 of the fourth example of the configuration of the image processing section 7 are configured similarly to the example of the configuration of the monochromatic image production section 182 shown in
The R candidate image production process executed by the monochromatic image production section 182 in place of the monochromatic candidate image production process at step S261 is described with reference to a flow chart of
At step S351, the interpolation section 201-R discriminates whether or not all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the first time. If the interpolation section 201-R discriminates that all pixels have not been used as a noticed pixel for the first time, then the processing advances to step S352. At step S352, the interpolation section 201-R determines one by one pixel as a noticed pixel for the first time beginning with the left lowermost pixel and ending with the right uppermost pixel of the color and sensitivity mosaic image.
At step S353, the interpolation section 201-R discriminates whether or not the color of the noticed pixel for the first time is R. If the interpolation section 201-R discriminates that the color of the noticed pixel for the first time is R, then the processing advances to step S354. At step S354, the interpolation section 201-R executes the vertical direction selective smoothing process using four pixels positioned upwardly, downwardly, leftwardly and rightwardly of the noticed pixel for the first time with a space of one pixel left therebetween to calculate a smoothed value α. At step S355, the interpolation section 201-R applies the sum of the pixel value of the noticed pixel for the first time and the smoothed value α calculated at step S354 to a synthetic sensitivity compensation LUT (a synthetic sensitivity compensation LUT similar to that described with reference to
It is to be noted that, if it is discriminated at step S353 that the color of the noticed pixel for the first time is not R, then the processing returns to step S351 skipping the steps S354 and S355.
Thereafter, the processing at steps S351 to S355 is repeated until it is discriminated at step S351 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the first time. When it is discriminated at step S351 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the first time, the processing advances to step S356.
At step S356, the interpolation section 201-R discriminates whether or not all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the second time. If the interpolation section 201-R discriminates that all pixels have not been used as a noticed pixel for the second time, then the processing advances to step S357. At step S357, the interpolation section 201-R determines one by one pixel as a noticed pixel for the second time beginning with the left lowermost pixel and ending with the right uppermost pixel of the color and sensitivity mosaic image.
At step S358, the interpolation section 201-R discriminates whether or not the color of the noticed pixel for the second time is B. If the interpolation section 201-R discriminates that the color of the noticed pixel for the second time is B, then the processing advances to step S359. At step S359, the interpolation section 201-R executes the oblique direction selective smoothing process using four pixels positioned obliquely in the neighborhood of the noticed pixel for the second time to calculate a smoothed value α. At step S360, the interpolation section 201-R determines the smoothed value α calculated at step S359 as a pixel value corresponding to the noticed pixel for the second time of the R candidate image. The processing returns to step S356.
It is to be noted that, if it is discriminated at step S358 that the color of the noticed pixel for the second time is not B, then the processing returns to step S356 skipping the steps S359 and S360.
Thereafter, the processing at steps S356 to S360 is repeated until it is discriminated at step S356 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the second time. When it is discriminated at step S356 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the second time, the processing advances to step S361.
At step S361, the interpolation section 201-R discriminates whether or not all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the third time. If the interpolation section 201-R discriminates that all pixels have not been used as a noticed pixel for the third time, then the processing advances to step S362. At step S362, the interpolation section 201-R determines one by one pixel as a noticed pixel for the third time beginning with the left lowermost pixel and ending with the right uppermost pixel of the color and sensitivity mosaic image.
At step S363, the interpolation section 201-R discriminates whether or not the color of the noticed pixel for the third time is G. If the interpolation section 201-R discriminates that the color of the noticed pixel for the third time is G, then the processing advances to step S364. At step S364, the interpolation section 201-R executes the vertical direction selective smoothing process using four pixels positioned upwardly, downwardly, leftwardly and rightwardly of the noticed pixel for the third time to calculate a smoothed value α. At step S365, the interpolation section 201-R determines the smoothed value α calculated at step S364 as a pixel value corresponding to the noticed pixel for the third time of the R candidate image. The processing returns to step S361.
It is to be noted that, if it is discriminated at step S363 that the color of the noticed pixel for the third time is not G, then the processing returns to step S361 skipping the steps S364 and S365.
Thereafter, the processing at steps S361 to S365 is repeated until it is discriminated at step S361 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the third time. When it is discriminated at step S361 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the third time, the R candidate image production process is ended.
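Purely as an illustration of the three passes just described, the following sketch produces an R candidate image from a color and sensitivity mosaic image. The per-pixel color map, the callable sensitivity_lut standing in for the synthetic sensitivity compensation LUT, the border handling and the reuse of the helper functions from the preceding sketch are all assumptions made for illustration; in particular, the second and third passes are assumed to smooth the candidate values already produced by the earlier passes.

def make_r_candidate(mosaic, color, sensitivity_lut):
    # mosaic : 2-D numpy array holding the color and sensitivity mosaic image
    # color  : 2-D numpy array of 'R'/'G'/'B' giving the color of each pixel
    # sensitivity_lut : hypothetical callable mapping (pixel value + smoothed
    #                   value) to a sensitivity-compensated value
    h, w = mosaic.shape
    cand = np.zeros_like(mosaic, dtype=float)
    # First pass (steps S351 to S355): R pixels, cross smoothing with a
    # one-pixel space, then the synthetic sensitivity compensation LUT.
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            if color[y, x] == 'R':
                alpha = cross_smooth(mosaic, y, x, step=2)
                cand[y, x] = sensitivity_lut(mosaic[y, x] + alpha)
    # Second pass (steps S356 to S360): B pixels, oblique smoothing of the
    # R candidates filled in by the first pass.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if color[y, x] == 'B':
                cand[y, x] = oblique_smooth(cand, y, x, step=1)
    # Third pass (steps S361 to S365): G pixels, cross smoothing of the
    # candidates produced by the first two passes.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if color[y, x] == 'G':
                cand[y, x] = cross_smooth(cand, y, x, step=1)
    return cand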
The B candidate image production process executed by the monochromatic image production section 184 is described with reference to a flow chart of
At step S371, the interpolation section 201-B discriminates whether or not all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the first time. If the interpolation section 201-B discriminates that all pixels have not been used as a noticed pixel for the first time, then the processing advances to step S372. At step S372, the interpolation section 201-B determines one by one pixel as a noticed pixel for the first time beginning with the left lowermost pixel and ending with the right uppermost pixel of the color and sensitivity mosaic image.
At step S373, the interpolation section 201-B discriminates whether or not the color of the noticed pixel for the first time is B. If the interpolation section 201-B discriminates that the color of the noticed pixel for the first time is B, then the processing advances to step S374. At step S374, the interpolation section 201-B executes the vertical direction selective smoothing process using four pixels positioned upwardly, downwardly, leftwardly and rightwardly of the noticed pixel for the first time with a space of one pixel left therebetween to calculate a smoothed value α. At step S375, the interpolation section 201-B applies the sum of the pixel value of the noticed pixel for the first time and the smoothed value α calculated at step S374 to a synthetic sensitivity compensation LUT (a synthetic sensitivity compensation LUT similar to that described with reference to
It is to be noted that, if it is discriminated at step S373 that the color of the noticed pixel for the first time is not B, then the processing returns to step S371 skipping the steps S374 and S375.
Thereafter, the processing at steps S371 to S375 is repeated until it is discriminated at step S371 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the first time. When it is discriminated at step S371 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the first time, the processing advances to step S376.
At step S376, the interpolation section 201-B discriminates whether or not all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the second time. If the interpolation section 201-B discriminates that all pixels have not been used as a noticed pixel for the second time, then the processing advances to step S377. At step S377, the interpolation section 201-B determines one by one pixel as a noticed pixel for the second time beginning with the left lowermost pixel and ending with the right uppermost pixel of the color and sensitivity mosaic image.
At step S378, the interpolation section 201-B discriminates whether or not the color of the noticed pixel for the second time is R. If the interpolation section 201-B discriminates that the color of the noticed pixel for the second time is R, then the processing advances to step S379. At step S379, the interpolation section 201-B executes the oblique direction selective smoothing process using four pixels positioned obliquely in the neighborhood of the noticed pixel for the second time to calculate a smoothed value α. At step S380, the interpolation section 201-B determines the smoothed value α calculated at step S379 as a pixel value corresponding to the noticed pixel for the second time of the B candidate image. The processing returns to step S376.
It is to be noted that, if it is discriminated at step S378 that the color of the noticed pixel for the second time is not R, then the processing returns to step S376 skipping the steps S379 and S380.
Thereafter, the processing at steps S376 to S380 is repeated until it is discriminated at step S376 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the second time. When it is discriminated at step S376 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the second time, the processing advances to step S381.
At step S381, the interpolation section 201-B discriminates whether or not all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the third time. If the interpolation section 201-B discriminates that all pixels have not been used as a noticed pixel for the third time, then the processing advances to step S382. At step S382, the interpolation section 201-B determines one by one pixel as a noticed pixel for the third time beginning with the left lowermost pixel and ending with the right uppermost pixel of the color and sensitivity mosaic image.
At step S383, the interpolation section 201-B discriminates whether or not the color of the noticed pixel for the third time is G. If the interpolation section 201-B discriminates that the color of the noticed pixel for the third time is G, then the processing advances to step S384. At step S384, the interpolation section 201-B executes the vertical direction selective smoothing process using four pixels positioned upwardly, downwardly, leftwardly and rightwardly in the neighborhood of the noticed pixel for the third time to calculate a smoothed value α. At step S385, the interpolation section 201-B determines the smoothed value α calculated at step S384 as a pixel value corresponding to the noticed pixel for the third time of the B candidate image. The processing returns to step S381.
It is to be noted that, if it is discriminated at step S383 that the color of the noticed pixel for the third time is not G, then the processing returns to step S381 skipping the steps S384 and S385.
Thereafter, the processing at steps S381 to S385 is repeated until it is discriminated at step S381 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the third time. When it is discriminated at step S381 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the third time, the B candidate image production process is ended.
The G candidate image production process executed by the monochromatic image production section 183 is described with reference to a flow chart of
At step S391, the interpolation section 201-G discriminates whether or not all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the first time. If the interpolation section 201-G discriminates that all pixels have not been used as a noticed pixel for the first time, then the processing advances to step S392. At step S392, the interpolation section 201-G determines one by one pixel as a noticed pixel for the first time beginning with the left lowermost pixel and ending with the right uppermost pixel of the color and sensitivity mosaic image.
At step S393, the interpolation section 201-G discriminates whether or not the color of the noticed pixel for the first time is G. If the interpolation section 201-G discriminates that the color of the noticed pixel for the first time is G, then the processing advances to step S394. At step S394, the interpolation section 201-G executes the oblique direction selective smoothing process using four pixels positioned obliquely in the neighborhood of the noticed pixel for the first time to calculate a smoothed value α. At step S395, the interpolation section 201-G applies the sum of the pixel value of the noticed pixel for the first time and the smoothed value α calculated at step S394 to a synthetic sensitivity compensation LUT (a synthetic sensitivity compensation LUT similar to that described with reference to
It is to be noted that, if it is discriminated at step S393 that the color of the noticed pixel for the first time is not G, then the processing returns to step S391 skipping the steps S394 and S395.
Thereafter, the processing at steps S391 to S395 is repeated until it is discriminated at step S391 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the first time. When it is discriminated at step S391 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the first time, the processing advances to step S396.
At step S396, the interpolation section 201-G discriminates whether or not all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the second time. If the interpolation section 201-G discriminates that all pixels have not been used as a noticed pixel for the second time, then the processing advances to step S397. At step S397, the interpolation section 201-G determines one by one pixel as a noticed pixel for the second time beginning with the left lowermost pixel and ending with the right uppermost pixel of the color and sensitivity mosaic image.
At step S398, the interpolation section 201-G discriminates whether or not the color of the noticed pixel for the second time is G. If the interpolation section 201-G discriminates that the color of the noticed pixel for the second time is not G, then the processing advances to step S399. At step S399, the interpolation section 201-G executes the vertical direction selective smoothing process using four pixels positioned upwardly, downwardly, leftwardly and rightwardly in the neighborhood of the noticed pixel for the second time to calculate a smoothed value α. At step S400, the interpolation section 201-G determines the smoothed value α calculated at step S399 as a pixel value corresponding to the noticed pixel for the second time of the G candidate image. The processing returns to step S396.
It is to be noted that, if it is discriminated at step S398 that the color of the noticed pixel for the second time is G, then the processing returns to step S396 skipping the steps S399 and S400.
Thereafter, the processing at steps S396 to S400 is repeated until it is discriminated at step S396 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the second time. When it is discriminated at step S396 that all pixels of the color and sensitivity mosaic image have been used as a noticed pixel for the second time, the G candidate image production process is ended.
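Since the B and G candidate image production processes follow the same pattern as the R candidate image production process with the color roles exchanged (and with only two passes in the G case), a single parameterised sketch can stand in for all three. It reuses the helper functions and assumptions of the previous sketches and is, again, only an illustration and not the definitive processing of the interpolation sections.

def make_candidate(mosaic, color, sensitivity_lut, target):
    # target : 'R', 'G' or 'B' -- the color whose candidate image is produced.
    h, w = mosaic.shape
    cand = np.zeros_like(mosaic, dtype=float)
    # First pass: pixels of the target color are smoothed (obliquely for G,
    # with a one-pixel space for R and B) and passed through the LUT.
    first_smooth = oblique_smooth if target == 'G' else cross_smooth
    first_step = 1 if target == 'G' else 2
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            if color[y, x] == target:
                alpha = first_smooth(mosaic, y, x, step=first_step)
                cand[y, x] = sensitivity_lut(mosaic[y, x] + alpha)
    if target == 'G':
        # Second pass (steps S396 to S400): all non-G pixels, cross smoothing
        # of the G candidates produced by the first pass.
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if color[y, x] != 'G':
                    cand[y, x] = cross_smooth(cand, y, x, step=1)
    else:
        # Second pass: pixels of the opposite color (B for the R candidate,
        # R for the B candidate), oblique smoothing; third pass: G pixels,
        # cross smoothing of the candidates produced so far.
        opposite = 'B' if target == 'R' else 'R'
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if color[y, x] == opposite:
                    cand[y, x] = oblique_smooth(cand, y, x, step=1)
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if color[y, x] == 'G':
                    cand[y, x] = cross_smooth(cand, y, x, step=1)
    return cand

For example, make_candidate(mosaic, color, lut, 'B') would take the role of the B candidate image production process of the flow chart described above.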
Incidentally, as described hereinabove, in the fourth demosaic process, a luminance image and monochromatic images are produced from a color and sensitivity mosaic image, and the correlation between the luminance and the color components is utilized to restore an image in which all pixels have a uniform sensitivity and all of the color components. However, the image produced first need not be a luminance image and may have a biased spectral characteristic, as long as the color information to be restored is correlated with it and the signal can be restored with a high resolution. For example, the characteristic of the color mosaic arrangement of a color and sensitivity mosaic image that it includes twice as many pixels of G as pixels of R or pixels of B, like a Bayer arrangement, may be utilized to produce an image of the G component in place of a luminance image, and the correlation between G and R or between G and B may be utilized to produce an image of the R component and an image of the B component.
To execute such processing as just described, the image processing section 7 may be configured in such a manner as shown in
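As a rough illustration of how the correlation between G and R or between G and B might be exploited once a full G image is available, the sketch below interpolates the locally smooth color difference between the sparse R (or B) samples and the G image. The box filter, the use of a color difference rather than a ratio, and the assumption that the mosaic values have already been sensitivity-compensated are choices made here for illustration only and are not prescribed by the embodiment.

from scipy.ndimage import uniform_filter
import numpy as np

def restore_from_g(g_full, mosaic, color, size=5):
    # g_full : full-resolution G image produced in place of a luminance image
    # mosaic : sensitivity-compensated color mosaic (sparse R/G/B samples)
    # color  : 2-D numpy array of 'R'/'G'/'B' giving the color of each pixel
    restored = {}
    for c in ('R', 'B'):
        mask = (color == c).astype(float)
        diff = (mosaic - g_full) * mask          # color difference at the samples
        num = uniform_filter(diff, size=size)    # box-averaged difference
        den = uniform_filter(mask, size=size)    # local sample density
        restored[c] = g_full + num / np.maximum(den, 1e-6)
    return restored['R'], restored['B']

Because a Bayer-like arrangement provides G samples at roughly half of the pixel positions, the interpolated color differences vary slowly, so the restored R and B images can inherit the resolution of the G image.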
Description of the examples of the configuration of the image processing section 7 for executing the first to fourth demosaic processes is ended therewith.
It is to be noted that, while the series of processes described above can be executed by hardware, it may otherwise be executed by software. Where the series of processes is executed by software, a program which constructs the software is installed from a recording medium into a computer incorporated in dedicated hardware or, for example, into a general-purpose personal computer which can execute various functions by installing various programs.
The recording medium is formed as a package medium such as, as shown in
It is to be noted that, in the present specification, the steps which describe the program recorded in a recording medium may be but need not necessarily be processed in a time series in the order as described, and include processes which are executed in parallel or individually without being processed in a time series.
As described above, according to the present invention, a restored image can be obtained wherein the sensitivities of the pixels are uniformized and each pixel has all of a plurality of color components.
It should be understood that various changes and modifications to the presently preferred embodiments described herein will be apparent to those skilled in the art. Such changes and modifications can be made without departing from the spirit and scope of the present subject matter and without diminishing its intended advantages. It is therefore intended that such changes and modifications be covered by the appended claims.
Kobayashi, Seiji, Ono, Hiroaki, Mitsunaga, Tomoo