An image processing apparatus which composes a plurality of images shot under different exposure conditions, comprises a detection unit which detects an exposure error according to a position on a screen using, out of the plurality of images shot under different exposure conditions, an image whose exposure amount is smaller than a predetermined value as a detection target image, a calculation unit which calculates correction information to correct the exposure error detected by the detection unit in accordance with the position on the screen, an adjustment unit which performs level adjustment according to the exposure amount of the detection target image in accordance with the correction information and position information on the screen, and an image composition unit which generates a composite image by composing the plurality of images including the image that has undergone the level adjustment.
9. An image processing method of composing a plurality of images shot under different exposure conditions, the method comprising:
detecting an exposure error of a detection target image according to a position in the detection target image, the detection target image having an exposure amount smaller than a predetermined exposure amount;
calculating correction information to correct the detected exposure error;
performing level adjustment on the detection target image in accordance with the exposure amount of the detection target image and the correction information; and
generating a composite image by composing the plurality of images including the detection target image that has undergone the level adjustment,
wherein in the detecting of the exposure error, the exposure error is detected by defining, out of the plurality of images, a reference image having an exposure amount not less than the predetermined exposure amount, and comparing the reference image with the detection target image,
wherein in the calculating of the correction information, the correction information is calculated for the detection target image and is not calculated for the reference image, and
wherein the image having an exposure amount smaller than the predetermined exposure amount is shot with a shutter speed higher than a predetermined shutter speed and the image having an exposure amount not less than the predetermined exposure amount is shot with a shutter speed not higher than the predetermined shutter speed.
10. A non-transitory computer-readable storage medium storing a program for causing a computer to execute an image processing method of composing a plurality of images shot under different exposure conditions, the method comprising:
detecting an exposure error of a detection target image according to a position in the detection target image, the detection target image having an exposure amount smaller than a predetermined exposure amount;
calculating correction information to correct the detected exposure error;
performing level adjustment on the detection target image in accordance with the exposure amount of the detection target image and the correction information; and
generating a composite image by composing the plurality of images including the detection target image that has undergone the level adjustment,
wherein in the detecting of the exposure error, the exposure error is detected by defining, out of the plurality of images, a reference image having an exposure amount not less than the predetermined exposure amount, and comparing the reference image with the detection target image,
wherein in the calculating of the correction information, the correction information is calculated for the detection target image and is not calculated for the reference image, and
wherein the image having an exposure amount smaller than the predetermined exposure amount is shot with a shutter speed higher than a predetermined shutter speed and the image having an exposure amount not less than the predetermined exposure amount is shot with a shutter speed not higher than the predetermined shutter speed.
1. An image processing apparatus which composes a plurality of images shot under different exposure conditions, comprising:
a detection unit configured to detect an exposure error of a detection target image according to a position in the detection target image, the detection target image having an exposure amount smaller than a predetermined exposure amount;
a calculation unit configured to calculate correction information to correct the exposure error detected by the detection unit;
an adjustment unit configured to perform level adjustment on the detection target image in accordance with the exposure amount of the detection target image and the correction information; and
an image composition unit configured to generate a composite image by composing the plurality of images including the detection target image that has undergone the level adjustment,
wherein the detection unit detects the exposure error by defining, out of the plurality of images, a reference image having an exposure amount not less than the predetermined exposure amount, and comparing the reference image with the detection target image, and
wherein the calculation unit calculates the correction information for the detection target image and does not calculate the correction information for the reference image,
wherein the image having an exposure amount smaller than the predetermined exposure amount is shot with a shutter speed higher than a predetermined shutter speed and the image having an exposure amount not less than the predetermined exposure amount is shot with a shutter speed not higher than the predetermined shutter speed.
2. The apparatus according to
3. The apparatus according to
a measurement unit configured to divide each of the reference image and the detection target image into subareas and obtain a measurement value for each subarea from a luminance value of each subarea;
a comparison unit configured to compare the measurement values of subareas of the reference image and the detection target image at a same position, so as to obtain a comparison value of each subarea; and
a tally unit configured to tally the comparison values,
wherein the exposure error is detected for the detection target image based on a tally result of the tally unit.
4. The apparatus according to
5. The apparatus according to
6. The apparatus according to
7. The apparatus according to
8. The apparatus according to
1. Field of the Invention
The present invention relates to a technique of composing a plurality of images shot under different exposure conditions so as to generate an image having a wide dynamic range.
2. Description of the Related Art
There exists a method of shooting, under different exposure conditions, images with little highlight-detail loss and images with little shadow-detail loss, and composing these images to generate an image having a wide dynamic range. In this image composition processing, the images are composed after level adjustment according to the exposure amounts has been performed between the over-exposure images and the under-exposure images.
A focal plane shutter, which adjusts the exposure time using the traveling interval between the front curtain and the rear curtain, is known to cause an exposure error when the traveling speed of the front curtain differs from that of the rear curtain (curtain speed unevenness), because the exposure time then changes between the two ends of the screen. The curtain speed unevenness readily appears at a high shutter speed, whereas its influence is small at a low shutter speed. Japanese Patent Laid-Open Nos. 2003-078815 and 2008-079209 propose techniques of correcting an exposure error caused by the curtain speed unevenness.
When controlling a lens or a stop, a phenomenon called shading is known in which the peripheral portion of an image becomes darker than the center. A technique of correcting the shading has also been proposed (for example, Japanese Patent Laid-Open No. 2002-290829).
When composing a plurality of images, an exposure error may occur according to the position on the screen due to the curtain speed unevenness or shading. Even when a uniform gain adjustment is applied over the entire screen, the levels may fail to match at some positions on the screen, and a moving area may erroneously be detected.
The level setting unit 1307 sets a 4-times level matching gain value in the level gain processing unit 1309 for under-exposure image data. Similarly, the level setting unit 1306 sets a ¼ level matching gain value in the level gain processing unit 1308 for over-exposure image data. The motion detection unit 1310 compares the under-exposure image data and the over-exposure image data, which have undergone the level matching, and detects motion information in the images.
Conventional motion detection is performed based on the pixel level distributions shown in the attached drawings.
In Japanese Patent Laid-Open Nos. 2003-078815 and 2008-079209 described above, a reference image such as a wall or a sheet of white paper needs to be shot in advance to detect the curtain speed unevenness. This makes the techniques impractical for correcting the curtain speed unevenness in every shooting.
In Japanese Patent Laid-Open No. 2002-290829, a plurality of correction tables used for shading correction need to be prepared in accordance with the focal length or f-number at the time of shooting. This is not suitable for a single-lens reflex camera whose lens can be exchanged for a variety of lenses. In addition, since shading correction is performed separately for each of the two shot images, the processing is complicated.
The present invention has been made in consideration of the aforementioned problems, and provides an image processing technique capable of performing motion detection normally, without the need for complicated processing, by composing a plurality of images shot under different exposure conditions after exposure error correction according to the position on the screen.
In order to solve the aforementioned problems, the present invention provides an image processing apparatus which composes a plurality of images shot under different exposure conditions, comprising: a detection unit configured to detect an exposure error according to a position on a screen using, out of the plurality of images shot under different exposure conditions, an image whose exposure amount is smaller than a predetermined value as a detection target image; a calculation unit configured to calculate correction information to correct the exposure error detected by the detection unit in accordance with the position on the screen; an adjustment unit configured to perform level adjustment according to the exposure amount of the detection target image in accordance with the correction information and position information on the screen; and an image composition unit configured to generate a composite image by composing the plurality of images including the image that has undergone the level adjustment.
In order to solve the aforementioned problems, the present invention provides an image processing method of composing a plurality of images shot under different exposure conditions, the method comprising: a detection step of detecting an exposure error according to a position on a screen using, out of the plurality of images shot under different exposure conditions, an image whose exposure amount is smaller than a predetermined value as a detection target image; a calculation step of calculating correction information to correct the exposure error detected in the detection step in accordance with the position on the screen; an adjustment step of performing level adjustment according to the exposure amount of the detection target image in accordance with the correction information and position information on the screen; and an image composition step of generating a composite image by composing the plurality of images including the image that has undergone the level adjustment.
According to the present invention, it is possible to perform motion detection normally, without the need for complicated processing, by composing a plurality of images shot under different exposure conditions after exposure error correction according to the position on the screen.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Embodiments of the present invention will be described in detail below. The following embodiments are merely examples for practicing the present invention. The embodiments should be properly modified or changed depending on various conditions and the structure of an apparatus to which the present invention is applied. The present invention should not be limited to the following embodiments. Also, parts of the embodiments to be described later may be properly combined.
Correction processing of an exposure error caused by the curtain speed unevenness of a focal plane shutter according to the first embodiment will be described with reference to the attached drawings.
<Apparatus Configuration>
An embodiment will be described in which an image processing apparatus of the present invention is applied to, for example, an image capturing apparatus such as a digital camera that shoots an image.
The configuration and functionality of the image capturing apparatus according to this embodiment will be described with reference to the attached drawings.
The motion detection unit 110 compares the under-exposure image data and the over-exposure image data, which have undergone level matching by the level setting units 106 and 107, and detects motion information in the images. For example, the under-exposure image data and the over-exposure image data are divided into predetermined areas, and the difference between corresponding areas is obtained. If the absolute value of the difference is large, the area is determined to be a moving area.
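As an illustration only, and not the implementation of the motion detection unit 110 itself, the following sketch shows block-wise motion detection of the kind just described. The use of single-channel floating-point arrays, the block size, and the threshold value are assumptions made for the example.

```python
import numpy as np

def detect_motion(under_matched, over_matched, block=16, threshold=12.0):
    """Flag blocks whose level-matched difference is large as moving areas.

    A minimal sketch: the block size, the threshold, and 2-D grayscale float
    arrays are illustrative assumptions, not values from the embodiment.
    """
    h, w = under_matched.shape
    motion = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            diff = (under_matched[i*block:(i+1)*block, j*block:(j+1)*block]
                    - over_matched[i*block:(i+1)*block, j*block:(j+1)*block])
            # If the absolute value of the block difference is large,
            # the block is determined to be a moving area.
            motion[i, j] = abs(diff.mean()) > threshold
    return motion
```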
The composition ratio setting unit 111 sets the composition ratio based on the luminance information or motion information of image data. More specifically, for a bright portion of an image, the composition ratio is set so as to mainly compose the under-exposure image data. For a dark portion of the image, the composition ratio is set so as to mainly compose the over-exposure image data. For an area determined as a moving area, the composition ratio is set so as to output one of the under-exposure image data and the over-exposure image data. One of the image data is output, thereby avoiding image quality degradation such as a blur of the moving area in the composite image data.
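A minimal sketch of the composition just described follows. The luminance thresholds, the linear ramp between them, and the per-pixel motion mask are illustrative assumptions rather than values used by the composition ratio setting unit 111.

```python
import numpy as np

def compose(under_matched, over_matched, luminance, motion_mask,
            low=64.0, high=192.0):
    """Blend the two level-matched images according to luminance and motion.

    Bright portions mainly take the under-exposure image and dark portions
    the over-exposure image; pixels flagged as moving output only one image.
    The thresholds and the per-pixel mask are illustrative assumptions.
    """
    # Weight of the under-exposure image: 0 for dark pixels, 1 for bright pixels.
    ratio = np.clip((luminance - low) / (high - low), 0.0, 1.0)
    # For a moving area, output only one of the two images (here the under image).
    ratio = np.where(motion_mask, 1.0, ratio)
    return ratio * under_matched + (1.0 - ratio) * over_matched
```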
A case will be explained below in which the under-exposure image is shot first, and the over-exposure image data is shot next and composed. However, the shooting order of the under-exposure image and the over-exposure image can be reversed when a frame memory for saving shot images is used.
In this embodiment, the under-exposure image serves as a detection target image and undergoes level matching including exposure error correction, while the over-exposure image undergoes level matching as before. This is because an exposure error caused by curtain speed unevenness according to the position on the screen appears more conspicuously in the under-exposure image. That is, the curtain speed unevenness generally occurs with both a high-speed shutter and a low-speed shutter. However, since the difference in exposure time generated by the curtain speed unevenness is roughly constant, the relative influence of the exposure error is small for the low-speed shutter (over-exposure side) with its long exposure time and large for the high-speed shutter (under-exposure side) with its short exposure time. Hence, only the image data on the high-speed shutter side (the under-exposure image data) is corrected as detection target image data.
The configuration and functionality of the exposure error detection unit 116 will be described next.
The exposure error detection unit 116 receives exposure amount information 202 and image data 203, and outputs a correction coefficient 204.
A subarea division unit 205 divides the image data 203 into subareas. A measurement value is then obtained for each subarea from the luminance values of the subarea, and an adjustment unit 207 adjusts each measurement value based on the exposure amount information 202.
Out of the measurement values of the under-exposure image and the over-exposure image obtained on the subarea basis, a result measured first is temporarily stored in a memory 208. For example, when the under-exposure image is measured first, the measurement value is stored in the memory 208. Next, a comparison unit 209 compares the measurement values of the under-exposure image and the over-exposure image for each subarea. According to the above-described order, the measurement value of the under-exposure image having undergone level adjustment first and stored in the memory 208 and the measurement value of the over-exposure image having undergone level adjustment next are compared. The comparison is done by division of measurement values corresponding to subareas at the same position of the images. For example, let RL be the measurement value of the subarea at a given position of the under-exposure image, and RH be the measurement value of the subarea at the same position of the over-exposure image. A comparison value M is obtained by
M=(RL)/(RH) (1)
Each measurement value is a value that has undergone exposure amount adjustment by the adjustment unit 207. Hence, the calculation result has a value of about 1.0, which facilitates tally and approximate calculation to be described below. Note that the comparison here need only obtain a comparison result representing the exposure error between the under-exposure image and the over-exposure image, and for example, the difference between adjustment results may be obtained on the subarea basis.
The comparison results obtained on the subarea basis are tallied by a tally unit 210 in accordance with the position of each subarea on the screen. In this embodiment, to obtain the exposure error caused by the curtain speed unevenness of the focal plane shutter, the tally is performed in correspondence with the traveling direction of the shutter. That is, to obtain the exposure error distributed in the vertical direction, measurement values at the same vertical position are tallied. An approximate line is then obtained from the tally result, and its slope and intercept are output as the correction coefficient 204.
Note that when the measurement value of each subarea is adjusted based on the exposure amount information 202, and comparison of the image data 203 is done by division, an intercept 403 of the approximate line to the image center position serves as an index to know the degree of accuracy of the approximation because the value is about 1.0 independently of the actual exposure amount of each image. For example, when the adjustment is not done based on the exposure amount information 202, and the exposure amount difference between the under-exposure image and the over-exposure image is 1/16, the intercept 403 to the image center position has a small value of about 0.06. On the other hand, if the exposure amount difference between the under-exposure image and the over-exposure image is ½, the intercept 403 has a value of about 0.5 that is largely different from the above-described value. When the adjustment is performed based on the exposure amount information 202, the intercept 403 takes a value of about 1.0. If the value is greatly different from 1.0, it can easily be determined that the approximation is incorrect. That is, when the coefficient representing the approximate line falls outside a specific range (for example, the intercept to the image center position is 0.9 to 1.1), it can be determined that the exposure error could not appropriately be detected. Whether to correct the exposure error in processing later can easily be determined in accordance with the determination result. This makes it possible to take a safe measure such as prohibiting inappropriate exposure error correction.
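The following sketch ties the above steps together for the first embodiment: subarea division, measurement of a mean luminance per subarea, adjustment by the exposure amounts so that the comparison value of equation (1) is about 1.0, a tally per vertical position, straight-line approximation, and the intercept check described above. The grid size, the adjustment by simple division, and the use of numpy.polyfit are assumptions for illustration, not the embodiment's exact processing.

```python
import numpy as np

def detect_curtain_speed_error(under, over, e_under, e_over, grid=(8, 8)):
    """Return (slope a, intercept b) of the approximate line, or None.

    A minimal sketch under illustrative assumptions: grayscale float arrays,
    an 8x8 subarea grid, measurement values adjusted by dividing with the
    exposure amounts, and numpy.polyfit for the line approximation.
    """
    h, w = under.shape
    gh, gw = grid
    sh, sw = h // gh, w // gw
    ratios = np.empty((gh, gw))
    for i in range(gh):
        for j in range(gw):
            block_u = under[i*sh:(i+1)*sh, j*sw:(j+1)*sw]
            block_o = over[i*sh:(i+1)*sh, j*sw:(j+1)*sw]
            rl = block_u.mean() / e_under   # measurement value adjusted by exposure amount
            rh = block_o.mean() / e_over
            ratios[i, j] = rl / rh          # comparison value M of equation (1), about 1.0
    # Tally in the traveling direction of the shutter: average per vertical position.
    per_row = ratios.mean(axis=1)
    rows = (np.arange(gh) + 0.5) * sh - h / 2  # vertical distance from the image center
    a, b = np.polyfit(rows, per_row, 1)        # straight-line approximation
    if not (0.9 <= b <= 1.1):                  # intercept check described above
        return None                            # exposure error not detected appropriately
    return a, b
```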
The procedure of causing the level setting unit 107 to set a level to correct an exposure error from the correction coefficient 204 calculated by the exposure error detection unit 116 will be described next.
The level setting unit 107 receives exposure amount information 502, the correction coefficient 503, and the number 504 of lines, and outputs a level matching gain value 505.
First, a correction value calculation unit 506 calculates the correction value of the exposure error from the correction coefficient 503 and the number 504 of lines. As described above, the correction coefficient 503 represents the slope a of the approximate line and its intercept b to the image center position. Letting X be the current line number and N be the total number of lines, a correction value z is given by
z=1/(a×(X−N/2)+b) (2)
Next, a level calculation unit 507 multiplies the result by the exposure amount information 502, thereby calculating the level matching gain value 505. Letting E be the exposure amount, a level matching gain value G is given by
G=E×z (3)
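A minimal sketch of equations (2) and (3) follows. The variable names mirror the equations (X is the current line number, N the total number of lines, a and b the detected slope and intercept, and E the exposure amount used for level matching); the function itself is illustrative.

```python
def level_matching_gain(x, n_lines, a, b, exposure):
    """Per-line level matching gain including exposure error correction."""
    z = 1.0 / (a * (x - n_lines / 2) + b)   # equation (2)
    return exposure * z                      # equation (3): G = E * z
```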
The level matching gain value 505 calculated by the level setting unit 107 of this embodiment is sent to the level gain processing unit 109.
As a result, the pixel level of the under-exposure image matches that of the over-exposure image all over the screen, and motion detection can be performed normally.
As described above, according to this embodiment, an exposure error according to the position on the screen is detected and corrected, thereby normally performing motion detection when composing a plurality of images shot under different exposure conditions.
Note that in this embodiment, the comparison value is obtained by equation (1). Instead of using equation (1), a comparison value M′ may be obtained by changing the order of division operation by
M′=(RH)/(RL) (4)
Let c be the slope of the obtained approximate line, and d be the intercept to the image center position. A correction value z′ and the level matching gain value G are respectively obtained by
z′=c×(X−N/2)+d (5)
G=E×z′ (6)
In this embodiment, composing two images, that is, an under-exposure image and an over-exposure image, has been described. However, the present invention is not limited to this and is also applicable to composing three or more images. In this case, the image corresponding to the under-exposure image of this embodiment is an image in which the curtain speed unevenness of the shutter conspicuously appears, that is, an image shot at a speed higher than a predetermined shutter speed. Each such image undergoes exposure error detection and level matching including exposure error correction. Similarly, as the image corresponding to the over-exposure image, an image shot at a speed lower than the predetermined shutter speed is used for the exposure error detection. If there are a plurality of images shot under the same conditions, the exposure error detection may be performed using one of them. Alternatively, the exposure amounts of the plurality of images shot under the same conditions may be adjusted by the adjustment unit 207 before the detection.
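As a minimal sketch of the grouping described in this paragraph, the following helper separates detection target images (shot with a shutter speed higher than the threshold, that is, a shorter exposure time) from the remaining images. The dictionary layout and the 'shutter_time' key are assumptions for illustration.

```python
def split_by_shutter_speed(images, threshold_time):
    """Group images into detection targets (high-speed shutter) and the rest.

    A shutter speed higher than the predetermined speed corresponds to an
    exposure time shorter than threshold_time; the metadata layout is an
    illustrative assumption.
    """
    targets = [im for im in images if im["shutter_time"] < threshold_time]
    others = [im for im in images if im["shutter_time"] >= threshold_time]
    return targets, others
```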
Additionally, in this embodiment, the exposure error detection unit 116 performs processing for each input image. However, since the exposure error according to the position on the screen is caused by a mechanical factor, the correction value for each shutter speed and each set exposure amount may be stored in a table in advance. In this case, the same effect as described above can be obtained by reading out the correction value from the table at the time of shooting and performing only the level adjustment according to the position on the screen.
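The table-based alternative can be sketched as a simple cache keyed by shutter speed and set exposure amount. The key layout and the fallback behavior are assumptions for illustration.

```python
class CorrectionTable:
    """Store detected correction coefficients per (shutter speed, exposure amount)."""

    def __init__(self):
        self._table = {}

    def store(self, shutter_speed, exposure, coefficients):
        self._table[(shutter_speed, exposure)] = coefficients

    def lookup(self, shutter_speed, exposure):
        # Returns None when nothing has been detected yet, so the caller can
        # fall back to running the exposure error detection at shooting time.
        return self._table.get((shutter_speed, exposure))
```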
The same effect as described above can also be obtained by causing the exposure error detection unit 116 to perform processing only at a predetermined timing and updating the correction value stored in the table in consideration of aging.
Note that the above-described predetermined timing can be either the first shooting time at the time of activation after a predetermined period or the first shooting time after lens exchange in a lens-interchangeable image capturing apparatus.
Correction processing of an exposure error caused by shading according to the second embodiment will be described next with reference to the attached drawings.
An exposure error detection unit 716 of this embodiment detects an exposure error from an under-exposure image and an over-exposure image and calculates correction information, as will be described later. Line counters 717 and 718 detect the number of lines in the vertical direction (x direction) of the screen and the number of lines in the horizontal direction (y direction) of the screen, respectively, for the image data currently under composition processing. The level setting unit 707 sets a level matching gain value corresponding to the numbers of lines in the vertical and horizontal directions based on the correction information from the exposure error detection unit 716 and the position information (numbers of lines) from the line counters 717 and 718, as well as the exposure amount information from the exposure amount setting unit 703.
An explanation will be made below assuming that the under-exposure image serves as a detection target image and undergoes level matching including exposure error correction, and the over-exposure image undergoes conventional level matching. However, the same effect can be obtained even when the over-exposure image serves as the detection target image, and the under-exposure image undergoes conventional level matching.
The configuration and functionality of the exposure error detection unit 716 will be described next.
The exposure error detection unit 716 receives exposure amount information 802 and image data 803, and outputs a correction coefficient 804.
The comparison results obtained by the comparison unit 809 on the subarea basis are tallied by a tally unit 810 in accordance with the position of each subarea on the screen. In this embodiment, to obtain the exposure error caused by shading, the tally is performed in accordance with the distance from the screen center. That is, measurement values at an equal distance from the screen center are tallied. An approximation in the distance from the screen center is then obtained from the tally result by an approximate calculation unit 811, and its coefficients are output as the correction coefficient 804.
Note that when the measurement value of each subarea is adjusted based on the exposure amount information 802, and comparison of the image data 803 is done by division, an intercept 1003 of the approximate line to the image center position has a value of about 1.0 independently of the actual exposure amount of each image, as in the first embodiment.
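Analogously to the first embodiment's sketch, the shading-oriented detection can be illustrated by relating the per-subarea comparison values to the distance from the screen center and fitting a fourth-order polynomial in that distance. The grid layout and numpy.polyfit are assumptions for illustration; the polynomial order follows expression (7) below.

```python
import numpy as np

def fit_shading_polynomial(ratios, image_shape, grid=(8, 8)):
    """Fit comparison values against center distance, returning (a, b, c, d, e).

    `ratios` is the grid of per-subarea comparison values (each about 1.0),
    computed as in the first embodiment's sketch; everything else is an
    illustrative assumption.
    """
    h, w = image_shape
    gh, gw = grid
    ys = (np.arange(gh) + 0.5) * (h / gh) - h / 2   # vertical offsets of subarea centers
    xs = (np.arange(gw) + 0.5) * (w / gw) - w / 2   # horizontal offsets of subarea centers
    dist = np.sqrt(ys[:, None] ** 2 + xs[None, :] ** 2)  # distance from the screen center
    a, b, c, d, e = np.polyfit(dist.ravel(), ratios.ravel(), 4)
    return a, b, c, d, e
```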
The procedure of causing the level setting unit 707 to set a level to correct an exposure error from the correction coefficient 804 calculated by the exposure error detection unit 716 will be described next.
The level setting unit 707 receives exposure amount information 1102, the correction coefficient 1103, the number 1104 of lines in the vertical direction, and the number 1105 of lines in the horizontal direction, and outputs a level matching gain value 1106.
First, a center distance calculation unit 1107 calculates the distance (center distance) of a pixel currently under composition from the screen center. Next, a correction value calculation unit 1108 calculates the correction value of the exposure error from the correction coefficient 1103 and the center distance. As described above concerning the exposure error detection unit 716, the correction coefficient 1103 represents the coefficients of the approximation of the exposure error. Letting s be the center distance, the approximation is expressed as
a×s⁴+b×s³+c×s²+d×s+e (7)
where a, b, c, d, and e are the coefficients of the polynomial expression.
A correction value z is given by
z=1/(a×s⁴+b×s³+c×s²+d×s+e) (8)
Next, a level calculation unit 1109 multiplies the result by the exposure amount information 1102, thereby calculating the level matching gain value 1106. Letting E be the exposure amount, a level matching gain value G is given by equation (3).
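A minimal sketch of equations (8) and (3) for a pixel at position (x, y) follows: the line counter values give the position, s is the center distance, and the coefficients a to e come from the detected correction coefficient. Names follow the equations; the function itself is illustrative.

```python
import math

def shading_level_matching_gain(x, y, width, height, coeffs, exposure):
    """Per-pixel level matching gain including shading correction."""
    a, b, c, d, e = coeffs
    s = math.hypot(x - width / 2, y - height / 2)            # center distance
    z = 1.0 / (a * s**4 + b * s**3 + c * s**2 + d * s + e)   # equation (8)
    return exposure * z                                       # equation (3): G = E * z
```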
The level matching gain value 1106 calculated by the level setting unit 707 of this embodiment is sent to the level gain processing unit 709.
As a result, the pixel level of the under-exposure image matches that of the over-exposure image all over the screen, and motion detection can be performed normally.
As described above, according to this embodiment, an exposure error according to the distance from the screen center is detected and corrected, thereby normally performing motion detection when composing a plurality of images shot under different exposure conditions.
Note that in this embodiment, since focus is placed on the exposure error caused by shading, the exposure error according to the distance from the screen center is corrected. If the curtain speed unevenness of the shutter according to the first embodiment is added to the shading, the approximate calculation unit 811 obtains the approximation in consideration of the positions in the vertical and horizontal directions on the image when obtaining the correction coefficient. In that case, the center distance calculation unit 1107 in the level setting unit 707 does not perform processing, and the correction value calculation unit 1108 directly calculates the correction value from the number 1104 of lines in the vertical direction and the number 1105 of lines in the horizontal direction.
Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s). For this purpose, the program is provided to the computer for example via a network or from a recording medium of various types serving as the memory device (for example, computer-readable medium). In such a case, the system or apparatus, and the recording medium where the program is stored, are included as being within the scope of the present invention.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2012-197900, filed Sep. 7, 2012, which is hereby incorporated by reference herein in its entirety.
References Cited
U.S. Patent Application Publication Nos. 2008/0095408, 2008/0187235, 2011/0001859, 2011/0181754, 2012/0069214, 2013/0051700, 2013/0128080, and 2013/0235257.
Japanese Patent Laid-Open Nos. 2002-290829, 2003-078815, and 2008-079209.