An aspect of the present invention is an imaging method using a multifocal lens having a plurality of regions, the plurality of regions having different focal lengths, and the imaging method includes a focusing state control step of controlling a focusing state of the multifocal lens, and an imaging step of obtaining an image of a subject in the controlled focusing state. In the focusing state control step, the focusing state is controlled so that a main subject is focused via a region with a shortest required focusing time among the plurality of regions in response to a picture-taking instruction.
1. An imaging method using a multifocal lens having a plurality of regions, the plurality of regions having different focal lengths, the method comprising:
controlling a focusing state of the multifocal lens;
obtaining an image of a subject in the controlled focusing state; and
calculating required focusing times for focusing via each of the respective regions,
wherein in the controlling the focusing state, the focusing state is controlled so that a main subject is focused via a region with a shortest required focusing time of the calculated required focusing times among the plurality of regions in response to input of a picture-taking instruction.
7. An imaging method using a multifocal lens having a plurality of regions, the plurality of regions having different focal lengths, the method comprising:
controlling a focusing state of the multifocal lens;
obtaining an image of a subject in the controlled focusing state; and
calculating required focusing times for focusing via each of the respective regions,
wherein in response to input of a picture taking instruction,
in the controlling the focusing state, control is performed so that a main subject is focused via a region with a shortest required focusing time of the calculated required focusing times among the plurality of regions, and the image of the subject is obtained, and subsequently
in the controlling the focusing state, control is performed so that the main subject is focused via a main region having a largest area among the plurality of regions, and the image of the subject is obtained.
12. An imaging method using a multifocal lens having a plurality of regions, the plurality of regions having different focal lengths, the method comprising:
controlling a focusing state of the multifocal lens;
obtaining an image of a subject in the controlled focusing state; and
calculating required focusing times for focusing via each of the respective regions,
wherein, in response to input of a picture-taking instruction, when a region with a shortest required focusing time of the calculated required focusing times among the plurality of regions is not a main region having a largest area among the plurality of regions, a first imaging is continuously performed, a second imaging is continuously performed subsequent to the first imaging, and a moving image of the subject is obtained,
otherwise, in response to the input of the picture-taking instruction,
the first imaging is continuously performed to obtain the moving image of the subject,
in the first imaging, control is performed so that a main subject is focused via the region with the shortest required focusing time in the focusing state control to obtain an image of the subject, and
in the second imaging, control is performed so that the main subject is focused via the main region in the focusing state control to obtain the image of the subject.
2. The imaging method according to
3. The imaging method according to
4. The imaging method according to
5. The imaging method according to
6. The imaging method according to
8. The imaging method according to
9. The imaging method according to
10. The imaging method according to
11. The imaging method according to
13. The imaging method according to
14. The imaging method according to
15. The imaging method according to
1. Field of the Invention
The present invention relates to a technology for controlling a focusing state by an imaging device including a multifocal lens.
2. Description of the Related Art
In recent years, an imaging device using a multifocal lens having a plurality of focal points has been known. For example, Patent Literature 1 describes that a subject image is obtained by convolution processing in an imaging device including a multifocal lens. Also, Patent Literature 2 describes that an imaging signal is corrected with an inverse function based on a point spread function in an imaging device including a multifocal lens. Furthermore, in Patent Literatures 1 and 2, images with different focal lengths are obtained by using the imaging device including the multifocal lens, and an in-focus image is obtained by restoration processing.
Still further, Patent Literature 3 describes that a barcode is read in a short focus region provided to a center portion of a multifocal lens and normal picture taking is performed in a long focus region provided to a peripheral portion of the lens.
On the other hand, as described in Patent Literature 4, pictures of the same subject are taken by a plurality of camera modules with different focal lengths instead of using a multifocal lens, and a multifocal image as a whole is obtained.
In the conventional technologies as described in Patent Literatures 1 and 2, by performing convolution processing on the image obtained by using the multifocal lens or correcting the image with the inverse function, an in-focus image can be obtained to some extent even with the lens fixed. However, in these conventional technologies, completely focusing on the subject is not intended, and a blur to some extent (within a range in an allowable circle of confusion) occurs in an almost entire range of the depth of field. Therefore, it is difficult to obtain an image that is in focus to the maximum performance of the lens. Moreover, the characteristics of the multifocal lens where the focal length and depth of field vary depending on the region are not sufficiently used.
Also, in the conventional technologies as described in Patent Literatures 1 to 3, by setting a deep depth of focus of the lens or performing restoration processing on the obtained image, an in-focus image can be obtained to some extent in a wide range even with the lens fixed. However, in these conventional technologies, completely focusing on the subject is not intended, and a blur to some extent (within a range in an allowable circle of confusion) occurs in an almost entire range of the depth of field. Therefore, it is difficult to obtain an image that is in focus to the maximum performance of the lens. Moreover, accurate focusing by following a moving subject is also difficult.
Furthermore, when many modules are used as in the technology described in Patent Literature 4, the system disadvantageously becomes complex and upsized.
The present invention was made based on the circumstances described above, and has an object of providing an imaging method, an image processing method, programs therefor, a recording medium, and an imaging device capable of obtaining a desired image by taking advantage of the characteristics of a multifocal lens and also capable of extracting and selecting the obtained image according to the purpose or preference.
To achieve the above-described object, an imaging method according to a first aspect of the present invention is an imaging method using a multifocal lens having a plurality of regions, the plurality of regions having different focal lengths, and the imaging method includes a focusing state control step of controlling a focusing state of the multifocal lens, and an imaging step of obtaining an image of a subject in the controlled focusing state. In the imaging method according to the first aspect, the focusing state of the multifocal lens having a plurality of different focal lengths is controlled to obtain an image of the subject. Therefore, a desired image, such as an accurately-focused image and an image focused at high speed, can be obtained by taking advantage of the characteristics of the multifocal lens.
In the imaging method according to a second aspect of the present invention in the first aspect, in the focusing state control step, the focusing state is controlled according to a relation between a depth of field of the multifocal lens and a width of a distance distribution of the subject.
In the imaging method according to the second aspect, focusing control is performed according to the relation between the depth of field of the multifocal lens and the width of the distance distribution of the subject. With this, according to the purpose of picture taking, it is possible, for example, to obtain an image accurately focused in any of the plurality of focal regions according to the distance to the subject, or to image a subject at a lens position different from a focusing position to obtain an image with a specific subject blurred, thereby obtaining a desired image by taking advantage of the characteristics of the multifocal lens. Note in the second aspect that the “depth of field of the multifocal lens” is assumed to include a depth of field of each focal region of the multifocal lens and a depth of field of the multifocal lens as a whole.
Note in the second aspect that the distance distribution can be detected in any of various methods. For example, calculation may be performed by using phase difference information, or by comparing contrasts of a plurality of imaging signals obtained at different lens positions. Also, an image obtained in the second aspect is not restricted to a still picture, and a moving picture may be obtained by continuously performing focusing control according to changes in distance distribution of the subject. Furthermore, in the second aspect, the imaging device preferably has an image pickup element including a light-receiving element which selectively receives a light beam passing through any of the plurality of regions.
In the imaging method according to a third aspect of the present invention in the first or second aspect, in the focusing state control step, when the depth of field of the multifocal lens as a whole is equal to or greater than the width of the distance distribution of the subject, the focusing state is controlled so that the width of the distance distribution of the subject is within the depth of field as a whole.
A relation between the depth of field of the multifocal lens as a whole and the distance distribution of the subject varies according to the design of the multifocal lens, the actual picture-taking situation, etc., and either one may be wider. In the imaging method according to the third aspect, in consideration of these circumstances, when the depth of field of the multifocal lens as a whole is equal to or greater than the width of the distance distribution of the subject, focusing control is performed so that the width of the distance distribution of the subject is within the depth of field to obtain an image. Therefore, an image appropriately focused according to the distance distribution of the subject can be obtained.
Note in the third aspect that how the width of the distance distribution of the subject is covered by the depth of field of the multifocal lens as a whole can be defined according to the picture-taking purpose. For example, with the width of the distance distribution of the subject set within the depth of field, control may be performed so that the center of the depth of field as a whole and the center of the distance distribution of the subject match each other, or so that the depth of field as a whole is shifted to a front side or a rear side of the distance distribution of the subject. Also, with the width of the distance distribution of the subject set within the depth of field as a whole, control may be performed so that the focusing degree in any of the focal regions with respect to a specific subject such as a main subject is equal to or greater than a threshold or is maximum.
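By way of illustration only, the following Python sketch shows the center-matching strategy just described, under a simplified model in which the depth of field is treated as a symmetric interval of fixed width (a real depth of field is asymmetric about the focus distance); the function name and the numeric values are assumptions for the example, not part of the disclosure.

```python
def center_match_focus(subject_near: float, subject_far: float,
                       dof_width: float) -> tuple[float, float]:
    # Place the overall depth of field so that its center coincides with
    # the center of the subject distance distribution (simplified model).
    subject_center = (subject_near + subject_far) / 2.0
    return (subject_center - dof_width / 2.0,
            subject_center + dof_width / 2.0)

# Subjects spread from 1.5 m to 3.0 m; an overall DOF 2.5 m wide covers
# the distribution with equal margin on both sides.
print(center_match_focus(1.5, 3.0, 2.5))  # (1.0, 3.5)
```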
In the imaging method according to a fourth aspect of the present invention in the first or second aspect, in the focusing state control step, when the depth of field of the multifocal lens as a whole is narrower than the width of the distance distribution of the subject, the focusing state is controlled so that a range not within the depth of field as a whole extends off equally on a front side and a back side of the depth of field as a whole. The fourth aspect defines an aspect of focusing control when the depth of field of the multifocal lens as a whole is narrower than the width of the distance distribution of the subject.
In the imaging method according to a fifth aspect of the present invention in the first or second aspect, in the focusing state control step, when the depth of field of the multifocal lens as a whole is narrower than the width of the distance distribution of the subject, the focusing state is controlled based on a focusing priority ranking among subjects. In the fifth aspect, for example, with the subject at the first priority ranking accurately focused in any of the focal regions, or with the focusing degree of the subject with the first priority ranking set equal to or greater than a predetermined value, the focusing degree of the subject with the second priority ranking or lower can be made maximum. With this, an image appropriately focused according to the distance distribution of the subject can be obtained.
The imaging method according to a sixth aspect of the present invention in the fifth aspect further includes a step of detecting a face of a person, and a step of setting a focusing priority ranking of the person with the detected face higher than a ranking of a subject other than the person, wherein in the focusing state control step, the focusing state is controlled based on the set focusing priority ranking. With this, in the imaging method according to the sixth aspect, an image focused in consideration of the distance distribution of the subject and further with a person prioritized can be obtained.
The imaging method according to a seventh aspect of the present invention in the fifth aspect further includes a step of causing a user of the imaging device to specify the focusing priority ranking, wherein in the focusing state control step, the focusing state is controlled based on the specified focusing priority ranking. Note in the imaging method according to the seventh aspect that the focusing priority ranking of a person may be made higher or the focusing priority ranking of a subject other than the person may be made higher.
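As a rough sketch of the priority handling in the fifth to seventh aspects, the following Python fragment ranks subjects (an explicit user-specified ranking first, then detected faces over other subjects) and greedily keeps the highest-ranked subjects whose distance span still fits within the overall depth of field; the data layout and the greedy policy are assumptions for the example, not the claimed control.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Subject:
    name: str
    distance_m: float
    is_face: bool = False
    user_rank: Optional[int] = None  # smaller number = higher priority

def priority_key(s: Subject):
    # An explicit user-specified ranking wins; otherwise subjects with
    # detected faces rank above other subjects.
    if s.user_rank is not None:
        return (0, s.user_rank)
    return (1, 0 if s.is_face else 1)

def subjects_to_cover(subjects, dof_width_m):
    # Greedily keep the highest-priority subjects whose distance span
    # still fits within the overall depth of field.
    kept = []
    for s in sorted(subjects, key=priority_key):
        trial = kept + [s]
        span = (max(t.distance_m for t in trial)
                - min(t.distance_m for t in trial))
        if span <= dof_width_m:
            kept = trial
    return kept

# The detected face is kept first; the distant woods are dropped when
# the overall DOF cannot cover everything.
scene = [Subject("child", 2.0, is_face=True), Subject("dog", 1.5),
         Subject("woods", 30.0)]
print([s.name for s in subjects_to_cover(scene, dof_width_m=2.5)])
# ['child', 'dog']
```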
In the imaging method according to an eighth aspect of the present invention in the first aspect, in the focusing state control step, the focusing state is controlled so that a main subject is focused in any of the plurality of regions. Note that whether the subject is a main subject can be determined based on whether the subject is a face of a person, based on a ratio occupying a picture-taking region, etc.
In the imaging method according to a ninth aspect of the present invention in the eighth aspect, in the focusing state control step, the focusing state is controlled so that the main subject is focused in a region with a shortest focal length among the plurality of regions, and thereby in the imaging step, an image with a region on a front side of the main subject blurred is obtained. With this, in the ninth aspect, an image in which a subject on a front side (a side near the imaging device) of the main subject is blurred according to the distance from the main subject can be obtained.
In the imaging method according to a tenth aspect of the present invention in the eighth aspect, in the focusing state control step, the focusing state is controlled so that the main subject is focused in a region with a longest focal length among the plurality of regions, and thereby in the imaging step, an image with a region on a depth side of the main subject blurred is obtained. With this, in the tenth aspect, an image in which a subject on a depth side (a side far from the imaging device) of the main subject is blurred according to the distance from the main subject can be obtained.
In the imaging method according to an eleventh aspect of the present invention, in any of the first to tenth aspects, in the focusing state control step, the focusing state is controlled based on a relation between an aperture value of the multifocal lens and a depth of field of the multifocal lens. The depth of field varies depending on the aperture of the lens. In general, the depth of field becomes deeper when the aperture value (F value) increases, and becomes shallower when the aperture value decreases. Therefore, as in the eleventh aspect, by performing focusing control based on the relation between the aperture value of the multifocal lens and the depth of field of the multifocal lens, an image appropriately focused according to the distance distribution of the subject can be obtained.
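The dependence of the depth of field on the aperture value can be made concrete with the standard hyperfocal-distance approximation; the sketch below is textbook optics rather than a formula taken from the present disclosure, and the circle-of-confusion value is an assumption.

```python
def dof_limits(focus_m: float, focal_mm: float, f_number: float,
               coc_mm: float = 0.03) -> tuple[float, float]:
    # Near/far limits of the depth of field via the hyperfocal-distance
    # approximation H = f^2 / (N * c).
    H = (focal_mm ** 2) / (f_number * coc_mm) / 1000.0  # hyperfocal, meters
    near = H * focus_m / (H + focus_m)
    far = H * focus_m / (H - focus_m) if focus_m < H else float("inf")
    return near, far

# Stopping down from f/2 to f/8 (50 mm lens focused at 2 m) deepens
# the depth of field:
print(dof_limits(2.0, 50.0, 2.0))  # roughly (1.91, 2.10)
print(dof_limits(2.0, 50.0, 8.0))  # roughly (1.68, 2.47)
```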
The imaging method according to a twelfth aspect of the present invention in any of the first to tenth aspects further includes a step of recording the obtained image and information indicating the focusing degree of the subject included in the obtained image. With this, in the imaging method according to the twelfth aspect, the user can select and extract a desired image, such as an image with a focusing degree equal to or greater than a predetermined value, with reference to the focusing degree information after image obtainment.
To achieve the above-described object, an image processing method according to a thirteenth aspect of the present invention includes a step of extracting an image from the images recorded with the imaging method according to the twelfth aspect, based on the recorded information. Also, the image processing method according to a fourteenth aspect of the present invention in the thirteenth aspect further includes a step of extracting an image including a subject having a specified focusing degree based on the recorded information. Furthermore, the image processing method according to a fifteenth aspect of the present invention in the thirteenth or fourteenth aspect further includes a step of generating a new image in which a specified subject has a specified focusing degree by using the extracted image. In the image processing method according to any of the above aspects, the user can obtain a desired focusing degree according to the purpose or preference. Note in the thirteenth to fifteenth aspects that a specific value may be specified as the focusing degree, or only an upper-limit value, only a lower-limit value, or a range formed of an upper-limit value and a lower-limit value may be specified.
To achieve the above-described object, an imaging program according to a sixteenth aspect of the present invention causes an imaging device to perform the imaging method according to any of the first to twelfth aspects. Also, to achieve the above-described object, an image processing program according to a seventeenth aspect of the present invention causes an image processing device to perform the image processing method according to any of the thirteenth to fifteenth aspects. The programs according to the sixteenth and seventeenth aspects may be incorporated in an imaging device such as a digital camera, or may be used as image processing and editing software in a personal computer (PC) or the like. Furthermore, in recording media according to eighteenth and nineteenth aspects of the present invention, computer-readable codes of the programs according to the sixteenth and seventeenth aspects are respectively recorded. As an example of the recording media according to the eighteenth and nineteenth aspects of the present invention, a non-transitory semiconductor storage medium or magneto-optical recording medium can be used, such as a ROM or RAM of a digital camera or PC as well as a CD, DVD, BD, HDD, SSD, or any of various memory cards.
To achieve the above-described object, an imaging device according to a twentieth aspect of the present invention includes a multifocal lens having a plurality of regions, the plurality of regions having different focal lengths, focusing state control means which controls a focusing state of the multifocal lens, and imaging means which obtains an image of a subject in the controlled focusing state, wherein the focusing state control means controls the focusing state according to a relation between a depth of field of the multifocal lens and a width of a distance distribution of the subject.
In the imaging device according to the twentieth aspect, by controlling the focusing state according to the relation between the depth of field of the multifocal lens and the width of the distance distribution of the subject, it is possible, for example, to obtain an image accurately focused in any of the plurality of focal regions according to the distance to the subject, or to image a subject at a lens position different from a focusing position to obtain an image with a specific subject blurred, thereby obtaining a desired image by taking advantage of the characteristics of the multifocal lens. Note in the twentieth aspect that, as with the second aspect, the “depth of field of the multifocal lens” is assumed to include a depth of field of each focal region of the multifocal lens and a depth of field of the multifocal lens as a whole.
Also, the image obtained in the twentieth aspect is not restricted to a still image, and a moving image may be obtained by continuously performing focusing control according to the changes of the distance distribution of the subject. Note that the imaging device according to the twentieth aspect preferably has an image pickup element including a light-receiving element which selectively receives a light beam passing through any of the plurality of regions.
To achieve the above-described object, in the imaging method according to a twenty-first aspect of the present invention in the first aspect, in the focusing state control step, the focusing state is controlled so that a main subject is focused via any of the plurality of regions in response to a picture-taking instruction.
In the imaging method according to the twenty-first aspect, the focusing state is controlled so that the main subject is focused via any of the plurality of regions. Therefore, an image where the main subject is accurately focused can be obtained. Also, since focusing is performed via any of the plurality of regions, the time required for lens driving can be shortened, and focusing can be performed at high speed. Note that the imaging device preferably has an image pickup element including a light-receiving element which selectively receives a light beam passing through any of the plurality of regions.
Note in each aspect of the present invention that the “multifocal lens” is not restricted to a single lens but includes a lens configured of a plurality of lenses. “Controlling the multifocal lens” means that the subject is focused by moving at least part of the lens(es) configuring the multifocal lens. Also, in each aspect of the present invention, a “picture-taking instruction” may be inputted by a user pressing a shutter button, or may be generated by an imaging device at each picture-taking time during moving-picture taking. Furthermore, in each aspect of the present invention, whether the subject is a main subject can be determined based on whether the subject is a face of a person, based on a ratio occupying a picture-taking region, etc.
In the imaging method according to a twenty-second aspect of the present invention in the twenty-first aspect, in the focusing state control step, control is performed so that the main subject is focused via a region with a shortest required focusing time among the plurality of regions. In the imaging method according to the twenty-second aspect, since control is performed so that the main subject is focused via a region with the shortest required focusing time among the plurality of regions, focusing can be made at higher speed.
In the imaging method according to a twenty-third aspect of the present invention in the twenty-first or twenty-second aspect, in the focusing state control step, control is performed so that the main subject is focused via a main region having a largest area among the plurality of regions. When imaging is performed by using a multifocal lens, image quality depends on the area of each focal region. In the imaging method according to the twenty-third aspect, the image is obtained by performing control so that the main subject is focused via the main region having the largest area among the plurality of regions. Therefore, an image accurately focused with high image quality can be obtained, and the maximum performance of the lens can be used.
In the imaging method according to a twenty-fourth aspect of the present invention in any of the twenty-first to twenty-third aspects, in the focusing state control step, control is performed so that the main subject is focused via a region with a shortest required focusing time among the plurality of regions, and the image of the subject is obtained in the imaging step, and subsequently in the focusing state control step, control is performed so that the main subject is focused via a main region having a largest area among the plurality of regions, and the image of the subject is obtained in the imaging step. In the imaging method according to the twenty-fourth aspect, control is performed so that focusing is made via a region with the shortest required focusing time among the plurality of regions to obtain an image and, subsequently, control is performed so that focusing is made via a main region having the largest area among the plurality of regions to obtain an image. Therefore, an image focused at high speed without a time lag and an image accurately focused with high image quality but with a slight time lag can be obtained. From these images, the user can select a desired image according to the purpose or preference.
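A minimal sketch of the two-shot sequence of the twenty-fourth aspect follows, modeling each focal region by its area and by the lens position that focuses the main subject via it; the FocalRegion layout and the travel-time criterion (lens travel divided by lens speed, as described later for the seventh embodiment) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FocalRegion:
    name: str
    area_mm2: float      # area of the focal region on the lens
    in_focus_pos: float  # lens position focusing the main subject via it

def two_shot_sequence(regions, current_pos, lens_speed=1.0):
    # First shot via the region whose in-focus lens position is reached
    # fastest (least lens travel); second shot via the largest-area main
    # region, trading a slight time lag for image quality.
    fastest = min(regions,
                  key=lambda r: abs(r.in_focus_pos - current_pos) / lens_speed)
    main = max(regions, key=lambda r: r.area_mm2)
    return [("fast shot", fastest.name), ("quality shot", main.name)]

regions = [FocalRegion("near", 20.0, 0.8),
           FocalRegion("intermediate (main)", 50.0, 0.4),
           FocalRegion("far", 30.0, 0.1)]
print(two_shot_sequence(regions, current_pos=0.75))
# [('fast shot', 'near'), ('quality shot', 'intermediate (main)')]
```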
In the imaging method according to a twenty-fifth aspect of the present invention in any of the twenty-first to twenty-fourth aspects, in the imaging step, after the picture-taking instruction and before the focusing state control step is performed, an image is obtained in all of the plurality of regions with the focusing state at the picture-taking instruction being kept. Since imaging is performed with the focusing state at the time of the picture-taking instruction being kept and a multifocal lens is used, an image focused to some extent can be obtained at high speed in any of the plurality of regions.
In the imaging method according to a twenty-sixth aspect of the present invention in any of the twenty-first to twenty-fifth aspects, in response to the picture-taking instruction, at least one of the following is continuously performed to obtain a moving image of the subject: performing control so that the main subject is focused via a region with a shortest required focusing time among the plurality of regions in the focusing state control step and obtaining an image of the subject in the imaging step; and performing control so that the main subject is focused via a main region having a largest area among the plurality of regions in the focusing state control step and obtaining the image of the subject in the imaging step.
In the imaging method according to the twenty-sixth aspect, in response to the picture-taking instruction, at least one of the following image obtainment types is continuously performed to obtain a moving image of the subject: image obtainment with control so that the main subject is focused via a region with the shortest required focusing time among the plurality of regions and image obtainment with control so that the main subject is focused via a main region having the largest area among the plurality of regions. Therefore, at least one of the image focused at high speed and the image accurately focused with high image quality can be continuously obtained. Note that which image is to be obtained may be specified by the user, or both images may be continuously obtained and then selected and edited subsequently. With this, the user can obtain a desired image according to the purpose or preference.
The imaging method according to a twenty-seventh aspect of the present invention in any of the twenty-first to twenty-sixth aspects further includes a step of recording the obtained image and imaging time information indicating a time from the picture-taking instruction to image obtainment in association with each other. In the imaging method according to the twenty-seventh aspect, the obtained image and the imaging time information indicating the time from the picture-taking instruction to image obtainment are recorded in association with each other. Therefore, the user can select and extract a desired image, such as an image whose picture-taking time is equal to or less than a predetermined time, with reference to the imaging time information after image obtainment.
The imaging method according to a twenty-eighth aspect of the present invention in any of the twenty-first to twenty-seventh aspects further includes a step of recording the obtained image and focusing degree information indicating a focusing degree of the obtained image in association with each other. In the imaging method according to the twenty-eighth aspect, the obtained image and the focusing degree information indicating the focusing degree of the obtained image are recorded in association with each other. Therefore, the user can select and extract a desired image, such as an image whose focusing degree is equal to or greater than a predetermined value, with reference to the focusing degree information after image obtainment.
The imaging method according to a twenty-ninth aspect of the present invention in any of the twenty-first to twenty-eighth aspects further includes a step of recording the obtained image and imaging region information indicating from which region the obtained image is obtained in association with each other. In the imaging method according to the twenty-ninth aspect, the obtained image and the imaging region information indicating from which region the obtained image is obtained are recorded in association with each other. Therefore, the user can select and extract an image taken in a specific region (for example, in the main region) according to the purpose or preference.
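One possible realization of recording an image in association with the imaging time, focusing degree, and imaging region information of the twenty-seventh to twenty-ninth aspects is a sidecar metadata file, sketched below; the JSON sidecar format is an assumption for the example, not the recording scheme of the disclosure.

```python
import json
import time

def record_with_metadata(image_path: str, instruction_time: float,
                         focusing_degree: float, region_name: str) -> None:
    # Write a sidecar JSON next to the image file carrying the
    # imaging-time, focusing-degree, and imaging-region information.
    metadata = {
        "imaging_time_s": time.time() - instruction_time,  # instruction-to-capture lag
        "focusing_degree": focusing_degree,
        "region": region_name,
    }
    with open(image_path + ".json", "w", encoding="utf-8") as f:
        json.dump(metadata, f)

# Later, images can be selected by filtering the sidecars, e.g. keeping
# only those with imaging_time_s below a limit, focusing_degree above a
# threshold, or a region value of "main".
```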
To achieve the above-described object, an image processing method according to a thirtieth aspect of the present invention includes a step of extracting an image from the images recorded with the imaging method according to any of the twenty-seventh to twenty-ninth aspects, based on the recorded information. In the image processing method according to the thirtieth aspect, an image is extracted with reference to the imaging time information, the focusing degree information, and the imaging region information recorded in association with the image. Therefore, the user can obtain a desired image according to the purpose or preference.
To achieve the above-described object, an imaging program according to a thirty-first aspect of the present invention causes an imaging device to perform the imaging method according to any of the twenty-first to twenty-ninth aspects.
To achieve the above-described object, an image processing program according to a thirty-second aspect of the present invention causes an image processing device to perform the image processing method according to the thirtieth aspect. The program according to the thirty-second aspect may be used by being incorporated in an imaging device such as a digital camera or may be used in a personal computer (PC) or the like as image processing/editing software. Also, in recording media according to thirty-third and thirty-fourth aspects of the present invention, computer-readable codes of the programs according to the thirty-first and thirty-second aspects are respectively recorded. As an example of the recording media according to the thirty-third and thirty-fourth aspects, a non-transitory semiconductor storage medium or magneto-optical recording medium can be used, such as a ROM or RAM of a digital camera or PC as well as a CD, DVD, BD, HDD, SSD, or any of various memory cards.
To achieve the above-described object, the imaging device according to a thirty-fifth aspect of the present invention includes a multifocal lens having a plurality of regions, the plurality of regions having different focal lengths, focusing state control means which controls a focusing state of the multifocal lens, and imaging means which obtains an image of a subject in the controlled focusing state, wherein the focusing state control means performs control so that a main subject is focused via any of the plurality of regions in response to a picture-taking instruction.
In the imaging device according to the thirty-fifth aspect, the focusing state of the multifocal lens is controlled so that the main subject is focused via any of the plurality of regions to obtain an image. Therefore, an image accurately focused on the main subject can be obtained. Also, since focusing is made via any of the plurality of regions, the time required for lens driving can be shortened, and focusing can be made at high speed. Note that the imaging device preferably has an image pickup element including a light-receiving element which selectively receives a light beam passing through any of the plurality of regions.
In the imaging device according to a thirty-sixth aspect of the present invention, the focusing state control means controls the focusing state with a phase-difference scheme. For focusing control for use in the imaging device, a so-called contrast scheme and a so-called phase-difference scheme are available. While the focusing state is detected by driving the lens in the contrast scheme, the focusing state is detected based on a phase difference of a subject image in the phase-difference scheme. Therefore, focusing control can be performed at higher speed than with the contrast scheme, and accurate focusing at high speed, which is an effect of the present invention, can be further achieved.
Note in each aspect of the present invention described above, the “multifocal lens” is not restricted to one lens but includes a lens configured of a plurality of lenses, and “control of the multifocal lens” means that at least part of the lens (lenses) configuring the multifocal lens is moved to focus on the subject.
As described above, according to the present invention, a desired image can be obtained by taking advantage of the characteristics of the multifocal lens, and the obtained image can be extracted and selected according to the purpose or preference.
Embodiments for carrying out the imaging method and image processing method, program therefor, recording medium, and imaging device according to the present invention are described in detail below according to the attached drawings.
[Structure of Imaging Device]
The imaging device 10 is provided with an operating unit 38 such as a shutter button, a mode dial, a replay button, a MENU/OK key, a cross key, and a BACK key. A signal from this operating unit 38 is inputted to the CPU 40 and, based on the input signal, the CPU 40 controls each circuit of the imaging device 10, as will be described further below.
The shutter button is an operation button for inputting an instruction for starting picture taking, and is configured of switches of a two-step stroke type having an S1 switch that is turned ON at the time of a half push and an S2 switch that is turned ON at the time of a full push. The mode dial is means for selecting a still-picture/moving-picture taking mode, a manual/auto picture-taking mode, a picture-taking scene, and the like.
The replay button is a button for switching to a replay mode for causing a still picture or moving picture of images taken and recorded to be displayed on a liquid-crystal monitor (LCD) 30. The MENU/OK key is an operation key having both of a function for making an instruction for causing a menu to be displayed on a screen of the liquid-crystal monitor 30 and a function for making an instruction for confirming and executing selected details and the like. The cross key is an operating unit for inputting an instruction of any of four directions, that is, upward, downward, leftward, and rightward, and functions as cursor movement operation means, a zoom switch, a frame-advance button at the time of replay mode, or the like. The BACK key is used to delete a desired target such as a selected item and cancel instruction details, return to the immediately preceding operation state, or the like. These buttons and keys can also be used for an operation required at the time of image extraction and combining processes or setting priorities among subjects, which will be described further below.
In the taking mode, an image of image light representing a subject is formed via a taking lens 12 and an aperture 14 onto a light-receiving surface of a solid-state image pickup element (hereinafter referred to as a “CCD”) 16 (imaging means). The taking lens 12 is driven by a lens drive unit 36 (focusing state control means), which is controlled by the CPU 40, and focusing control or the like is performed, which will be described further below. The lens drive unit 36 changes the focal position by moving the taking lens 12 in an optical axis direction by following an instruction from the CPU 40.
The taking lens 12 (multifocal lens and imaging means) is a multifocal lens (trifocal lens) having a region (hereinafter referred to as a near focal region) 12a with a short focal length for near distance picture taking, a region (hereinafter referred to as an intermediate focal region) 12b with a focal length longer than that of the near focal region and capable of taking pictures of humans and the like, and a region (hereinafter referred to as a far focal region) 12c with a focal length further longer than that of the intermediate focal region and capable of taking pictures of landscapes and the like. In the taking lens 12, as depicted in
Note that while each focal region is formed in a half-moon shape or a band shape in the example of
An area of a region surrounded by each of these curves corresponds to an area of each focal region. The shape of each curve, the degree of overlap with another curve, and a depth of field DOF of the taking lens 12 as a whole defined by these can be set according to the characteristics of the optical system and the picture-taking purpose. These curves Ca, Cb, and Cc correspond to MTF (Modulation Transfer Function) curves of the respective regions.
Light beams passing through the near focal region 12a, the intermediate focal region 12b, and the far focal region 12c of the taking lens 12 enter each photosensor of the CCD 16. As depicted in
Note that, as depicted in
Also, the CPU 40 controls the aperture 14 via the aperture drive unit 34, and also controls a charge accumulation time (shutter speed) at the CCD 16 via a CCD control unit 32 and reading of an image signal from the CCD 16. The signal charge accumulated in the CCD 16 is read as a voltage signal according to the signal charge based on a read signal supplied from the CCD control unit 32, and is supplied to an analog signal processing unit 20.
The analog signal processing unit 20 samples and holds R, G, and B signals for each pixel by performing a correlated double sampling process on the voltage signal outputted from the CCD 16, and amplifies and then supplies the resultant signals to an A/D converter 21. The A/D converter 21 converts the sequentially inputted analog R, G, and B signals to digital R, G, and B signals for output to an image input controller 22.
A digital signal processing unit 24 performs predetermined signal processing on the digital image signals inputted via the image input controller 22, such as offset processing, gain control processing including white balance correction and sensitivity correction, gamma correction processing, and YC processing.
Image data processed in the digital signal processing unit 24 is outputted to a VRAM 50. The VRAM 50 includes an A region and a B region each for storing image data representing one frame. Image data representing one frame is alternately rewritten between the A region and the B region. From the region other than the region where image data is being rewritten, the written image data is read. The image data read from the VRAM 50 is encoded in a video encoder 28 and is outputted to the liquid-crystal monitor 30, thereby causing a subject image to be displayed on the liquid-crystal monitor 30.
The liquid-crystal monitor 30 adopts a touch panel and displays the obtained image. Also, as will be described further below, operations via the screen are possible, such as specifying a main subject and other subjects, setting focusing priorities among subjects, and specifying a region to be blurred.
Also, when a press at the first step (half press) of the shutter button of the operating unit 38 is provided, the CPU 40 starts an AF operation (focusing state control step), controlling a focusing state of the taking lens 12 via the lens drive unit 36. Also, image data outputted from the A/D converter 21 at the time of a half press of the shutter button is captured into an AE detecting unit 44.
The CPU 40 calculates a brightness (picture-taking Ev value) of the subject from an integrated value of G signals inputted from the AE detecting unit 44, determines an aperture value and an electronic shutter (shutter speed) of the CCD 16 based on this picture-taking Ev value and, based on the result, controls the aperture 14 and a charge accumulation time at the CCD 16.
An AF processing unit 42 (focusing state control means) is a portion where a phase-difference AF process is performed; it controls a focus lens in the taking lens 12 so that a defocus amount found from a phase difference between image data of a main picture element and image data of a sub-picture element in a predetermined focus region becomes zero.
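For orientation, a phase-difference detection of this kind can be sketched as a cross-correlation between the main-picture-element and sub-picture-element line signals; the NumPy implementation below and the pixel-shift-to-lens-travel gain are stand-ins for illustration, not the actual processing of the AF processing unit 42.

```python
import numpy as np

def phase_difference_shift(main_px, sub_px) -> int:
    # Estimate the lateral shift (in pixels) between the main- and
    # sub-picture-element line signals via their cross-correlation peak;
    # inputs are 1-D sequences of pixel values.
    a = np.array(main_px, dtype=float)
    b = np.array(sub_px, dtype=float)
    a -= a.mean()
    b -= b.mean()
    corr = np.correlate(a, b, mode="full")
    return int(corr.argmax()) - (len(b) - 1)  # signed shift in pixels

def defocus_drive(shift_px: int, gain: float = 1.0) -> float:
    # Convert the pixel shift to a lens drive amount; 'gain' stands in
    # for a device-specific conversion coefficient (an assumption).
    return gain * shift_px
```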
When the AE operation and the AF operation end and a press of the shutter button at the second step (full press) is provided, image data outputted from the A/D converter 21 in response to the press is inputted from the image input controller 22 to a memory (SDRAM) 48 and is temporarily stored therein.
After temporary storage in the memory 48, an image file is generated after signal processing at the digital signal processing unit 24 such as YC processing, compression processing to a JPEG (joint photographic experts group) format at a compression/decompression processing unit 26, etc. The image file is read by a media controller 52 and recorded on a memory card 54. The image recorded on the memory card 54 can be replayed and displayed on the liquid-crystal monitor 30 by operating the replay button of the operating unit 38.
[Imaging Process in the Present Invention]
Next, an imaging process performed by the above-structured imaging device 10 is described.
When the imaging process is started, the lens is first moved to an initial position (S100). Any initial position can be set. For example, a lens position where a subject at a distance of 2 m is focused in the intermediate focal region 12b as a main region can be set as the initial position. When the taking lens 12 is moved to the initial position, picture taking is ready. When the user presses the shutter button of the operating unit 38 to input a picture-taking instruction (S102), a distance distribution of the subject is detected (S104). Based on the detected distance distribution, the child Q1 as a main subject is focused in the main region (S106; focusing state control step). In this state, the child Q1 is within a width Wb of the intermediate focal region 12b (and also within the range of the depth of field DOF of the taking lens 12 as a whole), and the position of the child Q1 matches the position of the peak Pb of the curve Cb. Note that a determination as to whether the subject is a main subject at S106 can be made by determining whether the subject is a human face, based on a ratio occupying a picture-taking region, or by the user's specification via the liquid-crystal monitor 30.
Note in the first embodiment that the distance distribution can be detected by any of various methods. For example, as described in Japanese Patent Application Laid-Open No. 2011-124712, calculation may be performed by using phase difference information or by comparing contrasts of a plurality of imaging signals obtained at different lens positions. The same goes for second to fourth embodiments, which will be described further below.
After focusing control at S106, an image is obtained (S108; imaging step), and the obtained image is recorded in association with focusing information (such as the width of the distance distribution of the subject, the depth of field of the taking lens 12, and the focusing degree of the subject) in the memory card 54 (S110). This allows extraction and combining of images with reference to focusing degree information, which will be described further below.
Note that when moving-picture taking is performed in the first embodiment, the processes at S104 to S110 are continuously performed to obtain and record images according to the movement of the main subject (that is, changes of the distance distribution).
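For step S106, focusing the main subject in the main region amounts to solving for the lens position that places the subject distance at the peak Pb of the curve Cb; the sketch below assumes, purely for illustration, a linear relation between lens position and the peak distance of the main region, with hypothetical parameter values.

```python
def lens_position_for_main_region(subject_distance_m: float,
                                  peak_at_pos0_m: float = 2.0,
                                  m_per_pos_unit: float = 4.0) -> float:
    # Solve for the lens position that places the main subject at the
    # peak Pb of the main region's focusing-degree curve Cb, assuming
    # the peak distance varies linearly with lens position:
    #   peak(x) = peak_at_pos0_m + m_per_pos_unit * x
    return (subject_distance_m - peak_at_pos0_m) / m_per_pos_unit

# A main subject detected at 3 m calls for moving the lens +0.25 units
# from the initial position, where the main region peaks at 2 m.
print(lens_position_for_main_region(3.0))  # 0.25
```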
When an imaging process is started, the CPU 40 moves the taking lens 12 to an initial position (S200). When a picture-taking instruction is inputted (S202), the depth of field DOF of the taking lens 12 as a whole is calculated based on the aperture value of the aperture 14 (S204), and a distance distribution of the subject is next detected (S206). As a result, when the depth of field DOF is equal to or greater than the width SD of the distance distribution of the subject (YES at S208), the procedure proceeds to S210 (focusing state control step), where focusing control is performed so that the width SD is within the depth of field DOF, as depicted in (b) of
How the width SD of the distance distribution of the subject is covered by the depth of field DOF as the multifocal lens as a whole at the time of focusing control at S210 can be defined according to the degree of difference in breadth between DOF and SD or the picture-taking purpose. For example, with the width SD of the distance distribution set within the depth of field DOF, control may be performed so that the center of DOF and the center of SD match each other or so that the focusing degree in any of the focal regions with respect to a specific subject such as a main subject is maximum.
After focusing control at S210, imaging is performed (S212; imaging step), and the obtained image and focusing information (such as the width of the distance distribution of the subject, the depth of field of the taking lens 12, and the focusing degrees of the respective subjects Q1 to Q3) are recorded in association with each other in the memory card 54 (S214).
On the other hand, when the depth of field DOF is narrower than the width SD of the distance distribution of the subject (NO at S208), the procedure proceeds to S216 (focusing state control step), where focusing control is performed so that SD extends off the depth of field DOF equally forward and backward, an image is obtained (S218; imaging step), and the obtained image and the focusing degree information of the respective subjects Q1 to Q3 are recorded in association with each other in the memory card 54 (S220).
Note that when moving-picture taking is performed in the second embodiment, the processes at S204 to S214 or S204 to S220 are continuously performed.
As such, in the second embodiment, focusing control is performed according to whether the depth of field DOF of the taking lens 12 as a whole is equal to or greater than the width SD of the distance distribution of the subject to obtain an image. Therefore, an appropriately focused image can be obtained according to the distance distribution of the subject.
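The branch at S208 can be summarized in a few lines of Python; note that splitting the uncovered range equally to the front and back (S216) is equivalent, in this simplified symmetric-interval model, to centering the depth of field on the subject span. The function below is an illustrative sketch, not device firmware.

```python
def plan_focus(dof_width: float, subject_near: float, subject_far: float):
    # S208: if the overall DOF is at least as wide as the subject span,
    # fit the span inside the DOF (S210); otherwise let the overflow
    # extend off equally on the front and back sides (S216).
    span = subject_far - subject_near
    center = (subject_near + subject_far) / 2.0
    action = "fit_within_dof" if dof_width >= span else "equal_overflow"
    return action, center - dof_width / 2.0, center + dof_width / 2.0

print(plan_focus(2.0, 1.5, 3.0))  # ('fit_within_dof', 1.25, 3.25)
print(plan_focus(1.0, 1.5, 3.0))  # ('equal_overflow', 1.75, 2.75)
```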
In the flowchart of
In the flowchart of
Examples of the priority setting described above are depicted in (a) to (c) of
An example in which the user sets priority ranking among subjects is depicted in (c) of
When the priority ranking is set at S316 in the above-described manner, the procedure proceeds to S318 (focusing state control step), where focusing control is performed so that the subject with higher priority ranking is within the depth of field DOF of the taking lens 12. Note in this focusing control that any of various schemes can be adopted to determine how control is further performed, with the subject with higher priority ranking being within the depth of field DOF of the taking lens 12. For example, the subject Q1 with the first priority ranking can be accurately focused in any of the focal regions of the taking lens 12 or, as depicted in (d) of
When the focusing control ends, an image of the subject is obtained (S320; imaging step), and the obtained image and the focusing information (such as the width of the distance distribution of the subject, the depth of field of the taking lens 12, and the focusing degree of the subject) are recorded in association with each other in the memory card 54 (S322).
Note that when moving-picture taking is performed in the third embodiment, the processes at S304 to S322 are continuously performed.
As such, in the third embodiment, an image appropriately focused according to the distance distribution of the subject can be obtained.
In the flowchart of
In the flowchart of
On the other hand, when it is determined to blur the depth side of the main subject (NO at S408), the CPU 40 performs control so that the child Q1 as the main subject is focused in the far focal region 12c of the taking lens 12 as depicted in (f) of
As such, in the fourth embodiment, the user can obtain a desired image, with the front side or the depth side of the main subject blurred according to the picture-taking purpose. Note that the user may be prompted to specify a blurring target region via the liquid-crystal monitor 30, so that focusing control is performed to blur the specified region and imaging is then performed.
[Image Processing in the Present Invention]
Next, extraction and combining of images obtained with the above-described scheme are described.
First at S500, an image including a subject as a target (here, the case is described in which the child Q1, the dog Q2, and the woods Q3 are present as in (a) to (c) of
When the focusing degree of the detected target subject is equal to or greater than the threshold (YES at S502), the procedure proceeds to S504, where it is determined whether the detected focusing degree is maximum. When it is determined that the focusing degree is maximum (YES at S504), updating is performed with the process target image being taken as a combining source image regarding the target subject, and the procedure then proceeds to S508, where it is determined whether all images have been processed. Note that when NO is determined at S502 or S504, the procedure proceeds to S508 without updating the combining source image. When YES is determined at S508, the procedure proceeds to S510. When NO is determined, the processes at S502 to S506 are repeated until the process on all images ends. With this, an image whose focusing degree for the target subject is equal to or greater than the threshold and maximum is extracted from the images recorded in the memory card 54.
When the image extracting process for one target subject ends in this manner, it is determined whether the process on all subjects has ended (S510). In the example depicted in (a) to (c) of
As such, in the fifth embodiment, an image with all subjects having a focusing degree equal to the threshold or more can be obtained by combining. Note that image extraction and combining may be performed by displaying an image before the process and an image after the process on the liquid-crystal monitor 30.
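A compact sketch of the extraction loop S500-S510 follows; here 'records' stands for the images recorded in the memory card 54 together with their per-subject focusing degrees, in a hypothetical in-memory form.

```python
def extract_best_per_subject(records, subjects, threshold):
    # S500-S510: for each target subject, scan the recorded images and
    # keep the one whose focusing degree for that subject is at least
    # the threshold (S502) and maximal (S504, S506).
    best = {}
    for subject in subjects:
        for image_id, degrees in records:
            d = degrees.get(subject)
            if d is None or d < threshold:                   # S502
                continue
            if subject not in best or d > best[subject][1]:  # S504
                best[subject] = (image_id, d)                # S506
    return best  # per-subject combining-source images for the combining step

records = [("img1", {"child": 0.9, "dog": 0.4}),
           ("img2", {"child": 0.5, "dog": 0.8})]
print(extract_best_per_subject(records, ["child", "dog"], 0.6))
# {'child': ('img1', 0.9), 'dog': ('img2', 0.8)}
```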
First at S600, an image including a subject as a target (here, the case is described in which the child Q1, the dog Q2, and the woods Q3 are present as in (a) to (c) of
At S604, the focusing degree of the subject of interest is detected to determine whether the focusing degree is equal to or greater than a threshold. As described in the first to fourth embodiments, since the images and the focusing information (such as the width of the distance distribution of the subject, the depth of field of the taking lens 12, and the focusing degree of the subject) are recorded in association with each other in the memory card 54, the focusing degree of the subject of interest can be detected with reference to this focusing information. Note that a value set on the imaging device 10 side may be used as the threshold, or a value specified by the user via the operating unit 38 may be used. Also, a threshold may be set for each subject.
When the focusing degree of the subject of interest is equal to or greater than the threshold (YES at S604), the procedure proceeds to S606 to determine whether the detected focusing degree is maximum. When it is determined that the focusing degree is maximum (YES at S606), the process target image is updated as a combining source image (S608), and the procedure then proceeds to S610 to determine whether all images have been processed. When NO is determined at S604 or S606, the procedure proceeds to S610 without updating the combining source image. When the determination result is YES at S610, the procedure proceeds to S612. When the determination result is NO, the processes at S600 to S608 are repeated until the process on all images ends. With this, an image whose focusing degree for the subject of interest is equal to or greater than the threshold and maximum is extracted from among the images recorded in the memory card 54.
On the other hand, when the detected subject is not a subject of interest and the procedure proceeds to S614, it is determined whether the focusing degree is equal to or less than the threshold. When the determination result is YES, the procedure proceeds to S616 to determine whether the focusing degree is minimum. When the determination result is YES, the procedure proceeds to S618, where the process target image is updated as a combining source image regarding a subject other than the subject of interest, and the procedure then proceeds to S610. Note that when NO is determined at S614 or S616, the procedure proceeds to S610 without updating the combining source image.
When the image extracting process for one target subject ends in this manner, it is determined whether the process on all subjects has ended (S610). When the determination at S610 is YES, image combining is performed at S612. This image combining is a process of extracting, for each target subject, a portion of the target subject from the extracted image and combining these portions into one image. As described above, since an image with a high focusing degree is extracted for the subject of interest and images with a low focusing degree are extracted for the other subjects, a new image is obtained by combining so that the focusing degree of the subject of interest is equal to or greater than the threshold and the focusing degrees of the other subjects are equal to or less than the threshold. Note that image extraction and combining may be performed while displaying an image before the process and an image after the process on the liquid-crystal monitor 30.
Note in the fifth and sixth embodiments that when image extraction and combining is performed for a moving image, a reference value of extraction (a threshold of the focusing degree) may be changed over time. For example, instead of always extracting an image with a focusing degree equal to or greater than a fixed threshold, an image with a low focusing degree may be extracted at first, and images with gradually higher focusing degrees may be extracted as time elapses. When the images extracted in this manner are replayed as moving pictures, the subject initially blurred gradually becomes clear, and natural moving images can be obtained in cases where abrupt focusing would feel unnatural. Conversely, images may be extracted so that the focused subject is gradually blurred. Such a time-varying reference value (threshold) may be set by the imaging device, or a value set by the user via the liquid-crystal monitor 30 or the operating unit 38 may be used.
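A time-varying threshold of the kind described can be as simple as a linear ramp; the endpoint values below are illustrative.

```python
def ramp_threshold(frame_idx: int, n_frames: int,
                   start: float = 0.2, end: float = 0.8) -> float:
    # Linearly raise the extraction threshold over the clip so replayed
    # frames go from softly focused to sharply focused.
    t = frame_idx / max(n_frames - 1, 1)
    return start + (end - start) * t

# First frame uses the low threshold, the last frame the high one:
# ramp_threshold(0, 60) -> 0.2, ramp_threshold(59, 60) -> 0.8
# Swapping start and end produces the reverse (gradual blurring) effect.
```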
In the above-described embodiment, description is made in which the imaging device 10 is a device performing an imaging process and image processing. However, the imaging process and image processing according to the present invention are not restricted to be performed by the imaging device, and can be performed at a camera-equipped portable telephone, a personal computer (PC), a portable game machine, or the like.
Next, a seventh embodiment of the present invention is described. The structure of the imaging device 10 (imaging device and image processing device) according to the seventh embodiment is similar to that of the imaging device 10 according to the first to sixth embodiments described above (refer to
[Imaging Process]
Next, an imaging process performed by the above-structured imaging device 10 is described.
When moving-picture taking is performed in the seventh embodiment, processes at S706 to S716 as depicted in
Note in the imaging process that the time required for focusing can be calculated from a lens position at the time of a picture-taking instruction, an in-focus lens position, and a lens moving speed.
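In other words, the required focusing time for a region is the distance the lens must travel divided by its moving speed. The following sketch illustrates this; all numeric values are illustrative assumptions.

```python
def required_focusing_time(current_position, in_focus_position, moving_speed):
    """Time to drive the lens from its position at the picture-taking
    instruction to the in-focus position at a constant moving speed."""
    return abs(in_focus_position - current_position) / moving_speed

# Illustrative per-region in-focus positions, with the lens currently
# at position 10.0 and moving at 2.0 position units per second.
in_focus = {"near": 4.0, "intermediate": 12.0, "far": 19.0}
times = {r: required_focusing_time(10.0, p, 2.0) for r, p in in_focus.items()}
print(min(times, key=times.get))  # -> "intermediate" (1.0 s)
```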
With the processes as described above, in the seventh embodiment, the main subject is focused via the region with the shortest required focusing time in response to a picture-taking instruction to obtain an image. Therefore, a focused image can be obtained at high speed (without a time lag). Also, since the main subject is focused via one of the focal regions, an accurately focused image can be obtained.
Next, an eighth embodiment of the imaging process according to the present invention is described.
When an imaging process is started, the CPU 40 moves the taking lens 12 to an initial position (S800). When a picture-taking instruction is inputted (S802), images are obtained and recorded in all regions, with the lens position at the time of the picture-taking instruction being kept (S804; imaging step). When the images obtained at S804 are recorded, focusing information, which will be described further below, may be recorded in association with each image. Next, the CPU 40 detects a focusing state of the taking lens 12 (S806), the main subject is focused via a main region (S807; focusing state control step) to obtain an image (S808; imaging step), and the obtained image and the focusing information are recorded in association with each other in the memory card 54 (S810).
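For reference, the S800 to S810 flow reduces to the following sketch. Every capture here is a simulated placeholder using plain dictionaries, not the imaging device's actual API.

```python
def capture(region, lens_position):
    """Placeholder standing in for an actual exposure."""
    return {"region": region, "lens_position": lens_position}

def eighth_embodiment_sequence(regions, main_region, position_at_instruction):
    records = []
    # S804: obtain and record images in all regions, with the lens
    # position at the picture-taking instruction kept as-is.
    for region in regions:
        records.append((capture(region, position_at_instruction), None))
    # S806-S810: focus the main subject via the main region, then record
    # the obtained image in association with its focusing information.
    image = capture(main_region, "in-focus position")
    focusing_information = {"imaging_region": main_region}
    records.append((image, focusing_information))
    return records

records = eighth_embodiment_sequence(["near", "intermediate", "far"],
                                     "intermediate", 10.0)
print(len(records))  # -> 4 (three kept-position images plus the focused one)
```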
When moving-picture taking is performed, processes at S806 to S810 as depicted in a flowchart of
With the processes as described above, in the eighth embodiment, the main subject is focused via the main region in response to a picture-taking instruction to obtain an image. Therefore, an accurately focused image with high image quality can be obtained.
Next, a ninth embodiment of the imaging process according to the present invention is described.
When the imaging process is started, the CPU 40 moves the taking lens 12 to an initial position (S900). This state is schematically depicted in (a) of
Next, the CPU 40 detects the focusing state of the taking lens 12, and calculates the time required for focusing at each focal region (S906). As a result of this calculation, when the focusing time in the main region (the intermediate focal region 12b) is the shortest (YES at S908), the procedure proceeds to S909, where focusing control of the main region is performed to obtain an image (S910; imaging step), and the obtained image and the focusing information are recorded in association with each other in the memory card 54 (S912). On the other hand, when the focusing time in the main region is not the shortest (NO at S908), the taking lens 12 is controlled so that focusing is achieved via the region with the shortest required focusing time (S914; focusing state control step) to obtain an image (S916; imaging step), and the obtained image and the focusing information are recorded in association with each other in the memory card 54 (S918). In the example depicted in (a) to (f) of
Note in the ninth embodiment that the “focusing information” means imaging time information indicating a time from a picture-taking instruction to image obtainment, focusing degree information indicating a focusing degree of the obtained image, and imaging region information indicating the region of the taking lens 12 via which the image has been obtained. The same applies to the seventh and eighth embodiments.
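These three items map naturally onto a simple record. A sketch follows; the field names are illustrative, not taken from the original disclosure.

```python
from dataclasses import dataclass

@dataclass
class FocusingInformation:
    """Focusing information recorded with each image in the seventh to
    ninth embodiments. Field names are illustrative assumptions."""
    imaging_time: float     # time from picture-taking instruction to image obtainment
    focusing_degree: float  # focusing degree of the obtained image
    imaging_region: str     # region of the taking lens 12 used for the image
```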
As such, in the ninth embodiment, images in all of the focal regions are obtained with the lens position kept at the initial position in response to a picture-taking instruction; next, an image is obtained via the region with the shortest required focusing time; and furthermore, when the main region is not the region with the shortest required focusing time, an image is also obtained via the main region. With this, both an image focused at high speed and an accurately focused image with high image quality can be obtained. Also, since imaging is performed by focusing via one of the focal regions of the taking lens 12, which is a multifocal lens, an image focused to the maximum performance of the lens can be obtained.
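The decision at S908 thus amounts to a single comparison. The following sketch illustrates the resulting exposure order; the region names and times are illustrative assumptions.

```python
def plan_exposures(required_times, main_region):
    """S908: if the main region focuses fastest, one exposure via the
    main region suffices (S909-S912); otherwise shoot via the fastest
    region first (S914-S918), then also via the main region."""
    fastest = min(required_times, key=required_times.get)
    if fastest == main_region:
        return [main_region]
    return [fastest, main_region]

print(plan_exposures({"near": 0.1, "intermediate": 0.4, "far": 0.6},
                     main_region="intermediate"))
# -> ['near', 'intermediate']
```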
Note in the ninth embodiment that, in specifying the main subject, when a human face is detected, that person may be recognized as the main subject, or the subject having the largest area may be recognized as the main subject. Also, a subject corresponding to a place touched by the user on the liquid-crystal monitor 30 having a touch panel function may be recognized as the main subject. This main-subject specifying process is depicted in (a) to (c) of
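Since the embodiment presents these criteria as alternatives, any priority among them is an implementation choice. The following sketch assumes one possible priority (touch, then face, then largest area); the subject fields are illustrative.

```python
def specify_main_subject(subjects, touched_subject=None):
    """One possible priority: the subject touched on the liquid-crystal
    monitor 30, else a detected human face, else the subject with the
    largest area. The "is_face" and "area" fields are assumptions."""
    if touched_subject is not None:
        return touched_subject
    faces = [s for s in subjects if s["is_face"]]
    if faces:
        return max(faces, key=lambda s: s["area"])
    return max(subjects, key=lambda s: s["area"])

subjects = [{"name": "tree", "is_face": False, "area": 500},
            {"name": "person", "is_face": True, "area": 120}]
print(specify_main_subject(subjects)["name"])  # -> person
```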
Next, the case in which moving-picture taking is performed in the ninth embodiment is described.
In the flowchart of
In the flowchart of
In the seventh to ninth embodiments described above, the case is described in which the AF processing unit 42 performs a phase-difference AF process. However, the AF processing unit 42 may perform a contrast AF process instead. When performing the contrast AF process, the AF processing unit 42 integrates high-frequency components of image data in a predetermined focus region to calculate an AF evaluation value indicating a focusing state, and controls the focusing state of the taking lens 12 so that this AF evaluation value becomes maximum.
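The contrast AF process can be sketched as follows. The specific high-pass filter is not given in the text, so a simple horizontal pixel difference stands in for it, and the test pattern is purely synthetic.

```python
import numpy as np

def af_evaluation_value(region_pixels):
    """Integrate high-frequency components of the focus region; a plain
    horizontal difference stands in for the unspecified high-pass filter."""
    gray = np.asarray(region_pixels, dtype=np.float64)
    return np.abs(np.diff(gray, axis=1)).sum()

def contrast_af(capture_at, lens_positions):
    """Keep the lens position whose AF evaluation value is maximum."""
    return max(lens_positions, key=lambda p: af_evaluation_value(capture_at(p)))

# Synthetic demo: a checkerboard whose contrast peaks at lens position 3,
# which contrast_af duly selects.
def capture_at(p):
    sharpness = 1.0 / (1.0 + abs(p - 3))
    checkerboard = np.indices((8, 8)).sum(axis=0) % 2
    return checkerboard * sharpness

print(contrast_af(capture_at, lens_positions=range(7)))  # -> 3
```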
[Image Processing in the Present Invention]
Next, image processing in the present invention is described. In the present invention, as described in the seventh to ninth embodiments of the imaging process described above, the obtained image and the focusing information are recorded in association with each other. Therefore, with reference to this focusing information, a desired image can be selected and extracted. Execution of a specific process is controlled by the CPU 40 based on the program (image processing program) stored in the EEPROM 46.
Conceptual diagrams of an image extracting process in the present invention are depicted in (a) to (c) of
In (a) of
In the ninth embodiment of the above-described imaging process, as described at S912 and S918 of
For example, with reference to the imaging time information indicating the time from a picture-taking instruction to image obtainment, an image that was focused and captured with a short time lag can be extracted. An example of extracting an image with the shortest imaging time from an image group depicted in (a) of
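Expressed in code, and reusing the FocusingInformation record sketched above (itself an assumption), this extraction is a one-line query over the recorded pairs:

```python
def extract_fastest(records):
    """From (image, FocusingInformation) pairs, return the image whose
    imaging time from the picture-taking instruction is shortest."""
    image, info = min(records, key=lambda r: r[1].imaging_time)
    return image

records = [("img_a", FocusingInformation(0.30, 0.9, "intermediate")),
           ("img_b", FocusingInformation(0.05, 0.6, "near"))]
print(extract_fastest(records))  # -> img_b
```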
Note that when an image is extracted with reference to the focusing information described above, the information to be referred to may be changed over time, and the reference value of extraction may be changed over time. For example, when an image is extracted with reference to the focusing degree, instead of always extracting an image with a focusing degree equal to or more than a fixed threshold, an image with a low focusing degree may be extracted at first, and images with gradually higher focusing degrees may be extracted as time elapses. When the images extracted in this manner are replayed as a moving picture, the initially blurred subject gradually becomes clear, so that natural moving images can be obtained in situations where abrupt focusing would feel unnatural. Conversely, images may be selected and extracted so that the focused subject is gradually blurred.
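Changing which information is referred to over time can likewise be sketched, again reusing the FocusingInformation record from above. The half-way switchover point is an illustrative assumption.

```python
def pick_frame(records, t, duration):
    """Refer to different focusing information as time elapses: the
    imaging time early in the clip, the focusing degree later on."""
    if t < duration / 2:
        return min(records, key=lambda r: r[1].imaging_time)[0]
    return max(records, key=lambda r: r[1].focusing_degree)[0]
```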
This image selection and extraction may be performed by displaying an image before the process and an image after the process on the liquid-crystal monitor 30.
While the present invention has been described in the foregoing by using the embodiments, the technical scope of the present invention is not restricted to the scope described in the embodiments above. It is evident to a person skilled in the art that various changes or improvements can be added to the above-described embodiments, and it is evident from the description of the claims that such changed or improved embodiments can be included in the technical scope of the present invention.
It should be noted that the execution order of processes such as operations, procedures, steps, and stages in the devices, systems, programs, and methods described in the claims, the specification, and the drawings can be any order, unless the order is specifically and explicitly indicated by "before", "prior to", and the like, or unless an output of a previous process is used in a subsequent process. Also, regarding the operation flows in the claims, the specification, and the drawings, even if "first", "next", and the like are used for convenience of description, this does not mean that implementation in this order is imperative.