An image capturing device and a method for detecting image deformation thereof are provided. The method is for an image capturing device having a first image sensor and a second image sensor, and includes the following steps. A first image is captured through the first image sensor, and a second image is captured through the second image sensor. A deform detection is performed according to the first and second images, so as to obtain comparison information between the first and second images. Whether a coordinate parameter relationship between the first and second images is varied is determined according to the comparison information, in which the coordinate parameter relationship is associated with a spatial configuration relationship between the first and second image sensors.
|
1. A method for detecting image deformation, for an image capturing device having a storage unit, a first image sensor and a second image sensor, wherein the first image sensor and the second image sensor are disposed on the image capturing device with a spatial configuration relationship, wherein a current calibration parameter is stored in the storage unit and set corresponding to the spatial configuration relationship between the first image sensor and the second image sensor, the method comprising:
capturing a first image through the first image sensor, and capturing a second image through the second image sensor;
performing an image rectification on the first image and the second image through the current calibration parameter;
performing a deform detection according to the rectified first image and the rectified second image, so as to obtain relative spatial information between the rectified first image and the rectified second image;
determining whether the current calibration parameter stored in the storage unit complies with the spatial configuration relationship by determining whether a coordinate parameter relationship between the rectified first image and the rectified second image is varied when the relative spatial information does not comply with a predefined condition, wherein the coordinate parameter relationship is associated with the spatial configuration relationship between the first image sensor and the second image sensor; and
when the coordinate parameter relationship between the rectified first image and the rectified second image is varied, performing a dynamic warping procedure on the rectified first image and the rectified second image, so as to calibrate the coordinate parameter relationship between the rectified first image and the rectified second image.
7. An image capturing device, having a first image sensor and a second image sensor, wherein the first image sensor and the second image sensor are disposed on the image capturing device with a spatial configuration relationship, the image capturing device comprising:
a storage unit, recording a plurality of modules and a current calibration parameter corresponding to the spatial configuration relationship between the first image sensor and the second image sensor; and
a processing unit, coupled to the first image sensor, the second image sensor and the storage unit to access and execute the modules recorded in the storage unit, and the modules comprising:
a capturing module, capturing a first image through the first image sensor and capturing a second image through the second image sensor;
a deform detecting module, performing an image rectification on the first image and the second image through the current calibration parameter, and performing a deform detection according to the rectified first image and the rectified second image, so as to obtain relative spatial information between the rectified first image and the rectified second image;
a determining module, determining whether a coordinate parameter relationship between the rectified first image and the rectified second image is varied when the relative spatial information does not comply with a predefined condition, wherein the coordinate parameter relationship is associated with the spatial configuration relationship between the first image sensor and the second image sensor; and
a dynamic warping module, performing a dynamic warping procedure on the rectified first image and the rectified second image when the coordinate parameter relationship between the rectified first image and the rectified second image is varied, so as to calibrate the coordinate parameter relationship between the rectified first image and the rectified second image.
2. The method according to
respectively performing a feature detection on the rectified first image and the rectified second image to obtain a plurality of first features of the rectified first image and a plurality of second features of the rectified second image;
comparing coordinate positions of the first features with coordinate positions of the second features respectively corresponding to the first features, so as to obtain a plurality of displacements between the first features and the second features; and
calculating the displacements to obtain a relative rotational angle between the first image and the second image.
3. The method according to
determining whether the rectified first image and the rectified second image belong to a same image group according to image information of the first image and the second image.
4. The method according to
determining that the coordinate parameter relationship between the rectified first image and the rectified second image is varied when the relative rotational angle is greater than a threshold value.
5. The method according to
performing a three-dimensional depth estimation according to the rectified first image and the rectified second image to generate a corresponding depth information of the target object and obtaining a depth focus position associated with the target object according to the depth information;
obtaining an autofocus position associated with the target object through an autofocus procedure; and
comparing the depth focus position and the autofocus position to obtain a focal length difference.
6. The method according to
determining that the coordinate parameter relationship between the rectified first image and the rectified second image is varied when the focal length difference is greater than a threshold value.
8. The image capturing device according to
9. The image capturing device according to
10. The image capturing device according to
11. The image capturing device according to
12. The image capturing device according to
|
This application claims the priority benefit of Taiwan application serial no. 103103259, filed on Jan. 28, 2014. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
1. Field of the Invention
The invention generally relates to an image capturing device, and more particularly, to a method for detecting image deformation and an image capturing device using the same.
2. Description of Related Art
In current depth image sensing techniques, image capturing devices with binocular lenses are commonly employed to capture images corresponding to different view angles, so that three-dimensional (3D) depth information of a target object may be calculated from those images. Therefore, in order to precisely obtain the 3D depth information of the target object from the 2D images, the spatial configuration of the two lenses has to be specifically designed, and a delicate parameter calibration is a necessary procedure. To be more specific, when manufacturers fabricate image capturing devices with binocular lenses, the binocular lenses cannot be disposed exactly at the precise predetermined positions. As such, during the fabrication process, the manufacturers calibrate the disposed binocular lens modules in advance, and thus a set of factory-default calibrating parameters is obtained. Thereafter, the image capturing device may employ these factory-default calibrating parameters to calibrate the images captured by the binocular lenses while the user operates the device, so as to overcome the lack of precision in the fabrication process.
However, while the user operates or carries the image capturing device, the spatial positions of the lenses may vary (through displacement or rotation, for instance) as the image capturing device is squeezed, impacted or dropped. Once a lens is displaced or the device is deformed, the factory-default calibrating parameters no longer comply with the current circumstance, such that the image capturing device is incapable of obtaining correct depth information. For example, if a horizontal disequilibrium occurs between the binocular lenses of a stereoscopic image capturing device, the horizontal levels of the captured left frames and right frames are mismatched, and the 3D photographing effect becomes poor.
Accordingly, the invention is directed to an image capturing device and a method for detecting image deformation thereof, which are capable of instantly detecting whether deformation of the binocular images occurs during the use of the image capturing device, so as to further execute a real-time correction on the images.
The invention provides a method for detecting image deformation, adapted to an image capturing device having a first image sensor and a second image sensor. The method includes the following steps. A first image is captured through the first image sensor, and a second image is captured through the second image sensor. A deform detection is performed according to the first image and the second image, so as to obtain comparison information between the first and second images. Whether a coordinate parameter relationship between the first and second images is varied is determined according to the comparison information, in which the coordinate parameter relationship is associated with a spatial configuration relationship between the first and second image sensors.
In an embodiment of the invention, the step of performing the deform detection according to the first and second images to obtain the comparison information between the first and second images includes: respectively performing a feature detection on the first image and the second image to obtain a plurality of first features of the first image and a plurality of second features of the second image, comparing coordinate positions of the first features with coordinate positions of the second features respectively corresponding to the first features, so as to obtain a plurality of displacement information between the first features and the second features, and calculating the displacement information to obtain a relative rotational angle between the first image and the second image.
In an embodiment of the invention, before the step of respectively performing the feature detection on the first image and the second image to obtain the plurality of first features of the first image and the plurality of second features of the second image, the method further includes the following steps. Whether the first image and the second image belong to a same image group is determined according to image information of the first image and the second image. When the first image and the second image belong to the same image group, an image rectification is performed on the first image and the second image through a current calibration parameter.
In an embodiment of the invention, the step of determining whether the coordinate parameter relationship between the first and second images is varied according to the comparison information includes: determining that the coordinate parameter relationship between the first image and the second image is varied when the relative rotational angle is greater than a threshold value.
In an embodiment of the invention, the first image sensor captures the first image for a target object and the second image sensor captures the second image for the target object, and the step of performing the deform detection according to the first image and the second image to obtain the comparison information between the first image and the second image includes: performing a three-dimensional depth estimation according to the first image and the second image to generate corresponding depth information of the target object and obtaining a depth focus position associated with the target object according to the depth information, obtaining an autofocus position associated with the target object through an autofocus procedure, and comparing the depth focus position and the autofocus position to obtain a focal length difference.
In an embodiment of the invention, the step of determining whether the coordinate parameter relationship between the first and second images is varied according to the comparison information includes: determining that the coordinate parameter relationship between the first image and the second image is varied when the focal length difference is greater than a threshold value.
In an embodiment of the invention, the method for detecting the image deformation further includes the following steps. An image rectification is performed on the first and second images through a current calibration parameter. When the coordinate parameter relationship between the first and second images is varied, a dynamic warping procedure is performed on the first and second images, so as to calibrate the coordinate parameter relationship between the first and second images.
From another perspective, the invention provides an image capturing device. The image capturing device has a first image sensor and a second image sensor, and the image capturing device also includes a storage unit and a processing unit. The storage unit records a plurality of modules. The processing unit is coupled to the first image sensor, the second image sensor and the storage unit, so as to access and execute the modules recorded in the storage unit. The modules include a capturing module, a deform detecting module and a determining module. The capturing module captures a first image through the first image sensor and captures a second image through the second image sensor. The deform detecting module performs a deform detection according to the first and second images, so as to obtain a comparison information between the first and second images. The determining module determines whether a coordinate parameter relationship between the first and second images is varied according to the comparison information, in which the coordinate parameter relationship is associated with a spatial configuration relationship between the first and second image sensors.
In an embodiment of the invention, the storage unit further stores a current calibration parameter, and the modules further include a dynamic warping module. The dynamic warping module performs an image rectification on the first and second images through the current calibration parameter. When the coordinate parameter relationship between the first and second images is varied, the dynamic warping module performs a dynamic warping procedure on the first and second images, so as to calibrate the coordinate parameter relationship between the first and second images.
In light of the above, the method for detecting image deformation provided by the embodiments of the invention may instantly detect whether image deformation occurs. To be specific, the method may detect whether the calibration parameters currently utilized for calibrating the first and second images are still able to perform an accurate calibration. In this way, the image capturing device of the invention is capable of performing the deform detection correctly, automatically and instantly while the user operates the image capturing device. Accordingly, when the occurrence of deformation is detected, a further correction may be executed, so as to avoid continuously using calibration parameters that no longer comply with the current situation to perform the image rectification, and simultaneously to enhance the accuracy of the depth information calculation.
Several exemplary embodiments accompanied with figures are described in detail below to further describe the invention.
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
Reference will now be made in detail to the present preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like parts.
Before image capturing devices leave the factory, the spatial configuration relationship between the binocular lenses thereof has already been measured and adjusted precisely, and accordingly, a set of factory-default calibrating parameters is generated. The factory-default calibrating parameters are configured to calibrate the images captured by the respective lenses into a designed and fixed coordinate parameter relationship. In order to resolve the circumstance in which these factory-default calibrating parameters are no longer applicable because the binocular lenses have been displaced or rotated, the claimed invention instantly utilizes the left image and the right image to perform a deform detection, and accordingly determines whether the factory-default calibrating parameters or the current calibration parameter is no longer applicable. Below, embodiments of the invention are described as implementation examples.
The first image sensor 110 and the second image sensor 120 may each include a lens and photosensitive elements. The photosensitive elements may be charge coupled devices (CCD), complementary metal-oxide semiconductor (CMOS) elements or other elements, and the first image sensor 110 and the second image sensor 120 may also include apertures and so forth, although the invention is not limited thereto. Besides, the lenses of the first image sensor 110 and the second image sensor 120 may serve as a left lens and a right lens according to the positions at which the lenses of the first image sensor 110 and the second image sensor 120 are configured.
In the embodiment, the focusing unit 130 is coupled to the first image sensor 110, the second image sensor 120 and the processing unit 140, and is configured to control focal lengths of the first image sensor 110 and the second image sensor 120. In other words, the focusing unit 130 controls the lenses of the first image sensor 110 and the second image sensor 120 to move to an in-focus position. The focusing unit 130 may be a voice coil motor (VCM) or another type of motor that controls the step positions of the lenses, so as to change the focal lengths of the first image sensor 110 and the second image sensor 120.
The processing unit 140 may be, for instance, a central processing unit (CPU), a microprocessor, an application specific integrated circuit (ASIC), a programmable logic device (PLD) or another hardware device equipped with computing capability. The storage unit 150 may be a random access memory, a flash memory or another memory, configured to store data and a plurality of modules, and the processing unit 140 is coupled to the storage unit 150 and configured to execute these modules. The afore-described modules include a capturing module 151, a deform detecting module 152, a determining module 153 and a dynamic warping module 154, in which these modules may be computer programs that are loaded into the processing unit 140, so that the detection of image deformation can be executed.
Firstly, in step S201, the capturing module 151 captures a first image through the first image sensor 110, and captures a second image through the second image sensor 120. In other words, the first image and the second image may be regarded as the left image and the right image corresponding to various view angles, which are captured from a same scene. Moreover, the first image and the second image may be the live-view images captured under the preview state.
It should be noted that, since the first and second images are calibrated through the factory-default calibrating parameters, when the first and second images with different view angles are employed to calculate the depth information, the corresponding features on the first and second images are projected onto the same coordinate positions under a reference coordinate system after the coordinate conversion calculation. If the features corresponding to each other on the calibrated first and second images are not projected onto the same coordinate positions under the reference coordinate system, such an instance may be referred to as image deformation.
From another perspective, the factory-default calibrating parameters are applied to the left and right images so as to respectively perform the image rectification, so that the two real images come to have either a horizontal disparity or a vertical disparity (which is determined by the allocation of the lenses; for instance, a difference between the elevation angles of the binocular lenses may exist). The image rectification is performed through the factory-default calibrating parameters, such that the real images are converted into calibrated images corresponding to left and right lenses allocated on a same image capturing plane with only a horizontal or vertical position difference. That is to say, under the premise that the left and right lenses are disposed horizontally, each pixel on the rectified left and right images should exhibit only a horizontal position difference. At this time, if the capturing direction of the left or right lens is changed, the vertical positions of corresponding pixels on the rectified left and right images may still differ, which may also be referred to as image deformation.
In step S202, the deform detecting module 152 performs the deform detection according to the first and second images, so as to obtain comparison information between the first and second images. It should be noted that, in one embodiment, the first image and the second image may have already been calibrated through the factory-default calibrating parameters before the deform detection is performed. Specifically, through the deform detection of the embodiment, it may be known whether the left and right images calibrated through the default calibrating parameters are deformed or skewed, that is, whether the first and second images calibrated through the current calibration parameter have only the expected horizontal or vertical disparity, in which the comparison information may indicate the degree of skew or deformation of the left and right images.
In step S203, the determining module 153 determines whether the coordinate parameter relationship between the first and second images is varied according to the comparison information. If the determination in step S203 is negative, the determining module 153 determines that the coordinate parameter relationship between the first and second images is not varied in step S204. If the determination in step S203 is affirmative, the determining module 153 determines that the coordinate parameter relationship between the first and second images is varied in step S205. It should be noted that the coordinate parameter relationship is associated with the spatial configuration relationship between the first image sensor 110 and the second image sensor 120.
That is to say, when the spatial configuration relationship between the first image sensor 110 and the second image sensor 120 is changed, the coordinate parameter relationship between the first and second images is varied correspondingly. When the determining module 153 determines that the coordinate parameter relationship between the first and second images is varied, it indicates that the lens(es) of the first image sensor 110 and/or the second image sensor 120 may have been displaced or rotated, and that the factory-default calibrating parameters or the current calibration parameter no longer complies with the current situation. In other words, the set of factory-default calibrating parameters or the current calibration parameter is no longer able to accurately perform the image rectification on the first and second images. Based on the above, since the image deformation may be detected correctly and instantly, the image capturing device 100 may execute the corresponding correction, so as to avoid repeatedly using parameters that do not comply with the current situation to perform the image rectification.
The details of utilizing the first and second images to perform the deform detection are described in the following exemplary embodiments.
Firstly, in step S301, the capturing module 151 captures a first image through the first image sensor 110, and captures a second image through the second image sensor 120. In step S302, the deform detecting module 152 determines whether the first image and the second image belong to a same image group according to image information of the first and second images. The image information may be the photographing information recorded while capturing the images, such as the resolution, the focal length, the exposure time or the recording time of the image. Through the comparison of the image information, the deform detecting module 152 may find out whether the first image and the second image are two images captured at the same time from a same scene.
That is to say, in the embodiment, a single image group has two pictures, in which the images in the same image group are the two images captured by the left and right lenses at the same time from the same scene. In step S303, when the first and second images belong to the same image group, the deform detecting module 152 performs the image rectification on the first and second images through the current calibration parameter. The objective is to maintain the coordinate parameter relationship between the first and second images in a state that facilitates calculating the depth information.
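By way of illustration only, the grouping determination of step S302 might be sketched as follows. The `ImageInfo` fields and the 5 ms timestamp tolerance are assumptions made for this sketch, not limitations of the embodiment:

```python
from dataclasses import dataclass

@dataclass
class ImageInfo:
    """Photographing information recorded while capturing a frame."""
    width: int
    height: int
    timestamp_ms: int   # recording time
    exposure_ms: float

def same_image_group(a: ImageInfo, b: ImageInfo, max_skew_ms: int = 5) -> bool:
    # Two frames form one image group when they share a resolution and
    # were captured (almost) simultaneously from the same scene.
    return (a.width, a.height) == (b.width, b.height) \
        and abs(a.timestamp_ms - b.timestamp_ms) <= max_skew_ms
```

In practice the comparison could also include the focal length or exposure time mentioned above; resolution plus capture time is the minimal pairing test.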
In step S304, the deform detecting module 152 respectively performs a feature detection on the first image and the second image, so as to obtain a plurality of first features of the first image and a plurality of second features of the second image. The feature detection is configured to detect a plurality of features in an image, and may employ edge detection, corner detection or other feature detection algorithms, although the invention is not limited thereto.
In step S305, the deform detecting module 152 compares coordinate positions of the first features with coordinate positions of the second features respectively corresponding to the first features, so as to obtain a plurality of displacement information between the first features and the second features. For example, under the premise that the binocular lenses are horizontally disposed, the first and second images are in a horizontal and co-linear state. As such, the coordinate components of the first features and the corresponding second features along the vertical direction should be identical. Moreover, the coordinate components of each first feature and each corresponding second feature along the horizontal direction should have a gap, which serves as the horizontal disparity, namely the parallax in the horizontal direction.
Therefore, by comparing the coordinate positions of the features on the first and second images, whether the coordinate parameter relationship between the first and second images is varied may be determined. For example, under the premise that the binocular lenses are horizontally disposed, if the displacement of the coordinate components of the first features and the corresponding second features along the vertical direction is excessively large, it indicates that the coordinate parameter relationship between the first and second images is varied, and also that the spatial positions of the first image sensor 110 and the second image sensor 120 are varied. That is to say, whether the image deformation occurs may be determined by analyzing and statistically evaluating the plurality of displacement information between the first and second features.
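A minimal sketch of this statistical check, assuming the features have already been matched into ordered pairs and the lenses are horizontally disposed; the 2-pixel threshold is an illustrative value, not one specified by the embodiment:

```python
from statistics import median

def feature_displacements(first_feats, second_feats):
    # (dx, dy) per matched pair; dx is the expected horizontal disparity,
    # while dy should be near zero for well-calibrated horizontal lenses.
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(first_feats, second_feats)]

def vertical_misalignment(displacements):
    # The median keeps a few bad feature matches from dominating the statistic.
    return median(abs(dy) for _, dy in displacements)

def deformation_detected(first_feats, second_feats, threshold_px=2.0):
    disp = feature_displacements(first_feats, second_feats)
    return vertical_misalignment(disp) > threshold_px
```

The same structure extends to the vertical-lens arrangement by swapping the roles of the two displacement components.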
Besides, in step S306 of the embodiment, the deform detecting module 152 may further calculate the displacement information to obtain a relative rotational angle between the first image and the second image. In brief, the deform detecting module 152 may obtain a rotational quantity between the first and second images according to the displacement quantities corresponding to the various features and the coordinate positions of the features, in which the rotational quantity between the images may be caused by a rotation of the lens module, for instance. Hence, in step S307, the determining module 153 determines whether the relative rotational angle is greater than a threshold value. In step S308, when the relative rotational angle is greater than the threshold value, the determining module 153 determines that the coordinate parameter relationship between the first and second images is varied. In step S309, when the relative rotational angle is not greater than the threshold value, the determining module 153 determines that the coordinate parameter relationship between the first and second images is not varied.
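The text does not fix a particular formula for the relative rotational angle; one common way to recover it from matched feature pairs is a two-dimensional least-squares (Procrustes) fit, sketched below under that assumption. The 0.5-degree threshold is likewise illustrative:

```python
import math

def relative_rotation_deg(first_feats, second_feats):
    # 2-D Procrustes: centre both feature sets, then recover the angle that
    # best rotates the first set onto the second in the least-squares sense.
    n = len(first_feats)
    cx1 = sum(x for x, _ in first_feats) / n
    cy1 = sum(y for _, y in first_feats) / n
    cx2 = sum(x for x, _ in second_feats) / n
    cy2 = sum(y for _, y in second_feats) / n
    cross = dot = 0.0
    for (x1, y1), (x2, y2) in zip(first_feats, second_feats):
        ax, ay = x1 - cx1, y1 - cy1
        bx, by = x2 - cx2, y2 - cy2
        cross += ax * by - ay * bx   # rotational (cross) component
        dot += ax * bx + ay * by     # aligned (dot) component
    return math.degrees(math.atan2(cross, dot))

def relationship_varied(angle_deg, threshold_deg=0.5):
    # Steps S307-S309: compare the relative rotational angle to a threshold.
    return abs(angle_deg) > threshold_deg
```

Centering the point sets first removes any pure translation (the normal disparity), so only the rotational component survives into the angle estimate.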
It should be noted that, in one embodiment, the deform detecting module 152 may also determine whether the projected coordinate positions of the first features and the corresponding second features match the expected coordinate positions in the reference coordinate system, and calculate the projection displacement quantity between the two projection points (that is, the projection points of the first features and the second features). When the projection displacement quantity is greater than a threshold value, the deform detecting module 152 then analyzes the coordinate parameter relationship between the first and second images through the projection displacement quantities corresponding to the projection points.
Firstly, in step S401, the capturing module 151 captures a first image through the first image sensor 110, and captures a second image through the second image sensor 120, in which the first image sensor 110 captures the first image for a target object while the second image sensor 120 captures the second image for the target object. Specifically, in the embodiment, the target object may be selected according to a selection signal received by the focusing unit 130 from the user. For example, the user may select the target object in a touch manner or by moving the image capturing device toward a specific region, although the invention is not limited thereto. In other feasible embodiments, the target object may also be selected automatically, with its coordinate position obtained through an object detection procedure performed by the focusing unit 130.
In step S402, the deform detecting module 152 performs a three-dimensional depth estimation according to the first and second images to generate the corresponding depth information of the target object, and obtains a depth focus position associated with the target object according to the depth information. Specifically, the deform detecting module 152 may perform an image processing through a stereo vision technique, in order to obtain the 3D coordinate position of the target object in space and the depth information of each pixel in the images. Moreover, the step of obtaining the depth focus position associated with the target object according to the depth information may be implemented, for instance, by inquiring a depth table according to the depth information to obtain the focus position associated with the target object.
Therefore, by establishing in advance the corresponding relationship between the number of steps of the stepping motor (or the magnitude of current of the voice coil motor) and the clear depth of the target object, the number of steps or the current magnitude corresponding to the currently obtained depth information of the target object can be inquired, and accordingly the depth focus position associated with the target object may be obtained.
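The depth-table inquiry can be sketched as a nearest-entry lookup. The table contents, the nearest-neighbor policy, and the stepping-motor step counts below are all hypothetical; an actual device would use its own calibrated table and might interpolate between entries instead:

```python
import bisect

# Hypothetical depth table built at calibration time: each entry maps a
# clear (in-focus) object depth in mm to a stepping-motor step count.
DEPTH_TABLE = [(300, 240), (500, 180), (1000, 120), (2000, 80), (5000, 40)]

def depth_focus_position(depth_mm, table=DEPTH_TABLE):
    """Return the motor step count whose table depth is nearest to depth_mm."""
    depths = [d for d, _ in table]
    i = bisect.bisect_left(depths, depth_mm)
    if i == 0:
        return table[0][1]          # shallower than the whole table
    if i == len(table):
        return table[-1][1]         # deeper than the whole table
    lo_d, lo_s = table[i - 1]
    hi_d, hi_s = table[i]
    return lo_s if depth_mm - lo_d <= hi_d - depth_mm else hi_s
```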
Subsequently, in step S403, the deform detecting module 152 obtains an autofocus position associated with the target object through an autofocus procedure. Specifically, in the process of executing the autofocus procedure, the lenses of the first image sensor 110 and the second image sensor 120 may be respectively adjusted to the required focus positions by having the focusing unit 130 automatically control the lens modules to move over a wide range, so as to obtain the autofocus position associated with the target object. For instance, the focusing unit 130 may employ the hill-climbing method of the autofocus technique to obtain the autofocus position associated with the target object, although the invention is not limited thereto.
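The hill-climbing method mentioned above can be sketched as a coarse single-pass search: step the lens while a sharpness score increases and stop at the first decrease. The step size, search range, and `sharpness_at` callback are assumptions for illustration; practical implementations add a fine pass around the peak and a real contrast metric (e.g., gradient energy of the image):

```python
def hill_climb_autofocus(sharpness_at, start=0, stop=100, step=5):
    """Move the lens in coarse steps while sharpness increases; stop at the
    first decrease and return the last peak position (simplified, no fine pass)."""
    best_pos, best_score = start, sharpness_at(start)
    pos = start + step
    while pos <= stop:
        score = sharpness_at(pos)
        if score < best_score:
            break  # sharpness dropped: the peak has been passed
        best_pos, best_score = pos, score
        pos += step
    return best_pos
```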
In step S404, the deform detecting module 152 compares the depth focus position and the autofocus position to obtain a focal length difference. Generally, the image capturing device 100 is able to obtain the desired depth information when the image capturing device 100 has not suffered an impact. Once the image capturing device 100 suffers an impact, the spatial configuration relationship between the first image sensor 110 and the second image sensor 120 may be varied. The image capturing device 100 is then incapable of obtaining the desired depth information according to the previous default calibration parameters; that is to say, the image capturing device 100 is incapable of estimating the correct depth focus position through the depth information and the depth table recorded beforehand. As such, a difference is generated between the depth focus position and the autofocus position obtained through the autofocus procedure.
Hence, in step S405, the determining module 153 determines whether the focal length difference is greater than a threshold value. In step S406, when the focal length difference is greater than the threshold value, the determining module 153 determines that the coordinate parameter relationship between the first and second images is varied. In step S407, when the focal length difference is not greater than the threshold value, the determining module 153 determines that the coordinate parameter relationship between the first and second images is not varied.
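Steps S404 through S407 reduce to a single threshold comparison; a minimal sketch, with the positions expressed in motor steps and the threshold value left as a device-specific assumption:

```python
def detect_deformation(depth_focus_pos, autofocus_pos, threshold):
    """Steps S404-S407: the coordinate parameter relationship is deemed
    varied when |depth focus position - autofocus position| exceeds the
    threshold (both positions in motor steps; threshold is device-specific)."""
    focal_length_difference = abs(depth_focus_pos - autofocus_pos)
    return focal_length_difference > threshold
```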
It should be noted that, when the image capturing device 100 determines that the coordinate parameter relationship between the first and second images is varied, it indicates that the current calibration parameter is no longer able to perform accurate image rectification on the images. Therefore, the depth estimation engine of the image capturing device 100 is no longer capable of generating the desired depth information through the images captured by the binocular lenses. Based on the above, in one embodiment, before the image capturing device 100 updates or corrects the current calibration parameter or the factory-default calibration parameters, the method for detecting image deformation may further include performing a dynamic warping procedure on the present first and second images, so as to further correct the coordinate parameter relationship between the first and second images.
In order to illustrate how the image capturing device corrects the present images, the following steps are described.
Firstly, in step S501, the dynamic warping module 154 performs an image rectification on the first image and the second image through the current calibration parameter. Specifically, before the image capturing device 100 calculates the depth information of the images through a depth estimator, the current calibration parameter may be employed to perform the image rectification on the first image and the second image, so as to calibrate the first and second images to the desired spatial corresponding relationship, thereby obtaining the correct depth information. In step S502, the dynamic warping module 154 determines whether the coordinate parameter relationship between the first and second images is varied. Specifically, the dynamic warping module 154 may determine whether the coordinate parameter relationship between the first and second images is varied according to the determination result generated by the determining module 153, in which the details of determining whether the images are deformed are explicitly illustrated in the foregoing embodiments.
Hence, in step S503, when the coordinate parameter relationship between the first and second images is varied, the dynamic warping module 154 performs the dynamic warping procedure on the first and second images, so as to calibrate the coordinate parameter relationship between the first and second images. Specifically, the objective of the dynamic warping procedure is to calibrate the first and second images to the desired spatial corresponding relationship. For example, the first and second images are calibrated to a horizontal and co-linear state.
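One simple way to restore the horizontal, co-linear state in the pure vertical-translation case is to estimate the median vertical residual between matched features and shift one image by that amount. This is a deliberately reduced sketch of the dynamic warping idea (a full procedure would also handle rotation and scale); the function names and the median estimator are assumptions:

```python
import numpy as np

def vertical_alignment_shift(first_pts, second_pts):
    """Median vertical residual between matched features; warping the second
    image by this shift restores row-aligned (co-linear) epipolar geometry
    in the pure vertical-translation case."""
    first = np.asarray(first_pts, dtype=float)
    second = np.asarray(second_pts, dtype=float)
    return float(np.median(first[:, 1] - second[:, 1]))

def warp_points_vertically(pts, dy):
    """Apply the vertical shift to a set of (x, y) points."""
    pts = np.asarray(pts, dtype=float).copy()
    pts[:, 1] += dy
    return pts
```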
This process is known from the embodiment depicted above.
Moreover, in the embodiment, the dynamic warping module 154 may adjust the second image based on the first image, and similarly, may adjust the first image based on the second image, so as to adjust the first and second images to the desired spatial corresponding relationship. Additionally, in one embodiment, the dynamic warping module 154 may also adjust the first and second images simultaneously, so as to adjust the first and second images to the desired spatial corresponding relationship.
It should be noted that, in the embodiment, the image rectification has been performed on the first and second images through the current calibration parameter. As such, even if the current calibration parameter is unable to calibrate the first and second images to the desired state, the degree of deformation between the first and second images has been decreased, so that the computation for seeking the preferable parameter adjustment information may also be reduced. For example, if the image rectification is not performed on the first and second images through the current calibration parameter, the dynamic warping module 154 may have to look for a preferable rotational adjustment angle within a range of +30 degrees to −30 degrees. If the image rectification is performed on the first and second images through the current calibration parameter, the dynamic warping module 154 may look for a preferable rotational adjustment angle within a range of +5 degrees to −5 degrees.
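The reduced search range can be illustrated with a grid search over the narrowed ±5 degree window, scoring each candidate angle by the summed vertical disparity of matched features. The grid step, cost function, and rotation about the origin are simplifying assumptions; the patent does not prescribe this particular search:

```python
import numpy as np

def rotate_points(pts, angle_deg):
    """Rotate 2-D points about the origin by angle_deg (counter-clockwise)."""
    a = np.radians(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return np.asarray(pts, dtype=float) @ rot.T

def best_rotation(first_pts, second_pts, lo=-5.0, hi=5.0, step=0.5):
    """Grid-search the angle in [lo, hi] that minimizes the summed vertical
    disparity between matched features after rotating the second set."""
    first = np.asarray(first_pts, dtype=float)
    best_angle, best_cost = 0.0, float("inf")
    for angle in np.arange(lo, hi + step / 2, step):
        cost = np.abs(first[:, 1] - rotate_points(second_pts, angle)[:, 1]).sum()
        if cost < best_cost:
            best_angle, best_cost = float(angle), cost
    return best_angle
```

Narrowing `lo`/`hi` from ±30 to ±5 degrees cuts the number of candidate angles sixfold at the same step size, which is the computational saving the paragraph describes.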
For example, an illustration of such an adjustment is depicted in the accompanying figure.
Then, in step S504, the dynamic warping module 154 obtains the depth information through the first and second images. In this way, during the period of accumulating the complete information to correct the current calibration parameter, the dynamic warping module 154 may perform an adaptive fine adjustment on a single group of images, such that the adjusted first and second images may be used to calculate the depth information.
In summary, the invention is able to instantly detect whether image deformation has occurred, so that the image capturing device can perform the corresponding correction or calibration measure. That is to say, in the process of the user operating the image capturing device, the image capturing device of the invention is capable of detecting image deformation instantly and automatically, so as to further determine whether the binocular lenses have been rotated or displaced. When image deformation is detected, the image capturing device of the invention may perform further correction and improvement, in order to avoid continuously performing the image rectification with calibration parameters that no longer comply with the current situation. Besides, in one embodiment of the invention, when image deformation is detected, the image capturing device of the invention may also perform the dynamic warping procedure on the single group of images currently captured, and thus the image capturing device is still able to obtain the correct depth information to carry out the next step of the application, so as to ensure the image capturing quality.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present invention without departing from the scope or spirit of the invention. In view of the foregoing, it is intended that the present invention cover modifications and variations of this invention provided they fall within the scope of the following claims and their equivalents.
Inventors: Chou, Hong-Long; Chuang, Che-Lun; Wang, Yu-Chih; Wang, Yao-Sheng
Assignee: Altek Semiconductor Corp. (assignments executed Jun. 9, 2014; recorded at Reel/Frame 033123/0180; application filed Jun. 12, 2014)