An optical-information-reading apparatus includes an imaging optical system having a varifocal lens that adjusts a focal position, a solid-state image sensing device that images a code symbol, a ranging laser that measures a distance to the code symbol, and a decoder that calculates the distance to the code symbol using the ranging laser. The apparatus controls the varifocal lens based on calculated distance information.
1. An optical-information-reading apparatus comprising:
an imaging optical system including a varifocal lens which adjusts a focal position;
an image sensor that images an imaging target of which the imaging optical system performs an image formation;
a distance-measuring system that measures the distance to the imaging target; and
a control unit that is operable to:
(i) calculate the distance to the imaging target using the distance-measuring system,
(ii) control the imaging optical system based on calculated distance information to perform the image formation of the imaging target on the image sensor, and
(iii) control the image sensor to image the imaging target, wherein the control unit calculates the distance to the imaging target using a distance measurement parameter that is characteristic of the apparatus and which has previously been determined and stored in a memory that is accessible to the control unit;
wherein the distance-measuring system includes a light-emitter for irradiating the imaging target with light;
wherein the distance measurement parameter comprises location information of the distance-measuring system in relation to an optical axis of the imaging optical system, the optical axis passing through an imaging surface of the image sensor; and
wherein the control unit calculates, by the distance-measuring system, a location where irradiating light reflected by the imaging target is incident on the image sensor and the distance to the imaging target using the location information.
9. An optical-information-reading apparatus comprising:
an imaging optical system including a varifocal lens which adjusts a focal position;
an image sensor that images an imaging target of which the imaging optical system performs an image formation;
a distance-measuring system that measures the distance to the imaging target; and
a control unit that is operable to:
(i) calculate the distance to the imaging target using the distance-measuring system,
(ii) control the imaging optical system based on calculated distance information to perform the image formation of the imaging target on the image sensor, and
(iii) control the image sensor to image the imaging target, wherein the control unit calculates the distance to the imaging target using a distance measurement parameter that is characteristic of the apparatus and which has previously been determined and stored in a memory that is accessible to the control unit;
wherein the control unit determines a deterioration of the imaging optical system based on the calculated distance information to the imaging target and control information of the imaging optical system that performs the image formation of the imaging target on the image sensor;
wherein the varifocal lens is a liquid lens comprising first and second solutions:
(a) that have different light refractive indexes,
(b) that are not mixed,
(c) that have a boundary surface therebetween,
(d) that are sealed in a container and
(e) to which electric voltage is applied to control a shape of the boundary surface;
wherein the imaging optical system includes a temperature detector that detects the temperature of the liquid lens; and
wherein the control unit controls the liquid lens based on the temperature detected by the temperature detector.
2. The optical-information-reading apparatus according to
the distance measurement parameter is region information for specifying a region in which the reflected irradiating light is incident on the image sensor, and
the control unit images light that is incident on the region specified by the region information, searches the region specified by the region information, and calculates a location of the light that is incident on the image sensor.
3. The optical-information-reading apparatus according to
4. The optical-information-reading apparatus according to
5. The optical-information-reading apparatus according to
(1) the varifocal lens is a liquid lens comprising first and second solutions:
(a) that have different light refractive indexes,
(b) that are not mixed,
(c) that have a boundary surface therebetween,
(d) that are sealed in a container and
(e) to which electric voltage is applied to control a shape of the boundary surface,
(2) the imaging optical system includes a temperature detector that detects the temperature of the liquid lens, and
(3) the control unit controls the liquid lens based on the temperature detected by the temperature detector.
6. The optical-information-reading apparatus according to
7. The optical-information-reading apparatus according to
(1) the varifocal lens is a liquid lens comprising first and second solutions:
(a) that have different light refractive indexes,
(b) that are not mixed,
(c) that have a boundary surface therebetween,
(d) that are sealed in a container and
(e) to which electric voltage is applied to control a shape of the boundary surface,
(2) the imaging optical system includes a temperature detector that detects the temperature of the liquid lens, and
(3) the control unit controls the liquid lens based on the temperature detected by the temperature detector.
8. The optical-information-reading apparatus according to
10. The optical-information-reading apparatus according to
This patent application is a continuation of International Application No. PCT/JP2010/069432 filed Nov. 1, 2010 and designating the United States of America. This international application was published in Japanese on May 5, 2011 under No. WO 2011/02770 A1. This international application is hereby incorporated herein by reference in its entirety.
The present invention relates to an optical-information-reading apparatus which reads a code symbol such as a barcode and a two-dimensional code.
The bar code, which is one-dimensional code information, is well known for use in merchandise management, inventory management, and the like. Further, the two-dimensional code is known as a code having higher information density. A known processing method images the two-dimensional code with a solid-state image sensing device, such as a CMOS image sensor or a CCD image sensor, then performs various kinds of processing on the image and binarizes and decodes it.
The CMOS image sensor used in such a code-reading apparatus is functionally no different from the one used in a digital camera or the like, so the apparatus is also required to function as a camera that takes ordinary photographs of an object or a scene. In inventory management, for example, it is used to image a target object and the location in which the object is stored, and the images are stored in a database together with the code information.
A small camera using the above-mentioned CMOS image sensor is built into mobile phones. Mobile phones have a camera function for taking photographs of an object or a scene like an ordinary digital camera, and most also include a barcode/two-dimensional-code scanner and an optical character reader (OCR).
In the above-mentioned merchandise management, inventory management, or the like, code symbols applied to objects must be scanned one object after another. An autofocus function is therefore desirable, and both the autofocus function and the imaging process must be fast.
Generally, a method in wide use arranges a lens so that it can move along an optical axis, calculates the distance at which it focuses on the code symbol, and moves the lens to a position suitable for that distance (see, for example, Patent Document 1 below). A simple autofocus is also in wide use in which two lens positions, for short and long range, are stored in a memory in addition to a fixed lens position, and when imaging fails, the lens moves in the direction of the short range or the long range (see, for example, Patent Document 2 below).
Further, an optical reading apparatus has been equipped with technology such that optical wavefront modulation elements combining a plurality of lenses are formed to have a focusing function, and focus is adjusted based on the distance to a reading target object (see, for example, Patent Document 3 below).
Additionally, technology has been disclosed such that an image lens, a prism, a lens for long range and its image sensor, and a lens for short range and its image sensor are combined to form optics, and focal distance is adjusted without moving the lenses (see, for example, Patent Document 4 below).
An optical-information-reading apparatus such as a code scanner is now required to be downsized or integrated. In particular, when the optical-information-reading apparatus is incorporated into part of an apparatus such as a mobile phone, a large amount of space cannot be devoted to it. The lens optics require the largest space in the optical-information-reading apparatus. In Patent Documents 1 and 2, space must be reserved to allow the lens to move, and this space cannot be reduced.
In Patent Document 3, at least five lenses are required, and space for arranging the lenses is also required. The optical wavefront modulation elements are controlled by complicated software requiring large computational complexity. Because the optics require high precision, it is difficult to construct a production line for them, which increases the costs significantly.
Further, Patent Document 4 has a configuration in which at least two sets of normal optics are provided, so that the case itself becomes large, which increases the costs. Although a lens with a small effective aperture is used in order to design a small optical member, such a small lens makes focusing and/or image formation difficult, which requires a correction function and a correction member to correct the inconvenience. In conclusion, downsizing cannot be accomplished, and the costs are increased.
In an optical-information-reading apparatus which reads a code symbol, when its focus has low precision and cell patterns are complicated in a small area, as in a micro two-dimensional code in particular, black and white in the cells cannot be recognized, which results in failure to read them or causes decoding to take a long time. Accordingly, fine adjustments to the location of the lens, or the like, must be made after assembly in order to accomplish an autofocus function which calculates the distance at which the apparatus focuses on the code symbol; this takes a lot of work and much time and increases production costs.
Thus, the present invention solves such problems and has as an object to provide an optical-information-reading apparatus which has a simple configuration, images the code symbol quickly and with high precision, is itself compact, and can be manufactured at low cost.
In order to solve the above-mentioned problems, embodiments of the present invention provide an optical-information-reading apparatus containing an imaging optical system including a varifocal lens which adjusts a focal position, imaging means that images an imaging target of which the imaging optical system performs an image formation, distance-measuring means that measures a distance to the imaging target, and controlling means. The controlling means calculates the distance to the imaging target using the distance-measuring means, controls the imaging optical system based on the calculated distance information to perform the image formation of the imaging target on the imaging means, and controls the imaging means to image the imaging target. The controlling means calculates the distance to the imaging target using a distance measurement parameter which is specific to the apparatus; the distance measurement parameter to be used for the calculation of the distance to the imaging target is measured in advance, and the measured distance measurement parameter which is specific to the apparatus is stored.
In an optical-information-reading apparatus embodying the present invention, the distance measurement parameter to be used for the calculation of the distance to the imaging target is measured in advance, the measured distance measurement parameter which is specific to the apparatus is stored, and in the reading of an imaging target such as a code symbol, the distance to the imaging target is calculated using the distance measurement parameter which is specific to the apparatus, which enables autofocus to be carried out.
Making use of the present invention, it is possible to improve the precision of the distance measurement without performing any fine adjustment of the location of the lens, or the like, after assembly, which shortens the time required for autofocus, imaging and decoding, thereby realizing fast imaging of the code symbol. Further, when the imaging optical system deteriorates because of long-term deterioration or the like, it is possible to perform control that still allows the image formation of the imaging target to be performed. In particular, in a configuration where a liquid lens is used as the varifocal lens, it is possible to compensate for deterioration of the liquid lens based on temperature or the like. Even after long use, fast imaging of the code symbol can be realized, which enables a user of the apparatus to image a plurality of code symbols in succession without any stress.
The following will describe embodiments of an optical-information-reading apparatus according to the present invention with reference to drawings.
The camera module 10 is provided with an imaging optical system 11 having an autofocus function for adjusting a focal position and a solid-state image sensing device 12 that images the code symbol 100, on which the imaging optical system 11 performs the image formation. A ranging laser 13 measures the distance from the solid-state image sensing device 12 to the code symbol 100.
The imaging optical system 11 is provided with a main lens 2 and a varifocal lens 3. The main lens 2 is a fixed focus lens composed of a single optical lens or plural optical lenses. The varifocal lens 3 is positioned rearward of the main lens 2 and serves to focus light transmitted through the main lens 2 on the solid-state image sensing device 12.
The varifocal lens 3 may be provided with a mechanism to move the single or plural optical lens(es) mechanically along an optical axis to implement an autofocus function. On the other hand, the varifocal lens 3 may be composed of a liquid lens 30 to implement an autofocus function without using any mechanism to move the lens(es) mechanically.
When electricity flows through the electrodes 34a and 34b and electric voltage is applied on the solution 31, the liquid lens 30 can change the shape of the boundary surface 35 between the solution 31 and the oil 32. Such a phenomenon is referred to as “electrowetting phenomenon”. By changing the curvature of the boundary surface 35 between the solution 31 and the oil 32, it is possible to move a focal position to adjust the focus thereof.
Further, the varifocal lens 3 may be a liquid crystal lens which can implement an autofocus function without using any mechanism to move the lens mechanically like the liquid lens 30.
Referring back to
The optical-information-reading apparatus 1A controls the imaging optical system 11 to measure the distance to the code symbol 100 so that the focal position is set to the measured distance. Accordingly, the optical-information-reading apparatus 1A is provided with a distance-measuring portion using a laser beam in the camera module 10.
Two methods of laser distance measurement are well known. The first is a pulse method, in which the distance is obtained by measuring the delay between the start of a laser pulse and the return of its reflection. The other is a parallax method, in which the imaging target is irradiated with light to form spots on it, the locations of the spots on the imaging target are measured, and the distance to the imaging target is obtained from the measured spot locations.
Since the distance measurement is performed in this embodiment using the parallax method, the optical-information-reading apparatus 1A is provided with ranging lasers 13 as the distance-measuring means. Each of the ranging lasers 13 is configured so that the code symbol 100 is irradiated, with the imaging optical system 11 performing the image formation on the solid-state image sensing device 12, and the light reflected by the code symbol 100 is incident on the solid-state image sensing device 12.
The locations of the spots that the laser beams from the ranging lasers 13 form on the code symbol 100 vary based on the distance to the code symbol 100. The spots appear as locations of the reflected light that is incident on the solid-state image sensing device 12.
Accordingly, the optical-information-reading apparatus 1A measures the locations of the light, which is irradiated from the ranging laser 13 and reflected by the code symbol 100 and imaged on the solid-state image sensing device 12, so that the distance to the code symbol 100 can be obtained.
In the optical-information-reading apparatus 1A in which the varifocal lens 3 is composed of the liquid lens 30, the relationship between the electric voltage to be applied to the liquid lens 30 and the focal distance is previously determined. By applying the electric voltage corresponding to the distance to the code symbol 100 on the liquid lens 30, the reflected light from the code symbol 100 which is remote therefrom by any optional distance is focused on the solid-state image sensing device 12 by using the imaging optical system 11 to perform the imaging. In the optical-information-reading apparatus 1A in which the varifocal lens 3 is composed of the liquid crystal lens 36, similarly, a relationship between the electric voltage to be applied to the liquid crystal lens 36 and the focal distance is previously determined. By applying the electric voltage corresponding to the distance to the code symbol 100 on the liquid crystal lens 36, the reflected light from the code symbol 100 which is remote therefrom by any optional distance is focused on the solid-state image sensing device 12 by using the imaging optical system 11 to perform the imaging.
Here, since the curvature of the boundary surface of the liquid lens 30 varies with temperature even when the same electric voltage is applied, temperature compensation is required in the optical-information-reading apparatus 1A provided with the imaging optical system 11 composed of the liquid lens 30. Thus, the optical-information-reading apparatus 1A is provided, in the camera module 10, with a thermistor 14 as temperature-detecting means for detecting the temperature of the liquid lens 30 at a position near the liquid lens 30.
The optical-information-reading apparatus 1A compensates the focal position of the liquid lens 30, derived from the distance information on the distance to the code symbol 100 obtained using the ranging laser 13, by using the temperature information detected by the thermistor 14, to realize correct autofocus. Similarly, with the liquid crystal lens 36, correct autofocus can be realized using temperature compensation.
It is to be noted that the optical-information-reading apparatus 1A may be provided, in the camera module 10, with a lighting LED 15 for irradiating the code symbol 100 with guide light indicating the range in which the code symbol 100 is readable. The lighting LED 15, however, is provided as appropriate and may be omitted depending on the shape of the apparatus or its purpose.
The decoder 200 is provided with an application specific integrated circuit (ASIC) 210 as controlling means to control the focus adjustment and the imaging to be performed in the camera module 10 and to control the decoding of the signal output from the camera module 10 and the data transfer. In the ASIC 210, various kinds of data are written and/or read into/from SDRAM 220, FROM 230, and the like. Although the optical-information-reading apparatus 1A is a scanner which can read barcode and two-dimensional code, it is possible to read any characters if it is equipped with OCR software. The ASIC 210 may be a combination of CPU and LSI such as a field programmable gate array (FPGA).
The optical-information-reading apparatus 1A may obtain electric voltage information according to the measured distance to the code symbol 100 by storing in ASIC 210 a table of the relationship between the distance to the code symbol 100 and the electric voltage to be applied to the liquid lens 30. The distance to the code symbol 100 can be obtained by measuring the location of the light irradiated from the ranging laser 13 and reflected by the code symbol 100 to the solid-state image sensing device 12.
Further, a compensation table based on the periphery temperature of the liquid lens 30 is stored in the ASIC 210. The liquid lens 30 requires a waiting time before an image can be captured after the electric voltage is applied to it, and this waiting time varies based on the temperature. The waiting time is generally short at high temperature; for example, the waiting time at 60° C. is much shorter than the waiting time at 25° C. Accordingly, a table of the relationship between the periphery temperature of the liquid lens 30 and the imaging waiting time is stored in the ASIC 210.
Thus, by referring to the table of the relationship between the distance to the code symbol 100 and the electric voltage to be applied on the liquid lens 30, the compensation table relating to the periphery temperature of the liquid lens 30 and the table on the relationship between the periphery temperature of the liquid lens 30 and the period of imaging waiting time, it is possible to realize an optimal focus adjustment and capture a clear image. In the liquid crystal lens 36, similarly, by storing a table of the relationship between the distance to the code symbol 100 and the electric voltage to be applied to the liquid crystal lens 36 in the ASIC 210 and referring to these tables, it is also possible to realize an optimal focus adjustment and capture a clear image.
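As a rough illustration of this kind of table-driven control, the following Python sketch looks up the drive voltage and the imaging waiting time and applies a simple temperature correction; the table contents, the names (voltage_table, wait_table, drive_liquid_lens) and the linear-interpolation and compensation models are assumptions for illustration, not values taken from the apparatus described here.

```python
import bisect

# Hypothetical lookup tables of the kind described above (values are illustrative only):
# distance (mm) -> drive voltage (V), and periphery temperature (deg C) -> imaging wait time (ms)
voltage_table = [(50, 58.0), (100, 52.0), (200, 47.5), (400, 44.0)]
wait_table = [(0, 40), (25, 25), (60, 10)]

def interpolate(table, key):
    """Linear interpolation in a sorted (key, value) table, clamped at the ends."""
    keys = [k for k, _ in table]
    if key <= keys[0]:
        return table[0][1]
    if key >= keys[-1]:
        return table[-1][1]
    i = bisect.bisect_right(keys, key)
    (k0, v0), (k1, v1) = table[i - 1], table[i]
    return v0 + (v1 - v0) * (key - k0) / (k1 - k0)

def drive_liquid_lens(distance_mm, temperature_c, temp_coeff=0.05):
    """Return (voltage, wait_ms) for a measured distance and lens temperature.

    temp_coeff stands in for the temperature-compensation table: here the voltage
    is simply shifted in proportion to the deviation from 25 deg C (assumed model).
    """
    voltage = interpolate(voltage_table, distance_mm)
    voltage += temp_coeff * (temperature_c - 25.0)     # assumed compensation model
    wait_ms = interpolate(wait_table, temperature_c)   # waiting time before imaging
    return voltage, wait_ms

print(drive_liquid_lens(150.0, 40.0))
```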
In the distance measurement using the parallax method, a distance "x" is obtained between a subject surface M including the code symbol 100, which is the imaging target, and a plane O including the origin of the imaging optical system 11. The imaging optical system 11 has the main lens 2, which is a fixed focus lens, and the varifocal lens 3 composed of, for example, the liquid lens 30. The distance x is obtained according to the following equation (1).
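Equation (1) can plausibly be reconstructed from the triangulation geometry and the variable definitions that follow; the form below, including its sign conventions, is an assumption rather than a reproduction of the original equation.

```latex
% Plausible form of equation (1): parallax triangulation (sign conventions assumed)
x = \frac{a}{\tan\theta_2 + \dfrac{r}{R}\tan\theta_1} \qquad (1)
```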
where "x" is the distance between the subject surface and the origin plane, "a" is the distance between the ranging laser and the optical axis of the imaging optical system in the plane including the origin of the optical axis of the imaging optical system, "r" is the location at which the reflected laser beam is incident on the imaging surface, "R" is the distance from the center of the solid-state image sensing device to the end point subtended by the half angle of view, θ1 is the half angle of view, and θ2 is the angle formed by the ranging laser and the plane including the origin of the optical axis of the imaging optical system.
In equation (1), the distance "a" between the ranging laser 13 and the optical axis of the imaging optical system 11 in the plane O including the origin of the optical axis of the imaging optical system 11, the half angle of view θ1, the angle θ2 formed by the ranging laser 13 and the plane O including the origin of the optical axis of the imaging optical system 11, and the distance R from the center of the solid-state image sensing device 12 to the end point subtended by the half angle of view are set as design values of the optical-information-reading apparatus 1A. The distance "x" is obtained from the location "r" at which the laser spot Sp is reflected onto the imaging surface E of the solid-state image sensing device 12.
By inverting equation (1), the reciprocal x⁻¹ of the distance "x" is obtained according to the following equation (2), and the precision of the distance measurement is given by the following equation (3).
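Assuming the form of equation (1) sketched above, equations (2) and (3) would read as follows; again, these are reconstructions, not the original expressions.

```latex
% Possible forms of equations (2) and (3), following from the assumed equation (1)
x^{-1} = \frac{\tan\theta_2}{a} + \frac{\tan\theta_1}{aR}\,r \qquad (2)

\frac{d\,x^{-1}}{dr} = \frac{\tan\theta_1}{aR} \qquad (3)
```

On this reading, the sensitivity of x⁻¹ to the spot location r depends on a, R and θ1 but not on θ2, which matches the observation in the following paragraph.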
According to equation (3), it is seen that the precision of the distance measurement is fixed by the distance "a" (from the optical axis of the imaging optical system 11 to the ranging laser 13), and that the angle θ2 (formed by the optical axis of the imaging optical system 11 and the ranging laser 13) has no influence on the precision of the distance measurement.
Accordingly, taking the precision of the distance measurement into consideration, the distance "a" (between the ranging laser 13 and the optical axis of the imaging optical system 11) is set to a design value given by a mechanical dimension in order to avoid any error problem at the design stage. On the other hand, the angle θ2 formed by the ranging laser 13 and the optical axis does not need to be set to a design value.
However, since the distance "a" of the ranging laser 13 from the optical axis and the angle θ2 formed by the ranging laser 13 and the optical axis are required in the calculation of the distance "x" to the code symbol 100, the distance "a" and the angle θ2 are given a mechanical tolerance which prevents any error problem from occurring.
Accordingly, by measuring the distance “a” of the ranging laser 13 from the optical axis and the angle θ2 formed by the ranging laser 13 and the optical axis as distance measurement parameters and storing them on a nonvolatile memory in the ASIC 210 shown in
The following will describe a method of obtaining the distance “a” of the ranging laser 13 from the optical axis and the angle θ2 formed by the ranging laser 13 and the optical axis, which are required to measure the distance, without any mechanical adjustment with reference to
Subject surfaces M1, M2, . . . , and Mp are positioned at fixed distances x1, x2, . . . , and xp; when each subject surface is irradiated with the laser beam from the ranging laser 13, the reflected light produces spots at locations r1, r2, . . . , and rp on the imaging surface E.
The above-mentioned equation (1) can be transformed into the following equation (4). The calculation yi = −xiri·tan θ1/R is performed using the half angle of view θ1, which is a calculated value, and the distance "R" from the center of the solid-state image sensing device 12 to the end point subtended by the half angle of view. Then the least squares method is applied to xi and yi so that an inclination dy/dx and an intercept y0 are obtained.
By using the inclination and the intercept thus obtained, the distance "a" of the ranging laser 13 from the optical axis and the tangent tan θ2 of the angle formed by the ranging laser 13 and the optical axis, which are required to measure the distance, are obtained according to the following equations (5) and (6).
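Plausible forms of equations (4) through (6), consistent with the assumed equation (1) and the quantity yi defined above (the signs in particular are assumptions), are:

```latex
% Possible forms of equations (4)-(6): linear relation between x and y, and the
% parameters recovered from its least-squares fit (signs are assumptions)
y_i = -\,x_i \frac{r_i}{R}\tan\theta_1 = x_i\tan\theta_2 - a \qquad (4)

a = -\,y_0 \qquad (5)

\tan\theta_2 = \frac{dy}{dx} \qquad (6)
```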
From the above, the most efficient method of obtaining the distance "a" of the ranging laser 13 from the optical axis and the tangent tan θ2 of the angle formed by the ranging laser 13 and the optical axis, which are required to measure the distance, is to position the subject surface at two fixed points, at the closest-approach distance and the furthest distance fixed by the specification, and to obtain the two parameters "a" and tan θ2 as the distance information and the angle information.
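Purely as an illustration, the following Python sketch fits the two parameters from such calibration measurements by least squares; the function names (calibrate_ranging, distance_from_spot) and the numeric values are hypothetical, and the formulas follow the assumed reconstructions above rather than the original equations.

```python
import math

def calibrate_ranging(distances, spot_locations, R, theta1):
    """Fit the distance-measurement parameters a and tan(theta2) by least squares.

    distances      -- known subject-surface distances x1..xp
    spot_locations -- measured spot locations r1..rp on the imaging surface
    R              -- distance from the sensor center to the end point at the half angle of view
    theta1         -- half angle of view in radians
    Follows the assumed model y = x*tan(theta2) - a, with y = -x*(r/R)*tan(theta1).
    """
    xs = list(distances)
    ys = [-x * (r / R) * math.tan(theta1) for x, r in zip(xs, spot_locations)]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx                      # inclination dy/dx -> tan(theta2), assumed eq. (6)
    intercept = mean_y - slope * mean_x    # intercept y0 -> -a, assumed eq. (5)
    return -intercept, slope               # (a, tan_theta2)

def distance_from_spot(r, a, tan_theta2, R, theta1):
    """Distance x recovered from a spot location r (assumed form of equation (1))."""
    return a / (tan_theta2 + (r / R) * math.tan(theta1))

# Illustrative calibration at the closest-approach and furthest distances (units arbitrary)
a, tan_theta2 = calibrate_ranging([50.0, 400.0], [610.0, 45.0], R=640.0, theta1=math.radians(20))
print(distance_from_spot(60.0, a, tan_theta2, R=640.0, theta1=math.radians(20)))
```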
In the above-mentioned parallax method, the time it takes to measure the distance depends on the area S in which the laser spots Sp produced by the reflected light of the ranging lasers 13 move on the imaging surface E of the solid-state image sensing device 12. By minimizing the area S in which the laser spots Sp move, the imaging time and the range to be searched for the laser spots Sp on the imaging surface E of the solid-state image sensing device 12 are minimized, thereby minimizing the time it takes to measure the distance.
Generally, as shown in
In practice, however, even when a design is adopted such that the ranging lasers 13 are installed on the center axes of the solid-state image sensing device 12, the optical axis of any of the ranging lasers 13 may be inclined by the adjustment during assembly, so that the spots move within the range of the area SC shown in
Accordingly, the following will describe a method of minimizing the period of time required to measure the distance without any mechanical adjustment when assembling the ranging lasers 13, with reference to
Further, when imaging the laser spots Sp for measuring the distance, by imaging the smallest image range including the laser spots Sp at two fixed points having a distance of the closest approach and the furthest distance, for example, by imaging a rectangular range thereof, the period of time required to obtain the image for the distance measurement is shortened. Further, when searching for the laser spot Sp, the search starts on an average axis extending through two points: the laser spot Spn corresponding to the distance of the closest approach and the laser spot Spf corresponding to the furthest distance, for example.
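The idea of restricting both the imaged range and the spot search to the segment between the calibrated near and far spot positions could be sketched as follows; this illustrative Python uses hypothetical names (spot_near, spot_far, margin) and a simple brightness threshold in place of the actual spot-detection processing.

```python
def spot_search_region(spot_near, spot_far, margin=4):
    """Smallest rectangular region containing the spots at the closest and furthest
    calibrated distances, expanded by a small margin (in pixels)."""
    (xn, yn), (xf, yf) = spot_near, spot_far
    left, right = min(xn, xf) - margin, max(xn, xf) + margin
    top, bottom = min(yn, yf) - margin, max(yn, yf) + margin
    return left, top, right, bottom

def find_spot_along_axis(image, spot_near, spot_far, steps=200, threshold=200):
    """Search for the laser spot along the line joining the two calibrated spot
    positions, starting from the near-distance end."""
    (xn, yn), (xf, yf) = spot_near, spot_far
    for i in range(steps + 1):
        t = i / steps
        x = int(round(xn + t * (xf - xn)))
        y = int(round(yn + t * (yf - yn)))
        if image[y][x] >= threshold:          # bright pixel: candidate laser spot
            return x, y
    return None

# Example: a tiny synthetic image with one bright pixel on the search axis
img = [[0] * 32 for _ in range(32)]
img[10][20] = 255
print(spot_search_region((28, 12), (4, 6)))
print(find_spot_along_axis(img, (28, 12), (4, 6)))
```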
In the liquid lens 30 shown in
Accordingly, the relationship between distance information L(=x) to the code symbol 100, which is an imaging target, and electric voltage information Vn to be applied to the liquid lens 30 in order to focus on the code symbol 100 located at the distance specified by the distance information L is measured to produce a table TB relating the distance and the electric voltage, which is then stored in nonvolatile memory in the ASIC 210.
Thus, it is possible to obtain the electric voltage information Vn based on the distance to the code symbol 100 measured by the above-mentioned parallax method or the like. The optical-information-reading apparatus 1A shown in
Next, when a long time has elapsed or the apparatus has been kept in a high-temperature state, the solution or the oil in the liquid lens 30 deteriorates, so that there may be a case where the setting in the previously produced and stored table TB of the distance and electric voltage no longer corresponds to the state of the liquid lens 30.
The refractive index at which focus can be achieved for the distance obtained by the measurement, when the table TB of the distance and electric voltage is referred to and the corresponding electric voltage is applied to the liquid lens 30, is shown in
Accordingly, if the optical-information-reading apparatus 1A cannot carry out the reading normally because the code symbol 100 cannot be decoded when imaged by autofocus using the open loop algorithm, which obtains the distance information L, obtains the electric voltage information Vn based on the distance information L and controls the liquid lens 30, the apparatus images the code symbol 100 using a closed loop algorithm, which controls the liquid lens 30 so as to focus based on the image captured by the solid-state image sensing device 12. Further, since the distance information L to the code symbol 100 can be obtained, the electric voltage information Vn at which the closed loop algorithm is able to focus on the code symbol 100 is obtained as correction data, and the deterioration of the liquid lens 30 is detected.
Namely, the optical-information-reading apparatus 1A obtains the distance information L on the distance to the code symbol 100 from the locations of the laser spots Sp, which are irradiated from the ranging laser 13, are reflected by the code symbol 100 and are formed on the imaging surface E of the solid-state image sensing device 12. It also obtains the electric voltage information Vn based on the distance information L by referring to the table TB relating to the distance and electric voltage which is previously stored in the ASIC 210 and controls the liquid lens 30 to obtain the image of the code symbol 100 via the solid-state image sensing device 12 at a step SA3 shown in
When the optical-information-reading apparatus 1A obtains the image of the code symbol 100 via the solid-state image sensing device 12, the decoder 200 decodes the code symbol 100 at a step SA4 shown in
If it is impossible to decode the code symbol 100 at step SA4 shown in
In the imaging optical system 11 using the liquid lens 30, the closed loop algorithm that focuses on the imaging target imaged by the solid-state image sensing device 12 applies, for example, a contrast method, in which the electric voltage applied to the liquid lens 30, which controls the curvature of the liquid lens 30, is changed so that the contrast (spatial frequency) of the image captured by the solid-state image sensing device 12 becomes highest.
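As a rough illustration of such a contrast method, the following Python sketch sweeps the drive voltage and keeps the setting that maximizes a simple gradient-based sharpness score; apply_voltage and capture_image are hypothetical stand-ins for the camera-module interface, and the plain sweep is only one possible search strategy.

```python
def sharpness(image):
    """Simple contrast score: sum of squared differences between horizontally
    adjacent pixels (a stand-in for a spatial-frequency measure)."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in image for i in range(len(row) - 1))

def closed_loop_focus(apply_voltage, capture_image, v_min, v_max, steps=20):
    """Sweep the liquid-lens drive voltage and return the voltage giving the
    sharpest image; this value can also serve as correction data (Vn)."""
    best_v, best_score = None, float("-inf")
    for k in range(steps + 1):
        v = v_min + (v_max - v_min) * k / steps
        apply_voltage(v)                      # hypothetical: set liquid-lens voltage
        score = sharpness(capture_image())    # hypothetical: grab a frame
        if score > best_score:
            best_v, best_score = v, score
    apply_voltage(best_v)
    return best_v
```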
When focusing on the code symbol 100, the optical-information-reading apparatus 1A obtains its image via the solid-state image sensing device 12 at a step SA7 shown in
At a step SA9 shown in
At step SA11 shown in
In
As shown in
Accordingly, the algorithm is changed to the closed loop algorithm. If the electric voltage information Vn when focusing on the code symbol 100 is obtained as the correction data to learn the deterioration of the liquid lens 30, it is clear that the n-V curve N3 indicating the applied electric voltage and the refractive index after the correction comes closer to the n-V curve N2 indicating the relationship between the applied electric voltage and the refractive index at a deteriorated state of liquid lens 30.
The main lens 2 is configured so that single or plural optical lens(es) is/are housed in a cylindrical housing 20. In the main lens 2, a screw portion 21a is formed on an outer circumference of the housing 20.
In the liquid lens 30, an electrode, not shown, connecting with the electrode 34a and an electrode connecting with the electrode 34b are formed on a part of an outer periphery of the cylindrical container 33 sealing the solution and the oil as described on
The camera body 5 is provided with a lens-mounting portion 50 in which a cylindrical space is formed so as to fit outer shapes of the main lens 2 and the liquid lens 30, both of which have cylindrical shapes. The lens-mounting portion 50 allows optical axes of the main lens 2 and the liquid lens 30 to coincide by limiting their position along the X-Y directions that are orthogonal to the optical axes when mounting the main lens 2 and the liquid lens 30 thereon with them being put one on the top of another along the optical axis.
In the camera body 5, a screw portion 51a with which the screw portion 21a formed on the housing 20 of the main lens 2 is engaged is formed on an inner circumference of the lens-mounting portion 50 and when mounting the main lens 2 on the lens-mounting portion 50 and engaging the screw portion 21a of the main lens 2 with the screw portion 51a of the lens-mounting portion 50, the main lens 2 is fixed at a forward end of the camera body 5.
The camera module 10 is provided with a packing 6b held between the lens-mounting portion 50 of the camera body 5 and the liquid lens 30, and a spacer 6a held between the main lens 2 and the liquid lens 30. The packing 6b has a ring shape and is made of, for example, silicone. The packing 6b limits the position of the liquid lens 30 in the Z direction along the optical axis.
The camera module 10 is provided with a base board 7 mounting the solid-state image sensing device 12, a main board 8 performing any signal processing such as decoding and a flexible printed circuit board (FPC) 9 connecting the liquid lens 30, the base board 7 and the main board 8. On the base board 7, the camera body 5 is mounted. On the base board 7, the ranging laser 13 constituting a distance-measuring portion is also mounted. On the base board 7, an outer case, not shown, for covering the camera body 5 and the ranging laser 13 is further installed, which protects the camera body 5 on which the main lens 2 and the liquid lens 30 are mounted and the ranging laser 13. Although, in this embodiment, the base board 7 and the main board 8 have been separately configured, the base board 7 and the main board 8 may be configured so as to be formed as one board.
The flexible printed circuit board 9 is provided with a ring shaped electrode portion 90 to be connected to an electrode, not shown, of the liquid lens 30. The electrode portion 90 is configured so as to have a shape so that it is fitted into the lens-mounting portion 50 of the camera body 5.
In the camera module 10, the thermistor 14 for detecting temperature at the liquid lens 30 and a position near the liquid lens 30 is mounted, for example, near the electrode portion 90 of the flexible printed circuit board 9. Temperature information detected by the thermistor 14 is transferred to the main board 8 through the flexible printed circuit board 9.
The camera module 10 is assembled so that the ranging laser 13 is installed to the camera body 5, the camera body 5 installing the ranging laser 13 is fixed on the base board 7 mounting the solid-state image sensing device 12 and the like using an adhesive, and the ranging laser 13 is fixed on the base board 7 using solder.
Next, the flexible printed circuit board 9 is connected to the base board 7, and one electrode 90 of the flexible printed circuit board 9 and the packing 6b are inserted into the lens-mounting portion 50 of the camera body 5, with the packing 6b below the electrode 90.
Next, the liquid lens 30 is inserted into the lens-mounting portion 50 of the camera body 5 and the other electrode 90 of the flexible printed circuit board 9 is inserted thereinto with it being above the liquid lens 30. The spacer 6a is inserted thereinto above the electrode 90 of the flexible printed circuit board 9, the main lens 2 is mounted on the lens-mounting portion 50 and the screw portion 21a of the main lens 2 is engaged with the screw portion 51a of the lens-mounting portion 50.
Thus, in the camera module 10, one electrode 90 of the flexible printed circuit board 9 is held between the main lens 2 and the liquid lens 30, and the packing 6b and the other electrode 90 of the flexible printed circuit board 9 are held between the liquid lens 30 and the lens-mounting portion 50 of the camera body 5, so that the position of the liquid lens 30 on the Z axis along the optical axis is limited by the packing 6b. The packing 6b also pushes the liquid lens 30 toward the main lens 2 so that the electrode, not shown, of the liquid lens 30 and the electrodes 90 of the flexible printed circuit board 9 are electrically connected to each other. It is to be noted that since the main lens 2 is fixed on the camera body 5 by the engagement of the screw portions, fine adjustment of its position along the optical axis can be performed. Further, by inserting the spacer 6a so that the electrodes 90 and the like do not rotate when the main lens 2 is rotated, the electrodes 90, the liquid lens 30 and the like are protected.
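A plausible form of equation (7), assuming the standard Gaussian imaging relation between the quantities defined below (the sign convention is an assumption), is:

```latex
% Plausible form of equation (7): Gaussian imaging relation (sign convention assumed)
\frac{1}{s'} + \frac{1}{s} = \frac{1}{f} \qquad (7)
```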
where "s′" is the working distance, "s" is the working distance at the image side, and "f" is the focal distance.
In the case of an imaging optical system provided with only the fixed focus lens system 2A, if the principal point (rear-side principal point) of the imaging optical system is H1, an adjustment is carried out to mechanically match the location of a lens barrel 2B holding the fixed focus lens system 2A to the target working distance at the image side, in order to focus on an imaging target that is a predetermined distance away (working distance s′).
Further, an amount of variation Δs′(T) of the working distance based on the temperature can be easily measured and the working distance s′(T) at some temperature is represented by the following equation (8) in relation to working distance s′(T0) at a reference temperature T0.
s′(T)=s′(T0)−Δs′(T) (8)
In other words, even when the focal distance f and the working distance "s" at the image side vary with temperature, producing Δf(T) and Δs(T) so that the working distance s′ changes, the overall trend of Δs′(T) can be measured and modeled, thereby enabling compensation to be carried out.
Thus, in the imaging optical system provided with the fixed focus lens system 2A composed of a single optical lens or plural optical lenses and the varifocal lens system 3A composed of the liquid lens or a single optical lens or plural optical lenses, the relationship between the focal distance of the imaging optical system and the distance to the subject surface M is represented by the following equation (9), where the direction from the subject surface M to the imaging surface E is positive.
where φ is the power (refractive power) of the whole lens system, φa is the power of the fixed focus lens system, φb is the power of the varifocal lens, and "t" is the distance between the principal points in the lens system.
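A relation that plausibly enters equation (9) is the standard formula for the combined power of two lens systems separated by the principal-point distance t; this is an assumption about the content of the original equation:

```latex
% Possible relation used in equation (9): combined power of the fixed and varifocal
% lens systems separated by the principal-point distance t (standard two-lens formula)
\varphi = \varphi_a + \varphi_b - t\,\varphi_a\varphi_b
```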
In a case of the imaging optical system in which the fixed focus lens system 2A and the varifocal lens system 3A are positioned in this order from a side of the subject surface M like the camera module 10 according to this embodiment, the fixed focus lens system 2A is adjusted under a state where an optical state of the varifocal lens system 3A is unknown because the varifocal lens system 3A itself which has been previously assembled to the imaging optical system cannot be adjusted.
In other words, the power of the whole lens system becomes unknown, and if the principal point (rear-side principal point) of the imaging optical system provided with the fixed focus lens system 2A and the varifocal lens system 3A is H2, the mechanical location of the lens barrel 2B needed to adjust the working distance "s" to the working distance "s′" at the image side depends on the state of the varifocal lens system 3A. Conversely, if the characteristic of the varifocal lens system 3A is utilized, the location of the lens barrel 2B can be fixed, and the working distance can be matched to the working distance "s′" at the image side by adjusting the varifocal lens system 3A. Accordingly, an imaging optical system into which the varifocal lens system is introduced requires no mechanical adjustment in the case of a system used only at a certain temperature.
However, since the working distance S0″(T0) of only the fixed focus lens system 2A at the reference temperature T0 and the state φb(T0, V) of the varifocal lens system 3A are unknown, the following two methods can be used for temperature compensation. Here, "V" is the input variable that sets the varifocal lens system and is the applied electric voltage when the varifocal lens system is configured as a liquid lens.
One method is to fix the optimal φb(T, V) for obtaining a desired s′(T) by test operations in which the input V is changed several times against φa(T), s(T) and t(T) at the actual environmental temperature. This method requires a feedback system for autofocus that checks the relationship between "V" and s′(T) during the reading operation, so its operation is slow.
The other method is to measure the φb(T, V) that yields the target distance z=s′(T) for each condition of temperature T and to prepare a compensation table or compensation equation either for every individual unit or as an average value. This method requires a massive amount of work when it is prepared for every individual unit, and when the average value is used, the precision of s′(T) obtained from φb(T, V) becomes low because of the variation between the two optical systems.
Accordingly, in this invention, in a case of a system in which the varifocal lens system is added to the fixed focus lens system, a method of maintaining the necessary precision of s′(T) is considered without any necessity of the feedback system.
The temperature compensation in this embodiment is carried out according to the following three steps.
Step 1: Previous Formation of Compensation Equation
(1) The characteristic of the temperature variation when only the fixed focus lens system 2A is used is measured in advance and is modeled using the following equation (10).
(2) The characteristic of the temperature variation of the varifocal lens system 3A is measured in advance and is modeled using the following equation (11).
sa(T)=sa(T0)−Δsa(T) (10)
φb(T,V)=φb(T0,V0)−Δφb(T,V) (11)
Step 2: Adjustment Step of obtaining Individual Difference
(1) One output φb (T0, V0) of the varifocal lens for an input V0 at a certain reference temperature T0 is measured before assembly and is recorded or maintained in the system (nonvolatile memory or the like).
(2) Sa(T0) is calculated from the output φb (T0, V0) and it is recorded or maintained in the system (nonvolatile memory or the like).
Step 3: During Operation
When reading, the φb (T, V) that yields the target distance z=s′(T) is calculated from φb (T0, V0) and Sa(T0), and the corresponding input V is applied to the varifocal lens.
The following will describe the step 1 for the previous formation of compensation equation in more detail.
<Step 1-1>
The temperature characteristics measured in a case where only the fixed focus lens system 2A is introduced are represented as the following equations (12) through (15).
where “s(bar)′” is working distance, “T” is environmental temperature and “T0” is reference temperature.
<Step 1-2>
The characteristics of temperature variations in the varifocal lens system 3A are represented as the following equations (16) through (18) using the above-mentioned model equation (11).
ai and bi are coefficients fixed by the specification of the varifocal lens system 3A. However, since dφb/dT(T0) and φ0 (T0) are required for every individual, they must be obtained at the adjustment step.
<Step 2-1>
In the above-mentioned step 1, dφb/dT(T0) and φ0 (T0) for every individual are required in the compensation equation. One output φb (T0, V0) of the varifocal lens to an input V0 at a certain reference temperature T0 is measured and it is recorded or maintained in the nonvolatile memory of the system.
The measurement may be carried out by installing the varifocal lens system 3A into a fixed focus lens system 2A having a known optical characteristic and searching for the input V0 at which the image obtained by imaging the subject surface M is clearest, so that the optical characteristic φb (T0, V0) at that moment can be calculated. In particular, if the fixed focus lens system 2A is prepared so that φb (T0, V0)=0, the only value that needs to be stored and maintained may be V0.
Accordingly, the subject surfaces M1, M2, . . . , Mp are positioned at fixed distances x1, x2, . . . , xp, and the inputs V1, V2, . . . , Vp at which the image of each subject surface M is clearest are measured. Since φbi is obtained from 1/xi, dφ/dV(T0) is obtained from the relationship between V and φ. Here, when the subject surfaces M include two fixed points, at the closest-approach distance and the furthest distance under the specification of the manufactured apparatus, the precision becomes good.
When using φb (T0, V0) measured for every individual by the above-mentioned method, φ0 (T0) can be obtained because the inclination and one fixed point are known and dφb/dT(T0) and φ0 (T0) thus obtained are stored and maintained in the non-volatile memory.
<Step 2-2>
In step 2-1, the "s′" obtained with V0 fixed so that dφb/dT(T0)=0 and applied to each individual unit is the Sa(T0) to be obtained; it is calculated, stored and maintained in the non-volatile memory.
<Step 3>
The following will describe in more detail the step 3, which is the control at operation time. When reading, the input V that yields the target distance z=s′(T) is input to the varifocal lens system 3A. Next, the distance measurement is performed using the above-mentioned parallax method or the like to obtain the distance "z" to the imaging target. The exterior sensor such as the thermistor 14 shown in
Next, using the above-mentioned equations (9) through (11), φ(T) is obtained from Sa(T0) and s′(T), and φb(T) is obtained from φ(T). Then, by referring to T, the optimal input coefficient, for example the optimal electric voltage, is calculated to obtain V, which is input to the varifocal lens system 3A.
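Purely as an illustration of this operation-time flow, the following Python sketch strings the steps together; every helper passed in (measure_distance, read_temperature, delta_sa, delta_phi_b, phi_b_from_geometry, power_to_voltage, apply_voltage) is a hypothetical placeholder standing in for the actual compensation equations (9) through (11) and the lens drive characteristic.

```python
def focus_for_reading(measure_distance, read_temperature,
                      sa_T0, delta_sa, delta_phi_b,
                      phi_b_from_geometry, power_to_voltage, apply_voltage):
    """Operation-time temperature compensation (step 3), sketched with placeholders.

    sa_T0               -- per-unit value stored at the adjustment step (step 2)
    delta_sa            -- temperature model for the fixed lens system, equation (10)
    delta_phi_b         -- temperature model for the varifocal lens system, equation (11)
    phi_b_from_geometry -- stand-in for equation (9): varifocal power needed for the
                           target distance z given the fixed-lens state sa(T)
    power_to_voltage    -- stand-in for the lens drive characteristic at temperature T
    """
    z = measure_distance()        # distance to the imaging target (parallax method or similar)
    T = read_temperature()        # lens temperature from the thermistor

    sa_T = sa_T0 - delta_sa(T)                       # equation (10), per-unit fixed-lens behaviour
    phi_b_needed = phi_b_from_geometry(z, sa_T)      # from equation (9), as assumed
    phi_b_drive = phi_b_needed + delta_phi_b(T)      # correct for the lens's own drift, eq. (11)
    V = power_to_voltage(phi_b_drive, T)
    apply_voltage(V)
    return V
```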
The present invention is available for a barcode reader and a two-dimensional code reader, and can realize autofocus in a small-scale apparatus.
Kawashima, Yasutake, Tanaka, You, Komi, Satoshi
References Cited:
U.S. Pat. No. 6,141,105 (priority Nov. 17, 1995), Minolta Co., Ltd.: Three-dimensional measuring device and three-dimensional measuring method
U.S. Patent Application Publication Nos. 2007/0097508, 2008/0110265, 2010/0040355, 2012/0199655
Japanese Patent Documents JP 10002711, JP 2001221621, JP 2002056348, JP 2005182117, JP 2008123562, JP 2009086584, JP 2009134023, JP 4264511