A stereo camera device detecting a distance to a subject includes two cameras and a calculation unit that calculates a distance to the subject based on the images acquired by the two cameras. The calculation unit includes an image processing unit that searches for corresponding points of the images acquired by the two cameras and calculates two parallaxes based on differences in positional coordinates of the corresponding points on the images, an offset value calculation unit that calculates parallax offset values across the images based on the two parallaxes calculated by the image processing unit at least at two time points, and a statistical processing unit that performs a statistical analysis on a distribution of the parallax offset values and determines an optimum value of the parallax offset values, the optimum value being used as a correction parameter.
1. A stereo camera device detecting a distance to a moving subject, the stereo camera device comprising:
two cameras that are installed and separated from each other by a base line length on a moving platform; and
circuitry configured to calculate distances to the moving subject on images based on the images acquired by the two cameras;
wherein the circuitry is configured to search for corresponding points of the images acquired by the two cameras and to calculate two parallaxes, based on differences in positional coordinates of the corresponding points on the images, at least at two time points,
wherein the circuitry is configured to calculate a running distance, that is a difference between the distances to the moving subject at the two time points, based on relative velocities of the two cameras with respect to the moving subject,
wherein the circuitry is configured to calculate parallax offset values of the corresponding points across the images using the two parallaxes calculated by the circuitry at least at the two time points and using the calculated running distance, and
wherein the circuitry is configured to perform a statistical analysis on a distribution of the parallax offset values and determine an optimum value of the parallax offset values, the optimum value being used as a correction parameter.
6. A correction method performed by a stereo camera device that detects a distance to a moving subject, the method comprising:
acquiring images at different time points by synchronizing two cameras installed and separated from each other by a base line length on a moving platform;
calculating parallaxes based on differences in positional coordinates of corresponding points on the images acquired by the two cameras;
calculating a distance to a same corresponding point based on parallaxes of at least two images acquired at two different time points and by using the base line length, a focal point distance of the two cameras, and parallax offset values, the same corresponding point being on the two images;
calculating a running distance, that is a difference between distances to the moving subject at the two different time points, based on relative velocities of the two cameras with respect to the moving subject;
calculating the parallax offset values of the same corresponding point across the images using the running distance acquired at the two different time points and using the parallaxes of the two images;
determining an optimum value of the parallax offset values by performing a statistical analysis on a distribution of the parallax offset values; and
correcting a correction parameter by using the determined optimum value, the correction parameter being used for measuring the distance.
2. The stereo camera device according to
wherein the circuitry is configured to generate a frequency distribution of the parallax offset values to determine the optimum value.
3. The stereo camera device according to
4. The stereo camera device according to
5. The stereo camera device according to
7. The correction method according to
generating a frequency distribution of the parallax offset values to determine the optimum value.
8. A non-transitory computer-readable medium storing a computer-readable program that causes a computer to execute the correction method described in
9. The stereo camera device according to
10. The correction method according to
11. The stereo camera device according to
12. The correction method according to
The present invention relates to a measurement technique of measuring a distance to a subject using images, and more particularly to a stereo camera device, a correction method, and a program that effectively correct a parameter.
There has been known a so-called stereo camera device that measures a distance to a subject by disposing plural imaging devices such as cameras and forming images of the subject. A parallel stereo camera device includes two cameras fixed at different positions and separated from each other by a predetermined distance referred to as a base line length. The two cameras are fixed in a manner such that the optical axes of the cameras are parallel to each other. The stereo camera device determines the distance to the subject by converting the parallax between a subject image acquired by the first camera and a subject image acquired by the second camera into the distance using a specific parameter of the optical system.
A method of measuring the distance using two cameras disposed in parallel is described with reference to
On the other hand, in the camera 1L, the image of the subject X is formed at a position PL on the imaging surface SL of the camera 1L. Further, a straight line passing through the optical center OL of the camera 1L and being parallel to the straight line X-OR is expressed in a dotted line as illustrated in
In many cases, the stereo camera devices are installed in a moving body such as a vehicle and used for measuring a distance. Due to this usage, the stereo camera devices are subjected to continuous vibrations and temperature changes. As described with reference to
To prevent the degradation of the accuracy of the distance measurement, it is necessary to accurately adjust a parameter in the manufacturing process. However, due to vibrations while the vehicle is moving, time-dependent changes in the distortion of the vehicle body, temperature changes, and the like, the misalignment may occur. To overcome the misalignment, namely, in order to maintain the accuracy of the positional relationship between the two cameras in the stereo camera device, there may be a method of adjusting (correcting) the device after the shipment (sale) of the stereo camera device by using a test chart, which is an image having a known distance from the device. However, during the adjustment (correction) using the test chart, the stereo camera device cannot be used. Namely, when this method is used, the availability of the stereo camera device may be greatly reduced.
To overcome the inconvenience of the method, there have been proposed several methods in which the adjustment (correction) is made while moving, without using the test chart, by using a subject in a scene, the subject having a known feature (e.g., a white lane having a known distance, a traffic signal, and a utility pole). For example, Japanese Patent Application Publication No. 10-341458 (Patent Document 1) discloses a technique in which, to detect the misalignment of the stereo camera device in the installation direction, the figures of static subjects are memorized first and a static subject is recognized by comparison with the memorized data. Then, the parallax offset is calculated based on the moved distance measured by a velocity sensor or the like and the distance to the recognized static subject. Further, Japanese Patent No. 3436074 (Patent Document 2) discloses an in-vehicle stereo camera device that performs a correction process based on images of the same static subjects disposed at plural positions and on the distances between the plural positions. Further, Japanese Patent Application Publication No. 2009-176090 (Patent Document 3) describes an environment recognition device that detects a subject based on a captured image and determines a surrounding environment. Further, Japanese Patent Application Publication No. 2009-288233 (Patent Document 4) describes a technique of correcting the tilt of an image captured by a camera.
However, in the techniques described above, only limited subjects such as a white lane having a known distance on a highway, a traffic signal, a utility pole, and the like can be recognized. Therefore, the place where and the timing when the parallax offset correction can be performed may be limited. Further, the scene continuously changes during moving, and the appearance of a subject also changes depending on its position in the image due to lens distortion. Therefore, it may be difficult to precisely recognize a static subject. Further, the method can be used only for static subjects whose figures have been memorized.
The present invention is made in light of the above inconveniences of the techniques of the related art as described above, and may provide a stereo camera device, a correction method, and a program that accurately correct the parallax offset by fully using the functions of the stereo camera device regardless of the figure of the subject, timing, place, and regardless of whether the subject is moving.
According to an embodiment of the present invention, a stereo camera device includes two cameras and a calculation section. The two cameras are installed in a manner such that the two cameras are separated from each other by a (predetermined) base line length. The calculation section includes an image processing section, an offset value calculation section, a statistical processing section, and a distance calculation section.
The image processing section searches for corresponding points between the images acquired by the two cameras, calculates the parallaxes, and outputs the calculated parallaxes of the respective corresponding points to the offset value calculation section. The offset value calculation section calculates the parallax offset values of the respective corresponding points (k = 1 to N). The parallax offset values are the offset values to correct the respective parallaxes of the corresponding points. The parallax offset values are calculated by using the parallaxes at different timings t0 and t1 and the moved distance of the vehicle or the like on which the stereo camera device is mounted, without using the distance Z to the subject.
The calculated parallax offset values are output to the statistical processing section. The calculation results of the offset value calculation section are converted into appearance frequencies of the parallax offset values. A statistical analysis is performed on the appearance frequencies so that an optimum value of the parallax offset values to be used as a correction parameter is determined.
The distance calculation section calculates a distance to a subject to be measured in a parallax image based on the base line length, the focal point distance of the cameras, and the correction parameter determined by the statistical processing section.
Further, the stereo camera device according to an embodiment of the present invention may include a correction section that revises (corrects) the correction parameter based on the optimum value of the parallax offset value. The correction parameter is used for measuring the distance. The correction section in the embodiment may be a motor stage or an NVRAM (Non Volatile Random Access Memory).
Further, according to embodiments of the present invention, there are provided a correction method and a program that cause the calculation section to acquire a correction parameter q so as to be used for the later distance calculation.
In the following, embodiments of the present invention are described with reference to the accompanying drawings. However, the present invention is not limited to the embodiments.
The cameras 102 and 104 include lenses 102a and 104a and photoelectric conversion elements 102b and 104b to obtain (acquire) right (R) and left (L) images, respectively. The cameras 102 and 104 are disposed in a manner such that the cameras 102 and 104 are separated from each other by a base line distance B between the centers of the lenses 102a and 104a and an optical axis 110 of the camera 102 is parallel to an optical axis 112 of the camera 104. The cameras 102 and 104 perform a photoelectric conversion on the optical images having passed through the lenses 102a and 104a and formed on the conversion elements 102b and 104b, respectively, by a shutter (not shown), and store the respective photoelectrically converted images as digital data. The stored (recorded) images are output to the calculation section 120 to be used in the distance calculation and a correction process later.
As described above, the cameras 102 and 104 are disposed in a manner such that the cameras 102 and 104 are separated from each other by a base line distance B and the optical axis 110 is parallel to the optical axis 112. Further, the lenses 102a and 104a of the cameras 102 and 104 have the same focal point distance f. The distance Z from the optical centers to the subject 108 (in the direction parallel to the optical axes 110 and 112) is given in the following formula (1) by using a parallax p of the subject 108 between the right (R) and left (L) images of the subject 108 acquired by the right and left cameras 102 and 104. Herein, the parallax p is defined as the positional difference of the subject 108 between the images acquired by the cameras 102 and 104.
Z=B×f/p (1)
In order to accurately measure the distance Z, it is necessary that the optical axis 110 is accurately parallel to the optical axis 112. However, it may be difficult to maintain the parallel relationship between the optical axes 110 and 112 depending on a setting state of the cameras 102 and 104 during manufacturing and installation. To overcome the inconvenience, in many cases, in the distance calculation, the actual stereo camera device 100 uses the following formula (2) including a correction parameter q to accurately calculate the distance using the parameter mainly related to the parallelism (parallel relationship).
Z=B×f/(p−q) (2)
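Formulas (1) and (2) can be sketched as follows; this is a minimal illustration only, and the function name and the numerical values (base line distance, focal point distance, parallax, offset) are hypothetical, chosen solely for the example.

```python
def stereo_distance(p, B, f, q=0.0):
    """Distance Z from the parallax p: Z = B*f / (p - q), i.e. formula (2);
    with q = 0 this reduces to formula (1)."""
    corrected = p - q
    if corrected <= 0:
        raise ValueError("corrected parallax must be positive")
    return B * f / corrected

# Hypothetical values: base line B = 100 mm, focal point distance f = 8 mm,
# parallax p = 0.2 mm, parallax offset q = 0.04 mm -> Z = 800 / 0.16 = 5000 mm.
Z = stereo_distance(p=0.2, B=100.0, f=8.0, q=0.04)
```

Note that the same parallax yields Z = 4000 mm without the correction parameter, which illustrates how strongly an uncorrected offset distorts the measured distance.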
The parameter q in the formula (2) is the correction parameter referred to as a parallax offset. In this embodiment of the present invention, the correction parameter q is calculated in the method described below. Namely, a corresponding point k is determined on the images at different time points. Based on time difference or a running distance of the vehicle having the stereo camera device 100 of the corresponding point k, a parallax offset value qk relevant to the corresponding point k is obtained across the entire image. Then, a statistical process is performed on the parallax offset values qk and an optimum value of the parallax offset values qk of the image is determined as the correction parameter q. After a certain time period has passed since the installation of the stereo camera device 100, the parallelism (parallel relationship) between the cameras 102 and 104 may be impaired due to relative rotation between the cameras 102 and 104 caused by, for example, the vibrations of the vehicle in which the stereo camera device 100 is installed.
The same change over time may also occur in either of the cameras 102 and 104. Therefore, in a conventional stereo camera device, a method is employed in which a test chart having a predetermined distance is periodically read to calculate the parallax offset value; a previous correction parameter q is replaced by the calculated parallax offset value; and the replaced (updated) correction parameter q is stored in a non-volatile memory such as an NVRAM (Non-Volatile Random Access Memory) to be used for the distance calculation later. Further, when the stereo camera device includes a motor to correct the optical axes, the replaced (updated) correction parameter q may be used as control data to control the drive of the motor.
The calculation section 120 instructs (causes) the cameras 102 and 104 to capture images, and performs a distance measurement calculation and calculation for parameter correction using the images acquired by the cameras 102 and 104.
The calculation section 120 may be implemented as a one-chip microcomputer, an ASIC (Application Specific Integrated Circuit), an in-vehicle computer, a personal computer, or the like. Further, the calculation section 120 performs mutual communications with the external device 230 and outputs a result of the distance measurement calculation, calculated using the images, to the external device 230, so that the external device 230 can perform various controls using the distance Z calculated by the calculation section 120. The external device 230 may be provided separately from the calculation section 120, which performs memory access with a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and another memory device such as a USB memory. Otherwise, the external device 230 may be integrated into the calculation section 120 as an information processing device.
As illustrated in
The calculated parallax pk is output to the offset value calculation section 206 and further output to the distance calculation section 208. The offset value calculation section 206 starts the parameter correction process at a timing to start the parameter correction process or upon receipt of an external interruption signal instructing the start of the parameter correction process. In the time period other than that of performing the parameter correction process, the offset value calculation section 206 performs the distance measurement using only the parallax pk. The parameter correction process is executed by the calculation section 120 synchronizing the timing at which the velocity sensor 204 obtains velocity information with the timing at which the cameras 102 and 104 acquire images.
When the offset value calculation section 206 performs the distance measurement, the distance calculation section 208 receives the parallax pk that was received by the offset value calculation section 206, and calculates the distance Z based on the formula (2) using the correction parameter q which is effective at that time. The calculated distance Z is output from the calculation section 120 to the external device 230 via an interface 210 included in the calculation section 120. By using the received distance Z, the external device 230 controls other in-vehicle devices such as an alarm device, a control device, and an engine.
As described above, the calculation section 120 includes the statistical processing section 240. The statistical processing section 240 along with the offset value calculation section 206 and the velocity sensor 204 of
Next, the specific parameter correction process according to this embodiment of the present invention is described. In the parameter correction process according to this embodiment of the present invention, first, the calculation section 120 instructs the cameras 102 and 104 to capture (acquire) images at time t0 as a reference time and simultaneously instructs the velocity sensor 204 to output the velocity information.
At time t1 when a predetermined time period has passed since time t0, the calculation section 120 further instructs the cameras 102 and 104 to capture (acquire) images. As a result, the stereo camera device 100 acquires images captured by the two cameras 102 and 104 at time (time points) t0 and t1. Namely, the stereo camera device 100 acquires four images. Then, the image processing section 202 searches for the corresponding point based on two images of the respective cameras 102 and 104 at time t0 and calculates a corresponding parallax image. In the same manner, the image processing section 202 searches for the corresponding point based on two images of the respective cameras 102 and 104 at time t1 and calculates the corresponding parallax image. Those parallax images are input to the offset value calculation section 206. Further, to calculate (acquire) the corresponding points of the parallax images at time t0 and t1, not only the parallax images but also the images having been used for acquiring the parallax images are input to the offset value calculation section 206.
On the other hand, the calculation section 120 sets the velocity information output from the velocity sensor 204 into the offset value calculation section 206, and further sets parallax values p0 and p1 relevant to the corresponding points common to the respective sets of the two images captured by the two cameras 102 and 104. Herein, the parallax value p0 refers to the parallax of the corresponding point k at time t0. The parallax value p1 refers to the parallax of the same corresponding point k at time t1. In the following descriptions, for clarification purposes, it is assumed that the corresponding point has been fixed to the corresponding point k.
Based on the parallax values p0 and p1 acquired as described above, the offset value calculation section 206 obtains the following formula (3) expressing the relationships between the distances Z0 and Z1 at times t0 and t1 and the parallax values p0 and p1 at times t0 and t1.
Z0=Bf/{p0−qk}
Z1=Bf/{p1−qk} (3)
When the subject of the corresponding point is a static subject, the difference between the distances Z0 and Z1, namely the running distance D, corresponds to the moved distance of the vehicle. On the other hand, when the subject of the corresponding point is a moving subject such as another vehicle, the running distance D corresponds to a distance calculated based on the relative velocity. Further, when the subject of the corresponding point is a moving subject, the relative velocity with respect to the camera 102 may differ from the relative velocity with respect to the camera 104. In this case, the values of Z1 are distributed in a range centered on Z0.
By using the above formula (3), the running distance D is given as D=Z0−Z1. Further, the running distance D is given in the following formula (4), where the base line distance B, the focal point distance f, the measured parallax values p0 and p1, and the parallax offset value qk of the corresponding point k are used.
Z0−Z1=Bf/{p0−qk}−Bf/{p1−qk} (4)
The above formula (4) may be transformed into a quadratic equation in the parallax offset value qk. Based on the formula (4), the parallax offset value qk is given in the following formula (5) as the solution (qk≧0) of the quadratic equation.
qk={(p0+p1)−√((p0−p1)^2+4Bf(p1−p0)/D)}/2 (5)
As described above, the parallax offset value qk may vary depending on the relative velocity between moving bodies and may not be always the same. In this embodiment, the above relational formula is given for each of the corresponding points k (k=1 to N). Therefore, for each of the corresponding points k, the parallax offset values qk are calculated across the entire image as the candidates of the correction parameter q. The offset value calculation section 206 executes the process of the above formula (5).
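The computation of formula (5) can be sketched in Python as follows: expanding formula (4) gives a quadratic in qk, and the smaller root is taken as the physically meaningful offset. The function name, the round-trip check, and all numerical values are hypothetical, assumed only for illustration.

```python
import math

def parallax_offset(p0, p1, B, f, D):
    """Solve formula (4), D = Bf/(p0 - qk) - Bf/(p1 - qk), for qk.
    Expanding gives the quadratic qk^2 - (p0+p1)*qk + p0*p1 - B*f*(p1-p0)/D = 0;
    the smaller root is the physically meaningful offset (formula (5))."""
    disc = (p0 - p1) ** 2 + 4.0 * B * f * (p1 - p0) / D
    return ((p0 + p1) - math.sqrt(disc)) / 2.0

# Round-trip check with hypothetical values: choose a true offset q_true,
# generate p0 and p1 from distances Z0 and Z1, then recover the offset.
B, f, q_true = 100.0, 8.0, 0.05            # mm (assumed units)
Z0, Z1 = 10000.0, 8000.0                   # mm; running distance D = 2000 mm
p0 = B * f / Z0 + q_true                   # 0.13
p1 = B * f / Z1 + q_true                   # 0.15
qk = parallax_offset(p0, p1, B, f, Z0 - Z1)
```

The round trip recovers q_true = 0.05, which illustrates the point of this embodiment: the offset is obtained from two parallaxes and the running distance alone, without knowing the absolute distance Z to the subject.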
Next, a statistical (correcting) process performed by the statistical processing section 240 is described. When the process of the offset value calculation section 206 is finished, the calculated parallax offset values qk are input to the statistical processing section 240. The statistical processing section 240 determines, one by one, to which of predetermined chunks (groups) each of the calculated parallax offset values qk belongs. When, for example, one parallax offset value qk is determined to belong to a specific chunk Qm (m: positive integer), the statistical processing section 240 increments the count value of the chunk Qm by one. The statistical processing section 240 thus counts the number of the parallax offset values qk for each of the chunks Qm and generates (calculates) the frequencies (appearance frequencies) Am of the parallax offset values qk of the corresponding chunks Qm. When the counting process for all the corresponding points k is completed, the statistical processing section 240 generates, based on the frequencies Am, a frequency matrix F of the following formula (6). Herein, the symbol Qm denotes a chunk of the parallax offset values qk and is referred to by the corresponding index value Im, which is provided to give the correction parameter q and is defined as Im = (minimum value) + Δ × m. Further, with respect to the chunks Qm, the maximum value of the parallax offset value qk may be determined in advance, and the range up to the maximum value may be evenly divided into m sections, so that the above frequency accumulation may be performed. In this embodiment, even when the parallax offset values qk cannot be acquired across the entire image, it may become possible to effectively determine the optimum value based on a comparison between the peak of the frequency distribution and the number of the remaining sample points.
As a result, it may become possible to minimize (reduce) the time period during which the stereo camera device 100 cannot measure the distance due to the statistical (correcting) process. Further, according to another embodiment of the present invention, the appearance frequencies may be calculated after the parallax offset values qk are calculated for all the pixels.
The above frequency matrix F provides the statistical data used to determine the correction parameter q. The statistical processing section 240 may use any known statistical process on the statistical data to obtain the optimum value. Specifically, the mode value, the median value, the average value, the weighted average value, or the like may be used. When the mode value is used, the parallax offset value qk may be determined as the index value Imax providing the maximum frequency Amax. In addition, the parallax offset value qk may be determined by using a regression analysis assuming the normal distribution, a regression analysis assuming a polynomial distribution, or a regression analysis using a distribution function such as the binomial distribution, the chi-square distribution, the Poisson distribution, the β distribution, or the like. Namely, any appropriate statistical model may be used.
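The mode-based variant of this statistical processing can be sketched as follows; the fragment accumulates offset values into chunks of width Δ and returns the centre of the most frequent chunk. The function name, the chunk width, and the sample values are all hypothetical, assumed only for illustration.

```python
from collections import Counter

def optimum_offset(qk_values, delta=0.01, q_min=0.0):
    """Accumulate parallax offset values into chunks Qm of width delta
    (index value Im = q_min + delta * m) and return the centre of the
    chunk with the highest appearance frequency Am, i.e. the mode."""
    counts = Counter(int((q - q_min) / delta) for q in qk_values)
    m_max, _ = max(counts.items(), key=lambda item: item[1])
    return q_min + delta * (m_max + 0.5)

# Hypothetical sample: most offsets cluster in the chunk [0.05, 0.06),
# with one outlier at 0.121; the centre of the mode chunk is returned.
samples = [0.041, 0.052, 0.053, 0.055, 0.058, 0.121, 0.057]
q_opt = optimum_offset(samples)
```

Binning before taking the mode is what makes the estimate robust against the occasional mismatched corresponding point, which would dominate a plain average.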
In the statistical processing, when the dispersion is calculated and the calculated dispersion value occupies more than a predetermined rate of the deflection range of the parallax offset values qk, the results obtained so far may be discarded, and the process may be re-executed after a certain time period. By doing this, it may become possible to avoid adopting an improper value as the correction parameter q. When the positional relationship between the two cameras 102 and 104 changes largely due to a large impact, the change of the parallax offset value qk from the initial value becomes larger, whereas the change of the parallax offset value qk caused by aging over time or the like is smaller. Therefore, the setting value of the deflection range of the parallax offset values qk varies depending on which type of change is to be detected.
Further, in another embodiment of the present invention, when a statistical analysis using the standard deviation is performed, the standard deviation σpast obtained when the currently effective correction parameter q was acquired may be stored, and whether the newly acquired value is adopted may be determined based on a test of the newly acquired value against σpast.
The following formula (7) expresses the shifted angle (displacement angle) between the optical axes 110 and 112 of the cameras 102 and 104 by using the correction parameter q.
θ=tan^(−1)(q/f) (7)
When the stereo camera device 100 includes a camera driving system, the data of the angle θ acquired based on the correction parameter q is transferred to the motor of the camera driving system as the data indicating the amount of driving the motor. On the other hand, when the stereo camera device 100 does not include the camera driving system, the data of the angle θ acquired based on the correction parameter q is transferred to the distance calculation section 208 and stored in an NVRAM 234 as the correction parameter q to be used for calculating the distance Z later. The data of the distance Z acquired by the distance calculation section 208 is output from the calculation section 120 to the external device 230 via the interface 210, so as to be used for various controls using the distance Z.
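The conversion of formula (7) can be sketched as follows; the function name and the numerical values are hypothetical, and q and f are assumed to be expressed in the same length unit.

```python
import math

def axis_misalignment_angle(q, f):
    """Displacement angle between the optical axes from the correction
    parameter q and the focal point distance f: theta = atan(q / f)
    (formula (7)); q and f must be in the same length unit."""
    return math.atan(q / f)

# Hypothetical values: q = 0.05 mm, f = 8 mm -> a small angle in radians,
# as expected for a slight misalignment of the optical axes.
theta = axis_misalignment_angle(q=0.05, f=8.0)
```

The resulting angle (about 0.36 degrees here) would be the amount by which a camera driving system rotates a camera, or simply a record of the misalignment when no such system is present.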
In this embodiment, the corresponding point k on the images captured (acquired) by the cameras 102 and 104 may be searched for pixel by pixel. Otherwise, from a viewpoint of faster processing, the right (R) and left (L) images may be divided into meshes, and the luminance, the hue, the color distribution, and the like of the meshes may be calculated, so that the corresponding point is determined mesh by mesh.
For explanatory purposes, a part (a) of
pkt=krt−klt
In embodiments, as described above, the parallax is calculated across the entire image. Therefore, to improve the accuracy in searching for the corresponding point in the captured images and in calculating the parallax, it is preferable to perform a known distortion correction process and the like.
In this embodiment, as long as the same subject in the right (R) and left (L) images acquired at the same time point can be recognized, the position of a specific pixel of the subject in the image may be used directly. In another embodiment, after the image is divided into meshes, the data of the pixels in the same mesh may be averaged and the positional coordinates of the meshes may be used. In the following, the mesh, or the pixel or pixel region of the mesh, is referred to as the corresponding point, which serves as a reference of the calculation in this embodiment.
In this embodiment, the size of the divided meshes is not limited to a specific size. However, when the size of the divided meshes is too small, it may take a longer time to search for the corresponding point, which is the processing target, in the right and left images. On the other hand, when the size of the divided meshes is too large, the accuracy of the correction may be degraded. In consideration of the above, in this embodiment, for example, the image is divided in a manner such that the number of meshes is 100 in each of the vertical and lateral directions, and therefore the total number of meshes in the image is 10,000.
In this case, the corresponding point is searched for by using the images captured by the two cameras 102 and 104 at time t0 and at time t1, which is a time point when a predetermined time has passed since time t0. More specifically, for example, the corresponding point may be searched for by using the images acquired by only one of the two cameras 102 and 104. Otherwise, the corresponding point may be searched for by using the images acquired by each of the two cameras 102 and 104. In searching for the corresponding point, a known technique such as the SAD (Sum of Absolute Differences) and the POC (Phase Only Correlation) may be used. Herein, the corresponding points of the images at different time points may be referred to as the corresponding points k (k = 1 to N).
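A minimal SAD-based corresponding-point search along a scan line can be sketched as follows. The function names, the block size, the toy images, and the sign convention (the subject appears shifted to the left in the right image) are all hypothetical; a practical implementation would add sub-pixel interpolation and the distortion correction mentioned above.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized pixel blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def find_corresponding_point(left, right, y, x, size, max_shift):
    """Search along the same scan line of the right image for the block that
    best matches the (y, x) block of the left image; the returned horizontal
    shift is the parallax in pixels."""
    ref = [row[x:x + size] for row in left[y:y + size]]
    best_shift, best_cost = 0, float("inf")
    for shift in range(max_shift + 1):
        if x - shift < 0:
            break
        cand = [row[x - shift:x - shift + size] for row in right[y:y + size]]
        cost = sad(ref, cand)
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift

# Toy 4x8 images: the subject in the right image appears shifted 2 pixels
# to the left relative to the left image, so the expected parallax is 2.
left = [[0, 0, 10, 20, 30, 0, 0, 0]] * 4
right = [[10, 20, 30, 0, 0, 0, 0, 0]] * 4
parallax = find_corresponding_point(left, right, y=0, x=2, size=3, max_shift=4)
```

The same block-matching loop, run over every mesh of the image, would yield the per-point parallaxes pk0 and pk1 that the offset value calculation section consumes.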
In steps S505 and S506, the offset value calculation section 206 calculates the parallax values of the corresponding points k of the images at different time points, the corresponding points k having been searched for in step S504. More specifically, in step S505, the offset value calculation section 206 acquires (calculates) the parallax value pk0 related to the corresponding point k based on the parallax image at time t0. In the same manner, in step S506, the offset value calculation section 206 acquires (calculates) the parallax value pk1 related to the corresponding point k based on the parallax image at time t1.
In step S507, the offset value calculation section 206 calculates the parallax offset value qk based on the formula (5) using the parallax value pk0 related to the corresponding point k at time t0, the parallax value pk1 related to the corresponding point k at time t1, and the running distance D of the vehicle between time t0 and time t1. As described above, there are plural corresponding points k across the entire image, and therefore there are plural parallax values pk0 and plural parallax values pk1, whereas there is only one value of the running distance D of the vehicle between time t0 and time t1. In step S508, the parallax offset value qk calculated in step S507 is output to the statistical processing section 240.
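Formula (5) itself is not reproduced in this passage, so the following is only one plausible reconstruction, assuming the standard stereo model Z = B·f/(p − q) with base line length B and focal length f (in pixels), and assuming the vehicle approaches the subject between the two time points so that pk0 > pk1. Under those assumptions the running distance D = Bf/(pk1 − q) − Bf/(pk0 − q) yields a quadratic in q, solved in closed form below. All parameter values are hypothetical.

```python
import math

# Hedged sketch: recovering a parallax offset q from two parallaxes and a
# known running distance D, under the model Z = Bf / (p - q). From
#   D = Bf/(p_k1 - q) - Bf/(p_k0 - q)
# it follows that (p_k0 - q)(p_k1 - q) = Bf*(p_k0 - p_k1)/D, a quadratic
# in q. This is a reconstruction, not the patent's formula (5) verbatim.
def parallax_offset(p_k0, p_k1, D, Bf):
    """Solve for q; of the two roots, the one below both parallaxes is physical."""
    c = p_k0 * p_k1 - Bf * (p_k0 - p_k1) / D
    disc = (p_k0 + p_k1) ** 2 - 4.0 * c
    return ((p_k0 + p_k1) - math.sqrt(disc)) / 2.0
```

For example, with Bf = 1000, a true offset of 2 gives parallaxes 27 and 22 at distances 40 and 50, and a running distance of 10; the function recovers the offset 2 exactly. Each corresponding point k yields one such qk, all sharing the single value of D.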
In step S509, the offset value calculation section 206 determines whether the calculations of the parallax offset values qk related to the plural corresponding points k across the entire image have been completed. When determining that the calculations have not been completed (NO in step S509), the process goes back to step S505 to repeat the calculations until the calculations for all the corresponding points k are completed. On the other hand, when determining that the calculations have been completed (YES in step S509), the process goes to step S510 to finish the process of the offset value calculation section 206. When the process is finished, the parallax offset values qk related to the plural corresponding points k across the entire image have been output to the statistical processing section 240.
As another embodiment of the present invention, the optimum value may also be determined by collectively performing the statistical analysis on the n parallax offset values qk in scatter-diagram form without generating the frequency histogram. Namely, an appropriate statistical analysis may be selected based on, for example, the processing capability and the memory capacity of the calculation section 120.
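One simple realization of the histogram-based analysis is sketched below: the qk values are binned, and the center of the most frequent bin is taken as the optimum value. The bin width is a hypothetical choice; as noted above, the embodiment leaves the exact analysis to the capability of the calculation section.

```python
import numpy as np

# Illustrative sketch of the statistical processing: build a frequency
# histogram of the parallax offset values q_k and take the modal bin's
# center as the optimum value (the correction parameter). The bin width
# of 0.05 pixel is an assumed, hypothetical choice.
def optimum_offset(q_values, bin_width=0.05):
    q = np.asarray(q_values, dtype=float)
    edges = np.arange(q.min(), q.max() + bin_width, bin_width)
    counts, edges = np.histogram(q, bins=edges)
    peak = np.argmax(counts)
    return 0.5 * (edges[peak] + edges[peak + 1])  # center of the modal bin
```

Taking the mode rather than the mean makes the estimate robust against outlier qk values from mismatched corresponding points, which is consistent with analyzing the distribution rather than averaging it.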
As described above, according to an embodiment of the present invention, it may become possible to provide a stereo camera device, a correction method, and a program that correct the parallax offset with high accuracy by making maximal use of the functions of the stereo camera device, regardless of whether the subject is moving, regardless of the shape of the subject, regardless of time and place, and without necessarily accurately measuring the relative distances to the subject.
The above-described functions in this embodiment may be realized by a device-executable program written in a programming language such as C, C++, Java (registered trademark), or the like, or in an assembly language. Further, the program may be stored in and distributed on a device-readable recording medium such as a hard disk device, a CD-ROM (Compact Disk Read Only Memory), an MO (Magneto-Optical disk), a flexible disk, an EEPROM (Electrically Erasable Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), or the like. Further, the program may be provided in a form readable by another device and may be transmitted via a network.
Although the invention has been described with respect to a specific embodiment for a complete and clear disclosure, the appended claims are not to be thus limited but are to be construed as embodying all modifications and alternative constructions that may occur to one skilled in the art that fairly fall within the basic teachings herein set forth.
The present application is based on and claims the benefit of priority of Japanese Patent Application Nos. 2010-205015 filed on Sep. 14, 2010, and 2011-121378 filed on May 31, 2011, the entire contents of which are hereby incorporated herein by reference.