A processing-target image generation device generates a processing-target image which is an object to be subjected to an image conversion process for acquiring an output image based on an input image taken by an image-taking part. A coordinates correspondence part causes input coordinates, spatial coordinates, and projection coordinates to correspond to each other, the input coordinates being on an input image plane on which the input image is located, the spatial coordinates being on a space model on which the input image is projected, the projection coordinates being on a processing-target image plane on which the processing-target image is positioned and the image projected on the space model is re-projected.
5. A processing-target image generation device that generates a processing-target image which is an object to be subjected to an image conversion process for acquiring an output image based on an input image taken by an image-taking part, the processing-target image generation device comprising:
a coordinates correspondence part configured to cause input coordinates, spatial coordinates, and projection coordinates to correspond to each other, the input coordinates being on an input image plane on which said input image is located, the spatial coordinates being on a space model on which said input image is projected, the projection coordinates being on a processing-target image plane on which said processing-target image is positioned and the image projected on said space model is re-projected,
wherein said coordinates correspondence part causes said spatial coordinates on said space model to correspond to said projection coordinates on said processing-target image plane so that each of lines connecting a plurality of coordinates positions on said processing-target image plane and a plurality of coordinates positions on said space model corresponding to the plurality of coordinates positions on said processing-target image plane, respectively, passes a predetermined single point.
11. A processing-target image generation method that generates a processing-target image which is an object to be subjected to an image conversion process for acquiring an output image based on an input image taken by an image-taking part, the processing-target image generation method comprising:
a coordinates correspondence step of causing input coordinates, spatial coordinates, and projection coordinates to correspond to each other, the input coordinates being on an input image plane on which said input image is located, the spatial coordinates being on a space model on which said input image is projected, the projection coordinates being on a processing-target image plane on which said processing-target image is positioned and the image projected on said space model is re-projected,
wherein said coordinates correspondence step includes a step of causing said spatial coordinates on said space model to correspond to said projection coordinates on said processing-target image plane so that each of lines connecting a plurality of coordinates positions on said processing-target image plane and a plurality of coordinates positions on said space model corresponding to the plurality of coordinates positions on said processing-target image plane, respectively, passes a predetermined single point.
1. A processing-target image generation device that generates a processing-target image which is an object to be subjected to an image conversion process for acquiring an output image based on an input image taken by an image-taking part, the processing-target image generation device comprising:
a coordinates correspondence part configured to cause input coordinates, spatial coordinates, and projection coordinates to correspond to each other, the input coordinates being on an input image plane on which said input image is located, the spatial coordinates being on a space model on which said input image is projected, the projection coordinates being on a processing-target image plane on which said processing-target image is positioned and the image projected on said space model is re-projected,
wherein said coordinates correspondence part causes said spatial coordinates on said space model to correspond to said projection coordinates on said processing-target image plane so that lines connecting a plurality of coordinates positions on said processing-target image plane and a plurality of coordinates positions on said space model corresponding to the plurality of coordinates positions on said processing-target image plane, respectively, are parallel to each other on a plane perpendicular to said processing-target image plane.
9. A processing-target image generation method that generates a processing-target image which is an object to be subjected to an image conversion process for acquiring an output image based on an input image taken by an image-taking part, the processing-target image generation method comprising:
a coordinates correspondence step of causing input coordinates, spatial coordinates, and projection coordinates to correspond to each other, the input coordinates being on an input image plane on which said input image is located, the spatial coordinates being on a space model on which said input image is projected, the projection coordinates being on a processing-target image plane on which said processing-target image is positioned and the image projected on said space model is re-projected,
wherein said coordinates correspondence step includes a step of causing said spatial coordinates on said space model to correspond to said projection coordinates on said processing-target image plane so that lines connecting a plurality of coordinates positions on said processing-target image plane and a plurality of coordinates positions on said space model corresponding to the plurality of coordinates positions on said processing-target image plane, respectively, are parallel to each other on a plane perpendicular to said processing-target image plane.
2. The processing-target image generation device as claimed in
3. An operation support system that supports a movement or an operation of a body to be operated, comprising:
the processing-target image generation device as claimed in
a display part configured to display an output image generated based on the processing-target image generated by said processing-target image generation device.
4. The operation support system as claimed in
6. The processing-target image generation device as claimed in
7. An operation support system that supports a movement or an operation of a body to be operated, comprising:
the processing-target image generation device as claimed in
a display part configured to display an output image generated based on the processing-target image generated by said processing-target image generation device.
8. The operation support system as claimed in
10. The processing-target image generation method as claimed in
12. The processing-target image generation method as claimed in
This is a continuation application filed under 35 U.S.C. 111(a) claiming benefit under 35 U.S.C. 120 and 365(c) of International Application PCT/JP2011/058897, filed on Apr. 8, 2011, designating the U.S., which claims priority to Japanese Patent Application No. 2010-091656. The entire contents of the foregoing applications are incorporated herein by reference.
The present invention relates to a processing-target image generation device and a processing-target image generation method that generate a processing-target image which is an object to be subjected to an image conversion process for acquiring an output image based on an input image, and to an operation support system using the device or the method.
There is known an image generation device that maps an input image from a camera on a predetermined space model on a three-dimensional space, and generates a visual point conversion image viewed from an arbitrary virtual visual point in the three-dimensional space by referring to the mapped space data (for example, refer to Japanese Patent Publication No. 3286306).
The image generation device disclosed in Japanese Patent Publication No. 3286306 projects an image taken by a camera mounted on a vehicle onto a three-dimensional space model configured by a plurality of plane surfaces or curved surfaces that surround the vehicle. The image generation device generates a visual point conversion image using the image projected onto the space model, and displays the generated visual point conversion image to a driver. The visual point conversion image is a combination of a road surface image, which virtually reflects a state of a road viewed from directly above, and a horizontal image, which virtually reflects an image in a horizontal direction. Thereby, when the driver of the vehicle looks at the visual point conversion image, the image generation device can relate an object in the visual point conversion image to an object actually existing outside the vehicle without causing an uncomfortable feeling.
The image generation device disclosed in Japanese Patent Publication No. 3286306 generates a view point conversion image using an image projected on a three-dimensional space model. Therefore, for example, in a case of using a cylindrical space model arranged around a vehicle, an effective view point conversion image can be generated when a virtual view point is set on a center axis of the cylinder so as to view the cylinder from directly above. However, with such a virtual view point, an image projected on an inner side surface of the cylinder (an image which is a base of a horizontal image in the view point conversion image) cannot be displayed at all. Conversely, when a virtual camera which views the side surface of the cylinder from outside is set, an image projected on an inner bottom surface of the cylinder (an image which is a base of a road surface image in the view point conversion image) and an image projected on the inner side surface thereof cannot be displayed at all. Therefore, a position of the virtual camera that can be set is limited to a large extent.
Moreover, the image generation device disclosed in Japanese Patent Publication No. 3286306 generates the view point conversion image by regarding an image projected on a space model as a single nondivisible unit. For example, when a cylindrical space model arranged around a vehicle is used, even if it is desirable to enlarge or reduce only an image projected on an inner side surface of the cylinder (an image which is a base of a horizontal image in the view point conversion image), the generation of the view point conversion image must be performed again by changing a position or an angle of view of the virtual camera. This also affects an image projected on an inner bottom surface of the cylinder (an image which is a base of a road surface image in the view point conversion image). Thus, the image generation device disclosed in Japanese Patent Publication No. 3286306 lacks flexibility in adjusting a view point conversion image.
It is an object of the present invention to provide a processing-target image generation device and a processing-target image generation method that enable flexible adjustment of an output image, and an operation support system using the device or the method.
In order to achieve the above-mentioned object, there is provided according to one aspect of the present invention a processing-target image generation device that generates a processing-target image which is an object to be subjected to an image conversion process for acquiring an output image based on an input image taken by an image-taking part, the processing-target image generation device including: a coordinates correspondence part configured to cause input coordinates, spatial coordinates, and projection coordinates to correspond to each other, the input coordinates being on an input image plane on which the input image is located, the spatial coordinates being on a space model on which the input image is projected, the projection coordinates being on a processing-target image plane on which the processing-target image is positioned and the image projected on the space model is re-projected.
There is provided according to another aspect of the present invention a processing-target image generation method that generates a processing-target image which is an object to be subjected to an image conversion process for acquiring an output image based on an input image taken by an image-taking part, the processing-target image generation method including: a coordinates correspondence step of causing input coordinates, spatial coordinates, and projection coordinates to correspond to each other, the input coordinates being on an input image plane on which the input image is located, the spatial coordinates being on a space model on which the input image is projected, the projection coordinates being on a processing-target image plane on which the processing-target image is positioned and the image projected on the space model is re-projected.
There is provided according to a further aspect of the present invention, an operation support system that supports a movement or an operation of a body to be operated, including: the above-mentioned processing-target image generation device; and a display part configured to display an output image generated based on the processing-target image generated by the processing-target image generation device.
According to the present invention, a processing-target image generation device and a processing-target image generation method, which generate a processing-target image enabling flexible adjustment of an output image, and an operation support system using the device or the method can be provided.
Hereafter, a description will be given, with reference to the drawings, of embodiments of the invention.
The processing-target image generation device 100 according to the embodiment generates, for example, a processing-target image which is an object to be subjected to an image conversion process to acquire an output image based on an input image taken by a camera 2 mounted on a construction machine. The processing-target image generation device 100 generates an output image which enables intuitive perception of a positional relationship with, and a sense of distance to, a peripheral obstacle by applying an image conversion process to the generated processing-target image, and presents the output image to an operator. As illustrated in
A cab (driver's cabin) 64 is provided on a front left side part of the upper-part turning body 63, and an excavation attachment E is provided on a front central part. The cameras 2 (a right side camera 2R and a backside camera 2B) are provided on a right side surface and a rear surface of the upper-part turning body 63. The display part 5 is installed in the cab 64 at a position where the display part 5 can be easily viewed by an operator.
Next, a description is given of each structural element of the processing-target image generation device 100.
The control part 1 includes a computer provided with a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), an NVRAM (Non-Volatile Random Access Memory), etc. For example, programs corresponding to each of a coordinates correspondence part 10 and an output image generation part 11 mentioned later are stored in the ROM or the NVRAM, and the CPU performs the processing of each part by executing the corresponding program while using the RAM as a temporary storage area.
The camera 2 is a device for acquiring an input image of an area surrounding the excavator 60, and includes a right side camera 2R and a backside camera 2B. The right side camera 2R and the backside camera 2B are attached to the right side surface and the rear surface of the upper-part turning body 63 so that, for example, an image of an area which is a dead zone to the operator can be taken (refer to
The camera 2 acquires an input image according to a control signal from the control part 1, and outputs the acquired input image to the control part 1. In addition, when the camera 2 acquires the input image using a fish-eye lens or a wide-angle lens, the camera 2 outputs to the control part 1 a corrected input image in which an apparent distortion or tilting caused by usage of those lenses is corrected. However, the camera 2 may output the acquired input image as it is without correction. In such a case, the control part 1 corrects the apparent distortion and tilting.
The input part 3 is a device for an operator to enable an input of various kinds of information to the image generation device 100, and includes, for example, a touch panel, a button switch, a pointing device, a keyboard, etc.
The storage part 4 is a device for storing various kinds of information, and includes, for example, a hard disk, an optical disk, a semiconductor memory, etc.
The display part 5 is a device for displaying image information, and includes, for example, a liquid crystal display or a projector, which is installed in the cab 64 (refer to
The “processing-target image” is an image generated based on an input image and to be subjected to an image conversion process (for example, a scale conversion, an affine conversion, a distortion conversion, or a viewpoint conversion process). For example, an input image taken by a camera that takes an image of a ground surface from above may contain, due to a wide angle of view, an image in a horizontal direction (for example, a part showing the sky). When such an input image is used in an image conversion process, the input image is projected onto a predetermined space model so that the horizontal image is not displayed unnaturally (for example, so that the sky part is not handled as being on a ground surface). Then, an image suitable for the image conversion process can be obtained by re-projecting the projection image projected on the space model onto a different two-dimensional plane. It should be noted that the processing-target image may be used as an output image as it is, without applying an image conversion process.
The “space model” is a target object on which an input image is projected, and includes at least a plane surface or a curved surface (for example, a plane surface parallel to the processing-target image plane or a plane surface or curved surface that forms an angle with the processing-target image plane) other than a processing-target image plane, which is a plane surface on which the processing-target image is positioned.
As illustrated in
Next, a description is given of the coordinates correspondence part 10 and the output image generation part that the control part 1 includes.
The coordinates correspondence part 10 is provided for causing the coordinates on the input image plane on which the input image taken by the camera 2 is positioned (may be referred to as input coordinates), the coordinates on the space model MD (may be referred to as spatial coordinates), and the coordinates on the processing-target image plane R3 (may be referred to as projection coordinates) to correspond to each other. For example, the coordinates on the input image plane, the coordinates on the space model MD, and the coordinates on the processing-target image plane R3 are caused to correspond to each other based on various parameters of the camera 2, such as an optical center, a focal distance, a CCD size, an optical axis direction vector, a camera horizontal direction vector, a projection system, etc., which are input through the input part 3, and on a previously determined positional relationship among the input image plane, the space model MD, and the processing-target image plane R3. The correspondence relationship is stored in the input image-space model correspondence relation map 40 and the space model-processing-target image correspondence relation map 41 of the storage part 4.
The output image generation part 11 generates an output image. For example, the output image generation part 11 causes the coordinates on the processing-target image plane R3 and the coordinates on the output image plane on which the output image is positioned to correspond to each other by applying a scale conversion, an affine conversion, or a distortion conversion to the processing-target image. The correspondence relationship is stored in the processing-target image-output image correspondence relation map 42 of the storage part 4. The output image generation part 11 generates an output image by relating a value of each pixel in the output image (for example, a brightness value, a color phase value, a chroma value, etc.) to a value of each pixel in the input image while referring to the input image-space model correspondence relation map 40 and the space model-processing-target image correspondence relation map 41 stored by the coordinates correspondence part 10.
Moreover, the output image generation part 11 may cause the coordinates on the processing-target image plane R3 and the coordinates on the output image plane on which the output image is positioned to correspond to each other based on various parameters of a virtual camera, such as an optical center, a focal distance, a CCD size, an optical axis direction vector, a camera horizontal direction vector, a projection system, etc., that are input through the input part 3. The correspondence relationship is stored in the processing-target image-output image correspondence relation map 42 of the storage part 4. Then, the output image generation part 11 generates an output image by relating a value of each pixel in the output image (for example, a brightness value, a color phase value, a chroma value, etc.) to a value of each pixel in the input image while referring to the input image-space model correspondence relation map 40 and the space model-processing-target image correspondence relation map 41 stored by the coordinates correspondence part 10.
It should be noted that the output image generation part 11 may generate the output image by changing a scale of the processing-target image without using a concept of virtual camera.
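The overall pixel-mapping pipeline described above, input image to space model to processing-target image to output image, can be sketched as a chain of table lookups. This is a minimal illustrative sketch, not the actual implementation: the map structures and names are assumptions, and real correspondence relation maps 40 to 42 would be dense per-pixel tables.

```python
# Minimal sketch of generating an output image from correspondence maps.
# Each map is modeled as a dict from destination coordinates to source
# coordinates (hypothetical representation for illustration only).

def generate_output(input_pixels, map40, map41, map42):
    """input_pixels: {(u, v): pixel value} of the input image.
    map40: space-model coords -> input-image coords (cf. map 40)
    map41: processing-target coords -> space-model coords (cf. map 41)
    map42: output-image coords -> processing-target coords (cf. map 42)
    Returns {(x, y): pixel value} for the output image."""
    output = {}
    for out_xy, proj_xy in map42.items():
        space_xyz = map41[proj_xy]          # re-projection correspondence
        in_uv = map40[space_xyz]            # projection correspondence
        output[out_xy] = input_pixels[in_uv]  # copy brightness/color value
    return output
```

A one-pixel usage example: if the output coordinate (0, 0) traces back through the maps to input coordinate (5, 5), the output pixel simply receives that input pixel's value.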
Next, a description is given of an example of a process performed by the coordinates correspondence part 10 and the output image generation part 11.
The coordinates correspondence part 10 can cause the input coordinates on the input image plane to correspond to the spatial coordinates on the space model by using Hamilton's quaternions.
First, in order to convert coordinates on the space model (coordinates of the XYZ coordinates system) into coordinates on the input image plane (coordinates of the UVW coordinates system), the original point of the XYZ coordinates system is parallel-moved to the optical center C (the original point of the UVW coordinates system), and then the XYZ coordinates system is rotated so that the X-axis coincides with the U-axis, the Y-axis with the V-axis, and the Z-axis with the −W-axis. Here, the sign “−” means that the direction is opposite. This is because the direction ahead of the camera is set to the +W direction in the UVW coordinates system, while the vertical downward direction is set to the −Z direction in the XYZ coordinates system.
If there are a plurality of cameras 2, each of the cameras 2 has an individual UVW coordinates system. Thereby, the coordinates correspondence part 10 translates and rotates the XYZ coordinates system with respect to each of the plurality of UVW coordinates systems.
The above-mentioned conversion is realized by translating the XYZ coordinates system so that the optical center C of the camera 2 becomes the original point of the XYZ coordinates system, thereafter rotating the XYZ coordinates system so that the Z-axis is coincident with the −W-axis, and further rotating the XYZ coordinates system so that the X-axis is coincident with the U-axis. Therefore, the coordinates correspondence part 10 integrates the two rotations into a single rotation operation by describing the conversion by Hamilton's quaternion.
By the way, a rotation that causes a certain vector A to coincide with a different vector B corresponds to a process of rotating by the angle formed between the vector A and the vector B, using a normal line of the plane defined by the vector A and the vector B as an axis. When the rotating angle is set to θ, the angle θ is expressed by the inner product of the vector A and the vector B as follows.
θ=cos−1{(A·B)/(|A||B|)} [Formula 1]
Moreover, the unit vector N of the normal line of the plane defined by the vector A and the vector B is expressed by the outer product of the vector A and the vector B as follows.
N=(A×B)/(|A||B|sin θ) [Formula 2]
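As a concrete check of the inner-product and outer-product relations above, the rotating angle θ and the unit normal N of two vectors can be computed as follows. This is a minimal sketch; the function names are illustrative and not part of the original device, and the parallel-vector case (zero outer product) is not handled.

```python
import math

def dot(a, b):
    # Inner product of two 3-D vectors.
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    # Outer product of two 3-D vectors.
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(a):
    return math.sqrt(dot(a, a))

def rotation_axis_angle(a, b):
    # Angle theta between A and B from their inner product, and the unit
    # normal N of the plane they define from their outer product.
    theta = math.acos(dot(a, b) / (norm(a) * norm(b)))
    c = cross(a, b)
    n = tuple(x / norm(c) for x in c)
    return theta, n
```

For example, the X-axis and Y-axis unit vectors yield θ = π/2 with the Z-axis unit vector as the normal.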
It should be noted that when i, j and k are imaginary number units, a quaternion is a hypercomplex number satisfying the following condition.
ii=jj=kk=ijk=−1 [Formula 3]
In the present embodiment, the quaternion Q is expressed as follows, where a real component is t and pure imaginary components are a, b and c.
Q=(t;a,b,c)=t+ai+bj+ck [Formula 4]
Therefore, the conjugate quaternion of the quaternion Q is expressed as follows.
Q*=(t;−a,−b,−c)=t−ai−bj−ck [Formula 5]
The quaternion Q can express a three-dimensional vector (a, b, c) by its pure imaginary components a, b and c while setting the real component t to 0 (zero). In addition, a rotating operation about an arbitrary vector as an axis can be expressed by the components t, a, b and c.
Further, the quaternion Q can integrate a plurality of consecutive rotating operations and express them as a single rotation. For example, a point D (ex, ey, ez), which is an arbitrary point S (sx, sy, sz) rotated by an angle θ about an arbitrary unit vector C (l, m, n) as an axis, can be expressed as follows.
D=(0;ex,ey,ez)=QSQ*, where S=(0;sx,sy,sz) and Q=(cos(θ/2);l·sin(θ/2),m·sin(θ/2),n·sin(θ/2)) [Formula 6]
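The single-rotation operation just described, a point rotated about a unit-vector axis via a quaternion and its conjugate, can be sketched in Python. The function names are illustrative assumptions; the quaternion convention (t; a, b, c) follows the text.

```python
import math

def quat_mul(q, r):
    # Hamilton product of quaternions q = (t, a, b, c) and r.
    t1, a1, b1, c1 = q
    t2, a2, b2, c2 = r
    return (t1*t2 - a1*a2 - b1*b2 - c1*c2,
            t1*a2 + a1*t2 + b1*c2 - c1*b2,
            t1*b2 - a1*c2 + b1*t2 + c1*a2,
            t1*c2 + a1*b2 - b1*a2 + c1*t2)

def rotate_point(point, axis, theta):
    # Rotate a 3-D point S about the unit vector axis (l, m, n) by angle
    # theta, using the rotation quaternion Q and computing D = Q S Q*.
    l, m, n = axis
    s = math.sin(theta / 2.0)
    q = (math.cos(theta / 2.0), l*s, m*s, n*s)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    p = (0.0,) + tuple(point)  # point as a pure-imaginary quaternion
    _, x, y, z = quat_mul(quat_mul(q, p), q_conj)
    return (x, y, z)
```

For example, rotating the point (1, 0, 0) by 90 degrees about the Z-axis moves it to (0, 1, 0), matching the right-hand rule.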
Here, in the present embodiment, when the quaternion expressing a rotation which causes the Z-axis to be coincident with the −W-axis is Qz, the point X on the X-axis in the XYZ coordinates system is moved to a point X′. Therefore, the point X′ is expressed as follows.
X′=QzXQz* [Formula 7]
Moreover, in the present embodiment, when the quaternion expressing a rotation, which causes a line connecting the point X′ on the X-axis and the original point to be coincident with the U-axis is Qx, the quaternion R expressing a rotation to cause the Z-axis to be coincident with the −W-axis and further cause the X-axis to be coincident with the U-axis is expressed as follows.
R=QxQz [Formula 8]
As mentioned above, when arbitrary coordinates P on the space model (XYZ coordinates system) are expressed by coordinates P′ on the input image plane (UVW coordinates system), the coordinates P′ are expressed as follows.
P′=RPR* [Formula 9]
Because the quaternion R is a constant for each of the cameras 2, the coordinates correspondence part 10 can convert the coordinates on the space model (XYZ coordinates system) into the coordinates on the input image plane (UVW coordinates system) by merely performing this operation.
After converting the coordinates on the space model (XYZ coordinates system) into the coordinates on the input image plane (UVW coordinates system), the coordinates correspondence part 10 computes an incident angle α formed by the optical axis G of the camera 2 and a line segment CP′ connecting the optical center C (coordinates on the UVW coordinates system) of the camera 2 and the coordinates P′, which are arbitrary coordinates P on the space model expressed by the UVW coordinates system.
Moreover, the coordinates correspondence part 10 computes an argument φ and a length of a line segment EP′. In a plane H that is parallel to the input image plane R4 (for example, a CCD surface) and contains the coordinates P′, the line segment EP′ connects the coordinates P′ and an intersecting point E of the plane H and the optical axis G, and the argument φ is the angle formed by the line segment EP′ and a U′-axis in the plane H.
In an optical system of a camera, normally, an image height h is a function of an incident angle α and a focal distance f. Accordingly, the coordinates correspondence part 10 computes the image height h by selecting an appropriate projection system such as a normal projection (h=f tan α), an orthogonal projection (h=f sin α), a stereographic projection (h=2 f tan(α/2)), an equisolid angle projection (h=2 f sin(α/2)), an equidistant projection (h=fα), etc.
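The image-height functions for these projection systems can be collected in a small lookup table. This is an illustrative sketch (the dictionary and function names are assumptions); note that the equisolid-angle formula is given here in its usual form h = 2f sin(α/2).

```python
import math

# Image height h as a function of focal distance f and incident angle
# alpha (radians), for the projection systems named in the text.
PROJECTIONS = {
    "normal":        lambda f, a: f * math.tan(a),
    "orthogonal":    lambda f, a: f * math.sin(a),
    "stereographic": lambda f, a: 2.0 * f * math.tan(a / 2.0),
    "equisolid":     lambda f, a: 2.0 * f * math.sin(a / 2.0),
    "equidistant":   lambda f, a: f * a,
}

def image_height(projection, f, alpha):
    # Select the projection system and compute the image height h.
    return PROJECTIONS[projection](f, alpha)
```

For example, under the equidistant projection with f = 2 and α = 0.5 rad, the image height is h = fα = 1.0.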
Thereafter, the coordinates correspondence part 10 decomposes the image height h into a U-component and a V-component on the UV coordinates system according to the argument φ, and divides them by a numerical value corresponding to a pixel size per one pixel of the input image plane R4. Thereby, the coordinates correspondence part 10 can cause the coordinates P (P′) on the space model MD to correspond to the coordinates on the input image plane R4.
It should be noted that when the pixel size per one pixel in the U-axis direction of the input image plane R4 is set to au, and the pixel size per one pixel in the V-axis direction of the input image plane R4 is set to av, the coordinates (u, v) on the input image plane R4 corresponding to the coordinates P (P′) on the space model MD are expressed as follows.
(u,v)=((h·cos φ)/au,(h·sin φ)/av) [Formula 10]
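The decomposition described above, image height h split into U and V components by the argument φ and scaled by the per-pixel sizes, can be sketched as follows. The helper name is a hypothetical illustration, and the image center is taken as the origin of the (u, v) coordinates for simplicity.

```python
import math

def to_input_pixel(h, phi, au, av):
    # Decompose the image height h into U and V components by the
    # argument phi, then divide by the per-pixel sizes au, av of the
    # input image plane R4 to obtain pixel coordinates (u, v).
    u = h * math.cos(phi) / au
    v = h * math.sin(phi) / av
    return (u, v)
```

For example, an image height of 2.0 at argument φ = 0 with au = 1.0 and av = 0.5 lands entirely on the U-axis at u = 2.0, v = 0.0.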
As mentioned above, the coordinates correspondence part 10 causes the coordinates on the space model MD to correspond to the coordinates on the one or more input image planes R4 existing for each camera, relates the coordinates on the space model MD, a camera identifier, and the coordinates on the input image plane R4 to each other, and stores the correspondence relationship in the input image-space model correspondence relation map 40.
Because the coordinates correspondence part 10 performs the conversion of coordinates using quaternions, it provides an advantage in that a gimbal lock is not generated, unlike a case where the conversion of coordinates is performed using Euler angles. However, the coordinates correspondence part 10 is not limited to performing the conversion of coordinates using quaternions, and may perform the conversion of coordinates using Euler angles.
If the coordinates P (P′) can be caused to correspond to coordinates on a plurality of input image planes R4, the coordinates correspondence part 10 may cause the coordinates P (P′) to correspond to the coordinates on the input image plane R4 of the camera whose incident angle is smallest, or to the coordinates on the input image plane R4 selected by an operator.
Next, a description is given of a process of re-projecting the coordinates on the curved surface area R2, from among the coordinates on the space model MD, onto the processing-target image plane R3 on the XY plane.
In the example illustrated in
It should be noted that when the camera 2 uses projection systems (for example, an orthogonal projection, a stereographic projection, an equisolid angle projection, an equidistant projection, etc.) other than the normal projection system, the coordinates correspondence part 10 causes the coordinates K1 and K2 on the input image plane R4 to correspond to the coordinates L1 and L2 on the space model MD according to the respective projection system.
Specifically, the coordinates correspondence part 10 causes the coordinates on the input image plane to correspond to the coordinates on the space model MD based on a predetermined function (for example, an orthogonal projection (h=f sin α), a stereographic projection (h=2 f tan(α/2)), an equisolid angle projection (h=2 f sin(α/2)), an equidistant projection (h=fα), etc.). In this case, the line segment K1-L1 and the line segment K2-L2 do not pass the optical center C of the camera 2.
In the example illustrated in
The coordinates correspondence part 10 can cause the coordinates on the plane surface area R1 of the space model MD to correspond to the coordinates on the processing-target image plane R3 using a group of parallel lines PL, similar to the coordinates on the curved surface area R2. However, in the example illustrated in
As mentioned above, the coordinates correspondence part 10 causes the spatial coordinates on the space model MD to correspond to the projection coordinates on the processing-target image plane R3, and stores the coordinates on the space model MD and the coordinates on the processing-target image plane R3 in the space model-processing-target image correspondence relation map 41 by relating them to each other.
In the example illustrated in
If the virtual camera 2V uses a projection system (for example, an orthogonal projection, a stereographic projection, an equisolid angle projection, an equidistant projection, etc.) other than the normal projection, the output image generation part 11 causes the coordinates N1 and N2 on the output image plane R5 of the virtual camera 2V to correspond to the coordinates M1 and M2 on the processing-target image plane R3 according to the respective projection system.
Specifically, the output image generation part 11 causes the coordinates on the output image plane R5 to correspond to the coordinates on the processing-target image plane R3 based on a predetermined function (for example, an orthogonal projection (h=f sin α), a stereographic projection (h=2 f tan(α/2)), an equisolid angle projection (h=f sin(α/2)), an equidistant projection (h=fα), etc.). In this case, the line segment M1-N1 and the line segment M2-N2 do not pass the optical center CV of the virtual camera 2V.
As mentioned above, the output image generation part 11 causes the coordinates on the output image plane R5 to correspond to the coordinates on the processing-target image plane R3, and stores the coordinates on the output image plane R5 and the coordinates on the processing-target image plane R3 in the processing-target image-output image correspondence relation map 42 by relating them to each other. Then, the output image generation part 11 generates the output image by relating a value of each pixel in the output image to a value of each pixel in the input image while referring to the input image-space model correspondence relation map 40 and the space model-processing-target image correspondence relation map 41 stored in the storage part 4.
It should be noted that
Next, a description is given, with reference to
As illustrated in
The change in the intervals of the group of coordinates means that only an image portion corresponding to the image projected on the curved surface area R2 of the space model MD from among the image portions on the output image plane R5 (refer to
Next, a description is given, with reference to
As illustrated in
Similar to the case of the group of parallel lines PL, the change in the intervals of the group of coordinates means that only an image portion corresponding to the image projected on the curved surface area R2 of the space model MD from among the image portions on the output image plane R5 (refer to
As explained above, the image generation device 100 can linearly or nonlinearly enlarge or reduce an image portion (for example, a horizontal image) of the output image corresponding to the image projected on the curved surface area R2 of the space model MD without affecting an image portion (for example, a road image) of the output image corresponding to the image projected on the plane surface area R1 of the space model MD. Thereby, an object positioned around the excavator 60 (an object in an image of the surroundings viewed horizontally from the excavator 60) can be rapidly and flexibly enlarged or reduced without affecting the road image (a virtual image of the excavator 60 viewed from directly above) in the vicinity of the excavator 60, which can improve visibility of a dead angle area of the excavator 60.
Next, a description will be given, with reference to
In
As illustrated in
That the distance D2 changes to the distance D4 while the distance D1 remains constant means that only an image portion corresponding to an image projected on the curved surface area R2 of the space model MD from among the image portions on the output image plane R5 is enlarged or reduced, similar to the action explained with reference to
It should be noted that when an output image is generated directly from the image projected on the space model MD, the image portion on the output image plane R5 corresponding to the image projected on the curved surface area R2 cannot be enlarged or reduced by itself, because the plane surface area R1 and the curved surface area R2 cannot be handled separately (they cannot be separate objects of enlargement or reduction).
As illustrated in
It should be noted that a description was given, with reference to
Next, a description will be given, with reference to
First, the control part 1 causes a coordinate point on the processing-target image plane R3 to correspond to a coordinate point on the space model MD by the coordinates correspondence part 10 (step S1).
Specifically, the coordinates correspondence part 10 acquires an angle formed between the group of parallel lines PL and the processing-target image plane R3, and computes a point at which one of the group of parallel lines PL extending from the coordinate point on the processing-target image plane R3 intersects with the curved surface area R2 of the space model MD. Then, the coordinates correspondence part 10 derives a coordinate point on the curved surface area R2 corresponding to the computed point as a coordinate point on the curved surface area R2 corresponding to a coordinate point on the processing-target image plane R3, and stores a correspondence relationship therebetween in the space model-processing-target image correspondence relation map 41. The angle formed between the group of parallel lines PL and the processing-target image plane R3 may be a value previously stored in the storage part 4, etc., or may be a value dynamically input by the operator through the input part 3.
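As a sketch of this computation, assume the curved surface area R2 is a cylinder of radius R centered on the Z axis, and that a parallel line PL runs from a point on the processing-target image plane R3 toward the axis, rising at the angle beta formed between the group of parallel lines PL and R3. This parameterization and the function name are assumptions chosen for illustration, not the device's exact formulation:

```python
import math

# Map a point (x, y) on the processing-target image plane R3 (z = 0) to the
# corresponding point on the curved surface area R2, modeled as a cylinder of
# radius R about the Z axis, by following a parallel line PL inclined at beta.
def r3_to_r2(x: float, y: float, R: float, beta: float):
    r = math.hypot(x, y)                 # radial distance of the R3 point
    if r <= R:
        # Inside the cylinder footprint the point lies on the plane surface
        # area R1 and corresponds to itself (see the following paragraph).
        return (x, y, 0.0)
    z = (r - R) * math.tan(beta)         # height gained before hitting R2
    s = R / r                            # pull the radial position onto the cylinder
    return (x * s, y * s, z)
```

A point 10 m from the axis with R = 5 m and beta = 45 degrees lands on the cylinder wall 5 m above the plane, illustrating how the angle of the group of parallel lines PL controls how far up R2 a given R3 point is re-projected.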
When the coordinates on the processing-target image plane R3 are coincident with the coordinates on the plane surface area R1 of the space model MD, the coordinates correspondence part 10 derives those coordinates on the plane surface area R1 as the coordinates corresponding to the coordinates on the processing-target image plane R3, and stores a correspondence relationship therebetween in the space model-processing-target image correspondence relation map 41.
Thereafter, the control part 1 causes the coordinates on the space model MD derived by the above mentioned process to correspond to the coordinates on the input image plane R4 by the coordinates correspondence part 10 (step S2).
Specifically, the coordinates correspondence part 10 acquires the coordinate point of the optical center C of the camera 2 using a normal projection (h=f tan α), and computes a point at which a line segment extending from a coordinate point on the space model MD and passing the optical center C intersects with the input image plane R4. Then, the coordinates correspondence part 10 derives a coordinate point on the input image plane R4 corresponding to the computed point as the coordinate point on the input image plane R4 corresponding to the coordinate point on the space model MD, and stores a correspondence relationship therebetween in the input image-space model correspondence relation map 40.
Thereafter, the control part 1 determines whether or not all of the coordinate points on the processing-target image plane R3 are caused to correspond to coordinate points on the space model MD and the coordinate points on the input image plane R4 (step S3). If it is determined that all of the coordinate points have not been caused to correspond (NO of step S3), the process of step S1 and step S2 is repeated.
On the other hand, if it is determined that all of the coordinate points have been caused to correspond (YES of step S3), the control part 1 causes the processing-target image generation process to end and, thereafter, causes the output image generation process to start. Thereby, the output image generation part 11 causes the coordinates on the processing-target image plane R3 to correspond to the coordinates on the output image plane R5 (step S4).
Specifically, the output image generation part 11 generates an output image by applying a scale conversion, an affine conversion or a distortion conversion to a processing-target image. Then, the output image generation part 11 stores a correspondence relationship between the coordinates on the processing-target image plane R3 and the coordinates on the output image plane R5 in the processing-target image-output image correspondence relation map 42, the correspondence relationship being determined according to the applied scale conversion, affine conversion, or distortion conversion.
Alternatively, when generating the output image using the virtual camera 2V, the output image generation part 11 may compute the coordinates on the output image plane R5 from the coordinates on the processing-target image plane R3, and may store a correspondence relationship therebetween in the processing-target image-output image correspondence relation map 42.
Alternatively, when generating the output image using the virtual camera 2V using a normal projection (h=f tan α), the output image generation part 11 may compute, after acquiring the coordinate point of the optical center CV of the virtual camera 2V, a point at which a line segment extending from a coordinate point on the output image plane R5, which line segment passes the optical center CV, intersects with the processing-target image plane R3. Then, the output image generation part 11 may derive the coordinate point on the processing-target image plane R3 corresponding to the computed point as the coordinate point on the processing-target image plane R3 corresponding to the coordinate point on the output image plane R5, and may store a correspondence relationship therebetween in the processing-target image-output image correspondence relation map 42.
Thereafter, the control part 1, by the output image generation part 11, follows the correspondence relationship between the coordinates on the input image plane R4 and the coordinates on the space model MD, the correspondence relationship between the coordinates on the space model MD and the coordinates on the processing-target image plane R3, and the correspondence relationship between the coordinates on the processing-target image plane R3 and the coordinates on the output image plane R5, while referring to the input image-space model correspondence relation map 40, the space model-processing-target image correspondence relation map 41 and the processing-target image-output image correspondence relation map 42, and acquires values (for example, a brightness value, a hue value, a chroma value, etc.) possessed by the coordinates on the input image plane R4 corresponding to the coordinates on the output image plane R5 (step S5). It should be noted that, when a plurality of coordinates on a plurality of input image planes R4 correspond to one coordinate point on the output image plane R5, the output image generation part 11 may derive statistical values (for example, a mean value, a maximum value, a minimum value, an intermediate value, etc.) based on each of the values of the plurality of coordinates on the plurality of input image planes R4, and may use the statistical values as the values of the coordinates on the output image plane R5.
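This step amounts to chained lookups through the three correspondence maps followed by an optional statistical blend. A minimal sketch, assuming the maps are plain dictionaries (a representation chosen here for illustration; the device stores them in the storage part) and using the mean as the statistical value:

```python
from statistics import mean

# Resolve one output-image pixel by following the correspondence maps backward:
#   output plane R5 -> processing-target plane R3 (map 42)
#   R3 -> space model MD                          (map 41)
#   MD -> one or more (camera, pixel) sources     (map 40)
# and blend the source values when several cameras see the same point.
def output_pixel_value(out_xy, map42, map41, map40, input_images):
    r3_xy = map42[out_xy]
    md_xyz = map41[r3_xy]
    sources = map40[md_xyz]                       # list of (camera_id, input_xy)
    values = [input_images[cam][xy] for cam, xy in sources]
    return mean(values)                           # statistical value of the sources
```

With two cameras reporting brightness 100 and 200 for the same space-model point, the output pixel receives 150; swapping `mean` for `max`, `min`, or a median gives the other statistical values mentioned above.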
Thereafter, the control part 1 determines whether or not all of the values of the coordinates on the output image plane R5 have been caused to correspond to the values of the coordinates on the input image plane R4 (step S6). If it is determined that all of the values of the coordinates have not been caused to correspond (NO of step S6), the process of step S5 is repeated.
On the other hand, if it is determined that all of the values of the coordinates have been caused to correspond (YES of step S6), the control part 1 generates an output image, and ends the series of processes.
According to the above-mentioned structure, the processing-target image generation device 100 is able to generate the processing-target image and the output image that can cause the operator to intuitively grasp the positional relationship between the construction machine and a peripheral obstacle.
The processing-target image generation device 100 is capable of reliably causing each coordinate point on the processing-target image plane R3 to correspond to one or more coordinate points on the input image plane R4 by performing the correspondence operation so as to track back from the processing-target image plane R3 to the input image plane R4 through the space model MD. Therefore, a better quality processing-target image can be generated than in a case where the coordinate correspondence operation is performed in the order from the input image plane R4 to the processing-target image plane R3 through the space model MD. When the correspondence operation is performed in the order from the input image plane R4 to the processing-target image plane R3 through the space model MD, each of the coordinate points on the input image plane R4 can be caused to correspond to one or more coordinate points on the processing-target image plane R3; however, some of the coordinate points on the processing-target image plane R3 may fail to correspond to any coordinate point on the input image plane R4. In such a case, it is necessary to apply an interpolation process to those coordinate points on the processing-target image plane R3.
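The advantage of tracking back from the processing-target image plane can be seen in a toy one-dimensional enlargement: forward mapping (input to output) leaves unfilled holes that would require interpolation, while backward mapping (output to input) assigns every destination pixel a value. This demo illustrates the general principle only and is not the device's code:

```python
# Enlarge a 1-D "image" by 2x, comparing the two correspondence directions.
src = [10, 20, 30, 40]

# Forward: push each source pixel to its destination; odd positions stay empty.
forward = [None] * 8
for i, v in enumerate(src):
    forward[i * 2] = v

# Backward: for each destination pixel, look up its source; no holes remain.
backward = [src[j // 2] for j in range(8)]

print(forward)   # [10, None, 20, None, 30, None, 40, None]
print(backward)  # [10, 10, 20, 20, 30, 30, 40, 40]
```

The `None` entries in the forward result are exactly the coordinate points that would need an interpolation process, which the backward operation avoids.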
Moreover, when enlarging or reducing only an image corresponding to the curved surface area R2 of the space model MD, the processing-target image generation device 100 can realize a desired enlargement or reduction by merely rewriting only a part associated with the curved surface area R2 in the space model-processing-target image correspondence relation map 41 by changing the angle formed between the group of parallel lines PL and the processing-target image plane R3 without rewriting the contents of the input image-space model correspondence relation map 40.
Moreover, when changing an appearance of the output image, the processing-target image generation device 100 is capable of generating a desired output image (a scale conversion image, an affine conversion image or a distortion conversion image) by merely rewriting the processing-target image-output image correspondence relation map 42 by changing various parameters regarding a scale conversion, an affine conversion or a distortion conversion, without rewriting the contents of the input image-space model correspondence relation map 40 and the contents of the space model-processing-target image correspondence relation map 41.
Similarly, when changing a view point of the output image, the processing-target image generation device 100 is capable of generating an output image (view point conversion image) which is viewed from a desired view point by merely rewriting the processing-target image-output image correspondence relation map 42 by changing values of various parameters of the virtual camera 2V, without rewriting the contents of the input image-space model correspondence relation map 40 and the space model-processing-target image correspondence relation map 41.
Next, a description is given, with reference to
As best illustrated in
In
A perpendicular line drawn from the optical center of the backside camera 2B to the cylinder center axis (re-projection axis) is perpendicular to a perpendicular line drawn from the optical center of the right side camera 2R to the cylinder center axis (re-projection axis). Although the two perpendicular lines intersect with each other at a point J2 while existing in a plane parallel to the plane surface area R1 and the plane on which the processing-target image plane R3 is positioned in the present embodiment, the two perpendicular lines may be positioned on separate planes, respectively, and may be in a twisted positional relationship.
According to the positional relationship between the camera 2 and the space model MD illustrated in
In
On the other hand, a perpendicular line drawn from the optical center of the backside camera 2B to the cylinder center axis (re-projection axis) is not perpendicular to a perpendicular line drawn from the optical center of the right side camera 2R to the cylinder center axis (re-projection axis). The perpendicular line drawn from the optical center of the backside camera 2B intersects with the perpendicular line drawn from the optical center of the right side camera 2R at a point J2 which is not on the cylinder center axis (re-projection axis). In the present embodiment, the optical centers of the backside camera 2B and the right side camera 2R exist on a plane parallel to the plane surface area R1 and the plane on which the processing-target image plane R3 is positioned. However, the optical centers of the backside camera 2B and the right side camera 2R may be positioned on different planes, respectively, and the two perpendicular lines may be in a twisted positional relationship.
According to the positional relationship between the camera 2 and the space model MD illustrated in
On the other hand, as illustrated in
In
On the other hand, the optical axis G1 and the optical axis G2 do not intersect with each other on the cylinder center axis (re-projection axis) but intersect at a point J1 which does not exist on the cylinder center axis (re-projection axis). It should be noted that the optical axis G1 and the optical axis G2 may be in a twisted positional relationship if components of a projection on a plane parallel to the XY-plane intersect at points which do not exist on the cylinder center axis (re-projection axis).
According to the positional relationship between the camera 2 and the space model MD illustrated in
On the other hand, as illustrated in
As mentioned above, the processing-target image generation device 100 is capable of generating the processing-target image without bending an object existing in the optical axis direction of the camera at the boundary between the road image portion and the horizontal image portion, by arranging the space model MD so that the cylinder center axis (re-projection axis) of the space model MD and the optical axis of the camera intersect with each other. It should be noted that this advantage can be obtained in a case of a single camera and also in a case of three or more cameras.
Moreover, the processing-target image generation device 100 is capable of generating the processing-target image without bending objects located just to the right of and just behind the excavator 60 at the boundary between the road image portion and the horizontal image portion, by arranging the space model MD so that the perpendicular lines drawn from the optical centers of the backside camera 2B and the right side camera 2R to the cylinder center axis (re-projection axis) of the space model MD are perpendicular to each other. It should be noted that this advantage can be obtained in a case of three or more cameras.
The output image is trimmed to be in a circular shape so that the image when the excavator 60 performs a turning operation can be displayed naturally. That is, the output image is displayed so that the center CTR of the circle is on the cylinder center axis of the space model MD and also on the turning axis PV of the excavator 60, and the output image rotates about the center CTR in response to the turning operation of the excavator 60. In this case, the cylinder center axis of the space model MD may or may not be coincident with the re-projection axis.
The radius of the space model MD is, for example, 5 meters. The angle formed between the group of parallel lines PL and the processing-target image plane R3, or the height of the start point of the group of auxiliary lines AL, may be set so that, when an object (for example, an operator) exists at a position distant from the turning center of the excavator 60 by a maximum reach distance (for example, 12 meters) of an excavation attachment E, the object is displayed sufficiently large (for example, 7 millimeters or more).
Further, in the output image, a CG image of the excavator 60 is arranged so that the front of the excavator 60 coincides with the upper portion of the screen of the display part 5 and the turning center thereof coincides with the center CTR. This is to facilitate recognition of the positional relationship between the excavator 60 and an object that appears in the output image. It should be noted that a frame image containing various sets of information, such as orientation, may be arranged on the periphery of the output image.
In this state, as illustrated in
Although the processing-target image generation device 100 uses the cylindrical space model MD as a space model in the above-mentioned embodiments, the processing-target image generation device 100 may use a space model having another columnar shape, such as a polygonal column, or may use a space model constituted by two planes including a bottom surface and a side surface. Alternatively, the processing-target image generation device 100 may use a space model having only a side surface.
The above-mentioned processing-target image generation device 100 is mounted together with cameras on a construction machine which travels by itself and is equipped with movable members, such as a bucket, an arm, a boom and a turning mechanism, and is incorporated into an operation support system which supports movement of the construction machine and operation of those movable members while presenting an image of surrounding areas to an operator. However, the processing-target image generation device 100 may be mounted together with cameras on another machine to be operated, such as an industrial machine or a stationary crane, which has a movable member but does not travel by itself, and may be incorporated into an operation support system which supports operation of that machine.
The present invention is not limited to the specifically disclosed embodiments, and various variations and modifications may be made without departing from the scope of the present invention.
Assignment: on Oct 09 2012, Yoshihisa Kiyota assigned his interest to Sumitomo Heavy Industries, Ltd. (Reel 029112, Frame 0675); the application was filed on Oct 11 2012 by Sumitomo Heavy Industries, Ltd.