An obstacle recognition apparatus is provided which can recognize an obstacle by accurately extracting a floor surface. It includes a distance image generator (222) to produce a distance image using a disparity image and a homogeneous transform matrix, a plane detector (223) to detect plane parameters on the basis of the distance image from the distance image generator (222), a coordinate transformer (224) to transform the homogeneous transform matrix into a coordinate of the ground-contact plane of a robot apparatus (1), and a floor surface detector (225) to detect a floor surface using the plane parameters from the plane detector (223) and the result of the coordinate transformation from the coordinate transformer (224) and to supply the plane parameters to an obstacle recognition block (226). The obstacle recognition block (226) selects a point on the floor surface using the plane parameters of the floor surface detected by the floor surface detector (225) and recognizes an obstacle on the basis of the selected point.
4. A computer-readable medium adapted to store an obstacle recognition program, executed by a robot apparatus to recognize an obstacle, the program comprising steps of:
producing a distance image on the basis of a disparity image calculated based on image data and sensor data outputs;
using a homogeneous transform matrix corresponding to the disparity image;
detecting plane parameters on the basis of the distance image produced in the distance image producing step;
transforming a coordinate system into one on the floor surface;
detecting the floor surface on the basis of the plane parameters detected in the plane detecting step;
selecting a point on the floor surface using the plane parameters supplied from the detecting step to recognize an obstacle on the basis of the selected point; and
utilizing the plane parameters of the floor surface to recognize an obstacle.
3. An obstacle recognition method comprising steps of:
producing a distance image on the basis of a disparity image calculated based on image data supplied from a plurality of imaging means provided in a robot apparatus and sensor data outputs from a plurality of sensing means provided in the robot apparatus and using a homogeneous transform matrix corresponding to the disparity image at locations of the plurality of distance image producing means;
detecting plane parameters on the basis of the distance image produced in the distance image producing step;
transforming a coordinate system at the locations of the plurality of imaging means into one on the floor surface;
detecting the floor surface on the basis of the plane parameters detected in the plane detecting step;
selecting a point on the floor surface using the plane parameters supplied from the detecting step to recognize an obstacle on the basis of the selected point; and
recognizing an obstacle based on the plane parameters of the floor surface.
1. An obstacle recognition apparatus comprising:
a distance image producing means for producing a distance image on the basis of a disparity image calculated based on image data supplied from a plurality of imaging means provided in a robot apparatus and sensor data outputs from a plurality of sensing means provided in the robot apparatus and using a homogeneous transform matrix corresponding to the disparity image at locations of the plurality of imaging means;
a plane detecting means for detecting plane parameters on the basis of the distance image produced by the distance image producing means; a coordinate transforming means for transforming a coordinate system at the locations of the plurality of imaging means into one on the floor surface;
a floor surface detecting means for detecting the floor surface on the basis of the plane parameters detected by the plane detecting means; and
an obstacle recognizing means for selecting a point on the floor surface using the plane parameters supplied from the plane detecting means to recognize an obstacle on the basis of the selected point, wherein the obstacle recognizing means uses the plane parameters of the floor surface detected by the floor surface detecting means to recognize an obstacle.
5. A mobile robot apparatus comprising:
a head unit having a plurality of imaging means and sensing means;
at least one moving leg unit having a sensing means;
a body unit having an information processing means and a sensing means, the mobile robot apparatus moving on a floor surface by the use of the moving leg unit while recognizing an obstacle on the floor surface,
wherein the body unit comprises:
a distance image producing means for producing a distance image on the basis of a disparity image calculated based on image data supplied from the plurality of imaging means provided in the head unit and sensor data outputs from the plurality of sensing means provided in the head, moving leg and body units, respectively,
and using a homogeneous transform matrix corresponding to the disparity image at locations of the plurality of imaging means;
a plane detecting means for detecting plane parameters on the basis of the distance image produced by the distance image producing means;
a coordinate transforming means for transforming a coordinate system at the locations of the plurality of imaging means into one on the floor surface;
a floor surface detecting means for detecting the floor surface on the basis of the plane parameters detected by the plane detecting means; and
an obstacle recognizing means for selecting a point on the floor surface using the plane parameters supplied from the plane detecting means to recognize an obstacle on the basis of the selected point, wherein the obstacle recognizing means uses the plane parameters of the floor surface detected by the floor surface detecting means to recognize an obstacle.
2. The apparatus as set forth in
6. The apparatus as set forth in
1. Field of the Invention
The present invention generally relates to an obstacle recognition apparatus and method, and more particularly to an obstacle recognition apparatus, obstacle recognition method and an obstacle recognition program, applied in a mobile robot apparatus to recognize an obstacle on a floor, and a mobile robot apparatus.
This application claims the priority of the Japanese Patent Application No. 2002-073388 filed on Mar. 15, 2002, the entirety of which is incorporated by reference herein.
2. Description of the Related Art
Needless to say, it is important for an autonomous robot to be able to recognize its surroundings in order to plan a route it should take and to move along that route. Conventional mobile or locomotive robots include wheeled robots, walking robots (bipedal and quadrupedal), etc. The wheeled robot is provided with ultrasonic sensors disposed around it in parallel to the floor surface to detect ultrasound reflected from a wall or the like. Since the ultrasonic sensors are disposed in parallel to the floor surface, they will not detect return ultrasound from the floor surface itself and can thus recognize every point that has reflected the ultrasound as an obstacle. The ultrasound information is therefore easy to process for obstacle recognition. Since this method can detect only an obstacle higher than a predetermined height, however, the robot cannot recognize an obstacle such as a small step (lower than the predetermined height) or a hole or concavity in the floor. On the other hand, the walking robot, quadrupedal or bipedal, has a distance sensor installed on a part thereof whose posture can actively be changed, such as the head, hand end, etc.
The quadrupedal walking robot (entertainment robot) that has recently become well known employs a distance sensor installed on its head to derive a floor surface parameter from a posture of the robot and judges, on the basis of the floor surface parameter, whether a ranged point lies on the floor surface or belongs to an obstacle.
However, the method adopted in the quadrupedal walking robot can range only one point (or several points, when the distance detection is repeated with the robot posture being changed) and so can hardly sense the surroundings of the robot satisfactorily. Also, since this method detects only one point at each distance measurement, it is not highly reliable in terms of the accuracy of distance detection.
It is therefore an object of the present invention to overcome the above-mentioned drawbacks of the related art by providing an obstacle recognition apparatus, obstacle recognition method and an obstacle recognition program, capable of accurately extracting a floor surface to recognize an obstacle on the floor.
The present invention has another object to provide a mobile robot apparatus capable of moving while recognizing an obstacle of a floor by the use of the obstacle recognition apparatus.
The above object can be attained by providing an obstacle recognition apparatus including, according to the present invention, a distance image producing means for producing a distance image on the basis of a disparity image calculated based on image data supplied from a plurality of imaging means provided in a robot apparatus and sensor data outputs from a plurality of sensing means provided in the robot apparatus and using a homogeneous transform matrix corresponding to the disparity image at locations of the plurality of imaging means, a plane detecting means for detecting a parameter on the basis of the distance image produced by the distance image producing means, a coordinate transforming means for transforming a coordinate system at the locations of the plurality of imaging means into one on the floor surface, a floor surface detecting means for detecting the floor surface on the basis of the parameter detected by the plane detecting means, and an obstacle recognizing means for selecting a point on the floor surface using the parameter supplied from the plane detecting means to recognize an obstacle on the basis of the selected point.
Also, the above object can be attained by providing an obstacle recognition method including, according to the present invention, steps of producing a distance image on the basis of a disparity image calculated based on image data supplied from a plurality of imaging means provided in a robot apparatus and sensor data outputs from a plurality of sensing means provided in the robot apparatus and using a homogeneous transform matrix corresponding to the disparity image at locations of the plurality of distance image producing means, detecting a parameter on the basis of the distance image produced in the distance image producing step, transforming a coordinate system at the locations of the plurality of imaging means into one on the floor surface, detecting the floor surface on the basis of the parameter detected in the plane detecting step, and selecting a point on the floor surface using the parameter supplied from the detecting step to recognize an obstacle on the basis of the selected point.
Also, the above object can be attained by providing an obstacle recognition program executed by a robot apparatus to recognize an obstacle, the program including, according to the present invention, steps of producing a distance image on the basis of a disparity image calculated based on image data supplied from a plurality of imaging means provided in the robot apparatus and sensor data outputs from a plurality of sensing means provided in the robot apparatus and using a homogeneous transform matrix corresponding to the disparity image at locations of the plurality of distance image producing means, detecting a parameter on the basis of the distance image produced in the distance image producing step, transforming a coordinate system at the locations of the plurality of imaging means into one on the floor surface, detecting the floor surface on the basis of the parameter detected in the plane detecting step, and selecting a point on the floor surface using the parameter supplied from the detecting step to recognize an obstacle on the basis of the selected point.
Also, the above object can be attained by providing a mobile robot apparatus composed of a head unit having a plurality of imaging means and sensing means, at least one moving leg unit having a sensing means, and a body unit having an information processing means and a sensing means, the apparatus moving on a floor surface by the use of the moving leg unit while recognizing an obstacle on the floor surface and including in the body unit thereof, according to the present invention, a distance image producing means for producing a distance image on the basis of a disparity image calculated based on image data supplied from the plurality of imaging means provided in the head unit and sensor data outputs from the plurality of sensing means provided in the head, moving leg and body units, respectively, and using a homogeneous transform matrix corresponding to the disparity image at locations of the plurality of imaging means, a plane detecting means for detecting a parameter on the basis of the distance image produced by the distance image producing means, a coordinate transforming means for transforming a coordinate system at the locations of the plurality of imaging means into one on the floor surface, a floor surface detecting means for detecting the floor surface on the basis of the plane parameters detected by the plane detecting means, and an obstacle recognizing means for selecting a point on the floor surface using the plane parameters supplied from the plane detecting means to recognize an obstacle on the basis of the selected point.
According to the present invention, a two-dimensional distance sensor such as a stereoscopic camera or the like is used as each of the plurality of imaging means. With such distance sensors, it is possible to extract the floor surface more robustly and accurately from the relation between the plane detected by image recognition and the robot posture.
These objects and other objects, features and advantages of the present invention will become more apparent from the following detailed description of the preferred embodiments of the present invention when taken in conjunction with the accompanying drawings.
An embodiment of the present invention will be described herebelow with reference to the accompanying drawings. The embodiment is an application of the obstacle recognition apparatus according to the present invention to a bipedal walking robot apparatus.
Referring now to
The color image 202 and disparity image 203 are supplied to a CPU (controller) 220 incorporated in a body unit 260 also included in the robot apparatus 1. Also, each of the joints of the robot apparatus 1 is provided with an actuator 230. The actuator 230 is supplied with a control signal 231 as a command from the CPU 220 to drive a motor according to the command. Each joint (actuator) has a potentiometer installed thereon which supplies the present angle of rotation of the motor to the CPU 220. The potentiometers installed on the actuators 230 and the sensors 240, including a touch sensor installed on the foot sole and a gyro sensor installed on the body unit 260, measure the present state of the robot apparatus 1 such as the present joint angles, ground-contact information, posture information, etc. and supply them as sensor data 241 to the CPU 220. The CPU 220 is thus supplied with the color image 202 and disparity image 203 from the stereo image processor 210 and the sensor data 241 such as the joint angles from all the actuators 230, and implements a software configuration as will be described later.
The software used in this embodiment is configured for each of objects to perform various recognition processes to recognize a position and travel (moving distance) of the robot apparatus 1, an obstacle to the robot apparatus 1, and an environmental map, etc. of the robot apparatus 1 and output a list of behaviors any of which the robot apparatus 1 finally selects. The embodiment uses, as a coordinate indicative of a position of the robot apparatus 1, two coordinate systems: one is a camera coordinate system belonging to the worldwide reference system in which a specific object such as a landmark which will be described later is taken as the coordinate origin (will be referred to as “absolute coordinate” hereunder wherever appropriate), and the other is a robot-centered coordinate system in which the robot apparatus 1 itself is taken as the coordinate origin (will be referred to as “relative coordinate” hereunder wherever appropriate).
The above software is generally indicated with a reference 300. As shown in
First, the obstacle recognition apparatus according to the present invention, installed in the aforementioned robot apparatus 1, will be described. The obstacle recognition apparatus is constructed in the CPU 220 which performs the plane extractor PLEX 320.
The above distance image generator 222 produces a distance image on the basis of a disparity image calculated based on image data supplied from the two CCD cameras provided in the robot apparatus 1 and sensor data output from a plurality of sensors provided in the robot apparatus 1 and using a homogeneous transform matrix corresponding to the disparity image at locations of the two CCD cameras. The plane detector 223 detects plane parameters on the basis of the distance image produced by the distance image generator 222. The coordinate transformer 224 transforms the homogeneous transform matrix into a coordinate on a surface with which the robot apparatus 1 is in contact. The floor surface detector 225 detects a floor surface using the plane parameters supplied from the plane detector 223 and result of the coordinate conversion made by the coordinate transformer 224, and supplies the plane parameters to the obstacle recognition block 226. The obstacle recognition block 226 selects a point on the floor surface using the plane parameters of the floor surface detected by the floor surface detector 225 and recognizes an obstacle on the basis of the selected point.
As aforementioned, images captured by the CCD cameras 200L and 200R are supplied to the stereo image processor 210, which calculates a color image (YUV) 202 and a disparity image (YDR) 203 from the disparity information (distance information) between the right and left images 201R and 201L shown in detail in
The kinematics/odometry layer KINE 310 determines the joint angles in the sensor data 302 at the time when the image data 301 was acquired, on the basis of the input data including the image data 301 and sensor data 302, and uses the joint angle data to transform the robot-centered coordinate system, in which the robot apparatus 1 is fixed at the center, into the coordinate system of the CCD cameras installed on the head unit 250. In this case, the embodiment of the obstacle recognition apparatus according to the present invention derives a homogeneous transform matrix 311 etc. of the camera coordinate system from the robot-centered coordinate system and supplies the homogeneous transform matrix 311 and the disparity image 312 corresponding to it to the obstacle recognition apparatus 221 (plane extractor PLEX 320) in
The obstacle recognition apparatus 221 (plane extractor PLEX 320) receives the homogeneous transform matrix 311 and disparity image 312 corresponding to the former, and follows a procedure given in
First, the coordinate transformer 224 in the obstacle recognition apparatus 221 (plane extractor PLEX 320) receives the homogeneous transform matrix 311, and the distance image generator 222 receives the disparity image 312 corresponding to the homogeneous transform matrix 311 (step S61). Then, the distance image generator 222 produces, from the disparity image 312, a distance image of three-dimensional position data (X, Y, Z) in the camera coordinate system for each pixel, using calibration parameters that absorb the lens distortion and installation error of the stereo cameras (step S62). Each three-dimensional data point carries a reliability parameter obtained from the reliability of the input image such as the disparity image or distance image, and is selected based on this reliability parameter for entry to the distance image generator 222.
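As a concrete illustration of step S62, the sketch below back-projects a disparity map into a three-dimensional point cloud in the camera coordinate system and filters it by a per-pixel reliability value. It assumes a rectified stereo pair with a known focal length and baseline and uses a plain pinhole model; the function name, parameter names and the reliability threshold are illustrative assumptions rather than details of the actual apparatus.

```python
import numpy as np

def disparity_to_points(disparity, reliability, focal_px, baseline_m, min_reliability=0.5):
    """Back-project a disparity image into per-pixel 3D points (X, Y, Z) in the
    camera coordinate system, keeping only sufficiently reliable pixels."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))           # pixel coordinates
    valid = (disparity > 0) & (reliability >= min_reliability)

    z = np.zeros_like(disparity, dtype=float)
    z[valid] = focal_px * baseline_m / disparity[valid]      # depth from disparity
    x = (u - w / 2.0) * z / focal_px                         # back-projection (pinhole model)
    y = (v - h / 2.0) * z / focal_px

    return np.stack([x[valid], y[valid], z[valid]], axis=1)  # N x 3 point cloud
```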
The plane detector 223 samples data at random from the selected three-dimensional data group and estimates a plane through Hough transformation. That is, on the assumption that the direction of the normal vector is (θ, φ) and the distance from the origin is d, the plane detector 223 estimates a plane by calculating plane parameters (θ, φ, d) and voting them directly into a voting space (θ, ψ, d)=(θ, φ cos θ, d). Thus, the plane detector 223 detects the parameters of the prevailing plane in the image (step S63). The plane parameters are detected using a histogram of the parameter space (θ, φ) (voting space) shown in
During the voting, the plane detector 223 varies the weight of each vote depending upon the reliability parameter attached to the three-dimensional data, the plane-parameter calculation method and the like, so as to vary the importance of each vote, and it also averages the weights of votes distributed in the vicinity of a candidate peak to estimate the peak from the vote importance. Thus, the plane detector 223 can obtain highly reliable plane parameters. Also, the plane detector 223 uses these plane parameters as initial values and iteratively refines them to determine a plane, which yields a still more reliable plane. Further, the plane detector 223 calculates a reliability of the plane using the reliability parameters of the three-dimensional data from which the finally determined plane has been calculated, the residual error in the iteration, etc., and outputs the plane reliability along with the plane data, whereby subsequent operations can be done more easily. As above, the plane extraction is performed by a method based on probability theory in which the parameters of the prevailing plane included in the three-dimensional data are determined through estimation of a probability density function standing on a voting, that is, a histogram. By using the plane parameters, it is possible to know the distance, from the plane, of each measured point whose position has initially been determined from the images.
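The plane extraction described above can be sketched as random sampling of point triplets followed by weighted voting in a parameter histogram. For brevity the sketch below votes directly in a (θ, φ, d) grid rather than the (θ, φ cos θ, d) space used in the text, and the sample count, bin counts and weighting rule are arbitrary assumptions; an iterative least-squares refinement seeded with the returned peak would correspond to the iteration mentioned above.

```python
import numpy as np

def hough_plane(points, weights, n_samples=2000, bins=(18, 36, 50), d_max=2.0):
    """Estimate the dominant plane (theta, phi, d) from an N x 3 point cloud by
    sampling random triplets and accumulating reliability-weighted votes."""
    hist = np.zeros(bins)
    rng = np.random.default_rng(0)
    for _ in range(n_samples):
        idx = rng.choice(len(points), 3, replace=False)
        p0, p1, p2 = points[idx]
        n = np.cross(p1 - p0, p2 - p0)                 # normal of the sampled triplet
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                                   # degenerate (collinear) sample
        n /= norm
        if n[2] < 0:
            n = -n                                     # keep a consistent normal orientation
        d = abs(float(np.dot(n, p0)))                  # distance of the plane from the origin
        theta = np.arccos(np.clip(n[2], -1.0, 1.0))
        phi = np.arctan2(n[1], n[0]) % (2 * np.pi)
        i = min(int(theta / np.pi * bins[0]), bins[0] - 1)
        j = min(int(phi / (2 * np.pi) * bins[1]), bins[1] - 1)
        k = min(int(d / d_max * bins[2]), bins[2] - 1)
        hist[i, j, k] += weights[idx].mean()           # importance of this vote
    i, j, k = np.unravel_index(np.argmax(hist), hist.shape)
    return ((i + 0.5) * np.pi / bins[0],               # bin centres as the plane estimate
            (j + 0.5) * 2 * np.pi / bins[1],
            (k + 0.5) * d_max / bins[2])
```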
Next, the coordinate transformer 224 transforms the homogeneous transform matrix 311 of the camera coordinate system into the coordinate system of the plane with which the robot's foot sole is in contact, as shown in
Next, the obstacle recognition block 226 selects a point in the plane (on the floor surface) from the original distance image using the plane parameters selected by the floor surface detector 225 in step S65 (step S66). The selection is done based on the formulae (1) and (2) given below and based on the fact that the distance d from the plane is smaller than a threshold Dth.
d<Dth (2)
Therefore, the obstacle recognition block 226 can recognize a point (not on the floor surface) other than the one judged as being in the plane or on the floor surface in step S66 as an obstacle (step S67). The obstacle thus recognized is represented by the point (x, y) on the floor surface and its height z. For example, a height of z<0 indicates a point lower than the floor plane.
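The floor/obstacle split of steps S66 and S67 can be sketched as a point-to-plane distance test. The representation of the floor plane as a unit normal plus offset, the 2 cm threshold and the function name below are assumptions for illustration, not the exact form of formulae (1) and (2); the points are assumed to be already expressed in the ground-contact (floor) coordinate system.

```python
import numpy as np

def classify_points(points, normal, d_plane, dth=0.02):
    """Split measured points into floor points and obstacle points.

    points  : N x 3 array in the floor coordinate system
    normal  : unit normal of the detected floor plane
    d_plane : offset so that normal . p = d_plane holds on the plane
    dth     : threshold Dth on the distance from the plane (2 cm, an assumption)
    """
    dist = points @ normal - d_plane              # signed distance from the plane
    on_floor = np.abs(dist) < dth                 # corresponds to d < Dth in formula (2)
    obstacles = points[~on_floor]
    heights = dist[~on_floor]                     # height < 0 marks a hole or step below the floor
    return points[on_floor], np.column_stack([obstacles[:, :2], heights])
```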
Thus, it is possible to determine that a point higher than the robot is not an obstacle, since the robot can pass under it.
Also, by transforming the coordinate such that an extracted-floor image (
Thus, the obstacle recognition apparatus can extract a stable plane since it detects the plane using many measured points. Also, it can select the correct plane by collating candidate planes obtained from the image with the floor parameters obtained from the robot's posture. Further, according to the present invention, since the obstacle recognition apparatus substantially recognizes the floor surface, not the obstacle itself, the recognition is independent of the shape and size of an obstacle. Moreover, since an obstacle is represented by its distance from the floor surface, the obstacle recognition apparatus can detect not only an obstacle but also a small step or a concavity of the floor. Also, the obstacle recognition apparatus can easily judge, with the size of the robot taken into consideration, whether the robot can walk over or under a recognized obstacle. Furthermore, since an obstacle can be represented on a two-dimensional floor surface, the route planning methods used with conventional robots can be applied, and such routing can be calculated more speedily than with a three-dimensional representation.
Next, the software used in the robot apparatus 1 shown in
The kinematics/odometry layer KINE 310 of the software 300 shown in
Next, the kinematics/odometry layer KINE 310 determines a time correspondence between the image data 301 and sensor data 302 (step S102-1). More specifically, there is determined a joint angle in the sensor data 302 at a time when the image data 301 has been captured. Then, the kinematics/odometry layer KINE 310 transforms the robot-centered coordinate system having the robot apparatus 1 positioned in the center thereof into a coordinate system of the camera installed on the head unit (step S102-2) using the joint angle data. In this case, the kinematics/odometry layer KINE 310 derives a homogeneous transform matrix etc. of the camera coordinate system from the robot-centered coordinate system, and sends the homogeneous transform matrix 311 and image data corresponding to the former to an image-recognition object. Namely, the kinematics/odometry layer KINE 310 supplies the homogeneous transform matrix 311 and disparity image 312 to the plane extractor PLEX 320, and the homogeneous transform matrix 311 and color image 313 to the landmark sensor CLS 340.
Also, the kinematics/odometry layer KINE 310 calculates a travel of the robot apparatus 1 from walking parameters obtained from the sensor data 302 and step counts from the foot-sole sensors, and then the moving distance of the robot apparatus 1 in the robot-centered coordinate system. In the following explanation, the moving distance of the robot apparatus 1 in the robot-centered coordinate system will also be referred to as "odometry" wherever appropriate. This "odometry" is indicated with a reference 314 and supplied to the occupancy grid calculator OG 330 and the absolute coordinate localizer LZ 350.
Supplied with the homogeneous transform matrix 311 calculated by the kinematics/odometry layer KINE 310 and the disparity image 312 supplied from the corresponding stereo camera, the plane extractor PLEX 320 updates the existent data stored in the memory thereof (step S103). Then, the plane extractor PLEX 320 calculates three-dimensional position data (range data) from the disparity image 312 with the use of the calibration parameters of the stereo camera etc. (step S104-1). Next, the plane extractor PLEX 320 extracts a plane other than those of walls, tables, etc. Also, the plane extractor PLEX 320 determines, based on the homogeneous transform matrix 311, a correspondence between the plane thus extracted and the plane with which the foot sole of the robot apparatus 1 is in contact, selects the floor surface, takes a point not on the floor surface, for example, a thing or the like at a position higher than a predetermined threshold, as an obstacle, calculates the distance of the obstacle from the floor surface and supplies information on the obstacle (321) to the occupancy grid calculator (OG) 330 (step S104-2).
Supplied with the odometry 314 calculated by the kinematics/odometry layer KINE 310 and the obstacle grid information 321 calculated by the plane extractor PLEX 320, the occupancy grid calculator 330 updates the existent data stored in the memory thereof (step S105). Then, the occupancy grid calculator 330 updates, by the probability theory-based method, the occupancy grids each holding the probability that an obstacle is present at the corresponding point on the floor surface (step S106).
The occupancy grid calculator OG 330 holds obstacle grid information on an area extending 4 meters, for example, around the robot apparatus 1, namely the aforementioned environmental map, and posture information indicative of the angle in which the robot apparatus 1 is directed. By updating the environmental map by the above probability theory-based method and supplying the result of updating (obstacle grid information 331) to an upper layer, namely the situated behavior layer SBL 360 in this embodiment, the occupancy grid calculator OG 330 makes it possible to plan a route along which the robot apparatus 1 detours around the obstacle.
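A minimal sketch of such a probability-theory-based grid update is given below, using a log-odds representation in which each cell keeps the probability that it is occupied by an obstacle and every observation from the plane extractor nudges that probability up or down. The grid size and the update constants are illustrative assumptions, not values from the occupancy grid calculator OG 330.

```python
import numpy as np

class OccupancyGrid:
    """Grid of cells holding the probability that an obstacle occupies each cell."""

    def __init__(self, size=100, p_hit=0.7, p_miss=0.4):
        self.logodds = np.zeros((size, size))
        self.l_hit = np.log(p_hit / (1 - p_hit))      # increment for an obstacle observation
        self.l_miss = np.log(p_miss / (1 - p_miss))   # decrement for a floor observation

    def update(self, obstacle_cells, floor_cells):
        for x, y in obstacle_cells:                   # cells where an obstacle point fell
            self.logodds[x, y] += self.l_hit
        for x, y in floor_cells:                      # cells seen as floor surface
            self.logodds[x, y] += self.l_miss

    def probability(self):
        return 1.0 / (1.0 + np.exp(-self.logodds))    # back to probabilities
```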
Supplied with the homogeneous transform matrix 311 and color image 313 supplied from the kinematics/odometry layer KINE 310, the landmark sensor CLS 340 updates the data pre-stored in the memory thereof (step S107). Then, the landmark sensor CLS 340 processes the color image 313 to detect a pre-recognized color landmark. Using the homogeneous transform matrix 311, it transforms the position and size of the landmark in the color image 313 into ones in the camera coordinate system. Further, the landmark sensor CLS 340 transforms the position of the landmark in the camera coordinate system into a one in the robot-centered coordinate system using the homogeneous transform matrix, and supplies information 341 on the landmark position in the robot-centered coordinate system (landmark relative position information) to the absolute coordinate localizer LZ 350 (step S108).
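The chain of transformations in step S108 reduces to applying the 4x4 homogeneous transform matrix to the landmark position expressed in homogeneous coordinates; the sketch below assumes the matrix maps camera coordinates to robot-centered coordinates, and the names are placeholders.

```python
import numpy as np

def landmark_to_robot_frame(landmark_cam_xyz, T_robot_from_camera):
    """Transform a landmark position from the camera coordinate system into the
    robot-centered coordinate system with a 4x4 homogeneous transform matrix."""
    p = np.append(landmark_cam_xyz, 1.0)          # homogeneous coordinates
    return (T_robot_from_camera @ p)[:3]
```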
When the absolute coordinate localizer LZ 350 is supplied with the odometry 314 from the kinematics/odometry layer KINE 310 and color landmark relative position information 341 from the landmark sensor CLS 340, it updates the data pre-stored in the memory thereof (step S109). Then, it calculates an absolute coordinate (a position in the worldwide coordinate system) of the robot apparatus 1 by the probability theory-based method using the color landmark absolute coordinate (a worldwide coordinate system) 351 pre-recognized by the absolute coordinate localizer LZ 350, color landmark relative position information 341 and odometry 314. Then, the absolute coordinate localizer LZ 350 supplies the absolute coordinate position 351 to the situated behavior layer SBL 360.
Supplied with the obstacle grid information 331 from the occupancy grid calculator OG 330 and the absolute coordinate position 351 from the absolute coordinate localizer LZ 350, the situated behavior layer SBL 360 updates the data pre-stored in the memory thereof (step S111). Then, the situated behavior layer SBL 360 acquires the result of recognition of an obstacle existing around the robot apparatus 1 from the obstacle grid information 331 supplied from the occupancy grid calculator OG 330 and the present absolute coordinate of the robot apparatus 1 from the absolute coordinate localizer LZ 350, thereby planning a route to a target point given in the absolute coordinate system or the robot-centered coordinate system, along which the robot apparatus 1 can walk without colliding with any obstacle, and issuing an action command to walk along the route. That is, the situated behavior layer SBL 360 decides, based on the input data, an action the robot apparatus 1 has to take depending upon the situation, and outputs a list of actions (step S112).
For a navigation by the user of the robot apparatus 1, the user is provided with the result of recognition of an obstacle being around the robot apparatus 1 from the occupancy grid calculator OG 330 and the absolute coordinate of the present position of the robot apparatus 1 from the absolute coordinate localizer LZ 350, and an action command is issued in response to an input from the user.
As shown in
The face identifier FI 377 is an object to identify a detected face image. Supplied with the information 372 including rectangular area images indicative of the respective face areas from the face detector FDT 371, it judges to which person the face image belongs by comparing it with the features of persons listed in a person dictionary held in its memory. Then, the face identifier FI 377 supplies the person ID information 378, along with the position and size information of the face image area received from the face detector FDT 371, to a distance information linker DIL 379.
In
Also, a motion detector MDT 375 is provided as an object to detect a moving part of an image. It supplies information 376 on a detected moving area to the distance information linker DIL 379.
The aforementioned distance information linker DIL 379 is an object to add distance information to supplied two-dimensional information to provide three-dimensional information. More specifically, it combines the ID information 378 from the face identifier FI 377, the information 374 such as the positions, sizes and features of the color area divisions from the multicolor tracker MCT 373, and the information 376 on the moving area from the motion detector MDT 375 with distance information to produce three-dimensional information 380, and supplies the data to a short-term memory STM 381.
The short-term memory STM 381 is an object to hold information on the surroundings of the robot apparatus 1 for a relatively short length of time. It is supplied with the result of sound recognition (words, direction of the sound source and certainty factor) from an audio decoder (not shown), the position and size of a flesh-color area and the position and size of a face area from the multicolor tracker MCT 373, and person ID information etc. from the face identifier FI 377. Also, the short-term memory STM 381 receives the direction (joint angle) of the neck of the robot apparatus 1 from the sensors provided on the robot apparatus 1. Then, using these results of recognition and sensor outputs in a coordinated manner, it stores information on where a person exists at present, who he or she is, which person spoke which words and what conversation the first person has had with the second person. The short-term memory STM 381 passes physical information on such things or targets and events (history) arranged on the time base to a host module such as the situated behavior layer SBL 360.
The situated behavior layer SBL 360 is an object to determine a behavior (situated behavior) of the robot apparatus 1 on the basis of the information supplied from the aforementioned short-term memory STM 381. It can evaluate and perform a plurality of behaviors at the same time. Also, with the robot body set in a sleep state by selection of another behavior, it can start up still another behavior.
Next, each of the objects, except for the obstacle detection by the plane extractor PLEX 320 which has previously been described in detail, will be described in further detail.
The occupancy grid calculator OG 330 holds an environmental map being map information concerning the robot-centered coordinate system and composed of grids having a predetermined size, and posture information indicative of an angle at which the robot apparatus 1 is directed from a predetermined direction such as x- or y-direction. The environmental map has a grid (obstacle-occupied area) recognized as an obstacle based on the obstacle grid information supplied from the plane extractor PLEX 320. When the robot apparatus 1 moves, namely, when supplied with the odometry 314, the occupancy grid calculator OG 330 updates the pre-recognized environmental map and posture information pre-stored in the memory thereof correspondingly to a change in posture (differential moving angle) and moving distance (differential travel) of the robot apparatus 1. If the differential travel is smaller than the grid size, the occupancy grid calculator OG 330 will not update the environmental map. When the travel exceeds the grid size, the occupancy grid calculator OG 330 updates the environmental map. Also, by changing the environmental map and size of the grids in the map appropriately as necessary, it is possible to reduce the required amount of calculation, memory copy costs, etc.
The occupancy grid calculator OG 330 will be described in further detail below with reference to
As shown in
As mentioned above, when supplied with the odometry 314 from the kinematics/odometry layer KINE 310, which receives the image data and sensor data, the occupancy grid calculator OG 330 updates the environmental map pre-stored in the memory thereof. In this memory, there are held the environmental map 400, which is the map information including information on the obstacle existing around the robot apparatus 403, and posture information 404 indicative of the direction of the robot apparatus (moving body) in the map. The posture information indicates the angle in which the robot apparatus 403 is directed from a predetermined direction, for example, the x- or y-direction. Then, the occupancy grid calculator OG 330 will update the pre-recognized environmental map and posture direction information shown in
First, the method of updating the environmental map will be described. In the position recognition apparatus according to the present invention, the environmental map is not necessarily updated at every travel (moving distance) of the robot apparatus. More particularly, if the travel stays within a predetermined area (grid), the environmental map is not updated. When the travel exceeds the grid size, the environmental map will be updated.
On the other hand, if the robot apparatus 503 used to be within the central grid 501 of the environmental map 500 as in
More particularly, on the assumption that the size of a grid, being the minimum unit recognizable as an obstacle-occupied area, is CS (cell size), that the position of the robot apparatus within the grid is (Bx, By) and that the two-dimensional travel (moving distance) of the robot apparatus is (dx, dy), the magnitude of the grid shift (Sx, Sy) is given by the following equation (3):
where [ ] is a Gaussian symbol indicating a maximum integer value not exceeding a value within [ ]. A position (Rx, Ry) of the robot apparatus within a grid is given by the following equation (4):
(Rx, Ry)=(dx−CS×Sx, dy−CS×Sy) (4)
The values of the equations (3) and (4) are shown in
When the robot apparatus 603 has moved the distance (dx, dy) 605, the shift (Sx, Sy) of the environmental map can be calculated by the aforementioned equation (3). It should be noted here that the updated environmental map 610 shown in FIG. 14(b) is obtained as the environmental map 600 having been shifted by an x-axial distance Sx and a y-axial distance Sy oppositely to the x-axial travel dx and the y-axial travel dy, respectively. Also, it should be noted that the rightward travel (dx, dy) of the robot apparatus is taken as a forward travel while the leftward shift (Sx, Sy) of the environmental map, opposite to the travel of the robot apparatus, is taken as a forward shift.
The robot apparatus 613 having thus moved will be within a central grid 611 in an updated environmental map 610. A position (Ax, Ay) within the central grid 611 of the updated environmental map 610, taken by the robot apparatus 613 after the environmental map has been shifted, can be calculated as (Ax, Ay)=(Bx+Rx, By+Ry)=(Bx+dx−CS×Sx, By+dy−CS×Sy) from the position (Bx, By), in the central grid 601 of the environmental map 600 before shifted, of the robot apparatus 603 before moved and the travel (moving distance) (Rx, Ry) 615 within the grid.
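The bookkeeping of equations (3) and (4) can be written out as follows. Since the body of equation (3) is not reproduced in the text, the floor-division form used here is an assumption inferred from the derivation of (Ax, Ay) above.

```python
import math

def shift_map(bx, by, dx, dy, cs):
    """Grid shift (Sx, Sy), residual in-grid travel (Rx, Ry) and new in-grid
    position (Ax, Ay) for a robot at in-grid position (Bx, By) moving by (dx, dy),
    with CS the cell size. Assumes Sx = [(Bx + dx) / CS], [.] being the Gaussian
    (floor) symbol, which is an inference rather than a quoted formula."""
    sx = math.floor((bx + dx) / cs)
    sy = math.floor((by + dy) / cs)
    rx, ry = dx - cs * sx, dy - cs * sy           # equation (4)
    ax, ay = bx + rx, by + ry                     # position inside the new central grid
    return (sx, sy), (rx, ry), (ax, ay)
```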
Next, how to update information on the posture direction of the robot apparatus will be described.
More specifically, as shown in
Also, when an entertainment robot apparatus makes any recognition other than the position recognition, such as sound recognition or face identification, the priority of the position recognition, which is not directly related to such functions, may be lowered. In such a case, when the position recognition apparatus (position recognition object) is stopped from functioning, a travel of the robot apparatus made while the function is out of operation will possibly cause a problem: when the position recognition is resumed because its priority has become high again, its result will not match the result of position recognition obtained before the function was suspended (the so-called "kidnap problem"). Therefore, in case the priority of the position recognition is low, increasing the size CS of the environmental map grids or decreasing the size of the environmental map itself makes it possible to reduce the CPU resource consumption without suspending the function of the position recognition object.
Further, the environmental map and robot posture information obtained through the aforementioned operations can be displayed on a control panel shown in
It is important that the mobile or locomotive robot can autonomously operate as having been described in the foregoing and can also be remote-controlled by the user. Some of the conventional robot apparatus have a control panel on which there is displayed an image picked up by the camera installed on the robot body. However, it is difficult for the user to control his robot just by viewing an image of the environment around the robot apparatus, that is supplied from such a camera. Especially, the control is more difficult with an image from a camera having a limited field of view. In this embodiment, however, since the environmental map 910 is displayed on the remote-control panel 900 in addition to the image 901 captured by a camera such as a CCD camera installed on the robot apparatus 913, the robot apparatus 913 can be controlled with a considerably higher accuracy than the conventional robot apparatus remote-controlled based on the camera data alone.
In the foregoing, it has been described that the environmental map is not updated even when the robot posture is changed. Normally, however, the robot apparatus has the imaging unit such as a CCD camera installed at the front side of the head unit thereof, and thus the camera-captured image 901 provides a view of what lies ahead of the robot apparatus. Therefore, when the robot apparatus is turned 180 deg., for example, correspondingly to a changed posture, the camera-captured image will be opposite in direction to the environmental map. Therefore, in case the posture is changed through an angle larger than a predetermined angle, such as 45 deg. or 90 deg., the robot apparatus can be controlled with a much higher accuracy by turning the environmental map correspondingly to such a rotation.
The landmark sensor CLS 340 identifies, based on sensor information on the robot apparatus 1 and information on the actions made by the robot apparatus 1, the location (position and posture) of the robot apparatus in an environment including color landmarks 1004 having colors such as a green part 1001, pink part 1002 or blue part 1003, for example, as shown in
The landmark sensor CLS 340 manages a robot-existence probability p(l) for each location l on grids (x, y) provided nearly regularly in a two-dimensional work space. The existence probability p(l) is updated in response to the entry of a move of the robot apparatus, namely internal observation information a, or of an observation of the landmark, namely external observation information s.
The existence probability p(l) depends upon the existence probability p(l′) in the preceding state, namely in the preceding location l′ of the robot apparatus itself, and the transition probability p(l|a, l′) that the preceding location l′ shifts to the location l when the movement a is made in the location l′. More specifically, the product of the existence probability p(l′) of each preceding location l′ and the transition probability p(l|a, l′) that the location l′ shifts to the location l when the movement a is made there is summed (integrated) over l′ to provide the existence probability p(l) of the present location l of the robot apparatus itself. Therefore, when the movement a of the robot apparatus is observed as a result of the internal observation, the robot-existence probability p(l) can be updated for each grid according to the following formula (5):
p(l)←Σl′p(l|a, l′)p(l′) (5)
Also, the existence probability p(l) that the robot apparatus is in the location l depends upon the previous existence probability p(l) and the probability p(s|l) that the landmark is observable in the location l. Therefore, when an observation of the landmark in the location l, namely external observation information s, is supplied to the landmark sensor CLS 340, the robot-existence probability p(l) can be updated according to the following formula (6), the right side of which is normalized by dividing it by the probability p(s) that the landmark is observable:
p(l)←p(s|l)p(l)/p(s) (6)
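In code, the two update rules amount to a prediction step that redistributes probability mass over the grid according to the motion, followed by a Bayes correction and normalization. The sketch below assumes the motion model is supplied as a small dictionary of cell offsets with probabilities, which is an illustrative representation rather than the one used in the robot software; wrap-around at the map edges is accepted for brevity.

```python
import numpy as np

def markov_predict(p, motion_model, a):
    """Formula (5): p(l) <- sum over l' of p(l | a, l') * p(l')."""
    kernel = motion_model(a)                      # {(dx, dy): probability}
    new_p = np.zeros_like(p)
    for (dx, dy), prob in kernel.items():
        new_p += prob * np.roll(np.roll(p, dx, axis=0), dy, axis=1)
    return new_p

def markov_correct(p, likelihood_s_given_l):
    """Formula (6): p(l) <- p(s | l) * p(l) / p(s), with p(s) obtained by
    summing over all grid cells."""
    p = p * likelihood_s_given_l
    return p / p.sum()
```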
The Markov localizer ML 342 holds the location of the robot apparatus in the work space as a probability density distribution over discrete grids, and it is supplied with the external observation information s on an observation of the landmark and the internal observation information a on motions of the robot apparatus itself to update the probability density distribution. At each time, the Markov localizer ML 342 supplies the grid whose probability density value is the largest to the EKF controller 344 as the result of location estimation.
The robot location sensing by the Markov localization is featured mainly by its robustness against sensor noise and by the fact that the sensing, though coarse, can be done speedily.
On the other hand, the extended Kalman filter EKF 343 shown in
The extended Kalman filter EKF 343 is composed of a state model defining the relation between the motion information a on the robot apparatus itself and the state, namely the location l, and an observation model defining the relation between the robot location l and the landmark observation information s.
The state model has a transition function F(l, a) by which the robot apparatus gives a theoretical state l when it has made the action a in the state (location) l. Actually, since a noise component w is superposed on the theoretical state, the state l of the robot apparatus is given by the following formula (7) on the basis of the state model:
l←F(l, a)+w (7)
Also, the observation model is composed of an observation function H(Env, l) by which the robot apparatus gives a theoretical observation value s as to a known environment Env (the position of a landmark, for example) when it is in a state, namely in the location l. Actually, since a noise component v is superposed on the theoretical observation value, the observation value s is given by the following formula (8) on the basis of the observation model:
s←H(Env, l)+v (8)
Note that the noise components w and v superposed on the state l and the observation s, respectively, are assumed to be Gaussian distributions each centered on zero.
In the extended Kalman filter EKF 343 having the state model defining the relation between the motion information a on the robot apparatus itself and the location l and the observation model defining the relation between the robot location l and the landmark observation information s, the motion information a is known as a result of internal observation and the landmark observation information s is known as a result of external observation. Therefore, the position of the robot apparatus can be identified through estimation of the state l of the robot apparatus on the basis of the motion information a and the observation information s. In this embodiment, the robot's motion a, state l and observation s are represented by the following Gaussian distributions, respectively:
a: Gaussian distribution with median amed and covariance Σa
s: Gaussian distribution with median smed and covariance Σs
l: Gaussian distribution with median lmed and covariance Σl
The state l of the robot apparatus at a time is estimated on the assumption that it shows a Gaussian distribution having a certain median and covariance. Then, when the motion a of the robot apparatus is found, the median and covariance concerning the estimated state l can be updated using the following formulae (9-1) and (9-2):
lmed←F(lmed, amed) (9-1)
Σl←∇FlΣl∇FlT+∇FaΣa∇FaT (9-2)
where ∇Fl and ∇Fa are given by the following:
∇Fl: Jacobian matrix given by ∂F/∂l
∇Fa: Jacobian matrix given by ∂F/∂a
Similarly, the state l of the robot apparatus at a time is estimated on the assumption that it shows a Gaussian distribution having a certain median and covariance. Then, when the landmark observation information s is found, the median and covariance concerning the estimated state l can be updated using the following formulae (10-1) and (10-2):
lmed←lmed+Wvmed (10-1)
Σl←Σl−WΣvWT (10-2)
where W: Kalman filter gain given by W=Σl∇HlTΣv−1
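The prediction and correction steps of formulae (9-1), (9-2), (10-1) and (10-2) can be sketched as below. The innovation covariance Σv is formed in the standard EKF way, ∇HΣl∇HT+Σs, which is an assumption consistent with the gain and update formulas above; the state model F, observation model H and their Jacobians are passed in as placeholders.

```python
import numpy as np

def ekf_predict(l_med, Sigma_l, a_med, Sigma_a, F, dF_dl, dF_da):
    """Formulae (9-1)/(9-2): propagate the Gaussian estimate of the state l
    through the state model F(l, a)."""
    Fl = dF_dl(l_med, a_med)                              # Jacobian of F w.r.t. l
    Fa = dF_da(l_med, a_med)                              # Jacobian of F w.r.t. a
    l_med = F(l_med, a_med)                               # formula (9-1)
    Sigma_l = Fl @ Sigma_l @ Fl.T + Fa @ Sigma_a @ Fa.T   # formula (9-2)
    return l_med, Sigma_l

def ekf_update(l_med, Sigma_l, s_med, Sigma_s, H, dH_dl, env):
    """Formulae (10-1)/(10-2): correct the estimate with a landmark observation s
    through the observation model H(Env, l)."""
    Hl = dH_dl(env, l_med)                                # Jacobian of H w.r.t. l
    v_med = s_med - H(env, l_med)                         # innovation
    Sigma_v = Hl @ Sigma_l @ Hl.T + Sigma_s               # innovation covariance (assumed form)
    W = Sigma_l @ Hl.T @ np.linalg.inv(Sigma_v)           # Kalman gain
    l_med = l_med + W @ v_med                             # formula (10-1)
    Sigma_l = Sigma_l - W @ Sigma_v @ W.T                 # formula (10-2)
    return l_med, Sigma_l
```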
Since the estimation by the extended Kalman filter EKF 343 tracks the sensor data very closely, the result of estimation from the extended Kalman filter EKF 343 is taken as the entire output from the landmark sensor CLS 340.
The extended Kalman filter (EKF) controller 344 controls the operation of the extended Kalman filter EKF 343 according to the output from the Markov localizer ML 342. More specifically, the EKF controller 344 verifies the validity of the landmark observation information s on the basis of the result of robot localization by the Markov localizer ML 342. The EKF controller 344 can judge the validity of the observation information s depending upon whether the probability p(s|mlp) that the landmark is observed at the grid position mlp where the existence probability in the Markov localizer ML 342 is maximum exceeds a predetermined threshold.
In case the probability p(s|mlp) that a landmark is found in the grid position mlp is smaller than the threshold, it is estimated that even the Markov localizer ML 342, robust to sensor noises, has not performed its function to a satisfactory extent. In such a case, even if the extended Kalman filter EKF 343, which is not so robust to sensor noises, is used to estimate the robot location, it will not be able to provide an accurate estimate and will rather waste calculation time. So, when the observation information s is judged not to be valid, a selector 345 shown in
Also, the EKF controller 344 verifies the validity of the result of robot localization made by the extended Kalman filter EKF 343. The validity of the robot localization result can be judged by a distribution comparison test in which the estimation result is compared with the existence probability p(l) supplied from the Markov localizer ML 342, using the median and covariance of the estimated state l. The distribution comparison test may be a chi-square test χ2(ml, EKF), for example.
If the distribution comparison test has proved that the probability distribution estimated by the Markov localizer ML 342 and that estimated by the extended Kalman filter EKF 343 are not similar to each other, it can be determined that the robot localization by the extended Kalman filter EKF 343, which is not robust to sensor noises, is not valid because of the influence of the sensor noises. In this case, the EKF controller 344 re-initializes the extended Kalman filter EKF 343, because otherwise it would take much time for the filter to be restored to its normal state.
The aforementioned landmark sensor CLS 340 functions as will be described below.
Also, when external observation information s on an observation of the landmark is supplied to the landmark sensor CLS 340, the EKF controller 344 will first update the estimated robot location in the Markov localizer ML 342 using the formula (6) (step S211).
The output from the Markov localizer ML 342 is supplied to the EKF controller 344 which in turn will verify the validity of the observation information s (step S212). The EKF controller 344 can judge the validity of the observation information s depending upon whether the probability p(s|mlp) in which a landmark is found in a grid position mlp where the existence probability is maximum in the Markov localizer ML 342 exceeds a predetermined threshold.
In case the probability p(s|mlp) that a landmark is found in the grid position mlp is smaller than the threshold, it is estimated that even the Markov localizer ML 342, robust to sensor noises, has not performed its function to a satisfactory extent. In such a case, even if the extended Kalman filter EKF 343, which is not so robust to sensor noises, is used to estimate the robot location, it will not be able to provide an accurate estimate and will rather waste calculation time. So, when the observation information s is judged not to be valid, a selector 345 shown in
On the other hand, when the verification of the observation information s shows that the observation information s is valid, namely when the probability p(s|mlp) that the landmark is found in the grid position mlp exceeds the threshold, the EKF controller 344 will further update the estimated robot location in the extended Kalman filter EKF 343 using the formulae (10-1) and (10-2) (step S213).
The result of the robot localization estimation by the extended Kalman filter EKF 343 is supplied to the EKF controller 344, which will verify the validity of the estimated robot location (step S214). The validity of the robot localization by the extended Kalman filter EKF 343 can be judged by a distribution comparison test in which the estimation result is compared with the existence probability p(l) supplied from the Markov localizer ML 342, using the median and covariance of the estimated state l. For the distribution comparison test, the chi-square test χ2(ml, EKF) is available by way of example.
If the distribution comparison test has proved that the probability distribution estimated by the Markov localizer ML 342 and that estimated by the extended Kalman filter EKF 343 are not similar to each other, it can be determined that the robot location estimated by the extended Kalman filter EKF 343, which is not robust to sensor noises, is not valid because of the influence of the sensor noises (step S215). In this case, the EKF controller 344 re-initializes the extended Kalman filter EKF 343, because otherwise it would take much time for the filter to be restored to its normal state.
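The control flow of the landmark sensor CLS 340, that is, the Markov update, the validity check of the observation s, the EKF update and the re-initialization, can be summarized in the sketch below. All of the method names (predict, correct, most_likely_cell, likelihood, distributions_similar, reinitialize) are hypothetical placeholders for the operations described in steps S211 to S215, not an actual API.

```python
def fuse_localizers(ml, ekf, ekf_ctrl, a=None, s=None, p_threshold=0.05):
    """Update the Markov localizer always, the EKF only when the observation
    looks valid, and re-initialize the EKF when its estimate diverges from
    the Markov distribution."""
    if a is not None:
        ml.predict(a)                                 # formula (5)
        ekf.predict(a)                                # formulae (9-1)/(9-2)
    if s is not None:
        ml.correct(s)                                 # formula (6), step S211
        mlp = ml.most_likely_cell()
        if ml.likelihood(s, mlp) > p_threshold:       # validity of s, step S212
            ekf.update(s)                             # formulae (10-1)/(10-2), step S213
            if not ekf_ctrl.distributions_similar(ml, ekf):   # e.g. chi-square test, step S214
                ekf.reinitialize(ml)                  # step S215
    return ekf.estimate()
```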
Thus, the landmark sensor CLS 340 can identify the robot location accurately, speedily and robustly by using, in combination, a global search that covers a wide range and can be done in a relatively short time, and a local search that covers a limited range and is accurate, speedy and robust.
Then, the situated behavior layer SBL 360 acquires a result of recognition of an obstacle existing around the robot apparatus 1 from the obstacle grid information 331 supplied from the occupancy grid calculator OG 330, and the present absolute coordinate of the robot apparatus 1 from the absolute coordinate localizer LZ 350, to thereby plan a route along which the robot apparatus 1 can walk, that is, a walkable route, to a destination designated in the absolute coordinate system or robot-centered coordinate system and issue an action command for moving along the planned route. Namely, the situated behavior layer 360 will decide a behavior the robot apparatus 1 has to perform, based on the input data and depending upon the situations. The situated behavior layer SBL 360 functions based on the obstacle grid information 331 supplied from the occupancy grid calculator OG 330 as will be described later.
An obstacle map produced based on the obstacle grid information 331 from the occupancy grid calculator OG 330 is composed of three types of areas as shown in
The first one of the three types of map areas is one occupied by an obstacle (black in
Next, a route planning algorithm employed in the situated behavior layer SBL 360 will be described in detail with reference to a flow chart in
First, the situated behavior layer SBL 360 controls the robot apparatus 1 to turn its line of sight toward the destination to produce an obstacle map showing an obstacle or obstacles lying along a straight route extending from the present position to the destination (step S71). Then, a distance image or disparity image is acquired to measure the distance between the present position and the destination, and the obstacle map is produced or the existing obstacle map is updated (step S72).
Next, a route is planned in the obstacle map thus produced, with the yet-to-observe area and the free-space area being regarded as walkable areas (step S73).
In this embodiment, a walking route is planned by a method called "A* search", for example, which is capable of minimizing the cost of the entire route. The "A* search" method provides a best-first search using f as a performance function with an admissible heuristic function h. In step S74, the situated behavior layer SBL 360 judges whether the route planned by the "A* search" method in step S73 is a walkable one. When the situated behavior layer SBL 360 has determined that the robot apparatus 1 cannot detour around an obstacle along the route (NO), it exits the route planning, informing that no further search will make it possible to plan a walkable route for the robot apparatus 1 (step S75).
If the situated behavior layer SBL 360 has determined that the route planned by the “A* search” method for example in step S73 is a walkable one (YES), it goes to step S76 where it will search for any yet-to-observe area included in the planned route output. If it is determined in step S76 that no yet-to-observe area is included in the planned route output (NO), the situated behavior layer SBL 360 outputs, in step S77, the walkable route as a planned route to the destination. When it is determined in step S76 that any yet-to-observe area is included in the planned route output (YES), the situated behavior layer SBL 360 goes to step S78 where it will calculate a number of walking steps from the present position to the yet-to-observe area and judge whether the number of steps exceeds a threshold.
If the situated behavior layer SBL 360 has determined in step S78 that the number of walking steps exceeds the threshold (YES), it outputs the walkable route to the yet-to-observe area in step S79. On the contrary, if the situated behavior layer SBL 360 has determined in step S78 that the number of walking steps is smaller than the threshold (NO), it will control the direction of viewing so as to measure the distance to the yet-to-observe area, retry the observation (step S80), and update the obstacle map again.
Employing the aforementioned route planning algorithm, the situated behavior layer SBL 360 plans a route with the yet-to-observe area and free-space area being regarded as walkable areas, and re-observes only a yet-to-observe area included in the planned route output, whereby it can plan a walkable route efficiently and in a shorter time, without executing any observation or distance-image calculation that is not required for moving to the destination.
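As an illustration of the above algorithm, the Python sketch below plans a route on a grid map whose cells take the three area types described above, treating free-space and yet-to-observe cells as walkable. The cell labels, the 4-neighbour connectivity, the unit step cost and the Euclidean heuristic are assumptions made for the sketch and are not taken from the embodiment.

    import heapq, math

    OBSTACLE, FREE, UNOBSERVED = 0, 1, 2              # the three map area types

    def plan_route(grid, start, goal):
        """A*-style search on an obstacle map given as a list of rows of cell
        labels; free-space and yet-to-observe cells are both treated as walkable.
        Returns the route as a list of (row, col) cells, or None (cf. step S75)."""
        rows, cols = len(grid), len(grid[0])
        def h(c):                                      # admissible heuristic
            return math.hypot(c[0] - goal[0], c[1] - goal[1])
        counter = 0                                    # tie-breaker for the heap
        open_set = [(h(start), counter, 0.0, start, None)]
        came_from = {}
        best_g = {start: 0.0}
        while open_set:
            _, _, g, cur, parent = heapq.heappop(open_set)
            if cur in came_from:
                continue
            came_from[cur] = parent
            if cur == goal:                            # reconstruct the route
                path = [cur]
                while came_from[path[-1]] is not None:
                    path.append(came_from[path[-1]])
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dr, cur[1] + dc)
                if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                    continue
                if grid[nxt[0]][nxt[1]] == OBSTACLE:   # only walkable cells
                    continue
                ng = g + 1.0
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    counter += 1
                    heapq.heappush(open_set, (ng + h(nxt), counter, ng, nxt, cur))
        return None

If the returned route contains UNOBSERVED cells, the step-count test of steps S78 to S80 would then decide whether to accept the route as it is or to re-observe that area first.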
A bipedal walking robot apparatus having installed therein the aforementioned position recognition apparatus according to the present invention will be described in detail below. This humanoid robot apparatus is practically usable to support human activities in various situations of the living environment and daily life. It is also an entertainment robot which can behave correspondingly to its internal states (anger, sadness, joy, pleasure, etc.) and do basic motions like those of human beings.
Each of the arm units 4R and 4L forming the upper limbs consists of a shoulder-joint pitch axis 107, shoulder-joint roll axis 108, upper-arm yaw axis 109, elbow-joint pitch axis 110, lower-arm yaw axis 111, wrist-joint pitch axis 112, wrist-joint roll axis 113 and a hand 114. The human hand is actually a multi-joint, multi-degree-of-freedom structure including a plurality of fingers. However, since the motion of the hand 114 contributes little to, and has little influence on, the posture control and walking control of the robot apparatus 1, the hand 114 is assumed herein to have no degree of freedom. Therefore, each of the arm units 4R and 4L has seven degrees of freedom.
The body unit 2 has three degrees of freedom including a body pitch axis 104, body roll axis 105 and a body yaw axis 106.
Each of the leg units 5R and 5L is composed of a hip-joint yaw axis 115, hip-joint pitch axis 116, hip-joint roll axis 117, knee-joint pitch axis 118, ankle-joint pitch axis 119, ankle-joint roll axis 120, and a foot 121. In this embodiment, the hip-joint pitch axis 116 and hip-joint roll axis 117 together define the position of the hip joint of the robot apparatus 1. The human foot is actually a multi-joint, multi-degree-of-freedom structure including the foot sole. However, the foot sole of the robot apparatus 1 has no degree of freedom. Therefore, each of the leg units has six degrees of freedom.
In effect, the robot apparatus 1 has a total of 32 degrees of freedom (=3+7×2+3+6×2). However, the number of degrees of freedom of the entertainment robot 1 is not limited to thirty-two; the number of joints, namely, the number of degrees of freedom, may be increased or decreased appropriately according to the constraints and required specifications in designing and producing the robot apparatus 1.
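The total can be verified with a short tally; the grouping of the first term as the head unit's degrees of freedom simply follows the formula given above.

    dof = {"head": 3, "arm": 7, "body": 3, "leg": 6}        # per-unit counts
    total = dof["head"] + 2 * dof["arm"] + dof["body"] + 2 * dof["leg"]
    assert total == 32                                      # 3 + 7*2 + 3 + 6*2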
Each of the above degrees of freedom of the robot apparatus 1 is actually implemented by a corresponding actuator. The actuator should preferably be small and lightweight, since the robot apparatus 1 should have no unnecessary bulges but a body shape closely approximating the natural bodily shape of a human being, and since the bipedal walking type of robot is difficult to control in posture because of its unstable structure.
The control unit 10 controls the operation of the entire robot apparatus 1 synthetically. The control unit 10 includes a main controller 11 composed of main circuit components such as a CPU (central processing unit), DRAM, flash memory, etc. (not shown), and a peripheral circuit 12 including a power circuit and interfaces (not shown) for transfer of data and command to and from the components of the robot apparatus 1.
The control unit 10 may be installed anywhere as necessary. It is installed in the body unit 2 in this embodiment as shown in
Each of the degrees of freedom in the robot apparatus 1 shown in
The head unit 3 has provided therein a CCD (charge-coupled device) camera to capture the external situation, a distance sensor to measure the distance to an object standing before the robot apparatus 1, a microphone to collect external sounds, a speaker to output a voice or sound, a touch sensor to detect a pressure applied to the head unit 3 by a user's physical action such as “patting” or “hitting”, etc.
The body unit 2 has provided therein a body pitch-axis actuator A5, body roll-axis actuator A6 and a body yaw-axis actuator A7, implementing the body pitch axis 104, body roll axis 105 and body yaw axis 106, respectively. Also, the body unit 2 incorporates a battery to power the robot apparatus 1. This battery is a rechargeable type.
Each of the arm units 4R and 4L is composed of sub units including an upper arm unit 41R (41L), elbow joint unit 42R (42L) and lower arm unit 43R (43L), and has provided therein a shoulder-joint pitch-axis actuator A8, shoulder-joint roll-axis actuator A9, upper-arm yaw-axis actuator A10, elbow-joint pitch-axis actuator A11, lower-arm yaw-axis actuator A12, wrist-joint pitch-axis actuator A13 and a wrist-joint roll-axis actuator A14, implementing the shoulder-joint pitch axis 107, shoulder-joint roll axis 108, upper-arm yaw axis 109, elbow-joint pitch axis 110, lower-arm yaw axis 111, wrist-joint pitch axis 112 and wrist-joint roll axis 113, respectively.
Also, each of the leg units 5R and 5L is composed of sub units including a femoral unit 51R (51L), knee unit 52R (52L) and tibial unit 53R (53L), and has provided therein a hip-joint yaw-axis actuator A16, hip-joint pitch-axis actuator A17, hip-joint roll-axis actuator A18, knee-joint pitch-axis actuator A19, ankle-joint pitch-axis actuator A20 and an ankle-joint roll-axis actuator A21, implementing the hip-joint yaw axis 115, hip-joint pitch axis 116, hip-joint roll axis 117, knee-joint pitch axis 118, ankle-joint pitch axis 119 and ankle-joint roll axis 120, respectively.
More preferably, each of the aforementioned actuators A2, A3, . . . used in the joints is formed from a small AC servo actuator in which the gear is directly coupled and a one-chip servo control system is installed in the motor unit.
Also, the body unit 2, head unit 3, arm units 4R and 4L and leg units 5R and 5L are provided with sub controllers 20, 21, 22R and 22L, and 23R and 23L, respectively, for controlling the driving of the actuators. Further, the leg units 5R and 5L have ground-contact check sensors 30R and 30L, respectively, to detect whether the foot soles of the leg units 5R and 5L are in contact with the ground or floor surface, and the body unit 2 has installed therein a posture sensor 31 to measure the robot posture.
Each of the ground-contact check sensors 30R and 30L is composed of a proximity sensor, micro switch or the like installed on the foot sole, for example. Also, the posture sensor 31 is a combination of an acceleration sensor and a gyro sensor, for example.
The output from each of the ground-contact check sensors 30R and 30L makes it possible to judge whether the right or left leg unit is currently standing with its foot sole on the ground or floor surface or is idling while the robot apparatus 1 is walking or running. Also, the output from the posture sensor 31 makes it possible to detect the inclination of the body unit and the posture of the robot apparatus 1.
The main controller 11 can dynamically correct a controlled target in response to the outputs from the ground-contact check sensors 30R and 30L and the posture sensor 31. More particularly, the main controller 11 can implement a whole-body motion pattern in which the upper limbs, body and lower limbs of the robot apparatus 1 move in coordination by making an adaptive control of each of the sub controllers 20, 21, 22R and 22L, and 23R and 23L.
For a whole-body motion of the robot apparatus 1, a foot motion, a zero-moment point (ZMP) trajectory, a body motion, a waist height, etc. are set, and commands for actions corresponding to these settings are transferred to the sub controllers 20, 21, 22R and 22L, and 23R and 23L, which interpret the commands transferred from the main controller 11 and output a drive control signal to each of the actuators A2, A3, . . . The “ZMP (zero-moment point)” referred to herein means a point on the floor surface where the moment caused by the reactive force applied due to the walking of the robot apparatus 1 is zero, and the ZMP trajectory means a trajectory along which the ZMP moves while the robot apparatus 1 is walking, for example. It should be noted that for the concept of the ZMP and the application of the ZMP to the stability criteria for walking robots, reference is made to the document “Legged Locomotion Robots” (by Miomir Vukobratovic).
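As a reading aid only, the sketch below computes the ZMP of a set of point masses under the common flat-floor approximation; it is not the control law of the embodiment, and the argument names are assumptions made for illustration.

    def zero_moment_point(masses, positions, accelerations, g=9.81):
        """ZMP (x, y) on a flat floor (z = 0) for point masses.
        positions and accelerations are sequences of (x, y, z) tuples."""
        num_x = num_y = den = 0.0
        for m, (x, y, z), (ax, ay, az) in zip(masses, positions, accelerations):
            w = m * (az + g)            # effective vertical force of this mass
            num_x += w * x - m * ax * z
            num_y += w * y - m * ay * z
            den += w
        return num_x / den, num_y / den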
As above, each of the sub controllers 20, 21, . . . in the robot apparatus 1 interprets a command received from the main controller 11 to output a drive control signal to each of the actuators A2, A3, . . . to control each of the robot component units. Thus, the robot apparatus 1 is allowed to stably walk with its posture positively changed to a target one.
Also, the control unit 10 in the robot apparatus 1 collectively processes sensor data from the sensors such as the acceleration sensor, touch sensor and ground-contact check sensors, image information from the CCD camera and sound information from the microphone, in addition to performing the aforementioned control of the robot posture. In the control unit 10, the sensors including the acceleration sensor, gyro sensor, touch sensor, distance sensor, microphone and speaker, as well as the actuators, the CCD camera and the battery, are connected to the main controller 11 via corresponding hubs (not shown).
The main controller 11 sequentially acquires the sensor data, image data and sound data supplied from the sensors and stores them at predetermined places in the DRAM via the respective internal interfaces. Also, the main controller 11 sequentially acquires battery residual-potential data supplied from the battery and stores the data at a predetermined place in the DRAM. The sensor data, image data, sound data and battery residual-potential data thus stored in the DRAM are used by the main controller 11 in controlling the motions of the robot apparatus 1.
When the robot apparatus 1 is initially powered, the main controller 11 reads a control program and stores it into the DRAM. Also, the main controller 11 judges the robot's own condition and surroundings and existence of user's instruction or action on the basis of the sensor data, image data, sound data and battery residual-potential data sequentially stored in the DRAM.
Further, the main controller 11 decides a subsequent behavior on the basis of the result of that judgment and the control program stored in the DRAM, and drives the necessary actuators on the basis of the result of this decision to cause the robot apparatus 1 to make a so-called “motion” or “gesture”.
Thus the robot apparatus 1 can judge its own condition and surroundings on the basis of the control program and autonomously behave in response to a user's instruction or action.
Note that the robot apparatus 1 can autonomously behave correspondingly to its internal condition. This will be explained below using an example software configuration of the control program in the robot apparatus 1 with reference to
As shown in
Also, a robotic server object 42 is provided in the lowest layer of the device driver layer 40. It includes a virtual robot 43 formed from a software group which provides an interface for access to the hardware such as the aforementioned sensors and actuators 28₁ to 28ₙ, a power manager 44 formed from a software group which manages the power switching etc., a device driver manager 45 formed from a software group which manages other various device drivers, and a designed robot 46 formed from a software group which manages the mechanism of the robot apparatus 1.
There is also provided a manager object 47 composed of an object manager 48 and service manager 49. The object manager 48 is a software group to manage the start-up and exit of each software group included in a robotic server object 42, middleware layer 50 and an application layer 51. The service manager 49 is a software group which manages the connection of each object on the basis of inter-object connection information stated in a connection file stored in the memory card.
The middleware layer 50 is provided in a layer above the robotic server object 42. It is a software group which provides basic functions of the robot apparatus 1, such as image processing, sound processing, etc. Also, the application layer 51 is provided in a layer above the middleware layer 50, and it is a software group which decides a behavior of the robot apparatus 1 on the basis of a result of processing by each software group included in the middleware layer 50.
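The layering described above can be summarised in a minimal skeleton; the class and method names below are illustrative assumptions and do not represent the actual object interfaces of the control program.

    class VirtualRobot:
        """Interface for access to the hardware (sensors and actuators)."""
        def read_sensors(self): ...
        def drive_actuator(self, actuator_id, command): ...

    class RoboticServerObject:
        """Lowest layer above the device driver layer."""
        def __init__(self):
            self.virtual_robot = VirtualRobot()
            self.power_manager = None          # manages power switching etc.
            self.device_driver_manager = None  # manages other device drivers
            self.designed_robot = None         # manages the robot mechanism

    class MiddlewareLayer:
        """Basic functions such as image and sound processing."""
        def __init__(self, server: RoboticServerObject):
            self.server = server
        def recognize(self): ...               # recognition side
        def output(self, command): ...         # output side

    class ApplicationLayer:
        """Decides a behavior from the middleware's recognition results."""
        def __init__(self, middleware: MiddlewareLayer):
            self.middleware = middleware
        def decide_behavior(self): ...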
Note that the software configurations of the middleware layer 50 and application layer 51 will be explained with reference to
As shown in
The signal processing modules 60 to 68 of the recognition system 70 each acquire the corresponding sensor data, image data or sound data read from the DRAM by the virtual robot 43 of the robotic server object 42, perform predetermined processing on the basis of the data and supply the results of the processing to the input semantics converter module 69. It should be noted that the virtual robot 43 is designed to transfer or convert signals according to a predetermined protocol, for example.
The input semantics converter module 69 uses the results of processing supplied from the signal processing modules 60 to 68 to recognize the robot's internal condition and surroundings, such as “noisy”, “hot”, “bright”, “robot has detected a ball”, “robot has detected overturn”, “robot has been patted”, “robot has been hit”, “robot has heard a musical scale”, “robot has detected a moving object” or “robot has detected an obstacle”, or a user's instruction or action, and supplies the result of recognition to the application layer 51.
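A minimal sketch of the kind of mapping the input semantics converter module 69 performs is given below; the dictionary keys and thresholds are illustrative assumptions only, not the actual conversion rules.

    def convert_input_semantics(results):
        """Turn raw module outputs into symbolic recognitions such as
        'noisy', 'robot has detected a ball' or 'robot has been patted'."""
        recognitions = []
        if results.get("noise_level", 0.0) > 0.7:
            recognitions.append("noisy")
        if results.get("temperature", 0.0) > 35.0:
            recognitions.append("hot")
        if results.get("ball_detected"):
            recognitions.append("robot has detected a ball")
        if results.get("touch_pressure", 0.0) > 0.5:
            recognitions.append("robot has been patted")
        return recognitions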
As shown in
As shown in
When supplied with a result of recognition from the input semantics converter module 69, or upon the elapse of a predetermined time after reception of the last result of recognition, each of the behavior models decides a next behavior, referring whenever necessary to a corresponding emotion parameter held in the emotion model 83 or a corresponding desire parameter held in the instinct model 84 as will be described later, and supplies the behavior data to the behavior selection module 81.
Note that in this embodiment, each of the behavior models uses an algorithm called “finite-probabilistic automaton” as a method of deciding a next behavior. This automaton probabilistically decides a destined one of nodes NODE0 to NODEn as shown in
More specifically, each of the behavior models holds a state transition table generally indicated with a reference 90 as shown in
In the state transition table 90, the input events taken as transition conditions at the nodes NODE0 to NODEn are listed in the corresponding lines of an “Input event name” column in the order of precedence, and further conditions to the transition conditions are listed in the corresponding lines of the “Data name” and “Data range” columns.
Therefore, when a result of recognition that “a ball has been detected (BALL)” is supplied, the node NODE100 in the state transition table 90 in
Even when no result of recognition is supplied, the node NODE100 can transit to another node if, of the emotion and desire parameters held in the emotion model 83 and instinct model 84, respectively, to which the behavior model refers periodically, any of the parameters “joy”, “surprise” or “sadness” held in the emotion model 83 is within a range of 50 to 100.
Also, in the state transition table 90, the names of the nodes to which each of the nodes NODE0 to NODEn can transit are listed in the “Transit to” lines of a “Probability of transition to other node” column. The probability of transition to each of the other nodes NODE0 to NODEn, applicable when all the conditions stated in the “Input event name”, “Data name” and “Data range” columns are met, is stated in the corresponding line of the “Probability of transition to other node” column, and the behavior to be outputted upon transition to that node is stated in the “Output behavior” line of the “Probability of transition to other node” column. It should be noted that the sum of the probabilities stated in each corresponding line of the “Probability of transition to other node” column is 100 (%).
Therefore, the node NODE100 defined as in the state transition table 90 in
Each of the behavior models is formed from a connection of the nodes NODE0 to NODEn, each defined as in the aforementioned state transition table 90. When supplied with a result of recognition from the input semantics converter module 69, for example, the behavior model probabilistically decides a next behavior using the state transition tables of the corresponding nodes NODE0 to NODEn, and supplies the result of the decision to the behavior selection module 81.
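A minimal sketch of a node defined by such a state transition table, and of the probabilistic choice of its transition destination, is given below. The entry is patterned after the BALL/SIZE example above, but the destination node names and the probability values (expressed here as fractions summing to 1 rather than percentages summing to 100) are hypothetical placeholders, not the actual contents of the table 90.

    import random

    # One node of the finite probabilistic automaton. Each entry of its
    # transition table gives the conditions and, when they are met, the
    # candidate destination nodes with their probabilities and the behavior
    # to output.
    NODE100 = {
        "transitions": [
            {
                "input_event": "BALL",
                "data_name": "SIZE",
                "data_range": (0, 1000),
                "destinations": {"NODE120": 0.3, "NODE150": 0.7},   # placeholders
                "output_behavior": "ACTION_1",                       # placeholder
            },
        ],
    }

    def decide_transition(node, input_event, data_value):
        """Pick the next node and the output behavior for a recognized event."""
        for t in node["transitions"]:
            low, high = t["data_range"]
            if t["input_event"] == input_event and low <= data_value <= high:
                names = list(t["destinations"])
                weights = [t["destinations"][n] for n in names]
                nxt = random.choices(names, weights=weights, k=1)[0]
                return nxt, t["output_behavior"]
        return None, None       # no transition condition met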
The behavior selection module 81 shown in
Also, the behavior selection module 81 informs the learning module 82, emotion model 83 and instinct model 84, on the basis of behavior completion information supplied from the output semantics converter module 78 after completion of the behavior, that the behavior is complete.
On the other hand, the learning module 82 is supplied with the result of recognition of a teaching given as a user's action such as “hit” or “pat”, out of the results of recognition supplied from the input semantics converter module 69.
Then, based on the result of recognition and the information from the behavior selection module 81, the learning module 82 will change the transition probability of the corresponding behavior model in the behavior model library 70, lowering the probability of expression of a behavior in response to “hitting (scolding)” while raising the probability of expression of a behavior in response to “patting (praising)”.
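One simple form such a probability adjustment could take is sketched below; the learning rate and the renormalisation scheme are assumptions for illustration, not the actual update rule of the learning module 82 (probabilities are again handled as fractions of 1).

    def adjust_transition_probabilities(destinations, expressed, feedback, rate=0.05):
        """Raise or lower the probability of the behavior that was just expressed.
        destinations: {node_name: probability}; feedback: 'pat' (praise) or 'hit' (scold)."""
        delta = rate if feedback == "pat" else -rate
        destinations[expressed] = min(1.0, max(0.0, destinations[expressed] + delta))
        total = sum(destinations.values())
        for name in destinations:               # renormalize so the sum stays 1
            destinations[name] /= total
        return destinations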
On the other hand, the emotion model 83 holds a parameter indicating the intensity of each of a total of six emotions “joy”, “sadness”, “anger”, “surprise”, “disgust” and “fear”. The emotion model 83 periodically updates each of the emotion parameters on the basis of a specific result of recognition such as “hit” and “pat” supplied from the input semantics converter module 69, time elapse, information from the behavior selection module 81, etc.
More particularly, on the assumption that the variation of an emotion, calculated using a predetermined formula on the basis of a result of recognition supplied from the input semantics converter module 69, a concurrent behavior of the robot apparatus 1 and the time elapsed since the preceding update, is ΔE[t], that the parameter of the present emotion is E[t] and that the coefficient of sensitivity to the emotion is ke, the emotion model 83 calculates the parameter E[t+1] of the emotion in the next period using the following equation (11), and updates the emotion parameter by replacing the present emotion parameter E[t] with it. Also, the emotion model 83 updates all the emotion parameters in the same way.
E[t+1]=E[t]+ke×ΔE[t] (11)
Note that it is predetermined how much each result of recognition and information from the output semantics converter module 78 influence the variation ΔE[t] of each emotion parameter. For example, a result of recognition “robot has been hit” will greatly influence the variation ΔE[t] of the parameter of an emotion “anger”, while a result of recognition “robot has been patted” will also have a great influence on the ΔE[t] of the parameter of an emotion “joy”.
The information from the output semantics converter module 78 is so-called feedback information on a behavior (information on the completion of a behavior), namely, information on the result of the expression of a behavior. The emotion model 83 changes the emotion on the basis of such information as well, which can be explained by the fact that a behavior like “cry” will lower the level of the emotion “anger”, for example. It should be noted that the information from the output semantics converter module 78 is also supplied to the aforementioned learning module 82, which uses the information as a basis for changing the corresponding transition probability of the behavior model.
Note that the feedback of the result of a behavior may be done by an output from the behavior selection module 81 (a behavior to which an emotion is added).
On the other hand, the instinct model 84 holds parameters indicating the intensity of each of four desires “exercise”, “affection”, “appetite” and “curiosity”. The instinct model 84 periodically updates the desire parameter on the basis of a result of recognition supplied from the input semantics converter module 69, elapsed time and information from the behavior selection module 81.
More particularly, on the assumption that the variation of each of the desires “exercise”, “affection”, “appetite” and “curiosity”, calculated using a predetermined formula on the basis of a result of recognition, the elapsed time and information from the output semantics converter module 78, is ΔI[k], that the present parameter of the desire is I[k] and that the coefficient of sensitivity to the desire is ki, the instinct model 84 calculates the parameter I[k+1] of the desire in the next period using the following equation (12), and updates the desire parameter by replacing the present desire parameter I[k] with it. Also, the instinct model 84 updates all the desire parameters except for the “appetite” parameter in the same way.
I[k+1]=I[k]+ki×ΔI[k] (12)
Note that it is predetermined how much each result of recognition and the information from the output semantics converter module 78 influence the variation ΔI[k] of each desire parameter. For example, information from the output semantics converter module 78 will have a great influence on the ΔI[k] of the parameter of “fatigue”.
Note that in this embodiment, the parameter of each of the emotions and desires (instincts) is so limited as to vary within a range of 0 to 100 and also the coefficients ke and ki are set separately for each of the emotions and instincts.
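The two update laws (11) and (12), together with the clamping to the 0 to 100 range, can be summarised in a few lines; the numeric values in the example are arbitrary and shown only to illustrate the arithmetic.

    def update_parameter(current, variation, sensitivity, lower=0.0, upper=100.0):
        """Generic update used for both an emotion parameter E[t] (equation (11))
        and a desire parameter I[k] (equation (12)): new = old + k * delta,
        clamped to the permitted range."""
        return min(upper, max(lower, current + sensitivity * variation))

    # Example: E[t] = 55, ke = 0.5, delta E[t] = +20  ->  E[t+1] = 65
    joy = update_parameter(55.0, 20.0, 0.5)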
On the other hand, the output semantics converter module 78 of the middleware layer 50 supplies an abstract action command such as “move forward”, “joy”, “cry” or “track (ball)” supplied from the behavior selection module 81 of the application layer 51 as above to a corresponding one of the signal processing modules 71 to 77 of the output system 79 as shown in
Each of the signal processing modules 71 to 77 generates, based on the action command thus supplied, a servo command for supply to a corresponding one of the actuators to behave as commanded, sound data to be outputted from the speaker, and drive data for supply to the LED and sends the data to the corresponding actuator, speaker or LED via the virtual robot 43 of the robotic server object 42 and the signal processing circuit in this order.
Thus, the robot apparatus 1 can autonomously behave correspondingly to its own condition (internal) and surroundings (external) and user's instruction and action under the aforementioned control program.
The control program is provided via a recording medium in which it is recorded in a form readable by the robot apparatus. As the recording media for recording the control program, there are available magnetic-reading type media (e.g., magnetic tape, flexible disc and magnetic card), optical-reading type media (e.g., CD-ROM, MO, CD-R and DVD), etc. The recording media also include semiconductor memories such as a so-called memory card (rectangular or square), an IC card, etc. Also, the control program may be provided via the so-called Internet or the like.
The control program provided in any of the above forms is reproduced by a dedicated read driver or a personal computer and transmitted to the robot apparatus 1 by cable or wirelessly. Also, the robot apparatus 1 can read the control program directly from the recording medium if it is provided with a drive unit for a small storage medium such as a semiconductor memory card or an IC card.
In this embodiment, the obstacle recognition apparatus can extract a stable plane since it detects a plane using many measured points. Also, it can select a correct plane by collating candidate planes obtained from the image with floor parameters obtained from the robot's posture. Further, according to the present invention, since the obstacle recognition apparatus substantially recognizes a floor surface, not the obstacle itself, it can perform the recognition independently of the shape and size of an obstacle. Moreover, since an obstacle is represented by its distance from the floor surface, the obstacle recognition apparatus can detect not only an obstacle but also a small step or concavity of a floor. Also, the obstacle recognition apparatus can easily judge, with the size of the robot taken into consideration, whether the robot can walk over or under a recognized obstacle. Furthermore, since an obstacle can be represented on a two-dimensional floor surface, the obstacle recognition apparatus can apply the route planning methods used with conventional robots and can also calculate such a route more speedily than with a three-dimensional representation.
As having been described in the foregoing, the obstacle recognition apparatus according to the present invention can accurately extract a floor surface and recognize an obstacle since the floor surface detecting means detects the floor surface on the basis of a result of coordinate transformation by the coordinate transforming means and plane parameters detected by the plane detecting means, selects a point on the floor surface using the plane parameters of the floor surface, and recognizes the obstacle on the basis of the selected point.
Also, the obstacle recognition method according to the present invention makes it possible to accurately extract a floor surface and recognize an obstacle since, in the floor surface detecting step, the floor surface is detected on the basis of a result of the coordinate transformation in the coordinate transforming step and the plane parameters detected in the plane detecting step, a point on the floor surface is selected using the plane parameters of the floor surface, and the obstacle is recognized on the basis of the selected point.
Also, the obstacle recognition program, when executed by the robot apparatus, makes it possible to accurately extract a floor surface and recognize an obstacle since, in the floor surface detecting step, the floor surface is detected on the basis of a result of the coordinate transformation in the coordinate transforming step and the plane parameters detected in the plane detecting step, a point on the floor surface is selected using the plane parameters of the floor surface, and the obstacle is recognized on the basis of the selected point.
Also, the robot apparatus according to the present invention can accurately extract a floor surface and recognize an obstacle since the floor surface detecting means detects the floor surface on the basis of a result of coordinate transformation by the coordinate transforming means and plane parameters detected by the plane detecting means, selects a point on the floor surface using the plane parameters of the floor surface, and recognizes the obstacle on the basis of the selected point.
In the foregoing, the present invention has been described in detail concerning certain preferred embodiments thereof as examples with reference to the accompanying drawings. However, it should be understood by those ordinarily skilled in the art that the present invention is not limited to the embodiments but can be modified in various manners, constructed alternatively or embodied in various other forms without departing from the scope and spirit thereof as set forth and defined in the appended claims.