A robot having the motion style of a character defined by animation data obtained from an animation movie or computer animation data files. The robot is generated using a robot development method that obtains animation data for a character walking or performing other movements. The character may be humanoid, and the method includes developing a bipedal robot with a lower portion having a kinematic structure matching the kinematic structure of the lower portion of the animation character as defined in the animation data. A control program is generated for the robot such as by using trajectory optimization. The control program may include an open-loop walking trajectory that mimics the character's walking motion provided in the animation data. The open-loop walking trajectory may be generated by modifying the motion of the character from the animation data such that the Zero Moment Point (ZMP) stays in the contact convex hull.
|
16. A method for generating and controlling a humanoid robot to provide motions that mimic motion of an animation character, comprising:
processing animation data for an animation character to obtain a character skeleton and a reference motion for the animation character;
based on the character skeleton, defining a kinematic structure for a robot;
based on the reference motion and the character skeleton, defining torque requirements for the robot; and
based on the torque requirements, defining a plurality of actuators to move the kinematic structure of the robot based on the reference motion.
1. A robot adapted to mimic motion of an animation character, comprising:
a plurality of structural segments interconnected at joints to provide a floating-base, legged robot assembly;
a plurality of actuators each provided proximate to one of the joints for applying a torque to the structural segments to provide movement of the floating-base, legged robot assembly; and
a controller providing control signals to the actuators to provide the movement of the robot based on a motion trajectory,
wherein the motion trajectory is based on a reference motion extracted from animation data for the animation character and is modified based on a kinematic structure provided by the structural segments and actuators.
9. A method of generating design and control criteria for a robot, comprising:
from a data storage device, retrieving a set of animation data for a character;
with a processor on a computer system, extracting data defining a character skeleton for the character from the set of animation data;
with the processor, setting a set of target features for the robot including defining a kinematic structure based on the extracted data defining the character skeleton and further including defining a range of motion for the robot based on the set of animation data; and
designing mechanics of the robot including a set of structural segments corresponding to the kinematic structure and the range of motion for the robot.
2. The robot of
3. The robot of
4. The robot of
5. The robot of
6. The robot of
7. The robot of
8. The robot of
10. The method of
11. The method of
12. The method of
13. The method of
14. The method of
15. The method of
17. The method of
18. The method of
19. The method of
20. The method of
|
1. Field of the Description
The present description relates, in general, to design and control of robots including legged robots (e.g., floating-base, biped humanoid robots or other legged robots such as quadrupeds). More particularly, the present description relates to methods for developing, and then controlling, a robot to move in a manner that mimics or relatively closely matches an animation character's movements, e.g., a bipedal robot developed to walk like an animation or film character.
2. Relevant Background.
There is great demand in the robotics industry for engineers and scientists to create robots that embody animation characters in the real world, such as by taking on a character's distinctive movements, like a way of walking that is unique to and readily recognized as belonging to that character. This is especially true in the entertainment industry because such character-imitating robots would allow people to physically interact with characters that they have previously only seen in films or on television shows. To give a feeling of life to these robots, it is important to mimic not only the appearance of the character being imitated but also the character's motion styles.
Unfortunately, the process of matching the movements and motion styles of a robot to a character, such as a character in an animated film or television show, is not straightforward and prior solutions have not been wholly effective. One problem with designing and controlling a robot to mimic a character is that most animation characters are not designed and animated considering physical feasibility in the real world and their motions are seldom physically correct.
For example, it may be desirable to mimic a character that is humanoid in form, and a designer may choose to mimic the character with a biped humanoid robot, which is a robot with an appearance based on that of the human body. Humanoid robots have been designed for providing interaction with various environments such as tools and machines that were made for humans and often are adapted for safely and effectively interacting with human beings. In general, humanoid robots have a torso with a head, two arms, and two legs each with some form of foot such that the robot can walk on planar surfaces, climb steps, and so on (e.g., these humanoid robots are “bipeds” as are humans). Humanoid robots may be formed with many rigid links that are interconnected by joints that are operated or positioned by applying a force or torque to each joint to move and position a robot. Similarly, other legged robots such as those with three, four, or more legs also may walk utilizing force-controlled movement of their legs.
In order to interact with human environments, humanoid robots require safe and compliant control of the force-controlled joints. In this regard, a controller is provided for each robot that has to be programmed to determine desired motions and output forces (contact forces) and, in response, to output joint torques to effectively control movement and positioning of the humanoid robot. However, it has proven difficult to operate these humanoid robots with such a controller to accurately mimic an animation character's motion style as this style may simply not be physically correct or feasible for the humanoid robot and its links and joints (e.g., its physical components may not be able to perform the character's movements).
Animation characters have evolved to be more realistic in both their outer appearance and their movements (or motion style). Using computer graphics techniques, three-dimensional (3D) characters can be designed and animated with more natural and physically plausible motions. Among many other motions, due to the interest in bipedal characters, generating realistic and natural bipedal walking has been extensively studied by many researchers. One approach to mimicking animation characters with bipedal robots has been to directly explore the walking motions with trajectory optimization to find desired motions that obey the laws of physics. Another approach has been to develop walking controllers that allow characters to walk in physics-based simulation. Recently, the computer graphics community has been trying to establish techniques to animate mechanical characters, and a computational framework for designing mechanical characters with desired motions has been proposed by some researchers. To date, though, none of these approaches has provided bipedal robots that consistently mimic an animation character's movements.
As a result, the desire to have lifelike bipedal walking in the real world has persisted in the field of robotics for the past several decades. To address the goal of solving real-world problems, such as helping elderly citizens in their daily lives or responding to natural and man-made disasters, humanoid robots have been developed with high-fidelity control of joint positions and torques. Other, more compact bipedal robots have been developed for entertainment and for hobby enthusiasts using servo motors. Recently, miniature bipedal robots with servo motors and 3D-printed links have begun to gain attention in the robotics industry. To date, though, none of these innovations has provided robots that can readily mimic the movement, including walking, of many animation or film characters.
Hence, there remains a need for a method of developing/generating (and then controlling operations of) a robot that can more accurately move like a wide variety of animation characters, including characters that were not created to comply with constraints of the real or physical world. Preferably, the new method would at least be useful for developing a bipedal robot that walks like an animation character and, more preferably, would be useful for designing, building, and controlling a robot chosen to have a physical configuration at least somewhat matching that of the animation character so that the robot can be controlled to have the motion style of the animation character. Also, it may be useful in some implementations that the robots developed with these new methods be built using readily available components such as links that can be 3D printed and joints operated with servo motors and so on.
The inventors recognized that there was a need for improved methods for designing and controlling robots to bring animation characters to life in the real world. As an exemplary implementation, a bipedal robot is presented that looks like and, more significantly for this description, walks like an animation character. In other words, a robot is designed, built, and then controlled so as to take on the motion style of a character defined by animation data (or data obtained from an animation movie or clip containing the character and/or from computer graphics (CG) files associated with the animation movie/clip).
In brief, the robot development or design method starts with obtaining a set of animation data of a particular character walking (or performing other movements to be mimicked by a robot). In this example, the character is humanoid in form, and the method continues with developing a bipedal robot that has a lower portion including a torso assembly, a left leg assembly, and a right leg assembly that corresponds to the lower portion of the animation character. In particular, the lower portion of the robot is designed to have a kinematic structure matching the kinematic structure of the lower portion of the animation character as defined in or as derived from the animation data.
The lower portion with the matching kinematic structure may then be fabricated, such as with the links being 3D printed or the like and with joint actuators in the form of servo motors. Then, a control program can be generated for the fabricated and assembled robot (or its lower portion) such as by using trajectory optimization or other approaches. The control program may include an open-loop walking trajectory that is configured to mimic the character's walking motion provided in the previously obtained animation data.
The open-loop walking trajectory may be generated by modifying the motion of the character from the animation data such that the Zero Moment Point (ZMP) stays in the contact convex hull. The controller may then use the control program to operate the joints via control of the servo motors (or other joint actuators) such as to test or verify whether the walking provided by the animation data can be performed and whether the robot accurately imitates the character's movements. In tests run by the inventors, the results of this method of developing a robot (its structure and control components) showed marked improvement over prior techniques with movements of an animation character being closely mimicked or copied.
More particularly, a robot is described herein that is adapted to mimic motion of an animation character. The robot includes a plurality of structural segments interconnected at joints and also includes a plurality of actuators each provided proximate to one of the joints for applying a torque to the structural segments to provide movement of the robot. Further, the robot includes a controller providing control signals to the actuators to provide the movement of the robot based on a motion trajectory. To mimic the animation character's movements, the motion trajectory corresponds to a reference motion that is extracted from animation data for the animation character and then modified based on a kinematic structure provided by the structural segments and actuators. In some implementations, the robot is a bipedal robot, and the reference motion is a walking motion for the animation character.
To implement the robot, the actuators may be configured to provide torque requirements obtained from the reference motion. Further, the reference motion can be modified to provide the motion trajectory while complying with a range of motion defined in the reference motion for the animation character. To provide the motion trajectory for the robot, the reference motion can be mapped to a configuration space of the structural elements of the robot, and the reference motion can be modified by keeping a stance foot flat on the ground to retain stability.
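The mapping of a reference motion into the robot's configuration space can be illustrated with a short sketch. The joint names, range-of-motion values, and per-angle clamping strategy below are illustrative assumptions, not the patent's actual retargeting procedure:

```python
# Hypothetical per-joint range-of-motion limits in degrees; the names and
# values here are illustrative, not taken from the actual robot design.
ROM_DEG = {
    "hip_pitch": (-15.0, 60.0),
    "knee_pitch": (20.0, 115.0),
    "ankle_pitch": (0.0, 70.0),
}

def retarget_frame(anim_frame, rom=ROM_DEG):
    """Map one animation key frame into the robot's configuration space:
    keep only the joints the robot actually has, and clamp each angle to
    the robot's range of motion."""
    robot_frame = {}
    for joint, (lo, hi) in rom.items():
        angle = anim_frame.get(joint, 0.0)   # absent joints default to neutral
        robot_frame[joint] = min(max(angle, lo), hi)
    return robot_frame

# The character's hip_pitch exceeds the robot's limit; toe_curl has no
# robot counterpart and is dropped.
frame = retarget_frame({"hip_pitch": 75.0, "knee_pitch": 40.0, "toe_curl": 10.0})
```

A real implementation would operate on full joint trajectories rather than single frames, but the per-joint clamp shows how a character motion that exceeds the robot's range of motion can be pulled into the feasible configuration space.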
In some embodiments of the robot, the structural segments correspond with elements of a skeleton defined for the animation character in the animation data. In such embodiments, each of the structural segments has a size and a shape corresponding with a size and a shape of one of the elements of the skeleton. Particularly, the shape or the size of at least one of the structural segments can be modified to differ from the shape or the size of the corresponding element of the skeleton such that the robot has a range of motion corresponding to a range of motion of the animation character as defined in the reference motion. In this regard, the shape and/or size of the robot's links/segments may be modified to provide the desired range of motion or motion style so as to better mimic the movement of the animation character.
Briefly, the present description is directed toward processes for designing, developing, and fabricating a robot that is particularly well suited to mimic movements of an animation character. The processes taught herein also include generating control programs or commands for causing the fabricated robot to perform movements with a motion style that closely matches or even copies those of the animation character. Further, the description teaches a robot built according to these processes and configured to carry out motions similar to those of an animation character. The processes may be used with nearly any animation data (e.g., data defining kinematic structure of an animation character and movements of such a character) and to provide robots with a widely varying form to take on the form of the animation character in the real or physical world. A bipedal robot is described in the following description as one non-limiting example to mimic a bipedal humanoid character, but the concepts taught herein may readily be extended to other robot configurations.
The following description explains a method for developing a robot that is designed after an animation character, and the description also discusses that the robot can then be controlled or operated using a character-like walking motion generated according to the techniques discussed herein.
More specifically, the computer animation may involve skeletal animation, which is a technique in computer animation to represent a character 120 in two parts: a surface representation used to draw the character (e.g., the skin, the mesh, or the like) and a hierarchical set of interconnected bones (called the skeleton or rig) used to animate (e.g., pose, keyframe, and the like) the character's skin or mesh. Skeletal animation is a standard way to animate characters for a period of time, and the animation or motion of the character's skeleton (bones/links and its joints) may be completed using inverse kinematics, forward kinematic animation, or other techniques so as to define movements over the period of time. In this manner, the motion style or movement of the particular skeleton of the animation character can be defined by an animator, and the animation data provides not only arrangement of the links and joints (with the skeleton 120) but also its kinematics or movement definitions.
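Posing a hierarchical skeleton of this kind amounts to forward kinematics over the bone tree. The sketch below is planar for brevity, with assumed bone offsets; it is not the animation software's actual implementation:

```python
import numpy as np

def rot(theta):
    """Planar rotation matrix (this sketch uses a 2D skeleton for brevity)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def forward_kinematics(bones, angles):
    """Pose a hierarchical skeleton: walk the bone list root-to-leaf,
    composing each bone's joint rotation with its parent's transform.

    bones  : list of (parent_index or None, offset_in_parent_frame)
    angles : joint angle per bone, in radians
    Returns the world-space origin of every bone."""
    world_R = [None] * len(bones)
    world_p = [None] * len(bones)
    for i, (parent, offset) in enumerate(bones):
        R = rot(angles[i])
        if parent is None:                       # root bone (e.g., the pelvis)
            world_R[i], world_p[i] = R, np.zeros(2)
        else:
            world_R[i] = world_R[parent] @ R
            world_p[i] = world_p[parent] + world_R[parent] @ np.asarray(offset, float)
    return world_p

# Toy 3-bone chain: pelvis -> thigh -> shank, with assumed bone lengths (m).
leg = [(None, (0.0, 0.0)), (0, (0.0, -0.081)), (1, (0.0, -0.089))]
poses = forward_kinematics(leg, [0.0, 0.3, -0.2])
```

Inverse kinematics, as mentioned above, solves the reverse problem: finding the joint angles that place a bone (say, a foot) at a desired position.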
As shown in
In practice, the links of the robot 150 can be 3D printed (or otherwise fabricated) and the joints can be actuated by servo motors or the like. For character-like walking motion, an open-loop walking trajectory can be generated (and was by the inventors in experiments) by using, for example, trajectory optimization. One objective of the walking trajectory generated for the robot 150 is to be similar to (to mimic) the walking of the animation character 120 while preserving stability. This may be achieved in some cases by ensuring the Zero Moment Point (ZMP) criterion is satisfied during the walking by the robot 150.
The optimized walking can then be tested on hardware, e.g., an embodiment of the robot 150, so as to verify the robot 150 using the control program or walking trajectory will generate or provide stable walking (which, in many cases, can closely match that of the animation character 120 when it is animated to walk). The following description also explains that through the hardware experiments discrepancies can be identified between the simulation and hardware walking, which can limit the hardware walking at optimized speed (for example). With this in mind, the following description also includes some teaching as to how this discrepancy or problem can be resolved or at least addressed to produce an enhanced robot for mimicking animation characters.
The following provides a framework for developing hardware for a robot and for generating a walking trajectory (control programs/signals) for a robot so as to mimic an animation character and its movements. One main challenge in this regard is that the original animation (e.g., the animation data used to provide the animation character 120) and its motions are typically not designed considering physical constraints. For example, the inventors have proven their method by targeting an animation character (i.e., character 120) that has an ankle-foot section made up of three joints, each of which has 3 degrees of freedom (DOF). However, integrating nine actuators into the small foot of a physical robot may not be practical, so the developed robot 150 was based upon the animation character's ankle-foot section but was varied or modified to provide a more practical implementation that can still simulate the movement of the ankle-foot section of the animation character 120. Moreover, the walking motions in animation (or in the animation data provided by the animation software) are generated with key frames crafted by artists or animators, and these walking motions are often not physically realizable by a robot such as robot 150. In other words, just playing back the walking motion in the animation data on a real robot can result in the robot losing its balance and falling down.
Briefly, the present description is directed toward a human motion tracking controller as well as to a control method used to control a robot and robot with such a controller controlling its balanced movements. The controller (or tracking controller) is adapted to allow free-standing humanoid robots to track given or input motions, such as motions defined by human motion capture data (which may include key frame animation and other sources of human motion data).
One difficulty that was overcome or addressed in designing the tracking controller was that the given/input motion may not be feasible for a particular robot or in a particular environment or point in time, and, without the controller design described herein, the robot may fall down if it simply tries to fully track the motion with a naïve feedback controller. For example, while the captured human motion is feasible for the human subject/model providing the human capture data, the same motion may not be feasible for the robot because of the differences in the kinematics and dynamics of the particular floating-base humanoid robot with its particular design and arrangement of components including force- or torque-driven joints. Also, key frame animations are usually created without considering the dynamics of a robot, and, therefore, their use as human motion capture data input to a controller has been problematic for prior controllers as certain motions simply are not possible to perform by particular physical robots.
The following description teaches a tracking controller for use with humanoid robots that can track motion capture data while maintaining the balance of the controlled robot. The controller includes two main components: (1) a component that computes the desired acceleration at each degree of freedom (DOF) using, for example, a standard proportional-derivative (PD) controller; and (2) a component that computes the optimal joint torques and contact forces to realize the desired accelerations, from the first component, considering the full-body dynamics of the robot and the limits in contact forces as well as the constraints on contact link motions. By taking advantage of the fact that the joint torques do not contribute to the six DOFs of the robot's root, the tracking controller has been designed to decouple the optimization into two simpler sub-problems that can be solved in real time (e.g., for real time control over the robot with the controller issuing torque values to each driver of each joint of the robot).
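Component (1) can be sketched as a standard PD tracking law applied per degree of freedom; the gain values below are illustrative assumptions, not values from the description:

```python
import numpy as np

def desired_acceleration(q_ref, qd_ref, qdd_ref, q, qd, kp=400.0, kd=40.0):
    """Component (1): per-DOF PD tracking law — the reference acceleration
    as feed-forward plus feedback on position and velocity error.
    The gains kp and kd are illustrative, not values from the description."""
    q_ref, qd_ref, qdd_ref = (np.asarray(v, float) for v in (q_ref, qd_ref, qdd_ref))
    q, qd = np.asarray(q, float), np.asarray(qd, float)
    return qdd_ref + kp * (q_ref - q) + kd * (qd_ref - qd)

# One joint lagging 0.1 rad behind its reference at matched velocity:
acc = desired_acceleration([0.1], [0.0], [0.0], [0.0], [0.0])
```

Component (2) then solves for joint torques and contact forces that realize these accelerations subject to the full-body dynamics, which is the optimization the decoupling described next makes tractable in real time.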
One potential technique for enhancing the stability while controlling a robot is to run multiple tracking controllers at the same time and then to choose one of the controllers to determine the final control input. In addition to the controller that attempts to track the given reference motion (defined by the motion capture data input to the tracking controller), the tracking controller (or controller assembly) may include another instance of the controller run with a fixed static equilibrium pose as the reference. Then, the tracking controller assembly can choose the actual input to the robot depending on how close the robot is to losing its balance. In other words, the controller with the fixed reference can be used (or act to take over control) when the robot is determined to be about to lose balance so as to safely stop the robot at a static equilibrium pose (e.g., controlled to not perform or track motions that would cause it to fall or lose balance).
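The selection between the two controller instances can be sketched as follows; the scalar balance margin and its threshold are assumptions standing in for whatever balance measure a real implementation would use:

```python
def select_torques(track_torques, halt_torques, balance_margin, min_margin=0.01):
    """Choose the final control input between the two controller instances.
    balance_margin is a hypothetical scalar measure of how far the robot is
    from losing balance (e.g., ZMP distance to the support-polygon edge, in
    meters); the threshold value is an illustrative assumption."""
    if balance_margin < min_margin:
        # Near loss of balance: use the controller whose reference is the
        # fixed static equilibrium pose, safely stopping the robot.
        return halt_torques
    return track_torques  # Otherwise keep tracking the reference motion.
```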
In step 210, the method 200 may involve extracting and analyzing the animation data to obtain information useful for creating the physical robot (e.g., the animation character's skeleton) and for generating a mimicking motion with the created physical robot (e.g., reference motion as defined by animation data). This extracted data 215 is passed on to the next step or function 220 that involves setting target features for the physical robot. In one implementation, the animation data may be provided from a computer animation program such as MAYA.
Based on output 215 of the analysis in step 210, target features can be set in step 220 including the kinematic structure of the robot (e.g., the robot's links (size, shape, and so on), joints between links, actuators for joints, and the like), range of motion (ROM) for the components of the kinematic structure, and torque requirements of each joint of the kinematic structure. The processing and optimizing of step 220 (and other steps such as steps 240, 250, 270) may involve use of MATLAB Simulink/SimMechanics (R2013a) or other software products presently available or as may be available in the future to aid robotics design.
As shown, step 220 provides a set of output (target features) 230 for use in later design/development steps. Particularly, torque requirements 232 for each joint of the kinematic structure produced based on the character skeleton 215 are provided to a step/process 240 that functions to select mechatronic components such as actuators that can realize the target features (e.g., the torque requirements 232 for the joints). Further, the output 230 may include the kinematic structure of the robot as well as the ROM for the structure or its components as shown being provided at 234 to process step/function 250.
Step 250 involves designing the robot mechanics including segments/links and joints between links based both on the kinematic structure (e.g., links to match or correspond with bones in the character's skeleton) and the ROM for this structure (e.g., joints can be configured to provide stops or otherwise achieve the various ROMs of the animation character as may be defined by the character skeleton and/or its animation in the reference motion 215). As further input to step 250, the output/results 245 of step 240 may be provided including actuators, controllers, and other selected mechatronic components. The output 255 of step 250 may be a digital model such as a computer aided design (CAD) model of the designed robot mechanics, and this output 255 can be used as input to a step 260 of building/fabricating the robot (e.g., printing 3D parts to provide the links/segments, obtaining the actuators, the controller(s), and mounting/assembly hardware, and so on and then assembling these components). The output 265 of step 260 is a fabricated robot such as robot 150 of
Additionally, the output 230 of the target feature setting/selecting step 220 may include a target motion trajectory 238 that may be provided as input to a step 270 that involves generating a robot motion trajectory (e.g., a walking trajectory in this example). To generate this trajectory (e.g., an open-loop walking trajectory) for the robot (fabricated/assembled in step 260), the animation walking motion 215 obtained from the animation data in step 210 is modified in step 220 to provide a target motion trajectory 238 that is suitable for the robot 265. Specifically, step 220 may involve mapping the motion to the robot's configuration space. Then, the motion can be modified to keep the stance foot flat on the ground so that stability is guaranteed, such as through use of the traditional ZMP criterion, which is widely used in the robotics industry for controlling robots to provide bipedal walking.
Then, to generate the robot walking trajectory 270, minimal mechanics 257 are obtained from step/process 250. Further, it is assumed in step 270 that the walking motion should be designed/selected so as to be stable while preserving the walking style (motion style) of the animation character as defined or provided in reference motion 215 from the animation data. In some embodiments, step 270 involves use of trajectory optimization. Particularly, this optimization is performed with the objective function of minimizing the deviation from the target walking motions and keeping the center of pressure (CoP) or the ZMP in the support polygon.
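The ZMP-constrained objective described above can be sketched as follows. The point-mass ZMP formula is standard; the cost weights, penalty form, and one-dimensional (sagittal) support interval are illustrative assumptions rather than the optimization actually used:

```python
G = 9.81  # gravitational acceleration (m/s^2)

def zmp_x(com_x, com_z, com_xdd, com_zdd=0.0):
    """Sagittal ZMP of a point-mass model:
    x_zmp = x_com - z_com * xdd / (zdd + g)."""
    return com_x - com_z * com_xdd / (com_zdd + G)

def walking_cost(traj, target, support, w_zmp=100.0):
    """Toy objective: track the target motion while penalizing ZMP
    excursions outside the support interval (xmin, xmax).
    Each row of traj/target is (com_x, com_z, com_xdd)."""
    xmin, xmax = support
    cost = 0.0
    for (x, z, xdd), (tx, _, _) in zip(traj, target):
        cost += (x - tx) ** 2                                # deviation term
        p = zmp_x(x, z, xdd)
        cost += w_zmp * max(0.0, xmin - p, p - xmax) ** 2    # ZMP violation term
    return cost
```

An optimizer minimizing such a cost trades off fidelity to the character's motion against keeping the ZMP (or CoP) inside the support polygon, which is the balance the text describes.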
The product of step 270 is a motion trajectory that can be used by a robot controller provided on/in the robot hardware (or offboard in some cases) to control the actuators to perform a set of movements or a motion that mimics that of the reference motion 215 obtained from the input animation data. In step 280 the robot controller acts to control the robot based on this optimized motion trajectory 275 to perform the walking (or other) motion, which can initially be tested by tracking the joint trajectories. Step 280 may involve further adjustment of the motion trajectory to obtain movements that even more closely match that of the reference motion 215 from the animation data.
The method 200 may be applied to a whole robot or to select portions of a robot. In the following discussion, more details are provided for performing the steps/processes of the method 200 with regard to providing and operating robot hardware of a lower body or portion of an animation character that can be modeled as a bipedal humanoid robot. First, data is extracted from animation data. In one specific implementation, a skeleton structure of an animation character was extracted from an input animation data file from a computer graphics program/software product (e.g., MAYA or the like). In the particular embodiment as shown with animation character 120 of
Table I below provides the extracted values for the foot, shank, and thigh heights and the pelvis width. Compared to typical miniature bipedal robots, the pelvis is wide (18.8 centimeters) relative to the leg length (20 centimeters = 8.1 + 8.9 + 3.0 centimeters), which makes balancing a physical robot during walking more challenging.
TABLE I
Target and Final Kinematic Configuration (the target segment
dimensions are from the animation character and the ROM
requirements are from a simulation study)

         full body  pelvis   thigh     shank     foot      foot
dim.     (height)   (width)  (height)  (height)  (height)  (length)
anim.    73.0       18.8     8.1       8.9       3.0       10.0
robot    —          20.0     8.9       9.8       3.7       10.6
(unit: cm)

         hip yaw   hip roll  hip pitch  knee pitch  ankle pitch  ankle roll
         (intern~  (adduct~  (extend~   (extend~    (extend~     (invert~
ROM      extern)   abduct)   flex)      flex)       flex)        evert)
req.     −15~45    −35~15    −15~60     20~115      0~70         0~20
robot    −40~90    −55~90    −105~135   0~115       −30~120      −20~90
(unit: degree)
Once the character skeleton and reference motion are extracted from the animation data, the method can proceed to setting target features for the robot and its motion trajectory. The skeleton data of the animation character does not include inertia properties, and it is typically impractical to build a physical robot (or robot hardware) exactly as defined by the animation data for an animation character. For example, the ankle-foot portion of a skeleton of an animation character (e.g., character 120 of
In the exemplary implementation, the goal of the development method/process was to generate a robot that could mimic the walking motion performed by the animation character in the original animation (or animation data). With this in mind, the dynamics of the target walking motion were investigated in simulation, with details of this simulation and these experiments explained in more detail below. Through this simulation study, though, it was verified that a leg configuration of a typical miniature humanoid robot, which includes a thigh, a shank, and a foot connected by a 3 DOF hip joint, a 1 DOF knee joint, and a 2 DOF ankle joint, can accurately mimic the walking motion (as defined by the reference motion of the animation character 120 walking with its particular motion style). Furthermore, the simulation result provided an estimate of the range of motion (see Table I) and the torque requirements of each joint for the walking motion.
Once the target features are chosen/set based on the extracted information from the animation data, the method can proceed with selecting the mechatronic components for the robot. For example, this may involve selecting a set of joint actuators by searching from a set/database of available joint actuators that can meet the estimated torque requirements. The search, in some cases, can be limited such as to servo motors or another desired type of actuator with servo motors being useful to facilitate control and testing involving tracking joint trajectories. In one particular, but not limiting example, servos distributed/produced by Dynamixel were used as these are widely available and commonly used for humanoid robots with joint position controllers (which can also be chosen to build the robot). More specifically, the MX-106T from Dynamixel was used as the actuator for the hip and knee joints while the MX-64T from Dynamixel was used as the actuator for the robot's ankle joints. The maximum torques that the MX-106T and MX-64T can exert (τ106,max Nm and τ64,max Nm) given the angular velocities (Ω rad/s) are:
    τ106,max(Ω) ≈ 8.81 − 1.80 × |Ω|
    τ64,max(Ω)  ≈ 4.43 − 0.66 × |Ω|        Eq. (1)
which can be set as constraints in the walking motion optimization (as discussed below).
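As an illustrative, non-limiting sketch, the torque limits of Eq. (1) may be written as simple functions and used as a feasibility check when selecting actuators; the numeric coefficients come from Eq. (1), while the example joint loads below are assumed values:

```python
# Torque-velocity limits from Eq. (1): the available torque (Nm) drops
# linearly with angular speed (rad/s) for each servo model.
def tau_max_106(omega):
    return 8.81 - 1.80 * abs(omega)

def tau_max_64(omega):
    return 4.43 - 0.66 * abs(omega)

def within_limit(tau, omega, tau_max):
    """Feasibility check of the form used as constraint C4 below."""
    return abs(tau) < tau_max(omega)

# Assumed example loads: a 5 Nm joint torque at 1 rad/s is feasible for
# the MX-106T but not for the MX-64T.
print(within_limit(5.0, 1.0, tau_max_106))  # True
print(within_limit(5.0, 1.0, tau_max_64))   # False
```

This is why the stronger MX-106T would be assigned to the more heavily loaded hip and knee joints in such a selection pass.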
Further, in the specific test or prototype implementation used by the inventors, an OpenCM9.04 microcontroller board was used with an OpenCM458EXP expansion board. With this controller for the robot, joint position commands were sent to the servos every 10 milliseconds over TTL. In the test run, the power was provided off-board, but it is expected that the controller and power of many robots developed and built with the techniques described herein will be provided on-board.
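A minimal sketch of such a 10-millisecond command loop is shown below; send_position is a hypothetical stand-in for the TTL servo interface (the actual OpenCM firmware API is not detailed here):

```python
import time

# Fixed-period joint-position command loop (10 ms default, per the text).
# Each q is one tuple of 12 joint targets; send_position is a stand-in
# for the actual servo communication call.
def run_trajectory(trajectory, send_position, dt=0.010):
    for q in trajectory:
        t0 = time.monotonic()
        send_position(q)
        remaining = dt - (time.monotonic() - t0)
        if remaining > 0:  # hold the command period constant
            time.sleep(remaining)

# Usage with a stub that records what would be sent to the servos.
sent = []
run_trajectory([(0.0,) * 12, (0.1,) * 12], sent.append, dt=0.001)
print(len(sent))  # 2
```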
Once the target features are selected or generated from the animation data, the robot mechanics can be designed or generated. Continuing with the example of the animation character 120 of
As in the animation character 120, the three rotational axes of the hip joints and the two rotational axes of the ankles are co-aligned at each joint. In addition, the method of design involved attempting to align the hip, knee, and ankle pitch joint in the sagittal plane at the default configuration (e.g., stand straight configuration). However, due to the size of the ankle servos (e.g., other mechatronic components), the ankle pitch joint was placed forward in the final design of the robot 150 as shown in
Once the design of the robot is completed, the robot's parts can be fabricated or otherwise obtained and assembled to provide the robot. In the ongoing example of robot 150, the segments/links were all 3D printed and then assembled using aluminum frames and the actuators (as shown in
TABLE II
Hardware Specifications for Fabricating Robot

    height      35 cm
    weight      2.55 kg
    leg len     22 cm
    pel wid     20 cm
    DOF         12
    actuator    MX-106T (×4), MX-64T (×8)
    segment     RGD525
    controller  OpenCM9.04, OpenCM458EXP
    power       off-board
    ctrl prot   TTL

(leg len: leg length, pel wid: pelvis width, ctrl prot: control protocol)
With these aspects of the robot development and control method understood, it may be useful to further describe the use of this method to provide a stable walking motion that mimics the walking motion of an animation character using animation data (e.g., to mimic the animation character 120 shown in
In the working example of character 120, the animation data provided reference motion in the form of two similar, but not identical, asymmetric gaits. From these gaits, the method involved generating (or designing) four different single stance motions. For the animation walking motions, one stride took 1.1 to 1.2 seconds. Further, a double support phase composed about 15 percent of a full step, and a stance phase started with a heel-strike pose and ended with a toe-off pose. Further, from the animation data, it was determined that the foot is rotated about 30 degrees during stance (see below in
In the exemplary implementation/test of the robot development method, the inventors aimed at generating one walking motion for the robot to mimic the animation character 120 and its walking motion (i.e., the reference motion input to the method). This one walking motion was designed to look similar to the animation, but not necessarily replicate the reference motion. Since the robot 150 has a different kinematic configuration than the animation character, the position trajectories of the segments/links were extracted (rather than the joint trajectories) in the Euclidean space
In the next step of setting target features, the target joint trajectories were generated for the single stance motion for the robot,
The original single stance motion,
Target motion trajectories for the robot,
By modifying the four different animation walking motions with the same process, the four target joint trajectories were acquired for the single stance phase. All four walking motions are shown with dotted lines in the graph 400 of
Since the modified motion is physically incorrect (e.g., the CoP moves outside the foot in the sequence shown in
Trajectory optimization is a well-established method for designing optimal trajectories for given criteria and constraints. The single stance motion can be parameterized as φ̂SS with 145 (=12×12+1) parameters, where 12 parameters represent equally time-spaced nodes of each joint trajectory (×12), and an additional parameter defines the time duration of the single stance motion (+1). To evaluate a set of parameters φ̂SS, joint trajectories are reconstructed from the parameters, the forward kinematics (FK) is solved to investigate the motion, and the inverse dynamics (ID) is solved to examine the dynamics. More specifically, the joint trajectories can be reconstructed with spline interpolation, and a physics simulation can be run with MATLAB Simulink/SimMechanics (R2013a) (or other software tools) in joint angle tracking mode with the stance foot welded on the ground to solve FK and ID.
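The 145-parameter encoding described above may be sketched as follows; for brevity, linear interpolation stands in for the spline interpolation, and the example parameter values are arbitrary assumptions:

```python
# Single-stance parameter vector: 12 equally time-spaced nodes for each
# of 12 joints, plus one duration parameter (12*12 + 1 = 145).
N_JOINTS, N_NODES = 12, 12

def unpack(params):
    assert len(params) == N_JOINTS * N_NODES + 1  # 145 parameters
    duration = params[-1]
    nodes = [params[j * N_NODES:(j + 1) * N_NODES] for j in range(N_JOINTS)]
    return nodes, duration

def joint_angle(nodes_j, duration, t):
    """Angle of one joint at time t, interpolating its equally spaced nodes
    (a spline would be used in practice; linear blending keeps this short)."""
    u = (t / duration) * (N_NODES - 1)
    i = min(int(u), N_NODES - 2)
    frac = u - i
    return nodes_j[i] * (1 - frac) + nodes_j[i + 1] * frac

# Example: flat trajectories at 0.5 rad with a 1.0 s stance (made-up values).
params = [0.5] * (N_JOINTS * N_NODES) + [1.0]
nodes, T = unpack(params)
print(round(joint_angle(nodes[0], T, 0.37), 6))  # 0.5
```

A full evaluation would feed the reconstructed trajectories into the FK/ID simulation described in the text.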
From the results of FK and ID, the single stance motion can be evaluated based on how similar the motion is to the target motion (∥φ̂SS, φ̂tgt,SS∥) and how close the CoP remains to the center of the stance foot (∥CoPSS∥). The difference of the motions and the CoP deviation are defined as root mean squares: the motion difference, ∥φ̂SS, φ̂tgt,SS∥, as the root mean square of the differences of the joint angles, and the CoP deviation, ∥CoPSS∥, as a root mean square over the stance phase.
Since a parametrized single stance motion, φ̂SS, can be evaluated, the problem of generating the optimal motion can be formulated as a nonlinear constrained optimization problem as:

    min_{φ̂SS}  cSS ∥φ̂SS, φ̂tgt,SS∥ + cCoP ∥CoPSS∥

subject to:

    P^foot_{SS, k=1,K} = P^foot_{tgt,SS, k=1,K}    (C1)
    ∥CoPSS, CoPlimit∥ < 0                          (C2)
    z^sw-foot_{SS, k=2,…,K−1} > 0                  (C3)
    τ < τmax(Ω)                                    (C4)
    |GRFz| > 0                                     (C5)
    |GRFx,y / GRFz| < μ                            (C6)
One goal may be to minimize the deviation of the motion, φ̂SS, from the target motion while increasing the stability of the motion, i.e., keeping the CoP near the center of the stance foot. The weighting coefficients, cSS and cCoP, are heuristically chosen to balance these goals. Constraint 1, C1, guarantees that the swing foot starts and ends at the target positions, which is necessary for symmetric walking. All other constraints (C2 to C6) ensure physically realizable walking motions: C2 ensures that the model does not fall down; C3 keeps the swing foot (z^sw-foot_{SS, k=2,…,K−1}) above the ground; C4 checks that the servos are capable of generating the required torques (Eq. (1)); C5 ensures the stance foot does not lift off the ground; and C6 is intended to avoid slipping at the stance foot (GRFx,y,z are the ground reaction forces along each axis, and the inventors used a friction coefficient μ=0.6 in the tested prototype). In theory, C2 is enough to prevent the robot from falling down without the cost term ∥CoPSS∥. However, by keeping the CoP near the center of the foot, the modeling errors and small disturbances (which always exist in real environments) can be compensated for in the operation of the robot.
In the test, the covariance matrix adaptation evolution strategy (CMA-ES) was used to optimize the trajectory parameters, φ̂SS. To enforce the constraints, a large constant, e.g., cL=10^6, was added to the cost when the constraints were violated. One of the target trajectories was used as the initial parameter, and the CMA-ES was run with a population size of 64 for 6000 generations. Without optimizing the implementation for running speed, it took about two days to run on a conventional desktop computer.
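The penalty-based constraint handling may be sketched as shown below; a simple random search stands in for CMA-ES, and the cost and constraint functions are toy stand-ins for the full FK/ID evaluation (all illustrative assumptions):

```python
import math
import random

C_LARGE = 1e6  # large constant added to the cost per violated constraint

def rms(a, b):
    """Root-mean-square difference of two equal-length sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def penalized_cost(params, target, constraints, c_ss=1.0):
    # Toy stand-in: the motion term is the RMS distance to the target
    # parameters; a real evaluation would run the FK/ID simulation and
    # include the CoP term as well.
    cost = c_ss * rms(params, target)
    cost += sum(C_LARGE for ok in constraints if not ok(params))
    return cost

# Hypothetical constraint: keep every parameter below 1.0.
random.seed(0)
target = [0.5] * 5
feasible = lambda p: all(x < 1.0 for x in p)
candidates = ([random.uniform(0.0, 2.0) for _ in range(5)]
              for _ in range(2000))
best = min(candidates, key=lambda p: penalized_cost(p, target, [feasible]))
print(penalized_cost(best, target, [feasible]) < C_LARGE)  # True
```

In the actual test, the same penalty scheme was applied inside CMA-ES rather than a blind random search.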
With regard to generating a robot walking trajectory (e.g., the double stance phase), once the single stance motion, φ̂opt,SS, is optimized, the double stance phase motion, φ̂DS, can be generated to complete the full walking motion; it connects the last pose of the single stance phase and the first pose of the next single stance phase. The segment positions in the Euclidean space are interpolated between the target start and end poses, and then the IK is solved to get the joint trajectories. In the test case, the interpolated double stance motion is stable, as the CoP remains in the support polygon, progressing from the previous stance foot toward the next stance foot. If the motion were, instead, not stable, a stable one could be generated by optimizing the trajectory similarly to the single stance phase. For optimizing the double support motion, the cost would be ∥CoPDS∥. Since a target double stance motion is not available, the first constraint, C1, is applied for all time steps, and CoPlimit will be the support polygon which covers both feet of the robot.
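The Euclidean-space interpolation of the double stance connection may be sketched as follows, with solve_ik as a hypothetical stand-in for the robot's inverse kinematics solver:

```python
# Linearly blend segment positions between the end pose of one single
# stance and the start pose of the next, then solve IK at each step.
def interpolate_pose(p_start, p_end, s):
    """Blend two poses (lists of (x, y, z) triples) at phase s in [0, 1]."""
    return [tuple(a + s * (b - a) for a, b in zip(pa, pb))
            for pa, pb in zip(p_start, p_end)]

def double_stance(p_start, p_end, solve_ik, steps=10):
    return [solve_ik(interpolate_pose(p_start, p_end, k / (steps - 1)))
            for k in range(steps)]

# Usage with an identity "IK" just to show the phase sweep for one segment.
poses = double_stance([(0.0, 0.0, 0.0)], [(0.1, 0.0, 0.0)],
                      solve_ik=lambda p: p, steps=5)
print(poses[0], poses[-1])
```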
The full walking motion is then obtained by connecting the single stance motion and the double stance motion. The joint trajectories of a full stride of one leg from single stance, double stance, swing, and double stance are shown in
The robot 150 was also operated to test the physical design from the animation character and whether the animation character's walk was accurately mimicked. The optimized walking trajectory was tested on the hardware by tracking the open-loop joint trajectories with the servo motors. When the optimized trajectory was played back (used to control the robot's actuators), the robot was found to wobble forward because the robot did not perfectly produce the motion. Specifically, the stance leg flexes more and scuffs the swing foot at the beginning and end of the swing phase. This causes the swing foot to push against the ground and the stance foot to slip, which results in unstable walking. The test helped verify that the robot produces different motions under load by comparing the walking to sky-walking (playing the walking motion in the air), and two sources of motion error were believed to be the deformations of the links and the error of the servo motors.
During the test run, the robot was observed to slip less when the optimized motion was played back slower, and the resulting walking looked closer to the optimized walking. In theory, the CoP trajectory will be closer to the CoM trajectory for slower paced walking. The optimized CoM trajectory (shown in
It is believed based on the test results that the system/method can be enhanced to better track optimal trajectories. First, it is likely that optimal trajectories can be generated that are easier to track. For example, when optimizing a trajectory, a cost term can be included regarding the deformation of the linkages. In addition, just as one optimizes for robustness against CoP deviation, robustness against swing foot height error can also be considered. Second, better segments can reduce the deformations. Further, one can investigate materials and structural designs to provide stiffer segments. Also, improving the strength of the segments may be desirable since the segments occasionally broke when the robot fell down during testing. Third, better tracking may be achieved by improving the servo control of the motors. Further, a feedforward controller can be added to the robot that adds offsets to the angular trajectories considering the load the joints and segments will bear.
The system 600 also includes a robot development workstation 630, e.g., a computer system particularly adapted for providing the functions shown for method 200 of
The development suite 640 includes a data extraction module 642 that acts to extract data from the animation data 624 including the character skeleton 626 and the reference motion 628 for the skeleton 626 (e.g., to perform step 210 of method 200 of
The development suite 640 also includes a robot mechatronic and mechanics module 646 for generating/selecting mechatronic components including actuators 658 using the torque requirements 652 as input (e.g., to perform step 240 of method 200 of
The motion trajectory 664 is communicated to the robot 670 and stored in memory 674 as shown at 676. The robot 670, which may be a bipedal humanoid robot or other robot with the motion trajectory being a walking motion, includes a controller (e.g., a tracking controller) 672 that processes the motion trajectory 676 to generate a set of control signals (e.g., joint torques) 678 that are communicated to the drivers 680 (e.g., servo motors or other actuators as defined by development suite 640 at 658) of the robot 670, which act at joints 684 to move segments/links 688 (which are provided based on structural segment definitions 660) so as to follow the motion trajectory 676 (e.g., to follow a walking motion) and mimic movements 628 of the animation character 626.
Although the invention has been described and illustrated with a certain degree of particularity, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the combination and arrangement of parts can be resorted to by those skilled in the art without departing from the spirit and scope of the invention, as hereinafter claimed.
Yamane, Katsu, Kim, Joohyung, Song, Seungmoon
Assigned to Disney Enterprises, Inc. (assignments executed by Kim and Song on Feb. 20, 2015, and by Yamane on Feb. 23, 2015; application filed Feb. 24, 2015).