A game device is provided that is capable of expressing, through a relatively simple process, a picture showing a first character object grabbing and pulling a clothing object of a second character object. According to the present invention, a direction from the position of the second character object to the position of the first character object is obtained, and based on the obtained direction, the positions of at least some of the vertexes of the clothing object are changed (S205). Further, based on the obtained direction, the position of a predetermined portion of the first character object, which is used to pull the clothing object, is changed (S206).
1. A game device for displaying a picture showing a first character object pulling a clothing object included in a second character object, in which the first character object and the second character object are placed in a virtual three dimensional space, comprising:
direction obtaining means for obtaining a direction from a position of the second character object to a position of the first character object;
clothing object control means for changing positions of at least some of vertexes of the clothing object, based on the direction obtained by the direction obtaining means; and
first character object control means for changing a position of a predetermined portion of the first character object, based on the direction obtained by the direction obtaining means, the predetermined portion being used to pull the clothing object.
8. A control method for controlling a game device comprising a processor for displaying a picture showing a first character object pulling a clothing object included in a second character object, in which the first character object and the second character object are placed in a virtual three dimensional space, the method comprising:
a direction obtaining step of obtaining by the processor a direction from a position of the second character object to a position of the first character object;
a clothing object control step of changing by the processor positions of at least some of vertexes of the clothing object, based on the direction obtained at the direction obtaining step; and
a first character object control step of changing by the processor a position of a predetermined portion of the first character object, based on the direction obtained at the direction obtaining step, the predetermined portion being used to pull the clothing object.
9. A non-transitory computer readable information storage medium storing a program for causing a computer to function as a game device for displaying a picture showing a first character object pulling a clothing object included in a second character object, in which the first character object and the second character object are placed in a virtual three dimensional space, the program for causing the computer to function as:
direction obtaining means for obtaining a direction from a position of the second character object to a position of the first character object;
clothing object control means for changing positions of at least some of vertexes of the clothing object, based on the direction obtained by the direction obtaining means; and
first character object control means for changing a position of a predetermined portion of the first character object, based on the direction obtained by the direction obtaining means, the predetermined portion being used to pull the clothing object.
2. The game device according to claim 1, wherein
the at least some of the vertexes of the clothing object move according to a reference point set in the virtual three dimensional space, and
the clothing object control means changes a position of the reference point, based on the direction obtained by the direction obtaining means.
3. The game device according to claim 1, wherein
the first character object control means changes the position of the predetermined portion, based on the positions of the at least some of the vertexes of the clothing object.
4. The game device according to claim 1, further comprising
distance obtaining means for obtaining a distance between the position of the second character object and the position of the first character object,
wherein the clothing object control means changes the positions of the at least some of the vertexes of the clothing object, based on the direction obtained by the direction obtaining means and the distance obtained by the distance obtaining means.
5. The game device according to claim 4, wherein
the clothing object control means changes the positions of the at least some of the vertexes of the clothing object, based on the direction obtained by the direction obtaining means and a distance obtained by multiplying the distance obtained by the distance obtaining means by a factor, and
the clothing object control means includes means for changing the factor as time passes.
6. The game device according to claim 4, further comprising
motion data storage means for storing motion data describing a basic motion of the predetermined portion in a case where the first character object pulls the clothing object, and
predetermined portion position obtaining means for obtaining the position of the predetermined portion of the first character object, the position being specified based on the motion data,
wherein the distance obtaining means obtains a distance between the position of the second character object and the position of the predetermined portion of the first character object.
7. The game device according to claim 6, wherein the direction obtaining means obtains the direction from the position of the second character object to the position of the predetermined portion of the first character object.
The present invention relates to a game device, a game device control method, a program, and an information storage medium.
There is known a game device for displaying a game screen image showing a picture obtained by viewing a virtual three dimensional space from a given viewpoint. For example, there is known a game device for displaying a game screen image showing a picture obtained by viewing from a given viewpoint a virtual three dimensional space where a plurality of player objects representative of soccer players are placed, to thereby provide a soccer game.
Here, in an actual soccer game, or the like, a player may grab and pull the uniform of another player to block the other player from playing. Therefore, for example, if a picture showing a player object grabbing and pulling the uniform of another player object can be displayed in the above-described soccer game, the reality of the soccer game can be enhanced. In displaying such a picture showing a player object grabbing and pulling the uniform of another player object, it is necessary to avoid a heavy processing load.
The present invention has been conceived in view of the above, and an object thereof is to provide a game device, a game device control method, a program, and an information storage medium capable of expressing, through a relatively simple process, a picture showing a first character object grabbing and pulling a clothing object of a second character object.
In order to attain the above described object, a game device according to the present invention is a game device for displaying a picture showing a first character object pulling a clothing object included in a second character object, in which the first character object and the second character object are placed in a virtual three dimensional space, comprising direction obtaining means for obtaining a direction from a position of the second character object to a position of the first character object; clothing object control means for changing positions of at least some of vertexes of the clothing object, based on the direction obtained by the direction obtaining means; and first character object control means for changing a position of a predetermined portion of the first character object, based on the direction obtained by the direction obtaining means, the predetermined portion being used to pull the clothing object.
Also, a game device control method according to the present invention is a control method for controlling a game device for displaying a picture showing a first character object pulling a clothing object included in a second character object, in which the first character object and the second character object are placed in a virtual three dimensional space, the method comprising a direction obtaining step of obtaining a direction from a position of the second character object to a position of the first character object; a clothing object control step of changing positions of at least some of vertexes of the clothing object, based on the direction obtained at the direction obtaining step; and a first character object control step of changing a position of a predetermined portion of the first character object, based on the direction obtained at the direction obtaining step, the predetermined portion being used to pull the clothing object.
Also, a program according to the present invention is a program for causing a computer, such as, e.g., a consumer game device, a portable game device, a portable phone, a personal digital assistant (PDA), a personal computer, or the like, to function as a game device for displaying a picture showing a first character object pulling a clothing object included in a second character object, in which the first character object and the second character object are placed in a virtual three dimensional space, the program for causing the computer to function as direction obtaining means for obtaining a direction from a position of the second character object to a position of the first character object; clothing object control means for changing positions of at least some of vertexes of the clothing object, based on the direction obtained by the direction obtaining means; and first character object control means for changing a position of a predetermined portion of the first character object, based on the direction obtained by the direction obtaining means, the predetermined portion being used to pull the clothing object.
Also, an information storage medium according to the present invention is a computer readable information storage medium storing the above described program. A program distribution device according to the present invention has an information storage medium recording the above described program, and reads the above described program from the information storage medium and distributes the read program. A program distribution method according to the present invention is a program distribution method for reading the above described program from an information storage medium recording the above described program and distributing the read program.
The present invention relates to a game device for displaying a picture showing a first character object pulling a clothing object included in a second character object, in which the first character object and the second character object are placed in a virtual three dimensional space. According to the present invention, a direction from the position of the second character object to the position of the first character object is obtained, and based on the obtained direction, the positions of at least some of the vertexes of the clothing object are changed. Further, based on the obtained direction, the position of a predetermined portion of the first character object, the portion being used to pull the clothing object, is changed. According to the present invention, it is possible to express, through a relatively simple process, a picture showing the first character object grabbing and pulling the clothing object of the second character object.
Also, according to one aspect of the present invention, the at least some of the vertexes of the clothing object may move according to a reference point set in the virtual three dimensional space, and the clothing object control means may change the position of the reference point, based on the direction obtained by the direction obtaining means.
Also, according to one aspect of the present invention, the first character object control means may change the position of the predetermined portion, based on the positions of the at least some of the vertexes of the clothing object.
Also, according to one aspect of the present invention, the above described game device may further comprise distance obtaining means for obtaining a distance between the position of the second character object and the position of the first character object, wherein the clothing object control means may change the positions of the at least some of the vertexes of the clothing object, based on the direction obtained by the direction obtaining means and the distance obtained by the distance obtaining means.
Also, according to one aspect of the present invention, the clothing object control means may change the positions of the at least some of the vertexes of the clothing object, based on the direction obtained by the direction obtaining means and a distance obtained by multiplying the distance obtained by the distance obtaining means by a factor, and the clothing object control means may include means for changing the factor as time passes.
Also, according to one aspect of the present invention, the above described game device may further comprise motion data storage means for storing motion data describing a basic motion of the predetermined portion in a case where the first character object pulls the clothing object, and predetermined portion position obtaining means for obtaining the position of the predetermined portion of the first character object, the position being specified based on the motion data, wherein the distance obtaining means may obtain the distance between the position of the second character object and the position of the predetermined portion of the first character object.
Also, according to one aspect of the present invention, the direction obtaining means may obtain the direction from the position of the second character object to the position of the predetermined portion of the first character object.
In the following, one example of an embodiment of the present invention will be described in detail, based on the accompanying drawings. A game device according to an embodiment of the present invention is realized, using, e.g., a consumer game device, a portable game device, a portable phone, a personal digital assistant (PDA), a personal computer, or the like. Here, a case in which a consumer game device is used to realize a game device according to an embodiment of the present invention will be described.
The consumer game device 11 is a publicly known computer game system, and comprises a bus 12, a microprocessor 14, an image processing unit 16, a sound processing unit 20, a DVD-ROM reproduction unit 24, a main memory 26, an input output processing unit 30, and a controller 32. Structural elements other than the controller 32 are accommodated in the enclosure of the consumer game device 11.
The bus 12 is used to exchange an address and data among the respective units of the consumer game device 11. The microprocessor 14, image processing unit 16, main memory 26, and input output processing unit 30 are connected via the bus 12 for data exchange.
The microprocessor 14 controls the respective units of the consumer game device 11, based on an operating system stored in a ROM (not shown), a program and data read from the DVD-ROM 25, and data read from the memory card 28. The main memory 26 comprises, e.g., a RAM. A program and data read from the DVD-ROM 25 or the memory card 28 are written into the main memory 26 when required. The main memory 26 is used also as a working memory of the microprocessor 14.
The image processing unit 16 includes a VRAM, and renders a game screen image into the VRAM, based on the image data sent from the microprocessor 14. The image processing unit 16 converts the game screen image into a video signal, and outputs the resultant video signal at a predetermined time to the monitor 18.
The input output processing unit 30 is an interface via which the microprocessor 14 accesses the sound processing unit 20, DVD-ROM reproduction unit 24, memory card 28, and controller 32. The sound processing unit 20, DVD-ROM reproduction unit 24, memory card 28, and controller 32 are connected to the input output processing unit 30.
The sound processing unit 20 includes a sound buffer, in which various sound data, such as game music, game sound effects, messages, and so forth, read from the DVD-ROM 25 are stored. The sound processing unit 20 reproduces the various sound data stored in the sound buffer, and outputs them via the speaker 22.
The DVD-ROM reproduction unit 24 reads a program recorded in the DVD-ROM 25 according to an instruction from the microprocessor 14. Note that although the DVD-ROM 25 is used here to supply a program to the consumer game device 11, any other information storage medium, such as a CD-ROM, a ROM card, and the like, may be used. Alternatively, a program may be supplied to the consumer game device 11 from a remote place through a communication network, e.g., the Internet, and the like.
The memory card 28 includes a nonvolatile memory (e.g., EEPROM, and the like). The consumer game device 11 has a plurality of memory card slots, in which to mount the memory card 28. Various game data, e.g., save data and the like, is stored in the memory card 28.
The controller 32 is a general purpose operation input means with which a user inputs various game operations. The input output processing unit 30 scans the states of the respective units of the controller 32 at a constant cycle (e.g., every 1/60th of a second), and forwards an operating signal describing the scanning result through the bus 12 to the microprocessor 14. The microprocessor 14 determines a game operation carried out by the user, based on the operating signal. It is possible to connect a plurality of controllers 32 to the consumer game device 11. The microprocessor 14 controls a game, based on the operating signals input from the respective controllers 32.
In the game device 10 having the above-described structure, a game program read from the DVD-ROM 25 is executed, whereby, e.g., a soccer game is provided.
In the main memory 26 of the game device 10, a virtual three dimensional space is created.
A skeleton (bones and joints) is set inside the player object 50.
These skeleton parts are managed in a hierarchical structure with the hips 60 as a root, such as is shown in
A skeleton part (bone) set on the player object 50 and a vertex of a polygon forming the player object 50 are correlated to each other. Then, when a skeleton part (bone) rotates, the vertex of the polygon correlated to that skeleton part moves according to the rotating skeleton part. As a result, the shape (posture) of the player object 50 will change according to the rotation of the skeleton part.
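As a rough illustrative sketch of this vertex-to-bone correlation (all names and values below are hypothetical, and the actual implementation is not given in the patent), the vertices correlated to a bone can be rotated about the bone's joint together with the bone; a minimal 2D version:

```python
import math

def rotate_about(point, pivot, angle_rad):
    """Rotate a 2D point about a pivot by angle_rad (counter-clockwise)."""
    px, py = point[0] - pivot[0], point[1] - pivot[1]
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (pivot[0] + c * px - s * py, pivot[1] + s * px + c * py)

# Hypothetical data: a forearm bone whose joint (the elbow) sits at `elbow`,
# and the mesh vertices correlated to that bone.
elbow = (0.0, 1.0)
correlated_vertices = [(0.1, 1.2), (0.4, 1.1), (0.7, 1.3)]

# When the bone rotates, every correlated vertex follows the same rotation,
# so the shape (posture) of the player object changes with the skeleton.
bent = [rotate_about(v, elbow, math.radians(30)) for v in correlated_vertices]
print(bent)
```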
A virtual camera 46 is placed in the virtual three dimensional space 40. A game screen image showing a picture obtained by viewing the virtual three dimensional space 40 from the virtual camera 46 is displayed on the monitor 18.
In the main memory 26, information describing the state (a position, a posture, a moving speed, a movement direction, and the like) of each player object 50 placed in the virtual three dimensional space 40 is stored. For example, as the posture of the player object 50 is specified according to the states (a rotational angle, and the like) of the respective skeleton parts of the player object 50, information describing the states of the respective skeleton parts of the player object 50 is stored in the main memory 26 as information describing the posture of the player object 50. In addition, in the main memory 26, information describing the state (a position, a moving speed, a movement direction, and the like) of the ball object and the state (a position, a viewing direction, an angle of view, and the like) of the virtual camera 46 are also stored. The information stored in the main memory 26 is updated every predetermined period of time (1/30th of a second in this embodiment). In addition, based on the information, a game screen image is updated every predetermined period of time (1/30th of a second in this embodiment).
Note that, in this specification, updating the information describing the states of the player object 50, the ball object, and the virtual camera 46, stored in the main memory 26, is referred to as “updating the states of the player object 50, the ball object, and the virtual camera 46”.
In the following, a technique for preferably expressing a picture showing a player object 50 grabbing and pulling the uniform object 52 of a player object 50 belonging to the opponent team (hereinafter referred to as an “opponent player object”) in a soccer game will be described.
In the game device 10, whether or not the player object 50 satisfies a condition for starting a motion of grabbing the uniform object 52 of the opponent player object (hereinafter referred to as a “grabbing motion”) is determined. Specifically, the microprocessor 14 determines whether or not the player object 50 satisfies a condition for starting a grabbing motion as described below.
Initially, the microprocessor 14 determines whether or not a grab point of any opponent player object is included in a predetermined area ahead of the player object 50.
When it is determined that a grab point 80 of any opponent player object is included in the predetermined area 78 ahead of the player object 50, the microprocessor 14 determines whether or not the opponent player object has its back toward the player object 50.
When the grab point 80 of the opponent player object is included in the predetermined area 78 ahead of the player object 50 and the opponent player object has its back toward the player object 50, the microprocessor 14 determines that the player object 50 satisfies the condition for starting a grabbing motion. Then, the microprocessor 14 causes the player object 50 to start a motion of grabbing the uniform object 52 of the opponent player object.
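A minimal sketch of how such a start condition could be evaluated, assuming the predetermined area 78 is a cone ahead of the player and treating "has its back toward the player" as the opponent facing roughly the same direction as the player; the thresholds and names are assumptions, not values from the patent:

```python
import math

def _normalize(v):
    length = math.hypot(v[0], v[1])
    return (v[0] / length, v[1] / length)

def may_start_grab(player_pos, player_facing, opp_grab_point, opp_facing,
                   max_dist=1.2, max_angle_deg=45.0):
    """True if the grab point lies in a cone ahead of the player and the
    opponent has its back toward the player (both thresholds are assumed)."""
    to_grab = (opp_grab_point[0] - player_pos[0], opp_grab_point[1] - player_pos[1])
    dist = math.hypot(to_grab[0], to_grab[1])
    if dist == 0.0 or dist > max_dist:
        return False
    facing = _normalize(player_facing)
    toward = _normalize(to_grab)
    # The grab point must be inside the predetermined area ahead of the player,
    # modelled here as a cone around the player's facing direction.
    if facing[0] * toward[0] + facing[1] * toward[1] < math.cos(math.radians(max_angle_deg)):
        return False
    # The opponent's back is toward the player if the opponent faces roughly
    # the same direction as the player (i.e., away from the player).
    opp = _normalize(opp_facing)
    return facing[0] * opp[0] + facing[1] * opp[1] > 0.0

print(may_start_grab((0.0, 0.0), (0.0, 1.0), (0.2, 0.8), (0.0, 1.0)))  # True
```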
In the following, a process to be carried out to display a picture showing a player object 50 grabbing the uniform object 52 of an opponent player object will be described.
Note that the player object 50 which is determined as satisfying the condition for starting a grabbing motion, that is, the player object 50 which is to grab the uniform object 52, will be hereinafter referred to as a “first player object”. An opponent player object of which the uniform object 52 is to be grabbed by the first player object will be hereinafter referred to as a “second player object”. Further, the following description is made based on the assumption that the first player object (first character object) tries to grab the uniform object 52 (clothing object) of the second player object (second character object) with its right hand.
The process described below is carried out repeatedly, once per frame, while the first player object performs the grabbing motion.
The following description is given based on the assumption that motion data on a grabbing motion defines the states (a rotational angle, and the like) of the respective skeleton parts (the right upper arm 68r, the right forearm 70r, the hand 72r, and so forth) in each of the frames from the first frame (a grabbing motion start frame) to the tenth frame (a grabbing motion completion frame). In addition, the description is made based on the assumption that one frame is 1/30th of a second long.
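As a hypothetical illustration of how such motion data could be organized (the layout, names, and values are assumptions, not from the patent), a per-frame table of skeleton-part rotations:

```python
# Hypothetical layout for the grabbing-motion data: for each of the ten frames,
# the rotation (Euler angles in degrees, placeholder values) of each skeleton
# part involved in the motion. Frame 1 is the start frame, frame 10 the
# completion frame, and each frame lasts 1/30th of a second.
GRAB_MOTION = {
    frame: {
        "right_upper_arm": (0.0, 0.0, 6.0 * frame),
        "right_forearm":   (0.0, 0.0, 3.0 * frame),
        "right_hand":      (0.0, 0.0, 1.5 * frame),
    }
    for frame in range(1, 11)
}

FRAME_DURATION = 1.0 / 30.0
print(len(GRAB_MOTION) * FRAME_DURATION)  # the full motion spans 1/3 of a second
```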
Initially, the microprocessor 14 adds 1 to the value of the variable n, which indicates the current frame of the grabbing motion (S101).
Thereafter, the microprocessor 14 updates the states of a player object 50 other than the first player object, the ball object, and the virtual camera 46 (S102). In addition, the microprocessor 14 updates the states (except posture) of the first player object (S103). Then, the microprocessor 14 carries out a process (S104 to S106) for updating the posture of the first player object.
Initially, the microprocessor 14 obtains the original position of the tip end 74 of the right hand 72r of the first player object in the nth frame (S104). That is, the microprocessor 14 reads the states of the respective skeleton parts in the nth frame from the motion data on a grabbing motion. Then, the microprocessor 14 obtains the position (original position) of the tip end 74 of the right hand 72r, the position being specified according to the states of the respective skeleton parts in the nth frame.
Further, the microprocessor 14 corrects the original position of the tip end 74 of the right hand 72r of the first player object in the nth frame, obtained at S104, to thereby obtain the position (corrected position) of the tip end 74 of the right hand 72r of the first player object in the nth frame (S105).
A positional relationship between the first player object and the second player object varies from time to time. Therefore, moving the respective skeleton parts (the right upper arm 68r, the right forearm 70r, the right hand 72r, and the like) of the first player object according to the motion data on a grabbing motion may not be enough to have the tip end 74 of the right hand 72r of the first player object reach the grab point 80 of the second player object. In view of the above, the microprocessor 14 corrects the position of the tip end 74 of the right hand 72r of the first player object.
At S105, the microprocessor 14 obtains a position which divides the straight line 82-n from the original position 74o-n, obtained at S104, to the grab point 80 of the second player object at the ratio n:(10-n) as the corrected position 74-n of the tip end 74 of the right hand 72r of the first player object.
For example, for the variable n being 1, a position which divides the straight line 82-1 from the original position 74o-1 to the grab point 80 of the second player object at the ratio 1:9 is obtained as the corrected position 74-1 of the tip end 74 of the right hand 72r of the first player object. In addition, for example, for the variable n being 8, a position which divides the straight line 82-8 from the original position 74o-8 to the grab point 80 of the second player object at the ratio 8:2 is obtained as the corrected position 74-8 of the tip end 74 of the right hand 72r of the first player object. For the variable n being 10, a position which divides the straight line 82-10 from the original position 74o-10 to the grab point 80 of the second player object at the ratio 10:0, that is, the position of the grab point 80 of the second player object, is obtained as the corrected position 74-10 of the tip end 74 of the right hand 72r of the first player object.
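A minimal sketch of this correction: dividing the segment from the original position to the grab point at the ratio n:(10−n) places the corrected position a fraction n/10 of the way toward the grab point. The vector values are hypothetical:

```python
def corrected_tip(original, grab_point, n, total_frames=10):
    """Divide the segment original -> grab_point at the ratio n:(total_frames - n)."""
    t = n / total_frames                      # 0.1 in frame 1, ..., 1.0 in frame 10
    return tuple(o + t * (g - o) for o, g in zip(original, grab_point))

original_1 = (1.0, 1.2, 0.5)   # hypothetical original tip position in frame 1
grab = (0.4, 1.1, 1.5)         # hypothetical grab point 80 of the second player
print(corrected_tip(original_1, grab, 1))    # 10% of the way to the grab point
print(corrected_tip(original_1, grab, 10))   # exactly the grab point
```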
After the corrected position of the tip end 74 of the right hand 72r of the first player object is obtained, the microprocessor 14 updates the states (a rotational angle) of the respective skeleton parts of the first player object (S106). That is, the microprocessor 14 determines the states of the right upper arm 68r, the right forearm 70r, and the like, which are skeleton parts of a higher hierarchical order than the right hand 72r, based on the corrected position of the tip end 74 of the right hand 72r. A publicly known inverse kinematics algorithm is used for this process. In addition, the microprocessor 14 determines the state of another skeleton part of the first player object, based on the motion data on, e.g., a running motion, and so forth.
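The patent only states that a publicly known inverse kinematics algorithm is used. As one illustrative possibility, not necessarily the algorithm actually used, a minimal analytic two-bone IK in 2D places the elbow so that the shoulder-elbow-wrist chain reaches the target tip position:

```python
import math

def two_bone_ik_2d(shoulder, target, upper_len, fore_len):
    """Analytic two-bone IK in 2D: return (elbow, wrist) so that the chain
    shoulder -> elbow -> wrist reaches the target (clamped to its reach)."""
    sx, sy = shoulder
    dx, dy = target[0] - sx, target[1] - sy
    d = math.hypot(dx, dy)
    # Clamp the target distance to what the two bones can actually span.
    d = max(abs(upper_len - fore_len) + 1e-6, min(upper_len + fore_len - 1e-6, d))
    # Law of cosines: angle between the upper arm and the shoulder->target line.
    alpha = math.acos((upper_len ** 2 + d ** 2 - fore_len ** 2) / (2.0 * upper_len * d))
    theta = math.atan2(dy, dx)
    elbow = (sx + upper_len * math.cos(theta + alpha),
             sy + upper_len * math.sin(theta + alpha))
    # The forearm runs from the elbow toward the target.
    wx, wy = target[0] - elbow[0], target[1] - elbow[1]
    w = max(math.hypot(wx, wy), 1e-9)
    wrist = (elbow[0] + fore_len * wx / w, elbow[1] + fore_len * wy / w)
    return elbow, wrist

# Hypothetical shoulder position and corrected tip position, with assumed bone lengths.
print(two_bone_ik_2d((0.0, 0.0), (1.2, 0.5), 0.8, 0.7))
```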
Thereafter, the microprocessor 14 creates a game screen image in the VRAM (S107). In the above, the microprocessor 14 deforms the polygons forming the player object 50, based on the states of the respective skeleton parts. That is, the microprocessor 14 sets the positions of the vertexes of the polygons forming the player object 50, based on the states of the respective skeleton parts. The game screen image created in the VRAM is output to the monitor 18 at a predetermined time.
Then, the microprocessor 14 determines whether or not the value of the variable n is 10 (S108). Here, the value "10" indicates the total number of frames of the motion data on a grabbing motion. When the value of the variable n is not 10, after elapse of a predetermined period of time (1/30th of a second in this embodiment), the process described above is carried out again.
With the above-described process, the tip end 74 of the right hand 72r of the first player object reaches the grab point 80 of the second player object in the grabbing motion completion frame (tenth frame), and a picture showing the first player object grabbing the uniform object 52 of the second player object is displayed.
In this embodiment, the position of the tip end 74 of the right hand 72r of the first player object is corrected using the method described above (S105).
Note that the position of the tip end 74 of the right hand 72r of the player object 50 may be corrected using another method.
In this case, the microprocessor 14 adds (n/10)*ΔP1 to the original position 74o-n at S105 to thereby obtain the corrected position 74-n of the tip end 74 of the right hand 72r of the first player object, ΔP1 indicating the difference between the original position 74o-10 of the tip end 74 of the right hand 72r of the first player object in the grabbing motion completion frame (tenth frame) and the grab point 80 of the second player object.
For example, for the value of the variable n being 1, addition of (1/10)*ΔP1 to the original position 74o-1 provides the corrected position 74-1 of the tip end 74 of the right hand 72r of the first player object. Further, for example, for the value of the variable n being 8, addition of (8/10)*ΔP1 to the original position 74o-8 provides the corrected position 74-8 of the tip end 74 of the right hand 72r of the first player object. Then, for the value of the variable n being 10, addition of (10/10)*ΔP1 to the original position 74o-10 provides the corrected position 74-10 of the tip end 74 of the right hand 72r of the first player object. That is, the position of the grab point 80 of the second player object is obtained as the corrected position 74-10.
Using the above-described method as well, the tip end 74 of the right hand 72r of the first player object reaches the grab point 80 of the second player object in the grabbing motion completion frame (tenth frame), and unnatural movement of the right hand of the first player object can be avoided.
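A minimal sketch of this alternative correction, which blends a fraction n/10 of the completion-frame offset ΔP1 into the original position; the vector values are hypothetical:

```python
def corrected_tip_offset(original_n, original_final, grab_point, n, total_frames=10):
    """Add (n/total_frames) * dP1 to the frame-n original position, where
    dP1 = grab_point - original position in the completion frame."""
    t = n / total_frames
    return tuple(on + t * (g - of)
                 for on, of, g in zip(original_n, original_final, grab_point))

# Hypothetical tip positions taken from the motion data (frame 8 and frame 10)
# and a hypothetical grab point 80 on the second player object.
orig_8 = (0.6, 1.15, 1.1)
orig_10 = (0.5, 1.1, 1.4)
grab = (0.4, 1.1, 1.5)
print(corrected_tip_offset(orig_8, orig_10, grab, 8))    # original_8 + 0.8 * dP1
print(corrected_tip_offset(orig_10, orig_10, grab, 10))  # exactly the grab point
```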
Note that use of the correction method shown in
In the following, a process for displaying a picture showing the first player object pulling the uniform object 52 of the second player object will be described.
Initially, the microprocessor 14 adds 1 to the value of the variable m, which indicates the current frame of the pulling motion (S201).
Thereafter, the microprocessor 14 updates the states of a player object 50 other than the first player object, the ball object, and the virtual camera 46 (S202). In addition, the microprocessor 14 updates the states (except posture) of the first player object (S203). The microprocessor 14 carries out a process (S204 to S206) for updating the posture of the first player object.
Initially, the microprocessor 14 determines the position of a deformation control point (a reference point) of the second player object (S204). A deformation control point is a basic point for controlling deformation of the uniform object 52. In this embodiment, a deformation control point, as well as the grab point 80, is set on each player object 50. Some of the vertexes of the polygons forming the uniform object 52 of the player object 50 are correlated to the deformation control point, and the vertex correlated to the deformation control point moves according to the deformation control point, with details thereof to be described later (see
With the uniform object 52 of the player object 50 not being pulled by another player object 50, the position of the deformation control point of the player object 50 is set at a predetermined basic position. The basic position may be, e.g., the position of the grab point 80. Meanwhile, with the uniform object 52 of the player object 50 being pulled by another player object 50, the position of the deformation control point of the player object 50 is determined as described below.
That is, the microprocessor 14 determines the position of the deformation control point of the second player object, based on the positional relationship between the first player object and the second player object.
Initially, the microprocessor 14 (direction obtaining means) obtains the direction D1 from the position (e.g., the foot position) of the second player object 50b to the position (e.g., the foot position) of the first player object 50a (see
Then, the microprocessor 14 (clothing object control means) sets the position of the deformation control point 82 of the second player object 50b at a position displaced by the distance L in the direction D from the basic position 82a (grab point 80) of the deformation control point 82 of the second player object 50b (see
L = L1 * |sin((m/10) * 2π)|   (1)
As described above, the distance L is obtained by multiplying the distance L1 from the position of the second player object 50b to the position of the first player object 50a by a factor (the absolute value of sin((m/10)*2π)) which varies as time passes. Therefore, even if the state in which the distance L1 from the position of the second player object 50b to the position of the first player object 50a remains the same continues for a predetermined period of time, the distance L will change as time passes, and resultantly, the position of the deformation control point 82 of the second player object 50b will change as time passes.
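A minimal sketch of this step, assuming the displacement direction D is taken to be the obtained direction D1 (the excerpt does not spell out how D is derived from D1); positions are hypothetical, and equation (1) is applied as written:

```python
import math

def control_point_position(basic_pos, second_pos, first_pos, m, total_frames=10):
    """Displace the deformation control point from its basic position by
    L = L1 * |sin((m/10) * 2*pi)| in the pulling direction (equation (1))."""
    diff = tuple(f - s for f, s in zip(first_pos, second_pos))
    l1 = math.sqrt(sum(c * c for c in diff))            # distance L1
    if l1 == 0.0:
        return basic_pos
    d = tuple(c / l1 for c in diff)                     # direction D1 (used as D here)
    l = l1 * abs(math.sin((m / total_frames) * 2.0 * math.pi))
    return tuple(b + l * c for b, c in zip(basic_pos, d))

basic = (0.0, 1.3, 0.2)      # hypothetical basic position 82a (grab point 80)
second = (0.0, 0.0, 0.0)     # position of the second player object 50b
first = (0.0, 0.0, -0.9)     # position of the first player object 50a
for m in (1, 3, 5, 8):
    print(m, control_point_position(basic, second, first, m))
```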
After determination of the position of the deformation control point 82 of the second player object, the microprocessor 14 (clothing object control means) deforms the uniform object 52 of the second player object, based on the position of the deformation control point 82 of the second player object (S205).
The microprocessor 14 causes the representative vertex 86a of the uniform object 52 of the second player object and vertexes 86 around the representative vertex 86a to move according to the deformation control point 82 of the second player object. Specifically, the microprocessor 14 moves the representative vertex 86a of the uniform object 52 of the second player object and the vertexes 86 around the representative vertex 86a parallel to the direction 84 from the basic position 82a of the deformation control point 82 of the second player object to the current position of the deformation control point 82, determined at S204. In the above, the movement distance of each vertex 86 is determined based on the distance between the vertex 86 and the representative vertex 86a. Specifically, a vertex 86 located farther from the representative vertex 86a is set to move by a shorter distance.
Note that the representative vertex 86a and the vertexes 86 around this representative vertex 86a are correlated additionally to the skeleton parts of the chest 58, the hips 60, and the like, of the second player object. Therefore, as the second player object moves ahead and the skeleton parts of the chest 58 and the hips 60 of the second player object accordingly move ahead, the representative vertex 86a and the vertexes 86 around the representative vertex 86a will be pulled additionally in the movement direction (the forward direction) of the second player object. Thus, the movement distances of the representative vertex 86a and the vertexes 86 around the representative vertex 86a are determined based on both the movement of the deformation control point 82 of the second player object and the movement of the second player object itself.
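A minimal sketch of this vertex deformation; the falloff curve is not specified in the excerpt, so a linear falloff with an assumed radius is used, and the vertex positions are hypothetical:

```python
import math

def deform_vertices(vertices, rep_index, basic_cp, current_cp, radius=0.4):
    """Move the representative vertex and nearby vertices parallel to the
    control-point displacement; vertices farther from the representative
    vertex move by a shorter distance (linear falloff is assumed here)."""
    disp = tuple(c - b for b, c in zip(basic_cp, current_cp))
    rep = vertices[rep_index]
    moved = []
    for v in vertices:
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(v, rep)))
        weight = max(0.0, 1.0 - dist / radius)
        moved.append(tuple(a + weight * d for a, d in zip(v, disp)))
    return moved

# Hypothetical uniform vertices; index 0 is the representative vertex 86a.
verts = [(0.0, 1.3, 0.2), (0.1, 1.25, 0.2), (0.3, 1.2, 0.2)]
print(deform_vertices(verts, 0, (0.0, 1.3, 0.2), (0.0, 1.3, -0.2)))
```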
After completion of the process of deforming the uniform object 52 of the second player object, the microprocessor 14 updates the states (a rotational angle, and the like) of the respective skeleton parts of the first player object 50, based on the position of the representative vertex 86a of the uniform object 52 of the second player object (S206).
Initially, the microprocessor 14 sets the position of the tip end 74 of the right hand 72r of the first player object at the position of the representative vertex 86a of the uniform object 52 of the second player object. Thereafter, the microprocessor 14 updates the states of the right upper arm 68r, the right forearm 70r, and the like, which are the skeleton parts of higher hierarchical order than the right hand 72r, based on the position of the tip end 74 of the right hand 72r. A publicly known inverse kinematics algorithm is used for this process. In addition, the microprocessor 14 updates the state of another skeleton part of the first player object, based on the motion data on, e.g., a running motion.
Thereafter, the microprocessor 14 produces a game screen image in the VRAM (S207). In the above, the microprocessor 14 deforms the polygons forming the player object 50, based on the states (a rotational angle) of the respective skeleton parts. That is, the microprocessor 14 sets the positions of the vertexes of the polygons forming the player object 50, based on the states of the respective skeleton parts. The game screen image produced in the VRAM is output to the monitor 18 at a predetermined time.
Thereafter, the microprocessor 14 determines whether or not the distance between the first player object and the second player object is equal to or longer than a predetermined distance (S208). When the distance between the first player object and the second player object is equal to or longer than the predetermined distance, the microprocessor 14 determines that the second player object has shaken off the first player object, and then causes the first player object to finish the pulling motion relative to the second player object (S210).
Meanwhile, when the distance between the first player object and the second player object is shorter than the predetermined distance, the microprocessor 14 then determines whether or not the value of the variable m is 10 (S209). With the value of the variable m being 10, the microprocessor 14 causes the first player object to finish the pulling motion relative to the second player object (S210). As described above, it is arranged in this embodiment such that a pulling motion by the first player object relative to the second player object is finished after continuation for ⅓ (=10/30) of a second. Note that, however, a longer period of time may be set as a duration in which the first player object continues a pulling motion relative to the second player object.
With the above-described process, a picture showing the uniform object 52 of the second player object being pulled by the first player object, and thereby deformed, is displayed.
Note that displaying a picture showing the uniform object 52 of the second player object being pulled by the first player object and thereby deformed can also be realized by carrying out a physical simulation operation, which, however, results in a complicated process and a heavy processing load. Regarding this point, according to the present invention, displaying a picture showing the uniform object 52 of the second player object being pulled by the first player object and thereby deformed can be realized through a relatively simple process of changing the position of the deformation control point 82 of the second player object, based on the positional relationship (the direction D1 and the distance L1) between the first player object and the second player object. That is, reduction of a processing load can be attained.
Also, in this embodiment, even if the state in which the distance L1 from the position of the second player object to the position of the first player object remains the same continues for a predetermined period of time, the distance L changes as time passes, and resultantly, the position of the deformation control point 82 and hence the shape of the uniform object 52 of the second player object change as time passes.
Also, in this embodiment, the uniform object 52 of the second player object is first deformed in consideration of the position of the deformation control point 82 and the rotation of the skeleton parts (the chest 58, the hips 60, and the like) of the second player object, and the position of the right hand 72r of the first player object is then determined in accordance with the resulting deformation of the uniform object 52. Thereafter, the posture (the states of the respective skeleton parts) of the first player object is determined based on the position of the right hand 72r. According to the present invention, when expressing a picture showing the uniform object 52 of the second player object being pulled by the first player object and thereby deformed, it is possible to express the picture in consideration of the motion (movement, and the like) of the second player object itself. As a result, the reality of the picture showing the uniform object 52 of the second player object being pulled by the first player object and thereby deformed is improved.
Note that the deformation control point 82 (the direction D and the distance L) of the second player object may be determined using motion data describing a motion in which the first player object moves its right hand, which is grabbing the uniform object 52 of another player object 50, so as to pull the uniform object 52. In the following, this determination method will be described. The following description is given based on the assumption that the above-described motion data describes the states (a rotational angle, and the like) of the respective skeleton parts of the player object 50 in the respective frames from the first frame (motion start frame) to the tenth frame (motion completion frame).
In this case, at S204, initially, the microprocessor 14 reads the states of the respective skeleton parts in the mth frame from the above-described motion data. Then, the microprocessor 14 obtains the position (original position) of the tip end 74 of the right hand 72r (predetermined portion) of the first player object, the position being specified according to the states of the respective skeleton parts in the mth frame. Thereafter, the microprocessor 14 (predetermined portion position obtaining means) obtains a position (corrected position) by correcting the obtained original position, using the method described below.
The microprocessor 14 obtains a position which divides the straight line 82-m from the original position 74o-m to the basic position 82a (grab point 80) of the deformation control point 82 of the second player object at the ratio (10−m+1):(m−1) as the corrected position 74-m.
For example, for the variable m being 2, a position which divides the straight line 82-2 from the original position 74o-2 to the basic position 82a of the deformation control point 82 of the second player object at the ratio 9:1 is obtained as the corrected position 74-2. Also, for example, for the variable m being 9, a position which divides the straight line 82-9 from the original position 74o-9 to the basic position 82a of the deformation control point 82 of the second player object at the ratio 2:8 is obtained as the corrected position 74-9.
Alternatively, the microprocessor 14 may add ((10−m+1)/10)*ΔP2 to the original position 74o-m to thereby obtain the corrected position 74-m, ΔP2 indicating the difference between the original position 74o-1 in the first frame and the basic position 82a (grab point 80) of the deformation control point 82 of the second player object.
For example, for the variable m being 2, addition of (9/10)*ΔP2 to the original position 74o-2 provides the corrected position 74-2. For example, for the variable m being 9, addition of (2/10)*ΔP2 to the original position 74o-9 provides the corrected position 74-9.
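A minimal sketch of the two corrections described above for the pulling motion data, using the fraction (10−m+1)/10 implied by the stated ratios; the positions are hypothetical:

```python
def corrected_ratio(original_m, basic_pos, m, total_frames=10):
    """Divide original_m -> basic position 82a at the ratio (10-m+1):(m-1)."""
    t = (total_frames - m + 1) / total_frames
    return tuple(o + t * (b - o) for o, b in zip(original_m, basic_pos))

def corrected_offset(original_m, original_1, basic_pos, m, total_frames=10):
    """Add ((10-m+1)/10) * dP2 to original_m, where dP2 = basic position - original_1."""
    t = (total_frames - m + 1) / total_frames
    return tuple(om + t * (b - o1)
                 for om, o1, b in zip(original_m, original_1, basic_pos))

# Hypothetical tip positions from the pulling motion data and a hypothetical
# basic position 82a (grab point 80) of the deformation control point.
orig_1 = (0.5, 1.2, 1.2)
orig_2 = (0.45, 1.15, 1.3)
basic = (0.4, 1.1, 1.5)
print(corrected_ratio(orig_2, basic, 2))            # 9/10 of the way to the basic position
print(corrected_offset(orig_2, orig_1, basic, 2))   # orig_2 + (9/10) * dP2
```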
After obtaining the corrected position 74-m using either of the above-described methods, the microprocessor 14 obtains the direction D from the basic position 82a (grab point 80) of the deformation control point 82 of the second player object to the corrected position 74-m, and the distance L between the basic position 82a and the corrected position 74-m.
Then, the microprocessor 14 (clothing object control means) sets the position of the deformation control point 82 of the second player object at a position displaced by the distance L in the direction D from the basic position 82a (grab point 80) of the deformation control point 82 of the second player object 50b (see
In the above described manner, it is possible to determine the position of the deformation control point 82 of the second player object in consideration of the movement of the right hand of the first player object, pulling the uniform object 52. As a result, it is possible to deform the uniform object 52 of the second player object in consideration of the movement of the right hand of the first player object, pulling the uniform object 52. Also, in the above described manner as well, it is possible to express, through a relatively simple process, a picture showing the uniform object 52 of the second player object being pulled by the first player object, and thereby deformed.
Note that the position of the deformation control point 82 of the second player object may be determined based on the assumption that the direction D (see
In this manner as well, it is possible to determine the position of the deformation control point 82 of the second player object in consideration of the movement of the right hand of the first player object, pulling the uniform object 52. As a result, it is possible to deform the uniform object 52 of the second player object in consideration of the movement of the right hand of the first player object, pulling the uniform object 52.
The deformation control point 82 may reciprocate between, e.g., the position determined based on the corrected position 74-1 in the first frame and the position determined based on the corrected position 74-3 in the third frame. That is, the deformation control point 82 having moved from the position determined based on the corrected position 74-1 via the position determined based on the corrected position 74-2 to the position determined based on the corrected position 74-3 may return via the position determined based on the corrected position 74-2 to the position determined based on the corrected position 74-1. Also, the deformation control point 82 having returned to the position determined based on the corrected position 74-1 may move again to the position determined by the corrected position 74-3 via the position based on the corrected position 74-2.
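As a small sketch of how such reciprocation could be driven (an assumption, not a method given in the patent), a ping-pong index over the frames whose corrected positions are used:

```python
def ping_pong_frame(step, lo=1, hi=3):
    """Frame index for a control point that reciprocates lo -> hi -> lo
    (e.g., 1, 2, 3, 2, 1, 2, 3, ...) as the step counter increases."""
    span = hi - lo
    phase = step % (2 * span)
    return lo + (phase if phase <= span else 2 * span - phase)

print([ping_pong_frame(s) for s in range(8)])  # [1, 2, 3, 2, 1, 2, 3, 2]
```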
In the above described manner, it is possible to have the first player object continue the pulling motion relative to the second player object. For example, in the case where the first player object is the player object 50 operated by a user, it is possible to have the first player object continue the pulling motion relative to the second player object during a period in which the user continues a predetermined operation (e.g., successively pressing a button).
According to the above described game device 10, it is possible to express, through a relatively simple process, a picture showing a player object 50 grabbing and pulling the uniform object 52 of a player object 50 belonging to the opponent team.
Note that the present invention is not limited to the above-described embodiment.
For example, when the first player object pulls the uniform object 52 of the second player object, a parameter value which indicates the ability of the second player object may be corrected such that the ability of the second player object is lowered. For example, correction may be made such that the moving speed of the second player object becomes slower.
Also, for example, a game to be carried out in the game device 10 may be a sport game other than a soccer game. The present invention can be applied to a game, such as basketball, in which players may come into contact with each other. In addition, a game to be carried out in the game device 10 may be a game other than a sport game.
For example, although a program is supplied from the DVD-ROM 25, or an information storage medium, to the consumer game device 11 in the above description, a program may be distributed through a communication network to a household, or the like.