An information processing device includes: a learning section configured to learn a state transition probability model defined by state transition probability for each action of a state making a state transition due to an action performed by an agent capable of performing action and observation probability of a predetermined observed value being observed from the state, using an action performed by the agent and an observed value observed in the agent when the agent has performed the action.
9. An information processing method of an information processing device, said information processing method comprising a step of:
learning, by at least one processor, a state transition probability model defined by
state transition probability for each action of a state making a state transition due to an action performed by an agent capable of performing action, and
observation probability of a predetermined observed value being observed from said state, using an action performed by said agent and an observed value observed in said agent when said agent has performed the action.
10. A program embodied on a non-transitory computer-readable medium comprising instructions that, when executed by at least one processor, cause a computer to perform operations comprising:
learning a state transition probability model defined by
state transition probability for each action of a state making a state transition due to an action performed by an agent capable of performing action, and
observation probability of a predetermined observed value being observed from said state, using an action performed by said agent and an observed value observed in said agent when said agent has performed the action.
19. An information processing device comprising:
at least one processor;
a storage device storing learning instructions that, when executed by the at least one processor, cause the information processing device to learn a state transition probability model defined by
state transition probability for each action of a state making a state transition due to an action performed by an agent capable of performing action, and
observation probability of a predetermined observed value being observed from said state, using an action performed by said agent and an observed value observed in said agent when said agent has performed the action.
1. An information processing device comprising:
at least one processor; and
a storage device storing instructions that, when executed by the at least one processor, cause the information processing device to perform operations comprising:
learning a state transition probability model defined by
state transition probability for each action of a state making a state transition due to an action performed by an agent capable of performing action, and
observation probability of a predetermined observed value being observed from said state, using an action performed by said agent and an observed value observed in said agent when said agent has performed the action.
17. An information processing method of an information processing device, said information processing method comprising the steps of:
recognizing, by at least one processor, present conditions of an agent capable of performing action using an action performed by said agent and an observed value observed in said agent when said agent has performed the action on the basis of a state transition probability model obtained by learning said state transition probability model defined by
state transition probability for each action of a state making a state transition due to an action performed by said agent, and
observation probability of a predetermined observed value being observed from said state, using an action performed by said agent and an observed value observed in said agent when said agent has performed the action, and obtaining a present state as a state of said state transition probability model, the state of said state transition probability model corresponding to the present conditions;
determining one of states of said state transition probability model as a goal state set as a goal; and
calculating an action plan as an action series that maximizes likelihood of state transition from said present state to said goal state on the basis of said state transition probability model, and determining an action to be performed next by said agent according to the action plan.
18. A program embodied on a non-transitory computer-readable medium comprising instructions that, when executed by at least one processor, cause a computer to perform operations comprising:
recognizing present conditions of an agent capable of performing action using an action performed by said agent and an observed value observed in said agent when said agent has performed the action on the basis of a state transition probability model obtained by learning said state transition probability model defined by
state transition probability for each action of a state making a state transition due to an action performed by said agent, and
observation probability of a predetermined observed value being observed from said state, using an action performed by said agent and an observed value observed in said agent when said agent has performed the action, and obtaining a present state as a state of said state transition probability model, the state of said state transition probability model corresponding to the present conditions;
determining one of states of said state transition probability model as a goal state set as a goal; and
calculating an action plan as an action series that maximizes likelihood of state transition from said present state to said goal state on the basis of said state transition probability model, and determining an action to be performed next by said agent according to the action plan.
11. An information processing device comprising:
at least one processor;
a storage device storing instructions that, when executed by the at least one processor, cause the information processing device to perform operations, said instructions comprising:
state recognizing instructions for recognizing present conditions of an agent capable of performing action using an action performed by said agent and an observed value observed in said agent when said agent has performed the action on the basis of a state transition probability model obtained by learning said state transition probability model defined by
state transition probability for each action of a state making a state transition due to an action performed by said agent, and
observation probability of a predetermined observed value being observed from said state,
instructions for using an action performed by said agent and an observed value observed in said agent when said agent has performed the action, and obtaining a present state as a state of said state transition probability model, the state of said state transition probability model corresponding to the present conditions;
goal determining instructions for determining one of states of said state transition probability model as a goal state set as a goal; and
action determining instructions for calculating an action plan as an action series that maximizes likelihood of state transition from said present state to said goal state on the basis of said state transition probability model, and determining an action to be performed next by said agent according to the action plan.
20. An information processing device comprising:
at least one processor;
a storage device storing instructions that, when executed by the at least one processor, cause the information processing device to perform operations, said instructions comprising:
state recognizing instructions configured to recognize present conditions of an agent capable of performing action using an action performed by said agent and an observed value observed in said agent when said agent has performed the action on the basis of a state transition probability model obtained by learning said state transition probability model defined by
state transition probability for each action of a state making a state transition due to an action performed by said agent, and
observation probability of a predetermined observed value being observed from said state, using an action performed by said agent and an observed value observed in said agent when said agent has performed the action, and obtaining a present state as a state of said state transition probability model, the state of said state transition probability model corresponding to the present conditions;
goal determining instructions configured to determine one of states of said state transition probability model as a goal state set as a goal; and
action determining instructions configured to calculate an action plan as an action series that maximizes likelihood of state transition from said present state to said goal state on the basis of said state transition probability model, and determine an action to be performed next by said agent according to the action plan.
2. The information processing device according to
wherein said instructions are configured to cause the information processing device to learn said state transition probability model under a one-state one-observed-value constraint under which one observed value is observed in one state of said state transition probability model.
3. The information processing device according to
wherein said instructions are configured to cause the information processing device to perform learning that satisfies said one-state one-observed-value constraint by repeating, until no dividing object state is detected,
detecting a state in which a plurality of observed values are observed in said state transition probability model after being learned as a dividing object state to be divided,
dividing said dividing object state into a plurality of states in each of which one of said plurality of observed values is observed, and
relearning said state transition probability model after said dividing object state is divided into said plurality of states.
4. The information processing device according to
wherein said instructions are configured to cause the information processing device to divide said dividing object state into a plurality of states after division by
assigning one of said plurality of observed values to a state after the division obtained by dividing said dividing object state,
setting an observation probability of the observed value assigned to said state after the division being observed in said state after the division to one, and setting observation probabilities of other observed values being observed in said state after the division to zero, and
setting state transition probability of state transition having said state after the division as a transition source to state transition probability of state transition having said dividing object state as a transition source, and setting state transition probability of state transition having said state after the division as a transition destination to a value obtained by correcting state transition probability of state transition having said dividing object state as a transition destination by an observation probability in said dividing object state of the observed value assigned to said state after the division.
5. The information processing device according to
wherein when there are a plurality of states as transition source states or transition destination states of state transition when a predetermined action is performed, and an identical observed value is observed in each of the plurality of states, said instructions are configured to cause the information processing device to merge said plurality of states into one state.
6. The information processing device according to
wherein said instructions are configured to cause the information processing device to merge a plurality of states as merging object states into a representative state by
detecting the plurality of states as said merging object states to be merged when the plurality of states are present as transition source states or transition destination states in said state transition probability model of state transition when a predetermined action is performed, and observed values having a maximum said observation probability, the observed value being observed in the plurality of respective states, coincide with each other,
setting observation probability of each observed value being observed in said representative state as said one state when the plurality of states as said merging object states are merged into said one state to an average value of observation probabilities of each observed value being observed in the plurality of respective states as said merging object states, and setting observation probability of each observed value being observed in said merging object state other than said representative state to zero,
setting state transition probability of state transition having said representative state as a transition source to an average value of state transition probabilities of state transition having the plurality of respective states as said merging object states as a transition source, and setting state transition probability of state transition having said representative state as a transition destination to a sum of state transition probabilities of state transition having the plurality of respective states as said merging object states as a transition destination, and
setting state transition probability of state transition having said merging object state other than said representative state as a transition source and state transition probability of state transition having said merging object state other than said representative state as a transition destination to zero.
7. The information processing device according to
wherein said instructions are configured to cause the information processing device to perform learning that satisfies said one-state one-observed-value constraint by repeating, until no merging object states are detected,
detecting a plurality of states as said merging object states from said state transition probability model after being learned,
merging the plurality of states as said merging object states into said representative state, and
relearning said state transition probability model after the merging.
8. The information processing device according to
wherein said state transition probability model is an extended HMM (Hidden Markov model) obtained by extending state transition probability of an HMM to state transition probability for each action performed by said agent, and
said instructions are configured to cause the information processing device to perform learning of said extended HMM to estimate said state transition probability with respect to each action and said observation probability according to a Baum-Welch re-estimation method.
12. The information processing device according to
wherein said state recognizing instructions are further configured to cause the information processing device to update an inhibitor for inhibiting state transition so as to inhibit state transition between an immediately preceding state immediately preceding said present state and a state other than said present state with respect to an action performed by said agent at a time of a state transition from said immediately preceding state to said present state, and
said action determining instructions are further configured to cause the information processing device to correct said state transition probability of said state transition probability model using said inhibitor, and calculate said action plan on the basis of said state transition probability after correction.
13. The information processing device according to
14. The information processing device according to
wherein said instructions further comprise open end detecting instructions for detecting an open end as another state having a state transition not yet performed among state transitions that can be made with a state in which a predetermined observed value is observed as a transition source, a same observed value as the predetermined observed value being observed in said other state, and
wherein said goal determining instructions are further configured to cause the information processing device to determine said open end as said goal state.
15. The information processing device according to
obtain action probability as probability of said agent performing each action when each observed value is observed, using said state transition probability and said observation probability,
calculate action probability based on said observation probability as probability of said agent performing each action in each state in which each observed value is observed, by multiplying said action probability by said observation probability,
calculate action probability based on said state transition probability as probability of said agent performing each action in each state, by adding together, with respect to each state, said state transition probabilities of state transitions having the state as a transition source in each action, and
detect a state in which a difference between said action probability based on said observation probability and said action probability based on said state transition probability is equal to or larger than a predetermined threshold value as said open end.
16. The information processing device according to
branch structure detecting instructions for detecting a state of branch structure as a state from which state transition can be made to different states when one action is performed on the basis of said state transition probability, and
wherein said goal determining instructions are further configured to cause the information processing device to determine said state of branch structure as said goal state.
1. Field of the Invention
The present invention relates to an information processing device, an information processing method, and a program, and particularly to an information processing device, an information processing method, and a program that enable the determination of an appropriate action of an agent capable of autonomously performing various actions (autonomous agent), for example.
2. Description of the Related Art
As a state predicting and action determining method, there is for example a method of applying a partially observed Markov decision process and automatically constructing a static partially observed Markov decision process from learning data (see, for example, Japanese Patent Laid-Open No. 2008-186326, which is hereinafter referred to as Patent Document 1).
In addition, as an operation planning method for an autonomous mobile robot or a pendulum, there is a method of making an action plan discretized in a Markov state model, and further inputting a planned goal to a controller and deriving an output to be given to a controlling object to thereby perform desired control (see, for example, Japanese Patent Laid-Open Nos. 2007-317165 and 2006-268812, which are respectively referred to as Patent Documents 2 and 3).
While various methods have been proposed for determining an appropriate action for an agent capable of autonomously performing various actions, there remains a demand for new methods.
The present invention has been made in view of such a situation. It is desirable to be able to determine an appropriate action as an action to be performed by an agent.
An information processing device and a program according to a first embodiment of the present invention are an information processing device and a program for making a computer function as the information processing device, the information processing device including a learning section configured to learn a state transition probability model defined by state transition probability for each action of a state making a state transition due to an action performed by an agent capable of performing action and observation probability of a predetermined observed value being observed from the state, using an action performed by the agent and an observed value observed in the agent when the agent has performed the action.
An information processing method according to the first embodiment of the present invention is an information processing method of an information processing device, the method including a step of learning a state transition probability model defined by state transition probability for each action of a state making a state transition due to an action performed by an agent capable of performing action and observation probability of a predetermined observed value being observed from the state, using an action performed by the agent and an observed value observed in the agent when the agent has performed the action.
In the first embodiment as described above, a state transition probability model defined by state transition probability for each action of a state making a state transition due to an action performed by an agent capable of performing action and observation probability of a predetermined observed value being observed from the state is learned using an action performed by the agent and an observed value observed in the agent when the agent has performed the action.
An information processing device or a program according to a second embodiment of the present invention is an information processing device or a program for making a computer function as the information processing device. The information processing device includes: a state recognizing section configured to recognize present conditions of an agent capable of performing action using an action performed by the agent and an observed value observed in the agent when the agent has performed the action on the basis of a state transition probability model obtained by learning the state transition probability model defined by state transition probability for each action of a state making a state transition due to an action performed by the agent and observation probability of a predetermined observed value being observed from the state, using an action performed by the agent and an observed value observed in the agent when the agent has performed the action, and obtaining a present state as a state of the state transition probability model, the state of the state transition probability model corresponding to the present conditions; a goal determining section configured to determine one of states of the state transition probability model as a goal state set as a goal; and an action determining section configured to calculate an action plan as an action series that maximizes likelihood of state transition from the present state to the goal state on the basis of the state transition probability model, and determine an action to be performed next by the agent according to the action plan.
An information processing method according to the second embodiment of the present invention is an information processing method of an information processing device. The information processing method includes the steps of: recognizing present conditions of an agent capable of performing action using an action performed by the agent and an observed value observed in the agent when the agent has performed the action on the basis of a state transition probability model obtained by learning the state transition probability model defined by state transition probability for each action of a state making a state transition due to an action performed by the agent and observation probability of a predetermined observed value being observed from the state, using an action performed by the agent and an observed value observed in the agent when the agent has performed the action, and obtaining a present state as a state of the state transition probability model, the state of the state transition probability model corresponding to the present conditions; determining one of states of the state transition probability model as a goal state set as a goal; and calculating an action plan as an action series that maximizes likelihood of state transition from the present state to the goal state on the basis of the state transition probability model, and determining an action to be performed next by the agent according to the action plan.
In the second embodiment as described above, present conditions of an agent capable of performing action are recognized using an action performed by the agent and an observed value observed in the agent when the agent has performed the action on the basis of a state transition probability model obtained by learning the state transition probability model defined by state transition probability for each action of a state making a state transition due to an action performed by the agent and observation probability of a predetermined observed value being observed from the state, using an action performed by the agent and an observed value observed in the agent when the agent has performed the action, and a present state as a state of the state transition probability model, the state of the state transition probability model corresponding to the present conditions, is obtained. In addition, one of states of the state transition probability model is determined as a goal state set as a goal. Then, an action plan as an action series that maximizes likelihood of state transition from the present state to the goal state is calculated on the basis of the state transition probability model, and an action to be performed next by the agent is determined according to the action plan.
Incidentally, the information processing device may be an independent device, or may be an internal block forming one device.
The program can be provided by being transmitted via a transmission medium, or in a state of being recorded on a recording medium.
According to the first and second embodiments of the present invention, an appropriate action can be determined as an action to be performed by the agent.
[Environment in which Agent Performs Action]
The agent is a device, such as a robot, for example (which may be a robot acting in the real world or a virtual robot acting in a virtual world), that is capable of autonomously performing actions such as movement (that is, capable of action).
The agent is capable of changing the conditions of the agent itself by performing action, and observing externally observable information and recognizing the conditions using an observed value as a result of the observation.
In addition, the agent constructs a model of the action environment (environment model) in which the agent performs action, in order to recognize conditions and determine (select) an action to be performed in each condition.
The agent performs efficient modeling (constructs an environment model) of not only an action environment having a fixed structure but also an action environment whose structure is not fixed but changes probabilistically.
The action environment in
In the action environment of
Thereafter, at time t=t2(>t1), the position p1 is changed from the wall to a passage, and the action environment consequently has a structure in which the agent can pass through both of the positions p1 and p2.
Further, at a subsequent time t=t3, the position p2 is changed from the passage to a wall, and the action environment consequently has a structure in which the agent can pass through the position p1 but cannot pass through the position p2.
[Actions Performed by Agent and Observed Values Observed by Agent]
The agent sets areas divided in the form of squares by dotted lines in
In
The agent in the present embodiment observes one of 15 kinds of observed values (symbols) O1 to O15 in observation units.
The observed value O1 is observed in an observation unit having a wall at the top, bottom, and left and having a passage at the right. The observed value O2 is observed in an observation unit having a wall at the top, left, and right and having a passage at the bottom.
The observed value O3 is observed in an observation unit having a wall at the top and left and having a passage at the bottom and right. The observed value O4 is observed in an observation unit having a wall at the top, bottom, and right and having a passage at the left.
The observed value O5 is observed in an observation unit having a wall at the top and bottom and having a passage at the left and right. The observed value O6 is observed in an observation unit having a wall at the top and right and having a passage at the bottom and left.
The observed value O7 is observed in an observation unit having a wall at the top and having a passage at the bottom, left, and right. The observed value O8 is observed in an observation unit having a wall at the bottom, left, and right and having a passage at the top.
The observed value O9 is observed in an observation unit having a wall at the bottom and left and having a passage at the top and right. The observed value O10 is observed in an observation unit having a wall at the left and right and having a passage at the top and bottom.
The observed value O11 is observed in an observation unit having a wall at the left and having a passage at the top, bottom, and right. The observed value O12 is observed in an observation unit having a wall at the bottom and right and having a passage at the top and left.
The observed value O13 is observed in an observation unit having a wall at the bottom and having a passage at the top, left, and right. The observed value O14 is observed in an observation unit having a wall at the right and having a passage at the top, bottom, and left.
The observed value O15 is observed in an observation unit having a passage at all of the top, bottom, left, and right.
Incidentally, an action Um (m=1, 2, . . . , M, where M is the total number of actions (kinds of actions)) and an observed value Ok (k=1, 2, . . . , K, where K is the total number of observed values) are each a discrete value.
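For illustration, the assignment of the observed values O1 to O15 listed above can be reproduced programmatically: each observed value corresponds to the pattern of passages around the observation unit, and the listing above is consistent with encoding a passage at the right, bottom, left, and top as bits 1, 2, 4, and 8, respectively. The following Python sketch (the function name and argument names are illustrative, not from the text) computes the index k of the observed value Ok from that pattern.

def observation_symbol(passage_top, passage_bottom, passage_left, passage_right):
    # Encode the passage pattern of the current observation unit as a bit mask.
    # With this encoding, the returned index k reproduces the assignment of the
    # observed values O1 to O15 given above (for example, walls at the top and
    # bottom with passages at the left and right give k = 4 + 1 = 5, i.e., O5).
    k = ((1 if passage_right else 0)
         + (2 if passage_bottom else 0)
         + (4 if passage_left else 0)
         + (8 if passage_top else 0))
    return k  # observed value Ok (k = 1 to 15; k = 0 would mean walls on all four sides)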
[Example of Configuration of Agent]
The agent obtains an environment model resulting from modeling the action environment by learning.
In addition, the agent recognizes present conditions of the agent itself using a series of observed values (observed value series).
Further, the agent makes a plan of actions to be performed to move from the present conditions to a certain goal (action plan), and determines an action to be performed next according to the action plan.
Incidentally, the learning, the recognition of conditions, and the making of the action plan (determination of actions) performed by the agent can be applied not only to a problem (task) in which the agent moves to the top, bottom, left, or right in observation units but also to problems that are commonly taken up as problems of reinforcement learning and that can be formulated in the framework of a Markov decision process (MDP).
In
Then, the agent learns the action environment (structure of the action environment (environment model resulting from modeling the action environment)) and determines an action to be performed next using an action series, which is a series of actions Um (symbols representing the actions Um) performed up to now, and an observed value series, which is a series of observed values Ok (symbols representing the observed values Ok) observed up to now.
There are two modes in which the agent performs action, that is, a reflex action mode (reflex act mode) and a recognition action mode (recognition act mode).
In the reflex action mode, a rule for determining an action to be performed next from the observed value series and the action series obtained in the past is designed as an innate rule.
In this case, as the innate rule, a rule for determining an action so as not to hit a wall (allowing to-and-fro movement in a passage) or a rule for determining an action so as not to hit a wall and so as not to retrace a taken path until coming to a dead end, for example, can be employed.
According to the innate rule, the agent repeats determining an action to be performed next for an observed value observed in the agent and observing an observed value in an observation unit after performing the action.
The agent thereby obtains an action series and an observed value series when the agent has moved in the action environment. The action series and the observed value series thus obtained in the reflex action mode are used to learn the action environment. That is, the reflex action mode is used mainly to obtain the action series and the observed value series serving as learning data used in learning the action environment.
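As a concrete illustration of such an innate rule, the following Python sketch implements a rule of the kind described above (do not hit a wall, and do not immediately retrace the taken path except at a dead end). It assumes that the observed value is the passage bit mask of the earlier sketch and that the agent can move toward the top, bottom, left, and right; the concrete assignment of the actions U1 to U5 is given in a figure not reproduced here, so the action names used below are assumptions.

import random

MOVES = {"right": 1, "bottom": 2, "left": 4, "top": 8}     # passage bits, as in the earlier sketch
REVERSE = {"right": "left", "left": "right", "top": "bottom", "bottom": "top"}

def innate_rule(o_t, previous_move=None):
    # Candidate moves are the directions in which the observation unit has a passage,
    # so that the agent does not hit a wall.
    candidates = [move for move, bit in MOVES.items() if o_t & bit]
    # Avoid retracing the taken path unless the agent has come to a dead end.
    if previous_move is not None and len(candidates) > 1:
        candidates = [move for move in candidates if move != REVERSE[previous_move]]
    return random.choice(candidates)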
In the recognition action mode, the agent determines a goal, recognizes present conditions, and determines an action plan for achieving the goal from the present conditions. The agent then determines an action to be performed next according to the action plan.
Incidentally, switching between the reflex action mode and the recognition action mode can be performed according to an operation by a user, for example.
In
The reflex action determining section 11 is supplied with an observed value observed in the action environment and output by the sensor 13.
The reflex action determining section 11 in the reflex action mode determines an action to be performed next for the observed value supplied from the sensor 13 according to the innate rule, and controls the actuator 12.
The actuator 12 is, for example, a motor that makes the agent walk when the agent is a robot walking in the real world. The actuator 12 performs driving under control of the reflex action determining section 11 or an action determining section 24 to be described later. With the actuator 12 performing driving, the agent performs, in the action environment, an action determined by the reflex action determining section 11 or the action determining section 24.
The sensor 13 senses externally observable information, and outputs an observed value as a result of the sensing.
That is, the sensor 13 observes an observation unit in which the agent is located in the action environment, and outputs a symbol representing the observation unit as an observed value.
Incidentally, the sensor 13 in
The observed value output by the sensor 13 is supplied to the reflex action determining section 11 and the history storing section 14. In addition, the action output by the sensor 13 is supplied to the history storing section 14.
The history storing section 14 sequentially stores the observed value and the action output by the sensor 13. The history storing section 14 thereby stores a series of observed values (observed value series) and a series of actions (action series).
Incidentally, while a symbol representing an observation unit in which the agent is located is employed as an externally observable observed value in this case, a set of a symbol representing an observation unit in which the agent is located and a symbol representing an action performed by the agent can be employed as the observed value.
The action controlling section 15 learns a state transition probability model as an environment model for making the structure of the action environment stored (obtained), using the observed value series and the action series stored in the history storing section 14.
In addition, the action controlling section 15 calculates an action plan on the basis of the state transition probability model after learning. Further, the action controlling section 15 determines an action to be performed next by the agent according to the action plan, and controls the actuator 12 according to the action, thereby making the agent perform the action.
Specifically, the action controlling section 15 includes a learning section 21, a model storing section 22, a state recognizing section 23, and an action determining section 24.
The learning section 21 learns the state transition probability model stored in the model storing section 22 using the action series and the observed value series stored in the history storing section 14.
In this case, the state transition probability model learned by the learning section 21 is a state transition probability model defined by state transition probability for each action that a state makes a state transition due to an action performed by the agent and observation probability that a predetermined observed value is observed from a state.
An HMM (Hidden Markov Model), for example, can be used as the state transition probability model. However, the state transition probability of an ordinary HMM is not defined for each action. Accordingly, in the present embodiment, the state transition probability of the HMM is extended to state transition probability for each action performed by the agent. The HMM having the state transition probability thus extended (which HMM will be referred to also as an extended HMM) is employed as the object of learning by the learning section 21.
The model storing section 22 stores the extended HMM (state transition probability, observation probability and the like as model parameters defining the extended HMM). In addition, the model storing section 22 stores an inhibitor to be described later.
The state recognizing section 23 in the recognition action mode recognizes the present conditions of the agent on the basis of the extended HMM stored in the model storing section 22 using the action series and the observed value series stored in the history storing section 14, and obtains (recognizes) a present state as a state of the extended HMM which state corresponds to the present conditions.
The state recognizing section 23 then supplies the present state to the action determining section 24.
In addition, the state recognizing section 23 updates the inhibitor stored in the model storing section 22 and updates an elapsed time managing table stored in an elapsed time managing table storing section 32 to be described later according to the present state and the like.
The action determining section 24 functions as a planner for planning actions to be performed by the agent in the recognition action mode.
Specifically, the action determining section 24 is supplied with the present state from the state recognizing section 23, and is also supplied, from the goal determining section 16, with one of the states of the extended HMM stored in the model storing section 22 as a goal state set as a goal.
The action determining section 24 calculates (determines) an action plan as a series of actions that maximizes the likelihood of state transition from the present state from the state recognizing section 23 to the goal state from the goal determining section 16 on the basis of the extended HMM stored in the model storing section 22.
Further, the action determining section 24 determines an action to be performed next by the agent according to the action plan, and controls the actuator 12 according to the determined action.
The goal determining section 16 in the recognition action mode determines the goal state, and then supplies the goal state to the action determining section 24.
Specifically, the goal determining section 16 includes a goal selecting section 31, an elapsed time managing table storing section 32, an external goal inputting section 33, and an internal goal generating block 34.
The goal selecting section 31 is supplied with an external goal as a goal state from the external goal inputting section 33 and an internal goal as a goal state from the internal goal generating block 34.
The goal selecting section 31 selects the state as the external goal from the external goal inputting section 33 or the state as the internal goal from the internal goal generating block 34, determines the selected state as a goal state, and then supplies the goal state to the action determining section 24.
The elapsed time managing table storing section 32 stores an elapsed time managing table. For each state of the extended HMM stored in the model storing section 22, an elapsed time elapsed since the state became a present state and the like are registered in the elapsed time managing table.
The external goal inputting section 33 sets a state supplied from the outside (of the agent) as an external goal, which is a goal state, and then supplies the external goal to the goal selecting section 31.
That is, the external goal inputting section 33 is for example operated by a user when the user externally specifies a state as a goal state. The external goal inputting section 33 sets the state specified by the operation of the user as an external goal, which is a goal state, and then supplies the external goal to the goal selecting section 31.
The internal goal generating block 34 generates an internal goal as a goal state inside (of the agent), and then supplies the internal goal to the goal selecting section 31.
Specifically, the internal goal generating block 34 includes a random goal generating section 35, a branch structure detecting section 36, and an open end detecting section 37.
The random goal generating section 35 randomly selects one state from the states of the extended HMM stored in the model storing section 22 as a random goal, sets the random goal as an internal goal, which is a goal state, and then supplies the internal goal to the goal selecting section 31.
The branch structure detecting section 36 detects a state of branch structure, that is, a state from which a state transition can be made to different states when the same action is performed, on the basis of the state transition probability of the extended HMM stored in the model storing section 22, sets the state of branch structure as an internal goal, which is a goal state, and then supplies the internal goal to the goal selecting section 31.
Incidentally, when the branch structure detecting section 36 detects a plurality of states as states of branch structure from the extended HMM, the goal selecting section 31 refers to the elapsed time managing table in the elapsed time managing table storing section 32, and selects a state of branch structure whose elapsed time is a maximum among the plurality of states of branch structure as a goal state.
The open end detecting section 37 detects an open end, which has a state transition yet to be made among state transitions that can be made with a state in which a predetermined observed value is observed as a transition source in the extended HMM stored in the model storing section 22 and which is another state in which the same observed value as the predetermined observed value is observed. Then, the open end detecting section 37 sets the open end as an internal goal, which is a goal state, and supplies the internal goal to the goal selecting section 31.
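The configuration described above can be summarized by the following purely illustrative Python skeleton; the class and method names are assumptions introduced for readability and do not appear in the text.

class Agent:
    def __init__(self):
        self.history = []        # history storing section 14: list of (observed value, action) pairs
        self.model = None        # model storing section 22: extended HMM parameters (and inhibitor)
        self.goal_state = None   # goal state determined by the goal determining section 16

    def reflex_step(self, o_t, previous_move=None):
        # Reflex action determining section 11: the innate rule decides the next action.
        u_t = innate_rule(o_t, previous_move)   # innate_rule as in the earlier sketch
        self.history.append((o_t, u_t))
        return u_t

    def learn(self):
        # Learning section 21: learn the extended HMM from the stored action series
        # and observed value series (described in the following sections).
        pass

    def recognition_step(self, o_t):
        # State recognizing section 23: obtain the present state on the basis of the extended HMM.
        present_state = self.recognize(o_t)
        # Goal determining section 16: select an external or internal goal state.
        goal_state = self.determine_goal(present_state)
        # Action determining section 24: plan actions from the present state to the goal state.
        return self.plan_action(present_state, goal_state)

    def recognize(self, o_t):
        pass

    def determine_goal(self, present_state):
        pass

    def plan_action(self, present_state, goal_state):
        pass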
[Process in Reflex Action Mode]
In step S11, the reflex action determining section 11 sets a variable t for counting time to 1, for example, as an initial value. The process then proceeds to step S12.
In step S12, the sensor 13 obtains a present observed value (observed value at time t) ot from the action environment, and outputs the observed value ot. The process then proceeds to step S13.
The observed value ot at time t in the present embodiment is one of the 15 observed values O1 to O15 shown in
In step S13, the agent supplies the observed value ot output by the sensor 13 to the reflex action determining section 11. The process then proceeds to step S14.
In step S14, the reflex action determining section 11 determines an action ut to be performed at time t for the observed value ot from the sensor 13 according to the innate rule, and controls the actuator 12 according to the action ut. The process then proceeds to step S15.
The action ut at time t in the present embodiment is one of the five actions U1 to U5 shown in
The action ut determined in step S14 will hereinafter be referred to also as the determined action ut.
In step S15, the actuator 12 performs driving under control of the reflex action determining section 11. The agent thereby performs the determined action ut.
At this time, the sensor 13 observes the actuator 12, and outputs the action ut performed by the agent (symbol representing the action ut).
The process then proceeds from step S15 to step S16, where the history storing section 14 stores the observed value ot and the action ut output by the sensor 13 in a form of being added to a series of observed values and actions already stored as a history of observed values and actions. The process then proceeds to step S17.
In step S17, the reflex action determining section 11 determines whether the agent has performed actions the number of times specified (set) in advance as the number of actions to be performed in the reflex action mode.
When it is determined in step S17 that the agent has not yet performed actions the specified number of times, the process proceeds to step S18, where the reflex action determining section 11 increments time t by one. The process then returns from step S18 to step S12 to thereafter repeat a similar process.
When it is determined in step S17 that the agent has performed actions the specified number of times, that is, when time t is equal to the specified number of times, the process in the reflex action mode is ended.
According to the process in the reflex action mode, a series of observed values ot (observed value series) and a series of actions ut performed by the agent when the observed values ot are observed (action series) (a series of actions ut and a series of values ot+1 observed in the agent when the actions ut have been performed) are stored in the history storing section 14.
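Putting steps S11 to S18 together, the process in the reflex action mode can be sketched as follows in Python, assuming an environment object with observe() and perform() methods (these names are illustrative, not from the text).

def run_reflex_mode(env, innate_rule, num_actions):
    history = []                      # plays the role of the history storing section 14
    t = 1                             # step S11: initialize the time counter
    previous_move = None
    while t <= num_actions:           # step S17: stop after the specified number of actions
        o_t = env.observe()           # step S12: obtain the present observed value
        u_t = innate_rule(o_t, previous_move)   # step S14: determine the action by the innate rule
        env.perform(u_t)              # step S15: perform the determined action
        history.append((o_t, u_t))    # step S16: store the observed value and the action
        previous_move = u_t
        t += 1                        # step S18: increment time t
    return history                    # observed value series and action series used as learning data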
Then, the learning section 21 in the agent learns an extended HMM using the observed value series and the action series stored in the history storing section 14 as learning data.
In the extended HMM, the state transition probability of an ordinary (existing) HMM is extended to state transition probability for each action performed by the agent.
Now suppose that an ergodic HMM in which a state transition from a certain state to an arbitrary state is possible is employed as HMMs including the extended HMM. Also suppose that the number of states of an HMM is N.
In this case, the ordinary HMM has the state transition probabilities aij of N×N state transitions from each of N states Si to each of N states Sj as a model parameter.
All the state transition probabilities of the ordinary HMM can be represented by a two-dimensional table in which the state transition probability aij of a state transition from a state Si to a state Sj is disposed in an ith row from the top and a jth column from the left.
The table of state transition probabilities of an HMM will hereinafter be described also as state transition probability A.
The extended HMM has state transition probabilities for each action Um performed by the agent.
The state transition probability of a state transition from a state Si to a state Sj with respect to a certain action Um will hereinafter be described also as state transition probability aij(Um).
The state transition probability aij(Um) represents a probability of a state transition occurring from a state Si to a state Sj when the agent performs an action Um.
All the state transition probabilities of the extended HMM can be represented by a three-dimensional table in which the state transition probability aij (Um) of a state transition from a state Si to a state Sj with respect to an action Um is disposed in an ith row from the top, a jth column from the left, and an mth plane in a direction of depth from the front side.
Hereinafter, in the three-dimensional table of the state transition probability A, an axis in a vertical direction will be referred to as an i-axis, an axis in a horizontal direction will be referred to as a j-axis, and an axis in the direction of depth will be referred to as an m-axis or an action axis.
In addition, a plane composed of state transition probabilities aIj(Um) which plane is obtained by cutting the three-dimensional table of the state transition probability A with a plane perpendicular to the action axis at a position m on the action axis will be referred to also as a state transition probability plane with respect to an action Um.
Further, a plane composed of state transition probabilities aIj (Um) which plane is obtained by cutting the three-dimensional table of the state transition probability A with a plane perpendicular to the i-axis at a position I on the i-axis will be referred to also as an action plane with respect to a state SI.
The state transition probabilities aIj (Um) forming the action plane with respect to the state SI represent a probability of each action Um being performed when a state transition occurs with the state SI as a transition source.
Incidentally, as with the ordinary HMM, the extended HMM has not only the state transition probability aij(Um) for each action but also initial state probability πi of being in a state Si at an initial time t=1 and observation probability bi(Ok) of an observed value Ok being observed in the state Si as model parameters.
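For illustration, the model parameters of the extended HMM described above can be held as the following NumPy arrays (a sketch; the variable names and the concrete values of N, M, and K are illustrative, and indices are 0-based here whereas the text numbers states, actions, and observed values from 1). The slicing operations correspond to the state transition probability plane and the action plane described above.

import numpy as np

N, M, K = 49, 5, 15                 # numbers of states, actions, and observed values (illustrative)
pi = np.zeros(N)                    # initial state probability pi_i
A = np.zeros((N, N, M))             # state transition probability a_ij(U_m): i-axis, j-axis, action axis
B = np.zeros((N, K))                # observation probability b_i(O_k)

m = 0                               # index of an action U_m
transition_plane = A[:, :, m]       # state transition probability plane with respect to the action U_m
I = 3                               # index of a state S_I
action_plane = A[I, :, :]           # action plane with respect to the state S_I (probabilities a_Ij(U_m))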
[Learning Extended HMM]
In step S21, the learning section 21 initializes the extended HMM.
Specifically, the learning section 21 initializes the initial state probability πi, the state transition probability aij(Um) (for each action), and the observation probability bi(Ok) as model parameters of the extended HMM stored in the model storing section 22.
Incidentally, supposing that the number (total number) of states of the extended HMM is N, the initial state probability πi is initialized to 1/N, for example. Supposing in this case that the action environment, which is a labyrinth in a two-dimensional plane, is composed of a×b observation units, that is, a horizontal observation units and b vertical observation units, (a+Δ)×(b+Δ), where Δ is an integer serving as a margin, can be employed as the number N of states of the extended HMM.
In addition, the state transition probability aij(Um) and the observation probability bi(Ok) are initialized to a random value that can be assumed as a probability value, for example.
In this case, the state transition probability aij(Um) is initialized such that a sum total of state transition probabilities aij(Um) of each row in a state transition probability plane with respect to each action Um (ai,1(Um)+ai,2(Um)+ . . . +ai, N(Um)) is 1.0.
Similarly, the observation probability bi(Ok) is initialized such that a sum total of observation probabilities of observed values O1, O2, . . . , OK being observed from each state Si (bi(O1)+bi(O2)+ . . . +bi(OK)) is 1.0.
Incidentally, when so-called incremental learning is performed, the initial state probability πi, the state transition probability aij(Um), and the observation probability bi(Ok) of the extended HMM stored in the model storing section 22 are used as initial values as they are. That is, the initialization in step S21 is not performed.
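For illustration, the initialization in step S21 might look as follows in Python, under the assumptions stated above: πi is set to 1/N, and the state transition probabilities and observation probabilities are set to random values normalized so that each of the sums described above is 1.0 (the values of a, b, and Δ are illustrative).

import numpy as np

a, b, Delta = 5, 5, 2
N = (a + Delta) * (b + Delta)       # number of states: (a+Δ)×(b+Δ)
M, K = 5, 15                        # numbers of actions and observed values
rng = np.random.default_rng()

pi = np.full(N, 1.0 / N)                      # initial state probability pi_i = 1/N
A = rng.random((N, N, M))
A /= A.sum(axis=1, keepdims=True)             # each row of each state transition probability plane sums to 1.0
B = rng.random((N, K))
B /= B.sum(axis=1, keepdims=True)             # observation probabilities for each state sum to 1.0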
After step S21, the process proceeds to step S22. From step S22 on down, the learning of the extended HMM is performed which estimates the initial state probability πi, the state transition probability aij(Um) for each action, and the observation probability bi(Ok) according to a Baum-Welch re-estimation method (method obtained by extending the Baum-Welch re-estimation method with respect to action) using the action series and the observed value series as learning data stored in the history storing section 14.
Specifically, in step S22, the learning section 21 calculates forward probability αt+1(j) and backward probability βt(i).
In the extended HMM, when an action ut is performed at time t, a state transition occurs from a present state Si to a state Sj, and an observed value ot+1 is observed in the state Sj after the state transition at next time t+1.
In such an extended HMM, the forward probability αt+1(j) is probability P (o1, o2, . . . , ot+1, u1, u2, . . . ut, st+1=j|Λ) of the action series u1, u2, . . . , ut of the learning data being observed, the observed value series o1, o2, . . . , ot+1 being observed, and the agent being in the state Sj at time t+1 in a model Λ, which is the present extended HMM (extended HMM defined by the initial state probability πi, the state transition probability aij(Um), and the observation probability bi(Ok) actually stored in the model storing section 22). The forward probability αt+1(j) is expressed by equation (1).
Incidentally, a state st represents a state at time t, and is one of states S1 to SN when the number of states of the extended HMM is N. In addition, an equation st+1=j denotes that a state st+1 at time t+1 is a state Sj.
The forward probability αt+1(j) of equation (1) represents a probability of the agent being in a state Sj at time t+1 and observing an observed value ot+1 after a state transition is effected by performing (observing) an action ut when the agent observes the action series u1, u2, . . . , ut−1 and the observed value series o1, o2, . . . , ot of the learning data and is in a state st at time t.
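Equation (1) itself is not reproduced in this text. A recursion consistent with the description above, in which the ordinary forward recursion of an HMM is extended by the state transition probability aij(ut) for each action, would be
αt+1(j) = [ Σi=1, . . . , N αt(i)aij(ut) ] bj(ot+1) (1)
where the sum is taken over all states Si at time t.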
Incidentally, the initial value α1(j) of the forward probability αt+1(j) is expressed by equation (2).
α1(j)=πjbj(o1) (2)
The initial value α1(j) of equation (2) represents a probability of being in the state Sj at the initial time (time t=1) and observing an observed value o1.
In addition, in the extended HMM, the backward probability βt(i) is probability P (ot+1, ot+2, . . . , oT, ut+1, ut+2, . . . , uT−1, st=i|Λ) of the agent being in a state Si at time t, and thereafter observing the action series ut+1, ut+2, . . . , uT−1 of the learning data and observing the observed value series ot+1, ot+2, . . . , oT in the model Λ, which is the present extended HMM. The backward probability βt(i) is expressed by equation (3).
Incidentally, T denotes the number of observed values of the observed value series of the learning data.
The backward probability βt(i) of equation (3) represents a probability of the state st at time t being the state Si in a case where a state transition is effected by performing (observing) an action ut in the state Si at time t, the agent is in the state Sj at time t+1 and observes an observed value ot+1, and thereafter the action series ut+1, ut+2, . . . , uT−1 and the observed value series ot+2, ot+3, . . . , oT of the learning data are observed.
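Equation (3) itself is likewise not reproduced in this text. A recursion consistent with the description above would be
βt(i) = Σj=1, . . . , N aij(ut)bj(ot+1)βt+1(j) (3)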
Incidentally, the initial value βT(i) of the backward probability βt(i) is expressed by equation (4).
βT(i)=1 (4)
The initial value βT(i) of equation (4) indicates that a probability of being in the state Si at the end (time t=T) is 1.0, that is, that the agent is always in the state Si at the end.
The extended HMM is different from the ordinary HMM in that the extended HMM uses the state transition probability aij(ut) for each action as state transition probability of a state transition from a certain state Si to a certain state Sj, as shown in equation (1) and equation (3).
After the forward probability αt+1(j) and the backward probability βt(i) are calculated in step S22, the process proceeds to step S23, where the learning section 21 re-estimates the initial state probability πi, the state transition probability aij(Um) for each action Um, and the observation probability bi(Ok) as model parameters Λ of the extended HMM using the forward probability αt+1(j) and the backward probability βt(i).
In this case, the re-estimation of the model parameters is performed as follows by extending the Baum-Welch re-estimation method, since the state transition probability is extended to the state transition probability aij(Um) for each action Um.
A probability ξt+1(i, j, Um) of a state transition being made to a state Sj at time t+1 by performing an action Um in a state Si at time t in a case where an action series U=u1, u2, . . . , uT−1 and an observed value series O=o1, o2, . . . , oT are observed in the model Λ as the present extended HMM is expressed by equation (5) using forward probability αt(i) and backward probability βt+1(j).
Further, a probability γt(i, Um) of action ut=Um being performed in the state Si at time t can be calculated as a probability obtained by marginalizing the probability ξt+1(i, j, Um) with respect to the state Sj at time t+1. The probability γt(i, Um) is expressed by equation (6).
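Equations (5) and (6) are not reproduced in this text. Forms consistent with the description above, in which the probability of the entire learning data under the model Λ is written P(O, U|Λ), would be
ξt+1(i, j, Um) = αt(i)aij(Um)bj(ot+1)βt+1(j)/P(O, U|Λ) when ut=Um, and 0 otherwise (5)
γt(i, Um) = Σj=1, . . . , N ξt+1(i, j, Um) (6)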
The learning section 21 re-estimates the model parameters Λ of the extended HMM using the probability ξt+1(i, j, Um) of equation (5) and the probability γt(i, Um) of equation (6).
Supposing that estimated values obtained by re-estimating the model parameters Λ are represented as model parameters Λ′ using a prime (′), the estimated value π′i of initial state probability, which estimated value is a model parameter Λ′, is obtained according to equation (7).
In addition, the estimated value a′ij(Um) of state transition probability for each action, which estimated value is a model parameter Λ′, is obtained according to equation (8).
The numerator of the estimated value a′ij(Um) of state transition probability of equation (8) represents an expected value of the number of times of making a state transition to the state Sj by performing the action ut=Um in the state Si, and the denominator of the estimated value a′ij(Um) of state transition probability of equation (8) represents an expected value of the number of times of making a state transition by performing the action ut=Um in the state Si.
The estimated value b′j(Ok) of observation probability, which estimated value is a model parameter Λ′, is obtained according to equation (9).
The numerator of the estimated value b′j(Ok) of observation probability of equation (9) represents an expected value of the number of times of making a state transition to the state Sj and observing an observed value Ok in the state Sj, and the denominator of the estimated value b′j(Ok) of observation probability of equation (9) represents an expected value of the number of times of making a state transition to the state Sj.
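Under the same illustrative array conventions, one re-estimation step corresponding to equations (5) through (9) might be sketched as follows (this reuses the forward_backward sketch above and is an assumption-laden illustration, not the exact computation of the learning section 21):

    import numpy as np

    def reestimate(pi, A, B, obs, acts):
        # One Baum-Welch style re-estimation step extended to actions.
        T, N, M = len(obs), len(pi), A.shape[2]
        K = B.shape[1]
        alpha, beta = forward_backward(pi, A, B, obs, acts)
        likelihood = alpha[T - 1].sum()         # probability of the learning data
        gamma = alpha * beta / likelihood       # gamma[t, i]: probability of being in Si at time t
        xi_sum = np.zeros((N, N, M))            # accumulates xi_{t+1}(i, j, Um) of equation (5)
        gamma_act = np.zeros((N, M))            # accumulates gamma_t(i, Um) of equation (6)
        for t in range(T - 1):
            m = acts[t]
            xi = alpha[t][:, None] * A[:, :, m] * B[:, obs[t + 1]] * beta[t + 1]
            xi /= likelihood
            xi_sum[:, :, m] += xi
            gamma_act[:, m] += xi.sum(axis=1)   # marginalize over the state Sj at time t+1
        new_pi = gamma[0]                                            # equation (7)
        new_A = xi_sum / np.maximum(gamma_act[:, None, :], 1e-300)   # equation (8)
        new_B = np.zeros((N, K))
        for t in range(T):
            new_B[:, obs[t]] += gamma[t]
        new_B /= np.maximum(gamma.sum(axis=0)[:, None], 1e-300)      # equation (9)
        return new_pi, new_A, new_B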
After re-estimating the estimated values π′i, a′ij(Um), and b′j(Ok) of initial state probability, state transition probability, and observation probability as model parameters Λ′ in step S23, the learning section 21 stores the estimated value π′i as new initial state probability πi, the estimated value a′ij(Um) as new state transition probability aij(Um), and the estimated value b′j(Ok) as new observation probability bj(Ok) in the model storing section 22 in an overwriting manner. The process then proceeds to step S24.
In step S24, whether the model parameters of the extended HMM, that is, the (new) initial state probability πi, the (new) state transition probability aij(Um), and the (new) observation probability bj(Ok) stored in the model storing section 22 have converged is determined.
When it is determined in step S24 that the model parameters of the extended HMM have not converged yet, the process returns to step S22 to repeat a similar process using new initial state probability πi, new state transition probability aij(Um), and new observation probability bj(Ok) stored in the model storing section 22.
When it is determined in step S24 that the model parameters of the extended HMM have converged, that is, when the model parameters of the extended HMM after the re-estimation in step S23 are hardly changed from the model parameters of the extended HMM before the re-estimation in step S23, for example, the process of learning the extended HMM is ended.
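The concrete convergence criterion of step S24 is not detailed in this passage; one simple possibility, given only as an assumption for illustration, is to regard the model parameters as having converged when none of them changes by more than a small threshold between successive re-estimations:

    import numpy as np

    def has_converged(old_params, new_params, eps=1e-4):
        # old_params, new_params: tuples (pi, A, B) before and after re-estimation.
        # eps is an illustrative threshold for "hardly changed".
        return all(np.max(np.abs(o - n)) < eps for o, n in zip(old_params, new_params))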
As described above, by learning the extended HMM defined by the state transition probability aij(Um) for each action, using the action series of actions performed by the agent and the observed value series of observed values observed in the agent when the agent has performed the actions, the structure of the action environment is obtained through the observed value series in the extended HMM, and the relation between each observed value and the action performed when the observed value is observed (the relation between the actions performed by the agent and the observed values observed when the actions have been performed, that is, the observed values observed after the actions) is also obtained.
As a result, as will be described later, an appropriate action can be determined as an action to be performed by the agent in the action environment in the recognition action mode by using the extended HMM after such learning.
[Process in Recognition Action Mode]
In the recognition action mode, as described above, the agent determines a goal, recognizes present conditions, and calculates an action plan to achieve the goal from the present conditions. Further, the agent determines an action to be performed next according to the action plan, and performs the action. The agent then repeats the above process.
In step S31, the state recognizing section 23 sets a variable t for counting time to 1, for example, as an initial value. The process then proceeds to step S32.
In step S32, the sensor 13 obtains a present observed value (observed value at time t) ot from the action environment, and outputs the observed value ot. The process then proceeds to step S33.
In step S33, the history storing section 14 stores, as a history of observed values and actions, the observed value ot at time t obtained by the sensor 13 and the action ut−1 performed by the agent at the immediately preceding time t−1 and output by the sensor 13 when the observed value ot is observed (immediately before the sensor 13 obtains the observed value ot), adding them to the already stored series of observed values and actions. The process then proceeds to step S34.
In step S34, the state recognizing section 23 recognizes present conditions of the agent on the basis of the extended HMM using the action performed by the agent and the observed value observed in the agent when the action has been performed, and obtains a present state as a state of the extended HMM which state corresponds to the present conditions.
Specifically, the state recognizing section 23 reads out, from the history storing section 14, an action series of zero or more latest actions and an observed value series of one or more latest observed values as an action series and an observed value series for recognition which series are used to recognize the present conditions of the agent.
Further, the state recognizing section 23 observes the action series and the observed value series for recognition, and obtains an optimum state probability δt(j), which is a maximum value of a state probability of being in a state Sj at time (present time) t, and an optimum path ψt(j), which is a state series providing the optimum state probability δt(j), according to a Viterbi algorithm (algorithm obtained by extending the Viterbi algorithm to actions), for example, in the learned extended HMM stored in the model storing section 22.
According to the Viterbi algorithm, a state series that maximizes the likelihood of a certain observed value series being observed (maximum likelihood state series) can be estimated among series of states (state series) traced when the observed value series is observed in the ordinary HMM.
However, because state transition probability is extended with respect to actions in the extended HMM, the Viterbi algorithm needs to be extended with respect to actions in order to be applied to the extended HMM.
Thus, the state recognizing section 23 obtains the optimum state probability δt(j) and the optimum path ψt(j) according to equation (10) and equation (11), respectively.
In this case, max[X] in equation (10) denotes a maximum value of X obtained when a suffix i indicating a state Si is changed to integers in a range of 1 to N, which is the number of states. In addition, argmax{X} in equation (11) denotes a suffix i that maximizes X obtained when the suffix i is changed to integers in a range of 1 to N.
The state recognizing section 23 observes the action series and the observed value series for recognition, and obtains the maximum likelihood state series, which is a state series reaching a state Sj and maximizing the optimum state probability δt(j) of equation (10) at time t from the optimum path ψt(j) of equation (11).
Further, the state recognizing section 23 sets the maximum likelihood state series as a result of recognition of the present conditions, and obtains (estimates) a last state of the maximum likelihood state series as a present state st.
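Equations (10) and (11) themselves are not reproduced in this passage; assuming the usual form of the Viterbi recursion with the action-dependent state transition probability, the recognition of the present state might be sketched as follows (the array conventions are the same illustrative ones as above, and obs and acts here are the series for recognition):

    import numpy as np

    def recognize_present_state(pi, A, B, obs, acts):
        # Viterbi algorithm extended to actions: the transition matrix used at each
        # step is the one for the action actually performed.
        T, N = len(obs), len(pi)
        delta = pi * B[:, obs[0]]                 # optimum state probability at t=1
        psi = np.zeros((T, N), dtype=int)         # optimum path (back pointers)
        for t in range(1, T):
            scores = delta[:, None] * A[:, :, acts[t - 1]]   # delta_{t-1}(i) * aij(u_{t-1})
            psi[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) * B[:, obs[t]]        # corresponds to equation (10)
        # trace the optimum path backward to obtain the maximum likelihood state series
        states = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):
            states.append(int(psi[t, states[-1]]))
        states.reverse()
        return states[-1], states                 # present state s_t and the state series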
After obtaining the present state st, the state recognizing section 23 updates the elapsed time managing table stored in the elapsed time managing table storing section 32 on the basis of the present state st. The process then proceeds from step S34 to step S35.
Specifically, in association with each state of the extended HMM, an elapsed time elapsed since the state became a present state is registered in the elapsed time managing table of the elapsed time managing table storing section 32. The state recognizing section 23 resets the elapsed time of the state that has become the present state st to zero, for example, and increments the elapsed times of other states by one, for example, in the elapsed time managing table.
The elapsed time managing table is referred to as required when the goal selecting section 31 selects a goal state, as described above.
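If the elapsed time managing table is represented, for example, by a numpy integer array elapsed of length N in which elapsed[s] is the time elapsed since the state Ss last became the present state (an illustrative representation, not the actual table format), the update performed by the state recognizing section 23 amounts to:

    def update_elapsed_time(elapsed, present_state):
        # Increment the elapsed times of all states, then reset the elapsed time
        # of the state that has become the present state to zero.
        elapsed += 1
        elapsed[present_state] = 0
        return elapsed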
In step S35, the state recognizing section 23 updates the inhibitor stored in the model storing section 22 on the basis of the present state st. The updating of the inhibitor will be described later.
Further, in step S35, the state recognizing section 23 supplies the present state st to the action determining section 24. The process then proceeds to step S36.
In step S36, the goal determining section 16 determines a goal state from among the states of the extended HMM, and supplies the goal state to the action determining section 24. The process then proceeds to step S37.
In step S37, the action determining section 24 corrects the state transition probability of the extended HMM stored in the model storing section 22 using the inhibitor stored in the same model storing section 22 (inhibitor updated in the immediately preceding step S35). The action determining section 24 thereby calculates corrected transition probability as state transition probability after the correction.
The corrected transition probability is used as the state transition probability of the extended HMM in calculation of an action plan by the action determining section 24 to be described later.
After step S37, the process proceeds to step S38, where the action determining section 24 calculates an action plan as a series of actions that maximizes the likelihood of state transition from the present state from the state recognizing section 23 to the goal state from the goal determining section 16 according to the Viterbi algorithm (algorithm obtained by extending the Viterbi algorithm to actions), for example, on the basis of the extended HMM stored in the model storing section 22.
According to the Viterbi algorithm, a maximum likelihood state series that maximizes the likelihood of a certain observed value series being observed can be estimated among state series from one of two states to the other state, that is, state series from the present state to the goal state, for example, in the ordinary HMM.
However, as described above, because state transition probability is extended with respect to actions in the extended HMM, the Viterbi algorithm needs to be extended with respect to actions in order to be applied to the extended HMM.
Thus, the action determining section 24 obtains a state probability δ′t(j) according to equation (12).
In this case, max[X] in equation (12) denotes a maximum value of X obtained when a suffix i indicating a state Si is changed to integers in a range of 1 to N, which is the number of states, and a suffix m indicating an action Um is changed to integers in a range of 1 to M, which is the number of actions.
Equation (12) is obtained by deleting the observation probability bj(ot) from equation (10) for obtaining the optimum state probability δt(j). In addition, in equation (12), the state probability δ′t(j) is obtained in consideration of the action Um. This corresponds to the extension of the Viterbi algorithm with respect to actions.
The action determining section 24 performs the calculation of equation (12) in a forward direction, and temporarily stores the suffix i taking a maximum state probability δ′t(j) and the suffix m indicating an action Um performed when a state transition to the state Si indicated by the suffix i occurs at each time.
Incidentally, the corrected transition probability obtained by correcting the state transition probability aij(Um) of the learned extended HMM by the inhibitor is used as state transition probability aij(Um) in the calculation of equation (12).
The action determining section 24 calculates the state probability δ′t(j) of equation (12) with the present state st as a first state, and ends the calculation of the state probability δ′t(j) of equation (12) when the state probability δ′t(Sgoal) of a goal state Sgoal becomes a predetermined threshold value δ′th or more, as shown in equation (13).
δ′t(Sgoal)≧δ′th (13)
Incidentally, the threshold value δ′th in equation (13) is set according to equation (14), for example.
δ′th=0.9^T′ (14)
In this case, T′ in equation (14) denotes the number of calculations of equation (12) (series length of a maximum likelihood state series obtained from equation (12)).
According to equation (14), the threshold value δ′th is set by adopting 0.9 as a state probability when one likely state transition occurs.
Hence, according to equation (13), the calculation of the state probability δ′t(j) of equation (12) is ended when T′ likely state transitions occur consecutively.
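As a purely numerical illustration (the series length here is an example, not a value taken from the embodiment), when T′=10, equation (14) gives δ′th=0.9^10≈0.349; the calculation of equation (12) is therefore continued until the state probability of the goal state Sgoal is at least as high as that of ten consecutive state transitions each made with a probability of about 0.9.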
Ending the calculation of the state probability δ′t(j) of equation (12), the action determining section 24 obtains a maximum likelihood state series (shortest path in many cases) from the present state st to the goal state Sgoal and a series of actions Um performed when a state transition providing the maximum likelihood state series occurs, by tracing the suffixes i and m stored for the state Si and the action Um from a state at the time of ending the calculation of the state probability δ′t(j) of equation (12), that is, the goal state Sgoal to the present state st in an opposite direction.
Specifically, as described above, the action determining section 24 stores the suffix i taking a maximum state probability δ′t(j) and the suffix m indicating an action Um performed when a state transition to the state Si indicated by the suffix i occurs at each time when calculating the state probability δ′t(j) of equation (12) in a forward direction.
The suffix i at each time indicates which state Si has a maximum state probability when a return is made from a state Sj to the state Si in a temporally retrograde direction. The suffix m at each time indicates an action Um that effects a state transition providing the maximum state probability.
Hence, when the suffixes i and m at each time are retraced time by time from the time of ending the calculation of the state probability δ′t(j) of equation (12), and a time of starting the calculation of the state probability δ′t(j) of equation (12) is reached, series formed by arranging each of a series of suffixes of states of a state series from the present state st to the goal state Sgoal and a series of suffixes of actions of an action series performed when state transitions of the state series occur in temporally retrograde order are obtained.
The action determining section 24 obtains the state series (maximum likelihood state series) from the present state st to the goal state Sgoal and the action series performed when the state transitions of the state series occur by rearranging the series arranged in temporally retrograde order in order of time.
The action series performed when the state transitions of the maximum likelihood state series from the present state st to the goal state Sgoal occur, the action series being obtained by the action determining section 24 as described above, is an action plan.
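Assuming that the corrected transition probability described later in equation (15) is held in an N×N×M array A_stm, the calculation of an action plan outlined above might be sketched as follows (again an illustrative sketch: the names, the maximum number of steps, and the handling of the case where the threshold is never reached are all assumptions):

    import numpy as np

    def plan_actions(A_stm, present_state, goal_state, max_steps=1000):
        # Viterbi-style forward calculation of equation (12): the observation
        # probability is not used, and the maximization runs over both the
        # source state i and the action m.
        N, _, M = A_stm.shape
        delta = np.zeros(N)
        delta[present_state] = 1.0              # the present state s_t is the first state
        back = []                               # (best i, best m) leading into each state j, per step
        for step in range(1, max_steps + 1):
            scores = delta[:, None, None] * A_stm          # delta'(i) * aij(Um), shape (N, N, M)
            new_delta = np.zeros(N)
            ptr = np.zeros((N, 2), dtype=int)
            for j in range(N):
                i_best, m_best = np.unravel_index(np.argmax(scores[:, j, :]), (N, M))
                new_delta[j] = scores[i_best, j, m_best]
                ptr[j] = (i_best, m_best)
            back.append(ptr)
            delta = new_delta
            if delta[goal_state] >= 0.9 ** step:           # equations (13) and (14)
                break
        # trace the stored suffixes backward from the goal state to the present state
        states, actions = [goal_state], []
        for ptr in reversed(back):
            i_best, m_best = ptr[states[-1]]
            actions.append(int(m_best))
            states.append(int(i_best))
        states.reverse()
        actions.reverse()
        return states, actions                  # maximum likelihood state series and action plan

The first element of the returned action series corresponds to the determined action ut of step S39.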
In this case, the maximum likelihood state series obtained in the action determining section 24 together with the action plan is the state series of state transitions that occur (should occur) when the agent performs actions according to the action plan. Thus, if state transitions not in accordance with the arrangement of states of the maximum likelihood state series occur when the agent performs the actions according to the action plan, the agent may not reach the goal state even when the agent performs the actions according to the action plan.
After the action determining section 24 obtains the action plan in step S38 as described above, the process proceeds to step S39, where the action determining section 24 determines an action ut to be performed next according to the action plan. The process then proceeds to step S40.
That is, the action determining section 24 sets the first action of the action series serving as the action plan as the determined action ut to be performed next.
In step S40, the action determining section 24 controls the actuator 12 according to the action (determined action) ut determined in the immediately preceding step S39, and the agent thereby performs the action ut.
The process thereafter proceeds from step S40 to step S41, where the state recognizing section 23 increments time t by one. The process returns to step S32 to repeat a similar process from step S32 on down.
Incidentally, the process in the recognition action mode of
As described above, the state recognizing section 23 recognizes the present conditions of the agent using an action performed by the agent and an observed value observed in the agent when the action has been performed and obtains a present state corresponding to the present conditions on the basis of the extended HMM, the goal determining section 16 determines a goal state, and the action determining section 24 calculates an action plan, which is a series of actions that maximizes the likelihood (state probability) of state transition from the present state to the goal state on the basis of the extended HMM, and determines an action to be performed next by the agent according to the action plan. Therefore an appropriate action can be determined as an action to be performed by the agent for the agent to reach the goal state.
The existing action determining method prepares a state transition probability model learning an observed value series and an action model, which is a model of actions for realizing state transitions of the state transition probability model, separately from each other, and performs learning.
Thus, because the two models of the state transition probability model and the action model are learned, the learning needs large amounts of calculation cost and storage resources.
On the other hand, the agent of the present embodiment learns only the single extended HMM, in which the actions are integrated into the state transition probability, so that the learning can be performed with a smaller amount of calculation cost and storage resources.
In addition, the existing action determining method needs to calculate a state series up to a goal state using the state transition probability model, and calculate actions to obtain the state series using the action model. That is, the existing action determining method needs to calculate the state series up to the goal state and calculate the actions to obtain the state series using the separate models.
The existing action determining method therefore needs a large amount of calculation cost up to the calculation of actions.
On the other hand, the agent of the present embodiment obtains both the state series up to the goal state and the actions for realizing the state series from the single extended HMM when calculating an action plan, so that the calculation cost up to the calculation of actions can be reduced.
[Determination of Goal State]
In the goal determining section 16, in step S51, the goal selecting section 31 determines whether an external goal is set.
When it is determined in step S51 that an external goal is set, that is, when for example a user operates the external goal inputting section 33 to specify one state of the extended HMM stored in the model storing section 22 as an external goal, which is a goal state, and the goal state (suffix indicating the goal state) is supplied from the external goal inputting section 33 to the goal selecting section 31, the process proceeds to step S52, where the goal selecting section 31 selects the external goal from the external goal inputting section 33 and then supplies the external goal to the action determining section 24. The process then makes a return.
Incidentally, the user can specify a state (suffix of the state) as a goal state by not only operating the external goal inputting section 33 but also operating a terminal such as a PC (Personal Computer), for example. In this case, the external goal inputting section 33 recognizes the state specified by the user by communicating with the terminal operated by the user, and then supplies the state to the goal selecting section 31.
When it is determined in step S51 that no external goal is set, on the other hand, the process proceeds to step S53, where the open end detecting section 37 detects an open end from the states of the extended HMM on the basis of the extended HMM stored in the model storing section 22. The process then proceeds to step S54.
In step S54, the goal selecting section 31 determines whether an open end is detected.
When the open end detecting section 37 detects an open end from the states of the extended HMM, the open end detecting section 37 supplies the state (suffix indicating the state) as the open end to the goal selecting section 31. The goal selecting section 31 determines whether an open end is detected according to whether the open end is supplied from the open end detecting section 37.
When it is determined in step S54 that an open end is detected, that is, when one or more open ends are supplied from the open end detecting section 37 to the goal selecting section 31, the process proceeds to step S55, where the goal selecting section 31 selects for example an open end having a minimum suffix indicating the state as a goal state from the one or more open ends from the open end detecting section 37, and then supplies the goal state to the action determining section 24. The process then makes a return.
When it is determined in step S54 that no open end is detected, that is, when no open end is supplied from the open end detecting section 37 to the goal selecting section 31, the process proceeds to step S56, where the branch structure detecting section 36 detects a state of branch structure from the states of the extended HMM on the basis of the extended HMM stored in the model storing section 22. The process then proceeds to step S57.
In step S57, the goal selecting section 31 determines whether a state of branch structure is detected.
In this case, when the branch structure detecting section 36 detects a state of branch structure from the states of the extended HMM, the branch structure detecting section 36 supplies the state of branch structure (suffix indicating the state of branch structure) to the goal selecting section 31. The goal selecting section 31 determines whether a state of branch structure is detected according to whether the state of branch structure is supplied from the branch structure detecting section 36.
When it is determined in step S57 that a state of branch structure is detected, that is, when one or more states of branch structure are supplied from the branch structure detecting section 36 to the goal selecting section 31, the process proceeds to step S58, where the goal selecting section 31 selects one state as a goal state from the one or more states of branch structure from the branch structure detecting section 36, and then supplies the goal state to the action determining section 24. The process then makes a return.
Specifically, referring to the elapsed time managing table in the elapsed time managing table storing section 32, the goal selecting section 31 recognizes the elapsed time of the one or more states of branch structure from the branch structure detecting section 36.
Further, the goal selecting section 31 detects a state having a longest elapsed time from the one or more states of branch structure from the branch structure detecting section 36, and selects the state as a goal state.
When it is determined in step S57 that no state of branch structure is detected, on the other hand, that is, when no state of branch structure is supplied from the branch structure detecting section 36 to the goal selecting section 31, the process proceeds to step S59, where the random goal generating section 35 randomly selects one state of the extended HMM stored in the model storing section 22, and then supplies the state to the goal selecting section 31.
Further, in step S59, the goal selecting section 31 selects the state from the random goal generating section 35 as a goal state, and then supplies the goal state to the action determining section 24. The process then makes a return.
Incidentally, the detection of an open end by the open end detecting section 37 and the detection of a state of branch structure by the branch structure detecting section 36 will be described later in detail.
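The priority order of steps S51 through S59 can be summarized by the following sketch (the argument names are illustrative: external_goal is None when no external goal is set, open_ends and branch_states are the suffix lists supplied by the open end detecting section 37 and the branch structure detecting section 36, elapsed is the elapsed time managing table, and rng is a random number generator):

    import numpy as np

    def determine_goal_state(external_goal, open_ends, branch_states, elapsed, N,
                             rng=np.random.default_rng()):
        # Priority: external goal > open end with the minimum suffix >
        # branch-structure state with the longest elapsed time > random state.
        if external_goal is not None:
            return external_goal
        if open_ends:
            return min(open_ends)
        if branch_states:
            return max(branch_states, key=lambda s: elapsed[s])
        return int(rng.integers(N))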
[Calculation of Action Plan]
In
In the extended HMM of
Two states between which state transitions are possible represent that the agent can move between two observation units corresponding to the two respective states. Thus, arrows indicating state transitions of the extended HMM represent passages through which the agent can move in the action environment.
In
For example, in
When the extended HMM is learned using an observed value series and an action series obtained in the action environment whose structure changes as learning data, the extended HMM in which a plurality of states correspond to one observation unit as shown in
That is, in
Further, in
As a result, in the extended HMM of
That is, in the extended HMM, the structure of the action environment is obtained in which no state transition is made and no passage is allowed because of the presence of the wall between the state S21 of the observation unit corresponding to the states S21 and S23 and the state S17 of the observation unit corresponding to the states S2 and S17.
In addition, in the extended HMM, the action environment of the structure having the passage between the observation unit corresponding to the states S21 and S23 and the observation unit corresponding to the states S2 and S17 is obtained by the state S23 and the state S2.
That is, in the extended HMM, the structure of the action environment is obtained in which state transitions are made and passage is allowed between the state S23 of the observation unit corresponding to the states S21 and S23 and the state S2 of the observation unit corresponding to the states S2 and S17.
As described above, in the extended HMM, even when the structure of the action environment changes, the structure of the action environment which structure changes can be obtained.
In
In
The action determining section 24 sets an action of moving from the first state S28 to the next state S23 of the action plan PL1 as a determined action. The agent performs the determined action.
As a result, the agent moves in a right direction (performs the action U2 in
In
In a state where the structure having the wall between the observation unit corresponding to the states S21 and S23 and the observation unit corresponding to the states S2 and S17 is obtained, the observation unit corresponding to the states S21 and S23 is the state S21, as described above. At time t=2, the state recognizing section 23 recognizes that the present state is the state S21.
The state recognizing section 23 updates an inhibitor for inhibiting state transition so as to inhibit a state transition between a state immediately preceding the present state and a state other than the present state and so as not to inhibit a state transition between the immediately preceding state and the present state (not to inhibit will hereinafter be referred to also as to enable) with respect to the action performed by the agent at the time of the state transition from the immediately preceding state to the present state.
That is, in this case, because the present state is the state S21 and the immediately preceding state is the state S28, the inhibitor is updated so as to inhibit state transition between the immediately preceding state S28 and the state other than the present state S21, that is, for example state transition between the first state S28 and the next state S23 in the action plan PL1 obtained at time t=1.
Further, the inhibitor is updated so as to enable state transition between the immediately preceding state S28 and the present state S21.
Then, at time t=2, the action determining section 24 sets the state S21 as the present state, sets the state S30 as the goal state, obtains a maximum likelihood state series S21, S28, S27, S26, S25, S20, S15, S10, S1, S17, S16, S22, S29, and S30 for reaching the goal state from the present state, and calculates an action series of actions performed when state transitions providing the maximum likelihood state series occur as an action plan.
Further, the action determining section 24 sets an action of moving from the first state S21 to the next state S28 of the action plan as a determined action. The agent performs the determined action.
As a result, the agent moves in a left direction (performs the action U4 in
At time t=3, the state recognizing section 23 recognizes that the present state is the state S28.
Then, at time t=3, the action determining section 24 sets the state S28 as the present state, sets the state S30 as the goal state, obtains a maximum likelihood state series for reaching the goal state from the present state, and calculates an action series of actions performed when state transitions providing the maximum likelihood state series occur as an action plan.
In
That is, at time t=3, the action plan PL3 different from the action plan PL1 at time t=1 is calculated even though the present state is the same state S28 as at time t=1 and the goal state is the same state S30 as at time t=1.
This is because the inhibitor is updated so as to inhibit state transition between the state S28 and the state S23 at time t=2, as described above, and in obtaining the maximum likelihood state series at time t=3, the selection of the state S23 as the transition destination of a state transition from the state S28 as the present state is inhibited and thereby the state S27, which is a state to which a state transition can be made from the state S28, other than the state S23, is selected.
After calculating the action plan PL3, the action determining section 24 sets an action of moving from the first state S28 to the next state S27 of the action plan PL3 as a determined action. The agent performs the determined action.
As a result, the agent moves in a downward direction (performs the action U3 in
[Correction of State Transition Probability Using Inhibitor]
As shown in
The action determining section 24 then calculates an action plan using the corrected transition probability Astm as the state transition probability of the extended HMM.
In this case, in calculating an action plan, the state transition probability used for the calculation is corrected by the inhibitor for the following reasons.
The states of the extended HMM after learning may include a state of branch structure, which state allows state transitions to different states when one action is performed.
For example, in the case of the state S29 in
Thus, different state transitions can occur from the state S29 when one certain action is performed, and the state S29 is a state of branch structure.
When different state transitions may occur with respect to one certain action, that is, for example when a state transition to a certain state may occur or a state transition to another state may also occur at the time of one certain action being performed, the inhibitor inhibits the occurrence of a state transition other than one state transition of the different state transitions that can occur so that only the one state transition occurs.
That is, supposing that different state transitions that can occur with respect to one certain action are referred to as a branch structure, when the extended HMM is learned using an observed value series and an action series obtained from the action environment that changes in structure as learning data, the extended HMM obtains a change in structure of the action environment as a branch structure. As a result, a state of branch structure occurs.
Thus, due to the occurrence of states of branch structure, even when the structure of the action environment changes to various structures, the extended HMM obtains all of the various structures of the action environment.
The various structures of the action environment that changes in structure, the various structures being obtained by the extended HMM, are information to be stored for a long period of time without being forgotten. Therefore the extended HMM that has obtained such information (state transition probability, in particular, of the extended HMM) will be referred to also as a long term memory.
When the present state is a state of branch structure, the present structure of the action environment that changes in structure determines which of different state transitions as a branch structure is possible as a state transition from the present state.
That is, even when a state transition is possible from the state transition probability of the extended HMM as a long term memory, the state transition may not be able to be made depending on the present structure of the action environment that changes in structure.
Accordingly, the agent updates the inhibitor on the basis of the present state obtained by recognition of present conditions of the agent independently of the long term memory. Then, the agent obtains the corrected transition probability as state transition probability after correction, the corrected transition probability inhibiting a state transition that cannot be made in the present structure of the action environment and enabling a state transition that can be made in the present structure of the action environment, by correcting the state transition probability of the extended HMM as a long term memory using the inhibitor. The agent calculates an action plan using the corrected transition probability.
The corrected transition probability is information obtained at each time by correcting the state transition probability as a long term memory using the inhibitor updated on the basis of the present state at each time, and is information that it suffices to store for a short period of time. The corrected transition probability will therefore be referred to also as a short term memory.
The action determining section 24 (
When all the state transition probabilities Altm of the extended HMM are expressed by a three-dimensional table as shown in
The three-dimensional table expressing the state transition probabilities Altm of the extended HMM will be referred to also as a state transition probability table. The three-dimensional table expressing the inhibitor Ainhibit will be referred to also as an inhibitor table.
When the number of states of the extended HMM is N, and the number of actions that can be performed by the agent is M, the state transition probability table is a three-dimensional table of N×N×M elements, that is, N elements wide, N elements high, and M elements deep. Thus, in this case, the inhibitor table is also a three-dimensional table of N×N×M elements.
Incidentally, in addition to the inhibitor Ainhibit, the corrected transition probability Astm is also expressed by a three-dimensional table of N×N×M elements. The three-dimensional table expressing the corrected transition probability Astm will be referred to also as a corrected transition probability table.
For example, supposing that a position in an ith row from the top, a jth column from the left, and an mth plane in a direction of depth from the front side in the state transition probability table is expressed as (i, j, m), the action determining section 24 obtains a corrected transition probability Astm as an element in the position (i, j, m) of the corrected transition probability table by multiplying a state transition probability Altm (=aij(Um)) as an element in the position (i, j, m) of the state transition probability table by an inhibitor Ainhibit as an element in the position (i, j, m) of the inhibitor table according to equation (15).
Astm=Altm×Ainhibit (15)
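In array form, equation (15) is simply an element-wise product of the two three-dimensional tables; assuming the N×N×M arrays A_ltm and A_inhibit used in the sketches above, the correction can be sketched as follows:

    def correct_transition_probability(A_ltm, A_inhibit):
        # Equation (15) applied at every position (i, j, m) of the tables at once:
        # element-wise product of the state transition probability table (long term
        # memory) and the inhibitor table; the result is the corrected transition
        # probability table (short term memory) A_stm.
        return A_ltm * A_inhibit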
Incidentally, the inhibitor is updated in the state recognizing section 23 (
The state recognizing section 23 updates the inhibitor so as to inhibit state transition between a state Si immediately preceding a present state Sj and a state other than the present state Sj and so as not to inhibit (so as to enable) state transition between the immediately preceding state Si and the present state Sj with respect to an action Um performed by the agent at the time of a state transition from the immediately preceding state Si to the present state Sj.
Specifically, supposing that a plane obtained by cutting the inhibitor table with a plane perpendicular to the action axis at a position m of the action axis is referred to as an inhibitor plane with respect to the action Um, the state recognizing section 23 overwrites, with 1.0, an inhibitor as an element in a position (i, j) in an ith row from the top and a jth column from the left among N×N inhibitors, that is, N horizontal inhibitors and N vertical inhibitors of the inhibitor plane with respect to the action Um, and overwrites, with 0.0, inhibitors as elements in positions other than the position (i, j) among the N inhibitors in the ith row from the top.
As a result, according to the corrected transition probability obtained by correcting the state transition probability using the inhibitor, only the most recent experience, that is, the state transition made most recently among the state transitions (branch structure) from a state of branch structure, can be made, and the other state transitions cannot be made.
The extended HMM expresses the structure of the action environment experienced (obtained by learning) by the agent up to the present. Further, when the structure of the action environment changes to various structures, the extended HMM expresses the various structures of the action environment as branch structures.
On the other hand, the inhibitor expresses which of a plurality of state transitions as a branch structure possessed by the extended HMM as a long term memory models the present structure of the action environment.
Thus, by correcting the state transition probability by multiplying the state transition probability of the extended HMM as a long term memory by the inhibitor, and calculating an action plan using the corrected transition probability (short term memory) as state transition probability after the correction, even when the structure of the action environment is changed, an action plan can be obtained in consideration of the structure after being changed (present structure) without the structure after being changed being relearned by the extended HMM.
That is, when the structure of the action environment after being changed in structure is a structure already obtained by the extended HMM, by updating the inhibitor on the basis of the present state, and correcting the state transition probability of the extended HMM using the inhibitor after being updated, an action plan can be obtained in consideration of the structure after being changed of the action environment without the extended HMM being relearned.
That is, an action plan adapted to a change in structure of the action environment can be obtained quickly and efficiently with calculation cost reduced.
Incidentally, when the action environment is changed to a structure not obtained by the extended HMM, to determine an appropriate action in the action environment of the structure after being changed needs relearning of the extended HMM using an observed value series and an action series observed in the action environment after being changed.
In a case where the action determining section 24 calculates an action plan using the state transition probability of the extended HMM as it is, an action series to be performed when state transitions of a maximum likelihood state series from a present state st to a goal state Sgoal occur is calculated as an action plan, according to the Viterbi algorithm, on the assumption that all of the plurality of state transitions as a branch structure can be made, even when the present structure of the action environment allows only one of the plurality of state transitions as the branch structure and does not allow the other state transitions.
On the other hand, in a case where the action determining section 24 corrects the state transition probability of the extended HMM by the inhibitor, and calculates an action plan using the corrected transition probability as state transition probability after the correction, it is assumed that a state transition inhibited by the inhibitor cannot be made, and an action series to be performed when state transitions of a maximum likelihood state series from the present state st to the goal state Sgoal occur without the state transition inhibited by the inhibitor can be calculated as an action plan.
Specifically, for example, in
In
As a result, at time t=3 in
The inhibitor is updated so as to enable a state transition experienced by the agent among a plurality of state transitions as a branch structure and inhibit the other state transitions than the state transition experienced by the agent.
That is, the inhibitor is updated so as to inhibit state transition between the state immediately preceding the present state and the state other than the present state (state transition from the immediately preceding state to the state other than the present state) and enable state transition between the immediately preceding state and the present state (state transition from the immediately preceding state to the present state) with respect to the action performed by the agent at the time of the state transition from the immediately preceding state to the present state.
When only the enabling of a state transition experienced by the agent among a plurality of state transitions as a branch structure and the inhibition of the other state transitions than the state transition experienced by the agent is performed as the updating of the inhibitor, the state transitions inhibited by updating the inhibitor remain inhibited unless the agent thereafter experiences the state transitions.
When an action to be performed next by the agent is determined according to an action plan calculated using the corrected transition probability obtained by correcting the state transition probability of the extended HMM by the inhibitor in the action determining section 24 as described above, no action plans including actions causing the state transitions inhibited by the inhibitor are calculated. The state transitions inhibited by the inhibitor therefore remain inhibited unless the action to be performed next is determined by a method other than the method of determining the action to be performed next according to an action plan or unless the agent experiences the state transitions inhibited by the inhibitor by chance.
Thus, even when the structure of the action environment is changed from a structure in which a state transition inhibited by the inhibitor cannot be made to a structure in which the state transition can be made, action plans including an action causing the state transition cannot be calculated until the agent fortunately experiences the state transition inhibited by the inhibitor.
Accordingly, the state recognizing section 23 not only enables a state transition experienced by the agent among a plurality of state transitions as a branch structure and inhibits the other state transitions than the state transition experienced by the agent as the updating of the inhibitor but also relaxes the inhibition of the state transitions according to the passage of time.
That is, the state recognizing section 23 updates the inhibitor so as to enable a state transition experienced by the agent among a plurality of state transitions as a branch structure and inhibit the other state transitions than the state transition experienced by the agent, and further updates the inhibitor so as to relax the inhibition of the state transitions according to the passage of time.
Specifically, the state recognizing section 23 updates the inhibitor Ainhibit(t) at time t to the inhibitor Ainhibit(t+1) at time t+1 according to equation (16), for example, so that the inhibitor converges to 1.0 according to the passage of time.
Ainhibit(t+1)=Ainhibit(t)+c(1−Ainhibit(t)) (0≦c≦1) (16)
In equation (16), the coefficient c is higher than 0.0 and lower than 1.0. The higher the coefficient c, the more quickly the inhibitor converges to 1.0.
According to equation (16), the inhibition of a state transition once inhibited (state transition whose inhibitor is set at 0.0) is relaxed with the passage of time, so that an action plan including an action causing the state transition is calculated even when the agent has not experienced the state transition.
An inhibitor update performed so as to relax the inhibition of a state transition according to the passage of time will hereinafter be referred to also as an update corresponding to forgetting due to natural decay.
[Inhibitor Update]
Incidentally, the inhibitor is initialized to 1.0 as an initial value when time t is initialized to 1 in step S31 of the process in the recognition action mode in
In step S71 in the process of updating the inhibitor, the state recognizing section 23 updates all inhibitors Ainhibit stored in the model storing section 22 as an update corresponding to forgetting due to natural decay, that is, an update according to equation (16). The process then proceeds to step S72.
In step S72, the state recognizing section 23 determines whether a state Si immediately preceding a present state Sj is a state of branch structure and whether the present state Sj is one of different states to which state transition can be made from the state of branch structure as the immediately preceding state Si by performing a same action on the basis of the extended HMM (state transition probability of the extended HMM) stored in the model storing section 22.
In this case, whether the immediately preceding state Si is a state of branch structure can be determined as in the case of the branch structure detecting section 36 (
When it is determined in step S72 that the immediately preceding state Si is not a state of branch structure or it is determined that the immediately preceding state Si is a state of branch structure but that the present state Sj is not one of different states to which state transition can be made from the state of branch structure as the immediately preceding state Si by performing a same action, the process skips steps S73 and S74, and makes a return.
When it is determined in step S72 that the immediately preceding state Si is a state of branch structure and that the present state Sj is one of different states to which state transition can be made from the state of branch structure as the immediately preceding state Si by performing a same action, the process proceeds to step S73, where the state recognizing section 23 updates, to 1.0, the inhibitor hij(Um) of the state transition from the immediately preceding state Si to the present state Sj (inhibitor in a position (i, j, m) of the inhibitor table) with respect to the immediately preceding action Um among the inhibitors Ainhibit stored in the model storing section 22. The process then proceeds to step S74.
In step S74, the state recognizing section 23 updates, to 0.0, the inhibitor hij′(Um) of a state transition from the immediately preceding state Si to a state Sj′ other than the present state Sj (inhibitor in a position (i, j′, m) of the inhibitor table) with respect to the immediately preceding action Um among the inhibitors Ainhibit stored in the model storing section 22. The process then makes a return.
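Putting steps S71 through S74 together, the inhibitor update might be sketched as follows (the forgetting coefficient c and the simple thresholding of the state transition probability used as the branch-structure test are assumptions made only for the sketch; the actual test is that of the branch structure detecting section 36):

    def update_inhibitor(A_inhibit, A_ltm, prev_state, present_state, action, c=0.1):
        # Step S71: forgetting due to natural decay, equation (16), applied to all inhibitors.
        A_inhibit += c * (1.0 - A_inhibit)
        # Step S72: does the action performed allow transitions from the immediately
        # preceding state to two or more states, one of which is the present state?
        reachable = A_ltm[prev_state, :, action] > 0.0
        if reachable.sum() >= 2 and reachable[present_state]:
            # Step S74: inhibit transitions from the preceding state to states other
            # than the present state with respect to the action performed ...
            A_inhibit[prev_state, :, action] = 0.0
            # Step S73: ... and enable the transition actually experienced.
            A_inhibit[prev_state, present_state, action] = 1.0
        return A_inhibit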
In the existing action determining method, a state transition probability model such as an HMM or the like is learned assuming the modeling of a static structure. Therefore, when the structure of the learning object is changed after the state transition probability model is learned, the state transition probability model needs to be relearned with the structure after being changed as an object. Thus a high calculation cost is required to deal with changes in structure of the learning object.
On the other hand, in the agent of
Thus, when the structure of the action environment changes, an action plan adapted to (following) the changing structure can be calculated at low calculation cost (without the extended HMM being relearned).
In addition, the inhibitor is updated so as to relax the inhibition of a state transition according to the passage of time. Thus, an action plan including an action causing the state transition inhibited in the past can be calculated with the passage of time even when the agent does not experience the state transition inhibited in the past by chance. As a result, when the structure of the action environment is changed to a structure different from a structure when the state transition was inhibited in the past, an action plan appropriate for the structure after being changed can be calculated quickly.
[Detection of Open End]
An open end is, broadly, a state serving as a transition source from which a state transition not yet experienced by the agent is known in advance to be possible in the extended HMM.
Specifically, consider a state in which a certain action has not yet been performed, so that no state transition probability is assigned to the state transitions made from the state by that action (the state transition probability is 0.0 or a value assumed to be 0.0), even though a comparison with another state in which the same observed value is observed (with an observation probability that is not 0.0 or a value assumed to be 0.0) shows that a state transition to a next state can be made when the action is performed. Such a state corresponds to an open end.
Hence, when another state is detected in which the same observed value as a predetermined observed value is observed but in which one of the state transitions that can be made, in the extended HMM, from a state in which the predetermined observed value is observed as a transition source has not yet been performed, the other state is an open end.
As shown in
An open end detected indicates beyond which part of the structure obtained by the extended HMM an area unknown to the agent extends. Thus, by calculating an action plan with an open end as a goal state, the agent aggressively performs an action of stepping into an unknown area. As a result, the agent can more widely learn the structure of the action environment (obtain an observed value series and an action series serving as learning data for learning the structure of the action environment), and efficiently gain experience necessary to supplement an obscure part whose structure is not obtained in the extended HMM (structure near an observation unit corresponding to a state as an open end in the action environment).
The open end detecting section 37 first generates an action template in order to detect an open end.
In generating the action template, the open end detecting section 37 subjects the observation probability B={bi(Ok)} of the extended HMM to threshold processing, and makes a list for each observed value Ok of states Si in which the observed value Ok is observed with a probability equal to or higher than a threshold value.
Specifically,
The open end detecting section 37 sets the threshold value at 0.5, for example, and performs threshold processing that detects observation probabilities B equal to or higher than the threshold value.
In this case, in
Thereafter, for each of the observed values O1, O2, and O3, the open end detecting section 37 lists and detects states Si in which the observed value Ok is observed with a probability equal to or higher than the threshold value.
For the observed value O1, the state S5 is listed as a state in which the observed value O1 is observed with a probability equal to or higher than the threshold value. For the observed value O2, the states S2 and S4 are listed as states in which the observed value O2 is observed with a probability equal to or higher than the threshold value. For the observed value O3, the states S1 and S3 are listed as states in which the observed value O3 is observed with a probability equal to or higher than the threshold value.
Thereafter, the open end detecting section 37 calculates, for each action Um with respect to each observed value Ok, a transition probability corresponding value using the state transition probability A={aij(Um)} of the extended HMM, the transition probability corresponding value being a value corresponding to the maximum state transition probability aij(Um) among the state transitions from the states Si listed for the observed value Ok. The open end detecting section 37 sets the transition probability corresponding value calculated for each action Um with respect to each observed value Ok as an action probability of the action Um being performed when the observed value Ok is observed, and generates an action template C, which is a matrix having the action probability as an element.
The open end detecting section 37 detects a maximum state transition probability from state transition probabilities of state transitions from a state Si listed for an observed value Ok, the state transition probabilities being arranged in a column (lateral) direction (j-axis direction), in the three-dimensional state transition probability table.
Specifically, for example, attention will now be directed to the observed value O2, and suppose that the states S2 and S4 are listed for the observed value O2.
In this case, the open end detecting section 37 directs attention to an action plane with respect to the state S2, the action plane with respect to the state S2 being obtained by cutting the three-dimensional state transition probability table by a plane perpendicular to the i-axis at a position i=2 on the i-axis, and detects a maximum value of state transition probabilities a2,j(U1) of state transitions from the state S2 which state transitions occur when the action U1 is performed in the action plane with respect to the state S2.
That is, the open end detecting section 37 detects a maximum value of the state transition probabilities a2,1(U1), a2,2(U1), . . . , a2,N(U1) arranged in the j-axis direction at a position m=1 on the action axis in the action plane with respect to the state S2.
The open end detecting section 37 similarly detects maximum values of the state transition probabilities of state transitions from the state S2 which state transitions occur when the other actions Um are performed from the action plane with respect to the state S2.
Further, with respect to the state S4 as another state listed for the observed value O2, the open end detecting section 37 similarly detects a maximum value of the state transition probabilities of state transitions from the state S4 which state transitions occur when each action Um is performed from the action plane with respect to the state S4.
As described above, the open end detecting section 37 detects a maximum value of the state transition probabilities of the state transitions that occur when each action Um is performed with respect to each of the states S2 and S4 listed for the observed value O2.
Thereafter, the open end detecting section 37 averages the maximum values of the state transition probabilities which maximum values are detected as described above with respect to the states S2 and S4 listed for the observed value O2 for each action Um, and sets the average value obtained by the averaging as a transition probability corresponding value corresponding to a maximum value of state transition probability with respect to the observed value O2.
The transition probability corresponding value with respect to the observed value O2 is obtained for each action Um. The transition probability corresponding value for each action Um which transition probability corresponding value is obtained with respect to the observed value O2 indicates a probability of the action Um being performed (action probability) when the observed value O2 is observed.
The open end detecting section 37 similarly obtains a transition probability corresponding value as an action probability for each action Um with respect to the other observed values Ok.
The open end detecting section 37 then generates a matrix having an action probability of the action Um being performed when the observed value Ok is observed as an element in a kth row from the top and an mth column from the left as an action template C.
The action template C is therefore a matrix of K rows and M columns in which matrix the number of rows is equal to the number of observed values Ok and the number of columns is equal to the number of actions Um.
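A sketch of the generation of the action template C under the illustrative array conventions used so far (the threshold value 0.5 is the one mentioned above; everything else is an assumption of the sketch):

    import numpy as np

    def make_action_template(A, B, threshold=0.5):
        # C[k, m]: average, over the states Si listed for the observed value Ok,
        # of the maximum state transition probability aij(Um) over the transition
        # destinations j (the transition probability corresponding value).
        N, K = B.shape
        M = A.shape[2]
        C = np.zeros((K, M))
        for k in range(K):
            listed = np.where(B[:, k] >= threshold)[0]   # states in which Ok is observed
            if len(listed) == 0:
                continue
            C[k] = A[listed, :, :].max(axis=1).mean(axis=0)
        return C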
After generating the action template C, the open end detecting section 37 calculates an action probability D based on observation probability using the action template C.
Supposing that a matrix having the observation probability bi(Ok) of an observed value Ok being observed in a state Si as an element in an ith row and a kth column is referred to as an observation probability matrix B, the observation probability matrix B is a matrix of N rows and K columns in which matrix the number of rows is equal to the number N of states Si and the number of columns is equal to the number K of observed values Ok.
The open end detecting section 37 calculates the action probability D based on the observation probability, the action probability D being a matrix having a probability of an action Um being performed in a state Si in which an observed value Ok is observed as an element in an ith row and an mth column, by multiplying the observation probability matrix B of N rows and K columns by the action template C, which is a matrix of K rows and M columns, according to equation (17).
D=BC (17)
In addition to calculating the action probability D based on the observation probability as described above, the open end detecting section 37 calculates an action probability E based on state transition probability.
The open end detecting section 37 calculates the action probability E based on the state transition probability, the action probability E being a matrix having a probability of an action Um being performed in a state Si as an element in an ith row and an mth column, by adding together, in the j-axis direction, the state transition probabilities aij(Um) for each action Um with respect to each state Si in a three-dimensional state transition probability table A made of an i-axis, a j-axis, and an action axis.
That is, the open end detecting section 37 calculates the action probability E based on the state transition probability as a matrix of N rows and M columns. Directing attention to a position i on the i-axis and a position m on the action axis, the open end detecting section 37 obtains a sum total of the state transition probabilities aij(Um) arranged in the horizontal direction (column direction), that is, arranged in a straight line parallel to the j-axis passing through the point (i, m), in the state transition probability table A made of the i-axis, the j-axis, and the action axis, and sets the sum total as the element in the ith row and mth column of the matrix.
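Under the same assumptions as in the preceding sketch, the action probability D based on observation probability of equation (17) and the action probability E based on state transition probability can be written, for illustration only, as follows.

import numpy as np

def action_probabilities(A, B, C):
    # D = B C (equation (17)): an N x M matrix whose element (i, m) is the probability
    # of the action U_m being performed when the observed values of state S_i are observed.
    D = B @ C
    # E: an N x M matrix whose element (i, m) is the sum over transition destinations j
    # of a_ij(U_m), i.e. the probability of the action U_m being performed in state S_i.
    E = A.sum(axis=1)
    return D, E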
After calculating the action probability D based on the observation probability and the action probability E based on the state transition probability as described above, the open end detecting section 37 calculates a differential action probability F, which is a difference between the action probability D based on the observation probability and the action probability E based on the state transition probability according to equation (18).
F=D−E (18)
The differential action probability F is a matrix of N rows and M columns as with the action probability D based on the observation probability and the action probability E based on the state transition probability.
According to the differential action probability F, when there are a plurality of states in which an observed value Ok is observed, and it is known that an action Um can be performed from a part of the plurality of states (a state in which the agent has performed the action Um), it is possible to detect another state in which the state transition that occurs when the action Um is performed is not reflected in the state transition probability aij(Um) (a state in which the agent has not performed the action Um), that is, an open end.
Specifically, when a state transition that occurs when an action Um is performed is reflected in the state transition probability aij(Um) of a state Si, an element in the ith row and the mth column of the action probability D based on the observation probability and an element in the ith row and the mth column of the action probability E based on the state transition probability assume similar values to each other.
On the other hand, when a state transition that occurs when the action Um is performed is not reflected in the state transition probability aij(Um) of the state Si, the element in the ith row and the mth column of the action probability D based on the observation probability is a certain value that cannot be assumed to be 0.0 due to an effect of the state transition probability of a state in which the same observed value as in the state Si is observed and the action Um has been performed, whereas the element in the ith row and the mth column of the action probability E based on the state transition probability is 0.0 (including a small value that can be assumed to be 0.0).
Hence, when a state transition that occurs when the action Um is performed is not reflected in the state transition probability aij(Um) of the state Si, an element in the ith row and the mth column of the differential action probability F has a value (absolute value) that cannot be assumed to be 0.0. Therefore an open end and an action that has not been performed in the open end can be detected by detecting an element having a value that cannot be assumed to be 0.0 in the differential action probability F.
That is, when the element in the ith row and the mth column of the differential action probability F has a value that cannot be assumed to be 0.0, the open end detecting section 37 detects the state Si as an open end, and detects the action Um as an action that has not been performed in the state Si as an open end.
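Continuing the same illustrative sketch, the differential action probability F of equation (18) and the threshold processing that detects an open end and an unexperienced action may be expressed as follows; the threshold f_th is a placeholder introduced here.

import numpy as np

def detect_open_ends(D, E, f_th=0.3):
    # F = D - E (equation (18)); an element that is clearly not 0.0 marks an open end.
    F = D - E
    rows, cols = np.where(F >= f_th)
    # Each pair (i, m) means: state S_i is an open end, and action U_m is an
    # unexperienced action that has not been performed in S_i.
    return list(zip(rows.tolist(), cols.tolist()))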
In step S81, the open end detecting section 37 subjects the observation probability B={bi(Ok)} of the extended HMM stored in the model storing section 22 to threshold processing, and thereby lists, for each observed value Ok, the states Si in which the observed value Ok is observed. The process then proceeds to step S82.
In step S82, the open end detecting section 37 generates an action template C from the states listed for each observed value Ok, as described above. The process then proceeds to step S83.
Thereafter, the process proceeds from step S82 to step S83, where the open end detecting section 37 calculates an action probability D based on observation probability by multiplying an observation probability matrix B by the action template C according to equation (17). The process then proceeds to step S84.
In step S84, the open end detecting section 37 calculates an action probability E based on state transition probability by adding together the state transition probabilities aij(Um) in the j-axis direction for each action Um with respect to each state Si, as described above.
The process then proceeds from step S84 to step S85, where the open end detecting section 37 calculates a differential action probability F, which is a difference between the action probability D based on the observation probability and the action probability E based on the state transition probability according to equation (18). The process then proceeds to step S86.
In step S86, the open end detecting section 37 subjects the differential action probability F to threshold processing, and thereby detects an element whose value is equal to or higher than a threshold value in the differential action probability F as a detection object element as an object of detection.
Further, the open end detecting section 37 detects the row i and the column m of the detection object element, detects a state Si as an open end, and detects an action Um as an unexperienced action that has not been performed in the open end Si. The process then makes a return.
By performing the unexperienced action in the open end, the agent can explore an unknown area continuing beyond the open end.
The existing action determining method determines the goal of the agent with a known area (learned area) and an unknown area (area not learned yet) treated equally (without distinction) without considering the experience of the agent. Thus, many actions need to be performed to accumulate experience of an unknown area. As a result, many trials and much time are required to learn the structure of the action environment widely.
On the other hand, the agent of the present embodiment determines its goal with the experience of the agent taken into consideration, by detecting an open end and setting the open end as a goal state.
That is, an open end is a state beyond which an unknown area not experienced by the agent extends. Therefore, by detecting an open end and determining an action with the open end as a goal state, the agent can aggressively step into an unknown area. Thereby, the agent can efficiently accumulate experience to learn the structure of the action environment more widely.
[Detection of State of Branch Structure]
The extended HMM obtains a part changing in structure in the action environment as a state of branch structure. A state of branch structure corresponding to a change in structure already experienced by the agent can be detected by referring to the state transition probability of the extended HMM as a long term memory. When a state of branch structure is detected, the agent can recognize the presence of a part changing in structure in the action environment.
When a part changing in structure is present in the action environment, it is desirable to actively check the present structure of such a part periodically or irregularly, and reflect the present structure in the inhibitor, or in turn the corrected transition probability as a short term memory.
Accordingly, the agent of the present embodiment detects a state of branch structure from the extended HMM, and sets the state of branch structure as a goal state so as to check the present structure of the part corresponding to the state of branch structure.
The branch structure detecting section 36 detects a state of branch structure from the state transition probability of the extended HMM as follows.
A state transition probability plane with respect to each action Um in the state transition probability table A is normalized such that a sum total in a horizontal direction (column direction) of each row is 1.0.
Hence, when attention is directed to a certain row i in a state transition probability plane with respect to an action Um, and the state Si is not a state of branch structure, a maximum value of state transition probabilities aij(Um) in the ith row is 1.0 or a value very close to 1.0.
On the other hand, when the state Si is a state of branch structure, the maximum value of the state transition probabilities aij(Um) in the ith row is sufficiently smaller than 1.0, such as 0.6 or 0.5, for example.
Accordingly, when a maximum value of state transition probabilities aij(Um) in a row i in the state transition probability plane with respect to an action Um is smaller than a threshold value amax_th, the branch structure detecting section 36 detects the state Si as a state of branch structure. This detection is expressed by equation (19).
In equation (19), Aijm denotes a state transition probability aij(Um) in an ith position from the top in an i-axis direction, a jth position from the left in a j-axis direction, and an mth position from the front in an action axis direction in the three-dimensional state transition probability table A.
In addition, in equation (19), max(Aijm) denotes a maximum value of the N state transition probabilities A1,S,U to AN,S,U (a1,S(U) to aN,S(U)) in an Sth position from the left in the j-axis direction (a state as a transition destination of state transition from the state Si is a state S) and a Uth position from the front in the action axis direction (an action performed when state transition from the state Si occurs is an action U) in the state transition probability table A.
Incidentally, the threshold value amax_th is set in advance to a value smaller than 1.0.
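As a non-limiting illustration of the detection of a state of branch structure described above, the following sketch can be considered; the array A follows the assumptions of the earlier sketches, and the concrete value given to a_max_th is only a placeholder for the threshold amax_th of equation (19).

import numpy as np

def detect_branch_states(A, a_max_th=0.7):
    # A[i, j, m] = a_ij(U_m).  For a state S_i that is used, the row a_i1(U_m)..a_iN(U_m)
    # sums to 1.0; if its maximum is clearly smaller than 1.0, state transitions from S_i
    # caused by the action U_m branch to more than one state.
    row_max = A.max(axis=1)                    # shape (N, M): max_j a_ij(U_m)
    row_sum = A.sum(axis=1)                    # close to 1.0 for rows that are actually used
    branch = (row_sum > 0.5) & (row_max < a_max_th)
    branch_i, _ = np.where(branch)
    return sorted(set(branch_i.tolist()))      # states S_i of branch structure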
When detecting one or more states of branch structure, the branch structure detecting section 36 supplies the one or more states of branch structure to the goal selecting section 31.
Referring to the elapsed time managing table in the elapsed time managing table storing section 32, the goal selecting section 31 recognizes the elapsed time of the one or more states of branch structure from the branch structure detecting section 36.
Then, the goal selecting section 31 detects a state whose elapsed time is longest from the one or more states of branch structure from the branch structure detecting section 36, and selects the state as a goal state.
By detecting a state whose elapsed time is longest from the one or more states of branch structure and selecting the state as a goal state as described above, it is possible to set each of the one or more states of branch structure as a goal state in a temporally balanced manner, as it were, and to perform actions to check the present structure of the part corresponding to each state of branch structure.
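For illustration only, the selection of the goal state from the detected states of branch structure by longest elapsed time may be sketched as follows; the mapping elapsed_time, which stands in here for the elapsed time managing table, is an assumption.

def select_goal_state(branch_states, elapsed_time):
    # branch_states: state indices supplied by the branch structure detecting section
    # elapsed_time:  mapping from state index to elapsed time (a stand-in for the
    #                elapsed time managing table)
    return max(branch_states, key=lambda s: elapsed_time[s])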
In the existing action determining method, a goal is determined without attention being directed to states of branch structure, and therefore a state that is not a state of branch structure is often set as a goal. Thus, unnecessary actions are often performed when the latest structure of the action environment is to be grasped.
On the other hand, the agent of the present embodiment sets a state of branch structure as a goal state, and can thus grasp the latest structure of the action environment without performing such unnecessary actions.
Incidentally, when a state of branch structure is set as a goal state, the agent after reaching the state of branch structure (an observation unit corresponding to the state of branch structure) as the goal state can identify an action by which a state transition can be made from the state of branch structure to a different state on the basis of the extended HMM, and move by performing the action. The agent can thereby recognize (grasp) the structure of a part corresponding to the state of branch structure, that is, the state to which a state transition can be made from the state of branch structure at present.
[Simulation]
Specifically, the simulation used an action environment that can take a first structure and a second structure.
In the action environment of the first structure, positions pos1, pos2, and pos3 are a passage through which the agent can pass, whereas in the action environment of the second structure, the positions pos1 to pos3 are a wall through which the agent cannot pass.
Incidentally, each of the positions pos1 to pos3 can be made to be a passage or a wall individually.
In the simulation, the agent was first made to perform actions in the reflex action mode, and the extended HMM was learned using the observed value series and the action series obtained as a result.
Two states between which state transitions are possible represent that the agent can move between two observation units corresponding to the two respective states. Thus, arrows indicating state transitions of the extended HMM represent passages through which the agent can move in the action environment.
Incidentally, in the simulation, an initial setting was made to set the inhibitors corresponding to the state transitions represented by dotted line arrows, that is, the state transitions possible only in the action environment of the second structure, to 0.0.
At time t=t0, the structure of the action environment is the first structure.
Further, at time t=t0, the goal state (observation unit corresponding to the goal state) is a lower left state S37, and the agent is located in a state S20 (observation unit corresponding to the state S20).
Then, the agent calculates an action plan for going to the state S37 as the goal state, and is moving in the left direction from the state S20 as present state as an action determined according to the action plan.
At time t=t1, the structure of the action environment is changed from the first structure to a structure in which the position pos1 is a passage through which the agent can pass but the positions pos2 and pos3 are a wall through which the agent cannot pass.
Further, at time t=t1, the goal state is the lower left state S37 as at time t=t0, and the agent is located in a state S31.
At time t=t2, the structure of the action environment is the structure in which the position pos1 is a passage through which the agent can pass but the positions pos2 and pos3 are a wall through which the agent cannot pass (which structure will hereinafter be referred to also as the structure after the change).
Further, at time t=t2, the goal state is a state S3 on the upper side, and the agent is located in the state S31.
Then, the agent calculates an action plan for going to the state S3 as the goal state, and is going to move in the upward direction from the state S31 as present state as an action determined according to the action plan.
In this case, at time t=t2, the action plan to effect state transitions of a state series S31, S36, S39, S35, and S3 is calculated.
Incidentally, when the action environment is of the first structure, the position pos1 as well as the positions pos2 and pos3 is a passage through which the agent can pass.
However, when the action environment is changed to the structure after the change, the positions pos2 and pos3 are a wall, and thus the agent cannot pass through the positions pos2 and pos3.
As described above, in the initial setting of the simulation, only inhibitors corresponding to the state transitions possible only in the action environment of the second structure are set to 0.0. At time t=t2, state transitions possible in the action environment of the first structure are not inhibited.
Thus, at time t=t2, although the position pos2 between the observation unit corresponding to the states S3 and S30 and the observation unit corresponding to the states S34 and S35 is a wall through which the agent cannot pass, the agent calculates an action plan including an action that effects a state transition from the state S35 to the state S3 to pass through the position pos2 between the observation unit corresponding to the states S3 and S30 and the observation unit corresponding to the states S34 and S35.
At time t=t3, the structure of the action environment remains the structure after the change.
Further, at time t=t3, the goal state is the state S3 on the upper side, and the agent is located in a state S28.
Then, the agent calculates an action plan for going to the state S3 as the goal state, and is going to move in the right direction from the state S28 as present state as an action determined according to the action plan.
In this case, at time t=t3, the action plan to effect state transitions of a state series S28, S23, S2, S16, S22, S29, and S3 is calculated.
The agent calculates a similar action plan to the action plan calculated at time t=t2 as far as the goal state S3 is concerned. However, because the agent could not actually pass through the position pos2, which is a wall in the structure after the change, the inhibitor corresponding to the state transition from the state S39 to the state S35 is updated.
As a result, at time t=t3, the agent calculates an action plan to effect state transitions of a state series S28, S23, S2, S16, S22, S29, and S3, which action plan does not cause the state transition from the state S39 to the state S35 in which state transition the agent can pass through the position pos2.
Incidentally, when the action environment is of the structure after the change, the position pos3 is a wall through which the agent cannot pass.
As described above, in the initial setting of the simulation, only inhibitors corresponding to the state transitions possible only in the action environment of the second structure in which the positions pos1 to pos3 are a wall through which the agent cannot pass are set to 0.0. At time t=t3, a state transition from the state S23 to the state S2 which state transition is possible in the action environment of the first structure and corresponds to passing through the position pos3 is not inhibited.
Thus, at time t=t3, the agent calculates an action plan that effects the state transition from the state S23 to the state S2 to pass through the position pos3 between the observation unit corresponding to the states S21 and S23 and the observation unit corresponding to the states S2 and S17.
At time t=t4, the structure of the action environment is the structure after the change.
Further, at time t=t4, the goal state is the state S3 on the upper side, and the agent is located in a state S21.
The agent moves from the observation unit corresponding to the state S28 to the observation unit corresponding to the states S21 and S23 by performing an action determined according to the action plan calculated at time t=t3. However, because the position pos3 is a wall in the structure after the change, the agent cannot pass through the position pos3, and the inhibitor corresponding to the state transition from the state S23 to the state S2 is updated.
As a result, at time t=t4, the agent calculates an action plan that does not include the state transition from the state S28 to the state S23 (and, in turn, does not include the agent passing through the position pos3 between the observation unit corresponding to the states S21 and S23 and the observation unit corresponding to the states S2 and S17).
In this case, at time t=t4, the action plan to effect state transitions of a state series S28, S27, S26, S25, S20, S15, S10, S1, S2, S16, S22, S29, and S3 is calculated.
At time t=t5, the structure of the action environment is the structure after the change.
Further, at time t=t5, the goal state is the state S3 on the upper side, and the agent is located in a state S28.
The agent moves from the observation unit corresponding to the state S21 to the observation unit corresponding to the state S28 by performing an action determined according to the action plan calculated at time t=t4.
At time t=t6, the structure of the action environment is the structure after the change.
Further, at time t=t6, the goal state is the state S3 on the upper side, and the agent is located in a state S15.
Then, the agent calculates an action plan for going to the state S3 as the goal state, and is going to move in the right direction from the state S15 as present state as an action determined according to the action plan.
In this case, at time t=t6, the action plan to effect state transitions of a state series S10, S1, S2, S16, S22, S29, and S3 is calculated.
As described above, even when the structure of the action environment is changed, the agent observes the structure after the change (determines (recognizes) the present state), and updates an inhibitor. Then, the agent recalculates an action plan using the inhibitor after being updated, and can finally reach the goal state.
[Example of Application of Agent]
In this example of application, the agent described above is applied to a cleaning robot 51 that cleans a room as an action environment. The cleaning robot 51 is provided with blocks corresponding to the actuator 12 and the sensor 13 of the agent.
A host computer 52 functions as the reflex action determining section 11, the history storing section 14, the action controlling section 15, and the goal determining section 16.
The host computer 52 is installed in the living room or another room, and is connected to an access point 53 for controlling radio communication by a wireless LAN (Local Area Network) or the like.
The host computer 52 exchanges necessary data with the cleaning robot 51 by performing radio communication with the cleaning robot 51 via the access point 53. The cleaning robot 51 thereby moves and performs actions similar to those of the agent described above.
Incidentally, in the example described above, the cleaning robot 51 is provided with only the blocks corresponding to the actuator 12 and the sensor 13, and the host computer 52 is provided with the other blocks forming the agent. However, the division of the blocks forming the agent between the cleaning robot 51 and the host computer 52 is not limited to this example.
Specifically, for example, the cleaning robot 51 can be provided with blocks corresponding to not only the actuator 12 and the sensor 13 but also the reflex action determining section 11 not required to have a highly advanced calculating function, and the host computer 52 can be provided with blocks corresponding to the history storing section 14, the action controlling section 15, and the goal determining section 16 that need an advanced calculating function and a high storage capacity.
According to the extended HMM, in the action environment where a same observed value is observed in observation units at different positions, the present conditions of the agent can be recognized using an observed value series and an action series, and a present state, or in turn an observation unit (place) where the agent is located, can be identified uniquely.
The agent of the present embodiment learns such an extended HMM, and determines and performs actions on the basis of the learned extended HMM while adapting to changes in the structure of the action environment.
Such an agent is for example applicable to a practical robot such as a cleaning robot or the like operating in for example a living environment that is dynamically changed in structure due to life activities of humans and in which environment humans live.
For example, a living environment such as a room or the like may be changed in structure by opening or closing of a door of the room, a change in arrangement of furniture in the room, or the like.
However, because the shape of the room itself does not change, a part that changes in structure and a part that does not change in structure coexist in the living environment.
According to the extended HMM, a part changing in structure can be stored as a state of branch structure. Therefore, a living environment including a part changing in structure can be expressed efficiently (with a low storage capacity).
On the other hand, for a cleaning robot used as an alternative device to a cleaner operated by a human in a living environment to achieve an object of cleaning the entire room, the cleaning robot needs to identify the position of the cleaning robot itself, and move while adaptively changing a path in the room whose structure changes probabilistically (room whose structure may change).
The agent of the present embodiment can identify its own position and adaptively determine a path even in such an environment, and is therefore suitable for application to such a cleaning robot.
Incidentally, it is desirable from a viewpoint of reducing the manufacturing cost of the cleaning robot to avoid incorporating a camera as an advanced sensor and an image processing device for performing image processing such as recognition of an image output by the camera, for example, as means for observing an observed value into the cleaning robot.
That is, in order to reduce the manufacturing cost of the cleaning robot, it is desirable to employ an inexpensive means such for example as a distance measuring device that measures a distance by outputting ultrasonic waves, a laser or the like in a plurality of directions as means for the cleaning robot to observe an observed value.
However, when inexpensive means such for example as a distance measuring device is employed as means for observing an observed value, a same observed value is often observed in different positions of a living environment, and it is difficult to identify the position of the cleaning robot uniquely with only an observed value at a single time.
Even in a living environment in which it is thus difficult to identify the position of the cleaning robot uniquely with only an observed value at a single time, the position can be identified uniquely using an observed value series and an action series according to the extended HMM.
[One-State One-Observed-Value Constraint]
The learning section 21 learns the extended HMM in accordance with the Baum-Welch re-estimation method, as described above.
The Baum-Welch re-estimation method is basically a method of converging a model parameter by a gradient method. The model parameter may therefore fall into a local minimum.
Whether the model parameter falls into a local minimum is determined by initial value dependence, that is, dependence of the model parameter on an initial value.
The present embodiment employs an ergodic HMM as the extended HMM. An ergodic HMM has a particularly great initial value dependence.
Accordingly, the learning section 21 can learn the extended HMM under a one-state one-observed-value constraint.
In this case, the one-state one-observed-value constraint makes (only) one observed value observed in one state of the extended HMM (an HMM including the extended HMM).
Incidentally, when the extended HMM is learned without any constraint in the action environment that changes in structure, a case where a change in structure of the action environment is represented by having a distribution in observation probability and a case where a change in structure of the action environment is represented by having a branch structure of state transitions may be mixed in the extended HMM after being learned.
A case where a change in structure of the action environment is represented by having a distribution in observation probability is a case where a plurality of observed values are observed in a certain state. A case where a change in structure of the action environment is represented by having a branch structure of state transitions is a case where state transitions to different states are caused by a same action (there is a possibility of a state transition being made from a present state to a certain state and there is also a possibility of a state transition being made to a state different from the certain state when a certain action is performed).
According to the one-state one-observed-value constraint, a change in structure of the action environment is represented only by having a branch structure of state transitions in the extended HMM.
Incidentally, when the structure of the action environment does not change, the extended HMM can be learned without the one-state one-observed-value constraint being imposed.
The one-state one-observed-value constraint can be imposed by introducing state division and, more desirably, state merge (integration) into the learning of the extended HMM.
[State Division]
In the state division, when a plurality of observed values are observed in one state in the extended HMM whose model parameters (initial state probability πi, state transition probability aij(Um), and observation probability bi(Ok)) are converged by the Baum-Welch re-estimation method, the state is divided into a plurality of states the number of which is equal to the number of the plurality of observed values so that each of the plurality of observed values is observed in one state.
For example, suppose that two observed values O7 and O13 are observed in a certain state S2. Because the two observed values O7 and O13 as a plurality of observed values are observed in the state S2, the state S2 is divided into two states so that one of the observed values O7 and O13 is observed in each of the two states after the division.
In the state division, the learning section 21 first detects a state in which a plurality of observed values are observed, as a dividing object state to be divided.
As described above, the observation probability matrix B is a matrix having the observation probability bi(Ok) of an observed value Ok being observed in a state Si as an element in an ith row and a kth column.
In learning the extended HMM (HMMs including the extended HMM), each of observation probabilities bi(O1) to bi(OK) of observed values O1 to OK being observed in a certain state Si in the observation probability matrix B is normalized such that a sum total of the observation probabilities bi(O1) to bi(OK) is 1.0.
Hence, when (only) one observed value is observed in one state Si, a maximum value of the observation probabilities bi(O1) to bi(OK) of the state Si is 1.0 (a value that can be assumed to be 1.0), and the observation probabilities other than the maximum value are 0.0 (a value that can be assumed to be 0.0).
When a plurality of observed values are observed in one state Si, on the other hand, a maximum value of the observation probabilities bi(O1) to bi(OK) of the state Si is sufficiently smaller than 1.0, such as 0.6 or 0.5, for example.
Hence, a dividing object state can be detected by searching for an observation probability Bik=bi(Ok) that is smaller than a threshold value bmax_th and larger than 1/K, according to equation (20).
In equation (20), Bik denotes an element in an ith row and a kth column of the observation probability matrix B, and is equal to an observation probability bi(Ok) of an observed value Ok being observed in a state Si.
In equation (20), argfind(1/K<Bik<bmax_th) denotes all the suffixes i and k of observation probabilities Bik satisfying the conditional expression 1/K<Bik<bmax_th.
Incidentally, the threshold value bmax_th is set in advance to a value smaller than 1.0.
The learning section 21 detects the states Si of all the suffixes i expressed in equation (20) as dividing object states.
Further, the learning section 21 detects the observed values Ok of all the suffixes k expressed in equation (20) as a plurality of observed values observed in the dividing object state (state whose suffix i is S).
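As a non-limiting illustration of the detection of dividing object states according to equation (20), the following sketch can be considered; the array B follows the assumptions of the earlier sketches, and the value given to b_max_th is only a placeholder for the threshold bmax_th.

import numpy as np

def detect_dividing_states(B, b_max_th=0.9):
    # B[i, k] = b_i(O_k).  A state in which only one observed value is observed has one
    # observation probability close to 1.0; observation probabilities lying between 1/K
    # and b_max_th indicate a state in which a plurality of observed values are observed.
    N, K = B.shape
    mask = (B > 1.0 / K) & (B < b_max_th)
    dividing = {}
    for i in range(N):
        ks = np.where(mask[i])[0]
        if ks.size > 0:
            dividing[i] = ks.tolist()   # dividing object state -> observed values observed in it
    return dividing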
The learning section 21 then divides the dividing object state into a plurality of states equal in number to the plurality of observed values observed in the dividing object state.
Supposing that the states obtained by dividing the dividing object state are referred to as states after the division, the dividing object state itself can be employed as one of the states after the division, and invalid states in the extended HMM at the time of the division can be employed as the other states after the division.
Specifically, for example, when the dividing object state is divided into three states after the division, the dividing object state can be employed as one of the three states after the division, and an invalid state in the extended HMM at the time of the division can be employed as the other two states after the division.
In addition, an invalid state in the extended HMM at the time of the division can be employed as all of the plurality of states after the division. In this case, however, the dividing object state needs to be made to be an invalid state after the state division.
For example, suppose that a dividing object state S3 in which two observed values O1 and O2 are observed is divided into two states after the division, the dividing object state S3 being employed as one of the states after the division and an invalid state S6 being employed as the other state after the division.
The learning section 21 divides the dividing object state S3 into the two states S3 and S6 after the division as follows.
The learning section 21 assigns the state S3 after the division that divided the dividing object state S3 one of the plurality of observed values O1 and O2, for example the observed value O1, and sets an observation probability of the observed value O1 assigned to the state S3 after the division being observed in the state S3 after the division to 1.0 and sets observation probabilities of the other observed values being observed to 0.0.
Further, the learning section 21 sets the state transition probabilities a3,j(Um) of state transitions having the state S3 after the division as a transition source to the state transition probabilities a3,j(Um) of state transitions having the dividing object state S3 as a transition source, and sets the state transition probabilities of the state transitions having the state S3 after the division as a transition destination to values obtained by correcting the state transition probabilities of the state transitions having the dividing object state S3 as a transition destination by the observation probability in the dividing object state S3 of the observed value assigned to the state S3 after the division.
The learning section 21 similarly sets observation probabilities and state transition probabilities with respect to the other state S6 after the division.
In this case, the observed value O1 is assigned to the state S3 after the division, and the observed value O2 is assigned to the state S6 after the division.
As a result, the observation probability of the observed value O1 being observed in the state S3 after the division is set to 1.0, and the observation probabilities of the other observed values being observed in the state S3 after the division are set to 0.0.
Further, the observation probability of the observed value O2 being observed in the state S6 after the division is set to 1.0, and the observation probabilities of the other observed values being observed in the state S6 after the division are set to 0.0.
The setting of the observation probabilities as described above is expressed by equation (21).
B(S3, :)=0.0
B(S3, O1)=1.0
B(S6, :)=0.0
B(S6, O2)=1.0 (21)
In equation (21), B(,) is a two-dimensional array, and an element B(S, O) of the array represents an observation probability of an observed value O being observed in a state S.
A colon (:) as a suffix of the array represents all elements of the dimension at the position of the colon. Hence, in equation (21), the equation B(S3, :)=0.0 indicates that the observation probabilities of the respective observed values O1 to OK being observed in the state S3 are all set to 0.0.
According to equation (21), the observation probabilities of the respective observed values O1 to OK being observed in the state S3 are all set to 0.0 (B(S3, :)=0.0), and then only the observation probability of the observed value O1 being observed is set to 1.0 (B(S3, O1)=1.0).
Further, according to equation (21), the observation probabilities of the respective observed values O1 to OK being observed in the state S6 are all set to 0.0 (B(S6, :)=0.0), and then only the observation probability of the observed value O2 being observed is set to 1.0 (B(S6, O2)=1.0).
State transitions similar to state transitions having the dividing object state S3 as a transition source should be made as state transitions having each of the states S3 and S6 after the division as a transition source.
Accordingly, the learning section 21 sets the state transition probabilities of the state transitions having each of the states S3 and S6 after the division as a transition source to the state transition probabilities of the state transitions having the dividing object state S3 as a transition source.
On the other hand, state transitions such that state transitions having the dividing object state S3 as a transition destination are divided by rates (ratios) of observation probabilities of the respective observed values O1 and O2 being observed in the dividing object state S3 should be made as state transitions having the state S3 after the division to which the observed value O1 is assigned and the state S6 after the division to which the observed value O2 is assigned as a transition destination.
Accordingly, the learning section 21 multiplies the state transition probabilities of the state transitions having the dividing object state S3 as a transition destination by the observation probability of the observed value O1 being observed in the dividing object state S3, thereby obtaining corrected values of those state transition probabilities.
The learning section 21 then sets the state transition probabilities of state transitions having the state S3 after the division to which the observed value O1 is assigned as a transition destination to the corrected values resulting from correcting the state transition probabilities by the observation probability of the observed value O1.
Further, the learning section 21 multiplies the state transition probabilities of the state transitions having the dividing object state S3 as a transition destination by the observation probability of the observed value O2 being observed in the dividing object state S3, thereby obtaining corrected values of those state transition probabilities.
The learning section 21 then sets the state transition probabilities of state transitions having the state S6 after the division to which the observed value O2 is assigned as a transition destination to the corrected values resulting from correcting the state transition probabilities by the observation probability of the observed value O2.
The setting of the state transition probabilities as described above is expressed by equation (22).
A(S3, :, :)=A(S3, :, :)
A(S6, :, :)=A(S3, :, :)
A(:, S3, :)=B(S3, O1)A(:, S3, :)
A(:, S6, :)=B(S3, O2)A(:, S3, :) (22)
In equation (22), A(, ,) is a three-dimensional array, and an element A(S, S′, U) of the array indicates a state transition probability of a state transition being made from a state S as a transition source to a state S′ when an action U is performed.
As in equation (21), a colon (:) as a suffix of the array represents all elements of the dimension at the position of the colon.
Hence, in equation (22), A(S3, :, :), for example, denotes all state transition probabilities of state transitions from a state S3 as a transition source to each state S when each action is performed. In equation (22), A(:, S3, :), for example, denotes all state transition probabilities of state transitions from each state to a state S3 as a transition destination when each action is performed.
According to equation (22), with respect to all actions, the state transition probabilities of state transitions having the state S3 after the division as a transition source are set to the state transition probabilities of state transitions having the dividing object state S3 as a transition source (A(S3, :, :)=A (S3, :, :)).
In addition, with respect to all the actions, the state transition probabilities of state transitions having the state S6 after the division as a transition source are set to the state transition probabilities of the state transitions having the dividing object state S3 as a transition source (A(S6, :, :)=A(S3, :, :)).
Further, according to equation (22), with respect to all the actions, the state transition probabilities A(:, S3, :) of state transitions having the dividing object state S3 as a transition destination are multiplied by the observation probability B(S3, O1) in the dividing object state S3 of the observed value O1 assigned to the state S3 after the division, whereby corrected values B(S3, O1)A(:, S3, :) resulting from the correction of the state transition probabilities A(:, S3, :) of the state transitions having the dividing object state S3 as a transition destination are obtained.
Then, with respect to all the actions, the state transition probabilities A(:, S3, :) of state transitions having the state S3 after the division to which the observed value O1 is assigned as a transition destination are set to the corrected values B(S3, O1)A(:, S3, :) (A(:, S3, :)=B(S3, O1)A(:, S3, :)).
In addition, according to equation (22), with respect to all the actions, the state transition probabilities A(:, S3, :) of the state transitions having the dividing object state S3 as a transition destination are multiplied by the observation probability B(S3, O2) in the dividing object state S3 of the observed value O2 assigned to the state S6 after the division, whereby corrected values B(S3, O2)A(:, S3, :) resulting from the correction of the state transition probabilities A(:, S3, :) of the state transitions having the dividing object state S3 as a transition destination are obtained.
Then, with respect to all the actions, the state transition probabilities A(:, S6, :) of state transitions having the state S6 after the division to which the observed value O2 is assigned as a transition destination are set to the corrected values B(S3, O2)A(:, S3, :) (A(: , S6, :)=B(S3, O2)A(:, S3, :)).
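The state division expressed by equations (21) and (22) may be sketched, under the same array assumptions, as follows; the function name divide_state and the argument free_states (the invalid states used as the additional states after the division) are introduced here for illustration only.

def divide_state(A, B, s, observed, free_states):
    # Divide dividing object state s so that each observed value in `observed` is
    # observed in exactly one state after the division (equations (21) and (22)).
    # State s is reused as the first state after the division; the remaining states
    # after the division are taken from `free_states`.
    targets = [s] + list(free_states[:len(observed) - 1])
    b_orig = B[s, :].copy()                 # observation probabilities of the dividing object state
    row_orig = A[s, :, :].copy()            # transitions having s as a transition source
    col_orig = A[:, s, :].copy()            # transitions having s as a transition destination
    for t in targets:
        A[t, :, :] = row_orig               # A(S, :, :) = A(s, :, :)           (equation (22))
    for t, k in zip(targets, observed):
        B[t, :] = 0.0                       # B(S, :) = 0.0
        B[t, k] = 1.0                       # B(S, O) = 1.0                     (equation (21))
        A[:, t, :] = b_orig[k] * col_orig   # A(:, S, :) = B(s, O) A(:, s, :)   (equation (22))
    return targets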
[State Merge]
In the state merge, in the extended HMM whose model parameters are converged by the Baum-Welch re-estimation method, when there are a plurality of states (different states) as states of transition destinations of state transitions having one state as a transition source at the time of a certain action being performed, and an identical observed value is observed in each of the plurality of states, the plurality of states in which the identical observed value is observed are merged into one state.
Further, in the state merge, in the extended HMM whose model parameters are converged, when there are a plurality of states as states of transition sources of state transitions having one state as a transition destination at the time of a certain action being performed, and an identical observed value is observed in each of the plurality of states, the plurality of states in which the identical observed value is observed are merged into one state.
That is, in the state merge, in the extended HMM whose model parameters are converged, when state transitions having an identical state as a transition source or a transition destination occur with respect to each action, and there are a plurality of states in which an identical observed value is observed, such a plurality of states are redundant, and are thus merged into one state.
In this case, the state merge includes a forward merge that merges a plurality of states as transition destinations when the plurality of states are present as states of the transition destinations of state transitions from one state at the time of a certain action being performed and a backward merge that merges a plurality of states as transition sources when the plurality of states are present as states of the transition sources of state transitions to one state at the time of a certain action being performed.
For example, suppose that state transitions are possible from one state S1 as a transition source to a plurality of states S2 and S3 as transition destinations.
Each of the state transitions from the state S1 to the plurality of states S2 and S3 as transition destinations, that is, the state transition from the state S1 to the state S2 as a transition destination and the state transition from the state S1 to the state S3 as a transition destination is made when an identical action is performed in the state S1.
Further, an identical observed value O5 is observed in the state S2 and the state S3.
In this case, the learning section 21 sets the states S2 and S3 in which the identical observed value O5 is observed as merging object states, and merges the merging object states into one state.
The one state obtained by merging the plurality of merging object states will hereinafter be referred to also as a representative state.
A plurality of state transitions that can occur from a certain state to states in which an identical observed value is observed when a certain action is performed appear to be branches from the one transition source state to the plurality of transition destination states. Therefore such state transitions will be referred to as branches in a forward direction.
Incidentally, in the case of the branches in the forward direction, a state as a branch source is the state S1 as the transition source, and states as branch destinations are the states S2 and S3 as the transition destinations in which the identical observed value is observed. The branch destination states S2 and S3, which are also the transition destination states, are the merging object states.
Similarly, suppose that state transitions are possible from a plurality of states S3 and S4 as transition sources to one state S5 as a transition destination.
Each of the state transitions from the plurality of states S3 and S4 as transition sources to the state S5, that is, the state transition from the state S3 as a transition source to the state S5 and the state transition from the state S4 as a transition source to the state S5 is made when an identical action is performed in the states S3 and S4.
Further, an identical observed value O7 is observed in the state S3 and the state S4.
In this case, the learning section 21 sets the states S3 and S4 in which the identical observed value O7 is observed as merging object states, and merges the merging object states into one state.
State transitions from a plurality of states in which an identical observed value is observed to a certain state as a transition destination when a certain action is performed appear to be branches from the one transition destination state to the plurality of transition source states. Therefore such state transitions will be referred to as branches in a backward direction.
Incidentally, in the case of the branches in the backward direction, a state as a branch source is the state S5 as the transition destination, and states as branch destinations are the states S3 and S4 as the transition sources in which the identical observed value is observed. The branch destination states S3 and S4, which are also the transition source states, are the merging object states.
In the state merge, the learning section 21 first detects merging object states as follows.
When there are a plurality of states as states of the extended HMM which states are transition sources or transition destinations of state transitions when a predetermined action is performed, and observed values observed in the plurality of respective states and having a maximum observation probability coincide with each other, the learning section 21 detects the plurality of states as merging object states.
In the state transition probability plane A with respect to each action Um, state transition probabilities are normalized such that with respect to each state Si, a sum total of state transition probabilities aij(Um) having the state Si as a transition source (sum total of aij(Um) taken while a suffix j is changed from 1 to N with suffixes i and m fixed) is 1.0.
Hence, a maximum value of state transition probabilities having a certain state Si as a transition source with respect to a certain action Um (state transition probabilities arranged in a certain row i in a horizontal direction in the state transition probability plane A with respect to the action Um) when there are no branches in the forward direction with the state Si as a branch source is 1.0 (a value that can be assumed to be 1.0), and the state transition probabilities other than the maximum value are 0.0 (a value that can be assumed to be 0.0).
On the other hand, a maximum value of state transition probabilities having a certain state Si as a transition source with respect to a certain action Um when there are branches in the forward direction with the state Si as a branch source is sufficiently smaller than 1.0, such as 0.5, for example.
Accordingly, a state as a branch source of branches in the forward direction can be detected by searching for a state Si such that a maximum value of state transition probabilities aij(Um) (=Aijm) in a row i in the state transition probability plane with respect to an action Um is smaller than a threshold value amax_th.
Incidentally, in this case as well, the threshold value amax_th is set in advance to a value smaller than 1.0.
When detecting a state as a branch source of branches in the forward direction (which state will hereinafter be referred to also as a branch source state) as described above, the learning section 21 detects candidates for states as branch destinations of the branches in the forward direction from the branch source state.
That is, the learning section 21 detects a plurality of states as branch destinations of the branches in the forward direction from the branch source state when the suffix m of an action Um is U and the suffix i of the branch source state Si of the branches in the forward direction is S according to equation (23).
In equation (23), Aijm denotes a state transition probability aij(Um) in an ith position from the top in an i-axis direction, a jth position from the left in a j-axis direction, and an mth position from the front in an action axis direction in the three-dimensional state transition probability table.
In addition, in equation (23), argfind(amin_th<ASjU) denotes all the suffixes j of state transition probabilities ASjU satisfying the conditional expression amin_th<ASjU.
Incidentally, the threshold value amin_th is set in advance to a small value larger than 0.0.
The learning section 21 detects the states Sj of all the suffixes j expressed in equation (23) as candidates for branch destination states of the branches in the forward direction.
Thereafter, when a plurality of states are detected as candidates for branch destination states of branches in the forward direction, the learning section 21 determines whether observed values observed in the plurality of respective candidates for branch destination states and having a maximum observation probability coincide with each other.
The learning section 21 then detects candidates whose observed values having a maximum observation probability coincide with each other among the plurality of candidates for branch destination states as branch destination states of branches in the forward direction.
Specifically, the learning section 21 obtains an observed value Omax having a maximum observation probability with respect to each of the plurality of candidates for branch destination states according to equation (24).
In equation (24), Bik denotes an observation probability bi(Ok) of an observed value Ok being observed in a state Si.
In equation (24), argmax(Bik) denotes the suffix k of a maximum observation probability BS,k of the state whose suffix is S in the observation probability matrix B.
When the suffixes k of maximum observation probabilities BS,k obtained in equation (24) with respect to the suffix i of each of a plurality of states Si as a plurality of candidates for branch destination states coincide with each other, the learning section 21 detects the candidates whose suffixes k obtained in equation (24) coincide with each other among the plurality of candidates for branch destination states as branch destination states of branches in the forward direction.
For example, suppose that states S1 and S4 are detected as candidates for branch destination states of branches in the forward direction.
With respect to the states S1 and S4 as candidates for branch destination states of the branches in the forward direction, the observed value O2 observed in the state S1 and having a maximum observation probability of 1.0 and the observed value O2 observed in the state S4 and having a maximum observation probability of 0.9 coincide with each other. The states S1 and S4 are therefore detected as branch destination states of the branches in the forward direction.
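As a non-limiting illustration of the detection of merging object states for branches in the forward direction described above, the following sketch can be considered; the thresholds a_max_th and a_min_th are placeholders for amax_th and amin_th, and the grouping of candidates by the observed value of maximum observation probability corresponds to equation (24).

import numpy as np

def forward_merge_candidates(A, B, a_max_th=0.7, a_min_th=0.1):
    # For each state S and action U, a row maximum clearly smaller than 1.0 indicates a
    # branch source state; destinations j with a_Sj(U) > a_min_th are candidates for
    # branch destination states (cf. equation (23)); candidates in which the same observed
    # value has the maximum observation probability are merging object states.
    N, _, M = A.shape
    merges = []
    for S in range(N):
        for U in range(M):
            row = A[S, :, U]
            if row.sum() > 0.5 and row.max() < a_max_th:   # branch source state
                candidates = np.where(row > a_min_th)[0]
                groups = {}
                for j in candidates:
                    groups.setdefault(int(np.argmax(B[j])), []).append(int(j))
                for states in groups.values():
                    if len(states) >= 2:
                        merges.append(states)              # merging object states
    return merges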
In the state transition probability plane A with respect to each action Um, state transition probabilities are normalized such that, with respect to each state Si, a sum total of the state transition probabilities aij(Um) having the state Si as a transition source is 1.0, as described above.
However, when there is a possibility of a state transition from the state Si to the state Sj being made, a state transition probability aij(Um) having the state Sj as a transition destination is a positive value other than 0.0 (a value that can be assumed to be 0.0).
Hence, a branch source state (a state that can be a branch source state) of branches in the backward direction and candidates for branch destination states can be detected according to equation (25).
In equation (25), Aijm denotes a state transition probability aij(Um) in an ith position from the top in an i-axis direction, a jth position from the left in a j-axis direction, and an mth position from the front in an action axis direction in the three-dimensional state transition probability table.
In addition, in equation (25), argfind(amin_th<Aijm) denotes all the suffixes of state transition probabilities Aijm satisfying the conditional expression amin_th<Aijm.
Incidentally, the threshold value amin_th is set in advance to a small value larger than 0.0, as described above.
The learning section 21 detects, as a branch source state of branches in the backward direction, a state that is a transition destination of state transitions corresponding to a plurality of state transition probabilities satisfying the conditional expression in equation (25).
The learning section 21 further detects, as candidates for branch destination states, the plurality of transition source states of the state transitions corresponding to the plurality of state transition probabilities Aijm satisfying the conditional expression amin_th<Aijm.
Thereafter, the learning section 21 determines whether observed values observed in the plurality of respective candidates for branch destination states of branches in the backward direction and having a maximum observation probability coincide with each other.
Then, as in the case of detecting branch destination states of branches in the forward direction, the learning section 21 detects candidates whose observed values having a maximum observation probability coincide with each other among the plurality of candidates for branch destination states as branch destination states of branches in the backward direction.
For example, suppose that states S2 and S5 are detected as candidates for branch destination states of branches in the backward direction.
With respect to the states S2 and S5 as candidates for branch destination states of the branches in the backward direction, the observed value O3 observed in the state S2 and having a maximum observation probability of 1.0 and the observed value O3 observed in the state S5 and having a maximum observation probability of 0.8 coincide with each other. The states S2 and S5 are therefore detected as branch destination states of the branches in the backward direction.
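A corresponding sketch for branches in the backward direction, under the same assumptions, may be written as follows.

import numpy as np

def backward_merge_candidates(A, B, a_min_th=0.1):
    # Several transition source states i reaching the same state S by the same action U
    # (a_iS(U) > a_min_th for a plurality of i, cf. equation (25)) are candidates; sources
    # in which the same observed value has the maximum observation probability are
    # merging object states.
    N, _, M = A.shape
    merges = []
    for S in range(N):
        for U in range(M):
            sources = np.where(A[:, S, U] > a_min_th)[0]
            groups = {}
            for i in sources:
                groups.setdefault(int(np.argmax(B[i])), []).append(int(i))
            for states in groups.values():
                if len(states) >= 2:
                    merges.append(states)
    return merges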
When the learning section 21 detects a branch source state of branches in the forward direction and the backward direction and a plurality of branch destination states branching from the branch source state as described above, the learning section 21 merges the plurality of branch destination states into one representative state.
In this case, the learning section 21 for example sets a branch destination state having a minimum suffix among the plurality of branch destination states as a representative state, and merges the plurality of branch destination states into the representative state.
That is, for example, when three states are detected as a plurality of branch destination states branching from a certain branch source state, the learning section 21 sets a branch destination state having a minimum suffix among the plurality of branch destination states as a representative state, and merges the plurality of branch destination states into the representative state.
In addition, the learning section 21 makes the other two states not set as the representative state among the three branch destination states invalid states.
Incidentally, a representative state in the state merge can be selected from invalid states rather than branch destination states. In this case, after the plurality of branch destination states are merged into the representative state, the plurality of branch destination states are all made to be an invalid state.
For example, suppose that the states S1 and S4 are detected as merging object states, and that the state S1 having the smaller suffix is set as the representative state.
The learning section 21 then merges the merging object states S1 and S4 into the representative state S1 as follows.
The learning section 21 sets observation probabilities b1(Ok) of respective observed values Ok being observed in the representative state S1 to average values of observation probabilities b1(Ok) and b4(Ok) of the respective observed values Ok being observed in each of the plurality of states S1 and S4 as merging object states, and sets observation probabilities b4(Ok) of the respective observed values Ok being observed in the state S4 other than the representative state S1 of the plurality of states S1 and S4 as merging object states to zero.
In addition, the learning section 21 sets state transition probabilities a1,j(Um) of state transitions having the representative state S1 as a transition source to average values of state transition probabilities a1,j(Um) and a4,j(Um) of state transitions having each of the plurality of states S1 and S4 that are merging object states as a transition source, and sets state transition probabilities ai,1(Um) of state transitions having the representative state S1 as a transition destination to sums of state transition probabilities ai,1(Um) and ai,4(Um) of state transitions having each of the plurality of states S1 and S4 that are merging object states as a transition destination.
Further, the learning section 21 sets state transition probabilities a4j (Um) of state transitions having the state S4 other than the representative state S1 of the plurality of states S1 and S4 that are merging object states as a transition source and state transition probabilities ai,4(Um) of state transitions having the state S4 as a transition destination to zero.
The learning section 21 sets an observation probability b1(O1) of an observed value O1 being observed in the representative state S1 to an average value (b1(O1)+b4(O1))/2 of observation probabilities b1(O1) and b4(O1) of the observed value O1 being observed in the merging object states S1 and S4.
The observation probabilities b1(Ok) of the other observed values Ok being observed in the representative state S1 are similarly set.
Further, the learning section 21 sets the observation probabilities b4(Ok) of the respective observed values Ok being observed in the state S4 other than the representative state S1 of the merging object states S1 and S4 to zero.
The setting of the observation probabilities as described above is expressed by equation (26).
B(S1, :)=(B(S1, :)+B(S4, :))/2
B(S4, :)=0.0 (26)
In equation (26), B(,) is a two-dimensional array, and an element B(S, O) of the array represents an observation probability of an observed value O being observed in a state S.
A colon (:) as a suffix of an array represents all elements of the dimension at the position of the colon. Hence, in equation (26), the equation B(S4, :)=0.0 indicates that the observation probabilities of the respective observed values being observed in the state S4 are all set to 0.0.
According to equation (26), the observation probabilities of the respective observed values Ok being observed in the representative state S1 are set to average values of the observation probabilities b1(Ok) and b4(Ok) of the respective observed values Ok being observed in each of the merging object states S1 and S4 (B(S1, :)=(B(S1, :)+B(S4, :))/2).
Further, according to equation (26), the observation probabilities b4(Ok) of the respective observed values Ok being observed in the state S4 other than the representative state S1 of the merging object states S1 and S4 are set to zero (B(S4, :)=0.0).
State transitions having each of a plurality of states that are merging object states as a transition source do not necessarily coincide with each other. State transitions having each of a plurality of states that are merging object states as a transition source should be possible as state transitions having a representative state obtained by merging the merging object states as a transition source.
Accordingly, the learning section 21 sets the state transition probabilities of the state transitions having the representative state S1 as a transition source to the average values of the state transition probabilities of the state transitions having each of the merging object states S1 and S4 as a transition source.
State transitions having each of a plurality of states that are merging object states as a transition destination do not necessarily coincide with each other, either. State transitions having each of a plurality of states that are merging object states as a transition destination should be possible as state transitions having a representative state obtained by merging the merging object states as a transition destination.
Accordingly, the learning section 21 sets the state transition probabilities of the state transitions having the representative state S1 as a transition destination to the sums of the state transition probabilities of the state transitions having each of the merging object states S1 and S4 as a transition destination.
Incidentally, the average values of the state transition probabilities a1,j(Um) and a4,j(Um) of the state transitions having the merging object states S1 and S4 as a transition source are adopted as the state transition probabilities a1,j(Um) of the state transitions having the representative state S1 as a transition source, whereas the sums of the state transition probabilities ai,1(Um) and ai,4(Um) of the state transitions having the merging object states S1 and S4 as a transition destination are adopted as the state transition probabilities ai,1(Um) of the state transitions having the representative state S1 as a transition destination. This is because, in the state transition probability plane A with respect to each action Um, the state transition probabilities aij(Um) are normalized such that a sum total of the state transition probabilities aij(Um) having a state Si as a transition source is 1.0, whereas no normalization is performed such that a sum total of the state transition probabilities aij(Um) having a state Sj as a transition destination is 1.0.
In addition to setting the state transition probabilities having the representative state S1 as a transition source and the state transition probabilities having the representative state S1 as a transition destination, the learning section 21 sets to zero the state transition probabilities having the merging object state S4 (the merging object state other than the representative state S1) as a transition source and the state transition probabilities having the merging object state S4 as a transition destination, the merging object state S4 being rendered unnecessary for the expression of the structure of the action environment by merging the merging object states S1 and S4 into the representative state S1.
The setting of the state transition probabilities as described above is expressed by equation (27).
A(S1, :, :) = (A(S1, :, :) + A(S4, :, :)) / 2
A(:, S1, :) = A(:, S1, :) + A(:, S4, :)
A(S4, :, :) = 0.0
A(:, S4, :) = 0.0    (27)
In equation (27), A(,,) is a three-dimensional array, and an element A(S, S′, U) of the array indicates a state transition probability of a state transition being made from a state S as a transition source to a state S′ when an action U is performed.
As in equation (26), a colon (:) as a suffix of an array represents all elements of a dimension as the colon.
Hence, in equation (27), A(S1, :, :), for example, denotes all state transition probabilities of state transitions from the state S1 as a transition source to each state when each action is performed. In equation (27), A(:, S1, :), for example, denotes all state transition probabilities of state transitions from each state to the state S1 as a transition destination when each action is performed.
According to equation (27), with respect to all actions, the state transition probabilities of state transitions having the representative state S1 as a transition source are set to the average values of the state transition probabilities a1,j(Um) and a4,j(Um) of state transitions having the merging object states S1 and S4 as a transition source (A(S1, :, :) = (A(S1, :, :) + A(S4, :, :)) / 2).
In addition, with respect to all the actions, the state transition probabilities of state transitions having the representative state S1 as a transition destination are set to the sums of the state transition probabilities ai,1(Um) and ai,4(Um) of state transitions having the merging object states S1 and S4 as a transition destination (A(:, S1, :) = A(:, S1, :) + A(:, S4, :)).
Further, according to equation (27), with respect to all the actions, the state transition probabilities having the merging object state S4 as a transition source and the state transition probabilities having the merging object state S4 as a transition destination, the merging object state S4 being rendered unnecessary for the expression of the structure of the action environment by merging the merging object states S1 and S4 into the representative state S1, are set to zero (A(S4, :, :) = 0.0, A(:, S4, :) = 0.0).
As described above, the state transition probabilities having the merging object state S4 as a transition source and the state transition probabilities having the merging object state S4 as a transition destination, the merging object state S4 being rendered unnecessary for the expression of the structure of the action environment by merging the merging object states S1 and S4 into the representative state S1, are set to 0.0, and the observation probabilities of each observed value being observed in the merging object state S4 rendered unnecessary are set to 0.0. The merging object state S4 rendered unnecessary thereby becomes an invalid state.
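Under the same assumptions (a NumPy array A of shape (N, N, M) indexed as A[transition source, transition destination, action], with s_rep and s_other as hypothetical state indices), the update of equation (27) can be sketched as follows; this is an illustration of the description above, not the literal implementation.

import numpy as np

def merge_transition_probabilities(A, s_rep, s_other):
    A = A.copy()
    # Transition source: average, because each source row is normalized to 1.0 per action.
    A[s_rep, :, :] = (A[s_rep, :, :] + A[s_other, :, :]) / 2.0
    # Transition destination: sum, because no normalization is performed over destinations.
    A[:, s_rep, :] = A[:, s_rep, :] + A[:, s_other, :]
    # The merged-away state becomes an invalid state.
    A[s_other, :, :] = 0.0
    A[:, s_other, :] = 0.0
    return A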
[Learning of Extended HMM under One-State One-Observed-Value Constraint]
In step S91, the learning section 21 performs initial learning of the extended HMM, that is, a process similar to steps S21 to S24 in
After the model parameters have converged in the initial learning in step S91, the learning section 21 stores the model parameters of the extended HMM in the model storing section 22. The process then proceeds to step S92.
In step S92, the learning section 21 detects a dividing object state from the extended HMM stored in the model storing section 22. The process then proceeds to step S93.
When the learning section 21 cannot detect a dividing object state in step S92, that is, when there is no dividing object state in the extended HMM stored in the model storing section 22, the process skips steps S93 and S94, and proceeds to step S95.
In step S93, the learning section 21 performs a state division that divides the dividing object state detected in step S92 into a plurality of states after the division. The process then proceeds to step S94.
In step S94, the learning section 21 performs learning of the extended HMM that is stored in the model storing section 22 and in which the state division has been performed in immediately preceding step S93, that is, a process similar to steps S22 to S24 in
Incidentally, in the learning in step S94 (also in step S97 to be described later), the model parameters of the extended HMM stored in the model storing section 22 are used as initial values of the model parameters as they are.
After the model parameters of the extended HMM have converged in the learning in step S94, the learning section 21 stores (overwrites) the model parameters of the extended HMM in the model storing section 22. The process then proceeds to step S95.
In step S95, the learning section 21 detects merging object states from the extended HMM stored in the model storing section 22. The process then proceeds to step S96.
When the learning section 21 cannot detect merging object states in step S95, that is, when there are no merging object states in the extended HMM stored in the model storing section 22, the process skips steps S96 and S97, and proceeds to step S98.
In step S96, the learning section 21 performs a state merge that merges the merging object states detected in step S95 into a representative state. The process then proceeds to step S97.
In step S97, the learning section 21 performs learning of the extended HMM that is stored in the model storing section 22 and in which the state merge has been performed in immediately preceding step S96, that is, a process similar to steps S22 to S24 in
After the model parameters of the extended HMM have converged in the learning in step S97, the learning section 21 stores (overwrites) the model parameters of the extended HMM in the model storing section 22. The process then proceeds to step S98.
In step S98, the learning section 21 determines whether no dividing object state is detected in the process of detecting a dividing object state in step S92 and no merging object states are detected in the process of detecting merging object states in step S95.
When it is determined in step S98 that at least one of a dividing object state and merging object states is detected, the process returns to step S92 to repeat a similar process from step S92 on down.
When it is determined in step S98 that neither of a dividing object state and merging object states is detected, the process of learning the extended HMM is ended.
By repeating the state division, the learning of the extended HMM after the state division, the state merge, and the learning of the extended HMM after the state merge as described above until neither of a dividing object state and merging object states is detected, learning satisfying the one-state one-observed-value constraint is performed, and the extended HMM in which (only) one observed value is observed in one state can be obtained.
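The flow of steps S91 to S98 can be summarized by the following schematic sketch; baum_welch, detect_dividing_states, divide_states, detect_merging_groups, and merge_states are placeholder names for the processes described above and below, not functions defined in this specification.

def learn_with_one_state_one_observed_value_constraint(model, learning_data):
    baum_welch(model, learning_data)                  # step S91: initial learning
    while True:
        dividing = detect_dividing_states(model)      # step S92
        if dividing:
            divide_states(model, dividing)            # step S93
            baum_welch(model, learning_data)          # step S94: the current model parameters
                                                      # are used as initial values as they are
        merging = detect_merging_groups(model)        # step S95
        if merging:
            merge_states(model, merging)              # step S96
            baum_welch(model, learning_data)          # step S97
        if not dividing and not merging:              # step S98
            break
    return model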
In step S111, the learning section 21 initializes a variable i indicating the suffix of a state Si to one, for example. The process then proceeds to step S112.
In step S112, the learning section 21 initializes a variable k indicating the suffix of an observed value Ok to one, for example. The process then proceeds to step S113.
In step S113, the learning section 21 determines whether an observation probability Bik = bi(Ok) of the observed value Ok being observed in the state Si satisfies the conditional expression 1/K < Bik < bmax.
When it is determined in step S113 that the observation probability Bik = bi(Ok) does not satisfy the conditional expression 1/K < Bik < bmax, the process skips step S114, and proceeds to step S115.
When it is determined in step S113 that the observation probability Bik = bi(Ok) satisfies the conditional expression 1/K < Bik < bmax, the process proceeds to step S114, where the learning section 21 stores the state Si in association with the observed value Ok as an observed value to be divided.
The process thereafter proceeds from step S114 to step S115, where the learning section 21 determines whether the suffix k is equal to the number K of observed values (hereinafter referred to also as the number of symbols).
When it is determined in step S115 that the suffix k is not equal to the number K of symbols, the process proceeds to step S116, where the learning section 21 increments the suffix k by one. The process then returns from step S116 to step S113 to repeat a similar process from step S113 on down.
When it is determined in step S115 that the suffix k is equal to the number K of symbols, the process proceeds to step S117, where the learning section 21 determines whether the suffix i is equal to the number N of states (the number of states of the extended HMM).
When it is determined in step S117 that the suffix i is not equal to the number N of states, the process proceeds to step S118, where the learning section 21 increments the suffix i by one. The process then returns from step S118 to step S112 to repeat a similar process from step S112 on down.
When it is determined in step S117 that the suffix i is equal to the number N of states, the process proceeds to step S119, where the learning section 21 detects each state Si stored in association with the observed value to be divided in step S114 as a dividing object state. The process then makes a return.
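As an illustration of steps S111 to S119, the following sketch scans the observation probabilities for states in which a plurality of observed values are observed; B is assumed to be a NumPy array of shape (N, K), and b_max stands for the upper threshold value of the conditional expression.

import numpy as np

def detect_dividing_states(B, b_max):
    N, K = B.shape
    dividing = {}
    for i in range(N):                                 # loop over the states Si
        for k in range(K):                             # loop over the observed values Ok
            if 1.0 / K < B[i, k] < b_max:              # step S113
                dividing.setdefault(i, []).append(k)   # step S114: store Si with Ok
    return dividing                                    # step S119: keys are the dividing object states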
In step S131, the learning section 21 selects one of dividing object states which state is not yet set as a state of interest to be a state of interest. The process then proceeds to step S132.
In step S132, the learning section 21 sets the number of observed values to be divided which observed values are associated with the state of interest as the number Cs of states after division into which the state of interest is divided (which number will hereinafter be referred to also as a dividing number), and selects Cs states in total, that is, the state of interest and Cs−1 states of invalid states among the states of the extended HMM, as states after division.
The process thereafter proceeds from step S132 to step S133, where the learning section 21 assigns each of the Cs states after division one of the Cs observed values to be divided which observed values are associated with the state of interest. The process then proceeds to step S134.
In step S134, the learning section 21 initializes a variable c for counting the Cs states after the division to one, for example. The process then proceeds to step S135.
In step S135, the learning section 21 selects a cth state after the division among the Cs states after the division as a state of interest after the division to which state attention is directed. The process then proceeds to step S136.
In step S136, the learning section 21 sets an observation probability that the observed value to be divided which observed value is assigned to the state of interest after the division is observed in the state of interest after the division to 1.0, and sets observation probabilities of the other observed values being observed to 0.0. The process then proceeds to step S137.
In step S137, the learning section 21 sets state transition probabilities of state transitions having the state of interest after the division as a transition source to state transition probabilities of state transitions having the state of interest as a transition source. The process then proceeds to step S138.
In step S138, the learning section 21 corrects state transition probabilities of state transitions having the state of interest as a transition destination by an observation probability that the observed value of the dividing object state which observed value is assigned to the state of interest after the division is observed in the state of interest, as described with reference to
In step S139, the learning section 21 sets state transition probabilities of state transitions having the state of interest after the division as a transition destination to the corrected values obtained in the immediately preceding step S138. The process then proceeds to step S140.
In step S140, the learning section 21 determines whether the variable c is equal to the dividing number Cs.
When it is determined in step S140 that the variable c is not equal to the dividing number Cs, the process proceeds to step S141, where the learning section 21 increments the variable c by one. The process then returns to step S135.
When it is determined in step S140 that the variable c is equal to the dividing number Cs, the process proceeds to step S142, where the learning section 21 determines whether all the dividing object states have been selected as a state of interest.
When it is determined in step S142 that all the dividing object states have not been selected as a state of interest, the process returns to step S131 to repeat a similar process from step S131 on down.
When it is determined in step S142 that all the dividing object states have been selected as a state of interest, that is, when the division of all the dividing object states has been completed, the process makes a return.
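The state division of steps S131 to S142 can be sketched as follows, assuming A of shape (N, N, M) and B of shape (N, K) as before, that `dividing` maps each dividing object state to its observed values to be divided, and that `invalid_states` is a list of indices of invalid states that can be reused as states after the division; all of these names are illustrative.

import numpy as np

def divide_states(A, B, dividing, invalid_states):
    for s, observed_values in dividing.items():                      # step S131
        Cs = len(observed_values)                                    # dividing number
        targets = [s] + [invalid_states.pop() for _ in range(Cs - 1)]   # step S132
        b_orig = B[s, :].copy()
        a_out_orig = A[s, :, :].copy()
        a_in_orig = A[:, s, :].copy()
        for s_new, o in zip(targets, observed_values):               # steps S133 to S135
            B[s_new, :] = 0.0                                        # step S136: only the assigned
            B[s_new, o] = 1.0                                        # observed value is observed
            A[s_new, :, :] = a_out_orig                              # step S137: copy outgoing transitions
            A[:, s_new, :] = a_in_orig * b_orig[o]                   # steps S138, S139: incoming
                                                                     # transitions weighted by b_s(o)
    return A, B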
In step S161, the learning section 21 initializes a variable m indicating the suffix of an action Um to one, for example. The process then proceeds to step S162.
In step S162, the learning section 21 initializes a variable i indicating the suffix of a state Si to one, for example. The process then proceeds to step S163.
In step S163, the learning section 21 detects a maximum value max(Aijm) of state transition probabilities Aijm(=aij(Um)) of state transitions having the state Si as a transition source to each state Sj with respect to an action Um in the extended HMM stored in the model storing section 22. The process then proceeds to step S164.
In step S164, the learning section 21 determines whether the maximum value max(Aijm) satisfies equation (19), that is, the conditional expression 1/N < max(Aijm) < amax.
When it is determined in step S164 that the maximum value max(Aijm) does not satisfy equation (19), the process skips step S165, and proceeds to step S166.
When it is determined in step S164 that the maximum value max(Aijm) satisfies equation (19), the process proceeds to step S165, where the learning section 21 detects the state Si as a branch source state of branches in the forward direction.
Further, the learning section 21 detects each state Sj as a transition destination of a state transition whose state transition probability Aijm=aij(Um) satisfies the conditional expression amin < Aijm as a branch destination state branching from the state Si as a branch source. The process then proceeds to step S166.
In step S166, the learning section 21 determines whether the suffix i is equal to the number N of states.
When it is determined in step S166 that the suffix i is not equal to the number N of states, the process proceeds to step S167, where the learning section 21 increments the suffix i by one. The process then returns to step S163.
When it is determined in step S166 that the suffix i is equal to the number N of states, the process proceeds to step S168, where the learning section 21 initializes a variable j indicating the suffix of the state Sj to one, for example. The process then proceeds to step S169.
In step S169, the learning section 21 determines whether there are a plurality of states Si′ as transition sources of state transitions whose state transition probabilities Ai′jm=ai′j(Um) satisfy the conditional expression amin < Ai′jm with respect to the state Sj as a transition destination.
When it is determined in step S169 that there are not a plurality of states Si′ as transition sources of state transitions satisfying the conditional expression amin < Ai′jm, the process skips step S170, and proceeds to step S171.
When it is determined in step S169 that there are a plurality of states Si′ as transition sources of state transitions satisfying the conditional expression amin < Ai′jm, the process proceeds to step S170, where the learning section 21 detects the state Sj as a branch source state of branches in the backward direction.
Further, the learning section 21 detects the plurality of states Si′ as transition sources of the state transitions whose state transition probabilities Ai′jm=ai′j(Um) satisfy the conditional expression amin < Ai′jm as branch destination states branching from the state Sj. The process then proceeds to step S171.
In step S171, the learning section 21 determines whether the suffix j is equal to the number N of states.
When it is determined in step S171 that the suffix j is not equal to the number N of states, the process proceeds to step S172, where the learning section 21 increments the suffix j by one. The process then returns to step S169.
When it is determined in step S171 that the suffix j is equal to the number N of states, the process proceeds to step S173, where the learning section 21 determines whether the suffix m is equal to the number M of actions Um (which number will hereinafter be referred to also as an action number).
When it is determined in step S173 that the suffix m is not equal to the action number M, the process proceeds to step S174, where the learning section 21 increments the suffix m by one. The process then returns to step S162.
When it is determined in step S173 that the suffix m is equal to the action number M, the process proceeds to step S191 in
In step S191, the learning section 21 selects one of the branch source states which state is not yet selected as a state of interest to be a state of interest. The process then proceeds to step S192.
In step S192, with respect to each of a plurality of branch destination states (candidates for branch destination states) detected in relation to the state of interest, that is, a plurality of branch destination states (candidates for branch destination states) branching from the state of interest as a branch source, the learning section 21 detects an observed value Omax observed in the branch destination state and having a maximum observation probability (which observed value will hereinafter be referred to also as a maximum probability observed value) according to equation (24). The process then proceeds to step S193.
In step S193, the learning section 21 determines whether the plurality of branch destination states detected in relation to the state of interest include branch destination states whose maximum probability observed values Omax coincide with each other.
When it is determined in step S193 that the plurality of branch destination states detected in relation to the state of interest do not include branch destination states whose maximum probability observed values Omax coincide with each other, the process skips step S194, and proceeds to step S195.
When it is determined in step S193 that the plurality of branch destination states detected in relation to the state of interest include branch destination states whose maximum probability observed values Omax coincide with each other, the process proceeds to step S194, where the learning section 21 detects the plurality of branch destination states whose maximum probability observed values Omax coincide with each other among the plurality of branch destination states detected in relation to the state of interest as merging object states of one group. The process then proceeds to step S195.
In step S195, the learning section 21 determines whether all the branch source states have been selected as a state of interest.
When it is determined in step S195 that not all the branch source states have been selected as a state of interest, the process returns to step S191.
When it is determined in step S195 that all the branch source states have been selected as a state of interest, the process makes a return.
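A sketch of steps S191 to S195 is shown below, assuming `branches` maps each branch source state to the branch destination states detected for it and B holds the observation probabilities; branch destination states sharing the same maximum probability observed value Omax form one group of merging object states.

import numpy as np

def detect_merging_groups(branches, B):
    groups = []
    for source, destinations in branches.items():      # step S191: state of interest
        by_omax = {}
        for j in destinations:                          # step S192: maximum probability observed
            omax = int(np.argmax(B[j, :]))              # value Omax of each branch destination
            by_omax.setdefault(omax, []).append(j)
        for states in by_omax.values():                 # steps S193, S194: coinciding Omax
            if len(states) > 1:
                groups.append(states)                   # one group of merging object states
    return groups                                       # step S195: all branch sources processed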
In step S211, the learning section 21 selects one of groups of merging object states which group is not yet set as a group of interest to be a group of interest. The process then proceeds to step S212.
In step S212, the learning section 21 selects a merging object state having a minimum suffix, for example, among a plurality of merging object states of the group of interest to be a representative state of the group of interest. The process then proceeds to step S213.
In step S213, the learning section 21 sets observation probabilities of each observed value being observed in the representative state to average values of observation probabilities of each observed value being observed in the plurality of respective merging object states of the group of interest.
Further, in step S213, the learning section 21 sets observation probabilities of each observed value being observed in the other merging object states than the representative state of the group of interest to 0.0. The process then proceeds to step S214.
In step S214, the learning section 21 sets state transition probabilities of state transitions having the representative state as a transition source to average values of state transition probabilities of state transitions having each of the merging object states of the group of interest as a transition source. The process then proceeds to step S215.
In step S215, the learning section 21 sets state transition probabilities of state transitions having the representative state as a transition destination to sums of state transition probabilities of state transitions having each of the merging object states of the group of interest as a transition destination. The process then proceeds to step S216.
In step S216, the learning section 21 sets state transition probabilities of state transitions having the other merging object states than the representative state of the group of interest as a transition source and state transitions having the other merging object states than the representative state of the group of interest as a transition destination to 0.0. The process then proceeds to step S217.
In step S217, the learning section 21 determines whether all the groups of merging object states have been selected as a group of interest.
When it is determined in step S217 that not all the groups of merging object states have been selected as a group of interest, the process returns to step S211.
When it is determined in step S217 that all the groups of merging object states have been selected as a group of interest, the process makes a return.
In the simulation, an environment whose structure is converted to a first structure and a second structure is employed as the action environment.
In the action environment of the first structure, a position pos is a wall and thus cannot be passed, whereas in the action environment of the second structure, the position pos is a passage and can thus be passed.
In the simulation, an observed value series and an action series serving as learning data were obtained in each of the action environment of the first structure and the action environment of the second structure, and the learning of the extended HMM was performed.
In
In addition, in
The extended HMM in
On the other hand, the extended HMM in
According to the learning under the one-state one-observed-value constraint, when the structure of the action environment changes, parts not changing in structure are commonly stored in the extended HMM, and parts changing in structure are represented by branch structures of state transitions ((a plurality of) state transitions to different states as state transitions occurring when a certain action is performed) in the extended HMM.
Thus, because the action environment changing in structure can be properly represented by the single extended HMM without models being prepared for each structure, the action environment changing in structure can be modeled with a small amount of storage resources.
[Process in Recognition Action Mode for Determining Action according to Predetermined Strategy]
In the process in the recognition action mode of
In a case where the agent is located in an unknown area, even when an action is determined as described above, the determined action may not be an appropriate action.
Accordingly, the agent in the recognition action mode can determine whether the present conditions of the agent are unknown conditions (conditions in which an observed value series and an action series not heretofore observed are observed, that is, conditions not obtained in the extended HMM) or known conditions (conditions in which an observed value series and an action series heretofore observed are observed, that is, conditions obtained in the extended HMM), and determine an appropriate action on the basis of a result of the determination.
In the recognition action mode in
Thereafter, the process proceeds to step S301, where the state recognizing section 23 reads out a latest observed value series whose series length q is a predetermined length Q and an action series of actions performed when each observed value of the observed value series is observed as an observed value series and an action series for recognition from the history storing section 14.
Then, the process proceeds from step S301 to step S302, where the state recognizing section 23 observes the observed value series and the action series for recognition in the learned extended HMM stored in the model storing section 22, and obtains an optimum state probability δt(j), which is a maximum value of state probability of being in a state Sj at time t, and an optimum path ψt(j), which is a state series providing the optimum state probability δt(j), according to the above-described equations (10) and (11) based on the Viterbi algorithm.
Further, the state recognizing section 23 observes the observed value series and the action series for recognition, and obtains a maximum likelihood state series, which is a state series for arriving at the state Sj which state series maximizes the optimum state probability δt(j) in equation (10) at time t, from the optimum path ψt(j) in equation (11).
The process thereafter proceeds from step S302 to step S303, where the state recognizing section 23 determines on the basis of the maximum likelihood state series whether the present conditions of the agent are known conditions or unknown conditions.
In the following, the observed value series for recognition (or the observed value series and the action series for recognition) will be denoted as O, and the maximum likelihood state series in which the observed value series O and the action series for recognition are observed will be denoted as X. Incidentally, the number of states forming the maximum likelihood state series X is equal to the series length q of the observed value series O for recognition.
The time t at which the first observed value of the observed value series O for recognition is observed is set at one, for example. A state at time t (tth state from a start) of the maximum likelihood state series X will be denoted as Xt, and a state transition probability of a state transition from the state Xt to a state Xt+1 at time t+1 will be denoted as A(Xt, Xt+1).
Further, a likelihood of the observed value series O for recognition being observed in the maximum likelihood state series X will be denoted as P(O|X).
In step S303, the state recognizing section 23 determines whether equation (28) and equation (29) are satisfied.
A(Xt, Xt+1) > Threstrans  (0 < t < q)    (28)
P(O|X) > Thresobs    (29)
Threstrans in equation (28) is a threshold value for determining whether a state transition from the state Xt to the state Xt+1 can occur. Thresobs in equation (29) is a threshold value for determining whether the observed value series O for recognition can be observed in the maximum likelihood state series X. For example, values that can properly make these determinations are set as the threshold values Threstrans and Thresobs by simulation or the like.
When at least one of equation (28) and equation (29) is not satisfied, the state recognizing section 23 determines in step S303 that the present conditions of the agent are unknown conditions.
When both of equation (28) and equation (29) are satisfied, the state recognizing section 23 determines in step S303 that the present conditions of the agent are known conditions.
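The determination of step S303 based on equations (28) and (29) can be sketched as follows, assuming X is the maximum likelihood state series as a list of state indices, `actions` is the action series for recognition, A has shape (N, N, M), and `likelihood` is P(O|X); thres_trans and thres_obs stand for Threstrans and Thresobs.

def is_known_conditions(X, actions, A, likelihood, thres_trans, thres_obs):
    # Equation (28): every state transition along X must be sufficiently probable.
    for t in range(len(X) - 1):
        if A[X[t], X[t + 1], actions[t]] <= thres_trans:
            return False
    # Equation (29): the observed value series must be sufficiently likely in X.
    return likelihood > thres_obs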
When it is determined in step S303 that the present conditions are known conditions, the state recognizing section 23 obtains the last state of the maximum likelihood state series X as a present state St (estimates the last state of the maximum likelihood state series X to be a present state St). The process then proceeds to step S304.
In step S304, the state recognizing section 23 updates the elapsed time managing table stored in the elapsed time managing table storing section 32 (
The agent thereafter performs a similar process to the process from step S35 on down in
When it is determined in step S303 that the present conditions are unknown conditions, on the other hand, the process proceeds to step S305, where the state recognizing section 23 calculates one or more candidates for a present condition state series, which is a state series for the agent to reach the present conditions on the basis of the extended HMM stored in the model storing section 22.
Further, the state recognizing section 23 supplies the one or more candidates for the present condition state series to the action determining section 24. The process then proceeds to step S306.
In step S306, the action determining section 24 determines an action to be performed next by the agent according to a predetermined strategy using the one or more candidates for the present condition state series from the state recognizing section 23.
The agent thereafter performs a similar process to the process from step S40 on down in
As described above, when the present conditions are unknown conditions, the agent calculates one or more candidates for a present condition state series, and determines an action of the agent according to a predetermined strategy using the one or more candidates for the present condition state series.
That is, when the present conditions are unknown conditions, the agent obtains a state series leading to the present conditions in which state series a latest observed value series and a latest action series of a certain series length q are observed as a candidate for the present condition state series from state series that can be obtained from past experiences, that is, state series of state transitions occurring in the learned extended HMM (which state series will hereinafter be referred to also as experienced state series).
The agent then determines an action of the agent according to a predetermined strategy using (reusing) the present condition state series as experienced state series.
[Calculation of Candidate for Present Condition State Series]
In step S311, the state recognizing section 23 reads out a latest observed value series whose series length q is a predetermined length Q′ and an action series of actions performed when each observed value of the observed value series is observed (a latest action series of actions performed by the agent, whose series length q is a predetermined length Q′, and an observed value series of observed values observed in the agent when the actions of the action series are performed) as an observed value series and an action series for recognition from the history storing section 14 (
In this case, the length Q′ as series length q of the observed value series for recognition obtained in step S311 by the state recognizing section 23 is for example one, which is shorter than the length Q as series length q of the observed value series obtained in step S301 in
That is, as described above, the agent obtains a state series in which an observed value series and an action series for recognition as a latest observed value series and a latest action series are observed as a candidate for the present condition state series from the experienced state series. When the series length q of the observed value series and the action series for recognition is too long, a state series in which an observed value series and an action series for recognition of such a long series length q are observed may not be found in the experienced state series (or there is only a likelihood approximately equal to zero of such a state series being found).
Accordingly, the state recognizing section 23 obtains an observed value series and an action series for recognition of a short series length q in step S311 so as to be able to obtain a state series in which the observed value series and the action series for recognition are observed from the experienced state series.
After step S311, the process proceeds to step S312, where the state recognizing section 23 observes the observed value series and the action series for recognition obtained in step S311 in the learned extended HMM stored in the model storing section 22, and obtains an optimum state probability δt(j), which is a maximum value of state probability of being in a state Sj at time t, and an optimum path ψt(j), which is a state series providing the optimum state probability δt(j), according to the above-described equations (10) and (11) based on the Viterbi algorithm.
That is, the state recognizing section 23 obtains the optimum path ψt(j), which is a state series whose series length q is Q′ in which state series the observed value series and the action series for recognition are observed, from the experienced state series.
The state series as optimum path ψt(j) obtained (estimated) on the basis of the Viterbi algorithm will hereinafter be referred to also as a state series for recognition.
In step S312, the optimum state probability δt(j) and the state series for recognition (optimum path ψt(j)) are obtained for each of N states Sj of the extended HMM.
After obtaining the state series for recognition in step S312, the process proceeds to step S313, where the state recognizing section 23 selects one or more state series for recognition as a candidate for the present condition state series from the state series for recognition obtained in step S312. The process then makes a return.
That is, in step S313, for example the state series for recognition whose likelihood, that is, optimum state probability δt(j) is equal to or higher than a threshold value (for example a value 0.8 times a maximum value of optimum state probability δt(j) (maximum likelihood)) is selected as a candidate for the present condition state series.
Alternatively, for example R (R is an integer of one or more) state series for recognition whose optimum state probabilities δt(j) are within R highest rankings are selected as candidates for the present condition state series.
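The selection of step S313 can be sketched as follows, assuming `recognized` is a list of pairs of an optimum state probability and the corresponding state series for recognition obtained in step S312; either the relative threshold (for example 0.8 times the maximum) or the R highest-ranking series may be used.

def select_candidates(recognized, ratio=0.8, top_r=None):
    if top_r is not None:
        ranked = sorted(recognized, key=lambda pair: pair[0], reverse=True)
        return [series for _, series in ranked[:top_r]]        # R highest rankings
    max_prob = max(prob for prob, _ in recognized)
    return [series for prob, series in recognized
            if prob >= ratio * max_prob]                        # relative threshold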
In the process of calculating a candidate for the present condition state series described above, the series length q of the observed value series and the action series for recognition is fixed at the predetermined length Q′.
On the other hand, in the process of calculating a candidate for the present condition state series described next, the series length q of the observed value series and the action series for recognition is adjusted so that a state series having a longest series length in which state series the observed value series and the action series for recognition are observed is obtained from the experienced state series as a candidate for the present condition state series.
In step S321 in the process of calculating a candidate for the present condition state series, the state recognizing section 23 sets the series length q to an initial value such as one, for example. The process then proceeds to step S322.
In step S322, the state recognizing section 23 reads out a latest observed value series whose series length is a length q and an action series of actions performed when each observed value of the observed value series is observed as an observed value series and an action series for recognition from the history storing section 14 (
In step S323, the state recognizing section 23 observes the observed value series and the action series for recognition having the series length q in the learned extended HMM stored in the model storing section 22, and obtains an optimum state probability δt(j), which is a maximum value of state probability of being in a state Sj at time t, and an optimum path ψt(j), which is a state series providing the optimum state probability δt(j), according to the above-described equations (10) and (11) based on the Viterbi algorithm.
Further, the state recognizing section 23 observes the observed value series and the action series for recognition, and obtains a maximum likelihood state series, which is a state series for arriving at the state Sj which state series maximizes the optimum state probability δt(j) in equation (10) at time t, from the optimum path ψt(j) in equation (11).
The process thereafter proceeds from step S323 to step S324, where the state recognizing section 23 determines on the basis of the maximum likelihood state series whether the present conditions of the agent are known conditions or unknown conditions, as in step S303 in
When it is determined in step S324 that the present conditions are known conditions, that is, when a state series in which the observed value series and the action series for recognition (the latest observed value series and the latest action series) having the series length q are observed can be obtained from the experienced state series, the process proceeds to step S325, where the state recognizing section 23 increments the series length q by one.
The process then returns from step S325 to step S322 to repeat a similar process from step S322 on down.
When it is determined in step S324 that the present conditions are unknown conditions, that is, when a state series in which the observed value series and the action series for recognition (the latest observed value series and the latest action series) having the series length q are observed cannot be obtained from the experienced state series, the process proceeds to step S326. In subsequent steps S326 to S328, the state recognizing section 23 obtains, from the experienced state series, a state series having a longest series length in which state series the observed value series and the action series for recognition (the latest observed value series and the latest action series) are observed as a candidate for the present condition state series.
That is, in steps S322 to S325, whether the present conditions of the agent are known conditions or unknown conditions is determined on the basis of the maximum likelihood state series in which the observed value series and the action series for recognition are observed while the series length q of the observed value series and the action series for recognition is incremented by one.
Thus, when it is determined in step S324 that the present conditions are unknown conditions, a maximum likelihood state series in which the observed value series and the action series for recognition having the series length q−1, obtained by decrementing the series length q by one, are observed is present in the experienced state series as a state series (one of the state series) having a longest series length in which state series the observed value series and the action series for recognition are observed.
Accordingly, in step S326, the state recognizing section 23 reads out a latest observed value series whose series length is the length q−1 and an action series of actions performed when each observed value of the observed value series is observed as an observed value series and an action series for recognition from the history storing section 14 (
In step S327, the state recognizing section 23 observes the observed value series and the action series for recognition obtained in step S326 and having the series length q−1 in the learned extended HMM stored in the model storing section 22, and obtains an optimum state probability δt(j), which is a maximum value of state probability of being in a state Sj at time t, and an optimum path ψt(j), which is a state series providing the optimum state probability δt(j), according to the above-described equations (10) and (11) based on the Viterbi algorithm.
That is, the state recognizing section 23 obtains an optimum path ψt(j) (state series for recognition), which is a state series having the series length q−1 in which state series the observed value series and the action series for recognition are observed, from the state series of state transitions occurring in the learned extended HMM.
After obtaining the state series for recognition in step S327, the process proceeds to step S328, where the state recognizing section 23 selects one or more state series for recognition as a candidate for the present condition state series from the state series for recognition obtained in step S327, as in step S313 in
As described above, by incrementing the series length q until it is determined that the present conditions are unknown conditions and then using the observed value series and the action series for recognition having the series length q−1, obtained by decrementing the series length q by one, an appropriate candidate for the present condition state series (a state series corresponding to a structure more closely resembling the structure around the present position of the agent in the structure of the action environment obtained by the extended HMM) can be obtained from the experienced state series.
That is, when the series length of the observed value series and the action series for recognition used to obtain a candidate for the present condition state series is fixed, an appropriate candidate for the present condition state series may not be obtained if the fixed series length is too short or too long.
Specifically, when the series length of the observed value series and the action series for recognition is too short, the experienced state series include many state series having a high likelihood of the observed value series and the action series for recognition having such a series length being observed, and thus a large number of state series for recognition having high likelihoods are obtained.
As a result, when a candidate for the present condition state series is selected from such a large number of state series for recognition having high likelihoods, a possibility of a state series that represents the present conditions more properly not being selected as a candidate for the present condition state series in the experienced state series may be increased.
On the other hand, when the series length of the observed value series and the action series for recognition is too long, the experienced state series may not include a state series having a high likelihood of the observed value series and the action series for recognition having such a series length that is too long being observed, and consequently a possibility of not being able to obtain a candidate for the present condition state series may be increased.
On the other hand, as described with reference to
As a result, an action can be determined by making the most of the experienced state series.
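The adaptive series length of steps S321 to S328 can be summarized by the following schematic sketch; read_latest_series, recognize, is_known, and select_candidates are placeholder names for the history readout, the Viterbi-based recognition, the determination of equations (28) and (29), and the selection of step S313, respectively.

def candidates_with_longest_series(history, model):
    q = 1                                                        # step S321 (assumed initial value)
    while True:
        observations, actions = read_latest_series(history, q)  # step S322
        result = recognize(model, observations, actions)        # step S323
        if not is_known(result):                                 # step S324
            break                                                # present conditions are unknown
        q += 1                                                   # step S325
    observations, actions = read_latest_series(history, q - 1)  # step S326
    result = recognize(model, observations, actions)            # step S327
    return select_candidates(result)                             # step S328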
[Determination of Action according to Strategy]
The action determining section 24 determines a next action according to the first strategy, that is, a strategy of performing an action performed by the agent in known conditions resembling the present conditions of the agent, as follows.
Specifically, in step S341, the action determining section 24 selects one candidate of the one or more candidates for the present condition state series from the state recognizing section 23 which candidate is not yet set as a state series of interest to which to direct attention to be a state series of interest. The process then proceeds to step S342.
In step S342, the action determining section 24 obtains a sum of state transition probabilities of state transitions having a last state of the state series of interest as a transition source for each action Um as a degree of action appropriateness indicating appropriateness of performing the action Um (according to the first strategy) in relation to the state series of interest on the basis of the extended HMM stored in the model storing section 22.
Specifically, supposing that the last state is represented as SI (I is one of the integers 1 to N), the action determining section 24 obtains a sum of the state transition probabilities aI,1(Um), aI,2(Um), ..., aI,N(Um) arranged in the j-axis direction (horizontal direction) in the state transition probability plane with respect to each action Um as a degree of action appropriateness.
The process then proceeds from step S342 to step S343, where the action determining section 24 sets, to 0.0, the degree of action appropriateness obtained with respect to each action Um whose degree of action appropriateness is lower than a threshold value among the M (kinds of) actions U1 to UM whose degrees of action appropriateness have been obtained.
That is, by setting the degree of action appropriateness obtained with respect to the action Um whose degree of action appropriateness is lower than the threshold value to 0.0, the action determining section 24 excludes the action Um whose degree of action appropriateness is lower than the threshold value from candidates for a next action to be performed according to the first strategy in relation to the state series of interest, and consequently selects actions Um whose degree of action appropriateness is equal to or higher than the threshold value as candidates for a next action to be performed according to the first strategy.
After step S343, the process proceeds to step S344, where the action determining section 24 determines whether all the candidates for the present condition state series have been set as a state series of interest.
When it is determined in step S344 that not all the candidates for the present condition state series have been set as a state series of interest yet, the process returns to step S341. Then, in step S341, the action determining section 24 selects one candidate not yet set as a state series of interest to be a new state series of interest from the one or more candidates for the present condition state series from the state recognizing section 23. Thereafter a similar process is repeated.
When it is determined in step S344 that all the candidates for the present condition state series have been set as a state series of interest, the process proceeds to step S345, where the action determining section 24 determines a next action from the candidates for the next action on the basis of the degree of action appropriateness with respect to each action Um which degree of action appropriateness is obtained in relation to each of the one or more candidates for the present condition state series from the state recognizing section 23. The process then makes a return.
That is, the action determining section 24 for example determines that a candidate whose degree of action appropriateness is a maximum is the next action.
Alternatively, the action determining section 24 obtains an expected value (average value) of the degree of action appropriateness with respect to each action Um, and determines the next action on the basis of the expected value.
Specifically, for example, with respect to each action Um, the action determining section 24 obtains an expected value (average value) of the degree of action appropriateness with respect to the action Um, which degree of action appropriateness is obtained in relation to each of the one or more candidates for the present condition state series.
The action determining section 24 then determines on the basis of the expected value with respect to each action Um that an action Um whose expected value is a maximum, for example, is the next action.
Alternatively, the action determining section 24 determines the next action by a SoftMax method, for example, on the basis of the expected value with respect to each action Um.
Specifically, the action determining section 24 randomly generates an integer m in a range of the suffixes 1 to M of the M actions U1 to UM with a probability corresponding to an expected value with respect to the action Um having the integer m as a suffix thereof, and determines that the action Um having the generated integer m as a suffix thereof is the next action.
As described above, when an action is determined according to the first strategy, the agent performs an action performed by the agent in known conditions resembling the present conditions of the agent.
Thus, according to the first strategy, when the agent is in unknown conditions, and it is desired that the agent perform an action similar to an action taken in known conditions, the agent can be made to perform the appropriate action.
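As an illustration of steps S341 to S345, the first strategy can be sketched as follows, assuming A of shape (N, N, M) and `candidates` as a list of candidate present condition state series (lists of state indices); here the next action is determined from the expected value of the degree of action appropriateness, one of the methods described above.

import numpy as np

def choose_action_first_strategy(candidates, A, threshold):
    M = A.shape[2]
    appropriateness = np.zeros(M)
    for series in candidates:                        # steps S341, S344
        last = series[-1]
        degree = A[last, :, :].sum(axis=0)           # step S342: sum over transition destinations
        degree[degree < threshold] = 0.0             # step S343: exclude inappropriate actions
        appropriateness += degree
    expected = appropriateness / len(candidates)     # expected value over the candidates
    return int(np.argmax(expected))                  # step S345: action with the maximum expected value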
The determination of an action according to such a first strategy can be made in not only a case where the agent is in unknown conditions but also a case of determining an action to be performed after the agent has reached an open end as described above, for example.
When the agent is in unknown conditions, and the agent is made to perform an action similar to an action taken in known conditions, the agent may wander in the action environment.
When the agent wanders in the action environment, the agent may return to a known place (area) (the present conditions become known conditions), or may explore an unknown place (the present conditions continue remaining unknown conditions).
Thus, when it is desired that the agent return to a known place, or that the agent explore an unknown place, it would be hard to say that an action such that the agent wanders in the action environment is appropriate as an action to be performed by the agent.
Accordingly, the action determining section 24 can determine a next action according to not only the first strategy but also a second strategy and a third strategy in the following.
The second strategy is a strategy for increasing information enabling recognition of the (present) conditions of the agent. By determining an action according to the second strategy, an appropriate action can be determined as an action for the agent to return to a known place. As a result, the agent can return to the known place efficiently.
Specifically, in determining an action according to the second strategy, the action determining section 24 determines, as a next action, an action effecting a state transition from the last state st of one or more candidates for the present condition state series from the state recognizing section 23 to an immediately preceding state st−1 as a state immediately preceding the last state st, as shown in
In step S351, the action determining section 24 selects one candidate of the one or more candidates for the present condition state series from the state recognizing section 23 which candidate is not yet set as a state series of interest to which to direct attention to be a state series of interest. The process then proceeds to step S352.
In this case, when the series length of the candidates for the present condition state series from the state recognizing section 23 is one, and there is thus no immediately preceding state immediately preceding the last state, the action determining section 24 refers to the extended HMM (the state transition probabilities of the extended HMM) stored in the model storing section 22 before performing the process of step S351, and obtains a state from which a state transition can be made to the last state of each of the one or more candidates for the present condition state series from the state recognizing section 23 as a transition destination.
The action determining section 24 then treats a state series obtained by arranging the state from which a state transition can be made to the last state of each of the one or more candidates for the present condition state series from the state recognizing section 23 as a transition destination and the last state as a candidate for the present condition state series. The same is true for
In step S352, the action determining section 24 obtains a state transition probability of the state transition from the last state of the state series of interest to the immediately preceding state immediately preceding the last state for each action Um as a degree of action appropriateness indicating appropriateness of performing the action Um (according to the second strategy) in relation to the state series of interest on the basis of the extended HMM stored in the model storing section 22.
Specifically, the action determining section 24 obtains the state transition probability aij(Um) of the state transition from the last state Si to the immediately preceding state Sj which state transition is made when the action Um is performed as a degree of action appropriateness with respect to the action Um.
The process then proceeds from step S352 to step S353, where the action determining section 24 sets the degrees of action appropriateness obtained with respect to actions other than the action whose degree of action appropriateness is a maximum among the M (kinds of) actions U1 to UM to 0.0.
That is, by setting the degrees of action appropriateness obtained with respect to the actions other than the action whose degree of action appropriateness is a maximum to 0.0, the action determining section 24 consequently selects the action whose degree of action appropriateness is a maximum as a candidate for a next action to be performed according to the second strategy in relation to the state series of interest.
After step S353, the process proceeds to step S354, where the action determining section 24 determines whether all the candidates for the present condition state series have been set as a state series of interest.
When it is determined in step S354 that not all the candidates for the present condition state series have been set as a state series of interest yet, the process returns to step S351. Then, in step S351, the action determining section 24 selects one candidate not yet set as a state series of interest to be a new state series of interest from the one or more candidates for the present condition state series from the state recognizing section 23. Thereafter a similar process is repeated.
When it is determined in step S354 that all the candidates for the present condition state series have been set as a state series of interest, the process proceeds to step S355, where the action determining section 24 determines a next action from candidates for the next action on the basis of the degree of action appropriateness with respect to each action Um which degree of action appropriateness is obtained in relation to each of the one or more candidates for the present condition state series from the state recognizing section 23, as in step S345 in
When actions are determined according to the second strategy as described above, the agent performs actions as of retracing a path taken by the agent. As a result, information (observed values) enabling recognition of the conditions of the agent is increased.
Thus, according to the second strategy, the agent can be made to perform an appropriate action when the agent is in unknown conditions and it is desired that the agent perform an action to return to a known place.
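A corresponding sketch of the second strategy (steps S351 to S355) follows, under the assumption that each candidate present condition state series contains at least the last state and the immediately preceding state.

import numpy as np

def choose_action_second_strategy(candidates, A):
    M = A.shape[2]
    appropriateness = np.zeros(M)
    for series in candidates:                              # steps S351, S354
        last, prev = series[-1], series[-2]
        degree = A[last, prev, :]                          # step S352: probability of returning
        best = int(np.argmax(degree))                      # step S353: keep only the best action
        contribution = np.zeros(M)
        contribution[best] = degree[best]
        appropriateness += contribution
    expected = appropriateness / len(candidates)
    return int(np.argmax(expected))                        # step S355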
The third strategy is a strategy for increasing information (observed values) on unknown conditions not obtained in the extended HMM. By determining an action according to the third strategy, an appropriate action can be determined as an action for the agent to explore an unknown place. As a result, the agent can explore the unknown place efficiently.
Specifically, in determining an action according to the third strategy, the action determining section 24 determines, as a next action, an action effecting a state transition other than a state transition from the last state st of one or more candidates for the present condition state series from the state recognizing section 23 to an immediately preceding state st−1 as a state immediately preceding the last state st, as shown in
In step S361, the action determining section 24 selects one candidate of the one or more candidates for the present condition state series from the state recognizing section 23 which candidate is not yet set as a state series of interest to which to direct attention to be a state series of interest. The process then proceeds to step S362.
In step S362, the action determining section 24 obtains a state transition probability of the state transition from the last state of the state series of interest to the immediately preceding state immediately preceding the last state for each action Um as a degree of action appropriateness indicating appropriateness of performing the action Um (according to the third strategy) in relation to the state series of interest on the basis of the extended HMM stored in the model storing section 22.
Specifically, the action determining section 24 obtains the state transition probability aij(Um) of the state transition from the last state Si to the immediately preceding state Sj which state transition is made when the action Um is performed as a degree of action appropriateness with respect to the action Um.
The process then proceeds from step S362 to step S363, where the action determining section 24 detects the action whose degree of action appropriateness is a maximum among the M (kinds of) actions U1 to UM as an action effecting a state transition returning the state to the immediately preceding state (which action will hereinafter be referred to also as a return action) in relation to the state series of interest.
After step S363, the process proceeds to step S364, where the action determining section 24 determines whether all the candidates for the present condition state series have been set as a state series of interest.
When it is determined in step S364 that not all the candidates for the present condition state series have been set as a state series of interest yet, the process returns to step S361. Then, in step S361, the action determining section 24 selects one candidate not yet set as a state series of interest to be a new state series of interest from the one or more candidates for the present condition state series from the state recognizing section 23. Thereafter a similar process is repeated.
When it is determined in step S364 that all the candidates for the present condition state series have been set as a state series of interest, the action determining section 24 resets the selection of all the candidates for the present condition state series as a state series of interest. The process then proceeds to step S365.
In step S365, as in step S361, the action determining section 24 selects one candidate of the one or more candidates for the present condition state series from the state recognizing section 23 which candidate is not yet set as a state series of interest to which to direct attention to be a state series of interest. The process then proceeds to step S366.
In step S366, as in step S342 in
The process thereafter proceeds from step S366 to step S367, where the action determining section 24 sets, to 0.0, the degrees of action appropriateness obtained with respect to actions Um whose degrees of action appropriateness are lower than a threshold value, among the M (kinds of) actions U1 to UM whose degrees of action appropriateness have been obtained, and the degree of action appropriateness obtained with respect to the return action.
That is, by setting the degrees of action appropriateness obtained with respect to the actions Um whose degrees of action appropriateness are lower than a threshold value to 0.0, the action determining section 24 consequently selects actions Um whose degrees of action appropriateness are equal to or higher than the threshold value as candidates for a next action to be performed according to the third strategy in relation to the state series of interest.
Further, by setting to 0.0 the degree of action appropriateness obtained with respect to the return action among the actions Um selected in relation to the state series of interest (that is, the actions whose degrees of action appropriateness are equal to or higher than the threshold value), the action determining section 24 consequently selects the actions other than the return action as candidates for the next action to be performed according to the third strategy in relation to the state series of interest.
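A minimal sketch of the candidate selection in step S367 is given below, assuming that the degrees of action appropriateness for one state series of interest are already available as a length-M vector (how they are obtained in step S366 is outside this sketch) and that the return action has been detected as above; the names and the thresholding layout are assumptions, not the embodiment's implementation.

```python
import numpy as np

def select_candidate_actions(appropriateness, return_action, threshold):
    """Keep only the candidates for the next action under the third strategy.

    appropriateness : length-M vector of degrees of action appropriateness
                      for one state series of interest.
    return_action   : index of the return action detected in step S363.
    threshold       : degrees below this value are set to 0.0.
    Actions whose entry remains non-zero are the candidates for the next
    action in relation to this state series of interest.
    """
    filtered = np.where(appropriateness >= threshold, appropriateness, 0.0)
    filtered[return_action] = 0.0   # exclude the return action (step S367)
    return filtered
```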
After step S367, the process proceeds to step S368, where the action determining section 24 determines whether all the candidates for the present condition state series have been set as a state series of interest.
When it is determined in step S368 that not all the candidates for the present condition state series have been set as a state series of interest yet, the process returns to step S365. Then, in step S365, the action determining section 24 selects one candidate not yet set as a state series of interest to be a new state series of interest from the one or more candidates for the present condition state series from the state recognizing section 23. Thereafter a similar process is repeated.
When it is determined in step S368 that all the candidates for the present condition state series have been set as a state series of interest, the process proceeds to step S369, where the action determining section 24 determines a next action from the candidates for the next action on the basis of the degree of action appropriateness with respect to each action Um which degree of action appropriateness is obtained in relation to each of the one or more candidates for the present condition state series from the state recognizing section 23, as in step S345 in
When actions are determined according to the third strategy as described above, the agent performs actions other than the return action, that is, actions of exploring an unknown place. As a result, information on unknown conditions not obtained in the extended HMM is increased.
Thus, according to the third strategy, the agent can be made to perform an appropriate action when the agent is in unknown conditions and it is desired that the agent explore an unknown place.
As described above, the agent calculates candidates for the present condition state series as a state series for the agent to reach the present conditions on the basis of the extended HMM, and determines an action to be performed next by the agent according to a predetermined strategy using the candidates for the present condition state series. The agent can thereby determine an action on the basis of the experience obtained in the extended HMM even when the agent is not supplied with a metric for the action to be performed, such as, for example, a reward function for calculating a reward for the agent.
Incidentally, as an action determining method for eliminating the obscurity of conditions, Japanese Patent Laid-Open No. 2008-186326, for example, describes a method of determining an action by one reward function.
The process in the recognition action mode in
As described above, the second strategy increases information enabling recognition of the conditions of the agent, and the third strategy increases information on unknown conditions not obtained in the extended HMM. The second and third strategies are therefore strategies increasing some information.
The determination of an action according to the second and third strategies thus increasing some information can be made as follows in addition to the methods described with reference to
A probability Pm(O) of an observed value O being observed when the agent performs an action Um at a certain time t is expressed by equation (30).
Incidentally, ρi denotes a state probability of being in a state Si at time t.
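Equation (30) itself is not reproduced above; as a sketch consistent with the quantities already defined (the state probability ρi, the state transition probability aij(Um), and the observation probability bj(O) of the extended HMM), it would take a form such as

P_m(O) = \sum_{i=1}^{N} \sum_{j=1}^{N} \rho_i \, a_{ij}(U_m) \, b_j(O),

where N is the number of states. This is an assumed form, not a reproduction of the equation of the embodiment.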
Now supposing that an amount of information whose probability of occurrence is expressed by the probability Pm(O) is represented as I(Pm(O)), the suffix m′ of an action Um′ when the action is determined according to a strategy that increases some information is expressed by equation (31).
argmax{I(Pm(O))} in equation (31) denotes the suffix m′ that maximizes the amount of information I(Pm(O)) in braces among the suffixes m of actions Um.
Now supposing that information enabling recognition of the conditions of the agent (which information will hereinafter be referred to also as recognition enabling information) is employed as information, to determine an action Um′ according to equation (31) is to determine an action according to the second strategy that increases the recognition enabling information.
In addition, supposing that information on unknown conditions not obtained in the extended HMM (which information will hereinafter be referred to also as unknown condition information) is employed as information, to determine an action Um′ according to equation (31) is to determine an action according to the third strategy that increases the unknown condition information.
Supposing that the entropy of information whose probability of occurrence is expressed by the probability Pm(O) is represented as Ho(Pm), equation (31) can be equivalently expressed by the following equation.
That is, the entropy Ho(Pm) can be expressed by equation (32).
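Equation (32) is likewise not reproduced above; with the usual definition of entropy it would read, as a sketch,

H_O(P_m) = -\sum_{O} P_m(O) \log P_m(O).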
When the entropy Ho(Pm) of equation (32) is large, the probabilities Pm(O) of the respective observed values O being observed are uniform. Therefore, obscurity such that it is not known which observed value will be observed, and in turn where the agent is, is increased, and the possibility of obtaining information unknown to the agent, that is, information on an unknown world, as it were, is increased.
Thus, because the unknown condition information is increased by making the entropy Ho(Pm) large, equation (31) when an action is determined according to the third strategy increasing the unknown condition information can be equivalently expressed by equation (33) that maximizes the entropy Ho(Pm).
argmax{Ho(Pm)} in equation (33) denotes the suffix m′ that maximizes the entropy Ho(Pm) in braces among the suffixes m of actions Um.
On the other hand, when the entropy Ho(Pm) of equation (32) is small, the probabilities Pm(O) of the respective observed values O being observed are high only at a specific observed value. Therefore, obscurity such that it is not known which observed value will be observed, and in turn where the agent is, is eliminated, and the position of the agent is easily determined.
Thus, because the recognition enabling information is increased by making the entropy Ho(Pm) small, equation (31) when an action is determined according to the second strategy increasing the recognition enabling information can be equivalently expressed by equation (34) that minimizes the entropy Ho(Pm).
argmin{Ho(Pm)} in equation (34) denotes the suffix m′ that minimizes the entropy Ho(Pm) in braces among the suffixes m of actions Um.
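The selection expressed by equations (33) and (34) can be sketched as follows, assuming that Pm(O) has been computed for every action as outlined above; the array names and layout are assumptions of this sketch, not the embodiment's implementation.

```python
import numpy as np

def observation_entropy(p_obs):
    """Entropy H_O(P_m) of the observation distribution of each action.

    p_obs : array of shape (M, K); p_obs[m, k] is P_m(O_k), the probability
            of observed value O_k being observed when action U_m is
            performed (this layout is an assumption of the sketch).
    """
    p = np.clip(p_obs, 1e-12, 1.0)          # avoid log(0)
    return -np.sum(p * np.log(p), axis=1)   # one entropy per action

def select_action_third_strategy(p_obs):
    # equation (33): maximize the entropy -> increase unknown condition information
    return int(np.argmax(observation_entropy(p_obs)))

def select_action_second_strategy(p_obs):
    # equation (34): minimize the entropy -> increase recognition enabling information
    return int(np.argmin(observation_entropy(p_obs)))
```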
Incidentally, in addition, on the basis of relation in magnitude between a maximum value of the probability Pm(O) and a threshold value, for example, an action Um that maximizes the probability Pm(O) can be determined as a next action.
To determine an action Um that maximizes the probability Pm(O) as a next action when the maximum value of the probability Pm(O) is larger than the threshold value (equal to or larger than the threshold value) is to determine an action so as to eliminate obscurity, that is, to determine an action according to the second strategy.
On the other hand, to determine an action Um that maximizes the probability Pm(O) as a next action when the maximum value of the probability Pm(O) is equal to or less than the threshold value (less than the threshold value) is to determine an action so as to increase obscurity, that is, to determine an action according to the third strategy.
In the above, an action is determined using the probability Pm(O) of an observed value being observed when the agent performs an action Um at a certain time t. In addition, an action can be determined using a probability Pmj in equation (35) of a state transition being made from a state Si to a state Sj when the agent performs an action Um at a certain time t, for example.
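Equation (35) is not reproduced above; a form consistent with the definitions already introduced would be, as a sketch,

P_{mj} = \sum_{i=1}^{N} \rho_i \, a_{ij}(U_m),

that is, the state probability ρi of each possible transition source Si multiplied by the transition probability to the state Sj under the action Um and summed over the sources. This is an assumed form, not a reproduction of the equation of the embodiment.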
Specifically, the suffix m′ of an action Um′ when the action is determined according to a strategy that increases an amount of information I(Pmj) whose probability of occurrence is represented by the probability Pmj is expressed by equation (36).
argmax{I(Pmj)} in equation (36) denotes the suffix m′ that maximizes the amount of information I(Pmj) in braces among the suffixes m of actions Um.
Now supposing that recognition enabling information is employed as information, to determine an action Um′ according to equation (36) is to determine an action according to the second strategy that increases the recognition enabling information.
In addition, supposing that unknown condition information is employed as information, to determine an action Um′ according to equation (36) is to determine an action according to the third strategy that increases the unknown condition information.
Supposing that the entropy of information whose probability of occurrence is expressed by the probability Pmj is represented as Hj (Pm), equation (36) can be equivalently expressed by the following equation.
That is, the entropy Hj(Pm) can be expressed by equation (37).
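Equation (37), again not reproduced above, would with the usual definition of entropy take the form

H_j(P_m) = -\sum_{j=1}^{N} P_{mj} \log P_{mj},

as a sketch consistent with the surrounding description.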
When the entropy Hj(Pm) of equation (37) is large, the probabilities Pmj of the respective state transitions being made from a state Si to a state Sj are uniform. Therefore, obscurity such that it is not known which state transition will be made, and in turn where the agent is, is increased, and the possibility of obtaining information on a world unknown to the agent is increased.
Thus, because the unknown condition information is increased by making the entropy Hj(Pm) large, equation (36) when an action is determined according to the third strategy increasing the unknown condition information can be equivalently expressed by equation (38) that maximizes the entropy Hj(Pm).
argmax{Hj(Pm)} in equation (38) denotes the suffix m′ that maximizes the entropy Hj(Pm) in braces among the suffixes m of actions Um.
On the other hand, when the entropy Hj(Pm) of equation (37) is small, the probabilities Pmj of the respective state transitions being made from a state Si to a state Sj are high only for a specific state transition. Therefore, obscurity such that it is not known which state transition will be made, and in turn where the agent is, is eliminated, and the position of the agent is easily determined.
Thus, because the recognition enabling information is increased by making the entropy Hj(Pm) small, equation (36) when an action is determined according to the second strategy increasing the recognition enabling information can be equivalently expressed by equation (39) that minimizes the entropy Hj(Pm).
argmin{Hj(Pm)} in equation (39) denotes the suffix m′ that minimizes the entropy Hj(Pm) in braces among the suffixes m of actions Um.
Incidentally, in addition, on the basis of relation in magnitude between a maximum value of the probability Pmj and a threshold value, for example, an action Um that maximizes the probability Pmj can be determined as a next action.
To determine an action Um that maximizes the probability Pmj as a next action when the maximum value of the probability Pmj is larger than the threshold value (equal to or larger than the threshold value) is to determine an action so as to eliminate obscurity, that is, to determine an action according to the second strategy.
On the other hand, to determine an action Um that maximizes the probability Pmj as a next action when the maximum value of the probability Pmj is equal to or less than the threshold value (less than the threshold value) is to determine an action so as to increase obscurity, that is, to determine an action according to the third strategy.
In addition, such an action determination as to eliminate obscurity, that is, action determination according to the second strategy can be made using a-posteriori probability P(X|O) of being in a state SX when an observed value O is observed.
That is, the a-posteriori probability P(X|O) is expressed by equation (40).
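Equation (40) is not reproduced above; one form consistent with Bayes' rule and the quantities already introduced would be, as a sketch,

P(X \mid O) = \frac{\rho_X \, b_X(O)}{\sum_{X'} \rho_{X'} \, b_{X'}(O)},

where ρX denotes the state probability of being in the state SX before the observed value O is taken into account. This is an assumed form, not a reproduction of the equation of the embodiment.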
Supposing that the entropy of the a-posteriori probability P(X|O) is represented as H(P(X|O)), action determination according to the second strategy can be made by determining an action so as to decrease the entropy H(P(X|O)).
That is, action determination according to the second strategy can be made by determining an action Um according to equation (41).
argmin{ } in equation (41) denotes the suffix m′ that minimizes a value in braces among the suffixes m of actions Um.
ΣP(O)H(P(X|O)) in the braces of argmin{ } in equation (41) is the sum total of the products of the probability P(O) of an observed value O being observed and the entropy H(P(X|O)) of the a-posteriori probability P(X|O) of being in a state SX when the observed value O is observed, taken with the observed value O changed from an observed value O1 to an observed value OK, and represents the whole entropy of the observed values O1 to OK being observed when an action Um is performed.
According to equation (41), an action that minimizes the entropy ΣP(O)H(P(X|O)), that is, an action that increases the possibility of the state SX being determined uniquely from the observed value O, is determined as a next action.
Hence, to determine an action according to equation (41) is to determine an action so as to eliminate obscurity, that is, to determine an action according to the second strategy.
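A minimal sketch of the selection in equation (41) is given below, assuming that for each action Um the probability P(O) of each observed value and the a-posteriori state distribution P(X|O) are already available; the names and array layouts are assumptions of this sketch, not the embodiment's implementation.

```python
import numpy as np

def expected_posterior_entropy(p_obs, p_post):
    """Sum over O of P(O) * H(P(X|O)) for each action U_m (cf. equation (41)).

    p_obs  : shape (M, K); p_obs[m, k] is P(O_k) when action U_m is performed.
    p_post : shape (M, K, N); p_post[m, k, x] is P(S_x | O_k) after action U_m.
    (Array layouts are assumptions of this sketch.)
    """
    q = np.clip(p_post, 1e-12, 1.0)
    h_post = -np.sum(q * np.log(q), axis=2)   # H(P(X|O_k)) for each (m, k)
    return np.sum(p_obs * h_post, axis=1)     # expected posterior entropy per action

def select_action_equation_41(p_obs, p_post):
    # second strategy: the action minimizing the expected a-posteriori entropy
    return int(np.argmin(expected_posterior_entropy(p_obs, p_post)))
```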
In addition, such an action determination as to increase obscurity, that is, action determination according to the third strategy, can be made so as to maximize the amount of decrease by which the entropy H(P(X|O)) of the a-posteriori probability P(X|O) is decreased with respect to the entropy H(P(X)) of the a-priori probability P(X) of being in a state SX, assuming that this amount of decrease is an amount of unknown condition information.
That is, the a-priori probability P(X) is expressed by equation (42).
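Equation (42) is not reproduced above; a form consistent with the description, in which the observed value O is not yet known, would be, as a sketch,

P(X) = \sum_{i=1}^{N} \rho_i \, a_{iX}(U_m),

that is, the state probability of the state SX predicted from the state transition probabilities for the action Um alone. This is an assumed form, not a reproduction of the equation of the embodiment.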
An action Um′ that maximizes the amount of decrease of the entropy H(P(X|O)) of the a-posteriori probability P(X|O) with respect to the entropy H(P(X)) of the a-priori probability P(X) of being in a state SX can be determined according to equation (43).
argmax{ } in equation (43) denotes the suffix m′ that maximizes a value in braces among the suffixes m of actions Um.
According to equation (43), the probability P(O) of an observed value O being observed is multiplied by the difference H(P(X))−H(P(X|O)) between the entropy H(P(X)) of the a-priori probability P(X), which is the state probability of being in a state SX when the observed value O is not known, and the entropy H(P(X|O)) of the a-posteriori probability P(X|O) of being in the state SX when the action Um is performed and the observed value O is observed. The sum total ΣP(O)(H(P(X))−H(P(X|O))) of these products, taken with the observed value O changed from an observed value O1 to an observed value OK, is assumed to be the amount of unknown condition information increased by performing the action Um. An action that maximizes this amount of unknown condition information is determined as a next action.
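In formula form, the selection just described can be written, as a sketch consistent with that description, as

m' = \mathop{\rm argmax}_{m} \sum_{O} P(O) \left( H(P(X)) - H(P(X \mid O)) \right).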
[Selection of Strategy]
The agent can determine an action according to the first to third strategies as described with reference to
According to the second strategy, an action is determined so as to increase the recognition enabling information and eliminate obscurity, that is, so that the agent returns to a known place (area).
According to the third strategy, on the other hand, an action is determined so as to increase the unknown condition information and increase obscurity, that is, so that the agent explores an unknown place.
Incidentally, according to the first strategy, it is not known whether the agent will return to a known place or explore an unknown place; instead, the agent performs an action that it performed in known conditions similar to its present conditions.
In order to obtain the structure of the action environment widely, that is, to increase knowledge (known world) of the agent, as it were, it is necessary to determine actions such that the agent explores an unknown place.
On the other hand, for the agent to obtain the unknown place as a known place, the agent needs to return from the unknown place to the known place and learn the extended HMM (incremental learning) so as to connect the unknown place with the known place. Thus, for the agent to obtain the unknown place as a known place, an action needs to be determined so that the agent returns to the known place.
By determining actions so that the agent explores an unknown place and determining actions so that the agent returns to a known place in a well-balanced manner, the entire structure of the action environment can be modeled in the extended HMM efficiently.
Accordingly, the agent can select a strategy to be followed when determining an action from the second and third strategies on the basis of an elapsed time since the conditions of the agent became unknown conditions, as shown in
In step S381, the action determining section 24 (
In this case, the unknown condition elapsed time is the number of times that a result of recognition that the present conditions are unknown conditions is consecutively obtained in the state recognizing section 23. The unknown condition elapsed time is reset to zero when a result of recognition that the present conditions are known conditions is obtained. Thus, when the present conditions are not unknown conditions (when the present conditions are known conditions), the unknown condition elapsed time is zero.
In step S382, the action determining section 24 determines whether the unknown condition elapsed time is larger than a predetermined threshold value.
When it is determined in step S382 that the unknown condition elapsed time is not larger than the predetermined threshold value, that is, when not much time has passed during which the conditions of the agent are unknown conditions, the process proceeds to step S383, where the action determining section 24 selects the third strategy for increasing the unknown condition information as the strategy to be followed when determining an action from the second and third strategies. The process then returns to step S381.
When it is determined in step S382 that the unknown condition elapsed time is larger than the predetermined threshold value, that is, when a considerable time has passed during which the conditions of the agent are unknown conditions, the process proceeds to step S384, where the action determining section 24 selects the second strategy for increasing the recognition enabling information as the strategy to be followed when determining an action from the second and third strategies. The process then returns to step S381.
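A minimal sketch of the selection in steps S381 to S384 is given below, assuming that the unknown condition elapsed time is maintained as a counter of consecutive "unknown conditions" recognition results that is reset to zero on a "known conditions" result; the function name and threshold are illustrative assumptions.

```python
def select_strategy_by_elapsed_time(unknown_elapsed_time, threshold):
    """Select the strategy to follow when determining an action.

    unknown_elapsed_time : number of consecutive times the present
                           conditions have been recognized as unknown
                           conditions (reset to 0 on a 'known' result).
    threshold            : illustrative threshold value.
    Returns 3 (third strategy: increase unknown condition information)
    or 2 (second strategy: increase recognition enabling information).
    """
    if unknown_elapsed_time <= threshold:   # step S382 -> step S383
        return 3
    return 2                                # step S382 -> step S384
```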
In
In step S391, the action determining section 24 (
In step S392, the action determining section 24 determines whether the unknown ratio is larger than a predetermined threshold value.
When it is determined in step S392 that the unknown ratio is not larger than the predetermined threshold value, that is, when the ratio at which the conditions of the agent are unknown conditions is not very high, the process proceeds to step S393, where the action determining section 24 selects the third strategy for increasing the unknown condition information as a strategy to be followed when determining an action from the second and third strategies. The process then returns to step S391.
When it is determined in step S392 that the unknown ratio is larger than the predetermined threshold value, that is, when the ratio at which the conditions of the agent are unknown conditions is rather high, the process proceeds to step S394, where the action determining section 24 selects the second strategy for increasing the recognition enabling information as a strategy to be followed when determining an action from the second and third strategies. The process then returns to step S391.
Incidentally, while a strategy is selected on the basis of the ratio at which the conditions are unknown conditions (unknown ratio) in a result of recognition of the conditions for a predetermined proximate time in
In the case where strategy selection is made on the basis of the known ratio, the third strategy is selected as a strategy at the time of determining an action when the known ratio is higher than a threshold value, or the second strategy is selected as a strategy at the time of determining an action when the known ratio is not higher than the threshold value.
In addition, in step S383 in
By thus selecting the strategies, the entire structure of the action environment can be modeled in the extended HMM efficiently.
[Description of Computer to which Present Invention is Applied]
Next, the series of processes described above can be carried out not only by hardware but also by software. When the series of processes is to be carried out by software, a program constituting the software is installed onto a general-purpose personal computer or the like.
The program can be recorded in advance on a hard disk 105 or a ROM 103 as a recording medium included in the computer.
Alternatively, the program can be stored (recorded) on a removable recording medium 111. Such a removable recording medium 111 can be provided as so-called packaged software. In this case, the removable recording medium 111 includes, for example, a flexible disk, a CD-ROM (Compact Disk Read Only Memory), an MO (Magneto-Optical Disk), a DVD (Digital Versatile Disk), a magnetic disk, and a semiconductor memory.
In addition to being installed from the removable recording medium 111 as described above onto the computer, the program can be downloaded to the computer via a communication network or a broadcasting network and installed onto the built-in hard disk 105. Specifically, the program can be for example transferred from a download site to the computer by radio via an artificial satellite for digital satellite broadcasting or transferred to the computer by wire via networks such as a LAN (Local Area Network), the Internet, and the like.
The computer includes a CPU (Central Processing Unit) 102. The CPU 102 is connected with an input-output interface 110 via a bus 101.
When the CPU 102 is supplied with a command via the input-output interface 110 by an operation of an input section 107 by a user, for example, the CPU 102 executes the program stored in the ROM (Read Only Memory) 103 according to the command. Alternatively, the CPU 102 loads the program stored in the hard disk 105 into a RAM (Random Access Memory) 104, and then executes the program.
The CPU 102 thereby performs processes according to the above-described flowcharts or processes performed by the configurations of the above-described block diagrams. The CPU 102 then outputs a result of the processes from an output section 106, transmits the result from a communicating section 108, or records the result onto the hard disk 105 via the input-output interface 110, for example, as required.
Incidentally, the input section 107 is formed by a keyboard, a mouse, a microphone and the like. The output section 106 is formed by an LCD (Liquid Crystal Display), a speaker and the like.
In the present specification, processes performed by the computer according to the program do not necessarily need to be performed in time series in the order described in the flowcharts. That is, processes performed by the computer according to the program also include processes performed in parallel or individually (for example, parallel processing or object-based processing).
In addition, the program may be processed by one computer (processor), or may be subjected to distributed processing by a plurality of computers. Further, the program may be transferred to a remote computer and executed by the remote computer.
It is to be noted that embodiments of the present invention are not limited to the foregoing embodiments, and that various changes can be made without departing from the spirit of the present invention.
The present application contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2009-140064 filed in the Japan Patent Office on Jun. 11, 2009, the entire content of which is hereby incorporated by reference.
References Cited:
US 20130066816
JP 2006268812
JP 2007317165
JP 2008186326
JP 6161551