• Previously, we've looked at search problems with static environments. (One agent affects the environment.)
• Now, we generalize just a bit and allow two agents to affect the environment in turn. → dynamic environment
• Previously, we've looked for a sequence of actions to a goal state.
• Now, we're looking for a sequence of actions that maximizes some utility measure regardless of how an adversarial agent acts. (See the minimax sketch below.)
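To make "maximize utility regardless of the adversary" concrete, here is a minimal minimax sketch in Python. It assumes a hypothetical GameState interface with actions(), result(action), is_terminal(), and utility() methods; these names are illustrative, not from any particular library.

```python
# Minimal minimax sketch (hypothetical GameState interface assumed).
# The maximizing agent picks the action whose worst-case outcome,
# given an optimally playing adversary, has the highest utility.

def minimax_value(state, maximizing):
    """Utility of `state` under optimal play by both agents."""
    if state.is_terminal():
        return state.utility()          # utility from the maximizer's point of view
    values = (minimax_value(state.result(a), not maximizing)
              for a in state.actions())
    return max(values) if maximizing else min(values)

def best_action(state):
    """Action that maximizes the worst-case utility for the agent to move."""
    return max(state.actions(),
               key=lambda a: minimax_value(state.result(a), maximizing=False))
```

For deep games this exhaustive recursion is impractical; standard refinements (depth cutoff with an evaluation function, alpha-beta pruning) build directly on this value definition.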