Task planning under uncertainty using a spreading activation network
Abstract
As robotics and automation applications extend to the service sector, researchers must increasingly perform robotic actions in uncertain and unstructured environments. A traditional solution to this problem models uncertainty about the effects of actions by probabilities conditioned on the state of the environment, making it possible to select plans that have the highest probability of success in a given situation. Reactive systems take another approach to handling uncertainty, employing a set of predefined situation-response rules that make it possible to move toward the goal from any situation, whether expected or unexpected. This paper describes a planner that combines the two approaches. A proactive component generates plans that are biased toward picking the most reliable action in a given situation, and a reactive component can alter the selected actions in response to unexpected situations that may arise in uncertain environments. Action selection is driven by a spreading activation mechanism on a probabilistic network that encodes the domain knowledge. A decision-theoretic framework incorporates quantitative goal utilities and action costs into the action selection mechanism. Experiments demonstrate the planner's ability to plan with hard and soft domain constraints and action costs, to modify plans in reaction to unexpected changes in the environment or goal utilities, and to plan in situations with multiple conflicting goals.
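The decision-theoretic action selection described above can be illustrated with a minimal one-step sketch: goals inject activation proportional to their utility, candidate actions collect that activation weighted by how reliably they achieve each goal, and action costs are subtracted before choosing. This is only an assumption-laden simplification; the paper's planner spreads activation iteratively through a full probabilistic network, and all names and numbers below are invented for illustration.

```python
# Illustrative one-step spreading-activation action selection.
# All goal/action names, utilities, costs, and probabilities are
# hypothetical; the actual planner iterates activation over a network.

def select_action(goals, actions):
    """Pick the action with the highest activation minus cost.

    goals:   dict mapping goal name -> utility
    actions: dict mapping action name -> (cost, {goal: success probability})
    """
    scores = {}
    for action, (cost, effects) in actions.items():
        # Activation received from each active goal, weighted by the
        # action's probability of achieving it in this situation.
        activation = sum(goals[g] * p for g, p in effects.items() if g in goals)
        scores[action] = activation - cost
    return max(scores, key=scores.get)

goals = {"deliver_mail": 10.0, "recharge": 4.0}
actions = {
    "take_corridor_A": (1.0, {"deliver_mail": 0.9}),
    "take_corridor_B": (0.5, {"deliver_mail": 0.6, "recharge": 0.8}),
}
print(select_action(goals, actions))  # corridor B also serves the recharge goal
```

Because goal utilities enter the score directly, raising or lowering a utility at run time immediately shifts which action wins, which mirrors the abstract's claim that plans adapt to changes in goal utilities without replanning from scratch.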