ABSTRACT
In this paper, we review PUG, a cognitive architecture for embodied agents, and report extensions that let it represent and reason about spatial relations. The framework posits graded concepts that are grounded in perception and integrates symbolic reasoning with continuous control. After describing the architecture, we discuss how an extended version supports places, encoded as virtual objects defined by distances to reference entities, and reasons about them as if they were visible. We demonstrate PUG’s control of a simulated robot that approaches targets and avoids obstacles in a simple two-dimensional environment. In closing, we discuss related research on agent architectures, robotic control, and spatial cognition, along with our plans to extend the framework’s capabilities for spatial representation and reasoning.
Acknowledgments
The research reported in this article was carried out while the first author was at Stanford University's Center for Design Research; it was supported by Grant No. FA9550-20-1-0130 from the US Air Force Office of Scientific Research, which is not responsible for its contents. We thank participants in the Dagstuhl Seminar on Representing and Solving Spatial Problems for useful discussions that contributed to the ideas reported here, as well as the reviewers for their constructive feedback.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 PUG expands to Planning with Utilities and Goals, which reflects the framework's concern with task planning that is guided by both symbolic goals and numeric utilities associated with these structures. Here we focus on the most recent version of the architecture, PUG/C (Langley & Katz, 2022), which extends previous incarnations (Langley et al., 2016, 2017) to support continuous control. Earlier papers provide details about task-level plan generation, execution, and monitoring, which we will not address here, as they have few direct implications for spatial cognition.
2 In these cases, veracity is a piecewise-linear function of attribute values, but other relations are possible, such as an inverse-square falloff.
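To make the idea concrete, a piecewise-linear veracity score can be sketched as below. The function name, parameterization, and the trapezoidal shape are illustrative assumptions for exposition, not PUG's actual implementation.

```python
def veracity(value, lower, peak_low, peak_high, upper):
    """Illustrative trapezoidal veracity: rises linearly from 0 at `lower`
    to 1 over [peak_low, peak_high], then falls linearly back to 0 at
    `upper`. Parameter names and shape are assumptions, not PUG's code."""
    if value <= lower or value >= upper:
        return 0.0
    if value < peak_low:
        return (value - lower) / (peak_low - lower)
    if value <= peak_high:
        return 1.0
    return (upper - value) / (upper - peak_high)

# A graded "near" concept over distance, fully true between 0.5 and 1.5 units
veracity(1.0, 0.0, 0.5, 1.5, 3.0)   # inside the plateau, veracity 1.0
veracity(2.25, 0.0, 0.5, 1.5, 3.0)  # on the falling edge, veracity 0.5
```

An inverse-square variant would simply substitute a different falloff for the two linear segments.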
3 Clearly, this will not scale to complex environments; an important direction for future work is to develop an anytime approach to conceptual inference that is guided by utility, like that reported by Asgharbeygi et al. (2005). The architecture should also take advantage of truth-maintenance methods (Stanojevic et al., 1994) to eliminate redundant inferences across cycles.
4 The architecture also sums the effects of matched processes, which we have described elsewhere (Langley & Katz, 2022), to predict the combined influence of these control settings, as well as those of natural factors.
5 The examples here assume that reference objects are described as points, but the architectural framework allows more complex entities that comprise multiple points or surfaces.
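One simple reading of the point-reference case is classical two-circle intersection: a place defined by exact distances to two point references corresponds to the intersection of two circles. The function below is a hypothetical sketch under that assumption; PUG's actual place representation may use graded distance constraints rather than exact ones.

```python
import math

def place_candidates(p0, r0, p1, r1):
    """Candidate 2-D locations for a place defined by exact distances r0, r1
    to point references p0, p1 (two-circle intersection). Returns zero, one,
    or two (x, y) points. Illustrative only; not PUG's representation."""
    (x0, y0), (x1, y1) = p0, p1
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []                                   # no consistent location
    a = (r0**2 - r1**2 + d**2) / (2 * d)            # distance from p0 to chord
    h = math.sqrt(max(r0**2 - a**2, 0.0))           # half-length of chord
    mx = x0 + a * (x1 - x0) / d                     # chord midpoint
    my = y0 + a * (y1 - y0) / d
    if h == 0:
        return [(mx, my)]                           # circles are tangent
    ox, oy = h * (y1 - y0) / d, h * (x1 - x0) / d
    return [(mx + ox, my - oy), (mx - ox, my + oy)]
```

With multi-point or surface references, each constraint would instead carve out a band or region, and the place would be their (possibly graded) intersection.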
6 The definition for place P has no :veracity field, so its associated beliefs will have the default score of one, as with percepts.
7 This unified approach differs from that in Soar (Laird, personal communication), which calls on a PID controller as a subroutine.