Purposeful Signal-symbol Relations for Manipulation Planning
Disciplines
Electrical Engineering, Electronics, Information Engineering (30%); Computer Sciences (60%); Psychology (10%)
Keywords
Task Planning, Motion Planning, Autonomous Robots, Symbol Grounding, Reinforcement Learning, Probabilistic Inference
In recent years, well-studied robotic actions such as pushing, picking, or placing have been concatenated to execute multi-step tasks using human-like instructions: pick up object A, place it on object B, push object C towards object D, and so on. In industrial settings, these instructions are carefully predefined for executing repetitive tasks in controlled environments. Artificial intelligence (AI) task planning approaches permit projecting this paradigm beyond industrial scenarios by generating the required instructions automatically for particular configurations of objects and goals. However, despite the significant efforts made in this direction, the success of robotic architectures combining task and motion planning (TAMP) is still very limited. The abstract representation of objects and actions used by AI planning methods usually ignores physical constraints that are critical to successfully executing a task: What specific movements are necessary to remove a cup from a shelf without collisions? At which precise point should a bottle be grasped for a stable pour afterwards? These physical constraints are normally evaluated after AI task planning, using computationally expensive trial-and-error strategies to find the motions for task execution in a given scenario.

We propose a new TAMP approach in which the evaluation of physical constraints starts before task planning. Our approach blends perception, task planning, and execution into a common structure called Action Context (AC), which encodes object-object and object-robot causal relations in terms of the purpose such relations serve in the context of a task: Is the relation between the robot hand and the bottle adequate for picking up the bottle in order to pour afterwards? This is a fundamental difference with respect to the traditional approach of using purpose-neutral descriptions of object relations (on, in, under, etc.), which prevents evaluating whether such a relation is suitable for executing a task. For example, picking up a bottle for pouring and picking up a bottle for placing it somewhere else define two hand-bottle relations with different motion requirements and physical constraints, which cannot be distinguished by only checking whether the bottle is in the hand.

We propose mechanisms to use ACs for AI task planning, where the feasibility of motion guides the planning process. Our TAMP approach quickly renders task plans that are physically feasible, avoiding the intensive computations of current approaches and increasing the rate of success.
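The Action Context idea can be illustrated with a small, purely hypothetical sketch; the class, field, and function names below are assumptions made for illustration and do not reflect the project's actual representation or implementation. The sketch shows how a single purpose-neutral relation (the bottle is in the hand) splits into two purpose-specific ACs with different feasibility checks, and how only feasible ACs would be offered to the task planner.

```python
# Hypothetical sketch of purpose-specific Action Contexts (ACs).
# All names and the scene encoding are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ActionContext:
    """An object-object or object-robot relation tied to the purpose it serves."""
    relation: str                         # e.g. "hand_holds(bottle)"
    purpose: str                          # e.g. "pour" or "place"
    feasibility: Callable[[Dict], bool]   # purpose-specific physical check


def feasible_acs(goal_purpose: str, scene: Dict, acs: List[ActionContext]) -> List[ActionContext]:
    """Keep only ACs whose purpose matches the goal and whose feasibility
    check passes in the current scene; a task planner would then reason
    only over these physically feasible candidates."""
    candidates = [ac for ac in acs if ac.purpose == goal_purpose]
    return [ac for ac in candidates if ac.feasibility(scene)]


# Two ACs for the same hand-bottle relation, differing only in purpose.
acs = [
    ActionContext("hand_holds(bottle)", "pour",
                  feasibility=lambda s: s.get("grasp_near_neck", False)),
    ActionContext("hand_holds(bottle)", "place",
                  feasibility=lambda s: s.get("grasp_stable", False)),
]
scene = {"grasp_near_neck": False, "grasp_stable": True}
print([ac.purpose for ac in feasible_acs("pour", scene, acs)])   # [] -> this grasp cannot serve pouring
print([ac.purpose for ac in feasible_acs("place", scene, acs)])  # ['place']
```

In the actual approach, the feasibility checks would encode geometric and physical constraints evaluated from perception before task planning, rather than the simple scene flags used in this toy example.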
Research Institution
- Universität Innsbruck - 100%
Research Output
- 4 Citations
- 4 Publications
- 2023: Unified Task and Motion Planning using Object-centric Abstractions of Motion Constraints. Preprint. Author: Agostini A. DOI: 10.48550/arxiv.2312.17605
- 2025: Bootstrapping Object-level Planning with Large Language Models. Conference Proceeding, IEEE International Conference on Robotics and Automation (ICRA), pages 16233-16239. Authors: Paulius D, Agostini A. DOI: 10.1109/icra55743.2025.11127365