QuadPiPS

QuadPiPS: A Perception-informed Footstep Planner for Quadrupeds With Semantic Affordance Prediction

Hardware demonstrations

Visualizations of QuadPiPS hardware deployment in indoor and outdoor settings: (a) Flat, (b) Single platform, (c) Double platform, (d) Ramped single platform, (e) Triple platform, (f) Ramped stepping stones platform. Top images show the real-world deployment and bottom images show the RViz visualization, where superpixel regions are depicted by blue points, normals, and boundaries.

Video

Abstract

This work proposes QuadPiPS, a perception-informed framework for quadrupedal foothold planning in the perception space. QuadPiPS employs a novel ego-centric local environment representation, known as the legged egocan, which is extended here to capture unique legged affordances through a joint geometric and semantic encoding that supports local motion planning and control for quadrupeds. QuadPiPS takes inspiration from the Augmented Leafs with Experience on Foliations (ALEF) planning framework to partition the foothold planning space into its discrete and continuous subspaces. To facilitate real-world deployment, QuadPiPS broadens the ALEF approach by synthesizing perception-informed, real-time, and kinodynamically feasible reference trajectories through search and trajectory optimization techniques. To support deliberate and exhaustive search, QuadPiPS over-segments the egocan floor via superpixels to provide a set of planar regions suitable for candidate footholds. Nonlinear trajectory optimization methods then compute swing trajectories to transition between selected footholds and provide long-horizon whole-body reference motions that are tracked under model predictive control and whole-body control. Benchmarking with the ANYmal C quadruped across ten simulation environments against five baselines reveals that QuadPiPS excels in safety-critical settings with limited available footholds. Real-world validation on the Unitree Go2 quadruped, equipped with a custom computational suite, demonstrates that QuadPiPS enables terrain-aware locomotion on hardware.

Overall framework

QuadPiPS workflow

Workflow for the QuadPiPS framework. Red modules represent the perception pipeline, blue modules represent the planning components, and green modules represent control. Dashed blocks represent the novel components introduced in this work. Arrows represent downstream dependencies; for example, the robot state x is incorporated into several modules downstream of the semantic egocan.

To perform foothold planning over the egocan environment representation, QuadPiPS draws inspiration from the Augmented Leafs with Experience on Foliations (ALEF) framework. However, existing ALEF implementations perform offline kinematic planning over fully known environments, and the time required to generate feasible paths ranges from seconds to minutes. In this work, an extension of the Simple Linear Iterative Clustering (SLIC) superpixel algorithm over-segments the egocan floor to facilitate exhaustive search over candidate footholds through a contact mode family transition graph. A nonlinear trajectory optimization formulation then synthesizes kinodynamically feasible whole-body reference trajectories in real time to transition the system between stances. Lastly, a Model Predictive Controller (MPC) and a Whole Body Controller (WBC) track this reference trajectory.
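As a rough illustration of the over-segmentation step, the sketch below clusters a toy height map into SLIC-style superpixels with plain NumPy k-means over (row, column, weighted height) features. All names and parameters here (`slic_like_oversegmentation`, `compactness`, the 32×32 toy platform) are illustrative assumptions, not the framework's actual implementation, which operates on the egocan floor and also recovers region normals and boundaries.

```python
import numpy as np

def slic_like_oversegmentation(height_map, n_segments=16, compactness=0.1, n_iters=10):
    """Minimal SLIC-style over-segmentation of a height map into roughly
    planar regions (hypothetical sketch, not the paper's code)."""
    h, w = height_map.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Joint spatial-geometric features: (row, col, compactness-weighted height).
    feats = np.stack(
        [rows.ravel(), cols.ravel(), height_map.ravel() / compactness], axis=1
    ).astype(float)

    # Grid-initialize cluster centers, as SLIC does, for even spatial coverage.
    side = int(np.sqrt(n_segments))
    centers = np.array(
        [[r, c, height_map[int(r), int(c)] / compactness]
         for r in np.linspace(0, h - 1, side)
         for c in np.linspace(0, w - 1, side)]
    )

    for _ in range(n_iters):
        # Assign each cell to its nearest center in the joint feature space.
        d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center from its assigned cells.
        for k in range(len(centers)):
            mask = labels == k
            if mask.any():
                centers[k] = feats[mask].mean(axis=0)

    return labels.reshape(h, w)

# Toy egocan floor: flat ground with a raised central platform.
height = np.zeros((32, 32))
height[8:24, 8:24] = 0.2
labels = slic_like_oversegmentation(height, n_segments=16)
```

The `compactness` weight trades off spatial regularity against geometric homogeneity, mirroring the same trade-off in standard SLIC.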

Planning Approach

QuadPiPS search and optimization diagram

Overall diagram of the proposed planning framework. A graph search is performed over contact mode family transitions defined through a series of geometric and kinematic constraints including a user-defined gait, kinematic reachability volumes, and stance stability. Then, the suggested contact sequence is passed to a long-horizon trajectory optimization problem to synthesize the whole body reference trajectory.

QuadPiPS adapts its core hierarchical philosophy from the Augmented Leafs with Experience on Foliations (ALEF) framework. To perform perception-informed and kinodynamically feasible quadrupedal motion planning in real time, QuadPiPS decomposes foothold planning into its discrete and continuous planning spaces. The discrete space consists of the set of potential footholds, and the continuous space consists of the whole-body configurations the robot can assume to realize those footholds.
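To illustrate the discrete layer, the hedged sketch below runs a shortest-path search over candidate footholds, admitting edges only between footholds within a kinematic reach limit. The function name, the `max_reach` parameter, and the planar foothold model are assumptions for illustration; the actual transition graph additionally encodes the user-defined gait, reachability volumes, and stance stability.

```python
import heapq
import math

def plan_contact_sequence(footholds, start, goal, max_reach=0.35):
    """Dijkstra over a foothold graph; footholds are (x, y) tuples.
    Hypothetical sketch of the discrete planning layer."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v in footholds:
            step = math.dist(u, v)
            if v == u or step > max_reach:
                continue  # outside the kinematic reachability volume
            nd = d + step
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None  # no feasible sequence under the reach constraint
    seq = [goal]
    while seq[-1] != start:
        seq.append(prev[seq[-1]])
    return seq[::-1]

# Sparse stepping stones: the direct hop is too long, so the search
# routes through the intermediate stones.
stones = [(0.0, 0.0), (0.3, 0.0), (0.6, 0.1), (0.9, 0.1)]
path = plan_contact_sequence(stones, (0.0, 0.0), (0.9, 0.1))
```

In the full framework, the sequence returned by a search like this would seed the continuous layer, where trajectory optimization computes the whole-body motions that realize each transition.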

Simulation Demonstrations

Ramped Balance Beam

Ramped Stepping Stones

Sparse Stepping Stones

Split Beams

Simulation Environments

Simulation environments

Visualization of environments with corresponding superpixels: (a) Ramp, (b) Stairs, (c) Rubble, (d) Pegboard, (e) Balance beam, (f) Ramped balance beam, (g) Ramped stepping stones, (h) Sparse stepping stones, (i) Obstructed balance beam, and (j) Winding balance beam. Top images show the Gazebo environments and bottom images show the RViz visualization. In RViz, green spheres mark the start torso pose and red spheres mark the goal torso pose; superpixels are depicted by blue boundaries, normals, and points.

Simulation Baselines

Simulation benchmarking

Simulation benchmarking results for the ten environments. For each environment, the left plot shows the success rates for all baselines, and the right plot shows task progress percentage over time for each attempted trial. Solid lines denote successes and dashed lines denote failures. Environmental feature locations are also displayed to give context on where trials failed.

Hardware Demonstrations

Ramped Single Platform

Double Platform

Stepping Stones

Triple Platform