Specialized Controllers

Quad-SDK ships hooks for two specialized controller families: learned policies (ONNX) and the leg-disentanglement (Underbrush) controller for cluttered environments.

Learned controllers (ONNX)

Overview

The runtime can execute neural-network policies trained in IsaacLab or MuJoCo via the ONNX Runtime. Custom policy parameterizations can be created by subclassing LearnedController (which itself implements the LegController interface — see Writing your own controller).
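A minimal sketch of such a subclass, assuming a hypothetical observation-assembly hook. The include path and method names here are illustrative, not the actual interface; the real contract is defined in the LearnedController header:

#include "robot_driver/controllers/learned_controller.h"  // path is an assumption

#include <vector>

// Hypothetical subclass: redefines how the observation vector is assembled
// so it matches a custom training-time parameterization.
class MyPolicyController : public LearnedController {
public:
  MyPolicyController() {
    // Load weights and pre-allocate I/O tensors here (see Tips below).
  }

protected:
  // Illustrative hook name; check the header for the actual virtual methods.
  std::vector<float> computeObservation() {
    std::vector<float> obs;
    // Fill in training order: base pose, joint state, commanded twist.
    return obs;
  }
};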

Prerequisites

  • The ONNX Runtime C++ library installed locally, or the Quad-SDK devcontainer (which bundles it)
  • A .onnx model file with the expected input/output schema (see the LearnedController header for the contract; a schema-dump sketch follows this list)
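To verify that an exported model matches the expected contract, the ONNX Runtime C++ API can list input names and shapes. A standalone sketch (the model path is a placeholder; repeat with the GetOutput* calls for outputs):

#include <onnxruntime_cxx_api.h>

#include <iostream>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "schema_check");
  Ort::Session session(env, "policy.onnx", Ort::SessionOptions{});
  Ort::AllocatorWithDefaultOptions alloc;

  // Dump every input's name and tensor shape.
  for (size_t i = 0; i < session.GetInputCount(); ++i) {
    auto name = session.GetInputNameAllocated(i, alloc);
    auto shape = session.GetInputTypeInfo(i)
                     .GetTensorTypeAndShapeInfo()
                     .GetShape();
    std::cout << "input " << name.get() << " dims:";
    for (int64_t d : shape) std::cout << " " << d;
    std::cout << "\n";
  }
  return 0;
}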

Switching to a learned controller

  1. In quad_gazebo.py and quad_plan.py, set controller_mode to learned in the robot config.
  2. Copy your .onnx weights into robot_driver/include/robot_driver/models/.
  3. Update the model path in robot_driver/config/robot_driver.yaml (see the sketch after this list).
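The exact key layout depends on your robot_driver.yaml; a hedged sketch of what step 3 might look like (learned_model_path is an illustrative key name, not necessarily the real one):

# robot_driver/config/robot_driver.yaml (key names illustrative)
robot_driver:
  ros__parameters:
    controller_mode: learned
    learned_model_path: robot_driver/include/robot_driver/models/policy.onnx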

Run

ros2 launch quad_utils quad_gazebo.py
ros2 topic pub /robot_1/control/mode std_msgs/UInt8 "data: 1" --once

ros2 launch quad_utils quad_plan.py \
  robot_configs:='[{"name":"robot_1","type":"go2","controller_mode":"learned","reference":"twist","twist_input":"keyboard"}]'

Observation inputs

The LearnedController base class exposes everything most policies need:

  • RobotState (body pose + joint state) on every tick
  • cmd_vel (commanded body twist)
  • IMU acceleration: the latest sensor_msgs/Imu is cached on the controller via updateImuMsg() and is available alongside the body pose, so policies trained with raw accelerometer features run without a separate subscription.

If your policy was trained against IsaacLab's acceleration observation, this cached IMU message is what you read at inference time, as sketched below.
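A hedged sketch of reading these three inputs when building the observation vector; the member names (last_robot_state_, cmd_vel_, last_imu_msg_) are illustrative, not the actual LearnedController fields:

// Illustrative observation assembly; ordering must match training (see Tips).
std::vector<float> obs;

// Joint state from the latest RobotState message.
for (double q : last_robot_state_.joints.position) obs.push_back(q);
for (double v : last_robot_state_.joints.velocity) obs.push_back(v);

// Commanded body twist.
obs.push_back(cmd_vel_.linear.x);
obs.push_back(cmd_vel_.linear.y);
obs.push_back(cmd_vel_.angular.z);

// Raw acceleration from the IMU message cached by updateImuMsg().
obs.push_back(last_imu_msg_.linear_acceleration.x);
obs.push_back(last_imu_msg_.linear_acceleration.y);
obs.push_back(last_imu_msg_.linear_acceleration.z);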

Tips

Match the training observation order

The most common bug is an observation-vector ordering mismatch between training and deployment. Bake the schema into a header (or load it from a JSON file shipped next to the .onnx) rather than relying on convention.
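One way to bake the schema into a header, as a sketch (feature names and widths are illustrative):

// observation_schema.h: single source of truth for observation ordering.
#pragma once

#include <array>
#include <string_view>
#include <utility>

// Each entry: {feature name, width}. Order must match training exactly.
// A startup check can compare this against a JSON schema exported
// alongside the .onnx file.
inline constexpr std::array<std::pair<std::string_view, int>, 4> kObsSchema{{
    {"base_ang_vel", 3},
    {"joint_pos", 12},
    {"joint_vel", 12},
    {"last_action", 12},
}};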

Don't allocate per tick

Pre-allocate the input/output tensors in the constructor; reuse them every call.
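A sketch of what that looks like with the ONNX Runtime C++ API; the member names (input_buf_, input_tensor_, session_, and the name arrays) are illustrative:

// In the constructor: create tensors once over persistent buffers.
auto mem_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
input_buf_.resize(num_obs_);
output_buf_.resize(num_act_);
const int64_t in_shape[2] = {1, static_cast<int64_t>(num_obs_)};
const int64_t out_shape[2] = {1, static_cast<int64_t>(num_act_)};
input_tensor_ = Ort::Value::CreateTensor<float>(
    mem_info, input_buf_.data(), input_buf_.size(), in_shape, 2);
output_tensor_ = Ort::Value::CreateTensor<float>(
    mem_info, output_buf_.data(), output_buf_.size(), out_shape, 2);

// Every tick: overwrite input_buf_ in place, then run with the same tensors.
session_->Run(Ort::RunOptions{nullptr}, input_names_, &input_tensor_, 1,
              output_names_, &output_tensor_, 1);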

Leg-disentanglement (Underbrush)

Overview

A reactive swing-phase controller that prevents leg entanglement in cluttered natural and human-made environments using proprioceptive sensing only; no extra cameras or contact sensors are required.

In the benchmark reported in the publication, the controller succeeded in 14 of 16 lab trials.

Status

  • Available on: ROS 1 (devel branch) and ROS 2 (devel_ros2)
  • Hardware support: the ROS 2 build currently runs only on the Ghost Robotics Spirit 40; porting to other platforms is open work

Citation

Yim, J. K., Ren, J., Ologan, D., Gonzalez, S. G., and Johnson, A. M., "Proprioception and reaction for walking among entanglements," in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023.

Usage (ROS 2, Spirit 40)

ros2 launch quad_utils underbrush_gazebo.py
ros2 topic pub /robot_1/control/mode std_msgs/UInt8 "data: 1" --once
ros2 launch quad_utils quad_plan.py reference:=twist logging:=true
ros2 run body_force_estimator path_following

The swing-phase reactive logic lives in body_force_estimator/src/path_following.py. To port it to another platform, adapt the per-leg kinematic limits and contact thresholds in that file and in the matching quad_utils/config/<robot>.yaml; a sketch of the kind of values involved follows.
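A hedged sketch of the per-leg parameters you would retune for a new platform (the key names are illustrative, not the actual config schema):

# quad_utils/config/<robot>.yaml (illustrative keys)
underbrush:
  contact_torque_threshold: 4.0   # N*m; proprioceptive obstacle/contact detection
  swing_hip_limit: 0.6            # rad; per-leg kinematic limit during retraction
  retract_velocity: 2.0           # rad/s; swing-leg pullback speed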