Specialized Controllers¶
Quad-SDK ships hooks for two specialized controller families: learned policies (ONNX) and the leg-disentanglement (Underbrush) controller for cluttered environments.
Learned controllers (ONNX)¶
Overview¶
The runtime can execute neural-network policies trained in IsaacLab or MuJoCo via the ONNX Runtime. Custom policy parameterizations can be created by subclassing LearnedController (which itself implements the LegController interface — see Writing your own controller).
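The subclassing pattern can be sketched as follows. This is a hypothetical skeleton only: the `LegCommandArray` type, the `computeLegCommand` name, and the constructor shape are placeholders, and the real virtual interface is defined in the `robot_driver` headers.

```cpp
#include <vector>

// Hypothetical sketch of the LearnedController subclassing pattern.
// Type and method names are illustrative; see the actual robot_driver
// headers for the real LegController / LearnedController contract.
struct LegCommandArray {
  std::vector<double> joint_torques;  // one entry per actuated joint
};

class LearnedController {
 public:
  virtual ~LearnedController() = default;
  // Map the packed observation vector to per-joint commands each tick.
  virtual LegCommandArray computeLegCommand(
      const std::vector<double>& obs) = 0;
};

class MyOnnxPolicy : public LearnedController {
 public:
  LegCommandArray computeLegCommand(
      const std::vector<double>& /*obs*/) override {
    LegCommandArray cmd;
    cmd.joint_torques.assign(12, 0.0);  // placeholder for ONNX inference output
    return cmd;
  }
};
```

A custom parameterization only needs to override the command-computation hook; the base class handles state ingestion.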
Prerequisites¶
- C++ ONNX Runtime installed locally, or the Quad-SDK devcontainer (which bundles it)
- A `.onnx` model file with the expected input/output schema (see the `LearnedController` header for the contract)
Switching to a learned controller¶
- In `quad_gazebo.py` and `quad_plan.py`, set `controller_mode` to `learned` in the robot config.
- Copy your `.onnx` weights into `robot_driver/include/robot_driver/models/`.
- Update the model path in `robot_driver/config/robot_driver.yaml`.
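A hedged sketch of what the YAML entry might look like; the actual parameter names and file layout come from `robot_driver/config/robot_driver.yaml` in your checkout, and `learned_policy_path` here is a hypothetical key.

```yaml
# Hypothetical config fragment; check robot_driver.yaml for the real keys.
robot_driver:
  ros__parameters:
    controller_mode: learned
    learned_policy_path: "robot_driver/include/robot_driver/models/my_policy.onnx"
```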
Run¶
```shell
ros2 launch quad_utils quad_gazebo.py
ros2 topic pub /robot_1/control/mode std_msgs/UInt8 "data: 1" --once
ros2 launch quad_utils quad_plan.py \
  robot_configs:='[{"name":"robot_1","type":"go2","controller_mode":"learned","reference":"twist","twist_input":"keyboard"}]'
```
Observation inputs¶
The LearnedPolicy base class exposes everything most policies need:

- `RobotState` (body pose + joint state) on every tick
- `cmd_vel` (commanded body twist)
- IMU acceleration: the latest `sensor_msgs/Imu` is cached on the controller via `updateImuMsg()` and is available alongside the body pose. This lets policies trained with raw accel features run without a separate subscription.
If your policy was trained against IsaacLab's accel observation, this is what you read at inference time.
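The inputs above have to be concatenated into one flat tensor in the exact order the policy saw during training. A minimal sketch of such a packing helper, with hypothetical names and example dimensions (7 pose values, 24 joint values, 6 twist values, 3 accel values; your policy's layout may differ):

```cpp
#include <vector>

// Hypothetical helper: concatenates observation groups in a fixed order.
// The order and dimensions must match the training-time observation schema.
std::vector<float> packObservation(const std::vector<float>& body_pose,    // e.g. 7
                                   const std::vector<float>& joint_state,  // e.g. 24
                                   const std::vector<float>& cmd_vel,      // e.g. 6
                                   const std::vector<float>& imu_accel) {  // e.g. 3
  std::vector<float> obs;
  obs.reserve(body_pose.size() + joint_state.size() +
              cmd_vel.size() + imu_accel.size());
  obs.insert(obs.end(), body_pose.begin(), body_pose.end());
  obs.insert(obs.end(), joint_state.begin(), joint_state.end());
  obs.insert(obs.end(), cmd_vel.begin(), cmd_vel.end());
  obs.insert(obs.end(), imu_accel.begin(), imu_accel.end());
  return obs;
}
```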
Tips¶
Match the training observation order
The most common bug is an observation-vector ordering mismatch between training and deployment. Bake the schema into a header (or load it from a JSON file shipped next to the `.onnx`) rather than relying on convention.
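One way to bake the schema into a header is compile-time constants, so deployment cannot silently drift from training. The field names and dimensions below are hypothetical examples, not the actual Quad-SDK schema:

```cpp
#include <array>
#include <cstddef>

// Hypothetical schema header: field order and sizes are fixed at compile time.
enum ObsField : std::size_t {
  kBodyPose = 0, kJointPos, kJointVel, kCmdVel, kImuAccel, kNumFields
};

// Example dimensions only; must mirror the training configuration.
constexpr std::array<std::size_t, kNumFields> kObsDims = {7, 12, 12, 6, 3};

// Offset of a field within the flat observation vector.
constexpr std::size_t obsOffset(ObsField f) {
  std::size_t off = 0;
  for (std::size_t i = 0; i < static_cast<std::size_t>(f); ++i) off += kObsDims[i];
  return off;
}

constexpr std::size_t kObsSize = obsOffset(kNumFields);
```

Any reordering during training then forces a single-file change here instead of a silent runtime mismatch.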
Don't allocate per tick
Pre-allocate the input/output tensors in the constructor; reuse them every call.
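The pre-allocation pattern can be sketched like this; the class and method names are hypothetical, but the idea is that both tensors are sized once in the constructor and only overwritten in place on every control tick:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch: allocate input/output buffers once, reuse every tick.
class PolicyBuffers {
 public:
  PolicyBuffers(std::size_t obs_size, std::size_t act_size)
      : input_(obs_size, 0.0f), output_(act_size, 0.0f) {}

  // Overwrite the observation in place; no per-tick allocation.
  void writeObservation(const std::vector<float>& obs) {
    std::copy(obs.begin(), obs.end(), input_.begin());
  }

  std::vector<float>& input() { return input_; }
  std::vector<float>& output() { return output_; }

 private:
  std::vector<float> input_;
  std::vector<float> output_;
};
```

The same buffers can then be wrapped as ONNX Runtime tensors once and fed to the session on every call.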
Leg-disentanglement (Underbrush)¶
Overview¶
A reactive swing-phase controller that prevents leg entanglements in cluttered natural and human-made environments using proprioceptive sensing only — no extra cameras or contact sensors required.
In the benchmark reported in the publication, the controller succeeded in 14 of 16 lab trials.
Status¶
- Available on: ROS 1 (`devel` branch) and ROS 2 (`devel_ros2` branch)
- Hardware support: the ROS 2 build currently runs only on the Ghost Robotics Spirit 40; porting to other platforms is open work
Citation¶
Yim, J. K., Ren, J., Ologan, D., Gonzalez, S. G., & Johnson, A. M. Proprioception and reaction for walking among entanglements. IEEE/RSJ IROS, 2023.
Usage (ROS 2, Spirit 40)¶
```shell
ros2 launch quad_utils underbrush_gazebo.py
ros2 topic pub /robot_1/control/mode std_msgs/UInt8 "data: 1" --once
ros2 launch quad_utils quad_plan.py reference:=twist logging:=true
ros2 run body_force_estimator path_following
```
The swing-phase reactive logic lives in `body_force_estimator/src/path_following.py`. To port to other platforms, adapt the per-leg kinematic limits and contact thresholds in that file plus the matching `quad_utils/config/<robot>.yaml`.
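As an illustration of the kind of values a port touches, a hedged YAML fragment follows. Every key name here is hypothetical; the real parameter names and units are whatever `quad_utils/config/<robot>.yaml` defines for your platform.

```yaml
# Hypothetical porting fragment; consult the actual <robot>.yaml for real keys.
underbrush:
  contact_torque_threshold: 2.5   # per-leg torque level treated as entanglement contact
  swing_retract_height: 0.12     # retraction target, must stay in the new leg workspace
```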