# Isaac Gym Environments for Legged Robots
This repository provides the environment used to train ANYmal (and other robots) to walk on rough terrain using NVIDIA’s Isaac Gym.
It includes all components needed for sim-to-real transfer: actuator network, friction & mass randomization, noisy observations and random pushes during training.
**Maintainer**: Nikita Rudin
**Affiliation**: Robotic Systems Lab, ETH Zurich
**Contact**: [email protected]
With the shift from Isaac Gym to Isaac Sim at NVIDIA, we have migrated all the environments from this work to Isaac Lab. Following this migration, this repository will receive limited updates and support. We encourage all users to migrate to the new framework for their applications.
Information about this work’s locomotion-related tasks in Isaac Lab is available here.
### Useful Links ###
Project website: https://leggedrobotics.github.io/legged_gym/
Paper: https://arxiv.org/abs/2109.11978
### Installation ###
1. Install PyTorch 1.10 with CUDA 11.3:
    ```bash
    pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
    ```
2. Install Isaac Gym. Download Isaac Gym Preview 3 from https://developer.nvidia.com/isaac-gym, then:
    ```bash
    cd isaacgym/python && pip install -e .
    ```
    Try running an example:
    ```bash
    cd examples && python 1080_balls_of_solitude.py
    ```
    For troubleshooting, check the docs at `isaacgym/docs/index.html`.
3. Install rsl_rl (PPO implementation). Clone https://github.com/leggedrobotics/rsl_rl, then:
    ```bash
    cd rsl_rl && git checkout v1.0.2 && pip install -e .
    ```
4. Install legged_gym. Clone this repository, then:
    ```bash
    cd legged_gym && pip install -e .
    ```
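As a quick sanity check (our suggestion, not part of the official instructions), you can verify that the PyTorch build sees your GPU before launching any training:

```python
# Sanity check for the PyTorch install; versions are those pinned above.
import torch

print(torch.__version__)          # expect 1.10.0+cu113
print(torch.cuda.is_available())  # should print True on a correctly configured machine
```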
### Code Structure ###
1. Each environment is defined by an env file (`legged_robot.py`) and a config file (`legged_robot_config.py`). The config file contains two classes: one with all the environment parameters (`LeggedRobotCfg`) and one with the training parameters (`LeggedRobotCfgPPO`).
2. Each non-zero reward scale specified in `cfg` adds a function with a corresponding name to the list of elements that are summed to obtain the total reward (see the sketch after this list).
3. Tasks must be registered using `task_registry.register(name, EnvClass, EnvConfig, TrainConfig)`. This is done in `envs/__init__.py`, but can also be done from outside of this repository.
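The snippet below sketches both mechanisms; `MyRobot`, `MyRobotCfg`, `MyRobotCfgPPO` and the reward term are illustrative names (with assumed import paths), not classes shipped with the repository:

```python
# Minimal sketch of points 2 and 3 above (illustrative names, assumed import paths).
import torch
from legged_gym.envs.base.legged_robot import LeggedRobot
from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO
from legged_gym.utils.task_registry import task_registry

class MyRobotCfg(LeggedRobotCfg):
    class rewards(LeggedRobotCfg.rewards):
        class scales(LeggedRobotCfg.rewards.scales):
            base_height = -1.0  # non-zero scale -> `_reward_base_height` is summed into the total reward

class MyRobotCfgPPO(LeggedRobotCfgPPO):
    pass

class MyRobot(LeggedRobot):
    def _reward_base_height(self):
        # returns one value per env; the framework multiplies it by the scale above
        return torch.square(self.root_states[:, 2] - self.cfg.rewards.base_height_target)

# Registration, e.g. from outside this repository:
task_registry.register("my_robot", MyRobot, MyRobotCfg(), MyRobotCfgPPO())
```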
### Usage ###
1. Train:
    ```bash
    python legged_gym/scripts/train.py --task=anymal_c_flat
    ```
    - To run on CPU add the following arguments: `--sim_device=cpu`, `--rl_device=cpu` (sim on CPU and rl on GPU is possible).
    - To run headless (no rendering) add `--headless`.
    - Important: to improve performance, once the training starts press `v` to stop the rendering. You can then enable it later to check the progress.
    - The trained policy is saved in `legged_gym/logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt`, where `<experiment_name>` and `<run_name>` are defined in the train config.
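    For example, a combined invocation using these flags (our illustration, not from the original docs):
    ```bash
    # CPU-only, headless training run (flags as described above)
    python legged_gym/scripts/train.py --task=anymal_c_flat --sim_device=cpu --rl_device=cpu --headless
    ```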
2. Play a trained policy:
    ```bash
    python legged_gym/scripts/play.py --task=anymal_c_flat
    ```
    - By default, the loaded policy is the last model of the last run of the experiment folder. Other runs/model iterations can be selected by setting `load_run` and `checkpoint` in the train config, for example as sketched below.
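A sketch of that selection, assuming the `runner` fields of `LeggedRobotCfgPPO` (the class name and values are illustrative):

```python
from legged_gym.envs.base.legged_robot_config import LeggedRobotCfgPPO

class MyRobotCfgPPO(LeggedRobotCfgPPO):
    class runner(LeggedRobotCfgPPO.runner):
        load_run = -1     # -1 = last run of the experiment folder, or a specific run-folder name
        checkpoint = 500  # iteration to load; -1 = last saved model
```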
### Adding a New Environment ###
The base environment `legged_robot` implements a rough terrain locomotion task. The corresponding cfg does not specify a robot asset (URDF/MJCF) and has no reward scales.
1. Add a new folder to `envs/` with `<your_env>_config.py`, which inherits from an existing environment cfg.
2. If adding a new robot (see the config sketch after this list):
    - Add the corresponding assets to `resources/`.
    - In `cfg` set the asset path, define body names, default_joint_positions and PD gains. Specify the desired `train_cfg` and the name of the environment (python class).
    - In `train_cfg` set `experiment_name` and `run_name`.
3. Register your env in `legged_gym/envs/__init__.py`.
4. Modify/tune other parameters in your `cfg` and `cfg_train` as needed. To remove a reward, set its scale to zero. Do not modify parameters of other envs!
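A minimal config sketch for such a new robot; the robot name, asset path, joint names and gain values are placeholders, not real assets:

```python
# Hedged sketch of steps 1-2 for a hypothetical robot "myrobot".
from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO

class MyRobotCfg(LeggedRobotCfg):
    class asset(LeggedRobotCfg.asset):
        file = '{LEGGED_GYM_ROOT_DIR}/resources/robots/myrobot/urdf/myrobot.urdf'
        foot_name = 'FOOT'  # substring used to find the feet bodies

    class init_state(LeggedRobotCfg.init_state):
        default_joint_angles = {  # target angles [rad] when action = 0.0
            'joint_a': 0.0,
            'joint_b': 0.7,
        }

    class control(LeggedRobotCfg.control):
        stiffness = {'joint_a': 80.0, 'joint_b': 80.0}  # P gains [N*m/rad]
        damping = {'joint_a': 2.0, 'joint_b': 2.0}      # D gains [N*m*s/rad]

class MyRobotCfgPPO(LeggedRobotCfgPPO):
    class runner(LeggedRobotCfgPPO.runner):
        experiment_name = 'myrobot_rough'
        run_name = ''
```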
### Troubleshooting ###
1. If you get the following error: `ImportError: libpython3.8m.so.1.0: cannot open shared object file: No such file or directory`, run: `sudo apt install libpython3.8`. It is also possible that you need to set `export LD_LIBRARY_PATH=/path/to/libpython/directory`, or `export LD_LIBRARY_PATH=/path/to/conda/envs/your_env/lib` for conda users (replace `/path/to/` with the corresponding path).

### Known Issues ###
1. The contact forces reported by `net_contact_force_tensor` are unreliable when simulating on GPU with a triangle mesh terrain. A workaround is to use force sensors, but the forces are propagated through the sensors of consecutive bodies, resulting in undesirable behaviour. However, for a legged robot it is possible to add sensors to the feet/end effector only and get the expected results. When using force sensors, make sure to exclude gravity from the reported forces with `sensor_options.enable_forward_dynamics_forces`. Example:
    ```python
    sensor_pose = gymapi.Transform()
    for name in feet_names:
        sensor_options = gymapi.ForceSensorProperties()
        sensor_options.enable_forward_dynamics_forces = False  # for example gravity
        sensor_options.enable_constraint_solver_forces = True  # for example contacts
        sensor_options.use_world_frame = True  # report forces in world frame (easier to get vertical components)
        index = self.gym.find_asset_rigid_body_index(robot_asset, name)
        self.gym.create_asset_force_sensor(robot_asset, index, sensor_pose, sensor_options)
    (...)
    sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
    self.gym.refresh_force_sensor_tensor(self.sim)
    force_sensor_readings = gymtorch.wrap_tensor(sensor_tensor)
    self.sensor_forces = force_sensor_readings.view(self.num_envs, 4, 6)[..., :3]
    (...)
    self.gym.refresh_force_sensor_tensor(self.sim)
    contact = self.sensor_forces[:, :, 2] > 1.
    ```
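    In this snippet, `4` is the number of feet (one sensor per foot), each sensor reports a 6D wrench of which only the three force components are kept, and the `> 1.` test marks a foot as in contact when its vertical (z) force in the world frame exceeds 1 N.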