Usage Video#

Quick Facts#

  1. Here we provide examples of how to use OmniSafe.

  2. You can train a policy by running omnisafe train.

  3. You can customize the configuration of the algorithm by running omnisafe train-config.

  4. You can run a benchmark by running omnisafe benchmark.

  5. You can run an evaluation by running omnisafe eval.

  6. You can get help by running omnisafe --help.

Train Policy#

Example

You can train a policy by running:

omnisafe train --algo PPO --total-steps 1024 --vector-env-nums 1 \
    --custom-cfgs algo_cfgs:steps_per_epoch --custom-cfgs 512

Here we provide a video example:

Hint

The above command will train a policy with the PPO algorithm, with a total of 1024 training steps and 1 vectorized environment. The algo_cfgs:steps_per_epoch entry sets the update cycle of the PPO algorithm, which means that the policy will be updated every 512 steps.
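
If you prefer the Python API to the command line, the run above can be reproduced roughly as follows. This is a minimal sketch: the environment id and the custom_cfgs keys are assumptions that mirror the CLI flags, so check the configuration schema of your installed OmniSafe version.

# Minimal sketch of the same training run via the Python API.
# 'SafetyPointGoal1-v0' and the custom_cfgs keys are assumptions that
# mirror the CLI flags above; your version's schema may differ.
import omnisafe

env_id = 'SafetyPointGoal1-v0'
custom_cfgs = {
    'train_cfgs': {
        'total_steps': 1024,     # --total-steps 1024
        'vector_env_nums': 1,    # --vector-env-nums 1
    },
    'algo_cfgs': {
        'steps_per_epoch': 512,  # --custom-cfgs algo_cfgs:steps_per_epoch --custom-cfgs 512
    },
}

agent = omnisafe.Agent('PPO', env_id, custom_cfgs=custom_cfgs)
agent.learn()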

Customize Configuration#

Example

You can also customize the configuration of the algorithm by running:

omnisafe train-config "./saved_source/train_config.yaml"

Here we provide a video example:

Hint

The above command will use the configuration file train_config.yaml in the saved_source directory to train a policy. We have provided an example showing the layout of the configuration file. You can customize the configuration of the algorithm in this file.
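
If you would rather drive the same configuration from Python, one option is to load the YAML file yourself and hand it to the training API. The snippet below is only a sketch: the algo and env_id keys inside train_config.yaml are assumptions made for illustration, and the omnisafe.Agent interface may differ across versions.

# Sketch: read a YAML configuration file and train with it.
# The 'algo' and 'env_id' keys are assumed for illustration; your
# train_config.yaml may organize these settings differently.
import yaml
import omnisafe

with open('./saved_source/train_config.yaml', 'r', encoding='utf-8') as file:
    cfg = yaml.safe_load(file)

algo = cfg.pop('algo', 'PPO')                      # assumed key
env_id = cfg.pop('env_id', 'SafetyPointGoal1-v0')  # assumed key

agent = omnisafe.Agent(algo, env_id, custom_cfgs=cfg)
agent.learn()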

Run Benchmark#

Example

You can run a benchmark by running:

omnisafe benchmark test_benchmark 2 "./saved_source/benchmark_config.yaml"

Here we provide a video example:

Hint

The above command will run a benchmark named test_benchmark with 2 parallel processes. The configuration file benchmark_config.yaml is in the saved_source directory. We have provided an example showing the layout of the configuration file. You can customize the configuration of the benchmark in this file.
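
To make "2 parallel processes" concrete, the sketch below launches two training commands at the same time with Python's subprocess module. It only illustrates the idea of parallel runs: the real benchmark takes its algorithms and settings from benchmark_config.yaml, and the algorithm list here is a placeholder.

# Illustration only: launch two training runs in parallel, which is
# roughly what a benchmark with 2 parallel processes amounts to.
# The algorithms and step counts are placeholders, not benchmark defaults.
import subprocess

commands = [
    ['omnisafe', 'train', '--algo', 'PPO', '--total-steps', '1024'],
    ['omnisafe', 'train', '--algo', 'PPOLag', '--total-steps', '1024'],
]

processes = [subprocess.Popen(cmd) for cmd in commands]  # start both runs
for process in processes:
    process.wait()  # block until every run has finished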

Run Evaluation#

Example

You can run an evaluation by running:

omnisafe eval ./saved_source/PPO-{SafetyPointGoal1-v0} --num-episode 1

Here we provide a video example:

Hint

The above command will run an evaluation of a trained policy. The model parameters are stored in the saved_source directory.
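
The same evaluation can be scripted from Python. The sketch below assumes an omnisafe.Evaluator class and a torch_save folder of .pt checkpoints inside the run directory; the directory layout and method names may differ in your OmniSafe version, so treat it as a starting point rather than a reference.

# Sketch: evaluate the saved policy checkpoints of a finished run.
# The 'torch_save' folder and the Evaluator methods are assumptions
# about the layout of a saved run; adjust them to your setup.
import os
import omnisafe

save_dir = './saved_source/PPO-{SafetyPointGoal1-v0}'
evaluator = omnisafe.Evaluator()
for item in os.scandir(os.path.join(save_dir, 'torch_save')):
    if item.is_file() and item.name.endswith('.pt'):
        evaluator.load_saved(save_dir=save_dir, model_name=item.name)
        evaluator.evaluate(num_episodes=1)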

Get Help#

Example

If you have any questions, you can get help by running:

omnisafe --help

Then you will see the help message.

Hint

The above command will show the help information of OmniSafe. If that does not answer your question, feel free to open an issue.