OmniSafe Statistics Tools#

Usage Example#

Here we provide a simple example of how to use the StatisticsTools class. Suppose you want to tune the algo_cfgs:batch_size parameter of your algorithm; then your run_experiment_grid.py file could look like this:

# `train` is the training entry function defined earlier in
# examples/benchmarks/run_experiment_grid.py.
from omnisafe.common.experiment_grid import ExperimentGrid

if __name__ == '__main__':
    eg = ExperimentGrid(exp_name='Example')

    # Set the algorithms.
    example_policy = ['PPOLag', 'TRPOLag']

    # Set the environments.
    eg.add('env_id', 'SafetyAntVelocity-v1')

    eg.add('algo', example_policy)
    eg.add('train_cfgs:torch_threads', [1])
    eg.add('algo_cfgs:batch_size', [32, 64, 128])
    eg.add('logger_cfgs:use_wandb', [False])
    eg.add('seed', [0])
    # The total number of experiments is ideally divisible by num_pool;
    # choose num_pool according to the resources of your machine.
    eg.run(train, num_pool=6, gpu_id=None)

Then run the experiment with the following commands:

cd ~/omnisafe/examples/benchmarks
python run_experiment_grid.py

While the experiment is running, you can monitor its progress in omnisafe/examples/benchmarks/exp-x/Example.

Each experiment is named with a hash value that encodes its particular set of parameters. In this example the grid spans 6 parameter combinations (2 algorithms × 3 batch sizes), so 6 hash values are generated, one per combination.

In this example, they are:

SafetyAntVelocity-v1---1f58ce80fc9540b32a925d95694e3f836f80a5511e9e5c834e77195a2e9c3944
SafetyAntVelocity-v1---7a451ea3e08cfb6caf64d05c307be9b6c32a509dc425f758387f90f96939d720
SafetyAntVelocity-v1---7cefb92954e284496a08c3ca087af3971f8a37ba1845242208ef2c6afcaf4d27
SafetyAntVelocity-v1---564ef55d6dac0002b8ecf848a240fe05de8639cc33229b4f773157dd2f828e71
SafetyAntVelocity-v1---9997d3e3b2555d9f0da2703b24b376aa5ddd73d8abaffe95288b23bfd7304779
SafetyAntVelocity-v1---50699a2818176e088a359b124296d67ac6fb130336c5f7b66f356b34f361e356
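
The exact naming scheme is internal to ExperimentGrid, but the idea is that a stable hash is derived from each variant's parameter set. A minimal sketch of the concept (config_hash is a hypothetical helper, not part of the OmniSafe API):

import hashlib
import json

def config_hash(variant: dict) -> str:
    # Hypothetical: serialize the parameters deterministically, then hash.
    serialized = json.dumps(variant, sort_keys=True)
    return hashlib.sha256(serialized.encode()).hexdigest()

print(config_hash({'algo': 'PPOLag', 'algo_cfgs': {'batch_size': 64}}))

Because the serialization is deterministic, the same parameter set always maps to the same experiment name.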

After the experiment is finished, you can use the ~/omnisafe/examples/analyze_experiment_results.py script to analyze the results. For example, to plot the average return/cost of the SafetyAntVelocity-v1 environment, set up the script as follows:

from omnisafe.common.statistics_tools import StatisticsTools

# Just fill in the path in which the experiment grid ran.
PATH = '/home/gaiejj/PKU/omnisafe_zjy/examples/benchmarks/exp-x/Example'

if __name__ == '__main__':
    st = StatisticsTools()
    st.load_source(PATH)
    # Fill in the name of the parameter whose values you want to compare.
    # You can either specify the exact values to compare (`values`), or
    # specify at most how many values appear in a single graph
    # (`compare_num`) and let the function generate all possible
    # combinations. The two modes cannot be used at the same time.
    st.draw_graph(
        parameter='algo_cfgs:batch_size',
        values=None,
        compare_num=3,
        cost_limit=None,
        show_image=True,
    )

Then 2 images will be generated in the ~/omnisafe/examples/ directory. Each image is named with the hash values of the experiments it compares; to see which parameters a hash corresponds to, refer to the matching subdirectory in the experiment results.

Statistics Tools#

Documentation

class omnisafe.common.statistics_tools.StatisticsTools[source]#

Analyze the results of experiments launched by the experiment grid.

Users can choose any parameters to compare the results, which helps them find the best hyperparameters faster.

Variables:
  • grid_config_dir (str) – The directory of grid_config.json.

  • decompressed_grid_config (dict[str, Any]) – The decompressed grid_config.json.

  • path_map_img_name (dict[str, Any]) – The map from path to image name.

  • grid_config (dict[str, Any]) – The grid_config.json.

  • exp_dir (str) – The experiment directory.

  • plotter (Plotter) – The plotter.

Initialize an instance of StatisticsTools.

_variants(keys, vals)[source]#

Recursively builds a list of valid variants.

Parameters:
  • keys (list[str]) – The keys of the config.

  • vals (list[Any]) – The values of the config.

Returns:

List of valid variants.

Return type:

list[dict[str, Any]]

combine(sequence, num_choosen)[source]#

Combine the elements of a sequence into combinations of n elements.

Parameters:
  • sequence (list[str]) – The sequence to be combined.

  • num_choosen (int) – The number of elements to be combined.

Returns:

The generator of the combined elements.

Return type:

Generator
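
The description above suggests combine behaves like the standard library's combinations; a minimal sketch of the expected behavior, assuming that equivalence:

from itertools import combinations

# Choosing 2 out of 3 experiment paths yields 3 groups:
paths = ['exp_a', 'exp_b', 'exp_c']
print(list(combinations(paths, 2)))
# [('exp_a', 'exp_b'), ('exp_a', 'exp_c'), ('exp_b', 'exp_c')]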

decompress_key(compressed_key, value)[source]#

Convert the custom configurations to a nested dict.

Note

For example, if the custom configuration key is train_cfgs:use_wandb with the value True, the output dict will be {'train_cfgs': {'use_wandb': True}}.

Parameters:
  • compressed_key (str) – The compressed key.

  • value (Any) – The value of the compressed key.

Returns:

The decompressed dict.

Return type:

dict[str, Any]
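
A short usage sketch based on the documented example above:

from omnisafe.common.statistics_tools import StatisticsTools

st = StatisticsTools()
nested = st.decompress_key('train_cfgs:use_wandb', True)
print(nested)  # {'train_cfgs': {'use_wandb': True}}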

dict_permutations(input_dict)[source]#

Generate all possible combinations of the values in a dictionary.

Takes a dictionary with string keys and list values, and returns a dictionary with all possible combinations of the lists as values for each key.

Parameters:

input_dict (dict[str, Any]) – The input dictionary.

Returns:

The list of all possible combinations of the values in a dictionary.

Return type:

list[dict[str, Any]]
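
A sketch of the expected behavior, assuming the result is the cross product of the list values (the exact return shape may differ):

from omnisafe.common.statistics_tools import StatisticsTools

st = StatisticsTools()
grid = {'algo': ['PPOLag', 'TRPOLag'], 'seed': [0, 1]}
# Expected: 4 dicts covering the cross product, e.g.
# {'algo': 'PPOLag', 'seed': 0}, {'algo': 'PPOLag', 'seed': 1}, ...
print(st.dict_permutations(grid))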

draw_graph(parameter, values=None, compare_num=None, cost_limit=None, smooth=1, show_image=False)[source]#

Draw graph.

Parameters:
  • parameter (str) – The parameter to compare.

  • values (list[Any] or None, optional) – The values of the parameter to compare. Defaults to None.

  • compare_num (int or None, optional) – The number of values to compare. Defaults to None.

  • cost_limit (float or None, optional) – The cost limit of the experiment. Defaults to None.

  • smooth (int, optional) – The smooth window size. Defaults to 1.

  • show_image (bool, optional) – Whether to show the graph image in a GUI window. Defaults to False.

Return type:

None

Note

values and compare_num cannot be set at the same time.
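
A minimal sketch of the two mutually exclusive modes, where st is a StatisticsTools instance that has already called load_source (as in the usage example above):

# values mode: compare exactly these batch sizes in one graph.
st.draw_graph(parameter='algo_cfgs:batch_size', values=[32, 64])

# compare_num mode: automatically draw every 2-value combination.
st.draw_graph(parameter='algo_cfgs:batch_size', compare_num=2)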

get_compressed_key(dictionary, key)[source]#

Get the compressed value of the key.

Parameters:
  • dictionary (dict[str, Any]) – The uncompressed dictionary.

  • key (str) – The key.

Returns:

The compressed value of the key.

Return type:

Any
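
A sketch of the expected behavior, assuming the compressed key is resolved by traversing the nested dict along its colon-separated parts:

from omnisafe.common.statistics_tools import StatisticsTools

st = StatisticsTools()
config = {'algo_cfgs': {'batch_size': 64}}
print(st.get_compressed_key(config, 'algo_cfgs:batch_size'))  # 64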

load_source(path)[source]#

Load experiment results.

Parameters:

path (str) – The experiment directory.

Return type:

None

make_config_groups(parameter, parameter_values, values=None, compare_num=None)[source]#

Make config groups.

Each group contains a list of config paths to compare.

Warning

values and compare_num cannot be set at the same time.

Parameters:
  • parameter (str) – The parameter to compare.

  • parameter_values (list[str]) – The values of the parameter to compare.

  • values (list[Any] or None, optional) – The values of the parameter to compare. Defaults to None.

  • compare_num (int or None, optional) – The number of values to compare. Defaults to None.

Returns:

A list of graph paths.

Return type:

list[dict[tuple[str, Any], str]]

update_dict(total_dict, item_dict)[source]#

Update a multi-level dictionary in place.

Parameters:
  • total_dict (dict[str, Any]) – The total dictionary.

  • item_dict (dict[str, Any]) – The item dictionary.

Return type:

None
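
A sketch of the expected in-place merge, assuming nested keys are merged rather than overwritten:

from omnisafe.common.statistics_tools import StatisticsTools

st = StatisticsTools()
total = {'train_cfgs': {'torch_threads': 1}}
item = {'train_cfgs': {'vector_env_nums': 4}}
st.update_dict(total, item)
# Expected: {'train_cfgs': {'torch_threads': 1, 'vector_env_nums': 4}}
print(total)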

variants(keys, vals)[source]#

Makes a list of dicts, where each dict is a valid config in the grid.

There is special handling for variant parameters whose names take the form

'full:param:name'

The colons are taken to indicate that these parameters should have a nested dict structure. For example, if there are two params,

Key               Val
'base:param:a'    1
'base:param:b'    2

the variant dict will have the structure

variant = {
    'base': {
        'param': {
            'a': 1,
            'b': 2,
        },
    },
}

Parameters:
  • keys (list[str]) – The keys of the config.

  • vals (list[Any]) – The values of the config.

Returns:

List of valid, non-duplicate variants.

Return type:

list[dict[str, Any]]
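
A usage sketch based on the example above, assuming vals holds the list of candidate values for each key:

from omnisafe.common.statistics_tools import StatisticsTools

st = StatisticsTools()
keys = ['algo', 'base:param:a', 'base:param:b']
vals = [['PPOLag'], [1], [2]]
# Expected: a single variant with the colon-separated keys nested:
# [{'algo': 'PPOLag', 'base': {'param': {'a': 1, 'b': 2}}}]
print(st.variants(keys, vals))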