algorithms package

Submodules

algorithms.utils module

algorithms.utils.build_baseline(args, env)[source]
Parameters
  • args – arguments for building the baseline

  • env – the environment

Returns

baseline model object
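
A minimal usage sketch; args is assumed to be the namespace produced by the project's argument parser and env an environment built with build_ngsim_env below, so neither is defined here.

    from algorithms.utils import build_baseline

    # `args` and `env` are placeholders for the parsed experiment arguments
    # and a previously built environment; their contents are project-specific.
    baseline = build_baseline(args, env)  # the baseline model object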

algorithms.utils.build_critic(args, data, env, writer=None)[source]

Build a critic handler.
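
A sketch of how the critic builder might be called; args, data, env, and writer are placeholders, since the docstring does not specify their types (data is presumably the dataset the critic is trained against, and writer an optional logging writer).

    from algorithms.utils import build_critic

    # All arguments are placeholders; `writer` may be omitted (it defaults to None).
    critic = build_critic(args, data, env, writer=writer)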

algorithms.utils.build_ngsim_env(args, n_veh=1, alpha=0.001)[source]
Parameters
  • args – arguments for building the environment

  • n_veh – number of vehicles in one episode

Returns

the env, trajectory information, and the lower and upper bounds of the action space
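
A minimal sketch, assuming the four return values unpack in the order listed in the Returns entry; args again stands in for the parsed experiment arguments.

    from algorithms.utils import build_ngsim_env

    # Unpack in the documented order: env, trajectory information,
    # and the lower/upper bounds of the action space.
    env, trajinfos, act_low, act_high = build_ngsim_env(args, n_veh=1, alpha=0.001)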

algorithms.utils.build_policy(args, env, mode: int = 0)[source]
Parameters
  • args – arguments for building the policy

  • env – the environment

  • mode – 0 for training, 1 for testing

Returns

a policy model object
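
A sketch of building a policy for training and for testing via the documented mode flag; args and env are assumed to exist as above.

    from algorithms.utils import build_policy

    train_policy = build_policy(args, env, mode=0)  # mode=0: training
    test_policy = build_policy(args, env, mode=1)   # mode=1: testing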

algorithms.utils.build_reward_handler(args, writer=None)[source]
Parameters
  • args – arguments for building the reward handler

  • writer – optional writer for logging; defaults to None

Returns

a reward handler
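
A sketch of building the reward handler; the writer argument is optional, and treating it as a logging writer is an assumption.

    from algorithms.utils import build_reward_handler

    reward_handler = build_reward_handler(args)                  # no logging
    reward_handler = build_reward_handler(args, writer=writer)   # with an optional writer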

algorithms.utils.extract_normalizing_env(env)[source]
algorithms.utils.extract_wrapped_env(env, typ)[source]
algorithms.utils.load_params(filepath)[source]
Parameters

filepath – path of the parameter file to load

Returns

the loaded parameters
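
A sketch of loading previously saved parameters; params_filepath is a placeholder for the path of a file written by save_params.

    from algorithms.utils import load_params

    # `params_filepath` is a placeholder path pointing at a file written by save_params.
    params = load_params(params_filepath)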

algorithms.utils.maybe_mkdir(dirpath)[source]

Create the directory at dirpath if it does not already exist; if it does, do nothing.
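
A short sketch; the call is idempotent, so it is safe to run before writing any outputs (the path is only an example).

    from algorithms.utils import maybe_mkdir

    maybe_mkdir('./data/experiments/demo')  # created if missing, skipped if it already exists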

algorithms.utils.save_params(output_dir, params, epoch, max_to_keep=None)[source]
Parameters
  • output_dir – output directory

  • params – params to save

  • epoch – epoch number

  • max_to_keep – the maximum number of parameter files to keep at the same time

Returns

nothing; the parameters are written to the given output directory
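
A sketch of saving parameters periodically during training; the loop and get_params are hypothetical stand-ins for the project's own training loop and parameter access.

    from algorithms.utils import maybe_mkdir, save_params

    output_dir = './data/experiments/demo/params'  # example location
    maybe_mkdir(output_dir)
    for epoch in range(10):
        # ... one training epoch ...
        params = get_params()  # hypothetical: however the project extracts model parameters
        save_params(output_dir, params, epoch, max_to_keep=5)  # keep at most 5 files on disk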

algorithms.utils.set_up_experiment(exp_name, phase, exp_home='./data/experiments/', snapshot_gap=5)[source]
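
A sketch of calling the experiment setup with its documented defaults; the return value is not documented above, so it is not relied on here.

    from algorithms.utils import set_up_experiment

    # 'train' is an example phase name; exp_home and snapshot_gap use the documented defaults.
    set_up_experiment('demo_exp', 'train', exp_home='./data/experiments/', snapshot_gap=5)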

Module contents