Simulating an environment in pyRDDLGym with a built-in policy

This basic example illustrates how to make an environment in pyRDDLGym, execute a simple built-in policy, and collect return statistics from the simulation.

First, install the required packages:

pip install --quiet --upgrade pip pyRDDLGym rddlrepository

Import the required packages:

import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt

import pyRDDLGym
from pyRDDLGym.core.policy import RandomAgent

We will run instance 1 of the Wildfire domain from the 2014 International Probabilistic Planning Competition (IPPC 2014):

env = pyRDDLGym.make('Wildfire_MDP_ippc2014', '1')
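
Before attaching a policy, it can be useful to inspect the environment. The snippet below is a small sketch that prints the Gym-style observation and action spaces together with the horizon and max_allowed_actions attributes used later in this example (the horizon attribute is assumed to be exposed by the environment object):

# optional: inspect the environment (a sketch; horizon is assumed to be
# an attribute of the environment, max_allowed_actions is used below)
print(env.observation_space)    # state fluents and their value ranges
print(env.action_space)         # action fluents and their value ranges
print(env.horizon)              # number of decision steps per episode
print(env.max_allowed_actions)  # max number of non-default actions per step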

We will evaluate the random policy for 10 episodes and report summary statistics of the episode returns:

agent = RandomAgent(action_space=env.action_space, num_actions=env.max_allowed_actions)
return_stats = agent.evaluate(env, episodes=10)
for key, value in return_stats.items():
    print(f'{key}: {value}')
mean: -6182.0
median: -7215.0
min: -10655.0
max: -170.0
std: 3228.6678367401005
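
For reference, evaluate rolls out complete episodes internally. The following is a minimal sketch of a single manual rollout, assuming the Gymnasium-style reset/step interface used by pyRDDLGym and the agent's sample_action method:

# minimal sketch of one manual episode (assumes Gymnasium-style reset/step
# and the agent's sample_action method)
total_reward = 0.0
state, _ = env.reset()
for _ in range(env.horizon):
    action = agent.sample_action(state)  # sample a random action
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    if terminated or truncated:
        break
print(f'episode return: {total_reward}')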

Summarize the performance in a histogram:

%matplotlib inline
returns = [agent.evaluate(env, episodes=1)['mean'] for _ in range(100)]
plt.hist(returns)
plt.show()
[Figure: histogram of episode returns under the random policy]