Welcome to pyRDDLGym’s Documentation!
- Introduction
- Installation Guide
- Getting Started: Basics
- Getting Started: Advanced Tools
- Managing Problems using rddlrepository
- pyRDDLGym-jax: JAX Compiler and Planner
  - Requirements
  - Installing via pip
  - Installing the Pre-Release Version via git
  - Changing the Simulation Backend to JAX
  - Differentiable Planning: Deterministic Domains
  - Differentiable Planning: Stochastic Domains
  - Running the Basic Example
  - Running from the Python API
  - Writing Configuration Files for Custom Problems
  - Writing Configuration Files for Policy Networks
  - Boolean Actions
  - Constraints on Action Fluents
  - Reward Normalization
  - Utility Optimization
  - Changing the Planning Algorithm
  - Automatically Tuning Hyper-Parameters
  - Dealing with Non-Differentiable Expressions
  - Computing the Gradients Manually
  - Limitations
- pyRDDLGym-gurobi: Gurobi MIP Compiler and Planner
- pyRDDLGym-prost: PROST Planner
- pyRDDLGym-rl: Reinforcement Learning
- Symbolic Toolset for pyRDDLGym
- RDDL Language Description