simple_rl.mdp package

Subpackages

Submodules
simple_rl.mdp.MDPClass module

MDPClass.py: Contains the MDP Class.

class simple_rl.mdp.MDPClass.MDP(actions, transition_func, reward_func, init_state, gamma=0.99, step_cost=0)
    Bases: object

    Abstract class for a Markov Decision Process.
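
A minimal sketch of how the constructor arguments fit together, assuming transition_func(state, action) returns the next State, reward_func(state, action) returns a float, and State stores its constructor argument in a .data attribute (these conventions are not documented on this page and may differ between simple_rl versions):

    from simple_rl.mdp.MDPClass import MDP
    from simple_rl.mdp.StateClass import State

    # Two-state chain: "left" is the start state, "right" is terminal.
    left = State(data="left")
    right = State(data="right", is_terminal=True)

    def transition_func(state, action):
        # "forward" moves left -> right; any other action stays put.
        return right if (state.data == "left" and action == "forward") else state

    def reward_func(state, action):
        # Reward 1.0 for stepping forward out of the start state.
        return 1.0 if (state.data == "left" and action == "forward") else 0.0

    chain_mdp = MDP(actions=["forward", "stay"],
                    transition_func=transition_func,
                    reward_func=reward_func,
                    init_state=left,
                    gamma=0.95)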
simple_rl.mdp.MDPDistributionClass module

MDPDistributionClass.py: Contains the MDP Distribution Class.

class simple_rl.mdp.MDPDistributionClass.MDPDistribution(mdp_prob_dict, horizon=0)
    Bases: object

    Class for distributions over MDPs.

    get_all_mdps(prob_threshold=0)
        Args:
            prob_threshold (float)
        Returns:
            (list): Contains all MDPs in the distribution with Pr. > @prob_threshold.

    get_init_state()
        Notes:
            Not all MDPs in the distribution are guaranteed to share init states.

    remove_mdp(mdp)
        Args:
            mdp (MDP)
        Summary:
            Removes @mdp from self.mdp_prob_dict and recomputes the distribution.
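
A minimal sketch of building and querying a distribution over MDPs, assuming mdp_prob_dict maps MDP instances to their probabilities; the MDP construction reuses the callable-signature assumptions from the sketch above:

    from simple_rl.mdp.MDPClass import MDP
    from simple_rl.mdp.StateClass import State
    from simple_rl.mdp.MDPDistributionClass import MDPDistribution

    def make_trivial_mdp(reward):
        # Single no-op MDP; transition_func/reward_func signatures as assumed above.
        s0 = State(data="s0")
        return MDP(actions=["noop"],
                   transition_func=lambda s, a: s,
                   reward_func=lambda s, a: reward,
                   init_state=s0)

    mdp_a, mdp_b = make_trivial_mdp(1.0), make_trivial_mdp(0.0)

    lifelong_dist = MDPDistribution({mdp_a: 0.7, mdp_b: 0.3})

    likely = lifelong_dist.get_all_mdps(prob_threshold=0.5)  # MDPs with Pr. > 0.5, i.e. [mdp_a]
    start = lifelong_dist.get_init_state()  # only well-defined if the MDPs share init states

    lifelong_dist.remove_mdp(mdp_b)  # drops mdp_b and recomputes the distribution over what remains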
simple_rl.mdp.StateClass module

class simple_rl.mdp.StateClass.State(data=[], is_terminal=False)
    Bases: object

    Abstract State class
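
Since State is abstract, concrete domains typically subclass it. A minimal sketch with a hypothetical grid-position state (the class name and coordinate layout are illustrative, not part of simple_rl):

    from simple_rl.mdp.StateClass import State

    class GridPosState(State):
        """Hypothetical concrete state: an (x, y) position on a grid."""

        def __init__(self, x, y, is_terminal=False):
            # Pack the coordinates into the base class's data argument.
            State.__init__(self, data=[x, y], is_terminal=is_terminal)

    start = GridPosState(0, 0)
    goal = GridPosState(3, 3, is_terminal=True)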