This repository has been archived by the owner on Sep 1, 2024. It is now read-only.

[Feature Request] More general reward #115

Open
mkolodziejczyk-piap opened this issue Aug 3, 2021 · 1 comment
Labels
enhancement New feature or request

Comments

@mkolodziejczyk-piap

Hi, currently reward_fn is independent of the environment class (mbrl.models.ModelEnv) and takes actions and the next observation as input. In practice, more general reward functions that depend on environment parameters are needed. For example:

  • we've got a reference trajectory or obstacles that are fixed or periodically updated
  • we want to include progress in the reward function, e.g. reward - prev_reward

My initial thought is to change reward_fn from an external function into a method of ModelEnv, so that we could use self.parameter of that class. I wonder if this is "safe" and doesn't interfere with other features.
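
For example, something along these lines (just a rough sketch of the proposed interface, not code that works against the current API; the reference-tracking reward is only illustrative):

```python
import torch
import mbrl.models


class ParametrizedModelEnv(mbrl.models.ModelEnv):
    """Sketch only: the reward is a method, so it can read env parameters."""

    def set_reference(self, reference: torch.Tensor) -> None:
        # Updated periodically from the training loop, e.g. a new waypoint
        # of the reference trajectory or new obstacle positions.
        self.reference = reference

    def reward(self, actions: torch.Tensor, next_obs: torch.Tensor) -> torch.Tensor:
        # Because the reward lives on the env, it can use self.* parameters;
        # here, negative distance to the current reference.
        return -torch.norm(next_obs - self.reference, dim=-1, keepdim=True)
```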

Regards,

mkolodziejczyk-piap added the enhancement New feature or request label Aug 3, 2021
@luisenp
Contributor

luisenp commented Aug 6, 2021

Hi @mkolodziejczyk-piap. This is an interesting suggestion. Can you give a more concrete example to help me sketch out something?

As a starting point, in the current state of the code it should already be possible to use a reward_fn that is a class, as long as you implement a __call__ method with the same inputs (actions and next observation). This lets you keep some internal state, but depending on how you'd like to use the ModelEnv you may need your own version of evaluate_action_sequences.
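
For instance, something like this should work with the existing reward_fn interface (a minimal sketch; the class name and the distance-based reward are made up for illustration):

```python
import torch


class TrackingReward:
    """Callable reward with internal state, matching the (actions, next_obs) signature."""

    def __init__(self, reference: torch.Tensor):
        self.reference = reference  # e.g. current waypoint, shape (obs_dim,)

    def set_reference(self, reference: torch.Tensor) -> None:
        # Call this from the training loop whenever the target changes.
        self.reference = reference

    def __call__(self, actions: torch.Tensor, next_obs: torch.Tensor) -> torch.Tensor:
        # Inputs are batched over sampled trajectories; return one reward per row.
        return -torch.norm(next_obs - self.reference, dim=-1, keepdim=True)
```

You could then pass an instance of this class anywhere a reward_fn is expected and update its state from outside between planning calls. For something history-dependent like reward - prev_reward, the state would need to be tracked per rollout across steps, which is where a custom evaluate_action_sequences would likely come in.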

I'm happy to take a deeper look at this with more details in hand.
