Feature suggestion: Average damage during advantage #26

Open

LeoDanielsson opened this issue Jan 19, 2021 · 3 comments
Hi! I love the app. It's very useful when selecting perks.

I'd like to suggest adding a graph showing average damage when under advantage (and possibly disadvantage). This is something that would be nice in particular when considering rolling modifier perks, as they can lower your average damage during advantage.

So it would allow me to say "Hmm, adding two rolling +1s will increase my average damage by X, but lower my average advantaged damage by Y" and make an informed decision based on that.

scottmk commented Dec 24, 2023

This would be really tricky because of how advantage interacts with rolling modifier cards. The rule for that also changed between Gloomhaven and Frosthaven.

Using the Frosthaven rules, if you have the following situation:

r+1
r+1
+0
+2

You would end up with +4.
But if you have the following situation:

+0
r+1
r+1
+2

You'd end up with only +1.

Further, if you had the following situation:

r+1
+0
r+1
+2

You'd end up with +2.

So there are a lot of possible permutations depending on the order of the deck. I'm not saying it's impossible to figure out the average damage with advantage, but it would require pretty complicated logic. You'd have to calculate every possible permutation of the deck in order to figure it out. That's difficult to do because the standard attack modifier deck has 20!, or about 2.4 × 10^18, permutations. So it would be computationally expensive without some sort of clever dynamic programming solution.

You could reduce the problem space quite a bit by only considering the permutations that have at least one rolling card in the first n + 2 cards, where n is the number of rolling cards in the deck, but it's still not trivial.
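
For a rough sense of that reduction, here is a back-of-the-envelope sketch. The helper name and the assumption of a 20-card base deck plus n added rolling cards are mine: the ordered draws of length n + 2 that contain at least one rolling card are simply all such draws minus the ones made up entirely of non-rolling cards.

from math import factorial, perm

def reduced_space_size(num_rolling):
    # ordered draws of length (num_rolling + 2) from the (20 + num_rolling)-card deck,
    # minus the draws that contain no rolling card at all
    draw_length = num_rolling + 2
    return perm(20 + num_rolling, draw_length) - perm(20, draw_length)

print(factorial(20))          # 2,432,902,008,176,640,000 orderings of the full deck
print(reduced_space_size(2))  # 59,280 relevant draws with two rolling cards added

That's a huge reduction, but it still grows quickly as rolling cards are added.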

scottmk commented Dec 24, 2023

I was curious, so let's say you have the standard attack modifier deck and added two r+1 cards. The following Python code would get you all permutations of 4-card draws that contain at least one rolling card:

from itertools import permutations

def _get_all_rolling_permutations(deck):
    # with advantage, a single draw can involve at most (number of rolling cards) + 2 cards
    num_rolling_cards = len([card for card in deck if card.startswith('r')])
    permutation_length = num_rolling_cards + 2
    card_permutes = permutations(deck, permutation_length)
    # keep only the ordered draws that actually contain a rolling card
    yield from (combo for combo in card_permutes if any(card.startswith('r') for card in combo))

In the above example, that comes out to 59,280 permutations. That's few enough to calculate the average from directly, so maybe this isn't as hard as I initially thought.
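
For example, with an assumed standard 20-card composition (the exact labels and list names below are just for illustration; the count only cares that rolling cards start with 'r'):

# assumed standard composition: one +2, five +1, six +0, five -1, one -2, one x2, one null
standard_deck = ['+2'] + ['+1'] * 5 + ['+0'] * 6 + ['-1'] * 5 + ['-2'] + ['x2'] + ['null']
deck = standard_deck + ['r+1', 'r+1']

num_draws = sum(1 for _ in _get_all_rolling_permutations(deck))
print(num_draws)  # 59,280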

However, it will blow up in complexity as the number of rolling cards increases. The time complexity for calculating the average would be O(n!), and I don't think there's any way around that aside from some clever dynamic programming solution I haven't thought of.

With 3 rolling cards, you're looking at 2,177,400 permutations, and with 4, you're looking at 69,001,920, and so on. I'll keep scratching my head at a better way to figure this out.

scottmk commented Dec 25, 2023

I was able to reduce the problem space quite a lot by looking at all the possible patterns that can be drawn. For example, if you have 4x r+1 cards plus the standard 20-card deck, then there are 69,001,920 permutations of advantage draws, but only 120 unique outcomes. So I wrote a simple algorithm to compute a cache key for each permutation to reduce the computation:

def get_cache_key(permutation):
    # TODO: should be -2 for (dis)advantage draws and -1 for regular draws
    num_rolling_size = len(permutation) - 2
    cache_key = []
    for idx, elem in enumerate(permutation):
        if elem.startswith('r'):
            cache_key.append(elem)
        else:
            # sort all the rolling modifiers drawn before this card
            cache_key.sort()
            if len(cache_key) == num_rolling_size:
                # case 1 (r, r, x, y): all rolling cards came first, so just sort the rest
                cache_key.extend(sorted(permutation[idx:]))
            elif not cache_key:
                # case 3 (x, y, any, any): no rolling card drawn yet, only the first two cards matter
                # (checked before case 2, otherwise an empty cache_key would fall into it)
                cache_key.extend(sorted(permutation[idx:idx + 2]))
            elif len(cache_key) < num_rolling_size:
                # case 2 (r, x, r, ...): we only need the next two elems
                next_two = [elem]
                last_elem = permutation[idx + 1]
                # and we can remove the r from the last elem
                next_two.append(last_elem[1:] if last_elem.startswith('r') else last_elem)
                # then sort the last two and add them
                cache_key.extend(sorted(next_two))
            break
    return tuple(cache_key)

From the limited testing I've done, this seems to work pretty well.
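
As a sketch of how the key might be plugged in (reusing _get_all_rolling_permutations and the assumed deck construction from above; the variable names here are only illustrative), you can bucket every permutation by its cache key, evaluate each distinct key once, and weight its outcome by how many permutations share it:

from collections import Counter

standard_deck = ['+2'] + ['+1'] * 5 + ['+0'] * 6 + ['-1'] * 5 + ['-2'] + ['x2'] + ['null']
deck = standard_deck + ['r+1'] * 4

# key -> number of permutations that share it; this still walks all ~69M
# permutations, so it takes a while
key_counts = Counter(get_cache_key(p) for p in _get_all_rolling_permutations(deck))
print(len(key_counts), sum(key_counts.values()))

The second number should come back as the 69,001,920 permutations mentioned above; the first is how many evaluations you actually have to do for the weighted average.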

However, this doesn't really solve the complexity problem, just dramatically reduces it. You still have to create all 69 million permutations and evaluate them, and that adds up. My little Python program takes ~2 minutes to run with four r+1 cards plus the standard deck, and that's after all my little tricks to reduce the problem space.

And that's a very simple case. The Brute and the Mindthief can have up to 15 r+0 cards on top of the basic 20-card deck. That could take hours to run.

I did happen to find someone who made a spreadsheet that can do this, and it looks like he just manually enumerated all possible options. So maybe I'll look into how he did this and see if I can learn any math tricks I'm missing. But also his spreadsheet is for Gloomhaven rules, and most people strongly prefer the Frosthaven advantage rules. They're very different.

Since I know, for example, that there are only 120 possible outcomes in the above example, perhaps it would be better to build those possible outcomes directly, instead of building all the permutations first and then reducing the problem space.
