WaveRider Engine
The current calculation system is long overdue for an overhaul. It works well for baked-in sets of formulas but has shortcomings when the specification involves more complicated procedures. For example, Ganyu's A1, which increases the Frostflake CRIT Rate by 20%, requires the Frostflake calculation to use a new formula with a different CRIT Rate calculation instead of the preexisting one. Using different formulas for similar calculations, as in this scenario, also prevents optimizers from merging those calculations.
The goals of the new formula engine include:
- High flexibility when editing the formula, including when the operations may depend on one another,
- Ability to indicate that a portion of the code is reused at multiple places to avoid duplicate computations,
- Fast repeated calculations (comparable to the old one) where each round has slightly different parameters, and
- Ability to add multiple "final formulas" together to form multi-variate optimization.
This design breaks formulas down into simple building blocks such as summation, product, read, etc., allowing for more stringent optimization, term rearrangement, and potentially automatic differentiation. In this system, each formula computation depends either on other formula values (dependencies) or on external values. Thus, the chain of formula dependencies forms a hierarchy that specifies the necessary calculations. This hierarchy consists of nodes, with the root node representing the formula we wish to compute. The type definitions of the node and its relevant objects are as follows.
```ts
export type Node = ConstantNode | ReadNode | FormulaNode | SubscriptNode | DataNode

interface NodeBase {
  operands: Node[]
}
interface ConstantNode extends NodeBase {
  operation: "const"
  value: number
  info?: Info
}
interface SubscriptNode extends NodeBase {
  operation: "subscript"
  list: number[]
  info?: Info
}
interface ReadNode extends NodeBase { /* TBD */ }
interface DataNode extends NodeBase { /* TBD */ }
interface FormulaNode extends NodeBase {
  operation: Operation
}

type CommutativeMonoidOperation = "min" | "max" | "sum" | "prod"
type Operation = CommutativeMonoidOperation | "res" | "frac" | "threshold_add"

type Info = {
  name?: string
  variant?: "physical" | ElementKey | ReactionModeKey
  unit?: "%" | "flat"
}
```
The `operation` and `operands` fields indicate the type of the node and the values it uses, respectively. Some nodes also have an `info` field, which we will discuss together with formula printing in a later section. Finally, we defer the definitions of `ReadNode` and `DataNode` until we discuss the mechanics of formula modification.
It is unwieldy to construct the data structures described above by hand, so we also add utility functions that return the appropriate nodes. The names of the utility functions generally match the `operation` field of the node they create. The list follows.
```ts
function sum(...values: (number | Node)[]): FormulaNode  // creates a `sum` node
function prod(...values: (number | Node)[]): FormulaNode // creates a `prod` node
function min(...values: (number | Node)[]): FormulaNode
function max(...values: (number | Node)[]): FormulaNode
function frac(x: number | Node, c: number | Node): FormulaNode
function threshold_add(value: Node, threshold: number | Node, addition: number | Node): FormulaNode // adds `addition` when `value` reaches `threshold`
function subscript(index: Node, list: number[], info?: Info): SubscriptNode
function res(base: number | Node): FormulaNode
```
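The declarations above are only signatures. As a point of reference, a minimal sketch of how a few of them could be implemented follows; the `toNode` helper is a hypothetical name, and the `frac(x, c) = x / (x + c)` interpretation is an assumption based on how `frac` is used in the common data later on.

```ts
// Sketch only: wrap raw numbers as `ConstantNode`s so every helper accepts `number | Node`.
function toNode(value: number | Node): Node {
  return typeof value === "number"
    ? { operation: "const", value, operands: [] }
    : value
}

function sum(...values: (number | Node)[]): FormulaNode {
  return { operation: "sum", operands: values.map(toNode) }
}
function prod(...values: (number | Node)[]): FormulaNode {
  return { operation: "prod", operands: values.map(toNode) }
}
// Assumed interpretation: `frac(x, c)` stands for x / (x + c), as used by the EM formulas below.
function frac(x: number | Node, c: number | Node): FormulaNode {
  return { operation: "frac", operands: [toNode(x), toNode(c)] }
}
function subscript(index: Node, list: number[], info?: Info): SubscriptNode {
  return { operation: "subscript", operands: [index], list, info }
}
```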
For example,
```ts
const in1 = prod(a, b, ...)
const out = sum(in1, in2, ...)
// same as
const out = {
  operation: "sum",
  operands: [in1, in2, ...],
}
```
The above `out` node is also equivalent to

```
- sum   // out
  - prod  // in1
    - a
    - b
    - ...
  - in2
  - ...
```
Throughout this proposal, we use either form of the shorthands as appropriate.
Note that the tree represents the computation in Static Single Assignment form (SSA), which is by design acyclic. Some formulas may appear to be cyclic when written out in the imperative form:
```
let b = ...
let a = b + c
let b = a + c
```
In such cases, the formula is not in SSA form, and we can rename the variables to turn it into SSA form.
```
let b = ...
let a = b + c
let b2 = a + c
```
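The same rewrite expressed with the node shorthands makes the acyclicity explicit; the concrete values below are placeholders used only for illustration.

```ts
// The imperative example above, written with the node shorthands.
// `1` and `2` stand in for whatever the original formula reads.
const c = sum(1)
const b = sum(2)     // the original `b = ...`
const a = sum(b, c)  // a = b + c
const b2 = sum(a, c) // b2 = a + c; the node `b` itself is never mutated
```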
There are multiple optimizations one can apply to the hierarchy. This section lists a few candidates.
The values of some nodes don't change across computations. We can replace such nodes directly to save computation (see the sketch after this list):

- If all operands are `ConstantNode`s, we can apply the operation and replace the node with the result (as another `ConstantNode`).

Furthermore, for essentially any model of `ReadNode` and `DataNode`,

- We can always replace a `ReadNode` with the node associated with its `key` if such a formula exists, and
- We can always replace a `DataNode` with its only operand (`operands[0]`) after applying all `ReadNode`s.
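A minimal sketch of the first bullet (constant folding) follows; the function name `foldConstants` and the bottom-up traversal are illustrative assumptions, not the proposal's implementation.

```ts
// Sketch: fold commutative monoid nodes whose operands are all constants.
function foldConstants(node: Node): Node {
  // Fold the children first, bottom-up.
  const operands = node.operands.map(foldConstants)
  const folded = { ...node, operands }
  if (["min", "max", "sum", "prod"].includes(folded.operation)) {
    const constants = operands.filter((op): op is ConstantNode => op.operation === "const")
    if (constants.length === operands.length) {
      const values = constants.map(op => op.value)
      const value =
        folded.operation === "sum" ? values.reduce((a, b) => a + b, 0) :
        folded.operation === "prod" ? values.reduce((a, b) => a * b, 1) :
        folded.operation === "min" ? Math.min(...values) :
        Math.max(...values)
      return { operation: "const", value, operands: [] }
    }
  }
  return folded
}
```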
Some operations, such as `sum` and `prod`, are commutative monoid operations. We can merge nested nodes with the same commutative monoid operation, e.g., `sum(sum(1, 2), sum(3, 4)) => sum(1, 2, 3, 4)` (see the sketch below). However, we don't merge nodes that are used multiple times in the hierarchy; it is more efficient to keep such nodes separate.
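A minimal sketch of this flattening step, under the assumption that a `usageCount` helper (not part of the proposal) reports how many parents reference a node:

```ts
// Sketch: merge single-use operands that share the node's commutative monoid operation.
function flatten(node: FormulaNode, usageCount: (n: Node) => number): FormulaNode {
  if (!["min", "max", "sum", "prod"].includes(node.operation)) return node
  const operands = node.operands.flatMap(op =>
    op.operation === node.operation && usageCount(op) === 1
      ? (op as FormulaNode).operands // e.g. sum(sum(1, 2), sum(3, 4)) => sum(1, 2, 3, 4)
      : [op]
  )
  return { ...node, operands }
}
```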
Some formulas appear multiple times at different places in the hierarchy. We can replace the duplicated formulas with either of the duplicated nodes so that they share the result (see the sketch below). These formulas include

- `ReadNode`s with the same `key`,
- Any nodes with the same operands in the same order, and
- Any commutative monoid nodes with the same operands in any order.

Furthermore, if any two commutative monoid nodes have more than one common operand, we can factor the common operands out into a nested operand shared by both.
Note that deduplication may create new opportunities for flattening and vice versa. Thus, we may repeat the two optimizations until we can no longer progress. It is guaranteed to terminate as long as each flattened entry is used only once (and deduplicated entries appear more than once).
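A minimal sketch of the deduplication step, using JSON structural keys purely for illustration (a real implementation would pick faster keys and also cover any node-specific fields it cares about):

```ts
// Sketch: rebuild the hierarchy bottom-up, returning a shared node whenever the same
// structure has been seen before. Commutative monoid nodes sort their operand keys so
// that operand order does not matter.
function deduplicate(root: Node): Node {
  const byKey = new Map<string, Node>() // structural key -> canonical node
  const keyOf = new Map<Node, string>() // canonical node -> its structural key
  function visit(node: Node): Node {
    const operands = node.operands.map(visit)
    const operandKeys = operands.map(op => keyOf.get(op)!)
    if (["min", "max", "sum", "prod"].includes(node.operation)) operandKeys.sort()
    const extra = // node-specific fields that must also match
      node.operation === "const" ? node.value :
      node.operation === "subscript" ? node.list :
      undefined
    const key = JSON.stringify([node.operation, operandKeys, extra])
    const existing = byKey.get(key)
    if (existing) return existing
    const rebuilt = { ...node, operands }
    byKey.set(key, rebuilt)
    keyOf.set(rebuilt, key)
    return rebuilt
  }
  return visit(root)
}
```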
Note that the set of nodes described above is sufficient to specify any computation over rational numbers. However, it cannot define functions that compute a specific calculation (e.g., average dmg) from some inputs (e.g., artifact stats). For example, the node `sum(1, prod(2, 3))` computes `1 + 2 * 3` but nothing else.
To specify the inputs to a calculation, we use `ReadNode`s as placeholders, which are then replaced with appropriate nodes later on. The reason for this design is twofold. On the one hand, we can use a `ReadNode` as a function input and replace it with an appropriate `ConstantNode` when computing the final values. On the other hand, we can also use `ReadNode`s to describe any modification that changes the base calculation. Examples of such modifications include increasing the CRIT Rate, which is par for the course in Genshin Impact, where essentially every character modifies their stats or damage output in some way.
The `ReadNode`s only describe the read half of the story; there is also the write half, where we replace `ReadNode`s with appropriate nodes. There are a few potential designs for the modifications, and a few requirements that any design must meet. The design MUST support

- Multiple modifications to the same node; the modifications may come from different sources, such as character constellation/ascension bonuses, artifact set effects, etc., some of which may apply to the same node.
- Localized modifications; in formulas such as team buffs, only the part of the hierarchy relevant to that character's buff receives the modification, as different team members have distinct bonus effects.
- Cascading modifications; some modifications depend on other nodes, which may themselves be modified further. The design must be able to identify and apply said (further) modifications, potentially resulting in yet another layer of modification.
- Pre-modified formulas; standard formulas such as total ATK can be written a priori, and the system adds modifications later.
We now discuss two potential designs of `DataNode`. In both designs, the `key` of a `ReadNode` is of type `string[]`. Furthermore, there is a `Data` type for mapping `ReadNode.key` to formulas. The type definition of `Data` is as follows.
```ts
interface Data {
  formulas: FormulaData
}
interface FormulaData {
  operation?: never // distinguishes `FormulaData` from `Node`, which always has an `operation`
  [key: string]: FormulaData | Node | undefined
}
```
In this definition, each `Data` contains a `FormulaData`, which is a recursive object. The `key` of a `ReadNode` specifies the path from the root of the `FormulaData` to the appropriate formula. An example of a `FormulaData` and its associated key-value mapping follows.
```ts
const data: Data = {
  formulas: {
    total: {
      enerRech_: sum(char.enerRech_, weapon.enerRech_, artifact.enerRech_),
      critRate_: sum(char.critRate_, weapon.critRate_, artifact.critRate_),
    },
    base: {
      atk: sum(char.atk, weapon.atk),
    },
  },
}
```
Mapping

```
["total", "enerRech_"] => sum(char.enerRech_, weapon.enerRech_, artifact.enerRech_)
["total", "critRate_"] => sum(char.critRate_, weapon.critRate_, artifact.critRate_)
["base", "atk"]        => sum(char.atk, weapon.atk)
```
Each `DataNode` contains an array of `Data` objects for the `ReadNode`s to read from, and each `ReadNode` contains a `key` and an `accumulation` field that is used when multiple `Data` objects match the same `key`.
```ts
interface DataNode extends NodeBase {
  operation: "data"
  data: Data[]
}
interface ReadNode extends NodeBase {
  operation: "read"
  key: string[]
  accumulation: "unique" | CommutativeMonoidOperation
}
```
In this design, when a `ReadNode` is applied, the reader finds the closest `DataNode` among the ancestors of the `ReadNode`, then maps the `key` using the `Data` objects in that data node. All of the matched formulas are then accumulated using the `ReadNode.accumulation` operation. If the `ReadNode.accumulation` is `unique` and multiple nodes are found, the program raises an exception. As a simple example of how a `ReadNode` is applied, consider the following hierarchy.
```
Node 1 - data [ Data 4 ]
  Node 2 - data [ Data 1, Data 2, Data 3 ]
    Node 3 - sum
      Node 4 - read (sum) [ ["total", "enerRech_"] ]
      Node 5 - data [ Data 5 ]
```

```
Data 1 = {
  total: { enerRech_: prod(0.3, base.atk) }
}
Data 2 = {
  total: { enerRech_: sum(char.enerRech_, weapon.enerRech_, artifact.enerRech_) }
}
Data 3 = { /* Empty */ }
```
When applying Node 4, we first find all `DataNode`s among its ancestors: Node 1 and Node 2. We then choose the closest `DataNode`, Node 2. From the chosen Node 2, we apply the key `["total", "enerRech_"]` to each of its `Data` objects, Data 1, Data 2, and Data 3, resulting in

```
[ prod(0.3, base.atk), sum(char.enerRech_, weapon.enerRech_, artifact.enerRech_), undefined ]
```

Since the `accumulation` field is set to `sum`, the `ReadNode` is then replaced with the summation of all the nodes found above. The hierarchy becomes
```
Node 6 - data [ Data 4 ]
  Node 7 - data [ Data 1, Data 2, Data 3 ]
    Node 8 - sum
      Node 9 - sum
        Node 10 - prod(0.3, base.atk)
        Node 11 - sum(char.enerRech_, weapon.enerRech_, artifact.enerRech_)
      Node 5 - data [ Data 5 ]
```
Note that since the summation now contains a different set of operands, Node 8 != Node 3, and by extension, Node 1 != Node 6 and Node 2 != Node 7. However, since Node 5 is not an ancestor of Node 4, it is unaffected by the modification and remains the same. If a `ReadNode` cannot find a `DataNode`, or does not find any match in the `Data` objects, it is treated as a later-stage read operation and remains as-is.
If the new node that replaces a `ReadNode` contains another `ReadNode`, the engine will try to apply that `ReadNode` again, resulting in a cascading effect of `ReadNode` applications.
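Putting the lookup, accumulation, and cascading rules together, the whole pass might look like the minimal sketch below. It reuses the hypothetical `lookup` helper from earlier; the function name `applyReads`, the top-down traversal, and the lack of cycle/self-reference handling are all simplifications for illustration, not the proposal's implementation.

```ts
// Sketch: resolve ReadNodes against the Data objects of the closest enclosing DataNode.
function applyReads(node: Node, context?: Data[]): Node {
  if (node.operation === "data") {
    // The closest DataNode wins: its Data objects replace any inherited context.
    const operands = node.operands.map(op => applyReads(op, node.data))
    return { ...node, operands }
  }
  if (node.operation === "read" && context !== undefined) {
    const matches = context
      .map(data => lookup(data, node.key))
      .filter((match): match is Node => match !== undefined)
    if (matches.length === 0) return node // later-stage read: leave as-is
    if (node.accumulation === "unique" && matches.length > 1)
      throw new Error(`Multiple matches for unique key ${node.key.join(".")}`)
    const replacement: Node =
      node.accumulation === "unique"
        ? matches[0]
        : { operation: node.accumulation, operands: matches }
    // Cascade: the replacement may itself contain further ReadNodes.
    return applyReads(replacement, context)
  }
  const operands = node.operands.map(op => applyReads(op, context))
  return { ...node, operands }
}
```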
The main advantage of this design lies in its simplicity: since a `ReadNode` always looks up a single `DataNode`, all `ReadNode`s with the same key resolve to the same final node, meaning all `["total", "atk"]` read nodes in the same part of the hierarchy, including those inside `Data` objects, will have the same outcome.
For `DataNode`, we have a utility function similar to those of the other nodes.

```ts
function data(base: Node, data: Data[]): DataNode

const out = data(base, [data1, data2])
// Equivalent to
const out = {
  operation: "data",
  operands: [base],
  data: [data1, data2],
}
```
For `ReadNode`, the process is more involved, to avoid mistyping `key`s. We first create a `ReadSpec` object, a recursive object with `"unique" | CommutativeMonoidOperation` as its leaves, e.g.,

```ts
const spec = {
  total: {
    atk: "sum",
  },
  base: {
    atk: "sum",
  },
}
```
We then call a utility function `makeReaders` to create a recursive object with the appropriate readers as the leaves.

```ts
function makeReaders<T>(spec: T, prefix: string[] = []): ...

const readNodes = makeReaders(spec)
/* {
  total: {
    atk: ReadNode for `total.atk`
  },
  base: {
    atk: ReadNode for `base.atk`
  },
} */

readNodes.total.atk // `ReadNode` for `total.atk`
readNodes.base.atk  // `ReadNode` for `base.atk`
```
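A minimal sketch of `makeReaders`, under the assumption that the spec leaves are accumulation strings and everything else is a nested object. The loose `any` return type and the `ReadSpec` alias are simplifications; the real version would map the spec type to a matching tree of `ReadNode`s.

```ts
type ReadSpec = { [key: string]: ReadSpec | "unique" | CommutativeMonoidOperation }

// Sketch: recursively replace every leaf with a ReadNode whose key is the path to that leaf.
function makeReaders(spec: ReadSpec, prefix: string[] = []): any {
  return Object.fromEntries(
    Object.entries(spec).map(([key, value]) => [
      key,
      typeof value === "string"
        ? { operation: "read", key: [...prefix, key], accumulation: value, operands: [] }
        : makeReaders(value, [...prefix, key]),
    ])
  )
}

makeReaders(spec).total.atk // => { operation: "read", key: ["total", "atk"], accumulation: "sum", operands: [] }
```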
The alternative design does away with `ReadNode`s: instead, every node has a unique identifier, and a modification can be applied to any node with a given identifier. The problem with this design is that the identifiers can hardly be unique, as different parts of the formula tend to share the same default/unmodified formulas (e.g., the base ATK formula) while receiving different modifications (e.g., different moves in a combo). Even if one always makes a copy when using the default formulas, it remains a challenge to identify the correct formula in the correct region of the hierarchy.
Technically, one could use `string` for the keys instead of `string[]`, which would simplify the lookup process. However, using `string[]` allows us to modify the keys much more intuitively. For example, we can use `["total", "atk"]` as the key for the current character's total ATK, and `["mona", "total", "atk"]` when referring to a specific character's total ATK, with minimal changes to the `Data` object (see the sketch below). This feature of `string[]` keys plays a crucial role in designing a structure for team buffs, which requires accessing data from multiple `Data` sources.
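As a rough illustration of that re-rooting (the `prefixData` helper below is hypothetical, not part of the proposal), exposing another character's formulas under a name prefix only requires nesting the existing `FormulaData` one level deeper:

```ts
// Sketch: after this wrapping, ["mona", "total", "atk"] resolves to whatever
// ["total", "atk"] resolves to in the original data.
function prefixData(prefix: string, data: Data): Data {
  return { formulas: { [prefix]: data.formulas } }
}
```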
We separate `Data` objects into nine different subtypes:

- Common data,
- Character sheet's data,
- Weapon sheet's data,
- All artifact sheets' data,
- User-specific character's data,
- User-specific weapon's data,
- User-specific artifact's data,
- Team buff's data, and
- Enemy's data.

There is also a special kind of data, generation data, which contains only the data available during build generation.
The common data contains node formulas that are the same in most settings, such as `total.atk = base.atk * artifact.atk_ + artifact.atk`. The sheets' data contain nodes common to their respective targets, e.g., the character sheet's data may contain the HP calculation for different levels. User-specific data contains data specific to the user's target, such as the character level, and team buff data contains information on how buffs from other characters apply to the current character. Finally, the enemy's data contains information regarding the enemy's state during the calculation.
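As a rough illustration (the variable names below are hypothetical placeholders, not part of the proposal), a concrete calculation is formed by wrapping the formula of interest in a `DataNode` that carries every applicable `Data` object:

```ts
// Hypothetical placeholders for the Data subtypes listed above and the formula we want.
declare const common: Data, charSheet: Data, weaponSheet: Data, artifactSheets: Data
declare const userChar: Data, userWeapon: Data, userArtifacts: Data, teamBuff: Data, enemyData: Data
declare const finalFormula: Node // e.g. the hit-damage node from the common data

// ReadNodes inside `finalFormula` (and inside the Data objects themselves)
// resolve against this list of Data objects.
const wrapped = data(finalFormula, [
  common, charSheet, weaponSheet, artifactSheets,
  userChar, userWeapon, userArtifacts, teamBuff, enemyData,
])
```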
The common data is as follows.
```ts
/// Create an object containing all `keys`, each with the associated value `mapping(key)`
function objectFromKeyMap<T>(keys: readonly string[], mapping: (key: string) => T): { [key: string]: T }

const readerSpec = {
  base: objectFromKeyMap(["atk", "def", "hp"], _ => "sum"),
  total: objectFromKeyMap(allStats, _ => "sum"),
  art: objectFromKeyMap(allArtifactSets, _ => "sum"),
  char: {
    auto: "sum", skill: "sum", burst: "sum",
    level: "unique", constellation: "unique", ascension: "unique",
  },
  weapon: { level: "unique", rank: "unique" },
  hit: {
    base: "sum",
    dmgBonus: "sum",
    amp: { reactionMulti: "sum", multi: "sum", base: "sum" },
    crit: { value: "unique", rate: "unique", avg: "sum" },
  },
  trans: {
    dmg: "sum",
    reactionMulti: "unique",
    base: "sum",
    lvlMulti: "unique",
  },
  enemy: {
    res: { base: "sum", cryo: "sum", pyro: "sum", ... },
    level: "unique", def: "sum", defRed: "sum",
  },
}
```
```ts
const input = makeReaders(readerSpec)
const { base, total, char, hit, trans, enemy } = input

// `unit` stands for a constant node with value 1 (not shown here).
const common = {
  total: objectFromKeyMap(["atk", "def", "hp"],
    key => prod(base[key], total[`${key}_`])),
  trans: {
    dmg: prod(trans.reactionMulti, trans.base, trans.lvlMulti, enemy.res),
    base: sum(unit, prod(16, frac(total.eleMas, 2000))),
    lvlMulti: subscript(char.level, transformativeReactionLevelMultipliers),
  },
  hit: {
    dmg: prod(hit.base, sum(unit, hit.dmgBonus), hit.crit.value, enemy.def, enemy.res, hit.amp.multi),
    crit: {
      rate: max(min(total.critRate_, unit), 0),
      avg: prod(hit.crit.rate, total.critDMG_),
    },
    amp: {
      multi: prod(hit.amp.reactionMulti, hit.amp.base),
      base: sum(unit, prod(25 / 9, frac(total.eleMas, 1400))),
    },
  },
  enemy: {
    res: { cryo: 10, geo: 10, ... },
    def: frac(sum(char.level, 100), prod(sum(enemy.level, 100), sum(1, prod(-1, enemy.defRed)))),
  },
}
```
This section shows how each part of the design comes together to form a coherent whole. In this example, we will be calculating Ganyu's Frostflake Bloom + Ganyu's Normal Atk Dmg. Using the common formula as defined above, the character sheet data is as follows.
```
Ganyu's Data {
  char: {
    // Bonus tlvl from C3 & C5
    skill: threshold_add(input.char.constellation, 5, 3),
    burst: threshold_add(input.char.constellation, 3, 3),
  },
  base: {
    hp: subscript(input.char.level, hp_scaling_array),
    atk: subscript(input.char.level, atk_scaling_array),
    def: subscript(input.char.level, def_scaling_array),
  },
  premod: {
    /* In reality, we should add another layer for conditionals */
    cryoDmg_: threshold_add(input.char.ascension, 4, 20),
  },
  enemy: {
    res: {
      cryo: threshold_add(input.char.constellation, 1, -15),
    },
  },
}
```
```
User's Ganyu's Data {
  char: {
    auto: 8, skill: 8, burst: 8,
    constellation: 1, level: 90, ascension: 6,
  },
}
```
For simplicity, we merge the weapon sheet data and the user's weapon data into a single, more straightforward data object, and use no artifacts.
```
Weapon {
  base: {
    atk: 608,
  },
  premod: {
    atk_: 49.6,
  },
  hit: {
    normal_: 21.0,
    charged_: 21.0,
  },
}
```
Then the dmg calculation data is as follows
```
FrostFlake Data {
  base: prod(subscript(input.char.auto, /* FF base dmg lookup table */), input.postmod.atk), // Base of the dmg
  crit: {
    value: input.hit.crit.avg, // Pick avg dmg
  },
  enemy: {
    res: {
      base: input.enemy.res.cryo, // Choose cryo resistance as the dmg resistance
      // Extra cryo RES reduction from C1, which is FF-specific
      cryo: threshold_add(input.char.constellation, 1, -15),
    },
  },
}
```
```
Auto ATK Data {
  base: prod(subscript(input.char.auto, /* normal attack base dmg lookup table */), input.postmod.atk), // Base of the dmg
  crit: {
    value: input.hit.crit.avg,
  },
  enemy: {
    res: {
      base: input.enemy.res.physical, // Choose physical resistance as the dmg resistance
    },
  },
}
```
```
Enemy Data {
  baseRes: 10,
  level: 89,
  def: 0,
}
```
We then construct
```
Node 1 - sum
  Node 2 - data [ Ganyu, User's Ganyu, Weapon, FrostFlake, Enemy ]
  Node 3 - data [ Ganyu, User's Ganyu, Weapon, Auto, Enemy ]
```
and compute the final value.
This may look intimidating, but given the `ReadNode` rule for looking up the `DataNode`, we know that each `ReadNode` is simply the combination of all the `Data` objects in that `DataNode`. For example, the value of `input.enemy.res.cryo` at Node 2 will simply be `sum(common.enemy.res.cryo, frostflake.enemy.res.cryo)`, while the value of `input.enemy.res.physical` at Node 3 will instead be `sum(common.enemy.res.physical)`.
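Once every `ReadNode` has been replaced and only constants and formula nodes remain, computing the final value is a straightforward recursive evaluation. The sketch below is illustrative only: a real implementation would cache shared nodes rather than recompute them, and the `res`, `frac`, and `threshold_add` semantics in the comments are assumptions, since the proposal does not spell them out.

```ts
function compute(node: Node): number {
  const values = node.operands.map(compute)
  switch (node.operation) {
    case "const": return node.value
    case "sum": return values.reduce((a, b) => a + b, 0)
    case "prod": return values.reduce((a, b) => a * b, 1)
    case "min": return Math.min(...values)
    case "max": return Math.max(...values)
    case "subscript": return node.list[values[0]]
    // Assumed semantics: frac(x, c) = x / (x + c), matching the EM formulas in the common data.
    case "frac": return values[0] / (values[0] + values[1])
    // Assumed semantics: threshold_add(value, threshold, addition) contributes `addition`
    // once `value` reaches `threshold`, and 0 otherwise.
    case "threshold_add": return values[0] >= values[1] ? values[2] : 0
    // Assumed semantics: the usual enemy resistance curve around the accumulated res value.
    case "res": {
      const res = values[0]
      return res < 0 ? 1 - res / 2 : res >= 0.75 ? 1 / (4 * res + 1) : 1 - res
    }
    default: // "read" and "data" nodes must have been resolved by this point
      throw new Error(`Cannot compute unresolved "${node.operation}" node`)
  }
}
```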