Bug/spraying #308

Closed
wants to merge 68 commits into from

Conversation

htopazian
Contributor

  • creating an output variable called n_spray for the number of people receiving IRS at each timepoint
  • adding spraying to mortality.processes.R
  • fixing a typo in processes from RTS,S EPI to PEV EPI

pwinskill and others added 30 commits July 12, 2023 14:27
 * Add tests for the EPI and mass vaccination processes/listeners
 * Add checks in the listeners to remove scheduled vaccinations
 * Update the news for the bug fix
 * RcppExports updated automatically
Fix competing hazards for mass and EPI pev:
 * Update pev-epi min_wait test to directly check sample population
 * Update pev-epi min_wait test to check that pev_timestep is being updated
   on the first dose
 * Copy pev-epi min_wait test to mass pev
 * Update pev and epi processes to calculate targets based on time of first dose
 * Update infection immunity function to calculate pev immunity only
   after 3rd dose
Custom carrying capacity is set as a relative scaler, not an absolute value
 * Update documentation
 * Update parameterisation and tests
 * Update booster listener to incorporate timed coverage
 * Update attach_events to pass timed coverage parameters
 * Test timed coverage sampling
 * Bonus: fix a biology test
 * rename booster_timesteps -> booster_spacing
 * make booster_coverage a matrix (timestep x booster doses)
 * update validation
 * update implementation of attach_booster_listener
 * fix regression tests
 * update documentation
 * update vignettes
 * fix correlation examples
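The coverage layout described in the commits above (booster_coverage as a timestep × booster-dose matrix) might be sketched like this. This is an illustrative Python sketch only; the numbers and the lookup helper are hypothetical, not the package's actual API:

```python
import numpy as np

# Hypothetical sketch: one row per coverage-change timestep,
# one column per booster dose.
booster_spacing = [365, 730]      # days after dose 3 (illustrative values)
coverage_timesteps = [1, 1000]    # timesteps at which coverage changes
booster_coverage = np.array([
    [0.9, 0.8],    # coverage in effect from t = 1
    [0.5, 0.4],    # coverage in effect from t = 1000
])

def coverage_at(t, dose):
    """Look up the coverage in effect at timestep t for a given booster dose."""
    row = max(i for i, ts in enumerate(coverage_timesteps) if ts <= t)
    return booster_coverage[row, dose]
```

With this layout, lowering coverage at a later timestep only requires adding a row, rather than re-parameterising each booster dose separately.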
…he function now calculates the maternally-acquired immunity to severe disease (birth_ivm) using the acquired immunity to severe disease (iva) rather than the acquired immunity to clinical disease (ica)
Implement time varying coverage for PEV boosters:
…early treatment failure (ETF) component of the model. Also created a draft vignette for Antimalarial_Resistance on the feat/antimalarial_resistance branch
…esigned to parameterise antimalarial resistance for malariasimulation runs
…ple-value entries for each parameter (e.g. updates to resistance proportions through time) and included an additional check to make sure the vectors for each input parameter are of equal length. I have also added antimalarial-resistance_development.R which is the script I've been using to amend/test functions and in the current commit includes an updated version of calculate_treated() function that simulates the effect of early treatment failure (artemisinin resistance outcome)
… using any(), removed for loops for second and third checks and used any()
…n and removed additional space from set_antimalarial_resistance() in antimalarial_resistance.R
…e treated as part of improvement in development process
…nt scripts outside of project following advice).
…gning the resistance parameters and use a vectorised method. Fixed a typo in the set_antimalarial_resistance() function documentation
…calculate_treated() to separate resistance and non-resistance simulation paths
Thomas Brewer and others added 29 commits February 15, 2024 11:52
…/LCF/RDP). Included a test to ensure the check works and amended documentation in get_parameters() to warn users about features in development.
…rror

Added check for undeveloped antimalarial resistance features (SPC/LPF…
The model can now be run for a given number of steps, have its state
saved, and then be restored and run for some more time steps.

Parameters of the model may be modified when resuming, allowing a
simulation to be forked with a single historical run and many futures,
modelling different intervention scenarios.

There are some limitations as to which parameters may be modified. In
general, the structure of the simulation must remain the same, with the
same variables and same events. This means interventions cannot be
enabled or disabled when resuming, only their parameterization can
change. Additionally, the timing of intervention events should not be
modified. These limitations may be relaxed in the future, if there is a
need for it.

To avoid changing the existing API, this feature is available as a new
`run_resumable_simulation` function. The function returns a pair of
values, the existing dataframe with the simulation data and an object
representing the internal simulation state. The function can be called a
second time, passing the saved simulation state as an extra argument.
See the `test-resume.R` file for usage of the new function.

The implementation builds on top of individual's new support for this.
Individual already saves the state of its native objects, i.e. variables
and events. The malaria model keeps quite a bit of state outside of
individual, which we need to save and restore explicitly. This is done
by creating a set of "stateful objects", i.e. R6 objects with a pair of
`save_state` and `restore_state` methods. This interface is implemented
by every bit of the model that has state to capture:

- `LaggedValue` objects store the short term EIR and FOIM history.
- `Solver` objects represent the current state of ODE solvers.
- Adult mosquito ODEs keep a history of incubating values which need to
  be restored.
- `CorrelationParameters` stores a randomly sampled value. This needs to
  be saved to ensure the simulation is resumed with that same value.
- In addition to R's native RNG (whose state is already saved by
  individual), the model uses the dqrng library, whose state needs
  saving.
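The save/restore pattern described above can be sketched as follows. This is a minimal Python illustration (the package itself uses R6 classes), and every name here is hypothetical:

```python
# Minimal sketch of the "stateful object" pattern: each object knows how
# to serialise and restore its own state via a save_state/restore_state pair.

class LaggedValue:
    """Keeps a short history of (timestep, value) pairs for delayed lookups,
    e.g. the EIR and FOIM histories mentioned above."""

    def __init__(self):
        self.history = []

    def push(self, t, value):
        self.history.append((t, value))

    # The save/restore pair every stateful object implements:
    def save_state(self):
        return {"history": list(self.history)}

    def restore_state(self, state):
        self.history = list(state["history"])


def save_simulation(objects):
    """Collect the state of every stateful object into one structure."""
    return {name: obj.save_state() for name, obj in objects.items()}


def restore_simulation(objects, saved):
    """Restore each stateful object from a previously saved structure."""
    for name, obj in objects.items():
        obj.restore_state(saved[name])
```

A resumed run would construct fresh objects with the same structure and then call `restore_simulation` before stepping forward again.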
The existing process, written in R, needed to copy the contents of each
variable from C++ using `v$get_values()` and, after scaling the vector,
copy the result back into C++ using `v$queue_update`.

The amount of data copied, and the time it took, was significant:
six double variables, one for each kind of immunity, need to be updated in
full at each time step, each as big as the population size.

Moving this into C++ removes the need for any copy at all, besides the
multiplication loop. Values are read out of a reference to the vector held
by the DoubleVariable, the result of the multiplication is moved to the
queue, and finally individual moves the vector in the queue into the
DoubleVariable. The speedup from this change for a 1M population size is
around 10%.

An alternative optimization I considered was to compute the exponential
decay lazily, recording only the timestep and value at which the
immunity was last updated and using the closed-form expression of the
exponential decay. This would avoid the need for mass updates of
the immunity variables at every time step. Unfortunately, in my testing
this ends up being slower than even the current implementation, with all
the time being spent in calculating the current value. This would also
be a much more intrusive change, since every use of the immunity
variables would need to be modified to take the last update timestep, the
current timestep and the decay rate into consideration.
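For reference, the lazy alternative relies on the closed-form identity v(t) = v(t0) · exp(−rate · (t − t0)), which is equivalent to applying the per-step decay factor at every timestep. A small self-contained sketch (Python, illustrative only; the model's actual rates and update machinery differ):

```python
import math

def stepwise(value, rate, steps):
    """Per-step update: multiply by the decay factor once per timestep
    (the mass-update approach the model currently uses)."""
    for _ in range(steps):
        value *= math.exp(-rate)
    return value

def closed_form(value_at_t0, rate, t0, t):
    """Lazy alternative: record (t0, value) at the last update and
    evaluate the decayed value on demand."""
    return value_at_t0 * math.exp(-rate * (t - t0))
```

Both give the same result up to floating-point error; the trade-off is purely where the work happens (one big vector update per step versus a recomputation at every read).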
The tests for the correlation parameters were using a tolerance of 1e2
everywhere. Since the tolerance is relative, assertions would
effectively succeed for any value within a few orders of magnitude.
Setting obviously wrong expected results in the assertions did not
produce any errors.

This switches the tolerance to 0.1. This value was chosen by
experimenting, as something that didn't cause false negatives even after
many runs, while also being reasonably close. When using 0.01 I did get
1 failure in 100 runs of the test suite.
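To see why a relative tolerance of 1e2 is vacuous, assume a simple relative-difference check (a simplification of what testthat actually computes):

```python
def passes(actual, expected, tol):
    # Simplified relative-tolerance check:
    # |actual - expected| / |expected| < tol
    return abs(actual - expected) / abs(expected) < tol
```

With `tol = 1e2`, an actual value of 5000 against an expected value of 1 still "passes" (relative difference 4999 < 100 is false, but 99 against 1 passes, and anything within two orders of magnitude slips through), whereas `tol = 0.1` only accepts values within 10% of the expectation.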

Some of the assertions were using incorrect expected values, which
flew under the radar because of the huge tolerance. I've also cleaned up
the tests a bit and made them more consistent with one another.

Finally I changed the `CorrelationParameters` constructor to accept only
the values it needs (population size and intervention booleans), rather
than the full simulation parameters. This makes the test a bit more
concise, and will also help with upcoming tests that work at restoring
correlation state while adding interventions. The existing public
wrapper `get_correlation_parameters` still has the same interface as
before.
…ty_process function to include parameters as an input
…alarial_resistance and get_antimalarial_resistance_parameters, then made additional changes to documentation where necessary to reflect changes. Fixed the effects of the changes on unit tests
The dt parameter was described as "the delay to move from state Tr to
Ph", but the Ph (prophylaxis) state does not exist in the model. Instead
the dt parameter is used to transition from Tr to S, in `R/processes.R`.

Looking at the commit history, there seems to have been a bit of back
and forth around adding and removing the Tr and Ph states. The comment
in question was introduced in #30, after the
prophylaxis state had already been removed and even at the time the
parameter was already used as the transition rate from Tr to S.
Antimalarial Resistance: Slow Parasite Clearance
The `set_tbv` function was located at the bottom of `pev_parameters.R`,
which was confusing and unintuitive. I believe this was a remnant of a
time when that file was named `vaccine_parameters.R`. It is now moved to
its own file, named `tbv_parameters.R`. This matches the general naming
convention used by all other interventions.

The ages parameter of `set_tbv` was previously described as being
"per-round", which might hint at it being, e.g., a list of age ranges with
one entry per round. This did not match the implementation, which uses
the same whole `tbv_ages` parameter for every round.

I don't know how useful being able to pass in a non-contiguous set of ages
is. In the future it might be clearer to change the TBV intervention to
use `min_ages` and `max_ages`, the same way other mass-event
interventions (MDA, SMC and mass-PEV) are implemented. This would make
it easy to actually have varying age ranges for each round, if desired.

This also adds a check in `set_tbv` for the length of coverages to
ensure it matches the timesteps, the same way it is done in other
interventions.

The documentation for the `timesteps` and `coverages` parameters of the
`set_pmc` method mentioned rounds, implying that the intervention
only takes place at the given timesteps, when it is in fact a perennial
age-based intervention that may happen at any timestep. The timesteps
parameter is only used to vary the coverage over time.

Finally, there were a few stray words in the documentation of
`set_tbv` and `set_mda`, left over from successive code refactors, which
have now been removed.
This uses a few of individual's new features that make restoring more
flexible. It also fixes a bug when restoring the mosquito solvers, by
correctly restoring the timestep, which is needed to model seasonality.

The intervention events use the new `restore = FALSE` flag to make sure
their schedule can be modified when resuming. Instead of having the
events re-schedule themselves every time they fire, we set up the entire
schedule upfront when initialising the simulation.

It adds end-to-end testing of this feature, across a range of scenarios.
For each scenario, the outcomes of the simulation with and without
restoring are compared and we make sure they are equivalent.
When resuming a simulation, it is possible to add new interventions.
The correlation of a new intervention may need configuring, both
relative to itself and to existing interventions.

The correlation parameters work by sampling, at the start of the
simulation, relative weights for each individual and intervention. The
weights are later used to select which individuals are targeted by a
given intervention. These weights are drawn from a multivariate normal
distribution, whose variance-covariance matrix depends on the configured
correlation.

When adding interventions, new corresponding columns need to be added to
the weights matrix. A fresh mvnorm distribution cannot be used, since it
would overwrite the already drawn weights for the existing interventions.
The solution instead is to use a conditional mvnorm distribution on the
new columns, using the existing values as the conditions. This yields
new weights which follow the configured variance-covariance matrix while
preserving the existing values.
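The conditional draw described above follows the standard conditional multivariate normal formulas. A sketch in Python/NumPy for illustration (the actual code presumably uses R's mvnorm facilities; the function name and partitioning here are hypothetical):

```python
import numpy as np

def conditional_mvnorm(existing, sigma, rng):
    """Draw new weight columns conditional on already-sampled ones.

    existing: (n, k) matrix of weights already drawn for k interventions.
    sigma:    (k+m, k+m) target variance-covariance matrix; the first k
              rows/columns correspond to the existing interventions.
    Returns an (n, m) matrix of new weights such that the combined
    (n, k+m) matrix follows sigma while the first k columns are preserved.
    Assumes zero means throughout.
    """
    k = existing.shape[1]
    s11 = sigma[:k, :k]          # existing-existing block
    s12 = sigma[:k, k:]          # existing-new block
    s21 = sigma[k:, :k]
    s22 = sigma[k:, k:]          # new-new block
    s11_inv = np.linalg.inv(s11)
    # Conditional mean and covariance of the new columns given the old:
    cond_mean = existing @ s11_inv @ s12               # (n, m)
    cond_cov = s22 - s21 @ s11_inv @ s12               # (m, m)
    noise = rng.multivariate_normal(
        np.zeros(cond_cov.shape[0]), cond_cov, size=existing.shape[0]
    )
    return cond_mean + noise
```

Because the conditioning uses the existing draws directly, re-running the simulation with the new intervention leaves the already-targeted individuals unchanged while the combined weights still exhibit the requested correlation structure.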
Merge branch 'dev' into competing_hazards_mortality_order
Moved mortality process to the end of processes to resolve order of competing hazards.
…on't think it needs to be multiplied by psi. When these values are drawn in lines 135-140 of biting_process.R they get multiplied by psi in line 144. That means we're effectively multiplying these values by psi^2. I think we can correct this by adjusting the calculation accordingly.

The two tests at the end would then need adjusting (one way or another) to reflect this change, which could use improving.
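The double-scaling can be illustrated with made-up numbers (this is not the package's code, just the arithmetic of the bug):

```python
import math

# Illustrative values only: psi is the relative biting-rate scaler,
# raw_rate a sampled rate before any scaling.
psi = 0.8
raw_rate = 2.0

buggy = (raw_rate * psi) * psi   # scaled at draw time AND again at use time
fixed = raw_rate * psi           # scaled exactly once
```

The buggy path applies psi twice, giving raw_rate * psi**2 (1.28 here) instead of the intended raw_rate * psi (1.6), which understates biting rates whenever psi < 1.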
dqrng just released a new version, which means we can stop relying on
the mrc-ide r-universe as an additional repository.
Depend on dqrng from CRAN again.
- adding spraying to mortality processes
- renaming process from RTS,S to PEV
@htopazian htopazian closed this May 21, 2024