When an individual becomes infected, we do not need to push its recovery event onto the queue. We can instead record the recovery time (and node) in a separate collection. Then, at the end, we merge the cumulative infection info with the recovery info to build the I and R lists.
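A minimal sketch of how that could look, assuming a basic event-driven SIR loop with Markovian transmission/recovery (all names here are hypothetical, not the package's actual API). The key point is that `rec_time` is drawn once at infection and never enters the heap; the heap only ever holds transmission events:

```python
import heapq
import random

def fast_sir(G, tau, gamma, initial_infecteds):
    """Event-driven SIR sketch: recovery events never enter the queue.

    G is an adjacency dict {node: [neighbors]}; tau and gamma are the
    transmission and recovery rates.  Each node's recovery time is drawn
    once, when it becomes infected, and stored in rec_time instead of
    being pushed onto the priority queue.
    """
    rec_time = {}        # node -> recovery time, drawn once at infection
    infection_log = []   # (time, node), in increasing time order
    queue = []           # heap of pending transmission events (time, node)

    for u in initial_infecteds:
        heapq.heappush(queue, (0.0, u))

    while queue:
        t, u = heapq.heappop(queue)
        if u in rec_time:                 # already infected earlier; skip
            continue
        rec_time[u] = t + random.expovariate(gamma)  # stored, not queued
        infection_log.append((t, u))
        for v in G[u]:
            t_inf = t + random.expovariate(tau)
            # only queue transmissions that occur before u recovers
            if t_inf < rec_time[u]:
                heapq.heappush(queue, (t_inf, v))

    return infection_log, rec_time
```

Since each recovery time is known at the moment of infection, transmission events that would occur after the transmitter's recovery can be discarded at push time, so no recovery check is needed at pop time.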
At the end we would then have the total number of recoveries and a heap of recovery events in reverse time order. We process this heap in reverse, while also going through the cumulative infection list in reverse. This gives us the values of I[t] and R[t] (and S[t]) at the times of all infection and recovery events.
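That reverse merge could be sketched as follows (again, hypothetical helper names; `infection_log` is assumed sorted by time, as the simulation produces it, and `N` is the population size). Walking backwards, the running counts `cum_inf` and `cum_rec` include the current event, so recording before decrementing gives the state just after each event:

```python
import heapq
import math

def build_timeseries(N, infection_log, recovery_times):
    """Rebuild t, S, I, R by merging the two event collections in reverse.

    infection_log: list of (time, node) sorted by increasing time.
    recovery_times: unsorted collection of recovery times.
    """
    # heap in reverse time order: negate so heappop yields the latest first
    rec_heap = [-t for t in recovery_times]
    heapq.heapify(rec_heap)

    cum_inf = len(infection_log)   # total infections so far (all of them)
    cum_rec = len(recovery_times)  # total number of recoveries
    i = cum_inf - 1                # walk infection_log from the back

    times, S, I, R = [], [], [], []
    while i >= 0 or rec_heap:
        next_rec = -rec_heap[0] if rec_heap else -math.inf
        next_inf = infection_log[i][0] if i >= 0 else -math.inf
        if next_rec >= next_inf:           # latest remaining event: recovery
            t = -heapq.heappop(rec_heap)
        else:                              # latest remaining event: infection
            t = next_inf
        # state just after the event at time t
        times.append(t)
        S.append(N - cum_inf)
        I.append(cum_inf - cum_rec)
        R.append(cum_rec)
        if next_rec >= next_inf:
            cum_rec -= 1
        else:
            cum_inf -= 1
            i -= 1

    times.reverse(); S.reverse(); I.reverse(); R.reverse()
    return times, S, I, R
```

This touches each event exactly once, so the post-processing is O(E log E) from the heap and never stores recovery events in the simulation's queue.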
Might be worth checking with Petter Holme about this (see the note at the top of his blog post on the fastest network SIR code in the East: https://petterhol.me/2018/02/07/fastest-network-sir-code-in-the-east/). He suggests this is a significant speedup (roughly a factor of 2), but I should explore how he has implemented it.