updating pruning func for varlingam for faster processing #178
The recent update replaces the previous loop structure with advanced indexing to fill the variables `obj` and `exp`. This improves code readability and may speed up execution, especially with large datasets.

Instead of iterating through each element `block` in `blocks`, a 3D array structure is now created to hold flattened ancestor indices for each block. The ancestor values for each block are extracted with `blocks[:, 0, ancestor_indexes]` and shaped into a flat structure, `ancestor_indexes_flat`.

Populating the `obj` variable is also optimized using advanced indexing: `blocks[:, 0, i]` is copied directly into `obj` for more efficient assignment.

For the `exp` variable, advanced indexing is used similarly. The flattened ancestor values are inserted into `exp[:, :causal_order_no]`, and the remaining elements from `blocks[:, 1:].reshape(len(blocks), -1)` are placed into `exp[:, causal_order_no:]`.

This update improves code clarity and speeds up execution considerably. During my research I noticed that pruning takes a very long time, especially on longer datasets with many nodes; this change should fix that issue, at least for VARLiNGAM.