Is your feature request related to a problem? Please describe
There is currently minimal support for evaluating the performance of the system. This makes it difficult to measure the acknowledgement success rate, execution times, etc.
Describe the solution you'd like
Implement a collection of benchmarks to enable users to evaluate the performance of the system and their code. This will further support new developers as they implement their own algorithms and extend pymavswarm.
Describe alternatives you've considered
Alternatives such as pyperformance exist; however, it would be helpful to have statistics specific to the pymavswarm system.
Implementation Ideas
Implement some/all of the following benchmarks:
Acknowledgement success rate
State verification success rate
Average ping
Method execution time
Memory usage
Average number of retries attempted
It may also be helpful to add support for generating visualizations from the data, along with a decorator that lets users specify which methods should be benchmarked.
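To illustrate the decorator idea, a minimal sketch using only the standard library might look like the following. The benchmark_results registry and the takeoff_all method are hypothetical stand-ins, not existing pymavswarm API:

```python
import functools
import time
import tracemalloc

# Hypothetical registry of samples keyed by method name.
benchmark_results: dict[str, list[dict[str, float]]] = {}

def benchmark(func):
    """Record wall-clock time and peak memory usage for each call."""

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        tracemalloc.start()
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            _, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            benchmark_results.setdefault(func.__qualname__, []).append(
                {"time_s": elapsed, "peak_bytes": peak}
            )

    return wrapper

@benchmark
def takeoff_all(altitude: float) -> None:
    """Hypothetical pymavswarm-style method; sleep stands in for real work."""
    time.sleep(0.01)

takeoff_all(10.0)
print(benchmark_results["takeoff_all"])
```

Aggregated samples like these could then feed the visualization support mentioned above.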
Additional context
N/A
How about running the drone software in virtual machines?
That way you could take measurements between the virtual machines and the command-and-control (C&C) station, as well as observe the internal interactions.
This setup should cover all of the benchmarks, although they would be measured at different points.
One disadvantage (among others) is that you'd need to create, or already have, several programs to implement these benchmarks.
The assumptions this method relies on include (but are not limited to):
There is little or no interference between the swarm systems.
The drone software can be installed on VMs without critical adaptations.
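As a rough sketch of what the measurement side could look like in this setup, the snippet below uses pymavlink to send MAV_CMD_REQUEST_MESSAGE commands to a simulated vehicle (e.g., ArduPilot SITL running in one of the VMs) and tallies the acknowledgement success rate and round-trip time. The connection string and trial count are assumptions about the setup:

```python
import time

from pymavlink import mavutil

# Connect to a simulated vehicle, e.g., ArduPilot SITL running in a VM;
# the address and port are assumptions and depend on the setup.
conn = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
conn.wait_heartbeat()

trials, acked, rtts = 20, 0, []

for _ in range(trials):
    start = time.perf_counter()
    # Request AUTOPILOT_VERSION and wait for the corresponding COMMAND_ACK.
    conn.mav.command_long_send(
        conn.target_system,
        conn.target_component,
        mavutil.mavlink.MAV_CMD_REQUEST_MESSAGE,
        0,  # confirmation
        mavutil.mavlink.MAVLINK_MSG_ID_AUTOPILOT_VERSION,
        0, 0, 0, 0, 0, 0,
    )
    ack = conn.recv_match(type="COMMAND_ACK", blocking=True, timeout=1.0)
    if (
        ack is not None
        and ack.command == mavutil.mavlink.MAV_CMD_REQUEST_MESSAGE
        and ack.result == mavutil.mavlink.MAV_RESULT_ACCEPTED
    ):
        acked += 1
        rtts.append(time.perf_counter() - start)

print(f"acknowledgement success rate: {acked / trials:.0%}")
if rtts:
    print(f"average round trip: {1000 * sum(rtts) / len(rtts):.1f} ms")
```

Other benchmarks, such as the average number of retries attempted, could follow the same pattern of counting outcomes over repeated trials.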