
[FEATURE]: Integrate benchmarking capabilities #100

Open
evan-palmer opened this issue Jul 29, 2022 · 1 comment
Labels
enhancement (New feature or request), good first issue (Good for newcomers)

Comments

@evan-palmer
Contributor

Is your feature request related to a problem? Please describe

There is currently minimal support for evaluating the performance of the system. This makes it difficult to measure the acknowledgement success rate, execution times, etc.

Describe the solution you'd like

Implement a collection of benchmarks to enable users to evaluate the performance of the system and of their own code. This will further support new developers as they implement their own algorithms and extend pymavswarm.

Describe alternatives you've considered

Alternatives such as pyperformance exist; however, it would be helpful to have statistics specific to the pymavswarm system.

Implementation Ideas

Implement some/all of the following benchmarks:

  • Acknowledgement success rate
  • State verification success rate
  • Average ping
  • Method execution time
  • Memory usage
  • Average number of retries attempted

It may also be helpful to add support for generating visualizations from the data and a decorator to enable users to specify which methods should be benchmarked.
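A minimal sketch of the decorator idea is below. The `benchmark` decorator and `BenchmarkRecorder` class are hypothetical names (not part of pymavswarm); they only illustrate how execution-time samples could be collected per method. The same hook could be extended to record retry counts, acknowledgement results, or memory usage, and the collected summary could feed the proposed visualizations.

```python
import functools
import statistics
import time
from collections import defaultdict


class BenchmarkRecorder:
    """Hypothetical in-memory store for per-method execution-time samples."""

    def __init__(self):
        self._samples = defaultdict(list)

    def record(self, name, duration):
        self._samples[name].append(duration)

    def summary(self):
        # Count, mean, and max execution time (seconds) for each method
        return {
            name: {
                "count": len(times),
                "mean_s": statistics.mean(times),
                "max_s": max(times),
            }
            for name, times in self._samples.items()
        }


recorder = BenchmarkRecorder()


def benchmark(func):
    """Time each call of the wrapped method and store the sample."""

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            recorder.record(func.__qualname__, time.perf_counter() - start)

    return wrapper


# Example: a stand-in for a pymavswarm method a user wants benchmarked
@benchmark
def send_arm_message():
    time.sleep(0.01)  # placeholder for real message-sending work


if __name__ == "__main__":
    for _ in range(5):
        send_arm_message()
    print(recorder.summary())
```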

Additional context

N/A

@evan-palmer added the enhancement label on Jul 29, 2022
@evan-palmer changed the title [FEATURE]: Integrate benchmark capabilities → [FEATURE]: Integrate benchmarking capabilities on Jul 29, 2022
@evan-palmer added the good first issue label on Aug 13, 2022
@TV2G

TV2G commented Jul 29, 2024

How about a virtual machine with the drone software running on it?
That way you can take all of the measurements going into and out of the virtual machine to the C&C side, as well as look at the internal interactions.
This should cover all of the benchmarks, although they would be measured at different places in this setup.

One disadvantage (among others) is that you'd need to create, or already have, several programs to implement these benchmarks.
The assumptions this method relies on include (but are not limited to):

  • There is little or no interference between the swarm systems.
  • The drone software can be installed on VMs without critical adaptations to its software.
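A rough sketch of how the "average ping" benchmark could be measured from outside the VM is below, assuming pymavlink and an ArduPilot/PX4-style SITL instance reachable over UDP. The connection string, sample count, and the assumption that the autopilot answers MAVLink TIMESYNC requests are placeholders, not a tested setup.

```python
import time

from pymavlink import mavutil

# Assumed connection string: point this at wherever the VM-hosted autopilot
# exposes its MAVLink stream (address/port below are placeholders).
conn = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
conn.wait_heartbeat()

samples = []
for _ in range(20):
    ts1 = int(time.time() * 1e9)  # nanosecond tag echoed back in the reply
    conn.mav.timesync_send(0, ts1)  # tc1 == 0 marks this as a request
    start = time.perf_counter()
    reply = conn.recv_match(type="TIMESYNC", blocking=True, timeout=2)
    if reply is not None and reply.ts1 == ts1 and reply.tc1 != 0:
        samples.append(time.perf_counter() - start)
    time.sleep(0.1)

if samples:
    avg_ms = 1000 * sum(samples) / len(samples)
    print(f"average ping: {avg_ms:.2f} ms over {len(samples)} replies")
```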
