
Corpus Pruning Algorithm Experiment #2002

Closed · wants to merge 8 commits

Conversation

@tokatoka (Contributor) commented Jul 23, 2024

This PR tries a new idea from https://mschloegel.me/paper/schiller2023fuzzerrestarts.pdf

I implemented a fuzzer that periodically resets the corpus every 30/120 minutes, either after no novelty has been found for that long or simply after that much time has passed.
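
For context, here is a minimal std-only Rust sketch of what such a reset schedule could look like. All names are hypothetical: the two trigger kinds ("force"/"last"), the 30/120-minute intervals, and the 10/50% retention are read off the fuzzer variant names used later in this thread (e.g. libafl_r30_force_10), not taken from the PR's actual LibAFL code.

```rust
use std::time::{Duration, Instant};

/// When a corpus reset fires (hypothetical names).
enum ResetTrigger {
    /// "force": reset unconditionally once the interval has elapsed.
    Force,
    /// "last": reset only if no novelty was found during the interval.
    OnStagnation,
}

struct CorpusResetter {
    interval: Duration, // e.g. 30 or 120 minutes
    keep_fraction: f64, // e.g. 0.10 or 0.50
    trigger: ResetTrigger,
    last_reset: Instant,
    last_novelty: Instant,
}

impl CorpusResetter {
    fn new(minutes: u64, keep_percent: u64, trigger: ResetTrigger) -> Self {
        let now = Instant::now();
        Self {
            interval: Duration::from_secs(minutes * 60),
            keep_fraction: keep_percent as f64 / 100.0,
            trigger,
            last_reset: now,
            last_novelty: now,
        }
    }

    /// Call whenever the fuzzer finds new coverage.
    fn on_novelty(&mut self) {
        self.last_novelty = Instant::now();
    }

    /// Poll this from the fuzzer's main loop.
    fn should_reset(&self) -> bool {
        let since = match self.trigger {
            ResetTrigger::Force => self.last_reset,
            ResetTrigger::OnStagnation => self.last_novelty.max(self.last_reset),
        };
        since.elapsed() >= self.interval
    }

    /// Shrink the corpus to `keep_fraction` of its entries; which entries
    /// survive a reset (highest-rated here) is another assumption.
    fn reset(&mut self, corpus: &mut Vec<(f64, Vec<u8>)>) {
        corpus.sort_by(|a, b| b.0.total_cmp(&a.0)); // best-rated first
        let keep = ((corpus.len() as f64) * self.keep_fraction).ceil() as usize;
        corpus.truncate(keep.max(1));
        self.last_reset = Instant::now();
        self.last_novelty = self.last_reset;
    }
}
```

In this sketch, a variant like libafl_r30_force_10 would correspond to CorpusResetter::new(30, 10, ResetTrigger::Force).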

@tokatoka (Contributor Author)

@DonggeLiu
Could you run the CI?

@tokatoka (Contributor Author)

@DonggeLiu Ping

@DonggeLiu (Contributor)

Done!
I was on leave last week.

@tokatoka (Contributor Author)

It looks like every time I update it, it needs additional approval 😅
Can you run it again?

@DonggeLiu (Contributor)

> It looks like every time I update it, it needs additional approval 😅 Can you run it again?

Do you happen to know any way to allow certain users (like you) to always be able to run CIs?

@tokatoka (Contributor Author)

I think you can make me "Collaborator".

@DonggeLiu (Contributor)

> I think you can make me "Collaborator".

Oh, we will have to discuss this with the other owners of this repo.
Is there a more lightweight alternative?

@tokatoka (Contributor Author)

https://docs.github.com/en/organizations/managing-organization-settings/disabling-or-limiting-github-actions-for-your-organization#configuring-required-approval-for-workflows-from-public-forks

I think all the options are documented there, but it looks like there's no way to allow specific users to run CI.

@tokatoka (Contributor Author)

But it's strange, because previously you didn't have to run it manually for me, right?

@DonggeLiu (Contributor)

> But it's strange, because previously you didn't have to run it manually for me, right?

I am not sure, maybe I did.

@tokatoka (Contributor Author)

I'm still debugging it :)

@tokatoka (Contributor Author)

I think I resolved the problem, could you run it again?

@jonathanmetzman (Contributor)

/gcbrun

@jonathanmetzman (Contributor)

I've changed things so we shouldn't need to approve every time Actions wants to run.

@tokatoka (Contributor Author)

Thank you!

@tokatoka (Contributor Author)

@DonggeLiu The CI looks good.
Can we run the experiment? The command is:

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-14-libafl-pruner --fuzzers libafl libafl_latest libafl_r120_force_10 libafl_r120_force_50 libafl_r120_last_10 libafl_r120_last_50 libafl_r30_force_10 libafl_r30_force_50 libafl_r30_last_10 libafl_r30_last_50

@DonggeLiu (Contributor)

Sure! We are still resolving a bottleneck in measurement, so we cannot run too many fuzzers in one experiment. Ideally, let's keep ~5 fuzzers in each.
How would you like to group them?

@tokatoka (Contributor Author) commented Aug 1, 2024

OK.

This is group A:

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-14-libafl-pruner --fuzzers libafl libafl_latest libafl_r120_force_10 libafl_r120_force_50 libafl_r120_last_10

This is group B:

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-14-libafl-pruner --fuzzers libafl_r120_last_50 libafl_r30_force_10 libafl_r30_force_50 libafl_r30_last_10 libafl_r30_last_50

@DonggeLiu (Contributor)

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-08-02-libafl-pruner --fuzzers libafl libafl_latest libafl_r120_force_10 libafl_r120_force_50 libafl_r120_last_10

@DonggeLiu (Contributor)

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-08-02-libafl-pruner --fuzzers libafl_r120_last_50 libafl_r30_force_10 libafl_r30_force_50 libafl_r30_last_10 libafl_r30_last_50

@DonggeLiu (Contributor)

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-08-02-libafl-pruner-1 --fuzzers libafl_r120_last_50 libafl_r30_force_10 libafl_r30_force_50 libafl_r30_last_10 libafl_r30_last_50

@tokatoka (Contributor Author) commented Aug 2, 2024

It looks like it didn't run, unfortunately.

@DanBlackwell

Hi @tokatoka, not my PR so sorry to intrude; it looks like your experiment did start, as the experiment data was created and the logs indicate it's running here. I've had the same thing happen on the last two runs of my PR here: the coverage sub-directory in the data never gets created, even though the fuzzer is running.

I wonder if there's anything obvious in the logs? (I guess one of the FB team can see these?)

@tokatoka (Contributor Author) commented Aug 2, 2024

Thanks for the info!
It looks like all the experiments that began today are affected...

@DonggeLiu (Contributor) commented Aug 3, 2024

This is likely due to "no space left on device":
[screenshot: "No space left on device" errors in the log]

@gustavogaldinoo could you please look into this? Thanks!
I've removed all running experiments since none of them produced any results.

@DonggeLiu (Contributor)

Also noticed many "Profdata files merging failed." errors and https://github.com/google/fuzzbench/pull/2011#issuecomment-2270197163 in the cloud log, which may block experiment report generation. Related: #2011 (comment).

BTW, will this PR generate a large corpus? That may explain the tons of "no space left on device" errors.

@tokatoka (Contributor Author) commented Aug 6, 2024

> BTW, will this PR generate a large corpus? That may explain the tons of "no space left on device" errors.

Yes. I'm thinking about a fix for it now.
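
Purely to illustrate one direction such a fix could take (this is not what the PR implements), here is a sketch that bounds the on-disk corpus by evicting the oldest files once a byte budget is exceeded; the budget value and function name are assumptions, not FuzzBench or LibAFL settings.

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Assumed budget for illustration only; not a FuzzBench or LibAFL setting.
const MAX_CORPUS_BYTES: u64 = 2 * 1024 * 1024 * 1024; // 2 GiB

/// Delete the oldest corpus files until the directory fits in the budget.
fn enforce_corpus_budget(dir: &Path) -> io::Result<()> {
    // Gather (mtime, size, path) for every regular file in the corpus dir.
    let mut files: Vec<_> = fs::read_dir(dir)?
        .filter_map(|entry| {
            let entry = entry.ok()?;
            let meta = entry.metadata().ok()?;
            if !meta.is_file() {
                return None;
            }
            Some((meta.modified().ok()?, meta.len(), entry.path()))
        })
        .collect();

    let mut total: u64 = files.iter().map(|(_, len, _)| *len).sum();

    // Oldest first, so stale entries are evicted before fresh ones.
    files.sort_by_key(|(mtime, _, _)| *mtime);

    for (_, len, path) in files {
        if total <= MAX_CORPUS_BYTES {
            break;
        }
        fs::remove_file(&path)?;
        total -= len;
    }
    Ok(())
}
```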

@tokatoka closed this Aug 6, 2024
@DanBlackwell commented Sep 3, 2024

Any chance you ran this somewhere in the end? It would be interesting to see the results, even if only on a subset of the available benchmarks that don't use much storage (e.g. open_h264 looks bad for storage, as do proj4 and woff2).

@tokatoka (Contributor Author) commented Sep 3, 2024

No, I didn't run this in the end.
