can we avoid large memory allocations and swapping? #40
Comments
You can just use whatever OS facility to set the process memory limit, maybe? Something somewhere will have to give a limit. I usually just restrict the integer sizes of things that go to an allocator, so they can't be too ridiculously big.
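For illustration, here is a minimal C++ sketch of both ideas, assuming a POSIX system; the 2 GiB cap and the kMaxLen bound are arbitrary placeholder values, not anything RcppDeepState currently does:

    #include <sys/resource.h>   // setrlimit, RLIMIT_AS
    #include <algorithm>        // std::min
    #include <cstddef>
    #include <vector>

    // Cap the process address space so a runaway allocation fails fast
    // (allocation error) instead of pushing the machine into swap.
    static void limit_address_space(std::size_t max_bytes) {
      struct rlimit lim;
      lim.rlim_cur = max_bytes;
      lim.rlim_max = max_bytes;
      setrlimit(RLIMIT_AS, &lim);
    }

    int main() {
      limit_address_space(2UL * 1024 * 1024 * 1024);  // e.g. 2 GiB
      // Clamp any generated length before it reaches an allocator.
      const std::size_t kMaxLen = 1 << 20;            // arbitrary bound
      std::size_t requested = 1000000000;             // stand-in for a fuzzer-generated value
      std::vector<double> x(std::min(requested, kMaxLen));
      return 0;
    }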
According to https://alex.dzyoba.com/blog/restrict-memory/, ulimit does not work, but qemu and lxc-execute do. Do you have any experience with those tools?
Not really; I have used QEMU indirectly, in that some fuzzers use it to carry out their instrumentation. I think it has a pretty serious overhead?
OK, so if this turns out to be a big issue I guess we should try lxc-execute first.
Right now this is hard-coded; we could allow the user to specify the max vector size as an argument to the R function that does the random input generation, e.g., RcppDeepState::deepstate_compile_run(max.vector.length=1000).
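As a rough sketch of how a generated harness might honor such a bound: the max.vector.length argument does not exist yet, kMaxVectorLength stands in for the value the generator would substitute, and DeepState_IntInRange / DeepState_Double are assumed DeepState helpers (in a real RcppDeepState harness the buffer would likely be an Rcpp::NumericVector rather than a std::vector):

    #include <deepstate/DeepState.hpp>
    #include <vector>

    // Hypothetical: the harness generator would substitute the user-supplied
    // max.vector.length value here instead of a hard-coded constant.
    static const int kMaxVectorLength = 1000;

    TEST(binsegRcpp, bounded_random_vector) {
      // Lengths are drawn only up to the bound, so the harness never asks
      // the allocator for an enormous buffer.
      int len = DeepState_IntInRange(0, kMaxVectorLength);
      std::vector<double> data(len);
      for (int i = 0; i < len; ++i) {
        data[i] = DeepState_Double();
      }
      // ... pass `data` (and other bounded arguments) to the function under test ...
    }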
Hi @akhikolla, I noticed when testing binsegRcpp that if you pass a large integer as the max_segments argument, R may try to do a big memory allocation and freeze the computer (thrashing).
It would be nice to automatically avoid that (kill the process instead of thrashing), but I'm not sure how.
@agroce did you ever encounter something similar? Any ideas for a solution?
(Of course the user could rewrite the code to exit early if a big memory allocation is about to happen, but it would be better if we did not require the user to rewrite the code.)
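For reference, the kind of "exit early" guard the user could add might look like the following; binseg_guarded and its arguments are placeholders for illustration, not the actual binsegRcpp interface:

    #include <Rcpp.h>

    // Hypothetical guard: reject an unsatisfiable max_segments before any
    // large allocation happens, instead of letting R thrash swap.
    // [[Rcpp::export]]
    Rcpp::NumericVector binseg_guarded(Rcpp::NumericVector data, int max_segments) {
      if (max_segments < 1 || max_segments > data.size()) {
        Rcpp::stop("max_segments must be between 1 and the number of data points");
      }
      // ... run the actual algorithm and return its result ...
      return Rcpp::NumericVector(max_segments);
    }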