Remove MPIData #270
base: develop
Conversation
[Automated CI benchmark reports for /home/runner/work/Rimu.jl/Rimu.jl (target vs. baseline, x86_64) were posted here; the collapsed result tables are omitted.]
It is probably OK to remove MPIData now that we have PDVec. If you wanted to run it without threading, say just MPI for parallelisation, would that work as well (and be as fast) as using MPIData?

It will work, probably just as fast, or very close to being as fast.
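For context, here is a minimal sketch of the MPI-only mode discussed above. The Hamiltonian, addresses, and keyword arguments are placeholders, and the exact Rimu calls are assumptions based on the documented API, so the details may differ:

```julia
# Sketch: distribute walkers over MPI ranks only, with one Julia thread per
# rank, launched e.g. as `mpirun -np 4 julia -t 1 --project script.jl`.
using Rimu

# Small Bose-Hubbard example (assumed constructor signature).
H = HubbardReal1D(BoseFS((1, 1, 1, 1, 1)); u = 6.0)

# PDVec is expected to distribute its data across MPI ranks when the script
# is launched under MPI; with `-t 1` there is no threading on each rank.
dv = PDVec(starting_address(H) => 1.0)

result = lomc!(H, dv; laststep = 1_000)
```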
Pull Request Test Coverage Report for Build 11712572551 (Coveralls)
Thanks for this, this is great!
Just one comment, see below.
function MPI.Allreduce(ms::Rimu.MultiScalar{T}, op, comm::MPI.Comm) where {T<:Tuple}
    result_vector = MPI.Allreduce([ms...], op, comm)
    return Rimu.MultiScalar(T(result_vector))
end
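As a hedged usage example (not part of the PR), this is roughly how the method above would be called; the values are made up and an initialised MPI environment is assumed:

```julia
using MPI, Rimu

MPI.Init()

# Element-wise reduction of the wrapped tuple across all ranks.
ms = Rimu.MultiScalar((1.0, 2.5, 3.0))
total = MPI.Allreduce(ms, +, MPI.COMM_WORLD)
```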
Coveralls thinks that this method is not tested. I remember that I introduced it for a reason: MPI reductions with custom operators were not possible on ARM processors (and actually a bit slow on Intel).
If we are not using reductions for MultiScalars anymore, then maybe we should remove the method? If we keep it, we should test it, though.
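If the method is kept, a test along these lines could cover it. This is only a sketch (file name and setup are assumptions) and would need to run under MPI:

```julia
# Sketch of an MPI test, e.g. `mpiexec -n 2 julia --project this_file.jl`.
using MPI, Rimu, Test

MPI.Init()
comm = MPI.COMM_WORLD

ms = Rimu.MultiScalar((1.0, 2.0))
summed = MPI.Allreduce(ms, +, comm)

# Every entry should equal the single-rank value times the number of ranks.
@test [summed...] == collect((1.0, 2.0) .* MPI.Comm_size(comm))
```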
Breaking changes: MPIData is removed.