-
I've got a hacked branch that just barely pulls off some NUMA locking with madMAx on my system, and I've only noticed a few percent better throughput. But yeah, maybe something will happen there. I have a Supermicro X9DR3-LN4F+ with a pair of Intel Xeon E5-2690 v2, separate NUMA-locked RAM drives (one per node), and a RAID0 of 4x Inland Premium 1TB that is shared and sits entirely on a single CPU's node. That last bit has me "concerned" that my numbers aren't really representative, but I also haven't noticed a significant difference between plots on each node. I would have expected that if the RAID being on one node were an issue, the node it was on would be faster and the other slower. Also, splitting the drives across nodes would reduce peak bandwidth. But I may yet get around to trying that.
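
For reference, a minimal sketch of what node-level locking can look like with stock numactl and tmpfs (no hacked branch needed), assuming the madMAx chia_plot binary with its usual -t/-2/-d/-r flags; the sizes, mount points, and thread counts are illustrative:

# Create one RAM drive per NUMA node, bound to that node's memory.
sudo mount -t tmpfs -o size=110G,mpol=bind:0 tmpfs /mnt/ram0
sudo mount -t tmpfs -o size=110G,mpol=bind:1 tmpfs /mnt/ram1

# Run one plotter per node, with its CPUs and allocations pinned there.
numactl --cpunodebind=0 --membind=0 \
    ./chia_plot -r 20 -t /mnt/raid0/ -2 /mnt/ram0/ -d /mnt/hdd/ &
numactl --cpunodebind=1 --membind=1 \
    ./chia_plot -r 20 -t /mnt/raid0/ -2 /mnt/ram1/ -d /mnt/hdd/ &
wait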
-
Use

numactl --physcpubind=0-3

for CPU core binding. Your script runs 20-40 min longer because of long-known Linux kernel CPU & I/O scheduling behavior. I like your plot manager, but plots take longer than running them all manually with numactl core binding/affinity.
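
To make that concrete, a quick sketch (core IDs and paths illustrative): --physcpubind pins a process to explicit core IDs rather than a whole node, so check the core-to-node map first.

# See which core IDs belong to which NUMA node on this machine.
numactl --hardware

# Bind to cores 0-3 only (IDs vary per machine); --membind keeps
# memory allocations on the matching node.
numactl --physcpubind=0-3 --membind=0 ./chia_plot -r 4 -t /mnt/raid0/ -d /mnt/hdd/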