-
Since this isn't time based, each job would write 10 GB, and each node multiplies that: 10g x 10 jobs x 4 nodes. Your test would finish once 400 GB had been written.
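As a rough sketch (the path and hostnames below are placeholders), a size-based job of that shape would look something like this; with no runtime/time_based set, fio stops once each job has written its full size:

```
# write-shared.fio -- sketch only
[write-shared]
filename=/mnt/nfs/testfile   # placeholder: the shared file on the NFS mount
rw=write
bs=1m
direct=1
size=10g        # 10 GiB per job
numjobs=10      # 10 jobs per node -> 100 GiB per node
```

```
# hosts.list is a placeholder file with one client hostname per line;
# each client node is already running `fio --server`.
# 4 nodes x 100 GiB = 400 GiB written in total before the run finishes.
fio --client=hosts.list write-shared.fio
```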
-
If I understand correctly, I think this is close to what you might want. Here's a read example with a smaller file size, using a [global] section and a [read-file] job. This is the output:
read-file: (groupid=0, jobs=10): err= 0: pid=32835: Tue Nov 14 12:40:35 2023
Run status group 0 (all jobs):
Disk stats (read/write):
Attaching the resulting iolog. Here you can see each thread reading from a different offset for a set size.
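For reference, a jobfile in that shape might look roughly like this (the sizes and paths below are illustrative placeholders, not the values from the run above):

```
# read-file.fio -- illustrative sketch, not the original jobfile
[global]
ioengine=libaio
direct=1
rw=read
bs=1m
thread

[read-file]
filename=/mnt/nfs/testfile    # placeholder path
numjobs=10                    # 10 reader jobs
size=100m                     # each job reads a fixed 100 MiB slice
offset_increment=100m         # job N starts at N * 100 MiB, so the slices do not overlap
write_iolog=read-file.iolog   # records the issued I/O pattern (fio recommends one log file per job)
```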
-
Hi,
I was wondering how I would configure FIO properly to write a single shared file from multiple nodes with multiple jobs per node.
E.g. 4 client nodes with 10 fio jobs each, all writing into one shared 10 GB file on an NFS mount.
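For concreteness, the kind of layout I have in mind looks roughly like this (hostnames, the mount path, and the exact slicing are placeholders, not a verified setup):

```
# shared-10g.fio -- sketch of the intent, not a validated config
[global]
ioengine=libaio
direct=1              # O_DIRECT, to keep client-side caching out of the picture
rw=write
bs=1m
filename=/mnt/nfs/shared-10g.file   # the same file on every node's NFS mount

[writer]
numjobs=10            # 10 jobs per node, 40 writers across the 4 nodes
size=256m             # 40 x 256 MiB = 10 GiB in the shared file
offset_increment=256m # spreads one node's 10 jobs over distinct 256 MiB slices
                      # (this only separates jobs within a node; across the 4 nodes the
                      #  ranges still overlap unless each client also gets its own offset=)
```

```
# each node runs `fio --server`; hosts.list is a placeholder file listing the 4 clients
fio --client=hosts.list shared-10g.fio
```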
I know NFS does not provide POSIX consistency semantics and the resulting file might be "broken" - hence direct I/O to mitigate this a tiny bit. I tested with direct=0 as well, but it throws errors (as is to be expected on NFS):
fio: client: unable to find matching tag (55c9a4301df0)
I tried the following with fio and it kind of works, but it is surprisingly slow:
According to the network stats, each of the 4 clients is writing at around 1 GB/s.
Example for x440-01:
The bandwidth stats would suggest that the file should be done in ~2.5 seconds (10 GB / [4 x 1 GB/s]).
The NFS server sees around 500 MB/s:
But even at 500 MB/s, a 10 GB file would be done in 20 seconds, not 15 minutes.
I can do something similar with IOR:
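For comparison, an IOR run in the same spirit could look like this (rank count, sizes, and the path are placeholders, not my exact command):

```
# 4 nodes x 10 ranks = 40 MPI ranks, all writing one shared 10 GiB file
# (flags follow Open MPI / IOR conventions; -B requests O_DIRECT for the POSIX backend)
mpirun -np 40 --hostfile ./hosts \
    ior -a POSIX -w -e -B \
        -b 256m -t 1m \
        -o /mnt/nfs/ior-shared.file
```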
Network throughput per node is again roughly 1 GB/s for just a couple of seconds:
The test is done in around 20 seconds now.
The NFS server sees around 480 MB/s.
Thanks for any input on this topic :)