Replies: 2 comments 1 reply
-
ZFS indeed tries to balance writes proportionally to the use percentage of a specific vdev versus the whole pool. But this logic competes with another logic that tries to send more writes to the vdevs that can write faster. For slow writes the first logic should prevail, but for fast writes the second likely will. Is there a chance that your smaller drives are somehow much faster? PS: If you are using large blocks, the first logic may also be less efficient, and this change may slightly improve the situation: #13388.
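If it helps to experiment with the relative weight of those two logics, both have OpenZFS module parameters on Linux. The mapping below is my reading of how they correspond to the "space" and "speed" logic described above, so treat it as a hedged starting point rather than an authoritative answer, and note that defaults can differ between releases:

```sh
# Inspect the current settings (both usually default to 1 = enabled).
cat /sys/module/zfs/parameters/metaslab_bias_enabled      # space-based bias toward emptier vdevs
cat /sys/module/zfs/parameters/zio_dva_throttle_enabled   # allocation throttle steering writes to less-busy vdevs

# Temporarily turn off the allocation throttle so that only the
# space-based bias steers new writes; echo 1 to restore the default.
echo 0 > /sys/module/zfs/parameters/zio_dva_throttle_enabled
```

Whether flipping these actually evens out allocation for a given workload is something to verify with `zpool iostat -v` before and after.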
1 reply
-
Sampling the three different systems under my direction, I can say that VDEV allocation can vary pretty wildly. An early FreeBSD system and a more recently patched ZoL system running on Ubuntu 20.04 LTS look fairly balanced, but the other Ubuntu 20.04 LTS ZoL system, not so much. All disks and VDEV groups are the same, all fabrics SAS3.
Below is the outlier Linux system:
```
zpool list -v
NAME                        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool3                       254T   163T  91.6T        -         -     1%    64%  1.00x    ONLINE  -
  raidz3                    127T  90.6T  36.4T        -         -     3%  71.4%      -    ONLINE
    wwn-0x5000cca264340cec     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca26434359c     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca26433cf48     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca2646d917c     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca26430cfbc     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca26400738c     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca26400f3e0     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca26400dc90     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca26400dd90     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca2646b9fbc     -      -      -        -         -      -      -      -    ONLINE
  raidz3                    127T  72.1T  55.2T        -         -     0%  56.7%      -    ONLINE
    wwn-0x5000cca28e899370     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca28e8bc248     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca28e8bf3d0     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca2973ac288     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca2973b953c     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca2973ba8a4     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca2973c7360     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca29816b850     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca2a37672f8     -      -      -        -         -      -      -      -    ONLINE
    wwn-0x5000cca2a387d940     -      -      -        -         -      -      -      -    ONLINE
```
On Thu, May 12, 2022 at 9:13 PM filonovd wrote:
> Thanks for getting back to me.
> The newer and bigger drives are connected via a SAS expander, which might slow them down overall. Are there any tunables to give less priority to the "speed" logic versus the "space" logic?
> Using the faster VDEVs is fine for as long as we have space there, but soon those will be full and ZFS will have to use the slower ones, and only the slower ones, which sounds a bit sub-optimal. On the other hand, this is purely a backup server and access speed is not that important there.
-
Hi -
As far as I understand, ZFS should distribute data according to the percentage of available space on each VDEV. Sounds really great, on paper.
Yesterday I had a chance to test it out in production by building a fairly large backup zpool from scratch. I had 44x14TB + 18x10TB + 16x12TB drives, so I came up with this config -
As you can see, the largest VDEVs (four 11x14TB raidz2) are the least used, and the smaller ones are being used more.
Can somebody explain what's going on here? I am still pushing data there and will see how that data distributes across the pool later on. But if there is something I didn't do right to have it distributed properly, it is better to learn that now and fix it before I have the whole 500TB backed up.
P.S. The data files are fairly small; it's not like I pushed a single 8TB file and it just landed on an empty VDEV.
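For watching how the remaining backup data spreads across the vdevs, `zpool iostat -v` reports per-vdev bandwidth at a chosen interval, and periodic snapshots of `zpool list -v` show whether the CAP percentages converge. A minimal sketch, with the pool name `backup` standing in for the real pool and the log path chosen only for illustration:

```sh
# Run interactively: per-vdev I/O statistics every 10 seconds; the write
# bandwidth column shows which raidz vdevs receive most of the new data.
zpool iostat -v backup 10

# Or, as a background loop: record the per-vdev fill level once an hour
# to see whether the CAP percentages drift toward each other over time.
while true; do
    date
    zpool list -v backup
    sleep 3600
done >> /root/zpool-balance.log
```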