Cannot add osd #71

Closed
raito9x opened this issue Aug 22, 2024 · 5 comments

raito9x commented Aug 22, 2024

Hi, I installed Debian 12 and the newest Vitastor, v1.8.0.
But when I add the first OSD with the command
vitastor-disk prepare /dev/sdb
it fails with the error: Invalid argument

I ran strace vitastor-disk prepare /dev/sdb and the log is here:

newfstatat(AT_FDCWD, "/dev/sdc1", {st_mode=S_IFBLK|0600, st_rdev=makedev(0x8, 0x21), ...}, 0) = 0
newfstatat(AT_FDCWD, "/dev/disk/by-partuuid/f579b758-1e49-5842-9de5-339de1b5de32", 0x7ffe4267bff0, AT_SYMLINK_NOFOLLOW) = -1 ENOENT (No such file or directory)
clock_nanosleep(CLOCK_REALTIME, 0, {tv_sec=0, tv_nsec=100000000}, NULL) = 0
newfstatat(AT_FDCWD, "/dev/disk/by-partuuid/f579b758-1e49-5842-9de5-339de1b5de32", {st_mode=S_IFLNK|0777, st_size=10, ...}, AT_SYMLINK_NOFOLLOW) = 0
readlink("/dev", 0x7ffe4267b500, 1023)  = -1 EINVAL (Invalid argument)
readlink("/dev/disk", 0x7ffe4267b500, 1023) = -1 EINVAL (Invalid argument)
readlink("/dev/disk/by-partuuid", 0x7ffe4267b500, 1023) = -1 EINVAL (Invalid argument)
readlink("/dev/disk/by-partuuid/f579b758-1e49-5842-9de5-339de1b5de32", "../../sdc1", 1023) = 10
readlink("/dev/sdc1", 0x7ffe4267b500, 1023) = -1 EINVAL (Invalid argument)
newfstatat(AT_FDCWD, "/sys/block/sdc", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
openat(AT_FDCWD, "/sys/block/sdc/device/scsi_disk", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 3
newfstatat(3, "", {st_mode=S_IFDIR|0755, st_size=0, ...}, AT_EMPTY_PATH) = 0
getdents64(3, 0x55863836e030 /* 3 entries */, 32768) = 80
getdents64(3, 0x55863836e030 /* 0 entries */, 32768) = 0
close(3)                                = 0
openat(AT_FDCWD, "/sys/block/sdc/device/scsi_disk/0:0:2:0/cache_type", O_RDONLY) = 3
read(3, "none\n", 1024)                 = 5
read(3, "", 1019)                       = 0
close(3)                                = 0
openat(AT_FDCWD, "/sys/block/sdc/device/scsi_disk/0:0:2:0/cache_type", O_WRONLY) = 3
write(3, "write through", 13)           = -1 EINVAL (Invalid argument)
dup(2)                                  = 4
fcntl(4, F_GETFL)                       = 0x2 (flags O_RDWR)
newfstatat(4, "", {st_mode=S_IFCHR|0600, st_rdev=makedev(0x88, 0), ...}, AT_EMPTY_PATH) = 0
write(4, "write: Invalid argument\n", 24write: Invalid argument
) = 24
close(4)                                = 0
exit_group(1)                           = ?
+++ exited with 1 +++
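
For the record, the fatal step in the trace is the write of "write through" to cache_type; the earlier readlink EINVAL results are normal for path components that aren't symlinks. A minimal by-hand reproduction (sdc and the SCSI address 0:0:2:0 are taken from the trace above and will differ per machine):

cat /sys/block/sdc/device/scsi_disk/0:0:2:0/cache_type    # prints "none"
echo 'write through' > /sys/block/sdc/device/scsi_disk/0:0:2:0/cache_type
# -> write error: Invalid argument (the controller refuses the mode change)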

Please help me

vitalif (Owner) commented Aug 22, 2024

Hi
Yeah, vitastor-disk expects write through in cache_type, but your device has none and doesn't allow overwriting it.
I think you can work around this issue by adding --disable_data_fsync 1 --skip_cache_check 1 to vitastor-disk... and you'll probably have to run vitastor-disk update-sb /dev/vitastor/xxx --skip_cache_check 1 too, because of #70, which was fixed after 1.8.0 (without it the OSD probably won't start).
Which controller are you using, though, such that it shows none?
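
For reference, the suggested workaround as a command sequence (a sketch; /dev/vitastor/xxx stands for the volume that prepare creates, and the flags are exactly those named above):

# prepare the OSD, disabling fsync and skipping the cache_type check
vitastor-disk prepare /dev/sdb --disable_data_fsync 1 --skip_cache_check 1
# on 1.8.0, also persist skip_cache_check into the superblock (see #70),
# otherwise the OSD probably won't start
vitastor-disk update-sb /dev/vitastor/xxx --skip_cache_check 1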

raito9x (Author) commented Aug 23, 2024

Oh I see, my command was missing an option. It must be vitastor-disk prepare /dev/sdb --disable_data_fsync on.
I thought --disable_data_fsync was optional, but no, it's actually required here.
But here is what I can't understand: I cannot create an EC pool or a replica pool

root@vitastor-2:~# vitastor-cli create-pool testpool --ec 2+2 --pg_count 256
Not enough matching OSDs to create pool. Change parameters or add --force to create a degraded pool.

At least 4 (pg_size=4) OSDs should have:
- block_size 128k
- bitmap_granularity 4k
- immediate_commit all
- different parent 'host' nodes
root@vitastor-2:~# vitastor-cli create-pool testpool --pg_size 2 --pg_count 256
Not enough matching OSDs to create pool. Change parameters or add --force to create a degraded pool.

At least 2 (pg_size=2) OSDs should have:
- block_size 128k
- bitmap_granularity 4k
- immediate_commit all
- different parent 'host' nodes

This is despite the status being OK:

root@vitastor-2:~# vitastor-cli status
  cluster:
    etcd: 3 / 3 up, 1.1 M database size
    mon:  3 up, master vitastor-3
    osd:  16 / 16 up

  warning:
    2 osds are full
  
  data:
    raw:   0 B used, 1.6 T / 1.6 T available
    state: 0 B clean
    pools: 0 / 0 active
    pgs:   
  
  io:
    client: 0 B/s rd, 0 op/s rd, 0 B/s wr, 0 op/s wr
TYPE   NAME        UP  SIZE   USED%  TAGS  WEIGHT  BLOCK  BITMAP  IMM   NOOUT
host   vitastor-1                                                            
  osd  1           up  99.9G  0 %          1       1M     4k      none  -    
  osd  2           up  99.9G  0 %          1       1M     4k      none  -    
  osd  3           up  99.9G  0 %          1       1M     4k      none  -    
  osd  4           up  99.9G  0 %          1       1M     4k      none  -  
host   vitastor-2                                                            
  osd  5           up  99.9G  0 %          1       1M     4k      none  -    
  osd  6           up  99.9G  0 %          1       1M     4k      none  -    
  osd  7           up  99.9G  0 %          1       1M     4k      none  -    
  osd  8           up  99.9G  0 %          1       1M     4k      none  -    
host   vitastor-3                                                            
  osd  9           up  99.9G  0 %          1       1M     4k      none  -    
  osd  10          up  99.9G  0 %          1       1M     4k      none  -    
  osd  11          up  99.9G  0 %          1       1M     4k      none  -    
  osd  12          up  99.9G  0 %          1       1M     4k      none  -    
host   vitastor-4                                                            
  osd  13          up  99.9G  0 %          1       1M     4k      none  -    
  osd  14          up  99.9G  0 %          1       1M     4k      none  -    
  osd  15          up  99.9G  0 %          1       1M     4k      none  -    
  osd  16          up  99.9G  0 %          1       1M     4k      none  -   

Why is there a warning in the status?

    2 osds are full

And why is the PG state offline?

root@vitastor-2:~# vitastor-cli pg-list
POOL          NUM  OSD SET  PRIMARY  DATA CLEAN  MISPLACED  DEGRADED  INCOMPLETE  STATE  
replica-pool  1    0,0      -        0 B         0 B        0 B       0 B         offline
replica-pool  2    0,0      -        0 B         0 B        0 B       0 B         offline
replica-pool  3    0,0      -        0 B         0 B        0 B       0 B         offline
replica-pool  4    0,0      -        0 B         0 B        0 B       0 B         offline
ec-pool       1    0,0,0,0  -        0 B         0 B        0 B       0 B         offline
ec-pool       2    0,0,0,0  -        0 B         0 B        0 B       0 B         offline
ec-pool       3    0,0,0,0  -        0 B         0 B        0 B       0 B         offline
ec-pool       4    0,0,0,0  -        0 B         0 B        0 B       0 B         offline

vitalif (Owner) commented Aug 23, 2024

It says it wants a 128k block size, but your OSDs have a 1M block.
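
If this version of vitastor-cli supports per-pool layout overrides on create-pool (an assumption here; verify with vitastor-cli create-pool --help), matching the pool to the OSDs shown above would look roughly like:

# sketch: match the existing OSDs (1M block, immediate_commit none);
# --block_size and --immediate_commit are assumed to be accepted by create-pool
vitastor-cli create-pool testpool --ec 2+2 --pg_count 256 --block_size 1M --immediate_commit none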

raito9x (Author) commented Aug 26, 2024

Why does the default command vitastor-disk prepare /dev/sdb --disable_data_fsync on automatically create OSDs with a 1M block instead of 128k? :(
How can I create an OSD with a 128k block?

vitalif (Owner) commented Aug 27, 2024

Ok, I already replied to you in the Telegram chat, but I want to add that I implemented block size and immediate commit autodetection, i.e. if all OSDs use the same block size (for example 1M), then create-pool will warn about it and select it automatically. It's in a branch and will go into the next release.
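
Until that release, a likely way to get 128k-block OSDs instead is to set the block size explicitly at prepare time (a sketch; it assumes vitastor-disk prepare accepts OSD layout options such as --block_size, so verify with vitastor-disk prepare --help). The 1M default most likely comes from the disks being detected as rotational, since Vitastor defaults to a 1M block for HDDs and 128k for SSDs:

vitastor-disk prepare /dev/sdb --disable_data_fsync on --block_size 128k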

raito9x closed this as completed Oct 14, 2024