Hey #freebsd #bhyve users!

I'm trying to test draid virtually for #openzfsmastery, because nobody's gonna give me a shelf of 60 disks to play with. I set everything up like:

disk35_name="disk35"
disk35_type="ahci-hd"
disk35_dev="sparse-zvol"
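
Typing those out by hand gets old fast, so here's the loop I use to spit out all 36 entries (disk0 through disk35), same diskN_* convention as above:

```shell
# Sketch: emit the vm-bhyve disk entries disk0..disk35 instead of
# hand-typing them; same naming convention as the config above.
for i in $(seq 0 35); do
  printf 'disk%d_name="disk%d"\n' "$i" "$i"
  printf 'disk%d_type="ahci-hd"\n' "$i"
  printf 'disk%d_dev="sparse-zvol"\n' "$i"
done
```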

Seems that if I have 35 ahci-hd entries, disk0 through disk34, bhyve and FreeBSD work. At disk35, the host panics on boot.

Is this expected with bhyve? Or does FreeBSD need a special tweak with 36 disks?

@mwl have you tried this with NVMe as the disk type? s/virtio-blk/nvme/ for the type.

also, maybe it runs out of “pci slots”. try adding this:

ahci_device_limit="8"

I’ll try this tomorrow and see what comes up

@dch I have not, good idea! maybe virtio as well? Gonna play around some this weekend...
@mwl @dch I did as part of my testing. I use NVMe devices for everything, no matter the underlying storage (thanks @ctuffli ). FWIW ahci-hd should only be used for compatibility requirements (usually to get virtio drivers onboard for Windows guests) as it is dog slow.
@Tubsta @dch @mwl hopefully goes without saying, but yell at me if a virtual shelf of N NVMe disks in bhyve doesn’t work
@ctuffli @dch @mwl You might know the current state: what is the PCI device limit in bhyve these days? I think it is 32 but might be wrong
@Tubsta @dch @mwl the bhyve "slot" specification at its fullest is bus:pcislot:function, where bus is 0-255, pcislot is 0-31, and function is 0-7. If the maths don't fail me, that is around 64k-ish devices
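
(the maths do check out: 256 buses x 32 slots x 8 functions)

```shell
# 256 buses x 32 pcislots x 8 functions per the bus:pcislot:function spec
echo $((256 * 32 * 8))   # 65536
```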

@ctuffli @Tubsta @dch @mwl So, we talking bhyve or vm-bhyve?

I would not be surprised if vm-bhyve assumes that “no one would ever want more than 8 devices”, or at least not more than 8 devices of one type.

Have you tried raw bhyve?

@dexter @ctuffli @dch @mwl I'm plugging it into vm-bhyve without issue. I've got another idea, hold my beer #bhyve
@dexter @ctuffli @dch @mwl Stop the press!!! I got it to work. The reason we thought it wasn't working is that the console hadn't been put on bus 0. If the guest already has a functioning 15.0 install with SSH, you can just ssh in and you are all good. In my test, I have 40 NVMe devices come up in dmesg.
@ctuffli @dch @dexter @mwl All works as intended with vm-bhyve, just no console:

root@disk:~ # cat ndadevices.txt | xargs zpool create -m /draid zdraid draid
root@disk:~ # zpool status
  pool: zdraid
 state: ONLINE
config:

        NAME                  STATE     READ WRITE CKSUM
        zdraid                ONLINE       0     0     0
          draid1:8d:39c:0s-0  ONLINE       0     0     0
            nda2p1            ONLINE       0     0     0
            nda3p1            ONLINE       0     0     0
            nda4p1            ONLINE       0     0     0
            nda5p1            ONLINE       0     0     0
            nda6p1            ONLINE       0     0     0
            nda7p1            ONLINE       0     0     0
            nda8p1            ONLINE       0     0     0
            nda9p1            ONLINE       0     0     0
            nda10p1           ONLINE       0     0     0
            nda11p1           ONLINE       0     0     0
            nda12p1           ONLINE       0     0     0
            nda13p1           ONLINE       0     0     0
            nda14p1           ONLINE       0     0     0
            nda15p1           ONLINE       0     0     0
            nda16p1           ONLINE       0     0     0
            nda17p1           ONLINE       0     0     0
            nda18p1           ONLINE       0     0     0
            nda19p1           ONLINE       0     0     0
            nda20p1           ONLINE       0     0     0
            nda21p1           ONLINE       0     0     0
            nda22p1           ONLINE       0     0     0
            nda23p1           ONLINE       0     0     0
            nda24p1           ONLINE       0     0     0
            nda25p1           ONLINE       0     0     0
            nda26p1           ONLINE       0     0     0
            nda27p1           ONLINE       0     0     0
            nda28p1           ONLINE       0     0     0
            nda29p1           ONLINE       0     0     0
            nda30p1           ONLINE       0     0     0
            nda31p1           ONLINE       0     0     0
            nda32p1           ONLINE       0     0     0
            nda33p1           ONLINE       0     0     0
            nda34p1           ONLINE       0     0     0
            nda35p1           ONLINE       0     0     0
            nda36p1           ONLINE       0     0     0
            nda37p1           ONLINE       0     0     0
            nda38p1           ONLINE       0     0     0
            nda39p1           ONLINE       0     0     0
            nda40p1           ONLINE       0     0     0

root@disk:~ # zfs list zdraid
NAME     USED  AVAIL  REFER  MOUNTPOINT
zdraid  3.37M   166G   768K  /draid
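
For anyone decoding that vdev name: draid1:8d:39c:0s reads as parity 1, 8 data disks per redundancy group, 39 children, 0 distributed spares. A throwaway sh sketch that pulls the fields apart:

```shell
# Split a draid vdev spec like "draid1:8d:39c:0s" into its fields:
# draid<parity>:<data>d:<children>c:<spares>s
spec="draid1:8d:39c:0s"
parity=${spec%%:*}; parity=${parity#draid}       # "1"
rest=${spec#*:}                                  # "8d:39c:0s"
data=${rest%%d:*}                                # "8"
children=${rest#*d:}; children=${children%%c:*}  # "39"
spares=${rest##*:}; spares=${spares%s}           # "0"
echo "parity=$parity data=$data children=$children spares=$spares"
```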
@ctuffli @dch @dexter @mwl I'll keep this guest online for a week in case you have any other issues that need a look. 16:36, beer o'clock time....

@Tubsta @dch @dexter @ctuffli thank you!!!

time zones rock, I go to sleep and someone else fixes my problems. ;-)