Hey #freebsd #bhyve users!

I'm trying to test draid virtually for #openzfsmastery, because nobody's gonna give me a shelf of 60 disks to play with. I set everything up like:

disk35_name="disk35"
disk35_type="ahci-hd"
disk35_dev="sparse-zvol"
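Typing out dozens of those stanzas by hand gets old fast; a small loop can generate them (a sketch — the function name, count, and disk type here are my own, adjust and append the output to the guest's vm-bhyve .conf):

```shell
#!/bin/sh
# Sketch: emit vm-bhyve disk stanzas disk0..disk(N-1) of the given type.
gen_disks() {
    n=$1 type=$2 i=0
    while [ "$i" -lt "$n" ]; do
        printf 'disk%d_name="disk%d"\n' "$i" "$i"
        printf 'disk%d_type="%s"\n' "$i" "$type"
        printf 'disk%d_dev="sparse-zvol"\n' "$i"
        i=$((i + 1))
    done
}

gen_disks 40 ahci-hd
```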

Seems that if I have 35 ahci-hd entries, disk0 through disk34, bhyve and FreeBSD work. At disk35, the host panics on boot.

Is this expected with bhyve? Or does FreeBSD need a special tweak with 36 disks?

@mwl have you tried this with nvme as the disk type? s/ahci-hd/nvme/ for the type.

Also, maybe it runs out of “PCI slots”. Try adding this:

ahci_device_limit="8"

I’ll try this tomorrow and see what comes up

@dch I have not, good idea! maybe virtio as well? Gonna play around some this weekend...
@mwl @dch I did as part of my testing. I use NVMe devices for everything, no matter the underlying storage (thanks @ctuffli ). FWIW ahci-hd should only be used for compatibility requirements (usually to get virtio drivers onboard for Windows guests), as it is dog slow.
@Tubsta @dch @mwl hopefully goes without saying, but yell at me if a virtual shelf of N NVMe disks in bhyve doesn’t work
@ctuffli @dch @mwl You might know the current state, what is the pci device limit currently in bhyve? I think it is 32 but might be wrong
@Tubsta @dch @mwl bhyve's "slot" specification is at most bus:pcislot:function, where bus is 0-255, pcislot is 0-31, and function is 0-7. If the maths don't fail me, that is around 64k-ish devices
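The maths don't fail: the slot space works out to

```shell
# 256 buses x 32 pcislots x 8 functions
echo $((256 * 32 * 8))   # prints 65536
```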

@ctuffli @Tubsta @dch @mwl So, we talking bhyve or vm-bhyve?

I would not be surprised if vm-bhyve assumes that "no one would ever want more than 8 devices", or at least no more than 8 devices of one type.

Have you tried raw bhyve?

@dexter @ctuffli @dch @mwl I'm plugging it into vm-bhyve without issue. I've got another idea, hold my beer #bhyve
@dexter @ctuffli @dch @mwl Stop the press!!! I got it to work. The reason we think it isn't working is that the console has not been put on bus 0. If the guest already has a functioning 15.0 install with SSH, you can just ssh to it and you are all good. In my test, I have 40 NVMe devices come up in dmesg.
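In raw bhyve terms, the constraint is that the lpc device (which carries com1) can only live on PCI bus 0, so with dozens of auto-assigned slots the console can get pushed off it. A minimal sketch of an invocation that pins it, with all names and zvol paths made up for illustration and the disk farm spilling onto higher buses:

```shell
# Keep hostbridge and lpc on bus 0; lpc may only sit on bus 0.
bhyve -c 2 -m 4G -A -H \
  -s 0,hostbridge \
  -s 1,lpc -l com1,stdio \
  -s 1:0:0,nvme,/dev/zvol/zroot/disk0 \
  -s 1:1:0,nvme,/dev/zvol/zroot/disk1 \
  draidtest
```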
@ctuffli @dch @dexter @mwl All works as intended with vm-bhyve, just no console:

root@disk:~ # cat ndadevices.txt | xargs zpool create -m /draid zdraid draid
root@disk:~ # zpool status
  pool: zdraid
 state: ONLINE
config:

        NAME                  STATE     READ WRITE CKSUM
        zdraid                ONLINE       0     0     0
          draid1:8d:39c:0s-0  ONLINE       0     0     0
            nda2p1            ONLINE       0     0     0
            nda3p1            ONLINE       0     0     0
            nda4p1            ONLINE       0     0     0
            nda5p1            ONLINE       0     0     0
            nda6p1            ONLINE       0     0     0
            nda7p1            ONLINE       0     0     0
            nda8p1            ONLINE       0     0     0
            nda9p1            ONLINE       0     0     0
            nda10p1           ONLINE       0     0     0
            nda11p1           ONLINE       0     0     0
            nda12p1           ONLINE       0     0     0
            nda13p1           ONLINE       0     0     0
            nda14p1           ONLINE       0     0     0
            nda15p1           ONLINE       0     0     0
            nda16p1           ONLINE       0     0     0
            nda17p1           ONLINE       0     0     0
            nda18p1           ONLINE       0     0     0
            nda19p1           ONLINE       0     0     0
            nda20p1           ONLINE       0     0     0
            nda21p1           ONLINE       0     0     0
            nda22p1           ONLINE       0     0     0
            nda23p1           ONLINE       0     0     0
            nda24p1           ONLINE       0     0     0
            nda25p1           ONLINE       0     0     0
            nda26p1           ONLINE       0     0     0
            nda27p1           ONLINE       0     0     0
            nda28p1           ONLINE       0     0     0
            nda29p1           ONLINE       0     0     0
            nda30p1           ONLINE       0     0     0
            nda31p1           ONLINE       0     0     0
            nda32p1           ONLINE       0     0     0
            nda33p1           ONLINE       0     0     0
            nda34p1           ONLINE       0     0     0
            nda35p1           ONLINE       0     0     0
            nda36p1           ONLINE       0     0     0
            nda37p1           ONLINE       0     0     0
            nda38p1           ONLINE       0     0     0
            nda39p1           ONLINE       0     0     0
            nda40p1           ONLINE       0     0     0

root@disk:~ # zfs list zdraid
NAME     USED  AVAIL  REFER  MOUNTPOINT
zdraid  3.37M   166G   768K  /draid
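The ndadevices.txt fed to xargs above can be generated with a short loop — a sketch, assuming each ndaN disk already carries a single freebsd-zfs partition (e.g. via gpart). Note 39 children, matching draid1:8d:39c:0s:

```shell
#!/bin/sh
# Sketch: list the 39 draid children, nda2p1..nda40p1, one per line.
i=2
while [ "$i" -le 40 ]; do
    echo "nda${i}p1"
    i=$((i + 1))
done > ndadevices.txt
```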
@ctuffli @dch @dexter @mwl I'll keep this guest online for a week in case you have any other issues that need a look. 16:36, beer o'clock time....

@Tubsta @dch @dexter @ctuffli thank you!!!

time zones rock, I go to sleep and someone else fixes my problems. ;-)

@dexter @ctuffli @Tubsta @dch vm-bhyve, because I'm lazy
@mwl @dexter @ctuffli @dch No, not lazy. You just want consistency because fat fingers on a massive native command line just gives you headaches you don’t need (well that is my excuse) #repeatable #bhyve

@Tubsta @dch @dexter @ctuffli

bhyve seems to work well, but yeah, those command lines are…

@mwl @Tubsta @dch @ctuffli I am speaking purely from a diagnostics perspective. I believe, to their credit, their log shows the command that was executed. Which is kinda important, given the many “bhyve” bugs along the lines of “it’s treating my boot drive as if it is a CD-ROM drive …” #TrueStory!

So yeah. Share the underlying actual stuff please because very few vm-bhyve bugs are bhyve bugs.

@mwl @Tubsta @dch @ctuffli I mean like, I didn’t write an OpenBSD bug report when I found that typo in Absolute OpenBSD. In of course the one chapter you didn’t trust me to have in advance. Nosiree. THEN you tasked me with raising over $150K to date (USD no less, not that Looney stuff) for a conference.

But yeah sure, blame bhyve.

Until of course it comes time to write a book on the subject… ahem!

@dexter @Tubsta @dch @ctuffli

Sure, I ask if something's weird and file a bug.

But sometimes, I'm doing something actively daft. If vm-bhyve hangs trying to launch a guest with 75 drives, my first instinct is to say "well, this would be because I'm a damn idiot trying to use 75 drives on one VM..."

@mwl @Tubsta @dch @ctuffli Bruh. bhyve now supports NUMA domains, because test ALL THE THINGS. 75 drives is like… a good start. All scaling issues should be explored to their unholy extremes.

U got this!

@dexter @Tubsta @dch @ctuffli

okay then, I'll send you my stupid failures.

@dexter @mwl @dch @ctuffli That is how I found the issue yesterday. The logs are excellent in vm-bhyve.