Recent kernels have SR-IOV support for these chips too. B&H has them listed for $950.
https://www.bhphotovideo.com/c/product/1959142-REG/intel_33p...
When 32GB NVIDIA cards seem to start around $4000, that's a big enough price gap to matter for a lot of applications.

Intel Arc Pro B70 Graphics Card
2800 MHz boost clock, 32 Xe cores, 256 XMX AI engines, Xe2 architecture, 32 RT units, 32GB of ECC GDDR6 VRAM, 256-bit memory interface, 608 GB/s memory bandwidth, DisplayPort 2.1 (up to 7680 x 4320 @ 120 Hz), PCI Express 5.0 x16 interface.
My guess is the main "AI" contribution here is automating the work around the actual fuzzing: setting up the test environment and harness, reading the code plus commit history plus published vulns for similar projects, identifying likely trouble spots, gathering seed data, writing scripts to generate more seed data that reaches those trouble spots, adding instrumentation to the target to detect conditions ASan etc. don't catch, writing PoC code, writing draft patches... That's a lot of labor, and coding agents can do a mediocre job of all of it for the cost of compute.
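To make the "harness" step concrete, here's a minimal sketch using Go's built-in fuzzing; the package, parseRecord, and the seed bytes are all made up for illustration, not taken from any project they've targeted:

    // Sketch of the kind of fuzz harness an agent can scaffold automatically.
    // parseRecord is a hypothetical function under test (assumption).
    package parser

    import "testing"

    func FuzzParseRecord(f *testing.F) {
        // Seed corpus: the "gathering seed data" step above.
        f.Add([]byte("REC\x00\x01hello"))
        f.Fuzz(func(t *testing.T, data []byte) {
            // Any panic or sanitizer report here surfaces as a finding.
            parseRecord(data)
        })
    }

The harness itself is boilerplate; the labor an agent saves is in building the target, picking seeds, and triaging what falls out.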
They pitch their company as finding bugs "with AI". It's not hard to point one of the coding agents at a repo URL and have it find bugs even in code that's been in the wild for a long time; looking at their list, that's likely what they're doing.
Is it actually necessary to run transcontinental consensus? Apps in a given location aren't movable, so for a given app it's known which part of the network writes can come from. That would require partitioning the namespace, but given that apps aren't movable, does that matter? It feels like other areas, such as docs and tooling, would benefit from relatively higher prioritization.
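If locality really does pin writes, the routing logic is nearly trivial; a toy sketch in Go, where the app IDs, regions, and groupFor helper are all invented for illustration:

    // Toy sketch of the locality idea: each app is pinned to a home region,
    // so a write for that app only needs consensus within that region's group.
    package main

    import "fmt"

    type Region string

    // Static app -> home region assignment; apps aren't movable.
    var appHome = map[string]Region{
        "app-eu-1": "eu-west",
        "app-us-1": "us-east",
    }

    // groupFor returns the consensus group owning an app's slice of the namespace.
    func groupFor(app string) Region {
        return appHome[app]
    }

    func main() {
        // A write for app-eu-1 stays in the eu-west group; no
        // transcontinental round trip needed.
        fmt.Println("route write for app-eu-1 to", groupFor("app-eu-1"))
    }

The open question is whether a static partition like this is acceptable operationally, not whether it's hard to implement.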