the network card i'm poking at, which has two different CPU clusters with Linux running on them, apparently has eMMC and NVMe drives onboard
also it has docker^W containerd installed by default
the operating system is booted by grub??

somebody here was making a joke about the network card running kubernetes

i regret to inform you that the network card does in fact run kubernetes (there is kubelet in ps)

@whitequark at first I was surprised, then remembered ISPs/carriers. Due to how modern networking systems work (think 5G), they run a lot of service stuff in containers, and sticking it on the NIC probably gives faster network connectivity by bypassing PCIe-to-CPU translations.
@lethedata the bf-3 is mostly for AI stuff as far as i know. also like. the CPU on this thing is connected to the NIC (the actual NIC part of the NIC) over... maybe PCIe, maybe AMBA? not sure. some sort of bus. but you're definitely going to still have NIC to CPU translations, is my point

@whitequark The OS on the NIC is also pretty useful for building a cloud: the provider's provisioning stack can run on the NIC while the customer owns all the rest of the hardware, yet still can't interfere with the NIC's offloading stuff. So you can build something similar to the AWS Nitro System, where a lot of the magic like EBS is implemented. For example, you can mount remote NVMe drives and make them appear local to the customer on demand, when they click it in your API/cloud UI.
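A minimal sketch of what that "remote NVMe appearing local" trick looks like from the host side: a fabric-attached namespace served by the DPU shows up as an ordinary `/dev/nvme*` device, and the giveaway is the controller's `transport` attribute in Linux sysfs (`pcie` for a truly local drive, `tcp`/`rdma`/`fc` for NVMe-oF). The controller name used below is illustrative.

```python
from pathlib import Path

def nvme_transport(ctrl: str, sysfs: Path = Path("/sys/class/nvme")) -> str:
    """Read the transport string for an NVMe controller, e.g. 'pcie' or 'tcp'.

    ctrl is a controller name like 'nvme0' (illustrative; enumerate the
    sysfs directory on a real system).
    """
    return (sysfs / ctrl / "transport").read_text().strip()

def is_fabric_attached(transport: str) -> bool:
    """NVMe-oF transports mean the 'local' disk actually lives somewhere else."""
    return transport in {"tcp", "rdma", "fc"}
```

On a Nitro-style setup, the host would report `tcp` or `rdma` here even though the guest just sees a plain block device, because the DPU terminates the fabric side.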

@hikhvar @whitequark @lethedata and networking too. You can do all the fancy VXLAN/EVPN multipathing stuff in the NIC in hardware, and the host just sees a single interface. A single BF3 can push up to wire speed (~400G) in that config. The software bugs are … interesting, though.
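For reference, the encapsulation the NIC is doing in hardware there is defined by RFC 7348: the host's original Ethernet frame gets wrapped in an 8-byte VXLAN header carrying a 24-bit VNI, then sent over UDP (destination port 4789). This Python version is purely illustrative of the header layout, not of how the silicon does it.

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte (I bit set), 3 reserved
    bytes, 24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit field")
    return struct.pack("!II", 0x08 << 24, vni << 8)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """VXLAN payload = header + original L2 frame; the outer UDP/IP/Ethernet
    headers are added by the sending network stack (or the NIC)."""
    return vxlan_header(vni) + inner_frame
```

The host offloading this to the BF3 never sees the outer headers at all, which is exactly why it "just sees a single interface".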