Nvidia vs. The World: Why Google and Amazon Are Building Their Own Silicon (YouTube)

#Intel should figure out its #ml strategy, because it already has:

1. #OpenVino plugins for GPU and NPU
2. #OpenXLA plugin for GPU
3. #ipex (Intel Extension for PyTorch)
4. intel-npu-acceleration-library for PyTorch
5. oneDNN neural-network math kernels

And for #ONNX there are both OpenVINO and oneDNN (DNNL) execution providers.
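With ONNX Runtime the choice between those backends comes down to the provider list you pass in. A minimal sketch, assuming the onnxruntime-openvino build is installed; `pick_providers` is a hypothetical helper of mine, not part of any Intel library:

```python
# Hypothetical helper: order ONNX Runtime execution providers by preference
# (OpenVINO EP first, then the oneDNN-backed DNNL EP, then plain CPU).
PREFERENCE = [
    "OpenVINOExecutionProvider",  # ships with the onnxruntime-openvino package
    "DnnlExecutionProvider",      # oneDNN-backed execution provider
    "CPUExecutionProvider",       # always available fallback
]

def pick_providers(available: list[str]) -> list[str]:
    """Return the available providers, kept in preferred order."""
    return [p for p in PREFERENCE if p in available]

# In a real script you would query onnxruntime and build a session, e.g.:
#   import onnxruntime as ort
#   providers = pick_providers(ort.get_available_providers())
#   session = ort.InferenceSession("model.onnx", providers=providers)
print(pick_providers(["CPUExecutionProvider", "DnnlExecutionProvider"]))
```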

Best of all, I haven't reliably gotten the NPU to work using any permutation of them lol...

After some investigation, I found that #OpenVino is roughly twice as fast as #OpenXLA at diffusion-model inference on my Intel Xe graphics iGPU.
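For what it's worth, "twice as fast" here is just wall-clock timing of repeated inference calls, along the lines of this hypothetical harness (the two lambdas stand in for the OpenVINO and OpenXLA pipelines):

```python
import time

def bench(fn, warmup: int = 2, iters: int = 10) -> float:
    """Average wall-clock seconds per call, after a few warmup runs."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Placeholders for e.g. an OpenVINO pipeline call and an OpenXLA-backed one;
# here they just simulate different amounts of work.
backend_a = lambda: sum(range(10_000))
backend_b = lambda: sum(range(20_000))
print(f"speedup: {bench(backend_b) / bench(backend_a):.1f}x")
```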

Having to convert safetensors models into each runtime's own format first is pretty inconvenient.
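For OpenVINO, that conversion usually goes through the optimum-intel exporter. A hedged sketch of the command it wraps; the model id below is only a placeholder:

```python
# Sketch: build the `optimum-cli export openvino` invocation that turns a
# safetensors checkpoint into OpenVINO IR. In practice you'd run it with
# subprocess.run(...) or directly from a shell.
def export_command(model_id: str, out_dir: str) -> list[str]:
    return ["optimum-cli", "export", "openvino", "--model", model_id, out_dir]

# e.g. export_command("some-org/some-diffusion-model", "model_ov")
print(" ".join(export_command("some-org/some-model", "ov_out")))
```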