@kbob @drwho
On Hailo's site for the chip (https://hailo.ai/files/hailo-8l-m-2-et-product-brief-en/) they don't talk about FLOPs, they talk about "operations" (and claim 13 teraops).
A quick search turns up nothing but marketing fluff, though this one is pretty interesting: https://hailo.ai/products/technology/
It’s a dataflow processor array with a flexible internal network. They also talk about the cleverness of their compiler technology and how it takes advantage of their dataflow architecture.
While they have ported TensorFlow and PyTorch (plus a couple of other neural-net frameworks), there's no indication of what precision those operations run at. Typically, "AI processors" save energy by using fewer bits — 16, 8, or even fewer (I saw a paper the other day that described a mixture of 1- and 2-bit operations).
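To give a feel for what those low-bit "operations" cost in precision, here's a toy sketch of symmetric linear quantization — purely illustrative, not Hailo's actual scheme:

```python
def quantize_dequantize(x, bits=8, max_abs=1.0):
    """Round x to one of the 2**(bits-1)-1 positive levels evenly
    spaced in [-max_abs, max_abs], then map back to a float."""
    levels = 2 ** (bits - 1) - 1              # e.g. 127 for 8-bit
    scale = max_abs / levels
    q = max(-levels, min(levels, round(x / scale)))  # clamp + round
    return q * scale

x = 0.123456
for bits in (8, 4, 2):
    approx = quantize_dequantize(x, bits)
    print(f"{bits}-bit: {approx:.5f} (error {abs(approx - x):.5f})")
```

At 8 bits the rounding error is a fraction of a percent of full scale; at 2 bits most of the value simply vanishes. Whether that matters is exactly the question below.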
So: can your DSP application be built with TensorFlow or PyTorch, and can it tolerate reduced floating-point precision?
#raspberrypi #hailo8