I recently chanced upon the paper "Deep Differentiable Logic Gate Networks", Petersen et al. (2022). It describes the Logical Neural Network, whose neurons are 2-input, 1-output #logic gates. The whole network is but a #combinational #circuit, so the trained network can readily be synthesised on an #FPGA. Fancy that! And given the simplicity and sparsity of an FPGA-borne LNN, it runs a couple of orders of magnitude faster than a GPU-borne DNN and consumes an order of magnitude less power, yet attains comparable task accuracy.
https://arxiv.org/pdf/2210.08277
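For intuition, here is a minimal sketch of the core trick as I understand it from the paper (my own toy code, not the authors'): every two-input Boolean function has a real-valued, probabilistic relaxation, and each neuron learns a softmax distribution over all 16 of them. Training adjusts only those distribution logits; at inference each neuron hardens to its argmax gate.

```python
import numpy as np

# Real-valued relaxations of all 16 two-input Boolean functions,
# under probabilistic semantics for inputs a, b in [0, 1].
# (The ordering here is my own, chosen for illustration.)
GATES = [
    lambda a, b: np.zeros_like(a),        # FALSE
    lambda a, b: a * b,                   # AND
    lambda a, b: a - a * b,               # a AND NOT b
    lambda a, b: a,                       # a
    lambda a, b: b - a * b,               # NOT a AND b
    lambda a, b: b,                       # b
    lambda a, b: a + b - 2 * a * b,       # XOR
    lambda a, b: a + b - a * b,           # OR
    lambda a, b: 1 - (a + b - a * b),     # NOR
    lambda a, b: 1 - (a + b - 2 * a * b), # XNOR
    lambda a, b: 1 - b,                   # NOT b
    lambda a, b: 1 - b + a * b,           # a OR NOT b
    lambda a, b: 1 - a,                   # NOT a
    lambda a, b: 1 - a + a * b,           # NOT a OR b
    lambda a, b: 1 - a * b,               # NAND
    lambda a, b: np.ones_like(a),         # TRUE
]

def soft_gate(a, b, logits):
    """One differentiable neuron: a softmax mixture over the 16 gates.

    `logits` (shape (16,)) are this neuron's only learned parameters.
    At inference time, the argmax gate is kept and the neuron hardens
    into a single, ordinary logic gate.
    """
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return sum(wi * g(a, b) for wi, g in zip(w, GATES))
```

Once every neuron's distribution is sharply peaked, the whole network reduces to fixed gates and wires, which is exactly why it maps so directly onto an FPGA's lookup tables.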

The seminal paper on the LNN is "Logical Neural Networks", Riegel et al. (2020).
https://arxiv.org/pdf/2006.13155

This paper below, "Logic Neural Networks for Efficient FPGA Implementation", Ramírez (2024), makes a good companion read, too.
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10746856

NB: The Logical Neural Network #LNN is not related to the Binary Neural Network #BNN. The BNN is a binarised (read, "crude") approximation of a conventional, real-valued DNN, with 1-bit activations and weights. The LNN, in contrast, has no weights at all on the wires that connect the gates; its activation functions are the inherently non-linear logic operations themselves.
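To make that contrast concrete, here is a toy sketch of my own (both neurons are hypothetical, with bits carried as Python ints 0/1):

```python
def bnn_neuron(x, w):
    """A Binary NN neuron: 1-bit weights in {-1, +1}, thresholded
    weighted sum. The weights are still there, just quantised."""
    s = sum(wi if xi else -wi for xi, wi in zip(x, w))
    return 1 if s >= 0 else 0

def lnn_neuron(a, b):
    """A hardened logic-gate neuron: no weights at all, just a fixed
    gate choice (here XOR). The non-linearity IS the logic operation."""
    return a ^ b
```

The BNN neuron still performs a (quantised) multiply-accumulate per input; the gate neuron is a single LUT entry on an FPGA.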