Example Application: Machine Learning on FPGAs in the Cloud

We provide sample applications, including the FINN framework for machine learning. FINN \cite{Blott_2018} is developed and maintained by Xilinx Research Labs to explore deep neural network inference on FPGAs. The FINN compiler creates dataflow architectures that can be parallelized both across and within the layers of a neural network, and transforms the dataflow architecture into a bitfile that can be run on FPGA hardware.
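The layer-pipelined dataflow style that FINN targets can be sketched in plain Python: each layer runs as an independent stage with its own compute, and different stages process different inputs at the same time, so the whole network is parallelized across layers. This is an illustrative software analogy only; the stage functions and queue plumbing below are not the FINN API, and in FINN each stage would be a dedicated hardware block.

```python
import queue
import threading

def stage(fn, q_in, q_out):
    """Pipeline stage: repeatedly pull an item, apply this layer's
    transform, and push the result downstream until a None sentinel."""
    while True:
        item = q_in.get()
        if item is None:
            q_out.put(None)  # propagate shutdown to the next stage
            break
        q_out.put(fn(item))

# Three illustrative "layers"; in a FINN design each would be an
# independently parallelized hardware block, not a Python lambda.
layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

queues = [queue.Queue() for _ in range(len(layers) + 1)]
threads = [threading.Thread(target=stage, args=(fn, queues[i], queues[i + 1]))
           for i, fn in enumerate(layers)]
for t in threads:
    t.start()

for x in range(5):       # stream inputs; stages overlap in time
    queues[0].put(x)
queues[0].put(None)      # sentinel ends the stream

results = []
while (y := queues[-1].get()) is not None:
    results.append(y)
for t in threads:
    t.join()

print(results)  # [-1, 1, 3, 5, 7]
```

Because every input flows through the same ordered chain of queues, results come out in order even though all stages execute concurrently, which mirrors the streaming behavior of a hardware dataflow pipeline.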
With the amount of resources available in the OCT, we are particularly interested in implementing network-attached FINN accelerators split across multiple FPGAs, using convolutional neural networks such as MobileNet and ResNet, whose partitioning is discussed in \cite{alonso2021elastic}. Figure 2 shows one such arrangement, in which MobileNet is implemented with three accelerators mapped to two FPGAs. Two of the accelerators are standalone, while the third, which contains all of the communication required between the FPGAs, is split between two Xilinx U280s. The two halves of this accelerator are connected through the UDP network stack, which enables communication between them via the switch.
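One simple way to picture the multi-FPGA split is as a partitioning of the network's layers across devices subject to a per-device resource budget. The greedy scheme below is a hypothetical sketch, not the method of \cite{alonso2021elastic}; the per-layer costs and the capacity are made-up numbers, not real MobileNet resource figures.

```python
def split_layers(costs, capacity):
    """Greedily pack consecutive layers onto FPGAs: start a new device
    whenever the next layer would exceed the remaining capacity.
    Assigning consecutive layers means each device boundary cuts the
    network only once, so inter-FPGA traffic is limited to a single
    activation stream per cut (e.g. carried over the UDP stack)."""
    devices, current, used = [], [], 0
    for layer, cost in enumerate(costs):
        if current and used + cost > capacity:
            devices.append(current)   # close out the current FPGA
            current, used = [], 0
        current.append(layer)
        used += cost
    if current:
        devices.append(current)
    return devices

# Illustrative per-layer resource costs (hypothetical units).
costs = [30, 25, 40, 35, 20]
print(split_layers(costs, capacity=70))  # [[0, 1], [2], [3, 4]]
```

A real partitioner would also weigh the bandwidth of the activations crossing each cut, since those tensors must traverse the network between FPGAs, but the capacity-driven split above captures the basic constraint.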