Heterogeneous Compute Architecture for Deep Learning in the Cloud
Speaker
Nicholas Fraser, Xilinx Research, Ireland
Abstract
The accuracy of deep learning algorithms continues to outpace that of many traditional algorithms, while requiring little domain expertise and no explicit programming. However, these algorithms typically come with astronomical compute and memory requirements, which push the limits of projected performance scaling at future technology nodes. This has led to a surge of innovative computer architectures and chips. In this talk, we'll take a deeper look at the compute and memory requirements of a range of popular neural networks and discuss how emerging architectures, fuelled by cloud dynamics, are trying to overcome these demands through architectural innovation.