From Murmann Mixed-Signal Group
BSEE, California Institute of Technology, 2012
Research: Hardware Implementation and Optimization for Deep Belief Networks
Deep Belief Network (DBN) algorithms are used for pattern-recognition tasks such as finding structure in large datasets, including images, speech, and financial data. These algorithms yield state-of-the-art classification accuracy and learn to represent input data without predefined features or supervised learning. Training these neuro-inspired networks, however, can be extremely time-consuming, which limits the size and complexity of the models that can be used. To overcome this limitation, we propose implementing DBNs in hardware, which can provide an efficient computational platform for applications in which speed, power, and area are stringent constraints. Although recent literature reports the use of GPUs and CPUs to massively parallelize these algorithms and reduce computation time and cost, we believe an FPGA or ASIC implementation could be a more elegant solution to this problem.
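To make the computation being targeted for hardware concrete: a DBN is typically built by stacking restricted Boltzmann machines (RBMs), each trained with contrastive divergence. The sketch below is a minimal one-step contrastive-divergence (CD-1) update in NumPy; it is illustrative only (all names and hyperparameters are our own, not the group's implementation), but it shows the matrix multiplies and element-wise nonlinearities that dominate the training cost and map naturally onto parallel hardware.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann Machine, the building block of a DBN.
    Illustrative sketch: trained with one-step contrastive divergence (CD-1)."""

    def __init__(self, n_visible, n_hidden, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible-unit biases
        self.b_h = np.zeros(n_hidden)    # hidden-unit biases

    def cd1_update(self, v0, lr=0.1):
        """One CD-1 step on a batch v0 of binary rows; returns reconstruction error."""
        # Positive phase: hidden probabilities given the data, then a binary sample
        h0 = sigmoid(v0 @ self.W + self.b_h)
        h0_sample = (self.rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one Gibbs step back to a reconstruction of the visibles
        v1 = sigmoid(h0_sample @ self.W.T + self.b_v)
        h1 = sigmoid(v1 @ self.W + self.b_h)
        # Gradient approximation: data correlations minus reconstruction correlations
        batch = v0.shape[0]
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / batch
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (h0 - h1).mean(axis=0)
        return float(np.mean((v0 - v1) ** 2))

# Toy usage: learn two complementary 6-bit patterns
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 10, dtype=float)
rbm = RBM(n_visible=6, n_hidden=4)
first_err = rbm.cd1_update(data)
for _ in range(200):
    last_err = rbm.cd1_update(data)
```

The inner loop is dominated by the `v @ W` products and the sigmoid evaluations, which is why GPU parallelization helps and why a fixed-function FPGA/ASIC datapath is an attractive alternative.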
Email: yanglita AT stanford DOT edu