David is a research assistant scientist at the University of Florida. He specializes in using reconfigurable architectures for machine learning and heterogeneous computing. He has had several internships with Intel’s Data Center group, focusing on the Intel Xeon+FPGA platform and compute-near-memory infrastructure. He also spent a summer at Microsoft Research (Project Catapult), working on the integration of FPGAs in the cloud using Intel FPGAs.  Currently, his research focuses on applied AI and end-to-end machine learning workflows for scientific computing and streaming big data analytics.
Publications:
- Addressing the Memory Bottleneck in AI Model Training. MLOps Systems, MLSys 2020.
- Accelerating Scientific Discovery with SCAIGATE Science Gateway. IEEE eScience 2019.
- PCS: A Productive Computational Science Platform. International Conference on High Performance Computing & Simulation (HPCS 2019), July 2019, Dublin.
- Acceleration of Scientific Deep Learning Models on Heterogeneous Computing Platform with Intel® FPGAs. IXPUG Workshop, ISC19, 2019.
- Deep Learning with Many-Core Processors and BigDL using Scientific Datasets. IXPUG 2018.
- Fast CNN Inference in Resource-Constrained Environments. Intel AI DevCon 2018.
- Deep Learning Inference with Intel FPGA on Dell EMC. Dell 2018.
- SCAIGATE: Scientific Computing with Artificial Intelligence Gateway. Gateways 2018.
- Accelerating High Energy Physics Exploration with Deep Learning. PEARC 2017.
- From Science to Big Data to System. ACIS Lab 2017.
- Deep Learning with Intel DAAL on Knights Landing Processor. CERN IML 2017.
- Simplified Deep Learning Framework with DAAL and OpenCL: Towards Hardware Applications. CERN IML 2016.