Biography

I build high-performance systems for zero-knowledge proofs and deep learning.

I received my Ph.D. from the Computer Science Department at UC Santa Barbara, where I was fortunate to be advised by Yufei Ding. I obtained my bachelor's degree from Nanjing University, China.

Research Interests

My research lies at the intersection of deep learning (DL) and computer systems, with a focus on building high-performance and secure DL systems.

  • Efficiency: investigating acceleration techniques for high-performance DL, including quantization, hand-tuned GPU kernels, and runtime systems. [ASPLOS’24, OSDI’23, ATC’23, ATC’22, SC’22, PPoPP’22, OSDI’21, ATC’21, SC’21, PPoPP’21, IPDPS’21, CIKM’21, ICTAI’20]
  • Secure and Private DL: designing algorithms and implementing high-performance CPU/GPU frameworks for private and secure DL models. [AAAI’21, ICASSP’21]

Recent Publications and Preprints

[ASPLOS’24] ZENO: A Type-based Optimization Framework for Zero-Knowledge Neural Network Inference.
Boyuan Feng, Zheng Wang, Yuke Wang, Shu Yang, Yufei Ding.

[OSDI’23] MGG: Accelerating Graph Neural Networks with Fine-grained Intra-kernel Communication-Computation Pipelining on Multi-GPU Platforms. paper, code
Yuke Wang, Boyuan Feng, Zheng Wang, Tong Geng, Ang Li, Kevin Barker, Yufei Ding.

[ATC’23] TC-GNN: Bridging Sparse GNN Computation and Dense Tensor Cores on GPUs. paper, code
Yuke Wang, Boyuan Feng, Zheng Wang, Guyue Huang, Yufei Ding.

[ATC’22] Faith: An Efficient Framework for Transformer Verification on GPUs. paper, code
Boyuan Feng, Tianqi Tang, Yuke Wang, Zhaodong Chen, Zheng Wang, Shu Yang, Yuan Xie, Yufei Ding.

[SC’22] EL-Rec: Efficient Large-scale Recommendation Model Training via Tensor-train Embedding. paper
Zheng Wang, Yuke Wang, Boyuan Feng, Dheevatsa Mudigere, Bharath Muthiah, Yufei Ding.

[PPoPP’22] QGTC: Accelerating Quantized GNN via GPU Tensor Core. paper, code
Yuke Wang*, Boyuan Feng*, and Yufei Ding (* co-primary authors).

[OSDI’21] GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs. paper, code
Yuke Wang, Boyuan Feng, Gushu Li, Shuangchen Li, Lei Deng, Yuan Xie, Yufei Ding.

[ATC’21] Palleon: A Runtime System for Efficient Video Processing toward Dynamic Class Skew. paper
Boyuan Feng, Yuke Wang, Gushu Li, Yuan Xie, Yufei Ding.

[SC’21] APNN-TC: Accelerating Arbitrary-Precision Neural Networks on Tensor Cores. paper, code
Boyuan Feng*, Yuke Wang*, Tong Geng, Ang Li, Yufei Ding (* co-primary authors).

[PPoPP’21] EGEMM-TC: Accelerating Scientific Computing on Tensor Cores with Extended Precision. paper
Boyuan Feng, Yuke Wang, Guoyang Chen, Weifeng Zhang, Yuan Xie, and Yufei Ding.

[IPDPS’21] DSXplore: Optimizing Convolutional Neural Networks via Sliding-Channel Convolution. paper, code
Yuke Wang, Boyuan Feng, and Yufei Ding.

[AAAI’21] Uncertainty-aware Attention Graph Neural Network for Defending Adversarial Attacks. paper
Boyuan Feng, Yuke Wang, and Yufei Ding.

[ICASSP’21] SAGA: Sparse Adversarial Attack on EEG-based Brain-Computer Interface. paper
Boyuan Feng, Yuke Wang, and Yufei Ding.

[CIKM’21] An Efficient Quantitative Approach for Optimizing Convolutional Neural Networks. paper
Yuke Wang, Boyuan Feng, Xueqiao Peng, Yufei Ding.

[ICTAI’20] SGQuant: Squeezing the Last Bit on Graph Neural Networks with Specialized Quantization. paper
Boyuan Feng*, Yuke Wang*, Xu Li, Shu Yang, Xueqiao Peng, Yufei Ding (* co-primary authors).

[Preprint] ZEN: Efficient Zero-Knowledge Proofs for Neural Networks. paper, code
Boyuan Feng, Lianke Qin, Zhenfei Zhang, Yufei Ding, and Shumo Chu.

Contact

fby.1994 # gmail.com