Hanqing Zeng

PhD candidate in Computer Engineering

University of Southern California

Biography

I am a PhD candidate in Computer Engineering at USC, advised by Prof. Viktor Prasanna. My main research goal is to improve the scalability, accuracy, and efficiency of large-scale graph learning. Toward this goal, during my PhD I have designed new models, training/inference algorithms, and hardware systems for Graph Neural Networks; after graduation, I will join Facebook/Meta AI as a research scientist to further explore practical solutions for web-scale social recommendation.

Beyond graph learning, I am also broadly interested in solving memory- and computation-intensive problems through algorithm-architecture co-design. In the early years of my PhD, I accelerated Convolutional Neural Networks and graph analytics via parallelization on heterogeneous platforms (CPU, GPU, and FPGA).

Interests
  • Graph representation learning
  • Parallel & distributed computing
  • Graph infrastructure
Education
  • PhD in Computer Engineering, 2022

    University of Southern California

  • Bachelor of Engineering, 2016

    University of Hong Kong

Experience

Research intern
Facebook AI
Jun 2021 – Nov 2021 · Menlo Park, California

Responsibilities include:

  • Developed a graph engine to support large-scale GNN computation on production data
  • Developed new GNN models for heterogeneous graphs
Research intern
Facebook AI
May 2020 – Aug 2020 · Menlo Park, California

Responsibilities include:

  • Integrated state-of-the-art minibatch GNN training methods (e.g., GraphSAINT) into internal infrastructure
  • Developed new GNN models achieving orders-of-magnitude improvements in scalability (shaDow-GNN)
Research assistant
University of Southern California
Aug 2016 – Present · Los Angeles, California

Achievements:

  • Authored or co-authored 20+ papers (1 best paper award and 2 best paper candidates)
  • Mentored 5+ junior PhD and Master's students and helped them publish their first papers
  • Contributed to 5+ DARPA and NSF projects
  • Served as a reviewer or PC member for 10+ top conferences and journals (outstanding reviewer award at ICLR 2021)

Recent Publications

(2022). DecGNN: a framework for mapping decoupled GNN models onto CPU-FPGA heterogeneous platform. ACM/FPGA (poster).

(2021). Decoupling the depth and scope of Graph Neural Networks. NeurIPS.

(2021). Accelerating large scale real-time GNN inference using channel pruning. VLDB.

(2021). Accurate, efficient and scalable training of Graph Neural Networks. JPDC.

(2020). Hardware acceleration of large scale GCN inference. IEEE/ASAP.

(2020). VTR 8: High-performance CAD and customizable FPGA architecture modelling. ACM/TRETS.

(2020). Accelerating large scale GCN inference on FPGA. IEEE/FCCM (poster).

(2020). GraphSAINT: Graph sampling based inductive learning method. ICLR.

(2019). SPEC2: Spectral sparse CNN accelerator on FPGAs. IEEE/HiPC.

(2019). A flexible design automation tool for accelerating quantized spectral CNNs. IEEE/FPL.

(2019). Accurate, efficient and scalable graph embedding. IEEE/IPDPS.

(2018). Throughput-optimized frequency domain CNN with fixed-point quantization on FPGA. IEEE/ReConFig.

(2018). A fast and efficient parallel algorithm for pruned landmark labeling. IEEE/HPEC.

(2018). An FPGA framework for edge-centric graph processing. ACM/CF.

(2018). A framework for generating high throughput CNN implementations on FPGAs. ACM/FPGA.

(2017). Fast generation of high throughput customized deep learning accelerators on FPGAs. IEEE/ReConFig.

(2017). Quickly finding a truss in a haystack. IEEE/HPEC.

(2017). Design and implementation of parallel PageRank on multicore platforms. IEEE/HPEC.
