Talks

Invited talks and presentations

  • 1. SpotServe: Serving Generative Large Language Models on Preemptible Instances.
  • 2. SpecInfer: Accelerating Generative LLM Serving with Tree-based Speculative Inference and Token Verification.
  • 3. Recent Advances in Data-Centric MLSys: A DBer's Perspective.
  • 4. SDPipe: A Semi-Decentralized Framework for Heterogeneity-aware Pipeline-parallel Training.
    • ChinaSys, Online, China, July 2023
    • VLDB, Online, Canada, September 2023
  • 5. Galvatron: Efficient Transformer Training over Multiple GPUs Using Automatic Parallelism.
  • 6. Parcae: Proactive, Liveput-Optimized DNN Training on Preemptible Instances.
  • 7. When Sparsity Meets Distributed DL System: Efficient and Scalable Huge Embedding Model Training.
    • Catalyst Group Meeting, Pittsburgh, USA, October 2022
    • Tencent, Online, China, September 2022
    • Baidu, OPPO, MetaX, Online, China, April 2022
    • Jiqizhixin, Online, China, January 2022
  • 8. Hetu: An Automatic Parallel Distributed Deep Learning Framework for Huge Model.
    • Huawei Cloud InnovWave Talk, Online, China, April 2023
    • CCF TCDB & Gauss Squirrel Club, Online, China, April 2023
    • BAAI Conference, Beijing, China, June 2022
    • MSRA, Beijing, China, November 2021
    • NDBC, Kunming, China, December 2019
  • 9. HET-GMP: A Graph-based System Approach to Scaling Large Embedding Model Training.
  • 10. HET: Scaling out Huge Embedding Model Training via Cache-enabled Distributed Framework.
    • VLDB, Sydney, Australia, September 2022
    • ChinaSys Winter, Xiamen, China, December 2021
    • Huawei, Alibaba, ByteDance, October 2021
  • 11. Heterogeneity-Aware Distributed Machine Learning Training via Partial Reduce.
    • SIGMOD, Xi'an, China, June 2021
  • 12. DeGNN: Improving Graph Neural Networks with Graph Decomposition.