BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260422T000713Z
LOCATION:301-302-303
DTSTART;TZID=America/Denver:20231114T160000
DTEND;TZID=America/Denver:20231114T163000
UID:submissions.supercomputing.org_SC23_sess167_pap166@linklings.com
SUMMARY:TANGO: Re-Thinking Quantization for Graph Neural Network Training 
 on GPUs
DESCRIPTION:Shiyang Chen (Rutgers University); Da Zheng (Amazon); Caiwen D
 ing (University of Connecticut); Chengying Huan (Institute of Software, Ch
 inese Academy of Sciences); Yuede Ji (University of North Texas); and Hang
  Liu (Rutgers University)\n\nGraph Neural Networks (GNNs) are rapidly g
 aining popularity because they deliver state-of-the-art performance on va
 rious critical graph-related tasks. While quantization is a primary appro
 ach to accelerating GNN computation, quantized training faces significant
  challenges. We observe that current quantized GNN training systems often
  experience longer training times than their full-precision counterparts
  for two reasons: (i) addressing the accuracy challenge incurs excessive
  overhead, and (ii) the optimization opportunities exposed by quantizatio
 n are not well leveraged. This paper introduces Tango, which re-thinks qu
 antization challenges and opportunities for graph neural network training
  on GPUs with the following contributions: First, we introduce lightweigh
 t rules to meet the accuracy requirement for quantized GNN training. Seco
 nd, we design and implement quantization-aware primitives and inter-primi
 tive optimizations to accelerate GNN training. Third, we integrate Tango
  with the mainstream Deep Graph Library (DGL) system and demonstrate that
  Tango outperforms the state-of-the-art across all the evaluated GNN mode
 ls and datasets.\n\nTag: Artificial Intelligence/Machine Learning\n\nRegis
 tration Category: Tech Program Reg Pass\n\nReproducibility Badges: Artifa
 ct Available\n\nSession Chair: Israt Nisa (Amazon Web Services AI Researc
 h and Education)\n\n
END:VEVENT
END:VCALENDAR
