BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Linklings LLC//Linklings//EN
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260422T000628Z
LOCATION:301-302-303
DTSTART;TZID=America/Denver:20231114T153000
DTEND;TZID=America/Denver:20231114T170000
UID:submissions.supercomputing.org_SC23_sess167@linklings.com
SUMMARY:Training Graph Neural Networks
DESCRIPTION:DistTGL: Distributed Memory-Based Temporal Graph Neural Networ
 k Training\n\nMemory-based Temporal Graph Neural Networks are powerful too
 ls in dynamic graph representation learning and have demonstrated superior
  performance in many real-world applications. However, their node memory
  favors smaller batch sizes to capture more dependencies in graph events an
 d needs to be main...\n\n\nHongkuan Zhou (University of Southern Californi
 a (USC)); Da Zheng, Xiang Song, and George Karypis (Amazon Web Services AI
 ); and Viktor Prasanna (University of Southern California (USC))\n--------
 -------------\nBLAD: Adaptive Load Balanced Scheduling and Operator Overla
 p Pipeline for Accelerating the Dynamic GNN Training\n\nDynamic graph netw
 orks are widely used for learning time-evolving graphs, but prior work on 
 training these networks is inefficient due to communication overhead, long
  synchronization, and poor resource usage. Our investigation shows that c
 ommunication and synchronization can be reduced by carefully...\n\n\nKaihu
 a Fu, Quan Chen, Yuzhuo Yang, Jiuchen Shi, Chao Li, and Minyi Guo (Shangha
 i Jiao Tong University)\n---------------------\nTANGO: Re-Thinking Quantiz
 ation for Graph Neural Network Training on GPUs\n\nGraph Neural Networks (
 GNNs) are rapidly gaining popularity since they hold state-of-the-art perf
 ormance for various critical graph-related tasks. While quantization is a 
 primary approach to accelerating GNN computation, quantized training faces
  remarkable challenges. We observe that current quantiz...\n\n\nShiyang Ch
 en (Rutgers University); Da Zheng (Amazon); Caiwen Ding (University of Con
 necticut); Chengying Huan (Institute of Software, Chinese Academy of Scien
 ces); Yuede Ji (University of North Texas); and Hang Liu (Rutgers Universi
 ty)\n\nTag: Artificial Intelligence/Machine Learning\n\nRegistration Categ
 ory: Tech Program Reg Pass\n\nReproducibility Badges: Artifact Available\n
 \nSession Chair: Israt Nisa (Amazon Web Services AI Research and Education
 )
END:VEVENT
END:VCALENDAR
