BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260422T000713Z
LOCATION:704-706
DTSTART;TZID=America/Denver:20231113T164200
DTEND;TZID=America/Denver:20231113T170600
UID:submissions.supercomputing.org_SC23_sess457_ws_mlg104@linklings.com
SUMMARY:DDStore: Distributed Data Store for Scalable Training of Graph Ne
 ural Networks on Large Atomistic Modeling Datasets
DESCRIPTION:Jong Youl Choi, Massimiliano Lupo Pasini, Pei Zhang, Kshitij M
 ehta, and Frank Liu (Oak Ridge National Laboratory (ORNL)) and Jonghyun Ba
 e and Khaled Ibrahim (Lawrence Berkeley National Laboratory (LBNL))\n\nGra
 ph neural networks (GNNs) are a class of deep learning models used in desi
 gning atomistic materials for effective screening of large chemical spaces
 . To ensure robust prediction, GNN models must be trained on large volumes
  of atomistic modeling data on leadership-class supercomputing facilities.
  Even with the advent of modern architectures whose multiple storage laye
 rs include node-local NVMe devices in addition to device memory for cachin
 g large datasets, extreme-scale model training still faces I/O challenges.
 \n\nWe present DDStore, an in-memory distributed data store designed for G
 NN training on large-scale graph data. DDStore provides a hierarchical, d
 istributed data-caching technique that combines data chunking, replicatio
 n, low-latency random access, and high-throughput communication. DDStore a
 chieves near-linear scaling for training a GNN model using up to 1000 GPU
 s on the Summit and Perlmutter supercomputers, and reaches up to a 6.15x r
 eduction in GNN training time compared to state-of-the-art methodologies.
 \n\nTag: Artificial Intelligence/Machine Learning, Graph Algorithms and Fr
 ameworks\n\nRegistration Category: Workshop Reg Pass\n\nSession Chairs: Se
 ung-Hwan Lim (Oak Ridge National Laboratory (ORNL)); José Moreira (IBM);
  Catherine Schuman (University of Tennessee, Knoxville); and Richard Vudu
 c (Georgia Institute of Technology)\n\n
END:VEVENT
END:VCALENDAR
