BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260422T000711Z
LOCATION:DEF Concourse
DTSTART;TZID=America/Denver:20231114T100000
DTEND;TZID=America/Denver:20231114T170000
UID:submissions.supercomputing.org_SC23_sess291_rpost208@linklings.com
SUMMARY:A High-Performance I/O Framework for Accelerating DNN Model Update
 s Within Deep Learning Workflow
DESCRIPTION:Jie Ye and Jaime Cernuda (Illinois Institute of Technology), B
 ogdan Nicolae (Argonne National Laboratory (ANL)), and Anthony Kougkas and
  Xian-He Sun (Illinois Institute of Technology)\n\nIn traditional deep lea
 rning workflows, AI applications (producers) train DNN models offline usin
 g fixed datasets, while inference serving systems (consumers) load the tra
 ined models to serve real-time inference queries. In practice, AI appl
 ications often operate in a dynamic environment where data is constantly c
 hanging. Compared to offline learning, continuous learning frequently (re)
 trains models to adapt to the ever-changing data. This demands regular de
 ployment of the DNN models, increasing the model update frequency between 
 producers and consumers. Typically, producers and consumers are connected 
 via model repositories such as a parallel file system (PFS), which may r
 esult in high model update latency due to the I/O bottleneck of the PFS.
  To address this, our work introduces a 
 high-performance I/O framework that speeds up model updates between produc
 ers and consumers. It employs a cache-aware model handler to minimize lat
 ency and an intelligent performance predictor to maintain a balance bet
 ween training and inference performance.\n\nTag: Artificial Intelligence/M
 achine Learning, Architecture and Networks, Heterogeneous Computing, I/O a
 nd File Systems, Performance Measurement, Modeling, and Tools, Post-Moore 
 Computing, Programming Frameworks and System Software, Quantum Computing\n
 \nRegistration Category: Tech Program Reg Pass, Exhibits Reg Pass\n\n
END:VEVENT
END:VCALENDAR
