BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260422T000713Z
LOCATION:DEF Concourse
DTSTART;TZID=America/Denver:20231114T100000
DTEND;TZID=America/Denver:20231114T170000
UID:submissions.supercomputing.org_SC23_sess291_rpost141@linklings.com
SUMMARY:Optimizing Uncertainty Quantification of Vision Transformers in De
 ep Learning on Novel AI Architectures
DESCRIPTION:Erik Pautsch (Loyola University, Chicago); John Li (University
  of California San Diego); Silvio Rizzi (Argonne National Laboratory (ANL)
 ); George Thiruvathukal (Loyola University, Chicago); and Maria Pantoja (C
 alifornia Polytechnic State University, San Luis Obispo)\n\nDeep Learning 
 (DL) methods have shown substantial efficacy in computer vision (CV) and n
 atural language processing (NLP). Despite their proficiency, the inconsist
 ency in input data distributions can compromise prediction reliability. Th
 is study mitigates this issue by introducing uncertainty evaluations in DL
  models, thereby enhancing dependability through a distribution of predict
 ions. Our focus lies on the Vision Transformer (ViT), a DL model that harm
 onizes both local and global behavior. We conduct extensive experiments on
  the ImageNet-1K dataset, a vast resource with over a million images acros
 s 1,000 categories. ViTs, while competitive, are vulnerable to adversarial
  attacks, making uncertainty estimation crucial for robust predictions.\n\
 nOur research advances the field by integrating uncertainty evaluations in
 to ViTs, comparing two significant uncertainty estimation methodologies, a
 nd expediting uncertainty computations on high-performance computing (HPC)
  architectures, such as the Cerebras CS-2, SambaNova DataScale, and the Po
 laris supercomputer, utilizing the mpi4py package for efficient distribute
 d training.\n\nTag: Artificial Intelligence/Machine Learning, Architecture
  and Networks, Heterogeneous Computing, I/O and File Systems, Performance 
 Measurement, Modeling, and Tools, Post-Moore Computing, Programming Framew
 orks and System Software, Quantum Computing\n\nRegistration Category: Tech
  Program Reg Pass, Exhibits Reg Pass\n\n
END:VEVENT
END:VCALENDAR
