BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260422T000623Z
LOCATION:501-502
DTSTART;TZID=America/Denver:20231113T090000
DTEND;TZID=America/Denver:20231113T173000
UID:submissions.supercomputing.org_SC23_sess440@linklings.com
SUMMARY:Workshop on Artificial Intelligence and Machine Learning for Scien
 tific Applications (AI4S)
DESCRIPTION:AI4S – Invited Talk\n\nAnimashree (Anima) Anandkumar (Californ
 ia Institute of Technology)\n---------------------\nElastic Deep Learning 
 through Resilient Collective Operations\n\nA robust solution that incorpor
 ates fault tolerance and elastic scaling capabilities for distributed deep
  learning. Taking advantage of MPI resilient capabilities, a.k.a. User-
 Level Failure Mitigation (ULFM), this novel approach promotes efficient
  and lightweight failure management and encourages smoo...\n\n\nJiali Li,
  George B
 osilca, and Aurelien Bouteiller (University of Tennessee) and Bogdan Nicol
 ae (Argonne National Laboratory (ANL))\n---------------------\nAcceleratin
 g Particle and Fluid Simulations with Differentiable and Interpretable Gra
 ph Networks for Solving Forward and Inverse Problems\n\nWe leverage physic
 s-embedded differentiable graph network simulators (GNS) to accelerate par
 ticulate and fluid simulations to solve forward and inverse problems. GNS 
 represents the domain as a graph with particles as nodes and learned inter
 actions as edges, improving generalization to new environmen...\n\n\nKrish
 na Kumar (University of Texas System) and Yonjin Choi (University of Texas
 )\n---------------------\nEntropy-Driven Optimal Sub-Sampling of Fluid Dyn
 amics for Developing Machine-Learned Surrogates\n\nOptimal sub-sampling of
  large datasets from fluid dynamics simulations is essential for training 
 reduced-order machine-learned models. A method using Shannon entropy was d
 eveloped to weight flow features according to their level of information c
 ontent, such that the most informative features can be ...\n\n\nWesley Bre
 wer (Oak Ridge National Laboratory (ORNL)); Daniel Martinez (Science and T
 echnology Corporation); Muralikrishnan Gopalakrishnan Meena, Aditya Kashi,
  Katarzyna Borowiec, and Siyan Liu (Oak Ridge National Laboratory (ORNL));
  and Christopher Pilmaier, Greg Burgreen, and Shanti Bhushan (Mississippi 
 State University)\n---------------------\nToward Foundation Models for Mat
 erials Science: The Open MatSci ML Toolkit\n\nArtificial intelligence and
  machine learning have shown great promise in their ability to accelerate 
 novel materials discovery. As researchers and domain scientists seek to un
 ify and consolidate chemical knowledge, the case for models with potential
  to generalize across different tasks within materi...\n\n\nKin Long Kelvi
 n Lee (Intel Corporation), Carmelo Gonzales (Intel Labs), Matthew Spelling
 s (Vector Institute), Mikhail Galkin and Santiago Miret (Intel Labs), and 
 Nalini Kumar (Intel Corporation)\n---------------------\nToward Rapid Auto
 nomous Electron Microscopy with Active Meta-Learning\n\nIn this work, we d
 eveloped a method to accelerate computational steering of microscopy exper
 iments by active meta-learning. Before this work, a tailored AI model was 
 trained specifically for every experiment by active learning to reconstruc
 t spectrum and uncover regions of interest by sampling just ...\n\n\nGayat
 hri Saranathan, Martin Foltin, and Aalap Tripathy (Hewlett Packard Enterpr
 ise (HPE)); Maxim Ziatdinov (Oak Ridge National Laboratory (ORNL)); Ann Ma
 ry Justine Koomthanam and Suparna Bhattacharya (Hewlett Packard Enterprise
  (HPE)); Ayana Ghosh and Kevin Roccapriore (Oak Ridge National Laboratory 
 (ORNL)); and Sreenivas Rangan Sukumar and Paolo Faraboschi (Hewlett Packar
 d Enterprise (HPE))\n---------------------\nMachine Learning Applied to Si
 ngle-Molecule Activity Prediction\n\nCatalytic processes are used in about
  1/3 of US manufacturing, from the field of chemical engineering to renewa
 ble energy. Assessing the activity of single molecules, or individual mo
 lecules, is necessary for the development of efficient catalysts. Their
  heterogeneous structure leads to particle-spec...\n\n\nKendric Hood and
  Qiang G
 uan (Kent State University)\n---------------------\nTournament-Based Pretr
 aining to Accelerate Federated Learning\n\nAdvances in hardware, prolifera
 tion of compute at the edge, and data creation at unprecedented scales hav
 e made federated learning (FL) necessary for the next leap forward in perv
 asive machine learning. For privacy and network reasons, large volumes of 
 data remain stranded on endpoints located in ge...\n\n\nMatt Baughman (Uni
 versity of Chicago); Nathaniel Hudson (University of Chicago, Argonne Nati
 onal Laboratory (ANL)); Ryan Chard (Argonne National Laboratory (ANL)); An
 dre Bauer (University of Chicago); Ian Foster (Argonne National Laboratory
  (ANL)); and Kyle Chard (University of Chicago, Argonne National Laborator
 y (ANL))\n---------------------\nAutotuning Apache TVM-Based Scientific Ap
 plications Using Bayesian Optimization\n\nApache TVM (Tensor Virtual Machi
 ne), an open source machine learning compiler framework designed to optimi
 ze computations across various hardware platforms, provides an opportunity
  to improve the performance of dense matrix factorizations such as LU (Low
 er Upper) decomposition and Cholesky decomposi...\n\n\nXingfu Wu (Argonne 
 National Laboratory (ANL), University of Chicago); Praveen Paramasivam (Un
 iversity of South Dakota); and Valerie Taylor (Argonne National Laboratory
  (ANL))\n---------------------\nAI4S – Afternoon Break\n------------------
 ---\nEnhancing Heterogeneous Federated Learning with Knowledge Extraction 
 and Multi-Model Fusion\n\nTo address concerns about user data privacy,
  this paper presents a new federated learning (FL) method that trains ma
 chine learning mo
 dels on edge devices without accessing sensitive data. Traditional FL meth
 ods, although privacy-protective, fail to manage model heterogeneity and i
 ncur high communication costs ...\n\n\nDuy Phuong Nguyen and Sixing Yu (Io
 wa State University), J. Pablo Muñoz (Intel Corporation), and Ali Jannesar
 i (Iowa State University)\n---------------------\nA Comparison of Mesh-Fre
 e Differentiable Programming and Data-Driven Strategies for Optimal Contro
 l under PDE Constraints\n\nThe field of Optimal Control under Partial Diff
 erential Equations (PDE) constraints is rapidly changing under the influen
 ce of Deep Learning and the accompanying automatic differentiation librari
 es. Novel techniques like Physics-Informed Neural Networks (PINNs) and Dif
 ferentiable Programming (DP) ar...\n\n\nRoussel Desmond Nzoyem Ngueguin, D
 avid A.W. Barton, and Tom Deakin (University of Bristol)\n----------------
 -----\nProtein Generation via Genome-Scale Language Models with Bio-Physic
 al Scoring\n\nLarge language models (LLMs) trained on vast biological data
 sets can learn biological motifs and correlations across the evolutionary 
 landscape of natural proteins. LLMs can then be used for de novo design of
  novel proteins with specific structures, functions, and physicochemical p
 roperties. We empl...\n\n\nGautham Dharuman, Logan Ward, Heng Ma, and Priy
 anka V. Setty (Argonne National Laboratory (ANL)); Ozan Gokdemir (Universi
 ty of Chicago); Sam Foreman, Murali Emani, Kyle Hippe, and Alexander Brace
  (Argonne National Laboratory (ANL)); Kristopher Keipert and Thomas Gibbs 
 (NVIDIA Corporation); Ian Foster (Argonne National Laboratory (ANL)); Anim
 a Anandkumar (California Institute of Technology); and Venkatram Vishwanat
 h and Arvind Ramanathan (Argonne National Laboratory (ANL))\n-------------
 --------\nTencoder: Tensor-Product Encoder-Decoder Architecture for Predi
 cting Solutions of PDEs with Variable Boundary Data\n\nIt is widely hoped 
 that artificial intelligence will boost data-driven surrogate models in sc
 ience and engineering. However, fundamental spatial aspects of AI surrogat
 e models remain under-studied. We investigate the ability of neural-networ
 k surrogate models to predict solutions to PDEs under varia...\n\n\nAditya
  Kashi (Oak Ridge National Laboratory (ORNL))\n---------------------\nAI4S
  – Morning Break\n---------------------\nEnabling Performant Thermal Condu
 ctivity Modeling with DeePMD and LAMMPS on CPUs\n\nThe ability to retain D
 FT-level accuracy and reduce the high computational costs has been made po
 ssible using Deep Potential models, which, when trained on DFT data, all
 ow accurate prediction of interatomic force and energy distributions. De
 ePMD-k
 it is a Python/C++ package which implements such a mode...\n\n\nNariman Pi
 roozan and Nalini Kumar (Intel Corporation)\n---------------------\nAI4S –
  Keynote\n\nRick Stevens (Argonne National Laboratory (ANL))\n------------
 ---------\nAI4S – Lunch Break\n\nTag: Artificial Intelligence/Machine Lear
 ning\n\nRegistration Category: Workshop Reg Pass\n\nSession Chairs: Murali
  Emani (Argonne National Laboratory (ANL)); Gokcen Kestor (Barcelona Super
 computing Center (BSC); University of California, Merced); and Dong Li (Un
 iversity of California, Merced)
END:VEVENT
END:VCALENDAR
