BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260422T000714Z
LOCATION:E Concourse
DTSTART;TZID=America/Denver:20231116T100000
DTEND;TZID=America/Denver:20231116T170000
UID:submissions.supercomputing.org_SC23_sess302@linklings.com
SUMMARY:Doctoral Showcase Posters Display
DESCRIPTION:Corralling the Computing Continuum: Mobilizing Modern Distrib
 uted Resources for Machine Learning and Accessible Computing\n\nTo achieve
  the resource agnostic flexibility of compute described by the computing c
 ontinuum, we combined our work in workload profiling and cost estimation w
 ith task provisioning to present DELTA–a framework for serverless workload
  placement across a computing ecosystem. To address the dynami...\n\n\nMat
 t Baughman (University of Chicago)\n---------------------\nOvercoming the 
 Gap between Compute and Memory Bandwidth in Modern GPUs\n\nThe imbalance b
 etween compute and memory bandwidth has been a long-standing issue. Despit
 e efforts to address it, the gap between them is still widening. This has 
 led to the categorization of many applications as memory-bound kernels.\n
 \nThis dissertation centers on memory-bound kernels, with a parti...\n\n\n
 Lingqi Zhang (Tokyo Institute of Technology)\n---------------------\nI/O E
 fficient Machine Learning\n\nMy research focuses on systems optimizations 
 for machine learning, specifically on I/O efficient model storage and retr
 ieval.\n\nThe first part of my work focuses on efficient inference serving
  of tree ensemble models. Tree structures are inherently not cache friendl
 y and their traversal incurs random...\n\n\nMeghana Madhyastha (Johns Hopk
 ins University, Argonne National Laboratory (ANL))\n---------------------\
 nPreemptive Intrusion Detection:  Real-World Measurements, Bayesian-Based 
 Detection, and AI-Driven Countermeasures\n\nThe problem of preempting atta
 cks before damage remains the top security priority. The gap between aler
 ts and early detection remains wide open because noisy attack attempts and
  unreliable alerts mask real attacks from humans. This dissertation brings
  together: 1) attack patterns mining driven by r...\n\n\nPhuong Cao (Unive
 rsity of Illinois)\n---------------------\nHigh Performance Computing for 
 Optimization of Radiation Therapy Treatment Plans\n\nModern radiation ther
 apy relies heavily on computational methods to design optimal treatment pl
 ans (control parameters for the treatment machine) for individual patients
 . These parameters are determined by constructing and solving a mathematic
 al optimization problem. Ultimately, the goal is to creat...\n\n\nFelix Li
 u (KTH Royal Institute of Technology, Sweden; Raysearch Laboratories)\n---
 ------------------\nDesign Automation Tools and Software for Quantum Compu
 ting\n\nQuantum computing promises to solve problems beyond the reach of t
 oday’s machines, but it requires efficient and reliable software tools to 
 realize its potential. This poster gives an overview of various contributi
 ons towards design automation methods and software for quantum computing t
 hat le...\n\n\nLukas Burgholzer (Technical University of Munich)\n--------
 -------------\nEnabling Reproducibility and Scalability of Scientific Work
 flows in HPC and Cloud\n\nScientific communities across fields like earth 
 science, biology, and materials science increasingly run complex workflows
  for their scientific discovery. We work closely with these communities to
  leverage high-performance computing (HPC), big data analytics, and artifi
 cial intelligence/machine lear...\n\n\nPaula Olaya (University of Tennesse
 e)\n---------------------\nScaling HPC Applications through Predictable an
 d Reliable Data Reduction Methods\n\nFor scientists and engineers, large-s
 cale computer systems are one of the most powerful tools to solve complex 
 high-performance computing (HPC) and Deep Learning (DL) problems. With the
  ever-increasing computing power such as the new generation of exascale (o
 ne exaflop or a billion billion calculati...\n\n\nSian Jin (Indiana Univer
 sity, Argonne National Laboratory (ANL))\n---------------------\nInteracti
 ve In-Situ Visualization of Large Distributed Volume Data\n\nLarge distrib
 uted volume data are routinely produced in numerical simulations and exper
 iments. In-situ visualization, the visualization of simulation or experime
 nt data as it is generated, enables simulation steering and experiment con
 trol, which helps scientists gain an intuitive understanding of t...\n\n\n
 Aryaman Gupta (Technical University Dresden, Center for Systems Biology Dr
 esden (CSBD))\n---------------------\nHigh Performance Serverless for HPC 
 and Clouds\n\nFunction-as-a-Service (FaaS) computing brought a fundamental
  shift in resource management. It allowed for new and better solutions to 
 the problem of low resource utilization, an issue that has been known in d
 ata centers for decades. The problem persists as the frequently changing r
 esource availabili...\n\n\nMarcin Copik (ETH Zürich)\n--------------------
 -\nCharged Particle Track Reconstruction Algorithms for Massively Parallel
  Systems\n\nThe reconstruction of the trajectories of charged particles th
 rough detector experiments is a core computational task in the domain of h
 igh-energy physics. Upcoming upgrades to accelerators such as the Large Ha
 dron Collider as well as to experiments like ATLAS threaten to render exis
 ting CPU-based a...\n\n\nStephen Nicholas Swatman (University of Amsterdam
 , European Organization for Nuclear Research (CERN))\n--------------------
 -\nModernizing Simulation Software for the Exascale Era\n\nModern HPC hard
 ware is becoming increasingly heterogeneous and diverse in the exascale er
 a. The diversity of hardware and software stacks adds additional developme
 nt challenges to high performance simulations. One common development appr
 oach is to re-engineer the code for each new target architectur...\n\n\nNi
 gel P. Tan (University of Tennessee)\n\nTag: Accelerators, Artificial Inte
 lligence/Machine Learning, Applications, Cloud Computing, Distributed Comp
 uting, Data Analysis, Visualization, and Storage, Data Compression, Hetero
 geneous Computing, I/O and File Systems, Quantum Computing, Reproducibilit
 y, Security, Software Engineering\n\nRegistration Category: Tech Program R
 eg Pass, Exhibits Reg Pass
END:VEVENT
END:VCALENDAR
