BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260422T000611Z
LOCATION:505
DTSTART;TZID=America/Denver:20231116T153000
DTEND;TZID=America/Denver:20231116T170000
UID:submissions.supercomputing.org_SC23_sess310@linklings.com
SUMMARY:Doctoral Showcase II Presentations
DESCRIPTION:Overcoming the Gap between Compute and Memory Bandwidth in Mod
 ern GPUs\n\nThe imbalance between compute and memory bandwidth has been a 
 long-standing issue. Despite efforts to address it, the gap between them i
 s still widening. This has led to the categorization of many applications 
 as memory-bound kernels.	\n\nThis dissertation centers on memory-bound ker
 nels, with a parti...\n\n\nLingqi Zhang (Tokyo Institute of Technology)\n-
 --------------------\nHigh Performance Computing for Optimization of Radia
 tion Therapy Treatment Plans\n\nModern radiation therapy relies heavily on
  computational methods to design optimal treatment plans (control paramete
 rs for the treatment machine) for individual patients. These parameters ar
 e determined by constructing and solving a mathematical optimization probl
 em. Ultimately, the goal is to creat...\n\n\nFelix Liu (KTH Royal Institut
 e of Technology, Sweden; Raysearch Laboratories)\n---------------------\nE
 nabling Reproducibility and Scalability of Scientific Workflows in HPC and
  Cloud\n\nScientific communities across fields like earth science, biology
 , and materials science increasingly run complex workflows for their scien
 tific discovery. We work closely with these communities to leverage high-p
 erformance computing (HPC), big data analytics, and artificial intelligenc
 e/machine lear...\n\n\nPaula Olaya (University of Tennessee)\n------------
 ---------\nScaling HPC Applications through Predictable and Reliable Data 
 Reduction Methods\n\nFor scientists and engineers, large-scale computer sy
 stems are one of the most powerful tools to solve complex high-performance
  computing (HPC) and Deep Learning (DL) problems. With the ever-increasing
  computing power such as the new generation of exascale (one exaflop or a 
 billion billion calculati...\n\n\nSian Jin (Indiana University, Argonne Na
 tional Laboratory (ANL))\n---------------------\nHigh Performance Serverle
 ss for HPC and Clouds\n\nFunction-as-a-Service (FaaS) computing brought a 
 fundamental shift in resource management. It allowed for new and better so
 lutions to the problem of low resource utilization, an issue that has been
  known in data centers for decades. The problem persists as the frequently
  changing resource availabili...\n\n\nMarcin Copik (ETH Zürich)\n---------
 ------------\nModernizing Simulation Software for the Exascale Era\n\nMode
 rn HPC hardware is becoming increasingly heterogeneous and diverse in the 
 exascale era. The diversity of hardware and software stacks adds additiona
 l development challenges to high performance simulations. One common devel
 opment approach is to re-engineer the code for each new target architectur
 ...\n\n\nNigel P. Tan (University of Tennessee)\n\nTags: Accelerators, App
 ications, Cloud Computing, Data Compression, Heterogeneous Computing, I/O 
 and File Systems, Reproducibility, Software Engineering\n\nRegistration Ca
 tegory: Tech Program Reg Pass\n\nSession Chairs: André Brinkmann (Johannes
  Gutenberg University Mainz) and Xubin He (Temple University, Department o
 f Computer and Information Sciences)
END:VEVENT
END:VCALENDAR
