BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260422T000711Z
LOCATION:704-706
DTSTART;TZID=America/Denver:20231113T161800
DTEND;TZID=America/Denver:20231113T164200
UID:submissions.supercomputing.org_SC23_sess457_ws_mlg105@linklings.com
SUMMARY:HPC-GPT: Integrating Large Language Model for High-Performance Com
 puting
DESCRIPTION:Xianzhong Ding (University of California, Merced); Le Chen (Io
 wa State University); Murali Emani (Argonne National Laboratory (ANL)); Ch
 unhua Liao, Pei-Hung Lin, and Tristan Vanderbruggen (Lawrence Livermore Na
 tional Laboratory (LLNL)); Zhen Xie (Argonne National Laboratory (ANL)); a
 nd Alberto Cerpa and Wan Du (University of California, Merced)\n\nLarge La
 nguage Models (LLMs), including the LLaMA model, have exhibited their effi
 cacy across various general-domain natural language processing (NLP) tasks
 . However, their performance in high-performance computing (HPC) domain ta
 sks has been less than optimal due to the specialized expertise required t
 o interpret the model’s responses. In response to this challenge, we propo
 se HPC-GPT, a novel LLaMA-based model fine-tuned with supervised learning
  on generated QA (Question-Answer) instances for the HPC domain. To eva
 luate its effectiveness, we concentrate on two HPC tasks: managing AI mode
 ls and datasets for HPC, and data race detection. Using HPC-GPT, we achie
 ve performance comparable to existing methods on both tasks, demonstratin
 g its effectiveness in HPC-related scenarios. Our experiments on op
 en-source benchmarks yield extensive results, underscoring HPC-GPT’s poten
 tial to bridge the performance gap between LLMs and HPC-specific tasks.\n\
 nTag: Artificial Intelligence/Machine Learning, Graph Algorithms and Frame
 works\n\nRegistration Category: Workshop Reg Pass\n\nSession Chairs: Seung
 -Hwan Lim (Oak Ridge National Laboratory (ORNL)); José Moreira (IBM); Cath
 erine Schuman (University of Tennessee, Knoxville); and Richard Vuduc (Geo
 rgia Institute of Technology)\n\n
END:VEVENT
END:VCALENDAR
