BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Linklings LLC//SC23 Schedule//EN
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260422T000615Z
LOCATION:503-504
DTSTART;TZID=America/Denver:20231115T103000
DTEND;TZID=America/Denver:20231115T120000
UID:submissions.supercomputing.org_SC23_sess251@linklings.com
SUMMARY:Exhibitor Forum: Super Intelligence I
DESCRIPTION:Cost-Effective LLM Inference Solution Using SK hynix's AiM (Ac
 celerator-in-Memory)\n\nLarge language models (LLMs) are becoming increasi
 ngly popular for a variety of AI services, such as chatbots and virtual as
 sistants. However, serving LLMs can be challenging, due to their high oper
 ating costs and long service latency. The main challenge in serving LLMs i
 s the memory bandwidth bottl...\n\n\nYongkee Kwon (SK hynix Inc)\n--------
 -------------\nEthernet-Based Interconnect, the Critical Crossroads for HP
 C and AI Networking at Scale\n\nThe convergence of HPC and AI entails an e
 xplosion of performance of the number of nodes/cores, data volume, and dat
 a movement. In the coming years, the deployment of AI networks, ranging fr
 om rack-scale to datacenter-scale, is set to accelerate, necessitating the
  evolution of networking technology ...\n\n\nEric Eppe (Eviden)\n---------
 ------------\nOvercoming the Cost of Data Movement in AI Inference Acceler
 ators\n\nThe largest performance bottleneck and energy usage in neural net
 work acceleration is the fetching of weight and activation values prior to
  general matrix-vector (GEMV) or general matrix-matrix (GEMM) computation.
  Traditional von Neumann architectures, even with large on-chip caches, co
 nsume as much...\n\n\nArun Iyengar (Untether AI)\n\nTag: Accelerators, Art
 ificial Intelligence/Machine Learning, Architecture and Networks, Hardware
  Technologies\n\nRegistration Category: Tech Program Reg Pass, Exhibits Re
 g Pass\n\nSession Chair: Nathan Hanford (Lawrence Livermore National Labor
 atory (LLNL))
END:VEVENT
END:VCALENDAR
