BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260422T000713Z
LOCATION:505
DTSTART;TZID=America/Denver:20231115T162400
DTEND;TZID=America/Denver:20231115T163300
UID:submissions.supercomputing.org_SC23_sess308_spostu102@linklings.com
SUMMARY:Chasing Clouds with Donkeycar: Holistic Exploration of Edge and Cl
 oud Inferencing Trade-Offs in E2E Self-Driving Cars
DESCRIPTION:Kyle Zheng (National Science Foundation (NSF))\n\nIn autonomo
 us driving, inference models strain onboard computational resources, and
  the latency between the car and the data center calls the viability of 
 cloud offloading into question. We introduce a Cloud-Aided Real-time Inf
 erencing Framework that integrates with Donkeycar and distributes the co
 mputational load between the cloud and the edge. Using a Raspberry Pi 4 
 for edge inferencing and NVIDIA Triton Inference Server for the cloud, w
 e demonstrate the framework's advantages, particularly for RNN performan
 ce, which achieved a 90% autonomy score. Our study uses a scaled car nav
 igating obstacles to assess factors such as speed, resources, latency, a
 nd autonomy score. The system achieves faster inference, eliminating bot
 tlenecks and processing 42 frames per second in the cloud, 11 times fast
 er than on the edge. The poster details the strengths, limitations, and 
 potential of leveraging cloud resources in real-time edge environments, 
 focusing on autonomy scores and latency trade-offs.\n\nRegistration Cate
 gory: Tech Program Reg Pass\n\nSession Chair: Ana Gainaru (Oak Ridge Nat
 ional Laboratory (ORNL))\n\n
END:VEVENT
END:VCALENDAR
