Aaron Appelle

Hello! I am a final-year PhD candidate at Duke University advised by Jerome Lynch. My doctoral work focused on machine learning approaches to pedestrian and crowd modeling for location-based smart city services. My current research interests are in world models and video diffusion models for multi-agent simulation.

Recently, I was an AI Resident at Google X working on generative models for architectural building design. Before my PhD, I completed a master’s degree at Stanford University and a bachelor’s degree at Columbia University.

Selected Publications

Evaluating Video Models as Simulators of Multi-Person Pedestrian Trajectories (arXiv)

We propose an evaluation protocol to benchmark text-to-video (T2V) and image-to-video (I2V) models as implicit simulators of pedestrian dynamics. We use 3D reconstruction and depth estimation to extract pedestrian trajectories without known camera parameters.

October 2025 · Aaron Appelle, Jerome P. Lynch · Preprint

Can Image-To-Video Models Simulate Pedestrian Dynamics? (ICML World Models)

We investigate whether image-to-video (I2V) models based on diffusion transformers can generate realistic pedestrian movement patterns in crowded public scenes by conditioning on keyframes from pedestrian benchmark datasets.

Automated and Scalable Footstep Vibration-Based Pedestrian Localization in Built Environments Using Deep Learning (ASCE)

We introduce a deep-learning-based footstep localization system that uses ground vibrations measured by geophones, along with a scalable method to automatically collect training data. We show that it significantly outperforms acoustic localization.

February 2025 · Aaron Appelle, Liming Salvino, Jerome P. Lynch · Paper

Pedestrian Footstep Localization Using a Deep Convolutional Network for Time Difference of Arrival Estimation (SPIE)

We present a privacy-preserving localization system that uses geophones to map pedestrian locations in outdoor spaces. The system uses a 1D-CNN to estimate time differences of arrival (TDOA) and achieves sub-meter localization accuracy.

May 2024 · Aaron Appelle, Liming Salvino, Jerome P. Lynch · Paper

Embedded Sensing System for Shipboard Damage Control Scenarios (SPIE)

We develop a shipboard monitoring system using extremely low-power wearable sensors and BLE connectivity. The system measures kinematic and biometric data as well as shipboard environmental conditions for real-time health tracking of crew during emergency response scenarios.

May 2025 · Eric Stach, Aaron Appelle, Jerome P. Lynch, Liming Salvino · Paper

Integration of Wearable and Ambient Sensors Towards Characterization of Physical Effort (APWSHM)

We describe an intelligent environment that unifies wearable skin-strain sensors for physiological monitoring, geophones and microphones to record ambient vibrations and sounds, and video cameras to visually observe human activities.

March 2023 · Aaron Appelle, Liming Salvino, Yun-An Lin, Taylor Pierce, Emerson Noble, Gabriel Draughon, Kenneth J. Loh, Jerome P. Lynch · Paper

Wearable Sensor Platform to Monitor Physical Exertion Using Graphene Motion Tape (EWSHM)

We present a power-efficient wearable sensing platform that uses graphene motion-tape sensors integrated with an IoT sensor node to infer distributed muscle exertion in real time. We use compressed sensing to efficiently transmit signal information.

June 2022 · Aaron Appelle, Yun-An Lin, Emerson Noble, Liming Salvino, Kenneth J. Loh, Jerome P. Lynch · Paper

Research Interests

  • World models and video diffusion models
  • Simulating multi-agent motion trajectories in real-world environments
  • Convolutional/recurrent architectures for signal processing in challenging noisy settings
  • Generative models for 3D shapes and scenes

Academic

  • Past life as a structural engineer (a subset of civil engineering: designing the structural systems of buildings, bridges, and other built infrastructure to resist gravity and environmental forces like wind and earthquakes). Spent all my time trying to automate the calculations and spreadsheets
  • Been studying ML/AI since 2019 at Stanford (CS229: Machine Learning, CS230: Deep Learning, CS361: Design Optimization, CS161: Design and Analysis of Algorithms)
  • Interned for 8 months as an AI Resident at Google X (aka “the moonshot factory”: Waymo, Google Glass, etc.), working on generative models for architectural design
  • Did a Fulbright research fellowship post-bachelor’s at EPFL in Switzerland for 1 year (forever enamored with the Swiss train system and mountains)

Personal

  • I’ve run two marathons: the City of Oaks Marathon (Raleigh, NC, 2023) and the New York City Marathon (2024)
  • I’ve lived (≥ 3 months) on 3 continents and in 8 cities (municipal, not metropolitan area)
  • I enjoy learning languages and speak some French (“I get by well”), Spanish (“learned in school”), and Italian (“‘learned’ from friends”) to varying degrees
  • Have played the sax (alto, tenor, and baritone) since elementary school; highlights include performing in the pit orchestra for our high school production of “Chicago” and playing at Carnegie Hall with the Columbia University Wind Ensemble