ML Seminar Series

GEM: A Generalizable Ego-Vision Multimodal World Model for Fine-Grained Ego-Motion, Object Dynamics, and Scene Composition Control

by Dr. Suman Saha (Senior Data Scientist, SDSC hub at PSI)

Europe/Zurich
OHSA/B17

Description

We present GEM, a Generalizable Ego-vision Multimodal world model that predicts future frames from a reference frame, sparse features, human poses, and ego-trajectories, giving it fine-grained control over object dynamics, ego-agent motion, and human poses. GEM generates paired RGB and depth outputs for richer spatial understanding. We introduce autoregressive noise schedules to enable stable long-horizon generation. Our dataset comprises 4000+ hours of multimodal data across domains such as autonomous driving, egocentric human activities, and drone flights, with pseudo-labels providing depth maps, ego-trajectories, and human poses. We evaluate GEM with a comprehensive framework, including a new Control of Object Manipulation (COM) metric, to assess controllability. Experiments show that GEM excels at generating diverse, controllable scenarios and maintains temporal consistency over long generations. Code, models, and datasets are fully open-sourced.
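As a rough illustration of the autoregressive noise-schedule idea behind stable long-horizon generation, here is a minimal Python sketch in which per-frame noise ramps up with temporal distance from the conditioning context within each generated chunk. The function name, parameters, chunking scheme, and geometric ramp are all hypothetical assumptions for illustration, not GEM's actual implementation.

    import numpy as np

    def autoregressive_noise_schedule(num_frames: int, chunk_size: int,
                                      sigma_min: float = 0.002,
                                      sigma_max: float = 80.0) -> np.ndarray:
        """Hypothetical per-frame noise levels for a chunked autoregressive
        rollout. Frames near the conditioning context get little noise, so
        each new chunk stays anchored to previously generated frames.
        Illustrative sketch only, not GEM's actual schedule."""
        sigmas = np.empty(num_frames)
        for start in range(0, num_frames, chunk_size):
            chunk = np.arange(start, min(start + chunk_size, num_frames))
            # Noise ramps up geometrically within each chunk: the first
            # frame (overlapping the context) is nearly clean, the last
            # frame is fully noised.
            t = (chunk - start) / max(chunk_size - 1, 1)
            sigmas[chunk] = sigma_min * (sigma_max / sigma_min) ** t
        return sigmas

    # Example: 16 frames generated autoregressively in chunks of 8.
    print(autoregressive_noise_schedule(16, 8).round(3))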

TEAMS LINK

Organised by

The Laboratory for Simulation and Modelling

Dr. Suman Saha
Registration
Participants
  • Benjamín Béjar
  • Suman Saha
  • Tomasz Kacprzak
  • +3