Found 5 repositories (showing 5)
ginwind
VLA-JEPA: Enhancing Vision-Language-Action Model with Latent World Model
JeffrinSam
Zero-to-mastery JEPA study guide: 245 cells, 14 GIFs, interactive notebooks covering JEPA, V-JEPA 2, VLA models, world models, and ICRA research
SaarthakG-Dtu
This repository contains the latest research papers on VLA models and world models such as JEPA in the domain of embodied AI.
git-kinetix
Obsidian knowledge vault covering physical intelligence research: JEPA, world models, VLA, video planning — 34 papers, 64 metrics, 78 datasets with inline-linked tables and embedded PDFs
caroline430
Personal learning space for embodied AI & robotics — covering the full stack from robot hardware and perception (VLM) to task planning (LLM + CoT), vision-language-action models (VLA), world models (Dreamer, JEPA), and a runnable browser demo.