📢 News

  • 2026/03/27 [Preprint] We released MMaDA-VLA, a fully native pre-trained large diffusion VLA model that unifies multi-modal understanding and generation in a single framework! See the Project page for more details!
  • 2026/03/13 [Preprint] We released Articulat3D, a novel framework for constructing high-fidelity digital twins of articulated objects from casually captured monocular videos! See the Project page for the overview video!
  • 2026/02/21 [CVPR’26] Two papers (HiF-VLA and V²Drop) were accepted to CVPR 2026 (Main Conference)!
  • 2026/02/18 [RA-L] RoboSimGS, a novel Real2Sim2Real framework that converts multi-view real-world images into scalable, high-fidelity, and physically interactive simulation environments for robotic manipulation, was accepted to RA-L! See the Project page for the overview video!
  • 2026/02/10 [RynnBrain] We presented RynnBrain, an embodied foundation model grounded in physical reality, with dense (2B, 8B) and MoE (30B) variants, alongside three specialized models: RynnBrain‑Plan (manipulation planning), RynnBrain‑Nav (navigation), and RynnBrain‑CoP (spatial reasoning). See GitHub and the Chinese report from 机器之心. Update (2026/02/19): we released the technical report!
  • 2026/01/31 [ICRA’26] RynnVLA-001, a VLA foundation model, was accepted to ICRA 2026!
  • 2026/01/22 [Talk] I gave a talk titled "Physical AI Ecosystem: Tackling the Key Barriers to Embodied Intelligence" at the AAAI-26 Interactive Industry Sessions.