Research
Research directions and platforms of SOARLAB.
SOARLAB aims to build flying general intelligence that can be validated on real robotic platforms. We treat embodied intelligence as a full-stack problem that connects perception, state estimation, control, planning, world models, behavior decision-making, data loops, and hardware experiments.
Two Core Lines
Robotics Agent
We study agent-oriented perception, planning/control, world models, simulation, and reinforcement learning so robots can complete tasks in dynamic real-world environments.
WAM/VLA Foundation Models
We develop world-action model (WAM) and vision-language-action (VLA) robot foundation models, emphasizing transferable action generation and validation on real robots.
Hardware Platforms
| Platform | Research Goal |
|---|---|
| Aerial robots | Agile flight, GNSS-denied perception and localization, swarm state estimation, collaborative exploration, and autonomous operation. |
| Humanoids | Perception, planning/control, decision-making, sim-to-real transfer, and world models for embodied tasks. |
| Flying humanoids | Combining aerial and humanoid platforms to study general and collaborative intelligence across embodiments. |
Existing Strengths and New Directions
Spatial Intelligence and SLAM
The lab builds on prior work in spatial intelligence for aerial and legged robots, including visual-inertial state estimation, distributed SLAM, collaborative perception, multi-robot consistency optimization, and deployment on real hardware.
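To make "multi-robot consistency optimization" concrete, here is a minimal sketch (a hypothetical toy example, not a SOARLAB system): each robot contributes relative pose measurements, and a least-squares solve recovers a globally consistent set of poses. Real systems such as distributed SLAM work with full SE(3) poses and distributed solvers; this 1-D linear version only illustrates the idea.

```python
import numpy as np

# Toy 1-D pose consistency: three poses x_0, x_1, x_2 and noisy relative
# measurements z_ij ~ x_j - x_i (e.g. inter-robot or loop-closure constraints).
measurements = [(0, 1, 1.1), (1, 2, 0.9), (0, 2, 2.05)]
n = 3

# Build the linear system A x = b. Each measurement contributes one row
# (x_j - x_i = z_ij); an anchor row fixes x_0 = 0 to remove the gauge
# freedom, since only relative positions are observable.
rows, b = [], []
for i, j, z in measurements:
    row = np.zeros(n)
    row[i], row[j] = -1.0, 1.0
    rows.append(row)
    b.append(z)
anchor = np.zeros(n)
anchor[0] = 1.0
rows.append(anchor)
b.append(0.0)

A = np.vstack(rows)
b = np.asarray(b)

# Least-squares solve: the disagreeing measurements (1.1 + 0.9 != 2.05)
# are reconciled into one consistent trajectory.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # consistent pose estimates, x[0] == 0
```

In 2-D/3-D the same structure appears as a pose graph over rotations and translations, and the least-squares solve becomes iterative nonlinear optimization.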
Aerial Swarms
We study aerial swarm state estimation, collaborative exploration, and autonomous operation in GNSS-denied environments. Representative systems include Omni-Swarm, D2SLAM, and RACER.
Embodied and Collaborative Intelligence
Future work focuses on embodied and collaborative intelligence, including reinforcement learning, agent behavior decision-making, multimodal models integrated with physical robots, and data/evaluation loops for robot foundation models.
Open Systems
Released or contributed systems include D2SLAM, Omni-Swarm, VINS-Fisheye, and FoxTracker. The next stage will continue to hold to the standard that algorithms must run on real robots.