Programme Overview

The forum sessions will take place in the Report Hall, the MATE competition will be held at the Natatorium, and the First-Floor Exhibition Hall will host the WBCD Exhibition throughout the event.

April 25

Report Hall:
  09:00–09:30  Opening Ceremony
  09:45–10:30  Talk by Prof. Yao Mu
  10:30–11:15  Talk by Prof. Zhongyu Li
  11:15–12:00  Talk by Prof. Zipeng Dai
  13:00–14:00  Student Session
  14:00–14:45  Talk by Prof. Boyu Zhou
  14:45–15:30  Talk by Prof. Han Zhang
  15:30–16:15  Talk by Prof. Chenjia Bai
  16:15–18:00  Student Session

Natatorium: Team Testing and Debugging
First-Floor Exhibition Hall: WBCD Exhibition (open all day)

April 26

Report Hall: MATE Poster Session
Natatorium:
  09:00–09:10  Remarks by Shanghai Jiao Tong University
  09:10–09:20  Remarks by MATE Executive Board
  09:20–09:30  Judges’ Briefing on Competition Rules and Notes
  09:30–16:00  Competition
  16:00–16:30  Results Announcement
  16:30–16:45  Award Ceremony
  16:45–17:00  Group Photo

First-Floor Exhibition Hall: WBCD Exhibition (open all day)

Featured Talks

The following invited talks cover trajectory planning, aerial embodied intelligence, humanoid intelligence, and embodied AI.

Yao Mu
Yao Mu is a tenure-track Assistant Professor at Shanghai Jiao Tong University. He is recognized as a national-level young talent, a Shanghai Overseas High-Level Talent, a BAAI Young Scholar, and an EAI-100 2025 Emerging Scholar. He received his Ph.D. from the Department of Computer Science at The University of Hong Kong and was a visiting scholar at ETH Zurich and the National University of Singapore. His research focuses on multimodal embodied intelligence and robot learning. He has served as an Area Chair for top machine learning conferences such as ICLR and is an Executive Member of the CCF Technical Committee on Intelligent Robots. He has published more than 50 papers in leading journals and conferences including IJRR, RSS, NeurIPS, ICML, and CVPR, with over 3,200 citations on Google Scholar. His honors include a nomination for the IROS 2025 Best Paper Award, the Best Paper Award at the ECCV 2024 Workshop on Collaborative Embodied Intelligence, the 2024 CAA Scholarship for the Symposium on Autonomous Robots, the IEEE ICCAS 2020 Best Student Paper Award, and a nomination for the IEEE IV 2021 Best Student Paper Award.
Shanghai Jiao Tong University

Generative Simulation-Driven Large-Scale Parallel Embodied Learning

This talk focuses on large-scale parallel embodied learning driven by generative simulation. In line with the emerging trend of embodied intelligence evolving toward self-improving systems, it presents a complete technical pipeline spanning diverse scene and asset generation, embodied simulation and data-engine construction, progress-evaluation foundation models, discrete diffusion VLA models, and joint optimization with reinforcement learning. Through case studies of long-horizon embodied manipulation agents and fine-grained laboratory tasks, the talk shows how generative simulation can continuously amplify data scalability, training automation, and capability evolution, offering new paths and paradigms for the next generation of embodied intelligence systems.

Zhongyu Li
Dr. Zhongyu Li is an Assistant Professor in the Department of Mechanical and Automation Engineering at The Chinese University of Hong Kong. He received his Ph.D. in Mechanical Engineering from UC Berkeley in 2025 and his B.Eng. in Mechatronics Engineering from Zhejiang University in 2019. His work has been recognized by several best paper finalist honors at IROS and ICRA, and he was selected as an RSS Pioneer and a Rising Star in Mechanical Engineering in 2023.
The Chinese University of Hong Kong

From Robot Cerebellum to Robot Brain

Bipedal humanoids have the potential to transform work in human environments, but many core challenges remain unresolved. This talk first revisits our earlier work on unlocking the agility of legged robots through reinforcement learning. We developed a generalized motion-tracking control framework that enabled bipedal robots to execute a wide range of highly dynamic skills, including targeted standing jumps, running a 400-meter dash, and traversing challenging terrain. This line of work helped catalyze recent RL-based humanoid locomotion controllers built on motion imitation. We then extended these capabilities beyond locomotion to support intelligent loco-manipulation and multi-agent interaction, pushing legged robots toward more functional real-world behavior. Conceptually, this line of research addresses the problem of the robot cerebellum: controllers that provide robust, adaptive, and athletic mobility. Building on this foundation, the talk introduces our current work at CUHK, where we aim to move from robot cerebellum to robot brain by using multimodal data to endow humanoid robots not only with dynamic locomotion, but also with generalizable whole-body manipulation, task-level reasoning, and safe, interactive intelligence.

Zipeng Dai
Zipeng Dai is a Young Scientist and Head of Agile AI Flight at Differential Robotics (Hangzhou). He received both his B.Eng. and Ph.D. from the School of Computer Science at Beijing Institute of Technology and twice won the National Scholarship for Doctoral Students. He has published more than ten papers in top venues such as TMC, INFOCOM, and ICDE, and received the Best Paper Runner-up Award at KDD 2021.
Differential Robotics (Hangzhou)

Toward Open-Source, Deployable, and Application-Oriented Aerial Embodied Intelligence

Centered on the theme of open source, deployment, and real-world application, this talk reviews the team's exploration from the FAST Lab stage to its engineering and industrial development at Differential Robotics. It summarizes a line of work on autonomous UAV flight, end-to-end control, and perception and decision making in complex environments, and it reflects on how these efforts evolved from academic research into deployable systems. A key lesson is that the core of aerial intelligence lies not in isolated algorithmic advances, but in the closed-loop unification of perception, planning and control, decision making, world modeling, and real-system deployment. From this perspective, the talk outlines a practical path through which aerial embodied intelligence can move from laboratory validation to real-world use.

Boyu Zhou
Prof. Boyu Zhou has published papers in prestigious robotics journals and conferences, including TRO, TASE, RAL, RSS, ICRA, and IROS. His team's work received the 2023 IEEE Transactions on Robotics Best Paper Award and the 2023 IEEE RAL Best Paper Award, and was a finalist for the 2024 IEEE ICRA Best Paper Award on Unmanned Aerial Vehicles. His publications have been listed as popular papers in TRO and RAL, reaching No. 1 at their peak. He serves as an Associate Editor of IEEE TRO and is listed among Stanford University's World's Top 2% Scientists.
Southern University of Science and Technology

Foundation Models for Aerial Embodied Intelligence

Robotics is entering a new stage in which data-driven methods and foundation models are deeply integrated, forming a key driver of advances in robot autonomy, while fine manipulation and complex interaction in real environments are increasingly important. This talk focuses on aerial embodied intelligence and presents a systematic line of explorations from autonomous navigation to higher-level capabilities. At the navigation level, it discusses how generative models improve perception in challenging environments such as glass and mirror scenes, enabling robust flight. For exploration in unknown environments, it introduces a lightweight scene representation and efficient planning framework for large-scale exploration. It then shows how vision-language foundation models support zero-shot UAV navigation, object-goal navigation, and on-the-fly 3D scene scanning. Finally, the talk discusses applications of whole-body planning in aerial load transportation systems and high-speed mobile manipulation.

Chenjia Bai
Chenjia Bai is a Research Scientist at the China Telecom Artificial Intelligence Research Institute (TeleAI) and Director of its Embodied Intelligence Research Center. His work focuses on the “brain” and “cerebellum” of embodied intelligence, including cross-embodiment adaptation, dexterous manipulation planning, and data synthesis. He has published more than sixty papers and led multiple national and municipal projects, and his work has received awards at WAIC as well as media coverage from outlets including MIT Technology Review and CCTV.
TeleAI

A Brain-and-Cerebellum Capability Foundation for General Embodied Intelligence

This talk discusses a collaborative “brain-and-cerebellum” capability foundation for general embodied intelligence and presents the TeleAI team's work on building embodied agents with both cognitive decision-making and motion control abilities. On the cerebellum side, the team has developed a high-dynamic whole-body motion control framework in the KungfuBot series, enabling text-driven real-time motion generation, adaptation to complex environments, soccer skills, and robust human-robot interaction. These efforts go beyond traditional control in dynamic balance and multimodal interaction. On the brain side, the team has proposed the PRTS vision-language-action foundation model based on contrastive reinforcement learning, built the GN-0 end-to-end embodied navigation framework that unifies map-based and map-free navigation, and developed the ATE cross-embodiment transfer method to improve VLA generalization across different robot platforms. To support this foundation, the team has also created the TeleSim high-fidelity simulation data platform, a weakly embodiment-dependent data collection system, and a matrix of full-size and compact humanoid platforms in the TeleBot series. The system has been validated in scenarios such as guidance, transportation, and fine-grained manipulation, providing an integrated solution from data and models to embodied platforms.

Han Zhang
Shanghai Jiao Tong University

Trajectory Planning: Objective Functions, Feedback, and Solution Methods

The core problem of trajectory planning can be understood through three tightly coupled questions: what to plan for, what information to plan with, and how to solve the resulting problem efficiently. This talk presents our recent studies and reflections on trajectory planning for robots and intelligent vehicles along the line of objective functions, feedback, and solution methods. First, on objective design, the talk discusses how Inverse Optimal Control (IOC) can be used to recover latent task goals and trade-off mechanisms from expert behavior or historical data, reducing the need for manually crafted cost functions and making planning better aligned with task requirements and real behavior patterns. Second, on feedback construction, the talk combines SLAM and environment perception to show how localization, mapping, and scene understanding can provide reliable state feedback and environmental constraints, turning planning from offline geometric generation into perception-driven closed-loop decision making. Finally, on solution methods, the talk introduces two representative optimization frameworks: one for planning under uncertainty, where safe trajectory generation must cope with perception error and environmental uncertainty, and another for motion planning of articulated vehicles on curved roads using warm-start techniques. Overall, the talk argues that high-quality trajectory planning is not merely a numerical optimization problem, but a system problem that deeply couples goal learning, environmental feedback, and real-time solving.