ADAPTOR

Advancing Assistive Teleoperation

with Few-Shot Learning & Cross-Operator Generalization

Jilin University · IO-AI TECH

Video Presentation

Method

Adaptor Framework Overview
01. Intention Preprocessing

Models intent uncertainty by constructing perturbation distributions via stochastic noise injection and extracting keyframes from expert trajectories.
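The preprocessing step above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the noise scale, sample count, and the curvature-based keyframe heuristic are all assumptions made for the sketch.

```python
import numpy as np

def perturbation_distribution(intent, n_samples=32, sigma=0.05, rng=None):
    """Model intent uncertainty by injecting Gaussian noise around the raw
    intent signal (sigma and n_samples are illustrative values)."""
    rng = np.random.default_rng() if rng is None else rng
    intent = np.asarray(intent, dtype=float)
    return intent + sigma * rng.standard_normal((n_samples,) + intent.shape)

def extract_keyframes(trajectory, threshold=0.1):
    """Keep waypoints where the motion direction changes sharply: a simple
    curvature-based stand-in for expert-trajectory keyframe extraction."""
    traj = np.asarray(trajectory, dtype=float)
    keep = [0]
    for i in range(1, len(traj) - 1):
        prev_dir = traj[i] - traj[i - 1]
        next_dir = traj[i + 1] - traj[i]
        # Cosine between consecutive segments; a sharp turn marks a keyframe.
        cos = np.dot(prev_dir, next_dir) / (
            np.linalg.norm(prev_dir) * np.linalg.norm(next_dir) + 1e-8)
        if cos < 1.0 - threshold:
            keep.append(i)
    keep.append(len(traj) - 1)
    return traj[keep]
```

Sampling from the perturbation distribution gives the policy many plausible readings of one noisy intent signal, while keyframes compress a long expert trajectory into its decision points.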

02. Policy Learning

Leverages a tripartite expert architecture: the VLM Expert encodes environmental context, the Intention Expert fuses semantics with trajectory guidance, and the Action Expert generates precise controls.
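The tripartite pipeline can be sketched as three composed modules. Every interface below (class names, a concatenation-based fusion, a 7-DoF command head) is a hypothetical stand-in chosen for the sketch, not the paper's actual architecture.

```python
import numpy as np

class VLMExpert:
    """Stand-in for a vision-language encoder of environmental context."""
    def encode(self, image, instruction):
        return np.ones(16)  # placeholder context embedding

class IntentionExpert:
    """Fuses semantic context with operator trajectory guidance."""
    def fuse(self, context, trajectory_guide):
        # Concatenation is the simplest possible fusion; the real model
        # would use a learned fusion module.
        return np.concatenate([context, np.asarray(trajectory_guide,
                                                   dtype=float).ravel()])

class ActionExpert:
    """Maps fused features to a low-dimensional control command."""
    def act(self, fused):
        return fused[:7] * 0.1  # e.g. a 7-DoF arm command

def policy(image, instruction, trajectory_guide):
    vlm, intention, action = VLMExpert(), IntentionExpert(), ActionExpert()
    context = vlm.encode(image, instruction)
    fused = intention.fuse(context, trajectory_guide)
    return action.act(fused)
```

The design point the sketch illustrates: each expert owns one modality, so the action head never has to interpret raw pixels or raw operator input directly.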

Experiments & Results

Extensive real and simulated benchmarks demonstrate that Adaptor achieves state-of-the-art performance, outperforming baselines in success rate, efficiency, and user satisfaction.

+41.9%  Success Rate (improvement vs. baselines)
-32.2%  Completion Time (efficiency gain)
83.2%   Novice Success (few-shot generalization)
9/10    User Satisfaction (top metrics score)

Toward Efficient Teleoperation

Adaptor employs a shared-control paradigm to mitigate operator workload during teleoperation.

Coarse Demonstration: Operator provides a rough guide; Adaptor refines it into precise actions.

Partial Demonstration: Operator executes only a subset; Adaptor completes the full trajectory.
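The two assistance modes above can be sketched as simple operations on commands and trajectories. The blending weight and the rollout interface are illustrative assumptions, not the paper's method.

```python
import numpy as np

def shared_control(operator_cmd, policy_cmd, alpha):
    """Coarse-demonstration mode: blend a rough operator command with the
    policy's refined command. alpha in [0, 1]; 0 = pure operator,
    1 = pure assistance (the weighting scheme is illustrative)."""
    op = np.asarray(operator_cmd, dtype=float)
    pol = np.asarray(policy_cmd, dtype=float)
    return (1.0 - alpha) * op + alpha * pol

def complete_partial_demo(partial_traj, policy_step, horizon):
    """Partial-demonstration mode: the operator supplies a trajectory
    prefix, and the policy rolls out the remaining steps.
    policy_step is a stand-in for the learned action expert."""
    traj = [np.asarray(p, dtype=float) for p in partial_traj]
    for _ in range(horizon):
        traj.append(policy_step(traj[-1]))
    return np.stack(traj)
```

In the coarse mode the operator stays in the loop at every step; in the partial mode they hand over control entirely once their prefix ends.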

Overall Performance

Adaptor significantly reduces teleoperation time while maintaining a high success rate.

Cross-Operator Generalization

We train Adaptor exclusively on expert demonstrations and evaluate its generalization across multiple tasks with operators of varying expertise.

Adaptor maintains low performance variance across operators with different expertise levels.

BibTeX

If you find this work useful, please consider citing:

@misc{liu2026adaptoradvancingassistiveteleoperation,
      title={Adaptor: Advancing Assistive Teleoperation with Few-Shot Learning and Cross-Operator Generalization}, 
      author={Yu Liu and Yihang Yin and Tianlv Huang and Fei Yan and Yuan Xu and Weinan Hong and Wei Han and Yue Cao and Xiangyu Chen and Zipei Fan and Xuan Song},
      year={2026},
      eprint={2604.09462},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2604.09462}, 
}