Research
AxisGuide: Grounding Robot Action Coordinate System in RGB Observations for Robust Visuomotor Manipulation
Jiyun Jang, Yujin Sung, Woosung Joung, Daewon Chae, Sangwon Lee, Sohwi Kim, Jinkyu Kim, Jungbeom Lee
RSS, 2026
A framework that injects action coordinate cues into RGB observations to explicitly ground the robot's action space, improving zero-shot execution and robustness in visuomotor manipulation across diverse environments.
uCLIP: Parameter-Efficient Multilingual Extension of Vision-Language Models with Unpaired Data
Dahyun Chung*, Donghyun Shin*, Yujin Sung*, Seunggi Moon*, Jinwoo Jeon, Byung-Jun Lee
AAAI, 2026 project / paper / code
A lightweight framework that enables multilingual vision–language alignment for underrepresented languages by using English as a semantic pivot, requiring no paired supervision.
Website template from Jon Barron.