TY - GEN
T1 - Autonomous Workflow for Multimodal Fine-Grained Training Assistants Towards Mixed Reality
AU - Pei, Jiahuan
AU - Viola, Irene
AU - Huang, Haochen
AU - Wang, Junxiao
AU - Ahsan, Moonisa
AU - Ye, Fanghua
AU - Jiang, Yiming
AU - Sai, Yao
AU - Wang, Di
AU - Chen, Zhumin
AU - Ren, Pengjie
AU - Cesar, Pablo
N1 - Publisher Copyright:
© 2024 Association for Computational Linguistics.
PY - 2024
Y1 - 2024
N2 - Autonomous artificial intelligence (AI) agents have emerged as promising protocols for automatically understanding language-based environments, particularly with the exponential development of large language models (LLMs). However, a fine-grained, comprehensive understanding of multimodal environments remains under-explored. This work designs an autonomous workflow tailored for seamlessly integrating AI agents into mixed reality (MR) applications for fine-grained training. We present a demonstration of a multimodal fine-grained training assistant for LEGO brick assembly in a pilot MR environment. Specifically, we design a cerebral language agent that integrates LLMs with memory, planning, and interaction with MR tools, as well as a vision-language agent, enabling agents to decide their actions based on past experiences. Furthermore, we introduce LEGO-MRTA, a multimodal fine-grained assembly dialogue dataset synthesized automatically within the workflow using a commercial LLM. This dataset comprises multimodal instruction manuals, conversations, MR responses, and vision question answering. Finally, we present several prevailing open-source LLMs as benchmarks, assessing their performance with and without fine-tuning on the proposed dataset. We anticipate that the broader impact of this workflow will advance the development of smarter assistants for seamless user interaction in MR environments, fostering research in both the AI and HCI communities.
UR - http://www.scopus.com/inward/record.url?scp=85205288521&partnerID=8YFLogxK
U2 - 10.18653/v1/2024.findings-acl.240
DO - 10.18653/v1/2024.findings-acl.240
M3 - Conference contribution
AN - SCOPUS:85205288521
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 4051
EP - 4066
BT - Findings of the Association for Computational Linguistics: ACL 2024
A2 - Ku, Lun-Wei
A2 - Martins, Andre
A2 - Srikumar, Vivek
PB - Association for Computational Linguistics (ACL)
T2 - Findings of the 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
Y2 - 11 August 2024 through 16 August 2024
ER -