A Framework for Agents Guiding Foundation Models through Knowledge and Reasoning
Abstract
Foundation models (FMs), such as large language models, have revolutionized the field of AI by demonstrating remarkable performance across a wide range of tasks. However, they exhibit numerous limitations that prevent their broader adoption in many real-world systems, which often demand a higher bar for trustworthiness and usability. In this paper, we propose a conceptual framework that encapsulates the different modes by which agents can interact with FMs and guide them appropriately through a set of tasks, particularly via knowledge augmentation and reasoning. Our framework elucidates several categories of agent roles; we emphasize three that are particularly crucial for increasing trust in AI systems: updaters, which change the nature of token generation; assessors, which evaluate FM outputs; and orchestrators, which manage potentially complex workflows.
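To make the three role categories concrete, the following is a minimal sketch of how an updater, an assessor, and an orchestrator might be composed around a generic FM. All class and function names here (Updater, Assessor, Orchestrator, toy_fm) are illustrative assumptions for exposition, not an API defined by the framework.

```python
# Illustrative sketch of the three agent role categories; names and
# interfaces are hypothetical, not prescribed by the paper.
from typing import Callable, List

# Stand-in for a foundation model: any callable mapping a prompt to text.
FoundationModel = Callable[[str], str]


def toy_fm(prompt: str) -> str:
    """Placeholder FM that echoes the prompt; a real system would call an LLM."""
    return f"FM response to: {prompt}"


class Updater:
    """Changes the nature of token generation, e.g. by augmenting the
    prompt with retrieved knowledge before the FM sees it."""

    def __init__(self, knowledge: List[str]):
        self.knowledge = knowledge

    def update(self, prompt: str) -> str:
        context = " ".join(self.knowledge)
        return f"Context: {context}\nQuestion: {prompt}"


class Assessor:
    """Evaluates FM outputs, e.g. flagging responses that fail a check."""

    def assess(self, output: str) -> bool:
        # Trivial non-emptiness check, purely for illustration.
        return len(output.strip()) > 0


class Orchestrator:
    """Manages the workflow: applies the updater, queries the FM, and
    consults the assessor before accepting an answer."""

    def __init__(self, fm: FoundationModel, updater: Updater, assessor: Assessor):
        self.fm = fm
        self.updater = updater
        self.assessor = assessor

    def run(self, prompt: str) -> str:
        augmented = self.updater.update(prompt)
        output = self.fm(augmented)
        if not self.assessor.assess(output):
            raise ValueError("FM output failed assessment")
        return output


if __name__ == "__main__":
    orchestrator = Orchestrator(
        fm=toy_fm,
        updater=Updater(knowledge=["Paris is the capital of France."]),
        assessor=Assessor(),
    )
    print(orchestrator.run("What is the capital of France?"))
```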