Towards More Likely Models for AI Planning
Turgay Caglar, Sirine Belhaj, et al.
IJCAI 2023
Supernet training of LLMs is of great interest in industrial applications because it can produce a palette of smaller models at constant cost, regardless of how many models (of different sizes and latencies) are produced. We propose Multistage Low-rank Fine-tuning of Super-transformers (MLFS), a new method for parameter-efficient supernet training. We show that it is possible to obtain high-quality encoder models suitable for commercial edge applications, and that, while decoder-only models resist a comparable degree of compression, decoders can be sliced effectively for a significant reduction in training time.
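The abstract describes the core idea at a high level: a shared supernet whose smaller sub-models are slices of the largest model, trained via parameter-efficient low-rank updates. The sketch below illustrates that combination in PyTorch under stated assumptions; the class name `LowRankSlicedLinear`, the rank, the prefix-slicing rule, and the two-width training step are all hypothetical choices for illustration, not the authors' MLFS implementation.

```python
# Illustrative sketch only: the MLFS staging schedule, adapter placement, and
# slicing rules are not given in the abstract; everything here is assumed.
import torch
import torch.nn as nn


class LowRankSlicedLinear(nn.Module):
    """A linear layer with a frozen full-rank weight plus a trainable
    low-rank (LoRA-style) update, whose output width can be sliced so
    that smaller sub-models share the same underlying parameters."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        # Frozen backbone weight shared by every sub-model in the supernet.
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features) * 0.02, requires_grad=False
        )
        # Trainable low-rank factors: only these receive gradient updates.
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.02)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x: torch.Tensor, width: int | None = None) -> torch.Tensor:
        out = self.weight.size(0) if width is None else width
        # Slice rows of both the frozen weight and the low-rank update, so
        # every smaller sub-model is a prefix slice of the largest model.
        w = self.weight[:out] + self.lora_b[:out] @ self.lora_a
        return x @ w.t()


# Hypothetical training step: fit the full width and a sampled smaller
# slice on the same batch, updating only the low-rank factors.
layer = LowRankSlicedLinear(64, 128, rank=4)
opt = torch.optim.AdamW(
    [p for p in layer.parameters() if p.requires_grad], lr=1e-3
)
x, target = torch.randn(8, 64), torch.randn(8, 128)
loss = nn.functional.mse_loss(layer(x), target)                            # full model
loss = loss + nn.functional.mse_loss(layer(x, width=96), target[:, :96])   # sliced model
loss.backward()
opt.step()
```

Because the frozen weight is shared and only the small factors train, each additional sub-model adds no new backbone parameters, which is what makes the "palette of smaller models at constant cost" claim plausible in this setup.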
Eduardo Almeida Soares, Dmitry Zubarev, et al.
ICLR 2025
Srikanth Tamilselvam, Dinesh Khandelwal, et al.
ACML 2022
Eduardo Almeida Soares, Victor Shirasuna, et al.
ACS Fall 2024