In-Context Bias Propagation in LLM-Based Tabular Data Generation. Pol Garcia Recasens, Alberto Gutierrez-Torre, et al. ICML 2025.
Mind the Memory Gap: Unveiling GPU Bottlenecks in Large-Batch LLM Inference. Pol G. Recasens, Ferran Agullo, et al. CLOUD 2025.
Towards Pareto Optimal Throughput in Small Language Model Serving. Pol G. Recasens, Yue Zhu, et al. EuroSys 2024.