Recurrent Transformers Trade-off Parallelism for Length Generalization on Regular Languages. Paul Soulos, Aleksandar Terzic, et al. NeurIPS 2024.
TabSketchFM: Sketch-based Tabular Representation Learning for Data Discovery over Data Lakes. Aamod Khatiwada, Harsha Kokel, et al. NeurIPS 2024.
MemReasoner: A Memory-augmented LLM Architecture for Multi-hop Reasoning. Irene Ko, Sihui Dai, et al. NeurIPS 2024.
Compositional Communication with LLMs and Reasoning about Chemical Structures. Dmitry Zubarev, Sarath Swaminathan. NeurIPS 2024.
A Large Encoder-Decoder Polymer-Based Foundation Model. Eduardo Almeida Soares, Nathaniel Park, et al. NeurIPS 2024.
A Mamba-Based Foundation Model for Chemistry. Emilio Ashton Vital Brazil, Eduardo Almeida Soares, et al. NeurIPS 2024.
Multi-View Mixture-of-Experts for Predicting Molecular Properties Using SMILES, SELFIES, and Graph-Based Representations. Eduardo Almeida Soares, Indra Priyadarsini S, et al. NeurIPS 2024.
Agnostic Causality-Driven Enhancement of Chemical Foundation Models on Downstream Tasks. Victor Shirasuna, Eduardo Almeida Soares, et al. NeurIPS 2024.
Towards Using Large Language Models and Deep Reinforcement Learning for Inertial Fusion Energy. Vadim Elisseev, Max Esposito, et al. NeurIPS 2024.