IBM at NeurIPS 2025

About

Neural Information Processing Systems (NeurIPS) is a leading machine learning and computational neuroscience conference. IBM Research is excited to sponsor NeurIPS again this year as a Platinum sponsor. We invite all attendees to visit us during the event at booth number 1109, from Tuesday, December 2 through Friday, December 5.

We look forward to meeting you and telling you more about our latest work and career opportunities at IBM Research. At our booth we’ll be demoing projects on a broad range of AI topics such as foundation models, trustworthy AI, natural language processing and understanding, knowledge and reasoning, AI automation, human-centered AI, and federated learning.

Presentation times for conference workshops, demos, papers, and tutorials can be found in the agenda section at the bottom of this page. Note: All times are displayed in your local time.

Career opportunities

Visit us at the IBM booth to meet with IBM researchers and recruiters and speak about future job opportunities or 2026 summer internships.

Agenda

  • Description:

    Visit us at the IBM booth in the exhibit hall to talk to our researchers and recruiters. We'll also be doing demos of our work. <Booth demo & staff schedule coming soon>

  • Description:

    Analog in-memory computing (AIMC) is a promising compute paradigm for improving the speed and power efficiency of neural network inference beyond the limits of conventional von Neumann architectures. However, AIMC introduces fundamental challenges such as noisy computations and strict constraints on input and output quantization. Because of these constraints and imprecisions, off-the-shelf LLMs are not able to achieve 4-bit-level performance when deployed on AIMC-based hardware. While prior work has investigated closing this accuracy gap on small, mostly vision-based models, a generic method applicable to LLMs pre-trained on trillions of tokens does not yet exist. In this work, we introduce a general and scalable method to robustly adapt LLMs for execution on noisy, low-precision analog hardware. Our approach enables state-of-the-art models, including Phi-3-mini-4k-instruct and Llama-3.2-1B-Instruct, to retain performance comparable to 4-bit weight, 8-bit activation baselines despite the presence of analog noise and quantization constraints. Additionally, we show that as a byproduct of our training methodology, analog foundation models can be quantized for inference on low-precision digital hardware. Finally, we show that our models also benefit from test-time compute scaling, exhibiting better scaling behavior than models trained with 4-bit weight and 8-bit static input quantization. Our work bridges the gap between high-capacity LLMs and efficient analog hardware, offering a path toward energy-efficient foundation models. Code is available at github.com/IBM/analog-foundation-models.

    (An illustrative sketch of these hardware constraints appears after the author list below.)

    Authors: Iason Chalas (IBM), Giovanni Acampa (IBM), An Chen (IBM), Omobayode Fagbohungbe (IBM), Sidney Tsai (IBM), and 4 more.
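
    The abstract above centers on two hardware constraints, low-precision weights and strictly quantized inputs, plus analog noise. Below is a minimal, hypothetical PyTorch sketch of that style of hardware-aware training: activations are fake-quantized to 8 bits, weights to 4 bits, and Gaussian weight noise is injected on every forward pass. The layer name, noise model, and constants here are assumptions made for illustration, not the authors' method; their actual code lives at github.com/IBM/analog-foundation-models.

        import torch
        import torch.nn as nn

        def fake_quantize(x: torch.Tensor, bits: int) -> torch.Tensor:
            """Uniform symmetric fake quantization with a straight-through gradient."""
            qmax = 2 ** (bits - 1) - 1
            scale = x.abs().max().clamp(min=1e-8) / qmax
            x_q = (x / scale).round().clamp(-qmax, qmax) * scale
            return x + (x_q - x).detach()  # forward: quantized values; backward: identity

        class NoisyAnalogLinear(nn.Module):
            """Linear layer mimicking analog constraints: 8-bit inputs, 4-bit
            weights, and additive Gaussian weight noise resampled each forward."""

            def __init__(self, in_features: int, out_features: int, noise_std: float = 0.02):
                super().__init__()
                self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
                self.bias = nn.Parameter(torch.zeros(out_features))
                self.noise_std = noise_std

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                x = fake_quantize(x, bits=8)            # strict input quantization
                w = fake_quantize(self.weight, bits=4)  # low-precision analog weights
                if self.training:
                    # Model analog imprecision as noise scaled to the weight range.
                    w = w + torch.randn_like(w) * self.noise_std * w.abs().max()
                return nn.functional.linear(x, w, self.bias)

        # Usage: swap such a layer in for nn.Linear while adapting a pre-trained
        # model, so it learns to tolerate the noise and quantization it will
        # encounter on analog hardware.
        layer = NoisyAnalogLinear(16, 4)
        out = layer(torch.randn(2, 16))

    Training with noise resampled on every forward pass, rather than a fixed perturbation, is what encourages robustness to the stochastic behavior of analog devices.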

Upcoming events