Explainable AI
Explanations go a long way toward building trust in AI systems. We’re creating tools to help debug AI, where systems can explain what they’re doing. This includes training highly optimized, directly interpretable models, as well as explaining black-box models and visualizing the flow of information through neural networks.
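One common post-hoc, model-agnostic way to explain a black-box model is permutation feature importance: shuffle one feature at a time and measure how much the model's accuracy drops. As a hedged illustration (the toy `black_box` function and dataset below are invented for this sketch, not drawn from IBM's tooling), this is a minimal pure-Python version:

```python
# Minimal sketch of permutation feature importance, a model-agnostic
# explanation technique. The "black box" here is a toy scoring function;
# in practice it would be any trained model's predict().
import random

def black_box(x):
    # Toy model: depends strongly on feature 0, weakly on feature 1,
    # and ignores feature 2 entirely.
    return 1 if 2.0 * x[0] + 0.5 * x[1] > 1.0 else 0

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)  # break the link between feature j and the labels
            Xp = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
            drops.append(base - accuracy(model, Xp, y))
        importances.append(sum(drops) / n_repeats)
    return importances

# Tiny synthetic dataset labeled by the model itself.
rng = random.Random(1)
X = [[rng.random(), rng.random(), rng.random()] for _ in range(200)]
y = [black_box(x) for x in X]
imp = permutation_importance(black_box, X, y)
```

Shuffling the ignored feature leaves accuracy unchanged, so its importance is zero, while the dominant feature shows a large accuracy drop; that contrast is the explanation.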
Our work
- Teaching AI models to improve themselves (Research, Peter Hess)
- IBM and RPI researchers demystify in-context learning in large language models (News, Peter Hess)
- The latest AI safety method is a throwback to our maritime past (Research, Kim Martineau)
- Find and fix IT glitches before they crash the system (News, Kim Martineau)
- What is retrieval-augmented generation? (Explainer, Kim Martineau)
- Did an AI write that? If so, which one? Introducing the new field of AI forensics (Explainer, Kim Martineau)
Publications
Optimal Transport for Efficient, Unsupervised Anomaly Detection on Industrial Data
- Abigail Langbridge, Fearghal O'Donncha, et al.
- Big Data 2024

Future Workload and Cloud Resource Usage: Insights from an Interpretable Forecasting Model
- Big Data 2024

Final-Model-Only Data Attribution with a Unifying View of Gradient-Based Methods
- Dennis Wei, Inkit Padhi, et al.
- NeurIPS 2024

Global Area Sampling for Geospatial Foundation Model
- AGU 2024

Advanced Physics-AI Models for Rain Enhancement in Arid Regions
- Lloyd Treinish, Mukul Tewari, et al.
- AGU 2024

Advancing Applications of Remote Sensing for Detection of and Long-Term Monitoring of Harmful Algal Blooms (HABs)
- Lloyd Treinish, Vincent Moriarty
- AGU 2024