Fairness, Accountability, Transparency
Biases can lead to systematic disadvantages for marginalized individuals and groups — and they can arise at any point in the AI development lifecycle. To make high-risk AI systems more accountable, we're developing technologies that improve their end-to-end transparency and fairness.
Our work
- IBM reaffirms its commitment to the Rome Call for AI ethics (News, Mike Murphy)
- What is red teaming for generative AI? (Explainer, Kim Martineau)
- The latest AI safety method is a throwback to our maritime past (Research, Kim Martineau)
- What is AI alignment? (Explainer, Kim Martineau)
- IBM’s Stacy Hobson wants to build tech that works for everyone (Research, Kim Martineau)
- What is prompt-tuning? (News, Kim Martineau)
- See more of our work on Fairness, Accountability, Transparency
Projects
Accelerator Technologies
We're developing technologies that support subject matter experts in their scientific workflows by enabling human-AI co-creation.
Publications
- Qinyi Chen, Jason Cheuk Nam Liang, et al. (2024). NeurIPS 2024.
- Brooklyn Sheppard, Anna Richter, et al. (2024). ACL 2024.
- Victor Akinwande, Megan Macgregor, et al. (2024). IJCAI 2024.
- Farhad Mohsin, Qishen Han, et al. (2024). IJCAI 2024.
- Apoorva Nitsure, Youssef Mroueh, et al. (2024). ICML 2024.
- Yuya Jeremy Ong, Jay Pankaj Gala, et al. (2024). IEEE CISOSE 2024.