Conversational Intelligence
Overview
The Conversational Intelligence group at IBM Research - Brazil conducts state-of-the-art research aimed at constantly improving IBM’s Watson technology in three main areas: the understanding of human speech, the theoretical foundations of natural-language processing (NLP), and the processing and production of Brazilian languages (including Portuguese and indigenous languages) by machines. Our team has carried out pioneering work on the design and evaluation of conversational systems, the neuro-symbolic classification of the intent of human utterances, the use of large language models (LLMs) for speech tasks, social media analytics, and the processing of ultra-low-resource languages such as local indigenous languages.
Research topics
Conversational AI
The demand for virtual agents that can handle customer needs has continued to increase dramatically. At IBM Research, we’re building the next generation of artificial intelligence systems that can understand what’s being asked of them and how best to respond as efficiently as possible.
Human-Centered AI
AI systems are proliferating in everyday life, and it’s imperative to understand those systems from a human perspective. We design and investigate new forms of human-AI interactions and experiences that enhance and extend human capabilities for the good of our products, clients, and society at large.
Natural Language Processing
Much of the information that can help transform enterprises is locked away in text, such as documents, tables, and charts. We’re building advanced AI systems that can not only parse vast bodies of text to help unlock that data, but are also flexible enough to be applied to any language problem.
Speech
As more of the world moves online, the demand for systems that can understand users and speak to them in natural language is growing exponentially. We're working on next-generation AI that learns to decipher and replicate the way humans speak.
Foundation Models
Modern AI models that execute specific tasks in a single field are giving way to ones that learn more generally, and work across domains and problems. Foundation models, which are trained on large, unlabeled datasets and fine-tuned for an array of applications, are driving this shift.
Publications
- Andrea Brittomattos, Dário Augusto Borges Oliveira, et al. ICME 2018.
- Zvi Kons, Slava Shechtman, et al. SLT 2018.
- Gustavo Resende, Johnnatan Messias, et al. WWW 2019.
- CUI 2019.
- CHI 2019.
- Fabricio Barth, Heloisa Candello, et al. CUI 2020.
- Paulo Cavalin, Marisa Vasconcelos, et al. IJCNN 2020.