Learning Lifted Operator Models with Logical Neural Networks
Don Joven Ravoy Agravante, Michiaki Tatsubori
JSAI 2022
The property of mixed selectivity has been discussed at a computational level and offers a strategy to maximize computational power by adding versatility to the functional role of each neuron. Here, we offer a biologically grounded implementational-level mechanistic explanation for mixed selectivity in neural circuits. We define pure, linear, and nonlinear mixed selectivity and discuss how these response properties can be obtained in simple neural circuits. Neurons that respond to multiple, statistically independent variables display mixed selectivity. If their activity can be expressed as a weighted sum, then they exhibit linear mixed selectivity; otherwise, they exhibit nonlinear mixed selectivity. Neural representations based on diverse nonlinear mixed selectivity are high dimensional; hence, they confer enormous flexibility to a simple downstream readout neural circuit. However, a simple neural circuit cannot possibly encode all possible mixtures of variables simultaneously, as this would require a combinatorially large number of mixed selectivity neurons. Gating mechanisms like oscillations and neuromodulation can solve this problem by dynamically selecting which variables are mixed and transmitted to the readout.
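The distinction drawn in the abstract between linear and nonlinear mixed selectivity can be illustrated numerically. The sketch below is a hypothetical toy example (not from the paper): two independent binary variables, one purely selective response, one weighted-sum (linear mixed) response, and one thresholded (nonlinear mixed) response; the matrix rank of the population responses across the four conditions shows that only the nonlinear mixture raises the dimensionality of the representation.

```python
import numpy as np

# Hypothetical demo: two statistically independent binary task
# variables a, b; the four rows are all condition combinations.
conditions = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
a, b = conditions[:, 0], conditions[:, 1]

pure_a = a                      # pure selectivity: responds to a alone
linear_mix = 0.7 * a + 0.3 * b  # linear mixed: weighted sum of a and b
nonlinear_mix = np.maximum(0.7 * a + 0.3 * b - 0.8, 0.0)  # thresholded (ReLU)

# Stack population responses per condition; matrix rank measures the
# dimensionality of the neural representation over the four conditions.
lin_pop = np.column_stack([a, b, linear_mix])
nonlin_pop = np.column_stack([a, b, nonlinear_mix])
print(np.linalg.matrix_rank(lin_pop))     # 2: a weighted sum adds no dimension
print(np.linalg.matrix_rank(nonlin_pop))  # 3: nonlinear mixing adds one
```

The weighted-sum neuron lies in the span of the two input variables, so the population rank stays at 2, while the thresholded neuron's response to the (1, 1) condition cannot be written as any weighted sum of a and b, lifting the rank to 3 and giving a downstream linear readout more separable patterns to work with.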
Michael Hersche, Francesco Di Stefano, et al.
NeurIPS 2023
Minori Narita, Daiki Kimura
IJCAI 2023
Mourad Baïou, Francisco Barahona
Discrete Applied Mathematics