Publication
Automatica
Paper

Recovering Markov models from closed-loop data

Abstract

Situations in which recommender systems are used to augment decision making are becoming prevalent in many application domains. Almost always, these prediction tools (recommenders) are created with a view to effecting behavioural change. Clearly, successful applications that actuate behavioural change affect the original model underpinning the predictor, leading to an inconsistency. This feedback loop is often not accounted for by standard prediction tools that rely upon machine learning/statistical learning machinery. The objective of this paper is to develop tools that recover unbiased user models in the presence of recommenders. More specifically, we assume that we observe a time series which is a trajectory of a Markov chain R modulated by another Markov chain S; i.e., the transition matrix of R is unknown and depends on the current state of S. The transition matrix of the latter is also unknown. In other words, at each time instant, S selects a transition matrix for R from a given set consisting of known and unknown matrices. The state of S, in turn, depends on the current state of R, thus introducing a feedback loop. We propose an Expectation–Maximisation (EM) type algorithm that estimates the transition matrices of S and R. Experimental results are given to demonstrate the efficacy of the approach.
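
The closed-loop structure described above can be made concrete with a small simulation. The sketch below is only an illustration, not the paper's estimator: the two-state chains, the example matrices P_R and P_S, and the timing convention are all hypothetical assumptions. It shows how the current state of S selects the transition matrix used by R, while the current state of R selects the transition matrix used by S, creating the feedback loop.

import numpy as np

# Illustrative simulation of the closed-loop model from the abstract:
# R is a Markov chain whose transition matrix is chosen by the current state
# of S, while the transitions of S depend on the current state of R.
# All matrices and the update order below are hypothetical examples.

rng = np.random.default_rng(0)

# Candidate transition matrices for R, indexed by the current state of S.
P_R = np.array([
    [[0.9, 0.1],   # rows of R's transition matrix when S = 0
     [0.2, 0.8]],
    [[0.5, 0.5],   # rows of R's transition matrix when S = 1
     [0.5, 0.5]],
])

# Transition matrices for S, indexed by the current state of R (feedback).
P_S = np.array([
    [[0.7, 0.3],   # rows of S's transition matrix when R = 0
     [0.3, 0.7]],
    [[0.4, 0.6],   # rows of S's transition matrix when R = 1
     [0.6, 0.4]],
])

def simulate(T, r0=0, s0=0):
    """Generate a length-T trajectory of the coupled pair (R, S)."""
    r, s = r0, s0
    R_traj, S_traj = [r], [s]
    for _ in range(T - 1):
        r_next = rng.choice(2, p=P_R[s, r])  # S modulates R
        s_next = rng.choice(2, p=P_S[r, s])  # R feeds back into S
        r, s = r_next, s_next
        R_traj.append(r)
        S_traj.append(s)
    return np.array(R_traj), np.array(S_traj)

R_traj, S_traj = simulate(10_000)

# Sanity check: empirical transition frequencies of R while S = 0 should
# approximate P_R[0]; this is possible here only because the simulation
# exposes the state of S.
counts = np.zeros((2, 2))
for t in range(len(R_traj) - 1):
    if S_traj[t] == 0:
        counts[R_traj[t], R_traj[t + 1]] += 1
print(counts / counts.sum(axis=1, keepdims=True))

In the problem treated by the paper the modulating chain is not available to the estimator in this way, and both transition matrices are unknown, which is what motivates the EM-type algorithm mentioned in the abstract.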
