Learning and total evidence with imprecise probabilities

Published in International Journal of Approximate Reasoning, 2022

Link (IJAR)

In dynamic learning, a rational agent must revise their credence about a question of interest in accordance with the total evidence acquired between the earlier and later times. We discuss situations in which an observable event F that is sufficient for the total evidence can be identified, yet its probabilistic modeling cannot be performed in a precise manner. The agent may employ imprecise probability (IP) models of reasoning to account for the identified sufficient event, and update their credence or make sequential decisions accordingly. Our proposal is illustrated with four case studies: the classic Monty Hall problem, statistical inference with non-ignorable missing data, frequentist hypothesis testing, and the use of forward induction in a two-person sequential game.
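In the Monty Hall case study, the total evidence is the event "Monty opened this particular door," not merely "the car is not behind that door"; conditioning on the former requires a model of Monty's door-opening protocol. The sketch below (illustrative only, not from the paper) simulates the standard precise assumption that Monty chooses uniformly among the doors he may open, under which switching wins with probability 2/3:

```python
import random

def simulate_monty_hall(trials=100_000, switch=True, seed=0):
    """Estimate the win rate of the stay/switch strategies by simulation,
    assuming Monty opens a goat door chosen uniformly at random."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)   # door hiding the car
        pick = rng.randrange(3)  # contestant's initial pick
        # Monty opens a door that is neither the pick nor the car;
        # if pick == car, he has two choices and picks one at random.
        opened = rng.choice([d for d in (0, 1, 2) if d != pick and d != car])
        if switch:
            # switch to the one remaining unopened door
            pick = next(d for d in (0, 1, 2) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials
```

The paper's point is that when this protocol is not precisely known (e.g. Monty's tie-breaking rule is unspecified), the conditioning step calls for an imprecise model rather than a single precise one.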

This paper is an extended version of an ISIPTA'21 conference paper entitled Total evidence and learning with imprecise probabilities, PMLR 147:161–168.