The effects of feedback on judgmental interval predictions

Fergus Bolger*, Dilek Önkal-Atay

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

24 Citations (Scopus)

Abstract

The majority of studies of probability judgment have found that judgments tend to be overconfident and that the degree of overconfidence is greater the more difficult the task. Further, these effects have been resistant to attempts to 'debias' via feedback. We propose that under favourable conditions, provision of appropriate feedback should lead to significant improvements in calibration, and the current study aims to demonstrate this effect. To this end, participants first specified ranges within which the true values of time series would fall with a given probability. After receiving feedback, forecasters constructed intervals for new series, changing their probability values if desired. The series varied systematically in terms of their characteristics including amount of noise, presentation scale, and existence of trend. Results show that forecasts were initially overconfident but improved significantly after feedback. Further, this improvement was not simply due to 'hedging', i.e. shifting to very high probability estimates and extremely wide intervals; rather, it seems that calibration improvement was chiefly obtained by forecasters learning to evaluate the extent of the noise in the series.
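The calibration notion at the centre of the abstract can be made concrete with a small sketch: a forecaster is well calibrated when the proportion of true values that fall inside their stated intervals (the hit rate) matches the probability they assigned to those intervals, and overconfident when the stated probability exceeds the hit rate. The function names and all data below are hypothetical illustrations, not the authors' scoring procedure.

```python
def hit_rate(intervals, outcomes):
    """Fraction of true outcomes falling inside the stated intervals."""
    hits = sum(lo <= y <= hi for (lo, hi), y in zip(intervals, outcomes))
    return hits / len(outcomes)

def overconfidence(nominal_prob, intervals, outcomes):
    """Positive values mean the stated probability exceeds the hit rate."""
    return nominal_prob - hit_rate(intervals, outcomes)

# Hypothetical example: ten nominal 90% intervals that in fact
# capture only 7 of 10 true outcomes, i.e. 0.2 overconfidence.
intervals = [(10, 20), (12, 18), (9, 15), (11, 19), (8, 14),
             (13, 21), (10, 16), (12, 20), (7, 13), (11, 17)]
outcomes = [15, 17, 22, 12, 9, 25, 11, 30, 10, 14]

print(hit_rate(intervals, outcomes))            # 0.7
print(overconfidence(0.9, intervals, outcomes))  # 0.2 (approx.)
```

On this view, the study's feedback manipulation amounts to showing forecasters their hit rate so that it can be brought into line with the nominal probability, for example by widening intervals on noisy series.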

Original language: English
Pages (from-to): 29-39
Number of pages: 11
Journal: International Journal of Forecasting
Volume: 20
Issue number: 1
Early online date: 28 Jan 2004
DOIs
Publication status: Published - Jan 2004
Externally published: Yes
