Replication and Extension of a Forecasting Decision Support System: An Empirical Examination of the Time Series Complexity Scoring Technique
Association for Information Systems
AIS Transactions on Replication Research
This study presents a conceptual replication of Adya and Lusk's (2016) forecasting decision support system (FDSS) that assesses the complexity or simplicity of a time series. Prior studies in forecasting have argued convincingly that the design of FDSS should incorporate the complexity of the forecasting task. Yet no formal way of determining time series complexity existed until this FDSS, referred to as the Complexity Scoring Technique (CST), was introduced. The CST uses characteristics of the time series to trigger 12 rules that score the complexity of a time series and classify it as either Simple or Complex. The CST was originally validated using statistical forecasts of a small set of 54 time series, as well as judgmental forecasts from 14 representative participants, to confirm that the FDSS successfully distinguished Simple series from Complex ones. In this study, we (a) replicate the CST on a much larger set of data from both statistical and judgmental forecasting methods, and (b) extend and validate the classification scheme from the binary Simple-Complex of the original CST to four ordinal categories (Very Simple, Simple, Complex, and Very Complex), thereby adding gradations between the two original binary designations. Findings suggest that both the replication and the extension further validate the CST, thereby greatly enhancing its usefulness in the practice of forecasting. Implications for research and practice are discussed.
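To make the mechanism concrete, a rule-based complexity scorer of the kind the abstract describes can be sketched as follows. This is an illustrative sketch only: the actual 12 rules and cutoffs are specified in Adya and Lusk (2016) and are not reproduced here, so the three rules, their thresholds, and the score-to-category mapping below are hypothetical stand-ins.

```python
# Illustrative sketch of a rule-based time series complexity scorer.
# The rules and thresholds below are invented for exposition and are
# NOT the 12 CST rules from Adya and Lusk (2016).
from statistics import mean, stdev

def complexity_score(series):
    """Return an illustrative complexity score: +1 per triggered rule."""
    score = 0
    n = len(series)
    diffs = [b - a for a, b in zip(series, series[1:])]
    # Hypothetical rule 1: frequent direction changes suggest irregularity.
    sign_changes = sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0)
    if sign_changes > n / 3:
        score += 1
    # Hypothetical rule 2: presence of outliers beyond 2 standard deviations.
    m, s = mean(series), stdev(series)
    if s > 0 and any(abs(x - m) > 2 * s for x in series):
        score += 1
    # Hypothetical rule 3: high relative variability (coefficient of variation).
    if m != 0 and s / abs(m) > 0.5:
        score += 1
    return score

def classify(score):
    """Map a rule score onto the extended four-level ordinal scale."""
    labels = ["Very Simple", "Simple", "Complex", "Very Complex"]
    return labels[min(score, 3)]
```

In this toy version, each triggered rule raises the score, and the score is then binned into one of the four ordinal categories; the original CST works analogously but with its own validated rules, weights, and cut points.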