Strengthening Validation Frameworks in Dynamic Microsimulation: Evidence from SimPaths

Mariia Vartuzova (University of Essex), "Strengthening Validation Frameworks in Dynamic Microsimulation: Evidence from SimPaths" (joint work with Matteo Richardi and Rejoice Frimpong)
July 1, 2026, time and venue TBC
Conference presentation

Dynamic microsimulation models such as SimPaths are increasingly used to evaluate long-term policy impacts by generating synthetic trajectories for individuals and households. Their credibility, however, depends on rigorous validation: demonstrating that simulated outcomes reliably reproduce observed data. Despite their growing role in policy analysis, validation practices remain fragmented and only partially automated (e.g., O'Donoghue et al., 2015; Gosseries & Van der Heyden, 2018). This paper presents ongoing work on strengthening validation frameworks in SimPaths, with a focus on discriminator-based methods and econometric consistency checks. First, we train classifiers (e.g., Gradient Boosted Machines) to distinguish simulated from survey data. Discriminator accuracy provides an interpretable quantitative score of similarity: the closer performance is to random guessing (50% for balanced classes), the more realistic the simulated data. Second, we re-estimate key behavioural regressions on simulated data and assess parameter recovery. This helps identify whether discrepancies arise from implementation issues, estimation limitations, or structural differences between datasets. By combining machine-learning discriminators with regression-based diagnostics, the paper contributes to more automated, transparent, and reproducible validation practices for complex microsimulation models.
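The discriminator check described above can be sketched as follows. This is a minimal illustration, not the SimPaths implementation: the two datasets are stand-in random draws, and scikit-learn's `GradientBoostingClassifier` with 5-fold cross-validation is one plausible choice of classifier and evaluation protocol.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-ins for survey and simulated records sharing the same covariates;
# the small mean shift mimics a mild discrepancy between the two sources.
survey = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
simulated = rng.normal(loc=0.1, scale=1.0, size=(500, 4))

# Pool the records and label their origin: 0 = survey, 1 = simulated.
X = np.vstack([survey, simulated])
y = np.concatenate([np.zeros(500), np.ones(500)])

# Cross-validated accuracy of the discriminator; values near 0.5 mean the
# classifier cannot tell the datasets apart, i.e. the simulation is realistic.
clf = GradientBoostingClassifier(random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(f"discriminator accuracy: {acc:.3f}")
```

In practice the same score can be tracked across model versions, turning "how realistic is the synthetic population?" into a single automatable metric.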
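The parameter-recovery check can likewise be sketched in a few lines. The example below is hypothetical: data are generated from a linear behavioural equation with known coefficients (`true_beta`), the regression is re-estimated on that simulated output by ordinary least squares, and the estimated coefficients are compared with the truth. Large gaps would point to implementation or estimation problems rather than genuine structural differences.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "true" behavioural parameters used to generate the data.
true_beta = np.array([0.5, -1.2, 2.0])

# Simulated dataset produced by the model under the true parameters.
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ true_beta + rng.normal(scale=0.5, size=n)

# Re-estimate the behavioural regression on the simulated output (OLS).
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Parameter recovery: how far do the re-estimated coefficients drift?
gap = np.abs(beta_hat - true_beta)
print("recovered:", np.round(beta_hat, 3), "max abs gap:", round(gap.max(), 3))
```

With a correctly implemented model and an adequate sample, the recovered coefficients should sit within sampling error of the inputs; systematic drift flags a consistency problem.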