Book of Abstracts

A Step Beyond Microsimulation: Agent-Based Modelling of the English Housing Market
July 2, 2026, 9:00 am Auditorium 1 Plenary
Keynote  •  Agent-based modeling, Housing
Housing markets are very important in modern societies because of their effect on households’ ability to find suitable accommodation at an affordable price and because they lock in huge amounts of wealth, often in a way that is highly unequal. As a result, in many countries, and specifically in England, housing policy is a highly contentious and difficult issue. In this presentation, I will consider how one might model the English housing market, from simple statistical approaches, through microsimulation, to agent-based modelling, and illustrate the latter with a description of an agent-based model that has been developed over the last two decades and now incorporates owner-occupation, the rental sector, social housing and buy-to-lets. The model allows testing the implications for market prices and rents of a range of actual and proposed policies, such as changing the basis of property ‘council’ taxes, a ‘mansion’ tax on expensive properties, and transaction taxes such as the English stamp duty land tax. I will comment on the advantages of using an agent-based modelling approach, but also on the problems and difficulties we had to overcome to obtain a working and validated model, and suggest avenues for future development.
Living with High Inflation: The Distributional Impact of the Cost of Living Crisis in Türkiye
Cathal O’Donoghue  ( University of Galway )  —  “Living with High Inflation: The Distributional Impact of the Cost of Living Crisis in Türkiye”  (joint work with: Zeynep Gizem Can)
July 1, 2026, 1:15 pm Auditorium 1 Plenary
Keynote  •  Inflation, Income distribution
Türkiye experienced the highest inflation in the OECD during the cost of living crisis of the early-to-mid 2020s. While the European Union inflation rate was 9.2% in 2022, declining to 6.4% in 2023 and 2.6% in 2025 (Eurostat), year-on-year inflation in Türkiye peaked at 85% in October 2022, with annual inflation remaining above 65% at the end of 2023 before dipping to about 45% at the end of 2024 (Turkstat). Such large price changes affect the income distribution in many ways. In this presentation, we describe a portfolio of research that has employed microsimulation-based decomposition methods to disentangle the impact of large macro-economic changes on inequality. The research begins by describing the historical macro-economic volatility that Türkiye has experienced. Using the new ARIA microsimulation model, we undertake a variety of analyses focusing on different dimensions. We begin by examining the distribution of price changes before the crisis and after the peak of the crisis in 2022. We then explore the policy response in terms of the poverty-reduction effectiveness and poverty-gap efficiency of social transfers, which, as in an archetypal Southern European welfare state, mainly focus on pension-age work-replacement benefits. With a progressive income tax system, we explore the nature of fiscal drag within the system during this period. We contrast it with the impact of price changes on the regressive indirect tax system. With data from before the crisis and at its peak, we employ a unique decomposition of the consumption and savings response during the crisis, emphasising in particular the differential savings response and the importance of durables as a hedge against inflation for high-income households on the one hand, and the prioritisation of necessities by low-income households on the other.
Furthermore, we explore the inequality-increasing nature of the labour market, where some sectors have been resilient to price inflation in terms of wage growth while others have not. A key conclusion is that the distributional impact of price changes is greater when behavioural responses are considered than suggested by the literature that focuses on pre-behavioural responses. As a result, consumption patterns have a greater impact than income changes.
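The fiscal-drag mechanism mentioned above can be illustrated with a toy two-bracket tax schedule: when bracket thresholds are frozen in nominal terms, a wage that merely keeps pace with inflation is pushed into the higher bracket and faces a rising average tax rate. All parameters below (threshold, rates, incomes) are illustrative, not the Turkish schedule.

```python
def average_tax_rate(income, threshold=10_000.0, low=0.15, high=0.30):
    """Two-bracket tax: rate `low` up to `threshold`, `high` above it.
    Purely illustrative parameters, not the Turkish tax schedule."""
    tax = low * min(income, threshold) + high * max(income - threshold, 0.0)
    return tax / income

# A wage that merely keeps pace with 85% year-on-year inflation...
before = average_tax_rate(12_000.0)
after = average_tax_rate(12_000.0 * 1.85)  # nominal income after inflation

# ...faces a higher average rate because the threshold is frozen in nominal terms.
assert after > before
```

The real purchasing power of the wage is unchanged, yet the average tax rate rises; this is the "fiscal drag" the abstract refers to.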
Creating and Utilising Synthetic Population Data: Examples, Innovations and Pitfalls
July 1, 2026, 9:00 am P02 Workshop on synthetic data
Workshop  •  Synthetic data
Through their ability to fill important data gaps, synthetic populations have become well-established resources in research spanning a wide range of disciplines aligned with population geography. By providing readily available data on key life domains and for entire populations, the role of synthetic populations is ever growing – for example within the context of modelling policy questions around public budgets, urban planning, climate mitigation, or health inequalities. Nevertheless, many approaches to creating synthetic populations present important limitations, impacting their robustness and utility for policy and research. In this talk I will outline the creation and utility of synthetic population data, covering important innovations such as the nesting of household- and individual-level structures, validation, approaches to sharing datasets, and undertaking applied research based on these datasets.
Data Without Barriers: Synthetic Data as a Catalyst for Responsible Innovation
July 1, 2026, 9:00 am P02 Workshop on synthetic data
Workshop  •  Synthetic data
The ability to access and use high-quality data is becoming a key enabler, and bottleneck, for innovation across AI and digital systems. Yet privacy constraints, regulation, and data scarcity continue to limit what organizations and researchers can do. Synthetic data generation is increasingly emerging as a powerful ingredient for enabling responsible, inclusive, and scalable data-driven innovation. In this talk, I’ll introduce a broader vision for data democratization, with synthetic data playing a central role. I’ll walk through how generative AI models can be used to synthesize rich, realistic tabular datasets, and how these can be safely shared and applied across a wide range of use cases, from AI model development and testing to fairness research, simulation, and beyond. The session will include a live walkthrough of open-source tools, showcasing how accessible and practical synthetic data generation can be today.
Digital twins: challenges, pitfalls, and opportunities
Ralf Münnich  ( University of Trier )  —  “Digital twins: challenges, pitfalls, and opportunities”
July 1, 2026, 9:00 am P02 Workshop on synthetic data
Workshop  •  Synthetic data
Li and O’Donoghue (2013) emphasised that microsimulation covers two areas: microsimulation per se, in terms of what-if questions, and synthetic data generation as an important basis for performing microsimulations. More and more methods, such as data fusion of different surveys, prediction methods, and modern ML approaches, are being applied. However, modelling strategies need to be adjusted accordingly, in particular depending on whether applications are cross-sectional or longitudinal. Further, increasing attention is being paid to the granularity of the modelling. All in all, little attention is paid to the accuracy of the generated data or to the assumptions and implicit decisions of the developers of microsimulation models. The presentation focuses on different aspects of synthetic data generation and so-called digital twins. Special attention will be paid to temporal and regional granularity as well as to unobserved heterogeneities of the simulations, including uncertainties of the entire modelling process. Additionally, specific data situations and disclosure limitations will be addressed.
Enhanced data fusion and anonymization for microsimulation systems
Cédric Heuchenne  ( CAPE - UCLouvain Saint-Louis )  —  “Enhanced data fusion and anonymization for microsimulation systems”
July 1, 2026, 9:00 am P02 Workshop on synthetic data
Workshop  •  Synthetic data
The fusion and anonymization of multiple heterogeneous data sources remain major challenges in applied statistics. In this work, we consider the joint use of demographic and fiscal census data together with several sample surveys. The objective is to integrate these sources in order to obtain a coherent representation of the overall population and to enable the evaluation of policy changes, such as reforms of the fiscal system, while ensuring that the resulting data are fully synthetic and thus completely anonymized. We present a modeling framework for merging data sets that share the same type of statistical units (e.g., households), and we show how this framework can be enhanced by incorporating information from data sets defined on different units (e.g., individuals). We also address the issue of harmonizing surveys that rely on distinct sampling designs. The proposed approach leads to a fully anonymized synthetic data set that preserves the main statistical properties of the original data and can be directly used for analysis by end users.
Generating Synthetic Populations for Transportation: A Variational Autoencoder Approach
Pierre-Olivier Vandanjon  ( Université Gustave Eiffel )  —  “Generating Synthetic Populations for Transportation: A Variational Autoencoder Approach”  (joint work with: Abdoul Razac Sané; Pierre Hankach; Rachid Belaroussi; Pascal Gastineau)
July 1, 2026, 9:00 am P02 Workshop on synthetic data
Workshop  •  Synthetic data
Synthetic populations are commonly used in transportation analysis to feed traffic simulators. Recently, they have also been used to assess the sensitivity of a territory to factors such as construction noise. However, traditional methods for generating synthetic populations, such as Iterative Proportional Fitting (IPF), based on sampling and calibration to aggregated data, have limitations: they can only generate individuals similar to those in the initial sample. Machine learning and statistical learning methods, such as Variational Autoencoders (VAEs), offer a promising alternative. VAEs have already demonstrated their effectiveness in generating realistic images. We present here how to use VAEs to generate synthetic populations, allowing for more varied representations of a territory.
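The baseline IPF technique mentioned above alternately rescales a seed contingency table (drawn from a sample) until its margins match census totals; this is what ties the result to the initial sample. A minimal sketch with illustrative numbers (the seed counts and marginals below are invented for demonstration):

```python
import numpy as np

def ipf(seed, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Iterative Proportional Fitting: rescale a seed contingency table
    until its row and column sums match the target marginals."""
    table = seed.astype(float).copy()
    for _ in range(max_iter):
        # Scale rows to hit the row marginals.
        table *= (row_targets / table.sum(axis=1))[:, None]
        # Scale columns to hit the column marginals.
        table *= col_targets / table.sum(axis=0)
        if np.allclose(table.sum(axis=1), row_targets, atol=tol):
            break
    return table

# Seed: joint counts of (age group x household size) from a small sample.
seed = np.array([[10.0, 5.0], [3.0, 12.0]])
rows = np.array([120.0, 80.0])   # census age-group totals
cols = np.array([90.0, 110.0])   # census household-size totals
fitted = ipf(seed, rows, cols)
```

The fitted table matches both marginals, but any cell that is zero (or absent) in the seed stays zero, which is exactly the limitation a generative model such as a VAE aims to overcome.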
The impact of in-work conditionality of Universal Credit on benefit take-up and employment
Ashley Burdett  ( Centre for Microsimulation and Policy Analysis, University of Essex )  —  “The impact of in-work conditionality of Universal Credit on benefit take-up and employment”  (joint work with: Matteo Richiardi)
July 1, 2026, 0:00 am TBC TBC
Conference presentation  •  Labour supply, Work conditions, Behavioral models
Universal Credit (UC) is the main means-tested benefit in the UK welfare system, supporting low-income individuals and families. UC replaced multiple benefits with a single payment, while introducing strict job search requirements and in-work conditionality. Individuals who are not working and are deemed capable of work are usually required, among other things, to actively look for a job, while claimants who are working but earning below a threshold are required to take steps to increase their earnings, including looking for alternative jobs and increasing work hours. Failure to comply can result in benefit sanctions. Research shows that UC conditionality can have detrimental effects on individual well-being and mental health, while evidence of its employment effects is mixed. In this study, we jointly model take-up behaviour and labour supply decisions through the lens of a structural random utility model. Individuals anticipate that receiving UC negatively affects their well-being, and job search requirements may reduce the utility they derive from income and leisure. As a result, they might choose not to take up UC even if they are eligible, and modify their labour supply accordingly. In this paper, we compare baseline simulations, based on the estimated parameters, with counterfactual simulations in which the effects of conditionality are muted or removed. This allows us to quantify the impact of conditionality on a number of outcomes of interest, including benefit take-up and employment.
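The take-up margin in a random utility model can be sketched as follows: a fixed disutility of claiming (stigma or the conditionality burden) makes an eligible part-time worker forgo a small UC top-up, while a jobless person still claims. This is a stylised illustration only; the utility form, wage, benefit amount, taper and stigma parameter are all invented, not the authors' estimated model.

```python
import math

def utility(consumption, leisure, claims_uc, stigma=0.8):
    """Stylised utility: log consumption plus valued leisure, minus a
    fixed disutility of claiming UC. All parameters are illustrative."""
    return math.log(consumption) + 1.5 * math.log(leisure) - (stigma if claims_uc else 0.0)

def chooses_to_claim(hours, wage=8.0, uc_amount=60.0, taper=0.55, stigma=0.8):
    """Compare utility with and without claiming at a given weekly hours choice."""
    earnings = wage * hours
    uc = max(uc_amount - taper * earnings, 0.0)  # benefit tapered against earnings
    leisure = 80 - hours
    u_claim = utility(earnings + uc, leisure, True, stigma)
    u_no_claim = utility(max(earnings, 1e-9), leisure, False, stigma)
    return u_claim > u_no_claim

jobless_claims = chooses_to_claim(0)      # large gain from claiming outweighs stigma
part_timer_claims = chooses_to_claim(10)  # small tapered top-up does not
```

With these numbers the jobless individual claims but the part-time worker, though eligible for a positive top-up, does not: eligibility and take-up diverge, which is the behavioural margin the study models.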
A Novel Weighting-Based Approach to Cohort Replenishment in Dynamic Microsimulations
Michal Kvasnička  ( Masaryk University )  —  “A Novel Weighting-Based Approach to Cohort Replenishment in Dynamic Microsimulations”  (joint work with: Andrea Piano Mortari and Federico Belotti)
July 1, 2026, 0:00 am TBC TBC
Conference presentation  •  Validation & methods, Aging & demographics
We propose a new method for generating replenishment cohorts in dynamic microsimulation models. Standard dynamic microsimulations project the future states of an initial population through the recursive application of one-step-ahead predictions. Over time, sample size declines due to attrition (e.g., mortality), and without the integration of new individuals, the projected population progressively departs from the target population structure. To preserve representativeness, replenishment cohorts must therefore be introduced at each simulation step. Cohort replenishment is challenging because it must simultaneously (i) reflect secular trends in individual characteristics (e.g., declining smoking prevalence) and (ii) preserve the underlying correlation structure among these characteristics (e.g., the relationship between smoking and lung cancer). Existing approaches, most notably the method used in the Future Elderly Model, address these challenges but are computationally intensive and algorithmically complex. We introduce an alternative algorithm that draws eligible donors from historical data while preserving their observed characteristics. Donor sampling weights are adjusted to match period-specific target prevalences using a procedure akin to entropy balancing (Hainmueller, 2012). The proposed method offers three key advantages: (i) efficiency, as it is substantially simpler to implement and less computationally demanding than existing approaches; (ii) accuracy, as it closely tracks specified feature trends given a sufficiently rich donor pool; and (iii) parsimony, as it avoids the need to explicitly specify trends for all variables. Instead, trends in non-targeted characteristics emerge endogenously from the imposed constraints, an especially valuable property in settings with limited longitudinal data. 
We evaluate the proposed method by comparing its performance with the benchmark approach in reproducing target prevalences, preserving the joint distribution of individual characteristics, and generating plausible trends for features that are not explicitly constrained.
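The core of a weighting scheme of this kind can be sketched for a single binary constraint: exponentially tilt uniform donor weights until the weighted prevalence hits the period-specific target, leaving all other donor characteristics (and hence their correlation structure) untouched. This is a minimal one-constraint sketch in the spirit of entropy balancing, not the authors' algorithm; the smoking prevalences are invented.

```python
import math

def tilt_weights(x, target_mean, tol=1e-10, max_iter=200):
    """Exponentially tilt uniform donor weights so the weighted mean of a
    binary feature x equals `target_mean` (one-constraint entropy-balancing
    style calibration), solving for the tilting parameter by bisection."""
    lo, hi = -50.0, 50.0
    for _ in range(max_iter):
        lam = 0.5 * (lo + hi)
        w = [math.exp(lam * xi) for xi in x]
        s = sum(w)
        mean = sum(wi * xi for wi, xi in zip(w, x)) / s
        if abs(mean - target_mean) < tol:
            break
        if mean < target_mean:
            lo = lam
        else:
            hi = lam
    return [wi / s for wi in w]  # normalised sampling weights

# Donor pool with 40% smokers historically; the replenishment cohort
# should reflect a declining trend, here a 25% target prevalence.
smoker = [1] * 40 + [0] * 60
w = tilt_weights(smoker, 0.25)
```

Donors are then sampled with these weights; because each donor's full record is kept intact, relationships such as smoking–lung cancer are preserved by construction, while non-targeted characteristics drift only as far as the imposed constraint pulls them.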
A venue-based population-wide individual-based microsimulation model for COVID-19 transmission
Astrid Sierens  ( Hasselt University (UHasselt) & Vrije Universiteit Brussel (VUB), Belgium )  —  “A venue-based population-wide individual-based microsimulation model for COVID-19 transmission”  (joint work with: Prof dr. Lander Willem (UAntwerpen), Prof dr. Pieter Libin (VUB), Prof dr. Niel Hens (UHasselt - UAntwerpen))
July 1, 2026, 0:00 am TBC TBC
Conference presentation  •  Validation & methods
Understanding infectious disease transmission requires insight into who interacts with whom, where and how these interactions take place, and under which conditions. While individual-based models (IBMs) allow interactions to be represented at the level of individuals, most models aggregate information on interaction partners (e.g. by age), without specifying where contacts occur, how they take place, or which individuals are co-present in the same setting. As a result, interactions outside households, schools or workplaces are commonly represented using aggregated community structures, and detailed mobility or venue-level contact data are rarely available. This poses a key challenge for microsimulation-based transmission modelling. We present a methodological extension of the STRIDE individual-based model (Willem et al., 2021) that introduces an explicit, population-wide representation of community venues. Starting from aggregated community interaction pools, individuals are assigned to specific venue types (such as shops, restaurants, and other social locations) using empirical time-use data. This venue-based decomposition makes it possible to explicitly represent where interactions occur at the population scale, without relying on detailed mobility trajectories. Such fine-grained representations are straightforward when modelling a single setting, but become substantially more challenging when extending to all venues across an entire population. The venue-based structure allows heterogeneous environmental characteristics to be incorporated at the setting level, including ventilation, occupancy, and exposure duration. Within this framework, we explicitly integrate multiple transmission pathways (inter alia, close-range droplet transmission and airborne transmission) within STRIDE. The contribution of each pathway depends on individual behaviour and venue-specific conditions. Most epidemiological models, including IBMs, focus on a single dominant transmission route.
By contrast, jointly modelling multiple pathways across all venues makes it possible to examine how these routes combine and interact to shape transmission at the population scale. A key challenge is the limited availability of venue-level data. To this end, we developed an algorithm to redistribute aggregated community contacts across venues based on occupancy sizes and the time individuals spend in each setting. Where time-use data were unavailable, venue attendance patterns were imputed using age-stratified contact information. Many additional venue-specific characteristics required for transmission modelling (inter alia, contact duration, proximity, and environmental parameters) were not directly observed. In such cases, we introduced assumptions, guided as much as possible by existing literature. By representing individuals and venues explicitly, the model can study superspreading caused by differences in contacts, infectiousness, and venue conditions. Because empirical data on the drivers of superspreading are limited, variability in key characteristics was introduced using mathematically defined distributions, allowing heterogeneity to be explored systematically. Our new model was applied to a computer-generated population of 600,000 virtual individuals designed to statistically mirror the Belgian population, enabling the simulation of intervention scenarios corresponding to Belgium’s first COVID-19 lockdown. Moreover, it allowed us to investigate a variety of what-if scenarios, including ventilation interventions. Overall, this work demonstrates how venue-based microsimulation with heterogeneous transmission and individual-level variability can enhance the realism and policy relevance of population-scale infectious disease models.
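The redistribution step described above can be sketched as a proportional allocation: an individual's aggregated community contacts are split across venue types in proportion to time spent there and typical venue occupancy. This is an illustrative sketch of the idea, not the STRIDE implementation, and the time-use hours and occupancy figures are invented.

```python
def redistribute_contacts(total_contacts, venues):
    """Split an individual's aggregated daily community contacts across
    venue types in proportion to (time spent x venue occupancy).
    `venues` maps venue type -> (hours spent, typical occupancy)."""
    weights = {v: hours * occupancy for v, (hours, occupancy) in venues.items()}
    total_weight = sum(weights.values())
    return {v: total_contacts * w / total_weight for v, w in weights.items()}

# Hypothetical daily time use (hours) and typical occupancy per venue type.
venues = {"shop": (1.0, 30), "restaurant": (2.0, 20), "gym": (1.0, 15)}
alloc = redistribute_contacts(10.0, venues)  # 10 aggregated community contacts
```

The allocation conserves the aggregated contact total while concentrating contacts in settings where an individual spends more time among more people, which is the property the venue-based decomposition relies on.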