This paper analyzes the reduction in total labor costs induced by an increase in the minimum wage in France. Using the Ines microsimulation model developed by the French National Statistical Institute (Insee), I simulate a 2% increase in wages for all workers paid at the minimum wage. Due to the complex system of exemptions in the French socio-fiscal system, an increase in the minimum wage leads to a reduction in employers’ social security contributions (SSCs) for workers earning between 1.01 and 3.5 times the minimum wage. I confirm existing results from L’Horty (2000): overall, a 1% increase in the minimum wage reduces employers’ SSCs by approximately €1.67 billion.
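The mechanism can be illustrated with a stylized sketch (not the Ines model): assume employer SSC exemptions taper linearly from a maximum at the minimum wage (SMIC) down to zero at 3.5 times the SMIC. The base contribution rate, maximum exemption rate, and linear shape are hypothetical; only the exemption band comes from the abstract above.

```python
# Stylized sketch of the exemption mechanism (not the Ines model). The
# base rate, maximum exemption rate, and linear taper are hypothetical
# assumptions; only the 1-3.5x SMIC band comes from the text above.

def exemption_rate(wage, smic, max_rate=0.30, upper=3.5):
    """Share of the gross wage exempted from employer SSCs."""
    ratio = wage / smic
    if ratio <= 1.0:
        return max_rate                       # full exemption at the SMIC
    if ratio >= upper:
        return 0.0                            # no exemption above 3.5x SMIC
    return max_rate * (upper - ratio) / (upper - 1.0)

def employer_sscs(wage, smic, base_rate=0.45):
    """Employer SSCs due on a gross wage, net of the stylized exemption."""
    return wage * (base_rate - exemption_rate(wage, smic))

smic = 1_800.0                                # hypothetical monthly SMIC
wage = 2_500.0                                # a worker at about 1.4x SMIC
before = employer_sscs(wage, smic)
after = employer_sscs(wage, smic * 1.01)      # after a 1% SMIC increase
# The wage/SMIC ratio falls, the exemption rises, and employer SSCs drop:
assert after < before
```

Aggregating this wage-level effect over the whole earnings distribution is what the microsimulation performs.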
INFORM2 is a dynamic microsimulation model developed within the Department for Work and Pensions (DWP) for forecasting claim volumes that underpin the benefit expenditure forecast for Universal Credit. Development began in 2018 and builds on earlier iterations of the INFORM framework. The model simulates independent benefit units and individuals on a monthly time step, using Universal Credit (UC) administrative data as its core input.
INFORM2 outputs provide a detailed, benefit-unit-level simulation of the UC caseload and its composition for Great Britain over the medium term. Its design allows results to be broken down by UC eligibility rules, including age, health and carer status, family composition, housing costs, and work and earnings patterns. This detailed compositional information is crucial not only for accurate and consistent estimation of caseloads but also for estimating the average benefit amounts that complete the expenditure forecast.
The model integrates onflows synthesised from historical data across both UC and the six “Legacy” working-age benefits that UC replaces, simulating new claims, transitions from the Legacy system, and the complex flow dynamics at the margin of benefit entitlement and take-up. Internal transition and off-flow probabilities are handled using a combination of discrete probability matrices and logistic regressions, in addition to deterministic ageing rules, for example when claimants move to pension-age benefits.
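The discrete-probability-matrix (DPM) mechanism can be sketched as follows; the states and monthly probabilities below are hypothetical illustrations, not DWP estimates.

```python
import numpy as np

# Illustrative sketch of the discrete-probability-matrix (DPM) mechanism
# (states and probabilities are hypothetical, not DWP estimates): each
# month, a benefit unit's next state is drawn from the matrix row for its
# current state.
states = ["on_UC", "off_flow"]
dpm = np.array([
    [0.96, 0.04],   # on_UC    -> stays on UC / flows off
    [0.00, 1.00],   # off_flow -> absorbing in this toy example
])

rng = np.random.default_rng(0)
n = 10_000
state = np.zeros(n, dtype=int)               # everyone starts on UC

for month in range(12):                      # one year of monthly steps
    on = state == 0
    flows_off = rng.random(n) < dpm[0, 1]    # monthly off-flow probability
    state[on & flows_off] = 1

still_on = (state == 0).mean()               # close to 0.96 ** 12
```

In the real model each row would condition on claimant characteristics, which is why the matrices grow large and logistic regressions are used alongside them.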
A range of structural and data constraints shaped the development of the model. Although 2019 offered the first full year in which UC was fully rolled out to new claims, the system was still far from steady state: the Legacy stock remained substantial, transitions were immature, and these behavioural patterns risked biasing estimated probabilities. COVID-19 added further complexity by disrupting labour market dynamics to such an extent that the modelling data could not be robustly updated until 2022-23, when operational and claimant behaviour appeared to be stabilising. Even then, the increasing pace of Legacy-to-UC transitions, together with continuing economic and policy changes, produced new discontinuities that required substantial development work. For example, the earnings modelling was refined separately up to 2024-25 to account for substantial changes in earnings distributions and conditionality rules.
Several research and development strands are in progress. A major one, investigated through a co-sponsored PhD study with the Centre for Microsimulation and Policy Analysis at the University of Essex, is the incorporation of economic forecasts into INFORM2 modelling, via a new onflows model estimated on the link between UC onflow volumes and unemployment rates, benefit levels, and working-age population changes. Machine-learning techniques such as neural networks are also being explored as alternatives to the increasingly large discrete-probability-matrix structures used in the current off-flows module.
INFORM2 is a highly significant model in Government and is receiving increased scrutiny at all levels. This places a high value on the explainability of its outputs, which is challenging to balance against the appetite for accuracy and detail that a microsimulation model affords.
This paper studies labor market responses to tax policy using a structural labor supply model estimated within a Random Utility–Random Opportunity (RURO) framework. The RURO model represents labor supply as a choice among a finite set of work options, where individuals compare the utility of different employment and hours combinations given the opportunities available to them. Preferences are modeled in a random utility framework, while heterogeneity in job availability and constraints is captured through the opportunity structure. This allows the model to account for both choice behavior and limitations in feasible options. We combine this structural labor supply framework with a detailed microsimulation model based on Belgian administrative data, allowing for an accurate mapping from labor supply choices to disposable income and fiscal outcomes.
The main contribution of the paper is to extend the RURO labor supply model by incorporating labor demand elasticities. Standard applications of structural labor supply models implicitly assume perfectly elastic labor demand, omitting wage adjustments and firm-side responses. This assumption can limit the ability of these models to capture key labor market frictions. By supplementing the RURO framework with labor demand elasticities, we allow employment and wages to adjust to policy-induced changes in labor supply, providing a more realistic representation of labor market equilibrium.
This integrated approach improves the interpretation of labor supply estimates and strengthens the link between individual choice behavior and aggregate labor market outcomes. More broadly, the paper contributes to the structural labor supply literature by showing how demand-side adjustments can be incorporated in a tractable way, addressing an important limitation of existing models.
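The role of demand elasticities can be sketched with a textbook log-linear labor market; this illustrates the mechanism only, not the paper's RURO estimator, and the elasticity values are hypothetical.

```python
# Textbook log-linear labor market sketch (an illustration of the
# mechanism, not the paper's RURO estimator). Supply responds to the
# net-of-tax wage, demand to the gross wage:
#   Supply:  ln L = e_s * (ln w + ln(1 - t))
#   Demand:  ln L = e_d * ln w
# Equating the two after a change d ln(1 - t) gives the wage adjustment.

def equilibrium_response(d_log_net_of_tax, e_supply, e_demand):
    """Return (d_log_wage, d_log_employment) after a net-of-tax change."""
    d_log_w = -e_supply * d_log_net_of_tax / (e_supply - e_demand)
    d_log_L = e_demand * d_log_w
    return d_log_w, d_log_L

shock, e_s = 0.05, 0.3      # 5% net-of-tax rise, supply elasticity 0.3

# Perfectly elastic demand (the standard implicit assumption): wages are
# fixed and employment rises by the full e_s * shock = 1.5%.
_, dL_elastic = equilibrium_response(shock, e_s, e_demand=-1e9)

# Finite demand elasticity (-0.5): wages fall and damp the employment gain.
dw, dL_finite = equilibrium_response(shock, e_s, e_demand=-0.5)
assert dw < 0 and dL_finite < dL_elastic
```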
As the population ages and the sustainability gap in Finland’s public finances widens, new solutions are needed to ensure sufficient funding for public services. One potential solution is to place greater emphasis on private wealth in the financing of care services. At present, client fees for long-term social and health care services in Finland are determined based on clients’ income. This study examines the potential effects of also taking clients’ assets into account. We focus on the fiscal and distributional implications of such a reform. The analysis is based on the SOTE-SISU static microsimulation model and a unique administrative register that includes wealth data.
Due to the limitations of wealth data, the analysis concerns clients’ financial wealth in the form of investment funds, shares, and investment properties. Together, these account for slightly less than one-third of the total wealth of people aged 65 and over. The main component of wealth—owner-occupied housing—is excluded from the analysis. In the simulation, wealth was taken into account by adding 15 per cent of assets exceeding €15,000 to annual income, following the formula used in the housing allowance for pensioners.
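The add-on can be written out directly; a minimal sketch of the assessment formula described above (the client incomes in the example are hypothetical):

```python
# The wealth add-on described above: 15% of countable financial assets
# above EUR 15,000 is added to annual income before fees are assessed,
# following the formula used in the housing allowance for pensioners.

def income_with_wealth(annual_income, countable_assets,
                       threshold=15_000, rate=0.15):
    """Annual income used in the fee assessment after the wealth add-on."""
    return annual_income + rate * max(0.0, countable_assets - threshold)

# A hypothetical client with EUR 20,000 annual income and EUR 115,000 in
# funds and shares: 0.15 * (115_000 - 15_000) = 15_000 is added.
assert income_with_wealth(20_000, 115_000) == 35_000.0
# Assets at or below the threshold leave assessed income unchanged.
assert income_with_wealth(20_000, 15_000) == 20_000.0
```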
According to the results, the total amount of client fees collected for long-term residential care would increase by approximately 12 per cent under the reform. The increase is substantial considering that the change would affect only about 14 per cent of clients and that the types of assets included in the analysis represent only a small share of older people’s total wealth. Among clients whose wealth would be considered, the fees could rise considerably (an average increase of €830 per month, or 88 per cent). The largest number of affected clients would be found in the third income decile, which also has the highest prevalence of residential care users, while the probability of being affected increases with income level. Following the reform, the out-of-pocket financing of residential care would rise from about 16 per cent to 18 per cent.
The static simulation is indicative as it does not account for potential behavioural or other dynamic changes—for instance, changes in how older people might use, transfer, or convert their wealth, or shift toward private services. On the other hand, the analysis covers only a limited subset of household wealth, and future cohorts of care users are likely to be both larger and wealthier than those in 2022 data.
Demographic change poses profound challenges to labor markets across advanced economies. Population ageing is increasing pressure on public social security systems in many countries. These developments have intensified the policy debate on how to extend working lives and increase labor force participation at older ages. Against this background, promoting labor force participation among individuals close to or beyond statutory retirement age has gained increasing importance. In the German policy debate, one prominent example of such an approach is the “active pension” scheme (Aktivrente) introduced in 2026. This policy allows employed pensioners to earn additional income up to a specified monthly threshold without being subject to income taxation. While the intended goal of these measures is to encourage voluntary labor supply at older ages, their actual quantitative effects on employment and public finances remain uncertain. Traditional models in pension economics typically conceptualize retirement as a discrete and absorbing state, in which labor force participation ends entirely upon retirement. In light of changing employment biographies and policy initiatives aimed at extending working lives, this limitation has become increasingly problematic. A methodological extension of existing microsimulation models that explicitly accounts for labor supply decisions at older ages, through differentiated transition scenarios, earnings rules, or tax allowances, is therefore required to reliably assess the effects of such reforms. The paper at hand addresses this methodological challenge using Germany as a case study. It examines to what extent microsimulation approaches incorporating behavioral adjustments can be further developed to analyze policy measures that create labor supply incentives for pensioners. The paper identifies and discusses both the limits and the potential of extending existing modeling frameworks.
We were able to transfer the methodology of microsimulation to the group of retirees, using a microsimulation model based on the German Socio-Economic Panel. Simulations of hypothetical reform scenarios, as well as estimated labor supply elasticities, yield plausible results that are consistent with findings on labor supply responses among the younger working-age population. We plan to advance this model so that the labor supply effects of realistic reform scenarios, such as the active pension, can be estimated, along with the distributional effects of such reforms. We also plan to study and estimate the labor supply elasticities of pensioners in further detail with the model.
This tutorial introduces the microWELT model and modular modelling platform for comparative dynamic microsimulation. MicroWELT is a portable, continuous-time interacting population model built to work with readily available data for many countries, and it supports optional alignment to aggregate targets. It is “X-compatible”: the same model code can be compiled using Modgen or the open-source openM++ environment. As documented on the project website www.microWELT.eu, the model is also extendable to refined national applications such as the microDEMS model, which applies the same platform to an Austrian setting using detailed longitudinal administrative records, illustrating how the shared core can be refined when richer data are available.
Participants will learn (i) the conceptual architecture of microWELT as an interacting population model (entities, states, events, and exposures in continuous time); (ii) the platform’s modular structure; and (iii) how to use the documentation and web resources to adapt the platform to new research questions. We will show examples of comparative and national applications developed in different research contexts. The emphasis is on “how to get started”: using microWELT as a reference implementation, a starting template for new applications, and a training resource for dynamic microsimulation workflows. We will also point participants to the platform’s step-by-step implementation material, organised to introduce core concepts first and usable as a textbook-style reference when extending models.
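The continuous-time event mechanics can be sketched as competing risks with exponential waiting times, a minimal illustration of the general event-scheduling idea in Modgen/openM++-style models; the event set and annual hazards below are hypothetical.

```python
import random

# Competing continuous-time events with constant hazards: each candidate
# event draws an exponential waiting time and the earliest one fires.
# A minimal illustration of event scheduling in Modgen/openM++-style
# models; the events and annual hazards below are hypothetical.
random.seed(1)

def next_event(hazards):
    """Return (event_name, waiting_time) for the earliest competing event."""
    draws = {name: random.expovariate(h) for name, h in hazards.items() if h > 0}
    event = min(draws, key=draws.get)
    return event, draws[event]

hazards = {"union_formation": 0.10, "emigration": 0.01, "death": 0.005}

age, alive = 20.0, True
while alive and age < 110.0:
    event, wait = next_event(hazards)
    age += wait                              # advance continuous time
    if event == "death":
        alive = False
    elif event == "union_formation":
        hazards["union_formation"] = 0.0     # absorbing in this toy example
```

In a full model, hazards would depend on states that change as events fire, so waiting times are re-drawn after each event, exactly as in the loop above.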
MicroWELT lowers the barrier to comparative analysis by providing a shared, well-documented model core that can be reused across countries and projects. Its continuous-time framework is well suited to life-course processes and interacting individuals (e.g., partnership formation and dissolution), while its modularity allows users to extend the core demography to topics such as education, labour force projections, health, long-term care, pensions and other policy-relevant outcomes. Because aggregate outcomes can be aligned to official projections, users can combine micro-level heterogeneity with macro-level consistency, which is useful when communicating scenarios to policy audiences and when comparing results across countries. Cross-compatibility with Modgen/openM++ also supports both “production-style” workflows and reproducible open-source deployment.
Our team brings 25 years of experience developing dynamic microsimulation models with the Modgen/openM++ programming technology and pioneering comparative, cross-national models. MicroWELT is implemented at the Austrian Institute of Economic Research (WIFO), and our work spans model architecture, parameterization, implementation, documentation, and applied comparative studies.
The tutorial is aimed at applied researchers, graduate students, and policy analysts who want a concrete, working example of a comparative continuous-time dynamic microsimulation platform and guidance on how to build their own applications. For participants not familiar with Modgen/openM++, we recommend also attending the companion tutorial led by Doug Manuel introducing openM++ and the repository stcOpenMpp.
This study examines whether supervised machine learning can improve the prediction of household expenditure shares within the standard statistical matching pipeline that fuses EU SILC–type microdata with Household Budget Survey (HBS) expenditures. The conventional approach uses a transparent two-part econometric design: a probit model for participation (extensive margin) and an OLS regression for conditional spending (intensive margin). While robust, this framework is known to struggle in categories with pronounced zero inflation, nonlinear participation boundaries, heterogeneous spending patterns, or timing noise. We assess whether replacing the parametric steps with Gradient Boosted Trees (GBT) for participation and Gradient Boosted Regression (GBR) for conditional expenditure yields systematically better predictions without altering the downstream imputation workflow. We combine Swiss SILC 2020 as the recipient dataset and Swiss HBS 2015–2017 as the donor survey. Because these samples have no shared identifiers, we harmonize variables following established Eurostat/JRC practices. Seventeen covariates present in both sources are aligned through recoding and aggregation, and we uprate nominal incomes and expenditures using the harmonized index of consumer prices (HICP) to ensure comparability with the SILC reference year. We apply EUROMOD-style categorical aggregation to mitigate incidental zeros, remove extreme expenditure-to-income ratios, and enforce a common structure for the predictors used in both stages of the model. This creates a coherent evaluation environment in which alternative prediction models can be compared fairly. The imputation pipeline remains unchanged to ensure comparability with policy applications. First, we estimate participation for each aggregated COICOP category using the selected model (probit baseline or GBT alternative). Second, we model conditional expenditure given participation using OLS (baseline) or GBR (alternative).
Third, we compute fitted shares and apply a pseudo-R² screen to restrict attention to categories where covariates meaningfully explain variation. All diagnostics and matching steps are identical across methods, so any downstream differences are attributable solely to the prediction component. The design yields (i) cross-validated probability and error metrics for the extensive and intensive margins; (ii) threshold-sweep summaries to document operating-point sensitivity under imbalance; and (iii) downstream compatibility with the standard donor selection step used in EUROMOD/SWISSMOD-type applications. Because the imputation workflow and diagnostics are held constant, the study isolates the contribution of flexible predictors relative to the classical probit–OLS baseline in a way that is transparent for policy use.
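The two-stage structure can be sketched on synthetic data with scikit-learn defaults; this is illustrative only (the study itself uses harmonised SILC/HBS covariates and tuned models, and the data-generating process below is invented).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

# Schematic two-stage pipeline on synthetic data (illustrative only; the
# real study uses harmonised SILC/HBS covariates, not this toy process).
rng = np.random.default_rng(0)
n = 2_000
X = rng.normal(size=(n, 5))                      # harmonised covariates

# Synthetic "truth": nonlinear participation, log-linear conditional spend.
participates = (X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 1, n) > 1.0)
spend = np.where(participates,
                 np.exp(1.0 + 0.5 * X[:, 0] + rng.normal(0, 0.3, n)), 0.0)

# Stage 1 (extensive margin): GBT replaces the probit.
clf = GradientBoostingClassifier(random_state=0).fit(X, participates)
p_hat = clf.predict_proba(X)[:, 1]

# Stage 2 (intensive margin): GBR on participants replaces OLS, fitted on
# log spending to respect positivity.
reg = GradientBoostingRegressor(random_state=0).fit(
    X[participates], np.log(spend[participates]))
cond_spend = np.exp(reg.predict(X))

# Predicted unconditional expenditure feeding the donor-selection step:
expected_spend = p_hat * cond_spend
```

Because only `clf` and `reg` change between the baseline and the alternative, the downstream matching step is untouched, mirroring the evaluation design described above.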
Understanding spatial disparities in income distribution is crucial for designing effective and targeted public policies. However, small-area analysis is constrained by the limited representativeness of household surveys at fine geographical levels and by the lack of comprehensive income information in administrative and census data. This paper develops a spatial microsimulation framework to map and decompose income inequality across municipalities in Luxembourg by combining EU-SILC survey data, census information, and administrative statistics to simulate complete disposable income distributions at the municipal level for 2012 and 2022. The model integrates labour market behaviour, multiple income sources—including capital income and private transfers—and the tax-benefit system using EUROMOD. Spatial heterogeneity is captured through a two-step procedure that combines census-based reweighting with regression-based alignment to local demographic and labour market control totals. We further decompose overall inequality into between- and within-municipality components, observing a stronger within-municipality component. This suggests that factors operating at the local level—such as demographic composition, labour market participation, and access to specific income sources—play a central role in shaping income disparities. Our results reveal the dominant role of demographic and local structural factors in driving these disparities. By producing timely and spatially detailed estimates of disposable income and inequality, this paper demonstrates how combining spatial microsimulation with dynamic income generation can overcome data limitations and provide a powerful tool for analysing the drivers of spatial inequality and supporting evidence-based local policy design.
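The census-based reweighting step can be illustrated with a minimal iterative proportional fitting (IPF) routine; this is a sketch of the general reweighting idea, not the paper's exact two-step procedure, and the categories and control totals are hypothetical.

```python
import numpy as np

def ipf(weights, groups, n_iter=100):
    """Rake weights so each group's weighted category totals match targets.

    groups maps a name to (category index per respondent, target totals).
    A sketch of survey reweighting; not the paper's exact procedure."""
    w = weights.astype(float).copy()
    for _ in range(n_iter):
        for cats, totals in groups.values():
            current = np.bincount(cats, weights=w, minlength=len(totals))
            w *= (totals / current)[cats]        # rescale each category
    return w

rng = np.random.default_rng(0)
n = 500
age_cat = rng.integers(0, 3, n)                  # three age bands
emp_cat = rng.integers(0, 2, n)                  # employed / not employed
groups = {
    "age": (age_cat, np.array([400.0, 350.0, 250.0])),
    "employment": (emp_cat, np.array([600.0, 400.0])),
}
w = ipf(np.ones(n), groups)
# After raking, both weighted margins match the municipal control totals.
assert np.allclose(np.bincount(age_cat, weights=w), groups["age"][1])
assert np.allclose(np.bincount(emp_cat, weights=w), groups["employment"][1])
```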
The share of individuals with a migration background in European societies is increasing, both directly because of migration and indirectly because migrants’ descendants give rise to growing second and third generations, raising questions about the potential impact of unfolding diversity by migration background on fertility trends in Europe. Life course research has identified a large number of mechanisms and clocks that shape patterns of family formation in migrant populations, but the translation of such micro-level (inter)actions into macro-level population outcomes remains a key challenge. Using population-wide longitudinal microdata from Belgian registers, we estimate a multistate discrete-time hazard model of entry into parenthood and parity progression that simultaneously considers conventional determinants of family formation (e.g. age, education, parity, time since index birth) and migration-specific factors (origin group, migrant generation, age and parity at migration, duration of residence), while additionally incorporating unobserved heterogeneity that shapes transitions over the life course. We subsequently feed parameter and variance estimates into a dynamic microsimulation model that allows us to quantify the sensitivity of macro-level demographic trends in the timing and quantum of order-specific fertility to unfolding diversity by migration background and contrasting migration scenarios.
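The step from an estimated discrete-time hazard to simulated life courses can be sketched as follows; the intercept and quadratic age profile below are hypothetical, not estimates from the Belgian register data.

```python
import math, random

# Toy discrete-time logistic hazard for entry into parenthood (annual
# steps). The intercept and quadratic age profile are hypothetical, not
# estimates from the Belgian registers.
random.seed(42)

def hazard(age):
    """P(first birth this year | still childless), logistic in age."""
    x = -4.5 + 0.5 * (age - 20) - 0.025 * (age - 20) ** 2
    return 1.0 / (1.0 + math.exp(-x))

def age_at_first_birth(max_age=50):
    """Run the annual hazard from age 15; None if childless at max_age."""
    for age in range(15, max_age):
        if random.random() < hazard(age):
            return age
    return None

ages = [a for a in (age_at_first_birth() for _ in range(10_000)) if a is not None]
mean_age = sum(ages) / len(ages)             # roughly the late twenties
```

In the full model the linear predictor would also include education, origin group, migrant generation, and the other covariates, and parity progression would chain further hazards after the first birth.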
Microsimulation is a uniquely powerful technique for chronic disease modelling because it simulates outcomes at the level of the individual over time, capturing heterogeneity, history-dependent progression, multimorbidity, and complex clinical pathways that cohort averages cannot. In an era when chronic diseases account for the majority of global mortality and impose escalating pressure on health systems, decisions about their prevention, treatment, pricing, and resource allocation carry profound long-term clinical and financial consequences. Consequently, accurate long-horizon modelling of these diseases has become central to policy, reimbursement, and investment decisions. Historically, however, microsimulation has been constrained by computational performance. Statistical precision requires large simulated populations to reduce Monte Carlo error, and probabilistic sensitivity analysis multiplies this burden through repeated parameter sampling. Many models built in spreadsheets or high-level languages require hours or days to run, limiting scenario exploration, delaying iteration, and reducing their practical utility in time-sensitive decision environments. To address these limitations, a legacy microsimulation stack was rebuilt into a high-performance platform capable of executing 100 million life-course simulations in approximately 100 seconds. Performance gains were achieved through several core engineering innovations. The microsimulation core was implemented in modern C++, enabling direct control over memory allocation, cache locality, and execution flow. Compared with interpreted (e.g. Python, R) or spreadsheet-based environments, compiled C++ dramatically reduces runtime overhead and enables predictable, deterministic execution, strengthening validation processes and supporting regulatory-grade transparency and auditability. Memory architecture was optimised to maximise Central Processing Unit (CPU) cache efficiency and minimise allocation costs.
Modelled individuals’ attributes, state transitions, and event processes were encoded in compact, structured formats, allowing large virtual populations to be simulated without performance degradation. The engine exploited modern multi-core CPU architectures through multi-threading, allowing independent patient simulations to run concurrently. Because individual life trajectories are largely independent within Monte Carlo microsimulation, the model parallelises naturally, enabling near-linear scaling with available cores. Beyond single-machine performance, the system supports horizontal scaling via containerised simulation instances, allowing elastic expansion across the infrastructure based on workload demand, without reliance on specialised high-performance computing clusters. The platform includes integrated pipelines for data ingestion, preprocessing, simulation execution, and post-processing. Outputs are automatically aggregated into epidemiological and economic metrics, including incidence, prevalence, costs, and healthcare resource use, ready for decision analysis. A user-facing interface abstracts technical complexity, allowing domain experts to configure scenarios and execute simulations without interacting directly with the code or infrastructure. The entire platform is securely hosted in the cloud, allowing for easy set-up and access anywhere in the world. The system comprises cross-cloud components that allow it to be hosted on any of the major cloud providers.
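The natural parallelism described above can be sketched in Python (the platform itself is implemented in C++; this only illustrates the pattern): each chunk of simulated patients receives an independent random-number stream spawned from one master seed, so chunks run concurrently yet reproducibly. The two-state disease model and its transition probabilities are hypothetical.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Sketch of the "embarrassingly parallel" structure of Monte Carlo
# microsimulation (the platform above is C++; this illustrates the
# pattern only). The two-state disease model is hypothetical.
def simulate_chunk(seed, n_patients, years=40, p_onset=0.02, p_death=0.05):
    """Simulate one chunk of patients; returns total life-years lived."""
    rng = np.random.default_rng(seed)        # independent stream per chunk
    sick = np.zeros(n_patients, dtype=bool)
    alive = np.ones(n_patients, dtype=bool)
    life_years = np.zeros(n_patients)
    for _ in range(years):
        life_years += alive
        onset = alive & ~sick & (rng.random(n_patients) < p_onset)
        sick |= onset
        dies = alive & sick & (rng.random(n_patients) < p_death)
        alive &= ~dies
    return life_years.sum()

# Spawn independent, non-overlapping streams from one master seed, so
# results do not depend on thread scheduling.
seeds = np.random.SeedSequence(2024).spawn(8)
with ThreadPoolExecutor(max_workers=8) as pool:
    totals = list(pool.map(lambda s: simulate_chunk(s, 25_000), seeds))
mean_life_years = sum(totals) / (8 * 25_000)
```

In C++ the same structure maps each chunk to a hardware thread, which is what yields the near-linear scaling noted above.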
These advances represent a fundamental shift in capability: complex simulations once requiring hours or days may now be completed in seconds, enabling real-time exploration of uncertainty, and rapid scenario iteration to expedite decision-making. Microsimulation can therefore operate at the scale and speed demanded by modern policy, reimbursement, and investment strategies, amid growing chronic disease complexity and multimorbidity.