Monday, March 02, 2026

A hybrid simulation methodology for identifying and mitigating supply chain disruptions

During times of crisis, shocks to supply chains can propagate through the entire economy (e.g., global shortages of critical goods, such as personal protective equipment during COVID-19). At the same time, criminal organizations may disrupt and manipulate licit supply chains for financial gain or political objectives. Thus, there is a strong need for modeling and simulating not only supply chain operations but also the malicious actors who may act to disrupt them.


In the paper, we introduce a novel hybrid modeling framework (implemented in MASON) designed to identify vulnerabilities across supply networks. Through the framework, we are able to analyze disruption scenarios and evaluate mitigation strategies using a pharmaceutical supply chain model (i.e., PharmaSim). As such, this paper and the proposed framework provide a foundation for simulation-driven planning tools that help organizations anticipate risks and strengthen supply chain resilience.

If this sounds of interest, below we provide the abstract to the paper along with some of the figures, which show the supply chain we model, the simulation framework, and some results. At the bottom of the page, you can find the full reference to the paper and a link to it, while the model itself is available at https://github.com/eclab/DES-Supply-Chain-demo

Abstract

Global disruptions have shown that shocks to supply chains can quickly ripple through entire economies, highlighting the need to identify vulnerabilities and evaluate mitigation strategies to build resilience. In this paper, we propose a simulation methodology, Hybrid Integrated Supply-Chain Simulation (HISS), to identify and mitigate potential disruptions in supply chains. We demonstrate HISS using a generic pharmaceutical supply chain model including sourcing, outsourcing, production, packaging, and distribution processes, created using MASON’s hybrid modeling capabilities. We classify disruptions from malicious actors and analyze their timing, impact, and scope. The simulation is further extended to model mitigation strategies and assess their efficacy. Extensive optimization allowed us to identify worst-case disruptions, and optimized safety stock strategies reduced impacts by a factor of five, while anomaly detection achieved a high recall of 0.966. The modeling approach proposed in this paper provides a basis for planning tools that support resilience and preparedness of supply chains.

Keywords: Hybrid simulation, supply chain modeling, resilience, optimization, evolutionary computation.
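The abstract above notes that optimized safety stock strategies reduced disruption impacts by a factor of five. The paper tunes safety stocks via evolutionary optimization; purely as a baseline illustration (not the paper's method, and with made-up numbers rather than PharmaSim's), the textbook safety stock calculation looks like this:

```python
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level: float, demand_std: float, lead_time_days: float) -> float:
    """Textbook safety stock: the z-score for the target service level,
    times demand standard deviation, scaled by the square root of lead time."""
    z = NormalDist().inv_cdf(service_level)
    return z * demand_std * sqrt(lead_time_days)

# Illustrative numbers only: a 95% service level, daily demand standard
# deviation of 100 units, and a 4-day replenishment lead time.
ss = safety_stock(0.95, 100.0, 4.0)
print(round(ss, 1))  # roughly 329 units held in reserve
```

An optimizer like the evolutionary approach in the paper would instead search over stock levels per node, trading holding cost against disruption impact.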

Visual representation of pharmaceutical supply chain (PSC), which was used to code PharmaSim

Time series of daily production flow through the active pharmaceutical ingredient (API) Production node (resilience triangles are shown in red and the number of units on the vertical axis is in millions).
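The resilience triangles shown in red in the figure above represent the production shortfall during a disruption: the area between the baseline flow and the degraded flow until recovery. A minimal sketch of how such an area can be computed from a daily time series (function name and numbers are illustrative, not taken from PharmaSim) is:

```python
def resilience_triangle_area(flow, baseline):
    """Sum the daily shortfall below baseline: the area of the 'triangle'
    carved out of the performance curve by a disruption."""
    return sum(max(baseline - f, 0.0) for f in flow)

# Illustrative daily production (in millions of units): a disruption hits
# on day 3 and production recovers linearly to the baseline of 1.0.
daily_flow = [1.0, 1.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.0]
area = resilience_triangle_area(daily_flow, baseline=1.0)
print(area)  # roughly 2.0 million unit-days of lost production
```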

Overview of the software components and their interactions.

Sample time series of numbers of packaged units with anomalies due to (left) a disruption and due to (right) normal fluctuations (the number of units on the vertical axis is in millions).


Full reference:

Rana, A., Patel, R., Goswami, A., Luke, S., Baveja, A., Domeniconi, C., Melamed, B., Roberts, F., Chen, W., Crooks, A.T., Menkov, V., Narayan, V., Jones, J. and Kavak, H. (2026). A hybrid simulation methodology for identifying and mitigating supply chain disruptions. Journal of Simulation, 1–22. https://doi.org/10.1080/17477778.2026.2628944 (pdf)


Monday, January 05, 2026

Not just numbers: Understanding cities through their words

In the past we have written about how one can use social media or newspapers to study the world around us. Keeping with this theme of using text, we (Xinyu Fu, Catherine Brinkley, Thomas Sanchez, Chaosu Li and myself) have a new editorial entitled "Not just numbers: Understanding cities through their words," which accompanies a special issue in Environment and Planning B entitled "Leveraging Natural Language Processing for Urban Analytics."

The editorial discusses how researchers can use natural language processing (NLP) methods to get a sense of a diverse range of issues impacting cities. To quote from the editorial, these range:
 "from analyzing housing development from council planning applications (Lin et al., 2025), revealing visitor perceptions of famous attractions or passengers’ perceptions on transit service quality from social media (Luo et al., 2025; Ma et al., 2025), defining the meaning of urban imageability based on online review (Zhu et al., 2025), understanding the spatial implications of the digital economy (Occhini et al., 2025), and extracting policies from official government reports (Wang et al., 2025)."

These papers, along with the data they used and their findings, are summarized in the table below, demonstrating how one can move beyond purely quantitative data and methods to study cities. If this sounds of interest, please feel free to read our editorial along with the papers in the special issue.


Full reference: 

Fu, X., Brinkley, C., Sanchez, T.W., Li, C. and Crooks, A.T. (2026), Not Just Numbers: Understanding Cities through their Words, Environment and Planning B, 53(1): 3-10. (pdf)

Monday, December 15, 2025

Creating and Assessing an Unconventional Global Database of Dust Storms Utilizing Generative AI

In the past we have written about how one can use social media to monitor dust storms along with how multi-modal large language models (MLLMs) can be used to analyze images. At the recent American Geophysical Union (AGU) Fall Meeting we (Sage Keidel, Stuart Evans and myself) brought these two strands of research together in a poster entitled "Creating and Assessing an Unconventional Global Database of Dust Storms Utilizing Generative AI."

In this work we showcase how MLLMs provide new opportunities and accessible methods for information extraction from imagery data, using geo-located images from Flickr that have a dust keyword tag in multiple languages (e.g., Arabic, English, Spanish). We run these images through ChatGPT, which classifies them as dust storms or not, and compare this classification with human-classified images. If this sounds of interest, below you can read the abstract and see the poster, along with a selection of images that have been labeled as a dust storm or not and ChatGPT's confidence in its classification. The dust storm database itself can be found here.

Abstract:

Complete observations of dust events are difficult, as dust’s spatial and temporal variability means satellites may miss dust due to overpass time or cloud coverage, while ground stations may miss dust due to not being in the plume. As a result, an unknown number of dust events go unrecorded in traditional datasets. Dust’s importance both for atmospheric processes and as a health and travel hazard makes it important to detect dust events whenever possible; in particular, studies of the health impacts of dust are limited by the lack of detailed exposure information.

In recent years, social media platforms have emerged as a valuable source of unconventional data to study events such as earthquakes and flooding around the world. However, one challenge with respect to using such data is classifying and labeling it (i.e., is it a dust storm or not?). While it is relatively simple to classify textual data through natural language processing, this is not the case with imagery data. Traditionally, classifying imagery data was a complex computer vision task. However, recent advancements in generative artificial intelligence (AI), especially multi-modal large language models (MLLMs), are opening up new opportunities and offering accessible methods for information extraction from imagery data. Therefore, in this study we collect geotagged Flickr images referencing dust from around the globe in multiple languages (e.g., English, Spanish, Arabic) and use generative AI (i.e., ChatGPT) to classify the images as dust storms or not. Furthermore, we compare a sample of these classified images from ChatGPT with human-classified images to assess its accuracy in classification. Our results suggest that ChatGPT can relatively accurately detect dust storms from Flickr images and thus helps us create an unconventional global database of dust storm events that might otherwise go unobserved in more traditional datasets.
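The workflow described above, prompting a multi-modal model with an image and then scoring its labels against human classifications, can be sketched roughly as follows. The model name, prompt wording, and function names are our illustrative assumptions, not the poster's exact setup:

```python
def classify_dust_image(image_url: str) -> str:
    """Ask a multi-modal model whether an image shows a dust storm.
    Requires the `openai` package and an API key; model choice and
    prompt wording are illustrative assumptions."""
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Does this image show a dust storm? Answer 'dust' or 'not dust'."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return resp.choices[0].message.content.strip().lower()

def recall(predicted: list, human: list, positive: str = "dust") -> float:
    """Fraction of human-labeled dust storms that the model also flagged."""
    true_pos = sum(1 for p, h in zip(predicted, human) if h == positive and p == positive)
    actual_pos = sum(1 for h in human if h == positive)
    return true_pos / actual_pos

# Comparing a toy sample of model labels against human labels:
print(recall(["dust", "dust", "not dust", "dust"],
             ["dust", "dust", "dust", "not dust"]))  # 2 of 3 true dust storms found
```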



Workflow

Poster

Dust storm database (click here to go to it)

Full Reference:
Keidel, S., Evans S. and Crooks, A.T. (2025), Creating and Assessing an Unconventional Global Database of Dust Storms Utilizing Generative AI, American Geophysical Union (AGU) Fall Meeting, 15th–19th December, New Orleans, LA. (pdf of poster).

Friday, December 12, 2025

Quantitative Comparison of Population Synthesis Techniques

In the past we have written a number of posts on synthetic populations; however, one thing we have not done is compare the various techniques that can be used to create them. This has now changed with a new paper entitled "Quantitative Comparison of Population Synthesis Techniques," which was recently presented at the 2025 Winter Simulation Conference.

In this paper, we (David Han, Samiul Islam, Taylor Anderson, Hamdi Kavak and myself) investigate five synthetic population generation techniques (Iterative Proportional Fitting, Conditional Probabilities, Simple Random Sampling, Hill Climbing and Simulated Annealing) in parallel to synthesize population data for different North American settings (e.g., Fairfax County, VA, USA and Metro Vancouver, BC, Canada). Our findings suggest that iterative proportional fitting and conditional probabilities perform best, while also highlighting the importance of considering the geographic domain when choosing one method over another for generating synthetic populations.

If this sounds of interest, below you can read the abstract to the paper and see some of the figures and tables that support our discussion. At the bottom of the post you can find the full reference and a link to the paper. Moreover, in an effort to allow for reproducible science, all code and data are available to interested readers in our GitHub repository located at https://github.com/kavak-lab/synthetic-pop-comparison.

Abstract
Synthetic populations serve as the building blocks for predictive models in many domains, including transportation, epidemiology, and public policy. Therefore, using realistic synthetic populations is essential in these domains. Given the wide range of available techniques, determining which methods are most effective can be challenging. In this study, we investigate five synthetic population generation techniques in parallel to synthesize population data for various regions in North America. Our findings indicate that iterative proportional fitting (IPF) and conditional probabilities techniques perform best in different regions, geographic scales, and with increased attributes. Furthermore, IPF has lower implementation complexity, making it an ideal technique for various population synthesis tasks. We documented the evaluation process and shared our source code to enable further research on advancing the field of modeling and simulation.

A conceptual depiction of the IPF process for population synthesis.
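The IPF process depicted above alternately rescales a seed table (e.g., sample counts) so that its row and column sums match known marginals (e.g., census totals). A minimal two-attribute sketch, with toy numbers rather than the paper's data, could look like:

```python
import numpy as np

def ipf(seed: np.ndarray, row_targets: np.ndarray, col_targets: np.ndarray,
        iterations: int = 100) -> np.ndarray:
    """Iterative proportional fitting: alternately scale the rows and the
    columns of the seed table until its margins match the target marginals."""
    table = seed.astype(float)
    for _ in range(iterations):
        table *= (row_targets / table.sum(axis=1))[:, None]  # fit row sums
        table *= col_targets / table.sum(axis=0)             # fit column sums
    return table

# Toy example: a 2x2 seed (say, age group x household size counts from a
# survey sample) adjusted to census marginals of 60/40 by row, 70/30 by column.
seed = np.array([[1.0, 2.0], [3.0, 4.0]])
fitted = ipf(seed, np.array([60.0, 40.0]), np.array([70.0, 30.0]))
print(fitted.sum(axis=1))  # approaches [60. 40.]
print(fitted.sum(axis=0))  # approaches [70. 30.]
```

The fitted table preserves the interaction structure of the seed while honoring the marginal totals, which is what makes IPF attractive when only aggregate census tables are available.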

Our four-step process used in this study.

Average R² values by geographic level and method (standard deviations in italics).

% Total absolute error (% TAE) comparison by attribute for Fairfax County.

Full Reference:
Han, D., Islam, S., Anderson, T., Crooks, A.T. and Kavak, H. (2025), Quantitative Comparison of Population Synthesis Techniques, in Azar, E., Djanatliev, A., Harper, A., Kogler, C., Ramamohan, V., Anagnostou, A. and Taylor, S.J.E. (eds.), Proceedings of the 2025 Winter Simulation Conference, Seattle, WA, IEEE. pp. 151-162. (pdf)

Friday, November 28, 2025

Integration of Community Level Data into Mathematical Models

In the past we have posted about how we can utilize data and models to explore pandemics and people's reactions to them. And while interest in COVID might have waned, there will be future pandemics.

To this end, at the 53rd Annual Meeting of NAPCRG we (Laurene Tumiel Berhalter, Sanchit Goel, Dawn Vanderkooi, Bruce Pitman, Yinyin Ye, Jennifer Surtees and myself) had a poster entitled "Integration of Community Level Data into Mathematical Models to Predict Future Public Health Emergencies." The objective of the poster is to showcase how one can integrate 211 data into models to predict future public health emergencies. If this sounds of interest, below you can see the poster, and at the bottom of the post you can access the abstract.


Full Reference:

Tumiel, L.M., Goel, S., Vanderkooi, D., Pitman E.B., Crooks A.T., Ye, Y. and Surtees, J. (2025), Integration of Community Level Data into Mathematical Models to Predict Future Public Health Emergencies, North American Primary Care Research Group (NAPCRG) 53rd Annual Meeting, 21st-25th November, Atlanta, GA (pdf).