Monday, December 15, 2025

Creating and Assessing an Unconventional Global Database of Dust Storms Utilizing Generative AI

In the past we have written about how one can use social media to monitor dust storms along with how multi-modal large language models (MLLMs) can be used to analyze images. At the recent American Geophysical Union (AGU) Fall Meeting we (Sage Keidel, Stuart Evans and myself) brought these two strands of research together in a poster entitled "Creating and Assessing an Unconventional Global Database of Dust Storms Utilizing Generative AI."

In this work we showcase how MLLMs provide new opportunities and accessible methods for extracting information from imagery, using geo-located Flickr images that carry a dust-related keyword tag in multiple languages (e.g., Arabic, English, Spanish). We run these images through ChatGPT, which classifies them as dust storms or not, and compare this classification with human-classified images. If this sounds of interest, below you can read the abstract and see the poster, along with a selection of images that have been labeled as a dust storm or not and ChatGPT's confidence in its classification, while the dust storm database itself can be found here.
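To give a concrete sense of the classification step, the sketch below shows how a single geotagged Flickr image might be sent to a multi-modal model for a yes/no dust storm label and a confidence score. This is a minimal illustration assuming the OpenAI Python client; the model name, prompt wording, and confidence request are placeholders rather than the exact configuration used in the study.

```python
# Minimal sketch: ask a multi-modal LLM whether a Flickr image shows a dust storm.
# Assumes the OpenAI Python client and an illustrative model/prompt, not the study's exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_dust_image(image_url: str) -> str:
    """Return the model's yes/no answer and self-reported confidence for one image."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative multi-modal model choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Does this photo show a dust storm? "
                          "Answer 'yes' or 'no' and give a confidence from 0 to 100.")},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content

# Example call with a placeholder Flickr photo URL:
# print(classify_dust_image("https://live.staticflickr.com/.../example.jpg"))
```

In practice the returned text would be parsed into a label and confidence value and stored alongside the image's metadata (location, date, tags) to build the database.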

Abstract:

Complete observations of dust events are difficult, as dust’s spatial and temporal variability means satellites may miss dust due to overpass time or cloud coverage, while ground stations may miss dust due to not being in the plume. As a result, an unknown number of dust events go unrecorded in traditional datasets. Dust’s importance both for atmospheric processes and as a health and travel hazard makes it important to detect dust events whenever possible, and in particular, studies of the health impacts of dust are limited by a lack of detailed exposure information.

In recent years, social media platforms have emerged as a valuable source of unconventional data to study events such as earthquakes and flooding around the world. However, one challenge with respect to using such data is classifying and labeling it (i.e., is it a dust storm or not?). While it is relatively simple to classify textual data through natural language processing, the same is not true for imagery data. Traditionally, classifying imagery data was a complex computer vision task. However, recent advancements in generative artificial intelligence (AI), especially multi-modal large language models (MLLMs), are opening up new opportunities and offering accessible methods for information extraction from imagery data. Therefore, in this study we collect geotagged Flickr images referencing dust from around the globe in multiple languages (e.g., English, Spanish, Arabic) and use generative AI (i.e., ChatGPT) to classify the images as dust storms or not. Furthermore, we compare a sample of these classified images from ChatGPT with human-classified images to assess its accuracy in classification. Our results suggest that ChatGPT can relatively accurately detect dust storms from Flickr images and thus helps us create an unconventional global database of dust storm events that might otherwise go unobserved in more traditional datasets.



Workflow

Poster

Dust storm database (click here to go to it)

Full Reference:
Keidel, S., Evans S. and Crooks, A.T. (2025), Creating and Assessing an Unconventional Global Database of Dust Storms Utilizing Generative AI, American Geophysical Union (AGU) Fall Meeting, 15th–19th December, New Orleans, LA. (pdf of poster).

Friday, December 12, 2025

Quantitative Comparison of Population Synthesis Techniques

In the past we have written a number of posts on synthetic populations, however, one thing we have not done is compare the various techniques that can be used to create them. This has now changed with a new paper entitled "Quantitative Comparison of Population Synthesis Techniques" which was recently presented at the 2025 Winter Simulation Conference.

In this paper, we (David Han, Samiul Islam, Taylor Anderson, Hamdi Kavak and myself) investigate five synthetic population generation techniques (i.e., Iterative Proportional Fitting, Conditional Probabilities, Simple Random Sampling, Hill Climbing and Simulated Annealing) in parallel to synthesize population data for different North American settings (e.g., Fairfax County, VA, USA and Metro Vancouver, BC, Canada). Our findings suggest that the iterative proportional fitting and conditional probabilities techniques perform best, while also highlighting the importance of considering why certain methods are chosen over others when generating synthetic populations for a given geographic domain.
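To give a flavor of the simplest of these techniques, here is a minimal sketch of two-dimensional iterative proportional fitting, in which a seed contingency table (e.g., age by income counts from a sample) is repeatedly rescaled until its row and column sums match known marginal totals. The attributes, targets, and stopping rule are illustrative only; the implementation compared in the paper handles more attributes and geographies (see the GitHub repository below).

```python
# Minimal sketch of 2D Iterative Proportional Fitting (IPF).
# Illustrative only; not the paper's implementation.
import numpy as np

def ipf_2d(seed, row_targets, col_targets, tol=1e-6, max_iter=1000):
    table = seed.astype(float)
    for _ in range(max_iter):
        # Scale rows to match the row marginals, then columns to match the column marginals.
        table *= (row_targets / table.sum(axis=1))[:, None]
        table *= (col_targets / table.sum(axis=0))[None, :]
        if (np.allclose(table.sum(axis=1), row_targets, atol=tol)
                and np.allclose(table.sum(axis=0), col_targets, atol=tol)):
            break
    return table

# Toy example: 2 age groups x 3 income bands from a sample, fitted to census marginals.
seed = np.array([[10, 20, 5], [15, 10, 10]])
fitted = ipf_2d(seed, row_targets=np.array([120, 80]), col_targets=np.array([90, 70, 40]))
print(fitted.round(1))
```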

If this sounds of interest, below you can read the abstract to the paper and see some of the figures and tables that support our discussion, while at the bottom of the post you can find the full reference and a link to the paper. Moreover, in an effort to support reproducible science, all code and data are available to interested readers in our GitHub repository located at https://github.com/kavak-lab/synthetic-pop-comparison.

Abstract
Synthetic populations serve as the building blocks for predictive models in many domains, including transportation, epidemiology, and public policy. Therefore, using realistic synthetic populations is essential in these domains. Given the wide range of available techniques, determining which methods are most effective can be challenging. In this study, we investigate five synthetic population generation techniques in parallel to synthesize population data for various regions in North America. Our findings indicate that the iterative proportional fitting (IPF) and conditional probabilities techniques perform best across different regions, geographic scales, and increasing numbers of attributes. Furthermore, IPF has lower implementation complexity, making it an ideal technique for various population synthesis tasks. We documented the evaluation process and shared our source code to enable further research on advancing the field of modeling and simulation.
A conceptual depiction of the IPF process for population synthesis.

Our four-step process used in this study.

Average R² values by geographic level and method (standard deviations in italics).

% Total absolute error (% TAE) comparison by attribute for Fairfax County.
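For readers unfamiliar with the %TAE metric used in the figure above, the sketch below shows one common way of computing it: the summed absolute difference between synthetic and target attribute counts, divided by the total target population. The exact normalization and attribute categories used in the paper may differ; the counts here are made up.

```python
# Minimal sketch of percent total absolute error (%TAE) between synthetic and target counts.
# The normalization convention and categories are illustrative, not necessarily the paper's.
def percent_tae(synthetic_counts: dict, target_counts: dict) -> float:
    tae = sum(abs(synthetic_counts.get(k, 0) - v) for k, v in target_counts.items())
    total = sum(target_counts.values())
    return 100.0 * tae / total

# Toy example: household income band counts (values are made up).
target = {"<50k": 400, "50-100k": 350, ">100k": 250}
synthetic = {"<50k": 420, "50-100k": 330, ">100k": 250}
print(percent_tae(synthetic, target))  # 4.0
```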

Full Reference:
Han, D., Islam, S., Anderson, T., Crooks, A.T. and Kavak, H. (2025), Quantitative Comparison of Population Synthesis Techniques, in Azar, E., Djanatliev, A., Harper, A., Kogler, C., Ramamohan, V., Anagnostou, A. and Taylor, S.J.E. (eds.), Proceedings of the 2025 Winter Simulation Conference, Seattle, WA, ACM. (pdf)

Friday, November 28, 2025

Integration of Community Level Data into Mathematical Models

In the past we have posted about how we can utilize data and models to explore pandemics and people's reactions to them. And while interest in COVID-19 might have waned, there will be future pandemics.

To this end, at the 53rd Annual Meeting of NAPCRG we (Laurene Tumiel Berhalter, Sanchit Goel, Dawn Vanderkooi, Bruce Pitman, Yinyin Ye, Jennifer Surtees and myself) had a poster entitled "Integration of Community Level Data into Mathematical Models to Predict Future Public Health Emergencies." The objective of the poster is to showcase how one can integrate 211 data into models to predict future public health emergencies. If this sounds of interest, below you can see the poster and at the bottom of the post you can access the abstract.


Full Reference:

Tumiel, L.M., Goel, S., Vanderkooi, D., Pitman E.B., Crooks A.T., Ye, Y. and Surtees, J. (2025), Integration of Community Level Data into Mathematical Models to Predict Future Public Health Emergencies, North American Primary Care Research Group (NAPCRG) 53rd Annual Meeting, 21st-25th November, Atlanta, GA (pdf).

Saturday, November 08, 2025

New Paper: Modeling Wildfire Evacuation with Embedded Fuzzy Cognitive Maps

While we have explored disasters in the past through agent-based models and other computational social science approaches, one area we have not examined is how agent-based models can be used to study evacuations during a wildfire event. This has now changed with a new paper by Zhongyu Zhou and myself entitled "Modeling Wildfire Evacuation with Embedded Fuzzy Cognitive Maps: An Agent-Based Simulation of Emotion and Social Contagion," which was recently presented at the 2025 International Conference of the Computational Social Science Society of the Americas (CSSSA).

In the paper we present an agent-based model combined with an embedded fuzzy cognitive map (FCM) to simulate residents’ evacuation behavior during a wildfire event. If this sounds of interest, below we provide the abstract to the paper along with some of the figures that showcase the model logic and some of its results. A detailed ODD, the model and the data needed to run the model can be found at: https://github.com/ozzyzhou99/LA-Wildfire-Model/. Finally, at the bottom of the post you can find the full reference to the paper and a link to it.

Abstract: 

Wildfires are becoming increasingly dangerous, especially in densely populated fire-prone areas like Los Angeles. People’s evacuation decisions during wildfire events are influenced by many factors, including emotions such as fear or panic, which often affect people’s choices to evacuate. Traditional evacuation models often assume that individuals behave rationally. As a result, these models tend to overlook the influence of emotional factors on evacuation behavior. To address this issue, this study develops an agent-based model (ABM) combined with an embedded fuzzy cognitive map (FCM) to simulate residents’ evacuation behavior during a wildfire event. The model includes two types of agents: evacuees and rescuers. It focuses on how emotions change over time and how they spread among people, and on how these emotional changes affect evacuation decisions. This research also considers differences between income groups to explore whether low-income residents are more likely to panic. Results from the model show that agents with different emotions behave differently during the evacuation process. Emotional changes clearly affect how agents choose routes and whether they can respond quickly. In addition, the results suggest that income level affects emotional responses, and low-income groups are more likely to feel fear. This study highlights the value of using ABM and FCM together to better understand evacuation behavior and provides a new idea for developing fairer and more effective disaster response plans.
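For readers curious about the mechanics, the sketch below shows one possible form of an embedded-FCM update for a single evacuee agent, followed by a simple social-contagion step that nudges the agent's fear toward the mean fear of its neighbors. The concepts, weights, and contagion rule here are illustrative stand-ins; the actual concepts, parameter values, and update schedule are documented in the ODD in the GitHub repository linked above.

```python
# Minimal sketch of a fuzzy cognitive map (FCM) update plus emotional contagion for one agent.
# Concepts, weights, and the contagion rule are illustrative, not the paper's parameterization.
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

# Concepts: [fire_proximity, road_congestion, fear, evacuate_decision]
W = np.array([
    [0.0, 0.0, 0.7, 0.3],   # fire proximity raises fear and the urge to evacuate
    [0.0, 0.0, 0.4, -0.2],  # congestion raises fear but slows the decision to leave
    [0.0, 0.0, 0.0, 0.6],   # fear pushes the agent toward evacuating
    [0.0, 0.0, 0.0, 0.0],
])

def fcm_step(state, neighbor_fear, contagion=0.2):
    """One FCM iteration, then blend the agent's fear (index 2) toward its neighbors' mean fear."""
    new_state = sigmoid(state @ W + state)  # additive FCM update with self-memory
    new_state[2] = (1 - contagion) * new_state[2] + contagion * np.mean(neighbor_fear)
    return new_state

state = np.array([0.8, 0.3, 0.2, 0.0])      # agent close to the fire, low initial fear
print(fcm_step(state, neighbor_fear=[0.6, 0.9]))
```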

Keywords: Agent-Based Modeling, Emotional decision-making, GIS, Fuzzy Cognitive Map, Wildfire Evacuation.
Data used in setting up the model experiment. (A) household income data, (B) locations of previously affected houses, and (C) evacuation road data.

Agent-level embedded FCM loop with social contagion.
Evacuees’ Workflow (A), Rescuers’ Workflow (B).




Box plots of average emotions for three groups of experiments (50 repetitions each). From left to right, the number of people in each income group increases progressively. Low income (LI), middle income (MI), and high income (HI).

Full Reference:
Zhou, Z. and Crooks, A.T. (2025), Modeling Wildfire Evacuation with Embedded Fuzzy Cognitive Maps: An Agent-Based Simulation of Emotion and Social Contagion, Proceedings of the 2025 International Conference of the Computational Social Science Society of the Americas, Santa Fe, NM. (pdf)

Thursday, November 06, 2025

HD-GEN: A Software System for Large-Scale Human Mobility Data Generation Based on Patterns of Life

Human mobility datasets are essential for investigating human behavior, mobility patterns, and traffic dynamics. In the past we have written about how one can use agent-based models to generate patterns of life trajectory datasets. Building on this work, at the ACM SIGSPATIAL 2025 conference we (Hossein Amiri, Richard Yang, Shiyang Ruan, Joon-Seok Kim, Hamdi Kavak, Andrew Crooks, Dieter Pfoser, Carola Wenk and Andreas Züfle) had a paper entitled "HD-GEN: A Software System for Large-Scale Human Mobility Data Generation Based on Patterns of Life."

In this paper, we extend our previous work by introducing a software system that provides a new suite of tools built on top of the Patterns of Life simulation framework. Specifically, this work consolidates our contributions into a unified data generation pipeline that includes:

  1. additional discussion of the motivation and applications of large-scale simulated trajectory data, 
  2. detailed instructions on running the simulation and generating datasets, 
  3. extended analysis of the shared dataset, and 
  4. an integrated GitHub repository

The proposed system enables large-scale synthetic dataset generation, either by statistically replicating real-world data or by creating datasets with user-defined properties. If this sounds of interest, below you can read the abstract to the paper and see the poster that accompanies it. We have also provided detailed instructions on how to reproduce the generated datasets and made the code and data available at https://github.com/onspatial/large-scale-dataset-generator.

Abstract

Understanding individual human mobility is critical for a wide range of applications. Real-world trajectory datasets provide valuable insights into actual movement behaviors but are often constrained by data sparsity and participant bias. Synthetic data, by contrast, offer scalability and flexibility but frequently lack realism. To address this gap, we introduce a comprehensive software pipeline for generating, calibrating, and processing large-scale human mobility datasets that integrate the realism of empirical data with the control and extensibility of Patterns-of-Life simulations. Our system consists of three integrated components. First, a genetic algorithm–based calibration module fine-tunes simulation parameters to align with real-world mobility characteristics, such as daily trip counts and radius of gyration, enabling realistic behavioral modeling. Second, a data generation engine constructs geographically grounded simulations using OpenStreetMap data to produce diverse mobility logs. Third, a data processing suite transforms raw simulation logs into structured formats suitable for downstream applications, including model training and benchmarking. 
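As a small illustration of one of the calibration targets mentioned above, the sketch below computes the radius of gyration for a single agent's trajectory, i.e., the root-mean-square distance of visited locations from their center of mass. The planar coordinates and toy trajectory are assumptions for brevity; the actual calibration module works on the simulation's own logs and real-world reference data.

```python
# Minimal sketch: radius of gyration for one trajectory (planar coordinates in meters).
# Illustrative only; a real pipeline would first project lon/lat coordinates.
import numpy as np

def radius_of_gyration(points: np.ndarray) -> float:
    """Root-mean-square distance of visited locations from their center of mass."""
    center = points.mean(axis=0)
    return float(np.sqrt(np.mean(np.sum((points - center) ** 2, axis=1))))

# Toy trajectory: four visited locations.
traj = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 50.0], [0.0, 50.0]])
print(radius_of_gyration(traj))  # ~55.9 m
```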

Keywords: GeoLife, Patterns of Life, Simulation, Realistic Trajectory Datasets

Dataset creation phases with HD-GEN software.

Full Reference: 

Amiri, H., Yang, R., Ruan, S., Kim, J-S., Kavak, H., Crooks, A.T., Pfoser, D., Wenk, C. and Züfle, A. (2025), HD-GEN: A Software System for Large-Scale Human Mobility Data Generation Based on Patterns of Life, in Proceedings of the 33rd ACM International Conference on Advances in Geographic Information Systems (SIGSPATIAL '25), November 3–6, 2025, Minneapolis, MN, pp. 407-410. (pdf) (poster)