News

July 2025

Lessons Learned: Harmonizing and Standardizing Intensive Longitudinal Grief Data

Written by Justina Pociūnaitė-Ott

In the first version of the Grief-ID archive, we successfully harmonized data from three intensive longitudinal data studies. This resulted in a rich dataset of 315 participants and over 20,000 individual data points. Each participant took part in a 14-day study, completing brief surveys up to five times a day. The Grief-ID archive has already gained some interest and has been reused, attesting to the benefits of FAIR data management.

But our work didn’t stop there. Our goal is to turn Grief-ID into a living archive -- an evolving resource that grows as new datasets are added. That said, combining data from different studies isn’t always straightforward. Each study often uses its own set of questions, scales, and sampling schemes. These differences pose challenges in terms of harmonizing data, but they don’t make harmonization impossible. Instead, they push us to be more creative and precise in how we align the data, so that it remains meaningful and useful for future research.

Harmonization means finding common ground: matching similar items across studies, while respecting the original intent behind each one. It’s a way to unlock the full potential of reusing existing data that required significant effort from both participants and researchers. The more we can reuse this rich source of intensive longitudinal data, the more insights we can gain into how grief unfolds in daily life after loss. Our goal is to strike the right balance: making it easier to reuse the data, while being mindful of the time and effort required from both contributors and developers.

A couple of research groups have already expressed interest in contributing their data to Grief-ID, so we realized the importance of first assessing how well their data could be harmonized. Naturally, future researchers using this archive can make their own decisions about what data to compare or combine. But by taking the first steps to harmonize core aspects of each study, we aim to make those decisions easier and better informed.

Step 1: Harmonizing Item Content

The first step in harmonizing intensive longitudinal data in grief was item content -- making sure that different items used across studies actually assess similar experiences, symptoms, or constructs so they can be harmonized. This meant carefully comparing how questions (items) were phrased and whether they could reasonably be treated as equivalent across studies.

To support this, we reached out to international grief experts and asked them to evaluate whether differently worded items could be used interchangeably.

The first opportunity for feedback came during a grief expert meeting at the University of Zurich, organized by Clare Killikelly and Andreas Maercker. We presented our research on grief in daily life and hosted a dedicated roundtable on item harmonization. While the discussion was useful, we realized that the format, in which experts rotated every 10 minutes, wasn’t ideal for this harmonization exercise: we needed more time, and our colleagues needed clearer background and instructions. Following the event, we asked 10 experts to fill out the harmonization sheet we had prepared for the roundtable. Five experts returned completed item harmonization sheets. Their feedback was indeed useful, but it showed us that we needed more refined instructions to make the harmonization task even more precise.

We refined the instructions and launched a second round of feedback, inviting ten different grief and trauma experts, seven of whom responded by returning completed item harmonization sheets. Based on the feedback from both rounds, we created the first version of a harmonization file, outlining which items can be conceptually treated as equivalent and which should be kept separate, as per expert consensus.

You can download the synthesized results and the instructions via the link below. Overall, experts agreed on many items, with just three items requiring changes. We've implemented changes based on their input and now offer a version that can help future users of the archive decide which items can be combined across datasets, and where caution is needed.
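To make the idea of item harmonization concrete, a crosswalk can be represented as a simple mapping from a shared construct to each study's own item wording, with an explicit marker for studies that did not assess the construct. The construct names, item wordings, and study labels below are illustrative assumptions, not the actual Grief-ID items:

```python
# Hypothetical item-harmonization crosswalk: each harmonized construct maps
# to the study-specific item wording (or None where the construct was not
# assessed). All names here are illustrative, not Grief-ID variable names.
CROSSWALK = {
    "yearning": {
        "study_a": "I miss the deceased",
        "study_b": "Right now, I long for the person who died",
        "study_c": None,  # construct not assessed in this study
    },
    "loneliness": {
        "study_a": "I feel lonely",
        "study_b": "At the moment, I feel lonely",
        "study_c": "In the past hour, I felt lonely",
    },
}

def harmonizable_studies(construct):
    """Return the studies in which a construct can be compared."""
    return [s for s, item in CROSSWALK[construct].items() if item is not None]
```

A reuser could then call, for example, `harmonizable_studies("yearning")` to see at a glance which datasets can be pooled for that construct.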

Harmonization results

This file contains the EMA item harmonization results for rating prolonged grief reactions across three studies. It presents expert agreement levels and final recommendations.
You will also find the instructions used during the harmonization process included for reference.

Expert panel:

Clare Killikelly (University of Zurich)
Franziska Lechner-Meichsner (Bergische Universität Wuppertal)
Hannah Comtesse (FernUniversität in Hagen)
Kirsten Smith (University of Oxford)
Geert Smid (University of Humanistic Studies, ARQ Centrum'45)
Frida Berglund (Uppsala University)
Rebecca Rhodin (Uppsala University)
Maja O'Connor (Aarhus University)
Iryna Frankova (Vrije Universiteit Amsterdam)
Paul Boelen (Utrecht University, ARQ Centrum'45)
Liia Kivelä (University of Twente)
Laura Hofmann (Medical School Berlin)

Moments from Grief Expert Meeting, Zurich, April 2025


Step 2: Harmonizing Different Sampling Schemes

The second step in our harmonization process focused on sampling schemes -- specifically, how often and when participants were asked to respond to surveys during the day.

Based on our overview of existing intensive longitudinal data studies on grief, we realized different studies used different approaches. Some prompted participants five times a day, others fewer or more often. This raised a key question: When merging datasets with different sampling frequencies, how should we treat the timing of responses?

To explore this, we invited five experts in ecological momentary assessment (EMA) for a focused discussion. Their input helped us think through technical challenges related to differences in sampling frequency and data organization across studies, and how to support flexible data use.

One key takeaway from this meeting was that diary studies with only one notification per day aren’t ideal for the Grief-ID archive at this stage, because these studies measure symptoms at a different rhythm, typically asking how someone felt over the past day rather than in the moment. Aggregating intensive data to match that rhythm would mean losing important details about how symptoms fluctuate throughout the day. That level of detail is central to intensive longitudinal research.

Similar to the item harmonization efforts, we acknowledge that any way of merging different notification times can seem arbitrary, and the final decision always rests with the data reusers. However, we suggest organizing responses either by time of day (e.g., morning, afternoon, evening), as presented in the table below, or by the sequence of notifications within a day (e.g., first prompt, second prompt, etc.). Importantly, we don’t plan to remove any raw data, unless it’s identifiable.

| Study A (14 days; 5 times/day) | Study B (17 days; 6 times/day) | Study C (28 days; 4 times/day) | Harmonization suggestion |
|---|---|---|---|
| In the past three hours… | At the moment… | In the past hour… | |
| 8:30 – 9:30 AM | 9:00 – 10:30 AM | 7:30 – 8:30 AM | Morning |
| 11:30 AM – 12:30 PM | 11:00 AM – 12:30 PM | 11:30 AM – 12:30 PM | Noon |
| 2:30 – 3:30 PM | 1:00 – 2:30 PM | | Afternoon |
| | 3:00 – 4:30 PM | 3:30 – 4:30 PM | |
| 5:30 – 6:30 PM | 5:00 – 6:30 PM | | |
| 8:30 – 9:30 PM | 7:00 – 8:30 PM | 7:30 – 8:30 PM | Evening |
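The two suggested organizations, binning by time of day and numbering prompts by their order within a day, can be sketched in a few lines. The bin boundaries below are illustrative assumptions, not final Grief-ID cut-offs:

```python
from datetime import time

# Sketch of harmonizing sampling schemes: map a prompt's clock time to a
# time-of-day slot, or number prompts by their order within a day.
# The cut-off times are illustrative assumptions only.
def time_of_day(t: time) -> str:
    if t < time(11, 0):
        return "morning"
    if t < time(13, 0):
        return "noon"
    if t < time(17, 0):
        return "afternoon"
    return "evening"

def number_prompts(times):
    """Label a day's prompts 1..n by their chronological order."""
    return {t: rank + 1 for rank, t in enumerate(sorted(times))}
```

For example, `time_of_day(time(8, 30))` falls into the morning slot for all three studies, while a 3:30 PM prompt lands in the afternoon slot; the sequence-based alternative ignores clock time entirely and keeps only the within-day order.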

We also talked about how best to structure the datasets. In future versions of the archive, we plan to create two clearly separated datasets: a cross-sectional dataset containing demographic information and baseline and follow-up measures, and an EMA dataset with time-varying data such as prolonged grief reactions and daily context.

The goal is to store the data in a super long format. This means fewer repeated timestamps and better handling of missing data across studies. In the archive, we’re planning to include both the original, non-harmonized timestamps and a suggested set of harmonized time slots (to make comparison across studies easier).
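As a rough illustration, a long format stores one row per participant, prompt, and variable, carrying both the original timestamp and a suggested harmonized slot on every row. The field names below are hypothetical, not the archive's actual schema:

```python
# Sketch of reshaping one wide EMA row into long-format records.
# Field names ("id", "slot", "yearning", ...) are illustrative assumptions.
wide_row = {
    "id": "p001",
    "timestamp": "2024-03-01 08:45",  # original, non-harmonized timestamp
    "slot": "morning",                # suggested harmonized time slot
    "yearning": 4,
    "loneliness": 2,
}

def to_long(row, value_vars):
    """Return one record per variable, repeating the identifying fields."""
    fixed = {k: row[k] for k in ("id", "timestamp", "slot")}
    return [{**fixed, "variable": v, "value": row[v]} for v in value_vars]

records = to_long(wide_row, ["yearning", "loneliness"])
```

A missing prompt then simply has no rows, rather than a row full of empty cells, which is what makes missing data easier to handle across studies with different numbers of prompts.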

We also reminded ourselves that harmonization is mainly about organizing the data, not analyzing it. Decisions about how to deal with differences in time intervals or the number of measurements can often be addressed later, by the reuser, when thinking about the analytic plan.

Step 3: Harmonizing Different Rating Scales

The final step in our harmonization process dealt with differences in rating scales. Some studies may use a visual analog scale (VAS) ranging from 0 to 100, while others use a Likert scale ranging from 0 to 6.

This presents a tricky challenge when trying to harmonize response scales across studies. These scales don’t just differ in numbers, they also differ in granularity. A 0–100 scale can capture small fluctuations in symptom intensity, while a 0–6 scale is more limited. This could affect how we interpret moment-to-moment changes, or introduce floor or ceiling effects that mask more subtle patterns.

One idea might be to simply rescale everything to the same format, for example, converting all scores to a 0–100 range. But this isn’t as straightforward as it sounds: rescaling can result in loss of information and may distort the original meaning of the responses.

At this point, we believe this issue is best handled during the phase of making an analytic plan, rather than through forced harmonization in the archive itself. It’s up to each researcher to decide how much detail they want to preserve and which trade-offs they’re willing to accept.

That said, we think it’s important to flag this challenge clearly for users of the Grief-ID archive. Combining data across different answer scales can affect your findings, and the decision of whether (and how) to harmonize scales should be guided by your specific research question.

We welcome input from other researchers on this issue and look forward to hearing how others choose to approach it in their analyses.

Looking Ahead

We hope this work inspires others to consider contributing their own datasets to Grief-ID. As you've seen, even datasets with different designs, items, or scales can still be meaningfully combined. By pooling our efforts, we open the door to more powerful and insightful findings about how grief unfolds in daily life. These efforts contribute to the "Reusability" principle in the FAIR data framework, ensuring that the data aren’t just stored, but actually usable in practice.

Grief-ID is a living archive, meant to grow, evolve, and support new discoveries over time. Whether you’re designing a new ILD study or have already collected data, we encourage you to see how your work could add value to this collective resource.

And this is only the beginning. We're continuing to work on further standardization and automation, in collaboration with our colleagues at Global Collaboration for Traumatic Stress, and Trauma Data Institute. Stay tuned for more updates, and thank you for being part of the growing community around Grief-ID.

For more information about the first version of Grief-ID read this data note: https://doi.org/10.1080/20008066.2025.2526885

-----------------------------------------------------------------

New paper alert

The data note describing the contents of the Grief-ID dataset has now been officially published in the European Journal of Psychotraumatology. It provides an overview of the study design, measures, and structure of the data -- making it easier for other researchers to reuse the dataset.

June 2025

New paper alert

During the International Society for Traumatic Stress Studies (ISTSS) conference in Boston in September 2024, Lonneke Lenferink and Justina Pociūnaitė-Ott gave a paper-in-a-day workshop. Together with early career researchers from different countries, we investigated how a person's daily social life is related to grief reactions after the death of a loved one due to murder, suicide, or accident.