Survey design and conjoint experiments
The survey was planned and designed in early summer 2022, with the goal of providing scientific evidence for potential vaccination campaigns in future scenarios of the pandemic. Conjoint experiments have previously been used to analyze how a broad range of factors affects vaccine acceptance, and the research design allows exploring the implications of various future scenarios. We reviewed the literature on vaccination readiness to identify relevant attributes to be manipulated in the experiment8,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39. We also consulted practitioners in the public health sector and country experts to assess which new and ongoing developments might have consequences for vaccine uptake in future scenarios. We found that both objective conditions, such as virus variants and the availability of new vaccines, and more subjective factors, such as motivations, were likely to play a role. The evidence also suggested that, in addition to the vaccination campaigns themselves, media coverage on vaccinations and the wider information environment were likely to affect the public mood when further rounds of vaccinations became necessary. We, therefore, included two experiments to cover both of these aspects. In both experiments, we varied attribute levels randomly to assess which components of a multidimensional treatment would be influential. The complete list of attributes and their levels as well as the wording of the prompt and follow-up questions are available in Supplementary Files 3 and 4. Note that study participants evaluated only hypothetical scenario descriptions and were not assigned any medical treatment or product.
Besides the conjoint experiments, the survey included questions on sociodemographics, general health, attitudes toward vaccination, trust in institutions, and emotions, such as experienced levels of pandemic fatigue. The standardized questionnaire was designed so that interviews would last about 15 minutes. All questions were asked in a closed-ended format. We used nominal scales, five-point Likert-type scales and numerical rating scales (for example, a 0–10 rating scale) to record the responses. The questions were translated by native speakers and checked by experts familiar with the local conditions and vaccination discourse in Austria and Italy.
The questionnaire was then programmed and tested. Before the questionnaire was fielded, the survey was pilot tested by the research team and laypersons, and some minor adjustments were made based on the feedback. The survey was then fielded by the survey company, and the interviews were realized as computer-assisted web interviews (CAWIs).
Details on experiment 1—a hypothetical vaccination campaign
Experiment 1 was designed to assess scenarios for a hypothetical vaccination campaign. The discussion with experts in public health revealed that there was uncertainty about the evolution of virus variants, and we included this attribute to evaluate to what extent the severity of virus variants could influence vaccination decisions. The review of the literature revealed that the properties of available vaccines, costs/incentives and campaign messages could affect vaccine uptake. Discussions with country experts revealed that some people seemed to be waiting for new vaccines to become available, with some waiting for inactivated virus vaccines and others for vaccines adapted to the Omicron variants. Regarding campaign messages, the literature suggested that emotions may play a role in vaccine uptake. To convey emotions, we used a testimonial by a fictitious person (randomizing age group and gender) making a statement about his or her motivation to get vaccinated. We included statements about economic and health risks at the individual, interpersonal and collective levels. Following advice from the expert consultations, we also included messages emphasizing a sense of community and self-efficacy. The wording of all attributes and levels is listed in Supplementary File 3 (see Supplementary Table 3.1: English translation; Supplementary Table 3.2: original version in German; and Supplementary Table 3.3: original version in Italian).
Details on experiment 2—media communication on vaccinations
Experiment 2 was designed to assess scenarios of hypothetical media coverage on vaccinations. Recent research has demonstrated the importance of communicating expert consensus to increase vaccinations. However, media coverage often tries to balance perspectives, even when there is far-reaching consensus (‘false balance’). We, therefore, included fictional reports about a TV discussion round as an attribute in the experiment. Previous research also suggested that celebrity endorsement might play a role. As recent coverage of vaccination included celebrities both endorsing and opposing vaccination, we included a brief fictional report on celebrity behavior. The discussion with public health experts further revealed that there was great uncertainty about the risks of Long COVID. Hence, we included an attribute varying the change in the likelihood of getting Long COVID after infection. Finally, previous research has shown that legal rules, such as vaccine passports and vaccine mandates, may matter for vaccinations. To evaluate whether such instruments would be suitable for increasing vaccinations, we included an attribute for the requirement of a vaccine passport and two kinds of vaccine mandates. The wording of all attributes and levels is listed in Supplementary File 4 (see Supplementary Table 4.1: English translation; Supplementary Table 4.2: original version in German; and Supplementary Table 4.3: original version in Italian).
We used population-representative quotas to recruit the respondents from the commercial online access panel of Marketagent GmbH (certified under International Organization for Standardization (ISO) standard 20252), encompassing more than 135,000 registered panelists in Austria and 87,000 in Italy. The target population in each country was residents aged 14+ years. Note that, from the age of 14, adolescents in Austria and Italy can decide for themselves, without their parents’ consent, whether they want to be vaccinated or not. The survey was initially opened for 61,503 Austrians and 73,077 Italians and closed after the target quotas were reached in each country. Eventually, 3,187 participants from Austria and 3,170 participants from Italy took part in the study after providing informed consent. The participation rate was overall similar to comparable studies8,57,58,59. The quota targets and actual values in the sample are shown in Supplementary File 1. Note that, although we aimed to make the sample structure match the targets as closely as possible, some subgroups of the population remain hard to reach for online surveys (for example, people of very high age and language minorities).
Information about the countries
During several stages of the global vaccine rollout, the German-speaking countries showed the lowest COVID-19 vaccination rates in Western Europe, with Austria having long held the highest share of unvaccinated people among these countries60,61,62. In August 2022, 77.1% of Austrians had completed the primary course, and 59.2% had received the first booster41. The flattening of vaccination rates, however, indicates hesitant booster uptake. Austria was, therefore, among the first countries to announce a general COVID-19 vaccination mandate, which it recently withdrew after experiencing increasing societal polarization over this issue63. Although Italy, like Austria, was also struggling with booster campaigns at the beginning of autumn 2022, it ranked higher in vaccination uptake among the European countries. In August 2022, 80.2% of Italians had completed the primary course, and 71.5% had received boosters11,42. Like Austria, Italy also introduced and later withdrew a vaccination mandate, although it applied only to healthcare workers64. Vaccine fatigue, however, is also high in many other countries. For instance, it has been reported that booster uptake in Canada was stalling2, and, according to recent news reports, the situation is similar in the United States3.
Sample size calculation
We calculated a minimum sample size of 2,017 for a conjoint experiment based on the size of the population of Austria/Italy, a desired confidence level of 95% and an error margin of 2%. We also took into consideration that it might not be possible to motivate some population groups to undergo further vaccinations due to vaccine fatigue. Although, by July 2022, about 77% of Austrians and 85% of Italians had received at least one dose, only 59% of Austrians and 72% of Italians had received the first booster vaccination. Based on these numbers, we estimated that only two-thirds of the respondents in our sample might be susceptible to treatment, whereas about one-third would refuse vaccination under any scenario. We, therefore, opted for a sample size of approximately 3,000 respondents for each country.
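A minimum sample size of this kind is commonly obtained from Cochran's formula with a finite-population correction. The text does not state the assumed response proportion, so the value p = 0.3 below is a hypothetical assumption that happens to reproduce the reported figure; this is a sketch of the standard calculation, not necessarily the authors' exact procedure.

```python
import math

def cochran_sample_size(p, margin, z=1.96, population=None):
    """Minimum sample size to estimate a proportion p within the given
    error margin; z = 1.96 corresponds to a 95% confidence level.
    If a finite population size is supplied, the finite-population
    correction is applied."""
    n0 = z ** 2 * p * (1 - p) / margin ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# A hypothetical assumed proportion of p = 0.3 reproduces the reported
# minimum of 2,017 at a 95% confidence level and a 2% error margin;
# for populations the size of Austria or Italy, the finite-population
# correction barely changes the result.
n_min = cochran_sample_size(p=0.3, margin=0.02)
```

With the conventional conservative choice p = 0.5 the formula yields a somewhat larger minimum, so the reported 2,017 implies a less conservative assumed proportion.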
We used descriptive statistics for sociodemographic variables and attitudes to characterize the sample. We present results in tabular form and as box or bar plots as appropriate and included them in Extended Data Figs. 1 and 2. To evaluate the impact of the attribute levels in the conjoint experiments, we computed average marginal component effects (AMCEs) with 95% CIs for the binary responses. We display the AMCEs by country and vaccination status as coefficient plots for ease of interpretation. We show the results with and without correction for multiplicity. For the full estimation tables, see Supplementary Files 5–12. We included a disaggregated description of the results by gender in Extended Data Fig. 3 and Supplementary Files 13–16. For reporting, we adhered to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist (Supplementary File 17).
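The text does not specify the AMCE estimator. Under independent, uniform randomization of attribute levels, the AMCE of a level can be estimated nonparametrically as the difference in mean outcomes relative to a baseline level. A minimal sketch with fabricated example data (the attribute name, levels and outcomes below are purely illustrative, not data from the study):

```python
from statistics import mean

def amce(profiles, attribute, level, baseline, outcome="chosen"):
    """Nonparametric AMCE estimate: difference in mean outcomes between
    profiles showing `level` and profiles showing `baseline`.
    Valid when attribute levels are randomized independently and uniformly."""
    treated = [p[outcome] for p in profiles if p[attribute] == level]
    control = [p[outcome] for p in profiles if p[attribute] == baseline]
    return mean(treated) - mean(control)

# Illustrative (fabricated) profile evaluations:
profiles = [
    {"cost": "free", "chosen": 1},
    {"cost": "free", "chosen": 1},
    {"cost": "free", "chosen": 0},
    {"cost": "fee",  "chosen": 1},
    {"cost": "fee",  "chosen": 0},
    {"cost": "fee",  "chosen": 0},
]
effect = amce(profiles, "cost", level="free", baseline="fee")
```

In practice, AMCEs and their 95% CIs are typically obtained by regressing the binary choice on dummy-coded attribute levels with standard errors clustered by respondent; under uniform randomization, that regression estimate coincides with the difference in means above.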
Assessment of data quality
To assess data quality, we investigated non-response patterns. Specifically, we calculated the number of times respondents did not give substantive answers (‘Don’t know’, ‘No answer’). We flagged an interview as suspicious when the respondent gave a non-substantive answer more than 50% of the time. We found no Austrian participants and 151 Italian participants for whom this was true and performed the analyses with and without these participants. There were no changes in the results of the conjoint experiments, and, for this reason, we did not exclude these participants from the analyses.
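The screening rule described above can be sketched as follows (the function and variable names are our own; the rule flags an interview once more than half of its answers are non-substantive):

```python
# Response codes treated as non-substantive in the screening rule.
NON_SUBSTANTIVE = {"Don't know", "No answer"}

def is_suspicious(answers, threshold=0.5):
    """Flag an interview when more than `threshold` of its answers
    are non-substantive ('Don't know' / 'No answer')."""
    flagged = sum(a in NON_SUBSTANTIVE for a in answers)
    return flagged / len(answers) > threshold

# Hypothetical interviews: 3 of 4 non-substantive answers -> flagged;
# 1 of 4 -> kept.
assert is_suspicious(["Don't know", "No answer", "Don't know", "Yes"])
assert not is_suspicious(["Yes", "No", "No answer", "Yes"])
```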
Details on how we dealt with missing values
Respondents were required to answer the questions on the conjoint experiments to complete the interview. For the binary choice questions, respondents were instructed to spontaneously choose one of the two options if they felt indifferent. Therefore, there are no missing values on our outcome variable. The measures that were included in the survey for descriptive purposes, such as sociodemographics, institutional trust and emotions, offered opt-out options (‘Don’t know’, ‘No answer’). When calculating descriptive statistics, such responses were excluded from the analysis.
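For the descriptive measures, excluding opt-out responses amounts to dropping those codes per item before computing a statistic. A minimal sketch (the ratings and function name are hypothetical):

```python
from statistics import mean

# Opt-out codes offered for the descriptive survey items.
OPT_OUT = {"Don't know", "No answer"}

def descriptive_mean(responses):
    """Mean of a rating item after dropping opt-out responses,
    mirroring the exclusion rule used for descriptive statistics."""
    valid = [r for r in responses if r not in OPT_OUT]
    return mean(valid)

# Hypothetical 0-10 ratings with one opt-out: the mean is taken
# over the three substantive answers only.
item_mean = descriptive_mean([7, 4, "No answer", 9])
```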
The conjoint experiments were not pre-registered, which was not required under local guidelines and did not seem appropriate, given the exploratory nature of the research question. A conjoint experiment is a survey-based, exploratory research method that attempts to understand how people make complex choices. It follows a cross-sectional, observational design and collects data only once. It includes hypothetical scenarios, which we developed for the present study in a way that all attribute levels were realistic and all possible combinations were plausible in both country contexts. Attribute levels were randomized, and not all participants saw the same levels. The experiment, thus, provides initial, exploratory evidence on which channels and which kinds of messages could reach which target groups. Based on the findings from a conjoint experiment like ours, one can develop a targeted intervention and then test it in a subsequent clinical trial. Similar studies were also not registered as clinical trials24,25,26.
Reliability and validity of survey instruments
This study used several survey instruments to characterize the study population (Supplementary Files 4–6) and two survey experiments to assess the effect of various contextual features and interventions (Supplementary Files 7 and 8) that can influence the likelihood of getting vaccinated. Specifically, to characterize the underlying psychology of different subgroups in the sample, we used measures to capture (1) emotions, (2) trust in various institutions and (3) attitudes toward COVID-19 vaccination. In the experiments, we manipulated the virus variant, the availability of vaccines, costs/incentives, campaign messages, consensus of experts, celebrity endorsement, information on Long COVID and legal rules, such as vaccine passports and vaccine mandates. We measured their impact on the evaluation of the campaign (experiment 1), trust in the vaccine (experiment 2) and the likelihood of getting vaccinated (both experiments). In the following, we discuss aspects related to the reliability and validity of the survey measures used in this study.
Regarding the measurement of emotions, we adopted the approach of assessing a set of distinct emotions. The reliability and validity of the measurement of emotions in surveys have been evaluated, for instance, by Marcus et al.65. In their study, they asked respondents to describe how they felt using a set of distinct emotions (for example, being worried, hopeful, angry, etc.). They showed that both radio buttons and continuous slider scales are highly reliable instruments for capturing emotions in surveys. According to their results, a more fine-grained response scale with more than five categories was preferable because it allows for more nuanced responses and better discrimination, which is why we used an 11-point scale to record the responses. They also assessed the construct validity of the measurement of emotions. Among other things, they assessed to what extent emotions correlated in expected ways with interest in novel information, confirming, for example, that anxiety is positively related to information seeking. We built on this approach and added feeling ‘tired’ or ‘exhausted’ as items to the list of emotions, as Lilleholt et al.66 have shown that the feeling of fatigue is negatively related to information seeking, as well as to preventive behavior, such as physical distancing, wearing masks and hygiene. In our data, avoiding information seeking was correlated with feeling tired (r = 0.3, P < 0.001).
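Assuming the reported r is a Pearson product-moment correlation (the conventional reading of r), its computation can be sketched from two equal-length response vectors; the vectors below are illustrative, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linearly related illustrative vectors give r = 1.
r = pearson_r([0, 2, 4, 6], [1, 3, 5, 7])
```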
To measure trust in institutions, we used an item battery with a list of institutions that respondents evaluated on an 11-point scale. Due to their relevance as possible sources and multipliers of health-related information in the context of the pandemic, we included pharmaceutical companies, the healthcare system, schools and science, in addition to the government, parliament and the media. The theoretical underpinnings for the measurement go back to political culture studies and studies of political system support67,68. Similar measures have been conventionally used in many surveys, such as the European Social Survey (ESS), the World Value Survey (WVS), the European Union Statistics on Income and Living Conditions (EU-SILC), the European Quality of Life Survey (EQLS) and the Eurobarometer or the Gallup World Poll (GWP). The reliability of such measures has been evaluated, for instance, in an Organization for Economic Cooperation and Development (OECD) working paper by González and Smith69. They reported correlations that are in line with a high degree of reliability of the measurement and found a good performance in terms of construct validity.
Various models of the psychological antecedents of vaccination have been discussed in the literature (see, for example, Betsch et al.70 for an overview), and this literature has identified many psychological factors affecting vaccine acceptance. Based on this research, as well as on insights from qualitative interviews conducted by colleagues from the ‘Solidarity in times of a pandemic’ (SOLPAN) project71, a new item battery to capture attitudes toward COVID-19 vaccination was developed in the context of the Austrian Corona Panel Project57 at the beginning of the initial rollout (for the results from the item battery, see Extended Data Fig. 2). These items were found to be strongly correlated with the number of vaccinations, jointly explaining more than 50% of the variance of the COVID-19 vaccination status. As can be seen in Supplementary File 6, these items also showed the expected relationships in our sample. In the context of this study, we added two further items to capture perceptions that (1) the COVID-19 vaccines provide only limited protection against infection (‘Vaccination does not help, you still get ill’) and (2) the emergence of new virus variants reduces the need for vaccination (‘The current virus variant is mild, so I don’t need vaccination’). These items, too, showed the expected relationships with vaccination status. In our sample, the items explained 61% of the variance of vaccination readiness. As attitudes and vaccination status mapped quite closely onto each other, and as this seemed most relevant in practical terms for vaccination campaigns, we conditioned on vaccination status in our analysis. An analysis using psychological factors as moderators did not yield fundamentally different insights.
Finally, experimental treatments and response scales were similar to those used by previous research on interventions for increasing COVID-19 vaccine uptake (for a review, see Batteux et al.22). Thus, similar measures have been conventionally used and can be considered established instruments in the relevant literature (see also the cited literature on properties of vaccines24,25,26, communication (for example, campaign messages27,28,29,30, expert consensus31 and celebrity endorsement32,33,34), costs/incentives8,34,35,36 and legal rules (for example, vaccine passports36,37 and vaccine mandates38,39)). In terms of the predictive validity of such survey measures for actual behavior, note that reported vaccination intentions from panel surveys were highly predictive of actual vaccination behavior in the context of the COVID-19 pandemic72. Thus, considering all aspects related to the measurement, we think that the survey instruments and experimental designs used provide reliable and valid measurement.
All survey participants provided informed consent to the survey company that carried out the fieldwork. Only anonymized data were received and analyzed. Research ethics approval for this study was not required according to institutional and national guidelines, such as the Medical Devices Act of Austria.
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.