Data Collection Innovation -> Big Data and 'Organic Data' -> Big Experimentation and Computational Social Science
New paper out: Assessing the perceived effect of non-pharmaceutical interventions on SARS-CoV-2 transmission risk: an experimental study in Europe
We conduct a large (N = 6567) online experiment to measure the features of non-pharmaceutical interventions (NPIs) that citizens of six European countries perceive to lower the risk of transmission of SARS-CoV-2 the most. We collected data in Bulgaria (n = 1069), France (n = 1108), Poland (n = 1104), Italy (n = 1087), Spain (n = 1102) and Sweden (n = 1097). Based on the features of the most widely adopted public health guidelines to reduce SARS-CoV-2 transmission (mask wearing vs not, outdoor vs indoor contact, short vs 90-minute meetings, few vs many people present, and physical distancing of 1 or 2 m), we conducted a discrete choice experiment (DCE) to estimate the public’s perceived risk of SARS-CoV-2 transmission in scenarios that presented mutually exclusive constellations of these features. Our findings indicate that participants’ perception of transmission risk was most influenced by the NPI attributes of mask-wearing and outdoor meetings, and least by the NPI attributes focused on physical distancing, meeting duration, and meeting size. Differentiating by country, gender, age, cognitive style (reflective or intuitive), and perceived threat of COVID-19 further allowed us to identify important differences between subgroups. Our findings highlight the importance of improving health policy communication and citizens’ health literacy about the design of NPIs and the transmission risk of SARS-CoV-2 and potentially future viruses.
A new paper with the French team of Periscope
https://www.tandfonline.com/doi/full/10.1080/21642850.2023.2287663
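For readers unfamiliar with discrete choice experiments: with two scenarios per choice task, a conditional logit on the attribute differences identifies how strongly each NPI attribute drives the perceived-risk choice. The sketch below simulates and fits such a model; the attribute weights are invented for illustration and are not the paper's estimates.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Five binary NPI attributes per scenario, coded 1 = riskier level
# (no mask, indoor, 90-minute meeting, many people, 1 m distance).
# These weights are made up for illustration, NOT the paper's estimates.
true_beta = np.array([2.0, 1.5, 0.4, 0.3, 0.2])

n_tasks = 5000
A = rng.integers(0, 2, size=(n_tasks, 5)).astype(float)  # scenario A
B = rng.integers(0, 2, size=(n_tasks, 5)).astype(float)  # scenario B
X = A - B  # attribute differences between the two scenarios

# Simulated respondents pick the scenario they perceive as riskier;
# with two alternatives the conditional logit reduces to a logit
# on the attribute differences.
y = (rng.random(n_tasks) < 1.0 / (1.0 + np.exp(-(X @ true_beta)))).astype(float)

def neg_log_lik(beta):
    eta = X @ beta
    # log P(y) = y*eta - log(1 + exp(eta)), summed over choice tasks
    return -np.sum(y * eta - np.logaddexp(0.0, eta))

def neg_grad(beta):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return -X.T @ (y - p)

res = minimize(neg_log_lik, np.zeros(5), jac=neg_grad, method="BFGS")
beta_hat = res.x
print(np.round(beta_hat, 2))  # estimated attribute weights
```

With enough choice tasks, the estimates recover the dominant role of the first two (mask and indoor/outdoor) attributes, mirroring the kind of attribute ranking reported in the abstract.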
New paper with my PhD student Filippo.
https://www.sciencedirect.com/science/article/pii/S0955395924000136?via%3Dihub
Structural equation modeling (SEM) is a widespread and commonly used approach to test substantive hypotheses in the social and behavioral sciences. When performing hypothesis tests, it is vital to rely on a sufficiently large sample size to achieve an adequate degree of statistical power to detect the hypothesized effect. However, applications of SEM rarely consider statistical power in informing sample size considerations or determine the statistical power for the focal hypothesis tests performed. One reason is the difficulty in translating substantive hypotheses into the specific effect size values required to perform power analyses, as well as the lack of user-friendly software to automate this process. The present paper presents the second version of the R package semPower, which includes comprehensive functionality for various types of power analyses in SEM. Specifically, semPower 2 allows one to perform both analytical and simulated a priori, post hoc, and compromise power analyses for structural equation models with or without latent variables. It also supports multigroup settings and provides user-friendly convenience functions for many common model types (e.g., standard confirmatory factor analysis [CFA] models, regression models, autoregressive moving average [ARMA] models, cross-lagged panel models) to simplify power analyses when a model-based definition of the effect in terms of model parameters is desired.
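To make the a priori flavor of such a power analysis concrete, here is a minimal Python sketch of the standard RMSEA-based chi-square power computation that tools like semPower automate; this is an illustration of the underlying statistics, not the semPower R API, and the function names are my own.

```python
# A priori power analysis for the SEM likelihood-ratio chi-square test,
# with population misfit expressed as RMSEA (illustrative sketch).
from scipy.stats import chi2, ncx2

def power_from_n(n, df, rmsea, alpha=0.05):
    """Power to reject exact fit when population misfit equals `rmsea`."""
    ncp = (n - 1) * df * rmsea**2    # noncentrality parameter
    crit = chi2.ppf(1 - alpha, df)   # critical value under H0 (exact fit)
    return 1 - ncx2.cdf(crit, df, ncp)

def required_n(df, rmsea, target_power=0.80, alpha=0.05):
    """Smallest N reaching the target power (simple linear search)."""
    n = 10
    while power_from_n(n, df, rmsea, alpha) < target_power:
        n += 1
    return n

# minimum N for df = 100, RMSEA = .05, alpha = .05, power = .80
print(required_n(100, 0.05))
```

Post hoc power analysis corresponds to calling `power_from_n` directly with the achieved N; a compromise analysis would instead balance alpha and beta at a fixed N.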
Abstract. The capacity for language is a defining property of our species, yet despite decades of research, evidence on its neural basis is still mixed and a general consensus is difficult to achieve. We suggest that this is partly caused by researchers defining “language” in different ways, with focus on a wide range of phenomena, properties, and levels of investigation. Accordingly, there is very little agreement among cognitive neuroscientists of language on the operationalization of fundamental concepts to be investigated in neuroscientific experiments. Here, we review chains of derivation in the cognitive neuroscience of language, focusing on how the hypothesis under consideration is defined by a combination of theoretical and methodological assumptions. We first attempt to disentangle the complex relationship between linguistics, psychology, and neuroscience in the field. Next, we focus on how the conclusions that can be drawn from any experiment are inherently constrained by auxiliary assumptions, both theoretical and methodological, on which the validity of those conclusions rests. These issues are discussed in the context of classical experimental manipulations as well as study designs that employ novel approaches such as naturalistic stimuli and computational modeling. We conclude by proposing that a highly interdisciplinary field such as the cognitive neuroscience of language requires researchers to form explicit statements concerning the theoretical definitions, methodological choices, and other constraining factors involved in their work.