
In neuropsychological research, researchers can choose from a near-limitless number of approaches when designing studies. Here we showcase the multiverse/specification curve technique to establish the robustness of analytical pathway choices within classic psychometric test validation, using an example test of executive function. We examined the impact of choices regarding sample groups, sample sizes, test metrics, and covariate inclusion on convergent validation correlations between tests of executive function. Data were available for 87 neurologically healthy adults and 117 stroke survivors, and a total of 2,220 different analyses were run in a multiverse analysis. We found that the type of sample group, sample size, and test metric used for analyses affected validation outcomes. Covariate inclusion choices did not affect the observed coefficients in our analyses. The present analysis demonstrates the importance of carefully justifying every aspect of a psychometric test validation study a priori, with theoretical and statistical factors in mind. It is essential to thoroughly consider the purpose and use of a new tool when designing validation studies.
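The logic of a specification curve analysis like the one described above can be sketched in a few lines of Python: enumerate the analytic choices (sample group, sample size, test metric), run the same convergent-validity correlation under every combination, and sort the resulting coefficients. This is a hypothetical illustration on synthetic data, not the paper's actual analysis pipeline; the group sizes mirror those reported in the abstract, but the metrics, subsample sizes, and data-generating process are assumptions for the sake of example.

```python
import itertools
import numpy as np

# Synthetic stand-in data: 87 healthy adults + 117 stroke survivors,
# as in the abstract. All scores below are simulated, not real data.
rng = np.random.default_rng(0)
n = 204
group = np.array(["healthy"] * 87 + ["stroke"] * 117)

# Two hypothetical test metrics and a criterion measure, all loading on
# a shared latent "executive function" factor with different noise levels.
latent = rng.normal(size=n)
scores = {
    "metric_accuracy": latent + rng.normal(scale=1.0, size=n),
    "metric_speed": latent + rng.normal(scale=2.0, size=n),
}
criterion = latent + rng.normal(scale=1.0, size=n)

# The "multiverse" of analytic choices (illustrative, not the paper's 2,220).
groups = ["healthy", "stroke", "combined"]
sample_sizes = [50, 100, None]  # None = use the full group
metrics = list(scores)

results = []
for g, n_sub, m in itertools.product(groups, sample_sizes, metrics):
    mask = np.ones(n, dtype=bool) if g == "combined" else (group == g)
    idx = np.flatnonzero(mask)
    if n_sub is not None:
        if n_sub > idx.size:
            continue  # skip specifications the sample cannot support
        idx = rng.choice(idx, size=n_sub, replace=False)
    r = np.corrcoef(scores[m][idx], criterion[idx])[0, 1]
    results.append(((g, n_sub, m), r))

# The specification curve: coefficients sorted from smallest to largest,
# showing how much the validation outcome depends on analytic choices.
curve = sorted(r for _, r in results)
print(f"{len(results)} specifications, r from {curve[0]:.2f} to {curve[-1]:.2f}")
```

Plotting `curve` against its rank (often with markers indicating which choice each specification made) yields the familiar specification curve figure; the spread of coefficients across specifications is what indicates whether a validation result is robust to analytic choices.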

Keywords: clinical population, executive function, multiverse analysis, specification curve analysis, validation