In this way, cross-validation mimics the advantages of an independent replication with the same amount of collected data (Yarkoni and Westfall, 2017). Splitting data into training and test subsets can be done using various methods, including holdout cross-validation, k-fold cross-validation, leave-one-subject-out cross-validation, and leave-one-trial-out cross-validation.
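The following sketch illustrates these four splitting schemes on toy data. It assumes scikit-learn, which the text does not name; leave-one-subject-out is expressed through grouped splitting and leave-one-trial-out through leave-one-out splitting.

```python
# Minimal sketch of the four splitting schemes, assuming scikit-learn
# and toy data; none of these choices come from the original text.
import numpy as np
from sklearn.model_selection import (
    train_test_split, KFold, LeaveOneGroupOut, LeaveOneOut)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))           # 40 trials, 3 features (toy data)
y = rng.integers(0, 2, size=40)        # binary labels
subjects = np.repeat(np.arange(8), 5)  # 8 subjects, 5 trials each (hypothetical)

# Holdout: a single train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# k-fold: each of the k subsets serves once as the test set.
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    pass  # fit on X[train_idx], evaluate on X[test_idx]

# Leave-one-subject-out: all trials of one subject form the test set.
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    pass

# Leave-one-trial-out: a single trial forms the test set.
for train_idx, test_idx in LeaveOneOut().split(X):
    pass
```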
As an extreme example, if the number of parameters of a model is the same as or greater than the number of observations, then a model can perfectly predict the training data simply by memorizing the data in its entirety.
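To make the memorization point concrete, here is a hedged numerical sketch: a degree-9 polynomial has ten coefficients, exactly as many parameters as the ten observations it is fit to, so it reproduces the training data essentially exactly while failing on fresh draws from the same process. The data-generating function and noise level are illustrative assumptions.

```python
# Sketch: a model with as many parameters as observations memorizes
# the training data but generalizes poorly (toy setup, assumed values).
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.2, size=10)

# Degree-9 polynomial: 10 coefficients for 10 observations.
coefs = np.polyfit(x_train, y_train, deg=9)
train_resid = np.polyval(coefs, x_train) - y_train
print(np.max(np.abs(train_resid)))  # ~0 up to numerical precision

# Fresh draws from the same process expose the memorization.
x_test = rng.uniform(0, 1, size=10)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(scale=0.2, size=10)
test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
print(test_mse)  # typically far larger than the training error
```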
Direct replications are replication attempts that aim to reproduce the exact effects obtained by a previous study under the same experimental conditions.
In contrast, conceptual replications examine the general nature of previously obtained effects, aiming to extend the original effects to a new context.
The validity and generalizability of the generated model are then tested on the remaining kth subset (the test dataset).
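A small sketch of this evaluation step, again assuming scikit-learn and, purely for illustration, a logistic-regression classifier: within each fold the model is fit on the k-1 remaining subsets and scored only on the held-out one.

```python
# Sketch of k-fold evaluation: fit on k-1 subsets, score on the held-out
# subset. The classifier and toy data are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 4))
y = (X[:, 0] + rng.normal(scale=0.5, size=60) > 0).astype(int)

fold_scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    fold_scores.append(model.score(X[test_idx], y[test_idx]))  # held-out accuracy

print(np.mean(fold_scores))  # average out-of-sample performance across folds
```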
Cross-validation has the further advantage that it avoids fitting a model too closely to the peculiarities of a data set (overfitting).
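One way to see this advantage, under an assumed toy setup: a one-nearest-neighbor classifier can memorize any training set, so its accuracy on its own training data is perfect even when the labels are pure noise, whereas its cross-validated accuracy correctly drops to chance.

```python
# Sketch: resubstitution accuracy hides overfitting; cross-validation
# exposes it. Model choice and random labels are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 5))
y = rng.integers(0, 2, size=50)  # labels unrelated to the features

knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(knn.score(X, y))                          # 1.0: memorized training data
print(cross_val_score(knn, X, y, cv=5).mean())  # ~0.5: chance out of sample
```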