posted on 2023-05-20, 05:46, authored by Norberg, A, Abrego, N, Blanchet, FG, Adler, FR, Anderson, BJ, Anttila, J, Araujo, MB, Dallas, T, Dunson, D, Elith, J, Foster, SD, Fox, R, Franklin, J, Godsoe, W, Guisan, A, O'Hara, B, Hill, N, Holt, RD, Hui, FKC, Husby, M, Kalas, JA, Lehikoinen, A, Luoto, M, Mod, HK, Newell, G, Renner, I, Roslin, T, Soininen, J, Thuiller, W, Vanhatalo, J, Warton, D, White, M, Zimmermann, NE, Gravel, D, Ovaskainen, O
A large array of species distribution model (SDM) approaches has been developed for explaining and predicting the occurrences of individual species or species assemblages. Given the wealth of existing models, it is unclear which models perform best for interpolation or extrapolation of existing data sets, particularly when one is concerned with species assemblages. We compared the predictive performance of 33 variants of 15 widely applied and recently emerged SDMs in the context of multispecies data, including both joint SDMs that model multiple species together, and stacked SDMs that model each species individually and combine the predictions afterward. We offer a comprehensive evaluation of these SDM approaches by examining their performance in predicting withheld empirical validation data of different sizes representing five different taxonomic groups, and for prediction tasks related to both interpolation and extrapolation. We measure predictive performance by 12 measures of accuracy, discrimination power, calibration, and precision of predictions, for the biological levels of species occurrence, species richness, and community composition. Our results show large variation among the models in their predictive performance, especially for communities comprising many species that are rare. The results do not reveal any major trade-offs among measures of model performance; the same models performed generally well in terms of accuracy, discrimination, and calibration, and for the biological levels of individual species, species richness, and community composition. In contrast, the models that gave the most precise predictions were not well calibrated, suggesting that poorly performing models can make overconfident predictions. However, none of the models performed well for all prediction tasks.
As a general strategy, we therefore propose that researchers fit a small set of models showing complementary performance, and then apply a cross‐validation procedure involving separate data to establish which of these models performs best for the goal of the study.
History
Publication title
Ecological Monographs
Volume
89
Article number
e01370
Pagination
1-24
ISSN
0012-9615
Department/School
Institute for Marine and Antarctic Studies
Publisher
Ecological Society of America
Place of publication
1707 H St NW, Ste 400, Washington, DC 20006-3915, USA
Rights statement
Copyright 2019 The Authors. Licensed under Creative Commons Attribution 4.0 International (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/