Clinical risk prediction models (CRPMs) play an important role in clinical decision making: doctors use them when deciding on a care plan for their patients. These models give the probability of having a disease and are estimated from data. Such data are increasingly gathered by multiple centres and are then known as multicentre data. Gathering data in this way makes it easier to generalize the results, but it also leads to statistical challenges. In multicentre data, patients from the same centre are assumed to be more similar to each other than to a randomly chosen patient from a different centre. Think, for example, of referral patterns: specialized centres probably receive the more severe cases. These differences can be accounted for by mixed-effects models, which allow each centre to have a cluster-specific intercept. However, these cluster-specific intercepts are unknown for centres that were not included in the development of the model. The goal of this thesis is to compare methods for updating the intercept of a CRPM so that it becomes suitable for patients in new clusters. Nine updating methods are compared by means of a simulation study and a case study. Some of these methods require the prevalence of the disease in the clusters or a dataset from the new cluster; the latter might not always be available.

In the case study, data from the IOTA group on non-pregnant women with at least one persistent adnexal (ovarian) mass are used. The prediction model estimates the probability of a malignant adnexal mass based on patient characteristics, clinical information and ultrasound information. The simulation study generates 24 development datasets, which vary in the amount of variability in the baseline risks, the sample sizes, and the prevalence of the disease. Prediction models are estimated on these datasets and then updated for two new clusters: one with a similar prevalence and one with a dissimilar prevalence. The updating methods are compared on overall model performance, discriminative ability, and the agreement between observed and predicted outcomes (calibration). Overall model performance shows how well the model fits the data, discriminative ability indicates how well the model can distinguish diseased from non-diseased patients, and the agreement between observed and predicted outcomes reflects how well the predicted probabilities match the observed risks.

Based on this comparison of intercept-updating methods for patients in new clusters, it can be concluded that when a small dataset from the new cluster is available, the Bayesian correction based on the outcome incidence and the predictors is preferred. Bayesian statistics, and hence this method, can combine prior knowledge from the existing prediction model with data from the new cluster. When a large dataset from the new cluster is available, the Bayesian correction method is also suggested. However, when the differences between the clusters are very large, it is recommended to consider the new method, which takes the between-cluster variability into account. Datasets from the new cluster might not always be available, however. In that case, one could correct the intercept based on the prevalence(s), which might be found in the literature. When this prevalence is also not available, setting the random intercept equal to zero gives a workable solution as well.
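
As a rough illustration of the intercept-updating idea (not the thesis's exact implementation), the sketch below assumes a logistic CRPM whose predictor coefficients are kept fixed: update_intercept re-estimates only the intercept from a small new-cluster dataset by treating the linear predictor as a fixed offset, and prevalence_correction shifts the intercept by the difference in log-odds of the outcome prevalence when only the prevalence, and no individual-level data, is available for the new cluster. The function names and the use of Python with NumPy/SciPy are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize_scalar

def update_intercept(lin_pred_no_intercept, y, old_intercept):
    # Re-estimate only the intercept of a logistic prediction model for a
    # new cluster, keeping the predictor coefficients fixed: the linear
    # predictor x'beta (without intercept) acts as an offset. Sketch only.
    def neg_log_lik(a):
        p = 1.0 / (1.0 + np.exp(-(a + lin_pred_no_intercept)))
        p = np.clip(p, 1e-12, 1.0 - 1e-12)
        return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    res = minimize_scalar(neg_log_lik,
                          bounds=(old_intercept - 10.0, old_intercept + 10.0),
                          method="bounded")
    return res.x

def prevalence_correction(old_intercept, prev_development, prev_new_cluster):
    # Shift the intercept by the difference in log-odds of the outcome
    # prevalence between the development data and the new cluster; usable
    # when only the prevalence, not a dataset, is available.
    logit = lambda p: np.log(p / (1.0 - p))
    return old_intercept + logit(prev_new_cluster) - logit(prev_development)

Under the same assumptions, the simplest fallback mentioned in the abstract, setting the random intercept to zero, amounts to using old_intercept (the fixed intercept of the original model) unchanged for the new cluster.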