In my graduation project, the aim is for me to compile a training brochure for the following systems: C-eHealth Portal, C2M, Medibase (4D Client), Patiënt-administratie and Hospiview. This topic was assigned to me by Mrs Verboven, my internship mentor. The brochure is written for the medical secretariat of the neurology outpatient clinic of AZ Turnhout, campus Sint-Elisabeth. It should be mentioned that C2M, Medibase (4D Client) and Hospiview will no longer be used at the neurology outpatient clinic in the future; however, it is not yet known when these systems will be replaced.

Getting started at a new workplace or internship is not easy: you are not yet familiar with the way the secretariat works or with its range of tasks. Supervising a new staff member or intern takes a lot of time and is therefore an extra burden in this busy outpatient clinic. This brochure can help with this and will save time for the current employees. The aim of the brochure is therefore to familiarize new employees or interns with the various systems. The use of the systems is placed in the context of how the neurology secretariat works: for example, only the options that are used at the neurology secretariat are discussed.

In the brochure I added the short chapter "Gebruik systemen polikliniek neurologie" (use of the systems at the neurology outpatient clinic), in which I briefly explain for each system what it is and what it is used for at the neurology outpatient clinic. In this way the reader can quickly find out which system is needed for which action. I also provided a separate introduction for each system. In the books "TaalAnker: Zelf brochures schrijven" by Martina Huigen and "Prismahandboek: Communicatiewijzer" by Ton de Vries I found interesting information on how to write a good brochure/manual, and I have tried to take these tips into account as much as possible. On this I give m...
Clinical risk prediction models (CRPMs) play an important role in clinical decision making: doctors use these models in their decisions about the care plan for their patients. These models give the probability of having a disease and are estimated from data. These data are often gathered by multiple centres and are then known as multicentre data. This way of gathering data makes it easier to generalize the results, but it leads to some statistical challenges. With multicentre data it is assumed that patients from the same centre are more similar to each other than to a randomly chosen patient from a different centre. Think for example about referral patterns: specialized centres probably get the more severe cases. These differences can be taken into account by mixed effects models, which allow each centre to have a cluster-specific intercept. However, these cluster-specific intercepts are unknown for centres that were not included in the development of the model. The goal of this thesis is to compare methods to update the intercept of a CRPM to make it suitable for patients in new clusters.

Nine methods to update the intercept are compared by means of a simulation study and a case study. These updating methods sometimes need the prevalence of the disease in the clusters or a dataset from the new cluster to update the CRPM; the latter might not always be available. In the case study, data from the IOTA group about non-pregnant women with at least one persistent adnexal mass in their ovaries is used. The prediction model estimates the probability of a malignant adnexal mass based on patient characteristics, clinical information and ultrasound information. The simulation study simulates 24 development datasets, which vary in the amount of variability in the baseline risks, the sample sizes, and the prevalence of the disease. From these datasets prediction models are estimated, which are then updated for two new clusters: one with a similar prevalence and one with a dissimilar prevalence. The updating methods are compared based on their overall model performance, discriminative ability and the agreement between observed and predicted outcomes. The overall model performance shows how well the model fits the data, and the discriminative ability indicates how well the model can discriminate between disease and non-disease. The agreement between the observed and predicted outcomes reflects how consistent the predicted probabilities of the disease are with what is actually observed.

Based on this comparison of the methods to update the intercept for patients in new clusters, it can be concluded that when a small dataset from the new cluster is available, the Bayesian correction based on the outcome incidence and the predictors is preferred. Bayesian statistics, and hence this method, can combine previous knowledge from the already existing prediction model with data from the new cluster. Moreover, when a large dataset from the new cluster is available, it is suggested to use the Bayesian correction method as well. However, when the differences between the clusters are very large, it is recommended to consider the new method, which takes the amount of variability of the clusters into account. Datasets from the new cluster might not always be available, however. When this is the case, one could correct the intercept based on the prevalence(s), which might be found in the literature. When this prevalence is also not available, setting the random intercept equal to zero also gives a workable solution.
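As an illustration of the simplest data-free style of update mentioned above, the sketch below recalibrates a fitted CRPM's intercept for a new cluster using only that cluster's disease prevalence. It is a minimal, hypothetical example: the coefficients, covariates and prevalence are invented, and it does not reproduce the actual IOTA model or the nine updating methods compared in the thesis.

```python
# Hypothetical sketch: shift a prediction model's intercept so that the mean
# predicted risk in a new cluster matches that cluster's known prevalence.
# All numbers below are made up for illustration.
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit  # inverse logit

rng = np.random.default_rng(0)

# Assumed published model: logit(p) = b0 + x @ beta, with the random intercept set to 0.
b0 = -1.2
beta = np.array([0.8, -0.5, 1.1])

# Covariates of 500 patients from the new cluster (no outcomes are needed for this update).
X_new = rng.normal(size=(500, 3))
lin_pred = X_new @ beta

prevalence_new = 0.35  # assumed known, e.g. taken from the literature

# Find the intercept shift 'delta' such that the mean predicted risk
# in the new cluster equals its known prevalence.
def calibration_gap(delta):
    return expit(b0 + delta + lin_pred).mean() - prevalence_new

delta = brentq(calibration_gap, -10.0, 10.0)
p_updated = expit(b0 + delta + lin_pred)
print(f"intercept shift: {delta:.3f}, mean updated risk: {p_updated.mean():.3f}")
```

The Bayesian correction preferred in the thesis additionally uses outcome and predictor data from the new cluster; the prevalence-only shift shown here corresponds to the fallback situation in which no such dataset is available.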
Clinical risk prediction models are used to help physicians make correct predictions regarding diagnosis or prognosis. To enhance the efficiency of the data collection process and the generalisability of the results, multicenter data can be collected. However, this causes the data to no longer be independent. Mixed effects models and generalized estimating equations are designed to take this dependency into account when analyzing the data. To investigate whether these models perform differently from other models in terms of prediction, a mixed effects model, a generalized estimating equation model, a standard logistic regression and a fixed effects logistic regression are compared. The effects of sample size, the amount of clustering in the data, non-normal underlying random effects distributions and the presence of a center-predictor interaction are also investigated. This is done by performing a simulation study. In addition, the performance of the different models is assessed on a real-life dataset: the International Ovarian Tumor Analysis (IOTA) group dataset.

The most important conclusions of this research are that the predictive performance of the marginal predictions based on the standard logistic regression, the generalized estimating equation and the mixed effects model integrating over the random intercept is very similar. Moreover, marginal predictions perform best on the population level, while conditional predictions perform well on the center level. As a result, the target group for whom predictions are to be made should determine the type of predictions to be used. Regarding the sample size, smaller sample sizes affect the performance measures due to overfitting. Furthermore, heavily clustered data yield the same results as slightly clustered data, only more pronounced. In addition, slightly non-normal underlying random effects distributions do not affect the predictive performance much when the sample sizes are large. However, ignoring an existing center-predictor interaction does have an influence on the predictive performance on the center level: predictions are too moderate, too close to the overall prevalence.
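To make the distinction between conditional and marginal predictions concrete, the following sketch computes both from a random-intercept logistic model. All numbers (fixed effects, random-intercept standard deviation, the patient's covariates and center effect) are invented for illustration and do not come from the thesis or the IOTA data.

```python
# Hypothetical sketch: conditional vs marginal predictions from a
# random-intercept logistic model. All parameter values are made up.
import numpy as np
from scipy.special import expit  # inverse logit

# Assumed model: logit(p_ij) = b0 + b_i + x_ij @ beta, with b_i ~ N(0, tau^2) for center i.
b0 = -0.5
beta = np.array([0.9, -0.4])
tau = 1.2

x = np.array([1.0, 0.5])  # covariates of one patient
b_center = 0.8            # hypothetical random intercept of this patient's own center

# Conditional prediction: plug in the center-specific intercept.
p_conditional = expit(b0 + b_center + x @ beta)

# Marginal prediction: average the risk over the random-intercept
# distribution N(0, tau^2), here approximated by Monte Carlo integration.
rng = np.random.default_rng(1)
b_draws = rng.normal(0.0, tau, size=100_000)
p_marginal = expit(b0 + b_draws + x @ beta).mean()

print(f"conditional prediction: {p_conditional:.3f}, marginal prediction: {p_marginal:.3f}")
```

The conditional prediction is tailored to the patient's own center, whereas the marginal prediction describes an average patient with these covariates across centers, which is why the target group should drive the choice between them.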
In every research project some proportion of the data can be missing for several reasons: by study design, unforeseen factors, confidentiality, drop-outs, … (Horton & Kleinman, 2007). Since many statistical procedures cannot handle missing data well (Jadhav et al., 2019), researchers resort to imputation methods, which allow the missing data to be filled in. This is done by learning the relationships in the non-missing data and applying the learned patterns to fill in the missing values (a minimal code sketch of this idea follows the reference list below). One of those methods is missForest (Stekhoven & Bühlmann, 2012), which was updated to missForest v2 in 2023 (Albu, 2022). This thesis compares the results of missForest v2 with some commonly used imputation methods. We make a distinction between imputation in the inference setting and the prediction setting: in the former we study associations between the outcome and the predictors with the aim of drawing conclusions about the population, while in the latter making future predictions is the ultimate goal. Different missingness mechanisms are described, and it is shown that they are mainly of importance in the inference setting, while making little difference when it comes to prediction.

References
Albu, E. (2022). missForest v2: Missing data imputation for prediction (Responsible Machine Learning in Healthcare).
Horton, N. J., & Kleinman, K. P. (2007). Much Ado About Nothing: A Comparison of Missing Data Methods and Software to Fit Incomplete Data Regression Models. The American Statistician, 61(1), 79–90. https://doi.org/10.1198/000313007X172556
Jadhav, A., Pramod, D., & Ramanathan, K. (2019). Comparison of Performance of Data Imputation Methods for Numeric Dataset. Applied Artificial Intelligence, 33(10), 913–933. https://doi.org/10.1080/08839514.2019.1637138
Stekhoven, D. J., & Bühlmann, P. (2012). MissForest - nonparametric missing value imputation for mixed-type data. Bioinformatics, 28(1), 112–118. https://doi.org/10.1093/bioinformatics/btr597
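The sketch referred to in the abstract above illustrates missForest-style imputation: each variable with missing values is iteratively predicted by a random forest trained on the other variables. It uses scikit-learn's IterativeImputer as a stand-in and is not the missForest or missForest v2 implementation cited above; the simulated data and missingness rate are purely illustrative.

```python
# Minimal sketch of missForest-style imputation with a random-forest learner.
# This is a scikit-learn stand-in, NOT the missForest / missForest v2 package.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))
X[:, 3] = 0.5 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)  # correlated column

# Introduce ~20% missingness completely at random (MCAR).
mask = rng.random(X.shape) < 0.2
X_missing = X.copy()
X_missing[mask] = np.nan

# Iteratively impute each column using a random forest fitted on the others.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=10,
    random_state=0,
)
X_imputed = imputer.fit_transform(X_missing)
print("mean absolute imputation error:", np.abs(X_imputed[mask] - X[mask]).mean())
```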
As important sources of morbidity and mortality, bloodstream infections (BSI) can be effectively controlled if proper preventive measures are taken. To improve the quality of care and infection control for hospitalized patients at UZ Leuven, a project was put forward aiming at dynamic risk prediction of primary catheter-related bloodstream infection (CLA-BSI). This study covers the first phase of the whole project and tries to determine the potential variables that may affect the risk of CLA-BSI for hospitalized patients. In this study, UZ Leuven electronic health record (EHR) data from October 1, 2012 to December 31, 2013 were used to develop a proof-of-concept model for the risk of CLA-BSI.

In this paper, we considered competing risks that may preclude the occurrence of CLA-BSI, such as death and discharge from hospital (or catheter removal for more than 48 hours). The cumulative incidence function (CIF) was applied to avoid the overestimation of traditional Kaplan-Meier (KM) estimates, which ignore the competing-risks setting. Both the cause-specific hazard model and the subdistribution hazard model were developed to help address questions of etiology and predict risk. We also combined the competing risks models with the landmark approach to achieve the goal of dynamic risk prediction: by building landmark subsets using the covariate information available at the landmark time points, we can capture individuals' prognosis dynamically. A forward stepwise strategy was applied for variable selection, and finally a landmark cause-specific supermodel and separate landmark subdistribution hazard models at the landmark time points were built to detect the effect of the selected covariates on the risk of CLA-BSI.

In our study, we found that an increasing number of catheters at the landmark time and the receipt of chemotherapy or total parenteral nutrition (TPN) before the landmark time were associated with a higher incidence of CLA-BSI. In addition, the type and location of the catheters, as well as the medical discipline division of the supervisor at the landmark time, also have an effect on the risk of CLA-BSI.
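As a toy illustration of why the CIF is used rather than one minus the Kaplan-Meier estimate, the sketch below computes a nonparametric (Aalen-Johansen type) cumulative incidence for one event type in the presence of a competing event. The simulated times, event codes and the 14-day evaluation point are purely illustrative and are unrelated to the UZ Leuven EHR data or the landmark models described above.

```python
# Hypothetical sketch: nonparametric cumulative incidence function (CIF) with
# competing risks. Event 1 stands in for CLA-BSI, event 2 for death/discharge.
# The data are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(7)
n = 300
time = rng.exponential(scale=10.0, size=n)                # days until first event
event = rng.choice([0, 1, 2], size=n, p=[0.2, 0.3, 0.5])  # 0 = censored, 1 = CLA-BSI, 2 = competing event

def cumulative_incidence(time, event, cause):
    """Aalen-Johansen estimate for untied event times:
    CIF(t) = sum over event times t_i <= t of S(t_i-) * d_cause(t_i) / n_at_risk(t_i)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n_obs = len(time)
    surv = 1.0      # overall Kaplan-Meier survival S(t-), all causes combined
    running = 0.0   # running CIF for the cause of interest
    times, cif = [], []
    for i, (t, e) in enumerate(zip(time, event)):
        at_risk = n_obs - i
        if e == cause:
            running += surv / at_risk
        if e != 0:  # any observed event (either cause) reduces overall survival
            surv *= 1.0 - 1.0 / at_risk
        times.append(t)
        cif.append(running)
    return np.array(times), np.array(cif)

times, cif_clabsi = cumulative_incidence(time, event, cause=1)
print("estimated cumulative incidence of event 1 by day 14:",
      round(cif_clabsi[times <= 14][-1], 3))
```

Because the competing event removes patients who can no longer develop the event of interest, the CIF stays below the naive one-minus-KM curve, which is the overestimation the abstract refers to.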