Listing 1 - 10 of 35
Subjects: Robust statistics --- Nonparametric statistics --- Regression analysis --- 519.2 Probability. Mathematical statistics --- Distribution (Probability theory) --- Mathematical statistics --- Data processing --- Mathematical models --- Statistical Theory and Methods --- Statistics and Computing/Statistics Programs --- Statistics for Business, Management, Economics, Finance, Insurance --- Econometrics
In many scientific fields, statistics is used to substantiate claims in a scientifically sound way. Through data collection and analysis, one seeks nuanced information about systems, methods, production processes, or populations. This book is written for anyone who wants to understand or carry out an elementary statistical analysis. Using practical examples from various scientific disciplines, it walks through the successive steps of a statistical approach.
Subjects: 519.2 Probability. Mathematical statistics --- #KVHB:Statistiek --- #BIBC: Academic collection --- statistics (mathematics) --- Basic Sciences. Statistics --- Statistics (General) --- PXL-Central Office 2016 --- scientific research
A short trailer for the KU Leuven MOOC for Credit 'Statistical Data Analysis for Scientists and Engineers'.
In statistics and data science, principal component analysis (PCA) is a predominant method. It is frequently used to reduce the dimension of a given data set so that the resulting data are easier to use in a subsequent method. The dimension reduction is carried out by so-called principal components: eigenvectors of the covariance or correlation matrix of the data. Despite its popularity, PCA has drawbacks. First, the data must be numerical, otherwise the covariance matrix cannot be computed. Second, the method can only describe linear dependency structures in the data. In addition, the use of the covariance matrix makes the method sensitive to outliers. In some applications, however, non-linear dependency structures are of interest. Moreover, many artificial-intelligence applications deal with text data, for which PCA might be a useful preliminary step. To address these limitations, an alternative approach to PCA was developed. The generalization of PCA discussed in this thesis is called kernel principal component analysis. The method rests on the fact that PCA can be expressed entirely in terms of inner products of the observations. Its power lies in the kernel trick, which makes it possible to replace these inner products by non-linear kernel functions. Such kernel functions can reflect non-linear structures in the data, or can work with non-numerical data. At the same time, the classic kernel PCA algorithm suffers from instabilities similar to those of classic linear PCA when outliers are present in the data. This thesis addresses the issue by also covering robust alternatives to the classic kernel PCA algorithm: existing methods such as kernel spherical PCA, kernel PCA based on projection pursuit, and kernel ROBPCA, as well as a fairly new method, kernel PCA based on the kernel MRCD estimator.
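The classic kernel PCA computation described above (kernel matrix, feature-space centring, eigendecomposition of the centred kernel) can be sketched as follows. The thesis implements its algorithms in R; this minimal NumPy version, with an assumed RBF kernel and an illustrative `gamma` parameter, is only a sketch of the same computation, not the thesis code.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared Euclidean distances, then the Gaussian (RBF) kernel
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    # Centre the kernel matrix in feature space: Kc = (I - 1/n) K (I - 1/n)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition; the leading eigenvectors are the kernel PCs
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scores: projections of the observations onto the kernel PCs
    return vecs * np.sqrt(np.clip(vals, 0.0, None))

rng = np.random.default_rng(0)
scores = kernel_pca(rng.normal(size=(30, 3)), n_components=2, gamma=0.5)
```

Because the kernel matrix is centred in feature space, the resulting score columns have mean zero, just as classic PCA scores do.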
Apart from presenting the aforementioned kernel PCA methods, the goal is to bring these methods to life in the programming language R. Using self-developed implementations of the algorithms, visualisations of their performance are presented; these establish not only the effectiveness of the methods but also illustrate their added value. We see examples where the robust kernel PCA methods perform better than the classic kernel algorithm. The method based on the kernel MRCD estimator, however, does not outperform the existing methods: its performance is adequate, but it has a high computational cost for large n. A computationally less demanding alternative is kernel ROBPCA, which can produce results as good as those of KPCA-KMRCD in less computation time.
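As one concrete illustration of how a robust variant tempers outliers, the spherical idea can be caricatured in a few lines. This is a simplified sketch, not the kernel spherical PCA of the thesis: it normalizes each feature-space observation to unit norm around the origin, whereas the actual method centres at the spatial median first.

```python
import numpy as np

def spherical_normalize(K):
    # Since <phi(x_i), phi(x_j)> = K_ij, dividing by sqrt(K_ii * K_jj)
    # projects every feature-space point onto the unit sphere, so a single
    # far-away outlier can no longer dominate the eigendecomposition.
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

# Linear-kernel example with one gross outlier
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))
X[0] *= 100.0              # contaminate one observation
K = X @ X.T                # linear kernel; K_ii = ||x_i||^2 > 0
Ks = spherical_normalize(K)
```

By the Cauchy-Schwarz inequality every entry of the normalized kernel lies in [-1, 1], which bounds the influence any single observation can exert on the subsequent eigendecomposition.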