Listing 1 - 4 of 4
This brief monograph is an in-depth study of the infinite divisibility and self-decomposability properties of central and noncentral Student's distributions, represented as variance and mean-variance mixtures of multivariate Gaussian distributions with the reciprocal gamma mixing distribution. These results allow us to define and analyse Student-Lévy processes as Thorin-subordinated Gaussian Lévy processes. A broad class of one-dimensional, strictly stationary diffusions with Student's t-marginal distribution is defined as the unique weak solution of a stochastic differential equation. Using the independently scattered random measures generated by the bivariate centred Student-Lévy process, together with stochastic integration theory, a univariate, strictly stationary process with centred Student's t-marginals and an arbitrary correlation structure is defined. As a promising direction for future work on constructing and analysing new multivariate Student-Lévy-type processes, the notion of Lévy copulas and the related analogue of Sklar's theorem are explained.
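The variance-mixture representation mentioned in the abstract can be made concrete: a Student's t variate arises as a standard Gaussian scaled by the square root of a reciprocal-gamma (inverse-gamma) mixing variable. The following sketch (the function name `sample_student_t` and the specific parameters are illustrative, not from the monograph) demonstrates the construction with NumPy:

```python
import numpy as np

def sample_student_t(nu, size, rng):
    """Draw Student-t(nu) variates as a Gaussian variance mixture.

    The mixing variable V ~ InverseGamma(nu/2, nu/2); conditionally
    X | V ~ N(0, V), and marginally X ~ t_nu.
    """
    # Gamma(shape=nu/2, rate=nu/2); its reciprocal is InverseGamma(nu/2, nu/2)
    g = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=size)
    v = 1.0 / g                       # reciprocal-gamma mixing variances
    z = rng.standard_normal(size)     # standard Gaussian draws
    return np.sqrt(v) * z             # variance mixture of Gaussians

rng = np.random.default_rng(0)
x = sample_student_t(5.0, 200_000, rng)
# For nu = 5 the theoretical variance is nu / (nu - 2) = 5/3,
# which the sample variance of x should approximate.
```

Subordinating a Brownian motion by a suitable Lévy process built from this mixing distribution is what yields the Student-Lévy processes the monograph studies.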
Statistics --- Stochastic processes --- t-test (Statistics) --- Mathematics --- Physical Sciences & Mathematics --- Mathematical Statistics --- Distribution (Probability theory) --- Distribution functions --- Frequency distribution --- Random processes --- Statistics, general --- Characteristic functions --- Probabilities --- Statistical analysis --- Statistical data --- Statistical methods --- Statistical science --- Econometrics
Self-normalized processes are of common occurrence in probabilistic and statistical studies. A prototypical example is Student's t-statistic introduced in 1908 by Gosset, whose portrait is on the front cover. Due to the highly non-linear nature of these processes, the theory experienced a long period of slow development. In recent years there have been a number of important advances in the theory and applications of self-normalized processes. Some of these developments are closely linked to the study of central limit theorems, which imply that self-normalized processes are approximate pivots for statistical inference. The present volume covers recent developments in the area, including self-normalized large and moderate deviations, and laws of the iterated logarithms for self-normalized martingales. This is the first book that systematically treats the theory and applications of self-normalization.
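The prototypical self-normalized process named in the abstract, Student's t-statistic, normalizes a sum by a random quantity (the sample standard deviation) rather than a fixed constant. A minimal sketch (the helper name `t_statistic` is my own, not from the book) shows why this makes the statistic an approximate pivot: the unknown scale of the data cancels out.

```python
import numpy as np

def t_statistic(x):
    """Gosset's one-sample t-statistic, a self-normalized sum.

    T_n = sqrt(n) * mean(x) / sd(x); normalizing by the random sample
    standard deviation removes the unknown scale parameter.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    return np.sqrt(n) * x.mean() / x.std(ddof=1)

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=7.3, size=50)
# Rescaling the data leaves the statistic unchanged:
# t_statistic(c * x) == t_statistic(x) for any c > 0.
```

This scale invariance is the elementary face of the pivotal behaviour that the book's central limit theorems establish in far greater generality.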
Grenzwertsatz --- Limit theorems (Probability theory) --- Mathematical statistics --- t-test (Statistics) --- Mathematics --- Physical Sciences & Mathematics --- Mathematical Statistics --- Probabilities --- Statistical inference --- Statistics, Mathematical --- Probability --- Statistical methods --- Statistics --- Probability Theory and Stochastic Processes --- Statistical Theory and Methods --- Combinations --- Chance --- Least squares --- Risk --- Sampling (Statistics) --- Distribution (Probability theory) --- Distribution functions --- Frequency distribution --- Characteristic functions --- Statistical analysis --- Statistical data --- Statistical science --- Econometrics
Recent advances in information technology have brought about a paradigm shift in science, especially in biology and medicine. Statistical methodologies based on high-performance computing and big-data analysis are now indispensable for the qualitative and quantitative understanding of experimental results. The last few decades have witnessed drastic improvements in high-throughput experiments in health science, for example, mass spectrometry, DNA microarrays, and next-generation sequencing. These methods provide massive data covering the four major branches of omics (genomics, transcriptomics, proteomics, and metabolomics). Information about amino acid sequences, protein structures, and molecular structures is fundamental for predicting the bioactivity of chemical compounds when screening drugs. Cell imaging, clinical imaging, and personal healthcare devices likewise provide important data concerning the human body and disease. In parallel, various methods of mathematical modelling, such as machine learning, have developed rapidly. All of these types of data can be used in computational approaches to understand disease mechanisms, diagnosis, prognosis, drug discovery, drug repositioning, disease biomarkers, driver mutations, copy number variations, disease pathways, and much more. In this Special Issue, we have published eight papers dedicated to a variety of computational problems in the biomedical field, from the genomic level to the whole-person physiological level.
Technology: general issues --- History of engineering & technology --- water temperature --- bathing --- ECG --- heart rate variability --- quantitative analysis --- t-test --- hypertrophic cardiomyopathy --- data mining --- automated curation --- molecular mechanisms --- atrial fibrillation --- sudden cardiac death --- heart failure --- left ventricular outflow tract obstruction --- cardiac fibrosis --- myocardial ischemia --- compound–protein interaction --- Jamu --- machine learning --- drug discovery --- herbal medicine --- data augmentation --- deep learning --- ECG quality assessment --- drug–target interactions --- protein–protein interactions --- chronic diseases --- drug repurposing --- maximum flow --- adenosine methylation --- m6A --- RNA modification --- neuronal development --- genetic variation --- copy number variants --- disease-related traits --- sequential order --- association test --- blood pressure --- cuffless measurement --- longitudinal experiment --- plethysmograph --- nonlinear regression
Computational intelligence is a general term for a class of algorithms inspired by nature's wisdom and human intelligence. Computer scientists have proposed many computational intelligence algorithms with heuristic features. These algorithms either mimic the evolutionary processes of the biological world or mimic the physiological structure and bodily functions of living organisms …
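The "mimic evolution" flavour of these algorithms can be illustrated by the simplest member of the family: a (1+1) evolution strategy that mutates a candidate solution with Gaussian noise and keeps the fitter of parent and child. This is a minimal sketch of the general idea only, not any specific algorithm from the collection; the function name, the sphere benchmark, and the parameter values are my own choices.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=0.5, iters=2000, seed=0):
    """Minimal (1+1) evolution strategy: mutate, keep the fitter point."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        child = x + sigma * rng.standard_normal(x.shape)  # Gaussian mutation
        fc = f(child)
        success = fc < fx
        if success:                     # greedy (elitist) selection
            x, fx = child, fc
        # Rechenberg-style 1/5-success rule for step-size adaptation
        sigma *= 1.5 if success else 1.5 ** -0.25
    return x, fx

def sphere(v):
    """Classic benchmark objective: minimum 0 at the origin."""
    return float(np.dot(v, v))

best, best_val = one_plus_one_es(sphere, x0=[3.0, -2.0, 1.5])
```

Population-based methods such as particle swarm optimization or the genetic algorithm (both of which appear among the keywords below) elaborate on this loop by maintaining many candidates and letting them share information.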
individual updating strategy --- integrated design --- global optimum --- flexible job shop scheduling problem --- whale optimization algorithm --- EHO --- bat algorithm with multiple strategy coupling (mixBA) --- multi-objective DV-Hop localization algorithm --- optimization --- rock types --- variable neighborhood search --- biology --- average iteration times --- CEC2013 benchmarks --- slicing tree structure --- firefly algorithm (FA) --- benchmark --- single loop --- evolutionary computation --- memetic algorithm --- normal cloud model --- 0-1 knapsack problems --- elite strategy --- diversity maintenance --- material handling path --- artificial bee colony algorithm (ABC) --- urban design --- entropy --- evolutionary algorithms (EAs) --- monarch butterfly optimization --- numerical simulation --- architecture --- set-union knapsack problem --- Wilcoxon test --- convolutional neural network --- global position updating operator --- particle swarm optimization --- computation --- minimum load coloring --- topology structure --- adaptive multi-swarm --- minimum total dominating set --- mutation operation --- shape grammar --- greedy optimization algorithm --- ?-Hilbert space --- genetic algorithm --- large-scale optimization --- NSGA-II-DV-Hop --- constrained optimization problems (COPs) --- first-arrival picking --- transfer function --- SPEA 2 --- stochastic ranking (SR) --- wireless sensor networks (WSNs) --- acceleration search --- convergence point --- fuzzy c-means --- evolutionary algorithm --- success rates --- artificial bee colony --- particle swarm optimizer --- random weight --- range detection --- adaptive weight --- large-scale --- automatic identification --- cloud model --- swarm intelligence --- evolutionary multi-objective optimization --- DV-Hop algorithm --- bat algorithm (BA) --- Friedman test --- quantum uncertainty property --- facility layout design --- local search --- deep learning --- Y conditional cloud generator --- benchmark functions --- discrete algorithm --- dispatching rule --- DE algorithm --- nonlinear convergence factor --- energy-efficient job shop scheduling --- t-test --- evolution --- dimension learning --- global optimization --- confidence term --- elephant herding optimization --- moth search algorithm --- evolutionary