Self-normalized processes occur commonly in probabilistic and statistical studies. A prototypical example is Student's t-statistic, introduced in 1908 by Gosset, whose portrait is on the front cover. Due to the highly non-linear nature of these processes, the theory experienced a long period of slow development. In recent years there have been a number of important advances in the theory and applications of self-normalized processes. Some of these developments are closely linked to the study of central limit theorems, which imply that self-normalized processes are approximate pivots for statistical inference. The present volume covers recent developments in the area, including self-normalized large and moderate deviations, and laws of the iterated logarithm for self-normalized martingales. This is the first book that systematically treats the theory and applications of self-normalization.
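As a minimal illustration of the pivot property mentioned above, the following Python sketch (the sample size, distribution, and seed are illustrative choices, not taken from the book) simulates the self-normalized t-statistic under a skewed null distribution at two very different scales and checks that its tail behavior stays close to the standard normal in both cases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Student's t-statistic, the prototypical self-normalized process:
# sqrt(n) * sample mean / sample standard deviation.  Because the
# unknown scale cancels in the ratio, the statistic is an approximate
# pivot, and P(|T| > 1.96) should stay near the normal value 0.05
# regardless of scale.
n, reps = 50, 20_000
for scale in (1.0, 1000.0):
    draws = rng.exponential(scale, size=(reps, n)) - scale  # centered, skewed
    t = np.sqrt(n) * draws.mean(axis=1) / draws.std(axis=1, ddof=1)
    print(f"scale={scale:7.1f}  P(|T| > 1.96) ~ {np.mean(np.abs(t) > 1.96):.3f}")
```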
Grenzwertsatz --- Limit theorems (Probability theory) --- Mathematical statistics --- t-test (Statistics) --- Mathematics --- Physical Sciences & Mathematics --- Probabilities --- Statistical inference --- Statistics, Mathematical --- Probability --- Statistical methods --- Statistics --- Probability Theory and Stochastic Processes --- Statistical Theory and Methods --- Combinations --- Chance --- Least squares --- Risk --- Sampling (Statistics) --- Distribution (Probability theory) --- Distribution functions --- Frequency distribution --- Characteristic functions --- Statistical analysis --- Statistical data --- Statistical science --- Econometrics
Recent advances in information technology have brought forth a paradigm shift in science, especially in the biological and medical fields. Statistical methodologies based on high-performance computing and big data analysis are now indispensable for the qualitative and quantitative understanding of experimental results. The last few decades have witnessed drastic improvements in high-throughput experiments in health science, for example mass spectrometry, DNA microarrays, and next-generation sequencing. These methods provide massive amounts of data spanning the four major branches of omics (genomics, transcriptomics, proteomics, and metabolomics). Information about amino acid sequences, protein structures, and molecular structures is fundamental for predicting the bioactivity of chemical compounds during drug screening. Meanwhile, cell imaging, clinical imaging, and personal healthcare devices provide important data about the human body and disease. In parallel, mathematical modelling methods such as machine learning have developed rapidly. All of these types of data can be used in computational approaches to study disease mechanisms, diagnosis, prognosis, drug discovery, drug repositioning, disease biomarkers, driver mutations, copy number variations, disease pathways, and much more. In this Special Issue, we have published eight excellent papers dedicated to a variety of computational problems in the biomedical field, from the genomic level to the whole-person physiological level.
water temperature --- bathing --- ECG --- heart rate variability --- quantitative analysis --- t-test --- hypertrophic cardiomyopathy --- data mining --- automated curation --- molecular mechanisms --- atrial fibrillation --- sudden cardiac death --- heart failure --- left ventricular outflow tract obstruction --- cardiac fibrosis --- myocardial ischemia --- compound–protein interaction --- Jamu --- machine learning --- drug discovery --- herbal medicine --- data augmentation --- deep learning --- ECG quality assessment --- drug–target interactions --- protein–protein interactions --- chronic diseases --- drug repurposing --- maximum flow --- adenosine methylation --- m6A --- RNA modification --- neuronal development --- genetic variation --- copy number variants --- disease-related traits --- sequential order --- association test --- blood pressure --- cuffless measurement --- longitudinal experiment --- plethysmograph --- nonlinear regression
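Several of the keywords above (bathing, heart rate variability, t-test) concern simple quantitative comparisons. The following Python sketch shows how such a paired comparison might look; the subject counts and measurement values are simulated for illustration, not taken from any of the eight papers.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical RMSSD heart-rate-variability values (ms) for 20 subjects
# measured before and after warm-water bathing; all numbers below are
# assumptions made up for this sketch.
before = rng.normal(42.0, 8.0, size=20)
after = before + rng.normal(5.0, 6.0, size=20)  # assumed mean increase

# Paired t-test: each subject serves as their own control.
result = stats.ttest_rel(after, before)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```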
The need to establish a formal boundary between concentrations of potentially toxic inorganic compounds in groundwater that arise from natural processes and those caused by anthropogenic pollution has prompted researchers to develop methods to derive this boundary and define the "Natural Background Level" (NBL). NBLs can be used as screening levels to define the good chemical status of groundwater bodies, as well as to set remediation targets at polluted sites. The book "Natural Background Levels in Groundwater" brings together a set of case studies from Europe and worldwide in which the assessment and identification of this boundary are performed with different methodologies. It provides an overview of the approaches and protocols applied and tested in different states for NBL assessment, ranging from well-known methods, such as component separation or cumulative probability plot methods, to new computer-aided protocols. The main objective of this book is to bring together and discuss different methodological approaches and tools to improve the assessment of groundwater NBLs. The overview, discussion, and comparison of different approaches and case histories for NBL calculation can be useful for scientists, water managers, and practitioners.
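As a hedged illustration of one of the approaches the book discusses, the Python sketch below outlines a pre-selection style NBL estimate: samples flagged as anthropogenically influenced by nitrate are excluded, and a high percentile of the remainder is taken as the NBL. The nitrate cutoff, percentile, and data are illustrative assumptions; actual protocols differ between states.

```python
import numpy as np

def nbl_preselection(conc, nitrate, nitrate_cutoff=10.0, percentile=90.0):
    """Pre-selection sketch: discard samples whose nitrate level suggests
    anthropogenic influence, then take a high percentile of the remaining
    concentrations as the Natural Background Level.  Both the cutoff and
    the percentile here are illustrative placeholders."""
    natural = conc[nitrate <= nitrate_cutoff]
    return np.percentile(natural, percentile)

# Hypothetical arsenic (ug/L) and nitrate (mg/L) data for one
# groundwater body.
rng = np.random.default_rng(2)
arsenic = rng.lognormal(mean=1.0, sigma=0.6, size=200)
nitrate = rng.gamma(shape=2.0, scale=6.0, size=200)
print(f"NBL (As) ~ {nbl_preselection(arsenic, nitrate):.1f} ug/L")
```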
ambient background values --- probability plot --- modified Lepeltier method --- pre-selection method --- LOQ --- groundwater body --- Croatia --- natural background levels --- software implementation --- parameters estimation --- statistical methods --- component separation method --- groundwater quality --- groundwater level --- geostatistics --- t-test --- spatial distribution modeling --- natural background --- conceptual model --- preselection --- nitrates --- confidence level --- arsenic --- sites under remediation --- site-specific data --- Ferrara --- trace metals --- Lanzo Massif --- ultramafic rocks --- ophiolites --- chromium --- hexavalent chromium --- nickel --- neutral mine drainage --- groundwater --- Italian guidelines --- cadmium --- copper --- zinc --- Denmark --- natural background level --- water quality --- anthropogenic pressure --- trace element --- groundwater monitoring
In recent years, the use of advanced data analysis methods in clinical and epidemiological research has increased. This book emphasizes the practical aspects of new data analysis methods and provides insight into new challenges in biostatistics, epidemiology, health sciences, dentistry, and clinical medicine. It offers a readable text with advice on reporting new data-analytical methods and on data presentation. The book consists of 13 articles; each is self-contained and may be read independently according to the needs of the reader. It is essential reading for postgraduate students as well as for researchers in medicine and other sciences where statistical data analysis plays a central role.
modified stretched exponential function --- age-dependent stretched exponent --- characteristic life --- maximum lifespan --- South Korean female --- Long-term care (LTC) --- importance-satisfaction (I-S) model --- performance evaluation matrix (PEM) --- service quality performance matrix (SQPM) --- voice of customer (VOC) --- vocal fatigue --- vocal distance dose --- neck surface accelerometer --- medical informatics --- statistical computing --- data analysis --- retirement threshold --- decision support system --- heuristic approach --- surgery scheduling --- software tool --- case study --- prostate cancer --- castration-resistant prostate cancer --- deep learning --- phased long short-term memory --- statistics --- reporting --- data presentation --- publications --- medicine --- health care --- ICD coding --- hierarchical classification --- electronic healthcare --- data mining --- data anonymization --- health --- cervical injury --- neck pain --- inertial sensors --- Active Contour Models --- snake segmentation --- GVF --- prostate imaging --- biostatistics --- GLM --- skewed data --- t-test --- Type I error --- power simulation --- Monte Carlo --- deep belief network --- heart disease diagnosis --- sparse FCM --- bird swarm algorithm --- mathematical models --- iterative simulation --- compartmental model --- diabetes control --- mobile assistant
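Among the topics listed above are Type I error and power simulation for the t-test on skewed data. The following Python sketch shows a minimal Monte Carlo check of the two-sample t-test's Type I error under a skewed null distribution; the sample sizes, distribution, and replication count are illustrative assumptions, not values from the book's articles.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Monte Carlo check of the two-sample (Welch) t-test's Type I error
# when the data are skewed (log-normal).  Both groups share the same
# distribution, so the null hypothesis is true and the empirical
# rejection rate should stay near the nominal alpha.
n, reps, alpha = 15, 10_000, 0.05
rejections = 0
for _ in range(reps):
    a = rng.lognormal(0.0, 1.0, size=n)
    b = rng.lognormal(0.0, 1.0, size=n)
    if stats.ttest_ind(a, b, equal_var=False).pvalue < alpha:
        rejections += 1
print(f"empirical Type I error ~ {rejections / reps:.3f} (nominal {alpha})")
```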
Computational intelligence is a general term for a class of algorithms inspired by nature's wisdom and human intelligence. Computer scientists have proposed many computational intelligence algorithms with heuristic features. These algorithms either mimic the evolutionary processes of the biological world, mimic the physiological structure and bodily functions of the organism, or simulate the swarm behavior of natural creatures.
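As a hedged illustration of the swarm-inspired class of algorithms described above, the following Python sketch implements a minimal particle swarm optimizer on the sphere benchmark; the inertia and acceleration coefficients are common textbook defaults, not values taken from any paper in the collection.

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):
    """Simple benchmark function; global optimum 0 at the origin."""
    return np.sum(x**2, axis=-1)

# Minimal particle swarm optimization: each particle is pulled toward
# its own best position (pbest) and the swarm's best position (gbest).
dim, swarm, iters = 5, 30, 200
w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration (textbook defaults)
pos = rng.uniform(-5, 5, size=(swarm, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = sphere(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(f"best value after {iters} iterations: {pbest_val.min():.2e}")
```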
individual updating strategy --- integrated design --- global optimum --- flexible job shop scheduling problem --- whale optimization algorithm --- EHO --- bat algorithm with multiple strategy coupling (mixBA) --- multi-objective DV-Hop localization algorithm --- optimization --- rock types --- variable neighborhood search --- biology --- average iteration times --- CEC2013 benchmarks --- slicing tree structure --- firefly algorithm (FA) --- benchmark --- single loop --- evolutionary computation --- memetic algorithm --- normal cloud model --- 0-1 knapsack problems --- elite strategy --- diversity maintenance --- material handling path --- artificial bee colony algorithm (ABC) --- urban design --- entropy --- evolutionary algorithms (EAs) --- monarch butterfly optimization --- numerical simulation --- architecture --- set-union knapsack problem --- Wilcoxon test --- convolutional neural network --- global position updating operator --- particle swarm optimization --- computation --- minimum load coloring --- topology structure --- adaptive multi-swarm --- minimum total dominating set --- mutation operation --- shape grammar --- greedy optimization algorithm --- ?-Hilbert space --- genetic algorithm --- large-scale optimization --- NSGA-II-DV-Hop --- constrained optimization problems (COPs) --- first-arrival picking --- transfer function --- SPEA 2 --- stochastic ranking (SR) --- wireless sensor networks (WSNs) --- acceleration search --- convergence point --- fuzzy c-means --- evolutionary algorithm --- success rates --- Artificial bee colony --- particle swarm optimizer --- random weight --- range detection --- adaptive weight --- large-scale --- automatic identification --- cloud model --- swarm intelligence --- evolutionary multi-objective optimization --- DV-Hop algorithm --- bat algorithm (BA) --- Friedman test --- quantum uncertainty property --- facility layout design --- local search --- deep learning --- Y conditional cloud generator --- benchmark functions --- discrete algorithm --- dispatching rule --- DE algorithm --- nonlinear convergence factor --- energy-efficient job shop scheduling --- t-test --- evolution --- dimension learning --- global optimization --- confidence term --- elephant herding optimization --- moth search algorithm