Mathematical logic is a branch of mathematics that takes axiom systems and mathematical proofs as its objects of study. This book shows how it can also provide a foundation for the development of information science and technology. The first five chapters systematically present the core topics of classical mathematical logic, including the syntax and models of first-order languages, formal inference systems, computability and representability, and Gödel’s theorems. The last five chapters present extensions and developments of classical mathematical logic, particularly the concepts of version sequences of formal theories and their limits, the system of revision calculus, proschemes (formal descriptions of proof methods and strategies) and their properties, and the theory of inductive inference. All of these themes contribute to a formal theory of axiomatization and its application to the process of developing information technology and scientific theories. The book also describes the paradigm of three kinds of language environments for theories and presents the basic properties required of a meta-language environment. Finally, the book brings these themes together by describing a workflow for scientific research in the information era in which formal methods, interactive software and human invention are all used to their advantage. This book represents a valuable reference for graduate and undergraduate students and researchers in mathematics, information science and technology, and other relevant areas of the natural sciences. Its first five chapters can serve as an undergraduate text in mathematical logic, and the last five chapters are addressed to graduate students in relevant disciplines.
The development of powerful computing environments and geographical information systems (GIS) in recent decades has thrust the analysis of geo-referenced disease incidence data into the mainstream of spatial epidemiology. This book offers a modern perspective on statistical methods for detecting disease clustering, an indispensable procedure for finding statistical evidence on the aetiology of the disease under study. With increasing public health concerns about environmental risks, the need for sophisticated methods for analyzing spatial health events is immediate. Furthermore, the research area of statistical methods for disease clustering now attracts a wide audience, owing to the perceived need for wide-ranging monitoring systems that can detect possible health-related events such as the occurrence of severe acute respiratory syndrome (SARS), pandemic influenza and bioterrorism. An invaluable resource for a wide range of readers, including public health researchers, epidemiologists and biostatisticians, this book features: a concise introduction to the basic concepts of disease clustering and clusters; a historical overview of methods for disease clustering; a detailed treatment of selected methods useful for the practical investigation of disease clustering; and the analysis and illustration of these methods for a variety of real data sets. Toshiro Tango, Ph.D., is the Director of the Department of Technology Assessment and Biostatistics at the National Institute of Public Health, Japan. He has published a number of methodological and applied articles on various aspects of biostatistics. He is Past President of the Japanese Region of the International Biometric Society and has served as Associate Editor for several journals, including Statistics in Medicine and Biometrics.
Seasonal patterns have been found in a remarkable range of health conditions, including birth defects, respiratory infections and cardiovascular disease. Accurately estimating the size and timing of seasonal peaks in disease incidence is an aid to understanding the causes, and possibly to developing interventions. With global warming increasing the intensity of seasonal weather patterns around the world, a review of the methods for estimating seasonal effects on health is timely. This is the first book on statistical methods for seasonal data written for a health audience. It describes methods for a range of outcomes (including continuous, count and binomial data) and demonstrates appropriate techniques for summarising and modelling these data. It has a practical focus and uses interesting examples to motivate and illustrate the methods. The statistical procedures and example data sets are available in an R package called ‘season’. Adrian Barnett is a senior research fellow at Queensland University of Technology, Australia. Annette Dobson is a Professor of Biostatistics at The University of Queensland, Australia. Both are experienced medical statisticians with a commitment to statistical education, and they have previously collaborated on methodological developments and applications of biostatistics, especially to time series data. Among other projects, they worked together on revising the well-known textbook "An Introduction to Generalized Linear Models" (third edition, Chapman & Hall/CRC, 2008). In their new book they share their knowledge of statistical methods for examining seasonal patterns in health.
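As a hedged illustration of the kind of seasonal analysis the book describes, the base-R sketch below fits a harmonic (cosinor-style) Poisson regression to simulated monthly counts. It is not code from the ‘season’ package; the simulated data, the month coding and the single pair of harmonic terms are assumptions made only for this example.

```r
# Illustrative cosinor-style seasonal model in base R (not the 'season' package API).
set.seed(1)

month <- rep(1:12, times = 10)                        # 10 years of monthly data
trend <- 1:length(month)
# Simulated monthly counts with a seasonal peak near the turn of the year
lambda <- exp(2 + 0.4 * cos(2 * pi * month / 12) + 0.001 * trend)
cases  <- rpois(length(month), lambda)

# Harmonic (cosinor) terms capture a smooth annual cycle
fit <- glm(cases ~ cos(2 * pi * month / 12) + sin(2 * pi * month / 12) + trend,
           family = poisson)
summary(fit)

# Amplitude and phase summarise the size and timing of the seasonal effect
b <- coef(fit)
amplitude <- sqrt(b[2]^2 + b[3]^2)
phase     <- atan2(b[3], b[2]) * 12 / (2 * pi)        # peak timing in months within the cycle
c(amplitude = unname(amplitude), peak_month = unname(phase %% 12))
```

The estimated amplitude and peak month are exactly the kinds of quantities the book's methods are designed to estimate and interpret for health outcomes.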
Computational techniques based on simulation have now become an essential part of the statistician's toolbox. It is thus crucial to provide statisticians with a practical understanding of those methods, and there is no better way to develop intuition and skills for simulation than to use simulation to solve statistical problems. Introducing Monte Carlo Methods with R covers the main tools used in statistical simulation from a programmer's point of view, explaining the R implementation of each simulation technique and providing the output for better understanding and comparison. While this book constitutes a comprehensive treatment of simulation methods, the theoretical justification of those methods has been considerably reduced, compared with Robert and Casella (2004). Similarly, the more exploratory and less stable solutions are not covered here. This book does not require a preliminary exposure to the R programming language or to Monte Carlo methods, nor an advanced mathematical background. While many examples are set within a Bayesian framework, advanced expertise in Bayesian statistics is not required. The book covers basic random generation algorithms, Monte Carlo techniques for integration and optimization, convergence diagnoses, Markov chain Monte Carlo methods, including Metropolis-Hastings and Gibbs algorithms, and adaptive algorithms. All chapters include exercises, and all R programs are available as an R package called mcsm. The book appeals to anyone with a practical interest in simulation methods but no previous exposure. It is meant to be useful for students and practitioners in areas such as statistics, signal processing, communications engineering, control theory, econometrics, finance and more. The programming parts are introduced progressively to be accessible to any reader. Christian P. Robert is Professor of Statistics at Université Paris Dauphine, and Head of the Statistics Laboratory of CREST, both in Paris, France. He has authored more than 150 papers in applied probability, Bayesian statistics and simulation methods. He is a fellow of the Institute of Mathematical Statistics and the recipient of an IMS Medallion. He has authored eight other books, including The Bayesian Choice, which received the ISBA DeGroot Prize in 2004, Monte Carlo Statistical Methods with George Casella, and Bayesian Core with Jean-Michel Marin. He has served as Joint Editor of the Journal of the Royal Statistical Society Series B, as well as an associate editor for most major statistical journals, and was the 2008 ISBA President. George Casella is Distinguished Professor in the Department of Statistics at the University of Florida. He is active in both theoretical and applied statistics, is a fellow of the Institute of Mathematical Statistics and the American Statistical Association, and a Foreign Member of the Spanish Royal Academy of Sciences. He has served as Theory and Methods Editor of the Journal of the American Statistical Association, as Executive Editor of Statistical Science, and as Joint Editor of the Journal of the Royal Statistical Society Series B. In addition to books with Christian Robert, he has written Variance Components, 1992, with S.R. Searle and C.E. McCulloch; Statistical Inference, Second Edition, 2001, with Roger Berger; and Theory of Point Estimation, Second Edition, 1998, with Erich Lehmann. His latest book is Statistical Design (2008).
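As a hedged illustration of one of the MCMC algorithms the book covers, the following base-R sketch implements a random-walk Metropolis-Hastings sampler for a standard normal target. It is not code from the book or from the mcsm package; the target density, the proposal scale and the chain length are chosen only for illustration.

```r
# Minimal random-walk Metropolis-Hastings sampler in base R (illustrative sketch only).
set.seed(42)

log_target <- function(x) dnorm(x, log = TRUE)   # log density of the target distribution

n_iter  <- 10000
sd_prop <- 1.5                                   # proposal standard deviation (tuning choice)
chain   <- numeric(n_iter)
chain[1] <- 0

for (t in 2:n_iter) {
  proposal  <- rnorm(1, mean = chain[t - 1], sd = sd_prop)
  log_alpha <- log_target(proposal) - log_target(chain[t - 1])
  if (log(runif(1)) < log_alpha) {
    chain[t] <- proposal                         # accept the proposed move
  } else {
    chain[t] <- chain[t - 1]                     # reject: keep the current state
  }
}

mean(chain); sd(chain)   # should be close to 0 and 1 for this target
```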
This book brings together young researchers from a variety of fields within mathematics, philosophy and logic. It discusses questions that arise in their work, as well as themes and reactions that appear to be similar in different contexts. The book shows that a fairly intensive activity in the philosophy of mathematics is under way, due on the one hand to disillusionment with traditional answers and on the other to exciting new features of present-day mathematics. The book explains how the problem of applicability once again plays a central role in the development of mathematics. It examines how new languages, different from the logical ones (mostly figural), are recognized as valid and experimented with, and how unifying concepts (structure, category, set) compete for the attention of those who pursue this form of unification. It further shows that traditional philosophies, such as constructivism, while still lively, are no longer only philosophies but also guidelines for research. Finally, the book demonstrates that the search for and validation of new axioms is analyzed with a blend of mathematical, historical, philosophical and psychological considerations.
The new edition of this influential textbook, geared towards graduate or advanced undergraduate students, teaches the statistics necessary for financial engineering. In doing so, it illustrates concepts using financial markets and economic data, R Labs with real-data exercises, and graphical and analytic methods for modeling and for diagnosing modeling errors. Financial engineers now have access to enormous quantities of data. To make use of these data, the powerful methods in this book, particularly those concerning volatility and risk, are essential. Strengths of this fully revised edition include major additions to the R code and to the advanced topics covered. Individual chapters cover, among other topics, multivariate distributions, copulas, Bayesian computations, risk management, multivariate volatility and cointegration. Suggested prerequisites are basic knowledge of statistics and probability, matrices and linear algebra, and calculus. There is an appendix on probability, statistics and linear algebra. Practicing financial engineers will also find this book of interest. David Ruppert is Andrew Schultz Jr. Professor of Engineering and Professor of Statistical Science at Cornell University, where he teaches statistics and financial engineering and is a member of the Program in Financial Engineering. Professor Ruppert received his PhD in Statistics at Michigan State University. He is a Fellow of the American Statistical Association and the Institute of Mathematical Statistics and won the Wilcoxon Prize. He is Editor of the Journal of the American Statistical Association-Theory and Methods and former Editor of the Electronic Journal of Statistics and of the Institute of Mathematical Statistics Lecture Notes-Monograph Series. Professor Ruppert has published over 125 scientific papers and four books: Transformation and Weighting in Regression, Measurement Error in Nonlinear Models, Semiparametric Regression, and Statistics and Finance: An Introduction. David S. Matteson is Assistant Professor of Statistical Science at Cornell University, where he is a member of the ILR School, the Center for Applied Mathematics, the Field of Operations Research, and the Program in Financial Engineering, and teaches statistics and financial engineering. Professor Matteson received his PhD in Statistics at the University of Chicago. He received a CAREER Award from the National Science Foundation and won Best Academic Paper Awards from the annual R/Finance conference. He is an Associate Editor of the Journal of the American Statistical Association-Theory and Methods, Biometrics, and Statistica Sinica. He is also an Officer for the Business and Economic Statistics Section of the American Statistical Association, and a member of the Institute of Mathematical Statistics and the International Biometric Society.
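As a hedged, self-contained illustration of the volatility and risk quantities emphasised above, the base-R sketch below computes log returns, an annualised sample volatility and a one-day historical value-at-risk from a simulated price series. It is not taken from the book's R Labs; the simulated prices, the 252-day annualisation convention and the 95% level are assumptions made for this example.

```r
# Illustrative sketch: log returns, sample volatility and historical VaR in base R.
set.seed(7)

price <- 100 * exp(cumsum(rnorm(1000, mean = 0.0002, sd = 0.01)))  # simulated price path
ret   <- diff(log(price))                                          # daily log returns

# Annualised sample volatility (assuming roughly 252 trading days per year)
vol_annual <- sd(ret) * sqrt(252)

# One-day 95% historical value-at-risk: the loss exceeded on about 5% of days
var_95 <- -quantile(ret, probs = 0.05)

c(volatility = vol_annual, VaR_95 = unname(var_95))
```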
This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective: the chapters are organized around traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The methods discussed are illustrated with a wealth of examples, ranging from stylized illustrations to case studies from the recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many of the examples is included on online software pages.
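As a hedged illustration of one standard nonparametric Bayesian construction treated in this literature, the base-R sketch below draws from a truncated Dirichlet process prior via stick-breaking. It is not code from the book's software pages; the base measure, the concentration parameter and the truncation level are chosen only for illustration.

```r
# Stick-breaking sketch of a truncated Dirichlet process prior in base R (illustrative only).
set.seed(3)

alpha <- 2          # concentration parameter
K     <- 50         # truncation level
base_draws <- rnorm(K, mean = 0, sd = 5)   # atoms drawn from a N(0, 25) base measure

v <- rbeta(K, 1, alpha)                    # stick-breaking proportions
w <- v * cumprod(c(1, 1 - v[-K]))          # mixture weights (sum to approximately 1)

# Draw a sample of size 100 from the resulting discrete random distribution
sample_dp <- sample(base_draws, size = 100, replace = TRUE, prob = w)
head(table(round(sample_dp, 2)))           # repeated atoms reflect the DP's discreteness
```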