Kernel Equating (KE) is a powerful, modern, and unified approach to test equating. It is based on a flexible family of equipercentile-like equating functions and contains the linear equating function as a special case. Any equipercentile equating method has five steps or parts: 1) pre-smoothing; 2) estimation of the score probabilities on the target population; 3) continuization; 4) computing and diagnosing the equating function; 5) computing the standard error of equating and related accuracy measures. KE brings these steps together in an organized whole rather than treating them as disparate problems. KE exploits pre-smoothing by fitting log-linear models to the score data and incorporates that smoothing into step 5) above. KE provides new tools for diagnosing a given equating function and for comparing two or more equating functions in order to choose between them. In this book, KE is applied to the four major equating designs and to both Chain Equating and Post-Stratification Equating for the Non-Equivalent Groups with Anchor Test design.

This book will be an important reference for several groups: (a) statisticians and others interested in the theory behind equating methods and the use of model-based statistical methods for data smoothing in applied work; (b) practitioners who need to equate tests, including those with these responsibilities in testing companies, state testing agencies, and school districts; and (c) instructors in psychometric and measurement programs. The authors assume some familiarity with linear and equipercentile test equating and with matrix algebra.

Alina von Davier is an Associate Research Scientist in the Center for Statistical Theory and Practice at Educational Testing Service. She has been a research collaborator at the Universities of Trier, Magdeburg, and Kiel, an assistant professor at the Polytechnic University of Bucharest, and a research scientist at the Institute for Psychology in Bucharest. Paul Holland holds the Frederic M. Lord Chair in Measurement and Statistics at Educational Testing Service. He held faculty positions in the Graduate School of Education, University of California, Berkeley, and the Harvard Department of Statistics. He is a Fellow of the American Statistical Association, the Institute of Mathematical Statistics, and the American Association for the Advancement of Science. He is an elected Member of the International Statistical Institute and a past president of the Psychometric Society. He was awarded the (AERA/ACT) E. F. Lindquist Award in 2000 and was designated a National Associate of the National Academies of Sciences in 2002. Dorothy Thayer is currently a consultant in the Center for Statistical Theory and Practice at Educational Testing Service. Her research interests include computational and statistical methodology, empirical Bayes techniques, missing data procedures, and exploratory data analysis techniques.
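To make the continuization and equating steps concrete, the following is a minimal sketch of Gaussian-kernel continuization and the resulting equipercentile-style mapping, not the book's own software. The bandwidth values, the toy score range, and the probability vectors r and s are illustrative assumptions; in practice the score probabilities would come from pre-smoothed (log-linear) fits and the bandwidths would be chosen by a suitable criterion.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def kernel_cdf(x, scores, probs, h):
    """Gaussian-kernel continuized CDF of a discrete score distribution,
    in the mean- and variance-preserving form used in kernel equating."""
    mu = np.sum(probs * scores)
    var = np.sum(probs * (scores - mu) ** 2)
    a = np.sqrt(var / (var + h ** 2))              # shrinkage factor
    z = (x - a * scores - (1 - a) * mu) / (a * h)
    return np.sum(probs * norm.cdf(z))

def ke_equate(x, scores_x, r, scores_y, s, h_x=0.6, h_y=0.6):
    """Map a score x on form X to the form-Y scale: e_Y(x) = G^{-1}(F(x))."""
    p = kernel_cdf(x, scores_x, r, h_x)
    lo = scores_y.min() - 10 * h_y
    hi = scores_y.max() + 10 * h_y
    return brentq(lambda y: kernel_cdf(y, scores_y, s, h_y) - p, lo, hi)

# Toy example: two 6-point score distributions with hypothetical probabilities.
scores = np.arange(6.0)
r = np.array([0.05, 0.15, 0.30, 0.25, 0.15, 0.10])   # form X
s = np.array([0.10, 0.20, 0.30, 0.20, 0.15, 0.05])   # form Y
print(ke_equate(3.0, scores, r, scores, s))
```

Because the kernel-continuized CDF is strictly increasing, the inverse in ke_equate can be obtained by simple one-dimensional root finding; brentq is used here purely for convenience.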
Examinations --- Educational tests and measurements --- Scoring. --- Interpretation. --- Design and construction. --- Standards. --- Econometrics. --- Statistics. --- Educational tests and measurement. --- Psychometrics. --- Statistics for Social Sciences, Humanities, Law. --- Assessment, Testing and Evaluation. --- Statistics. --- Assessment. --- Measurement, Mental --- Measurement, Psychological --- Psychological measurement --- Psychological scaling --- Psychological statistics --- Psychology --- Psychometry (Psychophysics) --- Scaling, Psychological --- Psychological tests --- Scaling (Social sciences) --- Statistical analysis --- Statistical data --- Statistical methods --- Statistical science --- Mathematics --- Econometrics --- Economics, Mathematical --- Statistics --- Measurement --- Scaling --- Methodology --- Educational assessment --- Educational measurements --- Mental tests --- Tests and measurements in education --- Psychological tests for children --- Psychometrics --- Students --- Test construction --- Test design --- Interpretation of examinations --- Test interpretation --- Test results --- Remote scoring of examinations --- Scoring of examinations --- Self-scoring of examinations --- Test scoring --- Rating of --- Validity
Test equating methods are used with many standardized tests in education and psychology to ensure that scores from multiple test forms can be used interchangeably. In recent years, researchers from the education, psychology, and statistics communities have contributed to the rapidly growing statistical and psychometric methodologies used in test equating. This book provides an introduction to test equating that both discusses the most frequently used equating methodologies and covers many of the practical issues involved.

This second edition expands on the coverage of the first edition by adding a new chapter on test scaling and a second on test linking. Test scaling is the process of developing the score scales that are used when scores on standardized tests are reported. In test linking, scores from two or more tests are related to one another. Linking has received much recent attention, due largely to investigations of linking similarly named tests from different test publishers or tests constructed for different purposes. The expanded coverage in the second edition also includes methodology for using polytomous item response theory in equating. The themes of the second edition include:
* the purposes of equating, scaling, and linking and their practical context
* data collection designs
* statistical methodology
* designing reasonable and useful equating, scaling, and linking studies
* the importance of test development and quality control processes to equating
* equating error and the underlying statistical assumptions for equating

Michael J. Kolen is a Professor of Educational Measurement at the University of Iowa. Robert L. Brennan is the E. F. Lindquist Chair in Measurement and Testing and Director of the Center for Advanced Studies in Measurement and Assessment at the University of Iowa. Both authors are acknowledged experts on test equating, scaling, and linking; they have authored numerous publications on these subjects and have taught many workshops and courses on equating. Both authors have been President of the National Council on Measurement in Education (NCME), and both received an NCME award for Outstanding Technical Contributions to Educational Measurement following publication of the first edition of this book. Professor Brennan also received an NCME award for Career Contributions to Educational Measurement and authored Generalizability Theory, published by Springer-Verlag.
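As a concrete illustration of the simplest equating methodology in this tradition, the following is a minimal sketch of linear equating under an equivalent (random) groups design. The function name and the summary statistics in the example are hypothetical and are not taken from the book.

```python
import numpy as np

def linear_equate(x, mu_x, sd_x, mu_y, sd_y):
    """Linear equating: map a form-X score to the form-Y scale so that the
    standardized deviates match, i.e. (x - mu_x)/sd_x = (y - mu_y)/sd_y."""
    return mu_y + (sd_y / sd_x) * (np.asarray(x, dtype=float) - mu_x)

# Hypothetical summary statistics for two forms given to equivalent groups.
print(linear_equate(30.0, mu_x=28.0, sd_x=6.0, mu_y=27.0, sd_y=5.5))  # ~28.83
```

Equipercentile equating generalizes this idea by matching entire cumulative score distributions rather than only the first two moments, which is where continuization methods such as the kernel approach described in the previous record come in.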
Educational tests and measurements --- Examinations --- Psychological tests --- Methods and techniques --- Standards. --- Design and construction. --- Interpretation. --- Scoring. --- statistics --- statistics. --- Mental tests --- Psychological assessment --- Tests, Psychological --- Psychology --- Testing --- Clinical psychology --- Remote scoring of examinations --- Scoring of examinations --- Self-scoring of examinations --- Test scoring --- Interpretation of examinations --- Test interpretation --- Test results --- Test construction --- Test design --- Educational assessment --- Educational measurements --- Tests and measurements in education --- Psychological tests for children --- Psychometrics --- Students --- Standards --- Design and construction --- Interpretation --- Scoring --- Methodology --- Validity --- Rating of --- Statistics. --- Assessment. --- Psychometrics. --- Statistics for Social Sciences, Humanities, Law. --- Assessment, Testing and Evaluation. --- Measurement, Mental --- Measurement, Psychological --- Psychological measurement --- Psychological scaling --- Psychological statistics --- Psychometry (Psychophysics) --- Scaling, Psychological --- Scaling (Social sciences) --- Statistical analysis --- Statistical data --- Statistical methods --- Statistical science --- Mathematics --- Econometrics --- Measurement --- Scaling