Carol A. Chapelle shows readers how to design validation research for tests of human capacities and performance. Any test used to make decisions about people or programs should have undergone extensive research demonstrating that the scores are actually appropriate for their intended purpose. Argument-Based Validation in Testing and Assessment is intended to help close the gap between theory and practice by introducing, explaining, and demonstrating how test developers can formulate the overall design for their validation research from an argument-based perspective.
Quantitative methods in social research --- Examinations - Validity --- Educational tests and measurements - Evaluation --- Psychological tests - Evaluation --- #SBIB:303H12 --- Methods and techniques: social sciences
Score reporting research is no longer limited to the psychometric properties of scores and subscores. Today, it encompasses design and evaluation for particular audiences, appropriate use of assessment outcomes, the utility and cognitive affordances of graphical representations, interactive report systems, and more. By studying how audiences understand the intended messages conveyed by score reports, researchers and industry professionals can develop more effective mechanisms for interpreting and using assessment data.

Score Reporting Research and Applications brings together experts who design and evaluate score reports in both K-12 and higher education contexts and who conduct foundational research in related areas. The first section covers foundational validity issues in the use and interpretation of test scores; design principles drawn from related areas including cognitive science, human-computer interaction, and data visualization; and research on presenting specific types of assessment information to various audiences. The second section presents real-world applications of score report design and evaluation and of the presentation of assessment information. Across ten chapters, this volume offers a comprehensive overview of new techniques and possibilities in score reporting.
Educational tests and measurements - Evaluation --- Examinations - Validity --- Grading and marking (Students) --- School reports --- Test results - Interpretation --- Students - Rating of
Contributors: Andrew Krumm, April L. Zenisky, Francis O'Donnell, Gautam Puhan, Gavin T. L. Brown, John A. C. Hattie, Linda Corrin, Lisa A. Keller, Marc Silver, Mary Hegarty, Mingyu Feng, Priya Kannan, Rebecca Zwick, Richard J. Tannenbaum, Ronald K. Hambleton, Samuel A. Livingston, Sandip Sinharay, Sharon Slater, Shelby J. Haberman, Shuchi Grover, Stephen G. Sireci, Timothy M. O'Leary, Yooyoung Park
The goal of this book is to emphasize the formal statistical features of the practice of equating, linking, and scaling. The book discusses the quality of equating results from a statistical perspective (new models, robustness, fit, hypothesis testing, statistical monitoring), as opposed to focusing on policy and its implications, which, although very important, represent a different side of equating practice. The book contributes to establishing equating as a theoretical field, a view that has not often been offered before. The tradition in the practice of equating has been to present the required knowledge and skills as a craft, implying that only years of experience under the guidance of a knowledgeable practitioner could impart them. This book challenges that view by showing how a good equating framework, a sound understanding of the assumptions underlying the psychometric models, and the use of statistical tests and statistical process control tools can help the practitioner navigate the difficult decisions in choosing the final equating function.

This book provides a valuable reference for several groups: (a) statisticians and psychometricians interested in the theory behind equating methods, in the use of model-based statistical methods for data smoothing, and in the evaluation of equating results in applied work; (b) practitioners who need to equate tests, including those with these responsibilities in testing companies, state testing agencies, and school districts; and (c) instructors in psychometric, measurement, and psychology programs.

Dr. Alina A. von Davier is a Strategic Advisor and a Director of Special Projects in Research and Development at Educational Testing Service (ETS). During her tenure at ETS, she has led an ETS Research Initiative called "Equating and Applied Psychometrics" and has directed the Global Psychometric Services Center. The center supports the psychometric work for all ETS international programs, including TOEFL iBT and TOEIC. She is a co-author of a book on the kernel method of test equating, an author of a book on hypothesis testing in regression models, and a guest co-editor for a special issue on population invariance of linking functions for the journal Applied Psychological Measurement.
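The "equating function" the description refers to can be illustrated with the classical linear (mean-sigma) method, in which a score on form X is mapped onto the scale of form Y by matching the two forms' means and standard deviations. This is a minimal sketch of that standard textbook method, not the kernel approach developed in the book; the sample score vectors are invented for illustration:

```python
import statistics

def linear_equating(scores_x, scores_y):
    """Return the mean-sigma linear equating function
        e(x) = (sd_y / sd_x) * (x - mean_x) + mean_y,
    which maps a form-X score onto the form-Y scale so that
    the equated scores match form Y's mean and spread."""
    mean_x, mean_y = statistics.mean(scores_x), statistics.mean(scores_y)
    sd_x, sd_y = statistics.stdev(scores_x), statistics.stdev(scores_y)
    return lambda x: (sd_y / sd_x) * (x - mean_x) + mean_y

# Hypothetical score samples from two test forms:
form_x = [10, 12, 14, 16, 18]   # mean 14, sd 3.16...
form_y = [20, 24, 28, 32, 36]   # mean 28, sd 6.32... (twice the spread)
equate = linear_equating(form_x, form_y)
print(equate(14))  # the form-X mean maps to the form-Y mean: 28.0
```

In practice, choosing among linear, equipercentile, and kernel equating functions (and checking the assumptions behind each) is exactly the kind of decision the book's statistical framework is meant to support.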
Educational tests and measurements - Evaluation --- Examinations - Design and construction --- Examinations - Interpretation --- Examinations - Scoring --- Psychological tests - Standards --- Scaling (Social sciences) --- Social sciences - Statistical methods --- Mathematical statistics --- Psychometrics --- Education - Assessment, Testing and Evaluation --- Statistics for Social Science, Behavioral Science, Education, Public Policy, and Law