This paper makes an in-depth comparison of the PISA (OECD) and TIMSS (IEA) mathematics assessments conducted in 2003. First, a comparison of survey methodologies is presented, followed by an examination of the mathematics frameworks in the two studies. The methodologies and frameworks form the basis for explaining the observed differences in PISA and TIMSS results. At the country level, Western countries tend to perform relatively better in PISA than in TIMSS, whereas Asian and Eastern European countries tend to do better in TIMSS than in PISA. The paper goes beyond mere conjecture about these differences and provides supporting evidence through regression analyses. The analyses show that performance differences at the country level can be attributed to the content balance of the two tests, as well as to their sampling definitions: age-based in PISA and grade-based in TIMSS. Apart from mathematics achievement, the paper also compares results from the two studies on measures of self-confidence in mathematics, and gender differences are examined in the light of the contrasting results of the two studies. Overall, the paper provides a comprehensive comparison between PISA and TIMSS and, in doing so, sheds some light on the interpretation of results from large-scale surveys more generally.
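The regression evidence mentioned above can be illustrated with a minimal sketch. This is not the paper's actual analysis: the variable names (score_gap, content_balance, sampling_gap) and the data values are hypothetical, invented only to show the form a country-level regression of PISA–TIMSS performance differences on content-balance and sampling-definition measures might take.

```python
# Hypothetical sketch of a country-level regression of the kind the abstract
# describes. All variable names and values below are invented for illustration;
# they are NOT taken from the paper or from PISA/TIMSS data.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    # standardised PISA-minus-TIMSS mathematics score difference per country
    "score_gap":       [0.35, 0.28, -0.40, -0.22, 0.10, -0.15],
    # hypothetical index of how closely a country's curriculum matches the
    # PISA content balance (higher = closer to PISA's emphasis)
    "content_balance": [0.62, 0.55, 0.30, 0.35, 0.48, 0.40],
    # hypothetical gap (in years of schooling) between the age-based PISA
    # cohort and the grade-based TIMSS cohort
    "sampling_gap":    [0.2, 0.1, 0.8, 0.6, 0.3, 0.5],
})

X = sm.add_constant(df[["content_balance", "sampling_gap"]])
model = sm.OLS(df["score_gap"], X).fit()
print(model.summary())
```

In a sketch like this, the fitted coefficients indicate how much of the cross-country performance gap is associated with content balance and with the age-based versus grade-based sampling definitions, which mirrors the kind of evidence the abstract describes.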
Electronic data processing --- Electronic digital computers --- Programming languages (Electronic computers) --- #TCPW P1.0 --- #TCPW P1.2 --- 681.3*A1 --- 681.3*A1 Introductory and survey --- Introductory and survey --- Computer languages --- Computer program languages --- Computer programming languages --- Machine language --- Languages, Artificial --- Automatic digital computers --- Computers, Electronic digital --- Digital computers, Electronic --- Computers --- Hybrid computers --- Sequential machine theory --- ADP (Data processing) --- Automatic data processing --- Data processing --- EDP (Data processing) --- IDP (Data processing) --- Integrated data processing --- Office practice --- Automation --- Information systems
This book is a valuable read for a diverse group of researchers and practitioners who analyze assessment data and construct test instruments. It focuses on the use of classical test theory (CTT) and item response theory (IRT), which are often required in psychology (e.g. for measuring psychological traits), health (e.g. for measuring the severity of disorders), and education (e.g. for measuring student performance), and it makes these analytical tools accessible to a broader audience. Having taught assessment subjects to students from diverse backgrounds for a number of years, the three authors have a wealth of experience in presenting educational measurement topics, in-depth concepts, and applications in an accessible format. As such, the book addresses the needs of readers who use CTT and IRT in their work but do not necessarily have an extensive mathematical background. The book also sheds light on common misconceptions in applying measurement models and presents an integrated approach to different measurement methods, such as contrasting CTT with IRT and multidimensional IRT models with unidimensional ones. Wherever possible, comparisons between models are made explicit. In addition, the book discusses test equating and differential item functioning, as well as Bayesian IRT models and plausible values, using simple examples. It can serve as a textbook for introductory courses on educational measurement, as supplementary reading for advanced courses, or as a reference guide for researchers interested in analyzing student assessment data.
Education --- Educational evaluation. --- Research --- Methodology. --- Educational assessment --- Educational program evaluation --- Evaluation research in education --- Instructional systems analysis --- Program evaluation in education --- Self-evaluation in education --- Evaluation --- Mathematical statistics. --- Statistics. --- Educational tests and measurement. --- Computer software. --- Statistical Theory and Methods. --- Statistics for Social Sciences, Humanities, Law. --- Assessment, Testing and Evaluation. --- Mathematics in the Humanities and Social Sciences. --- Mathematical Software. --- Statistical analysis --- Statistical data --- Statistical methods --- Statistical science --- Mathematics --- Econometrics --- Software, Computer --- Computer systems --- Statistical inference --- Statistics, Mathematical --- Statistics --- Probabilities --- Sampling (Statistics) --- Statistics. --- Assessment. --- Mathematics. --- Social sciences. --- Behavioral sciences --- Human sciences --- Sciences, Social --- Social science --- Social studies --- Civilization --- Math --- Science
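To make the CTT/IRT contrast described in the book's abstract concrete, the following is a minimal sketch, not an excerpt from the book: the response matrix is fabricated, and the two functions illustrate only a classical reliability estimate (Cronbach's alpha) and the Rasch (1PL) item response function.

```python
# Minimal illustration of one CTT quantity (Cronbach's alpha) and one IRT
# quantity (the Rasch / 1PL response probability). The data are invented.
import numpy as np

# Rows = persons, columns = items; 1 = correct, 0 = incorrect (fabricated data).
responses = np.array([
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [1, 1, 0, 1, 0],
    [0, 1, 0, 0, 0],
])

def cronbach_alpha(x: np.ndarray) -> float:
    """CTT internal-consistency estimate based on item and total-score variances."""
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def rasch_probability(theta: float, b: float) -> float:
    """IRT: probability of a correct response for ability theta on an item of difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

print(f"Cronbach's alpha: {cronbach_alpha(responses):.3f}")
print(f"P(correct | theta=0.5, b=-0.2): {rasch_probability(0.5, -0.2):.3f}")
```

The contrast shown here is that CTT summarizes reliability at the level of the whole test score, whereas the Rasch model works at the item level and places persons and items on a common scale, which is the kind of comparison between measurement methods the book makes explicit.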
The PISA 2000 Technical Report describes the complex methodology underlying PISA 2000, along with additional features related to the implementation of the project, at a level of detail that allows researchers to understand and replicate its analyses. It presents information on the test and sample design, the methodologies used to analyse the data, the technical features of the project, and the quality-control mechanisms.