Monday, November 1, 2010

Rasch Model IRT Demystified



Questionable No Child Left Behind (NCLB) test cut scores have put Arkansas, Texas, New York, and now Illinois in the news this year. How those cut scores are set deserves scrutiny. The traditional method of simply counting right marks (classical test theory, or CTT) is not used.

Instead, scores are set with the Rasch model, an item response theory (IRT) method that estimates student ability and question difficulty. The Rasch model is an accepted way to calibrate questions for computerized adaptive testing (CAT), where a student answers only enough questions to determine pass or fail. That leaves open the question of how psychometricians, education officials, and politicians use the Rasch model on NCLB tests.
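For readers who want the arithmetic behind that estimate: unlike CTT's raw count of right marks, the dichotomous Rasch model puts each student's ability and each question's difficulty on the same logit scale and predicts the probability of a right mark from their difference. A minimal sketch in Python (the function name and values are mine, for illustration only):

    import math

    def rasch_probability(ability, difficulty):
        """Dichotomous Rasch model: probability of a right mark,
        with ability and difficulty both in logits."""
        return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

    # A student one logit above a question's difficulty is expected
    # to mark it right about 73% of the time.
    print(rasch_probability(1.0, 0.0))  # 0.7310...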

How tests are scored should not be a mystery known only to those who benefit directly from “higher test scores” that may have no other meaning or use. A detailed examination can also show whether the Rasch model can make useful sense of classroom test results for instruction and student counseling.

This blog will now pause a bit to compare the printouts from the Winsteps Rasch model IRT analysis (student ability and item difficulty) with the Power Up Plus (right mark scoring, or RMS) printouts in a new blog: Rasch Model Audit.

Power Up Plus (FreePUP) prints out two student counseling reports: 

Table 3. Student Counseling Mark Matrix with Scores and Item Difficulty contains the same student marks that Ministep (the free version of Winsteps) starts with when doing a Rasch model IRT test score analysis. The most able students and the least difficult items appear in the upper left; the least able students and the most difficult items appear in the lower right. The relationships between student, item, mark, and test are presented in a form both students and teachers can readily use in counseling. (A sketch of this row-and-column ordering follows the two table descriptions below.)


Table 3a. Student Counseling Mark Matrix with Mastery/Easy, Unfinished, and Discriminating (MUD) Analysis re-tables the same data to assist in improving instruction and testing. Winsteps Rasch model IRT quantifies each of the marks on these two tables (see the second sketch below); this is a most interesting and powerful addition to RMS. PUP Tables 3 and 3a will be used as working papers in this audit of the Rasch model.
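The ordering in Table 3 can be reproduced with nothing more than sorting. PUP's actual table also carries scores and item difficulties; this sketch shows only the ordering idea, with a made-up 0/1 mark matrix and my own helper names:

    def order_mark_matrix(marks):
        """Sort a 0/1 mark matrix so the most able students and the
        least difficult items land in the upper left, as in PUP Table 3.
        marks[s][i] is student s's mark on item i (1 = right, 0 = wrong)."""
        students = range(len(marks))
        items = range(len(marks[0]))
        # Easiest items (most right marks) first, left to right.
        item_order = sorted(items, key=lambda i: sum(row[i] for row in marks),
                            reverse=True)
        # Most able students (highest scores) first, top to bottom.
        student_order = sorted(students, key=lambda s: sum(marks[s]),
                               reverse=True)
        return [[marks[s][i] for i in item_order] for s in student_order]

    marks = [[1, 0, 1],   # student score 2
             [1, 1, 1],   # student score 3
             [0, 0, 1]]   # student score 1
    for row in order_mark_matrix(marks):
        print(row)   # prints [1, 1, 1], then [1, 1, 0], then [1, 0, 0]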
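In Rasch terms, "quantifying each mark" means every cell in the matrix gets an expected score, and the gap between the observed mark and that expectation (the residual) flags surprising marks. Winsteps' real output is richer (standardized residuals and fit statistics); this sketch, with made-up student names and logit estimates, shows only the basic arithmetic:

    import math

    def expected_score(ability, difficulty):
        """Rasch expected probability of a right mark (logits)."""
        return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

    # Made-up logit estimates; real values come from a Ministep/Winsteps run.
    abilities = {"Ann": 1.2, "Bob": -0.4}
    difficulties = {"Q1": -0.8, "Q2": 0.5}
    observed = {("Ann", "Q1"): 1, ("Ann", "Q2"): 1,
                ("Bob", "Q1"): 1, ("Bob", "Q2"): 0}

    # Residual = observed mark minus expected score; large residuals
    # flag surprising marks worth a closer look in counseling.
    for (student, item), mark in observed.items():
        e = expected_score(abilities[student], difficulties[item])
        print(f"{student} {item}: mark {mark}, expected {e:.2f}, "
              f"residual {mark - e:+.2f}")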

On return, this blog will continue with the application of Knowledge and Judgment Scoring (KJS) and the Rasch model to promote student development (using all levels of thinking). We need accurate, honest, and fair test results presented in a manner that is easy to understand and use. KJS does this, and it also promotes student development. We also need to detect sooner when school and state officials are releasing meaningless test results (it took three years in New York). Both needs require some of the same insights.

Next: Rasch Model Audit
