Table 4 Unexpected Responses
Table 4 lists the responses whose standardized residuals equal or exceed the value specified by Unexpected=. They are listed in the order specified by Usort=, and up to T4maximum= responses are reported. Juxtapose= and Left-hand= control the layout of the table.
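For example, the specification file might contain control lines like the following; the particular values shown here are illustrative choices, not Facets defaults:

  Unexpected = 2     ; report observations whose standardized residuals are 2 or more in size
  Usort = (1,2,3)    ; list them sorted by the elements of facets 1, 2 and 3
  T4maximum = 100    ; report at most 100 unexpected responses in Table 4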
Table 4 shows outlying (i.e., very unexpected) observations. Look for patterns. Is the same examinee, rater, item, ... appearing many times? Perhaps this indicates unusual performance or a data-entry error. If it is a rater, perhaps there is a misunderstanding of the rating criteria. An outlying observation may be irrelevant for your purposes and so can be bypassed as missing data using Models= ...,M. This happens, for instance, when the interest is in determining task difficulty, but some examinees misunderstand the task and so are rated low. Their low ratings make the task appear more difficult, but the task will return to its correct difficulty once the task instructions are revised to remove the misunderstanding.
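A minimal sketch of such a bypass, assuming a three-facet analysis in which the examinees are facet 2 and element 17 is the examinee to be screened out (the element number and the R9 rating-scale model are illustrative):

  Models =
    ?,17,?,M     ; observations involving examinee 17 (facet 2) are treated as missing
    ?,?,?,R9     ; all other observations: rating scale with highest category 9
  *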
Look carefully at these residuals, inspecting them for signs of:
1) Incorrectly formatted data. Were the original data correctly reformatted into Facets data?
2) Incorrect application of a scoring key. Data that was originally in multiple-choice question (MCQ) or some other coded form may not have been correctly scored.
3) Reversed rating scales (or partial credit items). Survey items containing the word "not" or carrying a negative implication may need to have their scoring reversed in order to align with the hypothesis that "more of the variable implies a higher observed response". You can also use model statements with "-?" terms to reverse the direction of the variable for particular combinations of elements (see the sketch after this list).
4) Idiosyncratic or "off-variable" behavior. Wild guessing, response sets, frequent selection of "don't know" are symptoms that the participants are not exhibiting the type of behavior that the instrument is intended to measure. For the construction of meaning, and from the measurement point of view, such data are better recoded "missing" (by means of "M" models, or ";" in their data lines).
5) Patterns of unexpected residuals loading onto certain elements. Facets reports the misfitting elements in their order in the data file. You may find it useful to transfer this Table into your word processor and sort it by standardized-residual size, facet, etc. Systematic patterns of misfit may prompt a bias analysis or the reformulation of your Models= specifications.
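As a sketch of points 3 and 4, assuming a three-facet analysis with examinees as facet 2 and items as facet 3 (the element numbers and the R9 model are hypothetical):

  Models =
    ?,-?,4,R9    ; item 4 is negatively worded: "-?" reverses the direction of
                 ; facet 2 for observations of this item
    ?,?,?,R9     ; all other observations are modeled normally
  *

  Data =
  2,5,1,3        ; an ordinary observation: elements 2, 5, 1 and a rating of 3
  ;2,5,4,2       ; a ";" at the start of a data line comments it out, so this
                 ; observation is bypassed as if it were missing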
There will always be some degree of misfit to the measurement model. You must decide when the fit of the data to the model is good enough for your purposes.
Table 4.1 Unexpected Responses (sorted by 1,2,3).
+------------------------------+--------------------------------------+----------+
| Cat Score  Exp.  Resd  StRes | Nu Senior sc  Nu Junior   Nu Traits  | Sequence |
|------------------------------+--------------------------------------+----------|
|   2     2   5.2  -3.2   -2.0 |  2 Brahe       5 Edward    1 Attack  |    57    |
|   2     2   5.2  -3.2   -2.0 |  2 Brahe       5 Edward    4 Daring  |    60    |
|   6     6   3.3   2.7    2.2 |  2 Brahe       6 Fred      3 Clarity |    64    |
|   9     9   5.3   3.7    2.3 |  2 Brahe       1 Anne      1 Attack  |    37    |
|------------------------------+--------------------------------------+----------|
| Cat Score  Exp.  Resd  StRes | Nu Senior sc  Nu Junior   Nu Traits  | Sequence |
+------------------------------+--------------------------------------+----------+
Column headings have the following meanings:
Cat = Observed value of the category as entered in the data file.
Score = Value of the category after it has been recounted cardinally, commencing with "0" for the lowest observed category.
Exp. = Expected score based on the current estimates.
Resd = Residual, the difference between Score and Exp.
StRes = The residual standardized by its standard error, reported here to one decimal place; see the Residuals file for a more exact value. StRes is expected to approximate a unit normal distribution.
Sequence = Sequence number of observation in the data file
For each modeled facet:
Nu is the number of the element in the specifications
The name of the facet appears above the names of its elements
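As a check on these definitions, the first row of Table 4.1 above works out as follows; the standard error of about 1.6 is inferred from the reported StRes, since it is not shown in the table:

  Resd  = Score - Exp. =  2 - 5.2   = -3.2
  StRes = Resd / S.E.  = -3.2 / 1.6 = -2.0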