Anchored estimation
How many anchor items or anchor persons?
The percentage of anchor items is less important than the number of anchor items. We need enough anchor items to be statistically certain that a grossly incorrect value for one anchor item will not distort the equating. In my experiments, 10 were needed to prevent a large accidental deviation in one anchor item from distorting the measurement. Please remember that the choice of 10 anchor items is made before data cleaning. We expect the 10 anchor items to malfunction somewhat; if we did not, only one anchor item would be needed.
Suppose that we have 10 anchor items. Then we discover that one anchor item has changed its difficulty in the new test administration. In our analysis of the new test administration, we unanchor that item, so that there are now only 9 anchor items. Have we failed the 10-anchor-item criterion? No! The remaining 9 anchor items are now better than the original 10 anchor items.
Anchoring or fixing parameter estimates (measures) is done with IAFILE= for items, PAFILE= for persons, and SAFILE= for response structures.
The anchor values are assigned to the anchored parameters (persons, items, steps). Then the estimation routine skips over estimating the anchored parameters, except for "displacement" values which are estimated during the fit-analysis phase. Displacements are the approximate differences between the anchor values and their freely estimated values. The anchor value for an anchored parameter participates in the estimation of all the other parameters in the same way as the current estimate of a free parameter does.
From the estimation perspective under JMLE, anchored and unanchored items appear exactly alike. The only difference is that anchored values are not changed at the end of each estimation iteration, but unanchored estimates are. JMLE converges when "observed raw score = expected raw score based on the estimates". For anchored values, this convergence criterion is never met, but the fit statistics etc. are computed and reported by Winsteps as though the anchor value is the "true" parameter value. Convergence of the overall analysis is based on the unanchored estimates. If large displacements are shown for the anchored items or persons, try changing the setting of ANCESTIM=.
Using pre-set "anchor" values to fix the measures of items (or persons) in order to equate the results of the current analysis to those of other analyses is a form of "common item" (or "common person") equating. Unlike common-item equating methods in which all datasets contribute to determining the difficulties of the linking items, the current anchored dataset has no influence on those values. Typically, the use of anchored items (or persons) does not require the computation of equating or linking constants. During an anchored analysis, the person measures are computed from the anchored item values. Those person measures are used to compute item difficulties for all non-anchored items. Then all non-anchored item and person measures are fine-tuned until the best possible overall set of measures is obtained. Discrepancies between the anchor values and the values that would have been estimated from the current data can be reported as displacements. The standard errors associated with the displacements can be used to compute approximate t-statistics to test the hypothesis that the displacements are merely due to measurement error.
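As a hypothetical illustration of that last point, the approximate t-statistic for each anchored item is simply its displacement divided by its standard error; values beyond roughly ±2 suggest that the displacement is more than measurement error. The displacement and standard-error numbers below are invented for illustration, not from any real analysis:

```python
# Hypothetical anchored-item displacements (in logits) and their standard
# errors, as might be reported in an anchored analysis. Invented values.
displacements = [0.12, -0.45, 0.06]
standard_errors = [0.10, 0.11, 0.09]

for d, se in zip(displacements, standard_errors):
    t = d / se  # approximate t-statistic: displacement relative to its S.E.
    verdict = "possible drift" if abs(t) > 2.0 else "within measurement error"
    print(f"displacement {d:+.2f}  t = {t:+.2f}  ({verdict})")
```

Here only the second item (t ≈ -4.1) would be flagged for a closer look.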
Example: I have a study in which an instrument is administered serially over time to each participant. I want to estimate the trait for subsequent occasions treating the baseline parameter estimates as fixed.
Here's an approach:
1.Format the data so that each data row is a set of 7 items followed by a person label containing a time-point code (say, 01, 02, 03, 04, ...) in the first two columns and a participant number.
2.Analyze each time-point in a separate file, or separately in a combined file using PSELECT=01, then PSELECT=02, etc., to verify that the data for each time-point are correctly entered.
3.Analyze the time-point 01 file, or the combined file with PSELECT=01. Output IFILE=01if.txt and SFILE=01sf.txt.
4.Analyze all the data, in separate files or a combined file, with no PSELECT= and with IAFILE=01if.txt SAFILE=01sf.txt.
Output PFILE=filename.txt or PFILE=filename.xls containing a measure for every time-point + participant, expressed in the time-point 01 frame of reference.
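The baseline run (step 3) might look something like the hypothetical control file below. The column layout, codes, and file names are illustrative assumptions, not prescriptions; only the keywords are standard Winsteps specifications:

```
; hypothetical control file for the time-point 01 (baseline) calibration
TITLE = "Time-point 01 baseline calibration"
ITEM1 = 1          ; first response is in column 1
NI = 7             ; 7 items per record
NAME1 = 9          ; person label (time-point code + participant id) starts in column 9
CODES = 01         ; valid response codes (assumed dichotomous)
PSELECT = 01       ; keep only time-point 01 persons
IFILE = 01if.txt   ; output: item difficulty estimates
SFILE = 01sf.txt   ; output: response-structure calibrations
&END
Item 1
...
END NAMES
```

For the anchored run (step 4), drop PSELECT=, replace IFILE=/SFILE= with IAFILE=01if.txt and SAFILE=01sf.txt, and add PFILE=filename.txt to output the person measures.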
Analysis Procedure with Anchored Items
If one set of item parameter estimates is treated as the "truth" (definitive), then here is the procedure I recommend:
1.A free analysis of the new data is performed first to verify that the new data are correct, e.g., no item miskeys, data-entry errors, miscoding, etc.
2.An item-anchored analysis of the new data. Item displacements tell us whether items have drifted, e.g., due to item exposure. Action can then be taken, such as deleting the exposed item from the new data, or recoding the drifted item as a new item in the new data.
3.The final item-anchored analysis of the new data, which is used for reporting etc., and which also produces the person measures on the same measurement scale as all the other datasets that use the anchored items. Reliability indexes, fit statistics, etc., require the final person measures, so they are reported here.
Estimation with anchored items (or persons)
1. The anchor values for the items (or persons) are treated as their "true" values. Other items start at 0 logits.
2. During the estimation:
a) all the item difficulties -> the person estimates
b) for anchored items, their displacements are estimated and averaged; all the person estimates are adjusted by the average anchored-item displacement
c) all the person estimates -> updated item difficulties (except anchor values)
d) (a), (b), and (c) repeat until the biggest change in any estimate is less than the LCONV= value
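The alternating scheme above can be sketched for the dichotomous Rasch model. This is a minimal illustration, not Winsteps' actual implementation: it uses one damped Newton step per parameter per cycle, assumes complete 0/1 data with no extreme scores, and treats the displacement as the approximate difference between an item's free estimate and its anchor value:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def anchored_jmle(data, anchors, max_iter=500, lconv=1e-4):
    """Alternating Rasch (JMLE-style) estimation with fixed item difficulties.

    data    : list of persons, each a list of 0/1 responses (complete data)
    anchors : dict {item index: anchored difficulty in logits}
    Returns (person measures, item difficulties, anchored-item displacements).
    """
    n_persons, n_items = len(data), len(data[0])
    b = [anchors.get(i, 0.0) for i in range(n_items)]   # item difficulties
    theta = [0.0] * n_persons                           # person measures
    disp = {}
    for _ in range(max_iter):
        biggest = 0.0
        # (a) item difficulties -> person estimates (one damped Newton step each)
        for n in range(n_persons):
            p = [logistic(theta[n] - b[i]) for i in range(n_items)]
            resid = sum(data[n]) - sum(p)               # observed - expected raw score
            info = max(sum(q * (1.0 - q) for q in p), 1e-6)
            step = max(-1.0, min(1.0, resid / info))
            theta[n] += step
            biggest = max(biggest, abs(step))
        # (b) displacements of anchored items (approx. free estimate minus anchor);
        #     persons are adjusted by the average anchored-item displacement
        for i in anchors:
            p = [logistic(theta[n] - b[i]) for n in range(n_persons)]
            resid = sum(row[i] for row in data) - sum(p)
            info = max(sum(q * (1.0 - q) for q in p), 1e-6)
            disp[i] = -resid / info
        if disp:
            shift = sum(disp.values()) / len(disp)
            theta = [t - shift for t in theta]
        # (c) person estimates -> updated item difficulties (except anchor values)
        for i in range(n_items):
            if i in anchors:
                continue
            p = [logistic(theta[n] - b[i]) for n in range(n_persons)]
            resid = sum(row[i] for row in data) - sum(p)
            info = max(sum(q * (1.0 - q) for q in p), 1e-6)
            step = max(-1.0, min(1.0, -resid / info))
            b[i] += step
            biggest = max(biggest, abs(step))
        if biggest < lconv:                             # (d) LCONV=-style criterion
            break
    return theta, b, disp
```

Note how the anchored difficulties in `b` are never updated; they only participate in estimating the other parameters, while their residuals feed the displacement report.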
Help for Winsteps Rasch Measurement and Rasch Analysis Software: www.winsteps.com. Author: John Michael Linacre