Fair averages based on = Mean
Fair average scores are reported in the output for each element. These are the scores that correspond to the logit measures, as though each element of that facet had encountered elements of similar difficulty in the other facets. Fair averages are intended for communicating the measures as adjusted ratings. This is useful when the audience has a strong conceptualization of the rating scale, but little interest in, or understanding of, the measurement system.
Fair average = Mean
This provides a norm-referenced average: the measures of all elements (except the current element) are set to the mean values of the elements in their facets. It uses the mean measure of the elements of each facet (except the current element) as the reference for computation. This is the default option. It is shown as Fair(M) in Table 7.
Fair average = Zero
This provides a criterion-referenced average: the measures of all elements (except the current element) are set to zero (in logits, or the equivalent under user-scaling). It uses the origin of the measurement scale of each facet (except the current element) as the reference for computation. This was the default option in early versions of Facets. It is shown as Fair(Z) in Table 7.
For the non-centered facet (typically persons), these two fair averages are usually about the same. For a centered facet (e.g., items or raters) they differ. So, for a rater, do you want the "fair average" to be the rating given by that rater to a person with an "average" measure, or to a person with a "zero" measure? You may need to try both to identify which is actually what you want to report.
Look at your non-centered facet. Do you want the fair averages of all elements to be referenced to a person at the Umean= value (Fair=Zero) or to a person at the person-sample mean (Fair=Mean)? If you are describing performances on this test, then use Fair=Mean.
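For readers who want to see the arithmetic, here is a minimal Python sketch of the idea behind the two options. It is not Facets' internal computation; the measures, thresholds, and facet means are invented for illustration. It computes the expected rating for one rater when the other facets are referenced at their element means (Fair=Mean) or at zero (Fair=Zero).

```python
# A minimal sketch, NOT Facets' internal computation: a fair average is the
# expected rating for an element when the elements of the other facets are
# set to a reference value -- their facet means (Fair=Mean) or zero (Fair=Zero).
# All measures, thresholds, and facet means below are invented for illustration.
import math

def expected_rating(total_logit, thresholds):
    """Expected rating under the Andrich rating-scale model.
    total_logit = ability - difficulty - severity; thresholds = F1..Fm (F0 = 0)."""
    log_numerators = [0.0]                      # category 0
    for f in thresholds:
        log_numerators.append(log_numerators[-1] + (total_logit - f))
    probs = [math.exp(v) for v in log_numerators]
    total = sum(probs)
    return sum(k * p / total for k, p in enumerate(probs))

thresholds = [-1.8, -0.6, 0.6, 1.8]   # assumed Rasch-Andrich thresholds, 0-4 scale
rater_severity = 0.50                 # the element whose fair average we want
person_mean, item_mean = 0.75, 0.0    # persons non-centered, items centered

# Fair(M): other facets referenced at their element means.
fair_m = expected_rating(person_mean - item_mean - rater_severity, thresholds)
# Fair(Z): other facets referenced at zero, the origin of the scale.
fair_z = expected_rating(0.0 - 0.0 - rater_severity, thresholds)

# Because the person facet is non-centered (mean != 0), the two differ for this rater.
print(f"Fair(M) = {fair_m:.2f}, Fair(Z) = {fair_z:.2f}")
```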
Example 1: An examination board wishes to use criterion-referenced fair scores for rater comparisons, because a person at "zero" logits is at the pass-fail point. If students are the non-centered facet, then the fair scores for the students are the same under Fair=Mean and Fair=Zero. For the raters, items, etc., Fair=Zero is more student-sample-independent.
Fair score = Zero
Example 2: An examination board wishes to use ratings based on an average task rated by an average rater:
Fair score = Mean
Example 3: I want to use a fair average of 2 (with Fair=Zero) as a cut-score. No person has exactly this fair average. How can I find the corresponding person measure?
One approach (a numeric sketch of the idea appears after this list):
1. Analyze your data and output an Anchorfile=.
2. Look for person measures with a Fair Average near 2.
3. In the Anchorfile=, change the anchored person measures so that they cover the range discovered in step 2. There is no need to change the data.
4. Analyze the modified anchor file and see which person measure has a Fair Average near enough to 2.0.
5. Repeat steps 2-4 if needed.
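The target in step 4 ("a Fair Average near enough to 2.0") can also be checked numerically. The sketch below, assuming known Rasch-Andrich thresholds and the other facets referenced at zero, searches for the person measure whose expected rating is exactly 2.0. It illustrates what the anchor-file iteration converges on; it is not a substitute for re-running Facets.

```python
# A hedged illustration of what steps 2-5 converge on: the person measure
# whose expected rating, with the other facets referenced at zero (Fair=Zero),
# equals the target cut-score of 2.0. The thresholds are assumed values,
# not output from a real analysis.
import math

def expected_rating(measure, thresholds):
    log_numerators = [0.0]
    for f in thresholds:
        log_numerators.append(log_numerators[-1] + (measure - f))
    probs = [math.exp(v) for v in log_numerators]
    total = sum(probs)
    return sum(k * p / total for k, p in enumerate(probs))

thresholds = [-1.8, -0.6, 0.6, 1.8]   # assumed thresholds for a 0-4 scale
target = 2.0

# Bisection works because the expected rating rises monotonically with the measure.
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if expected_rating(mid, thresholds) < target:
        lo = mid
    else:
        hi = mid
print(f"Person measure with a Fair(Z) average of {target}: {(lo + hi) / 2.0:.2f} logits")
```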
Another approach (a Python version of the trend-line step appears after this list):
1. Analyze your data and output the Scorefile= for the persons to Excel.
2. Sort on the Fair Average column.
3. Delete all rows with values far from a Fair Average of 2.0.
4. Plot Measure against Fair Average.
5. Tell Excel to draw the trend line and display its equation.
6. Put the target fair average (2.0 in this example) into the equation to obtain the corresponding measure.
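The same trend-line step can be done outside Excel. The sketch below fits a least-squares line of Measure on Fair Average and evaluates it at the target fair average of 2.0; the data pairs are invented placeholders to be replaced by the columns from your own Scorefile= output.

```python
# Least-squares trend line of Measure on Fair Average, evaluated at the
# target fair average of 2.0. The pairs below are invented placeholders --
# substitute the Fair Average and Measure columns from your Scorefile=.
fair_avg = [1.7, 1.8, 1.9, 2.1, 2.2, 2.3]             # Fair Average (near 2.0)
measure  = [-0.62, -0.40, -0.21, 0.19, 0.42, 0.61]    # Measure (logits)

n = len(fair_avg)
mean_x = sum(fair_avg) / n
mean_y = sum(measure) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(fair_avg, measure))
         / sum((x - mean_x) ** 2 for x in fair_avg))
intercept = mean_y - slope * mean_x

target = 2.0
print(f"Measure at Fair Average {target}: {slope * target + intercept:.2f} logits")
```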