Fair average

This Help applies to 32-bit Facets 3.87. Separate Help is available for 64-bit Facets 4.

A "Fair Average" for each element of the target facet is computed as though every observation of that element in the data was for an average element of every other facet. Then these artificial observations are averaged by the number of observations of the target element. This is "fair" in the sense that every element of the target facet encounters elements of the same logit values in the other facets. The Fair Average is computed using an average rater, essay, etc., omitting elements with extrme scores. Since the mathematics is non-linear (logistic), a rating by an average rater is not the same as the average of the ratings by all the raters, but is usually close to it.

 

We can check this by outputting an Anchorfile= with everything anchored. Then, for every facet except the target facet, change the anchor values of the elements to the average logit value of the elements in that facet. The expected average for each element in the target facet should then be the same as the fair average.

 

Meaning of the Fair Average:

 

The "Fair Average" is what the raters would have had if they had all rated the same "average" elements under the same "average" conditions. For instance, suppose one rater rated all the easy tasks (= high observed average rating)  but another rater rated all the hard tasks (= low observed average rating). Then the "Fair Average" says "What if both those raters had rated the same average task, what would the rating have been?". We can then use the Fair Average to compare the severity/leniency of the raters as though they had rated under the same conditions.

 

Example: What's the relationship between the raw score and Fair Averages?

They are both in the original rating-scale metric. The raw score on an item is the original observation. The Fair Average is the original observation adjusted for its context. Suppose that my performance receives a rating of 3 from a lenient rater. My Fair Average is 2.5. Your performance receives a rating of 3 from a severe rater. Your Fair Average is 3.5. Comparing your "3" with my "3" is unfair, because you had a severe rater (who generally gives low ratings) and I had a lenient rater (who generally gives high ratings). After adjusting for rater severity/leniency, our "Fair Average" ratings are 2.5 and 3.5. These give a fair comparison.

 

The "fair average" transforms the Rasch measure back into an expected average raw response value. This value is in a standardized environment in which all other elements interacting with this element have a zero measure or the mean measure of all elements in their facet. This is "fair" to all elements in the facet, e.g., this adjusts raw ratings for severe and lenient raters. This enables a "fair" comparison to be made in the raw score metric, in the same way that the measure does on the linear latent variable. Fair-M uses the facet element means as the baseline. Fair-Z uses the facet local origins (zero points) as the baseline. These are set by Fair average=.

 

The original purpose of Facets (in 1986) was to construct software that would automatically adjust for differences in rater severity/leniency. So this has been done for your data. No adjustment or trimming of usefully-fitting raters is necessary, regardless of their severity/leniency.

 

But Facets does assume that the average leniency of the raters is at the required standard of severity/leniency (usually by centering the rater facet at zero logits). If you see, from evidence external to the data, that the average leniency of the raters is too high or too low, then please

1. Include an additional "adjustment" facet in your analysis, with a ? for it in Models=.

2. Give this facet one element.

3. Anchor this element at the adjustment value.

4. Include the element in all the observations in the dataset by using dvalues=.

5. Perform an analysis of all the data, including the adjustment element. The rater leniencies are now correct, but the fair averages are not correct for the adjusted leniencies.

6. Output an anchorfile.

7. Replace the ? in the model specification with an X.

8. Analyze the anchorfile with the dataset.

9. There will be displacements of the size of the adjustment value. The fair averages are now correct for the adjusted leniencies.

 

This procedure will require several attempts before it produces the correct results.

 

Standard Error of the Fair Average:

 

The S.E. of the Fair Average is about the same as the S.E. of the Observed Average.

 

Output the Residualfile= to Excel. Sort on the facet of interest.

The S.E. of the observed average is =STDEV(cells of observations for the element)/SQRT(COUNT(observations for the element))
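
The same arithmetic can be scripted as a cross-check. Here is a minimal Python sketch; the ratings are invented for illustration and stand in for the observations exported for one element:

import math

# Ratings observed for one element (illustrative values only)
observations = [3, 4, 2, 3, 5, 4, 3]

n = len(observations)
mean = sum(observations) / n
# Sample standard deviation, matching Excel's STDEV
sd = math.sqrt(sum((x - mean) ** 2 for x in observations) / (n - 1))
se_observed_average = sd / math.sqrt(n)

print(round(mean, 2), round(se_observed_average, 2))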

 

Calculation of the Fair Average Score

The observed average score is the average rating received by the element. The logit measure is the linear measure implied by the observations. This is adjusted for the measures of whatever other elements of other facets participated in producing the observed data. It is often useful to transform these measures back into the original raw score metric to communicate their substantive meaning. Fair Average does this. It is the observed average adjusted for the measures of the other elements encountered. It is the observed average that would have been received if all the measures of the other elements had been located at the average measure of the elements in each of their facets.

 

For the Fair Average computation, elements with extreme scores are omitted. So the observed average when Totalscore=No matches the Fair Average.

 

A basic many-facet Rasch model for observation Xnmij is:

 

 log ( Pnmijk / Pnmij(k-1)) = Bn - Am - Di - Cj - Fk

 

where

Bn is the ability of person n, e.g., examinee: Mary,

Am is the challenge of task m, e.g., an essay: "My day at the zoo".

Di is the difficulty of item i, e.g., punctuation,

Cj is the severity of judge j, e.g., the grader: Dr. Smith,

Fk is the barrier to being observed in category k relative to category k-1,

where k=0 to t, and F0=0.

 

To compute the fair average for person n (or task m, item i, judge j), set all element parameters except Bn (or Am, Di, Cj) to their mean (or zero) values. Thus, the model underlying a fair rating, when Fair=Mean, is:

 

 log ( Pnmijk / Pnmij(k-1)) = Bn - Amean - Dmean - Cmean - Fk

 

or, when Fair=Zero, it becomes:

 

 log ( Pnmijk / Pnmij(k-1)) = Bn - Fk

 

and the Fair Average is the expected score, sum(k × Pnmijk), across categories k = 0 to t.
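
For readers who want to trace this arithmetic, here is a minimal Python sketch of the expected-score computation. The person measure, the facet means, and the thresholds F1-F3 are invented for illustration; only the formulas above are implemented.

import math

def expected_score(measure, thresholds):
    # Rating-scale model: log(Pk / P(k-1)) = measure - Fk, with F0 = 0,
    # so Pk is proportional to exp(k*measure - (F1 + ... + Fk)).
    logits = [0.0]
    for f in thresholds:
        logits.append(logits[-1] + measure - f)
    probs = [math.exp(x) for x in logits]
    total = sum(probs)
    # Fair average = expected score = sum of k * Pk across k = 0 to t
    return sum(k * p / total for k, p in enumerate(probs))

# Illustrative values (assumptions, not estimates from any dataset)
Bn = 1.2                              # person ability
Amean, Dmean, Cmean = 0.4, -0.1, 0.3  # mean measures of tasks, items, judges
F = [-1.0, 0.0, 1.0]                  # thresholds F1..F3 of a 0-3 rating scale

print(expected_score(Bn - Amean - Dmean - Cmean, F))  # Fair=Mean
print(expected_score(Bn, F))                          # Fair=Zero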

 


 

What Facets does ...

 

1. compute mean logit value of all the elements in each facet: MFlogit(f) for facet f.

 

2. for each observation, sum the mean logit values of the relevant facets that are modeled to generate the observation: sum(MFlogit(f)) = MFtotal

 

3. for each observation: subtract the mean logit from the element measure for each facet: flogit(f) =  element logit(f) - MFlogit(f)

  example:  element 3 of facet 4 has a measure of 3.2 logits

   flogit(4) = 3.2 - MFlogit(4)

   

4. for each observation, for each element and facet generating it, compute the "fair" expected score for a logit measure of MFtotal+flogit(f) on the rating scale relevant to the observation

 

5. accumulate the "fair" expected scores for each element across all the observations in which it participates

 

6. divide each element's accumulated "fair" expected scores by the number of observations for that element

 

7. the result is the "fair average" for the element
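
The seven steps above can be sketched in Python. Everything below (thresholds, facet means, element measures, and the observations themselves) is invented for illustration, the rating scale is a single 0-3 scale for every observation, and extreme-score elements and facet orientation (positive/negative facets), which Facets handles internally, are ignored.

import math
from collections import defaultdict

def expected_score(measure, thresholds):
    # Expected rating at 'measure' on a scale with thresholds F1..Ft (F0 = 0)
    logits = [0.0]
    for f in thresholds:
        logits.append(logits[-1] + measure - f)
    probs = [math.exp(x) for x in logits]
    total = sum(probs)
    return sum(k * p / total for k, p in enumerate(probs))

thresholds = [-1.0, 0.0, 1.0]                              # 0-3 rating scale
facet_mean = {"examinee": 0.0, "task": 0.2, "judge": 0.3}  # Step 1: MFlogit(f)

# Each observation lists the element (label, measure) from each participating facet
observations = [
    {"examinee": ("Mary", 1.1), "task": ("zoo", 0.5), "judge": ("Smith", 0.4)},
    {"examinee": ("Mary", 1.1), "task": ("farm", -0.3), "judge": ("Jones", 0.2)},
]

fair_sum = defaultdict(float)
fair_count = defaultdict(int)

for obs in observations:
    mf_total = sum(facet_mean[f] for f in obs)                 # Step 2: MFtotal
    for facet, (element, measure) in obs.items():
        flogit = measure - facet_mean[facet]                   # Step 3: flogit(f)
        fair = expected_score(mf_total + flogit, thresholds)   # Step 4: fair expected score
        fair_sum[(facet, element)] += fair                     # Step 5: accumulate
        fair_count[(facet, element)] += 1

for key in fair_sum:                                           # Steps 6-7: fair average
    print(key, round(fair_sum[key] / fair_count[key], 2))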

 


 

Example: Students are rated on 6 items. I want a fair-average for each student on each item.

1. Perform the analysis of all the data.

2. Output an Anchorfile=

3. Unanchor the students (remove the ,A for students in Labels=)

4. Analyze the anchorfile one item at a time by commenting out all items except 1 in Labels=

5. The reported fair averages for the students will then be the fair average for each student on that item.

6. To assemble these fair-averages, output a Scorefile= from each one-item analysis to, say, Excel, selecting student identification and fair average fields.
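
A sketch of step 6 in Python, assuming each one-item analysis has written its Scorefile= to Excel with a student-identification column and a fair-average column; the file names and column names below are placeholders to be matched to the actual output:

import pandas as pd

# One exported Scorefile= per one-item analysis (file and column names are placeholders)
files = {"Item 1": "score_item1.xlsx", "Item 2": "score_item2.xlsx"}

merged = None
for item, path in files.items():
    scores = pd.read_excel(path)[["Student", "Fair Average"]]
    scores = scores.rename(columns={"Fair Average": item})
    merged = scores if merged is None else merged.merge(scores, on="Student")

# One row per student, one column of fair averages per item
merged.to_excel("fair_averages_by_item.xlsx", index=False)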

 


 

Verifying the Fair Average

 

Analyze your data in Facets. Choose a Facet and an element whose Fair Average in Table 7 you want to verify.

 

In Facets,

"Output Files Menu"

"Residuals File"

"Select fields to output"

"Uncheck all"

Check - observation, expected value, element numbers

OK

Output to Excel.

 

In Excel, sort on the element numbers for your Facet.

Delete all rows except those for your element of your Facet.

 

Count and Sum the observations for your element of your Facet.

The count and sum should agree with Facets Table 7.

Average the observations. This should agree with the Table 7 Observed Average.

 

Now for the Fair Average:  This is the expected value for your element when it encounters elements of mean difficulty (usually 0) in all the other facets. Extreme-score elements are excluded from computing the mean.

 

From Facets Table 7, choose elements near the mean of the other facets.

 

In the Excel table, find the expected values for your element in combination with the mean elements of the other facets. These expected values should agree with the Table 7 Fair Average.
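
The same checks can be scripted once the Residuals File has been exported. A sketch, assuming the export is residuals.xlsx and that the observation, expected value, and element-number columns appear as Obs, Exp, and Judge; these names are placeholders to be matched to the fields actually selected:

import pandas as pd

df = pd.read_excel("residuals.xlsx")   # exported Residuals File (placeholder name)

element = 5                            # element of interest in the "Judge" facet
rows = df[df["Judge"] == element]

print(len(rows))                       # should agree with the count in Table 7
print(rows["Obs"].sum())               # should agree with the total score in Table 7
print(rows["Obs"].mean())              # should agree with the Table 7 Observed Average

# For the Fair Average: inspect the expected values on the rows where the elements
# of the other facets are near their facet means
print(rows["Exp"].describe())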

 


 

Problems with the Fair Average

 

If the "Fair averages" do not monotonically increase with the element measures, then

the misalignment of fair score with Rasch measures can occur when some items/task/raters etc. have different rating scales to other items/task/raters and all candidates are not rated on all rating scales.

 

For instance,

you do task 1 which has a rating scale from 0-10

but I do task 2 which has a rating scale from 0-5

 

We both have a measure of 2.00 logits.

 

Then your "fair score" will be 8.2 on the 0-10 item

But my "fair score" will be 4.1 on the 0-5 item

 

To get around this,

1. Write out an Anchorfile= from the Facets analysis.

2. Construct dummy data in which every candidate has a rating on every task (it doesn't matter what the value of the rating is).

3. Analyze the dummy data with the anchored measures from step 1.

4. The "fair score" for each candidate will be averaged across all the tasks.

