Rating (or partial credit) scale (or Response model) =

The Rating (or partial credit) scale= statement provides a simple way to give further information about the scoring model beyond that in the Model= specification. You can name each category of a scale, supply Rasch-Andrich threshold values (step calibrations) as anchoring or starting values, and recode observations.

 

Components of Rating (or partial credit) scale=

Format:

Rating scale = user name, structure, scope, numeration

user name of scale or response model

any set of alphanumeric characters, e.g., "Likert". To be used, it must match exactly a user name specified in a Model= statement.

 

structure

D, R, B, P = any scale code in the table below, except a user name

scope

S = Specific (or # in a Model= specification) means that each occurrence of this scale name in a different Model= specification refers to a separate copy of the scale, with its own Rasch-Andrich thresholds (step calibrations), though each has the same number of categories, category names, etc.

G = General means that every reference to this scale in any Model= specification refers to the same, single manifestation of the scale.

numeration

O = Ordinal means that the category labels are arranged ordinally, representing ascending, but adjacent, qualitative levels of performance regardless of their values.

K = Keep means that the category labels are cardinal numbers, such that all intermediate numbers represent levels of performance, regardless of whether they are observed in any particular data set.

 

Scale code (in Model= and Rating Scale=)

Meaning for this model

D

Dichotomous data. Only 0 and 1 are valid.

Dn

Dichotomize the counts. Data values 0 through n-1 are treated as 0; data values n and above are treated as 1. E.g., "D5" recodes "0" through "4" as 0, and "5" and above as 1 (see the sketch after this table).

R

The rating scale (or partial credit) categories are in the range 0 to 9. The actual valid category numbers are those found in the data. Use RK to maintain unobserved intermediate categories in the category ordering.

Rn

The rating scale (or partial credit) categories are in the range 0 to "n". Data values above "n" are treated as missing data. The actual valid category numbers are those found in the data. If 20 is the largest category number used, then specify "R20".

RnK

Suffix "K" (Keep) maintains unobserved intermediate categories in the category ordering, e.g., R5K. If K is omitted, the categories are renumbered consecutively to remove the unobserved intermediate numbers.

RnH

Suffix "H" (Hide) hides unobserved categories in Table 8 that are not included in the category ordering e.g., R5H

RnKH

RnK and RnH: Keep unobserved intermediate categories and Hide unobserved extreme categories, e.g., R5KH

M

Treat all observations matching this model as Missing data, i.e., a way to ignore particular data, effectively deleting these data.

Bn

Binomial (Bernoulli) trials, e.g., "B3" means 3 trials. In the Model= statement, put the number of trials. In the Data= statement, put the number of successes. Use Rating Scale= for anchored discrimination.

B1

1 binomial trial, which is the same as a dichotomy, "D".

B100

Useful for ratings expressed as percentages (%). Use Rating Scale= for anchored discrimination.

P

Poisson counts, with theoretical range of 0 through infinity, though observed counts must be in the range 0 to 255.  Use Rating Scale= for anchored discrimination.

the name of a user-defined scale

A name such as "Opinion". This name must match a name given in a Rating (or partial credit) scale= specification.

 

Components of category description lines

Format:

Rating scale = myscale, R5

category number, category name, measure value, anchor flag, recoded values, reordered values

category number, category name, measure value, anchor flag, recoded values, reordered values

.....

*

category number

quantitative count of ordered qualitative steps, e.g., 2.

-1 is treated as missing data and is used when recoding data.

category name

label for category, e.g., "agree"

measure value

These provide starting or pre-set fixed values.
For rating scales (or partial credit items): Rasch-Andrich threshold values (step calibrations).
For binomial trials and Poisson counts: scale discrimination, but only when entered for category 0, and with value greater than 0.

anchor flag

For rating scales (or partial credit items), ",A" means Anchor this category at its pre-set Rasch-Andrich threshold (step calibration) value. If omitted, or any other letter, the logit value is only a starting value. Anchoring a category with a pre-set Rasch-Andrich  threshold forces it to remain in the estimation even when there are no matching responses. Anchor ",A" the lowest category with "0" to force it to remain in the estimation. For binomial trials and Poisson counts: ",A" entered for category 0 means anchor (fix) the scale discrimination at the assigned value.

recoded values

Data values to be recoded, separated by "+" signs (optional). Numeric ranges to be recoded are indicated by "-". Examples:

"5+6+Bad" recodes "5", "6" or "Bad" in the data file to the category number.

"5-8" recodes "5", "6", "7" or "8" to the category number.

"1+5-8" recodes "1", "5", "6", "7" or "8" to the category number.
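
Read as a mapping, the "+" and "-" conventions above work like this minimal Python sketch (illustrative only, not Facets code; recode_values is a hypothetical helper):

def recode_values(expr):
    # Expand a recode expression such as "1+5-8" into the data values it matches.
    values = []
    for part in expr.split("+"):
        if "-" in part and not part.startswith("-"):
            low, high = part.split("-")
            values.extend(str(v) for v in range(int(low), int(high) + 1))
        else:
            values.append(part)  # non-numeric labels such as "Bad" are matched literally
    return values

print(recode_values("5+6+Bad"))  # ['5', '6', 'Bad']
print(recode_values("1+5-8"))    # ['1', '5', '6', '7', '8']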

 

 

Example 0: Facets data must be integers, but my data are decimal.

 

1. Multiply all the data by 2: 3.5, 4, 4.5, 5 -> 7, 8, 9, 10

Use Rating Scale= to recode the data this way

 

Rating (or partial credit) scale=MyScale,R10 ;

7=3.5,,,3.5

8=4,,,4

9=4.5,,,4.5

10=5,,,5

*

 

2. Weight the Models by 0.5

Models = ?,?,?, MyScale, 0.5
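
If you prefer to convert the data file before analysis instead of recoding inside Facets, step 1 can be done with a short script. A minimal Python sketch (assuming half-point ratings from 3.5 to 5):

ratings = [3.5, 4, 4.5, 5]
doubled = [int(r * 2) for r in ratings]  # multiply by 2 to make the data integer
print(doubled)  # [7, 8, 9, 10]

With either approach, the Models= weight of 0.5 in step 2 compensates for doubling the scores.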

 

Example 1: Anchor a rating scale (or partial credit) at pre-set Rasch-Andrich thresholds (step calibrations).

 

Model=?,?,faces,1 ; the "Liking for Science" faces

*

Rating (or partial credit) scale=faces,R3

1=dislike,0,A ; always anchor bottom category at "0"

2=don't know,-0.85,A ; anchor first step at -0.85 Rasch-Andrich threshold

3=like,0.85,A ; anchor second step at +0.85 Rasch-Andrich threshold

* ; as usual, Rasch-Andrich thresholds sum to zero.

 

Example 2: Center a rating scale (or partial credit) at the point where categories 3 and 4 are equally probable. Note: usually a scale is centered where the first and last categories are equally probable. More detailed rating scale anchoring examples are given below.

 

Model=?,?,friendliness,1 ; the scale

*

Rating (or partial credit) scale=friendliness,R4

1=obnoxious

2=irksome

3=passable

4=friendly,0,A ; Forces categories 3 and 4 to be equally probable at a relative logit of 0.

 

Example 3: Define a Likert scale of "quality" for persons and items, with item 1 specified to have its own Rasch-Andrich thresholds (scale calibrations). Recoding is required.

 

Model=

?,1,quality,1 ; a scale named "quality" for item 1

?,?,quality,1 ; a scale named "quality" for all other items

*

 

Rating (or partial credit) scale=quality,R3,Specific ; the scale is called "quality"

0=dreadful

1=bad

2=moderate

3=good,,,5+6+Good ; "5","6","Good" recoded to 3.

; ",,," means logit value and anchor status omitted

-1=unwanted,,,4  ; "4" was used for "no opinion", recoded to -1 so ignored

* ; "0","1","2","3" in the data are not recoded, so retain their values.

 

Example 4: Define a Likert scale of "intensity" for items 1 to 5, and "frequency" for items 6 to 10. The "frequency" items are each to have their own scale structure.

 

Model=

?,1-5,intensity ; "intensity" scale for items 1-5

?,6-10#,frequency ; "frequency" scale for items 6-10 with "partial credit" format

*

 

Rating (or partial credit) scale=intensity,R4 ; the scale is called "intensity"

1=none

2=slightly

3=generally

4=completely

*

Rating (or partial credit) scale=frequency,R4 ; the scale is called "frequency"

1=never

2=sometimes

3=often

4=always

*

 

The components of the Rating (or partial credit) scale= specification:

Rating (or partial credit) scale=quality,R3,Specific ; the scale is called "quality"

 

"quality" (or any other name you choose)
is the name of your scale. It must match the scale named in a Model= statement.

 

R3: an Andrich rating scale (or partial credit) with valid categories in the range 0 through 3.

 

Specific: each model statement referencing "quality" generates a scale with the same structure and category names, but different Rasch-Andrich thresholds (step calibrations).

 

Example 5: Items 1 and 2 are rated on the same scale with the same Rasch-Andrich thresholds. Items 3 and 4 are rated on scales with the same categories, but different Rasch-Andrich thresholds:

 

Model=

?,1,Samescale

?,2,Samescale

?,3,Namesonly

?,4,Namesonly

*

Rating (or partial credit) scale=Samescale,R5,General 

; only one set of Rasch-Andrich thresholds is estimated for all model statements

; category 0 is not used; this is potentially a 6-category (0-5) rating scale (or partial credit)

1,Deficient

2,Satisfactory

3,Good

4,Excellent

5,Prize winning

*

Rating (or partial credit) scale=Namesonly,R3,Specific 

; one set of Rasch-Andrich thresholds per model statement

0=Strongly disagree ; this is a 4-category (0-3) rating scale (or partial credit)

1=Disagree

2=Agree

3=Strongly Agree

*

 

Example 6: Scale "flavor" has been analyzed, and we use the earlier values as starting values.

 

Rating (or partial credit) scale=Flavor,R

0=weak ; the bottom category always has 0.

1=medium,-3 ; the Rasch-Andrich threshold from 0 to 1 is -3 logits

2=strong,3 ; the step value from 1 to 2 is 3 logits

* ; The sum of the anchor Rasch-Andrich thresholds is the conventional zero.

 

Example 7: Collapsing a four category scale (0-3) into three categories (0-2):

 

Rating (or partial credit) scale=Accuracy,R2

0=wrong ; no recoding. "0" remains "0"

1=partial,,,2 ; "2" in data recoded to "1" for analysis.

; "1" in data remains "1" for analysis, ",,," means no pre-set logit value and no anchoring.

2=correct,,,3 ; "3" in data recoded to "2" for analysis.

; "2" in data already made "1" for analysis.

*

data=

1,2,0 ; 0 remains category 0

4,3,1 ; 1 remains category 1

5,4,2 ; 2 recoded to category 1

6,23,3 ; 3 recoded to category 2

13,7,4 ; since 4 is not recoded and is too big for R2, Facets terminates with the message:
Data is: 13,7,4
Error 26 in line 53: Invalid datum value: non-numeric or too big for model
Execution halted
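
The collapsing in this example amounts to the following mapping (a minimal Python sketch, not Facets code):

collapse = {0: 0, 1: 1, 2: 1, 3: 2}  # the recoding defined by the Accuracy scale above
for datum in [0, 1, 2, 3]:
    print(datum, "->", collapse[datum])
# a datum of 4 is not recoded and exceeds R2, so Facets reports Error 26, as shown above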

 

Example 8: Recoding non-numeric values.

 

Category recode values do not have to be valid numbers, but they must match the data file exactly. So, for a data file which contains "R" for right answers, "W" or "X" for wrong answers, and "M" for missing:

 

Rating (or partial credit) scale=Keyed,D ; a dichotomous scale called "Keyed"

0=wrong,,,W+X ; both "W" and "X" recoded to "0", "+" is a separator

1=right,,,R ; "R" recoded to "1"

-1=missing,,,M ; "M" recoded to "-1" - ignored as missing data

*

data=

1,2,R ; R recoded 1

2,3,W ; W recoded 0

15,23,X ; X recoded 0

7,104,M ; M recoded to -1, treated as missing data
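
The same recoding, written as a plain mapping (a minimal Python sketch, not Facets code):

keyed = {"R": 1, "W": 0, "X": 0, "M": -1}  # -1 is treated as missing data
for datum in ["R", "W", "X", "M"]:
    print(datum, "->", keyed[datum])  # R -> 1, W -> 0, X -> 0, M -> -1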

 

Example 9: Maintaining the rating scale (or partial credit) structure with unobserved intermediate categories. Unobserved intermediate categories can be kept in the analysis.

 

Model=?,?,Multilevel

Rating (or partial credit) scale=Multilevel,R2,G,K ; means that 0, 1, 2 are valid
; if 0 and 2 are observed, 1 is forced to exist.

Dichotomies can be forced to 3 categories, to match 3-level partial credit items, by scoring the dichotomies 0=wrong, 2=right, and modeling them R2,G,K.

 

Example 10: Observations are recorded as percents. These are to be modeled with the same discrimination as in a previous analysis, 0.72.

 

Model=?,?,Percent

Rating (or partial credit) scale=Percent,B100,G ; model % as 0-100 binomial trials

0=0,0.72,A ; Anchor the scale discrimination at 0.72

 

Example 11: Forcing structure (step) anchoring with dichotomous items. Dichotomous items have only one step, so usually the Rasch-Andrich threshold is at zero logits relative to the item difficulty. To force a different value:

 

Facets = 2

Model = ?,?,MyDichotomy

Rating scale = MyDichotomy, R2

0 = 0, 0, A ; anchor bottom category at 0 - this is merely a place-holder

1 = 1, 2, A ; anchor the second category at 2 logits

2 = 2 ; this forces Facets to run a rating scale model, but it drops from the analysis because the data are 0, 1.

*

If the items are centered, this will move all person abilities by 2 logits. If the persons are centered, the item difficulties move by 2 logits.

 

Example 12: The item-person alignment is to be set at 80% success on dichotomous items, instead of the standard 50% success.

Model = ?,?,?, Dichotomous

Rating scale = Dichotomous, R1  ; define this as a rating scale with categories 0,1 rather than a standard dichotomy (D)

0 = 0, 0, A ; Place-holder for bottom category

1 = 1, -1.39, A ; Anchor the Rasch-Andrich threshold for the 0-1 transition at -1.39 logits

*
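
The -1.39 here appears to be ln(0.20/0.80) ≈ -1.386, so that a person located at the item difficulty has an 80% chance of success. A minimal Python check (illustrative only, not Facets output):

import math

tau = math.log(0.20 / 0.80)          # Rasch-Andrich threshold offset, about -1.386 logits
p_success = 1 / (1 + math.exp(tau))  # success probability when person measure = item difficulty
print(round(tau, 2), round(p_success, 2))  # -1.39 0.8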

 

Table 6 Standard 50% Offset - kct.txt

----------------------------
|Measr|+Children|-Tapping i|
----------------------------
+   1 + **.     +          +
|     |         | 11       |
|     |         |          |
|     |         |          |
*   0 *         *          *
|     | ******  |          |  ---
|     |         |          |   |
|     |         |          |   80% probability of success
+  -1 +         +          +   |
|     | *.      |          |   |
|     |         | 10       |  ---
|     |         |          |
+  -2 +         +          +
----------------------------
|Measr| * = 2   |-Tapping i|
----------------------------

 

Table 6 80% offset -1.39 logits

----------------------------
|Measr|+Children|-Tapping i|
----------------------------
+   1 +         +          +
|     |         | 11       |
|     | **      |          |
|     |         |          |
*   0 *         *          *
|     |         |          |
|     | **.     |          |
|     |         |          |
+  -1 +         +          +
|     |         |          |
|     | ******  | 10       | <- Item with 80% probability of success targeted
|     |         |          |
+  -2 +         +          +
----------------------------
|Measr| * = 2   |-Tapping i|
----------------------------

 

Example 13: Data have the range 0-1000, but older versions of Facets only accept 0-254. Convert the data with the Rating Scale= specification:

models = ?,?,...,spscale

rating scale=spscale,R250,Keep ; keep unobserved intermediate categories in the rating scale structure

0,0-1,,,0+1 ; "0-1" is the category label.

1,2-5,,,2+3+4+5 ; this can be constructed in Excel, and then pasted into the Facets specifications

2,6-9,,,6+7+8+9

3,10-13,,,10+11+12+13

....

248,990-993,,,990+991+992+993

249,994-997,,,994+995+996+997

250,998-1000,,,998+999+1000

*
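
The long list of recode lines can also be generated by a short script instead of Excel. A minimal Python sketch that reproduces the specification above (category 0 covers data 0-1, categories 1-249 cover blocks of four, category 250 covers 998-1000):

lines = ["rating scale=spscale,R250,Keep", "0,0-1,,,0+1"]
for cat in range(1, 250):
    low = cat * 4 - 2
    high = low + 3
    recode = "+".join(str(v) for v in range(low, high + 1))
    lines.append(f"{cat},{low}-{high},,,{recode}")
lines.append("250,998-1000,,,998+999+1000")
lines.append("*")
print("\n".join(lines))  # paste the output into the Facets specification file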

 

Example 14: The rating-scale anchor values are the relative log-odds of adjacent categories. For instance, if the category frequencies are

0   20

1   10

2   20

and all other measures (person abilities, item difficulties, etc.) are 0, then Facets would show:

 

Rating (or partial credit) scale=RS1,R2,G,O

0=,0,A,,          ; this "0" is a conventional value to indicate that this is the bottom of the rating scale

1=,0.69,A,,     ; this is log(frequency(0)/frequency(1)) = loge(20/10)

2=,-0.69,A,,   ; this is log(frequency(1)/frequency(2)) = loge(10/20)

 

In rating scale applications, we may want to impose the constraint that the log-odds values increase. If so, we will only accept rating scales whose category structure is conceptually similar to:

0   10

1   20

2   10

This would produce:

 

Rating (or partial credit) scale=RS1,R2,G,O

0=,0,A,,        ; this "0" is a conventional value to indicate that this is the bottom of the rating scale

1=,-0.69,A,,  ; this is log(frequency(0)/frequency(1)) = loge(10/20)

2=,0.69,A,,  ; this is log(frequency(1)/frequency(2)) = loge(20/10)
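
The arithmetic in this example can be checked with a minimal Python sketch (andrich_start_values is a hypothetical helper, not a Facets function; it assumes all other measures are zero, as stated above):

import math

def andrich_start_values(frequencies):
    # Threshold for each category k > 0 is log(frequency[k-1] / frequency[k]); the bottom category is set to 0.
    return [0.0] + [math.log(frequencies[k - 1] / frequencies[k]) for k in range(1, len(frequencies))]

print([round(t, 2) for t in andrich_start_values([20, 10, 20])])  # [0.0, 0.69, -0.69] : disordered thresholds
print([round(t, 2) for t in andrich_start_values([10, 20, 10])])  # [0.0, -0.69, 0.69] : increasing thresholds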

 

Example 15: The test has a 7-category rating scale, but some items are oriented forward and others are reversed (inverted, negative):

 

Facets = 3  ; facet 3 is the items

Models =

?, ?, 1, Forward

?, ?, 2, Reversed

?, ?, 3-8, Forward

?, ?, 9-12, Reversed

*

 

Rating scale= Forward, R7, General

1 =

2 =

3 =

4 =

5 =

6 =

7 =

*

 

Rating scale= Reversed, R7, General

1 = , , , 7

2 = , , , 6

3 = , , , 5

4 = , , , 4

5 = , , , 3

6 = , , , 2

7 = , , , 1

*
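
The reversed recode lines follow a simple pattern, so they can be generated rather than typed. A minimal Python sketch for a 1-7 scale (illustrative only):

top, bottom = 7, 1
for category in range(bottom, top + 1):
    print(f"{category} = , , , {top + bottom - category}")
# prints "1 = , , , 7" through "7 = , , , 1", matching the Reversed scale above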

 

Example 16: The rating scales do not have the same number of rating categories: fluency 40-100, accuracy 0-70, coherence 0-30, etc.

 

Let's assume the items are facet 3.

 

1. Every category number between the lowest and the highest is observable

 

Models=

?, ?, #, R100K   ; K means "keep unobserved intermediate categories"

*

 

2. Not every category number between the lowest and the highest is observable, but every observable category has been observed in this dataset

 

Models=

?, ?, #, R100   ; unobserved categories will be collapsed out of the rating scales

*

 

3. Only some categories are observable, but not all of those have been observed

 

Models=

?, ?, 1, Fluency

?, ?, 2, Accuracy

?, ?, 3, Coherence

*

 

Rating scale = Fluency, R100, Keep

0 = 0

1 = 10, , , 10  ; rescore 10 as "1"

2 = 20, , , 20

....

10 = 100, , , 100

*

 

Rating scale = Accuracy, R70, Keep

0 = 0

1 = 10, , , 10  ; rescore 10 as "1"

2 = 20, , , 20

....

7 = 70, , , 70

*

 

Rating scale = Coherence, R30, Keep

0 = 0

1 = 5, , , 5  ; rescore 5 as "1"

2 = 10, , , 10  ; rescore 10 as "2"

3 = 15, , , 15

4 = 20, , , 20

5 = 25, , , 25

6 = 30, , , 30

*
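
Rating scale blocks like these, where the observed values advance in fixed steps, can be generated with a short script. A minimal Python sketch (rescale_lines is a hypothetical helper; the "0 = 0, , , 0" line adds a harmless identity recode not present above):

def rescale_lines(name, maximum, step):
    # One category per observable value: 0, step, 2*step, ..., maximum
    lines = [f"Rating scale = {name}, R{maximum}, Keep"]
    for category, value in enumerate(range(0, maximum + 1, step)):
        lines.append(f"{category} = {value}, , , {value}")
    lines.append("*")
    return lines

print("\n".join(rescale_lines("Accuracy", 70, 10)))
print("\n".join(rescale_lines("Coherence", 30, 5)))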

 

And there are more possibilities ...

