Anchorfile= anchor output file = " "
When Anchorfile= is specified, Facets writes a new specification file containing the input specifications for this analysis. This file omits any specifications entered as extra specifications, but includes the final estimated measures for all elements and rating scales, each marked with ",A" for "anchored".
These anchor values can be used as starting values for slightly different later runs. To do this, edit the ",A"'s out of the first line of each facet in the Labels= specifications and out of the category lines of Rating (or partial credit) scale=.
This can also be specified from the Output Files menu, Anchor Output file.
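For instance, here is a minimal sketch of the ",A" edit described above, taking one facet line and one category line from the Guilford example below:
; as written in the anchor file (values anchored):
1,Senior scientists,A
2=,-.6441868,A
; edited so that the values become starting values instead of anchors:
1,Senior scientists
2=,-.6441868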
Example: Using an edited version of Guilford.txt.
Anchorfile=Guilford.anc
Title = Ratings of Scientists (Psychometric Methods p.282 Guilford 1954)
Score file = GUILFSC ; score files GUILFSC.1.txt, GUILFSC.2.txt and GUILFSC.3.txt produced
Facets = 3 ; three facets: judges, examinees, items
Inter-rater = 1 ; facet 1 is the rater facet
Arrange = m,2N,0f ; arrange tables by measure-descending for all facets, by element number for facet 2, and by fit for all facets
Positive = 2 ; the examinees have greater creativity with greater score
Non-centered = 1 ; examinees and items are centered on 0, judges are allowed to float
Unexpected = 2 ; report ratings if standardized residual >=|2|
Usort = (1,2,3),(3,1,2),(Z,3) ; sort and report unexpected ratings several ways (1,2,3) is Senior, Junior, Trait
Vertical = 2N,3A,2*,1L,1A ; define rulers to display and position facet elements
Zscore = 1,2 ; report biases greater in size than 1 logit or with z>2
Pt-biserial = measure ; point-measure correlation
Models =
?B,?B,?,RS1,1 ; CREATIVITY
*
Rating (or partial credit) scale = RS1,R9,G,O ; CREATIVITY
; Facets has renamed the rating scale in order to avoid ambiguity. Please edit the rating-scale name in the Anchorfile if you wish.
1=lowest,0,A ; this is a place-holder for the bottom category
2=,-.6441868,A
3=,-2.317694,A
4=,.8300989,A
5=middle,-1.477257,A
6=,1.710222,A
7=,-1.001601,A
8=,2.358206,A
9=highest,.5422113,A
; Rasch-Andrich Thresholds = 0, -.6441868, -2.317694, .8300989, -1.477257, 1.710222, -1.001601, 2.358206, .5422113
*
Labels =
1,Senior scientists,A
1=Avogadro,0.038439
2=Brahe,.2356183
3=Cavendish,-0.091576
*
2,Junior Scientists,A
1=Anne,-0.068648
2=Betty,.637455
3=Chris,-.2451435
4=David,-.4621402
5=Edward,.4228902
6=Fred,-.5607015
7=George,.2762882
*
3,Traits,A
1=Attack,-.2684814
2=Basis,-.1409857
3=Clarity,.2020537
4=Daring,-.2898233
5=Enthusiasm,.4972367
*
Data=
1,1,1-5,5,5,3,5,3
....
Example 1: Two time points and a rating scale:
(1) Obtain the common structure for the rating scale:
Construct the entire data set. Put in a dummy "time-point" facet with the two elements anchored at 0.
If the same person appears twice, give them two related id-numbers so that they are easy to pair up later. Analyze the dataset and write out an Anchorfile= (a specification sketch of these steps follows step (3)).
(2) Time-point 1 is the "gold standard":
For the time-1 data, edit the Model= statements to select element 1 of the time-point facet.
Copy-and-paste into the specification file the parts you want to anchor from the (1) anchor file.
Analyze the time-1 data and output another anchorfile.
(3) Time-point 2 is measured in the Time-point 1 "frame of reference":
For the time-2 data, edit the Model= statements to select element 2 of the time-point facet.
Copy-and-paste into the specification file the parts you want to anchor from the (2) anchor file.
Analyze the time-2 data.
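A minimal sketch of these three steps, assuming the dummy time-point facet is facet 3 and the rating scale is named RS1; the facet layout, element names and scale name are illustrative:
; step (1): dummy time-point facet, both elements anchored at 0
Labels =
; ... person and item facets here ...
3,Time-point,A
1=Time 1,0
2=Time 2,0
*
; step (2): select element 1 of the time-point facet, with the anchored parts pasted from the step (1) anchor file
Models = ?,?,1,RS1
; step (3): select element 2 of the time-point facet, with the anchored parts pasted from the step (2) anchor file
Models = ?,?,2,RS1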
Example 2: I want to generate a report for each criterion entered in the analysis. I am interested in getting fair averages for the different criteria used by the raters.
1. Perform the complete analysis.
2. Output an Anchorfile=
3. In the Anchorfile=, keep everything anchored, but comment out all except one criterion (as sketched below).
4. Analyze the Anchorfile=.
Now all the reporting will be for only that one criterion.
5. Return to step 3 for the next criterion.
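A minimal sketch of step 3, assuming the criteria form facet 3 of the analysis; the criterion names and measures are illustrative:
Labels =
; ... the other facets stay fully anchored ...
3,Criteria,A
1=Fluency,.52
;2=Accuracy,-.31 ; commented out: only Fluency is reported
;3=Range,-.21 ; commented out
*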
Example 3: Fair Average score for Item-by-Candidate combinations.
i. Do a standard analysis:
Facets = 3 ; Candidates, Raters, Items
Non-centered = 1 ; candidates - we need this
Model = ?, ?, ?, R
Anchorfile = anc.txt ; we need this
ii. In anc.txt (a sketch of the edited file follows step v),
Edit the model specification:
Model = ?, X, ?, .... ; inactivates the raters
Remove "Anchorfile="
Residual file = residual.xls ; Excel file - we need this
iii. Analyze anc.txt.
iv. Open residual.xls in Excel.
Delete all columns except: Candidate, Item, Expected score.
Sort the file: Candidate major, Item minor.
All the expected scores for each candidate-item combination should be the same.
Use the Excel "Advanced filter" to remove duplicate rows.
v. The "expected score" is the "Fair Average score for Item-by-Candidate".
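A minimal sketch of anc.txt after the step ii edits, assuming everything else is left as Facets wrote it (the final model term R comes from the step i specification):
Facets = 3 ; Candidates, Raters, Items
Non-centered = 1 ; candidates
Model = ?, X, ?, R ; X inactivates the rater facet, so expected scores ignore rater severity
Residual file = residual.xls ; Excel file
; Anchorfile= line removed
; ... anchored Labels= and Rating scale= sections as written by Facets ...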
Example 4: I have 10 raters and one of them is stringent. I want to see the stringent rater's adjusted ratings and scores.
Facets automatically adjusts for rater stringency/leniency. If you want to see what the stringent rater would have done, then here is an approach:
1. Analyze your data with Facets in the usual way.
Output the Anchorfile= from the Output Files menu.
2. Edit the Anchorfile= (a sketch follows step 4):
a. Delete all the raters except the stringent one.
b. Change the anchor value for the stringent rater to 0.0.
3. Analyze the edited Anchorfile.
Output the Residualfile= to Excel using the Output Files menu.
4. In the Excel file, the "Expected" response values are the adjusted ratings.
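A minimal sketch of the edited anchor file for step 2, assuming the raters are facet 1; the rater label and the original anchor value are illustrative:
Labels =
1,Raters,A
4=Rater D (stringent),0 ; the other nine raters deleted; anchor value changed from, say, 1.23 to 0.0
*
; ... the other facets stay fully anchored as Facets wrote them ...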