
Process of Revision of the ITERS-R

A Side-by-Side Comparison of Subscales and Indicators (PDF)

The process of revision drew on four main sources of information: (1) research on development in the early years and findings related to the impact of child care environments on children's health and development; (2) a content comparison of the original ITERS with other assessment instruments designed for a similar age group, along with additional documents describing aspects of program quality; (3) feedback from ITERS users, solicited through a questionnaire that was circulated and also posted on our website, as well as from a focus group of professionals familiar with the ITERS; and (4) intensive use for more than two years by two of the ITERS co-authors and over 25 ITERS-trained assessors for the North Carolina Rated License Project.
The data from studies of program quality gave us information about the range of scores on various items, the relative difficulty of items, and their validity. The content comparison helped us to identify items to consider for addition or deletion. By far the most helpful guidance for the revision was the feedback from direct use in the field. Colleagues from the US, Canada, and Europe who had used the ITERS in research, monitoring, and program improvement gave us valuable suggestions based on their experience with the scale. The focus group discussed in particular what was needed to make the revised ITERS more sensitive to issues of inclusion and diversity.

Changes in the ITERS-R

While retaining the basic similarities in format and content that provide continuity between the ITERS and ITERS-R, the following changes were made:

  1. The indicators under each level of quality in an item were numbered so that they could be given a score of "Yes", "No", or "Not Applicable" (NA) on the score sheet. This makes it possible to be more exact in reflecting observed strengths and weaknesses in an item.
  2. Negative indicators on the minimal level were removed from one item and are now found only in the 1 (inadequate) level. In levels 3 (minimal), 5 (good), and 7 (excellent) only indicators of positive attributes are listed. This eliminates the one exception to the scoring rule in the original ITERS.
  3. The Notes for Clarification have been expanded to give additional information to improve accuracy in scoring and to explain the intent of specific items and indicators.
  4. Indicators and examples were added throughout the scale to make the items more inclusive and culturally sensitive. This follows the advice given to us by scale users to include indicators and examples within the scale instead of adding a subscale.
  5. New items were added to several subscales including the following:
    • Listening and Talking: Item 12. Helping children understand language, and Item 13. Helping children use language.
    • Activities: Item 22. Nature/science, and Item 23. Use of TV, video and/or computer.
    • Program Structure: Item 30. Free play, and Item 31. Group play activities.
    • Parents and Staff: Item 37. Staff continuity, and Item 38. Supervision and evaluation of staff.
  6. Some items in the Space and Furnishings subscale were combined to remove redundancies, and two items were dropped from Personal Care Routines: Item 12. Health policy, and Item 14. Safety policy. Research showed that these items routinely received high scores because they were based on regulation, while the corresponding items assessing practice were rated much lower. It is practice that the ITERS-R should concentrate on, since the aim is to assess process quality.
  7. The scaling of some items in the Personal Care Routines subscale was made more gradual to better reflect varying levels of health practices in real-life situations, including Item 6. Greeting/departing, Item 7. Meals/snacks, Item 9. Diapering/toileting, Item 10. Health practices, and Item 11. Safety practices.
  8. Each item is printed on a separate page, followed by the Notes for Clarification.
  9. Sample questions are included for indicators that are difficult to observe.
ERS® and Environment Rating Scale® are registered trademarks of Teachers College, Columbia University.