Assessments and Reporting Assessment Data

Program reports must include aggregated results of seven or eight assessments of candidate proficiency. Beginning fall 2010, for full recognition, programs will be required to submit data that represent two applications of the assessment; that is, the assessment must be given and data collected at least two times. If an assessment is embedded in a class offered every semester, the two applications can be completed within a single academic year; if the class is offered only once a year, they will take two academic years. For revised and response-to-conditions reports, data from one application of the assessment are required for full recognition. Data must be organized according to the categories used in your scoring guide or evaluation criteria, and the percentage of candidates achieving at acceptable levels should be given for each category.
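
To make that organization concrete, the following minimal sketch (Python, with invented applications, category names, and a 1-4 score scale; none of these details are prescribed by NCATE or IRA) tabulates the percentage of candidates scoring at or above an acceptable level in each scoring-guide category, broken out by application of the assessment:

    from collections import defaultdict

    # One record per candidate per category per application of the assessment.
    # Scores: 1 = unacceptable, 2 = acceptable, 3 = proficient, 4 = distinguished.
    ACCEPTABLE = 2
    records = [
        ("Fall 2009", "Word Analysis", 3),
        ("Fall 2009", "Word Analysis", 1),
        ("Fall 2009", "Comprehension", 2),
        ("Spring 2010", "Word Analysis", 4),
        ("Spring 2010", "Comprehension", 2),
    ]

    totals = defaultdict(int)    # (application, category) -> candidates scored
    at_level = defaultdict(int)  # (application, category) -> candidates at/above acceptable
    for application, category, score in records:
        totals[(application, category)] += 1
        if score >= ACCEPTABLE:
            at_level[(application, category)] += 1

    for key in sorted(totals):
        application, category = key
        pct = 100.0 * at_level[key] / totals[key]
        print(f"{application:<12} {category:<15} {pct:5.1f}% at or above acceptable")

Reported this way, each category from the scoring guide gets one percentage per application of the assessment, which matches the shape of data the report calls for.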

Institutions Seeking Continuing Accreditation
Onsite Visit Scheduled for Fall 2010 and Beyond

Amount of Data Expected
• Program Report(s): A minimum of two applications of the selected assessment
• Institutional Reports: Most recent 12-month period
• Onsite Visit: Three years

Institutions Seeking Accreditation for the First Time
Onsite Visit Scheduled for Fall 2010 and Beyond

Amount of Data Expected
• Program Report(s): A minimum of two applications of the selected assessment
• Institutional Reports: Most recent 12-month period
• Onsite Visit: Two years

In designing assessments, keep in mind that they must be fair, accurate, and consistent. NCATE indicates that assessments must be

• Appropriate and designed to assess meaningful cognitive demands and skill requirements
• Congruent with the complexity, cognitive demands, and skill requirements described in the standards
• At a level consistent with the standards, challenging but reasonable for candidates who are ready to teach or to take on other professional responsibilities
• Well defined
• Credible and unbiased
• Systematically evaluated by institutions that use them

Scoring

Design of scoring tools (such as rubrics) is as important as design of the assessments themselves. These tools should

• Address relevant, meaningful attributes of candidate knowledge and performance related to the standards
• Have written and shared criteria for judging performance that indicate the qualities by which levels of performance can be differentiated
• Be explicit enough to anchor judgments about degree of success
• Define what is being sought and be expressed clearly

Keep in mind that, to be reliable, scoring of assessments must yield approximately the same values across raters; one simple check is sketched below.
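
As an illustration of that check, here is a minimal sketch (Python; the rubric levels, descriptors, and scores are all invented for this example, and neither measure is mandated by NCATE or IRA) that computes percent exact agreement between two raters scoring the same candidate work, along with Cohen's kappa, a common chance-corrected companion statistic:

    from collections import Counter

    # A hypothetical four-level scoring guide for one rubric category.
    RUBRIC = {
        1: "Unacceptable: omits key evidence of literacy development",
        2: "Acceptable: cites evidence, but interpretation is thin",
        3: "Proficient: links evidence to developmental benchmarks",
        4: "Distinguished: integrates evidence into instructional planning",
    }

    # Hypothetical scores from two raters on the same eight work samples.
    rater_a = [3, 2, 4, 1, 3, 2, 2, 4]
    rater_b = [3, 2, 3, 1, 3, 2, 1, 4]
    assert all(score in RUBRIC for score in rater_a + rater_b)

    n = len(rater_a)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    observed = matches / n
    print(f"Exact agreement: {100 * observed:.1f}%")  # 75.0% with these scores

    # Cohen's kappa discounts the agreement two raters would reach by chance.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / n ** 2
    kappa = (observed - expected) / (1 - expected)
    print(f"Cohen's kappa: {kappa:.2f}")  # about 0.67 with these scores

Percent agreement is the easier number to explain to faculty, but it flatters scales with few levels; kappa removes the matches raters would produce by guessing, which is why reliability studies often report both.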


Examples

Imagine that an institution offers a course in which students must complete a case study of the literacy development of a young child. Materials related to evaluation of the assignment could be included in the program report alongside the aggregated data.

For more models of IRA assessments, including sample rubrics, visit NCATE’s SPA Assessment Library: http://www.ncate.org/institutions/assessmentLibrary.asp?ch=90

Assessment Purposes

In addition to measuring candidate achievement, well-designed assessments can help teachers answer these questions about their programs:

• What is expected?
• What are our standards?
• What does good performance look like?
• What do I want to accomplish?
• What kind of feedback do I give to improve student work next time?
• Where are my students on their journey to competence, and what is the next step in instruction?
• Is my instruction effective?