ESSA: Opportunities and Risks in Assessment

This is the second in a series of blogs from the EducationCounsel team unpacking ESSA and highlighting next steps for states and local school districts.

ESSA: Opportunities and Risks in Assessment
Dan Gordon, Senior Legal and Policy Advisor, EducationCounsel

Assessment received a great deal of attention at the beginning of the final sprint to reauthorize the Elementary and Secondary Education Act, largely around the question of whether to maintain requirements for annual assessments. In the end, the headline-grabbing shifts mostly took place elsewhere, especially in Title I accountability and educator evaluation. Yet a deeper look at the Every Student Succeeds Act (ESSA) reveals complex and important changes to federal assessment policy as well. States and districts now have several opportunities to advance their development of high-quality systems of assessment. These systems can then support the great teaching and learning necessary for all students to graduate high school with the full range of knowledge and deeper learning skills they need to succeed. However, new opportunities come with new risks as well.

  • Opportunity #1: States are encouraged to move beyond traditional assessment design. ESSA maintains the federal expectations of assessment quality established by the No Child Left Behind Act, including alignment to standards and measuring higher-order thinking skills. But ESSA also clarifies additional opportunities for states to redesign their annual assessments. Annual assessments under ESSA do not have to look like the often-criticized traditional standardized tests. States can instead design assessments in three ways that have the potential to better support teaching and learning – whether by more accurately pinpointing current student performance, measuring higher-order thinking skills via more authentic assessments, or gathering multiple data points throughout the year rather than just one snapshot. First, states may use computer adaptive assessments that can measure student performance below, on, or above grade level, even if doing so means students don’t all answer the same test items. There is even an explicit statement that above/below grade level determinations could be included in state-developed accountability systems. Second, states can include portfolios, projects, and/or extended performance tasks as components of their state tests. Third, states can choose to administer a series of assessments throughout the year that cumulatively result in a summative score. This flexibility means that states can design assessment systems that may provide more and richer information about students’ progress and achievement.

  • Opportunity #2: States may pilot even more innovative approaches either statewide or with subsets of districts. ESSA authorizes the U.S. Department of Education (USED) to administer a new pilot program through which states can experiment even further with innovative assessment and accountability systems. The pilots are designed to encourage states to innovate within some parameters, including: (i) the pilot is limited for the first three years to a maximum of seven states (but USED can approve additional states in subsequent years); (ii) the innovative system must ultimately expand statewide (assuming it starts with a subset of districts, as the New Hampshire pilot did under that state’s ESEA Flexibility Waiver); and (iii) if the pilot does begin with a subset of districts, the new assessments must still produce data that are comparable to the assessment in use statewide during the pilot. Implemented well, these pilots could yield radically different approaches, including assessing students when they are ready to demonstrate mastery, embedding assessments in the day-to-day curriculum, or other innovations proposed by interested states. These innovations would likely force other systems to adapt as well, most notably accountability systems that have been developed with traditional assessment approaches in mind. There are at least three situations in which the pilot might be more attractive than the new design features described above that are available to all states immediately:

    • The “Go Slower” case: State A wants to incorporate performance-based assessments in its state test, but it is not ready to go statewide immediately. Somewhat counterintuitively, the “innovation” pilot would allow State A to first design, test, and improve the performance tasks with a subset of districts and then scale statewide over five to seven years.

    • The “Go Faster” case: State B wants its summative assessment to consist entirely of portfolios, projects, and/or performance-based assessments (i.e., no multiple choice items). Because ESSA provides for including these types of items only on part of state assessments (see the first opportunity discussed above), State B would have to leverage the pilot to accomplish its more aggressive design change.

    • The “Go Beyond” case: State C wants to do something other than the handful of innovative design features allowed under the new assessment flexibility discussed in the first opportunity above. The pilot’s limitations, by contrast, are mainly technical (e.g., comparability of data). The pilot would thus be the best vehicle for State C to try something not explicitly authorized in the general assessment provisions of ESSA.

  • Opportunity #3: States and districts are encouraged to build higher-quality systems of assessment. ESSA includes two new opportunities for states to re-evaluate their systems of assessment to help ensure they are supporting the skills needed to meet college and career expectations. First, each year, ESSA authorizes at least $1.5M for any interested state to conduct an audit of its state- and local-required assessments and to implement an improvement plan based on the audit findings, including subgrants from the state to school districts. The goals of such audits are not only “eliminating any unnecessary assessments” to reduce the testing burden, but also “improved assessment quality and efficiency to improve teaching and learning.” The subgrants to local districts are a particularly interesting new opportunity to drive the second goal: for example, districts may use these funds “to support teachers in the development of classroom-based assessments, interpreting assessment data, and designing instruction.” Although the two goals should often be compatible with each other, some states may need to balance their desire to reduce the time spent on assessment against the potential benefits of embracing the more authentic, instructionally embedded assessments ESSA now allows, which may actually take more time than a standardized test. Second, ESSA also allows districts – in rethinking their assessment portfolio – to swap the state high school test for a national assessment, such as the SAT or ACT. States must approve the selection and will have to determine, among other things, whether the national assessment is sufficiently aligned to the particular state’s standards. Leaders at both the district and state levels should consider this option as part of a thoughtful process of creating a high-quality and balanced system of assessment.

  • Risk: How will USED thread the innovation needle? Although ESSA restricts USED’s authority in many ways, the law maintains the federal government’s role in approving (via peer review) each state’s assessments, including the new design options described in the first opportunity above. USED will also define the constraints and limitations of the assessment pilot. In exercising these approval rights, USED will have to balance a number of competing forces. On one hand, innovators need room to innovate. If USED is too strict in its interpretation of technical requirements, then the new assessment opportunities may exist only in theory. On the other hand, there are substantial risks in lowering the technical bar too far in an attempt to support innovation. If the new assessments do not yield valid, reliable, and (most importantly) comparable data, they could undermine the integrity of publicly reported achievement data, jeopardize the coherence and equity of state accountability determinations, and limit educators’ ability to get schools and individual students the resources and supports they need to succeed. This latter risk is especially problematic within the larger ESSA context because the law relies heavily on the theory of action that transparent data will drive state action, especially on behalf of struggling students. In other words, with fewer federal levers in place, the need for quality, consistent, and reliable data is greater than ever.

Assessment policy under ESSA should be a fascinating and important space to watch as states and districts decide whether to take Congress up on these various new opportunities and as USED responds to their proposed innovations. In light of growing concerns about the quality and quantity of assessments (including but not limited to the “opt-out” movement), education leaders at all levels should take advantage of the opportunities presented by ESSA to build high-quality systems of assessment that support the drive to college- and career-ready outcomes for all students.