Lessons Learned, or Lost?

September 6, 2019
Kathryn Young

As I chat with my kids about their first week of school, I can feel their nervous excitement about new classmates and teachers, books yet unread, projects to come, and hidden talents yet to be discovered. School systems and homes all across America share in this time of preparation and anticipation for the new school year. Often less heralded, however, is the work schools and education systems do to look back at prior work and figure out how this year can be even better. For example, at my children’s school, a committee of parents and teachers is starting the year with a backwards look at the newly released state assessment scores. We are asking ourselves what the data mean for the effectiveness of last year’s strategies and staffing. We’ll use those insights to help decide what to keep or change this year and beyond. It is just one part of a larger process of continuous improvement the school uses to learn from and improve its supports and strategies.

In many schools, fall is also the time for new school improvement plans under the federal Every Student Succeeds Act (ESSA).  In most states, this is the first year of plan implementation, so there is rightly considerable attention to what’s ahead this school year and beyond – particularly in schools identified for low performance.  A key part of the improvement process, however, must be looking back at the school improvement “architecture” that states built or refined over the last several years to support and spur school improvement.  It is critical to study and learn how the architecture is working in practice and continuously improve it to make sure a state’s system is actually advancing effective school improvement.

Now is an excellent time for state education agencies to take that look back.  These are the entities charged with developing the overall school improvement architecture for supporting the lowest-performing schools.  Many states have or will soon have new school improvement plans from districts as well as school performance data from last school year – providing them with key artifacts to discuss with their districts, schools, and communities.  Digging into current school improvement structures and data with stakeholders can help states understand whether they are supporting change in the way they intended.

In 2018, EducationCounsel and the Council of Chief State School Officers (CCSSO) identified a set of decision points to help states design their ESSA school improvement architecture to spur effective school improvement. These principles are what states should keep in mind as they design their architecture and make decisions. Principle #9 explicitly addresses how those systems must themselves embrace continuous improvement.

The ten questions below are meant to serve as conversation starters about continuous improvement in the way states are set up to support schools in improvement status.  They are not exhaustive – but we hope they can be entry points into even deeper, more contextual questions:

  1. Did the state’s structures for making accountability determinations and identifying schools for improvement work as intended? Are they structured to incentivize every school and district to improve education so that each student is ready for college, careers, and civic engagement, regardless of the student’s context?  If not, what accountability and school identification parameters might need to change?
  2. Did the way the state distributed funds for school improvement match what schools and districts are being asked to accomplish, and is the distribution process aligned to the same timeline? Do the funding amounts, timing of the release of funds, and requirements for receiving funds (e.g. application requirements and funding priorities) allow for robust planning, hiring, and significant overhauls to ineffective school structures where needed?  If not, what specifically might need to change?
  3. Did the state’s report cards and related tools for making education data actionable allow district and school staff and public stakeholders to effectively analyze inequities and identify any root causes of student outcomes? Did districts with schools in improvement find any state-supported technical assistance on data analysis useful for identifying and addressing inequities that lead to lower student outcomes?  Were all districts aware of the existence of these tools and assistance?  Have the state’s efforts made a difference in how districts and schools are analyzing data and developing improvement strategies?  If not, what might need to change to support the availability and analysis of data that effectively informs improvement strategies?
  4. Have the guidelines or templates for developing school improvement plans and receiving school improvement funds encouraged districts and schools to submit quality school improvement plans that address needs and inequities specific to their schools and communities? If the state developed a rubric for reviewing plans, did the design decisions result in high-quality plans being approved and lower-quality plans being revised?  Has the state often had to go back and ask for further information from districts and schools?  Did districts understand the template or guidelines around evidence-based interventions and developing strategies based on needs assessments?  What might need to change in the templates or guidelines so that the next round of plan development and reviews goes more smoothly?
  5. Did the process for completing local needs assessments and root cause analyses for school improvement result in deep enough data analysis to produce actionable areas for school improvement? Is there a strong enough focus on addressing inequities?  Did the processes include significant stakeholder and school engagement?  In your review of improvement plans, did schools and districts link these analyses directly to their strategies?  If not, how can the state help tighten the connection?
  6. Have the state’s guidelines and expectations for resource equity reviews resulted in districts, schools, and the public understanding the full array of resources (i.e., not just school financing and per-pupil funding, but quality teachers, enrichment programs, advanced coursework, etc.) that impact equity of opportunity and outcomes for students? Have the guidelines and templates resulted in districts and schools making connections between resource equity and school improvement planning?  Is this true both for the reviews that ESSA requires of districts with high numbers of schools in Comprehensive Support and Improvement (CSI) and Targeted Support and Improvement (TSI) and for resource equity reviews that ESSA requires for any school identified for Additional Targeted Support and Improvement (ATSI)?  If not, what changes could better support a fuller look at resources and connections to school improvement plans?
  7. If the state produced guidance to districts and schools around improvement planning and strategies, has it resulted in school improvement plans with a greater focus on effective strategies that build capacity and buy-in for the long-term? For example, do districts and schools know more about and select strategies with evidence behind them that fit their unique needs?  What feedback did districts and schools provide about the utility of the state’s guidance to support improvement planning and implementation?  Has the guidance resulted in greater supports for school leaders in schools identified for improvement?  Has it resulted in effective stakeholder engagement in school improvement planning, including voices that may not traditionally have been engaged?  If not, what areas of guidance could be clearer or built out for more effective school improvement planning and implementation?
  8. Did the state’s structures for delivering technical assistance (TA) or connecting districts to TA result in high-quality support that meets the specific needs of districts and schools in improvement? Did you analyze trends across the needs assessments? If so, what adjustments should you make to the focus and format of TA to better meet the needs of identified schools and their districts?  Do districts and schools feel that they have increased capacity to address their needs thanks to the TA – and particularly to address any inequities and ineffective strategies that create low outcomes in their schools?  If not, how could the structures and content of technical assistance and partnerships be strengthened to support schools that struggle with capacity and outcomes?
  9. Do districts and schools understand the state’s criteria for schools to exit improvement and the level of attainment each student group needs to meet in a given school? Do the state’s exit criteria still make sense, based on recent data trends and school identifications?  Is there anything in the first or second year of data or school improvement processes that might be reason to rethink exit criteria?  Do the criteria include a strong enough focus on the performance of often marginalized groups of students, while also allowing for innovation?
  10. Did the state’s progress monitoring and other processes to support school improvement planning and implementation lead to continuous improvement within the agency and within schools and districts? Did they result in actionable information for improvement in addition to checking for compliance? How did districts and schools experience these processes?  As schools and districts move into implementing school improvement plans, are there any adjustments needed to the state agency’s learning agenda and evaluation plans?  Do stakeholders in the community – especially those that have not been traditionally engaged – feel they had and continue to have input into the state’s continuous improvement processes?  Do those processes include deep dives into information and data on equity in each school and community, to keep a clear, sharp focus on equity?

These questions should be adapted to focus on district and school continuous improvement processes too. States, districts, schools, and communities should ask these questions regularly and adjust the school improvement architecture accordingly, based on what they have learned from their efforts and research. Doing so can create a cycle of learning and continuous improvement that constantly iterates on and increases success—a critical aspect of a broader shift in education toward functioning as a learning system.

As each student walks into their new classroom, they are filled with hope that this year might be better than the last.  Will those of us in charge of education systems use this moment to make sure it is?