Thursday, November 23, 2006

How can schools prove their success?

Yesterday we looked at the conundrum of students who pass their classes -- sometimes with flying colors -- yet fail the high-stakes tests established to measure educational success across an entire state. Something's obviously wrong with that picture. Either:
  • the standardized tests are more rigorous than the state standards,
  • schools have failed at preparing students to take standardized tests, or
  • schools are failing to teach the material -- and inflating grades to mask the problem.
All of this points to the difficulty communities face when trying to hold schools accountable for student performance. How do you know that the measuring stick you're looking at is accurate?

This matters at a school like JIS because, as it stands, much of the reporting on student performance comes in the form of grades on internal assessments: unit tests, papers -- if your student is lucky enough to write any -- and projects.

We can look at IB and AP scores, but those are relevant only for high school students -- and with a population as transient as ours, even that sample is limited. (Although wouldn't it be interesting to see an analysis of those scores correlated with the number of years a student has spent at JIS?)

And JIS does participate in the ISA (International School Assessment) for students in grades 3, 5, 7, and 9. But that test is new, its developer was still tweaking the grading rubric as recently as last year, and we don't have much of a track record to go on. (FYI: this is the same test that allowed my 5th grader and his classmates to use a calculator on the math section.)

That leaves the assessments done by individual teachers as the main data source for accountability. Are the results generalizable? Do grades in one classroom mean the same thing in another? Spend any time talking with parents about their children's varying experiences in the same grade, and it would take a lot of convincing to get them to believe that grades alone could hold a school accountable.

Looking at data gathered in the classroom is tough -- both as a measure of accountability, and as a tool to improve instruction. Here's a good article from the latest issue of Education Next (a publication of Stanford University's Hoover Institution) that delves into the complexity of using a variety of data to inform and guide schools' decisions on instruction. In each example, data-driven decision making requires a significant commitment and investment in resources, training, and time.

Sorry to wax on about this subject... it's just been on my mind, and like an unreachable itch in the middle of my back, I can't get rid of it.

If you find this topic at all interesting, please have a look at this website: The Education Commission of the States (ECS) Accountability Site. I'm still wading through it, but I've found the "Accountability Policy Inventory & Analysis Tool" incredibly interesting. It gives specific examples -- bucketloads, in fact -- of the types of data that can demonstrate a school's effectiveness. (To download this Word document, follow the link above, and then click on the tool -- it's on the right side of the page under Highlights.)
