Wednesday, July 25, 2007

Chairman Miller's NCLB Memorandum Link

Education Week's blogger extraordinaire, Alexander Russo, posted House education committee chairman George Miller's memorandum to House freshmen from earlier this month, which lays out Miller's stance on NCLB. The two-page memo, dated July 7, outlines nine key proposals for revising the controversial law. Here's a link to the memorandum: http://blogs.edweek.org/edweek/thisweekineducation/upload/2007/07/the_miller_reauthorization_mem/G_Miller_Memo_July_2007.pdf

Mapping State Proficiency Standards onto NAEP Scales

Under NCLB, states are required to report the percentages of students achieving proficiency in reading and mathematics for grades 3 through 8. As tempting as it is to compare proficiency scores across states, researchers are well aware of the pitfalls of this approach, given the differences in state curriculum standards and assessments.

In a recently released report (see link below), the Department's Institute of Education Sciences compared state assessment proficiency percentages to the estimated percentages of students achieving proficiency with respect to the standards established by the National Assessment of Educational Progress (NAEP). IES found large discrepancies between the two. It attributed this variation to differences from state to state in both content standards and student academic achievement, as well as to differences in the stringency of the standards the states have adopted.
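To make the mapping idea concrete, here is a minimal sketch in Python of one common way to place a state's proficiency standard on the NAEP scale (an equipercentile-style mapping): find the NAEP score at which the share of a state's NAEP sample scoring at or above it matches the proficiency rate the state reports on its own test. This is a simplified illustration, not the IES report's exact method; the simulated scores, the reported percentages, and the function name naep_equivalent are all invented.

    import numpy as np

    def naep_equivalent(naep_scores, pct_proficient_on_state_test):
        # The score at the (100 - p)th percentile leaves roughly p percent
        # of students at or above it.
        return np.percentile(naep_scores, 100 - pct_proficient_on_state_test)

    # Illustrative only: simulated NAEP reading scores for one state's sample.
    rng = np.random.default_rng(0)
    state_naep_sample = rng.normal(loc=220, scale=35, size=2000)

    # A state reporting 85% proficient maps to a fairly low NAEP cut score;
    # a state reporting 40% proficient maps to a much higher one.
    for reported_pct in (85, 40):
        cut = naep_equivalent(state_naep_sample, reported_pct)
        print(f"{reported_pct}% proficient on the state test maps to a NAEP cut score of about {cut:.0f}")

The point of the exercise is that two states reporting very different proficiency percentages may simply have set cut scores of very different stringency, which is one of the explanations IES offers for the discrepancies.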

Thursday, July 5, 2007

State Graduation Rates: A Failing Accountability Measure

The heightened focus on accountability in education and the growing interest in high school reform in recent years have drawn attention to one of the most significant indicators of a high school’s performance: the student graduation rate. Yet there has been considerable debate in educational policy circles for several years about how many students actually earn a diploma. Various reports cite national graduation rates as low as 70 percent and as high as 83 percent. The state picture is even more confusing: 36 states report graduation rates between 80 and 97 percent, while an independent source has suggested that the rates in these same states are only 58 to 86 percent.

While statisticians and researchers argue about the best approaches for estimating graduation rates, the public is left to ponder why state governments fail to accurately count the number of students who graduate from their schools each year and what can be done to fix this problem. This paper sets out to explain the reasons for this inaccuracy in graduation rates and to identify future directions for assuring accuracy in reported graduation rates.

When the No Child Left Behind Act of 2001 made an on-time graduation rate an accountability reporting requirement, few states had data collection systems adequate to produce an accurate rate. The accurate calculation of this rate requires cohort data: student record data on student progress from grade to grade, on graduation status, and on students who transfer in and out of a school, district, or state during their secondary school studies. Given the absence of cohort data, the NCLBA regulations offered the states some flexibility in reporting graduation rates. Specifically, the states were provided with the option of developing their own definition of a graduation rate that “more accurately measures the rate of students who graduate from high school with a regular diploma” (Title I Final Regulations, 2002, section 200.19(a)(1)(B)), provided that the state did not count dropouts as transfers. The states individually responded with new proxy graduation rate definitions, most of which were calculated using cross-sectional data.

Independent researchers quickly pointed to flaws in several of these state-created graduation rate definitions that had been approved by the U.S. Department of Education (hereafter referred to as the Department). The Education Trust examined the first round of state graduation rate data reported to the Department for the 2001-02 school year and found that many state graduation rates appeared to diverge significantly from independent estimates. The report writers asserted that many states used questionable graduation rate definitions, which too often had the effect of overinflating graduation rates. North Carolina, for instance, reported an impressive graduation rate of 92.4%. However, when Jay Greene, an independent analyst, applied a cohort definition to enrollment data and diploma counts in the Department’s Common Core of Data, he produced a graduation rate of only 63% for the state.

The problems associated with the North Carolina graduation rate definition are illustrative of the complex problem of graduation rate definitions. The North Carolina definition was based not on the percentage of students who entered in the 9th grade and received a diploma four years later, but on the percentage of diploma recipients who received their diploma in four years or less. Students who dropped out of high school or transferred were excluded from the calculations altogether. As the Education Trust noted in its 2003 report, this meant that “if only 50% of students who enter 9th grade in North Carolina were to eventually obtain a high school diploma, but every one of those 50% did so in four years or less, then North Carolina would report a ‘graduation rate’ of 100%.” North Carolina’s definition clearly produces a misleading graduation rate, yet it met the NCLBA criteria for state-defined rates and had been approved by the Department.
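The arithmetic behind the Education Trust's illustration is easy to reproduce. The short Python sketch below uses invented counts, not actual North Carolina data, to contrast the two definitions:

    # Invented counts, not actual North Carolina data.
    entering_9th_graders = 1000   # the cohort
    diplomas_awarded = 500        # all earned within four years
    # The other 500 students dropped out or otherwise never earned a diploma.

    # North Carolina-style rate: share of diploma RECIPIENTS who finished in
    # four years or less; dropouts and transfers never enter the denominator.
    nc_style_rate = diplomas_awarded / diplomas_awarded      # 1.00, i.e., 100%

    # Cohort-style rate: share of entering 9th graders who earned a diploma.
    cohort_rate = diplomas_awarded / entering_9th_graders    # 0.50, i.e., 50%

    print(f"Rate under the North Carolina definition: {nc_style_rate:.0%}")
    print(f"Four-year cohort graduation rate:         {cohort_rate:.0%}")

The denominator is the whole story: restricting it to diploma recipients guarantees a rate near 100 percent no matter how many students never finish.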

There are two issues with North Carolina’s graduation rate definition that recur in many states. First, the definition relies on cross-sectional data: in 2003, as is still the case today, North Carolina has no student record data system in place and therefore cannot use cohort data to calculate its graduation rate. Because student record data are not available, the state cannot track student progress from the 9th through the 12th grade to provide an accurate calculation of on-time graduation. Second, cross-sectional data provide annual snapshots of the number of students and dropouts at one point in the school year, but they cannot track individual students as they transfer in and out of a school, district, or state during their secondary school education. This failure to account for transfers produces graduation rates that misrepresent the true proportion of students receiving an on-time diploma.

The need to determine accurate graduation rates fueled efforts at the national level to improve the state of educational data systems. Following the published recommendations of a National Center for Education Statistics task force on Graduation, Completion, and Dropout Indicators, the Department directed resources to assist states with the development of student record data systems. The Department launched, for instance, a $50 million program of cooperative agreements to help states develop their student record data systems; initial awards were made to 14 states in November 2005. The National Governors Association undertook its own measures to improve the quality of graduation rates, developing the Graduation Counts Compact in 2005. Under the Compact, states commit to developing a high-quality, student-level data collection system that tracks students from kindergarten through college. The Compact also includes a four-year cohort graduation rate formula: the number of students who graduated divided by the number of students enrolling in 9th grade for the first time four years earlier, plus the students who joined this class of students (that is, the cohort) and minus the students who left. All 50 states signed the Compact and promised to implement the reforms.
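The Compact formula described above amounts to a single calculation. The sketch below restates it in Python; the function name and the example counts are hypothetical and purely illustrative.

    def compact_cohort_rate(graduates, first_time_9th_graders, transfers_in, transfers_out):
        # Adjusted cohort: first-time 9th graders four years earlier, plus
        # students who later joined the class, minus students who verifiably left.
        adjusted_cohort = first_time_9th_graders + transfers_in - transfers_out
        return graduates / adjusted_cohort

    # Hypothetical example: 820 on-time graduates from a class of 1,000
    # first-time 9th graders, with 120 students transferring in and 150
    # verifiably transferring out over the four years.
    rate = compact_cohort_rate(graduates=820, first_time_9th_graders=1000,
                               transfers_in=120, transfers_out=150)
    print(f"Four-year cohort graduation rate: {rate:.1%}")   # about 84.5%

Note that the formula only works if the transfer counts themselves are trustworthy, which is exactly the documentation problem discussed below.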

However, at present fewer than half of the states have student record data systems that will enable them to follow individual students from pre-kindergarten through high school, and only a handful are using them to calculate a four-year cohort graduation rate (Hoff, 2006). It may take years for other states to get such systems in place. Curran reported that the National Governors Association anticipates that by 2010, 39 states will report a graduation rate using the Compact (cohort) definition. These states will begin reporting only after they have developed four or five years of longitudinal data capable of tracking students’ progress from their first-time entry into the ninth grade through their exit from high school.

It is not sufficient to report cohort graduation data without taking into account student transfers. As the U.S. Government Accountability Office noted in its 2005 report, No Child Left Behind Act: Education Could Do More to Help States Better Define Graduation Rates and Improve Knowledge about Intervention Strategies, the primary factor affecting the accuracy of graduation rates is student mobility. The American family in the early 21st century is highly mobile, presenting a challenge to state and local education agencies to accurately account for student transfers and dropouts.

In order to accurately track students who transfer in and out of a local education agency, a state must have a student record data system that includes exit or “leaver” data. These data are typically entered in the system as codes that identify the reason a student left a particular district or school (Data Quality Campaign, 2006). States with exit data systems in place provide their districts with a set of codes with which to identify reasons for students’ exits, including events such as death, transfer out of state, transfer to a home school, transfer to another country, transfer to a private school, incarceration, General Educational Development (GED) certificate, and hospital-bound.
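For illustration, an exit code system of this kind might be represented as in the Python sketch below. The code values, labels, and groupings are hypothetical; they do not reproduce any state's actual taxonomy or the codes recommended by the Forum report discussed next.

    # Hypothetical exit ("leaver") codes; values, labels, and groupings are
    # illustrative only.
    EXIT_CODES = {
        "T1": ("transfer", "Transferred to another public school in state"),
        "T2": ("transfer", "Transferred out of state"),
        "T3": ("transfer", "Transferred to a private school"),
        "T4": ("transfer", "Transferred to a home school"),
        "T5": ("transfer", "Transferred to another country"),
        "G1": ("completer", "Graduated with a regular diploma"),
        "C1": ("other completer", "Earned a GED certificate"),
        "D1": ("dropout", "Dropped out"),
        "E1": ("other exit", "Deceased"),
        "E2": ("other exit", "Incarcerated"),
        "E3": ("other exit", "Hospital/homebound"),
        "U0": ("unknown", "No documentation of reason for exit"),
    }

    def exit_category(code):
        # Unknown or unrecognized codes default to "unknown" so they can be
        # reviewed rather than silently counted as transfers or dropouts.
        return EXIT_CODES.get(code, ("unknown", "Unrecognized code"))[0]

    print(exit_category("T3"))   # transfer
    print(exit_category("D1"))   # dropout

How a district decides which code to assign, and what documentation it requires first, is what determines whether the resulting graduation and dropout rates can be trusted.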

Since exit code systems are critical to the reliability of graduation rates reported by states, the Department’s National Forum on Education Statistics, a cooperative of state, local, and federal education agencies, produced Accounting for Every Student: A Taxonomy for Standard Student Exit Codes (2006), which outlines the need for an exit code system and provides recommendations on how to establish a classification system of exit codes within the architecture of the student record data system. An exit code system, the Forum argued in this report, can greatly assist states in producing correct and comparable calculations of completion and dropout rates.

However, an exit code system is only as useful as the policies and procedures governing its data. A local educational agency without policies and procedures in place to adequately identify the reasons for students’ exits, for instance, may inaccurately report all students of “unknown” status as dropouts. It may also count a student who drops out, returns to school, and then drops out again as a dropout each time, inflating the district’s dropout rate. Alternatively, a district may record students who drop out as transfers before it receives documentation that the student actually enrolled in a different school, thereby artificially lowering the dropout rate. These situations illustrate the problems posed by insufficient policies and procedures for identifying and tracking student exits.

Identifying students who transfer is important to calculating graduation rates, since a state-defined rate under NCLBA, as noted earlier, can measure the rate of students who graduate from high school with a regular diploma only if the state does not count dropouts as transfers. The National Governors Association Task Force on State High School Graduation Data recommended that states adopt a standardized procedure for documenting transfers, as part of its recommendations on how to develop a high-quality, comparable high school graduation measure. The task force recommended that a transcript request or other documentation from a receiving school be the only acceptable evidence for recording a student departure as a transfer; by default, a student for whom there is no information should be documented as a nongraduate or dropout. However, as of August 2006, few states had established this procedure. Curran reported that many states, as their student record systems came online, were developing checks and audit procedures to confirm undocumented transfers when a student registered at another school and to flag students recorded as transfers who never showed up as enrolled elsewhere. This approach, however, does not fully address transfers to private schools, home schools, or out-of-state schools, which are harder to verify.
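One way such an audit check might work, assuming (hypothetically) that exit records and statewide enrollment records share a common student ID, is sketched below; the student IDs and codes are invented.

    def flag_undocumented_transfers(exit_records, enrolled_elsewhere):
        # exit_records: dict mapping student ID -> exit code ("T..." = transfer).
        # enrolled_elsewhere: set of student IDs later found enrolled at another
        # in-state school (e.g., via a transcript request or enrollment match).
        flagged = []
        for student_id, exit_code in exit_records.items():
            if exit_code.startswith("T") and student_id not in enrolled_elsewhere:
                flagged.append(student_id)
        return flagged

    exits = {"s001": "T1", "s002": "D1", "s003": "T1", "s004": "T2"}
    enrolled_elsewhere = {"s001"}   # only s001 actually registered at another school
    print(flag_undocumented_transfers(exits, enrolled_elsewhere))
    # ['s003', 's004'] -- note that s004 left the state, so an in-state
    # enrollment match cannot verify the transfer; such cases still need
    # other documentation.

As the example suggests, an in-state enrollment match catches unverified transfers within the state, but transfers to private schools, home schools, or other states still depend on documentation from the receiving school.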

The use of proxy graduation rate definitions will soon end as states, with the support of the Department and interested groups such as the National Governors Association and the Data Quality Campaign, complete the process of developing student record data systems. When student record systems come online, many of the accuracy issues plaguing graduation rate reports are expected to disappear. However, the high school graduation rates reported using cohort data will be accurate only if actions are taken now to ensure that policies and procedures for documenting student exits, especially transfers, are developed and respected. Ultimately, leadership at the school, district, and state levels is responsible for creating a data-driven culture that makes it a priority to rethink, and possibly reorganize, how education data are managed throughout the system and to increase training and professional development for staff, including managers and users. Only a broad-based commitment to data quality will produce accountability measures that accurately portray the state of education in the U.S.