The Release of Rand Reports Is Independent of Politics

James A. Thomson is president and CEO of Rand

It isn’t every day that a resolutely nonpartisan think tank finds itself in the eye of the political storm during the final stretch of a presidential campaign. In releasing two education studies during the election season--an issue paper last week on the Texas testing system that raised “serious questions” about its validity and a report in late July, just prior to the GOP convention, that looked at the performance of 44 states on national tests and had some positive things to say about Texas--we at Rand anticipated strong reactions from all sides.

We could certainly do without the distortions that followed as both campaigns combed the findings for use in speeches and attack ads, and critics charged Rand with partisan collusion on behalf of one side or the other. Far more disturbing, however, is that the campaigns and the critics have shortchanged the main thrust of the findings in both reports.

The July report was the more comprehensive. Educators have long known that the most powerful factors in predicting student performance are family characteristics, such as race and socioeconomic status, which vary widely among the states. So the authors adjusted state scores on National Assessment of Educational Progress tests for these factors, creating a more-level playing field.
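To make the idea concrete, here is a minimal sketch of how such an adjustment can work in principle: regress each state's score on its demographic profile and keep the part the demographics don't explain. The states, variable names and figures below are hypothetical, and the report's actual statistical model is more elaborate than this illustration.

```python
# Illustrative sketch only (not the report's actual model): adjust state test
# scores for demographic factors by regressing scores on those factors and
# keeping the residual, i.e., the part demographics alone don't explain.
import numpy as np

# Hypothetical data: one row per state.
# Columns: share of low-income students, share of minority students.
demographics = np.array([
    [0.45, 0.52],
    [0.30, 0.20],
    [0.55, 0.60],
    [0.25, 0.15],
])
raw_scores = np.array([214.0, 225.0, 211.0, 229.0])  # hypothetical NAEP-style scores

# Ordinary least squares: score ~ intercept + demographic factors
X = np.column_stack([np.ones(len(raw_scores)), demographics])
coef, *_ = np.linalg.lstsq(X, raw_scores, rcond=None)

predicted = X @ coef                                    # score expected from demographics alone
adjusted = raw_scores - predicted + raw_scores.mean()   # residual, re-centered on the overall mean

for state, raw, adj in zip(["A", "B", "C", "D"], raw_scores, adjusted):
    print(f"State {state}: raw {raw:.1f}, demographically adjusted {adj:.1f}")
```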

The adjusted data allowed them to compare the performance of similar students in one state to their counterparts elsewhere. Texas scored at or near the top in several categories. GOP commentators focused almost exclusively on this finding. Yet the data also enabled the researchers to explore why large score differences remain even after accounting for states’ demographic differences. Here they found that states with lower pupil-teacher ratios, greater access to public preschool and more ample classroom resources for teachers record higher NAEP scores. Democrats emphasized this finding.

These expensive programs account for the bulk of the adjusted score differences between states, according to the July analysis. Still, large sums of money won’t produce commensurate progress unless principals and teachers implement the programs in a well-directed way. As we’ve seen in the resurgence of the American economy, management makes a difference.

This is where the new issue paper comes in. One of the prime tools of effective private sector management is an accountability system that includes clear goals, a well-designed incentive structure and solid performance measures. Building this kind of system into American education is a fine idea. But we have to recognize that the development of accurate education measurements represents an enormous challenge.

Many states are currently trying to measure achievement and implement accountability via statewide testing programs.

Our paper is one of a series of studies showing that these testing programs are not yet working well. In this study, the researchers compared scores on the Texas Assessment of Academic Skills, or TAAS, reading and math tests with Texas scores on NAEP tests in the same subjects in comparable years. Our researchers have looked at statewide testing in other states and will continue to do so. But Texas has the highest-profile high-stakes testing program and one that has recorded extraordinary gains in math and reading scores.

The Texas scores on NAEP, the blue-chip standard in this field, were good. But there was little correspondence between these results and scores that ranged up to six times higher on the statewide test, raising troubling questions about the reliability of the TAAS. Some of our critics have observed that it is normal for state test scores to be higher (the issue paper makes this very point and explains why). But not this much higher.

The questions are troubling because far-reaching decisions are being made, in Texas and elsewhere, based on testing systems that may not be sound. Meanwhile, both major party presidential candidates are calling for yet greater reliance on such programs. Instead of defensive reactions to unwelcome findings, we need trials, evaluations and continuous improvements.

The issue paper recommends a number of steps that could be taken to enhance the tests’ trustworthiness. Unfortunately, these constructive ideas are being lost in the charges and counter-charges surrounding the timing of its release.

Many of the critics suggest that we should have waited to release this study until after the election. Others thought we should have delayed the July report. Given the maelstrom in which we now find ourselves, that sounds appealing. But we release studies when the research is in, when rigorous reviews have been received and when revisions are complete.

Sitting on a study, contrary to that policy, would have been the real political act. It would have raised far more appropriate questions about our integrity than those being heard now. Research institutions that don’t initiate projects for political reasons cannot time their distribution for such reasons.

Rand, unlike a national administration or political campaign, does not have a party line. We recruit the best and demand high-quality, objective analysis.

It’s uncomfortable, of course, to see our researchers’ statements being used by competing campaign Web sites and advertisements, even if that does suggest our balance. Perhaps they will all be taken more seriously once the election dust settles.

Meanwhile, we can look to at least one ironic silver lining. For years, many in the public and the policy community have assumed that Rand analyzes national security problems but not important domestic issues. I think that misconception is out of the way.
