Nieman Journalism Lab
Pushing to the future of journalism — A project of the Nieman Foundation at Harvard

L.A. Times’ controversial teacher database attracted traffic and got funding from a nontraditional source

Not so long ago, a hefty investigative series from the Los Angeles Times might have lived its life in print, starting on a Monday and culminating with a big package in the Sunday paper. But the web creates the potential for long-form and in-depth work to not just live on online, but to do so in a more useful way than a print-only story could. That’s certainly the case for the Times’ “Grading the Teachers,” a series based on the “value-added” performance of individual teachers and schools. On the Times’ site, users can review the value-added scores of 6,000 3rd- through 5th-grade teachers — by name — in the Los Angeles Unified School District as well as individual schools. The decision to run the names of individual teachers alongside their performance was controversial.

The Times calculated the value-added scores from the 2002-2003 school year through 2008-2009 using standardized test data provided by the school district. The paper hired a researcher from the RAND Corp. to run the analysis, though RAND itself was not involved in the project. From there, in-house data expert and longtime reporter Doug Smith figured out how to present the information in a way that was usable for reporters and understandable to readers.

As might be expected, the interactive database has been a big traffic draw. Smith said that since the database went live, more than 150,000 unique visitors have checked it out. Some 50,000 came right away, and the Times is now seeing about 4,000 users per day. And those users are engaged. So far the project has generated about 1.4 million page views — which means a typical user is clicking on more than 9 pages. That’s sticky content: Parents want to compare their child’s teacher to the others in that grade, their school against the neighbor’s. (I checked out my elementary school alma mater, which boasts a score of, well, average.)

To try to be fair to teachers, the Times gave its subjects a chance to review the data on their page and respond before publication. But that’s not easy when you’re dealing with thousands of subjects in a school district where email addresses aren’t standardized. An early story in the series directed interested teachers to a web page where they were asked to prove their identity with a birth date and a district email address to get their data early. About 2,000 teachers did so before the data went public. Another 300 submitted responses or comments on their pages.

“We moderate comments,” Smith said. “We didn’t have any problems. Most of them were immediately postable. The level of discourse remained pretty high.”

All in all, it’s one of those great journalism moments at the intersection of important news and reader interest. But that doesn’t make it profitable. Even with the impressive pageviews, the story was costly from the start and required serious resource investment on the part of the Times.

To help cushion the blow, the newspaper accepted a grant from the Hechinger Report, the education nonprofit news organization based at Columbia’s Teachers College. [Disclosure: Lab director Joshua Benton sits on Hechinger's advisory board.] Aside from doing its own independent reporting, Hechinger also works with established news organizations to produce education stories for their own outlets. In the case of the Times, it was a $15,000 grant to help get the difficult data analysis work done.

I spoke with Richard Lee Colvin, editor of the Hechinger Report, about his decision to make the grant. Before Hechinger, Colvin covered education at the Times for seven years, and he was interested in helping the newspaper work with a professional statistician to score the 6,000 teachers using the “value-added” metric that was the basis for the series.

“[The L.A. Times] understood that was not something they had the capacity to do internally,” Colvin said. “They had already had conversations with this researcher, but they needed financial support to finish the project.” (Colvin wanted to be clear that he was not involved in the decision to run individual names of teachers on the Times’ site, just in analyzing the testing data.) In exchange for the grant, the L.A. Times allowed Hechinger to use some of its content and gave them access to the data analysis, which Colvin says could have future uses.

At The Hechinger Report, Colvin is experimenting with how best to carry out its mission of supporting in-depth education coverage — producing content for the Hechinger website, placing its articles with partner news organizations, or providing direct subsidies as in the L.A. Times series. It’s currently sponsoring a portion of the salary of a blogger at the nonprofit MinnPost whose beat includes education. “We’re very flexible in the ways we’re working with different organizations,” Colvin said. But, to clarify, he said, “we’re not a grant-making organization.”

As for the L.A. Times’ database, will the Times continue to update it every year? Smith says the district has not yet handed over the 2009-10 school year data, which isn’t a good sign for the Times. The district is battling with the union over whether to use value-added measurements in teacher evaluations, which could make it more difficult for the paper to get its hands on the data. “If we get it, we’ll release it,” Smith said.

                                   
  • http://opendna.com Jay McKinnon

    The Economic Policy Institute’s August 2010 Briefing Paper (#278), authored by the best names in educational assessment, reviews the state of the art of Value Added Modeling (VAM). The report finds that VAM is “an invalid teacher evaluation system” which is “far too unstable to be considered fair or reliable” and cannot provide causal relationships “except under extreme and unrealistic assumptions”. The full report here: http://www.epi.org/page/-/pdf/bp278.pdf

    Contra the Hechinger Report (http://tinyurl.com/29t7mhw), the EPI didn’t just caution “leaders against placing undue weight on standardized test scores in evaluations of teacher effectiveness”, it told them that test-based VAM measurements were virtually useless for evaluating teachers.

    The LA Times’ “Grading the Teachers” series has been and remains well-trafficked, but it also lacks any scientific validity for the purpose intended. In other words, the LA Times has been tremendously successful in providing legitimacy to a quantitative methodology which is objectively invalid for the purpose to which it is being applied. The LA Times has no excuse for this error unless the RAND researcher they hired failed to do a literature search within RAND itself (the EPI report includes many quotes and references to RAND studies which are derogatory about using VAM in the way advocated by the LA Times).

    I would be reluctant to cite “Grading the Teachers” as an example of good journalism, because my definition of “good journalism” does not include promoting pseudoscience as the basis for public policy.

  • http://www.niemanlab.org/ Joshua Benton

    Hi Jay: Just to put on my old education reporter hat for a minute, here’s what EPI had to say about value added:

    A review of the technical evidence leads us to conclude that, although standardized test scores of students are one piece of information for school leaders to use to make judgments about teacher effectiveness, such scores should be only a part of an overall comprehensive evaluation. Some states are now considering plans that would give as much as 50% of the weight in teacher evaluation and compensation decisions to scores on existing tests of basic skills in math and reading. Based on the evidence, we consider this unwise. Any sound evaluation will necessarily involve a balancing of many factors that provide a more accurate view of what teachers in fact do in the classroom and how that contributes to student learning.

    In other words, they find standardized test scores are reasonable as a contribution to an overall evaluation, not that they are never to be used for any purpose ever ever ever. That actually seems to line up well with Hechinger’s comment that EPI cautions “leaders against placing undue weight on standardized test scores in evaluations of teacher effectiveness.”

    It also cites ETS:

    VAM results should not serve as the sole or principal basis for making consequential decisions about teachers.

    And NAS:

    …VAM estimates of teacher effectiveness should not be used to make operational decisions because such estimates are far too unstable to be considered fair or reliable.

    All of those support the idea that value-added should not be used as a determining factor in an employment decision. But that doesn’t mean the data shouldn’t be public or that it isn’t useful. There’s plenty of publicly available data that is non-determinative but useful. If I was at fault in five car accidents last year, that’s not proof that I’m a subpar driver. There could be other factors at work! Maybe I’ve gotten better! Maybe I had a really awful car that had serious mechanical problems! If you wanted to really know whether I can drive well, you’d probably want to have some other data. But it’s nonetheless a useful thing to know — particularly when you have as many years’ worth of data as the LAT had.

  • http://www.chanceofrain.com Emily Green

    I am a former LA Times staff writer and now a freelance contributor to the paper. I am also a long-time volunteer at Twenty-fourth Street Elementary School, which was evaluated by the Times project. Link: http://projects.latimes.com/value-added/school/los-angeles/twenty-fourth-street-elementary/

    After years of working intensely at the school developing a (now much changed) school garden curriculum, I can say from first-hand experience that a random generator could have done a better job ranking the teachers.

    A teacher who himself does not speak English properly, never mind teach it, is ranked as “most effective,” while a teacher who is nothing short of amazing and always took the most failing students for her third-grade class is ranked as “average.” Elsewhere in the ratings, one of the best-ranked teachers, a truly good instructor, left three years ago (why rank him?).

    The school did and probably still does have some shockingly bad teachers. But its worst teacher was never worse than the administration. In 2003, the principal was so corrupt that the teachers had to band together to have her removed.

    Her successor, who served from 2004-2006, was brought out of retirement in her mid 70s to take on a then over-crowded failing school. She was frail and so unable to handle staff that a junior custodian thought nothing of riding motorcycles on the playground, senior custodians watched TV in their office, and parents routinely paraded their students in after the bell without so much as a blush. The office was like something out of Hill Street Blues.

    In late 2006, the elderly interim principal was succeeded by a newcomer who hadn’t qualified for her position yet and who, by definition, had no experience. Under her, during the last three years, 50% of the teaching staff have left, including three of the four “excellent” rated teachers.

    Meanwhile, none of the staff, administrative or instructional, can be blamed for the prevailing attitude of the parents, which is that education is free child care and attendance is optional.

    Nothing about the Times ranking captures what is going on at 24th Street Elementary School, other than the students test poorly. To put the names of individual teachers next to crunched scoring histories, then to suggest that there is a direct link, is not “one of those great journalism moments at the intersection of important news and reader interest.” It’s a travesty.

    An adequate journalism moment might be rehiring the hundreds of reporters sacked in recent years by the Tribune and rebuilding a credible education desk.

  • Sherman Dorn

    How is it that you write about the series without discussing ANY of the ethical questions raised around the release of the stats by teacher?

  • Tom Corcoran

    Nieman’s one-sided coverage neglects to point out the high error rate in VAM analysis, the failure of the LA Times to check school rosters to make sure the scores were assigned to the right teachers (standard procedure elsewhere), and the problems associated with high absenteeism and student mobility (endemic in LA). Furthermore, the piece also fails to note that Hechinger, which is associated with Teachers College and uses its university affiliation, funded research without going through TC’s Institutional Review Board, which would have insisted on protections for the human subjects included in the analysis. All in all, this was an injustice to LA’s teachers, a bad day for social science, and a poor ethical decision by Hechinger.

  • http://opendna.com Jay McKinnon

    Thanks for engaging in the comments, Joshua. Everything you have written is true, but the LA Times did not couch the data in those terms.

    The Times was not being entirely honest when it published that “Small differences in rankings are not statistically significant, particularly for those rated near the average.” The EPI study found that even large differences in ranking were not statistically significant, not even for outliers (see p. 12). With 10 years of data the error rate would still be 12%, down from 26% with three years of data (the LAT claims 7% with 7 years of data).

    If “Grading the Teachers” is to be defended with the “right to know,” then the LA Times must publish the statistical methods and models which allowed them to claim error rates substantially lower than any reported by the EPI. The goals of transparency cannot be achieved by hiding suspect science behind pretty bell curve visualizations.

    You use a car accident analogy, so I’ll continue along those lines: Suppose that in any given year there was a 26% chance that your DMV records would falsely report that you got in a serious car accident. Would you endorse risk-based insurance premiums if the chance of such an error was only 12% over ten years, or would you call that unjust?

  • http://thisweekineducation.com alexander

    wow, laura and joshua —

    the story’s a great success but it’s based on iffy methods and its funder is disavowing the paper’s decision to publish individual names.

    and if that didn’t raise interesting journo-ethical questions then what about the comments by other journos?

    bill boyarsky slammed the LAT’s decision- and excuse-making here
    http://www.laobserved.com/boyarsky/2010/09/since_the_los_angeles_times.php

    the nyt’s david leonhardt called it an overreaction
    http://www.nytimes.com/2010/09/05/magazine/05FOB-wwln-t.html?_r=1&wpisrc=nl_wonk

    there’s a lot more here than pageviews and innovative funding methods

    / alexander

  • http://www.niemanlab.org/ Joshua Benton

    Alexander, this isn’t a site about education policy. This is a site about new ways of supporting and producing online journalism. While there’s a healthy debate to be had about the propriety of the LAT’s series, it’s a debate that has happened elsewhere — and we’re happy to focus on the stuff that is in our bailiwick. There are tons and tons of interesting journo-ethics questions raised every day, but we generally leave those to others so we can focus on our beat.

    (That said, I suspect you and I would disagree on the ethical question here.)

    Jay, to answer your question, I think there’s a difference in your car analogy between “endorsing risk-based insurance premiums” — which I’d say would equate to basing salaries/firing decisions on value-added data — and simply making the information public, which is what the LAT did. The question of whether a government agency should act on this data is, for me, a separate one from whether the data should be withheld from the public.

    I will say that, when I faced a similar set of questions as an education reporter — a data analysis of suspicious test scores on a state exam — I didn’t go in the LAT’s direction. I named a handful of the worst offenders in my stories and then approached the rest only in the aggregate. But I don’t fault the LAT for doing it differently.

  • alexander

    ok let’s remind everyone first off that you’ve got a giant conflict on this issue, josh, since you’re on the hechinger board. second of all, i’m not arguing ed policy or research here but the complications and decisions involved in having one outfit pay for something and another outfit implement it — yes, some of them ethical. if that’s not your beat, i don’t know what is. and it sounds like you agree with me on the substance, anyway. / alexander

  • http://www.niemanlab.org/ Joshua Benton

    Alexander, as you no doubt saw, I mentioned my role on the advisory board in the post.

  • http://accomplishedcaliforniateachers.wordpress.com/ Martha

    “and simply making the information public”

    The L.A. Times did much more than “make the information public.” It assigned labels to teachers as effective or ineffective. It created a database where parents could look up a child’s teachers and judge their effectiveness based on flawed data. It’s scandalous and sexy, and I can believe it generated lots of hits. It was also misleading, and when folks try to minimize the harm done to teachers by taking a complex issue (teacher quality) and boiling it down to a number, it should give us pause. Naked pictures of Britney Spears also generate page views. Does this make great journalism?

  • http://thisweekineducation.com alexander

    nb — one of the poorly rated teachers committed suicide last week

    http://losangeles.cbslocal.com/2010/09/27/teacher-from-south-gate-missing-since-monday/

    no direct connection i’m sure.

    /ar

  • http://www.classsizematters.org Leonie Haimson

    My question is, where did Hechinger get the money for this? Were they a pass-through for a deeper-pocketed foundation or funder?
