Dear TV Fans: There Is No TV Ratings Measurement System That Will Make Everyone Happy



July 8th, 2012

Whenever there are Twitter Q&As from Syfy's Craig Engler (@Syfy) or USA's Ted Linhart (@TedOnTV), I just can't look away. Having seen many of these sessions, I know they're well into repeat questions and different variants of the same questions, but I still can't look away! There are always lots of questions about Nielsen and measurement. Many fans have a quarrel with Nielsen, but I've come to the conclusion that most of them would have a quarrel with TV ratings measurement no matter what the system of measurement was.

As long as there are fans of TV shows, there will be disappointed fans when shows get canceled. No measurement system can change that and whatever the measurement system, low-rated shows will typically be canceled.

There are several types of complaints and improvement suggestions that come up; here are a few of the most frequent:

We have the technology! We could measure everybody! Why don't they measure everybody!

The technological capability to achieve that probably exists, but no such system actually exists yet. My bet is that one never will, even if the technology vastly improves. There are a lot of reasons why I think that, but the top two are:

  1. lots of people don't want their viewing measured (they view it as an invasion of privacy)
  2. $$$$$ - It would cost too much money

There are other reasons besides those, but both of those are show stoppers. Would a complete census be more accurate than Nielsen? If you could get it, it would, without a doubt, be more accurate. But TV ratings measurement exists for the purpose of buying and selling TV advertising. As expensive as Nielsen is (and it's very expensive), a census-style system would be multiple orders of magnitude more expensive to maintain and manage, and the networks and advertisers aren't going to pay that kind of money for something that might only be a little more accurate.

On top of that, you'd still probably need Nielsen or something like it, because the census system would have so much data to crunch that it likely couldn't produce fast national ratings the next morning and final ratings the next afternoon. The networks need the information fast so they can react and make scheduling decisions.

But what about data from all those set top boxes - that's a lot bigger sample than Nielsen!

It is indeed a bigger sample than Nielsen. But in at least a couple of ways it's not what advertisers (at least buyers of national advertising) want.

  1. it only measures homes with set top boxes. Advertisers want to measure everyone, even the people still using rabbit ears.
  2. it measures household viewing but doesn't tell you *who* in the household watched. Advertisers (at least buyers of national advertising) want to know who watched, what their gender is, what their age is, etc. Set top box data isn't that granular.

The good news is that the data is available and is being used. Usually it's used in addition to Nielsen rather than as a replacement, though recently there have been reports that a few local stations are switching to set top box data to sell local advertising spots. My understanding is that those numbers aren't churned out as rapidly as Nielsen's ratings, either.

Netflix, On Demand, DVR, iTunes and Online Viewing Aren't Counted and They Should Be!

This is a fairly common one, and not just among fans of TV shows; I've seen producers voice similar laments. First, let's talk about the "not counted" piece. DVD, Netflix, and iTunes viewing certainly isn't counted in the Nielsen ratings. That's because the purpose of the Nielsen ratings is to broker the sale of TV advertising, not to produce some type of "total popularity" metric. Since Netflix, iTunes and DVDs don't have advertising, they're not counted in the Nielsen ratings. But that's not the same thing as "they're not counted." Of course sales of DVDs and iTunes episodes are counted, and the people who need to know, know! You can pretty much count on this: in any deal where how much Netflix pays to license content depends on how many people watched on Netflix, it gets counted. There are several reasons why nobody reports a total popularity metric months and months after the fact, but you can start with the fact that, months after the fact, it's not in anyone's interest to pay money to figure all that out. The people who need the information to make decisions have the information.

DVR viewing *is* measured by Nielsen. Currently, advertisers pay for commercials watched up to 3 days after the program initially aired on television (these ratings are commonly called C3 or C+3 ratings). That was the compromise reached between the networks and the advertisers, though the networks are reportedly still pushing for 7 days instead of 3. In any event, Nielsen measures commercial viewing live and up to three days after. On the program ratings side, Nielsen measures DVR viewing up to 7 days after and can slice it in a variety of ways (the common slices are same-day DVR viewing included in the Live+SD daily ratings; L+3, which includes live plus 3 days of viewing; and L+7, which is live plus seven days).
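As a toy illustration (not Nielsen's actual computation, and with invented audience numbers), the way these viewing windows nest can be sketched like this:

```python
from datetime import date

def dvr_buckets(air_date, viewings):
    """Bucket playback events into the common ratings-style windows.
    `viewings` is a list of (view_date, audience) tuples. Illustrative only."""
    windows = {"Live+SD": 0, "L+3": 0, "L+7": 0}
    for view_date, audience in viewings:
        days_after = (view_date - air_date).days
        if days_after == 0:
            windows["Live+SD"] += audience  # live plus same-day playback
        if 0 <= days_after <= 3:
            windows["L+3"] += audience      # live plus 3 days
        if 0 <= days_after <= 7:
            windows["L+7"] += audience      # live plus 7 days
    return windows

air = date(2012, 7, 2)
views = [(date(2012, 7, 2), 1_000_000),  # live / same-day DVR
         (date(2012, 7, 4), 300_000),    # played back within 3 days
         (date(2012, 7, 8), 150_000)]    # played back within 7 days
print(dvr_buckets(air, views))
```

Each later window includes everything in the earlier ones, which is why the L+7 number for a show is always at least as big as its Live+SD number.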

On Demand and online viewing can be tracked by Nielsen, but only up to 3 days after the initial telecast (for purposes of the C3 ratings advertisers use) and only if the stream contains the same national commercials that aired on TV. Some networks are playing around with this, but particularly on the online side it looks like most networks are, for now, shying away from full commercial loads for online viewing. I'm not sure whether that's because the networks don't want to put off online viewers by adding (LOTS) more commercials, or because despite the measurement capabilities advertisers don't want to pay the same rates for online viewing, or a combination of both.


Another thing to keep in mind is that whatever the measurement system, many of the things you (and the networks, for that matter) don't like aren't a function of measurement but of what the advertisers are willing to pay for. So, for example, a new measurement system wouldn't change the fact that broadcast primetime advertising is sold on the basis of adults 18-49 and subsets of adults 18-49. That's not about measurement; that's about what the folks with the big briefcases full of money want to spend it on.

  • Holly


    Doctor Who being taken off PBS might have been contract based rather than ratings based. I think BBC America now has exclusive rights to air Doctor Who in the US.

  • Temis

    Arguments about ratings can be boiled down to two truisms:

    1. People who say the Nielsens are inaccurate don’t understand statistics.

    2. Ratings are for advertisers, not viewers. If advertisers tell the networks that they no longer trust the Nielsens, the networks will fall all over themselves fixing the system, because networks get paid by advertisers, not by viewers.

  • Temis

    I could also add:

    3. Advertisers have millions of dollars at stake, not to mention their jobs, that depend on the accuracy of the Nielsens. They have far more motive to question the accuracy of the Nielsens than we do. Why don’t they? Hmm.

  • Corporate Chef

    No matter how accurate the viewership data may be, the major concern to be addressed is who is really listening to commercials. Multitasking has skyrocketed during commercial breaks. Content is key, and getting your message integrated when viewers are watching is the future; measurement should focus on branded content effectiveness, not diaries. Advertisers need to justify spend with real ROIs (both marketing and sales), not just “projected” reach. They should overlay branded entertainment results with second-by-second data on who is actually watching the integrated branded entertainment message. The findings may just provide better planning and insights.

  • Holly

    @Corporate Chef,

    OK, now how should they measure “branded content effectiveness”?

  • Corporate Chef

    There are a few companies that specialize in such

  • The End

    @Corporate Chef

    But that’s not an answer to Holly’s post; she’s asking you a question which you’ve just deflected with a non-answer. :)

  • Alex

    I think people would have more respect and understanding of the rating systems if there weren’t decades of data showing how wrong they can be. I remember it being shown that the wrong demographic was being measured with the original Star Trek and had the right numbers been calculated, it would have run for at least one more year. I think if the numbers were simply more accurate, that would go a long way towards easing people’s bad feelings. Producers, on the other hand, could save their fans a lot of heartache by adopting the British format of programming – shows that have set runs of 6, 12, 20 episodes and then that’s it; the seasons are self-contained and if they get renewed, they do another self-contained set of episodes. I stopped watching regular weekly TV a long time ago because I got tired of becoming involved in a story only to have it cut off after 5 episodes or 13.

  • Alex

    @Fleur Depending on when Doctor Who was cancelled on your PBS station it might well have had to do with the cost. In the 1990s and 2000s they raised the cost of the show and the viewer-supported PBS stations (it was not purchased by a network, but by individual stations) dropped it. So here’s the question: never mind whether you had the show listed in the diary – did you phone PBS and donate money?

  • Holly


    Do you have an example from the last 20 years? Using an example from more than 40 years ago seems to be stretching a bit.

  • John (the Scruffy One)

    Nielsen ratings measure watching habits among all levels of TV watching as well, right? I tend to think that people may just be having discussions about certain shows in certain TV watching cohorts (1-2 hours, 10-15 hours, 20+). People randomly self-group in such distributed ways. So on a site like Television by the Numbers, of course a large population is going to talk about Fringe, but that doesn’t mean that the rest of people who watch TV are. TVBTN does not constitute a random sample.

    Or to put it another way. Assume I have 15 friends. If the sample of the 16 of us is not a representative sample in the statistical sense, we may be the only 16 people watching a show even though it looks like “Everybody watches it” from my perspective.

  • Corporate Chef

    Sorry Holly, I ran into a quick meeting. Measuring branded entertainment effectiveness has resonated with advertisers, especially with CMOs. The most traditional approach is panels or focus groups, which are becoming less popular with researchers as predictive models and social media monitoring are on the rise. No doubt this chat is being monitored by some new start-up who will pasteurize it into their business model. I was not deflecting the answer; measuring the effectiveness of branded entertainment is not a one-paragraph answer. Nielsen has some metrics, but a company called iTVX can answer your questions. I am sorry for changing the subject, but “content not reach” will be the driving force in media planning. Every broadcaster has some form of branded integration offerings attached to their commercial inventory.

  • Freddy Arrow


    “I remember it being shown that the wrong demographic was being measured with the original Star Trek and had the right numbers been calculated, it would have run for at least one more year.”

    Do you have a link for this anywhere? I’ve been a Star Trek fan for a long time and I’ve never heard of such a thing. I’ve always heard that Star Trek got mediocre to bad ratings when it was on in prime time, but for whatever reason, it took off in syndication.

  • Nadine

    Well the argument about sample size being inadequate is the one thing that I have to say might warrant consideration. But that’s something for a statistician to comment on. It could be significant–or not. They do pretty good election forecasts with fairly small samples, but how small is good?

  • Tom

    I feel like I should save this comment and paste it in every time this conversation comes up.

    Why Set Top Boxes are Not a Replacement for Nielsen
    Does the STB know the number of people living in the home and their demographic information? (Not without privacy/opt-in issues.)

    But let’s imagine there were no privacy issues:
    Does the STB know what the population without STBs were watching?
    Does the STB know how many people were actually in the room at the time, and not getting a snack, indisposed, etc.?
    Does the STB know if anyone is actually watching the feed, and is not operating off a different video input (e.g. video games)?
    Does the STB know if the TV itself is even on, and the user simply didn’t turn off the power to the TV (and not the STB)?
    Does the STB know if the input is being routed to an independent DVR (admittedly, they are getting more rare), and when or if that DVRed content is ever watched?

    (Note that, at least in theory, the answer for all of the above for People Meters is yes.)

    Now, STB data can be used as a sanity check on Nielsen data. If 300k STBs ‘watched’, say, Sanctuary, but Nielsen reported only 200k viewers, then either far more STBs are left on when not in use than you expect, or the sci-fi audience is underrepresented in the Nielsen sample. (But then again, nearly no one actually cares about viewers, they want ratings data.)

  • Alex Roggio

    I’ve taken various Statistics courses so I know how systems like Nielsen work and why they are utilized. But statistics only works if you ignore the possibility of your sample being filled with oddballs that go against the majority.

    If out of 10 TV screens, 8 of them were watching The X Factor while 2 of them were watching Grey’s Anatomy, but your sample of three just happened to have the 2 that were watching Grey’s Anatomy, your conclusion would be that 33% of people watch The X Factor while the other 67% are watching Grey’s Anatomy. Did you see how that deviates from the actual 80% and 20% totals?

    A lot of you Nielsen (and statistics) supporters will just cry about how the above circumstance is highly improbable, but there are various counter-arguments. #1 Nielsen’s sample size is so ridiculously small (less than 0.001%) that it is much less likely that their samples are representative of the total population & #2 Even if it’s “highly improbable”, it’s still POSSIBLE.

    Hell, even if you want to exaggerate Nielsen’s accuracy and say they are 95% accurate, that’s still 5% of a possibility that they got it all wrong. 5% of a possibility that when they said everyone was watching X show, they were actually watching Y show. Shows will get cancelled, people will lose their jobs, others will get rich.. all because of misinformation. This has been happening for decades! And we all know Nielsen’s “margin of error” is far greater than 5%.

    There’s also the fact that the people being measured know they are being measured (response bias). This keeps them from being completely random and thus increases the margin of error tenfold. “I really want to watch Dancing with the Stars, but Bones needs all the help it can get!” You think that type of thinking doesn’t go on in a Nielsen household? These people KNOW that their viewing habits represent hundreds of thousands of viewers and are affecting the fate of many TV shows. They will most certainly take this into account before turning on a TV, even subconsciously.

    Statistics in this type of medium can only work when the samples are large (at least 25% of TV viewers), samples are completely random (every week, a completely different group of people are being measured) and response bias is avoided (viewers have no idea that they are being measured). Until a system like this exists, no one should trust the ratings. I know they WILL anyway, but still, they shouldn’t…
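For what it's worth, the small-sample scenario in the comment above is easy to check with a quick simulation. This sketch (plain Python, with an invented show watched by 20% of households) draws repeated random samples and reports how far off the estimates tend to be; the spread collapses as the sample grows, which is the statistical argument for why a 20,000-household panel behaves very differently from a sample of 3:

```python
import random

random.seed(42)

TRUE_SHARE = 0.20  # hypothetical: 20% of households watch the show
TRIALS = 300       # repeated samples drawn per sample size

def estimate_spread(sample_size):
    """Return the 95th percentile of |estimate - truth| over many samples."""
    errors = []
    for _ in range(TRIALS):
        hits = sum(random.random() < TRUE_SHARE for _ in range(sample_size))
        errors.append(abs(hits / sample_size - TRUE_SHARE))
    errors.sort()
    return errors[int(0.95 * TRIALS)]

for n in (3, 100, 20_000):
    print(f"n={n:>6}: 95% of estimates land within +/- {estimate_spread(n):.3f}")
```

A sample of 3 really can report 33% for a 20% show, exactly as described above; a 20,000-household sample almost never misses by more than a fraction of a point. Whether that residual error matters is the advertisers' call, as the post argues.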

  • Holly

    @Alex Roggio,

    According to Ted Linhart, the Nielsen sample is actually 0.02%, not 0.001%. Still small, but a huge difference from the number you gave. The sample has also increased 4X in the last 10 years (from 5K households in 2002), with more increases planned.

    I think a larger sample size would be useful, but a 25% sample is simply unfeasible. The increase in accuracy won’t come close to out-weighing the cost.

  • Holly

    Oops, I meant to include that you are right about the possibility, however small, of a random(ish) sample resulting in bad data. However, while it is theoretically possible, I think the chances are small enough to justify inaction by Nielsen and apparently it’s not a big enough worry for the advertisers and networks to push (and pay) Nielsen to change.

  • Freddy Arrow

    @Alex Roggio

    “Statistics in this type of medium can only work when the samples are large (at least 25% of TV viewers), samples are completely random (every week, a completely different group of people are being measured) and response bias is avoided (viewers have no idea that they are being measured). Until a system like this exists, no one should trust the ratings. I know they WILL anyway, but still, they shouldn’t…”

    That’s just not true at all. Using the calculator found here:

    You can determine what sample size you need. Assuming 115,000,000 households in the US, a 99% confidence level with a 1% confidence interval requires a sample size of only 16,639. I believe Nielsen’s sample is 20,000 to 25,000.
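The 16,639 figure matches the standard textbook sample-size formula with a finite-population correction, which is presumably what the calculator computes (this is not a description of Nielsen's actual methodology):

```python
import math

def required_sample(population, z=2.58, margin=0.01, p=0.5):
    """Sample size needed to estimate a proportion.
    z=2.58 approximates 99% confidence; p=0.5 is the worst-case proportion.
    The denominator applies the finite-population correction."""
    n0 = z * z * p * (1 - p) / (margin * margin)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(required_sample(115_000_000))  # -> 16639
```

Note that the required size is driven almost entirely by the confidence level and margin of error, not by the population size; that is why roughly 16,600 households suffice for 115 million.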

  • AppleStinx

    @Freddy Arrow

    The sample must be reasonably representative of the population. Sampling each metropolitan area would be more representative overall than sampling the entire country as one, and Nielsen does sampling by local markets.

    If you use that calculator just for the NY metropolitan area (7,387,810 households), at a 99% confidence level and 1% confidence interval, your sample size for NY alone would be 16,604 households. Next, for LA: 16,591 households; Chicago: 16,562. And so on. To complicate matters, in many cases one household accounts for 2 or more samples: Nielsen installs peoplemeters on all the TV sets in the household, i.e. the number of installed peoplemeters always exceeds the panel size (households).
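The per-market figures in the comment above illustrate a quirk of that formula: once a population is large, the required sample barely depends on the population at all, so sampling each market separately multiplies the total panel size. A short sketch (the NY household count is the one quoted above; the other market sizes are made-up placeholders):

```python
import math

def required_sample(population, z=2.58, margin=0.01, p=0.5):
    # Textbook sample size for a proportion, with finite-population
    # correction; z=2.58 ~ 99% confidence, p=0.5 worst case.
    n0 = z * z * p * (1 - p) / (margin * margin)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# NY count is from the comment above; the other sizes are hypothetical.
for name, households in [("US total", 115_000_000),
                         ("New York", 7_387_810),
                         ("Mid-size market", 3_000_000),
                         ("Small market", 500_000)]:
    print(f"{name:16} {required_sample(households):,} households")
```

Shrinking the population from 115 million households to half a million only trims the requirement from about 16,600 to roughly 16,100, so covering dozens of markets to the same precision demands many times the national panel size.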

© 2015 Tribune Digital Ventures