ABOUT THAT 39% ENHANCED CBET/VOLTAIR INCREASE….

July 13, 2016

 

 

In early October 2015 Nielsen announced the results of their enhanced CBET testing in the Washington, DC/Baltimore markets that had taken place over the previous two months. They proudly stated that, using the new enhanced CBET, 39% of the 289 rating combinations across 5 (unknown) Demos and 6 (unknown) Dayparts of 14 (unknown) stations went up by a 0.1 AQH Rating. If you were thinking 5 x 6 x 14 = 420, you were not alone (not to mention that 14, 5 and 6 do not even divide evenly into 289). So if the 289 Station/Daypart/Demo count that Nielsen continues to insist is correct is actually incorrect, could the entire 39% premise be wrong? That count is the key to calculating the percent change.
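The arithmetic is easy to verify; a quick sketch (Python, purely illustrative of the point above):

```python
# Nielsen's stated test dimensions: 5 demos x 6 dayparts x 14 stations.
demos, dayparts, stations = 5, 6, 14
print(demos * dayparts * stations)                  # 420 combinations, not 289
# None of the stated dimensions divide 289 evenly either:
print(289 % demos, 289 % dayparts, 289 % stations)  # 4 1 9
```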

 

Radio generally ignored this discrepancy and read the announcement as meaning 39% of the stations would be going up – nearly a 50:50 chance it would be theirs. Playing these odds, Nielsen quickly rolled out the new enhanced CBET encoders late last year (Oct-Dec).

 

Now that close to 100% of the PPM Stations are using the new Encoders and/or Voltairs, what exactly did Nielsen actually deliver in the all-important Persons 25-54 Rating, the key Demo for roughly 50% of Radio Revenue?

 

BEFORE THE NEW ENHANCED CBET:  The first Voltair (the first code enhancer) was installed into a small PPM Market in early Spring 2014; another went into a large market in late Spring 2014 (based on comments Voltair representatives made at trade shows).  With this information, the June 2014 PPM monthly Nielsen report (which primarily measured May) can give us a valid ‘before’ comparison of the PPM markets without Voltairs (with the exception of 1 or possibly 2 Stations with Voltairs already installed).

 

BY THE JUNE 2015 PPM RESULTS:  Based on comments Voltair representatives made to trade publications, one can estimate that the 600-700 Voltairs shipped had been installed at roughly 60% of the Stations in PPM Markets.

 

AFTER THE NEW ENHANCED CBET: The AFTER comparison was a big surprise. The current PPM June ’16 monthly is two years after the first Voltair and coming up on one year after Nielsen’s test install of their enhanced CBET encoders (in Washington/Baltimore). During this two-year period, almost all stations have installed some type of PPM signal enhancer.  However, in this current monthly, the PUR (Persons Using Radio, also known as PUMM, Persons Using Measured Media) in Washington, DC was actually DOWN (AQH Rating A25-54 M-S 6A-12M) from June 2014 to June 2016. In fact, a full 25% of the 48 PPM markets are down or flat in PUR.

 

To be fair, Nielsen never stated that Market PUR would rise. However, since Nielsen noted a 39% gain of AQH Rating in their enhanced CBET test, it is logical to assume that (unless this was simply a rounding issue) the market PUR also should have risen significantly.

·        Nielsen also never stated that any stations experienced decreases in AQH Rating.  However, in their Washington, DC test market, the total PUR ratings actually fell 2%.

 

The next logical question to explore: Did 39% of the Stations experience at least a 0.1 AQH Rating gain in Nielsen’s Washington test market among Persons 25-54, full week?

 

·           In Washington, between June 2014, June 2015 and June 2016, a total of 61 stations made the book in at least one of these three Monthlies.

o   Of those 61, a total of 2 stations (3.3%) gained 0.2 Rating Points (WAMU-FM and WTOP-FM). (A surprise, as only 2 out of 289 Station/Daypart/Demo combinations in Nielsen’s test gained 0.2 AQH Rating. That’s 0.7% in the Nielsen test compared to 3.3% June 2014>June 2016.)

o   Of those 61, a total of 5 stations (8.2%) gained 0.1 Rating Points.

o   Of those 61, a total of 9 stations (14.8%) decreased 0.1 Rating Points.

o   Of those 61, a total of 1 station (1.6%) decreased 0.2 Rating Points.

 

·           Bottom Line: 7 were up; 10 were down.  What happened to the remaining 44 stations? Those 72.1% were flat.

 

Summary: In Washington, during the period where most of the stations installed PPM enhancers, 11.5% were up; 16.4% were down; 72.1% were flat.  That resulted in a net loss to the stations of -4.9%!  That’s a far cry from Nielsen’s report of 39% gains of +0.1 AQH Rating in their Washington/Baltimore test in Radio’s most important demo.
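For the record, the percentages above fall straight out of the station counts; a minimal check (Python):

```python
# Washington, DC: 61 stations made at least one of the three monthlies.
up, down, flat = 7, 10, 44          # stations up, down, flat (P25-54, M-S 6A-12M)
total = up + down + flat            # 61
for label, n in (("up", up), ("down", down), ("flat", flat)):
    print(f"{label}: {100 * n / total:.1f}%")
print(f"net: {100 * (up - down) / total:.1f}%")   # -4.9% net change
```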

 

 

 

  Washington, DC All Stations
      June 2014 > June 2016

Percent    Stations   Change
  3.3%         2        +0.2   (up subtotal: 11.5%)
  8.2%         5        +0.1
 72.1%        44         0
 14.8%         9        -0.1   (down subtotal: 16.4%)
  1.6%         1        -0.2

 

 

 

Are the results in Baltimore different?

 

Unlike Washington, Baltimore did show an increase in Market PUR (AQH Rating A25-54 M-S 6AM – 12M) from June 2014 to June 2016, rising from 7.6 to 8.4. That certainly looks more encouraging from the start.

·        Between June 2014, June 2015 and June 2016 a total of 59 stations made the Baltimore book in at least one of the 3 Monthlies.

 

o   Of those 59, a total of 3 stations (5.1%) gained 0.3 Rating Points (Radio One’s WERQ-FM and WWIN-FM as well as WLIF-FM). (A25-54 M-S 6A-12M). (Note: Nielsen did not find any test station rising 0.3 AQH Rating Points in their study).

o   Of those 59, a total of 3 stations (5.1%) gained 0.2 Rating Points (WCBM-AM, WIYY-FM and WJZ-FM). 

o   Of those 59, a total of 9 stations (15.3%) gained 0.1 Rating Points.

o   Of those 59, a total of 13 stations (22.0%) decreased 0.1 Rating Points.

o   The remaining 31 stations were flat (52.5%).

 

·        Bottom Line: 24.5% were up; 22.0% were down; 52.5% were flat; there was a net gain of ONLY 3.5%.

 

 

 

        Baltimore All Stations
       June 2014 > June 2016

Percent    Stations   Change
  5.1%         3        +0.3   (up subtotal: 25.4%)
  5.1%         3        +0.2
 15.3%         9        +0.1
 52.5%        31         0
 22.0%        13        -0.1   (down subtotal: 22.0%)

 

 

What if we combine the Washington, DC and Baltimore markets, using the 70 Stations from both markets with at least a 0.1 AQH Rating Persons 25-54 in at least one of the surveys?

 

·        Of those 70 Stations:

o   3 Stations (4.3%), all in Baltimore, increased 0.3 AQH Ratings (A25-54 M-S 6A-12M) from June 2014 to June 2016 (Note: Nielsen did not find any test station rising 0.3 AQH Rating Points in their study).

o   5 Stations (7.1%) increased 0.2

o   14 Stations (20.0%) increased 0.1

o   22 Stations (31.4%) decreased 0.1

o   1 Washington Station (1.4%) decreased 0.2.

o   The remaining 25 Stations (35.7%) were FLAT.

 

·        Bottom Line: 31.4% were up; 32.9% were down; 35.7% were flat; there was a net LOSS of -1.5%.

 

 

Washington + Baltimore All Stations
      June 2014 > June 2016

Percent    Stations   Change
  4.3%         3        +0.3   (up subtotal: 31.4%)
  7.1%         5        +0.2
 20.0%        14        +0.1
 35.7%        25         0
 31.4%        22        -0.1   (down subtotal: 32.9%)
  1.4%         1        -0.2

 

 

Like the fine print in a TV car commercial, buried on a Nielsen graphic last year is the statement that only 14 stations with a 0.1 AQH Rating or higher were used in the Nielsen calculations behind the 39% increase. That is why I only used stations without a 0.0 in both survey periods for the above Washington/Baltimore calculation. In other words, Hubbard’s WFED-AM, CBS’s WJZ-AM, Radio One’s WOL-AM and WWIN-AM, iHeartMedia’s WCAO-AM and WFMD-AM, Salem’s WAVA-FM and Pacifica’s WPFW-FM, among others, are deemed irrelevant per the “massaged” Nielsen 14-station specs.

 

Of the 19 stations in the Nielsen non-random “test”, Nielsen dropped the 5 with a 0.0 AQH Rating to “massage” the data and achieve the 39% increase it then reported. If Nielsen had used ALL stations (or even the 19 in their total test), the reported 39% increase would have been MUCH lower and closer to the 115 Index (15% increase) that Nielsen reported using enhanced CBET.  Had I included ALL stations that made the Rating Report in the Washington and Baltimore test markets, 75 Stations (63%) would have been flat, 22 (18%) would have shown a gain and 23 (19%) would have shown a loss – a 1% Net Loss overall.

 

And full disclosure: I only ran 1 Daypart (Monday-Sunday 6A-12M) and 1 Demo (Persons 25-54), though Nielsen supposedly ran 5 Demos and 6 Dayparts. It is possible that Persons 25-54 full week performed much worse than other demos/dayparts, but in all probability this KEY DEMO should be a good representation of the 5 unknown Demos I suspect Nielsen used (6+ or 12+, P18-34, P18-49, P25-54 and P35-64) and the 6 Dayparts (Mon-Sun 6A-12M, Mon-Fri 6A-10A, Mon-Fri 10A-3P, Mon-Fri 3P-7P, S-S 6A-12M and M-F 7P-12M or M-F 6A-7P) in their test.

 

We can point to declining Market PUR levels since the early 90s. We can point to the fact that Nielsen’s OLD Edit Policies, combined with the CBET Enhancement devices, cause stations to receive an extra minute of listening credit at the start of a session when the respondent is NOT listening (as I disclosed last October). This continues to fuel a battle between the MRC and Nielsen. At some point this will be corrected and these inflated numbers will drop to reality.

 

But the bottom line is:

 

Is this what you expected when Nielsen made their 39% announcement last October, especially in Radio’s most important demo?

 

 

comments@kabrich.com

 

 


 

 

How Voltair Created Listening Where None Actually Existed

10/26/2015

 

Over the last week, all the Pro-Voltair Pundits have been so giddy about the “Good Times to Come” with Nielsen’s Enhanced CBET rollout.  We have been told how to calculate the forthcoming Average Revenue Change, how to handle Advertiser Objections to higher rates, and how this will impact our bottom line. We are told Radio should look at all the formats that have disappeared, even if they were long on the decline, because the “Enhanced CBET code” will bring back the dead!

“Happy Days are Here Again” is the message they are sending. “Your Boat Has Come In!”

The challenge is, despite Nielsen’s claims of Transparency, none of these articles explain how those additional ratings and dollars are supposed to materialize. Quite simply, it’s amazing anyone can forecast anything without the complete story.

Truth be told, most stations will see no increase in AQH Rating.

In fact, before it’s over, very few stations will ever see a 15% increase (think 40% of the stations not using a Voltair and using the Original CBET). If you are a Voltair user, Christmas has come and gone. Santa will not be making a second delivery this year. In fact, the majority of stations currently using a Voltair will see some decline when all is said and done.

Huh?

That’s not what Nielsen said on the Webinar!

That’s not what I’ve read in the trades!

How can I make such an outrageous statement?

Actually, Nielsen never said your station would go up on the Webinar or in their deck either. Seems the pundits are reading into the study what they want to read into it. Nielsen simply reported testing of the Original CBET compared to the Enhanced CBET under current policies. Of course, these pundits assumed what Nielsen wanted them to.

More on that later.

We need to understand how the increases occurred to determine how everything will shake out.

Of course, that is easier said than done. Despite Nielsen’s claim of “transparency”, no one from Nielsen has revealed publicly why the numbers increased.

When one has to figure out the truth from Nielsen’s “transparency”, one needs to listen closely to what they say – and more importantly, to what they do NOT say.

Despite claims to the contrary, you will notice that Nielsen never stated outright that the original PPM never missed any radio listening. One could assume that omission was meant to avoid lawsuits.

Likewise, Voltair claimed to not give a station credit for listening that was not real.

So where did that 15% increase come from?

People see increases and they “assume” it must have been missed listening. But could Voltair (and Enhanced CBET) have actually “created” listening where there was none?

With Nielsen’s “transparency”, we would never know – so we have to do what any good researcher does: look at other test markets for answers – namely Canada.

Numeris was unable to conduct a side by side test with Voltair in the 90 days after the box was pulled in Canada. Nielsen was unable to test Voltair side by side as well. As I pointed out early on, it was impossible to do side by side testing. It’s like being Pregnant and Not-Pregnant at the same time. Impossible. That is why you never read about the results of Numeris testing after the headlines “Canada to test Voltair” back in June.

Numeris did test Enhanced CBET in Toronto on 13 stations for 1 week.

On Thursday, October 15th, Numeris quietly revealed the results of their test in Toronto to their Board.

As Numeris is owned by Broadcasters, the information is truly more “transparent”, yet in this case, Numeris is not releasing the exact data change either, but for different reasons.

The 4 week test in the USA was equivalent to a normal 1 month PPM period while the test in Canada was only for 1 week out of Numeris’s standard 13 week PPM survey.

Though Numeris saw “similar” increases to Nielsen in raw data, they will not release the actual number nor will they release the actual change. You may recall Nielsen reported an average 15% increase in AQH Rating for 39% of the stations using the Original CBET on layer 1 and the Enhanced CBET on layer 2.

The reason Numeris did not want to release that data is clear. The average is meaningless in this case and really does not tell you anything of note.

I can understand that. Think of it this way. Bill Gates and 9 Radio Broadcasters are in a room. The average Net Worth of people in the room is around $8.5 Billion Dollars. The Median Net Worth is probably only 6 or 7 figures at best, certainly not 11 figures like Bill Gates. In this room, Median is a much better measure than an average.
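The mean-versus-median point is easy to demonstrate; the net-worth figures below are made up purely for illustration:

```python
from statistics import mean, median

# One Bill Gates-sized outlier and nine ordinary net worths (illustrative values).
room = [85_000_000_000] + [250_000] * 9
print(f"mean:   ${mean(room):,.0f}")    # about $8.5 billion
print(f"median: ${median(room):,.0f}")  # $250,000 - far more representative
```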

Makes you wonder about Nielsen’s 15% and 39% number, doesn’t it?

Mark Ramsey brought this up about 2 weeks ago as well but it appears some people failed to read his article.

BTW, in 2010 when Carol Hanley was trying to change Arbitron’s cloud of secrecy, I was actually shown what I can only describe as the “mechanical” from actual PPM data. Though I could only look at it for about 2 minutes, while scrolling through, the one thing that stood out was that I saw NO 9 minute listening occasions, yet we are always told that is the average time listening per occasion.

Of course, Carol’s attempt to change Corporate Culture and lift the “veil of secrecy” ended when Arbitron was bought by Nielsen.

As seen over and over, in the vacuum of truth, people will always come up with their version of reality.

Both Nielsen and Numeris have a “buffering/lead-in” credit of 1 minute prior to the first detected code providing that no other “station code” was received in that prior minute. In other words, if the first code is detected in the :02 minute and no station code is detected in the :01 minute, the station is given credit for both the :01 minute in addition to the :02 minute.

Just 3 codes over a 5 minute period can produce very different results via Numeris and Nielsen due to the different edit rules used. In the USA, just 3 codes have the potential to credit a station one 15 Minute AQH credit from Nielsen and 0 Minutes AMA Credit from Numeris. (This is an extreme, but it shows the differences).

We still need to understand exactly how and why both Nielsen and Numeris show additional listening with the Enhanced CBET codes (as well as Voltair in the USA).

As you can probably imagine, I was shocked given everything we knew prior to this month. I have also explained to several people why this is unfortunately the WORST CASE scenario for Broadcasters! I have had numerous conversations with many smart people, including some who worked on PPM Development from the beginning, and even with their detailed behind-the-scenes knowledge, they did not see how it was possible to miss mass amounts of listening either.

The stories of a talk program going up 90% on every station that used a Voltair didn't hold up to scrutiny. Nor did we see a .6 suddenly turn into a 1.2 or 1.4. It's a 15% average (whatever that means). But regardless, missing ANY listening is too much missed – well, that's provided that all of that 15% really IS listening.

Unlike what Nielsen has provided to Broadcasters, Numeris has provided their Board thoughts to explain this.

The Edit rules were originally set up for certain detection times, code quality, and code count and when those variables change the edit rules apply differently. As noted above, both systems buffer up to 60 seconds (an additional minute) before the first code is received.

Numeris actually did some side by side PPM testing with their TV Meter Panel (using the old Nielsen Mark II Equipment) before deciding to implement PPM measurement in TV and Radio (both independent decisions). With a 30 second buffer time for TV, the results were virtually identical.

While Arbitron and Nielsen were developing and testing PPM together in Delaware and Philadelphia around 13 years ago (so much for Nielsen throwing Arbitron under the bus, as one Pundit gleefully claims), a lag time was observed between first tuning to a station and the time the first good code was received. As a result of this testing, the 60 second buffer before the first code detection of a session was implemented for Radio, with its harsher listening environments than the family TV room. This compensated for the lag time both Companies detected during testing.

Both Nielsen and Numeris conducted their tests last month with the existing Edit Rules. As a result of the tests, rating services (and the MRC in the USA) need to review everything to see what needs adjusting.

Using the previous example of a code detection in the :02 minute and the lead-in credit given in the :01 minute: with the Enhanced CBET (or the Original CBET with a Voltair) detecting faster, in the :01 minute instead of the :02 minute, the station receives credit in the :00 minute, which DID NOT have listening to the station. Thus, the lead-in minute is not reflecting reality.
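A toy sketch of that crediting (Python). This models only the single lead-in rule described above; the real Nielsen/Numeris edit rules are proprietary and far more involved:

```python
def credited_minutes(code_minutes, other_station_minutes=frozenset()):
    """Credit every minute with a detected code, plus one lead-in minute
    before the first code if no other station's code occupied it."""
    if not code_minutes:
        return set()
    credited = set(code_minutes)
    first = min(code_minutes)
    if first > 0 and (first - 1) not in other_station_minutes:
        credited.add(first - 1)   # the buffered lead-in minute
    return credited

# Original CBET: first code decoded in minute :02 -> minutes :01-:02 credited.
print(sorted(credited_minutes({2, 3})))   # [1, 2, 3]
# Enhanced CBET / Voltair: same tuning, code decoded a minute sooner ->
# minute :00 gets lead-in credit even though no listening occurred there.
print(sorted(credited_minutes({1, 3})))   # [0, 1, 3]
```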

As Numeris President Jim MacLeod told me around the first of this month, “There appears to be no question the PPM captures the first code faster with the Enhanced CBET.  So, if one then adds a minute, in many cases minutes are being added where there was actually no listening.  In other words, the first credit is seen a minute sooner with the enhanced code (remember both tests were side by side, same panelist) compared to the original CBET code. While Numeris does want to do more testing, it appears the added minute may not be reflecting reality.”

Furthermore, Jim believes that “the edit rule giving a 60 second buffer in the beginning needs to be reviewed and possibly eliminated – as it appears to not be necessary any longer. There appears to be no question the PPM captures the first code faster with the Enhanced CBET.” (note: This would include Voltair)

“Looking at the number of tuning sessions/occasions in a day—a minute on each aggregates to a pretty good number!”

Nielsen is aware of this as a result of the Enhanced CBET test. In fact, the MRC has actually conducted tests to see exactly how long first detection takes in a variety of environments using the original CBET, the original CBET with Voltair, and using the Enhanced CBET as a result of the Voltair tests they have been conducting at the request of Broadcasters.

Again, the buffering minute was added because of the delay in picking up the first code with the original CBET. The faster pickup was confirmed in the SIDE BY SIDE testing on the same devices, where the Enhanced CBET was detected faster on Layer 2 than the Original CBET on Layer 1. As a result, adding an additional minute in front of the first detected code is no longer correct in 2016.

Think of it this way. One should keep 2 seconds distance between cars travelling on a road, but when the road is slick from rain or snow, that time needs to be changed accordingly.

Different Conditions. Different Rules.

It’s essentially a “tax loophole”. A loophole that is about to be corrected.

Bottom line: both Voltair and Enhanced CBET (under the current Original CBET Edit Policy) give stations a double lead-in credit – an additional minute per session.

When this is corrected, all Stations using Voltair will see the credit disappear – as will all stations using the Enhanced CBET. In other words, the “average” 15% increase, whatever that really was, will not be seen by stations as the underlying Edit Rules will be changed to reflect the new reality.

Voltair’s claim that it does not create credit for a station when respondents were not listening is, well, incorrect.

Based on what has been observed, it is questionable how much Telos knew about the edit rules; I personally do not believe they set out specifically to exploit this. I personally believe they just “stumbled” into it without knowing why it worked.

Quite frankly, a number of inventions over the years came about this way.

This explains where a good portion of the extra 15% AQH Rating is coming from, though it is not the only way listening was increased, which I will get into next time.

It is hard to call this “extra credit” cheating as the rules in place were followed. Knowingly or unknowingly, advantage was taken of the edit rules which were designed to reflect a different reality, the characteristics of the original CBET code. A Tax Loophole.

The better question is, what would the increase be if the 60 second buffer is eliminated? The numbers would be down some significant level from that 15% “average” increase.

And therein lies the problem.

As the “transparent” Nielsen has not told us what percent of the “average” 15% increase comes from the 60 second lead-in credit, it’s impossible to tell how much of it will remain after the edit rules are changed.

So what is happening now?

Nielsen plans to start the Enhanced CBET rollout next Monday, November 2nd to stations outside of Baltimore-Washington initial market.

Or will they?

Nielsen (as all MRC tested services) cannot change their services without first conducting an audit. That’s an agreement everyone signs with the MRC.

And Audits take time.

Washington-Baltimore data is being rushed into an audit. Last I heard, it is still not completed.

And the Enhanced CBET Firmware is scheduled to show up next Monday (11/2/2015) for a number of markets – less than 7 days away.

Will the audit be completed in time for Nielsen to roll out the November upgrades on the previously released schedule?

Nielsen’s Director of Communication, Diane Laura, contacted me Monday morning 10/19/2015 in response to a blog posting I released to Clients. The posting came down hard on Nielsen for their lack of “transparency”, despite what Nielsen Audio’s Matt O’Grady claims in the trades. She said she would check and get answers to my questions that had thus far gone unanswered, so I also asked her about what I was hearing – that the rollouts could be delayed.

She promised she would check and get back to me later that day... then later that day an email said “shortly….” I have yet to hear back.

Does this mean that her definition of “shortly” is different from other humans? Or does this mean that I scared her to death because I actually knew this…and she’s ducking for cover?

Nielsen and the MRC are both aware that the current edit rules need re-evaluation to ensure they reflect reality with the Enhanced CBET.

Is Nielsen just going to roll over and give the extra extra-credit minute to Broadcasters just to placate them, even though it is no longer accurate?

The MRC is probably NOT going to let that fly.

And what about that agreement that ALL Services sign with the MRC stating they will not roll out any new services or changes without first completing an Audit?

One also wonders how the MRC will keep the accreditation on these markets when everything has been changed on the fly? Or perhaps Nielsen believes that no one cares as so many PPM Markets remain without MRC Accreditation (or have lost Accreditation with hardly any outcry from Broadcasters).

Will the MRC strip ALL PPM Markets of accreditation until new audits are completed under the Enhanced CBET codes?

And why is Nielsen pushing this through so quickly now compared to Numeris?

Is it

1) Because of the Controversy over the past 10 months

or

2)  They know that PUMM levels bottom in December (just when the Enhanced CBET goes live in all markets) and then begins to steadily rise from January to May, which may make the less educated believe they are going up because of the Enhanced CBET instead of with the rising tide?

Number 2 would also help to disguise credit being lost as the edit rules are changed.

I also hear that the MRC has a not-so-small concern about the potential impact of using both Enhanced CBET and Voltair together, based on their testing of Voltair units. Does this provide crediting in situations where content may not be audible to a person?

Voltair and Enhanced CBET are not both needed to “enhance” the watermark.

BTW, if you have thought this through, it also means Nielsen could tell who had a Voltair online by simply sampling their audio and seeing how long it takes for the first code to be detected. Not sure if Nielsen has thought of this, but it’s a rather easy thing to figure out with the Original CBET and a Voltair….do it 5-10 times and average the time needed for the first detection.
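That fingerprinting idea amounts to a few lines. Everything here is hypothetical – there is no public API for PPM decode timing, and the sample values and cutoff are made up:

```python
from statistics import mean

def looks_like_voltair(first_detection_seconds, cutoff=60.0):
    """Hypothetical: flag a station whose average time-to-first-code
    is markedly faster than typical original-CBET detection."""
    return mean(first_detection_seconds) < cutoff

samples = [31.0, 28.5, 35.2, 30.1, 29.4]   # 5 made-up off-air timings, in seconds
print(looks_like_voltair(samples))          # True, under this made-up cutoff
```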

Likewise, Nielsen could simply record 5-10 sixty second samples off the air of every station during the week prior to the Enhanced CBET rollout and easily determine every station using a Voltair prior to the Upgrade.

Personally, I believe that Voltair users should be able to use their Voltairs with the original CBET if they so desire (even though the Index was slightly lower, but not significantly lower). Or they could take the Voltair out of the audio chain and use the Enhanced CBET instead. But I do not see this idea flying with the MRC.

I guess many would argue that is still not equal footing.

Could this delay the rollout?

We are about to find out.

Many questions remain to be answered.

Let’s remember, the end goal is to get ACCURATE listening estimates, so the Currency is to be believed by the Ad Community, not bonus credit for listening that never really happened.

With that said, I wouldn’t go budgeting 2016 based on a Windfall.

At the very least, all these pundits have set up sunny scenarios which are not based in reality.

 

Follow Up:


From: Jim MacLeod <Numeris, Canada>
Date: Sun, Oct 25, 2015
To: Randy Kabrich

I find this very interesting.  We can only hope enough people read it and re-read the portions that are a bit more difficult to understand.  You did not mention it, but we are doing a 13 week side by side test starting November 30 with all Toronto commercial stations.  I am highly confident this test length will let us see the exact performance of enhanced CBET through the normal ebb and flow of radio usage, especially considering it will capture the Christmas period where the “all Christmas” formats can cause a lot of cross tuning.  It is known to our members this is being done.  We also indicated to members that we need to re-examine edit rules with enhanced CBET.

We’re also willing to do some Voltair testing both the enhancement side of it and the monitoring side they are now promoting.  Side by side testing would have to wait until the 13 week test is over, and we will have to do some testing to be sure it would not interfere with Layer 1 currency (likely OK, but be sure is the plan).  We’ll see where that goes.

Interesting to see the reaction to this!

 

Jim: Thanks for helping keep Broadcasters informed! 

 

 

 

The Worst Case Scenario

10/06/2015

 

So we have the numbers from Nielsen after their CBET Enhancement tests, with and without Voltairs in-line.

And quite simply, it’s the worst case scenario for Broadcasters.

Why?

Because I was incorrect about a Voltair effect?

Hardly. I’ve been wrong before. It won’t be the last time.

That was the reason I put all the numbers online, unlike all others. I said to look at the numbers yourself.

Most Broadcasters know that for AQH Share and Ratings to change, Cume and/or TSL must change. You must move one or the other (or both) to make an impact on the number.

Likewise, PUMM (PUR) Ratings can only be changed in 1 of 2 ways.

You can either

1)   Stop a decline

2)   Cover up a decline.

I chose the positive route, based on reasons I made clear. In fact I tried to avoid talk about the annual declines until Kurt Hanson posted his article and it had to be addressed.

I had hopes that Radio had finally set a base in and stopped the 5% annual decline.

There were reasons to believe (and hope) this was true.

1)   Increases were larger in smaller markets (Markets 25+) where Voltair use was limited, compared to Majors (1-25).

2)   Generally speaking, the Increases came from non-MRC Accredited Markets (clearly more sample issues) while MRC Accredited Markets (better sample distribution) did not show an increase. There was a reason I denoted MRC Markets in the data, but no one found that hidden Easter Egg in the data.

3)   Declines continued P18-34, P18-49 and P25-54 Year to Year, which was a really bad sign if the bottom had not been put in. Year over Year increases were P35-64, and the major increase was with Persons 35+, causing P6+ and P12+ to go up. Bloggers who did not drill down into the data and only quoted P6+ missed this important point. Thus, with this data, if Enhanced CBET was inflating numbers, one could assume the Voltairs were influencing only the very oldest demos.

Now Nielsen has told us in fact P35+ is where the Audience increased with the Enhanced CBET testing. They state this is logical as Radio has more audience the older you go.

Oh well. That’s what I get for going with the positive thinking outcome.

However, we did learn all the fantastical stories of 90% increases on Certain Programming with CBET Enhancement are bunk.

Likewise the notion “If the .6 AQH Ratings all the sudden are 1.2 to 1.4, then we have something..” didn’t pan out either.

So what do we know now as absolute facts?

Over the past several years, PUMM (PUR) has fallen roughly an average of 5% a year.

Nielsen Audio PPM Enhanced Encoding Test shows a rough 15% average increase with Voltair or Enhanced CBET Encoding

From June 2014 > June 2015, Persons 25-54 PUMM (PUR) was essentially flat

According to Telos Statements, over 600 Voltairs had been sold at end of June 2015 PPM period. 600+ Voltairs = somewhat over 60% of the non-duplicated stations that matter in 48 PPM Markets.

So knowing all those numbers, and it being very obvious that no base has been put in to Radio’s decline, ask your favorite Researcher or financial person to determine what X equals, if X = the % PUMM (PUR) would have fallen from June 2014 > June 2015 without CBET Enhancement.
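One rough reading of that question, using only the figures quoted above (flat observed PUR, ~60% Voltair penetration, ~15% average inflation) – a back-of-envelope sketch under loud assumptions, not the author's calculation:

```python
# If ~60% of listening was inflated ~15% and the observed year-over-year
# change was flat, the underlying (un-enhanced) change X satisfies:
#   (1 + X) * (1 + 0.60 * 0.15) = 1
inflation = 0.60 * 0.15
X = 1 / (1 + inflation) - 1
print(f"underlying PUR change: {X:.1%}")   # about -8.3%
```

In other words, a flat headline number could be masking a steeper-than-usual underlying decline, which is the question the paragraph above poses.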

And that is why this is the very worst case scenario for the Radio Industry.

 

 

Posts earlier to October 2015

 

PPM and MLB Play by Play