In the first of a new series of research and analytics columns, Vivid Interface MD and partner in new research group SEER Geoff Dixon asks: How likely are you to recommend Net Promoter Scores?
Net Promoter Scores are used by most event organisers who conduct visitor and exhibitor satisfaction research, and we now tend to include one in every survey almost automatically.
Net Promoter ratings are a measure of customer loyalty generally known as NPS, a registered trademark developed by Fred Reichheld. They are based on the concept that an event brand should seek to have more promoters than detractors. To quantify this, the basic question asked in a survey is, ‘How likely is it that you would recommend [show name] to a friend or colleague?’
The answer scale is a 0-10 rating, where 10 is ‘extremely likely’ and 0 is ‘not at all likely’. Respondents who rate 9-10 are classed as promoters, those who rate 7-8 as passives and those who rate 0-6 as detractors. The NPS rating comes from subtracting the percentage of detractors from the percentage of promoters.
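The calculation above can be sketched in a few lines of Python. This is an illustrative helper, not part of any survey tool; the function name `nps` and the sample ratings are our own.

```python
def nps(ratings):
    """Compute a Net Promoter Score from a list of 0-10 ratings.

    Promoters rate 9-10, detractors rate 0-6; passives (7-8) count
    only in the total. The score is % promoters minus % detractors.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Example: 5 promoters, 3 passives and 2 detractors out of 10 responses
# gives 50% - 20% = an NPS of +30.
print(nps([10, 9, 9, 10, 9, 7, 8, 8, 3, 6]))  # 30
```

Note that because passives are counted in the total but not in either percentage, two shows with very different rating distributions can share the same NPS.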
The score can therefore range from -100 (everyone a detractor) to +100 (everyone a promoter), although we have never experienced scores near these boundaries. Our experience of NPS ratings for consumer events has delivered a lowest rating of -12 NPS and a highest rating of +50 NPS.
However, using comments from the ‘detractors’ of our lowest-ever scoring show as an example, NPS ratings may not be what they seem. In our example, visitor numbers are stable and the value-for-money ratings, measure of overall satisfaction and likelihood of return are all close to or above the industry averages we see in the Vivid Performance Index (VPI rating).
Looking at the comments from the event’s detractors explaining their rating, we can see 26 per cent gave what we would call positive comments. For instance, visitors reported that they ‘enjoyed activities’, found it a ‘must-attend’ event that ‘covers all aspects’, and described it as ‘informative’, ‘a fun experience’ and a ‘good show’.
When we examine the visitor NPS ratings by segmenting the respondents into new, previous and loyalist visitors, we can see previous and loyalist visitors give the lowest ratings (Graph 1). For the same event, loyalist and previous visitors are the visitor groups most likely to return to the show next time (Graph 2).
Finally, we may be starting to arrive at the answer for at least some of the negativity in the NPS rating for the event when we look at the ratings by age (Graph 3).
For another show that has an NPS rating of +48, the visitor numbers are declining. Here again, we can see an age effect in the NPS ratings (Graph 4). What does all this mean and what should organisers be taking away from these examples?
The NPS rating must be placed into the context of your visitor profile and other research insights. It is critical that event organisers use NPS ratings alongside other performance metrics and, in particular, at a level of granularity that is meaningful for the structure of their audience.
The NPS rating can be a great statistic when it goes in the right direction, but can be worrying when it doesn’t. The real benefit comes from understanding what is going on – and that never comes back to a single number.
This was first published in the May edition of Exhibition News. Any comments? Email firstname.lastname@example.org