With very special thanks to Becky Loftus at the RSC for sharing her results and insight for this case study.
For the Quality Metrics National Test, the RSC chose three distinctly different productions on which to trial the Quality Metrics:
Shakespeare’s Henry V, performed at the Royal Shakespeare Theatre, Stratford-upon-Avon, and the Barbican Theatre, London
A piece of new writing by Helen Edmundson, Queen Anne, performed at the Swan Theatre, Stratford-upon-Avon
And a revival of a family production written by Ella Hickson, Wendy & Peter Pan, performed at the Royal Shakespeare Theatre, Stratford-upon-Avon
Each evaluation collected responses from professional peers, self-assessors and audience members:
The Public’s Response
The most interesting finding from the public responses was that while some dimensions scored consistently high (a high mean score and a low standard deviation), others had a comparatively lower mean but a larger standard deviation. The latter was the case for Distinctiveness, Challenge, Relevance, and Local Impact, meaning that for these dimensions the data indicate the audience was split in its reaction to the work.
What is particularly interesting is that this pattern appeared across all three evaluations, despite the artistic differences of the work presented. It should be noted that a ‘split’ audience does not imply low scores overall: the vast majority of public respondents scored all three pieces of work very highly on Concept, Presentation, Captivation, Enthusiasm, and Rigour. The split arises because some audience members also scored the work very highly on Distinctiveness, Challenge, Relevance, and Local Impact, whereas others scored these dimensions markedly lower, and significantly so in the following cases:
Henry V – Distinctiveness, Relevance
Queen Anne – Distinctiveness, Relevance, Local Impact
Wendy & Peter Pan – Relevance
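The mean/standard-deviation pattern described above can be illustrated with a short sketch. The scores below are invented purely for illustration (they are not RSC data): one dimension where the audience broadly agrees, and one where it is split into high and low scorers.

```python
import statistics

# Hypothetical 0-10 audience scores -- NOT real RSC data.
# A dimension where respondents broadly agree: high mean, low spread.
captivation = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]
# A dimension where the audience is split: lower mean, high spread.
distinctiveness = [9, 3, 10, 2, 9, 4, 10, 3, 9, 2]

for name, scores in [("Captivation", captivation),
                     ("Distinctiveness", distinctiveness)]:
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample standard deviation
    print(f"{name}: mean={mean:.1f}, sd={sd:.1f}")
```

The split audience produces the signature described in the text: the mean alone looks merely "lower", but the much larger standard deviation reveals two distinct groups of respondents rather than general lukewarm agreement.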
Over 20% of the respondents for Henry V described themselves as ‘Lifetime Loyalists’ who regularly visit the RSC; for Queen Anne this group formed 46% of respondents, in contrast to only 11% of those responding to Wendy & Peter Pan. This helps explain the significant audience split on Distinctiveness (‘It was different from things I’ve experienced before’) for Henry V and Queen Anne: those who have visited the RSC many times are likely to have experienced similar work before, and so score Distinctiveness lower than less frequent visitors, while remaining very enthusiastic (as measured through the Enthusiasm dimension, ‘I would come to something like this again’) about continuing to attend and support the work they love.
Managing the Peer Assessment process was perhaps one of the more challenging elements of the methodology for the following reasons:
1. The distance some peer assessors must travel is a barrier to recruitment; for example, it was easier to find peer assessors to attend the London production than the productions in Stratford-upon-Avon.
2. The small sample size means that a single strongly divergent opinion can change the shape of the results dramatically.
3. The process comes with lots of administration (e.g. arranging tickets, changes in attendance dates, drop-outs and chasing survey completion). The time and resource impact of this should not be underestimated.
Despite these challenges, peer assessment provides an important alternative perspective on the work and enables deeper interpretation of the results, particularly when triangulated with self and public responses.