CASE STUDY: Derby Museums

With very special thanks to Derby Museums for sharing their results and insight for this case study.

Derby Museums completed three evaluations using Culture Counts over the course of the trial period, each focusing on a very different event.

The results were shared at a full staff briefing, where a cross-section of the whole team was interested to hear them.  The more qualitative style of data presented via the quality metrics enabled the team to take a more rounded perspective, as opposed to looking only at hard figures, which don’t necessarily do justice to the work or its outcomes.  Seeing how satisfied their customers were provided a real morale boost.

The results revealed some surprises, showing high levels of satisfaction where the team had perhaps not felt things had been as successful or relevant.  Seeing the nuances between the different events and the scores received can be revealing, as elements of an event may have particularly resonated with attendees in ways the organisation had not previously recognised with such clarity.  This highlighted the usefulness of self-assessors completing both the prior and post surveys.  Through completing the self-assessments, the team also realised that their intentions could be more focused; having clearer intentions from the outset allows the focus of the work to really come through.  By mapping expectations against experiences, a richer frame for interpretation is formed.

By combining results from the Quality Metrics survey with demographics from other survey providers, Derby Museums have been able to further develop their funding applications.  Many funding bodies have specific goals to fulfil or demographics to reach, such as the elderly, rural communities or ethnic minorities, and their funds must be allocated appropriately.  Incorporating the results from the Quality Metrics survey, Derby Museums have presented a positive case in funding applications and funding reports.  This has been particularly useful when asked about the audience’s thoughts and experiences, as they were able to refer directly to the positive feedback gathered from attendees.

There is also the potential to collect more detailed demographics and to gather further data focusing on Generic Learning Outcomes.  This could be achieved using the system as it currently stands; however, adding further questions was not actively encouraged during the Quality Metrics National Test.   


CASE STUDY: English Touring Theatre

Does having a collaborative relationship with a host venue have an impact on dimension scores for touring companies?

With very special thanks to English Touring Theatre for sharing their results and insight for this case study.

As a touring company, English Touring Theatre (ETT) had a different experience from many of the other users in the Quality Metrics National Test.  They used the metrics and the Culture Counts platform to evaluate one specific work, The Herbal Bed, in different locations across the country.  Evaluating one piece of work in different locations enabled the organisation to compare the reception of the work across locations.  And it wasn’t just the audience’s experience: ETT also measured the experience of peer and self assessors in each location.

As the staff were not able to travel to each venue, it was decided that the most practical way to collect public responses would be the ‘Online’ method, whereby an email is sent to attendees after the performance asking for feedback.  For this to be successful, ETT needed to rely on the cooperation of the host venues.  Since ETT offered to share their results with each host venue, this was generally not a problem.  However, it proved a challenge in one case, where a venue chose not to assist in the collection of public responses.  This resulted in no public responses being gathered for that particular venue, which was naturally frustrating for ETT.  That said, they were still able to gather peer and self responses, which contributed to their overall evaluation.

When comparing the public responses across the three locations, the shape of the graph remains constant.  However, responses in Northampton were consistently higher than in Oxford and Liverpool, with Oxford generally scoring lowest.  Comparing Oxford and Northampton, the dimension with the largest difference in scores was Local Impact, with a 9% gap between the two locations.  Whilst this may not seem large, against an average difference of 3.9% its significance can be appreciated.  Something worth highlighting here is that The Herbal Bed was co-produced with the host venue in Northampton.  This collaboration may have resonated with the audience in a particular way, causing them to feel a stronger attachment to the work than audiences at other locations and venues.  There could of course be other factors at play; this is where looking at demographics and other cultural activity in the various locations could also be of interest.
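
To make the comparison above concrete, here is a minimal sketch in Python of how the per-dimension gap between two locations can be set against the average gap. The dimension scores are hypothetical placeholders, not ETT’s actual results:

    # Hypothetical dimension scores (on a 0-1 scale); illustrative
    # placeholders only, not ETT's actual survey results.
    scores = {
        "Captivation":  {"Northampton": 0.90, "Oxford": 0.87},
        "Concept":      {"Northampton": 0.88, "Oxford": 0.85},
        "Local Impact": {"Northampton": 0.78, "Oxford": 0.69},
    }

    # Percentage-point gap per dimension between the two locations.
    gaps = {dim: by_loc["Northampton"] - by_loc["Oxford"]
            for dim, by_loc in scores.items()}

    average_gap = sum(gaps.values()) / len(gaps)
    widest = max(gaps, key=gaps.get)

    print(f"Average gap: {average_gap:.1%}")               # 5.0%
    print(f"Widest gap: {widest} at {gaps[widest]:.1%}")   # Local Impact at 9.0%

A single dimension whose gap sits well above the average gap, as Local Impact does here, is what singles it out as worth investigating.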

The chart below compares the scores across the locations; the percentage difference between the highest- and lowest-scoring locations for public respondents is marked above the bars:

Despite the smaller sample sizes of peer and self assessors, the results they present are interesting and highlight the importance of variety within the different respondent groups. 

The self assessors were from the creative team at ETT, and yet they scored the production very differently.  Is this because each self assessor focuses on a different element of the production and therefore takes a different approach to assessment?  Or because their individual backgrounds within the cultural sector led them to receive the production differently?  The different perspectives revealed in the self assessment highlight the value of using multiple assessors.  Results such as these broaden the discussion surrounding creative intention.

The peer assessment presents a similar reception to that of the public assessment, with the Northampton production generally scoring highest.  Responses from peers can be influenced by how well they know the organisation’s work and by their previous experience as reviewers.  The results largely mirror what could be expected from the broader trends emerging from the Quality Metrics National Test: for example, the difference between the public and peer scores for the Distinctiveness dimension is large, and the peer assessors tend to score lower than the self assessors.

On reflection, the significance of the co-production between the host venue in Northampton and English Touring Theatre should not be underestimated: the scores suggest that the experience of that production was felt more positively by all the respondent groups.

CASE STUDY: Ludus Dance

How the identity of the work can shape metric choice

With very special thanks to Ludus Dance for sharing their data and insight for this case study.

Ludus Dance signed up to be part of the Quality Metrics National Test, where the focus was on testing the quality metrics.  That said, Ludus chose to experiment further with the metrics, selecting them according to respondent type.  They also added a further respondent group by including the participants and testing the participatory metrics within one particular evaluation, which took place at their event The Lancashire Youth Dance Festival.  The Festival combined two days of dance workshops and classes, followed by a showcase open to the general public as well as friends and family of the participants.

Choosing their own metrics, in what could be considered a more organic approach to metric selection, reveals a further layer of the organisation’s intentions and the focus of the work.  Interestingly, although different metrics were used across the respondent groups (peer, self, public and participants), we can see similarities in the themes.

The selection of the metrics below indicates the importance of inclusivity and of connection between people from different backgrounds, both amongst the audience and the participants.  Interestingly, Growth is the only metric Ludus chose for all three of the public, peer and self respondent groups.

Audience: Growth: It could appeal to new audiences

Self: Collaboration: It connected other artists

Self: Growth: It appeals to a large community of interest

Participants: New people: I got to know people who are different to me

Peer: Growth: It could appeal to new audiences

Peer: Diversity: It could engage people from different backgrounds

Looking at the survey for the participants, Ludus’ choice of participatory metrics presents a well-rounded selection.  Mapped against the participatory metrics wheel, which broadly groups the participatory metrics into clusters, the emphasis for this piece of work appears to be on the personal development of the participants.  The metrics chosen were generally also used frequently by other organisations; however, the themes of inclusivity and connection noted above are quite specific to this piece of work, in contrast with other works evaluated.  In addition, the selection of the New People metric indicates that the opportunity to meet new people was again quite specific to this piece of work, whereas across the other test events run by participatory organisations in the trial this metric was much less likely to be chosen.  As with other work evaluated with participatory metrics, the metrics selected demonstrate the unique position this participatory activity holds within the broad range of participatory work produced by organisations.

As well as the core quality metrics, Ludus also used a metric from the Place category in their audience survey, Accessibility: I find it easy to get to and from here.  Whilst this is a place metric, it also relates to the importance of inclusivity shown throughout the chosen metrics.

Looking at the custom questions, the word ‘highlight’ features in the questions for both the self assessors and the participants, revealing the importance of the experience of those involved in the creative and participating teams.  Ludus also included open-text questions about the experience for the peer and public assessors.

The importance of inclusivity and of the opportunity to connect with people from different backgrounds, amongst both the participants and the audience, is clear from the metric choice.  The metric selection enables one to see not only how Ludus’ Festival sits amongst other participatory works, but also what makes it unique; this understanding contributes to the overall picture of participatory work.  Looking at the selected metrics collectively, across all respondent groups, gives a sense of the Festival’s specific intentions, with Ludus’ focus on inclusivity and connection strongly featured alongside an emphasis on the personal experience.  The unique nature of participatory events presents a vast variety of objectives, creating a rich picture of participatory work.

CASE STUDY: Whitworth and Manchester Museum

Across the Whitworth and Manchester Museum we have implemented Culture Counts metric surveys to gather information on artistic quality, along with the perspectives of our peers and visitors, in relation to our exhibitions. During the pilot phase of the project, we created and completed 8 surveys covering new exhibitions, events and live performances.

Choosing from the three options for conducting public surveys (online, display or interview), we decided to utilise the strength of our Visitor team to personally interview members of the public. Before starting the surveys, we took the time to train the Visitor team thoroughly, giving them background on the project, the context of the questions and guidance on the importance of approaching a diverse range of visitors. We also offered opportunities to play with offline versions, which quickly built their confidence in navigating the system.

In starting the surveys, what we found most interesting was the feedback we received from the team. They commented on how the surveys sparked further conversations about arts and culture in and around Manchester. Across the various surveys conducted, the team had no problem collecting the minimum number of responses needed for an accurate reflection in the data, and in one instance we collected over 300 visitor responses for a single exhibition.

This is a significant change in attitudes towards survey-taking. Previous attempts to collect information through other survey formats were met with avoidance or hesitation. The metric system is quick, easy and very user-friendly. Thanks to the uncomplicated interface and intelligent questions, the public not only participate but also see the survey as an opportunity to engage in further conversation.

Since the beginning of the project, we have been busy collecting quality data and analysing our findings. Now, moving into a reflective stage of the project, we are triangulating the data between self, public and peer responses to identify how our perspective compares with those of our visitors and colleagues. We can already see areas in the reports where the datasets differ, and we are beginning to use this information to improve the visitor experience for future exhibitions.

By Chad, The Whitworth Gallery & Manchester Museum

CASE STUDY: ROYAL LIVERPOOL PHILHARMONIC

With very special thanks to Beth Wells at the Royal Liverpool Philharmonic for sharing her results and insight for this case study.

Why did you sign up?

We signed up to the Quality Metrics National Test to open new conversations around quality aspirations within the organisation, and also to test the Culture Counts platform itself. The process was led by the marketing team, who co-ordinated peer and audience responses; the artistic assessment came from artistic planning, with input from external contractors or consultants where applicable to the performance.

What did you choose to evaluate?

We surveyed three very different events, all Royal Liverpool Philharmonic Orchestra concerts, in order to test the Quality Metrics:

·       Petrenko’s Shostakovich – this was very much a “standard” performance for us; the Orchestra are highly regarded for performing Russian music, especially with our Chief Conductor, Vasily Petrenko.

·       Santa’s Sleigh on Hope Street – our annual festive Orchestral Family performance, aimed predominantly at 4-10 year olds and their families.

·       Sixties’ Valentine – the Orchestra performing a selection of love songs from the 60s, by artists such as Tom Jones, Cilla Black and the Beach Boys, with West End Singers.

What was the most challenging element of the process?

In many ways, collecting audience responses was the easiest part: we run a box office system and send a post-performance email for everything as a matter of course, so we simply included a link to the survey in that.

In hindsight, I would think twice about the time of year for the performances, as finding peer assessors for a daytime performance on the last weekend before Christmas, and for the Saturday night closest to Valentine’s Day, was challenging to say the least!

We struggled with some of the wording of the metrics for Petrenko’s Shostakovich. The Orchestra celebrated its 175th anniversary last year and we consider ourselves quite central to life in Liverpool, so when it came to the question around “it was important that it’s happening here”, I think both internally and externally there was a challenge in distinguishing between whether it is important that the Orchestra resides in Liverpool and whether that particular performance was artistically important.

What was your most interesting finding?

I think we got the most use out of the concerts where we had spent a lot of time internally working on presentation. Recently there has been a huge amount of internal discussion about how we can improve the presentation of concerts that we know particularly attract new audiences, such as family performances and those with a more crossover “pops” appeal. For these we have invested more resources in wrap-around activity, additional lighting and more than just the Orchestra on the stage, so this was a good opportunity to have a 360-degree evaluation of them and include some external input. It generated new conversations, since we were able to marry together audience, peer and internal responses and look at them all at the same time.

For Sixties’ Valentine there had been ticket offers available, so we were aware that this performance had a higher-than-average number of first-time attenders. This was borne out in the results, where the score for Distinctiveness was higher than we would have anticipated, as it was a new experience for many who were there.

Additionally, we can be quite hard on ourselves internally, always feeling as though we fall short. Whilst it’s good to be ambitious, it was great for our artistic team to have some external peers come along to evaluate what we are doing and give their impressions.

What will you do differently next time?

One of our challenges is that the majority of our performances are one-nighters, so when it comes to inviting peer assessors we are not able to be flexible and offer a selection of dates. On the other side of the coin, it was nice to have the opportunity to invite peers to attend, as we do not have press nights or review nights, so those opportunities can otherwise be limited.

We carry out a lot of research as a matter of course, with regular audience surveys; however, we would definitely use this again for the more special “one-off” type events.

CASE STUDY: OLDHAM COLISEUM

With very special thanks to David Martin at Oldham Coliseum for sharing his results and insight for this case study.

Oldham Coliseum has been involved with the Quality Metrics from the outset in the UK: along with a group of industry peers, they reached a shared understanding of what artistic quality is, and much deliberation went into refining a set of metrics that could be applied to a number of relevant contexts and then tested.

For the second phase, the Quality Metrics National Test, two evaluations have been run so far, with a third pending. The first, The Pitmen Painters, written by Lee Hall and directed by Kevin Shaw, is a piece of work about the Ashington Group – Northumberland miners who employed Robert Lyon, Master of Painting at King’s College, Newcastle, to teach them an evening class in art appreciation. The production was programmed as a definitive statement about the importance of art for everyone and innate creativity, with political reference. The second production evaluated was Our Gracie, a very different piece of work with a strong music score: a newly commissioned play written by Philip Goulding celebrating the life and times of Gracie Fields, a local and extremely popular entertainer.

Public Response

Surveys were sent via email to audience members at the end of each run (159 respondents for The Pitmen Painters and 213 for Our Gracie). The Pitmen Painters was uniformly liked, with the lowest metric, Distinctiveness, averaging 70%; all other metrics scored in the high 80s and low 90s. Our Gracie was similarly very popular with audiences, scoring lower for Distinctiveness, Challenge and Relevance (averaging between 58% and 69%) and in the 80s for all other metrics. The dimension profiles (the balance of scores across the metrics) were therefore different for these two pieces of work.

Some Reflections

·       Average audience response may be skewed; for example, someone who likes the show may be more likely to say so than someone who felt indifferent. Using tablets for intercept interviewing front of house may result in a different dimension profile, as the sample won’t be as self-selecting.

·       Peer and self review broadly followed the same dimension profile as that of audience members. Peers were generally more critical, but the pattern of response indicates that the intentions of the work were understood by the peers, as inferred by comparing the peer responses with those of the self assessors.

·       There is a desire from both self and peer assessors to take a reflective approach when considering the work. Using the quality metrics in this test phase enables a very quick evaluation process; the onus for reflection rests with the respondent and is not demanded by the process.

Is it useful?

The methodology and metrics have legs, particularly as a way of gleaning information from audiences, provided the biases in motivation to provide feedback can be addressed. It probably has a place in a portfolio of approaches, developed alongside other methodologies for assessing quality. It is certainly something to stay interested in.

CASE STUDY: ARNOLFINI

Gaia Rosenberg Colorni, Executive assistant and impact manager at Arnolfini

Arnolfini is a contemporary arts centre based in Bristol. They hold a variety of events on and off site spanning multidisciplinary art forms, with a strong link to educational initiatives.

As part of the Quality Metrics National Test, they have so far evaluated 4 events using the quality metrics and 3 events using the participatory metrics.

In this video, Gaia discusses how they got children involved in evaluations and how they overcame peer-assessor challenges.

CASE STUDY: Royal Shakespeare Company

With very special thanks to Becky Loftus at the RSC for sharing her results and insight for this case study.

For the Quality Metrics National Test, the RSC chose three distinctly different productions to evaluate while trialling the Quality Metrics:
Shakespeare’s Henry V, performed at the Royal Shakespeare Theatre, Stratford-upon-Avon, and the Barbican Theatre, London
A piece of new writing by Helen Edmundson, Queen Anne, performed at the Swan Theatre, Stratford-upon-Avon
A revived family production written by Ella Hickson, Wendy & Peter Pan, performed at the Royal Shakespeare Theatre, Stratford-upon-Avon

Each evaluation collected responses from professional peers, self assessors and audience members:

The Public’s Response
The interesting finding extracted from the public responses was that where some dimensions were consistently high (a high mean score and a low standard deviation), others had a comparatively lower mean but a larger standard deviation. The latter was the case for Distinctiveness, Challenge, Relevance and Local Impact, meaning that for these dimensions the data indicate the audience was split in its reaction to the work.
What becomes particularly interesting is that this pattern was found across all three evaluations, in spite of the artistic differences of the work presented. It should be noted that a ‘split’ audience does not imply uncertainty: the vast majority of public respondents gave all three pieces of work very high scores for Concept, Presentation, Captivation, Enthusiasm and Rigour. The split comes from some audience members also scoring the work very highly on Distinctiveness, Challenge, Relevance and Local Impact, whereas others scored these lower by contrast, and significantly so in the following cases:

Henry V – Distinctiveness, Relevance
Queen Anne – Distinctiveness, Relevance, Local Impact
Wendy & Peter Pan - Relevance
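
To make the mean and standard deviation pattern above concrete, here is a minimal sketch in Python using made-up scores rather than the RSC’s survey data. A ‘consistently high’ dimension has a high mean and low spread; a ‘split’ dimension has a lower mean but a much larger spread:

    # Made-up illustrative scores, not the RSC's survey data.
    from statistics import mean, stdev

    # A consistently high dimension: high mean, low standard deviation.
    captivation = [0.92, 0.88, 0.90, 0.93, 0.89, 0.91]

    # A split dimension: some respondents score it very highly, others
    # much lower, giving a lower mean but a larger standard deviation.
    distinctiveness = [0.95, 0.42, 0.90, 0.48, 0.88, 0.45]

    for name, responses in [("Captivation", captivation),
                            ("Distinctiveness", distinctiveness)]:
        print(f"{name}: mean={mean(responses):.2f}, sd={stdev(responses):.2f}")

A lower mean combined with a larger standard deviation is the numerical signature of an audience split between very high and noticeably lower scores.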

Over 20% of the respondents for Henry V described themselves as ‘Lifetime Loyalists’ who regularly visit the RSC; for Queen Anne this group formed 46% of respondents, in contrast to only 11% for Wendy & Peter Pan. Looking at the significant audience split for Distinctiveness (It was different from things I’ve experienced before) in response to Henry V and Queen Anne, it is likely that those who have seen the RSC many times before have experienced more similar work, and therefore scored Distinctiveness lower than those who do not regularly visit the RSC but are still very enthusiastic (measured through the Enthusiasm dimension: I would come to something like this again) about continuing to attend and support the work they love.

Some Challenges
Managing the peer assessment process was perhaps one of the more challenging elements of the methodology, for the following reasons:

1.    The distance required to travel for some productions is a barrier to obtaining peer assessors; for example, it was easier to find peer assessors for the London production than for Stratford-upon-Avon.
2.    The small sample size means that one strongly divergent opinion can change the shape of the results dramatically.
3.    The process involves a lot of administration (e.g. arranging tickets, changes in attendance dates, drop-outs and chasing survey completion). The time and resource impact of this should not be underestimated.

Regardless of the challenges, the peer assessment does give an important, alternative perspective on the work and enables deeper interpretation of the results, particularly when triangulated with the self and public responses.