The SHOCK factor - Challenging your audience

Do you challenge your audience? Is shock enough to make a piece challenging?

We discuss 'the shock factor', how the Quality Metrics shed new light on what challenges our audience, and how this could change show reviews forever...

Escaping the 'beige'

Director John Knell discusses what we have come to call 'beige' results and how to uncover more interesting insights from your evaluations.

Supporting Formative Development

As part of the Participatory Metrics Strand of the Quality Metrics National Test, we have been running discussion groups across England with experienced individuals from a wide range of creative organisations working in participatory contexts, with the aim of further refining the Participatory Metrics.

A question that often comes up is: how can a reflective practitioner, with formative evaluation embedded in their creative processes, integrate the metrics and, in the case of this test, Culture Counts?

Formative evaluation is evaluation that happens either continually or at regular intervals (depending on the activity), with feedback incorporated back into practice immediately so that the activity evolves with the needs and development of those involved.

In contrast to the Quality Metrics methodology, where in most (but not all) cases a large part of the evaluation comes after a production or exhibition is complete, the Participatory Metrics demand a more versatile approach if they are to be of genuine value to reflective practitioners working with participants. I'll note here that this approach can equally be applied to the Quality Metrics and other tools, depending on the feedback processes an individual or organisation uses for a given project, particularly in the research and development phase of creating work.

In practical terms, the Culture Counts platform is well equipped to support different evaluation processes.

In the case of formative evaluation, I suggest identifying points in time across a project where you have space to think critically and reflectively about the work. If, by way of example, you have identified six such points, create six surveys within one evaluation in the dashboard (one for each point in time, named accordingly), using the same metrics with any adjustments required for grammatical tense and project milestones. Depending on your activity, process and resource capacity, you, your practitioners, artists or producers complete the corresponding survey at each point. If the reflective practice includes participant feedback, the survey URL can be used to gather responses from them too. If you are gathering feedback from participants or others, the metrics may be best accompanied by an open discussion of the project's objectives and of how the activity is developing against the metrics, to support the reflective practice.

Once the formative surveys are complete, two interesting things happen:

1.     You can gather self and participant feedback at multiple points in time and compare and contrast the responses, feeding them back into a reflective cycle in real time (see the sketch after this list).

2.     You can track these reflections over the course of the project in a standardised way, forming an evidence base for the responsive changes made to the project to address the needs of the practitioners, self-reflectors and participants.
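
To make that tracking concrete, here is a minimal sketch in Python, sitting entirely outside the Culture Counts platform, of how mean scores from six time-point surveys might be compared across a project. The metric names and scores are invented for illustration.

```python
# Illustrative only: hypothetical metric names and 0-1 scores, not the
# Culture Counts API. Each survey shares the same metrics and is
# completed at a different project milestone.
from statistics import mean

METRICS = ["creativity", "ownership", "stretch"]  # assumed dimension names
TIME_POINTS = ["t1_planning", "t2_first_session", "t3_midpoint",
               "t4_review", "t5_final_session", "t6_wrap_up"]

# responses[time_point][metric] -> scores from self and participants
responses = {
    "t1_planning": {"creativity": [0.55, 0.60], "ownership": [0.40, 0.45],
                    "stretch": [0.50, 0.52]},
    "t3_midpoint": {"creativity": [0.70, 0.72], "ownership": [0.58, 0.66],
                    "stretch": [0.61, 0.59]},
    "t6_wrap_up":  {"creativity": [0.81, 0.78], "ownership": [0.74, 0.80],
                    "stretch": [0.69, 0.71]},
}

# Track each metric's mean score across the project, forming a simple
# evidence base for how the activity evolved between milestones.
for point in TIME_POINTS:
    if point not in responses:
        continue  # survey not yet completed at this milestone
    summary = ", ".join(f"{m}: {mean(responses[point][m]):.2f}" for m in METRICS)
    print(f"{point}: {summary}")
```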

A rigorous approach to participatory development and evaluation is something we have repeatedly come across in the arts sector. We hope that supporting it with a real-time evidence base, enabled by the Culture Counts platform, will further enhance this high-quality area of practice.

By Alison Whitaker

Visitor Services & the Quality Metrics (The Whitworth Gallery & Manchester Museum)

Across the Whitworth and Manchester Museum we have implemented Culture Counts metric surveys to gather information on artistic quality, along with the perspectives of our peers and visitors, in relation to our exhibitions. During the pilot phase of the project we created and completed eight surveys covering new exhibitions, events and live performances.

Of the three options for conducting public surveys (online, display or interview), we chose to draw on the strength of our Visitor team and interview members of the public in person. Before starting the surveys, we took the time to train the Visitor team thoroughly, giving them background on the project, the context of the questions and guidance on the importance of approaching a diverse range of visitors. We also offered opportunities to practise with offline versions, which quickly built their confidence in navigating the system.

Once the surveys were under way, what we found most interesting was the feedback we received from the team. They commented on how the surveys sparked further conversations about arts and culture in and around Manchester. Across the various surveys we conducted, the team had no problem collecting the minimum number of responses needed for the data to be representative, and in one instance we collected over 300 visitor responses for a single exhibition.
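
As a rough illustration of why response numbers matter (using made-up figures, not data from the project), the precision of a mean dimension score improves with the square root of the sample size:

```python
# Rough illustration with made-up numbers: how the precision of a mean
# dimension score improves as more visitor responses are collected.
import math

sample_sd = 0.20  # assumed spread of individual 0-1 metric scores

for n in (30, 100, 300):
    standard_error = sample_sd / math.sqrt(n)
    margin = 1.96 * standard_error  # ~95% confidence half-width
    print(f"n={n:>3}: mean score accurate to about ±{margin:.3f}")
```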

This is a significant change in attitudes towards survey-taking. Previous attempts to collect information through other survey formats were met with avoidance or hesitation. The metric system is quick, easy and very user friendly. Thanks to the uncomplicated interface and intelligent questions, the public not only participate but also see the survey as an opportunity to engage in further conversation.

Since the beginning of the project we have been busy collecting quality data and analysing our findings. Now, moving into a reflective stage of the project, we are triangulating the data between self, public and peer to compare our perspective with those of our visitors and colleagues. We can already see areas in the reports where the responses diverge, and we are beginning to use this information to improve the visitor experience for future exhibitions.
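
For those curious what that triangulation can look like mechanically, here is a minimal sketch comparing mean dimension scores across self, peer and public respondents and flagging large gaps. The dimension names and scores are hypothetical, not our data.

```python
# Minimal triangulation sketch with hypothetical 0-1 scores: compare
# mean dimension scores across respondent groups and flag large gaps.
from statistics import mean

scores = {  # scores[group][dimension] -> individual responses
    "self":   {"presentation": [0.90, 0.85], "challenge": [0.80, 0.75]},
    "peer":   {"presentation": [0.78, 0.82, 0.74], "challenge": [0.70, 0.68, 0.73]},
    "public": {"presentation": [0.88, 0.91, 0.86], "challenge": [0.55, 0.60, 0.58]},
}

for dimension in ("presentation", "challenge"):
    means = {group: mean(s[dimension]) for group, s in scores.items()}
    gap = max(means.values()) - min(means.values())
    flag = "  <- worth a closer look" if gap > 0.15 else ""
    print(dimension, {g: round(m, 2) for g, m in means.items()}, flag)
```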

By Chad, The Whitworth Gallery & Manchester Museum

Learning and Insight Days

A big thanks to the 100 NPOs and MPMs who came to the Learning and Insight days that we ran in Birmingham, Bristol, London, Manchester and Newcastle over the last two weeks.

It was great to see you all face to face and hear about your experiences using the quality metrics and the Culture Counts platform. In the coming months we will post blogs from some of you on your key insights and challenges. Get in touch if you want to write something.

For those who were unable to make the sessions, there are a few clear headlines to share from the group discussions. Many of you talked about the challenge of managing the peer evaluation process, particularly those of you who aren't evaluating an event in a major town or city. A number of organisations shared positive stories about how they have been discussing the Quality Metrics data internally. Following the skills elements of the sessions, there is strong support for Culture Counts to create additional resources for this microsite, for example on how to calculate standard deviations for dimension scores, along with wider evaluation tips and hints (we'll tweet out updates, so follow us!).
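
Ahead of those resources, here is a quick sketch of the kind of calculation involved, using invented scores for a single dimension (the dimension name is illustrative):

```python
# Quick sketch with invented scores: mean and standard deviation of
# respondents' 0-1 scores on a single quality dimension.
from statistics import mean, stdev

captivation_scores = [0.82, 0.75, 0.91, 0.64, 0.88, 0.79, 0.70]

avg = mean(captivation_scores)
sd = stdev(captivation_scores)  # sample standard deviation

# A small standard deviation means respondents broadly agreed; a large
# one means the average hides a wide spread of reactions to the work.
print(f"Captivation: mean {avg:.2f}, standard deviation {sd:.2f}")
```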

It was also clear that many of you are keen to share your data with other NPOs, some of whom are already taking part in this national test phase and some of whom are not. As you know, you can share your evaluations with other participating NPOs in the dashboard. For those keen to share their data more widely, a reminder that NPOs and MPMs not taking part in this quality metrics national test phase can register their interest with us to gain trial access to the Culture Counts platform under the terms of our Nesta, AHRC, ACE Digital R&D fund award (see our earlier news item). Remember, the deadline for this closes on March 1st.

(Image credit: Royal West of England Academy)

Valuing Culture

What is a high quality cultural experience? How can cultural organisations best measure the quality of what they do? How can these insights enrich conversations with artists, audiences and supporters?

These are the questions at the heart of the whole Quality Metrics approach. Trying to answer them has meant giving the cultural sector the lead role in defining the metrics they think can best capture the quality of cultural experiences, and then working with them to see whether the quality metrics generate data that is credible and insightful. We are now doing that at scale with the 150 participating NPOs and MPMs.

The vital litmus test here, explored in this National Test phase, is whether the quality metrics data nudges cultural organisations towards productive, data-informed self-reflection on their creative practices.

Amidst all the language of big data, standardised metrics and triangulation that comes with the value-measurement territory, it is easy to lose sight of the fact that this self-reflection process is where the transformative energy lies in conversations about valuing culture. With cultural organisations using the quality metrics data to explore whether they are consistently meeting their creative intentions, and deepening their understanding of how distinct audiences respond to different types of work, the opportunity is to help the cultural sector move the conversation about data and value out of narrow 'advocacy' and 'audit' boxes, and into the creative and commercial lifeblood of their organisations: http://artsdigitalrnd.org.uk/features/data-at-the-deep-end/

The quality metrics, and the resulting data, aim to be the means to this bigger end, ideally with a strong community dimension: the use of standardised quality dimensions should encourage a sector-wide exchange of shared insights and interpretations.

The cultural organisations taking part in this National Test phase will be prompted to reflect, share and comment on the impact of the quality metrics on their internal conversations about creative practice, intention and the value of what they do. It is going to be fascinating to hear about their experiences, and to share them here.

The Bigger Picture: Aggregating Data

If the reflective processes of the cultural organisations involved in this National Test are one vital element of unlocking insights from the quality metrics, another is the overall report on the National Test that the Culture Counts team will be producing. We are giving a lot of thought to how best to aggregate the data and tell a rich story about the results in the round.

The wonderful thing about standardised questions (like the quality metrics) is that if you ask them enough times, you can start building a large pool of comparable data, enabling new questions and answers to emerge. 

The quality metrics, as a core set of questions, aim to capture the quality of artistic work and can be used across art forms, presentation mediums and types of experience. It is essential that, when faced with an aggregate quality metrics dataset encompassing all the different conditions in which the metrics can be applied, the questions asked of that dataset do not dilute or diminish the rich detail of the underlying survey data.

The data aggregation analysis in the trial is therefore not about comparing the artistic quality of one event against another; it is about enhancing insight into the experience of artistic work as a whole across the National Test. It needs to draw out the dimensions of quality and enable conversations about different reactions to different types of work, interpreted against evidence on a national scale.

We'll keep you posted about how we are going to approach the analysis of the data (applying metadata to the overall dataset at its most granular level) to ensure we tell a powerful overall story without losing the complexity and richness of the underlying data.
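
As a hint of what that looks like in practice, here is a minimal sketch, with hypothetical records rather than our actual analysis pipeline, of keeping metadata attached to each individual response so that aggregate views can always be sliced back down to the underlying detail:

```python
# Minimal sketch with hypothetical records: metadata (art form,
# respondent type) stays on every individual response, so aggregates
# can be cut by any combination of keys without flattening the detail.
from collections import defaultdict
from statistics import mean

responses = [  # one record per respondent per dimension (invented data)
    {"art_form": "theatre", "respondent": "public", "dimension": "rigour", "score": 0.72},
    {"art_form": "theatre", "respondent": "peer",   "dimension": "rigour", "score": 0.65},
    {"art_form": "dance",   "respondent": "public", "dimension": "rigour", "score": 0.81},
    {"art_form": "dance",   "respondent": "public", "dimension": "rigour", "score": 0.77},
]

def aggregate(records, *keys):
    """Mean score grouped by any combination of metadata keys."""
    groups = defaultdict(list)
    for record in records:
        groups[tuple(record[k] for k in keys)].append(record["score"])
    return {group: round(mean(vals), 2) for group, vals in groups.items()}

print(aggregate(responses, "dimension"))               # the national picture
print(aggregate(responses, "art_form", "respondent"))  # a granular slice
```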