The participatory strand was more popular than anticipated, with 24 organisations taking part. As many organisations in the ACE-funded portfolio cover participatory work through their programme of activity, there is much interest in measuring the quality of participatory work effectively. Organisations were invited on the basis of existing relationships; knowledge of the work of organisations that had expressed interest in the main quality metrics trial; a judgement on which organisations had the capacity to absorb both the main trial and this participatory metrics strand; and specific recommendations from ACE. Two arts organisations that sit outside the portfolio, Fun Palaces and Brighter Sound, were invited to take part because of their expertise in participatory work.


Two inception workshops have been run to date: one in London in December and one in Manchester in January. The conversations at the inception meetings clearly demonstrated the rich expertise and processes that go into producing participatory work within the attending organisations.


Facilitated testing is underway and is managed in a similar way to the quality metrics trial. All organisations in the participatory metrics strand can see in the Culture Counts dashboard a template participatory evaluation containing the long list of participatory metrics. They can then edit the metrics, remove those they do not wish to use, and add others.


In other words, the existing long list of participatory metrics, produced in the original pilot phase led by Re:Bourne / New Adventures, is being actively refined: participating organisations are encouraged to choose freely from the list (so we will see which dimensions are chosen most frequently), to edit and refine individual metrics, and to add additional ones. The evaluation data and the shared interpretations of the cohort of participating NPOs should produce rich insights into how best to refine and develop the participatory metrics.



Some emerging themes have already started to take shape from the work so far:

·      The need to adapt individual dimensions into different metric statements for different contexts.


·      Tracking behaviour change. Whilst the length of the trial militates against any substantive tracking of participants, there is significant interest in how far a platform like Culture Counts will allow organisations to track participants across the course of a participatory experience and thereafter. We are going to explore with some of the organisations (the Book Trust) what is possible within the confines of this national test period.


·      Some of the participating organisations raised the possibility of developing both summative and formative approaches. Summative assessment takes place after the fact, post event. Formative assessment can take place at any time (before, during and after) and can be used to inform and critically reflect on how best to continue within a process.


·      Organisations expressed the need for us to be clear about the distinction between quality assurance and outcome measures. 


·      The need to explore the subtleties between interaction, participation, and learning.


·      There seems to be a clear gap in the current participatory metrics – where they align with CYP Principle 1 – around excellence.


·      How might we need to expand and refine the respondent types in participatory work – i.e. extend beyond a self/peer/participant respondent frame to include the practitioner and the commissioner/producer? How well defined are the roles in the activity (artist, participant, etc.)? The participating organisations have observed that in some instances of participatory work the roles are more clearly defined than in others.


·      What is the vision of the participatory work? Is it deliberate and thought through? How can this intention be captured by the self-assessment? Is the vision that of the producer or the practitioner, and how well has that vision been communicated? This is linked to deliberate innovation in different contexts: for example, how far is the artist repeating content irrespective of the purpose of the participatory event, and how far are they innovating in new ways to meet clear goals?


·      The experience of the artist as a participant can contribute insight; for example, some of the best work occurs when the artist is stretched alongside participants.


·      What criteria are used for selecting participatory artists? How can that align with these metrics when considering the quality of the participatory experience? Is the artist inspiring? Do they have expertise?


·      What other quality indicators can be used to capture other objectives of participatory work, such as process, production, social change and collaborative understanding? How can the optimal conditions for learning and engaging be measured and included?


·      Formative and long-term impact – repeat testing of metrics with time tracking would be useful over longer participatory experiences. Some participatory work extends to alumni relationships.


·      What are the development options to facilitate access? We might consider images, cultural differences, languages and disabilities.


The draft long list of participatory metrics can be found here.