Policy Week event: Using Digital Technology to Assess Quality in the Arts


This event followed on from research as part of the NESTA R&D Culture Metrics project which involved researchers from the Institute for Cultural Practices, The University of Manchester, in collaboration with a partnership of arts and cultural organisations and a technology partner, Culture Counts. The research considers how digital technologies, social media and big data might help arts organisations benchmark and demonstrate the quality of their work, and assess and evaluate its impact on audiences and for funders.

Arts and cultural policy increasingly requires arts organisations to demonstrate the public value and outcomes of publicly funded work, through evidence and evaluation. At the same time arts organisations want to be able to demonstrate the quality of this work in ways they understand and control. To this end, the Culture Metrics project has attempted to develop measures for the quality of arts and cultural experiences, which have been co-produced and tested by the sector, and bring together data from different stakeholders in the process, using digital technologies. As the research partner, we have been looking at some of the claims of this project, along with the motivations and challenges for cultural partners and policy stakeholders when adopting the metrics system.

This Policy Week event aimed to further explore the challenges for arts organisations and policy makers in this area of interest – and to consider more broadly the ways in which the arts use digital technology, social media and big data to demonstrate their public value.

Presentation of the Culture Metrics research

In the project, we explored how organisations already use social media data with increasing sophistication to generate and sustain conversations with their audiences and to promote their brand values and missions, as well as arts experiences and events. We found that organisations regularly bring together and triangulate data to understand their audiences and inform programming and producing decisions. We found less evidence, however, of a consistent and structured use of social media data as a source for measuring quality or as an integrated component of their performance management regimes. Moreover, the data arts organisations use is, in the technical sense, unlikely to be ‘big’ in the ways understood by the growing literature that enthuses about the potential value of big data and its distinctiveness from more routine data sources, such as box office records and occasional audience questionnaire surveys.

We also found that Culture Metrics “ticks a lot of the boxes” identified by recent research on performance measurement and quality in arts organisations, research concerned with so-called ‘artistic vibrancy’ (the Australian term for healthy cultural organisations). For example, a literature review by Bailey and Richardson (2010) found a number of models which had the following recommendations for performance measurement in common:

  • include external and internal views of the organization’s performance (e.g., audience, funders, artists, peers, staff);
  • ensure organizational ownership of the measurement process for it to be meaningful;
  • engage employees and management in the measurement process;
  • match measures to the organization’s mission.

(Bailey and Richardson 2010: 294)

The event heard an introduction to the NESTA R&D project from Abigail Gilmore and John Knell, with some insights into the research process and some early findings. This preceded the live testing of the Culture Counts evaluation system by delegates, representing both ‘public’ and ‘peers’, in an evaluation of the Matthew Darbyshire exhibition.

The results of this test event, combined with data already collected in the Gallery by University of Manchester Arts Management students the week before, were then displayed and discussed.

Questions this exercise raised included: What are the differences between verbal and text responses and ‘hard’ quantitative metrics? How should we understand them in terms of what the data can be used for? What provision is made for responses to, and effects of, arts and culture which do not occur in the short window between experiencing an event or exhibition and undertaking the survey, but are delayed by some time? These ‘longitudinal’ impacts are discussed within the methodologies reviewed in Arts Council England’s recent literature review on understanding the value of cultural experiences, where they are identified as one of the critical drawbacks of post-event surveys. However, John Knell explained how Culture Counts can be used retrospectively and, as other audience members contributed, the discussion considered how methodologies for understanding audience impact should not rely only on post-event surveying, but should also include other mixed methods that allow for conversations with audiences and publics.

Another question raised concerned the relation between the use of social media data as an object of further interpretation and the taxonomy of self, peer and public: is this the ‘right’ taxonomy? Interestingly, there was little discussion about the sampling strategies or data collection techniques used by Culture Counts, although there were some concerns about the representativeness of respondents, particularly in relation to ‘peers’ of arts organisations. As the project testing also found, recruiting suitable peers to act as critical friends for particular pieces of programming takes effort, and care is needed to avoid accusations of selection bias.

In their presentation, Kostas Arvanitis and Chiara Zuanni introduced follow-on research from the Culture Metrics project, which explores social media data and combines it with analysis of audience experiences. Focusing on current methodologies for the collection and analysis of social media data, they discussed the relation between this data and the data collected by the Culture Counts system, highlighting the organisational challenges of a data-rich cultural professional practice. They pointed to broader issues, including the impact that the rhetoric of data, especially big data, has on producing preconceptions of validity and value, and considered the gaps in the data and how these gaps are accounted for in organisational practice. Overall, Arvanitis and Zuanni raised questions about the data cultures being formed in cultural organisations: what might data-driven decision-making actually mean, how does it manifest itself in organisational life, and how might the collection and analysis of social media data fit into organisational data culture and practice? In addressing what we need to take into consideration when planning, carrying out and evaluating social media metrics, they discussed:

  • Understanding the context and motivation of audiences’ social media activity
  • Value and usefulness of unprompted/unstructured reactions (as opposed to structured surveys)
  • Accuracy of data
  • Representativeness of audiences
  • Different platforms, different users, different uses?
  • Methodological and ethical issues on capturing and using social media data

Panel discussion

This was followed by the panel discussion, which focused on the following questions:

  • How can arts organisations use social media, digital technologies and big data more strategically?
  • What implications for cultural policy derive from the use of this data?
  • What one improvement would help?

Roundtable: from left, Nick Merriman, Alison Clark, chair Abi Gilmore, Cimeon Ellerton, Hasan Bakshi.

Hasan Bakshi, NESTA, discussed the questions in relation to his experience as an economist attempting to develop robust measures for innovation, value creation and, more recently, cultural or ‘intrinsic’ value, through methods such as contingent valuation. With the proliferation of big data from diverse sources, his concerns include the question of data standards and how we might understand these in relation to bigger data. One recommendation for the arts might be to involve more data scientists, with whom arts organisations can work to ensure the quality of the data and its analysis. He gave an example from recent research of ‘machine learning’ applied to massive datasets, which has helped to identify video games as a growing form of cultural participation, in contrast to other forms of participation research that miss changes in behaviour, particularly those associated with new or emergent cultural forms. One method in particular that needs updating is the survey model: instead, Hasan argued, big data use cases can reveal new patterns of behaviour and experience, and even redefine what we understand as culture.

Cimeon Ellerton, The Audience Agency, talked of his experience in developing audience data through services such as Audience Finder. For him, the priorities remain the standardisation of data and the need to aggregate data whilst encouraging arts organisations to be open. This needs to be a collective effort, involving all organisations, so that smaller arts companies with less capacity for audience evaluation and research can benefit from economies of scale. Leadership by the sector in using data and evidence is also key: “whoever owns the story gets the funding”.

Alison Clark, Arts Council England North, said she feels arts organisations continue to have a fear of social media and of sharing. There is a sense that the more organisations value data, the less likely they are to share it. Arts Council England has embraced these issues by developing a new data strategy and encouraging the use of data scientists to bring new skills and practices to enrich big data analysis and evaluation. Although this may bring more opportunities for ‘data-driven decision making’, it also brings other anxieties. For example, evidence-based funding schemes such as Creative People and Places, which is open only to places with low levels of engagement as identified by Active People survey data, mean that places and programmes which other types of knowledge and analysis would reward miss out. There is a concern that policy for arts and culture becomes somehow un-human: that creativity is automated and policy formation becomes algorithmic. A solution lies to some extent in a collective embrace of big data, where everyone joins in: for Alison, this means leadership within the sector to create peer pressure and encourage the sharing of resources and practice.

Nick Merriman, Manchester Museum, speaking from the ‘user’ perspective, discussed how initiatives like Culture Counts can provide better accountability for public funding: by aggregating stakeholder perspectives they provide a credible means of understanding quality, not just of arts and cultural experiences, but also of other activities which involve exchange between publicly funded services and the public, for example science engagement. He spoke of the added value of the process of developing shared metrics through collaboration, although he also cautioned that it is still early days for embedding this approach in the sector, particularly in terms of bringing together data from Culture Counts with social media data.

Discussion focused on the barriers to data sharing: unlike in the commercial sector, for the publicly funded arts there is more opportunity, and need, for policy to intervene in this space and to encourage organisations to work collaboratively to increase their analytical capacity. The discussion suggested that concerns about technologically determined funding decisions are balanced by the opportunities to create greater transparency and accountability. This, it was felt, is facilitated via a richer conversation based on a range of knowledge which includes the generative potential of big data, rather than relying on a single source or set of measures; to achieve this, further scientific endeavour is needed to develop the data standards, metrics and methodologies to measure and demonstrate public value.

The arts may feel they are playing catch-up with other sectors in this agenda. One might wonder whether the rhetoric of evidence-based policy is simply being replaced by the rhetoric of big data within data-driven decision-making, with the added ‘wow factor’ of algorithmic policy-making. Even so, there is clearly an appetite to work together to understand and harness the potential of big data, and to adapt and use its social properties, especially when led by the sector rather than imposed top-down by policy.

 

References

Bailey, J. and Richardson, L. (2010) ‘Meaningful measurement: a literature review and Australian and British case studies of arts organizations conducting “artistic self-assessment”’, Cultural Trends, 19(4), 291–306.