A Case Study
This article is a written version of a talk I gave at this year’s Group for Education in Museums Conference. It looks at the evaluation work done for “Represent”, an inclusion project for young people run at Birmingham City Museum and Art Gallery. I was asked to write it up for the VSG Newsletter as some of you may be interested in the methods used.
Here I shall only outline the content of the project and concentrate on the evaluation itself. Those who wish to know more should refer to the website, www.birmingham.gov.uk/represent, which also includes the full evaluation report. At the end of that report there is also a brief evaluation of the evaluation!
“Represent” was a project set up to attract young people, chiefly those from ethnic minorities, to museums. However, it had other aims too, listed in the box below, including developing the young people’s confidence and skills. The main evaluation aim was therefore to establish how successful the project was in attracting new audiences to museums, while further evaluation aims considered how effective it was in improving skills and confidence.
The evaluation needed to highlight the factors that contributed to the success or failure of the project. It depended on showing changes in the attitudes, skills, and experiences of the young people, to demonstrate that the project had had an impact on them. It was therefore essential that data, and in particular the young people’s own thoughts, be collected throughout the project. This is easier said than done! It helped to use the key-worker as an intermediary in collecting data (at least some of the time), as I thought the young people would feel more comfortable with him. I also wanted to talk to them directly and give them an opportunity to tell me about the key-worker and the project. The key-worker and I then compared our impressions and findings.
“Represent” was a project set up to attract young people, chiefly from ethnic minorities, to museums. Its aims were not just to attract new audiences but also to add quality and experience to the young people’s lives.
The project was initiated with “walk tall” sessions by Dr Roy Paget, which developed confidence, communication skills and self-esteem. Early social events also provided opportunities for getting to know each other. Subsequent activities were developed in consultation with the young people. Activities included museum visits, recording experiences in scrapbooks, an exhibition on graffiti, and DJ workshops.
The methods used here are not new; they are informed by social science research that focuses on the use of qualitative data. Qualitative data brings many problems (though so do quantitative methods), but it does provide an opportunity to dig deeper into the context. Reliability of data recording and validity of analysis are critical to providing a “true” picture. Obviously, if one asks different questions of the collected data, different “truths” will result; and if different evaluators use different methods, yet other truths will result. There are, however, ways of making qualitative data more consistent and comparable. Miles and Huberman, in their standard text, offer a number of very useful strategies, including testing ideas, returning to the audiences with the results, using counts of frequencies, and relating results to other similar research. Their ideas are based on a full understanding of the theoretical context of such methodology.
Data was collected from a number of different sources:
- diaries from key workers and others involved in provision; diary keepers were asked to record:
  - feelings about own input and impact on you
  - feelings about input of colleagues/museum
  - feelings about impact of project on partnerships
  - feelings about impact of project on young people
  - success of overall visit/training course
  - feelings about young people’s needs
  - ideas on how the project is changing
  - ideas on how it is succeeding
  - any issues or problems
- evaluation sheets from those involved in providing the programme
- evaluation and work sheets from the attendees
- work produced by attendees, e.g. scrapbooks
- interviews and focus groups (during the project and at the end) with staff, including management
- focus groups (at the end) and informal conversations with attendees
- evaluator notes from project meetings held throughout the programme
- evaluator notes from parts of the programme attended, e.g. trips and “walk tall” sessions
- videos taken by the key-worker and attendees
This variety of data offers a diverse range of viewpoints (triangulation, a common check on validity) and also gives the evaluator a chance to get a feel for the project.
Useful analysis depends on the early development of clear evaluation aims (see Figure 3, the evaluation cycle). The collected data is searched through a number of times, with the evaluator looking for evidence both for and against the achievement of each aim. For example, in “Represent” one of the aims concerned training activity, and so I looked for examples of:
- opportunities for the young people to develop through training,
- evidence of the development of relevant skills etc
- gaps/problems in the training provision.
The examples found are sorted into categories, with sub-groups formed within them. Data that does not support successful outcomes provides opportunities for a more critical view and highlights some of the limitations of the project. It is important that the evaluator becomes immersed in the data, to “feel” trends and patterns emerging.
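For readers who keep their coded extracts in a spreadsheet or database, the sorting-and-counting step described above can be sketched in a few lines of code. This is only an illustration of the general technique (grouping extracts by category and counting supporting versus non-supporting evidence); the sources and category names below are invented, not the actual coding frame used in “Represent”.

```python
from collections import defaultdict

# Hypothetical coded extracts: (source, category, supports_aim) tuples.
# Categories and sources are illustrative only.
extracts = [
    ("key-worker diary", "training opportunity", True),
    ("attendee focus group", "skill development", True),
    ("evaluator notes", "training gap", False),
    ("attendee scrapbook", "skill development", True),
]

# Group the evidence by category, keeping negative cases visible
# so they can prompt a more critical reading of the project.
by_category = defaultdict(list)
for source, category, supports in extracts:
    by_category[category].append((source, supports))

# Frequency counts per category, one of the consistency strategies
# mentioned above.
for category, items in sorted(by_category.items()):
    positive = sum(1 for _, supports in items if supports)
    print(f"{category}: {len(items)} extracts, {positive} supporting")
```

In practice dedicated qualitative-analysis software does this job more flexibly, but the principle, counting evidence for and against each aim rather than only the successes, is the same.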
Evaluation provides an opportunity to assess how effective a project has been and can inform future work. The evaluation is more useful if clearly focused evaluation questions are formed at the beginning of the project; different questions will provide different information for different purposes. Because of the importance of the evaluation and its results, it needs to be carried out rigorously and conform to best practice, including consideration of the reliability and validity of the evaluation programme. The evaluation cycle below shows the evaluation process, illustrating the need for involvement throughout and the cyclic nature of the process, with evaluation informing future programmes and policy.
Kate Pontin is a consultant evaluator and educator. She started working in museums over 15 years ago, chiefly as a museum educator in small and large local authority services. Her interest in evaluation, and in the actual experience of visitors, developed while writing her MA thesis on the evaluation of a natural sciences gallery. Over the last 11 years evaluation has become the focus of her work, covering displays, user group opinions, professional needs, and informal learning.