Freelance Museum Consultancy


Techniques to use in Qualitative Evaluation and Research

“By validity I mean truth: interpreted as the extent to which an account accurately represents the social phenomena to which it refers” (Hammersley, in Silverman, p. 149)


This article is the result of a talk I gave last year – this time a paper given during research week at the University of Leicester. Validity is a subject close to my heart, and I feel it is central to qualitative data being taken seriously. It also gives structure to a qualitative study and thus helps provide consistency from worker to worker.

This paper provides notes on the types of validity check that can be used in qualitative data collection and analysis. At least one of these methods should be used. Although the focus in recent years has been on triangulation, there are other methods available.


Triangulation

Triangulation concerns itself with using data drawn from different contexts. These contexts include:

* Data source
* Method
* Researcher
* Theory
* Data type (qualitative and quantitative)

The different data are compared to see whether they corroborate each other.
Triangulation can be particularly helpful in addressing the partiality of any one data source. Dingwall (in Silverman) suggests that it is useful when comparing public and private accounts: for example, an interview and field data can be combined to make better sense of the situation.

For example, in my own research I have used a number of different methods of data collection: observations, interviews and respondents' own feelings recorded on comment cards.
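Where the corroboration step is done systematically, it amounts to tallying which coded themes recur across independent sources. A minimal sketch follows; the source names and themes are invented for illustration, not data from the study:

```python
# Hedged illustration of the corroboration step in triangulation:
# which coded themes are supported by more than one data source?
# All source names and themes below are invented for illustration.

def corroborated_themes(sources, minimum=2):
    """Return (sorted) themes coded in at least `minimum` sources."""
    counts = {}
    for themes in sources.values():
        for theme in themes:
            counts[theme] = counts.get(theme, 0) + 1
    return sorted(t for t, n in counts.items() if n >= minimum)

sources = {
    "observation": {"engagement", "confusion", "social interaction"},
    "interview": {"engagement", "enjoyment"},
    "comment cards": {"engagement", "enjoyment", "confusion"},
}

print(corroborated_themes(sources))
# -> ['confusion', 'engagement', 'enjoyment']
```

Themes appearing in only one source (here, "social interaction") are not thereby wrong, but flagging them focuses further analysis.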

The limitations of triangulation

Interestingly, Silverman is critical of triangulation. He suggests that it may be inappropriate to assemble an over-arching reality from data gathered in different contexts and then treat that assembly as an approximation of the truth.

Rather, he suggests that triangulation may usefully provide an “assembly of reminders” about the situation, but that it should not be used to adjudicate between accounts. In qualitative research we are there to understand, not to judge the truth (Silverman, p. 158).

Silverman says in criticism of triangulation that:

a) Each method used relies on the same reliability issue, i.e. I could be equally inconsistent at categorising in interviews as in recording observations.

b) Triangulation combines data from different contexts and therefore ignores the context-bound nature of ethnographic situations – for example, children drawing their own pictures of the experience versus my views during and after observation of the visits (Silverman, p. 158).

c) Rarely does the inaccuracy of one approach to the data complement the accuracy of another.

d) The aggregation of data, even when grounded in the same theoretical perspective, does not produce an overall truth (Silverman, p. 157).

Silverman suggests that triangulation may be effective for ‘why’ questions and not for ‘how’ questions.

Many others are less critical of triangulation, but his comments certainly need thinking about. I feel that triangulation should not be used on its own and should be supported by other methods. These need to be decided before the project starts and included in the evaluation work programme. If we expect qualitative reports to be taken seriously and trusted, as quantitative work is, we need to establish rigorous strategies to make the data and reports consistent and reliable.

Taking the findings back to the subjects being studied (see Silverman)
This enables the subjects to comment on and verify one's findings, and is known as respondent validation. One can provide:

* a report
* a classification of activities or situations
* a hypothetical case.

Silverman feels that this technique is not always appropriate. He quotes Bloor in saying that the method does not validate the data but can generate useful information for further analysis. Miles and Huberman also refer to getting feedback from informants (p. 159). They say it is a time-consuming process, but if it is built into the study one can check out findings, using new or old informants. It is a useful exercise, as it can help you “know better what you know”. But what do you do if the informants do not agree? There are also issues of ethics.

Testing hypotheses (see Silverman, p. 160)
This is known as analytic induction (the equivalent of statistical testing in quantitative work). In this method one tests hypotheses that have been produced from the data. They are then reformulated as necessary until all exceptions have been eliminated. The tests can be made with new or old data.

* Tests out ideas/clarifies/confirms
* Concentrates on one or two case studies in detail
* Using tick boxes for learning processes gave me time to compare the learning-process categories with my personal understanding of the visit.
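The reformulation loop of analytic induction can be sketched in code. This is purely illustrative: the cases and field names are invented, not drawn from my study.

```python
# Analytic induction, sketched: test a hypothesis against every case,
# list the exceptions, and reformulate until none remain.
# Cases and field names are invented for illustration.

cases = [
    {"id": 1, "guided": True,  "small_group": True,  "engaged": True},
    {"id": 2, "guided": True,  "small_group": True,  "engaged": True},
    {"id": 3, "guided": False, "small_group": True,  "engaged": False},
    {"id": 4, "guided": True,  "small_group": False, "engaged": False},
]

def exceptions(cases, hypothesis):
    """Cases for which the hypothesis does not hold."""
    return [c for c in cases if not hypothesis(c)]

# First formulation: guided visits always produce engagement.
h1 = lambda c: c["engaged"] if c["guided"] else True
print([c["id"] for c in exceptions(cases, h1)])   # -> [4]

# Reformulated after the deviant case: guided visits in small groups
# produce engagement. No exceptions remain.
h2 = lambda c: c["engaged"] if (c["guided"] and c["small_group"]) else True
print(exceptions(cases, h2))                      # -> []
```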

If–then tests (and ruling out spurious relations)
These are more focused than testing hypotheses and concern expected relationships: if p is true, one looks to see whether q is also true (if p, then q). A number of different tests should be done. The technique is particularly useful in areas about which one is not yet fully clear, and many computer programmes will do it for you.

For example, from a case study in Miles and Huberman (p. 272):

If the period of implementation is later, then institutional concerns will be more frequent than individual ones.
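The logic of such a test can be made explicit. In the sketch below the field names echo the Miles and Huberman example, but the records themselves are invented:

```python
# If-then test, sketched: "if p (late implementation) then q
# (institutional concerns outnumber individual ones)". A case where
# p is false cannot falsify the conditional. Records are invented.

cases = [
    {"late_implementation": True,  "institutional": 5, "individual": 2},
    {"late_implementation": True,  "institutional": 4, "individual": 3},
    {"late_implementation": False, "institutional": 1, "individual": 6},
]

def if_then_holds(case):
    p = case["late_implementation"]
    q = case["institutional"] > case["individual"]
    return (not p) or q

print(all(if_then_holds(c) for c in cases))  # -> True
```

Running several such tests, on different expected relationships, helps rule out spurious relations.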

Counting in Qualitative Data (Silverman, p. 162)
This means the crude counting of measures such as observation categories. It does not provide proof of the thrust of the argument, but it does:
* confirm (or not) what one felt about a situation
* inform the analysis
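The counting itself is easily mechanised; a minimal sketch, with invented observation categories:

```python
# Crude counting of observation categories, as a check on impressions.
# The categories are invented for illustration.
from collections import Counter

observations = [
    "reading label", "talking to companion", "reading label",
    "touching exhibit", "reading label", "talking to companion",
]

tallies = Counter(observations)
print(tallies.most_common())
# -> [('reading label', 3), ('talking to companion', 2), ('touching exhibit', 1)]
```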

Inferring from one case to a larger population (Silverman, p. 160)
This can mean the generalisation of cases to theoretical propositions rather than to populations, so that findings relate to broad classes of phenomena:

a) Obtaining information about relevant aspects of the population of cases and comparing our case to them

b) Using results of research on a random sample of cases
This helps test the ideas but also stretches them further and makes them clearer.

c) Co-ordinating several ethnographic studies

Replicating Findings (Miles and Huberman, p. 273)
If one can reproduce the findings in a new context, or in another part of one's database, the finding is more dependable.
One can use new data to bolster the theory or to qualify old data.

Testing emerging hypotheses in another part of the case or data set provides more rigorous (less biased) answers – e.g. cross-case comparisons. One can test across cases using memos, interim case summaries etc. to check this, and one should do so as one goes along.
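The idea can be sketched as re-testing a pattern, found in one slice of the data, on another slice before trusting it. The records below are invented:

```python
# Replication sketch: a pattern noticed in the first half of a data set
# is re-tested on the second half. Records are invented for illustration.

first_half = [
    {"guided": True,  "engaged": True},
    {"guided": False, "engaged": False},
]
second_half = [
    {"guided": True, "engaged": True},
    {"guided": True, "engaged": True},
]

# Pattern suggested by the first half: engagement tracks guiding.
pattern = lambda c: c["engaged"] == c["guided"]

replicated = all(pattern(c) for c in second_half)
print(replicated)  # -> True: the finding is more dependable
```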

Weighting of data (Miles and Huberman, p. 267)
This can be done, carefully, by treating stronger data as a more valid record:

Stronger data | Weaker data
Data collected later or after repeated contact | Data collected early or during entry
Seen or reported first-hand | Heard second-hand
Observed behaviour/activities | Reports/statements
Field worker is trusted | Field worker is not trusted
Collected in an informal setting | Collected in an official or formal setting
Respondent is alone with field worker | Respondent is in the presence of others
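One crude way to operationalise the table is to score each record by how many "stronger data" criteria it meets. The field names below are invented:

```python
# Weighting sketch: score a record by the number of "stronger data"
# criteria it satisfies, so stronger records can be treated as more
# valid. Field names are invented for illustration.

STRONGER_CRITERIA = (
    "collected_late", "seen_first_hand", "observed_directly",
    "field_worker_trusted", "informal_setting", "respondent_alone",
)

def strength(record):
    """How many stronger-data criteria does this record meet?"""
    return sum(1 for c in STRONGER_CRITERIA if record.get(c, False))

record = {
    "collected_late": True, "seen_first_hand": True,
    "observed_directly": False, "field_worker_trusted": True,
    "informal_setting": False, "respondent_alone": True,
}
print(strength(record))  # -> 4 (of a possible 6)
```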

Checking the meaning of outliers 
Using them to deepen preliminary conclusions (Miles and Huberman, p. 269)
Outliers are cases that do not fit the current hypothesis.

A look at the exceptions can strengthen the explanation
Why are there outliers?
If there are no outliers, why not?

Looking for negative evidence and rival explanations
(and following up surprises, Miles and Huberman, p.271)

This is not always easy to do, as people do not naturally include in their analysis items that disprove their theory. One can look for disconfirmation of the theory as well as just outliers. After all, according to Popper (in Silverman), one can only disprove, never prove, theories.

Glaser and Strauss give no guidelines on how, or for how long, one should look for outliers and alternative explanations. Any findings can be used to revise the hypothesis. One can use prior data, new data and the studies of others. One should not be too quick, however, to discard the original hypothesis (this may depend, for example, on the proportion of negative cases found).

Rival explanations (Miles and Huberman, p. 273)
This is a healthy exercise. One should do it while collecting data, not too late in the study, though it is difficult when one wants to bolster one's own ideas. Use colleagues to help by commenting on ideas and suggesting alternative explanations.

Generally one can say that validation is difficult, but one must try to produce results that are as true as possible, especially if they are to be used to inform others and to make generalisations. A variety of methods should be used, the choice depending on the situation.

There is a need to plan validation into the methodology. This is difficult when one is not experienced in research, as one does not fully understand the implications of what one reads until one comes to analyse and discuss data.


M. B. Miles & A. M. Huberman, Qualitative Data Analysis, 2nd edition, Sage, 1994.

D. Silverman, Interpreting Qualitative Data: Methods for Analysing Talk, Text and Interaction, Sage, 1993.