Never mind the quality, feel the width…

“integrity” flickr photo by drumminhands (https://flickr.com/photos/drumminhands/7114464945), shared under a Creative Commons (BY) license

In preparing for a forthcoming supervisory meeting, I’ve been asked to share what I felt were the standout insights from my empirical observations and, for each one, to explain how I know it, how I convinced myself of it, and how I can convince others. I guess what I’m being asked here is to justify my claims to knowledge: how do I assert that my interpretations are plausible? Lincoln and Guba (1985:290) phrase it as follows:

“How can an inquirer persuade his or her audiences (including self) that the findings of an inquiry are worth paying attention to, worth taking account of?”

For them, it is about trustworthiness and the arguments which can be mounted to make the case. Assessing the quality of research findings, however, is far from straightforward and is contested in a number of ways. Traditionally, research quality has been judged on the criteria of validity, reliability, generalisability, and objectivity. Validity, simply put, is the extent to which an account adequately represents the phenomenon it purports to represent. Reliability relates to the replicability of data generation and analysis: if different people conducted the same study, or the same person did so on different occasions, would the outcomes be the same? Generalisability refers to the extent to which what has been learned can be extended to wider populations, and objectivity to how far the biases and interests of researcher and researched have been reduced or accounted for.

These principles were inherited from the methods of natural science and are generally to be found in positivist-leaning, often quantitative studies. Although they continue to be applied within qualitative research (Morse et al, 2002), a debate developed amongst qualitative researchers asking how, and even whether, rigour can be established in this way. Participants in this debate fall into three camps (Denzin, 2009): foundational, quasi-foundational and non-foundational. Foundationalists contend ‘that research is research, quantitative or qualitative. All research should conform to a set of shared criteria’ (e.g. validity, credibility, etc.). Quasi-foundationalists seek a set of criteria unique to qualitative research, whereas non-foundationalists propose that criteria are not appropriate at all, since they view ‘inquiry within a moral frame, implementing an ethic rooted in the concepts of care, love, and kindness.’

Given the uneasy truce emerging from the paradigm wars of the late 20th century (Bryman, 2008), a common set of criteria with which to judge all research endeavours might be seen as one route through which to reinforce parity between quantitative and qualitative research. Some, like Hammersley (2007), seek to redress the perceived devaluing of qualitative research brought about by the drive towards evidence-based practice. This, he argues, might be achieved by a move towards at least some common criteria by which rigour might be established.

A helpful summary of some of the quality principles which have been proposed across the research literature can be found in this table from Johnson and Rasulova (2017):

 

Authors and their principles of quality and rigour:

Agar (1986): credibility, accuracy of representation, and authority of the writer
Guba (1981): truth value, applicability, consistency, and neutrality
Kirk and Miller (1986): consistency of results, stability over time, similarity within a given time period
Brink (1991): stability, consistency, equivalence
Lincoln and Guba (1985): credibility, confirmability, dependability, transferability, authenticity

 

They point out the degree of overlap between these criteria, but note that it is Lincoln and Guba’s (1985) principles which seem to have garnered the most traction. Helpfully, Johnson and Rasulova also summarise these principles:

 

Each principle is listed below with the question that underpins it (Pretty, 1994, p. 42), its nearest quantitative research concept, and techniques by which it may be established:

Credibility (quantitative concept: internal validity). How can we be confident about the ‘truth’ of the findings? Techniques: prolonged engagement, persistent observation, triangulation, peer debriefing, negative case analysis, member checks.

Confirmability (quantitative concept: objectivity). How can we be certain that the findings have been determined by the subjects and contexts of the inquiry, rather than by the biases, motivations and perspectives of the investigator?

Dependability (quantitative concept: reliability). Would the findings be repeated if the inquiry were replicated with the same (or similar) subjects in the same or similar context? Technique: an audit trail subject to external audit.

Transferability (quantitative concept: generalisation). Can we apply these findings to other contexts or with other groups of people? Technique: thick description.

Authenticity. Have people been changed by the process? To what extent did the investigation prompt action? Components: fairness, together with ontological, educative, catalytic and tactical authentication.

 

(The additional criterion of authenticity was added by Lincoln and Guba in 1986 in response to criticisms. The techniques listed against each principle are my addition, to provide a little more detail.)

A closer look at these criteria (and at other, similar lists) makes it apparent why they attracted criticism levelled at their positivist origins and at the way they appear to seek absolute trustworthiness. Coming firmly from the non- (or anti-) foundational camp, Smith (1984) goes so far as to reject the possibility of criteria completely, stating that

“The assumptions of naturalistic inquiry—that reality is mind-dependent and that there are multiple realities—are incompatible with the idea of an independently existing reality that can be known through a neutral set of procedures.”

This shift towards ontology makes plain the argument being made about truth or reality. The criteria offered thus far seem to have been formulated in an attempt to uncover the nature of a single, external, independent reality. They also assume that researcher and researched, knower and known, are independent of one another, that biases can be eliminated (or at least accounted for), and that objectivity should be the target. This holds at least for the first four of Lincoln and Guba’s criteria.

Arising from her proclivities for interpretive and poststructural research, and seeking to distance herself from the charge of pandering to criteria derived from post-positivism, Tracy (2010) developed her ‘Eight ”Big-Tent” Criteria.’ She contends that high-quality qualitative research can be judged by the presence of (a) a worthy topic, (b) rich rigor, (c) sincerity, (d) credibility, (e) resonance, (f) a significant contribution, (g) ethics, and (h) meaningful coherence. These are presented as ‘universal’ criteria; that is, in order to be considered quality research, each must be satisfied.

The various approaches presented so far fall into what Burke (2016) and Sparkes and Smith (2006) have termed the ‘criteriological approach.’ This propounds the belief that “criteria for judging qualitative research need to be, and can be, predetermined, permanent and applied to any form of inquiry regardless of its intents and purposes” (Smith and McGannon, 2017). They see the universal application of criteria as problematic in that it requires any and all research to be judged in ‘preordained and set ways’ (which, of course, is precisely what was proposed as one means of answering the criticisms of the positivists). They offer as an alternative the ‘relativist’ approach, in which criteria are instead a socially constructed list of characterising traits, applied in a manner that is contextually situated and flexible (Sparkes and Smith, 2009). These lists are not fixed in advance, but are open-ended, responding to what unfolds by adding or subtracting characteristics as required.

This feels somewhat woolly, leaving itself open to the charge from realists that ‘anything goes.’ However, proponents of a relativist approach counter on ontological grounds, claiming that there is no social reality independent of our interests to act as a reference point against which to judge claims to knowledge. The only option when judging the quality of research is to appeal to ‘time and place contingent lists of characteristics to sort out the good from the not so good’ (Smith and Hodkinson, 2009).

In a rather more radical take, Richardson and St. Pierre (2005) use ‘triangulation’ as a point of departure to move towards the notion of ‘crystallization.’ This conception moves beyond the two-dimensional, triple perspective of the triangle, and embraces the way crystals grow and change shape to produce a wealth of symmetries, facets, shapes and potential angles of approach. As such, they embrace a much broader range of methods, approaches to research and the subsequent analyses. Crystallized projects should be evaluated on the basis of their ‘substantive contribution, aesthetic merit, reflexivity, and impact,’ although Richardson and St. Pierre note these are only four of the criteria they use. So despite the unfortunate pseudoscientific associations with crystals, even here we have a list of criteria, although one acknowledged as having emerged from postmodern thinking.

My research project is neither built on a realist ontology nor a positivist epistemology, nor does it employ quantitative methods. It will come as no surprise, therefore, that the traditional criteria of validity, reliability, generalisability and objectivity hardly seem appropriate here. Nor, however, do those which are somewhat inspired by them, like confirmability, dependability and transferability. These all assume a reality ‘out there’ which can somehow be known ‘in here’. With an actor-network theory sensibility, however, reality can be out there, independent of the knower, but only if it is made that way.

“Realities are made. They are effects of the apparatuses of inscription. At the same time, since there are such apparatuses already in place, we also live in and experience a real world filled with real and more or less stable objects.” (Law, 2004)

What emerges from my research will be made through the practices and methods I have employed. The apparatuses by which validity, reliability and so forth are inscribed elsewhere will indeed be appropriate for some studies; for my research, however, I would argue they are not. As a consequence, I think this project is better served by a relativist, rather than a criteriological, approach. I also feel that using some of the original terminology (trustworthiness, rigour, etc.) carries baggage, so I propose to use terms which are less used and which, for me, better describe the characteristics on which this project should be judged. It is my hope that the reader, for it is they who will be the arbiter of the quality of this research, feels that it has been conducted with integrity, fidelity and honesty. To that end, in a forthcoming post, I’ll offer characteristics assembled from different studies (Elliott et al, 1999; Lincoln, 1995; Lincoln and Guba, 1985; Richardson, 2000; Spencer et al, 2012; Tracy, 2010) which are appropriate to the circumstances within which this research was conducted.

References

Burke, S. (2016). Rethinking validity and trustworthiness in qualitative inquiry. In B. M. Smith & A. C. Sparkes (Eds.), Routledge handbook of qualitative research in sport and exercise (pp. 330-339). London: Routledge.
Bryman, A. (2008). The end of the paradigm wars? In P. Alasuutari, L. Bickman & J. Brannen (Eds.), The SAGE handbook of social research methods (pp. 13-25). London: SAGE Publications. doi:10.4135/9781446212165
Hammersley, M. (1990). Reading ethnographic research : A critical guide. Longman.
Hammersley, M. (1992). What’s wrong with ethnography? : Methodological explorations. Routledge.
Hammersley, M. (2007). The issue of quality in qualitative research. International Journal of Research & Method in Education, 30(3), 287-305.
Johnson, S., & Rasulova, S. (2017). Qualitative research and the evaluation of development impact: Incorporating authenticity into the assessment of rigour. Journal of Development Effectiveness, 9(2), 263-276.
Lieblich, A., Tuval-Mashiach, R., & Zilber, T. (1998). Narrative research: Reading, analysis and interpretation (Applied social research methods ; 47). Thousand Oaks, Calif. ; London: Sage Publications.
Lincoln, Y. (1995). Emerging Criteria for Quality in Qualitative and Interpretive Research. Qualitative Inquiry, 1(3), 275-289.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Sage.
Lincoln, Y. S., & Guba, E. G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions for Evaluation, 1986(30), 73-84.
Morse, J. M., Barrett, M., Mayan, M., Olson, K., & Spiers, J. (2002). Verification strategies for establishing reliability and validity in qualitative research. International Journal of Qualitative Methods, 1(2), 13-22.
Nolan, M., Hanson, E., Magnusson, L., & Andersson, B. A. (2003). Gauging quality in constructivist research: The Äldre Väst Sjuhärad model revisited. Quality in Ageing and Older Adults, 4(2), 22-27.
Richardson, L. (2000). Evaluating ethnography. Qualitative inquiry, 6(2), 253-255.
Richardson, L., & St Pierre, E. A. (2005). Writing: A method of inquiry. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (3rd ed., pp. 959-978). Thousand Oaks, CA: Sage Publications.
Smith, J. K. (1984). The Problem of Criteria for Judging Interpretive Inquiry. Educational Evaluation and Policy Analysis, 6(4), 379-391.
Smith, J. K., & Hodkinson, P. (2009). Challenging Neorealism: A Response to Hammersley. Qualitative Inquiry, 15(1), 30-39.
Smith, B., & McGannon, K. (2017). Developing rigor in qualitative research: Problems and opportunities within sport and exercise psychology. International Review of Sport and Exercise Psychology, 1-21.
Sparkes, A. C., & Smith, B. (2009). Judging the quality of qualitative inquiry: Criteriology and relativism in action. Psychology of Sport & Exercise, 10(5), 491-497.
Spencer, L., Ritchie, J., Lewis, J., & Dillon, L. (2012). Quality in qualitative evaluation: A framework for assessing research evidence. London: Cabinet Office.
Tracy, S. (2010). Qualitative Quality: Eight “Big-Tent” Criteria for Excellent Qualitative Research. Qualitative Inquiry, 16(10), 837-851.
