Validity and reliability tests

Riege, Andreas M. (2003), "Validity and reliability tests in case study research: a literature review with 'hands-on' applications for each research phase", Qualitative Market Research, Vol. 6, Iss. 2, pp. 75-86.

Abstract

Despite the advantages of the case study method, its reliability and validity remain in doubt. Tests to establish the validity and reliability of qualitative data are important to determine the stability and quality of the data obtained. However, no single, coherent set of validity and reliability tests for each research phase in case study research is available in the literature. This article presents an argument for the case study method in marketing research, examining various criteria for judging the quality of the method and highlighting various techniques that can be applied to achieve objectivity and rigorous, relevant information for planning marketing actions. The purpose of this article is to invite further research by discussing the use of various scientific techniques for establishing validity and reliability in case study research. The article provides guidelines for achieving high validity and reliability in each phase of case study research.
Copyright MCB UP Limited (MCB) 2003
Keywords: Case studies, Qualitative techniques, Reliability, Validity


Introduction
Research in management and marketing traditionally has been based on positivist science, quantitatively oriented, along a linear deductive path. However, the position is changing for management and marketing research, which "seems implicitly to assume a realist perspective" (Hunt, 1990, p. 8), a perspective that is arguably more practitioner-oriented. Realists share the positivists' aim of explaining and predicting social phenomena; however, where phenomena have not yet been fully discovered and comprehended, realist investigation often seems more appropriate to identify phenomena and transform people's experiences into verbal accounts of the researcher (Donnellan, 1995; Tsoukas, 1989). Nevertheless, it still seems that qualitative research, such as case study research, is not really accepted as a rigorous alternative to established quantitative methods in marketing research. Then again, case study research is not always to be seen as a substitute for quantitative research. Within the realism paradigm, the focus is on the rigorously analytical method of case study research, which seems especially appropriate for two areas, the study of network systems and international business-to-business marketing (Johnston et al., 1999; Perry et al., 1999), rather than the merely descriptive or explanatory use of case studies. It also seems suitable for research into, for example, customer relationship marketing, consumer decision-making, customer satisfaction, and knowledge management.
To support the following discussion, similarities and differences between qualitative research and case study research, that is, how case study research differs from other qualitative methods, need to be clarified. There are several distinguishing factors when comparing qualitative methods. First, whereas the main objective of case study research is the development and construction of theory, in-depth interviews concentrate on obtaining rich and detailed information, convergent interviews on narrowing down the research focus, and focus groups on group interaction. Second, case studies require a medium to high level of prior theory, which is not the case for in-depth and convergent interviews, which require little or no prior theory. Third, the process and content of case studies usually is (semi-)structured and follows standard procedures, whilst other methods are more flexible, with in-depth interviews ranging from unstructured to structured, and focus groups and convergent interviews being very unstructured. Fourth, whereas the main strength of case study research, as well as of in-depth interviews, lies in their replication, convergent interviews' strength is their progressive and iterative nature and focus groups' strength is their synergistic effect in a group setting (based on Yin, 1994).
This paper has four main sections. Following a brief reflection on existing research and approaches, it first discusses prominent theoretical research paradigms to set the case study method in perspective against other research paradigms. Second, it describes a variety of design tests (mostly used in quantitative research approaches) and corresponding design tests (mostly used in qualitative research approaches) for establishing and improving validity and reliability. Third, it discusses case study techniques for conducting design/corresponding design tests and the phases of research in which these techniques may occur. Finally, it examines their applicability to various research paradigms.
Background
A discussion as to how the chosen research methodology can achieve validity and reliability forms an integral part of any rigorous research effort. However, few scientific techniques have been developed to address the scientific worth and rigour of qualitative research, in particular case study research (De Ruyter and Scholl, 1998). Rust and Cooil (1994) presented a comparative analysis of reliability approaches for both quantitative and qualitative data, and introduced a new approach to measure reliability by expressing reliability as the proportion of expected loss that is avoided when the data are used to make decisions. However, they did not present specific criteria highlighting how to achieve a higher degree of reliability in either qualitative research or specifically case study research. This paper will address this issue by providing some guidelines for case study research.
Furthermore, Healy and Perry (2000) established six specific criteria to judge the validity and reliability of case study research, within the realism paradigm. However, what they provided was, first, a thorough comparison and discussion on theoretical paradigms and philosophical foundations (building on previous studies by Guba and Lincoln (1994); Perry et al. (1999); Riege and Nair (1997)), concluding that the realism perspective appears to be the most appropriate one for marketing researchers. We share their opinion. Second, they presented a comparative analysis of, and a link between, design tests in qualitative research and each of the theoretical paradigms (see also Riege and Nair, 1997). This discussion, however, did not conclude as to what “really needs to be done” to establish validity and reliability in case study research. To address this gap, this paper provides several techniques that will guide both academic researchers in marketing and applied marketing researchers through each phase of their research – from research design stage to report writing – in their attempt to establish validity and reliability.
Whilst the literature offers some suggestions as to what techniques can enhance the validity and reliability of qualitative research, there is little indication as to what techniques should be used to enhance validity and reliability in case study research. This paper suggests that several of the "generic" quantitative and qualitative techniques, and others, also can be applied to the case study method, based upon their application in rigorous case study research of over 40 master's by research and doctorate students from Australia, New Zealand and the UK. That is, numerous students tested the suitability of applying various design tests, as discussed in the following sections, to enhance the quality of their work. Note that this is not an empirical study based on the various approaches of students to address validity and reliability in case study research. Essentially, I also argue that a number of design tests for establishing validity and reliability can be used for judging the quality of case study research, irrespective of their commonly more qualitative or quantitative nature or of their theoretical paradigm.
In particular, this paper discusses two core questions:
(1) Are the tests and techniques for establishing the validity and reliability of qualitative and quantitative marketing research significantly different?
(2) How far can they be used for case study research?
The focus of the argument lies in the judgement of the quality of theory building, and of the rigour of analytical case studies. That is, it applies regardless of whether researchers use case studies as initial studies to define or refine initially stated research propositions, or as their main data collection method with the aim of developing and building theory. A summary of available tests and techniques for establishing validity and reliability in case study research is shown in Table I, which provides the main platform for the following discussion.
Theoretical paradigms and the nature of case studies
The following brief section describes four different paradigms to research methodology:
(1) positivism;
(2) realism;
(3) critical theory; and
(4) constructivism.
Although a discussion of each of these four belief systems or worldviews that guide any researcher provides little or no contribution to the theory of market research, it is necessary as a starting point for the main discussion.
Positivists believe that natural and social sciences are composed of a set of specific methods for trying to discover and measure independent facts about a single apprehendable reality, which is assumed to exist, driven by natural laws and mechanisms. The aim of science is to build up objective and commensurable causal relationships showing how constructs of discrete elements work and perform from a relatively secure base, taking a broad view. Positivism is commonly characterised by a deductive method of inquiry seeking theory confirmation in value-free, statistical generalisations (Deshpande, 1983; Hirschman, 1986; Tsoukas, 1989).

Realists believe that natural and social sciences are capable of discovering and knowing reality, although not with certainty. Realists acknowledge differences between the real world and their particular view of it. They try to construct various views of this reality and aim to comprehend phenomena in terms of which ones are relative in place and time. In contrast to positivism, realism does not rely as much on deductive research inquiries, but sees more appropriate research methods in those that have an inductive nature for discovering and building theory rather than testing theory through analytical generalisations. Qualitative methods such as case studies commonly follow realistic modes of inquiry, for the main objectives are to discover new relationships of realities and build up an understanding of the meanings of experiences rather than verify predetermined hypotheses (Hunt, 1990; Perry and Coote, 1994).

Critical theory assumes apprehendable social, political, cultural or economic realities incorporating a number of virtual or historical structures of these realities that are taken as real. Researchers and their investigated subjects are linked interactively, with the belief system of the researcher influencing the inquiry, which requires a dialogue between researcher and subject. Hence, no objective or value-neutral knowledge exists, for all claims are relative to the values of the researcher.

The essence of constructivism is multiple apprehendable realities, which are socially and empirically based, intangible mental constructions of individual persons. Similar to critical theory, assumptions are subjective, but the created knowledge depends on the interaction between and among researcher and respondent(s), aiming at increasing an understanding of the similarities and differences of the constructions that both the researcher and respondent(s) initially held, in order to become more aware of, and informed about, the content and meaning of these constructions (Anderson, 1986). All constructivists believe that knowledge is theory-driven; a separation of researcher and research subject/object is not feasible, nor is the separation between theory and practice (Mir and Watson, 2000). The methodology of the critical theory and constructivism paradigms is dialectical, that is, it is focused on an understanding and reconstruction of the beliefs that individual people initially hold, trying to achieve a consensus while still being open to new interpretations as information and sophistication improve (Guba and Lincoln, 1994).
Tests for establishing validity and reliability
Next, various design and corresponding design tests recommended in the literature are described and their appropriateness for evaluating validity and reliability in case study research is examined. Table II complements Table I and sets the above discussed research paradigms of positivism, realism, critical theory, and constructivism in perspective to various tests used to evaluate the quality of case study research. It illustrates that in realism-oriented research both sets of design tests, first construct validity, internal and external validity, and reliability (which are well-known from quantitative research approaches), and second, credibility, transferability, dependability, and confirmability (which refer to more qualitative approaches) can and should be incorporated to enhance the quality of case study methods in marketing research.
Table I


The case study method is about theory construction and building, and is based on the need to understand a real-life phenomenon with researchers obtaining new holistic and in-depth understandings, explanations and interpretations about previously unknown practitioners' rich experiences, which may stem from creative discovery as much as research design. That is, design tests are not the primary drivers of rigorous case study research and could even suppress the discovery of new meaningful insights and, as a result, not maximise the quality of the research.
Design tests
The following parts highlight how the four design tests of construct validity, internal and external validity, and reliability can improve the quality of case study design. Definitions for each test are not provided, for they can be found in various textbooks. Construct validity establishes appropriate operational measures for the theoretical concepts being researched. Case study research generally is perceived to be more subjective than quantitative research methodologies because researchers usually have close and direct personal contact with the organisations and people examined. Hence, researchers need to make efforts to refrain from subjective judgements during the periods of research design and data collection to enhance construct validity.
Internal validity, as it is traditionally known in quantitative research, refers to the establishment of cause-and-effect relationships, while the emphasis on constructing an internally valid research process in case study research lies in establishing phenomena in a credible way. In particular, case study research intends to find generative mechanisms, looking for the confidence with which inferences about real-life experiences can be made. That is, the researcher not only highlights major patterns of similarities and differences between respondents' experiences or beliefs but also tries to identify what components are significant for those examined patterns and what mechanisms produced them. External validity is concerned with the extrapolation of particular research findings beyond the immediate form of inquiry to the general. While quantitative research, for example using surveys, aims at statistical generalisation as a form of achieving external validity, case studies rely on analytical generalisation, whereby particular findings are generalised to some broader theory. The focus lies on an understanding and exploration of constructs, that is, usually the comparison of initially identified and/or developed theoretical constructs and the empirical results of single or multiple case studies.

Table II
Reliability refers to the demonstration that the operations and procedures of the research inquiry can be repeated by other researchers who then achieve similar findings; that is, the extent to which findings can be replicated, assuming that, for example, interviewing techniques and procedures remain consistent. In case study research this can raise problems, as people are not as static as the measurements used in quantitative research, and even if researchers were concerned to assure that others could precisely follow each step, results may still differ. Indeed, data on real-life events collected by different researchers may not converge into one consistent picture. However, possible differences also can provide a valuable additional source of information about the cases investigated.
Corresponding design tests
Several authors suggest four corresponding sets of tests for establishing quality in qualitative research design in general (e.g. Hirschman, 1986; Robson, 1993). The four corresponding design tests of confirmability, credibility, transferability and dependability complement the four design tests discussed above and shown in Table I. The concepts in the corresponding design tests are analogous to the concepts of validity and reliability in quantitative research, as shown in Table II. Next, the meaning of each of the corresponding tests is discussed.
Confirmability is analogous to the notion of neutrality and objectivity in positivism, corresponding closely to construct validity. This test assesses whether the interpretation of data is drawn in a logical and unprejudiced manner. That is, to assess the extent to which the conclusions are the most reasonable ones obtainable from the data. Some useful questions (see Miles and Huberman, 1994, pp. 278-9) to be asked of a qualitative study such as case study research about this issue are:
Are the study’s general methods and procedures described explicitly and in detail?
Do we feel that we have a complete picture, including “backstage information”?
Are study data retained and available for reanalysis by others?
Credibility is the parallel construct to internal validity. It involves the approval of research findings by either interviewees or peers as realities may be interpreted in multiple ways. The purpose of this test is to demonstrate that the inquiry was carried out in a way which ensures credibility. Some useful questions to be asked to clarify this issue are:
How rich and meaningful or “thick” are the descriptions?
Are the findings internally coherent?
Are concepts systematically related?
Transferability is analogous to the function of external validity or generalisation in conventional quantitative research. This test is achieved when the research shows similar or different findings of a phenomenon amongst similar or different respondents or organisations, that is, when analytical generalisation is achieved. Here we usefully may ask the following questions to clarify this issue of transferability:
Do the findings include enough "thick descriptions" for readers to assess the potential transferability to their own settings?
Are the findings congruent with, connected to, or confirmatory of prior theory?
Dependability is analogous to the notion of reliability in quantitative research. The purpose of this test is to show indications of stability and consistency in the process of inquiry. The underlying issue here is whether the procedures or techniques used in the process of study are consistent. The questions that can be asked to clarify this issue are:
Are the research questions clear and are the features of the study design congruent with them?
Have things been done with reasonable care?
Guidelines for conducting design/corresponding design tests
There are several techniques which can enhance the validity and reliability in case study research. The following two sections highlight a variety of techniques suggested in the literature and show their appropriateness to enhance the quality of case study research, referring each technique back to the phase of research in which it should occur. Note that researchers need not perform specific research phases in a sequential manner but can move back and forth between phases. This “moving around” or iteration occurs, in particular, between data collection and analysis in such a way that preceding operations shape subsequent ones (Spiggle, 1994).
Case study techniques for design tests
The literature suggests several measures to increase the soundness of quantitative research applying the design tests of construct validity, internal and external validity, and reliability. These tests also can be applied to the case study method in the following way:
(1) Techniques which may be used to increase construct validity:
Use of multiple sources of evidence in the data collection phase, such as the triangulation of interview tapes, documents, artifacts, and others, for protection against researcher bias (Flick, 1992; Perakyla, 1997).
Establishment of a chain of evidence in the data collection phase, that is, use of verbatim interview transcripts and notes of observations made during field trips which allow the supply of sufficient citations and cross checks of particular sources of evidence (Griggs, 1987; Hirschman, 1986).
Reviewing of draft case study reports in the report-writing phase. That is, letting key informants and research assistants review interview transcripts, parts of the data analysis and final report outlining the findings, and if necessary change unclear aspects (Yin, 1994).
(2) Techniques which may be used to increase internal validity:
Use of within-case analysis, and cross-case and cross-nation pattern matching, in the data analysis phase (Miles and Huberman, 1994).
Display of illustrations and diagrams to assist explanation building in the data analysis phase (Miles and Huberman, 1994).
Assurance of internal coherence of findings in the data analysis phase, which can be achieved by cross-checking the results (Yin, 1994).
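The cross-case pattern matching recommended above is often supported by a simple case-by-theme display, in the spirit of Miles and Huberman's data displays. The article prescribes no particular tooling; as a purely illustrative sketch, with hypothetical case and theme names, such a matrix of which coded themes recur across cases might be built as follows:

```python
def cross_case_matrix(cases):
    """Build a case-by-theme display for cross-case pattern matching.

    `cases` maps a case name to the set of themes coded in that case;
    the result marks with "x" which themes appear in which cases, so
    recurring and case-specific patterns become visible at a glance."""
    themes = sorted({t for ts in cases.values() for t in ts})
    rows = [(name, ["x" if t in ts else "" for t in themes])
            for name, ts in sorted(cases.items())]
    return themes, rows

# Hypothetical coded themes from three case organisations
cases = {
    "Case A": {"price pressure", "relationship trust"},
    "Case B": {"relationship trust", "service quality"},
    "Case C": {"price pressure", "relationship trust", "service quality"},
}
themes, rows = cross_case_matrix(cases)
print(" | ".join(["case"] + themes))
for name, marks in rows:
    print(" | ".join([name] + marks))
```

A display like this makes the "major patterns of similarities and differences" concrete: a theme marked in every row is a candidate cross-case pattern, while a theme marked in one row only may point to a case-specific mechanism.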
(3) Techniques which may be used to increase external validity:
Use of a (literal and/or theoretical) replication logic in multiple case studies in the research design phase, for example, 15 cases across two industries (e.g. manufacturing and services) in three countries (e.g. Australia, UK, USA) (Eisenhardt, 1989; Parkhe, 1993).
Definition of the scope and boundaries in the research design phase, which helps to achieve reasonable analytical generalisations rather than statistical generalisations for the research (Marshall and Rossman, 1989).
Comparison of evidence with the extant literature in the data analysis phase, to clearly outline contributions and generalise those within the scope and boundaries of the research, not to a larger population (Yin, 1994).
(4) Techniques which may be used to increase reliability:
Give full account of theories and ideas for each research phase (LeCompte and Goetz, 1982).
Assurance of congruence between the research issues and features of the study design in the research design phase (Yin, 1994).
Record observations and actions as concretely as possible (LeCompte and Goetz, 1982).
Development and refinement of the case study protocol in the research design phase can be achieved by conducting several pilot studies testing the way of questioning and its structure (Eisenhardt, 1989; Mitchell, 1993; Yin, 1994).
Use of a structured or semi-structured case study protocol (Yin, 1994).
Use multiple researchers who continually communicate about methodological decisions (LeCompte and Goetz, 1982).
Record data mechanically, for example, by using a tape recorder or video tape (Hair and Riege, 1995).
Development of a case study database at the end of the data collection phase, to provide a characteristic way of organising and documenting the mass of collected data (Lincoln and Guba, 1985).
Assurance of meaningful parallelism of findings across multiple data sources (Yin, 1994).
Use peer review/examination (LeCompte and Goetz, 1982).
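Where multiple researchers code the same material, as recommended above, the degree of inter-coder agreement can also be quantified. The article does not prescribe a particular statistic; one common choice, shown here as an illustrative sketch with hypothetical coding data, is Cohen's kappa, which corrects raw agreement for the agreement expected by chance:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Proportion of items the two coders labelled identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two researchers independently code ten interview passages into themes
a = ["price", "trust", "trust", "service", "price",
     "trust", "service", "price", "trust", "service"]
b = ["price", "trust", "service", "service", "price",
     "trust", "service", "trust", "trust", "service"]
print(round(cohens_kappa(a, b), 2))  # prints 0.7
```

A kappa near 1 indicates that the coding scheme is applied consistently across researchers, which directly supports the replicability that the reliability test demands; a low kappa signals that the coding categories or the protocol need refinement before analysis proceeds.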
Qualitative techniques for corresponding design tests
Different qualitative techniques have been suggested in the literature by various researchers, shown in column 4 of Table I, for establishing validity and reliability in qualitative research with regard to the four corresponding design tests of confirmability, credibility, transferability and dependability. These qualitative techniques, which also can be applied to the rigorous case study method, are highlighted next, with cross-references to the phase of research in which each technique is to be used.
(1) Techniques which may be used for establishing confirmability:
Use of the confirmability audit during the data collection and data analysis phase of the research (Lincoln and Guba, 1985). That is, the examination of raw data, findings, interpretations and recommendations. In particular, the audit involves retention of the raw data such as field notes, tapes, documents and others during the data collection stage for later inspection by the auditor if required. The next stage in the audit is for the auditor to judge whether inferences based on the data are logical during the data analysis phase as well as checking the quality of the findings and interpretations.
(2) Techniques which may be used for establishing credibility:
Use of triangulation techniques such as multiple sources of evidence, investigators and methods during the data collection and data analysis phase of the research, which enhance credibility (Lincoln and Guba, 1985).
Use of peer debriefing technique such as presenting the data analysis and conclusions to colleagues on a regular basis during the data analysis stage so as to foster subsequent credibility (Hirschman, 1986; Robson, 1993).
Use of member checks technique by presenting the findings and conclusions to the respondents and to take their reaction into account during the report writing phase of the research (Lincoln and Guba, 1985).
Credibility also can be achieved during the research design stage by taking into account the researcher’s assumptions, worldview and being theoretically orientated (Merriam, 1988).
Researcher self-monitoring is another technique for establishing credibility, and this occurs during the data collection and data analysis phase (Merriam, 1988). This technique involves the researcher carrying out the inquiry in such a way that ensures credibility.
(3) Techniques which may be used for establishing transferability:
Develop a case study database during the data collection phase of the research, which includes a “thick description” for readers to assess the potential transferability (Lincoln and Guba, 1985).
Use of cross-case and, where appropriate, cross-nation analysis in the data analysis stage of the research (Miles and Huberman, 1994).
Use of specific procedures for coding and analysis such as symbols, signs and others during the data analysis phase helps to ensure transferability (Yin, 1994).
(4) Techniques which may be used for establishing dependability:
Use of the dependability audit during the research design phase of the research (Lincoln and Guba, 1985). This audit involves the examination and documentation of the process of inquiry and this occurs in the research design stage. The auditor examines whether the processes followed in the inquiry are in order, understandable, well documented, providing mechanisms against bias, thus establishing dependability.
Dependability also can be achieved in the research design phase by safeguarding against the researcher's theoretical position and biases (Hirschman, 1986).
Implications for further research
This paper established criteria for judging the quality of the case study method by synthesising several tests and techniques for establishing validity and reliability in case study research, with special reference to marketing research. However, the case study method also can be applied to other research areas such as management and education research. For example, in the field of management research, it is appropriate for research into strategic management (Godfrey and Hill, 1995), organisational behaviour (Donnellan, 1995), and human resource management. The same rigour and thoroughness can be applied to each phase of the research to ensure its overall quality. Thus, an in-depth and rigorous case study approach can assist both management research practitioners and academics. One of our objectives was not only to provide a comprehensive overview of techniques but also to provide opportunities for further research in case study research. Further research may discover additional techniques, which then can be used along with the techniques already mentioned, to further increase the rigour and thoroughness, thus enhancing the overall quality of, and understanding of the issues related to, case study research.
Moreover, a comparison of the positivism and realism paradigms to research methodology, with special reference to marketing research, provides opportunities for further research in research methodology comparison in the areas of management and education research. As with any scientific research, if there are doubts about the research design, data collection, analysis and/or findings due to issues related to validity and reliability, other researchers can replicate the investigation with a similar case study research subject. If the investigation were considered to be correct, accurate and rigorous, further research can build on this. Of course, findings from exploratory case study research (as with most other research) rarely are accepted immediately, and most likely require additional research, either in the form of further qualitative work or of quantitative testing.
Conclusion
The validity and reliability of case study research is a key issue for both marketing research practitioners and academics. A high degree of validity and reliability provides not only confidence in the data collected but, most significantly, trust in the successful application and use of the results in managerial decision-making. The four design tests of construct validity, internal validity, external validity and reliability are commonly applied to the theoretical paradigm of positivism. Similarly, however, they can be used for the realism paradigm, which includes case study research. The realist approach needs to be accepted as a scientific approach to generate "rich" not "soft" data, and is suggested as an alternative to the more traditional positivist paradigm. In addition to using the four "traditional" design tests, the application of four "corresponding" design tests is recommended to enhance validity and reliability, that is, credibility, trustworthiness (transferability), confirmability and dependability. Thus, we argue that all eight design tests, traditional and corresponding, are applicable to the realism paradigm, which encompasses case study research, as compared with the positivism paradigm, where only the four "traditional" design tests are applicable. Applying all eight design tests rigorously therefore enhances the quality, validity, and reliability of case study research. In particular, this paper emphasises several case study techniques and advice (Table I, column 3), as well as qualitative techniques (Table I, column 4), all of which are recommended to achieve high quality in case study research through these eight tests. Note that different techniques apply depending on the phase of research, that is, research design, data collection, data analysis, and report writing.
We expect that this comprehensive overview of techniques will offer a practical, step-by-step guide to addressing and establishing validity and reliability in case study research for both marketing practitioners and academics. By attending to every single phase of the research project, researchers can have confidence in their data and in the controls employed over them, as well as in their results and conclusions.