
Featured Articles from Around the Globe

Our Theories Are Only As Good As Our Methods

Authors: John P. Barile and Anna R. Smith

Abstract:

Jason, Stevens, Ram, Miller, Beasley, and Gleason (2016) argue that the vast majority of theories in community psychology are actually frameworks, while specific and testable theories remain scarce. Suggesting that community psychology could benefit from such theories, the authors identify several impediments to theory development: researcher unwillingness, difficulty defining and operationalizing constructs, and difficulty capturing context. This response addresses the last challenge, highlighting the importance of using appropriate methods when developing testable theories. The difficulty is that context matters, and the vast majority of theories are conceived, tested, and “validated” within a single context, most often at the individual level. Therefore, as the context changes, so must the theory and, arguably, the methods. We propose that community psychology’s frameworks provide a useful starting point for theory development, and that an increased focus on innovative methods that account for and measure context is a prerequisite to developing testable, ecologically relevant theories.


Article:

Download the PDF version to access the complete article, including tables and figures.

Community psychology has long been at the forefront of proposing comprehensive frameworks and/or theories that attempt to understand and address many types of complex social issues. What separates community psychology from other fields addressing these issues is 1) how we define the problem and 2) how we approach the problem. Community psychologists tend to define the problem in a way that promotes social justice and recognizes person-environment interaction, and approach the problem by targeting change at the appropriate level. It is our approach to capturing these phenomena that has limited our ability to develop established theories from our frameworks. Many of these theories consider context and person-environment interaction, yet they are measured and tested at the individual level, using individual-level methods. In other words, the field of community psychology has been successful in defining problems from an ecological perspective and directing interventions at appropriate levels but has failed to utilize methods that appropriately evaluate them. In 2005, Luke illustrated the many ways in which community psychologists can and should capture context in their research. Despite the publication of this now foundational paper, progress has been slow, and many papers within the field continue to limit their analyses to the individual level, despite the development and availability of even more context-based approaches (e.g., multi-level latent class analysis). Ultimately, community psychology’s methods still do not match its theoretical approach.
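The point about capturing context can be made concrete with a toy calculation. The intraclass correlation (ICC) quantifies how much of an outcome’s variance lies at the community rather than the individual level; when it is nonzero, purely individual-level analyses miss part of the story. The following minimal simulation is our own hypothetical sketch (not from the article or its cited works), estimating the ICC with standard one-way ANOVA estimators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: individuals nested in communities, where the outcome is a
# community effect (between-community sd = 1) plus individual noise (within sd = 2).
n_communities, n_per = 50, 40
community_effect = rng.normal(0.0, 1.0, n_communities)
y = community_effect[:, None] + rng.normal(0.0, 2.0, (n_communities, n_per))

# One-way ANOVA estimate of the intraclass correlation (ICC): the share of
# outcome variance attributable to community context rather than individuals.
group_means = y.mean(axis=1)
ms_between = n_per * group_means.var(ddof=1)
ms_within = ((y - group_means[:, None]) ** 2).sum() / (n_communities * (n_per - 1))
icc = (ms_between - ms_within) / (ms_between + (n_per - 1) * ms_within)

print(f"estimated ICC: {icc:.2f}")  # true ICC here is 1 / (1 + 4) = 0.20
```

An estimated ICC near 0.20 means a fifth of the variance is contextual, which an individual-level model would either ignore or misattribute.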

This problem is not unique to community psychology. Andrew Hayes, a well-known methodologist within the field of communication, proposed the following explanation for the limited incorporation of multilevel approaches in research:

To me, a plausible explanation is a lack of awareness of the statistical tools available rather than their lack of availability, combined with a dearth of good examples of multilevel research and analysis. Although some may argue that the theoretical horse should pull the statistical cart, I argue that the horse and cart are mutually interdependent and should not be separated or ordered in terms of importance to the research enterprise. Just as statistical techniques can help us to answer our theoretical questions, they can also contribute to the formulation of the substantive and theoretical questions we ponder. In other words, knowing what is possible analytically can influence how we think theoretically. (Hayes, 2006, p. 386)

Supporting this idea, Anthony Greenwald (2012) authored a striking paper demonstrating that a researcher is much more likely to be awarded a Nobel Prize for contributions to method development than for theory development. Greenwald argues that both theories and methods are essential to scientific knowledge production but that methods often generate previously inconceivable data, which, in turn, can inspire previously inconceivable theories.

We propose that comprehensive training in innovative methodologies tailored to the problems specific to community psychology research is critical to both the formulation and testing of comprehensive ecological theories. Unfortunately, many community psychologists are still taught research methods that fall within the confines of traditional psychology, while the methods required to study community-based phenomena are very different from those of traditional experiment-based psychology. If our goal is to develop ecologically valid theories, we must be able to pull from diverse sets of quantitative and qualitative methodologies that can capture this complexity. Moreover, we should not try to win a game of chess with only one move: if we seek to transform our frameworks into robust theories, we must be able to leverage a wide range of tools that capture the complexities of the world in which we live. Therefore, diversifying the methodological portfolios of community psychologists may provide researchers the tools necessary to identify and validate theories relevant to the field.

Capturing context should include the use of creative quasi-experimental designs and statistical techniques that allow researchers to develop theories that match our methods. Contrary to recommendations by Jason et al. (2016), we caution community psychologists against relying too heavily on randomized controlled trials as a method to ensure scientific validity. When capturing context, randomized controlled trials become problematic because they attempt to control group assignment (e.g., who gets the intervention) within a very non-randomized environment. In fact, with the right methodological tools, quasi-experimental approaches can lead to more ecologically valid conclusions than experimental approaches, largely because, absent a randomized experiment, individuals self-select into specific programs, relationships, and even neighborhoods. While the extent to which each of these choices is truly independent of one’s circumstances is debatable, randomized experiments are the only circumstances in which individuals’ choices are made entirely for them.

Consider these conflicting examples of experiments on substance abuse treatment programs. Both Alcoholics Anonymous (AA) and harm reduction approaches have been found to be effective for reducing substance abuse (Moos & Moos, 2006; Marlatt & Witkiewitz, 2002; Tonigan, Toscova & Miller, 1995). However, Kownacki and Shadish’s (1999) meta-analysis revealed that randomized trials examining the impact of AA found it to be no more effective than alternative treatments, such as harm reduction, and that in some cases AA appeared to lead to worse outcomes. But this finding may not suggest that AA is ineffective. Instead, it is quite possible that when researchers randomized individuals to either an AA or a harm reduction approach without considering people’s inclination to choose the program that best fits their needs, neither program worked as well as if the same individuals had been allowed to self-select into a program. If we were to rely solely on randomized trials to come to these conclusions, we would neglect the fact that individuals often choose the programs, settings, and even environments that best fit their needs. Moreover, individuals and environments work in tandem, and by randomizing one or the other, we potentially ignore inevitable person-environment fit interactions that are critical to their success or failure.
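The self-selection argument above can be illustrated with a toy simulation. In this hypothetical Python sketch (ours, not from the article or the cited studies), each person’s outcome depends on their latent “fit” with the program they end up in; letting people choose the better-fitting program produces higher average outcomes than assigning programs at random, even though both programs are identical on average:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical latent "fit" of each person with program A and with program B.
fit_a = rng.normal(0.0, 1.0, n)
fit_b = rng.normal(0.0, 1.0, n)

def outcome(fit):
    # Outcome improves with person-program fit, plus individual noise.
    return fit + rng.normal(0.0, 1.0, len(fit))

# Self-selection: each person enrolls in whichever program fits them better.
self_selected = outcome(np.maximum(fit_a, fit_b)).mean()

# Randomization: program assigned by coin flip, ignoring person-program fit.
coin = rng.integers(0, 2, n).astype(bool)
randomized = outcome(np.where(coin, fit_a, fit_b)).mean()

print(f"mean outcome, self-selected: {self_selected:.2f}")
print(f"mean outcome, randomized:    {randomized:.2f}")
```

Under these assumed parameters, randomization erases the person-environment fit interaction and both programs look mediocre, mirroring the pattern described for the AA trials.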

The consequences of limiting individuals’ ability to engage in the real-life experience of self-selection go beyond evaluating individual-level interventions (e.g., AA, harm reduction). In order for us to develop sound and ecologically valid theories, it is imperative that our studies appropriately mimic the conditions in which we live, because without understanding the context, we cannot fully understand the mechanisms of change, regardless of whether the study is a true experiment. Robert Sampson, a highly regarded sociologist, may have best summarized the limitations of experiments in social science research:

Experiments have long been cloaked in the mantle of science because of their grounding in the randomization paradigm, the putative cure for the ills of selection…As important as experiments are, however, they have tended toward individual reductionism and have obscured the causes of effects and operative social mechanism. Any deep understanding of causality requires a theory of mechanisms no matter what the experiment or statistical method employed. Estimation techniques, in other words, do not equal causal explanatory knowledge. (Sampson, 2008, p. 227)

Sampson’s perspective reflects the need for innovative methodologies that result in a rich understanding of the contextual environments often disregarded in highly controlled experiments.

Community psychologists know that the same intervention is rarely effective in all settings. Therefore, if we want to move beyond conducting multiple one-off examinations of specific interventions in specific contexts without a generalizable theory, we will need to expand the methods we use to capture the dynamic multi-level systems that characterize the ecological environments in which we live. We advocate increased emphasis on innovative and multi-level methods in community psychology graduate programs, conferences, and textbooks. Ultimately, our theories – and practice – will only be as good as our methods.

References

Greenwald, A. G. (2012). There is nothing so theoretical as a good method. Perspectives on Psychological Science, 7, 99-108.

Hayes, A. F. (2006). A primer on multilevel modeling. Human Communication Research, 32, 385-410.

Jason, L. A., Stevens, E., Ram, D., Miller, S. A., Beasley, C. R. & Gleason, K. (2016). Theories in the field of community psychology. Global Journal of Community Psychology and Practice.

Kownacki, R. J. & Shadish, W. R. (1999). Does Alcoholics Anonymous work? The results from a meta-analysis of controlled experiments. Substance Use & Misuse, 34, 1897-1916.

Luke, D. A. (2005). Getting the big picture in community science: Methods that capture context. American Journal of Community Psychology, 35, 185-200.

Marlatt, G. A. & Witkiewitz, K. (2002). Harm reduction approaches to alcohol use: Health promotion, prevention, and treatment. Addictive Behaviors, 27, 867-886.

Moos, R. H. & Moos, B. S. (2006). Participation in treatment and Alcoholics Anonymous: A 16-year follow-up of initially untreated individuals. Journal of Clinical Psychology, 62, 735-750.

Sampson, R. J. (2008). Moving to inequality: Neighborhood effects and experiments meet social structure. American Journal of Sociology, 114, 189-231.

Tonigan, J. S., Toscova, R. & Miller, W. R. (1995). Meta-analysis of the literature on Alcoholics Anonymous: Sample and study characteristics moderate findings. Journal of Studies on Alcohol, 57, 65-72.


Authors

John P. Barile and Anna R. Smith

John (Jack) Barile is an assistant professor at the University of Hawai‘i at Manoa in the Department of Psychology. Jack's research centers on ecological determinants of health-related quality of life. This line of research includes the study of individual-level factors, such as socioeconomic status, age, and ethnicity, as well as contextual factors, such as housing conditions and community violence. Methodologically, Jack is particularly interested in the use of multilevel structural equation modeling and quasi-experimental techniques to assess the impact of individual and community-level programs on quality of life outcomes. Jack completed a two-year fellowship at the US Centers for Disease Control and Prevention after earning a doctoral degree in community psychology from Georgia State University and a Bachelor of Science degree in health sciences from Old Dominion University.

Anna R. Smith is a Community and Cultural Psychology doctoral student at the University of Hawai‘i at Manoa. Her research interests include university-community partnerships, program development and evaluation, communities in disaster areas, and developing research methodologies that take an intersectional approach.


Comments (2)


Jack Barile (University of Hawaii at Manoa) July 13, 2016

Regarding randomized controlled trials (RCTs), we are largely concerned with the pervasive belief that RCTs are the gold standard for theory development and that quasi-experimental designs are “weaker forms of theory testing” (Jason et al., 2016, p. 18). We do not believe that Jason et al. (2016) were suggesting that RCTs are necessarily the key to theory development, but we are interested in encouraging researchers to be methodologically nimble. We argue that the nature of the specific research question, including researchers’ ability to incorporate contextual variables into their study design, is imperative to determining which methodological approach is best – i.e., we find that blanket statements regarding the inferiority of quasi-experimental designs are inappropriate. For example, Grossman and Mackenzie (2005) present a strong argument for why RCTs are not the gold standard, even in medical research. Assuming that quasi-experimental and observational designs are inherently weaker potentially jeopardizes the funding or publication of well-conceived studies. When attempting to understand a phenomenon, we find that understanding its context is the highest priority, and if this can be achieved within an RCT that does not ignore the ever-present potential for self-selection, we absolutely support its use. We believe there are times when RCTs are appropriate and likely superior, but this determination should be based on the research questions at hand. We find that well-conceived quasi-experimental and observational designs that attend to self-selection and contextual factors often lead to the most ecologically valid findings.

Grossman, J. & Mackenzie, F. J. (2005). The randomized controlled trial: Gold standard, or merely standard? Perspectives in Biology and Medicine, 48(4), 516-534.

Leonard Jason (DePaul University) July 11, 2016

In the final article in this special issue, titled “Our Reflections on the Reactions to ‘Theories in the Field of Community Psychology,’” Leonard A. Jason, Ed Stevens, Daphna Ram, Steven A. Miller, Christopher R. Beasley, and Kristen Gleason wrote the following reaction to this article:

"We agree with Barile and Smith (2016) that randomized controlled trials (RCTs) have their strengths, but also important and inherent weaknesses, particularly when social context needs to be considered. Our article does not suggest the sole use of RCTs; it only suggested specifying hypotheses a priori. Barile and Smith (2016) argue that too strong a focus on generalizable theory, and especially on randomized controlled trials as a means to achieve rigorous theory, can hamper the exploration of innovative methodological approaches that capture context. Indeed, their argument that randomized control trials often artificially erase some important person-context interactions is particularly relevant to our work as community psychologists. Until researchers have a very solid idea of what the randomized controlled trials should be looking at (i.e., what actually is “ignorable” in the context), multivariate methods can often be used to obtain more veridical results. In other words, we must not implement randomized controlled trials prematurely—indeed, this is more appropriate to occur when the very theoretical understanding we advocate for, has proceeded to a point of some reasonable verification.

As proposed by Barile and Smith (2016), two recent papers (Jason et al., 2014; Light et al. in press) specifically propose the social network framework as one way to capture social context under some circumstances. These papers are as methodological as they are substantive in tone, arguing that the theoretical conception of small group dynamics as complex systems fits quite naturally with the methodological approach supplied by networks, and the longitudinal modeling framework of conditional Markov processes (e.g. Snijders, van de Bunt, & Steglich’s Stochastic Actor modeling (2010)). Perhaps, as the network approach (and other more contextual approaches such as multilevel models) becomes more familiar, graduate programs in Community Psychology may routinely include training in these methods."





Keywords: Theory, Science, Community Psychology, Framework