The Reverse Progress Problem of the Behavioral Sciences

Human behaviors of prevention and self-management play a leading role in the treatment of most diseases, including HIV, cancer, diabetes, and heart disease. Conversely, risk-taking behaviors and inadequate self-management lead to about 1.2 million American deaths annually, a number that did not change significantly between 1990 and 2000 (Mokdad, Marks et al. 2004) despite extensive attention from the behavioral research community. Meanwhile, behavioral scientists across the disciplines continue to work on non-health problems ranging from technology adoption to recycling behavior.

Behavioral science researchers are coming to recognize that no individual can find and incorporate all related disciplinary knowledge. The same charge may be leveled against every research area of the behavioral sciences: there are simply too many overlapping research areas across disciplines for any single person to integrate or utilize. Chemistry faced an analogous problem 140 years ago:

When scientists first tried to describe the physical and chemical properties of the elements and chemical compounds, which are formed by the combination of atoms of different elements, they soon became buried under a mountain of seemingly unconnected facts…The solution to the problem was Mendeleev’s periodic table of the chemical elements (Stwertka 2002, p. 7).

The behavioral sciences, then, exist in a “pre-Mendeleev” era, in which true transdisciplinarity is impossible because of human cognitive limitations and the lack of a shared language. These restrictions mean that knowledge in the behavioral sciences cannot be “conceptualized as accumulating in a metaphorical reservoir from which society can draw” (Sarewitz and Pielke 2007, p. 7) as long as researchers themselves are incapable of utilizing existing research effectively.

Many disciplines are behavioral to varying degrees (e.g., anthropology, behavioral health, business, economics, education, political science, psychology, nursing, and sociology), and each generally focuses on the dependent variables appropriate to its domain. For example, the business disciplines of Management Information Systems and Marketing may focus on adoption of information technologies and on purchasing, respectively. When the independent variables of these disciplines are examined, however, there is enormous overlap. To demonstrate this overlap, we examined 20 years of behavioral research in education and management information systems, along with 10 years in psychology, and found variants of the following independent variables beginning with the letter ‘a’ in all three disciplines: ability, achievement (perceived), acceptance, affect, anxiety, and attitude. This illustration does not account for cases where disciplines use different names for the same behavioral variables, and it is based on a relatively small sample.

The above illustration merely scratches the surface of the enormous overlap in independent variables across disciplines. If we could properly interpret inconsistent language across disciplines, we would in all likelihood find that a finite set of truly unique variables is in use. While individual disciplines have produced large volumes of new research, such work has had limited effect on the basic problems of human behavior. As long as multiple unconnected theories (in some cases, dozens) are developed to address a single behavior, and the relationships between independent variables in these theories seldom draw on knowledge of similar relationships in other theories, most behavioral research does little to advance knowledge. Here we promote a universal view of existing knowledge: once any relationship between two variables has been tested, any subsequent test of the same relationship (regardless of what those variables are called or the discipline in which they are tested) on average provides less knowledge than the preceding test did. For example, in the Information Systems discipline, the relationship between ease of use and usefulness has been tested hundreds of times, and any new test of this relationship can safely be assigned minuscule knowledge value. It follows that if two variables X and Y in discipline A are functionally identical to the ease of use and usefulness variables in discipline B but carry different names, the universal view holds that a test of their relationship still provides reduced knowledge value. This is true even if no researcher in discipline A has ever read a paper testing ease of use and usefulness; such a case simply reflects a breakdown in information sharing within the universe of the behavioral sciences.
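
As a purely illustrative sketch of this diminishing-returns claim, the snippet below models the marginal knowledge value of the n-th test of a given relationship as 1/n. The decay function is our assumption for illustration only; the universal view itself does not prescribe a functional form.

    def marginal_knowledge_value(n_th_test: int) -> float:
        """Illustrative model: the n-th test of the same X-Y relationship
        contributes 1/n of the knowledge the first test contributed.
        The 1/n decay is an assumption, not an empirical result."""
        return 1.0 / n_th_test

    print(marginal_knowledge_value(1))    # 1.000 -- first test of X -> Y
    print(marginal_knowledge_value(2))    # 0.500 -- an unknowing replication
    print(marginal_knowledge_value(500))  # 0.002 -- e.g., yet another test of
                                          # ease of use -> usefulness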

Beyond suggesting that inadequate information search leads to unintended replication of existing research, the universal view of behavioral research also implies that the behavioral sciences have grown beyond the reach of human cognition, producing increasingly fragmented and insular areas of research. Such insularity, however, can never excuse ignoring existing research:

The standard justification for the fragmentation of the behavioral disciplines is that each has a model of human behavior well suited to its particular object of study. While this is true, where those objects of study overlap, their models must be compatible (Gintis 2007, p. 1).

Beyond Gintis’ contention, we suggest that behavioral researchers should treat as their highest priority the task of actually combining such models. According to Larsen (2003), behavioral researchers’ inability to deal with the enormous stores of existing research has far-reaching implications for the integrity and progress of the behavioral sciences. Because the probability that two people will describe the same idea with the same words is less than 20% (Furnas, Landauer et al. 1987), problems of synonymy (differently named identical constructs) and polysemy (identically named dissimilar constructs) are rampant. Larsen (2003) found that in one research area, 83 unique constructs were measured using 948 different scales, and most of the papers employing these constructs did not build on existing similar scales but instead created new ones. Our own examination of preliminary data found that of 2,562 uniquely named constructs recorded from three journals (one each in psychology, education, and business), 2,330 (90.9%) were used only once and 2,480 (96.7%) were used only once or twice. Strikingly, in this relatively small sample, a conservative evaluation found 70 differently named self-efficacy constructs, only eight of which were used more than once. In other words, a large proportion of research likely recreates existing scales under new names and, in all likelihood, unknowingly replicates existing research under the guise of novel research.
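
To make the synonymy problem concrete, the sketch below flags pairs of construct names similar enough to merit empirical comparison of their underlying scales. The example names, the character-level similarity measure, and the threshold are all our assumptions for illustration; this is not the procedure used in the studies cited above.

    from difflib import SequenceMatcher
    from itertools import combinations

    # Hypothetical construct names of the kind described above; real input
    # would be names coded from published journal articles.
    construct_names = [
        "self-efficacy",
        "computer self-efficacy",
        "perceived self efficacy",
        "attitude toward use",
        "attitude towards usage",
    ]

    def name_similarity(a: str, b: str) -> float:
        """Normalized character-level similarity between two construct names."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    # Flag candidate synonym pairs; the 0.6 threshold is arbitrary, and any
    # flagged pair still requires empirical analysis to confirm identity.
    for a, b in combinations(construct_names, 2):
        score = name_similarity(a, b)
        if score > 0.6:
            print(f"{score:.2f}  {a!r} <-> {b!r}")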

To understand the problem, we start from the assumption that every behavioral variable (construct) is unique, even when two constructs in different papers have the same name, are published by the same author(s), appear in the same journal, and share a citation structure. Before any two constructs can be confirmed as identical, empirical analysis is required.

Producing research that does not build on existing findings is detrimental to progress in the science of human behavior because each published paper realizes only a small portion of its potential. Our research shows that, on average, a paper’s nomological network contains almost seven constructs, meaning that by simple combinatorics the average paper can extend our knowledge by no more than C(7,2) = 21 relationship hypotheses. Because, a priori, no two constructs can be assumed to be identical, a corpus of 10,000 papers using psychometric approaches to measure constructs related to health behavioral science will contain about 70,000 constructs (assuming seven constructs per paper) and a maximum of 210,000 tested hypotheses. By contrast, without any method for combining identical constructs, there are almost 2.5 billion untested relationships in this same body of literature. While the majority of these relationships will be nil, knowing which are not will be invaluable. Paradoxically, paper number 10,001, also containing seven constructs, will test no more than 21 hypotheses (for a maximum of 210,021), whereas the number of unknown relationships will increase by 490,000, leading to what we term the behavioral sciences’ reverse progress problem. Essentially, the “universe” of potential knowledge grows faster than all the researchers in the world can keep up with, and adding more researchers only compounds the problem. This is one of the main problems addressed by the Human Behavior Project at the University of Colorado.
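
The arithmetic behind these figures can be verified in a few lines; the paper count and the seven-constructs-per-paper figure are taken from the text above.

    from math import comb

    constructs_per_paper = 7
    papers = 10_000

    per_paper_hypotheses = comb(constructs_per_paper, 2)  # C(7,2) = 21
    total_constructs = papers * constructs_per_paper      # 70,000
    max_tested = papers * per_paper_hypotheses            # 210,000

    # Treating every construct as unique a priori:
    potential = comb(total_constructs, 2)                 # 2,449,965,000
    untested = potential - max_tested                     # ~2.45 billion

    # Paper 10,001 adds 7 constructs and tests at most 21 hypotheses,
    # but enlarges the space of potential relationships far more:
    growth = comb(total_constructs + 7, 2) - potential    # 490,021
    newly_unknown = growth - per_paper_hypotheses         # 490,000
    print(untested, newly_unknown)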

REFERENCES:
Furnas, G. W., T. K. Landauer, et al. (1987). "The Vocabulary Problem in Human-System Communication." Communications of the ACM 30(11): 964-971.
Gintis, H. (2007). "A Framework for the Unification of the Behavioral Sciences." Behavioral and Brain Sciences 30: 1-61.
Larsen, K. R. T. (2003). "A Taxonomy of Antecedents of Information Systems Success: Variable Analysis Studies." Journal of Management Information Systems 20(2): 169-246.
Mokdad, A. H., J. S. Marks, et al. (2004). "Actual Causes of Death in the United States, 2000." Journal of the American Medical Association 291(10): 1238-1245.
Sarewitz, D. and R. A. Pielke (2007). "The Neglected Heart of Science Policy: Reconciling Supply of and Demand for Science." Environmental Science & Policy 10: 5-16.
Stwertka, A. (2002). A Guide to the Elements. New York, NY, Oxford University Press.