In his comment included in the special issue of History of Psychology that I edited with Ivan Flis and Nadine Weidman, Ted Porter (UCLA History) said this of our efforts:
It rarely suffices merely to count things, for they may also require to be classified…. The problem can be especially thorny for the “psy” disciplines, where it applies to the fields themselves as well as to their subjects…. The authors of both articles under discussion here are keenly aware of the limits of any classification of psychological research articles, in particular, the clear shifts of meaning over time. They use these databases quite differently, however. (Porter, 2018, p. 369)
He then discussed my contribution:
Jeremy Burman’s (2018, pp. 302–333) data work brings to the surface a reflexive dimension that is mostly absent from Flis and van Eck’s (2018) research. He declares at the outset that digital researchers on word use in psychology “cannot simply trust the numbers” (p. 302). The problem is not simply that words have multiple senses and that their meanings can be bewilderingly unstable. Tracking the evolving patterns of word usage and meaning is, after all, one of the basic tasks of digital humanities, and anyone who is serious about it will recognize the need to descend repeatedly to the level of phrases and sentences—and never to assume that large-scale patterns are self-explanatory. (Porter, 2018, p. 370)
He continued, focusing on the results of my archival explorations:
The most interesting issue in Burman’s (2018) critique is that psychologists and science administrators have taken a passionate interest in improving and standardizing their terms. He emphasizes… that after World War II, the vocabulary of psychology was consciously adapted to serve Cold War aims. Such ambitions are not uncommon in the sciences, whose leaders have worked assiduously to standardize words as well as measures. (Porter, 2018, p. 370)
Yet, as he explains, this is no reason to dismiss the digital project:
Burman does not quite despair of digital studies, but he insists that an effective use of the database PsycINFO requires an awareness of these shifting meanings and, indeed, of the forms of power that lie behind them. In this way, an understanding of the history of statistics and of data becomes a prerequisite of effective quantification as well as a fascinating topic in its own right (Burman, 2018). (Porter, 2018, p. 370)
He then made an important point about the difference between statistical and substantive significance in considering Big Data:
Digital humanities relies chiefly on data and quantification—but with a difference. The methodology of the randomized trial, long a staple of scientific psychology, aims to detect differences between an experimental population and a control group that are (sufficiently) unlikely to have arisen by chance. Mere statistical significance, however, counts for little if you are working with a national census, a boundless inventory of Internet searches, or a database of hundreds of thousands of research articles. With numbers like these, almost every comparison you can think of will yield statistical significance. Substantive significance, of course, is another question. Many seeming effects will not be large enough to matter. Worse, they may have no evident meaning. (Porter, 2018, p. 371)
Indeed, this is a deep concern of mine in considering these methods:
With data on a vastly larger scale, the level of statistical significance gives way to the magnitude of the effect, virtually eliminating the danger that quirky patterns arising from nothing more than sampling error will lead to false conclusions. Biased samples, of course, are another matter. Neither Big Data nor fresh samples provide protection against errors of this kind, which are very difficult to avoid or even to estimate when data collection is uncontrolled and opportunistic. (Porter, 2018, pp. 371–372)
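Porter’s point about scale can be made concrete with a small simulation (a hypothetical sketch of my own; neither article includes this code): with samples in the hundreds of thousands, even a trivially small true difference between two groups produces a vanishingly small p-value, so the magnitude of the effect, not its statistical significance, has to carry the argument.

```python
import math
import random

random.seed(0)

def two_sample_z(a, b):
    """Large-sample two-sided z-test for a difference in means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)               # standard error of the difference
    z = (ma - mb) / se
    p = math.erfc(abs(z) / math.sqrt(2))            # two-sided p-value under the normal approximation
    return ma - mb, z, p

# Two simulated "corpora" whose true means differ by only 0.01 standard deviations --
# an effect far too small to matter substantively.
n = 1_000_000
a = [random.gauss(0.00, 1) for _ in range(n)]
b = [random.gauss(0.01, 1) for _ in range(n)]

diff, z, p = two_sample_z(a, b)
print(f"mean difference = {diff:+.4f}, z = {z:.2f}, p = {p:.1e}")
```

At this sample size the p-value is effectively zero even though the observed difference is about a hundredth of a standard deviation; the sketch uses only the standard library so the arithmetic is fully visible.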
And he concluded much as we did, although with a twist:
Digital humanities is by no means simply the application of quantitative techniques of science to history, literature, and the arts but involves data tools that pose new problems and opportunities for humanities and sciences alike. Although it is no panacea, it clearly provides valuable resources for historical research. I would even say that to the extent they obviate the problem of statistical significance, digital technologies bring into statistical science some of the freedom to track down new sources and to explore alternative explanations that has long been the prerogative of humanistic researchers.
These articles offer a glimpse of what the humanities can contribute, not just to a deeper understanding of data and statistics but even to workaday tools of quantitative analysis. Digital technologies offer rich materials for humanistic interpretation, and they can never supplant it. (Porter, 2018, p. 372)
J. T. Burman
JEREMY TREVELYAN BURMAN, PhD, is tenured Senior Assistant Professor (UD1 with indefinite contract) of Theory and History of Psychology at the University of Groningen in the Netherlands. The primary focus of his research is Jean Piaget, but he is also interested more generally in the formalization and movement of scientific meaning—over time, across disciplines, between languages, and internationally. To pursue these interests, he uses methods borrowed from the history and philosophy of science (esp. archival study) and the digital humanities (esp. network analysis).
Selected recent major works
Burman, J. T. (in press). The genetic epistemology of Jean Piaget. In W. Pickren (Ed.), The Oxford Research Encyclopedia of the History of Psychology. Oxford University Press.
Burman, J. T. (2020). On Kuhn’s case, and Piaget’s: A critical two-sited hauntology (or, on impact without reference). History of the Human Sciences, 33(3-4), 129-159. doi:10.1177/0952695120911576
Burman, J. T. (2019). Development. In R. J. Sternberg & W. Pickren (Eds.), The Cambridge Handbook of the Intellectual History of Psychology (pp. 287–317). Cambridge University Press.