Mentioned: “A perspective from the history of scientific journals”

In her comment included in the special issue of History of Psychology that I edited with Ivan Flis and Nadine Weidman, Melinda Baldwin (now of History at the University of Maryland) said this of our efforts:

As a historian of scientific publishing, I am excited by the possibilities of such methods—and inspired to further thought by the caution these three authors urge in using them.

(Baldwin, 2018, p. 363)

She then said this of my contribution:

Burman’s (2018) article picks up on the question of what digitized journal databases do and do not contain and takes it a step further. What are the limitations and risks, he asks, of using particular databases to try to illuminate historical change? When we tell a database to search for a specific term during a specific time period, what kinds of results can we expect it to return, and to what extent are those results shaped by modern assumptions and historical contingencies? Burman devotes much of his article to identifying the years in which index-term searches in the database PsycINFO will yield valuable results. He uses both the database’s history and numerical analysis of its contents to answer his questions about the database’s structure and content, and his careful analysis illustrates that answers to those questions are often not straightforward or immediately evident.

Burman’s (2018) questions are ones that every historian and scientist interested in data-based studies of scientific literature need to ask themselves, and every budding digital historian should be taught to interrogate a database’s structure before trying to pull data from it. 

(Baldwin, 2018, p. 365)

She continued, helping to advance what I have since called the “trust in databases” problem:

Databases can conceal—or misrepresent—as much as they reveal, especially when we use them to analyze literatures that are just a little bit different from the one that we know today….

What a database chooses to track, index, and return as a search result is not an objective, straightforward decision, but a reflection of the choices that human actors made when designing the database and its search engine. Scholars should not lose sight of the assumptions that underlie the structures of seemingly straightforward digital tools. Burman (2018) highlights that the system that became PsycINFO came into being at a specific moment: during the Cold War and due to National Science Foundation (NSF) funding. The NSF only officially began funding the social sciences in 1958, and it seems notable that the NSF approached the American Psychological Association (APA) about this indexing project right around the time when the NSF opened its new division for the social sciences. Both the APA and the NSF had a vested interest in making the psychological literature more accessible and searchable—and in rendering it transparent and seemingly more objective and scientific to skeptics who wondered whether the social sciences ought to have a place alongside physics and biology in the NSF’s budget.

(Baldwin, 2018, pp. 365-366)

She also reflected on the content of the database, and on what in it perhaps might be trusted:

Burman (2018) summarizes the concerns of modern psychology, as evidenced by top index terms, with a vivid, tongue-in-cheek phrase: “the scientific study of depressed rats and the drugs that treat them” (p. 317). That playful sentence, of course, is only a surface sketch of the most popular index terms from 1992 to 2011, but Burman’s Table 1 suggests some interesting trends. The data suggest that studies of human sex differences had a brief heyday in the late 1970s and early 1980s; the place of depression in the psychological research landscape has steadily grown since the 1960s. There are also notable continuities. “Drugs” and “drug therapy” have remained in the top three index terms for the entire period under study; rats, too, have never fallen from the top three.

(Baldwin, 2018, p. 366)

However, this still requires a critical approach:

Burman (2018) cautions that big-data methods are “question-asking rather than answer-giving” for historians interested in disciplinary change; “no such method will be able to access the discipline’s history directly” (p. 324). I find that a persuasive interpretation.

(Baldwin, 2018, p. 366)

Baldwin concluded:

The digital analyses these two articles undertake provide insights that would not be possible through the usual historical method of reading each document and parsing it for meaning and significance. Not even the most voracious reader could go through 50 years’ worth of psychology journals by hand and hope to draw any rigorous conclusions. Colleagues skeptical of digital history often worry that individual stories and fine-grained analysis will be lost if we historians take on economists’ fondness for programs and models, but I think both of these articles indicate that digital tools are a complement to, not a replacement for, the traditional archive- and document-heavy approach to doing history. Using computer programs to produce big-picture overviews and identify places where further close analysis is necessary will be an essential approach for historians of late modern journals, especially after the period of explosive growth in the Cold War.

(Baldwin, 2018, pp. 366-367)

Categories: Digital methods, Mentioned

J. T. Burman
