Welcome to the Fifth Annual GHI Conference on Digital Humanities and Digital History, “Datafication in the Historical Humanities: Reconsidering Traditional Understandings of Sources and Data,” from June 2 to 4, 2022.
A Hybrid Conference
Given the uncertainties around Covid-19, and to open the conference to a wider audience, we have changed to a hybrid format combining online and in-person attendance. Check out our Program and read our Event Registration to find out which events we will be hosting in hybrid or virtual form, such as our Virtual Poster Session.
We are looking forward to three exciting days of discussion!
#ghidh22
Conveners: German Historical Institute Washington in collaboration with Luxembourg Centre for Contemporary and Digital History (C²DH), Chair of Digital History at Humboldt Universität zu Berlin, Consortium Initiative NFDI4Memory, Roy Rosenzweig Center for History and New Media, and Stanford University, Department of History.
“Datafication in the Historical Humanities” is supported in part by funding provided by the German Research Foundation (DFG).

Datafication in the Historical Humanities: Reconsidering Traditional Understandings of Sources and Data
The Fifth Annual GHI Conference on Digital Humanities and Digital History will revolve around the concept of “datafication,” that is, the production of and the shift toward digital representations of historical sources as a prerequisite for storage, access, and analysis, not to mention their transmission and publication online.
Historians outside the field of quantitative social history rarely consider their objects of study as “data,” even when they look at documents or paintings in digitized versions on their screens. These witnesses of human lives call for emotional, imaginative, and empathetic engagement and thus cannot be reduced to mere commodities to fuel a new kind of computational research, despite what the slogan “data is the new oil” might suggest. Sources, not data, we might thus insist, are at the heart of historical research. On the other hand, we readily observe that gathering, organizing, sorting, excluding, and searching for selected information from (digital) sources are routine processes of historical investigation. Seen from this angle, data-centered research appears more a continuation with updated tools and technologies than a radical break from traditional methods of inquiry. Johanna Drucker has forcefully argued that we should reconceive all data as “capta”: taken, not simply given, as the Latin root of “data” might imply. Data is therefore not a natural representation of something pre-existing, but is created as part of a knowledge-production process that is itself open to investigation and critique. Adopting Christof Schöch’s working definition, data in the humanities can thus be considered a digital, selectively constructed, machine-actionable abstraction representing some aspects of a given object of humanistic inquiry.
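To make this working definition concrete, consider a minimal sketch of our own (the record type, field names, and shelfmark are hypothetical, chosen purely for illustration) of how a single archival letter might be abstracted into a machine-actionable record:

```python
from dataclasses import dataclass, field

@dataclass
class LetterRecord:
    """A selective, machine-actionable abstraction of one archival letter.

    The record captures only the aspects chosen for a particular inquiry
    (sender, recipient, date, places mentioned); everything else about the
    source (hand, paper, marginalia, tone) is deliberately left out -- the
    selectivity that makes this "capta" rather than naturally given data.
    """
    archive_id: str                   # shelfmark in the holding institution
    sender: str
    recipient: str
    date: str                         # ISO 8601 where known, e.g. "1848-03-18"
    places_mentioned: list[str] = field(default_factory=list)

# A hypothetical record, not drawn from any real collection:
letter = LetterRecord(
    archive_id="GHI-MS-042",
    sender="Anna Schmidt",
    recipient="Karl Schmidt",
    date="1848-03-18",
    places_mentioned=["Berlin", "Frankfurt am Main"],
)
print(letter)
```

Even a toy record like this makes the constructedness visible: every field is a modeling decision about which aspects of the source are worth capturing.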
While we have seen a convergence in data modeling in text-oriented humanities (TEI), library science (FRBR), and cultural heritage information (CIDOC CRM), no conceptual framework for modeling, curating, and managing data in historical research has gained wide adoption. The one possible exception is Wikidata, a project that has been conceptualized and populated with very little input from within our field. Ruth Mostern and Marieka Arksey argue that there are still no standards to emulate, given the small number and heterogeneous nature of the historical datasets currently available. However, historical data repositories are “unlikely to realize their promise until the social life of data becomes part of the profession.” The current push by funders for National Research Data Infrastructures, such as NFDI in Germany, both adopts this idea of making data sharing part of professional practice and calls for interdisciplinary research. Such activities are premised on the “social life of data”: the idea that research data and models designed and collected for very specific questions might become useful to a broader audience. Support for the re-use of both technical infrastructure and the models used for data collection will jumpstart their wider adoption.
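As a small taste of what such community-modeled, re-usable data looks like in practice, the following sketch of ours queries Wikidata’s public SPARQL endpoint. The endpoint URL is Wikidata’s own; the identifiers P106 (“occupation”), P569 (“date of birth”), and Q201788 (“historian”) reflect Wikidata’s labeling at the time of writing and should be checked against the live service:

```python
import requests

# Retrieve a small historical dataset from Wikidata: historians born
# before 1900, with their dates of birth.
SPARQL = """
SELECT ?person ?personLabel ?birth WHERE {
  ?person wdt:P106 wd:Q201788 ;        # occupation: historian
          wdt:P569 ?birth .            # date of birth
  FILTER(YEAR(?birth) < 1900)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": SPARQL, "format": "json"},
    headers={"User-Agent": "datafication-example/0.1"},  # courtesy identifier
    timeout=30,
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["personLabel"]["value"], row["birth"]["value"])
```

Note how the query depends entirely on modeling decisions made elsewhere: whether a person “counts” as a historian here was decided by whoever assigned the occupation statement, largely without input from our field.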
The obstacles to such an undertaking are simultaneously conceptual, structural, and practical: modeling the entire range of historical investigation amounts to modeling the entire world, from the very beginning until now. This raises the question of whether such models are not in principle culture-bound, which would rule out a truly global approach and leads to the further question of the extent to which a generic conceptualization is possible even within a single subfield. Especially in the context of datafication processes, however, data modeling is a crucial question, since it lays the groundwork for the historical research of future generations. It is a time-consuming and cost-intensive process that needs to be well conceived and thought through. There is a great risk of creating path dependencies that later limit our ability to work with the data.
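A familiar illustration of such culture-boundedness is the modeling of personal names. The sketch below is ours, with hypothetical field names, and shows how an apparently neutral schema encodes one naming convention and how this hardens into a path dependency:

```python
# A seemingly neutral schema that silently assumes one modern Western
# convention: exactly one given name and one family name, in that order.
person_naive = {"given_name": "Johanna", "family_name": "Drucker"}

# Many historical actors do not fit this mold: mononyms, patronymics,
# toponymic bynames, names that change over a lifetime. A looser model
# records the name as attested and defers interpretation:
person_flexible = {
    "name_as_written": "Erasmus Roterodamus",   # transcription from the source
    "name_parts": [
        {"value": "Erasmus", "role": "given"},
        {"value": "Roterodamus", "role": "toponymic byname"},
    ],
    "source": "title page, 1516 edition",       # provenance of the attestation
}

# Once thousands of records exist in the naive shape, migrating to the
# flexible one means revisiting every record -- the path dependency at work.
print(person_naive)
print(person_flexible)
```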
Historical research often takes a nonlinear or even meandering path through many phases of uncertainty and redefinition. Just like traditional source-based studies, a data-driven investigation will not usually start with a predefined set of sources and questions, but will continuously extend and refine its scope, its structure, and its rules for data entry as new questions arise and additional material is encountered. In addition, we note a lack of tradition of collaborating in larger teams that include programmers, archivists, librarians, and other information professionals. As a result, humanities data often takes quite irregular shapes and does not meet the expectations of a building block that can easily be incorporated into larger structures outside the context of its original research.
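What such an “irregular shape” can look like is easy to sketch. In the hypothetical two-phase project below (our illustration; all names and fields are invented), new fields were introduced mid-project as new questions arose, so earlier records simply lack them:

```python
# Phase 1 of data entry: only names and birth years were recorded.
phase_1 = [
    {"name": "Maria Weber", "birth_year": 1821},
    {"name": "Jakob Fuchs", "birth_year": 1803},
]

# Phase 2: new research questions prompted two additional fields;
# the earlier records were never revisited.
phase_2 = [
    {"name": "Elise Braun", "birth_year": 1835,
     "occupation": "teacher", "confession": "Lutheran"},
]

# Harmonizing for analysis means making the absences explicit (None)
# rather than silently dropping rows or columns.
all_fields = sorted({key for row in phase_1 + phase_2 for key in row})
harmonized = [{f: row.get(f) for f in all_fields} for row in phase_1 + phase_2]
for row in harmonized:
    print(row)
```

The gaps are not errors to be cleaned away; they document the evolving questions of the project, and any re-use of the dataset has to reckon with them.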
For the conference, we would like to focus on the still mostly manual, and therefore labor-intensive, yet intellectually challenging task of transforming sources and collections into comparatively small but highly rigorous “handcrafted” datasets. How are the archives for such projects defined, developed, and managed? How do we select primary sources, deal with collections, and create data models for their digital representations? With whom do we collaborate in this process? What logic and constraints shape the normalization of information when entering it for comparison and analysis, and, just as important, what is discarded, and how is absent or ambivalent data handled? What standards guide our datafication processes, which tools support us, and what is the right scale to work at? At the same time, which explicit and implicit limitations do such decisions impose on us? How does datafication create new archives, as Vincent Brown argues, defined by the tools used to explore them and the design decisions made during their creation? What could be the general design principles we follow in the datafication of the historical sciences?
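The questions of normalization and of absent or ambivalent data can be made concrete with one last sketch of ours, here taking dates as the attribute being normalized. The function is hypothetical and only loosely follows the Extended Date/Time Format (EDTF) convention of marking uncertain (“?”) and approximate (“~”) dates:

```python
import re

def normalize_date(raw: str) -> dict:
    """Normalize a date string as found in a source into a comparable form,
    keeping absence and ambivalence explicit instead of discarding them.
    An illustrative sketch, not a full EDTF parser.
    """
    raw = raw.strip()
    if not raw:
        return {"edtf": None, "note": "no date in source"}
    if re.fullmatch(r"\d{4}-\d{2}-\d{2}", raw):          # e.g. "1848-03-18"
        return {"edtf": raw, "note": "exact"}
    m = re.fullmatch(r"(\d{4})\?", raw)                  # e.g. "1848?"
    if m:
        return {"edtf": f"{m.group(1)}?", "note": "uncertain year"}
    m = re.fullmatch(r"ca\.?\s*(\d{4})", raw, re.IGNORECASE)  # e.g. "ca. 1850"
    if m:
        return {"edtf": f"{m.group(1)}~", "note": "approximate year"}
    return {"edtf": None, "note": f"unparsed: {raw!r}"}  # kept, not discarded

for raw in ["1848-03-18", "1848?", "ca. 1850", ""]:
    print(raw or "(empty)", "->", normalize_date(raw))
```

Every branch of such a function is a design decision of exactly the kind the conference will examine: what counts as comparable, what is flagged as uncertain, and what is set aside.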