In the light of your experiences in this course, reflect critically on the scholarly merits of text mining, or on the digital humanities in general. Write an essay (max. 2000 words) in which you respond to one of the claims or questions below.
1. In Digital Humanities research, it is no longer necessary to formulate a research question or to develop a hypothesis during the initial stages of a research process. Scholars can use text mining technologies without any prior knowledge of the contents of the texts, and without any expectation of what these analyses ought to yield. Scholars can simply apply a number of algorithms, and seek explanations for the patterns revealed by these analyses only afterwards.
- Stanley Fish, “Mind Your P’s and B’s: The Digital Humanities and Interpretation”, New York Times, 2012 http://opinionator.blogs.nytimes.com/2012/01/23/mind-your-ps-and-bs-the-digital-humanities-and-interpretation/?_r=0
- Chris Anderson, “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete”, Wired, 2008.
- Dan Dixon, “Analysis Tool or Research Methodology? Is There an Epistemology for Patterns?”, in: David Berry (ed.), Understanding Digital Humanities, Basingstoke: Palgrave Macmillan 2012
2. Data visualisations cannot independently express an argument. Scholars who make use of text mining technologies, and who create data visualisations as part of such a methodology, always need to write traditional textual publications as a companion to graphs and charts, to explain the value or the relevance of the patterns which are visualised. A data visualisation can never serve as an independent scholarly resource.
- Jessop, M. “Digital Visualization as a Scholarly Activity”, in: Literary and Linguistic Computing, 23:3 (2008)
- Stéfan Sinclair, Stan Ruecker & Milena Radzikowska, “Information Visualization for Humanities Scholars”, in: Literary Studies in the Digital Age, Modern Language Association of America 2013
- Maureen Stone, “Information Visualization: Challenge for the Humanities”, in: Working Together or Apart: Promoting the Next Generation of Digital Scholarship: Report of a Workshop Cosponsored by the Council on Library and Information Resources and the National Endowment for the Humanities, Washington D.C., 2009.
- Jessica Hullman & Nicholas Diakopoulos, “Visualization Rhetoric: Framing Effects in Narrative Visualization”, in: IEEE Transactions on Visualization and Computer Graphics, (2011).
3. Scholars who have an interest in the digital humanities should learn to program. While it is true that scholars can make use of various user-friendly tools which can be applied without much technical knowledge, the scope of such tools tends to be limited to a number of basic functions. In general, such tools cannot be used to address more focused research questions.
- Benjamin Schmidt, “Do Digital Humanists Need to Understand Algorithms?”, in: Debates in the Digital Humanities, (Minneapolis: University of Minnesota Press 2016).
- Stephen Ramsay, On Building, http://stephenramsay.us/text/2011/01/11/on-building/
- Ted Underwood, Where to Start with Text Mining, https://tedunderwood.com/2012/08/14/where-to-start-with-text-mining/
4. The tools that are used in DH research have typically been developed for a particular scholarly purpose and/or within a particular methodological framework. Software engineers consciously or unconsciously make decisions about the types of results that a tool can produce, and as research instruments such tools therefore almost inevitably introduce a certain theoretical, practical or methodological bias. Is it possible in any way to make the scholars who use such tools more aware of their implications? Can the bias that is introduced by these tools be measured or be made visible in any way?
- David M. Berry, “The Computational Turn: Thinking about the Digital Humanities”, in: Culture Machine, 12 (2011), pp. 1–22.
- Stephen Ramsay & Geoffrey Rockwell, “Developing Things: Notes Towards an Epistemology of Building in the Digital Humanities”, in: Matthew K. Gold (ed.), Debates in the Digital Humanities, (Minneapolis: University of Minnesota Press 2012), pp. 75–84.
- Alan Galey & Stan Ruecker, “How a Prototype Argues”, in: Literary and Linguistic Computing, 25:4 (2010), pp. 405–424, <http://dx.doi.org/10.1093/llc/fqq021>.
5. Computer-based analyses of texts concentrate mostly on the formal or linguistic aspects of texts, such as their most frequent words or their grammatical categories. Because of this focus on formal characteristics, quantitative analyses cannot effectively be used in studies which aim to interpret the meaning of texts. Interpretation remains a quintessentially human activity.
- Jerome McGann and Lisa Samuels, “Deformance and Interpretation”, in Jerome McGann, Radiant Textuality: Literature after the World Wide Web, (New York: Palgrave Macmillan 2004). Also available at <http://www2.iath.virginia.edu/jjm2f/old/deform.html>
- Stephen Ramsay, “Algorithmic Criticism”, in: Susan Schreibman & Ray Siemens (eds.), A Companion to Digital Literary Studies, (Oxford: Blackwell 2008).
- Alan Liu, “The State of the Digital Humanities: A Report and a Critique”, in: Arts and Humanities in Higher Education, 11:1–2 (2011), pp. 8–41, <http://dx.doi.org/10.1177/1474022211427364>.
6. It has often been claimed that, when researchers make use of computational methods, the research becomes more transparent. The results of studies which have made use of digital methods can allegedly be replicated and reproduced more easily. Examine at least four of the studies which are listed under “research projects based on text mining” in the bibliography of this course syllabus, and assess the degree to which their results can indeed be replicated. What exactly is needed to validate or to replicate specific claims? What are the main benefits of reproducibility within humanities research?