What I’m Reading This Summer: Erica Hayes

Note: As the dh+lib Review editors work behind the scenes this summer, we have invited a few members of our community to step in as guest editors and share with us what they are reading and why the dh+lib audience might want to read it too. This post is from Erica Hayes, Digital Scholarship Librarian at Villanova University.

I recently completed a two-year fellowship at the NCSU Libraries, where a major part of my work was teaching digital scholarship workshops and collaborating with creative residents funded by the Immersive Scholar Andrew W. Mellon Foundation grant on the development of open-source digital scholarship projects for large-scale visualization environments. One of the aims of the grant is to share these projects across institutions, with the goal of overcoming technical and resource-sharing barriers. As I start a brand new Digital Scholarship Librarian position at Villanova University in a few weeks, my readings this summer build on the work I was doing at the NCSU Libraries and revolve around questions of reproducibility, sharing, and sustainability of digital scholarship projects. I’ve also been reading articles and guidebooks on best practices for teaching digital tools to users of varying skill levels and backgrounds, with an eye towards improving my pedagogical practice and new library services in the upcoming academic year.

Battershill, C. and Ross, S. (2017). Using Digital Humanities in the Classroom: A Practical Introduction for Teachers, Lecturers, and Students. London: Bloomsbury Academic.

While teaching digital scholarship workshops and consulting with faculty on digital assignments this past year, I found myself recommending this introductory guidebook on approaches to teaching digital humanities methods and tools in the classroom, and I returned to it this summer. Claire Battershill and Shawna Ross have put together a very practical handbook. The first chapter covers overcoming fears of failure and resistance to learning new technologies. They note that “failure is just a fact of life” and that experimentation is an essential part of the learning process that should be valued just as much as the end product. In Chapter 5, the authors provide examples of classroom activities that vary in length, from ten minutes, to a single class session, to a week’s worth of learning exercises. These activities and their allotted time frames have been most useful to me when advising faculty on how much time they will need to teach a specific tool or deliver a lesson plan. Chapter 8 provides useful sample grading rubrics for different kinds of digital assignments, such as mapping and timeline projects. In addition to sample class activities, grading rubrics, and examples of digital assignments and projects, the authors have created a helpful web companion to the book, hosted in Scalar and updated every academic year with more digital resources and classroom assignments. The book also includes chapters on ensuring accessibility for individuals with disabilities, syllabus design, developing digital assignments, and evaluating digital projects.

O’Sullivan, J. (2019, July 9). The humanities have a reproducibility problem. Talking Humanities, School of Advanced Study, University of London. DOI: http://dx.doi.org/10.17613/g1j7-w527

Reproducibility is a core tenet of the scientific process, dictating the replication, transparency, and accuracy of research results and methods. In this blog post, James O’Sullivan discusses open research practices and the shortcomings of reproducibility within the digital humanities. Unlike scientific researchers, O’Sullivan notes, humanities scholars often “accept the findings of their peers without access to the data from which discoveries are drawn.” This is a serious problem as humanities scholarship continues to develop and engage in more computational criticism. While access to data is only one part of the problem, “the relative obscurity of computer-assisted techniques” is a major contributor to the reproducibility problem within the digital humanities. Although some computational work relies on sensitive or still-copyrighted literary datasets that are difficult to share, O’Sullivan argues we could be doing a better job of documenting our research methods and data workflows. He stresses that we need to work harder at making our DH research methods more transparent and reproducible if we want our research to be interdisciplinary and reach wider audiences. In his commentary, he emphasizes, “We need to dispel the mysticism embedded in digital humanities. Scholars with technical proficiencies have a responsibility to explain their methods more clearly, while the less technical need to increase their familiarity with new practices. It is frustrating that there are still journals that will not consider articles for peer-review because ‘the method has not been fully explained.’”

The topic of reproducibility has been coming up more and more within the DH community. At the DH2017 conference in Montreal, Alan Liu, Scott Kleinman, Lindsay Thomas, Ashley Champagne, and Jamal Russell also discussed the challenges of reproducibility within DH. In their panel, “Open, Shareable, Reproducible Workflows for the Digital Humanities: The Case of the 4Humanities.org ‘WhatEvery1Says’ Project,” they discussed the “lack of widely shared technical conventions and appropriate scholarly and publishing practices” in the digital humanities that makes it difficult to reproduce research. They also talked about the 4Humanities.org WhatEvery1Says (WE1S) project, which is working towards developing a more “open” and replicable “digital humanities methodology.” According to the panelists, the WE1S project will address “a growing need for ways to share and reproduce data workflows in digital humanities research in order to make DH comparable to ‘open science.’” As this project and the conversations around the reproducibility problem evolve, I’ll be interested to see how these issues are addressed. O’Sullivan makes some valid points: there is an increasing need for transparent documentation and reproducibility within the digital humanities if our research methods are to be improved, replicated, and built upon in the future.

Butler, B., Shepherd, A., Visconti, A., and Work, L. (2019). Archiving DH Parts 1-4. Scholar’s Lab, University of Virginia Library.

While I was working on the Immersive Scholar Andrew W. Mellon Foundation grant, there were many discussions about archiving digital scholarship projects and what that should even look like. This summer I started reading the Scholar’s Lab blog posts on Archiving DH and have found them very useful for thinking through best practices for preserving DH projects. The series is broken up into four parts and offers some potential solutions. Part 1 – The Problem provides a general overview of the problems we face when archiving DH projects. Not surprisingly, among the biggest challenges are a lack of institutional support and the constant change of technology. With frequent software upgrades and versioning, the authors note, there is very little infrastructure in place for supporting DH projects over the long term. Another major problem is not knowing when a DH project should be sunsetted or archived.

In Part 2 – The Problem in Detail and Part 3 – The Long View, they suggest thinking through these issues at the very early stages of a project and emphasize the importance of considering the cost of a project and the hidden fees of sustaining it: server costs, technical support, maintenance, security patches, software upgrades, and yearly domain name renewals. By thinking through these hidden costs and fees early, the authors advise, it will be easier to assess how much time you want to invest and how long you want to maintain the project. In addition to knowing when to retire a DH project, they discuss the importance of having a data policy in place, with clear guidelines about how you intend users to access, reuse, or not reuse your data.

The final post, Part 4 – The Solutions, sums up the topic and offers some potential solutions. The authors stress that an archived DH project does not necessarily equate to “a fully functional, live, continuously developed project.” Since there “is no established, tried and true process or infrastructure for building a DH project that will last for centuries,” the goal of archiving a DH project should be to “keep the intellectual knowledge” of the project “accessible to the user in as close to the original format as possible.” One solution is to use command line tools like wget and curl to crawl older DH projects into static HTML, CSS, and JS pages, though the authors advise this approach is not “future proof” and does not come without its errors. Other options include containerization, depositing project files into an academic repository, abandoning the project altogether, or using web archivers like Archive-It and Webrecorder, among others. The authors note that we are “still in the early days of figuring out and building the infrastructure to support the long term accessibility of digital objects,” and there is no “one-box fits all” method for preserving DH projects. Throughout their blog posts, they reference other publications on this topic, while sharing their own insightful experiences and challenges with archiving DH projects.
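For readers curious what that static-snapshot step looks like in practice, here is a minimal sketch of a wget crawl that mirrors a project site into a self-contained folder of offline-browsable pages. This is my own illustration, not a command from the Scholar’s Lab posts, and the URL is a hypothetical placeholder:

```shell
# Hypothetical example: mirror a DH project site into static files.
# --mirror            recursive download with timestamping
# --convert-links     rewrite links so the local copy browses offline
# --adjust-extension  save pages with .html extensions
# --page-requisites   also fetch the CSS, JS, and images each page needs
# --no-parent         stay within the project's own path on the server
wget --mirror --convert-links --adjust-extension \
     --page-requisites --no-parent \
     "https://example.org/dh-project/"
```

A crawl like this captures only what the server renders; dynamic features such as search forms or database-backed queries will not survive, which is part of why the authors caution that the approach is not “future proof.”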

Erica Hayes

Erica Hayes is a Digital Scholarship Librarian at Villanova University, where she supports faculty and students interested in integrating digital tools and methods into their research. Prior to joining Villanova University, she was the Project Manager on the Immersive Scholar Andrew W. Mellon Foundation grant and an NCSU Libraries Fellow based in the Copyright & Digital Scholarship Center and User Experience Department.