Low Expectations Distributed: Yet Another Institutional Repository Collection Development Workflow

Anyone who has worked on an institutional repository for even a short time knows that collecting faculty scholarship is not a straightforward process, no matter how nice your workflow looks on paper or how dedicated you are. Keeping expectations for the process manageable (not necessarily low, as in my clickbaity title) and constantly simplifying and automating can make the process work better, however. I’ve written before about some ways in which I’ve automated my process for faculty collection development, as well as how I’ve used lightweight project management tools to streamline processes. My newest technique for faculty scholarship collection development brings pieces of all of those together to greatly improve our productivity.

Allocating Your Human and Machine Resources

First, here is the personnel situation we have for the institutional repository I manage. Your own circumstances will certainly vary, but I think institutions of all sizes will have some version of this distribution. I manage our repository as approximately half my position, and I have one graduate student assistant who works about 10-15 hours a week. From week to week we only average about 30-40 hours total to devote to all aspects of the repository, of which faculty collection development is only a part. We have 12 librarians who are liaisons with departments; they do the majority of the outreach to faculty and promotion of the repository, but only a limited amount of the collection development beyond specific parts of the process. While they are certainly welcome to do more, in reality they have so much else to do that it doesn’t make sense for them to spend their time on data entry unless they want to (and some of them do). The breakdown of work is roughly that the liaisons promote the repository to the faculty and answer basic questions; I answer more complex questions, develop procedures, train staff, interpret publishing agreements, and verify metadata; and my GA does the simple research and data entry. From time to time we have additional graduate or undergraduate student help in the form of faculty research assistants, and we have a group of students available for digitization if needed.

Those are our human resources. The tools that we use for the day-to-day work include Digital Measures (our faculty activity system), Excel, OpenRefine, Box, and Asana. I’ll say a bit about what each of these is and how we use them below. By far the most important innovation for our faculty collection development workflow has been integration with the Faculty Activity System, which is how we refer to Digital Measures on our campus. Many colleges and universities have some type of faculty activity system or are in the process of implementing one. These are generally adopted for purposes of annual reports, retention, promotion, and tenure reviews. I have been at two different universities working on adopting such systems, and as you might imagine, it’s a slow process with varying levels of participation across departments. Faculty do not always like these systems for a variety of reasons, and so there may be hesitation to complete profiles even when they are required. Nevertheless, we felt in the library that this was a great source of faculty publication information that we could use for collection development for the repository and the collection in general.

We now have a required question about including the item in the repository on every item a faculty member enters in the Faculty Activity System. If a faculty member says they published an article, they also have to say whether it should be included in the repository. We started this in late 2014, and it revolutionized our ability to reach faculty and departments who had never participated in the repository before, as well as simplifying the lives of faculty who were eager participants but now only had to enter data in one place. Of course, there are still a number of people whom we are missing, but this is part of keeping your expectations low–if you can’t reach everyone, focus your efforts on the people you can. And anyway, we are now so swamped with submissions that we can’t keep up with them, which is a good if unusual problem to have in this realm. Note that the process I describe below is basically the same as when we analyze a faculty member’s CV (which I described in my OpenRefine post), but we spend relatively little time doing that these days since it’s easier for most people to just enter their material in Digital Measures and select that they want to include it in the repository.

The ease of integration between your own institution’s faculty activity system (assuming it exists) and your repository will certainly vary, but in most cases it should be possible for the library to get access to the data. For the Office of Institutional Research or similar office that administers the system, repository integration is also a great selling point for faculty participation, since it gives faculty a reason to keep their profiles up to date even when they are between review cycles. If your institution does not yet have such a system, you might still discuss a partnership with that office, since your repository may hold extremely useful information for them about research activity of which they are not aware.

The Workflow

We get reports from the Faculty Activity System on roughly a quarterly basis. Faculty data entry tends to bunch around certain dates, so we focus on the ends of semesters as the times to get the reports. The reports come by email as Excel files with information about the person, their department, contact information, and the like, as well as information about each publication. We do some initial processing in Excel to clean them up, remove duplicates from prior reports, and remove irrelevant information. It is amazing how many people see a field like “Journal Title” as a chance to ask a question rather than provide information. We focus our efforts on items that have actually been published, since the vast majority of people have no interest in posting pre-prints, and those who do prefer to post them in arXiv or similar. The few people who do know about pre-prints and don’t have a subject archive generally submit their items directly. This is another way to lower expectations of what can be done through the process. I’ve already described how I use OpenRefine for creating reports from faculty CVs using the SHERPA/RoMEO API, and we follow a similar but much simplified process since we already have the data in the correct columns. Of course, following this process doesn’t tell us what we can do with every item. The journal title may be entered incorrectly so the API call doesn’t pick it up, or the journal may not be in SHERPA/RoMEO. My graduate student assistant fills in what he is able to determine, and I work on the complex cases. As we are doing this, the Excel spreadsheet is saved in Box so we have the change history tracked and can easily add collaborators.

[Screen capture from our Asana setup: a view of how we use Asana for managing faculty collection development workflows.]

At this point, we are ready to move to Asana, which is a lightweight project management tool ideal for several people working on a group of related projects. Asana is far more fun and easy to work with than Excel spreadsheets, and this helps us work together better to manage workload and see where we are with all our ongoing projects. For each report (or faculty member CV), we create a new project in Asana with several sections. While it doesn’t always happen in practice, in theory each citation is a task that moves between sections as it is completed, and is finally checked off when it is either posted or moved off into some other fate not as glamorous as being archived as open access full text. The sections generally cover posting the publisher’s PDF, contacting publishers, reminders for follow-up, posting author’s manuscripts, or posting to SelectedWorks, which is our faculty profile service that is related to our repository but mainly holds citations rather than full text. Again, as part of the low expectations, we focus on posting final PDFs of articles or book chapters. We add books to a faculty book list, and don’t even attempt to include full text for these unless someone wants to make special arrangements with their publisher–this is rare, but again the people who really care make it happen. If we already know that the author’s manuscript is permitted, we don’t add these items to Asana, but keep them in the spreadsheet until we are ready for them.

We contact publishers in batches, trying to group citations by journal and publisher to increase efficiency so we can send one letter to cover many articles or chapters. We note to follow up with a reminder in one month, and then again a month after that. Usually the second notice is enough to catch the attention of the publisher. As they respond, we move the citation to either the publisher’s PDF section or the author’s manuscript section, or, if posting is not permitted at all, to the SelectedWorks section. While we’ve tried several different procedures, I’ve determined it’s best for the liaison librarians to ask for authors’ accepted manuscripts only after we’ve verified that no other version may be posted. And if we don’t ever get them, we don’t worry about it too much.

Conclusion

I hope you’ve gotten some ideas from this post about your own procedures and new tools you might try. Even more, I hope you’ll think about which pieces of your procedures are really working for you, and discard those that aren’t working any more. Your own situation will dictate which those are, but let’s all stop beating ourselves up about not achieving perfection. Make sure to let your repository stakeholders know what works and what doesn’t, and if something that isn’t working is still important, work collaboratively to figure out a way around that obstacle. That type of collaboration is what led to our partnership with the Office of Institutional Research to use the Digital Measures platform for our collection development, and that in turn has led to other collaborative opportunities.

 


Creating a Flipbook Reader for the Web

At my library, we have a wonderful collection of artists’ books. These titles are not mere text, or even concrete poetry shaping letters across a page, but works of art that use the book as a form in the same way a sculptor might choose clay or marble. They’re all highly inventive and typically employ an interplay between language, symbol, and image that challenges our understanding of what a print volume is. For an art library, collecting these unique and inspiring works seems natural. But how can we share our artists’ books with the world? For a while, our work study students have been scanning sets of pages from the books using a typical flatbed scanner, but we weren’t sure how to present these images. Below, I’ll detail how we came to fork the Internet Archive’s Bookreader to publish our images to the web.

Choosing a Book Display

Our Digital Scholarship Librarian, Lisa Conrad, led the artists’ book project. She investigated a number of potential options for creating interactive “flipbooks” out of our series of images. We had a few requirements we were looking for:

  • mobile friendly — ideally, our books would be able to be read on any device, not just desktops with Adobe Flash
  • easy to use — our work study staff shouldn’t need sophisticated skills, like editing code or markup, to publish a work
  • visually appealing — obviously we’re dealing with works of art; if our presentation is hideous or obscures the elegance of the original, then it’s doing more harm than good
  • works with our repository — ultimately, the books would “live” in our institutional repository, so we needed software that would emit something like static HTML or documents that could be easily retained in it

These are not especially stringent requirements, in my mind. Still, we struggled to find decent options. The mobile limitation restricted things quite a bit; a surprising number of apps used Flash or published in desktop-first formats. We felt our options to customize and simplify the user interface of other apps were too limited. Even if an app spat out a nice bundle of HTML and assets that we could upload, the HTML might call out to sketchy external services or present a number of options we didn’t want or need. While I was at first hesitant to implement an open source piece of software that I knew would require much of my time, ultimately the Internet Archive’s Bookreader became the frontrunner. We felt more comfortable using an established, free piece of software that doesn’t require any server-side component (it’s all JavaScript!). My primary concern was that the last commit to the bookreader code base was more than half a year old.

Implementing the Internet Archive Bookreader

The Bookreader proved startlingly easy to implement. We simply copied the code from an example given by the Internet Archive, edited several lines of JavaScript, and were ready to go. Everything else was refinement. Still, we made a few nice decisions in working on the Bookreader that I’d like to share. If you’re not a coder, feel free to skip the rest of this section as it may be unnecessarily detailed.

First off, our code is all up on GitHub. But it won’t be useful to anyone else! It contains specific logic based on how our IR serves up links to the bookreader app itself. But one of the smarter moves, if I may indulge in some self-congratulation, was using submodules in git. If you know git, then you know it’s easily one of the best ways to version code and manage projects. But what if your app is just an implementation of an existing code base? There’s a huge portion of code you’re borrowing from somewhere else, and you want to be able to segment that off and incorporate changes to it separately.

As I noted before, the bookreader isn’t exactly actively developed. But it still benefits us to separate our local modifications from the stable, external code. A submodule lets us incorporate another repository into our own. It’s the clean way of copying someone else’s files into our own version. We can reference files in the other repository as if they’re sitting right beside our own, but also pull in updates in a controlled manner without causing tons of extra commits. Elegant, handy, sublime.
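For anyone curious, the setup is only a couple of git commands. This is a minimal sketch; the repository URL and directory name are illustrative, so point them at wherever the upstream Bookreader actually lives:

# pull the upstream Bookreader into our project as a submodule
git submodule add https://github.com/internetarchive/bookreader.git BookReader

# later, fetch upstream changes without tangling them up with our own commits
git submodule update --remote BookReader

Anyone cloning our repository fresh just runs git submodule update --init to fill in the Bookreader files.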

Most of our work happened in a single app.js file. This file was copied from the Internet Archive’s provided example and only modified slightly. To give you a sense of how one modifies the original script, here’s a portion:

// Create the BookReader object
br = new BookReader();
// Total number of leafs
br.numLeafs = vaultItem.pages;
// Book title & URL used for the book title link
br.bookTitle = vaultItem.title;
br.bookUrl  = vaultItem.root + 'items/' + vaultItem.id + '/' + vaultItem.version + '/';
// how does the bookreader know what images to retrieve given a page number (index)?
br.getPageURI = function(index, reduce, rotate) {
    // reduce and rotate are ignored in this simple implementation, but we
    // could e.g. look at reduce and load images from a different directory
    var url = vaultItem.root + 'file/' + vaultItem.id + '/' + vaultItem.version + '/' + vaultItem.filenames + (index + 1) + '.JPG';
    return url;
}

The Internet Archive’s code provides us with a BookReader class. All we do is instantiate it once and override certain properties and methods. The code above is all that’s necessary to display a particular image for a given page number; the vaultItem object (VAULT is the name of our IR) consists of information about a single artists’ book, like the number of pages, its title, and its ID and version within the IR. The bookreader app cobbles together these pieces of info to figure out which images it should display for a given page or pair of pages. The getPageURI function mostly works with a single index argument, while the group of concatenated strings for the URL is related to how our repository stores files. It’s highly specific to the IR we’re using, but not terribly complicated.

The Bookreader itself sits on a web server outside the IR. Since we cannot publicly share most of these books (more on this below), we restrict access to the images to users who are authenticated with the IR. So how can the external reader app serve up images of books within the repository? We expect people to discover the books via our library catalog, which links to the IR, or within the IR itself. Once they sign in, our IR’s display templates contain specially crafted URLs that pass the bookreader information about the item in our repository via the query string. Here’s a shortened example of one query string:

?title=Crystals%20to%20Aden&id=17c06cc5-c419-4e77-9bdb-43c69e94b4cd&pages=26

From there, the app parses the query string to figure out that the book’s title is Crystals to Aden, its ID within the IR is “17c06cc5-c419-4e77-9bdb-43c69e94b4cd”, and it has twenty-six pages. We store those values in the vaultItem object referenced in the script above. That object contains enough information for the bookreader to determine how to retrieve images from the IR for each page of the book. Since the user has already authenticated with the IR when they discovered the URL earlier, the IR happily serves up the images.
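For a sense of what that parsing looks like, here is a hypothetical sketch; the vaultItem property names mirror the earlier snippet, while the base URL and fallback values are illustrative rather than our actual configuration:

// hypothetical sketch: turn "?title=...&id=...&pages=..." into the vaultItem object
var vaultItem = {};
window.location.search
    .substring(1)   // drop the leading "?"
    .split('&')     // one "key=value" pair per element
    .forEach(function (pair) {
        var parts = pair.split('=');
        vaultItem[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
    });
vaultItem.pages = parseInt(vaultItem.pages, 10);  // counts arrive as strings
vaultItem.root = 'https://vault.example.edu/';    // base IR URL, set in the app rather than the query string

An approach along these lines keeps the reader app essentially stateless: everything it needs to know about a particular book is carried in the URL the IR hands out.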

Ch-ch-ch-challenges

Easily the most difficult part of our artists’ books project has been spreading awareness. It’s a cool project that’s required tons of work all around, from the Digital Scholarship Librarian managing the project, to our diligent work study student scanning images, to our circulation staff cataloging them in our IR, to me solving problems in JavaScript while my web design student worker polished the CSS. We’re proud of the result, but also struggling to share it outside of our walls. Our IR is great at some things but not particularly intuitive to use, so we cannot count on patrons stumbling across the artists’ books on their own very often. Furthermore, putting anything behind a login is obviously going to increase the number of people who give up before accessing it. Not being on the campus Central Authentication Service only exacerbates this for us.

The second challenge is—what else!—copyright. We don’t own the rights to any of these titles. We’ve been guerrilla digitizing them without permission from publishers. For internal sharing, that’s fine and well under Fair Use; we’re only showing excerpts and security settings in our IR ensure they’re only visible to our constituents. But I pine to share these gorgeous creations with people outside the college, with my social networks, with other librarians. Right now, we only have permission to share Crystals to Aden by Michael Bulteau (thanks to duration press for letting us!). Which is great! Except Crystals is a relatively text-heavy work; it’s fine poetry, yes, but not the earth-shattering reinterpretation of the codex I promised you in my opening paragraph.

Hopefully, we can secure permissions to share further works. Internally, we’ve pushed out notices on social media, the LMS, and email lists to inform people of the artists’ books. They’re also available for checkout, so hopefully our digital teasers bring people in to see the real deal.

Lasting Problems

And that’s something that must be mentioned; our digital simulations are definitely not the real deal. Never has a set of titles been so resistant to digitization. Our work study student has even asked us “how could I possibly separate this work into distinct pairs of pages?” about a work that used partially-overlapping leaves that look like vinyl record sleeves.1 Artists deliberately chose the printed book form, and there’s a deep sacrilege in trying to digitally represent that. We can only ever offer a vague adumbration of what was truly intended. I still see value in our bookreader—and the works often look brilliant even as scanned images—but there are fundamental impediments to its execution.

Secondly, scanned images are not text. Many works do have text on the page, but because we’re displaying images in the browser, users cannot select the text to copy it. Alongside each set of images in our IR, we also have a PDF copy of all the pages with OCR’d text. But a decision was made not to make the PDFs visible to non-library staff, since they would be easy to download and disseminate (unlike the many discrete images in our bookreader). This all adds up to make the bookreader very inaccessible: it’s all images, there’s no feasible way for us to associate alt text with each one, and the PDF copy that might be of interest to visually-impaired users is hidden.

I’ve ended on a sour note, but digitizing our artists’ books and fiddling with the Internet Archive’s Bookreader was a great project. Fun, a bit challenging, with some splendid results. We have our work cut out for us if we want to draw more attention to the books and have them be compelling for everyone. But other libraries in similar situations may find the Bookreader to be a very viable, easy-to-implement solution. If you don’t have permissions issues and are dealing with more traditional works, it’s built to be customizable yet offers a pleasant reading experience.

Notes

  1. Paradoxic Mutations by Margot Lovejoy

A Clean House at the Directory of Open Access Journals

The Directory of Open Access Journals (DOAJ) is an international directory of journals and index of articles that are available open access. Dating back to 2003, the DOAJ was at the center of a controversy surrounding the “sting” conducted by John Bohannon in Science, which I covered in 2013. Essentially Bohannon used journals listed in DOAJ to try to find journals that would publish an article of poor quality as long as authors paid a fee. At the time many suggested that a crowdsourced journal reviewing platform might be the way to resolve the problem if DOAJ wasn’t a good source. While such a platform might still be a good idea, the simpler and more obvious solution is the one that seems to have happened: for DOAJ to be more strict with publishers about requirements for inclusion in the directory. 1.

The process of cleaning up the DOAJ has been going on for some time and is getting close to an important milestone. All the 10,000+ journals listed in DOAJ were required to reapply for inclusion, and the deadline for that is December 30, 2015. After that time, any journals that haven’t reapplied will be removed from the DOAJ.

“Proactive Not Reactive”

Contrary to popular belief, the process for this started well before the Bohannon piece was published 2. In December 2012 an organization called Infrastructure Services for Open Access (IS4OA), founded by Alma Swan and Caroline Sutton, took over DOAJ from Lund University and announced several initiatives, including a new platform, distributed editorial help, and improved criteria for inclusion. 3 Because DOAJ grew to be an important piece of the scholarly communications infrastructure, it was inevitable that it would have to take such a step sooner or later. With nearly 10,000 journals and only a small team of editors, the directory wouldn’t have been sustainable over time, and to lose the DOAJ would have been a blow to the open access community.

One of the remarkable things about the revitalization of the DOAJ is the transparency of the process. The DOAJ News Service blog has been documenting the behind-the-scenes processes in detail since May 2014. One of the most useful things is a list of journals that have claimed to be listed in DOAJ but are not. Another important piece of information is the 2015-2016 development roadmap. There is a lot going on with the DOAJ update, however, so below I will pick out what I think is most important to know.

The New DOAJ

In March 2014, the DOAJ created a new application form with much higher standards for inclusion. Previously the form for inclusion was only 6 questions, but after working with the community they changed the application to require 58 questions. The requirements are detailed on a page for publishers, and the new application form is available as a spreadsheet.

While 58 questions seems like a lot, it is important to note that journals need not fulfill every single requirement, other than the basic requirements for inclusion. The idea is that journal publishers must be transparent about the structure and funding of the journal, and that journals explicitly labeled as open access meet some basic theoretical components of open access. For instance, one of the basic requirements is that “the full text of ALL content must be available for free and be Open Access without delay”. Certain other pieces are strong suggestions, but not meeting them will not disqualify a journal. For example, the DOAJ takes a strong stand against impact factors and suggests that they not be presented on journal websites at all 4.

To highlight journals that have extremely high standards for “accessibility, openness, discoverability, reuse and author rights”, the DOAJ has developed a “Seal” that is awarded to journals that answer “yes” to the following questions (taken from the DOAJ application form):

  • have an archival arrangement in place with an external party (Question 25). ‘No policy in place’ does not qualify for the Seal.
  • provide permanent identifiers in the papers published (Question 28). ‘None’ does not qualify for the Seal.
  • provide article level metadata to DOAJ (Question 29). ‘No’ or failure to provide metadata within 3 months do not qualify for the Seal.
  • embed machine-readable CC licensing information in article level metadata (Question 45). ‘No’ does not qualify for the Seal.
  • allow reuse and remixing of content in accordance with a CC BY, CC BY-SA or CC BY-NC license (Question 47). If CC BY-ND, CC BY-NC-ND, ‘No’ or ‘Other’ is selected the journal will not qualify for the Seal.
  • have a deposit policy registered in a deposit policy directory (Question 51). ‘No’ does not qualify for the Seal.
  • allow the author to hold the copyright without restrictions (Question 52). ‘No’ does not qualify for the Seal.

Part of the appeal of the Seal is that it focuses on the good things about open access journals rather than the questionable practices. Having a whitelist is much more appealing for people doing open access outreach than a blacklist. Journals with the Seal are available in a facet on the new DOAJ interface.

Getting In and Out of the DOAJ

Part of the reworking of the DOAJ was the requirement that all currently listed journals reapply–as of November 19, just over 1,700 journals had been accepted under the new criteria, and just over 800 had been removed (you can follow the list yourself here). For now you can find journals that have reapplied marked with a green check mark (what DOAJ calls The Tick!). That means that about 85% of journals that were previously listed either have not reapplied or are still in the verification pipeline 5. While DOAJ does not discuss the specific reasons a journal or publisher is removed, it does give a general category for removal. I did some analysis of the data provided in the added/removed/rejected spreadsheet.

At the time of analysis, there were 1776 journals on the accepted list. 20% of these were added since September, and with the deadline looming this number is sure to grow. Around 8% of the accepted journals have the DOAJ Seal.

There were 809 journals removed from the DOAJ, and the reasons fell into the following general categories. I manually checked some of the journals with only 1 or 2 titles, and suspect that some of these may be reinstated if the publisher chooses to reapply. Note that well over half the removed journals weren’t related to misconduct but were ceased or otherwise unavailable.

Inactive (has not published in the last calendar year): 233
Suspected editorial misconduct by publisher: 229
Website URL no longer works: 124
Ceased publishing: 108
Journal not adhering to Best Practice: 62
Journal is no longer Open Access: 45
Has not published enough articles this calendar year: 2
Wrong ISSN: 2
Other; delayed open access: 1
Other; no content: 1
Other; taken offline: 1
Removed at publisher’s request: 1
Total: 809

The spreadsheet lists 26 journals that were rejected. Rejected journals will know the specific reasons their applications were rejected, but those reasons are not made public. Journals may reapply after 6 months, once they have had an opportunity to address the issues. 6 The general stated reasons were as follows:

Unknown: 19
Has not published enough articles: 2
Journal website lacks necessary information: 2
Not an academic/scholarly journal: 1
Only Abstracts: 1
Web site URL doesn’t work: 1

Conclusion

The work that DOAJ is doing to improve transparency and the screening process is very important for open access advocates, who will soon have a tool that they can trust to provide much more complete information for scholars and librarians. For too long we have been forced to rely on the concept of a list of “questionable” or even “predatory” journals. A directory of journals with robust standards and an easy-to-understand interface will be a fresh start for the rhetoric of open access journals.

Are you the editor of an open access journal? What do you think of the new application process? Leave your thoughts in the comments (anonymously if you like).


The Library as Research Partner

As I typed the title for this post, I couldn’t help but think “Well, yeah. What else would the library be?” Instead of changing the title, however, I want to actually unpack what we mean when we say “research partner,” especially in the context of research data management support. In the most traditional sense, libraries provide materials and space that support the research endeavor, whether it be in the physical form (books, special collections materials, study carrels) or the virtual (digital collections, online exhibits, electronic resources). Moreover, librarians are frequently involved in aiding researchers as they navigate those spaces and materials. This aid is often at the information seeking stage, when researchers have difficulty tracking down references, or need expert help formulating search strategies. Libraries and librarians have less often been involved at the most upstream point in the research process: the start of the experimental design or research question. As one considers the role of the Library in the scholarly life-cycle, one should consider the ways in which the Library can be a partner with other stakeholders in that life-cycle. With respect to research data management, what is the appropriate role for the Library?

In order to achieve effective research data management (RDM), planning for the life-cycle of the data should occur before any data are actually collected. In circumstances where there is a grant application requirement that triggers a call to the Library for data management plan (DMP) assistance, this may be possible. But why are researchers calling the Library? Ostensibly, it is because the Library has marketed itself (read: its people) as an expert in the domain of data management. It has most likely done this in coordination with the Research Office on campus. Even more likely, it did this because no one else was. It may have done this as a response to the National Science Foundation (NSF) DMP requirement in 2011, or it may have just started doing this because of perceived need on campus, or because it seems like the thing to do (which can lead to poorly executed hiring practices). But unlike monographic collecting or electronic resource acquisition, comprehensive RDM requires much more coordination with partners outside the Library.

Steven Van Tuyl has written about the common coordination model of the Library, the Research Office, and Central Computing with respect to RDM services. The Research Office has expertise in compliance and Central Computing can provide technical infrastructure, but he posits that there could be more effective partners in the RDM game than the Library. That perhaps the Library is only there because no one else was stepping up when DMP mandates came down. Perhaps enough time has passed, and RDM and data services have evolved enough that the Library doesn’t have to fill that void any longer. Perhaps the Library is actually the *wrong* partner in the model. If we acknowledge that communities of practice drive change, and intentional RDM is a change for many of the researchers, then wouldn’t ceding this work to the communities of practice be the most effective way to stimulate long lasting change? The Library has planted some starter seeds within departments and now the departments could go forth and carry the practice forward, right?

Well, yes. That would be ideal for many aspects of RDM. I personally would very much like to see the intentional planning for, and management of, research data more seamlessly integrated into standard experimental methodology. But I don’t think that by accomplishing that, the Library should be removed as a research partner in the data services model. I say this for two reasons:

  1. The data/information landscape is still changing. In addition to the fact that more funders are requiring DMPs, more research can benefit from using openly available (and well described – please make it understandable) data. While researchers are experts in their domain, the Library is still the expert in the information game. At its simplest, data sources are another information source. The Library has always been there to help researchers find sources; this is another facet of that aid. More holistically, the Library is increasingly positioning itself to be an advocate for effective scholarly communication at all points of the scholarship life-cycle. This is a logical move as the products of scholarship take on more diverse and “nontraditional” forms.

Some may propose that librarians who have cultivated RDM expertise can still provide data seeking services, but perhaps they should not reside in the Library. Would it not be better to have them collocated with the researchers in the college or department? Truly embedded in the local environment? I think this is a very interesting model that I have heard some large institutions may want to explore more fully. But I think my second point is a reason to explore this option with some caution:

  2. Preservation and access. Libraries are the experts in preserving and providing access to materials. Central Computing is a critical institutional partner in terms of infrastructure and determining institutional needs for storage, porting, computing power, and bandwidth but – in my experience – is happy to let the long-term preservation and access service fall to another entity. Libraries (and archives) have been leading the development of digital preservation best practices for some time now, with keen attention to complex objects. While not all institutions can provide repository services for research data, the Library perspective and expertise is important to have at the table. Moreover, because the Library is a discipline-agnostic entity, librarians may be able to more easily imagine diverse interest in research data than the data producer can. This can increase the potential vehicles for data sharing, depending on the discipline.

Yes, RDM and data services are reaching a place of maturity in academic institutions where many Libraries are evaluating, or re-evaluating, their role as a research partner. While many researchers and departments may be taking a more proactive or interested position with RDM, it is not appropriate for Libraries to be removed from the coordinated work that is required. Libraries should assert their expertise, while recognizing the expertise of other partners, in order to determine effective outreach strategies and resource needs. Above all, Libraries must set scope for this work. Do not be deterred by the increased interest from other campus entities to join in this work. Rather, embrace that interest and determine how we all can support and strengthen the partnerships that facilitate the innovative and exciting research and scholarship at an institution.


Data, data everywhere…but do we want to drink?

The role of data, digital curation, and scholarly communication in academic libraries.

Ask around and you’ll hear that data is the new bacon (or turkey bacon, in my case. Sorry, vegetarians). It’s the hot thing that everyone wants a piece of. It is another medium with which we interact and derive meaning from. It is information[1]; potentially valuable and abundant. But much like [turkey] bacon, un-moderated gorging, without balance or diversity of content, can raise blood pressure and give you a heart attack. To understand how best to interact with the data landscape, it is important to look beyond it.

What do academic libraries need to know about data? A lot, but in order to separate the signal from the noise, it is imperative to look at the entire environment. To do this, one can look to job postings as a measure of engagement. The data curation positions, research data services departments, and data management specializations focus almost exclusively on digital data. However, these positions, which are often catch-alls for many other things, do not place the data management and curation activities within the larger frame of digital curation, let alone scholarly communication. Missing from job descriptions is an awareness of digital preservation or archival theory as it relates to data management or curation. In some cases, this omission could be because a fully staffed digital collections department has purview over these areas. Nonetheless, it is important to articulate the need to communicate with those stakeholders in the job description. It may be said that if the job ad discusses data curation, digital preservation should be an assumed skill, yet given the tendency for these positions to “do-all-the-things,” it is negligent not to explicitly mention it.

Digital curation is an area that has wide appeal for those working in academic and research libraries. The ACRL Digital Curation Interest Group (DCIG) has one of the largest memberships within ACRL, with 1075 members as of March 2015. The interest group was intentionally named “digital curation” rather than “data curation” because the founders (Patricia Hswe and Marisa Ramirez) understood the interconnectivity of the domains and that the work in one area, like archives, could influence the work in another, like data management. For example, the work from Digital POWRR can help inform digital collection platform decisions or workflows, including data repository concerns. This Big Tent philosophy can help frame the data conversations within libraries in a holistic, unified manner, where the various library stakeholders work collaboratively to meet the needs of the community.

The absence of a holistic approach to data can result in the propensity to separate data from the corpus of information for which librarians already provide stewardship. Academic libraries may recognize the need to provide leadership in the area of data management, but balk when asked to consider data a special collection or to ingest data into the institutional repository. While librarians should be working to help the campus community become critical users and responsible producers of data, the library institution must empower that work by recognizing this as an extension of the scholarly communication guidance currently in place. This means that academic libraries must incorporate the work of data information literacy into their existing information literacy and scholarly communication missions, else risk excluding these data librarian positions from the natural cohort of colleagues doing that work, or risk overextending the work of the library.

This overextension is most obvious in the positions that seek a librarian to do instruction in data management, reference, and outreach, and also provide expertise in all areas of data analysis, statistics, visualization, and other data manipulation. There are some academic libraries where this level of support is reasonable, given the mission, focus, and resourcing of the specific institution. However, considering the diversity of scope across academic libraries, I am skeptical that the prevalence of job ads that describe this suite of services is justified. Most “general” science librarians would scoff if a job ad asked for experience with interpreting spectra. The science librarian should know where to direct the person who needs help with reading the spectra, or finding comparative spectra, but it should not be a core competency to have expertise in that domain. Yet experience with SPSS, R, Python, statistics and statistical literacy, and/or data visualization software find their way into librarian position descriptions, some more specialized than others.

For some institutions this is not an overextension, but just an extension of the suite of specialized services offered, and that is well and good. My concern is that academic libraries, feeling the rush of an approved line for all things data, begin to think this is a normal role for a librarian. Do not mistake me, I do not write from the perspective that libraries should not evolve services or that librarians should not develop specialized areas of expertise. Rather, I raise a concern that too often these extensions are made without the strategic planning and commitment from the institution to fully support the work that this would entail.

Framing data management and curation within the construct of scholarly communication, and its intersections with information literacy, allows for the opportunity to build more of this content delivery across the organization, enfranchising all librarians in the conversation. A team approach can help with sustainability and message penetration, and moves the organization away from the single-position skill and knowledge-sink trap. Subject expertise is critical in the fast-moving realm of data management and curation, but it is an expertise that can be shared and that must be strategically supported. For example, with sufficient cross-training liaison librarians can work with their constituents to advise on meeting federal data sharing requirements, without requiring an immediate punt to the “data person” in the library (if such a person exists). In cases where there is no data point person, creating a data working group is a good approach to distribute across the organization both the knowledge and the responsibility for seeking out additional information.

Data specialization cuts across disciplinary bounds and concerns both public services and technical services. It is no easy task, but I posit that institutions must take a simultaneously expansive yet well-scoped approach to data engagement – mindful of the larger context of digital curation and scholarly communication, while limiting responsibilities to those most appropriate for a particular institution.

[1] Lest the “data-information-knowledge-wisdom” hierarchy (DIKW) torpedo the rest of this post, let me encourage readers to allow for an expansive definition of data. One that allows for the discrete bits of data that have no meaning without context, such as a series of numbers in a .csv file, and the data that is described and organized, such as those exact same numbers in a .csv file, but with column and row descriptors and perhaps an associated data dictionary file. Undoubtedly, the second .csv file is more useful and could be classified as information, but most people will continue to call it data.

Yasmeen Shorish is assistant professor and Physical & Life Sciences librarian at James Madison University. She is a past-convener for the ACRL Digital Curation Interest Group and her research focus is in the areas of data information literacy and scholarly communication.


A Foray into Publishing Open Data on GitHub

While we’ve written about using GitHub for publishing before, in this post I will explore publishing data on GitHub, as opposed to a presentation or academic paper. There are a few services where one can publish research data—FigShare comes to mind—but I wanted to try GitHub because I’m already familiar with the service, it seems suitable for publishing data alongside the scripts used to obtain and process it, and its focus on version control makes it particularly apt for publishing a work in progress. However, even with free services like GitHub available, open data still has hurdles to overcome. How can I, a lowly librarian with no grant funding or experience in this area, publish an open data set such that others can locate and reuse it? Let’s find out.

Background

As Lauren introduced in her last post, we here at ACRL Tech Connect are performing research into coding in libraries; how people learn to code, what learning resources they use, what languages they use. As part of this research, I wanted to compare what our survey respondents reported with a bulk analysis of GitHub repositories under library organizations. The Code4Lib wiki has an excellent page listing many library GitHub accounts, and GitHub has a nice API that reports, among many other things, the various languages used in a project. Those two sources of information seemed like a perfect match, so I wrote a few scripts to mash them together.

Publishing scripts that extract and analyze data is important. One cannot trust the results of a single scientific experiment or a poor sample set. Providing the programs used to collect data aims to allow reproducibility so future researchers can verify or build upon prior results. While we perhaps think of science as being quite established by now, data and reproducibility are major issues in most fields. Ask any data librarian and they’ll tell you; managing the preservation and distribution of research data is not a solved or simple problem. Furthermore, every so often another meta-research study will show that only some dismal percentage of experiments can be replicated.1

My own data is not so valuable. No cure for a debilitating disease rests on the number of bytes of Standard ML in your university’s GitHub account. But on principle I want my results to be repeatable and, what’s more, if someone does find an error in my scripts or data I want it corrected. Even if my initial conclusions are off, someone might be able to construct a stronger study from them.

Step #1: Obtain a DOI

As the first step of publishing my data, I wanted to obtain a Digital Object Identifier. Sure, putting my work up on GitHub gives me a URL I can reference, but leaving it at that adds a lot of uncertainty. What if I change my GitHub username, which is contained in the URL? What if I transfer the repository to a new owner? What if GitHub goes out of business? While none of these are likely scenarios, they’re still worth guarding against. DOI providers essentially stick to a pact that their identifiers will continue to work in perpetuity. While that’s not always the case, I feel like grabbing a DOI is still The Right Thing To Do for publishers at the present moment.

We can use Zenodo to secure a DOI. GitHub already has a fine guide named Making Your Code Citable, but I’ll lightly outline the process here.

First, we create a Zenodo account reusing our GitHub credentials. Zenodo will list out our repositories and we can click the On button next to one to ready it for publishing. This button establishes a “web hook” between events happening in that GitHub repository and Zenodo; when we go to publish a release, Zenodo will be aware of it.

This was the only step that tripped me up a bit. GitHub’s “releases” are not a part of the git version control system; they’re an added feature of the hosting environment. But in my mind they’re identical to git’s “tags” that one uses to label particular points in a repository’s history. Indeed, when we push a tag up to GitHub, it’ll show up on our repository’s releases page. But it appears tags are not technically releases, or don’t trigger the right web hook, because when I pushed a typical “v1.0.0” tag to GitHub, Zenodo didn’t notice. Instead, I had to go to my releases page, Draft a new release, and then Choose an existing tag to associate the version tag with a GitHub release. The title and description entered at this stage are available later in Zenodo.
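For reference, pushing the tag itself is plain git and looks something like the following (the version number is illustrative):

# create an annotated tag and publish it to GitHub
git tag -a v1.0.0 -m "first public release of the data"
git push origin v1.0.0

The tag duly appears on the releases page, but in my experience it’s the Draft a new release step in GitHub’s web interface that finally makes Zenodo take notice.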

The final step is back in Zenodo, where we can mint a DOI and describe our project further. We have a powerful set of fields for describing our project in Zenodo, including type (e.g. data set, software, presentation, publication), publication date, list of authors, open-ended description, list of keywords, access rights, license, funding agency, alternative identifiers (e.g. PubMed ID), and more. Zenodo also has a “communities” feature where we can deposit our research in a collection with a disciplinary focus; I put my data in the “Library and Information Science” group.

Step #2: Document the Data’s Schema

Obtaining a DOI is fine and all, but I also wanted to document my data more thoroughly. While it’s not a complicated data set, I’m familiar with the challenges that an unknown data schema presents for end users. All too often at work, I’m forced to revise data processing routines because a new outlier appears. There’ll be a string of text where I’m expecting only integers, a blank entry in what I thought was a required field, or an ID that doesn’t conform to the anticipated pattern (punctuation appears in a barcode! a random letter prefixes an otherwise numeric ID!).

To make our data’s structure clear, we can use the Data Package standard from the Open Knowledge Foundation (OKFN), specifically the Tabular Data Package subset which was designed for the CSV (comma-separated values) format.2 Documenting our data is straightforward with these standards; we place a “datapackage.json” file alongside our data files and fill in a few fields. Here’s an example:

{
    "name": "libs-github-api", // must be URL-friendly, e.g. no spaces 
    "description": "library GitHub projects",
    "license": "CC0 Public Domain",
    "keywords": ["libraries", "programming languages"],
    "resources": [ // list of files 
        {
            "name": "summary",
            "path": "data/summary.csv", // UNIX-style path relative to datapackage.json 
            "format": "csv",
            "mediatype": "text/csv",
            "schema": { // outline of fields within this file 
                "fields": [
                    {
                        "name": "language",
                        "type": "string", /* from a controlled list of data types
                        could also be integer, number, date, etc. */
                        "description": "name of the programming language",
                        "constraints": {
                            "required": true,
                            "unique": true
                        }
                    }
                    // our schema would list a few more fields here… 
                ]
            }
        }
    ]
}

Note that the comments above aren’t valid in JSON, I include them simply to provide some inline explanation.

While it requires a little reading to figure out how to fill out datapackage.json fields, many are self-evident. The appeal of the standard becomes evident in the schema section; we can tell consumers what types to expect from our data and other particularities of a given CSV column. Does a column contain empty values? Then required will be absent or explicitly set to false. Does a column contain both integers and text? Then the type “string” warns consumers not to anticipate only numeric values. What’s more, we can provide a regular expression in the pattern constraint which specifies exactly how a field may be formatted. Even strange barcodes with surprise punctuation could be documented precisely.
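As a hypothetical illustration, a field definition for one of those unruly barcodes might look like this (the field name and regular expression are invented for the example):

{
    "name": "barcode",
    "type": "string",
    "description": "item barcode: fourteen digits, optionally followed by a hyphen and a check digit",
    "constraints": {
        "required": true,
        "pattern": "^[0-9]{14}(-[0-9])?$"
    }
}

Any row whose barcode strays from that pattern would fail validation, so consumers know exactly what to expect before they write a single line of processing code.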

I would say there’s a lot more to the Data Package standards, but the truth is they’re elegant and concise. One can read all three (Data Package, Tabular Data Package, and JSON Table Schema) in a matter of minutes, look at an instructive example or two, and be ready to reveal their data’s structure in a standardized way. There is great depth available in the way one describes individual resources and their data schema.

Why spend all this time with a data package when we’ve already done something similar with Zenodo? The data package documentation solves a couple of problems. First of all, packaging up our CSV alongside structured data about its nature addresses findability. There’s tons of open data out there; the issue is that it can be scattered and difficult to find. If someone is looking for statistics on programming language usage, how would they go about finding my data? Searching GitHub will be challenging; the keywords one uses (“programming language”, the ambiguous “libraries”, etc.) will likely retrieve many repositories which don’t contain open data, and GitHub, while it does have a decent advanced search form, doesn’t have the facets to make retrieving a particular data set straightforward. One cannot, for instance, filter search results by a repository’s license or the format (CSV, JSON) of the data contained therein.

Data packages address the issue of findability by providing for the possibility of a registry that aggregates all the data sets it knows about. Once a datapackage.json appears, suddenly information like whether the format is CSV or JSON, what the license is, who created it, and what subject keywords are related to a repository become clear. The Open Knowledge Foundation already has a strong proof-of-concept registry, albeit one that lists only around a hundred data sets.

Since Data Package is an open standard, any third party can easily parse its metadata and provide search facets based on the fields that are present. This is how the standard addresses a second issue: machine readability. Documenting data sets is good, necessary even, but it often only helps humans. I can write a five-page paper meticulously detailing my data’s collection methodology and structure, but that’s asking researchers to do a lot of reading. Now consider that their research might be on a grand scale; imagine if they needed to read a hundred five-page papers describing ad hoc data schemas!

Instead, creating a machine readable description lets my data be processed quickly by a specially designed program. As a somewhat trivial example, I already used the OKFN’s Data Package Validator to ensure my schema documentation met their standard. As a more interesting use case, the OKFN also defines an optional “views” section of the data package standard which allows applications to automatically create charts from our data.
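To make the machine readability point concrete, here’s a hypothetical sketch in Node.js that reads a datapackage.json and prints each resource’s declared fields and types, no human interpretation required:

// hypothetical sketch: report every resource and its declared field types
var fs = require('fs');

var pkg = JSON.parse(fs.readFileSync('datapackage.json', 'utf8'));

pkg.resources.forEach(function (resource) {
    console.log(resource.name + ' (' + resource.path + ')');
    resource.schema.fields.forEach(function (field) {
        var required = field.constraints && field.constraints.required ? ' (required)' : '';
        console.log('  ' + field.name + ': ' + field.type + required);
    });
});

Run against our actual datapackage.json (minus the explanatory comments shown earlier), this would print the “summary” resource and its “language” field as a required string; the same information a registry or charting tool could pick up automatically.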

Reflection

While I’m glad that tools like Zenodo and standards like Data Package exist for publishing data, there’s still a lot of work to be done in this arena. Every time I make a new release on GitHub, which arguably should happen with even minor changes to my data or scripts, I have to refill the extensive Zenodo form. Zenodo also doesn’t detect the GitHub repository’s license, which is hardly blameworthy given that that information isn’t present as structured data but mere text in a readme file. However, when publishing a new version of the same underlying data, it doesn’t fill in the license or other information from its own previous items.

There’s a ton of efficiency left on the table in the data publishing process this post describes. Specifically, an integration between Zenodo and the datapackage.json metadata would alleviate a number of problems. Rather than repeatedly filling out a form in Zenodo, one could simply ensure changes were reflected in the datapackage.json and publish a new version on GitHub. Many fields between the two are redundant, though each also has its unique value; Zenodo asks for typical academic publishing information (e.g. publication type, links to prior versions) while Data Package asks for a data schema.

As a final area of concern, the open-ended “license” field is going to eventually limit the utility of the machine readable information in a Data Package.3 Perhaps this is my inner librarian unnecessarily freaking out here, but uncontrolled fields which affect resource reuse are bad news. Defaulting to authors specifying an arbitrary string of text as a license is precisely the problem that the Digital Public Library of America and other large digital libraries are facing, as their corpuses contain thousands of different rights statements.4 Zenodo provides a substantial list of licenses to choose from, but then does a poor job of automatically detecting one even if hints are available via GitHub or a previous incarnation of the publication. GitHub itself should probably make licenses for repositories required and controlled as I could see that being a vital facet in their advanced search as well as interesting data to expose to researchers via their API.

  1. I read something about this a month or two ago but wasn’t able to relocate the source. Scouring the web, I found a Washington Post article from January on the phenomenon of irreproducible research, which in turn points to a 2005 PLoS Medicine article, “Why Most Published Research Findings Are False”. Other studies along these lines are “A Survey on Data Reproducibility in Cancer Research Provides Insights into Our Limited Ability to Translate Findings from the Laboratory to the Clinic” in PLoS ONE and “Drug development: Raise standards for preclinical cancer research” in Nature.
  2. I might have been able to explore another intriguing project, Research Objects, which has the apt tagline “enabling reproducible, transparent research.” However, the Data Package standards were so easy to find and follow conceptually that I chose them.
  3. And to be fair, I did see one example where the licenses JSON property was specified as an array of objects containing a license name and URL, which might be easier to consume in script depending on what’s available at the URL.
  4. Aside: I don’t mean to argue that arbitrary license strings should be prohibited, because no controlled vocabulary is going to enumerate all possible choices. But there’s a lot of good work being done to make licenses easier to specify—think of Creative Commons with their composable, versioned licenses which can be referred to by URL. Defaulting to a controlled list of license types or at least pointing to a preferred vocabulary would help here.

Educating Your Campus about Predatory Publishers

The recent publication of Monica Berger and Jill Cirasella’s piece in College and Research Libraries News, “Beyond Beall’s List: Better understanding predatory publishers,” is a reminder that the issue of “predatory publishers” continues to require attention from those working in scholarly communication. Berger and Cirasella have done an exemplary job of laying out some of the issues with Beall’s list, and they call on librarians to be able “to describe the beast, its implications, and its limitations—neither understating nor overstating its size and danger.”

At my institution, academic deans have identified “predatory” journals as an area of concern, and I am sure similar conversations are happening elsewhere. Here’s how I’ve “described the beast” at my institution, along with models for services we all can provide, whether as subject librarians or scholarly communication librarians.

What is a Predatory Publisher? And Why Does the Dean Care?

The concept of predatory publishers became much more widely known in 2013 with the publication of an open access sting by John Bohannon in Science, which I covered in this post. As a recap, Bohannon created a fake, poor-quality scientific article that was nonetheless initially believable, and submitted it to open access journals. He found that the majority of the journals accepted the poor-quality paper, and 45% of those that accepted it were included in the Directory of Open Access Journals. When the article was published in October 2013, the response in the scholarly communications world was explosive, and it seems that more than a year later the reaction continues to spread. Late in the fall semester of 2014, library administration asked me to prepare a guide about predatory publishers, due to concern among the deans that unscrupulous publishers might be taking advantage of faculty. This was a topic I had been educating faculty about on an ad hoc basis for years, but I had not realized we needed to address it more systematically. That has all changed, with senior library administration now giving regular presentations about predatory publishers to faculty.

If we are to be advocates of open access, we need to focus on its positive impact rather than dwell too long on its downsides. We also need faculty to be clear on their own goals for making their work open access so that they can make more informed choices. Librarians have only so much faculty attention to work with, so focusing education on self-archiving articles (otherwise known as green open access) or on choosing no-fee (also known as gold) open access journals is a better way to achieve advocacy goals than suggesting faculty publish only in a certain set of gold open access journals. Unless we are offering money to pay article fees, we also don’t have much say about where faculty choose to publish. Education about how to choose a journal and a license responsibly is what we should focus on, even if it diverges from certain ideals (see Meredith Farkas on choosing Creative Commons licenses).

Understanding the Needs and Preparing the Material

As I mentioned, my library administration asked for a guide that they could use in presentations and share with faculty. In preparing this guide, I worked with our library’s Scholarly Communications committee (of which I am co-chair) to determine the format and content.

We decided that adding this material to our existing Open Access research guide would be the best move, since the guide was already in place and we had already shared its URL widely. We have a robust series of Open Access Week events (which I wrote about last fall), and this seemed the ideal place to continue engaging people. That said, we determined that the guide needed an overhaul to make it clearer that open access is an ongoing area of concern, not a once-a-year event. Since faculty are not always thinking first about making work open access but rather about the mechanics of publishing, I preferred to start with the title “Publishing Your Own Work”.

To describe the guide’s features a bit more: I wanted to start from the mindset of self-archiving work to make it open access, so the guide opens with a description of our repository and Peter Suber’s useful guide to making one’s own work open access. I then continued with an explanation of article publication fees, since I often get questions along those lines. Such fees are not unique to open access journals, and paying one does not mean a journal will accept anything for publication, which was a fear I heard more than once during Open Access Week last year. Only then did I discuss the concept of predatory journals, in the hope that a basic understanding of the publication process would allay fears. Next comes a list of steps for researching a journal. I thought these steps were more common sense than anything, but after conversations with faculty and administration, I realized that my intuition about what type of journal I am dealing with only seems obvious because I have daily practice and experience. For people new to the topic, I tried to break the research down into easy steps that help them figure out where a journal falls on the continuum from outright scam to legitimate but new or unusual. It was also important to me to emphasize self-archiving as a strategy regardless of the journal’s publication model.

Lastly, while most academic libraries have a model of liaison librarians engaging in scholarly communications activities, the person who spends every day working on these issues is likely to be more versed in emerging trends. So it is important to work with liaisons to help them research journals and to identify quality open access journals in their disciplines. We plan to add this information to the guide in a future version.

Taking it on the Road

We felt that in-person instruction for faculty on these matters was a crucial next step, particularly for people who publish in traditional journals but want to make their work available. Traditional journals’ copyright transfer agreements can be predatory too, even if we don’t usually think of them in those terms. Taking inspiration from the ACRL Scholarly Communications Roadshow I attended a few years ago, I adapted the curriculum from that program and offered it to faculty and graduate students. We read through three publication agreements as a group and then discussed how open the publishers were to reuse of material, or whether they mentioned it at all. We ended with a section on contract addenda that can be used to negotiate for additional rights.

The first workshop received modest attendance but included some thoughtful conversations, and we have promised to run it again. Some people had never read their agreements closely, and never realized that, for instance, sharing an article they wrote with their students might be something the agreement does not specifically allow, or even prohibits. That concrete realization is more likely to spur action than abstract arguments about the benefits of open access.

Escaping the Predator Metaphor

If I could go back, I would get rid of the concept of “predator” attached to open access journals. Let’s instead call these publishers unscrupulous entrants into an emerging business model. That’s not as catchy, but it explains why this has happened. I would argue, personally, that the hybrid gold journals from large publishers are just as predatory, since they capitalize on funding requirements to make articles open access by charging high fees. They too are trying new business models, and those may not be tenable either. As I said above, choosing a journal with eyes wide open and understanding all the ramifications of different publication models is the only way forward. To suggest that faculty are innocently waiting to be pounced on by predators is to deny their agency and their ability to make choices about their own work. There may be days when that metaphor seems apt, but overall I think it is a damaging mentality for librarians interested in promoting new models of scholarly communication. I hope we can provide better resources and programming to escape it, as well as to help administrators understand how to choose which open access initiatives to fund.

In the comments I’d like to hear more suggestions about how to escape the “predator” metaphor, as well as your own techniques for educating faculty on your campus.


Making Open Access Everyone’s Business

Librarians should have a role in promoting open access content. The best methods, and whether they are successful, are a matter of heated debate. Take for example a recent post by Micah Vandergrift on the ACRL Scholarly Communications mailing list, calling on librarians to stage a publishing walkout and publish only in open access library and information science journals. Many have already done so. Others, like myself, have published in traditional journals (only once in my case) but make a point of making their work available in institutional repositories. I personally would not publish in a journal that did not allow such use of my work, and I know many who feel the same way.1 The point is, of course, to ensure that librarians are not hypocritical in their own publishing and in their use of repositories to provide open access–a long-standing problem pointed out by Dorothea Salo,2 among others.3 We know that many of the reasons faculty hesitate to participate in open access publishing relate to promotion and tenure requirements, which generally are more flexible for academic librarians (though not in all cases–see Abigail Goben’s open access tenure experiment). I suspect that librarians’ lack of participation often comes down to more mundane reasons: forgetting to deposit work, or fearing it is not good enough to make public.

But it shouldn’t be only staunch advocates of open access, open peer review, or new digital models for work and publishing who participate. We have to find ways to advocate and educate in a gentle but vigorous manner, and to reach out to new faculty and graduate students who need to start participating now if the future is to be different. Enter Open Access Week, a now eight-year-old celebration of open access organized by SPARC. Just as Black Friday is the day retailers hope to go into the black, Open Access Week has become an occasion to organize around and finally share our message with willing ears. Right?

It can be, but it requires a good deal of institutional dedication to make it happen. At my institution, Open Access Week is a big deal. I am co-chair of a new Scholarly Communications committee that is responsible for planning the week (its predecessor existed only to plan the week, but the new committee’s scope has been extended). The committee has representation from Systems, Reference, Access Services, and the Information Commons, so we are able to touch on all aspects of open access. Last year we had events five days out of five; this year we are having events four days out of five. Here are some of the approaches we are taking to create successful conversations around open access.

  • Focus on the successes and the impact of your faculty, whether or not they are publishing in open access journals.

The annual Celebration of Faculty Scholarship takes place during Open Access Week, and brings together physical material published by all faculty at a cocktail reception. We obtain copies of articles and purchase books written by faculty, and set up laptops to display digital projects. This is a great opportunity to find out exactly what our faculty are working on, and get a sense of them as researchers that we may normally lack. It’s also a great opportunity to introduce the concept of open access and recruit participants to the institutional repository.

  • Highlight the particular achievements of faculty who are participating in open access.

We place stickers on materials at the Celebration that are included in the repository or are published in open access journals. This year we held a panel with faculty and graduate students who participate in open access publishing to discuss their experiences, both positive and negative.

  • Demonstrate the value the library adds to open access initiatives.

Recently bepress (which makes Digital Commons, the platform our repository runs on) introduced a real-time map of repository downloads that was a huge hit this year. It was a compelling visual illustration of the global impact of work in the repository. Faculty were thrilled to see their work being read across the world, and it helped solve the problem of invisible impact. We also highlighted our impact with a new handout that lists key metrics about our repository, including the new open access journal we host.

  • Talk about the hard issues in open access and the controversies surrounding it, for instance, CC-BY vs. CC-NC-ND licenses.

It’s important not to sugarcoat or spin challenging issues in open access; include multiple perspectives and invite difficult conversations. Show scholars the evidence and let them draw their own conclusions, though be sure to step in and correct misunderstandings.

  • Educate about copyright and fair use, over and over again.

These issues are complicated even for people who work on them every day, and are constantly changing. Workshops, handouts, and consultation on copyright and fair use can help people feel more comfortable in the classroom and participating in open access.

  • Make it easy.

Examine what you are asking people to do to participate in open access. Rearrange workflows, cut red tape, and improve interfaces. Open Access Week is a good time to introduce new ideas, but this should be happening all year long.

We can’t expect revolutions in policy and practice to happen overnight, or without some sacrifice. Whether your stand is to publish only in open access journals or to take some other path, make your stand and help others who wish to do the same.

Notes
  1. Publishers have caught on to this tendency in librarians. For instance, Taylor and Francis has 12-18 month repository embargoes for all its journals except LIS journals. Whether this is because of the good work we have done in advocacy or a conciliatory gesture remains up for debate.
  2. Salo, Dorothea. “Innkeeper at the Roach Motel,” December 11, 2007. http://digital.library.wisc.edu/1793/22088.
  3. Xia, Jingfeng, Sara Kay Wilhoite, and Rebekah Lynette Myers. “A ‘librarian-LIS Faculty’ Divide in Open Access Practice.” Journal of Documentation 67, no. 5 (September 6, 2011): 791–805. doi:10.1108/00220411111164673.

Taking a Practical Look at the Google Books Case

Last month we got the long-awaited ruling in favor of Google in the Authors Guild vs. Google Books case, which by now has been analyzed extensively. Ultimately the judge in the case decided that Google’s digitization was transformative and thus constituted fair use. See InfoDocket for detailed coverage of the decision.

The Google Books project was part of Google’s mission to index all the information available, and as such it could never have taken place without libraries, which hold all those books. While most, if not all, of the librarians I know use Google Books in their work, there has always been a sense that the project should not have been undertaken by a commercial enterprise using the intellectual resources of libraries, but by libraries themselves working together. Yet libraries are often forced to be more conservative about digitization than we might otherwise be, due to rules designed to protect the college or university from litigation. This ruling has made it seem as though we could afford to be less cautious. As Eric Hellman points out, the decision seems to imply that with copyright the ends are the important part, not the means: “In Judge Chin’s analysis, copyright is concerned only with the ends, not the means. Copyright seems not to be concerned with what happens inside the black box.”1 As long as the end use of the books was fair, which was deemed to be the case, the initial digitization was not a problem.

Looking at this from the perspective of a repository manager, I want to address a few of the theoretical and logistical issues behind such a conclusion for libraries.

What does this mean for digitization at libraries?

At the beginning of 2013 I took over an ongoing digitization project, and as a first-time manager of a large-scale long-term project, I learned a lot about the processes involved in such a project. The project I work with is extremely small-scale compared with many such projects, but even at this scale the project is expensive and time-consuming. What makes it worth it is that long-buried works of scholarship are finally being used and read, sometimes for reasons we do not quite understand. That gets at the heart of the Google Books decision—digitizing books in library stacks and making them more widely available does contribute to education and useful arts.

There are many issues that we need to address, however. Some of the most important are determining what access can and should be provided to which works, and making mass digitization more feasible for smaller and international cultural heritage institutions. Google Books could succeed because it had the financial and computing resources of Google matched with the cultural resources of the participating research libraries. The problem is international in scope; I encourage you to read this essay by Amelia Sanz, in which she argues that digitization efforts so far have been inherently unequal and a reflection of colonialism.2 But is there a practical way of approaching this desire to make books available to a wider audience?

Providing Access

There are several separate issues in providing access. Books that are in the public domain are unquestionably fine to digitize, though differences in international copyright law make it difficult to determine what can be provided to whom. As Amelia Sanz points out, Google can only digitize Spanish works prior to 1870 in Spain, but may digitize the complete work in the United States. The complete work is not available to Spanish researchers, but it is available in full to US researchers.

That aside, there are several reasons why it is useful to digitize works still unquestionably under copyright. One of the major reasons is textual corpus analysis–you need every word of many texts available to draw conclusions about the use of words and phrases across those texts. The Google Books Ngram Viewer is one such tool that has come out of mass digitization. Searching for a phrase in Google and finding it as a snippet in a book is an important way to surface information in books that might otherwise be ignored in favor of online sources. Some argue that this means those books will not be purchased when they otherwise might have been, but it is equally possible that this leads to greater discovery and more purchases, as research into music piracy suggests may be the case.
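To give a sense of why the full text matters for this kind of work, here is a toy example of corpus analysis over a directory of plain-text files. It counts a single placeholder phrase where a real project would do far more, but even this much is impossible without every word of every text.

    import pathlib

    # Count occurrences of a phrase across a directory of full-text files.
    # The directory name and phrase are placeholders.
    phrase = "institutional repository"
    corpus_dir = pathlib.Path("corpus")

    total = 0
    for path in sorted(corpus_dir.glob("*.txt")):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        count = text.count(phrase)
        total += count
        print(f"{path.name}: {count}")

    print(f"total occurrences of '{phrase}': {total}")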

Another reason to digitize works still under copyright is to highlight the work of marginalized communities, though in that case it is imperative to work with those communities to ensure that the digitization is not exploitative. Many orphan works, for which a rights-holder cannot be located, fall into this category, and I know from volunteer work I have done that small cultural heritage institutions are eager to digitize material that represents the cultural and intellectual output of their communities.

In all the above cases, it is crucial to put mechanisms in place to ensure that works under copyright are not abused. Google Books uses an algorithm that makes it impossible to read an entire book, which is probably beyond the abilities of most institutions. (If anyone has an idea for how to do this, I would love to hear it.) A simpler and more practical way to limit access is to make only a chapter or sample of a book available for public use, which many publishers already allow. For instance, Oxford University Press allows up to 10% of a work (within certain limits) on personal websites or institutional repositories. (That is, of course, assuming you can get permission from the author.) Many institutions maintain “dark archives”, which are digitized and (usually) indexed archives of material inaccessible to the public, whether institutional or research information. For instance, the US Department of Energy Office of Scientific and Technical Information maintains a dark archive index of technical reports comprising the equivalent of 6 million pages, which makes it possible to quickly find relevant information.
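As one concrete, low-tech way to implement the sample-only approach, the sketch below copies roughly the first ten percent of a digitized book’s pages into a separate PDF for public access, using the open source pypdf library. The file names and the percentage are placeholders for whatever a publisher’s terms actually allow.

    from pypdf import PdfReader, PdfWriter

    # Copy roughly the first 10% of a digitized book's pages into a public sample.
    # File names are placeholders; the full scan stays in the dark archive.
    reader = PdfReader("digitized-book-full.pdf")
    sample_size = max(1, len(reader.pages) // 10)

    writer = PdfWriter()
    for i in range(sample_size):
        writer.add_page(reader.pages[i])

    with open("digitized-book-sample.pdf", "wb") as f:
        writer.write(f)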

In any case where an institution makes the decision to digitize and make available the full text of in-copyright materials for reasons they determine are valid, there are a few additional steps that institutions should take. Institutions should research rights-holders or at least make it widely known to potential rights-holders that a project is taking place. The Orphan Works project at the University of Michigan is an example of such a project, though it has been fraught with controversy. Another important step is to have a very good policy for taking down material when a rights-holder asks–it should be clear to the rights-holder whether any copies of the work will be maintained and for what purposes (for instance archival or textual analysis purposes).

Digitizing, Curating, Storing, Oh My!

The above considerations only matter if it is even possible for institutions without the resources of Google to start a digitization program. There are many examples of DIY digitization by individuals; see, for instance, Public Collectors, a listing of privately held collections open for public access, much of it digitized by passionate volunteers. Marc Fischer, the curator of Public Collectors, also digitizes important and obscure works and posts them on his site, which he funds himself. Realistically, the entire internet contains examples of digitization of various kinds and various legal statuses. Most of this takes place on cheap and widely available equipment such as flatbed scanners, but it is possible to build an overhead book scanner for large-scale digitization from individual parts at a reasonable cost. For instance, the DIY Book Scanning project provides instructions and free software for creating a book scanner. As they say on the site, all the process involves is to “[p]oint a camera at a book and take pictures of each page. You might build a special rig to do it. Process those pictures with our free programs. Enjoy reading on the device of your choice.”

“Processing the pictures” is a key problem to solve. Turning images into PDF documents is one thing, but providing high-quality optical character recognition is extremely challenging. Free tools such as FreeOCR make it possible to do OCR from image or PDF files, but this takes processing power, and results vary widely, particularly if the scan quality is low. Even expensive tools like Adobe Acrobat or ABBYY FineReader have the same problems. Karen Coyle points out that uncorrected OCR text may be sufficient for searching and corpus analysis, but it does not provide a faithful reproduction of the text and so cannot, for instance, provide access to visually impaired persons.3 This is a problem well known in the digital humanities world, and one solved by projects such as Project Gutenberg with the help of dedicated volunteer distributed proofreaders. Additionally, a great deal of material clearly in the public domain is in manuscript form or has text that modern OCR cannot recognize. In those cases, crowdsourcing transcription is the only financially viable way for institutions to make the text of the material available.4 Examples of successful projects using volunteer transcribers or proofreaders include Ancient Lives for transcribing ancient papyri, What’s on the Menu at the New York Public Library, and DIYHistory at the University of Iowa libraries. (The latter has provided step-by-step instructions for building your own version using open source tools.)
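For anyone curious what the free route looks like in practice, here is a minimal sketch using the open source Tesseract engine (through its pytesseract Python wrapper) rather than the tools named above; the file names are placeholders. It produces exactly the kind of uncorrected text Karen Coyle describes, so the output is a starting point for searching rather than a faithful reproduction.

    from PIL import Image
    import pytesseract

    # Run OCR on one scanned page image.
    # Output quality depends heavily on the quality of the scan.
    page = Image.open("scans/page-001.png")
    text = pytesseract.image_to_string(page)

    # Keep the uncorrected OCR text alongside the image for indexing or later proofreading.
    with open("scans/page-001.txt", "w", encoding="utf-8") as f:
        f.write(text)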

So now you’ve built your low-cost DIY book scanner, and put together a suite of open source tools to help you process your collections for free. Now what? The whole landscape of storing and preserving digital files is far beyond the scope of this post, but the cost of accomplishing this is probably the highest of anything other than staffing a digitization project, and it is here where Google clearly has the advantage. The Internet Archive is a potential solution to storing public domain texts (though they are not immune to disaster), but if you are making in-copyright works available in any capacity you will most likely have to take the risk on your own servers. I am not a lawyer, but I have never rented server space that would allow copyrighted materials to be posted.

Conclusion: Is it Worth It?

As this post makes obvious, I am in favor of taking on digitization projects of both public domain and copyrighted materials when the motivations are good and the policies are well thought out. From this perspective, I think the Google Books decision was a good thing for libraries and for providing greater access to library collections. Libraries should be smart about what types of materials to digitize, but there are now more possibilities for large-scale digitization, and by providing more access, the research community can determine what is useful to it.

If you have managed a DIY book scanning project, please let me know in the comments, and I can add links to your project.

  1. Hellman, Eric. “Google Books and Black-Box Copyright Jurisprudence.” Go To Hellman, November 18, 2013. http://go-to-hellman.blogspot.com/2013/11/google-books-and-black-box-copyright.html.
  2. Sanz, Amelia. “Digital Humanities or Hypercolonial Studies?” Responsible Innovation in ICT (June 26, 2013). http://responsible-innovation.org.uk/torrii/resource-detail/1249#_ftnref13.
  3. Coyle, Karen. “It’s FAIR!” Coyle’s InFormation, November 14, 2013. http://kcoyle.blogspot.com/2013/11/its-fair.html.
  4. For more on this, see Ben Brumfield’s work on crowdsourced transcription, for example Brumfield, Ben W. “Collaborative Manuscript Transcription: ‘The Landscape of Crowdsourcing and Transcription’ at Duke University.” Collaborative Manuscript Transcription, November 23, 2013. http://manuscripttranscription.blogspot.com/2013/11/the-landscape-of-crowdsourcing-and.html.