Responsibilities For Open Access

In honor of Open Access Week, I want to look at some troubling recent discussions about open access, and what academic librarians who work with technology can do. As the manager of an open access institutional repository, I strongly believe that providing greater access to academic research is a good worth pursuing. But I realize that this comes at a cost, and that we have a responsibility to ensure that open access also means integrity and quality.

On “stings” and quality

By now, the article by John Bohannon in Science has been thoroughly dissected in the blogosphere 1. This was not a study per se, but rather a piece of investigative journalism looking into the practices of open access journals. Bohannon submitted variations on a deliberately flawed article, written under African pseudonyms from fake universities, so weak that “any reviewer with more than a high-school knowledge of chemistry…should have spotted the paper’s short-comings immediately.” Over the course of 10 months, he submitted these articles to 304 open access journals whose names he drew from the Directory of Open Access Journals and Jeffrey Beall’s list of predatory open access publishers. Ultimately 157 of the journals accepted the article and 98 rejected it, when any real peer review would have meant rejection in every case. It is worth noting that, in an analysis of the raw data Bohannon supplied, some publishers on Beall’s list rejected the paper immediately–a good reminder to take all curation efforts with an appropriate amount of skepticism 2.

There are certainly many methodological flaws in this investigation, which Mike Taylor outlines in detail in his post 3, and which he concludes was specifically aimed at discrediting open access journals in favor of journals such as Science. As Michael Eisen outlines, Science has not been immune to publishing articles that should have been rejected after peer review–though Bohannon told Eisen that he had intended to look at a variety of journals but found this impractical, and that the decision was not driven by editors at Science. Eisen’s conclusion is that “peer review is a joke” and that we need to stop regarding the publication of an article in any journal as evidence that the article is worthwhile 4.

Phil Davis at the Scholarly Kitchen took issue with this conclusion (among others noted above), since despite the flaws, the investigation did turn up incontrovertible evidence that “a large number of open access publishers are willfully deceiving readers and authors that articles published in their journals passed through a peer review process…” 5. His conclusion is that open access organizations such as OASPA and DOAJ should be better at policing themselves, and that on the other side Jeffrey Beall should be cautious about suggesting a potential for guilt without evidence.

I think one of the more level-headed responses to this piece comes from outside the library and scholarly publishing world in Steven Novella’s post on Neurologica, a blog focused on science and skepticism written by an academic neurologist. He is a fan of open access and wider access to information, but makes the point familiar to all librarians that the internet creates many more opportunities to distribute both good and bad information. Open access journals are one response to the opportunities of the internet, and author-pays journals in particular illustrate his point that “all new ‘funding models’ have the potential of creating perverse incentives.” Traditional journals fall into the same trap when they rely on impact factor to drive subscriptions, which means they may end up publishing “sexy” studies of questionable validity or failing to publish replication studies, which are the backbone of the scientific method–and in fact the only real way to establish results no matter what type of peer review has been done 6.

More “perverse incentives”

So far the criticisms of open access have revolved around one type of “gold” open access, wherein the author (or a funding agency) pays article publication fees. “Green” open access, in which a version of the article is posted in a repository, is not susceptible to abuse in quite the same way. Yet a new analysis of embargo policies by Shan Sutton shows that some publishers are targeting green open access as well. Springer used to apply a 12-month embargo to mandated deposits in repositories such as PubMed, but has now extended it to all institutional repositories. Emerald changed its policy so that any mandated deposit to a repository (whether by funder or institutional mandate) is subject to a 24-month embargo 7.

In both cases, paid immediate open access is available for $1,595 (Emerald) or $3,000 (Springer). The publishers seem to be counting on a “mandate” meaning that funds are available for this sort of hybrid gold open access, but that ignores the philosophy behind such mandates. While federal open access mandates do rest in part on the financial principle that the public should not have to pay twice for research, Sutton argues that open access “mandates” at institutions are actually voluntary initiatives by the faculty, which grant waivers without question 8. Additionally, while this type of open access does provide public access to the article, it does not address the larger issues of reuse of the text or data that define open access in the fuller sense.

What should a librarian do?

The issues above are complex, but there are a few trends that we can draw on to understand our responsibilities to open access. First, there is the issue of quality, both in terms of the researcher’s experience in working with a journal and in terms of being able to trust the validity of an individual article. Second, we have to be aware of the terms that institutional policies may impose on authors. As with many such problems, the technological issues are relatively trivial. Addressing them meaningfully will not happen with technology alone, but with education, outreach, and network building.

The major thing we can take away from Bohannon’s work is that we have to help faculty authors to make good choices about where they submit articles. Anyone who works with faculty has stories of extremely questionable practices by journals of all types, both open access and traditional. Speaking up about those practices on an individual basis can result in lawsuits, as we saw earlier this year. Are there technical solutions that can help weed out predatory publishers and bad journals and articles? The Library Loon points out that many factors, some related to technology, have meant that both positive and negative indicators of journal quality have become less useful in recent years. The Loon suggests that “[c]reating a reporting mechanism where authors can rate and answer relatively simple questions about their experiences with various journals seems worthwhile.” 9

The comments to this post have some more suggestions, including open peer review and a forum backed by a strong editor that could be a Yelp-type site for academic publisher reputation. I wrote about open peer review earlier this year in the context of PeerJ, and participants in that system did indeed find the experience of publishing in a journal with quick turnarounds and open reviews pleasant. (Bohannon did not submit a fake article to PeerJ.) This solution requires that journals have a more robust technical infrastructure as well as a new philosophy of peer review. More importantly, this is not a solution librarians can implement for our patrons–it is something that has to come from the journals.

The idea that seems to be catching on more is the “Yelp” for scholarly publishers. This seems like a good potential solution, albeit one that would require a great deal of coordinated effort to be truly useful. The technical parts of this type of solution would be relatively easy to carry out. But how do we ensure that it is useful for its users? The Yelp analog may be particularly helpful here. When it launched in 2004, Yelp asked users searching for local business information some basic questions about what they needed, and asked them to provide the email addresses of additional people whom they would traditionally have asked for this information. Yelp then emailed those people, as well as others who had made similar searches, for reviews of local businesses to build up its base of information. 10 Yelp took a risk in pursuing content in that way, since it could have been off-putting to potential users. But local business information was valuable enough to early users that they were willing to participate, and this seems like a perfect model for building up a base of information on journal publisher practices.

This helps address the problem of predatory publishers and shifting embargoes, but it doesn’t help as much with the issue of quality assurance for article content. Librarians teach students how to find articles that claim to be peer reviewed, but long before Bohannon we knew that peer review quality varies greatly, and even when done well it tells us nothing about the validity of the research findings. Education about the scholarly communication cycle and the scientific method, along with critical thinking skills, is the most essential tool to ensure that students are using appropriate articles, open access or not. However, those skills are difficult to bring to bear even for the most highly experienced researchers trying to keep up with a large volume of published research. There are a few technical solutions that may be of help here. Article-level metrics, particularly alternative metrics, can aid in seeing how articles are being used. (For more on altmetrics, see this post from earlier this year.)

One of the easiest options for article-level metrics is the Altmetric.com bookmarklet. This provides article-level metrics for many articles with a DOI, or for articles from PubMed and arXiv. Altmetric.com also offers an API with a free tier for developing your own app. An open source option for article-level metrics is PLOS’s Article-Level Metrics, a Ruby on Rails application. These solutions do not guarantee article quality, of course, but they can help weed out more marginal articles.
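As a rough illustration, here is a minimal sketch in Python of what querying that free tier might look like. It assumes Altmetric’s v1 “fetch by DOI” endpoint, which at the time of writing could be called without a key for light use; heavier use requires an API key, so verify the details against Altmetric’s own documentation before relying on them.

    # Minimal sketch: look up Altmetric attention data for a single DOI.
    # Assumes the v1 endpoint https://api.altmetric.com/v1/doi/<doi>;
    # a 404 response means Altmetric is not tracking that article.
    import requests

    def altmetric_for_doi(doi):
        response = requests.get("https://api.altmetric.com/v1/doi/" + doi)
        if response.status_code == 404:
            return None  # article not tracked by Altmetric
        response.raise_for_status()
        return response.json()

    # Example: Bohannon's "Who's Afraid of Peer Review?" article
    data = altmetric_for_doi("10.1126/science.342.6154.60")
    if data:
        print(data["title"], "- Altmetric score:", data["score"])

A script like this could be run over a list of DOIs from a reading list or repository export to get a quick sense of which articles are attracting attention.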

No one needs to be afraid of open access

For those working with institutional repositories or other open access issues, it sometimes seems very natural for Open Access Week to fall so near Halloween. But it does not have to be frightening. Taking responsibility for thoughtful use of technical solutions, along with ongoing outreach and education, is essential work that can lead to important changes in attitudes toward open access and in scholarly communication.

 

Notes

  1. Bohannon, John. “Who’s Afraid of Peer Review?” Science 342, no. 6154 (October 4, 2013): 60–65. doi:10.1126/science.342.6154.60.
  2. “Who Is Afraid of Peer Review: Sting Operation of The Science: Some Analysis of the Metadata.” Scholarlyoadisq, October 9, 2013. http://scholarlyoadisq.wordpress.com/2013/10/09/who-is-afraid-of-peer-review-sting-operation-of-the-science-some-analysis-of-the-metadata/.
  3. Taylor, Mike. “Anti-tutorial: How to Design and Execute a Really Bad Study.” Sauropod Vertebra Picture of the Week. Accessed October 17, 2013. http://svpow.com/2013/10/07/anti-tutorial-how-to-design-and-execute-a-really-bad-study/.
  4. Eisen, Michael. “I Confess, I Wrote the Arsenic DNA Paper to Expose Flaws in Peer-review at Subscription Based Journals.” It Is NOT Junk, October 3, 2013. http://www.michaeleisen.org/blog/?p=1439.
  5. Davis, Phil. “Open Access ‘Sting’ Reveals Deception, Missed Opportunities.” The Scholarly Kitchen. Accessed October 17, 2013. http://scholarlykitchen.sspnet.org/2013/10/04/open-access-sting-reveals-deception-missed-opportunities/.
  6. Novella, Steven. “A Problem with Open Access Journals.” Neurologica Blog, October 7, 2013. http://theness.com/neurologicablog/index.php/a-problem-with-open-access-journals/.
  7. Sutton, Shan C. “Open Access, Publisher Embargoes, and the Voluntary Nature of Scholarship: An Analysis.” College & Research Libraries News 74, no. 9 (October 1, 2013): 468–472.
  8. Ibid., 469.
  9. Loon, Library. “A Veritable Sting.” Gavia Libraria, October 8, 2013. http://gavialib.com/2013/10/a-veritable-sting/.
  10. Cringely, Robert. “The Ears Have It.” I, Cringely, October 14, 2004. http://www.pbs.org/cringely/pulpit/2004/pulpit_20041014_000829.html.

An Experiment with Publishing on GitHub

Scholarly publishing, if you haven’t noticed, is nearing a crisis. Authors are questioning the value added by publishers. Open Access publications are growing in number and popularity. Peer review is being criticized and re-invented. Libraries are unable to pay price increases for subscription journals. Traditional measures of scholarly impact and journal rankings are being questioned while new ones are developed. Fresh business models or publishing platforms appear to spring up daily.1

I personally am a little frustrated with scholarly publishing, albeit for reasons not entirely related to the above. I find that most journals haven’t adapted to the digital age yet and thus are still employing editorial workflows and yielding final products suited to print.

How come I have yet to see a journal article PDF with clickable hyperlinks? For that matter, why is PDF still the dominant file format? What advantage does a fixed-width format hold over flexible, fluid-width HTML?2 Why are raw data not published alongside research papers? Why are software tools not published alongside research papers? How come I’m still submitting black-and-white charts to publications which are primarily read online? Why are digital-only publications still bound to regular publication schedules when they could publish like blogs, as soon as the material is ready? To be fair, some journals have answered some of these questions, but the issues are still all too frequent.

So, as a bit of an experiment, I recently published a short research study entirely on GitHub.3 I included the scripts used to generate data, the data, and an article-like summary of the whole process.

What makes it possible

Unfortunately, I wouldn’t recommend my little experiment for most scholars, except perhaps for pre- or post-prints of work published elsewhere. Why? The primary reason people publish research is for tenure review, for enhancing a CV. I won’t list my study—though, arguably, I should be able to—simply because it didn’t go through the usual scholarly publishing gauntlet. It wasn’t peer-reviewed, it didn’t appear in a journal, and it wouldn’t count for much in the eyes of traditional faculty members.

However, I’m at a community college. Research and publication are not among my position’s requirements. I’m judged on my teaching and various library responsibilities, while publications are an unnecessary bonus. Would it help to have another journal article on my CV? Yes, probably. But there’s little pressure and personally I’m more interested in experimentation than in lengthening my list of publications.

Other researchers might also worry about someone stealing their ideas or data if they begin publishing an incomplete project. For me, again, publication isn’t really a competitive field. I would be happy to see someone reuse my project, even if they didn’t give proper attribution back. Openness is an advantage, not a vulnerability.

It’s ironic that being at a non-research institution frees me up to do research. It’s done mostly in my free-time, which isn’t great, but the lack of pressure means I can play with modes of publication, or not worry about the popularity of journals I submit to. To some degree, this is indicative of structural problems with scholarly publishing: there’s inertia in that, in order to stay in the game and make a name for yourself, you can’t do anything too wild. You need to publish, and publish in the recognized titles. Only tenured faculty, who after all owe at least some of their success to the current system, can risk dabbling with new publishing models and systems of peer-review.

What’s really good

GitHub, and the web more generally, are great media for scholarship. They address several of my prior questions.

For one, the web is just as suited to publishing data as text. There’s no limit on file format or (practically speaking) size. Even if I were analyzing millions of data points, I could make a compressed archive available for others to download, verify, and reuse in their own research. For my project, I used a Google Spreadsheet, which allows others to download the data or simply view it on the web. The article itself can be published on GitHub Pages, which provides free hosting for static websites.
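To make the reuse angle concrete, here is a rough sketch of how a reader might pull such published data into their own analysis. The spreadsheet key is a hypothetical placeholder, and the CSV export URL pattern assumes a publicly shared Google Sheet; the same approach works for any data file committed to a repository.

    # Sketch: download a publicly shared Google Sheet as CSV for re-analysis.
    # The spreadsheet key below is a hypothetical placeholder.
    import csv
    import io
    import requests

    SHEET_KEY = "YOUR_SPREADSHEET_KEY"
    url = ("https://docs.google.com/spreadsheets/d/"
           + SHEET_KEY + "/export?format=csv")

    rows = list(csv.reader(io.StringIO(requests.get(url).text)))
    print("Downloaded", len(rows), "rows; header row:", rows[0])

Because the data lives at a stable public URL, anyone can verify the numbers behind the article without emailing the author for a copy.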


Here’s how the final study looks when published on GitHub Pages.

While my study didn’t undergo any peer review, it is open for feedback via a pull request or the “issues” queue on GitHub. Typically, peer review is a closed process. It’s not apparent what criticisms were leveled at an article, or what the authors did to address them. Having peer review out in the open not only illuminates the history of a particular article but also makes it easier to see the value being added. Luckily, there are more and more journals with open peer review, such as PeerJ which we’ve written about previously. When I explain peer review to students, I often open up the “Peer Review history” section of a PeerJ article. Students can see that even articles written by professional researchers have flaws which the reviewing process is designed to identify and mitigate.

Another benefit of open peer review, present in publishing on GitHub too, is the ability to link to specific versions of an article. This has at least two uses. First of all, it has historical value in that one can trace the thought process of the researcher. Much like original manuscripts are a source of insight for literary analyses, merely being able to trace the evolution of a journal article enables new research projects in and of itself.

Secondly, because web content is a moving target, revised over time, being able to link to specific versions aids those referencing a work. Linking to a git “commit” (think of it as a snapshot at a particular point in time), possibly using perma.cc or the Internet Archive to store a copy of the project as it existed then, is an elegant way of solving this problem. For instance, at one point I manually removed some data points which were inappropriate for the study I was performing. One can inspect the very commit where I did this, seeing which lines of text were deleted and possibly identifying any mistakes which were made.
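That kind of inspection can even be automated. Below is a minimal sketch using GitHub’s v3 REST API to pull up a single commit and list what changed in it; the repository name and commit SHA are hypothetical placeholders, not my actual project.

    # Sketch: inspect one commit via the GitHub v3 API and list its changes.
    # Owner, repo, and SHA are hypothetical placeholders.
    import requests

    OWNER, REPO, SHA = "example-user", "example-study", "abc1234def5678"
    url = "https://api.github.com/repos/{}/{}/commits/{}".format(OWNER, REPO, SHA)
    commit = requests.get(url).json()

    print("Message:", commit["commit"]["message"])
    for changed_file in commit.get("files", []):
        print(changed_file["filename"],
              "+" + str(changed_file["additions"]),
              "-" + str(changed_file["deletions"]))

A citation could then pair the commit URL with a perma.cc or Internet Archive snapshot, so the referenced version survives even if the repository moves.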

I’ve also grown tired of typical academic writing. The tendency to value erudition over straightforward language, lengthy titles with the snarky half separated from the actually descriptive half by a colon, the anxiety about the particularities of citations and style manuals: all of these I could do without. Let’s write compelling, truthful content without fetishizing consistency and losing the uniqueness of our voice. I’m not saying my little study achieves much in this regard, but it was a relief to be free to write in whatever manner I found most suitable.

Finally, and most encouraging in my mind, the time to publication of a research project can be greatly reduced with new web-based means. I wrote a paper in graduate school which took almost two years to appear in a peer-reviewed journal; by the time I was given the pre-prints to review, I’d entirely forgotten about it. On GitHub, all delays were solely my fault. While it’s true (you can see as much in the project’s history) that the seeds of this project were planted nearly a year ago, I started working in earnest just a few months ago and finished the writing in early October.

What’s really bad

GitHub, while a great company which has reduced the effort needed to use version control with its clean web interface and graphical applications, is not the most universally understood platform. I have little doubt that if I were to publish a study on my blog, I would receive more commentary. For one, GitHub requires an account which only coders or technologists would be likely to have already, while many comment platforms (like Disqus) build off of common social media accounts like Twitter and Facebook. Secondly, while GitHub’s “pull requests” are more powerful than comments in that they can propose changes to the actual content of a project, they’re doubtless less understood as well. Expecting scholarly publishing to suddenly embrace software development methodologies is naive at best.

As a corollary to GitHub’s rather niche appeal, my article hasn’t undergone any semblance of peer review. I put it out there; if someone spots an inaccuracy, I’ll make note of it and address it, but no relevant parties will necessarily critique the work. While peer review has its problems—many intimate with the problems of scholarly publishing at large—I still believe in the value of the process. It’s hard to argue a publication has reached an objective conclusion when only a single pair of eyes has scrutinized it.

Researchers who are afraid of having their work stolen, or of publishing incomplete work which may contain errors, will struggle to accept open publishing models using tools like GitHub. Prof Hacker, in an excellent post on “Forking the Academy”, notes many cultural challenges to moving scholarly publishing towards an open source software model. Scholars may worry that forking a repository feels like plagiarism or goes against the tradition of valuing original work. To some extent, these fears may come more from misunderstandings than genuine problems. Using version control, it’s perfectly feasible to withhold publishing a project until it’s complete and to remove erroneous missteps taken in the middle of a work. Theft is just as possible under the current scholarly publishing model; increasing the transparency and speed of one’s publishing does not give license to others to take credit for it. Unless, of course, one dedicates the work to the public domain.

Convincing academics that the fears above are unwarranted or can be overcome is a challenge that cannot be overstated. In all likelihood, GitHub as a platform will never be a major player in scholarly publishing. The learning curve, both technical and cultural, is simply too great. Rather, a good starting point would be to let the appealing aspects of GitHub—versioning, pull requests, issues, granular attribution of authorship at the commit level—inform the development of new, user-friendly platforms with final products that more closely resemble traditional journals. Prof Hacker, again, goes a long way towards developing this with a wish list for a powerful collaborative writing platform.

What about the IR?

The discoverability of web publications is problematic. While I’d like to think my research holds value for others’ literature reviews, it’s never going to show up in a search of a subscription database. It seems unreasonable to ask researchers, who already look in many places to compile complete bibliographies, to add GitHub to their list of commonly consulted sources. Further fracturing the scholarly publishing environment not only inconveniences researchers but also goes against the trend of discovery layers and aggregators (e.g. Google Scholar) which aim to provide a single search across multiple databases.

On the other hand, an increasing amount of research—from faculty and students alike—is conducted through Google, where GitHub projects will appear alongside pre-prints in institutional repositories. Simply being able to tweet out a link to my study, which is readable on a smartphone and easily saved to any read-it-later service, likely increases its readership over stodgy PDFs sitting in subscription databases.

Institutional repositories solve some, but not all, of the deficiencies of publishing on GitHub. Discoverability is increased because researchers at your institution may search the IR just as they do subscription databases. Furthermore, thanks to the Open Archives Initiative and the OAI-PMH standard, content can be aggregated from multiple IRs into larger search engines like OCLC’s OAIster. However, none of the major IR software players support versioned publication. Showing work-in-progress, linking to specific points in time of a work, and allowing for easy reuse are all lost in the IR.
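For a sense of how that aggregation works, here is a minimal sketch of an OAI-PMH harvest. The repository base URL is a hypothetical placeholder; any compliant IR platform (DSpace, EPrints, and so on) answers the same standard verbs.

    # Sketch: harvest Dublin Core records from an OAI-PMH repository.
    # The base URL below is a hypothetical IR endpoint.
    import requests
    import xml.etree.ElementTree as ET

    BASE_URL = "https://ir.example.edu/oai"
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    response = requests.get(BASE_URL, params=params)

    ns = {"dc": "http://purl.org/dc/elements/1.1/"}
    root = ET.fromstring(response.content)
    for title in root.findall(".//dc:title", ns):
        print(title.text)

This is exactly the kind of request OAIster and similar aggregators issue on a schedule, which is why depositing in an IR makes work findable beyond the institution’s own search box.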

Every publication in its place

As I’ve stated, publishing independently on GitHub isn’t for everyone. It’s not going to show up on your CV and it’s not necessarily going to benefit from the peer review process. But plenty of librarians are already doing something similar, albeit a bit less formally: we’re writing blog posts with original research or performing quick studies at our respective institutions. It’s not a great leap to put these investigations under version control and then publish them on the web. GitHub could be a valuable complement to more traditional venues, reducing the delay between when data is collected and when it’s available for public consumption. Furthermore, it’s not at all mutually exclusive with article submissions. One could both gain the immediate benefit of getting one’s conclusions out there and produce a draft of a journal article in the process.

As scholarly publishing continues to evolve, I hope we’ll see a plethora of publishing models rather than one monolithic process replacing traditional print-based journals. Publications hosted on GitHub, or a similar platform, would sit nicely alongside open, web-based publications like PeerJ, scholarly blog/journal hybrids like In The Library with the Lead Pipe, deposits in Institutional Repositories, and numerous other sources of quality content.

Notes

  1. I think a lot of these statements are fairly well-recognized in the library community, but here’s some evidence: the recent Open Access “sting” operation (which we’ll cover more in-depth in a forthcoming post) that exposed flaws in some journals’ peer review process, altmetrics, PeerJ, other experiments with open peer review (e.g. by Shakespeare Quarterly), the serials crisis (which is well-known enough to have a Wikipedia entry), predictions that all scholarship will be OA in a decade or two, and increasing demands that scholarly journals allow text mining access all come to mind.
  2. I’m totally prejudiced in this matter because I read primarily through InstaPaper. A journal like Code4Lib, which publishes in HTML, is easy to send to read-it-later services, while PDFs aren’t. PDFs also are hard to read on smartphones, but they can preserve details like layout, tables, images, and font choices better than HTML. A nice solution is services which offer a variety of formats for the same content, such as Open Journal Systems with its ability to provide HTML, PDF, and ePub versions of articles.
  3. For non-code uses of GitHub, see our prior Tech Connect post.

Redesigning the Item Record Summary View in a Library Catalog and a Discovery Interface

A. Oh, the Library Catalog

Almost all librarians have a love-hate relationship with their library catalog (OPAC), the system used by library patrons. Interestingly enough, I hear a lot more complaints about the library catalog from librarians than from patrons. Sometimes it is about the catalog missing certain information that should be there for patrons. But many other times, it’s about how crowded the search results display looks. We all want a clean-looking, easy-to-navigate, and efficient-to-use library catalog. But of course, it is much easier to complain than to come up with a viable alternative.

Aaron Schmidt has recently put forth an alternative design for a library item record. In his blog post, he suggests that a library catalog shift its focus from the bibliographic information (or metadata, if the item is not a book) of a library item to the tasks a patron performs in relation to that item, so that the catalog functions more as “a tool that prioritizes helping people accomplish their tasks, whereby bibliographic data exists quietly in the background and is exposed only when useful.” This is a great point. Throwing all the information at a user at once only overwhelms her/him. Schmidt’s sketch provides a good starting point for rethinking the design of the library catalog’s search results display.


From the blog post, “Catalog Design” by Aaron Schmidt

B. Thinking about Alternative Display Design

The example above is, of course, too simple to apply to the catalog of an academic library straight away. For a typical academic library patron to determine whether s/he wants to check out or reserve the item, s/he is likely to need a little more information than the book title, the author, and the book image. For students looking for textbooks, for example, the edition and the year of publication are important. But I take it that Schmidt’s point was to encourage more librarians to think about alternative designs for the library catalog rather than simply compare what is available and pick what seems to be the best among those.


Florida International University Library Catalog – Discovery layer, Mango, provided by Florida Virtual Campus

Granted, there may be limitations in how much we can customize the search results display of a library catalog. But that is not a reason to stop thinking about what the optimal display design would be for the library catalog search results. Sketching alternatives can be in itself a good exercise in evaluating the usability of an information system, even if not all of your design can be implemented.

Furthermore, more and more libraries are implementing a discovery layer over their library catalogs, which provides much more room to customize the display of search results than the traditional library catalog. Open source discovery systems such as Blacklight and VuFind provide great flexibility in customizing the search results display. Even proprietary discovery products such as Primo, EDS, and Summon offer libraries a degree of customization.

Below, I will discuss some principles to follow in sketching alternative designs for search results in a library catalog, present some of my own sketches, and show other examples implemented by other libraries or websites.

C. Principles

So, if we want to improve the item record summary display to be more user-friendly, where can we start and what kind of principles should we follow? These are the principles that I followed in coming up with my own design:

  • De-clutter.
  • Reveal just enough information that is essential to determine the next action.
  • Highlight the next action.
  • Shorten texts.

These are not new principles. They are widely discussed and followed by many web designers, including librarians who participate in their libraries’ website redesigns. But we rarely apply them to the library catalog, because we think that the catalog is somehow beyond our control. This is not necessarily the case, however. Many libraries implement discovery layers to give their catalogs a completely different and improved look from their ILSes’ default display.

Creating a satisfactory design on one’s own, instead of simply pointing out what doesn’t work or look good in existing designs, is surprisingly hard but also a refreshing challenge. It also brings about a positive shift of focus in thinking about a library catalog: from “What is the problem in the catalog?” to “What is the problem, and what can we change to solve it?”

Below I will show my own sketches for an item record summary view for the library catalog search results. These are clearly a combination of many other designs that I found inspiring in other library catalogs. (I will provide the source of those elements later in this post.) I tried to mix and revise them so that the result would follow those four principles above as closely as possible. Check them out and also try creating your own sketches. (I used Photoshop for creating my sketches.)

D. My Own Sketches

Here is the basic book record summary view. What I tried to do here is give just enough information for the next action but no more: title, author, type, year, publisher, and the number of library copies and holds. The next action for a patron is to check the item out. Undecided patrons, on the other hand, will click the title to see the detailed item record, or have the detailed record texted, printed, e-mailed, or used in other ways.

(1) A book item record


This is a record of a book that has a copy available to check out. Only when a patron decides to check out the item is the next set of information relevant to that action – the item location and the call number – shown.

(2) With the check-out button clicked


If no copy is available for check-out, the best way to display the item is to signal that check-out is not possible and to highlight an alternative action. You can do this either by graying out the check-out button or by hiding the button itself.

Many assume that adding more information automatically increases the usability of a website. While there are cases in which this is true, often the better option is to reveal information only when it is relevant.

I decided to gray out the check-out button when there is no available copy and to display the reserve button, so that patrons can place a hold. Information about how many copies the library has and how many holds are placed (“1 hold / 1 copy”) would help a patron decide whether to reserve the book.

(3) A book item record when check-out is not available


I also sketched two other records: one for an e-book without the cover image and the other with the cover image. Since the appropriate action in this case is reading online, a different button is shown. You may include the ‘Requires Login’ text or simply omit it, because most patrons will understand that they will have to log in to read a library e-book, and the read-online button will itself prompt a login once clicked anyway.

(4) An e-book item record without a book cover


(5) An e-book item record with a book cover


(6) When the ‘Read Online’ button is clicked, an e-book item record with multiple links/providers

When there are multiple options for one electronic resource, those options can be presented in a way similar to how multiple copies of a physical book are shown.


(7) A downloadable e-book item record

For a downloadable resource, changing the name of the button to ‘download’ is much more informative.


(8) An e-journal item record


(9) When the ‘Read Online’ button is clicked, an e-journal item record with multiple links/providers


E. Inspirations

Needless to say, I did not come up with my sketches from scratch. Here are the library catalogs whose item record summary view inspired me.


Toronto Public Library’s catalog has an excellent item record summary view, which I used as a base for my own sketches. It provides just enough information for the summary view. The title is hyperlinked to the detailed item record, and the summary view displays the material type and the year in bold for emphasis. The big green button also clearly shows the next action to take. It also does away with the unnecessary labels that are common in library catalogs, such as ‘Author:’ ‘Published:’ ‘Location:’ and ‘Link:’.

User Experience Designer Ryan Feely, who worked on Toronto Public Library’s catalog search interface, pointed out the difference between a link and an action in his 2009 presentation “Toronto Public Library Website User Experience Results and Recommendations.” Actions need to be highlighted as a button or in some similar design to stand out to users (slide 65). And ideally, only the actions available for a given item should be displayed.

Another good point which Feely makes (slide 24) is that an icon is often the center of attention, so a different icon should be used to signify each type of material, such as a DVD or an e-journal. Below are the icons that Toronto Public Library uses for various types of library materials that do not have unique item images. These are much more informative than the common “No image available” icon.

[Icons: eAudiobook, e-journal, eMusic, vinyl, VHS, eVideo]

University of Toronto Libraries has recently redesigned its library catalog to be completely responsive. The item record summary view in the catalog is brief and clear. Each record in the summary view also uses a red or a green icon that helps patrons determine the availability of an item quickly. The icons for citing, printing, e-mailing, or texting the item record that often show up in the catalog are hidden in the options icon at the bottom right corner. When the mouse hovers over it, a variety of choices appears.



Richland Library’s catalog displays library items in a grid by default, which makes the catalog more closely resemble an online bookstore or shopping website. Patrons can also change the view to show more details, with or without the item image. The item record summary view in the default grid view is brief and to the point. The main type of patron action, such as Hold or Download, is clearly differentiated from other links as an orange button.



Stanford University Library offers a grid view (although, unlike Richland Library, not as the default). The grid view is very succinct, with the item title, call number, availability information in the form of a green checkmark, and the item location.


What is interesting about the Stanford University Library catalog (which uses Blacklight) is that when a patron hovers the mouse over an item in the grid view, the item image displays a preview link. When clicked, more detailed information is shown as an overlay.


Brigham Young University completely customized the user interface of the Primo product from ExLibris.


And the University of Michigan Library customized the search results display of the Summon product from Serials Solutions.


Here are some other item record summary views that are also fairly straightforward and uncluttered but can be improved further.

Sacramento Public Library uses the open source discovery system, VuFind, with little customization.


I have not done an extensive survey of library catalogs to see which one has the best item record summary view. But it seems to me that, in general, academic libraries are more likely to provide more information than necessary in the item record summary view and to require patrons to click a link instead of displaying relevant information right away. For example, the ‘Check availability’ link shown in many library catalogs is better replaced by the actual availability status of ‘available’ or ‘checked out.’ Similarly, the ‘Full-text online’ or ‘Available online’ link may be clearer as a button titled ‘Read online’ or ‘Access online.’

F. Challenges and Strategies

The biggest challenge in designing the item record summary view is to strike a balance between too little and too much information about the item. Too little information will require patrons to open the detailed item record just to identify whether the item is the one they are looking for.

Since librarians know many features of the library catalog, they tend to err on the side of throwing all available features into the item record summary view. But too much information not only overwhelms patrons but also makes it hard for them to locate the most relevant information at that stage and to identify the next available action. Any information irrelevant to a given task is no more than noise to a patron.

This is not a problem unique to the library catalog but applies generally to any system that displays search results. In their book Designing the Search Experience, Tony Russell-Rose and Tyler Tate describe this as achieving ‘the optimal level of detail’ (p. 130).

Useful strategies for achieving the optimal level of detail for the item summary view in the case of the library catalog include:

  • Removing all unnecessary labels
  • Using appropriate visual cues to make the record less text-heavy
  • Highlighting next logical action(s) and information relevant to that action
  • Systematically guiding a patron to the actions that are relevant to a given item and her/his task in hand

Large online shopping websites such as Amazon, Barnes & Noble, and eBay all make good use of these strategies. There are no labels such as ‘price,’ ‘shipping,’ ‘review,’ etc. Amazon highlights the price and the user reviews most, since those are the two most decisive factors for consumers at the browsing stage. Amazon offers only enough information for shoppers to determine whether they are interested in purchasing the item, so there is not even a Buy button in the summary view. Once a shopper clicks the item title link and views the detailed item record, the buying options and the ‘Add to Cart’ button are displayed prominently.


Barnes & Noble’s default display for search results is the grid view, and the item record summary view offers only the most essential information – the item title, material type, price, and the user ratings.


eBay’s item record summary view also offers only the most essential information, the highest bid and the time left, while people are browsing the site deciding whether to check out the item in further detail or not.


G. More Things to Consider

The item record summary view, which we have discussed so far, is surely the main part of the search results page. But it is only one part of the search results display and an even smaller part of the library catalog. Optimizing the search results page, for example, entails not just re-designing the item record summary view but choosing and designing many other elements of the page, such as organizing the filtering options on the left and deciding on the default and optional views. Determining the content and display of the detailed item record is another big part of creating a user-friendly library catalog. If you are interested in this topic, Tony Russell-Rose and Tyler Tate’s book Designing the Search Experience (2013) provides an excellent overview.

Librarians are professionals trained in the many uses of a searchable database: a known-item search, exploring and browsing, searching with incomplete details, compiling a set of search results, locating a certain type of item by location, type, subject, and so on. But since our work is also on the operations side of a library, we often make the mistake of regarding the library catalog as one huge inventory system that should hold and display all the acquisitions, cataloging, and holdings data of the library collection. Library patrons, however, are rarely interested in seeing such data. They are interested in identifying relevant library items and using them. All the other information is simply a guide to achieving this ultimate goal, and the library catalog is just another tool in their many toolboxes.

Online shopping sites optimize their catalogs to make purchasing as efficient and simple as possible. Libraries and online shopping sites share the common interest of guiding users to one ultimate task – identifying an appropriate item for final borrowing, access, or purchase. When creating user-oriented library catalog sketches, it is helpful to check out how non-library websites display their search results as well.

[Screenshots: search results displays from non-library websites]

Once you start looking at other examples, you will realize that there are very many ways to display search results, and you will soon want to sketch your own alternative design for the search results display in the library catalog and the discovery system. What do you think would be the optimal level of detail for library items in the library catalog or the discovery interface?

Further Reading