A Clean House at the Directory of Open Access Journals

The Directory of Open Access Journals (DOAJ) is an international directory of journals and index of articles that are available open access. Dating back to 2003, the DOAJ was at the center of a controversy surrounding the “sting” conducted by John Bohannon in Science, which I covered in 2013. Essentially, Bohannon used journals listed in DOAJ to find journals that would publish a poor-quality article as long as the authors paid a fee. At the time many suggested that a crowdsourced journal reviewing platform might be the way to resolve the problem if DOAJ wasn’t a good source. While such a platform might still be a good idea, the simpler and more obvious solution is the one that seems to have happened: for DOAJ to be more strict with publishers about requirements for inclusion in the directory. 1

The process of cleaning up the DOAJ has been going on for some time and is getting close to an important milestone. All 10,000+ journals listed in DOAJ were required to reapply for inclusion, and the deadline for that is December 30, 2015. After that date, any journals that haven’t reapplied will be removed from the DOAJ.

“Proactive Not Reactive”

Contrary to popular belief, the process for this started well before the Bohannon piece was published 2. In December 2012 an organization called Infrastructure Services for Open Access (IS4OA) (founded by Alma Swan and Caroline Sutton) took over DOAJ from Lund University and announced several initiatives, including a new platform, distributed editorial help, and improved criteria for inclusion. 3 Because DOAJ grew to be an important piece of the scholarly communications infrastructure, it was inevitable that they would have to take such a step sooner or later. With nearly 10,000 journals and only a small team of editors, the old model wouldn’t have been sustainable over time, and to lose the DOAJ would have been a blow to the open access community.

One of the remarkable things about the revitalization of the DOAJ is the transparency of the process. The DOAJ News Service blog has been documenting the behind-the-scenes processes since May 2014. One of the most useful resources is a list of journals that have claimed to be listed in DOAJ but are not. Another important piece of information is the 2015-2016 development roadmap. There is a lot going on with the DOAJ update, however, so below I will pick out what I think is most important to know.

The New DOAJ

In March 2014, the DOAJ created a new application form with much higher standards for inclusion. Previously the form for inclusion had only 6 questions; after working with the community, they changed the application to require 58 questions. The requirements are detailed on a page for publishers, and the new application form is available as a spreadsheet.

While 58 questions seems like a lot, it is important to note that journals need not fulfill every single requirement, other than the basic requirements for inclusion. The idea is that journal publishers must be transparent about the structure and funding of the journal, and that journals explicitly labeled as open access meet some basic theoretical components of open access. For instance, one of the basic requirements is that “the full text of ALL content must be available for free and be Open Access without delay”. Certain other pieces are strong suggestions, but failing to meet them will not get a journal rejected. For instance, the DOAJ takes a strong stand against impact factors and suggests that they not be presented on journal websites at all 4.

To highlight journals that have extremely high standards for “accessibility, openness, discoverability, reuse and author rights”, the DOAJ has developed a “Seal” that is awarded to journals that answer “yes” to the following questions (taken from the DOAJ application form):

  • have an archival arrangement in place with an external party (Question 25). ‘No policy in place’ does not qualify for the Seal.
  • provide permanent identifiers in the papers published (Question 28). ‘None’ does not qualify for the Seal.
  • provide article level metadata to DOAJ (Question 29). ‘No’ or failure to provide metadata within 3 months do not qualify for the Seal.
  • embed machine-readable CC licensing information in article level metadata (Question 45). ‘No’ does not qualify for the Seal.
  • allow reuse and remixing of content in accordance with a CC BY, CC BY-SA or CC BY-NC license (Question 47). If CC BY-ND, CC BY-NC-ND, ‘No’ or ‘Other’ is selected the journal will not qualify for the Seal.
  • have a deposit policy registered in a deposit policy directory (Question 51). ‘No’ does not qualify for the Seal.
  • allow the author to hold the copyright without restrictions (Question 52). ‘No’ does not qualify for the Seal.

Part of the appeal of the Seal is that it focuses on the good things about open access journals rather than the questionable practices. Having a whitelist is much more appealing for people doing open access outreach than a blacklist. Journals with the Seal are available in a facet on the new DOAJ interface.

Getting In and Out of the DOAJ

Part of the reworking of the DOAJ was the requirement that all currently listed journals reapply. As of November 19, just over 1,700 journals had been accepted under the new criteria, and just over 800 had been removed (you can follow the list yourself here). For now you can find journals that have reapplied by a green check mark (what DOAJ calls The Tick!). That means that about 85% of journals that were previously listed either have not reapplied or are still in the verification pipeline 5. While DOAJ does not discuss specific reasons a journal or publisher is removed, they do give a general category for removal. I did some analysis of the data provided in the added/removed/rejected spreadsheet.

At the time of analysis, there were 1,776 journals on the accepted list. 20% of these were added since September, and with the deadline looming this number is sure to grow. Around 8% of the accepted journals have the DOAJ Seal.

There were 809 journals removed from the DOAJ, and the reasons fell into the following general categories. I manually checked some of the categories with only 1 or 2 titles, and suspect that some of these journals may be reinstated if the publisher chooses to reapply. Note that well over half of the removed journals weren’t removed for misconduct but because they had ceased publishing or were otherwise unavailable.

Inactive (has not published in the last calendar year) 233
Suspected editorial misconduct by publisher 229
Website URL no longer works 124
Ceased publishing 108
Journal not adhering to Best Practice 62
Journal is no longer Open Access 45
Has not published enough articles this calendar year 2
Wrong ISSN 2
Other; delayed open access 1
Other; no content 1
Other; taken offline 1
Removed at publisher’s request 1

The spreadsheet lists 26 journals that were rejected. Rejected journals will know the specific reasons why their applications were rejected, but those reasons are not made public. Journals may reapply after 6 months once they have had an opportunity to address the issues. 6 The general stated reasons were as follows:

Unknown 19
Has not published enough articles 2
Journal website lacks necessary information 2
Not an academic/scholarly journal 1
Only Abstracts 1
Web site URL doesn’t work 1

The work that DOAJ is doing to improve transparency and the screening process is very important for open access advocates, who will soon have a tool that they can trust to provide much more complete information for scholars and librarians. For too long we have been forced to rely on the concept of a list of “questionable” or even “predatory” journals. A directory of journals with robust standards and an easy-to-understand interface will be a fresh start for the rhetoric of open access journals.

Are you the editor of an open access journal? What do you think of the new application process? Leave your thoughts in the comments (anonymously if you like).

The Library as Research Partner

As I typed the title for this post, I couldn’t help but think “Well, yeah. What else would the library be?” Instead of changing the title, however, I want to actually unpack what we mean when we say “research partner,” especially in the context of research data management support. In the most traditional sense, libraries provide materials and space that support the research endeavor, whether it be in the physical form (books, special collections materials, study carrels) or the virtual (digital collections, online exhibits, electronic resources). Moreover, librarians are frequently involved in aiding researchers as they navigate those spaces and materials. This aid is often at the information seeking stage, when researchers have difficulty tracking down references, or need expert help formulating search strategies. Libraries and librarians have less often been involved at the most upstream point in the research process: the start of the experimental design or research question. As one considers the role of the Library in the scholarly life-cycle, one should consider the ways in which the Library can be a partner with other stakeholders in that life-cycle. With respect to research data management, what is the appropriate role for the Library?

In order to achieve effective research data management (RDM), planning for the life-cycle of the data should occur before any data are actually collected. In circumstances where there is a grant application requirement that triggers a call to the Library for data management plan (DMP) assistance, this may be possible. But why are researchers calling the Library? Ostensibly, it is because the Library has marketed itself (read: its people) as an expert in the domain of data management. It has most likely done this in coordination with the Research Office on campus. Even more likely, it did this because no one else was. It may have done this as a response to the National Science Foundation (NSF) DMP requirement in 2011, or it may have just started doing this because of perceived need on campus, or because it seems like the thing to do (which can lead to poorly executed hiring practices). But unlike monographic collecting or electronic resource acquisition, comprehensive RDM requires much more coordination with partners outside the Library.

Steven Van Tuyl has written about the common coordination model of the Library, the Research Office, and Central Computing with respect to RDM services. The Research Office has expertise in compliance and Central Computing can provide technical infrastructure, but he posits that there could be more effective partners in the RDM game than the Library. That perhaps the Library is only there because no one else was stepping up when DMP mandates came down. Perhaps enough time has passed, and RDM and data services have evolved enough, that the Library doesn’t have to fill that void any longer. Perhaps the Library is actually the wrong partner in the model. If we acknowledge that communities of practice drive change, and intentional RDM is a change for many of the researchers, then wouldn’t ceding this work to the communities of practice be the most effective way to stimulate long-lasting change? The Library has planted some starter seeds within departments and now the departments could go forth and carry the practice forward, right?

Well, yes. That would be ideal for many aspects of RDM. I personally would very much like to see the intentional planning for, and management of, research data more seamlessly integrated into standard experimental methodology. But I don’t think that by accomplishing that, the Library should be removed as a research partner in the data services model. I say this for two reasons:

  1. The data/information landscape is still changing. In addition to the fact that more funders are requiring DMPs, more research can benefit from using openly available (and well described – please make it understandable) data. While researchers are experts in their domain, the Library is still the expert in the information game. At its simplest, data sources are another information source. The Library has always been there to help researchers find sources; this is another facet of that aid. More holistically, the Library is increasingly positioning itself to be an advocate for effective scholarly communication at all points of the scholarship life-cycle. This is a logical move as the products of scholarship take on more diverse and “nontraditional” forms.

Some may propose that librarians who have cultivated RDM expertise can still provide data seeking services, but perhaps they should not reside in the Library. Would it not be better to have them collocated with the researchers in the college or department? Truly embedded in the local environment? I think this is a very interesting model that I have heard some large institutions may want to explore more fully. But I think my second point is a reason to explore this option with some caution:

  2. Preservation and access. Libraries are the experts in the preservation and access of materials. Central Computing is a critical institutional partner in terms of infrastructure and determining institutional needs for storage, porting, computing power, and bandwidth but – in my experience – is happy to let the long-term preservation and access service fall to another entity. Libraries (and archives) have been leading the development of digital preservation best practices for some time now, with keen attention to complex objects. While not all institutions can provide repository services for research data, the Library perspective and expertise is important to have at the table. Moreover, because the Library is a discipline-agnostic entity, librarians may be able to more easily imagine diverse interest in research data than the data producer can. This can increase the potential vehicles for data sharing, depending on the discipline.

Yes, RDM and data services are reaching a place of maturity in academic institutions where many Libraries are evaluating, or re-evaluating, their role as a research partner. While many researchers and departments may be taking a more proactive or interested position with RDM, it is not appropriate for Libraries to be removed from the coordinated work that is required. Libraries should assert their expertise, while recognizing the expertise of other partners, in order to determine effective outreach strategies and resource needs. Above all, Libraries must set scope for this work. Do not be deterred by the increased interest from other campus entities to join in this work. Rather, embrace that interest and determine how we all can support and strengthen the partnerships that facilitate the innovative and exciting research and scholarship at an institution.

From Consensus to Expertise: Rethinking Library Web Governance

The world is changing, the web is changing and libraries are changing along with them. Commercial behemoths like Amazon, Google and Facebook, together with significant advancements in technical infrastructure and consumer technology, have established a new set of expectations for even casual users of the web. These expectations have created new mental models of how things ought to work, and why—not just online, either. The Internet of Things may not yet be fully realized but we clearly see its imminent appearance in our daily lives.

Within libraries, has our collective concept of the intention and purpose of the library website evolved as well? How should the significant changes in how the web works, what websites do and how we interact with them also impact how we manage, assess and maintain library websites?

In some cases it has been easier to say what the library website is not – a catalog, a fixed-form document, a repository—although it facilitates access to these things, and perhaps makes them discoverable. What, then, is the library website? As academic librarians, we define it as follows.

The library website is an integrated representation of the library, providing continuously updated content and tools to engage with the academic mission of the college/university.

It is constructed and maintained for the benefit of the user. Value is placed on consumption of content by the user rather than production of content by staff.

Moving from a negative definition to a positive definition empowers both stewards of and contributors to the website to participate in an ongoing conversation about how to respond proactively to the future, our changing needs and expectations and, chiefly, to our users’ changing needs and expectations. Web content management systems have moved from being just another content silo to being a key part of library service infrastructure. Building on this forward momentum enables progress to a better, more context-sensitive user experience for all as we consider our content independent of its platform.

It is just this reimagining of how and why the library website contributes value, and what role it fulfills within the organization in terms of our larger goals for connecting with our local constituencies—supporting research and teaching through providing access to resources and expertise—that demands a new model for library web governance.

Emerging disciplines like content strategy and a surge of interest in user experience design and design thinking give us new tools to reflect on our practice and even to redefine what constitutes best practice in the area of web librarianship.

Historically, libraries have managed websites through committees and task forces. Appointments to these governing bodies were frequently driven by a desire to ensure adequate balance across the organizational chart, and to varying degrees by individuals’ interest and expertise. As such, we must acknowledge the role of internal politics as a variable factor in these groups’ ability to succeed—one might be working either with, or against, the wind. Librarians, particularly in groups of this kind, notoriously prefer consensus-driven decision making.

The role of expertise is largely taken for granted across most library units; that is, not just anyone is qualified to perform a range of essential duties, from cataloging to instruction to server administration to website management.1 Consciously according ourselves and our colleagues the trust to employ their unique expertise allows individuals to flourish and enlarges the capacity of the organization and the profession. In the context of web design and governance, consensus is a blocker to nimble, standards-based, user-focused action. Collaborative processes, in which all voices are heard, together with empirical data are essential inputs for effective decision-making by domain experts in web librarianship as in other areas of library operations.

Web librarianship, through bridging and unifying individual and collaborative contributions to better enable discovery, supports the overall mission of libraries in the context of the following critical function:

providing multiple systems and/or interfaces
to browse, identify, locate, obtain and use
spaces, collections, and services,
either known or previously unknown to the searcher
with the goal of enabling completion of an information-related task or goal.

The scope of content potentially relevant to the user’s discovery journey encompasses the hours a particular library is open on a given day up to and including advanced scholarship, and all points between. This perhaps revives in the reader’s mind the concept of library website as portal – that analogy has its strengths and weaknesses, to be sure. Ultimately, success for a library website may be defined as the degree to which it enables seamless passage; the user’s journey only briefly intersects with our systems and services, and we should permit her to continue on to her desired information destination without unnecessary inconvenience or interference. A friction-free experience of this kind requires a holistic vision and relies on thoughtful stewardship and effective governance of meaningful content – in other words, on a specific and cultivated expertise, situated within the context of library practice. Welcome to a new web librarianship.

Courtney Greene McDonald (@xocg) is Head of Discovery & Research Services at the Indiana University Libraries in Bloomington. A technoluddite at heart, she’s equally likely to be leafing through the NUC to answer a reference question as she is to be knee-deep in a config file. She presents and writes about user experience in libraries, and is the author of Putting the User First: 30 Strategies for Transforming Library Services (ACRL 2014). She’s also a full–time word nerd and gourmand, a fair–weather gardener, and an aspiring world traveler.

Anne Haines (@annehaines) is the Web Content Specialist for the Indiana University Bloomington Libraries. She loves creating webforms in Drupal, talking to people about how to make their writing work better on the web, and sitting in endless meetings. (Okay, maybe not so much that last one.) You can find her hanging out at the intersection of content strategy and librarianship, singing a doo-wop tune underneath the streetlight.

Rachael Cohen (@RachaelCohen1) is the Discovery User Experience Librarian at Indiana University Bloomington Libraries, where she is the product owner for the library catalog discovery layer and manages the web-scale discovery service. When she’s not negotiating with developers, catalogers, and public service people you can find her hoarding books and Googling for her family.



  1. While it is safe to say that all library staff have amassed significant experience in personal web use, not all staff are equally equipped with the growing variety of skillsets and technical mastery necessary to oversee and steward a thriving website.

Wikipedia, Libraries, & Neutrality

This piece is substantially based on a column I wrote for RUSQ that will appear early next year.

Broadly construed, there are two camps of opinion surrounding Wikipedia in librarianship. The first is that Wikipedia is not an academic-quality source; it is something students need to be warned away from. The community authorship of Wikipedia is typically the source of this criticism: since “anyone can edit” the online encyclopedia, there’s no way to tell whether you’re receiving high-quality information written by experts or the ill-informed opinions of internet trolls.

The other camp is far more positive regarding Wikipedia. After all, the values of the Wikipedian community clearly align with our own. The ultimate goal of the project is not to give a venue to poor research or fringe theories, but to enable free and open access to information. While Wikipedia isn’t flawless, many of its articles compete with more established reference sources. Famously, Nature performed a comparison of scientific articles and found Wikipedia to be comparable to Encyclopedia Britannica. Following the study, a community project to correct the identified errors in Wikipedia sprang up, fixing them all in a little over a month.1 With this common ground, it’s no wonder that libraries have found Wikipedia to be a valuable partner in publicizing our content. Articles like Using Wikipedia to Extend Digital Collections, Putting the Library in Wikipedia, and Wikipedia Lover, Not a Hater: Harnessing Wikipedia to Increase the Discoverability of Library Resources all discuss the value of working with Wikipedia, not against it, to highlight library digital collections and metadata.

A good example of Wikipedia driving traffic to library special collections was brought to my attention by fellow Tech Connect author Margaret Heller, who pointed me towards the Google Analytics Usage Reports for CARLI Digital Collections. CARLI regularly sees Wikipedia as one of the top external traffic sources, with Wikipedia being noted in the last few quarterly reports as a traffic source leading “to home pages or images from multiple CARLI Collections”.

The Problem with Neutral

I must admit that I side with the pro-Wikipedia folks. I love using Wikipedia as a resource—as a procrastination-enabler I have several articles on roguelikes open that I’m reading as I write this—and I love contributing to it in whatever small ways I can, from fixing broken markup to citation hunting. But Wikipedia is imperfect in ways more insidious than its anonymous authorship or occasionally inaccurate details. Rather, one problem lies within one of the five pillars that define the philosophy of Wikipedia: “Wikipedia is written from a neutral point of view”.

Neutrality has been under fire from #critlib lately, a group of librarians emphasizing critical theory especially with respect to information literacy instruction. ALA Annual featured a well-attended presentation entitled “But We’re Neutral!” And Other Librarian Fictions Confronted by #critlib. A seminal article appearing earlier this year in Code4Lib by Bess Sadler and Chris Bourg begins with a rousing section labelled Libraries are not Neutral:

In spite of the pride many libraries take in their neutrality, libraries have never been neutral repositories of knowledge. Research libraries in particular have always reflected the inequalities, biases, ethnocentrism, and power imbalances that exist throughout the academic enterprise through collection policies and hiring practices that reflect the biases of those in power at a given institution. In addition, theoretically neutral library activities like cataloging have often re-created societal patterns of exclusion and inequality.

Wikipedia shares this problem; while it appears neutral on the surface, its topical coverage and treatment of subjects reflect the skewed power relations of our society. A 2011 study found that 91% of editors were men. The same study shows that few editors come from the Global South and that the English Wikipedia receives far more focus than other languages. Another research paper from 2011 goes a bit further in demonstrating that “male articles are significantly longer than female articles”.2 The editorial gender gap has real effects on the encyclopedia’s content; it’s not just that having editors of all genders is good in its own right, it’s that Wikipedia’s claims to objectivity and neutrality are jeopardized by the imbalance.


So what’s a concerned librarian to do? I think the wrong thing would be to denounce Wikipedia. For one, the alternative sources we or our patrons would turn to are no less problematic. Encyclopaedia Britannica has a well-documented history of supposedly scientific articles written from the dominant viewpoint (cf. the racist description of black people in the 1911 edition). What’s more, Wikipedia is going to be used. It’s massively popular. Sticking our heads in the sand because it doesn’t live up to its own standards of neutrality improves nothing.

Luckily, there are several Wikipedia projects focused on recruiting editors from underrepresented groups and addressing lackluster coverage of particular topics which we can support. One such project is Art + Feminism. In its own words:

Art+Feminism is a rhizomatic campaign to improve coverage of women and the arts on Wikipedia, and to encourage female editorship…Content is skewed by the lack of female participation. Many articles on notable women in history and art are absent on Wikipedia. This represents an alarming aporia in an increasingly important repository of shared knowledge.

Art+Feminism started out by hosting an edit-a-thon out of the Eyebeam Art and Technology Center in New York City in 2014, with more than 30 other locations worldwide joining in. An edit-a-thon is an event where people gather to perform Wikipedia edits, often centered around a particular theme or project. Higher education institutions and libraries make perfect partners for such occasions. We typically have useful materials to be cited, and our students form a large body of potential participants who can be easily incentivized to join in, whether with extra credit or simply some food. So when my library heard of the upcoming 2nd annual edit-a-thon, we immediately began planning to host it. Below, I’ll briefly outline what we did in the hopes it’ll encourage other institutions to join us during this year’s edit-a-thon on March 5th.

Hosting an Edit-a-thon

First, we set up a meetup page on Wikipedia. If you’re unfamiliar with Wikipedia, creating a page like this isn’t a struggle. For one, you can simply copy the entire source markup of someone else’s meetup, then edit your specific details into that skeleton. For two, you can enable the experimental Visual Editor to make Wikipedia even easier to edit without learning wiki markup. The meetup page is an important place for putting up information like timing and directions, but is also a place for us to talk about the impact we made by showing how many editors attended and what articles we improved or created.

While we were putting initial details on our meetup page, we set about securing a location on the date of the edit-a-thon. We discovered that a gallery associated with our school had hosted the edit-a-thon the prior year, but they were unable to repeat it. Our school has campuses in both San Francisco and Oakland, but since Oakland had no other edit-a-thon locations we decided to host it there with the idea that SF locals already had an event nearby.

Our next steps aimed to make the event as easy for newcomers as possible. A staff member pulled relevant materials from our collection, so that researching would be simple and our rarer, more valuable resources might lend some of their information to Wikipedia. We also wanted experienced editors to be on hand during the event. I looked at a local WikiProject, asking for help on the Talk page and then corresponding directly with a couple of editors who had expressed interest. Finally, we managed to secure a visit from someone who works for the Wikimedia Foundation, which runs the encyclopedia.

During the day, we set out snacks and name badges for everyone. Similar to the color-coded name badges at Ada Camp and other tech conferences, Art+Feminism recommended giving out name badges which signify one’s willingness to be photographed: green meant feel free to take a photo, orange meant please ask permission first, and red meant absolutely not. These steps ensure that everyone is comfortable at the event and not being exposed on the internet against their will. At the start of the day, we explained the coloring system and the Wikimedia staff person gave a short talk on how to write articles that endure, standing up to scrutiny over time. The results of the global event are listed on Wikipedia.

While I feel we accomplished much at our local event, there was one negative experience. An image uploaded to Wikimedia Commons for use on a page was flagged for deletion. The editor flagging it said something along the lines of “this isn’t your personal photo album” as the image was a headshot of a female artist. In the ensuing discussion around the proposed deletion, I noted that the image was about to be used on an article. It was never removed from Commons. Still, the incident underscores cultural problems in Wikipedia. The confrontational style of the discussion lacked good faith. Further, I heard a gendered undertone in the editor’s response; how many pictures of white men are derided as personal photos? While our library staff person was undeterred, moments of hostility like these drive away newcomers.

In Which I Admit I’m Missing the Point

To be fair, Wikipedia itself acknowledges that it fails to live up to neutral status. That the encyclopedia strives towards neutrality is the more vital point. But it’s not as if reaching a supposedly perfect neutrality resolves the issues that folks like #critlib are highlighting. Neutral as a positive value is precisely the problem, because there is no neutral stance that can be taken from outside of society’s power relations and history of inequity. So should we instead be agitating for Wikipedia to become less neutral and take more active stances on social issues? Or are edit-a-thons like Art+Feminism the most viable route towards ensuring topics are covered in a way that surfaces marginalized peoples and their experiences? I have no answers, just food for thought.

  1. See External peer review/Nature December 2005 for Wikipedia’s internal take on the correction process. The original Nature article is paywalled here and is doi:10.1038/438900a. A similar but open access study is doi:10.1371/journal.pone.0106930, Accuracy and Completeness of Drug Information in Wikipedia: A Comparison with Standard Textbooks of Pharmacology.
  2. WP:Clubhouse? An Exploration of Wikipedia’s Gender Imbalance. I’m skeptical of how “male” and “female” articles were defined, but the paper itself is thorough in its argumentation and statistical analysis. It’s also worth noting that this paper, and much of the other literature around the gender gap, ignores genders outside the female-male binary. It’s most useful to conceive of the gap as a disproportionate majority of males than a minority of females, while leaving all other genders out of the picture.

Near Us and Libraries, Robots Have Arrived

The movie Robot and Frank depicts a future in which the elderly have a robot as their companion and helper. The robot monitors various activities that relate to both mental and physical health and helps Frank with various house chores. But Frank also enjoys the robot’s company and goes on to enlist the robot in his adventure of breaking into a local library to steal a book, and in a greater heist later on. People’s lives in the movie are not particularly futuristic other than having a robot in them. And even a robot may not be so futuristic to us much longer either. As a matter of fact, as of June 2015, there is a commercially available humanoid robot that comes close to performing some of the functions of the robot in Robot and Frank.


Pepper Robot, Image from Aldebaran, https://www.aldebaran.com/en/a-robots/who-is-pepper

A Japanese company, SoftBank Robotics Corp., released a humanoid robot named ‘Pepper’ to the market back in June. The Pepper robot is 4 feet tall, weighs 61 pounds, speaks 17 languages, and is equipped with an array of cameras, touch sensors, an accelerometer, and other sensors in its “endocrine-type multi-layer neural network,” according to the CNN report. The Pepper robot was priced at ¥198,000 ($1,600). Pepper owners are also responsible for an additional ¥24,600 ($200) monthly data and insurance fee. While the Pepper robot is not exactly cheap, it is surprisingly affordable for a robot. This means that the robot industry has now matured to the point where it can introduce a robot that the masses can afford.

Robots come in varying capabilities and forms. Some robots are as simple as programmable cube blocks that can be combined with one another and built into a working unit. For example, Cubelets from Modular Robotics are modular robots used for educational purposes. Each cube performs one specific function, such as flash, battery, temperature, brightness, rotation, etc., and one can combine these blocks to build a robot that performs a certain function. For example, you can build a lighthouse robot by combining a battery block, a light-sensor block, a rotator block, and a flash block.


A variety of cubelets available from the Modular Robotics website.


By contrast, there are advanced robots such as those in animal form developed by the robotics company Boston Dynamics. Some robots look like humans, although much smaller than the Pepper robot. NAO, launched in 2006, is a 58 cm tall humanoid robot that moves, recognizes, hears, and talks to people. NAO robots are interactive educational toys that help students learn programming in a fun and practical way.

Noticing their relevance to STEM education, some libraries are making robots available to library patrons. Westport Public Library provides robot training classes for its two NAO robots. Chicago Public Library lends a number of Finch robots that patrons can program to see how they work. In celebration of National Robotics Week back in April, San Diego Public Library hosted its first Robot Day, educating the public about how robots have impacted society. San Diego Public Library also started a weekly Robotics Club, inviting anyone to help build or learn how to build a robot for the library. Haslet Public Library offers a Robotics Camp program for 6th to 8th graders who want to learn how to build with LEGO Mindstorms EV3 kits. School librarians are also starting robotics clubs. The Robotics Club at New Rochelle High School in New York is run by the school’s librarian, Ryan Paulsen. Paulsen’s robotics club started with help from faculty, parents, and other schools, along with a grant from NASA, and participated in a FIRST Robotics Competition. Organizations such as the Robotics Academy at Carnegie Mellon University provide educational outreach and resources.

Image from Aldebaran website at https://www.aldebaran.com/en/humanoid-robot/nao-robot

There are also libraries that offer coding workshops, often with Arduino or Raspberry Pi, which are inexpensive computer hardware. Ames Free Library offers Raspberry Pi workshops. San Diego Public Library runs a monthly Arduino Enthusiast Meetup. Arduinos and Raspberry Pis can be used to build digital devices and objects that can sense and interact with the physical world, which brings them close to simple robots. We may see more robotics programs at those libraries in the near future.

Robots can fulfill many functions other than being interactive educational toys, however. For example, robots can be very useful in healthcare. A robot can be a patient’s emotional companion, just like Pepper. Or it can provide an easy way for a patient and her/his caregiver to communicate with physicians and others. A robot can be used at a hospital to move and deliver medication and other items and to function as a telemedicine assistant. It can also provide physical assistance for a patient or a nurse and even be used for children’s therapy.

Humanoid robots like Pepper may also serve at reception desks at companies. And it is not difficult to imagine them as sales clerks at stores. Robots can be useful at schools and other educational settings as well. At a workplace, teleworkers can use robots to achieve a more active presence. For example, universities and colleges can offer similar telepresence robots to online students who want to virtually experience and utilize campus facilities, or to faculty who wish to hold office hours or collaborate with colleagues while they are away from the office. As a matter of fact, the University of Texas at Arlington Libraries recently acquired several telepresence robots to lend to their faculty and students.

Not all robots do or will have the humanoid form as the Pepper robot does. But as robots become more and more capable, we will surely get to see more robots in our daily lives.


Alpeyev, Pavel, and Takashi Amano. “Robots at Work: SoftBank Aims to Bring Pepper to Stores.” Bloomberg Business, June 30, 2015. http://www.bloomberg.com/news/articles/2015-06-30/robots-at-work-softbank-aims-to-bring-pepper-to-stores.

“Boston Dynamics.” Accessed September 8, 2015. http://www.bostondynamics.com/.

Boyer, Katie. “Robotics Clubs At the Library.” Public Libraries Online, June 16, 2014. http://publiclibrariesonline.org/2014/06/robotics-clubs-at-the-library/.

“Finch Robots Land at CPL Altgeld.” Chicago Public Library, May 12, 2014. https://www.chipublib.org/news/finch-robots-land-at-cpl/.

McNickle, Michelle. “10 Medical Robots That Could Change Healthcare – InformationWeek.” InformationWeek, December 6, 2012. http://www.informationweek.com/mobile/10-medical-robots-that-could-change-healthcare/d/d-id/1107696.

Singh, Angad. “‘Pepper’ the Emotional Robot, Sells out within a Minute.” CNN.com, June 23, 2015. http://www.cnn.com/2015/06/22/tech/pepper-robot-sold-out/.

Tran, Uyen. “SDPL Labs: Arduino Aplenty.” The Library Incubator Project, April 17, 2015. http://www.libraryasincubatorproject.org/?p=16559.

“UT Arlington Library to Begin Offering Programming Robots for Checkout.” University of Texas Arlington, March 11, 2015. https://www.uta.edu/news/releases/2015/03/Library-robots-2015.php.

Waldman, Loretta. “Coming Soon to the Library: Humanoid Robots.” Wall Street Journal, September 29, 2014, sec. New York. http://www.wsj.com/articles/coming-soon-to-the-library-humanoid-robots-1412015687.

Taking a Deep Breath after a Systems Migration

I have been mostly absent from ACRL Tech Connect this year because the last nine months have been spent migrating to a new library systems platform and discovery layer. As one of the key members of the implementation team, I have devoted more time to meetings, planning, development, more meetings, and more planning than any other part of my job has required thus far. We have just completed the official implementation project and are regular old customers by now. At this point I finally feel I can take a deep breath and step back to think about the past nine months in a holistic manner to glean some lessons learned from this incredible professional opportunity that was also incredibly challenging at times.

In this post I won’t go into the details of exactly which system we implemented and how, since it’s irrelevant to the larger discussion. Rather I’d like to stay at a high level to think about what working on such a project is like for a professional working with others on a team and as an individual trying to make things happen. For those who are curious about the details of the project, including management and process, those will be detailed in a forthcoming book chapter in Exploring Discovery (ALA Editions) edited by Ken Varnum. I will also be participating in an AL Live episode on this topic on October 8.

A project like this doesn’t come as a surprise. My library had been planning a move to a new platform for a number of years, and ran an extremely inclusive selection process when choosing the new one. When we found out that we would be able to go ahead with the implementation, I knew that I would have the opportunity to lead the implementation of the new discovery layer on the technical side, as well as coordinate much of the effort on the user outreach and education side. That was an exciting and terrifying role: while to my mind it was far less challenging technically than working on the data migration, it would be the most public piece of the project. In addition, it quickly became clear that our multi-campus situation wasn’t going to line up exactly with the built-in solutions in the products, which required a great deal of additional work to understand the interoperability of the products and how they interacted with other systems. Ultimately it was a great education, but in the thick of it the work seemed to have no end in sight.

To that end, I wanted to share some of the lessons I learned from this process both as a leader and a member of a team. Of course, many of these are widely applicable to any project, whether it’s in a library systems department or any work place.

Someone has to say the obvious thing

One of the joys of doing something that is new to everyone is that the dread of impostor syndrome is diminished. If no one knows the answer, then no one can look like an idiot for not knowing, after all. Yet that is not always clear to everyone working on the project, and as the leader it’s useful to make it clear when you have no idea how something works, or, if something seems “simple” to you, to still say exactly how it works to make sure everyone understands. Assuming others already know the obvious thing means forgetting your own path to learning, along which it was surely helpful to hear the simple thing stated clearly, perhaps several times. Besides the obvious implications of people not understanding how something works, skipping the explanation robs them of a chance to investigate something of interest and become a real contributor. Try not to make other people have to admit they have no idea what you’re talking about, whether or not you think they should have known it. This also forces you to actually know what you’re talking about. Teaching something is, after all, the best way to learn it.

Don’t answer questions all the time

Human brains can be rather pathetic moment to moment even if they do all right in the end. A service mentality leads (or in some cases requires) us to answer questions as fast as we can, but it’s better to give the correct answer or the well-considered answer a little later than answer something in haste and get the answer wrong or say something in a poor manner. If you are trying to figure out things as you go along, there’s no reason for you to know anything off the top of your head. If you get a question in a meeting and need to double check, no one would be surprised. If you get an email at 5:13 PM after a long day and need to postpone even thinking about the answer until the following day, that is the best thing for your sanity and for the success of the project both.

Keep the end goal in mind, and know when to abandon pieces

This is an obvious insight, but crucial to feeling like you’ve got some control of the process. We tend to think of way more than we can possibly accomplish in a timeframe, and continual re-prioritization is essential. Some features you were sold on in the sales demo end up being lackluster, and other features you didn’t know existed will end up thrilling you. Competing opportunities and priorities will always exist. Good project management can account for those variables and still keep the core goals central and happening on time. But that said…

Project management is not a panacea

The whole past nine months I’ve had a vision that with perfect project management everything could go perfectly. This has crept into all areas of my life and made me imagine that I could project manage my way to perfection in my life with a toddler (way too many variables) or my house (110-year-old houses are nearly as tricky as toddlers). We had excellent project management support from the vendor as well as internally, but I kept seeing room for improvement in everything. “If only we had foreseen that, we could have avoided this.” “If only I had communicated the action items more clearly after that meeting, we wouldn’t be so behind.” We actually learned very late in our project that other libraries undertaking similar projects hired a consultant to do nothing but project management on the library side, which seemed like a very good idea–though we managed all right without one. In any event, a project manager wouldn’t have changed some of the most challenging issues, which didn’t have anything to do with timelines or resources but with differences in approach and values between departments and libraries. Everyone wants the “best” for the users, but the “best” for one person doesn’t work at all for another. Coming to a compromise is the right way to handle this; there’s no way to avoid conflict and the resulting change in the plan.

Hopefully we all get to experience projects in our careers of this magnitude, whether technical or not. Anything that shifts an institution to something new that touches everyone is something to take very seriously. It’s time-consuming and stressful because it should be! Nevertheless, managing time and stress is key to ensure that you view the work as thrilling rather than diminishing.

Accessibility Testing LibGuides 2.0

Over the summer my library began investigating a potential migration to the LibGuides content management system from our current, Drupal-based subject guide system.  As part of our investigation, and with resources from our campus’ Universal Design Center 1, I began an initial review to determine the extent to which LibGuides 2.0 was accessible to all users, including users with disabilities or those using assistive technologies.  Our campus, like other California State University campuses, has a strong commitment to ensuring technology is accessible to all users.  The campus has a fairly extensive process for acquiring new technologies that requires all departments to review the accessibility of any technology or web-based product purchased, and the Universal Design Center assists all departments on campus with these evaluations.  While evaluating technology for accessibility is not typically my area of responsibility (in fact, I rarely have involvement in end-user facing technology, let alone testing for usability and accessibility), in this case I was interested in using LibGuides as an opportunity to learn more about accessibility for my own knowledge.  Ensuring that web content is accessible requires a blend of skills related to using web markup, understanding user behavior, and knowledge of assistive technologies, and as a librarian I know I can benefit from a solid understanding of all of these areas.

While I am by no means an expert on accessibility, I am familiar with basic guidelines of accessibility for content creation and markup. 2  Of course, accessibility and usability in a content management system depend, in large part, on the practices followed by content creators.  LibGuides authors have a significant amount of control over the accessibility of the content they create.  For example, using the HTML source code editing features of LibGuides, any guide author can ensure their own markup is compliant with accessibility guidelines, and manually add elements such as alternative text, titled iFrames, or ARIA attributes.  However, I was especially interested in identifying any issues that LibGuides guide authors could not easily modify themselves.  While many features can be overridden via the extensive CSS customization available in LibGuides 2.0’s Bootstrap Framework3, I wanted to identify those ‘out-of-the-box’ elements that posed accessibility problems.

The issues identified below have been reported to SpringShare, and I was told by SpringShare support that all of these issues are being investigated and are already ‘on the list’ for future development.  As this is my first attempt to really deep-dive into web accessibility, I’m very interested in feedback about the issues identified below.  I am hoping that I’ve interpreted the standards correctly, but I definitely welcome any feedback or corrections!


A sample guide was created in a LibGuides demo instance to evaluate all built-in LibGuides box types, content types, and various multimedia elements to determine Section 508 compliance.  The following features were included on the guide that was used for testing:

LibGuides Box Types:

  • Tabbed
  • Gallery
  • Profile

LibGuides Content Types:

  • Rich Text/HTML
  • Database
  • Link
  • Media/Widget
  • Book from the Catalog
  • Document/File
  • RSS Feed
  • Guide List
  • Poll
  • Google Search

Free tools used to evaluate LibGuides accessibility include:

  • W3C Markup Validator – Valid markup is usually much more accessible markup.  Unclosed tags or nesting problems can often cause problems with screen readers, keyboard navigation, or other assistive technologies.
  • WebAIM WAVE Accessibility Tool – Enter the URL of your page, and the WAVE Tool will examine the page and automatically identify accessibility errors (elements, such as form labels, that are required for accessibility that are absent or problematically implemented), alerts (potential issues that could be improved) and features (good accessibility practices).
  • CynthiaSays – Similar to the WAVE tool, CynthiaSays automatically reads through the markup of a URL you provide and generates a comprehensive report of problems and potential issues.
  • Mozilla Firefox with the following extensions (there are likely Chrome alternatives to these):
    • Fangs – A screen reader emulator that enables you to view a text-only version of a page the way a screen-reader would read it.  Ensuring that your page is read by a screen reader the way you intend is essential for accessibility, and Fangs enables you to review the screen-readability of your page without downloading a full screen-reading desktop client such as JAWS.
    • WCAG Color Contrast Checker – A handy tool to quickly view the color contrast of your page in the browser.  Low contrast elements, such as yellow text on a white background, can be very difficult to see for a variety of users.
  • Colour Contrast Analyser – A helpful desktop client that enables automated checking to ensure that web page elements or images contain high enough contrast to be viewed and read easily by a wide variety of users.
  • JAWS – JAWS is a very popular screen reading application that enables web pages to be navigated and read aloud to users.  While this software has a cost, a free trial can be downloaded temporarily to preview the software’s functionality.

Guidelines from the US Federal Government’s Section 508 Accessibility Program, W3C’s WCAG 2.0, and CSU Northridge’s Web Accessibility Criteria were used in this evaluation.


Accessibility Issues Identified

The following features do not conform to Section 508 and/or WCAG 2.0, and their implementation in LibGuides does not enable guide authors to easily override the code to improve accessibility manually.

Polls: Lack clear labeling of form elements (Section 508 1194.22(n))

In our testing, Poll elements lack “FOR” attributes in their label tags and “ID” attributes in the associated form elements.  Poll forms also make use of ‘implicit labels’, where the form element and its associated label are contained within opening and closing label tags.  For example, radio button code for a poll element is generated by LibGuides as:

<div class="radio">
<input type="radio" class="pad-left-med" name="s-lg-poll-option-13342416" 
id="s-lg-poll-option-13342416_1" value="83823" >Never

More accessible code might instead look like:

<div class="radio">
<label FOR=”never”>Never
<input type="radio" class="pad-left-med" name="s-lg-poll-option-13342416" 
id="s-lg-poll-option-13342416_1" value="83823" ID=”never”>
Cover images from ‘Books from the Catalog’: Lack textual description (Section 508 1194.22(a))

In testing, whether covers were retrieved from Syndetics or Amazon, or whether default (blue or white) covers were used, all resultant “Books from the Catalog” elements lacked ALT attributes.  Images do, however, have title elements.  It could be argued that these elements are decorative and therefore do not require alternative text.  However, the default title element (derived from the title of the book) is not especially descriptive and does little to help the user understand the role of the image on the page.

For example:

<img alt="" src="http://syndetics.com/index.aspx?isbn=9780133017854/LC.GIF&amp;
title="Getting It Right for Young Children from Diverse Backgrounds" 
class="pull-left s-lg-book-cover-img-0">

This code could be made more accessible with the following:

<img alt="Getting it Right for Young Children from Diverse Backgrounds 
Cover Image" 
title="Getting It Right for Young Children from Diverse Backgrounds" 
class="pull-left s-lg-book-cover-img-0">
Gallery Keyboard Accessibility and Tab Navigation (Section 508 1194.21(a))

In testing, it was not possible to navigate through gallery images using keyboard tab navigation alone.  While it was possible with tab navigation to bypass the gallery (tab into and out of it into the next page element), the user could not control the movement of the gallery or tab through the gallery images to access their descriptions or captions.
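As a rough sketch of markup that would address this, gallery prev/next controls rendered as real links with screen-reader-only text sit in the tab order by default and can be activated from the keyboard.  (The class names and structure below are illustrative Bootstrap-style carousel markup, not LibGuides’ actual generated code.)

<!-- Anchors with an href are keyboard-focusable by default -->
<a class="left carousel-control" href="#gallery-example" role="button" data-slide="prev">
  <span class="sr-only">Previous image</span>
</a>
<a class="right carousel-control" href="#gallery-example" role="button" data-slide="next">
  <span class="sr-only">Next image</span>
</a>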

Gallery Default Label and Caption Color: Insufficient contrast and readability

Firefox’s WCAG Color Contrast Checker identified the white label and caption color of the “Gallery” box type as having insufficient contrast with many images that could be used in the gallery.  Because the labels and captions appear directly overlaid upon gallery images, with no outline or background color to enhance the contrast of the text, they can be difficult to read.  There does not appear to be a way in LibGuides administrative settings to adjust the default caption styling, though custom scripting might be used to override the style.


Figure 1:  LibGuides gallery feature showing white label and caption that can be difficult to read against the gallery image.
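
If the gallery is indeed a standard Bootstrap carousel, a small custom CSS override along the following lines might help (the .carousel-caption selector is Bootstrap’s class name and is an assumption about the markup LibGuides generates, so test it before relying on it).  It places a translucent dark backdrop behind labels and captions so the white text stays readable over any image:

<style>
/* Sketch only: .carousel-caption is assumed Bootstrap markup.
   A translucent dark backdrop keeps white text readable. */
.carousel-caption {
  background-color: rgba(0, 0, 0, 0.6);
  text-shadow: 1px 1px 2px #000000;
}
</style>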

Accessible Practices for Guide Authors:  A few tips

The issues identified above cannot easily be resolved through LibGuides administrative options or author controls, but there are several other important practices for guide authors to be aware of.  The tips below are by no means a comprehensive guide to accessibility; there are many more aspects to ensuring content is accessible (especially concerning the use of media, tables, and other types of content), but this list provides a few examples of things content creators can be aware of when creating guides.

Media/Widget Embed Codes:  Manually add title attributes to iframe elements

When embedding iframe media (such as a YouTube video, SoundCloud file, or Google Form), it is essential that guide authors manually add a TITLE attribute to media embed codes.

Here is an example of a YouTube video’s embed code (shown with a placeholder video ID in the src attribute):

<iframe src="https://www.youtube.com/embed/VIDEO_ID" 
width="548" height="315" 
frameborder="0" allowfullscreen></iframe>

When adding code like this to a LibGuides Media/Widget feature, guide authors should manually add a descriptive title attribute that briefly describes the contents of the embedded media:

<iframe title="Video tutorial on finding a book at the Oviatt Library" 
src="https://www.youtube.com/embed/VIDEO_ID" 
width="548" height="315" 
frameborder="0" allowfullscreen></iframe>

Embedded media should also always include captions for visual media and transcripts for audio and visual media.
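
For instance, a guide author might pair the embed with a visible transcript link; the transcript file name here is purely illustrative:

<iframe title="Video tutorial on finding a book at the Oviatt Library" 
src="https://www.youtube.com/embed/VIDEO_ID" 
width="548" height="315" 
frameborder="0" allowfullscreen></iframe>
<p><a href="tutorial-transcript.html">Transcript of this video tutorial</a></p>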

Rich Text/HTML Content: Add alternative text to all images

When manually adding images to RichText/HTML content, guide authors should be sure to add descriptive Alternative Text in the image dialogue box:


Figure 2:  LibGuides Image Properties Dialogue Box used to add images.  The Alternative Text field is highlighted.

Links:  Add title and aria-label attributes

When manually adding links to resources in LibGuides, ensure the purpose of the link is clear, either with title attributes or aria-label attributes.  Avoid, where possible, vague link text such as ‘Read More’ or ‘Click Here’. If link text is vague or there is no descriptive information about the link visible on the page, use a title attribute or aria-label attribute:

Link with title attribute:

<a href="http://example.com" 
  title="Read about evaluating sources with the CRAP Test">
  The CRAP Test
</a>

Link with aria-label attribute:

<a href="http://example.com" 
  aria-label="Read more about evaluating sources">
  The CRAP Test
</a>

Look and Feel:  Ensure text is visually distinct from background colors

When designing the look and feel of LibGuides, where possible, ensure a high level of contrast between text and background colors for readability.  For example, consider enhancing the text contrast on box labels, which by default have somewhat low contrast (dark grey text on light grey background).  


Figure 3:  LibGuides default box header, showing low contrast between text in box and background.


Figure 4:  LibGuides box header with font color set to #000000 in administrative Look and Feel settings.

For any element on the page, avoid using colors that do not have high contrast with background color features.

More Resources

Many LibGuides authors have created excellent guides to accessibility for guide authors at their institutions, and Springshare also provides a useful guide to best practices for LibGuides content creators that covers some accessibility practices.  Resources from the LibGuides community helped me enormously when doing this evaluation.

The ACRL Universal Accessibility Interest Group (UAIG) is currently exploring the formation of a subcommittee to review LibGuides accessibility and potentially create a more comprehensive guide to best practices for LibGuides accessibility.  You can join the UAIG through your ALA / ACRL membership to learn more about this initiative.

I would also love to hear from others who have done this kind of testing and found other issues.  Do you have a guide to best practices that covers accessibility?  Are you aware of other features in LibGuides that are not accessible to all users?  Comment here or tweet me @lpmagnuson.


  1. The mission of the Universal Design Center is “to assist the campus community in creating pathways for individuals to learn, communicate, and share via information technology.  Part of the mission is to help the campus community design-in interoperability, usability, and accessibility into information technology so that individual learning and processing styles, or physical characteristics are not barriers to accessing information.” http://www.csun.edu/universaldesigncenter
  2. For an excellent overview of web accessibility compliance, see Cynthia Ng’s articles on ACRL Tech Connect at http://acrl.ala.org/techconnect/post/making-your-website-accessible-part-1-understanding-wcag, http://acrl.ala.org/techconnect/post/making-your-website-accessible-part-2-implementing-wcag, and http://acrl.ala.org/techconnect/post/making-your-website-accessible-part-3-content-wcag-compliance.
  3. For a great example of the extensive customization that can be done in LibGuides 2.0’s Bootstrap framework, see http://acrl.ala.org/techconnect/post/migrating-to-libguides-2-0

Collaborative UX Testing: Cardigans Included

Usability Testing

Understanding and responding to user needs has always been at the heart of librarianship, although in recent years this has taken a more intentional approach through the development of library user experience positions and departments.  Such positions are a mere fantasy, though, for many smaller libraries, where librarian teams of three or four run the entire show.  For the twenty-three member libraries of the Private Academic Library Network of Indiana (PALNI) consortium this is regularly the case, with each school having an average of four librarians on staff.  However, by leveraging existing collaborative relationships, utilizing recent changes in library systems and consortium staffing, and (of course) picking up a few new cardigans, PALNI has begun studying library user experience at scale with a collaborative usability testing model.

With four library testing locations spread over 200 miles in Indiana, we used multiple facilitators to conduct testing of the consortial discovery product, OCLC’s WorldCat Discovery. Using WebEx to record participants’ screens and project the sessions into a library staff observation room, we had 30 participants complete three general tasks with multiple parts, helping us assess user needs and participant behavior.

Collaborative testing showed clear advantages over the traditional, siloed approach, most obviously in the amount and type of data we received. The most important opportunity was the ability to test different setups of the same product. This comparative data led to conclusive setup recommendations and distinguished problems unique to individual institutions from general user problems. Testing at multiple schools also provided far more data, which reduced the likelihood of testing only outliers.

The second major advantage of collaborative testing was the ability to work as a team. From a physical standpoint, working as a team allowed us to spread the testing out, keeping it fresh in our minds and leaving enough time in between sessions to fix scripts and materials. It also allowed us to test before and after technical upgrades. From a relational perspective, sharing the work and offering continual support reduced burnout during the testing. When it came to analyzing the data, different people brought different skill sets: our particular team consisted of a graphic/interface designer, a sympathetic ear, and a master editor, all of whom played important roles in analyzing the results and writing the report. Simply put, it was an enjoyable experience which resulted in valuable, comparative data – one that could not have happened if the libraries had taken a siloed approach.

When we were designing our test, we met with Arnold Arcolio, a User Researcher in OCLC’s User Experience and Information Architecture Group. He gave us many great pieces of advice; some of it worked well in our testing, while other suggestions we rejected. The most valuable piece of advice he gave us was to start with the end in mind: make sure you have clear objectives for what data you are trying to obtain. If you leave your objectives open-ended, you will spend the rest of your life reviewing the data and learning interesting things about your users every time.

He recommended: Test at least two users of the same type. This helps avoid outliers.
We decided: For us, that meant testing at least two first-year students and two seniors.

He recommended: Test users on their own devices.
We decided: This was impractical for our purposes, as all devices used for testing had to have web conferencing software that allowed us to record users’ screens.

He recommended: Have the participants read the tasks out loud.
We decided: This was a technique that we used and recommend as well.

He recommended: Use low-tech solutions rather than expensive software and eye-tracking equipment.
We decided: We followed this advice, to the huge relief of PALNI’s executive director, who manages our budget.

He recommended: Test participants where they would normally do their research: dorm rooms, faculty offices, etc.
We decided: We did not take this recommendation, due to time and privacy concerns.
He was also very concerned about our use of multiple facilitators, so we standardized our testing as much as possible.  First, we chose uniforms for our facilitators; being librarians, the obvious choice was cardigans. We ordered matching, logoed cardigans from Lands’ End and wore them to conduct our testing, allowing us to look as similar as possible and avoid skewing participants’ impressions.  We chose blue because color theory suggests that blue encourages participants to trust the facilitator while feeling calm and confident. We also worked together to create a very detailed script that was used by each facilitator for each test.

Our next round of usability testing will incorporate many of the same recommendations provided by our usability expert, discussed above, with a few additions and changes. This fall, we will be including a mobile device portion using a camera mount (Mr. Tappy; see http://www.mrtappy.com/) to record the screen, testing different tasks, and working with different libraries. Our libraries’ staff also recommended making the report more action-oriented, with best setup practices and highlighted instructional needs.  We are also developing a list of common solutions for participant problems, such as when to redirect or correct misspellings. Finally, as much as we love the cardigans, we will be wearing matching logoed polos underneath for those test rooms that mirror the climate of the Sahara Desert.

We have enjoyed our usability experiences immensely; they are a great chance to visit with library staff, faculty, and students from other institutions in our consortium. Working collaboratively proved to be a success in our consortium, where smaller libraries, short staffing, and minimal resources would otherwise have made large-scale usability testing impossible.  Plus, we welcome having another cardigan in our wardrobe.

More detailed information about our Spring 2015 study can be found in our report, “PALNI WorldCat Discovery Usability Report.”

About our guest authors:

Eric Bradley is Head of Instruction and Reference at Goshen College and an Information Fluency Coordinator for PALNI.  He has been at Goshen since 2013.  He does not moonlight as a Mixed Martial Arts fighter or Los Angeles studio singer.

Ruth Szpunar is an Instruction and Reference Librarian at DePauw University and an Information Fluency Coordinator for PALNI. She has been at DePauw since 2005. In her spare time she can be found munching on chocolate or raiding the aisles at the Container Store.

Megan West has been the Digital Communications Manager at PALNI since 2011. She specializes in graphic design, user experience, and project management, and has a strange addiction to colored pencils.

Data, data everywhere…but do we want to drink?

The role of data, digital curation, and scholarly communication in academic libraries.

Ask around and you’ll hear that data is the new bacon (or turkey bacon, in my case. Sorry, vegetarians). It’s the hot thing that everyone wants a piece of. It is another medium with which we interact and from which we derive meaning. It is information[1]: potentially valuable and abundant. But much like [turkey] bacon, unmoderated gorging, without balance or diversity of content, can raise blood pressure and give you a heart attack. To understand how best to interact with the data landscape, it is important to look beyond it.

What do academic libraries need to know about data? A lot, but in order to separate the signal from the noise, it is imperative to look at the entire environment. To do this, one can look to job postings as a measure of engagement. Data curation positions, research data services departments, and data management specializations focus almost exclusively on digital data. However, these positions, which are often catch-alls for many other things, do not place the data management and curation activities within the larger frame of digital curation, let alone scholarly communication. Missing from job descriptions is an awareness of digital preservation or archival theory as it relates to data management or curation. In some cases, this omission could be because a fully staffed digital collections department has purview over these areas. Nonetheless, it is important to articulate in the job description the need to communicate with those stakeholders. It may be said that if the job ad discusses data curation, digital preservation should be an assumed skill; yet given the tendency for these positions to “do all the things,” it is negligent not to mention it explicitly.

Digital curation is an area that has wide appeal for those working in academic and research libraries. The ACRL Digital Curation Interest Group (DCIG) has one of the largest memberships within ACRL, with 1075 members as of March 2015. The interest group was intentionally named “digital curation” rather than “data curation” because the founders (Patricia Hswe and Marisa Ramirez) understood the interconnectivity of the domains and that the work in one area, like archives, could influence the work in another, like data management. For example, the work from Digital POWRR can help inform digital collection platform decisions or workflows, including data repository concerns. This Big Tent philosophy can help frame the data conversations within libraries in a holistic, unified manner, where the various library stakeholders work collaboratively to meet the needs of the community.

The absence of a holistic approach to data can result in the propensity to separate data from the corpus of information for which librarians already provide stewardship. Academic libraries may recognize the need to provide leadership in the area of data management, but balk when asked to consider data a special collection or to ingest data into the institutional repository. While librarians should be working to help the campus community become critical users and responsible producers of data, the library institution must empower that work by recognizing this as an extension of the scholarly communication guidance currently in place. This means that academic libraries must incorporate the work of data information literacy into their existing information literacy and scholarly communication missions, else risk excluding these data librarian positions from the natural cohort of colleagues doing that work, or risk overextending the work of the library.

This overextension is most obvious in the positions that seek a librarian to do instruction in data management, reference, and outreach, and also to provide expertise in all areas of data analysis, statistics, visualization, and other data manipulation. There are some academic libraries where this level of support is reasonable, given the mission, focus, and resourcing of the specific institution. However, considering the diversity of scope across academic libraries, I am skeptical that the prevalence of job ads that describe this suite of services is justified. Most “general” science librarians would scoff if a job ad asked for experience with interpreting spectra. The science librarian should know where to direct the person who needs help with reading the spectra, or with finding comparative spectra, but expertise in that domain should not be a core competency. Yet experience with SPSS, R, Python, statistics and statistical literacy, and/or data visualization software finds its way into librarian position descriptions, some more specialized than others.

For some institutions this is not an overextension, but just an extension of the suite of specialized services offered, and that is well and good. My concern is that academic libraries, feeling the rush of an approved line for all things data, begin to think this is a normal role for a librarian. Do not mistake me: I do not write from the perspective that libraries should not evolve services or that librarians should not develop specialized areas of expertise. Rather, I raise a concern that too often these extensions are made without the strategic planning and commitment from the institution to fully support the work they would entail.

Framing data management and curation within the construct of scholarly communication, and its intersections with information literacy, allows for the opportunity to build more of this content delivery across the organization, enfranchising all librarians in the conversation. A team approach can help with sustainability and message penetration, and moves the organization away from the single-position skill and knowledge-sink trap. Subject expertise is critical in the fast-moving realm of data management and curation, but it is an expertise that can be shared and that must be strategically supported. For example, with sufficient cross-training, liaison librarians can work with their constituents to advise on meeting federal data sharing requirements, without requiring an immediate punt to the “data person” in the library (if such a person exists). In cases where there is no data point person, creating a data working group is a good approach to distribute across the organization both the knowledge and the responsibility for seeking out additional information.

Data specialization cuts across disciplinary bounds and concerns both public services and technical services. It is no easy task, but I posit that institutions must take a simultaneously expansive yet well-scoped approach to data engagement – mindful of the larger context of digital curation and scholarly communication, while limiting responsibilities to those most appropriate for a particular institution.

[1] Lest the “data-information-knowledge-wisdom” hierarchy (DIKW) torpedo the rest of this post, let me encourage readers to allow for an expansive definition of data. One that allows for the discrete bits of data that have no meaning without context, such as a series of numbers in a .csv file, and the data that is described and organized, such as those exact same numbers in a .csv file, but with column and row descriptors and perhaps an associated data dictionary file. Undoubtedly, the second .csv file is more useful and could be classified as information, but most people will continue to call it data.

Yasmeen Shorish is assistant professor and Physical & Life Sciences librarian at James Madison University. She is a past-convener for the ACRL Digital Curation Interest Group and her research focus is in the areas of data information literacy and scholarly communication.

How is programming work supported (or not…) by administrators in libraries?

[Editor’s Note:  This post is part of a series of posts related to ACRL TechConnect’s 2015 survey on Programming Languages, Frameworks, and Web Content Management Systems in Libraries.  The survey was distributed between January and March 2015 and received 265 responses.  The first post in this series is available here.]

In our last post in this series, we discussed how library programmers learn about and develop new skills in programming in libraries.  We also wanted to find out how library administrators or library culture in general does or does not support learning skills in programming.

From anecdotal accounts, we hypothesized that learning new programming skills might be impeded by factors including lack of access to necessary technologies or server environments, lack of support for training, travel or professional development opportunities, or overloaded job descriptions that make it difficult to find the time to learn and develop new skills.  While respondents to our survey did in some cases indicate these barriers, we actually found that most respondents felt supported by their administration or library to develop new programming skills.

Most respondents feel supported, but lack of time is a problem

The question we asked respondents was:

Please describe how your employing institution either does or does not support your efforts to learn or improve programming or development skills. “Support” can refer to funding, training, mentoring, work time allocation, or other means of support.

The question was open-ended, enabling respondents to provide details about their experiences.  We received 193 responses to this question and categorized responses by whether they indicated support or a lack of support overall.  74% of respondents indicated at least some support from their library administration for learning programming, while 26% reported a lack of support.

Of those who mentioned that their administration or supervisors provide a supportive environment for learning about programming, the most frequently cited kind of support was training, closely followed by funding for professional development opportunities.  Flexibility in work time was also frequently mentioned by respondents; mentoring and encouragement were mentioned less often.


However, even among those who feel supported in terms of funding and training opportunities, respondents indicated that time to actually complete training or professional development is, in practice, scarce:

Work time allocation is a definite issue – I’m the only systems librarian and have responsibilities governing web site, intranet, discovery layer, link resover, ereserve system, meeting room booking system and library management system. No time for deep learning.

Low staffing often contributes to the lack of time to develop skills, even in supportive environments:

They definitely support developing new skills, but we have a very small technology staff so it’s difficult to find time to learn something new and implement it.

Respondents also noted that their employers expect training and funding requests to align with current work projects and priorities:

I would be able to get support in terms of work time allocation, limited funding for training. I’m limited by external control of library technology platforms (centrally administrated), need to identify utility of learning language to justify training, use, &c.

26% of respondents indicate a lack of support for learning programming

Of those respondents who indicated that their workplace is not supportive of programming professional development or learning opportunities, funding and training were the types of support most commonly cited as lacking.

Lack of Funding and Training

The main lack of support comes in the form of funding and training. There are few opportunities to network and attend training events (other than virtually online) to learn how to do my job better. I basically have to read and research (either with a book or on the web) to learn about programming for libraries.

Respondents mentioned that though they could do training during their work hours, they are not necessarily funded to do so:

I am given time for self-education, but no formal training or provision for formal education classes.

Lack of Mentoring / Peer Support

Peer support was important to many respondents, both in supportive and unsupportive environments.  Many respondents who felt supported mentioned how important it was to have colleagues in their workplace to whom they can turn to get advice and help with troubleshooting.  Comments such as this one illustrate the difficulty of being the only systems or technology support person in one’s workplace:

They are very open to supporting me financially and giving me work time to learn (we have an institutional license to lynda.com and they have funded off site training), but there is not a lot of peer support for learning. I am a solo systems department and most of our campus IT staff are contractors, so there is not the opportunity for a community of colleagues to share ideas and to learn from each other.

Understaffing / Low Pay for Programming Skills

Closely related to the lack of peer support, respondents specifically mentioned that being the only technical staff person at their institution can make it difficult to find time for learning, and that understaffing contributes to the high workload:

There’s no money for training and we are understaffed so there’s no time for self-taught skills. I am the only non-Windows programmer so there’s no one I can confer with on programming challenges. I learn whatever I need to know on the fly and only to the degree it’s necessary to get the job done.

I’m the only “tech” on site, so I don’t have time to learn anything new.

One respondent mentioned that pay for those with programming skills is not competitive at his or her institution:

We have zero means for support, partially due to a complex web of financial reasons. No training, little encouragement, and a refusal to hire/pay at market rates programming staff.

Future Research and Other Questions

As with the first post in this series, the analysis of the data yields more questions than clear conclusions.  Some respondents indicated they have very supportive workplaces, where they feel like their administration and supervisors provide every opportunity to develop new skills and learn about the technologies they want to learn about.  Others express frustration with the lack of funding or ability to collaborate with colleagues on projects that require programming skills.

One question that requires a more thorough examination of the data is whether those whose jobs do not specifically require programming skills feel as supported in learning about programming as those who were hired to be programmers.  30% of survey respondents indicated that programming is *not* part of their official job duties, but that they do programming or similar activities to perform job functions.  Initial analysis indicates there is no significant difference between these respondents and respondents as a whole.  However, there may be differences in support based on the type of position one has in a library (e.g., staff, faculty, or administration), and we did not gather that information from respondents in this survey.  At least two respondents, however, indicate that this may be the case at some libraries:

Training & funding is available; can have release time to attend; all is easier for librarians to obtain than for staff to obtain which is sad since staff tend to do more of the programming

Some staff have a lot of support, some have nill, it depends on where/what project you are working on.

In the next (and final) post in this series, we’ll explore some preliminary data on popular programming languages in libraries, and examine how often library programmers get to use their preferred programming languages in their work.