Are Your LibGuides 2.0 (images, tables, & videos) mobile friendly? Maybe not, and here’s what you can do about it.

LibGuides version 2 was released in summer 2014 and built on Bootstrap 3. However, after examining my own institution’s guides and conducting a simple random sampling of academic libraries in the United States, I found that many LibGuides did not display well on phones or other mobile devices when it came to images, videos, and tables. Springshare documentation stated that LibGuides version 2 is mobile friendly out of the box and that no additional coding is necessary; however, I found this is not necessarily accurate. While the responsive features are available, they aren’t presented clearly as options in the graphical interface, and additional coding needs to be added using the HTML editor in order for images, videos, and tables to be truly responsive on mobile.

At my institution, our LibGuides are reserved for our subject librarians to use for their research and course guides. We also use the A-Z database list and other modules. As the LibGuides administrator, I’d known since its version 2 release that the new system was built on Bootstrap, but I didn’t know enough about responsive design to do anything about it at the time. It wasn’t until this past October when I began redesigning our library’s website using Bootstrap as the framework that I delved into customizing our Springshare products utilizing what I had learned.

Looking at our own guides individually and speaking with the subject librarians about their process, I found that they have been creating and designing their guides by letting the default settings take over for images, tables, and videos. As a result, several tables run out of their boxes, images get distorted, and videos are stretched vertically with large black top and bottom margins. This is because additional coding and/or tweaking is indeed necessary in most cases for these elements to display correctly on mobile.

I’m by no means a Bootstrap expert, but my findings have been verified with Springshare, and support told me that they will be looked at by the developers. Support indicated that there may be a good reason things work as they do, perhaps to give users flexibility in their decisions, or perhaps a technical one. I’m not sure, but for now we have begun making the adjustments so our guides display correctly. I’d be interested to hear others’ experiences with these elements and what, if anything, they have had to do to ensure they are responsive.

Method

Initially, as I learned how to use Bootstrap with the LibGuides system, I looked at my own library’s subject guides and tested their responsiveness and display. To start, I browsed through our guides on my Android phone. I then used Chrome and IE11 on desktop, resizing the windows to see whether tables stayed within their boxes and images responded appropriately. I peeked at the HTML and elements within LibGuides to see how the librarians had configured their items. Once I realized the issues were similar across all of our guides, I took my search further.

Selfishly hoping it wasn’t just us, I used the LibGuides Community site, where I sorted the list by libraries on version 2, then by academic libraries. Each state’s list had to be looked at separately (you can’t sort by the whole United States). I placed all libraries from each state in a separate Excel sheet in alphabetical order. Using the random sort function, I examined two to three, sometimes five, libraries per state (25 states viewed) by following the link provided in the community site list. I also inspected the elements of several LibGuides in my spreadsheet live in Chrome, removing dimensions or styling to see how the pages responded, since I don’t have admin access to any other university’s guides. Finally, I created a demo guide for testing purposes, where I inserted various tables, images, and videos.

Some Things You Can Try

Whether or not you or your LibGuides authors are familiar with Bootstrap or the fundamentals of responsive design, anyone should be able to design or update these guide elements using the instructions below; no serious Bootstrap knowledge is needed for these solutions.

Tables

As we know, tables should not be used for layout; they are meant to display tabular data. This was another issue I came across in my investigation: many librarians are using tables in this manner. Aside from being an outdated practice, it poses a more serious issue on a mobile device. As an alternative, authors can learn how to float images or create columns and rows right within the HTML Editor. For the purposes of this post, I’ll only be using a table for tabular data.

When inserting a table using the table icon in the rich text editor, you are asked typical table questions: How many rows? How many columns? In speaking with the librarians here at my institution, no one is really giving it much thought beyond this. They fill in these blanks, insert the table, and populate it. Or worse, they copy and paste a table created in Word.

However, if you leave things as they are and the table has any width to it, this will be your result once the window is minimized or the guide is viewed on a mobile device:


Figure 1: LibGuides default table with no responsive class added

As you can see, the table runs out of its container (the box). To alleviate this, open the HTML Editor, find where the table begins, and wrap the table in a div with the table-responsive class. The HTML Editor is available to all regular users; no administrative access is needed. If you aren’t familiar with adding classes, note that you will also need to close the div after the table’s closing tag. The HTML looks something like this:

<div class="table-responsive">
  <table class="table">
    <!-- table rows and all other table elements go here -->
  </table>
</div>

Below is the result of wrapping all the table elements in the table-responsive class. As you can see, it is cleaner, there is no run-off, and Bootstrap added a horizontal scroll bar since the table is really too big for the box once it is resized. On a phone, you can now swipe sideways to scroll through the table.


Figure 2: Result of adding the responsive table class.

Springshare has also made the Bootstrap table styling classes available, which you can see in the editor dropdown as well. You can experiment with these to see which styling you prefer (borders, hover rows, striped rows…), but they don’t replace adding the table-responsive class to the table.
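As a sketch (the table content here is a made-up example), combining the styling classes with the responsive wrapper might look like this in the HTML Editor:

```html
<!-- Bootstrap 3 styling classes go on the table itself;
     the responsive wrapper stays on the outside -->
<div class="table-responsive">
  <table class="table table-striped table-bordered table-hover">
    <thead>
      <tr><th>Database</th><th>Subject</th></tr>
    </thead>
    <tbody>
      <tr><td>Database A</td><td>History</td></tr>
      <tr><td>Database B</td><td>Biology</td></tr>
    </tbody>
  </table>
</div>
```

The styling classes change only the table’s appearance; it is still the table-responsive wrapper that keeps the table inside its box and adds the horizontal scroll bar on small screens.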

Images

When you insert an image in a LibGuides box, the system brings the image’s dimensions into the Image Properties box by default. After various tests, I found it best to match the image size to the layout/box prior to uploading, and then remove the dimensions altogether from the Image Properties box (and not to place the image in an unresponsive table). This can easily be done right in the Image Properties box when the image is inserted, or in the HTML Editor afterwards.


Figure 3: Image dimensions can be removed in the Image Properties box.

 


On the left: dimensions in place. On the right: dimensions removed.

By removing the dimensions, the image is better able to resize accordingly, especially in IE, which seems to be less forgiving than Chrome. Guide creators should also add descriptive alternative text while in the Image Properties box for accessibility purposes.

Some users may be tempted to resize large images by adjusting the dimensions right in the Properties box. However, doing this doesn’t actually decrease the file size that gets passed to the user, so it doesn’t help download speed. Substantial resizing needs to be done prior to upload. Springshare recommends adjustments of no more than 10-15%.1
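As an illustration, here is roughly what the change looks like in the HTML Editor (the file name and alt text are hypothetical):

```html
<!-- Before: fixed dimensions, which can distort or overflow on small screens -->
<img src="library-entrance.jpg" alt="Main entrance of the library" width="600" height="400">

<!-- After: dimensions removed, so the image can scale down with its box -->
<img src="library-entrance.jpg" alt="Main entrance of the library">
```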

Videos

There are a few things I tried while figuring out the best way to embed a YouTube video:

    1. Use the YouTube embed code as is. This can result in a squished image and a lot of black border in the top and bottom margins.

Default YouTube embed code

    2. Use the YouTube embed code but remove the iframe dimensions (width=”560″ height=”315″). This results in a small image that looks fine but stays small regardless of the box size.
    3. Use the YouTube embed code, remove the iframe dimensions, and add the embed-responsive class, in this case 16by9. This results in a nice responsive display with no black margins. Alternately, I discovered that leaving the iframe dimensions in place while adding the responsive class looks nearly the same.

YouTube embed code with dimensions removed and responsive class added

It should also be noted that LibGuides creators and editors should manually add a “title” attribute to the embed code for accessibility.2 Neither LibGuides nor YouTube adds this automatically, so it’s up to the guide creator to add it in the HTML Editor. In addition, frameborder=”0″ will be overwritten by Bootstrap, so you can remove it or leave it in; it’s up to you.
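Putting these pieces together, a responsive, accessible embed might look like the following sketch (VIDEO_ID and the title text are placeholders; in Bootstrap 3, the iframe itself also takes the embed-responsive-item class):

```html
<!-- 16:9 responsive wrapper around the YouTube iframe -->
<div class="embed-responsive embed-responsive-16by9">
  <iframe class="embed-responsive-item"
          src="https://www.youtube.com/embed/VIDEO_ID"
          title="Introduction to library research"
          allowfullscreen></iframe>
</div>
```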

Considering Box Order/Stacking

The way boxes stack and reorder on smaller devices is also something LibGuides creators and editors should take into consideration. The layout is essentially composed of columns, and in Bootstrap the columns stack a certain way depending on device size.

I’ve tested several guides and believe the following are representative of how boxes will stack on a phone, or small mobile device. However, it’s always best to test your layout to be sure. Test your own guides by minimizing and resizing your browser window and watch how they stack.

Box stacking order of a guide with no large top box and three columns.

 

Box stacking order of a guide with a large top box and two columns.
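The stacking behavior above follows from Bootstrap 3’s grid system, which underlies the LibGuides layout. As a simplified sketch (not actual LibGuides markup), three equal-width columns sit side by side on medium screens and up, but below the 768px breakpoint each column spans the full width and the boxes stack vertically in source order:

```html
<div class="row">
  <div class="col-md-4">Box 1</div>  <!-- first when stacked -->
  <div class="col-md-4">Box 2</div>  <!-- second when stacked -->
  <div class="col-md-4">Box 3</div>  <!-- third when stacked -->
</div>
```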

 

Conclusion

After looking at the number of libraries that have these same issues, it may be safe to say that our subject librarians are similar to others in having limited HTML, CSS, or design skills. They rely on LibGuides’ easy-to-use interface to do most of the work, whether because their time is limited or because they have no interest in learning these additional skills. Our librarians spend most of their teaching time in a classroom, using a podium and large screen, or at the reference desk on large screens. Because of this, they are not highly attuned to the mobile user and how their guides display on other devices, even though their guides are being accessed by students on phones and tablets. We will be initiating a mobile reference service soon; perhaps this will help bring further awareness.

For now, I recently taught an internal workshop in order to demonstrate and share what I have learned, in hopes of helping the librarians get these elements fixed. Helping ensure that new guides are created with mobile in mind is also a priority. To date, several librarians have gone through their guides and made changes where necessary. Others have summer plans to update their guides and will address these issues at the same time. I’m not aware of any way to make these changes in bulk, since they are very individual in nature.

 


Danielle Rosenthal is the Web Development & Design Librarian at Florida Gulf Coast University. She is responsible for the library’s web site and its applications in support of teaching, learning, and scholarship activities of the FGCU Library community. Her interests include user interface, responsive, and information design.

 

Notes:

1 Maximizing your LibGuides for Mobile http://buzz.springshare.com/springynews/news-29/tips
2 http://acrl.ala.org/techconnect/post/accessibility-testing-libguides-2-0


Looking Across the Digital Preservation Landscape

When it comes to digital preservation, everyone agrees that a little bit is better than nothing. Look no further than these two excellent presentations from Code4Lib 2016: “Can’t Wait for Perfect: Implementing ‘Good Enough’ Digital Preservation” by Shira Peltzman and Alice Sara Prael, and “Digital Preservation 101, or, How to Keep Bits for Centuries” by Julie Swierczek. I highly suggest you go check those out before reading more of this post if you are new to digital preservation, since they get into some technical details that I won’t.

The takeaway from these for me was twofold. First, digital preservation doesn’t have to be hard, but it does have to be intentional; second, it does require institutional commitment. If you’re new to the world of digital preservation, understanding all the basic issues and what your options are can be daunting. I’ve been fortunate enough to lead a group at my institution that has spent the last few years working through some of these issues, and so in this post I want to give a brief overview of the work we’ve done, as well as the current landscape for digital preservation systems. This won’t be an in-depth exploration; it’s more like a key to the map. Note that ACRL TechConnect has covered a variety of digital preservation issues before, including data management and preservation in “The Library as Research Partner” and using bash scripts to automate digital preservation workflow tasks in “Bash Scripting: automating repetitive command line tasks”.

The committee I chair started by examining born-digital materials, but expanded its focus to all digital materials, since our digitized materials were an easier test case for a lot of our ideas. The committee spent a long time understanding the basic tenets of digital preservation–and in truth, we’re still working on this. For this process, we found working through the NDSA Levels of Digital Preservation an extremely helpful exercise–you can find a helpfully annotated version with tools by Shira Peltzman and Alice Sara Prael, as well as an additional explanation by Shira Peltzman. We also relied on the Library of Congress Signal blog and the work of Brad Houston, among other resources.

A few of the tasks we accomplished were to create a rough inventory of digital materials and a workflow manual, and to acquire many terabytes (currently around 8) of secure networked storage space to replace all the removable hard drives being used for backups. While backups aren’t exactly digital preservation, we wanted to at the very least secure the backups we did have. An inventory and workflow manual may sound impressive, but I want to emphasize that these are living and somewhat messy documents. The major advantage of having them is not so much what we do have, but identifying gaps in our processes. Through this process, we were able to develop a lengthy (but prioritized) list of tasks that need to be completed before we’ll be satisfied with our processes. For example, one of the major workflow gaps we discovered is that we have many items on obsolete digital media formats, such as floppy disks, that need to be imaged before they can even be inventoried. We identified the tool we wanted to use for that, but time and staffing pressures have left the completion of this project in limbo. We’re now working on hiring a graduate student who can help work on this and similar projects.

The other piece of our work has been trying to understand what systems are available for digital preservation. I’ll summarize my understanding of this below, with several major caveats. First, this is a world that is currently undergoing a huge amount of change as many companies and people work on developing new systems or improving existing ones, so there is a lot missing from what I will say. Second, none of these solutions are necessarily mutually exclusive. Some by design require various pieces to be used together; some may not, but your circumstances may dictate a different solution. For instance, you may not like the access layer built into one system, and so will choose something else. The dream that you can just throw money at the problem and it will go away is, at present, still just a dream–as are so many library technology problems.

The closest to such a dream is the end-to-end system. This is something where at one end you load in a file or set of files you want to preserve (for example, a large set of donated digital photographs in TIFF format), and at the other end have a processed archival package (which might include the TIFF files, some metadata about the processing, and a way to check for bit rot in your files), as well as an access copy (for example, a smaller sized JPG appropriate for display to the public) if you so desire–not all digital files should be available to the public, but still need to be preserved.

Examples of such systems include Preservica, ArchivesDirect, and Rosetta. All of these are hosted vended products, but ArchivesDirect is based on the open source Archivematica, so it is possible to get some idea of the experience of using it if you are able to install the tools on which it is based. The issues with end-to-end systems are similar to any other choice you make in library systems. First, they come at a high price: Preservica and ArchivesDirect are open about their pricing, and for a plan that will meet the needs of a medium-sized library you will be looking at an annual cost of $10,000-$14,000. You are pretty much stuck with the options offered in the product, though you still have many decisions to make within that framework. Migrating from one system to another if you change your mind may involve some very difficult processes, so inertia dictates that you will be using that system for the long haul, and a short trial period or demo may not be enough to tell you whether that’s a good idea. On the other hand, you have the potential for more simplicity and therefore a stronger likelihood that you will actually use the system, and a hosted product is much more manageable for smaller staffs that lack dedicated positions for digital preservation work–or even room in current positions for digital preservation work. A hosted product is ideal if you don’t have the staff or servers to install anything yourself, and it helps you get your long-term archival files onto Amazon Glacier.

Amazon Glacier is, by the way, where pretty much all the services we’re discussing store everything you submit for long-term storage. It’s dirt cheap to store on Amazon Glacier, and if you can restore slowly, it’s not too expensive to restore–it’s only expensive if you need to restore a lot quickly. But using it is somewhat technically challenging, since you interact with it only through APIs; there’s no way to log in and upload or download files as with a cloud storage service like Dropbox. For that reason, when you’re paying a service hundreds of dollars per terabyte that ultimately stores all your material on Amazon Glacier, which costs pennies per gigabyte, you’re paying for the technical infrastructure to get your material on and off of there as much as anything else. In another sense, you’re paying for an insurance policy for accessing materials in a catastrophic situation where you do need to recover all your files–theoretically, you don’t have to pay extra in such a situation.

A related option to an end-to-end system that has some attractive features is to join a preservation network. Examples of these include the Digital Preservation Network (DPN) and APTrust. In this model, you pay an annual membership fee (right now $20,000 annually, though this could change soon) to join the consortium. This gives you access to a network of preservation nodes (either Amazon Glacier or nodes at other institutions), access to tools, and a right (and requirement) to participate in the governance of the network. Another larger preservation goal of such networks is to ensure long-term access to material even if the owning institution disappears. Of course, $20,000 plus travel to meetings and work time to participate in governance may be out of reach for many, but it appears that both DPN and APTrust are investigating new pricing models that may meet the needs of smaller institutions who would like to participate but can’t contribute as much in money or time. This is a world that I would recommend watching closely.

Up until recently, the way that many institutions were achieving digital preservation was through some kind of repository that they created themselves, either with open source repository software such as Fedora Repository or DSpace or some other type of DIY system. With open source Archivematica, and a few other tools, you can build your own end-to-end system that will allow you to process files, store the files and preservation metadata, and provide access as is appropriate for the collection. This is theoretically a great plan. You can make all the choices yourself about your workflows, storage, and access layer. You can do as much or as little as you need to do. But in practice for most of us, this just isn’t going to happen without a strong institutional commitment of staff and servers to maintain this long term, at possibly a higher cost than any of the other solutions. That realization is one of the driving forces behind Hydra-in-a-Box, which is an exciting initiative that is currently in development. The idea is to make it possible for many different sizes of institutions to take advantage of the robust feature sets for preservation in Fedora and workflow management/access in Hydra, but without the overhead of installing and maintaining them. You can follow the project on Twitter and by joining the mailing list.

After going through all this, I am reminded of one of my favorite slides from Julie Swierczek’s Code4Lib presentation. She works through the Open Archival Information System (OAIS) reference model diagram to explain it in depth, comes to a point in the workflow that calls for “Sustainable Financing”, and then zooms in on it. For many, this is the crux of the digital preservation problem. It’s possible to do a sort-of-OK job with digital preservation for nothing or very cheap, but ensuring long-term preservation requires institutional commitment for the long haul, just as any library collection does. Given how much attention digital preservation is starting to receive, we can hope that more libraries will see it as a priority and start to participate. This may lead to even more options, tools, and knowledge, but it will still require making preservation a priority and putting in the work.


A Reflection on Code4Lib 2016

See also: Margaret’s reflections on Code4Lib 2013 and recap of the 2012 keynote.

 


About a month ago was the 2016 Code4Lib conference in sunny Philadelphia. I’ve only been to a few Code4Lib conferences, starting with Raleigh in 2014, but it’s quickly become my favorite libraryland conference. This won’t be a comprehensive recap but a little taste of what makes the event so special.

Appetizers: Preconferences

One of the best things about Code4Lib is the affordable preconferences. It’s often a pittance to add on a preconference or two, extending your conference for a whole day. Not only that, there’s typically a wealth of options: the 2015 conference boasted fifteen preconferences to choose from, and Philadelphia somehow managed to top that with an astonishing twenty-four choices. Not only are they numerous, the preconferences vary widely in their topics and goals. There are always intensely practical ones focused on bootstrapping people new to a particular framework, programming language, or piece of software (e.g. Railsbridge, workshops focused on Blacklight or Hydra). But there are also events for practicing your presentation or the aptly named “Getting Ready for Workshops” Workshop. One of my personal favorite ideas—though I must admit I’ve never attended—is the perennial “Fail4Lib” sessions where attendees examine their projects that haven’t succeeded and discuss what they’ve learned.

This year, I wanted to run a preconference of my own. I enjoy teaching, but I rarely get to do it in my current position. Previously, in a more generalist technologist position, I would teach information literacy alongside the other librarians. But as a Systems Librarian, it can sometimes feel like I rarely get out from behind my terminal. A preconference was an appealing chance to teach information professionals on a topic that I’ve accumulated some expertise in. So I worked with Coral Sheldon-Hess to put together a workshop focused on the fundamentals of the command line: what it is, how to use it, and some of the pivotal concepts. I won’t say too much more about the workshop because Coral wrote an excellent, detailed blog post right after we were done. The experience was great and feedback we received, including a couple kind emails from our participants, was very positive. Perhaps we, or someone else, can repeat the workshop in the future, as we put all our materials online.

Main Course: Presentations

Thankfully I don’t have to detail the conference talks too much, because they’re all available on YouTube. If a talk looks intriguing, I strongly encourage you to check out the recording. I’m not too ashamed to admit that a few went way over my head, so seeing the original will certainly be more informative than any summary I could offer.

One thing that was striking was how the two keynotes centered on themes of privacy and surveillance. Kate Krauss, Director of Communications of the Tor Project, led the conference off. Naturally, Tor being privacy software, Krauss focused on stories of government surveillance. She noted how surveillance focuses on the most marginalized people, citing #BlackLivesMatter and the transgender community as examples. Krauss’ talk provided concrete steps that librarians could take, for instance examining our own data collection practices, ensuring our services are secure, hosting privacy workshops, and running a Tor relay. She even mentioned The Library Freedom Project as a positive example of librarians fighting online surveillance, which she posited as one of the premier civil rights issues of our time.

On the final day, Gabriel Weinberg of the search engine DuckDuckGo spoke on similar themes, except he concentrated on how his company’s lack of personalization and tracking differentiated it from companies like Google and Apple. To me, Weinberg’s talk bookended well with Krauss’ because he highlighted the dangers of corporate surveillance. While the government certainly has abused its access to certain fundamental pieces of our country’s infrastructure—obtaining records from major telecom companies without a warrant comes to mind—tech companies are also culpable in enabling the unparalleled degree of surveillance possible in the modern era, simply by collecting such massive quantities of data linked to individuals (and, all too often, by failing to secure their applications properly).

While the pair of keynotes were excellent and thematic, my favorite moments of the conference were the talks by librarians. Becky Yoose gave perhaps the most rousing, emotional talk I’ve ever heard at a conference on the subject of burnout. Burnout is all too real in our profession, but not often spoken of, particularly in such a public venue. Becky forced us all to confront the healthiness and sustainability of our work/life balance, stressing the importance not only of strong organizational policies to prevent burnout but also personal practices. Finally, Andreas Orphanides gave a thoughtful presentation on the political implications of design choices. Dre’s well-chosen, alternatingly brutal and funny examples—from sidewalk spikes that prevent homeless people from lying in doorways, to an airline website labelling as “lowest” a price clearly higher than others on the very same page—outlined how our design choices reflect our values, and how we can better align our values with those of our users.

I don’t mean to discredit anyone else’s talks—there were many more excellent ones, on a variety of topics. Dinah Handel captured my feelings best in this enthusiastic tweet:

Dessert: Community

My main enjoyment from Code4Lib is the sense of community. You’ll hear a lot of people at conferences state things like “I feel like these are my people.” And we are lucky as a profession to have plenty of strong conference options, depending on our locality, specialization, and interests. At Code4Lib, I feel like I can strike up a conversation with anyone I meet about an impending ILS migration, my favorite command-line tool, or the vagaries of mapping between metadata schemas. While I love my present position, I’m mostly a solo systems person surrounded by a few other librarians all with a different expertise. As much as I want to discuss how ludicrous the webpub.def syntax is, or why reading XSLT makes me faintly ill, I know it’d bore my colleagues to death. At Code4Lib, people can at least tolerate such subjects of conversation, if not revel in them.

Code4Lib is great not solely because of its focus on technology and code, which a few other library organizations share, but because of the efforts of community members to make it a pleasurable experience for all. To name just a couple of the new things Code4Lib introduced this year: while previous years have had Duty Officers whom attendees could safely report harassment to, they were announced and much more visible this year; sponsored child care was available for conference goers with small children; and a service provided live transcription of all the talks.1 This is in addition to a number of community-building measures that previous Code4Lib conferences featured, such as a series of newcomers dinners on the first night, a “share and play” game night, and diversity scholarships. Overall, it’s evident that the Code4Lib community is committed to being positive and welcoming. Not that other library organizations aren’t, but it should be evident that our profession isn’t immune from problems. Being proactive and putting in place measures to prevent issues like harassment is a shining example of what makes Code4Lib great.

All this said, the community does have its issues. While a 40% female attendance rate is fair for a technology conference, it’s clear that the intersection of coding and librarianship is more male-dominated than the rest of the profession at large. Notably, Code4Lib has done an incredible job of democratically selecting keynote speakers over the past few years—five female and one male for the past three conferences—but the conference has also been largely white, so much so that the 2016 conference’s Program Committee gave a lightning talk addressing the lack of speaker diversity. Hopefully, measures like the diversity scholarships and conscious efforts on the part of the community can make progress here. But the unbearable whiteness of librarianship remains a very large issue.

Finally, it’s worth noting that Code4Lib is entirely volunteer-run. Since it’s not an official professional organization with membership dues and full-time staff members, everything is done by people willing to spare their own time to make the occasion a great one. A huge thanks to the local planning committee and all the volunteers who made such a great event possible. It’s pretty stunning to me that Code4Lib manages to put together some of the nicest benefits of any conference—the live streaming and transcribed talks come to mind—without a huge backing organization, and while charging pretty reasonable registration prices.

Night Cap

I’d recommend Code4Lib to anyone in the library community who deals with technology, whether you’re a manager, cataloger, systems person, or developer. There’s a wide breadth of material suitable for anyone and a great, supportive community. If that’s not enough, the proportion of presentations featuring pictures of cats and/or animated gifs is higher than your average conference.

Notes

  1. Aside: Matt Miller made a fun “Overheard at Code4Lib 2016” app using the transcripts

Supporting Library Staff Technology Training

Keeping up with technical skills and finding time to learn new things can be a struggle, no matter your role in a library (or in any organization, for that matter).  In some academic libraries, professional development opportunities have historically been available to librarians and library faculty, and less available (or totally unavailable) to those in staff positions.  In this post, I argue that this disparity, where it may exist, is not only prima facie unfair, but can reduce innovation and willingness to change in the library.  If your library does not have a policy that specifically addresses training and professional development for all library staff, this post will provide some ideas on how to start crafting one.

In this post, when referring to “training and professional development,” I mostly have in mind technology training – though a training policy could cover non-technical training, such as leadership, time management, or project management training (though of course, some of those skills are closely related to technology).

Rationale

In the absence of a staff training policy or formal support for staff training, staff are likely still doing the training, but may not feel supported by the library to do so.  In ACRL TechConnect’s 2015 survey on learning programming in libraries, respondents noted disparities at their libraries between support for technical training for faculty or librarian positions and staff positions.  Respondents also noted that even though support for training was available in principle (e.g., funding was potentially available for travel or training), workloads were too high to find the time to complete training and professional development, and some respondents indicated learning on their own time was the only feasible way to train.   A policy promoting staff training and professional development should therefore explicitly allocate time and resources for training, so that training can actually be completed during work hours.

There is not a significant amount of recent research reflecting the impact of staff training on library operations.  Research in other industries has found that staff training can improve morale, reduce employee turnover and increase organizational innovation.1  In a review of characteristics of innovative companies, Choudhary (2014) found that “Not surprisingly, employees are the most important asset of an organization and the most important source of innovation.” 2  Training and workshops – particularly those that feature “lectures/talks from accomplished persons outside the organization” are especially effective in fostering happy and motivated employees 3 – and it’s happy and motivated employees that contribute most to a culture of innovation in an organization.

Key Policy Elements

Time

Your policy should outline how much time for training is available to each employee (for example, 2 hours a week or 8 hours a month).  Ensuring that staff have enough time for training while covering their existing duties is the most challenging part of implementing a training policy or plan.  For service desks in particular, scheduling adequate coverage while staff are doing professional development can be very difficult – especially as many libraries are understaffed.  To free up time, an option might be to train and promote a few student workers to do higher-level tasks to cover staff during training (you’ll need to budget to pay these students a higher wage for this work).  If your library wants to promote a culture of learning among staff, but there really is no time available to staff to do training, then the library probably needs more staff.

A training policy should be clear that training should be scheduled in advance with supervisor approval, and supervisors should be empowered to integrate professional development time into existing schedules.  Your policy may also specify that training hours can be allocated more heavily during low-traffic times in the library, such as summer, spring, and winter breaks, and that employees will likely train less during high-traffic or project-intensive times of the year.  In this way, a policy that specifies that an employee has X number of training hours per month or year might be more flexible than a policy that calls for X number of training hours per week.

Equipment and Space

Time is not enough.  Equipment – particularly mobile devices such as iPads or laptops – should also be available for staff use and checkout. These devices should be configured to enable staff to install required plugins and software for viewing webinars and training videos.  Library staff whose offices are open and vulnerable to constant interruption by patrons or student workers may find training is more effective if they have the option to check out a mobile device and head to another area – away from their desk – to focus.  Quiet spaces and webinar viewing rooms may also be required, and most libraries already have group or individual study areas.  Ensure that your policy states whether or how staff may reserve these spaces for training use.

Funding

There are tons of training materials, videos, and courses that are freely available online – but there are also lots of webinars and workshops that cost money and are well worth paying for.  A library that offers funding for professional development for some employees (such as librarians or those with faculty status), but not others, risks alienating staff and sending the message that staff learning is not valued by the organization.  Staff should know what the process is to apply for funding to travel, attend workshops, and view webinars.  Be sure to write up the procedures for requesting this funding, either in the training policy itself or in a separate document available to all employees.  Funding might be limited, but it’s vital to be transparent about funding request procedures.

An issue that is probably outside of the scope of a training policy, but is nonetheless very closely related, is staff pay.  If you’re asking staff to train more, know more, and do more, compensation needs to reflect this. Pay scales may not have caught up to the reality that many library staff positions now require technology skills that were not necessary in the past; some positions may need to be re-classed.  For this reason, creating a staff training policy may not be possible in a vacuum, but this process may need to be integrated with a library strategic planning and/or re-organization plan.  It’s incredibly important on this point that library leadership is on board with a potential training policy and its strategic and staffing implications.

Align Training with Organizational Goals

It likely goes without saying that training and professional development should align with organizational goals, but you should still say it in your policy – and specify where those organizational goals are documented. How those goals are set is determined by the strategic planning process at your library, but you may wish to outline in your policy that supervisors and department heads can set departmental goals and encourage staff to undertake training that aligns with these goals.  This can, in theory, get a little tricky: if we want to take a yoga class as part of our professional development, is that OK?  If your organization values mindfulness and/or wellness, it might be!

If your library wants to promote a culture of experimentation and risk-taking, consider explicitly defining and promoting those values in your policy.  This can help guide supervisors when working with staff to set training priorities.  One exciting potential outcome of implementing a training policy is to foster an environment where employees feel secure in trying out new skills, so make it clear that employees are empowered to do so.

Communication / Collaboration

Are there multiple people in your library interested in learning Ruby?  If there were, would you have any way of knowing?  Effective communication can be a massive challenge on its own (and is way beyond the scope of this post), but when setting up and documenting a training policy, you could include guidance for how staff should communicate their training activities with the rest of the library.  This could take the form of something totally low-tech (like a bulletin board or shared training calendar in the break room) or could take the form of an intranet blog where everyone is encouraged to write a post about their recent training and professional development experiences.  Consider planning to hold ‘share-fests’ a few times a year where staff can share new ideas and skills with others in the library to further recognize training accomplishments.

Training is in the Job Description

Training and professional development should be included in all job descriptions (a lot easier said than done, admittedly).  Employees need to know they are empowered to use work time to complete training and professional development.  There may be union, collective bargaining, and employee review implications to this – which I certainly am not qualified to speak on – but these issues should be addressed when planning to implement a training policy.  For new hires going forward, expect to have a period of ‘onboarding’ during which time the new staff member will devote a significant amount of time to training (this may already be happening informally, but I have certainly had experiences as a staff member being hired in and spending the first few weeks of my new job trying to figure out what my job is on my own!).

Closing the Loop:  Idea and Innovation Management

OK, so you’ve implemented a training policy, and now training and professional development is happening constantly in your library.  Awesome!  Not only is everyone learning new skills, but staff have great ideas for new services, or are learning about new software they want to implement.  How do you keep the momentum going?

One option might be to set up a process to track ideas and innovative projects in your library.  There’s a niche software industry around idea and innovation management that features some highly robust and specialized products (Brightidea, Spigit, and IdeaScale are some examples), but you could start small and integrate idea tracking into an existing ticket system like Spiceworks, osTicket, or even LibAnswers.  A periodic open vote could be held to identify high-impact projects and prioritize new ideas and services.  It’s important to be transparent and accountable for this – adopting internally-generated ideas can in and of itself be a great morale-booster if handled properly, but if staff feel like their ideas are not valued, a culture of innovation will die before it gets off the ground.

Does your library have a truly awesome culture of learning and employee professional development?  I’d love to hear about it in the comments or @lpmagnuson.

Notes

  1. Sung, S. , & Choi, J. (2014). Do organizations spend wisely on employees? effects of training and development investments on learning and innovation in organizations. Journal of Organizational Behavior,35(3), 393-412.
  2.  Choudhary, A. (2014). Four Critical Traits of Innovative Organizations. Journal of Organizational Culture, Communication and Conflict, 18(2), 45-58.
  3. Ibid.

Evaluating Whether You Should Move Your Library Site to Drupal 8

After much hard work over years by the Drupal community, Drupal users rejoiced when Drupal 8 came out late last year. The system has been completely rewritten and does a lot of great stuff–but can it do what we need Drupal websites to do for libraries?  The quick answer seems to be that it’s not quite ready, but depending on your needs it might be worth a look.

For those who aren’t familiar with Drupal, it’s a content management system designed to manage complex sites with multiple types of content, users, features, and appearances.  Certain “core” features are available to everyone out of the box, but even more useful are the “modules”, which extend the features to do all kinds of things from the mundane but essential backup of a site to a flashy carousel slider. However, the modules are created by individuals or companies and contributed back to the community, and thus when Drupal makes a major version change they need to be rewritten, quite drastically in the case of Drupal 8. That means that right now we are in a period where developers may or may not be redoing their modules, or they may be rethinking how a certain task should be done in the future. Because most of these developers are doing this work as volunteers, it’s not reasonable to expect that they will complete the work on your timeline. The expectation is that if a feature is really important to you, then you’ll work on development to make it happen. That is, of course, easier said than done for people who barely have enough time to do the basic web development asked of them, much less complex programming or learning a new system top to bottom, so most of us are stuck waiting or figuring out our own solutions.

Despite my knowledge of the reality of how Drupal works, I was very excited at the prospect of getting into Drupal 8 and learning all the new features. I installed it right away and started poking around, but realized pretty quickly I was going to have to do a complete evaluation for whether it was actually practical to use it for my library’s website. Our website has been on Drupal 7 since 2012, and works pretty well, though it does need a new theme to bring it into line with 2016 design and accessibility standards. Ideally, however, we could be doing even more with the site, such as providing better discovery for our digital special collections and making the site information more semantic web friendly. It was those latter, more advanced, feature desires that made me really wish to use Drupal 8, which includes semantic HTML5 integration and schema.org markup, as well as better integration with other tools and libraries. But the question remains–would it really be practical to work on migrating the site immediately, or would it make more sense to spend some development time on improving the Drupal 7 site to make it work for the next year or so while working on Drupal 8 development more slowly?

A bit of research online will tell you that there’s no right answer, but that the first thing to do in an evaluation is determine whether any of the modules on which your site depends are available for Drupal 8, and if not, whether there is a good alternative. I must add that while all the functions I am going to mention can be done manually or through custom code, a lot of that work would take more time to write and maintain than I expect to have going forward. In fact, we’ve been working to move more of our customized code to modules already, since that makes it possible to distribute some of the workload to others outside of the very few people at our library who write code or even know HTML well, not to mention taking advantage of all the great expertise of the Drupal community.

I tried two different methods for the evaluation. First, I created a spreadsheet with all the modules we actually use in Drupal 7, their versions, and the current status of those modules in Drupal 8, or whether I found a reasonable substitute. Next, I tried a site that automates that process, d8upgrade.org. Basically you fill in your website URL and email, and wait a day for your report, which is a straightforward list of the modules found on your site, noting for each whether there is a stable Drupal 8 release, an alpha or beta release, or no Drupal 8 release yet. This is a useful timesaver, but will need some manual work to complete and isn’t always completely up to date.

My manual analysis determined that there were 30 modules on which we depend to a greater or lesser extent. Of those, 10 either moved into Drupal core (so would automatically be included) or the functions that used them moved into another piece of core. 5 had versions available in Drupal 8, with varying levels of release (i.e., several in alpha release, so questionable to use for production sites but probably fine), and 5 were not migrated but it was possible to identify substitute Drupal 8 modules. That’s pretty good – 20 modules were covered in Drupal 8, and in several cases one module could do the job that two or more had done in Drupal 7. Of the remaining 10 modules that weren’t migrated and didn’t have an easy substitution, three are critical to maintaining our current site workflows. I’ll talk about those in more detail below.
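An inventory like this can also be kept as data rather than only in a spreadsheet, which makes re-running the tally trivial as module statuses change. A small illustrative sketch in Python – the module names and statuses below are made up for the example, not our actual evaluation:

```python
from collections import Counter

# Hypothetical evaluation data: each Drupal 7 module mapped to its Drupal 8
# status ("core" = absorbed into core, "stable"/"alpha" = D8 release exists,
# "substitute" = a different D8 module covers the need, "missing" = no path yet).
modules = {
    "views": "core",
    "ctools": "core",
    "webform": "missing",
    "redirect": "missing",
    "backup_migrate": "missing",
    "pathauto": "stable",
    "token": "stable",
    "metatag": "alpha",
    "workbench_moderation": "alpha",
    "honeypot": "substitute",
}

counts = Counter(modules.values())

# Anything other than "missing" means there is some Drupal 8 path forward.
available = sum(n for status, n in counts.items() if status != "missing")
print(f"{available} of {len(modules)} modules have a Drupal 8 path")

# The "missing" modules are the potential dealbreakers to examine one by one.
dealbreakers = sorted(m for m, status in modules.items() if status == "missing")
print("Still blocking migration:", ", ".join(dealbreakers))
```

Re-running this after each round of checking drupal.org project pages gives a quick picture of whether the migration picture has improved.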

d8upgrade.org found 21 modules in use, though I didn’t include all of them on my own spreadsheet if I didn’t intend to keep using them in the future. I’ve included a screenshot of the report, and there are a few things to note. This list does not have all the modules I had on my list, since some of those are used purely behind the scenes for administrative purposes and would have no indication of use without administrative access. The very last item on the list is Core, which of course isn’t going to be upgraded to Drupal 8 – it is Drupal 8. I also found that it’s not completely up to date. For instance, my own analysis found a pre-release version of Workbench Moderation, but that information had not made it to this site yet. A quick email to them fixed it almost immediately, however, so this screenshot is out of date.

I decided that there were three dealbreaker modules for the upgrade, and I want to talk about why we rely on them, since I think my reasoning will be applicable to many libraries with limited web development time. I will also give honorable mention to a module that we are not currently using, but I know a lot of libraries rely on and that I would potentially like to use in the future.

Webform is a module that provides a very simple interface for creating webforms and doing all kinds of things with them beyond simply sending emails. We have many, many custom PHP/MySQL forms throughout our website and intranet, but there are only two people on the staff who can edit those or download the submitted entries from them. They also occasionally have dreadful spam problems. We’ve been slowly working on migrating these custom forms to the Drupal Webform module, since that allows much more distribution of effort across the staff, and provides easier ways to stop spam using, for instance, the Honeypot module or Mollom. (We’ve found that the Honeypot module stopped nearly all our spam problems, so we didn’t need to move to Mollom, since we don’t have user comments to moderate.) The thought of going back to coding all those webforms myself is not appealing, so for now I can’t move forward until I come up with a Drupal solution.

Redirect does a seemingly tiny job that’s extremely helpful: it allows you to create redirects for URLs on your site. For instance, you might want a library-branded link that forwards somewhere else, like a database vendor or another page on your university site, or you might want to change a page URL while ensuring that people with bookmarks to the old page will still find it. This is, of course, something that you can do on your web server, assuming you have access to it, but this module takes away a lot of the administrative overhead and helps keep things organized.

Backup and Migrate is my greatest helper in my goal to stay at least in the neighborhood of best practices for web development, when web development is only half my job – or some weeks more like a quarter of it. It makes keeping my development, staging, and production sites in sync a very quick process, and since I created a workflow using this module I have been far more successful in keeping my development processes sane. It provides an interface for backing up your site’s database, files directories, or both, and those backups can later be restored through the same module to completely rebuild a site. I use it at least every two weeks, or more often when working on a particular feature, to move the database between servers (I don’t move the files with the module for this process, but that’s useful for backups meant for emergency restoration of the site). There are other ways to accomplish this work, but this particular workflow has been so helpful that I hate to dump a lot of time into redoing it just now.

One last honorable mention goes to Workbench, which we don’t use but I know a lot of libraries do. This allows you to create a much more friendly interface for content editors so they don’t have to deal with the administrative backend of Drupal and can just see their own content. We do use Workbench Moderation, which does have a Drupal 8 release; it provides a moderation queue so that the six or so staff members who can create or edit content, but don’t have administrative rights, can have their content checked by an administrator. None of them particularly like the standard Drupal content creation interface, and it’s not something that we would ever ask the rest of the staff to use. We know from the lack of use of our intranet, which also is on Drupal, that no one particularly cares for editing content there. So if we wanted to expand access to website editing, which we’ve talked about a lot, this would be a key module for us to use.

Given the current status of these modules, with rewrites in progress, it seems likely that by the end of the year it will be possible to migrate to Drupal 8 with our current setup – or that, in playing around with Drupal 8 on a development site, we’ll find a different way to approach these needs. If you have the interest and time to do this, there are worse ways to pass the time. If you are creating a completely new Drupal site and don’t have a time crunch, starting in Drupal 8 now is probably the way to go, since by the time the site is ready you may have additional modules available and get to take advantage of all the new features. If this is something you’re trying to roll out by the end of the semester, maybe wait on it.

Have you considered upgrading your library’s site to Drupal 8? Have you been successful? Let us know in the comments.


Store and display high resolution images with the International Image Interoperability Framework (IIIF)

Recently a faculty member working in the Digital Humanities on my campus asked the library to explore International Image Interoperability Framework (IIIF) image servers, with the ultimate goal of determining whether it would be feasible for the library to support a IIIF server as a service for the campus.  I typically am not very involved in supporting work in the Digital Humanities on my campus, despite my background in (and love for) the humanities (philosophy majors, unite!). Since I began investigating this technology, I seem to see references to IIIF-compliance popping up all over the place, mostly in discussions related to IIIF compatibility in Digital Asset Management System (DAMS) repositories like Hydra 1 and Rosetta 2, but also including ArtStor3 and the Internet Archive 4.

IIIF was created by a group of technologists from Stanford, the British Library, and Oxford to solve three problems: 1) slow loading of high resolution images in the browser, 2) high variation of user experience across image display platforms, requiring users to learn new controls and navigation for different image sites, and 3) the complexity of setting up high performance image servers.5 Image servers traditionally have also tended to silo content, coupling back-end storage with either customized or commercial systems that do not allow additional 3rd party applications to access the stored data.

With IIIF, images are stored in a way that facilitates API access, so that multiple applications can retrieve and render them, and users can discover your content through a variety of different portals. For example, if you have images stored on a IIIF-compatible server, multiple front-end discovery platforms could access the images through the API, either at your own institution or at other institutions interested in providing gateways to your content. You might have images that are relevant to multiple repositories or collections; for instance, you might want your images to be discoverable through your institutional repository, discovery system, and digital archives system.

IIIF systems are designed to work with two components: an image server (such as the Python-based Loris application)6 and a front-end viewer (such as Mirador 7 or OpenSeadragon8).  There are other viewer options out there (IIIF Viewer 9, for example), and you could conceivably write your own viewer application, or write a IIIF display plugin that can retrieve images from IIIF servers.  Your image server can serve up images via APIs (discussed below) to any IIIF-compatible front-end viewer, and any IIIF-compatible front-end viewer can be configured to access information served by any IIIF-compatible image server.

IIIF Image API and Presentation API

IIIF-compatible software enables retrieval of content from two APIs: the Image API and the Presentation API. As you might expect, the Image API is designed to enable the retrieval of actual images. Supported file types depend on the image server application being used, but API calls enable the retrieval of specific file formats including .jpg, .tif, .png, .gif, .jp2, .pdf, and .webp.10 A key feature of the API is the ability to request images by image region – meaning that if only a portion of the image is requested, the image server can return precisely the area of the image requested.11 This enables faster, more nimble rendering of detailed image regions in the viewer.


A screenshot showing a region of an image that can be returned via a IIIF Image API request. The region to be retrieved is specified using pixel coordinates (x, y, width, height), which are then included in the request URI. (Image Source: IIIF Image API 2.0. http://iiif.io/api/image/2.0/#region)

The basic structure of a request to a IIIF image server follows a standard scheme:

{scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}

An example request to a IIIF image server might look like this:

http://www.example.org/imageservice/abcd1234/full/full/0/default.jpg12
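The scheme is regular enough that request URIs can be assembled programmatically. Below is a minimal Python sketch; the server hostname, prefix, and identifier are hypothetical, and the helper function is mine rather than part of any IIIF library:

```python
def iiif_image_url(server, identifier, region="full", size="full",
                   rotation=0, quality="default", fmt="jpg", prefix=""):
    """Build a IIIF Image API request URI from its components.

    `region` may be "full" or pixel coordinates "x,y,w,h"; `size` may be
    "full", "w,", ",h", or "pct:n"; `rotation` is in degrees.
    """
    path = "/".join([identifier, region, str(size), str(rotation),
                     f"{quality}.{fmt}"])
    return f"http://{server}{prefix}/{path}"

# The full-image request shown above:
full = iiif_image_url("www.example.org", "abcd1234", prefix="/imageservice")
# → http://www.example.org/imageservice/abcd1234/full/full/0/default.jpg

# A 512×512-pixel region starting at (1000, 800), scaled to 256 pixels wide –
# the kind of request a viewer issues when the user zooms into a detail:
detail = iiif_image_url("www.example.org", "abcd1234",
                        region="1000,800,512,512", size="256,",
                        prefix="/imageservice")
```

Because every parameter lives in the URI itself, a viewer can request exactly the region and resolution it needs for the current zoom level, rather than downloading the whole high-resolution file.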

The Presentation API returns contextual and descriptive information about images, such as how an image fits in with a collection or compound object, or annotations and properties to help the viewer understand the origin of the image. The Presentation API retrieves metadata stored as “manifests” that are often expressed as JSON for Linked Data, or JSON-LD.13 Image servers such as Loris may only provide the ability to work with the Image API; Presentation API data and metadata can be stored on any server and image viewers such as Mirador can be configured to retrieve presentation API data.14
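To make the manifest idea concrete, here is a heavily abbreviated sketch of the kind of JSON-LD structure a Presentation API manifest contains, built as a Python dictionary. All URIs and labels are hypothetical, and real manifests carry many more properties than shown here:

```python
import json

# A stripped-down IIIF Presentation API 2.0 manifest: a manifest holds
# descriptive metadata plus sequences of canvases, and each canvas points
# back at an Image API service for the pixels themselves.
manifest = {
    "@context": "http://iiif.io/api/presentation/2/context.json",
    "@id": "http://www.example.org/manifests/abcd1234/manifest.json",
    "@type": "sc:Manifest",
    "label": "Example manuscript",
    "metadata": [{"label": "Repository", "value": "Example Library"}],
    "sequences": [{
        "@type": "sc:Sequence",
        "canvases": [{
            "@type": "sc:Canvas",
            "label": "f. 1r",          # first leaf, recto
            "width": 3000,
            "height": 4000,
            "images": [{
                "@type": "oa:Annotation",
                "resource": {
                    # Image API URI for the full image on this canvas:
                    "@id": "http://www.example.org/imageservice/abcd1234"
                           "/full/full/0/default.jpg",
                    "service": {
                        "@id": "http://www.example.org/imageservice/abcd1234"
                    },
                },
            }],
        }],
    }],
}

# A viewer such as Mirador walks this structure to lay out pages and then
# issues Image API requests against each canvas's service.
print(json.dumps(manifest, indent=2)[:120])
```

The separation matters: the manifest can live on an ordinary web server while the images themselves are served from a dedicated image server elsewhere.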

Why would you need a IIIF Image Server or Viewer?

IIIF servers and their APIs are particularly suited for use by cultural heritage organizations. The ability to use APIs to render high resolution images in the browser efficiently is essential for collections like medieval manuscripts that have very fine details that lower-quality image rendering might obscure. Digital humanities, art, and history scholars who need access to high quality images for their research would be able to zoom, pan, and analyze images very closely.  This sort of analysis can also facilitate collaborative editing of metadata – for example, a separate viewing client could be set up specifically to enable scholars to add metadata, annotations, or translations to documents without necessarily publishing the enhanced data to other repositories.

Example: Biblissima

A nice example of the power of the IIIF Framework is with the Biblissima Mirador demo site. As the project website describes it,

In this demo, the user can consult a number of manuscripts, held by different institutions, in the same interface. In particular, there are several manuscripts from Stanford and Yale, as well as the first example from Gallica and served by Biblissima (BnF Français 1728)….

It is important to note that the images displayed in the viewer do not leave their original repositories; this is one of the fundamental principles of the IIIF initiative. All data (images and associated metadata) remain in their respective repositories and the institutions responsible for them maintain full control over what they choose to share. 15.


The Biblissima Mirador demo site displays images that are gathered from remote repositories via API. In this screenshot, the viewer can select from manuscripts available from Yale, the National Library of Wales, and Harvard.

The approach described by Biblissima represents the increasing shift toward designing repositories to guide users toward linked or related information that may not actually be held by the repository.  While I can certainly anticipate some problems with this approach for some archival collections – injecting objects from other collections might skew the authentic representation of some collections, even if the objects are directly related to each other – this approach might work well to help represent provenance for collections that have been broken up across multiple institutions. Without this kind of architecture, researchers would have to visit and keep track of multiple repositories that contain similar collections or associated objects. Manuscript collections are particularly suited to this kind of approach, where a single manuscript may have been separated into individual leaves held by multiple institutions worldwide – these manuscripts can be digitally re-assembled without requiring institutions to transfer copies of files to multiple repositories.

One challenge we are running into in exploring IIIF is how to incorporate this technology into existing legacy applications that host high resolution images (for example, ContentDM and DSpace).  We wouldn’t necessarily want to build a separate IIIF image server – it would be ideal if we could continue storing our high-resolution images in our existing repositories and pull them together with a IIIF viewer such as Mirador.  There is a Python-based translator to enable ContentDM to serve up images using the IIIF standard16, but I’ve found it difficult to find case studies or step-by-step implementation and troubleshooting information (if you have set up IIIF with ContentDM, I’d love to know about your experience!).  To my knowledge, there is not an existing way to integrate IIIF with DSpace (but again, I would love to stand corrected if there is something out there).  Because IIIF is such a new standard, and legacy applications were not necessarily built to enable this kind of content distribution, it may be some time before legacy digital asset management applications integrate IIIF easily and seamlessly.  Apart from these applications serving up content for use with IIIF viewers, embedding IIIF viewer capabilities into existing applications would be another challenge.

Finally, another challenge is discovering IIIF repositories from which to pull images and content.  Libraries looking to explore supporting IIIF viewers will certainly need to collaborate with content experts, such as archivists, historians, digital humanities and/or art scholars, who may be familiar with external repositories and sources of IIIF content that would be relevant to building coherent collections for IIIF viewers.  Viewers are manually configured to pull in content from repositories, and so any library wanting to support a IIIF viewer will need to locate sources of content and configure the viewer to pull in that content.

Undertaking support for IIIF servers and viewers is not a trivial project, but it can be a way for libraries to expand the visibility and findability of their own high-resolution digital collections (by exposing content through a IIIF-compatible server) or to enable their users to find content related to their collections (by supporting a IIIF viewer). While my library hasn’t determined what exactly our role will be in supporting IIIF technology, we will definitely be taking what we learned from this experience to shape our exploration of emerging digital asset management systems, such as Hydra and Islandora.

More Information

  • IIIF Website: http://search.iiif.io/
  • IIIF Metadata Overview: https://lib.stanford.edu/home/iiif-metadata-overview
  • IIIF Google Group: https://groups.google.com/forum/#!forum/iiif-discuss

Notes

 

  1. https://wiki.duraspace.org/display/hydra/Page+Turners+%3A+The+Landscape
  2.  Tools for Digital Humanities: Implementation of the Mirador high-resolution viewer on Rosetta – Roxanne Wyns, Business Consultant, KU Leuven/LIBIS – Stephan Pauls, Software architect. http://igelu.org/wp-content/uploads/2015/08/5.42-IGeLU2015_5.42_RoxanneWyns_StephanPauls_v1.pptx
  3. “‘Bottled or Tap?’ A Map for Integrating International Image Interoperability Framework (IIIF) into Shared Shelf and Artstor.” D-Lib Magazine, July/August 2015. http://www.dlib.org/dlib/july15/ying/07ying.html
  4. https://blog.archive.org/2015/10/23/zoom-in-to-9-3-million-internet-archive-books-and-images-through-iiif/
  5. Snydman, Stuart, Robert Sanderson, and Tom Cramer. 2015. “The International Image Interoperability Framework (IIIF): A community & technology approach for web-based images.” Archiving Conference 1: 16–21. https://stacks.stanford.edu/file/druid:df650pk4327/2015ARCHIVING_IIIF.pdf
  6. https://github.com/pulibrary/loris
  7. http://github.com/IIIF/mirador
  8.  http://openseadragon.github.io/
  9. http://klokantech.github.io/iiifviewer/
  10.  http://iiif.io/api/image/2.0/#format
  11. http://iiif.io/api/image/2.0/#region
  12. Snydman, Sanderson, and Cramer, The International Image Interoperability Framework (IIIF), 2
  13. http://iiif.io/api/presentation/2.0/#primary-resource-types-1
  14. https://groups.google.com/d/msg/iiif-discuss/F2_-gA6EWjc/2E0B7sIs2hsJ
  15.  http://www.biblissima-condorcet.fr/en/news/interoperable-viewer-prototype-now-online-mirador
  16. https://github.com/IIIF/image-api/tree/master/translators/ContentDM

Low Expectations Distributed: Yet Another Institutional Repository Collection Development Workflow

Anyone who has worked on an institutional repository for even a short time knows that collecting faculty scholarship is not a straightforward process, no matter how nice your workflow looks on paper or how dedicated you are. Keeping expectations for the process manageable (not necessarily low, as in my clickbaity title), and constantly simplifying and automating it, can nevertheless make the process work better. I’ve written before about some ways in which I’ve automated my process for faculty collection development, as well as how I’ve used lightweight project management tools to streamline processes. My newest technique for faculty scholarship collection development brings together pieces of all of those to greatly improve our productivity.

Allocating Your Human and Machine Resources

First, here is the personnel situation we have for the institutional repository I manage. Your own circumstances will certainly vary, but I think institutions of all sizes will have some version of this distribution. I manage our repository as approximately half my position, and I have one graduate student assistant who works about 10-15 hours a week. From week to week we only average about 30-40 hours total to devote to all aspects of the repository, of which faculty collection development is only a part. We have 12 librarians who are liaisons with departments and do the majority of the outreach to faculty and promotion of the repository, but handle only limited, specific parts of the collection development process. While they are certainly welcome to do more, in reality they have so much else to do that it doesn’t make sense for them to spend their time on data entry unless they want to (and some of them do). The breakdown of work is roughly that the liaisons promote the repository to the faculty and answer basic questions; I answer more complex questions, develop procedures, train staff, interpret publishing agreements, and verify metadata; and my GA does the simple research and data entry. From time to time we have additional graduate or undergraduate student help in the form of faculty research assistants, and we have a group of students available for digitization if needed.

Those are our human resources. The tools that we use for the day-to-day work include Digital Measures (our faculty activity system), Excel, OpenRefine, Box, and Asana. I’ll say a bit about what each of these is and how we use them below. By far the most important innovation in our faculty collection development workflow has been integration with the Faculty Activity System, which is how we refer to Digital Measures on our campus. Many colleges and universities have some type of faculty activity system or are in the process of implementing one. These are generally adopted for purposes of annual reports, retention, promotion, and tenure reviews. I have been at two different universities working on adopting such systems, and as you might imagine, it’s a slow process with varying levels of participation across departments. Faculty do not always like these systems for a variety of reasons, and so there may be hesitation to complete profiles even when required. Nevertheless, we felt in the library that this was a great source of faculty publication information that we could use for collection development for the repository and the collection in general.

We now have a required question about including the item in the repository on every item the faculty member enters in the Faculty Activity System. If a faculty member says they published an article, they also have to say whether it should be included in the repository. We started this in late 2014, and it revolutionized our ability to reach faculty and departments who had never participated in the repository before, as well as simplified the lives of faculty who were eager participants but now only had to enter data in one place. Of course, there are still a number of people whom we are missing, but this is part of keeping your expectations low–if you can’t reach everyone, focus your efforts on the people you can. And anyway, we are now so swamped with submissions that we can’t keep up with them, which is a good if unusual problem to have in this realm. Note that the process I describe below is basically the same as when we analyze a faculty member’s CV (which I described in my OpenRefine post), but we spend relatively little time doing that these days since it’s easier for most people to just enter their material in Digital Measures and select that they want to include it in the repository.

The ease of integration between your own institution’s faculty activity system (assuming it exists) and your repository certainly will vary, but in most cases it should be possible for the library to get access to the data. For the Office of Institutional Research or whichever office administers the system, the repository connection is also a great selling point for faculty participation, since it gives faculty a reason to keep their profiles up to date even when they are between review cycles. If your institution does not yet have such a system, you might still discuss a partnership with that office, since your repository may hold extremely useful information for them about research activity of which they are not aware.

The Workflow

We get reports from the Faculty Activity System on roughly a quarterly basis. Faculty member data entry tends to bunch around certain dates, so we focus on end of semesters as the times to get the reports. The reports come by email as Excel files with information about the person, their department, contact information, and the like, as well as information about each publication. We do some initial processing in Excel to clean them up, remove duplicates from prior reports, and remove irrelevant information.  It is amazing how many people see a field like “Journal Title” as a chance to ask a question rather than provide information. We focus our efforts on items that have actually been published, since the vast majority of people have no interest in posting pre-prints and those that do prefer to post them in arXiv or similar. The few people who do know about pre-prints and don’t have a subject archive generally submit their items directly. This is another way to lower expectations of what can be done through the process. I’ve already described how I use OpenRefine for creating reports from faculty CVs using the SHERPA/RoMEO API, and we follow a similar but much simplified process since we already have the data in the correct columns. Of course, following this process doesn’t tell us what we can do with every item. The journal title may be entered incorrectly so the API call didn’t pick it up, or the journal may not be in SHERPA/RoMEO. My graduate student assistant fills in what he is able to determine, and I work on the complex cases. As we are doing this, the Excel spreadsheet is saved in Box so we have the change history tracked and can easily add collaborators.
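For the curious, the lookup OpenRefine does against SHERPA/RoMEO can be sketched in a few lines of Python. This uses the legacy XML API (the api29.php endpoint current as of this writing; a free API key is required past a daily request limit), and the sample response below is trimmed to the single element we care about:

```python
import urllib.parse
import xml.etree.ElementTree as ET

ROMEO_ENDPOINT = "http://www.sherpa.ac.uk/romeo/api29.php"

def build_query_url(journal_title):
    """Build a SHERPA/RoMEO lookup URL for an exact journal-title match."""
    params = {"jtitle": journal_title, "qtype": "exact"}
    return ROMEO_ENDPOINT + "?" + urllib.parse.urlencode(params)

def parse_colour(xml_text):
    """Pull the archiving-policy 'colour' (green, yellow, white, ...)
    out of a RoMEO XML response; returns None if it isn't present."""
    root = ET.fromstring(xml_text)
    colour = root.find(".//romeocolour")
    return colour.text if colour is not None else None

# A trimmed sample of the XML shape the API returns:
sample = """<romeoapi><publishers><publisher>
  <romeocolour>green</romeocolour>
</publisher></publishers></romeoapi>"""
print(parse_colour(sample))  # green
```

In practice we batch these lookups through OpenRefine rather than scripting them, but the moving parts are the same: a journal title goes out, a colour (or nothing) comes back, and the "nothing" cases are exactly what my graduate assistant and I work through by hand.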

Screen Capture from Asana Setup

A view of how we use Asana for managing faculty collection development workflows.

At this point, we are ready to move to Asana, which is a lightweight project management tool ideal for several people working on a group of related projects. Asana is far more fun and easy to work with than Excel spreadsheets, and this helps us work together better to manage workload and see where we are with all our ongoing projects. For each report (or faculty member CV), we create a new project in Asana with several sections. While it doesn’t always happen in practice, in theory each citation is a task that moves between sections as it is completed, and is finally checked off when it is either posted or moved off to some other fate not as glamorous as being archived as open access full text. The sections generally cover posting the publisher’s PDF, contacting publishers, follow-up reminders, posting author’s manuscripts, and posting to SelectedWorks, which is our faculty profile service that is related to our repository but mainly holds citations rather than full text. Again, as part of the low expectations, we focus on posting final PDFs of articles or book chapters. We add books to a faculty book list, and don’t even attempt to include full text for these unless someone wants to make special arrangements with their publisher–this is rare, but again the people who really care make it happen. If we already know that the author’s manuscript is permitted, we don’t add these to Asana, but keep them in the spreadsheet until we are ready for them.

We contact publishers in batches, trying to group citations by journal and publisher to increase efficiency so we can send one letter to cover many articles or chapters. We note to follow up with a reminder in one month, and then again a month after that. Usually the second notice is enough to catch the attention of the publisher. As they respond, we move each citation to the publisher’s PDF section, the author’s manuscript section, or, if posting isn’t permitted at all, the SelectedWorks section. While we’ve tried several different procedures, I’ve determined it’s best for the liaison librarians to ask just for authors’ accepted manuscripts, and only after we’ve verified that no other version may be posted. And if we don’t ever get them, we don’t worry about it too much.

Conclusion

I hope you’ve gotten some ideas from this post about your own procedures and new tools you might try. Even more, I hope you’ll think about which pieces of your procedures are really working for you, and discard those that aren’t working any more. Your own situation will dictate which those are, but let’s all stop beating ourselves up about not achieving perfection. Make sure to let your repository stakeholders know what works and what doesn’t, and if something that isn’t working is still important, work collaboratively to figure out a way around that obstacle. That type of collaboration is what led to our partnership with the Office of Institutional Research to use the Digital Measures platform for our collection development, and that in turn has led to other collaborative opportunities.

 


African Art Pedagogy: A Hypertexted Journey


AAP Exhibit, installation view, CCA’s Meyer Library, Oakland, CA

In early fall, our Instructional Designer, Bobby White, who is based in the Library, brought a potential Digital Scholarship project to my attention, a Library exhibit idea with both a digital and physical component. In this article I’ll talk about the idea behind the project, our process, the technology utilized, and our reflections after we completed the exhibit.

Leslie Townsend, faculty member in the Visual Studies and Writing and Literature programs, approached the Libraries with the idea to share her pedagogy for an African Art survey course (part of the Visual Studies Program) with other faculty and the greater CCA community. In addition to displaying material artifacts, she was interested in linking the many types of digital artifacts from the course–images, videos, texts, student assignments, reference materials, syllabus–into an integrated digital display. Bobby had recently seen student work from faculty member Rebekah Edwards’ literature class using the software Twine and suggested Twine for this project. Twine is an open-source tool for telling interactive, nonlinear stories, and is also used for narrative games. Like a labyrinth, it allows viewers to choose different and multiple paths to travel through a particular story. Twine offered a unique way to open up Leslie’s African Art course to reveal layered perspectives–instructor, student, assessment, and reflection–and reveal complex interactions unavailable in a traditional 2-D format. The African Art Pedagogy Exhibit was to be the Libraries’ first Digital Scholarship project and our first exhibit utilizing an iPad.

Bobby and I set about learning Twine, and began a series of weekly meetings with Leslie to discuss the content and structure of the Twine, as well as to curate a selection of objects and books related to her course. We had already determined that we would use an iPad to display the Twine; the Libraries had purchased several iPads in the past year or so, and we have been interested in deploying them for a variety of purposes, including displays. I began researching a display stand for the iPad, and eventually settled on an iPad floor stand from a website called Displays2Go, which specializes in marketing displays. The criteria included a locking case, cable management, a rotating bracket to allow flexibility in display, a fit for the iPad Air, a hidden home button (to keep users from navigating away from the exhibit), a relatively low price, and, last but not least, pleasing aesthetics. When it came time to install, we also utilized the iPad’s “Guided Access” feature, which keeps users in the app.

As for Twine, we discovered there are currently two versions; we chose the newest version (version 2) for what seemed like obvious reasons — newer versions tend to be better supported and offer new features. But in the case of Twine, the new version represents a renewed focus on text, and a move away from the easy image integration that version 1 offers. Adding images and embedded videos was important to this project, to give viewers direct contact with selected course materials. We were able to work with version 2, but it required additional research. For a future project, we would look more closely at Twine 1 and consider using it instead.
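For anyone attempting the same thing: the workaround we relied on is that Twine 2 passages accept raw HTML alongside Twine markup, so images and videos can be embedded with ordinary tags. The file path and video URL below are placeholders, not our actual assets:

```html
<!-- Inside a Twine 2 passage: raw HTML works alongside Twine markup. -->
<img src="images/course-slide.jpg"
     alt="Object image from the course materials" width="400">

<!-- Embedded video, using the host's standard iframe embed code: -->
<iframe src="https://player.vimeo.com/video/00000000"
        width="500" height="281" allowfullscreen></iframe>
```

Since a published Twine is a single HTML file, relative paths like the one above resolve against wherever that file is hosted, which is why the hosting question mattered so much for us.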

The goals we developed going into the project were to

  • Design an integrated physical and digital pedagogy exhibition in the Library
  • Test Twine’s application in an academic environment
  • Share Leslie’s pedagogical process with colleagues
  • Offer an experience of her African Art course to a range of viewers in the Library: students, faculty, staff, visitors
  • Enable Leslie to continue to develop the Twine after the exhibition
  • Explore options and issues with sharing the Twine outside the Library once the exhibition ended

The three of us then began to work as a team, and in short order defined our roles — a key component of a successful collaboration, and one that made it easy and enjoyable to work together. These were based on our individual expertise: Leslie focused on providing the content and on the flow of the narrative; Bobby focused on Twine and pedagogy development; and I assumed the project management hat, as well as Twine development.

Neither Bobby nor I have a background in African Art, so one of our initial tasks was to get to know Leslie’s curriculum, both through her syllabus and in conversation with her. We defined the content areas for our Twine — syllabus, student work, teaching philosophy/learning outcomes, and resources — and created a structure for storing and sharing materials in Google Drive, which our campus uses. At this point we began to re-imagine the course as an exhibit: the content areas would become four paths in Twine that intermingle and connect, depending on the choices a visitor makes. The content areas are: Curriculum Guide, Students in Action, Teaching Philosophy and Learning Outcomes, and Experience African Art. I built a timeline with milestones, working backward from the exhibition date, and we scheduled weekly working meetings (initially two-hour blocks, though toward the end of the project we had a few full-day working sessions). In addition to our weekly meetings, Leslie spent additional time pulling together coursework, and Bobby and I spent time researching Twine and implementation questions. But it was difficult to properly estimate the amount of time we needed, especially since we were engaged in multiple new tasks: learning a new open-source tool, figuring out how to host the completed work, and turning a course into an open narrative. Bobby reflected after the fact that this type of scenario will most likely repeat itself, as part of what we do in the Libraries now is engage with new technologies. Leslie observed that she could imagine another project in the future taking place over a longer period of time, perhaps over a semester and a summer, as we spent many hours toward the end of the project, and could easily have spent more.

Once we’d identified works for inclusion, we had a variety of media to organize: electronic documents, links to embedded videos, and physical objects. We categorized works into proper folders, selected physical objects to scan or photograph, and hashed out the best way to present the material, to tell the story the course suggested to us. It was a fully collaborative process, which was one of its joys. One of the challenges we struggled with was whether we should map the story out in advance or whether we could build it once we’d added all the ‘raw’ material into Twine. Twine’s premise is simple: create a nonlinear story easily and quickly, by creating passages. At its most basic, each passage contains some text and a link or links embedded anywhere within the text to go to another part of the story. Images and multimedia can also be embedded within passages. When building a Twine, one works in a map where you can see all of the passages you’ve created and how they’re linked to one another. It’s a great feature, to be able to have a bird’s-eye view; one navigates back and forth between the editor view of the passage, a preview of the passage/s, and the map of the whole story. We settled on getting all of our content into passages in Twine and then connecting them into multiple narratives, which we thought would allow us to better see the possibilities that the Twine format offered.
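To make the passage mechanics concrete, here is roughly the shape of a passage in Twine 2’s default markup. The text and passage names below are invented for illustration rather than copied from the exhibit (though Students in Action and Experience African Art were two of our real paths); each double-bracketed link shows up as an arrow on the story map:

```
Week 3: Royal Arts

This week the course examines royal arts and the museum
histories that dispersed them.

[[Continue through the syllabus->Week 4]]
[[See student responses to this unit->Students in Action]]
[[View the objects discussed->Experience African Art]]
```

Getting every piece of content into passages first, and only then drawing these links, is what let us see the whole map before committing to particular paths.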


African Art Pedagogy Exhibit, Twine map screenshot

Simultaneously, Bobby and I began researching where we might host the finished work. A site called philome.la publishes text-only Twines for free, though if you want to include locally stored images or other media, and/or if you have any privacy concerns, it’s not the place to host your Twine. We also looked into using Google Drive and Dropbox as hosting sites, but both services have now made it very hard if not impossible to use them that way. Our solution: we requested a slice of space on one of our Educational Technology Service’s web servers. This turned out to be ideal, as we now have a space to host future digital scholarship projects. We still have to grapple with some rights issues for the site: we digitized a few images from books that Leslie uses in her course, which we believe falls under fair use when shown only in the library, but would most likely not be considered fair use were we to share the site publicly, as we could not control who would see the images or what they might do with them. The nature of the digital portion of the exhibit presents opportunities beyond the library exhibit dates, a complicated but exciting aspect of the project. Stay tuned.

Gradually we built out our content into passages and connected them into a variety of paths for viewers to choose from: we broke up the syllabus into individual passages, with links forward through the syllabus and links to key course materials, which in turn might take the viewer to other course materials; the Students in Action section comprises two assignments, with introductions by Leslie, which offer insight into students’ interactions with the materials and learning: an introduction to the geography of the continent, and excerpts from a few student papers; Teaching Philosophy and Learning Outcomes offers Leslie a way to frame and share her thinking about the course, one of the most valuable parts of the exercise; lastly, Experience African Art shares a selection of curated, visual course materials, with explications. A map of the continent of Africa is the unofficial hub of the story, as many links across sections radiate to and from it.


Screenshot from AAP, Teaching Philosophy

Physical objects chosen for display were related to images and text in the Twine, and gave the exhibition a tactile presence that nicely complemented the digital, while increasing the overall visibility of the exhibition. The Libraries’ Assistant Curator (a work-study position), Hannah Novillo-Erickson, worked with Leslie and me on the exhibit installation, another nice point of collaboration.

Overall, we consider the African Art Pedagogy exhibit1 (link to Twine) a successful undertaking. The opportunity to work in-depth with both the Instructional Designer and a faculty member was an invaluable, rich, learning experience. It required a significant time investment, but, having lived through it, the Instructional Designer and I now have a ballpark figure to work with going forward, as well as ideas about how to manage and possibly reduce the time outlay. We found examples of writing composition and writers employing Twine, but we did not find any examples of projects similar to ours, which is kind of exciting. The technology, though easy, still demanded respect in terms of a learning curve, both conceptual and technological. I consider our Twine to be more of a first iteration; I wish I had more time to refine it, now that I better understand its potential in relation to our subject matter. Leslie observed that it showed her relationships and things she could do with the pedagogy that she hadn’t seen previously. She couldn’t imagine how she would do something like this on her own; I assured her that facilitating these types of projects is one of the goals of the new Digital Scholarship position.

Lisa Conrad is the Digital Scholarship Librarian at California College of the Arts, in the San Francisco Bay Area. She received an MFA in Visual Arts from the University of Illinois at Chicago’s School of Art and Art History, and an MLIS from San Jose State University’s School of Library and Information Science. Images from her art work 4 1/2 feet can be seen at fourandahalffeet.

Notes

  1. For educational purposes only; no re-use of any of the images in the Twine is permitted.

#1Lib1Ref

A few of us at Tech Connect participated in the #1Lib1Ref campaign that’s running from January 15th to the 23rd. What’s #1Lib1Ref? It’s a campaign to encourage librarians to get involved with improving Wikipedia, specifically by citation chasing (one of my favorite pastimes!). From the project’s description:

Imagine a World where Every Librarian Added One More Reference to Wikipedia.
Wikipedia is a first stop for researchers: let’s make it better! Your goal today is to add one reference to Wikipedia! Any citation to a reliable source is a benefit to Wikipedia readers worldwide. When you add the reference to the article, make sure to include the hashtag #1Lib1Ref in the edit summary so that we can track participation.
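In wikitext terms, “adding one reference” usually means wrapping a citation template in ref tags right after the sentence it supports. The article text and source below are invented for illustration, but the markup is the standard pattern; the footnote renders wherever the article already has a {{reflist}} or <references /> section:

```
The library's main building was renovated in 2012.<ref>{{cite web
 |url=http://www.example.edu/news/renovation
 |title=Library renovation complete
 |publisher=Example University
 |date=2012-09-14
 |accessdate=2016-01-20}}</ref>
```

And remember the campaign’s one extra step: put #1Lib1Ref in the edit summary so the edit gets counted.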

Below, we each describe our experiences editing Wikipedia. Did you participate in #1Lib1Ref, too? Let us know in the comments or join the conversation on Twitter!


 

I recorded a short screencast of me adding a citation to the Darbhanga article.

— Eric Phetteplace


 

I used the Citation Hunt tool to find an article that needed a citation. I selected the second one I found, which was about urinary tract infections in space missions. That is very much up my alley. I discovered after a quick Google search that the paragraph in question was plagiarized from a book on Google Books! After a hunt through the Wikipedia policy on quotations, I decided to rewrite the paragraph to paraphrase the quote, and then added my citation. As is usual with plagiarism, the flow was wrong, since there was a reference to a theme in the previous paragraph of the book that wasn’t present in the Wikipedia article, so I chose to remove that entirely. The Wikipedia Citation Tool for Google Books was very helpful in automatically generating an acceptable citation for the appropriate page. Here’s my shiny new paragraph, complete with citation: https://en.wikipedia.org/wiki/Astronautical_hygiene#Microbial_hazards_in_space.

— Margaret Heller


 

I edited the “Library Facilities” section of the “University of Maryland Baltimore” article in Wikipedia. There was an outdated link in the existing citation, and I also wanted to add two additional sentences and citations. You can see how I went about doing this in my screen recording below. I used the “edit source” option to pull the source into a text editor first and made all the changes I wanted in advance. After that, I copied and pasted the changes from my text file into the Wikipedia page I was editing. Then I previewed and saved the page. You can see that I also had a typo in my text and had to fix it again to make the citation display correctly, so I had to edit the article more than once. After my recording, I noticed another typo in there, which I fixed using the “edit” option. The “edit” option is much easier to use than the “edit source” option for those who are not familiar with editing wiki pages. It offers a menu bar at the top with several convenient options.


The menu bar for the “edit” option in Wikipedia

The recording of editing a Wikipedia article:

— Bohyun Kim


 

It has been so long since I’ve edited anything on Wikipedia that I had to make a new account and read the “how to add a reference” link, which is to say: if I could do it in 30 minutes while on vacation, anyone can. There is a WYSIWYG option for the editing interface, but I learned to do all this in plain text and it’s still the easiest way for me to edit. See the screenshot below for a view of the HTML editor.

I wondered which entry I would add a citation to… there have been so many that I’ve come across, but now I was drawing a total blank. Happily, the 1Lib1Ref campaign gave some suggestions, including “Provinces of Afghanistan.” Since this is my fatherland, I thought it would be a good service to dive into. Many of Afghanistan’s citations are hard to provide for a multitude of reasons. A lot of our history has been an oral tradition. Also, not insignificantly, Afghanistan has been in conflict for a very long time, with much of its history captured through the lens of Great Game participants like England or Russia. Primary sources from the 20th century are difficult to come by because of the state of war from 1979 onwards, and there are not many digitization efforts underway to capture what is available (shout out to NYU and the Afghanistan Digital Library project).

Once I found a source that I thought would be an appropriate reference for a statement on the topography of Uruzgan Province, I did need to edit the sentence to remove the numeric values that had been written, since I could not find a source that quantified the area. It’s not a precise entry, to be honest, but it does link to a good map with further opportunities to find information related to Afghanistan’s agriculture. I also wanted to choose something relatively uncontroversial, like geographical features rather than historical or person-based topics, for this particular campaign.

— Yasmeen Shorish


Edited area delineated by red box.