My #HuntLibrary: Using Instagram to Crowdsource the Story of a New Library

[Updated to reflect open-source availability. May 13th, 2013]

North Carolina State University opened the James B. Hunt Jr. Library in January of 2013, creating a heart for our Centennial Campus that defines the research library of the future. My #HuntLibrary was created as a platform to foster student and community engagement with the new building via social media imagery and to preserve and archive these images as part of the record of the Hunt Library launch. My #HuntLibrary is a Ruby on Rails application that harvests images from Instagram and provides several browsing views, mechanisms for sharing, tools for users to select their favorite images, an administrative interface for moderating images, and a system for harvesting images for inclusion in the NCSU Libraries digital archives. Built according to the principles of “responsive design,” My #HuntLibrary is usable on mobile devices, tablets, desktops, e-boards, and the massive MicroTiles displays in the Hunt Library.

In the three months since the launch of My #HuntLibrary (coinciding with the opening of the Hunt Library building), we received nearly 1700 images from over 600 different users, over 6800 “like” votes, and over 53,000 “battle” votes. This post will detail some of the risks involved with the project, the technical challenges we faced, how student engagement strengthened the project, and the potential benefits of giving students and community members a voice in the documentation of the Hunt Library.

The code that drives My #HuntLibrary has been extracted into an open-source Rails Engine called “lentil” that is available on GitHub.

My #HuntLibrary front page
Planning for Risk

Most projects carry some level of risk and My #HuntLibrary was no different. It was difficult to predict the level of engagement we would be able to achieve with various application features. The timeline for development was short, carried a firm deadline (due to the need to launch alongside the new library), and included work with several technologies that were new to the development team. Additionally, the application relied on a third-party API that could change at any time. In order to mitigate project risks, we structured the project around goals with short and long (and more speculative) timelines that would each individually justify the project effort.

  1. Utilize social media to increase engagement with a new library

Social media engagement with students was a linchpin of our opening strategy. Before the Hunt Library came online, NC State students already had a high degree of ownership over existing Libraries spaces and we sought to extend that to our new library. My #HuntLibrary could contribute to that sense of ownership by providing a platform for users of the new library to document and share their experience, learn about the experiences of their peers, and to collectively curate the images using voting tools. Furthermore, My #HuntLibrary is an opportunity for staff to learn about important and unexpected uses of the building during the critical post-launch period.

  2. Provide a mechanism for students to contribute to digital collections

We felt that the Hunt Library opening could be an opportunity for students to add their voices to the documentation of campus history that is preserved in our extensive digital collections. My #HuntLibrary could also allow us to leverage social technologies to diversify the perspectives reflected in our archival collections. This is our first major social media preservation effort and we hope that our project, along with others (such as GWU Libraries’ Social Feed Manager, or the State of North Carolina’s Social Media Archive), may begin to contribute possible answers to several questions related to social media archives, including:

  • Can we utilize existing streams of social media content to integrate additional student perspectives in our documentation of the history of their university? Does this enhance our special collections?

  • Can an invitation to participate in core library practices, such as the development of special collections, serve as an effective engagement strategy?

  • What is the research value of social media collections? How does this value vary based on media, users, and harvesting methods?

  3. Explore new technologies

The developers involved with the project created a support structure that included pair programming, code reviews, and tutorial sessions that mitigated many of the technical risks, including the integration of new software frameworks and libraries and the coordination of work on a tight schedule. This project also provided an opportunity to learn more about the design of interfaces for the large-scale displays described later in this article.

Student Engagement

Although we knew it would be possible to utilize the Instagram API to collect and display photographs about the Hunt Library, we needed to have a reasonable expectation that people (and students in particular) would participate in the project. This question hinged on the likelihood that a person would tag a photograph of the new library with a hashtag that would allow us to capture it. The Libraries had previous experience trying to engage students through Twitter around the question “What are you doing in the library right now?” We looked back on that project’s limitations to inform our engagement strategy. The chosen hashtag (#whyncsulib) was unique, but in order to answer our question, students had to be aware of the hashtag and willing to deviate somewhat from their normal social media communication patterns. However, we found that it was already common for students to use the tag #DHHill to visually depict their activities in our D. H. Hill Library on Instagram. 

Example #DHHill Instagram images

Assuming that students would continue this tagging behavior at the new library, we chose the hashtag “#HuntLibrary” in hopes that it would see widespread adoption regardless of the level of awareness of our project.

As we began to design the application and develop a social media plan, another milestone in the project came with the opportunity to present the idea to actual students. The NCSU Libraries Student Advisory Board is charged with providing guidance and input on the programs and services the Libraries offers. This regular open meeting (fueled by free food) allowed us to collect feedback on specific questions about the project (e.g. do students want images to “Battle?”). The feedback from this presentation varied widely, from useful (e.g. roughly two-thirds of the students present had Instagram installed on their phones and yes, they want to battle) to unsanctionable (“If you want cute photographs you should let us bring cats into the library”). However, the general reaction from the students was that it seemed like a good idea, and we continued work with increased confidence.

The Student Advisory Board meeting also led to another breakthrough: our Director’s commitment of funds to award an iPad Mini to the photographer of the best image. Prior to the Advisory Board meeting, our only participation incentive was an assurance that the best photographs would be ingested into the University’s permanent digital archives. While this is a thrilling idea to a roomful of librarians, we were uncertain that students would have the same reaction. Perhaps unsurprisingly, when our Director asked the gathered students if they would take more pictures if there were an iPad Mini at stake, the students were unanimous in their response. Although we later learned in usability tests that students reacted very positively to the idea of contributing to their University’s story, the tablet prize gave the project a focal point, and the contest became the cornerstone of our student engagement strategy.

Display Technology

The NCSU Libraries’ vision is to be NC State’s competitive advantage. This vision is often operationalized by putting cutting-edge technology in the hands of our students and faculty. For the Hunt Library, we made a strategic investment in large-scale, architecturally integrated visualization spaces such as ultra-high definition video walls and virtual environment studios. These visualization spaces serve as large canvases to reflect the research and activities of our campus in new interactive ways. The Hunt Library is, in short, a storytelling building.

We anticipated that My #HuntLibrary would produce a visually compelling record of the new library, and so we chose to display the photographic activity in one of the library’s most accessible visualization spaces: the iPearl Immersion Theater. The Theater features a curved video wall that is twenty-one feet wide and seven feet tall. The wall uses Christie MicroTiles, a modular display system based on LED and DLP technologies that gives the wall an effective resolution of 6824 pixels by 2240 pixels. MicroTiles feature high color saturation and a wide color spectrum, making them ideal for Instagram photographs of the colorful library. A key part of the technology behind the MicroTiles is a Christie Vista Spyder. The Spyder is a hardware-based video processor that allows for 12-bit scaling. This upsampling capability was important for our application, as it allowed small (612 pixels square) images to be enlarged to two-foot images in the Theater with very few noticeable compression artifacts. 

Viewing My #HuntLibrary in the Immersion Theater. Photo by Instagram user crmelvin14.

As a public, physical space, the iPearl Immersion Theater allowed us to create embodied and shared user experiences that were fundamentally different from the web and mobile views of My #HuntLibrary. The Theater is a semi-open space near the entrance to the library, adjacent to an expansive reading lounge. The video wall installation had an attractive presence that invited passers-by inside to examine the images. Once inside the Theater, the content could be appreciated more fully by moving around in the space. Standing close to the wall enabled the user to see more detail about a particular photograph while moving farther away gave an impressionistic sense of the library’s spaces. While dwell times for the installation were sometimes low because users often dropped in for a moment before heading to their intended destination, seating in the Theater allowed for a more leisurely viewing experience as new photographs rotated into the display. Small groups of people gathered in the Theater to discuss the merits of their favorite photographs, point out their own photographs to their friends, and engage in conversations with strangers about the images.

Responsive Web Design

With the large MicroTiles displays in the Hunt Library, we now face the challenge of designing for very small (mobile device) and very large displays, and many sizes in between (tablets, laptops, desktops, e-boards). The growing popularity of responsive web design techniques has helped developers and designers meet the challenge of building applications that work well on a wide range of device screen sizes. Responsive web design generally means using a combination of CSS3 media queries, fluid grids, and flexible images to progressively enhance a single web design for optimal display and use on a wide range of screen sizes and devices (Marcotte 2010). Most of the discussion of responsive design centers on building for devices ranging from phone-sized to desktop-sized displays. However, there is no technical reason why responsive design cannot work for even wider ranges of display sizes.

Our final design for My #HuntLibrary includes two different responsive designs: one supports mouse and touch interactions on displays ranging from phones to desktops, and the other supports non-interactive public display of the photographs on screens ranging from large e-boards to Christie MicroTiles arrays more than twenty feet wide. Our decision to build two responsive designs for the smaller and larger sets of displays has more to do with the contexts in which these displays are used (personal, interactive devices versus public, non-interactive displays) than with any technical limitation imposed by responsive web design techniques. The design for phones, tablets, and laptop and desktop computers has features to support interactive browsing, sharing photos, and a photo competition “Battle View” for people to compare sets of images and pick their favorites. These features would not translate well to the Libraries’ larger public displays, which are, for now, mostly non-interactive, so it made sense to develop a separate view optimized for non-interactive display of the My #HuntLibrary photos. For the e-board-sized and larger public displays we developed a grid of images that are periodically replaced by new images, a few at a time.

Mobile view of My #HuntLibrary.
My #HuntLibrary on Christie MicroTiles in the Immersion Theater.
Collecting Social Media

Although the initial development push was heavily focused on the core data management and display infrastructure, the longer-term goal of content preservation (for the sake of historical documentation rather than personal archives) influenced most aspects of the project. In particular, we have attempted and are continuing to address three major preservation-related themes: harvesting, crowdsourced curation, and legal clearance.

For short-term use of the images, we harvest only the metadata, leaving the images on the Instagram servers. Clearly, for long-term preservation we would need to collect the images themselves. This harvesting is complicated by the necessity to declare an arbitrary “break” from the source materials, at which point any changes to the metadata (or removal of the images) would not be reflected by our system. We are currently developing a milestone-based harvesting schedule that takes into account both the length of time the image is in the system and the submission of a donor agreement.

While we are currently planning on collecting all “#huntlibrary” images, we are very interested in the potential to allow our users to influence the selection process for certain parts of our archival collection. In order to test and support this goal, we developed two voting tools: individual image “like” voting and this-or-that “battle” voting. Our hope (which early usage metrics seem to support) is that we could use the data from these tools to select images for preservation, or at least to promote a subset of preserved images, that reflect the interests of our community. In addition to improving our selection processes, this may be an opportunity to promote archival selection as a student engagement tool by promoting opportunities for students to influence the historical record of their own experiences.

Image battle interface.

Finally, we worked with a lawyer and copyright specialist at our library to develop a donor agreement that was short and clear enough to be submitted as a comment on an image. Instagram users retain rights to their own images and thus the ability to grant the limited rights that we are requesting. Furthermore, the use of the Instagram comment system will allow us to automate this process, provided that we are responsive to takedown requests.


As noted above, in the first three months since launch we received nearly 1700 images from over 600 different users, over 6800 “like” votes, and over 53,000 “battle” votes. In addition to these measures of user contributions (of either images or vote-based reviews), My #HuntLibrary recorded 135,908 pageviews from 10,421 unique visitors (according to Google Analytics) during this period. Furthermore, the project was regularly cited by students, staff, and institutional partners on social media channels, and was featured (with an emphasis on historical documentation) during the Hunt Library Dedication events.

The evaluation of the archival components of this application will take place on a longer timeline. We are currently extending the long-term content harvesting features in order to support these activities in a more automated way. We have received several indications of the value of pursuing image preservation features, including surprisingly enthusiastic reactions to questions about the preservation of images from students taking part in a My #HuntLibrary user study. As a particularly encouraging example, when an undergraduate student contributor to My #HuntLibrary was asked “How would you feel if one of your Instagram photos were selected by the library to be kept as a permanent record of what students did in 2013?” they responded, “I would be so excited. For me, I think it would be better than winning an iPad.”

About our guest authors:

Jason Casden is the Lead Librarian for the Digital Services Development group at the North Carolina State University Libraries, where he helps to develop and implement scalable digital library applications. He is the project manager and a software developer for “My #HuntLibrary,” and has served as a project or technical lead for projects including the Suma physical space and service usage assessment toolkit, the WolfWalk geo-enhanced mobile historical guide, and Library Course Tools.

Mike Nutt is a Fellow at NCSU Libraries, where he leads a strategic initiative called “Networked Library: Marketing the 21st Century Library.” He is the product lead for My #HuntLibrary, and also facilitates content strategies for the large video walls in NC State’s new Hunt Library. He founded the University of North Carolina at Chapel Hill student group Carolina Digital Story Lab and was a research assistant at the UNC-CH Carolina Digital Library and Archives.

Cory Lown is Digital Technologies Development Librarian at North Carolina State University Libraries where he works collaboratively to design and develop applications to improve end-user resource discovery and use of library services. He has contributed as a developer and/or interface designer to a number of projects, including My #HuntLibrary, WolfWalk, QuickSearch, and the latest version of the library’s mobile website.

Bret Davidson is currently a Digital Technologies Development Librarian at the North Carolina State University Libraries. Previously, Bret worked as an NCSU Libraries Fellow on visualization tools and resources to support the new James B. Hunt, Jr. Library. Prior to becoming a librarian, Bret was a music educator in the public schools of Pennsylvania and Illinois, as well as a performing musician with the River City Brass Band in Pittsburgh, PA.

Adventures with Raspberry Pi: A Librarian’s Introduction

A Raspberry Pi computer (image credit: Wikimedia Commons)

Raspberry Pi, a $35 fully-functional desktop computer about the size of a credit card, is currently enjoying a high level of buzz, popularity, and media exposure. Librarians are, of course, also getting in on the action. I have been working with a Raspberry Pi as a low-power web server for a project delivering media-rich web content for museum exhibits in places without access to the internet. I’ve found working with the little Linux machine to be a lot of fun and I’m very excited about doing more with Raspberry Pi. However, as with many things librarians get excited about, it can be difficult to see through the enthusiasm to the core of the issue. Is the appeal of these cute little computers universal or niche? Do I need a Raspberry Pi in order to offer core services to my patrons? In other words: do we all need to run out and buy a Raspberry Pi, are they of interest only to a certain niche of librarians, or is Raspberry Pi just the next library technology fad, soon to go the way of offering reference service in Second Life?1 To help answer this question, I’d like to take a moment to explain what a Raspberry Pi is, speculate about who will be interested in one, provide examples of some library projects that use Raspberry Pi, and offer a shopping list for those who want to get started.

What is Raspberry Pi

From the Raspberry Pi FAQ:

The Raspberry Pi is a credit-card sized computer that plugs into your TV and a keyboard. It’s a capable little PC which can be used for many of the things that your desktop PC does, like spreadsheets, word-processing and games. It also plays high-definition video. We want to see it being used by kids all over the world to learn programming.

This description from Raspberry Pi covers the basics. (H2G2 has a more detailed history of the project.) A Raspberry Pi (also known as a Raspi, or an RPi) is a small and inexpensive computer designed to extend technology education to young students who don’t currently have access to more expensive traditional computers. The Raspberry Pi project counteracts a movement away from general-purpose computing devices and toward internet appliances and mobile devices. The Pew Internet and American Life Project notes that “smartphone owners, young adults, minorities, those with no college experience, and those with lower household income levels are more likely than other groups to say that their phone is their main source of internet access.”2 Access to the internet today is pervasive and less expensive than ever before, but it is also more likely to come from an appliance or mobile device, without the programming tools and command-line control that were standard for previous generations of computer users. This means a smaller percentage of computer users are likely to pick up these skills on their own. Raspberry Pi offers a very low-cost solution to this lack of access to programming and command-line tools.

In addition to the stated goal of the Raspberry Pi organization, a lot of adults who already have access to technology are also very excited about the possibilities enabled by the small and cheap computing platform. What sets the Raspberry Pi apart from other computers is its combination of small size and low price. While you can do very similar things with a re-purposed or salvaged computer system running Linux, the Raspberry Pi is much smaller and uses less power. Similarly, you can do some of these things with a similarly-sized smart-phone, but those are much more expensive than a Raspberry Pi. For the technology hobbyist and amateur mad scientist, the Raspberry Pi seems to hit a sweet spot on both physical size and cost of entry.

The heart of the Raspberry Pi (or RPi) Model B is a Broadcom system-on-a-chip that includes a 700 MHz ARM processor, 512 MB of RAM, USB and Ethernet controllers, and a graphics processor capable of HD resolutions. According to the FAQ, its real-world performance is on par with a first-generation Xbox or a 300 MHz Pentium II computer. In my personal experience it is powerful enough for typical web browsing tasks or to host a WordPress-based web site. Raspberry Pi devices also come with a GPIO (general purpose input and output) port, which enables an RPi to control electronic circuits. This makes the RPi a very flexible tool, but it doesn’t quite provide the full functionality of an Arduino or similar micro-controller3.

Out of the box, a Raspberry Pi requires some extra equipment to get up and running. There is a shopping list at the bottom of this article that contains known working parts. If you keep boxes of spare parts and accessories around, just in case, you likely already have some of them. In addition to a $35 Raspberry Pi Model B computer, you will definitely need an SD card with at least 4 GB of storage and a 5-volt, 1-amp (minimum) micro-USB power supply. An extra cell phone charger looks like the right part but probably does not put out the minimum amperage to run an RPi; a tablet charger likely will. You can read the fine print on the ‘wall wart’ part of the charger for its amperage rating. If you want to use your Raspberry Pi as a workstation4, you’ll also need an HDMI cable, a digital monitor, and a USB keyboard and mouse. Any USB keyboard or mouse will work, but the monitor will need to have an HDMI input.5 Additionally, you may want a USB wifi adapter to connect to wireless networks, and since the Raspberry Pi has only two USB ports, you may also want a powered USB hub so you can connect more peripherals. The Raspberry Pi ships as a bare board, so you may want to keep your RPi in a case to protect it from rough handling.

Who is the Raspberry Pi for?

Now that we’ve covered what kind of kit is needed to get started, we can ask: are you the kind of librarian who is likely to be interested in a Raspberry Pi? I’ve noticed some “enthusiasm fatigue” out there: librarians who are weary of overhyped tools that don’t provide the promised revolution. I love my Raspberry Pi units, but I don’t think they have universal appeal, so I’ve made a little quiz that may help you decide whether you are ready to order one today or pass on the fad, for now.

  1. Are you excited to work in a Linux operating system?
  2. Are you willing to use trial and error to discover the configuration that is just right for your needs?
  3. Do you enjoy the challenge of solving a living problem more than the security of a well-polished system?

If the answer to all three of these questions is an enthusiastic YES, then you are just the kind of librarian who will love experimenting with a Raspberry Pi. If your enthusiasm is more tempered, or if you answered no to one or more of the questions, then it is not likely that a Raspberry Pi will meet your immediate needs. RPis are projects, not products. They make great prototypes or test boxes, but they aren’t really a turn-key solution to any existing large-scale library problems. Not every library or librarian needs a Raspberry Pi, but I think a significant number of geeky and DIY librarians will be left asking: “Where have you been all my life?”

If you are a librarian looking to learn Linux, programming, or server administration, and you’d rather do this on a cheap dedicated machine than on your work machine, Raspberry Pi is going to make your day. If you want to learn how to install and configure something like WordPress or Drupal and you don’t have a web server to practice on (and local AMP tools aren’t what you are looking for), a Raspberry Pi is an ideal tool to develop that part of your professional skill set. If you want to learn to code, learn robotics, or build DIY projects, then you’ll love Raspberry Pi. RPis are great for learning more about computers, networks, and coding. They are very educational, but at the end of the day they fall a bit more on the hobby end of the spectrum than on the professional product end.

Raspberry Pi Projects for Librarians

So, if you’ve taken the quiz and are still interested in getting started with Raspberry Pi, there are a few good starting points. If you prefer printed books, O’Reilly Media’s Getting Started with Raspberry Pi is fantastic. On the web, I’ve found the Raspberry Pi wiki at elinux.org to be an indispensable resource, and its list of tutorials is worth a special mention. Adafruit (an electronics kit vendor and education provider) also has some very good tutorials and project guides. For library-specific projects, I have three suggestions, but there are many directions you may want to go with your Raspberry Pi. I’m currently working to set mine up as a web server for local content, so museums can offer rich interpretive media about their exhibits without having to supply free broadband to the public. When this is finished, I’m going to build projects two and three.

  • Project One: Get your RPi set up.

This is the out-of-the-box experience and will take you through the setup of your hardware and software. The Raspberry Pi site has a basic getting started guide, Adafruit has a more detailed walkthrough, and there are some good YouTube tutorials as well. In this project you’ll download the operating system for your Raspberry Pi, transfer it to your SD card, boot up your machine, and perform the first-time setup. Once your device is up and running you can spend some time familiarizing yourself with it and getting comfortable with the operating system.
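The transfer-to-SD-card step above is usually done with `dd`. As a hedged sketch: the image filename below is a placeholder, and because the real target (`/dev/sdX`) varies from machine to machine and writing to the wrong device is destructive, the demo uses an ordinary file as a stand-in for the card so the steps can be exercised safely.

```shell
# Flashing an OS image to an SD card is typically one (destructive!) command:
#   sudo dd if=raspbian.img of=/dev/sdX bs=4M && sync
# The device name /dev/sdX is a placeholder; confirm yours (e.g. with
# `lsblk` or `df -h`) before running dd against real hardware.

# Safe demo: create a small stand-in "image" and treat a plain file as the card.
dd if=/dev/urandom of=raspbian-demo.img bs=1024 count=64 2>/dev/null

# "Flash" the image to the stand-in card, exactly as you would to /dev/sdX.
dd if=raspbian-demo.img of=sdcard-standin bs=4096 2>/dev/null
sync

# Verify the copy is byte-identical, as you might checksum a real card.
cksum raspbian-demo.img sdcard-standin
```

On a real card, the `of=` target is the whole device (not a partition) and the command needs root privileges.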

  • Project One-Point-Five: Play with your Raspberry Pi

Once your credit card sized computer is up and functional, kick the tires. Check out the graphical interface, use the command line, and try running it headless. Take baby steps if baby steps are what is fun and comfortable, or run headlong into a project that is big and crazy; the idea here is to have fun, get used to the environment, and learn enough to ask useful questions about what to do next. This is a good time to check out the Adafruit series of tutorials or elinux.org’s tutorial list.

  • Project Two: Build an Information Kiosk to Display Local Mass Transit Information

I found this on the elinux list of tutorials and I think it is great for libraries, provided they are in an area served by NextBus or a similar service. The tutorial walks users through the process of building a dedicated information kiosk for transit information. The steps are clear and documented with photographs and code examples. Beginning users may want to refer to other references, such as the O’Reilly book or a Linux tutorial, to fill in some gaps. I suspect the tricky bit will be finding a source for real-time GPS telemetry from the local transit service, but this is a great project for those who have worked through basic projects and are ready to build something practical for their library.

  • Project Three: Build a Dedicated OPAC Terminal.

While dedicated OPAC terminals may no longer be the cutting edge of library technology, our patrons still need to find books on the shelves. Library Journal’s Digital Shift blog and John Lolis from the White Plains Public Library describe a project that uses the Raspbian OS to power a catalog-only public terminal. The concept is straightforward and working prototypes have been completed, but as of yet I have not seen a step-by-step set of instructions for the beginner or novice. As a follow-up to this post, I will document the build process for TechConnect. The gist of this project is to set up a kiosk-type browser (a browser that only performs a set task or visits a limited range of sites) on the Raspberry Pi. Eli Neiberger has raised some good questions on Twitter about the suitability of RPi hardware for rough-and-tumble public use, but this is the sort of issue testing may resolve. If librarians can crowd-source a durable, low-cost OPAC kiosk using Lolis’ original design, we’ll have done something significant.
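One common way to get a browser that only does a set task on a Linux machine is to launch it in kiosk mode from the desktop session’s autostart file. The sketch below writes such a file to a local stand-in path; the real location (something like /etc/xdg/lxsession/LXDE/autostart on Raspbian) varies by image, and the catalog URL is a placeholder, not a real address.

```shell
# Stand-in for the LXDE session autostart file; on a real Raspbian image,
# check your release's documentation for the actual path before editing.
AUTOSTART=./autostart-demo

cat > "$AUTOSTART" <<'EOF'
@xset s off
@xset -dpms
@chromium-browser --kiosk http://catalog.example.edu
EOF

# The @ prefix tells the LXDE session manager to restart a command if it
# crashes; the xset lines disable screen blanking so the catalog stays visible.
cat "$AUTOSTART"
```

With this in place, the Pi boots straight into a full-screen catalog view, which is the essence of the kiosk-type browser described above.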

Raspberry Pi Shopping List

As mentioned above, you may have many of these items already. If not, I’ve purchased and tested the following accessories for a couple of Raspberry Pi projects.

Basic Kit: (parts sourced through Amazon for ease of institutional purchase. Other sources may be preferable or less expensive.)


Raspberry Pi kits (Some vendors have put together full kits with a wide range of parts and accessories. These kits include breadboards and parts for arduino-type projects.)


  1. Good and necessary work is still being done in Second Life, but it has become a niche service, not a revolution in the way we provide library services.
  3. Check out this forum thread for a basic distinction between Arduino and Raspberry Pi.
  4. The alternative is to run it ‘headless’ over your network using SSH.
  5. Monitors with DVI input will work with a small and cheap HDMI to DVI adapter. Analog monitors (the ones with blue VGA connectors) will work if you purchase an HDMI to VGA converter-adapter; these start around $20.

Learn to Love the Command Line

“Then the interface-makers went to work on their GUIs, and introduced a new semiotic layer between people and machines. People who use such systems have abdicated the responsibility, and surrendered the power, of sending bits directly to the chip that’s doing the arithmetic, and handed that responsibility and power over to the OS.”
—Neal Stephenson, In the Beginning was the Command Line

Many of us here at Tech Connect are fans of the command line. We’ve written posts on configuring a local server, renaming files en masse, & using the Git version control system that are full of command line incantations, some mysterious and some magical. No doubt, the command line is intimidating. I’m sure most computer users, when they see it, think “Didn’t we move beyond this already? Can’t someone just write an app for that?” Up until about a year ago, I felt that way, but I’ve come to love the command line like Big Brother. And I’m here to convince you that you should love the command line, too.


So why use a command line when you can probably accomplish almost everything in a graphical user interface (GUI)? Consider the most repetitive, dreary task you do on a daily basis. It might be copying text back and forth between applications over and over. You might periodically back up several different folders by copying them to an external drive. We all have repetitive workflows in dire need of automation. Sure, you could write some macros, but macros can be brittle, breaking when the slightest change is introduced. They’re often so tricky to write correctly that the blinking square at the start of the command prompt starts to look promising.

One of the first joys of the command line is that everything you do can be scripted. Any set of steps, no matter how lengthy and intricate, can be compiled into a script which completes the process in a matter of seconds. Copy a dozen folders in disparate locations to a backup drive? No problem. Optimize a web app for deployment? A script can minify your JavaScript and CSS, concatenate files, optimize images, test your code for bugs, and even push it out to a remote server.
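A minimal sketch of what such a backup script could look like (the folder names and the /tmp/backupdrive destination are made up for illustration; in real use they would be your own scattered folders and a mounted backup drive):

```shell
#!/bin/sh
# Hypothetical example: copy several folders to one backup location.
rm -rf /tmp/backupdrive && mkdir -p /tmp/backupdrive
mkdir -p /tmp/demo/docs /tmp/demo/photos /tmp/demo/projects

for dir in /tmp/demo/docs /tmp/demo/photos /tmp/demo/projects; do
    cp -R "$dir" /tmp/backupdrive/   # -R copies each folder recursively
done

ls /tmp/backupdrive   # lists: docs  photos  projects
```

Save steps like these in a file, mark it executable with chmod +x, and the whole routine becomes a single command.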

Available Software

It may seem odd, but moving to the Neanderthal command line can actually increase the amount of software available to you. There are lots of programs that don’t come with a graphical interface, either because they’re lightweight tools that don’t require a GUI or because their author knew that they’d be used almost exclusively by people comfortable with the command line. There are also many software packages that, while they have GUIs available, are more powerful on the command line. Git is a good example: there are numerous graphical interfaces and most are quite good. But learning the commands opens up a wealth of options, configurations, and customizations that just do not exist graphically.

Tying It All Together

There are many wonderful GUIs that allow you to do complex things: generate visualizations, debug code, edit videos. But most applications are in their own separate silos and work you do in one app cannot be easily connected to any other. On the command line, output from one command can easily be “piped” into another, letting you create long chains of commands. For instance, let’s say I want to create a text document with all the filenames in a particular directory in it. With a file manager GUI, this is a royal pain: I can see a list of files but I can only copy their names one by one. If I click-and-drag to select all the names, the GUI thinks I want to select the files themselves and won’t let me paste the text of their names.

On the command line, I simply write the output of the ls command to a file: [1]

ls -a > filenames.txt

Now filenames.txt will list all the files and directories in the current folder. The > writes the output of ls -a (list all files) to the file; it’s one of a few different methods of redirecting output. Now what if I want only filenames that contain a number in them?

ls -a | grep [0-9] > filenames-with-numbers.txt

I already have a list of all the file names, so I “pipe” the output of that command using the vertical bar | to grep, which in turn outputs only lines that have a number zero through nine in them, finally writing the text to filenames-with-numbers.txt. This is a contrived example but it illustrates something incredibly powerful about the command line: the text output of any command can easily be used as input for another command. While it may look like a lot of punctuated gibberish, it’s actually pretty intuitive to work with. Anyone who has tried copying GUI-formatted text into an environment with different formatting should be envious.
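To push the pipe one step further (the filenames below are invented for the demonstration), the same chain can feed wc to count the matching names instead of saving them:

```shell
# Create a scratch folder with a few sample files, then count how many
# filenames contain a digit -- no intermediate file needed.
rm -rf /tmp/pipedemo && mkdir -p /tmp/pipedemo && cd /tmp/pipedemo
touch notes.txt draft2.txt chapter3.txt
ls -a | grep "[0-9]" | wc -l   # prints 2
```

Swap wc -l for sort, head, or any other text tool and the same output flows straight into it.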


So you want to play around with a programming language. You’ve written a file named “” or “hello-world.rb” or “hello-world.php” but you have no idea how to get the code to actually run. Command line to the rescue! If you have the programming language installed and in your path, you can run python, ruby hello-world.rb, or php hello-world.php, and the output of your script will print right in your terminal. But wait, there’s more!

Many modern scripting languages come with a REPL built-in, which stands for “Read-Eval-Print loop.” In English, this means you write text that’s instantly evaluated as code in the programing language. For instance, below I enter the Python REPL and do some simple math:

$ python
Python 2.7.2
Type "help", "copyright", "credits" or "license" for more information.
>>> 1 + 2
3
>>> 54 * 34
1836
>>> 64 % 3
1
>>> exit()

The $ represents my command prompt, so I simply ran the command python which entered the REPL, printing out some basic information like the version number of the language I’m using. Inside the REPL, I did some addition, multiplication, and a modulus. The exit() function call at the end exited the REPL and put me back in my command prompt. There’s really no better way to try out a programming language than with a REPL. [2] The REPL commands for Ruby and PHP are irb and php -a respectively.
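The REPL isn’t the only way to hand a language a quick snippet from the terminal; you can also pipe code straight into the interpreter. A small sketch (on many modern systems the command is python3 rather than python):

```shell
# Pipe a one-line program into the interpreter instead of opening
# the REPL interactively.
echo 'print(1 + 2)' | python3   # prints 3
```

This is handy in scripts, where an interactive REPL would get in the way.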

Stumbling Blocks

Everyone who starts out working on the command line will run into a bunch of common hangups. It’s frustrating, but it doesn’t need to be. It’s really like learning common user interface conventions: how would you know an anachronistic floppy disk means “save this file” if you hadn’t already clicked it a million times? Below is my attempt to alleviate some of the most common quirks of the command line.

Where Is It?

Before we get too far, it would probably be good to know how to get to a command prompt in the first place. You use a terminal emulator to open a command prompt in a window on your operating system. On a Mac, the terminal emulator is the aptly-named, which is located in Applications > Utilities. On Ubuntu Linux, there’s also a Terminal application which you can find using the Dash or with the handy Ctrl+Alt+T shortcut. [3] KDE Linux distributions use Konsole as the default command line application instead. Finally, Windows has a couple of terminal emulators and, sadly, they are incompatible with *NIX shells like those found on Mac and Linux machines. Windows PowerShell can be found in Start > All Programs > Accessories or by searching after clicking the Start button. You can open PowerShell on Windows 8 by searching as well.

Escaping Spaces

Spaces are meaningful characters on the command line: they separate one command argument from the next. So when you leave spaces in filenames or text strings, commands tend to return errors. You need to escape spaces in one of two ways: precede a space with a backslash, “\”, which tells the command line to interpret the following character literally (i.e. not as a directive), or wrap the string containing spaces in quotes. So mv File Name.txt New Name.txt will be greeted with the usage information for the mv command, because the program assumes you don’t understand how to use it, while mv "Sad Cats.txt" "LOL Cats.txt" will successfully rename “Sad Cats.txt” to “LOL Cats.txt”.
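Both escaping styles look like this in practice (the cat-themed filenames are as hypothetical as ever):

```shell
# Work in a scratch folder so we don't disturb real files.
rm -rf /tmp/spacedemo && mkdir -p /tmp/spacedemo && cd /tmp/spacedemo

touch "Sad Cats.txt"                # quotes keep the space literal
mv "Sad Cats.txt" "LOL Cats.txt"    # quoting style
mv LOL\ Cats.txt Happy\ Cats.txt    # backslash-escaping style
ls                                  # shows: Happy Cats.txt
```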

Luckily, you won’t need to type quotes or slashes very often due to the miracle that is tab completion. If you start typing a command or file name, you can hit the tab key and your shell will try to fill in the rest. If there are two names that begin identically then you’ll have to provide enough to disambiguate between the two: if “This is my file.txt” and “This is my second file.txt” are both in the same folder, you’ll have to type at least “This is my ” and then, depending on whether your next letter is an F or an S, the shell can tab complete whatever follows to the appropriate name.

Copy & Paste

Command line interfaces were invented before common GUI applications and their keyboard shortcuts like Ctrl+C and Ctrl+V. Those shortcuts were already assigned different actions on the command line, and so they typically do not do what you’d expect, since for backwards compatibility they need to stick to their original meanings. So does that mean you have to type every long URL or string of text into the terminal? No! You can still copy and paste, but it works a bit differently in most terminals. On Ubuntu Linux, Ctrl+Shift+C and Ctrl+Shift+V perform copy and paste respectively, while I’ve found that right-clicking in Windows PowerShell will paste. Mac OS X’s can use the conventional ⌘+C and ⌘+V because they don’t conflict with Ctrl.

Moving Around the Line

Too many times I’ve typed out a long command, perhaps one with a URL or other pasted string of text in it, only to spot a typo dozens of characters back. On the command line, you cannot simply click to move your cursor to a different position, you have to press the back arrow dozens of times to go back and correct a mistake.

Or so I thought. On Macs, Option+click will move the cursor. Unfortunately, most other terminal applications don’t provide any mouse navigation. Luckily, there’s a series of keyboard shortcuts that work in many popular shells:

  • Ctrl+A jumps to the beginning of the line
  • Ctrl+E jumps to the end of the line
  • Ctrl+U deletes to beginning of line
  • Ctrl+K deletes to end of line
  • Ctrl+W deletes the word before the cursor

Those shortcuts can save you a lot of time and are often quicker than using the mouse. Typing Ctrl+W to delete a lengthy URL is much more convenient than shift-clicking with the mouse to select it all, then hitting the delete key.

Make It Stop!!!

One of the first tips for learning the command line is to “read the friendly manual.” You can run the man command (short for manual) and pass it any other command to learn more about its usage, e.g. man cp will give you the manual of the copy command. However, the manual doesn’t just print to the screen; it opens in a scrollable interface that replaces the command prompt. But how the heck do you exit the manual to get back to the command line? I’ll admit, when I was first learning, I used to simply close the terminal and re-open it because I had no idea how to get out of the all-too-friendly manual.

But that’s unnecessary: pressing the letter q will quit the manual, while battering away on the ESC key or typing “exit please dear lord let me exit” won’t do a thing. q exits a few other commands, too.

Another good trick is Ctrl+C which causes a keyboard interrupt, canceling whatever process is running. I use this all the time because I frequently run a test server from the command line, or a SASS command that watches for file changes, or do something really stupid that will execute forever. Ctrl+C is always the exit strategy.


So you installed the “awesomesauce” software, but every time you run awesomesauce in your command prompt you get a message like “command not found.” What gives?

All shells come with a PATH variable that specifies where the shell looks for executable programs. You can’t simply type the name of a script or executable anywhere on your system and expect the shell to know about it. Instead, you have to tell the command line where to look. Let’s say I just installed awesomesauce to ~/bin/awesomesauce where ~ stands for my user’s home folder (on Mac OS X this will be /Users/MyUsername, for example—you can run echo ~ to see where your home folder is). Here’s how I get it working:

$ # command isn't in my path yet - this is a comment by the way
$ awesomesauce
bash: awesomesauce: command not found
$ export PATH=$PATH:~/bin
$ awesomesauce
You executed the Awesome Sauce command! Congratulations!

The export command lets me modify my PATH variable, appending the ~/bin directory to the list of other places in the PATH. Now that the shell is looking in ~/bin, it sees the awesomesauce command and lets me run it from the command line.

It quickly gets tedious appending directories to your PATH every time you want to use something, so it’s wise to use a startup script that runs every time you open a new terminal. In BASH, the common shell that comes with Mac OS X and is the default in most Linux distributions, you can do this by adding the appropriate export commands to a file named .bash_profile in your home folder. The commands in .bash_profile are executed every time a new terminal window is opened.
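For example, making the export from above permanent (assuming your scripts live in ~/bin) takes a single command:

```shell
# Append the PATH change to ~/.bash_profile so every new shell picks
# it up; the single quotes keep $PATH from being expanded right now.
echo 'export PATH=$PATH:~/bin' >> ~/.bash_profile
tail -n 1 ~/.bash_profile   # prints: export PATH=$PATH:~/bin
```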

Use the Google, my Friend

There are certainly other hangups not covered above, but a tremendous amount of good information exists on the web. People have been using command line interfaces for quite a while now, and there is documentation for almost everything. There are great question-and-answer forums, like Stack Exchange’s Super User site, with tons of content on the command line and writing shell scripts. I basically taught myself the command line using Google and a few select sources. In fact, the man command alone is one of the shining advantages of the command line: imagine if you could just type man spot healing brush in Photoshop to figure out how all those crazy tools work?

Learning the command line is a challenge, but it’s well worth the investment. Once you have a few basics under your belt, it opens up vast possibilities and can greatly increase your productivity. Still, I wouldn’t say that everyone needs to know the command line; not everyone needs to learn to code either, but everyone can benefit from it. With a little practice, some trial and error, and many, many Google searches, you’ll be well on your way to commandeering like a boss.

Further Reading

– a great wiki for learning more about the common BASH shell

– command line reference

Using the Terminal – Ubuntu’s guide to the command line

In the Beginning was the Command Line – Neal Stephenson’s treatise about the command line, GUIs, Windows, Linux, & BeOS. It actually isn’t that much about the command line but it’s an interesting read.


[1]^ The commands listed in this post will not work on Windows, but that doesn’t mean you can’t do the same things: the names and syntax of the commands are just different. It’s worth noting that Windows users can use the Cygwin project to get a shell experience comparable to *NIX systems.

[2]^ If you’re still too scared to try the command line after this post, the great site provides REPLs for over a dozen different programming languages.

[3]^ The default terminal on Ubuntu, and many other Linuxes, is technically “GNOME Terminal” but it may show up as just Terminal in search results. While these are the default terminals, most systems have more advanced alternatives that offer a few niceties like arranging terminal windows in a grid, paste history, and configurable keyboard shortcuts. On Mac, iTerm2 is an excellent choice. Lifehacker recommends Terminator for Linux, and it is also available for Windows and Mac. Wikipedia has an unnecessarily long list of terminal emulators for the truly insane.

Playing with JavaScript and JQuery – the Ebook link HTML string generator and the EZproxy bookmarklet generator

In this post, I will describe two cases in which I solved a practical problem with a little bit of JavaScript and JQuery. Check them out first here before reading the rest of the post which will explain how the code works.

  1. Library ebook link HTML string generator
  2. EZproxy bookmarklet generator – Longer version (with EZproxy Suffix)
  3. EZproxy bookmarklet generator – Shorter version (with EZproxy Prefix)
1. Library ebook link HTML string generator

If you manage a library website, you handle a lot of hyperlinks for library resources. Sometimes you can distribute some of the repetitive tasks to student workers, but they are usually not familiar with HTML. I had a situation in which I needed to add a number of links for library e-books to my library’s Course E-book LibGuide page while I was swamped with many other projects. So I wondered if I could still use the library student assistant’s help by creating and providing a simple tool: the student would only need to input the link title and URL and send me the result as HTML. This way, I can still delegate some work when I am swamped with other projects that require my attention. (N.B. Needless to say, this doesn’t mean that what I did was the best way to use student assistance for this type of work. I didn’t want the student to edit the page directly because this page used tabbed boxes, and a student working in the WYSIWYG editor might inadvertently remove part of the tabbed box code.)

The following code does exactly that.

This HTML form takes an e-book title and the link to the book as input and spits out a hyperlink formatted as a list item. For example, if you fill in the title field with ‘Bradley’s Neurology in Clinical Practice’ and the link field with its URL ('s+Neurology+in+Clinical+Practice), the result shown in the text area would be: <li><a href="'s+Neurology+in+Clinical+Practice">Bradley's Neurology in Clinical Practice</a></li>. I also wanted the library student assistant to be able to do this for many e-books at once and send me the whole set of HTML snippets covering all the e-books. So if, after running the form once, one fills out the title and the link fields with another e-book’s information, the new result is appended to the end of the previous HTML string displayed in the text area. The result would look like this: <li><a href="'s+Neurology+in+Clinical+Practice">Bradley's Neurology in Clinical Practice</a></li><li><a href="">Cardiovascular Physiology</a></li>. Since the code is in a text area, the student can also edit it after clicking the ‘Send to Text Area’ button if anything was entered incorrectly.

Now, let’s take a look at what is going on behind the scenes. Below is the entire HTML file; the JavaScript/jQuery code that generates the HTML string in the text area is in the script block near the bottom.

<html>
<head>
	<script type="text/javascript" src=""></script>
</head>
<body>
	<p>This page creates the html-friendly string for a link with a title and a url.</p>
	<ul>
		<li>Fill out the form below and click the button. </li>
		<li>If the link is messy, clean up first by using the <a href="">URL decoder</a>.</li>
		<li>Copy and paste the result inside the text area after you are done.</li>
	</ul>
	<p>
		Title: <input type="text" id="title1" size="100"/><br/>
		Link: <input type="text" id="link1" size="100"/><br/>
		<button id="b1">Send to Text Area</button>
	</p>
	<p id="result1"></p>
	<textarea rows="20" cols="80"></textarea>

	<script type="text/javascript">
	$(document).ready(function(){
	  $('#b1').click(function(){
	  //  alert('<a href="'+$("#link1").val()+'">' + $("#title1").val()+"</a>");
	    $('#result1').text('<li><a href="'+$("#link1").val()+'">' + $("#title1").val()+"</a></li>");

	    var res1='<li><a href="'+$("#link1").val()+'">' + $("#title1").val()+"</a></li>";
	    $('textarea').val($('textarea').val()+res1);
	 //   $('textarea').text(res1);
	  });
	}); // doc ready
	</script>
</body>
</html>


Since I am using jQuery, I start with the obligatory $(document).ready in line 2. Line 3 registers a callback function that is executed when #b1 (the button with the id of b1 above) is clicked. Line 4 is commented out; I used it initially to test whether I was getting the right string from the title and link fields using a JS alert. Line 5 fills the p tag with the id result1 with the string built from those fields. The string is also saved in the variable res1 in line 7 and then appended to the content of the textarea field in line 8. Line 9 is commented out. If you use line 9 instead of line 8, any existing content in the textarea is replaced, and only the new string created from the title and link fields shows up in the text area.

<script type="text/javascript">
$(document).ready(function(){
  $('#b1').click(function(){
  //  alert('<a href="'+$("#link1").val()+'">' + $("#title1").val()+"</a>");
    $('#result1').text('<li><a href="'+$("#link1").val()+'">' + $("#title1").val()+"</a></li>");

    var res1='<li><a href="'+$("#link1").val()+'">' + $("#title1").val()+"</a></li>";
    $('textarea').val($('textarea').val()+res1);
 //   $('textarea').text(res1);
  });
}); // doc ready
</script>

You do not have to know a lot about scripting to solve a simple task like this!

2. EZproxy bookmarklet generator – Longer version

My second example is a bookmarklet that reloads the current page through a specific EZproxy system.

If you think that this bookmarklet reinvents the wheel since the LibX toolbar already does this, you are correct. Also, if you are a librarian working with e-resources, you already know to add the EZproxy suffix to the end of the domain name in a URL when a patron asks if a certain article on a web page is available through the library. But I found that no matter how many times I explain this trick of adding the EZproxy suffix to patrons, it doesn’t seem to stick in their busy minds. Furthermore, many doctors and medical students, who are the primary patrons of my library, work at computers in hospitals, where they do not have the privileges needed to install a toolbar. But they can create a bookmark.

Similarly, many students asked me why there is no LibX toolbar for their mobile devices, unlike on their school laptops. (In the medical school where I work, all students are required to purchase a school-designated laptop; this laptop is pre-configured with all the necessary programs, including the library LibX toolbar.) Well, mobile devices are not exactly computers, so a browser toolbar doesn’t work. But students want an alternative, and they can create a bookmark on their tablets and smartphones. So the proxy bookmarklet is still a worthwhile tool for mobile device users.

This is where the bookmarklet is: to test, drag the link on the top that says Full-Text from FIU Library to your bookmark toolbar. Then go to Click the new bookmarklet you got on your toolbar. The page will reload, and you will be asked to log in through the Florida International University EZproxy system. Once you are authenticated, you will see the page proxied.

You will be surprised to see how simple the bookmarklet is (and there is an even shorter version, which I will show in the next section). It is a JavaScript function wrapped inside a hyperlink. The first few lines each take a piece of the current window’s location object: the domain name, the path name, and any search string after the path. So in the case of, location.hostname is, location.pathname is Issue.aspx, and the rest of the URL (?journalid=67&issueID=4452&direction=P) is Then I put my institution’s EZproxy suffix between the host and the path, and finally I ask the browser to load this new link.

<a href="javascript:(function(){
	var host=location.hostname;
	var path=location.pathname;
	var;
	var newlink='http://'+host+''+path+srch;
	location.href=newlink;
})();">Full-Text from FIU Library</a>

Now let’s take a look at the whole form. I created this form for those who want a ready-made bookmarklet recipe. All they need is their institution’s EZproxy suffix and whatever name they want to give the bookmarklet. Once one fills out those two fields and clicks the ‘Customize’ button, one gets the full HTML page code with the bookmarklet as a link in it.

<script type="text/javascript" src=""></script>
<h1>EZproxy Bookmarklet</h1>	
	<a href="javascript:(function(){
		var host=location.hostname;
		var path=location.pathname;
		var;
		var newlink='http://'+host+''+path+srch;
		location.href=newlink;
	})();">Full-Text from FIU Library</a> 
	(Drag the link to the bookmark toolbar!)
<p>This is a bookmarklet that will reroute the current page through the FIU EZproxy.
If you have the LibX toolbar installed, you do not need this bookmarklet; simply select 'Reload Page via XYZ EZproxy' on the menu that appears when you right-click.
Created by Bohyun Kim in March 2013.</p>
<h2>How to Test</h2>
	<li>Drag the link above to the bookmark toolbar of your web browser.</li>
	<li>Click the bookmark when you are on a webpage that has an academic article.</li>
	<li>It will ask you to log in through the FIU EZproxy.</li>
	<li>Once you are authenticated and the library has a subscription to the journal, you will be able to get the full-text article on the page.</li>
	<li>Look at the url and see if it contains If it does, the bookmarklet is working.</li>

<h2>Make One for Your Library</h2>
		Bookmark title: <input type="text" id="title1" size="40" placeholder="e.g. Full-Text ABC Library"/>
		<em>e.g. Full-text ABC Library</em>
		Library EZproxy Suffix: <input type="text" id="link1" size="31" placeholder="e.g."/>
		<button id="b1">Customize</button>
	<p><strong>Copy the following into a text editor and save it as an html file.</strong></p>
		<li>Open the file in a web browser and drag your link to the bookmark toolbar.</li>
	<p id="result1" style="color:#F7704F;">**Customized code will appear here.**</p>
	<p><strong>If you want to make any changes to the code:</strong></p>
	<textarea rows="10" cols="60"></textarea>

	<script type="text/javascript">
	$(document).ready(function(){
	  $('#b1').click(function(){
		var pre="&lt;html&gt; &lt;head&gt; &lt;script type=&quot;text/javascript&quot; src=&quot;;&gt;&lt;/script&gt; &lt;/head&gt; &lt;body&gt; &lt;a href=&quot;javascript:(function(){ var hst=location.hostname; var path=location.pathname; var; var newlink='http://'+hst";
		var link1=$('#link1').val();
		var post="'+path+srch; location.href=newlink; })();&quot;&gt;";
	  	var title=$("#title1").val();
	  	var end="&lt;/a&gt; &lt;/body&gt; &lt;/html&gt;";
	  	var final=$('<div />').html(pre+"+'."+link1+post+title+end).text();
	  	$('#result1').text(final);   // show the customized code on the page
	  	$('textarea').val(final);    // and in the text area for editing
	  });
	}); // doc ready
	</script>


That ‘customize’ part is done here with several lines of jQuery. You will notice that the process is quite similar to my first example. In lines 4-8, I am just stitching together a bunch of text strings so that the whole code is eventually spit out in the text area when the ‘Customize’ button is clicked. All special characters used in HTML tags, such as ‘<’ and ‘>’, have been changed to HTML entities. In line 9, I take the entire concatenated string, which ends with the variable end (I hope you name your variables a little more carefully than I do!), and add it to an empty div so that the string is set as the inner HTML property of that div. Then I retrieve it using the .text() method. The result is the HTML code with the HTML entities decoded.

<script type="text/javascript">
$(document).ready(function(){
  $('#b1').click(function(){
	var pre="&lt;html&gt; &lt;head&gt; &lt;script type=&quot;text/javascript&quot; src=&quot;;&gt;&lt;/script&gt; &lt;/head&gt; &lt;body&gt; &lt;a href=&quot;javascript:(function(){ var hst=location.hostname; var path=location.pathname; var; var newlink='http://'+hst";
	var link1=$('#link1').val();
	var post="'+path+srch; location.href=newlink; })();&quot;&gt;";
  	var title=$("#title1").val();
  	var end="&lt;/a&gt; &lt;/body&gt; &lt;/html&gt;";
  	var final=$('<div />').html(pre+"+'."+link1+post+title+end).text();
  	$('#result1').text(final);   // show the customized code on the page
  	$('textarea').val(final);    // and in the text area for editing
  });
}); // doc ready
</script>

Not too bad, right? I hope these examples show how just a few lines of code can be used to solve practical problems at work. Coding is there for you to automate time-consuming and repetitive tasks.

3. EZproxy bookmarklet generator – Shorter version

There is a simpler way to create an EZproxy bookmarklet than the one sketched above. If you simply add the EZproxy prefix in front of the entire URL of the page where a user is, you achieve the same result. In this case, you do not have to break the URL into host, pathname, search string, etc.

<a href="javascript:void(location.href=%22">Full-Text from FIU Library</a>

Here are the code for this much simpler EZproxy bookmarklet and the bookmarklet generator. If you know your library’s EZproxy prefix, you can make one for your library.

So there are many ways to get the same thing done, some more elegant than others. Usually the shorter solution is the more elegant one. The lesson is that you usually arrive at the most elegant solution only after coming up with many less elegant solutions first.

Do you have a problem that can be solved by creating several lines of code? Have you ever solved a practical problem using a bit of code? Share your experience in the comments!


Making Your Website Accessible Part 3: Content WCAG Compliance

Welcome to part 3 of the web accessibility series. While part 1 explained a bit about web accessibility and the Web Content Accessibility Guidelines (WCAG), and part 2 covered implementing WCAG on the development side, part 3 will cover the content side of things.

The main goal is that you walk away with a set of guidelines you can give to staff. I’ll start with the easier stuff that applies to all staff who edit content, and then move on to more complicated content, namely media and forms, where the guidelines may be aimed at technical staff, either for content creation or for evaluating add-ons/plugins.


Use headings 1-6 to structure information so people using assistive technologies can navigate web pages more easily (2.4.6). If your page were displayed with only headers, or with headers and content indented beneath them, would it make sense? Would your headers be in the right place? Many screen readers and tools use an “Outline View,” which shows only the headings of a page.

Header 1 is the Title of the Webpage
Header 2 is a Subtopic of the Webpage
Header 3 is a Specific Sub-topic of Header 2 (and 1)
Another Header 2 would be a Subtopic of Header 1 (but separate from the first Header 2)


<h1>E-Resources in Engineering</h1> = title of your page.
  <h2>Standards and Codes</h2> = topic 1
    <h3>International Standards Organization</h3> = 1st subtopic of topic 1
    <h3>Canadian Standards Association</h3> = 2nd subtopic of h2 topic 1
  <h2>Journal Databases</h2> = topic 2
    <h3>Compendex</h3> = subtopic of topic 2

Enter your HTML or the URL of your page into HTML5 Outliner to see an outline view.


The text for every link should be descriptive (2.4.4) and unique (within a single page). Imagine there was no other text around it (except the title of the page). Would you understand what that link is for? Many assistive technologies have a “links only” option for viewing.

While many content management systems provide the option to add additional text to a link through the title attribute (and this is described as a sufficient technique, H33), the title attribute is generally not read by screen readers, 1 as reading the title attribute on links (and images) is off by default. 2

Text in a Foreign Language

While each page should already be marked with a ‘default’ language (3.1.1), chunks of text (e.g. quotes, titles) in another language should be marked (3.1.2) with the ‘lang’ attribute and the appropriate language code, which can be applied to any HTML element. 3


Only use tables for information and data that should be presented in a tabular format. For example, a calendar, a list of contact information, or data sets should be presented in a table (1.3.1). Tables should not be used for layout, such as presenting content in columns.

If you need to use a table, then the information must be presented in a way that is accessible to users with disabilities and allows the table information to still be understood by the user even if they can’t see the table’s layout.


<table> : This element defines an HTML table
<tr> : This element defines a table row
<th> : This element defines a table header and should have the appropriate scope
<td> : This element defines a table cell
<caption> : This element identifies the table and acts as a title for the table. A caption is shown to all.

Example using Code for a Simple Table (Technique H63)

  <table>
    <caption>Overdue Fines for Library Materials On Loan</caption>
    <tr><th scope="col">Type of Item</th><th scope="col">Amount per Day</th></tr>
    <tr><td>Book</td><td>$0.50</td></tr> <!-- sample data row -->
  </table>

For examples of more complex tables, take a look at Technique H43.

  • Alternate Text

When you include image elements on your website, you also need to include alternate or alt text, which allows you to include descriptive information about the image. While the text is not visible to most users, assistive technologies will read the alternate text as a description of the image, so write concisely, while still providing an accurate description of the image.

However, if your image has been included for purely decorative purposes, leave the alternate text empty. Assistive technology will ignore the image completely.

For example:

<img src="/6th-floorplan.jpg" alt="floor plan of the library's 6th floor">

If decorative only:

<img src="/example.jpg" alt="">
  • Title for Images

The title attribute allows you to provide additional information about an image beyond the alternate text (meaning the title should not simply repeat the alt text). The title will also typically appear as a tooltip when users hover over the image. However, as already noted, the title is typically not seen or read, so the use of title is discouraged 4.

  • Use Text Whenever Possible

Basically, don’t use an image of text instead of just text – use CSS instead (1.4.5). The one exception is for logos.

The logo of this website is an image, but all other text you see in this screenshot is just text
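Where text needs visual styling, a rough sketch (class name and colour values are assumed) of styling real text with CSS rather than using an image of text:

  <h1 class="site-title">Library News</h1>

  h1.site-title {
    font-family: Georgia, serif;
    color: #8AADE1;
  }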
  • Meaning through Text

In a similar vein, understanding information and instructions should not depend on sensory characteristics such as sound, colour, or shape (1.3.3, 1.4.1). Think of it this way: your message should still be understandable in text-only mode.

Audio/Video, Animations & Interactions
  • Autoplay

In general, audio and visual content should not play automatically. Instead, users should be able to hit a play button when they are ready for the content. Audio that autoplays for more than 3 seconds must have a pause/stop control or independent volume control (1.4.2).
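As a minimal sketch (the file name is assumed), an audio element that waits for the user and exposes the browser’s built-in play/pause controls rather than autoplaying:

  <audio controls src="podcast-episode.mp3">
    <a href="podcast-episode.mp3">Download the audio (MP3)</a>
  </audio>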

  • Alternatives to Media

All non-text content, including audio and visual materials should have an alternate (1.1.1), typically a text transcript and/or descriptive audio (1.2.1, 1.2.3). At the AA level, WCAG requires audio descriptions for all videos as well (1.2.5), but the Ontario legislation (AODA) excludes it.

Example of Audio Description in a Movie

Much like images, if the audio/video is actually an alternate to text, such as in a tutorial where an explanation is already fully explained in text form, then no alternate is required.

  • Captions

All audio/video should have captions (aka subtitles) again unless the actual audio/video is an alternative to text (1.2.2). At the AA level, WCAG requires captions for live (streaming) audio/video as well (1.2.4), but this is the one other exception to the AODA.
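In HTML5, caption files can be attached to a video with the track element (file names are assumed here, and browser support for track is still emerging at the time of writing):

  <video controls>
    <source src="tutorial.mp4" type="video/mp4">
    <track kind="captions" src="tutorial-captions.vtt" srclang="en" label="English">
  </video>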

  • Timing

Timing issues are particularly important in media and interactive elements, although they apply to any session-based activity, such as bookings and forms. The basic idea is that anything on a timer should be extendable, and in some cases users must be warned (2.2.1). The exceptions are real-time activities (e.g. an auction) and cases where the limit is essential (e.g. a timed quiz).

Toronto Public Library gives this session time out warning to users
  • Moving, Blinking, & Flashing

While this applies to websites in general, particularly for interactive elements or animations, the simplified version is that anything that starts automatically should have an accessible mechanism to pause, stop, or hide it (2.2.2). Additionally, nothing on the website should flash more than 3 times within 1 second so as not to cause seizures (2.3.1).

WARNING: Epilepsy inducing website
(please don’t ever do anything like this, ever)
  • Keyboard Accessible

Again, this applies to the website as a whole, but particularly with more interactive elements: everything on a website should be operable through a keyboard (2.1.1).

One of the major issues with web-based media is that it tends to live in a Flash-based container, which is not usually accessible. (This may shortly prove to be less of a problem, with accessible controls built into Flash videos by default as of Flash CS6. 5) YouTube videos are a prime example.

One alternative is to provide a link to an accessible player. For YouTube videos, you can create a link that will load a video (or playlist) into an online accessible player, such as Easy YouTube Player or the Accessible Interface to YouTube.

The other common method is to wrap the Flash object with accessible controls (see Keyboard Accessible YouTube Controls for example). However, wrapping an embedded video may remove existing functionality, such as full screen or volume control.

If you host your own videos, consider having accessible controls load for all your videos, such as using the JW Player Controls.

I also emphasize that you should be careful never to create a keyboard trap (2.1.2), though this is uncommon these days. See the following section for more on keyboard accessibility.


Keyboard accessibility, along with many of the guidelines discussed here, can apply to any similar element on a website, but the remaining guidelines are covered in this section because they are most commonly encountered with form elements.

  • Tab Order

Not surprisingly, users should be able to tab through fields in a meaningful order, which is not always the same as the default browser order (2.4.3).

annotated screenshot of an incorrect tab order on a form
Default order based on the code, which is obviously confusing to a user.

To fix the (above) problem, you can use the tabindex attribute.

For simplicity, I’ve left out some coding such as the form attributes.

  <label for="realname">name:</label> <input id="realname" tabindex="1">
  <label for="comments">comments</label>
  <textarea id="comments" cols="25" rows="5" tabindex="4"></textarea>
  <label for="email">email:</label> <input id="email" tabindex="2">
  <label for="dep">department:</label>
  <select id="dep" tabindex="3">
    <option value="">...</option>
  </select>
  • On Focus & Input Behaviour

Whenever an element can take input or interaction from the user, the current focus should be visible (2.4.7). The most basic example is the border/shadow effect when your cursor is in a text box, which is already built into the latest webkit browsers. (Some of the other browsers do show a minimal effect, but nothing that would pass the colour contrast guidelines.)

screenshot of default field focus styling in Chrome and Safari

Since the default behaviour only covers the webkit browsers, some simple CSS will do the trick.6

input:focus, textarea:focus, select:focus {
  border: 2px solid #8AADE1;
  box-shadow: 0 0 3px #8AADE1; /* not supported pre-IE9, but border still applies */
  outline: medium none;
}

Putting focus on or entering something into an element should also not change the context (3.2.1, 3.2.2). Finally, submission should be manual, not automatic (though I’ve never encountered an automatic one).
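A common failure of 3.2.2 is a select menu that submits its form as soon as the selection changes. A sketch of the problem and a fix (field names are assumed):

  <!-- Avoid: changing the selection submits the form immediately -->
  <select name="branch" onchange="this.form.submit()"> ... </select>

  <!-- Better: let the user review the choice and submit explicitly -->
  <select name="branch"> ... </select>
  <input type="submit" value="Go">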

  • Avoiding and Correcting Mistakes

Appropriate labels (and possibly instructions) should always go along with user input (3.3.2). Form elements in particular should generally each have a label element attached (though not used for buttons 7), such as in the basic code example above.

Any errors in input should always be identified and described (3.3.1), with corrections suggested whenever possible (3.3.3). For business transactions, particularly related financial or legal transactions, all data should be given to the user to be reviewed first (3.3.4).

With HTML5, marking required elements is now simpler than ever with the required attribute, such as:

<input type="text" name="name" required>

The browser will automatically identify whether the field was left blank and inform the user.

screenshot of popup when field required in chrome, firefox, and opera

Similarly, there are now more input types you can use for browser-based form validation, such as the email input type:

<input type="email" id="user_email">

Do not use reCAPTCHA. It irks me to no end that reCAPTCHA claims it is accessible because it provides an audio alternative. While that’s true, next time you encounter one, try listening to it.

The alternative is to use a captcha that is more accessible. The Wikipedia article on CAPTCHA can give you some ideas and examples. Personally, I like the simple questions, such as ‘If you write 5 followed directly by 2, what number did you write?’
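A sketch of such a question as a plain form field (the server-side check of the submitted answer is assumed):

  <label for="captcha">If you write 5 followed directly by 2, what number did you write?</label>
  <input type="text" id="captcha" name="captcha" required>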

HTML5 Accessibility Support

Unfortunately, accessibility support for HTML5 is only partially there. Even basic elements, such as header and footer, are not supported across browsers. Based on various sources that I have read, for assistive technology purposes, your website should be accessible through the latest IE and Firefox on Windows, and Safari on OSX. Check out the HTML5 Accessibility website to see which HTML5 elements and attributes have accessibility support in the various browsers.

In the meantime: in part 2, I mentioned that WAI-ARIA should only be used for custom interfaces, but since not all browsers support all the new HTML5 elements and attributes, it has been suggested to use ARIA roles during this transitional phase. If you find something you’re using isn’t supported in the recommended browsers, check out Using WAI-ARIA in HTML, which discusses which ARIA role you should use.
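For example, the HTML5 structural elements can be doubled up with the corresponding ARIA landmark roles until browser and screen reader support catches up:

  <header role="banner"> ... </header>
  <nav role="navigation"> ... </nav>
  <footer role="contentinfo"> ... </footer>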


This is the last post of the series. I’ve made sure to cover all the guidelines, and while I haven’t specifically covered scripting, there’s almost no difference. Flash, Silverlight, and PDF are fairly different things, so I point you to the WCAG guidelines and other resources (see the next section).

Awareness of web accessibility has increased, but we can do more, a lot more, and there is no better place to start than with our own institution’s website. While web technologies are forging ahead and making it easier for people to interact with the web, adaptive or assistive technologies are frequently playing catch up, so we need to be aware of what works for everyone and what doesn’t. Make the web friendlier, for everyone.

Resources & More Information

If you want more information and resources, check out:


Revisiting PeerJ

A few months ago, as part of a discussion on open peer review, I described the early stages of planning for a new type of journal called PeerJ. Last month, on February 12, PeerJ launched with its first 30 articles. By last week, the journal had published 53 articles. There are a number of remarkable attributes of the journal so far, so in this post I want to look at what PeerJ is actually doing, and at some lessons academic libraries can take away–particularly those that are getting into publishing.

What PeerJ is Doing

On the opening day blog post (since there are no editorials or issues in PeerJ, communication from the editors has to be done via blog post 1), the PeerJ team outlined their mission under four headings: to make their content open and help make that standard practice, to practice constant innovation, to “serve academia”, and to make this happen at minimal cost to researchers and no cost to the public. The list of advisory board members and academic editors is impressive–it is global and diverse, and includes some big names and Nobel laureates. To someone judging the quality of the work likely to be published, this is a good indication. The members of PeerJ span a range of disciplines, with the majority in Molecular Biology. Submitting and/or publishing work requires a fee, but there is a free plan that allows one pre-print to be posted on the forthcoming PeerJ PrePrints.

PeerJ’s publication methods are based on PLoS ONE, which publishes articles based on scientific and methodological soundness, with no emphasis placed on subjective measures of novelty or interest (see more on this). Like all peer-reviewed journals, it sends articles to an academic editor in the field, who then sends the article to peer reviewers. Everything is kept confidential until the article is actually published, but authors are free to talk about their work in other venues, such as blogs.

Look and Feel
PeerJ on an iPhone size screen

There are several striking differences between PeerJ and standard academic journals. The home page of the journal emphasizes bold visuals and is responsive to devices, so the large image scales to a small screen for easy reading. The “timeline” display emphasizes new and interesting content. 2 The code they used to make this all happen is openly available on the PeerJ Github account. The design of the page reflects best practices for non-profit web design, as described by the non-profit social media guide Nonprofit Tech 2.0: the page tells a story, makes it easy to get updates, works on all devices, and integrates social media. The design has changed iteratively even in the first month to reflect the realities of what was actually being published and how people were accessing it. 3 The PDFs of articles were designed to be readable on screens, especially tablets: rather than fitting as much text as possible on one page, as many PDFs do, they have single columns with left margins, fewer words per line, and references hyperlinked in the text. 4

How Open Peer Review Works

One of the most notable features of PeerJ is open peer review. This is not mandatory, but approximately half of the reviewers and authors have chosen to participate. 5 This article is an example of open peer review in practice: you can read the original article, the (in this case anonymous) reviewer’s comments, the editor’s comments, and the author’s rebuttal letter. Anyone who has submitted an article to a peer-reviewed journal before will recognize this structure, but if you have not, this might be an exciting glimpse of something you have never seen before. As a non-scientist, I personally find this more useful as a didactic tool to show the peer review process in action, but I can imagine how helpful it would be to see this process for articles in the areas of library science in which I am knowledgeable.

With only 53 articles, and in existence for such a short time, it is difficult to measure what impact open peer review has on articles, or to generalize about which authors and reviewers choose an open process. So far, however, PeerJ reports that several authors have been very positive about their experience publishing with the journal. The speed of review is very fast, and reviewers have been constructive and kind in their language. One author goes into more detail in his original post: “One of the reviewers even signed his real name. Now, I’m not totally sure why they were so nice to me. They were obvious experts in the system that I studied …. But they were nice, which was refreshing and encouraging.” He also points out that what excites him about PeerJ is that all it requires is that projects be technically well-executed and carefully described, which encourages publication of negative or unexpected results and thus helps avoid the file drawer effect.6

This last point is perhaps the most important to note. We often talk of peer-reviewed articles as being particularly significant and “high-impact.” But in the case of PeerJ, the impact is not necessarily due to the results of the research or the type of research, but to the fact that it was well done. One great example is the article “Significant Changes in the Skin Microbiome Mediated by the Sport of Roller Derby”. 7 This was a study about the transfer of bacteria during roller derby matches, and it was able to prove its hypothesis that contact sports are a good environment in which to study the movement of bacteria among people. The (very humorous) review history indicates that the reviewers were positive about the article and felt that it had promise for setting a research paradigm. (Incidentally, one of the reviewers remained anonymous, since he/she felt that this could “[free] junior researchers to openly and honestly critique works by senior researchers in their field,” and signed the letter “Diligent but human postdoc reviewer”.) The article was published at the beginning of March, already has 2,307 unique visits to its page, and has been shared widely on social media. We can assume that one of the motivations for sharing was the potential for roller derby jokes or similar, but will this ultimately make the article’s long-term impact stronger? This will be something to watch.

What Can Academic Libraries Learn?

A recent article in In the Library With the Lead Pipe discussed the open ethos in two library publications, In the Library With the Lead Pipe and Code4Lib Journal. 8 The article concluded that more LIS publications need to open up the peer review process, though the publications mentioned are not peer reviewed in the traditional sense. There are very few, if any, openly peer-reviewed publications in the mold of PeerJ outside of the sciences. Could libraries or library-related publications match this process? Would they want to?

I think we can learn a few things from PeerJ. First, the rapid publication cycle means that more work is getting published more quickly. This is partly because they have so many reviewers that no one reviewer is overburdened–and due to their membership model, it is in the best financial interest of potential future authors to be current reviewers. As In the Library With the Lead Pipe points out, a central academic library journal, College & Research Libraries, is now open access and early content is available as pre-prints; however, the pre-prints reflect content that in some cases will not be published for well over a year. A year is a long time to wait, particularly for work that looks at current technology. Information Technology and Libraries (ITAL), the LITA journal, is also open access and provides pre-prints as well–but this page appears to be out of date.

Another thing we can learn is making reading easier and more convenient while still maintaining a professional appearance and clean visuals. Blogs like ACRL Tech Connect and In the Library with the Lead Pipe deliver quality content fairly quickly, but look like blogs. Journals like the Journal of Librarianship and Scholarly Communication have a faster turnaround time for review and publication (though still could take several months), but even this online journal is geared for a print world. Viewing the article requires downloading a PDF with text presented in two columns–hardly the ideal online reading experience. In these cases, the publication is somewhat at the mercy of the platform (WordPress in the former, BePress Digital Commons in the latter), but as libraries become publishers, they will have to develop platforms that meet the needs of modern researchers.

A question put to the ACRL Tech Connect contributors about preferred reading methods for articles suggests that there is no one right answer, and so the safest course is to release content in a variety of formats or make it flexible enough for readers to transform to a preferred format. A new journal to watch is Weave: Journal of Library User Experience, which will use the Digital Commons platform but present content in innovative ways. 9 Any libraries starting new journals or working with their campuses to create new journals should be aware of who their readers are and make sure that the solutions they choose work for those readers.



  1. “The Launch of PeerJ – PeerJ Blog.” Accessed February 19, 2013.
  2. “Some of the Innovations of the PeerJ Publication Platform – PeerJ Blog.” Accessed February 19, 2013.
  4. “The Thinking Behind the Design of PeerJ’s PDFs.” Accessed March 18, 2013.
  6. “PeerJ Delivers: The Review Process.” Accessed March 18, 2013.
  7. Meadow, James F., Ashley C. Bateman, Keith M. Herkert, Timothy K. O’Connor, and Jessica L. Green. “Significant Changes in the Skin Microbiome Mediated by the Sport of Roller Derby.” PeerJ 1 (March 12, 2013): e53. doi:10.7717/peerj.53.
  8. Ford, Emily, and Carol Bean. “Open Ethos Publishing at Code4Lib Journal and In the Library with the Lead Pipe.” In the Library with the Lead Pipe (December 12, 2012).
  9. Personal communication with Matthew Reidsma, March 19, 2013.