WebSockets for Real-Time and Interactive Interfaces

TL;DR: WebSockets allows the server to push up-to-date information to the browser without the browser making a new request. Watch the videos below to see the cool things WebSockets enables.

Real-Time Technologies

You are on a Web page. You click on a link and you wait for a new page to load. Then you click on another link and wait again. It may only be a second or a few seconds before the new page loads after each click, but it still feels like it takes way too long for each page to load. The browser always has to make a request and the server gives a response. This client-server architecture is part of what has made the Web such a success, but it is also a limitation of how HTTP works. Browser request, server response, browser request, server response….

But what if you need a page to provide up-to-the-moment information? Reloading the page for new information is not very efficient. What if you need to create a chat interface or to collaborate on a document in real-time? HTTP alone does not work so well in these cases. When a server gets updated information, HTTP provides no mechanism to push that message to clients that need it. This is a problem because you want to get information about a change in chat or a document as soon as it happens. Any kind of lag can disrupt the flow of the conversation or slow down the editing process.

Think about when you are tracking a package you are waiting for. You may have to keep reloading the page for some time until there is any updated information. You are basically manually polling the server for updates. Using XMLHttpRequest (XHR), also commonly known as Ajax, has been a popular way to work around some of the limitations of HTTP. After the initial page load, JavaScript can be used to poll the server for any updated information without user intervention.

Using JavaScript in this way you can still use normal HTTP and almost simulate getting a real-time feed of data from the server. After the initial request for the page, JavaScript can repeatedly ask the server for updated information. The browser client still makes a request and the server responds, and the request can be repeated. Because this cycle is all done with JavaScript it does not require user input, does not result in a full page reload, and the amount of data returned from the server can be minimal. In the case where there is no new data to return, the server can just respond with something like, “Sorry. No new data. Try again.” The browser then repeats the polling, trying again and again until there is some new data to update the page, and then goes back to polling once more.
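Here is a minimal sketch of that polling cycle in browser code, using the newer fetch API rather than raw XMLHttpRequest; the /api/updates endpoint and the response handling are made up for illustration:

```typescript
// Poll the server on a timer; apply any new data to the page.
async function poll(url: string, intervalMs: number): Promise<void> {
  try {
    const response = await fetch(url);
    if (response.status === 200) {
      updatePage(await response.json());
    }
    // Anything else (e.g. a 204 "no new data" response) falls through
    // and we simply try again on the next tick.
  } catch (err) {
    console.error("Polling request failed:", err);
  } finally {
    setTimeout(() => poll(url, intervalMs), intervalMs);
  }
}

function updatePage(data: unknown): void {
  document.querySelector("#status")!.textContent = JSON.stringify(data);
}

poll("/api/updates", 5000); // ask the server for news every five seconds
```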

This kind of polling has been implemented in many different ways, but all polling methods still have some queuing latency. Queuing latency is the time a message has to wait on the server before it can be delivered to the client. Until recently there has not been a standardized, widely implemented way for the server to send messages to a browser client as soon as an event happens. The server would always have to sit on the information until the client made a request. But there are a couple of standards that do allow the server to send messages to the browser without having to wait for the client to make a new request.

Server-Sent Events (aka EventSource) is one such standard. Once the client initiates the connection with a handshake, Server-Sent Events allows the server to continue to stream data to the browser. This is a true push technology. The limitation is that only the server can send data over this channel. In order for the browser to send any data to the server, the browser would still need to make an Ajax/XHR request. EventSource also lacks support even in some recent browsers like IE11.
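In code, Server-Sent Events is about as simple as push gets. This sketch assumes a hypothetical /events endpoint that streams messages:

```typescript
// The browser opens the connection once; only the server sends after that.
const source = new EventSource("/events");

source.onmessage = (event: MessageEvent) => {
  console.log("Server pushed:", event.data);
};

source.onerror = () => {
  // EventSource reconnects automatically; log so we can watch it happen.
  console.log("Connection lost; the browser will retry.");
};
```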

WebSockets allows for full-duplex communication between the client and the server. The client does not have to open up a new connection to send a message to the server which saves on some overhead. When the server has new data it does not have to wait for a request from the client and can send messages immediately to the client over the same connection. Client and server can even be sending messages to each other at the same time. WebSockets is a better option for applications like chat or collaborative editing because the communication channel is bidirectional and always open. While there are other kinds of latency involved here, WebSockets solves the problem of queuing latency. Removing this latency concern is what is meant by WebSockets being a real-time technology. Current browsers have good support for WebSockets.
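For comparison, here is a bare-bones browser WebSocket client; the endpoint and message format are invented for the example:

```typescript
// One persistent, full-duplex connection: either side can send at any time.
const socket = new WebSocket("wss://example.org/updates");

socket.onopen = () => {
  // The client can send without opening a new connection...
  socket.send(JSON.stringify({ type: "subscribe", channel: "chat" }));
};

socket.onmessage = (event: MessageEvent) => {
  // ...and the server can push whenever it has something new.
  console.log("Pushed from server:", JSON.parse(event.data));
};
```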

Using WebSockets solves some real problems on the Web, but how might libraries, archives, and museums use them? I am going to share details of a couple applications from my work at NCSU Libraries.

Digital Collections Now!

When Google Analytics first turned on real-time reporting it was mesmerizing. I could see what resources on the NCSU Libraries’ Rare and Unique Digital Collections site were being viewed at the exact moment they were being viewed. Or rather I could view the URL for the resource being viewed. I happened to notice that there would sometimes be multiple people viewing the same resource at the same time. This gave me some hint that someone’s social share or forum post was getting a lot of click-throughs at that very moment. Or sometimes there would be a story in the news and we had an image of one of the people involved. I could then follow up and see examples of where we were being effective with search engine optimization.

The Rare & Unique site has a lot of visual resources like photographs and architectural drawings. I wanted to see the actual images that were being viewed. The problem, though, was that Google Analytics does not have an easy way to click through from a URL to the resource on your site. I would have to retype the URL, copy and paste part of the URL path, or do a search for the resource identifier. I just wanted to see the images now. (OK, this first use case was admittedly driven by one of the great virtues of a programmer–laziness.)

My first attempt at this was to create a page that would show the resources which had been viewed most frequently in the past day and past week. To enable this functionality, I added some custom logging that is saved to a database. Every view of every resource would just get a little tick mark that would be tallied up occasionally. These pages showing the popular resources of the moment are then regenerated every hour.
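If you are curious what that kind of tick-mark logging can look like, here is a toy version. It counts in memory and logs to the console, where the real setup writes to a database and regenerates pages; everything here is illustrative:

```typescript
// Minimal Node sketch: count each resource view, tally on a timer.
import { createServer } from "node:http";

const viewCounts = new Map<string, number>();

createServer((req, res) => {
  const path = req.url ?? "/";
  viewCounts.set(path, (viewCounts.get(path) ?? 0) + 1); // the tick mark
  res.end("logged");
}).listen(8080);

// Once an hour, report the most-viewed resources (a stand-in for
// regenerating the "popular right now" pages).
setInterval(() => {
  const popular = [...viewCounts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, 10);
  console.log("Top resources this hour:", popular);
  viewCounts.clear();
}, 60 * 60 * 1000);
```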

It was not a real-time view of activity, but it was easy to implement and it did answer a lot of questions for me about what was most popular. Some images are regularly in the group of the most-viewed images. I learned that people often visit the image of the roster of the 1983 NC State men’s basketball team, which went on to win the NCAA tournament. People also seem to really like the indoor pool at the Biltmore estate.

Really Real-Time

Now that I had this logging in place I set about making it really real-time. I wanted to see the actual images being viewed at that moment by a real user. I wanted to serve up a single page and have it be updated in real-time with what is being viewed. And this is where the persistent communication channel of WebSockets came in. WebSockets allows the server to immediately send these updates to the page to be displayed.
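A rough sketch of that flow, using the popular Node ws package as a stand-in (not necessarily what runs at NCSU): whenever a view gets logged, the server immediately broadcasts it to every connected dashboard page.

```typescript
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8081 });

// Called from wherever a resource view gets logged.
function broadcastView(imagePath: string): void {
  const message = JSON.stringify({ type: "view", imagePath });
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(message); // pushed immediately; no client request needed
    }
  }
}

broadcastView("/collections/example-image-id"); // hypothetical identifier
```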

People have told me they find this real-time view to be addictive. I found it to be useful. I have discovered images I never would have seen or even known how to search for before. At least for me this has been an effective form of serendipitous discovery. I also have a better sense of what different traffic volume actually feels like on a good day. You too can see what folks are viewing in real-time now. And I have written up some more details on how this is all wired up together.

The Hunt Library Video Walls

I also used WebSockets to create interactive interfaces on the Hunt Library video walls. The Hunt Library has five large video walls created with Christie MicroTiles. These very large displays each have their own affordances based on the technologies in the space and the architecture. The Art Wall is above the single service point just inside the entrance of the library and is visible from outside the doors on that level. The Commons Wall is in front of a set of stairs that also function as coliseum-like seating. The Game Lab is within a closed space and already set up with various game consoles.

https://storify.com/ncsulibraries/video-walls
Listen to Wikipedia

When I saw and heard the visualization and sonification Listen to Wikipedia, I thought it would be perfect for the iPearl Immersion Theater. Listen to Wikipedia visualizes and sonifies data from the stream of edits on Wikipedia. The size of the bubbles is determined by the size of the change to an entry, and the sound changes in pitch based on the size of the edit. Green circles show edits from unregistered contributors, and purple circles mark edits performed by automated bots. (These automated bots are sometimes used to integrate library data into Wikipedia.) A bell signals an addition to an entry. A string pluck is a subtraction. New users are announced with a string swell.

The original Listen to Wikipedia (L2W) is a good example of the use of WebSockets for real-time displays. Wikipedia publishes all edits for every language into IRC channels. A bot called wikimon monitors each of the Wikipedia IRC channels and watches for edits. The bot then forwards the information about the edits over WebSockets to the browser clients on the Listen to Wikipedia page. The browser then takes those WebSocket messages and uses the data to create the visualization and sonification.
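On the client side the pattern looks roughly like this; the endpoint and field names here are illustrative, not L2W’s actual schema:

```typescript
interface EditEvent {
  page: string;
  changeSize: number;      // drives bubble size and tone pitch
  isBot: boolean;          // rendered as a purple circle
  isUnregistered: boolean; // rendered as a green circle
}

const feed = new WebSocket("wss://example.org/wikipedia-edits");

feed.onmessage = (event: MessageEvent) => {
  const edit: EditEvent = JSON.parse(event.data);
  drawBubble(edit);          // visualization
  playTone(edit.changeSize); // sonification
};

// Stubs standing in for the actual drawing and audio code.
declare function drawBubble(edit: EditEvent): void;
declare function playTone(size: number): void;
```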

As you walk into the Hunt Library almost all traffic goes past the iPearl Immersion Theater. The one feature that made this space perfect for Listen to Wikipedia was that it has sound and, depending on your tastes, L2W can create pleasant ambient sounds.[1] I began by adjusting the CSS styling so that the page would fit the large display. Besides setting the width and height, I adjusted the size of the fonts. I added some text to a panel on the right explaining what folks are seeing and hearing. On the left is now text asking passersby to interact with the wall and the list of languages currently being watched for updates.

One feature of the original L2W that we wanted to keep was the ability to change which languages are being monitored and visualized. Each language can individually be turned off and on. During peak times the English Wikipedia alone can sound cacophonous. An active bot can make lots of edits, all of roughly similar sizes. You can also turn on or off changes to Wikidata, which collects structured data that can support Wikipedia entries. Having only a few of the less frequently edited languages on can result in moments of silence punctuated by a single little dot and small bell sound.

We wanted to keep the ability to change the experience and actually get a feel for the torrent or trickle of Wikipedia edits and allow folks to explore what that might mean. We currently have no input device for directly interacting with the Immersion Theater wall. For L2W the solution was to allow folks to bring their own devices to act as a remote control. A prominent message encourages passersby to interact with the wall. On the wall we show the URL to the remote control. We also display a QR code version of the URL. To prevent someone in New Zealand from controlling the Hunt Library wall in Raleigh, NC, we use a short-lived, three-character token.
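A short-lived, three-character token might be implemented something like this; the alphabet and expiry time here are assumptions for the sketch, not our actual values:

```typescript
const TOKEN_TTL_MS = 5 * 60 * 1000; // assume five minutes
const ALPHABET = "abcdefghjkmnpqrstuvwxyz23456789"; // avoid look-alike chars

const activeTokens = new Map<string, number>(); // token -> expiry timestamp

// The wall requests a token and shows it in the remote-control URL/QR code.
function issueToken(): string {
  let token = "";
  for (let i = 0; i < 3; i++) {
    token += ALPHABET[Math.floor(Math.random() * ALPHABET.length)];
  }
  activeTokens.set(token, Date.now() + TOKEN_TTL_MS);
  return token;
}

// The server rejects remote-control connections whose token has expired.
function isValid(token: string): boolean {
  const expires = activeTokens.get(token);
  return expires !== undefined && Date.now() < expires;
}
```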

Because we were uncertain how best to allow a visitor to kick off an interaction, we included both a URL and QR code. They each have slightly different URLs so that we can track use. We were surprised to find that most of the interactions began with scanning the QR code. Currently 78% of interactions begin with the QR code. We suspect that we could increase the number of visitors interacting with the wall if there were other simpler ways to begin the interaction. For bring-your-own-device remote controls we are interested in how we might use technologies like Bluetooth Low Energy within the building for a variety of interactions with the surroundings and our services.

The remote control Web page is a list of big checkboxes next to each of the languages. Clicking on one of the languages turns its stream on or off on the wall (connects or disconnects one of the WebSockets channels the wall is listening on). The change happens almost immediately with the wall showing a message and removing or adding the name of the language from a side panel. We wanted this to be at least as quick as the remote control on your TV at home.

The quick interaction is possible because of WebSockets. Both the browser page on the wall and the remote control client listen on another WebSockets channel for such messages. This means that as soon as the remote control sends a message to the server it can be sent immediately to the wall and the change reflected. If the wall were using polling to get changes, then there would potentially be more latency before a change registered on the wall. Because the remote control client also listens on a channel for updates, feedback can be displayed to the user once the change has actually been made. This entire feedback loop happens over WebSockets.
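Server-side, the relay can be as simple as echoing every message from any client to all clients, wall and remote controls alike. This again uses the ws package as a stand-in, with an assumed message shape:

```typescript
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8082 });

wss.on("connection", (socket) => {
  socket.on("message", (raw) => {
    // e.g. {"language":"de","enabled":false} from a remote control
    const text = raw.toString();
    // Broadcast to everyone: the wall applies the change, and every
    // remote control (including the sender) shows the feedback.
    for (const client of wss.clients) {
      if (client.readyState === WebSocket.OPEN) {
        client.send(text);
      }
    }
  });
});
```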

Having the remote control listen for messages from the server also serves another purpose. If more than one person enters the space to control the wall, what is the correct way to handle that situation? If there are two users, how do you accurately represent the current state on the wall for both users? Maybe once the first user begins controlling the wall it locks out other users. This would work, but then how long do you lock others out? It could be frustrating for a user to have launched their QR code reader, lined up the QR code in their camera, and scanned it only to find that they are locked out and unable to control the wall. What I chose to do instead was to have every message of every change go via WebSockets to every connected remote control. In this way it is easy to keep the remote controls synchronized. Every change on one remote control is quickly reflected on every other remote control instance. This prevents most cases where the remote controls might get out of sync. While there is still the possibility of a race condition, it becomes less likely with the real-time connection and is harmless. Besides not having to lock anyone out, it also seems like a lot more fun to notice that others are controlling things as well–maybe it even makes the experience a bit more social. (Although, can you imagine how awful it would be if everyone had their own TV remote at home?)

I also thought it was important for something like an interactive exhibit around Wikipedia data to provide the user some way to read the entries. From the remote control the user can get to a page which lists the same stream of edits that are shown on the wall. The page shows the title for the most recently edited entry at the top of the page and pushes others down the page. The titles link to the current revision for that page. This page just listens to the same WebSockets channels as the wall does, so the changes appear on the wall and remote control at the same time. Sometimes the stream of edits can be so fast that it is impossible to click on an interesting entry. A button allows the user to pause the stream. When an intriguing title appears on the wall or there is a large edit to a page, the viewer can pause the stream, find the title, and click through to the article.
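One straightforward way to build such a pause button is to queue incoming titles while paused and flush them on resume; this is an assumed design, not necessarily how the exhibit does it:

```typescript
let paused = false;
const backlog: string[] = [];

// Called for each edit arriving over the WebSockets channel.
function onEditTitle(title: string): void {
  if (paused) {
    backlog.push(title); // hold on to edits while the stream is paused
  } else {
    renderTitle(title);
  }
}

function togglePause(): void {
  paused = !paused;
  if (!paused) {
    backlog.splice(0).forEach(renderTitle); // flush queued titles in order
  }
}

// Stub standing in for the code that prepends a linked title to the list.
declare function renderTitle(title: string): void;
```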

The reaction from students and visitors has been fun to watch. The enthusiasm has had unexpected consequences. For instance, one day we were testing L2W on the wall and noting what adjustments we would want to make to the design. A student came in and sat down to watch. At one point they opened up their laptop and deleted a large portion of a Wikipedia article just to see how large the bubble on the wall would be. Fortunately the edit was quickly reverted.

We have also seen the L2W exhibit pop up on social media. This Instagram video was posted with the comment, “Reasons why I should come to the library more often. #huntlibrary.”

This is people editing–Oh, someone just edited Home Alone–editing Wikipedia in this exact moment.

The original Listen to Wikipedia is open source. I have also made the source code for the Listen to Wikipedia exhibit and remote control application available. You would likely need to change the styling to fit whatever display you have.

Other Examples

I have also used WebSockets for some other fun projects. The Hunt Library Visualization Wall has a unique columnar design, and I used it to present images and video from our digital special collections in a way that allows users to change the exhibit. For the Code4Lib talk this post is based on, I developed a template for creating slide decks that include audience participation and synchronized notes via WebSockets.

Conclusion

The Web is now a better development platform for creating real-time and interactive interfaces. WebSockets provides the means for sending real-time messages between servers, browser clients, and other devices. This opens up new possibilities for what libraries, archives, and museums can do to provide up-to-the-moment data feeds and to create engaging interactive interfaces using Web technologies.

If you would like more technical information about WebSockets and these projects, please see the materials from my Code4Lib 2014 talk (including speaker notes) and some notes on the services and libraries I have used. There you will also find a post with answers to the (serious) questions I was asked during the Code4Lib presentation. I’ve also posted some thoughts on designing for large video walls.

Thanks: Special thanks to Mike Nutt, Brian Dietz, Yairon Martinez, Alisa Katz, Brent Brafford, and Shirley Rodgers for their help with making these projects a reality.


About Our Guest Author: Jason Ronallo is the Associate Head of Digital Library Initiatives at NCSU Libraries and a Web developer. He has worked on lots of other interesting projects. He occasionally writes for his own blog Preliminary Inventory of Digital Collections.

Notes

  1. Though honestly Listen to Wikipedia drove me crazy listening to it so much as I was developing the Immersion Theater display.

My #HuntLibrary: Using Instagram to Crowdsource the Story of a New Library

[Updated to reflect open-source availability. May 13th, 2013]
 
Introduction

North Carolina State University opened the James B. Hunt Jr. Library in January of 2013, creating a heart for our Centennial Campus that defines the research library of the future. My #HuntLibrary was created as a platform to foster student and community engagement with the new building via social media imagery and to preserve and archive these images as part of the record of the Hunt Library launch. My #HuntLibrary is a Ruby on Rails application that harvests images from Instagram and provides several browsing views, mechanisms for sharing, tools for users to select their favorite images, an administrative interface for moderating images, and a system for harvesting images for inclusion in the NCSU Libraries digital archives. Built according to the principles of “responsive design,” My #HuntLibrary is usable on mobile devices, tablets, desktops, e-boards, and the massive MicroTiles displays in the Hunt Library.

In the three months since the launch of My #HuntLibrary (coinciding with the opening of the Hunt Library building), we received nearly 1700 images from over 600 different users, over 6800 “like” votes, and over 53,000 “battle” votes. This post will detail some of the risks involved with the project, the technical challenges we faced, how student engagement strengthened the project, and the potential benefits of giving students and community members a voice in the documentation of the Hunt Library.

The code that drives My #HuntLibrary has been extracted into an open-source Rails Engine called “lentil” that is available on GitHub.

My #HuntLibrary front page
Planning for Risk

Most projects carry some level of risk and My #HuntLibrary was no different. It was difficult to predict the level of engagement we would be able to achieve with various application features. The timeline for development was short, carried a firm deadline (due to the need to launch alongside the new library), and included work with several technologies that were new to the development team. Additionally, the application relied on a third-party API that could change at any time. In order to mitigate project risks, we structured the project around goals with short and long (and more speculative) timelines that would each individually justify the project effort.

  1. Utilize social media to increase engagement with a new library

Social media engagement with students was a linchpin of our opening strategy. Before the Hunt Library came online, NC State students already had a high degree of ownership over existing Libraries spaces and we sought to extend that to our new library. My #HuntLibrary could contribute to that sense of ownership by providing a platform for users of the new library to document and share their experience, learn about the experiences of their peers, and to collectively curate the images using voting tools. Furthermore, My #HuntLibrary is an opportunity for staff to learn about important and unexpected uses of the building during the critical post-launch period.

  2. Provide a mechanism for students to contribute to digital collections

We felt that the Hunt Library opening could be an opportunity for students to add their voices to the documentation of campus history that is preserved in our extensive digital collections. My #HuntLibrary could also allow us to leverage social technologies to diversify the perspectives reflected in our archival collections. This is our first major social media preservation effort and we hope that our project, along with others (such as GWU Libraries’ Social Feed Manager, or the State of North Carolina’s Social Media Archive), may begin to contribute possible answers to several questions related to social media archives, including:

  • Can we utilize existing streams of social media content to integrate additional student perspectives in our documentation of the history of their university? Does this enhance our special collections?

  • Can an invitation to participate in core library practices, such as the development of special collections, serve as an effective engagement strategy?

  • What is the research value of social media collections? How does this value vary based on media, users, and harvesting methods?
  3. Explore new technologies

The developers involved with the project created a support structure that included pair programming, code reviews, and tutorial sessions that mitigated many of the technical risks, including the integration of new software frameworks and libraries and the coordination of work on a tight schedule. This project also provided an opportunity to learn more about the design of interfaces for the large-scale displays described later in this article.

Student Engagement

Although we knew it would be possible to utilize the Instagram API to collect and display photographs about the Hunt Library, we needed to have a reasonable expectation that people (and students in particular) would participate in the project. This question hinged on the likelihood that a person would tag a photograph of the new library with a hashtag that would allow us to capture it. The Libraries had previous experience trying to engage students through Twitter around the question “What are you doing in the library right now?” We looked back on that project’s limitations to inform our engagement strategy. The chosen hashtag (#whyncsulib) was unique, but in order to answer our question, students had to be aware of the hashtag and willing to deviate somewhat from their normal social media communication patterns. However, we found that it was already common for students to use the tag #DHHill to visually depict their activities in our D. H. Hill Library on Instagram. 

Example #DHHill Instagram images

Assuming that students would continue this tagging behavior at the new library, we chose the hashtag “#HuntLibrary” in hopes that it would see widespread adoption regardless of the level of awareness of our project.

As we began to design the application and develop a social media plan, another milestone in the project came with the opportunity to present the idea to actual students. The NCSU Libraries Student Advisory Board is charged with providing guidance and input on the programs and services the Libraries offers. This regular open meeting (fueled by free food) allowed us to collect feedback on specific questions about the project (e.g. do students want images to “Battle?”). The feedback from this presentation varied widely, from useful (e.g. roughly two-thirds of the students present had Instagram installed on their phones and yes, they want to battle) to unsanctionable (“If you want cute photographs you should let us bring cats into the library”). However, the general reaction from the students was that it seemed like a good idea, and we continued work with increased confidence.

The Student Advisory Board meeting also led to another breakthrough: our Director’s commitment of funds to award an iPad Mini to the photographer of the best image. Prior to the Advisory Board meeting, our only participation incentive was an assurance that the best photographs would be ingested into the University’s permanent digital archives. While this is a thrilling idea to a roomful of librarians, we were uncertain that students would have the same reaction. Perhaps unsurprisingly, when our Director asked the gathered students if they would take more pictures if there were an iPad Mini at stake, the students were unanimous in their response. Although we later learned in usability tests that students reacted very positively to the idea of contributing to their University’s story, the tablet prize gave the project a focal point, and the contest became the cornerstone of our student engagement strategy.

Display Technology

The NCSU Libraries’ vision is to be NC State’s competitive advantage. This vision is often operationalized by putting cutting-edge technology in the hands of our students and faculty. For the Hunt Library, we made a strategic investment in large-scale, architecturally integrated visualization spaces such as ultra-high definition video walls and virtual environment studios. These visualization spaces serve as large canvases to reflect the research and activities of our campus in new interactive ways. The Hunt Library is, in short, a storytelling building.

We anticipated that My #HuntLibrary would produce a visually compelling record of the new library, and so we chose to display the photographic activity in one of the library’s most accessible visualization spaces: the iPearl Immersion Theater. The Theater features a curved video wall that is twenty-one feet wide and seven feet tall. The wall uses Christie MicroTiles, a modular display system based on LED and DLP technologies that gives the wall an effective resolution of 6824 pixels by 2240 pixels. MicroTiles feature high color saturation and a wide color spectrum, making them ideal for Instagram photographs of the colorful library. A key part of the technology behind the MicroTiles is a Christie Vista Spyder. The Spyder is a hardware-based video processor that allows for 12-bit scaling. This upsampling capability was important for our application, as it allowed small (612 pixels square) images to be enlarged to two-foot images in the Theater with very few noticeable compression artifacts. 

Viewing My #HuntLibrary in the Immersion Theater. Photo by Instagram user crmelvin14.

As a public, physical space, the iPearl Immersion Theater allowed us to create embodied and shared user experiences that were fundamentally different from the web and mobile views of My #HuntLibrary. The Theater is a semi-open space near the entrance to the library, adjacent to an expansive reading lounge. The video wall installation had an attractive presence that invited passers-by inside to examine the images. Once inside the Theater, the content could be appreciated more fully by moving around in the space. Standing close to the wall enabled the user to see more detail about a particular photograph while moving farther away gave an impressionistic sense of the library’s spaces. While dwell times for the installation were sometimes low because users often dropped in for a moment before heading to their intended destination, seating in the Theater allowed for a more leisurely viewing experience as new photographs rotated into the display. Small groups of people gathered in the Theater to discuss the merits of their favorite photographs, point out their own photographs to their friends, and engage in conversations with strangers about the images.

Responsive Web Design

With the large MicroTiles displays in the Hunt Library we now face the challenge of designing for very small (mobile device) and very large displays and many sizes in between (tablets, laptops, desktops, e-boards). The growing popularity of responsive web design techniques has helped developers and designers meet the challenge of building applications that work well on a wide range of device screen sizes. Responsive web design generally means using a combination of CSS3 media queries, fluid grids, and flexible images to progressively enhance a single web design for optimal display and use on a wide range of screen sizes and devices (Marcotte 2010). Most of the discussion of responsive design centers around building for devices ranging from phone-sized to desktop-sized displays. However, there is no technical reason why responsive design cannot work for even wider ranges of display sizes.

Our final design for My #HuntLibrary includes two different responsive designs, one of which supports mouse and touch interactions and display sizes ranging from phones to desktops, and another for non-interactive public display of the photographs on displays ranging from large eboards to more than twenty-foot wide Christie MicroTiles arrays. Our decision to build two different responsive designs for the smaller and larger sets of displays has more to do with the context in which these displays are used (personal, interactive devices versus public, non-interactive displays) than any technical limitations imposed by responsive web design techniques. In our case, the design of My #HuntLibrary for phones, tablets, and laptop and desktop computers has features to support interactive browsing, sharing photos, and a photo competition “Battle View” for people to compare sets of images and pick their favorites. These features would not translate well to the Libraries’ larger public displays, which range in size from a large eboard to huge Christie MicroTiles video walls, and which are, for now, mostly non-interactive. It made sense to develop a different view optimized to support a non-interactive display of the My #HuntLibrary photos. For the eboard-sized and larger public displays we developed a grid of images that are periodically replaced by new images, a few at a time.
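The designs themselves are driven by CSS3 media queries, but the same breakpoint idea can be expressed in script with matchMedia; the pixel threshold below is purely illustrative, not our actual breakpoint:

```typescript
// Treat anything wider than ~4000px as a video-wall-class display.
const wall = window.matchMedia("(min-width: 4000px)");

function applyLayout(): void {
  // Toggle between the non-interactive wall grid and the interactive view.
  document.body.classList.toggle("wall-layout", wall.matches);
  document.body.classList.toggle("interactive-layout", !wall.matches);
}

wall.addEventListener("change", applyLayout);
applyLayout();
```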

Mobile view of My #HuntLibrary.
My #HuntLibrary on Christie MicroTiles in the Immersion Theater.
Collecting Social Media

Although the initial development push was heavily focused on the core data management and display infrastructure, the longer-term goal of content preservation (for the sake of historical documentation rather than personal archives) influenced most aspects of the project. In particular, we have attempted and are continuing to address three major preservation-related themes: harvesting, crowdsourced curation, and legal clearance.

For short-term use of the images, we harvest only the metadata, leaving the images on the Instagram servers. Clearly, for long-term preservation we would need to collect the images themselves. This harvesting is complicated by the necessity to declare an arbitrary “break” from the source materials, at which point any changes to the metadata (or removal of the images) would not be reflected by our system. We are currently developing a milestone-based harvesting schedule that takes into account both the length of time the image is in the system and the submission of a donor agreement.
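The decision logic for such a milestone-based schedule might look something like this sketch; the field names and the 30-day cutoff are assumptions, not our actual policy:

```typescript
interface HarvestedImage {
  id: string;
  firstSeen: Date;
  donorAgreementReceived: boolean;
  binaryArchived: boolean;
}

const AGE_CUTOFF_MS = 30 * 24 * 60 * 60 * 1000; // assume a 30-day "break"

// Harvest the image binary (not just metadata) once the record has aged
// past the cutoff or a donor agreement has been submitted.
function shouldHarvestBinary(image: HarvestedImage, now: Date): boolean {
  if (image.binaryArchived) return false;
  const agedOut = now.getTime() - image.firstSeen.getTime() > AGE_CUTOFF_MS;
  return agedOut || image.donorAgreementReceived;
}
```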

While we are currently planning on collecting all “#huntlibrary” images, we are very interested in the potential to allow our users to influence the selection process for certain parts of our archival collection. In order to test and support this goal, we developed two voting tools: individual image “like” voting and this-or-that “battle” voting. Our hope (which early usage metrics seem to support) is that we could use the data from these tools to select images for preservation, or at least to promote a subset of preserved images, that reflect the interests of our community. In addition to improving our selection processes, this may be an opportunity to promote archival selection as a student engagement tool by promoting opportunities for students to influence the historical record of their own experiences.

Image battle interface.
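As a sketch of the data these two voting tools produce, here is one plausible way to record battles and rank images; win rate is an assumed scoring method, not necessarily what the production app uses:

```typescript
interface ImageVotes {
  likes: number;        // individual "like" votes
  battleWins: number;   // chosen in a this-or-that battle
  battleLosses: number; // passed over in a battle
}

function recordBattle(winner: ImageVotes, loser: ImageVotes): void {
  winner.battleWins += 1;
  loser.battleLosses += 1;
}

// Rank images by their share of battles won.
function battleScore(v: ImageVotes): number {
  const total = v.battleWins + v.battleLosses;
  return total === 0 ? 0 : v.battleWins / total;
}
```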

Finally, we worked with a lawyer and copyright specialist at our library to develop a donor agreement that was short and clear enough to be submitted as a comment on an image. Instagram users retain rights to their own images and thus the ability to grant the limited rights that we are requesting. Furthermore, the use of the Instagram comment system will allow us to automate this process, provided that we are responsive to takedown requests.

Conclusion

In the three months since the launch of My #HuntLibrary (coinciding with the opening of the Hunt Library building), we received nearly 1700 images from over 600 different users, over 6800 “like” votes, and over 53,000 “battle” votes. In addition to these measures of user contributions (of either images or vote-based reviews), My #HuntLibrary recorded 135,908 pageviews from 10,421 unique visitors (according to Google Analytics) during this period. Furthermore, the project was regularly cited by students, staff, and institutional partners on social media channels, and was featured (with an emphasis on historical documentation) during the Hunt Library Dedication events.

The evaluation of the archival components of this application will take place on a longer timeline. We are currently extending the long-term content harvesting features in order to support these activities in a more automated way. We have received several indications of the value of pursuing image preservation features, including surprisingly enthusiastic reactions to questions about the preservation of images from students taking part in a My #HuntLibrary user study. As a particularly encouraging example, when an undergraduate student contributor to My #HuntLibrary was asked “How would you feel if one of your Instagram photos were selected by the library to be kept as a permanent record of what students did in 2013?” they responded, “I would be so excited. For me, I think it would be better than winning an iPad.”

About our guest authors:

Jason Casden is the Lead Librarian for the Digital Services Development group at the North Carolina State University Libraries, where he helps to develop and implement scalable digital library applications. He is the project manager and a software developer for “My #HuntLibrary,” and has served as a project or technical lead for projects including the Suma physical space and service usage assessment toolkit, the WolfWalk geo-enhanced mobile historical guide, and Library Course Tools.

Mike Nutt is a Fellow at NCSU Libraries, where he leads a strategic initiative called “Networked Library: Marketing the 21st Century Library.” He is the product lead for My #HuntLibrary, and also facilitates content strategies for the large video walls in NC State’s new Hunt Library. He founded the University of North Carolina at Chapel Hill student group Carolina Digital Story Lab and was a research assistant at the UNC-CH Carolina Digital Library and Archives.

Cory Lown is Digital Technologies Development Librarian at North Carolina State University Libraries where he works collaboratively to design and develop applications to improve end-user resource discovery and use of library services. He has contributed as a developer and/or interface designer to a number of projects, including My #HuntLibrary, WolfWalk, QuickSearch, and the latest version of the library’s mobile website.

Bret Davidson is currently a Digital Technologies Development Librarian at the North Carolina State University Libraries. Previously, Bret worked as an NCSU Libraries Fellow on visualization tools and resources to support the new James B. Hunt, Jr. Library. Prior to becoming a librarian, Bret was a music educator in the public schools of Pennsylvania and Illinois, as well as a performing musician with the River City Brass Band in Pittsburgh, PA.

Demystifying the Library with Game-Based Mobile Learning

How do you orient students to the library? Put them in a classroom and show them the website? Walk them around in a giant herd, pointing out the important spaces? That’s how we at North Carolina State University Libraries were doing it, too. And we were finding ourselves a little disappointed. Wouldn’t it be better, we thought, if we could get the students out into the library, actually engaging with staff, exploring the spaces, and discovering the collections themselves?

Classroom-Based Library Orientation

 

Background & Rationale

We had long felt that classroom-based library orientation had inherent flaws and we had tried several alternatives, including a scavenger hunt. Although the scavenger hunt was popular, it was not sustainable: it took a significant amount of work to hide paper clues around the library before each hunt and the activity could not be scaled up to meet the needs of over a hundred ENG 101 classes per semester. So, we focused our efforts on enhancing traditional classroom-based instruction and creating online tutorials.

In 2011, I held a focus group with several instructors in the First Year Writing Program, and the message was clear: they believed that students would benefit from more face-to-face library instruction and that instruction should be more active and engaging. This confirmed my gut feeling that, while online tutorials can be very effective at delivering content, they do not necessarily promote our “affective” goals of reducing library-related anxiety and fostering confidence in using the library’s collections and spaces. After classroom instruction, we distribute a short survey that asks students if they remain confused about how to find information, about whom to ask for help, about how to navigate the physical spaces of the library, or anything else. The most common response by far – from 44% of surveyed students – was that they still didn’t feel comfortable finding their way around our large library, which is in fact four merged buildings. We needed to develop an activity that would simultaneously teach students about our collections and services, introduce them to critical library staff, and help them learn their way around the library’s spaces.

Project Development

It was with this feedback in mind that two colleagues, Adam Rogers and Adrienne Lai, and I revisited the idea of the scavenger hunt in March 2011. Since the last scavenger hunt attempt in 2010, mobile devices and the cloud-based apps that run on them had become mainstream. If we could develop a scavenger hunt that relied on mobile technology, such as iPod Touches, and which didn’t rely on students finding paper clues throughout the library, we might be able to sustain and scale it.

We first investigated out-of-the-box scavenger hunt solutions such as SCVNGR and Scavenger Hunt With Friends, which were appealing in that they were self-contained and provided automatic scoring. However, we did not have a budget for the project and discovered that the free versions could not meet our needs. Furthermore, apps that rely on GPS coordinates to display challenges and questions did not work reliably inside our building.

Ultimately, we decided we needed to come up with something ourselves that would allow students to submit answers to scavenger hunt questions “mobilely”, automatically calculate scores or allow us to score student answers rapidly, and enable us to display results and provide feedback at the end of the 50-minute activity. Our eventual solution made use of traditional approaches to scavenger hunts, in the form of paper maps and clue sheets, alongside novel cloud-based technologies such as Evernote and Google Docs.

The Scavenger Hunt in 50 Minutes

0:00-10:00: A class arrives at the library classroom and is greeted by a librarian, who introduces the activity and divides the group into 3-5 teams of about 4 students. Each team gets a packet with a list of 15 questions and an iPod Touch. The iPod Touches are already logged into Evernote accounts assigned to each team.

Our eventual solution combined old and new technologies.

10:00-35:00: Teams disperse into the library to discover the answers to their 15 questions. Some questions require text-based answers; others prompt students to submit a photo. We ask them to introduce themselves to and take a photo with a librarian, to find a book in the stacks and take a photo of it as evidence, and to find the collection of circulating DVDs, among other things. Each answer is submitted as an Evernote note. While students are exploring the library, a librarian monitors the teams’ Evernote accounts (which have been shared with our master account) and scores their answers using a Google Docs spreadsheet. Meanwhile, another library staff member copies student photos into a PowerPoint document to run while students return at the end of the hunt.

Students consult the question list.

35:00-50:00: At the end of 25 minutes, students return to the classroom, where a slideshow displays the photos they took, the correct answers to the questions, and a URL to a short survey about the activity. After all team members have returned, the librarians reveal the teams’ scores, declare a winning team, and distribute prizes.

Student meets a librarian.

Feedback

The scavenger hunt has been very popular with both students and faculty. In the two semesters we have been offering the hunt (Fall 2011 and Spring 2012), we have facilitated over 90 hunts and reached over 1,600 students. 91% of surveyed students considered the activity fun and enjoyable, 93% said they learned something new about the library, and 95% indicated that they felt comfortable asking a staff member for help after having completed the activity. Instructors find the activity worthwhile as well. One ENG 101 faculty member wrote that the “activity engaged students… on a level that led to increased understanding, deeper learning, and almost complete recall of important library functions.”

Lessons Learned & Adjustments

After almost 100 scavenger hunts, we have learned how to optimize this activity for our target audiences. First, we discovered that, for our institution, this scavenger hunt works best when scheduled for a class. Often, however, one instructor would schedule scavenger hunts for three consecutive sections of a class. In these cases, we learned to use only half our iPods for the first session. In the second session, while the second half of the iPods were in use, the first half would be refreshed and made ready for the last group of students.

In the very early scavenger hunts in Fall 2011, students reported lagginess with the iPods and occasional crashing of Evernote. However, since some critical iOS and Evernote updates, this has not been a problem.

Finally, after an unexpected website outage, we learned how dependent our activity was on the functionality of our website. We now keep an ‘emergency’ version of our scavenger hunt questions in case of another outage.

More details about implementing the NCSU Libraries Mobile Scavenger Hunt are available on the NCSU Libraries’ website.

 

About Our Guest Author: Anne Burke is Undergraduate Instruction & Outreach Librarian at NCSU Libraries. She holds an MSLIS from Syracuse University and an MA in Education from Manhattanville College. She likes to explore new and exciting ways to teach students about information.