TL;DR WebSockets allows the server to push up-to-date information to the browser without the browser making a new request. Watch the videos below to see the cool things WebSockets enables.
You are on a Web page. You click on a link and you wait for a new page to load. Then you click on another link and wait again. It may only be a second or a few seconds before the new page loads after each click, but it still feels like it takes way too long for each page to load. The browser always has to make a request and the server gives a response. This client-server architecture is part of what has made the Web such a success, but it is also a limitation of how HTTP works. Browser request, server response, browser request, server response….
But what if you need a page to provide up-to-the-moment information? Reloading the page for new information is not very efficient. What if you need to create a chat interface or to collaborate on a document in real-time? HTTP alone does not work so well in these cases. When a server gets updated information, HTTP provides no mechanism to push that message to clients that need it. This is a problem because you want to get information about a change in chat or a document as soon as it happens. Any kind of lag can disrupt the flow of the conversation or slow down the editing process.
One workaround is to have the browser poll the server at regular intervals, asking whether anything has changed. This kind of polling has been implemented in many different ways, but all polling methods still have some queuing latency. Queuing latency is the time a message has to wait on the server before it can be delivered to the client. Until recently there has not been a standardized, widely implemented way for the server to send messages to a browser client as soon as an event happens. The server would always have to sit on the information until the client made a request. But there are a couple of standards that do allow the server to send messages to the browser without waiting for the client to make a new request.
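To make queuing latency concrete, here is a small sketch (not from the article) of the worst case under polling: a message that arrives on the server just after a poll sits there until the next poll comes around.

```python
# Illustrative sketch: queuing latency under polling. If the client polls
# every `interval` seconds (at t = 0, interval, 2*interval, ...), a message
# arriving just after a poll waits almost a full interval to be delivered.

def queuing_latency(event_time: float, interval: float) -> float:
    """Seconds a server-side message waits until the next client poll."""
    next_poll = ((event_time // interval) + 1) * interval
    return next_poll - event_time

# A message arriving 0.1 s after a poll, with 5-second polling:
print(queuing_latency(5.1, 5.0))  # → 4.9
```

With a push channel that wait drops to effectively zero, which is the whole point of the standards discussed next.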
Server Sent Events (aka EventSource) is one such standard. Once the client initiates the connection with a handshake, Server Sent Events allows the server to continue to stream data to the browser. This is a true push technology. The limitation is that only the server can send data over this channel. For the browser to send any data to the server, it would still need to make an Ajax/XHR request. EventSource also lacks support in some recent browsers, such as IE11.
WebSockets allows for full-duplex communication between the client and the server. The client does not have to open up a new connection to send a message to the server which saves on some overhead. When the server has new data it does not have to wait for a request from the client and can send messages immediately to the client over the same connection. Client and server can even be sending messages to each other at the same time. WebSockets is a better option for applications like chat or collaborative editing because the communication channel is bidirectional and always open. While there are other kinds of latency involved here, WebSockets solves the problem of queuing latency. Removing this latency concern is what is meant by WebSockets being a real-time technology. Current browsers have good support for WebSockets.
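The messaging pattern, stripped of the actual network layer, looks like this toy in-memory sketch: two connected endpoints, each able to send at any time, with no request needed before the server pushes. Real WebSockets add a handshake, frames, and a TCP socket underneath; the names here are mine.

```python
# Toy model of full-duplex messaging over one persistent connection.
from collections import deque

class Endpoint:
    def __init__(self, name: str):
        self.name = name
        self.inbox: deque = deque()
        self.peer = None

    def send(self, message: str) -> None:
        assert self.peer is not None, "not connected"
        self.peer.inbox.append(message)  # delivered immediately, no request first

    def receive(self) -> str:
        return self.inbox.popleft()

def connect(a: Endpoint, b: Endpoint) -> None:
    a.peer, b.peer = b, a

client, server = Endpoint("client"), Endpoint("server")
connect(client, server)

server.send("new edit on page X")       # server pushes with no prior request
client.send("subscribe: en.wikipedia")  # same open channel, other direction
print(client.receive())  # new edit on page X
print(server.receive())  # subscribe: en.wikipedia
```

Both directions flow over the same "connection" without either side waiting on the other, which is what polling and Server Sent Events each only partially provide.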
Using WebSockets solves some real problems on the Web, but how might libraries, archives, and museums use them? I am going to share details of a couple applications from my work at NCSU Libraries.
Digital Collections Now!
When Google Analytics first turned on real-time reporting it was mesmerizing. I could see what resources on the NCSU Libraries’ Rare and Unique Digital Collections site were being viewed at the exact moment they were being viewed. Or rather I could view the URL for the resource being viewed. I happened to notice that there would sometimes be multiple people viewing the same resource at the same time. This hinted that someone’s social share or forum post was getting a lot of click-throughs right at that moment. Or sometimes there would be a story in the news and we had an image of one of the people involved. I could then follow up and see examples of where we were being effective with search engine optimization.
The Rare & Unique site has a lot of visual resources like photographs and architectural drawings. I wanted to see the actual images that were being viewed. The problem, though, was that Google Analytics does not have an easy way to click through from a URL to the resource on your site. I would have to retype the URL, copy and paste the part of the URL path, or do a search for the resource identifier. I just wanted to see the images now. (OK, this first use case was admittedly driven by one of the great virtues of a programmer–laziness.)
My first attempt at this was to create a page that would show the resources which had been viewed most frequently in the past day and past week. To enable this functionality, I added some custom logging that is saved to a database. Every view of every resource would just get a little tick mark that would be tallied up occasionally. These pages showing the popular resources of the moment are then regenerated every hour.
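The tally approach described above can be sketched in a few lines. The resource identifiers here are made up, and the real implementation logs to a database rather than keeping counts in memory.

```python
# Sketch of tick-mark logging: each resource view adds a tick, and a
# periodic job reads the tallies to regenerate a "most viewed" page.
from collections import Counter

views = Counter()

def log_view(resource_id: str) -> None:
    views[resource_id] += 1  # one tick per view

def most_viewed(n: int = 3):
    return views.most_common(n)  # what the hourly regeneration would read

for rid in ["photo-0042", "drawing-0007", "photo-0042", "photo-0042", "drawing-0007"]:
    log_view(rid)

print(most_viewed(2))  # [('photo-0042', 3), ('drawing-0007', 2)]
```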
It was not a real-time view of activity, but it was easy to implement and it did answer a lot of questions for me about what was most popular. Some images are regularly in the group of the most-viewed images. I learned that people often visit the image of the NC State men’s basketball 1983 team roster which went on to win the NCAA tournament. People also seem to really like the indoor pool at the Biltmore estate.
Now that I had this logging in place, I set about making it truly real-time. I wanted to see the actual images being viewed at that moment by a real user. I wanted to serve up a single page and have it be updated in real-time with what is being viewed. And this is where the persistent communication channel of WebSockets came in. WebSockets allows the server to immediately send these updates to the page to be displayed.
People have told me they find this real-time view to be addictive. I found it to be useful. I have discovered images I never would have seen or even known how to search for before. At least for me this has been an effective form of serendipitous discovery. I also have a better sense of what different traffic volumes actually feel like on a good day. You too can see what folks are viewing in real-time now. And I have written up some more details on how this is all wired up together.
The Hunt Library Video Walls
I also used WebSockets to create interactive interfaces on the Hunt Library video walls. The Hunt Library has five large video walls created with Christie MicroTiles. These very large displays each have their own affordances based on the technologies in the space and the architecture. The Art Wall is above the single service point just inside the entrance of the library and is visible from outside the doors on that level. The Commons Wall is in front of a set of stairs that also function as coliseum-like seating. The Game Lab is within a closed space and already set up with various game consoles.
Listen to Wikipedia
When I saw and heard the visualization and sonification Listen to Wikipedia, I thought it would be perfect for the iPearl Immersion Theater. Listen to Wikipedia visualizes and sonifies data from the stream of edits on Wikipedia. The size of the bubbles is determined by the size of the change to an entry, and the sound changes in pitch based on the size of the edit. Green circles show edits from unregistered contributors, and purple circles mark edits performed by automated bots. (These automated bots are sometimes used to integrate library data into Wikipedia.) A bell signals an addition to an entry. A string pluck is a subtraction. New users are announced with a string swell.
The original Listen to Wikipedia (L2W) is a good example of the use of WebSockets for real-time displays. Wikipedia publishes all edits for every language into IRC channels. A bot called wikimon monitors each of the Wikipedia IRC channels and watches for edits. The bot then forwards the information about the edits over WebSockets to the browser clients on the Listen to Wikipedia page. The browser then takes those WebSocket messages and uses the data to create the visualization and sonification.
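The relay architecture described above (IRC firehose in, WebSocket clients out) can be sketched like this. The class and method names are mine for illustration, not from wikimon's actual code, and connected clients are modeled as simple message logs.

```python
# Simplified fan-out relay: a monitor receives edit events (standing in
# for the Wikipedia IRC channels) and forwards each one to every
# connected browser client at once.

class EditRelay:
    def __init__(self):
        self.clients = []  # each "client" is just a list of received messages

    def connect(self):
        client = []
        self.clients.append(client)
        return client

    def on_irc_edit(self, edit: dict) -> None:
        # Forward the parsed edit to all WebSocket clients immediately.
        for client in self.clients:
            client.append(edit)

relay = EditRelay()
wall = relay.connect()
laptop = relay.connect()
relay.on_irc_edit({"page": "Home Alone", "delta": -1024, "bot": False})
print(wall == laptop, len(wall))  # True 1
```

Every connected page receives the same stream at the same moment, which is why the wall, a laptop, and a phone all animate in sync.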
As you walk into the Hunt Library almost all traffic goes past the iPearl Immersion Theater. The one feature that made this space perfect for Listen to Wikipedia was that it has sound and, depending on your tastes, L2W can create pleasant ambient sounds. I began by adjusting the CSS styling so that the page would fit the large display. Besides setting the width and height, I adjusted the size of the fonts. I added some text to a panel on the right explaining what folks are seeing and hearing. On the left is now text asking passersby to interact with the wall and the list of languages currently being watched for updates.
One feature of the original L2W that we wanted to keep was the ability to change which languages are being monitored and visualized. Each language can individually be turned off and on. During peak times the English Wikipedia alone can sound cacophonous. An active bot can make lots of edits, all of roughly similar size. You can also turn on or off changes to Wikidata, which collects structured data that can support Wikipedia entries. Having only a few of the less frequently edited languages on can result in moments of silence punctuated by a single little dot and small bell sound.
We wanted to keep the ability to change the experience and actually get a feel for the torrent or trickle of Wikipedia edits and allow folks to explore what that might mean. We currently have no input device for directly interacting with the Immersion Theater wall. For L2W the solution was to allow folks to bring their own devices to act as a remote control. We encourage passersby to interact with the wall with a prominent message. On the wall we show the URL to the remote control. We also display a QR code version of the URL. To prevent someone in New Zealand from controlling the Hunt Library wall in Raleigh, NC, we use a short-lived, three-character token.
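A short-lived pairing token like the one described might be sketched as follows. The 90-second lifetime and the alphabet are my assumptions for illustration, not the exhibit's actual values; the idea is simply that only someone who can read the token off the wall, i.e. someone in the room, can pair a remote before it expires.

```python
# Sketch of a short-lived, three-character pairing token.
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits
TTL_SECONDS = 90  # assumed lifetime, not the exhibit's actual value

def issue_token(now: float):
    """Return a random 3-character token and its expiry time."""
    token = "".join(secrets.choice(ALPHABET) for _ in range(3))
    return token, now + TTL_SECONDS

def is_valid(expires_at: float, now: float) -> bool:
    return now < expires_at

token, expires = issue_token(now=1000.0)
print(len(token), is_valid(expires, now=1050.0), is_valid(expires, now=1200.0))
# 3 True False
```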
Because we were uncertain how best to allow a visitor to kick off an interaction, we included both a URL and QR code. They each have slightly different URLs so that we can track use. We were surprised to find that most of the interactions began with scanning the QR code. Currently 78% of interactions begin with the QR code. We suspect that we could increase the number of visitors interacting with the wall if there were other simpler ways to begin the interaction. For bring-your-own-device remote controls we are interested in how we might use technologies like Bluetooth Low Energy within the building for a variety of interactions with the surroundings and our services.
The remote control Web page is a list of big checkboxes next to each of the languages. Clicking on one of the languages turns its stream on or off on the wall (connects or disconnects one of the WebSockets channels the wall is listening on). The change happens almost immediately with the wall showing a message and removing or adding the name of the language from a side panel. We wanted this to be at least as quick as the remote control on your TV at home.
The quick interaction is possible because of WebSockets. Both the browser page on the wall and the remote control client listen on another WebSockets channel for such messages. This means that as soon as the remote control sends a message to the server it can be sent immediately to the wall and the change reflected. If the wall were using polling to get changes, then there would potentially be more latency before a change registered on the wall. The remote control client also uses WebSockets to listen on a channel waiting for updates. This allows feedback to be displayed to the user once the change has actually been made. This feedback loop communication happens over WebSockets.
Having the remote control listen for messages from the server also serves another purpose. If more than one person enters the space to control the wall, what is the correct way to handle that situation? If there are two users, how do you accurately represent the current state on the wall for both users? Maybe once the first user begins controlling the wall it locks out other users. This would work, but then how long do you lock others out? It could be frustrating for a user to have launched their QR code reader, lined up the QR code in their camera, and scanned it only to find that they are locked out and unable to control the wall. What I chose to do instead was to have every message of every change go via WebSockets to every connected remote control. In this way it is easy to keep the remote controls synchronized. Every change on one remote control is quickly reflected on every other remote control instance. This prevents most cases where the remote controls might get out of sync. While there is still the possibility of a race condition, it becomes less likely with the real-time connection and is harmless. Besides not having to lock anyone out, it also seems like a lot more fun to notice that others are controlling things as well–maybe it even makes the experience a bit more social. (Although, can you imagine how awful it would be if everyone had their own TV remote at home?)
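The broadcast-instead-of-lockout choice can be sketched like this. The names are illustrative; the real system carries these messages over WebSockets channels rather than in-memory lists.

```python
# Sketch of keeping the wall and every remote control in sync: the server
# holds the language state and broadcasts each toggle to all listeners,
# so no client is ever locked out and all converge on the same state.

class LanguageHub:
    def __init__(self, languages):
        self.enabled = set(languages)  # shared state on the server
        self.listeners = []            # the wall plus every remote control

    def attach(self):
        log = []
        self.listeners.append(log)
        return log

    def toggle(self, lang: str) -> None:
        if lang in self.enabled:
            self.enabled.discard(lang)
            msg = f"off:{lang}"
        else:
            self.enabled.add(lang)
            msg = f"on:{lang}"
        for listener in self.listeners:  # broadcast to everyone, no lockout
            listener.append(msg)

hub = LanguageHub({"en", "de"})
wall, remote_a, remote_b = hub.attach(), hub.attach(), hub.attach()
hub.toggle("en")  # remote A turns English off
print(remote_a == remote_b == wall, hub.enabled)  # True {'de'}
```

Because every remote sees every change, a second visitor's remote immediately reflects what the first visitor just did.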
I also thought it was important for something like an interactive exhibit around Wikipedia data to provide the user some way to read the entries. From the remote control the user can get to a page which lists the same stream of edits that are shown on the wall. The page shows the title for the most recently edited entry at the top of the page and pushes others down the page. The titles link to the current revision for that page. This page just listens to the same WebSockets channels as the wall does, so the changes appear on the wall and remote control at the same time. Sometimes the stream of edits can be so fast that it is impossible to click on an interesting entry. A button allows the user to pause the stream. When an intriguing title appears on the wall or there is a large edit to a page, the viewer can pause the stream, find the title, and click through to the article.
The reaction from students and visitors has been fun to watch. The enthusiasm has had unexpected consequences. For instance one day we were testing L2W on the wall and noting what adjustments we would want to make to the design. A student came in and sat down to watch. At one point they opened up their laptop and deleted a large portion of a Wikipedia article just to see how large the bubble on the wall would be. Fortunately the edit was quickly reverted.
We have also seen the L2W exhibit pop up on social media. This Instagram video was posted with the comment, “Reasons why I should come to the library more often. #huntlibrary.”
“This is people editing–Oh, someone just edited Home Alone–editing Wikipedia in this exact moment.”
The original Listen to Wikipedia is open source. I have also made the source code for the Listen to Wikipedia exhibit and remote control application available. You would likely need to change the styling to fit whatever display you have.
I have also used WebSockets for some other fun projects. The Hunt Library Visualization Wall has a unique columnar design, and I used it to present images and video from our digital special collections in a way that allows users to change the exhibit. For the Code4Lib talk this post is based on, I developed a template for creating slide decks that include audience participation and synchronized notes via WebSockets.
The Web is now a better development platform for creating real-time and interactive interfaces. WebSockets provides the means for sending real-time messages between servers, browser clients, and other devices. This opens up new possibilities for what libraries, archives, and museums can do to provide up to the moment data feeds and to create engaging interactive interfaces using Web technologies.
If you would like more technical information about WebSockets and these projects, please see the materials from my Code4Lib 2014 talk (including speaker notes) and some notes on the services and libraries I have used. There you will also find a post with answers to the (serious) questions I was asked during the Code4Lib presentation. I’ve also posted some thoughts on designing for large video walls.
Thanks: Special thanks to Mike Nutt, Brian Dietz, Yairon Martinez, Alisa Katz, Brent Brafford, and Shirley Rodgers for their help with making these projects a reality.
About Our Guest Author: Jason Ronallo is the Associate Head of Digital Library Initiatives at NCSU Libraries and a Web developer. He has worked on lots of other interesting projects. He occasionally writes for his own blog Preliminary Inventory of Digital Collections.
- Though honestly, Listen to Wikipedia drove me crazy from listening to it so much while I was developing the Immersion Theater display.
How do you orient students to the library? Put them in a classroom and show them the website? Walk them around in a giant herd, pointing out the important spaces? That’s how we at North Carolina State University Libraries were doing it, too. And we were finding ourselves a little disappointed. Wouldn’t it be better, we thought, if we could get the students out into the library, actually engaging with staff, exploring the spaces, and discovering the collections themselves?
Background & Rationale
We had long felt that classroom-based library orientation had inherent flaws and we had tried several alternatives, including a scavenger hunt. Although the scavenger hunt was popular, it was not sustainable: it took a significant amount of work to hide paper clues around the library before each hunt and the activity could not be scaled up to meet the needs of over a hundred ENG 101 classes per semester. So, we focused our efforts on enhancing traditional classroom-based instruction and creating online tutorials.
In 2011, I held a focus group with several instructors in the First Year Writing Program, and the message was clear: they believed that students would benefit from more face-to-face library instruction and that instruction should be more active and engaging. This confirmed my gut feeling that, while online tutorials can be very effective at delivering content, they do not necessarily promote our “affective” goals of reducing library-related anxiety and fostering confidence in using the library’s collections and spaces. After classroom instruction, we distribute a short survey that asks students if they remain confused about how to find information, about whom to ask for help, about how to navigate the physical spaces of the library, or anything else. The most common response by far – from 44% of surveyed students – was that they still didn’t feel comfortable finding their way around our large library, which is in fact four merged buildings. We needed to develop an activity that would simultaneously teach students about our collections and services, introduce them to critical library staff, and help them learn their way around the library’s spaces.
It was with this feedback in mind that two colleagues — Adam Rogers and Adrienne Lai — and I revisited the idea of the scavenger hunt in March 2011. Since the last scavenger hunt attempt in 2010, mobile devices and the cloud-based apps that run on them had become mainstream. If we could develop a scavenger hunt that relied on mobile technology, such as iPod Touches, and which didn’t rely on students finding paper clues throughout the library, we might be able to sustain and scale it.
We first investigated out-of-the-box scavenger hunt solutions such as SCVNGR and Scavenger Hunt With Friends, which were appealing in that they were self-contained and provided automatic scoring. However, we did not have a budget for the project and discovered that the free versions could not meet our needs. Furthermore, apps that rely on GPS coordinates to display challenges and questions did not work reliably inside our building.
Ultimately, we decided we needed to come up with something ourselves that would allow students to submit answers to scavenger hunt questions “mobilely”, automatically calculate scores or allow us to score student answers rapidly, and enable us to display results and provide feedback at the end of the 50-minute activity. Our eventual solution made use of traditional approaches to scavenger hunts, in the form of paper maps and clue sheets, alongside novel cloud-based technologies such as Evernote and Google Docs.
The Scavenger Hunt in 50 Minutes
0:00-10:00: A class arrives at the library classroom and is greeted by a librarian, who introduces the activity and divides the group into 3-5 teams of about 4 students. Each team gets a packet with a list of 15 questions and an iPod Touch. The iPod Touches are already logged into Evernote accounts assigned to each team.
10:00-35:00: Teams disperse into the library to discover the answers to their 15 questions. Some questions require text-based answers; others prompt students to submit a photo. We ask them to introduce themselves to and take a photo with a librarian, to find a book in the stacks and take a photo of it as evidence, and to find the collection of circulating DVDs, among other things. Each answer is submitted as an Evernote note. While students are exploring the library, a librarian monitors the teams’ Evernote accounts (which have been shared with our master account) and scores their answers using a Google Docs spreadsheet. Meanwhile, another library staff member copies student photos into a PowerPoint document to run while students return at the end of the hunt.
35:00-50:00: At the end of 25 minutes, students return to the classroom, where a slideshow displays the photos they took, the correct answers to the questions, and a URL to a short survey about the activity. After all team members have returned, the librarians reveal the teams’ scores, declare a winning team, and distribute prizes.
The scavenger hunt has been very popular with both students and faculty. In the two semesters we have been offering the hunt (Fall 2011 and Spring 2012), we have facilitated over 90 hunts and reached over 1,600 students. 91% of surveyed students considered the activity fun and enjoyable, 93% said they learned something new about the library, and 95% indicated that they felt comfortable asking a staff member for help after having completed the activity. Instructors find the activity worthwhile as well. One ENG 101 faculty member wrote that the “activity engaged students… on a level that led to increased understanding, deeper learning, and almost complete recall of important library functions.”
Lessons Learned & Adjustments
After almost 100 scavenger hunts, we have learned how to optimize this activity for our target audiences. First, we discovered that, for our institution, this scavenger hunt works best when scheduled for a class. Often, however, one instructor would schedule scavenger hunts for three consecutive sections of a class. In these cases, we learned to use only half our iPods for the first session. In the second session, while the second half of the iPods were in use, the first half would be refreshed and made ready for the last group of students.
In the very early scavenger hunts in Fall 2011, students reported lagginess with the iPods and occasional crashing of Evernote. However, since some critical iOS and Evernote updates, this has not been a problem.
Finally, after an unexpected website outage, we learned how dependent our activity was on the functionality of our website. We now keep an ‘emergency’ version of our scavenger hunt questions in case of another outage.
More details about implementing the NCSU Libraries Mobile Scavenger Hunt are available on the NCSU Libraries’ website.
About Our Guest Author: Anne Burke is Undergraduate Instruction & Outreach Librarian at NCSU Libraries. She holds an MSLIS from Syracuse University and an MA in Education from Manhattanville College. She likes to explore new and exciting ways to teach students about information.