Using the Stripe API to Collect Library Fines by Accepting Online Payment

Recently, my library has been considering accepting library fine payments online. Small fines owed by many patrons are currently hard to collect. As a sum, the amount is significant, but each individual fine often does not warrant even the cost of the postage and the staff work that go into creating and sending out a fine notice letter. Libraries that are able to collect fines through the bursar’s office of their parent institutions may have a better chance at collecting them. Others, however, can only expect patrons to show up with, or mail, a check to clear their fines. Offering an online payment option for library fines is one way to make the library more user-friendly to those patrons who are too busy to visit the library in person or to mail a check but are willing to pay online with their credit cards.

If you are new to the world of online payment, there are several terms you need to become familiar with. The following definitions, from an article in Six Revisions, are useful for understanding those terms.1

  • ACH (Automated Clearing House) payments: Electronic credit and debit transfers. Most payment solutions use ACH to send money (minus fees) to their customers.
  • Merchant Account: A bank account that allows a merchant to receive payments through credit or debit cards. Merchant providers are required to obey regulations established by card associations. Many processors act as both the merchant account and the payment gateway.
  • Payment Gateway: The middleman between the merchant and their sponsoring bank. It allows merchants to securely pass credit card information between the customer and the merchant and also between the merchant and the payment processor.
  • Payment Processor: A company that a merchant uses to handle credit card transactions. Payment processors implement anti-fraud measures to ensure that both the front-facing customer and the merchant are protected.
  • PCI (the Payment Card Industry) Compliance: A merchant or payment gateway must set up their payment environment in a way that meets the Payment Card Industry Data Security Standard (PCI DSS).

Often, the same company functions as both payment gateway and payment processor, thereby processing the credit card payment securely from end to end. Such a product is called an ‘online payment system.’ Meyer’s article cited above also lists ten popular online payment systems: Stripe, Authorize.Net, PayPal, Google Checkout, Amazon Payments, Dwolla, Braintree, Samurai by FeeFighters, WePay, and 2Checkout. Bear in mind that different payment gateways, merchant accounts, and bank accounts may or may not work together, that your bank may or may not work as a merchant account, and that your library may or may not have a merchant account.2

Also note that there are fees for using online payment systems like these and that different systems have different fee structures. For example, Authorize.Net has a $99 setup fee and then charges $20 per month plus a $0.10 per-transaction fee. Stripe charges 2.9% + $0.30 per transaction with no setup or monthly fees. Fees for mobile payment solutions with a physical card reader, such as Square, can run much higher.

Among the various online payment systems, I picked Stripe because it was recommended on the Code4Lib listserv. One advantage of using Stripe is that it acts as both the payment gateway and the merchant account, which means your library does not have to have a merchant account of its own to accept payment online. Another big advantage is that you do not have to worry about the PCI compliance of your website, because the Stripe API uses a clever way to send the sensitive credit card information over to the Stripe server while keeping your local server, on which your payment form sits, completely blind to such sensitive data. I will explain this in more detail later in this post.

Below I will share some of the code that I used to set up Stripe as my library’s online payment option for testing. This may be of interest to you if you are thinking about offering online payment as an option for your patrons or if you are simply interested in how an online payment API works. Even if your library doesn’t need to collect fines online, an online payment option can be a handy tool for a small-scale fund-raising drive or donations.

The first step in making Stripe work is getting API keys. You do not have to create an account to get API keys for testing, but if you are going to work on your code for more than one day, it’s probably worth getting an account. The Stripe API has excellent documentation. I read the ‘Getting Started’ section and then jumped over to the ‘Examples’ section, which can quickly get you off the ground (https://stripe.com/docs/examples). From the list in Stripe’s Examples section, I found an example by Daniel Schröter on GitHub and decided to test it out (https://github.com/myg0v/Simple-Bootstrap-Stripe-Payment-Form). Most of the time, getting example code running requires some probing and tweaking, such as downloading all the required libraries, sorting out the paths in the code, and adding API keys. This one required relatively little work.

Now, let’s take a look at the form that this code creates.

[Screenshot: the payment form created by the example code]

In order to create a form of my own for testing, I decided to change a few things in the code.

  1. Add Patron & Payment Details.
  2. Allow custom amount for payment.
  3. Change the currency from Euro to US dollars.
  4. Configure the validation for new fields.
  5. Hide the payment form once the charge goes through instead of showing the payment form below the payment success message.

[Screenshot: the modified HTML for the payment form]

Item 4 can be done as follows. The client-side validation is performed by the BootstrapValidator jQuery plugin, so you need to get the syntax correct for the code, which now has new fields, to validate properly.
[Screenshot: the BootstrapValidator configuration for the new form fields]
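Since the screenshot is not reproduced here, below is a minimal sketch of what such a configuration might look like, following the BootstrapValidator plugin’s documented pattern rather than my exact code; the form id and the field names (email, amount) are illustrative.

// a sketch: validate the new fields before the form can be submitted
$('#payment-form').bootstrapValidator({
    fields: {
        email: {
            validators: {
                notEmpty:     { message: 'An email address is required' },
                emailAddress: { message: 'Please enter a valid email address' }
            }
        },
        amount: {
            validators: {
                notEmpty: { message: 'A payment amount is required' },
                numeric:  { message: 'The payment amount must be a number' }
            }
        }
    }
});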

This is the JavaScript that sends the data submitted to your payment form to the Stripe server. First, include the Stripe JS library, then jQuery, Bootstrap, the Bootstrap Form Helpers plugin, and the Bootstrap Validator plugin. The next block of code is an event handler for the form, which sends the payment information to Stripe via AJAX when the form is submitted. Stripe will validate the payment information and then return a token that identifies this particular transaction.

[Screenshot: the JavaScript that submits the payment information to Stripe]
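In place of the screenshot, here is a minimal sketch of that pattern based on the Stripe.js (v2) documentation; the form id is illustrative, and it assumes the card fields are marked with data-stripe attributes so Stripe.js can find them.

Stripe.setPublishableKey('pk_test_YOUR_PUBLISHABLE_KEY');

jQuery(function ($) {
    $('#payment-form').submit(function () {
        var $form = $(this);
        // disable the submit button to prevent repeated clicks
        $form.find('button').prop('disabled', true);
        // send the card fields to Stripe; the callback receives a token or an error
        Stripe.card.createToken($form, stripeResponseHandler);
        // prevent the form from submitting with the default action
        return false;
    });
});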

When the token is received, this code calls the function stripeResponseHandler(), which checks whether the Stripe server returned an error upon receiving the payment information and, if no error was returned, attaches the token to the form and submits the form.

[Screenshot: the stripeResponseHandler() function]
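Again as a stand-in for the screenshot, here is a sketch of what that handler might look like, closely following the Stripe.js (v2) documentation; the form id and the error-message element are illustrative.

function stripeResponseHandler(status, response) {
    var $form = $('#payment-form');
    if (response.error) {
        // show the error and re-enable the submit button so the user can retry
        $form.find('.payment-errors').text(response.error.message);
        $form.find('button').prop('disabled', false);
    } else {
        // no error: append the one-time token and submit the form to our server;
        // the card details themselves are never posted to the local server
        $form.append($('<input type="hidden" name="stripeToken" />').val(response.id));
        $form.get(0).submit();
    }
}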

The server-side PHP script then checks whether the Stripe token has been received and, if so, creates a charge and sends it to Stripe, as shown below. I am using PHP here, but the Stripe API supports many languages besides PHP, such as Ruby and Python, so you have many options. The real payment amount appears here as part of the charge array. If the charge succeeds, the payment success message is stored in a div to be displayed.

[Screenshot: the server-side PHP script that creates the charge]
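Here is a minimal sketch of that server-side step using the 2014-era Stripe PHP bindings, not my exact script; the library path, the posted field names, and the messages are illustrative, and a production script should validate the posted amount.

<?php
require_once('lib/Stripe.php'); // path to the Stripe PHP library is an assumption

if (isset($_POST['stripeToken'])) {
    Stripe::setApiKey('sk_test_YOUR_SECRET_KEY');
    try {
        $charge = Stripe_Charge::create(array(
            'amount'      => intval($_POST['amount'] * 100), // Stripe expects cents; validate in production
            'currency'    => 'usd',
            'card'        => $_POST['stripeToken'],          // the one-time token, never a raw card number
            'description' => $_POST['email']
        ));
        $message = 'Your payment was received. Thank you!';
    } catch (Stripe_CardError $e) {
        $message = 'Your card was not charged: ' . $e->getMessage();
    }
}
?>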

The reason you do not have to worry about PCI compliance with Stripe is that the Stripe API asks to receive the payment information via AJAX, and the input fields for sensitive information do not have a name attribute and value (see the Card Holder Name and Card Number fields below as an example). By omitting the name attribute and value, the local server where the online form sits is deprived of any means to retrieve the information submitted in those input fields. Since the sensitive information does not touch the local server at all, PCI compliance for the local server becomes no concern. To clarify, not all fields in the payment form need to be deprived of the name attribute; only the sensitive fields that you do not want your web server to have access to need to be protected this way. Here, for example, I am assigning the name attribute and value to fields such as name and e-mail in order to use them later to send an e-mail receipt.


[Screenshot: the Card Holder Name and Card Number fields, which have no name attributes]

Now the modified form has ‘Fee Category,’ a custom ‘Payment Amount,’ and some other information relevant to my library’s billing purposes.

[Screenshot: the updated payment form]

When the payment succeeds, the page changes to display the following message.

[Screenshot: the payment success message]

Stripe provides a number of fake card numbers for testing, so you can test various failure cases. The Stripe website also displays all payments and the tokens and charges associated with them, which greatly helps troubleshooting. One thing that I noticed while troubleshooting is that the Stripe logs sometimes lag behind: when a payment succeeds, the associated token and charge may not appear under the “Logs” section immediately. But the payment itself shows up in the log right away, so you know the associated token and charge will eventually appear there as well.

[Screenshot: recent payments listed in the Stripe dashboard]

Once you are ready to process real payment transactions, you need to flip the switch from TEST to LIVE located in the top left corner. You will also need to replace your ‘TEST’ API keys (both secret and publishable) with those for ‘LIVE’ transactions. One more thing needed before your library can get paid with real money online is setting up SSL (Secure Sockets Layer) for your live online payment page. This is not required for testing but is necessary for processing live payment transactions. It is not very complicated work, so don’t be discouraged at this point: you just have to buy a security certificate and install it on your web server. Speak to your system administrator about getting SSL set up for your payment page; more information about setting up SSL can be found in the Stripe documentation.

My library has not yet gone live with this online payment option. Before we do, I may make some more modifications to the code to better fit the staff workflow, which is still being mapped out. I am also planning to place the online payment page behind the university’s Shibboleth authentication in order to cut down on spam and to save library patrons some tedious data entry by pulling their information, such as name, university email, and student/faculty/staff ID number, directly from the campus directory exposed through Shibboleth and inserting it into the payment form fields automatically.

In this post, I have described my experience testing out the Stripe API as an online payment solution. As I have mentioned above, however, there are many other online payment systems out there. Depending on your library’s environment and financial setup, different solutions may work better than others. To me, not having to worry about PCI compliance by using Stripe was a big plus. If your library accepts online payment, please share in the comments what solution you chose and what factors led you to that particular online payment system.

* This post has been based upon my recent presentation, “Accepting Online Payment for Your Library and ‘Stripe’ as an Example”, given at the Code4Lib DC Unconference. Slides are available at the link above.

Notes
  1. Meyer, Rosston. “10 Excellent Online Payment Systems.” Six Revisions, May 15, 2012. http://sixrevisions.com/tools/online-payment-systems/.
  2. Ullman, Larry. “Introduction to Stripe.” Larry Ullman, October 10, 2012. http://www.larryullman.com/2012/10/10/introduction-to-stripe/.

Please Don’t Kill the Web: A Screed on Performance & Austerity

Understanding web performance is important for everyone, whether you’re a front-end web developer working with HTML and CSS, an application developer writing services that will interact with the web, or a content author who writes for the web without using code. While there are only a few simple tricks to making web pages load quickly, they’re unfortunately eschewed all too often.

Libraries can ill afford to alienate anyone in their communities. We aim to be open institutions that welcome anyone to use our services, regardless of race, gender, class, creed, ability, or anything else. Yet when we make websites that work only for high-powered desktop computers with broadband connections, we privilege the wealthy. Design a slow enough website and even laptops on decent wireless connections may struggle to load a site in a timely fashion. But what about people who only own smart phones or people who live in rural areas where dial-up is the only option? Poor web performance renders sites unusable for some and frustrating for all.

But what can I do?!?

Below, I’ll elaborate on why certain practices make sites faster, but I want to outline some actions you can take to immediately speed up your sites.

First of all, test your main web properties to identify where the pain points are. Google PageSpeed and Yahoo’s YSlow are two good services that not only spot a broad range of potential problems but provide easy instructions for ameliorating them. They both output letter grades, but it’s not worth worrying about getting an A; simply find the laggard sites and look for low-hanging fruit to fix.

Speaking of low-hanging fruit, I’m going to guess you have images on your site. In fact, images make up the bulk of most websites’ download size.1 So if you’re a content author uploading images to a site, ensuring that their file sizes are as small as possible goes a long way to speeding up your site. First of all, ask if the image is really necessary. While images certainly aid a site’s visual appeal, so does clean design with thoughtful colors and highlighted calls to action. It’s becoming easier and easier to avoid images too, as CSS advances and techniques like SVG and icon fonts become more popular. In general, if an image isn’t a logo or picture, it can be replaced through those technologies with less of a performance impact.

If you’re certain an image is necessary, ensure that you’re uploading an appropriately sized one. If you upload a 1200×900 image which will be scaled down to 400×300 inside an <img> tag, change the dimensions before uploading. Lastly, run the image through an optimizer that removes useless gunk from the file (this includes metadata and unused color profiles). ImageOptim is a great choice on Mac. I don’t use Windows so I can’t vouch for these, but FileOptimizer and RIOT (Radical Image Optimization Tool) appear to be popular choices for that platform.

What’s the second largest source of bytes in a website? JavaScript. Scripts are typically far smaller than images but remain a prime opportunity to trim download size. You should always minify your JavaScript before putting it on a live site. What’s minification? It takes all the functionally useless bytes out of a file, producing text that’s operational but smaller. Here’s an example script before and after minification.

Before (236 bytes):

// add a proxy server prefix to all links with class=proxied
var links = document.querySelectorAll('a[href].proxied');

Array.prototype.forEach.call(links, function (link) {
    var proxy = 'https://proxy.example.com/login?url='
        , newHref = proxy + link.href;

    link.href = newHref;
});

After (169 bytes):

var links=document.querySelectorAll("a[href].proxied");Array.prototype.forEach.call(links,function(r){var e="https://proxy.example.com/login?url=",l=e+r.href;r.href=l});

By running my script through UglifyJS, I reduced its size by 28%. We can see where Uglify applied its tricks: it removed a comment, removed whitespace (including line breaks), and changed my local variable names to single letters. Yet my script’s actions are the same, so there’s no reason to deliver the larger, human-readable version to a browser.

Minifying JavaScript is a best practice, but CSS can also be minified. CSS benefits primarily through whitespace removal, because selectors and properties must all remain the same. While it may not be an option in situations where your pages are served up by a CMS or vendor application, HTML can benefit substantially from having comments removed, quotes taken off of attributes, and even closing tags erased (perfectly valid under HTML5). While the outcome is ugly, the vast majority of your users won’t peek at your source code but will appreciate the quicker page load.

There’s one last optimization that most content authors or front-end web developers can do: combine like files. Rather than make the browser request several separate scripts and stylesheets, combine your assets into as few files as possible. Saving HTTP requests helps sites load quickly, especially over cellular connections. While it’s possible to minify and combine files one-by-one by pasting them into online tools, ideally you have a development workflow that employs a tool like Grunt or Gulp to minify your text-based resources and concatenate them into one big package, as sketched below.2
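Though a full build workflow is beyond the scope of this post (see note 2), a minimal gulpfile sketch looks roughly like this, assuming the gulp, gulp-concat, and gulp-uglify packages have been installed via npm and that the file paths are adjusted to your site:

// gulpfile.js: combine every script in js/ into one minified file in build/
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

gulp.task('scripts', function () {
    return gulp.src('js/*.js')
        .pipe(concat('site.min.js')) // concatenate into a single file
        .pipe(uglify())              // minify the combined script
        .pipe(gulp.dest('build'));   // write the result to a build directory
});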

When going through one’s scripts and stylesheets looking for optimizations, it’s again pertinent to ask how much is truly necessary. Yes, you can accomplish some gorgeous things with just a little code, slick modal dialogs and luscious fly-out menus. But is that really what your website needs? Are users struggling because your nav bar doesn’t collapse and stick to the top of the page as you scroll, or because your link text is obscure? Throwing scripts at a site can create more issues than it resolves.

But what can my server admin do?!?

If you have access to your library’s servers, then you’ll be able to change a few settings that will make broad improvements. If not, try emailing your local server admin to see if they can make these alterations.

One of the easiest, and biggest, improvements you can make is to turn on compression. With compression on, your server packs up responses into smaller files (think .zip archives) before sending them over the web. This makes a huge difference with most web assets because they’re repetitive plain text, which is very amenable to compression algorithms. The best part is that turning on compression is typically a matter of copy-pasting a few lines into a server configuration file. If you’re unsure of how to do it, see the HTML5 Boilerplate Server Configs project. If you’re not certain your site is using compression, try the aptly-named GzipWTF.
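For instance, on Apache the core of it can be a sketch as small as this (the media types worth compressing vary by site; the HTML5 Boilerplate configs are far more thorough):

<IfModule mod_deflate.c>
  # compress text-based responses before they travel over the network
  AddOutputFilterByType DEFLATE text/html text/css application/javascript application/json
</IfModule>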

What’s the only thing faster than sending a minified, compressed file? Not having to send that file at all. Browsers cache assets if they’re told to, meaning that repeat visits to your site won’t incur repeat downloads. This significantly increases speed if a visitor spends a lot of time on your site. Caching is, again, a matter of inserting a few configuration lines. It can be a little tricky, however, because when you update cached files you want users to download the new version, bypassing their local cache. One work-around is filename-based versioning.3
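Again on Apache, a bare-bones caching sketch might look like the following; the one-month lifetime is an arbitrary example and pairs with the filename-based versioning mentioned above:

<IfModule mod_expires.c>
  ExpiresActive On
  # let browsers reuse these assets for a month before re-downloading
  ExpiresByType text/css "access plus 1 month"
  ExpiresByType application/javascript "access plus 1 month"
  ExpiresByType image/png "access plus 1 month"
</IfModule>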

Don’t have server access? Or are your servers taxed to the breaking point? Popular front-end resources are often available over Content Delivery Networks (CDNs). These CDNs have servers spread across the globe whose only job is to serve up static assets which get cached in browsers. Because many sites use them, users may already have assets cached in their browser (e.g. if they’ve visited Reddit recently, they have version 2.1.1 of the jQuery JavaScript library from a popular CDN run by Google). While Google and Microsoft run well-known CDNs for a few popular libraries, the lesser-known jsDelivr, CDNJS, and Bootstrap CDN give you even greater options.
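Loading jQuery from Google’s CDN, with a local fallback in case the CDN is unreachable, is a small swap like this (the local path is illustrative):

<script src="//ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script>window.jQuery || document.write('<script src="/js/jquery-2.1.1.min.js"><\/script>')</script>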

The Basic Rules

I’ve avoided the theory behind web performance thus far, but not because it’s complex. There are only two primary rules: send fewer bytes and send fewer requests. So given a choice between two files that fulfill the same function, choose the smaller one (the logic behind minification and compression). If you can combine multiple files into one or avoid sending something altogether, that saves requests (the logic behind concatenation and caching).

Sending fewer bytes makes intuitive sense; we know that larger files take longer to download. Not only that, we’ve all probably visually witnessed the effects of file size as manifested in progress bars or massive images slowly tiling into place. The benefits of concatenation, however, may seem peculiar. Why would sending one 50kb file be quicker than sending two 25kb files, or five 10kb files? It turns out that there are inefficient redundancies across multiple HTTP requests. A request has a few stages:

  • A DNS lookup translates a domain like “acrl.ala.org” into an IP address like “38.69.5.149”
  • The client and server establish a connection with a TCP handshake
  • The server receives the request and determines what to send back
  • Finally, the data is pushed out over the network

Throughout these steps there may be a great amount of latency as data travels through the air and over wires. All of that latency is repeated with each request, making multiple requests slower. So even if we serve all our assets from the same domain, thus requiring only one DNS request, and configure our server to not repeat the TCP connection on each request, there’s still additional overhead to multiple requests.

Here’s a visual explanation of this phenomenon:

While the DNS, TCP, and download times are identical for two scripts sent independently and the product of their concatenation, the total “Time To First Byte” (analogous to latency) is greater with the dual requests.

Incidentally, the next version of the HTTP specification—HTTP 2.0, currently active as the SPDY protocol—features request multiplexing. The ability to pack multiple requests into one connection virtually eliminates the need for concatenation. It’s a long way off though, since the spec hasn’t been finalized and it will need to be implemented in a backwards-compatible way in both web browsers and servers.

Don’t Sweat What You Can’t Control

Here’s a sad truth: a lot of your library’s web properties probably have terrible performance and there’s little you can do about it. Libraries rely heavily upon third-party websites that may give you a box for pasting custom HTML into, but no or limited control over page templates. Since we’re purchasing a web site and not a web service, we trade convenience for quality.4 Perhaps because making a generically useful website is really tough, or perhaps because they just don’t know, most of these vendor sites ignore best practices. Similarly, we tend to reuse large open source applications without auditing their performance characteristics. I began listing some of the more egregious offenders, but the list became so long that I’ve put it in a separate appendix below.

There’s More

There’s a lot more to web performance, including avoiding redirects, combining images into sprites, and putting scripts down towards the end of the <body> tag. An entire field has grown up around the topic, spurred on by the rising complexity of web applications and the prominence of mobile devices. Google’s Make the Web Faster project is a great place to learn more, as is Steve Souders’s dated but classic High Performance Web Sites. The main takeaway, however, is that understanding a few tenets of web performance can make a difference. By slimming your images and combining your CSS, you can make your sites accessible to more devices and quicker for everyone.

Appendix A: The Wall of Performance Shame

I don’t mean to imply that any of these software packages are poorly made or bad choices. In fact, I’m quite fond of many of them. But run a few through tools like Google PageSpeed and it quickly becomes apparent that even the easiest optimizations aren’t being applied.


LibGuides 1.0 loads all 5 of its scripts in the <head> and includes two large utility libraries, jQuery and Dojo, which I struggle to believe is necessary. LibGuides 2.0 does the right thing in relying on CDNs to serve up a few libraries, but still includes 5 scripts, two of which are unminified (despite having “.min” in the name).

The EQUELLA digital repository software loads 11 CSS files and 14 scripts in the <head> of the document. I didn’t bother to check them all, but the ones I did view are unminified (e.g. they’ve left jQuery at full size, which increases the download by 154kb for no good reason). This essentially crushes the page load on all but desktop devices with decent bandwidth; I cannot imagine a user enduring the wait on a phone’s 3G connection.

DSpace loads 9 stylesheets, most of which appear unminified, and 5 scripts (which, luckily, are minified) so far up in the <head> of the document that the <title> of the page loads noticeably slowly on a phone.

VuFind loads 10 scripts in a row in the <head>; these could be easily combined.

Thinking a little more broadly beyond library-specific software, Drupal could do a better job concatenating and minifying resources. While its performance tools have a simple checkbox for combining scripts, by default Drupal will still load multiple resources (3-6 stylesheets and scripts) on each page. The JavaScript and CSS that come with many contributed modules tend not to be minified for some reason, and Drupal sometimes doesn’t process them itself. I don’t know the best practices for providing client-side assets with a module, but many authors seem to be ignoring performance.

On a positive note, the Blacklight discovery layer does a great job of combining all CSS and JavaScript into one file apiece, probably due to the fact it’s built with Ruby on Rails which provides some nice tools for conforming to best practices. The Koha ILS also appears to keep the number of files down and pushes most of its scripts to the bottom of the document.

Notes

  1. The HTTP Archive is a wonderful source for data like this.
  2. These development workflows are a bit much to go into at the moment, but trust me that they’re not too difficult to set up. There are template configurations which you can use that make it relatively painless to simply point a piece of software at a folder full of scripts and watch an optimized output appear in a prescribed destination. If there’s interest in another blog post with more details, leave me a note in the comments and I’ll happily write one.
  3. This Stack Overflow answer gives a decent overview of filename-based versioning with some further reading. Google’s Make the Web Faster project has a nice article on how HTTP caching works.
  4. When I say “web service” I mean platforms which expose all their functionality through an API, which allows one to write a custom interface layer on top. That’s often a lot of work but would provide total control over things like the number and order of resources loaded in the HTML.

Bootstrap Responsibly

Bootstrap is the most popular front-end framework used for websites. An estimate by meanpath several months ago sat it firmly behind 1% of the web – for good reason: Bootstrap makes it relatively painless to puzzle together a pretty awesome plug-and-play, component-rich site. Its modularity is its key feature, developed so Twitter could rapidly spin up internal microsites and dashboards.

Oh, and it’s responsive. This is kind of a thing. There’s not a library conference today that doesn’t showcase at least one talk about responsive web design. There’s a book, countless webinars, courses, whole blogs dedicated to it (ahem), and more. The pressure for libraries to have responsive, usable websites can seem to come more from the likes of us than from the patron base itself, but don’t let that discredit it. The trend is clear and it is only a matter of time before our libraries have their mobile moment.

Library websites that aren’t responsive feel dated, and more importantly they are missing an opportunity to reach a bevy of mobile-only users that in 2012 already made up more than a quarter of all web traffic. Library redesigns are often quickly pulled together in a rush to meet the growing demand from stakeholders, pressure from the library community, and users. The sprint makes the allure of frameworks like Bootstrap that much more appealing, but Bootstrapped library websites often suffer the cruelest of responsive ironies:

They’re not mobile-friendly at all.

Assumptions that Frameworks Make

Let’s take a step back and consider whether using a framework is the right choice at all. A front-end framework like Bootstrap is a Lego set with all the pieces conveniently packed. It comes with a series of templates, a blown-out stylesheet, scripts tuned to the environment that let users essentially copy-and-paste fairly complex web-machinery into being. Carousels, tabs, responsive dropdown menus, all sorts of buttons, alerts for every occasion, gorgeous galleries, and very smart decisions made by a robust team of far-more capable developers than we.

Except for the specific layout and the content, every Bootstrapped site is essentially a complete organism years in the making. This is also the reason that designers sometimes scoff, joking that these sites look the same. Decked-out frameworks are ideal for rapid prototyping with a limited timescale and budget because the design decisions have by and large already been made. They assume you plan to use the framework as-is, and they don’t make customization easy.

In fact, Bootstrap’s guide points out that any customization is better suited to be cosmetic than a complete overhaul. The trade-off is that Bootstrap is otherwise complete. It is tried, true, usable, accessible out of the box, and only waiting for your content.

Not all Responsive Design is Created Equal

It is still common to hear the selling point for a swanky new site is that it is “responsive down to mobile.” The phrase probably rings a bell. It describes a website that collapses its grid as the width of the browser shrinks until its layout is appropriate for whatever screen users are carrying around. This is kind of the point – and cool, as any of us with a browser-resizing obsession could tell you.

Today, “responsive down to mobile” has a lot of baggage. Let me explain: it represents a telling and harrowing ideology, that for these projects mobile is the afterthought when mobile optimization should be the most important part. Library design committees don’t actually say this stuff aloud or conceive of it when researching options, but it should be implicit. When mobile is an afterthought, the committee presumes users are more likely to visit from a laptop or desktop than a phone (or refrigerator). This is not true.

See, a website, responsive or not, originally laid out for a 1366×768 desktop monitor in the designer’s office, wistfully depends on visitors with that same browsing context. If it looks good in-office and loads fast, then looking good and loading fast must be the default. “Responsive down to mobile” is divorced from the reality that a similarly wide screen is not the common denominator. As such, responsive down to mobile sites have a superficial layout optimized for the developers, not the user.

In a recent talk at An Event Apart, a conference in Atlanta, Georgia, Mat Marquis stated that 72% of responsive websites send the same assets to mobile devices as they do to desktops, and this is largely contributing to the web feeling slower. While setting img { width: 100%; } will scale media to fit snugly to its container, it still sends the same high-resolution image to a 320px-wide phone as to a 720px-wide tablet. A 1.6mb page loads very differently on a phone than on the machine it was designed on. The digital divide with which librarians are so familiar is certainly nowhere near closed, but while internet access is increasingly available, its ubiquity doesn’t translate to speed:

  1. 50% of users ages 12-29 are “mostly mobile” users, and you know what wireless connections are like,
  2. even so, the weight of the average website (currently 1.6mb) is increasing.

Last December, analysis of page-speed quantile data from an HTTP Archive crawl tried to determine how fast the web was getting slower. The fastest sites are slowing at a greater rate than the big bloated sites, likely because the assets we send – like increasingly high-resolution images to compensate for the increasing pixel density of our devices – are getting bigger.

The havoc this wreaks on the load times of “mobile friendly” responsive websites is detrimental. Why? Well, we know that

  • users expect a mobile website to load as fast on their phone as it does on a desktop,
  • three-quarters of users will give up on a website if it takes longer than 4 seconds to load,
  • the optimistic average load time for just a 700kb website on 3G is more like 10-12 seconds

eep O_o.

A Better Responsive Design

So there was a big change to Bootstrap in August 2013, when it was restructured from a “responsive down to mobile” framework to “mobile-first.” It was also given a simpler, flat design, which has a 100% faster paint time – but I digress. “Mobile-first” is key. Emblazon this over the door of the library web committee. Strike “responsive down to mobile.” Suppress the record.

Technically, “mobile-first” describes the structure of the stylesheet using CSS3 Media Queries, which determine when certain styles are rendered by the browser.

.example {
  styles: these load first;
}

@media screen and (min-width: 48em) {
  .example {
    styles: these load once the screen is 48 ems wide;
  }
}

The most basic styles are loaded first. As more space becomes available, designers can assume (sort of) that the user’s device has a little extra juice and that their connection may be better, so they start adding pizzazz. One might make the decision that, hey, most of the devices narrower than 48em (approximately 768px with a base font size of 16px) are probably touch-only, so let’s not load any hover effects until the screen is wider.
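That decision might look like this small sketch (the selector and color are illustrative):

/* no hover styling by default: narrow screens are likely touch-only */
@media screen and (min-width: 48em) {
  .nav a:hover {
    background-color: #eee; /* pizzazz reserved for wider, likely mouse-driven screens */
  }
}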

Nirvana

In a literal sense, mobile-first is asset management. More than that, mobile-first is this philosophical undercurrent, an implicit zen of user-centric thinking that aligns with libraries’ missions to be accessible to all patrons. Designing mobile-first means designing to the lowest common denominator: functional and fast on a cracked Blackberry at peak time; functional and fast on a ten year old machine in the bayou, a browser with fourteen malware toolbars trudging through the mire of a dial-up connection; functional and fast [and beautiful?] on a 23″ iMac. Thinking about the mobile layout first makes design committees more selective of the content squeezed on to the front page, which makes committees more concerned with the quality of that content.

The Point

This is the important statement that Bootstrap now makes. It expects the design committee to think mobile-first. It comes with all the components you could want, but they want you to trim the fat.

Future Friendly Bootstrapping

This is what you get in the stock Bootstrap:

  • buttons, tables, forms, icons, etc. (97kb)
  • a theme (20kb)
  • javascripts (30kb)
  • oh, and jQuery (94kb)

That’s almost 250kb of website. This is like a browser eating a brick of Mackinac Island Fudge – and this high-calorie bloat doesn’t include images. Consider that if the median load time for a 700kb page is 10-12 seconds on a phone, over a third of that time with out-of-the-box Bootstrap is spent loading just the assets.

While it’s not totally deal-breaking, 100kb is 5x as much CSS as an average site should have, as well as 15%-20% of what all the assets on an average page should weigh. (Josh Broton)

To put this in context, I like to fall back on Ilya Grigorik’s example comparing load time to user reaction in his talk “Breaking the 1000ms Time to Glass Mobile Barrier.” If the site loads in just 0-100 milliseconds, this feels instant to the user. By 100-300ms, the site already begins to feel sluggish. At 300-1000ms, uh – is the machine working? After 1 second there is a mental context switch, which means that the user is impatient, distracted, or consciously aware of the load-time. After 10 seconds, the user gives up.

By choosing not to pare down, your Bootstrapped library starts off on the wrong foot.

The Temptation to Widgetize

Even though Bootstrap provides modals, tabs, carousels, autocomplete, and other modules, this doesn’t mean a website needs to use them. Bootstrap lets you tailor which jQuery plugins are included in the final script. The hardest part of any redesign is letting quality content determine the tools, rather than letting the ability to tabularize or scrollspy become an excuse to implement them. Oh, don’t Google those. I’ll touch on tabs and scrollspy in a few minutes.

I am going to be super presumptuous now and walk through the total Bootstrap package, then make recommendations for lightening the load.

Transitions

Transitions.js is a fairly lightweight CSS transition polyfill. What this means is that the script checks to see if your user’s browser supports CSS transitions, and if it doesn’t, it simulates those transitions with JavaScript. For instance, CSS transitions often handle the smooth, uh, transition between colors when you hover over a button. They are also a little more than just pizzazz: in a recent article, Rachel Nabors shows how transition and animation increase the usability of a site by guiding the eye.

With that said, CSS Transitions have pretty good browser support and they probably aren’t crucial to the functionality of the library website on IE9.

Recommendation: Don’t Include.

Modals

“Modals” are popup windows, and there are plenty of neat things you can do with them. At the same time, modals are a pain to design consistently for every browser. Let Bootstrap do that heavy lifting for you.
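For reference, the Bootstrap 3 pattern is a trigger plus some nested markup, roughly like this (the id and text are illustrative):

<button type="button" class="btn btn-primary" data-toggle="modal" data-target="#hoursModal">
  Library Hours
</button>

<div class="modal fade" id="hoursModal" tabindex="-1" role="dialog">
  <div class="modal-dialog">
    <div class="modal-content">
      <div class="modal-header"><h4 class="modal-title">Library Hours</h4></div>
      <div class="modal-body">Open 8am to 10pm, Monday through Friday.</div>
    </div>
  </div>
</div>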

Recommendation: Include

Dropdown

It’s hard to conclude a library website design committee meeting without a lot of links in your menu bar. Dropdown menus are kind of tricky to code, and Bootstrap does a really nice job of keeping them a consistent and responsive experience.

Recommendation: Include

Scrollspy

If you have a fixed sidebar or menu that follows the user as they read, scrollspy.js can highlight the section of that menu you are currently viewing. This is useful if your site has a lot of long-form articles, or if it is a one-page app that scrolls forever. I’m not sure this describes many library websites, but even if it does, you probably want more functionality than Scrollspy offers. I recommend jQuery-Waypoints – but only if you are going to do something really cool with it.

Recommendation: Don’t Include

Tabs

Tabs are a good way to break up a lot of content without actually putting it on another page. A lot of libraries use some kind of tab widget to handle the different search options. If you are writing guides or tutorials, tabs could be a nice way to display the text.

Recommendation: Include

Tooltips

Tooltips are often descriptive popup bubbles for a section, option, or icon requiring more explanation. Tooltips.js helps handle the predictable positioning of the tooltip across browsers. With that said, I don’t think tooltips are that engaging; they’re sometimes appropriate, but you definitely used to see more of them in the past. Your library’s time is better spent de-jargoning any content that would warrant a tooltip. Need a tooltip? Why not just make whatever needs the tooltip more obvious O_o?

Recommendation: Don’t Include

Popover

Even fancier tooltips.

Recommendation: Don’t Include

Alerts

Alerts.js lets your users dismiss alerts that you might put in the header of your website. It’s always a good idea to give users some kind of control over these things. Better they read and dismiss than get frustrated from the clutter.

Recommendation: Include

Collapse

The collapse plugin allows for accordion-style sections for content distributed much as you might distribute it across tabs. The ease-in-ease-out animation triggers motion-sickness and other aaarrghs among users with vestibular disorders. You could just use tabs.

Recommendation: Don’t Include

Button

Button.js gives a little extra jolt to Bootstrap’s buttons, allowing them to communicate an action or state. Imagine you fill out a reference form and click “submit”: Button.js will put a little loader icon in the button itself and change the text to “sending ….” This way, users are told that the process is running, and maybe they won’t feel compelled to click and click and click until the page refreshes. This is a good thing.
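A rough sketch of that pattern, with an illustrative id and a hypothetical sendForm() helper standing in for the actual AJAX request:

<button type="button" id="ask-submit" class="btn btn-primary"
        data-loading-text="sending ...">Submit</button>

<script>
$('#ask-submit').on('click', function () {
    var $btn = $(this).button('loading'); // swap in the "sending ..." state
    sendForm().always(function () {       // sendForm() is a stand-in for your request
        $btn.button('reset');             // restore the button when the request finishes
    });
});
</script>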

Recommendation: Include

Carousel

Carousels are among the most popular design elements on the web, letting a website slideshow content like upcoming events or new material. Carousels exist because design committees must be appeased. There are all sorts of reasons why you probably shouldn’t put a carousel on your website: they are largely inaccessible, have low engagement, are slooooow, and kind of imply that libraries hate their patrons.

Recommendation: Don’t Include.

Affix

I’m not exactly sure what this does. I think it’s a fixed-menu thing. You probably don’t need this. You can use CSS.

Recommendation: Don’t Include

Now, Don’t You Feel Better?

Compare just the bootstrap.js and bootstrap.min.js files between out-of-the-box Bootstrap and one tailored to the specs above. Even without considering the differences in the CSS, the weight of the images not included in a carousel (not to mention the unquantifiable amount of pain you would have inflicted), the numbers are telling:

File              Before   After
bootstrap.js      54kb     19kb
bootstrap.min.js  29kb     10kb

So, Bootstrap Responsibly

There is more to say. When bouncing this topic around Twitter awhile ago, Jeremy Prevost pointed out that Bootstrap’s minified assets can be gzipped down to about 20kb total. This is the right way to serve assets from any framework; it requires an Apache config or .htaccess rule. Here is the .htaccess file used in HTML5 Boilerplate. You’ll find it well commented and modular: go ahead and just copy and paste the parts you need. You can eke out even more performance by “lazy loading” scripts at a given time, but that is a little out of the scope of this post.

Here’s the thing: when we talk about having good library websites, we’re mostly talking about the look. This is the wrong discussion. Web designs driven by anything but the content a library already has make grasping assumptions about how slick it would look to have this killer carousel, these accordions, nifty tooltips, and of course a squishy responsive design. Subsequently, these responsive sites miss the point: if anything, they’re mobile unfriendly.

Much of the time, a responsive library website is used as a marker that such-and-such site is credible and not irrelevant, but as such the website reflects a lack of purpose (e.g., “this website needs to increase library-card registration”). A superficial understanding of responsive web design and easy-to-grab frameworks leaves the patron as the lowest priority.

About Our Guest Author:

Michael Schofield is a front-end librarian in south Florida, where it is hot and rainy – always. He tries to do neat things there. You can hear him talk design and user experience for libraries on LibUX.


Two Free Methods for Sharing Google Analytics Data Visualizations on a Public Dashboard

At this point, Google Analytics is arguably the standard way to track website usage data. It’s easy to implement, but powerful enough to capture both broad general usage trends and very granular patterns. However, it is not immediately obvious how to share Google Analytics data either with internal staff or external stakeholders – both of whom often have widespread demand for up-to-date metrics about library web property performance. While Google Analytics administrators can grant access to individual Google account-holders through the Google Analytics Admin screen, publishing data without requiring authentication takes some intermediary steps.

There are two main free methods of publishing Google Analytics visualizations to the web, and both involve using the Google Analytics API: OOCharts and the Google Analytics superProxy. Both methods rely upon an intermediary service to retrieve data from the API and cache it, which both improves retrieval time and avoids exceeding limits on API requests.1 The first method, OOCharts, requires much less time to get running initially. However, OOCharts’ long-standing beta status and its position as a stand-alone service give it less potential for long-term support than the second method, the Google Analytics superProxy. For that reason, while OOCharts is certainly easier to set up, the superProxy method is worth the investment in time, depending on your needs. I’ll cover both methods.

OOCharts Beta

OOCharts’ service facilitates the creation and storage of Google Analytics API Keys, which are required for sending secure requests to the API in order to retrieve data.

When setting up an OOCharts account, create your account using the email address with which you access Google Analytics. For example, if you log into Google Analytics using admin@mylibrary.org, use that email address to sign up for OOCharts. After creating an account with OOCharts, you will be directed to Google Analytics to authorize the OOCharts service to access your Google Analytics data on your behalf.

Let OOCharts gather Google Analytics Data on your Behalf.

After authorizing the service, you will be able to generate an API key for any of the properties to which your Google Analytics account has access.

In the OOCharts interface, click your name in the upper right corner, and then click inside the Add API Key field to see a list of available Analytics Properties from your account.


Once you’ve selected a property, OOCharts will generate a key for your OOCharts application. When you go back to the list of API Keys, you’ll see your keys along with property IDs (shown in brackets after the URL of your properties, e.g., [9999999]).


Your API Keys now show your key values and your Google Analytics property IDs, both of which you’ll need to create visualizations with the OOCharts library.


Creating Visualizations with the OOCharts JavaScript Library

OOCharts appears to have started as a simple chart library for visualizing data returned from the Google Analytics API. After you have set up your OOCharts account, download the front-end JavaScript library (available at http://docs.oocharts.com/) and upload it to your server or localhost. Navigate to the /examples directory and locate the ‘timeline.html’ file. This file contains a simplified example that displays web visits over time in Google Analytics’ familiar timeline format.

The code for this page is very simple and contains two methods – JavaScript-only and HTML attributes – for creating OOCharts visualizations. Below, I’ve separated out the required elements for both methods. While either method will work on its own, using HTML attributes allows for additional customization and styling:

JavaScript-Only
		<h3>With JS</h3>
		<div id='chart'></div>
		
		<script src='../oocharts.js'></script>
		<script type="text/javascript">

			window.onload = function(){

				oo.setAPIKey("{{ YOUR API KEY }}");

				oo.load(function(){

					var timeline = new oo.Timeline("{{ YOUR PROFILE ID }}", "180d");

					timeline.addMetric("ga:visits", "Visits");

					timeline.addMetric("ga:newVisits", "New Visits");

					timeline.draw('chart');

				});
			};

		</script>
With HTML Attributes:
<h3>With HTML Attributes</h3>
	<div data-oochart='timeline' 
        data-oochart-metrics='ga:visits,Visits,ga:newVisits,New Visits' 
        data-oochart-start-date='180d' 
        data-oochart-profile='{{ YOUR PROFILE ID }}'></div>
		
			<script src='../oocharts.js'></script>
			<script type="text/javascript">

			window.onload = function(){

				oo.setAPIKey("{{ YOUR API KEY }}");

				oo.load(function(){
				});
			};

		</script>


For either method, plug in {{ YOUR API KEY }} where indicated with the API key generated in OOCharts, and replace {{ YOUR PROFILE ID }} with the associated eight-digit profile ID. Load the page in your browser, and you get this:

With the API Key and Profile ID in place, the timeline.html example looks like this. In this example I also adjusted the date parameter (30d by default) to 180d for more data.

This example shows you two formats for the chart – one that is driven solely by JavaScript, and another that can be customized using HTML attributes.  For example, you could modify the <div> tag to include a style attribute or CSS class to change the width of the chart, e.g.:

<h3>With HTML Attributes</h3>

<div style="width:400px" data-oochart='timeline' 
data-oochart-start-date='180d' 
data-oochart-metrics='ga:visits,Visits,ga:newVisits,New Visits' 
data-oochart-profile='{{ YOUR PROFILE ID }}'></div>

Here’s the same example.html file showing both the JavaScript-only format and the HTML-attributes format, now with a bit of styling on the HTML attributes chart to make it smaller:

You can use styling to adjust the HTML attributes example.

Easy, right?  So what’s the catch?

OOCharts only allows 10,000 requests a month, which is even easier to exceed than the 50,000-per-day limit on the Google Analytics API. Each time your page loads, you use another request. Perhaps more importantly, your Analytics API key and profile ID are pretty much ‘out there’ for the world to see if they view your page source, because those values are stored in your client-side JavaScript.2 If you’re making a private intranet for your library staff, that’s probably not a big deal; but if you want to publish your dashboard fully to the public, you’ll want to make sure those values are secure. You can do this with the Google Analytics superProxy.

Google Analytics superProxy

In 2013, Google Analytics released a method of accessing Google Analytics API data that doesn’t require end-users to authenticate in order to view data, known as the Google Analytics superProxy.  Much like OOCharts, the superProxy facilitates the creation of a query engine that retrieves Google Analytics statistics through the Google Analytics Core Reporting API, caches the statistics in a separate web application service, and enables the display of Google Analytics data to end users without requiring individual authentication. Caching the data has the additional benefit of ensuring that your application will not exceed the Google Core Reporting API request limit of 50,000 requests each day. The superProxy can be set up to refresh a limited number of times per day, and most dashboard applications only need a daily refresh of data to stay current.

The required elements of this method are available on the superProxy Github page (Google Analytics, “Google Analytics superProxy”). There are four major parts to the setup of the superProxy:

  1. Setting up Google App Engine hosting,
  2. Preparing the development environment,
  3. Configuring and deploying the superProxy to Google’s App Engine Appspot host; and
  4. Writing and scheduling queries that will be used to populate your dashboard.

Set up Google App Engine hosting

First, use your Google Analytics account credentials to access the Google App Engine at https://appengine.google.com/start. The superProxy application you will be creating will be freely hosted by the Google App Engine. Create your application and designate an Application Identifier that will serve as the endpoint domain for queries to your Google Analytics data (e.g., mylibrarycharts.appspot.com).

Create your App Engine Application

You can leave the default authentication option, Open to All Google Users, selected. This setting only reflects access to your App Engine administrative screen and does not affect the ability of end-users to view the dashboard charts you create. Only those Google users who have been authorized to access Google Analytics data will be able to access any Google Analytics information through the Google App Engine.

Ensure that API access to Google Analytics is turned on under the Services pane of the Google Developer’s Console: under APIs and Auth for your project, visit the APIs menu and make sure the Analytics API is turned on.

Turn on the Google Analytics API. Make sure the name under the Projects label in the upper left corner is the same as your newly created Google App Engine project (e.g., librarycharts).


Then visit the Credentials menu to set up an OAuth 2.0 Client ID. Set the Authorized JavaScript Origins value to your appspot domain (e.g., http://mylibrarycharts.appspot.com). Use the same value for the Authorized Redirect URI, but add /admin/auth to the end (e.g., http://mylibrarycharts.appspot.com/admin/auth). Note the OAuth Client ID, OAuth Client Secret, and OAuth Redirect URI that are stored here, as you will need to reference them later before you deploy your superProxy application to the Google App Engine.

Finally, visit the Consent Screen menu and choose an email address (such as your Google account email address), fill in the product name field with your Application Identifier (e.g., mylibrarycharts) and save your settings. If you do not include these settings, you may experience errors when accessing your superProxy application admin menu.

Prepare the Development Environment

In order to configure the superProxy and deploy it to Google App Engine, you will need Python 2.7 installed and the Google App Engine Launcher (a.k.a. the Google App Engine SDK). Python just needs to be installed for the App Engine Launcher to run; don’t worry, no Python coding is required.

Configure and Deploy the superProxy

The superProxy application is available from the superProxy Github page. Download the .zip files and extract them onto your computer into a location you can easily reference (e.g., C:/Users/yourname/Desktop/superproxy or /Applications/superproxy). Use a text editor such as Notepad or Notepad++ to edit src/app.yaml to include your Application ID (e.g., mylibrarycharts). Then edit src/config.py to include the OAuth Client ID, OAuth Client Secret, and OAuth Redirect URI that were generated when you created the Client ID in the Google Developer’s Console under the Credentials menu. Detailed instructions for editing these files are available on the superProxy Github page.
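As a rough sketch of the app.yaml edit, the top of the file might look like this after you fill in your own Application Identifier; the values shown are illustrative, and the rest of the file ships with the download and can be left alone:

application: mylibrarycharts
version: 1
runtime: python27
api_version: 1
threadsafe: true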

After you have edited and saved src/app.yaml and src/config.py, open the Google App Engine Launcher previously downloaded. Go to File > Add Existing Application. In the dialogue box that appears, browse to the location of your superProxy app’s /src directory.

To upload your superProxy application, use the Google App Engine Launcher and browse to the /src directory where you saved and configured your superProxy application.

Click Add, then click the Deploy button in the upper right corner of the App Engine Launcher. You may be asked to log into your Google account, and a log console may appear informing you of the deployment process. When deployment has finished, you should be able to access your superProxy application’s Admin screen at http://[yourapplicationID].appspot.com/admin, replacing [yourapplicationID] with your Application Identifier.

Creating superProxy Queries

SuperProxy queries request data from your Google Analytics account and return that data to the superProxy application. When the data is returned, it is made available at an end-point that can be used to populate charts, graphs, or other data visualizations. Most data available to you through the Google Analytics native interface is available through superProxy queries.

An easy way to get started with building a query is to visit the Google Analytics Query Explorer. You will need to login with your Google Analytics account to use the Query Explorer. This tool allows you to build an example query for the Core Reporting API, which is the same API service that your superProxy application will be using.

Google Analytics API Query Explorer

Running example queries through the Google Analytics Query Explorer can help you to identify the metrics and dimensions you would like to use in superProxy queries. Be sure to note the metrics and dimensions you use, and also be sure to note the ids value that is populated for you when using the API Explorer.

When experimenting with the Google Analytics Query explorer, make note of all the elements you use in your query. For example, to create a query that retrieves the number of users that visited your site between July 4th and July 18th 2014, you will need to select your Google Account, Property and View from the drop-down menus, and then build a query with the following parameters:

  • ids = this is a number (usually 8 digits) that will be automatically populated for you when you choose your Google Analytics Account, Property and View. The ids value is your property ID, and you will need this value later when building your superProxy query.
  • dimensions = ga:browser
  • metrics = ga:users
  • start-date = 2014-07-04
  • end-date = 2014-07-18

You can set the max-results value to limit the number of results returned. For queries that could potentially have thousands of results (such as individual search terms entered by users), limiting to the top 10 or 50 results will retrieve data more quickly. Clicking on any of the fields will generate a menu from which you can select available options. Click Get Data to retrieve Google Analytics data and verify that your query works.

Successful Google Analytics Query Explorer query result showing visits by browser.

After building a successful query, you can replicate the query in your superProxy application. Return to your superProxy application’s admin page (e.g., http://[yourapplicationid].appspot.com/admin) and select Create Query. Name your query something to make it easy to identify later (e.g., Users by Browser). The Refresh Interval refers to how often you want the superProxy to retrieve fresh data from Google Analytics. For most queries, a daily refresh of the data will be sufficient, so if you are unsure, set the refresh interval to 86400. This will refresh your data every 86400 seconds, or once per day.

Create a superProxy Query

We can reuse all of the elements of queries built in the Google Analytics Query Explorer to build the superProxy query's encoded URI. Here is an example of an encoded URI that queries the number of users (organized by browser) that have visited a web property in the last 30 days (you'll need to enter your own profile ID in the ids value for this to work):

https://www.googleapis.com/analytics/v3/data/ga?ids=ga:99999991
&metrics=ga:users&dimensions=ga:browser&max-results=10
&start-date={30daysago}&end-date={today}

Before saving, be sure to run Test Query to see a preview of the kind of data that is returned by your query. A successful query will return a JSON response, e.g.:

{"kind": "analytics#gaData", "rows": 
[["Amazon Silk", "8"], 
["Android Browser", "36"], 
["Chrome", "1456"], 
["Firefox", "1018"], 
["IE with Chrome Frame", "1"], 
["Internet Explorer", "899"], 
["Maxthon", "2"], 
["Opera", "7"], 
["Opera Mini", "2"], 
["Safari", "940"]], 
"containsSampledData": false, 
"totalsForAllResults": {"ga:users": "4398"}, 
"id": "https://www.googleapis.com/analytics/v3/data/ga?ids=ga:84099180&dimensions=ga:browser&metrics=ga:users&start-date=2014-06-29&end-date=2014-07-29&max-results=10", 
"itemsPerPage": 10, "nextLink": "https://www.googleapis.com/analytics/v3/data/ga?ids=ga:84099180&dimensions=ga:browser&metrics=ga:users&start-date=2014-07-04&end-date=2014-07-18&start-index=11&max-results=10", 
"totalResults": 13, "query": {"max-results": 10, "dimensions": "ga:browser", "start-date": "2014-06-29", "start-index": 1, "ids": "ga:84099180", "metrics": ["ga:users"], "end-date": "2014-07-18"}, 
"profileInfo": {"webPropertyId": "UA-9999999-9", "internalWebPropertyId": "99999999", "tableId": "ga:84099180", "profileId": "9999999", "profileName": "Library Chart", "accountId": "99999999"}, 
"columnHeaders": [{"dataType": "STRING", "columnType": "DIMENSION", "name": "ga:browser"}, {"dataType": "INTEGER", "columnType": "METRIC", "name": "ga:users"}], 
"selfLink": "https://www.googleapis.com/analytics/v3/data/ga?ids=ga:84099180&dimensions=ga:browser&metrics=ga:users&start-date=2014-06-29&end-date=2014-07-29&max-results=10"}

Once you've tested a successful query, save it; saving makes the JSON response accessible to applications that can visualize the data. After saving, you will be directed to the management screen for your API, where you will need to click Activate Endpoint to begin publishing the results of the query in a way that is retrievable. Then click Start Scheduling so that the query data is refreshed on the schedule you set when you built the query (e.g., once a day). Finally, click Refresh Data to return data for the first time so that you can start interacting with the data returned from your query. Return to your superProxy application's Admin page, where you will be able to manage your query and locate the public end-point needed to create a chart visualization.

Using the Google Visualization API to visualize Google Analytics data

Included with the superProxy .zip file downloaded to your computer from the Github repository is a sample .html page located under /samples/superproxy-demo.html. This file uses the Google Visualization API to generate two pie charts from data returned from superProxy queries. The Google Visualization API is a service that can ingest raw data (such as json arrays that are returned by the superProxy) and generate visual charts and graphs. Save superproxy-demo.html onto a web server or onto your computer’s localhost.  We’ll set up the first pie chart to use the data from the Users by Browser query saved in your superProxy app.

Open superproxy-demo.html and locate this section:

var browserWrapper = new google.visualization.ChartWrapper({
  // Example Browser Share Query
  "containerId": "browser",
  // Example URL: http://your-application-id.appspot.com/query?id=QUERY_ID&format=data-table-response
  "dataSourceUrl": "REPLACE WITH Google Analytics superProxy PUBLIC URL, DATA TABLE RESPONSE FORMAT",
  "refreshInterval": REPLACE_WITH_A_TIME_INTERVAL,
  "chartType": "PieChart",
  "options": {
    "showRowNumber": true,
    "width": 630,
    "height": 440,
    "is3D": true,
    "title": "REPLACE WITH TITLE"
  }
});

Three values need to be modified to create a pie chart visualization:

  • dataSourceUrl: This value is the public end-point of the superProxy query you have created. To get this value, navigate to your superProxy admin page and click Manage Query on the Users by Browser query you have created. On this page, right-click the DataTable (JSON Response) link and copy the URL. Paste the copied URL into superproxy-demo.html, replacing the text REPLACE WITH Google Analytics superProxy PUBLIC URL, DATA TABLE RESPONSE FORMAT. Leave quotes around the pasted URL.
Right-click the DataTable (JSON Response) link and copy the URL to your clipboard. The copied link will serve as the dataSourceUrl value in superproxy-demo.html.

  • refreshInterval – you can leave this value the same as the Refresh Interval of your superProxy query (in seconds) – e.g., 86400.
  • title – this is the title that will appear above your pie chart, and should describe the data your users are looking at – e.g., Users by Browser.
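
superproxy-demo.html already contains the surrounding boilerplate that loads the Visualization API and draws the chart, but if you are adapting this to a page of your own, that boilerplate looks roughly like the following sketch (assuming the classic Google jsapi loader the demo uses and a div with the id browser):

<script src="https://www.google.com/jsapi"></script>
<script>
// Load the Visualization API along with the corechart package.
google.load("visualization", "1", {"packages": ["corechart"]});
// Once the API is ready, the ChartWrapper defined above fetches its
// data from the superProxy end-point and draws into the "browser" div.
google.setOnLoadCallback(function () {
  browserWrapper.draw();
});
</script>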

Save the modified file to your server or local development environment, and load the saved page in a browser.  You should see a rather lovely pie chart:

Your pie chart’s data will refresh automatically based upon the Refresh Interval you specify in your superProxy query and your page’s JavaScript parameters.

That probably seemed like a lot of work just to make a pie chart.  But now that your app is set up, making new charts from your Google Analytics data just involves visiting your App Engine site, scheduling a new query, and referencing that with the Google Visualization API.  To me, the Google superProxy method has three distinct advantages over the simpler OOCharts method:

  • Security – Users won’t be able to view your API Keys by viewing the source of your dashboard’s web page
  • Stability – OOCharts might not be around forever.  For that matter, Google’s free App Engine service might not be around forever, but betting on Google is [mostly] a safe bet
  • Flexibility – You can create a huge range of queries and test them out easily using the Query Explorer, and the Google Visualization API has extensive documentation and a fairly active user group from whom to gather advice and examples.


Notes

  1. There is a 50,000 request per day limit on the Analytics API.  That sounds like a lot, but it’s surprisingly easy to exceed. Consider creating a dashboard with 10 charts, each making a call to the Analytics API.  Without a service that caches the data, the data is refreshed every time a user loads a page.  After just 5,000 visits to the page (which makes 10 API calls – one for each chart – each time the page is loaded), the API limit is exceeded:  5,000 page loads x 10 calls per page = 50,000 API requests.
  2. You can use pre-built OOCharts Queries – https://app.oocharts.com/mc/query/list – to hide your profile ID (but not your API Key). There are many ways to minify and obfuscate client-side JavaScript to make it harder to read, but it's still pretty much accessible to someone who wants to get it.

LibraryQuest Levels Up

Almost a year ago, GVSU Libraries launched LibraryQuest, our mobile quest-based game. It was designed to teach users about library spaces and services in a way that (we hoped) would be fun and engaging.  The game was released “into the wild” in the last week of August, 2013, which is the beginning of our fall semester.  It ran continuously until late November, shortly after midterms (we wanted to end early enough in the semester that we still had students on campus for post-game assessment efforts).  For details on the early development of the game, take a look at my earlier ACRL TechConnect post.  This article will focus on what happened after launch.

A screenshot from one of our simpler quests as it was being played.

Running the Game

Once the app was released, we settled on a schedule that would put out three to five new quests each month the game ran. Designing quests is very time intensive, and 3-5 a month was all we could manage with the man and woman power we had available. We also had short-duration quests run at random intervals to encourage students to keep checking the app. Over the course of the game, we created about 30 quests total. Almost all quests were designed with a specific educational objective in mind, such as showing students how a specific library system worked or where something or someone was in the physical building. Quests were chiefly designed by our Digital Initiatives Librarian (me) with help and support from our implementation team and other staff in the library as needed.

For most of the quests, we developed quest write-up sheets like this one: Raiders of the Lost…Bin. The sheets detailed the name of the quest, points, educational objective, steps, completion codes, and any other information that defined the quest. These sheets proved invaluable whenever a staff member needed to know something about a quest, which was often. Even simple quests like the one above required a fair amount of cooperation and coordination. For the raiders quest, we needed a special cataloging record created, and we had to tag several plastic crowns and get them into our automatic storage and retrieval system.

For every quest players completed, they earned points.  For every thirty points they earned, they were entered once in a drawing to win an iPad.  This was a major component of the game’s advertising, since we imagined it would be the biggest draw to play (and we may not have been right, as you’ll see).  Once the game closed in November, we held the drawing, publicized the winner, and then commenced a round of post-game assessment.

Thank You for Playing: Post-Game Assessment

When the game wrapped in mid-November, we took some time to examine the statistics the game had collected.  One of our very talented design students created a game dashboard that showed all the metrics collected by the game database in graphic form. The final tally of registered players came in at 397. That means 397 people downloaded the app and logged in at least once (in case you’re curious, the total enrollment of GVSU is 25,000 students). This number probably includes a few non-students (since anyone could download the app), but we did some passes throughout the life of the game to remove non-student players from the tally and so feel confident that the vast majority of registered players are students. During development, we set a goal of having at least 300 registered players, based mostly on the cost of the game and how much money we had spent on other outreach efforts.  So we did, technically, meet that goal, but a closer examination of the numbers paints a more nuanced picture of student participation.

A detail from our game dashboard. This part shows overall numbers and quest completion dates.

Of the registered players, 173 earned points, meaning they completed at least one quest.  That means that 224 players downloaded the app and logged in at least once, but then failed to complete any quest content.  Clearly, getting players to take the first step and get involved in the game was somehow problematic.  There are any number of explanations for this, including encounters with technical problems that may have turned players off (the embedded QR code scanner was a problem throughout the life of the game), an unwillingness to travel to locations to do physical quests, or something else entirely.  The maximum number of points you could earn was 625, which was attained by one person, although a few others came close. Players tended to cluster at the lower and middle of the point spectrum, which was entirely expected.  Getting the maximum number of points required a high degree of dedication, since it meant paying very close attention to the app for all the temporary, randomly appearing quests.

Another detail from the game dashboard, showing acceptance and completion metrics for some of the quests. We used this extensively to determine which quests to retire at the end of the month.

In general, online-only quests were more popular than quests involving physical space, and were taken and completed more often.  Of the top five most-completed quests, four are online-only.  There are a number of possible explanations for this, including the observation offered by one of our survey recipients that possibly a lot of players were stationed at our downtown campus and didn’t want to travel to our Allendale campus, which is where most of the physical quests were located.

Finally, of our 397 registered users, only 60 registered in the second semester the game ran.  The vast majority signed up soon after game launch, and registrations tapered off over time.  This reinforced data we collected from other sources that suggested the game ran for too long and the pacing needed to be sped up.

In addition to data collected from the game itself, we also put out two surveys over the course of the game. The first was a mid-game survey that asked questions about quest design (what quests students liked or didn't like, and why). Responses to this survey were bewilderingly contradictory. Some students would cite a quest as their favorite, while others would cite the exact same quest as their least favorite (and often for the same reasons). The qualitative post-game evaluation we did provides some possible explanation for this (see below). The second survey was a simple post-game questionnaire that asked whether students had enjoyed the game, whether they'd learned something (and if so, what), and if this was something we should continue doing. 90% of the respondents to this survey indicated that they had learned something about the library, that they thought this was a good idea, and that it was something we should do again.

Finally, we offered players points and free coffee to come in to the library and spend 15-20 minutes talking to us about their experience playing the game. We kept questions short and simple to keep within the time window. We asked about overall impressions of the game, if the students would change anything, if they learned anything (and if so, what), and about what quests they liked or didn't like and why. The general tone of the feedback was very positive. Students seemed intrigued by the idea and appreciated that the library was trying to teach in nontraditional, self-directed ways. When asked to sum up their overall impressions of the game, students said things like “Very well done, but could be improved upon” or “good but needs polish,” or my personal favorite: “an effective use of bribery to learn about the library.”

One of the things we asked people about was whether the game had changed how they thought about the library. They typically answered that it wasn't so much that the game had changed how they thought about the library as that it changed the way they thought about themselves in relation to it. They used words like “aware,” “confident,” and “knowledgeable.” They felt like they knew more about what they could do here and what we could do for them. Their retention of some of the quest content was remarkable, including library-specific lingo and knowledge of specific procedures (like how to use the retrieval system and how document delivery worked).

Players noted a variety of problems with the game. Some were technical in nature. The game app takes a long time to load, likely because of the way the back-end is designed. Some of them didn't like the Facebook login. Stability on Android devices was problematic (this is no surprise, as developing the Android version was by far the more problematic part of developing the app). Other problems were nontechnical, including quest content that didn't work or took too long (my own lack of experience designing quests is to blame), communication issues (there's no way to let us know when quest content doesn't work), the flow and pacing of new quests (more content faster), and marketing issues. These problems may in part account for the low onboarding numbers in terms of players who actually completed content.

They also had a variety of reasons for playing. While most cited the iPad grand prize as the major motivator, several of them said they wanted to learn about the library or were curious about the game, and that they thought it might be fun. This may explain differing reactions to the quest content survey that so confused me.  People who just wanted to have fun were irked by quests that had an overt educational goal.  Students who just wanted the iPad didn’t want to do lengthy or complex quests. Students who loved games for the fun wanted very hard quests that challenged them.  This diversity of desire is something all game developers struggle to cope with, and it’s a challenge for designing popular games that appeal to a wide variety of people.

Where to Go from Here

Deciding whether or not LibraryQuest has been successful depends greatly on what angle you look at the results from. On one hand, the game absolutely taught people things. Students in the survey and interviews were able to list concrete things they knew how to do, often in detail and using terminology directly from the game. One student proudly showed us a book she had gotten from ILL, which she hadn't known how to use before she played. On the other hand, the overall participation was low, especially when contrasted against the expense and staff time of creating and running the game. Looking only at the money spent (approximately $14,700), it's easy to calculate a cost of about $85 per student reached (173 with points) in development, prizes, and advertising. The challenge is creating engaging games that appeal to a large number of students in a way that's economical in terms of staff time and resources.

After looking at all of this data and talking to Yeti CGI, our development partners, we feel the results are promising enough that we should continue to experiment. Both organizations feel there is still a great deal to learn about making games in physical space and that we've just scratched the surface of what we might be able to do. With the lessons we have learned from this round of the game, we are looking to completely redesign the way the game app works, as well as revise the game into a shorter, leaner experience that does not require as much content or run so long. In addition, we are seeking campus partners who would be interested in using the app in classes, as part of student life events, or in campus orientation. Even if these events don't directly involve the library, we can learn from the experience how to design better quest content that the library can use. Embedding the app in smaller and more fixed events helps with marketing and cost issues.

Because the app design is so expensive, we are looking into the possibility of a research partnership with Yeti CGI. We could both benefit from learning more about how mobile gaming works in a physical space, and sharing those lessons would get us Yeti's help rebuilding the app, as well as their collaboration in figuring out content creation and pacing, without another huge outlay of development capital. We are also looking at ways to turn the game development itself into an educational opportunity. By working with our campus mobile app development lab, we can provide opportunities for GVSU students to learn app design. Yeti is looking at making more of the game's technical architecture open (for example, we are thinking about having all quest content marked up in XML) so that students can build custom interfaces and tools for the game.

Finally, we are looking at grants to support running and revising the game.  Our initial advertising and incentives budgets were very low, and we are curious to see what would happen if we put significant resources into those areas.  Would we see bigger numbers?  Would other kinds of rewards in addition to the iPad (something asked for by students) entice players into completing more quest content?  Understanding exactly how much money needs to be put into incentives and advertising can help quantify the total cost of running a large, open game for libraries, which would be valuable information for other libraries contemplating running large-scale games.

Get the LibraryQuest App: (iPhone, Android)

About our Guest Author:
Kyle Felker is the Digital Initiatives Librarian at Grand Valley State University Libraries, where he has worked since February of 2012.  He is also a longtime gamer.  He can be reached at felkerk@gvsu.edu, or on twitter @gwydion9.


A Short and Irreverently Non-Expert Guide to CSS Preprocessors and Frameworks

It took me a long time to wrap my head around what a CSS preprocessor was and why you might want to use one. I knew everyone was doing it, but for the amount of CSS I was doing, it seemed like overkill. And one more thing to learn! When you are a solo web developer/librarian, it's always easier to clutch desperately at the things you know well and not try to add more complexity. So this post is for those of you who are in my old position. If you're already an expert, you can skip the post and go straight to the comments to sell us on your own favorite tools.

The idea, by the way, is that you will be able to get one of these up and running today. I promise it’s that easy (assuming you have access to install software on your computer).

Ok, So Why Should I Learn This?

Creating a modern and responsive website requires endless calculations, CSS adjustments, and other considerations, so many that it's not really possible (especially when you are a solo web developer) without building on work that lots of other people have done. Even if you aren't doing anything all that sophisticated in your web design, you probably want at minimum to have a nicely proportioned columnar layout with colors and typefaces that are easy to adapt as needed. Bonus points if your site is responsive so it looks good on tablets and phones. All of that takes a lot of math and careful planning to accomplish, and requires much attention to documentation if you want others to be able to maintain the site.

Frameworks and preprocessors take care of many development challenges. They do things like provide responsive columnar layouts that are customizable to your design needs, provide “mixins” of code others have written, allow you to nest elements rather than writing CSS with selectors piled on selectors, and create variables for any elements you might repeat such as typefaces and colors. Once you get the hang of it, it's pretty addictive to find places where you can be more efficient in your CSS. Then if your institution switches the shade of red used for active links, H1s, and footer text, you only have to change one variable to update this rather than trying to find all the places that color appears in your stylesheets.
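
To give a taste of what this looks like before we get into the tools themselves, here is a quick sketch in Sass's SCSS syntax (the variable and mixin names are made up for illustration):

// one variable controls the institutional color everywhere it appears
$brand-red: #cc0000;

// a mixin is a reusable chunk of declarations
@mixin institution-text {
  color: $brand-red;
  font-family: Georgia, serif;
}

footer {
  @include institution-text;
  // nesting: this compiles to a "footer a:hover" selector
  a:hover {
    color: darken($brand-red, 10%);
  }
}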

What You Should Learn

If I've convinced you this is something worth putting time into, now you have to figure out where you should spend that time. This post will outline one set of choices, but you don't need to stick with this as you learn more.

Sometimes these choices are made for you by whatever language or content management system you are using, and usually one will dictate another. For instance, if you choose a framework that uses Sass, you should probably learn Sass. If the theme you’ve chosen for your content management system already includes preprocessor files, you’ll save yourself lots of time just by going with what that theme has chosen. It’s also important to note that you can use a preprocessor with any project, even just a series of flat HTML files, and in that case definitely use whatever you prefer.

I'm going to point out just a few of each here, since these are basically the ones I've used and am familiar with. There are lots and lots of all of these things out there, and I would welcome people with specific experience to describe it in the comments.

Before you get too scared about the languages these tools use, it’s really only relevant to beginners insofar as you have to have one of these languages installed on your system to compile your Sass or Less files into CSS.

Preprocessors

Sass was developed in 2006. It can be written with Sass or SCSS syntax. Sass runs on Ruby.

Less was developed in 2009. It was originally written in Ruby, but now is in JavaScript.

A Completely Non-Comprehensive List of CSS Frameworks

Note! You don’t have to use a framework for every project. If you have a two column layout with a simple breakpoint and don’t need all the overhead, feel free to not use one. But for most projects, frameworks provide things like responsive layouts and useful features (for instance tabs, menus, image galleries, and various CSS tricks) you might want to build in your site without a lot of thought.

  • Compass (runs on Sass)
  • Bootstrap (runs on Less)
  • Foundation (runs on Sass)
  • There are a million other ones out there too. A Google search for css frameworks gets “About 4,860,000 results”. So I'm not kidding about needing to figure it out based on the needs of your project and what preprocessor you prefer.

A Quick and Dirty Tutorial for Getting These Up and Running

Let’s try out Sass and Compass. I started working with them when I was theming Omeka, and I run them on my standard issue work Windows PC, so I know you can do it too!

Ingredients:

  • Ruby
  • Sass
  • Compass

Stir to combine.

Ok it’s not quite that easy. But it’s not a lot harder.

  1. Sass has great installation instructions. I’ve always run it from the command line without a fuss, and I don’t even particularly care for the command line. So feel free to follow those instructions.
  2. If you don't have Ruby installed already (likely on a standard issue work Windows PC–it is already installed on Macs), install it. Sass suggests RubyInstaller, and I second that suggestion. There are some potentially confusing or annoying things about doing it this way, but don't worry about it until one pops up.
  3. If you are on Windows, open up a new command line window by typing cmd in your start search bar. If you are on a Mac, open up your Terminal app.
  4. Now all you have to do is install Sass with one line of code in your command line client/terminal window: gem install sass. If you are on a Mac, you might get an error message and  have to use the command sudo gem install sass.
  5. Next install Compass. You’ve got it, just type gem install compass.
  6. Read through this tutorial on Getting Started with Sass and Compass. It’s basically what I just said, but will link you out to a bunch of useful tutorials.
  7. If you decide the command line isn’t for you, you might want to look into other options for installing and compiling your files. The Sass installation instructions linked above give a few suggestions.
Working on a Project With These Tools

When you start a project, Compass will create a series of Sass/SCSS files (you choose which syntax) that you will edit and then compile into CSS files. The easiest way to get started is to head over to the Compass installation page and use their menu to generate the command line text you'll need to get started.

For this example, I’ll create a new project called acrltest by typing compass create acrltest. In the image below you’ll see what this looks like. This text also provides some information about what to do next, so you may want to copy it to save it for later.

Output of the compass create command.

You'll now have a new directory with several recommended SCSS files, but you are by no means limited to these files. Sass provides the @import command, which pulls in additional SCSS files; these can be full stylesheets or “partials”, which let you define styles without generating an additional CSS file. Partials start with an underscore. The normal practice is to have a partial _base.scss file where you identify base styles, and then import it into your screen.scss file to use whatever you have defined there.

Here’s what this looks like.

In _base.scss, I've created two variables.

$headings: #345fff;
$links: #c5ff34;

Now in my screen.scss file, I've imported _base.scss and can use these variables.

@import "base";

a {
  color: $links;
}

h1 {
  color: $headings;
}

Now this doesn’t do anything yet since there are no CSS files that you can use on the web. Here’s how you actually make the CSS files.

Open up your command prompt or terminal window and change to the directory your Compass project is in. Then type compass watch, and Compass will compile your SCSS files to CSS. Every time you save your work, the CSS will be updated, which is very handy.

Output of the compass watch command.

Now you’ll have something like this in the screen.css file in the stylesheets directory:

 

/* line 9, ../sass/screen.scss */
a {
  color: #c5ff34;
}

/* line 13, ../sass/screen.scss */
h1 {
  color: #345fff;
}

Note that you will have comments inserted telling you where this all came from.

Next Steps

This was an extremely basic overview, and there’s a lot more I didn’t cover–one of the major things being all the possibilities that frameworks provide. But I hope you start to get why this can ultimately make your life easier.

If you have a little experience with this stuff, please share it in the comments.


The Tools Behind Doge Decimal Classification

Four months ago, I made Doge Decimal Classification, a little website that translates Dewey Decimal class names into “Doge” speak à la the meme. It’s a silly site, for sure, but I took the opportunity to learn several new tools. This post will briefly detail each and provide resources for learning more.

Node & NPM

Because I'm curious about the framework, I chose to write Doge Decimal Classification in Node.js. Node is a relatively recent platform that expands the capabilities of JavaScript; rather than running in a browser, Node lets you write server-side software or command line utilities.

For Doge Decimal Classification (henceforth just DDC because, let's face it, it's pretty much obsoleted Dewey at this point), I wanted to write two pieces: a module and a website. A module is a reusable piece of code which provides some functionality; in this case, doge-speak Dewey class names. My website is a consumer of that module; it takes the module's output and dresses it up in Comic Sans and bright colors. By dividing the two pieces a bit, I can work on them separately and perhaps reuse the DDC module elsewhere, for instance as a command-line tool.

For the module, I needed to write an NPM package, which turned out to be fairly straightforward. NPM is Node's package manager; it provides a central repository for thousands of modules. Skipping over the boring part where I write code that actually does something, an NPM package is defined by a package.json file that contains metadata. That metadata tells NPM how users can find, install, and use my module. You can read my module's package.json and some of its meaning may be self-evident, but here are a few pieces worth explaining:

  • the main field determines what the entry point of my module is, so when another app uses it (which, in Node, is done via a line like "var ddc = require('dogedc');"), Node knows to load and execute a certain file
  • the dependencies field lists any packages which my package uses (it just so happens I didn’t use any) while devDependencies contains packages which anyone who wanted to work on developing the dogedc module itself would need (e.g. to run tests or automate other tedious tasks)
  • the repository field tells NPM what version control software I’m using and where to download the source code for my package (in this case, on GitHub)
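
Putting those fields together, a package.json along these lines would work. This is a hypothetical sketch rather than my actual file: the version, description, and URL are made up, and nodeunit appears in devDependencies only because I happen to use it for tests:

{
  "name": "dogedc",
  "version": "0.1.0",
  "description": "Dewey Decimal class names, but in doge",
  "main": "index.js",
  "repository": {
    "type": "git",
    "url": "https://github.com/example/dogedc.git"
  },
  "devDependencies": {
    "nodeunit": "~0.9.0"
  }
}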

Once I had my running code and package.json file in place, all I needed to do was run npm publish inside my project and it was published on NPM for anyone to install. Magic! When I update dogedc, either by adding new features or squashing bugs, all I need to do is run npm publish once again.

One final nicety of NPM that’s worth describing is the npm link command. While I’m working on my module, I want to be able to use it in the website, but also develop new features and fiddle with it in other contexts. It doesn’t make much sense to repeatedly install it from NPM everywhere that I use it, especially when I’m debugging new code which I don’t want to publish yet. Instead, npm link tells NPM to use my local copy of dogedc as if it was installed globally from NPM. This lets me preview the experience of someone consuming the latest version of my module without actually pushing that version out to the world.
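
The workflow looks roughly like this sketch (paths and project layout will vary):

$ # inside the dogedc module's directory: register my local copy globally
$ npm link
$ # inside the website's directory: use the linked local copy of dogedc
$ npm link dogedc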

Learn more: What is Node.js & why do I care?, How to create a NodeJS NPM package, NPM Developer Guide, & How do I get started with Node.js

Doge-ification

Now for perhaps the only amusing part of this post: how I went about translating Dewey Decimal classes into doge. The doge meme consists almost entirely of two-word phrases along the lines of “Much zombies. Such death. So amaze” (to quote from a Walking Dead themed meme). The only common one-word phrase is “wow” which is sprinkled liberally in most Doge images.

Reading through a few memes, the approach to taking a class name and turning it into doge becomes clear: split the name into a series of two-word doge phrases with a random adjective inserted before each noun. So “Extraterrestrial Worlds” (999) becomes something like “Many extraterrestrial. Much worlds.” Because the meme ignores most grammar conventions, we don't need to worry about mismatching count and noncount nouns with appropriate adjectives. So, for instance, even though “worlds” is a count noun (you can say “one world, two worlds”) we can use the noncount adjective “much” to modify it.

There are a few more steps we need to take to create proper doge phrases. First of all, how do we know what adjectives to use? After viewing a bunch of memes, I decided on a list of the most commonly used ones: many, much, so, such, and very. Secondly, our simple procedure might work on “Extraterrestial Worlds” just fine, but what about “Library and Information Sciences”? The resulting “Many library. Much and. So information. Such sciences.” doesn’t look quite right. We need to strip out small words like conjunctions because they’re not used in doge.

So the final algorithm resembles (a rough sketch in code follows this list):

  • Make the entire class name lowercase
  • Strip out stop words and punctuation
  • Split the string into an array on spaces (in JavaScript this is string.split(" "))
  • Loop over the array, each time adding a Doge word in front of the current word and then a period
  • Flip a coin to decide whether or not to add “Wow” on the end when we’re done
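
Here is a minimal JavaScript sketch of that algorithm. The stop-word list is abbreviated and the details differ from the real dogedc module:

var ADJECTIVES = ["many", "much", "so", "such", "very"];
var STOP_WORDS = ["and", "or", "of", "the", "a", "an", "in", "to"];

function dogeify(className) {
  var words = className
    .toLowerCase()
    .replace(/[^a-z\s]/g, "")      // strip punctuation
    .split(" ")
    .filter(function (word) {      // strip stop words and empty strings
      return word && STOP_WORDS.indexOf(word) === -1;
    });
  var phrases = words.map(function (word) {
    var adj = ADJECTIVES[Math.floor(Math.random() * ADJECTIVES.length)];
    // capitalize the adjective so each two-word phrase reads like a sentence
    return adj.charAt(0).toUpperCase() + adj.slice(1) + " " + word + ".";
  });
  if (Math.random() < 0.5) {       // flip a coin for a trailing "Wow"
    phrases.push("Wow.");
  }
  return phrases.join(" ");
}

console.log(dogeify("Library and Information Sciences"));
// e.g. "Much library. So information. Such sciences. Wow."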

There are still a few weaknesses here. Most notably, if we use an unassigned class number like 992 the phrase “Not assigned or no longer used” comes out terribly (“Much not. Many assigned. So no. Much longer. Very used.”) simply because it’s a more traditional sentence, not a noun-laden class name. To write this algorithm in a robust way, one would have to incorporate natural language processing to identify which words should be stripped out, or perhaps be more thoughtful about which adjectives modify which nouns. That seems like a fun project for another day, but it was too much for my DDC module.

Learn more: The article A Linguist Explains the Grammar of Doge. Wow was useful in understanding doge.

Express

With my module in hand, I chose to use Express for the foundation of my site because it's the most popular web framework for Node right now. Express is similar to Ruby on Rails or Django for Python. 1 Express isn't a CMS like WordPress or Drupal; you can't just install it and have a running blog working in a few minutes. Instead, Express provides tools and conventions for writing a web app. It takes care of some of the busywork involved but does not do everything for you.

To get started with Express, you can generate a basic template for a site with a command line tool courtesy of NPM:

$ # first we install this tool globally (the -g flag)
$ npm install -g express-generator
$ # creates a ddc folder & puts the site template in it
$ express ddc

   create : ddc
   create : ddc/package.json
   create : ddc/app.js
   …a whole bunch more of this…

   install dependencies:
     $ cd ddc && npm install

   run the app:
     $ DEBUG=ddc ./bin/www
$ # just following the instructions above…
$ cd ddc && npm install

Since the Doge Decimal Classification site is simplistic, the initial template that Express provides created about 80% of the final code for the site. Mostly by looking through the generated project’s structure, I figured out what I needed to change and where I should make edits, such as the stark CSS that comes with a new Express site.

There’s a lot to Express so I won’t cover the framework in detail, but to give a short example of how it works I’ll discuss routing. Routing is the practice of determining what content to serve when someone visits a particular URL. Express organizes routing into two parts, one of which occurs in an “app.js” file and the other inside a “routes” directory. Here’s an example:

app.js:

// in app.js, I tell Express which routes to use for
// certain requests. Below means "for GET requests to
// the web site's root, use index"
app.get("/", routes.index);
// for GET requests anywhere else, use the "number" route
// ":number" becomes a special token I can use later
app.get("/:number", routes.number);

routes/index.js:

// inside my routes/index.js file, I define routes.
// The "request" and "response" parameters below
// correspond to the user's request and what I choose
// to send back to them
exports.index = function(request, response){
    response.render("index", {
        // a random number from 0 to 999
        number: Math.floor(Math.random() * 1000)
    });
};

exports.number = function(request, response){
    // here request.params.number is that ":number"
    // from app.js above
    response.render("index", {
        number: request.params.number
    });
};

Routing is how information from a user's request—such as visiting dogedc.herokuapp.com/020—gets passed into my website and used to generate specific content. I use it to render the specific class number that someone chooses to put onto the URL, but you can imagine better use cases like delivering a user's profile with a route like "app.get('/users/:user', routes.profile)". There's another piece here in how that "number" information gets used to generate HTML, but we've talked about Express enough.

Learn more: The Dead-Simple Step-by-Step Guide for Front-End Developers to Getting Up and Running with Node.JS, Express, Jade, and MongoDB takes you through a more involved example of using Express with the popular MongoDB database. Introduction to Express by the ever-excellent NetTuts is also a decent intro. 2

Social Media <meta>s

In my website, I also included Facebook Open Graph and Twitter Card metadata using HTML meta tags. These ensure that I have greater control over how Facebook, Twitter, and other social media platforms present DDC. For instance, someone could share a link to the site in Twitter without including my username. With Twitter cards, I can associate the site with my account, which causes my name to display beside it. In general, the link looks better; an image and tagline I specify also appear. Compare the two tweets in the screenshot below, one which links to a Tumblr (which doesn’t have my <meta> tags on it) and one where Twitter Cards are in effect:

2 tweets about DDC

Learn more: Gearing Up Your Sites for Sharing with Twitter & Facebook Meta Tags

Heroku

There’s one final piece to the DDC site: deployment and hosting. I’ve written the web app and thanks to Express I can even run it locally with a simple npm start command from inside its directory. But how do I share it with the world?

Since I chose to write Doge Decimal Classification in Node, my hosting options were limited. Node is a new technology and it's not available everywhere. If I really wanted something easy to publish, I would've used PHP or pure client-side JavaScript. But I purposefully set out to learn new tools, and that's in large part why I chose to host my app on Heroku.

Heroku is a popular cloud hosting platform. It’s oriented around a suite of command line tools which push new versions of your website to the cloud, as opposed to a traditional web host which might use FTP, a web-based administrative dashboard like Plesk or cPanel, or SSH to provide access to a remote server. Instead, Heroku builds upon the fact that most developers are already versioning their apps with git. Rather than add another new technology to the stack that you have to learn, Heroku works via a smart integration with git.

Once you have the Heroku command line tools installed, all you need to do is write a short “Procfile” which tells Heroku how to start your app (my Procfile is a single line, “web: npm start”). From there, you can run heroku create to start your app and then git push heroku master to push your code live. This makes publishing an app a cinch and Heroku has a free tier which has suited my simple site just fine. If we wanted to scale up, however, Heroku makes increasing the power of our server a matter of running a single command.
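
Assuming the Heroku command line tools are installed and the project is already under git, the whole flow described above looks roughly like this:

$ # one-line Procfile telling Heroku how to start the app
$ echo "web: npm start" > Procfile
$ # create a new Heroku app, then push the code live
$ heroku create
$ git push heroku master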

Learn more: Getting Started with Heroku; there’s a more specific article for starting a Node project

In Conclusion

There are actually a couple more things I learned in writing Doge Decimal Classification, such as writing unit tests with Nodeunit and using Jade for templates, but this post has gone on long enough. While building DDC, there were many different technologies I could've chosen to utilize rather than the ones I selected here. Why Node and Express over Ruby and Sinatra? Because I felt like it. There are entirely too many programming languages, frameworks, and tools for any one person to become familiar with them all. What's more, I'm not sure when I'll get a chance to use these tools again, as they're not common to any of the library technology I work with. But I found using a fun side project to level up in new areas much worthwhile, very enjoy.

Notes

  1. Technically, Rails and Django are more full-featured than Express, which is perhaps more similar to the smaller Sinatra or Flask frameworks for Ruby and Python respectively. The general idea of all these frameworks is the same, though; they provide useful functionality for writing web apps in a particular programming language. Some are just more “batteries included” than others.
  2. There are many more fine Express tutorials out there, but try to stick to recent ones; Express is being developed at a rapid pace and chances are if you run npm install express at the beginning of a tutorial, you’ll end up with a more current version than the one being covered, with subtle but important differences. Be cognizant of that possibility and install the appropriate version by running npm install express@3.3.3 (if 3.3.3 is the version you want).

What Should Academic Librarians Know about Net Neutrality?

John Oliver describes net neutrality as the most boring important issue. More than that, it’s a complex idea that can be difficult to understand without a strong grasp of the architecture of the internet, which is not at all intuitive. An additional barrier to having a measured response is that most of the public discussions about net neutrality conflate it with negotiations over peering agreements (more on that later) and ultimately rest in contracts with unknown terms. The hyperbole surrounding net neutrality may be useful in riling up public sentiment, but the truth seems far more subtle. I want to approach a definition and an understanding of the issues surrounding net neutrality, but this post will only scratch the surface. Despite the technical and legal complexities, this is something worth understanding, since as academic librarians our daily lives and work revolve around internet access for us and for our students.

The most current public debate about net neutrality surrounds the Federal Communications Commission’s (FCC) ability to regulate internet service providers after a January 2014 court decision struck down the FCC’s 2010 Open Internet Order (PDF). The FCC is currently in an open comment period on a new plan to promote and protect the open internet.

The Communications Act of 1934 (PDF) created the FCC to regulate wire and radio communication. This classified phone companies and similar services as “common carriers”, which means that they are open to all equally. If internet service providers are classified in the same way, this ensures equal access, but for various reasons they are not considered common carriers, which was affirmed by the Supreme Court in 2005. The FCC is now seeking to use section 706 of the 1996 Telecommunications Act (PDF) to regulate internet service providers. Section 706 gave the FCC regulatory authority to expand broadband access, particularly to elementary and high schools, and this piece of it is included in the current rulemaking process.

The legal part of this is confusing to everyone, not least the FCC. We’ll return to that later. But for now, let’s turn our attention to the technical part of net neutrality, starting with one of the most visible spats.

A Tour Through the Internet

I am a Comcast customer for my home internet. Let’s say I want to watch Netflix. How do I get there from my home computer? First comes the traceroute that shows how the request from my computer travels over the physical lines that make up the internet.

C:\Users\MargaretEveryday>tracert netflix.com

Tracing route to netflix.com [69.53.236.17]
over a maximum of 30 hops:

  1     1 ms    <1 ms    <1 ms  10.0.1.1
  2    24 ms    30 ms    37 ms  98.213.176.1
  3    43 ms    40 ms    29 ms  te-0-4-0-17-sur04.chicago302.il.chicago.comcast.net [68.86.115.41]
  4    20 ms    32 ms    36 ms  te-2-6-0-11-ar01.area4.il.chicago.comcast.net [68.86.197.133]
  5    33 ms    30 ms    37 ms  he-3-14-0-0-cr01.350ecermak.il.ibone.comcast.net [68.86.94.125]
  6    27 ms    34 ms    30 ms  pos-1-4-0-0-pe01.350ecermak.il.ibone.comcast.net [68.86.86.162]
  7    30 ms    41 ms    54 ms  chp-edge-01.inet.qwest.net [216.207.8.189]
  8     *        *        *     Request timed out.
  9    73 ms    69 ms    69 ms  63.145.225.58
 10    65 ms    77 ms    96 ms  te1-8.csrt-agg01.prod1.netflix.com [69.53.225.6]
 11    80 ms    81 ms    74 ms  www.netflix.com [69.53.236.17]

Trace complete.

Step 1. My computer sends data to this wireless router, which is hooked to my cable modem, which is wired out to the telephone pole in front of my apartment.

2-4. The cables travel through the city underground, accessed through manholes like this one.

5- 6. Eventually my request to go to Netflix makes it to 350 E. Cermak, which is a major collocation and internet exchange site. If you’ve ever taken the shuttle bus at ALA in Chicago, you’ve gone right past this building. Image © 2014 Google.

7-9. Now the request leaves Comcast, and goes out to a Tier 1 internet provider, which owns cables that cross the country. In this case, the cables belong to CenturyLink (which recently purchased Qwest).

10. My request has now made it to Grand Forks, ND, where Netflix buys space from Amazon Web Services. All this happened in less than a second. Image © 2014 Google.

Why should Comcast ask Netflix to pay to transmit their data over Comcast’s networks? Understanding this requires a few additional concepts.

Peering

Peering is an important concept in the structure of the internet. Peering is a physical link of hardware to hardware between networks in internet exchanges, which are (as pictured above) huge buildings filled with routers connected to each other.1 Companies and internet service providers can use internet exchange centers to plug their equipment together directly, and so make their connections faster and more reliable. For websites such as Facebook, which have an enormous amount of upload and download traffic, it's well worth the effort for a small internet service provider to peer with Facebook; Facebook Peering is an example of a very open peering policy.2

Peering relies on some equality of traffic, as the name implies. The various tiers of internet service providers you may have heard of are based on with whom they “peer”. Tier 1 ISPs are large enough that they all peer with each other, and thus form what is usually called the backbone of the internet.

Academic institutions created the internet originally–computer science departments at major universities literally had the switches in their buildings. In the US this was ARPANET, but a variety of networks at academic institutions existed throughout the world. Groups such as Internet2 allow educational, research, and government networks to connect and peer with each other and commercial entities (including Facebook, if the traceroute from my workstation is any indication). Smaller or isolated institutions may rely on a consumer ISP, and what bandwidth is available to them may be limited by geography.

The Last Mile

Consumers, by contrast, are really at the mercy of whatever company dominates in their neighborhoods. Consumers obviously do not have the resources to lay their own fiber optic cables directly to all the websites they use most frequently. They rely on an internet service provider to do the heavy lifting, just as most of us rely on utility companies to get electricity, water, and sewage service (though of course it’s quite possible to live off the grid to a certain extent on all those services depending on where you live). We also don’t build our own roads, and we expect that certain spaces are open for traveling through by anyone. This idea of roads open for all to get from the wider world to arterial streets to local neighborhoods is thus used as an analogy for the internet–if internet service providers (like phone companies) must be common carriers, this ensures the middle and last miles aren’t jammed.

When Peering Goes Bad

Think about how peering works–it requires a roughly equal amount of traffic being sent and received through peered networks, or at least an amount of traffic to which both parties can agree. This is the problem with Netflix. Unlike big companies such as Facebook, and especially Google, Netflix is not trying to build its own network. It relies on content delivery services and internet backbone providers to get content from its servers (all hosted on Amazon Web Services) to consumers. But Netflix only sends traffic; it doesn't take traffic, and this is the basis of most of the legal battles going on with internet service providers that service the “last mile”.

The Netflix/Comcast trouble started in 2010, when Netflix contracted with Level 3 for content delivery. Comcast claimed that Level 3 was relying on a peering relationship that was no longer valid with this increase in traffic, no matter who was sending it. (See this article for complete details.) Level 3, incidentally, accused another Tier 1 provider, Cogent, of overstepping their settlement-free peering agreement back in 2005, and cut them off for a short time, which cut pieces of the internet off from each other.

Netflix tried various arrangements, but ultimately negotiated with Comcast to pay for direct access to their last mile networks through internet exchanges, one of which is illustrated above in steps 4-6. This seems to be the most reasonable course of action for Netflix to get their outbound content over networks, since they really don’t have the ability to do settlement-free peering. Of course, Reed Hastings, the CEO of Netflix, didn’t see it that way. But for most cases, settlement-free peering is still the only way the internet can actually work, and while we may not see the agreements that make this happen, it won’t be going anywhere. In this case, Comcast was not offering Netflix paid prioritization of its content, it was negotiating for delivery of the content at all. This might seem equally wrong, but someone has to pay for the bandwidth, and why shouldn’t Netflix pay for it?

What Should We Do?

If companies want to connect with each other or build their own network connections, they can do so under whatever terms work best for them. The problem would be if certain companies were using the same lines that everyone was using but their packets got preferential treatment. The imperfect road analogy works well enough for these purposes. When a firetruck, police car, and ambulance are racing through traffic with sirens blazing, we are usually ok with the resulting traffic jam since we can see this requires that speed for an emergency situation. But how do we feel when we suspect a single police car has turned on a siren just to cut in line to get to lunch faster? Or a funeral procession blocks traffic? Or an elected official has a motorcade? Or a block party? These situations are regulated by government authorities, but we may or may not like that these uses of public ways are being allowed and causing our own travel to slow down. Going further, it is clearly illegal for a private company to block a public road and charge a high rate for faster travel, but imagine if no governmental agency had the power to regulate this? The FCC is attempting to make sure they have those regulatory powers.

That said, it doesn’t seem like anyone is actually planning to offer paid prioritization. Even Comcast claims that “no company has had a stronger commitment to openness of the Internet…” and that it has no plans to offer such a service. I find it unlikely that we will face a situation that Barbara Stripling describes as “prioritizing Mickey Mouse and Jennifer Lawrence over William Shakespeare and Teddy Roosevelt.”

I certainly won’t advocate against treating ISPs as common carriers–my impression is that this is what the 1996 Telecommunications Act was trying to get at, though the legal issues are confounding. However, a larger problem facing libraries (not so much large academic libraries, but smaller academic and public libraries) is the digital divide. If there’s no fiber optic line to a town, there isn’t going to be broadband access, and an internet service provider has no business incentive to build a line to a small town that may not generate much revenue. I think we need to remain vigilant about ensuring that everyone has access to the internet at all, let alone at a fast speed, and not get too sidetracked by theoretical future malfeasance by internet service providers. These points are included in the FCC’s proposal, but they are not receiving most of the attention, despite the fact that the proposal gives the FCC explicit regulatory authority to address them.

Public comments are open at the FCC’s website until July 15, so take the opportunity to leave a comment about Protecting and Promoting the Open Internet, and also consider comments on E-rate and broadband access, which is another topic the FCC is currently considering. (You can read ALA’s proposal about this here (PDF).)


Websockets For Real-time And Interactive Interfaces

TL;DR WebSockets allows the server to push up-to-date information to the browser without the browser making a new request. Watch the videos below to see the cool things WebSockets enables.

Real-Time Technologies

You are on a Web page. You click on a link and you wait for a new page to load. Then you click on another link and wait again. It may only be a second or a few seconds before the new page loads after each click, but it still feels like it takes way too long for each page to load. The browser always has to make a request and the server gives a response. This client-server architecture is part of what has made the Web such a success, but it is also a limitation of how HTTP works. Browser request, server response, browser request, server response….

But what if you need a page to provide up-to-the-moment information? Reloading the page for new information is not very efficient. What if you need to create a chat interface or to collaborate on a document in real-time? HTTP alone does not work so well in these cases. When a server gets updated information, HTTP provides no mechanism to push that message to clients that need it. This is a problem because you want to get information about a change in chat or a document as soon as it happens. Any kind of lag can disrupt the flow of the conversation or slow down the editing process.

Think about when you are tracking a package you are waiting for. You may have to keep reloading the page for some time until there is any updated information. You are basically manually polling the server for updates. Using XMLHttpRequest (XHR) (also commonly known as Ajax) has been a popular way to work around this limitation of HTTP. After the initial page load, JavaScript can be used to poll the server for any updated information without user intervention.

Using JavaScript in this way you can still use normal HTTP and almost simulate getting a real-time feed of data from the server. After the initial request for the page, JavaScript can repeatedly ask the server for updated information. The browser client still makes a request and the server responds, and the request can be repeated. Because this cycle is all done with JavaScript it does not require user input, does not result in a full page reload, and the amount of data which is returned from the server can be minimal. In the case where there is no new data to return, the server can just respond with something like, “Sorry. No new data. Try again.” And then the browser repeats the polling–tries again and again until there is some new data to update the page. And then goes back to polling again.
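
To make the polling pattern concrete, here is a minimal sketch in plain JavaScript. The endpoint name and the render function are placeholders for illustration, not anything from the original site:

```javascript
// Minimal Ajax polling sketch. The '/updates' endpoint is a hypothetical
// stand-in for whatever route your server exposes.
function updatePage(data) {
  console.log('new data:', data); // stand-in for your own render code
}

function poll() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/updates');
  xhr.onload = function () {
    if (xhr.status === 200 && xhr.responseText) {
      updatePage(JSON.parse(xhr.responseText)); // only redraw what changed
    }
    setTimeout(poll, 5000); // "try again" in five seconds, new data or not
  };
  xhr.send();
}

poll();
```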

This kind of polling has been implemented in many different ways, but all polling methods still have some queuing latency. Queuing latency is the time a message has to wait on the server before it can be delivered to the client. Until recently there has not been a standardized, widely implemented way for the server to send messages to a browser client as soon as an event happens. The server would always have to sit on the information until the client made a request. But there are a couple of standards that do allow the server to send messages to the browser without having to wait for the client to make a new request.

Server Sent Events (aka EventSource) is one such standard. Once the client initiates the connection with a handshake, Server Sent Events allows the server to continue to stream data to the browser. This is a true push technology. The limitation is that only the server can send data over this channel. In order for the browser to send any data to the server, the browser would still need to make an Ajax/XHR request. EventSource also lacks support even in some recent browsers like IE11.
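
A minimal client-side sketch, assuming a hypothetical /stream endpoint on the server:

```javascript
// Server Sent Events sketch. '/stream' is a hypothetical endpoint that
// keeps the HTTP response open and writes "data:" lines down it.
var source = new EventSource('/stream');

source.onmessage = function (event) {
  console.log('pushed from server:', event.data); // server -> browser only
};

source.onerror = function () {
  // EventSource reconnects automatically after a dropped connection.
};
```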

WebSockets allows for full-duplex communication between the client and the server. The client does not have to open up a new connection to send a message to the server which saves on some overhead. When the server has new data it does not have to wait for a request from the client and can send messages immediately to the client over the same connection. Client and server can even be sending messages to each other at the same time. WebSockets is a better option for applications like chat or collaborative editing because the communication channel is bidirectional and always open. While there are other kinds of latency involved here, WebSockets solves the problem of queuing latency. Removing this latency concern is what is meant by WebSockets being a real-time technology. Current browsers have good support for WebSockets.
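
The browser side of WebSockets is a small API. A minimal sketch, again with a hypothetical endpoint:

```javascript
// WebSocket sketch: one persistent, bidirectional connection.
// The URL is a hypothetical endpoint.
var socket = new WebSocket('ws://example.org/updates');

socket.onopen = function () {
  // client -> server, with no new HTTP request
  socket.send(JSON.stringify({ type: 'hello' }));
};

socket.onmessage = function (event) {
  // server -> client, whenever the server has something to push
  console.log('message:', JSON.parse(event.data));
};
```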

Using WebSockets solves some real problems on the Web, but how might libraries, archives, and museums use them? I am going to share details of a couple applications from my work at NCSU Libraries.

Digital Collections Now!

When Google Analytics first turned on real-time reporting it was mesmerizing. I could see what resources on the NCSU Libraries’ Rare and Unique Digital Collections site were being viewed at the exact moment they were being viewed. Or rather, I could view the URL for the resource being viewed. I happened to notice that there would sometimes be multiple people viewing the same resource at the same time. This hinted that someone’s social share or forum post was getting a lot of click-throughs at that very moment. Or sometimes there would be a story in the news and we had an image of one of the people involved. I could then follow up and see examples of where we were being effective with search engine optimization.

The Rare & Unique site has a lot of visual resources like photographs and architectural drawings. I wanted to see the actual images that were being viewed. The problem, though, was that Google Analytics does not have an easy way to click through from a URL to the resource on your site. I would have to retype the URL, copy and paste the part of the URL path, or do a search for the resource identifier. I just wanted to see the images now. (OK, this first use case was admittedly driven by one of the great virtues of a programmer–laziness.)

My first attempt at this was to create a page that would show the resources which had been viewed most frequently in the past day and past week. To enable this functionality, I added some custom logging that is saved to a database. Every view of every resource would just get a little tick mark that would be tallied up occasionally. These pages showing the popular resources of the moment are then regenerated every hour.
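
The post does not include the logging code itself, but the idea is simple enough to sketch. Assuming a Node/Express stack (purely illustrative; the actual implementation details are not given), the tick-mark counter might look something like this:

```javascript
// Tick-mark logging sketch (Node/Express assumed; the post does not give
// its actual stack). Each resource view increments a counter; the tallies
// are reported on an interval, like the hourly regenerated pages.
var express = require('express');
var app = express();
var viewCounts = {}; // stand-in for the database table

app.get('/resources/:id', function (req, res) {
  var id = req.params.id;
  viewCounts[id] = (viewCounts[id] || 0) + 1; // the little tick mark
  res.send('resource page for ' + id);
});

setInterval(function () {
  console.log('popular right now:', viewCounts); // tallied up occasionally
}, 60 * 60 * 1000);

app.listen(3000);
```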

It was not a real-time view of activity, but it was easy to implement and it did answer a lot of questions for me about what was most popular. Some images are regularly in the group of the most-viewed images. I learned that people often visit the roster image of the 1983 NC State men’s basketball team, which went on to win the NCAA tournament. People also seem to really like the indoor pool at the Biltmore estate.

Really Real-Time

Now that I had this logging in place I set about to make it really real-time. I wanted to see the actual images being viewed at that moment by a real user. I wanted to serve up a single page and have it be updated in real-time with what is being viewed. And this is where the persistent communication channel of WebSockets came in. WebSockets allows the server to immediately send these updates to the page to be displayed.
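
The server side of that push might look like the following sketch, which uses the Node ws library purely as an example (the post does not name its server stack):

```javascript
// Server-push sketch with the Node "ws" library (an assumption). As soon
// as a view is logged, it is pushed to every connected dashboard page
// over the already-open WebSockets connections.
var WebSocket = require('ws');
var wss = new WebSocket.Server({ port: 8080 });

// Called from the view-logging code; view is a hypothetical
// { id, imageUrl } object describing what was just viewed.
function broadcastView(view) {
  wss.clients.forEach(function (client) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(JSON.stringify(view)); // no client request needed
    }
  });
}
```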

People have told me they find this real-time view to be addictive. I found it to be useful. I have discovered images I never would have seen or even known how to search for before. At least for me this has been an effective form of serendipitous discovery. I also have a better sense of what different traffic volumes actually feel like on a good day. You too can see what folks are viewing in real-time now. And I have written up some more details on how this is all wired up together.

The Hunt Library Video Walls

I also used WebSockets to create interactive interfaces on the Hunt Library video walls. The Hunt Library has five large video walls created with Christie MicroTiles. These very large displays each have their own affordances based on the technologies in the space and the architecture. The Art Wall is above the single service point just inside the entrance of the library and is visible from outside the doors on that level. The Commons Wall is in front of a set of stairs that also function as coliseum-like seating. The Game Lab is within a closed space and is already set up with various game consoles.

Listen to Wikipedia

When I saw and heard the visualization and sonification Listen to Wikipedia, I thought it would be perfect for the iPearl Immersion Theater. Listen to Wikipedia visualizes and sonifies data from the stream of edits on Wikipedia. The size of the bubbles is determined by the size of the change to an entry, and the sound changes in pitch based on the size of the edit. Green circles show edits from unregistered contributors, and purple circles mark edits performed by automated bots. (These automated bots are sometimes used to integrate library data into Wikipedia.) A bell signals an addition to an entry. A string pluck is a subtraction. New users are announced with a string swell.

The original Listen to Wikipedia (L2W) is a good example of the use of WebSockets for real-time displays. Wikipedia publishes all edits for every language into IRC channels. A bot called wikimon monitors each of the Wikipedia IRC channels and watches for edits. The bot then forwards the information about the edits over WebSockets to the browser clients on the Listen to Wikipedia page. The browser then takes those WebSocket messages and uses the data to create the visualization and sonification.
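
The browser’s end of that pipeline is a message handler that maps the size of the edit to a bubble and the sign of the change to a sound. A sketch with assumed names (the real L2W message format may differ):

```javascript
// L2W-style client sketch. The URL and the message fields are assumptions;
// drawCircle() and playSound() stand in for the real visualization code.
var socket = new WebSocket('ws://example.org/edits'); // hypothetical endpoint

function drawCircle(radius, color) { /* stand-in for the visualization */ }
function playSound(name) { /* stand-in for the sonification */ }

socket.onmessage = function (event) {
  var edit = JSON.parse(event.data);
  var radius = Math.sqrt(Math.abs(edit.changeSize)); // bigger edit, bigger bubble
  drawCircle(radius, edit.isBot ? 'purple' : 'green'); // bots purple, unregistered green
  playSound(edit.changeSize > 0 ? 'bell' : 'pluck');   // addition vs. subtraction
};
```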

As you walk into the Hunt Library almost all traffic goes past the iPearl Immersion Theater. The one feature that made this space perfect for Listen to Wikipedia was that it has sound and, depending on your tastes, L2W can create pleasant ambient sounds.1 I began by adjusting the CSS styling so that the page would fit the large display. Besides setting the width and height, I adjusted the size of the fonts. I added some text to a panel on the right explaining what folks are seeing and hearing. On the left is now text asking passersby to interact with the wall, along with the list of languages currently being watched for updates.

One feature of the original L2W that we wanted to keep was the ability to change which languages are being monitored and visualized. Each language can individually be turned off and on. During peak times the English Wikipedia alone can sound cacophonous. An active bot can make lots of edits, all of roughly similar size. You can also turn on or off changes to Wikidata, which collects structured data that can support Wikipedia entries. Having only a few of the less frequently edited languages on can result in moments of silence punctuated by a single little dot and small bell sound.

We wanted to keep the ability to change the experience and actually get a feel for the torrent or trickle of Wikipedia edits and allow folks to explore what that might mean. We currently have no input device for directly interacting with the Immersion Theater wall. For L2W the solution was to allow folks to bring their own devices to act as a remote control. We encourage passersby to interact with the wall with a prominent message. On the wall we show the URL to the remote control. We also display a QR code version of the URL. To prevent someone in New Zealand from controlling the Hunt Library wall in Raleigh, NC, we use a short-lived, three-character token.

Because we were uncertain how best to allow a visitor to kick off an interaction, we included both a URL and QR code. They each have slightly different URLs so that we can track use. We were surprised to find that most of the interactions began with scanning the QR code. Currently 78% of interactions begin with the QR code. We suspect that we could increase the number of visitors interacting with the wall if there were other simpler ways to begin the interaction. For bring-your-own-device remote controls we are interested in how we might use technologies like Bluetooth Low Energy within the building for a variety of interactions with the surroundings and our services.

The remote control Web page is a list of big checkboxes next to each of the languages. Clicking on one of the languages turns its stream on or off on the wall (connects or disconnects one of the WebSockets channels the wall is listening on). The change happens almost immediately with the wall showing a message and removing or adding the name of the language from a side panel. We wanted this to be at least as quick as the remote control on your TV at home.

The quick interaction is possible because of WebSockets. Both the browser page on the wall and the remote control client listen on another WebSockets channel for such messages. This means that as soon as the remote control sends a message to the server, it can be sent immediately to the wall and the change reflected. If the wall were using polling to get changes, there would potentially be more latency before a change registered on the wall. The remote control client also uses WebSockets to listen on a channel for updates, which allows feedback to be displayed to the user once the change has actually been made.

Having the remote control listen for messages from the server also serves another purpose. If more than one person enters the space to control the wall, what is the correct way to handle that situation? If there are two users, how do you accurately represent the current state on the wall for both users? Maybe once the first user begins controlling the wall it locks out other users. This would work, but then how long do you lock others out? It could be frustrating for a user to have launched their QR code reader, lined up the QR code in their camera, and scanned it only to find that they are locked out and unable to control the wall. What I chose to do instead was to have every message of every change go via WebSockets to every connected remote control. In this way it is easy to keep the remote controls synchronized. Every change on one remote control is quickly reflected on every other remote control instance. This prevents most cases where the remote controls might get out of sync. While there is still the possibility of a race condition, it becomes less likely with the real-time connection and is harmless. Besides not having to lock anyone out, it also seems like a lot more fun to notice that others are controlling things as well–maybe it even makes the experience a bit more social. (Although, can you imagine how awful it would be if everyone had their own TV remote at home?)
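
The relay at the heart of this approach can be just a few lines. A sketch with the Node ws library (again an assumption), echoing every toggle to the wall and to every connected remote:

```javascript
// Relay sketch with the Node "ws" library (an assumption): every message
// from any client (wall or remote) is echoed to every open connection,
// which keeps all of the remote controls synchronized.
var WebSocket = require('ws');
var wss = new WebSocket.Server({ port: 8081 });

wss.on('connection', function (socket) {
  socket.on('message', function (data) {
    // e.g. { language: 'en', enabled: false } -- hypothetical message shape
    wss.clients.forEach(function (client) {
      if (client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    });
  });
});
```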

I also thought it was important for something like an interactive exhibit around Wikipedia data to provide the user some way to read the entries. From the remote control the user can get to a page which lists the same stream of edits that are shown on the wall. The page shows the title for the most recently edited entry at the top of the page and pushes others down the page. The titles link to the current revision for that page. This page just listens to the same WebSockets channels as the wall does, so the changes appear on the wall and remote control at the same time. Sometimes the stream of edits can be so fast that it is impossible to click on an interesting entry. A button allows the user to pause the stream. When an intriguing title appears on the wall or there is a large edit to a page, the viewer can pause the stream, find the title, and click through to the article.

The reaction from students and visitors has been fun to watch. The enthusiasm has had unexpected consequences. For instance, one day we were testing L2W on the wall and noting what adjustments we would want to make to the design. A student came in and sat down to watch. At one point they opened up their laptop and deleted a large portion of a Wikipedia article just to see how large the bubble on the wall would be. Fortunately the edit was quickly reverted.

We have also seen the L2W exhibit pop up on social media. This Instagram video was posted with the comment, “Reasons why I should come to the library more often. #huntlibrary.”

This is people editing–Oh, someone just edited Home Alone–editing Wikipedia in this exact moment.

The original Listen to Wikipedia is open source. I have also made the source code for the Listen to Wikipedia exhibit and remote control application available. You would likely need to change the styling to fit whatever display you have.

Other Examples

I have also used WebSockets for some other fun projects. The Hunt Library Visualization Wall has a unique columnar design, and I used it to present images and video from our digital special collections in a way that allows users to change the exhibit. For the Code4Lib talk this post is based on, I developed a template for creating slide decks that include audience participation and synchronized notes via WebSockets.

Conclusion

The Web is now a better development platform for creating real-time and interactive interfaces. WebSockets provides the means for sending real-time messages between servers, browser clients, and other devices. This opens up new possibilities for what libraries, archives, and museums can do to provide up-to-the-moment data feeds and to create engaging interactive interfaces using Web technologies.

If you would like more technical information about WebSockets and these projects, please see the materials from my Code4Lib 2014 talk (including speaker notes) and some notes on the services and libraries I have used. There you will also find a post with answers to the (serious) questions I was asked during the Code4Lib presentation. I’ve also posted some thoughts on designing for large video walls.

Thanks: Special thanks to Mike Nutt, Brian Dietz, Yairon Martinez, Alisa Katz, Brent Brafford, and Shirley Rodgers for their help with making these projects a reality.


About Our Guest Author: Jason Ronallo is the Associate Head of Digital Library Initiatives at NCSU Libraries and a Web developer. He has worked on lots of other interesting projects. He occasionally writes for his own blog Preliminary Inventory of Digital Collections.

Notes

  1. Though honestly Listen to Wikipedia drove me crazy listening to it so much as I was developing the Immersion Theater display.

Open Sourcing Ideas: Sharing and Recreating a Library Instruction Program

Overture: We’ve Got a Theory

In April 2013 at the ACRL Conference in Indianapolis, IN, Char Booth, Lia Friedman, Adrienne Lai, and Alice Whiteside presented a panel entitled “Love your library: Building goodwill from the inside out and the outside in” in which they highlighted examples of non-traditional marketing in academic libraries at Claremont, the University of California San Diego, Mount Holyoke, and North Carolina State University. The panelists freely encouraged audience members to recreate and adapt the ideas at other institutions, saying “Here is something that worked for us. Maybe it will work for you!” One of the ideas Alice shared was a food-themed citation help event she developed with colleagues Chrissa Godbout and Kathleen Norton at Mount Holyoke. John Jackson recreated the event at Whittier College a year later. From opposite coasts, we’ve joined forces here to discuss the development of the ExCITING Food workshop, its iterations, and the importance of sharing ideas among academic library communities.

Borrowing each other’s ideas is common in our field, and the “Love your library” panel celebrated and encouraged this practice. When we “steal” each other’s trade secrets (with proper credit, of course), everyone benefits. The advantages of “open-sourcing” instructional programming are probably obvious to readers of TechConnect. The information literacy needs of most undergraduates, especially first-year students, are roughly the same: they come to college with little to no experience with scholarly communication practices and limited knowledge of the breadth of information resources, and they feel overwhelmed by the complex requirements (i.e. format, tone, structure, citations) of their assignments. Even accounting for the idiosyncrasies of each institution, librarians can quickly adapt events that were successful at other libraries to their own unique communities, saving time, reducing the stress of preparation, and ultimately fulfilling a recognized information need for their users by sharing successful attempts at “sneaky teaching” with the professional community at large.

In our experience, everyone benefits more if the first round of sharing isn’t the end of it. We have many methods and modes of learning about “stealable” ideas: professional literature, conference presentations, the Web, and word of mouth. Databases like PRIMO and LOEX Instructional Resources, personal blogs, Slideshare, and LibGuides all facilitate this type of sharing. More rare is the ability to provide public feedback on how one programming event succeeded or failed in a different context and how it was adapted. How can we more actively create an open-source mindset around instructional development? We hope this post is a step in that direction.

Going Through the Motions

Creating the Event at Mount Holyoke College

Alice: At Mount Holyoke College, we hatched the idea for ExCITING Food when the Dean of Students Office asked if the library could provide a workshop on citing sources during fall 2012 orientation. The planning group consisted of myself, Chrissa Godbout, Kathleen Norton, and our MLS intern Lilly Sundell-Thomas. We felt strongly that orientation, when new students are concerned with getting their bearings, meeting new friends, and struggling to stay afloat amid the information overload, was the wrong time to discuss the ethical use of resources in their future research papers. That said, we appreciated that the Dean of Students Office turned to the library with this request, and we began to think about other ways to address this clearly identified need.

Historically we haven’t had great success with drop-in workshops in the library, and we knew we wanted to try something different. Our goal was to help students understand the why, when, and how of citing sources. For the greatest impact, we wanted to reach them at the point in the semester when they were thinking about their bibliographies. We hoped our “not-a-workshop” would be informative but also low-threshold and engaging. At Mount Holyoke, the surest path to engaging students usually involves food, and thus the brainstorming began. When we pitched the idea to our department head, he was skeptical: a fun drop-in citation help event? Fortunately, persuaded by our enthusiasm, he agreed to support our modest budget of $50 for food. As we figured out the details, we ran the idea by our student workers and reached out to the Speaking, Arguing, and Writing (SAW) Center. The SAW Center agreed to join the effort, helping to advertise and staff the event.

Students attending the ExCITING Food workshop at Mount Holyoke

One of the central ideas for ExCITING Food is citing the recipes for each snack provided; we used the snacks themselves to illustrate different citation styles, and we selected snacks to showcase a range of recipe sources (book, website, archival material, etc.). Mount Holyoke College has a strong sense of its own history, and everyone on campus knows about Mary Lyon, the school’s founder, and her vision for women’s education. The Archives have a few recipes written out in her own hand, including one for gingerbread, a variation on her molasses cake. This was a clear winner for ExCITING Food.

M. Lyon, ca. 1845, Molasses Cake with Plums, unpublished manuscript, Mount Holyoke College Archives and Special Collections, South Hadley, MA.

After that, we got a little carried away with picking snacks that had a connection to MHC: the infamous “Chef Jeff” cookies from Dining Services, caramel corn with an image from the Archives of students shucking corn ca. 1917. We even wrote to the President’s office asking for a recipe; she graciously replied but misunderstood our intention, sending a favorite recipe for a hearty stew. Instead of stew, we went with a recipe from the library director for mulled cider (C. Patriquin, personal communication, Nov 16, 2012).

Setup for the Mount Holyoke ExCITING Food event

We chose to host ExCITING Food two weeks before exams, when many students were in the thick of working on final papers. We promoted the event via social media, posters, and personal emails to First Year Seminar faculty asking them to encourage their students to attend; SAW Center writing mentors, who are current students, distributed flyers and helped spread the word. Late in the afternoon on a Wednesday, we set up tables in the library atrium, a high traffic area in front of the main entrance, and wheeled out piles of handouts, platters of cookies, and crock pots full of mulled cider on book trucks. Our handouts included sample bibliographies (with the snack recipes) in different citation styles, RefWorks information, and DIY stickers (printed on mailing labels) with friendly URLs for the library’s Citing Guide and the SAW Center. Six librarians and two SAW Center writing mentors staffed the event, distributing snacks and handouts and answering questions.

Through the combination of thoughtful timing, delicious food, and a bit of silliness, we pulled off an extremely successful session. In one hour, we distributed handouts, snacks, and our elevator pitch to over a hundred students, provided 21 citation consultations, and received abundant positive feedback from students and from our partners in SAW.

Recreating the Event at Whittier College

John: Impressed by the creativity and accessibility of the outreach events presented during Alice’s ACRL 2013 session, I have since tried to reproduce many of them in spirit if not in detail. In April 2014, Wardman Library hosted its iteration of the ExCITING Food workshop in the week leading up to finals. The day before finals began was thought to be the best time, as students were beginning to think about the requirements of their final projects but were not yet overwhelmed by details and deadlines.

We promoted the event on the faculty and student email lists, on our social media pages, and via flyers and posters, and we additionally contacted faculty who we knew had assigned bibliographies as final projects. We know that students and faculty struggle to manage the incredible amount of email they receive daily, so it was important to send frequent reminders via our (less intrusive) social media accounts and to speak with teaching faculty directly about our plans, especially the ways in which students could benefit from the information presented in our posters and handouts.

Setup for the ExCITING Food event at Whittier College

One of my primary goals for the event was to highlight the helpfulness and creativity of library staff. Accordingly, I asked each staff member to contribute a dish to the event. This was perhaps the greatest source of anxiety for me: getting enough staff buy-in to make and bring enough food for the event to succeed. Wardman Library is staffed by 13 employees, many of whom are extremely busy during the final weeks of the semester (especially our circulation and media staff). I was hesitant to ask my colleagues to take time outside of work to locate an appropriate recipe (we needed enough variety in the sources) and make it on the designated day. However, my colleagues were incredibly supportive, and we produced enough food to stretch the scheduled 2-hour event into a 4-hour one.

Originally, we planned to host the event outside the library in order to capture the portion of our student population that does not frequent the library on a regular basis, but coincidentally (and to our benefit) the southern California heat forced us to hold the event indoors. Instead, we held the workshop inside the library in an area that we thought would be unobtrusive and wouldn’t interfere with students trying to study for finals. To our surprise, students were reluctant to approach the event, thinking it was invitation-only or required an RSVP. So we waited for an appropriate moment and moved the event to a more central location near the main stairwell, between the library entrance and the bookstacks, one of the most heavily trafficked areas of the library. The move immediately increased the number of students who approached the tables unreservedly.

At the event, we provided a number of dishes, including a brownie recipe from Katharine Hepburn, cornbread from a late nineteenth-century college cookbook, and cookies made from recipes on various websites to illustrate citing material that lacks an author or publication date.

Henderson, H. (2003, July 6). Straight Talk From Miss Hepburn: Plus the Actress’s Own Brownie Recipe. New York Times, p. CY9. New York, N.Y., United States.

Clayton, H. J. (1883). Clayton’s Quaker cook-book: being a practical treatise on the culinary art. San Francisco: Women’s Co-operative Printing Office.

Easy OREO Truffles. (n.d.). Allrecipes.com. Retrieved April 25, 2014, from http://allrecipes.com/Recipe/Easy-OREO-Truffles/Detail.aspx.

In addition to the food, we provided two-sided half-sheet handouts that contained the recipe for each dish on one side and how to cite it in MLA, APA, and Chicago style formats on the other side. We also created three 20 in. x 30 in. posters outlining the when, why, and how of citations and placed these behind the food table. We made sure at least one librarian and one additional staff member were present at the table at all times and encouraged all library staff to stop by during the event to meet and talk with students.

Students attending the ExCITING Food event at Whittier College

At the ACRL conference presentation, the panelists introduced the idea of “camogogy”: the combination of pedagogy and camouflage, or “sneaky teaching.” Ultimately, this was the spirit I endeavored to recreate at our iteration of the event and even went so far as to downplay the educational aspect of the workshop. Most surprising to me, however, was how little camouflage or “sneakiness” was required. The students loved the idea of citing recipes and seemed genuinely excited at the prospect of improving their own citations. A number of students returned later to ask specific questions about citing sources and, most importantly to me, identified librarians as being a resource for finalizing their bibliographies.

Once More, With Feeling: Future Adaptations

Alice: At Mount Holyoke, this event is on its way to becoming a library tradition. In November 2013 we hosted ExCITING Snacks, with essentially the same components as the first iteration and equal success. Our major change in the second year was that we simplified our snacks: just popcorn instead of caramel corn, cold cider instead of mulled cider. We also refined our publicity approach and were thrilled to get assistance from the Academic Deans Office, which sent an email blast to all first years, sophomores, and juniors about the event. Looking ahead, our favorite question is “Who else can we collaborate with?” While ExCITING Food is a fun event, the instructional component is very clear, and I think that has helped us find allies.

Learning about the details of Whittier College’s implementation of ExCITING Food has also helped us rethink our approach and consider new elements. Next time, we will definitely create large posters to help students identify at a glance what the event is about. While we haven’t yet explored taking ExCITING Food out of the library, this is now on our list as we brainstorm new ways to collaborate with offices across campus.

John: The success of our first attempt at the ExCITING Food workshop and the enthusiasm it generated among the faculty at Whittier College have all but guaranteed that we will attempt it again next year. However, there are certainly improvements to be made. For instance, we would like to capture a “non-library” audience (students who do not regularly visit the library) and may consider moving the event to a more central location on campus. This could potentially open up opportunities for new collaborations with, say, catering services (if we decided to host the workshop near the student cafeteria) or the Center for Advising and Academic Success (if we wanted to host the workshop near the tutoring center). Additionally, we would like to find a way to involve faculty and student peer mentors, not only in promoting the event but also in providing on-site help with creating citations (or even providing additional food!).

Where Do We Go From Here?

Talking about how we adapted an idea in this forum is the first step. What other ways can we publish, adapt, and improve instruction content as a community? Between 2006 and 2008, the Oregon Library Association’s Library Instruction Roundtable maintained a Library Instruction Wiki (since expired). Some academic faculty are using GitHub to post their class syllabi so that other teachers can modify them, fork them, and take advantage of version control. Public librarians like Ben Bizzle of the Craighead County Jonesboro Public Library use Dropbox to share marketing content with other librarians. And who among us has not used Google Drive to collaborate?

The tools for open-source development of instruction material exist: we simply need to make a concerted effort to develop this content on a large scale.

References

Booth, C., Friedman, L., Lai, A., & Whiteside, A. (2013, April). Love your library: Building goodwill from the inside out and the outside in. Panel presented at the meeting of the Association of College and Research Libraries, Indianapolis, IN. (slides | recording)

About our Guest Authors:

Alice Whiteside is a Librarian & Instructional Technology Consultant at Mount Holyoke College. An active member of the Art Libraries Society of North America (ARLIS/NA), she currently serves as chair of the Professional Development Committee-Education Subcommittee. Alice holds an MSLS from the University of North Carolina at Chapel Hill and a BA in Art History from Bard College.

John Jackson is the Reference & Instruction Librarian at the Wardman Library of Whittier College, a private liberal arts college outside Los Angeles. John holds an MLIS from San Jose State University and an MA in Medieval Studies from the University of Virginia.