Reflections on Code4Lib 2013

Disclaimer: I was on the planning committee for Code4Lib 2013, but this is my own opinion and does not reflect other organizers of the conference.

We have mentioned Code4Lib before on this blog, but for those who are unfamiliar, it is a loose collective of programmers working in libraries, librarians, and others interested in code and libraries. (You can read more about it on the website.) The Code4Lib conference has emerged as a venue to share very new technology and have discussions with a wide variety of people who might not attend conferences more geared to librarians. Presentations at the conference are decided by the votes of anyone interested in selecting the program, and additionally lightning talks and breakout sessions allow wide participation and exposure to extremely new projects that have not made it into the literature or to conferences with a longer lead time. The Code4Lib 2013 conference ran February 11-14 at the University of Illinois at Chicago. You can see a list of all programs here, which includes links to the video archive of the conference.

While there were many types of projects presented, I want to focus on those talks which illustrated what I saw as a thread running through the conference–care and emotion. This is perhaps unexpected for a technical conference, yet those themes underlie a great deal of the work that takes place in academic library technology and the types of projects presented at Code4Lib. We tend to work in academic libraries because we care about the collections and the people using those collections. That intrinsic motivation focuses our work.

Caring about the best way to display collections is central to successful projects. Most (though not all) of the presenters and topics came out of academic libraries, and many of the presentations dealt with creating platforms for library and archival metadata and collections. To highlight a few: Penn State University has developed its own institutional repository application called ScholarSphere that provides a better user experience for researchers and managers of the repository. The library and archives of the Rock and Roll Hall of Fame dealt with the increasingly common problem of wanting to present digital content alongside more traditional finding aids, and so developed a system for doing so. Corey Harper from New York University presented an extremely interesting and still experimental project that uses linked data to enrich interfaces for interacting with library collections. Note that all these projects combined various pieces of open source software and library/web standards to create solutions to a problem facing academic or research libraries in a particular setting. I think an important lesson for most academic librarians looking at descriptions of projects like this is that it takes more than development staff to make them happen. It takes purpose, vision, and dedication to collecting and preserving content–in other words, emotion and care. A great example of this was the presentation about DIYHistory from the University of Iowa. This project started out as an extremely low-tech solution for crowdsourcing archival transcription, but got so popular that it required a more robust solution. They were able to adapt open source tools to meet their needs, while keeping the project well within the means of most libraries (the code is here).

Another view of emotion and care came from Mark Matienzo, who gave a lightning talk (his blog post gives a longer version with more details). His talk discussed the difficulties of acknowledging and dealing with the emotional content of archives, even though emotion drives interactions with materials and collections. The records provided are emotionless and affectless, despite the fact that they represent important moments in history and lives. The kind of sharing of what someone “likes” on Facebook does not satisfactorily answer the question of what they care about, or represent the emotion in their lives. Mark suggested that a tool like Twine, which allows writing interactive stories, could approach the difficult question of bringing together the real with the emotional narrative that makes up experience.

One of the ways we express care for our work and for our colleagues is by taking time to be organized and consistent in code. Naomi Dushay of Stanford University Library presented best practices for code handoffs, describing excellent ways to document and clarify code and processes. One of the major takeaways is that being clear, concise, and straightforward is always preferable, however much we want to create cute names for our servers and classes. To preserve a spirit of fun, you can use the cute name and attach a description of what the item actually does.

Originally Bess Sadler, also from Stanford, was going to present with Naomi, but she ended up giving a different talk, the last one of the conference, on Creating a Commons (the full text is available here). This was a very moving look at what motivates her to create open source software and how to create better open source software projects. She used the framework of the Creative Commons licenses to discuss open source software–that it needs to be “[m]achine readable, human readable, and lawyer readable.” Machine readable means that code needs to be properly structured to allow contributions from multiple people without breaking; lawyer readable means that the project should have the correct structure and licensing to allow collaboration across institutions. Bess focused particularly on the “human readable” aspect of creating communities and understanding the “hacker epistemology,” as she so eloquently put it: “[t]he truth is what works.” Part of understanding that requires being willing to reshape default expectations–for instance, the Code4Lib community developed a Code of Conduct at Bess’s urging to underline the fact that the community aims at inclusion and creating a safe space. She encouraged everyone to keep working to do better and “file bug reports” about open source communities.

This year’s Code4Lib conference was a reminder to me about why I do the work I do as an academic librarian working in a technical role. Even though I may spend a lot of time sitting in front of a computer looking at code, or workflows, or processes, I know it makes access to the collections and exploration of those collections better.


Migrating to LibGuides 2.0

This summer Springshare released LibGuides 2.0, which is a complete revamp of the LibGuides system. Many libraries use LibGuides, either as course/research guides or in some cases as the entire library website, and so this is something that’s been on the mind of many librarians this summer, whichever side of LibGuides they usually see. The process of migrating is not too difficult, but the choices you make in planning the new interface can be challenging. As the librarians responsible for the migration, we will discuss our experience of planning and implementing the new LibGuides platform.

Making the Decision to Migrate

While migrating this summer was optional, Springshare will probably only support LibGuides 1 for another two years, and at Loyola we felt it was better to move sooner rather than later. Over the past few years there were perpetual LibGuides cleanup projects, and this seemed like a good opportunity to finalize that work. At the same time, we wanted to experiment with new designs for the library’s website that would bring it into closer alignment with the university’s new brand as well as make the site responsive, and LibGuides seemed like the ideal place to experiment with some of those ideas. Several new features, revealed on Springshare’s blog, resonated with subject-area specialists, which was another reason to push for migration sooner rather than later. We also wanted to have the new system in place before the first day of classes, which gave us a few months to experiment.

The Reference and Electronic Resources Librarian, Will Kent, the Head of Reference, Niamh McGuigan, and the Digital Services Librarian, Margaret Heller, worked in concert to make decisions, while also inviting all the other reference and instruction librarians (and anyone else who was interested) to participate in the process. There were a few ground rules the core team went by, however: we were definitely migrating, and the process would be iterative, i.e. we weren’t waiting for perfection to launch.

Planning the Migration

During the migration planning process, the small team of three librarians worked together to create a timeline, report to the library staff on progress, solicit feedback on the system, and update the LibGuides policies to reflect the new changes and functions. As far as the front-end migration went, we held large staff-wide meetings, provided updates, polled subject specialists on the progress, prepared our 400 databases for conversion to the new A-Z list, and demonstrated the new features and changes that staff should be aware of. We would relay updates from Springshare and handle any troubleshooting questions as they arose.

Given the new features (new categories, new ways of searching, the A-Z database list, and more), it was important for us to sit down, discuss standards, and update our content policies. The good news was that most of our content was in good shape for the migration. The process was swift and, barring the inevitable tiny bugs, went smoothly.

Our original timeline was to present the migration steps at our June monthly joint meeting of collections and reference staff and give staff one month, until the July meeting, to complete the work. For various reasons this ended up stretching until mid-August, but we still launched the day before classes began. We are constantly in the process of updating guide types, adding new resources, and re-classifying boxes to adhere to our new policies.

Working on the Design

LibGuides 2.0 provides two basic templates: a left navigation menu and a top tabbed menu that looks similar to the original LibGuides (additional templates are available with the LibGuides CMS product). We had originally discussed using the left navigation template and began a design based on it, but ultimately people felt more comfortable with the tabbed navigation.

[Image: whiteboard sketch of the LibGuides UI]

For the initial prototype, Margaret worked off a template that we’d used before for Omeka, which mirrors the Loyola University Chicago template very closely. We kept the standard LibGuides layout (1-3 columns, with the number of columns and the sections within each column determined by the page creator), but added a few additional pieces to the header and footer, as well as making big changes to the tabs.

The first step in planning the design was to understand which customizations happen in the template and which happen in the header and footer, which are entered separately in the admin UI. Margaret sketched out our vision for the site on the whiteboard wall to determine existing selectors and those that would need to be added, as well as to get a sense of whether we would need to change the content section at all. In the interests of completing the project in a timely fashion, we determined that the bare minimum of customization needed to unify the research guides with the rest of the university websites would be the first priority.

For those still planning a redesign, the Code4Lib community has many suggestions on what to consider. The main thing to consider is that LibGuides 2.0 is based on the Bootstrap 3.0 framework, which Michael Schofield recently implored us to use responsibly. Other important considerations are the accessibility of the solution you pick, and use of white space.

The Look & Feel section under ‘Admin’ has several tabs with sections for Header and Footer, Custom CSS/JS, and page layout; Guide Pages Layout is the most relevant for this post.

Just as in the previous version of LibGuides, one can enter custom code for the header and footer (which in this case is almost the same as on the regular library website), as well as link to a custom CSS file (we did not include any custom JavaScript here, but did include several Google Fonts and our custom icon). The Guide Pages Layout is new, and this is where one can edit the actual template that creates each page. We didn’t make any large changes here, but were still able to achieve a unique look with custom CSS.

The new LibGuides platform is responsive, but we needed to account for several items we added to the interface. We added a search box that would allow users to search the entire university website, as well as several new logos, so Margaret added a few media queries to adjust these features on a phone or tablet, as well as adjust the spacing of the custom footer.

Improving the Design

Our first design was ready to present to the subject librarians a month after the migration process started. It was based on the principle of matching the luc.edu pages closely (example), in which the navigation tabs across the top have unusual cutouts and section titles are very large. No one was very happy with this result, however, as it made the typical LibGuides layout with multiple sections on a page unusable and left the tabs insufficiently visible. While one approach would have been to change to a left navigation menu and limit the number of sections, the majority of the subject librarians preferred to keep things closer to what they had been, with a view to moving toward a potential new layout in the future.

Once we determined a literal interpretation of the university website was not usable for our content, we found inspiration for the template body from another section of the university website that was aimed at presenting a lot of dynamic content with multiple sections, but kept the standard luc.edu header. This allowed us to create a page that was recognizably part of Loyola, but presented our LibGuides content in a much more usable form.

Sticky Tabs


The other piece we borrowed from the university website was sticky tabs. This was an attempt to make the tabs more visible and usable based on what we knew from usability testing on the old platform and what users would already know from the university site. Because LibGuides is based on the Bootstrap framework, it was easy to drop this in using the Affix plugin (tutorial on how to use this)1. The tabs are translucent so they don’t obscure content as one scrolls down.
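
Because LibGuides 2.0 already loads Bootstrap’s JavaScript, only a few lines are needed to wire up Affix. The sketch below is illustrative rather than our exact setup; the selectors and offset are placeholders that would need to match your own template.

// Placeholder selectors; adjust them to the tab container and header in your template.
$('#guide-tabs').affix({
  offset: {
    top: function () {
      // Pin the tabs once the custom header has scrolled out of view.
      return $('#custom-header').outerHeight(true);
    }
  }
});
// A matching CSS rule such as "#guide-tabs.affix { top: 0; width: 100%; }"
// keeps the pinned tabs stretched across the top of the viewport.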

Our final result was much more popular with everyone. It has a subtle background color and border around each box with a section header that stands out but doesn’t overwhelm the content. The tabs are not at all like traditional LibGuides tabs, functioning somewhat more like regular header links.


Final result.

Next Steps

Over the summer we were not able to conduct usability testing on the new interface due to the tight timeline, so the first step this fall is to integrate it into our regular usability testing schedule to make iterative changes based on user feedback. We also need to continue to audit the page to improve accessibility.

The research guides are one of the most used links on our website (anywhere between 10,000 and 20,000 visits per month), so our top priority was to make sure the migration did not interfere with use – both in terms of patron access and content creation by the subject-area librarians. Thanks to our feedback sessions, good communication with Springshare, and a reliable new platform, the migration went smoothly and without interruption.

About our guest author: Will Kent is Reference/Instruction and Electronic Resources Librarian and subject specialist for Nursing and Chemistry at Loyola University Chicago. He received his MSLIS from University of Illinois Urbana-Champaign in 2011 with a certificate in Community Informatics.

Notes
  1. You may remember that in the Bootstrap Responsibly post Michael suggested it wasn’t necessary to use this, but it is the most straightforward way in LibGuides 2.0.

Using the Stripe API to Collect Library Fines by Accepting Online Payment

Recently, my library has been considering accepting library fine payments online. Small fines owed by many patrons are currently hard to collect: taken together, the amount is significant, but each individual fine often does not justify even the cost of the postage and staff time that go into creating and sending out a fine notice letter. Libraries that are able to collect fines through the bursar’s office of their parent institutions may have a better chance at collecting them. Others, however, can only expect patrons to show up with a check or to mail one in to clear their fines. Offering an online payment option for library fines is one way to make library service more user-friendly to patrons who are too busy to visit the library in person or to mail a check but are willing to pay online with their credit cards.

If you are new to the world of online payment, there are several terms you need to become familiar with. The following information, from an article in Six Revisions, is very useful for understanding those terms.1

  • ACH (Automated Clearing House) payments: Electronic credit and debit transfers. Most payment solutions use ACH to send money (minus fees) to their customers.
  • Merchant Account: A bank account that allows a customer to receive payments through credit or debit cards. Merchant providers are required to obey regulations established by card associations. Many processors act as both the merchant account as well as the payment gateway.
  • Payment Gateway: The middleman between the merchant and their sponsoring bank. It allows merchants to securely pass credit card information between the customer and the merchant and also between merchant and the payment processor.
  • Payment Processor: A company that a merchant uses to handle credit card transactions. Payment processors implement anti-fraud measures to ensure that both the front-facing customer and the merchant are protected.
  • PCI (the Payment Card Industry) Compliance: A merchant or payment gateway must set up their payment environment in a way that meets the Payment Card Industry Data Security Standard (PCI DSS).

Often, the same company functions as both payment gateway and payment processor, thereby processing the credit card payment securely. Such a product is called an ‘online payment system.’ Meyer’s article cited above also lists 10 popular online payment systems: Stripe, Authorize.Net, PayPal, Google Checkout, Amazon Payments, Dwolla, Braintree, Samurai by FeeFighters, WePay, and 2Checkout. Bear in mind that different payment gateways, merchant accounts, and bank accounts may or may not work together, that your bank may or may not work as a merchant account, and that your library may or may not already have a merchant account.2

Also note that there are fees for using online payment systems like these and that different systems have different fee structures. For example, Authorize.Net has a $99 setup fee and then charges $20 per month plus a $0.10 per-transaction fee. Stripe charges 2.9% + $0.30 per transaction with no setup or monthly fees. Fees for mobile payment solutions with a physical card reader, such as Square, may be much higher.

Among various online payment systems, I picked Stripe because it was recommended on the Code4Lib listserv. One of the advantages of using Stripe is that it acts as both the payment gateway and the merchant account. What this means is that your library does not have to have a merchant account to accept payment online. Another big advantage of using Stripe is that you do not have to worry about the PCI compliance part of your website, because the Stripe API uses a clever way to send the sensitive credit card information over to the Stripe server while keeping your local server, on which your payment form sits, completely blind to such sensitive data. I will explain this in more detail later in this post.

Below I will share some of the code that I have used to set up Stripe as my library’s online payment option for testing. This may be of interest to you if you are thinking about offering online payment as an option for your patrons or if you are simply interested in how an online payment API works. Even if your library doesn’t need to collect library fines online, an online payment option can be a handy tool for a small-scale fund-raising drive or for donations.

The first step in making Stripe work is getting API keys. You do not have to create an account to get API keys for testing, but if you are going to work on your code for more than one day, it’s probably worth getting an account. The Stripe API has excellent documentation. I read the ‘Getting Started’ section and then jumped over to the ‘Examples’ section, which can quickly get you off the ground. (https://stripe.com/docs/examples) I found an example by Daniel Schröter on GitHub in the list of examples in Stripe’s Examples section and decided to test it out. (https://github.com/myg0v/Simple-Bootstrap-Stripe-Payment-Form) Most of the time, getting example code to work requires some probing and tweaking, such as downloading all the required libraries, sorting out the paths in the code, and adding API keys. This one required relatively little work.

Now, let’s take a look at the form that this code creates.

[Screenshot: the payment form created by the example code]

In order to create a form of my own for testing, I decided to change a few things in the code.

  1. Add Patron & Payment Details.
  2. Allow custom amount for payment.
  3. Change the currency from Euro to US dollars.
  4. Configure the validation for new fields.
  5. Hide the payment form once the charge goes through instead of showing the payment form below the payment success message.

[Screenshot: the modified HTML for the new form fields]

Item 4 can be done as follows. The client-side validation is performed by the BootstrapValidator jQuery plugin, so you need to get the syntax right for the code, which now has new fields, to work properly.
[Screenshot: the BootstrapValidator configuration for the new fields]
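
In outline, the BootstrapValidator configuration looks something like this; the field names and messages below are illustrative placeholders rather than the exact ones in my form.

$(document).ready(function () {
  $('#payment-form').bootstrapValidator({
    fields: {
      // Each key matches the name attribute of a (non-sensitive) form field.
      feeCategory: {
        validators: {
          notEmpty: { message: 'Please choose a fee category.' }
        }
      },
      paymentAmount: {
        validators: {
          notEmpty: { message: 'Please enter a payment amount.' },
          numeric:  { message: 'The payment amount must be a number.' }
        }
      }
    }
  });
});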

This is the JavaScript that sends the data submitted to your payment form to the Stripe server. First, include the Stripe.js library, then jQuery, Bootstrap, the Bootstrap Form Helpers plugin, and the BootstrapValidator plugin. The next block of code adds an event handler to the form, which sends the payment information to Stripe via AJAX when the form is submitted. Stripe will validate the payment information and then return a token that identifies this particular transaction.

[Screenshot: the JavaScript that sends the form data to Stripe]
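
In outline, that handler follows the standard Stripe.js v2 pattern; the form ID and publishable key below are placeholders, and the card inputs are assumed to carry data-stripe attributes rather than name attributes.

Stripe.setPublishableKey('pk_test_YOUR_PUBLISHABLE_KEY');  // placeholder test key

$('#payment-form').submit(function (event) {
  var $form = $(this);
  // Disable the submit button to prevent repeated clicks while Stripe responds.
  $form.find('button').prop('disabled', true);
  // Send the card fields (marked with data-stripe attributes) to Stripe;
  // the callback receives either a token or an error.
  Stripe.card.createToken($form, stripeResponseHandler);
  // Prevent the form from being submitted to our own server for now.
  return false;
});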

When the token is received, this code calls the stripeResponseHandler() function. This function checks that the Stripe server did not return an error upon receiving the payment information and, if no error has been returned, attaches the token information to the form and submits the form.

[Screenshot: the stripeResponseHandler() function]
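
A sketch of that function, again in the Stripe.js v2 style and with placeholder selectors:

function stripeResponseHandler(status, response) {
  var $form = $('#payment-form');
  if (response.error) {
    // Show the error message and let the user correct the form and try again.
    $form.find('.payment-errors').text(response.error.message);
    $form.find('button').prop('disabled', false);
  } else {
    // Attach the one-time token as a hidden field and submit the form to our
    // own server; the card details themselves never reach that server.
    $form.append($('<input type="hidden" name="stripeToken" />').val(response.id));
    $form.get(0).submit();
  }
}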

The server-side PHP script then checks whether the Stripe token has been received and, if so, creates a charge and sends it to Stripe as shown below. I am using PHP here, but the Stripe API supports many languages other than PHP, such as Ruby and Python, so you have many options. The real payment amount appears as part of the charge array. If the charge succeeds, the payment success message is stored in a div to be displayed.

[Screenshot: the server-side PHP script that creates the charge]
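
My script is in PHP, but to give a rough sense of the same step in JavaScript terms, the equivalent charge creation with Stripe’s Node library looks something like the following; the amount, description, and secret key are placeholders.

var stripe = require('stripe')('sk_test_YOUR_SECRET_KEY');  // placeholder secret key

function createCharge(stripeToken, amountInCents, callback) {
  stripe.charges.create({
    amount: amountInCents,        // e.g. 500 for a $5.00 fine
    currency: 'usd',
    source: stripeToken,          // the token the client-side code attached to the form
    description: 'Library fine payment'
  }, function (err, charge) {
    // On success, the caller can display the payment success message.
    callback(err, charge);
  });
}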

The reason why you do not have to worry about PCI compliance with Stripe is that the Stripe API receives the payment information via AJAX, and the input fields for sensitive information have no name attribute or value. (See the Card Holder Name and Card Number fields in the screenshot below as an example.) By omitting the name attribute and value, the local server where the online form sits is deprived of any means to retrieve the information submitted in those input fields. Since sensitive information never touches the local server at all, PCI compliance for the local server becomes no concern. To clarify, not all fields in the payment form need to be deprived of the name attribute; only the sensitive fields that you do not want your web server to have access to need to be protected this way. Here, for example, I am assigning the name attribute and value to fields such as name and e-mail in order to use them later to send an e-mail receipt.


[Screenshot: payment form markup showing the card fields without name attributes]

Now the modified form has a ‘Fee Category’ field, a custom ‘Payment Amount’ field, and some other information relevant to my library’s billing purposes.

[Screenshot: the modified payment form]

When the payment succeeds, the page changes to display the following message.

[Screenshot: the payment success message]

Stripe provides a number of fake card numbers for testing, so you can test various failure cases. The Stripe website also displays all payments along with the tokens and charges associated with them, which greatly helps with troubleshooting. One thing I noticed while troubleshooting is that the Stripe logs sometimes lag behind: when a payment succeeds, the associated token and charge may not appear under the “Logs” section immediately. The payment itself does show up in the log right away, however, so you know the associated token and charge will eventually appear as well.

[Screenshot: recent payments listed in the Stripe dashboard]

Once you are ready to process real payment transactions, you need to flip the switch from TEST to LIVE located in the top left corner. You will also need to replace your ‘TEST’ API keys (both secret and publishable) with your ‘LIVE’ keys. One more thing needed before your library can get paid with real money online is setting up SSL (Secure Sockets Layer) for your live online payment page. This is not required for testing but is necessary for processing live payment transactions. It is not very complicated work, so don’t be discouraged at this point: you just have to buy a security certificate and install it on your Web server. Speak to your system administrator about getting SSL set up for your payment page; more information about setting up SSL can be found in the Stripe documentation linked above.

My library has not yet gone live with this online payment option. Before we do, I may make some more modifications to the code to better fit the staff workflow, which is still being mapped out. I am also planning to place the online payment page behind the university’s Shibboleth authentication in order to cut down on spam and to save library patrons some tedious data entry by pulling information such as name, university e-mail, and student/faculty/staff ID number directly from the campus directory exposed through Shibboleth and inserting it into the payment form fields automatically.

In this post, I have described my experience of testing out the Stripe API as an online payment solution. As I have mentioned above, however, there are many other online payment systems out there. Depending on your library’s environment and financial setup, different solutions may work better than others. To me, not having to worry about PCI compliance by using Stripe was a big plus. If your library accepts online payment, please share what solution you chose and what factors led you to that particular online payment system in the comments.

* This post has been based upon my recent presentation, “Accepting Online Payment for Your Library and ‘Stripe’ as an Example”, given at the Code4Lib DC Unconference. Slides are available at the link above.

Notes
  1. Meyer, Rosston. “10 Excellent Online Payment Systems.” Six Revisions, May 15, 2012. http://sixrevisions.com/tools/online-payment-systems/.
  2. Ullman, Larry. “Introduction to Stripe.” Larry Ullman, October 10, 2012. http://www.larryullman.com/2012/10/10/introduction-to-stripe/.

Websockets For Real-time And Interactive Interfaces

TL;DR WebSockets allows the server to push up-to-date information to the browser without the browser making a new request. Watch the videos below to see the cool things WebSockets enables.

Real-Time Technologies

You are on a Web page. You click on a link and you wait for a new page to load. Then you click on another link and wait again. It may only be a second or a few seconds before the new page loads after each click, but it still feels like it takes way too long for each page to load. The browser always has to make a request and the server gives a response. This client-server architecture is part of what has made the Web such a success, but it is also a limitation of how HTTP works. Browser request, server response, browser request, server response….

But what if you need a page to provide up-to-the-moment information? Reloading the page for new information is not very efficient. What if you need to create a chat interface or to collaborate on a document in real-time? HTTP alone does not work so well in these cases. When a server gets updated information, HTTP provides no mechanism to push that message to clients that need it. This is a problem because you want to get information about a change in chat or a document as soon as it happens. Any kind of lag can disrupt the flow of the conversation or slow down the editing process.

Think about when you are tracking a package you are waiting for. You may have to keep reloading the page for some time until there is any updated information. You are basically manually polling the server for updates. Using XMLHttpRequest (XHR), also commonly known as Ajax, has been a popular way to work around some of the limitations of HTTP. After the initial page load, JavaScript can be used to poll the server for any updated information without user intervention.

Using JavaScript in this way you can still use normal HTTP and almost simulate getting a real-time feed of data from the server. After the initial request for the page, JavaScript can repeatedly ask the server for updated information. The browser client still makes a request and the server responds, and the request can be repeated. Because this cycle is all done with JavaScript it does not require user input, does not result in a full page reload, and the amount of data which is returned from the server can be minimal. In the case where there is no new data to return, the server can just respond with something like, “Sorry. No new data. Try again.” And then the browser repeats the polling–tries again and again until there is some new data to update the page. And then goes back to polling again.
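
As a bare-bones illustration (the /updates endpoint and element ID here are hypothetical), polling with JavaScript looks something like this:

// Every five seconds, ask the server whether anything has changed.
setInterval(function () {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/updates');
  xhr.onload = function () {
    // Only touch the page when the server actually returned new data.
    if (xhr.status === 200 && xhr.responseText) {
      document.getElementById('latest').textContent = xhr.responseText;
    }
  };
  xhr.send();
}, 5000);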

This kind of polling has been implemented in many different ways, but all polling methods still have some queuing latency. Queuing latency is the time a message has to wait on the server before it can be delivered to the client. Until recently there has not been a standardized, widely implemented way for the server to send messages to a browser client as soon as an event happens. The server would always have to sit on the information until the client made a request. But there are a couple of standards that do allow the server to send messages to the browser without having to wait for the client to make a new request.

Server Sent Events (aka EventSource) is one such standard. Once the client initiates the connection with a handshake, Server Sent Events allows the server to continue to stream data to the browser. This is a true push technology. The limitation is that only the server can send data over this channel. In order for the browser to send any data to the server, the browser would still need to make an Ajax/XHR request. EventSource also lacks support even in some recent browsers like IE11.
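
A minimal client for Server Sent Events looks something like this (the /stream URL is hypothetical):

// After the initial handshake, the server can keep pushing messages down this channel.
var source = new EventSource('/stream');
source.onmessage = function (event) {
  // Each server-sent message arrives here without the browser asking again.
  document.getElementById('latest').textContent = event.data;
};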

WebSockets allows for full-duplex communication between the client and the server. The client does not have to open up a new connection to send a message to the server which saves on some overhead. When the server has new data it does not have to wait for a request from the client and can send messages immediately to the client over the same connection. Client and server can even be sending messages to each other at the same time. WebSockets is a better option for applications like chat or collaborative editing because the communication channel is bidirectional and always open. While there are other kinds of latency involved here, WebSockets solves the problem of queuing latency. Removing this latency concern is what is meant by WebSockets being a real-time technology. Current browsers have good support for WebSockets.
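
On the browser side, a minimal WebSockets client looks something like this (the endpoint is hypothetical):

// One persistent connection; both sides can send over it at any time.
var socket = new WebSocket('ws://example.org/socket');

socket.onopen = function () {
  // The client can send without making a new HTTP request...
  socket.send('Hello from the browser');
};

socket.onmessage = function (event) {
  // ...and the server can push messages whenever it has something new.
  console.log('Server says:', event.data);
};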

Using WebSockets solves some real problems on the Web, but how might libraries, archives, and museums use them? I am going to share details of a couple applications from my work at NCSU Libraries.

Digital Collections Now!

When Google Analytics first turned on real-time reporting it was mesmerizing. I could see what resources on the NCSU Libraries’ Rare and Unique Digital Collections site were being viewed at the exact moment they were being viewed. Or rather, I could view the URL for the resource being viewed. I happened to notice that there would sometimes be multiple people viewing the same resource at the same time. This gave me some hint that someone’s social share or forum post was getting a lot of click-throughs right now. Or sometimes there would be a story in the news and we had an image of one of the people involved. I could then follow up and see examples of where we were being effective with search engine optimization.

The Rare & Unique site has a lot of visual resources like photographs and architectural drawings. I wanted to see the actual images that were being viewed. The problem, though, was that Google Analytics does not have an easy way to click through from a URL to the resource on your site. I would have to retype the URL, copy and paste the part of the URL path, or do a search for the resource identifier. I just wanted to see the images now. (OK, this first use case was admittedly driven by one of the great virtues of a programmer–laziness.)

My first attempt at this was to create a page that would show the resources which had been viewed most frequently in the past day and past week. To enable this functionality, I added some custom logging that is saved to a database. Every view of every resource would just get a little tick mark that would be tallied up occasionally. These pages showing the popular resources of the moment are then regenerated every hour.

It was not a real-time view of activity, but it was easy to implement and it did answer a lot of questions for me about what was most popular. Some images are regularly in the group of the most-viewed images. I learned that people often visit the image of the NC State men’s basketball 1983 team roster which went on to win the NCAA tournament. People also seem to really like the indoor pool at the Biltmore estate.

Really Real-Time

Now that I had this logging in place I set about to make it really real-time. I wanted to see the actual images being viewed at that moment by a real user. I wanted to serve up a single page and have it be updated in real-time with what is being viewed. And this is where the persistent communication channel of WebSockets came in. WebSockets allows the server to immediately send these updates to the page to be displayed.
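
To give a rough idea of the push side (this is an illustrative sketch, not the actual NCSU code), a Node server using the ws library could broadcast each logged view to every connected page:

var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 8080 });  // port chosen arbitrarily

// A hypothetical hook the logging code would call whenever a resource is viewed.
function broadcastView(imageUrl) {
  wss.clients.forEach(function (client) {
    if (client.readyState === 1) {  // 1 === OPEN
      client.send(JSON.stringify({ type: 'view', url: imageUrl }));
    }
  });
}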

People have told me they find this real-time view to be addictive. I found it to be useful. I have discovered images I never would have seen or even known how to search for before. At least for me this has been an effective form of serendipitous discovery. I also have a better sense of what different traffic volumes actually feel like on a good day. You too can see what folks are viewing in real-time now. And I have written up some more details on how this is all wired up together.

The Hunt Library Video Walls

I also used WebSockets to create interactive interfaces on the Hunt Library video walls. The Hunt Library has five large video walls created with Christie MicroTiles. These very large displays each have their own affordances based on the technologies in the space and the architecture. The Art Wall is above the single service point just inside the entrance of the library and is visible from outside the doors on that level. The Commons Wall is in front of a set of stairs that also function as coliseum-like seating. The Game Lab is within a closed space and already set up with various game consoles.

Listen to Wikipedia

When I saw and heard the visualization and sonification Listen to Wikipedia, I thought it would be perfect for the iPearl Immersion Theater. Listen to Wikipedia visualizes and sonifies data from the stream of edits on Wikipedia. The size of the bubbles is determined by the size of the change to an entry, and the sound changes in pitch based on the size of the edit. Green circles show edits from unregistered contributors, and purple circles mark edits performed by automated bots. (These automated bots are sometimes used to integrate library data into Wikipedia.) A bell signals an addition to an entry. A string pluck is a subtraction. New users are announced with a string swell.

The original Listen to Wikipedia (L2W) is a good example of the use of WebSockets for real-time displays. Wikipedia publishes all edits for every language into IRC channels. A bot called wikimon monitors each of the Wikipedia IRC channels and watches for edits. The bot then forwards the information about the edits over WebSockets to the browser clients on the Listen to Wikipedia page. The browser then takes those WebSocket messages and uses the data to create the visualization and sonification.

As you walk into the Hunt Library almost all traffic goes past the iPearl Immersion Theater. The one feature that made this space perfect for Listen to Wikipedia was that it has sound and, depending on your tastes, L2W can create pleasant ambient sounds1. I began by adjusting the CSS styling so that the page would fit the large display. Besides setting the width and height, I adjusted the size of the fonts. I added some text to a panel on the right explaining what folks are seeing and hearing. On the left is now text asking passersby to interact with the wall and the list of languages currently being watched for updates.

One feature of the original L2W that we wanted to keep was the ability to change which languages are being monitored and visualized. Each language can individually be turned off and on. During peak times the English Wikipedia alone can sound cacophonous. An active bot can make lots of edits of all roughly similar sizes. You can also turn off or on changes to Wikidata which collects structured data that can support Wikipedia entries. Having only a few of the less frequently edited languages on can result in moments of silence punctuated by a single little dot and small bell sound.

We wanted to keep the ability to change the experience and actually get a feel for the torrent or trickle of Wikipedia edits and allow folks to explore what that might mean. We currently have no input device for directly interacting with the Immersion Theater wall. For L2W the solution was to allow folks to bring their own devices to act as a remote control. We encourage passersby to interact with the wall with a prominent message. On the wall we show the URL to the remote control. We also display a QR code version of the URL. To prevent someone in New Zealand from controlling the Hunt Library wall in Raleigh, NC, we use a short-lived, three-character token.

Because we were uncertain how best to allow a visitor to kick off an interaction, we included both a URL and QR code. They each have slightly different URLs so that we can track use. We were surprised to find that most of the interactions began with scanning the QR code. Currently 78% of interactions begin with the QR code. We suspect that we could increase the number of visitors interacting with the wall if there were other simpler ways to begin the interaction. For bring-your-own-device remote controls we are interested in how we might use technologies like Bluetooth Low Energy within the building for a variety of interactions with the surroundings and our services.

The remote control Web page is a list of big checkboxes next to each of the languages. Clicking on one of the languages turns its stream on or off on the wall (connects or disconnects one of the WebSockets channels the wall is listening on). The change happens almost immediately with the wall showing a message and removing or adding the name of the language from a side panel. We wanted this to be at least as quick as the remote control on your TV at home.

The quick interaction is possible because of WebSockets. Both the browser page on the wall and the remote control client listen on another WebSockets channel for such messages. This means that as soon as the remote control sends a message to the server it can be sent immediately to the wall and the change reflected. If the wall were using polling to get changes, then there would potentially be more latency before a change registered on the wall. The remote control client also uses WebSockets to listen on a channel waiting for updates. This allows feedback to be displayed to the user once the change has actually been made. This feedback loop communication happens over WebSockets.

Having the remote control listen for messages from the server also serves another purpose. If more than one person enters the space to control the wall, what is the correct way to handle that situation? If there are two users, how do you accurately represent the current state on the wall for both users? Maybe once the first user begins controlling the wall it locks out other users. This would work, but then how long do you lock others out? It could be frustrating for a user to have launched their QR code reader, lined up the QR code in their camera, and scanned it only to find that they are locked out and unable to control the wall. What I chose to do instead was to have every message of every change go via WebSockets to every connected remote control. In this way it is easy to keep the remote controls synchronized. Every change on one remote control is quickly reflected on every other remote control instance. This prevents most cases where the remote controls might get out of sync. While there is still the possibility of a race condition, it becomes less likely with the real-time connection and is harmless. Besides not having to lock anyone out, it also seems like a lot more fun to notice that others are controlling things as well–maybe it even makes the experience a bit more social. (Although, can you imagine how awful it would be if everyone had their own TV remote at home?)
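
In rough outline (the endpoint and message format here are made up for illustration rather than taken from the exhibit code), the pattern looks like this:

var socket = new WebSocket('wss://example.org/l2w-control');  // hypothetical endpoint

// Remote control: the user taps a language checkbox.
function toggleLanguage(lang, enabled) {
  socket.send(JSON.stringify({ type: 'toggle', lang: lang, enabled: enabled }));
}

// Wall and every remote control: react to each change so all clients stay in sync.
socket.onmessage = function (event) {
  var msg = JSON.parse(event.data);
  if (msg.type === 'toggle') {
    updateLanguageList(msg.lang, msg.enabled);  // hypothetical UI update function
  }
};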

I also thought it was important for something like an interactive exhibit around Wikipedia data to provide the user some way to read the entries. From the remote control the user can get to a page which lists the same stream of edits that are shown on the wall. The page shows the title for the most recently edited entry at the top of the page and pushes others down the page. The titles link to the current revision for that page. This page just listens to the same WebSockets channels as the wall does, so the changes appear on the wall and remote control at the same time. Sometimes the stream of edits can be so fast that it is impossible to click on an interesting entry. A button allows the user to pause the stream. When an intriguing title appears on the wall or there is a large edit to a page, the viewer can pause the stream, find the title, and click through to the article.

The reaction from students and visitors has been fun to watch. The enthusiasm has had unexpected consequences. For instance one day we were testing L2W on the wall and noting what adjustments we would want to make to the design. A student came in and sat down to watch. At one point they opened up their laptop and deleted a large portion of a Wikipedia article just to see how large the bubble on the wall would be. Fortunately the edit was quickly reverted.

We have also seen the L2W exhibit pop up on social media. This Instagram video was posted with the comment, “Reasons why I should come to the library more often. #huntlibrary.”

This is people editing–Oh, someone just edited Home Alone–editing Wikipedia in this exact moment.

The original Listen to Wikipedia is open source. I have also made the source code for the Listen to Wikipedia exhibit and remote control application available. You would likely need to change the styling to fit whatever display you have.

Other Examples

I have also used WebSockets for some other fun projects. The Hunt Library Visualization Wall has a unique columnar design, and I used it to present images and video from our digital special collections in a way that allows users to change the exhibit. For the Code4Lib talk this post is based on, I developed a template for creating slide decks that include audience participation and synchronized notes via WebSockets.

Conclusion

The Web is now a better development platform for creating real-time and interactive interfaces. WebSockets provides the means for sending real-time messages between servers, browser clients, and other devices. This opens up new possibilities for what libraries, archives, and museums can do to provide up-to-the-moment data feeds and to create engaging interactive interfaces using Web technologies.

If you would like more technical information about WebSockets and these projects, please see the materials from my Code4Lib 2014 talk (including speaker notes) and some notes on the services and libraries I have used. There you will also find a post with answers to the (serious) questions I was asked during the Code4Lib presentation. I’ve also posted some thoughts on designing for large video walls.

Thanks: Special thanks to Mike Nutt, Brian Dietz, Yairon Martinez, Alisa Katz, Brent Brafford, and Shirley Rodgers for their help with making these projects a reality.


About Our Guest Author: Jason Ronallo is the Associate Head of Digital Library Initiatives at NCSU Libraries and a Web developer. He has worked on lots of other interesting projects. He occasionally writes for his own blog Preliminary Inventory of Digital Collections.

Notes

  1. Though honestly Listen to Wikipedia drove me crazy listening to it so much as I was developing the Immersion Theater display.

Library & Academic Tech Conferences Roundup

Here we present a summary of various library technology conferences that ACRL TechConnect authors have been to. There are a lot of them and some fairly niche. So we hope this guide serves to assist neophytes and veterans alike in choosing how they spend their limited professional development monies. Do you attend one of these conferences every year because it’s awesome? Did we miss your favorite conference? Let us know in the comments!

The lisevents.com website might be of interest, as it compiles LIS conferences of all types. Also, one might be able to get a sense of the content of a conference by searching for its hashtag on Twitter. Most conferences list their hashtag on their website.

Access

  • Time: late in the year, typically September or October
  • Place: Canada
  • Website: http://accessconference.ca/
  • Access is Canada’s annual library technology conference. Although the focus is primarily on technology, a wide variety of topics are addressed by librarians in various areas of specialization, ranging from linked data, innovation, and makerspaces to digital archiving. (See the past conferences’ schedules: http://accessconference.ca/about/past-conferences/) Access provides an excellent opportunity to get an international perspective without traveling too far. Access is also a single-track conference, offers great opportunities to network, and starts with preconferences and a hackathon, which welcomes all types of librarians, not just library coders. Both the preconferences and the hackathon are optional but highly recommended. (p.s. One of the ACRL TechConnect authors thinks that this is the conference with the best conference lunch and snacks.)

Code4Lib

  • Time: early in the year, typically February but this year in late March
  • Place: varies
  • Website: http://code4lib.org/conference/
  • Code4Lib is unique in that it is organized by a group of volunteers and not supported by any formal organization. While it does cover some more general technology concepts, the conference tends to be focused on coding, naturally. Preconferences from past years have covered the Railsbridge curriculum for learning Ruby on Rails and Blacklight, the open source discovery interface. Code4Lib moves quickly—talks are short (20 minutes) with even shorter lightning talks thrown in—but is also all on one track in the same room; attendees can see every presentation.

Computers in Libraries

  • Time: Late March or early April
  • Place: Washington, DC
  • Website: http://www.infotoday.com/conferences.asp
  • Computers in Libraries is a for-profit conference hosted by Information Today. Its use of tracks, organizing presentations around a topic or group of topics, is a useful way to attend a conference and its overall size is more conducive to networking, socializing, and talking with vendors in the exhibit hall than many other conferences. However, the role of consultants in panel and presentation selection and conference management, as opposed to people who work in libraries, means that there is occasionally a focus on trends that are popular at the moment, but don’t pan out, as well as language more suited to an MBA than an MLIS. The conference also lacks a code of conduct and given the corporate nature of the conference, the website is surprisingly antiquated.
  • They also run Internet Librarian, which meets in Monterey, California, every fall.
    — Jacob Berg, Library Director, Trinity Washington University

Digital Library Federation Forum

  • Time: later in the year, October or November
  • Place: varies
  • Website: http://www.diglib.org/
  • We couldn’t find someone who attended this. If you have, please add your review of this conference in the comments section!

edUI

  • Time: late in the year, typically November
  • Place: Richmond, VA
  • Website: http://eduiconf.org/
  • Not a library conference, edUI is aimed at web professionals working in higher education but draws a fair number of librarians. The conference tends to draw excellent speakers, both from within higher education and the web industry at large. Sessions cover user experience, design, social media, and current tools of the trade. The talks suit a broad range of specialties, from programmers to people who work on the web but aren’t technologists foremost.

Electronic Resources & Libraries

  • Time: generally early in the year, late-February to mid-March.
  • Place: Austin, TX
  • Website: http://www.electroniclibrarian.com/
  • The main focus of this conference is workflows and issues surrounding electronic resources (such as licensed databases and online journals), and understanding these is crucial to anyone working with library technology, whether or not they manage e-resources on a daily basis. In recent years the conference has expanded greatly into areas such as open access and user experience, with tracks specifically dedicated to those areas. This year there were also some overlapping programs and themes with SXSW and the Leadership, Technology, Gender Summit.

Handheld Librarian

  • Time: held a few times throughout the year
  • Place: online
  • Website: http://handheldlibrarian.org
  • An online conference devoted specifically to mobile technologies. The advantage of this conference is that, without traveling, you can get a glimpse of current developments and applications of mobile technologies in libraries. It originally started in 2009 as an annual one-day online conference based upon accepted presentation proposals submitted in advance. The conference has gone through some changes in recent years, and now it offers a separate day of workshops and focuses on a different theme in mobile technologies in libraries. All conference presentations and workshops are recorded. If you are interested in attending, it is a good idea to check out the presentations and the speakers in advance.

Internet Librarian

  • Time: October
  • Place: Monterey, CA
  • Website: http://www.infotoday.com/conferences.asp
  • Internet Librarian is a for-profit conference hosted by Information Today. It is quite similar to Information Today’s Computers in Libraries, utilizing tracks to organize a large number of presentations covering a broad swath of library information technology topics. Internet Librarian also hosts the Internet @ Schools track, which focuses on the IT needs of the K-12 library community. IL is held annually in Monterey, California in October. The speaker list is deep and varied, and one can expect keynote speakers to be prominent and established names in the field. The conference is well attended and provides a good opportunity to network with library technology peers. As with Computers in Libraries, there is no conference code of conduct.

KohaCon

  • Time: varies, typically in the second half of the year
  • Place: varies, international
  • Website: http://koha-community.org/kohacon/
  • The annual conference devoted to the Koha open source ILS.

 Library Technology Conference

  • Time: mid-March
  • Place: St. Paul, MN
  • Website: http://libtechconf.org/
  • LTC is an annual library conference that takes place in March. It’s both organized by and takes place at Macalester College in St. Paul. Not as tech-heavy as a Code4Lib or even an Access, LTC talks span a whole range of technical aptitudes. Given the time and location of LTC, it has historically been primarily of regional interest but has seen increasing levels of participation nationally and internationally.
    — John Fink, Digital Scholarship Librarian, McMaster University
  • We asked Twitter for a short overview of Library Technology Conference, and Matthew Reidsma offered up this description:

LITA Forum

  • Time: Late in the year, typically November
  • Place: varies
  • Website: http://www.ala.org/lita/conferences
  • A general library technology conference that’s moderately sized, with some 300 attendees most years. One of LITA’s nice aspects is that, because of the smaller size of the conference and the arranged networking dinners, it’s very easy to meet other librarians. You need not be involved with LITA to attend and there are no committee or business meetings.

Open Repositories

  • Time: mid-summer, June or July
  • Place: varies, international
  • Website: changes each year, here are the 2013 and 2014 sites
  • A mid-sized conference focused specifically on institutional repositories.

Online NorthWest

  • Time: February
  • Place: Corvallis, OR
  • Website: http://onlinenorthwest.org/
  • A small library technology conference in the Pacific Northwest. It is hosted by the Oregon University System but invites content from Public, Medical, Special, Legal, and Academic libraries.

THATcamps

  • Time: all the time
  • Place: varies, international
  • Website: http://thatcamp.org/
  • Every THATCamp is different, but all revolve around technology and the humanities (i.e. The Humanities And Technology Camp). They are unconferences with “no spectators”, and so will reflect the interests of the participants. Some have specific themes such as digital pedagogy, others are attached to conferences as pre- or post-conference events, and some are more general regional events. Librarians are important participants in THATCamps, and if there is one in your area or at a conference you’re attending, you should go. They cost under $30 and are a great networking and education opportunity. Sign up for the THATCamp mailing list or subscribe to the RSS feed to find out about new THATCamps. They have an attendee limit and usually fill up quickly.

What is Node.js & why do I care?

At its simplest, Node.js is server-side JavaScript. JavaScript is a popular programming language, but it almost always runs inside a web browser. So JavaScript can, for instance, manipulate the contents of this page by being included inside <script> tags, but it doesn’t get to play around with the files on our computer or tell our server what HTML to send like the PHP that runs this WordPress blog.

Node is interesting for more than just being on the server side. It provides a new way of writing web servers while using an old UNIX philosophy. Hopefully, by the end of this post, you’ll see its potential and how it differentiates itself from other programming environments and web frameworks.

Hello, World

To start, let’s do some basic Node programming. Head over to nodejs.org and click Install.1 Once you’ve run the installer, a node executable will be available for you on the command line. Any script you pass to node will be interpreted and the results displayed. Let’s do the classic “hello world” example. Create a new file in a text editor, name it hello.js, and put the following on its only line:

console.log('Hello, world!');

If you’ve written JavaScript before, you may recognize this already. console.log is a common debugging method which prints strings to your browser’s JavaScript console. In Node, console.log will output to your terminal. To see that, open up a terminal (on Mac, you can use Terminal.app while on Windows both cmd.exe and PowerShell will work) and navigate to the folder where you put hello.js. Your terminal will likely open in your user’s home folder; you can change directories by typing cd followed by a space and the subdirectory you want to go inside. For instance, if I started at “C:\users\me” I could run cd Documents to enter “C:\users\me\Documents”. Below, we open a terminal, cd into the Documents folder, and run our script to see its results.

$ cd Documents
$ node hello.js
Hello, world!

That’s great and all, but it leaves a lot to be desired. Let’s do something a little more sophisticated; let’s write a web server which responds “Hello!” to any request sent to it. Open a new file up, name it server.js, and write this inside:

var http = require('http');
http.createServer(handleRequest).listen(8888);
function handleRequest (request, response) {
  response.end( 'Hello!' );
}

In our terminal, we can run node server.js and…nothing happens. Our prompt seems to hang, not outputting anything but also not letting us type another command. What gives? Well, Node is running a web server and it’s waiting for requests. Open up your web browser and navigate to “localhost:8888”; the exclamation “Hello!” should appear. In four lines of code, we just wrote an HTTP server. Sure, it’s the world’s dumbest server that only says “Hello!” over and over no matter what we request from it, but it’s still an achievement. If you’re the sort of person who gets giddy at how easy this was, then Node.js is for you.

Let’s walk through server.js line-by-line. First, we import the core HTTP library that comes with Node. The “require” function is a way of loading external modules into your script, similar to require in Ruby or import in Python. The HTTP library gives us a handy “createServer” method which receives HTTP requests and passes them along to a callback function. On the second line, we call createServer, pass it the function we want to handle incoming requests, and set it to listen for requests sent to port 8888. The choice of 8888 is arbitrary; we could choose almost any number above 1024, since operating systems restrict the lower ports, many of which are already reserved for specific protocols. Finally, we define our handleRequest callback which will receive a request and response object for each HTTP request. Those objects have many useful properties and methods, but here we simply call the response object’s end method, which sends a response and optionally accepts some data to put into that response.
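
Those request and response objects are worth playing with. Here’s a rough sketch of a variation on server.js that checks the requested URL and sets a status code and header before responding; request.url, response.writeHead, and response.end are all part of Node’s core HTTP module.

var http = require('http');

http.createServer(function (request, response) {
  // request.url holds the path the browser asked for, e.g. "/" or "/hello"
  if (request.url === '/hello') {
    // writeHead sets the HTTP status code and any response headers
    response.writeHead(200, { 'Content-Type': 'text/plain' });
    response.end('Hello!');
  } else {
    response.writeHead(404, { 'Content-Type': 'text/plain' });
    response.end('Not found.');
  }
}).listen(8888);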

The use of callback functions is very common in Node. If you’ve written JavaScript for a web browser you may recognize this style of programming; it’s the same as when you define an event listener which responds to mouse clicks, or assign a function to process the result of an AJAX request. The callback function doesn’t execute synchronously in the order you wrote it in your code; instead, it waits for some “event” to occur, whether that event is a click or an AJAX request returning data.
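
A minimal sketch of that non-linear ordering, using the built-in setTimeout function:

// The callback is handed to setTimeout and runs two seconds later,
// long after the rest of the script has finished.
setTimeout(function () {
  console.log('...two seconds later, the callback fires.');
}, 2000);

console.log('This prints first, even though it appears second in the code.');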

In our HTTP server example, we also see a bit of what makes Node different from other server-side languages like PHP, Perl, Python, and Ruby. Those languages typically work with a web server, such as Apache, which passes certain requests over to the language and serves up whatever it returns. Node is the server; it gives you low-level access to the inner workings of protocols like HTTP and TCP. You don’t need to run Apache and have requests sent to Node: it handles them on its own.

Who cares?

Some of you are no doubt wondering: what exactly is the big deal? Why am I reading about this? Surely the world has enough programming languages, and JavaScript is nothing new; even server-side JavaScript isn’t that new.2 There are already plenty of web servers out there. What need does Node.js fill?

To answer that, we must revisit the origins of Node. The best way to understand is to watch Ryan Dahl present on the impetus for creating Node. He says, essentially, that other programming frameworks are doing IO (input/output) wrong. IO comes in many forms: when you’re reading or writing to a file, when you’re querying databases, and when you’re receiving and sending HTTP requests. In all of these situations, your code asks for data…waits…and waits…and then, once it has the data, it manipulates it or performs some calculation, and then sends it somewhere else…and waits…and waits. Because the code is constantly waiting for some IO operation, it spends most of its time sitting around rather than crunching numbers like it wants to. IO operations are commonly the bottlenecks in programs, so we shouldn’t let our code just stop dead every time it performs one. Node’s answer is to make IO non-blocking: your code hands Node a callback, keeps working, and the callback runs once the data is ready.
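
To make the contrast concrete, here’s a small sketch using Node’s core fs module (spreadsheet.csv is just a placeholder file name): the synchronous call stops everything until the file has been read, while the asynchronous call hands the waiting off to a callback and keeps going.

var fs = require('fs');

// Blocking: nothing else happens until the whole file has been read.
var data = fs.readFileSync('spreadsheet.csv', 'utf-8');
console.log('Read ' + data.length + ' characters.');

// Non-blocking: Node starts the read, moves on, and calls back later.
fs.readFile('spreadsheet.csv', 'utf-8', function (err, contents) {
  if (err) { return console.log('Could not read the file: ' + err.message); }
  console.log('Read ' + contents.length + ' characters (asynchronously).');
});
console.log('Meanwhile, the program is free to do other work.');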

Node not only has a beneficial asynchronous programming model but has developed other advantages as well. Because lots of people already know how to write JavaScript, Node has caught on much more quickly than languages which are entirely new to developers. It reuses Google Chrome’s V8 as its JavaScript interpreter, giving it a big speed boost. Node’s package manager, NPM, is growing at a tremendous rate, far faster than the sibling package managers for Java, Ruby, and Python. NPM itself is a strong point of Node; it has learned from other package managers and has many excellent features. Finally, other programming languages were developed to be all-purpose tools. Node, while it shares that all-purpose utility, is really intended for the web: it’s meant to write web servers and handle HTTP intelligently.

Node also follows many UNIX principles. Doug McIlroy succinctly summarized the UNIX philosophy as “Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.” NPM does a great job of letting authors write small modules which work well together. This has previously been tough in JavaScript because web browsers have no “require” function; there’s no native way for modules to define and load their dependencies, which resulted in the popularity of large, complicated libraries.3 jQuery is a good example; it’s tremendously popular and its API includes hundreds of functions, while most sites that use it really only need a few. Large, complicated programs are more difficult to test, debug, and reason about, which is why UNIX avoided them.
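
Node’s own module system shows the small-pieces approach in action. Here’s a minimal sketch, split across two hypothetical files: stats.js exports a single function, and app.js pulls it in with require.

// stats.js -- a tiny module that does one thing
exports.mean = function (numbers) {
  var sum = numbers.reduce(function (a, b) { return a + b; }, 0);
  return sum / numbers.length;
};

// app.js -- another file loads the module with require
var stats = require('./stats');
console.log(stats.mean([2, 4, 6]));
// prints 4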

Many Node modules also support streams, which allow you to pipe data through a series of programs. This is analogous to how Bash and other shells let you pipe text from one command to another, with each command taking the output of the last as its input. To visualize this, see the Stream Playground written by John Resig, creator of jQuery. Streams let you plug in different functionality when needed. This pseudocode shows how one might read a CSV from a server’s file system (the core “fs” library stands for “file system”), filter out certain rows, and send it over HTTP:

fs.createReadStream('spreadsheet.csv').pipe(filter).pipe(http);
// Want to compress the response? Just add another pipe.
fs.createReadStream('spreadsheet.csv').pipe(filter).pipe(compressor).pipe(http);

Streams have the advantage of limiting how much memory a program uses because only small portions of data are being operated on at once. Think of the difference between copying a million-line spreadsheet all at once or line-by-line; the second is less likely to crash or run into the limit of how much data the system clipboard can hold.
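
Here’s a sketch of what a working version of that pipeline might look like, using only the core fs, zlib, and http modules and assuming a spreadsheet.csv file sits next to the script. Each chunk of the file is compressed and sent as soon as it’s read, so the whole spreadsheet never has to sit in memory at once.

var fs = require('fs');
var zlib = require('zlib');
var http = require('http');

http.createServer(function (request, response) {
  response.writeHead(200, { 'Content-Encoding': 'gzip' });
  // Read the file as a stream, gzip each chunk, and pipe it into the response.
  fs.createReadStream('spreadsheet.csv')
    .pipe(zlib.createGzip())
    .pipe(response);
}).listen(8888);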

Libraryland Examples

Node is still very new and there aren’t a lot of prominent examples of library usage. I’ll present a few below, but I think Node is mostly worth knowing about as a major trend in web development.

Most amusingly, Ed Summers of the Library of Congress and Sean Hannan of Johns Hopkins University made a Cataloging Highscores page that presents original cataloging performed in WorldCat in a retro arcade-style display. This app uses the popular socket.io module that establishes a real-time connection between your browser and the server, a strength of Node. Any web service that needs to be continually updated is a prime candidate for Node.js: current news articles, social media streams, auto-complete suggestions as a user types in search terms, and chat reference all come to mind. In fact, SpringShare’s LibChat uses socket.io as well, though I can’t tell if it’s using Node on the server or PHP. A similar example of real-time updating, also by Ed Summers, is Wikistream which streams the dizzying number of edits happening on various Wikipedias through your browser.4
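
To give a flavor of how little code that real-time plumbing takes, here’s a minimal server-side sketch with socket.io (the exact API varies between socket.io versions, and the event name “news” is made up for illustration):

var http = require('http');
var server = http.createServer();
// socket.io 1.x style; older releases use require('socket.io').listen(server)
var io = require('socket.io')(server);

io.on('connection', function (socket) {
  // Push a message to the browser the moment it connects; a page using
  // the socket.io client script would listen for the "news" event.
  socket.emit('news', { headline: 'A new record was just cataloged!' });
});

server.listen(8888);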

There was a lightning talk on Node at Code4Lib 2010 which mentioned writing a connector to the popular Apache Solr search platform. Aaron Coburn’s proposed talk for Code4Lib 2014 mentions that Amherst is using Node to build the web front-end to their Fedora-based digital library.

Tools You Can Use

With the explosive growth of NPM, there are already tons of useful tools written in Node. While many of these are tools for writing web servers, like Express, some are command line programs you can use to accomplish a variety of tasks.

Yeoman is a scaffolding application that makes it easy to produce various web apps by giving you expert templates. You can install separate generators that produce templates for things like a Twitter Bootstrap site, a JavaScript bookmarklet, a mobile site, or a project using the Angular JavaScript MVC framework. Running yo angular to invoke the Angular generator gives you a lot more than just a base HTML file and some JavaScript libraries; it also provides a series of Grunt tasks for testing, running a development server, and building a site optimized for production. Grunt is another incredibly useful Node project, dubbed “the JavaScript task runner.” It lets you pick from hundreds of community plugins to automate tedious tasks like minifying and concatenating your scripts before deploying a website.
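
For a taste of what that looks like, here’s a minimal sketch of a Gruntfile.js that minifies a script with the grunt-contrib-uglify plugin (the file paths are made up; the plugin itself is installed with NPM):

module.exports = function (grunt) {
  grunt.initConfig({
    uglify: {
      build: {
        src: 'js/app.js',        // hypothetical source script
        dest: 'js/app.min.js'    // minified output
      }
    }
  });

  // Load the minification plugin and make it the default task,
  // so running "grunt" on the command line minifies the script.
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('default', ['uglify']);
};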

Finally, another tool I like is phantomas, a Node project that works with PhantomJS to run a suite of performance tests on a site. It provides more detailed reports than any other performance tool I’ve used, telling you things like how many DOM queries ran and the median latency of HTTP requests.

Learn More

Nodeschool.io features a growing number of lessons on using Node. Better yet, the lessons are actually written in Node, so you install them with NPM and verify your results on the command line. There are several topics, from basics to using streams to working with databases.

Nettuts+, always a good place for coding tutorials, has an introduction to Node which takes you from installation to coding a real-time server. If you want to learn about writing a real-time chat application with socket.io, they have a tutorial for that, too.

If you want a broad and thorough overview, there are a few introductory books on Node, with The Node Beginner Book offering several free chapters. O’Reilly’s Node for Front-End Developers is also a good starting point.

How to Node is a popular blog with articles on various topics, though some are too in-depth for beginners. I’d head here if you want to learn more on a specific topic, such as streams, or working with particular databases like MongoDB.

Finally, the Node API docs are a good place to go when you get stuck using a particular core module.

Notes

  1. If you use a package manager, such as Homebrew on Mac OS X or APT on Linux, Node is likely available within it. One caveat I have noticed is that the stock Debian/Ubuntu apt-get install nodejs is a few major versions behind; you may want to add Chris Lea’s PPA to get a current version. If you’re subject to the whims of your IT department, you may need to convince them to install Node for you, or talk to your sysadmin to get it on your server. Since it’s a rather new technology, don’t be surprised if you have to explain what it is and why you want to try it out.
  2. Previous projects, including Rhino from Mozilla and Narwhal, have let people use JavaScript outside the browser. Node, however, has caught on far more than either of these projects, for some of the reasons outlined in this post.
  3. RequireJS is one project that’s trying to address this need. The ECMAScript standard that defines JavaScript is also working on native modules but they’re in draft form and it’ll be a long time before all browsers support them.
  4. If you’re curious, the code for both Cataloging Highscores and Wikistream are open source and available on GitHub.

An Experiment with Publishing on GitHub

Scholarly publishing, if you haven’t noticed, is nearing a crisis. Authors are questioning the value added by publishers. Open Access publications are growing in number and popularity. Peer review is being criticized and re-invented. Libraries are unable to pay price increases for subscription journals. Traditional measures of scholarly impact and journal rankings are being questioned while new ones are developed. Fresh business models or publishing platforms appear to spring up daily.1

I personally am a little frustrated with scholarly publishing, albeit for reasons not entirely related to the above. I find that most journals haven’t adapted to the digital age yet and thus are still employing editorial workflows and yielding final products suited to print.

How come I have yet to see a journal article PDF with clickable hyperlinks? For that matter, why is PDF still the dominant file format? What advantage does a fixed-width format hold over flexible, fluid-width HTML?2 Why are raw data not published alongside research papers? Why are software tools not published alongside research papers? How come I’m still submitting black-and-white charts to publications which are primarily read online? Why are digital-only publications still bound to regular publication schedules when they could publish like blogs, as soon as the material is ready? To be fair, some journals have answered some of these questions, but the issues are still all too frequent.

So, as a bit of an experiment, I recently published a short research study entirely on GitHub.3 I included the scripts used to generate data, the data, and an article-like summary of the whole process.

What makes it possible

Unfortunately, I wouldn’t recommend my little experiment for most scholars, except perhaps for pre- or post-prints of work published elsewhere. Why? The primary reason people publish research is for tenure review, for enhancing a CV. I won’t list my study on mine—though, arguably, I should be able to—simply because it didn’t go through the usual scholarly publishing gauntlet. It wasn’t peer-reviewed, it didn’t appear in a journal, and it wouldn’t count for much in the eyes of traditional faculty members.

However, I’m at a community college. Research and publication are not among my position’s requirements. I’m judged on my teaching and various library responsibilities, while publications are an unnecessary bonus. Would it help to have another journal article on my CV? Yes, probably. But there’s little pressure and personally I’m more interested in experimentation than in lengthening my list of publications.

Other researchers might also worry about someone stealing their ideas or data if they begin publishing an incomplete project. For me, again, publication isn’t really a competitive field. I would be happy to see someone reuse my project, even if they didn’t give proper attribution back. Openness is an advantage, not a vulnerability.

It’s ironic that being at a non-research institution frees me up to do research. It’s done mostly in my free-time, which isn’t great, but the lack of pressure means I can play with modes of publication, or not worry about the popularity of journals I submit to. To some degree, this is indicative of structural problems with scholarly publishing: there’s inertia in that, in order to stay in the game and make a name for yourself, you can’t do anything too wild. You need to publish, and publish in the recognized titles. Only tenured faculty, who after all owe at least some of their success to the current system, can risk dabbling with new publishing models and systems of peer-review.

What’s really good

GitHub, and the web more generally, are great media for scholarship. They address several of my prior questions.

For one, the web is just as suited to publishing data as text. There’s no limit on file format or (practically) size. Even if I was analyzing millions of data points, I could make a compressed archive available for others to download, verify, and reuse in their own research. For my project, I used a Google Spreadsheet which allows others to download the data or simply view it on the web. The article itself can be published on GitHub Pages, which provides free hosting for static websites.


Here’s how the final study looks when published on GitHub Pages.

While my study didn’t undergo any peer review, it is open for feedback via a pull request or the “issues” queue on GitHub. Typically, peer review is a closed process. It’s not apparent what criticisms were leveled at an article, or what the authors did to address them. Having peer review out in the open not only illuminates the history of a particular article but also makes it easier to see the value being added. Luckily, there are more and more journals with open peer review, such as PeerJ which we’ve written about previously. When I explain peer review to students, I often open up the “Peer Review history” section of a PeerJ article. Students can see that even articles written by professional researchers have flaws which the reviewing process is designed to identify and mitigate.

Another benefit of open peer review, present in publishing on GitHub too, is the ability to link to specific versions of an article. This has at least two uses. First of all, it has historical value in that one can trace the thought process of the researcher. Much like original manuscripts are a source of insight for literary analyses, merely being able to trace the evolution of a journal article enables new research projects in and of itself.

Secondly, because web content is a moving target that gets revised over time, being able to link to specific versions aids those referencing a work. Linking to a git “commit” (think of it as a snapshot of the project at a particular point in time), possibly using perma.cc or the Internet Archive to store a copy of the project as it existed then, is an elegant way of solving this problem. For instance, at one point I manually removed some data points which were inappropriate for the study I was performing. One can inspect the very commit where I did this, seeing which lines of text were deleted and possibly identifying any mistakes which were made.

I’ve also grown tired of typical academic writing. The tendency to value erudite language over straightforward prose, the lengthy titles with the snarky half separated from the actually descriptive half by a colon, the anxiety about the particularities of citations and style manuals: all of these I could do without. Let’s write compelling, truthful content without fetishizing consistency and losing the uniqueness of our voice. I’m not saying my little study achieves much in this regard, but it was a relief to be free to write in whatever manner I found most suitable.

Finally, and most encouraging in my mind, the time to publication of a research project can be greatly reduced with new web-based means. I wrote a paper in graduate school which took almost two years to appear in a peer-reviewed journal; by the time I was given the pre-prints to review, I’d entirely forgotten about it. On GitHub, all delays were solely my fault. While it’s true (you can see so in the project’s history) that the seeds of this project were planted nearly a year ago, I started working in earnest just a few months ago and finished the writing in early October.

What’s really bad

GitHub, while a great company which has reduced the effort needed to use version control with its clean web interface and graphical applications, is not the most universally understood platform. I have little doubt that if I were to publish a study on my blog, I would receive more commentary. For one, GitHub requires an account which only coders or technologists would be likely to have already, while many comment platforms (like Disqus) build off of common social media accounts like Twitter and Facebook. Secondly, while GitHub’s “pull requests” are more powerful than comments in that they can propose changes to the actual content of a project, they’re doubtless less understood as well. Expecting scholarly publishing to suddenly embrace software development methodologies is naive at best.

As a corollary to GitHub’s rather niche appeal, my article hasn’t undergone any semblance of peer review. I put it out there; if someone spots an inaccuracy, I’ll make note of it and address it, but no relevant parties will necessarily critique the work. While peer review has its problems—many intimately tied to the problems of scholarly publishing at large—I still believe in the value of the process. It’s hard to argue a publication has reached an objective conclusion when only a single pair of eyes has scrutinized it.

Researchers who are afraid of having their work stolen, or of publishing incomplete work which may contain errors, will struggle to accept open publishing models using tools like GitHub. Prof Hacker, in an excellent post on “Forking the Academy”, notes many cultural challenges to moving scholarly publishing towards an open source software model. Scholars may worry that forking a repository feels like plagiarism or goes against the tradition of valuing original work. To some extent, these fears may come more from misunderstandings than genuine problems. Using version control, it’s perfectly feasible to withhold publishing a project until it’s complete and to remove erroneous missteps taken in the middle of a work. Theft is just as possible under the current scholarly publishing model; increasing the transparency and speed of one’s publishing does not give license to others to take credit for it. Unless, of course, one uses a permissive license like the Public Domain.

Convincing academics that the fears above are unwarranted or can be overcome is a challenge that cannot be overstated. In all likelihood, GitHub as a platform will never be a major player in scholarly publishing. The learning curve, both technical and cultural, is simply too great. Rather, a good starting point would be to let the appealing aspects of GitHub—versioning, pull requests, issues, granular attribution of authorship at the commit level—inform the development of new, user-friendly platforms with final products that more closely resemble traditional journals. Prof Hacker, again, goes a long way towards developing this with a wish list for a powerful collaborative writing platform.

What about the IR?

The discoverability of web publications is problematic. While I’d like to think my research holds value for others’ literature reviews, it’s never going to show up while searching in a subscription database. It seems unreasonable to ask researchers, who already look in many places to compile complete bibliographies, to add GitHub to their list of commonly consulted sources. Further fracturing the scholarly publishing environment not only inconveniences researchers but it goes against the trend of discovery layers and aggregators (e.g. Google Scholar) which aim to provide a single search across multiple databases.

On the other hand, an increasing amount of research—from faculty and students alike—is conducted through Google, where GitHub projects will appear alongside pre-prints in institutional repositories. Simply being able to tweet out a link to my study, which is readable on a smartphone and easily saved to any read-it-later service, likely increases its readership over stodgy PDFs sitting in subscription databases.

Institutional repositories solve some, but not all, of the deficiencies of publishing on GitHub. Discoverability is increased because researchers at your institution may search the IR just as they do subscription databases. Furthermore, thanks to the Open Archives Initiative and the OAI-PMH standard, content can be aggregated from multiple IRs into larger search engines like OCLC’s OAIster. However, none of the major IR software platforms support versioned publication. Showing work in progress, linking to specific points in time of a work, and allowing for easy reuse are all lost in the IR.

Every publication in its place

As I’ve stated, publishing independently on GitHub isn’t for everyone. It’s not going to show up on your CV and it’s not necessarily going to benefit from the peer review process. But plenty of librarians are already doing something similar, albeit a bit less formally: we’re writing blog posts with original research or performing quick studies at our respective institutions. It’s not a great leap to put these investigations under version control and then publish them on the web. GitHub could be a valuable complement to more traditional venues, reducing the delay between when data is collected and when it’s available for public consumption. Furthermore, it’s not at all mutually exclusive with article submissions. One could gain the immediate benefit of getting one’s conclusions out there while also producing a draft of a journal article.

As scholarly publishing continues to evolve, I hope we’ll see a plethora of publishing models rather than one monolithic process replacing traditional print-based journals. Publications hosted on GitHub, or a similar platform, would sit nicely alongside open, web-based publications like PeerJ, scholarly blog/journal hybrids like In The Library with the Lead Pipe, deposits in Institutional Repositories, and numerous other sources of quality content.

Notes

  1. I think a lot of these statements are fairly well-recognized in the library community, but here’s some evidence: the recent Open Access “sting” operation (which we’ll cover more in-depth in a forthcoming post) that exposed flaws in some journals’ peer review process, altmetrics, PeerJ, other experiments with open peer review (e.g. by Shakespeare Quarterly), the serials crisis (which is well-known enough to have a Wikipedia entry), predictions that all scholarship will be OA in a decade or two, and increasing demands that scholarly journals allow text mining access all come to mind.
  2. I’m totally prejudiced in this matter because I read primarily through InstaPaper. A journal like Code4Lib, which publishes in HTML, is easy to send to read-it-later services, while PDFs aren’t. PDFs also are hard to read on smartphones, but they can preserve details like layout, tables, images, and font choices better than HTML. A nice solution is services which offer a variety of formats for the same content, such as Open Journal Systems with its ability to provide HTML, PDF, and ePub versions of articles.
  3. For non-code uses of GitHub, see our prior Tech Connect post.

Demystifying Programming

We talk quite a bit about code here at Tech Connect and it’s not unusual to see snippets of it pasted into a post. But most of us, indeed most librarians, aren’t professional programmers or full-time developers; we had to learn like everyone else. Depending on your background, some parts of coding will be easy to pick up while others won’t make sense for years. Here’s an attempt to explain the fundamental building blocks of programming languages.

The Languages

There are a number of popular programming languages: C, C#, C++, Java, JavaScript, Objective C, Perl, PHP, Python, and Ruby. There are numerous others, but this semi-arbitrary selection covers the ones most commonly in use. It’s important to know that each programming language requires its own software to run. You can write Python code into a text file on a machine that doesn’t have the Python interpreter installed, but you can’t execute it and see the results.

A lot of learners stress unnecessarily over which language to learn first. Once you’ve picked up one language, you’ll understand all of the foundational pieces listed below. Then you’ll be able to transition quickly to another language by learning a few syntax changes: Oh, in JavaScript I write function myFunction(x) to define a function, while in Python I write def myFunction(x). Programming languages differ in other ways too, but knowing the basics of one provides a huge head start on learning the basics of any other.

Finally, it’s worth briefly distinguishing compiled versus interpreted languages. Code written in a compiled language, such as all the capital-C languages and Java, must first be passed to a compiler program which then spits out an executable—think of a file ending in .exe if you’re on Windows—that will run the code. Interpreted languages, like Perl, PHP, Python, and Ruby, are quicker to program in because you just pass your code along to an interpreter program which immediately executes it. There’s one fewer step: for a compiled language you need to write code, generate an executable, and then run the executable, while interpreted languages skip that middle step.

Compiled languages tend to run faster (i.e. perform more actions or computations in a given amount of time) than interpreted ones, while interpreted ones tend to be easier to learn and more lenient towards the programmer. Again, it doesn’t matter too much which you start out with.

Variables

Variables are just like variables in algebra; they’re names which stand in for some value. In algebra, you might write:

x = 10 + 3

which is also valid code in many programming languages. Later on, if you used the value of x, it would be 13.

The biggest difference between variables in math and in programming is that programming variables can be all sorts of things, not just numbers. They can be strings of text, for instance. Below, we combine two pieces of text which were stored in variables:

name = 'cat'
mood = ' is laughing'
both = name + mood

In the above code, both would have a value of ‘cat is laughing’. Note that text strings have to be wrapped in quotes—often either double or single quotes is acceptable—in order to distinguish them from the rest of the code. We also see above that variables can be the product of other variables.

Comments

Comments are pieces of text inside a program which are not interpreted as code. Why would you want that? Well, comments are very useful for documenting what’s going on in your code. Even if your code is never going to be seen by anyone else, writing comments helps you understand what’s going on if you return to a project after not thinking about it for a while.

// This is a comment in JavaScript; code is below.
number = 5;
// And a second comment!

As seen above, comments typically work by having some special character(s) at the beginning of the line which tells the programming language that the rest of the line can be ignored. Common characters that indicate a line is a comment are # (Python, Ruby), // (C languages, Java, JavaScript, PHP), and /* (CSS, multi-line blocks of comments in many other languages).

Functions

As with variables, functions are akin to those in math: they take an input, perform some calculations with it, and return an output. In math, we might see:

f(x) = (x * 3)/4

f(8) = 6

Here, the first line is a function definition. It defines how many parameters can be passed to the function and what it will do with them. The second line is more akin to a function execution. It shows that the function returns the value 6 when passed the parameter 8. This is really, really close to programming already. Here’s the math above written in Python:

def f(x):
  return (x * 3)/4

f(8)
# which returns the number 6

Programming functions differ from mathematical ones in much the same way variables do: they’re not limited to accepting and producing numbers. They can take all sorts of data—including text—process it, and then return another sort of data. For instance, virtually all programming languages allow you to find the length of a text string using a function. This function takes text input and outputs a number. The combinations are endless! Here’s how that looks in Python:

len('how long?')
# returns the number 9

Python abbreviates the word “length” to simply “len” here, and we pass the text “how long?” to the function instead of a number.

Combining variables and functions, we might store the result of running a function in a variable, e.g. y = f(8) would store the value 6 in the variable y if f(x) is the same as above. This may seem silly—why don’t you just write y = 6 if that’s what you want!—but functions help by abstracting out blocks of code so you can reuse them over and over again.

Consider a program you’re writing to manage the e-resource URLs in your catalog, which are stored in MARC field 856 subfield u. You might have a variable named num_URLs (variable names can’t have spaces, thus the underscore) which represents the number of 856 $u subfields a record has. But as you work on records, that value is going to change; rather than manually counting each time and setting num_URLs = 3 or num_URLs = 2, you can write a function to do this for you. Each time you pass the function a bibliographic record, it will return the number of 856 $u subfields, substantially reducing the amount of repetitive code you have to write.
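
Here’s a sketch of what such a function might look like in JavaScript, using a simplified, made-up record structure rather than real MARC data:

// A simplified, made-up bibliographic record: a list of fields,
// each with a MARC tag and a set of subfields.
var record = {
  fields: [
    { tag: '245', subfields: { a: "A Room of One's Own" } },
    { tag: '856', subfields: { u: 'http://example.com/ebook' } },
    { tag: '856', subfields: { u: 'http://example.com/audiobook' } }
  ]
};

// Count how many 856 fields have a $u subfield.
function countUrls(rec) {
  var total = 0;
  rec.fields.forEach(function (field) {
    if (field.tag === '856' && field.subfields.u) {
      total += 1;
    }
  });
  return total;
}

var num_URLs = countUrls(record);
// num_URLs is now 2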

Conditionals

Many readers are probably familiar with IFTTT, the “IF This Then That” web service which can glue together various accounts, for instance “If I post a new photo to Instagram, then save it to my Dropbox backup folder.” These sorts of logical connections are essential to programming, because often whether or not you perform a particular action varies depending on some other condition.

Consider a program which counts the number of books by Virginia Woolf in your catalog. You want to count a book only if the author is Virginia Woolf. You can use Ruby code like this:

if author == 'Virginia Woolf'
  total = total + 1
end

There are three parts here: first we specify a condition, then there’s some code which runs only if the condition is true, and then we end the condition. Without some kind of indication that the block of code inside the condition has ended, the entire rest of our program would run only if the variable author were set to the right string of text.

The == is definitely weird to see for the first time. Why two equals? Many programming languages use a variety of double-character comparisons because the single equals already has a meaning: single equals assigns a value to a variable (see the second line of the example above) while double-equals compares two values. There are other common comparisons:

  • != often means “is not equal to”
  • > and < are the typical greater or lesser than
  • >= and <= often mean “greater/lesser than or equal to”

Those can look weird at first, and indeed one of the more common mistakes (made by professionals and newbies alike!) is accidentally typing a single equals instead of a double.[1] While we’re on the topic of strange double-character equals signs, it’s worth pointing out that += and -= are also commonly seen in programming languages. These pairs of symbols respectively add or subtract a given number from a variable, so they do assign a value, but one based on the variable’s current value. For instance, above I could have written total += 1, which is identical in outcome to total = total + 1.
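
To see why that particular slip is so easy to make, compare these two JavaScript snippets; the only difference is one missing equals sign.

var author = 'Ralph Ellison';

// Comparison: the condition is false, so nothing prints.
if (author == 'Virginia Woolf') {
  console.log('Found a Woolf book.');
}

// Accidental assignment: this quietly overwrites author with 'Virginia Woolf',
// the expression evaluates as true, and the message prints every time.
if (author = 'Virginia Woolf') {
  console.log('Found a Woolf book.');
}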

Lastly, conditional statements can be far more sophisticated than a mere “if this do that.” You can write code that says “if blah do this, but if bleh do that, and if neither do something else.” Here’s a Ruby script that would count books by Virginia Woolf, books by Ralph Ellison, and books by someone other than those two.

total_vw = 0
total_re = 0
total_others = 0
if author == 'Virginia Woolf'
  total_vw += 1
elsif author == 'Ralph Ellison'
  total_re += 1
else
  total_others += 1
end

Here, we set all three of our totals to zero first, then check to see what the current value of author is, adding one to the appropriate total using a three-part conditional statement. The elsif is short for “else if” and that condition is only tested if the first if wasn’t true. If neither of the first two conditions is true, our else section serves as a kind of fallback.

Arrays

An array is simply a list of variables; in fact, the Python language has an array-like data type named “list.” Arrays are commonly denoted with square brackets, e.g. in Python a list looks like

stuff = [ "dog", "cat", "tree"]

Later, if I want to retrieve a single piece of the array, I just access it using its index wrapped in square brackets, starting from the number zero. Extending the Python example above:

stuff[0]
# returns "dog"
stuff[2]
# returns "tree"

Many programming languages also support associative arrays, in which the index values are strings instead of numbers. For instance, here’s an associative array in PHP:

$stuff = array(
  "awesome" => "sauce",
  "moderate" => "spice",
  "mediocre" => "condiment",
);
echo $stuff["mediocre"];
// prints out "condiment"

Arrays are useful for storing large groups of like items: instead of having three separate variables, which requires more typing and more names to remember, we just have one array containing everything. While our three strings aren’t a lot to keep track of, imagine a program which deals with all the records in a library catalog, or all the search results returned from a query: having an array to store that large list of items suddenly becomes essential.

Loops

Loops repeat an action a set number of times or until a condition is met. Arrays are commonly combined with loops, since loops make it easy to repeat the same operation on each item in an array. Here’s a concise example in Python which prints every entry in the “names” array to the screen:

names = ['Joebob', 'Suebob', 'Bobob']
for name in names:
  print name

Without arrays and loops, we’d have to write:

name1 = 'Joebob'
name2 = 'Suebob'
name3 = 'Bobob'
print name1
print name2
print name3

You see how useful arrays are? As we’ve seen with both functions and arrays, programming languages like to expose tools that help you repeat lots of operations without typing too much text.

There are a few types of loops, including “for” loops and “while” loops. Our “for” loop earlier went through a whole array, printing each item out, but a “while” loop keeps repeating only while some condition is true. Here is a bit of PHP that prints out the first four natural numbers:

$counter = 1;
while ( $counter < 5 ) {
  echo $counter;
  $counter = $counter + 1;
}

Each time we go through the loop, the counter is increased by one. When it hits five, the loop stops. But be careful! If we left off the $counter = $counter + 1 line then the loop would never finish because the while condition would never be false. Infinite loops are another potential bug in a program.

Objects & Object-Oriented Programming

Object-oriented programming (oft-abbreviated OOP) is probably the toughest item in this post to explain, which is why I’d rather people see it in action by trying out Codecademy than read about it. Unfortunately, it’s not until the end of the JavaScript track that you really get to work with OOP, but it gives you a good sense of what it looks like in practice.

In general, objects are simply a means of organizing code. You can group related variables and functions under an object. You can also make an object inherit properties from another one if it needs to use all the same variables and functions but also add some of its own.

For example, let’s say we have a program that deals with a series of people, each of whom has a few properties like a name and age but also the ability to say hi. We can create a Person class which is kind of like a template; it helps us stamp out new copies of objects without rewriting the same code over and over. Here’s an example in JavaScript:

function Person(name, age) {
  this.name = name;
  this.age = age;
  this.sayHi = function() {
    console.log("Hi, I'm " + name + ".");
  };
}

Joebob = new Person('Joebob', 39);
Suebob = new Person('Suebob', 40);
Bobob = new Person('Bobob', 3);
Bobob.sayHi();
// prints "Hi, I'm Bobob."
Suebob.sayHi();
// prints "Hi, I'm Suebob."

Our Person function is essentially a class here; it allows us to quickly create three people who are all objects with the same structure, yet they have unique values for their name and age.[2] The code is a bit complicated and JavaScript isn’t a great example, but basically think of this: if we wanted to do this without objects, we’d end up repeating the content of the Person block of code three times over.

The efficiency gained with objects is similar to how functions save us from writing lots of redundant code; identifying common structures and grouping them together under an object makes our code more concise and easier to maintain as we add new features. For instance, if we wanted to add a myAgeIs function that prints out the person’s age, we could just add it to the Person class and then all our people objects would be able to use it.
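
Here’s a sketch of that addition, extending the Person example from above:

function Person(name, age) {
  this.name = name;
  this.age = age;
  this.sayHi = function() {
    console.log("Hi, I'm " + name + ".");
  };
  // The new function: every Person created from this template gets it.
  this.myAgeIs = function() {
    console.log(name + ' is ' + age + ' years old.');
  };
}

Bobob = new Person('Bobob', 3);
Bobob.myAgeIs();
// prints "Bobob is 3 years old."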

Modules & Libraries

Lest you worry that every little detail in your programs must be written from scratch, I should mention that all popular programming languages have mechanisms which allow you to reuse others’ code. Practically, this means that most projects start out by identifying a few fundamental building blocks which already exist. For instance, parsing MARC data is a non-trivial task which takes some serious knowledge both of the data structure and the programming language you’re using. Luckily, we don’t need to write a MARC parsing program on our own, because several exist already:

  • pymarc for Python
  • ruby-marc for Ruby
  • marc4j for Java
  • MARC::Record for Perl

The Code4Lib wiki has an even more extensive list of options.

In general, it’s best to reuse as much prior work as possible rather than spend time working on problems that have already been solved. Complicated tasks like writing a full-fledged web application take a lot of time and expertise, but code libraries already exist for this. Particularly when you’re learning, it can be rewarding to use a major, well-developed project at first to get a sense of what’s possible with programming.

Attention to Detail

The biggest hangup for new programmers often isn’t conceptual: variables, functions, and these other constructs are all rather intuitive, especially once you’ve tried them a few times. Instead, many newcomers find out that programming languages are very literal and unyielding. They can’t read your mind and are happy to simply give up and spit out errors if they can’t understand what you’re trying to do.

For instance, earlier I mentioned that text variables are usually wrapped in quotes. What happens if I forget an end quote? Depending on the language, the program may either just tell you there’s an error or it might badly misinterpret your code, treating everything from your open quote down to the next instance of a quote mark as one big chunk of variable text. Similarly, accidentally misusing double equals or single equals or any of the other arcane combinations of mathematical symbols can have disastrous results.

Once you’ve worked with code a little, you’ll start to pick up tools that ease a lot of minor issues. Most code editors use syntax highlighting to distinguish different constructs, which helps with error recognition. This very post uses a syntax highlighter for WordPress to color keywords like “function” and distinguish variable names. Other tools can “lint” your code for mistakes or for code which, while technically valid, can easily lead to trouble. The text editor I commonly use does wonderful little things like providing closing quotes and parentheses, highlighting lines which don’t pass linting tests, and letting me test-run selected snippets of code.

There’s lots more…

Code isn’t magic; coders aren’t wizards. Yes, there’s a lot to programming and one can devote a lifetime to its study and practice. There are also thousands of resources available for learning, from MOOCs to books to workshops for beginners. With just a few building blocks like the ones described in this post, you can write useful code which helps you in your work.

Footnotes

[1]^ True story: while writing the very next example, I made this mistake.

[2]^ Functions which create objects are called constructor functions, which is another bit of jargon you probably don’t need to know if you’re just getting started.


Advice on Being a Solo Library Technologist

I am an Emerging Technologies Librarian at a small library in the middle of a cornfield. There are three librarians on staff. The vast majority of our books fit on one floor of open stacks. Being so small can pose challenges to a technologist. When I’m banging my head trying to figure out what the heck “this” refers to in a particular JavaScript function, to whom do I turn? That’s but an example of a wide-ranging set of problems:

  • Lack of colleagues with similar skill sets. This has wide-ranging ill effects, from giving me no one to ask questions to or bounce ideas off of, to making it more difficult to sell my ideas.
  • Broad responsibilities that limit time spent on technology
  • Difficulty creating endurable projects that can be easily maintained
  • Difficulty determining which projects are appropriate to our scale

Though listservs and online sources alleviate some of these concerns, there’s a certain knack to being a library technologist at a small institution.[1] While I still have a lot to learn, I want to share some strategies that have helped me thus far.

Know Thy Allies

At my current position, it took me a long time to figure out how the college was structured. Who is responsible for managing the library’s public computers? Who develops the website? If I want some assessment data, where do I go? Knowing the responsibilities of your coworkers is vital and effective collaboration is a necessary element of being a technologist. I’ve been very fortunate to work with coworkers who are immensely helpful.

IT Support can help with both your personal workstation and the library’s setup. Remember that IT’s priorities are necessarily different from yours: they want to keep everything up and running, while you want to experiment and kick the tires. When IT denies a request or takes ages to fix something that seems trivial to you, remember that they’re just as overburdened as you are. Their assistance in installing and troubleshooting software is invaluable. This is a two-way street: you often have valuable insight into how users behave and what setups are most beneficial. Try to give and take, asking for favors at the same time that you volunteer your services.

Institutional Research probably goes by a dozen different names at any given dozen institutions. These names may include “Assessment Office,” “Institutional Computing,” or even the fearsome “Institutional Review Board” of research universities. These are your data collection and management people and—whether you know it or not—they have some great stuff for you. It took me far too long to browse the IR folder on our shared drive which contains insightful survey data from the CCSSE and in-house reports. There’s a post-graduate survey which essentially says “the library here is awesome,” good to have when arguing for funding. But they also help the library work with the assessment data that our college gathers; we hope to identify struggling courses and offer our assistance.

The web designer should be an obvious contact point. Most technology is administered through the web these days—shocking, I know. The webmaster will not only be able to get you access to institutional servers, but they may also have learned valuable lessons in their own position. They, too, struggle to complete a wide range of tasks. They have to negotiate with many stakeholders who all want a slice of the vaunted homepage, often the subject of territorial battles. They may have a folder of good PR images or a style guide sitting around somewhere; at the very least, they have some O’Reilly books you’ll want to borrow.

The Learning Management System administrator is similar to the webmaster. They probably have some coding skills and carry an immense, important burden. At my college, we have a slew of educational technologists who work in the “Faculty Development Center” and preside over the LMS. They’re not only technologically savvy, often introducing me to new tools or techniques, but they know how faculty structure their courses and have a handle on pedagogical theory. Their input can not only generate new ideas but help you ground your initiatives in a solid theoretical basis.

Finally, my list of allies is obviously biased towards academic libraries. But public librarians have similar resources available; they just go by different names. Your local government has many of these same positions: data management, web developer, technology guru. Find out who they are and reach out to them. Anyone can look for local hacker/makerspaces or meetups, which can be a great way not only to develop your skills but to meet people who may have brilliant ideas and insight.

Build Sustainably

Building projects that will last is my greatest struggle. It’s not so hard to produce an intricate, beautiful project if I pour months of work into it, but what happens the month after it’s “complete”? A shortage of ideas has never been my problem; it’s finding ones that are doable. Too often, I’ll get halfway into a project and realize there’s simply no way I can handle the upkeep on top of my usual responsibilities, which stubbornly do not diminish. I have to staff a reference desk, teach information literacy, and make purchases for our collection. Those are important responsibilities and they often provide a platform for experimentation, but they’re also stable obligations that cannot be shirked.

One of the best ways to determine if a project is feasible is to look around at what other libraries are doing. Is there an established project—for instance, a piece of open source software with a broad community base—which you can reuse? Or are other libraries devoting teams of librarians to similar tasks? If you’re seeing larger institutions struggle to perfect something, then maybe it’s best to wait until the technology is more mature. On the other hand, dipping your toe in the water can quickly give you a sense of how much time you’ll need to invest. Creating a prototype or bringing coworkers on board at early stages lets you see how much traction you have. If others are resistant or if your initial design is shown to have gaping flaws, perhaps another project is more worthy of your time. It’s an art but often saying no, dropping a difficult initiative, or recognizing that an experiment has failed is the right thing to do.

Documentation, Documentation, Documentation

One of the first items I accomplished on arrival at my current position was setting up a staff-side wiki on PBworks. While I’m still working on getting other staff members to contribute to it (approximately 90% of the edits are mine), it’s been an invaluable information-sharing resource. Part-time staff members in particular have noted how it’s nice to have one consistent place to look for updates and insider information.

How does this relate to technology? In the last couple of years, my institution has added or redesigned dozens of major services. I was going to write a ludicrously long list but…just trust me, we’ve changed a lot of stuff. A new technology or service cannot succeed without buy-in, and you don’t get buy-in if no one knows how to use it. You need documentation: well-written, illustrative documentation. I try to keep things short and sweet, providing screencasts and annotated images to highlight important nuances. Beyond helping others, it’s been invaluable to me as well. Remember when I said I wasn’t so great at building sustainably? Well, I’ll admit that there are some workflows or code snippets that are Greek to me each time I revisit them. Without my own instructions or blocks of comments, I would have to reverse engineer the whole process before I could complete it again.

Furthermore, not all of my fellow staff share my technical skills. I’m comfortable logging into servers, running Drush commands, and analyzing the statistics I collect. That’s not an indictment of my coworkers; they shouldn’t need to do any of this stuff. But some of my projects rely on arcane data schemas or esoteric commands. If I were to win the lottery and promptly retire, sophisticated projects lacking documentation would grind to a halt. Instead, I try to write instructions such that anyone could log into Drupal and apply module updates, for instance, even if they were previously unfamiliar with the CMS. I feel a lot better knowing that the bus-factor risk is a little lower and that I can perhaps even take a vacation without checking email, some day.

Choose Wisely

The honest truth is that smaller institutions cannot afford to invest in every new and shiny object that crosses their path. I see numerous awesome innovations at other libraries which simply are not wise investments for a college of our size. We don’t have the scale, skills, and budget for much of the technology out there. Even open source solutions are a challenge because they require skill to configure and maintain. Everything I wrote about sustainability and allies is about mitigating this lack of scale, but the truth is some things are just not right for us. It isn’t helpful to build projects that only you can continue, or to develop ones which require so much attention that other fundamental responsibilities (doubtless less sexy, but no less important) fall through the cracks.

I record my personal activities in Remember the Milk, tagging tasks according to topic. What do you think was the tag I used most last year? Makerspace? Linked data? APIs? Node.js? Nope, it was infolit. That is hardly an “emerging” field but it’s a vital aspect of my position nonetheless.

I find that the best way to select amongst initiatives is to work backwards: what is crucial to your library? What are the major challenges, obvious issues that you’re facing? While I would not abandon pet projects entirely, because sometimes they can have surprisingly wide-ranging effects, it helps to ground your priorities properly.[2] Working on a major issue virtually guarantees that your work will attract more support from your institution. You may find more allies willing to help, or at least coworkers who are sympathetic when you plead with them to cover a reference shift or swap an instruction session because you’re overwhelmed. The big issues themselves are easy to find: user experience, ebooks, discovery, digital preservation, {{insert library school course title here}}. At my college, developmental education and information literacy are huge. It’s not hard to align my priorities with the institution’s.

Enjoy Yourself

No doubt working on your own or with relatively little support is challenging and stressful. It can be disappointing to pass up new technologies because they’re too tough to implement, or when a project fails due to one of the bullet points listed above. But being a technologist should always be fun and bring feelings of accomplishment. Try to inject a little levity and experimentation into the places where it’s least expected; who knows, maybe you’ll strike a chord.

There are also at least a couple advantages to being at a smaller institution. For one, you often have greater freedom and less bureaucracy. What a single individual does on your campus may be done by a committee (or even—the horror—multiple committees) elsewhere. As such, building consensus or acquiring approval can be a much simplified process. A few informal conversations can substitute for mountains of policies, forms, meetings, and regulations.

Secondly, workers at smaller places are more likely to be jack-of-all-trades librarians. While I’m a technologist, I wear plenty of more traditional librarian hats as well. On the one hand, that certainly means I have less time to devote to each responsibility than a specialist would; on the other, it gives me a uniquely holistic view of the library’s operations. I not only understand how the pieces fit together, but am better able to identify high-level problems affecting multiple areas of service.

I’m still working through a lot of these issues, on my own. How do you survive as a library technologist? Is it just as tough at a large institution? I’m all eyes.

Footnotes

[1]^ Here are a few of my favorite sources for being a technology librarian:

  • Listservs, particularly Code4Lib and Drupal4Lib. Drupal4Lib is a great place to be if you’re using Drupal and running into issues; there are lots of “why won’t this work” and “how do you do X at your library” threads, and several helpful experts hang around the list.
  • For professional journals, once again Code4Lib is very helpful. ITAL is also open access, and good tech tips periodically appear in C&RL News or C&RL. Part of being at a small institution is being limited to open access journals; these are the ones I read most often.
  • Google. Google is great. For answering factual questions or figuring out what the most common tool is for a particular task, a quick search can almost always turn up the answer. I’d be remiss if I didn’t mention that Google usually leads me to one of a couple excellent sources, like Stack Overflow or the Mozilla Developer Network.
  • Twitter. Twitter is great, too. I follow many innovative librarians but also leading figures in other fields.
  • GitHub. GitHub can help you find reusable code, but there’s also a librarian community, and you can watch as they “star” projects and produce new repositories. I find GitHub useful as a source of instructive code; if I’m wondering how to accomplish a task, I can visit a repo that does something similar and learn from how better developers do it.

[2]^ We’ve covered managing side projects and work priorities previously in “From Cool to Useful: Incorporating hobby projects into library work.”


Coding & Collaboration on GitHub

Previously on Tech Connect we wrote about the Git version control system, walking you through “cloning” a project onto your computer, making some small changes, and committing them to the project’s history. But that post concluded on a sad note: all we could do was work by ourselves, fiddling with Git on our own computers and gaining nothing from the software’s ability to manage multiple contributors. Well, here we return to Git to cover GitHub, one of the most popular code-sharing websites around.

Git vs. GitHub

Git is open source version control software. You don’t need to rely on any third-party service to use it and you can benefit from many of its features even if you’re working on your own.

GitHub, on the other hand, is a company that hosts Git repositories on their website. If you allow your code to be publicly viewable, then you can host your repository for free. If you want to have a private repository, then you have to pay for a subscription.

GitHub layers some unique features on top of Git. There’s an Issues queue where bug reports and feature requests can be tracked and assigned to contributors. Every project has a Graphs section where interesting information, such as the number of lines added and deleted over time, is charted (see the graphs for jQuery, for instance). You can create gists, mini-repositories that are great for sharing or storing snippets of useful code. There’s even a Wiki feature where a project can publish editable documentation and examples. All of these nice features build upon, but ultimately have little to do with, Git.

Collaboration

GitHub is so successful because of how well it facilitates collaboration. Hosted version control repositories are nothing new; SourceForge has been doing this since 1999, almost a decade prior to GitHub’s founding in 2008. But something about GitHub has struck a chord, and it has spread like wildfire. Depending on how you count, it’s the most popular host of open source code, ahead of SourceForge and Google Code.[1] The New York Times profiled co-founder Tom Preston-Werner. It’s inspired spin-offs, like Pixelapse, which has been called “GitHub for Photoshop,” and Docracy, which TechCrunch called “GitHub for legal documents.” In fact, just as the phrase “It’s Facebook for {{insert obscure user group}}” became a common descriptor for up-and-coming social networks, “It’s GitHub for {{insert non-code document}}” has become commonplace. There are many inventive projects which use GitHub as more than just a collection of code (more on this later).

Perhaps GitHub’s popularity is due to Git’s own popularity, though similar sites host Git repositories too.[2] Perhaps the GitHub website simply implements better features than its competitors. Whatever the reason, it’s certain that GitHub does a marvelous job of allowing multiple people to manage and work on a project.

Fork It, Bop It, Pull It

Let’s focus on two nice features of GitHub, forking and the pull request,[3] to see exactly why GitHub is so great for collaboration.

If you recall our prior post on Git, we cloned a public repository from GitHub and made some minor changes. Then, when reviewing the results of git log, we could see that our changes were present in the project’s history. That’s great, but how would we go about getting our changes back into the original project?
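
As a quick refresher, viewing that history from inside the cloned repository looks something like:

    git log   # lists each commit's author, date, and message, newest first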

For the actual step-by-step process, see the LibCodeYear GitHub Project’s instructions. There are basically only two changes from our previous process, one at the very beginning and one at the end.

GitHub's Fork Button

First, start by forking the repository you want to work on. To do so, set up a GitHub account, sign in, visit the repository, and click the Fork button in the upper right. After a pretty sweet animation of a book being scanned, a new project (identical to the original in both name and files) will appear on your GitHub account. You can then clone this forked repository onto your local computer by running git clone on the command line and supplying the URL listed on GitHub.
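
On the command line, that clone step looks roughly like this (the username and repository name are placeholders for your own fork):

    git clone https://github.com/yourusername/example-project.git
    cd example-project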

Now you can do your editing. This part is the same as using Git without GitHub. As you change files and commit changes to the repository, the history of your cloned version and the one on your GitHub account diverge. By running git push you “push” your local changes up to GitHub’s remote server. Git will prompt you for your GitHub password, which can get annoying after a while, so you may want to set up an SSH key on GitHub to avoid typing it in each time. Once you’ve pushed, if you visit the repository on GitHub and click the “commits” tab right above the file browser, you can see that your local changes have been published to GitHub. However, they’re still not in the original repository, which sits under someone else’s account. How do you add your changes to the original account?
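
Before answering that, here is a bare-bones sketch of the edit-commit-push cycle just described (the file name and commit message are made up, and “master” was GitHub’s default branch name at the time):

    git add README.md                    # stage the file you edited
    git commit -m "Clarify the README"   # record the change locally
    git push origin master               # publish your commits to your fork on GitHub

    # optional: switch the remote to SSH so Git stops asking for your password
    git remote set-url origin git@github.com:yourusername/example-project.git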

GitHub's Pull Request Button

In your forked repository on GitHub, something is different: there’s a Pull Request button in the same upper right area where the Fork one is. Click that button to initiate a pull request. After you click it, you can choose which branches on your GitHub repository to push to the original GitHub repository, as well as write a note explaining your changes. When you submit the request, a message is sent to the project’s owners. Part of the beauty of GitHub is in how pull requests are implemented. When you send one, an issue is automatically opened in the receiving project’s Issues queue. Any GitHub account can comment on public pull requests, connecting them to open issues (e.g. “this fixes bug #43”) or calling upon other contributors to review the request. Then, when the request is approved, its changes are merged into the original repository.

diagram of forking & pulling on GitHub

“Pull Request” might seem like a strange term. “Push” is the name of the command that takes commits from your local computer and adds them to some remote server, such as your GitHub account. So shouldn’t it be called a “push request” since you’re essentially pushing from your GitHub account to another one? Think of it this way: you are requesting that your changes be pulled (e.g. the git pull command) into the original project. Honestly, “push request” might be just as descriptive, but for whatever reason GitHub went with “pull request.”
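
To make the terminology concrete, accepting your pull request amounts to the original project pulling from your fork, which a maintainer could also do by hand with a command along these lines (the URL and branch are placeholders):

    # run inside the original project's repository
    git pull https://github.com/yourusername/example-project.git master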

GitHub Applications

While hopefully we’ve convinced you that the command line is a fine way to do things, GitHub also offers Mac and Windows applications. These apps are well-designed and turn the entire process of creating and publishing a Git repository into a point-and-click affair. For instance, here is the fork-edit-pull request workflow from earlier except done entirely through a GitHub app:

  • Visit the original repository’s page, click Fork
  • On your repository’s page, select “Clone in Mac” or “Clone in Windows” depending on which OS you’re using. The repository will be cloned onto your computer
  • Make your changes and then, when you’re ready to commit, open up the GitHub app, selecting the repository from the list of your local ones
  • Type in a commit message and press Commit
    writing a commit message in GitHub for Windows
  • To sync changes with GitHub, click Sync
  • Return to the repository on GitHub, where you can click the Pull Request button and continue from there

GitHub without the command line, amazing! You can even work with local Git repositories, using the app to do commits and view previous changes, without ever pushing to GitHub. This is particularly useful on Windows, where installing Git can involve a few more hurdles. Since the GitHub for Windows app comes bundled with Git, a simple installation and login can get you up and running. The apps also make the process of pushing a local repository to GitHub incredibly easy, whereas there are a few more steps otherwise. The apps’ visual display of “diffs” (differences in a file between versions, with added and deleted lines highlighted) and handy shortcuts to revert to particular commits can appeal even to those of us who love the command line.

viewing a diff in GitHub for Windows
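
For comparison, the rough command-line equivalents of those app features look like this (the commit hash and file name are hypothetical):

    git diff                            # unstaged changes in your working copy
    git diff HEAD~1 HEAD                # what the most recent commit changed
    git checkout abc1234 -- notes.txt   # restore one file as it was at commit abc1234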

More than Code

In my previous post on Git, I noted that version control has applications far beyond coding. GitHub hosts a number of inventive projects that demonstrate this.

  • The Code4Lib community hosts an Antiharassment Policy on GitHub. Those in support can simply fork the repository and add their name to a text file, while the policy’s entire revision history is present online as well
  • The city of Philadelphia experimented with using GitHub for procurements with successful results
  • ProfHacker just wrapped up a series on GitHub, ending by discussing what it would mean to “fork the academy” and combine scholarly publishing with forking and pull requests
  • The Jekyll static-site generator makes it possible to generate a blog on GitHub
  • The Homebrew package manager for Mac makes extensive use of Git to manage the various formulae for its software packages. For instance, if you want to roll back to a previous version of an installed package, you run brew versions $PACKAGE where $PACKAGE is the name of the package. That command prints a list of Git commits associated with older versions of the package, so you can enter the Homebrew repository and run a Git command like git checkout 0476235 /usr/local/Library/Formula/gettext.rb to get the installation formula for version 0.17 of the gettext package. (The sketch after this list strings those steps together.)
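
Strung together, the Homebrew rollback described above looks roughly like this (a sketch that assumes brew versions is available in your copy of Homebrew):

    cd "$(brew --prefix)"                             # enter Homebrew's own Git repository
    brew versions gettext                             # prints git checkout commands for older formulae
    git checkout 0476235 Library/Formula/gettext.rb   # check out the formula for gettext 0.17
    brew install gettext                              # install from the older formula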

These wonderful examples aside, GitHub is not a panacea for coding, collaboration, or any of the problems facing libraries. GitHub can be an impediment to those who are intimidated by it or simply not sold on the value of learning what’s traditionally been a software development tool. On the Code4Lib listserv, it was noted that the small number of signatories on the Antiharassment Policy might actually be due to its being hosted on GitHub. I struggle to sell people on my campus on the value of Google Docs with its collaborative editing features. So, as much as I’d like the Strategic Plan the college is producing to be on GitHub, where everyone could submit pull requests and comment on commits, it’s not necessarily the best platform. It is important, however, not to think of GitHub as limited purely to versioning code written by professional developers. GitHub has uses for amateurs and non-coders alike.

Footnotes

[1]^ “GitHub Has Passed SourceForge,” ReadWrite, June 2, 2011.

[2]^ Previously-mentioned SourceForge also supports Git, as does Bitbucket.

[3]^ I think this would make an excellent band name, by the way.