Hosting a Coding Challenge in the Library

In Fall of 2016, the city of Los Angeles held a 2-week “Innovate LA” event intended to celebrate innovation and creativity within the LA region.  Dozens of organizations around Los Angeles held events during Innovate LA to showcase and provide resources for making, invention, and application development.  As part of this event, the library at California State University, Northridge developed and hosted two weeks of coding challenges, designed to introduce novice coders to basic development using existing tutorials. Coders were rewarded with digital badges distributed by the application Credly.

The primary organization of the events came out of the library’s Creative Media Studio, a space designed to facilitate audio and video production as well as experimentation with emerging technologies such as 3D printing and virtual reality.  Users can use computers and recording equipment in the space, and can check out media production devices, such as camcorders, green screens, GoPros, and more.  Our aim was to provide a fun, very low-stress way to learn about coding, provide time for new coders to get hands-on help with coding tutorials, and generally celebrate how coding can be fun.  While anyone was welcome to join, our marketing efforts specifically focused on students, with coding challenges distributed daily throughout the Innovate LA period through Facebook.

The Challenges

The coding challenges were sourced from existing coding tutorial sites such as Free Code Camp, Learn Ruby, and Codecademy.  We wanted to offer a mix of front-end and server-side coding challenges, starting with HTML, CSS and JavaScript and ramping up to PHP, Python, and Ruby.  We tested out several free tutorials and chose those with the most straightforward instructions that provided immediate feedback about incorrect code. We also tried to keep the interfaces consistent, using Free Code Camp most frequently so participants could get used to the interface and focus on coding rather than the tutorial mechanism itself.

Here’s a list of the challenges and their corresponding badges earned:

Challenge | Badge Received
Say Hello to the HTML Elements, Headline with the H2 Element, Inform with the Paragraph Element | HTML Ninja
Change the Color of Text, Use CSS Selectors to Style Elements, Use a CSS Class to Style an Element | CSS Ninja
Use Responsive Design with Bootstrap Fluid Containers, Make Images Mobile Responsive, Center Text with Bootstrap | Bootstrapper
Comment your JavaScript Code, Declare JavaScript Variables, Storing Values with the Assignment Operator | JavaScript Hacker
Learn how Script Tags and Document Ready Work, Target HTML Elements with Selectors Using jQuery, Target Elements by Class Using jQuery | jQuery Ninja
Uncomment HTML, Comment out HTML, Fill in the Blank with Placeholder Text | HTML Master
Style Multiple Elements with a CSS Class, Change the Font Size of an Element, Set the Font Family of an Element | CSS Master
Create a Bootstrap Button, Create a Block Element Bootstrap Button, Taste the Bootstrap Button Color Rainbow | Bootstrap Master
Getting Started and Cat/Dog | JS Game Maker
Target Elements by ID Using jQuery, Delete your jQuery Functions, Target the same element with multiple jQuery selectors | jQuery Master
Hello World, Variables and Types, Lists | Python Ninja
Hello World, Variables and Types, Simple Arrays | PHP Ninja
Hello World, Variables and Types, Math | Ruby Ninja
How to Use APIs with JavaScript (complete through Step 9: Authentication and API Keys) | API Ninja
Edit or create a Wikipedia page. You may join in at the Wikipedia Edit-a-thon or do editing remotely. The Citation Hunt tool is a cool/easy way of going about editing a Wikipedia page. Narrow it to a topic that interests you and make your edit. | WikiWiz
Create a 3D model for an original animated character. You may use TinkerCAD or Blender as free options, or feel free to use SolidWorks or AutoCAD if you are familiar with them. If you don’t know where to begin, TinkerCAD has step-by-step tutorials for you to bring your ideas to life. | 3D Designer
Get a selfie with a Google Cardboard or any virtual reality goggles | VR Explorer

Note that the final three challenges – editing a Wikipedia page, creating a 3D model, and experimenting with Google Cardboard or other virtual reality (VR) goggles – are not coding challenges, but we wanted to use the opportunity to promote some of the other services the Creative Media Studio provides.  Conveniently, the library was hosting a Wikipedia Edit-A-Thon during the same period as the coding challenges, so it made sense to leverage both of those events as part of our Innovate LA programming.

The coding challenges and instructions were distributed via Facebook, and we also held “office hours” (complete with snacks) in one of the library’s computer labs to provide assistance with completing the challenges.  The office hours were mostly informal, with two library staff members available to walk users through completing and submitting the challenges.  One special office hours session brought in a guest professor from our Cinema and Television Arts program to help users with a web-based game-making tutorial he had designed.  This partnership was very successful, and that particular session had the highest attendance of any we offered.  In future iterations of this event, more advance planning would enable us to partner with additional faculty members and feature tutorials they already use effectively with students in their curriculum.

Credly

We needed a way both to accept submissions documenting completion of coding challenges and to award digital badges.  Originally we investigated distributing digital badges through our campus learning management system, as some learning management systems, like Moodle, are capable of awarding digital badges.  There were a couple of problems with this: 1) we wanted the event to be open to anyone, including members of the community who wouldn’t have access to the learning management system, and 2) the digital badge capability hadn’t been activated in our campus’ instance of Moodle.   Another route we considered was accepting submissions for completed challenges through the university’s Portfolium application, which has a fairly robust ability to accept submissions for completed work but, again, wouldn’t allow anyone from outside of the university to participate. Credly seemed like an easy, efficient way to both accept submissions and award badges that could also be embedded in third-party applications, such as LinkedIn.  Since we hosted the competition in 2016, the capability to integrate Credly badges in Portfolium has been made available.

Credly enables you to either design your badges using Credly’s Badge Builder or upload your own badge designs.  Luckily, we had access to amazing student designers Katie Pappace, Rose Rieux, and Eva Cohen, who custom-created our badges using Adobe Illustrator.  A Credly account for the library’s Creative Media Studio was created to issue the badges, and Credly “Credits” were defined using the custom-created badge designs for each of the coding skills for which we wanted to award badges.

When you design a credit in Credly and enable others to claim it, you have several options.  You can require a claim code, which users must submit in order to claim the credit.  Claim codes are useful if you want to award badges based not on evidence (like a file submission) but on participation or attendance at an event, where you distribute the claim code to attendees.  When claim codes are required, you can also set approval of submissions to be automatic, so that anyone with a claim code automatically receives their badge.  We didn’t require a claim code, and instead required evidence to be submitted.

When requiring evidence, you can configure which types of evidence are appropriate to receive the badge. Choices for evidence submission include a URL, a document (Word, text, or PDF), an image, an audio file, a video file, or just an open text submission.  As users completed code challenges, we asked for screenshots (images) as evidence of completion for most challenges.  We reviewed all submissions to ensure they were correct, but by requiring screenshots, we could easily see whether or not the tutorial itself had “passed” the code submission.

Awards

Credly makes it easy to count the number of badges earned by each participant. From those numbers, we were able to determine the top badge earners and award them prizes. All participants, even those with a single badge, received buttons for each of their earned badges. In addition to the virtual and physical badges, the participants with the greatest number of earned badges received prizes: the top five earners were awarded gift cards, and the grand prize winner also received a 3D-printed trophy designed in Tinkercad with their photo incorporated as a lithophane. A low-stakes award ceremony was held for all contestants and winners. The top awards were highly coveted, and the ceremony was a good opportunity for students to meet others interested in coding and STEM.

Lessons Learned

Our first attempt at hosting coding challenges in the library taught us a few things.  First, taking a screenshot is definitely not a skill most participants started out with – the majority of initial questions we received from participants were not related to coding, but rather involved how to take a screenshot of their completed code to submit to Credly.  For future events, we’ll definitely make sure to include step-by-step instructions for taking screenshots on both PC and Mac with each challenge, or consider an alternative method of collecting submissions (e.g., copying and pasting code as a text submission into Credly).  It’s still important to not assume that copying and pasting text from a screen is a skill that all participants will have.

As noted above, planning ahead would enable us to more effectively reach out and partner with faculty, and possibly coordinate coding challenges with curriculum.  A few months before the coding challenges, we did reach out to computer science faculty, cinema and television arts faculty, and other faculty who teach curriculum involving code, but if we had reached out much earlier (e.g., the semester before) we likely would have been able to garner more faculty involvement.  Faculty schedules are so jam-packed, and often set so far in advance, that at least six months of notice is definitely appreciated.

Only about 10% of coding challenge participants came to coding office hours regularly, but that enabled us to provide tailored, one-on-one assistance to our novice coders.  A good portion of understanding how to get started with coding and application development is not related to syntax, but involves larger questions about how applications work:  if I wanted to make a website, where would my code go?  How does a URL figure out where my website code is?  How does a browser understand and render code?  What’s the difference between JavaScript (client-side code) and PHP (server-side code), and why are they different?  These were the types of questions we really enjoyed answering with participants during office hours.  Having fewer, more targeted office hours — where open questions are certainly encouraged, but where participants know the office hours are focused on particular topics — makes attending the office hours more worthwhile, and I think gives novice coders the language to ask questions they may not know they have.

One small bit of feedback that was personally rewarding for the authors:  at one of our office hours, a young woman came up to us and asked us if we were the planners of the coding challenges.  When we said yes, she told us how excited she was (and a bit surprised) to see women involved with coding and development.  She asked us several questions about our jobs and how we got involved with careers relating to technology.  That interaction indicated to us that future outreach could potentially focus on promoting coding to women specifically, or hosting coding office hours to enable mentoring for women coders on campus, modeling (or joining up with) Women Who Code networks.

If you’re interested in hosting support for coding activities or challenges in your library, a great resource to get started with is Hour of Code, which promotes holding one-hour introductions to coding and computer science, particularly during Computer Science Education Week.  Hour of Code provides tutorials, resources for hosts, promotional materials and more.  This year, Hour of Code week / Computer Science Education Week will be December 4-10, 2017, so start planning now!


Are Your LibGuides 2.0 (images, tables, & videos) mobile friendly? Maybe not, and here’s what you can do about it.

LibGuides version 2 was released in summer 2014, built on Bootstrap 3. However, after examining my own institution’s guides and conducting a simple random sampling of academic libraries in the United States, I found that many LibGuides did not display well on phones or mobile devices when it came to images, videos, and tables. Springshare documentation stated that LibGuides version 2 is mobile friendly out of the box and that no additional coding is necessary; however, I found this is not necessarily accurate. While the responsive features are available, they aren’t presented clearly as options in the graphical interface, and additional coding needs to be added using the HTML editor in order for mobile display to be truly responsive when it comes to images, videos, and tables.

At my institution, our LibGuides are reserved for our subject librarians to use for their research and course guides. We also use the A-Z database list and other modules. As the LibGuides administrator, I’d known since its version 2 release that the new system was built on Bootstrap, but I didn’t know enough about responsive design to do anything about it at the time. It wasn’t until this past October when I began redesigning our library’s website using Bootstrap as the framework that I delved into customizing our Springshare products utilizing what I had learned.

I have found, while looking at our own guides individually and speaking to the subject librarians about their process, that they have been creating and designing their guides by letting the default settings take over for images, tables, and videos.  As a result, several tables run out of their boxes, images get distorted, and videos are stretched vertically with large black top and bottom margins. This is because additional coding and/or tweaking is indeed necessary in most cases for these elements to display correctly on mobile.

I’m by no means a Bootstrap expert, but my findings have been verified with Springshare, and I was told by Springshare support that they will be looked at by the developers. Support indicated that there may be a good reason things work as they do, perhaps to give users flexibility in their decisions, or perhaps a technical reason. I’m not sure, but for now we have begun work on making the adjustments so our guides display correctly. I’d be interested to hear others’ experiences with these elements and what they have had to do, if anything, to ensure they are responsive.

Method

Initially, as I learned how to use Bootstrap with the LibGuides system, I looked at my own library’s subject guides and tested their responsiveness and display. To start, I browsed through our guides with my Android phone. I then used Chrome and IE11 on desktop and resized the windows to see if the tables stayed within their boxes and the images responded appropriately. I peeked at the HTML and elements within LibGuides to see how the librarians had their items configured. Once I realized the issues were similar across all guides, I took my search further. Selfishly hoping it wasn’t just us, I used the LibGuides Community site, where I sorted the list by libraries on version 2, then sorted by academic libraries. Each state’s list had to be looked at separately (you can’t sort by the whole United States). I placed all libraries from each state in a separate Excel sheet in alphabetical order. Using the random sort function, I examined two to three, sometimes five, libraries per state (25 states viewed) by following the link provided in the community site list. I also inspected the elements of several LibGuides in my spreadsheet live in Chrome, removing dimensions or styling to see how the pages responded, since I don’t have admin access to any other university’s guides. I created a demo guide for testing purposes, where I inserted various tables, images and videos.

Some Things You Can Try

Even if you or your LibGuides authors aren’t familiar with Bootstrap or the fundamentals of responsive design, anyone should be able to design or update these guide elements using the instructions below; no serious Bootstrap knowledge is needed for these solutions.

Tables

As we know, tables should not be used for layout. They are meant to display tabular data. This is another issue I came across in my investigation. Many librarians are using tables in this manner. Aside from being an outdated practice, this poses a more serious issue on a mobile device. Authors can learn how to float images, or create columns and rows right within the HTML Editor as an alternative. So for the purposes of this post, I’ll only be using a table in a tabular format.

When inserting a table using the table icon in the rich text editor you are asked typical table questions. How many rows? How many columns? In speaking with the librarians here at my institution, no one is really giving it much thought beyond this. They are filling in these blanks, inserting the table, and populating it. Or worse, copying and pasting a table created in Word. 

However, if you leave things as they are and the table has any width to it, this will be your result once minimized or viewed on mobile device:

Figure 1: LibGuides default table with no responsive class added; rows overflow the container.

As you can see, the table runs out of the container (the box). To alleviate this, you will have to open the HTML Editor, find where the table begins, and wrap the table in the table-responsive class. The HTML Editor is available to all regular users; no administrative access is needed. If you aren’t familiar with adding classes, note that you will also need to close the wrapping div after the last table code you see. The HTML looks something like this:

<div class="table-responsive">
    <table>
        <!-- thead, tbody, and all the other table elements go here -->
    </table>
</div>

Below is the result of wrapping all table elements in the table-responsive class. As you can see, it is cleaner, there is no run-off, and Bootstrap added a horizontal scroll bar since the table is really too big for the box once it is resized. On a phone, you can now swipe sideways to scroll through the table.

Figure 2: Result of adding the responsive table class.

Springshare has also made the Bootstrap table styling classes available, which you can see in the editor dropdown as well. You can experiment with these to see which styling you prefer (borders, hover rows, striped rows…), but they don’t replace adding the table-responsive class to the table.
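
For example, a guide table can combine those styling classes with the responsive wrapper. This is only a sketch using standard Bootstrap 3 class names; the cell contents are placeholders:

<div class="table-responsive">
    <table class="table table-striped table-hover">
        <thead>
            <tr><th>Database</th><th>Coverage</th></tr>
        </thead>
        <tbody>
            <tr><td>Example Database</td><td>1990-present</td></tr>
        </tbody>
    </table>
</div>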

Images

When inserting an image in a LibGuides box, the system brings the dimensions of the image with it into the Image Properties box by default. After various tests I found it best to match the image size to the layout/box prior to uploading, and then remove the dimensions altogether from within the Image Properties box (and don’t place it in an unresponsive table). This can easily be done right within the Image Properties box when the image is inserted. It can also be done in the HTML Editor afterwards.

Figure 3: Image dimensions can be removed in the Image Properties box.

 

Figure 4: On the left, dimensions in place; on the right, dimensions removed.

By removing the dimensions, the image is better able to resize accordingly, especially in IE which seems to be less forgiving than Chrome. Guide creators should also add descriptive Alternative Text while in the Image Properties box for accessibility purposes.
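
As a rough before-and-after sketch (the file name and alt text are placeholders), the change amounts to dropping the width and height attributes the editor inserts:

<!-- Default markup: fixed dimensions inserted by the Image Properties box -->
<img src="research-help.jpg" alt="Research help desk" width="800" height="533">

<!-- Dimensions removed: the image can now scale with its container -->
<img src="research-help.jpg" alt="Research help desk">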

Some users may be tempted to resize large images by adjusting the dimensions right in the Properties box. However, doing this doesn’t actually decrease the file size that gets passed to the user, so it doesn’t help download speed. Substantial resizing needs to be done prior to upload. Springshare recommends adjustments of no more than 10-15%.1

Videos

There are a few things I tried while figuring out the best way to embed a YouTube video:

    1. Use the YouTube embed code as is, which can result in a squished image and a lot of black border in the top and bottom margins.

Default YouTube embed code

    2. Use the YouTube embed code but remove the iframe dimensions (width=”560″ height=”315″). This results in a small image that looks fine, but stays small regardless of the box size.
    3. Use the YouTube embed code, remove the iframe dimensions, and add the embed-responsive class (in this case, 16by9). This results in a nice responsive display with no black margins. Alternately, I discovered that leaving the iframe dimensions while adding the responsive class looks nearly the same.

YouTube embed code with dimensions removed and responsive class added

It should also be noted that LibGuides creators and editors should manually add a “title” attribute to the embed code for accessibility.2  Neither LibGuides nor YouTube does this automatically, so it’s up to the guide creator to add it in the HTML Editor. In addition, the “frameborder=0” will be overwritten by Bootstrap, so you can remove it or leave it; that’s up to you.
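
Putting those pieces together, a responsive embed looks roughly like the following. This is the standard Bootstrap 3 pattern; the video URL and title are placeholders:

<div class="embed-responsive embed-responsive-16by9">
    <iframe class="embed-responsive-item"
        src="https://www.youtube.com/embed/VIDEO_ID"
        title="Introduction to library research"
        allowfullscreen></iframe>
</div>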

Considering Box Order/Stacking

The way boxes stack and order on smaller devices is also something LibGuides creators or editors should take into consideration. The layout is essentially comprised of columns, and in Bootstrap the columns stack a certain way depending on device size.

I’ve tested several guides and believe the following are representative of how boxes will stack on a phone, or small mobile device. However, it’s always best to test your layout to be sure. Test your own guides by minimizing and resizing your browser window and watch how they stack.

Box stacking order of a guide with no large top box and three columns.

 

Box stacking order of a guide with a large top box and two columns.

 

Conclusion

After looking at the number of libraries that have these same issues, it may be safe to say that our subject librarians are similar to others in regard to having limited HTML, CSS, or design skills. They rely on LibGuides’ easy-to-use interface to do most of the work, as their time is limited or they have no interest in learning these additional skills. Our librarians spend most of their teaching time in a classroom, using a podium and large screen, or at the reference desk on large screens. Because of this, they are not highly attuned to the mobile user and how their guides display on other devices, even though their guides are being accessed by students on phones and tablets. We will be initiating a mobile reference service soon; perhaps this will help bring further awareness. For now, I recently taught an internal workshop in order to demonstrate and share what I have learned, in hopes of helping the librarians get these elements fixed. Helping ensure new guides are created with mobile in mind is also a priority. To date, several librarians have gone through their guides and made changes where necessary. Others have summer plans to update their guides and will address these issues at the same time. I’m not aware of any way to make these changes in bulk, since they are very individual in nature.

 


Danielle Rosenthal is the Web Development & Design Librarian at Florida Gulf Coast University. She is responsible for the library’s web site and its applications in support of teaching, learning, and scholarship activities of the FGCU Library community. Her interests include user interface, responsive, and information design.

 

Notes:

1 Maximizing your LibGuides for Mobile http://buzz.springshare.com/springynews/news-29/tips
2 http://acrl.ala.org/techconnect/post/accessibility-testing-libguides-2-0


A Reflection on Code4Lib 2016

See also: Margaret’s reflections on Code4Lib 2013 and recap of the 2012 keynote.

 


About a month ago was the 2016 Code4Lib conference in sunny Philadelphia. I’ve only been to a few Code4Lib conferences, starting with Raleigh in 2014, but it’s quickly become my favorite libraryland conference. This won’t be a comprehensive recap but a little taste of what makes the event so special.

Appetizers: Preconferences

One of the best things about Code4Lib is the affordable preconferences. It’s often a pittance to add on a preconference or two, extending your conference for a whole day. Not only that, there’s typically a wealth of options: the 2015 conference boasted fifteen preconferences to choose from, and Philadelphia somehow managed to top that with an astonishing twenty-four choices. Not only are they numerous, the preconferences vary widely in their topics and goals. There’s always intensely practical ones focused on bootstrapping people new to a particular framework, programming language, or piece of software (e.g. Railsbridge, workshops focused on Blacklight or Hydra). But there are also events for practicing your presentation or the aptly named “Getting Ready for Workshops” Workshop. One of my personal favorite ideas—though I must admit I’ve never attended—is the perennial “Fail4Lib” sessions where attendees examine their projects that haven’t succeeded and discuss what they’ve learned.

This year, I wanted to run a preconference of my own. I enjoy teaching, but I rarely get to do it in my current position. Previously, in a more generalist technologist position, I would teach information literacy alongside the other librarians. But as a Systems Librarian, it can sometimes feel like I rarely get out from behind my terminal. A preconference was an appealing chance to teach information professionals on a topic that I’ve accumulated some expertise in. So I worked with Coral Sheldon-Hess to put together a workshop focused on the fundamentals of the command line: what it is, how to use it, and some of the pivotal concepts. I won’t say too much more about the workshop because Coral wrote an excellent, detailed blog post right after we were done. The experience was great and feedback we received, including a couple kind emails from our participants, was very positive. Perhaps we, or someone else, can repeat the workshop in the future, as we put all our materials online.

Main Course: Presentations

Thankfully I don’t have to detail the conference talks too much, because they’re all available on YouTube. If a talk looks intriguing, I strongly encourage you to check out the recording. I’m not too ashamed to admit that a few went way over my head, so seeing the original will certainly be more informative than any summary I could offer.

One thing that was striking was how the two keynotes centered on themes of privacy and surveillance. Kate Krauss, Director of Communications of the Tor Project, led the conference off. Naturally, Tor being privacy software, Krauss focused on stories of government surveillance. She noted how surveillance focuses on the most marginalized people, citing #BlackLivesMatter and the transgender community as examples. Krauss’ talk provided concrete steps that librarians could take, for instance examining our own data collection practices, ensuring our services are secure, hosting privacy workshops, and running a Tor relay. She even mentioned The Library Freedom Project as a positive example of librarians fighting online surveillance, which she posited as one of the premier civil rights issues of our time.

On the final day, Gabriel Weinberg of the search engine DuckDuckGo spoke on similar themes, except he concentrated on how his company’s lack of personalization and tracking differentiated it from companies like Google and Apple. To me, Weinberg’s talk bookended well with Krauss’ because he highlighted the dangers of corporate surveillance. While the government certainly has abused its access to certain fundamental pieces of our country’s infrastructure—obtaining records from major telecom companies without a warrant comes to mind—tech companies are also culpable in enabling the unparalleled degree of surveillance possible in the modern era, simply by collecting such massive quantities of data linked to individuals (and, all too often, by failing to secure their applications properly).

While the pair of keynotes were excellent and thematic, my favorite moments of the conference were the talks by librarians. Becky Yoose gave perhaps the most rousing, emotional talk I’ve ever heard at a conference on the subject of burnout. Burnout is all too real in our profession, but not often spoken of, particularly in such a public venue. Becky forced us all to confront the healthiness and sustainability of our work/life balance, stressing the importance not only of strong organizational policies to prevent burnout but also personal practices. Finally, Andreas Orphanides gave a thoughtful presentation on the political implications of design choices. Dre’s well-chosen, alternatingly brutal and funny examples—from sidewalk spikes that prevent homeless people from lying in doorways, to an airline website labelling as “lowest” a price clearly higher than others on the very same page—outlined how our design choices reflect our values, and how we can better align our values with those of our users.

I don’t mean to discredit anyone else’s talks—there were many more excellent ones, on a variety of topics. Dinah Handel captured my feelings best in this enthusiastic tweet:

Dessert: Community

My main enjoyment from Code4Lib is the sense of community. You’ll hear a lot of people at conferences state things like “I feel like these are my people.” And we are lucky as a profession to have plenty of strong conference options, depending on our locality, specialization, and interests. At Code4Lib, I feel like I can strike up a conversation with anyone I meet about an impending ILS migration, my favorite command-line tool, or the vagaries of mapping between metadata schemas. While I love my present position, I’m mostly a solo systems person surrounded by a few other librarians all with a different expertise. As much as I want to discuss how ludicrous the webpub.def syntax is, or why reading XSLT makes me faintly ill, I know it’d bore my colleagues to death. At Code4Lib, people can at least tolerate such subjects of conversation, if not revel in them.

Code4Lib is great not solely because of its focus on technology and code, which a few other library organizations share, but because of the efforts of community members to make it a pleasurable experience for all. To name just a couple of the new things Code4Lib introduced this year: while previous years have had Duty Officers whom attendees could safely report harassment to, they were announced & much more visible this year; sponsored child care was available for conference goers with small children; and a service provided live transcription of all the talks.1 This is in addition to a number of community-building measures that previous Code4Lib conferences featured, such as a series of newcomers dinners on the first night, a “share and play” game night, and diversity scholarships. Overall, it’s evident that the Code4Lib community is committed to being positive and welcoming. Not that other library organizations aren’t, but it should be evident that our profession isn’t immune from problems. Being proactive and putting in place measures to prevent issues like harassment is a shining example of what makes Code4Lib great.

All this said, the community does have its issues. While a 40% female attendance rate is fair for a technology conference, it’s clear that the intersection of coding and librarianship is more male-dominated than the rest of the profession at large. Notably, Code4Lib has done an incredible job of democratically selecting keynote speakers over the past few years—five female and one male for the past three conferences—but the conference has also been largely white, so much so that the 2016 conference’s Program Committee gave a lightning talk addressing the lack of speaker diversity. Hopefully, measures like the diversity scholarships and conscious efforts on the part of the community can make progress here. But the unbearable whiteness of librarianship remains a very large issue.

Finally, it’s worth noting that Code4Lib is entirely volunteer-run. Since it’s not an official professional organization with membership dues and full-time staff members, everything is done by people willing to spare their own time to make the occasion a great one. A huge thanks to the local planning committee and all the volunteers who made such a great event possible. It’s pretty stunning to me that Code4Lib manages to put together some of the nicest benefits of any conference—the live streaming and transcribed talks come to mind—without a huge backing organization, and while charging pretty reasonable registration prices.

Night Cap

I’d recommend Code4Lib to anyone in the library community who deals with technology, whether you’re a manager, cataloger, systems person, or developer. There’s a wide breadth of material suitable for anyone and a great, supportive community. If that’s not enough, the proportion of presentations featuring pictures of cats and/or animated gifs is higher than your average conference.

Notes

  1. Aside: Matt Miller made a fun “Overheard at Code4Lib 2016” app using the transcripts

Near Us and Libraries, Robots Have Arrived

The movie Robot and Frank depicts a future in which the elderly have a robot as their companion and helper. The robot monitors various activities related to both mental and physical health and helps Frank with various household chores. But Frank also enjoys the robot’s company and goes on to enlist the robot in his adventure of breaking into a local library to steal a book, and in a greater heist later on. People’s lives in the movie are not particularly futuristic other than having a robot in them. And even a robot may not be so futuristic to us much longer. As a matter of fact, as of June 2015, there is a commercially available humanoid robot that comes close to performing some of the functions that the robot in Robot and Frank does.


Pepper Robot, Image from Aldebaran, https://www.aldebaran.com/en/a-robots/who-is-pepper

A Japanese company, SoftBank Robotics Corp., released a humanoid robot named ‘Pepper’ to the market back in June. The Pepper robot is 4 feet tall, weighs 61 pounds, speaks 17 languages, and is equipped with an array of cameras, touch sensors, an accelerometer, and other sensors in its “endocrine-type multi-layer neural network,” according to the CNN report. The Pepper robot was priced at ¥198,000 ($1,600), and Pepper owners are also responsible for an additional ¥24,600 ($200) monthly data and insurance fee. While the Pepper robot is not exactly cheap, it is surprisingly affordable for a robot. This means that the robot industry has now matured to the point where it can introduce a robot that the masses can afford.

Robots come in varying capabilities and forms. Some robots are as simple as a programmable cube block that can be combined with one another to be built into a working unit. For example, Cubelets from Modular Robotics are modular robots that are used for educational purposes. Each cube performs one specific function, such as flash, battery, temperature, brightness, rotation, etc. And one can combine these blocks together to build a robot that performs a certain function. For example, you can build a lighthouse robot by combining a battery block, a light-sensor block, a rotator block, and a flash block.

 

A variety of cubelets available from the Modular Robotics website.

By contrast, there are advanced robots, such as the animal-like machines developed by the robotics company Boston Dynamics. Some robots look like a human, although much smaller than the Pepper robot. NAO, launched in 2006, is a 58-cm-tall humanoid robot that moves, recognizes and hears people, and talks to them. NAO robots are interactive educational toys that help students learn programming in a fun and practical way.

Noticing their relevance to STEM education, some libraries are making robots available to library patrons. Westport Public Library provides robot training classes for its two NAO robots. Chicago Public Library lends a number of Finch robots that patrons can program to see how they work. In celebration of National Robotics Week back in April, San Diego Public Library hosted its first Robot Day, educating the public about how robots have impacted society. San Diego Public Library also started a weekly Robotics Club, inviting anyone to join in to help build or learn how to build a robot for the library. Haslet Public Library offers a Robotics Camp program for 6th to 8th graders who want to learn how to build with LEGO Mindstorms EV3 kits. School librarians are also starting robotics clubs. The Robotics Club at New Rochelle High School in New York is run by the school’s librarian, Ryan Paulsen. Paulsen’s robotics club started with faculty, parent, and other schools’ help, along with a grant from NASA, and participated in a FIRST Robotics Competition. Organizations such as the Robotics Academy at Carnegie Mellon University provide educational outreach and resources.

Image from Aldebaran website at https://www.aldebaran.com/en/humanoid-robot/nao-robot

There are also libraries that offer coding workshops, often with Arduino or Raspberry Pi, which are inexpensive computer hardware. Ames Free Library offers Raspberry Pi workshops. San Diego Public Library runs a monthly Arduino Enthusiast Meetup. Arduinos and Raspberry Pis can be used to build digital devices and objects that can sense and interact with the physical world, which comes close to a simple robot. We may see more robotics programs at those libraries in the near future.

Robots can fulfill many functions other than being educational interactive toys, however. For example, robots can be very useful in healthcare. A robot can be a patient’s emotional companion, just like Pepper. Or it can provide an easy way for a patient and her/his caregiver to communicate with physicians and others. A robot can be used at a hospital to move and deliver medication and other items and to function as a telemedicine assistant. It can also provide physical assistance for a patient or a nurse and even be used for children’s therapy.

Humanoid robots like Pepper may also serve at reception desks at companies. And it is not difficult to imagine them as sales clerks at stores. Robots can be useful at schools and other educational settings as well. At a workplace, teleworkers can use robots to achieve a more active presence. For example, universities and colleges can offer telepresence robots to online students who want to virtually experience and utilize campus facilities, or to faculty who wish to hold office hours or collaborate with colleagues while they are away from the office. As a matter of fact, the University of Texas at Arlington Libraries recently acquired several telepresence robots to lend to their faculty and students.

Not all robots do or will have the humanoid form as the Pepper robot does. But as robots become more and more capable, we will surely get to see more robots in our daily lives.

References

Alpeyev, Pavel, and Takashi Amano. “Robots at Work: SoftBank Aims to Bring Pepper to Stores.” Bloomberg Business, June 30, 2015. http://www.bloomberg.com/news/articles/2015-06-30/robots-at-work-softbank-aims-to-bring-pepper-to-stores.

“Boston Dynamics.” Accessed September 8, 2015. http://www.bostondynamics.com/.

Boyer, Katie. “Robotics Clubs At the Library.” Public Libraries Online, June 16, 2014. http://publiclibrariesonline.org/2014/06/robotics-clubs-at-the-library/.

“Finch Robots Land at CPL Altgeld.” Chicago Public Library, May 12, 2014. https://www.chipublib.org/news/finch-robots-land-at-cpl/.

McNickle, Michelle. “10 Medical Robots That Could Change Healthcare – InformationWeek.” InformationWeek, December 6, 2012. http://www.informationweek.com/mobile/10-medical-robots-that-could-change-healthcare/d/d-id/1107696.

Singh, Angad. “‘Pepper’ the Emotional Robot, Sells out within a Minute.” CNN.com, June 23, 2015. http://www.cnn.com/2015/06/22/tech/pepper-robot-sold-out/.

Tran, Uyen. “SDPL Labs: Arduino Aplenty.” The Library Incubator Project, April 17, 2015. http://www.libraryasincubatorproject.org/?p=16559.

“UT Arlington Library to Begin Offering Programming Robots for Checkout.” University of Texas Arlington, March 11, 2015. https://www.uta.edu/news/releases/2015/03/Library-robots-2015.php.

Waldman, Loretta. “Coming Soon to the Library: Humanoid Robots.” Wall Street Journal, September 29, 2014, sec. New York. http://www.wsj.com/articles/coming-soon-to-the-library-humanoid-robots-1412015687.


Best Practices for Hacking Third-Party Sites

While customizing vendor web services is not the most glamorous task, it’s something almost every library does. Whether we have full access to a templating system, as with LibGuides 2, or merely the ability to insert an HTML header or footer, as on many database platforms, we are tested by platform limitations and a desire to make our organization’s fractured web presence cohesive and usable.

What does customizing a vendor site look like? Let’s look at one example before going into best practices. Many libraries subscribe to EBSCO databases, which have a corresponding administrative side “EBSCOadmin”. Electronic Resources and Web Librarians commonly have credentials for these admin sites. When we sign into EBSCOadmin, there are numerous configuration options for our database subscriptions, including a “branding” tab under the “Customize Services” section.

While EBSCO’s branding options include specifying the primary and secondary colors of their databases, there’s also a “bottom branding” section which allows us to inject custom HTML. Branding colors can be important, but this post focuses on effectively injecting markup onto vendor web pages. The steps for doing so in EBSCOadmin are numerous and not informative for any other system, but the point is that when given custom HTML access one can make many modifications, from inserting text on the page, to adding an entirely new stylesheet, to modifying user interface behavior with JavaScript. Below, I’ve turned footer links orange and written a message to my browser’s JavaScript console using the custom HTML options in EBSCOadmin.

customized EBSCO database

These opportunities for customization come in many flavors. We might have access only to a section of HTML in the header or footer of a page. We might be customizing the appearance of our link resolver, subscription databases, or catalog. Regardless, there are a few best practices which can aid us in making modifications that are effective.

General Best Practices

Ditch best practices when they become obstacles

It’s too tempting; I have to start this post about best practices by noting their inherent limitations. When we’re working with a site designed by someone else, the quality of our own code is restricted by decisions they made for unknown reasons. Commonly-spouted wisdom—reduce HTTP requests! don’t use eval! ID selectors should be avoided!—may be unusable or even counter-productive.

To note but one shining example: CSS specificity. If you’ve worked long enough with CSS then you know that it’s easy to back yourself into a corner by using overly powerful selectors like IDs or—the horror—inline style attributes. These methods of applying CSS have high specificity, which means that CSS written later in a stylesheet or loaded later in the HTML document might not override them as anticipated, a seeming contradiction in the “cascade” part of CSS. The hydrogen bomb of specificity is the !important modifier which automatically overrides anything but another !important later in the page’s styles.

So it’s best practice to avoid inline style attributes, ID selectors, and especially !important. Except when hacking on vendor sites it’s often necessary. What if we need to override an inline style? Suddenly, !important looks necessary. So let’s not get caught up following rules written for people in greener pastures; we’re already in the swamp, throwing some mud around may be called for.
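
As a contrived sketch (the class name here is hypothetical), overriding a vendor's inline style is one of the rare cases where !important earns its keep:

/* The vendor ships something like <div class="promo-banner" style="display: block">...</div>.
   A normal rule loses to the inline style, so !important is the only way to win. */
.promo-banner {
    display: none !important;
}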

There are dozens of other examples that come to mind. For instance, in serving content from a vendor site where we have no server-side control, we may be forced to violate web performance best practices such as sending assets with caching headers and utilizing compression. While minifying code is another performance best practice, for small customizations it adds little but obfuscates our work for other staff. Keeping a small script or style tweak human-readable might be more prudent. Overall, understanding why certain practices are recommended, and when it’s appropriate to sacrifice them, can aid our decision-making.

Test. Test. Test. When you’re done testing, test again

Whenever we’re creating an experience on the web it’s good to test. To test with Chrome, with Firefox, with Internet Explorer. To test on an iPhone, a Galaxy S4, a Chromebook. To test on our university’s wired network, on wireless, on 3G. Our users are vast; they contain multitudes. We try to represent their experiences as best as possible in the testing environment, knowing that we won’t emulate every possibility.

Testing is important, sure. But when hacking a third party site, the variance is more than doubled. The vendor has likely done their own testing. They’ve likely introduced their own hacks that work around issues with specific browsers, devices, or connectivity conditions. They may be using server-side device detection to send out subtly different versions of the site to different users; they may not offer the same functionality in all situations. All of these circumstances mean that testing is vitally important and unending. We will never cover enough ground to be sure our hacks are foolproof, but we better try or they’ll not work at all.

Analytics and error reporting

Speaking of testing, how will we know when something goes wrong? Surely, our users will send us a detailed error report, complete with screenshots and the full schematics of every piece of hardware and software involved. After all, they do not have lives or obligations of their own. They exist merely to make our code more error-proof.

If, however, for some odd reason someone does not report an error, we may still want to know that one occurred. It’s good to set up unobtrusive analytics that record errors or other measures of interaction. Did we revamp a form to add additional validation? Try tracking what proportion of visitors successfully submit the form, how often the validation is violated, how often users submit invalid data multiple times in a row, and how often our code encounters an error. There are some intriguing client-side error reporting services out there that can catch JavaScript errors and detail them for our perusal later. But even a little work with events in Google Analytics can log errors, successes, and everything in between. With the mere information that problems are occurring, we may be able to identify patterns, focus our testing, and ultimately improve our customizations and end-user experience.
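
Here's a rough sketch, assuming the classic analytics.js ga() function is already loaded on the page; the event categories and form selector below are made up for illustration:

// Report uncaught JavaScript errors as Google Analytics events
window.addEventListener('error', function (event) {
    if (typeof ga === 'function') {
        ga('send', 'event', 'JS Errors', event.message || 'unknown error', window.location.pathname);
    }
});

// Track successful submissions of a (hypothetical) revamped form
var form = document.querySelector('#ill-request-form');
if (form) {
    form.addEventListener('submit', function () {
        if (typeof ga === 'function') {
            ga('send', 'event', 'ILL Form', 'submit');
        }
    });
}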

Know when to cut your losses

Some aspects of a vendor site are difficult to customize. I don’t want to say impossible, since one can do an awful lot with only a single <script> tag to work with, but unfeasible. Sometimes it’s best to know when sinking more time and effort into a customization isn’t worth it.

For instance, our repository has a “hierarchy browse” feature which allows us to present filtered subsets of items to users. We often get requests to customize the hierarchies for specific departments or purposes—can we change the default sort, can we hide certain info here but not there, can we use grid instead of list-based results? We probably can, because the hierarchy browse allows us to inject arbitrary custom HTML at the top of each section. But the interface for doing so is a bit clumsy and would need to be repeated everywhere a customization is made, sometimes across dozens of places simply to cover a single department’s work. So while many of these change requests are technically possible, they’re unwise. Updates would be difficult and impossible to automate, virtually ensuring errors are introduced over time as I forget to update one section or make a manual mistake somewhere. Instead, I can focus on customizing the site-wide theme to fix other, potentially larger issues with more maintainable solutions.

A good alternative to tricky and unmaintainable customizations is to submit a feature request to the vendor. Some vendors have specific sites where we can submit ideas for new features and put our support behind others’ ideas. For instance, the Innovative Users Group hosts an annual vote where members can select their most desired enhancement requests. Remember that vendors want to make a better product after all; our feedback is valued. Even if there’s no formal system for submitting feature requests, a simple email to our sales representative or customer support can help.

CSS Best Practices

While the above section spoke to general advice, CSS and JavaScript have a few specific peculiarities to keep in mind while working within a hostile host environment.

Don’t write brittle, overly-specific selectors

There are two unifying characteristics of hacking on third-party sites: 1) we’re unfamiliar with the underlying logic of why the site is constructed in a particular way and 2) everything is subject to change without notice. Both of these make targeting HTML elements, whether with CSS or JavaScript, challenging. We want our selectors to be as flexible as possible, to withstand as much change as possible without breaking. Say we have the following list of helpful tools in a sidebar:

<div id="tools">
    <ul>
        <li><span class="icon icon-hat"></span><a href="#">Email a Librarian</a></li>
        <li><span class="icon icon-turtle"></span><a href="#">Citations</a></li>
        <li><span class="icon icon-unicorn"></span><a href="#">Catalog</a></li>
    </ul>
</div>

We can modify the icons listed with a selector like #tools > ul > li > span.icon.icon-hat. But many small changes could break this style: a wrapper layer injected in between the #tools div and the unordered list, a switch from unordered to ordered list, moving from <span>s for icons to another tag such as <i>. Instead, a selector like #tools .icon.icon-hat assumes that little will stay the same; it thinks there’ll be icons inside the #tools section, but doesn’t care about anything in between. Some assumptions have to stay, that’s the nature of customizing someone else’s site, but it’s pretty safe to bet on the icon classes to remain.
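
Side by side, the two approaches look like this (the icon image path is a placeholder):

/* Brittle: depends on the exact nesting and tag names staying the same */
#tools > ul > li > span.icon.icon-hat {
    background-image: url("icons/hat.svg");
}

/* More resilient: only assumes an .icon-hat element lives somewhere inside #tools */
#tools .icon.icon-hat {
    background-image: url("icons/hat.svg");
}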

In general, sibling and child selectors make for poor choices for vendor sites. We’re suddenly relying not just on tags, classes, and IDs to stay the same, but also the particular order that elements appear in. I’d also argue that pseudo-selectors like :first-child, :last-child, and :nth-child() are dangerous for the same reason.

Avoid positioning if possible

Positioning and layout can be tricky to get right on a vendor site. Unless we’re confident in our tests and have covered all the edge cases, try to avoid properties like position and float. In my experience, many poorly structured vendor sites employ ad hoc box-sizing measurements, float-based layout, and lack a grid system. These are all a recipe for weird interconnections between disparate parts—we try to give a call-out box a bit more padding and end up sending the secondary navigation flying a thousand pixels to the right offscreen.

display: none is your friend

display: none is easily my most frequently used CSS property when I customize vendor sites. Can’t turn off a feature in the admin options? Hide it from the interface entirely. A particular feature is broken on mobile? Hide it. A feature is of niche appeal and adds more clutter than it’s worth? Hide it. The footer? Yeah, it’s a useless advertisement, let’s get rid of it. display: none is great but remember it does affect a site’s layout; the hidden element will collapse and no longer take up space, so be careful when hiding structural elements that are presented as menus or columns.
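
A couple of hedged examples, with class names standing in for whatever the vendor actually uses:

/* Hide a useless footer advertisement everywhere */
.footer-advertisement {
    display: none;
}

/* Hide a feature that is broken only on phone-sized screens (Bootstrap 3's xs breakpoint) */
@media (max-width: 767px) {
    .fancy-but-broken-widget {
        display: none;
    }
}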

Attribute selectors are excellent

Attribute selectors, which enable us to target an element by the value of any of its HTML attributes, are incredibly powerful.  They aren’t very common, so here’s a quick refresher on what they look like. Say we have the following HTML element:

<a href="http://example.com" title="the best site, seriously" target="_blank">

This is an anchor tag with three attributes: href, title, and target. Attribute selectors allow us to target an element by whether it has an attribute or an attribute with a particular value, like so:

/* applies to <a> tags with a "target" attribute */
a[target] {
    background: red;
}
/* applies to <a> tags with an "href" that begin with "http://"
this is a great way to style links pointed at external websites
or one particular external website! */
a[href^="http://"] {
    cursor: help;
}
/* applies to <a> tags with the text "best" anywhere in their "title" attribute */
a[title*="best"] {
    font-variant: small-caps;
}

Why is this useful among the many ways we can select elements in CSS? Vendor sites often aren’t anticipating all the customizations we want to employ; they may not provide handy class and ID styling hooks where we need them. Or, as noted above, the structure of the document may be subject to change either over time or across different pieces of the site. Attribute selectors can help mitigate this by making style bindings more explicit. Instead of saying “change the background icon for some random span inside a list inside a div”, we can say “change the background icon for the link that points at our citation management tool”.

If that’s unclear, let me give another example from our institutional repository. While we have the ability to list custom links in the main left-hand navigation of our site, we cannot control the icons that appear with them. What’s worse, there are virtually no styling hooks available; we have an unadorned anchor tag to work with. But that turns out to be plenty for a selector of the form a[href$=hierarchy] to target all <a>s with an href ending in “hierarchy”; suddenly we can define icon styles based on the URLs we’re pointing at, which is exactly what we want to base them on anyway.
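
In practice that rule looks something like the following sketch; the icon path is a placeholder:

/* Give every navigation link pointing at a hierarchy browse page its own icon */
a[href$="hierarchy"] {
    background: url("icons/browse.png") no-repeat left center;
    padding-left: 22px;
}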

Attribute selectors are brittle in their own ways—when our URLs change, these icons will break. But they’re a handy tool to have.

JavaScript Best Practices

Avoid the global scope

JavaScript has a notorious problem with global variables. By default, all variables lacking the var keyword are made global. Furthermore, variables outside the scope of any function will also be global. Global variables are considered harmful because they too easily allow unrelated pieces of code to interact; when everything’s sharing the same namespace, the chance that common names like i for index or count are used in two conflicting contexts increases greatly.

To avoid polluting the global scope with our own code, we wrap our entire script customizations in an immediately-invoked function expression (IIFE):

(function() {
    // do stuff here 
}())

Wrapping our code in this hideous-looking construction gives it its own scope, so we can define variables without fear of overwriting ones in the global scope. As a bonus, our code still has access to global variables like window and navigator. However, global variables defined by the vendor site itself are best avoided; it is possible they will change or are subject to strange conditions that we can’t determine. Again, the fewer assumptions our code makes about how the vendor’s site works, the more resilient it will be.

Avoid calling vendor-provided functions

Oftentimes the vendor site itself will put important functions in the global scope, functions like submitForm or validate whose intention seems quite obvious. We may even be able to reverse engineer their code a bit, determining what parameters we should pass to these functions. But we must not succumb to the temptation to actually reference their code within our own!

Even if we have a decent handle on the vendor’s current code, it is far too subject to change. Instead, we should seek to add or modify site functionality in a more macro-like way; instead of calling vendor functions in our code, we can automate interactions with the user interface. For instance, say the “save” button is in an inconvenient place on a form and has the following code:

<button type="submit" class="btn btn-primary" onclick="submitForm(0)">Save</button>

We can see that the button saves the form by calling the submitForm function when it’s clicked, passing a value of 0. Maybe we even figure out that 0 means “no errors” whereas 1 means “error”.1 So we could create another button somewhere which calls this same submitForm function. But so many changes could break our code: if the meaning of the “0” changes, if the function name changes, or if something else happens when the save button is clicked that’s not evident in the markup. Instead, we can have our new button trigger the click event on the original save button, exactly as a user interacting with the site would. In this way, our new save button should emulate the behavior of the old one through many types of changes.
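As a rough sketch of that idea (the selector and the placement of the new button are assumptions about the vendor’s markup, and the whole thing is wrapped in an IIFE as recommended above):

(function() {
    // assume this finds the vendor's original save button from the markup above
    var vendorSave = document.querySelector('button[type="submit"]');
    if (!vendorSave) { return; }
    // create our own, more conveniently placed save button
    var ourSave = document.createElement('button');
    ourSave.textContent = 'Save';
    ourSave.addEventListener('click', function(event) {
        event.preventDefault();
        // trigger the original button's click, letting the vendor's own handler
        // run whatever submitForm(0) (or its future replacement) does
        vendorSave.click();
    });
    document.body.appendChild(ourSave);
}());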

{{Insert Your Best Practices Here}}

Web-savvy librarians of the world, what are the practices you stick to when modifying your LibGuides, catalog, discovery layer, databases, etc.? It’s actually been a while since I did customization outside of my college’s IR, so the ideas in this post are more opinion than practice. If you have your own techniques—or disagree with the ones in this post!—we’d love to hear about it in the comments.

Notes

  1. True story, I reverse engineered a vendor form where this appeared to be the case.

Using Grunt to Automate Repetitive Tasks

Riding a tangent from my previous post on web performance, here is an introduction to Grunt, a JavaScript task runner.

Why would we use Grunt? It’s become a common tool for web development as it puts together a number of tedious but necessary steps for optimizing a website. However, “task runner” is intentionally generic; Grunt isn’t specifically limited to websites, and it can move and modify files in manifold ways.

Initial Setup

Unfortunately, virtually no operating system comes prepared to use Grunt out of the box. Grunt is written in Node.js and thus requires a few install steps. The good news is that Node is cross-platform; in my experience, it works better on Windows than most other programming frameworks.

  • Install Node
  • Ensure that the node and npm commands are on your path by running node --version and npm --version
    • If not, try a web search like “add node to path {{operating system}}”; the fix usually amounts to editing a single line in a configuration file
  • Install Grunt globally with npm install -g grunt-cli
  • Ensure Grunt is on your path (grunt --version) and, again, search for an answer if not
  • Inside your project, run npm install grunt to install a local copy of grunt (it’ll appear in a folder named “node_modules”)

We’re ready to run! Let’s do a basic example.

First Example: Basic Web App Optimization

Say we have a simple website: there’s an index.html page, a stylesheet in a “css” subfolder, and a script in a “js” subfolder. First, let’s define what we want to accomplish: keep a full-size, easily readable copy of all our code while also building minified versions of both the CSS and JS. To do this, we’ll use three plugins: cssmin, uglify, and copy. This whole example is available on GitHub; even if you don’t use git, you can download a zip archive of the files.

First, inside our project, we run npm install grunt-contrib-cssmin grunt-contrib-uglify grunt-contrib-copy. These plugins are now installed in a “node_modules” folder, but Grunt still needs to know how to use them: what tasks manipulate what files, and other options. We provide this information in a file named “Gruntfile.js” in the root of our project. Here’s our initial one:

module.exports = function(grunt) {
    // this tells Grunt about the 3 plugins we installed 
    grunt.loadNpmTasks('grunt-contrib-cssmin');
    grunt.loadNpmTasks('grunt-contrib-uglify');
    grunt.loadNpmTasks('grunt-contrib-copy');
    grunt.initConfig({
        // our configuration will go here 
    });
    // we're going to want to run cssmin, uglify, & copy together 
    // so let's group them under a single "build" task 
    grunt.registerTask('build', ['cssmin', 'uglify', 'copy']);
};

For each task, there’ll be a section inside initConfig with its settings. For cssmin, the setup is nested a few layers but is really just a single line of code. Under a cssmin property we specify a target name, which can be anything. Targets allow us to have multiple configurations for a single task, which is handy in more complex projects but often unneeded. Under our target, which we’ll name “minify”, there’s a files property where we associate an array of input files with a single output file. This makes concatenating multiple stylesheets easy.

cssmin: {
  minify: {
    files: {
      'build/css/main.css': ['css/main.css']
    }
  }
}, // trailing comma since we'll add another section

Uglify’s setup is identical. We’ll name the target the same and only change the paths inside the files property.

uglify: {
  minify: {
    files: {
      'build/js/main.js': ['js/main.js']
    }
  }
}, // again, trailing comma, we add more below

While cssmin handles stylesheets and uglify handles JavaScript, our index.html only needs to be copied and not modified.1 See if you can write out the copy task’s settings by yourself, mimicking what we’ve already done.
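If you want to check your work afterwards, a configuration along these lines should do the trick (the target name is arbitrary; only the file paths matter):

copy: {
  minify: {
    files: {
      'build/index.html': ['index.html']
    }
  }
}, // another trailing comma, since we add one more section below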

Now we run grunt build in our command prompt and some messages tell us about each task’s status. Look in the “build” folder that appears after the command runs. Far smaller, optimized versions of our main CSS and JS files are in there.

Great! We’ve accomplished our goals. But we must run grunt build over and over each time we want to remake our optimized assets, switching between our code editor and command prompt each time. If we’re doing a lot of piecemeal editing, this is most annoying. Instead, let’s use another plugin by running npm install grunt-contrib-watch to get the “watch” task and load it with the line grunt.loadNpmTasks('grunt-contrib-watch').2 Then, write this configuration:

watch: {
    minify: {
        files: ['*.html', 'css/*.css', 'js/*.js'],
        tasks: ['build']
    }
}

Watch has just two intuitive parameters in its configuration: an array of files to watch and an array of tasks to execute when those files change. The asterisks are wildcards, so unlike our settings above this stanza isn’t dependent on exact file names. Now, by running grunt watch in our command prompt, optimized assets are magically constructed every time we save changes to a file. We can edit in peace without continually switching between the command line and our editor. Better yet, watch can work with a local development server to reload new versions of files upon every edit.3 The right combination of tasks can yield super efficient workflows where we edit and view results without worrying about optimizations made behind the scenes.
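As a sketch of that last point, livereload is just an extra option inside the watch target (this assumes the default livereload port is acceptable; see note 3):

watch: {
    minify: {
        options: {
            // start a livereload server so the browser refreshes on each change
            livereload: true
        },
        files: ['*.html', 'css/*.css', 'js/*.js'],
        tasks: ['build']
    }
}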

More Advanced: Portable Header

While the above is suitable for much small-scale web development, Grunt can handle far more complex situations. For example, I wrote a portable HTML header which can be inserted onto various vendor websites such as LibGuides or an A to Z list. The project’s Gruntfile is 159 lines long and makes use of ten Grunt plugins.

I won’t go into detail explaining how each Grunt task’s settings work, but I will outline what’s happening. A “sass” task compiles SCSS code into minified CSS that browsers can understand, using an external program. A couple of linting tools, jshint and scss-lint, check files against code quality heuristics. Our good friends copy and uglify are back doing their jobs, only this time they’re joined by htmlmin, which handles the index page. “String-replace” is an example of a multi-target task; its first target searches over a series of files for strings wrapped in double curly braces like “{{example}}” and then swaps out these placeholders with values specified in another file. The second target takes the entire contents of a stylesheet and a script and inlines them into the main HTML.

That’s a lot of labor being handled by computer programs instead of humans. While passing a couple of files through tools that remove comments and whitespace isn’t tough, the many steps in constructing an optimized HTML header from several files provide a good demonstration of Grunt’s value. While it took me some time to configure everything properly, the combined “build” task for this project has probably run hundreds of times and saved me hours of work. Not only that, because of the linting and minification, the final product is doubtless higher quality than anything I could assemble manually.

The length and complexity of my Gruntfile points to one of the tougher pieces of using Grunt heavily: the order and delegated responsibilities of numerous tasks are tricky to coordinate properly. Throughout my Gruntfile, there are comments indicating when particular tasks run, because running them out of order would either be fruitless or cause an error. For instance, the “string-replace” task’s “inline” target must run after those other files have been minified, otherwise the minification serves no purpose (the inlined code would be full size).
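To make that concrete, here’s a hypothetical “build” alias that pins the order down explicitly; the task list is illustrative rather than copied from my actual Gruntfile:

// minify assets first, then inline the minified results into the HTML
grunt.registerTask('build', [
    'cssmin',
    'uglify',
    'string-replace:inline' // must come last, or it would inline full-size code
]);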

Similarly, coordinating which tasks move which files has been a constant headache for me in many projects. While the “copy” task moves images to the build folder, the “tpl” target of the “string-replace” task moves everything else. But I could’ve also used the uglify or sass tasks to move files! Since every task can potentially move the files it operates upon, it’s difficult to keep track of where a file is at a particular point in the workflow. The best way to debug these issues is to run the pieces of a multi-task alias like build one at a time: first run uglify, then cssmin, then htmlmin… checking the state of the files in between each to make sure that changes are occurring as anticipated.

Use Cases Abound

I use Grunt in almost all my projects, whether they’re web development or not. I use Grunt to copy my shell customizations into place, so that when I’m working on a new one I can just run grunt watch and rely on the changes being synched into place. I use Grunt to build Chrome extensions which require extra packaging steps before they can be pushed into the Chrome Web Store. I use Grunt in our catalog’s customized pages to minify code and also to check for potential errors. As an additional step, Grunt can be hooked up to deploy processes such that once a successful build is made the new files are pushed off to a remote server.

Grunt can be used to construct almost any workflow from a series of discrete pieces: compiling some EAD finding aids into an HTML website via XSLT, or processing vendor MARC files with a PyMARC script and then uploading them into an ILS. Anything that can be scripted could be tied to Grunt tasks with a plug-in like grunt-exec, which executes arbitrary shell commands. However, there is a limit to what it’s sensible to do with Grunt. These last two examples are arguably better accomplished with shell scripts. Grunt is at its best when its great suite of plug-ins is relied upon, and those plug-ins tend to perform web-specific tasks. Grunt also requires at least a modicum of comfort with coding. It falls into an odd space: while the configuration file is indeed JavaScript, it reads like a series of lists of settings, files, and ordered tasks. If you have more complex needs that involve if-then conditions and custom scripts, a lot of Grunt’s utility is negated. On the other hand, for those who would rather avoid code and the command line, options like the CodeKit app make more sense.
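For example, a grunt-exec target wrapping a shell command might look roughly like this (the script name here is hypothetical):

exec: {
    process_marc: {
        // run an arbitrary shell command as one step in a Grunt workflow
        command: 'python process_vendor_records.py'
    }
}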

The Grunt site’s Getting Started and Sample Gruntfile pages are helpful sources of documentation.

A Beginner’s Guide to Grunt: Redux — a nice, updated overview. Some of the steps here are unnecessary for beginners, however, as they require a lot of files and structure. That’s great for experienced developers in the long run, because everything is smaller and more modular, but too much setup for simple projects.

I find myself constantly consulting the readmes for various Grunt plugins to figure out how they work, since their options are not necessarily discoverable otherwise. A quick way to pop open the home page of a package is by running npm home grunt-contrib-uglify (inserting the plugin name of your choice), which will open the registered home page of the package, often on GitHub.

Finally, it’s worth mentioning Gulp, a competing JavaScript task runner. Gulp is the same type of tool as Grunt (you wouldn’t use both in a project) but follows a different design philosophy. In short, Gulp tends to run faster due to its design and setting it up looks more like code and less like a configuration file, which some people prefer.

Notes

  1. There’s actually another great plugin, grunt-contrib-htmlmin, which minifies HTML. Its settings are only a little bit more involved than the copy task, so trying to configure htmlmin would make another nice exercise for those wanting to build on this post.
  2. Writing grunt.loadNpmTasks for each task we add to a complex Gruntfile gets tiresome. It takes a bit more initial work—we need to run npm init before anything else, fill in some prompts, and append --save-dev to all your npm install commands—so I decided to skip it in this intro, but we can use load-grunt-tasks to get this down to a single line that doesn’t need to be updated each time a new plugin is added.
  3. The appropriate setting is options.livereload as documented here. While our scenario doesn’t quite capture how time-saving this can be, grunt watch shines when working with a language that compiles to CSS like SASS. A process like “edit SASS, compile CSS, reload web page, view changes” becomes simply “edit SASS, view changes” because the intermediary stages are triggered by grunt watch.

Analyzing EZProxy Logs

Analyzing EZProxy logs may not be the most glamorous task in the world, but it can be illuminating. Depending on your EZProxy configuration, log analysis can allow you to see the top databases your users are visiting, the busiest days of the week, the number of connections to your resources occurring on or off-campus, what kinds of users (e.g., staff or faculty) are accessing proxied resources, and more.

What’s an EZProxy Log?

EZProxy logs are not significantly different from regular server logs.  Server logs are generally just plain text files that record activity that happens on the server.  Logs that are frequently analyzed to provide insight into how the server is doing include error logs (which can be used to help diagnose problems the server is having) and access logs (which can be used to identify usage activity).

EZProxy logs are a kind of modified access log, which records the activities (page loads, HTTP requests, etc.) your users undertake while connected in an EZProxy session. This article will briefly outline five potential methods for analyzing EZProxy logs:  AWStats, Piwik, EZPaarse, a custom Python script for parsing starting-point URL (SPU) logs, and a paid option called Splunk.

How much any log analyzer can tell you will of course depend upon how your EZProxy log directives are configured.  You will need to know your LogFormat and/or LogSPU directives in order to configure most log file analyzing solutions.  In EZProxy, you can see how your logs are formatted in config.txt/ezproxy.cfg by looking for the LogFormat directive, 1  e.g.,

LogFormat %h %l %u %t "%r" %s %b "%{user-agent}i"

and / or, to log Starting Point URLs (SPUs):

LogSPU -strftime log/spu/spu%Y%m.log %h %l %u %t "%r" %s %b "%{ezproxy-groups}i"

Logging Starting Point URLs can be useful because those entries tend to represent users clicking into a database or into the full text of an article; no activity after that initial contact is logged.  This type of logging does not record extraneous resource loading, such as loading scripts and images – which often clutter up your traditional LogFormat logs and lead to misleadingly high hit counts.  LogSPU directives can be defined in addition to traditional LogFormat to provide two different possible views of your users’ data.  SPULogs can be easier to analyze and give more interesting data, because they offer a clearer picture of which links and databases are most popular among your EZProxy users.  If you haven’t already set them up, SPULogs can be a very useful way to observe general usage trends by database.

You can find some very brief anonymized EZProxy log sample files on Gist.

On a typical EZProxy installation, historical monthly logs can be found inside the ezproxy/log directory.  By default they will rotate out every 12 months, so you may only find the past year of data stored on your server.

AWStats

Get It:  http://www.awstats.org/#DOWNLOAD

Best Used With:  Full Logs or SPU Logs

Code / Framework:  Perl


An example AWStats monthly history report. Can you tell when our summer break begins?

AWStats Pros:

  • Easy installation, including on localhost
  • You can define your unique LogFormat easily in AWStats’ .conf file.
  • Friendly, albeit a little bit dated looking, charts show overall usage trends.
  • Extensive (but sometimes tricky) customization options can be used to more accurately represent sometimes unusual EZProxy log data.

Hourly traffic distribution in AWStats. While our traffic peaks during normal working hours, we have steady usage going on until about Midnight, after which point it crashes pretty hard. We could use this data to determine how much virtual reference staffing we should have available during these hours.

 

AWStats Cons:

  • If you make a change to .conf files after you’ve ingested logs, the changes do not take effect on already ingested data.  You’ll have to re-ingest your logs.
  • Charts and graphs are not particularly (at least easily) customizable, and are not very modern-looking.
  • Charts are static and not interactive; you cannot easily cross-section the data to make custom charts.

Piwik

Get It:  http://piwik.org/download/

Best Used With:  SPULogs, or embedded on web pages as a web traffic analytics tool

Code / Framework:  Python


The Piwik visitor dashboard showing visits over time. Each point on the graph is interactive. The report shown actually is only displaying stats for a single day. The graphs are friendly and modern-looking, but can be slow to load.

Piwik Pros:

  • The charts and graphs generated by Piwik are much more attractive and interactive than those produced by AWStats, with report customizations very similar to what’s available in Google Analytics.
  • If you are comfortable with Python, you can do additional customizations to get more details out of your logs.
Piwik file ingestion in PowerShell

To ingest a single monthly log took several hours. On the plus side, with this running on one of Lauren’s monitors, anytime someone walked into her office they thought she was doing something *really* technical.

Piwik Cons:

  • By default, parsing of large log files seems to be pretty slow, but performance may depend on your environment, the size of your log files and how often you rotate your logs.
  • In order to fully take advantage of the library-specific information your logs might contain and your LogFormat setup, you might have to do some pretty significant modification of Piwik’s import_logs.py script.

When looking at popular pages in Piwik you’re somewhat at the mercy that the subdirectories of database URLs have meaningful labels; luckily EBSCO does, as shown here. We have a lot of users looking at EBSCO Ebooks, apparently.

EZPaarse

Get It:  http://analogist.couperin.org/ezpaarse/download

Best Used With:  Full Logs or SPULogs

Code / Framework:  Node.js


ezPaarse’s friendly drag and drop interface. You can also copy/paste lines for your logs to try out the functionality by creating an account at http://ezpaarse.couperin.org.

EZPaarse Pros:

  • Has a lot of potential to be used to analyze existing log data to better understand e-resource usage.
  • Drag-and-drop interface, as well as copy/paste log analysis
  • No command-line needed
  • Its goal is to be able to associate meaningful metadata (domains, ISSNs) to provide better electronic resource usage statistics.

ezPaarse Excel output generated from a sample log file, showing type of resource (article, book, etc.) ISSN, publisher, domain, filesize, and more.

EZPaarse Cons:

  • In Lauren’s testing, we couldn’t get our logs to ingest correctly (perhaps due to a somewhat non-standard EZProxy LogFormat), but the sample files provided worked well. UPDATE 11/26:  With some gracious assistance from EZPaarse’s developers, we got EZPaarse to work!  It took about 10 minutes to process 2.5 million log lines, which is pretty awesome. Lesson learned – if you get stuck, reach out to ezpaarse [at] couperin.org or tweet for help @ezpaarse.  Also be sure to try out some of the pre-defined parameters set up by other institutions under Parameters. Check out the comments below for some more detail from EZPaarse’s developers.
  • Output is in Excel Sheets rather than a dashboard-style format – but as pointed out in the comments below, you can optionally output the results in JSON.

Write Your Own with Python

Get Started With:  https://github.com/robincamille/ezproxy-analysis/blob/master/ezp-analysis.py

Best used with: SPU logs

Code / Framework:  Python


Screenshot of a Python script, available at Robin Davis’ Github

 

Custom Script Pros:

  • You will have total control over what data you care about. DIY analyzers are usually written up because you’re looking to answer a specific question, such as “How many connections come from within the Library?”
  • You will become very familiar with the data! As librarians in an age of user tracking, we need to have a very good grasp of the kinds of data that our various services collect from our patrons, like IP addresses.
  • If your script is fairly simple, it should run quickly. Robin’s script took 5 minutes to analyze almost 6 years of SPU logs.
  • Your output will probably be a CSV, a flexible and useful data format, but could be any format your heart desires. You could even integrate Python libraries like Plotly to generate beautiful charts in addition to tabular data.
  • If you use Python for other things in your day-to-day, analyzing structured data is a fun challenge. And you can impress your colleagues with your Pythonic abilities!

 


Action shot: running the script from the command line.

Custom Script Cons:

  • If you have not used Python to input/output files or analyze tables before, this could be challenging.
  • The easiest way to run the script is within an IDE or from the command line; if this is the case, it will likely only be used by you.
  • You will need to spend time ascertaining what’s what in the logs.
  • If you choose to output data in a CSV file, you’ll need more elbow grease to turn the data into a beautiful collection of charts and graphs.

Output of the sample script is a labeled CSV that divides connections by locations and user type (student or faculty). (Source)

Splunk (Paid Option)

Best Used with:  Full Logs and SPU Logs

Get It (as a free trial):  http://www.splunk.com/download

Code / Framework:  Various, including Python


A Splunk distribution showing traffic by days of the week. You can choose to visualize this data in several formats, such as a bar chart or scatter plot. Notice that this chart was generated by a syntactical query in the upper left corner: host=lmagnuson| top limit=20 date_wday

Splunk Pros:  

  • Easy to use interface, no scripting/command line required (although command line interfacing (CLI) is available)
  • Incredibly fast processing.  As soon as you import a file, splunk begins ingesting the file and indexing it for searching
  • It’s really strong in interactive searching.  Rather than relying on canned reports, you can dynamically and quickly search by keywords or structured queries to generate data and visualizations on the fly.

Here’s a search for log entries containing a URL (digital.films.com), which Splunk uses to display a chart showing the hours of the day that this URL is being accessed. This particular database is most popular around 4 PM.

Splunk Cons:

  • It has a little bit of a learning curve, but it’s worth it for the kind of features and intelligence you can get from Splunk.
  • It’s the only paid option on this list.  You can try it out for 60 days with up to 500MB of data a day, and certain non-profits can apply to continue using Splunk under the 500MB/day limit.  Splunk can be used with any server access or error log, so a library might consider partnering with other departments on campus to purchase a license.2

What should you choose?

It depends on your needs, but AWStats is a tried-and-true solution that is easy to install and maintain.  If you have the knowledge, a custom Python script gives you the most flexibility, but obviously takes time to test and develop.  If you have money and could partner with others on your campus (or just need a one-time report generated through a free trial), Splunk is very powerful, generates some slick-looking charts, and is definitely worth looking into.  If there are other options not covered here, please let us know in the comments!

About our guest author: Robin Camille Davis is the Emerging Technologies & Distance Services Librarian at John Jay College of Criminal Justice (CUNY) in New York City. She received her MLIS from the University of Illinois Urbana-Champaign in 2012 with a focus in data curation. She is currently pursuing an MA in Computational Linguistics from the CUNY Graduate Center.

Notes
  1. Details about LogFormat and what each %-letter value means can be found at http://www.oclc.org/support/services/ezproxy/documentation/cfg/logformat.en.html; LogSPU details can be found at http://oclc.org/support/services/ezproxy/documentation/cfg/logspu.en.html
  2. Another paid option that offers a free trial, and comes with extensions made for parsing EZProxy logs, is Sawmill: https://www.sawmill.net/downloads.html

Hacking in Python with PyMARC

marc icon with an arrow pointing to a spreadsheet icon

The pymarc Python library is extremely handy for writing scripts to read, write, edit, and parse MARC records.  I was first introduced to pymarc through Heidi Frank’s excellent Code4Lib Journal article on using pymarc in conjunction with MARCedit.1.  Here at ACRL TechConnect, Becky Yoose has also written about some cool applications of pymarc.

In this post, I’m going to walk through a couple of applications of using pymarc to make comma-separated (.csv) files for batch loading in DSpace and in the KBART format (e.g., for OCLC KnowledgeBase purposes).  I know what you’re thinking – why write a script when I can use MARCedit for that?  Among MARCedit’s many indispensable features is its Export Tab-Delimited Data feature. 2.  But there might be a couple of use cases where scripts can come in handy:

  • Your MARC records lack data that you need in the tab-delimited or .csv output – for example, a field used for processing in whatever system you’re loading the data into that isn’t found in the MARC record source.
  • Your delimited file output requires things like empty fields or parsed, modified fields.  MARCedit is great for exporting whole MARC fields and subfields into delimited files, but you may need to pull apart a MARC element or parse a MARC element out with some additional data.  Pymarc is great for this.
  • You have to process a lot of records frequently, and just want something tailor-made for your use case.  While you can certainly use MARCedit to save a frequently used export template, editing that saved template when you need to change the output parameters of your delimited file isn’t easy.  With a Python script, you can easily edit your script if your delimited file requirements change.

 

Some General Notes about Python

Python is a useful scripting language to know for a lot of reasons, and for small scripting projects it can have a fairly shallow learning curve.  In order to start using Python, you’ll need to set up your development environment.  Macs already come with Python installed; to double-check which version you have, launch Terminal and type python --version.  In a Windows environment, you’ll need to do a few more steps, detailed here.  Full Python documentation for 2.x can be found here (though personally, I find it a little dense at times!), and Codecademy features some pretty extensive tutorials to help you learn the syntax quickly.  Google also has some pretty extensive tutorials on Python.  Personally, I find it easiest to learn when I have a project that actually means something to me, so if you’re familiar with MARC records, just downloading the Python scripts mentioned in this post below and learning how to run them can be a good start.

Spacing and indentation in Python are very important, so if you’re getting errors running scripts, make sure that your conditional statements are indented properly.3 The version of Python on your machine also makes a really big difference and will determine whether your code runs successfully.  These examples have all been tested with Python 2.6 and 2.7, but not with Python 3 or above.  I find that Python 2.x has more help resources out on the web, which is nice to be able to take advantage of when you’re first starting out.

Use Case 1:  MARC to KBART

The complete script described below, along with some sample MARC records, is on GitHub.

The KBART format is a NISO/United Kingdom Serials Group (UKSG) initiative designed to standardize information for use with Knowledge Base systems.  The KBART format is a series of standardized fields that can be used to identify serial coverage (e.g., start and end dates), URLs, publisher information, and more.4  Notably, it’s the required format for adding and modifying customized collections in OCLC’s Knowledge Base. 5.

In this use case, I’m using OCLC’s modified KBART file – most of the fields are KBART standard but a few fields are specific to OCLC’s Knowledge Base, e.g., oclc_collection_name.6.  Typically, these KBART files would be created either manually, or generated by using some VLOOKUP Excel magic with an existing KBART file 7.  In my use case, I needed to batch migrate a bunch of data stored in MARC records to the OCLC Knowledge Base, and these particular records didn’t always correspond neatly to existing OCLC Knowledge Base Collections.  For example, one collection we wanted to manage in the OCLC Knowledge Base was the library’s print holdings so that these titles were displayed in OCLC’s A-Z Journal lookup tool.

First, I’ll need a few helper libraries for my script:

import csv
from pymarc import MARCReader
from os import listdir
from re import search

Then, I’ll declare the source directory where my MARC records are (in this case, a folder called marc in the same directory the script lives in) and instruct the script to go find all the .mrc files in that directory.  Note that this enables the script to process either a single file of multiple MARC records, or multiple distinct files, all at once.

# change the source directory to whatever directory your .mrc files are in
SRC_DIR = 'marc/'

The script identifies MARC records by looking for all files ending in .mrc using the Python filter function.  Within the filter function, lambda creates a one-off anonymous function to define the filter parameters: 8

# get a list of all .mrc files in source directory 
file_list = filter(lambda x: search('.mrc', x), listdir(SRC_DIR))

Now I’ll define the output parameters.  I need a tab-delimited file, not a comma-delimited file, but either way I’ll use the csv.writer function to create a .txt file and define the tab delimiter (\t).  I also don’t really want the fields quoted unless there’s a special character or something that might cause a problem reading the file, so I’ll set quoting to minimal:

#create tab delimited text file that quotes if special characters are present
csv_out = csv.writer(open('kbart.txt', 'w'), delimiter = '\t', 
quotechar = '"', quoting = csv.QUOTE_MINIMAL)


And I’ll also create the header row, which includes the KBART fields required by the OCLC Knowledge Base:

#create the header row
csv_out.writerow(['publication_title', 'print_identifier', 
'online_identifier', 'date_first_issue_online', 
'num_first_vol_online', 'num_first_issue_online', 
'date_last_issue_online', 'num_last_vol_online', 
'num_last_issue_online', 'title_url', 'first_author', 
'title_id', 'coverage_depth', 'coverage_notes', 
'publisher_name', 'location', 'title_notes', 'oclc_collection_name', 
'oclc_collection_id', 'oclc_entry_id', 'oclc_linkscheme', 
'oclc_number', 'action'])

Next, I’ll start a loop for writing a row into the tab-delimited text file for every MARC record found.

for item in file_list:
    fd = open(SRC_DIR + item, 'rb')
    reader = MARCReader(fd)

By default, we’ll need to set each field’s value to blank (''), so that errors are avoided if the value is not present in the MARC record.  We can set all the values to blank to start with in one statement:

for record in reader:
    publication_title = print_identifier = online_identifier \
    = date_first_issue_online = num_first_vol_online \
    = num_first_issue_online = date_last_issue_online \
    = num_last_vol_online = num_last_issue_online \
    = title_url = first_author = title_id = coverage_depth \
    = coverage_notes = publisher_name = location = title_notes \
    = oclc_collection_name = oclc_collection_id = oclc_entry_id \
    = oclc_linkscheme = oclc_number = action = ''

Now I can start pulling in data from MARC fields.  At its simplest, for each field defined in my CSV, I can pull data out of MARC fields using a construction like this:

    #title_url
    if record ['856'] is not None:
      title_url = record['856']['u']

I can use generic python string parsing tools to clean up fields.  For example, if I need to strip the ending / out of a MARC 245$a field, I can use .rsplit to return everything before that final slash (/):

 
    # publication_title
    if record['245'] is not None:
        publication_title = record['245']['a'].rsplit('/', 1)[0]
        if record['245']['b'] is not None:
            publication_title = publication_title + " " + record['245']['b']

Also note that once you’ve declared a variable value (like publication_title) you can re-use that variable name to add more onto the string, as I’ve done with the 245$b subtitle above.

As is usually the case for MARC records, the trickiest business has to do with parsing out serial ranges.  The KBART format is really designed to capture, as accurately as possible, thresholds for beginning and ending dates.  MARC makes this…complicated.  Many libraries might use the 866 summary field to establish summary ranges, but there are varying local practices that determine what these might look like.  In my case, the minimal information I was looking for included beginning and ending years, and the records I was processing with the script stored summary information fairly cleanly in the 863 and 866 fields:

    # date_first_issue_online
    if record['863'] is not None:
      date_first_issue_online = record['863']['a'].rsplit('-', 1)[0]
 
    # date_last_issue_online
    if record['866'] is not None:
      date_last_issue_online = record['866']['a'].rsplit('-', 1)[-1]

Where further adjustments were necessary, and I couldn’t easily account for the edge cases programmatically, I imported the KBART file into OpenRefine for further cleanup.  A sample KBART file created by this script is available here.

Use Case 2:  MARC to DSpace Batch Upload

Again, the complete script for transforming MARC records for DSpace ingest is on GitHub.

This use case is very similar to the KBART scenario.  We use DSpace as our institutional repository, and we had about 2000 historical theses and dissertation MARC  records to transform and ingest into DSpace.  DSpace facilitates batch-uploading metadata as CSV files according to the RFC4180 format, which basically means all the field elements use double quotes. 9  While the fields being pulled from are different, the structure of the script is almost exactly the same.

When we first define the CSV output, we need to ensure that quoting is used for all elements – csv.QUOTE_ALL:

csv_out = csv.writer(open('output/theses.txt', 'w'), delimiter = '\t', 
quotechar = '"', quoting = csv.QUOTE_ALL)

The other nice thing about using Python for this transformation is the ability to program in static text that would be the same for all the lines in the CSV file.  For example, I parsed together a more friendly display of the department the thesis was submitted to like so:

# dc.contributor.department
if record['690']['x'] is not None:
    dccontributordepartment = ('UniversityName.  Department of ') + \
        record['690']['x'].rsplit('.', 1)[0]

You can view a sample output file created by this script here on GitHub.

Other PyMARC Projects

There are lots of other, potentially more interesting things that can be done with pymarc, although certainly one of its most common applications is converting MARC to more flexible formats, such as MARCXML.10 If you’re interested in learning more, join the pymarc Google Group, where you can often get help from the original pymarc developers (Ed Summers, Mark Matienzo, Gabriel Farrell and Geoffrey Spear).

Notes
  1. Frank, Heidi. “Augmenting the Cataloger’s Bag of Tricks: Using MarcEdit, Python, and pymarc for Batch-Processing MARC Records Generated from the Archivists’ Toolkit.” Code4Lib Journal, 20 (2013):  http://journal.code4lib.org/articles/8336
  2. https://www.youtube.com/watch?v=qkzJmNOvY00
  3. For the specifications on indentation, take a look at http://legacy.python.org/dev/peps/pep-0008/#id12
  4. http://www.uksg.org/kbart/s5/guidelines/data_fields
  5. http://oclc.org/content/dam/support/knowledge-base/kb_modify.pdf
  6.  A sample blank KBART Excel sheet for OCLC’s Knowledge Base can be found here:  http://www.oclc.org/support/documentation/collection-management/kb_kbart.xlsx.  KBART files must be saved as tab-delimited .txt files prior to upload, but are obviously more easily edited manually in Excel
  7. If you happen to be in need of this for OCLC’s KB, or  are in need of comparing two Excel files, here’s a walkthrough of how to use VLOOKUP to select owned titles from a large OCLC KBART file: http://youtu.be/mUhkMzpPnBE
  8. A nice tutorial on using lambda and filter together can be found here:  http://www.u.arizona.edu/~erdmann/mse350/topics/list_comprehensions.html#filter
  9.  https://wiki.duraspace.org/display/DSDOC18/Batch+Metadata+Editing
  10.  http://docs.evergreen-ils.org/2.3/_migrating_your_bibliographic_records.html

Using the Stripe API to Collect Library Fines by Accepting Online Payment

Recently, my library has been considering accepting library fine payments online. Currently, small fines owed by many patrons are hard to collect. Taken together, the amount is significant, but each individual fine often does not warrant even the cost of the postage and the staff work that go into creating and sending out a fine notice letter. Libraries that are able to collect fines through the bursar’s office of their parent institutions may have a better chance at collecting those fines. However, others can only expect patrons to show up with, or mail, a check to clear their fines. Offering an online payment option for library fines is one way to make the library service more user-friendly to those patrons who are too busy to visit the library in person or to mail a check but are willing to pay online with their credit cards.

If you are new to the world of online payment, there are several terms you need to become familiar with. The following information from the article in SixRevisions is very useful to understand those terms.1

  • ACH (Automated Clearing House) payments: Electronic credit and debit transfers. Most payment solutions use ACH to send money (minus fees) to their customers.
  • Merchant Account: A bank account that allows a customer to receive payments through credit or debit cards. Merchant providers are required to obey regulations established by card associations. Many processors act as both the merchant account as well as the payment gateway.
  • Payment Gateway: The middleman between the merchant and their sponsoring bank. It allows merchants to securely pass credit card information between the customer and the merchant and also between merchant and the payment processor.
  • Payment Processor: A company that a merchant uses to handle credit card transactions. Payment processors implement anti-fraud measures to ensure that both the front-facing customer and the merchant are protected.
  • PCI (the Payment Card Industry) Compliance: A merchant or payment gateway must set up their payment environment in a way that meets the Payment Card Industry Data Security Standard (PCI DSS).

Often, the same company functions as both payment gateway and payment processor, thereby processing the credit card payment securely. Such a product is called an ‘online payment system.’ Meyer’s article cited above also lists 10 popular online payment systems: Stripe, Authorize.Net, PayPal, Google Checkout, Amazon Payments, Dwolla, Braintree, Samurai by FeeFighters, WePay, and 2Checkout. Bear in mind that different payment gateways, merchant accounts, and bank accounts may or may not work together; your bank may or may not work as a merchant account; and your library may or may not have a merchant account. 2

Also note that there are fees for using online payment systems like these and that different systems have different fee structures. For example, Authorize.Net has a $99 setup fee and then charges $20 per month plus a $0.10 per-transaction fee. Stripe charges 2.9% + $0.30 per transaction with no setup or monthly fees. Fees for mobile payment solutions with a physical card reader, such as Square, may go up much higher.

Among various online payment systems, I picked Stripe because it was recommended on the Code4Lib listserv. One of the advantages for using Stripe is that it acts as both the payment gateway and the merchant account. What this means is that your library does not have to have a merchant account to accept payment online. Another big advantage of using Stripe is that you do not have to worry about the PCI compliance part of your website because the Stripe API uses a clever way to send the sensitive credit card information over to the Stripe server while keeping your local server, on which your payment form sits, completely blind to such sensitive data. I will explain this in more detail later in this post.

Below I will share some of the code that I have used to set up Stripe as my library’s online payment option for testing. This may be of interest to you if you are thinking about offering online payment as an option for your patrons or if you are simply interested in how an online payment API works. Even if your library doesn’t need to collect library fines online, an online payment option can be a handy tool for a small-scale fund-raising drive or donation.

The first step to making Stripe work is getting API keys. You do not have to create an account to get API keys for testing, but if you are going to work on your code for more than one day, it’s probably worth getting an account. The Stripe API has excellent documentation. I read the ‘Getting Started’ section and then jumped over to the ‘Examples’ section, which can quickly get you off the ground. (https://stripe.com/docs/examples) I found an example by Daniel Schröter in GitHub from the list of examples in Stripe’s Examples section and decided to test it out. (https://github.com/myg0v/Simple-Bootstrap-Stripe-Payment-Form) Most of the time, getting example code running requires some probing and tweaking, such as downloading all the required libraries, sorting out the paths in the code, and adding API keys. This one required relatively little work.

Now, let’s take a look at the form that this code creates.

The payment form created by the example code.

In order to create a form of my own for testing, I decided to change a few things in the code.

  1. Add Patron & Payment Details.
  2. Allow custom amount for payment.
  3. Change the currency from Euro to US dollars.
  4. Configure the validation for new fields.
  5. Hide the payment form once the charge goes through instead of showing the payment form below the payment success message.

The modified HTML with the new patron and payment detail fields.

Item 4 can be done as follows. The client-side validation is performed by the BootstrapValidator jQuery plugin, so you need to get the syntax right for the code, which now has new fields, to work properly.
The updated validation rules covering the new form fields.

This is the JavaScript that allows you to send the data submitted to your payment form to the Stripe server. First, include the Stripe JS library (line 24). Include jQuery, Bootstrap, the Bootstrap Form Helpers plugin, and the Bootstrap Validator plugin (lines 25-28). The next block of code includes an event handler for the form, which sends the payment information to Stripe via AJAX when the form is submitted. Stripe will validate the payment information and then return a token that identifies this particular transaction.

The JavaScript that submits the payment information to Stripe.
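The original post shows that code only as a screenshot. As a rough sketch of what the handler might look like with the Stripe.js v2 API that was current at the time (the form ID, the data-stripe attributes on the card fields, and the publishable key are all assumptions):

// the publishable (public) API key identifies your Stripe account client-side
Stripe.setPublishableKey('pk_test_YOUR_PUBLISHABLE_KEY');

$('#payment-form').on('submit', function() {
    var $form = $(this);
    // disable the submit button to prevent repeated clicks
    $form.find('button[type="submit"]').prop('disabled', true);
    // send the card fields (marked with data-stripe attributes) to Stripe;
    // our own server never sees the sensitive values
    Stripe.card.createToken($form, stripeResponseHandler);
    // prevent the form from submitting before the token comes back
    return false;
});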

When the token is received, this code calls the stripeResponseHandler() function, which checks whether the Stripe server returned any error upon receiving the payment information and, if no error was returned, attaches the token information to the form and submits the form.

The stripeResponseHandler() function.
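Again shown only as a screenshot in the original post; a sketch of that function might look like this (the error element and form ID are assumptions):

function stripeResponseHandler(status, response) {
    var $form = $('#payment-form');
    if (response.error) {
        // show the error from Stripe and re-enable the submit button
        $form.find('.payment-errors').text(response.error.message);
        $form.find('button[type="submit"]').prop('disabled', false);
    } else {
        // append the single-use token as a hidden field and submit the form
        // to our own server, which never handles the raw card data
        $form.append($('<input type="hidden" name="stripeToken" />').val(response.id));
        $form.get(0).submit();
    }
}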

The server-side PHP script then checks whether the Stripe token has been received and, if so, creates a charge and sends it to Stripe, as shown below. I am using PHP here, but the Stripe API supports many languages other than PHP, such as Ruby and Python, so you have many options. The real payment amount appears here as part of the charge array in line 326. If the charge succeeds, the payment success message is stored in a div to be displayed.

The server-side PHP script that creates the charge.
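The post’s server-side code is PHP, shown above as a screenshot. For a sense of the same step in another language Stripe supports, here is a rough Node.js/Express sketch rather than the author’s code; the amount is in cents, and the secret key and route are placeholders:

var express = require('express');
var bodyParser = require('body-parser');
var stripe = require('stripe')('sk_test_YOUR_SECRET_KEY');

var app = express();
app.use(bodyParser.urlencoded({ extended: false }));

// the payment form posts here with the stripeToken field appended client-side
app.post('/charge', function(req, res) {
    stripe.charges.create({
        amount: 500,                   // e.g., a $5.00 fine, expressed in cents
        currency: 'usd',
        source: req.body.stripeToken,  // the token created by Stripe.js
        description: 'Library fine payment'
    }, function(err, charge) {
        if (err) {
            res.send('Payment failed: ' + err.message);
        } else {
            res.send('Payment succeeded. Thank you!');
        }
    });
});

app.listen(3000);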

The reason why you do not have to worry about PCI compliance with Stripe is that the Stripe API receives the payment information via AJAX, and the input fields for sensitive information do not have the name attribute and value. (See below for the Card Holder Name and Card Number fields as an example.) By omitting the name attribute and value, the local server where the online form sits is deprived of any means to retrieve the information submitted through those input fields. Since sensitive information does not touch the local server at all, PCI compliance for the local server becomes no concern. To clarify, not all fields in the payment form need to be deprived of the name attribute; only the sensitive fields that you do not want your web server to have access to need to be protected this way. Here, for example, I am assigning the name attribute and value to fields such as name and e-mail in order to use them later to send an e-mail receipt.


The Card Holder Name and Card Number fields, which have no name attributes.

Now, the modified form has ‘Fee Category’, custom ‘Payment Amount,’ and some other information relevant to the billing purpose of my library.

The modified payment form.

When the payment succeeds, the page changes to display the following message.

The payment success message.

Stripe provides a number of fake card numbers for testing, so you can test various cases of failure. The Stripe website also displays all payments and the tokens and charges associated with those payments, which greatly helps troubleshooting. One thing I noticed while troubleshooting is that the Stripe logs sometimes lag behind: when a payment succeeds, the associated token and charge may not appear under the “Logs” section immediately. But the payment itself will show up in the log, so you know the associated token and charge will eventually appear there as well.

The list of recent payments in the Stripe dashboard.

Once you are ready to test real payment transactions, you need to flip the switch from TEST to LIVE located in the top left corner. You will also need to replace your API keys for ‘TESTING’ (both secret and public) with those for ‘LIVE’ transactions. One more thing needed before your library can get paid with real money online is setting up SSL (Secure Sockets Layer) for your live online payment page. This is not required for testing but is necessary for processing live payment transactions. It is not very complicated work, so don’t be discouraged at this point. You just have to buy a security certificate and install it on your Web server. Speak to your system administrator about how to get SSL set up for your payment page. More information about setting up SSL can be found in the Stripe documentation.

My library has not yet gone live with this online payment option. Before we do, I may make some more modifications to the code to fit the staff workflow better, which is still being mapped out. I am also planning to place the online payment page behind the university’s Shibboleth authentication in order to cut down on spam and to save library patrons some tedious data entry by pulling their information, such as name, university email, and student/faculty/staff ID number, directly from the campus directory exposed through Shibboleth and automatically inserting it into the payment form fields.

In this post, I have described my experience of testing out the Stripe API as an online payment solution. As I mentioned above, however, there are many other online payment systems out there. Depending on your library’s environment and financial setup, different solutions may work better than others. To me, not having to worry about PCI compliance by using Stripe was a big plus. If your library accepts online payment, please share what solution you chose and what factors led you to that particular online payment system in the comments.

* This post has been based upon my recent presentation, “Accepting Online Payment for Your Library and ‘Stripe’ as an Example”, given at the Code4Lib DC Unconference. Slides are available at the link above.

Notes
  1. Meyer, Rosston. “10 Excellent Online Payment Systems.” Six Revisions, May 15, 2012. http://sixrevisions.com/tools/online-payment-systems/.
  2. Ullman, Larry. “Introduction to Stripe.” Larry Ullman, October 10, 2012. http://www.larryullman.com/2012/10/10/introduction-to-stripe/.