Lazy Consensus and Libraries

Happy feet
Photo courtesy of Flickr user enggul

Librarians, as a rule, don’t tolerate anarchy well. They like things to be organized and to follow processes. But when it comes to emerging technologies, too much reliance on planning and committees can stifle creativity and delay adoption. The open source software community can offer librarians models for how to make progress on big projects with minimal oversight.

“Lazy consensus” is one such model from which librarians can learn a lot. At the Code4Lib conference in February 2012, Bethany Nowviskie of the University of Virginia Scholars' Lab encouraged library development teams to embrace this concept in order to create more innovative libraries. (I encourage you to watch a video or read the text of her keynote.) This goes for all sizes and types of academic libraries, whether they have a development staff or just staff with enthusiasm for learning about emerging technologies.

What is lazy consensus?

According to the Apache Software Foundation:

Lazy Consensus means that when you are convinced that you know what the community would like to see happen you can simply assume that you already have consensus and get on with the work. You don’t have to insist people discuss and/or approve your plan, and you certainly don’t need to call a vote to get approval. You just assume you have the community’s support unless someone says otherwise.
(quote from http://incubator.apache.org/odftoolkit/docs/governance/lazyConsensus.html)

Nowviskie suggests lazy consensus as a way to cope with an institutional culture where “no” is too often the default answer, since in lazy consensus the default answer is “yes.” If someone doesn’t agree with a proposal, he or she must present and defend an alternative within a reasonable amount of time (usually 72 hours). This ensures that the people who really care about a project have a chance to speak up and make sure the project is going in the right direction. By changing the default answer to YES, we make it easier to move forward on the things we really care about.

When you care about delivering the best possible experience and set of services for your library patrons, you should advocate for ways to make that happen and spend your time thinking about how to make that happen. Nowviskie points out the kinds of environments in which this is likely to thrive. Developers and technologists need time for research and development, “20% time” projects, and freedom to explore new possibilities. Even at small libraries without any development staff, librarians need time to research and understand issues of technology in libraries to make better decisions about the adoption of emerging technologies.

Implementing lazy consensus

Implementing lazy consensus in your library must be done with care. First and foremost, you must be aware of the culture you are in and be respectful of it even as you see room for change and improvement. Your first day at a new job is not the moment to implement this process across the board, but in your own work or your department’s work you can set an example and a precedent. Nowviskie provides a few guidelines for healthy lazy consensus. Emphasize working hard and with integrity while being open and friendly. Keep everyone informed about what you are working on, and keep your mission in mind as the centerpiece of your work. In libraries, this means you must keep public services involved in any project from the earliest possible stages, and always maintain a commitment to the best possible user experience. When you or your team reliably deliver good results, you will show the value of the process.

While default negativity can certainly stifle creativity, default positivity for all ideas can be equally stifling. Jonah Lehrer wrote in a recent New Yorker article that the evidence shows that traditional brainstorming, where all ideas are presented to a group without criticism, doesn’t work. Creating better ideas requires critiquing wrong assumptions, which in turn helps us examine our own assumptions. In adopting lazy consensus, make sure there is authentic room for debate. Responding to a disagreement about a course of action with reasoned critique and alternate paths is more likely to result in creative ideas, and brings the discussion forward rather than ending it with a “no.”

Librarians know a lot about information and people. The open source software community knows a lot about how to run flexible and transparent organizations. Combining the two can create wonderful experiences for our users.

Getting Creative with Data Visualization: A Case Study

The Problem

At the NCSU Libraries, my colleagues and I in the Research and Information Services department do a fair bit of instruction, especially to classes from the university’s First Year Writing Program. Some new initiatives and outreach have significantly increased our instruction load, to the point where it was becoming difficult to effectively cover all the requested sessions within the practical limits of our schedules. By way of a solution, we wanted to train some of our grad assistants, who (at the time of this writing) are all library/information science students from that school down the road, in the dark arts of basic library instruction, to help spread the instruction burden out a little.

This would work great, but there’s a secondary problem: since UNC is a good 40-minute drive away, our grad assistants tend to have very rigid schedules, which are fixed well in advance — so we can’t just alter our grad assistants’ schedules on short notice to have them cover a class. Meanwhile, instruction scheduling is very haphazard, due to wide variation in how course slots are configured in the weekly calendar, so it can be hard to predict when instruction requests are likely to be scheduled. What we need is a technique to maximize the likelihood that a grad student’s standing schedule will overlap with the timing of instruction requests that we do get — before the requests come in.

Searching for a Solution – Bar graph-based analysis

The obvious solution was to try to figure out when during the day and week we provided library instruction most frequently. If we could figure this out, we could work with our grad students to get their schedules to coincide with these busy periods.

Luckily, we had some accrued data on our instructional activity from previous semesters. This seemed like the obvious starting point: look at when we taught previously and see what days and times of day were most popular. The data consisted of about 80 instruction sessions given over the course of the prior two semesters; data included date, day of week, session start time, and a few other tidbits. The data was basically scraped by hand from the instruction records we maintain for annual reports; my colleague Anne Burke did the dirty work of collecting and cleaning the data, as well as the initial analysis.

Anne’s first pass at analyzing the data was to look at each day of the week in terms of courses taught in the morning, afternoon, and evening. A bit of hand-counting and spreadsheet magic produced this:

Instruction session count by day of week and time of day, Spring 2010-Fall 2011

This chart was somewhat helpful — certainly it’s clear that Monday, Tuesday and Thursday are our busiest days — but it doesn’t provide a lot of clarity regarding times of day that are hot for instruction. Other than noting that Friday evening is a dead time (hardly a huge insight), we don’t really get a lot of new information on how the instruction sessions shake out throughout the week.
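For the record, the hand count amounts to a simple bucketing step. Here is a rough Python sketch of that counting; the record layout and the bucket cutoffs (noon, 5pm) are my reconstruction for illustration, not Anne's actual spreadsheet formulas:

```python
# Hypothetical reconstruction of the day-of-week / time-of-day bucketing.
# The bucket boundaries (noon, 5pm) are assumptions for illustration.

def time_bucket(start_hour):
    if start_hour < 12:
        return "morning"
    if start_hour < 17:
        return "afternoon"
    return "evening"

def count_sessions(sessions):
    """Count (day, bucket) pairs from (day, start_hour) records."""
    counts = {}
    for day, start_hour in sessions:
        key = (day, time_bucket(start_hour))
        counts[key] = counts.get(key, 0) + 1
    return counts

sessions = [("Mon", 9.0), ("Mon", 13.5), ("Thu", 10.0), ("Thu", 18.0)]
counts = count_sessions(sessions)  # e.g. {("Mon", "morning"): 1, ...}
```

The coarseness of those three buckets is exactly the problem discussed below: everything from 8am to noon collapses into a single count.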

Let’s Get Visual – Heatmap-based visualization

The chart above gets the fundamentals right — since we’re designing weekly schedules for our grad assistants, it’s clear that the relevant dimensions are days of week and times of day. However, there are basically two problems with the stacked bar chart approach: (1) The resolution of the stacked bars — morning, afternoon and evening — is too coarse. We need to get more granular if we’re really going to see the times that are popular for instruction; (2) The stacked bar chart slices just don’t fit our mental model of a week. If we’re going to solve a calendaring problem, doesn’t it make a lot of sense to create a visualization that looks like a calendar?

What we need is a matrix — something where one dimension is the day of the week and the other dimension is the hour of the day (with proportional spacing) — just like a weekly planner. Then for any given hour, we need something to represent how “popular” that time slot is for instruction. It’d be great if we had some way for closely clustered but non-overlapping sessions to contribute “weight” to each other, since it’s not guaranteed that instruction session timing will coincide precisely.

When I thought about analyzing the data in these terms, the concept of a heatmap immediately came to mind. A heatmap is a tool commonly used to look for areas of density in spatial data. It’s often used for mapping click or eye-tracking data on websites, to develop an understanding of the areas of interest on the website. A heatmap’s density modeling works like this: each data point is mapped in two dimensions and displayed graphically as a circular “blob” with a small halo effect; in closely-packed data, the blobs overlap. Areas of overlap are drawn with more intense color, and the intensity effect is cumulative, so the regions with the most intense color correspond to the areas of highest density of points.
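The accumulation logic can be sketched in a few lines. This is a simplified illustration of the idea, not ClickHeat's actual code; the linear falloff and the blob radius are assumptions:

```python
# Simplified heatmap accumulation: each point deposits a circular blob
# whose halo fades linearly to zero at `radius`; overlapping blobs add
# together, so dense clusters end up with the highest intensity.

def render_heatmap(points, width, height, radius=3):
    grid = [[0.0] * width for _ in range(height)]
    for px, py in points:
        for y in range(max(0, py - radius), min(height, py + radius + 1)):
            for x in range(max(0, px - radius), min(width, px + radius + 1)):
                d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
                if d <= radius:
                    grid[y][x] += 1.0 - d / radius  # linear halo falloff
    return grid

# Two nearby points reinforce each other; an isolated one does not.
grid = render_heatmap([(5, 5), (6, 5), (15, 5)], width=20, height=10)
assert grid[5][5] > grid[5][15]
```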

I had heatmaps on the brain since I had just used them extensively to analyze user interaction patterns with a touchscreen application that I had recently developed.

Heatmap example from my previous work, tracking touches on a touchscreen interface. The heatmap is overlaid onto an image of the interface.

Part of my motivation for using heatmaps to solve our scheduling problem was simply to use the tools I had at hand: it seemed that it would be a simple matter to convert the instruction data into a form that would be amenable to modeling with the heatmap software I had access to. But in a lot of ways, a heatmap was a perfect tool: with a proper arrangement of the data, the heatmap’s ability to model intensity would highlight the parts of each day where the most instruction occurred, without having to worry too much about the precise timing of instruction sessions.

The heatmap generation tool that I had was a slightly modified version of the Heatmap PHP class from LabsMedia’s ClickHeat, an open-source tool for website click tracking. My modified version of the heatmap package takes in an array of (x,y) ordered pairs, corresponding to the locations of the data points to be mapped, and outputs a PNG file of the generated heatmap.

So here was the plan: I would convert each instruction session in the data to a set of (x,y) coordinates, with one coordinate representing day of week and the other representing time of day. Feeding these coordinates into the heatmap software would, I hoped, create five colorful swatches, one for each day of the week. The brightest regions in the swatches would represent the busiest times of the corresponding days.

Arbitrarily, I selected the y-coordinate to represent the day of the week. So I decided that any Monday slot, for instance, would be represented by some small (but nonzero) y-coordinate, with Tuesday represented by some greater y-coordinate, etc., with the intervals between consecutive days of the week equal. The main concern in assigning these y-coordinates was for the generated swatches to be far enough apart so that the heatmap “halo” around one day of the week would not interfere with its neighbors — we’re treating the days of the week independently. Then it was a simple matter of mapping time of day to the x-coordinate in a proportional manner. The graphic below shows the output from this process.
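In code, the conversion looked something like this Python sketch (the real conversion fed a PHP class, and the specific pixel values here are illustrative rather than the ones I actually used):

```python
# Illustrative (day, start time) -> (x, y) mapping. The y-coordinates are
# spaced far enough apart that one day's halo can't bleed into another
# day's row; time of day maps proportionally onto x.

DAY_Y = {"Mon": 40, "Tue": 120, "Wed": 200, "Thu": 280, "Fri": 360}
DAY_START, DAY_END = 8.0, 19.5   # roughly 8am to 7:30pm, as in the graphic
IMAGE_WIDTH = 800                # assumed pixel width of the output

def session_to_point(day, start_hour):
    frac = (start_hour - DAY_START) / (DAY_END - DAY_START)
    return (round(frac * IMAGE_WIDTH), DAY_Y[day])

# A Monday 1pm session and a Thursday 9:30am session:
points = [session_to_point("Mon", 13.0), session_to_point("Thu", 9.5)]
```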

Raw heatmap of instruction data as generated by heatmap software

In this graphic, days of the week are represented by the horizontal rows of blobs, with Monday as the first row and Friday as the last. The leftmost extent of each row corresponds to approximately 8am, while the rightmost extent is about 7:30pm. The key in the upper left indicates (more or less) the number of overlapping data points in a given location. A bit of labeling helps to clarify things:

Heatmap of instruction data, labeled with days of week and approximate indications of time of day.

Right away, we get a good sense of the shape of the instruction week. This presentation reinforces the findings of the earlier chart: that Monday, Tuesday, and Thursday are busiest, and that Friday afternoon is basically dead. But we do see a few other interesting tidbits, which are visible to us specifically through the use of the heatmap:

  • Monday, Tuesday and Thursday aren’t just busy, they’re consistently well-trafficked throughout the day.
  • Friday is really quite slow throughout.
  • There are a few interesting hotspots scattered here and there, notably first thing in the morning on Tuesday.
  • Wednesday is quite sparse overall, except for two or three prominent afternoon/evening times.
  • There is a block of late afternoon-early evening time-slots that are consistently busy in the first half of the week.

Using this information, we can take a much more informed approach to scheduling our graduate students, and hopefully be able to maximize their availability for instruction sessions.

“Better than it was before. Better. Stronger. Faster.” – Open questions and areas for improvement

As a proof of concept, this approach to analyzing our instruction data for the purposes of setting student schedules seems quite promising. We used our findings to inform our scheduling of graduate students this semester, but it’s hard to know whether our findings can even be validated: since this is the first semester where we’re actively assigning instruction to our graduate students, there’s no data available to compare this semester against, with respect to amount of grad student instruction performed. Nevertheless, it seems clear that knowledge of popular instruction times is a good guideline for grad student scheduling for this purpose.

There’s also plenty of work to be done as far as data collection and analysis is concerned. In particular:

  • Data curation by hand is burdensome and inefficient. If we can automate the data collection process at all, we’ll be in a much better position to repeat this type of analysis in future semesters.
  • The current data analysis completely ignores class session length, which is an important factor for scheduling (class times vary between 50 and 100 minutes). This data is recorded in our instruction spreadsheet, but there aren’t any set guidelines on how it’s entered — librarians entering their instruction data tend to round to the nearest quarter- or half-hour increment at their own preference, so a 50-minute class is sometimes listed as “.75 hours” and other times as “1 hour”. More accurate and consistent session time recording would allow us to reliably use session length in our analysis.
  • To make the best use of session length in the analysis, I’ll have to learn a little bit more about PHP’s image generation libraries. The current approach is basically a plug-in adaptation of ClickHeat’s existing Heatmap class, which is only designed to handle “point” data. To modify the code to treat sessions as little line segments corresponding to their duration (rather than points that correspond to their start times) would require using image processing methods that are currently beyond my ken.
  • A bit better knowledge of the image libraries would also allow me to add automatic labeling to the output file. You’ll notice the prominent use of “ish” to describe the hours dimension of the labeled heatmap above: this is because I had neither the inclination nor the patience to count pixels to determine where exactly the labels should go. With better knowledge of the image libraries I would be able to add graphical text labels directly to the generated heatmap, at precisely the correct location.
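Until then, one low-tech workaround (purely hypothetical, and sketched in Python rather than PHP) would be to approximate each session as a run of points sampled along its duration, so that a 100-minute class lays down roughly twice the weight of a 50-minute one without requiring any new drawing primitives:

```python
# Hypothetical duration handling: emit a row of closely spaced points
# spanning the session's length, instead of one point at its start time.
# The pixels-per-hour scale and sampling step are arbitrary choices.

def session_to_points(x_start, y, duration_min, px_per_hour=70, step_px=5):
    length_px = round(duration_min / 60 * px_per_hour)
    return [(x_start + dx, y) for dx in range(0, length_px + 1, step_px)]

short = session_to_points(100, 40, duration_min=50)
long_ = session_to_points(100, 40, duration_min=100)
assert len(long_) > len(short)  # longer classes contribute more weight
```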

There are other fundamental questions that may be worth answering — or at least experimenting against — as well. For instance, in this analysis I used data about actual instruction sessions performed. But when lecturers request library sessions, they include two or three “preferred” dates, of which we pick the one that fits our librarian and room schedules best. For the purposes of analysis, it’s not entirely clear whether we should use the actual instruction data, which takes into account real space limitations but is also skewed by librarian availability; or whether we should look strictly at what lecturers are requesting, which might allow us to schedule our grad students in a way that could accommodate lecturers’ first choices better, but which might run us up against the library’s space limitations. In previous semesters, we didn’t store the data on the requests we received; this semester we’re doing that, so I’ll likely perform two analyses, one based on our actual instruction and one based on requests. Some insight might be gained by comparing the results of the two analyses, but it’s unclear what exactly the outcome will be.

Finally, it’s hard to predict how long-term trends in the data will affect our ability to plan for future semesters. It’s unclear whether prior semesters are a good indicator of future semesters, especially as lecturers move into and out of the First Year Writing Program, the source of the vast majority of our requests. We’ll get a better sense of this, presumably, as we perform more frequent analyses — it would also make sense to examine each semester separately to look for trends in instruction scheduling from semester to semester.

In any case, there’s plenty of experimenting left to do and plenty of improvements that we could make.

Reflections and Lessons Learned

There are a few big points that I took away from this experience. A big one is simply that sometimes the right approach is a totally unexpected one. You can gain some interesting insights if you don’t limit yourself to the tools that are most familiar for a particular problem. Don’t be afraid to throw data at the wall and see what sticks.

Really, what we did in this case is not so different from creating separate histograms of instruction times for each day of the week, and comparing the histograms to each other. But using heatmaps gave us a couple of advantages over traditional histograms: first, our bin size is essentially infinitely narrow; because of the proximity effects of the heatmap calculation, nearby but non-overlapping data points still contribute weight to each other without us having to define bins as in a regular histogram. Second, histograms are typically drawn in two dimensions, which would make comparing them against each other rather a nuisance. In this case, our separate heatmap graphics for each day of the week are basically one-dimensional, which allows us to compare them side by side with little fuss. This technique could be used for side-by-side examinations of multiple sets of any histogram-like data for quick and intuitive at-a-glance comparison.
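A toy Python comparison makes the difference concrete (the triangular kernel and half-hour bandwidth here are arbitrary illustrations, not what the heatmap software actually computes):

```python
# A fixed-bin histogram vs. a "binless" smoothed density: two sessions
# 20 minutes apart land in different one-hour bins, but reinforce each
# other when each point spreads weight over its neighborhood.

def histogram(times, bin_width=1.0):
    counts = {}
    for t in times:
        b = int(t // bin_width)
        counts[b] = counts.get(b, 0) + 1
    return counts

def smooth_density(times, grid, bandwidth=0.5):
    # triangular kernel: weight fades linearly to zero at `bandwidth` hours
    return [sum(max(0.0, 1 - abs(g - t) / bandwidth) for t in times)
            for g in grid]

times = [8.83, 9.17]                 # ~8:50am and ~9:10am classes
assert histogram(times)[8] == histogram(times)[9] == 1   # split across bins
assert smooth_density(times, [9.0])[0] > smooth_density(times, [8.5])[0]
```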

In particular, it’s important to remember — especially if your familiarity with heatmaps is already firmly entrenched in a spatial mapping context — that data doesn’t have to be spatial in order to be analyzed with heatmaps. This is really just an extension of the idea of graphical data analysis: A heatmap is just another way to look at arbitrary data represented graphically, not so different from a bar graph, pie chart, or scatter plot. Anything that you can express in two dimensions (or even just one), and where questions of frequency, density, proximity, etc., are relevant, can be analyzed using the heatmap approach.

A final point: as an analysis tool, the heatmap is really about getting a feel for how the data lies in aggregate, rather than getting a precise sense of where each point falls. Since the halo effect of a data point extends some distance away from the point, the limits of influence of that point on the final image are a bit fuzzy. If precision analysis is necessary, then heatmaps are not the right tool.

About our guest author: Andreas Orphanides is Librarian for Digital Technologies and Learning in the Research and Information Services department at NCSU Libraries. He holds an MSLS from UNC-Chapel Hill and a BA in mathematics from Oberlin College. His interests include instructional design, user interface development, devising technological solutions to problems in library instruction and public services, long walks on the beach, and kittens.

Action Analytics

What is Action Analytics?

If you say “analytics” to most technology-savvy librarians, they think of Google Analytics or similar web analytics services. Many libraries are using such sophisticated data collection and analyses to improve the user experience on library-controlled sites. But the standard library analytics are retrospective: what have users done in the past? Have we designed our web platforms and pages successfully, and where do we need to change them?

Technology is enabling a different kind of future-oriented analytics. Action Analytics is evidence-based, combines data sets from different silos, and uses actions, performance, and data from the past to provide recommendations and actionable intelligence meant to influence future actions at both the institutional and the individual level. We’re familiar with these services in library-like contexts such as Amazon’s “customers who bought this item also bought” book recommendations and Netflix’s “other movies you might enjoy”.

BookSeer β by Apt

Action Analytics in the Academic Library Landscape

It was a presentation by Mark David Milliron at Educause 2011 on “Analytics Today: Getting Smarter About Emerging Technology, Diverse Students, and the Completion Challenge” that made me think about the possibilities of the interventionist aspect of analytics for libraries. He described the complex dependencies between inter-generational poverty transmission, education as a disrupter, drop-out rates for first-generation college students, and other factors such as international competition and the job market. Then he moved on to the role of sophisticated analytics and data platforms, and spoke about how they can help individual students succeed by using technology to deliver the right resource at the right time to the right student. Where do these sorts of analytics fit into the academic library landscape?

If your library is like my library, the pressure to prove your value to strategic campus initiatives such as student success and retention is increasing. But assessing services with most analytics is past-oriented; how do we add the kind of library analytics that provide a useful intervention or recommendation? These analytics could be designed to help an individual student choose a database, or trigger a recommendation to dive deeper into reference services like chat reference or individual appointments. We need to design platforms and technology that can integrate data from various campus sources, do some predictive modeling, and deliver a timely text message to an English 101 student that recommends using these databases for the first writing assignment, or suggests an individual research appointment with the appropriate subject specialist (and a link to the appointment scheduler) to every honors student a month into their thesis year.

Ethyl Blum, librarian

Privacy Implications

But should we? Are these sorts of interventions creepy and stalker-ish?* Would this be seen as an invasion of privacy? Does the use of data in this way collide with the profession’s ethical obligation and historical commitment to keep individual patrons’ reading, browsing, or viewing habits private?

Every librarian I’ve discussed this with felt the same unease. I’m left with a series of questions: Have technology and online data gathering changed the context and meaning of privacy in such fundamental ways that we need to take a long hard look at our assumptions, especially in the academic environment? (Short answer — yes.)  Are there ways to manage opt-in and opt-out preferences for these sorts of services so these services are only offered to those who want them? And does that miss the point? Aren’t we trying to influence the students who are unaware of library services and how the library could help them succeed?

Furthermore, are we modeling our ideas of “creepiness” and our adamant rejection of any “intervention” on the face-to-face model of the past that involved a feeling of personal surveillance and possible social judgment by live flesh persons?  The phone app Mobilyze helps those with clinical depression avoid known triggers by suggesting preventative measures. The software is highly personalized and combines all kinds of data collected by the phone with self-reported mood diaries. Researcher Colin Depp observes that participants felt that the impersonal advice delivered via technology was easier to act on than “say, getting advice from their mother.”**

While I am not suggesting in any way that libraries move away from face-to-face, personalized encounters at public service desks, is there room for another model for delivering assistance? A model that some students might find less intrusive, less invasive, and more effective — precisely because it is technological and impersonal? And given the struggle that some students have to succeed in school, and the staggering debt that most of them incur, where exactly are our moral imperatives in delivering academic services in an increasingly personalized, technology-infused, data-dependent environment?

Increasingly, health services, commercial entities, and technologies such as browsers and social networking environments that are deeply embedded in most people’s lives use these sorts of action analytics to allow the remote monitoring of our aging parents, sell us things, and match us with potential dates. Some of these uses are for the benefit of the user; some are for the benefit of the data gatherer. The moment from the Milliron presentation that really stayed with me was the poignant question that a student in a focus group asked him: “Can you use information about me…to help me?”

Can we? What do you think?

* For a recent article on academic libraries and Facebook that addresses some of these issues, see Nancy Kim Phillips, Academic Library Use of Facebook: Building Relationships with Students, The Journal of Academic Librarianship, Volume 37, Issue 6, December 2011, Pages 512-522, ISSN 0099-1333, 10.1016/j.acalib.2011.07.008. See also a very recent New York Times article on use of analytics by companies which discusses the creepiness factor.


Design in Libraries


Photo by kvanhorn on Flickr, some rights reserved

What is the importance of design in libraries?

Libraries have users, and those users go through an experience, whether they walk through the doors and into the building or use the library’s online website and resources.

Why should we care?

  • Design can give users a good experience or a bad one.

Think about when you go to a store. Any store. The experience you have can be improved by design, or hindered by it. For example, if you go to a store in person and are trying to find a particular item, signs have the potential to help you find that item. But what if there are too many signs? Or what if those signs are unreadable?

Think about when you are at a restaurant. You certainly want to have an enjoyable dining experience. From the time you walk through the door to the time you leave, everything you encounter and experience has an impact on you. When you sit down to the table and open the menu, you want to be able to easily read the menu items. Have you ever been to a restaurant where it was hard to decide on what to order because the menu was difficult to read? This seems to be a common occurrence, yet a simple well thought out menu can change that experience entirely. And great design can even give people an emotional experience that they remember deeply. Think about a time when you went to a restaurant and had a great experience; think about what details made that experience great.

  • Now take these concepts and redirect them to libraries.

When a user walks in the door of the library, what is their experience? Put yourself in the user’s shoes. Are signs unfriendly or hard to read? Is it difficult to find what they might be looking for? Think of design as a way of providing service to users. Good design takes training and study; however, within a relatively small timeframe, anyone can understand design basics and fundamentals well enough to create decent design that communicates. Librarians are good at organizing things, and within design lies organization. Designing is simply organization and choices about elements such as typography, composition, contrast, and color, to name a few.

  • Design is also about restraint; what you don’t do.

This is an important distinction, because people often get excited when they explore elements of design and want to put everything they love into one design. Often this doesn’t work very well, and it comes back to making good choices, which sometimes means leaving out an element you really love but that doesn’t work in the overall design you are building. That’s okay: designers collect like librarians do, and we just save that good stuff for another design where it will really work well. Keeping a little library of saved elements and inspirations is part of being a good designer, whether online or on paper; both practices are good habits to get into. The tool Pinterest serves this purpose well, but any tool or method for collecting design elements and inspiration is good practice. When you are looking for ideas, don’t forget to go back to that collection for new ideas or to help get you thinking in new ways.

  • Good design in libraries leads to a quality user experience.

This concept of design can extend to all kinds of user experiences in the library, including the layout of a room, the library’s web presence, the building’s architecture, furniture choices, marketing materials, and more. But let’s start simple: start in your library and examine what you have for signs. Ask this: what does this sign communicate to our user? How does it look and feel to you? If this sign were in a store where you were making a purchase, how would it communicate in that scenario? Ask users what they think of your signs as well. Developing an understanding of what works and what doesn’t will only lead to better design and thus a better user experience.


Hot Topic: Infographics

Do you know what an infographic is? Infographics are visual representations of facts, tutorials, or other data-centered information that inform while staying focused on great design.

Here’s an example of one about the history of the iPad:

iPad infographic via http://killerinfographics.submitinfographics.com/

This infographic takes a whole mess of data and makes it visually interesting and easy to digest.

So, what do infographics have to do with libraries? Libraries have tons of data, both informational and instructional, ranging from local history facts to how to do research. Take a look at this Google infographic recently posted on the HackCollege site: http://www.hackcollege.com/blog/2011/11/23/infographic-get-more-out-of-google.html

A snippet:

Google infographic via http://www.hackcollege.com/

This image highlights several complex research skills, explaining the thought process behind each in one easy-to-understand sentence, all while being attractive and compelling to look at. What’s better than that?

Great examples of infographics can be found across the web. Wired magazine, for one, often uses them, and Holy Kaw!, Guy Kawasaki’s website (http://holykaw.alltop.com), also highlights great infographics from other sites. Another great place to see examples of different types of infographics is http://killerinfographics.submitinfographics.com/.

The importance of infographics and other great visualizations of data (see Warby Parker’s 2011 Annual Report for the best annual report ever: http://www.warbyparker.com/annual-report-2011) to libraries is obvious. People respond to great design, and great design makes information accessible and inviting. It is in our best interests to strive for great design in all that we do, to make libraries accessible and inviting.

Recently, Sophie Brookover, New Jersey librarian, posted in the ALA Think Tank Facebook page (http://www.facebook.com/groups/ALAthinkTANK/) about starting a group of librarians learning to create infographics, much like the Code Year project. This idea is very much in the early stages, but keep an eye on it or get involved- good things are sure to come.

I Want to Learn to Code, but…

"Coder Rage" by librarykitty, some rights reserved.

You may have seen people posting that they are learning to code with CodeYear, mentioned in our earlier blog post “Tips for Everyone Doing the #codeyear”.  While CodeYear and Codecademy are not the first sites to teach programming, CodeYear has seen quite a bit of marketing and notice, especially in the library world (#libcodeyear and #catcode).

Many find themselves, however, in a familiar situation when learning to code. It starts with the person saying or thinking, “I want to learn to code, but…”

Do you fall under any of these categories?

1. “I don’t have enough time to learn coding.”

You can work through the time issue in two ways. The first is to block off time. Look at your schedule and decide, for example, “OK, I’m working on my coding lesson between 1 and 2pm.” Once you’ve made that decision, tell the rest of the world, so that everyone knows you’re working on learning something during that time.

For some folks, though, blocking off an hour may be impossible due to disruptions from work or personal life. When you’re in a situation where frequent disruptions are a fact of life, documentation is your friend. Keep notes of what you learned, what questions you have, what issues you ran across, and so on – this will make sure that you do not end up having to repeat a lesson, or losing track of your thoughts during a lesson.

2.  “This is too hard.”

Here I must stress one of the key survival traits for people learning to code: ask questions! Find people who are taking the same lesson and ask. Find coders and ask. Find an online forum and ask. Post your question on Twitter, Facebook, blog, or any other broadcasting medium. Just ASK.

More often than not your question will be answered, or you will be pointed in the right direction. The overused saying “there is no such thing as a stupid question” applies here. Coding is a community activity, and it’s to your benefit to approach it as such.

3. “I don’t like the tutorial/course.”

It’s OK to say “hey, this course isn’t what I thought it would be” or “hey, I’m not finding this course useful.” Ask yourself, “in which environment do I feel like I learn the most?” Is it a physical classroom? A virtual classroom? Do you like learning on your own? With a small group of friends? With a large group?  There are various formats and venues where you can find courses in coding, from credit-earning classes to how-to books. For example, the Catcode wiki lists a variety of coding lessons or learning opportunities at various levels of coding knowledge. Choose the one (or a few) that will fit best with you, and go for it. It might take a few tries, but you will find something that works for you.

So, if you find yourself saying “I want to learn to code, but…,” there is hope for you yet.

Find what’s holding you back, tackle it, and work out a possible solution. If you don’t get it the first time, that’s OK. It’s OK to fail, as long as you learn and understand why you failed and apply that lesson to future attempts. For now, we are stuck learning to code the hard way: practice, practice, practice.

The hard way, though, is not so hard once you have taken the first few steps.

The Start-Up Library

“Here’s an analogy. The invention of calculus was shocking because for a long time it had simply been presumed that you couldn’t divide by zero. The integrity of math itself seemed to depend on the presumption. Then some genius titans came along and said, “Yeah, maybe you can’t divide by zero, but what would happen if you “could”? We’re going to come as close to doing it as we can, to see what happens.” – David Foster Wallace*

What if a library operated more like an Internet start-up and less like a library?

To be a library in the digital era is to steward legacy systems and practices of an era long past. Contemporary librarianship is at its worst when it accepts poorly crafted vended services and offers poorly thought-out service models, simply because this is the way we have always operated.

Internet start-ups in the 2010s are heavily built around software as a service. The online presence of an Internet start-up is of foundational concern, since it isn’t simply a “presence”: the online environment is the start-up’s only environment.

Search services would act and look contemporary

If we were an Internet start-up, we wouldn’t use instructional services as a crutch to compensate for poor design in our catalogs and other discovery layers. We wouldn’t accept the poorly designed vendor databases we currently accept. We would ask for interfaces that act and look contemporary, and if vendors did not deliver, we would build our own. And we would do this on 30-day timelines, not the six-month or multi-year rollouts that are the current lamentable state of library software services.

Students today will look at a traditional library catalog search box and say, “that looks very 90s.” We shouldn’t be amused by that comment, unless of course we are trying to look 20 years out of date.

We would embrace perpetual beta.

If libraries thought of their software services more like Internet start-ups do, we would not be so cautious; we would perpetually improve and innovate in our software offerings. Think of the technology giants Google and Apple: they are never content to rest on their laurels. Every day they get up and invent like their lives depended on it. Do we?

We wouldn’t settle.

For years we’ve accepted legacy ILS systems. We need to move away from the status quo: the way things have always been done is not the way we should always work. If the information environment has changed, shouldn’t that be reflected in the library’s software services?

We would be bold.

We need a massive re-wiring of the way we think about software as a service in libraries; we are smarter and better than mediocrity.

The notion of software services in libraries may be dramatically improved if we thought of our gateways and virtual experiences more like Internet start-ups conceptualize their do-or-die services, which are seemingly made more effective and efficient every thirty to sixty days.

If Internet start-ups ran their web services the way libraries contentedly run legacy systems, they would surely fold, or, more likely, never have attracted seed funding in the first place. Let’s do our profession a favor and turn the lights out on the library way of running libraries. Let’s run our library as if it were an Internet start-up.

___

* also: “… this purely theoretical construct wound up yielding incredibly practical results. Suddenly you could plot the area under curves and do rate-change calculations. Just about every material convenience we now enjoy is a consequence of this “as if.” But what if Leibniz and Newton had wanted to divide by zero only to show jaded audiences how cool and rebellious they were? It’d never have happened, because that kind of motivation doesn’t yield results. It’s hollow. Dividing-as-if-by-zero was titanic and ingenuous because it was in the service of something. The math world’s shock was a price they had to pay, not a payoff in itself.” – David Foster Wallace

The End of Academic Library Circulation?

What Library Circulation Data Shows

Unless current patterns change, by 2020 university libraries will no longer have circulation desks. This claim may seem hyperbolic if you’ve been observing your library, or even if you’ve been glancing over ACRL or National Center for Education Statistics data. If you have been looking at the data, you might be familiar with a pattern that looks like this:

total circulation

This chart shows total circulation for academic libraries, and while there’s a decline, it certainly doesn’t look like it will hit zero anytime soon, definitely not in just 8 years. But there is a problem with this data and this perspective on library statistics. When we talk about “total circulation” we’re talking about a property of the library; we’re not really thinking about users.

Here’s another set of data that you need to look at to really understand circulation:
fall enrollments

Academic enrollment has been rising rapidly. This means more students, which in turn means greater circulation. So if total circulation has been dropping despite an increase in users, then something else must be going on. Rather than asking “How many items does my library circulate?” we need to ask “How many items does the average student check out?”
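The reframing is simple arithmetic: divide total circulation by FTE enrollment. A quick sketch in JavaScript, using made-up figures rather than the actual NCES data, shows why the per-student view matters:

```javascript
// Circulation per FTE student: total checkouts divided by enrollment.
// The numbers below are illustrative only, not survey data.
function circulationPerStudent(totalCirculation, fteEnrollment) {
  return totalCirculation / fteEnrollment;
}

// A library whose total circulation holds perfectly steady while
// enrollment grows still shows a per-student decline:
var year2000 = circulationPerStudent(150000, 10000); // 15 checkouts/student
var year2010 = circulationPerStudent(150000, 15000); // 10 checkouts/student
```

A flat "total circulation" line can therefore hide a one-third drop in what the average student actually checks out.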

Here is that data:

circulation per student

This chart shows the upper and lower quartiles and the median for circulation per FTE student. As you can see, this data shows a much more dramatic drop in the circulation of library materials; rising student populations have been hiding it.

xkcd
[source: http://xkcd.com/605/]

But 2020? Can I be serious? The simple linear regression model in the charts is probably a good predictor for 2012, but not necessarily for 2020; hitting zero without flattening out seems pretty unlikely. However, it is worth noting that circulation per user in the lower quartile for less-than-four-year colleges reached 1.1 in 2010. If you’re averaging around 1 item per user, every user who takes out 2 items means there’s another who has checked out 0.
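A least-squares line like the one in the charts is easy to sketch. The points below are invented (year, checkouts-per-student) pairs, not the real survey data; the purpose is to show that "hits zero in year X" is a property of the fitted line, not a fact about libraries:

```javascript
// Ordinary least-squares fit of y = slope * x + intercept.
// The data points are hypothetical, chosen only to illustrate
// the shape of the extrapolation argument.
function linearFit(points) {
  var n = points.length, sx = 0, sy = 0, sxx = 0, sxy = 0;
  points.forEach(function (p) {
    sx += p[0]; sy += p[1]; sxx += p[0] * p[0]; sxy += p[0] * p[1];
  });
  var slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  return { slope: slope, intercept: (sy - slope * sx) / n };
}

var fit = linearFit([[1996, 20], [2000, 17], [2004, 14], [2008, 11]]);

// The fitted line crosses zero at -intercept / slope (just after 2022
// for these points), but nothing guarantees real behavior stays linear
// that far beyond the observed data.
var zeroYear = -fit.intercept / fit.slope;
```

That caveat is exactly why the model is a fair predictor one or two years out but a shaky one for 2020.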

What’s Happening Here?

Rather than waste too much time trying to predict a future we’ll live in in less than a decade, let’s explore the more interesting question: “What’s happening here?”

By far the number one hypothesis I hear when I show people this data is “Clearly this is just because of the rise of e-journals and e-books.” The hypothesis is reasonable: users have simply switched from print to electronic, and the data represents a shift in media, nothing more.

But there are two very large problems with this hypothesis.

First, print journal circulation is not universal among academic libraries. Where print journals don’t circulate, e-journals can have no effect on circulation data, and I don’t have figures for exactly how many academic libraries did circulate print journals. Perhaps, then, the effect of e-journals on just the libraries that circulate serials could skew the data for everyone. But the quartile breakdown resolves this issue: libraries that circulated serials would have higher circulation per user than those that did not, so showing different quartiles lets us separate the two groups. If you look at the data, you’ll see that the upper quartile does seem to have a somewhat steeper decline, but not nearly enough to validate the hypothesis. The median and lower quartiles experience the same shift, so something else must be at work.
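The quartile argument can be made concrete with a small sketch. The per-library circulation rates below are invented; the point is that quartiles summarize the top, middle, and bottom of the distribution separately, so a decline confined to high-circulating serial libraries would show up in the upper quartile alone:

```javascript
// Median and quartiles of a numeric sample, using the simple
// "median of halves" method. Input values are hypothetical
// circulation-per-FTE figures for eight libraries.
function median(sorted) {
  var mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

function quartiles(values) {
  var sorted = values.slice().sort(function (a, b) { return a - b; });
  var mid = Math.floor(sorted.length / 2);
  var lowerHalf = sorted.slice(0, mid);
  var upperHalf = sorted.slice(sorted.length % 2 === 1 ? mid + 1 : mid);
  return { q1: median(lowerHalf), q2: median(sorted), q3: median(upperHalf) };
}

// Serial-circulating libraries cluster at the high end (9, 12, 15).
// If only they declined, q3 would fall while q1 and q2 held steady.
var q = quartiles([2, 3, 4, 5, 6, 9, 12, 15]);
```

Since the actual data shows all three quartiles falling together, the decline cannot be confined to the serial-circulating group.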

Second, e-books were not widely adopted until the mid-2000s, yet the decline before 2000 is at least as steep as the decline after. If you look at the chart below, you’ll notice that e-book acquisition rates did not exceed print until 2010:

ebooks vs print

Ebooks, of course, do have an effect on usage, but they’re not the primary factor in this change.

So clearly we must reject the hypothesis that this is merely a media shift. Certainly the shift from print to electronic has had some effect, but it is not the sole cause. If it’s not a shift in media, the most reasonable explanation is that it’s a shift in user behavior.  Students are simply not using books (in any format) as much as they used to.

What is Causing this Shift in User Behavior?

The next question is what is the cause of this shift.

I think the simplest answer is the web. 1996 is the first data point showing a drop in circulation. The web was quite small then, of course, but AOL and Yahoo! were already around, and the Internet Archive had been founded. Think back to a pre-web time: pretty much anything you needed to know more about required a trip to the library to check out a book.

The most important takeaway is that, regardless of cause, user behavior has changed and, by every data point, is still changing. In the end, the greatest question is how academic libraries will adapt. It is clear that the answer is not as simple as a transition to a new medium. To survive, librarians must find the answer before we have enough data to prove these predictions.

If you enjoyed exploring this data, please check out Library Data and follow @librarydata on Twitter.

Data Source:

About our guest author: Will Kurt is a software engineer at Articulate Global, pursuing his master’s in computer science at the University of Nevada, Reno, and is a former librarian. He holds an MLIS from Simmons College and has worked in various roles in public, private and special libraries at organizations such as MIT, BBN Technologies and the University of Nevada, Reno. He has written and presented on a range of topics including play, user interfaces, functional programming and data visualization.

Tips for Everyone Doing the #codeyear

Learn to Code in 2012!

If you are a librarian interested in learning how to code, 2012 is a perfect year to start. Thanks to Codecademy (http://codecademy.com), free JavaScript lessons are provided every week at http://codeyear.com/. The lessons are interactive and geared towards beginners, so even if you have no previous programming experience, you will be able to pick up the new skill soon enough, as long as you are patient and willing to spend time mastering each week’s lesson.

A great thing about this learn-how-to-program project, called #codeyear on Twitter (#libcodeyear and #catcode in library-land), is that there are more than 375,443 people (and counting) out there doing exactly the same lessons as you. The greatest thing about the #libcodeyear / #catcode project is that librarians have organized themselves around it for a collective learning experience. How librarian-like, don’t you think?

Now, if you are ready to dive in, here are some useful resources. After these resources, I will tell you a bit more about how best to ask for help when your code is not working.

Resources for Collective Learning

Syntax Error: Catch the most frustrating bugs!

Now, what I really like about the #codeyear lessons so far is that some of them trip you up with trivial things, like a typo! You need to find the typo and fix it to pass the lesson. You may ask, “How on earth does fixing a typo count as a programming lesson?”

Let me tell you: finding a typo is no triviality in coding. Learning to catch a syntax error like this will save you from the most frustrating experiences in coding.

Some examples of seemingly innocuous syntax errors:

  • var myFunction = funtction (){blah, blah, blah … };
  • var myNewFunction = function (]{blah, blah, blah … };
  • for(i=0,  i<10, i++;)
  • var substr=’Hello World’; alert(subst);
  • –//This is my first JavaScript

Can you figure out why these lines would not work?  Give it a try! You won’t be sorry. Post your answers in the comments section.
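To see how a one-character slip surfaces at runtime, here is a separate example (deliberately not one of the quiz lines above, so no spoilers). A misspelled variable name compiles fine as text but blows up the moment the line runs:

```javascript
// A one-character typo: reading an undeclared variable throws a
// ReferenceError at runtime. (Fresh example, not a quiz answer.)
var message = "Hello World";
var result;
try {
  result = mesage; // typo: "mesage" was never declared
} catch (e) {
  result = e.name; // captures "ReferenceError"
}
```

Wrapping the suspect line in try/catch like this is also a handy way to confirm exactly which error a typo produces.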

How to Ask for Help with Your Code

by Matteo De Felice in Flickr (http://farm4.staticflickr.com/3577/3502347936_43b5e2a886.jpg)

I am assuming that as the #codeyear, #catcode, and #libcodeyear projects progress, more people are going to ask questions about problems that stump them. Some lessons already have a Q&A on the Codecademy site, so check those out. Reading through others’ questions will give you valuable insight into how code works and where it can easily trip you up.

That said, you may want to ask questions in the places mentioned in the Resources section above. If you do, it’s a good idea to follow some ground rules. They will make your question much more likely to be looked at by others, and way more likely to be answered correctly.

  • Before asking a question, try to research it yourself. Google the question, check the Q&A section on the Codecademy website, and check other online JavaScript tutorials (see below for some recommended ones).
  • If this fails, do the following:
    • Specify your problem clearly.
      (Don’t say things like “I don’t get lesson 3.5” or “JavaScript functions are too hard,” unless the purpose is just to rant.)
    • Provide your code, including any parts or details related to the problem lines.
      (Bear in mind that you might think the problem is in line 10 when it actually lies in line 1, which you are not looking at.) Highlight or color-code the line you are having trouble with, and make it easy for others to immediately see the problematic part.
    • Describe what you have done to troubleshoot, even if it didn’t work.
      This tells a potential helper what reasoning lies behind your code and what solutions you have already tried, saving their time and making it more likely that someone will actually help you. Believe it or not, what seems completely obvious and clear to you can be completely alien and unfathomable to others.
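Put together, a question following these rules might include a snippet annotated like this. The lesson number and function are invented for illustration; the comments show where each piece of advice lands:

```javascript
// Lesson 3.5 exercise (hypothetical): sum the numbers from 1 to n.
// PROBLEM: state clearly what you expected and what you got instead.
// TRIED: list what you already attempted, even if it didn't work.
function sumTo(n) {
  var total = 0;
  for (var i = 1; i <= n; i++) {
    total += i; // <-- highlight the line you suspect, like this
  }
  return total; // expected: sumTo(3) === 6
}
```

A helper who sees the expected result, the suspect line, and the attempts already made can usually answer in one reading instead of three rounds of follow-up questions.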

Some JavaScript Resources

There are many resources that will facilitate your learning JavaScript. In addition to the lessons provided by CodeAcademy, you may also find these other tutorials helpful to get a quick overview of JavaScript syntax, usage, functions, etc. From my experience, I know that I get a better understanding when I review the same subject from more than one resource.

If you have other favorite JavaScript resources, please share them in the comments section.

ACRL TechConnect blog will continue to cover #libcodeyear / #catcode related topics throughout the year!  The post up next will tell you all about some of the excuses people deploy to postpone learning how to code and what might break the mental blockage!

Zotero: A Guide for Librarians, Researchers and Educators

ACRL announces the publication of Zotero: A Guide for Librarians, Researchers and Educators. Authored by Jason Puckett of Georgia State University, Zotero: A Guide for Librarians, Researchers and Educators is the first book-length treatment of this powerful research tool developed by the Center for History and New Media at George Mason University.

Written for end users, librarians and teachers, the book introduces Zotero and presents it in the context of bibliography managers and open source software. Puckett then provides detailed instructions on using the software in research and writing, along with a wealth of useful information including instructional best practices, examples, support tips and advanced techniques for those who teach and support Zotero.

“Puckett draws on his deep understanding of Zotero’s technology to provide clear, concise guidelines and tips for beginners and experts alike,” says Sean Takats, co-director of Zotero, assistant professor of History at George Mason University and director of research projects at the Center for History and New Media. “As a bonus, he convincingly argues why you — yes, you — need to be using research software and why Zotero is the best choice.”

A perfect guidebook to a robust open access research tool that allows the user to manage all aspects of bibliographic data, Zotero: A Guide for Librarians, Researchers and Educators is essential for librarians, classroom faculty and students alike.

Zotero: A Guide for Librarians, Researchers and Educators will be available at the 2011 ALA Annual Conference in New Orleans and is available for purchase in print and as an ePub, Kindle, or PDF e-book through the ALA Online Store; Amazon.com; and by telephone order at (866) 746-7252 in the U.S. or (770) 442-8633 for international customers.