Learning Web Analytics from the LITA 2012 National Forum Pre-conference

Note: The 2012 LITA Forum pre-conference on Web Analytics was taught by Tabatha (Tabby) Farney and Nina McHale. Our guest authors, Joel Richard and Kelly Sattler, attended the pre-conference and wrote this summary to share with ACRL TechConnect readers.

Before the conference, Tabby and Nina surveyed the participants about what we were interested in learning and solicited questions to be answered in the class. Twenty-one participants responded, and seventeen of them were already using Google Analytics (GA). About half of those using GA check their reports once or twice per month, and the rest check less often. The conference opened with introductions and a brief description of what we were doing with analytics on our websites and what we hoped to learn.

Web Analytics Strategy

The overall theme of the pre-conference was the following:

A web analytics strategy is the structured process of identifying and evaluating your key performance indicators on the basis of an organization’s objectives and website goals – the desired outcomes, or what you want people to do on the website.

We learned that beyond choosing a tool to measure our analytics, we need to identify what we want our website to do. We do this by using the pre-existing documentation our institutions have on their mission and purpose, as well as the mission and purpose of the website and who it is meant to serve. Additionally, we need a privacy statement so our patrons understand that we will be tracking their movements on the site and know what we will be collecting. We learned that there are challenges in using only IP addresses (versus cookies) for tracking purposes. For example, does our institution's network architecture allow us to distinguish patrons from staff by IP address, or are cookies a necessity?

Tool Options for Website Statistics

To start things off, we discussed the types of web analytics tools that are available and which ones we were using. Many of the participants were already using Google Analytics (GA), so most of the activities were demonstrated in GA, where we could log into our own accounts. We were reminded that though it is free, GA keeps our data and does not allow us to delete it. GA has us place a bit of JavaScript code on each page we want tracked. It is easier to set up GA within a content management system, but it may not work as well for mobile devices. Piwik is an open-source alternative to Google Analytics that uses a similar JavaScript tagging method. Additionally, we were reminded that if we use any JavaScript tagging method, we should review our code snippets at least every two years, as they do change.
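
For reference, this was the general shape of the asynchronous ga.js snippet GA used at the time. This is only a sketch: the UA-XXXXXXX-1 property ID is a placeholder, and since the snippet changes over time, always copy the current version from your own GA account.

<script type="text/javascript">
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXXXX-1']); // placeholder property ID
  _gaq.push(['_trackPageview']);
  // Load ga.js asynchronously so it does not block page rendering
  (function() {
    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
  })();
</script>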

We learned about other, less common systems for tracking user activity. AWStats is installed locally, reads the website log files, and processes them into reports. It offers the user more control and may be more useful for sites not in a content management system. Sometimes it provides more information than desired, and it cannot always clearly differentiate between users based on IP address alone. Other similar tools are Webalizer, FireStats, and Webtrends.

A third option is to use web beacons, which are small, invisible transparent GIFs embedded on every page. These are useful when JavaScript won't work, but they probably aren't as applicable today as they once were.

Finally, we took a brief look at the heat mapping tool Crazy Egg. It focuses on visual analytics and uses JavaScript tagging to provide heat maps of exactly where visitors clicked on our site, offering insight into which areas of a page receive the most attention. Crazy Egg has a 30-day free trial and then charges per page tracked, but there are subscriptions for under $100/month if you find the information worth the cost. The images can really give webmasters an understanding of what users are doing on their site and are persuasive tools when redesigning a page or analyzing specific kinds of user behavior.

Core Concepts and Metrics of Web Analytics

Next, Tabby and Nina presented a basic list of terminology used within web analytics.  Of course, different tools refer to the same concept by different names, but these were the terms we used throughout our session.

  • Visits – A visit is when someone comes to the site. A visit ends when a user has not seen a new page in 30 minutes (or when they have left the site.)
  • Visitor Types: New & Returning – A cookie is used to determine whether a visitor has been to the site in the past. If a user disables cookies or clears them regularly, they will show up as a new user each time they visit.
  • Unique Visitors – To distinguish visits by the same person, the cookie is used to track when the same person returns to the site in a given period of time (hours, days, weeks or more).
  • Page Views – More specific than “hits,” a page view is recorded when a page is loaded in a visitor’s browser.
  • User Technology – This includes information about the visitor’s operating system, browser version, mobile device or desktop computer, etc.
  • Geographic Data – A visitor’s location in the world can often be determined down to the city they are in.
  • Entry and Exit Pages – These refer to the page the visitor sees first during their visit (Entry) and the last page they see before leaving or their session expires (Exit).
  • Referral Sources – Did the visitor come from another site? If so, this tells us who is sending traffic to us.
  • Bounce Rate – A bounce is when someone comes to the site and views only one page before leaving.
  • Engagement Metrics – These indicate how engaged visitors are with our site, measured by the time they spend on the site or the number of pages they view.

Goals/Conversion

Considering how often the terms “goals” and “conversions” are used, it is important to realize that in web analytics lingo a goal, also referred to as a conversion, is a metric that measures whether a desired action has occurred on your site. There are four primary types of conversions:

  1. URL Destination – A visitor has reached a targeted end page.  For commercial sites, this would be the “Thank you for your purchase” page. For a library site, this is a little more challenging to classify and will include several different pages or types of pages.
  2. Visit Duration – How much time a visitor spends on our site. This is often an unclear concept. If a user is on the site for a long time, we don’t know if they were interrupted while on our site, if they had a hard time finding what they were looking for, or if they were enthralled with all the amazing information we provide and read every word twice.
  3. Pages per Visit – Indicates site engagement. Similar to Visit Duration, many pages may mean the user was interested in our content, or that they were unable to find what they were looking for. We can distinguish between the two by looking at the “paths” of pages the visitor saw. As an example, we might want to know if someone finds the page they were looking for in three pages or fewer.
  4. Events – Targets an action on the site. This can be anything and is often used to track outbound pages or links to a downloadable PDF.

Conversion rate is the percentage of visits in which the desired action occurs:

Conversion rate = desired actions / total (or unique) visits

For example, if 250 out of 5,000 visits end with a completed “Ask a Librarian” form, the conversion rate for that goal is 5%.

Goal Reports, also known as Conversion Reports, are sometimes provided by the tool and include the total number of conversions and the conversion rate. We learned that we can also assign a monetary value to a goal to take advantage of the more commerce-focused tools in analytics software, though the results can be challenging to interpret. Conversion reports also show an Abandonment Rate, the proportion of people who start toward a goal but leave our site. We can investigate this by creating a “funnel” that identifies the steps needed to complete the goal. The funnel report shows us where in those steps visitors drop off and how many make it through to a complete conversion.

Key Performance Indicators (KPIs) were a focus of much of the conference.  They measure the outcome based on our site’s objectives/goals and are implemented via conversion rates.  KPIs are unique to each site.  Through examples, we learned that each organization’s web presence may be made up of multiple sites. For instance, an organization may have its main library pages, Libguides, the catalog, a branch site, a set of sites for digitized collections, etc. A KPI may span activities on more than one of these sites.

Segment or Filter

We then discussed the similarities and differences between Segments and Filters, both of which offer ways to narrow the data so we can focus on a particular point of interest. The difference between the two is that (i) filtering removes data during the collection process, so the filtered-out data is lost, whereas (ii) segmentation merely hides data from a report, leaving it available for other reports. Generally, we felt that Segments were preferable to Filters in Google Analytics, given that it is impossible to recover data lost during GA’s real-time data collection.

We talked about the different kinds of segments that some of us are using. For example, at Joel’s organization, he segments staff computers in the offices from computers in the library branches by adding a query string to the homepage URL set in the branch computers’ browsers. With this in place, he can create a segment in Google Analytics to view the activity of either group by segmenting on the different entry pages (with and without the special query string). Segmenting on IP address further separates his users into researchers and the general public.

Benchmarking

As a step towards measuring success for our sites, we discussed benchmarking, which is used to look at the performance of our sites before and after a change. Having performance data before making changes is essential to knowing whether those changes are successful, as defined by our goals and KPIs.

Comparing a site to itself, either in a prior iteration or before making a change, is called Internal Benchmarking. Comparing a site to other similar sites on the Internet is known as External Benchmarking. Since external benchmarking requires data to make a comparison, we need to request data or reports from the other website. An alternative is to use services such as Alexa, Quantcast, Hitwise, and others, which will do the comparison for you. Keep in mind that these may use e-commerce or commercial indicators, which may not make for a good comparison to humanities-oriented sites.

Event Tracking

Page views and visitor statistics are important for tracking how our site is doing, but sometimes we need to know about events that aren’t tracked through the normal means. We learned that an Event, both in the conceptual sense and in the analytics world, can be used to track actions that don’t naturally result in a page view. Events are used to track access to resources that aren’t a web page, such as videos, PDFs, dynamic page elements, and outbound links.

Tracking events doesn’t always come naturally and requires some effort to set up. Content management systems (CMS) like Drupal help make event tracking easy, either via a module or plugin or simply by editing a template or function that produces the HTML pages. If a website is not using a CMS, the webmaster will need to add event tracking code to each link or action they wish to record in Google Analytics. Fortunately, as we saw, the event tracking code is simple to add to a site, and Google’s Event Tracking Guide documentation describes it well.
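
To give a flavor of what this looked like with the asynchronous ga.js syntax of the time, here is a sketch of tagging a PDF link with an event. The category, action, and label values here are invented, and newer versions of the tracking code use a different syntax.

<a href="/files/annual-report.pdf"
   onclick="_gaq.push(['_trackEvent', 'Downloads', 'PDF', 'Annual Report']);">
   Annual Report (PDF)</a>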

Finally, we learned that tracking events is preferable to creating “fake” pageviews, since events do not inflate the pageview statistics generated by visitors’ normal browsing activity.

Success for our websites

Much of the second half of the conference was focused on learning about and performing some exercises to define and measure success for our sites. We started by understanding our site in terms of our Users, our Content and our Goals. These all point to the site’s purpose and circle back around to the content delivered by our site to the users in order to meet our goals. It’s all interconnected. The following questions and steps helped us to clarify the components that we need to have in hand to develop a successful website.

Content Audit – Perform an inventory that lists every page on the site. This is likely to be tedious and time-consuming; it includes finding abandoned pages, lost images, etc. The web server is a great place to start identifying files, and sometimes automated web crawling tools can find the pages on our site. Then we need to evaluate that content. Beyond the basic use of a page, consider recording the last updated date, bounce rate, time on page, whether it is a landing page or not, and who is responsible for the content.

Identifying Related Sites – Create a list of sites that our site links to and sites that link back to our site.  Examples: parent site (e.g. our organization’s overall homepage), databases, journals, library catalog site, blog site, flickr, Twitter, Facebook, Internet Archive, etc.

Who are our users? – What is our site’s intended audience or audiences? For those of us at the conference, this was a variety of people: students, staff, the general public, collectors, adults, teens, parents, etc. Some of us may need to use a survey to determine this, and some populations of users (e.g. staff) might be identified via IP addresses. We were reminded that most sites serve one major set of users along with smaller secondary groups. For example, students might be the primary users while faculty and staff are secondary users.

Related Goals and plans – Use existing planning documents, strategic goals, and the library’s mission statement to set a mission statement and/or goals for the website. Who are we going to help? Who is our audience? We must define why our site exists and its purpose on the web. Generally we’ll have one primary purpose per site. Secondary purposes also help define what the site does; they fall under the “nice to have” category but are still very useful to our users. (For example, Amazon.com’s primary purpose is to sell products, but secondary purposes include reviews, wishlists, ratings, etc.)

When we have a new service to promote, we can use analytics and goals to track how well that goal is being met. This is an ongoing expansion of the website and the web analytics strategy.  We were reminded to make goals that are practical, simple and achievable. Priorities can change from year to year in what we will monitor and promote.

Things to do right away

Nearing the end of our conference, we discussed things that we can do to improve our analytics in the near term. These are not necessarily quick to implement, but doing them will put us in a good position to start our web analytics strategy. It was mentioned that if we aren’t tracking our website’s usage at all, we should install something today to at least begin collecting data!

  1. Share what we are doing with our colleagues. Educate them at a high level, so they know more about our decision making process. Be proactive and share information; don’t wait to be asked what’s going on. This will offer a sense of inclusion and transparency. What we do is not magic in any sense. We may also consider granting read-only access to some people who are interested in seeing and playing with the statistics on their own.
  2. Set a schedule for pulling and analyzing your data and statistics. On a quarterly basis, report to staff on interesting findings: important metrics, fun things, anecdotes about what is happening on our site. Also check the goals we are tracking in analytics on a quarterly basis; do not “set and forget” our goals. On a monthly basis, we should report to IT staff on topics of concern, 404 pages, important values, and things that need attention.
  3. Test, Analyze, Edit, and Repeat. This is an ongoing, long-term effort to keep improving our sites. During a site redesign, we compare analytics data before and after we make changes. Use analytics to make certain the changes we are implementing have a positive effect. Use analytics to drive the changes in our site, not because it would be cool/fun/neat to do things a certain way. Remember that our site is meant to serve our users.
  4. Measure all content. Get tracking code installed across all of our sites. Google Analytics cross-domain tracking is tricky to set up, but once installed it will track users as they move between different servers: our website, blog, OPAC, and so on (a rough sketch follows this list). For things not under our control, be sure to at least track outbound links so we know when people leave our site.
  5. Measure all users. When we are reporting, segment the users into groups as much as possible to understand their different habits.
  6. Look at top mobile content. Use that information to divide the site and focus on the content mobile users go to most often.
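
As an example of the cross-domain setup mentioned in item 4, here is a rough sketch using the classic ga.js syntax. The property ID and domain names are placeholders, and the details vary by tracking-code version, so consult Google’s current cross-domain documentation before relying on this.

<script type="text/javascript">
  // On every page of the main site:
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXXXX-1']);           // placeholder property ID
  _gaq.push(['_setDomainName', 'library.example.edu']); // placeholder domain
  _gaq.push(['_setAllowLinker', true]); // accept visitor cookies passed from other domains
  _gaq.push(['_trackPageview']);
</script>

<!-- A link to the OPAC on another domain passes the visitor's cookies along -->
<!-- so the visit is not counted as a brand-new visitor: -->
<a href="http://opac.example.org/"
   onclick="_gaq.push(['_link', this.href]); return false;">Library Catalog</a>
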
Summary

Spending eight hours learning about a topic and how to practically apply it to our site is a great way to get excited about taking on more responsibilities in our daily work. There is still a good deal of learning to be done since much of the expertise in web analytics comes from taking the time to experiment with the data and settings.

We, Kelly and Joel, are looking forward to working with analytics from the ground up, so to speak. We are both in an early stage of redeploying our websites under new software, which allows us to take advantage of the most up-to-date analytics tools and techniques available. Additionally, our organizations, though different in their specific missions and goals, are entering a new round of long-term planning that will produce a new set of goals for the next three to five years. It is clear that the website is an important part of this planning and that the goals of our websites translate directly into actions we take when configuring and using Google Analytics.

We both expect a learning curve in understanding and applying web analytics, and there will be a set of long-term, ongoing tasks for us. However, after this session, we are more confident about how to apply and understand analytics effectively in tracking and achieving the goals of our organizations and creating an effective and useful set of websites.

About our Guest Authors:

Kelly Sattler is a Digital Project Librarian and Head of Web Services at Michigan State University. She and her team are involved with migrating the Libraries’ website into Drupal 7 and are analyzing its Google Analytics data, search terms, and chat logs to identify places where they can improve the site through usability studies. Kelly spent 12 years in information technology at a large electrical company before becoming a librarian and has a bachelor’s degree in computer engineering. She can be found on Twitter at @ksattler.

Joel Richard is the lead Web Developer for the Smithsonian Libraries in Washington, DC and is currently in the process of rebuilding and migrating 15 years’ worth of content to Drupal 7. He has 18 years of experience in software development and internet technology and is a confirmed internet junkie. In his spare time, he is an enthusiastic proponent of Linked Open Data and believes it will change the way the internet works. One day. He can be found on twitter at @cajunjoel.


Making Your Website Accessible Part 1: understanding WCAG

With more and more services and resources becoming digital, web accessibility has become an increasingly important topic, and one I have spent some time researching. A summary of my findings may be useful to others who are involved with web services.

What is Web Accessibility?

Some define web accessibility to mean making the web accessible to those with disabilities (including visual, auditory, motor, and cognitive) [1]. However, I prefer the more general meaning of making the web accessible equally to everyone, including those with disabilities [2]. To take this further, regardless of whether someone has a disability, they should be able to access information in their preferred manner, including using any browser, operating system, or device.

A quick (but common) example of a problem is how a user is expected to control a video if they cannot use a mouse to click on buttons (they may depend on a keyboard or be visually impaired), especially when most videos still use some form of Flash. Try it sometime, and see what happens. Web accessibility guidelines, such as WCAG, attempt to address these issues.

What is WCAG?

The Web Content Accessibility Guidelines are just that, a set of guidelines, written by the Web Accessibility Initiative (WAI) group of the W3C. Currently, it’s at version 2.0, but many organizations still use version 1.0 (usually with modifications to make it more current).

  • The Breakdown of WCAG

Many (myself included) find WCAG difficult to navigate and comprehend. To get a better sense of the ideas behind the guidelines, reading a summary can be of great help. The WCAG section of the Web Accessibility Initiative (WAI) site provides a good summary of the guidelines:

  • Perceivable
  • Operable
  • Understandable
  • Robust

– WCAG 2.0 at a Glance

Even if you don’t follow the specifics of WCAG, these are good general guidelines for web accessibility. Many of them might seem like common sense, yet they are easy to neglect because the average user is not detrimentally affected when they are broken. Some failures, though, can have grave consequences (I always like to point out that content should not cause seizures).

WCAG 2.0 as a Mind Map by Avoka

The specific guidelines expand on these ideas and provide more detail on how each might be met. To wade through the specifics, I suggest using the quick reference, How to Meet WCAG 2.0, and customizing it based on your current technologies and focus. By unchecking everything except the level(s) you expect to meet, it also serves as a good checklist.

  • A Note on Techniques

In most cases, the sufficient techniques and failures listed under each guideline are quite useful as examples of how you might meet (or fail) each guideline. On the other hand, the techniques are “informative” and are not necessarily the best methods of fulfilling a given guideline. (See my criticism below.) You might also give the advisory techniques a glance if you’re keen on further enhancing accessibility, but my view is that there is enough to deal with without getting into even more detail.

Criticisms & Lessons Learned

WCAG is not without criticism. The Wikipedia article on web accessibility gives a nice short overview and links to articles of interest. From what I’ve read, the three most compelling arguments against WCAG 2.0 specifically are that:

  • Level AAA is impractical to meet,
  • even at Level AAA, meeting WCAG 2.0 does not make your website accessible, and
  • the documentation is difficult to navigate/understand.

To briefly explain the second point, as I mentioned, part of web accessibility is to make your website easily accessible on any device, but WCAG does not directly address this issue (which you could address for example through responsive design, as explained by Lisa in the last TechConnect post).

My biggest criticism of WCAG 2.0, though, is that the documentation (in particular, the techniques) is not current.

For example, under the sufficient techniques for 1.3.1, Technique H73 tells you to use the summary attribute on table elements to provide an overview of what a given table is about. However, the summary attribute is obsolete in HTML5 (which I didn’t discover until I ran a page with a table through the W3C validator).
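
As a quick sketch (the table content here is invented), the first form below is what the technique recommends, while an HTML5 validator will flag it; a visible caption is one safer alternative:

<!-- WCAG technique H73 (summary is obsolete in HTML5): -->
<table summary="Library hours for each branch, by day of the week">
  ...
</table>

<!-- An HTML5-friendly alternative: a visible caption -->
<table>
  <caption>Library hours for each branch, by day of the week</caption>
  ...
</table>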

Why Should You Care?

Despite its shortcomings, WCAG provides an excellent base set of guidelines to follow. Even if you choose not to follow WCAG itself, consider customizing it to fit your organization’s needs.

  • Increased Access

If you’re reading this, you’re more than likely working at a public institution of some sort. Public institutions serve a very diverse group of people, and while persons with disabilities are a minority, we are (at least ideally) meant to serve every minority as well as the majority.

Improving web accessibility helps a diverse audience in and of itself. From mild to more extreme cases, the guidelines help to serve:

  • blind users, who will use screen readers
  • low vision users, who may use screen readers or magnifying tools
  • colourblind users
  • mobility impaired users, who might only use a keyboard
  • users with cognitive or learning disabilities
  • and more…

While in many cases serving one minority can mean taking away from another, that’s not the case here. You can easily improve your site to meet web accessibility requirements without negatively impacting the rest of your audience in any way (at worst, it simply makes no difference to them).

  • Legislation: AODA Regulations

Legislation and regulations are additional reasons to adhere to a set of web accessibility guidelines. Even if it’s not legislated, most institutions (public and private) will have policies and/or mandates that require or promote equal/equivalent access to all users.

In Ontario, Canada, we have the Accessibility for Ontarians with Disabilities Act (AODA), which, under Section 14, subsection 4, states:

  • By January 1, 2014, new internet websites and web content on [public sector] sites must conform with WCAG 2.0 Level A.
  • By January 1, 2021, all internet websites and web content must conform with WCAG 2.0 Level AA, other than,
    • success criteria 1.2.4 Captions (Live), and
    • success criteria 1.2.5 Audio Descriptions (Pre-recorded).

Different countries and provinces/states obviously have different regulations, and I’m not really familiar with others, but I know that many of them use WCAG as well.

If you’re in the US, check out the Section 508 Government website. It focuses on the government regulations for their own departments, but includes lots of resources.

In academic libraries, I would also highly recommend contacting the equivalent disabilities support office (frequently called Access Centre), as they will provide support for staff dealing with accessibility issues.

  • Better for Everyone

While web accessibility generally focuses on providing and improving access for persons with disabilities, making sure your site follows a set of guidelines can improve the experience of your other users as well.

Even something as simple as providing alternative text for images can help ‘regular’ users. What if their internet or your site is slow? What if they blocked (larger) images because they’re on a data plan? What if they’re using a text only browser? (While rare nowadays, not completely unheard of.) Much like when using a screen reader, having alternative text for images means a user would know what they’re missing and if it’s important.
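
As a quick sketch (the file names and text here are invented), the alt attribute covers both the screen reader case and the missing-image case:

<!-- Informative image: the alt text says what a user would otherwise miss -->
<img src="floor-map.png" alt="Map of the second floor showing the quiet study area">

<!-- Purely decorative image: empty alt text so screen readers skip it -->
<img src="decorative-border.png" alt="">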

Even under ideal circumstances, some guidelines still provide a better user experience.

Make text readable and understandable. -Guideline 3.1

Who wouldn’t want their user to be able to read and understand their site? Making content consistent across the site also improves navigation and browsability of a site.

We all access the web differently. Heck, I have a script blocker, Flash blocker, and ad blocker installed on every browser I regularly use. I really like sites that allow me to use them without unblocking any of those, because I believe users should be able to do at least the basic things on any given website without ads, scripts, and Flash. Web accessibility guidelines can help you make your site more accessible for everyone and provide a better user experience.

The One-Liner

Basically, WCAG is a good source of information and a good set of guidelines to follow. While you need to be cautious of some of the finer details, making your site meet web accessibility guidelines can improve your site not only for those with special needs, but for your regular users as well.

 

About our Guest Author: Cynthia Ng is currently the Innovative Technologies Librarian (and also filling in for the Web Services Librarian) at Ryerson University Library and Archives. While she is especially interested in web development and services, her focus is on integrating technology into the library in a holistic manner to best serve its users. She is a recent MLIS graduate from the University of British Columbia and also holds a BEd from UBC. She can be found blogging at Learning (Lib)Tech and on Twitter as @TheRealArty.


More APIs: writing your own code (2)

My last post, “The simplest AJAX: writing your own code (1)”, discussed a few JavaScript and JQuery examples that make use of the Flickr API. In this post, I try out APIs from providers other than Flickr. The examples will look plain since I didn’t add any CSS to dress them up, but remember that the focus here is not presentation; it is getting the data out and re-displaying it on your own. Once you get comfortable with this process, you can start thinking about creative and useful ways to present and mash up the same data. We will go through five examples I created with three different APIs. Before taking a look at the code, check out the results below first.

 

I. Pinboard API

The first example is Pinboard. Many libraries moved their bookmarks out of Delicious to a different site when there was a rumor that Delicious might be shut down by Yahoo. One of those sites was Pinboard. By getting your bookmark feeds from Pinboard and manipulating them, you can easily present a subset of your bookmarks as part of your website.

(a) Display bookmarks in Pinboard using its API

The following page uses JQuery to access the JSONP feed of my public bookmarks in Pinboard. The $.ajax() method is invoked on line 13. Line 15, jsonp:”cb”, sets the name of the URL parameter that passes the callback function in which the JSON feed data will be wrapped. Note line 18, where I print the received data to the console; this way, you can check in the Firebug console whether you are receiving the JSONP feed. Lines 19-22 use the $.each() function to access each element in the JSONP feed and the .append() method to add each bookmark’s title and URL to the “pinboard” div. The JQuery documentation has detailed explanations and examples for its functions and methods, so make sure to check it out if you have any questions about a JQuery function or method.

Pinboard API – example 1

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Pinboard: JSONP-AJAX Example</title>
<script type="text/javascript" src="http://jqueryjs.googlecode.com/files/jquery-1.3.2.min.js"></script>
</head>
<body>
<p> This page takes the JSON feed from <a href="http://pinboard.in/u:bohyunkim/">my public links in Pinboard</a> and re-presents them here. See <a href="http://feeds.pinboard.in/json/u:bohyunkim/">the Pinboard's JSON feed of my links</a>.</p>
<div id="pinboard"><h1>All my links in Pinboard</h1></div>
<script>
$(document).ready(function(){	
$.ajax({
  url:'http://feeds.pinboard.in/json/u:bohyunkim',
  jsonp:"cb",
  dataType:'jsonp',
  success: function(data) {
  	console.log(data); //dumps the data to the console to check if the callback is made successfully
    $.each(data, function(index, item){
      $('#pinboard').append('<div><a href="' + item.u + '">' + item.d
+ '</a></div>');
      }); //each
    } //success
  }); //ajax

});//ready
</script>
</body>
</html>

Here is the screenshot of the page. I opened up the Console window of Firebug (since I dumped the received data in line 18), and you can see the incoming data here. (Note: click the images to see the large version.)

But it is much more convenient to see the organization and hierarchy of the JSONP feed in the Net panel of Firebug.

And each element of the JSONP feed can be drilled down for further details by clicking the object in each row.

(b) Display only a certain number of bookmarks

Now, let’s display only a certain number of bookmarks. In order to do this, one more line is needed. Line 9 checks the position of each element and breaks the loop when the 5th element is processed.

Pinboard API – example 2

$.ajax({
  url:'http://feeds.pinboard.in/json/u:bohyunkim',
  jsonp:"cb",
  dataType:'jsonp',
  success: function(data) {
    $.each(data, function(index, item){
      	$('#pinboard').append('<div><a href="' + item.u + '">' + item.d
+ '</a></div>');
    	if (index == 4) return false; //only display 5 items
      }); //each
    } //success
  }); //ajax

(c) Display bookmarks with a certain tag

Often libraries want to display bookmarks with a particular tag. Here I add a line using the JQuery method $.inArray() to display only bookmarks tagged with ‘fiu.’ The $.inArray() method takes a value and an array as parameters and returns the index at which the value is found in the array (0 for the first element), or -1 if it is not found. Line 7 checks whether the tag array of a bookmark (item.t) includes ‘fiu’ and displays the bookmark only in that case. As a result, only the bookmarks with the tag ‘fiu’ are shown on the page.

Pinboard API – example 3

$.ajax({
  url:'http://feeds.pinboard.in/json/u:bohyunkim',
  jsonp:"cb",
  dataType:'jsonp',
  success: function(data) {
    $.each(data, function(index, item){
    	if ($.inArray("fiu", item.t)!==-1) // if the tag is 'fiu'
      		$('#pinboard').append('<div><a href="' + item.u + '">' + item.d
+ '</a></div>');
	}); //each
    } //success
  }); //ajax

II. Reddit API

My second example uses the Reddit API. Reddit is a site where people comment on news items of interest. Here I used $.getJSON() instead of $.ajax() to process the JSONP feed from the Science section of Reddit. In the case of the Pinboard API, I could not find a way to construct a URL that includes a callback function, so parameters such as jsonp:”cb” and dataType:’jsonp’ had to be specified, which is why I needed the $.ajax() function. With Reddit, on the other hand, getting the JSONP feed URL was straightforward: http://www.reddit.com/r/science/.json?jsonp=cb.

Line 19 adds a title to the page. Lines 20-22 extract the title of and link to the news article being commented on and display them. Under each news item, the link to the comments for that article on Reddit is added as a bullet item. You can see that, in lines 17 and 18, I used the console to check that I was getting the right data and targeting the element I wanted, and then commented those lines out.

This is just an example, and for that reason the result is a rather simplified version of the original Reddit page with less information. But as long as you are comfortable accessing and manipulating data at different levels of the JSONP feed sent from an API, you can slice and dice the data in the way that suits your purpose best. So making a clever mash-up takes not only technical coding skills but also creative ideas about which sets of data and information to connect and present, in order to offer something new that was not available or apparent before.

Reddit API – example

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Reddit-Science: JSONP-AJAX Example</title>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.0/jquery.min.js"></script>
</head>

<body>
<p> This page takes the JSONP feed from <a href="http://www.reddit.com/r/science/">Reddit's Science section</a> and presents the link to the original article and the comments in Reddit to the article as a pair. See <a href="http://www.reddit.com/r/science/.json?jsonp=?">JSONP feed</a> from Reddit.</p>

<div id="feed"> </div>
<script type="text/javascript">
	//run function to parse json response, grab title, link, and media values, and then place in html tags
$(document).ready(function(){		
	$.getJSON('http://www.reddit.com/r/science/.json?jsonp=?', function(rd){
		//console.log(rd);
		//console.log(rd.data.children[0].data.title);
		$('#feed').html('<h1>*Subredditum Scientiae*</h1>');
		$.each(rd.data.children, function(k, v){
      		$('#feed').append('<div><p>'+(k+1)+': <a href="' + v.data.url + '">' + v.data.title+'</a></p><ul><li style="font-variant:small-caps;font-size:small"><a href="http://www.reddit.com'+v.data.permalink+'">Comments from Reddit</a></li></ul></div>');
      }); //each
	}); //getJSON
});//ready	
</script>

</body>
</html>

The structure of a JSON feed can be particularly confusing to make out, so make sure to use the Firebug Net window to figure out the organization of the feed content and the property name for the value you want.

But what if the site from which you would like to get data doesn’t offer JSONP feed? Fortunately you can convert any RSS or XML feed into JSONP feed. Let’s take a look!

III. PubMed Feed with Yahoo Pipes API

Consider this PubMed search. It is a simple search that looks for items in PubMed that have to do with Florida International University College of Medicine, where I work. You may want to access the data feed of this search result, manipulate it, and display it on your library website. So far, we have performed similar tasks with the Pinboard and Reddit APIs using JQuery. Unfortunately, however, PubMed does not offer any JSON feed; we only get an RSS feed from PubMed.

This is OK, however. You can either manipulate the RSS feed directly or convert the RSS feed into JSON, which you are more familiar with now. Yahoo Pipes is a handy tool for just that purpose. You can do the following tasks with Yahoo Pipes:

  • combine many feeds into one, then sort, filter and translate it.
  • geocode your favorite feeds and browse the items on an interactive map.
  • power widgets/badges on your web site.
  • grab the output of any Pipes as RSS, JSON, KML, and other formats.

Furthermore, there may be a pipe that has already been created by someone else for exactly what you want to do. PubMed is a popular resource, and as I expected, I found a pipe for a PubMed search. I tested it, copied the pipe, and changed the search term. Here is the screenshot of my Yahoo Pipe.

If you want to change the pipe, you can click “View Source” and make further changes. Here I just changed the search terms and saved the pipe.

After that, you want to get the results of the pipe as JSON. If you hover over the “Get as JSON” link in the first screenshot above, you will get this link: http://pipes.yahoo.com/pipes/pipe.run?_id=e176c4da7ae8574bfa5c452f9bb0da92&_render=json&limit=100&term=”Florida International University” and “College of Medicine”. But this returns JSON, not JSONP.

In order to get that JSON feed wrapped into a callback function, you need to add this bit, &_callback=myCallback, at the end of the url: http://pipes.yahoo.com/pipes/pipe.run?_id=e176c4da7ae8574bfa5c452f9bb0da92&_render=json&limit=10&term=%22Florida+International+University%22+and+%22College+of+Medicine%22&_callback=myCallback. Now the JSON feed appears wrapped in a callback function like this: myCallback( ). See the difference?

Line 25 brings in this JSONP feed and invokes the callback function named “myCallback.” Lines 14-23 define this callback function to process the received feed. Lines 18-20 take the JSON data received at the level of data.value.items and print out each item’s title (item.title) with a link (item.link). Here I number each item with (index+1); without the +1, the numbering would begin at 0 instead of 1. Line 21 stops the loop once five items have been processed.

Yahoo Pipes API/PubMed – example

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>PubMed and Yahoo Pipes: JSONP-AJAX Example</title>
<script type="text/javascript" src="http://jqueryjs.googlecode.com/files/jquery-1.3.2.min.js"></script>
</head>
<body>
<p> This page takes the JSONP feed from <a href="http://pipes.yahoo.com/pipes/pipe.info?_id=e176c4da7ae8574bfa5c452f9bb0da92"> a Yahoo Pipe</a>, which creates JSONP feed out of a PubMed search results and re-presents them here. 
<br/>See <a href="http://pipes.yahoo.com/pipes/pipe.run?_id=e176c4da7ae8574bfa5c452f9bb0da92&_render=json&limit=10&term=%22Florida+International+University%22+and+%22College+of+Medicine%22&_callback=myCallback">the Yahoo Pipe's JSON feed</a> and <a href="http://www.ncbi.nlm.nih.gov/pubmed?term=%22Florida%20International%20University%22%20and%20%22College%20of%20Medicine%22">the original search results in PubMed</a>.</p>
<div id="feed"></div>
<script type="text/javascript">
	//run function to parse json response, grab title, link, and media values - place in html tags
	function myCallback(data) {
		//console.log(data);
		//console.log(data.count);
		$("#feed").html('<h1>The most recent 5 publications from <br/>Florida International University College of Medicine</h1><h2>Results from PubMed</h2>');
		$.each(data.value.items, function(index, item){
      		$('#feed').append('<p>'+(index+1)+': <a href="' + item.link + '">' + item.title
+ '</a></p>');
        if (index == 4) return false; //display the most recent five items
      }); //each
	} //function
	</script>
<script type="text/javascript" src="http://pipes.yahoo.com/pipes/pipe.run?_id=e176c4da7ae8574bfa5c452f9bb0da92&_render=json&limit=10&term=%22Florida+International+University%22+and+%22College+of+Medicine%22&_callback=myCallback"></script>
</body>
</html>

Do you feel more comfortable now with APIs? With a little bit of JQuery and JSON, I was able to make good use of third-party APIs. Next time, I will try the WorldCat Search API, which is closer to the library world, and see how that works.

 


Responsive web design and libraries

Brad Frost’s This is Responsive website via desktop browser

Responsive web design is not really a new concept, yet libraries, and many websites in general, still have a way to go in adopting this method of web design. ACRL’s TechConnect has covered various web design topics, including mobile applications, hybrid mobile applications, design basics, and website readability. Consider the responsive web design approach and its benefits another tool in your tool belt and yet another option for how libraries present themselves online to their users.

So what is it?

As the number of devices on which we are able to view and interact with websites grows, it becomes more cumbersome to keep up with the plethora of laptops, desktops, mobile devices, and platforms that don’t yet exist. As our users view our websites and online resources in these various ways, we must ask a simple question: what does this look like, and what is the user’s experience? If you have a smartphone or tablet, you may recall viewing a website and being annoyed that it is merely a shrunken, teeny version of the site you’re used to seeing, rather than adjusting nicely to accommodate the device you’re using. In many cases it requires you, the user, to do something to make it work on your device rather than accommodating you; a rather unappealing way to treat users, I think. Here’s an example from the New York Times website:

New York Times displayed on a desktop browser

New York Times displayed on the iPhone, aka the Barbie-sized website

To avoid these awful Barbie-sized websites on our devices, the solutions we often see are to develop a mobile website, develop a mobile app, or create a website that is responsive and can handle most anything that comes its way. All of these are workable ways to build a mobile presence; however, the responsive approach is fast emerging as a winning solution.

What makes it great?

The very basic benefit of a responsive website design is that you have one site for all devices: it is intended to be inclusive of desktop machines and a variety of devices. A responsive site does not require anything of the user; with no downloading or additional buttons to click, the result is immediate. That’s it. Rather than separate approaches for mobile (through either a mobile site or mobile applications) and for desktop machines, this method is flexible and covers it all under one design.
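
Under the hood, the technique combines a fluid, percentage-based layout with CSS media queries that adapt that layout to the width of the viewport. Here is a minimal sketch; the class names and the 480px breakpoint are invented for illustration:

/* Fluid base layout: widths in percentages rather than pixels */
.main    { width: 65%; float: left; }
.sidebar { width: 30%; float: right; }
img      { max-width: 100%; height: auto; } /* images scale with their container */

/* On narrow screens, stack the columns instead of floating them */
@media screen and (max-width: 480px) {
  .main, .sidebar { width: 100%; float: none; }
}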

Responsive websites in the wild

The best way to experience responsive web design is to try out some websites that are designed this way. Here are a few great examples, with screenshots included. I recommend checking them out in your browser and then shrinking your browser window down to get the full effect of how this technology works. If you’re even more invested in the experience, bring each site up on any number of devices to see the benefits of responsive design done well.

Graphic designer, John Boilard, showcases his portfolio website using responsive design:
http://jpboneyard.com/ As you can see, the images are flexible, and each project displays very well on many devices. The images adjust accordingly, and the site remains easy to use and easy to view.

JP Boneyard design viewed on a desktop browser

JP Boneyard design viewed on an iphone

Web designer Ethan Marcotte, who coined the term responsive web design, showcases not only his design work but also his publications at http://ethanmarcotte.com/. There is a variety of content on his website, and the responsive design works extremely well. For a quick article on responsive design from the graphic design perspective, see his article on A List Apart.

Ethan Marcotte website via desktop browser

Ethan Marcotte website via the iphone

Matthew Reidsma, web services librarian at Grand Valley State University, also has a fantastic flexible site where even the videos are responsive: http://matthew.reidsrow.com/ He is a web librarian to watch, as he often writes and presents on responsive design (more on that below) and is always creating really awesome and interesting web projects. The Grand Valley State University Libraries also have a fantastic responsive site: http://gvsu.edu/library/

Matthew Reidsma’s website via desktop browser

Matthew Reidsma’s website via iphone

Grand Valley State University Libraries website via desktop browser

Grand Valley State University Libraries website via iphone

The website TitleCase was co-founded by designers and type experts Jessica Hische and Erik Marinovich; it is an elegant example of a side project that keeps the focus on type while giving users a pleasant experience through responsive design.

TitleCase website via desktop browser

TitleCase website via iphone

Scribble Tone, the Portland, OR based design studio, created the responsive website for Design Week Portland: http://www.designweekportland.com/#footer With gorgeous flexible images, bold colors, and clear typefaces, all created and designed responsively, this website is pleasant and fun to use.

Portland Design Week website via desktop browser

Portland Design Week website via iphone

Resources

If you were like me and did not attend ALA Annual this year to see Matthew Reidsma’s fantastic talk, Responsive Web Design for Libraries: Get Beyond the Myth of the Mobile Web, you are still in luck, because he posted the talk and slides on his website here. He covers quite a lot and provides a fantastic list of resources within that post. It’s a great presentation and truly worth watching all the way through.

To delve more deeply into the details of designing with responsive web methods, I highly recommend Responsive Web Design by Ethan Marcotte. He not only provides good examples and techniques for designing this way, but he also explains flexible grids and flexible images and displays his balanced approach between good design and good functionality. His own site, http://ethanmarcotte.com/, is responsive, and he has designed complex responsive websites as well.

Another option, particularly for those who use WordPress as a CMS, is one of the many free responsive themes that are already designed. Simply selecting such a theme, or taking the further step of creating a child theme that you can then customize, is a fairly simple way to make your website design responsive and more enjoyable.

The website This is Responsive has a wide variety of tools for developing your own responsive website, as does Bootstrap. So even if you don’t think of yourself as a designer, there are a lot of ways to get started in creating good design that is also responsive.

Whether you are interested in responsive design, mobile websites, or creating mobile apps, it is critical for libraries to be aware of the user experience as our customers use our websites and online resources on a variety of devices. The bar is being raised by responsively designed websites, and users will come to expect this kind of experience. As the web, platforms, and devices evolve, it will be crucial for libraries to be poised to offer a positive experience through good, thoughtful design.

Brad Frost’s This is Responsive website via iphone


What exactly does a fiber-based gigabit speed library app look like?

Mozilla and the National Science Foundation are sponsoring an open round of submissions for developers and app designers to create fiber-based gigabit apps. The detailed contest information is available over at Mozilla Ignite (https://mozillaignite.org/about/). Cash prizes totaling $500,000 are being awarded over three rounds of submissions to fund promising start-up ideas. Note: this is just the start, and these are seed projects to garner interest and momentum in the area. A recent hackathon in Chattanooga lists out what some coders are envisioning for this space: http://colab.is/2012/hackers-think-big-at-hackanooga/

The video for the Mozilla Ignite Challenge 2012 on Vimeo is slightly helpful for examples of gigabit speed affordances.

If you’re still puzzled after the video, you are not alone. One of the reasons for the contest is that network designers are not quite sure what immense levels of processing in the network and next generation transfer speeds will really mean.

Consider that best-case transfer speeds on a typical network are somewhere along the lines of 10 megabits per second. There are of course variances in this speed across your home line (it may hover closer to 5 Mb/s), but this is pretty much the standard that average subscribers can expect. A gigabit connection transfers data at 100 times that speed: 1,000 megabits per second. When a whole community is able to achieve 1,000 megabits upstream and downstream, you basically have no need for things like “streaming” video; the data pipes are that massive.

One theory is that gigabit apps could provide public benefit, solve societal issues, and usher in the next generation of the Internet. Think of gigabit speed as the difference between getting water (the Internet) through a straw and getting water through a fire hose. The practical aim of this contest is to seed start-ups with ideas that will in some way address healthcare (real-time health monitoring), the environment, and energy challenges. The local Champaign-Urbana municipal gigabit fiber cause is noble, as it will give those in areas without broadband access an awesome pipeline to the Internet. It is an intergovernmental partnership that aims to serve municipal needs as well as pave the way for research and industry start-ups.

Here are some attributes that Mozilla Ignite Challenge lists as the possible affordances of fiber based gigabit speed apps:

  • Speed
  • Video
  • Big Data
  • Programmable networks

As I read about the Mozilla Ignite open challenge, I wondered about the possibilities for libraries and as a thought experiment I list out here some ideas for library services that live on gigabit speed networks:

  • Consider the video data you could provide access to. In libraries that steward any kind of video, gigabit speeds would allow you to provide in-library viewing with few bottlenecks: a fiber-based gigabit-speed video viewing app with all of the library’s video content available at once. Think about viewing every video in your collection simultaneously. You could have them playing in multiple clusters (grid videos) at multiple stations in the library, without streaming.

  • Consider sensors and sensor arrays on fiber. One idea promulgated for fiber-based gigabit networks is the affordance to monitor large amounts of data in real time. Sensor networks installed around the library facility could help throttle energy consumption in real time, making the building more energy efficient and less costly to maintain. Such a facilities-oriented app would mean savings in the facilities budget.

  • Consider collaborations among libraries with fiber affordances. Libraries linked by fiber-based gigabit connections would be able to transfer large amounts of data in a fraction of the time it takes now. There are implications here for data curation infrastructure (http://smartech.gatech.edu/handle/1853/28513).

Another way to approach this problem is by asking: “What problem does gigabit-speed networking solve?” One of the problems with the current web is its de facto latency: your browser requests a page from a server, which then sends it back to your client. We’ve gotten so accustomed to this latency that we expect it, but what if pages didn’t have to be requested? What if servers didn’t have to send pages? What if a zero-latency web meant we needed a new architecture to take advantage of data possibilities?
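
As a small thought experiment in code, here is a sketch of the push model those questions imply, using the standard browser WebSocket API. The server URL and message format are invented, and this assumes a server that pushes updates on its own rather than waiting for requests:

// Open a persistent connection; after the handshake, either side can send at any time.
var socket = new WebSocket('ws://library.example.org/updates');

// The server pushes data whenever it has something new; the client never polls.
socket.onmessage = function (event) {
  var update = JSON.parse(event.data); // assumes the server sends JSON messages
  document.getElementById('feed').innerHTML += '<p>' + update.title + '</p>';
};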

Is your library poised to take advantage of increased data transfer? What apps do you want to get funding for?