Cybersecurity, Usability, Online Privacy, and Digital Surveillance

Cybersecurity is an interesting and important topic, one closely connected to those of online privacy and digital surveillance. Many of us know that it is difficult to keep things private on the Internet. The Internet was invented to share things with others quickly, and it excels at that job. Businesses that process transactions with customers and store the information online are responsible for keeping that information private. No one wants social security numbers, credit card information, medical history, or personal e-mails shared with the world. We expect and trust banks, online stores, and our doctor’s offices to keep our information safe and secure.

However, keeping private information safe and secure is a challenging task. We have all heard of security breaches at J.P. Morgan, Target, Sony, Anthem Blue Cross and Blue Shield, the Office of Personnel Management of the U.S. federal government, the University of Maryland at College Park, and Indiana University. Sometimes a data breach takes place when an institution fails to patch a hole in its network systems. Sometimes people fall for a phishing scam, or a virus on a user’s computer infects the target system. Other times, online companies compile customer data into personal profiles, which are then sold to data brokers and from there can pass into the hands of malicious hackers and criminals.

Image from Flickr – https://www.flickr.com/photos/topgold/4978430615

Cybersecurity vs. Usability

To prevent such data breaches, institutional IT staff are trained to protect their systems against vulnerabilities and intrusion attempts. Employees and end users are educated to be careful when dealing with institutional or customer data. There are also systematic measures that organizations can implement, such as two-factor authentication, stringent password requirements, and locking accounts after a certain number of failed login attempts.
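To make the account-lockout measure concrete, here is a minimal Python sketch of the idea. The threshold and cool-down values are invented for illustration, and a real system would store only salted password hashes, never plaintext passwords:

```python
# Toy sketch of an account-lockout policy: after a fixed number of
# failed login attempts, the account is locked for a cool-down period.
# MAX_ATTEMPTS and LOCKOUT_SECONDS are illustrative values, not taken
# from any real product; real systems also store password hashes only.
import time

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 15 * 60  # fifteen-minute cool-down

class Account:
    def __init__(self, password):
        self.password = password      # plaintext only for the toy example
        self.failed_attempts = 0
        self.locked_until = 0.0

    def try_login(self, password):
        now = time.time()
        if now < self.locked_until:
            return "locked"
        if password == self.password:
            self.failed_attempts = 0  # success resets the counter
            return "ok"
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_ATTEMPTS:
            self.locked_until = now + LOCKOUT_SECONDS
            self.failed_attempts = 0
            return "locked"
        return "wrong password"
```

Note that even this tiny sketch embodies the usability trade-off discussed here: a legitimate but forgetful user who mistypes five times is shut out just as surely as an attacker.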

While these measures strengthen an institution’s defense against cyberattacks, they may negatively affect the usability of the system, lowering users’ productivity. As a simple example, security measures like a CAPTCHA can cause an accessibility issue for people with disabilities.

Or imagine, as another example, that a university IT office concerned about the data security of cloud services starts requiring all faculty, students, and staff to use only cloud services that are SOC 2 Type II certified. SOC stands for “Service Organization Controls.” It consists of a series of standards that measure how well a given service organization keeps its information secure. For a business to be SOC 2 certified, it must demonstrate that it has sufficient policies and strategies in place to satisfactorily protect its clients’ data in five areas known as the “Trust Services Principles”: the security of the service provider’s system, the processing integrity of that system, the availability of the system, the privacy of the personal information that the service provider collects, retains, uses, discloses, and disposes of for its clients, and the confidentiality of the information that the service provider’s system processes or maintains for its clients. The SOC 2 Type II certification means that the business has maintained relevant security policies and procedures over a period of at least six months, and it is therefore a good indicator that the business will keep its clients’ sensitive data secure. Dropbox for Business is SOC 2 certified, but it costs money. The free version is not as secure, but many faculty, students, and staff in academia use it frequently for collaboration. If a university IT office simply bans people from using the free version of Dropbox without offering an alternative that is as easy to use, people will undoubtedly suffer.

Some of you may know that the USPS website does not provide a way to reset the password for users who have forgotten their usernames. They are instead asked to create a new account. If they remember the username but enter wrong answers to the two security questions more than twice, the system automatically locks their accounts for a certain period of time, and again users have to create a new account. Clearly, a system that does not allow a password reset for forgetful users is more secure than one that does. In reality, however, this security measure creates a huge usability issue, because average users do forget their passwords and the answers to the security questions that they set up themselves. It is not hard to guess how frustrated people will be when they realize that they entered a wrong mailing address for mail forwarding and are now unable to get back into the system to correct it, because they can remember neither their passwords nor the answers to their security questions.

To give an example related to libraries, a library may decide to block all international traffic to its licensed e-resources to prevent foreign hackers who have gotten hold of a legitimate user’s username and password from accessing those e-resources. This would certainly help libraries avoid a potential breach of licensing terms in advance and spare them from having to shut down compromised user accounts one by one as they are found. However, it would also make it impossible for legitimate users traveling outside of the country to access those e-resources, which many users would find unacceptable. Furthermore, malicious hackers would probably just use a proxy to make their IP addresses appear to be located in the U.S. anyway.
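As a rough illustration of this kind of IP-based blocking, the sketch below checks a client address against a static allowlist of network ranges using Python’s standard `ipaddress` module. The ranges shown are reserved documentation examples, not real “domestic” address blocks; an actual deployment would consult a GeoIP database at the firewall or proxy level, and, as noted above, a proxy defeats this kind of check anyway.

```python
# Illustrative sketch of IP-range allowlisting. The networks below are
# the RFC 5737 documentation ranges, standing in for "U.S. traffic";
# they are assumptions for the example, not real geolocation data.
import ipaddress

ALLOWED_NETWORKS = [
    ipaddress.ip_network("192.0.2.0/24"),     # example "domestic" range
    ipaddress.ip_network("198.51.100.0/24"),  # another example range
]

def is_allowed(ip_string):
    """Return True if the client IP falls inside an allowed range."""
    ip = ipaddress.ip_address(ip_string)
    return any(ip in net for net in ALLOWED_NETWORKS)
```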

What would users do if their organization required them to reset passwords on a weekly basis, both for their work computers and for the several other systems they use constantly for work? While this may strengthen the security of those systems, it is easy to see that having to reset all those passwords every week, and to keep track of them without forgetting or mixing them up, would be a nightmare. Most likely, users would start choosing less complicated passwords or even adopt a single password for all the different services. Some might stick to the same password every time the system requires a reset, unless the system detects the reuse and prevents them from continuing with the old one. Ill-thought-out cybersecurity measures can easily backfire.
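The reuse detection mentioned above can be sketched as a password-history check. This is an illustrative simplification (one salt per user and a short fixed history, which are assumptions made for the example) built only on Python’s standard-library PBKDF2:

```python
# Sketch of a password-history check that rejects reuse of recent
# passwords. HISTORY_LENGTH, the single per-user salt, and the
# iteration count are simplifications chosen for illustration.
import hashlib
import os

HISTORY_LENGTH = 5

def hash_password(password, salt):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class PasswordHistory:
    def __init__(self):
        self.salt = os.urandom(16)  # one salt per user, simplified
        self.history = []           # most recent digest last

    def set_password(self, new_password):
        digest = hash_password(new_password, self.salt)
        if digest in self.history:
            return False  # reuse of a recent password: rejected
        self.history.append(digest)
        self.history = self.history[-HISTORY_LENGTH:]  # keep last N
        return True
```

A check like this closes the “same password every week” loophole, but it does nothing about the deeper problem the paragraph describes: forced weekly rotation still pushes users toward weaker, more memorable passwords.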

Security is important, but users also want to be able to do their job without being bogged down by unwieldy cybersecurity measures. The more user-friendly and the simpler the cybersecurity guidelines are to follow, the more users will observe them, thereby making a network more secure. Users who face cumbersome and complicated security measures may ignore or try to bypass them, increasing security risks.


Cybersecurity vs. Privacy

Usability and productivity may be a small issue, however, compared to the risk of mass surveillance resulting from aggressive security measures. In 2013, the Guardian reported that the communication records of millions of people were being collected by the National Security Agency (NSA) in bulk, regardless of any suspicion of wrongdoing. A secret court order prohibited Verizon from disclosing the NSA’s information request. After a cyberattack against the University of California at Los Angeles, the University of California system installed a device capable of capturing, analyzing, and storing all network traffic to and from the campus for over 30 days. This security monitoring was implemented secretly, without consulting or notifying the faculty and others who would be subject to it. The San Francisco Chronicle reported that the IT staff who installed the system were given strict instructions not to reveal that it was taking place, and that selected committee members on the campus were told to keep the information to themselves.

The invasion of privacy and the lack of transparency in these network monitoring programs have caused great controversy. Such wide and indiscriminate monitoring programs must have a very good justification and offer clear answers to vital questions: what exactly will be collected, who will have access to the collected information, when and how the information will be used, what controls will be put in place to prevent the information from being used for unrelated purposes, and how the information will be disposed of.

We have recently seen another case in which security concerns conflicted with people’s right to privacy. In February 2016, the FBI asked Apple to create a backdoor application that would bypass the security measures in place in its iOS, because the FBI wanted to unlock an iPhone 5C recovered from one of the shooters in the San Bernardino shooting incident. Apple’s iOS secures users’ devices by permanently erasing all data after ten incorrect passcode entries, if users choose to activate this option in the iOS settings. The FBI’s request was met with strong opposition from Apple and others: such a backdoor application could easily be exploited for illegal purposes by black hat hackers, for unjustified privacy infringement by other capable parties, and even for authoritarian control by governments. Apple refused to comply with the request, and a court hearing was scheduled for March 22. The FBI, however, withdrew the request, saying that it had found a way to hack into the phone in question without Apple’s help. Now Apple has to figure out what the vulnerability in its iOS is if it wants its encryption mechanism to be foolproof. In the meantime, iOS users know that their data is no longer as secure as they once thought.

Around the same time, a Senate draft bill titled the “Compliance with Court Orders Act of 2016” proposed that people be required to comply with any authorized court order for data, and that if that data is “unintelligible” – meaning encrypted – it must be decrypted for the court. This bill is problematic because it would practically nullify the efficacy of any end-to-end encryption, which we use every day, from our iPhones to messaging services like WhatsApp and Signal.

Because security is essential to privacy, it is ironic that certain cybersecurity measures are used to greatly invade privacy rather than protect it. Because we do not always fully understand how the technology actually works or how it can be exploited for both good and bad purposes, we need to be careful about giving any party blanket permission to access, collect, and use our private data without clear understanding, oversight, and consent. As we share more and more information online, cyberattacks will only increase, and organizations and the government will struggle even more to balance privacy concerns with security issues.

Why Libraries Should Advocate for Online Privacy

The fact that people may no longer have privacy on the Web should concern libraries. Historically, libraries have been strong advocates of intellectual freedom, striving to keep patrons’ data safe from the unwanted eyes of the authorities. As librarians, we believe in people’s right to read, think, and speak freely and privately, as long as doing so does not harm others. The Library Freedom Project is an example that reflects this belief, held strongly within the library community. It educates librarians and their local communities about surveillance threats, privacy rights and law, and privacy-protecting technology tools that help safeguard digital freedom. It also helped the Kilton Public Library in Lebanon, New Hampshire, become the first library to operate a Tor exit relay, providing anonymity for patrons while they browse the Internet at the library.

New technologies have brought us the unprecedented convenience of collecting, storing, and sharing massive amounts of sensitive data online. But the ease with which such sensitive data can be exploited once it falls into the wrong hands has also created an unparalleled potential for invasion of privacy. While the majority of librarians take a very strong stance in favor of intellectual freedom and against censorship, it is often hard to discern the right stance on online privacy, particularly when it is pitted against cybersecurity. Some even argue that those who have nothing to hide do not need privacy at all.

However, privacy is not equivalent to hiding wrongdoing, nor do people keep certain things secret because those things are necessarily illegal or unethical. Being watched 24/7 will drive any person crazy, guilty of wrongdoing or not. Privacy gives us a safe space to form our thoughts and consider our actions on our own, without being subject to others’ eyes and judgments. Even in the absence of actual mass surveillance, the mere belief that one could be placed under surveillance at any moment is sufficient to trigger self-censorship; it negatively affects one’s thoughts, ideas, creativity, imagination, choices, and actions, making people more conformist and compliant. This is corroborated by a recent study from Oxford University, which provides empirical evidence that the mere existence of a surveillance state breeds fear and conformity and stifles free expression. Privacy is an essential part of being human, not some trivial condition that we can do without in the face of a greater concern. That is why many people under political dictatorships continue to choose death over life under mass surveillance and censorship in their fight for freedom and privacy.

The Electronic Frontier Foundation states that privacy means respect for individuals’ autonomy, anonymous speech, and the right to free association. We want to live as autonomous human beings free to speak our minds and think on our own. If part of a library’s mission is to contribute to helping people to become such autonomous human beings through learning and sharing knowledge with one another without having to worry about being observed and/or censored, libraries should advocate for people’s privacy both online and offline as well as in all forms of communication technologies and devices.


Doing Six Impossible Things Before Breakfast: An Approach to Keeping it User Centered

Keeping any large technical project user-centered is challenging at best. Add in something like an extremely tight timeline, and it becomes all too easy to dispense with it completely. Say, for instance, six months to migrate to a new integrated library system that combines your old ILS, your link resolver, many other tools, and a new discovery layer. I would argue, however, that it is precisely on a tight timeline like that that a major focus on user experience research can become a key component of your success. I am referring in this piece specifically to user experience on the web, but of course there are other aspects of user experience that go into such a project. While none of my observations about usability testing and user experience are new, I have realized from talking to others that they need help advocating for the importance of user research. As we turn to our hopes and goals for 2016, let’s all make a resolution to figure out a way to make better user experience research happen, even if it seems impossible.

  1. Selling the Need For User Testing

    When I worked on implementing a discovery layer at my job earlier this year, I had a team of 18 people from three campuses with varying levels of interest and experience in user testing. It was really important to us to end up with a product that would work for everyone at all levels, whether novice or experienced researcher, as well as for the library staff who would need to use the system on a daily basis. With so many people and such a tight timeline, building user testing into the schedule in the first place helped us frame our decisions as hypotheses to confirm or reject in the next round of testing. We tried to involve as many people as possible in the testing, though a core group with experience running the tests administered them. Doing a test as early as possible is a good way to convince others of the need for testing: people who had never seen a usability test before found it convincing immediately and were much more on board for future tests.

  2. Remembering Who Your Users Are

    Reference and instruction librarians are users too. We sometimes get so focused on reminding librarians that they are not the users that we don’t make things work for them–and they do need to use the catalog too. Librarians who work with students in the classroom and in research consultations on a daily basis have a great deal of insight into seemingly minor issues that may lead to major frustrations. Here’s an example. The desktop view of our discovery layer search box was about 320 pixels wide, which works fine–if you are typing in just one word. Yet we were “selling” the discovery layer as something that handled known-item searching well, which meant that much of a pasted-in citation wasn’t visible. The reference librarians who were doing this exact work knew this would be an issue. We expanded the search box so that more words are visible and it works better for known-item searching.

    The same goes for course reserves, interlibrary loan, and other staff who work with a discovery layer frequently, often under the added pressure of tight deadlines. If you can shave seconds off for them, that adds up to a huge amount over the course of the year, and it will potentially solve issues for other users as well. One example: the print view of a book record had very small text–the print stylesheet was set to print at 85% font size, which made it challenging to read. The reserves staff relied on this print view to complete their daily work with their student workers. For one student, the small print size created an accessibility issue, which led to inefficient manual workarounds. We increased the print stylesheet to greater than 100% font size, which made the printed page easily readable and fixed the accessibility issue for this specific use case. I suspect many other people benefit from this as well.

  3. Divide the Work

    I firmly believe that everyone who is interested in user experience on the web should get some hands-on experience with it. That said, not everyone needs to do the hands-on work, and with a large project it is important that people focus on their core reason for being on the team. Dividing the group into overlapping teams that worked on data testing, interface testing, and user education and outreach helped us see the big picture without overwhelming everyone (a little overwhelm is going to happen no matter what). These groups worked separately much of the time for deep dives into specific issues, but kept each other informed across the board. For instance, the data group might identify a potential issue, for which the interface group would design a test scenario. If testing indicated a change, the user education group could be made aware of the implications for outreach.

  4. A Quick Timeline is Your Friend

    Getting a new tool out with only a few months’ turnaround time is certainly challenging, but it forces you to forget about perfection and get features done. We got our hands on the discovery layer on a Friday and were doing tests the following Tuesday, with additional tests scheduled for two weeks after the first look. This meant that our first tests were of something very rough, but they gave us a big list of items to fix in the next two weeks before the next test (or to put on hold if lower priority). We ended up taking two months off from live usability testing in the middle of the process to focus on development and other types of testing (such as with trusted beta testers). But that early set of tests was crucial in setting the agenda and showing the importance of testing. We ultimately did five rounds of testing, four of which happened before the discovery layer went live, and one a few months after.

  5. Think on the Long Scale

    The vendor or the community of developers is presumably not going to stop working on the product, and neither should you. For this reason, it is helpful to make it clear who is doing the work and ensure that it is written into committee charges, job descriptions, or other appropriate documentation. Maintain a list of long-term goals, and in those short timescales figure out just one or two changes you could make. The academic year affords many peaks and lulls, and those lulls can be great times to make minor changes. Regular usability testing ensures that these changes are positive, as well as uncovering new needs as tools and needs change.

  6. Be Iterative

    Iteration is the way to ensure that your long timescale stays manageable. Work never really stops, but that’s ok. You need a job, right? Back to that idea of a short timeline–borrow from the Agile method and think in timescales of two weeks to a month. Have the end goal in mind, but know that getting there will happen in tiny pieces. This does require some faith that all the crucial pieces will happen, but as long as someone is keeping an eye on those (in our case, the vendor helped a lot with this), the pressure is off on being “finished”. If a test shows that something is broken that really needs to work, that can become a high priority, and other desired features can move to a future cycle. Iteration helps you stay on track and get small pieces done regularly.

Conclusion

I hope I’ve made the case for why you need to have a user focus in any project, particularly a large and complex one. Whether you’re a reference librarian, project manager, web developer or cataloger, you have a responsibility to ensure the end result is usable, useful, and something people actually want to use. And no matter how tight your timeline, stick to making sure the process is user centered, and you’ll be amazed at how many impossible things you accomplished.


Collaborative UX Testing: Cardigans Included

Usability Testing

Understanding and responding to user needs has always been at the heart of librarianship, although in recent years this has taken a more intentional form through the development of library user experience positions and departments. Such positions are a mere fantasy, though, for many smaller libraries, where librarian teams of three or four run the entire show. For the twenty-three member libraries of the Private Academic Library Network of Indiana (PALNI) consortium this is regularly the case, with each school having an average of four librarians on staff. However, by leveraging existing collaborative relationships, utilizing recent changes in library systems and consortium staffing, and (of course) picking up a few new cardigans, PALNI has begun studying library user experience at scale with a collaborative usability testing model.

With four library testing locations spread over 200 miles in Indiana, we used multiple facilitators to conduct testing of the consortial discovery product, OCLC’s WorldCat Discovery. Using WebEx to record each screen and broadcast the sessions to a library staff observation room, we had 30 participants complete three general tasks with multiple parts, helping us assess user needs and participant behavior.

There were clear advantages of collaborative testing over the traditional, siloed approach, shown most obviously in the amount and type of data we received. The most important opportunity was the ability to test different setups of the same product. This comparative data led to conclusive setup recommendations and distinguished problems unique to particular institutions from general user problems. Testing at multiple schools also provided a lot more data, which reduced the likelihood of testing only outliers.

The second major advantage of collaborative testing was the ability to work as a team. From a logistical standpoint, working as a team allowed us to spread the testing out, keeping it fresh in our minds and giving us enough time in between to fix scripts and materials. It also allowed us to test before and after technical upgrades. From a relational perspective, sharing the work and offering continual support reduced burnout during the testing. When it came to analyzing the data, different people brought different skill sets: our particular team consisted of a graphic/interface designer, a sympathetic ear, and a master editor, all of whom played important roles in analyzing the data and writing the report. Simply put, it was an enjoyable experience that produced valuable, comparative data – one that could not have happened if the libraries had taken a siloed approach.

When we were designing our test, we met with Arnold Arcolio, a user researcher in OCLC’s User Experience and Information Architecture Group. He gave us many great pieces of advice; some we found to work well in our testing, while others we rejected. The most valuable piece of advice was to start with the end in mind: make sure you have clear objectives for what data you are trying to obtain. If you leave your objectives open-ended, you will spend the rest of your life reviewing the data and learning interesting things about your users every time.

Here is what he recommended, and what we decided:

Test at least two users of the same type, to help avoid outliers. For us, that meant testing at least two first-year students and two seniors.

Test users on their own devices. We found this impractical for our purposes, as all devices used for testing had to have web conferencing software that allowed us to record users’ screens.

Have the participants read the tasks out loud. A technique that we used and recommend as well.

Use low-tech solutions for the testing, rather than expensive software and eye-tracking equipment. This was a huge relief to PALNI’s executive director, who manages our budget.

Test participants where they would normally do their research: in dorm rooms, faculty offices, etc. We did not take this recommendation, due to time and privacy concerns.

He was very concerned about our use of multiple facilitators, so we standardized our testing as much as possible. First, we chose uniforms for our facilitators. Being librarians, the obvious choice was cardigans. We ordered matching, logoed cardigans from Lands’ End and wore them to conduct our testing, which allowed us to look as similar as possible and avoid skewing participants’ impressions. We chose blue cardigans because color theory suggests that blue encourages participants to trust the facilitator while feeling calm and confident. We also worked together to create a very detailed script that was used by each facilitator for each test.

Our next round of usability testing will incorporate many of the same recommendations from our usability expert, discussed above, with a few additions and changes. This fall, we will include a mobile device portion using a camera mount (Mr. Tappy, see http://www.mrtappy.com/) for screen recording, test different tasks, and work with different libraries. Our libraries’ staff also recommended making the report more action-oriented, with best setup practices and highlighted instructional needs. We are also developing a list of common solutions to participant problems, such as when to redirect or when to correct misspellings. Finally, as much as we love the cardigans, we will be wearing matching logoed polos underneath for those test rooms that mirror the climate of the Sahara Desert.

We have enjoyed our usability experiences immensely–they are a great chance to visit with library staff, faculty, and students from other institutions in our consortium. Working collaboratively proved to be a success in our consortium, where smaller libraries, short staffing, and minimal resources would otherwise have made it impossible to conduct large-scale usability testing. Plus, we welcome having another cardigan in the wardrobe.

More detailed information about our Spring 2015 study can be found in our report, “PALNI WorldCat Discovery Usability Report.”

About our guest authors:

Eric Bradley is Head of Instruction and Reference at Goshen College and an Information Fluency Coordinator for PALNI.  He has been at Goshen since 2013.  He does not moonlight as a Mixed Martial Arts fighter or Los Angeles studio singer.

Ruth Szpunar is an Instruction and Reference Librarian at DePauw University and an Information Fluency Coordinator for PALNI. She has been at DePauw since 2005. In her spare time she can be found munching on chocolate or raiding the aisles at the Container Store.

Megan West has been the Digital Communications Manager at PALNI since 2011. She specializes in graphic design, user experience, project management and has a strange addiction to colored pencils.


This Is How I Work (Nadaleen Tempelman-Kluit)

Editor’s Note: This post is part of ACRL TechConnect’s series by our regular and guest authors about The Setup of our work.

 

Nadaleen Tempelman-Kluit @nadaleen

Location: New York, NY

Current Gig: Head, User Experience (UX), New York University Libraries

Current Mobile Device: iPhone 6

Current Computer:

Work: MacBook Pro 13” and a 27-inch Apple Thunderbolt Display

An old Dell PC that I use solely to print and to access our networked resources

Home:

I carry my laptop to and from work with me and have an old MacBook Pro at home.

Current Tablet: First generation iPad, supplied by work

One word that best describes how you work: has anyone said frenetic yet?

What apps/software/tools can’t you live without?

Communication / Workflow

Slack is the UX department’s communication tool, in which all of our internal communication takes place, including instant messaging. We create topic channels where we add links, tools, and thoughts, and we get notified when people add items. We rarely use email for internal communication.

Boomerang for Gmail – I write a lot of emails early in the morning, so I schedule them to be sent at different times of the day without forgetting.

Pivotal Tracker – a user-story-based project planning tool built on agile software development methods. We start with user flows, break them into bite-size user stories in Pivotal, and then point them for development.

Google Drive

Gmail

Google Hangouts – We work closely with our Abu Dhabi and Shanghai campus libraries, so we do a lot of early morning and late night meetings using Google Hangouts (or GoToMeeting, below) to include everyone.

Wireframing, IA, Mockups

Sketch: A great lightweight design app

OmniGraffle: A more heavy-duty tool for wireframing, IA work, mockups, etc. Compatible with a ton of stencil libraries, including the great Konigi stencils and Google’s material design icons. Great for interactive interface demos, and for user flows and personas.

Adobe Creative Cloud

Post-it notes, graph paper, whiteboard, dry-erase markers, Sharpies, flip boards

Tools for User Centered Testing / Methods 

GoToMeeting – to broadcast formal usability testing to observers in another room, so they can take notes, view the testing in real time, and ask virtual follow-up questions for the facilitator to pose to participants.

Crazy Egg – a heat-mapping and A/B testing tool which, when coupled with analytics, really helps us get a picture of where users are going on our site.

Silverback – a screen-capturing usability testing app.

Post-it Plus – We do a lot of affinity grouping exercises and interface sketches using Post-it notes, so this app is super cool and handy.

OptimalSort – online card sorting software.

Personas – We use personas to think through our user flows when working through a process, service, or interface. We then use them to create more granular user stories in Pivotal Tracker (above).

What’s your workspace like?

I’m on the mezzanine of Bobst Library which is right across from Washington Square Park. I have a pretty big office with a window overlooking the walkway between Bobst and the Stern School of Business.

I have a huge old subway map on one wall with an original heavy wood frame, and everyone likes looking at old subway lines, etc. I also have a map sheet of the mountain I’m named after. Otherwise, it’s all white board and I’ve added our personas to the wall as well so I can think through user stories by quickly scanning and selecting a relevant persona.

I’m in an area where many of my colleagues’ mailboxes are, so people stop by a lot. I close my door when I need to concentrate, and on Fridays we try to work collaboratively in a basement conference room with a huge whiteboard.

I have a heavy wooden L-shaped desk, which I am trying to replace with a standing desk.

Every morning I go to Oren’s, a great coffee shop nearby, with the same colleague and friend, and we usually do “loops” around Washington Square Park to problem solve and give work advice. It’s a great way to start the day.

What’s your best time-saving trick?

Informal (but not happenstance) communication saves so much time in the long run and helps alleviate potential issues that can arise when people aren’t communicating. Though it takes a few minutes, I try to touch base with people regularly.

What’s your favorite to-do list manager?

My whiteboard, supplemented by Stickies (Mac) and my huge flip chart notepad with my wish list on it. Completed items get transferred to a “leaderboard.”

Besides your phone and computer, what gadget can’t you live without?

Headphones

What everyday thing are you better at than everyone else?

I don’t think I do things better than other people, but I think my everyday strengths include: encouraging and mentoring, thinking up ideas and potential solutions, getting excited about other people’s ideas, coming at issues creatively, and dusting myself off.

What are you currently reading?

I listen to audiobooks and podcasts on my bike commute. Among my favorites:

In print, I’m currently reading:

What do you listen to while at work?

Classical is the only type of music I can play while working and still be able to (mostly) concentrate. So I listen to the masters, like Bach, Mozart, and Tchaikovsky.

When we work collaboratively on creative things that don’t require earnest concentration I defer to one of the team to pick the playlist. Otherwise, I’d always pick Josh Ritter.

Are you more of an introvert or an extrovert?

Mostly an introvert who fakes being an extrovert at work, but as other authors have said (Eric, Nicholas), it’s very dependent on the situation and the company.

What’s your sleep routine like?

Early to bed, early to rise. I get up between 5 and 6 and go to bed around 10.

Fill in the blank: I’d love to see _________ answer these same questions.

@Morville (Peter Morville)

@leahbuley (Leah Buley)

What’s the best advice you’ve ever received?

Show up


Bootstrap Responsibly

Bootstrap is the most popular front-end framework used for websites. An estimate by meanpath several months ago put it firmly behind 1% of the web – for good reason: Bootstrap makes it relatively painless to puzzle together a pretty awesome plug-and-play, component-rich site. Its modularity is its key feature; it was developed so Twitter could rapidly spin up internal microsites and dashboards.

Oh, and it’s responsive. This is kind of a thing. There’s not a library conference today that doesn’t showcase at least one talk about responsive web design. There’s a book, countless webinars, courses, whole blogs dedicated to it (ahem), and more. The pressure for libraries to have responsive, usable websites can seem to come more from the likes of us than from the patron base itself, but don’t let that discredit it. The trend is clear, and it is only a matter of time before our libraries have their mobile moment.

Library websites that aren’t responsive feel dated, and more importantly they are missing an opportunity to reach a bevy of mobile-only users that in 2012 already made up more than a quarter of all web traffic. Library redesigns are often quickly pulled together in a rush to meet the growing demand from stakeholders, pressure from the library community, and users. The sprint makes the allure of frameworks like Bootstrap that much more appealing, but Bootstrapped library websites often suffer the cruelest of responsive ironies:

They’re not mobile-friendly at all.

Assumptions that Frameworks Make

Let’s take a step back and consider whether using a framework is the right choice at all. A front-end framework like Bootstrap is a Lego set with all the pieces conveniently packed. It comes with a series of templates, a blown-out stylesheet, and scripts tuned to the environment, letting users essentially copy and paste fairly complex web machinery into being: carousels, tabs, responsive dropdown menus, all sorts of buttons, alerts for every occasion, gorgeous galleries, and very smart decisions made by a robust team of developers far more capable than we are.

Except for the specific layout and the content, every Bootstrapped site is essentially a complete organism years in the making. This is also the reason that designers sometimes scoff, joking that these sites look the same. Decked-out frameworks are ideal for rapid prototyping with a limited timescale and budget because the design decisions have by and large already been made. They assume you plan to use the framework as-is, and they don’t make customization easy.

In fact, Bootstrap’s guide points out that any customization is better suited to be cosmetic than a complete overhaul. The trade-off is that Bootstrap is otherwise complete. It is tried, true, usable, accessible out of the box, and only waiting for your content.

Not all Responsive Design is Created Equal

It is still common to hear that the selling point for a swanky new site is that it is “responsive down to mobile.” The phrase probably rings a bell. It describes a website that collapses its grid as the width of the browser shrinks until its layout is appropriate for whatever screen users are carrying around. This is kind of the point – and cool, as any of us with a browser-resizing obsession could tell you.

Today, “responsive down to mobile” has a lot of baggage. Let me explain: it represents a telling and harrowing ideology, that for these projects mobile is the afterthought, when mobile optimization should be the most important part. Library design committees don’t actually say this aloud or conceive of it when researching options, but it is implicit. When mobile is an afterthought, the committee presumes users are more likely to visit from a laptop or desktop than from a phone (or refrigerator). This is not true.

See, a website, responsive or not, originally laid out for a 1366×768 desktop monitor in the designer’s office wistfully depends on visitors having that same browsing context. If it looks good in-office and loads fast, then looking good and loading fast must be the default. “Responsive down to mobile” is divorced from the reality that a similarly wide screen is not the common denominator. As such, responsive-down-to-mobile sites have a superficial layout optimized for the developers, not the users.

In a recent talk at An Event Apart, a conference held in Atlanta, Georgia, Mat Marquis stated that 72% of responsive websites send the same assets to mobile devices as they do to desktops, and this is largely contributing to the web feeling slower. While setting img { width: 100%; } will scale media to fit snugly to its container, it still sends the same high-resolution image to a 320px-wide phone as to a 720px-wide tablet. A 1.6mb page loads very differently on a phone than on the machine it was designed on. The digital divide with which librarians are so familiar is certainly nowhere near closed, but while internet access is increasingly available, its ubiquity doesn’t translate to speed:

  1. 50% of users ages 12-29 are “mostly mobile” users, and you know what wireless connections are like,
  2. even so, the weight of the average website (currently 1.6mb) is increasing.
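One way to stop shipping that same high-resolution image to every screen is the srcset attribute, which lets the browser choose the smallest adequate file. A minimal sketch – the file names and breakpoints below are made up for illustration, not taken from any particular site:

```html
<!-- the browser picks the smallest source that satisfies the viewport -->
<img src="cover-320.jpg"
     srcset="cover-320.jpg 320w, cover-720.jpg 720w, cover-1440.jpg 1440w"
     sizes="(min-width: 48em) 50vw, 100vw"
     alt="Book cover">
```

A 320px-wide phone grabs the 320w file instead of the desktop-sized one, which is exactly the weight savings the paragraph above is after.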

Last December, analysis of data from pagespeed quantiles during an HTTP Archive crawl tried to determine how fast the web was getting slower. The fastest sites are slowing at a greater rate than the big bloated sites, likely because the assets we send–like increasingly high resolution images to compensate for increasing pixel density in our devices–are getting bigger.

The havoc this wreaks on the load times of “mobile friendly” responsive websites is detrimental. Why? Well, we know that

  • users expect a mobile website to load as fast on their phone as it does on a desktop,
  • three-quarters of users will give up on a website if it takes longer than 4 seconds to load,
  • the optimistic average load time for just a 700kb website on 3G is more like 10-12 seconds

eep O_o.

A Better Responsive Design

So there was a big change to Bootstrap in August 2013, when it was restructured from a “responsive down to mobile” framework to “mobile-first.” It was also given a simpler, flat design, which has a 100% faster paint time – but I digress. “Mobile-first” is key. Emblazon this over the door of the library web committee. Strike “responsive down to mobile.” Suppress the record.

Technically, “mobile-first” describes the structure of the stylesheet using CSS3 Media Queries, which determine when certain styles are rendered by the browser.

.example {
  /* base styles: these load first, on every device */
  background-color: #fff;
}

@media screen and (min-width: 48em) {
  .example {
    /* these apply once the screen is at least 48em wide */
    background-image: url("decorative-banner.jpg");
  }
}

The most basic styles are loaded first. As more space becomes available, designers can assume (sort of) that the user’s device has a little extra juice and that their connection may be better, so they start adding pizzazz. One might decide that, hey, most devices narrower than 48em (roughly 768px with a base font size of 16px) are probably touch-only, so let’s not load any hover effects until the screen is wider.
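That decision translates directly into a media query. A sketch, with a hypothetical class name:

```css
/* no hover styles below 48em, where devices are probably touch-only */
@media screen and (min-width: 48em) {
  .nav-link:hover {
    background-color: #eee;
    text-decoration: underline;
  }
}
```

Narrow, likely-touch screens never evaluate the hover rule at all; wider screens get the pizzazz.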

Nirvana

In a literal sense, mobile-first is asset management. More than that, mobile-first is a philosophical undercurrent, an implicit zen of user-centric thinking that aligns with libraries’ missions to be accessible to all patrons. Designing mobile-first means designing to the lowest common denominator: functional and fast on a cracked Blackberry at peak time; functional and fast on a ten-year-old machine in the bayou, a browser with fourteen malware toolbars trudging through the mire of a dial-up connection; functional and fast [and beautiful?] on a 23″ iMac. Thinking about the mobile layout first makes design committees more selective about the content squeezed onto the front page, which makes committees more concerned with the quality of that content.

The Point

This is the important statement that Bootstrap now makes. It expects the design committee to think mobile-first. It comes with all the components you could want, but it wants you to trim the fat.

Future Friendly Bootstrapping

This is what you get in the stock Bootstrap:

  • buttons, tables, forms, icons, etc. (97kb)
  • a theme (20kb)
  • javascripts (30kb)
  • oh, and jQuery (94kb)

That’s almost 250kb of website. This is like a browser eating a brick of Mackinac Island Fudge – and this high-calorie bloat doesn’t include images. Consider that if the average load time for a 700kb page is 10-12 seconds on a phone, more than a third of that time with out-of-the-box Bootstrap is spent loading just these assets.

“While it’s not totally deal-breaking, 100kb is 5x as much CSS as an average site should have, as well as 15%-20% of what all the assets on an average page should weigh.” – Josh Broton

To put this in context, I like to fall back on Ilya Grigorik’s example comparing load time to user reaction in his talk “Breaking the 1000ms Time to Glass Mobile Barrier.” If the site loads in just 0-100 milliseconds, it feels instant to the user. By 100-300ms, the site already begins to feel sluggish. At 300-1000ms, uh – is the machine working? After 1 second there is a mental context switch, which means the user is impatient, distracted, or consciously aware of the load time. After 10 seconds, the user gives up.

By choosing not to pare down, your Bootstrapped library website starts off on the wrong foot.

The Temptation to Widgetize

Even though Bootstrap provides modals, tabs, carousels, autocomplete, and other modules, this doesn’t mean a website needs to use them. Bootstrap lets you tailor which jQuery plugins are included in the final script. The hardest part of any redesign is to let quality content determine the tools, not to let the ability to tabularize or scrollspy become an excuse to implement them. Oh, don’t Google those. I’ll touch on tabs and scrollspy in a few minutes.

I am going to be super presumptuous now and walk through the total Bootstrap package, then make recommendations for lightening the load.

Transitions

Transitions.js is a fairly lightweight CSS transition polyfill. What this means is that the script checks to see if your user’s browser supports CSS Transitions, and if it doesn’t then it simulates those transitions with javascript. For instance, CSS transitions often handle the smooth, uh, transition between colors when you hover over a button. They are also a little more than just pizzazz. In a recent article, Rachel Nabors shows how transition and animation increase the usability of the site by guiding the eye.

With that said, CSS Transitions have pretty good browser support and they probably aren’t crucial to the functionality of the library website on IE9.

Recommendation: Don’t Include.

Modals

“Modals” are popup windows. There are plenty of neat things you can do with them. However, modals are a pain to design consistently for every browser. Let Bootstrap do that heavy lifting for you.

Recommendation: Include

Dropdown

It’s hard for a library website design committee to conclude without a lot of links in your menu bar. Dropdown menus are tricky to code, and Bootstrap does a really nice job of keeping them a consistent and responsive experience.

Recommendation: Include

Scrollspy

If you have a fixed sidebar or menu that follows the user as they read, scrollspy.js can highlight the section of that menu you are currently viewing. This is useful if your site has a lot of long-form articles, or if it is a one-page app that scrolls forever. I’m not sure this describes many library websites, but even if it does, you probably want more functionality than Scrollspy offers. I recommend jQuery-Waypoints – but only if you are going to do something really cool with it.

Recommendation: Don’t Include

Tabs

Tabs are a good way to break-up a lot of content without actually putting it on another page. A lot of libraries use some kind of tab widget to handle the different search options. If you are writing guides or tutorials, tabs could be a nice way to display the text.

Recommendation: Include

Tooltips

Tooltips are descriptive popup bubbles for a section, option, or icon requiring more explanation. Tooltips.js helps handle the predictable positioning of the tooltip across browsers. With that said, I don’t think tooltips are that engaging; they’re sometimes appropriate, but you definitely used to see more of them in the past. Your library’s time is better spent de-jargoning any content that would warrant a tooltip. Need a tooltip? Why not just make whatever needs the tooltip more obvious O_o?

Recommendation: Don’t Include

Popover

Even fancier tooltips.

Recommendation: Don’t Include

Alerts

Alerts.js lets your users dismiss alerts that you might put in the header of your website. It’s always a good idea to give users some kind of control over these things. Better they read and dismiss than get frustrated from the clutter.

Recommendation: Include

Collapse

The collapse plugin allows for accordion-style sections for content distributed similarly to how you might use tabs. The ease-in-ease-out animation can trigger motion sickness and other aaarrghs among users with vestibular disorders. You could just use tabs.

Recommendation: Don’t Include

Button

Button.js gives a little extra jolt to Bootstrap’s buttons, allowing them to communicate an action or state. For example, imagine you fill out a reference form and click “submit.” Button.js will put a little loader icon in the button itself and change the text to “sending ….” This way, users are told that the process is running, and maybe they won’t feel compelled to click and click and click until the page refreshes. This is a good thing.

Recommendation: Include

Carousel

Carousels are among the most popular design elements on the web. A carousel lets a website slideshow content like upcoming events or new material. Carousels exist because design committees must be appeased. There are all sorts of reasons why you probably shouldn’t put one on your website: they are largely inaccessible, have low engagement, are slooooow, and kind of imply that libraries hate their patrons.

Recommendation: Don’t Include.

Affix

I’m not exactly sure what this does. I think it’s a fixed-menu thing. You probably don’t need this. You can use CSS.

Recommendation: Don’t Include
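Taken together, the recommendations above boil down to loading only a handful of plugin files. A sketch, assuming the Bootstrap 3 source layout where each plugin ships as its own file – the paths below are illustrative, so check them against your own copy:

```html
<script src="js/jquery.min.js"></script>
<!-- only the recommended plugins; transition.js, scrollspy.js, tooltip.js,
     popover.js, collapse.js, carousel.js, and affix.js are left out -->
<script src="js/modal.js"></script>
<script src="js/dropdown.js"></script>
<script src="js/tab.js"></script>
<script src="js/alert.js"></script>
<script src="js/button.js"></script>
```

The same trimming can be done through Bootstrap’s customize-and-download page, which concatenates just the plugins you tick.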

Now, Don’t You Feel Better?

Just compare the bootstrap.js and bootstrap.min.js files between out-of-the-box Bootstrap and a build tailored to the specs above. Even without considering the differences in the CSS or the weight of the images not included in a carousel (not to mention the unquantifiable amount of pain you would have inflicted), the numbers are telling:

File               Before   After
bootstrap.js       54kb     19kb
bootstrap.min.js   29kb     10kb

So, Bootstrap Responsibly

There is more to say. When bouncing this topic around Twitter a while ago, Jeremy Prevost pointed out that Bootstrap’s minified assets can be gzipped down to about 20kb total. This is the right way to serve assets from any framework. It requires an Apache config or .htaccess rule. Here is the .htaccess file used in HTML5 Boilerplate. You’ll find it well commented and modular: go ahead and just copy and paste the parts you need. You can eke out even more performance by “lazy loading” scripts, but that is a little out of the scope of this post.
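The core of that rule boils down to something like this mod_deflate fragment – a minimal sketch; the actual HTML5 Boilerplate file covers many more MIME types and edge cases:

```apacheconf
# compress text-based assets on the fly before they cross the wire
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
```

Text formats like CSS and JavaScript compress extremely well, which is how 250kb of framework shrinks toward that 20kb figure.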

Here’s the thing: when we talk about having good library websites, we’re mostly talking about the look. This is the wrong discussion. Web designs driven by anything but the content they already have make grasping assumptions about how slick it would look to have this killer carousel, these accordions, nifty tooltips, and of course a squishy responsive design. Subsequently, these responsive sites miss the point: if anything, they’re mobile-unfriendly.

Much of the time, a responsive library website is used as a marker that such-and-such site is credible and not irrelevant, but as such the website reflects a lack of purpose (e.g., “this website needs to increase library-card registration”). A superficial understanding of responsive web design and easy-to-grab frameworks implies that the patron is the lowest priority.

 

About Our Guest Author :

Michael Schofield is a front-end librarian in south Florida, where it is hot and rainy – always. He tries to do neat things there. You can hear him talk design and user experience for libraries on LibUX.


Redesigning the Item Record Summary View in a Library Catalog and a Discovery Interface

A. Oh, the Library Catalog

Almost all librarians have a love-hate relationship with their library catalog (OPAC), the tool patrons use to find library materials. Interestingly enough, I hear a lot more complaints about the library catalog from librarians than from patrons. Sometimes it is about the catalog missing certain information that should be there for patrons. But many other times, it’s about how crowded the search results display looks. We all want a clean-looking, easy-to-navigate, and efficient-to-use library catalog. But of course, it is much easier to complain than to come up with a viable alternative.

Aaron Schmidt has recently put forth an alternative design for a library item record. In his blog post, he suggests that a library catalog shift its focus from the bibliographic information (or metadata, if the item is not a book) of a library item to the tasks a patron performs in relation to it, so that the catalog functions more as “a tool that prioritizes helping people accomplish their tasks, whereby bibliographic data exists quietly in the background and is exposed only when useful.” This is a great point. Throwing all the information at a user at once only overwhelms her/him. Schmidt’s sketch provides a good starting point for rethinking how to design the library catalog’s search results display.

Screen Shot 2013-10-09 at 1.34.08 PM

From the blog post, “Catalog Design” by Aaron Schmidt

B. Thinking about Alternative Display Design

The example above is, of course, too simple to apply to the library catalog of an academic library straight away. For a typical academic library patron to determine whether s/he wants to check out or reserve the item, s/he is likely to need a little more information than the book title, the author, and the book image. For example, for students who are looking for textbooks, the edition as well as the year of publication is important. But I take it that Schmidt’s point was to encourage more librarians to think about alternative designs for the library catalog rather than simply comparing what is available and picking what seems best among those.

Screen Shot 2013-10-09 at 1.44.36 PM

Florida International University Library Catalog – Discovery layer, Mango, provided by Florida Virtual Campus

Granted, there may be limitations in how much we can customize the search results display of a library catalog. But that is no reason to stop thinking about what the optimal display design for library catalog search results would be. Sketching alternatives can in itself be a good exercise in evaluating the usability of an information system, even if not all of your design can be implemented.

Furthermore, more and more libraries are implementing a discovery layer over their library catalogs, which provides much more room to customize the display of search results than the traditional catalog. Open-source discovery systems such as Blacklight or VuFind provide great flexibility in customizing the search results display. Even proprietary discovery products such as Primo, EDS, and Summon offer libraries a degree of customization.

Below, I will discuss some principles to follow in sketching alternative designs for search results in a library catalog, present some of my own sketches, and show other examples implemented by other libraries or websites.

C. Principles

So, if we want to improve the item record summary display to be more user-friendly, where can we start and what kind of principles should we follow? These are the principles that I followed in coming up with my own design:

  • De-clutter.
  • Reveal just enough information that is essential to determine the next action.
  • Highlight the next action.
  • Shorten texts.

These are not new principles. They are widely discussed and followed by many web designers, including librarians who participate in their libraries’ website redesigns. But we rarely apply them to the library catalog, because we think that the catalog is somehow beyond our control. This is not necessarily the case, however. Many libraries implement discovery layers to give a completely different and improved look from that of their ILSes’ default display.

Creating a satisfactory design on one’s own, instead of simply pointing out what doesn’t work or look good in existing designs, is surprisingly hard but also a refreshing challenge. It also brings about a positive shift of focus in thinking about a library catalog, from “What is the problem in the catalog?” to “What can we change to solve that problem?”

Below I will show my own sketches for an item record summary view for the library catalog search results. These are clearly a combination of many other designs that I found inspiring in other library catalogs. (I will provide the source of those elements later in this post.) I tried to mix and revise them so that the result would follow those four principles above as closely as possible. Check them out and also try creating your own sketches. (I used Photoshop for creating my sketches.)

D. My Own Sketches

Here is the basic book record summary view. What I tried to do here is give just enough information for the next action but no more: title, author, type, year, publisher, and the number of library copies and holds. The next action for a patron is to check the item out. Undecided patrons, on the other hand, will click the title to see the detailed item record, or have that record texted, printed, e-mailed, or used in other ways.

(1) A book item record

Screen Shot 2013-10-09 at 12.46.38 PM

This is a record of a book that has a copy available to check out. Only when a patron decides to check out the item is the next set of information relevant to that action – the item location and the call number – shown.

(2) With the check-out button clicked

check out box open

If no copy is available for check-out, the best way to display the item is to signal that check-out is not possible and to highlight an alternative action. You can either do this by graying out the check-out button or by hiding the button itself.

Many assume that adding more information would automatically increase the usability of a website. While there are cases in which this could be true, often a better option is to reveal information only when it is relevant.

I decided to gray out the check-out button when there is no available copy and to display the reserve button instead, so that patrons can place a hold. Information about how many copies the library has and how many holds have been placed (“1 hold / 1 copy”) helps a patron decide whether to reserve the book.

(3) A book item record when check-out is not available

Screen Shot 2013-10-09 at 12.34.54 PM
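In markup, that grayed-out state might be sketched like this – the class names and structure are hypothetical, not taken from the actual sketch:

```html
<!-- check-out disabled; the hold action becomes the highlighted next step -->
<h3><a href="/record/123">Example Book Title</a></h3>
<span class="copies">1 hold / 1 copy</span>
<button class="btn-checkout" disabled>Check Out</button>
<button class="btn-hold">Place Hold</button>
```

The disabled attribute both grays the button visually in most browsers and prevents the impossible action, while the hold button remains the one live control.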

I also sketched two other records for e-books: one without the cover image and one with it. Since the appropriate action in this case is reading online, a different button is shown. You may include the ‘Requires Login’ text or simply omit it, because most patrons will understand that they have to log in to read a library e-book, and the Read Online button will itself prompt a login once clicked anyway.

(4) An e-book item record without a book cover

Screen Shot 2013-10-09 at 12.35.54 PM

(5) An e-book item record with a book cover

Screen Shot 2013-10-09 at 12.48.33 PM

(6) When the ‘Read Online’ button is clicked, an e-book item record with multiple links/providers

When there are multiple options for one electronic resource, those options can be presented in a similar way in which multiple copies of a physical book are shown.

Screen Shot 2013-10-09 at 12.35.22 PM

(6) A downloadable e-book item record

For a downloadable resource, changing the name of the button to ‘Download’ is much more informative.

Screen Shot 2013-10-09 at 12.35.13 PM

(7) An e-journal item record

Screen Shot 2013-10-09 at 12.47.31 PM

(7) When the ‘Read Online’ button is clicked, an e-journal item record with multiple links/providers

Screen Shot 2013-10-09 at 12.41.56 PM

E. Inspirations

Needless to say, I did not come up with my sketches from scratch. Here are the library catalogs whose item record summary view inspired me.

torontopublic

Toronto Public Library’s catalog has an excellent item record summary view, which I used as a base for my own sketches. It provides just enough information for the summary view. The title is hyperlinked to the detailed item record, and the summary view displays the material type and the year in bold for emphasis. The big green button also clearly shows the next action to take. It also does away with unnecessary labels that are common in library catalogs, such as ‘Author:’, ‘Published:’, ‘Location:’, and ‘Link:’.

User Experience Designer Ryan Feely, who worked on Toronto Public Library’s catalog search interface, pointed out the difference between a link and an action in his 2009 presentation “Toronto Public Library Website User Experience Results and Recommendations.” Actions need to be highlighted as a button or in some similar design to stand out to users (slide 65). And ideally, only the actions available for a given item should be displayed.

Another good point that Feely makes (slide 24) is that an icon is often the center of attention, so different icons should be used to signify different types of materials, such as a DVD or an e-journal. Below are the icons that Toronto Public Library uses for various types of library materials that do not have unique item images. These are much more informative than the common “No image available” icon.

eAudiobook, e-journal, eMusic, vinyl, VHS, eVideo

University of Toronto Libraries has recently redesigned its library catalog to be completely responsive. The item record summary view in the catalog is brief and clear. Each record in the summary view also uses a red or a green icon that helps patrons determine the availability of an item quickly. The icons for citing, printing, e-mailing, or texting the item record, which often show up in the catalog, are hidden behind the options icon at the bottom right corner. When the mouse hovers over it, a variety of choices appears.

Screen Shot 2013-10-09 at 4.45.33 PM

univtoronto

Richland Library’s catalog displays library items in a grid as a default, which makes the catalog more closely resemble an online bookstore or shopping website. Patrons can also change the view to have more details shown with or without the item image. The item record summary view in the default grid view is brief and to the point. The main type of patron action, such as Hold or Download, is clearly differentiated from other links as an orange button.

richland

Screen Shot 2013-10-13 at 8.33.46 PM

Stanford University Library offers a grid view (although not as the default, as at Richland Library). The grid view is very succinct, with the item title, call number, availability information in the form of a green checkmark, and the item location.

Screen Shot 2013-10-13 at 8.37.37 PM

What is interesting about the Stanford University Library catalog (which uses Blacklight) is that when a patron hovers the mouse over an item in the grid view, the item image displays a preview link. When clicked, more detailed information is shown as an overlay.

Screen Shot 2013-10-13 at 8.37.58 PM

Brigham Young University completely customized the user interface of the Primo product from ExLibris.

byu

And University of Michigan Library customized the search result display of the Summon product from SerialsSolutions.

Screen Shot 2013-10-14 at 11.16.49 PM

Here are some other item record summary views that are also fairly straightforward and uncluttered but can be improved further.

Sacramento Public Library uses the open-source discovery system VuFind with little customization.

dcpl

I have not done an extensive survey of library catalogs to see which one has the best item record summary view. But it seems to me that, in general, academic libraries are more likely to provide more information than necessary in the item record summary view and to require patrons to click a link instead of displaying the relevant information right away. For example, the ‘Check availability’ link shown in many library catalogs is better replaced by the actual availability status, ‘available’ or ‘checked out.’ Similarly, the ‘Full-text online’ or ‘Available online’ link may be clearer as a button titled ‘Read Online’ or ‘Access Online.’

F. Challenges and Strategies

The biggest challenge in designing the item record summary view is to strike the right balance between too little and too much information about the item. Too little information will force patrons to open the detailed item record just to determine whether the item is the one they are looking for.

Since librarians know many features of the library catalog, they tend to err on the side of throwing all available features into the item record summary view. But too much information not only overwhelms patrons but also makes it hard for them to locate the most relevant information at that stage and to identify the next available action. Any information irrelevant to a given task is no more than noise to a patron.

This is not a problem unique to library catalogs but applies generally to any system that displays search results. In their book, Designing the Search Experience, Tony Russell-Rose and Tyler Tate describe this as achieving ‘the optimal level of detail’ (p. 130).

Useful strategies for achieving the optimal level of detail for the item summary view in the case of the library catalog include:

  • Removing all unnecessary labels
  • Using appropriate visual cues to make the record less text-heavy
  • Highlighting next logical action(s) and information relevant to that action
  • Systematically guiding a patron to the actions that are relevant to a given item and the task at hand

Large online shopping websites such as Amazon, Barnes & Noble, and eBay all make good use of these strategies. There are no labels such as ‘price,’ ‘shipping,’ or ‘review.’ Amazon highlights the price and the user reviews most, since those are the two most decisive factors for consumers at the browsing stage. Amazon offers only enough information for shoppers to determine whether they are interested in purchasing the item; there is not even a Buy button in the summary view. Once a shopper clicks the item title link and views the detailed item record, the buying options and the ‘Add to Cart’ button are displayed prominently.

Screen Shot 2013-10-09 at 1.21.15 PM

Barnes & Noble’s default display for search results is the grid view, and the item record summary view offers only the most essential information – the item title, material type, price, and the user ratings.

Screen Shot 2013-10-09 at 1.24.05 PM

eBay’s item record summary view also offers only the most essential information, the highest bid and the time left, while people are browsing the site and deciding whether to view the item in further detail.

Screen Shot 2013-10-09 at 1.28.47 PM

G. More Things to Consider

The item record summary view we have discussed so far is surely the core of the search results page. But it is only one part of the search results display, and an even smaller part of the library catalog. Optimizing the search results page, for example, entails not just re-designing the item record summary view but choosing and designing many other elements of the page, such as organizing the filtering options on the left and deciding on the default and optional views. Determining the content and display of the detailed item record is another big part of creating a user-friendly library catalog. If you are interested in this topic, Tony Russell-Rose and Tyler Tate’s book Designing the Search Experience (2013) provides an excellent overview.

Librarians are professionals trained in the many uses of a searchable database: known-item searches, exploring and browsing, searching with incomplete details, compiling sets of search results, locating items by location, type, subject, and so on. But since our work is also on the operations side of a library, we often make the mistake of regarding the library catalog as one huge inventory system that should hold and display all the acquisitions, cataloging, and holdings data of the library collection. Library patrons, however, are rarely interested in seeing such data. They are interested in identifying relevant library items and using them. All the other information is simply a guide to achieving that ultimate goal, and the library catalog is just one tool among many in their toolbox.

Online shopping sites optimize their catalogs to make purchasing as efficient and simple as possible. Libraries and online shopping sites share a common interest in guiding users to one ultimate task: identifying an appropriate item to borrow, access, or purchase. When sketching user-oriented library catalog designs, it is helpful to check how non-library websites display their search results as well.

Screen Shot 2013-10-13 at 9.52.27 PM

music

themes

Once you start looking at other examples, you will realize that there are many ways to display search results, and you will soon want to sketch your own alternative design for the search results display in the library catalog and the discovery system. What do you think would be the optimal level of detail for library items in the library catalog or the discovery interface?


The Mobile App Design Process: A Tube Map Infographic

Last June I had a great experience team-teaching a week-long seminar on designing mobile apps at the Digital Humanities Summer Institute (DHSI). Along with my colleagues from WSU Vancouver’s Creative Media and Digital Culture (CMDC) program, I’ll be returning this June to the beautiful University of Victoria in British Columbia to teach the course again1. As part of the course, I created a visual overview of the process we use for app making. I hope you’ll find it a useful perspective on the work involved in crafting mobile apps and an aid to the process of creating your own.

topological map of the mobile app design process

A visual guide to the process of designing and building mobile apps. Start with Requirements Analysis in the upper-left and follow the tracks to Public Release. (Click for full-sized image.)

Creating the Tube Map:

I’m fond of the tube-map infographic style, also known as the topological map2, because of its ability to highlight relationships between systems and especially because of how it distinguishes between linear (do once) and recursive (do over and over) processes. The linear nature of text in a book or images in slide-deck presentations can artificially impose a linearity that does not mirror the creative process we want to impart. In this example, the design and prototyping loops on the tube-map help communicate that a prototype model is an aid to modeling the design process and not a separate step completed only when the design has been finalized.

These maps are also fun and help spur the creative process. There are other tools for process mapping, such as flowcharts or mind-maps, but in this case I found the topological map has a couple of advantages. First and foremost, I associate the other two with our strategic planning process, so the tube map immediately seems more open, fun, and creative. This is, of course, rooted in my own experience, and your experience may vary; but if you are looking for a new perspective on process mapping, or a new way to display interconnected systems that is vibrant, fun, and shakes things up a bit, the tube map may be just the thing.

I created the map using the open source vector-graphics program Inkscape3, which is comparable to Adobe Illustrator and Corel Draw. Inkscape is free (both gratis and libre) and powerful, but there is a bit of a learning curve. Being unfamiliar with vector graphics and the software tools to create them, I worked through an excellent tutorial provided by Wikipedia on creating vector-graphic topological maps4. It took me a few days of struggling and slowly becoming familiar with the toolset before I felt comfortable creating with Inkscape. I count this as time well spent, as many graphics used in mobile apps, and the icon sets required by app stores, can be made with vector-graphics editors. The Inkscape skills I picked up while making the map have come in very handy on multiple occasions since then.

Reading the Mobile App Map:

Our process through the map begins with a requirements analysis or needs assessment. We ask: what does the client want the app to do? What do we know about our end users? How do the affordances of the device affect this? Performing case studies helps us learn about our users before we start designing to meet their needs. In the design stage we want people to make intentional choices about the conceptual and aesthetic aspects of their app design. Prototype models like wireframe mock-ups, storyboards, or Keynotopia5 prototypes help us visualize these choices, eventually resulting in a working prototype of our app. Stakeholders can test and request modifications to the prototype, avoiding potentially expensive and labor-intensive code revisions later in the process.

Once both the designers and clients are satisfied with the prototype and we’ve seen how potential users interact with it, we’re ready to commit our vision to code. Our favored code platform uses HTML 5, CSS 3, jQuery Mobile6, and PhoneGap7 to make hybrid web apps. Hybrid apps are written as web apps–HTML/JavaScript web sites that look and perform like apps–and then a tool like PhoneGap translates this code into a format that works with the device’s native programming environment. This provides more direct, and thus faster, access to device hardware and also enables us to place our app in official app stores. Hybrid apps are not the only available choice and aren’t perfect for every use case: they can be slower than native apps and may have some issues accessing device hardware. But the familiar coding languages, multi-device compatibility, and ease of making updates across multiple platforms make them an ideal first step for mobile app design. LITA has an upcoming webinar on creating web apps that employs this system8.

Once the prototype has been coded into a hybrid app, we have another opportunity for evaluation and usability testing. We teach a pervasive approach that includes evaluation and testing all throughout the process, but this stage is very important as it is a last chance to make changes before sending the code to an app marketplace. After the app has been submitted, opportunities to make updates, fix bugs, and add features can be limited, sometimes significantly, by the app store’s administrative processes.

After you have spent some time following the lines of the tube map and reading this very brief description, I hope you can see this infographic as an aid to designing mobile web apps. I find it particularly helpful for identifying the source of a particular problem I’m having and also suggesting tools and techniques that can help resolve it. As a personal example, I am often tempted to start writing code before I’ve completely made up my mind what I want the code to do, which leads to frustration. I use the map to remind me to look at my wireframe and use that to guide the structure of my code. I hope you all find it useful as well.


Making Your Website Accessible Part 2: Implementing WCAG

In Part 1, I covered what web accessibility is, its importance, and the Web Content Accessibility Guidelines (WCAG). This post focuses on how to implement WCAG into the structure and layout of the website (including templates/themes, plugins, etc.). While I will be referring to WCAG, I have based this post on what I have found in terms of general best practices, so hopefully this post is applicable to any site.

Using a Template for Layout

First off, I’m going to assume that at the very least your website uses a template even if it doesn’t use a content management system (CMS). Whether your site is developed in-house or not, the points below should be true, otherwise you’re making your website inaccessible for everyone (not just those with accessibility needs).

A template will help you with:

  • consistent navigation (3.2.3)
  • consistent identification of the different parts of each page (3.2.4) – i.e. you assign ids consistently
  • avoiding tables for layout purposes
  • providing multiple ways to discover content (2.4.5)
  • meaningful order to content (1.3.2) – more details below
  • keyboard accessibility (2.1) by inserting bypass blocks (2.4.1) – more details below

To provide multiple ways to reach content, I’m partial to providing links to related pages (local nav) and a search bar, but there are other options.

  • Ordering Content

A template layout is particularly important for the second-to-last point, ‘meaningful order to content’. Screenreaders, much like people, read top to bottom. However, while people generally read in the order that they see text, screenreaders read in code order (think of what you see when viewing ‘page source’). For example:

<body>
  <div>
    <!-- your main/primary content -->
  </div>
  <div>
    <!-- secondary content, this may be a number of things e.g. local nav -->
  </div>
</body>

If you want your secondary content to show up before your primary content, you can just use CSS to move the divs around visually.
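As a minimal sketch of that technique (the class names are illustrative, not from any particular theme), floating the two divs lets the secondary content appear on the left even though the primary content comes first in the markup:

```css
/* Primary content comes first in the HTML source, so screenreaders reach
   it first, but it is displayed on the right; sighted users still see the
   familiar sidebar-on-the-left layout. */
.primary {
  float: right;
  width: 70%;
}
.secondary {
  float: left;
  width: 28%;
}
```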

  • Keyboard Navigation

Your site also needs to be accessible by keyboard, and to help screenreader users (and those who use text-based browsers), you can provide bypass blocks by inserting anchor links that allow users to skip blocks of content repeated across the various pages of a website.

For example, you might have a link at the very top of the page to skip to the main menu (global nav) or the content. At the main menu, you might again have something similar. This is just one possible example:

<style type="text/css">
.assistive-text {
    position: absolute !important;
    clip: rect(1px 1px 1px 1px); /* IE6, IE7 */
    clip: rect(1px, 1px, 1px, 1px);
}
a.assistive-text:active,
a.assistive-text:focus {
    background: #eee;
    border-bottom: 1px solid #ddd;
    color: #1982d1;
    clip: auto !important;
    font-size: 12px;
    position: absolute;
    text-decoration: underline;
    top: 0;
    left: 7.6%;
}
</style> [...]
<body>
  <header>
    <a href="#access">Skip to main menu</a>
     <hgroup>
        <h1>Your Library</h1>
        <h2>Tagline</h2>
     </hgroup>
     <nav>
        <!-- Allow screen readers/text browsers to get right to the good stuff -->
        <h3>Main Menu</h3>
        <a href="#content">Skip to content</a>
        <a href="#secondary">Skip to page navigation</a>
        <!-- global nav -->
     </nav>
   </header>
   <!-- rest of page -->
</body>

Responsive Template

A responsive site allows all your users to access and view your site on any size device, screen resolution, and browser window size (within reason). For example, take a look at Grand Valley State Libraries’ website in the desktop and mobile views below.

screenshot

GVSU Libraries’ Website Desktop View

screenshot of GVSU Libraries' Website

Mobile view

If you’re unfamiliar with responsive web design, you may want to take a look at Lisa Kurt’s Responsive Web Design and Libraries post to become more familiar with the topic.

The basic technique for making a site responsive is to use media queries to shift the layout of the content depending on screen size. Making a site responsive already provides greater access for all your users, but one simple difference can make your site even more accessible: if you use ’em’ units (instead of pixels) for the media queries in your responsive template (see Matthew Reidsma’s Responsive Web Design for Libraries), users should be able to resize your page up to 200% without any problems (1.4.4).
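A minimal sketch of such an em-based breakpoint (assuming the common browser default of 16px, so 48em ≈ 768px; the selector is illustrative):

```css
/* Because em-based breakpoints scale with the user's font size, zooming
   the text to 200% triggers the same reflow as viewing on a narrow screen,
   so the enlarged text never overflows a fixed pixel layout. */
@media screen and (max-width: 48em) {
  .sidebar {
    float: none;
    width: 100%;
  }
}
```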

As part of your responsive design, also consider that touch screens don’t have the highest precision, so links and any other interactive pieces should not be too small. In general, this also helps users who have difficulty with fine motor skills to navigate your site.

Valid & Proper Markup

Using valid markup is part of the guideline (4.1.1), but you can go further than that by using HTML5 structural tags to define the roles of the various sections of a webpage (4.1.2, 1.3.1). For example, the basic structure of your website might look something like this:

<!DOCTYPE html>
<html lang="en"><!-- every page should specify the language of the page (3.1.1) -->
  <head>
    <title>Every Page Should Have a Title (2.4.2)</title>
  </head>
  <body>
    <header>
      <hgroup>
        <h1>site name</h1>
        <h2>tagline</h2>
      </hgroup>
      <nav>
        <a href='#'>Global Nav Link</a>
        <a href='#'>Second Nav Link</a>
        <a href='#'>More Nav Link</a>
      </nav>
    </header>
      <section>
        <article>
          <!-- your content, a blog post for example -->
          <aside>
            <!-- might have something like quick facts -->
          </aside>
        </article>
        <article>
          <!-- another standalone piece -->
        </article>
      </section>
    <footer>copyright and other info</footer>
  </body>
</html>

You may optionally include more metadata, not only for the benefit of screen readers, but also for indexing purposes.

Presentation

A number of guidelines deal with presentation aspects.

At the very basic level, presentation and layout should be separate from content. So layout control (such as sizes, floats, padding, etc.), colours, fonts, and practically anything you would put in ‘style’ should be done in CSS, separating it from the HTML (1.3.1). Screen readers (and other tools) can override CSS rules or turn CSS off completely, allowing the user to customize the font, colour, link colour, etc.

As the basic colour scheme is determined through the site’s general style sheet, you will also need to make sure that you fulfill the colour-specific guidelines. Colour contrast needs to be at least 4.5:1, except for large text (18pt+, or 14pt+ bold), which requires a minimum 3:1 ratio, and logos, which are exempt (1.4.3). I recommend using the WCAG Contrast Checker Firefox add-on. Here’s an example:

ColorChecker screenshot

It highlights errors in red, and you can click on any line to highlight the related element. The only problem arises when you have multiple elements layered on top of each other. As you can see in the example, it is checking the colour of the text ‘Research Help’ against the yellow bordering the menu (global navigation), rather than the element directly behind the text. So you do have to vet the results, but it’s a great little tool for quickly checking the contrast of your text colours; for images, you can enter the colour values manually to check the ratio.
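If you want to check a ratio programmatically, the calculation behind these tools is defined by WCAG 2.0 itself. Here is a sketch of it (colour values as [R, G, B] arrays, 0–255):

```javascript
// WCAG 2.0 contrast-ratio calculation, as used by contrast-checker tools.
function relativeLuminance(rgb) {
  var channels = rgb.map(function (c) {
    c = c / 255;
    // Linearize each sRGB channel per the WCAG 2.0 definition
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * channels[0] + 0.7152 * channels[1] + 0.0722 * channels[2];
}

function contrastRatio(fg, bg) {
  var l1 = relativeLuminance(fg);
  var l2 = relativeLuminance(bg);
  var lighter = Math.max(l1, l2);
  var darker = Math.min(l1, l2);
  return (lighter + 0.05) / (darker + 0.05);
}

// Black text on a white background yields the maximum ratio, 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
```

A result of 4.5 or higher passes the normal-text requirement; 3.0 or higher passes for large text.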

Additional Tools

For more tools, like the colour contrast checker, check out the W3C Web Accessibility Tools list. My picks are WAVE (gives you different views, such as text-only) and Fangs (screen reader emulator).

Other Techniques & Reference

There are many more techniques in the WCAG Quick Reference that I haven’t covered, but be cautioned that some of those techniques are already obsolete. Follow the guidelines, as those are the requirements; the techniques are ways to fulfill the guidelines, not the required way to do so.

Scripting & Custom User Interfaces

As this post focuses on HTML/CSS, it does not cover scripts, Flash, or PDF. The WCAG Quick Reference covers these and more.

For custom interfaces, WAI-ARIA should be used in conjunction with WCAG. Some UI modules are already web accessible, so I encourage you to use these:

If you’re using plugins, then at least make it a feature request, and consider contributing to the plugin to make it accessible.

Expectations

Libraries in particular use a multitude of tools and services. No one can expect your organization to make all of them web accessible, especially when you likely don’t have full control over them all. Nevertheless, do what you can, and request and advocate for web accessibility with the companies you deal with when it’s not controlled in-house. Even asking whether web accessibility guidelines have been considered or met can start the conversation, as the developers may simply not have thought about accessibility. There are also some workarounds, such as providing an overlay (which I will cover in the next post in regards to video), but most workarounds I have seen take away functionality for some users while making the site more accessible for others. It is always best to have accessibility built into a product or site.

The Bottom Line

While a few techniques exist specifically to make sites accessible to people with disabilities, good, solid design principles will go a long way toward making an accessible site for all your users. You also don’t need to redesign your whole site: consider borrowing from agile development and implementing one technique at a time to improve your site over time.

Next Time

In this post, I have focused on the structure and layout of the website, i.e. the elements that you would typically have in themes or templates. I have purposely left out guidelines that deal with the content of a website, as many organizations rely on various staff members to populate the site with content (usually through a CMS), including content that might be created by either IT or non-IT staff, such as forms and audio/visual content. However, all the content-related guidelines also apply to template or generated content (links, images, forms such as a search bar), which is what I will cover in the third and final post.

About our Guest Author: Cynthia Ng is currently the Web Services Librarian at Ryerson University Library and Archives. While she is especially interested in web development and services, her focus is on integrating technology into the library in a holistic manner to best serve its users. She is a recent MLIS graduate from the University of British Columbia, and also holds a BEd from UBC. She can be found blogging at Learning (Lib)Tech and on Twitter as @TheRealArty.


Mobile App Use Studies Across a Decentralized Research Library

The University of Illinois’ team of IT diversity interns is working on departmental-specific mobile app modules, and user studies of those modules, this Fall semester. The Illinois library is a decentralized system of nearly thirty departmental libraries serving the diverse needs of staff, researchers, scientists, graduate students, and undergraduate students. Given such a diverse population, we wondered whether we could turn our prototyping pipeline toward unit-specific needs outside of our own space, the Undergraduate Library.

Specifically, this fall we wanted to understand how departmental collections and other library locations would use our already developed RESTful web services –which serve as the core component of the prototyping pipeline– for departmental and subject-based mobile application modules.

This blog post describes the methods we used to quickly gather feedback on new and exciting features for departmental collections. The mobile application modules we studied include enhanced wayfinding support for multi-story buildings and collections (originally designed for an undergrad space with one level of collections), a reserves module for all libraries, and hours integration into book information data elements.

Enhanced Wayfinding in a Departmental Library

library wayfinding help

We studied the implementation of feature requests in mobile wayfinding apps

 

This module includes a navigation rose in the upper left corner of the mobile interface, along with functionality for line segments that draw a path from your current location to the location of the desired book in the stacks. The user sets their current location in the book stacks by scanning the nearest book with the barcode scanner module. After the current location is set, any additional book searched for in the library generates a line-segment path to that book’s location.
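The scan-then-draw flow above can be sketched roughly as follows. This is an illustration only: the shelf coordinate table and function names are hypothetical, not taken from the Illinois codebase.

```javascript
// Hypothetical table mapping call numbers to stacks map coordinates.
var shelfCoords = {
  "QA76.9 .B63": { x: 40, y: 120 },
  "Z678.9 .A15": { x: 200, y: 60 }
};

var currentLocation = null;

// Scanning the barcode of the nearest book sets the user's location.
function setLocationFromScan(callNumber) {
  currentLocation = shelfCoords[callNumber] || null;
  return currentLocation;
}

// Any subsequent search yields the endpoints of the segment the map draws.
function wayfindingSegment(targetCallNumber) {
  var target = shelfCoords[targetCallNumber];
  if (!currentLocation || !target) return null;
  return { from: currentLocation, to: target };
}
```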

Turn-by-turn directional support based on user location is a new enhancement, though we had received requests for it several times before, including in our 2010-2011 user studies on maps and library wayfinding using mobile devices.

Reserves Module

Reserve access is one of the most requested mobile app features

This reserves program offers students access to library reserves from an Android device. Multiple drop-downs allow a student to select course reserves by course, department, or major.

Hours of library locations integrated into book data

hours module

If an item is in a library that is closed, that item may not be useful to a student.

Most OPACs will let you know if a book is available. This is not always so straightforward in OPACs that inventory multiple locations. Some of the departmental collections actually have business hours of 9-5, or other limited hours on the weekend, yet the OPAC will show an item as Available as long as the book is not checked out. We tried to address this problem in our display module by adding a location status for the library — it checks against an hours database to let the user know whether the library is actually open for the student to check out an available book.
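The hours-aware status check described above might look roughly like this. The hours table, library names, and function name are hypothetical examples, not the production data or code:

```javascript
// Hypothetical opening hours, keyed by library name (24-hour clock).
var hoursByLibrary = {
  "Funk ACES": { open: 9, close: 17 },     // limited departmental hours
  "Undergraduate": { open: 8, close: 24 }
};

// Combine the OPAC's circulation status with the owning library's hours
// so "Available" also reflects whether the student can actually get the book.
function displayStatus(item, currentHour) {
  if (item.checkedOut) return "Checked out";
  var hours = hoursByLibrary[item.library];
  var openNow = hours && currentHour >= hours.open && currentHour < hours.close;
  return openNow ? "Available now" : "Available (library closed)";
}
```

At 10am a book on the shelf in the departmental library shows “Available now”; at 8pm the same record shows “Available (library closed)”.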

Rapid use studies

With a number of feature enhancements to our core set of mobile app modules, we wanted to gather empirical data that could inform production services. The fastest way to get user input on modules still in the early design phase is to approach users who are currently in the building. For these modules, we were experimenting with the idea of location services in departmental collections, and the wayfinding support was specific to the ACES Library, so we made this our test location.

Once there, we approached users with a test set of questions about the modules. We asked what parts of the app are useful for helping students integrate library resources into their work. We also asked and observed what doesn’t help, and what features would be worthwhile to develop further.

We asked these questions about app modules:

  • Please describe any previous experience finding items in the Funk ACES Library.
  • What software modules help students integrate library content into their course work?
  • How easy to use is the application?
  • Does the student need time to learn how to use the software?
  • What unexpected things occur?
  • How do students react when the application does not work as they expect?
  • Do students make use of the location-based features?

Follow up

After collecting initial use data, the team is reshaping a few of the modules to make them easier to use, while also brainstorming ideas for making some of the features more evident. One key finding of this round of user studies was that although we implemented a number of requested features, students could not always locate, use, or in some cases understand the features and how they would help. So we need to make the help features more helpful, and more engaging, overall. We theorize that another reason students couldn’t always find the help tools we designed into the ACES modules is that the modular offerings of our experiment have become a bit cluttered.

If you take a look at any of the above screenshots, you will notice there were eight modules included at the bottom of the mobile interface for this study. We put many options in front of the study participants, so the next round of user studies will focus on the areas we think are most worthwhile to develop: particularly the engaging elements of wayfinding, but also the reserves module, which students called out as the one part they considered most helpful for integrating library resources into their work.

Finally, as we pored over a few of the software choices we made in constructing the Android layers, we realized they were not quite modular enough, which caused errors in the overall functionality of the app during the study. To correct this, we are working on definitions for the core aspects of modular design.

A final step for our work this semester is to showcase all of our software prototypes to the library staff at the University of Illinois. To that end, we are holding an open house during finals week, inviting all staff members into our prototyping space, asking for their feedback on the software we have developed, and letting them try out some of the newest ideas in mobile technology, including our in-development augmented reality shelf browser, which is being coded with funds from a Sparks! IMLS grant. A user study of mobile augmented reality applications is planned for Spring 2013.

Another information technology problem we will work on in the Spring 2013 semester is how to incorporate our RESTful feeds into the library’s discovery layers. The Music and Performing Arts Library is likely our next collaboration for stacks-based wayfinding support inside the OPAC.

We would like to integrate our wayfinding feed into the OPAC to help students get from the book information to the stacks location, using the RESTful web services we designed for system efficiency from the outset. The next step for our fledgling prototyping initiative is system integration: taking this prototype work and injecting its most useful and most-used components into production environments like our VuFind search and our Primo discovery layer.

 

See also:

Huang, Y.M., Wu, D.F., Guo, Q. (2009), “Build a RESTful Web service using Jersey and Apache Tomcat,” http://www.ibm.com/developerworks/web/library/wa-aj-tomcat/

Jones, T. & Richey, R. (2000), “Rapid prototyping methodology in action: a developmental study,” Educational Technology Research and Development 48, 63-80

Prototyping as a Process for Improved User Experience with Library and Archives Websites: http://journal.code4lib.org/articles/7394

Rapid Prototyping a Collections-Based Mobile Wayfinding Application:
http://hdl.handle.net/2142/24001

 

 


Learning Web Analytics from the LITA 2012 National Forum Pre-conference

Note: The 2012 LITA Forum pre-conference on Web Analytics was taught by Tabatha (Tabby) Farney and Nina McHale. Our guest authors, Joel Richard and Kelly Sattler, were two of the people who attended the pre-conference, and they wrote this summary to share with ACRL TechConnect readers.

In advance of the conference, Tabby and Nina sent participants a survey on what we were interested in learning and solicited questions to be answered in the class. Twenty-one participants responded, and seventeen of them were already using Google Analytics (GA). About half of those using GA check their reports 1-2 times per month; the rest check less often. The pre-conference opened with introductions and brief descriptions of what we were doing with analytics on our websites and what we hoped to learn.

Web Analytics Strategy

The overall theme of the pre-conference was the following:

A web analytics strategy is the structured process of identifying and evaluating your key performance indicators on the basis of an organization’s objectives and website goals – the desired outcomes, or what you want people to do on the website.

We learned that beyond the tool we use to measure our analytics, we need to identify what we want our website to do. We do this by drawing on the pre-existing documentation our institutions have on their mission and purpose, as well as the mission and purpose of the website and whom it serves. Additionally, we need a privacy statement so our patrons understand that we will be tracking their movements on the site and what we will be collecting. We learned that there are challenges to using only IP addresses (versus cookies) for tracking. For example, does your institution’s network architecture allow you to distinguish patrons from staff by IP address, or are cookies a necessity?

Tool Options for Website Statistics

To start things off, we discussed the types of web analytics tools that are available and which we were using. Many of the participants were already using Google Analytics (GA), so most of the activities were demonstrated in GA, where we could log into our own accounts.  We were reminded that though GA is free, Google keeps our data and does not allow us to delete it.  GA has us place a bit of JavaScript code on each page we want tracked. It is easier to set up GA within a content management system, though it may not work as well for mobile devices.  Piwik is an open-source alternative to Google Analytics that uses a similar JavaScript tagging method.  Additionally, we were reminded that if we use any JavaScript tagging method, we should review our code snippets at least every two years, as they do change.
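
For reference, the JavaScript tagging method being discussed is, in its classic (2012-era) ga.js form, a small snippet placed on every page to be tracked, typically just before the closing head tag; the UA- account ID below is a placeholder:

```javascript
// Classic asynchronous ga.js page tag; replace the UA- ID with your own.
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXXX-1']);
_gaq.push(['_trackPageview']);

(function() {
  // Load the ga.js library asynchronously so it does not block page rendering.
  var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
  ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
  var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
```

This is the snippet that a CMS module or plugin would typically inject for you on every page.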

We learned about other, less common systems for tracking user activity. AWStats is installed locally, reads the website log files, and processes them into reports.  It offers the user more control and may be more useful for sites not in a content management system.  Sometimes it provides more information than desired, and it cannot clearly differentiate between users who share an IP address.  Other similar tools are Webalizer, FireStats, and Webtrends.

A third option is to use web beacons, which are small transparent GIFs embedded on every page.  These are useful when JavaScript won’t work, but they probably aren’t as applicable today as they once were.

Finally, we took a brief look at the heat mapping tool, Crazy Egg.  It focuses on visual analytics and uses JavaScript tagging to provide heat maps of exactly where visitors clicked on our site, offering insights as to which areas of a page receive the most attention.  Crazy Egg has a 30-day free trial; after that it charges per page tracked, but there are subscriptions for under $100/month if you find the information worth the cost.  The images can really give webmasters an understanding of what users are doing on their site and are persuasive tools when redesigning a page or analyzing specific kinds of user behavior.

Core Concepts and Metrics of Web Analytics

Next, Tabby and Nina presented a basic list of terminology used within web analytics.  Of course, different tools refer to the same concept by different names, but these were the terms we used throughout our session.

  • Visits – A visit is when someone comes to the site. A visit ends when a user has not seen a new page in 30 minutes (or when they have left the site).
  • Visitor Types: New & Returning – A cookie is used to determine whether a visitor has been to the site in the past. If a user disables cookies or clears them regularly, they will show up as a new user each time they visit.
  • Unique Visitors – To distinguish visits by the same person, the cookie is used to track when the same person returns to the site in a given period of time (hours, days, weeks or more).
  • Page Views – More specific than “hits,” a page view is recorded when a page is loaded in a visitor’s browser.
  • User Technology – This includes information about the visitor’s operating system, browser version, mobile device or desktop computer, etc.
  • Geographic Data – A visitor’s location in the world can often be determined down to the city they are in.
  • Entry and Exit Pages – These refer to the page the visitor sees first during their visit (Entry) and the last page they see before leaving or their session expires (Exit).
  • Referral Sources – Did the visitor come from another site? If so, this will tell who is sending traffic to us.
  • Bounce Rate – A bounce is when someone comes to the site and views only one page before leaving.
  • Engagement Metrics – These indicate how engaged visitors are with our site, measured by the time they spend on the site or the number of pages viewed.
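
As a rough sketch of how the 30-minute rule above turns raw page views into visits (the function name and sample data are ours, invented for illustration, not from any analytics tool):

```javascript
// Illustrative only: count visits from a list of page-view timestamps
// (in minutes), where a gap of more than 30 minutes starts a new visit.
function countVisits(viewTimes, timeoutMin) {
  timeoutMin = timeoutMin || 30;
  var visits = 0;
  var last = null;
  var sorted = viewTimes.slice().sort(function (a, b) { return a - b; });
  for (var i = 0; i < sorted.length; i++) {
    // A new visit begins on the first view, or after a long gap.
    if (last === null || sorted[i] - last > timeoutMin) visits++;
    last = sorted[i];
  }
  return visits;
}
```

For example, views at minutes 0, 5, and 50 would count as two visits, because the 45-minute gap exceeds the timeout.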

Goals/Conversion

Considering how often the terms “goals” and “conversions” are used, it is important to realize that in web analytics lingo a goal, also referred to as a conversion, is a metric that measures whether a desired action has occurred on your site. There are four primary types of conversions:

  1. URL Destination – A visitor has reached a targeted end page.  For commercial sites, this would be the “Thank you for your purchase” page. For a library site, this is a little more challenging to classify and will include several different pages or types of pages.
  2. Visit Duration – How much time a visitor spends on our site. This is often an unclear concept. If a user is on the site for a long time, we don’t know if they were interrupted while on our site, if they had a hard time finding what they were looking for, or if they were enthralled with all the amazing information we provide and read every word twice.
  3. Pages per Visit – Indicates site engagement. Similar to Visit Duration, many pages may mean the user was interested in our content, or that they were unable to find what they were looking for.  We can distinguish between these by looking at the “paths” of pages the visitor saw.  As an example, we might want to know whether someone finds the page they were looking for in three pages or fewer.
  4. Events – Targets an action on the site. This can be anything and is often used to track outbound pages or links to a downloadable PDF.

The conversion rate shows, as a percentage, how often the desired action occurs:

Conversion rate = Desired action / Total or Unique visits
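
As a minimal illustration of the formula above (the function name is ours, not from any analytics tool):

```javascript
// Conversion rate as a percentage: desired actions divided by visits.
function conversionRate(conversions, totalVisits) {
  if (totalVisits === 0) return 0; // avoid division by zero on an idle site
  return (conversions / totalVisits) * 100;
}
```

So 25 completed goals out of 500 visits would be a conversion rate of 5%.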

Goal Reports, also known as Conversion Reports, are sometimes provided by the tool and include the total number of conversions and the conversion rate.  We learned that we can also assign a monetary value to a goal to take advantage of the more commerce-focused tools often found in analytics software, but the results can be challenging to interpret.  Conversion reports also show an Abandonment Rate, the proportion of people who leave our site before completing the goal. We can investigate this by creating a “funnel” that identifies the steps needed to complete the goal. The funnel report shows us where in the steps visitors drop off and how many make it through the complete conversion.

Key Performance Indicators (KPIs) were a focus of much of the conference.  They measure the outcome based on our site’s objectives/goals and are implemented via conversion rates.  KPIs are unique to each site.  Through examples, we learned that each organization’s web presence may be made up of multiple sites. For instance, an organization may have its main library pages, Libguides, the catalog, a branch site, a set of sites for digitized collections, etc. A KPI may span activities on more than one of these sites.

Segment or Filter

We then discussed the similarities and differences between Segments and Filters, both of which offer methods to narrow the data, enabling us to focus on a particular point of interest.  The difference between the two is that (i) filtering removes the data during the collection process, thereby resulting in lost data; whereas (ii) segmentation merely hides data from a report, leaving it available for other reports. Generally, we felt that Segments were preferable to Filters in Google Analytics, given that it is impossible to recover data that is lost during GA’s real-time data collection.

We talked about the different kinds of segments that some of us are using. For example, at Joel’s organization, he segments the staff computers in the offices from the computers in the library branches by adding a query string to the homepage URL configured in the branch computers’ browsers. Using this, he can create a segment in Google Analytics to view the activity of either group of users by segmenting on the different Entry pages (with and without this special query string). Segmenting on IP address further separates his users into researchers and the general public.
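
A hypothetical sketch of the entry-page trick: if the branch computers’ browser homepages were set to a URL carrying a marker query string such as `?loc=branch` (our made-up parameter, not Joel’s actual value), a segment definition or report script could classify entry URLs like this:

```javascript
// Hypothetical classifier mirroring an Entry-page segment: URLs whose
// query string carries the loc=branch marker are branch computers.
function entrySegment(entryUrl) {
  return /[?&]loc=branch(&|$)/.test(entryUrl) ? 'Branch' : 'Other';
}
```

In Google Analytics itself the equivalent would be an advanced segment matching the landing page against that query string.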

Benchmarking

As a step towards measuring success for our sites, we discussed benchmarking, which is used to look at the performance of our sites before and after a change. Having performance data before making changes is essential to knowing whether those changes are successful, as defined by our goals and KPIs.

Comparing a site to itself, either in a prior iteration or before making a change, is called Internal Benchmarking. Comparing a site to other similar sites on the Internet is known as External Benchmarking. Since external benchmarking requires data to make a comparison, we need to request data or reports from another website. An alternative is to use services such as Alexa, Quantcast, Hitwise, and others, which will do the comparison for you.  Keep in mind that these may use e-commerce or commercial indicators, which may not make for a good comparison to humanities-oriented sites.

Event Tracking

Page views and visitor statistics are important for tracking how our site is doing, but sometimes we need to know about events that aren’t tracked through the normal means. We learned that an Event, both in the conceptual sense and in the analytics world, can be used to track actions that don’t naturally result in a page view. Events are used to track access to resources that aren’t a web page, such as videos, PDFs, dynamic page elements, and outbound links.

Tracking events doesn’t always come naturally and requires some effort to set up. Content management systems (CMS) like Drupal help make event tracking easy, either via a module or plugin or simply by editing a template or function that produces the HTML pages.  If a website is not using a CMS, the webmaster will need to add event tracking code to each link or action they wish to record in Google Analytics. Fortunately, as we saw, the event tracking code is simple to add to a site, and there is good documentation describing it in Google’s Event Tracking Guide.
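
As a hedged example of what such tracking code looked like in the classic ga.js syntax, here is one event-tracking call; the category, action, and label values below are invented for illustration:

```javascript
// ga.js-style event tracking: push a _trackEvent command onto the
// asynchronous command queue, e.g. from a download link's click handler.
var _gaq = _gaq || [];
_gaq.push(['_trackEvent', 'Downloads', 'PDF', '/docs/annual-report.pdf']);
```

In practice this call would be attached to the link’s onclick handler (or injected by a CMS module) so the event fires when the visitor clicks.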

Finally, we learned that tracking events is preferable to creating “fake” pageviews, as it does not inflate the pageview statistics generated by visitors’ usual browsing activity.

Success for our websites

Much of the second half of the conference was focused on learning about and performing some exercises to define and measure success for our sites. We started by understanding our site in terms of our Users, our Content and our Goals. These all point to the site’s purpose and circle back around to the content delivered by our site to the users in order to meet our goals. It’s all interconnected. The following questions and steps helped us to clarify the components that we need to have in hand to develop a successful website.

Content Audit – Perform an inventory that lists every page on the site. This is likely to be tedious and time-consuming; it includes finding abandoned pages, lost images, etc.  The web server is a great place to start identifying files, and sometimes we can use automated web crawling tools to find the pages on our site.  Then we need to evaluate that content. Beyond the basic use of a page, consider recording the last updated date, bounce rate, time on page, whether it is a landing page, and who is responsible for the content.

Identifying Related Sites – Create a list of sites that our site links to and sites that link back to our site.  Examples: parent site (e.g. our organization’s overall homepage), databases, journals, library catalog site, blog site, flickr, Twitter, Facebook, Internet Archive, etc.

Who are our users? – What is our site’s intended audience or audiences? For us at the conference, this was a variety of people: students, staff, the general public, collectors, adults, teens, parents, etc. Some of us may need to use a survey to determine this.  Some populations of users (e.g. staff) might be identified via IP addresses. We were reminded that most sites serve one major set of users along with other, smaller groups. For example, students might be the primary users, whereas faculty and staff are secondary users.

Related Goals and plans – Use existing planning documents, strategic goals, or a library’s mission statement to set a mission statement and/or goals for the website. Who are we going to help? Who is our audience?  We must define why our site exists and its purpose on the web.  Generally we’ll have one primary purpose per site. Secondary purposes also help define what the site does and fall under the “nice to have” category, but they are also very useful to our users. (For example, Amazon.com’s primary purpose is to sell products, but secondary purposes include reviews, wishlists, ratings, etc.)

When we have a new service to promote, we can use analytics and goals to track how well that goal is being met. This is an ongoing expansion of the website and the web analytics strategy.  We were reminded to make goals that are practical, simple and achievable. Priorities can change from year to year in what we will monitor and promote.

Things to do right away

Nearing the end of our conference, we discussed things that we can do to improve our analytics in the near term. These are not necessarily quick to implement, but doing them will put us in a good place for starting our web analytics strategy. It was mentioned that if we aren’t tracking our website’s usage at all, we should install something today to at least begin collecting data!

  1. Share what we are doing with our colleagues. Educate them at a high level, so they know more about our decision making process. Be proactive and share information; don’t wait to be asked what’s going on. This will offer a sense of inclusion and transparency. What we do is not magic in any sense. We may also consider granting read-only access to some people who are interested in seeing and playing with the statistics on their own.
  2. Set a schedule for pulling and analyzing our data and statistics. On a quarterly basis, report to staff on things we found interesting: important metrics, fun things, anecdotes about what is happening on our site. Also check the goals we are tracking in analytics on a quarterly basis; do not “set and forget” our goals. On a monthly basis, we should report to IT staff on topics of concern, 404 pages, important values, and things that need attention.
  3. Test, Analyze, Edit, and Repeat. This is an ongoing, long-term effort to keep improving our sites. During a site redesign, we compare analytics data before and after we make changes. Use analytics to make certain the changes we are implementing have a positive effect. Use analytics to drive the changes in our site, not because it would be cool/fun/neat to do things a certain way. Remember that our site is meant to serve our users.
  4. Measure all content. Get tracking code installed across all of our sites. Google Analytics cross-domain tracking is tricky to set up, but once installed it will track users as they move between different servers. Examples for this are our website, blog, OPAC, and other servers. For things not under our control, be sure to at least track outbound links so we know when people leave our site.
  5. Measure all users. When we are reporting, segment the users into groups as much as possible to understand their different habits.
  6. Look at top mobile content. Use that information to decide how to divide the site and focus on the content that mobile users go to most often.
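
The cross-domain setup mentioned in item 4 can be sketched, under the classic ga.js API, roughly as follows; the account ID and domain names are placeholders:

```javascript
// ga.js cross-domain configuration: share the visitor cookie across
// our tracked properties so a user moving between them stays one visit.
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXXX-1']);
_gaq.push(['_setDomainName', 'library.example.edu']); // primary domain (placeholder)
_gaq.push(['_setAllowLinker', true]);                 // allow cookie passing via _link
_gaq.push(['_trackPageview']);
// Links out to the other tracked domain (e.g. the OPAC) would then call
// _gaq.push(['_link', this.href]) in their onclick handlers.
```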

Summary

Spending eight hours learning about a topic and how to practically apply it to our site is a great way to get excited about taking on more responsibilities in our daily work. There is still a good deal of learning to be done since much of the expertise in web analytics comes from taking the time to experiment with the data and settings.

We, Kelly and Joel, are looking forward to working with analytics from the ground up, so to speak. We are both in an early stage of redeploying our websites under new software, which allows us to take into account the most up-to-date analytics tools and techniques available to us. Additionally, our organizations, though different in their specific missions and goals, are entering into a new round of long-term planning, with the result being a new set of goals for the next three to five years. It is clear that the website is an important part of this planning and that the goals of our websites translate directly into actions we take when configuring and using Google Analytics.

We both expect a learning curve in understanding and applying web analytics, and there will be a set of long-term, ongoing tasks for us. However, after this session, we are more confident about how to effectively apply and understand analytics toward tracking and achieving the goals of our organizations and creating an effective and useful set of websites.

About our Guest Authors:

Kelly Sattler is a Digital Project Librarian and Head of Web Services at Michigan State University.  She and her team are migrating the Libraries’ website into Drupal 7 and are analyzing their Google Analytics data, search terms, and chat logs to identify places where they can improve the site through usability studies. Kelly spent 12 years in information technology at a large electrical company before becoming a librarian and has a bachelor’s degree in Computer Engineering.  She can be found on Twitter at @ksattler.

Joel Richard is the lead Web Developer for the Smithsonian Libraries in Washington, DC, and is currently rebuilding and migrating 15 years’ worth of content to Drupal 7. He has 18 years of experience in software development and internet technology and is a confirmed internet junkie. In his spare time, he is an enthusiastic proponent of Linked Open Data and believes it will change the way the internet works. One day. He can be found on Twitter at @cajunjoel.