Information Architecture for a Library Website Redesign

My library is about to embark upon a large website redesign during this summer semester. This isn’t going to be just a new layer of CSS, or a minor version upgrade to Drupal, or moving a few pages around within the same general site. No, it’s going to be a huge, sweeping change that affects the whole of our web presence. With such an enormous task at hand, I wanted to discuss some of the tools and approaches that we’re using to make sure the new site meets our needs.

Why Redesign?

I’ve heard the arguments for why the wholesale website redesign is a flawed approach and why we should instead be continually, iteratively working on our sites. Continual changes stop problems from building up, and sweeping changes can disrupt users who were accustomed to the old site. The gradual redesign makes a lot of sense to me, and it also seems like a complete luxury that I’ve never had in my library positions.

The primary problem with a series of smaller changes is that the approach assumes a solid foundation to begin with. Our current site, however, has a host of interconnected problems that make tackling any individual issue a challenge. It’s like your holiday lights sitting in a box all year; they’re hopelessly tangled by the time you take them out again.

Our site has decades of discarded, forgotten content. That’s mostly harmless; it’s hard to find and sees virtually no traffic. But it’s still not great to have outdated information scattered around. In particular, I’m not thrilled that a lot of it is static HTML, images, and documents sitting outside our content management system. It’s hard to know how much content we even have because it cannot be managed in one place.

We also fell into a pattern of adding content to the site but never removing or re-organizing existing content. Someone would ask for a button here, or a page dictating a policy there, or a new FAQ entry. Pages that were added didn’t have particular owners responsible for their currency and maintenance; I, as Systems Librarian, was expected to run the technical aspects of the site but also be its primary content editor. That’s simply an impossible task, as I don’t know every detail of the library’s operations or have the time to keep on top of a menagerie of pages of dubious importance.

I tried to create a “website changes form” to manage things, but it didn’t work for staff or for me. The few staff who did fill out the form ended up requesting things that were difficult to do: large theme changes that I wasn’t comfortable making without user testing or approval from our other librarians. The little content that was added consisted of minor text ferried through this form and through me, which essentially slowed down the editorial process and furthered the idea that web content was solely my domain.

To top our content troubles off, we’re also on an unsupported, outdated version of Drupal. Upgrading or switching a CMS isn’t necessarily related to a website redesign. If you have a functional website on a broken piece of software, you probably don’t want to toss out the good with the bad. But in our case, similar to how our ILS migration gave us the opportunity to clean up our bibliographic records, a CMS migration gives us a chance to rebuild a crumbling website. It just doesn’t make sense to invest technical effort in migrating all our existing content when it’s so clearly in need of major structural change.

Card Sort

[Image: cards in the middle of being constructed.]

Not wanting to go into a redesign process blind, we set out to collect data on our current site and how it could be improved. One of the first ways we gathered data was to ask all library staff to perform a card sort. A card sort is an activity wherein pieces of web content are put on cards which can then be placed into categories; the idea is to form a rough information architecture for your site which can dictate structure and main menus. Card sorts can be either open or closed, meaning the categories are either invented by the participants or provided ahead of time.

For our card sort, I first chose to do an open sort, since we were so uncertain about the categories. Second, I selected web content based on our existing site’s analytics. It was clear to me that our current site was bloated and disorganized; there were pages tucked into the nooks of cyberspace that no one had visited in years, and all sorts of overlapping and unnecessary content. So I selected ≈20 popular pages but also gave each group two pieces of blank paper on which to add whatever content they felt was missing.

Finally, to get as much useful data as possible, I modified the card sort procedure in a couple of ways. I asked people to role-play as different types of stakeholders (graduate and undergraduate students, faculty, administrators) and to justify their decisions from that vantage point. I also had everyone, after sorting was done, put dots on content they felt was important enough for the home page. Since one of our current site’s primary challenges is maintenance, or the lack thereof, I wanted to add one last activity wherein participants would write a “responsible staff member” on each card (e.g. the instruction librarian maintains the instruction policy page). Sadly, we ran out of time and couldn’t do that bit.

The results of the card sort were informative. A few categories emerged as common across everyone’s sorts: collections, “about us”, policies, and current events/news. We discovered a need for new content to cover workshops, exhibits, and events happening in the library, which were currently represented (and not very well) only in blog posts. In terms of the home page, it was clear that LibGuides, collections, news, and most importantly our open hours needed to be represented.

Treejack & Analytics

Once we had enough information to build out the site’s architecture, I organized our content into a few major categories. But there were still several questions on my mind: would users understand terms like “special collections”? Would they understand where to look for LibGuides? Would they know how to find the right contact for various questions? To answer some of these questions, I turned to Optimal Workshop’s “Treejack” tool. Treejack tests a site’s information architecture by having users navigate plain text links to perform basic tasks. We created a few tasks aimed at answering our questions and recruited students to perform them. While we’re only using the free tier of Optimal Workshop, and only using student stakeholders, the data was still informative.

For one, Optimal Workshop’s results data is rich and visualized well. It shows the exact routes each user took through our site’s content, the time it took to complete a task, and whether a task was completed directly, completed indirectly, or failed. Completed directly means the user took an ideal route through our content: no bouncing up and down the site’s hierarchy. Indirect completion means they eventually got to the right place, but didn’t take a perfect path there, while failure means they ended in the wrong place. The graphs that demonstrate each task’s outcomes are wonderful:

[Image: the data & charts Treejack shows for a moderately successful task.]
[Image: a “pie tree” showing users’ paths while attempting a task.]

We can see here that most of our users found their way to LibGuides (named “study guides” here). But a few people expected to find them under our “Collections” category and bounced around in there, clearly lost. This tells us we should represent our guides under Collections alongside items like databases, print collections, and course reserves. While building and running your own Treejack-type tests would be easy, I definitely recommend Optimal Workshop as a great product which provides much insight.

There’s much work to be done in terms of testing—ideally we would adjust our architecture to address the difficulties that users had, recruit different sets of users (faculty & staff), and attempt to answer more questions. That’ll be difficult during the summer while there are fewer people on campus but we know enough now to start adjusting our site and moving along in the redesign process.

Another piece of our redesign philosophy is using analytics about the current site to inform our decisions about the new one. For instance, I track interactions with our home page search box using Google Analytics events.[1] The search box has three tabs corresponding to our discovery layer, catalog, and LibGuides. Despite thousands of searches and interactions with the search box, LibGuides search is seeing only trace usage. The tab was clicked on a mere 181 times this year; what’s worse, only 51 times did a user actually search afterwards. This trace amount of usage, plus the fact that users are clearly clicking onto the tab and then not finding what they want there, indicates it’s just not worth any real estate on the home page. When you add in that our LibGuides now appear in our discovery layer, their search tab is clearly disposable.
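For anyone curious how that event tracking is wired up, here is a minimal sketch using Universal Analytics (analytics.js); the element id and the category, action, and label names are placeholders rather than the exact ones on our site.

// Assumes the standard analytics.js snippet is already loaded on the page
// and that the LibGuides tab has id="libguides-tab" (a made-up id).
document.getElementById('libguides-tab').addEventListener('click', function () {
  // Record the tab click as a Google Analytics event.
  ga('send', 'event', 'Home Search Box', 'tab click', 'LibGuides');
});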

What’s Next

Data, tests, and conceptual frameworks aside, our next stage will involve building something much closer to an actual, functional website. Tools like Optimal Workshop are wonderful for providing high-level views on how to structure our information, but watching a user interact with a prototype site is so much richer. We can see their hesitation, hear them discuss the meanings of our terms, get their opinions on our stylistic choices. Prototype testing has been a struggle for me in the past; users tend to fixate on the unfinished or unrefined nature of the prototype, providing feedback that tells me what I already know (yes, we need to replace the placeholder images; yes, “Lorem ipsum dolor sit amet” is written on every page) rather than something new. I hope to counter that by setting appropriate expectations and building a small but fairly robust prototype.

We’re also building our site in an entirely new piece of software, Wagtail. Wagtail is exciting for a number of reasons, and will probably have to be the subject of future posts, but it does help address some of the existing issues I noted earlier. We’re excited by the innovative StreamField approach to content, a replacement for large rich-text fields, which are unstructured and often let users override a site’s base styles. We’ve also heard whispers of new workflow features which let us send reminders to owners of different content pages to revisit them periodically. While I could do something like this myself with an ad hoc mess of calendar events and spreadsheets, having it built right into the CMS bodes well for our future maintenance plans. Obviously, the concepts underlying Wagtail and the tools it offers will influence how we implement our information architecture. But we also started gathering data long before we knew what software we’d use, so exactly how it will work remains to be figured out.

Has your library done a website redesign or information architecture test recently? What tools or approaches did you find useful? Let us know in the comments!

Notes

  1. I described Google Analytics events before in a previous Tech Connect post.

Migrating to LibGuides 2.0

This summer Springshare released LibGuides 2.0, which is a complete revamp of the LibGuides system. Many libraries use LibGuides, either as course/research guides or in some cases as the entire library website, and so this is something that’s been on the mind of many librarians this summer, whichever side of LibGuides they usually see. The process of migrating is not too difficult, but the choices you make in planning the new interface can be challenging. As the librarians responsible for the migration, we will discuss our experience of planning and implementing the new LibGuides platform.

Making the Decision to Migrate

While migrating this summer was optional, Springshare will probably only support LibGuides 1 for another two years, and at Loyola we felt it was better to move sooner rather than later. Over the past few years there were perpetual LibGuides cleanup projects, and this seemed to be a good opportunity to finalize that work. At the same time, we wanted to experiment with new designs for the library’s website that would bring it in closer alignment with the university’s new brand as well as make the site responsive, and LibGuides seemed like the ideal place to experiment with some of those ideas. Several new features, revealed on Springshare’s blog, resonated with subject-area specialists, which was another reason to push for a migration sooner rather than later. We also wanted to have it in place before the first day of classes, which gave us a few months to experiment.

The Reference and Electronic Resources Librarian, Will Kent, the Head of Reference, Niamh McGuigan, and the Digital Services Librarian, Margaret Heller, worked in concert to make decisions, and they invited all the other reference and instruction librarians (as well as anyone else who was interested) to participate in the process. There were a few ground rules the core team went by, however: we were definitely migrating, and the process would be iterative, i.e. we weren’t waiting for perfection to launch.

Planning the Migration

During the migration planning process, the small team of three librarians worked together to create a timeline, report to the library staff on progress, solicit feedback on the system, and update the LibGuides policies to reflect the new changes and functions. As far as the front-end migration went, we addressed large staff-wide meetings, provided updates, polled subject specialists on the progress, prepared our 400 databases for conversion to the new A-Z list, and demonstrated new features and upcoming changes that staff should be aware of. We would relay updates from Springshare and handle any troubleshooting questions as they arose.

Given the new features – new categories, new ways of searching, the A-Z database list, and more – it was important for us to sit down, discuss standards, and update our content policies. The good news was that most of our content was in good shape for the migration. The process was swift and, barring inevitable tiny bugs, went smoothly.

Our original timeline was to present the migration steps at our June monthly joint meeting of collections and reference staff, and then give staff one month, until the July meeting, to complete the work. For various reasons this ended up stretching until mid-August, but we still launched the day before classes began. We are constantly in the process of updating guide types, adding new resources, and re-classifying boxes to adhere to our new policies.

Working on the Design

LibGuides 2.0 provides two basic templates: a left navigation menu and a top tabbed menu that looks similar to the original LibGuides (additional templates are available with the LibGuides CMS product). We had originally discussed using the left navigation template and began a design based on it, but ultimately people felt more comfortable with the tabbed navigation.
[Image: whiteboard sketch of the LibGuides UI]

For the initial prototype, Margaret worked off a template that we’d used before for Omeka, which mirrors the Loyola University Chicago template very closely. We kept the standard LibGuides template – i.e. 1–3 columns, with the number of columns and the sections within each column determined by the page creator – but added a few additional pieces in the header and footer, as well as making big changes to the tabs.

The first step in planning the design was to understand what customization happened in the template and what happened in the header and footer, which are entered separately in the admin UI. Margaret sketched out our vision for the site on the whiteboard wall to determine existing selectors and those that would need to be added, as well as to get a sense of whether we would need to change the content section at all. In the interest of completing the project in a timely fashion, we determined that the bare minimum of customization needed to unify the research guides with the rest of the university websites would be the first priority.

For those still planning a redesign, the Code4Lib community has many suggestions on what to consider. The main thing to consider is that LibGuides 2.0 is based on the Bootstrap 3.0 framework, which Michael Schofield recently implored us to use responsibly. Other important considerations are the accessibility of the solution you pick, and use of white space.

[Screenshot: the LibGuides Look & Feel UI tabs.] The Look & Feel section under ‘Admin’ has several tabs, with sections for Header and Footer, Custom CSS/JS, and the layout of pages; Guide Pages Layout is the most relevant for this post.

Just as in the previous version of LibGuides, one can enter custom code for the header and footer (which in this case is almost the same as the regular library website), as well as link to a custom CSS file (we did not include any custom JavaScript here, but did include several Google Fonts and our custom icon). The Guide Pages Layout is new, and this is where one can edit the actual template that creates each page. We didn’t make any large changes here, but were still able to achieve a unique look with custom CSS.
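As a rough illustration, the header additions amount to little more than a couple of extra stylesheet links; the font family and file path below are placeholders, not the ones we actually used.

<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Open+Sans:400,700">
<link rel="stylesheet" href="https://libguides.example.edu/custom/luc-libguides.css">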

The new LibGuides platform is responsive, but we needed to account for several items we added to the interface. We added a search box that would allow users to search the entire university website, as well as several new logos, so Margaret added a few media queries to adjust these features on a phone or tablet, as well as adjust the spacing of the custom footer.
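To give a sense of what those media queries look like, here is a hedged sketch; the breakpoint and the class names (.custom-search, .footer-logos) are illustrative rather than our production selectors.

@media screen and (max-width: 767px) {
  /* Stack the university-wide search box above the content on small screens. */
  .custom-search {
    width: 100%;
    margin-bottom: 1em;
  }
  /* Shrink the extra footer logos and tighten their spacing. */
  .footer-logos img {
    max-width: 45%;
    height: auto;
  }
}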

Improving the Design

Our first design was ready to present to the subject librarians a month after the migration process started. It was based on the principle of matching the luc.edu pages closely (example), in which the navigation tabs across the top have unusual cutouts and section titles are very large. No one was very happy with this result, however, as it made the typical LibGuides layout with multiple sections on a page unusable and the tabs not visible enough. While one approach would have been to change the navigation to a left navigation menu and limit the number of sections, the majority of the subject librarians preferred to keep things closer to what they had been, with a view to moving toward a potential new layout in the future.

Once we determined a literal interpretation of the university website was not usable for our content, we found inspiration for the template body from another section of the university website that was aimed at presenting a lot of dynamic content with multiple sections, but kept the standard luc.edu header. This allowed us to create a page that was recognizably part of Loyola, but presented our LibGuides content in a much more usable form.

Sticky Tabs

The other piece we borrowed from the university website was sticky tabs. This was an attempt to make the tabs more visible and usable based on what we knew from usability testing on the old platform and what users would already know from the university site. Because LibGuides is based on the Bootstrap framework, it was easy to drop this in using the Affix plugin (tutorial on how to use this).[1] The tabs are translucent so they don’t obscure content as one scrolls down.
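A minimal sketch of that approach follows; the id, the offset value, and the exact translucency are illustrative, since they depend on your header height and design.

<!-- Pin the tab bar once the page scrolls past the guide header. -->
<ul id="guide-tabs" class="nav nav-tabs" data-spy="affix" data-offset-top="150">
  <li class="active"><a href="#home">Home</a></li>
  <li><a href="#articles">Articles</a></li>
</ul>
<style>
  /* Bootstrap adds the .affix class at the offset; keep the tabs at the top,
     translucent so content stays visible underneath as you scroll. */
  #guide-tabs.affix {
    position: fixed;
    top: 0;
    width: 100%;
    z-index: 1000;
    background-color: rgba(255, 255, 255, 0.85);
  }
</style>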

Our final result was much more popular with everyone. It has a subtle background color and border around each box with a section header that stands out but doesn’t overwhelm the content. The tabs are not at all like traditional LibGuides tabs, functioning somewhat more like regular header links.

[Image: the final result.]
Next Steps

Over the summer we were not able to conduct usability testing on the new interface due to the tight timeline, so the first step this fall is to integrate it into our regular usability testing schedule to make iterative changes based on user feedback. We also need to continue to audit the page to improve accessibility.

The research guides are one of the most used links on our website (anywhere between 10,000 and 20,000 visits per month), so our top priority was to make sure the migration did not interfere with use – both in terms of patron access and content creation by the subject-area librarians. Thanks to our feedback sessions, good communication with Springshare, and a reliable new platform, the migration went smoothly and without interruption.

About our guest author: Will Kent is Reference/Instruction and Electronic Resources Librarian and subject specialist for Nursing and Chemistry at Loyola University Chicago. He received his MSLIS from University of Illinois Urbana-Champaign in 2011 with a certificate in Community Informatics.

Notes
  1. You may remember that in the Bootstrap Responsibly post Michael suggested it wasn’t necessary to use this, but it is the most straightforward way to do it in LibGuides 2.0.

Bootstrap Responsibly

Bootstrap is the most popular front-end framework used for websites. An estimate by meanpath several months ago sat it firmly behind 1% of the web – for good reason: Bootstrap makes it relatively painless to puzzle together a pretty awesome plug-and-play, component-rich site. Its modularity is its key feature, developed so Twitter could rapidly spin-up internal microsites and dashboards.

Oh, and it’s responsive. This is kind of a thing. There’s not a library conference today that doesn’t showcase at least one talk about responsive web design. There’s a book, countless webinars, courses, whole blogs dedicated to it (ahem), and more. The pressure for libraries to have responsive, usable websites can seem to come more from the likes of us than from the patronbase itself, but don’t let that discredit it. The trend is clear and it is only a matter of time before our libraries have their mobile moment.

Library websites that aren’t responsive feel dated, and more importantly they are missing an opportunity to reach a bevy of mobile-only users that in 2012 already made up more than a quarter of all web traffic. Library redesigns are often quickly pulled together in a rush to meet the growing demand from stakeholders, pressure from the library community, and users. The sprint makes the allure of frameworks like Bootstrap that much more appealing, but Bootstrapped library websites often suffer the cruelest of responsive ironies:

They’re not mobile-friendly at all.

Assumptions that Frameworks Make

Let’s take a step back and consider whether using a framework is the right choice at all. A front-end framework like Bootstrap is a Lego set with all the pieces conveniently packed. It comes with a series of templates, a blown-out stylesheet, and scripts tuned to the environment that let users essentially copy-and-paste fairly complex web-machinery into being. Carousels, tabs, responsive dropdown menus, all sorts of buttons, alerts for every occasion, gorgeous galleries, and very smart decisions made by a robust team of developers far more capable than we are.

Except for the specific layout and the content, every Bootstrapped site is essentially a complete organism years in the making. This is also the reason that designers sometimes scoff, joking that these sites look the same. Decked-out frameworks are ideal for rapid prototyping with a limited timescale and budget because the design decisions have by and large already been made. They assume you plan to use the framework as-is, and they don’t make customization easy.

In fact, Bootstrap’s guide points out that any customization is better suited to be cosmetic than a complete overhaul. The trade-off is that Bootstrap is otherwise complete. It is tried, true, usable, accessible out of the box, and only waiting for your content.

Not all Responsive Design is Created Equal

It is still common to hear the selling point for a swanky new site is that it is “responsive down to mobile.” The phrase probably rings a bell. It describes a website that collapses its grid as the width of the browser shrinks until its layout is appropriate for whatever screen users are carrying around. This is kind of the point – and cool, as any of us with a browser-resizing obsession could tell you.

Today, “responsive down to mobile” has a lot of baggage. Let me explain: it represents a telling and harrowing ideology in which mobile is the afterthought for these projects, when mobile optimization should be the most important part. Library design committees don’t actually say this stuff aloud or even consciously think it when researching options, but it is implicit in the phrase. When mobile is an afterthought, the committee presumes users are more likely to visit from a laptop or desktop than from a phone (or refrigerator). This is not true.

See, a website, responsive or not, originally laid out for a 1366×768 desktop monitor in the designer’s office, wistfully depends on visitors with that same browsing context. If it looks good in-office and loads fast, then looking good and loading fast must be the default. “Responsive down to mobile” is divorced from the reality that a similarly wide screen is not the common denominator. As such, responsive down to mobile sites have a superficial layout optimized for the developers, not the user.

In a recent talk at An Event Apart (a conference) in Atlanta, Georgia, Mat Marquis stated that 72% of responsive websites send the same assets to mobile devices as they do to desktops, and this is largely contributing to the web feeling slower. While setting img { width: 100%; } will scale media to fit snugly in its container, it still sends the same high-resolution image to a 320px-wide phone as to a 720px-wide tablet. A 1.6mb page loads differently on a phone than on the machine it was designed on. The digital divide with which librarians are so familiar is certainly nowhere near closed, but while internet access is increasingly available, its ubiquity doesn’t translate to speed:

  1. 50% of users ages 12-29 are “mostly mobile” users, and you know what wireless connections are like,
  2. even so, the weight of the average website (currently 1.6mb) is increasing.

Last December, analysis of data from pagespeed quantiles during an HTTP Archive crawl tried to determine how fast the web was getting slower. The fastest sites are slowing at a greater rate than the big bloated sites, likely because the assets we send–like increasingly high resolution images to compensate for increasing pixel density in our devices–are getting bigger.
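One concrete way to stop shipping desktop-sized images to phones (something the framework will not handle for you) is the newer srcset attribute, sketched here with hypothetical filenames and widths.

<!-- The browser picks the smallest file that satisfies the current viewport
     and pixel density; older browsers fall back to the plain src. -->
<img src="reading-room-480.jpg"
     srcset="reading-room-480.jpg 480w,
             reading-room-960.jpg 960w,
             reading-room-1600.jpg 1600w"
     sizes="100vw"
     alt="Library reading room">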

The havoc this asset bloat wreaks on the load times of “mobile friendly” responsive websites is detrimental. Why? Well, we know that

  • users expect a mobile website to load as fast on their phone as it does on a desktop,
  • three-quarters of users will give up on a website if it takes longer than 4 seconds to load,
  • the optimistic average load time for just a 700kb website on 3G is more like 10-12 seconds

eep O_o.

A Better Responsive Design

So there was a big change to Bootstrap in August 2013 when it was restructured from a “responsive down to mobile” framework to “mobile-first.” It has also been given a simpler, flat design, which has 100% faster paint time – but I digress. “Mobile-first” is key. Emblazon this over the door of the library web committee. Strike “responsive down to mobile.” Suppress the record.

Technically, “mobile-first” describes the structure of the stylesheet using CSS3 Media Queries, which determine when certain styles are rendered by the browser.

.example {
  styles: these load first;
}

@media screen and (min-width: 48em) {
  .example {
    styles: these load once the screen is 48 ems wide;
  }
}

The most basic styles are loaded first. As more space becomes available, designers can assume (sort of) that the user’s device has a little extra juice, that their connection may be better, so they start adding pizzazz. One might make the decision that, hey, most of the devices less than 48em wide (roughly 768px with a base font size of 16px) are probably touch only, so let’s not load any hover effects until the screen is wider.
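In stylesheet terms, that decision might look something like this minimal sketch; the selectors and colors are invented for illustration.

/* Base styles: everything a small, possibly touch-only screen needs. */
.nav a {
  color: #00539f;
  text-decoration: none;
}

/* Hover pizzazz only once the screen is at least 48em wide. */
@media screen and (min-width: 48em) {
  .nav a:hover {
    text-decoration: underline;
    background-color: #f2f2f2;
  }
}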

Nirvana

In a literal sense, mobile-first is asset management. More than that, mobile-first is this philosophical undercurrent, an implicit zen of user-centric thinking that aligns with libraries’ missions to be accessible to all patrons. Designing mobile-first means designing to the lowest common denominator: functional and fast on a cracked Blackberry at peak time; functional and fast on a ten year old machine in the bayou, a browser with fourteen malware toolbars trudging through the mire of a dial-up connection; functional and fast [and beautiful?] on a 23″ iMac. Thinking about the mobile layout first makes design committees more selective of the content squeezed on to the front page, which makes committees more concerned with the quality of that content.

The Point

This is the important statement that Bootstrap now makes. It expects the design committee to think mobile-first. It comes with all the components you could want, but they want you to trim the fat.

Future Friendly Bootstrapping

This is what you get in the stock Bootstrap:

  • buttons, tables, forms, icons, etc. (97kb)
  • a theme (20kb)
  • javascripts (30kb)
  • oh, and jQuery (94kb)

That’s almost 250kb of website. This is like a browser eating a brick of Mackinac Island Fudge – and this high-calorie bloat doesn’t include images. Consider that if the median load time for a 700kb page is 10-12 seconds on a phone, then with out-of-the-box Bootstrap roughly a third of that time is spent loading just the framework’s assets.

“While it’s not totally deal-breaking, 100kb is 5x as much CSS as an average site should have, as well as 15%-20% of what all the assets on an average page should weigh.” – Josh Broton

To put this in context, I like to fall back on Ilya Grigorik’s example comparing load time to user reaction in his talk “Breaking the 1000ms Time to Glass Mobile Barrier.” If the site loads in 0-100 milliseconds, it feels instant to the user. By 100-300ms, the site already begins to feel sluggish. At 300-1000ms, uh – is the machine working? After 1 second there is a mental context switch, which means that the user is impatient, distracted, or consciously aware of the load time. After 10 seconds, the user gives up.

By choosing not to pare down, your Bootstrapped library starts off on the wrong foot.

The Temptation to Widgetize

Even though Bootstrap provides modals, tabs, carousels, autocomplete, and other modules, this doesn’t mean a website needs to use them. Bootstrap lets you tailor which jQuery plugins are included in the final script. The hardest part of any redesign is to let quality content determine the tools, not to let the ability to tabularize or scrollspy become an excuse to implement them. Oh, don’t Google those. I’ll touch on tabs and scrollspy in a few minutes.
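One way to do that tailoring, besides Bootstrap’s online customizer, is to load only the individual plugin files that ship in the framework’s source js/ directory; the paths below are illustrative.

<script src="js/jquery.min.js"></script>
<script src="bootstrap/js/modal.js"></script>
<script src="bootstrap/js/dropdown.js"></script>
<script src="bootstrap/js/tab.js"></script>
<script src="bootstrap/js/alert.js"></script>
<script src="bootstrap/js/button.js"></script>
<!-- transition, scrollspy, tooltip, popover, collapse, carousel, and affix
     are left out, matching the recommendations below. -->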

I am going to be super presumptuous now and walk through the total Bootstrap package, then make recommendations for lightening the load.

Transitions

Transitions.js is a fairly lightweight CSS transition polyfill. What this means is that the script checks to see if your user’s browser supports CSS Transitions, and if it doesn’t then it simulates those transitions with javascript. For instance, CSS transitions often handle the smooth, uh, transition between colors when you hover over a button. They are also a little more than just pizzazz. In a recent article, Rachel Nabors shows how transition and animation increase the usability of the site by guiding the eye.

With that said, CSS Transitions have pretty good browser support and they probably aren’t crucial to the functionality of the library website on IE9.

Recommendation: Don’t Include.

 Modals

“Modals” are popup windows. There are plenty of neat things you can do with them. Additionally, modals are a pain to design consistently for every browser. Let Bootstrap do that heavy lifting for you.

Recommendation: Include

Dropdown

It’s hard to get out of a library website design committee without a lot of links in your menu bar. Dropdown menus are kind of tricky to code, and Bootstrap does a really nice job of keeping them a consistent and responsive experience.

Recommendation: Include

Scrollspy

If you have a fixed sidebar or menu that follows the user as they read, scrollspy.js can highlight the section of that menu you are currently viewing. This is useful if your site has a lot of long-form articles, or if it is a one-page app that scrolls forever. I’m not sure this describes many library websites, but even if it does, you probably want more functionality than Scrollspy offers. I recommend jQuery-Waypoints – but only if you are going to do something really cool with it.

Recommendation: Don’t Include

Tabs

Tabs are a good way to break-up a lot of content without actually putting it on another page. A lot of libraries use some kind of tab widget to handle the different search options. If you are writing guides or tutorials, tabs could be a nice way to display the text.

Recommendation: Include

Tooltips

Tooltips are often descriptive popup bubbles for a section, option, or icon requiring more explanation. Tooltips.js helps handle the predictable positioning of the tooltip across browsers. With that said, I don’t think tooltips are that engaging; they’re sometimes appropriate, but you definitely used to see more of them in the past. Your library’s time is better spent de-jargoning any content that would warrant a tooltip. Need a tooltip? Why not just make whatever needs the tooltip more obvious O_o?

Recommendation: Don’t Include

Popover

Even fancier tooltips.

Recommendation: Don’t Include

Alerts

Alerts.js lets your users dismiss alerts that you might put in the header of your website. It’s always a good idea to give users some kind of control over these things. Better they read and dismiss than get frustrated from the clutter.

Recommendation: Include

Collapse

The collapse plugin allows for accordion-style sections for content distributed similarly to how you might use tabs. The ease-in-ease-out animation can trigger motion sickness and other aaarrghs among users with vestibular disorders. You could just use tabs.

Recommendation: Don’t Include

Button

Button.js gives a little extra jolt to Bootstrap’s buttons, allowing them to communicate an action or state. By that, imagine you fill out a reference form and you click “submit.” Button.js will put a little loader icon in the button itself and change the text to “sending ….” This way, users are told that the process is running, and maybe they won’t feel compelled to click and click and click until the page refreshes. This is a good thing.
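Here is a small sketch of that behavior, with a made-up form button id and wording.

<button id="ask-submit" type="submit" class="btn btn-primary"
        data-loading-text="Sending...">Submit</button>
<script>
  // On click, Button.js swaps in the data-loading-text and disables the
  // button until $('#ask-submit').button('reset') is called.
  $('#ask-submit').on('click', function () {
    $(this).button('loading');
  });
</script>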

Recommendation: Include

Carousel

Carousels are among the most popular design elements on the web. They let a website slideshow content like upcoming events or new materials. Carousels exist because design committees must be appeased. There are all sorts of reasons why you probably shouldn’t put a carousel on your website: they are largely inaccessible, have low engagement, are slooooow, and kind of imply that libraries hate their patrons.

Recommendation: Don’t Include.

Affix

I’m not exactly sure what this does. I think it’s a fixed-menu thing. You probably don’t need this. You can use CSS.

Recommendation: Don’t Include

Now, Don’t You Feel Better?

Just compare the bootstrap.js and bootstrap.min.js files between out-of-the-box Bootstrap and a build tailored to the specs above. Even without considering the differences in the CSS, or the weight of the images not included in a carousel (not to mention the unquantifiable amount of pain you would have inflicted), the numbers are telling:

File              Before  After
bootstrap.js      54kb    19kb
bootstrap.min.js  29kb    10kb

So, Bootstrap Responsibly

There is more to say. When bouncing this topic around Twitter a while ago, Jeremy Prevost pointed out that Bootstrap’s minified assets can be gzipped down to about 20kb total. This is the right way to serve assets from any framework. It requires an Apache config or .htaccess rule. Here is the .htaccess file used in HTML5 Boilerplate; you’ll find it well commented and modular, so go ahead and just copy and paste the parts you need. You can eke out even more performance by “lazy loading” scripts at a given time, but that is a little outside the scope of this post.
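For reference, the compression piece of such a rule is tiny. This is a minimal sketch using Apache’s mod_deflate; the Boilerplate version covers many more MIME types and edge cases.

# Compress text assets (including Bootstrap's CSS and JS) before sending them.
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>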

Here’s the thing: when we talk about having good library websites we’re mostly talking about the look. This is the wrong discussion. Web designs driven by anything but the content the library already has make grasping assumptions about how slick it would look to have this killer carousel, these accordions, nifty tooltips, and of course a squishy responsive design. Subsequently, these responsive sites miss the point: if anything, they’re mobile unfriendly.

Much of the time, a responsive library website is used as a marker that such-and-such site is credible and not irrelevant, but as such the website reflects a lack of purpose (e.g., “this website needs to increase library-card registration”). A superficial understanding of responsive web design and easy-to-grab frameworks entail that the patron is the lowest priority.

 

About Our Guest Author:

Michael Schofield is a front-end librarian in south Florida, where it is hot and rainy – always. He tries to do neat things there. You can hear him talk design and user experience for libraries on LibUX.

Redesigning the Item Record Summary View in a Library Catalog and a Discovery Interface

A. Oh, the Library Catalog

Almost all librarians have a love-hate relationship with the library catalog (OPAC) that their patrons use. Interestingly enough, I hear a lot more complaints about the library catalog from librarians than from patrons. Sometimes it is about the catalog missing certain information that should be there for patrons. But many other times, it’s about how crowded the search results display looks. We all want a clean-looking, easy-to-navigate, and efficient-to-use library catalog. But of course, it is much easier to complain than to come up with a viable alternative.

Aaron Schmidt has recently put forth an alternative design for a library item record. In his blog post, he suggests that a library catalog shift its focus from the bibliographic information (or metadata, if not a book) of a library item to the tasks a patron performs in relation to that item, so that the catalog functions more as “a tool that prioritizes helping people accomplish their tasks, whereby bibliographic data exists quietly in the background and is exposed only when useful.” This is a great point. Throwing all the information at a user at once only overwhelms her/him. Schmidt’s sketch provides a good starting point for rethinking how to design the library catalog’s search results display.

[Image from the blog post “Catalog Design” by Aaron Schmidt]

B. Thinking about Alternative Display Design

The example above is, of course, too simple to apply to the library catalog of an academic library straight away. For a typical academic library patron to determine whether s/he wants to either check out or reserve the item, s/he is likely to need a little more information than the book title, the author, and the book image. For example, for students who look for textbooks, the edition information as well as the year of publication is important. But I take it that Schmidt’s point was to encourage more librarians to think about alternative designs for the library catalog rather than simply compare what is available and pick what seems to be the best among those.

[Screenshot: Florida International University Library catalog – the Mango discovery layer, provided by Florida Virtual Campus]

Granted, there may be limitations in how much we can customize the search results display of a library catalog. But that is not a reason to stop thinking about what the optimal display design would be for library catalog search results. Sketching alternatives can in itself be a good exercise in evaluating the usability of an information system, even if not all of your design can be implemented.

Furthermore, more and more libraries are implementing a discovery layer over their library catalogs, which provides much more room to customize the display of search results than the traditional catalog. Open source discovery systems such as Blacklight or VuFind provide great flexibility in customizing the search results display. Even proprietary discovery products such as Primo, EDS, and Summon offer some level of customization to libraries.

Below, I will discuss some principles to follow in sketching alternative designs for search results in a library catalog, present some of my own sketches, and show other examples implemented by other libraries or websites.

C. Principles

So, if we want to improve the item record summary display to be more user-friendly, where can we start and what kind of principles should we follow? These are the principles that I followed in coming up with my own design:

  • De-clutter.
  • Reveal just enough information that is essential to determine the next action.
  • Highlight the next action.
  • Shorten texts.

These are not new principles. They are widely discussed and followed by many web designers, including librarians who participate in their libraries’ website redesigns. But we rarely apply them to the library catalog because we think that the catalog is somehow beyond our control. This is not necessarily the case, however. Many libraries implement discovery layers to give a completely different and improved look from that of their ILS’s default display.

Creating a satisfactory design on one’s own instead of simply pointing out what doesn’t work or look good in existing designs is surprisingly hard but also a refreshing challenge. It also brings about the positive shift of focus in thinking about a library catalog from “What is the problem in the catalog?” to “What is a problem and what can we change to solve the problem?”

Below I will show my own sketches for an item record summary view for the library catalog search results. These are clearly a combination of many other designs that I found inspiring in other library catalogs. (I will provide the source of those elements later in this post.) I tried to mix and revise them so that the result would follow those four principles above as closely as possible. Check them out and also try creating your own sketches. (I used Photoshop for creating my sketches.)

D. My Own Sketches

Here is the basic book record summary view. What I tried to do here is give just enough information for the next action but not more than that: title, author, type, year, publisher, and the number of library copies and holds. The next action for a patron is to check the item out. Undecided patrons, on the other hand, will click the title to see the detailed item record, or have the detailed record texted, printed, e-mailed, or used in other ways.

(1) A book item record

[Sketch: a book item record]

This is a record of a book that has an available copy to check out. Only when a patron decides to check out the item is the next set of information relevant to that action – the item location and the call number – shown.

(2) With the check-out button clicked

[Sketch: the record with the check-out box open]

If no copy is available for check-out, the best way to display the item is to signal that check-out is not possible and to highlight an alternative action. You can either do this by graying out the check-out button or by hiding the button itself.

Many assume that adding more information would automatically increase the usability of a website. While there are cases in which this could be true, often a better option is to reveal information only when it is relevant.

I decided to gray out the check-out button when there is no available copy and display the reserve button, so that patrons can place a hold. Information about how many copies the library has and how many holds are placed (“1 hold / 1 copy”) would help a patron to decide if they want to reserve the book or not.
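As a rough sketch of that state in markup and styles (the class names and colors are invented for illustration):

<!-- Check-out is unavailable, so its button is disabled and grayed out,
     and the reserve action is highlighted instead. -->
<button class="btn-checkout" disabled>Check Out</button>
<button class="btn-reserve">Reserve</button>
<span class="copies-note">1 hold / 1 copy</span>
<style>
  .btn-checkout[disabled] { background: #ccc; color: #777; cursor: not-allowed; }
  .btn-reserve { background: #2e7d32; color: #fff; }
</style>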

(3) A book item record when check-out is not available

[Sketch: a book item record when check-out is not available]

I also sketched two other records: one for an e-book without the cover image and the other with the cover image. Since the appropriate action in this case is reading online, a different button is shown. You may place the ‘Requires Login’ text or simply omit it, because most patrons will understand that they have to log in to read a library e-book, and the read-online button will itself prompt a login once clicked anyway.

(4) An e-book item record without a book cover

[Sketch: an e-book item record without a book cover]

(5) An e-book item record with a book cover

[Sketch: an e-book item record with a book cover]

(6) When the ‘Read Online’ button is clicked, an e-book item record with multiple links/providers

When there are multiple options for one electronic resource, those options can be presented in a similar way in which multiple copies of a physical book are shown.

[Sketch: an e-book record with multiple links/providers]

(7) A downloadable e-book item record

For a downloadable resource, changing the name of the button to ‘Download’ makes it much more informative.

[Sketch: a downloadable e-book item record]

(8) An e-journal item record

[Sketch: an e-journal item record]

(9) When the ‘Read Online’ button is clicked, an e-journal item record with multiple links/providers

[Sketch: an e-journal record with multiple links/providers]

E. Inspirations

Needless to say, I did not come up with my sketches from scratch. Here are the library catalogs whose item record summary view inspired me.

[Screenshot: Toronto Public Library catalog]

Toronto Public Library’s catalog has an excellent item record summary view, which I used as a base for my own sketches. It provides just enough information for the summary view. The title is hyperlinked to the detailed item record, and the summary view displays the material type and the year in bold for emphasis. The big green button also clearly shows the next action to take. It also does away with unnecessary labels that are common in library catalogs, such as ‘Author:’, ‘Published:’, ‘Location:’, and ‘Link:’.

User Experience Designer Ryan Feely, who worked on Toronto Public Library’s catalog search interface, pointed out the difference between a link and an action in his 2009 presentation “Toronto Public Library Website User Experience Results and Recommendations.” Actions need to be highlighted as a button or in some similar design to stand out to users (slide 65). And ideally, only the actions available for a given item should be displayed.

Another good point which Feely makes (slide 24) is that an icon is often the center of attention, so a different icon should be used to signify different types of materials, such as a DVD or an e-journal. Below are the icons that Toronto Public Library uses for the various types of library materials that do not have unique item images. These are much more informative than the common “No image available” icon.

[Icons: eAudiobook, e-journal, eMusic, vinyl, VHS, and eVideo]

University of Toronto Libraries has recently redesigned their library catalog to be completely responsive. The item record summary view in the catalog is brief and clear. Each record in the summary view also uses red and green icons that help patrons determine the availability of an item quickly. The icons for citing, printing, e-mailing, or texting the item record that often show up in the catalog are hidden in the options icon at the bottom right corner. When the mouse hovers over it, a variety of choices appears.

[Screenshots: University of Toronto Libraries catalog search results]

Richland Library’s catalog displays library items in a grid as a default, which makes the catalog more closely resemble an online bookstore or shopping website. Patrons can also change the view to have more details shown with or without the item image. The item record summary view in the default grid view is brief and to the point. The main type of patron action, such as Hold or Download, is clearly differentiated from other links as an orange button.

[Screenshots: Richland Library catalog search results]

Stanford University Library offers a grid view (although not as the default, as at Richland Library). The grid view is very succinct, with the item title, call number, availability information in the form of a green checkmark, and the item location.

[Screenshot: Stanford University Library catalog grid view]

What is interesting about the Stanford University Library catalog (which uses Blacklight) is that when a patron hovers the mouse over an item in the grid view, the item image displays a preview link, and when clicked, more detailed information is shown as an overlay.

[Screenshot: the preview overlay in the Stanford University Library catalog]

Brigham Young University completely customized the user interface of the Primo product from Ex Libris.

[Screenshot: Brigham Young University’s customized Primo interface]

And the University of Michigan Library customized the search results display of the Summon product from Serials Solutions.

[Screenshot: the University of Michigan Library’s customized Summon results]

Here are some other item record summary views that are also fairly straightforward and uncluttered but can be improved further.

Sacramento Public Library uses the open source discovery system, VuFind, with little customization.

[Screenshot: catalog search results]

I have not done an extensive survey of library catalogs to see which one has the best item record summary view. But it seems to me that, in general, academic libraries are more likely to provide more information than necessary in the item record summary view and also to require patrons to click a link instead of displaying relevant information right away. For example, the ‘Check availability’ link that is shown in many library catalogs is better replaced by the actual availability status of ‘available’ or ‘checked out.’ Similarly, the ‘Full-text online’ or ‘Available online’ link may be clearer with a button titled ‘Read online’ or ‘Access online.’

F. Challenges and Strategies

The biggest challenge in designing the item record summary view is to strike the balance between too little information and too much information about the item. Too little information will require patrons to review the detailed item record just to identify if the item is the one they are looking for or not.

Since librarians know many features of the library catalog, they tend to err on the side of throwing all available features into the item record summary view. But too much information not only overwhelms patrons but also makes it hard for them to locate the most relevant information at that stage and to identify the next available action. Any information irrelevant to a given task is no more than noise to a patron.

This is not a problem unique to a library catalog but is generally applicable to any system that displays search results. In their book Designing the Search Experience, Tony Russell-Rose and Tyler Tate describe this as achieving ‘the optimal level of detail’ (p. 130).

Useful strategies for achieving the optimal level of detail for the item summary view in the case of the library catalog include:

  • Removing all unnecessary labels
  • Using appropriate visual cues to make the record less text-heavy
  • Highlighting next logical action(s) and information relevant to that action
  • Systematically guiding a patron to the actions that are relevant to a given item and her/his task in hand

Large online shopping websites – Amazon, Barnes & Noble, and eBay – all make good use of these strategies. There are no labels such as ‘price,’ ‘shipping,’ ‘review,’ etc. Amazon highlights the price and the user reviews most, since those are the two most decisive factors for consumers at the browsing stage. Amazon offers only enough information for a shopper to determine whether s/he is further interested in purchasing the item. So there is not even a Buy button in the summary view. Once a shopper clicks the item title link and views the detailed item record, the buying options and the ‘Add to Cart’ button are displayed prominently.

[Screenshot: Amazon search results]

Barnes & Noble’s default display for search results is the grid view, and the item record summary view offers only the most essential information – the item title, material type, price, and the user ratings.

[Screenshot: Barnes & Noble search results grid]

eBay’s item record summary view also offers only the most essential information, the highest bid and the time left, while people are browsing the site deciding whether to check out the item in further detail or not.

[Screenshot: eBay search results]

G. More Things to Consider

An item record summary view, which we have discussed so far, is surely the main part of the search results page. But it is only a small part of the search results display and even a smaller part of the library catalog. Optimizing the search results page, for example, entails not just re-designing the item record summary view but choosing and designing many other elements of the page such as organizing the filtering options on the left and deciding on the default and optional views. Determining the content and the display of the detailed item record is another big part of creating a user-friendly library catalog. If you are interested in this topic, Tony Russell-Rose and Tyler Tate’s book Designing the Search Experience (2013) provides an excellent overview.

Librarians are professionals trained in many uses of a searchable database: a known-item search, exploring and browsing, searching with incomplete details, compiling a set of search results, locating certain types of items by location, type, subject, etc. But since our work is also on the operations side of a library, we often make the mistake of regarding the library catalog as one huge inventory system that should hold and display all of the acquisitions, cataloging, and holdings data for the library collection. But library patrons are rarely interested in seeing such data. They are interested in identifying relevant library items and using them. All the other information is simply a guide to achieving this ultimate goal, and the library catalog is just another tool in their many toolboxes.

Online shopping sites optimize their catalogs to make purchasing as efficient and simple as possible. Both libraries and online shopping sites share the common interest of guiding users to one ultimate task – identifying an appropriate item for borrowing, access, or purchase. When creating user-oriented library catalog sketches, it is helpful to check out how non-library websites display their search results as well.

[Screenshots: search results displays from non-library websites]

Once you start looking at other examples, you will realize that there are many ways to display search results, and you will soon want to sketch your own alternative design for the search results display in the library catalog and the discovery system. What do you think would be the optimal level of detail for library items in the library catalog or the discovery interface?


Creating themes in Omeka 2.0

Omeka is an easy-to-use content management system for digital exhibits created by the Roy Rosenzweig Center for History and New Media. It’s very modular, so you can customize it for various functions. I won’t go into the details here of how to set up Omeka, but you can read documentation and see example collections at Omeka.org. If you want to experiment with Omeka without installing it on your own server, you can set up a hosted account at Omeka.net.

Earlier this year Omeka was completely rewritten and released as a 2.0 version (now 2.1). As with many open source content management systems, it took a while for the contributed plug-ins and themes to catch up to the new release. As of July, most of the crucial contributed plug-ins were available, and if you haven’t yet updated your installation this is a good time to think about doing so. In this post I’m going to focus on the process of customizing Omeka 2.0 for your institution, and specifically on creating a custom theme. While there are now several good themes available for 2.0, you will probably want to make a default theme that matches the rest of your website. One of the nice features of Omeka, quite different from other content management systems, is that it is very easy for the people who create exhibits to pick a custom theme that differs from the default theme. That said, providing a custom theme for your institution makes it easy for visitors to know where they are, and will also make things easier for the staff who are creating exhibits, since you can adapt the theme to their needs.

Planning

Like any design project, you should start with a discussion with the people who use the system most. (If you are new to design, check the ACRL TechConnect posts on design.) In my case, there are two archives on campus that both use Omeka for their exhibits. Mock up what the layout should look like – you may not be able to match it perfectly, but use this as a guide for future development. We came up with a rough sketch based on what the archivist liked and didn’t like about the templates available, and worked together on determining the priorities for the design. (Side note: if you can get your whole wall painted with whiteboard paint, this is a very fun collaborative project.)

Rough sketch of ideas for new theme.
Development

Development is very easy to start when you are modifying an existing theme. Start with a theme (there are only a few that are 2.0 compatible) that is close to what you need. Rather than the subtheme system you may be used to with Drupal or WordPress, in Omeka you simply pick the theme you want to hack on, copy the entire directory, and rename it.

Here was the process I followed to build my theme. I suggest that you set up a local development environment (I used XAMPP) to do this work, but make sure that you have at least one exhibit to test, since some of the CSS is different for exhibits than for the rest of the site.

  • Pick a theme
Seasons (with the Autumn color scheme)

I started with the Seasons theme. I copied the seasons directory from the themes directory and pasted it back with a new name of luctest (which I renamed when it was time to move it to a production environment).

  • Modify theme.ini

This is what you will start with. You really only need to edit the author, title, and description, though you can change the rest as well.

[theme]
author = "Roy Rosenzweig Center for History and New Media"
title = "Seasons"
description = "A colorful theme with a configuration option to switch style sheets for a particular season, plus 'night'."
license = "GPLv3"
website = "<a href="http://omeka.org">http://omeka.org</a>"
support_link = "<a href="http://omeka.org/forums/forum/themes-and-public-display">http://omeka.org/forums/forum/themes-and-public-display</a>"
omeka_minimum_version="2.0"
omeka_target_version="2.0"
version="2.1.1"
tags="yellow, blue, summer, season, fall, orange, green, dark"
  • Modify config.ini

Check which elements are set in the configuration (i.e., which options the person creating the exhibit, such as an archivist, can set) and which you need to set in the theme. This can cause a lot of frustration when you attempt to style an element whose value is actually set by the user. If you don't want to allow the user to change anything, you can take that option out of config.ini; just make sure you've set it elsewhere.

[config]

; Style Sheet
style_sheet.type = "select"
style_sheet.options.label = "Style Sheet"
style_sheet.options.description = "Choose a style sheet"
style_sheet.options.multiOptions.spring = "Spring"
style_sheet.options.multiOptions.summer = "Summer"
style_sheet.options.multiOptions.autumn = "Autumn"
style_sheet.options.multiOptions.winter = "Winter"
style_sheet.options.multiOptions.night = "Night"
style_sheet.options.value = "winter"

logo.type = "file"
logo.options.label = "Logo File"
logo.options.description = "Choose a logo file. This will replace the site title in the header of the theme. Recommended maximum width for the logo is 500px."
logo.options.validators.count.validator = "Count"
logo.options.validators.count.options.max = "1"

display_featured_item.type = "checkbox"
display_featured_item.options.label = "Display Featured Item"
display_featured_item.options.description = "Check this box if you wish to show the featured item on the homepage."
display_featured_item.options.value = "1"

display_featured_collection.type = "checkbox"
display_featured_collection.options.label = "Display Featured Collection"
display_featured_collection.options.description = "Check this box if you wish to show the featured collection on the homepage."
display_featured_collection.options.value = "1"

display_featured_exhibit.type = "checkbox"
display_featured_exhibit.options.label = "Display Featured Exhibit"
display_featured_exhibit.options.description = "Check this box if you wish to show the featured exhibit on the homepage."
display_featured_exhibit.options.value = "1"

homepage_recent_items.type = "text"
homepage_recent_items.options.label = "Homepage Recent Items"
homepage_recent_items.options.description = "Choose a number of recent items to be displayed on the homepage."
homepage_recent_items.options.maxlength = "2"

homepage_text.type = "textarea"
homepage_text.options.label = "Homepage Text"
homepage_text.options.description = "Add some text to be displayed on your homepage."
homepage_text.options.rows = "5"
homepage_text.options.attribs.class = "html-input"

(This is just a sample of part of the config.ini file).

  • Modify CSS

Open up css/style.css and check which elements you need to modify (note that some themes may have their style sheets divided up differently). Some items are obvious; for others you will have to use Firebug or another inspector tool to determine which class styles the element. You can always ask in the Omeka themes and display forum if you can't figure it out.

The Seasons theme has different styles for each color scheme, and in the interest of time I picked the color scheme closest to the one I wanted to end up with. You could use the concept of different schemes to identify the collections and/or exhibits of different units. Make sure you read through the whole style sheet first to determine which elements are theme-wide and which are set in the color scheme.
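To give a concrete sense of the kind of overrides involved, here is a minimal sketch. The selectors and colors below are illustrative assumptions rather than the actual class names in the Seasons style sheets, so confirm each one against your copy of the theme with Firebug or your browser's inspector before relying on it.

/* Illustrative only: swap the seasonal banner color for an institutional color */
header {
    background-color: #5c2d91; /* hypothetical university purple */
}

/* Illustrative only: keep the site title and navigation readable on the new background */
#site-title a,
ul.navigation li a {
    color: #fff;
}

/* Illustrative only: match body text to the parent site's typeface */
body {
    font-family: Georgia, "Times New Roman", serif;
}

Whether a change like this belongs in style.css or in the style sheet for the color scheme you picked depends on how the theme splits up its styles, which is exactly why reading through the whole sheet first pays off.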

  • Test, test, test

The 2.0 themes that I’ve experimented with are all responsive and work well with different browsers. This probably goes without saying, but if you have changed the spacing at all, make sure you test your design in multiple window sizes and devices.

  • Voila
Final version of redesigned theme.

We have a few additional items to add to this design, but it’s met our immediate needs very well, and most importantly matches the design of the Archives and Special Collections website so it’s clear to users that they are still in the right place.

Next steps

Since this content management system was new to me, I still have a lot to learn about the best ways to do certain things. This experience was helpful not just for learning Omeka, but also as a small-scale test of planning a new theme for our entire library website, which runs on Drupal.

One Week, One Tool, Many Lessons

When I was a kid, I cherished the Paula Danziger book Remember Me to Harold Square, in which a group of kids call themselves the Serendipities, named for the experience of making fortunate discoveries accidentally.  Last week I found myself remembering the book over and over again as I helped develop Serendip-o-matic, a tool that introduces serendipity to research, as part of a twelve-person team attending One Week | One Tool at the Roy Rosenzweig Center for History and New Media at George Mason University (RRCHNM).

In this blog post, I’ll take you through the development of the “serendipity machine”, from the convening of the team to the selection and development of the tool.  The experience turned out to be an intense learning experience for me, so along the way, I will share some of my own fortunate discoveries.

(Note: this is a pretty detailed play-by-play of the process.  If you’re more interested in the result, please see the RRCHNM news items on both our process and our product, or play with Serendip-o-matic itself.)

The Eve of #OWOT

@foundhistory: One Week | One Tool is Here! Meet the crew and join in! http://t.co/4EyDPllNnu #owot #buildsomething

Approximately thirty people applied to be part of One Week | One Tool (OWOT), an Institute for Advanced Topics in the Digital Humanities sponsored by the National Endowment for the Humanities.  Twelve were selected, and we arrive on Sunday, July 28, 2013, convening in the Well, the watering hole at the Mason Inn.

As we circle around the room and introduce ourselves, I can’t help but marvel at the myriad skills of this group.  Each person arrives with more than one bag of tricks from which to pull, including skills in web development, historical scholarship, database administration, librarianship, human-computer interaction and literature studies.  It takes about a minute before someone mentions something I’ve never heard of and so the Googling begins.  (D3, for the record, is a Javascript library for data visualizations).

Tom Scheinfeldt (@foundhistory), the RRCHNM director-at-large who organized OWOT, delivers the pre-week pep talk and discusses how we will measure success.  The development of the tool is important, but so is the learning experience for the twelve assembled scholars.  It’s about the product, but also about the process.  We are encouraged to learn from each other, to “hitch our wagon” to another smart person in the room and figure out something new.

As for the product, the goal is to build something that is used.  This means that defining and targeting the audience is essential.

The tweeting began before we arrived, but typing starts in earnest at this meeting and the #owot hashtag is populated with our own perspectives and feedback from the outside.  Feedback, as it turns out, will be the priority for Day 1.

Day 1

@DoughertyJack: “One Week One Tool team wants feedback on which digital tool to build.”

Mentors from RRCHNM take the morning to explain some of the basic tenets of what we’re about to do.  Sharon Leon talks about the importance of defining the project: “A project without an end is not a project.”  Fortunately, the one week timeline solves this problem for us initially, but there’s the question of what happens after this week?

Patrick Murray-John takes us through some of the finer points of developing in a collaborative environment.  Sheila Brennan discusses outreach and audience, and continues to emphasize the point from the night before: the audience definition is key.  She also says the sentence that, as we'll see, would become my mantra for the rest of the project: "Being willing to make concrete decisions is the only way you're going to get through this week."

All of the advice seems spot-on and I find myself nodding my head.  But we have no tool yet, and so how to apply specifics is still really hazy.  The tool is the piece of the puzzle that we need.

We start with an open brainstorming session, which results in a filled whiteboard of words and concepts.  We debate audience, we debate feasibility, we debate openness.  Debate about openness brings us back to the conversation about audience – for whom are we being open?  There's a lot of conversation, but at the end we essentially have just a word cloud associated with the projects in our heads.

So, we then take those ideas and try to express them in the following format: X tool addresses Y need for Z audience.  I am sitting closest to the whiteboards so I do a lot of the scribing for this second part and have a few observations:

  • there are pet projects in the room – some folks came with good ideas and are planning to argue for them
  • our audience for each tool is really similar; as a team we are targeting “researchers”, though there seems to be some debate on how inclusive that term is.  Are we including students in general?  Teachers?  What designates “research”?  It seems to depend on the proposed tool.
  • the problem or need is often hard to articulate.  “It would be cool” is not going to cut it with this crowd, but there are some cases where we’re struggling to define why we want to do something.
Clarifying Ideas. Photo by Mia Ridge.

A few group members begin taking the rows and creating usable descriptions and titles for the projects in a Google Doc, as we want to restrict public viewing while still sharing within the group.  We discuss several platforms for sharing our list with the world, and land on IdeaScale.  We want voters to be able to vote AND comment on ideas, and IdeaScale seems to fit the bill.  We adjourn from the Center and head back to the hotel with one thing left to do: articulate these ideas to the world using IdeaScale and get some feedback.

The problem here, of course, is that everyone wants to make sure their idea is communicated effectively, and we need to agree on public descriptions for the projects.  Finally, it seems like there's a light at the end of the tunnel…until we hit another snag.  IdeaScale requires a login to vote or comment, and there's understandable resistance around the table to that idea.  For a moment, it feels like we're back to square one, or at least square five.  Team members begin researching alternatives, but nothing is perfect, we've already finished dinner, and we need the votes by 10am tomorrow.  So we stick with IdeaScale.

And, not for the last time this week, I reflect on Sheila’s comment, “being willing to make concrete decisions is the only way you’re going to get through this week.”  When new information, such as the login requirement, challenges the concrete decision you made, how do you decide whether or not to revisit the decision?  How do you decide that with twelve people?

I head to bed exhausted, wondering about how many votes we’re going to get, and worried about tomorrow: are we going to make a decision?

Day 2

@briancroxall: “We’ve got a lot of generous people in the #owot room who are willing to kill their own ideas.”

It turns out that I need not have worried.  In the winnowing from 11 choices down to 2, many members of the team are willing to say, "my tool can be done later" or "that one can be done better outside this project."  Approximately 100 people weighed in on the IdeaScale site, and those votes are helpful as we weigh each idea.  Scott Kleinman leads us in a discussion about feasibility of implementation and commitment in the room, and the choices begin to fall away.  At the end, there are four, but after a few rounds of voting we're down to two with equal votes that must be differentiated.  After a little more discussion, Tom proposes a voting system that allows folks to weight their votes in terms of commitment, and the Serendipity project wins out.  The drafted idea description reads:

“A serendipitous discovery tool for researchers that takes information from your personal collection (such as a Zotero citation library  or a CSV file) and delivers content (from online libraries or collections like DPLA or Europeana) similar to it, which can then be visualized and manipulated.”

We decide to keep our project a secret until our launch and we break for lunch before assigning teams.  (Meanwhile, #owot hashtag follower Sherman Dorn decides to create an alternative list of ideas – One Week Better Tools – which provides some necessary laughs over the next couple of days).

After lunch, it’s time to break out responsibilities.  Mia Ridge steps up, though, and suggests that we first establish a shared understanding of the tool.  She sketches on one of the whiteboards the image which would guide our development over the next few days.

This was a takeaway moment for me.  I frequently sketch out my projects, but I’m afraid the thinking often gets pushed out in favor of the doing when I’m running low on time.  Mia’s suggestion that we take the time despite being against the clock probably saved us lots of hours and headaches later in the project.  We needed to aim as a group, so our efforts would fire in the same direction.  The tool really takes shape in this conversation, and some of the tasks are already starting to become really clear.  (We are also still indulging our obsession with mustaches at this time, as you may notice.)

Mia Ridge leads discussion of the application. Photo by Meghan Frazer.

Tom leads the discussion of teams.  He recommends three: a project management team, a design/dev team and an outreach team.  The project managers should be selected first, and they can select the rest of the teams.  The project management discussion is difficult; there’s an abundance of qualified people in the room.  From my perspective, it makes sense to have the project managers be folks who can step in and pinch hit as things get hectic, but we also need our strongest technical folks on the dev team.  In the end, Brian Croxall and I are selected to be the project management team.

We decide to ask the remaining team members where they would like to be and see where our numbers end up.  The numbers turn out great: 7 for design/dev and 3 for outreach, with two design/dev team members slated to help with outreach needs as necessary.

The teams hit the ground running and begin prodding the components of the idea. The theme of the afternoon is determining the feasibility of this "serendipity engine" we've elected to build.  Mia Ridge, leader of the design/dev team, runs a quick skills audit and gets down to the business of selecting programming languages, frameworks, and strategies for the week. They choose to work in Python with the Django framework.  Isotope, a jQuery plugin I use in my own development, is selected to drive the results page.  A private GitHub repository is set up under a code name.  (Beyond Isotope, HTML, and CSS, I'm a little out of my element here, so for more technical details, please visit the public repository's wiki.)  The outreach team lead, Jack Dougherty, brainstorms with his team on overall outreach needs and high-priority tasks.  The Google document from yesterday becomes a Google Drive folder, with shells for press releases, a contact list for marketing, and work plans for both teams.

This is the first point where I realize that I am going to have to adjust to a lack of hands on work.  I do my best when I’m working a keyboard: making lists, solving problems with code, etc.  As one of the project managers, my job is much less on the keyboard and much more about managing people and process.

When the teams come back together to report out, there's a lot of getting each side up to speed, and afterwards our mentors advise us that the meetings have to be shorter.  We're already at the end of day 2, though both teams will be working into the night on their work plans, and Brian and I still need to set the schedule for tomorrow.

We’re past the point where we can have a lot of discussion, except for maybe about the name.

Day 3

@briancroxall: Prepare for more radio silence from #owot today as people put their heads down and write/code.

@DoughertyJack: At #owot we considered 120 different names for our tool and FINALLY selected number 121 as the winner. Stay tuned for Friday launch!

Wednesday is tough.  We have to come up with a name, and all that exploration from yesterday needs to be a prototype by the end of the day. We are still hammering out the language we use in talking to each other and there’s some middle ground to be found on terminology. One example is the use of the word “standup” in our schedule.  “Standup” means something very specific to developers familiar with the Agile development process whereas I just mean, “short update meeting.”  Our approach to dealing with these issues is to identify the confusion and quickly agree on language we all understand.

I spend most of the day with the outreach team.  We have set a deadline for presenting names at lunchtime and are hoping the whole team can vote after lunch.  This schedule turns out to be folly as the name takes most of the day and we have to adjust our meeting times accordingly.  As project managers, Brian and I are canceling meetings (because folks are on a roll, we haven’t met a deadline, etc) whenever we can, but we have to balance this with keeping the whole team informed.

Camping out in a living room type space in RRCHNM, spread out among couches and looking at a Google Doc being edited on a big-screen TV, the outreach team and various interested parties spend most of the day brainstorming names.  We take breaks to work on the process press release and other essential tasks, but the name is the thing for the moment.  We need a name to start working on branding and logos.  Product press releases need to be completed, the dev team needs a named target and of course, swag must be ordered.

It is in this process, however, that an Aha! moment occurs for me.  We have been discussing names for a long time and folks are getting punchy.  The dev team lead and our designer, Amy Papaelias, have joined the outreach team along with most of our RRCHNM mentors.  I want to revisit something dev team member Eli Rose said earlier in the day.  To paraphrase, Eli said that he liked the idea that the tool automated or mechanized the concept of surprise.  So I repeat Eli's concept to the group, and it isn't long after that that Mia says, "what about Serendip-o-matic?"  The group awards the name with head nods and "I like that"s, and after running it by the developers and dealing with our reservations (e.g., hyphens, really?), history is made.

As relieved as I am to finally have a name, the bigger takeaway for me here is about the role of the manager.  I am not responsible for the inspiration behind the name or the name itself, but instead for repeating the concept to the right combination of people at a time when the team was stuck.  The project managers can create an opportunity for the brilliant folks on the team to make connections.  This thought serves as a consolation to me as I continue to struggle without concrete tasks.

Meanwhile, on the other side of the building, the rest of the dev team is pushing to finish code.  We see a working prototype at the end of the day, and folks are feeling good, but it's been a long day.  So we go to dinner as a team and leave the work behind for a couple of hours, though Amy is furiously sketching at various moments throughout the meal as she tries to develop a look and feel for this newly named thing.

On the way home from dinner, I think, "there's only two days left."  All of a sudden it feels like we haven't gotten anywhere.

Day 4

@shazamrys: Just looking at first logo designs from @fontnerd. This is going to be great. #owot

We start the day hectic but excited.  Both teams were working into the night, Amy has a logo strategy, and it goes over great.  Still, there's lots of work to do today. Brian and I sit down in the morning and try to discuss what happens with the tool and the team next week and the week after, before jumping in and trying to help the teams where we can: finding a laptop for a dev team member, connecting someone with JavaScript experience to a team member who is stuck, or helping edit the press release.  This day is largely a blur in my recollection, but there are some salient points that stick out.

The decision to add the Flickr API to our work in order to access the Flickr Commons is made with the dev team, based on the feeling that we have enough time and the images located there enhance our search results and expand our coverage of subject areas and geographic locations.

We also spend today addressing issues.  The work of both teams overlaps in some key areas.  In the afternoon, Brian and I realize that we have mishandled some of the communication regarding language on the front page and both teams are working on the text.  We scramble to unify the approaches and make sure that efforts are not wasted.

This is another learning moment for me.  I keep flashing on Sheila's words from Monday, and worry that our concrete decision-making process is suffering from "too many cooks in the kitchen." Everyone on this team has a stake in the success of this project, and we have lots of smart people with valid opinions.  But everyone can't vote on everything, and we are spending too much time building consensus now, with a mere twenty-four hours to go.  As a project manager, part of my job is to start streamlining and making executive decisions, but I am struggling with how to do that.

As we prepare to leave the center at 6pm, things are feeling disconnected.  This day has flown by.  Both teams are overwhelmed by what has to get done before tomorrow, and despite hard work throughout the day, we're still trying to get a dev server and a production server up and running.  As we regroup at the Inn, the dev team heads upstairs to a quiet space to work and eat, and the outreach team sets up in the lobby.

Then, good news arrives.  Rebecca Sutton-Koeser has managed to get both the dev and production servers up and the code is able to be deployed.  (We are using Heroku and Amazon Web Services specifically, but again, please see the wiki for more technical details.)

The outreach team continues to work on documentation and release strategy, and Brian and I continue to step in where we can.  Everyone is working until midnight or later, but feeling much better about our status than we did at 6pm.

Day 5

@raypalin: If I were to wait one minute, I could say launch is today. Herculean effort in final push by my #owot colleagues. Outstanding, inspiring.

The final tasks are upon us.  Scott Williams moves on from his development responsibilities to facilitate user testing, which was forced to slide from Thursday due to our server problems.  Amanda Visconti works to get the interactive results screen finalized.  Ray Palin hones our list of press contacts and works with Amy to get the swag design in place. Amrys Williams collaborates with the outreach team and then Sheila to publish the product press release.  Both the dev and outreach teams triage and fix and tweak and defer issues as we move toward our 1pm "code chill," the point at which we hope to have the code in a fairly stable state.

We are still making too many decisions with too many people, and I find myself weighing not only the options but how attached people are to either option.  Several choices are made because they reflect the path of least resistance.  The time to argue is through and I trust the team’s opinions even when I don’t agree.

We end up running a little behind and the code freeze scheduled for 2pm slides to 2:15.  But at this point we know: we’re going live at 3:15pm.

Jack Dougherty has arranged a Google hangout with Dan Cohen of the Digital Public Library of America and Brett Bobley and Jen Serventi of the NEH Office of Digital Humanities, which the project managers co-host.  We broadcast the conversation live via the One Week | One Tool website.

The code goes live and the broadcast starts but my jitters do not subside…until I hear my teammates cheering in the hangout.  Serendip-o-matic is live.

The Aftermath

At 8am on Day 6, Serendip-o-matic had its first pull request and later in the day, a fourth API – Trove of Australia – was integrated.  As I drafted this blog post on Day 7, I received email after email generated by the active issue queue and the tweet stream at #owot is still being populated.  On Day 9, the developers continue to fix issues and we are all thinking about long term strategy.  We are brainstorming ways to share our experience and help other teams achieve similar results.

I found One Week | One Tool incredibly challenging and therefore a highly rewarding experience.  My major challenge lay in shifting my mindset from that of someone hammering on a keyboard in a one-person shop to that of a project manager for a twelve-person team.  I write for this blog because I like to build things and share how I built them, but I have never experienced the building from this angle before.  The tight timeline ensured that we would not have time to go back and agonize over decisions, so it was a bit like living in a project management accelerator.  We had to recognize issues, fix them, and move on quickly, so as not to derail the project.

However, even in those times when I became acutely more aware of the clock, I never doubted that we would make it.  The entire team is so talented; I never lost my faith that a product would emerge.  And, it’s an application that I will use, for inspiration and for making fortunate discoveries.

@meghanfrazer: I am in awe. Amazing to work with such a smart, giving team. #owot #feedthemachine

The #OWOT team. Photo by Sharon Leon.

(More on One Week | One Tool, including other blog entries, can be found by visiting the One Week | One Tool Zotero Group.)

Taking a trek with SCVNGR: Developing asynchronous, mobile orientations and instruction for campus

Embedding the library in campus-wide orientations, as well as developing standalone library orientations, is often part of outreach and first-year experience work. Reaching all students can be a challenge, so finding opportunities to better engage campus helps promote the library and increase student awareness. Using a mobile app for orientations can provide many benefits, such as increasing interactivity and offering an asynchronous option for students to learn about the library on their own time. We have been trying out SCVNGR at the University of Arizona (UA) Libraries and are finding it a more fun and engaging way to deliver orientations and instruction to students.

Why use game design for library orientations and instruction?

Game-based learning can be a good match for orientations, just as it can be for instruction (I have explored this with ACRL TechConnect previously, looking at badges). Rather than just presenting a large amount of information to students or having them fill out a paper-based scavenger hunt activity, using something like SCVNGR gets students interacting more with the library in a way that offers more engagement, in real time and with feedback. However, simply adding a layer of points and badges or other game mechanics to a non-game situation doesn't automatically make it fun and engaging for students. In fact, doing this ineffectively can cause more harm than good (Nicholson, 2012). Finding a way to use game design to motivate participants beyond simply acquiring points tends to be the common goal of using game design in orientations and instruction. Thinking of the WIIFM (What's In It For Me) principle from a student's perspective can help, and in the game design we used at the University of Arizona with SCVNGR for a class orientation, we created activities based on common questions and concerns of students.

Why SCVNGR?
SCVNGR home screen

SCVNGR is a mobile app game for iPhone and Android where players can complete challenges in specific locations. Rather than getting clues and hints like in a traditional scavenger hunt, this game is more focused on activities within a location instead of finding the location. Although this takes some of the mystery away, it works very well for simply informing people about locations that are new to them and having them interact with the space.

Students need to physically be in the location for the app to work, where they use the location to search for “challenges” (single activities to complete) or “treks” (a series of single activities that make up the full experience for a location), and then complete the challenges or treks to earn points, badges, and recognition.

Some libraries have created their own mobile scavenger hunt activities without the aid of a paid app. For example, North Carolina State University uses the NCSU Libraries' Mobile Scavenger Hunt, which is a combination of students recording responses in Evernote, real-time interaction, and tracking by librarians. One of the reasons we went with SCVNGR, however, is that this sort of mobile orientation requires a good amount of librarian time and is synchronous, whereas SCVNGR does not require as much face-to-face librarian time and allows for asynchronous student participation. Although we do use more synchronous instruction for some of our classes, we also wanted the option of asynchronous activities, particularly for the large-scale orientations where many different groups come in at many different times. Although SCVNGR is not free for us, the app is free to students. SCVNGR also offers 24/7 support, and other academic institutions offer insight and ideas in its community for universities.

Other academic libraries have used SCVNGR for orientations and even library instruction. A few examples are:

 

How did the UA Libraries use SCVNGR?

Because a lot of instruction has moved online and there are so many students to reach, we are working on SCVNGR treks for both instruction and basic orientations at the University of Arizona (UA). We are in the process of setting up treks for large-scale campus orientations (New Student Orientation, UA Up Close for both parents and students, etc.) that take place during the summer, and we have tested SCVNGR out on a smaller scale as a pilot with individual classes. There tends to be greater success and engagement if the trek is tied to something, such as a class assignment or a required portion of an orientation session that must be completed. One concern for an app-based activity is that not all students will have smartphones. This was alleviated by putting students into groups ahead of time, ensuring that at least one person in each group had a device compatible with SCVNGR. We also lend technology at the UA Libraries, so if a group was without a smartphone or tablet, they would be able to check one out from the library.

Trek page for AIS197b at the UA Libraries

We first piloted a trek with an American Indian Studies student success course (AIS197b). This course for freshmen introduces students to services on campus that will be useful to them while they are at the UA. Last year, we presented a quick information session on library services and then had the students complete a pencil-and-paper scavenger hunt throughout the library for a class grade (participation points). Although they seemed glad to be able to get out and move around, it didn't seem particularly fun and engaging. On top of that, every time the students got stuck or had a question, they had to come back to the main floor to find librarians and get help. In contrast, when students get an answer wrong in SCVNGR, feedback is programmed in to guide them to the correct information. And, because they don't need clues to make it to the next step (they just go back and select the next challenge in the trek), they are able to continue without one mistake preventing them from moving on to the next activity. This semester, we first presented a brief instruction session (approximately 15-20 minutes) and then let students get started on SCVNGR.

You can see in the screenshot below how question design works: you can select the location, set how many points count toward the activity, choose the type of activity (taking a photo, answering with text, or scanning a QR code), and then provide feedback. If a student answers a question incorrectly, as I mention above, they will receive feedback to help them figure out the correct answer. I really like that when students get answers right, they know instantly. This is positive reinforcement for them to continue.

SCVNGR answer feedback

The activities designed for students in this class focused on photo- and text-based challenges. We stayed away from QR codes because they can be finicky with some phones, and because simply taking a picture of the QR code meets the challenge requirement for that activity type. Our challenges included:

  • Meet the reference desk (above): Students meet desk staff and ask how they can get in touch for reference assistance; answers are by text and students type in which method they think they would use the most: email, chat, phone, or in person.
  • Prints for a day: Students find out about printing (a frequent question of new students), and text in how to pay for printing after finding the information at the Express Documents Center.
  • Playing favorites: Students wander around the library and find their favorite study spot. Taking a picture completes the challenge, and all images are collected in the Trek’s statistics.
  • Found in the stacks: After learning how to use the catalog (we provided a brief instruction session to this class before setting them loose), students search the catalog for books on a topic they are interested in, then locate the book on the shelf and take a picture. One student used this time to find books for another class and was really glad he got some practice.
  • A room of one’s own: The UA Libraries implemented online study room reservations about a year ago. In order to introduce this new option to students, this challenge had them use their smartphones to go to the mobile reservation page, find out the maximum number of hours a study room can be reserved for, and text that in.

SCVNGR worked great with this class for simple tasks, such as meeting people at the reference desk, finding a book, or taking a picture of a favorite study spot, but for tasks that require more critical thinking or more intricate work, it would not be the best platform for that level of instruction. SCVNGR's options for assessing how students respond to questions or complete an activity are limited. Texting in detailed answers or engaging in tasks like searching a database would be much harder to record. Likewise, instruction tied to critical thinking (evaluating a source or exploring copyright issues, for example) is not really location-based, so it would be hard to tie those tasks and skills to a location-based activity the app can track. One instance of this was the Found in the Stacks challenge: students were supposed to search for a book in the catalog and then locate it on the shelf, but there was nothing stopping them from just finding a random book on the shelf and taking a picture of it to complete the challenge. SCVNGR provides a style guide to help with game design, and the overall message of that document is that simplicity is most effective on this platform.

Another feature that works well is being able to choose whether the trek is competitive or not, and also to use "SmartRoute," which serves challenges to participants based on distance and least-crowded areas. This is wonderful, particularly because students tend to get congested at certain points in a scavenger hunt: they all crowd around the same materials or locations simultaneously because they're making the same progress through the activity. We chose to use SmartRoute for this class so they would be spread out during the game.

SCVNGR trek settings

When trying to assess student effort and impact of the trek, you can look at stats and rankings. It’s possible to view specific student progress, all activity by all participants, and rankings organized by points.

SCVNGR statistics

Another feature is the ability to collect items submitted for challenges (particularly pictures). One of our challenges is for students to find their favorite study spot in the library and take a picture of it. This should be fun for them to think about and is fairly easy, and it helps us do some space assessment. It’s then possible to collect pictures like the following (student’s privacy protected via purple blob).

Student images of the UA Main Library via SCVNGR

On the topic of privacy, students enter their name to set up an account, but only their first name and the first initial of their last name appear as their username. Although last names are hidden, SCVNGR data is viewable by anyone who is within geographical range to access the challenge: it is not closed to an institution. If students choose to take pictures of themselves, their identity may be revealed, but it is possible to maintain some privacy by not sharing images of specific individuals or any personal information through text responses. On the flip side of not wanting to associate individual students with their specific activities, it gets trickier when an instructor plans to award points for student participation. In that case, it's possible to request reports from SCVNGR for instructors so they can see how much and which students participated. In a large class of over 100 students, looking at the data can be messier, particularly if students have the same first name and last initial. Because of this issue, SCVNGR might be better suited to large-scale orientations where participation does not need to be tracked, and to small classes where instructors can easily tell who is who in the activity data.

Lessons learned

Both student and instructor feedback was very positive. Students seemed to be having fun, laughing, and were not getting stuck nearly as much as with the previous year's pencil-and-paper hunt. The instructor noted it seemed a lot more streamlined and engaging for the class. When students checked in with us at the end before heading out, they said that although there were a couple of hiccups with the software and/or how we designed the trek, it was a good experience and they felt more comfortable using the library.

Next time, I would be more careful about using text responses. I had gone down to our printing center to tell the student worker on duty what answers students in the class would be looking for so she could answer their questions, but they wound up speaking with someone else and getting different answers. Otherwise, the level of questions seemed appropriate for this class, and it was a good way to pilot how SCVNGR works, whether students might like it, and how long different types of questions take, before bringing this to campus on a larger scale. I would also be cautious about using SCVNGR too heavily for instruction, since it doesn't seem to have capabilities for more complex tasks or a great deal of critical thinking. It is better suited to basic instruction and getting students more comfortable using the library.

Pros

  • Ability to reach many students, and to do so asynchronously
  • Anyone can complete challenges and treks; this is great for prospective students and families, community groups, and any programs doing outreach or partnerships outside of campus since a university login is not required.
  • Can be coordinated with campus treks if other units have accounts or a university-wide license is purchased.
  • WYSIWYG interface, no programming skills necessary
  • The order of challenges in a trek can be staggered so that not everyone is competing for the same resources at the same time.
  • Can collect useful data through users submitting photos and comments (for example, we can examine library space and student use by seeing where students’ favorite spots to study are).

Cons

  • SCVNGR is not free to use; an annual fee applies (in the $900 range for a library-only license, which is not institution-wide).
  • Privacy is a concern since anyone can see activity in a location; it’s not possible to close this to campus.
  • When completing a trek, users do not get automatic prompts to proceed to the next challenge; instead, they must go back to the home location screen and choose the next challenge (this can get a little confusing for students).
  • SCVNGR is more difficult to use for instruction, especially when looking to incorporate critical thinking and more complex activities.
  • Instructors might have a harder time figuring out how to grade participation because treks are open to anyone; only students’ first names and last initials appear, so if a large class completes a trek for an assignment, or if an orientation trek for the public is used, a special report must be requested from SCVNGR that the library can send to the instructor for grading purposes.

Conclusion

SCVNGR is a good way to increase awareness and get students and other groups comfortable in using the library. One of the main benefits is that it’s asynchronous, so a great deal of library staff time is not required to get people interacting with services, collections, and space. Although this platform is not perfect for more in-depth instruction, it does work at the basic orientation level, and students and the instructor in the course we piloted it on had a good experience.

 

References

Nicholson, S. (2012). A user-centered theoretical framework for meaningful gamification. Paper presented at Games+Learning+Society 8.0, Madison, WI. Retrieved from http://scottnicholson.com/pubs/meaningfulframework.pdf

—-

About Our Guest Author: Nicole Pagowsky is an Instructional Services Librarian at the University of Arizona where she explores game-based learning, student retention, and UX. You can find her on Twitter, @pumpedlibrarian.

Good Design: Pleasing to the Eyes “and” Functional

Many librarians work with technology even if their job titles are not directly related to technology. Design is somewhat similar to technology in that respect. The primary function of a librarian is to serve the needs of library patrons, and we often do this by creating instructional or promotional materials such as handouts and posters. Sometimes this design work goes to librarians in public services such as circulation or reference. Other times it is assigned to librarians who work with technology, because it involves some design software.

The problem is that knowing how to use a piece of design software does not entail the ability to create a great work of design. One may be a whiz at Photoshop and still produce an ugly piece of design. Most of us librarians are quite unfamiliar with the concepts of design. ACRL TechConnect covered the topic of design previously in Design 101 – Part 1 and Design 101 – Part 2, so be sure to check those out. In this post, I will share my experience of creating a poster for my library in the context of libraries and design.

1. Background

My workplace recently launched a new Kindle e-book reader lending program sponsored by the National Network of Libraries of Medicine/Southeast Atlantic Region Express Mobile Technology Project. The project is to be completed in a few months, and we successfully rolled out 10 Kindles with 30 medical e-book titles for circulation early this year. One of the tasks left for me as the project manager is to create a poster to further promote this e-book reader program. No matter how great the Kindle e-book lending program is, if patrons don't know about it, it won't get much use. A good poster can attract a lot of attention from library patrons. I could just put a small sign with "Kindles available!" written on it somewhere in the library, but the impact would be quite different.

2. Trying to design a poster

When I planned the grant budget, I included a budget item for large posters. But the item only covers the printing costs, not design costs. So I started designing a poster myself. Here are a few of my first attempts. Even to my untrained eyes, however, these look unprofessional and amateurish. The first one looked more like a handout than a poster, so I decided to make the background black. That, however, makes the QR code and the library logo invisible. To fix this, I added a white background behind them. Slightly better, maybe? Not really.

My first try doesn’t look so good!

My second attempt is only marginally better!

One thing I know about design is that an image can save or kill your work. A stunning image alone can make a piece of design awesome. So I did a Google search and found this nice image of a Kindle. It also made me realize that I needed to flip the poster to a wide orientation.

The power of a nice image! Too bad it is copyrighted…

But there is a catch. The image I found is copyrighted. This was just an example to show how much power a nice image or photograph can have over the overall quality of a work of design. I also looked for Kindle images and photographs in the Flickr Creative Commons but failed to locate one that allows derivatives. This is a very common problem for libraries, which tend to have little access to quality images and photographs. If you are lucky, you may find a good image on Pixabay, which offers very nice photographs and images that are in the public domain.

Changed the poster setup to make it wide.
3. What went wrong

You probably already have some ideas about what went wrong with my attempts so far. The font doesn't look right. The poster looks more like a handout. The image looks amateurish in the first two examples. Some may say the whole thing is functional, for sure: it does the job of conveying the message that the library now has Kindle e-book readers to offer. Others may object: no, not really; the wording is vague, far from clear. You can go on forever. A lot of the time, these issues are addressed by adding more words, more instructions, and more links, which can also be problematic.

But one thing is clear: these are not pretty. What that means is that if I printed these and hung them up on the walls around the library, they would fail to convey to our library patrons certain sentiments I had in mind for our new Kindle e-book lending program. I want the poster to present this program as a new and exciting service. I would like patrons to see the poster, get interested and curious, and feel that the library is trying something innovative. Conveying those sentiments and creating a certain impression of the library 'is' the function of the poster, as much as informing library patrons about the existence of the new Kindle e-book reader lending program. The posters above won't do a good job of performing that function, so in those respects they are not really functional. Sometimes beauty is a necessity. For promotional materials, which libraries make in quantity but whose design they tend to neglect, 'pleasing to the eyes' is part of the essential function.

4. Fixing it

What I should have done first is search for examples advertising a similar program at other libraries. I was very lucky in this case. In the search results, I ran into this quite nice circulation desk signage created by the Saint Mary's College of Maryland Library. It was made as a circulation desk sign, but it gave me inspiration that I could use for my poster.

An example can give you much-needed inspiration!

Once you have some examples and inspiration, creating your own becomes much easier. Here, I pretty much followed the same color scheme and layout as the one above. I changed the font and the wording and replaced the Kindle image with a different one, closer to the model my library circulates. The image is from Amazon itself, and Amazon will not object to people using its own product image to promote the product. So the copyright front is clear. You can see my final poster below. If I had not run into this example, however, I would probably have searched for Kindle advertisements, posters, and similar items for other e-book readers for inspiration.

One thing to remember is the purpose of the design. In my case, the poster is planned to be printed on large glossy paper (36″ x 24″), so I had to make sure that the image would appear clear and crisp, not blurry, when printed at that size. If your design is going to be used only online or printed on a small item, this is less of an issue.

Final result!
5. Good design isn’t just about being pretty

Hopefully, this example shows why good design is not just a matter of being pretty. Many of us have an attitude that being pretty is the last thing to be considered, and this is not always wrong. When it is difficult enough to make things work as intended, making them pretty can seem like a luxury. But for promoting library services and programs, at least, just conveying information is not sufficient. Winning the hearts of library patrons is not just about letting people know what the library does but also about how the library does it. For this reason, the way in which the library lets people know about its services and programs also matters. Making things beautiful is one way to improve this "how" aspect as far as promotional materials are concerned. Making individual interactions personally pleasant and the transactions on the library website user-friendly would be other ways to achieve the same goal. Design is a broad concept that can be applied not only to visual work but also to a thought process, a tool, a service, and so on, and it can be combined with other concepts such as usability.

Resources

While I was doing this, I also discovered a great resource, Librarian Design Share. This is a great place to look for inspiration or to submit your own work so that it can inspire other librarians. Here are a few more resources that may be useful to those who work at a library and want to learn a bit more about visual design. Please share your experience and useful resources for library design work in the comments!

 

The Mobile App Design Process: A Tube Map Infographic

Last June I had a great experience team-teaching a week-long seminar on designing mobile apps at the Digital Humanities Summer Institute (DHSI). Along with my colleagues from WSU Vancouver’s Creative Media and Digital Culture (CMDC) program, I’ll be returning this June to the beautiful University of Victoria in British Columbia to teach the course again [1]. As part of the course, I created a visual overview of the process we use for app making. I hope you’ll find it a useful perspective on the work involved in crafting mobile apps and an aid to the process of creating your own.

A visual guide to the process of designing and building mobile apps. Start with Requirements Analysis in the upper left and follow the tracks to Public Release.
Creating the Tube Map:

I’m fond of the tube-map infographic style, also known as the topological map [2], because of its ability to highlight relationships between systems, and especially because of how it distinguishes between linear (do once) and recursive (do over and over) processes. The linear nature of text in a book or images in a slide-deck presentation can artificially impose a linearity that does not mirror the creative process we want to impart. In this example, the design and prototyping loops on the tube map help communicate that a prototype model is an aid to modeling the design process and not a separate step completed only when the design has been finalized.

These maps are also fun and help spur the creative process. There are other tools for process mapping, such as flowcharts or mind maps, but in this case I found the topological map has a couple of advantages. First and foremost, I associate the other two with our strategic planning process, so the tube map immediately seems more open, fun, and creative. This is, of course, rooted in my own experience, and your experience will vary, but if you are looking for a new perspective on process mapping, or a new way to display interconnected systems that is vibrant, fun, and shakes things up a bit, the tube map may be just the thing.

I created the map using the open source vector-graphics program Inkscape [3], which can be compared to Adobe Illustrator and Corel Draw. Inkscape is free (both gratis and libre) and is powerful, but there is a bit of a learning curve. Being unfamiliar with vector graphics or the software tools to create them, I worked from an excellent tutorial provided by Wikipedia on creating vector-graphic topological maps [4]. It took me a few days of struggling and slowly becoming familiar with the toolset before I felt comfortable creating with Inkscape. I count this as time well spent, as many graphics used in mobile apps and the icon sets required by app stores can be made with vector-graphics editors. The Inkscape skills I picked up while making the map have come in very handy on multiple occasions since then.

Reading the Mobile App Map:

Our process through the map begins with a requirements analysis or needs assessment. We ask: what does the client want the app to do? What do we know about our end users? How do the affordances of the device affect this? Performing case studies helps us learn about our users before we start designing to meet their needs. In the design stage we want people to make intentional choices about the conceptual and aesthetic aspects of their app design. Prototype models like wireframe mock-ups, storyboards, or Keynotopia [5] prototypes help us visualize these choices, eventually resulting in a working prototype of our app. Stakeholders can test and request modifications to the prototype, avoiding potentially expensive and labor-intensive code revisions later in the process.

Once both the designers and clients are satisfied with the prototype and we’ve seen how potential users interact with it, we’re ready to commit our vision to code. Our favored code platform uses HTML5, CSS3, jQuery Mobile [6], and PhoneGap [7] to make hybrid web apps. Hybrid apps are written as web apps (HTML/JavaScript web sites that look and perform like apps), and then a tool like PhoneGap translates this code into a format that works with the device’s native programming environment. This provides more direct, and thus faster, access to device hardware and also enables us to place our app in official app stores. Hybrid apps are not the only available choice and aren’t perfect for every use case. They can be slower than native apps and may have some issues accessing device hardware, but the familiar coding languages, multi-device compatibility, and ease of making updates across multiple platforms make them an ideal first step for mobile app design. LITA has an upcoming webinar on creating web apps that employs this system [8].

Once the prototype has been coded into a hybrid app, we have another opportunity for evaluation and usability testing. We teach a pervasive approach that includes evaluation and testing all throughout the process, but this stage is very important as it is a last chance to make changes before sending the code to an app marketplace. After the app has been submitted, opportunities to make updates, fix bugs, and add features can be limited, sometimes significantly, by the app store’s administrative processes.

After you have spent some time following the lines of the tube map and reading this very brief description, I hope you can see this infographic as an aid to designing mobile web apps. I find it especially helpful for identifying the source of a particular problem I’m having and for suggesting tools and techniques that can help resolve it. As a personal example, I am often tempted to start writing code before I’ve completely made up my mind about what I want the code to do, which leads to frustration. The map reminds me to go back to my wireframe and use it to guide the structure of my code. I hope you all find it useful as well.

Test-driving Purdue’s Passport gamification platform for library instruction

Gamification in libraries has become a topic of interest in the professional discourse, and one that ACRL TechConnect has covered in Applying Game Dynamics to Library Services and Why Gamify and What to Avoid in Gamification. Much of what has been written about badging systems in libraries pertains to gamifying library services. However, being an Instructional Services Librarian, I have been interested in tying gamification to library instruction.

Library skills are not always part of required learning outcomes or tied directly to particular classes, so thinking more creatively about promoting and embedding library tutorials led me to consider tying a badging system to the University of Arizona Libraries’ online learning objects. As a brief review, badges are visual representations of skills and achievements. They can be used alongside or instead of grades, depending on the scenario, and include details that support their credibility (criteria, issuer, evidence, currency).

Image from classhack.com

Becoming a beta tester for Purdue’s Passport platform gives me the opportunity to better sketch out our plans and to test how gamification could work in this context. Passport, according to Purdue, is “A learning system that demonstrates academic achievement through customizable badges.” Through this platform, instructors can design instruction so that badges are associated with learning outcomes. Currently, Passport can only be used by applying to be a beta tester; as the software improves, it should become available to more people and gain greater integration (it currently connects with Mozilla Open Backpack and within the Purdue system). We are still comparing platforms and possibilities for the University of Arizona Libraries, and testing Passport has been the first step in figuring out what we want, what is available, and how we would like to design this form of instruction. Below, I share my impressions of Passport and of using badging technology for these purposes, based on my experience with the software.

Refresher on motivation

It’s important to understand how motivation works in relation to a points-and-badges system, and to have a clear goal in mind. I recently wrote a literature review on motivation in gamified learning scenarios as part of my work toward a second master’s degree in Educational Technology. The main takeaways are to employ game mechanics thoughtfully within your framework so that users don’t rely solely on the scoring system, and to focus on the engagement aspects of gamification rather than using badges and points merely for manipulation. Points should serve as a feedback mechanism, not simply as items to harvest.

Structure and scalability

Putting this into perspective for gamifying library instruction at the University of Arizona, we want to be sure student motivation is directed at developing research skills that can be visually demonstrated to instructors and future employers through badges, with points serving as feedback and further motivation. We are using the ACRL Information Literacy Standards as an outline for the badges we create; the Standards are not perfect, but they serve well as a map for conceptualizing research skills and are a way we can organize the content. Within each skill set or badge, activities for completion are multidimensional: students must engage in a variety of tasks, such as doing a tutorial, reading a related article or news story, and completing a quiz. We plan to allow for risk taking and failure — important aspects of game design — so students can re-try the material until they understand it (Gee, 2007).

As you can see in this screen capture, the badges corresponding to the ACRL Standards include: Research Initiator (Standard 1), Research Assailant (Standard 2), Research Investigator (Standard 3), and Research Warrior (Standard 4). As a note, I have not yet created a badge for Standard 5 or one to correspond with our orientations (also, all names you can see in any image I include are of my colleagues trying out the badges, and not of students). A great aspect of this platform is the ability to design your own badges with their WYSIWYG editor.

Main challenge screen

Because a major issue for us is scalability with limited FTE, we have to be careful about which assessment methods we choose for approving badges. Since we would have a hard time offering meaningful, individualized feedback to every student who completed these tasks, an automated option is preferable. Passport gives students several ways to test their skills: multiple-choice quizzes, uploading a document, and entering text. For our purposes, multiple-choice quizzes with predetermined responses are currently the best method. If we develop specific badges for smaller courses on a case-by-case basis, it might be possible to accept written responses and more detailed work, but in trying to roll this out to the campus at large, automated scoring is necessary.
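As a rough illustration of why predetermined responses scale so well, here is a toy scoring function in plain JavaScript. This is emphatically not Passport’s internal logic (the platform does the grading for you); it only shows that, with a fixed answer key, checking a submission and marking a task complete requires no human in the loop, whereas written responses always would. The questions, answers, and passing threshold are invented for the example.

```
// Toy sketch of automated multiple-choice scoring (not Passport's internals).
// With a predetermined answer key, grading needs no instructor time at all.
var answerKey = { q1: "b", q2: "d", q3: "a" };   // hypothetical quiz questions

function scoreQuiz(responses) {
  var correct = 0, total = 0;
  for (var q in answerKey) {
    total++;
    if (responses[q] === answerKey[q]) { correct++; }
  }
  // Mark the task complete automatically once a passing threshold is met.
  return { correct: correct, total: total, passed: (correct / total) >= 0.8 };
}

// Example: two of three correct -> not yet passed; the student can simply retry.
console.log(scoreQuiz({ q1: "b", q2: "d", q3: "c" }));
```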

Leveling up

Within each badge, also referred to as a challenge, there are tasks to complete; finishing these tasks adds up to earning the badge. It’s essentially leveling up (progressing to the next level based on achievement), although the way Passport is designed, students can complete the tasks in any order. Across the suite of badges, I have reinforced information and skills throughout, so students must draw on previously learned skills for later success. In this screen capture, you can see the overall layout by task title.

Task progress by users

For tasks that require instructor approval (for example, when students submit documents or write text), an instructor clicks each yellow box marked as needing approval to determine whether the student successfully completed the task and to supply personalized feedback (image above). You can also see the breakdown of tasks under each challenge to review what was learned; this can serve as confirmation for outside parties of the kind of work each badge entailed (image below).

Badge work details

Showing off

Once badges are earned, they can be displayed in a user’s Passport profile and Mozilla Open Badges. Here is an example of what a badge portfolio looks like:

User badge portfolio

Passport “classrooms” are closed and require a login to earn badges (because of FERPA), but if students agree to connect with Mozilla’s Open Badges Backpack, achievements can then be shared on Twitter, Facebook, LinkedIn, and other networks. Badges can also connect with e-portfolios and resumes (since Passport is in beta, this functionality works best with Purdue platforms). This could be a great additional motivator for students in helping them get jobs. We know from Project Information Literacy that employers find new graduates lacking in research skills, so being able to present these skills to future employers can be useful for soon-to-be and recent graduates. The badges link back to more information, as mentioned, so employers can get more detail; students can even make their submitted work publicly available so employers, instructors, and peers can see their efforts.
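For readers curious about what actually travels to the Backpack, each earned badge is described by a small piece of JSON called an assertion. The object below is a hand-written sketch based on the Open Badges 1.0 specification, with hypothetical URLs and values rather than real Passport output; the point is that the issuer, criteria, and evidence stay attached to the badge, which is what lets an employer follow the links and verify it.

```
// Hand-written sketch of an Open Badges 1.0 assertion (hypothetical values,
// not actual Passport output). This metadata travels with the badge so anyone
// can see who issued it, to whom, and on what evidence.
var assertion = {
  uid: "research-initiator-0001",
  recipient: {
    type: "email",
    hashed: true,               // the earner's email is salted and hashed
    salt: "example-salt",
    identity: "sha256$..."      // hash of email + salt (elided here)
  },
  badge: "https://example.edu/badges/research-initiator.json", // badge class: name, criteria, issuer
  issuedOn: 1380000000,         // Unix timestamp of when the badge was earned
  evidence: "https://example.edu/portfolio/student-work",      // optional link to submitted work
  verify: {
    type: "hosted",             // the assertion is verified by fetching this URL
    url: "https://example.edu/assertions/research-initiator-0001.json"
  }
};
```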

Wrapping up

Whether or not it proves possible to integrate Passport fully into our library website for students to access, using this tool has at least given me a way to sketch out how our badging system will work. We can also try some user testing with students on these tasks to gauge motivation and instructional effectiveness. Having this system adopted campus-wide, in collaboration with other units and departments, would also create more meaning behind the badges; in the meantime, tying this smaller-scale layout to specific class instruction or non-disciplinary collaborations will be very useful.

Although some sources say gamification will take a huge nosedive by 2014 due to poor design and over-saturation, keeping tabs on the platforms available and on how best to incorporate this technology into library instruction is where I will be looking this semester and beyond, as we work on plans for rolling out a full badging system within the next couple of years. Making learning more experiential and creating choose-your-own-adventure scenarios are effective ways to give students ownership over their education. Using points and badges to manipulate users is certainly detrimental and should fall out of use in the near future, but using this framework in a positive manner, for motivation and to support student learning, can benefit students, the campus, and the library.

Additional Resources

Books:

Dignan, A. (2012). Game Frame. New York: The Free Press.

Gee, J. P. (2007). What video games have to teach us about learning and literacy. New York: Palgrave Macmillan.

Kapp, K. M. (2012). The gamification of learning and instruction: Game-based methods and strategies for training and education. San Francisco, CA: Pfeiffer.

Koster, R. (2005). A theory of fun for game design. Scottsdale, AZ: Paraglyph Press.

Online:

Because Play Matters: A game lab dedicated to transformative games and play for informal learning environments in the iSchool at Syracuse: http://becauseplaymatters.com/

Digital badges show students’ skills along with degree (Purdue News): http://www.purdue.edu/newsroom/releases/2012/Q3/digital-badges-show-students-skills-along-with-degree.html

Gamification Research Network: http://gamification-research.org/

TL-DR: Where gamers and information collide: http://tl-dr.ca/

----

About Our Guest Author: Nicole Pagowsky is an Instructional Services Librarian at the University of Arizona where she explores game-based learning, student retention, and UX. You can find her on Twitter, @pumpedlibrarian.