Real World Semantic Web?: Facebook’s Open Graph Protocol

Original image available at https://developers.facebook.com/docs/opengraph/

Librarians need to understand what the semantic web is and how to use it, but this can be challenging. While the promise of the semantic web has existed for over a decade, to the uninitiated there may not seem to be many implementations that are accessible to the average person.

One implementation that most people use daily is Facebook’s Open Graph Protocol, Facebook’s version of the semantic web. It is a useful example for illustrating the ideas behind the semantic web and linked data. Libraries and other cultural institutions want and need to make their data open, while Facebook’s openness is highly questionable, so the example also illustrates some of the potential problems with linked data that isn’t open. There is much great work being done in the library world with the semantic web and linked data, which will be addressed in more detail in future posts.

The Semantic Web and Linked Data

The “semantic web” describes a web where data is understood by computers in some of the same ways humans understand it. Tim Berners-Lee illustrates this wonderfully in his 2001 Scientific American article with a future in which the diagnosis of a family member with cancer is made easier by the smart device which can find the most appropriate specialist in a convenient location at a convenient time, with very little work on the part of the searcher. This is only possible, however, when data is semantically meaningful. Open hours for a doctor (or a library) written on a website mean something to a human, but very little to a computer. Once those hours are structured in a way that can be made meaningful, the computer can tell you if the doctor’s office is open–and if it has access to your calendar, what you have to cancel to go there.
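
To make the open hours example concrete, here is a minimal sketch, in Python, of how a library might structure its hours using the schema.org vocabulary serialized as JSON-LD; the library name, phone number, and hours are invented for illustration.

    import json

    # A minimal sketch: expressing a library's open hours in the
    # schema.org vocabulary as JSON-LD so that software, not just
    # people, can interpret them. All values below are invented.
    library = {
        "@context": "http://schema.org",
        "@type": "Library",
        "name": "Example University Library",
        "telephone": "+1-555-555-0100",
        "openingHours": ["Mo-Th 08:00-22:00", "Fr 08:00-18:00"],
    }

    # Embedded in a page inside <script type="application/ld+json">,
    # this is the kind of structure that lets a calendar-aware agent
    # answer "is the library open right now?"
    print(json.dumps(library, indent=2))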

Linking data takes this implementation a step further and makes it possible to connect datasets, to avoid what the W3C calls “a sheer collection of datasets.” Berners-Lee outlines the steps that need to be followed to make linked data in a 2006 post: use uniform resource identifiers (URIs) as names, make those URIs resolvable over HTTP, use a standard format such as RDF to present useful information, and link to additional URIs with related information. A 2010 follow-up points out that to be linked open data, the data must be presented with a license that allows free, unimpeded use, such as the Creative Commons CC-BY license. Such data doesn’t have to be structured in any particular way as long as it’s open. He says that “…you get one (big!) star if the information has been made public at all, even if it is a photo of a scan of a fax of a table — if it has an open licence.” But “five-star” linked open data meets all of the above requirements as well.
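
Here is a minimal sketch of Berners-Lee’s recipe using the Python rdflib library: things get HTTP URIs as names, statements about them are expressed in RDF, and links point onward to further URIs. The URIs and names are invented for illustration.

    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import FOAF

    # Build a tiny RDF graph. Every name is an HTTP URI, and the graph
    # links out to another URI, which is what turns isolated datasets
    # into linked data rather than "a sheer collection of datasets."
    g = Graph()

    me = URIRef("http://example.org/people/margaret")
    g.add((me, FOAF.name, Literal("Margaret")))
    g.add((me, FOAF.knows, URIRef("http://example.org/people/a-colleague")))

    # Serialize in Turtle, one of the standard RDF formats.
    print(g.serialize(format="turtle"))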

Facebook’s Open Graph Protocol

Moving into a different world, let’s consider what the semantic web and linked data look like at Facebook. First, it is interesting to consider what Facebook was before it was semantic. When Facebook first launched in 2004, you could make a list of things you “liked”. You might have said you “liked” the movie Clueless and “liked” running, but these were just lists that would let others in your college classes know a few facts about you next time you saw them in class or at a party. In theory you could use these lists to find others who shared your interests, but this required a person to understand which interests matched each other.

But starting in 2010 these “likes” took on real semantic meaning. Suddenly “liking” the movie Clueless meant that, among other things, the owners of the “Clueless” identity on Facebook could send you marketing announcements directly. In addition, you could “like” content entirely outside of Facebook, as long as that website used the correct markup on the page to speak to Facebook and thus link content together with people. Unlike Facebook’s earlier Beacon scheme, it was easier to understand how you were exposing yourself to advertisers and to control privacy and sharing, though this still left people troubled.

In late 2011/early 2012 Facebook opened this system up even more to third-party developers, which went along with the new Facebook Timeline. Now any person could perform any verb with any application. So “Margaret read a book on Goodreads” or “Margaret listened to a song on Spotify”–real world actions–turn into semantically meaningful statements on my Facebook Timeline. As long as the user authenticates the application, the application can grab the information about the object from the webpage and show the user’s interaction with it.

Developing for the Open Graph

The Open Graph protocol was developed based on the idea of the “social graph”, which represents the connections between people and the types of relationships they have with each other. In the Facebook universe, this includes the relationships people have with other types of entities, such as media, products, and companies. Facebook developed it to provide a quick and easy way for websites to include semantically meaningful data. It is based on the standard RDF specification for linked data and includes basic and optional metadata, as well as different types of structured data about objects, of which music and videos are the most well-defined.
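
A consumer of Open Graph data, such as Facebook’s crawler, essentially scans a page’s meta tags for og: properties. Here is a rough Python sketch of that idea; the sample HTML, with its og:title and og:type values, is invented for illustration.

    from html.parser import HTMLParser

    # A rough sketch of what an Open Graph consumer does: collect the
    # og:* properties from a page's <meta> tags.
    class OpenGraphParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.properties = {}

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "meta" and attrs.get("property", "").startswith("og:"):
                self.properties[attrs["property"]] = attrs.get("content")

    # An invented sample page marked up for the Open Graph.
    sample = """
    <html><head>
      <meta property="og:title" content="Clueless" />
      <meta property="og:type" content="video.movie" />
      <meta property="og:url" content="http://www.example.com/clueless" />
    </head><body></body></html>
    """

    parser = OpenGraphParser()
    parser.feed(sample)
    print(parser.properties)  # {'og:title': 'Clueless', 'og:type': ...}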

To see the Open Graph in action, simply replace “www” with “graph” at the beginning of any Facebook page. For instance, let’s take a look at my own library’s information at http://graph.facebook.com/rebeccacrownlibrary. You can see that this page describes a library, and get our phone number, physical location, and open hours. Most importantly, a computer viewing this page can understand this information. For complete details, see the Graph API documentation–even for non-developers this is interesting; for instance, find out how to get the URL for your current profile picture to embed in other sites. To get access to this information, you can use various methods, including the Facebook Query Language.
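
For example, here is a minimal Python sketch of reading that public Graph API data. When this post was written, public page data was available without authentication; Facebook has since moved toward requiring access tokens, so treat this as illustrative rather than guaranteed to work, and note that the exact field names are assumptions based on the page data described above.

    import json
    from urllib.request import urlopen

    # Fetch the machine-readable version of the library's Facebook page.
    url = "http://graph.facebook.com/rebeccacrownlibrary"
    with urlopen(url) as response:
        page = json.loads(response.read().decode("utf-8"))

    # Because the data is structured, a program, not just a person,
    # can answer questions like "what is the library's phone number?"
    print(page.get("name"))
    print(page.get("phone"))
    print(page.get("hours"))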

Of course, you only get access to this information if it’s explicitly made public by the page. For anything beyond that, applications must use authentication. Linking information from outside of Facebook works one way only–you can’t pull very much at all out of Facebook into the open web. Note, for instance, that Google searches will pull up only basic information from a Facebook page rather than any content that page has posted.

Outside of Facebook–How “Open” is the Open Graph?

It is precisely this closed effect that has a lot of people worried about Facebook’s implementation of the semantic web. In 2007, Brad Fitzpatrick described the problems inherent in implementations of the “social graph” on the web: standards were quirky, non-interoperable, and usually completely walled off. The solution would be a Social Graph API that would create a social graph outside of any one company and belonging to all. This would allow people to find friends and connections without signing up for additional services or relying on Facebook or any other company. Fitzpatrick did later create a Social Graph API, which Google recently pulled out of their products. Some of the problems of an open social graph are familiar to librarians: people are hesitant to share too much information with just anyone about whom they associate with, what they like, and what they think (Prodromou). The great boon for advertisers in social networking services is that inside walled gardens with reasonable privacy controls, people are willing to share much more information. Thus the walled garden of Facebook, inaccessible to Google, keeps that valuable social data out of Google’s reach. It is perhaps not coincidental that around the same time Google stopped supporting the open Social Graph API, they released the API for their own social networking service, Google Plus.

Concerns remain that the Open Graph is not actually open, and in particular that it uses the open standard of RDF to ingest but not share content (Turenhout). The Open Graph Protocol website states that a variety of big websites publish pages with Open Graph markup, which is ingested by Facebook (of course), Google, and mixi. It remains unclear how widely this particular standard will be adopted outside of Facebook.

Conclusion

Whether or not you think you have any idea what linked data is, any time you click a “like” button on a website or sign up for a social sharing app in Facebook, you are participating in the semantic web. But every time that data link goes behind a Facebook wall, it fails to be open linked data. Just as librarians have always worked to keep the world’s knowledge available to all, we must continue to ensure that potentially important linked data is kept open as well–and with no commercial motive. The LODLAM Summit has outlined and continues to work on what linked open data looks like for libraries, archives, and museums. The W3C Library Linked Data Incubator Group released its final report in fall 2011, which provides a thorough overview of the roles and responsibilities of libraries in the world of linked open data. There is a lot of possibility in this area, and the future openness of the world wide web may very well depend on action taken right now.

In a future post, we will examine some specific examples of work being done in the library world around the semantic web and linked data.

Works Cited

Axon, Samuel. “Facebook’s Open Graph Personalizes the Web.” Mashable, April 21, 2010. http://mashable.com/2010/04/21/facebook-open-graph/.
Berners-Lee, Tim, James Hendler, and Ora Lassila. “The Semantic Web.” Scientific American 284, no. 5 (May 2001): 34. doi: 10.1038/scientificamerican0501-34
Berners-Lee, Tim. “Linked Data.” Design Issues, July 27, 2006. http://www.w3.org/DesignIssues/LinkedData.html.
Fitzpatrick, Brad, and David Recordon. “Thoughts on the Social Graph.” Bradfitz.com, August 17, 2007. http://bradfitz.com/social-graph-problem/.
Geron, Tomio. “Facebook Expands Open Graph To 60 New Apps, Many More Coming.” Forbes.com (January 18, 2011): 20.
Giles, Jim. “If Facebook Likes the Semantic Web, You’ll Love It.” New Scientist, July 31, 2010.
Iskold, Alex. “Social Graph: Concepts and Issues.” Read Write Web, September 11, 2007. http://www.readwriteweb.com/archives/social_graph_concepts_and_issues.php.
Mitchell, Jon. “Google Plus Releases APIs for Search, +1s and Comments.” Read Write Web, October 4, 2011. http://www.readwriteweb.com/archives/google_plus_releases_apis_for_search_1s_and_commen.php.
Prodromou, Evan. “On the Social Graph API.” Evan Prodromou: His Life and Times, February 21, 2012. http://evanprodromou.name/2012/02/21/on-the-social-graph-api/.
Turenhout, Ryanne. “Harry Halpin on the Hidden History of the ‘Like’ Button.” Institute of Network Cultures, March 10, 2012. http://networkcultures.org/wpmu/unlikeus/2012/03/10/harry-halpin-on-the-hidden-history-of-the-like-button/.

When Browsing Becomes Confusing

During a round of usability testing I ran a while ago, there was one task that quite baffled at least one participant. I will share the case with you in this post. The task given to the usability testing participant was this: “You would like to find out if the library has a journal named New England Journal of Medicine online.”

The testing begins at the Florida International University Medical library website, which has a search box with multiple tabs. As you can see below, one of the tabs is E-Journals. Most of the users selected the E-Journals tab and typed in the journal title. This gave them a satisfactory answer right away. But a few took a different path, and this approach revealed something interesting about browsing the library’s e-journals in the E-Journal Portal site, which is a system separate from the library’s website.

Browsing for a Specific E-Journal

1. In the case I observed, a student selected the link ‘Medical E-journals’ on the library homepage above instead of using the search box. The student was taken to the E-Journal Portal site, which also presents a search box where one can type in a journal title. But the student opted to browse and clicked ‘N.’

2. The student was given the following screen after clicking ‘N.’ He realized that there are lots of e-journals whose titles begin with ‘N’ and clicked ‘Next.’

3. The site presented him with the following screen. At this point, he expressed puzzlement at what happened after the click. The screen appeared to him to be the same as before. He could not tell what his click did to the screen. So he clicked ‘Next’ again.

4. He was baffled again and soon gave up browsing. The student typed a journal title into the search box instead and got the match.

Lessons Learned

A few things can be learned from observing this case.

  • First, this case shows that some people prefer browsing to searching even when the search could be much faster and the search box is clearly visible.
  • Second, a click needs to create a visible change to prevent a user’s frustration.
  • Third, what counts as a visible and discernible change may well differ from person to person.

The first is nothing new. We know that some users prefer to search while others prefer to browse. So both features – search and browse – in a Web site should work intuitively. In this example, the E-Journal Portal has a good search feature but some confusing aspects in browsing. I found the changes from step 2 to 3 and from step 3 to 4 somewhat baffling, just as the student who participated in the usability testing did. I could not discern the differences right away. Although I was familiar with the E-Journal Portal, I was not aware of this issue at all until I saw a person actually attempting to get to the New England Journal of Medicine by browsing, simply because I myself had always used the search feature in the past.

But when I showed this case to one of my colleagues, she said the changes between the screens shown above were clear to her. She did not share the same level of confusion that the student experienced. Also, once I had figured out what the difference was at each step, I could no longer experience the same confusion either. So how confusing this browsing experience is can vary from person to person. I will go over the process one more time below and point out why this browsing process could be confusing to some people.

The student had difficulty perceiving the change from step 2 to 3. The screen in step 3 appeared to him to be unchanged from step 2, and the same was true from step 3 to step 4. Actually, there was a change; it was just hard for the student to notice and different from what he expected. What the system does when a user clicks ‘Next’ is move from the first item on the sub-list under N to the second item (N&H-Nai -> NAJ-Nan), and then from the second item to the third (NAJ-Nan -> Nat-Nat). This did not match what the student expected: he thought the ‘Next’ link would bring up the sub-list beginning after the last entry shown, ‘Nat-Nat,’ not after the currently selected entry. The fact that the sub-list shows many ‘Nat-Nat’ entries also confused him. (This is likely because the system bundles e-journals in groups of fifty and then extracts the first three letters of the first and last journal title in each bundle to label the items on the sub-list.) A user sees the last item on the sub-list in steps 3 and 4 stay the same (‘Nat-Nat’) and wonders whether clicking ‘Next’ had any effect.
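
A small Python sketch may make the bundling behavior clearer. The journal titles below are invented stand-ins, and the bundle size is shrunk from fifty to two so the repetition is visible:

    # Sort the titles, bundle them, and label each bundle with the first
    # three letters of its first and last title, as the portal appears to.
    titles = sorted(["N&H Perspectives", "NAJIT Bulletin", "Nature",
                     "Nature Genetics", "Nature Medicine", "Naval History"])

    bundle_size = 2  # the real system appears to use 50
    for i in range(0, len(titles), bundle_size):
        bundle = titles[i:i + bundle_size]
        label = "{}-{}".format(bundle[0][:3], bundle[-1][:3])
        print(label, bundle)

    # With many journals named "Nature ...", consecutive bundles can all
    # be labeled "Nat-Nat", which is exactly the repetition that made
    # clicking 'Next' look like it did nothing.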

Making browsing a large number of items user-friendly is a challenge. The more items there are to browse, the more items the system should allow a user to skip at once. This helps a user get to the desired item more quickly. Also, when there are many items to browse, a user is likely to look for second and third categories to zoom in on the item s/he is looking for. Faceted browsing/search is an effective way to organize a large number of items so that people can quickly drill down to the sub-category of things they are interested in. Many libraries now use a discovery system over an OPAC (online public access catalog) to provide such faceted browsing/search. In this case, for example, allowing a user to select the second letter of the title after selecting the first, instead of trudging through each bundle of fifty journal titles, would expedite the browsing significantly.

What other things can you think of to improve the browsing experience in this E-Journal portal? Do you have any Web site where you can easily and quickly browse a large number of items?

 

** Below are the screens from steps 2, 3, and 4 with the changes marked in red for your review (2′, 3′, and 4′).

 

 

Workflow Automation in Technical Services: Part 2

Note: This is part two of a two-part series on workflow automation in Technical Services. Part one covered what workflow automation is, the process behind it, and an example of item-level workflow automation. Part two discusses batch-level workflow automation and resources and tools for workflow automation.

Last time, we discussed the basics of workflow automation and some examples of item-level automation in cataloging and acquisitions workflows. Automating workflows on an item-by-item basis provides greater consistency and efficiency in daily tasks done by staff, allowing them to spend more time on complex workflows and tasks that may not be so readily automated. Item-level workflow automation can be a low-barrier investment in creating a more efficient operation.

Then you have the electronic journals, ebooks, and databases. You have large record files that are tied to physical resources – for example, record downloads from WorldCat Cataloging Partners. And then there are all those records in the system – MARC, XML, whatnot – that have missing or incorrect information (the infamous “dirty data”). Why can’t we just stick with item-level processing for everything?

Item level automation or batch automation?

For item level automation, you have a very granular level of control over the process, dealing with items one at a time. If the items are very similar in nature or differ in only a couple of ways in how each will be processed, though, then going through each item individually probably doesn’t make a lot of sense. On the other hand, batch processing allows you to go through many items at once, which makes adding or maintaining resources a quicker job than going item by item. You do give up a certain level of control over details with batch processing, however, which leaves you to decide where the “good enough” marker should go in terms of data quality.

Overall, you want to avoid sub-optimizing your workflow. Sub-optimization happens when a part of an organization focuses on the success of its own area instead of the entire organization’s success [1]. Going through each resource record individually might give you the greatest control over the record, but if you’re going through a file containing 10,000+ records individually, even with an item level automated workflow, the turnaround time for creating access to all those resources will be much higher than if the file were processed at once. However, with the right tools, you can deal with record batches with speed and a good level of control over the data.

MarcEdit is your friend

Many people have at least heard about MarcEdit, or have colleagues who have used it extensively. MarcEdit is a freely available program (for Windows) created by Terry Reese that works with MARC records in a variety of ways. You can add, delete, or modify fields in records, create MARC records from data in spreadsheets, crosswalk to and from the MARC format, split files, join files, generate call numbers, de-duplicate records – and that’s only part of what you can do with MarcEdit. Also, if you find yourself going through the same batch workflow for the same files on a regular basis, MarcEdit’s Script Wizard helps with automating routine batch processing workflows.

Example: Missing 041 1_ subfield h, or, this item is a translation, not in two languages!

Many of you may have moved your older library catalogs to a newer discovery layer; I’ve survived one move at my previous place of work and will probably have another move under my belt soon. One consequence of such a move is that data ignored by the previous layer sticks out like a sore thumb in the new one. This example is one of those dirty data discoveries: a particular MARC variable field incorrectly indicated that an item is in two or more languages instead of being a translation. Not only do you have unhappy library users who thought you had a copy of The Little Prince in both French and English, but the error exists in a few thousand records, leaving you with a potentially resource-intensive cleanup project.

If you can isolate and export those records in one (or a couple of) files from your database, then you can use MarcEdit to clean up the field in a relatively short time. Open the file in MarcEdit’s MarcEditor, and make your way to the “Edit Subfield” tool under the Tools menu. Let’s say that a lot of records have engfre in the 041 field and you want to change all the records with that entry at once. Replace the engfre field data with eng$hfre and you’ve taken care of all those records in one pass.

Since you probably have more than engfre in your file, you can use regular expressions in MarcEdit to change multiple fields at once regardless of language code. Using the Find/Replace tool, search for the 041 field subfield a, but this time add your regular expression and mark the “Use regular expression” box. Such an expression assumes that the 041 field has two language codes, each three letters long, so you will have to do a little cleanup after running the replace command to catch fields with three or more language codes as well as two-letter language codes. (h/t to zemkat for the regular expression!)
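
For those who prefer scripting, the same cleanup can be done outside MarcEdit. Below is a rough pymarc sketch under the same assumption of exactly two three-letter codes run together in subfield $a; the file names are invented, and it relies on pymarc’s delete_subfield/add_subfield helpers:

    import re
    from pymarc import MARCReader, MARCWriter

    # Match a 041 $a holding two three-letter language codes run
    # together, e.g. 'engfre'.
    two_codes = re.compile(r"^([a-z]{3})([a-z]{3})$")

    with open("export.mrc", "rb") as infile, open("fixed.mrc", "wb") as outfile:
        writer = MARCWriter(outfile)
        for record in MARCReader(infile):
            for field in record.get_fields("041"):
                match = two_codes.match(field["a"] or "")
                if match:
                    # $a engfre becomes $a eng $h fre: a translation
                    # from French, not a bilingual edition.
                    field.delete_subfield("a")
                    field.add_subfield("a", match.group(1))
                    field.add_subfield("h", match.group(2))
            writer.write(record)
        writer.close()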

Libraries and modules and packages, oh my!

What if you’ve been learning some code, or are looking for an excuse to learn? You’re in luck! Some of the common programming languages have tools to deal with MARC data. Rolling your own batch automation scripts and applications allows you the most flexibility in working with other library data formats as well. However, if you haven’t programmed before, choose smaller projects to start. In addition, if the script or application doesn’t work, you’re your own tech support.

Example: Creating order records for patron driven acquisition (PDA) items triggered for purchase

Patron driven acquisition usually involves the ingestion of several hundred to several thousand records into the local database for items that are not technically owned by the library at that point in time. Depending on the PDA vendor one uses, an item is triggered for purchase after it reaches a use threshold (for example, 10 page views). The library will receive an invoice with these purchases, but we still need to create order records in the system to show that these items have been bought. Considering that in a given week the number of purchases can range from single digits to high double digits, that’s a lot of order records to key in manually.

After dabbling with pymarc at code4lib 2010, I thought this would be a good project to learn more about pymarc and python overall. Here is an outline of the script actions:

  1. From the trigger report spreadsheet, extract the local control numbers for the items triggered for purchase.
  2. Execute a SQL query against the local database for our locally developed next generation catalog, matching the local control numbers and extracting the MARC records from the database.
  3. In each MARC record:
  • add a 590 and 790 field for donor/fund information
  • add a 949 field containing bibliographic record overlay and order record creation information for the system, including the cost of the item extracted from the spreadsheet
  • change the 947 field data to indicate that the item has been purchased (for statistical reporting later on)
  4. Write the MARC records to a file for import into the ILS.

The output file is then uploaded into the ILS manually, which gives staff the chance to address any issues with the records that the system might have before import. Overall, the process from downloading the trigger report spreadsheet to uploading the record file into the ILS takes a few minutes, depending on the size of the file.
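
Here is a condensed, hypothetical sketch of that outline, not the actual script: the file names, SQL, and 9xx field values are invented stand-ins for local details, sqlite3 stands in for the catalog’s real database, and it assumes a pre-5 version of pymarc where subfields are flat code/value lists.

    import csv
    import sqlite3  # stand-in for the local catalog database
    from pymarc import Field, MARCReader, MARCWriter

    # Step 1: pull control numbers and costs from the trigger report.
    costs = {}
    with open("trigger_report.csv", newline="") as report:
        for row in csv.DictReader(report):
            costs[row["control_number"]] = row["cost"]

    conn = sqlite3.connect("catalog.db")
    with open("pda_orders.mrc", "wb") as out:
        writer = MARCWriter(out)
        for number, cost in costs.items():
            # Step 2: fetch the stored MARC for each triggered item.
            (raw,) = conn.execute(
                "SELECT marc FROM records WHERE control_number = ?", (number,)
            ).fetchone()
            record = next(MARCReader(raw))
            # Step 3: donor/fund notes, overlay/order data, purchase flag.
            record.add_field(Field(tag="590", indicators=[" ", " "],
                                   subfields=["a", "Purchased through the PDA program."]))
            record.add_field(Field(tag="790", indicators=[" ", " "],
                                   subfields=["a", "PDA Fund."]))
            record.add_field(Field(tag="949", indicators=[" ", " "],
                                   subfields=["a", "overlay", "p", cost]))
            record.add_field(Field(tag="947", indicators=[" ", " "],
                                   subfields=["a", "pda-purchased"]))
            # Step 4: write to a file for manual import into the ILS.
            writer.write(record)
        writer.close()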

Which automation tools and resources to use?

There are a multitude of other automation tools and resources that cannot be fully covered in two blog posts. Your mileage may vary with these tools; you might find Macro Express to be a better fit for your organization than AutoIt, or you may find that working with ruby-marc is easier for you than MarcEdit (resource links listed below). The best way to figure out what’s right for you is to play around with various tools and get a feel for them. More often than not, you’ll end up using multiple tools for different levels and types of workflow automation.

Don’t forget about the built-in tools in existing applications as well! Sometimes the best tools for the job are already there for you to take advantage of them.

For your convenience, here are the tools mentioned in the two blog posts:

  • MarcEdit
  • pymarc
  • ruby-marc
  • Macro Express
  • AutoIt

[1] http://dictionary.cambridge.org/dictionary/business-english/sub-optimization

Disruptive Educational Models and Open Education

Eating Your Own Dog Food

One of the most memorable experiences I had as a library student was becoming a patron of my own library. As an online library school student* I usually worked either in my office at pre-approved times, or at home. However, depending on the assignment, sometimes I worked out at the reference area public access computers. It nearly drove me mad, for a very simple reason – this was in the days before optical mouse devices, and the trackballs in our mice were incredibly sticky and jerky, despite regular cleaning routines. It was so bad I wondered how students could stand to work on our workstations, and how it made them feel about the library in general, since there is nothing like a solid hour or so of constantly repeated, albeit small, irritations to make a person develop indelible negative feelings towards a particular environment.

I’ve heard the same thing from colleagues who have started graduate programs here at my university; they are shocked at how hard it can be to be a student in the library, even with insider knowledge, and it can be demoralizing (and galvanizing) to watch classmates and even instructors dismiss library services and resources as “too confusing” or “learning curve too steep” as they ruthlessly practice least-effort satisficing for their information needs.

In information technology circles, the concept of having to use your own platforms/services is known as “eating your own dog food” or “dogfooding.” While there are pitfalls to relying too heavily on it as an assessment tool (we all have insider knowledge about libraries, software, and resources that can smooth the process for us), it is an eye-opening exercise, especially to listen to our users be brutally frank about what we offer — or don’t.

DIY Universities and Open Education

I am suggesting something related but complementary to dogfooding — sampling the models and platforms of a burgeoning movement that has the potential to be a disruptive force in higher education. DIY U and the coming transformation of education are all the rage (pun intended) these days, as prestigious universities and professors, Edupunks, loose collaboratives, and start-ups participate in collaborative free online offerings through various platforms and with different aims: Coursera, Khan Academy, P2PU, MIT OpenCourseWare, Udacity, NYU Open Education, and many more. This is a call to action for us as librarians. Instead of endlessly debating what this might mean, where it might be going, and what this movement’s effect on academic libraries might be, I suggest actually signing up for a course and experiencing it first-hand.

For library technologists facing the brave new world of higher education in the 21st century, there are three major advantages to taking a class in one of the new experimental DIY universities. First, we get to experience new platforms, delivery mechanisms, and modes of teaching, some of which may be applicable to the work of the academic library. Second, many of the courses offered are technical courses that are directly applicable to our daily work. Third, it allows us as academic participants to personally assess the often intemperate and hyperbolic language on both sides of the debate: “can’t possibly be as good as institutional campus-based face-to-face EVER” versus “this changes everything, FOREVER.” How many faculty on your campuses do you think have actually taken an online class, especially in one of these open educational initiatives? This is an opportunity to become an informed voice in any local campus debates and conversations. These conversations and debates will involve our core services, whether faculty and administrators realize it or not.

It will also encourage some future-oriented thinking about where libraries could fit into this changing educational landscape. One of the more interesting possible effects of these collaborative, open-to-all ventures is the necessity of using free or open access high-quality resources. Where will that put the library? What does that mean for instructional resources hidden behind a particular institution’s authentication wall? Academic libraries and services have been tied to particular institutions — what happens when those affiliations blur and change extremely rapidly? There are all sorts of implications for faculty, students, libraries, vendors, and open access/open educational resources platforms. As a thought exercise, take a look at these seven predictions for the future of technology-enabled universities from JISC’s Head of Innovation, Sarah Porter. Which ones DON’T involve libraries? As a profession, let’s get out on the bleeding edge and investigate the developing models.

I just signed up for “Model Thinking” through Coursera. Taught by Professor Scott E. Page from the Center for the Study of Complex Systems at the University of Michigan, the course covers using models to make sense of trends, social movements, and behaviors, because “evidence shows that people who think with models consistently outperform those who don’t. And, moreover people who think with lots of models outperform people who use only one.” That sounds applicable to making decisions about e-books, collection development, workflow redesign, changing models of higher education, et cetera.

Some Suggestions:

  • Coursera offers clusters of courses in Society, Networks, and Information (Model Thinking, Gamification, Social Network Analysis, among others) and Computer Science (Algorithms, Compilers, Game Theory, etc.). If you have a music library or handle streaming media in your library, what about Listening to World Music? If you are curious about humanities subjects that have depended on traditional library materials in the past, try A History of the World since 1300 or Greek and Roman Mythology.
  • Udacity offers Building a Search Engine, Design of Computer Programs, and Programming a Robotic Car (automate a bookmobile?).
  • Set up your own peer class with P2PU, or take Become a Citizen Scientist, Curating Content, or Programming with the Twitter API.
  • If you are in the New York City area and can attend an in-person workshop, General Assembly offers Storytelling Skills, Programming Fundamentals for Non-Programmers, and Dodging the Dangers of Copyright Law (taught by participants in Yale Law School’s Information Society Project) as part of a menu of  tech and tech-business related workshops. These have fees ranging from $15 to $30.
  • Before I take my Model Thinking class, I’m planning to brush up my algebra at Khan Academy.
  • Try the archived lectures at Harvard’s “Building Mobile Applications”, hosted in their institutional repository.
  • Health Sciences Librarian? What about Information Technology in the Health Care System of the Future from MIT OpenCourseWare?

 

* Full disclosure: I am a proud graduate of the University of Illinois’ LEEP (5.0) MSLIS program, I also have another master’s degree done the old-fashioned way, and I am an enthusiastic supporter of online education done correctly.

Career Impact and Library Technology Research

This blog post is not concerned with the specific application of a technology; rather, it advocates the rather post-modern idea of researching and writing in library technology for career impact. I take as my departure point the fact that not all research articles are useful contributions to the field. While intellectual rigor has its place in research, if the connection to service improvements or broader big-picture questions is not addressed by scholarly research outputs, the profession as a whole will not advance.

In a sense, it is after tenure that academic librarians begin to think about careers of impact. We may ask ourselves what library needs or open problems were met by our work. We ask: did our research outputs matter? Did our research stand up over time? Has the field moved forward at all?

A major problem in library and information science literature, from an editorial perspective, is the local-ness of any given paper. To generalize, many papers now coming into journal submission portals report how a specific local problem was addressed. The paper does its intellectual work only as far as its local institution is concerned. Broadly, what is needed in library writing — writing that is primarily driven by tenure-line librarians — is consideration of the practice of librarianship beyond the boundaries of a discrete study.

This underscores another significant problem, one that could be addressed by the right kind of mentorship in library settings: the why of publishing as a corollary to the how, which veterans can teach. Veteran tenured librarians will be able to speak to the methods for getting into print, even into top-tier journals like the Journal of Academic Librarianship. However, what is missing, and what this post is fundamentally concerned with, is the why of publishing for tenure.

When I started writing, the impulse was to sound smart. This is something I regretted deeply when I watched new library school students take notes on that paper. Now I write to communicate, since a wise person once said, “the smartest people are those who can communicate with others.” What we are attempting to communicate when we publish are ways to improve practice and move the field forward. That is why we publish. That is why we research. That is why we choose and stay on the tenure track: to have a career of impact in the field.

Can such a thing be taught? It’s like asking if morality can be taught, because it is a rather moral (and possibly post-modern, anti-ego) choice to think of your profession as advancing and not yourself. While most tenure-track activities can have the effect of growing one’s ego, the path worth going down, the very interesting and profound path librarians must follow if they are to remain honest, is to empty the ego, to set aside concern for the individual career and to think instead of the profession.

Our careers are not our own, any more than the libraries we work and live in are ours. The IT career of impact for librarians is the career made in service to the profession.

 

Personal Data Monitoring: Gamifying Yourself

The academic world has been talking about gamification of learning for some time now. The 2012 Horizon Report says gamification of learning will become mainstream in 2-3 years. Gamification taps into the innate human love of narrative and displaying accomplishments. Anyone working through Code Year is personally familiar with the lure of the green bar that tells you how close you are to your next badge. In this post I want to address a related but slightly different topic: personal data capture and analytics.

Where does the library fit into this? One of the roles of the academic library is to help educate and facilitate the work of researchers. Effective research requires collecting a wide variety of relevant sources, reading them, and saving the pertinent information for the future. The 2010 book Too Much to Know by Ann Blair describes the note-taking and indexing habits taught to scholars in early modern Europe. Keeping a list of topics and sources was a major focus of scholars, and the resulting notes and indexes were published in their own right. Nowadays maintaining a list of sources is easier than ever with the many tools to collect and store references–but challenges remain due to the abundance of sources and the pressure to publish, among others.

New Approaches and Tools in Personal Data Monitoring

Keeping track of one’s daily habits, reading lists, and other personal information is a very old human practice. Understanding what you are currently doing is the first step in creating better habits, and technology makes it easier to collect this data. Stephen Wolfram has been using technology to collect data about himself for nearly 25 years, and he posted some visual examples of this a few weeks ago. These include items such as how many emails he’s sent and received, keystrokes made, and file types created. The Feltron Report, produced by Nick Felton, is a gorgeously designed book of personal data about himself and his family. But you don’t have to be a data or design whiz to collect and display personal information. For instance, to display your data in a visually compelling way, you can use a service such as Daytum to create a personal data dashboard.

Hours of Activity recorded by Fitbit

In the realm of fitness and health, there are many products that will help capture, store, and analyze personal data. Devices like the Fitbit clip or strap to your body and count steps taken, floors climbed, and hours slept. Pedometers and GPS-enabled sport watches help those trying to get in shape, but the new fields of personal genetic monitoring and behavior analytics promise to make it possible to know very specific information about your health and understand potential future choices. 23andMe will map your personal genome and provide a portal for analyzing and understanding your genetic profile, allowing an unprecedented ability to understand your health (though there is doubt about whether this can accurately predict disease). For the behavioral and lifestyle aspects of health, a new service called Ginger.io will help collect daily data for health professionals.

Number of readers recorded by Mendeley

Visual cues such as graphs of accomplishments and green progress bars can be as helpful in keeping up and monitoring one’s personal research habits as they are in learning to code or training for a marathon. One such feature is the personal reading challenge on Goodreads, which lets you set a goal of how many books to read in the year, tracks what you’ve read, and lets you know how far behind or ahead you are at your current reading pace. Each book listed as in progress has a progress bar indicating how far along in the book you are. This is a simple but effective visual cue. Another popular tool, Mendeley, provides a convenient way to store PDFs and track references of all kinds. Built into this is a small green icon that indicates a reference is unread. You can sort references by read/unread; by marking a reference as “read,” the article appears as read in the Mendeley research database. Academia.edu provides another way for scholars to share research papers and see how many readers they have.

Libraries and Personal Data

How can libraries facilitate this type of personal data monitoring and make it easy for researchers to keep track of what they have done and help them set goals for the future? Last November the Academic Book Writing Month (#acbowrimo) Twitter hashtag community spun off of National Novel Writing Month and challenged participants to complete the first draft of an academic book or other lengthy work. Participants tracked daily word counts and research goals and encouraged each other to complete the work. Librarians could work with researchers at their institutions, both faculty and students, on this type of peer encouragement. We already do this type of activity, but tools like Twitter make it easier to share with a community who might not come to the library often.

The recent furor over the change in Google’s privacy settings prompted many people to delete their Google search histories. Considered another way, that history is a treasure trove of past interests to mine for a researcher trying to remember a book he or she was searching for some years ago—information that may not be available anywhere else. Librarians have certain professional ethics that make collecting and analyzing that type of personal data extremely complex. While we collect all types of data and avidly analyze it, we are careful not to keep track of what individuals read, borrowed, or asked of a librarian. This keeps individual researchers’ privacy safe; the major disadvantage is that it puts the onus on the individual to collect his or her own data. For people who might read hundreds or thousands of books and articles, it can be a challenge to track all those individual items. Library catalogs are not great at facilitating this type of recordkeeping. Some next generation catalogs provide better listing and sharing features, but the user has to know how to add each item. Even if we can’t provide users a historical list of all items they’ve ever borrowed, we can help educate them on how to create such lists. In fact, unless we do help researchers create lists like this, we lose out on an important piece of the historical record, such as the library borrowing history in Dissenting Academies Online.

Conclusion

What are some types of data we can ethically and legally share to help our researchers track personal data? We could share statistics on the average numbers of books checked out by students and faculty, articles downloaded, articles ordered, and other numbers that will help people understand where they fall along a continuum of research. Of course all libraries already collect this information–it’s just a matter of sharing it in a way that makes it easy to use. People want to collect and analyze data about what they do to help them reach their goals. Now that this is so easy we must consider how we can help them.

 

Works Cited
Blair, Ann. Too Much to Know: Managing Scholarly Information Before the Modern Age. New Haven: Yale University Press, 2010.

What is a Graphic Design Development Process?

Previously, I wrote about the value of design in libraries, and others, including Steven Bell and Aaron Schmidt, have written and presented on the topic of design in libraries as well. Now I’d like to focus on and delve specifically into what a graphic design process may entail. For librarians who design regularly, I hope this helps to articulate what you may be doing already or perhaps adds a bit to your tools and tips. For those who don’t design, I hope this gives you insight into a process that is more complex than it may seem, and that you might give designing a try yourself. For some ideas, try any of these great library design projects: signs, webpages, posters, flyers, bookmarks, banners, etc.

What Is It Like to Design?

People might wonder why design needs to be a process. The very basic process of design, like many processes, is to identify a problem and then create a solution. Jason Fried, founder of 37signals and co-author of Rework, tweeted recently, “Your first design may be the best, but you won’t know until you can’t find a better one.” He later added an image from the Intercom blog as an illustration of this important point. Striving for an elegant or best solution is something librarians and designers have in common. Librarians often share best practices, and examining this process may not only assist us in terms of design; perhaps we can also apply these concepts to other areas of librarianship as we create programs, outreach, marketing, and more.

Design is a process.
Designers work hard to develop a successful design, and it doesn’t always come easily. Here are some of the basic steps designers take in the development phase of their work. Every designer is a bit different, and not all designers follow the exact same process. However, this is a pretty good foundation for beginning designers, and once you get good, you can incorporate or modify pieces of the process to make it work for you and the project at hand. Design is subjective and there are few hard and fast rules to follow; in future posts I’ll be talking more about design elements and details to help you create stronger designs that will speak to your users.

Design has constraints.
Before you start laying things out and jumping into a design, you want to understand the “specs,” or specifications. These are the details of the final piece that you need up front, before you begin any design. For example, is the piece going to be printed or is it an online piece? What’s the budget? Is it black and white or color, and how many colors? What size? If printed, what paper will it be printed on? Will color bleed to the edge or is there a border? Is there folding or cutting involved?

All of these considerations are going to be the rules you must work under. But most designers like to think of them as challenges; many times, if the specs aren’t too restrictive, they can actually empower the designer to drive harder and be more creative. You really don’t want to start designing before you get this all worked out, because once you’ve jumped in, a missed critical spec can mean starting over. If you design for one set of specs and then try to modify the result to fit all-new specs later, it almost always compromises the strength of the design. Better to know those specs up front.

Design requires an open mind.
Sketch like crazy. You may think you have the best, most original idea ever once you get your assignment or have your specs, but please do yourself a huge favor and sketch some ideas out first. Do at least a page of sketches, if not much more. Take notes, do some research on the topic, do word associations and mind maps, draw stick figures, and doodle. Keep an open mind to new possibilities. Observe the world around you, daydream, and collect inspiration. You might still stick with that first idea, but chances are you’ll come up with something even better and usually more original if you push yourself to think in new ways and explore.

Design step by step.
Depending on the complexity of the piece, and whether it’s print or web, I might do more or less of each step below. If you’re designing or reworking a website, this is a good method for getting a powerful, thoughtful design. And of course, you can go back and iterate based on feedback or on changes that impact design elements. If the design structure is strong, such changes should be fairly small.

Basic Design Development Process:

1. research the topic, take notes, ask questions, doodle, jot down ideas, simmer 

2. series of thumbnail sketches
This is an extension of step 1. Do as many as you can muster…do it until you are sick of it. Here is a great presentation I recently found on sketching.

3. build wireframe
Stay abstract and block in the composition. This will be larger than a thumbnail, but try to keep it free from detail.

4. sketch comps
Take steps 2 and 3 and flesh out 3 comps. These should not be final but should follow specs and be close to finished in terms of look and feel for the major design components. You may use lorem ipsum text if you wish. This technique helps keep people from giving feedback about the content instead of the design. Of course, there are times the content may absolutely need to be there, but use your own discretion and know that this is an option that may help in moving forward.

5. finalize comps
Usually 3 choices are offered to a client, but if you are your own client, obviously just do your favorite.
All of this is separate from any CSS, HTML, JavaScript, etc. Mock it up using Photoshop and/or Illustrator (or a similar program of your choice). The point is to focus on the design apart from laying down code. “Form follows function” really rings true here. It isn’t an either/or statement. The product must work first and foremost, and the design will support, enhance, and make it work better. If it doesn’t work, no amount of gorgeous design will fix something that is badly broken.

TaDa, right?
The design is done, let’s celebrate!

Well, not exactly. This process is merely one phase of a much larger process that includes initially meeting the client, negotiating a contract, presenting your designs, more testing and usability work, iterative design adjustments, possibly working with developers or print houses, etc. Design is a process that requires study, skill, schooling, and knowledge, like many fields. I’ll be talking about more design topics in the future, so what is not covered here I’ll try to cover next time. Luckily, I gathered some great…

Design resources to get you started:

This is not a comprehensive list by any means but highlights of a few resources to get you thinking about design.

  • Non-Designer’s Design Book: One of the best beginner design books out there (overlook the cover – it really is a great book!).
  • Smashing Magazine: Really good stuff on this website – including freebies, like decent icons and vector artwork. Covers typography, color, graphic design, etc.
  • A List Apart: another great site that delves into all kinds of topics but has great stuff on graphic design, UI design, typography, illustrations, etc.
  • Fast Company Design: relevant design articles and examples from industry.
  • IDEO: design thinking, great high level design examples – check out their portfolio in selected works.
  • Thinking With Type: title says it all – learn about the fine art and science of typefaces. You will never look at design and type the same way again.
  • Stop Stealing Sheep and Find Out How Type Works: another must on typography
  • Drawing on the Right Side of the Brain: seriously. even if you think you can’t draw. try it. anyone can draw, truly. Drawing helps you think in new and creative ways – it will help you be more creative and help in problem-solving anything. Even those small doodles are valuable.

Pick. your. favorite. see above. do it.

Enjoy and thanks again!

Glimpses into user behavior

 

Heat map of clicks on the library home page

Between static analytics and a usability lab

Would you like an even more intimate glimpse into what users are actually doing on your site, instead of what you (or the library web committee) think they are doing? There are several easy-to-use web-based analytics services, such as ClickTale, userfly, Loop11, Crazy Egg, Inspectlet, and Optimalworkshop. These online usability services offer various ways to track what users are doing as they actually navigate your pages — all without setting up a usability lab, recruiting participants, or introducing the artificiality and anxiety of an observed user session. ClickTale and userfly record user actions that you can view later as a video; most services offer heatmaps of where users actually click on your site; some offer “eye tracking” maps based on mouse movement.

  • Most services allow you to sign up for one free account for a limited amount of data or time.
  • Most allow you to specify which pages or sections of your site that you want to test at a time.
  • Many have monthly pricing plans that would allow for snapshots of user activity in various months of the year without having to pay for an entire year’s service.

We’re testing Inspectlet at the moment. I like it because the free account offers the two services I’m most interested in: periodic video captures of the designated site and heat maps of actual clicks. The code is a snippet added to the web pages of interest. The screen captures are fascinating — watch below as an off-campus user searches the library home page for the correct place to do an author search in the library catalog. I view it as a bit of a cautionary illustration about providing a lot of options. Follow the yellow “spotlight” to track the user’s mouse movements. As a contrast, I watched video after video of clearly experienced users taking less than two seconds to hit the “Ebsco Academic Search” link. Be prepared; watching a series of videos of unassisted users can dismantle your or your web committee’s cherished notions about how users navigate your site.

Inspectlet video thumbnail

This is a Jing video of a screen capture — the actual screen captures are much sharper, and I have zoomed out for illustrative purposes. The free Inspectlet account does not support downloads of capture videos, but Rachit Gupta, the founder, wrote me that in the coming few weeks, Inspectlet is releasing a feature to allow downloads for paid accounts. Paid accounts also have access to real time analytics, so libraries would be able to get a montage of what’s happening in the lobby as it is happening. Imagine being able to walk out and announce a “pop-up library workshop” on using the library catalog effectively after seeing the twentieth person fumble through the OPAC.

Another thing I like about Inspectlet is the ability to anonymize the IP addresses in the individual screen captures to protect an individual patron’s privacy.

The list below compares a few of the most widely used web-based analytics tools; all offer some combination of video captures, heat maps, mouse and click tracking, and real-time modes.

  • ClickTale: scroll maps, form analytics, conversion funnels, campaigns. Privacy policy posted. Basic $99/month; limited free plan; month-to-month pricing; higher education discounts available (call).
  • Crazy Egg: scroll maps, click overlays, confetti overlay. Privacy policy posted. Basic $9/month (billed annually).
  • Inspectlet: scroll maps, custom API, anonymized IP addresses. Privacy policy posted. Starter $7.99/month; limited free account. Can cancel subscription at any time.
  • mouseflow: movement heatmaps, link analytics. Privacy policy posted. Small: approx. $13 US/month; free plan. Can cancel subscription at any time.
  • seevolution: scroll maps, visual tool set for real time. Privacy policy posted. Light: $29/month; free plan, but very limited details.
  • userfly: terms of service include a brief privacy explanation. Basic $10/month; 10 free captures a month. Can cancel subscription at any time.

If you are using one of these services, or a similar service, what have you learned about your users?

Testing new designs or alternative designs – widely used web-based usability tools

After you’ve watched your users and determined where there are problems or where you would like to try an alternative design,  these services offer easy ways to test new designs and gather feedback from users without setting up a local usability lab.

 

  • Loop11: create test scenarios and analyze results (see demo). Privacy policy posted. First project free; $350 per project.
  • Optimalworkshop: card sorting, tree testing, click testing. Privacy policy posted. Free plan for a small project; $109 for each separate plan; 50% discount for education providers.
  • OpenHallway: create test scenarios and analyze results. Terms of service posted. Basic: $49/month; limited free account. Can cancel subscription at any time.
  • Usabilla: create test scenarios and analyze results; mobile UX testing. Terms of service posted. Starter: $19/month. Can cancel subscription at any time.

 

Workflow Automation in Technical Services: Part 1

Note: This is part one of a two-part series on workflow automation in Technical Services. Part one covers what workflow automation is, the process behind it, and an example of item-level workflow automation. Part two will discuss batch-level workflow automation and resources and tools for workflow automation.

The mysterious door at the library

Door leading into Technical Services
Photo by author

A majority of you might have passed by this door many times in your library lives. Sometimes it isn’t even a door; maybe it’s a room divider, or an invisible line that runs across the room. In any case, you may have ventured into the space called “Technical Services” (or a similar name), but do you know what goes on there? For most libraries, Technical Services staff acquire, create, and maintain access to library materials, spanning from books and a box of rocks to various electronic databases and digitized local collections. Without them, it would be hard for a library to serve its users: no physical items to borrow, no electronic journals to search for articles, and no metadata in the library discovery layer for users and staff to search for those resources. With the variety of items comes a variety of workflows to process them, many of which are repeated at various intervals: some once a week, others multiple times a day. Staff time and resources are spoken for every time a workflow is repeated manually, leaving less time for other projects or for new projects that would add value to existing collections or add new collections for library users. Technology provides a variety of strategies for workflow automation that reduce the time spent on repetitive workflows.

What is workflow automation?

The oversimplified answer to this question is that workflow automation is the process where you have the computer do the things that it can be programmed to do, thereby reducing repetitive manual actions by the staff member.

There are two types of automation to consider when you look at your workflows:

  1. Data Entry: This type of automation is fairly straightforward, and you’ve probably already done it without realizing it. For example, the automation script completes a form with data that remains the same for each form, or types out standard text in an email being sent to a vendor. This is useful for automating repetitive keystrokes, be it system codes, text, or even creating new documents in certain applications, such as an item record. The automation script is hard-coded, meaning that the output of that script will be the same every time you run it.
  2. Decision Making: This type of automation makes all the decisions for you! Okay, while it won’t make every decision for you, several automation languages and programs can handle fairly complex decision-making flowcharts using standard conditionals. For example, if bibliographic record “A” has field “B”, then do action “C”; else do action “D”. As you probably already guessed, this type of automation resembles coding to a certain extent. The automation script that is designed to deal with several possible outcomes is not hard-coded like the data entry script described above. (A minimal sketch of both types appears after this list.)
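To make the two types concrete, here is a minimal sketch in Python. The vendor email text, field names, and routing actions are hypothetical stand-ins rather than codes from any particular ILS; the point is the contrast between hard-coded output and a conditional rule.

```python
# 1. Data entry: hard-coded output; the script produces the same text
#    every time it runs (only the vendor name is filled in).
def fill_order_email(vendor_name):
    """Build the standard text block pasted into every vendor email."""
    return (
        f"Dear {vendor_name},\n\n"
        "Please cancel the backordered titles on the attached list.\n\n"
        "Thank you,\nAcquisitions Department"
    )

# 2. Decision making: a conditional of the form
#    "if record A has field B, then do action C; else do action D."
def route_record(fields_present):
    """Decide how to route a record based on which fields it has."""
    if "B" in fields_present:
        return "action C"   # e.g., send to fast-track processing
    return "action D"       # e.g., bump to copy cataloging

print(fill_order_email("Example Vendor"))
print(route_record({"A", "B"}))  # -> action C
print(route_record({"A"}))       # -> action D
```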

What can be automated?

Most Technical Services departments acquire, create, and maintain access to a variety of formats, from physical to electronic. Traditionally, workflows focus on the individual item going through the department and its various teams: acquisitions, cataloging, and processing, for example. With the changeover to electronic formats, workflows are moving toward a batch approach, processing and/or cataloging multiple items (for example, a collection of ebooks) at once.

In addition to adding materials to library collections, a library’s Technical Services staff do a fair amount of database maintenance for the library’s ILS (Integrated Library System). The term “dirty data” gets thrown around TS departments, covering database projects dealing with misspellings, outdated codes, or incorrect codes – anything that could inhibit a library user’s access to the resource.
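As a taste of what such a cleanup project can look like in code, here is a sketch that sweeps a file of exported MARC records for an obsolete location code. It uses the pymarc library; the file name, the 852 ‡b convention, and the code “stor” are hypothetical examples, so adjust for your own ILS export.

```python
from pymarc import MARCReader  # pip install pymarc

suspects = []
with open("export.mrc", "rb") as fh:
    for record in MARCReader(fh):
        # 852 $b often carries a location code; adjust for your export.
        for field in record.get_fields("852"):
            if field["b"] == "stor":  # the obsolete code we're hunting
                ctrl = record["001"]
                suspects.append(ctrl.value() if ctrl else "(no 001)")

print(f"{len(suspects)} records still use the obsolete code:")
print("\n".join(suspects))
```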

Why should I automate my workflows?

  • Better quality control of workflow and data. Any time you let a human near a workflow, errors can be introduced: incorrect codes, mistyped text, or mishandled items. Automating a workflow cuts down on its fail points and allows for better overall consistency and accuracy.
  • Save staff time. You and your staff spend a good amount of time on repetitive keystrokes and decisions. Even small repetitive actions add up during the work day to hours of valuable staff time and resources. By automating the repetitive actions, you free up staff time for more complex workflows which are not as easily automated.

How do you decide what workflows to automate?

  • Flowchart your workflow. A simple flowchart from the beginning of the workflow to the end might reveal several places where current manual decision making can be relegated to a script. If a person is currently looking for a code in the order record to figure out what location code they should enter in the item record, the script could be set to do the same (see the sketch after this list).
  • What are the patterns? In each step, what data remains constant throughout all items? What codes, phrases, or fields do you insert every time you go through the workflow? Is there a pattern of going from one application to another at the same point in every workflow? One record to another?
  • How will the script access the data? Working with a file of MARC records is different from working with a bibliographic record that is open in your ILS. Having a file of data is easier, but if you’re automating an item-level workflow, you will be dealing with application windows. Getting data from a window can be tricky; sometimes you are able to access the data directly, and other times you will have to scrape the screen to get to the data that you want the script to work on.
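Here is a sketch of the order-record-to-item-record rule mentioned in the first bullet. The codes are hypothetical; the point is that a lookup table plus a sensible default can replace a person reading one record and typing into another.

```python
# Hypothetical mapping from order-record codes to item locations.
ORDER_TO_LOCATION = {
    "ref": "refdesk",    # reference orders shelve at the reference desk
    "juv": "juvenile",   # juvenile orders go to the children's room
    "gen": "stacks",     # general orders go to the main stacks
}

def location_for(order_code):
    # Fall back to the main stacks for unrecognized codes rather than
    # guessing; exceptions can be flagged for human review instead.
    return ORDER_TO_LOCATION.get(order_code, "stacks")

assert location_for("ref") == "refdesk"
assert location_for("xyz") == "stacks"
```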

Example: Receipt Cataloging

At my former place of work, Technical Services had three levels of cataloging: receipt cataloging, copy cataloging, and original cataloging. All monographs would go through the receipt cataloging process, with some items being bumped up to the two higher levels. The majority of items that go through receipt cataloging, having met a list of 40+ criteria, are fast-tracked to physical processing, shortening the time between the item arriving at the library and being placed on the shelf, which is the overarching goal of receipt cataloging. The criteria range from determining if the record is DLC (Library of Congress) to determining if the 008, 050, and 260 ‡c dates match in the bibliographic record (if not a conference publication).
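To show what one of those criteria looks like as code, here is a sketch of the 008/260 ‡c date comparison using pymarc on a file of records. This is an illustration under assumptions (the file name, the bump-to-review behavior), not the actual macros described below, which ran against live ILS windows rather than a file.

```python
import re
from pymarc import MARCReader  # pip install pymarc

def dates_match(record):
    """One receipt-cataloging check: does 008 Date 1 match the 260 $c year?"""
    f008, f260 = record["008"], record["260"]
    if f008 is None or f260 is None or f260["c"] is None:
        return False                 # missing data: bump for human review
    date1 = f008.data[7:11]          # 008 positions 07-10 hold Date 1
    year = re.search(r"\d{4}", f260["c"])
    return bool(year) and year.group() == date1

with open("new_receipts.mrc", "rb") as fh:
    for record in MARCReader(fh):
        status = "fast-track" if dates_match(record) else "review"
        ctrl = record["001"]
        print(ctrl.value() if ctrl else "(no 001)", "->", status)
```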

Given that the criteria and the decision making flowchart are fairly standard and straightforward, this workflow was built with automation in mind. My predecessor used Macro Express (ME) for the first version of the receipt cataloging macros. When we got to the point where we were bumping up against ME’s limits, I migrated the macros to AutoIt, where I was able to include many more quality control checks on the bibliographic and item records.

Below is a screencast where I walk through the receipt cataloging process. If I weren’t explaining what was happening, the whole process would have taken a minute and 10 seconds to complete, plus a couple of seconds if the item was bumped to another team in the department. Compared to a roughly five-minute turnaround when staff manually check every criterion, the macros allow the department to go through more items during the day with better quality control.

Bonus Example: Ordering from GOBI

Another workflow at my former place of work involved ordering monographs from GOBI. Unlike receipt cataloging, this workflow has a much more complex decision-making flowchart and more exceptions. While I could not automate it to the same degree as receipt cataloging, there were still patterns and routines that I could automate, such as searching the library catalog with information supplied by GOBI, and determining which codes to enter in the 949 field of the OCLC record (for exporting into our database).
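The 949 decision logic boils down to rules like the following sketch. The subfield codes and values here are hypothetical stand-ins; real 949 contents come from your ILS’s load tables and local documentation.

```python
def subfields_for_949(order):
    """Pick 949 subfield (code, value) pairs from order information."""
    pairs = [("l", "stacks" if order["format"] == "print" else "online")]
    if order.get("rush"):
        pairs.append(("p", "RUSH"))     # flag for expedited processing
    pairs.append(("u", order["fund"]))  # fund code from the GOBI slip
    return pairs

order = {"format": "print", "rush": True, "fund": "hist2024"}
print(subfields_for_949(order))
# [('l', 'stacks'), ('p', 'RUSH'), ('u', 'hist2024')]
```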

Below is a screencast that shows a part of the notification ordering automation script set.

Preview for Part 2

In this post, I covered item-level workflow automation possibilities. More and more Technical Services workflows, however, are shifting toward dealing with many items at once. In part 2, I will discuss some examples of batch process automation and several tools (including those mentioned in this post) that can assist in making life easier in Technical Services.

Making Library e-Books on the e-Book Reader Visible

Browsing Experience in the Virtual vs. the Physical Space

However entangled our lives are in virtual spaces, it is in the physical space that we exist. For this reason, human attention is most easily directed at where visual and other sensory stimuli are. The resulting sensory feedback from interacting with the source of these stimuli further enriches the experience we have in the physical space. Libraries can take advantage of this fact in order to bring users’ fleeting attention to their often-invisible online collections. So far, our experience on the Internet, where we spend so much time, is still mostly limited to one or two sensory stimuli and provides little or no sensory feedback. A library’s online resources, often touted for their 24/7 accessibility anywhere, are no exception to this limitation.

Flickr - "augmented reality game bibliotheek deventer"

Think about new library books, for example. The print ones are usually prominently displayed at a library lobby area attracting library visitors to walk up and browse them in the physical space. By thumbing through a new book and moving back and forth from the table of contents to different chapters, we can quickly get a sense of what kind of a book it is and decide whether we want to further read the book or not. The tactile, olfactory, visual, and auditory sensory input that we get from thumbing through a newly printed book with fresh ink contributes to making this experience enjoyable and memorable at the same time.

Now compare this experience with reading a library Web page with the list of new online library books on a computer screen. Each book is reduced to a string of words and a hyperlink. It is hard to provide any engaging experience with a string of words and a hyperlink.

The Invisibility Problem of Library e-Books

Like many libraries, Florida International University (FIU) Library started a lending program that circulates e-book readers. Each reader comes with more than one hundred titles that have been selected by subject librarians. But how can a library make these library e-books on e-book readers noticed by library users? How can a library help a user quickly figure out what books are available on, say, a library Kindle device when those are specifically what the user is looking for?

Well, if a user runs a keyword search in the library’s online catalog, say, with ‘Kindle,’ s/he will find more than sufficient information since the library has already neatly cataloged all titles available on the Kindle device there. But many users may fail to try this or even be unaware of the new e-book reader lending program in the first place. The e-book reader lending program offers a great service to library users. However, the library e-books offered on the e-book readers can be largely invisible to users who tend to think that what they can see in a library is all a library has.

Giving Physical Presence to Library e-Books on e-Book Readers

The problem can be solved by giving some physical presence to e-books on the library’s e-book readers using a dummy bookmark on the stacks. This is particularly effective as it quickly captures users’ attention while they are already browsing the library stacks looking for something to read.

Users are familiar with a dummy book on physical shelves, which marks a print title that is often looked for under a different name or whose location has recently changed. Applied to Kindle e-books, a dummy bookmark is just as effective. A user can walk around the space where the stacks are located and physically identify those e-books that the library makes available on an e-book reader in each subject section. By providing a visible cue, a dummy bookmark creates a direct sensory association between an e-book and something physical (with visible and tactile feedback) in a user’s mind, thereby effectively expanding a user’s idea of what is available at a library.

When you pull out the bookmark, it looks like this. The bookmark includes the book’s cover image, title, author, and call number, which help a user to locate the title record in the library’s online catalog. But in reality, users are more likely to just walk down to the Course Reserves area to check out an e-book reader after reading this sign.

I tweeted this photo a while ago when I accidentally discovered the idea had been implemented while looking for a book in the stacks. (See the disclaimer below.) I was quite surprised by the many positive comments I received on Twitter. Many librarians also suggested adding a QR code to the dummy bookmark next to the call number. The QR code would be an excellent bonus on the bookmark: it would allow users to check the availability of the title on their mobile devices, so that they can avoid the situation in which the e-book and the e-book reader device have already been checked out.
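Generating those QR codes can be scripted in a few lines. Here is a sketch using the third-party qrcode package; the catalog permalink pattern and call numbers are hypothetical examples.

```python
import qrcode  # pip install "qrcode[pil]"

# Hypothetical call numbers mapped to catalog permalinks.
bookmarks = {
    "QA76.9 .U83": "https://catalog.example.edu/record/123456",
    "Z678.9 .A4": "https://catalog.example.edu/record/654321",
}

for call_number, permalink in bookmarks.items():
    img = qrcode.make(permalink)  # returns a PIL image of the QR code
    filename = call_number.replace(" ", "_").replace(".", "") + ".png"
    img.save(filename)  # print this and add it to the dummy bookmark
```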

If you are running a pricey e-book reader lending program at your library, a dummy bookmark might be an inexpensive but highly effective way to make those e-books stand out to users in the library stacks. What other things do you do at your library to make your online resources and e-books more visible to users?

Disclaimer: I suggested this idea at the E-resources group meeting where all FIU libraries (including the Medical Library, where I work) are represented. But the implementation was done solely by the FIU main Library for their Kindle e-book collection on their stacks. For those who are curious, I was unable to find the exact number of dummy bookmarks on the stacks.