Getting Beyond Library Assessment Fatigue: or how SEO taught me to stop whining and love the data

This post is a bit of a thought experiment. It grew out of a conversation I had with a colleague about something I like to call “assessment fatigue.” I believe we need quality assessment, but I also get extremely tired of hearing assessment-speak everywhere. My fatigue with assessment-speak has been making it difficult to engage with the real work of assessment, but a recent conversation about Search Engine Optimization and Web Analytics (of all things) is helping me get beyond this. I’m hopeful that by sharing and exploring this thought-arc with you, we can profitably move beyond assessment-speak and assessment fatigue and on to the thoughtful and intentional work of building library services informed by data.

TLDR: Jump to the list of three rules from SEO that can apply to library assessment at the bottom of this article.

Assessment Fatigue

Assessment fatigue is the state of not wanting to hear another word about measuring, rubrics, or demonstrating value. I am a frequent sufferer of assessment fatigue, despite the fact that I am convinced that assessment is absolutely necessary to guide the work of the library. I don’t know of a viable alternative to the outcomes-assessment model [1] of goal setting and performance evaluation. I think there is great work out there [2] about how to incorporate assessment into the work of academic libraries. I’ve seen it lead libraries to achieve amazing things, and thus I’m a believer in the power of outcomes- and data-driven planning.

I’m also sick to death of hearing about it. It is frighteningly easy to turn talk of assessment into a dry and empty caricature of what it can be. So much so that I’m usually hesitant to get on board with a new assessment project, because these projects can turn into something out of a Kafka novel or a Terry Gilliam movie at the drop of a hat. This gives me a bad attitude, and my internal monologue can resemble: “Oh yes, let’s reduce the complexities of academic work to the things that are most easily quantified and then plot our success on a rubric,” or “Let’s reduce information literacy to a standardized test and then make our instruction program teach to that test.” I also hear Leonard Nimoy’s voice from Civilization IV in my head saying, “The bureaucracy is expanding to meet the needs of the expanding bureaucracy.” These snarky thoughts are at best unhelpful and at worst get in the way of the work of the library, but I’d be lying if I denied indulging them from time to time. Assessment is undeniably necessary, but it can also be tremendously annoying for the rank-and-file librarians required to add gathering data to their already over-full workloads.

Happily, I’ve discovered something that rescues me from my whining and helps me engage in useful assessment activities. It comes, oddly enough, from what I’ve learned about Search Engine Optimization (SEO). This connection may appear tenuous at first, but drawing it has been profitable for me and has helped both my attitude and my productivity. To make this all a little more clear, I’m going to begin by explaining what I’ve learned through teaching SEO to undergraduates, and then I’ll demonstrate that SEO and library assessment share some key characteristics: both suffer from a bad reputation among those who carry them out, both are absolutely required in order to do the highest quality work in their respective fields, and both are ultimately justified by the power of data-driven decision making.

Teaching SEO

I include a unit on Search Engine Optimization in a course I teach on information architecture. In the class we cover basic organization theory, database structures, searching databases, search engine structure, searching the web, SEO, and microdata markup. I was reluctant at first to add the SEO unit, because I understood SEO as a largely seedy and underhanded marketing affair. Once I taught it, however, I realized that doing SEO the right way requires a nuanced understanding of how web search works. Students who learned how to do SEO also learned how to search and their insights on web search bled over and made them better database searchers as well.

Quick Primer on Web Search & SEO

What makes students who understand SEO and web architecture more effective database searchers has to do with a little-known detail of full-text keyword searching: by itself, keyword searching doesn’t work very well. Put more precisely, keyword search works just fine, but the results of these searches, by themselves, aren’t very useful. Finding keyword matches is easy; the real challenge is in packaging the results of a keyword search in a manner that is useful to the searcher. Unlike databases with well-organized metadata structures, keyword searches don’t have a way of telling what keywords mean. Web content has no title, author, or subject fields [3]. So when I search for “blues,” the keyword search doesn’t know if I’m looking for colors, moods, music, jeans, cheeses, or the French national football side.

Because of this lack of context, search engines create useful results rankings by treating HTML tags and structural elements of the web as implicit metadata. If a keyword is found inside a URL, <title> tag, or <h1> tag, the site is ranked more highly than a site where the keyword appears only in the <body> tag. Anchor-link text, the words underlined in blue in a web link, is especially valuable, since it contains another person’s description of what a site is. In the following example, the anchor-link text “The ACRL TechConnect blog, a site about library technology for academic librarians” succinctly and accurately describes the content being linked to. This makes the content more findable to readers using search engines.

<a href="http://acrl.ala.org/techconnect/">The ACRL TechConnect blog, a site about library technology for academic librarians.</a>

Thus, when we code a site or even make a link, we are, in effect, cataloging the web. This is also why we should never use “click here” in our anchor-link text. When we do that we are squandering an opportunity to add descriptive information to the link, and make it more difficult for potential readers to discover our content. The following is the WRONG way to write a web link.

<a href="http://acrl.ala.org/techconnect/">Click here</a> for the The ACRL TechConnect blog, a site about library technology for academic librarians.

In this example, the descriptive information is outside the link (outside the <a></a> tags) and is thus unrecognizable as descriptive information to a search engine.

Search companies like Ask, Bing, Google, and Yahoo! don’t organize the web; they capture how users and content creators organize and describe their own and each other’s content. SEO, very basically speaking, is the practice of putting knowledge of web search architecture into practice. When we use short but descriptive text in our URLs, <title> tags, and <h1> tags, and write descriptive anchor-link text (in other words, when we practice responsible SEO), we are performing the public service of making the web more accessible for everyone. Search engine architecture and SEO are, of course, much more complicated than these short paragraphs can detail, but this is the general concept: because there is no standardized way of cataloging pages, search engine companies have found workarounds to make “a vast collection of completely uncontrolled heterogeneous documents” (Brin and Page) act like a database. Using that loose metaphor, SEO can be seen as the process of getting web designers to think like catalogers.
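To make “thinking like a cataloger” concrete, here is a minimal sketch of a page marked up with these implicit-metadata signals in mind. The page, URL, and wording are hypothetical; the point is simply that the same short, descriptive phrase appears in the URL, the <title> tag, and the <h1> tag, the very places a search engine treats as stand-in catalog fields.

<!-- Hypothetical page at http://library.example.edu/guides/library-seo-basics -->
<html>
  <head>
    <!-- A descriptive title acts like the title field of a catalog record -->
    <title>Library SEO Basics | Example University Library</title>
  </head>
  <body>
    <!-- The top-level heading reinforces the same descriptive keywords -->
    <h1>Library SEO Basics</h1>
    <p>A short guide to making library web content findable through
       search engines.</p>
  </body>
</html>

Nothing here is technically exotic; the work is in choosing honest, descriptive words and using them consistently everywhere a crawler looks.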

SEO’s Bad Reputation

When viewed from this perspective, SEO doesn’t seem all that bad. It seems like a natural process of understanding how search engines use web site data and using that knowledge to maximize public access to one’s site data. In reality, it doesn’t always work so cleanly. Ad revenue is based on page views and clicks, so practically speaking, SEO often becomes the process of maximizing revenue by driving traffic to a site by any means. In other words, all too often SEO experts act like marketers, not like catalogers. Because of these abuses SEO is commonly understood as the process of maximizing search ranking regardless of relevance, user intent, or ethics.

If you want to test my hypothesis here, simply send a tweet containing the letters SEO or the hashtag #seo and examine the quality of your new followers and messages. (Spoiler: you’ll get a lot of spam comments and spam followers, so don’t try this at home.)

Of course, SEO doesn’t have to be shady or immoral, but since there are profits to be made by shady and immoral SEO ‘experts’, the field has earned its bad reputation. Any web designer who wants people to find her content needs to perform fundamental SEO tasks, but there is little talk about the process, out of fear of being lumped in with the shady side of things. For me, the best argument for doing SEO is to keep the reason for it in the front of my mind: we need to bother with the mess of SEO because SEO is what connects our content to our audience. Because I care about both my audience and my content, I’m willing to do the unpleasant tasks necessary to ensure my audience can find my content.

The Connection Between Library Assessment and SEO

It seems clear to me that library assessment suffers from some of the same reputation problems that SEO has. However integral quality assessment is to library performance, the current fad status of assessment can make it difficult for librarians in their daily work to see any benefits behind the hype. Failures of past assessment fads to bring about promised changes (TQM, anyone?) make librarians wary of drinking the assessment Kool-Aid. I’m not talking here about grumpy or curmudgeonly librarians, but about hard-working professionals who have heard too many assessment pep talks to get excited about the next project.

This is why I find SEO to be a useful analogue for library assessment. Both SEO and library assessment are absolutely necessary to the success of our efforts, but both are also held in distaste by many of the rank and file who are required to engage in them. One key to getting beyond the initial negative reaction and the bad reputations of these activities is to focus on the reasons we have for engaging in them. For example, we do SEO because we want to connect users with our content. We do assessment because we want to make decisions based on data, not whim. We do assessment because we want to know if our efforts to serve our users are actually serving our users. In other words: because I care deeply about providing the highest quality service to our library patrons, I am willing to do the underlying work to make certain our efforts are having the desired effects.

Keeping the ultimate goal in mind is not only helpful for setting priorities, but it also helps us govern the potentially insidious natures of both SEO and assessment. By this I mean that if we keep in mind that SEO is about connecting our intended audience to our quality content, we are much less likely to be tempted to engage in unsavory marketing schemes, because those are not about either our intended audience or quality content. In the same vein, if we remain mindful that library assessment is about using data to improve how we serve our users, we are unlikely to take shortcuts such as teaching to a standardized test, choosing only easily quantifiable measures, or assessing only areas of strength. These shortcuts will serve only to undermine that goal in the long run.

Moving Forward

Returning to the conversation with my colleague that sparked this post: after I finished whining about my assessment fatigue, I explained why I felt it was necessary to add a section on web analytics to my SEO course unit. My students worked on a project where they analyzed a website for its SEO and made suggestions to improve access to its content. Without the data provided by web analytics, they had no way of knowing whether their suggestions made things better, made things worse, or had no effect. She replied that this is the precise reason that librarians need to collect assessment data. Without assessment data, we have no tools to tell whether our choices improve, worsen, or have no effect on library services. She was, of course, absolutely right. Without quality assessment data, we have no way of knowing whether our decisions about library service lead to increased access to relevant information and improved patron experience.
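As a practical aside, gathering this kind of before-and-after data usually starts with adding a tracking snippet to every page of a site. At the time of this writing, that most often meant Google’s standard Universal Analytics embed code, which looked roughly like the sketch below. The property ID UA-XXXXXXX-1 is a placeholder, not a real account; treat the block as an illustration of where the data comes from rather than copy-and-paste-ready code.

<script>
  // Asynchronous loader for Google's analytics.js library (circa 2013)
  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
  })(window,document,'script','//www.google-analytics.com/analytics.js','ga');

  // 'UA-XXXXXXX-1' is a placeholder property ID; a real site would use its own
  ga('create', 'UA-XXXXXXX-1', 'auto');
  // Record a pageview so changes in traffic can be compared over time
  ga('send', 'pageview');
</script>

Once data like this is flowing, pageview and referral reports are what let you say whether a change to titles, headings, or anchor text actually improved access, made it worse, or did nothing at all.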

Three SEO Rules that Apply to Library Assessment

In conclusion, and in continuation of the metaphor that library assessment is a lot like SEO, here are three rules from SEO that can speak to our library assessment efforts.

1. Know how search engines function. (Know how accreditation functions.)

If you want people who use search engines to successfully find your site, you have to know how search engines function and incorporate that knowledge into your site design. Similarly, if you are assessing library performance in order to demonstrate value to stakeholders such as accreditors or campus administrators outside the library, you need to know what those bodies value and design your assessment to measure for those values.

2. Know your content and your audience. (Know your library and your users.)

The most common error in SEO efforts is designing to generate maximum traffic to your site. If successful, this approach can generate a large quantity of traffic, but it is traffic that is collectively annoyed to find itself at your site. The proper approach is to know your content and design your SEO to attract genuinely interested visitors. A similar temptation applies to library assessment. It is possible to skew your analytics to show only amazing success in all areas, but this comes at the cost of gathering useful data about actual library services, and at the cost of being able to improve services based on that data. Assessment data is valuable because it tells us how the library serves our users. Data skewed to show only positive results is useless when it comes to helping the library achieve its mission.

3. Design for humans, not for machines. (Assess for library users.)

This rule sums up the law and the prophets for SEO: design for humans, not for machines. It means don’t let your desire for search ranking tempt you into designing an ugly page that your audience hates. Put the people first. When you have a choice between a design element that favors human readers and one that favors search engine crawlers, ALWAYS choose people. While SEO efforts have a real impact on search ranking, a quality website is more important for search ranking than quality SEO effort. Similarly, if you find yourself tempted to compromise service to patrons in order to focus on assessment, always err on the side of the patron. Librarian time and attention are finite resources, but if we consciously and consistently prioritize our patrons ahead of our assessment efforts, our assessment efforts will uncover more favorable data than if we put the data ahead of the people we are here to serve.


Comments on “Getting Beyond Library Assessment Fatigue: or how SEO taught me to stop whining and love the data”

  1. Interesting post! I can definitely see the linkages there. I see assessment as being one of those “threshold concepts”: once you internalize its value, you can’t imagine not thinking it important. But until you’ve internalized its value, you can’t imagine thinking it important. Getting over the hump is the hard part, and I think you’re right that keeping your focus on why you’re doing assessment is the key. When so many libraries do assessment for accreditation or accountability purposes only, it’s no wonder so many people are cynical or sick of assessment. When the focus is on assessing to improve student learning or to improve the library’s services, collections, etc., assessment is a no-brainer (except for those librarians who already insist they know what students need without ever having to do an assessment).

    For “know how search engines function” I’d say the proper analogy with assessment would be “know what students are supposed to be learning” (i.e. learning outcomes). Without knowing what they are supposed to have learned, you can’t design a good assessment. And for “know your content and your audience”, I’d say “know what you want to learn at the end of this.” Too often, librarians design assessments that don’t provide any meaningful or actionable data. This is a waste of time. Not surprisingly, I think you’re right on for #3.

    • Nicholas Schiller says:

      Thanks Meredith! Your point about knowing learning outcomes is spot on; that is a much better fit for that particular rule. Lately “organizational literacy,” to use an ugly buzzword, has become a soapbox of mine, and I squeezed it in here, making the very mistake I’m trying to warn others against. Assessment is hard.

      You made some comments at an ILAGO conference a while back, about how to entice tenured colleagues to engage with new assessment projects, that I’ve been thinking about ever since. That perspective balances the words of a wise, now retired, colleague from a local library, who said: “There’s the assessment you do because they tell you to do it, and then there’s the assessment you do because you want to get better.” My hope is that I can find a way to legitimately blend the two into one indistinguishable whole.

  2. dgrigar says:

    Love the essay, Nicholas. Especially your comment that SEO does not have to be “shady.” I seldom if ever admit to my Humanities colleagues that I love working with SEO, because it is seen as commercial work and so not scholarly. But analyzing the data that is part and parcel of SEO work is fascinating to me, and very intellectual if you really understand what is going on. And of course it connects with rhetorical strategies relating to audience, purpose, and message. :)

