I write a lot of “how-to” posts. This is fine and I actually think it’s fun…until I have a month like this past one, in which I worry that I have no business telling anyone “how-to” do anything. In which I have written the following comment multiple times:
//MF - this is a bad way to do this.
I decided to write this post because I think that maybe it is important to share the narrative of “how we struggle” alongside the “how-to”. I’ll describe the original problem I needed to tackle, my work towards a solution, and the remaining issues that exist. There are portions of the solution that work fine and I utilize those portions to illustrate the original requirements. But, as you will see, there is an assortment of unfinished details. At the end of the post, I’ll give you the link to my solution and you can judge for yourself.
The Original Problem
We want to implement a new gallery to highlight student work on our department’s main website. My primary area of responsibility is a different website – the digital library – which is the archival home for student work. Given my experience in working with these items and also with Views Isotope, the task of developing a “proof of concept” solution fell to me. I needed to implement some of the technically trickier things in our proposed solution for the gallery pages in order to prove that the features are feasible for the main website. I decided to use the digital library for this development because
- the digital library already has the appropriate infrastructure in place
- both the digital library and our main website are based in Drupal 7
- the “proof of concept” solution, once complete, could remain in the digital library as a browse feature for our historical collections of student work
The full requirements for the final gallery are outside of the scope of this post, but our problem area is modifying the Views Isotope module to do some advanced things.
First, I need to take the existing Views Isotope module and modify it to support hash history alongside multiple filters on the same page. Hash history with Isotope is implemented using the jQuery BBQ plugin, as demonstrated on the Isotope website. This essentially means that when a user clicks on a filter, a hash value is added to the URL and becomes an entry in the browser’s history. This allows one to click the back button to view the grid with the previous filters applied.
Our specific use case is the following: when viewing galleries of student work from our school, a user can filter the list of works by several filter options, such as degree or discipline (e.g., Undergraduate Architecture). These filter options are powered by term references in Drupal, as we saw in my earlier Isotope post. If the user clicks on an individual work to see details, clicking the back button should return them to the already-filtered list – they should not have to select the filters again.
Let’s take a look at how the end of the URL should progress. (The hash values below are illustrative; the exact format depends on how the filter options are serialized.) If we start with:

https://ksamedia.osu.edu/student-work-archives

Then, when we select Architecture as our discipline, we should see the URL change to something like:

https://ksamedia.osu.edu/student-work-archives#filter=.architecture

Everything after the # is referred to as the hash. If we then click on Undergraduate, the URL will change to:

https://ksamedia.osu.edu/student-work-archives#filter=.architecture.undergraduate

If we click our Back button, we should go back to:

https://ksamedia.osu.edu/student-work-archives#filter=.architecture
With each move, the Isotope animation should fire and the items on the screen should only be visible if they contain term references to the vocabulary selected.
Further, the selected filter options should be marked as currently selected. Items in one vocabulary which require an item in another vocabulary should be hidden and shown as appropriate. For example, if a user selects Architecture, they should not be able to select PhD from the degree program, because we do not offer a PhD degree in Architecture, so there is no student work to show. Here is an example of how the list of filters might look.
A good real-world example of the types of features we need can be seen at NPR’s Best Books of 2013 site. The selected filter is marked, options that are no longer available are removed and the animation fires when the filter changes. Further, when you click the Back button, you are taken back through your selections.
It turns out that the jQuery BBQ plugin works quite nicely with Isotope, again as demonstrated on the Isotope website. It also turns out that support for BBQ is included in Drupal core for the Overlay module. So theoretically this should all play nicely together.
The existing views-isotope.js file handles filtering as the menu items are clicked. The process is basically as follows (a code sketch follows the list):
- When the document is ready:
  - Identify the container of items we want to filter, as well as the class on the items, and set up Isotope with those options.
  - Pre-select All in each set of filters.
  - Identify all of the possible filter options on the page.
- If a filter item is clicked:
  - first, check to make sure it isn’t already selected; if so, bail out
  - then remove the “selected” class from the currently selected option in this set
  - add the “selected” class to the current item
  - set up an “options” variable to hold the value of the selected filter(s)
  - check for other items in other filter sets with the selected class and add them all to the selected filter value
  - call Isotope with the selected filter value(s)
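Translated into code, that flow looks roughly like the sketch below. This is a hedged approximation of the pattern, not the module’s actual source: the “.isotope-container” selector is an assumption, while “.isotope-element”, “.isotope-filters”, and the data-option-value attribute follow the markup shown later in this post.

```javascript
// Approximate flow of views-isotope.js filtering (sketch, not module source).
jQuery(document).ready(function ($) {
  // Identify the container and the class on the items; set up Isotope.
  var $container = $('.isotope-container'); // assumed selector
  $container.isotope({ itemSelector: '.isotope-element' });

  // Pre-select "All" in each set of filters.
  $('.isotope-filters a[data-option-value="*"]').addClass('selected');

  // Handle clicks on any of the filter options on the page.
  $('.isotope-filters a').click(function () {
    var $link = $(this);
    if ($link.hasClass('selected')) {
      return false; // already selected; nothing to do
    }
    // Move the "selected" class within this filter set only.
    $link.closest('.isotope-filters').find('.selected').removeClass('selected');
    $link.addClass('selected');

    // Combine the selected value from every filter set into one selector,
    // e.g. ".architecture" + ".undergraduate" -> ".architecture.undergraduate".
    var options = '';
    $('.isotope-filters a.selected').each(function () {
      var value = $(this).attr('data-option-value');
      if (value !== '*') {
        options += value;
      }
    });

    // Call Isotope with the selected filter value(s).
    $container.isotope({ filter: options || '*' });
    return false;
  });
});
```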
To add the filter to the URL we can use $.bbq.pushState, which will add “a ‘state’ into the browser history at the current position, setting location.hash and triggering any bound hashchange event callbacks (provided the new state is different than the previous state).”
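In the click handler, that call might look like the line below (a sketch; options is the combined selector string from the previous sketch, and the resulting fragment format is an assumption):

```javascript
// Serialize the selection into the hash, e.g. #filter=.architecture.
// BBQ only fires hashchange if the new state differs from the old one.
$.bbq.pushState({ filter: options });
```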
We then want to handle what’s in the hash when the browser back button is clicked, or when a user enters a URL with the hash value applied. So we add a handler for the hashchange event mentioned above. Instead of calling Isotope from the link click function, we call it from the hashchange handler. Now our algorithm looks more like this, with the added items in bold (a code sketch follows the list):
- **Include misc/jquery.ba-bbq.js for BBQ (I have to do this explicitly because I don’t use Overlay)**
- When the document is ready:
  - Identify the container of items we want to filter, as well as the class on the items, and set up Isotope with those options.
  - Pre-select All in each set of filters.
  - Identify all of the possible filter options on the page.
- If a filter item is clicked:
  - first, check to make sure it isn’t already selected; if so, bail out
  - then remove the “selected” class from the currently selected option in this set
  - add the “selected” class to the current item
  - set up an “options” variable to hold the value of the selected filter(s)
  - **push “options” to the URL and trigger hashchange (don’t call Isotope yet)**
- **If a hashchange event is detected:**
  - **create a new “hashOptions” object according to what’s in the hash, using the deparam.fragment function from jQuery BBQ**
  - **manipulate CSS classes such as “not-available” (e.g., if Architecture is selected, apply it to PhD) and “selected”, based on what’s in “hashOptions”**
  - **call Isotope with “hashOptions” as the parameter**
- **Trigger the hashchange event to pick up anything that’s in the URL when the page loads**
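A hedged sketch of that hashchange side, again assuming the selectors used earlier (deparam.fragment is the real jQuery BBQ call; the class bookkeeping here is simplified):

```javascript
// Sketch of the hashchange-driven flow (not the production code).
jQuery(document).ready(function ($) {
  var $container = $('.isotope-container'); // assumed selector

  $(window).bind('hashchange', function () {
    // Build "hashOptions" from whatever is in the hash,
    // e.g. "#filter=.architecture" -> { filter: ".architecture" }.
    var hashOptions = $.deparam.fragment();
    var filter = hashOptions.filter || '*';

    // Re-derive the "selected" classes from the hash instead of the click.
    $('.isotope-filters a').removeClass('selected');
    $('.isotope-filters a').each(function () {
      var value = $(this).attr('data-option-value');
      if (value === filter || (value !== '*' && filter.indexOf(value) !== -1)) {
        $(this).addClass('selected');
      }
    });
    // ("not-available" classes would be manipulated here as well.)

    // Call Isotope with hashOptions as the parameter.
    $container.isotope({ filter: filter });
  });

  // Trigger hashchange to pick up anything in the URL when the page loads.
  $(window).trigger('hashchange');
});
```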
I also updated any available pager links so that they include the current filters as well as pointing to the appropriate page. This is done by appending the hash value to the href attribute of each link with the class “pager”.
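A sketch of that pager fix (assuming, as above, that the links carry a “pager” class):

```javascript
// Re-append the current hash to every pager link whenever it changes,
// so moving between pages preserves the active filters.
jQuery(window).bind('hashchange', function () {
  var hash = window.location.hash; // e.g. "#filter=.architecture"
  jQuery('a.pager').each(function () {
    var base = jQuery(this).attr('href').split('#')[0];
    jQuery(this).attr('href', base + hash);
  });
});
```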
And it works. Sort of…
The Unfinished Details
Part of the solution described above only works in Chrome and – believe it or not – Internet Explorer. In all browsers, clicking the back button works just as described above, as long as one is still on the page with the list of works. However, when linking directly to a page with the filters included (as we are doing with the pager) or hitting back from a page that does not have the hash present (say, after visiting an individual item), it does not work in Firefox or Safari. I think this may have to do with the deparam.fragment function, because that appears to be where it gets stuck, but so far I can’t track it down. I could read window.location.hash directly, but I think that’s a security risk (what’s to stop someone from injecting something malicious after the hash?).
Also, in order to make sure the classes are applied correctly, it feels like I do a lot of “remove it from everywhere, then add it back”. For example, if I select Architecture, PhD is then hidden from the degree list by assigning the class “not-available”. When a user clicks on City and Regional Planning or All, I need that PhD to appear again. Unfortunately, the All filter is handled differently – it is only present in the hash if no other options on the page are selected. So, on every hashchange I remove “not-available” from all filter lists and then reassign it based on what’s in the hash (see the sketch below). It seems like it would be more efficient to change just the one I need, but I can’t figure it out. Or maybe I should alter the way All is handled completely – I don’t know.
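For illustration, the brute-force pass inside the hashchange handler looks something like this; the “.phd” option value and the Architecture-to-PhD rule are hypothetical stand-ins for however the real dependencies are encoded:

```javascript
// Clear "not-available" everywhere, then reapply it based on the hash.
$('.isotope-filters a').removeClass('not-available');
if (hashOptions.filter && hashOptions.filter.indexOf('.architecture') !== -1) {
  // Hypothetical rule: no PhD program in Architecture, so hide that option.
  $('.isotope-filters a[data-option-value=".phd"]').addClass('not-available');
}
```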
It is hard to have confidence in a solution when building while learning. When I run into a snag, I have to consider whether or not the problem is the entire approach, as opposed to a syntax error or a limitation of the library. Frequently, I find an answer to an issue I’m having, but have to look up something from the answer in order to understand it. I worry that the code contains rookie mistakes – or even intermediate mistakes – which will bite us later, but it is difficult to do an exhaustive analysis of all the available resources. Coding elegantly is an art which requires more than a basic understanding of how the pieces play together.
Inelegant code, however, can still help make progress. To see the progress I have made, you can visit https://ksamedia.osu.edu/student-work-archives and play with the filters. This solution is good because it proves we can develop our features using Isotope, BBQ and Views Isotope. The trick now is figuring out how to paint and put locks and doors on our newly built house, or possibly move a wall or two.
Conversations about gender relations, bias, and appropriate behavior have bubbled up all over the technology sector recently. We have seen conferences adopt codes of conduct that strive to create welcoming atmospheres. We have also seen cases of bias and harassment, cases that may once have been tolerated or ignored, now being identified and condemned. These conversations, like gender itself, are not simple or binary but being able to listen respectfully and talk honestly about uncomfortable topics offers hope that positive change is possible.
On October 28th, Sarah Houghton, the director of the San Rafael Public Library, moderated a panel on gender in library technology at the Internet Librarian conference. In today’s post I’d like to share my small contributions to the panel discussion that day, and also how my understanding of the issues changed after the discussion there. It is my hope that more conversations—more talking and more listening—about gender issues in library technology will be sparked by this start.
Part I: Internet Librarian Panel on Gender
Our panel’s intent was to invite librarians into a public conversation about gender issues. In the Internet Librarian program our invitation read:
Join us for a lively panel and audience discussion about the challenges of gender differences in technology librarianship. The topics of fairness and bias with both genders have appeared in articles, blogs, etc and this panel of women and men who work in libraries and gender studies briefly share personal experiences, then engage the audience about experiences and how best to increase understanding between the genders specifically in the area of technology work in librarianship. 1
Panelists: Sarah Houghton, Ryan Claringbole, Emily Clasper, Kate Kosturski, Lisa Rabey, John Bultena, Tatum Lindsay, and Nicholas Schiller
My invitation to participate on the panel stemmed from blog posts I wrote about how online conversations about gender issues can go off the rails and become disasters. I used my allotted time to share some simple suggestions I developed while observing these conversations. Coming from my personal (white cis straight male) perspective, I paid attention to things that I and my male colleagues do and say that result in unintended offense, silencing, and anger in our female colleagues. By reverse engineering these conversational disasters, I attempted to learn from unfortunate mistakes and build better conversations. Assuming honest good intentions, following these suggestions can help us avoid contention and build more empathy and trust.
- Listen generously. Context and perspective are vital to these discussions. If we’re actively cultivating diverse perspectives then we are inviting ideas that conflict with our assumptions. It’s more effective to assume these ideas come from unfamiliar but valid contexts than to assume they are automatically unreasonable. By deferring judgement until after new ideas have been assimilated and understood we can avoid silencing voices that we need to hear.
- Defensive responses can be more harmful than offensive responses. No one likes to feel called on the carpet, but the instinctive responses we can give when we feel blamed or accused can be worse than simply giving offense. Defensive denials can lead to others feeling silenced, which is much more damaging and divisive than simple disagreement. It can be the difference between communicating “you and I disagree on this matter” and communicating “you are wrong and don’t get a voice in this conversation.” That kind of silencing and exclusion can be worse than simply giving offense.
- It is okay to disagree or to be wrong. Conversations about gender are full of fear. People are afraid to speak up for fear of reprisal. People are afraid to say the wrong thing and be revealed as a secret misogynist. People are afraid. The good news is that conversations where all parties feel welcome, respected, and listened to can be healing. Because context and perspective matter so much in how we address issues, once we accept the contexts and perspectives of others, we are more likely to receive acceptance of our own perspectives and contexts. Given an environment of mutual respect and inclusion, we don’t need to be afraid of holding unpopular views. These are complex issues and once trust is established, complex positions are acceptable.
This is what I presented at the panel session and I still stand behind these suggestions. They can be useful tools for building better conversations between people with good intentions. Specifically, they can help men in our field avoid all-too-common barriers to productive conversation.
That day I listened and learned a lot from the audience and from my fellow panelists. I shifted my priorities. I still think cultivating better conversations is an important goal. I still want to learn how to be a better listener and colleague. I think these are skills that don’t just happen, but need to be intentionally cultivated. That said, I came into the panel believing that the most important gender-related issue in library technology was finding ways for well-intentioned colleagues to communicate effectively about an uncomfortable problem. Listening to my colleagues tell their stories, I learned that there are more direct and pressing gender issues in libraries.
Part II: After the Panel
As I listened to my fellow panelists tell their stories and then as I listened to people in the audience share their experiences, no one else seemed particularly concerned about well-intentioned people having misunderstandings or about breakdowns in communication. Instead, they related a series of harrowing personal experiences in which men (and women, but mostly men) were directly harassing, intentionally abusive, and strategically cruel in ways that have a very large impact on the daily work, career paths, and quality of life of my female colleagues. I had assumed that, since this kind of harassment clearly violates standard HR policies, the problem was adequately addressed by existing administrative structures. That assumption is incorrect.
It is easy to ignore what we don’t see: I don’t see harassment taking place in libraries, and I don’t often hear it discussed, so it has been easy to underestimate its prevalence and the impact it has on many of my colleagues. Listening to librarians tell their stories at the panel changed that.
Then, after the conference one evening, a friend of mine was harassed on the street and I had another assumption challenged. It happened quickly, but a stranger on the street harassed my friend while I watched in stunned passivity. 2 I arrived at the conference feeling positive about my grasp of the issues and also feeling confident about my role as an ally. I left feeling shaken and doubting both my thoughts and my actions.
In response to the panel and its aftermath, I’ve composed three more points to reflect what I learned. These aren’t suggestions like the ones I brought to the panel; instead, they are realizations or statements. I’m obviously not an expert on the topic and I’m not speaking from a seat of authority; I’m relating stories and experiences told by others, and they tell them much better than I do. In the tradition of geeks and hackers, now that I have learned something new, I’m sharing it with the community in hopes that my experience moves the project forward. It is my hope that better informed and more experienced voices will take this conversation farther than I am able to. These three realizations may be obvious to some, but because they were not obvious to me, it seems useful to articulate them clearly.
- Intentional and targeted harassment of women is a significant problem in the library technology field. While subtle microaggressions, problem conversations, and colleagues who deny that significant gender issues exist in libraries are problematic, these issues are overshadowed by direct and intentional harassing behavior targeting gender identity or sex. The clear message I heard at the panel was that workplace harassment is a very real and very current threat to women working in library technology fields.
- This harassment is not visible to those not targeted by it. It is easy to ignore what we do not see. Responses to the panel included many library technology women sharing their experiences and commenting that it was good to hear others’ stories. Even though the experience of workplace harassment was common, those who spoke of it reported feelings of isolation. While legislation and human resources policies clearly state harassment is unacceptable and unlawful, it still happens, and when it happens the target can be isolated by the experience. Those of us who participate in library conferences, journals, and online communities can help pierce this isolation by cultivating opportunities to talk about these issues openly and publicly. By publicly talking about gender issues, we can thwart isolation and make the problems more visible to those who are not direct targets of harassment.
- This is a cultural problem, not only an individual problem. While no one point on the gender spectrum has a monopoly on either perpetrating or being the target of workplace harassment, the predominant narrative in our panel discussion was men harassing women. Legally speaking, these need to be treated as individual acts, but as a profession, we can address the cultural aspects of the issue. Something in our library technology culture is fostering an environment where women are systematically exposed to bad behavior from men.
In the field of Library Technology, we spend a lot of our time and brain power intentionally designing user experiences and assessing how users interact with our designs. Because harassment of some of our own is pervasive and cultural, I suggest we turn the same attention and intentionality to designing a workplace culture that is responsive to the needs of all of us who work here. I look forward to reading conference presentations, journal articles, and online discussions where these problems are publicly identified and directly addressed rather than occurring in isolation or being ignored.
I love Views Isotope, a Drupal 7 module that enabled me to create a dynamic image gallery for our school’s Year in Review. This module (paired with a few others) is instrumental in building our new digital library.
In this blog post, I will walk you through how we created the Year in Review page, and how we plan to extrapolate the design to our collection views in the Knowlton Digital Library. This post assumes you have some basic knowledge of Drupal, including an understanding of content types, taxonomy terms and how to install a module.
Year in Review Project
Our Year in Review project began over the summer, when our communications team expressed an interest in displaying the news stories from throughout the school year in an online, interactive display. The designer on our team showed me several examples of card-like interfaces, emphasizing the importance of ease and clean graphics. After some digging, I found Isotope, which appeared to be the exact solution we needed. Isotope, according to its website, assists in creating “intelligent, dynamic layouts that can’t be achieved with CSS alone.” This jQuery library provides for the display of items in a masonry or grid-type layout, augmented by filters and sorting options that move the items around the page.
At first, I was unsure we could make this library work with Drupal, the content management system we employ for our main web site and our digital library. Fortunately I soon learned – as with many things in Drupal – there’s a module for that. The Views Isotope module provides just the functionality we needed, with some tweaking, of course.
We set out to display a grid of images, each representing a news story from the year. We wanted to allow users to filter those news stories based on each of the sections in our school: Architecture, Landscape Architecture and City and Regional Planning. News stories might be relevant to one, two or all three disciplines. The user can see the news story title by hovering over the image, and read more about the new story by clicking on the corresponding item in the grid.
Views Isotope Basics
Views Isotope is installed in the same way as other Drupal modules. There is an example in the module and there are also videos linked from the main module page to help you implement this in Views. (I found this video particularly helpful.)
You must have the Views module and its dependencies installed to use Views Isotope.
You also need to install the Isotope jQuery library. It is important to note that Isotope is only free for non-commercial projects. To install the library, download the package from the Isotope GitHub repository. Unzip the package and copy the whole directory into your libraries directory. Within your Drupal installation, this should be in the /sites/all/libraries folder. Once the module and the library are both installed, you’re ready to start.
If you have used Drupal, you have likely used Views. It is a very common way to query the underlying database in order to display content. The Views Isotope module provides additional View types: Isotope Grid, Isotope Filter Block and Isotope Sort Block. These three view types combine to provide one display. In my case, I have not yet implemented the Sort Block, so I won’t discuss it in detail here.
To build a new view, go to Structure > Views > Add a new view. We’ll talk about the steps in more detail in our specific example below. However, there are a few important tenets of using Views Isotope, regardless of your setup:
- There is a grid. The View type Isotope Grid powers the main display.
- The field on which we want to filter is included in the query that builds the grid, but a CSS class is applied which hides the filters from the grid display and shows them only as filters.
- The Isotope Filter Block drives the filter display. Again, a CSS class is applied to the fields in the query to assign the appropriate display and functionality, instead of using default classes provided by Views.
- Frequently in Drupal, we are filtering on taxonomy terms. It is important that when we display these items we do not link to the taxonomy term page, so that a click on a term filters the results instead of taking the user away from the page.
With those basic tenets in mind, let’s look at the specific process of building the Year in Review.
Building the Year in Review
Armed with the Views Isotope functionality, I started with our existing Digital Library Drupal 7 instance and one content type, Item. Items are our primary content type and contain many, many fields, but here are the important ones for the Year in Review:
- Title: text field containing the headline of the article
- Description: text field containing the shortened article body
- File: File field containing an image from the article
- Item Class: A reference to a taxonomy term indicating if the item is from the school archives
- Discipline: Another term reference field which ties the article to one or more of our disciplines: Architecture, Landscape Architecture or City and Regional Planning
- Showcase: Boolean field which flags the article for inclusion in the Year in Review
The last field was essential so that the communications team liaison could curate the page. There are more news articles in our school archives than we necessarily want to show in the Year in Review, and the showcase flag solves this problem.
In building our Views, we first wanted to pull all of the Items which have the following characteristics:
- Item Class: School Archives
- Showcase: True
So, we build a new View. While logged in as administrator, we click on Structure, Views then Add a New View. We want to show Content of type Item, and display an Isotope Grid of fields. We do not want to use a pager. In this demo, I’m going to build a Page View, but a Block works as well (as we will see later). So my settings appear as follows:
Click on Continue & edit. For the Year in Review we next needed to add our filters – for Item Class and Showcase. Depending on your implementation, you may not need to filter the results, but likely you will want to narrow the results slightly. Next to Filter Criteria, click on Add.
If you click Update Preview at the bottom of the View edit screen, you’ll see that much of the formatting is already done with just those steps.
Note that the formatting in the image above is helped along by some CSS. To style the grid elements, the Views Isotope module contains its own CSS in the module folder ([drupal_install]/sites/all/modules/views_isotope). You can move forward with this default display if it works for your site. Or, you can override it in the site’s theme files, which is what I’ve done above by restyling the “isotope-element” class in my theme’s CSS file.
I use the Rendered File Formatter and select the Grid View Mode, which applies an Image Style to the file, resizing it to 180 x 140. Clicking Update Preview again shows that the image has been added to each item.
This is closer, but in our specific example, we want to hide the title until the user hovers over the item. So, we need to add some CSS to the title field.
In my CSS file, the title field’s div starts with an opacity of 0 – fully transparent, so the image shows through – plus a hover style that raises the opacity so that the title mostly covers the image on mouseover.
Now, if we update preview, we should see the changes.
The last thing we need to do is add the Discipline field for each item so that we can filter.
There are two very important things here. First, we want to make sure that the field is not formatted as a link to the term, so we select Plain text as the Formatter.
Second, we need to apply a CSS class here as well, so that the Discipline fields show in filters, not in the grid. To do that, check the Customize field HTML and select the DIV element. Then, select Create a class and enter “isotope-filter”. Also, uncheck “Apply default classes.” Click Apply.
Using Firebug, I can now look at the generated HTML from this View and see that the isotope-element <div> contains all the fields for each item, with the isotope-filter class keeping the Discipline value hidden from the grid display.
```html
<div class="isotope-element landscape-architecture" data-category="landscape-architecture">
  <div class="views-field views-field-title"> (collapsed for brevity) </div>
  <div class="views-field views-field-field-file"> (collapsed for brevity) </div>
  <div>
    <div class="isotope-filter">Landscape Architecture</div>
  </div>
</div>
```
You might also notice that the data-category for this element is assigned as landscape-architecture, which is our Discipline term for this item. This data-category will drive the filters.
So, let’s save our View by clicking Save at the top and move on to create our filter block. Create a new view, but this time create a block which displays taxonomy terms of type Discipline. Then, click on Continue & Edit.
The first thing we want to do is adjust the view so that the default row wrappers are not applied. Note: this is the part I ALWAYS forget, and then when my filters don’t work it takes me forever to track it down.
Click on Settings next to Fields.
Next, we do not want the fields to be links to term pages, because a user click should filter the results, not link back to the term. So, click on the term name to edit that field. Uncheck the box next to “Link this field to its taxonomy term page”. Click on Apply.
Save the view.
The last thing is to make the block appear on the page with the grid. In practice, Drupal administrators would use Panels or Context to accomplish this (we use Context), but it can also be done using the Blocks menu.
So, go to Structure, then click on Blocks. Find our Isotope-Filter Demo block. Because it’s a View, the title will begin with “View:”
Click Configure. Set block settings so that the Filter appears only on the appropriate Grid page, in the region which is appropriate for your theme. Click save.
Now, let’s visit our /isotope-grid-demo page. We should see both the grid and the filter list.
It’s worth noting that here, too, I have customized the CSS. If we look at the rendered HTML using Firebug, we can see that the filter list is in a div with class “isotope-options” and the list itself has a class of “isotope-filters”.
```html
<div class="isotope-options">
  <ul class="isotope-filters option-set clearfix" data-option-key="filter">
    <li><a class="" data-option-value="*" href="#filter">All</a></li>
    <li><a class="filterbutton" href="#filter" data-option-value=".architecture">Architecture</a></li>
    <li><a class="filterbutton selected" href="#filter" data-option-value=".city-and-regional-planning">City and Regional Planning</a></li>
    <li><a class="filterbutton" href="#filter" data-option-value=".landscape-architecture">Landscape Architecture</a></li>
  </ul>
</ div>
```
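Those data-option-value attributes are what connect the filter block back to the grid. Here is a minimal sketch of that connection; the “.isotope-container” selector is an assumption, and the module’s actual wiring differs in detail:

```javascript
// Clicking a filter link hands its data-option-value (which mirrors the
// data-category/class on the grid items) to Isotope as a class selector,
// so only matching .isotope-element items remain visible.
jQuery('.isotope-filters a').click(function () {
  var selector = jQuery(this).attr('data-option-value'); // e.g. ".landscape-architecture"
  jQuery('.isotope-container').isotope({ filter: selector });
  return false;
});
```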
I have overridden the CSS for these classes to remove the background from the filters and change the list-style-type to none, but you can obviously make whatever changes you want. When I click on one of the filters, it shows me only the news stories for that Discipline. Here, I’ve clicked on City and Regional Planning.
So, how do we plan to use this in our digital library going forward? So far, we have mostly used the grid without the filters, such as in one of our Work pages. This shows the metadata related to a given work, along with all the items tied to that work. Eventually, each of the taxonomy terms in the metadata will be a link. The following grids are all created with blocks instead of pages, so that I can use Context to override the default term or node display.
However, in our recently implemented Collection view, we allow users to filter the items based on their type: image, video or document. Here, you see an example of one of our lecture collections, with the videos and the poster in the same grid, until the user filters for one or the other.
There are two obstacles to using this feature in a more widespread manner throughout the site. First, I have only recently figured out how to implement multiple filter options. For example, we might want to filter our news stories by Discipline and Semester. To do this, we rewrite the sorting fields in our Grid display so that they all display in one field. Then, we create two Filter blocks, one for each set of terms. Implementing this across the site, so that users can sort by, say, item type and vocabulary term, will make it more useful to us.
Second, we have several Views that might return upwards of 500 items. Loading all of the image files for this result set is costly, especially when you add in the additional overhead of a full image loading in the background for a Colorbox overlay, plus Drupal performance issues. The filters will not work across pages, so if I use a pager, I will only filter the items on the page I’m viewing. I believe this can be fixed somehow using Infinite Scroll (as described in several ways here), but I have not tried it yet.
With these two advanced techniques, there are many options for improving the digital library interface. I am especially interested in how to use multiple filters on a set of search results returned from a Solr index.
What other extensions might be useful? Let us know what you think in the comments.
- Views Isotope: https://drupal.org/project/views_isotope
- Isotope jQuery library with examples: http://isotope.metafizzy.co/index.html
- If you want to go one step further, we have also implemented Colorbox, so when a user clicks on a tile, they get a popup overlay gallery, instead of going straight to the node. More information on Colorbox can be found at the Colorbox site (http://www.jacklmoore.com/colorbox/) and the Colorbox Drupal module (https://drupal.org/project/colorbox).
Are you interested in writing a guest post or becoming a regular contributor to ACRL TechConnect blog? Or, do you blog about library technology?
Three of the ACRL TechConnect blog authors, Bohyun, Eric, and Margaret, will be at LITA Forum this year. (Two of us are on the LITA Forum planning committee, and yes, we are very active at LITA as well as in ACRL!)
So, we decided to have a small meet-up!
Come chat with us about everyday challenges and solutions in library technology over drinks.
This is an informal meet-up, and all are welcome!
TechConnect Meet-up at #LITAforum
- Date: Saturday Nov 9
- Time: 9 PM
- Location: Hyatt Regency Louisville – Hotel bar
311 South Fourth Street, Louisville, KY 40202
- Twitter hashtag: #TCmeetup
- Contact: @bohyunkim, @margaret_heller, @phette23
This is the story of Library Quest (iPhone, Android), the App That (Almost) Wasn’t. It’s a (somewhat) cautionary tale of one library’s effort to leverage gamification and mobile devices to create a new and different way of orienting students to library services and collections. Many libraries are interested in the possibilities offered by both games and mobile devices, and they should be. But developing for mobile platforms is new and largely uncharted territory for libraries, and while there have been some encouraging developments in creating games in library instruction, other avenues of game creation are mostly unexplored. This is what we learned developing our first mobile app and our first large-scale game…at the same time!
Development of the Concept: Questing for Knowledge
The saga of Library Quest began in February of 2012, when I came on board at Grand Valley State University Libraries as Digital Initiatives Librarian. I had been reading some books on gamification and was interested in finding a problem that the concept might solve. I found two. First, we were about to open a new $65 million library building, and we needed ways to take advantage of the upsurge of interest we knew this would create. How could we get people curious about the building to learn more about our services, and to strengthen that into a connection with us? Second, GVSU Libraries, like many other libraries, was struggling with service awareness issues. Comments by our users in the service dimension of our latest implementation of LibQUAL+ indicated that many patrons missed out on using services like interlibrary loan because they were unaware that they existed. Students often are not interested in engaging with the library until they need something specific from us, and when that need is filled, their interest declines sharply. How could we orient students to library services and create more awareness of what we could do for them?
We designed a very simple game to address both problems. It would be a quest- or task-based game, in which students actively engaged with our services and spaces, earning points and rewards as they did so. The game app would offer tasks to students and verify their progress through multistep tasks by asking users to input alphanumeric codes or scan QR codes (which we ended up putting on decals that could be stuck to any flat surface). Because this was an active game, it seemed natural to target it at mobile devices, so that people could play as they explored. The mobile marketplace is more or less evenly split between iOS and Android devices, so we knew we wanted the game to be available on both platforms. This became the core concept for Library Quest. Library administration gave the idea their blessing and approval to use our technology development budget, around $12,000, to develop the game. Back up and read that sentence over if you need to: yes, that entire budget was for one mobile app. The expense of building apps is the first thing to wrap your mind around if you want to create one. While people often think of apps as somehow being smaller and simpler than desktop programs, the reality is very different.
We contracted with Yeti CGI, an outside game development firm, to do the coding. This was essential: app development is complicated, and we didn’t have the necessary skills or experience in-house. If we hadn’t used an outside developer, the game app would never have gotten off the ground. We had never worked with a game development company before, and Yeti had never worked with a library, although they had ties to higher education and were enthusiastic about the project. Working with an outside developer always carries certain risks and advantages, and communication is always an issue.
One thing we could have done more of at this stage was spend time working on the game concept and doing paper prototyping of that concept. In her book Game Design Workshop, Tracy Fullerton stresses two key components in designing a good game: defining the experience you want the player to have, and doing paper prototyping. Defining the game experience from the player’s perspective forces the game designer to ask questions about how the game will play that might not otherwise occur to them. Will this be a group or a solo experience? Where will the fun come from? How will the player negotiate the rules structure of the game? What choices will they have at what points? As author Jane McGonigal notes, educational games often fail because they do not put the fun first, which is another way of saying that they haven’t fully thought through the player’s experience. Everything in the game (rules, rewards, format, etc.) should be shaped by the experience the designer wants to give the player. Early concepts can and should be tested with paper prototyping. It’s a lot easier to change the rules structure of a game made with paper, scissors, and glue than one made with code and developers (and a lot less expensive). In retrospect, we could have spent more time talking about experience and doing paper prototypes before we had Yeti start writing code. While our game is pretty solid, we may have missed opportunities to be more innovative or provide a stronger gameplay experience.
Concept to conception: Wireframing and Usability Testing
The first few months of development were spent creating, approving, and testing paper wireframes of the interface and art concepts. While we perhaps should have done more concept prototyping, we did do plenty of usability testing of the game interface as it developed, starting with the paper prototypes and continuing into the initial beta version of the game. That is certainly something I would recommend that anyone else do as well. Like a website or anything else that people are expected to use, a mobile app interface needs to be intuitive and conform to user expectations about how it should operate, and just as in website design, the only way to create an interface that does so is to engage in cycles of iterative testing with actual users. For games, this is particularly important because they are supposed to be fun, and nothing is less fun than struggling with poor interface design.
A side note related to usability: one of the things that surfaced in doing prototype testing of the app was that giving players tasks involving library resources, and watching them try to accomplish those tasks, turns out to be an excellent way of testing space and service design as well. There were times when students were struggling not with the interface, but with the library! Insufficient signage, space layout that was not clear, and assumed knowledge of (or access to) information the students had no way of knowing all became apparent in watching students try to do tasks that should have been simple. It serves as a reminder that usability concepts apply to the physical world as much as they do to the web, and that we can and should test services in the real world the same way we test them in virtual spaces.
Development: Where the Rubber Meets the Phone
Involving an outside developer made the game possible, but it also meant that we had to temper our expectations about the scale of app development. This became much more apparent once we’d gotten past paper prototyping and began testing beta versions of the game. There were several ideas that we developed early on, such as notifications of new quests and an elaborate title system, that had to be put aside as the game evolved because of cost, and because developing other features that were more central to gameplay turned out to be more difficult than anticipated. For example, one of the core concepts of the game was that students would be able to scan QR codes to verify that they had visited specific locations. Because mobile phone users do not typically have QR code reader software installed, Yeti built QR code reader functionality into the game app. This made scanning the code a more seamless part of gameplay, but getting the scanner software to work well on both the Android and iOS versions proved a major challenge (and one that’s still vexing us somewhat at launch). Tweaks to improve stability and performance on iOS threw off the Android version, and vice versa. Despite the existence of programs like PhoneGap and Adobe AIR, which will supposedly produce versions of the software that run on both platforms, there can still be a significant amount of work involved in tuning the different versions to get them to work well.
Developing apps that work on the Android platform is particularly difficult and expensive. While Apple has been accused of having a fetish for control, their proprietary approach to their mobile operating system produces a development environment that is, compared to Android, easy to navigate. Android, by contrast, is usually heavily modified by specific carriers and manufacturers to run on their hardware. This means that if you want to ensure that your app runs well on an Android device, the app must be tested and debugged on that specific combination of Android version and hardware. Multiply the 12 major versions of Android still commonly used by the hundreds of devices that run it, and you begin to have an idea of the scope of the problem facing a developer. While Android accounts for only 50% of our potential player base, it easily took up 80% of the time we spent with Yeti debugging, and the result is an app that we are sure works on only a small selection of the Android devices out there. By contrast, it works perfectly well on all but the very oldest versions of iOS.
Publishing a Mobile App: (Almost) Failure to Launch
When work began on Library Quest, our campus had no formal approval process for mobile apps, and the campus store accounts were controlled by our student mobile app development lab. In the year and a half we spent building the game, control of the campus store accounts was moved to our campus IT department, and formal guidelines and a process for publishing mobile apps started to materialize. All of this made perfect sense, as more and more campus entities were starting to develop mobile apps, and campus was rightly concerned about branding and quality issues, as well as ensuring that any published apps furthered the university’s teaching and research mission. However, this meant we had to navigate an approval process as it materialized around us very late in development, with requests coming in for changes to the game’s appearance to bring it into line with new branding standards when the game was almost complete.
It was here the game almost foundered as it was being launched. During some of the discussions, it surfaced that one of the commercial apps being used by the university for campus orientation bore some superficial resemblance to Library Quest in terms of functionality, and the concern was raised that our app might be viewed as a copy. University counsel got involved. For a while, it seemed the app might be scrapped entirely, before it ever got out to the students! If there had been a clear approval process when we began the app, we could have dealt with this at the outset, when the game was still in the conceptual phase. We could have either modified the concept, or addressed the concern before any development was done. Fortunately, it was decided that the risk was minimal and we were allowed to proceed.
Post-Launch: Game On!
As I write this, it’s over a year since Library Quest was conceived and it has just been released “into the wild” on the Apple and Google Play stores. We’ve yet to begin the major advertising push for the game, but it already has over 50 registered users. While we’ve learned a great deal, some of the most important questions about this project are still up in the air. Can we orient students using a game? Will they learn anything? How will they react to an attempt to engage with them on mobile devices? There are not really a lot of established ways to measure success for this kind of project, since very few libraries have done anything remotely like it. We projected early on in development that we wanted to see at least 300 registered users, and that we wanted at least 50 of them to earn the maximum number of points the game offered. Other metrics for success are “squishier,” and involve doing surveys and focus groups once the game wraps to see what reactions students had to the game. If we aren’t satisfied with performance at the end of the year, either because we didn’t have enough users or because the response was not positive, then we will look for ways to repurpose the app, perhaps as part of classroom teaching in our information literacy program, or as part of more focused and smaller scale campus orientation activities.
Even if it’s wildly successful, the game will eventually need to wind down, at least temporarily. While the effort-reward cycle that games create can stimulate engagement, keeping that cycle going requires effort and resources. In the case of Library Quest, this would include the money we’ve spent on our prizes and the effort and time we spend developing quests and promoting the game. If Library Quest endures, we see it having a cyclical life that’s dependent on the academic year. We would start it anew each fall, promoting it to incoming freshmen, and then wrap it up near the end of our winter semester, using the summers to assess and re-engineer quests and tweak the app.
Lessons Learned: How to Avoid Being a Cautionary Tale
- Check to see if your campus has an approval process and a set of guidelines for publishing mobile apps. If it doesn’t, do not proceed until they exist. Lack of such a process until very late in development almost killed our game. Volunteer to help draft these guidelines and help create the process, if you need to. There should be some identified campus app experts for you to talk to before you begin work, so you can ask about apps already in use and about any licensing agreements campus may have. There should be a mechanism to get your concept approved at the outset, as well as the finished product.
- Do not underestimate the power of paper. Define your game’s concept early, and test it intensively with paper prototypes and actual users. Think about the experience you want the players to have, as well as what you want to teach them. That’s a long way of saying “think about how to make it fun.” Do all of this before you touch a line of code.
- Keep testing throughout development. Test your wireframes, test your beta version, test, test, test with actual players. And pay attention to anything your testing might be telling you about things outside the game, especially if the game interfaces with the physical world at all.
- Be aware that mobile app development is hard, complex, and expensive. Apps seem smaller because they’re on small devices, but in terms of complexity, they are anything but. Developing cross-platform will be difficult (but probably necessary), and supporting Android will be an ongoing challenge. Wherever possible, keep it simple. Define your core functionality (what does the app *have* to do to accomplish its mission?) and classify everything else you’d like it to do as potentially droppable features.
- Consider your game’s life-cycle at the outset. How long do you need it to run to do what you want it to do? How much effort and money will you need to spend to keep it going for that long? When will it wind down?
Fullerton, Tracy. Game Design Workshop. 4th ed. Amsterdam: Morgan Kaufmann, 2008.
McGonigal, Jane. Reality Is Broken: Why Games Make Us Better and How They Can Change the World. New York: Penguin Press, 2011.
About our Guest Author:
Kyle Felker is the Digital Initiatives Librarian at Grand Valley State University Libraries, where he has worked since February of 2012. He is also a longtime gamer. He can be reached at firstname.lastname@example.org, or on twitter @gwydion9.
Once you have built a local development environment using an AMP stack, the next logical question is, “now what?” And the answer is, truly, whatever you want. As an example, in this blog post we will walk through installing Drupal and WordPress on your local machine so that you can develop and test in a low-risk environment. However, you can substitute other content management systems or development platforms and the goal is the same: we want to mimic our web server environment on our local machine.
The only prerequisite for these recipes is a working AMP stack (see our tutorials for Mac and Windows), and administrative rights to your computer. The two sets of steps are very similar. We need to download and unpack the files to our web root, create a database and point to it from a configuration file, and run an install script from the browser.
There are tutorials around the web on how to do both things, but I think there are two likely gotchas for newbies:
- There’s no “installer” that installs the platform to your system. You unzip and copy the files to the correct place. The “install” script is really a “setup” script, and is run after you can access the site through a browser.
- Setting up and linking the database must be done correctly, or the site won’t work.
So, we’ll step through each process with some extra explanation.
Drupal is an open source content management platform. Many libraries use it for their websites because it is free and allows for granular user permissions. As the site administrator, I can provide access for staff to edit certain pages (e.g., the reference desk schedule) but not others (say, colleagues’ user profiles). In our digital library, my curatorial users can edit content, authorized student users can see content just for our students, and anonymous users can see public collections. The platform has its downsides, but there is a large and active user community, and a problem’s solution is usually only a Google search (or a few) away.
The Drupal installation guide is a little more technical, so feel free to head there if you’re comfortable on the command line.
First, download the Drupal files from the Drupal core page. The top set of downloads (green background) are stable versions of the platform. The lower set are versions still in development. For our purposes we want the green download, and because I am on my Mac, I will download the tar.gz file for the most recent version (at the time of this writing, 7.23). If you are on a Windows machine, and have 7zip installed, you can also use the .tar.gz file. If you do not have 7zip installed, use the .zip file.
Now, we need to create the database we’re going to use for Drupal. In building the AMP stack, we also installed phpMyAdmin, and we’ll use it now. Open a browser and navigate to the phpMyAdmin installation (if you followed the earlier tutorials, this will be http://localhost/~yourusername/phpmyadmin on Mac and http://localhost/phpmyadmin on Windows). Log in with the root user you created when you installed MySQL.
The Drupal installation instructions suggest creating a user first, and through that process, creating the database we will use. So, start by clicking on Users.
Look for the Add user button.
Next, we want to create a username – which will serve as the user login as well as the name of the database. Create a password and select “Local” from the Host dropdown. This will only allow traffic from the local machine. Under the “Database for user”, we want to select “Create database with same name and grant all privileges.”
Next, let’s copy the Drupal files and configure the settings. Locate the file you downloaded in the first step above and move it to your web root folder. This is the folder you previously used to test Apache and install phpMyAdmin so you could access files through your browser. For example, on my Mac this is in mfrazer/sites.
You may want to change the folder name from drupal-7.23 to something a little more user friendly, e.g. drupal without the version number. Generally, it’s bad practice to have periods in file or folder names. However, for the purposes of this tutorial, I’m going to leave the example unchanged.
Now, we want to create our settings file. Inside your Drupal folder, look for the sites folder. We want to navigate to sites/default and create a copy of the file called default.settings.php. Rename the copy to settings.php and open in your code editor.
Each section of this file contains extensive directions on how to set the settings. At the end of the Database Settings section (line 213 as of this writing), we want to replace this:

```php
$databases = array();
```

with this:

```php
$databases['default']['default'] = array(
  'driver' => 'mysql',
  'database' => 'sampledrupal',
  'username' => 'sampledrupal',
  'password' => 'samplepassword',
  'host' => 'localhost',
  'prefix' => '',
);
```
Remember, if you followed the steps above, ‘database’ and ‘username’ should have the same value. Save the file.
Go back to your unpacked folder and create a directory called “files” in the same directory as our settings.php file.
Now we can navigate to the setup script in our browser. The URL is composed of the web root, the name of the folder you extracted the Drupal files into, and then install.php. So, in my case this is:

http://localhost/~mfrazer/drupal-7.23/install.php

If I were on a Windows machine, and had changed the name of the folder to mydrupal, then the path would be:

http://localhost/mydrupal/install.php
Either way, you should get something that looks like this:
For your first installation, I would choose Standard, so you can see what the Standard install looks like. I use Minimal for many of my sites, but if it’s your first pass into Drupal it is good to see what’s there.
Next, pick a language and click Save and Continue. Now, the script is going to attempt to verify your requirements. You may run into an error that looks like this:
We need to make our files directory writable by our web server users. We can do this a bunch of different ways. It’s important to think about what you’re doing, because it involves file permissions, especially if you are allowing users in from outside your system.
On my Mac, I choose to make _www (which is the hidden web server user) the owner of the folder. To do this, I open Terminal and type in
sudo chown _www files
Remember, sudo will elevate us to administrator. Type in your password when prompted. The command itself is chown, followed by the new owner and then the folder in question. So this command will change the owner of the “files” folder to _www.
In Windows, I did not see this error. However, if needed, I would handle the permissions through the user interface, by navigating to the files folder, right-clicking and selecting Properties. Click on the Security tab, then click on Edit. In this case, we are just going to grant permissions to the users of this machine, which will include the web server user.
Click on Users, then scroll down to click the check box under “Allow” and next to “Write.” Click on Apply and then Ok. Click OK again to close the Properties window.
On my Windows machine, I got a PHP error instead of the file permissions error.
This is an easy fix: we just need to enable the gd2 and mbstring extensions in our php.ini file and restart Apache to pick up the changes.
To do this, open your php.ini file (if you followed our tutorials, this will be in your c:\opt\local directory). Beginning on line 868, in the Windows Extensions section, uncomment (remove the semi-colon from) the following lines (they are not right next to each other, they’re in a longer list, but we want these uncommented):
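Based on the extensions named above, the lines should look like this once uncommented (the exact list in your copy of php.ini may differ slightly):
extension=php_gd2.dll
extension=php_mbstring.dll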
Save the php.ini file. Restart Apache by going to Services, clicking on Apache 2.4, and clicking Restart.
Once you think you’ve fixed the issues, go back to your browser and click on Refresh. The Verify Requirements should pass and you should see a progress bar as Drupal installs.
Next you are taken to the Configure Site page, where you fill in the site name and your email address, and create the first user. This is important, as a couple of functions are restricted to this first user, so remember the user name and password that you choose. I usually leave the Server Settings alone and uncheck the Notification options.
Click Save and Continue. You should be congratulated and provided a link to your new site.
WordPress is a very common blogging platform; we use it at ACRL TechConnect. It can also be modified to be used as a content management platform or an image gallery.
Full disclosure: Until writing this post, I have never done a local install of WordPress. Fortunately, I can report that it’s very straightforward. So, let’s get started.
The MAMP instructions for WordPress advise creating the database first, and using the root credentials. I am not wild about this solution, because I prefer to have separate users for my databases and I do not want my root credentials sitting in a file somewhere. So we will set up the database the same way we did above: create a user and a database at the same time.
Open a browser and navigate to the phpMyAdmin installation (if you followed the earlier tutorials, this will be http://localhost/~yourusername/phpmyadmin on Mac and http://localhost/phpmyadmin on Windows). Log in with the root user you created when you installed MySQL and click on Users.
Look for the Add user button.
Next, we want to create a username – which will serve as the user login as well as the name of the database. Create a password and select “Local” from the Host dropdown. This will only allow traffic from the local machine. Under the “Database for user”, we want to select “Create database with same name and grant all privileges.”
Now, let’s download our files. Go to http://wordpress.org/download and click on Download WordPress. Move the .zip file to your web root folder and unzip it. This is the folder you previously used to test Apache and install phpMyAdmin so you could access files through your browser. For example, on my Mac this is in mfrazer/sites. If you followed our tutorial for Windows, it would be c:\sites.
Next, we need to create a config file. WordPress comes with a wp-config-sample.php file. Make a copy of it, rename the copy wp-config.php, and open it with your code editor.
Enter the database name, user name and password we just created. Remember, if you followed the steps above, the database name and user name should be the same. Verify that the host is set to localhost and save the file.
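When you’re done, the relevant lines of wp-config.php should look something like this (a sketch; samplewordpress is a stand-in for whatever name you created above):
define('DB_NAME', 'samplewordpress');     // the database we created in phpMyAdmin
define('DB_USER', 'samplewordpress');     // same value as the database name
define('DB_PASSWORD', 'samplepassword');  // the password you chose
define('DB_HOST', 'localhost');           // local traffic only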
Navigate in your browser to the WordPress folder. The URL is made up of the web root and the name of the folder where you extracted the WordPress files. So, in my case this is:
http://localhost/~mfrazer/wordpress
If I were on a Windows machine and had changed the name of the folder to wordpress-dev, the path would be
http://localhost/wordpress-dev
Either way, you should get something that looks like this:
Fill in the form and click on Install WordPress. It might take a few minutes, but you should get a success message and a Log In button. Log in to your site using the credentials you just created in the form.
You’re ready to start coding and testing. The next step is to think about what you want to do. You might take a look at the theming resources provided by both WordPress and Drupal. You might want to go all out and write a module. No matter what, though, you now have an environment that will help you abide by the cardinal rule of development: Thou shalt not mess around in production.
Let us know how it’s going in the comments!
When I was a kid, I cherished the Paula Danziger book Remember Me to Harold Square, in which a group of kids call themselves the Serendipities, named for the experience of making fortunate discoveries accidentally. Last week I found myself remembering the book over and over again as I helped develop Serendip-o-matic, a tool which introduces serendipity to research, as part of a twelve person team attending One Week | One Tool at the Roy Rosenzweig Center for History and New Media at George Mason University (RRCHNM).
In this blog post, I’ll take you through the development of the “serendipity machine”, from the convening of the team to the selection and development of the tool. The experience turned out to be an intense learning experience for me, so along the way, I will share some of my own fortunate discoveries.
(Note: this is a pretty detailed play-by-play of the process. If you’re more interested in the result, please see the RRCHNM news items on both our process and our product, or play with Serendip-o-matic itself.)
The Eve of #OWOT
Approximately thirty people applied to be part of One Week | One Tool (OWOT), an Institute for Advanced Topics in the Digital Humanities, sponsored by the National Endowment for the Humanities. Twelve of us were selected, and we arrive on Sunday, July 28, 2013, convening in the Well, the watering hole at the Mason Inn.
Tom Scheinfeldt (@foundhistory), the RRCHNM director-at-large who organized OWOT, delivers the pre-week pep talk and discusses how we will measure success. The development of the tool is important, but so is the learning experience for the twelve assembled scholars. It’s about the product, but also about the process. We are encouraged to learn from each other, to “hitch our wagon” to another smart person in the room and figure out something new.
As for the product, the goal is to build something that is used. This means that defining and targeting the audience is essential.
The tweeting began before we arrived, but typing starts in earnest at this meeting and the #owot hashtag is populated with our own perspectives and feedback from the outside. Feedback, as it turns out, will be the priority for Day 1.
@DoughertyJack: “One Week One Tool team wants feedback on which digital tool to build.”
Mentors from RRCHNM take the morning to explain some of the basic tenets of what we’re about to do. Sharon Leon talks about the importance of defining the project: “A project without an end is not a project.” Fortunately, the one-week timeline solves this problem for us initially, but the question remains: what happens after this week?
Patrick Murray-John takes us through some of the finer points of developing in a collaborative environment. Sheila Brennan discusses outreach and audience, and continues to emphasize the point from the night before: the audience definition is key. She also says the sentence that, as we’ll see, would need to be my mantra for the rest of the project: “Being willing to make concrete decisions is the only way you’re going to get through this week.”
All of the advice seems spot-on and I find myself nodding my head. But we have no tool yet, and so how to apply specifics is still really hazy. The tool is the piece of the puzzle that we need.
We start with an open brainstorming session, which results in a filled whiteboard of words and concepts. We debate audience, we debate feasibility, we debate openness. Debate about openness brings us back to the conversation about audience – for whom are we being open? There’s a lot of conversation, but at the end, we essentially have just a word cloud associated with projects in our heads.
So, we then take those ideas and try to express them in the following format: X tool addresses Y need for Z audience. I am sitting closest to the whiteboards so I do a lot of the scribing for this second part and have a few observations:
- there are pet projects in the room – some folks came with good ideas and are planning to argue for them
- our audience for each tool is really similar; as a team we are targeting “researchers”, though there seems to be some debate on how inclusive that term is. Are we including students in general? Teachers? What designates “research”? It seems to depend on the proposed tool.
- the problem or need is often hard to articulate. “It would be cool” is not going to cut it with this crowd, but there are some cases where we’re struggling to define why we want to do something.
A few group members begin taking the rows and creating usable descriptions and titles for the projects in a Google Doc, as we want to restrict public viewing while still sharing within the group. We discuss several platforms for sharing our list with the world, and land on IdeaScale. We want voters to be able to vote AND comment on ideas, and IdeaScale seems to fit the bill. We adjourn from the Center and head back to the hotel with one thing left to do: articulate these ideas to the world using IdeaScale and get some feedback.
The problem here, of course, is that everyone wants to make sure that their idea is communicated effectively and we need to agree on public descriptions for the projects. Finally, it seems like there’s a light at the end of the tunnel…until we hit another snag. IdeaScale requires a login to vote or comment and there’s understandable resistance around the table to that idea. For a moment, it feels like we’re back to square one, or at least square five. Team members begin researching alternatives but nothing is perfect, we’ve already finished dinner and need the votes by 10am tomorrow. So we stick with IdeaScale.
And, not for the last time this week, I reflect on Sheila’s comment, “being willing to make concrete decisions is the only way you’re going to get through this week.” When new information, such as the login requirement, challenges the concrete decision you made, how do you decide whether or not to revisit the decision? How do you decide that with twelve people?
I head to bed exhausted, wondering about how many votes we’re going to get, and worried about tomorrow: are we going to make a decision?
It turns out that I need not have worried. In the winnowing from 11 choices down to 2, many members of the team are willing to say, “my tool can be done later” or “that one can be done better outside this project.” Approximately 100 people weighed in on the IdeaScale site, and those votes are helpful as we weigh each idea. Scott Kleinman leads us in a discussion about feasibility for implementation and commitment in the room, and the choices begin to fall away. At the end, there are four, but after a few rounds of voting we’re down to two with equal votes that must be differentiated. After a little more discussion, Tom proposes a voting system that allows folks to weight their votes in terms of commitment, and the Serendipity project wins out. The drafted idea description reads:
“A serendipitous discovery tool for researchers that takes information from your personal collection (such as a Zotero citation library or a CSV file) and delivers content (from online libraries or collections like DPLA or Europeana) similar to it, which can then be visualized and manipulated.”
We decide to keep our project a secret until our launch and we break for lunch before assigning teams. (Meanwhile, #owot hashtag follower Sherman Dorn decides to create an alternative list of ideas – One Week Better Tools – which provides some necessary laughs over the next couple of days).
After lunch, it’s time to break out responsibilities. Mia Ridge steps up, though, and suggests that we first establish a shared understanding of the tool. She sketches on one of the whiteboards the image which would guide our development over the next few days.
This was a takeaway moment for me. I frequently sketch out my projects, but I’m afraid the thinking often gets pushed out in favor of the doing when I’m running low on time. Mia’s suggestion that we take the time despite being against the clock probably saved us lots of hours and headaches later in the project. We needed to aim as a group, so our efforts would fire in the same direction. The tool really takes shape in this conversation, and some of the tasks are already starting to become really clear. (We are also still indulging our obsession with mustaches at this time, as you may notice.)
Tom leads the discussion of teams. He recommends three: a project management team, a design/dev team and an outreach team. The project managers should be selected first, and they can select the rest of the teams. The project management discussion is difficult; there’s an abundance of qualified people in the room. From my perspective, it makes sense to have the project managers be folks who can step in and pinch hit as things get hectic, but we also need our strongest technical folks on the dev team. In the end, Brian Croxall and I are selected to be the project management team.
We decide to ask the remaining team members where they would like to be and see where our numbers end up. The numbers turn out great: 7 for design/dev and 3 for outreach, with two design/dev team members slated to help with outreach needs as necessary.
The teams hit the ground running and begin prodding the components of the idea. The theme of the afternoon is determining the feasibility of this “serendipity engine” we’ve elected to build. Mia Ridge, leader of the design/dev team, runs a quick skills audit and gets down to the business of selecting programming languages, frameworks and strategies for the week. They choose to work in Python with the Django framework. Isotope, a jQuery plugin I use in my own development, is selected to drive the results page. A private Github repository is set up under a code name. (Beyond Isotope, HTML and CSS, I’m a little out of my element here, so for more technical details, please visit the public repository’s wiki.) The outreach team lead, Jack Dougherty, brainstorms with his team on overall outreach needs and high-priority tasks. The Google document from yesterday becomes a Google Drive folder, with shells for press releases, a contact list for marketing and work plans for both teams.
This is the first point where I realize that I am going to have to adjust to a lack of hands on work. I do my best when I’m working a keyboard: making lists, solving problems with code, etc. As one of the project managers, my job is much less on the keyboard and much more about managing people and process.
When the teams come back together to report out, there’s a lot of getting each side up to speed, and afterwards our mentors advise us that the meetings have to be shorter. We’re already at the end of day 2, though both teams will be working into the night on their work plans, and Brian and I still need to set the schedule for tomorrow.
We’re past the point where we can have a lot of discussion, except maybe about the name.
Wednesday is tough. We have to come up with a name, and all that exploration from yesterday needs to be a prototype by the end of the day. We are still hammering out the language we use in talking to each other and there’s some middle ground to be found on terminology. One example is the use of the word “standup” in our schedule. “Standup” means something very specific to developers familiar with the Agile development process whereas I just mean, “short update meeting.” Our approach to dealing with these issues is to identify the confusion and quickly agree on language we all understand.
I spend most of the day with the outreach team. We have set a deadline for presenting names at lunchtime and are hoping the whole team can vote after lunch. This schedule turns out to be folly, as the name takes most of the day and we have to adjust our meeting times accordingly. As project managers, Brian and I are canceling meetings whenever we can (because folks are on a roll, we haven’t met a deadline, etc.), but we have to balance this with keeping the whole team informed.
Camping out in a living room type space in RRCHNM, spread out among couches and looking at a Google Doc being edited on a big-screen TV, the outreach team and various interested parties spend most of the day brainstorming names. We take breaks to work on the process press release and other essential tasks, but the name is the thing for the moment. We need a name to start working on branding and logos. Product press releases need to be completed, the dev team needs a named target and of course, swag must be ordered.
It is in this process, however, that an Aha! moment occurs for me. We have been discussing names for a long time and folks are getting punchy. The dev team lead and our designer, Amy Papaelias, have joined the outreach team along with most of our CHNM mentors. I want to revisit something dev team member Eli Rose said earlier in the day. To paraphrase, Eli said that he liked the idea that the tool automated or mechanized the concept of surprise. So I repeat Eli’s concept to the group, and not long after that, Mia says, “what about Serendip-o-matic?” The group awards the name with head nods and “I like that”s, and after running it by the developers and dealing with our reservations (e.g., hyphens, really?), history is made.
As relieved as I am to finally have a name, the bigger takeaway for me here is about the role of the manager. I am not responsible for the inspiration behind the name or for the name itself; my contribution was repeating the concept to the right combination of people at a moment when the team was stuck. The project managers can create opportunities for the brilliant folks on the team to make connections. This thought serves as a consolation to me as I continue to struggle without concrete tasks.
Meanwhile, on the other side of the building, the rest of the dev team is pushing to finish code. We see a working prototype at the end of the day, and folks are feeling good, but it’s been a long day. So we go to dinner as a team and leave the work behind for a couple of hours, though Amy is furiously sketching at various moments throughout the meal as she tries to develop a look and feel for this newly named thing.
On the way home from dinner, I think, “there’s only two days left.” All of a sudden, it feels like we haven’t gotten anywhere.
The decision to add the Flickr API to our work, in order to access the Flickr Commons, is made with the dev team, based on the feeling that we have enough time and that the images there enhance our search results and expand our coverage of subject areas and geographic locations.
We also spend today addressing issues. The work of both teams overlaps in some key areas. In the afternoon, Brian and I realize that we have mishandled some of the communication regarding language on the front page and both teams are working on the text. We scramble to unify the approaches and make sure that efforts are not wasted.
This is another learning moment for me. I keep flashing on Sheila’s words from Monday, and worry that our concrete decision-making process is suffering from “too many cooks in the kitchen.” Everyone on this team has a stake in the success of this project and we have lots of smart people with valid opinions. But everyone can’t vote on everything, and we are spending too much time building consensus now, with a mere twenty-four hours to go. As a project manager, part of my job is to start streamlining and making executive decisions, but I am struggling with how to do that.
As we prepare to leave the center at 6pm, things are feeling disconnected. This day has flown by. Both teams are overwhelmed by what has to get done before tomorrow, and despite hard work throughout the day, we’re still trying to get a dev server and a production server up and running. As we regroup at the Inn, the dev team heads upstairs to a quiet space to work and eat, and the outreach team sets up in the lobby.
Then, good news arrives. Rebecca Sutton-Koeser has managed to get both the dev and production servers up and the code is able to be deployed. (We are using Heroku and Amazon Web Services specifically, but again, please see the wiki for more technical details.)
The outreach team continues to work on documentation and release strategy, and Brian and I continue to step in where we can. Everyone is working until midnight or later, but feeling much better about our status than we did at 6pm.
The final tasks are upon us. Scott Williams moves on from his development responsibilities to facilitate user testing, which was forced to slide from Thursday due to our server problems. Amanda Visconti works to get the interactive results screen finalized. Ray Palin hones our list of press contacts and works with Amy to get the swag design in place. Amrys Williams collaborates with the outreach team and then Sheila to publish the product press release. Both the dev and outreach teams triage and fix and tweak and defer issues as we move towards our 1pm “code chill”, a point at which we hope to have the code in a fairly stable state.
We are still making too many decisions with too many people, and I find myself weighing not only the options but how attached people are to either option. Several choices are made because they reflect the path of least resistance. The time to argue is through and I trust the team’s opinions even when I don’t agree.
We end up running a little behind and the code freeze scheduled for 2pm slides to 2:15. But at this point we know: we’re going live at 3:15pm.
Jack Dougherty has arranged a Google hangout with Dan Cohen of the Digital Public Library of America and Brett Bobley and Jen Serventi of the NEH Office of Digital Humanities, which the project managers co-host. We broadcast the conversation live via the One Week | One Tool website.
The code goes live and the broadcast starts but my jitters do not subside…until I hear my teammates cheering in the hangout. Serendip-o-matic is live.
At 8am on Day 6, Serendip-o-matic had its first pull request and later in the day, a fourth API – Trove of Australia – was integrated. As I drafted this blog post on Day 7, I received email after email generated by the active issue queue and the tweet stream at #owot is still being populated. On Day 9, the developers continue to fix issues and we are all thinking about long term strategy. We are brainstorming ways to share our experience and help other teams achieve similar results.
I found One Week | One Tool incredibly challenging and therefore a highly rewarding experience. My major challenge lay in shifting my mindset from that of someone hammering on a keyboard in a one-person shop to that of a project manager for a twelve-person team. I write for this blog because I like to build things and share how I built them, but I have never experienced the building from this angle before. The tight timeline ensured that we would not have time to go back and agonize over decisions, so it was a bit like living in a project management accelerator. We had to recognize issues, fix them and move on quickly, so as not to derail the project.
However, even in those times when I became acutely aware of the clock, I never doubted that we would make it. The entire team is so talented; I never lost my faith that a product would emerge. And it’s an application that I will use, for inspiration and for making fortunate discoveries.
(More on One Week | One Tool, including other blog entries, can be found by visiting the One Week | One Tool Zotero Group.)
I attended the Library Code Year Interest Group’s preconference on Python, sponsored by LITA (Library Information Technology Association) and ALCTS (Association for Library Collections and Technical Services), at the American Library Association’s conference in Chicago this year. The workshop was taught and staffed by IG members Andromeda Yelton, Becky Yoose, Bohyun Kim, Carli Spina, Eric Phetteplace, Jen Young, and Shana McDanold. It was based on work done by the Boston Python Workshop group, and with tools developed there and by Coding Bat. The preconference was designed to provide a basic introduction to the Python programming language, and succeeded admirably.
Here’s why I think it’s important for librarians to learn to code: it provides options in lots of conversations where librarians have traditionally not had many. One of the lightning talks (by Heidi Frank at NYU) concerned using Python to manipulate MARC records. The tools to do that kind of thing have tended to be either a) the property of vendors who provide the features the majority of their customers want or b) the province of overworked library systems staff. Both scenarios tend to lead to tools which are limiting or aggravating for the individual needs of almost every individual user. Ever try to change a graphic in your OPAC? Export a report in the file format *you* need? Learning to code is one way to solve those kinds of problems. Code empowers.
The preconference was very self-directed. There were a set of introductory tutorials before a late-morning lecture, then a lovely lunch (provided by the Python Foundation), then the option of spending more time on the morning’s activities or using the morning’s skills to work on one of two projects. The first project, ColorWall, used Python to create a bunch of very pretty cascading displays of color. The second, Wordplay, used it to answer crossword puzzle questions–”How many words fit the pattern E*G****R?” “How many words have all five vowels in order?”. My table opted to stick with the morning exercises, and learned an important lesson about what kinds of things Python can infer and what kinds it can’t. Python and your computer are very fast and systematic, but also very literal-minded. I suspect that they’d much rather be doing math than whatever you’re asking them to do for you.
My own background is moderately technical. I remember writing BASIC programs for my Commodore 128. I’ve done web design and systems work for a couple of libraries, and I have a lot of experience working with software of various kinds. I used this set of directions to install Linux on the Chromebook I brought to the preconference. I have installed new Linux operating systems on computers dozens of times. This doesn’t scare me:
And I still had the feeling of 8th-grade-math-class dread when I got stuck, as I frequently did, during the second part of the guided introduction. Did I miss something? Am I just not smart enough to do this? The whole litany of self-doubt. I got completely stuck on when to use loops and when not to use them. Loops are one of the most basic Python constructs, a way of telling Python to systematically go down a list of things and do something to them. Utterly baffling, because it requires you to ask the question like a computer would, rather than like a human. Totally normal for me and anyone else starting out. Or, in fact, for anyone who programs. Programming is failure. The trick is learning to treat failure as part of the process, rather than as a calamity.
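For what it’s worth, here is the kind of loop that eventually clicked for me. This is a sketch of my own in the spirit of the Wordplay exercise (the word list is made up), not code from the workshop:
# count the words that fit the crossword pattern E*G****R
import re

words = ["engineer", "estragon", "elevator", "engender"]  # made-up sample list
matches = 0
for word in words:  # go down the list, one word at a time
    if re.match("e.g....r$", word):  # eight letters: e, anything, g, four more, r
        matches += 1
print(matches)  # prints 2: "engineer" and "engender"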
What’s powerful about the approach the IG used is that there were lots of people available to help, about seven teaching assistants to forty attendees. The accompanying documentation was cheerful and clear, and the attitude was “let’s figure this out together”. This is the polar opposite of a common approach to teaching programming, of which the polite paraphrase is “Read The Fine Manual”. My experience with music lessons as an adult and with the programming instruction I’ve looked at is that the (usually) well-meaning people doing the teaching tend to be people who learn best by trying things until they work. Lots of people learn best that way; lots of people do not. And librarians tend more often to be in the second category: wanting a bit of a sense of the forest before dealing with all of the trees individually. There is also a genderedness to the traditional approach. The Boston Python group’s approach (self-directed, with vast amounts of personal help available) was specifically designed to be welcoming to women and newcomers. Used here, it definitely worked. Attendees were 60% female, which is striking for a programming event, even at a library conference.
For me, learning Python is an investment in supporting the digital humanities work I will be increasingly involved in. I’m looking forward to learning how to use it to manipulate and analyze text. As I look more closely, I see that Python has modules for manipulating CSV files. One of my ongoing projects involves mapping indexes like MLA Bibliography to our full-text coverage so my students and faculty know what to expect when they use them. I’ve been using complicated Excel spreadsheets to do this, with only marginally satisfying results. I think that Python will give me better results, and hopefully results I can share with others.
The immediate takeaways for me are more about relationships and affiliation than code, though I do have a structure, in the form of the documentation for the workshop, which I will use to follow up (you can use it, too!). I am lucky enough to be in the Boston area, so I will take advantage of the active Boston Python Meetup group, which has frequent workshops and events for programmers at all levels. Most importantly, I am clear from the workshop that Python is not inherently more complicated to learn than MARC formatting or old-style DIALOG searching. I wouldn’t say the workshop demystified Python for me (there’s still a lot of work for me to do), but I will say that learning a useful amount of Python now seems entirely doable and worthwhile.
Computer code is crucial to the present and future of librarianship. Librarians have a unique facility with explaining the technical to non-technical people. Librarians learning to code together is an investment in ourselves and our profession.
About our guest author:
Chris Strauber is Humanities Research and Instruction Librarian and Coordinator of Instructional Design at Tufts University’s Tisch Library.
This post is a bit of a thought-experiment. It grew out of a conversation I had with a colleague about something I like to call “assessment fatigue.” I believe we need quality assessment, but I also get extremely tired of hearing about the assessment fad everywhere. My fatigue with assessment-speak has been making it difficult to engage with the real work of assessment, but a recent conversation about Search Engine Optimization and Web Analytics (of all things) is helping me get beyond this. I’m hopeful that by sharing and exploring this thought-arc with you, we can profitably move beyond assessment-speak and assessment-fatigue and on to the thoughtful and intentional work of building library services informed with data.
TLDR: Jump to the list of three rules from SEO that can apply to library assessment fatigue at the bottom of this article.
Assessment fatigue is the state of not wanting to hear another word about measuring, rubrics, or demonstrating value. I am a frequent sufferer of assessment fatigue, despite the fact that I am convinced that assessment is absolutely necessary to guide the work of the library. I don’t know of a viable alternative to the outcomes-assessment model1 of goal setting and performance evaluation. I think there is great work out there2 about how to incorporate assessment into the work of academic libraries. I’ve seen it lead libraries to achieving amazing things and thus I’m a believer in the power of outcomes- and data-driven planning.
I’m also sick to death of hearing about it. It is frighteningly easy to turn talk of assessment into a dry and empty caricature of what it can be. So much so, that I’m usually hesitant to get on board with a new assessment project, because such projects can turn into something out of a Kafka novel or Terry Gilliam movie at the drop of a hat. This gives me a bad attitude and my internal monologue can resemble: “Oh yes, let’s reduce the complexities of academic work to the things that are most easily quantified and then plot our success on a rubric.” or “Let’s reduce information literacy to a standardized test and then make our instruction program teach to that test.” I also hear Leonard Nimoy’s voice from Civilization IV in my head saying “The bureaucracy is expanding to meet the needs of the expanding bureaucracy.” These snarky thoughts are at best unhelpful and at worst get in the way of the work of the library, but I’d be lying if I denied indulging them from time to time. Assessment is undeniably necessary, but it can also be tremendously annoying for the rank and file librarians required to add gathering data to their already over-full workloads.
Happily, I’ve discovered something that rescues me from my whining and helps me engage in useful assessment activities. It comes from, oddly enough, what I’ve learned about Search Engine Optimization (SEO). This connection may appear initially to be tenuous, but using it has been profitable for me and has helped both my attitude and my productivity. To help make this all a little more clear, I’m going to begin by explaining what I’ve learned through teaching SEO to undergraduates, and then I’ll demonstrate that SEO and library assessment share some key characteristics: they both suffer from a bad reputation among those who carry them out, they are both absolutely required in order to do the highest quality work in their respective fields, and both are ultimately justified by the power of data-driven decision making.
I include a unit on Search Engine Optimization in a course I teach on information architecture. In the class we cover basic organization theory, database structures, searching databases, search engine structure, searching the web, SEO, and microdata markup. I was reluctant at first to add the SEO unit, because I understood SEO as a largely seedy and underhanded marketing affair. Once I taught it, however, I realized that doing SEO the right way requires a nuanced understanding of how web search works. Students who learned how to do SEO also learned how to search and their insights on web search bled over and made them better database searchers as well.
Quick Primer on Web Search & SEO
What makes students who understand SEO and web architecture more effective database searchers has to do with a little-known detail of full-text keyword searching: by itself, keyword searching doesn’t work very well. More precisely, keyword search works just fine, but the results of these searches, by themselves, aren’t very useful. Finding keyword matches is easy; the real challenge is in packaging the results of a keyword search in a manner that is useful to the searcher. Unlike databases with well-organized metadata structures, keyword searches don’t have a way of telling what keywords mean. Web content has no title, author, or subject fields3. So when I search for “blues”, the keyword search doesn’t know if I’m looking for colors, moods, music, jeans, cheeses, or the French national football side.
Because of this lack of context, search engines create useful results rankings by treating HTML tags and web structural elements as implicit metadata. If a keyword is found inside a URL, <title>, or <h1> tag, the site is ranked more highly than a site where the keyword appears only in the <body> tag. Anchor-link text, the words underlined in blue in a web link, is especially valuable, since it contains another person’s description of what a site is. In the following example, the anchor-link text “The ACRL TechConnect blog, a site about library technology for academic librarians” succinctly and accurately describes the content being linked to. This makes the content more findable to readers using search engines.
<a href="http://acrl.ala.org/techconnect/">The ACRL TechConnect blog, a site about library technology for academic librarians.</a>
Thus, when we code a site or even make a link, we are, in effect, cataloging the web. This is also why we should never use “click here” in our anchor-link text. When we do that, we are squandering an opportunity to add descriptive information to the link, and making it more difficult for potential readers to discover our content. The following is the WRONG way to write a web link.
<a href="http://acrl.ala.org/techconnect/">Click here</a> for the The ACRL TechConnect blog, a site about library technology for academic librarians.
In this example, the descriptive information is outside the link (outside the <a></a> tags) and thus unrecognizable as descriptive information to a search engine.
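The same logic applies beyond links. Here is a sketch of my own (with a made-up URL) of a page whose URL, <title>, and <h1> all reinforce the same keywords for a search engine:
<!-- page lives at http://example.com/library-technology/ -->
<title>Library Technology for Academic Librarians</title>
<h1>Library Technology</h1>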
Search companies like Ask, Bing, Google, and Yahoo! don’t organize the web; they just capture how users and content creators organize and describe their own and each others’ content. SEO, very basically speaking, is the practice of putting knowledge of web search architecture into practice. When we use short but descriptive text in our URLs, <title> tags, and <h1> tags, and write descriptive anchor-link text – when we practice responsible SEO, in other words – we are performing the public service of making the web more accessible for everyone. Search engine architecture and SEO are, of course, much more complicated than these short paragraphs can detail, but this is the general concept: because there is no standardized way of cataloging pages, search engine companies have found workarounds to make “a vast collection of completely uncontrolled heterogeneous documents” (Brin and Page, 1998) act like a database. Using that loose metaphor, SEO can be seen as the process of getting web designers to think like catalogers.
SEO’s Bad Reputation
When viewed from this perspective, SEO doesn’t seem all that bad. It seems like a natural process of understanding how search engines use web site data and using that knowledge to maximize public access to one’s site data. In reality, it doesn’t always work so cleanly. Ad revenue is based on page views and clicks, so practically speaking, SEO often becomes the process of maximizing revenue by driving traffic to a site by any means. In other words, all too often SEO experts act like marketers, not like catalogers. Because of these abuses SEO is commonly understood as the process of maximizing search ranking regardless of relevance, user intent, or ethics.
If you want to test my hypothesis here, simply send a tweet containing the letters SEO or #seo and examine the quality of your new followers and messages. (Spoiler: you’ll get a lot of spam comments and spam followers, so don’t try this at home.)
Of course, SEO doesn’t have to be shady or immoral, but since there are profits to be made by shady and immoral SEO ‘experts’, the field has earned its bad reputation. Any web designer who wants people to find her content needs to perform fundamental SEO tasks, but there is little talk about the process, out of fear of being lumped in with the shady side of things. For me, the best argument for doing SEO is to keep the reason for SEO in the front of my mind: we need to bother with the mess of SEO because SEO is what connects our content to our audience. Because I care about both my audience and my content, I’m willing to do the unpleasant tasks necessary to ensure my audience can find my content.
The Connection Between Library Assessment and SEO
It seems clear to me that library assessment suffers from some of the same reputation problems that SEO has. However integral quality assessment is to library performance, the current fad status of assessment can make it difficult for librarians to see any benefits behind the hype in their daily work. Failures of past assessment fads to bring about promised changes (TQM, anyone?) make librarians wary of drinking the assessment Kool-Aid. I’m not focusing here on grumpy or curmudgeonly librarians, but on hard-working professionals who have heard too many assessment pep-talks to get excited about the next project.
This is why I find SEO to be a useful analogue for library assessment. Both SEO and library assessment are things that are absolutely necessary to the success of our efforts, but both are also held in distaste by many of the rank and file who are required to engage in these activities. One key to getting beyond the initial negative reaction and the bad reputations of these activities is to focus on the reasons we have for engaging in them. For example, we do SEO because we want to connect users with our content. We do assessment because we want to make decisions based on data, not whim. We do assessment because we want to know if our efforts to serve our users are actually serving our users. In other words: because I care deeply about providing the highest quality service to our library patrons, I am willing to do the underlying work to make certain our efforts are having the desired effects.
Keeping the ultimate goal in mind is not only helpful for setting priorities, but it also helps us govern the potentially insidious natures of both SEO and assessment. By this I mean that if we keep in mind that SEO is about connecting our intended audience to our quality content, we are much less likely to be tempted to engage in unsavory marketing schemes, because those are not about either our intended audience or quality content. In the same vein, if we remain mindful that library assessment is about using data to improve how we serve our users, we are unlikely to take shortcuts such as teaching to a standardized test, choosing only easily quantifiable measures, or assessing only areas of strength. These shortcuts will serve only to undermine that goal in the long run.
Returning to the conversation with my colleague that sparked this post: after I finished whining about my assessment fatigue, I explained why I felt it was necessary to add a section on web analytics to my SEO course unit. My students worked on a project where they analyzed a website for its SEO and made suggestions to improve access to the content. Without the data provided by web analytics, they had no way of knowing if their suggestions made things better, worse, or had no effect. My colleague replied that this is the precise reason that librarians need to collect assessment data. Without assessment data, we have no tools to tell if our work choices improve, worsen, or have no effect on library services. She was, of course, absolutely right. Without quality assessment data, we have no way of knowing whether our decisions about library service lead to increased access to relevant information and improved patron experience.
Three SEO Rules that Apply to Library Assessment
In conclusion and in continuation of the metaphor that library assessment is a lot like SEO, here are three rules from SEO that can speak to our library assessment efforts.
1. Know how search engines function. (Know how accreditation functions.)
If you want people who use search engines to successfully find your site, you have to know how search engines function and incorporate that knowledge into your site design. Similarly, if you are assessing library performance in order to demonstrate value to stakeholders such as accreditors or campus administrators outside the library, you need to know what these bodies value and write your assessment to measure for these values.
2. Know your content and your audience. (Know your library and your users.)
The most common error in SEO efforts is designing to generate maximum traffic to your site. If successful, this approach can generate a large quantity of traffic, traffic that is collectively annoyed to find itself at your site. The proper approach is to know your content and design your SEO to attract genuinely interested traffic. A similar temptation applies to library assessment. It is possible to skew your analytics to show only amazing success in all areas, but this comes at the cost of gathering useful data about actual library services and at the cost of being able to improve services based on that data. Assessment data is valuable because it tells us about how the library serves our users. Data skewed to show only positive results is useless when it comes to helping the library achieve its mission.
3. Design for humans, not for machines. (Assess for library users.)
This rule sums up the law and the prophets for SEO: design for humans and not for machines. What it means is don’t let your desire for search ranking tempt you into designing an ugly page that your audience hates. Put the people first. When you have a choice to make between a design element that favors human readers and a design element that favors search engine crawlers, ALWAYS choose people. While SEO efforts have a real impact on search ranking, a quality web site is more important for search ranking than quality SEO effort. Similarly, if you find yourself tempted to compromise service to patrons in order to focus on assessment, always err on the side of the patron. Librarian time and attention are finite resources, but if we consciously and consistently prioritize our patrons ahead of our assessment efforts, our assessment efforts will uncover more favorable data than if we put the data ahead of the people we are here to serve.
- See this 1998 white paper for a good definition of the outcomes assessment model. http://www.ala.org/acrl/publications/whitepapers/taskforceacademic ↩
- Deb Gilchrist and Megan Oakleaf collaborated on this excellent overview. http://www.learningoutcomeassessment.org/documents/LibraryLO_000.pdf ↩
- The HTML meta-description tag is a rough stand-in for subject fields, but without a controlled vocabulary. ↩
Previously, we discussed the benefits of installing a local AMP stack (Apache, MySQL & PHP) for the purposes of development and testing, and walked through installing a stack in the Mac environment. In this post, we will turn our attention to Windows. (If you have not read Local Dev Environments for Newbies Part 1, and you are new to the AMP stack, you might want to go read the Introduction and Tips sections before continuing with this tutorial.)
Much like with the Mac stack, there are Windows stack installers that will do all of this for you. For example, if you are looking to develop for Drupal, there’s an install package called Acquia that comes with a stack installer. There’s also WAMPserver and XAMPP. If you opt to go this route, you should do some research and decide which option is the best for you. This article contains reviews of many of the main players, though it is a year old.
However, we are going to walk through each component manually so that we can see how it all works together.
So, let’s get going with Recipe 2 – Install the AMP Stack on Windows 7.
Notepad and Wordpad come with most Windows systems, but you may want to install a more robust code editor to edit configuration files and eventually, your code. I prefer Notepad++, which is open source and provides much of the basic functionality needed in a code editor. The examples here will reference Notepad++ but feel free to use whichever code editor works for you.
For our purposes, we are not going to allow traffic from outside the machine to access our test server. If you need that functionality, you will need to open port 80 in your firewall. Be very careful with this option.
As a prerequisite to installing Apache, we need to install the Visual C++ 2010 SP1 Redistributable Package x86. As a pre-requisite to installing PHP, we need to install the Visual C++ 2008 SP1 Redistributable Package x86.
I create a directory called opt\local in my C drive to house all of the stack pieces. I do this because it’s easier to find things on the command line when I need to and I like keeping development environment applications separate from Program Files. I also create a directory called sites to house my web files.
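By the end of these recipes, the layout should look something like this (MySQL has its own installer, so I’m leaving its location out of this sketch):
c:\opt\local\Apache24 (the web server)
c:\opt\local\php (PHP)
c:\sites (your web files)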
The last two prerequisites are more like common gotchas. The first is that while you are manipulating configuration and initialization files throughout this process, you may find the Windows default view settings are getting in your way. If this is the case, you can change them by going to Organize > Folder and search options > View tab.
This will bring up a dialog which allows you to set preferences for the folder you are currently viewing. You can select the option to “show hidden files” and uncheck the “hide file extensions” option, both of which make developing easier.
The other thing to know is that in our example, we will work with a Windows 7 installation – a 64-bit operating system. However, when we get to PHP, you’ll notice that their website does not provide a 64-bit installer. I have seen errors in the past when a 32-bit PHP installer and a 64-bit Apache version were both used, so we will install the 32-bit versions for both components.
Ok, I think we’re all set. Let’s install Apache.
We want to download the .zip file for the latest version. For Windows binaries, I use apachelounge, which builds Windows installer files. For this example, we’ll download httpd-2.4.4-win32.zip to the Desktop of our Windows machine.
Next, we want to extract the files into our chosen location for the Apache directory, e.g. c:\opt\local\Apache24. You can accomplish this a variety of ways, but if you have WinZip, you can follow these steps:
- Copy the .zip folder to c:\opt\local
- Right-click and select “Extract all files”.
- Open the extracted folder, right-click on the Apache24 folder and select Cut.
- Go back up one directory and right-click to Paste the Apache24 folder, so that it now resides inside c:\opt\local.
This extraction “installs” Apache; there is no installer to run, but we will need to configure a few things.
We want to open httpd.conf: this file contains all of the configuration settings for our web server. If you followed the directions above, you can find the file in C:\opt\local\Apache24\conf\httpd.conf – we want to open it with our code editor and make the following changes:
1. Find the ServerRoot line (in my copy, it’s line 37). In the apachelounge build, it should look something like this:
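ServerRoot "c:/Apache24"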
Change it to match the directory where you installed Apache. In my case, it reads:
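ServerRoot "c:/opt/local/Apache24"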
You might notice that our slashes slant in the opposite direction from the usual Windows syntax. In Windows, a backslash ( \ ) delineates different directories, but in Unix, it’s a forward slash ( / ). Apache reads the configuration file in the Unix manner, even though we are working in Windows. If you get a “directory not found” error at any point, check your slashes.
2. At Line 58, we are going to change the listen command to just listen to our machine. Change
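Listen 80
to
Listen localhost:80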
3. There are about 100 lines, roughly lines 72-172, that all start with LoadModule. Some of these are commented out (they begin with a “#”). Later on, you may need to uncomment some of these for a certain web program to work, like SSL. For now, though, we’ll leave these as is.
4. Next, we want to change our DocumentRoot and the corresponding Directory directive to point at the directory that holds our web files. In the apachelounge build, these lines (beginning on line 237 in my copy) read:
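DocumentRoot "c:/Apache24/htdocs"
<Directory "c:/Apache24/htdocs">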
Later, we’ll want to change this to our “sites” folder we created earlier. For now, we’re just going to change this to the Apache installation directory for testing. So, it should read:
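DocumentRoot "c:/opt/local/Apache24/htdocs"
<Directory "c:/opt/local/Apache24/htdocs">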
Save the httpd.conf file. (In two of our test cases, after saving the file, closing and re-opening, the file appeared unchanged. If you are having issues, try doing Save As and save the file to your desktop, then drag it into c:\opt\local\Apache24).
Next, we want to test our Apache configuration. To do this, we open the command line. In Windows, you can do this by going to the Start Menu, and typing
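cmd.exe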
in the Search box. Then, press Enter. Once you’re in the command prompt, type in
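cd c:\opt\local\Apache24\bin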
(Note that the first part of this path is the install directory I used above. If you chose a different directory to install Apache, use that instead.) Next, we run the web server executable with the “-t” flag, which tests the configuration without actually starting the server. Type in:
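httpd.exe -t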
If you get a Syntax OK, you’re golden.
Otherwise, try to resolve any errors based on the error message. If the error message does not make any sense after checking your code for typos, go back and make sure that your changes to httpd.conf did actually save.
Once you get Syntax OK, type in:
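httpd.exe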
This will start the web server. You should not get a message regarding the firewall if you changed the listen command to localhost:80. But, if you do, decide what traffic you want to allow to your machine. I would click “Cancel” instead of “Allow Access”, because I don’t want to allow outside access.
Now the server is running. You’ll notice that you no longer have a C:\> prompt in the Command window. To test our server, we open a browser and type in http://localhost – you should get a website with text that reads “It works!”
Instead of starting up the server this way every time, we want to install it as a Windows service. So, let’s go back to our command prompt and press Ctrl+C to stop the web server. You should now have a prompt again.
To install Apache as a service, type:
httpd.exe -k install
You will most likely get an error that looks like this:
We need to run our command prompt as an administrator. So, let’s close the cmd.exe window and go back to our Start menu. Go to Start > All Programs > Accessories and right-click on Command Prompt. Select “Run As Administrator”.
(Note: If for some reason you do not have the ability to right-click, there’s a “How-To Geek” post with a great tip. Go to the Start menu and in the Run box, type in cmd.exe as we did before, but instead of hitting Enter, hit Ctrl+Shift+Enter. This does the same thing as the right-click step above.)
Click on Yes at the prompt that comes up, allowing the program to make changes. You’ll notice that instead of starting in our user directory, we are starting in Windows\system32. So, let’s go back to our bin directory with:
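cd c:\opt\local\Apache24\bin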
Now, we can run our
httpd.exe -k install
command again, and it should succeed. To start the service, we want to open our Services dialog, located in the Control Panel (Start Menu > Control Panel) in the Administrative Tools section. If you display your Control Panel by category (the default), click on System & Security, then Administrative Tools. If you display your Control Panel by small icons, Administrative Tools should be listed.
Double click on Services.
Find Apache2.4 in the list and select it. Verify that the Startup Type is set to Automatic if you want the Service to start automatically (if you would prefer that the Service only start at certain times, change this to Manual, but remember that you have to come back in here to start it). With Apache2.4 selected, click on Start Service in the left hand column.
Go back to the browser and hit Refresh to verify that everything is still working. It should still say “It Works!” And with that affirmation, let’s move to PHP.
(Before installing PHP, make sure you have installed the Visual C++ 2008 Redistributable Package from the prerequisite section.)
For our purposes, we want to use the Thread Safe .zip from the PHP Downloads page. Because we are running PHP under Apache, but not as a CGI, we use the thread safe version. (For more on thread safe vs. non-thread safe, see this Wikipedia entry or this stackoverflow post)
Once you’ve downloaded the .zip file, extract it to your \opt\local directory. Then, rename the folder to simply “php”. As with Apache24, extracting the files does the “install”; we just need to configure everything to run properly. Go to the directory where you installed PHP (in my case, c:\opt\local\php) and find php.ini-development.
Make a copy of the file and rename the copy php.ini (this is one of those places where you may want to set the Folder and search options if you’re having problems).
Open the file in Notepad++ (or your code editor of choice). Note that here, comments are preceded by a “;” (without quotes) and the directories are delineated using the standard Windows format, with a “\”. Most of the document is commented out, and it includes a large section on recommended settings for production and development, so if you’re not sure of the changes to make, you can check in the file itself (in addition to the PHP documentation). For this tutorial, we want to make the following changes:
1. On line 708, uncomment (remove the semi-colon from) the include_path line under "Windows" and make sure it matches the directory where you installed PHP (if the line numbers have changed, just search for Paths and Directories); there is a sketch of the result after this list.
2. Beginning on line 868, in the Windows Extensions section, uncomment (remove the semi-colon from) the following lines (they are not right next to each other; they sit in a longer list, but we want these three uncommented):
extension=php_mysql.dll
extension=php_mysqli.dll
extension=php_pdo_mysql.dll
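For reference, here is a sketch of what the uncommented include_path line from the first step might look like; the exact value is my assumption based on our c:\opt\local\php install directory, so adjust it to match yours:
include_path = ".;c:\opt\local\php"
Depending on your copy of php.ini-development, you may also need to uncomment the extension_dir = "ext" line in the "On windows" section so that PHP can find the extension .dll files we just enabled.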
Save the php.ini file.
You may want to double-check that the .dll files we enabled above are actually in the c:\opt\local\php\ext folder before trying to run PHP, because you will see an error if they are not there.
Next, we want to add the php directory to our path environment variables. This section is a little tricky; be *extremely* careful when you are making changes to system settings like this.
First, we navigate to the Environment variables by opening the Control Panel and going to System & Security > System > Advanced System Settings > Environment Variables.
In the bottom scroll box, scroll until you find “Path”, click on it, then click on Edit.
Append the following to the end of the Variable Value list (the semi-colon ends the previous item, then we add our installation path):
;c:\opt\local\php
Click OK and continue to do so until you are out of the dialog.
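To verify that the change took effect, open a new command prompt (windows that were already open will not pick up the new Path) and type:
php -v
You should see PHP's version information rather than a message that the command is not recognized.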
Lastly, we need to add some lines to the httpd.conf so that Apache will play nice with PHP. The httpd.conf file may still be open in your text editor. If not, go back to c:\opt\local\Apache24\conf and open it. At the bottom of this file, we need to add the following:
LoadModule php5_module "c:/opt/local/php/php5apache2_4.dll"
AddHandler application/x-httpd-php .php
PHPIniDir "c:/opt/local/php"
This tells Apache where to find PHP and loads the module needed to work with it. (Note: php5apache2_4.dll must be in the directory you specified in the LoadModule statement. It should have been extracted with the other files; if it is not there, you can download it from the apachelounge additional downloads page.)
While we’re in this file, we also want to tell Apache to look for an index.php file. We’ll need this for testing, but also for some content management systems. To do this, we change the DirectoryIndex directive on line 271. It should look like
<IfModule dir_module> DirectoryIndex index.html
We want to change the DirectoryIndex line so it reads
DirectoryIndex index.php index.html
Before we restart Apache to pick up these changes, we’re going to do one last thing. To test our php, we want to create a file called index.php with the following text inside:
<?php phpinfo(); ?>
Save it to c:\opt\local\Apache24\htdocs.
Restart Apache by going back to the Services dialog. (If you closed it, it’s Control Panel > System & Security > Administrative Tools > Services). Click on Apache2.4 and then click on Restart.
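(Alternatively, an elevated command prompt in c:\opt\local\Apache24\bin can restart the service with httpd.exe -k restart.)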
If you get an error, you can always go back to the command line, navigate to c:\opt\local\Apache24\bin and run httpd.exe -t again. This will check your configuration syntax, which is the most likely source of the problem. (This page is also helpful in troubleshooting PHP 5.4 and Apache if you are having issues.)
Open a browser window and type in http://localhost – instead of "It works!" you should see a list of configuration settings for PHP. (In one of our test cases, the tester needed to close Internet Explorer and re-open it for this part to work.)
Now, we move to the database.
To install MySQL, we can follow the directions at the MySQL site. For the purposes of this tutorial, we’re going to use the most recent version as of this writing, which is 5.6.11. To download the files we need, we go to the Community Server download page.
Again, we can absolutely use the installer here, which is the first option. The MySQL installers will prompt you through the setup, and this video does a great job of walking through the process.
But, since the goal of this tutorial is to see all the parts, I'm going to run through the setup manually. First, we download the .zip archive. Choose the .zip file which matches your operating system; I will choose 64-bit (there's no agreement issue here: unlike PHP, which has to match Apache, MySQL runs as a separate server, so its architecture does not need to match the rest of the stack). Extract the files to c:\opt\local\mysql. We do this in the same way we did the Apache24 files above.
Since we’re installing to our opt\local drive, we need to tell MySQL to look there for the program files and the data. We do this by setting up an option file. We can modify a file provided for us called my-default.ini. Change the name to my.ini and open it with your code editor.
In the MySQL config files, the Unix directory separator "/" is used again, and the comments are again preceded by a "#". So, to set our locations, we want to remove the # from the beginning of the basedir and datadir lines and point them at our installation directory, like so:
basedir = c:/opt/local/mysql
datadir = c:/opt/local/mysql/data
Then save my.ini.
As with Apache, we're going to start MySQL for the first time from the command line, to make sure everything is working ok. If you still have your command prompt open, navigate back to it. If not, open a new one, remembering to select the Run As Administrator option.
From your command prompt, type in
cd \opt\local\mysql\bin
mysqld --console
You should see a bunch of statements scroll by as the first database is created. You may also get a firewall popup. I hit Cancel here, so as not to allow access from outside my computer to the MySQL databases.
Ctrl+C to stop the server. Now, let's install MySQL as a service. To do that, we type the command:
mysqld --install
Next, we want to start the MySQL service, so we need to go back to Services. You may have to Refresh the list in order to see the MySQL service. You can do this by going to Action > Refresh in the menu.
Then, we start the service by clicking on MySQL and clicking Start Service on the left-hand side.
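(From an elevated command prompt, net start MySQL does the same thing, assuming the default service name of MySQL.)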
One thing about installing MySQL in this manner is that the initial root user for the database will not have a password. To see this, go back to your command line. Type in
mysql -u root
This will open the command line MySQL client and allow you to run queries. The -u flag sets the user, in this case, root. Notice you are not prompted for a password. Type in:
select user, host, password from mysql.user;
This command should show all the created user accounts, the hosts from which they can log in, and their passwords. The semi-colon at the end is crucial – it signifies the end of a SQL command.
Notice in the output that the password column is blank. MySQL provides documentation on how to fix this on the Securing the Initial Accounts documentation page, but we’ll also step through it here. We want to use the SET PASSWORD command to set the password for all of the root accounts.
Substituting the password you want for newpwd (keep the single quotes in the command), type in
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('newpwd');
SET PASSWORD FOR 'root'@'127.0.0.1' = PASSWORD('newpwd');
SET PASSWORD FOR 'root'@'::1' = PASSWORD('newpwd');
You should get a confirmation after every command. Now, if you run the select user command from above, you'll see that there are values in the password field: hashed versions of what you specified.
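To confirm the new password works, exit the client and log back in, this time with the -p flag so that you are prompted for the password:
exit
mysql -u root -p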
A note about security: I am not a security expert, and for a development stack we are usually less concerned with security. But it is generally not a good idea to type plain text passwords into the command line, because if the commands are being logged, you've just saved your password in a plain text file that someone could access. In this case, we have not turned on any logging, and SET PASSWORD should not store the password in plain text. But, this is something to keep in mind.
As before with Mac OS X, we could stop here. But then you would have to administer the MySQL databases using the command line. So we’ll install phpMyAdmin to make it a little easier and test to see how our web server works with our sites folder.
Download the phpmyadmin.zip file from the phpMyAdmin page to the sites folder we created all the way at the beginning. Note that this does *not* go into the opt folder.
Extract the files to a folder called phpmyadmin using the same methods we’ve used previously.
Since we now want to use our sites folder instead of the default htdocs folder, we will need to change the DocumentRoot and Directory directives on lines 237 and 238 of our Apache config file. So, open httpd.conf again.
We want to change the DocumentRoot to sites, and we’re going to set up the phpMyAdmin directory.
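As a sketch, assuming your sites folder is at c:\sites (this path is my assumption; substitute wherever you created the folder), the changed lines would read something like:
DocumentRoot "c:/sites"
<Directory "c:/sites">
The phpmyadmin folder we extracted lives inside sites, so http://localhost/phpmyadmin will map to it once the document root points there.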
Save the httpd.conf file. Go back to Services and Restart the Apache2.4 service.
We will complete the configuration through the browser. First, open the browser and try to navigate to http://localhost again. You should get a 403 error.
Instead, navigate to http://localhost/phpmyadmin/setup
Click on the New Server button to set up a connection to our MySQL databases. Double check that under the Basic Settings tab, the Server Name is set to localhost, and then click on Authentication. Verify that the type is “cookie”.
At the bottom of the page, click on Save. Now, change the address in the browser to http://localhost/phpmyadmin and log in with the root user, using the password you set above.
And that’s it. Your Windows AMP stack should be ready to go.
In the next post, we’ll talk about how to install a content management system like WordPress or Drupal on top of the base stack. Questions, comments or other recipes you would like to see? Let us know in the comments.