One Shocking Tool Plus Two Simple Ideas That Will Forever Change How You Share Links

The Click Economy

The economy of the web runs on clicks and page views. The way web sites turn traffic into profit is complex, but I think we can get away with a broad gloss of the link economy as long as we acknowledge that greater underlying complexity exists. Basically speaking, traffic (measured in clicks, views, unique visitors, length of visit, etc.) leads to ad revenue. Web sites benefit when viewers click on links to their pages and when these viewers see and click on ads. The scale of the click economy is difficult to visualize. Direct benefits from a single click or page view are minuscule. Profits tend to be nonexistent or trivial on any scale smaller than unbelievably massive. This has the effect of making individual clicks relatively meaningless, but systems that can magnify and aggregate clicks are extremely valuable.

What this means for the individual person on the web is that unless we are Arianna Huffington, Sheryl Sandberg, Larry Page, or Mark Zuckerberg, we probably aren’t going to get rich off of clicks. However, we do have impact, and our online reputations can significantly influence which articles and posts go viral. If we understand how the click economy works, we can use our reputation and influence responsibly. If we are linking to content we think is good and virtuous, then there is no problem with spreading “link juice” indiscriminately. However, if we want to draw someone’s attention to content we object to, we can take steps to link responsibly and not have our outrage fuel profits for the content’s author.1

We’ve seen that links benefit a site’s owners in two ways: directly through ad revenues and indirectly through “link juice,” the positive effect that inbound links have on search engine ranking and social network trend lists. If our goal is to link without benefiting the owner of the page we are linking to, we will need a separate technique for each of the two ways a web site benefits from links.

For two excellent pieces on the click economy, see Robinson Meyer’s “Why Are Upworthy Headlines Suddenly Everywhere?”2 in The Atlantic and Clay Johnson’s book The Information Diet, especially “The New Journalists” section of chapter three.3

PageRank

PageRank is the name of a key algorithm Google uses to rank the web pages it returns in search results.4 It counts inbound links to a page and keeps track of the relative importance of the sites the links come from. A site’s PageRank score is a significant part of how Google decides to rank search results.5 Search engines like Google recognize that there would be a massive problem if all inbound links were counted as votes for a site’s quality.6 Without some mechanism to communicate “I’m linking to this site as an example of awful thinking,” there really would be no such thing as bad publicity, and a website with thousands of complaints and zero positive reviews would shoot to the top of search engine rankings. For example, every time a librarian used martinlutherking.org (a malicious propaganda site run by the white supremacist group Stormfront) as an example in a lesson about web site evaluation, the page would rise in Google’s ranking and more people would find it in the course of natural searches for information on Dr. King. When linking to malicious content, we can avoid increasing its PageRank score by adding the rel="nofollow" attribute to the anchor tag. A normal link is written like this:

<a href="http://www.horriblesite.com/horriblecontent/" target="_blank">This is a horrible page.</a>

This link would add the referring page’s reputation or “link juice” to the horrible site’s PageRank. To fix that, we need to add the rel="nofollow" attribute.

<a href="http://www.horriblesite.com/horriblecontent/" target="_blank" rel="nofollow">This is a horrible page.</a>

This addition communicates to the search engine that the link should not count as a vote for the site’s value or reputation. Of course, not all linking takes place on web pages anymore. What happens if we want to share this link on Facebook or Twitter? Both Facebook and Twitter automatically add rel="nofollow" to their links (you can see this if you view the page source), but we should not rely on that alone. Social networks aggregate links and provide their own link juice, similarly to search engines. When sharing links on social networks, we’ll want to employ a tool that keeps control of the link’s power in our own hands. donotlink.com is a very interesting tool for this purpose.

donotlink.com

donotlink.com is a service that creates safe links that don’t pass on any reputation or link juice. It is ideal for sharing links to sites we object to. On one level, it works similarly to a URL shortener like bit.ly or tinyurl.com: it creates a new URL customized for sharing on social networks. On deeper levels, it does some very clever stuff to make sure no link juice dribbles to the site being linked. They explain what, why, and how very well on their site. Basically speaking, donotlink.com passes the link through a new URL that uses JavaScript, a robots.txt file, and the nofollow and noindex attributes both to ask search engines and social networks not to apply link juice and to make it structurally difficult to ignore these requests.7 This makes donotlink.com’s link masking service an excellent solution to the problem of web sites indirectly profiting from negative attention.
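
To make the mechanics concrete, here is a rough, hypothetical sketch of how an intermediate “masking” page can withhold link juice. It illustrates the general technique described above, not donotlink.com’s actual markup, and it reuses the example URL from earlier:

<!-- Hypothetical masking page, not donotlink.com's actual code -->
<html>
  <head>
    <!-- Ask search engines not to index or follow anything on this intermediate page -->
    <meta name="robots" content="noindex, nofollow">
  </head>
  <body>
    <!-- The visible link carries rel="nofollow" as a second layer of protection -->
    <a href="http://www.horriblesite.com/horriblecontent/" rel="nofollow">Continue to the page</a>
    <!-- The redirect itself happens in JavaScript, which crawlers are less likely
         to follow than a plain HTML link -->
    <script>window.location = "http://www.horriblesite.com/horriblecontent/";</script>
  </body>
</html>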

Page Views & Traffic

All of the techniques listed above will deny a linked site the indirect benefits of link juice. They will not, however, deny the site the direct benefits from increased traffic or from views and clicks on the page’s advertisements. There are ways to share content without generating any traffic or advertising revenues, but these involve capturing the content and posting it somewhere else, so they raise ethical questions about respect for intellectual property. I suggest using them only with both caution and intentionality. A quick and easy way to direct traffic to content without benefiting the hosting site is to use a link to Google’s cache of the page. If you can find a page in a Google search, clicking the green arrow next to the URL (see image) will give the option of viewing the cached page. Then just copy the full URL and share that link instead of the original. Viewers can read the text without giving the content page views. Not all pages are visible on Google, so the Wayback Machine from the Internet Archive is a great alternative. The Wayback Machine provides access to archived versions of web pages and also has a mechanism (see the image on the right) for adding new pages to the archive.
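
For readers who want to build these links by hand, the URL patterns look roughly like the following (using a hypothetical example.com/page as the target; the exact forms may change over time):

https://webcache.googleusercontent.com/search?q=cache:example.com/page   (Google's cached copy of the page)
https://web.archive.org/web/*/http://example.com/page                    (list of Wayback Machine captures of the page)
https://web.archive.org/save/http://example.com/page                     (ask the Wayback Machine to archive the page now)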


Screengrab of Google Cache


Caching a site at the wayback machine

Both of these solutions rely on external hosts and if the owner of the content is serious about erasing a page, there are processes for removing content from both Google’s cache and the Wayback Machine archives. To be certain of archiving content, the simplest solution is to capture a screenshot and share the image file. This gives you control over the image, but may be unwieldy for larger documents. In these cases saving as a PDF may be a useful workaround. (Personally, I prefer to use the Clearly browser plugin with Evernote, but I have a paid Evernote account and am already invested in the Evernote infrastructure.)

Summing up

In conclusion, there are a number of steps we can take when we want to be responsible with how we distribute link juice. If we want to share information without donating our online reputation to the information’s owner, we can use donotlink.com to generate a link that does not improve their search engine ranking. If we want to go a step further, we can link to a cached version of the page or share a screenshot.

Notes

  1. Using outrageous or objectionable content to generate web traffic is a black-hat SEO technique known as “evil hooks.” There is a lot of profit in “You won’t believe what this person said!” links.
  2. http://www.theatlantic.com/technology/archive/2013/12/why-are-upworthy-headlines-suddenly-everywhere/282048/
  3. The Information Diet, pages 35-41
  4. https://en.wikipedia.org/wiki/PageRank
  5. Matt Cutts, “How Search Works” video.
  6. I’ve used this article http://www.nytimes.com/2010/11/28/business/28borker.html to explain this concept to my students. It is also referenced by donotlink.com in their documentation.
  7. JavaScript is slightly less transparent to search engines and social networks than HTML; robots.txt is a file on a web server that tells search engine bots which pages to crawl (it works more like a no-trespassing sign than a locked gate); noindex tells bots not to add the linked page to their indexes.

Higher ‘Professional’ Ed, Lifelong Learning to Stay Employed, Quantified Self, and Libraries

The 2014 Horizon Report is mostly a report on emerging technologies. Many academic librarians carefully read its Higher Ed edition, issued every year, to learn about the upcoming technology trends. But this year’s Horizon Report Higher Ed edition was interesting to me more for how the current state of higher education is reflected in the report than for the technologies on the near-term (one-to-five-year) horizon of adoption. Let’s take a look.

A. Higher Ed or Higher Professional Ed?

To me, the most useful section of this year’s Horizon Report was ‘Wicked Challenges.’ The significant backdrop behind the first challenge “Expanding Access” is the fact that the knowledge economy is making higher education more and more closely and directly serve the needs of the labor market. The report says, “a postsecondary education is becoming less of an option and more of an economic imperative. Universities that were once bastions for the elite need to re-examine their trajectories in light of these issues of access, and the concept of a credit-based degree is currently in question.” (p.30)

Many of today’s students enter colleges and universities with a clear goal, i.e. obtaining a competitive edge and better earning potential in the labor market. The result, already familiar to many of us, is grade and degree inflation and the emergence of higher ed institutions that pursue profit over education itself. When the acquisition of skills takes precedence over intellectual inquiry for its own sake, higher education comes to resemble higher professional education or intensive vocational training. As the economy almost forces people to take up lifelong learning simply to stay employed, the friction between the traditional goal of higher education – intellectual pursuit for its own sake – and the changing expectation of higher education – a creative, adaptable, and flexible workforce – will only become more prominent.

Naturally, this socioeconomic background behind the expansion of postsecondary education raises the question of where its value lies. This is the second wicked challenge listed in the report, i.e. “Keeping Education Relevant.” The report says, “As online learning and free educational content become more pervasive, institutional stakeholders must address the question of what universities can provide that other approaches cannot, and rethink the value of higher education from a student’s perspective.” (p.32)

B. Lifelong Learning to Stay Employed

Today’s economy and labor market strongly prefer employees who can be hired, retooled, or let go at the same pace as changes in technology, as technology becomes one of the greatest driving forces of the economy. Workers are expected to enter the job market with more complex skills than in the past, to adjust quickly as the important skills at workplaces change, and increasingly to take the role of a creator/producer/entrepreneur in their thinking and work practices. Credit-based degree programs fall short in this regard. It is no surprise that the report selected “Agile Approaches to Change” and “Shift from Students as Consumers to Students as Creators” as its long-range and mid-range key trends.

A strong focus on creativity, productivity, entrepreneurship, and lifelong learning, however, puts a heavier burden on both sides of education, i.e. instructors and students (full-time, part-time, and professional). While positive in emphasizing students’ active learning, the Flipped Classroom model selected as one of the key trends in the Horizon report often means additional work for instructors. In this model, instructors not only have to prepare the study materials for students to go over before the class, such as lecture videos, but also need to plan active learning activities for students during the class time. The Flipped Classroom model also assumes that students should be able to invest enough time outside the classroom to study.

The unfortunate side effect of this is that those who cannot afford to do so – for example, those who have to work multiple jobs or who have many family obligations – will suffer and fall behind. Today’s students and workers are now being asked to demonstrate their competencies with what they can produce, beyond simply presenting the credit hours that they spent in the classroom. Probably as a result of this, the clear demarcation between work, learning, and personal life seems to be disappearing. “The E-Learning Predictions for 2014 Report” from EdTech Europe predicts that ‘Learning Record Stores’, which track, record, and quantify an individual’s experiences and progress in both formal and informal learning, will emerge in step with the need for continuous learning required by today’s job market. EdTech Europe also points out that learning is now being embedded in daily tasks and that we will see a significant increase in the availability and use of casual and informal learning apps both in education and in the workplace.

C. Quantified Self and Learning Analytics

Among the six emerging technologies in the 2014 Horizon Report Higher Education edition, ‘Quantified Self’ is by far the most interesting new trend. (Other technologies should be pretty familiar to those who have been following the Horizon Report every year, except maybe the 4D printing mentioned in the 3D printing section. If you are looking for the emerging technologies that are on a farther horizon of adoption, check out this article from the World Economic Forum’s Global Agenda Council on Emerging Technologies, which lists technologies such as screenless display and brain-computer interfaces.)

According to the report, “Quantified Self describes the phenomenon of consumers being able to closely track data that is relevant to their daily activities through the use of technology.” (ACRL TechConnect has covered personal data monitoring and action analytics previously.) Quantified Self is enabled by wearable technology devices, such as Fitbit or Google Glass, and by the Mobile Web. Wearable technology devices automatically collect personal data. Fitbit, for example, keeps track of one’s sleep patterns, steps taken, and calories burned. And the Mobile Web is the platform that can store and present such personal data directly transferred from those devices. Through these devices and the resulting personal data, we get to observe our own behavior in a much more extensive and detailed manner than ever before. Instead of deciding which part of our life to keep a record of, we can now let these devices collect almost all types of data about ourselves and then see which data is of any use to us and whether any pattern emerges that we can perhaps utilize for the purpose of self-improvement.

Quantified Self is a notable trend not because it involves an unprecedented technology but because it gives us a glimpse of what our daily lives will be like in the near future, in which many of the emerging technologies that we are just getting used to right now – mobile, big data, wearable technology – will come together in full bloom. ‘Learning Analytics,’ which the Horizon Report calls “the educational application of ‘big data’” (p.38) and which can be thought of as the application of Quantified Self to education, has already been making significant progress in higher education. By collecting and analyzing data about student behavior in online courses, learning analytics aims at improving student engagement, providing a more personalized learning experience, detecting learning issues, and determining the behavior variables that are significant indicators of student performance.

While privacy is a natural concern for Quantified Self, it is worth noting that we ourselves often willingly participate in personal data monitoring through gamified self-tracking apps, a kind of monitoring that could be offensive in other contexts. In her article “Gamifying the Quantified Self,” Jennifer Whitson writes:

Gamified self-tracking and participatory surveillance applications are seen and embraced as play because they are entered into freely, injecting the spirit of play into otherwise monotonous activities. These gamified self-improvement apps evoke a specific agency—that of an active subject choosing to expose and disclose their otherwise secret selves, selves that can only be made penetrable via the datastreams and algorithms which pin down and make this otherwise unreachable interiority amenable to being operated on and consciously manipulated by the user and shared with others. The fact that these tools are consumer monitoring devices run by corporations that create neoliberal, responsibilized subjectivities become less salient to the user because of this freedom to quit the game at any time. These gamified applications are playthings that can be abandoned at whim, especially if they fail to pleasure, entertain and amuse. In contrast, the case of gamified workplaces exemplifies an entirely different problematic. (p.173; emphasis my own and not by the author)

If libraries and higher education institutions become active in monitoring and collecting students’ learning behavior, the success of an endeavor of that kind will depend on how well it creates and provides a sense of play to students to win their willing participation. It will also be important for such a learning analytics project to offer an opt-out at any time and to keep private data as confidential and anonymous as possible.

D. Back to Libraries

The changed format of this year’s Horizon Report, with its ‘Key Trends’ and ‘Significant Challenges,’ shows much more clearly the forces in play behind the emerging technologies to look out for in higher education. A big take-away from this report, I believe, is that in spite of the doubt about the unique value of higher education, demand will increase due to students’ need to obtain a competitive advantage in entering or re-entering the workforce. Another is that higher ed institutions will endeavor to create appropriate means and tools, such as competency-based assessments and badge systems, that let students acquire and demonstrate skills and experience beyond credit-hour-based degrees in a way that is appealing to future employers.

Considering that the pace of change in higher education tends to be slow, this can be an opportunity for academic libraries. Both instructors and students are under constant pressure to innovate and experiment in their teaching and learning processes. Instructors designing the Flipped Classroom model may require a studio where they can record and produce their lecture videos. Students may need to compile portfolios to demonstrate their knowledge and skills for job interviews. Returning adult students may need to acquire habitual lifelong learning practices with help from librarians. Local employers and students may mutually benefit from a place where certain co-projects can be tried. As a neutral player on the campus with tech-savvy librarians and knowledgeable staff, libraries can create a place where the most palpable student needs that are yet to be satisfied by individual academic departments or student services are directly addressed. Maker labs, gamified learning or self-tracking modules, and a competency dashboard are all such examples. From the emerging technology trends in higher ed, we see that learning activities in higher education and academic libraries will be more and more closely tied to the economic imperative of constant innovation.

Academic libraries may even go further and take up the role of leading the changes in higher education. In his blog post for Inside Higher Ed, Joshua Kim suggests exactly this and also nicely sums up the challenges that today’s higher education faces:

  • How do we increase postsecondary productivity while guarding against commodification?
  • How do we increase quality while increasing access?
  • How do we leverage technologies without sacrificing the human element essential for authentic learning?

How will academic libraries be able to lead the changes necessary for higher education to successfully meet these challenges? It is a question that will stay with academic libraries for many years to come.


An Incomplete Solution: Working with Drupal, Isotope, and JQuery BBQ Plugin

I write a lot of “how-to” posts.  This is fine and I actually think it’s fun…until I have a month like this past one, in which I worry that I have no business telling anyone “how-to” do anything. In which I have written the following comment multiple times:

//MF - this is a bad way to do this.

In the last four weeks I have struggled mightily to make changes to the JavaScript in a Drupal module (the aforementioned Views Isotope).  I have felt lost, I have felt jubilation, I have sworn profusely at my computer and in the end, I have modifications that mostly work.  They basically do what we need.  But the changes I made perform differently in different browsers, are likely not efficient, and I’m not sure the code is included in the right place.  Lots of it might be referred to as “hacky.”

However, the work needed to be done and I did not have time to be perfect.  This needed to mostly work to show that it could work and so I could hand off the concepts and algorithms to someone else for use in another of our sites.  Since I have never coded in JavaScript or jQuery, I needed to learn the related libraries on the fly and try to relate them back to previous coding experiences.  I needed to push to build the house and just hope that we can paint and put on doors and locks later on.

I decided to write this post because I think that maybe it is important to share the narrative of “how we struggle” alongside the “how-to”.  I’ll describe the original problem I needed to tackle, my work towards a solution, and the remaining issues that exist.  There are portions of the solution that work fine and I utilize those portions to illustrate the original requirements.  But, as you will see, there is an assortment of unfinished details.  At the end of the post, I’ll give you the link to my solution and you can judge for yourself.

The Original Problem

We want to implement a new gallery to highlight student work on our department’s main website.  My primary area of responsibility is a different website – the digital library – which is the archival home for student work.  Given my experience in working with these items and also with Views Isotope, the task of developing a “proof of concept” solution fell to me.  I needed to implement some of the technically trickier things in our proposed solution for the gallery pages in order to prove that the features are feasible for the main website.  I decided to use the digital library for this development because

  • the digital library already has the appropriate infrastructure in place
  • both the digital library and our main website are based in Drupal 7
  • the “proof of concept” solution, once complete, could remain in the digital library as a browse feature for our historical collections of student work

The full requirements for the final gallery are outside of the scope of this post, but our problem area is modifying the Views Isotope module to do some advanced things.

First, I need to take the existing Views Isotope module and modify it to use hash history on the same page as multiple filters.  Hash history with Isotope is implemented using the jQuery BBQ plugin, as is demonstrated on the Isotope website. This essentially means that when a user clicks on a filter, a hash value is added to the URL and becomes a line in the browser’s history.  This allows one to click the back button to view the grid with the previous filters applied.

Our specific use case is the following: when viewing galleries of student work from our school, a user can filter the list of works by several filter options, such as degree or discipline (e.g., Undergraduate Architecture).  These filter options are powered by term references in Drupal, as we saw in my earlier Isotope post.  If the user clicks on an individual work to see details, clicking the back button should return them to the already filtered list – they should not have to select the filters again.

Let’s take a look at how the end of the URL should progress.  If we start with:

.../student-work-archives

Then, if we select Architecture as our discipline, we should see the URL change to:

.../student-work-archives#filter=.architecture

Everything after the # is referred to as the hash.  If we then click on Undergraduate, the URL will change to:

.../student-work-archives#filter=.undergraduate.architecture

If we click our Back button, we should go back to

.../student-work-archives#filter=.architecture

With each move, the Isotope animation should fire and the items on the screen should only be visible if they contain term references to the vocabulary selected.

Further, the selected filter options should be marked as currently selected.  Items in one vocabulary which require an item in another vocabulary should be hidden and shown as appropriate.  For example, if a user selects Architecture, they should not be able to select PhD from the degree program list, because we do not offer a PhD degree in Architecture; therefore, there is no student work to show.  Here is an example of how the list of filters might look.

filter-list-comp

Once Architecture is selected, PhD is removed as an option.  Once Undergraduate is selected, our G1, G2 and G3 options are no longer available.

A good real-world example of the types of features we need can be seen at NPR’s Best Books of 2013 site.  The selected filter is marked, options that are no longer available are removed and the animation fires when the filter changes.  Further, when you click the Back button, you are taken back through your selections.

The Solution

It turns out that the jQuery BBQ plugin works quite nicely with Isotope, again as demonstrated on the Isotope website.  It also turns out that support for BBQ is included in Drupal core for the Overlay module.   So theoretically this should all play nicely together.

The existing views-isotope.js file handles filtering as the menu items are clicked.  The process is basically as follows:

  • When the document is ready
    • Identify the container of items we want to filter, as well as the class on the items and set up Isotope with those options.
    • Pre-select All in each set of filters.
    • Identify all of the possible filter options on the page.
    • If a filter item is clicked,
      • first, check to make sure it isn’t already selected, if so, escape
      • then remove the “selected” class from the currently selected option in this set
      • add the “selected” class to the current item
      • set up an “options” variable to hold the value of the selected filter(s)
      • check for other items in other filter sets with the selected class and add them all to the selected filter value
      • call Isotope with the selected filter value(s)

To add the filter to the URL we can use bbq.pushState, which will add “a ‘state’ into the browser history at the current position, setting location.hash and triggering any bound hashchange event callbacks (provided the new state is different than the previous state).”

$.bbq.pushState(options);

We then want to handle what’s in the hash when the browser back button is clicked, or when a user enters a URL with the hash value applied.  So we add a handler for the hashchange event mentioned above.  Instead of calling Isotope from the link click function, we call it from the hashchange handler.  Now our algorithm looks more like this, with the added items in bold (a simplified code sketch follows the list):

  • Include the misc/jquery.ba-bbq.js for BBQ (I have to do this explicitly because I don’t use Overlay)
  • When the document is ready
    • identify the container of items we want to filter, as well as the class on the items and set up Isotope with those options.
    • Pre-select All in each set of filters.
    • Identify all of the possible filter options on the page.
    • If a filter item is clicked,
      • first, check to make sure it isn’t already selected, if so, escape
      • then remove the “selected” class from the currently selected option in this set
      • add the “selected” class to the current item
      • set up an “options” variable to hold the value of the selected filter(s)
      • check for other items in other filter sets with the selected class and add them all to the selected filter value
      • push “options” to URL and trigger hashchange (don’t call Isotope yet)
    • If a hashchange event is detected
      • create new “hashOptions” object according to what’s in the hash, using the deparam.fragment function from jQuery BBQ
      • manipulate CSS classes such as “not-available” (i.e., if Architecture is selected, apply it to PhD) and “selected” based on what’s in “hashOptions”
      • call Isotope with “hashOptions” as the parameter
    • trigger hashchange event to pick up anything that’s in the URL when the page loads
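
Put together, the approach can be sketched roughly as follows.  This is a simplified, assumption-laden illustration rather than the modified views-isotope.js itself: it assumes the grid sits in a container with a hypothetical class .isotope-container, reuses the .isotope-element, .isotope-options, and data-option-value names that Views Isotope generates in its rendered markup, and assumes jquery.isotope.js and misc/jquery.ba-bbq.js are already loaded.

(function ($) {
  $(document).ready(function () {
    var $container = $('.isotope-container');
    $container.isotope({ itemSelector: '.isotope-element' });

    // Clicking a filter only updates the "selected" classes and the hash;
    // Isotope itself is not called here.
    $('.isotope-options a').click(function () {
      var $this = $(this);
      if ($this.hasClass('selected')) { return false; }
      $this.closest('ul').find('.selected').removeClass('selected');
      $this.addClass('selected');

      // Combine the selected value from every filter set,
      // e.g. ".undergraduate" + ".architecture".
      var filters = '';
      $('.isotope-options a.selected').each(function () {
        var value = $(this).attr('data-option-value');
        if (value && value !== '*') { filters += value; }
      });

      // Push the filter string into the hash; this fires hashchange below
      // (and adds a step to the browser history).
      $.bbq.pushState({ filter: filters });
      return false;
    });

    // hashchange is the only place Isotope actually runs, so the back button
    // and direct links that include a hash behave the same way as clicks.
    $(window).bind('hashchange', function () {
      var hashOptions = $.deparam.fragment();
      // ("selected" and "not-available" classes would also be re-applied here.)
      $container.isotope({ filter: hashOptions.filter || '*' });
    });

    // Apply whatever is already in the URL when the page first loads.
    $(window).trigger('hashchange');
  });
})(jQuery);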

I also updated any available pager links so that they link not just to the appropriate page, but also so that the filters are included in the link.  This is done by appending the hash value to the href attribute for each link with a class “pager”.
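
As a rough sketch of that step (assuming the same setup as above), the pager adjustment amounts to something like:

// Hypothetical sketch: carry the current hash over to each link with the class "pager".
$('a.pager').each(function () {
  this.href = this.href.split('#')[0] + window.location.hash;
});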

And it works.  Sort of…

The Unfinished Details

Part of the solution described above only works on Chrome and – believe it or not – Internet Explorer.  In all browsers, clicking the back button works just as described above, as long as one is still on the page with the list of works.  However, when linking directly to a page with the filters included (as we are doing with the pager) or hitting back from a page that does not have the hash present (say, after visiting an individual item), it does not work in Firefox or Safari.  I think this may have to do with the deparam.fragment function, because that appears to be where it gets stuck, but so far I can’t track it down.  I could read window.location.hash directly, but I think that’s a security issue (what’s to stop someone from injecting something malicious after the hash?).

Also, in order to make sure the classes are applied correctly, it feels like I do a lot of “remove it from everywhere, then add it back”.  For example, if I select Architecture, PhD is then hidden from the degree list by assigning the class “not-available”.  When a user clicks on City and Regional Planning or All, I need that PhD to appear again.  Unfortunately, the All filter is handled differently – it is only present in the hash if no other options on the page are selected.  So, I remove “not-available” from all filter lists on hashchange and then reassign based on what’s in the hash.  It seems like it would be more efficient just to change the one I need, but I can’t figure it out.  Or maybe I should alter the way All is handled completely – I don’t know.
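
A hedged sketch of that “remove it from everywhere, then add it back” step, run inside the hashchange handler, might look like this (the .phd and .architecture values are hypothetical stand-ins for the real term classes):

// hashOptions comes from $.deparam.fragment() in the hashchange handler.
// First clear "not-available" everywhere, then re-apply it based on the hash.
$('.isotope-options a').removeClass('not-available');
if (hashOptions.filter && hashOptions.filter.indexOf('.architecture') !== -1) {
  // We offer no PhD in Architecture, so hide that degree option.
  $('.isotope-options a[data-option-value=".phd"]').addClass('not-available');
}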

I also made these changes directly in the views-isotope.js, which is a big no-no.  What happens when the module is updated?  But, I have never written a custom module for Drupal which included JavaScript, so I’m not even sure how to do it in a way that makes sense.  I have only made changes to this one file.  Can I just override it somewhere?  I’m not sure.  Until I figure it out, we have a backup file and lots and lots of comments.

All of these details are symptoms of learning on the fly.  I studied computer science, so I understand conceptual things like how to loop through an array or return a value from a function, but that was more than ten years ago and my practical coding experience since then has not involved writing jQuery or JavaScript.  Much of what I built I pieced together from the Drupal documentation, the Views Isotope documentation and issue queue, the Isotope documentation, the jQuery BBQ documentation and numerous visits to the w3schools.com pages on jQuery and JavaScript.  I also frequently landed on the jQuery API Documentation page.

It is hard to have confidence in a solution when building while learning.  When I run into a snag, I have to consider whether or not the problem is the entire approach, as opposed to a syntax error or a limitation of the library.  Frequently, I find an answer to an issue I’m having, but have to look up something from the answer in order to understand it.  I worry that the code contains rookie mistakes – or even intermediate mistakes – which will bite us later, but it is difficult to do an exhaustive analysis of all the available resources.  Coding elegantly is an art which requires more than a basic understanding of how the pieces play together.

Inelegant code, however, can still help make progress.  To see the progress I have made, you can visit https://ksamedia.osu.edu/student-work-archives and play with the filters. This solution is good because it proves we can develop our features using Isotope, BBQ and Views Isotope.  The trick now is figuring out how to paint and put locks and doors on our newly built house, or possibly move a wall or two.


#libtechgender: A Post in Two Parts

Conversations about gender relations, bias, and appropriate behavior have bubbled up all over the technology sector recently. We have seen conferences adopt codes of conduct that strive to create welcoming atmospheres. We have also seen cases of bias and harassment, cases that may once have been tolerated or ignored, now being identified and condemned. These conversations, like gender itself, are not simple or binary but being able to listen respectfully and talk honestly about uncomfortable topics offers hope that positive change is possible.

On October 28th, Sarah Houghton, the director of the San Rafael Public Library, moderated a panel on gender in library technology at the Internet Librarian conference. In today’s post I’d like to share my small contributions to the panel discussion that day and also to share how my understanding of the issues changed after the discussion there. It is my hope that more conversations—more talking and more listening—about gender issues in library technology will be sparked from this start.

Part I: Internet Librarian Panel on Gender

Our panel’s intent was to invite librarians into a public conversation about gender issues. In the Internet Librarian program our invitation read:

Join us for a lively panel and audience discussion about the challenges of gender differences in technology librarianship. The topics of fairness and bias with both genders have appeared in articles, blogs, etc and this panel of women and men who work in libraries and gender studies briefly share personal experiences, then engage the audience about experiences and how best to increase understanding between the genders specifically in the area of technology work in librarianship. 1
Panelists: Sarah Houghton, Ryan Claringbole, Emily Clasper, Kate Kosturski, Lisa Rabey, John Bultena, Tatum Lindsay, and Nicholas Schiller

My invitation to participate on the panel stemmed from blog posts I wrote about how online conversations about gender issues can go off the rails and become disasters. I used my allotted time to share some simple suggestions I developed observing these conversations. Coming from my personal (white cis straight male) perspective, I paid attention to things that I and my male colleagues do and say that result in unintended offense, silencing, and anger in our female colleagues. By reverse engineering these conversational disasters, I attempted to learn from unfortunate mistakes and build better conversations. Assuming honest good intentions, following these suggestions can help us avoid contention and build more empathy and trust.

  1. Listen generously. Context and perspective are vital to these discussions. If we’re actively cultivating diverse perspectives then we are inviting ideas that conflict with our assumptions. It’s more effective to assume these ideas come from unfamiliar but valid contexts than to assume they are automatically unreasonable. By deferring judgement until after new ideas have been assimilated and understood we can avoid silencing voices that we need to hear.
  2. Defensive responses can be more harmful than offensive responses. No one likes to feel called on the carpet, but the instinctive responses we can give when we feel blamed or accused can be worse than simply giving offense. Defensive denials can lead to others feeling silenced, which is much more damaging and divisive than simple disagreement. It can be the difference between communicating  “you and I disagree on this matter” and communicating “you are wrong and don’t get a voice in this conversation.” That kind of silencing and exclusion can be worse than simply giving offense.
  3. It is okay to disagree or to be wrong. Conversations about gender are full of fear. People are afraid to speak up for fear of reprisal. People are afraid to say the wrong thing and be revealed as a secret misogynist. People are afraid. The good news is that conversations where all parties feel welcome, respected, and listened to can be healing. Because context and perspective matter so much in how we address issues, once we accept the contexts and perspectives of others, we are more likely to receive acceptance of our own perspectives and contexts. Given an environment of mutual respect and inclusion, we don’t need to be afraid of holding unpopular views. These are complex issues and once trust is established, complex positions are acceptable.

This is what I presented at the panel session and I still stand behind these suggestions. They can be useful tools for building better conversations between people with good intentions. Specifically, they can help men in our field avoid all-too-common barriers to productive conversation.

That day I listened and learned a lot from the audience and from my fellow panelists. I shifted my priorities. I still think cultivating better conversations is an important goal. I still want to learn how to be a better listener and colleague.  I think these are skills that don’t just happen, but need to be intentionally cultivated. That said, I came in to the panel believing that the most important gender related issue in library technology was finding ways for well-intentioned colleagues to communicate effectively about an uncomfortable problem. Listening to my colleagues tell their stories, I learned that there are more direct and pressing gender issues in libraries.

Part II: After the Panel

As I listened to my fellow panelists tell their stories and then as I listened to people in the audience share their experiences, no one else seemed particularly concerned about well-intentioned people having misunderstandings or about breakdowns in communication. Instead, they related a series of harrowing personal experiences where men (and women, but mostly men) were directly harassing, intentionally abusive, and strategically cruel in ways that are having a very large impact on the daily work, career paths, and the quality of life of my female colleagues. I assumed that since this kind of harassment clearly violates standard HR policies that the problem is adequately addressed by existing administrative policies. That assumption is incorrect.

It is easy to ignore what we don’t see: I don’t see harassment taking place in libraries, and I don’t often hear it discussed, so it has been easy to underestimate its prevalence and the impact it has on many of my colleagues. Listening to librarians describe their experiences at the panel changed that.

Then, after the conference one evening, a friend of mine was harassed on the street and I had another assumption challenged. It happened quickly, but a stranger on the street harassed my friend while I watched in stunned passivity. 2 I arrived at the conference feeling positive about my grasp of the issues and also feeling confident about my role as an ally. I left feeling shaken and doubting both my thoughts and my actions.

In response to the panel and its aftermath, I’ve composed three more points to reflect what I learned. These aren’t suggestions like the ones I brought to the panel; instead, they are realizations or statements. I’m obviously not an expert on the topic and I’m not speaking from a seat of authority. I’m relating stories and experiences told by others, and they tell them much better than I do. In the tradition of geeks and hackers, now that I have learned something new I’m sharing it with the community in hopes that my experience moves the project forward. It is my hope that better informed and more experienced voices will take this conversation farther than I am able to. These three realizations may be obvious to some, but because they were not obvious to me, it seems useful to articulate them clearly.

  1. Intentional and targeted harassment of women is a significant problem in the library technology field. While subtle micro aggressions, problem conversations, and colleagues who deny that significant gender issues exist in libraries are problematic, these issues are overshadowed by direct and intentional harassing behavior targeting gender identity or sex. The clear message I heard at the panel was that workplace harassment is a very real and very current threat to women working in library technology fields.
  2. This harassment is not visible to those not targeted by it. It is easy to ignore what we do not see. Responses to the panel included many library technology women sharing their experiences and commenting that it was good to hear others’ stories. Even though the experience of workplace harassment was common, those who spoke of it reported feelings of isolation. While legislation and human resources policies clearly state harassment is unacceptable and unlawful, it still happens, and when it happens the target can be isolated by the experience. Those of us who participate in library conferences, journals, and online communities can help pierce this isolation by cultivating opportunities to talk about these issues openly and publicly. By publicly talking about gender issues, we can thwart isolation and make the problems more visible to those who are not direct targets of harassment.
  3. This is a cultural problem, not only an individual problem. While no one point on the gender spectrum has a monopoly on either perpetrating or being the target of workplace harassment, the predominant narrative in our panel discussion was men harassing women. Legally speaking, these need to be treated as individual acts, but as a profession, we can address the cultural aspects of the issue. Something in our library technology culture is fostering an environment where women are systematically exposed to bad behavior from men.

In the field of Library Technology, we spend a lot of our time and brain power intentionally designing user experiences and assessing how users interact with our designs. Because harassment of some of our own is pervasive and cultural, I suggest we turn the same attention and intentionality to designing a workplace culture that is responsive to the needs of all of us who work here. I look forward to reading conference presentations, journal articles, and online discussions where these problems are publicly identified and directly addressed rather than occurring in isolation or being ignored.

  1. infotoday.com/il2013/Monday.asp#TrackD
  2. I don’t advocate a macho confrontational response or take responsibility for the actions of others, but an ally has their friend’s back and that night I did not speak up.

Building a Dynamic Image Display with Drupal & Isotope

I am in love with Isotope.  It’s not often that you hear someone profess their love for a jQuery library (unless it’s this), but there it is.  I want to display everything in animated grids.

I also love Views Isotope, a Drupal 7 module that enabled me to create a dynamic image gallery for our school’s Year in Review.  This module (paired with a few others) is instrumental in building our new digital library.

yearinreview

In this blog post, I will walk you through how we created the Year in Review page, and how we plan to extrapolate the design to our collection views in the Knowlton Digital Library.  This post assumes you have some basic knowledge of Drupal, including an understanding of content types, taxonomy terms and how to install a module.

Year in Review Project

Our Year in Review project began over the summer, when our communications team expressed an interest in displaying the news stories from throughout the school year in an online, interactive display.  The designer on our team showed me several examples of card-like interfaces, emphasizing the importance of ease and clean graphics.  After some digging, I found Isotope, which appeared to be the exact solution we needed.  Isotope, according to its website, assists in creating “intelligent, dynamic layouts that can’t be achieved with CSS alone.”  This jQuery library provides for the display of items in a masonry or grid-type layout, augmented by filters and sorting options that move the items around the page.

At first, I was unsure we could make this library work with Drupal, the content management system we employ for our main web site and our digital library.  Fortunately I soon learned – as with many things in Drupal – there’s a module for that.  The Views Isotope module provides just the functionality we needed, with some tweaking, of course.

We set out to display a grid of images, each representing a news story from the year.  We wanted to allow users to filter those news stories based on each of the sections in our school: Architecture, Landscape Architecture and City and Regional Planning.  News stories might be relevant to one, two or all three disciplines.  The user can see the news story title by hovering over the image, and read more about the news story by clicking on the corresponding item in the grid.

Views Isotope Basics

Views Isotope is installed in the same way as other Drupal modules.  There is an example in the module and there are also videos linked from the main module page to help you implement this in Views.  (I found this video particularly helpful.)

You must have the following modules installed to use Views Isotope:

You also need to install the Isotope jQuery library.  It is important to note that Isotope is only free for non-commercial projects.  To install the library, download the package from the Isotope GitHub repository.  Unzip the package and copy the whole directory into your libraries directory.  Within your Drupal installation, this should be in the /sites/all/libraries folder.  Once the module and the library are both installed, you’re ready to start.

If you have used Drupal, you have likely used Views.  It is a very common way to query the underlying database in order to display content.  The Views Isotope module provides additional View types: Isotope Grid, Isotope Filter Block and Isotope Sort Block.  These three view types combine to provide one display.  In my case, I have not yet implemented the Sort Block, so I won’t discuss it in detail here.

To build a new view, go to Structure > Views > Add a new view.  In our specific example, we’ll talk about the steps in more detail.  However, there are a few important tenets of using Views Isotope, regardless of your setup:

  1. There is a grid.  The View type Isotope Grid powers the main display.
  2. The field on which we want to filter is included in the query that builds the grid, but a CSS class is applied which hides the filters from the grid display and shows them only as filters.
  3. The Isotope Filter Block drives the filter display.  Again, a CSS class is applied to the fields in the query to assign the appropriate display and functionality, instead of using default classes provided by Views.
  4. Frequently in Drupal, we are filtering on taxonomy terms.  It is important that when we display these items we do not link to the taxonomy term page, so that a click on a term filters the results instead of taking the user away from the page.

With those basic tenets in mind, let’s look at the specific process of building the Year in Review.

Building the Year in Review

Armed with the Views Isotope functionality, I started with our existing Digital Library Drupal 7 instance and one content type, Item.  Items are our primary content type and contain many, many fields, but here are the important ones for the Year in Review:

  • Title: text field containing the headline of the article
  • Description: text field containing the shortened article body
  • File: File field containing an image from the article
  • Item Class: A reference to a taxonomy term indicating if the item is from the school archives
  • Discipline: Another term reference field which ties the article to one or more of our disciplines: Architecture, Landscape Architecture or City and Regional Planning
  • Showcase: Boolean field which flags the article for inclusion in the Year in Review

The last field was essential so that the communications team liaison could curate the page.  There are more news articles in our school archives than we necessarily want to show in the Year in Review, and the showcase flag solves this problem.

In building our Views, we first wanted to pull all of the Items which have the following characteristics:

  • Item Class: School Archives
  • Showcase: True

So, we build a new View.  While logged in as administrator, we click on Structure, Views then Add a New View.  We want to show Content of type Item, and display an Isotope Grid of fields.  We do not want to use a pager.  In this demo, I’m going to build a Page View, but a Block works as well (as we will see later).  So my settings appear as follows:

GridSettings

Click on Continue & edit.  For the Year in Review we next needed to add our filters – for Item Class and Showcase.  Depending on your implementation, you may not need to filter the results, but likely you will want to narrow the results slightly.  Next to Filter Criteria, click on Add.

addclass0

I first searched for Item Class, then clicked on Apply.

addclass1

Next, I need to select a value for Item Class and click on Apply.

addclass

I repeated the process with the Showcase field.

addshowcase

If you click Update Preview at the bottom of the View edit screen, you’ll see that much of the formatting is already done with just those steps.

preview1

Note that the formatting in the image above is helped along by some CSS.  To style the grid elements, the Views Isotope module contains its own CSS in the module folder ([drupal_install]/sites/all/modules/views_isotope).  You can move forward with this default display if it works for your site.  Or, you can override it in the site’s theme files, which is what I’ve done above.  In my theme CSS file, I have applied the following styling to the class “isotope-element”:

.isotope-element {
    float: left;
    height: 140px;
    margin: 6px;
    overflow: hidden;
    position: relative;
    width: 180px;
}
I put the above code in my CSS file associated with my theme, and it overrides the default Views Isotope styling.  “isotope-element” is the class applied to the div which contains all the fields being displayed for each item.  Let’s add a few more items and see how the rendered HTML looks.
First, I want to add an image.  In my case, all of my files are fields of type File, and I handle the rendering through Media based on file type.  But you could use any image field, also.

addfile

I use the Rendered File Formatter and select the Grid View Mode, which applies an Image Style to the file, resizing it to 180 x 140.  Clicking Update Preview again shows that the image has been added to each item.

imageAndtext

This is closer, but in our specific example, we want to hide the title until the user hovers over the item.  So, we need to add some CSS to the title field.

hidetitle

In my CSS file, I have the following:

.isotope-grid-text {
    background: none repeat scroll 0 0 #4D4D4F;
    height: 140px;
    left: 0;
    opacity: 0;
    position: absolute;
    top: 0;
    width: 100%;
    z-index: 20;
}

Note the opacity is 0 – which means the div is transparent, allowing the image to show through.  Then, I added a hover style which just changes the opacity to mostly cover the image:

.isotope-grid-text:hover {
  opacity: 0.9;
}

Now, if we update preview, we should see the changes.

imagewhover

The last thing we need to do is add the Discipline field for each item so that we can filter.

There are two very important things here.  First, we want to make sure that the field is not formatted as a link to the term, so we select Plain text as the Formatter.

Second, we need to apply a CSS class here as well, so that the Discipline fields show in filters, not in the grid.  To do that, check the Customize field HTML and select the DIV element.  Then, select Create a class and enter “isotope-filter”.  Also, uncheck “Apply default classes.”  Click Apply.
addfilter1

Using Firebug, I can now look at the generated HTML from this View and see that the isotope-element <div> contains all the fields for each item, though the isotope-filter class causes Discipline to be hidden.

<div class="isotope-element landscape-architecture" data-category="landscape-architecture">
  <div class="views-field views-field-title"> (collapsed for brevity) </div>
  <div class="views-field views-field-field-file"> (collapsed for brevity) </div>
  <div>  
    <div class="isotope-filter">Landscape Architecture</div>
  </div>
</div>

You might also notice that the data-category for this element is assigned as landscape-architecture, which is our Discipline term for this item.  This data-category will drive the filters.

So, let’s save our View by clicking Save at the top and move on to create our filter block.  Create a new view, but this time create a block which displays taxonomy terms of type Discipline.  Then, click on Continue & Edit.

filterblock

The first thing we want to do is adjust the view so that the default row wrappers are not applied.  Note: this is the part I ALWAYS forget, and then when my filters don’t work it takes me forever to track it down.

Click on Settings next to Fields.

fieldsettings

Uncheck “Provide default field wrapper elements.”  Click Apply.

fieldsettings2

Next, we do not want the fields to be links to term pages, because a user click should filter the results, not link back to the term.  So, click on the term name to edit that field.  Uncheck the box next to “Link this field to its taxonomy term page”.  Click on Apply.

term-nolink

Save the view.

The last thing is to make the block appear on the page with the grid.  In practice, Drupal administrators would use Panels or Context to accomplish this (we use Context), but it can also be done using the Blocks menu.

So, go to Structure, then click on Blocks.  Find our Isotope-Filter Demo block.  Because it’s a View, the title will begin with “View:”

blockname

Click Configure.  Set block settings so that the Filter appears only on the appropriate Grid page, in the region which is appropriate for your theme.  Click save.

blocksettings

Now, let’s visit our /isotope-grid-demo page.  We should see both the grid and the filter list.

final

It’s worth noting that here, too, I have customized the CSS.  If we look at the rendered HTML using Firebug, we can see that the filter list is in a div with class “isotope-options” and the list itself has a class of “isotope-filters”.

<div class="isotope-options">
  <ul class="isotope-filters option-set clearfix" data-option-key="filter">
    <li><a class="" data-option-value="*" href="#filter">All</a></li>
    <li><a class="filterbutton" href="#filter" data-option-value=".architecture">Architecture</a></li>
    <li><a class="filterbutton selected" href="#filter" data-option-value=".city-and-regional-planning">City and Regional Planning</a></li>
    <li><a class="filterbutton" href="#filter" data-option-value=".landscape-architecture">Landscape Architecture</a></li>
  </ul>
</div>

I have overridden the CSS for these classes to remove the background from the filters and change the list-style-type to none, but you can obviously make whatever changes you want.  When I click on one of the filters, it shows me only the news stories for that Discipline.  Here, I’ve clicked on City and Regional Planning.
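For reference, my overrides amount to something like the following; the selectors come from the markup above, and the exact properties you need to change will depend on your theme:

.isotope-options .isotope-filters {
  list-style-type: none; /* drop the default bullets on the filter list */
}

.isotope-options .isotope-filters li a {
  background: none; /* remove the background behind each filter link */
}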


Next Steps

So, how do we plan to use this in our digital library going forward?  So far, we have mostly used the grid without the filters, such as in one of our Work pages.  This shows the metadata related to a given work, along with all the items tied to that work.  Eventually, each of the taxonomy terms in the metadata will be a link.  The following grids are all created with blocks instead of pages, so that I can use Context to override the default term or node display.


However, in our recently implemented Collection view, we allow users to filter the items based on their type: image, video or document.  Here, you see an example of one of our lecture collections, with the videos and the poster in the same grid, until the user filters for one or the other.


There are two obstacles to using this feature in a more widespread manner throughout the site.  First, I have only recently figured out how to implement multiple filter options.  For example, we might want to filter our news stories by Discipline and Semester.  To do this, we rewrite the sorting fields in our Grid display so that they all display in one field.  Then, we create two Filter blocks, one for each set of terms.  Implementing this across the site so that users can sort by, say, item type and vocabulary term, will make it more useful to us.

Second, we have several Views that might return upwards of 500 items.  Loading all of the image files for a result set that large is costly, especially when you add in the overhead of full-size images loading in the background for a Colorbox overlay and general Drupal performance issues.  The filters will not work across pages, so if I use a pager, I will only filter the items on the page I’m viewing.  I believe this can be fixed using Infinite Scroll (as described in several ways here), but I have not tried it yet.

With these two advanced options in place, there are many possibilities for improving the digital library interface.  I am especially interested in how to use multiple filters on a set of search results returned from a SOLR index.

What other extensions might be useful?  Let us know what you think in the comments.


Come to Our Meet-up at #LITAforum!

Are you interested in writing a guest post or becoming a regular contributor to ACRL TechConnect blog? Or, do you blog about library technology?

Three of the ACRL TechConnect blog authors, Bohyun, Eric, and Margaret, will be at LITA Forum this year.  (Two of us are on the LITA Forum planning committee and yes, we are very active at LITA as well as in ACRL! :)

So, we decided to have a small meet-up!

Come chat with us about everyday challenges and solutions in library technology over drinks.

This is an informal meet-up & all are welcome!

TechConnect Meet-up at #LITAforum


 

 


Library Quest: Developing a Mobile Game App for A Library

This is the story  of Library Quest (iPhone, Android), the App That (Almost) Wasn’t. It’s a (somewhat) cautionary tale of one library’s effort to leverage gamification and mobile devices to create a new and different way of orienting students to library services and collections.  Many libraries are interested in the possibilities offered by both games and mobile devices,  and they should be.  But developing for mobile platforms is new and largely uncharted territory for libraries, and while there have been some encouraging developments in creating games in library instruction, other avenues of game creation are mostly unexplored.  This is what we learned developing our first mobile app and our first large-scale game…at the same time!


The login screen for the completed game. We use integrated Facebook login for a host of technical reasons.

Development of the Concept: Questing for Knowledge

The saga of Library Quest began in February of 2012, when I came on board at Grand Valley State University Libraries as Digital Initiatives Librarian.  I had been reading some books on gamification and was interested in finding a problem that the concept might solve.  I found two.  First, we were about to open a new 65 million dollar library building, and we needed ways to take advantage of the upsurge of interest we knew this would create.  How could we get people curious about the building to learn more about our services, and to strengthen that into a connection with us?  Second, GVSU libraries, like many other libraries, was struggling with service awareness issues.  Comments by our users in the service dimension of our latest implementation of Libqual+ indicated that many patrons missed out on using services like inter-library loan because they were unaware that they existed.  Students often are not interested in engaging with the library until they need something specific from us, and when that need is filled, their interest declines sharply.  How could we orient students to library services and create more awareness of what we could do for them?

We designed a very simple game to address both problems.  It would be a quest or task based game, in which students actively engaged with our services and spaces, earning points and rewards as they did so.  The game app would offer tasks to students, verify their progress through multistep tasks by asking users to input alphanumeric codes or by scanning QR codes (which we ended up putting on decals that could be stuck to any flat surface).  Because this was an active game, it seemed natural to target it at mobile devices, so that people could play as they explored.  The mobile marketplace is more or less evenly split between iOS and Android devices, so we knew we wanted the game to be available on both platforms.  This became the core concept for Library Quest.  Library administration gave the idea their blessing and approval to use our technology development budget, around $12,000, to develop the game.  Back up and read that sentence over if you need to, and yes, that entire budget was for one mobile app.  The expense of building apps is the first thing to wrap your mind around if you want to create one.  While people often think of apps as somehow being smaller and simpler than desktop programs, the reality is very different.


The main game screen. We found a tabbed view worked best, with quests that are available in one tab, quests that have been accepted but not completed in another, and finished quests in the third.

We contracted with Yeti CGI, an outside game development firm, to do the coding.  This was essential: app development is complicated, and we didn’t have the necessary skills or experience in-house.  If we hadn’t used an outside developer, the game app would never have gotten off the ground.  We had never worked with a game-development company before, and Yeti had never worked with a library, although they had ties to higher education and were enthusiastic about the project.  Working with an outside developer always carries certain risks and advantages, and communication is always an issue.

One thing we could have done more of at this stage was spend time working on game concept and doing paper prototyping of that concept.  In her book Game Design Workshop, author Tracy Fullerton stresses two key components in designing a good game: defining the experience you want the player to have, and doing paper prototyping.  Defining the game experience from the player’s perspective forces the game designer to ask questions about how the game will play that might not otherwise occur to them to ask.  Will this be a group or a solo experience?  Where will the fun come from?  How will the player negotiate the rules structure of the game?  What choices will they have at what points?  As author Jane McGonigal notes, educational games often fail because they do not put the fun first, which is another way of saying that they haven’t fully thought through the player’s experience.  Everything in the game (rules, rewards, format, etc.) should be shaped by the concept of the experience the designer wants to give the player.  Early concepts can and should be tested with paper prototyping.  It’s a lot easier to change rules structure for a game made with paper, scissors, and glue than with code and developers (and a lot less expensive).  In retrospect, we could have spent more time talking about experience and more time doing paper prototypes before we had Yeti start writing code.  While our game is pretty solid, we may have missed opportunities to be more innovative or provide a stronger gameplay experience.

Concept to conception: Wireframing and Usability Testing

The first few months of development were spent creating, approving, and testing paper wireframes of the interface and art concepts.  While we perhaps should have done more concept prototyping, we did do plenty of usability testing of the game interface as it developed, starting with the paper prototypes and continuing into the initial beta version of the game.  That is certainly something I would recommend that anyone else do as well.  Like a website or anything else that people are expected to use, a mobile app interface needs to be intuitive and conform to user expectations about how it should operate, and just as in website design, the only way to create an interface that does so is to engage in cycles of iterative testing with actual users.  For games, this is particularly important because they are supposed to be fun, and nothing is less fun than struggling with poor interface design.

A side note related to usability: one of the things that surfaced in doing prototype testing of the app was that giving players tasks involving library resources and watching them try to accomplish those tasks turns out to be an excellent way of testing space and service design as well.  There were times when students were struggling not with the interface, but with the library!  Insufficient signage, unclear space layout, and assumed knowledge of (or access to) information the students had no way of knowing were all things that became apparent as we watched students try to do tasks that should have been simple.  It serves as a reminder that usability concepts apply to the physical world as much as they do to the web, and that we can and should test services in the real world the same way we test them in virtual spaces.


A quest in progress. We can insert images and links into quest screens, which allows us to use webpages and images as clues.

Development:  Where the Rubber Meets the Phone

Involving an outside developer made the game possible, but it also meant that we had to temper our expectations about the scale of app development.  This became much more apparent once we’d gotten past paper prototyping and began testing beta versions of the game.  There were several ideas that we  developed early on, such as notifications of new quests, and an elaborate title system, that had to be put aside as the game evolved because of cost, and because developing other features that were more central to gameplay turned out to be more difficult than anticipated.  For example, one of the core concepts of the game was that students would be able to scan QR codes to verify that they had visited specific locations.  Because mobile phone users do not typically have QR code reader software installed, Yeti built QR code reader functionality into the game app.  This made scanning the code a more seamless part of gameplay, but getting the scanner software to work well on both the android and iOS versions proved a major challenge (and one that’s still vexing us somewhat at launch).  Tweaks to improve stability and performance on iOS threw off the android version, and vice versa.  Despite the existence of programs like Phonegap and Adobe Air, which will supposedly produce versions of the software that run on both platforms, there can still be a significant amount of work involved in tuning the different versions to get them to work well.

Developing apps that work on the android platform is particularly difficult and expensive.  While Apple has been accused of having a fetish for control, their proprietary approach to their mobile operating system produces a development environment that is, compared to android, easy to navigate.  This is because android is usually heavily modified by specific carriers and manufacturers to run on their hardware, which means that if you want to ensure that your app runs well on an android device, the app must be tested and debugged on that specific combination of android version and hardware.  Multiply the 12 major versions of android still commonly used by the hundreds of devices that run it, and you begin to have an idea of the scope of the problem facing a developer.  While android only accounts for 50% of our potential player base, it easily took up 80% of the time we spent with Yeti debugging, and the result is an app that we are sure works on only a small selection of android devices out there.  By contrast, it works perfectly well on all but the very oldest versions of iOS.

Publishing a Mobile App: (Almost) Failure to Launch

When work began on Library Quest, our campus had no formal approval process for mobile apps, and the campus store accounts were controlled by our student mobile app development lab.  In the year and a half we spent building it, control of the campus store accounts was moved to our campus IT department, and formal guidelines and a process for publishing mobile apps started to materialize.  All of which made perfect sense, as more and more campus entities were starting to develop mobile apps and campus was rightly concerned about branding and quality issues, as well as ensuring any apps that were published furthered the university’s teaching and research mission.  However, this resulted in us trying to navigate an approval process as it materialized around us very late in development, with requests coming in for changes to the game appearance to bring it into line with new branding standards when the game was almost complete.

It was here the game almost foundered as it was being launched. During some of the discussions, it surfaced that one of the commercial apps being used by the university for campus orientation bore some superficial resemblance to Library Quest in terms of functionality, and the concern was raised that our app might be viewed as a copy.  University counsel got involved.  For a while, it seemed the app might be scrapped entirely, before it ever got out to the students!  If there had been a clear approval process when we began the app, we could have dealt with this at the outset, when the game was still in the conceptual phase.  We could have either modified the concept, or addressed the concern before any development was done.  Fortunately, it was decided that the risk was minimal and we were allowed to proceed.


A quest completion screen for one of our test quests. These screens stick around when the quest is done, forming a kind of personalized FAQ about library services and spaces.

Post-Launch: Game On!

As I write this, it’s over a year since Library Quest was conceived and it has just been released “into the wild” on the Apple and Google Play stores.  We’ve yet to begin the major advertising push for the game, but it already has over 50 registered users.  While we’ve learned a great deal, some of the most important questions about this project are still up in the air.  Can we orient students using a game?  Will they learn anything?  How will they react to an attempt to engage with them on mobile devices?  There are not really a lot of established ways to measure success for this kind of project, since very few libraries have done anything remotely like it.  We projected early on in development that we wanted to see at least 300 registered users, and that we wanted at least 50 of them to earn the maximum number of points the game offered.  Other metrics for success are “squishier,” and involve doing surveys and focus groups once the game wraps to see what reactions students had to the game.  If we aren’t satisfied with performance at the end of the year, either because we didn’t have enough users or because the response was not positive, then we will look for ways to repurpose the app, perhaps as part of classroom teaching in our information literacy program, or as part of more focused and smaller scale campus orientation activities.

Even if it’s wildly successful, the game will eventually need to wind down, at least temporarily.  While the effort-reward cycle that games create can stimulate engagement, keeping that cycle going requires effort and resources.  In the case of Library Quest, this would include the money we’ve spent on our prizes and the effort and time we spend developing quests and promoting the game.  If Library Quest endures, we see it having a cyclical life that’s dependent on the academic year.  We would start it anew each fall, promoting it to incoming freshmen, and then wrap it up near the end of our winter semester, using the summers to assess and re-engineer quests and tweak the app.

Lessons Learned:  How to Avoid Being a Cautionary Tale
  1. Check to see if your campus has an approval process and a set of guidelines for publishing mobile apps. If it doesn’t, do not proceed until they exist. Lack of such a process until very late in development almost killed our game. Volunteer to help draft these guidelines and help create the process, if you need to.  There should be some identified campus app experts for you to talk to before you begin work, so you can ask about apps already in use and about any licensing agreements campus may have. There should be a mechanism to get your concept approved at the outset, as well as the finished product.
  2. Do not underestimate the power of paper.  Define your game’s concept early, and test it intensively with paper prototypes and actual users.  Think about the experience you want the players to have, as well as what you want to teach them.  That’s a long way of saying “think about how to make it fun.”  Do all of this before you touch a line of code.
  3. Keep testing throughout development.  Test your wireframes, test your beta version, test, test, test with actual players.  And pay attention to anything your testing might be telling you about things outside the game, especially if the game interfaces with the physical world at all.
  4. Be aware that mobile app development is hard, complex, and expensive.  Apps seem smaller because they’re on small devices, but in terms of complexity, they are anything but.  Developing cross-platform will be difficult (but probably necessary), and supporting android will be an ongoing challenge.  Wherever possible, keep it simple.  Define your core functionality (what does the app *have* to do to accomplish its mission) and classify everything else you’d like it to do as potentially droppable features.
  5. Consider your game’s life-cycle at the outset.  How long do you need it to run to do what you want it to do?  How much effort and money will you need to spend to keep it going for that long?  When will it wind down?
References

Fullerton, Tracy.  Game Design Workshop (4th Edition).  Amsterdam: Morgan Kaufmann, 2008.

McGonigal, Jane.  Reality is Broken: Why Games Make Us Better and How They Can Change the World.  New York: Penguin Press, 2011.

About our Guest Author:
Kyle Felker is the Digital Initiatives Librarian at Grand Valley State University Libraries, where he has worked since February of 2012.  He is also a longtime gamer.  He can be reached at felkerk@gvsu.edu, or on Twitter @gwydion9.

 


Local Dev Environments for Newbies Part 3: Installing Drupal and WordPress

Once you have built a local development environment using an AMP stack, the next logical question is, “now what?”  And the answer is, truly, whatever you want.  As an example, in this blog post we will walk through installing Drupal and WordPress on your local machine so that you can develop and test in a low-risk environment.  However, you can substitute other content management systems or development platforms and the goal is the same: we want to mimic our web server environment on our local machine.

The only prerequisites for these recipes are a working AMP stack (see our tutorials for Mac and Windows) and administrative rights to your computer.  The two sets of steps are very similar.  We need to download and unpack the files to our web root, create a database and point to it from a configuration file, and run an install script from the browser.

There are tutorials around the web on how to do both things, but I think there are two likely gotchas for newbies:

  • There’s no “installer” that installs the platform to your system.  You unzip and copy the files to the correct place.  The “install” script is really a “setup” script, and is run after you can access the site through a browser.
  • Setting up and linking the database must be done correctly, or the site won’t work.

So, we’ll step through each process with some extra explanation.

Drupal

Drupal is an open source content management platform.  Many libraries use it for their website because it is free and it allows for granular user permissions.  So as the site administrator, I can provide access for staff to edit certain pages (i.e., the reference desk schedule) but not others (say, colleagues’ user profiles).  In our digital library, my curatorial users can edit content, authorized student users can see content just for our students, and anonymous users can see public collections.  The platform has its downsides, but there is a large and active user community.  A problem’s solution is usually only a Google search (or a few) away.

The Drupal installation guide is a little more technical, so feel free to head there if you’re comfortable on the command line.

First, download the Drupal files from the Drupal core page.  The top set of downloads (green background) are stable versions of the platform.  The lower set are versions still in development.  For our purposes we want the green download, and because I am on my Mac, I will download the tar.gz file for the most recent version (at the time of this writing, 7.23).  If you are on a Windows machine, and have 7zip installed, you can also use the .tar.gz file.  If you do not have 7zip installed, use the .zip file.


Now, we need to create the database we’re going to use for Drupal.  In building the AMP stack, we also installed phpMyAdmin, and we’ll use it now.  Open a browser and navigate to the phpMyAdmin installation (if you followed the earlier tutorials, this will be http://localhost/~yourusername/phpmyadmin on Mac and http://localhost/phpmyadmin on Windows).  Log in with the root user you created when you installed MySQL.

The Drupal installation instructions suggest creating a user first, and through that process, creating the database we will use.  So, start by clicking on Users.


Look for the Add user button.


Next, we want to create a username – which will serve as the user login as well as the name of the database.  Create a password and select “Local” from the Host dropdown.  This will only allow traffic from the local machine.  Under the “Database for user”, we want to select “Create database with same name and grant all privileges.”


Next, let’s copy the Drupal files and configure the settings.  Locate the file you downloaded in the first step above, move it to your web root folder, and unpack it.  This is the folder you previously used to test Apache and install phpMyAdmin so you could access files through your browser.  For example, on my Mac this is in mfrazer/sites.


You may want to change the folder name from drupal-7.23 to something a little more user friendly, e.g. drupal without the version number.  Generally, it’s bad practice to have periods in file or folder names.  However, for the purposes of this tutorial, I’m going to leave the example unchanged.

Now, we want to create our settings file.  Inside your Drupal folder, look for the sites folder. We want to navigate to sites/default and create a copy of the file called default.settings.php.  Rename the copy to settings.php and open in your code editor.
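If you are comfortable in the Terminal, the copy-and-rename can be done in two commands on the Mac setup from the earlier tutorials (adjust the path to match your own web root and folder name):

cd ~/sites/drupal-7.23/sites/default
cp default.settings.php settings.php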


Each section of this file contains extensive directions on how to set the settings.  At the end of the Database Settings section (line 213 as of this writing), we want to replace this

$databases = array();

with this

$databases['default']['default'] = array(
 'driver' => 'mysql',
 'database' => 'sampledrupal',
 'username' => 'sampledrupal',
 'password' => 'samplepassword',
 'host' => 'localhost',
 'prefix' => '',
 );

Remember, if you followed the steps above, ‘database’ and ‘username’ should have the same value.  Save the file.

Go back to your unpacked folder and create a directory called “files” in the same directory as our settings.php file.


 

Now we can navigate to the setup script in our browser.  The URL is made up of the web root, the name of the folder you extracted the Drupal files into, and then install.php.  So, in my case this is:

localhost/~mfrazer/drupal-7.23/install.php

If I were on a Windows machine, and had changed the name of the folder to mydrupal, then the path would be

localhost/mydrupal/install.php

Either way, you should land on the Drupal installation page, which asks you to choose an installation profile.


For your first installation, I would choose Standard, so you can see what the Standard install looks like.  I use Minimal for many of my sites, but if it’s your first pass into Drupal it is good to see what’s there.

Next, pick a language and click Save and Continue.  Now, the script is going to attempt to verify your requirements.  You may run into an error complaining that the files directory is not writable.


We need to make our files directory writable by our web server users.  We can do this a bunch of different ways.  It’s important to think about what you’re doing, because it involves file permissions, especially if you are allowing users in from outside your system.

On my Mac, I choose to make _www (which is the hidden web server user) the owner of the folder.  To do this, I open Terminal and type in

cd ~/sites/drupal-7.23/sites/default
sudo chown _www files

Remember, sudo will elevate us to administrator.  Type in your password when prompted.  The next command is chown, followed by the new owner and then the folder in question.  So this command will change the owner to _www for the folder “files”.
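If you want to confirm that the change took effect, listing the directory will show the new owner (this check is an extra step, not something the installer requires):

ls -ld files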

In Windows, I did not see this error.  However, if needed, I would handle the permissions through the user interface, by navigating to the files folder, right-clicking and selecting Properties.  Click on the Security tab, then click on Edit.  In this case, we are just going to grant permissions to the users of this machine, which will include the web server user.


Click on Users, then scroll down to click the check box under “Allow” and next to “Write.”  Click on Apply and then OK.  Click OK again to close the Properties window.


On my Windows machine, I got a PHP error instead of the file permissions error.


This is an easy fix: we just need to enable the gd2 and mbstring extensions in our php.ini file and restart Apache to pick up the changes.

To do this, open your php.ini file (if you followed our tutorials, this will be in your c:\opt\local directory).  Beginning on Line 868, in the Windows Extensions section, uncomment (remove the semi-colon from) the following lines (they are not right next to each other, they’re in a longer list, but we want these two uncommented):

extension=php_gd2.dll
extension=php_mbstring.dll

Save the php.ini file.  Restart Apache by going to Services, clicking on Apache 2.4, and clicking Restart.
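If you would rather use a command prompt than the Services panel, Apache can also restart itself; run this from your Apache bin directory, wherever that lives in your installation:

httpd -k restart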

Once you think you’ve fixed the issues, go back to your browser and click on Refresh.  The requirements check should pass and you should see a progress bar as Drupal installs.

Next you are taken to the Configure Site page, where you fill in the site name, your email address and create the first user.  This is important, as there are a couple of functions restricted only to this user, so remember the user name and password that you choose.  I usually leave the Server Settings alone and uncheck the Notification options.


Click Save and Continue.  You should be congratulated and provided a link to your new site.


WordPress

WordPress is a very common blogging platform; we use it at ACRL TechConnect.  It can also be modified to be used as a content management platform or an image gallery.

Full disclosure: until writing this post, I had never done a local install of WordPress.  Fortunately, I can report that it’s very straightforward.  So, let’s get started.

The MAMP instructions for WordPress advise creating the database first, and using the root credentials.  I am not wild about this solution, because I prefer to have separate users for my databases and I do not want my root credentials sitting in a file somewhere.  So we will set up the database the same way we did above:  create a user and a database at the same time.

Open a browser and navigate to the phpMyAdmin installation (if you followed the earlier tutorials, this will be http://localhost/~yourusername/phpmyadmin on Mac and http://localhost/phpmyadmin on Windows).  Log in with the root user you created when you installed MySQL and click on Users.


Look for the Add user button.


Next, we want to create a username – which will serve as the user login as well as the name of the database.  Create a password and select “Local” from the Host dropdown.  This will only allow traffic from the local machine.  Under the “Database for user”, we want to select “Create database with same name and grant all privileges.”


Now, let’s download our files.  Go to http://wordpress.org/download and click on Download WordPress.  Move the .zip file to your web root folder and unzip it.  This is the folder you previously used to test Apache and install phpMyAdmin so you could access files through your browser.  For example, on my Mac this is in mfrazer/sites.  If you followed our tutorial for Windows, it would be c:\sites


Next, we need to create a config file.  WordPress comes with a wp-config-sample.php file.  Make a copy of it, rename the copy wp-config.php, and open it with your code editor.


Enter the database name, user name, and password we just created.  Remember, if you followed the steps above, the database name and user name should be the same.  Verify that the host is set to localhost and save the file.
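With the sample names used in the Drupal section above (your values will differ), the relevant lines in wp-config.php end up looking roughly like this:

define('DB_NAME', 'samplewordpress');
define('DB_USER', 'samplewordpress');
define('DB_PASSWORD', 'samplepassword');
define('DB_HOST', 'localhost');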

Navigate in your browser to the WordPress folder.  The URL is made up of the web root and the name of the folder where you extracted the WordPress files.  So, in my case this is:

localhost/~mfrazer/wordpress

If I were on a Windows machine, and had changed the name of the folder to wordpress-dev, then the path would be

localhost/wordpress-dev

Either way, you should land on the WordPress setup form, which asks for a site title, user name, password, and email address.


Fill in the form and click on Install WordPress.  It might take a few minutes, but you should get a success message and a Log In button.  Log in to your site using the credentials you just created in the form.

You’re ready to start coding and testing.  The next step is to think about what you want to do.  You might take a look at the theming resources provided by both WordPress and Drupal.  You might want to go all out and write a module.  No matter what, though, you now have an environment that will help you abide by the cardinal rule of development: Thou shalt not mess around in production.

Let us know how it’s going in the comments!


One Week, One Tool, Many Lessons

When I was a kid, I cherished the Paula Danziger book Remember Me to Harold Square, in which a group of kids call themselves the Serendipities, named for the experience of making fortunate discoveries accidentally.  Last week I found myself remembering the book over and over again as I helped develop Serendip-o-matic, a tool which introduces serendipity to research, as part of a twelve-person team attending One Week | One Tool at the Roy Rosenzweig Center for History and New Media at George Mason University (RRCHNM).

In this blog post, I’ll take you through the development of the “serendipity machine”, from the convening of the team to the selection and development of the tool.  The experience turned out to be an intense learning experience for me, so along the way, I will share some of my own fortunate discoveries.

(Note: this is a pretty detailed play-by-play of the process.  If you’re more interested in the result, please see the RRCHNM news items on both our process and our product, or play with Serendip-o-matic itself.)

The Eve of #OWOT

@foundhistory: One Week | One Tool is Here! Meet the crew and join in! http://t.co/4EyDPllNnu #owot #buildsomething

Approximately thirty people applied to be part of One Week | One Tool (OWOT), an Institute for Advanced Topics in the Digital Humanities, sponsored by the National Endowment for the Humanities.  Twelve were selected and we arrive on Sunday, July 28, 2013 and convene in the Well, the watering hole at the Mason Inn.

As we circle around the room and introduce ourselves, I can’t help but marvel at the myriad skills of this group.  Each person arrives with more than one bag of tricks from which to pull, including skills in web development, historical scholarship, database administration, librarianship, human-computer interaction and literature studies.  It takes about a minute before someone mentions something I’ve never heard of and so the Googling begins.  (D3, for the record, is a Javascript library for data visualizations).

Tom Scheinfeldt (@foundhistory), the RRCHNM director-at-large who organized OWOT, delivers the pre-week pep talk and discusses how we will measure success.  The development of the tool is important, but so is the learning experience for the twelve assembled scholars.  It’s about the product, but also about the process.  We are encouraged to learn from each other, to “hitch our wagon” to another smart person in the room and figure out something new.

As for the product, the goal is to build something that is used.  This means that defining and targeting the audience is essential.

The tweeting began before we arrived, but typing starts in earnest at this meeting and the #owot hashtag is populated with our own perspectives and feedback from the outside.  Feedback, as it turns out, will be the priority for Day 1.

Day 1

@DoughertyJack: “One Week One Tool team wants feedback on which digital tool to build.”

Mentors from RRCHNM take the morning to explain some of the basic tenets of what we’re about to do.  Sharon Leon talks about the importance of defining the project: “A project without an end is not a project.”  Fortunately, the one week timeline solves this problem for us initially, but there’s the question of what happens after this week?

Patrick Murray-John takes us through some of the finer points of developing in a collaborative environment.  Sheila Brennan discusses outreach and audience, and continues to emphasize the point from the night before: the audience definition is key.  She also says the sentence that, as we’ll see, would need to be my mantra for the rest of the project: “Being willing to make concrete decisions is the only way you’re going to get through this week.”

All of the advice seems spot-on and I find myself nodding my head.  But we have no tool yet, and so how to apply specifics is still really hazy.  The tool is the piece of the puzzle that we need.

We start with an open brainstorming session, which results in a filled whiteboard of words and concepts.  We debate audience, we debate feasibility, we debate openness.  Debate about openness brings us back to the conversation about audience – for whom are we being open?  There’s a lot of conversation, but at the end, we essentially have just a word cloud associated with the projects in our heads.

So, we then take those ideas and try to express them in the following format: X tool addresses Y need for Z audience.  I am sitting closest to the whiteboards so I do a lot of the scribing for this second part and have a few observations:

  • there are pet projects in the room – some folks came with good ideas and are planning to argue for them
  • our audience for each tool is really similar; as a team we are targeting “researchers”, though there seems to be some debate on how inclusive that term is.  Are we including students in general?  Teachers?  What designates “research”?  It seems to depend on the proposed tool.
  • the problem or need is often hard to articulate.  “It would be cool” is not going to cut it with this crowd, but there are some cases where we’re struggling to define why we want to do something.

Clarifying Ideas. Photo by Mia Ridge.

A few group members begin taking the rows and creating usable descriptions and titles for the projects in a Google Doc, as we want to restrict public viewing while still sharing within the group.  We discuss several platforms for sharing our list with the world, and land on IdeaScale.  We want voters to be able to vote AND comment on ideas, and IdeaScale seems to fit the bill.  We adjourn from the Center and head back to the hotel with one thing left to do: articulate these ideas to the world using IdeaScale and get some feedback.

The problem here, of course, is that everyone wants to make sure that their idea is communicated effectively and we need to agree on public descriptions for the projects.  Finally, it seems like there’s a light at the end of the tunnel…until we hit another snag.  IdeaScale requires a login to vote or comment and there’s understandable resistance around the table to that idea.  For a moment, it feels like we’re back to square one, or at least square five.  Team members begin researching alternatives but nothing is perfect, we’ve already finished dinner and need the votes by 10am tomorrow.  So we stick with IdeaScale.

And, not for the last time this week, I reflect on Sheila’s comment, “being willing to make concrete decisions is the only way you’re going to get through this week.”  When new information, such as the login requirement, challenges the concrete decision you made, how do you decide whether or not to revisit the decision?  How do you decide that with twelve people?

I head to bed exhausted, wondering about how many votes we’re going to get, and worried about tomorrow: are we going to make a decision?

Day 2

@briancroxall: “We’ve got a lot of generous people in the #owot room who are willing to kill their own ideas.”

It turns out that I need not have worried.  In the winnowing from 11 choices down to 2, many members of the team are willing to say, “my tool can be done later” or “that one can be done better outside this project.”   Approximately 100 people weighed in on the IdeaScale site, and those votes are helpful as we weigh each idea.  Scott Kleinman leads us in a discussion about feasibility for implementation and commitment in the room and the choices begin to fall away.  At the end, there are four, but after a few rounds of voting we’re down to two with equal votes that must be differentiated.  After a little more discussion, Tom proposes a voting system that allows folks to weight their votes in terms of commitment and the Serendipity project wins out.  The drafted idea description reads:

“A serendipitous discovery tool for researchers that takes information from your personal collection (such as a Zotero citation library  or a CSV file) and delivers content (from online libraries or collections like DPLA or Europeana) similar to it, which can then be visualized and manipulated.”

We decide to keep our project a secret until our launch and we break for lunch before assigning teams.  (Meanwhile, #owot hashtag follower Sherman Dorn decides to create an alternative list of ideas – One Week Better Tools – which provides some necessary laughs over the next couple of days).

After lunch, it’s time to break out responsibilities.  Mia Ridge steps up, though, and suggests that we first establish a shared understanding of the tool.  She sketches on one of the whiteboards the image which would guide our development over the next few days.

This was a takeaway moment for me.  I frequently sketch out my projects, but I’m afraid the thinking often gets pushed out in favor of the doing when I’m running low on time.  Mia’s suggestion that we take the time despite being against the clock probably saved us lots of hours and headaches later in the project.  We needed to aim as a group, so our efforts would fire in the same direction.  The tool really takes shape in this conversation, and some of the tasks are already starting to become really clear.  (We are also still indulging our obsession with mustaches at this time, as you may notice.)


Mia Ridge leads discussion of application. Photo by Meghan Frazer.

Tom leads the discussion of teams.  He recommends three: a project management team, a design/dev team and an outreach team.  The project managers should be selected first, and they can select the rest of the teams.  The project management discussion is difficult; there’s an abundance of qualified people in the room.  From my perspective, it makes sense to have the project managers be folks who can step in and pinch hit as things get hectic, but we also need our strongest technical folks on the dev team.  In the end, Brian Croxall and I are selected to be the project management team.

We decide to ask the remaining team members where they would like to be and see where our numbers end up.  The numbers turn out great: 7 for design/dev and 3 for outreach, with two design/dev team members slated to help with outreach needs as necessary.

The teams hit the ground running and begin prodding the components of the idea. The theme of the afternoon is determining the feasibility of this “serendipity engine” we’ve elected to build.  Mia Ridge, leader of the design/dev team, runs a quick skills audit and gets down to the business of selecting programming languages, frameworks and strategies for the week. They choose to work in Python with the Django framework.  Isotope, a JQuery plugin I use in my own development, is selected to drive the results page.  A private Github repository is set up under a code name.  (Beyond Isotope, HTML and CSS, I’m a little out of my element here, so for more technical details, please visit the public repository’s wiki.)  The outreach team lead, Jack Dougherty, brainstorms with his team on overall outreach needs and high priority tasks.  The Google document from yesterday becomes a Google Drive folder, with shells for press releases, a contact list for marketing and work plans for both teams.

This is the first point where I realize that I am going to have to adjust to a lack of hands on work.  I do my best when I’m working a keyboard: making lists, solving problems with code, etc.  As one of the project managers, my job is much less on the keyboard and much more about managing people and process.

When the teams come back together to report out, there’s a lot of getting each side up to speed, and afterwards our mentors advise us that the meetings have to be shorter.  We’re already at the end of day 2, though both teams would be working into the night on their work plans, and Brian and I still need to set the schedule for tomorrow.

We’re past the point where we can have a lot of discussion, except for maybe about the name.

Day 3

@briancroxall: Prepare for more radio silence from #owot today as people put their heads down and write/code.

@DoughertyJack: At #owot we considered 120 different names for our tool and FINALLY selected number 121 as the winner. Stay tuned for Friday launch!

Wednesday is tough.  We have to come up with a name, and all that exploration from yesterday needs to be a prototype by the end of the day. We are still hammering out the language we use in talking to each other and there’s some middle ground to be found on terminology. One example is the use of the word “standup” in our schedule.  “Standup” means something very specific to developers familiar with the Agile development process whereas I just mean, “short update meeting.”  Our approach to dealing with these issues is to identify the confusion and quickly agree on language we all understand.

I spend most of the day with the outreach team.  We have set a deadline for presenting names at lunchtime and are hoping the whole team can vote after lunch.  This schedule turns out to be folly as the name takes most of the day and we have to adjust our meeting times accordingly.  As project managers, Brian and I are canceling meetings (because folks are on a roll, we haven’t met a deadline, etc) whenever we can, but we have to balance this with keeping the whole team informed.

Camping out in a living room type space in RRCHNM, spread out among couches and looking at a Google Doc being edited on a big-screen TV, the outreach team and various interested parties spend most of the day brainstorming names.  We take breaks to work on the process press release and other essential tasks, but the name is the thing for the moment.  We need a name to start working on branding and logos.  Product press releases need to be completed, the dev team needs a named target and of course, swag must be ordered.

It is in this process, however, that an Aha! moment occurs for me.  We have been discussing names for a long time and folks are getting punchy.  The dev team lead and our designer, Amy Papaelias, have joined the outreach team along with most of our CHNM mentors.  I want to revisit something dev team member Eli Rose said earlier in the day.  To paraphrase, Eli said that he liked the idea that the tool automated or mechanized the concept of surprise.  So I repeat Eli’s concept to the group and it isn’t long after that that Mia says, “what about Serendip-o-matic?”  The group awards the name with head nods and “I like that”s and after running it by developers and dealing with our reservations (eg, hyphens, really?), history is made.

As relieved as I am to finally have a name, the bigger takeaway for me here is in the role of the manager.  I am not responsible for the inspiration for the name or the name itself, but instead for repeating the concept to the right combination of people at a time when the team was stuck.  The project managers can create an opportunity for the brilliant folks on the team to make connections.  This thought serves as a consolation to me as I continue to struggle without concrete tasks.

Meanwhile, on the other side of the building, the rest of the dev team is pushing to finish code.  We see a working prototype at the end of the day, and folks are feeling good, but it’s been a long day.  So we go to dinner as a team, and leave the work behind for a couple of hours, though Amy is furiously sketching at various moments throughout the meal as she tries to develop a look and feel for this newly named thing.

On the way home from dinner, I think, “there’s only two days left.”  All of a sudden it feels like we haven’t gotten anywhere.

Day 4

@shazamrys: Just looking at first logo designs from @fontnerd. This is going to be great. #owot

We start hectic but excited.  Both teams were working into the night, Amy has a logo strategy and it goes over great.  Still, there’s lots of work to do today. Brian and I sit down in the morning and try to discuss what happens with the tool and the team next week and the week after before jumping in and trying to help the teams where we can, including things like finding a laptop for a dev team member, connecting someone with javascript experience to a team member who is stuck, or helping edit the press release.  This day is largely a blur in my recollection, but there are some salient points that stick out.

The decision to add the Flickr API to our work in order to access the Flickr Commons is made with the dev team, based on the feeling that we have enough time and the images located there enhance our search results and expand our coverage of subject areas and geographic locations.

We also spend today addressing issues.  The work of both teams overlaps in some key areas.  In the afternoon, Brian and I realize that we have mishandled some of the communication regarding language on the front page and both teams are working on the text.  We scramble to unify the approaches and make sure that efforts are not wasted.

This is another learning moment for me.  I keep flashing on Sheila’s words from Monday, and worry that our concrete decision making process is suffering from “too many cooks in the kitchen.”  Everyone on this team has a stake in the success of this project and we have lots of smart people with valid opinions.  But everyone can’t vote on everything and we are spending too much time getting consensus now, with a mere twenty-four hours to go.  As a project manager, part of my job is to start streamlining and making executive decisions, but I am struggling with how to do that.

As we prepare to leave the center at 6pm, things are feeling disconnected.  This day has flown by.  Both teams are overwhelmed by what has to get done before tomorrow, and despite hard work throughout the day, we’re still trying to get a dev server and a production server up and running.  As we regroup at the Inn, the dev team heads upstairs to a quiet space to work and eat and the outreach team sets up in the lobby.

Then, good news arrives.  Rebecca Sutton-Koeser has managed to get both the dev and production servers up and the code is able to be deployed.  (We are using Heroku and Amazon Web Services specifically, but again, please see the wiki for more technical details.)

The outreach team continues to work on documentation and release strategy, and Brian and I continue to step in where we can.  Everyone is working until midnight or later, but feeling much better about our status than we did at 6pm.

Day 5

@raypalin: If I were to wait one minute, I could say launch is today. Herculean effort in final push by my #owot colleagues. Outstanding, inspiring.

The final tasks are upon us.  Scott Williams moves on from his development responsibilities to facilitate user testing, which was forced to slide from Thursday due to our server problems.  Amanda Visconti works to get the interactive results screen finalized.  Ray Palin hones our list of press contacts and works with Amy to get the swag design in place.  Amrys Williams collaborates with the outreach team and then Sheila to publish the product press release.  Both the dev and outreach teams triage and fix and tweak and defer issues as we move towards our 1pm “code chill”, a point at which we’re hoping to have the code in a fairly stable state.

We are still making too many decisions with too many people, and I find myself weighing not only the options but how attached people are to either option.  Several choices are made because they reflect the path of least resistance.  The time to argue is through and I trust the team’s opinions even when I don’t agree.

We end up running a little behind and the code freeze scheduled for 2pm slides to 2:15.  But at this point we know: we’re going live at 3:15pm.

Jack Dougherty has arranged a Google hangout with Dan Cohen of the Digital Public Library of America and Brett Bobley and Jen Serventi of the NEH Office of Digital Humanities, which the project managers co-host.  We broadcast the conversation live via the One Week | One Tool website.

The code goes live and the broadcast starts but my jitters do not subside…until I hear my teammates cheering in the hangout.  Serendip-o-matic is live.

The Aftermath

At 8am on Day 6, Serendip-o-matic had its first pull request and later in the day, a fourth API – Trove of Australia – was integrated.  As I drafted this blog post on Day 7, I received email after email generated by the active issue queue and the tweet stream at #owot is still being populated.  On Day 9, the developers continue to fix issues and we are all thinking about long term strategy.  We are brainstorming ways to share our experience and help other teams achieve similar results.

I found One Week | One Tool incredibly challenging and therefore a highly rewarding experience.  My major challenge lay in shifting my mindset from that of someone hammering on a keyboard in a one-person shop to that of a project manager for a twelve-person team.  I write for this blog because I like to build things and share how I built them, but I have never experienced the building from this angle before.  The tight timeline ensured that we would not have time to go back and agonize over decisions, so it was a bit like living in a project management accelerator.  We had to recognize issues, fix them and move on quickly, so as not to derail the project.

However, even in those times when I became acutely more aware of the clock, I never doubted that we would make it.  The entire team is so talented; I never lost my faith that a product would emerge.  And, it’s an application that I will use, for inspiration and for making fortunate discoveries.

@meghanfrazer: I am in awe. Amazing to work with such a smart, giving team. #owot #feedthemachine


The #OWOT team. Photo by Sharon Leon.

(More on One Week | One Tool, including other blog entries, can be found by visiting the One Week | One Tool Zotero Group.)


Python Preconference at ALA Annual: Attendee Perspective

I attended the Library Code Year Interest Group‘s preconference on Python, sponsored by LITA (Library and Information Technology Association) and ALCTS (Association for Library Collections and Technical Services), at the American Library Association’s conference in Chicago this year. The workshop was taught and staffed by IG members Andromeda Yelton, Becky Yoose, Bohyun Kim, Carli Spina, Eric Phetteplace, Jen Young, and Shana McDanold. It was based on work done by the Boston Python Workshop group, and with tools developed there and by Coding Bat. The preconference was designed to provide a basic introduction to the Python programming language, and succeeded admirably.

Here’s why I think it’s important for librarians to learn to code: it provides options in lots of conversations where librarians have traditionally not had many. One of the lightning talks (by Heidi Frank at NYU) concerned using Python to manipulate MARC records. The tools to do that kind of thing have tended to be either a) the property of vendors who provide the features the majority of their customers want or b) the province of overworked library systems staff. Both scenarios tend to lead to tools which are limiting or aggravating for the individual needs of almost every individual user. Ever try to change a graphic in your OPAC? Export a report in the file format *you* need? Learning to code is one way to solve those kinds of problems. Code empowers.

The preconference was very self-directed. There was a set of introductory tutorials before a late-morning lecture, then a lovely lunch (provided by the Python Foundation), then the option of spending more time on the morning’s activities or using the morning’s skills to work on one of two projects. The first project, ColorWall, used Python to create a bunch of very pretty cascading displays of color. The second, Wordplay, used it to answer crossword puzzle questions: “How many words fit the pattern E*G****R?” “How many words have all five vowels in order?” My table opted to stick with the morning exercises, and learned an important lesson about what kinds of things Python can infer and what kinds it can’t. Python and your computer are very fast and systematic, but also very literal-minded. I suspect that they’d much rather be doing math than whatever you’re asking them to do for you.

My own background is moderately technical. I remember writing BASIC programs for my Commodore 128. I’ve done web design and systems work for a couple of libraries, and I have a lot of experience working with software of various kinds. I used this set of directions to install Linux on the Chromebook I brought to the preconference. I have installed new Linux operating systems on computers dozens of times. This doesn’t scare me.

And I still had the feeling of 8th-grade-math-class dread when I got stuck, as I frequently did, during the second part of the guided introduction. Did I miss something? Am I just not smart enough to do this? The whole litany of self-doubt. I got completely stuck on when to use loops and when not to. Loops are one of the most basic Python constructs: a way of telling Python to go down a list of things systematically and do something to each of them. It was utterly baffling, because it requires you to frame the question the way a computer would, rather than the way a human would. That is totally normal for me and for anyone else starting out. Or, in fact, for anyone who programs. Programming is failure. The trick is learning to treat failure as part of the process, rather than as a calamity.
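As an illustration of the loop idea (my own example, with made-up data, not an exercise from the workshop):

titles = ["Moby-Dick", "Beloved", "Middlemarch"]   # a hypothetical list of things

for title in titles:        # Python visits each item in the list in turn...
    print(title.upper())    # ...and does something with it

The hard part is not the syntax; it is learning to break a task into steps that literal-minded an interpreter can follow.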

What’s powerful about the approach the IG used is that there were lots of people available to help: about seven teaching assistants for forty attendees. The accompanying documentation was cheerful and clear, and the attitude was “let’s figure this out together.” This is the polar opposite of a common approach to teaching programming, of which the polite paraphrase is “Read The Fine Manual.” My experience with music lessons as an adult, and with the programming instruction I’ve looked at, is that the (usually) well-meaning people doing the teaching tend to be people who learn best by trying things until they work. Lots of people learn best that way; lots of people do not. Librarians tend more often to be in the second category: wanting a sense of the forest before dealing with all of the trees individually. There is also a genderedness to the traditional approach. The Boston Python group’s approach (self-directed, with vast amounts of personal help available) was specifically designed to be welcoming to women and newcomers. Used here, it definitely worked. Attendees were 60% female, which is striking for a programming event, even at a library conference.

For me, learning Python is an investment in supporting the digital humanities work I will be increasingly involved in. I’m looking forward to learning how to use it to manipulate and analyze text. As I look more closely, I see that Python has modules for manipulating CSV files. One of my ongoing projects involves mapping indexes like MLA Bibliography to our full-text coverage so my students and faculty know what to expect when they use them. I’ve been using complicated Excel spreadsheets to do this, with only marginally satisfying results. I think that Python will give me better results, and hopefully results I can share with others.
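As a rough sketch of that kind of coverage mapping with Python's built-in csv module (the file and column names here are hypothetical, not my actual data):

import csv

# A hypothetical export: one row per journal indexed in the MLA Bibliography,
# with a column noting where our full-text coverage begins (blank if none).
with open('mla_coverage.csv', newline='') as fh:
    for row in csv.DictReader(fh):
        if not row['full_text_start']:
            print(row['journal'], "is indexed but has no full-text coverage")

Even a small script like this is easier to rerun and share than a tangle of Excel formulas.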

The immediate takeaways for me are more about relationships and affiliation than code, though I do have a structure, in the form of the documentation for the workshop, which I will use to follow up (you can use it, too!). I am lucky enough to be in the Boston area, so I will take advantage of the active Boston Python Meetup group, which has frequent workshops and events for programmers at all levels. Most importantly, it is clear to me from the workshop that Python is not inherently more complicated to learn than MARC formatting or old-style DIALOG searching. I wouldn’t say the workshop demystified Python for me (there’s still a lot of work to do), but I will say that learning a useful amount of Python now seems entirely doable and worthwhile.

Computer code is crucial to the present and future of librarianship. Librarians have a unique facility with explaining the technical to non-technical people. Librarians learning to code together is an investment in ourselves and our profession.

About our guest author:

Chris Strauber is Humanities Research and Instruction Librarian and Coordinator of Instructional Design at Tufts University’s Tisch Library.