* The ≈ symbol indicates that the two items are similar, but not equal, to each other.
Hacker is a disputed term. The word hacker is so often misapplied to describe lawbreaking, information theft, privacy violation, and other black-hat activities that the mistake has become permanently installed in our lexicon. I am not using hacker in this sense of the word. To be clear: when I use the word hacker and when I write about hacker values, I am not referring to computer criminals and their sketchy value systems. Instead, I am using hacker in its original meaning: a person who makes clever use of technology and information to solve practical problems.
With the current popularity of hackerspaces and makerspaces in libraries, library hack-a-thons, and hacking projects for librarians, it is clear that library culture is warming to the hacker ethic. This is a highly positive trend, and one that I encourage more librarians to participate in. The reason I am so excited to see libraries encourage adoption of the hacker ethic is that hackers share several core values with libraries. Working together, we can serve our communities more effectively. This may appear counter-intuitive, especially given the very common public misconception that hacker is just another word for computer criminal. In this post I want to correct this error, explain the values behind the hacker movement, and show how librarians and hackers share core values. It is my hope that this opens the door for more librarians to get started in productive and positive library hackery.
First, a working definition: hackers are people who empower themselves with information in order to modify their environment and make the world a better place. That’s it. Hacking doesn’t require intruding into computer security settings. There’s no imperative that hackers work with code, computers, or technology, although many do. Besides the traditional computer software hacker, there are many kinds of crafters, tinkerers, and makers who share the core hacker values. These makers all share knowledge about their world and engage in hands-on modification of it in order to make it a better place.
For a richer and more detailed look into the hacker ethic than provided by my simplified definition I recommend three books. First try Cory Doctorow’s young adult novel, Little Brother 1. This novel highlights the hacker values of self-empowerment with information, hands-on hacking, and acting for the public good. Little Brother is not only an award-winning story, but it also comes with a bibliography that is one of the best introductions to hacking available. Next, check out Steven Levy’s classic book Hackers: Heroes of the Computer Revolution 2. Levy details the history of hackers in the early 1980s and explains the values that drove the movement. Third, try Chris Anderson’s Makers: The New Industrial Revolution 3. Anderson tells the story of the contemporary maker movement and the way it is combining the values of the traditional do-it-yourself (DIY) movement with the values of the computer hacker community to spark a vibrant and powerful creative movement across the world.
In the preface to Hackers: Heroes of the Computer Revolution, Levy observed a common philosophy that the hackers shared:
It was a philosophy of sharing, openness, decentralization, and getting your hands on machines at any cost to improve the machines and improve the world.
The Wikipedia entry on the hacker programming subculture builds on Levy’s observations and revises the list of core hacker values as:
- Sharing
- Openness
- Collaboration
- Engaging in the Hands-on Imperative.
These values are also restated and expanded on in another Wikipedia article on Hacker Ethics. Each of these articulations of hacker values differs subtly, yet all reinforce the central idea that there are core hacker values and that the conception of hacker as computer criminal is misinformed and inaccurate. (While there clearly are computer criminals, the error lies in labeling these people as hackers. These criminals violate hacker values as much as they violate personal privacy and the law.)
Once we understand that hacking is rooted in the core values of sharing, openness, collaboration, and hands-on activity, we can begin to see that hackers and librarians share several core values and that there is a rich environment for developing synergies and collaborative projects between the two groups. If we can identify and isolate the core values that librarians share with hackers, we will be well on our way to identifying areas for productive collaboration and cross-pollination of ideas between our cultures.
If we are going to compare hacker values with library values, an excellent starting point is the American Library Association’s Library Bill of Rights. I recently had the pleasure of attending a keynote presentation by Char Booth who made this point most persuasively. She spoke eloquently and developed a nuanced argument on the topic of the narratives we use to describe our libraries. She encouraged us to look beyond the tired narratives of library-as-container-of-information or library-as-content-repository and instead create new narratives that describe the enduring concept of the library. This concept of library captures the values and services libraries provide without being inextricably linked to the information containers and technologies that libraries have historically used.
As she developed this argument, Char encouraged us to look to library history and extract the core values that will continue to apply as our collections and services adapt and change. As an example, she displayed the 1948 Library Bill of Rights and extracted a core value from each paragraph. Her lesson: these are still our core values, even if the way we serve our patrons has radically changed.
Char distilled the Library Bill of Rights into five core values: access, freedom, advocacy, inquiry, and openness. If we compare these values with the hacker values from above (sharing, openness, collaboration, and the hands-on imperative), we’ll see that, at least in terms of access to information, public openness, freedom, sharing, and collaboration, libraries and hackers are on the same page. There are many things that hackers and libraries can do together to further these shared values and goals.
It should be noted that hackers have a traditionally anti-authoritarian bent and unlike libraries, their value of open access to information often trumps their civic duty to respect license agreements and copyright law. Without trivializing this difference, there are many projects that libraries and hackers can do together that honor our shared values and do not violate the core principles of either partner. After all, libraries have a lot of experience doing business with partners who do not share or honor the core library values of freedom, openness, and access to information. If we can maintain productive relationships with certain parties that reject values close to the heart of libraries and librarians, it stands to reason that we can also pursue and maintain relationships with other groups that respect these core values, even as we differ in others.
At the end of the day, library values and hacker values are more alike than different. Especially in the areas of library work that involve advocacy for freedom, openness, and access to information we have allies and potential partners who share core values with us.
If my argument about library values and hacker values has been at all persuasive, it raises the question: what do hacker/library partnerships look like? Some of the answers to this have been hinted at above. They look like Jason Griffey’s LibraryBox project. This wonderful project involves hacking on multiple levels. On one level, it provides the information needed for libraries to modify (hack) a portable wifi router into a public distribution hub for public domain, open access, and creative-commons licensed books and media. LibraryBoxes can bring digital media to locations that are off the net. On another level, it is a hack of an existing hacker project, PirateBox. PirateBox is a private portable network designed to provide untraceable local file-sharing. Griffey hacked the hack in order to create a project more in line with library values and mission.
These partnerships can also look like the Washington DC public library’s Accessibility Hack-a-Thon, an ongoing project that brings together civic, library, and hacker groups to collaborate on hacking projects that advance the public good in their city. Another great example of bringing hacker ethics into the library can be found in TechConnect’s own Bohyun Kim’s posts on AJAX and APIs. Using APIs to customize web services is a perfect example of a library hack: it leverages our understanding of technology and empowers us to customize and perfect our environment. With an injection of hacker values into library services, we no longer have to remain at the mercy of the default setting. We can empower ourselves to hack our way to better tools, a better library, and a better world.
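As a concrete illustration of this kind of hack, the sketch below pulls records from a web API and reshapes them for a custom display. The endpoint and the JSON field names (`title`, `author`) are hypothetical stand-ins, not any particular vendor’s actual API:

```python
# A minimal "library hack" sketch: fetch catalog records from a web API
# and keep only the fields needed for a custom display. The endpoint URL
# and the JSON field names here are hypothetical placeholders.
import json
from urllib.request import urlopen


def parse_records(raw_json):
    """Reduce a JSON array of catalog records to (title, author) pairs."""
    return [(record["title"], record["author"]) for record in json.loads(raw_json)]


def fetch_new_titles(api_url):
    """Fetch records from a (hypothetical) catalog API endpoint and parse them."""
    with urlopen(api_url) as response:
        return parse_records(response.read())
```

The point isn’t this particular snippet; it’s that a dozen lines of glue code can replace a default vendor display with one tuned to local needs.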
An excellent example of hackery from outside the library community is Audrey Watters’ Hack Education and Hack [Higher] Education blogs. Just as computer hackers use their insider knowledge of computer systems to remake the environment, Audrey uses her insider knowledge of education systems to make positive changes to the system.
- Doctorow, Cory. 2008. Little Brother. New York: Tom Doherty Associates. http://craphound.com/littlebrother/download/ ↩
- Levy, Steven. 2010. Hackers: Heroes of the Computer Revolution. Cambridge: O’Reilly Media. http://shop.oreilly.com/product/0636920010227.do ↩
- Anderson, Chris. 2012. Makers: The New Industrial Revolution. New York: Crown Business. http://worldcat.org/oclc/812195098 ↩
Cultivating Change in the Academy: 50+ Stories from the Digital Frontlines
This is a review of the ebook Cultivating Change in the Academy: 50+ Stories from the Digital Frontlines and also of the larger project that collected the stories that became the content of the ebook. The project collects discussions about how technology can be used to improve student success. Fifty practical examples of successful projects are the result. Academic librarians will find the book to be a highly useful addition to our reference or professional development collections. The stories collected in the ebook are valuable examples of innovative pedagogy and administration and are useful resources to librarians and faculty looking for technological innovations in the classroom. Even more valuable than the collected examples may be the model used to collect and publish them. Cultivating Change, especially in its introduction and epilogue, offers a model for getting like minds together on our campuses and sharing experiences from a diversity of campus perspectives. The results of interdisciplinary cooperation around technology and success make for interesting reading, but we can also follow their model to create our own interdisciplinary collaborations at home on our campuses. More details about the ongoing project are available on their community site. The ebook is available as a blog with comments and also as an .epub, .mobi, or .pdf file from the University of Minnesota Digital Conservancy.
Cultivating Change in the Academy: 50+ Stories from the Digital Frontlines 1
The stories that make up the ebook have been peer reviewed and organized into chapters on the following topics: Changing Pedagogies (teaching using the affordances of today’s technology), Creating Solutions (technology applied to specific problems), Providing Direction (technology applied to leadership and administration), and Extending Reach (technology employed to reach expanded audiences). The stories follow a semi-standard format that clearly lays out each project, including the problem addressed, methodology, results, and conclusions.
Section One: Changing Pedagogies
The opening chapter focuses on applications of academic technology in the classroom that specifically address issues of moving instruction from memorization to problem solving and interactive coaching. These efforts are often described by the term “digital pedagogy.” (For an explanation of digital pedagogy, see Brian Croxall’s elegant definition. 2) I’m often critical of digital pedagogy efforts because they can confuse priorities and focus on the digital at the expense of the pedagogy. The stories in this section do not make this mistake and correctly focus on harnessing the affordances of technology (the things we can do now that were not previously possible) to achieve student success and foster learning.
One particularly impressive story, Web-Based Problem-Solving Coaches for Physics Students, explained how a physics course used digital tools to enable more detailed feedback on student work using the cognitive apprenticeship model. This solution encouraged the development of problem-solving skills and has the potential to scale better than classical lecture/lab course structures.
Section Two: Creating Solutions
This section focuses on using digital technology to present content to students outside of the classroom. Technology is extending the reach of the university beyond the limits of our campus spaces, and this section addresses how innovations can make distance education more effective. A common theme here is the concept of the flipped classroom. (See Salman Khan’s TED talk for a good description of flipping the classroom. 3) In a flipped classroom, the traditional structure, in which content is presented to students in lectures during class time and creative work is assigned as homework, is flipped: content is presented outside the classroom, and instructors lead students in creative projects during class time. Solutions listed in this section include podcasts, video podcasts, and screencasts. They also address synchronous and asynchronous methods of distance education and some theoretical approaches for instructors to employ as they transition from primarily face-to-face instruction to more blended instruction environments.
Of special note is the story Creating Productive Presence: A Narrative, in which the instructor assesses the steps taken to provide a distance cohort with the appropriate levels of instructor intervention and student freedom. In face-to-face instruction, students can read the instructor’s body language and other non-verbal cues. Distance students, without these familiar cues, experienced anxiety in a text-only communication environment. Using delegates from student group projects and focus groups, the instructor was able to find an appropriate classroom presence balanced between cold distance and micro-management of the group projects.
Section Three: Providing Direction
The focus of this section is on innovative new tools for administration and leadership and how administration can provide leadership and support for the embrace of disruptive technologies on campus. The stories here tie the overall effort to use technology to advance student success to accreditation, often a necessary step to motivate any campus to make uncomfortable changes. Data archives, the institutional repository, clickers (class polling systems), and project management tools fall under this general category.
The University Digital Conservancy: A Platform to Publish, Share, and Preserve the University’s Scholarship is of particular interest to librarians. Written by three UM librarians, it makes a case for institutional repositories, explains their implementation, discusses tracking article-level impacts, and most importantly includes some highly useful models for assessing institutional repository impact and use.
Section Four: Extending Reach
The final section discusses ways technology can enable the university to reach wider audiences. Examples include moving courseware content to mobile platforms, using SMS messaging to gather research data, and using mobile devices to scale the collection of oral histories. Digital objects scale in ways that physical objects cannot and these projects take advantage of this scale to expand the reach of the university.
Not to be missed in this section is R U Up 4 it? Collecting Data via Texting: Developing and Testing of the Youth Ecological Momentary Assessment System (YEMAS). R U Up 4 it? is the story of using SMS (texting) to gather real-time survey data from teen populations.
Propagating the Meme
The stories and practical experiences recorded in Cultivating Change in the Academy are valuable in their own right. It is a great resource for ideas and shared experience for anyone looking for creative ways to leverage technology to achieve educational goals. For this reader though, the real value of this project is the format used to create it. The book is full of valuable and interesting content. However, in the digital world, content isn’t king. As Cory Doctorow tells us:
Content isn’t king. If I sent you to a desert island and gave you the choice of taking your friends or your movies, you’d choose your friends — if you chose the movies, we’d call you a sociopath. Conversation is king. Content is just something to talk about. (http://boingboing.net/2006/10/10/disney-exec-piracy-i.html)
The process the University of Minnesota followed to generate conversation around technology and student success is detailed in a white paper. 4 After reading some of the stories in Cultivating Change, if you find yourself wishing similar conversations could take place on your campus, this is the road map the University of Minnesota followed. Before they were able to publish their stories, the University of Minnesota had to bring together their faculty, staff, and administration to talk about employing innovative technological solutions to the project of increasing student success. In a time when conversation trumps content, a successful model for creating these kinds of conversations on our own campuses will also trump the written record of others’ conversations.
- Hill Duin, A. et al (eds) (2012) Cultivating Change in the Academy: 50+ Stories from the Digital Frontlines at the University of Minnesota in 2012, An Open-Source eBook. University of Minnesota. Creative Commons BY NC SA. http://digital-rights.net/wp-content/uploads/books/CC50_UMN_ebook.pdf ↩
- http://www.briancroxall.net/digitalpedagogy/what-is-digital-pedagogy/ ↩
- http://www.ted.com/talks/salman_khan_let_s_use_video_to_reinvent_education.html ↩
- http://bit.ly/Rj5AIR ↩
Open access publication makes access to research free for the end reader, but in many fields it is not free for the author of the article. When I told a friend in a scientific field I was working on this article, he replied “Open access is something you can only do if you have a grant.” PeerJ, a scholarly publishing venture that started up over the summer, aims to change this and make open access publication much easier for everyone involved.
While the first publication isn’t expected until December, in this post I want to examine in greater detail the variation on the “gold” open-access business model that PeerJ states will make it financially viable 1, and the open peer review that will drive it. Both of these models are still very new in the world of scholarly publishing, and require new mindsets for everyone involved. Because PeerJ comes out of funding and leadership from Silicon Valley, it can more easily break from traditional scholarly publishing and experiment with innovative practices. 2
PeerJ is a platform that will host a scholarly journal called PeerJ and a pre-print server (similar to arXiv), both publishing biological and medical scientific research. Its founders are Peter Binfield (formerly of PLoS ONE) and Jason Hoyt (formerly of Mendeley), both of whom are familiar with disruptive models in academic publishing. The “J” in the title stands for Journal, but Jason Hoyt explains on the PeerJ blog that while the journal as such is no longer a necessary model for publication, we still hold on to it: “The journal is dead, but it’s nice to hold on to it for a little while.” 3 The project launched in June of this year, and while no major updates have been posted yet on the PeerJ website, they seem to be moving towards their goal of publishing in late 2012.
To submit a paper for consideration in PeerJ, authors must buy a “lifetime membership” starting at $99. (You can submit a paper without paying, but it costs more in the end to publish it.) This would allow the author to publish one paper in the journal a year. The lifetime membership is only valid as long as you meet certain participation requirements, which at minimum means reviewing at least one article a year. Reviewing in this case can mean as little as posting a comment to a published article. Without that, the author might have to pay the $99 fee again (though as yet it is unclear how strictly PeerJ will enforce this rule). The idea behind this is to “incentivize” community participation, a practice that has met with limited success in other arenas. Each author on a paper, up to 12 authors, must pay the fee before the article can be published. The Scholarly Kitchen blog did some math and determined that for most lab setups, publication fees would come to about $1,124 4, which is equivalent to other similar open access journals. Of course, some of those researchers wouldn’t have to pay the fee again; for others, it might have to be paid again if they are unable to review other articles.
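To make the membership math concrete, here is a back-of-the-envelope sketch. Only the $99 entry-level fee and the 12-author cap come from PeerJ’s announcement; the author counts below are hypothetical illustrations, not the Scholarly Kitchen’s actual calculation:

```python
# Rough estimate of what a single paper costs under PeerJ's membership model.
# Only the $99 "starting at" tier and the 12-author cap come from PeerJ's
# announcement; the example author counts are hypothetical.
ENTRY_TIER = 99          # lowest membership: one paper per year
MAX_PAYING_AUTHORS = 12  # at most 12 authors on a paper must hold memberships


def paper_cost(n_authors, fee_per_author=ENTRY_TIER):
    """Total membership fees due before one paper can be published."""
    return min(n_authors, MAX_PAYING_AUTHORS) * fee_per_author


print(paper_cost(7))   # hypothetical seven-author lab paper: 7 x $99 = $693
print(paper_cost(20))  # a large collaboration hits the cap: 12 x $99 = $1188
```

Even at the entry tier, a multi-author paper lands in the same rough neighborhood as the Scholarly Kitchen’s $1,124 figure; the savings come in later years, when members who keep reviewing publish again without paying.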
Peer Review: Should it be open?
PeerJ, as the name and the lifetime membership model imply, will certainly be peer-reviewed. But, keeping with its innovative practices, it will use open peer review, a relatively new model. Peter Binfield explained in this interview PeerJ’s thinking behind open peer review.
…we believe in open peer review. That means, first, reviewer names are revealed to authors, and second, that the history of the peer review process is made public upon publication. However, we are also aware that this is a new concept. Therefore, we are initially going to encourage, but not require, open peer review. Specifically, we will be adopting a policy similar to The EMBO Journal: reviewers will be permitted to reveal their identities to authors, and authors will be given the choice of placing the peer review and revision history online when they are published. In the case of EMBO, the uptake by authors for this latter aspect has been greater than 90%, so we expect it to be well received. 5
In single-blind peer review, the reviewers would know the name of the author(s) of the article, but the author would not know who reviewed the article. The reviewers could write whatever sorts of comments they wanted without the author being able to communicate with them. For obvious reasons, this lends itself to abuse: reviewers might reject articles by people they did not know or like, or tend to accept articles from people they did like. 6 Even people who are trying to be fair can accidentally fall prey to bias when they know the names of the submitters.
Double-blind peer review in theory takes away the ability for reviewers to abuse the system. A link that has been passed around library conference planning circles in the past few weeks is JSConf EU 2012, which managed to improve its ratio of female presenters by going to a double-blind system. Double blind is the gold standard of peer review for many scholarly journals. Of course, it is not a perfect system either. It can be hard to obscure the identity of a researcher in a small field in which everyone is working on unique topics. It is also a much lengthier process, with more steps involved in the review. To this end, it is less than ideal for breaking medical or technology research that needs to be made public as soon as possible.
In open peer review, the reviewers and the authors are known to each other. By allowing direct communication between reviewer and researcher, this speeds up the process of revisions and allows for greater clarity. 7 While open peer review doesn’t negatively affect the quality of the reviews or the articles, it does make it more difficult to find qualified reviewers to participate, and it might make a less well-known reviewer more likely to accept the work of a senior colleague or well-known lab. 8
Given the experience of JSConf and a great deal of anecdotal evidence from women in technical fields, it seems likely that open peer review is open to the same potential abuse as single-blind peer review. While open peer review might make the rejected author able to challenge unfair rejections, this would require that the rejected author feels empowered enough in that community to speak up. Junior scholars who know they have been rejected by senior colleagues may not want to cause a scene that could affect future employment or publication opportunities. On the other hand, if they can get useful feedback directly from respected senior colleagues, that could make all the difference in crafting a stronger article and going forward with a research agenda. Therein lies the dilemma of open peer review.
Who pays for open access?
A related problem for junior scholars exists in open access funding models, at least in STEM publishing. As open access stands now, there are a few different models that are still being fleshed out. Green open access is free to the author and free to the reader; it is usually funded by grants, institutions, or scholarly societies. Gold open access is free to the end reader but has a publication fee charged to the author(s).
This situation is very confusing for researchers, since when they are confronted with a gold open access journal they will have to be sure the journal is legitimate (Jeffrey Beall has a list of Predatory Open Access journals to aid in this) as well as secure funding for publication. While there are many schemes in place for paying publication fees, there are no well-defined practices in place that illustrate long-term viability. Often this is accomplished by grants for the research, but not always. The UK government recently approved a report that suggests that issuing “block grants” to institutions to pay these fees would ultimately cost less due to reduced library subscription fees. As one article suggests, the practice of “block grants” or other funding strategies are likely to not be advantageous to junior scholars or those in more marginal fields 9. A large research grant for millions of dollars with the relatively small line item for publication fees for a well-known PI is one thing–what about the junior humanities scholar who has to scramble for a few thousand dollar research stipend? If an institution only gets so much money for publication fees, who gets the money?
By offering a $99 lifetime membership for the lowest level of publication, PeerJ offers hope to the junior scholar or graduate student to pursue projects on their own or with a few partners without worrying about how to pay for open access publication. Institutions could more readily afford to pay even $250 a year for highly productive researchers who were not doing peer review than the $1000+ publication fee for several articles a year. As above, some are skeptical that PeerJ can afford to publish at those rates, but if it is possible, that would help make open access more fair and equitable for everyone.
Open access with low-cost paid up front could be very advantageous to researchers and institutional bottom lines, but only if the quality of articles, peer reviews, and science is very good. It could provide a social model for publication that will take advantage of the web and the network effect for high quality reviewing and dissemination of information, but only if enough people participate. The network effect that made Wikipedia (for example) so successful relies on a high level of participation and engagement very early on to be successful (Davis). A community has to build around the idea of PeerJ.
In almost the opposite method, but looking to achieve the same effect, this last week the Sponsoring Consortium for Open Access Publishing in Particle Physics (SCOAP3) announced that after years of negotiations they are set to convert publishing in that field to open access starting in 2014. 10 This means that researchers (and their labs) would not have to do anything special to publish open access and would do so by default in the twelve journals in which most particle physics articles are published. The fees for publication will be paid upfront by libraries and funding agencies.
So is it better to start a whole new platform, or to work within the existing system to create open access? If open (and, through a commenting system, ongoing) peer review makes for a lively and engaging network, and low-cost open access makes publication cheaper, then PeerJ could accomplish something extraordinary in scholarly publishing. But until then, it is encouraging that organizations are working from both sides.
- Brantley, Peter. “Scholarly Publishing 2012: Meet PeerJ.” PublishersWeekly.com, June 12, 2012. http://www.publishersweekly.com/pw/by-topic/digital/content-and-e-books/article/52512-scholarly-publishing-2012-meet-peerj.html. ↩
- Davis, Phil. “PeerJ: Silicon Valley Culture Enters Academic Publishing.” The Scholarly Kitchen, June 14, 2012. http://scholarlykitchen.sspnet.org/2012/06/14/peerj-silicon-valley-culture-enters-academic-publishing/. ↩
- Hoyt, Jason. “What Does the ‘J’ in ‘PeerJ’ Stand For?” PeerJ Blog, August 22, 2012. http://blog.peerj.com/post/29956055704/what-does-the-j-in-peerj-stand-for. ↩
- http://scholarlykitchen.sspnet.org/2012/06/14/is-peerj-membership-publishing-sustainable/ ↩
- Brantley ↩
- Wennerås, Christine, and Agnes Wold. “Nepotism and sexism in peer-review.” Nature 387, no. 6631 (May 22, 1997): 341–3. ↩
- For an ingenious way of demonstrating this, see Leek, Jeffrey T., Margaret A. Taub, and Fernando J. Pineda. “Cooperation Between Referees and Authors Increases Peer Review Accuracy.” PLoS ONE 6, no. 11 (November 9, 2011): e26895. ↩
- Mainguy, Gaell, Mohammad R Motamedi, and Daniel Mietchen. “Peer Review—The Newcomers’ Perspective.” PLoS Biology 3, no. 9 (September 2005). http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1201308/. ↩
- Crotty, David. “Are University Block Grants the Right Way to Fund Open Access Mandates?” The Scholarly Kitchen, September 13, 2012. http://scholarlykitchen.sspnet.org/2012/09/13/are-university-block-grants-the-right-way-to-fund-open-access-mandates/. ↩
- Van Noorden, Richard. “Open-access Deal for Particle Physics.” Nature 489, no. 7417 (September 24, 2012): 486. ↩
From Tuesday June 5th through Friday June 8th, 2012, 500 creatives from fields such as computer science, art, design, and data visualization gathered together to listen, converse, and participate in the second Eyeo Festival. Held at the Walker Art Center in Minneapolis, MN, the event’s organizers created an environment of learning, exchange, exploration, and fun. There were various workshops with some top names leading the way. Thoughtfully curated presentations throughout the day complemented keynotes held nightly in party-like atmospheres: Eyeo was an event not to be missed. With speakers ranging from independent artists to the highest levels of innovative companies, Eyeo offered inspiration on many levels.
Why the Eyeo Festival?
As I began to think about what I experienced at the Eyeo Festival, I struggled to express exactly how impactful this event was for me and those I connected with. In a way, Eyeo is like TED; in fact, many presenters have given TED talks. Eyeo has a more targeted focus on art, design, data, and creative code, but it is also so much more than that. The festival kicked off with Zygotes, an interactive art and sound installation by Tangible Interaction; though the video is a poor substitute for actually being there, it still evokes a sense of wonder and possibility. I strongly encourage anyone who is drawn to design, data, art, or interaction, or who wants to express their creativity through code, to attend this outstanding creative event and follow the incredible people that make up the impressive speaker list.
I went to the Eyeo Festival because I like to seek out what professionals in other fields are doing. I like staying curious and stretching outside my comfort zone in big ways, surrounding myself with people doing things I don’t understand, and then trying to understand them. Over the years I’ve been to many library conferences and there are some amazing events with excellent programming but they are, understandably, very library-centric. So, to challenge myself, I decided to go to a conference where there would be some content related to libraries but that was not a library conference. There are many individuals and professions outside of libraries that care about many of the same values and initiatives we do, that work on similar kinds of problems, and have the same drive to make the world a better place. So why not talk to them, ask questions, learn, and see what their perspective is? How do they approach and solve problems? What is their process in creating? What is their perspective and attitude? What kind of communities are they part of and work with?
I was greatly inspired by the group of librarians who have attended the SXSWi Festival, a group that has grown over the years. There is now a rather large number of librarians speaking about and advocating for libraries on such an innovative and elevated platform. There is even a Facebook Group where professionals working in libraries, archives, and museums can connect with each other for encouragement, support, and collaboration in relation to SXSWi. Andrea Davis, Reference & Instruction Librarian at the Dudley Knox Library, Naval Postgraduate School in Monterey, CA, has been heavily involved in leading librarians to collaborate at SXSW. She states, “I’ve found it absolutely invigorating to get outside of library circles to learn from others, and to test the waters on what changes and effects are having on those not so intimately involved in libraries. Getting outside of library conferences keeps the blood flowing across tech, publishing, education. Insularity doesn’t do much for growth and learning.”
I’ve also been inspired by librarians who have been involved in the TED community, such as Janie Herman and her leadership with Princeton Public Library’s local TEDx, in addition to her participation in the TEDxSummit in Doha, Qatar. Additionally, Chrystie Hill, the Community Relations Director at OCLC, has given more than one TEDx talk about libraries. Seeing our library colleagues represent our profession in arenas broader than libraries is energizing and infectious.
Librarians having a seat at the table and a voice at two of the premier innovative gatherings in the world is powerful. This concept of librarians embedding themselves in communities outside of librarianship has been discussed in a number of articles including The Undergraduate Science Librarian and In the Library With the Lead Pipe.
Rather than give detailed, comprehensive coverage of Eyeo, I’ll offer a glimpse of a few presentations plus a number of resources so that you can see for yourself some of the amazing, collaborative work being done. Presenters’ names link to the full talks, which you can watch for yourself. Because a lot of the work being done is interactive and participatory in some way, I encourage you to seek these projects out and interact with them. The organizers are in the midst of processing a lot of videos and putting them up on the Eyeo Festival Vimeo channel; I highly recommend watching them and checking back for more.
Ben Fry, principal of Fathom, a Boston-based design and data visualization firm, and co-initiator of the programming language Processing, does work in data visualization and design that is worth delving into. In his Eyeo presentation, 3 Things, the project that most stood out was the digitization project Fathom produced for GE: http://fathom.info/latest/category/ge. Years of annual reports were beautifully digitized and incorporated into an interactive web application built from scratch. When faced with scanning issues, the team built a tool that improved the scanned results.
Jer Thorp, data artist in residence for the New York Times and a former geneticist, has far-ranging experience working with data, art, and design. Thorp is one of the founders of the Eyeo Festival, and in his presentation Near/Far he discussed several data visualization projects with a focus on storytelling. Two main pieces stood out from Jer’s talk. The first was his encouragement to dive into data visualization; he even included 10-year-old Theodore Zaballos’s handmade visualization of The Iliad, which was rather impressive. The other was his focus on data visualization in the context of location, and on people owning their own data rather than a third party; this led into the Open Paths project he showcased. He has also presented to librarians at the Canadian library conference Access 2011.
Jen Lowe was by far the standout from all of the amazing Ignite Eyeo talks. She spoke about how people are intrinsically inspired by storytelling and the need for those working with data to focus on storytelling through the use of visualizing data and the story it tells. She works for the Open Knowledge Foundation in addition to running Datatelling and she has her library degree (she’s one of us!).
Jonathan Harris gave one of the most personal and poignant presentations at Eyeo. In a retrospective of his work, Jonathan covered years of work interwoven with personal stories from his life. Jonathan is an artist and designer and his work life and personal life are rarely separated. Each project began with the initial intention and ended with a more critical inward examination from the artist. The presentation led to his most recent endeavor, the Cowbird project, where storytelling once again emerges strongly. In describing this project he focused on the idea that technology and software could be used for good, in a more human way, created by “social engineers” to build a community of storytellers. He describes Cowbird as “a community of storytellers working to build a public library of human experience.”
Additional people + projects to delve into:
Fernanda Viegas and Martin Wattenberg of the Google Big Picture data visualization group. Wind Map: http://hint.fm/wind/
Kyle McDonald: http://kylemcdonald.net/
Tahir Hemphill: http://tahirhemphill.com/ and his latest work, Hip Hop Word Count: http://staplecrops.com/index.php/hiphop_wordcount/
Julian Oliver: http://julianoliver.com/
Nicholas Felton of Facebook: http://feltron.com/
Local Projects: http://localprojects.net/
Oblong Industries: http://oblong.com/
Eyebeam Art + Technology Center: http://eyebeam.org/
What can libraries get from the Eyeo Festival?
Libraries and library work were everywhere at this conference. It is thrilling that this eclectic group of creative people is often thinking about and producing work similar to that of librarians. There is incredible potential for libraries to embrace some of the concepts and problems in many of the presentations I saw and conversations I was part of. There are multiple ways that libraries could learn from and perhaps participate in this broader community and work across fields.
People love libraries, and these attendees were no exception. There were attendees from numerous private/corporate companies, newspapers, museums, government, libraries, and more. I was not the only library professional in attendance, so I suspect those individuals might see the potential I see, which I also find really exciting. The drive behind every presenter and attendee was, above all, creativity in some form: the desire to make something and to communicate. The breadth of creativity and imagination that I saw reminded me of a quote from David Lankes in his keynote from the New England Library Association Annual Conference:
“What might kill our profession is not ebooks, Amazon or Google, but a lack of imagination. We must envision a bright future for librarians and the communities they serve, then fight to make that vision a reality. We need a new activist librarianship focused on solving the grand challenges of our communities. Without action we will kill librarianship.”
If librarianship is in need of more imagination and perhaps creativity too, there is a world of wonder out there in terms of resources to help us achieve this vision.
The Eyeo Festival is but one place where we can become inspired, learn, and dream, and then bring that experience back to our libraries and inject our own imagination, ideas, experimentation, and creativity into the work we do. Doing the most creative, imaginative library work we can will inspire our communities; I have seen it firsthand. Eyeo taught me that I need to fail more, focus more, make more, and have more fun doing it all.
Eating Your Own Dog Food
One of the most memorable experiences I had as a library student was becoming a patron of my own library. As an online library school student* I usually worked either in my office at pre-approved times, or at home. However, depending on the assignment, sometimes I worked out at the reference area public access computers. It nearly drove me mad, for a very simple reason: this was in the day before optical mice, and the trackballs on our mice were incredibly sticky and jerky, despite regular cleaning routines. It was so bad I wondered how students could stand to work on our workstations, and how it made them feel about the library in general, since there is nothing like a solid hour or so of constantly repeated, albeit small, irritations to make a person develop indelible negative feelings toward a particular environment.
I’ve heard the same thing from colleagues that have started graduate programs here at my university; they are shocked at how hard it can be to be a student in the library, even with insider knowledge, and it can be demoralizing (and galvanizing) to watch classmates and even instructors dismiss library services and resources with “too confusing” or “learning curve too steep” as they ruthlessly practice least-effort satisficing for their information needs.
In information technology circles, the concept of having to use your own platforms/services is known as “eating your own dog food” or “dogfooding.” While there are pitfalls to relying too heavily on it as an assessment tool (we all have insider knowledge about libraries, software, and resources that can smooth the process for us), it is an eye-opening exercise, especially to listen to our users be brutally frank about what we offer — or don’t.
DIY Universities and Open Education
I am suggesting something related but complementary to dogfooding — sampling the models and platforms of a burgeoning movement that has the potential to be a disruptive force in higher education. DIY U and the coming transformation of education are all the rage (pun intended) these days, as prestigious universities and professors, Edupunks, loose collaboratives, and start-ups participate in collaborative free online offerings through various platforms and with different aims: Coursera, Khan Academy, P2PU, MIT OpenCourseWare, Udacity, NYU Open Education, and many more. This is a call to action for us as librarians. Instead of endlessly debating what this might mean, or where it might be going, and this movement’s possible effect on academic libraries, I suggest actually signing up for a course and experiencing it first-hand.
For library technologists facing the brave new world of higher education in the 21st century, there are three major advantages to taking a class in one of the new experimental DIY universities. First, we get to experience new platforms, delivery mechanisms, and modes of teaching, some of which may be applicable to the work of the academic library. Second, many of the courses offered are technical courses directly applicable to our daily work. Third, it allows us as academic participants to personally assess the often intemperate and hyperbolic language on both sides of the debate: “can’t possibly be as good as institutional campus-based face-to-face EVER” versus “this changes everything, FOREVER.” How many faculty on your campuses do you think have actually taken an online class, especially in one of these open educational initiatives? This is an opportunity to become an informed voice in any local campus debates and conversations. These conversations and debates will involve our core services, whether faculty and administrators realize it or not.
It will also encourage some future-oriented thinking about where libraries could fit into this changing educational landscape. One of the more interesting possible effects in these collaborative, open-to-all ventures is the necessity of using free or open access high quality resources. Where will that put the library? What does that mean for instructional resources hidden behind a particular institution’s authentication wall? Academic libraries and services have been tied to a particular institution — what happens when those affiliations blur and change extremely rapidly? There are all sorts of implications for faculty, students, libraries, vendors, and open access/open educational resources platforms. As a thought exercise, take a look at these seven predictions for the future of technology-enabled universities from JISC’s Head of Innovation, Sarah Porter. Which ones DON’T involve libraries? As a profession, let’s get out on the bleeding edge and investigate the developing models.
I just signed up for “Model Thinking” through Coursera. Taught by Professor Scott E. Page from the Center for the Study of Complex Systems at the University of Michigan, the course will cover modeling information to make sense of trends, social movements, and behaviors, because “evidence shows that people who think with models consistently outperform those who don’t. And, moreover people who think with lots of models outperform people who use only one.” That sounds applicable to making decisions about e-books, collection development, workflow redesign, changing models of higher education, et cetera.
- Coursera offers clusters of courses in Society, Networks, and Information (Model Thinking, Gamification, Social Networking Analysis, among others) and Computer Science (Algorithms, Compilers, Game Theory, etc.). If you have a music library or handle streaming media in your library, what about Listening to World Music? If you are curious about humanities subjects that have depended on traditional library materials in the past, try A History of the World since 1300 or Greek and Roman Mythology.
- Udacity offers Building a Search Engine, Design of Computer Programs, and Programming a Robotic Car (automate a bookmobile?).
- Set up your own peer class with P2PU, or take Become a Citizen Scientist, Curating Content, or Programming with the Twitter API.
- If you are in the New York City area and can attend an in-person workshop, General Assembly offers Storytelling Skills, Programming Fundamentals for Non-Programmers, and Dodging the Dangers of Copyright Law (taught by participants in Yale Law School’s Information Society Project) as part of a menu of tech and tech-business related workshops. These have fees ranging from $15 to $30.
- Before I take my Model Thinking class, I’m planning to brush up my algebra at Khan Academy.
- Try the archived lectures at Harvard’s “Building Mobile Applications”, hosted in their institutional repository.
- Health Sciences Librarian? What about Information Technology in the Health Care System of the Future from MIT OpenCourseWare?
* Full disclosure: I am a proud graduate of University of Illinois’ LEEP (5.0) MSLIS program, and I also have another master’s degree done the old fashioned way, and I am an enthusiastic supporter of online education done correctly.
Librarians, as a rule, don’t tolerate anarchy well. They like things to be organized and to follow processes. But when it comes to emerging technologies, too much reliance on planning and committees can stifle creativity and delay adoption. The open source software community can offer librarians models for how to make progress on big projects with minimal oversight.
“Lazy consensus” is one such model from which librarians can learn a lot. At the Code4Lib conference in February 2012, Bethany Nowviskie of the University of Virginia Scholars’ Lab encouraged library development teams to embrace this concept in order to create more innovative libraries. (I encourage you to watch a video or read the text of her keynote.) This goes for all sizes and types of academic libraries, whether they have a development staff or just staff with enthusiasm for learning about emerging technologies.
What is lazy consensus?
According to the Apache software foundation:
Lazy Consensus means that when you are convinced that you know what the community would like to see happen you can simply assume that you already have consensus and get on with the work. You don’t have to insist people discuss and/or approve your plan, and you certainly don’t need to call a vote to get approval. You just assume you have the community’s support unless someone says otherwise.
(quote from http://incubator.apache.org/odftoolkit/docs/governance/lazyConsensus.html)
Nowviskie suggests lazy consensus as a way to cope with an institutional culture where “no” is too often the default answer, since in lazy consensus the default answer is “yes.” If someone doesn’t agree with a proposal, he or she must present and defend an alternative within a reasonable amount of time (usually 72 hours). This ensures that the people who really care about a project have a chance to speak up and make sure the project is going in the right direction. By changing the default answer to YES, we make it easier to move forward on the things we really care about.
When you care about delivering the best possible experience and set of services for your library patrons, you should advocate for ways to make that happen and spend your time thinking about how to make that happen. Nowviskie points out the kinds of environments in which this is likely to thrive. Developers and technologists need time for research and development, “20% time” projects, and freedom to explore new possibilities. Even at small libraries without any development staff, librarians need time to research and understand issues of technology in libraries to make better decisions about the adoption of emerging technologies.
Implementing lazy consensus
Implementing lazy consensus in your library must be done with care. First and foremost, you must be aware of the culture you are in and be respectful of it even as you see room for change and improvement. Your first day at a new job is not the moment to implement this process across the board, but in your own work or your department’s work you can set an example and a precedent. Nowviskie provides a few guidelines for healthy lazy consensus. Emphasize working hard and with integrity while being open and friendly. Keep everyone informed about what you are working on, and keep your mission in mind as the centerpiece of your work. In libraries, this means you must keep public services involved in any project from the earliest possible stages, and always maintain a commitment to the best possible user experience. When you or your team reliably deliver good results, you will show the value of the process.
While default negativity can certainly stifle creativity, default positivity for all ideas can be equally stifling. Jonah Lehrer wrote in a recent New Yorker article that the evidence shows that traditional brainstorming, where all ideas are presented to a group without criticism, doesn’t work. Creating better ideas requires critiquing wrong assumptions, which in turn helps us examine our own assumptions. In adopting lazy consensus, make sure there is authentic room for debate. Responding to a disagreement about a course of action with reasoned critique and alternate paths is more likely to result in creative ideas, and brings the discussion forward rather than ending it with a “no.”
Librarians know a lot about information and people. The open source software community knows a lot about how to run flexible and transparent organizations. Combining the two can create wonderful experiences for our users.
What is Action Analytics?
If you say “analytics” to most technology-savvy librarians, they think of Google Analytics or similar web analytics services. Many libraries are using such sophisticated data collection and analyses to improve the user experience on library-controlled sites. But the standard library analytics are retrospective: what have users done in the past? Have we designed our web platforms and pages successfully, and where do we need to change them?
Technology is enabling a different kind of future-oriented analytics. Action Analytics is evidence-based, combines data sets from different silos, and uses actions, performance, and data from the past to provide recommendations and actionable intelligence meant to influence future actions at both the institutional and the individual level. We’re familiar with these services in library-like contexts such as Amazon’s “customers who bought this item also bought” book recommendations and Netflix’s “other movies you might enjoy”.
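The co-occurrence logic behind “customers who bought this item also bought” can be reduced to counting item pairs across transaction histories. This is a minimal sketch of that idea; all the titles and checkout sets below are invented for illustration, not drawn from any real system:

```python
from collections import Counter
from itertools import combinations

# Invented checkout histories: each set is one patron's borrowed items.
checkouts = [
    {"intro_stats", "data_viz", "python_basics"},
    {"intro_stats", "data_viz"},
    {"python_basics", "machine_learning"},
    {"intro_stats", "machine_learning", "data_viz"},
]

# Count how often each pair of items appears in the same patron's history.
pair_counts = Counter()
for items in checkouts:
    for a, b in combinations(sorted(items), 2):
        pair_counts[(a, b)] += 1

def recommend(item, top_n=3):
    """Items most often checked out alongside `item`, best first."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [title for title, _ in scores.most_common(top_n)]

print(recommend("intro_stats"))  # 'data_viz' ranks first (3 co-checkouts)
```

A production recommender would weight by recency and popularity, and would have to respect the privacy constraints discussed below, but the counting core is the same idea.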
Action Analytics in the Academic Library Landscape
It was a presentation by Mark David Milliron at Educause 2011 on “Analytics Today: Getting Smarter About Emerging Technology, Diverse Students, and the Completion Challenge” that made me think about the possibilities of the interventionist aspect of analytics for libraries. He described the complex dependencies between inter-generational poverty transmission, education as a disrupter, drop-out rates for first-generation college students, and other factors such as international competition and the job market. Then he moved on to the role of sophisticated analytics and data platforms, and spoke about how they can help individual students succeed by using technology to deliver the right resource at the right time to the right student. Where do these sorts of analytics fit into the academic library landscape?
If your library is like my library, the pressure to prove your value to strategic campus initiatives such as student success and retention is increasing. But assessing services with most analytics is past-oriented; how do we add the kind of library analytics that provide a useful intervention or recommendation? These analytics could be designed to help an individual student choose a database, or trigger a recommendation to dive deeper into reference services like chat reference or individual appointments. We need to design platforms and technology that can integrate data from various campus sources, do some predictive modeling, and deliver a timely text message to an English 101 student that recommends using these databases for the first writing assignment, or suggests an individual research appointment with the appropriate subject specialist (and a link to the appointment scheduler) to every honors student a month into their thesis year.
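A sketch of how such interventions might be triggered: a small rule engine over joined campus data. Every field name, course code, and threshold here is hypothetical, invented only to make the idea concrete:

```python
# Toy rule engine for library "nudges". All fields and thresholds are
# hypothetical illustrations, not a real campus data schema.
def suggest_intervention(student):
    """Return library nudges based on simple rules over joined campus data."""
    msgs = []
    # Rule 1: early-semester writing course -> recommend starter databases.
    if (student.get("course") == "ENGL101"
            and student.get("first_assignment_due_days", 99) <= 14):
        msgs.append("Try these recommended databases for your first writing assignment.")
    # Rule 2: honors student a month into thesis year -> research appointment.
    if student.get("months_into_thesis_year") == 1:
        msgs.append("Book a research appointment with your subject specialist.")
    return msgs

print(suggest_intervention({"course": "ENGL101", "first_assignment_due_days": 10}))
```

A real system would replace these hand-written rules with predictive models trained on past outcomes, but even this toy version shows where patron data would have to flow.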
But should we? Are these sorts of interventions creepy and stalker-ish?* Would this be seen as an invasion of privacy? Does the use of data in this way collide with the profession’s ethical obligation and historical commitment to keep individual patron’s reading, browsing, or viewing habits private?
Every librarian I’ve discussed this with felt the same unease. I’m left with a series of questions: Have technology and online data gathering changed the context and meaning of privacy in such fundamental ways that we need to take a long hard look at our assumptions, especially in the academic environment? (Short answer — yes.) Are there ways to manage opt-in and opt-out preferences for these sorts of services so these services are only offered to those who want them? And does that miss the point? Aren’t we trying to influence the students who are unaware of library services and how the library could help them succeed?
Furthermore, are we modeling our ideas of “creepiness” and our adamant rejection of any “intervention” on the face-to-face model of the past that involved a feeling of personal surveillance and possible social judgment by live flesh persons? The phone app Mobilyze helps those with clinical depression avoid known triggers by suggesting preventative measures. The software is highly personalized and combines all kinds of data collected by the phone with self-reported mood diaries. Researcher Colin Depp observes that participants felt that the impersonal advice delivered via technology was easier to act on than “say, getting advice from their mother.”**
While I am not suggesting in any way that libraries move away from face-to-face, personalized encounters at public service desks, is there room for another model for delivering assistance? A model that some students might find less intrusive, less invasive, and more effective — precisely because it is technological and impersonal? And given the struggle that some students have to succeed in school, and the staggering debt that most of them incur, where exactly are our moral imperatives in delivering academic services in an increasingly personalized, technology-infused, data-dependent environment?
Increasingly, health services, commercial entities, and technologies such as browsers and social networking environments that are deeply embedded in most people’s lives, use these sorts of action analytics to allow the remote monitoring of our aging parents, sell us things, and match us with potential dates. Some of these uses are for the benefit of the user; some are for the benefit of the data gatherer. The moment from the Milliron presentation that really stayed with me was the poignant question that a student in a focus group asked him: “Can you use information about me…to help me?”
Can we? What do you think?
* For a recent article on academic libraries and Facebook that addresses some of these issues, see Nancy Kim Phillips, Academic Library Use of Facebook: Building Relationships with Students, The Journal of Academic Librarianship, Volume 37, Issue 6, December 2011, Pages 512-522, ISSN 0099-1333, 10.1016/j.acalib.2011.07.008. See also a very recent New York Times article on use of analytics by companies which discusses the creepiness factor.
What Library Circulation Data Shows
Unless current patterns change, by 2020 university libraries will no longer have circulation desks. This claim may seem hyperbolic if you’ve been observing your library, or even if you’ve been glancing over ACRL or National Center for Education Statistics data. If you have been looking at the data, you might be familiar with a pattern that looks like this:
This chart shows total circulation for academic libraries, and while there’s a decline, it certainly doesn’t look like it will hit zero anytime soon, definitely not in just 8 years. But there is a problem with this data and this perspective on library statistics. When we talk about “total circulation” we’re talking about a property of the library; we’re not really thinking about users.
Here’s another set of data that you need to look at to really understand circulation:
Academic enrollment has been rising rapidly. This means more students, which in turn means greater circulation. So if total circulation has been dropping despite an increase in users, then something else must be going on. Rather than asking “How many items does my library circulate?” we need to ask “How many items does the average student check out?”
Here is that data:
This chart shows the upper/lower quartiles and median for circulation per FTE student. As you can see, this data shows a much more dramatic drop in the circulation of library materials; rising student populations hide this fact.
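The normalization behind this per-student view is just dividing one data set by the other. The figures below are invented for illustration (not actual NCES numbers), but they show how a modest drop in total circulation can mask a steep drop per student when enrollment grows:

```python
# Invented figures: total annual circulation and FTE enrollment by year.
years = [1996, 2000, 2004, 2008]
total_circ = [190_000_000, 180_000_000, 172_000_000, 165_000_000]
fte_enrollment = [11_000_000, 12_000_000, 14_000_000, 16_000_000]

# Circulation per FTE student: the user-centered view of the same data.
circ_per_fte = [c / f for c, f in zip(total_circ, fte_enrollment)]
for year, value in zip(years, circ_per_fte):
    print(year, round(value, 2))
# Total circulation falls ~13% over this invented span, but per-student
# circulation falls ~40%, because enrollment rose at the same time.
```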
But 2020? Can I be serious? The simple linear regression model in the charts is probably a good predictor of 2012, but not necessarily 2020; hitting zero without flattening out seems pretty unlikely. However, it is worth noting that circulation per user in the lower quartile for less-than-four-year colleges reached 1.1 in 2010. If you’re averaging around 1 item per user, then for every user who checks out 2 items there’s another who has checked out 0.
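To make the extrapolation concrete, here is a naive least-squares fit over an invented circulation-per-FTE series (the values and slope are illustrative, not the actual NCES figures). The point is simply that a straight-line fit marches toward zero even though real-world series usually flatten out:

```python
# Invented median circulation-per-FTE series, for illustration only.
years = [1998, 2002, 2006, 2010]
cpf = [10.0, 8.2, 6.5, 4.9]

# Ordinary least-squares fit of cpf = slope * year + intercept.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(cpf) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, cpf))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

# Extrapolate the fitted line to 2020: near zero for this invented series.
print(round(slope * 2020 + intercept, 2))
```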
What’s Happening Here?
Rather than waste too much time trying to predict a future we’ll live in in less than a decade, let’s explore the more interesting question: “What’s happening here?”
By far the number one hypothesis I get when I show people this data is “Clearly this is just because of the rise of e-journals and e-books.” The hypothesis is reasonable: users have simply switched from print to electronic, and the data represents a shift in media, nothing more.
But there are two very large problems with this hypothesis.
First, print journal circulation is not universal among academic libraries. Where print journals did not circulate, e-journals could have no effect on circulation data. I don’t have information on exactly how many academic libraries did circulate print journals, and perhaps the effect of e-journals on just those libraries could affect the data for everyone. But the quartile data already resolves this issue: libraries that circulated serials would have had higher circulation per user than those that did not, so showing different quartiles addresses the discrepancy between libraries that did and did not circulate journals. If you look at the data you’ll see that the upper quartile does seem to have a higher rate of decline, but not enough to validate the hypothesis. The median and lower quartiles also experience this shift, so something else must be at work.
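The quartile breakdown this argument relies on is straightforward to compute with the standard library. The per-FTE values below are invented to illustrate the spread between light- and heavy-circulating libraries:

```python
import statistics

# Invented circulation-per-FTE values for a set of libraries in one year.
per_fte = [0.8, 1.1, 1.5, 2.0, 2.4, 3.1, 3.9, 5.2, 6.6, 8.4, 10.3]

# statistics.quantiles with n=4 returns the three cut points:
# lower quartile, median, upper quartile.
q1, median, q3 = statistics.quantiles(per_fte, n=4)
print(q1, median, q3)  # 1.5 3.1 6.6
```

Tracking these three cut points year over year, rather than a single total, is what lets the analysis separate serial-circulating libraries from the rest.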
Second, e-books were not widely adopted until the mid-2000s, yet the decline before 2000 is at least as steep as after. If you look at the chart below, you’ll notice that ebook acquisition rates did not exceed print until 2010:
Ebooks, of course, do have an effect on usage, but they’re not the primary factor in this change.
So clearly we must reject the hypothesis that this is merely a media shift. Certainly the shift from print to electronic has had some effect, but it is not the sole cause. If it’s not a shift in media, the most reasonable explanation is that it’s a shift in user behavior. Students are simply not using books (in any format) as much as they used to.
What is Causing this Shift in User Behavior?
The next question is what is the cause of this shift.
I think the simplest answer is the web. 1996 is the first data point showing a drop in circulation. Of course the web was quite small then, but AOL and Yahoo! were already around, and the Internet Archive had been founded. If you think back to a pre-web time, learning more about pretty much anything required a trip to the library and checking out a book.
The most important thing to take away is that, regardless of cause, user behavior has changed and by all data points is still changing. In the end, the greatest question is how will academic libraries adapt? It is clear that the answer is not as simple as a transition to a new media. To survive, librarians must find the answer before we have enough data to prove these predictions.
- All library data referenced in this post comes from the Library Statistics Program (National Center for Education Statistics) nces.ed.gov/pubsearch/getpubcats.asp?sid=041#
- Data regarding fall enrollments is from “Fast Facts” (National Center for Education Statistics) http://nces.ed.gov/fastfacts/display.asp?id=98
About our guest author: Will Kurt is a software engineer at Articulate Global, pursuing his masters in computer science at the University of Nevada, Reno and is a former librarian. He holds an MLIS from Simmons College and has worked in various roles in public, private and special libraries at organizations such as: MIT, BBN Technologies and the University of Nevada, Reno. He has written and presented on a range of topics including: play, user interfaces, functional programming and data