PeerJ: Could it Transform Open Access Publishing?

Open access publication makes access to research free for the end reader, but in many fields it is not free for the author of the article. When I told a friend in a scientific field I was working on this article, he replied “Open access is something you can only do if you have a grant.” PeerJ, a scholarly publishing venture that started up over the summer, aims to change this and make open access publication much easier for everyone involved.

While the first publication isn’t expected until December, in this post I want to examine in greater detail the variation on the “gold” open-access business model that PeerJ states will make it financially viable [1], and the open peer review that will drive it. Both of these models are still very new in the world of scholarly publishing and require new mindsets from everyone involved. Because PeerJ’s funding and leadership come out of Silicon Valley, it can more easily break from traditional scholarly publishing and experiment with innovative practices. [2]

PeerJ Basics

PeerJ is a platform that will host a scholarly journal called PeerJ and a pre-print server (similar to arXiv), both publishing biological and medical research. Its founders are Peter Binfield (formerly of PLoS ONE) and Jason Hoyt (formerly of Mendeley), both of whom are familiar with disruptive models in academic publishing. The “J” in the title stands for Journal, but as Jason Hoyt explains on the PeerJ blog, the journal as such is no longer a necessary model for publication even though we still hold on to it: “The journal is dead, but it’s nice to hold on to it for a little while.” [3] The project launched in June of this year, and while no major updates have been posted yet on the PeerJ website, they seem to be moving toward their goal of publishing in late 2012.

To submit a paper for consideration in PeerJ, authors must buy a “lifetime membership” starting at $99. (You can submit a paper without paying, but it costs more in the end to publish it.) This allows the author to publish one paper in the journal per year. The lifetime membership remains valid only as long as you meet certain participation requirements, which at minimum means reviewing at least one article a year; reviewing in this case can mean as little as posting a comment on a published article. Without that, the author might have to pay the $99 fee again (though it is as yet unclear how strictly PeerJ will enforce this rule). The idea behind this is to “incentivize” community participation, a practice that has met with limited success in other arenas. Each author on a paper, up to 12 authors, must pay the fee before the article can be published. The Scholarly Kitchen blog did some math and determined that for most lab setups, publication fees would come to about $1,124 [4], which is comparable to other similar open access journals. Of course, some of those researchers wouldn’t have to pay the fee again; others might have to pay it again if they are unable to review other articles.

Peer Review: Should it be open?

PeerJ, as the name and the lifetime membership model imply, will certainly be peer-reviewed. But, in keeping with its innovative practices, it will use open peer review, a relatively new model. Peter Binfield explained PeerJ’s thinking behind open peer review in this interview.

…we believe in open peer review. That means, first, reviewer names are revealed to authors, and second, that the history of the peer review process is made public upon publication. However, we are also aware that this is a new concept. Therefore, we are initially going to encourage, but not require, open peer review. Specifically, we will be adopting a policy similar to The EMBO Journal: reviewers will be permitted to reveal their identities to authors, and authors will be given the choice of placing the peer review and revision history online when they are published. In the case of EMBO, the uptake by authors for this latter aspect has been greater than 90%, so we expect it to be well received. [5]

In single-blind peer review, the reviewers know the name of the author(s) of the article, but the author does not know who reviewed it. The reviewers can write whatever sorts of comments they want without the author being able to communicate with them. For obvious reasons, this lends itself to abuse: reviewers might reject articles by people they do not know or like, or tend to accept articles from people they do like [6]. Even people who are trying to be fair can accidentally fall prey to bias when they know the names of the submitters.

Double-blind peer review in theory takes away reviewers’ ability to abuse the system. A link that has been passed around library conference planning circles in the past few weeks is JSConf EU 2012, which managed to improve its ratio of female presenters by moving to a double-blind selection process. Double-blind review is the gold standard for many scholarly journals. Of course, it is not a perfect system either. It can be hard to obscure the identity of a researcher in a small field in which everyone is working on unique topics, and the extra steps make the review process considerably lengthier. For that reason, it is less than ideal for breaking medical or technology research that needs to be made public as soon as possible.

In open peer review, the reviewers and the authors are known to each other. Direct communication between reviewer and researcher speeds up the process of revisions and allows for greater clarity [7]. Open peer review does not negatively affect the quality of the reviews or the articles, but it does make it more difficult to find qualified reviewers to participate, and it might make a less well-known researcher more likely to accept the work of a senior colleague or well-known lab [8].

Given the experience of JSConf and a great deal of anecdotal evidence from women in technical fields, it seems likely that open peer review is open to the same potential abuse as single-blind peer review. While open peer review might allow a rejected author to challenge unfair rejections, this requires that the rejected author feel empowered enough in that community to speak up. Junior scholars who know they have been rejected by senior colleagues may not want to cause a scene that could affect future employment or publication opportunities. On the other hand, if they can get useful feedback directly from respected senior colleagues, that could make all the difference in crafting a stronger article and moving forward with a research agenda. Therein lies the dilemma of open peer review.

Who pays for open access?

A related problem for junior scholars exists in open access funding models, at least in STEM publishing. As open access stands now, there are a few different models that are still being fleshed out. Green open access is free to the author and free to the reader; it is usually funded by grants, institutions, or scholarly societies. Gold open access is free to the end reader but has a publication fee charged to the author(s).

This situation is very confusing for researchers: when confronted with a gold open access journal, they have to be sure the journal is legitimate (Jeffrey Beall has a list of predatory open access journals to aid in this) as well as secure funding for publication. While there are many schemes in place for paying publication fees, there are no well-defined practices that demonstrate long-term viability. Often the fees are covered by the grant funding the research, but not always. The UK government recently approved a report suggesting that issuing “block grants” to institutions to pay these fees would ultimately cost less due to reduced library subscription fees. As one article suggests, “block grants” and other funding strategies are likely not to be advantageous to junior scholars or those in more marginal fields [9]. A multi-million-dollar research grant with a relatively small line item for publication fees for a well-known PI is one thing; what about the junior humanities scholar who has to scramble for a research stipend of a few thousand dollars? If an institution only gets so much money for publication fees, who gets it?

By offering a $99 lifetime membership for the lowest level of publication, PeerJ gives the junior scholar or graduate student hope of pursuing projects on their own or with a few partners without worrying about how to pay for open access publication. Institutions could more readily afford even $250 a year for highly productive researchers who were not doing peer review than a $1000+ publication fee for several articles a year. As above, some are skeptical that PeerJ can afford to publish at those rates, but if it can, that would help make open access fairer and more equitable for everyone.

Conclusion

Open access with a low cost paid up front could be very advantageous to researchers and to institutional bottom lines, but only if the quality of articles, peer reviews, and science is very good. It could provide a social model for publication that takes advantage of the web and the network effect for high-quality reviewing and dissemination of information, but only if enough people participate. The network effect that made Wikipedia (for example) so successful relies on a high level of participation and engagement very early on [Davis]. A community has to build around the idea of PeerJ.

In almost the opposite approach, but looking to achieve the same effect, the Sponsoring Consortium for Open Access Publishing in Particle Physics (SCOAP3) announced this past week that, after years of negotiations, it is set to convert publishing in that field to open access starting in 2014 [10]. This means that researchers (and their labs) would not have to do anything special to publish open access; they would do so by default in the twelve journals in which most particle physics articles are published. The fees for publication will be paid up front by libraries and funding agencies.

So is it better to start a whole new platform, or to work within the existing system to create open access? If open (and, through a commenting system, ongoing) peer review makes for a lively and engaging network, and low-cost open access makes publication cheaper, then PeerJ could accomplish something extraordinary in scholarly publishing. Until then, it is encouraging that organizations are working from both sides.

  1. Brantley, Peter. “Scholarly Publishing 2012: Meet PeerJ.” PublishersWeekly.com, June 12, 2012. http://www.publishersweekly.com/pw/by-topic/digital/content-and-e-books/article/52512-scholarly-publishing-2012-meet-peerj.html.
  2. Davis, Phil. “PeerJ: Silicon Valley Culture Enters Academic Publishing.” The Scholarly Kitchen, June 14, 2012. http://scholarlykitchen.sspnet.org/2012/06/14/peerj-silicon-valley-culture-enters-academic-publishing/.
  3. Hoyt, Jason. “What Does the ‘J’ in ‘PeerJ’ Stand For?” PeerJ Blog, August 22, 2012. http://blog.peerj.com/post/29956055704/what-does-the-j-in-peerj-stand-for.
  4. “Is PeerJ Membership Publishing Sustainable?” The Scholarly Kitchen, June 14, 2012. http://scholarlykitchen.sspnet.org/2012/06/14/is-peerj-membership-publishing-sustainable/
  5. Brantley, “Scholarly Publishing 2012: Meet PeerJ.”
  6. Wennerås, Christine, and Agnes Wold. “Nepotism and sexism in peer-review.” Nature 387, no. 6631 (May 22, 1997): 341–3.
  7. For an ingenious way of demonstrating this, see Leek, Jeffrey T., Margaret A. Taub, and Fernando J. Pineda. “Cooperation Between Referees and Authors Increases Peer Review Accuracy.” PLoS ONE 6, no. 11 (November 9, 2011): e26895.
  8. Mainguy, Gaell, Mohammad R Motamedi, and Daniel Mietchen. “Peer Review—The Newcomers’ Perspective.” PLoS Biology 3, no. 9 (September 2005). http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1201308/.
  9. Crotty, David. “Are University Block Grants the Right Way to Fund Open Access Mandates?” The Scholarly Kitchen, September 13, 2012. http://scholarlykitchen.sspnet.org/2012/09/13/are-university-block-grants-the-right-way-to-fund-open-access-mandates/.
  10. Van Noorden, Richard. “Open-access Deal for Particle Physics.” Nature 489, no. 7417 (September 24, 2012): 486.

Tablets in Library Workflows: Revolution & Healthy Skepticism

 
Tablet Revolution: Healthy Skepticism

Tablets and mobile computing have been the subject of a lot of Internet hype. A quick search for “tablet revolution” will confirm this, but if we’re appropriately skeptical about the hype cycle, we’ll want to test the impact of tablets on our own libraries ourselves. We can do this in a few ways. We can check the literature to see what studies have been done [1]. We can check our web analytics to see which devices are being used to access our web sites [2]. We can also walk the public areas in our libraries and count patrons working on tablets. These investigations can tell us how and how often tablets are being used, but they don’t tell us how, or whether, tablets are revolutionizing library use.

In order to better answer this question, I started a little project. Over the last year, I’ve been using informal methods to track the effects that tablet use has had on my work. I secured some equipment funding and acquired an Apple iPad 2 and an Android tablet, the Asus Transformer Prime. I started doing my work on these devices, keeping an eye on how they changed my daily workflow, how suited they were to my daily tasks, and whether or not they increased my productivity or the quality of my work. Over the course of the year I can report that tablets have changed the way I work. Most of the changes are incremental, but there are at least a couple cases of genuine revolution to report.

Deploying Tablets in my Workflow

As I spent time doing my work on the tablets, I discovered there were three possible results of my efforts to integrate them into my daily work. Some tasks simply did not translate well to the tablet environment. Other tasks translated fairly seamlessly: what I could do on a computer I could also do on a tablet. Finally, there were a few cases where the affordances of the tablets (touch interface, networked portability, and the app environment) enabled me to do my work in new ways, ways not possible using a traditional workstation or laptop.

The first sort of task, the kind for which tablets failed to produce positive results, tended to involve heavy processing requirements, the need to connect peripheral devices, or complex software not ported to mobile apps. Examples included editing image, sound, or video files; analyzing datasets; and creating presentation slides. The tablets lacked the processing power, peripheral interfaces, or fine interface control to make them adequate platforms for the editing tasks. Statistical analysis software shares the same heavy processor requirements, and I was unable to find mobile apps equivalent to SPSS or ATLAS.ti. In the case of presentation slides, all the necessary conditions for success seemed to be present. Keynote for iOS is a great app, but I was never satisfied with the quality of my tablet-created presentations and soon returned to composing my slides in Keynote on my laptop. As a general rule of thumb, I found tasks that require lots of processing power, super-fine input control (fingers and even styli are imprecise on touch screens), or highly specialized software environments to be poor candidates for moving to tablets.

The majority of my day-to-day work fell into the second set of tasks, where I could easily replace my traditional computer with a tablet. I discovered that after a little research to find the proper apps and a little time to learn how to use them, a tablet was as good as a computer most of the time. At first, I experimented with treating the tablet as a small portable computer. I acquired Apple’s Bluetooth keyboard and the keyboard dock accessory for the Transformer and was able to do word processing, text editing and coding, email, instant messaging, and pretty much any browser-based activity without significant adjustment. (I found text entry without a keyboard too clumsy for serious work.) Tablets are also ideal for server administration, since the computer on the other end handles the heavy lifting; there are SSH, FTP, and text editing apps that make tablets perfect remote administration environments. In general, text-based tasks like writing, email, chat, and reading, and most things browser-based or whose files live in the cloud or on a server, can be done just as well on a tablet as on a workstation or laptop.

The limitation to this general rule is that in some cases the iPad presented file management difficulties. The iOS defaults push users into using iTunes and iCloud to manage documents. If you like these options, there is no problem. I found these options lacking in flexibility, so I had to engage in a little hackery to get access to the files I needed on the iPad. Dropbox and Evernote are good examples of cloud storage apps that work once you learn how to route all your documents through them. In the end, I found myself preferring apps that access personal cloud space (Jungledisk) or my home NAS storage (Synology DS File) in my workflow. The Transformer Prime required fewer document-flow kluges and its keyboard accessory includes a USB flash-drive interface which is very useful for sharing documents with local colleagues and doesn’t require a fancy workaround.

A second limitation I encountered was in accessing web video content. Not frequently, but often enough to be noticed, certain web video files (Flash encoded) would not play on the iPad. The Android tablet is Flash capable and suffered fewer of these problems. Video isn’t a key part of my workflow, so for me this is merely an annoyance, not a serious hindrance to productivity.

Touching Revolution

Of course, simply duplicating the capabilities of traditional computer environments in a smaller form factor is not revolutionary. As long as I was using a tablet as if it were a smaller computer, my work didn’t change; only the tools I was doing it with changed. It was when I started working outside the keyboard-and-mouse interface model and started touching my work that new ways of approaching tasks presented themselves. When I started using a stylus to write on the screen of a tablet, the revolution became apparent.

As an undergraduate, I was required to read Mortimer Adler and Charles Van Doren’s How to Read a Book [3], and their lesson on annotating while reading stuck with me. When it comes to professional development reading, annotation is absolutely necessary for comprehension and integration of content. Thus Amazon’s Kindle reader app for Android and iOS became my favorite ebook platform, due to its superior system for taking and sharing reading notes across platforms. I rely so heavily on annotations that I cannot do my work using ebook platforms that don’t allow me to take notes in the text. In the same vein, I use personal copies of printed books for my research instead of borrowed library copies, because I have to write in the margins to process ideas.

Tablets revolutionized my reading when I discovered PDF annotation apps that let me use a stylus to write on top of documents. Apps like Notetaker HD and iAnnotate for iOS and ezPDF Reader for Android give readers the digital advantage of unlimited amounts of text without the bulk and weight of paper printouts, plus the analog advantages of free-hand highlighting and writing notes in the margin. Combine these advantages with Zotero-friendly apps such as Zotpad, Zotfile, and Zandy that connect my favorite discovery tool to my tablet, and I found myself reading more, taking better notes, and drawing clearer connections between documents. The portability of digital files on a mobile, wirelessly connected device, combined with the stylus and touch-screen method of text input, enabled me to interact with my reading in ways impossible using either printed paper or a traditional computer monitor and keyboard. Now my entire library and all of my reading lists came with me everywhere, so I carved out more time to read each week. When I opened a text, I was able to capture my thoughts about the reading more accurately and completely. This wasn’t just reading in a different medium; it was reading with a different method, and it worked better than the way I had been doing it before.

Tablets with reading annotation apps revolutionized the way that I read and organized my reading notes, but they had an even bigger impact on the way that I grade student papers. I love teaching, but grading essays is a task that I dread. Essays are heavy and hard to carry around. When I have essays with me, I have a constant and irrational background fear that someone will steal my car and I’ll lose irreplaceable student work. When I started using the tablet, I had my students submit their essays in PDF format. Then, I read their work in a similar manner to my professional reading. I read the essays on a tablet, using the stylus to highlight passages and write feedback in the margins. When I was finished, I could email the document back to the student and also keep an archived copy. This solved a number of paper distribution and unique copy problems. The students got better feedback more quickly and I always had a reference copy if questions arose later in the term.

A Personal Revolution

Taken by themselves, these reading and grading innovations may sound like incremental changes, not revolutions. For example, laptops are quite portable, and we’ve had the ability to add notes and comments to PDF documents for a long time. There is no reason I couldn’t have adopted this workflow without buying an additional expensive gadget, except that I couldn’t. I tried electronic reading and grading workflows before I had a tablet and rejected them. Reading on a computer monitor and typing comments into a PDF didn’t result in interesting thoughts about the reading. I tried grading by adding comments to PDF documents on a laptop and found my feedback comments arid and less helpful than the remarks I wrote in the margins of paper essays, so I switched back to colored pen on paper. These experiences are anecdotal and personal, but they accurately describe my experience. With a tablet, the feel of touching a screen and writing with a stylus enabled an organic flow of thoughts from my brain to the text. I can list the affordances of mobile computing that make this possible: ubiquitous wireless broadband networking, touch interface, lightweight and portable devices, a robust app ecology, and cloud storage of documents. The revolution lies in how these technical details combined in my workflow to create an environment where I did better work with fewer distractions and more convenience.

Next Steps

One requirement to justify the time and expense of this project is that I share my findings. This post is an effort in that direction, but I will also be offering a series of faculty workshops on using tablets in academic workflows. I’m planning a workshop where faculty can put their hands on a range of tablet devices, a petting zoo of tablets. There will also be a workshop on reading apps for tablets and one on grading workflows. One challenge to presenting what I’ve learned about tablets is that most of what I have learned is personal. I’ve spoken with scholars who do not share my preference for hand-written thoughts; my workflows are not revolutionary for them. Ultimately, the most beneficial result of my project may be uncovering a method for effectively communicating emerging technology experiences to non-technologically inclined colleagues.

 

  1. Pew. Tablet and E-book reader Ownership Nearly Double Over the Holiday Gift-Giving Period. Pew Internet Libraries. http://libraries.pewinternet.org/2012/01/23/tablet-and-e-book-reader-ownership-nearly-double-over-the-holiday-gift-giving-period/.
  2. Wikipedia contributors. 2012. Mobile web analytics. Wikipedia, the free encyclopedia. Wikimedia Foundation, Inc., September 13. https://en.wikipedia.org/w/index.php?title=Mobile_web_analytics&oldid=510528022.
  3. Mortimer Adler, How to read a book, Rev. and updated ed. (New York: Simon and Schuster, 1972).

The simplest AJAX: writing your own code (1)

It has been 8 months since the Code Year project started. Back in January, I provided some tips. Now I want to check in to see how well you have been following along. Falling behind? You are not alone. Luckily, there are still 3-4 months left.

Teaching yourself how to code is not easy. One of the many challenges is keeping at it on a regular basis. Both at home and at work, there always seem to be a dozen things of higher priority than code lessons. Another problem is that we often start a learning project by reading a book with some chosen examples. The Code Year project is somewhat better since it provides interactive tutorials. But at the end of many tutorials, you may have experienced a nagging feeling of doubt about whether you can now go out into the real world and make something that works. Have you done any real-life project yet?

If you are like me, the biggest obstacle to starting your own first small coding project is not so much the lack of knowledge as the fantasy that you have yet more to learn before trying any such real-life-ish project. I call this a ‘fantasy’ because there is never a time when you are in full possession of all the knowledge before jumping into a project. In most cases, you discover what you need to learn only after you start a project and run into a problem that you need to solve.

So for this blog post, I tried building something very small. During the process, I had to fight constantly with the feeling that I should go back to the Code Year project and take those comforting lessons in Javascript and jQuery that I hadn’t had time to work through yet. But I also knew that I would be much more motivated to keep learning if I could just see myself making something on my own. I decided to try some very simple AJAX and started by looking at two examples on the Web. Here I will share those examples and the review process that enabled me to write my own bit of code. After looking at these, I was able to use different APIs to get the same result. My explanation below is intentionally detailed for beginners, but if you can understand the examples without a line-by-line explanation, feel free to skip ahead to the last section, where the challenge is.

What would your AJAX skills be useful for? There is a lot of useful data in the cloud. Using AJAX, you can dynamically display your library’s photos stored in Flickr on your library’s website, or generate a custom bibliography on the fly using the tags in Pinboard or MeSH (Medical Subject Headings) and other filters in PubMed. You can mash up data feeds from multiple providers and create something completely new and interesting, such as HealthMap, iSpecies, and Housing Maps.

Warm-up 1: Jason’s Flickr API example

I found this example, “Flickr API – Display Photos (JSON),” quite useful. This example is at Jason Clark’s website. Jason has many cool code examples and working programs under the Code & Files page. You can see the source of the whole HTML page here, but let’s look at the JS part below.

<script type="text/javascript">
//run function to parse json response, grab title, link, and media values - place in html tags
function jsonFlickrFeed(fr) {
    var container = document.getElementById("feed");
    var markup = '<h1>' + '<a href="' + fr.link+ '">' + fr.title + '</a>'+ '</h1>';
    for (var i = 0; i < fr.items.length; i++) {
    markup += '<a title="' + fr.items[i].title + '" href="' + fr.items[i].link + '"><img src="' + fr.items[i].media.m + '" alt="' + fr.items[i].title + '"></a>';
}
container.innerHTML = markup;
}
</script>
<script type="text/javascript" src="http://api.flickr.com/services/feeds/photos_public.gne?tags=cil2009&format=json">
</script>

After spending a few minutes looking at the source of the page, you can figure out the following:

  • Line 12 imports data formatted in JSON from Flickr, and the JSON data is wrapped in a JS function called jsonFlickrFeed. You can usually find these data source URLs in the API documentation, though API documentation is often hard to decipher. In this case, this MashupGuide page by Raymond Yee was quite helpful.
  • Lines 3-8 define the jsonFlickrFeed function that processes the JSON data.

You can think of the JSON data as a JS object, or an associative array of such objects. Can you also figure out what is going on inside the jsonFlickrFeed function? Let’s go through it line by line.

  • Line 4 creates a variable, container, and sets it to the empty div with the id of “feed.”
  • Line 5 creates another variable, markup, which will include a link and a title from “fr,” an arbitrary name that refers to the JSON data passed into the jsonFlickrFeed function (a sketch of what this data looks like follows just after this list).
  • Lines 6-8 are a for-loop that goes through every object in the items array and extracts each object’s title and link as well as the image source link and title. The loop also adds the resulting HTML string to the markup variable.
  • Line 9 assigns the content of the markup variable as the HTML content of the variable container. Since the empty div with the “feed” id was assigned to the variable container, the feed div now has the content of var markup as its HTML content.
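To make sense of references like fr.items[i].media.m, it helps to see roughly what the data passed into jsonFlickrFeed looks like. Below is a trimmed sketch of that structure, using values taken from the first photo in the generated output shown further down; the real feed carries more fields than are shown here:

// A rough sketch of the object that Flickr passes to jsonFlickrFeed as "fr"
var fr = {
    "title": "Recent Uploads tagged cil2009",
    "link": "http://www.flickr.com/photos/tags/cil2009/",
    "items": [
        {
            "title": "Waiting at Vienna metro (cropped)",
            "link": "http://www.flickr.com/photos/matthew_francis/3458100856/",
            "media": { "m": "http://farm4.staticflickr.com/3608/3458100856_d01b26cf1b_m.jpg" }
        }
        // ... more photo objects, each with the same fields ...
    ]
};

So fr.title and fr.link describe the feed itself, while each element of fr.items describes one photo.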

So these two JS snippets take an empty div like this:

<div id="feed"></div>

Then they dynamically generate the content inside it with the source data from Flickr, following the minimal presentation specified in the JS itself. Below is the dynamically generated content for the feed div.

<div id="feed">
<h1>
<a href="http://www.flickr.com/photos/tags/cil2009/">Recent Uploads tagged cil2009</a>
</h1>
<a href="http://www.flickr.com/photos/matthew_francis/3458100856/" title="Waiting at Vienna metro (cropped)">
<img alt="Waiting at Vienna metro (cropped)" src="http://farm4.staticflickr.com/3608/3458100856_d01b26cf1b_m.jpg">
</a>
<a href="http://www.flickr.com/photos/libraryman/3448484629/" title="Laptop right before CIL2009 session">
<img alt="Laptop right before CIL2009 session" src="http://farm4.staticflickr.com/3389/3448484629_9874f4ab92_m.jpg">
</a>
<a href="http://www.flickr.com/photos/christajoy42/4814625142/" title="Computers in Libraries 2009">
<img alt="Computers in Libraries 2009" src="http://farm5.staticflickr.com/4082/4814625142_f9d9f90118_m.jpg">
</a>
<a href="http://www.flickr.com/photos/librarianinblack/3613111168/" title="David Lee King">
<img alt="David Lee King" src="http://farm4.staticflickr.com/3354/3613111168_02299f2b53_m.jpg">
</a>
<a href="http://www.flickr.com/photos/librarianinblack/3613111084/" title="Aaron Schmidt">
<img alt="Aaron Schmidt" src="http://farm4.staticflickr.com/3331/3613111084_b5ba9e70bd_m.jpg">
</a>
<a href="http://www.flickr.com/photos/librarianinblack/3612296027/" block"="" libraries"="" in="" computers="" title="The Kids on the ">
<img block"="" libraries"="" in="" computers="" alt="The Kids on the " src="http://farm3.staticflickr.com/2426/3612296027_6f4043077d_m.jpg">
</a>
<a href="http://www.flickr.com/photos/pegasuslibrarian/3460426841/" title="Dave and Greg look down at CarpetCon">
<img alt="Dave and Greg look down at CarpetCon" src="http://farm4.staticflickr.com/3576/3460426841_ef2e57ab49_m.jpg">
</a>
<a href="http://www.flickr.com/photos/pegasuslibrarian/3460425549/" title="Jason and Krista at CarpetCon">
<img alt="Jason and Krista at CarpetCon" src="http://farm4.staticflickr.com/3600/3460425549_55443c5ddb_m.jpg">
</a>
<a href="http://www.flickr.com/photos/pegasuslibrarian/3460422979/" title="Lunch with Dave, Laura, and Matt">
<img alt="Lunch with Dave, Laura, and Matt" src="http://farm4.staticflickr.com/3530/3460422979_96c020a440_m.jpg">
</a>
<a href="http://www.flickr.com/photos/jezmynne/3436564507/" title="IMG_0532">
<img alt="IMG_0532" src="http://farm4.staticflickr.com/3556/3436564507_551c7c5c0d_m.jpg">
</a>
<a href="http://www.flickr.com/photos/jezmynne/3436566975/" title="IMG_0529">
<img alt="IMG_0529" src="http://farm4.staticflickr.com/3328/3436566975_c8bfe9b081_m.jpg">
</a>
<a href="http://www.flickr.com/photos/jezmynne/3436556645/" title="IMG_0518">
<img alt="IMG_0518" src="http://farm4.staticflickr.com/3579/3436556645_9b01df7f93_m.jpg">
</a>
<a href="http://www.flickr.com/photos/jezmynne/3436569429/" title="IMG_0530">
<img alt="IMG_0530" src="http://farm4.staticflickr.com/3371/3436569429_92d0797719_m.jpg">
</a>
<a href="http://www.flickr.com/photos/jezmynne/3436558817/" title="IMG_0524">
<img alt="IMG_0524" src="http://farm4.staticflickr.com/3331/3436558817_3ff88a60be_m.jpg">
</a>
<a href="http://www.flickr.com/photos/jezmynne/3437361826/" title="IMG_0521">
<img alt="IMG_0521" src="http://farm4.staticflickr.com/3371/3437361826_29a38e0609_m.jpg">
</a>
<a href="http://www.flickr.com/photos/jezmynne/3437356988/" title="IMG_0516">
<img alt="IMG_0516" src="http://farm4.staticflickr.com/3298/3437356988_5aaa94452c_m.jpg">
</a>
<a href="http://www.flickr.com/photos/jezmynne/3437369906/" title="IMG_0528">
<img alt="IMG_0528" src="http://farm4.staticflickr.com/3315/3437369906_01015ce018_m.jpg">
</a>
<a href="http://www.flickr.com/photos/jezmynne/3436560613/" title="IMG_0526">
<img alt="IMG_0526" src="http://farm4.staticflickr.com/3579/3436560613_98775afc79_m.jpg">
</a>
<a href="http://www.flickr.com/photos/jezmynne/3437359398/" title="IMG_0517">
<img alt="IMG_0517" src="http://farm4.staticflickr.com/3131/3437359398_7e339cf161_m.jpg">
</a>
<a href="http://www.flickr.com/photos/jezmynne/3436535739/" title="IMG_0506">
<img alt="IMG_0506" src="http://farm4.staticflickr.com/3646/3436535739_c164062d6b_m.jpg">
</a>
</div>

Strictly speaking, Flickr is returning data in JSONP rather than JSON here. You will see what JSONP means in a little bit, but for now, don’t worry about that distinction. What is cool is that you can grab the data from a third party like Flickr and then remix and re-present it in your own page.

Warm-up 2: Doing the same with JQuery using $.getJSON()

Since I had figured out how to display data from Flickr using Javascript (thanks to Jason’s code example), the next thing I wanted to try was to do the same with JQuery. After some googling, I discovered that there is a convenient JQuery method called $.getJSON(). The official JQuery page on the $.getJSON() method includes not only an explanation of JSONP (which, unlike JSON, lets you load data from a domain other than your own in the browser and manipulate it without being restricted by the same origin policy) but also a JQuery example that processes the same Flickr JSONP data.
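Before looking at the jQuery code, it may help to see what that JSONP “wrapping” looks like next to plain JSON. This is only an abbreviated sketch; the data follows the Flickr feed structure shown earlier:

// Plain JSON is just the data. The browser's same origin policy keeps your
// page from fetching it directly from another domain:
//
//     { "title": "Recent Uploads tagged cil2009", "items": [ ... ] }
//
// JSONP is the same data wrapped in a call to a callback function, so it can
// be loaded as a <script> from another domain. Flickr wraps its feed in
// jsonFlickrFeed by default; with jsoncallback=? in the URL, jQuery swaps in
// a callback name that it generates for you.
jsonFlickrFeed({ "title": "Recent Uploads tagged cil2009", "items": [ /* ... */ ] });

With that in mind, here is the example from the JQuery website.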

$.getJSON("http://api.flickr.com/services/feeds/photos_public.gne?jsoncallback=?",
  {
    tags: "mount rainier",
    tagmode: "any",
    format: "json"
  },
  function(data) {
    $.each(data.items, function(i,item){
      $("<img/>").attr("src", item.media.m).appendTo("#images");
      if ( i == 3 ) return false;
    });
  });

As you can see in the first line, data feed URLs for a JSONP response have a part like jsoncallback=? at the end. The exact name of this callback parameter can vary from provider to provider, and each data provider’s API documentation provides that bit of information. Let’s go through the code line by line:

  • Lines 1-6 request the data feed from the specified URL in JSONP format.
  • Once the data is received and ready, the script invokes the anonymous callback function on lines 7-11. This function makes use of the JQuery method $.each().
  • For each element of data.items, this function applies another anonymous function, whose body is on lines 9-10.
  • Line 9 creates an image tag with $(“<img/>”), sets each item’s media.m value as its src attribute via .attr(“src”, item.media.m), and finally appends the resulting image element to the empty div with the id of “images” via .appendTo(“#images”).
  • Line 10 makes sure that no more than 4 items in data.items are processed.

You can see the entire HTML page code on the $.getJSON() page of the JQuery website.
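As a bridge between the two warm-ups, here is a small, untested sketch of how the linked-thumbnail markup from Jason’s example could be rebuilt with $.getJSON(). It assumes the same Flickr feed fields used above (items[i].title, .link, and .media.m) and an empty <div id="feed"></div> in the page:

// Fetch the same kind of Flickr feed with $.getJSON() and recreate the
// linked thumbnails from the first example inside the "feed" div.
$.getJSON("http://api.flickr.com/services/feeds/photos_public.gne?jsoncallback=?",
  {
    tags: "cil2009",
    format: "json"
  },
  function (data) {
    $.each(data.items, function (i, item) {
      $("<a/>")
        .attr({ href: item.link, title: item.title })                      // link to the photo page
        .append($("<img/>").attr({ src: item.media.m, alt: item.title }))  // thumbnail image
        .appendTo("#feed");
    });
  });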

Your Turn: Try out an API other than Flickr

So far we have looked through two examples. Not too bad, right? To keep the post at a reasonable length, I will get to the little bit of code that I wrote in the next post. This means that you can try the same thing, and we can compare results next time. Now here is the challenge: both examples we saw used the Flickr API. Could you write code for a different API provider that does the same thing? Remember that you have to pick a data provider that offers feeds in JSONP if you want to avoid dealing with the same origin policy.

Here are a few providers you might want to check out. They all offer their data feeds in JSONP.

First, find out what data URLs you can use to get JSONP responses. Then write several lines of JS and JQuery to process and display the data the way you like in your own webpage. You may end up doing some googling and research while you are at it; the generic skeleton below shows the overall shape such a script might take.
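This skeleton is purely hypothetical: api.example.com, the callback parameter, the query parameters, and the field names (items, title) are placeholders to be replaced with whatever your chosen provider’s API documentation specifies, and it assumes an empty <div id="results"></div> in your page:

// Hypothetical skeleton only: replace the URL, parameters, and field names
// with the ones documented by your chosen JSONP data provider.
$.getJSON("http://api.example.com/some/feed?callback=?",
  {
    q: "your search term",   // whatever query parameters the provider accepts
    format: "json"
  },
  function (data) {
    // "items" and "title" stand in for whatever the provider's response uses
    $.each(data.items, function (i, item) {
      $("<p/>").text(item.title).appendTo("#results");
    });
  });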

Here are a few tips that will help you along the way:

  • Verify the data feed URL to see if you are getting the right JSONP responses. Just type the source URL into the browser window and check that the response comes back as data wrapped in a callback function, as in the JSONP sketch shown earlier.
  • Get Firebug for debugging if you don’t already have it.
  • Use Firebug’s Net panel to see if you are receiving the data OK.
  • Use the Console panel for debugging. The piece of data that you want to pick up may be several levels deep, so it is useful to confirm that you are getting the right item before trying to manipulate it (see the example just below this list).
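For example, a throwaway callback like the one below (or the same console.log() calls typed into the Firebug console) lets you confirm where the piece you want lives; the field names here follow the Flickr feed and will differ for other providers:

// Log progressively deeper into the response until you reach the value you
// want; only then start building markup from it.
function handleFeed(data) {
    console.log(data);                   // the whole response object
    console.log(data.items);             // the array of items (Flickr example)
    console.log(data.items[0].media.m);  // a single image URL, several levels deep
}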

Happy Coding! Don’t forget to share your completed project in the comments section, along with any questions, comments, advice, or suggestions!

[Screenshot: the Net panel in Firebug]

[Screenshot: the Console panel in Firebug]


Rapid Prototyping Mobile Augmented Reality Applications

This fall semester, the Undergraduate Library at the University of Illinois at Urbana-Champaign, along with partners from the Graduate School of Library and Information Science and Computer Science graduate students with experience programming OpenCV, will begin coding an open source mobile Augmented Reality (AR) app for deeper in-library engagement with both print and digital resources. The funding comes from a recently awarded IMLS Sparks! Grant. Our objectives include the following:

  • Create shelf recognition software for mobile devices that integrates print and digital resources into the on-site library experience, and experiment with location-based recommendation services.
  • Investigate the potential of creating a system that shows users how they are physically navigating an “idea space.”
  • Complete iterative rapid use studies of mobile software with library patrons and communicate results back to programming staff for incremental app design.
  • Work with our Library IT staff to identify skills and technical infrastructure needed in order to make AR an ongoing part of technology in libraries.
  • Make available the AR apps through the Library’s mobile labs experimental apps area (http://m.library.illinois.edu/labs.asp).

There are multiple problems with access to the variety of collections in our networked era (Lee, 2000), including their highly disparate nature (many vended platforms serving licensed library content) and their increasing intangibility (the move to massively electronic, or e-only, access in libraries and information centers). Moreover, library collection developers are faced with the challenge of providing increased access to digital resources while still maintaining print. Lee (2000) argues that library research should redefine library collections as information contexts.

This work will address the contextual information needs of library users while leveraging recent advances in mobile-networked technologies, experimenting with a way to increase access to collections of all types. The research team will deploy, test, and evaluate mobile applications that create novel “augmented book stacks.”

 

[Figure: interface mockups. (a) The subject of a book stack set is identified by the app index and displayed on the interface. (b) Recommendations (e-books, digital items, or databases) are brought onto the interface in real time. (c) Popular books are flagged on the title using circulation data from the integrated library system’s historical circulation count (this can be a Z39.50 call or a pre-loaded circulation report database).]

 

To create such applications, researchers will make use of video functionality that augments shelves of interest to a user in the library stacks, inserting interactive graphics through the phone’s video feed onto the physical book stack environment in real time. In comparison to current state-of-the-art mobile AR apps, like the ShelvAR app in development at Miami University, the proposed system does not require 2D tags as targets on books; rather, it uses a combination of computer vision code for feature detection and optical character recognition (OCR) software to parse the text of titles, call numbers, and subjects on the book stacks. A prototype project for OCR running on Android can be implemented following this tutorial. Our research group does not propose a replication of the state of the art, but will implement a system that pushes the state of the art forward in innovation for research and learning with AR in library stacks.

The project team will experiment with overlaying relevant resources from other parts of the library’s collection, such as the library’s licensed databases, other Internet-based resources, or books that are relevant but not shelved nearby. This augmentation will enhance the serendipitous discovery of books, so that items relevant to a user’s location but not shelved near her can be brought into the browsing experience; with this technology, books that are checked out or otherwise unavailable can still be made useful to a user’s information search. Our staff will experiment with system features that create “idea spaces” for the user, which will serve to help students and library users exploit previous discovery routes and spaces in the book stacks. The premise of “idea spaces” comes from an unspoken assumption among librarians: that the intellectual organization of items in library collections is a valuable construct. By presenting graphical overlays of the subject areas of the collection, we make this assumption explicit and assert that as a user navigates the geographic spaces of a library collection, she is actually navigating intellectual spaces. Each user location is paired with an idea (or set of related ideas), delivered in our proposed system as a graphical overlay in the video feed. The user’s location, her context in the collection, is the query point for the idea spaces system.

This experiment will be valuable for all libraries that support print and digital resources. Underscoring this work is the overarching concern with making all library collections more accessible. Researchers will undertake rapid prototyping (as a test case for the chosen method, see Jones & Richey, 2000) of the augmented reality feature set in order to understand user preferences for mobile interfaces that best support location-based recommendations, and will make all results of this experimentation, including software code and computing workflows, freely available. Such experimentation could lead to profound changes in the way people research and learn in library spaces.

Grant activities will begin in October 2012 and conclude in September 2013. The evaluation plan for the grant is a systematic measurement of project outputs against the stated goals, with the resulting evaluations communicating what worked and what was useful for library patrons in AR apps. By operationalizing a rapid evaluation of augmented reality services, the research team hopes to identify the fail points for mobile services in this domain, as well as the most desired and useful feature set for augmented reality systems in library book stacks.

Cited

Jones, T. & Richey, R.  (2000) “Rapid Prototyping methodology in action: a developmental study,” Educational Technology Research and Development 48, 63-80.

Lee, H. (2000), “What is a collection?” Journal of the American Society for Information Science, 51 (12) 1106-1113.

Suggested Reading

Regarding collocation objectives in library science see: Svenonius, E. (2000), The Intellectual Foundation of Information Organization, MIT Press, Cambridge, MA, pp.21-22

See also

Additional sample code for image processing with Android devices available here, courtesy of openly available lecture notes from Stanford’s Digital Image Processing Course EE368.

Forthcoming this October, a paper detailing additional AR use cases in library services: Hahn, J. (2012). Mobile augmented reality applications for library services. New Library World 113 (9/10)