African Art Pedagogy: A Hypertexted Journey


AAP Exhibit, installation view, CCA’s Meyer Library, Oakland, CA

In early fall, our Instructional Designer, Bobby White, who is based in the Library, brought a potential Digital Scholarship project to my attention: a Library exhibit idea with both a digital and a physical component. In this article I'll talk about the idea behind the project, our process, the technology we utilized, and our reflections after we completed the exhibit.

Leslie Townsend, a faculty member in the Visual Studies and Writing and Literature programs, approached the Libraries with the idea of sharing her pedagogy for an African Art survey course (part of the Visual Studies Program) with other faculty and the greater CCA community. In addition to displaying material artifacts, she was interested in linking the many types of digital artifacts from the course–images, videos, texts, student assignments, reference materials, the syllabus–into an integrated digital display. Bobby had recently seen student work from faculty member Rebekah Edwards' literature class using the software Twine, and suggested Twine for this project. Twine is an open-source tool for telling interactive, nonlinear stories, and is also used for narrative games. Like a labyrinth, it allows viewers to choose different and multiple paths through a particular story. Twine offered a unique way to open up Leslie's African Art course, revealing layered perspectives–instructor, student, assessment, and reflection–and complex interactions unavailable in a traditional 2-D format. The African Art Pedagogy Exhibit was to be the Libraries' first Digital Scholarship project and our first exhibit utilizing an iPad.

Bobby and I set about learning Twine and began a series of weekly meetings with Leslie to discuss the content and structure of the Twine, as well as to curate a selection of objects and books related to her course. We had already determined that we would use an iPad to display the Twine; the Libraries had purchased several iPads in the past year or so, and we have been interested in deploying them for a variety of purposes, including displays. I began researching display stands and eventually settled on an iPad floor stand from a website called Displays2Go, which specializes in marketing displays. Our criteria included a locking case, cable management, a rotating bracket to allow flexibility in display, a fit for the iPad Air, a hidden home button (to keep users from navigating away from the exhibit), a relatively inexpensive price, and, last but not least, pleasing aesthetics. When it came time to install, we also used the iPad's "Guided Access" feature, which keeps users in a single app.

As for Twine, we discovered there are currently two versions; we chose the newest (version 2) for what seemed like obvious reasons — newer versions tend to be better supported and offer new features. In Twine's case, though, the new version represents a renewed focus on text and a move away from the easy image integration that version 1 offers. Adding images and links to embedded videos was important to this project, to give viewers direct contact with selected course materials. We were able to work with version 2, but it required additional research. For a future project, we would look more closely at Twine 1 and consider using it instead.
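
To give a sense of what this looked like in practice, here is a minimal sketch of the kind of passage markup involved, assuming the default Harlowe story format; the file name and video ID are hypothetical. Because Twine 2 has no built-in image importer, a passage simply carries raw HTML pointing at media hosted alongside the published story file or elsewhere on the web:

<img src="images/mask.jpg" alt="Mask from the week 3 slide lecture">
A short clip from the lecture:
<iframe src="https://www.youtube.com/embed/VIDEO_ID" width="480" height="270" allowfullscreen></iframe>

Getting those files hosted somewhere the published story could reach was exactly the extra research that version 2 required of us.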

The goals we developed going into the project were to

  • Design an integrated physical and digital pedagogy exhibition in the Library
  • Test Twine’s application in an academic environment
  • Share Leslie’s pedagogical process with colleagues
  • Offer an experience of her African Art course to a range of viewers in the Library: students, faculty, staff, visitors
  • Enable Leslie to continue to develop the Twine after the exhibition
  • Explore options and issues with sharing the Twine outside the Library once the exhibition ended

The three of us then began to work as a team, and in short order defined our roles — a key component of a successful collaboration, and one that made it easy and enjoyable to work together. These roles were based on our individual areas of expertise: Leslie focused on providing the content and on the flow of the narrative; Bobby focused on Twine and pedagogy development; and I took on project management as well as Twine development.

Neither Bobby nor I have a background in African Art, so one of our initial tasks was to get to know Leslie's curriculum, both through her syllabus and in conversation with her. We defined the content areas for our Twine (syllabus, student work, teaching philosophy/learning outcomes, and resources) and created a structure for storing and sharing materials in Google Drive, which our campus uses. At this point we began to re-imagine the course as an exhibit: the content areas would become four paths in Twine that intermingle and connect depending on the choices a visitor makes. The content areas are: Curriculum Guide, Students in Action, Teaching Philosophy and Learning Outcomes, and Experience African Art. I built a timeline with milestones, working backward from the exhibition date, and we scheduled weekly working meetings (initially two-hour blocks, though toward the end of the project we had a few full-day working sessions). Beyond those meetings, Leslie spent time pulling together coursework, and Bobby and I spent time researching Twine and implementation questions. It was difficult to properly estimate the amount of time we needed, especially since we were engaged in multiple new tasks: learning an open-source tool, figuring out how to host the completed work, and turning a course into an open narrative. Bobby reflected afterward that this type of scenario will most likely repeat itself, as part of what we do in the Libraries now is engage with new technologies. Leslie observed that she could imagine a future project taking place over a longer period, perhaps a semester and a summer, as we spent many hours toward the end of the project and could easily have spent more.

Once we'd identified works for inclusion, we had a variety of media to organize: electronic documents, links to embedded videos, and physical objects. We sorted works into the proper folders, selected physical objects to scan or photograph, and hashed out the best way to present the material, to tell the story the course suggested to us. It was a fully collaborative process, which was one of its joys. One of the challenges we wrestled with was whether we should map the story out in advance or build it once we'd added all the 'raw' material into Twine. Twine's premise is simple: create a nonlinear story easily and quickly by creating passages. At its most basic, each passage contains some text plus one or more links, embedded anywhere within the text, that lead to another part of the story. Images and multimedia can also be embedded within passages. When building a Twine, one works in a map view showing all of the passages you've created and how they link to one another. That bird's-eye view is a great feature; one navigates back and forth between the editor view of a passage, a preview of the passage or passages, and the map of the whole story. We settled on getting all of our content into passages in Twine and then connecting them into multiple narratives, which we thought would let us better see the possibilities the Twine format offered.
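
To make the passage-and-link model concrete, here is a hedged sketch of what the body of a single passage might contain; the passage names are invented for this example. Each double-bracketed link points to (or creates) another passage, and each one appears as an arrow between boxes in the story map:

This week the course turns to the arts of the Benin kingdom.
Continue to [[the next week of the syllabus->Curriculum Guide, Week 4]],
detour to the [[map of the continent->Map of Africa]], or read
[[student responses to this unit->Students in Action]].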


African Art Pedagogy Exhibit, Twine map screenshot

Simultaneously, Bobby and I began researching where we might host the finished work. A site called philome.la publishes text-only Twines for free, but if you want to include locally stored images or other media, or if you have any privacy concerns, it's not the place to host your Twine. We also looked into Google Drive and Dropbox, but both services have made it very hard, if not impossible, to use them as hosting sites. Our solution: we requested a slice of space on one of our Educational Technology Service's web servers. This turned out to be ideal, as we now have a space to host future digital scholarship projects. We still have to grapple with some rights issues for the site: we digitized a few images from books that Leslie uses in her course, which we believe falls under fair use when shown only in the library, but which would most likely not be considered fair use were we to share the site publicly, as we could not control who would see the images or what they might do with them. The nature of the digital portion of the exhibit presents opportunities beyond the library exhibit dates, a complicated but exciting aspect of the project. Stay tuned.

Gradually we built our content into passages and connected them into a variety of paths for viewers to choose from. We broke the syllabus into individual passages, with links forward through the syllabus and links to key course materials, which in turn might take the viewer to other course materials. The Students in Action section comprises two assignments, with introductions by Leslie, that offer insight into students' interactions with the materials and their learning: an introduction to the geography of the continent, and excerpts from a few student papers. Teaching Philosophy and Learning Outcomes offers Leslie a way to frame and share her thinking about the course, one of the most valuable parts of the exercise. Lastly, Experience African Art shares a selection of curated visual course materials, with explications. A map of the continent of Africa is the unofficial hub of the story, as many links across sections radiate to and from it.


Screenshot from AAP, Teaching Philosophy

Physical objects chosen for display related to images and text in the Twine, giving the exhibition a tactile presence that nicely complemented the digital component while increasing the exhibition's overall visibility. The Libraries' Assistant Curator (a work-study position), Hannah Novillo-Erickson, worked with Leslie and me on the exhibit installation, another nice point of collaboration.

Overall, we consider the African Art Pedagogy exhibit1 (link to Twine) a successful undertaking. The opportunity to work in depth with both the Instructional Designer and a faculty member was an invaluable, rich learning experience. It required a significant time investment, but, having lived through it, the Instructional Designer and I now have a ballpark figure to work with going forward, as well as ideas about how to manage and possibly reduce the time outlay. We found examples of Twine being used in writing composition and by individual writers, but we did not find any projects quite like ours, which is kind of exciting. The technology, though approachable, still demanded respect in terms of a learning curve, both conceptual and technical. I consider our Twine more of a first iteration; I wish I had more time to refine it, now that I better understand its potential in relation to our subject matter. Leslie observed that it showed her relationships and possibilities in her pedagogy that she hadn't seen previously. She couldn't imagine doing something like this on her own; I assured her that facilitating these types of projects is one of the goals of the new Digital Scholarship position.

Lisa Conrad is the Digital Scholarship Librarian at California College of the Arts, in the San Francisco Bay Area. She received an MFA in Visual Arts from the University of Illinois at Chicago’s School of Art and Art History, and an MLIS from San Jose State University’s School of Library and Information Science. Images from her art work 4 1/2 feet can be seen at fourandahalffeet.

Notes

  1. For educational purposes only; no re-use of any of the images in the Twine.

#1Lib1Ref

A few of us at Tech Connect participated in the #1Lib1Ref campaign that's running from January 15th to the 23rd. What's #1Lib1Ref? It's a campaign to encourage librarians to get involved with improving Wikipedia, specifically by citation chasing (one of my favorite pastimes!). From the project's description:

Imagine a World where Every Librarian Added One More Reference to Wikipedia.
Wikipedia is a first stop for researchers: let’s make it better! Your goal today is to add one reference to Wikipedia! Any citation to a reliable source is a benefit to Wikipedia readers worldwide. When you add the reference to the article, make sure to include the hashtag #1Lib1Ref in the edit summary so that we can track participation.
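
For anyone who has never edited before, the mechanics are modest: in the source editor, a citation is a ref tag placed right after the statement it supports, usually wrapping a citation template, and the article needs a References section containing {{reflist}} or <references /> for the footnote to render. A hedged sketch, with invented book details:

The library's main branch was renovated in 1998.<ref>{{cite book |last=Smith |first=Jane |title=A History of the Library |publisher=Example Press |year=2005 |page=42}}</ref>

Adding #1Lib1Ref to the edit summary is what lets the campaign count your contribution.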

Below, we each describe our experiences editing Wikipedia. Did you participate in #1Lib1Ref, too? Let us know in the comments or join the conversation on Twitter!


 

I recorded a short screencast of me adding a citation to the Darbhanga article.

— Eric Phetteplace


 

I used the Citation Hunt tool to find an article that needed a citation. I selected the second one I found, which was about urinary tract infections in space missions. That is very much up my alley. I discovered after a quick Google search that the paragraph in question was plagiarized from a book on Google Books! After a hunt through the Wikipedia policy on quotations, I decided to rewrite the paragraph to paraphrase the quote, and then added my citation. As is usual with plagiarism, the flow was wrong, since there was a reference to a theme in the previous paragraph of the book that wasn’t present in the Wikipedia article, so I chose to remove that entirely. The Wikipedia Citation Tool for Google Books was very helpful in automatically generating an acceptable citation for the appropriate page. Here’s my shiny new paragraph, complete with citation: https://en.wikipedia.org/wiki/Astronautical_hygiene#Microbial_hazards_in_space.

— Margaret Heller


 

I edited the "Library Facilities" section of the "University of Maryland Baltimore" article in Wikipedia. There was an outdated link in the existing citation, and I also wanted to add two additional sentences and citations. You can see how I went about doing this in my screen recording below. I used the "edit source" option to copy the wiki source into a text editor and made all the changes I wanted in advance. After that, I copied and pasted the changes from my text file into the Wikipedia page I was editing, then previewed and saved the page. You can see that I also had a typo in my text and had to fix it to make the citation display correctly, so I had to edit the article more than once. After my recording, I noticed another typo, which I fixed using the "edit" option. The "edit" option is much easier to use than the "edit source" option for those who are not familiar with editing wiki pages; it offers a menu bar at the top with several convenient options.


The menu bar for the "edit" option in Wikipedia

The recording of editing a Wikipedia article:

— Bohyun Kim


 

It has been so long since I've edited anything on Wikipedia that I had to make a new account and read the "how to add a reference" link, which is to say: if I could do it in 30 minutes while on vacation, anyone can. There is a WYSIWYG option for the editing interface, but I learned to do all this in plain text and it's still the easiest way for me to edit. See the screenshot below for a view of the source editor.

I wondered what entry I would add a citation to…there have been so many that I've come across, but now I was drawing a total blank. Happily, the #1Lib1Ref campaign gave some suggestions, including "Provinces of Afghanistan." Since this is my fatherland, I thought it would be a good place to dive in and be of service. Many of Afghanistan's citations are hard to provide for a multitude of reasons. A lot of our history has been an oral tradition. Also, not insignificantly, Afghanistan has been in conflict for a very long time, with much of its history captured through the lens of Great Game participants like England or Russia. Primary sources from the 20th century are difficult to come by because of the state of war from 1979 onward, and there are not many digitization efforts underway to capture what is available (shout out to NYU and the Afghanistan Digital Library project).

Once I found a source that I thought would be an appropriate reference for a statement on the topography of Uruzgan Province, I did need to edit the sentence to remove the numeric values that had been written, since I could not find a source that quantified the area. It's not a precise entry, to be honest, but it does give the opportunity to link to a good map with further opportunities to find additional information related to Afghanistan's agriculture. I also wanted to choose something relatively uncontroversial for this particular campaign, like geographical features rather than history or people.

— Yasmeen Shorish


Edited area delineated by red box.


Doing Six Impossible Things Before Breakfast: An Approach to Keeping it User Centered

Keeping any large technical project user-centered is challenging at best. Adding in something like an extremely tight timeline makes it too easy to dispense with this completely. Say, for instance, six months to migrate to a new integrated library system that combines your old ILS, your link resolver, and many other tools with a new discovery layer. I would argue, however, that it is precisely on a timeline that tight that a major focus on user experience research can become a key component of your success. I am referring in this piece specifically to user experience on the web, though of course other aspects of user experience go into such a project. While none of my observations about usability testing and user experience are new, I have realized from talking to others that many people need help advocating for the importance of user research. As we turn to our hopes and goals for 2016, let's all resolve to figure out a way to make better user experience research happen, even if it seems impossible.

  1. Selling the Need For User Testing

    When I worked on implementing a discovery layer at my job earlier this year, I had a team of 18 people from three campuses with varying levels of interest and experience in user testing. It was really important to us that the end product work for everyone, whether novice or experienced researcher, as well as for the library staff who would need to use the system on a daily basis. With so many people and such a tight timeline, building user testing into the schedule from the start helped us frame our decisions as hypotheses to confirm or refute in the next round of testing. We tried to involve as many people as possible in the testing, though we had a core group with experience running the tests administer them. Doing a test as early as possible also helps convince others of the need for testing: people who had never seen a usability test before found them convincing immediately and were much more on board for future tests.

  2. Remembering Who Your Users Are

    Reference and instruction librarians are users too. We sometimes get so focused on reminding librarians that they are not the users that we don't make things work for them, and they do need to use the catalog too. Librarians who work with students in the classroom and in research consultations every day have a great deal of insight into seemingly minor issues that can lead to major frustrations. Here's an example: the desktop view of our discovery layer search box was about 320 pixels wide, which works fine if you are typing in just one word. Yet we were "selling" the discovery layer as something that handled known-item searching well, which meant that much of a pasted-in citation wasn't visible. The reference librarians who were doing this exact work knew this would be an issue. We expanded the search box so more words are visible and it works better for known-item searching.

    The same goes for course reserves, interlibrary loan, and other staff who work with a discovery layer frequently, often under the added pressure of tight deadlines. If you can shave seconds off their tasks, that adds up to a huge amount over the course of a year, and it will often solve issues for other users as well. One example: the print view of a book record had very small text. The print stylesheet was set to 85% font size, which made it challenging to read. The reserves staff relied on this print view to complete their daily work with their student worker, and for one student the small print size created an accessibility issue that led to inefficient manual workarounds. We increased the font size in the print stylesheet to greater than 100%, which made the printed page easily readable and fixed the accessibility issue for this specific use case. I suspect many other people benefit from this as well.

  3. Divide the Work

    I firmly believe that everyone who is interested in user experience on the web should get some hands-on experience with it. That said, not everyone needs to do the hands-on work, and with a large project it is important that people focus on their core reason for being on the team. Dividing the group into overlapping teams working on data testing, interface testing, and user education and outreach helped us see the big picture without overwhelming everyone (a little overwhelm is going to happen no matter what). These groups worked separately much of the time for deep dives into specific issues, but kept each other informed across the board. For instance, the data group might identify a potential issue, for which the interface group would design a test scenario; if testing indicated a change, the user education group could be made aware of the implications for outreach.

  4. A Quick Timeline is Your Friend

    Getting a new tool out with only a few months' turnaround time is certainly challenging, but it forces you to forget about perfection and get features done. We got our hands on the discovery layer on a Friday and were running tests the following Tuesday, with additional tests scheduled for two weeks after that first look. This meant that our first tests were on something very rough, but they gave us a big list of items to fix in the next two weeks before the next test (or to put on hold if lower priority). We ended up taking two months off from live usability testing in the middle of the process to focus on development and other types of testing (such as with trusted beta testers). But that early set of tests was crucial in setting the agenda and showing the importance of testing. We ultimately did five rounds of testing, four of which happened before the discovery layer went live, and one a few months after.

  5. Think on the Long Scale

    The vendor or the community of developers is presumably not going to stop working on the product, and neither should you. For this reason, it is helpful to make it clear who is doing the work and ensure that it is written into committee charges, job descriptions, or other appropriate documentation. Maintain a list of long-term goals, and in those short timescales figure out just one or two changes you could make. The academic year affords many peaks and lulls, and those lulls can be great times to make minor changes. Regular usability testing ensures that these changes are positive, as well as uncovering new needs as tools and needs change.

  6. Be Iterative

    Iteration is the way to ensure that your long timescale stays manageable. The work never really stops, but that's OK; you need a job, right? Back to that idea of a short timeline: borrow from the Agile method and think in timescales of two weeks to a month. Have the end goal in mind, but know that getting there will happen in tiny pieces. This does require some faith that all the crucial pieces will happen, but as long as someone is keeping an eye on those (in our case, the vendor helped a lot with this), the pressure to be "finished" is off. If a test shows that something is broken that really needs to work, that can become high priority, and other desired features can move to a future cycle. Iteration helps you stay on track and get small pieces done regularly.

Conclusion

I hope I've made the case for why you need to have a user focus in any project, particularly a large and complex one. Whether you're a reference librarian, project manager, web developer, or cataloger, you have a responsibility to ensure the end result is usable, useful, and something people actually want to use. No matter how tight your timeline, stick to making sure the process is user-centered, and you'll be amazed at how many impossible things you accomplish.


Creating a Flipbook Reader for the Web

At my library, we have a wonderful collection of artists' books. These titles are not mere text, or even concrete poetry shaping letters across a page, but works of art that use the book as a form in the same way a sculptor might choose clay or marble. They're all highly inventive and typically employ an interplay between language, symbol, and image that challenges our understanding of what a print volume is. As an art library, collecting these unique and inspiring works seems natural. But how can we share them with the world? For a while, our work-study students have been scanning sets of pages from the books using a typical flatbed scanner, but we weren't sure how to present the resulting images. Below, I'll detail how we came to fork the Internet Archive's Bookreader to publish our images to the web.

Choosing a Book Display

Our Digital Scholarship Librarian, Lisa Conrad, led the artists’ book project. She investigated a number of potential options for creating interactive “flipbooks” out of our series of images. We had a few requirements we were looking for:

  • mobile friendly — ideally, our books would be able to be read on any device, not just desktops with Adobe Flash
  • easy to use — our work study staff shouldn’t need sophisticated skills, like editing code or markup, to publish a work
  • visually appealing — obviously we're dealing with works of art; if our presentation is hideous or obscures the elegance of the original, then it's doing more harm than good
  • works with our repository — ultimately, the books would "live" in our institutional repository, so we needed software that would emit static HTML or documents that could easily be stored there

These are not especially stringent requirements, in my mind. Still, we struggled to find decent options. The mobile requirement restricted things quite a bit; a surprising number of apps used Flash or published in desktop-first formats. And we felt our options for customizing and simplifying the user interface of other apps were too limited. Even if an app spat out a nice bundle of HTML and assets that we could upload, the HTML might call out to sketchy external services or present a number of options we didn't want or need. While I was at first hesitant to implement an open-source piece of software that I knew would require much of my time, ultimately the Internet Archive's Bookreader became the frontrunner. We felt more comfortable using an established, free piece of software that doesn't require any server-side component (it's all JavaScript!). My primary concern was that the last commit to the Bookreader code base was over half a year ago.

Implementing the Internet Archive Bookreader

The Bookreader proved startlingly easy to implement. We simply copied the code from an example given by the Internet Archive, edited several lines of JavaScript, and were ready to go. Everything else was refinement. Still, we made a few nice decisions in working on the Bookreader that I’d like to share. If you’re not a coder, feel free to skip the rest of this section as it may be unnecessarily detailed.

First off, our code is all up on GitHub. But it won't be useful to anyone else! It contains specific logic based on how our IR serves up links to the bookreader app itself. One of the smarter moves, though, if I may indulge in some self-congratulation, was using submodules in git. If you know git, then you know it's easily one of the best ways to version code and manage projects. But what if your app is just an implementation of an existing code base? There's a huge portion of code you're borrowing from somewhere else, and you want to be able to segment that off and incorporate changes to it separately.

As I noted before, the bookreader isn't exactly actively developed. But it still benefits us to separate our local modifications from the stable, external code. A submodule lets us incorporate another repository into our own; it's the clean way of bringing someone else's files into our project. We can reference files in the other repository as if they're sitting right beside our own, but also pull in updates in a controlled manner without causing tons of extra commits. Elegant, handy, sublime.
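
In practice the setup came down to a couple of git commands; a sketch, assuming the Internet Archive's public repository URL (worth double-checking before relying on it):

git submodule add https://github.com/internetarchive/bookreader.git bookreader
git submodule update --init

The first command records the upstream repository as a submodule in a bookreader/ subdirectory of our project; the second fetches its contents after anyone clones our repository fresh.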

Most of our work happened in a single app.js file. This file was copied from the Internet Archive’s provided example and only modified slightly. To give you a sense of how one modifies the original script, here’s a portion:

// Create the BookReader object
br = new BookReader();
// Total number of leafs
br.numLeafs = vaultItem.pages;
// Book title & URL used for the book title link
br.bookTitle = vaultItem.title;
br.bookUrl = vaultItem.root + 'items/' + vaultItem.id + '/' + vaultItem.version + '/';
// how does the bookreader know what images to retrieve given a page number (index)?
br.getPageURI = function(index, reduce, rotate) {
    // reduce and rotate are ignored in this simple implementation, but we
    // could e.g. look at reduce and load images from a different directory
    var url = vaultItem.root + 'file/' + vaultItem.id + '/' + vaultItem.version + '/' + vaultItem.filenames + (index + 1) + '.JPG';
    return url;
}

The Internet Archive's code provides us with a BookReader class. All we do is instantiate it once and override certain properties and methods. The code above is all that's necessary to display a particular image for a given page number; the vaultItem object (VAULT is the name of our IR) consists of information about a single artists' book, like the number of pages, its title, and its ID and version within the IR. The bookreader app cobbles together these pieces of information to figure out which images it should display for a given page or pair of pages. The getPageURI function mostly works with a single index argument, while the concatenated strings that build the URL reflect how our repository stores files. It's highly specific to the IR we're using, but not terribly complicated.

The Bookreader itself sits on a web server outside the IR. Since we cannot publicly share most of these books (more on this below), we restrict access to the images to users who are authenticated with the IR. So how can the external reader app serve up images of books within the repository? We expect people to discover the books via our library catalog, which links to the IR, or within the IR itself. Once they sign in, our IR's display templates contain specially crafted URLs that pass information about the repository item to the bookreader via the query string. Here's a shortened example of one query string:

?title=Crystals%20to%20Aden&id=17c06cc5-c419-4e77-9bdb-43c69e94b4cd&pages=26

From there, the app parses the query string to figure out that the book's title is Crystals to Aden, that its ID within the IR is "17c06cc5-c419-4e77-9bdb-43c69e94b4cd", and that it has twenty-six pages. We store those values in the vaultItem object referenced in the script above. That object contains enough information for the bookreader to determine how to retrieve images from the IR for each page of the book. Since the user has already authenticated with the IR when they discovered the URL earlier, the IR happily serves up the images.
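
To make the handoff concrete, here is a minimal sketch of the kind of query-string parsing involved. The parameter names follow the example URL above; the remaining vaultItem fields (root, version, filenames) are assumptions about our setup and would be supplied separately.

// Parse the query string into a simple key/value map.
var params = {};
window.location.search.substring(1).split('&').forEach(function (pair) {
    var parts = pair.split('=');
    params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
});

// Assemble the object our BookReader configuration reads from.
var vaultItem = {
    title: params.title,               // e.g. "Crystals to Aden"
    id: params.id,                     // repository item identifier
    pages: parseInt(params.pages, 10)  // becomes br.numLeafs
};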

Ch-ch-ch-challenges

Easily the most difficult part of our artists’ books project has been spreading awareness. It’s a cool project that’s required tons of work all around, from the Digital Scholarship Librarian managing the project, to our diligent work study student scanning images, to our circulation staff cataloging them in our IR, to me solving problems in JavaScript while my web design student worker polished the CSS. We’re proud of the result, but also struggling to share it outside of our walls. Our IR is great at some things but not particularly intuitive to use, so we cannot count on patrons stumbling across the artists’ books on their own very often. Furthermore, putting anything behind a login is obviously going to increase the number of people who give up before accessing it. Not being on the campus Central Authentication Service only exacerbates this for us.

The second challenge is—what else!—copyright. We don’t own the rights to any of these titles. We’ve been guerrilla digitizing them without permission from publishers. For internal sharing, that’s fine and well under Fair Use; we’re only showing excerpts and security settings in our IR ensure they’re only visible to our constituents. But I pine to share these gorgeous creations with people outside the college, with my social networks, with other librarians. Right now, we only have permission to share Crystals to Aden by Michael Bulteau (thanks to duration press for letting us!). Which is great! Except Crystals is a relatively text-heavy work; it’s fine poetry, yes, but not the earth-shattering reinterpretation of the codex I promised you in my opening paragraph.

Hopefully, we can secure permissions to share further works. Internally, we’ve pushed out notices on social media, the LMS, and email lists to inform people of the artists’ books. They’re also available for checkout, so hopefully our digital teasers bring people in to see the real deal.

Lasting Problems

And that's something that must be mentioned: our digital simulations are definitely not the real deal. Never has a set of titles been so resistant to digitization. Our work-study student has even asked us, "how could I possibly separate this work into distinct pairs of pages?" about a work whose partially overlapping leaves look like vinyl record sleeves.1 The artists deliberately chose the printed book form, and there's a deep sacrilege in trying to represent that digitally. We can only ever offer a vague adumbration of what was truly intended. I still see value in our bookreader—and the works often look brilliant even as scanned images—but there are fundamental impediments to its execution.

Secondly, scanned images are not text. Many of the works do have text on the page, but because we're displaying images in the browser, users cannot select the text to copy it. Alongside each set of images in our IR, we also have a PDF copy of all the pages with OCR'd text. But a decision was made not to make the PDFs visible to non-library staff, since they would be easy to download and disseminate (unlike the many discrete images in our bookreader). This all adds up to make the bookreader rather inaccessible: it's all images, there's no feasible way for us to associate alt text with each one, and the PDF copy that might be of interest to visually impaired users is hidden.

I've ended on a sour note, but digitizing our artists' books and fiddling with the Internet Archive's Bookreader was a great project: fun, a bit challenging, with some splendid results. We have our work cut out for us if we want to draw more attention to the books and make them compelling for everyone. But other libraries in similar situations may find the Bookreader a very viable, easy-to-implement solution. If you don't have permissions issues and are dealing with more traditional works, it's built to be customizable yet offers a pleasant reading experience.

Notes

  1. Paradoxic Mutations by Margot Lovejoy

A Clean House at the Directory of Open Access Journals

The Directory of Open Access Journals (DOAJ) is an international directory of journals and index of articles that are available open access. Founded in 2003, the DOAJ was at the center of a controversy surrounding the "sting" conducted by John Bohannon in Science, which I covered in 2013. Essentially, Bohannon used journals listed in DOAJ to find journals that would publish an article of poor quality as long as the authors paid a fee. At the time, many suggested that a crowdsourced journal-reviewing platform might be the way to resolve the problem if DOAJ wasn't a good source. While such a platform might still be a good idea, the simpler and more obvious solution is the one that seems to have happened: for DOAJ to be more strict with publishers about requirements for inclusion in the directory.1

The process of cleaning up the DOAJ has been going on for some time and is getting close to an important milestone. All the 10,000+ journals listed in DOAJ were required to reapply for inclusion, and the deadline for that is December 30, 2015. After that time, any journals that haven’t reapplied will be removed from the DOAJ.

“Proactive Not Reactive”

Contrary to popular belief, the process for this started well before the Bohannon piece was published.2 In December 2012, an organization called Infrastructure Services for Open Access (IS4OA), founded by Alma Swan and Caroline Sutton, took over DOAJ from Lund University and announced several initiatives, including a new platform, distributed editorial help, and improved criteria for inclusion.3 Because DOAJ grew to be an important piece of the scholarly communications infrastructure, it was inevitable that it would have to take such a step sooner or later. With nearly 10,000 journals and only a small team of editors, the old model wouldn't have been sustainable over time, and to lose the DOAJ would have been a blow to the open access community.

One of the remarkable things about the revitalization of the DOAJ is the transparency of the process. The DOAJ News Service blog has been documenting the behind-the-scenes work in detail since May 2014. One of the most useful resources is a list of journals that claim to be listed in DOAJ but are not. Another important piece of information is the 2015-2016 development roadmap. There is a lot going on with the DOAJ update, however, so below I will pick out what I think is most important to know.

The New DOAJ

In March 2014, the DOAJ created a new application form with much higher standards for inclusion. Previously the form for inclusion was only 6 questions, but after working with the community they changed the application to require 58 questions. The requirements are detailed on a page for publishers, and the new application form is available as a spreadsheet.

While 58 questions seems like a lot, it is important to note that journals need not fulfill every single requirement beyond the basic requirements for inclusion. The idea is that journal publishers must be transparent about the structure and funding of the journal, and that journals explicitly labeled as open access meet some basic theoretical components of open access. For instance, one of the basic requirements is that "the full text of ALL content must be available for free and be Open Access without delay". Certain other pieces are strong suggestions, but not meeting them will not disqualify a journal. For instance, the DOAJ takes a strong stand against impact factors and suggests that they not be presented on journal websites at all.4

To highlight journals that have extremely high standards for "accessibility, openness, discoverability, reuse and author rights", the DOAJ has developed a "Seal" that is awarded to journals that answer "yes" to the following questions (taken from the DOAJ application form):

  • have an archival arrangement in place with an external party (Question 25). 'No policy in place' does not qualify for the Seal.
  • provide permanent identifiers in the papers published (Question 28). 'None' does not qualify for the Seal.
  • provide article level metadata to DOAJ (Question 29). 'No' or failure to provide metadata within 3 months do not qualify for the Seal.
  • embed machine-readable CC licensing information in article level metadata (Question 45). 'No' does not qualify for the Seal.
  • allow reuse and remixing of content in accordance with a CC BY, CC BY-SA or CC BY-NC license (Question 47). If CC BY-ND, CC BY-NC-ND, 'No' or 'Other' is selected the journal will not qualify for the Seal.
  • have a deposit policy registered in a deposit policy directory (Question 51). 'No' does not qualify for the Seal.
  • allow the author to hold the copyright without restrictions (Question 52). 'No' does not qualify for the Seal.

Part of the appeal of the Seal is that it focuses on the good things about open access journals rather than the questionable practices. Having a whitelist is much more appealing for people doing open access outreach than a blacklist. Journals with the Seal are available in a facet on the new DOAJ interface.

Getting In and Out of the DOAJ

Part of the reworking of the DOAJ was the requirement that all currently listed journals reapply. As of November 19, just over 1,700 journals had been accepted under the new criteria, and just over 800 had been removed (you can follow the list yourself here). For now you can spot journals that have reapplied by a green check mark (what DOAJ calls The Tick!). That means that about 85% of journals that were previously listed either have not reapplied or are still in the verification pipeline.5 While DOAJ does not discuss the specific reasons a journal or publisher is removed, it does give a general category for each removal. I did some analysis of the data provided in the added/removed/rejected spreadsheet.

At the time of analysis, there were 1776 journals on the accepted list. 20% of these were added since September, and with the deadline looming this number is sure to grow. Around 8% of the accepted journals have the DOAJ Seal.

There were 809 journals removed from the DOAJ, and the reasons fell into the general categories below. I manually checked some of the journals in the categories with only one or two titles, and suspect that some of these may be reinstated if the publisher chooses to reapply. Note that well over half the removed journals weren't removed for misconduct; they had ceased publishing or were otherwise unavailable.

  • Inactive (has not published in the last calendar year): 233
  • Suspected editorial misconduct by publisher: 229
  • Website URL no longer works: 124
  • Ceased publishing: 108
  • Journal not adhering to Best Practice: 62
  • Journal is no longer Open Access: 45
  • Has not published enough articles this calendar year: 2
  • Wrong ISSN: 2
  • Other; delayed open access: 1
  • Other; no content: 1
  • Other; taken offline: 1
  • Removed at publisher's request: 1

  Total: 809

The spreadsheet lists 26 journals that were rejected. Rejected journals are told the specific reasons their applications were rejected, but those reasons are not made public. Journals may reapply after 6 months, once they have had an opportunity to address the issues.6 The general stated reasons were as follows:

  • Unknown: 19
  • Has not published enough articles: 2
  • Journal website lacks necessary information: 2
  • Not an academic/scholarly journal: 1
  • Only Abstracts: 1
  • Web site URL doesn't work: 1

Conclusion

The work that DOAJ is doing to improve transparency and the screening process is very important for open access advocates, who will soon have a tool they can trust to provide much more complete information for scholars and librarians. For too long we have been forced to rely on the concept of a list of "questionable" or even "predatory" journals. A directory of journals with robust standards and an easy-to-understand interface will be a fresh start for the rhetoric of open access journals.

Are you the editor of an open access journal? What do you think of the new application process? Leave your thoughts in the comments (anonymously if you like).


The Library as Research Partner

As I typed the title for this post, I couldn’t help but think “Well, yeah. What else would the library be?” Instead of changing the title, however, I want to actually unpack what we mean when we say “research partner,” especially in the context of research data management support. In the most traditional sense, libraries provide materials and space that support the research endeavor, whether it be in the physical form (books, special collections materials, study carrels) or the virtual (digital collections, online exhibits, electronic resources). Moreover, librarians are frequently involved in aiding researchers as they navigate those spaces and materials. This aid is often at the information seeking stage, when researchers have difficulty tracking down references, or need expert help formulating search strategies. Libraries and librarians have less often been involved at the most upstream point in the research process: the start of the experimental design or research question. As one considers the role of the Library in the scholarly life-cycle, one should consider the ways in which the Library can be a partner with other stakeholders in that life-cycle. With respect to research data management, what is the appropriate role for the Library?

In order to achieve effective research data management (RDM), planning for the life-cycle of the data should occur before any data are actually collected. In circumstances where a grant application requirement triggers a call to the Library for data management plan (DMP) assistance, this may be possible. But why are researchers calling the Library? Ostensibly, it is because the Library has marketed itself (read: its people) as an expert in the domain of data management. It has most likely done this in coordination with the Research Office on campus. Even more likely, it did this because no one else was doing it. It may have done this in response to the National Science Foundation (NSF) DMP requirement in 2011, or it may have just started doing this because of perceived need on campus, or because it seemed like the thing to do (which can lead to poorly executed hiring practices). But unlike monographic collecting or electronic resource acquisition, comprehensive RDM requires much more coordination with partners outside the Library.

Steven Van Tuyl has written about the common coordination model of the Library, the Research Office, and Central Computing with respect to RDM services. The Research Office has expertise in compliance and Central Computing can provide technical infrastructure, but he posits that there could be more effective partners in the RDM game than the Library. That perhaps the Library is only there because no one else was stepping up when DMP mandates came down. Perhaps enough time has passed, and RDM and data services have evolved enough that the Library doesn’t have to fill that void any longer. Perhaps the Library is actually the *wrong* partner in the model. If we acknowledge that communities of practice drive change, and intentional RDM is a change for many of the researchers, then wouldn’t ceding this work to the communities of practice be the most effective way to stimulate long lasting change? The Library has planted some starter seeds within departments and now the departments could go forth and carry the practice forward, right?

Well, yes. That would be ideal for many aspects of RDM. I personally would very much like to see the intentional planning for, and management of, research data more seamlessly integrated into standard experimental methodology. But I don’t think that by accomplishing that, the Library should be removed as a research partner in the data services model. I say this for two reasons:

  1. The data/information landscape is still changing. In addition to the fact that more funders are requiring DMPs, more research can benefit from using openly available (and well described – please make it understandable) data. While researchers are experts in their domain, the Library is still the expert in the information game. At its simplest, data sources are another information source. The Library has always been there to help researchers find sources; this is another facet of that aid. More holistically, the Library is increasingly positioning itself to be an advocate for effective scholarly communication at all points of the scholarship life-cycle. This is a logical move as the products of scholarship take on more diverse and “nontraditional” forms.

Some may propose that librarians who have cultivated RDM expertise can still provide data seeking services, but perhaps they should not reside in the Library. Would it not be better to have them collocated with the researchers in the college or department? Truly embedded in the local environment? I think this is a very interesting model that I have heard some large institutions may want to explore more fully. But I think my second point is a reason to explore this option with some caution:

  2. Preservation and access. Libraries are the experts in the preservation and access of materials. Central Computing is a critical institutional partner in terms of infrastructure and determining institutional needs for storage, porting, computing power, and bandwidth but – in my experience – is happy to let the long-term preservation and access service fall to another entity. Libraries (and archives) have been leading the development of digital preservation best practices for some time now, with keen attention to complex objects. While not all institutions can provide repository services for research data, the Library perspective and expertise is important to have at the table. Moreover, because the Library is a discipline-agnostic entity, librarians may be able to more easily imagine diverse interest in research data than the data producer can. This can increase the potential vehicles for data sharing, depending on the discipline.

Yes, RDM and data services are reaching a place of maturity in academic institutions where many Libraries are evaluating, or re-evaluating, their role as a research partner. While many researchers and departments may be taking a more proactive or interested position with RDM, it is not appropriate for Libraries to be removed from the coordinated work that is required. Libraries should assert their expertise, while recognizing the expertise of other partners, in order to determine effective outreach strategies and resource needs. Above all, Libraries must set scope for this work. Do not be deterred by the increased interest from other campus entities to join in this work. Rather, embrace that interest and determine how we all can support and strengthen the partnerships that facilitate the innovative and exciting research and scholarship at an institution.


From Consensus to Expertise: Rethinking Library Web Governance

The world is changing, the web is changing and libraries are changing along with them. Commercial behemoths like Amazon, Google and Facebook, together with significant advancements in technical infrastructure and consumer technology, have established a new set of expectations for even casual users of the web. These expectations have created new mental models of how things ought to work, and why—not just online, either. The Internet of Things may not yet be fully realized but we clearly see its imminent appearance in our daily lives.

Within libraries, has our collective concept of the intention and purpose of the library website evolved as well? How should the significant changes in how the web works, what websites do and how we interact with them also impact how we manage, assess and maintain library websites?

In some cases it has been easier to say what the library website is not (a catalog, a fixed-form document, a repository), although it facilitates access to these things and perhaps makes them discoverable. What, then, is the library website? As academic librarians, we define it as follows.

The library website is an integrated representation of the library, providing continuously updated content and tools to engage with the academic mission of the college/university.

It is constructed and maintained for the benefit of the user. Value is placed on consumption of content by the user rather than production of content by staff.

Moving from a negative definition to a positive definition empowers both stewards of and contributors to the website to participate in an ongoing conversation about how to respond proactively to the future, our changing needs and expectations and, chiefly, to our users’ changing needs and expectations. Web content management systems have moved from being just another content silo to being a key part of library service infrastructure. Building on this forward momentum enables progress to a better, more context-sensitive user experience for all as we consider our content independent of its platform.

It is just this reimagining of how and why the library website contributes value, and what role it fulfills within the organization in terms of our larger goals for connecting with our local constituencies—supporting research and teaching through providing access to resources and expertise—that demands a new model for library web governance.

Emerging disciplines like content strategy and a surge of interest in user experience design and design thinking give us new tools to reflect on our practice and even to redefine what constitutes best practice in the area of web librarianship.

Historically, libraries have managed websites through committees and task forces. Appointments to these governing bodies were frequently driven by a desire to ensure adequate balance across the organizational chart, and to varying degrees by individuals’ interest and expertise. As such, we must acknowledge the role of internal politics as a variable factor in these groups’ ability to succeed—one might be working either with, or against, the wind. Librarians, particularly in groups of this kind, notoriously prefer consensus-driven decision making.

The role of expertise is largely taken for granted across most library units; that is, not just anyone is qualified to perform a range of essential duties, from cataloging to instruction to server administration to website management.1 Consciously according our colleagues and ourselves the trust to employ our unique expertise allows individuals to flourish and enlarges the capacity of the organization and the profession. In the context of web design and governance, consensus is a blocker to nimble, standards-based, user-focused action. Collaborative processes in which all voices are heard, together with empirical data, are essential inputs for effective decision-making by domain experts in web librarianship, as in other areas of library operations.

Web librarianship, through bridging and unifying individual and collaborative contributions to better enable discovery, supports the overall mission of libraries in the context of the following critical function:

providing multiple systems and/or interfaces
to browse, identify, locate, obtain and use
spaces, collections, and services,
either known or previously unknown to the searcher
with the goal of enabling completion of an information-related task or goal.

The scope of content potentially relevant to the user’s discovery journey encompasses the hours a particular library is open on a given day up to and including advanced scholarship, and all points between. This perhaps revives in the reader’s mind the concept of library website as portal – that analogy has its strengths and weaknesses, to be sure. Ultimately, success for a library website may be defined as the degree to which it enables seamless passage; the user’s journey only briefly intersects with our systems and services, and we should permit her to continue on to her desired information destination without unnecessary inconvenience or interference. A friction-free experience of this kind requires a holistic vision and relies on thoughtful stewardship and effective governance of meaningful content – in other words, on a specific and cultivated expertise, situated within the context of library practice. Welcome to a new web librarianship.


Courtney Greene McDonald (@xocg) is Head of Discovery & Research Services at the Indiana University Libraries in Bloomington. A technoluddite at heart, she’s equally likely to be leafing through the NUC to answer a reference question as she is to be knee-deep in a config file. She presents and writes about user experience in libraries, and is the author of Putting the User First: 30 Strategies for Transforming Library Services (ACRL 2014). She’s also a full-time word nerd and gourmand, a fair-weather gardener, and an aspiring world traveler.

Anne Haines (@annehaines) is the Web Content Specialist for the Indiana University Bloomington Libraries. She loves creating webforms in Drupal, talking to people about how to make their writing work better on the web, and sitting in endless meetings. (Okay, maybe not so much that last one.) You can find her hanging out at the intersection of content strategy and librarianship, singing a doo-wop tune underneath the streetlight.

Rachael Cohen (@RachaelCohen1) is the Discovery User Experience Librarian at Indiana University Bloomington Libraries, where she is the product owner for the library catalog discovery layer and manages the web-scale discovery service. When she’s not negotiating with developers, catalogers, and public service people, you can find her hoarding books and Googling for her family.

 

Notes

  1. While it is safe to say that all library staff have amassed significant experience in personal web use, not all staff are equally equipped with the growing variety of skillsets and technical mastery necessary to oversee and steward a thriving website.

Wikipedia, Libraries, & Neutrality

This piece is substantially based on a column I wrote for RUSQ that will appear early next year.


Broadly construed, there are two camps of opinion surrounding Wikipedia in librarianship. The first holds that Wikipedia is not an academic-quality source and that students need to be warned away from it. The community authorship of Wikipedia is typically the source of this criticism; since “anyone can edit” the online encyclopedia, there’s no way to tell whether you’re receiving high-quality information written by experts or the ill-informed opinions of internet trolls.

The other camp is far more positive about Wikipedia. After all, the values of the Wikipedian community clearly align with our own. The ultimate goal of the project is not to give a venue to poor research or fringe theories, but to enable free and open access to information. While Wikipedia isn’t flawless, many of its articles compete with more established reference sources. Famously, Nature performed a comparison of scientific articles and found Wikipedia to be comparable to Encyclopedia Britannica. But following the study, a community project to correct the identified errors in Wikipedia sprang up, fixing them all in a little over a month.1 With this common ground, it’s no wonder that libraries have found Wikipedia to be a valuable partner in publicizing our content. Articles like Using Wikipedia to Extend Digital Collections, Putting the Library in Wikipedia, and Wikipedia Lover, Not a Hater: Harnessing Wikipedia to Increase the Discoverability of Library Resources all discuss the value of working with Wikipedia, rather than against it, to highlight library digital collections and metadata.

A good example of Wikipedia driving traffic to library special collections was brought to my attention by fellow Tech Connect author Margaret Heller, who pointed me towards the Google Analytics Usage Reports for CARLI Digital Collections. CARLI regularly sees Wikipedia as one of its top external traffic sources; in the last few quarterly reports, Wikipedia has been noted as a traffic source leading “to home pages or images from multiple CARLI Collections”.

The Problem with Neutral

I must admit that I side with the pro-Wikipedia folks. I love using Wikipedia as a resource (as a procrastination-enabler, I have several articles on roguelikes open that I’m reading while I write this), and I love contributing to it in whatever small ways I can, from fixing broken markup to citation hunting. But Wikipedia is imperfect in ways more insidious than its anonymous authorship or occasionally inaccurate details. Rather, one problem lies within one of the five pillars that define the philosophy of Wikipedia: “Wikipedia is written from a neutral point of view”.

Neutrality has been under fire from #critlib lately, a group of librarians emphasizing critical theory especially with respect to information literacy instruction. ALA Annual featured a well-attended presentation entitled “But We’re Neutral!” And Other Librarian Fictions Confronted by #critlib. A seminal article appearing earlier this year in Code4Lib by Bess Sadler and Chris Bourg begins with a rousing section labelled Libraries are not Neutral:

In spite of the pride many libraries take in their neutrality, libraries have never been neutral repositories of knowledge. Research libraries in particular have always reflected the inequalities, biases, ethnocentrism, and power imbalances that exist throughout the academic enterprise through collection policies and hiring practices that reflect the biases of those in power at a given institution. In addition, theoretically neutral library activities like cataloging have often re-created societal patterns of exclusion and inequality.

Wikipedia shares this problem; while it appears neutral on the surface, its topical coverage and treatment of subjects reflect the skewed power relations of our society. A 2011 study found that 91% of editors were men. The same study shows that few editors come from the Global South and that the English Wikipedia receives far more attention than other language editions. Another research paper from 2011 goes a bit further in demonstrating that “male articles are significantly longer than female articles”.2 The editorial gender gap has real effects on the encyclopedia’s content; it’s not just that having editors of all genders is good in its own right, it’s that Wikipedia’s claims to objectivity and neutrality are jeopardized by the imbalance.

Art+Feminism

So what’s a concerned librarian to do? I think the wrong thing would be to denounce Wikipedia. For one, the alternative sources we or our patrons would turn to are no less problematic; Encyclopaedia Britannica has a well-documented history of supposedly scientific articles written from the dominant viewpoint (cf. the racist description of black people in the 1911 edition). What’s more, Wikipedia is going to be used. It’s massively popular. Sticking our heads in the sand because it doesn’t live up to its own standards of neutrality improves nothing.

Luckily, there are several Wikipedia projects focused on recruiting editors from underrepresented groups and addressing lackluster coverage of particular topics which we can support. One such project is Art + Feminism. In its own words:

Art+Feminism is a rhizomatic campaign to improve coverage of women and the arts on Wikipedia, and to encourage female editorship…Content is skewed by the lack of female participation. Many articles on notable women in history and art are absent on Wikipedia. This represents an alarming aporia in an increasingly important repository of shared knowledge.

Art+Feminism started out by hosting an edit-a-thon at the Eyebeam Art and Technology Center in New York City in 2014, with more than 30 other locations worldwide joining in. An edit-a-thon is an event where people gather to perform Wikipedia edits, often centered around a particular theme or project. Higher education institutions and libraries make perfect partners for such occasions: we typically have useful materials to be cited, and our students form a large body of potential participants who can be easily incentivized to join in, whether with extra credit or simply some food. So when my library heard of the upcoming 2nd annual edit-a-thon, we immediately began planning to host it. Below, I’ll briefly outline what we did in the hopes it’ll encourage other institutions to join us during this year’s edit-a-thon on March 5th.

Hosting an Edit-a-thon

First, we set up a meetup page on Wikipedia. If you’re unfamiliar with Wikipedia, creating a page like this isn’t a struggle. For one, you can simply copy the entire source markup of someone else’s meetup, then edit your specific details into that skeleton. For two, you can enable the experimental Visual Editor to make Wikipedia even easier to edit without learning wiki markup. The meetup page is an important place for putting up information like timing and directions, but is also a place for us to talk about the impact we made by showing how many editors attended and what articles we improved or created.
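
To make the copying step concrete, here is a minimal Python sketch (assuming the requests library; the page title below is only an example) of pulling down the wikitext of an existing meetup page through the MediaWiki API so you can reuse its skeleton:

    import requests

    API = "https://en.wikipedia.org/w/api.php"
    params = {
        "action": "parse",
        "page": "Wikipedia:Meetup/ArtAndFeminism",  # example title; substitute the meetup you want to copy
        "prop": "wikitext",
        "format": "json",
        "formatversion": 2,
    }

    # Ask the MediaWiki API for the page's raw wiki markup.
    resp = requests.get(API, params=params, timeout=10)
    resp.raise_for_status()
    wikitext = resp.json()["parse"]["wikitext"]

    # Save the skeleton locally so you can edit your own event details into it.
    with open("meetup_skeleton.txt", "w") as f:
        f.write(wikitext)

Of course, clicking “Edit source” on an existing meetup page and copying by hand works just as well; the script above is only useful if you want to automate the process.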

While we were putting initial details on our meetup page, we set about securing a location on the date of the edit-a-thon. We discovered that a gallery associated with our school had hosted the edit-a-thon the prior year, but they were unable to repeat it. Our school has campuses in both San Francisco and Oakland, but since Oakland had no other edit-a-thon locations we decided to host it there with the idea that SF locals already had an event nearby.

Our next steps aimed to make the event as easy for newcomers as possible. A staff member pulled relevant materials from our collection, so that researching would be simple and our rarer, more valuable resources might lend some of their information to Wikipedia. We also wanted experienced editors to be on hand during the event, so I looked at a local WikiProject, asking for help on its Talk page and then corresponding directly with a couple of editors who had expressed interest. Finally, we managed to secure a visit from someone who works for the Wikimedia Foundation, which runs the encyclopedia.

During the day, we set out snacks and name badges for everyone. Similar to the color-coded name badges at Ada Camp and other tech conferences, Art+Feminism recommended giving out name badges which signify one’s willingness to be photographed: green meant feel free to take a photo, orange meant please ask permission first, and red meant absolutely not. These steps ensure that everyone is comfortable at the event and not being exposed on the internet against their will. At the start of the day, we explained the coloring system and the Wikimedia staff person gave a short talk on how to write articles that endure, standing up to scrutiny over time. The results of the global event are listed on Wikipedia.

While I feel we accomplished much at our local event, there was one negative experience. An image uploaded to Wikimedia Commons for use on a page was flagged for deletion. The editor flagging it said something along the lines of “this isn’t your personal photo album” as the image was a headshot of a female artist. In the ensuing discussion around the proposed deletion, I noted that the image was about to be used on an article. It was never removed from Commons. Still, the incident underscores cultural problems in Wikipedia. The confrontational style of the discussion lacked good faith. Further, I heard a gendered undertone in the editor’s response; how many pictures of white men are derided as personal photos? While our library staff person was undeterred, moments of hostility like these drive away newcomers.

In Which I Admit I’m Missing the Point

To be fair, Wikipedia itself acknowledges that it fails to live up to neutral status. That the encyclopedia strives towards neutrality is the more vital point. But it’s not as if reaching a supposedly perfect neutrality resolves the issues that folks like #critlib are highlighting. Neutral as a positive value is precisely the problem, because there is no neutral stance that can be taken from outside of society’s power relations and history of inequity. So should we instead be agitating for Wikipedia to become less neutral and take more active stances on social issues? Or are edit-a-thons like Art+Feminism the most viable route towards ensuring topics are covered in a way that surfaces marginalized peoples and their experiences? I have no answers, just food for thought.

  1. See External peer review/Nature December 2005 for Wikipedia’s internal take on the correction process. The original Nature article is paywalled here and is doi:10.1038/438900a. A similar but open access study is doi:10.1371/journal.pone.0106930, Accuracy and Completeness of Drug Information in Wikipedia: A Comparison with Standard Textbooks of Pharmacology.
  2. WP:Clubhouse? An Exploration of Wikipedia’s Gender Imbalance. I’m skeptical of how “male” and “female” articles were defined, but the paper itself is thorough in its argumentation and statistical analysis. It’s also worth noting that this paper, and much of the other literature around the gender gap, ignores genders outside the female-male binary. It’s most useful to conceive of the gap as a disproportionate majority of males rather than as a minority of females, while acknowledging that all other genders are left out of the picture entirely.

Near Us and Libraries, Robots Have Arrived

The movie Robot and Frank depicts a future in which the elderly have a robot as a companion and helper. The robot monitors various activities related to both mental and physical health and helps Frank with household chores. But Frank also enjoys the robot’s company and goes on to enlist it in his adventures: breaking into a local library to steal a book, and a greater heist later on. People’s lives in the movie are not particularly futuristic apart from the robots in them, and even a robot may not be so futuristic to us much longer. As a matter of fact, as of June 2015, there is a commercially available humanoid robot that comes close to performing some of the functions the robot in Robot and Frank does.


Pepper Robot, Image from Aldebaran, https://www.aldebaran.com/en/a-robots/who-is-pepper

The Japanese company SoftBank Robotics Corp. released a humanoid robot named ‘Pepper’ to the market back in June. The Pepper robot is 4 feet tall, weighs 61 pounds, speaks 17 languages, and is equipped with an array of cameras, touch sensors, an accelerometer, and other sensors in its “endocrine-type multi-layer neural network,” according to the CNN report. The Pepper robot was priced at ¥198,000 ($1,600), and Pepper owners are also responsible for an additional ¥24,600 ($200) monthly data and insurance fee. While the Pepper robot is not exactly cheap, it is surprisingly affordable for a robot. This means that the robot industry has now matured to the point where it can introduce a robot that a mass market can afford.

Robots come in varying capabilities and forms. Some robots are as simple as programmable cube blocks that can be combined with one another into a working unit. For example, Cubelets from Modular Robotics are modular robots used for educational purposes. Each cube performs one specific function, such as a flash, battery, temperature sensor, brightness sensor, or rotation. By combining these blocks, one can build a robot that performs a certain function; for example, you can build a lighthouse robot by combining a battery block, a light-sensor block, a rotator block, and a flash block.

 

A variety of cubelets available from the Modular Robotics website.


By contrast, there are advanced robots such as the animal-like machines developed by the robotics company Boston Dynamics. Some robots look like a human, although much smaller than the Pepper robot. NAO, launched in 2006, is a 58-cm-tall humanoid robot that moves, recognizes and hears people, and talks to them. NAO robots are interactive educational toys that help students learn programming in a fun and practical way.
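
Programming NAO usually goes through the NAOqi SDK; as a rough illustration, here is a minimal Python sketch (the robot’s IP address is a placeholder, and it assumes the NAOqi Python SDK is installed on the workstation) that has the robot stand up, greet a visitor, and return to rest:

    from naoqi import ALProxy

    ROBOT_IP = "192.168.1.10"  # placeholder; use your robot's actual address
    PORT = 9559                # NAOqi's default port

    # Proxies to the robot's text-to-speech and motion services.
    tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
    motion = ALProxy("ALMotion", ROBOT_IP, PORT)

    motion.wakeUp()                           # stiffen the joints and stand up
    tts.say("Hello! Welcome to the library.")
    motion.rest()                             # relax into a safe resting posture

Even a few lines like these give students an immediate, visible payoff, which is a large part of NAO’s appeal for library programming workshops.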

Noticing their relevance to STEM education, some libraries are making robots available to library patrons. Westport Public Library provides robot training classes for its two NAO robots. Chicago Public Library lends a number of Finch robots that patrons can program to see how they work. In celebration of National Robotics Week back in April, San Diego Public Library hosted its first Robot Day, educating the public about how robots have impacted society. San Diego Public Library also started a weekly Robotics Club, inviting anyone to join in to help build, or learn how to build, a robot for the library. Haslet Public Library offers a Robotics Camp program for 6th to 8th graders who want to learn how to build with LEGO Mindstorms EV3 kits. School librarians are also starting robotics clubs. The Robotics Club at New Rochelle High School in New York is run by the school’s librarian, Ryan Paulsen. Paulsen’s robotics club started with help from faculty, parents, and other schools, along with a grant from NASA, and participated in a FIRST Robotics Competition. Organizations such as the Robotics Academy at Carnegie Mellon University provide educational outreach and resources.

Image from Aldebaran website at https://www.aldebaran.com/en/humanoid-robot/nao-robot

There are also libraries that offer coding workshops, often with Arduino or Raspberry Pi, which are inexpensive computing hardware. Ames Free Library offers Raspberry Pi workshops. San Diego Public Library runs a monthly Arduino Enthusiast Meetup. Arduinos and Raspberry Pis can be used to build digital devices and objects that sense and interact with the physical world, which brings them close to being simple robots. We may see more robotics programs at those libraries in the near future.
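
As a flavor of what “sense and interact with the physical world” looks like in practice, here is a minimal Raspberry Pi sketch in Python (the pin numbers are arbitrary choices, and it assumes an LED and a push button wired to the GPIO header with suitable resistors) that lights the LED while the button is held down:

    import time
    import RPi.GPIO as GPIO

    LED_PIN = 18      # BCM pin driving the LED
    BUTTON_PIN = 23   # BCM pin reading the push button

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LED_PIN, GPIO.OUT)
    GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

    try:
        while True:
            # The button pulls the pin low when pressed.
            pressed = GPIO.input(BUTTON_PIN) == GPIO.LOW
            GPIO.output(LED_PIN, GPIO.HIGH if pressed else GPIO.LOW)
            time.sleep(0.05)
    finally:
        GPIO.cleanup()  # release the pins on exit

Swap the button for a light or distance sensor and the LED for a motor driver, and you have the beginnings of the kind of simple robot a library workshop can build in an afternoon.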

Robots can fulfill many functions other than being educational interactive toys, however. For example, robots can be very useful in healthcare. A robot can be a patient’s emotional companion, just like Pepper. Or it can provide an easy way for a patient and her/his caregiver to communicate with physicians and others. A robot can be used at a hospital to move and deliver medication and other items and function as a telemedicine assistant. It can also provide physical assistance for a patient or a nurse and even be used for children’s therapy.

Humanoid robots like Pepper may also serve at the reception desks of companies, and it is not difficult to imagine them as sales clerks at stores. Robots can be useful at schools and in other educational settings as well. At a workplace, teleworkers can use robots to achieve a more active presence; for example, universities and colleges can offer telepresence robots to online students who want to virtually experience and utilize campus facilities, or to faculty who wish to hold office hours or collaborate with colleagues while they are away from the office. As a matter of fact, the University of Texas at Arlington Libraries recently acquired several telepresence robots to lend to their faculty and students.

Not all robots have, or will have, a humanoid form like Pepper’s. But as robots become more and more capable, we will surely see more of them in our daily lives.

References

Alpeyev, Pavel, and Takashi Amano. “Robots at Work: SoftBank Aims to Bring Pepper to Stores.” Bloomberg Business, June 30, 2015. http://www.bloomberg.com/news/articles/2015-06-30/robots-at-work-softbank-aims-to-bring-pepper-to-stores.

“Boston Dynamics.” Accessed September 8, 2015. http://www.bostondynamics.com/.

Boyer, Katie. “Robotics Clubs At the Library.” Public Libraries Online, June 16, 2014. http://publiclibrariesonline.org/2014/06/robotics-clubs-at-the-library/.

“Finch Robots Land at CPL Altgeld.” Chicago Public Library, May 12, 2014. https://www.chipublib.org/news/finch-robots-land-at-cpl/.

McNickle, Michelle. “10 Medical Robots That Could Change Healthcare – InformationWeek.” InformationWeek, December 6, 2012. http://www.informationweek.com/mobile/10-medical-robots-that-could-change-healthcare/d/d-id/1107696.

Singh, Angad. “‘Pepper’ the Emotional Robot, Sells out within a Minute.” CNN.com, June 23, 2015. http://www.cnn.com/2015/06/22/tech/pepper-robot-sold-out/.

Tran, Uyen. “SDPL Labs: Arduino Aplenty.” The Library Incubator Project, April 17, 2015. http://www.libraryasincubatorproject.org/?p=16559.

“UT Arlington Library to Begin Offering Programming Robots for Checkout.” University of Texas Arlington, March 11, 2015. https://www.uta.edu/news/releases/2015/03/Library-robots-2015.php.

Waldman, Loretta. “Coming Soon to the Library: Humanoid Robots.” Wall Street Journal, September 29, 2014, sec. New York. http://www.wsj.com/articles/coming-soon-to-the-library-humanoid-robots-1412015687.