Educating Your Campus about Predatory Publishers

The recent publication of Monica Berger and Jill Cirasella’s piece in College & Research Libraries News, “Beyond Beall’s List: Better understanding predatory publishers,” is a reminder that the issue of “predatory publishers” continues to require attention from those working in scholarly communication. Berger and Cirasella have done an exemplary job of laying out some of the issues with Beall’s list, and called on librarians to be able “to describe the beast, its implications, and its limitations—neither understating nor overstating its size and danger.”

At my institution, academic deans have identified “predatory” journals as an area of concern, and I am sure similar conversations are happening at other institutions. Here’s how I’ve “described the beast” at my institution, along with some models for services we all can provide, whether as subject librarians or scholarly communication librarians.

What is a Predatory Publisher? And Why Does the Dean Care?

The concept of predatory publishers became much more widely known in 2013 with the publication of an open access sting by John Bohannon in Science, which I covered in this post. As a recap, Bohannon created a fake, poor-quality scientific article that was superficially believable and submitted it to open access journals. He found that the majority of journals accepted the paper; 45% of those that accepted it were included in the Directory of Open Access Journals. At the time of publication in October 2013, the response to this article was explosive in the scholarly communications world. More than a year later, the reaction continues to spread. Late in the fall semester of 2014, library administration asked me to prepare a guide about predatory publishers, due to concern among the deans that unscrupulous publishers might be taking advantage of faculty. This was a topic I’d been educating faculty about on an ad hoc basis for years, but I hadn’t realized we needed to address it more systematically. That has all changed, with senior library administration now giving regular presentations to faculty about predatory publishers.

If we are to be advocates of open access, we need to focus on the positive impact that open access has rather than dwell too long on its downsides. We also need faculty to be clear on their own goals for making their work open access so that they can make more informed choices. Faculty have limited bandwidth for the topic, so focusing education on self-archiving articles (otherwise known as green open access) or choosing no-fee (also known as gold) open access journals is a better way to achieve advocacy goals than suggesting faculty choose only a certain set of gold open access journals. Unless we are offering money to pay article fees, we also don’t have much say about where faculty choose to publish. Education about how to choose a journal and a license responsibly is what we should focus on, even if it diverges from certain ideals (see Meredith Farkas on choosing Creative Commons licenses).

Understanding the Needs and Preparing the Material

As I mentioned, my library administration asked for a guide that they could use in presentations and share with faculty. In preparing this guide, I worked with our library’s Scholarly Communications committee (of which I am co-chair) to determine the format and content.

We decided that adding this material to our existing Open Access research guide would be the best move, since the guide was already up and we already shared its URL widely. We have a robust series of Open Access Week events (which I wrote about last fall), and this seemed the ideal place to continue engaging people. That said, we determined that the guide needed an overhaul to make it clearer that open access is an ongoing area of concern, not a once-a-year event. Since faculty are not always immediately thinking of making work open access but rather of the mechanics of publishing, I preferred to start with the title “Publishing Your Own Work”.

To describe its features a bit more, I wanted to start from the mindset of self-archiving work to make it open access, with a description of our repository and Peter Suber’s useful guide to making one’s own work open access. I then continued with an explanation of article publication fees, since I often get questions along those lines. Such fees are not unique to open access journals, and a fee does not mean a journal will accept anything for publication, which was a fear I heard more than once during Open Access Week last year. Only then did I discuss the concept of predatory journals, with the hope that a basic understanding of the process would allay fears. I then presented a list of steps for researching a journal. I thought these steps were mostly common sense, but after conversations with faculty and administration, I realized that my intuition about what type of journal I am dealing with comes from daily practice and experience. For people new to the topic, I tried to break the research down into easy steps that help them figure out where a journal falls on the continuum from outright scam to legitimate but new or unusual. It was also important to me to emphasize self-archiving as a strategy no matter the journal’s publication model.

Lastly, while most academic libraries have a model of liaison librarians engaging in scholarly communications activities, the person who spends every day working on these issues is likely to be more versed in emerging trends. So it is important to work with liaisons to help them research journals and to identify quality open access journals in their disciplines. We plan to add this information to the guide in a future version.

Taking it on the Road

We felt that in-person instruction on these matters with faculty was a crucial next step, particularly for people who publish in traditional journals but want to make their work available. Traditional journals’ copyright transfer agreements can be predatory, even if we don’t think about it in those terms. Taking inspiration from the ACRL Scholarly Communications Roadshow I attended a few years ago, I decided to take the curriculum from that program and offer it to faculty and graduate students. We read through three publication agreements as a group, and then discussed how open the publishers were to reuse of material, or whether they mentioned it at all. We then included a section on addenda to contracts for negotiation about additional rights.

The first workshop received modest attendance but included some thoughtful conversations, and we have promised to run it again. Some people may never have read their agreements closely and never realized they were doing something illegal, or at least not specifically allowed, when, for instance, sharing an article they wrote with their students. That concrete realization is more likely to spur action than more abstract arguments about the benefits of open access.

Escaping the Predator Metaphor

If I could go back, I would get rid of the concept of “predator” attached to open access journals. Let’s instead call them unscrupulous entrants into an emerging business model. That’s not as catchy, but it explains why this has happened. I would argue, personally, that the hybrid gold journals run by large publishers are just as predatory, since they capitalize on funding requirements to make articles open access with high fees. They too are trying new business models, and those may not be tenable either. As I said above, choosing a journal with eyes wide open and understanding all the ramifications of different publication models is the only way forward. To suggest that faculty are innocently waiting to be pounced on by predators is to deny their agency and their ability to make choices about their own work. There may be days when that metaphor seems apt, but overall I think this is a damaging mentality for librarians interested in promoting new models of scholarly communication. I hope we can provide better resources and programming to escape it, as well as help administration understand how to choose which open access initiatives to fund.

In the comments I’d like to hear more suggestions about how to escape the “predator” metaphor, as well as your own techniques for educating faculty on your campus.


GIS and Geospatial Data Tools

I was recently appointed the geography subject librarian for my library, which was mildly terrifying considering that I do not have a background in geography. But I was assigned the subject because of my interest in data visualization, and since my appointment I’ve learned a few things about the exciting opportunities to integrate Geographic Information Systems (GIS) and geospatial visualization tools into information literacy instruction and library services generally.  A little bit of knowledge about GIS and geospatial visualization goes a long way and is useful across a variety of disciplines, including the social sciences, business, the humanities, and environmental studies and sciences.  If you are into open data (who isn’t?) and you like maps and/or data visualization (who doesn’t?!), then it’s definitely worth learning about some tools and resources for working with geospatial information.

About GIS and Geospatial Data

Geographic Information Systems, or GIS, are software tools that enable visualizing and interpreting data (social, demographic, economic, political, topographic, spatial, natural resources, etc.) using maps and geospatial data. Often data are visualized using layers, where a base map (containing, for example, a political map of a city) or tiles are overlaid with shapes, data points, or choropleth shading. For example, in the map linked in note 1, a map of districts in Tokyo is overlaid with data points representing the number of seniors living in each area.1

You may be familiar with Google Earth, which has a lot of features similar to a GIS (but is arguably not really a GIS, due to its lack of the data analysis and query tools typically found in a fully featured GIS). You can download a free Pro version of Google Earth that enables you to import GIS data. GIS data can appear in a variety of formats, and while there isn’t space here to go into each of them, a few common formats you might come across include Shapefiles, KML, and GeoJSON.2  Shapefiles, as the name suggests, represent shapes (e.g., polygons) as layers of vector data that can be visualized in GIS programs and Google Earth Pro.  You may also come across KML (Keyhole Markup Language) files, an XML-based standard for representing geographic data that is commonly used with Google Earth and Google Maps.  GeoJSON is another format for representing geospatial information that is ideal for use with web services.  The various formats of GIS and geospatial data deserve a full post on their own, and I plan to write a follow-up post exploring some of these formats and how they are used in greater detail.
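To make the formats a little more concrete, here is a minimal, hand-written GeoJSON snippet describing a single point with one attached attribute. The coordinates and property values are invented for illustration; note that GeoJSON lists coordinates in longitude, latitude order.

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": {
        "type": "Point",
        "coordinates": [139.6917, 35.6895]
      },
      "properties": {
        "name": "Example location in Tokyo",
        "seniorPopulation": 1200
      }
    }
  ]
}

A small file like this can be loaded by most GIS tools and by the JavaScript mapping libraries discussed below.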

GIS/Geospatial Visualization Tools

ArcGIS (ESRI)

ArcGIS is arguably the industry standard for GIS software, and its maker, ESRI, publishes manuals and guides for GIS students and practitioners.  There are a few different ArcGIS products: ArcGIS for Desktop, ArcGIS Online, and ArcGIS Server.  Personally I am only familiar with ArcGIS Online, but you can do some pretty cool things with a totally free account, like create this map of where drones can and cannot fly in the United States.3

ArcGIS can be very powerful and is particularly useful for complex geospatial datasets and visualizations (particularly visualizations that might require multiple layers of data or topographic/geologic data). A note about signing up with ArcGIS Online: you don’t actually need to sign up for a ‘free trial’ to explore the software – you can just create a free account that, as I understand it, is not limited to a trial period.  Not all features may be available in the completely free account.

CartoDB

CartoDB is both an open source application and a freemium cloud service that can be used to make some pretty amazing geospatial visualizations that can be embedded in web pages, like this choropleth that visualizes the amount of various kinds of pollution across Los Angeles.4

CartoDB’s aesthetics are really strong, and the default map settings tend to be pretty gorgeous.  It also leverages Torque to enable animations (which is what’s behind the heatmap animation of this map showing Twitter activity related to Ferguson, MO over time).5  CartoDB can import Shapefiles, GeoJSON, and .csv files, and has a robust SQL API (built on PostgreSQL) that can be used to import and export data. CartoDB also has its own JavaScript library (CartoDB.js) that can be leveraged for building attractive custom apps.
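As a rough sketch of what using that SQL API might look like from the browser, a query returns JSON rows you can then feed into a visualization. The account and table names below are hypothetical, and you should confirm the endpoint pattern against CartoDB’s current SQL API documentation before relying on it:

var account = 'youraccount'; // hypothetical CartoDB account name
var sql = 'SELECT name, pollution_score FROM la_pollution LIMIT 10'; // hypothetical table
var url = 'https://' + account + '.cartodb.com/api/v2/sql?q=' + encodeURIComponent(sql);

var request = new XMLHttpRequest();
request.open('GET', url);
request.onload = function () {
  var data = JSON.parse(request.responseText);
  // data.rows is an array of objects keyed by column name
  console.log(data.rows);
};
request.send();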

More JavaScript Libraries

In addition to CartoDB.js mentioned above, there are lots of other flexible JavaScript libraries for mapping and geospatial visualization on the scene that can be leveraged for visualizing geospatial data:

  • OpenLayers – OpenLayers enables pulling in ‘tile’ layers as base maps from a variety of sources, as well as enabling parsing of vector data in a wide range of formats, such as GeoJSON and KML.
  • Leaflet.js – A fairly user-friendly and lightweight library used for creating basic interactive, mobile-friendly maps.  In my opinion, Leaflet is a good library to get started with if you’re just jumping into geospatial visualization (see the short sketch just after this list).
  • D3.js – Everyone’s favorite JavaScript charting library also has some geospatial visualization features for certain kinds of maps, such as this choropleth example.
  • Mapbox – Mapbox.js is a JavaScript API library built on top of Leaflet.js, but Mapbox also offers a suite of tools for more extensive mapping and geospatial visualization needs.
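To give a sense of how little code a basic interactive map takes, here is a minimal Leaflet sketch. It assumes the Leaflet CSS and JS files are already loaded on the page and that your tile provider’s terms allow this use; the coordinates and popup text are placeholders, not real data.

// The page needs an element like <div id="map" style="height: 400px"></div>
// plus the Leaflet stylesheet and script from leafletjs.com or a local copy.
var map = L.map('map').setView([35.6895, 139.6917], 11); // center on Tokyo

// Add an OpenStreetMap tile layer as the base map.
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);

// Overlay a single data point with a popup, standing in for a real dataset.
L.marker([35.6895, 139.6917])
  .addTo(map)
  .bindPopup('Example data point: senior population 1,200');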

Open Geospatial Data

Librarians wanting to integrate geospatial data visualization and GIS into interdisciplinary instruction can take advantage of open data sets that are increasingly available online. Sui (2014) notes that large data sets are increasingly being released freely and openly on the web, which is an exciting trend for GIS and open data enthusiasts. However, Sui also notes that the mere fact that data is legally released and made accessible “does not necessarily mean that data is usable (unless one has the technical expertise); thus they are not actually used at all.”6  Libraries could play a crucial role in helping users understand and interpret public data by integrating data visualization into information literacy instruction.

Some popular places to find open data that could be used in geospatial visualization include:

  • Data.gov -  Since 2009, Data.gov has published thousands of public open datasets, including datasets containing geographic and geospatial information.  As of this month, you can now open geospatial data files directly in CartoDB (requires a CartoDB account) to start making visualizations.  There isn’t a huge amount of geospatial data available currently, but Data.gov will hopefully benefit from initiatives like Project Open Data, which was launched in 2013 by the White House and designed to accelerate the publishing of open data sets by government agencies.
  • Google Public Data Explorer – This is a somewhat small set of public data that Google has gathered from other open data repositories (such as Eurostat) that can be directly visualized using Google charting tools.  For example, you could create a visualization of European population change by country using data available through the Public Data Explorer.  While the currently available data is pretty limited, Google has prepared a kind of open data metadata standard (Data Set Publishing Language, or DSPL) that might increase the availability of data through the explorer if the standard takes off.
  • publicdata.eu – The destination for Europe’s public open data. A nice feature of publicdata.eu is the ability to filter down to datasets that contain Shapefiles (.shp files), which can be directly imported into GIS software or Google Earth Pro.
  • OpenStreetMap (OSM) –  Open, crowdsourced street map data that can be downloaded or referenced to create basemaps or other geospatial visualizations that rely on transportation networks (roads, railways, walking paths, etc.).  OpenStreetMap data are open, so for those who would prefer to make applications that are based entirely on open data (rather than commercial solutions), OSM can be combined with JavaScript libraries like Leaflet.js for fully open geospatial applications.

GIS and Geospatial Visualization In the Library

I feel like I’ve only really scratched the surface with the possibilities for libraries to get involved with GIS and geospatial data.  Libraries are doing really exciting things with these technologies, whether it’s creating new ways of interacting with historical maps, lending GPS units, curating and preserving geospatial data, exploring geospatial linked data possibilities with GeoSPARQL or integrating GIS or geospatial visualization into information literacy / instruction programs.  For more ideas about integrating GIS and geospatial visualization into library instruction and services, check out these guides:

(EDIT 4/13) Also be sure to check out ALA’s Map and Geospatial Information Round Table (MAGIRT).  Thanks to Paige Andrew and Kathy Weimer for pointing out this awesome resource in the comments.

If you’re working on something awesome related to geospatial data in your library and would be interested in writing about it for ACRL TechConnect, contact me on Twitter @lpmagnuson or drop me a line in the comments!

Notes

  1. AtlasPublisher. Tokyo Senior Population. https://www.arcgis.com/home/webmap/viewer.html?webmap=6990a8c5e87b42ee80701cf985383d5d.  (Note:  Apologies if you have trouble seeing or zooming in on embedded visualizations in this post; the interaction behavior of these embedded iframes can be a little unpredictable if your cursor gets near them.  It’s definitely a drawback of embedding these interactive visualizations as iframes.)
  2. The Open Geospatial Consortium is an organization that gathers and shares information about geographic and geospatial data formats, and details about a variety of geospatial file formats and standards can be found on its website:  http://www.opengeospatial.org/.
  3. ESRI. A Nation of Drones. http://story.maps.arcgis.com/apps/MapSeries/?appid=79798a56715c4df183448cc5b7e1b999
  4. Lauder, Thomas Suh (2014). Pollution Burdens. http://graphics.latimes.com/responsivemap-pollution-burdens/.
  5. YMMV, but the performance of map animations that use Torque seems to be a little tricky, especially when embedded in an iFrame.  I tried to embed the Ferguson Twitter map into this post (because it is really cool looking), and it really slowed down page loading, and the script seemed to get stuck at times.
  6. Sui, Daniel. “Opportunities and Impediments for Open GIS.” Transactions in GIS, 18.1 (2014): 1-24.

A Video on Browser Extensions

I thought we’d try something new on ACRL TechConnect, so I recorded a fifteen-minute video discussing general use cases for browser extensions and some specifics of Google Chrome extensions.

The video mentions my WikipeDPLA post on this blog and walks through some slides I presented at a Code4Lib Northern California event.

If you’re looking for another good extension example in libraryland, Stephen Schor of New York Public Library recently wrote extensions for Chrome and Firefox that improve the appearance and utility of the Library of Congress’ EAD documentation. The Chrome extension uses the same content script approach as my silly example in the video. It’s a good demonstration of how you can customize a site you don’t control using the power of browser add-ons.
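If you want to experiment with the content script approach yourself, a minimal Chrome extension needs little more than a manifest and a script. The match pattern, file names, and styling below are illustrative placeholders rather than anything taken from the video or from NYPL’s extensions:

manifest.json:

{
  "manifest_version": 2,
  "name": "Example content script",
  "version": "0.1",
  "content_scripts": [
    {
      "matches": ["http://www.loc.gov/ead/*"],
      "js": ["content.js"]
    }
  ]
}

content.js:

// Runs automatically on any page matching the pattern above.
var banner = document.createElement('div');
banner.textContent = 'Restyled by my browser extension';
banner.style.background = '#ffffcc';
banner.style.padding = '0.5em';
document.body.insertBefore(banner, document.body.firstChild);

Load the folder containing these two files as an unpacked extension from Chrome’s extensions page, and the script will run on matching pages.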

Have you found a use case for browser add-ons at your library? Let us know in the comments!


Biohackerspace, DIYbio, and Libraries

“Demonstrating DNA extraction” on Flickr

What Is a Biohackerspace?

A biohackerspace is a community laboratory, open to the public, where people are encouraged to learn about and experiment with biotechnology. Like a makerspace, a biohackerspace provides people with tools that are usually not available at home. A makerspace offers making and machining tools such as a 3D printer, a CNC (computer numerically controlled) milling machine, a vinyl cutter, and a laser cutter. A biohackerspace, however, contains tools such as microscopes, Petri dishes, freezers, and PCR (Polymerase Chain Reaction) machines, which are often found in a wet lab setting. Some of these tools are unfamiliar to many. For example, a PCR machine amplifies a segment of DNA, creating many copies of a particular DNA sequence. A CNC milling machine carves, cuts, and drills materials such as wood, hard plastic, and metal according to a design entered into a computer. Both a makerspace and a biohackerspace provide individuals with access to tools that are usually cost-prohibitive to own.

Genspace in Brooklyn (http://genspace.org/), founded in 2010 by molecular biologist Ellen Jorgenson, was the first biohackerspace in the United States. Since then, more biohackerspaces have opened, such as BUGSS (Baltimore Underground Science Space, http://www.bugssonline.org/) in Baltimore, MD, BioLogik Labs (https://www.facebook.com/BiologikLabs) in Norfolk, VA, BioCurious in Sunnyvale, CA, Berkeley BioLabs (http://berkeleybiolabs.com/) in Berkeley, CA, Biotech and Beyond (http://biotechnbeyond.com/) in San Diego, CA, and BioHive (http://www.biohive.net/) in Seattle, WA.

What Do People Do in a Biohackerspace?

Just as people in a makerspace work with computer code, electronics, plastic, and other materials for DIY manufacturing, people in a biohackerspace tinker with bacteria, cells, and DNA. A biohackerspace allows people to tinker with and make biological things outside of the institutional biology lab setting. They can try activities such as splicing DNA or reprogramming bacteria.1 The projects that people pursue in a biohackerspace vary, ranging from making bacteria that glow in the dark to identifying the neighbor who fails to pick up after his or her dog. Surprisingly enough, these are not as difficult or complicated as we might imagine.2 Injecting a luminescent gene into bacteria can yield bacteria that glow in the dark. Comparing DNA collected from various samples of dog excrement and finding a match can lead to identifying the guilty neighbor’s dog.3 Other possible projects at a biohackerspace include finding out whether an organic food item from a supermarket is indeed organic, creating bacteria that will decompose plastic, or checking whether a certain risky gene is present in your body. An investigative journalist may use her or his biohacking skills to verify certain evidence. An environmentalist can measure the pollution level of her neighborhood and find out if a particular pollutant exceeds the legal limit.

Why Is a Biohackerspace Important?

A biohackerspace democratizes access to biotechnology equipment and space and enables users to share their findings. In this regard, a biohackerspace is comparable to the open-source movement in computer programming. Both allow people to solve the problems that matter to them. Instead of pursuing a scientific breakthrough, biohackers look for solutions to problems that are small but important. By contrast, large institutions, such as big pharmaceutical companies, may not necessarily pursue solutions to such problems if those solutions are not sufficiently profitable. For example, China experienced a major food safety incident in 2008 involving melamine-contaminated milk and infant formula. It costs thousands of dollars to test milk for the presence of melamine in a lab. After reading about the incident, Meredith Patterson, a notable biohacker who advocates citizen science, started working on an alternative test that would cost only a dollar and could be done in a home kitchen.4 To solve the problem, she planned to splice a glow-in-the-dark jellyfish gene into the bacteria that turn milk into yogurt and then add a biochemical sensor that detects melamine, all in her dining room. If milk turns green when combined with this mixture, it contains melamine.

The DIYbio movement refers to the new trend of individuals and communities studying molecular and synthetic biology and biotechnology without being formally affiliated with an academic or corporate institution.5 DIYbio enthusiasts pursue most of their projects as a hobby. Some of those projects, however, hold the potential to solve serious global problems. One example is the inexpensive melamine test for milk described above. Biopunk, a book by Marcus Wohlsen, describes another DIYbio effort: an affordable handheld thermal cycler that rapidly replicates DNA, intended as an inexpensive diagnostic tool for the developing world.6 Used in conjunction with a DNA-reading chip and a few vials containing primers for a variety of diseases, this device, called ‘LavaAmp’, can quickly identify diseases that break out in remote rural areas.

The DIYbio movement and biohackerspaces pioneer a new realm of science literacy: doing science. According to Meredith Patterson, scientific literacy is not understanding science but doing science. In her 2010 talk at the UCLA Center for Society and Genetics’ symposium, “Outlaw Biology? Public Participation in the Age of Big Bio,” Patterson argued that “scientific literacy empowers everyone who possesses it to be active contributors to their own health care; the quality of their food, water, and air; their very interactions with their own bodies and the complex world around them.”7

How Can Libraries Be Involved?

While not all librarians agree that a makerspace is an endeavor suitable for a library, more libraries have been creating makerspaces and offering makerspace-related programs for their patrons in recent years. Maker programs support hands-on learning in STEAM education and foster creative and innovative thinking through tinkering and prototyping activities. They also introduce new skills to students and the public, for whom opportunities to learn about these things are still rare. Those new skills – 3D modeling, 3D printing, and computer programming – enrich students’ learning experience, provide new teaching tools for instructors, and help adults find employment or start their own businesses. Those skills can also be used to solve everyday problems, such as creating an inexpensive prosthetic limb or custom parts needed to repair household items.

However, creating a makerspace or running a maker program in a library setting is not an easy task. Libraries often lack sufficient funding to purchase the varied equipment a makerspace requires, as well as staff who are capable of developing appropriate maker programs. This means that in order to create and operate a successful makerspace, a library must make a significant upfront investment in equipment and in staff education and training. For this reason, the importance of accurate needs assessment and of developing programs appropriate and useful to library patrons cannot be overemphasized.

A biohackerspace requires a wet laboratory setting, where chemicals, drugs, and a variety of biological matter are tested and analyzed in liquid solutions or volatile phases. Such a laboratory requires access to water, proper plumbing and ventilation, waste disposal, and biosafety protocols. Considering these issues, it will probably take a while for any library to set up a biohackerspace.

This should not dissuade libraries from being involved with biohackerspace-related activities, however. Instead of setting up a biohackerspace, libraries can invite speakers to talk about DIYbio and biohacking to raise awareness of this new area of making among library patrons. Libraries can also form partnerships with local biohackerspaces in a variety of ways. Libraries can co-host or cross-promote relevant programs at biohackerspaces and libraries to facilitate the cross-pollination of ideas. A library reading collection focused on biohacking could be greatly useful. Libraries can contribute their expertise in grant writing or donate old computing equipment to biohackerspaces. Libraries can also offer their expertise in digital publishing and archiving to help biohackerspaces publish and archive their project outcomes and research findings.

Is a Biohackerspace Safe?

The DIYbio movement recognized the potential risks in biohacking early on and created codes of conduct in 2011. The Ask a Biosafety Expert (ABE) service at DIYbio.org provides free biosafety advice from a panel of volunteer experts, along with many biosafety resources. Some biohackerspaces have an advisory board of professional scientists who review the projects that will take place in their spaces. Most biohackerspaces meet the Biosafety Level 1 criteria set out by the Centers for Disease Control and Prevention (CDC).

Democratization of Biotechnology

While the DIYbio movement and biohackerspaces are still in the early stages of development, they hold great potential to drive future innovation in biotechnology and the life sciences. The DIYbio movement and biohackerspaces try to transform ordinary people into citizen scientists, empower them to come up with solutions to everyday problems, and encourage them to share those solutions with one another. Not long ago, we had mainframe computers that were accessible only to a small number of professional computer scientists and locked up in academic or corporate labs. Now personal computers are ubiquitous, and many professional and amateur programmers know how to write code to make a personal computer do the things they would like it to do. Until recently, manufacturing was only possible on a large scale through factories. The many makerspaces that have opened in recent years, however, have made it possible for the public to create a model on a computer and 3D print a physical object based on that model at a much lower cost and on a much smaller scale. It remains to be seen whether the DIYbio movement and biohackerspaces will bring similar change to biotechnology.

Notes

  1. Boustead, Greg. “The Biohacking Hobbyist.” Seed, December 11, 2008. http://seedmagazine.com/content/article/the_biohacking_hobbyist/.
  2. Bloom, James. “The Geneticist in the Garage.” The Guardian, March 18, 2009. http://www.theguardian.com/technology/2009/mar/19/biohacking-genetics-research.
  3. Landrain, Thomas, Morgan Meyer, Ariel Martin Perez, and Remi Sussan. “Do-It-Yourself Biology: Challenges and Promises for an Open Science and Technology Movement.” Systems and Synthetic Biology 7, no. 3 (September 2013): 115–26. doi:10.1007/s11693-013-9116-4.
  4. Wohlsen, Marcus. Biopunk: Solving Biotech’s Biggest Problems in Kitchens and Garages. Penguin, 2011, p. 38-39.
  5. Jorgensen, Ellen D., and Daniel Grushkin. “Engage With, Don’t Fear, Community Labs.” Nature Medicine 17, no. 4 (2011): 411–411. doi:10.1038/nm0411-411.
  6. Wohlsen, Marcus. Biopunk: Solving Biotech’s Biggest Problems in Kitchens and Garages. Penguin, 2011. p. 56.
  7. A Biopunk Manifesto by Meredith Patterson, 2010. http://vimeo.com/18201825.

Using Grunt to Automate Repetitive Tasks

Riding a tangent from my previous post on web performance, here is an introduction to Grunt, a JavaScript task runner.

Why would we use Grunt? It’s become a common tool for web development as it puts together a number of tedious but necessary steps for optimizing a website. However, “task runner” is intentionally generic; Grunt isn’t limited to websites, and it can move and modify files in manifold ways.

Initial Setup

Unfortunately, virtually no operating system comes prepared to use Grunt out of the box. Grunt is written in Node.js and thus requires a few install steps. The good news is that Node is cross-platform; in my experience, it works better on Windows than most other programming frameworks.

  • Install Node
  • Ensure that the node and npm commands are on your path by running node --version and npm --version
    • If not, try a web search like “add node to path {{operating system}}”; fixing it takes at most editing a single line of a particular file
  • Install Grunt globally with npm install -g grunt-cli
  • Ensure Grunt is on your path (grunt --version) and, again, search for an answer if not
  • Inside your project, run npm install grunt to install a local copy of grunt (it’ll appear in a folder named “node_modules”)

We’re ready to run! Let’s do a basic example.

First Example: Basic Web App Optimization

Say we have a simple website: there’s an index.html page, a stylesheet in a “css” subfolder, and a script in a “js” subfolder. First, let’s define what we want to accomplish: keep a full-size, easily readable copy of all our code while also building minified versions of both the CSS and JS. To do this, we’ll use three plugins: cssmin, uglify, and copy. This whole example is available on GitHub; even if you don’t use git, you can download a zip archive of the files.

First, inside our project, we run npm install grunt-contrib-cssmin grunt-contrib-uglify grunt-contrib-copy. These plugins are now installed in the “node_modules” folder, but Grunt still needs to know how to use them. It needs to know what tasks manipulate what files, along with any other options. We provide this information in a file named “Gruntfile.js” in the root of our project. Here’s our initial one:

module.exports = function(grunt) {
    // this tells Grunt about the 3 plugins we installed 
    grunt.loadNpmTasks('grunt-contrib-cssmin');
    grunt.loadNpmTasks('grunt-contrib-uglify');
    grunt.loadNpmTasks('grunt-contrib-copy');
    grunt.initConfig({
        // our configuration will go here 
    });
    // we're going to want to run cssmin, uglify, & copy together 
    // so let's group them under a single "build" task 
    grunt.registerTask('build', ['cssmin', 'uglify', 'copy']);
};

For each task, there’ll be a section inside initConfig with its settings. For cssmin, the setup is nested a few layers but is really just a single line of code. Under a cssmin property we specify a target name, which can be anything. Targets allow us to have multiple configurations for a single task, which is handy in more complex projects but often unneeded. Under our target, which we’ll name “minify”, there’s a files property where we associate an array of input files with a single output file. This makes concatenating multiple stylesheets easy.

cssmin: {
  minify: {
    files: {
      'build/css/main.css': ['css/main.css']
    }
  }
}, // trailing comma since we'll add another section

Uglify’s setup is identical. We’ll name the target the same and only change the paths inside the files property.

uglify: {
  minify: {
    files: {
      'build/js/main.js': ['js/main.js']
    }
  }
}, // again, trailing comma, we add more below

While cssmin handles stylesheets and uglify handles JavaScript, our index.html only needs to be copied and not modified.1 See if you can write out the copy task’s settings by yourself, mimicking what we’ve already done.
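If you want to check your work, here is one possible configuration; it assumes we just want build/index.html to be an unchanged copy of index.html, and it mimics the target naming used above:

copy: {
  minify: {
    files: {
      'build/index.html': ['index.html']
    }
  }
}, // trailing comma if another section follows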

Now we run grunt build in our command prompt and some messages tell us about each task’s status. Look in the “build” folder which appears after we ran the command. Far smaller, optimized versions of our main CSS and JS files are in there.

Great! We’ve accomplished our goals. But we must run grunt build over and over each time we want to remake our optimized assets, switching between our code editor and command prompt each time. If we’re doing a lot of piecemeal editing, this is most annoying. Instead, let’s use another plugin by running npm install grunt-contrib-watch to get the “watch” task and load it with the line grunt.loadNpmTasks('grunt-contrib-watch').2 Then, write this configuration:

watch: {
  minify: {
    files: ['*.html', 'css/*.css', 'js/*.js'],
    tasks: ['build']
  }
}

Watch has just two intuitive parameters in its configuration: an array of files to watch and an array of tasks to execute when those files change. The asterisks are wildcards, so unlike our settings above this stanza isn’t dependent on exact file names. Now, by running grunt watch in our command prompt, optimized assets are magically constructed every time we save changes to a file. We can edit in peace without continually switching between the command line and our editor. Better yet, watch can work with a local development server to reload new versions of files upon every edit.3 The right combination of tasks can yield super efficient workflows where we edit and view results without worrying about optimizations made behind the scenes.
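As a small sketch of the live-reload setup mentioned here (and in note 3), grunt-contrib-watch accepts an options block; setting livereload to true starts a live-reload server, though your page or development server still needs the corresponding live-reload snippet for the browser to actually refresh:

watch: {
  minify: {
    options: {
      livereload: true // start a livereload server so the browser can refresh on changes
    },
    files: ['*.html', 'css/*.css', 'js/*.js'],
    tasks: ['build']
  }
}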

More Advanced: Portable Header

While the above is suitable for much small-scale web development, Grunt can handle far more complex situations. For example, I wrote a portable HTML header which can be inserted onto various vendor websites such as LibGuides or an A to Z list. The project’s Gruntfile is 159 lines long and makes use of ten Grunt plugins.

I won’t go into detail explaining how each Grunt task’s settings work, but I will outline what’s happening. A “sass” task compiles SCSS code into minified CSS that browsers can understand, using an external program. A couple of linting tools, jshint and scss-lint, check files against code quality heuristics. Our good friends copy and uglify are back doing their jobs, only this time they’re joined by htmlmin, which handles the index page. “String-replace” is an example of a multi-target task; its first target searches over a series of files for strings wrapped in double curly braces like “{{example}}” and swaps out these placeholders with values specified in another file. The second target takes the entire contents of a stylesheet and a script and inlines them into the main HTML.
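To make the multi-target idea more concrete, here is a rough sketch of what a “string-replace” configuration along those lines might look like. The paths, placeholder names, and replacement values are invented rather than copied from the actual project, and you should check grunt-string-replace’s readme for its exact options (it takes a replacements array of pattern/replacement pairs):

'string-replace': {
  // first target: swap {{placeholders}} for real values
  tpl: {
    files: { 'build/index.html': ['src/index.html'] },
    options: {
      replacements: [
        { pattern: /{{libraryName}}/g, replacement: 'My Library' }
      ]
    }
  },
  // second target: inline the already-minified CSS into the HTML;
  // a function defers reading the file until the task actually runs
  inline: {
    files: { 'build/index.html': ['build/index.html'] },
    options: {
      replacements: [
        {
          pattern: '<!-- inline-css -->',
          replacement: function () {
            return '<style>' + grunt.file.read('build/css/main.css') + '</style>';
          }
        }
      ]
    }
  }
}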

That’s a lot of labor being handled by computer programs instead of humans. While passing a couple of files through tools that remove comments and whitespace isn’t tough, the many steps in constructing an optimized HTML header from several files provide a good demonstration of Grunt’s value. While it took me some time to configure everything properly, the combined “build” task for this project has probably run hundreds of times and saved me hours of work. Not only that, but because of the linting and minification, the final product is doubtless of higher quality than I could have assembled manually.

The length and complexity of my Gruntfile point to one of the tougher pieces of using Grunt heavily: the order and delegated responsibilities of numerous tasks are tricky to coordinate properly. Throughout my Gruntfile, there are comments indicating when particular tasks run, because running them out of order would either be fruitless or cause an error. For instance, the “string-replace” task’s “inline” target must run after those other files have been minified, otherwise the minification serves no purpose (the inlined code would be full size).

Similarly, coordinating which tasks move which files has been a constant headache for me in many projects. While the “copy” task moves images to the build folder, the “tpl” target of the “string-replace” task moves everything else. But I could’ve also used the uglify or sass tasks to move files! Since every task can potentially move the files it operates upon, it’s difficult to keep track of where a file is at a particular time in the workflow. The best way to debug these issues is to run multi-task aliases like build one at a time; first run uglify, then run cssmin, then run htmlmin… checking the state of files in between each to make sure that changes are occurring as anticipated.

Use Cases Abound

I use Grunt in almost all my projects, whether they’re web development or not. I use Grunt to copy my shell customizations into place, so that when I’m working on a new one I can just run grunt watch and rely on the changes being synched into place. I use Grunt to build Chrome extensions which require extra packaging steps before they can be pushed into the Chrome Web Store. I use Grunt in our catalog’s customized pages to minify code and also to check for potential errors. As an additional step, Grunt can be hooked up to deploy processes such that once a successful build is made the new files are pushed off to a remote server.

Grunt can be used to construct almost any workflow from a series of discrete pieces: compiling some EAD finding aids into an HTML website via XSLT, or processing vendor MARC files with a PyMARC script and then uploading them into an ILS. Anything that can be scripted could be tied to Grunt tasks with a plug-in like grunt-exec, which executes arbitrary shell commands. However, there is a limit to what it’s sensible to do with Grunt. These last two examples are arguably better accomplished with shell scripts. Grunt is at its best when its great suite of plug-ins is relied upon, and those tend to perform web-specific tasks. Grunt also requires at least a modicum of comfort with coding. It falls into an odd space, because while the configuration file is indeed JavaScript, it reads like a series of lists of settings, files, and ordered tasks. If you have more complex needs that involve if-then conditions and custom scripts, a lot of Grunt’s utility is negated. On the other hand, for those who would rather avoid code and the command line, options like the CodeKit app make more sense.
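As a rough illustration of the grunt-exec approach just mentioned, the configuration is simply a set of named shell commands; the commands and file names below are placeholders, not working examples:

// requires: npm install grunt-exec and grunt.loadNpmTasks('grunt-exec')
exec: {
  transform_ead: {
    cmd: 'xsltproc ead-to-html.xsl finding-aid.xml > build/finding-aid.html'
  },
  process_marc: {
    cmd: 'python process_vendor_marc.py vendor-records.mrc'
  }
}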

The Grunt site’s Getting Started and Sample Gruntfile pages are helpful sources of documentation.

A Beginner’s Guide to Grunt: Redux — a nice, updated overview. Some of the steps here are unnecessary for beginners, however, as they require a lot of files and structure. That’s great for experienced developers in the long run, because everything is smaller and more modular, but too much setup for simple projects.

I find myself constantly consulting the readmes for various Grunt plugins to figure out how they work, since their options are not necessarily discoverable otherwise. A quick way to pop open the home page of a package is by running npm home grunt-contrib-uglify (inserting the plugin name of your choice), which will open the registered home page of the package, often on GitHub.

Finally, it’s worth mentioning Gulp, a competing JavaScript task runner. Gulp is the same type of tool as Grunt (you wouldn’t use both in a project) but follows a different design philosophy. In short, Gulp tends to run faster due to its design and setting it up looks more like code and less like a configuration file, which some people prefer.

Notes

  1. There’s actually another great plugin, grunt-contrib-htmlmin, which minifies HTML. Its settings are only a little bit more involved than the copy task, so trying to configure htmlmin would make another nice exercise for those wanting to build on this post.
  2. Writing grunt.loadNpmTasks for each task we add to a complex Gruntfile gets tiresome. It takes a bit more initial work—we need to run npm init before anything else, fill in some prompts, and append --save-dev to all your npm install commands—so I decided to skip it in this intro, but we can use load-grunt-tasks to get this down to a single line that doesn’t need to be updated each time a new plugin is added.
  3. The appropriate setting is options.livereload as documented here. While our scenario doesn’t quite capture how time-saving this can be, grunt watch shines when working with a language that compiles to CSS like SASS. A process like “edit SASS, compile CSS, reload web page, view changes” becomes simply “edit SASS, view changes” because the intermediary stages are triggered by grunt watch.

Trapped in the Glass Cage

Imagine this scenario: you don’t normally have a whole lot to do at your job. It’s a complex job, sure, but day-to-day you’re spending most of your time monitoring a computer and typing in data. But one day, something goes wrong. The computer fails. You are suddenly asked to perform basic job functions that the computer normally takes care of for you, and you don’t remember well how to do them. In the meantime, the computer is screaming at you about an error and asking for additional inputs. How well do you function?

The Glass Cage

In Nicholas Carr’s new book The Glass Cage, this scenario is the frightening result of malfunctions in airplanes, and in the cases he describes it results in crashes and massive loss of life. As librarians, we are thankfully not responsible on a daily basis for the lives of hundreds of people, but like pilots, we too have automated much of our work and depend on systems that we often have no control over. What happens when a database we rely on goes down–say, all OCLC services go down for a few hours in December when many students are trying to get a few last sources for their papers? Are we able to take over seamlessly from the machines in guiding students?

Carr is not against automation, nor indeed against technology in general, though this is a criticism frequently leveled at him. But he is against the uncritical abnegation of our faculties to technology companies. In his 2011 book The Shallows, he argues that offloading memory to the internet and apps makes us more shallow, distractible thinkers. While I didn’t buy all his arguments (after all, Socrates didn’t approve of offloading memory to writing since it would make us all shallow, distractible thinkers), it was thought-provoking. In The Glass Cage, he focuses on automation specifically, using autopilot technologies as the focal point–“the glass cage” is the name pilots use for cockpits since they are surrounded by screens. Besides the danger of not knowing what to do when the automated systems fail, we create potentially more dangerous situations by not paying attention to what choices automated systems make. As Carr writes, “If we don’t understand the commercial, political, intellectual, and ethical motivations of the people writing our software, or the limitations inherent in automated data processing, we open ourselves to manipulation.”1

We have automated many mundane functions of library operation that have no real effect, or a positive effect. For instance, no longer do students sign out books by writing their names on paper cards which are filed away in drawers. While some mourn for the lost history of who had out the book–or even the romance novel scenario of meeting the other person who checks out the same books–by tracking checkouts in a secure computerized system we can keep better track of where books are, as well as maintain privacy by not showing who has checked out each book. And when the checkout system goes down, it is easy to figure out how to keep things going in the interim. We can understand on an instinctual level how such a system works and what it does. Like a traditional computerized library catalog, we know more or less how data gets in the system, and how data gets out. We have more access points to the data, but it still follows its paper counterpart in creation and structure.

Over the past decade, however, we have moved away more and more from those traditional systems. We want to provide students with systems that align with their (and our) experience outside libraries. Discovery layers take traditional library data and transform it with indexes and algorithms to create a new, easier way to find research material. If traditional automated systems, like autopilot systems, removed the physical effort of moving between card catalogs, print indexes, and microfilm machines, these new systems remove much of the mental effort of determining where to search for each type of information and the particular skills needed to search the relevant database. That is surely a useful and good development. When one is immersed in a research question, the system shouldn’t get in the way.

Dr. Screen

That said, the nearly wholesale adoption of discovery systems provided by vendors leaves academic librarians in an awkward position. We can find a parallel in medicine. Carr relates the rush into electronic medical records (EMR) starting in 2004 with the Health Information Technology Adoption Initiative. This meant huge amounts of money available for digitizing records, as well as a huge windfall for health information companies. An early study by the RAND Corporation (funded in part by those health information companies) indicated enormous promise for electronic medical records to save money and improve care.2 But in actual fact, these systems did not do everything they were supposed to do. All the data that was supposed to be easy to share between providers was locked up in proprietary systems.3 In addition, other studies showed that these systems did not merely substitute automated record-keeping for manual record-keeping; they changed the way medicine was practiced.4 EMR systems provide additional functions beyond note-taking, such as checklists and prompts with suggestions for questions and tests, which in turn create additional and more costly bills, test requests, and prescriptions.5 The EMR systems change the dynamic between doctor and patient as well. The systems encourage the use of boilerplate text that lacks the personalized story of an individual patient, and the inability to flip through pages tends to diminish the long view of a patient’s entire medical history.6 The presence of the computer in the room and the constant multitasking of typing notes into a computer mean that doctors cannot be fully present with the patient.7 With the constant presence of the EMR and its checklists, warnings, and prompts, doctors lose the ability to gain intuition and new understandings that the EMR could never provide.8

The reference librarian’s interaction with patrons is not all that different from the doctor’s with patients (though as with pilots, the stakes are usually quite different). We work one on one with people on problems that are often undefined or misunderstood at the beginning of the interaction, and work towards a solution through conversation and cursory examinations of resources. We either provide the resource that solves the problem (e.g., the prescription), or make sure the patron has the tools available to solve the problem over time (e.g., diet and exercise recommendations). We need to use subtle cues of body language and tone of voice to see how things are going, and use instinctive knowledge to understand if there is a deeper but unexpressed problem. We need our tools at hand to work with patrons, but we need to be present and use our own experience and judgment in knowing the appropriate tool to use. That means that we have to understand how the tool we have works, and ideally have some way of controlling it. Unfortunately, that has not always been the case with vendor discovery systems. We are at the mercy of the system, and reactions to this vary. Some people avoid it at all costs and won’t teach with the discovery system, which means that students are even less likely to use it, preferring the easier-to-reach, if less robust, Google search. Or, if students do use it, they may still miss out on the benefits of having academic librarians available–people who have spent years developing domain knowledge and knowledge of the best resources available at the library, which can’t be replaced by an algorithm. Furthermore, the vendor platforms and content only interoperate to the extent the vendors are willing to work together, and many of them have a disincentive to do so since they want their own index to come out on top.

Enter the ODI

Just as doctors may have given up some of their professional ability and autonomy to proprietary databases of patient information, academic librarians seem to have done something similar with discovery systems. But the NISO Open Discovery Initiative (ODI) has the potential to make the black box more transparent. This group has been working for two years to develop a set of practices that aim to make some aspects of discovery consistent across providers, and so give customers and users more control in understanding what they are seeing and ensuring that indexes are complete. The Recommended Practice addresses some (but not all) major concerns with discovery service platforms. Essentially it covers requirements for metadata that content providers must provide to discovery service providers and to libraries, as well as best practices for content providers and discovery service providers. The required core metadata is followed by optional “enriched” content–keywords, abstract, and full text (though the ODI makes it clear that including these is important, and one might argue that the abstract is essential).9 Discovery service providers are in turn strongly encouraged to make clear to their customers what content their indexes hold, along with the metadata required for this. Discovery service providers should follow suggested practices to ensure “fair linking”, specifically to not use business relationships as a ranking or ordering consideration, and to allow libraries to set their own preferences about choice of providers and wording for links. ODI also suggests a fairly simple set of usage statistics that should be provided and exactly what they should measure.10

While this all sets a good baseline, what is out of scope for ODI is equally important. It “does not address issues related to performance or features of the discovery services, as these are inherently business and design decisions guided by competitive market forces.”11 Performance and features include the user interface and experience, the relevancy ranking algorithms, APIs, specific mechanisms for fair linking, and data exchange (which is covered by other protocols). The last section of the Recommended Practice covers some of those in “Recommended Next Steps”. One item that jumps out is “on-demand lookup by discovery service users”,12 which suggests that users should be able to query the discovery service to determine “…whether or not a particular collection, journal, or book is included in the indexed content”13–seemingly the very goal of discovery in the first place.

“Automation of Intellect”

We know that many users only look at the first page of results for the resource they want. If we don’t know what results should be there, or how they got there, we are leaving users at the mercy of the tool. Disclosure of relevancy rankings is a major piece of transparency that ODI leaves out, and without understanding or controlling that piece of discovery, I think academic librarians are still caught in the trap of the glass cage–or have become the chauffeur in the age of the self-driving car. This has been happening in all professional fields as machine learning algorithms and the processing power to crunch big data sets improve. Medicine, finance, law, business, and information technology itself have been increasingly automated as software can run algorithms to analyze scenarios that in the past would have required a senior practitioner.14 So what’s the problem with this? If humans are fallible (and research shows that experts are equally if not more fallible), why let them touch anything? Carr argues that “what makes us smart is not our ability to pull facts from documents.…It’s our ability to make sense of things…”15 We can grow to trust the automated system’s algorithms beyond our own experience and judgment, and lose the possibility of novel insights.16

This is not to say that discovery systems do not solve major problems or that libraries should not use them. They do, and as much as is practical, libraries should make discovery as easy as possible. But as this ODI Recommended Practice makes clear, much remains a secret business decision for discovery service vendors, and thus something over which academic librarians can exercise control only through their dollars in choosing a platform and their advocacy in working with vendors to ensure they understand the system and that it does what they need.

Notes

  1. Nicholas Carr, The Glass Cage: Automation and Us (New York: Norton, 2014), 208.
  2. Carr, 93.
  3. Carr, 95.
  4. Carr, 97.
  5. Carr, 98.
  6. Carr, 101-102.
  7. Carr, 103.
  8. Carr, 105-106.
  9.  National Information Standards Organization (NISO) Open Discovery Initiative (ODI) Working Group, Open Discovery Initiative: Promoting Transparency in Discovery (Baltimore: NISO, 2014): 25-26.
  10. NISO ODI, 25-27.
  11. NISO ODI, 3.
  12. NISO ODI, 32.
  13. NISO ODI, 32.
  14. Carr, 115-117.
  15. Carr, 121.
  16. Carr, 124.

Themes from the Knight Foundation’s News Challenge Grant Competition

This year’s Knight Foundation News Challenge project grant competition focused on the premise that libraries are “key for improving Americans’ ability to know about and to be involved with what takes place around them.”  While the Knight Foundation’s mission traditionally focuses on journalism and media, the Foundation has funded several projects related to libraries in the past, such as projects enabling Chicago Public Library and New York Public Library to lend Wi-Fi hot spots to patrons, Jason Griffey’s LibraryBox Project, and a DPLA project to clarify intellectual property rights for libraries sharing digital materials.  Winners receive a portion of around $5 million in funding, and typical awards range between $200,000 and $400,000, with some additional funding available for smaller (~$35,000) start-up projects.

The News Challenge this year called for ideas that “leverage libraries as a platform to build more knowledgeable communities”.  There were over 600 proposals in this year’s News Challenge (all of which you can find here), and 42 proposals made it into the semi-finals, with winners to be announced January 30th, 2015.  While not all of the proposed ideas specifically involve digital technologies, most propose the creation of some kind of application or space where library users can access and learn about new technologies and digital skills.

Notably, while the competition was open to anyone (not just libraries or librarians), around 30 of the 42 finalist projects have at least one librarian or library organization sponsor on the team.1  Nearly all the projects involve the intersection of multiple disciplines, including history, journalism, engineering, education/instructional design, music, and computer science.  Three general themes seem to have emerged from this year’s News Challenge finalists:  1) maker spaces tailored to specific community needs; 2) libraries innovating new ways to publish, curate, and lend DRM-free ebooks and other content; and 3) facilitating the preservation of born-digital user-generated histories.

MakerSpaces Made for Communities

Given how popular makerspaces have become in libraries2 over the past several years, it’s not surprising to see that many News Challenge proposals seek funding for the space and/or equipment to create library makerspaces.  What is interesting is that many of these proposals highlight the need to develop makerspaces that are specifically relevant to the interests and needs of the local communities a library serves.

For example, one proposal out of Philadelphia – Libraries as Hip-Hop Techspace – proposes equipping a library space with tools for learning about and creating digital music via hip-hop.  Another proposal out of Vermont focuses on teaching digital literacies and skillsets via makerspace technologies in rural communities.  Of course, many proposals include 3D printers, but what stands out about the proposals that have made it through to the finals (like this one that proposes 3D-printed books for blind children, or this one that proposes a tiny makerspace for a small community) is their emphasis on using 3D printing and associated technologies in innovative ways that are still meaningful to specific community learning needs and interests.

One theme that runs through these maker-related proposals, whether they come from cities or rural locations, is the relevance of these spaces to the business and economic needs of the communities in which they are proposed.  Several entries point out the potential for library maker-spaces to be entrepreneurial incubators designed to enable users to develop, prototype, and market their own software or other products.  Many of these proposals specifically mention how the current “digital divide” is, at its heart, an economic divide – such as this maker space proposal from San Jose, which argues that “In order for the next generation of Silicon Valley’s leaders to rise from its neighborhoods, access to industry knowledge and tools should begin early to inspire participation, experimentation, and collaboration – essential skills of the thriving economy in the area.”  Creativity and experimentation are increasingly essential skills in the labor force, and these proposals highlight how a library’s role in fostering creative expression ultimately provides an economic benefit.

Publishing and Promoting DRM-Free Content

Like maker spaces, libraries serving as publishers is not a new trend.3  What is interesting about the Knight Foundation proposals that center on this topic are the projects that position libraries as publishers of content that would otherwise struggle to get published in the current marketplace, while recognizing the desire for libraries to more easily lend digital content to their users.

A great example of this is a proposal by the developers of JukePop to leverage “libraries’ ebook catalogs so they become THE publishing platform for indie authors looking to be discovered by the most avid readers.”  JukePop is a publishing platform for indie authors that enables distribution of ebooks – often in serial form – to readers who can provide feedback to authors.  Downloads from library websites are DRM-free.4 Other proposals – like this proposal for improving the licensing models of indie games – also have a core value of figuring out new ways to streamline user access to content while providing benefits and compensation to creators.

A somewhat different – and super interesting – manifestation of this theme is the Gitenberg project, which proposes utilizing GitHub as a platform for version control to enable libraries to contribute to improving digital manuscripts and metadata for Project Gutenberg.  The project already has a repository that you can fork and submit pull requests to in order to improve Project Gutenberg data.
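For anyone curious what contributing looks like in practice, here is a minimal sketch of the standard GitHub fork-and-pull-request workflow; the repository name, branch name, and commit message below are placeholders for illustration, not actual GITenberg details.

    # Fork the repository on GitHub, then clone your fork locally
    git clone https://github.com/YOUR-USERNAME/Some-Gutenberg-Text.git
    cd Some-Gutenberg-Text
    # Make corrections on a topic branch
    git checkout -b fix-ocr-typos
    # ...edit the text or metadata files, then stage and commit...
    git add .
    git commit -m "Correct OCR errors in chapter 3"
    # Push the branch to your fork and open a pull request on GitHub
    git push origin fix-ocr-typos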

Telling Community Stories through Born-Digital Media Preservation

Libraries and archives have always held the role of preserving the cultural heritage of the communities they serve, but they face challenges in easily gathering, preserving, and curating born-digital media.  While individual users capture huge amounts of media on mobile devices, as the Open Archive proposal puts it, “the most common destination for this media is on social media platforms that can chill free speech and are not committed to privacy, authentication, or long-term preservation.”  StoryCorps, for example, proposes the creation of better tools and distributed education for libraries and archives to record and preserve diverse community stories through interviews.  CurateScape “seeks to remake public life through a distributed and participatory network of digital storytelling,” emphasizing the need for technology that can be adopted by libraries to document local community histories.  The Internet Archive proposes more streamlined tools and frameworks to enable users to more easily submit content for preservation to archive.org.  The Open Archive project, also associated with the Internet Archive, emphasizes empowering users through their local libraries and archives to capture and submit their media to community collections.  Recognizing that there is still an enormous amount of pre-digital content waiting to be preserved, New York Public Library seeks to “democratize the digitization process” through mobile and distributed digitization tools.

The element that seems to motivate many of these proposals is an emphasis on unique, community-centric collections that tell a story.  They focus on the importance of local institutions as connection points for users to share their content, and their experiences, with the world, while also documenting the context surrounding those experiences.  Libraries and archives are uniquely positioned as the logical place to document local histories in the interest of long-term preservation, but need better solutions.  It’s definitely exciting to see the energy behind these proposals and the innovative solutions that are on the horizon.

Start Thinking about Next Year!

Full disclosure:  I was part of a team that submitted a proposal to the News Challenge, and while we’ve been notified it won’t be funded, it was a fantastic learning experience.  I definitely made some great connections with other Challenge participants (many tweets have been exchanged!) – and it’s not surprising to me that even in a competition, library people find ways to work together and collaborate.  While winners for this year’s competition won’t be announced until January 30th, and the specific “challenge” prompt for the next competition won’t be identified until late next year (and may not be specifically library-related), I would definitely encourage anyone with a good, relevant idea to think about applying to next year’s News Challenge.

There’s also some interesting stuff to be found in the “Inspiration” section of the News Challenge, where people could submit articles, discussions, and ideas relevant to this year’s Challenge.  One of the articles linked to in that section is the transcript of a 2013 speech by author Neil Gaiman, which features a lot of lovely bits of motivation for every library enthusiast – like this one, which I think captures a central thread running through almost every News Challenge proposal:  “A library is a place that is a repository of information and gives every citizen equal access to it… It’s a community space. It’s a place of safety, a haven from the world. It’s a place with librarians in it. What the libraries of the future will be like is something we should be imagining now.”

Notes

  1. The projects that do not have direct library/librarian involvement listed on the project team – including “Cowbird” (https://newschallenge.org/challenge/libraries/evaluation/building-libraries-of-human-experience-transforming-america-s-libraries-into-community-storytelling-centers-for-the-digital-age) and “Libraries as Hip-Hop TechSpace” (https://newschallenge.org/challenge/libraries/refinement/libraries-as-hip-hop-techspace) – are definitely interesting to read.  I do think they provide a unique perspective on how the ethos and mission of libraries are perceived by those not necessarily embedded in the profession – and I’m encouraged that these proposals do show a fairly clear understanding of the directions libraries are moving in.
  2. http://acrl.ala.org/techconnect/?p=2340
  3. See http://lj.libraryjournal.com/2014/03/publishing/the-public-library-as-publisher and http://dx.doi.org/10.3998/3336451.0017.207
  4. Santa Clara County Library district is a development partner with JukePop – you can check out their selection of JukePop-sourced DRM-free titles available here:  http://www.sccl.org/Browse/eBooks-Downloads/Episodic-Fiction

This is How I Work (Bryan J. Brown)

Editor’s Note: This post is part of ACRL TechConnect’s series by our regular and guest authors about The Setup of our work.

 

After being tagged by Eric Phetteplace, I was pleased to discover that I had been invited to take part in the “This is How I Work” series. I love seeing how other people view work and office life, so I’m happy to see this trend make it to the library world.

Name: Bryan J. Brown (@bryjbrown)

Location: Tallahassee, Florida, United States

Current Gig: Web Developer, Technology and Digital Scholarship, Florida State University Libraries

Current Mobile Device: Samsung Galaxy Note 3 w/ OtterBox Defender cover (just like Becky Yoose!). It’s too big to fit into my pants pocket comfortably, but I love it so much. I don’t really like tablets, so having a gigantic phone is a nice middle ground.

Current Computer: 15 inch MacBook Pro w/ 8GB of RAM. I’m a Linux person at heart, but when work offers you a free MBP you don’t turn it down. I also use a Thunderbolt display in my office for dual-screen action.

Current Tablet: 3rd gen. iPad, but I don’t use it much these days. I bought it for reading books, but I strongly prefer to read them on my phone or laptop instead. The iPad just feels huge and awkward to hold.

One word that best describes how you work: Structured. I do my best when I stay within the confines of a strict system and/or routine that I’ve created for myself; it helps me keep the chaos of the universe at bay.

What apps/software/tools can’t you live without?

Unixy stuff:

  • Bash: I’ve tried a few other shells (tcsh, zsh, fish), but none have inspired me to switch.
  • Vim: I use this for everything, even journal entries and grocery lists. I have *some* customizations, but it’s pretty much stock (except I love my snippets plugin).
  • tmux: Like GNU Screen, but better.
  • Vagrant: The idea of throwaway virtual machines has changed the way I approach development. I do all my work inside Vagrant machines now. When I eventually fudge things, I can just run ‘vagrant destroy’ and pretend it never happened! (See the sketch just after this list.)
  • Git: Another game changer. I shouldn’t have waited so long to learn about version control. Git has saved my bacon countless times.
  • Anaconda: I’m a Python fan, but I like Python 3 and the scientific packages. Most systems only have Python 2, and a lot of the scientific packages fail to build for obscure reasons. Anaconda takes care of all that nonsense and allows you to have the best, most current Python goodness on any platform. I find it very comforting to know that I can use my favorite language and packages everywhere no matter what.
  • Todo.txt-CLI: A command line interface to the Todo.txt system, which I am madly in love with. If you set it to save your list to Dropbox, you can manage it from other devices, too. My work life revolves around my to-do list which I mostly manage at my laptop with Todo.txt-CLI.
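As promised above, here is a minimal sketch of the throwaway-VM workflow with Vagrant; the base box name is just an example, and any Vagrant box would do.

    # Create a Vagrantfile pointing at a base box, then boot it and log in
    vagrant init hashicorp/precise64
    vagrant up
    vagrant ssh
    # ...experiment freely inside the VM...
    # When things get too messy, throw the machine away and start fresh
    vagrant destroy -f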

Other:

  • Dropbox: Keeping my stuff in order across machines is a godsend. All my most important files are kept in Dropbox so I can always get to them, and being able to put things in a public folder and share the URL is just awesome.
  • Google Drive: I prefer Dropbox for plain storage, but the ability to write documents/spreadsheets/drawings/surveys at will, store them in the cloud, share them with coworkers, and have them write along with you is too cool. I can’t imagine working in a pre-Drive world.
  • Trello: I only recently discovered Trello, but now I use it for everything at work. It’s the best thing for keeping a group of people on track with a large project, and moving cards around is strangely satisfying. Also you can put rocket stickers on cards.
  • Quicksilver for Mac: I love keyboard shortcuts. A lot. Quicksilver is a Mac app for setting up keyboard shortcuts for everything. All my favorite apps have hotkeys now.
  • Todo.txt for Android: A nice mobile interface for the Todo.txt system. One of the few apps I’ve paid money for, but I don’t regret it.
  • Plain.txt for Android: This one is kind of hard to explain until you use it. It’s a mobile text editor for taking notes that get saved in Dropbox, which is useful in more ways than you can imagine. Plain.txt is my mobile interface to the treasure trove of notes I usually write in Vim on my laptop. I keep everything from meeting notes to recipes (as well as the previously mentioned grocery lists and journal entries) in it. Second only to Todo.txt in helping me stay sane.

What’s your workspace like?

My office is one of my favorite places. A door I can shut, a big whiteboard and lots of books and snacks. Who could ask for more? I’m trying out the whole “standing desk” thing, and slowly getting used to it (but it *does* take some getting used to). My desk is multi-level (it came from a media lab that no longer exists where it held all kinds of video editing equipment), so I have my laptop on a stand and my second monitor on the level above it so that I can comfortably look slightly down to see the laptop or slightly up to see the big display.


What’s your best time-saving trick?

Break big, scary, complicated tasks into smaller ones that are easy to do. It makes it easier to get started and stay on track, which almost always results in getting the big scary thing done way faster than you thought you would.

What’s your favorite to-do list manager?

I am religious about my use of Todo.txt, whether from the command line or with my phone. It’s my mental anchor, and I am obsessive about keeping it clean and not letting things linger for too long. I prioritize things as A (get done today), B (get done this week), C (get done soon), and D (no deadline).

I’m getting into Scrum lately, so my current workflow is to make a list of everything I want to finish this week (my sprint) and mark them as B priority (my sprint backlog, either moving C tasks to B or adding new ones in manually). Then, each morning I pick out the things from the B list that I want to get done today and I move them to A. If some of the A things are complicated I break them into smaller chunks. I then race myself to see if I can get them all done before the end of the day. It turns boring day-to-day stuff into a game, and if I win I let myself have a big bowl of ice cream.
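As a rough illustration of that daily routine using Todo.txt-CLI commands (the item numbers and task text here are hypothetical), the re-prioritizing looks something like this:

    # See this week's sprint backlog (everything at priority B)
    todo.sh listpri B
    # Promote today's picks to priority A (item numbers are hypothetical)
    todo.sh pri 12 A
    todo.sh pri 17 A
    # Break a complicated task into smaller ones, then check them off as the day goes on
    todo.sh add "Draft outline for discovery post"
    todo.sh do 12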

Besides your phone and computer, what gadget can’t you live without?

Probably a nice, comfy pair of over-the-ear headphones. I hate earbuds; they sound thin and let in all the noise around you. I need something that totally covers my ears to block out the outside world and put me in a sonic vacuum.

What everyday thing are you better at than everyone else?

I guess I’m pretty good at the whole “Inbox Zero” thing. I check my email once in the morning and delete/reply/move everything accordingly until there’s nothing left, which usually takes around 15 minutes. Once you get into the habit it’s easy to stay on top.

What are you currently reading?

  • The Information by James Gleick. I’m reading it for Club Bibli/o, a library technology book club. We just started, so you can still join if you like!
  • Pro Drupal 7 Development by Todd Tomlinson and John K. VanDyk. FSU Libraries is a Drupal shop, so this is my bread and butter. Or at least it will be once I get over the insane learning curve.
  • Buddhism Plain and Simple by Steve Hagen. The name says it all, Steve Hagen is great at presenting the core parts of Buddhism that actually help you deal with things without all the one hand clapping nonsense.

What do you listen to while you work?

Classic ambient artists like Brian Eno and Harold Budd are great when I’m in a peaceful, relaxed place, and I’ll listen to classical/jazz if I’m feeling creative. Most of the time though it’s metal, which is great for decimating to-do lists. If I really need to focus on something, any kind of music can be distracting so I just play static from simplynoise.com. This blocks all the sound outside my office and puts me in the zone.

Are you more of an introvert or an extrovert?

Introvert for sure. I can be sociable when I need to, but my office is my sanctuary. I really value having a place where I can shut the door and recharge my social batteries.

What’s your sleep routine like?

I’ve been an early bird by necessity since grad school; the morning is the best time to get things done. I usually wake up around 4:30am so I can hit the gym when it opens at 5am (I love having the whole place to myself). I start getting tired around 8pm, so I’m usually fast asleep by 10pm.

Fill in the blank: I’d love to see _________ answer these same questions.

Richard Stallman. I bet he’d have some fun answers.

What’s the best advice you’ve ever received?

Do your best. As simple as it sounds, it’s a surprisingly powerful statement. Obviously you can’t do *better* than your best, and if you try your best and fail then there’s nothing to regret. If you just do the best job you can at any given moment, you’ll have the best life you can. There are lots of philosophical loopholes buried in that perspective, but it’s worked for me so far.


How I Work (Margaret Heller)

Editor’s Note: This post is part of ACRL TechConnect’s series by our regular and guest authors about The Setup of our work.

 

Margaret Heller, @margaret_heller

Location: Chicago, IL

Current Gig: Digital Services Librarian, Loyola University Chicago

Current Mobile Device: iPhone 5s. It took me years and years of thinking to finally buy a smartphone, and I did it mainly because my iPod Touch and my only slightly smart phone were both dying, so the iPhone could replace both.

Current Computer:

Work: Standard issue Dell running Windows 7, with two monitors.

Home: Home built running Windows 7, in need of an upgrade that I will get around to someday.

Current Tablet: iPad 3, which I use constantly. One useful tip is that I have the Adobe Connect, GoToMeeting, Google Hangouts, and Lync apps, which really help with participating in video calls and webinars from anywhere.

One word that best describes how you work: Tenaciously

What apps/software/tools can’t you live without?

Outlook and Lync are my main methods of communicating with other library staff. I love working at a place where IMing people is the norm. I use these both on desktop and on my phone and tablet. I love that a recent upgrade means that we can listen to voice mails in our email.

Firefox is my normal work web browser. I tend to use Chrome at home. The main reason for the difference is synced bookmarks. I have moved my bookmarks between browsers so many times that I have some of the original sites I bookmarked when I first used Netscape in the late 90s. Needless to say, very few of the sites still exist, but it reminds me of old hobbies and interests. I also don’t need the login to stream shows from my DVR in my bookmark toolbar at work.

Evernote I use for taking meeting notes, conference notes, recipes, etc. I usually have it open all day at work.

Notepad++ is where I do most of my code writing.

OpenRefine is my favored tool for bulk editing metadata; I use it closely alongside Excel, since I need Excel to get data into our institutional repository.

Filezilla is my favored FTP client.

WriteMonkey is the distraction free writing environment I use on my desktop computer (and how I am writing this post). I use Editorial on my iPad.

Spotify and iTunes for music and podcasts.

RescueTime for staying on track with work–I get an email every Sunday night so I can swear off social media for the next week. (It lasts about a day).

FocusBooster makes a great Pomodoro timer.

Zotero is my constant lifesaver when I can’t remember how to cite something, and the only way I stay on track with writing posts for ACRL TechConnect.

Feedly is my RSS reader, and most of the time I stay on top of it.

Instapaper is key to actually reading rather than skimming articles, though of course I am always behind on it.

Box (and Box Sync) is our institutional cloud file storage service, and I use it extensively for all my collaborative projects.

Asana is how we keep track of ongoing projects in the department, and I use it for prioritizing personal projects as well.

What’s your workspace like?: A large room in the basement with two people full time, and assorted student workers working on the scanner. We have pieces of computers sitting around, though we moved out an old server rack that was taking up space. (Servers are no longer located in the library but in the campus data centers). My favorite feature is the whiteboard wall behind my desk, which provides enough space to sketch out ideas in progress.

I have a few personal items in the office: a tea towel from the Bodleian Library in Oxford, a reproduction of an antique map of France, Belgium, & Holland, a photo of a fiddlehead fern opening, and small stone frogs to rearrange while I am talking on the phone. I also have a photo of my baby looking at a book, though he’s so much bigger now that I need to add additional photos of him. My desk has an in tray, an out tray, and a book-cart-shaped business card holder I got at a long-ago ALA conference. I am a big proponent of a clean desk; the later in the semester it gets, the more likely I am to have extra papers around, but an empty desk is important to my focus.

There’s usually a lot going on in here and no natural light, so I go outside to work in the summer, or sometimes just to another floor in the building to enjoy the lake view and think through problems.

What’s your best time-saving trick?: Document and schedule routine tasks so I don’t forget steps or when to take care of them. I also have a lot of rules and shortcuts set up in my email so I can process email very quickly and not work out of my inbox. Learn the keyboard shortcuts! I can mainly get through Gmail without touching the mouse and it’s great.

What’s your favorite to-do list manager?: Remember the Milk is how I manage tasks. I’ve been using it for years for Getting Things Done. I pay for it, and so currently have access to the new version, which is amazing, but I am sworn to secrecy about its appearance and features. I have a Google Doc presentation I use for Getting Things Done weekly reviews, but I just started using an Asana project to track all my ongoing projects in one place without overwhelming Remember the Milk or the Google Doc. It tells me I currently have 74 projects. A few more have come in that I haven’t added yet.

Besides your phone and computer, what gadget can’t you live without?: For a few more weeks, my breast pump, which I am not crazy about, but it makes the hard choices of parenting a little bit easier. I used to not be able to live without my Nook until I cut my commute from an hour on the train to a 20 minute walk, so now I need earbuds for the walk. I am partial to Pilot G2 pens, which I use all the time for writing ideas on scrap paper.

What everyday thing are you better at than everyone else?: Keeping my senses of humor and perspective available for problem solving.

What are you currently reading?: How to be a Victorian by Ruth Goodman (among other things). So far I have learned how Victorians washed themselves, and it makes me grateful for central heating.

What do you listen to while you work?: Podcasts (Roderick on the Line is required listening), mainly when I am doing work that doesn’t require a lot of focus. I listen mostly to full albums on Spotify (I have a paid account), though occasionally will try a playlist if I can’t decide what to listen to. But I much prefer complete albums, and try to stay on top of new releases as well as old favorites.

Are you more of an introvert or an extrovert?: A shy extrovert, though I think I should be an introvert based on the popular perception. I do genuinely like seeing other people, and get restless if I am alone for too long.

What’s your sleep routine like?: I try hard to get in bed at 9:30, but by 10 at the latest. Or ok, maybe 10:15. Awake at 6 or whenever the baby wakes up. (He mostly sleeps through the night, but sometimes I am up with him at 4 until he falls asleep again). I do love sleeping though, so chances to sleep in are always welcome.

Fill in the blank: I’d love to see _________ answer these same questions. Occasional guest author Andromeda Yelton.

What’s the best advice you’ve ever received?: You are only asked to be yourself. Figure out how you can best help the world, and work towards that rather than comparing yourself to others. People can adjust to nearly any circumstance, so don’t be afraid to try new things.


This Is How I Work (Nadaleen Tempelman-Kluit)

Editor’s Note: This post is part of ACRL TechConnect’s series by our regular and guest authors about The Setup of our work.

 

Nadaleen Tempelman-Kluit @nadaleen

Location: New York, NY

Current Gig: Head, User Experience (UX), New York University Libraries

Current Mobile Device: iPhone 6

Current Computer:

Work: 13-inch MacBook Pro and Apple 27-inch Thunderbolt Display

Old Dell PC that I use solely to print and to access our networked resources

Home:

I carry my laptop to and from work with me and have an old MacBook Pro at home.

Current Tablet: First generation iPad, supplied by work

One word that best describes how you work: has anyone said frenetic yet?

What apps/software/tools can’t you live without?

Communication / Workflow

Slack is the UX department’s communication tool, where all of our communication takes place, including instant messaging. We create topic channels in which we add links, tools, and thoughts, and get notified when people add items. We rarely use email for internal communication.

Boomerang for Gmail-I write a lot of emails early in the morning, so I can schedule them to be sent at different times of the day without forgetting.

Pivotal Tracker-A user story-based project planning tool built on agile software development methods. We start with user flows, then integrate them into bite-size user stories in Pivotal, and then point them for development.

Google Drive

Gmail

Google Hangouts-We work closely with our Abu Dhabi and Shanghai campus libraries, so we do a lot of early morning and late night meetings using Google Hangouts (or GoToMeeting, below) to include everyone.

Wireframing, IA, Mockups

Sketch: A great lightweight design app

OmniGraffle: A heavier-duty tool for wireframing, IA work, mockups, etc. Compatible with a ton of stencil libraries, including the great Konigi (LINK) and Google Material Design icons. Great for interactive interface demos, and for user flows and personas (link)

Adobe Creative Cloud

Post-it notes, graph paper, whiteboard, dry-erase markers, Sharpies, flip boards

Tools for User-Centered Testing / Methods

GoToMeeting-To broadcast formal usability testing to observers in another room, so they can take notes, view the testing in real time, and relay follow-up questions for the facilitator to ask participants.

Crazy Egg-A heat-mapping, hot-spotting, and A/B testing tool which, when coupled with analytics, really helps us get a picture of where users are going on our site.

Silverback-A screen-capturing usability testing app.

Post-it Plus – We do a lot of affinity grouping exercises and interface sketches using Post-it notes, so this app is super cool and handy.

OptimalSort-Online card sorting software.

Personas-To think through our user flows when working through a process, service, or interface. We then use these personas to create more granular user stories in Pivotal Tracker (above).

What’s your workspace like?

I’m on the mezzanine of Bobst Library which is right across from Washington Square Park. I have a pretty big office with a window overlooking the walkway between Bobst and the Stern School of Business.

I have a huge old subway map on one wall with an original heavy wood frame, and everyone likes looking at old subway lines, etc. I also have a map sheet of the mountain I’m named after. Otherwise, it’s all white board and I’ve added our personas to the wall as well so I can think through user stories by quickly scanning and selecting a relevant persona.

I’m in an area where many of my colleagues’ mailboxes are, so people stop by a lot. I close my door when I need to concentrate, and on Fridays we try to work collaboratively in a basement conference room with a huge whiteboard.

I have a heavy wooden L shaped desk which I am trying to replace with a standing desk.

Every morning I go to Oren’s, a great coffee shop nearby, with the same colleague and friend, and we usually do “loops” around Washington Square Park to problem solve and give work advice. It’s a great way to start the day.

What’s your best time-saving trick?

Informal (but not happenstance) communication saves so much time in the long run and helps alleviate potential issues that can arise when people aren’t communicating. Though it takes a few minutes, I try to touch base with people regularly.

What’s your favorite to-do list manager?

My whiteboard, supplemented by stickies (mac), and my huge flip chart notepad with my wish list on it. Completed items get transferred to a “leaderboard.”

Besides your phone and computer, what gadget can’t you live without?

Headphones

What everyday thing are you better at than everyone else?

I don’t think I do things better than other people, but I think my everyday strengths include: encouraging and mentoring, thinking up ideas and potential solutions, getting excited about other people’s ideas, approaching issues creatively, and dusting myself off.

What are you currently reading?

I listen to audiobooks and podcasts on my bike commute. Among my favorites:

In print, I’m currently reading:

What do you listen to while at work?

Classical is the only type of music I can play while working and still be able to (mostly) concentrate. So I listen to the masters, like Bach, Mozart, and Tchaikovsky.

When we work collaboratively on creative things that don’t require earnest concentration, I defer to one of the team to pick the playlist. Otherwise, I’d always pick Josh Ritter.

Are you more of an introvert or an extrovert?

Mostly an introvert who fakes being an extrovert at work, but as other authors have said (Eric, Nicholas), it’s very dependent on the situation and the company.

What’s your sleep routine like?

Early to bed, early to rise. I get up between 5 and 6 and go to bed around 10.

Fill in the blank: I’d love to see _________ answer these same questions.

@Morville (Peter Morville)

@leahbuley (Leah Buley)

What’s the best advice you’ve ever received?

Show up