How to Price 3D Printing Service Fees

Many libraries today provide a 3D printing service, but not all of them can afford to do so for free. While free 3D printing may be ideal, it can jeopardize the sustainability of the service over time. Even so, many libraries worry about charging service fees.

In this post, I will outline how I determined the pricing scheme for our library’s new 3D printing service, in the hope that more libraries will consider offering 3D printing if having to charge a fee is what is stopping them. But let me begin with libraries’ general aversion to fees.

A 3D printer in action at the Health Sciences and Human Services Library (HS/HSL), Univ. of Maryland, Baltimore

Service Fees Are Not Your Enemy

Charging fees for a library service is not something librarians should regard as taboo. We live in times in which libraries are asked to create and provide more and more new and innovative services to help users successfully navigate the fast-changing information landscape. A makerspace and 3D printing are certainly among those new and innovative services. But at many libraries, the operating budget is shrinking rather than growing. So the most sensible choice in this situation is to aim for cost recovery.

Remember that even when a library aims for cost recovery, it will only be partial cost recovery, because a great deal of staff time and expertise goes into planning and operating such new services. Libraries should not be afraid to introduce new services that require service fees, because users will still benefit from those services, often far more than from a commercial equivalent (if one exists). Think of service fees as your friend. Without them, you won’t be able to introduce and continue to provide a service that your users need. They are an expected business cost, and libraries will not make a profit from them (even if they try).

Still bothered? Almost every library charges for regular (paper) printing. Should a library not provide printing service at all because it cannot be offered for free? Library users certainly wouldn’t want that.

Determining Your Service Fees

What do you need in order to create a pricing scheme for your library’s 3D printing service?

(a) First, list all cost-incurring factors. These include (i) the equipment cost and wear and tear, (ii) electricity, (iii) staff time and expertise for support and maintenance, and (iv) consumables such as 3D print filament and painter’s tape. Remember that your new 3D printer will not last forever and will need to be replaced in three to five years.

Also, some of these cost-incurring factors, such as staff time and expertise for support, are fixed per 3D print job. Others, such as 3D print filament, increase in proportion to the size and density of the model being printed: the larger and denser a 3D print model is, the more filament it uses and the more cost it incurs.
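To make this distinction concrete, here is a minimal sketch in PHP (the language used for the code examples later in this post) of how a cost model combines the two kinds of factors. The dollar figures are placeholders; they happen to match the ones derived later in this post.

<?php
// A minimal sketch of a 3D printing cost model with two kinds of factors:
// a fixed cost incurred by every job, and a variable cost that grows with
// the amount of filament used. The figures are placeholders.
$fixedCostPerJob = 2.00;      // staff time, wear and tear, electricity, etc.
$filamentCostPerGram = 0.06;  // grows with the size and density of the model

function estimateJobCost($gramsOfFilament, $fixedCostPerJob, $filamentCostPerGram)
{
    return $fixedCostPerJob + ($gramsOfFilament * $filamentCostPerGram);
}

// Example: a 50 gm model would need about $2 + 50 x $0.06 = $5.00 to recover.
echo estimateJobCost(50.0, $fixedCostPerJob, $filamentCostPerGram); // 5
?>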

(b) Second, make sure that your pricing scheme is readily understood by users. Does it quickly give users a rough idea of the cost before their 3D print job begins? An obscure pricing scheme can confuse users and may deter them from trying out a new service. That would be a poor user experience.

Also, in 3D printing, consider whether you will charge for a failed print. Perhaps you will; perhaps you won’t. Maybe you want to charge a fee lower than that for a successful print. Whichever you decide on, have it covered, since failed prints will certainly happen.

(c) Lastly, the pricing scheme should be easy for library staff to handle. The more staff are involved in the entire process of a patron using the 3D printing service from beginning to end, the more important this becomes. If the pricing scheme is difficult for staff to work with when they need to charge for and process each 3D print job, the new 3D printing service will increase their workload significantly.

Which staff will be responsible for which step of the new service? What exactly will they need to do? For example, several staff members at the circulation desk may need to learn and handle new tasks involving the 3D printing service, such as labeling and putting away completed 3D models, processing payment transactions, delivering models, and marking paid 3D print jobs as ‘completed’ in the 3D Printing Staff Admin Portal, if such a system is in place. Below is a screenshot of the HS/HSL 3D Printing Staff Admin Portal, developed in-house by the library IT team.

The HS/HSL 3D Printing Staff Admin Portal, University of Maryland, Baltimore

Examples – 3D Printing Service Fees

It’s always helpful to see what other libraries are doing when you need to determine your own pricing scheme. Here are some examples showing how ten libraries’ 3D printing pricing schemes changed over three recent years.

  • UNR DeLaMare Library
    • https://guides.library.unr.edu/3dprinting
    • 2014 – $7.20 per cubic inch of modeling material (raised to $8.45 starting July 2014).
    • 2017 – uPrint – Model Material: $4.95 per cubic inch (=16.38 gm=0.036 lb)
    • 2017 – uPrint – Support Materials: $7.75 per cubic inch
  • NCSU Hunt Library
    • https://www.lib.ncsu.edu/do/3d-printing
    • 2014 – uPrint 3D Printer: $10 per cubic inch of material (ABS), with a $5 minimum
    • 2014 – MakerBot 3D Printer: $0.35 per gram of material (PLA), with a $5 minimum
    • 2017 – uPrint – $10 per cubic inch of material, $5 minimum
    • 2017 – F306 – $0.35 per gram of material, $5 minimum
  • Southern Illinois University Library
    • http://libguides.siue.edu/3D/request
    • 2014 – Originally $2 per hour of printing time; Reduced to $1 as the demand grew.
    • 2017 – LulzBot TAZ 5, LulzBot Mini – $2.00 per hour of printing time.
  • BYU Library
  • University of Michigan Library
    • The Cube 3D printer checkout is no longer offered.
    • 2017 – Fee-based professional 3D printing service; open-access 3D printing is free.
  • GVSU Library
  • University of Tennessee, Chattanooga Library
  • Port Washington Public Library
  • Miami University
    • 2014 – $0.20 per gram of the finished print; 2017 – ?
  • UCLA Library, Dalhousie University Library (2014)
    • Free

Types of 3D Printing Service Fees

From the examples above, you will notice that many 3D printing service fee schemes are based upon the weight of the 3D-printed model. This is because these libraries are trying to recover the cost of the 3D filament, and the amount of filament used is most accurately reflected in the weight of the resulting 3D-printed model.

However, the weight-based 3D printing pricing scheme has a few problems. First, the cost is not readily calculable by a user before the print job, because the user would have to weigh a model that s/he won’t have until it is 3D-printed. Second, once a model is printed, staff have to weigh it and calculate the cost. This is time-consuming and inefficient.

For this reason, my library considered an alternative pricing scheme based on the size of a 3D model. The idea was that we would have three empty boxes of roughly different sizes – small, medium, and large – with a different price assigned to each. The smallest box that a user’s 3D-printed object fit into would determine how much the user paid for it. This seemed like a great idea because, compared with the weight-based scheme, it makes it easy for both users and library staff to determine how much a model will cost to print.

Unfortunately, this size-based pricing scheme has a few significant flaws. First, a smaller model may use more filament than a larger one if it is denser (that is, has a higher infill ratio). Second, depending on the shape of a model, a model that fits in a large box may use much less filament than one that fits in a small box. Think of a large tree model with thin branches, compared with a 100% filled, compact baseball model that fits into a smaller box than the tree does. Third, the print resolution, which determines the layer height, can change the amount of filament used even when the same model is printed.

Different infill ratios – Image from https://www.packtpub.com/sites/default/files/Article-Images/9888OS_02_22.png

Charging Based upon the 3D Printing Time

So we couldn’t go with the size-based pricing scheme, but we did not like the problems of the weight-based pricing scheme, either. As an alternative, we decided on a time-based pricing scheme: printing time is proportional to the amount of filament used, but it does not require staff to weigh each model. 3D printing software gives an estimate of the printing time, and most 3D printers also display the actual printing time for each model printed.

First, we wanted to confirm the hypothesis that 3D printing time and the weight of the resulting model are proportional to each other. I tested this by calculating the grams of filament used per minute of printing, based upon the estimated printing time and estimated weight of several cube models. Here is the result I got using the MakerBot Replicator 2X (a small script reproducing this check follows the list).

  • 9.10 gm / 36 min = 0.25 gm per min
  • 17.48 gm / 67 min = 0.26 gm per min
  • 30.80 gm / 117 min = 0.26 gm per min
  • 50.75 gm / 186 min = 0.27 gm per min
  • 87.53 gm / 316 min = 0.28 gm per min
  • 194.18 gm / 674 min = 0.29 gm per min
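
If you would like to repeat this check with your own printer’s estimates, here is a minimal PHP sketch that computes the grams-per-minute ratio for each test print and their average; the figures below are the cube measurements listed above.

<?php
// Estimated weight (gm) and printing time (min) for several cube models,
// as measured on the MakerBot Replicator 2X and listed above.
$testPrints = array(
    array('weight' => 9.10,   'minutes' => 36),
    array('weight' => 17.48,  'minutes' => 67),
    array('weight' => 30.80,  'minutes' => 117),
    array('weight' => 50.75,  'minutes' => 186),
    array('weight' => 87.53,  'minutes' => 316),
    array('weight' => 194.18, 'minutes' => 674),
);

// If filament use is roughly proportional to printing time,
// these ratios should cluster around a single value.
$ratios = array();
foreach ($testPrints as $print) {
    $ratio = $print['weight'] / $print['minutes'];
    $ratios[] = $ratio;
    printf("%.2f gm / %d min = %.2f gm per min\n",
        $print['weight'], $print['minutes'], $ratio);
}

printf("Average: %.2f gm per min\n", array_sum($ratios) / count($ratios)); // ~0.27
?>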

There is some variance, but the hypothesis holds up. Based upon this, let’s now calculate the 3D printing cost by time.

3D plastic filament from MakerBot costs $48 for ABS/PLA and $65 for dissolvable filament per 0.90 kg (2.00 lb) spool. That works out to about $0.05 per gram for ABS/PLA and $0.07 per gram for dissolvable filament, so the 3D filament cost is roughly 6 cents per gram on average.

Finalizing the Service Fee for 3D Printing

For an hour of 3D printing time, the amount of filament used would be 15.6 gm (0.26 gm per min x 60 min). This gives a filament cost of about 94 cents per hour of 3D printing (15.6 gm x 6 cents). So, for cost recovery of the filament alone, I get roughly $1 per hour of 3D printing time.

Earlier, I mentioned that filament is only one of the cost-incurring factors for the 3D printing service. It’s time to bring in those other factors, such as hardware wear and tear, staff time, electricity, and maintenance, plus the “no charge for failed prints” policy that our library adopted. These other factors add a fixed amount per 3D print job; at my library, this came out to about $2. (I will not go into detail about how these were determined, because they will differ at each library.) So, the final service fee for our new 3D printing service was set at $3 for up to one hour of 3D printing, plus $1 per additional hour. The $3 breaks down into $1 per hour of 3D printing to cover the filament cost and a $2 fixed cost for every 3D print job.

To help our users quickly get an idea of how much their 3D print job will cost, we added a feature to the HS/HSL 3D Print Job Submission Form online. It automatically calculates and displays the final cost based upon the printing time estimate that a user enters.
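
Our form’s actual code is not shown here, but the calculation it performs is simple. Below is a minimal PHP sketch of the fee formula described above, assuming the user supplies a printing time estimate in minutes and that any partial additional hour is rounded up to a full hour (the rounding rule is my assumption; your library may prefer prorating).

<?php
// A minimal sketch of the time-based fee described above:
// $3 covers up to the first hour of printing ($2 fixed cost per job
// plus $1 of filament), and each additional hour adds $1.
function calculate3DPrintingFee($estimatedMinutes)
{
    $baseFee = 3.00;     // $2 fixed cost + $1 filament for the first hour
    $hourlyRate = 1.00;  // filament cost per additional hour

    if ($estimatedMinutes <= 60) {
        return $baseFee;
    }

    // Round any partial additional hour up to a full hour (an assumption).
    $additionalHours = ceil(($estimatedMinutes - 60) / 60);

    return $baseFee + ($additionalHours * $hourlyRate);
}

// Example: a job estimated at 150 minutes costs $3 + 2 x $1 = $5.
printf("$%.2f\n", calculate3DPrintingFee(150)); // prints $5.00
?>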


The HS/HSL 3D Print Job Submission form, University of Maryland, Baltimore

Don’t Be Afraid of Service Fees

I would like to emphasize that libraries should not be afraid to set service fees for new services. As long as the fees are easy to understand and staff can explain the reasons behind them, they should not deter a library from introducing and providing a new, innovative service.

There are clear benefits to working through all the cost-incurring factors and communicating how the final pricing scheme was determined (including verifying the hypothesis that 3D printing time and the weight of the resulting model are proportional) to all library staff involved in the new 3D printing service. If a library user inquires about or challenges the service fee, staff will be able to provide a reasonable explanation on the spot.

I implemented this pricing scheme at the same time as the launch of my library’s makerspace (the HS/HSL Innovation Space at the University of Maryland, Baltimore – http://www.hshsl.umaryland.edu/services/ispace/) back in April 2015. We have been providing 3D printing service and charging for it for more than two years. I am happy to report that during that entire time, we have not received a single complaint about the service fee. No library user expected our new 3D printing service to be free, and all the comments we received about the fee were positive. Many expressed surprise at how inexpensive our 3D printing service is and thanked us for it.

To summarize, libraries should be willing to explore and offer new, innovative services even when they require service fees. If you do so, make sure that the resulting pricing scheme is (a) sustainable and accountable, (b) readily graspable by users, and (c) easily handled by the library staff who will process the payment transactions. Good luck, and happy 3D printing at your library!

An example model with the 3D printing cost and the filament info displayed at the HS/HSL, University of Maryland, Baltimore


Hosting a Coding Challenge in the Library

In Fall of 2016, the city of Los Angeles held a 2-week “Innovate LA” event intended to celebrate innovation and creativity within the LA region.  Dozens of organizations around Los Angeles held events during Innovate LA to showcase and provide resources for making, invention, and application development.  As part of this event, the library at California State University, Northridge developed and hosted two weeks of coding challenges, designed to introduce novice coders to basic development using existing tutorials. Coders were rewarded with digital badges distributed by the application Credly.

The primary organization of the events came out of the library’s Creative Media Studio, a space designed to facilitate audio and video production as well as experimentation with emerging technologies such as 3D printing and virtual reality. Users can work with computers and recording equipment in the space and can check out media production devices, such as camcorders, green screens, GoPros, and more. Our aim was to provide a fun, very low-stress way to learn about coding, to give new coders time to get hands-on help with coding tutorials, and generally to celebrate how enjoyable coding can be. While anyone was welcome to join, our marketing efforts specifically focused on students, with coding challenges distributed daily throughout the Innovate LA period through Facebook.

The Challenges

The coding challenges were sourced from existing coding tutorial sites such as Free Code Camp, Learn Ruby, and Codecademy. We wanted to offer a mix of front-end and server-side coding challenges, starting with HTML, CSS, and JavaScript and ramping up to PHP, Python, and Ruby. We tested out several free tutorials and chose the ones with the most straightforward instructions that provided immediate feedback about incorrect code. We also tried to keep the interfaces consistent, using Free Code Camp most frequently so participants could get used to the interface and focus on coding rather than on the tutorial mechanism itself.

Here’s a list of the challenges and their corresponding badges earned:

(Each entry lists the challenge or challenges, followed by the badge received.)

  • Say Hello to HTML Elements; Headline with the H2 Element; Inform with the Paragraph Element – HTML Ninja
  • Change the Color of Text; Use CSS Selectors to Style Elements; Use a CSS Class to Style an Element – CSS Ninja
  • Use Responsive Design with Bootstrap Fluid Containers; Make Images Mobile Responsive; Center Text with Bootstrap – Bootstrapper
  • Comment your JavaScript Code; Declare JavaScript Variables; Storing Values with the Assignment Operator – JavaScript Hacker
  • Learn how Script Tags and Document Ready Work; Target HTML Elements with Selectors Using jQuery; Target Elements by Class Using jQuery – jQuery Ninja
  • Uncomment HTML; Comment out HTML; Fill in the Blank with Placeholder Text – HTML Master
  • Style Multiple Elements with a CSS Class; Change the Font Size of an Element; Set the Font Family of an Element – CSS Master
  • Create a Bootstrap Button; Create a Block Element Bootstrap Button; Taste the Bootstrap Button Color Rainbow – Bootstrap Master
  • Getting Started and Cat/Dog – JS Game Maker
  • Target Elements by ID Using jQuery; Delete your jQuery Functions; Target the same element with multiple jQuery selectors – jQuery Master
  • Hello World; Variables and Types; Lists – Python Ninja
  • Hello World; Variables and Types; Simple Arrays – PHP Ninja
  • Hello World; Variables and Types; Math – Ruby Ninja
  • How to Use APIs with JavaScript (complete through Step 9: Authentication and API Keys) – API Ninja
  • Edit or create a Wikipedia page (you may join the Wikipedia Edit-a-thon or edit remotely; the Citation Hunt tool is an easy way to find a page to improve: narrow it to a topic that interests you and make an edit) – WikiWiz
  • Create a 3D model of an original animated character (TinkerCAD and Blender are free options, or use SolidWorks or AutoCAD if you are familiar with them; if you don’t know where to begin, TinkerCAD has step-by-step tutorials to bring your ideas to life) – 3D Designer
  • Get a selfie with a Google Cardboard or any other virtual reality goggles – VR Explorer

Note that the final three challenges (editing a Wikipedia page, creating a 3D model, and experimenting with Google Cardboard or other virtual reality (VR) goggles) are not coding challenges, but we wanted to use the opportunity to promote some of the other services the Creative Media Studio provides. Conveniently, the library was hosting a Wikipedia Edit-a-thon during the same period as the coding challenges, so it made sense to leverage both events as part of our Innovate LA programming.

The coding challenges and instructions were distributed via Facebook, and we also held “office hours” (complete with snacks) in one of the library’s computer labs to provide assistance with completing the challenges. The office hours were mostly informal, with two library staff members available to walk users through completing and submitting the challenges. One special office-hours session was planned with a guest professor from our Cinema and Television Arts program, who helped users with a web-based game-making tutorial he had designed. This partnership was very successful, and that session had the highest attendance of any we offered. In future iterations of this event, more advance planning would enable us to partner with additional faculty members and feature tutorials they already use effectively with students in their curriculum.

Credly

We needed a way both to accept submissions documenting completion of the coding challenges and to award digital badges. Originally we investigated distributing digital badges through our campus learning management system, as some learning management systems, like Moodle, are capable of awarding digital badges. There were a couple of problems with this: 1) we wanted the event to be open to anyone, including members of the community who wouldn’t have access to the learning management system, and 2) the digital badge capability hadn’t been activated in our campus’ instance of Moodle. Another route we considered was accepting submissions for completed challenges through the university’s Portfolium application, which has a fairly robust ability to accept submissions of completed work, but again, that wouldn’t allow anyone from outside the university to participate. Credly seemed like an easy, efficient way both to accept submissions and to award badges that could also be embedded in third-party applications, such as LinkedIn. Since we hosted the competition in 2016, the capability to integrate Credly badges into Portfolium has been made available.

Credly lets you either design your badges using Credly’s Badge Builder or upload your own badge designs. Luckily, we had access to amazing student designers Katie Pappace, Rose Rieux, and Eva Cohen, who custom-created our badges using Adobe Illustrator. A Credly account for the library’s Creative Media Studio was created to issue the badges, and Credly “Credits” were defined using the custom badge designs for each of the coding skills for which we wanted to award badges.

When you design a credit in Credly and enable others to claim it, you have several options. You can require a claim code, which users must submit in order to claim the credit. Claim codes are useful if you want to award badges based not on evidence (like a file submission) but on participation or attendance at an event, where you distribute the claim code to attendees. When claim codes are required, you can also set approval of submissions to be automatic, so that anyone with a claim code automatically receives their badge. We didn’t require a claim code, and instead required evidence to be submitted.

When requiring evidence, you can configure which types of evidence are appropriate for receiving the badge. Choices for evidence submission include a URL, a document (Word, text, or PDF), an image, an audio file, a video file, or an open text submission. As users completed code challenges, we asked for screenshots (images) as evidence of completion for most challenges. We reviewed all submissions to ensure they were correct, but by requiring screenshots, we could easily see whether or not the tutorial itself had “passed” the code submission.

Awards

Credly makes it easy to count the number of badges earned by each participant. From those numbers, we were able to determine the top badge earners and award them prizes. All participants, even those with a single badge, received buttons of each of their earned badges. In addition to the virtual and physical badges, the participants with the greatest number of badges were rewarded with prizes: the top five received gift cards, and the grand prize winner also got a 3D-printed trophy designed in Tinkercad, with their photo incorporated into the trophy as a lithophane. A low-stakes award ceremony was held for all contestants and winners. The top awards were highly coveted, and the ceremony was a good opportunity for students to meet others interested in coding and STEM.

Lessons Learned

Our first attempt at hosting coding challenges in the library taught us a few things.  First, taking a screenshot is definitely not a skill most participants started out with – the majority of initial questions we received from participants were not related to coding, but rather involved how to take a screenshot of their completed code to submit to Credly.  For future events, we’ll definitely make sure to include step-by-step instructions for taking screenshots on both PC and Mac with each challenge, or consider an alternative method of collecting submissions (e.g., copying and pasting code as a text submission into Credly).  It’s still important to not assume that copying and pasting text from a screen is a skill that all participants will have.

As noted above, planning ahead would enable us to reach out and partner with faculty more effectively, and possibly coordinate coding challenges with curriculum. A few months before the coding challenges, we did reach out to computer science faculty, cinema and television arts faculty, and other faculty who teach curriculum involving code, but if we had reached out much earlier (e.g., the semester before), we likely would have been able to garner more faculty involvement. Faculty schedules are so jam-packed, and often set so far in advance, that at least six months of notice is definitely appreciated.

Only about 10% of coding challenge participants came to coding office hours regularly, but that enabled us to provide tailored, one-on-one assistance to our novice coders.  A good portion of understanding how to get started with coding and application development is not related to syntax, but involves larger questions about how applications work:  if I wanted to make a website, where would my code go?  How does a URL figure out where my website code is?  How does a browser understand and render code?  What’s the difference between JavaScript (client-side code) and PHP (server-side code), and why are they different?  These were the types of questions we really enjoyed answering with participants during office hours.  Having fewer, more targeted office hours — where open questions are certainly encouraged, but where participants know the office hours are focused on particular topics — makes attending the office hours more worthwhile, and I think gives novice coders the language to ask questions they may not know they have.

One small bit of feedback that was personally rewarding for the authors: at one of our office hours, a young woman came up to us and asked whether we were the planners of the coding challenges. When we said yes, she told us how excited she was (and a bit surprised) to see women involved with coding and development. She asked us several questions about our jobs and how we got involved with careers relating to technology. That interaction indicated to us that future outreach could potentially focus on promoting coding to women specifically, or on hosting coding office hours that enable mentoring for women coders on campus, modeling (or joining up with) Women Who Code networks.

If you’re interested in hosting support for coding activities or challenges in your library, a great resource to get started with is Hour of Code, which promotes holding one-hour introductions to coding and computer science, particularly during Computer Science Education Week. Hour of Code provides tutorials, resources for hosts, promotional materials, and more. This year, Hour of Code week / Computer Science Education Week will be December 4-10, 2017, so start planning now!


Data Refuge and the Role of Libraries

Society is always changing. For some, the change can seem slow and frustrating, while for others it may feel as though the change occurred in the blink of an eye. What is this change that I speak of? It can be anything…civil rights, autonomous cars, or national leaders. One change that no one ever seems particularly prepared for, however, is when a website link becomes broken. One day you can click a link and get to a site; the next day you get a 404 error. Sometimes this occurs because a site was migrated to a new server and the link was not redirected. Sometimes this occurs because the owner ceased to maintain the site. And sometimes, this occurs for less benign reasons.

Information access via the Internet is an activity that many (but not all) of us engage in every day, sometimes unconsciously: checking the weather, reading email, receiving news alerts. We also use the Internet to make datasets and other sources of information widely available. Individuals, universities, corporations, and governments share data and information in this way. Under the Obama administration, the Open Government Initiative led to the development of Project Open Data and data.gov. Federal agencies started looking at ways to make information sharing easier, especially in areas where the data are unique.

One area of unique data is climate science. Since climate data is captured on a specific day, at a specific time, and under certain conditions, it can never be truly reproduced. It will never be January XX, 2017 again. With these constraints, climate data can be thought of as fragile: the copies that we have are the only records that we have. Much of our nation’s climate data has been captured by research groups at institutes, universities, and government labs and agencies. During the election, much of Donald Trump’s rhetoric was rooted in the belief that climate change is a hoax. Upon his election, Trump tapped Scott Pruitt, who has fought many of the EPA’s attempts to regulate pollution, to lead the EPA. This, along with other messages from the new administration, has raised alarm within the scientific community that the United States may repeat the actions of the Harper administration in Canada, which literally threw away thousands of items from federal libraries that were deemed out of scope, through a process criticized as not transparent.

In an effort to safeguard and preserve this data, the Penn Program in Environmental Humanities (PPEH) helped organize a collaborative project called Data Refuge. This project requires the expertise of scientists, librarians, archivists, and programmers to organize, document, and back up data that is distributed across federal agencies’ websites. Maintaining the integrity of the data while ensuring its re-usability are paramount concerns, and areas where librarians and archivists must work hand in glove with the programmers (sometimes one and the same) who are writing the code to pull, duplicate, and push content. Wired magazine recently covered one of the Data Refuge events and detailed the way that the group worked together, even though much of the process is driven by individual actions.

In order to capture as much of this data as possible, the Data Refuge project relies on groups of people organizing around this topic across the country. The PPEH site details the requirements for hosting a successful DataRescue event and has a Toolkit to help promote and document the event. There is also a survey that you can use to nominate climate or environmental data to be part of the Data Refuge. Not in a position to organize an event? Don’t like people? You can also work on your own! An interesting observation from the “work on your own” page is the option to nominate any “downloadable data that is vulnerable and valuable.” This means that the Internet Archive and the End of Term Harvest Team (a project to preserve government websites from the Obama administration) are interested in any data that you have reason to believe may be in jeopardy under the current administration.

A quick note about politics. Politics are messy, and it can seem odd that people are organizing in this way when administrations change every four or eight years and, when there is a party change in the presidency, major departures in policy and priorities from one administration to the next are almost a certainty. What is important to recognize is that our data holdings are increasingly solely digital, and therefore fragile. The positions on issues like climate, the environment, civil rights, and many, many others are so diametrically opposed from the Obama to the Trump offices that we – the public – have no assurance that the data will be retained or made widely available for sharing. This administration speaks of “alternative facts” and “disagree[ing] with the facts,” and this makes people charged with preserving facts wary.

Many questions about the sustainability and longevity of the project remain. Will End of Term or Data Refuge be able to/need to expand the scope of these DataRescue efforts? How much resourcing can people donate to these events? What is the role of institutions in these efforts? This is a fantastic way for libraries to build partnerships with entities across campus and across a community, but some may view the political nature of these actions as incongruous with the library mission.

I would argue that policies and political actions are not inert abstractions. There is a difference between promoting a political party and calling attention to policies that are in conflict with human rights and freedom of information. Loath as I am to make this comparison, would anyone truly claim that burning books is protected political speech, and that opposing such burning is “playing politics”? Yet these were the actions of a political party – in living memory – hosted in university towns across Germany. Considering the initial attempt to silence the USDA and the temporary freeze on the EPA, libraries should strongly support the efforts of PPEH, Data Refuge, End of Term, and concerned citizens across the country.



Finding the Right Words in Post-Election Libraries and Higher Ed

This year’s election result has presented a huge challenge to all of us who work in higher education and libraries. Usually, libraries, universities, and colleges do not comment on presidential election results, and we refrain from talking about politics at work. But these are not usual times that we are living in.

A black female student was shoved off the sidewalk and called the ‘N’ word at Baylor University. The Ku Klux Klan is openly holding a rally. West Virginia officials publicly made a racist comment about the First Lady. Steve Bannon’s prospective appointment as chief strategist and senior counselor to the new President is being praised by white nationalist leaders and fiercely opposed by civil rights groups at the same time. Bannon is someone who calls for an ethno-state, openly calls Martin Luther King a fraud, and laments white dispossession and the deconstruction of occidental civilization. People are drawing swastikas in parks. ‘Whites only’ and ‘Colored’ signs were put up over water fountains in a Florida school. A Muslim student was threatened with a lighter. Asian-American women are being assaulted. Hostile acts targeting minority students are taking place on college campuses.

Libraries and educational institutions exist because we value knowledge and science. Knowledge and science do not discriminate; they grow across all different races, ethnicities, religions, nationalities, sexual identities, and disabilities. Libraries and educational institutions exist to enable and empower people to freely explore, investigate, and harness different ideas and thoughts. They support, serve, and belong to ‘all’ who seek knowledge. No matter how naive it may sound, they are essential to the betterment of human lives, and they create strength from all our differences, not from likeness. This is why diversity, equity, and inclusion are non-negotiable and irrevocable values in libraries and educational institutions.

How do we reconcile these values with the president-elect who openly dismissed and expressed hostility towards them? His campaign made remarks and promises that can be interpreted as nothing but the most blatant expressions of racism, sexism, intolerance, bigotry, harassment, and violence. What will we do to address the concerns of our students, staff, and faculty about their physical safety on campus due to their differences in race, ethnicity, religion, nationality, gender, and sexual identity? How do we assure them that we will continue to uphold these values and support everyone regardless of what they look like, how they identify their gender, what their faiths are, what disabilities they may have, who they love, where they come from, what languages they speak, or where they live? How?

We say it. Explicitly. Clearly. And repeatedly.

If you think that your organization is already so pro-diversity that there is no need to confirm or reaffirm its commitment, you couldn’t be farther from the everyday reality minorities experience. Sometimes, saying it isn’t much. But right now, saying it out loud can mean everything. If you support those who belong to minority groups but don’t say it out loud, how would they know? Right now, nothing is obvious other than that there is a lot of hate and violence towards minorities.

The entire week after the election, I agonized over what to say to the small team of IT people whom I supervise at work. As a manager, I felt that it was my responsibility to address the anxiety and uncertainty that some of my staff, particularly those in minority groups, would be experiencing because of the election result. I also needed to ensure that whatever dialogue took place regarding the differences of opinion between those who were pleased and those who were distressed by the result remained civil and respectful.

Crafting an appropriate message was much more challenging than I anticipated. I felt very strongly about the need to re-affirm the unwavering support and commitment to diversity, equity, and inclusion particularly in relation to libraries and higher education, no matter how obvious it may seem. I also felt the need to establish (within the bounds of my limited authority) that we will continue to respect, value, and celebrate diversity in interacting with library users as well as other library and university staff members. Employees are held to the standard expectations of their institutions, such as diversity, equity, inclusion, tolerance, civil dialogue, and no harassment or violence towards minorities, even if their private opinions conflict with them. At the same time, I wanted to strike a measured tone and neither scare nor upset anyone, whichever side they were on in the election. As a manager, I have to acknowledge that everyone is entitled to their private opinions as long as they do not harm others.

I suspect that many of us, whether managers or not, want to say something similar about the election result: not so much about who won or who should have won as about what we are going to do now in the face of the public incidents of anger, hatred, harassment, violence, and bigotry directed at minority groups, which are emerging at an alarming pace and which affect all of us, not just minorities.

Finding the right words, however, is difficult. You have to carefully consider your role, your audience, and the message you want to convey. The official public statement from a university president will take a tone vastly different from an informal private message a supervisor sends to a few members of his or her team. A library director’s message to library patrons assuring continued service for all groups of users without discrimination will likely be quite different from the one she sends to her library staff to assuage their anxiety and fear.

So that this difficulty does not delay or stop us from saying what we have to and want to say to everyone we work with and care for, I am sharing the short message that I sent out to my team last Friday, three days after the election. (N.B. ‘CATS’ stands for ‘Computing and Technology Services’ and UMB refers to the ‘University of Maryland, Baltimore.’) This message was customized to address my own team; I am sharing it as a potential template for you to craft your own. I would like to see more messages that reaffirm diversity, equity, and inclusion as non-negotiable values, explicitly state that we will not step backwards, and commit to continued, unwavering support for them.

Dear CATS,

This year’s close and divisive election left a certain level of anxiety and uncertainty in many of us. I am sure that we will hear from President Perman and the university leadership soon.

In the meantime, I want to remind you of something I believe to be very important. We are all here – just as we have been all along – to provide the most excellent service to our users regardless of what they look like, what their faiths are, where they come from, what languages they speak, where they live, and who they love. A library is a powerful place where people transform themselves through learning, critical thinking, and reflection. A library’s doors have been kept open to anyone who wants to freely explore the world of ideas and pursue knowledge. Libraries are here to empower people to create a better future. A library is a place for mutual education through respectful and open-minded dialogues. And, we, the library staff and faculty, make that happen. We get to make sure that people’s ethnicity, race, gender, disability, socio-economic backgrounds, political views, or religious beliefs do not become an obstacle to that pursuit. We have a truly awesome responsibility. And I don’t have to tell you how vital our role is as a CATS member in our library’s fulfilling that responsibility.

Whichever side we stood on in this election, let’s not forget to treat each other with respect and dignity. Let’s use this as an opportunity to renew our commitment to diversity, one of the UMB’s core values. Inclusive excellence is one of the themes of the UMB 2017-2021 Strategic Plan. Each and every one of us has a contribution to make because we are stronger for our differences.

We have much work ahead of us! I am out today, but expect lots of donuts Monday.

Have a great weekend,
Bohyun


Monday, I brought in donuts of many different kinds and told everyone they were ‘diversity donuts.’ Try it. I believe it eased some of the stress and tension that had been palpable in my team after the election.

Photo from Flickr: https://www.flickr.com/photos/vnysia/4598569232


Before crafting your own message, I recommend re-reading your institution’s core values, mission and vision statements, and the most recent strategic plan. Most universities, colleges, and libraries include diversity, equity, inclusion, or something equivalent to these somewhere. Also review any public statements or internal messages from your institution that reaffirm diversity, equity, and inclusion; you can easily incorporate those into your own message. Make sure to clearly state your (and your institution’s) continued commitment to and unwavering support for diversity and inclusion, and explicitly oppose bigotry, intolerance, harassment, and acts of violence. Encourage civil discourse and mutual respect. It is very important to reaffirm the values of diversity, equity, and inclusion ‘before’ listing any resources and help that employees or students may seek in case of harassment or assault. Without the assurance from the institution that it indeed upholds those values and will firmly stand by them, those resources and help mean little.

Below I have also listed messages, notes, and statements sent out by library directors, managers, librarians, and university presidents that reaffirm full support for and commitment to diversity, equity, and inclusion. I hope to see more of these come out. If you have already received or sent out such a message, I invite you to share it in the comments. If you have not, I suggest doing so as soon as possible: send out a message if you are in a position where doing so is appropriate, and don’t forget to ask for a message addressing these values if you have not received any from your organization.


Making a Basic LTI (Learning Tools Interoperability) App

Learning Tools Interoperability, or LTI, is an open standard maintained by the IMS Global Learning Consortium and used to build external tools or plugins for learning management systems (LMS). A common use case for an LTI tool is an application that can be accessed from within the LMS to perform searches and import resources into a course. For example, the Wikipedia LTI application enables instructors to search Wikipedia and embed links to articles directly in their courses. Academic libraries frequently struggle to integrate library resources with learning management systems, so LTI is an obvious standard to embrace as a way to make library resources more accessible. However, when I began researching how to create an LTI app, I found it very difficult to find examples of existing app code and resources to get started. You can’t just create any old web application and have it be ‘consumable’ by a learning management system in an LTI-compliant way. In this post, I’ll outline some of the resources I found useful for getting started building your own LTI app.

LTI General Architecture

These are the basic components of an LTI application:

  • The LTI Tool Provider (TP):  This is your application.  The tool provider is the resource the user sees when they access your application from within the learning management system.  The Wikipedia LTI app linked above is an example of a tool provider.
  • The LTI Tool Consumer (TC): This is the learning management system (e.g., Blackboard, Moodle, Canvas) from which the user accesses your tool provider application.
  • The LTI Launch:  When a user accesses your tool provider from the tool consumer, this is called “launching” the LTI application.  Parameters are passed from the tool consumer to your tool provider, including authorization parameters that ensure the user is permitted to access your application, as well as information about the user’s identity, roles within the tool consumer, and the type of request the user is sending (e.g., a “content item message” is sent to your tool to indicate the user is expecting to import a link back to the tool consumer).
  • OAuth: LTI applications use OAuth signatures to validate messages between the Tool Consumer and the Tool Provider. LTI requires that the Tool Consumer and the Tool Provider each be configured with a shared key and secret, which is used to build an OAuth “Access Token” enabling communication between the two systems.1

An additional tip for developing LTI apps:  Sign up for a free instructor account for the Canvas learning management system. Canvas accounts hosted on the Instructure website enable you to add a custom LTI tool to your course (once it’s hosted on a web server, of course) and also enables you to quickly experiment with some existing LTI applications (such as the Khan Academy and Merlot LTI apps) to explore possible functionality you might want to include in your application.  This way, you can see what an instructor or student would see when they interact with your tool provider through an LMS.

Building your first “Hello, World” LTI app (with some help from Harvard)

When I first started looking into LTI, I found it really difficult to find a full (but basic) LTI application to get an overall picture of how LTI apps work – there’s lots of LTI class libraries out there, but I wanted an example of how all the pieces of an LTI app fit together. After some fruitless Googling and GitHub searches, I finally stumbled upon this Harvard LTI workshop on LTI apps that really helped me understand how LTI applications work.  The repository includes a full working LTI application that you can simply “plug in” some basic values to create a fully working LTI application, complete with OAuth authentication.

First, be sure to look at the included presentation in the repository, which is a rare example of a set of presentation slides that is 100% understandable out of context, to get a general introduction to the LTI standard and what it attempts to achieve. You’ll also want to read through the step-by-step LTI blog tutorial that will get you set up with your first “Hello, World!” LTI application, complete with valid OAuth-signed requests.2

I found it especially useful that the Harvard LTI workshop repository includes a pseudo tool consumer (which mimics how the LMS would interact with your tool) that you can use during development on localhost.  Once you follow the steps of the tutorial to build your basic “Hello, World!” single-page LTI application, you can plug the local URL of that into the tool consumer page and check out how the parameters are passed from the tool consumer to the tool provider.   You can also examine the built-in basic LTI php class library that is included, as well as the basic OAuth functionality to see how the OAuth Access Token is constructed.

Use Case:  A WorldCat Discovery API Search and Retrieval Tool for LMS

My particular use case for exploring LTI involves building a search box that would enable a faculty user to add a link to a resource from the WorldCat Discovery system. If your library subscribes to FirstSearch or you are a WorldShare Management System (WMS) customer, you likely now have access to WorldCat Discovery; but the framework I’m using to build my app would work for any discovery layer with an API (e.g., Summon, Primo, etc.).

Searching and retrieving via LTI is straightforward. First, within the Harvard LTI workshop application, I created a /lib directory to host the WorldCat Discovery PHP library published by OCLC, cloned from GitHub. I installed the library using Composer as described in the GitHub repository readme instructions. I created a very simple search form and response page that enable a user to enter a query and then retrieve results from the WorldCat Discovery API based on that query. Then, I set up my “tool.php” application to display the search form and POST the query to the simple response page:

tool.php:

<?php
// Show all errors except notices while developing.
error_reporting(E_ALL & ~E_NOTICE);
ini_set("display_errors", 1);

// The basic LTI class from the workshop's ims-blti library validates
// the OAuth-signed launch request against the shared secret.
require_once 'ims-blti/blti.php';
$lti = new BLTI("secret", false, false);

session_start();
header('Content-Type: text/html;charset=utf-8');
?>

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8" />
    <title>Building Tools with the Learning Tools Interoperability Specification</title>
  </head>
  <body>
  <?php if ($lti->valid) { ?>
    <h2>Search WorldCat Discovery</h2>
    <form action="results.php" method="post" encType="application/x-www-form-urlencoded">
      Search: <input type="text" name="query" id="query" />
      <?php
        // Pass the LTI launch parameters through to results.php as
        // hidden fields so they survive the next POST.
        foreach ($_POST as $key => $value) {
          echo "<input type=\"hidden\" name=\"" . $key . "\" value=\"" . $value . "\" />\n";
        }
      ?>
      <input type="submit" name="submit" value="Submit" />
    </form>
  <?php } else { ?>
    <h2>This was not a valid LTI launch</h2>
    <p>Error message: <?= $lti->message ?></p>
  <?php } ?>
  </body>
</html>

results.php:

<?php
// WorldCat Discovery PHP library (published by OCLC), installed via Composer.
require_once('../lib/worldcat-discovery-php/vendor/autoload.php');

use OCLC\Auth\WSKey;
use OCLC\Auth\AccessToken;
use WorldCat\Discovery\Bib;

// Request an access token for the WorldCat Discovery API.
// 'somekey', 'somesecret', and the '123' institution IDs are placeholders;
// substitute your own WSKey credentials and institution IDs.
$key = 'somekey';
$secret = 'somesecret';
$options = array('services' => array('WorldCatDiscoveryAPI', 'refresh_token'));
$wskey = new WSKey($key, $secret, $options);
$accessToken = $wskey->getAccessTokenWithClientCredentials('123', '123');

// Search the Discovery API using the query POSTed from tool.php.
$query = $_POST["query"];
$options = array(
  'useFRBRGrouping' => 'true',
  'sortBy' => 'library_plus_relevance',
  'itemsPerPage' => 25,
);
$bib = Bib::Search($query, $accessToken, $options);

if (is_a($bib, 'WorldCat\Discovery\Error')) {
  echo $bib->getErrorCode();
  echo $bib->getErrorMessage();
} else {
  // Render each result as a linked title followed by its publication date.
  foreach ($bib->getSearchResults() as $result) {
    echo '<li><a href="' . $result . '">' . $result->getName()->getValue() . ' (';
    echo ($result->getDatePublished() ? '' . $result->getDatePublished()->getValue() : '') . ')</a></li>';
  }
}
?>


The application I’ve created so far is mostly a proof of concept, and I have a few essential tasks left to finish it. First, I need to re-write the URLs to point to a specific WorldCat Discovery instance (pointing to generic WorldCat.org isn’t helpful when a user wants to embed the resources of a specific library and enable full-text access links). Second, my app needs to enable the user to return these links to the LMS so that students and course participants can click on them.

For the second point, there is an LTI specification called the “content-item-message” that indicates that the type of interaction requested from the tool is the return of a link to the LMS.  The LMS must include this input parameter in the POST request to the tool.  The LMS “knows” to send this parameter when the tool is initially installed in the LMS.

<input type="hidden" name="lti_message_type" value="ContentItemSelectionRequest" />

The POST request to the tool must also indicate the return URL (e.g., the URL back to the LMS) where the link should be sent (the LMS should generate this input parameter for you; your tool just needs to identify this parameter and include it in the POST request to return the link to the LMS):

<input type="hidden" name="content_item_return_url" value="http://www.tc.com/item-return" />

The Tool provider must then render the link to be imported with some description of the content in JSON, for example:

{
  "@context" : "http://purl.imsglobal.org/ctx/lti/v1/ContentItem", 
  "@graph" : [ 
    { "@type" : "LtiLinkItem",
      "url" : "https://someinstitution.worldcat.org/oclc/709669613",
      "mediaType" : "text/html",
      "title" : "Global Warming: Hype or Hazard?"
    }
  ]
}

See the Content Item Message documentation for more details on returning JSON suitable for consumption by the LMS.
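
To make the return step concrete, here is a hedged sketch of my own (not code from the Harvard tutorial) showing how a results page might build the JSON above and POST it back to the LMS with an auto-submitting form. The parameter names (lti_message_type, lti_version, content_items) come from the Content-Item Message specification; note that a real tool must also OAuth-sign this response, which the IMS Global PHP library mentioned below can handle.

<?php
// Sketch: return a selected link to the LMS as a content-item response.
// Assumes the launch's content_item_return_url was carried through the
// search form as a hidden field (as tool.php does with the $_POST values).
$returnUrl = $_POST['content_item_return_url'];

// Build the ContentItem graph for the selected record (hard-coded here).
$contentItems = array(
    '@context' => 'http://purl.imsglobal.org/ctx/lti/v1/ContentItem',
    '@graph'   => array(
        array(
            '@type'     => 'LtiLinkItem',
            'url'       => 'https://someinstitution.worldcat.org/oclc/709669613',
            'mediaType' => 'text/html',
            'title'     => 'Global Warming: Hype or Hazard?',
        ),
    ),
);
?>
<!-- Auto-submitting form that POSTs the content items back to the LMS.
     A real tool must also add OAuth signature parameters to this form. -->
<form id="return" action="<?= htmlspecialchars($returnUrl) ?>" method="post">
  <input type="hidden" name="lti_message_type" value="ContentItemSelection" />
  <input type="hidden" name="lti_version" value="LTI-1p0" />
  <input type="hidden" name="content_items"
         value="<?= htmlspecialchars(json_encode($contentItems)) ?>" />
</form>
<script>document.getElementById("return").submit();</script>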

Learn, and do, more with LTI

You may find that the basic LTI class script included in the Harvard LTI tutorial is insufficient for your use case – the code is a bit aged, and the LTI specification has moved on.

A more robust LTI tool provider PHP library than the basic one included in the Harvard tutorial has been made available by IMS Global on GitHub. You can also find a more complex, complete sample app called “Rating,” which is a great example of more advanced interactions with an LTI app, including how you might build a server-side data store and recall that data through the LTI app, and how you might handle the assignment of grades or scores through an LTI app.

To learn more, the Canvas Learning Management system has an excellent open course on LTI development that you can enroll yourself in with a free Canvas account.  Once enrolled in the course, you can launch your own locally developed LTI app within the course to check how parameters and data are exchanged between the LMS and the tool.

  1.  See this post on LTI and OAuth for a straightforward discussion of the general implications of OAuth for LTI application development.
  2. I skipped the steps for installing Vagrant and VirtualBox and the tutorial still worked great for me on my MAMP server, so if you’re concerned about installing those and already have a local development server installed (or you’re just working from a LAMP server online) the tutorial will still work for you.

Thoughts from NDLC16

I recently had the pleasure of attending, and presenting at, the National Diversity in Libraries Conference (NDLC) at UCLA. NDLC is an irregularly held conference that last occurred in 2010. This year’s organizers had hoped for around 250 registrants and greatly exceeded that number. As a result, there were multiple sessions, with as many as seven concurrent sessions in a single time slot. I recommend perusing the program for a sense of the conference and for full descriptions of sessions and posters. Moreover, there was a lively Twitter stream capturing sessions and continuing conversations at #NDLC16. As I could not attend all of the sessions, I will highlight in this post some key concepts that I think are ideal to carry forward in the work that we do in libraries, supporting people and systems.

Equity (and access, inclusion, and diversity)

Yes, the conference is about “diversity in libraries,” but what became apparent throughout many sessions is that we know we have “diversity.” Not a lot (shockingly little by some measures), but there is diversity: diversity of life experience, sexual orientation, gender expression, ability status, and racial/ethnic identity (this last one is often very visible and very, very poorly represented). If our goal is to increase diversity, that is great. But if we try to do that work without acknowledging that it takes equity and inclusion to actually make an impact, it is a waste of time.

Equity and inclusion are difficult to achieve when the greater system that we work within has been constructed upon bias and privilege. This is true whether the system is our national government, our institution, or the individual library where we work. You cannot talk about human interactions in a vacuum, and many of the sessions reflected this with themes like “Academic libraries and social justice on campus” and “Cultural aspects and perspectives in health sciences library services.”

Dismantling Structures

An important point lies in recognizing the oppressive structures that we operate within and then determining the best course of action to dismantle them and, in so doing, remove or reduce the barriers to participation. The difficulty lies in the many, many ways that those structures disenfranchise communities. It can be through omission: not collecting in areas that represent the totality of American society, for example. Do collections and archives reflect the communities present? The Native American, Latinx, Asian-American, and African-American experience? What of the LGBTQA community?

Or it can be through our actual information-seeking systems: subjective subject headings that betray the dominant narrative. Words are there to describe items, but they are “othering” (set in contrast or aside to a presumed norm, e.g., Wikipedia editors’ one-time attempt to separate American novelists based on gender[1]) or strip away intersectionality; or perhaps they use euphemisms for uncomfortable truths, such as framing the detention and imprisonment of Japanese-Americans during WWII as relocation camp/assembly center/temporary detention center, rather than internment camp/incarceration camp/American concentration camp[2]. (Incidentally, the LCSH for this event is Japanese Americans — Evacuation and relocation, 1942-1945, which seems pretty euphemistic to me. What were they being evacuated from? Their safe homes, schools, and jobs?)

Or perhaps it is in our technology: How common is the practice of evaluating databases for screen-reader accessibility as part of the selection process? We work with profile systems like PeopleSoft, which limit the ability to gender identify. How do animations and lots of visual content on the website affect individuals with autism spectrum disorder (ASD)?

And what about our programming? Are makerspaces implicitly gendering activities in the promotional materials, e.g. using pinks and purples for sewing workshops and blues and black for Arduino workshops? What is really being said when a supervisor asks if there should be an “All Lives Matter” book display to “balance” the “Black Lives Matter” display?

These are complex issues because humans are complex animals. We form identities, often based on how others perceive us. How are the intersections of various facets of identity treated by society? By our organization? By our collections? By our metadata? To strive for a society wherein these oppressive structures are acknowledged, the history that put them in place is a known history, and all members feel equal and respected is a herculean task. But it is one that we are fortunate enough to be engaged with by virtue of our profession, because we subscribe to ALA’s Core Values of Librarianship, or create things like the ACRL Diversity Standards, or align with the Digital Library Federation’s mission statement.

Start Local

One way to be an active participant in working towards a better future is to assess your local environment. Much of the programming at NDLC concerned recruitment and retention of diverse individuals in our profession. Panel discussions about LIS diversity initiatives and diversity fellow programs and presentations on strategic planning for diversity and inclusion and cultural competencies highlight the serious representation issues in our profession. According to the ALA demographic study of 2014, 87.1% of membership identified as white[3]. The 2015 census data from the United States reports the population as 61.6% white (non-Hispanic, not multi-racial)[4]. For a profession built so firmly on the notion of service and community engagement, this is problematic. Happily, many of the presentations at NDLC made their materials available, so we may learn from each other’s experiences[5].

Representative hiring and active retention are only a portion of what can be done locally to bring about positive change. Collection practices, metadata creation, digital project investment, special collections development, and community outreach and programming are the bread and butter of library activities, whether at a public, school, or academic library. It is important that we examine our current processes, not with a “how is this diverse?” perspective, but with a “how is this anti-racist/anti-oppression?” one. This is an important, yet perhaps subtle, difference. Working towards an anti-oppression mentality means recognizing the systemic inequalities woven through the fabric of our society, which may require changing some of the core practices of librarians in order to move the needle.

An example of this could be in the way that an academic library engages with, say, a community of students who inhabit both an ethnic minority identity and first-generation student status. This library may have done a great job of collecting a diverse range of materials that represent this population well, but if the library does not devote effort to outreach and programming for this group, it is unlikely that the students will go into the library to discover these works themselves. Why? It’s not that they are lazy, or entitled, or willfully ignorant (as I have witnessed librarians opine).

Libraries are still intimidating places for many students, regardless of identity. Compound that generalized anxiety with first-generation student status and an ethnic minority identity, and it should be clear that the onus is on librarians to dismantle those barriers – be they perceived or real – and communicate that the library is the students’ library – is our library – is everyone’s library. If this sounds obvious to you (“of course the library is everyone’s library!”), then I fear that you may not have a realistic understanding of the varied and complex lived experiences of people living in America. If this sounds like a challenge worthy of time, effort, and resources, then YAY! And I really hope that you are in a position to take up that challenge.

I’m Tired Now

Another strong takeaway from NDLC is that this work is WORK. Really hard, exhausting work. Work that often falls on the shoulders of those most disenfranchised, and that requires pushing back against a bureaucratic and social machine that has been running for centuries. Writing this blog post took forever, as I have tried to find words that don’t require previous knowledge of “social justice jargon” and that – hopefully – give anyone reading something to relate to. See, working towards an equitable and inclusive society doesn’t fall only to those who lack access to societal privilege. Sure, those individuals will feel the imperative to change in their day-to-day experiences, but major heavy lifting must also come from those who implicitly – and often unconsciously – benefit from the constructs of the biased system.

Lasting change that doesn’t come from a burn-it-down-and-start-again revolution requires the positive participation of the most privileged. That is a lot of strength and honesty to ask of someone – to recognize that they have – in some ways – benefited from a system, and that benefit has been predicated on someone else’s oppression…that’s a heavy realization. It means recognizing that bringing equity to a system like this will most likely require some loss of benefits to the privileged group.

For example, it is possible that a collection focusing on female religious figures may purchase fewer books on Joan of Arc in order to represent Rabia Basri and Kāraikkāl Ammaiyār. That doesn’t diminish Joan of Arc’s place in the collection; it just provides a wider lens on the topic. Yet some may feel a decline in Joan’s status, and that can be frightening if they’ve built an identity around that status.

This collection example is actually a decent representation of what it means to try to put some of these equity ideals into practice. Recognizing a representation imbalance and then taking action to address it may result in feelings of discomfort or fear for some, but for others it can give voice and visibility. We are lucky to be in a profession where we, as individuals/organizations/systems, can effect change and have such a positive impact on our communities, but we must be willing to recognize uncomfortable truths and believe that a just and equitable society is a future worth working for.

NDLC Again?

Rumor has it that there will be another NDLC in 2020. It was an exhilarating conference and one that I sincerely hope leaders in our profession come to and participate in, en masse. If you are looking to get more of a feel for the entire conference, several Storify collections were created: see http://ndlc.info/ for some, as well as Amelia Gibson’s on the BLM Town Hall, ARL’s, and mine.

[1] http://www.nytimes.com/2013/04/28/opinion/sunday/wikipedias-sexism-toward-female-novelists.html?_r=0

[2] http://www.discovernikkei.org/en/journal/2008/4/24/enduring-communities/

[3] http://www.ala.org/research/sites/ala.org.research/files/content/initiatives/membershipsurveys/September2014ALADemographics.pdf

[4] https://www.census.gov/quickfacts/table/PST045215/00

[5] Coming soon to http://ndlc.info/ or look through #NDLC16


Whither the workshop?

Academic libraries have long provided workshops that focus on research skills and tools to the community. Topics often include citation software or specific database search strategies. Increasingly, however, libraries are offering workshops on topics that some may consider untraditional or outside the natural home of the library. These topics include using R and other analysis packages, data visualization software, and GIS technology training, to name a few. Librarians are becoming trained as Data and Software Carpentry instructors in order to pull from their established lesson plans and become part of a larger instructional community. Librarians are also partnering with non-profit groups like Mozilla’s Science Lab to facilitate research and learning communities.

Traditional workshops have generally been conceived and executed by librarians in the library. Collaborating with outside groups like Software Carpentry (SWC) and Mozilla is a relatively new endeavor. As an example, certified trainers from SWC can come to campus and teach a topic from their course portfolio (e.g. using SQL, Python, R, Git). These workshops may or may not have a cost associated with them and are generally open to the campus community. From what I know, the library is typically the lead organizer of these events. This shouldn’t be terribly surprising. Librarians are often very aware of the research hurdles that faculty encounter, or what research skills aren’t being taught in the classroom to students (more on this later).

Librarians are helpers. If you have some biology knowledge, I find it useful to think of librarians as chaperone proteins – proteins that help other proteins fold into their functional conformational shape. Librarians act in the same way, guiding and helping people to be more prepared to do effective research. We may not be altering their DNA, but we are helping them bend in new ways and take on different perspectives. When we see a skills gap, we think about how we can help. But workshops don’t just *spring* into being. They take a huge amount of planning and coordination. Librarians, on top of all the other things we do, pitch the idea to administration and other stakeholders on campus; coordinate the space, timing, refreshments, registration, and travel for the instructors (if they aren’t available in-house); and advocate for the funding to pay for the event in order to make it free to the community. A recent listserv discussion regarding hosting SWC workshops resulted in consensus around a recommended minimum six-week lead time. The workshops have all been hugely successful at the institutions responding on the list, and there are even plans for future Library Carpentry events.

A colleague once said that everything librarians do in instruction is something that the disciplinary faculty should be doing in the classroom anyway. That is, the research skills workshops, the use of a reference manager, searching databases, and data management best practices are all appropriately – and possibly more appropriately – taught in the classroom by the professor for the subject. While he is completely correct, that is most certainly not happening. We know this because faculty send their students to the library for help. They do this because they lack curricular time to cover any of these topics in depth and they lack professional development time to keep abreast of changes in certain research methods and technologies. And because these are all things that librarians should have expertise in. The beauty of our profession is that information is the coin of the realm for us, regardless of its form or subject. With minimal effort, we should be able to navigate information sources with precision and accuracy. This is one of the reasons why, time and again, the library is considered the intellectual center, the hub, or the heart of the university. Have an information need? We got you. Whether those information sources are in GitHub as code, spreadsheets as data, or databases as article surrogates, we should be able to chaperone our users through that process.

All of this is to the good, as far as I am concerned. Yet, I have a persistent niggle at the back of my mind that libraries are too often taking a passive posture. [Sidebar: I fully admit that this post is written from a place of feeling, of suspicions and anecdotes, and not from empirical data. Therefore, I am both uncomfortable writing it, yet unable to turn away from it.] My concern is that as libraries extend to take on these workshops because there is a need on campus for discipline-agnostic learning experiences, we (as a community) do so without really establishing what the expectations and compensation of an academic library are, or should be. This is a natural extension of the “what types of positions should libraries provide/support?” question that seems to persist. How much of this response is based on the work of individuals volunteering to meet needs, stretching the work to fit into a job description or existing workloads, and ultimately putting user needs ahead of organizational health? I am not advocating that we ignore these needs; rather I am advocating that we integrate the support for these initiatives within the organization, that we systematize it, and that we own our expertise in it.

This brings me back to the idea of workshops and how we claim ownership of them. Are libraries providing these workshops only because no one else on campus is meeting the need? Or are we asserting our expertise in the domain of information/data shepherding and producing these workshops because the library is the best home for them, not a home by default? And if we are making this assertion, then have we positioned our people to be supported in the continual professional development that this demands? Have we set up mechanisms within the library and within the university for this work to be appropriately rewarded? The end result may be the same – say, providing workshops on R – but the motivation and framing of the service is important.

Information is our domain. We navigate its currents and ride its waves. It is ever changing and evolving, as we must be. And while we must be agile and nimble, we must also be institutionally supported and rewarded. I wonder if libraries can table the self-reflection and self-doubt regarding the appropriateness of our services (see everything ever written regarding libraries and data, digital humanities, digital scholarship, altmetrics, etc.) and instead advocate for the resourcing and recognition that our expertise warrants.


Do Library Stuff Faster with Python

Python is a great programming language to know if you work in a library: it’s (relatively) easy to learn, its syntax is fairly clear and intuitive, and it has great, robust libraries for doing routine library tasks like hacking MARC records and working with delimited data, CSV files, JSON, and XML.[1]  In this post, I’ll describe a couple of projects I’ve worked on recently that have enabled me to Do Library Stuff Faster using Python.  For reference, both of these scripts were written with Python 2.7[2] in mind, but could easily be adapted for other versions of Python.

Library Holdings Lookup with Beautiful Soup

Here’s a very common library dilemma:  A generous and well-meaning patron, faculty member, or friend of the library has a large personal collection of books or other materials that they would like to bequeath to your library.  They have carefully created a spreadsheet (or word document, or hand-written index) of all of the titles and authors (and maybe dates and ISBNs) in their library and want to know if you want the items.

Many libraries (for very good reason) have policies to just say “no” to these kinds of gifts.  Well-meaning library gift givers don’t always realize that it’s an enormous amount of work for a library to evaluate materials and decide whether or not they can be added to the library’s collection. Beyond relevance to their users and condition of the items, libraries don’t want to accept gifts of duplicate copies of titles they already have in their collection due to limited shelf space.

It’s that final point – how to avoid adding duplicate titles to the collection – that led me to develop a very simple (and very hacky) script to enable me to take a list of titles and authors and do a very simple lookup to see if, at minimum, we have those same titles already in the collection.  Our ILS (Innovative Interface’s Millennium system) does not have a way to feed in a bunch of titles and generate a report of title matches – and I would venture to say that kind of functionality is probably not available in most library systems.  Normally when presented with a dilemma of having to check to see if the library already has a set of titles, we’d sit down an unfortunate student worker and have them manually work through the list – copying and pasting titles into the library catalog and noting down any matches found.  This work is incredibly boring for the student worker, and is a prime candidate for automation (the same task is done over and over again, with a very specific set of criteria as output (match or no match).

Python’s Beautiful Soup library is built for exactly this kind of task – instead of having your student worker scan a bunch of web pages in your catalog, the script can do approximately the same thing by sending search terms to your catalog via URL, and returning back page elements that can tell you whether or not any matches were found.  In my example script, I’m using title and author elements, but you could modify this script to use other elements as long as they are indexed in your catalog – for example, you could send ISBNs, OCLC numbers, etc.

First, using Excel I concatenate a list of titles and authors with a domain and other URL elements to search my library’s catalog.  Here’s a few examples of what the URLs look like:

http://suncat.csun.edu/search~S9/X?SEARCH=t:(Los%20Angeles%20Two%20Hundred)+and+a:(Lavender)&searchscope=9&SORT=DX
http://suncat.csun.edu/search~S9/X?SEARCH=t:(The%20Land%20of%20Journeys'%20Ending)+and+a:(Austin)&searchscope=9&SORT=DX
http://suncat.csun.edu/search~S9/X?SEARCH=t:(Mathematics%20and%20Sex)+and+a:(Ernest)&searchscope=9&SORT=DX

I’ll save the full list of these (in my example, I have over 1000 titles and authors to check) in a plain text file called advancedtitleauth.txt.
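(Incidentally, the concatenation doesn’t have to happen in Excel – a few lines of Python could build the same URLs. Here’s a rough sketch, assuming the same catalog URL pattern and a hypothetical tab-separated file of title/author pairs called titles.txt:)

import urllib

# Build a catalog lookup URL for each title/author pair, URL-encoding both values
base = "http://suncat.csun.edu/search~S9/X?SEARCH=t:(%s)+and+a:(%s)&searchscope=9&SORT=DX"
with open('titles.txt') as f:
    for line in f:
        title, author = line.rstrip('\n').split('\t')
        print base % (urllib.quote(title), urllib.quote(author))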

Next, I start my Python script by calling the Beautiful Soup library, along with some other useful libraries (urllib – a library built for fetching data by URL; csv – a library for working with CSV files; and re – a library for working with regular expressions).  You’ll probably have to install Beautiful Soup on your system first, which you can do, if you have the pip Python package management system[3] installed, by running sudo pip install beautifulsoup4 on your system’s command line.

from bs4 import BeautifulSoup
import urllib
import csv
import re

Then I create a blank array and define a CSV file into which to write the output of the script:

url_list = []
csv_out = csv.writer(open('output.txt', 'w'), delimiter = '\t')

The CSV file I’m creating will use tabs instead of commas as delimiters (hence delimiter = '\t').  Typically when working with library data, I prefer tab-delimited text files over comma-separated files, because you never know when a random comma is going to show up in a title and create a delimiter where there should not be one.

Then I open my list of URLs, read it, append each URL to my array, and feed each URL into Beautiful Soup:

try:
    f = open('advancedtitleauth.txt', 'rb')
    for line in f:
        line = line.strip()  # drop the trailing newline so urllib gets a clean URL
        url_list.append(line)
        r = urllib.urlopen(line).read()
        soup = BeautifulSoup(r)

Beautiful Soup will go fetch the web page of each URL.  Now that I have the web pages, Beautiful Soup can parse out specific features of each page.  In my case, my catalog returns a page with a single record if a match is found, and a browsable index when no exact match is found (e.g., your title would be here, but it isn’t, so here’s some stuff with titles that would be nearby).  I can use Beautiful Soup to return page elements that tell me whether a match was found, and if a match is found, to write the permanent URL of the match for later evaluation.  This bit of code looks for an HTML div element with the class “bibRecordLink” on the page, which only appears when a single match is found.  If this div is present on the page, the script grabs the link and drops it into the output file.

try:
    # "bibRecordLink" only appears when the catalog found a single match
    link = soup.find_all("div", class_="bibRecordLink")
    directlink = str(link[0])
    directlink = "http://suncat.csun.edu" + directlink[36:]  # build a permalink

In the code above, [36:] is Python’s slice notation for taking part of a string – here, everything from position 36 to the end of the string (which in my case is the bibliographic ID number of the item, letting me construct a permalink).

If a title/author search results in multiple possible matches – that is, we might have multiple copies, or the title/author combo is too vague to land on just one item, the page that displays in our catalog shows a browsable list of brief record info.  In my script, I just grab the top result:

try:
    # "briefcitTitle" spans appear on browse pages with multiple possible matches
    briefcit = soup.find_all("span", class_="briefcitTitle")
    bestmatch = str(briefcit[0])  # just grab the top result
    sep = "&"
    bestmatch = bestmatch.split(sep, 1)[0]  # keep everything before the first "&"
    bestmatch = "http://suncat.csun.edu/" + bestmatch[39:]

In the code above, Beautiful Soup finds all the <span> elements with the class “briefcitTitle”, the script takes the first one, and again returns a URL stored in the bestmatch variable.

You can see a sample output of my lookup script here.  You can see that for each entry, I include publication information, direct links, or a best match link if the elements are found.  If none of the elements are found for a lookup URL, the line reads:

nopub nolink nomatch

We can now divide the output file into “no match” entries, direct links, or best match links.  Direct links and best match links will need to be double-checked by a student worker to make sure they actually represent the item we looked up, including the date and edition.  The “no match” entries represent titles we don’t have in our collection, so those can be evaluated more closely to determine if we want them.
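If you’re curious how these excerpts hang together, here is a minimal sketch of the whole loop – an approximation rather than my actual script, with the fallback handling filled in the way I’d guess at it (the publication-info parsing is omitted, so the first column is always the placeholder):

from bs4 import BeautifulSoup
import urllib
import csv

csv_out = csv.writer(open('output.txt', 'w'), delimiter='\t')

for line in open('advancedtitleauth.txt', 'rb'):
    url = line.strip()
    soup = BeautifulSoup(urllib.urlopen(url).read())
    directlink, bestmatch = "nolink", "nomatch"
    try:
        # single-record page: grab the permalink
        link = soup.find_all("div", class_="bibRecordLink")
        directlink = "http://suncat.csun.edu" + str(link[0])[36:]
    except IndexError:
        pass
    try:
        # browse page: grab the top brief citation
        briefcit = soup.find_all("span", class_="briefcitTitle")
        bestmatch = "http://suncat.csun.edu/" + str(briefcit[0]).split("&", 1)[0][39:]
    except IndexError:
        pass
    csv_out.writerow(["nopub", directlink, bestmatch])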

The script certainly has room for improvement; I could write in a lot more functionality to better identify publication information, for example, to possibly reduce or eliminate the need for manual review of direct or partial matches.  But the return on investment is fairly high for a 37-line script written in an afternoon; we can re-use it dozens of times, and hopefully save countless hours of student worker boredom (and re-assign those student workers to more complex and meaningful tasks!).

Rudimentary Keyword Frequency Analysis

This second example involves, again, dealing with a task that could be done manually, but can be done much more quickly with a script.

My university needed to submit data for the AASHE Sustainability Tracking, Assessment, and Rating System (STARS) Report (https://stars.aashe.org/), which requires the analysis of data from campus course offerings as well as research output by faculty.  To submit the report, we needed to know how many courses we offer in sustainability (defined by AASHE “in an inclusive way, encompassing human and ecological health, social justice, secure livelihoods and a better world for all generations”) and how many faculty do research in sustainability.  This project was broken up into two components:  Analysis of research data by faculty and analysis of course data.

Sustainability Keywords

Before we even started analyzing research or course data, we needed to define an approach to identify what counts as “sustainability.”  Thankfully, there was precedent from the University of North Carolina, which had developed a list of sustainability-related keywords used to search against faculty research output.[4]  We adopted this list of keywords to look up in faculty research articles and course descriptions.

Research data by faculty

We don’t have a comprehensive inventory of research done by faculty at our campus.  Because we were on a somewhat tight deadline to do the analysis, I came up with a very quick and dirty way of getting a lot of citations by using Web of Science.  Web of Science enables you to do a search for research published by affiliates of your university.  I was able to retrieve about 8,000 citations written by current or former faculty associated with my institution going back about 15 years.  Of course, we cannot consider the data in Web of Science to be fully representative of faculty research output, but it seemed like a good start at least.

Web of Science enables you to export 500-record chunks of metadata, so it took an hour or so to export the metadata in several pieces (see Figure 1 for my Web of Science export criteria).

Figure 1. Web of Science’s Output Records function with the following fields selected: All records in this list (up to 500), Author(s)/Editor(s), Abstract, PubMed ID, Title, Source, Keywords, Web of Science Categories, Conference Information, Research Areas.

Once I had all of the metadata for the 8,000 or so records written by faculty at my institution, I combined them into a single file.  Next, I needed to identify records that had sustainability keywords in either the title or abstract.

First, I created an array of all of the keywords, and turned that list into a Python set.  A Python set is different from a list in that the order of terms does not matter, which makes it ideal for checking membership of items in the set against strings (or in my case, a bunch of citation and abstract strings).

word_list = 'Agriculture,Alternative,Applied%Science [..snip..]'
word_set = set(word_list.split(','))

Note the % in “Applied%Science”.  My set lookup couldn’t match terms with spaces – in hindsight, because .split() breaks the title and abstract text into single-word tokens, so a two-word keyword can never equal any one token.  My hacky solution was to replace spaces with % characters, and then do a find/replace in my spreadsheet of Web of Science data to replace all keyword matches with spaces (such as Applied Science) with percentage signs (Applied%Science).  Luckily, there were only 10 or so keywords on the list with spaces, so the find/replace did not take very long.  Note also that the set match lookup is case sensitive, so I found it easier to just convert everything to lower case in my Web of Science spreadsheet and match on the lower-case term (though I kept both upper- and lower-case terms in my lookup set).
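(Were I revisiting this today, a regular-expression match would sidestep the spaces problem entirely. Here’s a hypothetical sketch – not what the actual script did:)

import re

# Hypothetical alternative: word-boundary regex matching handles multi-word,
# mixed-case keywords without any % substitution.
keywords = ['Agriculture', 'Applied Science', 'Invest']

def matching_keywords(text):
    return [kw for kw in keywords
            if re.search(r'\b' + re.escape(kw) + r'\b', text, re.IGNORECASE)]

print matching_keywords("New advances in applied science")  # ['Applied Science']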

Then I checked to see if any words were in the title, abstract, or both, and constructed my query so that a new column would be added to an output spreadsheet indicating *which* matches were found:

for row in csv_reader:
    # row[23] is the abstract and row[9] is the title; intersect each with the keyword set.
    # "and" requires a match in both fields (not necessarily the same keyword in each)
    if (set(row[23].split()) & word_set) and (set(row[9].split()) & word_set):
        csv_out.writerow(["title & abstract match", row[61], row[1], row[9], row[23],
                          (set(row[9].split()) & word_set),
                          (set(row[23].split()) & word_set)])

If any of the words in my set were found in the 23rd cell of the spreadsheet (the abstract) and the 9th cell of the spreadsheet (the title), then a row would be written to an output sheet indicating that sustainability keywords were found in the title and abstract, pulling in some citation details about the article (including author names), as well as a cell with a list of the matches found for both title and abstract fields.

I did similar conditionals for rows that found, for example, just a title match, or just an abstract match:

    elif set(row[9].split()) & word_set:
        csv_out.writerow(["title match", row[61], row[1], row[9], row[23],
                          (set(row[9].split()) & word_set)])
    elif set(row[23].split()) & word_set:
        csv_out.writerow(["abstract match", row[61], row[1], row[9], row[23],
                          (set(row[23].split()) & word_set)])

And that is pretty much the whole script!  With the output file, I did have to do a bit more work to identify current faculty at my institution, but I basically used the same set matching method above using a list provided by HR.

Because the STARS report also required analysis of courses related to sustainability, I also created a very similar script to lookup key terms found in course titles and descriptions.

Of course, just because a research article or course description has a keyword, or even multiple keywords, does not mean it’s relevant at all to sustainability.  One of the keywords identified as related to sustainability, for example, is “invest”, which basically meant that almost every finance class returned as a match.  Manual work was required to review the matches and weed out false positives, but because the keyword matching was already done and we could easily see what matches were found, this work was done fairly quickly.  We could, for example, identify courses and research articles that only had a single keyword match.  If that single keyword match was something like “sustainability” it was likely a sustainability-related course and would merit further review; if the single keyword match was something like “systems” it could probably be weeded out.

As with my author/title lookup script, if I had a bit more time to fuss with the script, I could have probably optimized it further (for example, by assigning weight to more sustainability-related keywords to help calculate a relevance score).  But again, a short amount of time invested in this script saved a huge amount of time, and enabled us to do something we would not otherwise have been able to do.
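To illustrate the weighting idea (with entirely made-up weights – this is a sketch, not part of the script we actually ran):

# Hypothetical relevance scoring: strongly sustainability-related keywords
# get higher weights than generic ones; sum the weights of the matches.
weights = {'sustainability': 5, 'renewable': 4, 'invest': 1, 'systems': 1}

def relevance_score(matched_keywords):
    # keywords missing from the dict get a default weight of 1
    return sum(weights.get(kw.lower(), 1) for kw in matched_keywords)

print relevance_score(set(['sustainability', 'systems']))  # 6 - worth a closer look
print relevance_score(set(['systems']))                    # 1 - probably a false positive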

Python Resources

If you’re interested in learning more about Python and its syntax, and don’t have a lot of Python experience, a good (free) place to start is Google’s Python Class, created by Nick Parlante for Google (I actually took a similar class several years ago, also created by Dr. Parlante, through Coursera, which looks to still be available).  If you want to get started using Python right away and don’t want to have to fuss with installing it on your computer, you can check out the interactive course How to Think Like a Computer Scientist created by Brad Miller and David Ranum at Luther College.  For more examples of usage in Python for library work, check out Charles Ed Hill, Heidi Frank, and Mark Pernotto’s Python chapter in the just-released LITA Guide The Librarian’s Introduction to Programming Languages, edited by Beth Thomsett-Scott (full disclosure: I am a contributor to this book).

  1. Working with CSV files and JSON Data.  In Sweigart, Al (2015). Automate the Boring Stuff with Python: Practical Programming for Total Beginners. San Francisco: No Starch Press.
  2. For an explanation of the difference between Python 2 and 3, see https://wiki.python.org/moin/Python2orPython3.  The reason I use Python 2.7 for these scripts is because of my computing environment (in which Python 2 is installed by default), but if you have Python 3 installed on your computer, note that syntactical changes in Python 3 mean that many Python 2.x scripts may require revision in order to work.
  3. For instructions on using Pip with your Python installation, see: https://pip.pypa.io/en/latest/installing/
  4.  Blank-White, Kristen. 2014. Researching the Researchers: Developing a Sustainability Research Inventory.  Presented at the 2014 AASHE Conference and Expo, Portland OR. http://www.aashe.org/files/2014conference/presentations/secondPresentationUpload/Blank-White-Kristin_Researching-the-Researchers-Developing-a-Sustainability-Research-Inventory.pdf.

Cybersecurity, Usability, Online Privacy, and Digital Surveillance

Cybersecurity is an interesting and important topic, one closely connected to those of online privacy and digital surveillance. Many of us know that it is difficult to keep things private on the Internet. The Internet was invented to share things with others quickly, and it excels at that job. Businesses that process transactions with customers and store the information online are responsible for keeping that information private. No one wants social security numbers, credit card information, medical history, or personal e-mails shared with the world. We expect and trust banks, online stores, and our doctor’s offices to keep our information safe and secure.

However, keeping private information safe and secure is a challenging task. We have all heard of security breaches at J.P. Morgan, Target, Sony, Anthem Blue Cross and Blue Shield, the Office of Personnel Management of the U.S. federal government, the University of Maryland at College Park, and Indiana University. Sometimes a data breach takes place when an institution fails to patch a hole in its network systems. Sometimes people fall for a phishing scam, or a virus in a user’s computer infects the target system. Other times, online companies compile customer data into personal profiles, which are then sold to data brokers and on into the hands of malicious hackers and criminals.

Image from Flickr – https://www.flickr.com/photos/topgold/4978430615

Cybersecurity vs. Usability

To prevent such a data breach, institutional IT staff are trained to protect their systems against vulnerabilities and intrusion attempts. Employees and end users are educated to be careful about dealing with institutional or customers’ data. There are systematic measures that organizations can implement such as two-factor authentication, stringent password requirements, and locking accounts after a certain number of failed login attempts.

While these measures strengthen an institution’s defense against cyberattacks, they may negatively affect the usability of the system, lowering users’ productivity. As a simple example, security measures like a CAPTCHA can cause an accessibility issue for people with disabilities.

Or imagine, as another example, that a university IT office concerned about the data security of cloud services starts requiring all faculty, students, and staff to use only cloud services that are SOC 2 Type II certified. SOC stands for “Service Organization Controls.” It consists of a series of standards that measure how well a given service organization keeps its information secure. For a business to be SOC 2 certified, it must demonstrate that it has sufficient policies and strategies that will satisfactorily protect its clients’ data in five areas known as “Trust Services Principles”: the security of the service provider’s system, the processing integrity of the system, the availability of the system, the privacy of personal information that the service provider collects, retains, uses, discloses, and disposes of for its clients, and the confidentiality of the information that the service provider’s system processes or maintains for the clients. The SOC 2 Type II certification means that the business has maintained relevant security policies and procedures over a period of at least six months, and it is therefore a good indicator that the business will keep clients’ sensitive data secure.

Dropbox for Business is SOC 2 certified, but it costs money. The free version is not as secure, but many faculty, students, and staff in academia use it frequently for collaboration. If a university IT office simply bans people from using the free version of Dropbox without offering an alternative that is as easy to use, people will undoubtedly suffer.

Some of you may know that the USPS website does not provide a way to reset the password for users who have forgotten their usernames. They are instead asked to create a new account. If they remember the account username but enter wrong answers to the two security questions more than twice, the system automatically locks their accounts for a certain period of time, and again users have to create a new account. Clearly, a system that does not allow password resets for forgetful users is more secure than one that does. However, in reality this security measure creates a huge usability issue, because average users do forget their passwords and the answers to the security questions that they set up themselves. It’s not hard to guess how frustrated people will be when they realize that they entered a wrong mailing address for mail forwarding and are now unable to get back into the system to correct it, because they can remember neither their passwords nor the answers to their security questions.

To give an example related to libraries, a library may decide to block all international traffic to its licensed e-resources to prevent foreign hackers who have gotten hold of the username and password of a legitimate user from accessing those e-resources. This would certainly help libraries avoid a potential breach of licensing terms in advance and spare them from having to shut down compromised user accounts one by one whenever those are found. However, it would also make it impossible for legitimate users traveling outside of the country to access those e-resources, which many users would find unacceptable. Furthermore, malicious hackers would probably just use a proxy to make their IP addresses appear to be located in the U.S. anyway.

What would users do if their organization required them to reset passwords on a weekly basis for their work computers and the several or more systems they also use constantly for work? While this may strengthen the security of those systems, it’s easy to see that it would be a nightmare having to reset all those passwords every week and keep track of them so as not to forget or mix them up. Most likely, users would start choosing less complicated passwords, or even adopt a single password for all the different services. Some might stick to the same password every time the system requires a reset, unless the system automatically detects the previous password and prevents them from reusing it. Ill-thought-out cybersecurity measures can easily backfire.

Security is important, but users also want to be able to do their job without being bogged down by unwieldy cybersecurity measures. The more user-friendly and the simpler the cybersecurity guidelines are to follow, the more users will observe them, thereby making a network more secure. Users who face cumbersome and complicated security measures may ignore or try to bypass them, increasing security risks.


Cybersecurity vs. Privacy

Usability and productivity may be a small issue, however, compared to the risk of mass surveillance resulting from aggressive security measures. In 2013, the Guardian reported that the communication records of millions of people were being collected by the National Security Agency (NSA) in bulk, regardless of suspicion of wrongdoing. A secret court order prohibited Verizon from disclosing the NSA’s information request. After a cyberattack against the University of California at Los Angeles, the University of California system installed a device capable of capturing, analyzing, and storing all network traffic to and from the campus for over 30 days. This security monitoring was implemented secretly, without consulting or notifying the faculty and others who would be subject to it. The San Francisco Chronicle reported that the IT staff who installed the system were given strict instructions not to reveal it was taking place, and selected committee members on the campus were told to keep this information to themselves.

The invasion of privacy and the lack of transparency in these network monitoring programs have caused great controversy. Such wide and indiscriminate monitoring programs must have a very good justification and offer clear answers to vital questions: what exactly will be collected, who will have access to the collected information, when and how the information will be used, what controls will be put in place to prevent the information from being used for unrelated purposes, and how the information will be disposed of.

We have recently seen another case in which security concerns conflicted with people’s right to privacy. In February 2016, the FBI asked Apple to create a backdoor application that would bypass the security measures in its iOS. This was because the FBI wanted to unlock an iPhone 5C recovered from one of the shooters in the San Bernardino shooting incident. Apple iOS secures users’ devices by permanently erasing all data when a wrong password is entered more than ten times, if people choose to activate this option in the iOS settings. The FBI’s request was met with strong opposition from Apple and others, since such a backdoor application could easily be exploited for illegal purposes by black-hat hackers, for unjustified privacy infringement by other capable parties, and even for dictatorship by governments. Apple refused to comply with the request, and a court hearing was scheduled for March 22. The FBI, however, withdrew the request, saying that it had found a way to hack into the phone in question without Apple’s help. Now Apple has to figure out what the vulnerability in its iOS is if it wants its encryption mechanism to be foolproof. In the meantime, iOS users know that their data is no longer as secure as they once thought.

Around the same time, the Senate’s draft bill, titled the “Compliance with Court Orders Act of 2016,” proposed that people be required to comply with any authorized court order for data, and that if that data is “unintelligible” – meaning encrypted – it must be decrypted for the court. This bill is problematic because it would practically nullify the efficacy of any end-to-end encryption, which we use every day in everything from our iPhones to messaging services like WhatsApp and Signal.

Because security is essential to privacy, it is ironic that certain cybersecurity measures are used to greatly invade privacy rather than protect it. Because we do not always fully understand how the technology actually works or how it can be exploited for both good and bad purposes, we need to be careful about giving blank permission to any party to access, collect, and use our private data without clear understanding, oversight, and consent. As we share more and more information online, cyberattacks will only increase, and organizations and the government will struggle even more to balance privacy concerns with security issues.

Why Libraries Should Advocate for Online Privacy

The fact that people may no longer have privacy on the Web should concern libraries. Historically, libraries have been strong advocates of intellectual freedom, striving to keep patrons’ data safe and protected from the unwanted eyes of the authorities. As librarians, we believe in people’s right to read, think, and speak freely and privately as long as such an act itself does not pose harm to others. The Library Freedom Project is an example that reflects this belief held strongly within the library community. It educates librarians and their local communities about surveillance threats, privacy rights and law, and privacy-protecting technology tools to help safeguard digital freedom, and it helped the Kilton Public Library in Lebanon, New Hampshire, become the first library to operate a Tor exit relay, providing anonymity for patrons while they browse the Internet at the library.

New technologies have brought us the unprecedented convenience of collecting, storing, and sharing massive amounts of sensitive data online. But the fact that such sensitive data can be easily exploited by falling into the wrong hands has also created an unparalleled potential for the invasion of privacy. While the majority of librarians take a very strong stance in favor of intellectual freedom and against censorship, it is often hard to discern the correct stance on online privacy, particularly when it is pitted against cybersecurity. Some even argue that those who have nothing to hide do not need privacy at all.

However, privacy is not equivalent to hiding a wrongdoing. Nor do people keep certain things secret because those things are necessarily illegal or unethical. Being watched 24/7 will drive any person crazy, whether or not s/he is guilty of any wrongdoing. Privacy allows us a safe space to form our thoughts and consider our actions on our own, without being subject to others’ eyes and judgments. Even in the absence of actual mass surveillance, just the belief that one can be placed under surveillance at any moment is sufficient to trigger self-censorship, and it negatively affects one’s thoughts, ideas, creativity, imagination, choices, and actions, making people more conformist and compliant. This is further corroborated by a recent study from Oxford University, which provides empirical evidence that the mere existence of a surveillance state breeds fear and conformity and stifles free expression. Privacy is an essential part of being human, not some trivial condition that we can do without in the face of a greater concern. That’s why many people under political dictatorships continue to choose death over life under mass surveillance and censorship in their fight for freedom and privacy.

The Electronic Frontier Foundation states that privacy means respect for individuals’ autonomy, anonymous speech, and the right to free association. We want to live as autonomous human beings free to speak our minds and think on our own. If part of a library’s mission is to contribute to helping people to become such autonomous human beings through learning and sharing knowledge with one another without having to worry about being observed and/or censored, libraries should advocate for people’s privacy both online and offline as well as in all forms of communication technologies and devices.


Are Your LibGuides 2.0 (images, tables, & videos) mobile friendly? Maybe not, and here’s what you can do about it.

LibGuides version 2 was released in summer 2014, built on Bootstrap 3. However, after examining my own institution’s guides and conducting a simple random sampling of academic libraries in the United States, I found that many LibGuides did not display well on phones or mobile devices when it came to images, videos, and tables. Springshare documentation states that LibGuides version 2 is mobile friendly out of the box and that no additional coding is necessary; however, I found this is not necessarily accurate. While the responsive features are available, they aren’t presented clearly as options in the graphical interface, and additional coding needs to be added using the HTML editor in order for the mobile display of images, videos, and tables to be truly responsive.

At my institution, our LibGuides are reserved for our subject librarians to use for their research and course guides. We also use the A-Z database list and other modules. As the LibGuides administrator, I’d known since its version 2 release that the new system was built on Bootstrap, but I didn’t know enough about responsive design to do anything about it at the time. It wasn’t until this past October when I began redesigning our library’s website using Bootstrap as the framework that I delved into customizing our Springshare products utilizing what I had learned.

While looking at our own guides individually and speaking to the subject librarians about their process, I have found that they have been creating and designing their guides by letting the default settings take over for images, tables, and videos.  As a result, several tables run out of their boxes, images get distorted, and videos are stretched vertically with large black top and bottom margins. This is because additional coding and/or tweaking is indeed necessary in most cases for these elements to display correctly on mobile.

I’m by no means a Bootstrap expert, but my findings have been verified with Springshare, and I was told by Springshare support that they will be looked at by the developers. Support indicated that there may be a good reason things work as they do, perhaps to give users flexibility in their decisions, or perhaps a technical reason. I’m not sure, but for now we have begun work on making the adjustments so guides display correctly. I’d be interested to hear others’ experiences with these elements and what they have had to do, if anything, to ensure they are responsive.

Method

Initially, as I learned how to use Bootstrap with the LibGuides system, I looked at my own library’s subject guides and tested their responsiveness and display. To start, I browsed through our guides with my Android phone. I then used Chrome and IE11 on a desktop and resized the windows to see if the tables stayed within their boxes and the images responded appropriately. I peeked at the HTML and elements within LibGuides to see how the librarians had their items configured. Once I realized the issues were similar across all our guides, I took my search further. Selfishly hoping it wasn’t just us, I used the LibGuides Community site, where I sorted the list by libraries on version 2, then sorted by academic libraries. Each state’s list had to be looked at separately (you can’t sort by the whole United States). I placed all libraries from each state in a separate Excel sheet in alphabetical order. Using the random sort function, I examined two to three, sometimes five, libraries per state (25 states viewed) by following the link provided in the community site list. I also inspected the elements of several LibGuides in my spreadsheet live in Chrome, removing dimensions or styling to see how the pages responded, since I don’t have admin access to any other university’s guides. Finally, I created a demo guide for testing purposes, where I inserted various tables, images, and videos.

Some Things You Can Try

Even if you or your LibGuides authors aren’t familiar with Bootstrap or the fundamentals of responsive design, anyone should be able to design or update these guide elements using the instructions below; no serious Bootstrap knowledge is needed for these solutions.

Tables

As we know, tables should not be used for layout. They are meant to display tabular data. This is another issue I came across in my investigation. Many librarians are using tables in this manner. Aside from being an outdated practice, this poses a more serious issue on a mobile device. Authors can learn how to float images, or create columns and rows right within the HTML Editor as an alternative. So for the purposes of this post, I’ll only be using a table in a tabular format.

When inserting a table using the table icon in the rich text editor, you are asked typical table questions: How many rows? How many columns? In speaking with the librarians here at my institution, no one is giving it much thought beyond this. They are filling in these blanks, inserting the table, and populating it. Or worse, copying and pasting a table created in Word.

However, if you leave things as they are and the table has any width to it, this will be your result once the window is minimized or the table is viewed on a mobile device:

Figure 1: LibGuides default table with no responsive class added.

As you can see, the table runs out of its container (the box). To alleviate this, you will have to open the HTML Editor, find where the table begins, and wrap the table in the table-responsive class. The HTML Editor is available to all regular users; no administrative access is needed. If you aren’t familiar with adding classes, note that you will also need to close the tag after the last table code you see. The HTML looks something like this:

<div class="table-responsive">
  <table>
    <!-- all the existing table rows and cells go here -->
  </table>
</div>

Below is the result of wrapping all the table elements in the table-responsive class. As you can see it is cleaner, there is no run-off, and Bootstrap added a horizontal scroll bar since the table is really too big for the box once it is resized. On a phone, you can now swipe sideways to scroll through the table.

Figure 2: Result of adding the responsive table class.

Springshare has also made the Bootstrap table styling classes available, which you can see in the editor dropdown as well. You can experiment with these to see which styling you prefer (borders, hover rows, striped rows…), but they don’t replace adding the table-responsive class to the table.
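For example, a striped and bordered table that still scrolls on small screens could be marked up like this (sample markup with made-up sample data, just for illustration; the classes are standard Bootstrap 3):

<div class="table-responsive">
  <table class="table table-striped table-bordered">
    <thead>
      <tr><th>Database</th><th>Best for</th></tr>
    </thead>
    <tbody>
      <tr><td>JSTOR</td><td>Humanities backfiles</td></tr>
      <tr><td>Web of Science</td><td>Citation searching</td></tr>
    </tbody>
  </table>
</div>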

Images

When inserting an image in a LibGuides box, the system brings the dimensions of the image with it into the Image Properties box by default. After various tests I found it best to match the image size to the layout/box prior to uploading, and then remove the dimensions altogether from within the Image Properties box (and don’t place it in an unresponsive table). This can easily be done right within the Image Properties box when the image is inserted. It can also be done in the HTML Editor afterwards.

Figure 3: Image dimensions can be removed in the Image Properties box.

On the left: dimensions in place. On the right: dimensions removed.

By removing the dimensions, the image is better able to resize accordingly, especially in IE, which seems to be less forgiving than Chrome. Guide creators should also add descriptive Alternative Text while in the Image Properties box for accessibility purposes.
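After those two tweaks, the image markup ends up looking something like this (an illustrative example – the src and alt values are placeholders, and Bootstrap 3’s img-responsive class is an optional extra if you’re in the HTML Editor anyway):

<!-- dimensions removed; descriptive alt text added -->
<img src="/images/reference-desk.jpg" alt="Students at the reference desk" class="img-responsive">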

Some users may be tempted to resize large images by adjusting the dimensions right in the Properties box. However, doing this doesn’t actually decrease the size of the file that gets passed to the user, so it doesn’t help download speed. Substantial resizing needs to be done prior to upload. Springshare recommends adjustments of no more than 10-15%.[1]

Videos

There are a few things I tried while figuring out the best way to embed a YouTube video:

    1. Use the YouTube embed code as is. This can result in a squished image, with a lot of black border in the top and bottom margins.
    2. Use the YouTube embed code but remove the iframe dimensions (width=”560″ height=”315″). This results in a small image that looks fine, but stays small regardless of the box size.
    3. Use the YouTube embed code, remove the iframe dimensions, and add the embed-responsive class – in this case, 16by9. This results in a nice responsive display with no black margins (see the markup sketch below). Alternately, I discovered that leaving the iframe dimensions while adding the responsive class looks nearly the same.

It should also be noted that LibGuides creators and editors should manually add a “title” attribute to the embed code for accessibility.[2] Neither LibGuides nor YouTube does this automatically, so it’s up to the guide creator to add it in the HTML Editor. In addition, the frameborder=”0″ attribute will be overwritten by Bootstrap, so you can remove it or leave it; it’s up to you.
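Putting those pieces together, the stock Bootstrap 3 responsive-embed pattern looks something like this (VIDEO_ID and the title text are placeholders):

<div class="embed-responsive embed-responsive-16by9">
  <iframe class="embed-responsive-item" title="Library orientation video" src="https://www.youtube.com/embed/VIDEO_ID" allowfullscreen></iframe>
</div>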

Considering Box Order/Stacking

The way boxes stack and order on smaller devices is also something LibGuides creators or editors should take into consideration. The layout is essentially comprised of columns, and in Bootstrap the columns stack a certain way depending on device size.

I’ve tested several guides and believe the following are representative of how boxes will stack on a phone, or small mobile device. However, it’s always best to test your layout to be sure. Test your own guides by minimizing and resizing your browser window and watch how they stack.

Box stacking order of a guide with no large top box and three columns.

Box stacking order of a guide with a large top box and two columns.

Conclusion

After looking at the number of libraries that have these same issues, it may be safe to say that our subject librarians are similar to others in having limited HTML, CSS, or design skills. They rely on LibGuides’ easy-to-use interface and system to do most of the work, as their time is limited or they have no interest in learning these additional skills. Our librarians spend most of their teaching time in a classroom, using a podium and large screen, or at the reference desk on large screens. Because of this, they are not highly attuned to the mobile user and how their guides display on other devices, even though their guides are being accessed by students on phones and tablets. We will be initiating a mobile reference service soon; perhaps this will help bring further awareness.

For now, I recently taught an internal workshop to demonstrate and share what I have learned, in hopes of helping the librarians get these elements fixed. Helping ensure new guides will be created with mobile in mind is also a priority. To date, several librarians have gone through their guides and made the changes where necessary. Others have summer plans to update their guides and will address these issues at the same time. I’m not aware of any way to make these changes in bulk, since they are very individual in nature.

Danielle Rosenthal is the Web Development & Design Librarian at Florida Gulf Coast University. She is responsible for the library’s web site and its applications in support of teaching, learning, and scholarship activities of the FGCU Library community. Her interests include user interface, responsive, and information design.

 

Notes:

[1] Maximizing your LibGuides for Mobile: http://buzz.springshare.com/springynews/news-29/tips
[2] http://acrl.ala.org/techconnect/post/accessibility-testing-libguides-2-0