Names are Hard

A while ago I stumbled onto the post “Falsehoods Programmers Believe About Names” and was stunned. Personal names are one of the most deceptively difficult forms of data to work with, and this article touched on so many common but unaddressed problems. Assumptions like “people have exactly one canonical name” and “My system will never have to deal with names from China/Japan/Korea” were apparent everywhere. I consider myself a fairly critical and studious person; I devote time to thinking about the consequences of design decisions and carefully attempt to avoid poor assumptions. But I’ve repeatedly run into trouble when handling personal names as data. There is a cognitive dissonance surrounding names; we treat them as rigid identifiers when they’re anything but. We acknowledge their importance but struggle to take them seriously.

Names change. They change due to marriage, divorce, child custody, adoption, gender identity, religious devotion, performance art, witness protection, or none of these at all. Sometimes people just want a new name. And none of these reasons for change are more or less valid than others, though our legal system doesn’t always treat them equally. We have students who change their legal name, which is often something systems expect, but then they have the audacity to want to change their username, too! And that works less often because all sorts of system integrations expect usernames to be persistent.

Names do not have a universal structure. There is no set quantity of components in a name nor an established order to those components. At my college, we have students without surnames. In almost all our systems, surname is a required field, so we put a period “.” there to satisfy that requirement. Then, on displays in our digital repository where surnames are assumed, we end up with bolded section headers like “., Johnathan” which look awkward.

Many Western names might follow a [Given name] – [Middle name] – [Surname] structure, and an unfortunate number of the systems I have to deal with assume all names share this structure. It’s easy to see how this yields problematic results. For instance, if you want to see a sorted list of users, you probably want to sort by family name, but many systems sort by the name in the last position, causing, for instance, Chinese names1 to be handled differently from Western ones.2 But it’s not only that someone might not have a middle name, or might have two middle names, or might have a family name in the first position—no, even that would be too simple! Some name components defy simple classifications. I once met a person named “Bus Stop”. “Stop” is clearly not a family affiliation, despite coming in the final position of the name. Sometimes the second component of a tripartite Western name isn’t a middle name at all, but a maiden name or the second word of a two-word first name (e.g. “Mary Anne” or “Lady Bird”)! One cannot determine the role each piece of a name plays merely by recognizing a familiar structure!
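To see the sorting problem concretely, here is a toy Python sketch (the names are chosen purely for illustration): sorting by the final whitespace-separated token files Mao Zedong under “Zedong”, a given name, while a correct sort requires knowing which component is the family name, information the string alone does not carry.

```python
# Toy illustration: sorting by the last token assumes the family name
# comes last, which misfiles names where it comes first.
names = ["Mao Zedong", "Virginia Woolf"]  # "Mao" is the family name

by_last_token = sorted(names, key=lambda n: n.split()[-1])
# -> ['Virginia Woolf', 'Mao Zedong']: Mao Zedong is filed under "Zedong"

# A correct sort needs the family name stored explicitly, since the
# string alone cannot tell us which part it is:
records = [
    {"display": "Mao Zedong", "family": "Mao"},
    {"display": "Virginia Woolf", "family": "Woolf"},
]
by_family = sorted(records, key=lambda r: r["family"])
# -> Mao Zedong first, sorted under his actual family name
```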

Names are also contextual. One’s name with family, with legal institutions, and with classmates can all differ. Many of our international students have alternative Westernized first names. Their family may call them Qiáng but they introduce themselves as Brian in class. We ask for a “preferred name” in a lot of systems, which is a nice step forward, but don’t ask when it’s preferred. Names might be meant for different situations. We have no system remotely ready for this, despite the personalization that’s been seeping into web platforms for decades.

So if names are such trouble, why not do our best and move on? Aren’t these fringe cases that don’t affect the vast majority of our users? These issues simply cannot be ignored because names are vital. What one is called, even if it’s not a stable identifier, has great effects on one’s life. It’s dispiriting to witness one’s name misspelled, mispronounced, treated as an inconvenience, botched at every turn. A system that won’t adapt to suit a name delegitimizes the name. It says, “oh that’s not your real name” as if names had differing degrees of reality. But a person may have multiple names—or many overlapping names over time—and while one may be more institutionally recognized at a given time, none are less real than the others. Even if only a single student a year is affected, affirming their name(s) is the absolute least amount of respect we can show.

So what do we do? Endless enumeration of the difficulties of working with names does little but paralyze us. Honestly, when I consider the best implementation of personal names, the MODS metadata schema comes to mind. Having a <name> element with any number of <namePart> children is the best model available. The <namePart>s can be ordered in particular ways, a “@type” attribute can define a part’s function3, a record can include multiple names referencing the same person, multiple names with distinct parts can be linked to the same authority record, etc. MODS has a flexible and comprehensive treatment of name data. Unfortunately, returning to “Falsehoods Programmers Believe”, none of the library systems I administer do anywhere near as good a job as this metadata schema. Nor is it necessarily a problem of Western bias—even the Chinese government can’t develop computer systems that accurately represent the names of people in the country, or even agree on what the legal character set should be!4 It seems that programmers start their apps by creating a “users” database table with columns for unique identifier, username, and “firstname”/”lastname” [sic], and work from there. On the bright side, at least the name isn’t used as the identifier! We all learned that in databases class, but we didn’t learn to make “names” a separate table linked to “users” in our relational databases.
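To make that last point concrete, here is a minimal sketch of what a separate, MODS-like names table could look like, using Python’s built-in sqlite3. The table and column names are my own invention for illustration, not a schema from any system mentioned above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id       INTEGER PRIMARY KEY,  -- stable identifier, never the name
    username TEXT UNIQUE
);

-- Each user may have any number of names; each name may have any
-- number of ordered parts, with an open-ended (nullable) part type.
CREATE TABLE names (
    id      INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    context TEXT                 -- e.g. 'legal', 'preferred', 'stage'
);

CREATE TABLE name_parts (
    id        INTEGER PRIMARY KEY,
    name_id   INTEGER REFERENCES names(id),
    position  INTEGER,           -- display order, not semantic role
    part_type TEXT,              -- 'family', 'given', ... or NULL if unknown
    value     TEXT
);
""")

# A student with no surname: one name, one part, no forced "." placeholder.
conn.execute("INSERT INTO users (id, username) VALUES (1, 'jdoe')")
conn.execute("INSERT INTO names (id, user_id, context) VALUES (1, 1, 'preferred')")
conn.execute(
    "INSERT INTO name_parts (name_id, position, part_type, value) "
    "VALUES (1, 1, NULL, 'Johnathan')"
)
```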

In my day-to-day work, the best I’ve done is to be sensitive to the importance of name changes specifically and to how our systems handle them. After a few meetings with a cross-departmental team, we developed a name change process at our college. System administrators from across the institution are on a shared listserv where name changes are announced. In the libraries, I spoke with our frontline service staff about assisting with name changes. Our people at the circulation desk know to notice name discrepancies—sometimes a name badge has been updated but not our catalog records, and we can offer to make them match—but also to guide students who may need to contact the registrar or other departments on campus to initiate the top-down name change process. While most of the library’s systems don’t easily accommodate username changes, I can write administrative scripts for our institutional repository that alter the ownership of a set of items from an old username to a new one. I think it’s important to remember that we’re inconveniencing the user with the work of implementing their name change, not the other way around. So taking whatever extra steps we can on our own, without pushing labor onto our students and staff, is the best way we can mitigate how poorly our tools support the protean nature of personal names.
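As an illustration of what such a script might look like, here is a hypothetical sketch. The repository endpoints, field names, and usernames below are all invented for the example; a real repository’s administrative API will differ.

```python
# Hypothetical sketch of an ownership-transfer script; the API shown
# here is invented for illustration and is not any real repository's API.
import requests

API = "https://repository.example.edu/api"   # placeholder URL
HEADERS = {"Authorization": "Bearer ADMIN_API_KEY"}  # placeholder credential

def transfer_ownership(old_username: str, new_username: str) -> None:
    """Reassign every item owned by old_username to new_username."""
    resp = requests.get(f"{API}/items", params={"owner": old_username},
                        headers=HEADERS)
    resp.raise_for_status()
    for item in resp.json():
        r = requests.patch(
            f"{API}/items/{item['id']}",
            json={"owner": new_username},
            headers=HEADERS,
        )
        r.raise_for_status()
        print(f"Item {item['id']}: {old_username} -> {new_username}")

transfer_ownership("jdoe", "jsmith")
```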

Notes

  1. Chinese names typically have the surname first, followed by the given name.
  2. Another poor implementation can be seen in The Chicago Manual of Style’s indexing instructions, which have an extensive list of exceptions to the Western norm and how to handle them. But CMoS provides no guidance on how one would go about identifying a name’s cultural background or, for instance, identifying a compound surname.
  3. Sadly, the MODS user guidelines limit the use of the type attribute to a fixed list of values that includes “family” and “given”, rendering it subject to most of the critiques in this post. Substantially expanding this list with “maiden”, “patronymic/matronymic” (names based on a parental given name, e.g. Mikhailovich), and more, as well as some sort of open-ended “other” option, would be a great improvement.
  4. https://www.nytimes.com/2009/04/21/world/asia/21china.html

Our Assumptions: of Neutrality, of People, & of Systems

Discussions of neutrality have been coming up a lot in libraryland recently. I would argue that people have been talking about this for years1 2 3 4, but this year we saw a confluence of events drive the “neutrality of libraries” topic to the fore. To be clear, I have a position on this topic5: libraries cannot be neutral players and still claim to be a part of the society they serve. But this post is about what we assume to be neutral, what we bring forward with those assumptions, and how we react when those assumptions are challenged. When we challenge ideas that have been built into systems, either as “benevolent, neutral” librarians or “pure logic, neutral” algorithms, what part of ourselves are we challenging? How do reactions change based on who is doing the challenging? Be forewarned, this is a convoluted landscape.

At the 2018 ALA Midwinter conference, the ALA President’s program was a debate about neutrality. I will not summarize that event (see here), but I do want to call attention to something that became very clear in the course of the program: everyone was using a different definition of neutrality. People spoke with assumptions about what neutrality means and why they do, or do not, believe it is important for libraries to maintain. But what are we assuming when we make these assumptions? Without an agreed-upon definition, some referred to legal rulings to define neutrality; some used a dictionary definition (“not aligned with a political or ideological grouping” – Merriam-Webster) without probing how political or ideological perspectives play out in real life. But why do we assume libraries should be neutral? What safety or security does that assumption carry? What else are we assuming should be neutral? Software? Analytics? What value judgements are we bringing forward with those assumptions?

An assumption of neutrality often comes with a transference of trust. A speaker at ALA even said that the three professions thought of as the most trustworthy (via a national poll) are firefighting, nursing, and librarianship, and so, by his logic, we must be neutral. Perhaps some do not conflate trust and neutrality, but when we do assume neutrality equates with trust in these situations, we remove the human aspect from the equation. Nurses and librarians, as people, are not neutral. People hold biases and a variety of lived experiences that shape perspectives and approaches. If you engage this line of thought and interrogate your assumptions and beliefs, it can become apparent that it takes effort to recognize and mitigate our human biases throughout the various interactions in our lives.

What of our technology? Systems and software are often put forward as logic-driven, neutral devices, considered apart from their human creators. The position of some people is that machines lack emotions and are, therefore, immune to our human biases and prejudices. This position is inaccurate and dangerous and requires our attention. Algorithms and analytics are not neutral. They are designed by people, who carry forward their own notions of what is true and what is neutral. These ideas are built into the structure of the systems and have the potential to influence our perception of reality. As we rely on “data-driven decision-making” across all aspects of our society — education, healthcare, entertainment, policy — we transfer trust and power to that data. All too often, we do that without scrutinizing the sources of the data, or the algorithms acting upon them. Moreover, as we push further into machine learning systems – systems that are trained on data to look for patterns and optimize processes – we open the door for those systems to amplify biases. To “learn” our systemic prejudices and inequities.

People far more expert in this domain than I am have raised these questions and researched the effects that biased systems can have on our society6 7 8. I often bring these issues up when I want to emphasize how problematic it is to let the assumption of data-driven outcomes as “truth” persist and how critical it is to apply information literacy practices to data. But as I thought about this issue and read more from these experts, I have been struck by the variety of responses that these experts elicit. How do reactions change based on who is doing the challenging?

Angela Galvan questioned assumptions related to hiring, performance, and belonging in librarianship, based on the foundation of the profession’s “whiteness,” and was met with hostile comments on the post9. Nicole A. Cooke wrote about implicit assumptions when we write about tolerance and diversity and has been met with hostile comments10 while her micro-aggression research has been highlighted by Campus Reform11, which led to a series of hostile communications to her. Chris Bourg’s keynote about diversity and technology at Code4Lib was met with hostility12. Safiya Noble wrote a book about bias in algorithms and technology, which resulted in one of the more spectacular Twitter disasters13 14, wherein someone found it acceptable to dismiss her research without even reading the book.

Assumptions of neutrality, whether related to library services, spaces, collections, or the people doing the work, allow oppressive systems to persist and contribute to a climate where the perspectives and expertise of marginalized people in particular can be dismissed. Insisting that we promulgate the library and technology – and the people working in and with them – as neutral actors erases the realities that these women (and countless others) have experienced. Moreover, it allows those operating with harmful and discriminatory assumptions to believe that they *are* neutral, by virtue of working in those spaces, and that their truth is an objective truth. It limits the desire for dialog, discourse, and growth – because who is really motivated to listen when you think you are operating from a place of “Truth”…when you feel that the strength of your assumptions can invalidate a person’s life?

Introducing Omeka S

My library has used Omeka as part of our suite of platforms for creating digital collections and exhibits for many years now. It’s easy to administer and use, and many of our students, particularly in history or digital humanities, learn how to create exhibits with it in class or have experience with it from other institutions, which makes it a good solution for student projects. This creates challenges, however, since it’s been difficult to have multiple sites or distributed administration. A common scenario is that we have a volunteer student, often in history, working on a digital exhibit as part of a practicum, and we want the donor to review the exhibit before it goes live. We had to create administrative accounts for both the student and the donor, which required a lot of explanation about how to get into just the one part of the system they were supposed to be in (it’s possible to create a special account to view collections that aren’t public, but not exhibits). Even though the admin accounts can’t do everything (there’s a super admin level for that), it’s a bit alarming to hand out administrative accounts to people I barely know.

This problem goes away with Omeka S, which is the new and completely rebuilt Omeka. It supports having multiple sites (which is the new name for exhibits) and distributed administration by site. Along with this, there are sophisticated metadata templates that you can assign to sites or users, which take away the need for lots of documentation on what metadata to use for which item type. When I showed a member of my library’s technical services department the metadata templates in Omeka S, she gasped with excitement. This should indicate that, at least for those of us working on the back end, this is a fun system to use.

Trying it Out For Yourself

I have included some screenshots below, but you might want to use the Omeka S Sandbox to follow along. You can experiment with anything, and the data is reset every Monday, Wednesday, Friday, and Sunday. The sandbox includes a variety of sample exhibits, one of which, “A Battered Tin Dispatch Box”, is the source of some of the screenshots below.

A Quick Tour Through Omeka S

This is what the Omeka Classic administrative dashboard looks like for a super administrator.

Omeka Classic administrative interface

And this is the dashboard for Omeka S. It’s not all that different functionally, but definitely a different aesthetic experience.

Omeka S administrative interface

Most things in Omeka S work analogously to classic Omeka, but some things have been renamed or moved around. The documentation walks through everything in order, so it’s a great place to start learning. Overall, my feeling about Omeka S is that it’s much easier to tap into the powerful features with less of a learning curve. I first learned Omeka S at the DLF Forum conference in fall 2017 directly from Patrick Murray-John, the Omeka Development Team Manager, and some of what is below is from his description.

Sites

Sites inset

Omeka S has the very useful concept of Sites, which again function like exhibits in classic Omeka. Each site has its own set of administrative functions and user permissions, which allow for viewer, editor, or admin roles by site. I really appreciate this, since it allows me to give student volunteers access to just the site they need, and when we need to give other people access to view a site before it’s published, we can do that. It’s easier to add outside or supplementary materials to the exhibit navigation. On the individual pages there are a variety of blocks available, and the layout is easier for people without a lot of HTML skills to set up.

Resource Templates

These existed in Omeka Classic, but were less straightforward. Now you can set a resource template with properties from multiple vocabularies and build the documentation right into the template. The data type can be text or URI, or draw from vocabularies with autosuggest. For example, you can set the Rights field to draw from Rights Statement options.

Items

Items work in a similar fashion to Omeka Classic. Items exist at the installation level, so they can be reused across multiple sites. What’s great is that the nature of an item can be much more flexible. They can include URIs, maps, and multiple types of media such as a URL, HTML, IIIF image, oEmbed, or YouTube. This reflects the actual way that we were using Omeka Classic, but without the technical overhead to make it all work. This will make it easier for more people to create much more interactive and web-integrated exhibits.
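For those curious about the back end, here is a minimal sketch of creating an item through the Omeka S REST API, as I understand it from the Omeka S developer documentation. The URL, API keys, and property IDs are placeholders (property IDs vary by installation), so treat this as an assumption-laden illustration rather than copy-paste code:

```python
# Minimal sketch of adding an item via the Omeka S REST API.
# URL and keys are placeholders; property IDs vary by installation
# (dcterms:title is commonly property_id 1 in a default install).
import requests

ENDPOINT = "https://omeka.example.edu/api/items"
PARAMS = {"key_identity": "KEY_IDENTITY", "key_credential": "KEY_CREDENTIAL"}

item = {
    "dcterms:title": [
        {"type": "literal", "property_id": 1, "@value": "Battered Tin Dispatch Box"}
    ],
    # Items can carry URI values as well as text, e.g. a rights statement:
    "dcterms:rights": [
        {
            "type": "uri",
            "property_id": 15,  # placeholder ID for dcterms:rights
            "@id": "http://rightsstatements.org/vocab/InC-EDU/1.0/",
        }
    ],
}

resp = requests.post(ENDPOINT, params=PARAMS, json=item)
resp.raise_for_status()
print(resp.json()["o:id"])  # the new item's internal ID
```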

Item Sets

Item Sets are the new name given to Collections and, like Items, they can have metadata from multiple vocabularies. Unlike Collections, though, an item can belong to multiple Item Sets, and Item Sets can be associated with sites to limit what people see. The tools for batch adding and editing are similar, but more powerful, because you can actually remove or edit metadata in bulk.

Themes

Themes in Omeka S have changed quite a bit, and as Murray-John explained, it is more complicated to do theming than in the past. Rather than calling local functions, Omeka S uses patterns from Zend Framework 3, and so the process of theming will require more careful thought and planning. That said, the themes provided are a great base, and thanks to the multiple options for layouts in sites, it’s less critical to be able to create custom themes for certain exhibits. I wrote about how to create themes in Omeka in 2013, and while some of that still holds true, you would want to consult the updated documentation to see how to do this in Omeka S.

Mapping

One of my favorite things in Omeka S is the Mapping module, which allows you to add geolocation metadata to items, and create a map on site pages. Here’s an example from the Omeka S Sandbox with locations related to Scotland Yard mapped for an item in the Battered Tin Dispatch Box exhibit.

Map interface for items

This can then turn into an interactive map on the front end.

Map interface for exhibits

For the vast majority of mapping projects that our students want to do, this works in a very straightforward manner. Neatline is a plugin for Omeka Classic that allows much more sophisticated mapping and timelines; while it should eventually be ported over to Omeka S, it currently is not listed as a module. In my experience, however, Neatline is more powerful than what many people need, and that added complexity can be a challenge. So I think the Mapping module looks like a great compromise.

Possible Approaches to Migration

Migration between Omeka Classic and Omeka S works well for items. For that, there’s the Omeka2 Importer module. Because exhibits work differently, they would have to be recreated. Omeka.net, the hosted version of Omeka, will stay on Omeka Classic for the foreseeable future, so there’s no concern that it will stop being supported any time soon, according to Patrick Murray-John.

Conclusion

We are still working on setting up Omeka S. My personal approach is that as new ideas for exhibits come up, we will start them in Omeka S. As we have time and interest, we may start to migrate older exhibits if they need continual management; because some of our older exhibits rely on Omeka Classic plugins, we are planning to mostly create new exhibits that don’t depend on those plugins. I am excited to pair this with our other digital collection platforms to build exhibits that use content across our platforms and extend into the wider web.


Reflections on Code4Lib 2018

A few members of Tech Connect attended the recent Code4Lib 2018 conference in Washington, DC. If you missed it, the full livestream of the conference is on the Code4Lib YouTube channel. We wanted to highlight some of our favorite talks and tie them into the work we’re doing.

Also, it’s worth pointing to the Code4Lib community’s Statement in Support of opening keynote speaker Chris Bourg. Chris offered some hard truths in her speech that angry men on the internet, predictably, were unhappy about, and it’s a great model that the conference organizers and attendees promptly stood in support of her.


Ashley:

One of my favorite talks at Code4lib this year was Amy Wickner’s talk, “Web Archiving and You / Web Archiving and Us.” (Video, slides) I felt this talk really captured some of the essence of what I love most about Code4lib, this being my 4th conference in the past 5 years. (And I believe this was Amy’s first!) The talk took a technical topic relevant to collecting libraries and handled it in a way that acknowledges and prioritizes the essential personal component of any technical endeavor. This is what I found so wonderful about Amy’s talk, and this is what I find so refreshing about Code4lib as an inherently technical conference with intentionality behind the human aspects of it.

Web archiving seems to be of interest to many but overwhelming to begin to tackle. I mean, the internet is just so big. Amy brought forth a proposal for ways in which a person or institution can begin thinking about how to start a web archiving project, focusing first on the significance of appraisal. Wickner, citing Terry Cook, spoke of the “care and feeding of archives” and of thinking about appraisal as storytelling. I think this is a great way to make a big internet seem smaller: understanding the importance of care in appraisal while acknowledging that, for web archiving, it is an essential practice. Representation in web archives is more a matter of deliberate choice in the appraisal of web materials than it historically was for other formats.

This statement resonated with me: “Much of the power that archivists wield are in how we describe or create metadata that tells a story of a collection and its subjects.”

And also: For web archives, “the narrative of how they are built is closely tied to the stories they tell and how they represent the world.”

Wickner went on to discuss how web archives are and will be used, and who they will be used by, giving some examples but emphasizing there are many more, and noting that we must learn to “critically read as much as learn to critically build” web archives, while acknowledging that web archives exist both within and outside of institutions. For personal archiving, it can be as simple as replacing links in documents with perma.cc, Wayback Machine, or WebRecorder links.
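That kind of link replacement is easy to script. As a minimal sketch, the Internet Archive’s public Wayback Machine availability API can be queried for the closest archived snapshot of a URL (the endpoint is as publicly documented; error handling is omitted):

```python
# Minimal sketch: look up an archived snapshot of a URL via the
# Wayback Machine availability API, falling back to the live link.
import requests

def archived_url(url: str) -> str:
    resp = requests.get(
        "https://archive.org/wayback/available", params={"url": url}, timeout=10
    )
    snapshot = resp.json().get("archived_snapshots", {}).get("closest")
    return snapshot["url"] if snapshot else url  # keep the live link if none

print(archived_url("http://example.com/"))
```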

Another topic I enjoyed in this talk was the celebration of precarious web content through community storytelling on Twitter with the hashtags #VinesWithoutVines and #GifHistory, two brief but joyous moments.


Bohyun:

The part of this year’s Code4Lib conference that I found most interesting was the talks and the discussion at a breakout session related to machine learning and deep learning. Machine learning is a subfield of artificial intelligence, and deep learning is a kind of machine learning that utilizes hidden layers between the input layer and the output layer in order to refine and produce the model that best maps the input data to the desired output. Once such a model is produced from the data in the training set, it can be applied to a new set of data to predict results. Deep learning has been making waves in many fields such as Go playing, autonomous driving, and radiology, to name a few. There were a few different talks on this topic, ranging from reference chat sentiment analysis to feature detection (such as railroads) in map data using a convolutional neural network model.

“Deep Learning for Libraries,” presented by Lauren Di Monte and Nilesh Patil from the University of Rochester, was the most practical one among those talks, as it started with a specific problem to solve and resulted in action that will address the problem. In their talk, Di Monte and Patil showed how they applied deep learning techniques to their library’s space assessment. The problem they wanted to solve was to find out how many people visit the library to use its space and services and how many are simply passing through to get to another building or to the campus bus stop adjacent to the library. Not knowing this made it difficult for the library to decide on the appropriate staffing level or the hours that best serve users’ needs. It also prevented the library from demonstrating its reach and impact with data and advocating for needed resources or budget to decision-makers on campus. The goal of their project was to develop automated and scalable methods for conducting space assessment and reporting tools that support decision-making for operations, service design, and service delivery.

For this project, they chose an area bounded by four smart control access gates on the first floor. They obtained the log files (with data at the sensor level, minute by minute) from the eight bi-directional sensors on those gates. They analyzed the data in order to create a recurrent neural network model, then trained the model so that they could predict the future incoming and outgoing traffic in that area and visually present those findings as a data dashboard application. For data preparation, processing, and modeling, they used Python. The tools used included Seaborn, Matplotlib, Pandas, NumPy, SciPy, TensorFlow, and Keras. They picked a recurrent neural network with stochastic gradient descent optimization, which is less complex than a time series model. For data visualization, they used Tableau. The project code is available at the library’s GitHub repo: https://github.com/URRCL/predicting_visitors.
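To give a flavor of this approach, here is a minimal sketch of a recurrent network for forecasting gate counts. This is not the Rochester team’s code (theirs is in the repo above); the window size and the synthetic data are placeholders standing in for their per-minute sensor counts:

```python
# A minimal sketch (not the Rochester team's actual code) of a recurrent
# model that forecasts gate traffic from a sliding window of past counts.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense

WINDOW = 60  # minutes of sensor history used to predict the next minute

# Synthetic stand-in data: X has shape (samples, timesteps, features),
# y is the next minute's count for each window.
X = np.random.rand(1000, WINDOW, 1)
y = np.random.rand(1000, 1)

model = Sequential([
    SimpleRNN(32, input_shape=(WINDOW, 1)),
    Dense(1),
])
# Stochastic gradient descent optimization, as in the talk.
model.compile(optimizer="sgd", loss="mse")
model.fit(X, y, epochs=5, batch_size=32)

# Predict the next minute's traffic from one recent window of counts.
next_count = model.predict(X[-1:])
```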

Their project result led the library to install six more gates in order to get a better overview of the library space usage. As a side benefit, the library was also able to pinpoint the times when the gates malfunctioned and communicate the issue to the gate vendor. Di Monte and Patil plan to hand over this project to the library’s assessment team for ongoing monitoring and, as the next step, to look for ways to map the library’s traffic flow across multiple buildings.

Overall, there was a lot of interest in machine learning, deep learning, and artificial intelligence at the Code4Lib conference this year. The breakout session I led at the conference on these topics produced a lively discussion on a variety of tools, current and future projects at many different libraries, and the impact of rapidly developing AI technologies on society. This breakout session also generated the #ai-dl-ml channel in the Code4Lib Slack space. The growing interest in these areas is also shown in the newly formed Machine and Deep Learning Research Interest Group of the Library and Information Technology Association. I hope to see more talks and discussion on these topics at future Code4Lib and other library technology conferences.


Eric:

One of the talks which struck me the most this year was Matthew Reidsma’s Auditing Algorithms. He used examples of search suggestions in the Summon discovery layer to show biased and inaccurate results:

In 2015 my colleague Jeffrey Daniels showed me the Summon search results for his go-to search: “Stress in the workplace.” Jeff likes this search because ‘stress’ is a common engineering term as well as one common to psychology and the social sciences. The search demonstrates how well a system handles word proximities, and in this regard, Summon did well. There are no apparent results for evaluating bridge design. But Summon’s Topic Explorer, the right-hand sidebar that provides contextual information about the topic you are searching for, had an issue. It suggested that Jeff’s search for “stress in the workplace” was really a search about women in the workforce. Implying that stress at work was caused, perhaps, by women.

This sort of work is not, for me, novel or groundbreaking. Rather, it was so important to hear because of its relation to similar issues I’ve been reading about since library school. From the bias present in Library of Congress subject headings where “Homosexuality” used to be filed under “Sexual deviance”, to Safiya Noble’s work on the algorithmic bias of major search engines like Google where her queries for the term “black girls” yielded pornographic results; our systems are not neutral but reify the existing power relations of our society. They reflect the dominant, oppressive forces that constructed them. I contrast LC subject headings and Google search suggestions intentionally; this problem is as old as the organization of information itself. Whether we use hierarchical, browsable classifications developed by experts or estimated proximities generated by an AI with massive amounts of user data at its disposal, there will be oppressive misrepresentations if we don’t work to prevent them.

Reidsma’s work engaged with algorithmic bias in a way that I found relatable since I manage a discovery layer. The talk made me want to immediately implement his recording script in our instance so I can start looking for and reporting problematic results. It also touched on some of what makes me despair in library work lately—our reliance on vendors and their proprietary black boxes. We’ve had a number of issues lately related to full-text linking that are confusing for end users and make me feel powerless. I submit support ticket after support ticket only to be told there’s no timeline for the fix.

On a happier note, there were many other talks at Code4Lib that I enjoyed and admired: Chris Bourg gave a rousing opening keynote featuring a rallying cry against mansplaining; Andreas Orphanides, who keynoted last year’s conference, gave yet another great talk on design and systems theory full of illuminating examples; Jason Thomale’s introduction to Pycallnumber wowed me and gave me a new tool I immediately planned to use; Becky Yoose navigated the tricky balance between using data to improve services and upholding our duty to protect patron privacy. I fear I’ve not mentioned many more excellent talks but I don’t want to ramble any further. Suffice to say, I always find Code4Lib worthwhile and this year was no exception.

Creating a Privacy Policy from the Ground Up

Privacy policies are easy to ignore, but when done right, creating one can be a positive experience. In early 2017, several of the staff members at my library started having informal conversations about privacy in our digital platforms, largely a result of the release of the ALA Library Privacy Checklists. After several months of talking, it became clear that we needed to get a formal group together to create a privacy policy. This would ensure that we were having conversations with everyone in the library, including our patrons, about privacy. This led to the creation of a Patron Privacy Task Force, which just wrapped up about six months of work. I co-chaired the group with the Head of Reference Services at one of the libraries, and we had representatives from all library departments. The final result was just as we had hoped: a thorough and open process producing a clear and accurate policy we have provided to our patrons.

Because many libraries are working on similar projects, I wanted to describe our process and the lessons learned. While ultimately I am pleased with what we produced, I had to restrict the scope due to time and interest from the group, and there are some important takeaways from that.

Planning

When we started the project, construction in the library and changes in staffing were disrupting normal functions. We knew it was important to restrict the project to the time necessary to complete finite deliverables. Rather than creating a new committee, we felt it would be helpful to focus effort on learning about privacy and bringing that knowledge back to departments and standing committees as an embedded value. We had a member from every department across the libraries (which ended up being ten people) and intentionally included a mix of department heads, librarians, and paraprofessional staff. Varying perspectives across departments and functions helped create good discussions and ensured that we would be less likely to miss something important.

Nevertheless, such a large group can be unwieldy, and adding yet another set of meetings can be a challenge for already overburdened schedules. For that reason, my co-chair and I spent a lot of time preplanning all the meetings and creating a specific project plan that was flexible enough to adapt to our needs, which we needed to do in the end. Our plan had our work starting in early August and ending in December, with the goal of having: 1) a complete policy, 2) internal best practices documentation, and 3) an outreach plan. As it turned out, we did not complete all the internal documentation, but the policy and outreach plan were complete on time, and the policy went out to the public at the same time that we reported on the work of the committee to all library staff in mid-January 2018.

One of the most useful aspects of our project was treating it as a professional development opportunity, with a number of reading assignments to complete before the kickoff meeting and throughout the work period (I have included some of the resources we used below). We also made sure to return from time to time to the theoretical, or guiding, principles of our work when we felt too bogged down in minutiae. The plan ended up starting with research, followed by reading, writing, and practical research, more theory, and a final push to complete the draft of the policy and work on documentation.

Conducting the Privacy Audit

After spending some time talking through the project and figuring out some mechanics, we moved into the privacy audit stage. This requires examining every system and practice the library uses in a systematic manner and determining whether each falls in line with best practices. The ALA Privacy Checklists help with the latter part, but we also relied on Karen Coyle’s Library Privacy Audit spreadsheets. The first step was to brainstorm all the systems we used in our daily work and how we used them, and then divide those up by department. We mapped the systems we used into the spreadsheets, with some additional systems added. As a result, some systems were reviewed by multiple departments that use them in different ways, while systems unique to a department were reviewed just by that one. We then used the checklists to verify that we had covered the essentials in our audits and to raise additional issues that the spreadsheets did not cover.

This was not always the most straightforward process for people unused to looking carefully at systems, but for that very reason it was useful. Dividing the work up between departments meant that everyone in the library had a better chance to learn how their work affected patron privacy and to ask questions about the processes of other departments as patron information moves across the library. For example, when a patron requests that the library purchase a book, this is recorded in one system and leaves a trail through email as it goes between systems. After the request is placed, that information stays in various systems to ensure the patron gets the book after it arrives. As public and technical services talked through that process, it was easier to identify which pieces of it were important to good service and which created informational residue.

Compiling the audit results into a useful format was a challenge, and this is an area of this project that did not meet my initial hopes. My original plan was to create a flexible best practices manual that would record all the results of the audit and how closely they met the standards set by the checklists. In practice, that was way too complicated, and we ended up just focusing on the “Priority 1” actions, which are those that any library can meet no matter their technical abilities. In fact, many of our practices are much better than that, but breaking the work down into smaller steps was a much more feasible approach. Ultimately, the co-chairs took the research done by all the task force members and created a list of practices for each checklist that indicated where we met best practices and where we needed to do more work. We asked all departments to complete the project identified for each checklist by one year out, and to consider including “Priority 2” level projects in departmental goals for the following fiscal year.

Writing the Policy

The process for writing the policy itself was different from the audit, in that the first draft came entirely from the co-chairs and then went out to the group for editing. This was to create a unified tone and help identify all the gaps in knowledge that the task force members could fill with their research. Writing the policy started with the ALA Privacy Policy Creation Toolkit, and in particular the “Sections to Include in a Privacy Policy.” I literally copied those sections into a blank document to start the writing process, though some of them were renamed or reorganized in the final version.

After writing the rough draft, I listed all the sections where I was missing important information and relied on the task force to fill in those sections. This was a fascinating process as we tried to explain technical processes that each of us understood individually in a way the whole group could understand and explain to a patron. Explaining the way the scanners could accidentally store a patron’s email address was an example of something that took multiple attempts to get right in the policy. The difficulty I had in writing was useful in itself, however. Each time I felt embarrassed or confused describing one of our practices, it told me that the practice needed to change. I hope when we go back to revise the policy, the difficult sections will be easier to write because the practices will be better.

Outreach and Next Steps

One of the important privacy tasks in the checklists is the need for education and outreach to staff and patrons. The process of writing the policy in the task force took care of a lot of staff education, but this will need to be ongoing. For that reason, we recommended that the task force reconvene to check on the progress of privacy improvements 9–10 months after the adoption of the policy, but not necessarily with the exact same members. As we work through fixing our practices, this will be a great opportunity to have additional conversations with library staff and include more detail.

No matter how many checklists or guidelines we consult, we will not be able to cover all scenarios. For that reason, we asked people to keep the following guiding principles in mind when making decisions about data collection that could affect patron privacy (a small sketch of what applying them might look like follows the list).

  • Is it necessary to collect this information?
  • Could I tell who an individual was even if there was no name attached?
  • If I need to collect this information, what is the data I can remove to obscure personal information?

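As an illustration of the third principle, here is a minimal sketch of scrubbing a circulation log so that usage statistics survive but personal information does not. The column names are invented for the example:

```python
# Minimal sketch of applying the guiding principles to a circulation log:
# keep what's needed for statistics, remove or obscure what identifies a
# patron. Column names are invented for illustration.
import csv
import hashlib

def scrub_row(row: dict) -> dict:
    return {
        # Drop direct identifiers (name, email, barcode) entirely.
        # Hash the patron ID so repeat use can be counted without
        # revealing who the patron is (add a secret salt in practice).
        "patron": hashlib.sha256(row["patron_id"].encode()).hexdigest()[:12],
        # Truncate timestamps to the day; exact times can identify people.
        "date": row["checkout_time"][:10],
        "item_type": row["item_type"],  # enough granularity for stats
    }

with open("circulation_log.csv") as raw, \
     open("scrubbed.csv", "w", newline="") as out:
    reader = csv.DictReader(raw)
    writer = csv.DictWriter(out, fieldnames=["patron", "date", "item_type"])
    writer.writeheader()
    for row in reader:
        writer.writerow(scrub_row(row))
```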
To tell patrons about the policy, we wrote a blog post and posted it on the library website. Obviously, this will not reach everyone, but it will at least catch our most active users–we know from usability testing that people do look at our blog post headlines! Meanwhile, a set of recommended outreach practices included creating guides for how to turn on privacy features in browsers, especially for specific vendor platforms with potentially problematic practices, partnering with our campus IT department on information security awareness, and presenting on privacy issues in research and teaching at a faculty professional development event.

Conclusion

As someone who enjoys writing policies and looking for ways to improve processes, I will always find this kind of project appealing. However, many members of the task force told me that it was a useful exercise for improving their own knowledge and keeping up to date with how the privacy conversation has changed even in the last few years. Because this topic shifts constantly, it will require active management to keep our policy accurate and our practices in line with changes in technology. The good news is that this was a grassroots effort that can be started up again with relatively little effort as long as someone cares to do so, which I suspect will now always be the case at my library.

Selected Resources the Task Force Used

Other library policies


Net Neutrality Roundup: Alternate Internets

Now that we are facing net neutrality regulation rollbacks here in the United States, what new roles could librarians play in the continued struggle to provide people with unrestricted access to information?  ALA has long been dedicated to equal access to information, as clearly outlined in both the Core Values and Code of Ethics. You can read ALA’s Joint Letter to the FCC here. It emphasizes that “a non-neutral net, in which commercial providers can pay for enhanced transmission that libraries and higher education cannot afford, endangers our institutions’ ability to meet our educational mission.”

Net neutrality was discussed back in 2014 on this blog, in Margaret Heller’s post “What Should Academic Librarians Know about Net Neutrality?” We recommend you start there for some background on the legal issues around net neutrality. It includes a fun trip into the physical spaces our content traverses to get onto our screens. One of the conclusions of that post was that libraries need to work on ensuring that everyone has access to broadband networks to begin with, and that more varied access ensures that no company has a monopoly over internet service in a location. There have been a number of projects along these lines over the past decade and more, and we encourage you to find one in your area and get involved.

Library-based initiatives

Equal access to information starts with having access at all. Several libraries have kicked off initiatives like loaning out wi-fi hotspots for several-month periods in New York City, Brooklyn, and Chicago.

Ideally, everyone will have secure and private internet access. The Library Freedom Project has been working for years to protect the privacy of patrons, including educating librarians about the threat of surveillance in modern digital technology, working with the Tor Project to configure Tor exit relays in library systems, and creating educational resources for teaching patrons about privacy.

These are some excellent steps towards more democratic and equal access to information, but what happens if the internet as we know it fundamentally changes? Let’s explore some “alternative internets” that rely on municipal and/or grassroots solutions.

Mesh networks

You might be familiar with wireless mesh networks for home use. You can set up a wireless mesh network in your own house to ensure even coverage: since each node covers a certain part of the house, you don’t have to rely on being close to the wireless router to connect. You can also change the network around easily as your needs change.

Mesh networks are dynamically routed networks in which nodes exchange routes, internet access, local networks, and neighbor information. They can be wireless or wired. A deployment may not be purely a “mesh” but rather a combination of “mesh network” technology and “point to point” links, with connections directly linking sites to each other, and each of these connections expanding out to its local mesh network. BMX6/BMX7, BATMAN, and Babel are some of the most popular network protocols (with highly memorable names!) for achieving a broad mesh network, but there are many more. Just as you can install devices in your home, you can cooperate with others in your community or region to create your own network. The LibreMesh project is an example of the way DIY wireless networks are being created in several European countries.

Municipal networks

Nineteen towns in Colorado are exploring alternatives like publicly owned internet service. Chattanooga offers public gigabit internet speeds. This has some major advantages for the city, including the ability to offer free internet access to low-income residents and to ensure that anyone who pays for access gets the same level of access, which is not the case in most cities, where some areas pay a high cost for a low signal. Even just the presence and availability of municipal broadband “has radically altered the way local politicians and many ordinary Chattanoogans conceive of the Internet. They have come to think of it as a right rather than a luxury.”1 A similar initiative in Roanoke is the Roanoke Valley Broadband Authority, which in an interesting twist lobbied the Virginia legislature to reduce oversight of its activities in a bill that originally specifically stated that broadband services should focus on underserved areas–a reminder that in many ways municipalities view this as an investment in business rather than a social justice issue.2 In Detroit, the Detroit Community Technology Project is working to set up and bring community wireless to neighborhoods in Detroit. New York City’s Red Hook neighborhood relied on its mesh network during Hurricane Sandy to stay connected to the world outside New York. New York City also has the rapidly growing NYC Mesh community, with two supernodes and another coming later this year, uniting lower Manhattan with Northern and Central Brooklyn. Toronto also has an emerging mesh community with a handful of connected nodes. The Urbana-Champaign Independent Media Center developed CUWiN, which provided open wireless networks in “Champaign-Urbana, Homer, Illinois, tribal lands of the Mesa Grande Reservation, and the townships of South Africa”.3

Outside of North America, Berlin has its own mesh network, called Freifunk. Austria has Funkfeuer. Greece has the Athens Wireless Metropolitan Network. Italy has Ninux, and Argentina has AlterMundi. Villages in rural northern England are joining together to get connected via a cooperative model called B4RN, where they dig their own trenches for cables using their farm tractors.

Thinking big

Guifi.net is a wi-fi network that covers a large part of Spain and defines itself as “the biggest free, open and neutral network.” It was developed in 2004 as a response to the lack of broadband in rural areas of the Catalonia region, where commercial providers offered either no connection or a very poor one. Guifi has established a Wireless Commons License with guidelines that can be adopted by other networks. At the time of posting, 34,306 nodes were active, with over 17,000 planned.

Finally, Brooklyn Public Library was granted $50,000 from IMLS to develop a mesh network called BKLYN Link, along with a technology fellowship program for 18-24 year olds. We look forward to what emerges from this initiative!

Conclusion

The internet was started when college campuses connected to each other, first across short geographic distances and eventually across much longer ones. Could we see academic and public libraries working together and leading a return to old ways of accessing the internet for a new era?

Meanwhile, it’s important to ensure that the FCC has appropriate regulatory powers over ISPs, otherwise we have no recourse if companies choose to prioritize packets. You should contact your legislators and make sure that the people at your campus who work with the government are sharing their perspectives as well. You can get some help with a letter to Congress from ALA.

Taking Diversity to the Next Level

“Building Bridges in a Divisive Climate: Diversity in Libraries, Archives, and Museums,” panel discussion program held at the University of Rhode Island Libraries on Thursday November 30, 2017.

Getting Minorities on Board

I recently moderated a panel discussion program titled “Building Bridges in a Divisive Climate: Diversity in Libraries, Archives, and Museums.”1 Participating in organizing this program was an interesting experience. Throughout, my perspective constantly shifted back and forth among those of (i) someone who is a woman of color in the US who experiences and deals with small and large daily acts of discrimination, (ii) an organizer/moderator trying to get as many people as possible to attend and participate, and (iii) a mid-career librarian trying to contribute to the group’s efforts to find a way to move the diversity agenda forward in a positive and inclusive way in my own institution.

In the past, I have participated in multiple diversity-themed programs either as a member of the organizing committee or as an attendee and have been excited to see colleagues organize and run such programs. But when asked to write or speak about diversity myself, I always hesitated and declined. This puzzled me for a long time because I couldn’t quite pinpoint where my own resistance was coming from. I am writing about this now because I think it may shed some light on why it is often difficult to get minorities on board with diversity-related efforts.

A common issue that many organizers experience is that these diversity programs often draw many allies who are already interested in working on issues of diversity, equity, and inclusion, but not necessarily a lot of those whom the organizers consider the target audience, namely, minorities. What might be the reason? Perhaps I can find a clue in my own resistance to speaking or writing about diversity, preferring rather to be in the audience at a certain distance, or to be an organizer helping with logistics behind the scenes.

To be honest, I always harbored a level of suspicion about how much of the sudden interest in diversity is real and how much of it is simply about being on the next hot trend. Trends come and go, but issues lived through the many lives of those who belong to various systematically disadvantaged and marginalized groups are not trends. Although I have always been enthusiastic about participating in diversity-focused programs as an attendee and was happy to see diversity, equity, and inclusion discussed in articles and talks, I wasn’t ready to sell out my lived experience as part of a hot trend, a potential fad.

To be clear, I am not saying that any of the diversity-related programs or events were asking speakers or authors to sell out. I am only describing how things felt to me and where my own resistance was originating. I have been and am happy to see diversity discussed, even as a one-time fad. Better a fad than no discussion at all.

One may argue that diversity has been actively discussed for quite some time now. A few years, maybe several, or even more. Some of the prominent efforts to increase diversity in librarianship that I know of, for example, go as far back as 2007, when Oregon State University Libraries sponsored two scholarships to the Code4Lib conference, one for women and the other for minorities, which have continued from then on as the Code4Lib Diversity Scholarship.2 But if one has lived one’s entire life as a member of a systematically disadvantaged group, whether as a woman, a person of color, a person of a certain sexual orientation, a person of a certain faith, a person with a certain disability, etc., one knows better than to expect some sudden interest in diversity to change the world we live in, and most of the people in it, overnight.

I admit I have been watching the diversity discussion gaining more and more traction in librarianship with growing excitement and concern at the same time. For I felt that all that is being achieved through so many people’s efforts may get wiped out at any moment. The more momentum it accrues, I worried, the more serious a backlash it may come to face. For example, it was openly stated that seeking racial/ethnic diversity is superficial and for appearance’s sake, and that those who appear to belong to “Team Diversity” do not work as hard as those in “Team Mainstream.” People make this type of statement in order to create and strengthen a negative association between multiple dimensions of diversity that are all non-normative (such as race/ethnicity, religion, sexual orientation, immigration status, disability) and unfavorable value judgements (such as inferior intellectual capacity or poor work ethic).3 According to this kind of flawed reasoning, a tech company whose entire staff consists of twenty-something white male programmers with a college degree may well have achieved a high level of diversity, because the staff might have potentially (no matter how unlikely) substantial intellectual and personal differences in their thinking, background, and experience, and therefore their clear homogeneity is no real problem. That’s just a matter of trivial “appearance.” The motivation behind this kind of intentional misdirection is to derail current efforts towards expanding diversity, equity, and inclusion by taking people’s attention away from the real issue of systematic marginalization in our society. Of course, the ultimate goal of all diversity efforts should be not the mere inclusion of minorities but enabling them to have agency equal to that which the privileged already possess. But objections are being raised against even mere inclusion. Anti-diversity sentiment is real, and people will try to rationalize it in any way they can.

Then of course, the other source of my inner resistance to speaking or writing about diversity has been the simple fact that thinking about diversity, equity, and inclusion does not take me to a happy place. It reminds me of many bad experiences accumulated over time that I would rather not revisit. This is why I admire those who have spoken and written about their lived experience as a member of a systematically discriminated and marginalized group. Their contribution is a remarkably selfless one.

I don’t have a clear answer to how this reflection on my own resistance to actively speaking or writing about diversity will help future organizers. But clearly, being asked to join many times had an effect, since I finally did accept the invitation to moderate a panel and wrote this article. So, if you are serious about getting more minorities – whether of different religions, genders, disabilities, races, etc. – to speak or write on the issue, then invite them, and be ready to do it over and over again even if they decline. Don’t expect that they will trust you at the first invitation. Understand that by accepting such an invitation, minorities risk far more than non-minorities ever will. The survey I ran for the registrants of the “Building Bridges in a Divisive Climate: Diversity in Libraries, Archives, and Museums” panel discussion program showed several respondents citing, as a serious deterrent, concern about backlash at their workplaces that did or may result from participating in diversity efforts.4 If we would like to see more minorities participate in diversity efforts, we must create a safe space for everyone and take steps to deal with potential backlash that may ensue afterwards.5

A Gentle Intro or a Deep Dive?

Another issue that many organizers of diversity-focused events, programs, and initiatives struggle with is two conflicting sets of expectations from their audience. On one hand, there are those who are familiar with diversity, equity, and inclusion issues and want to see how institutions and individuals will take their initial efforts to the next level. These people often come from organizations that have already implemented pro-diversity measures, such as search advocates in the hiring process6 and educational programs that familiarize staff with the topic of diversity, equity, and inclusion.7 On the other hand, there are still many who are not quite sure what diversity, equity, and inclusion mean in a workplace or in their lives. These people would continue to benefit from a gentle introduction to things such as privilege, microaggression, and unconscious bias.

The feedback surveys collected after the “Building Bridges in a Divisive Climate: Diversity in Libraries, Archives, and Museums” panel discussion program showed these two different expectations. Some people responded that they deeply appreciated the personal stories shared by the panelists, noting that they had not realized how often minorities are marginalized even in a single day. Others, however, said they would like to hear more about actionable items and strategies, beyond personal stories, that can be implemented to further advance the values of diversity, equity, and inclusion. Balancing these two demands is a hard act for organizers. Yet the tension is a testament to our collective achievement: more and more people are aware of the importance of continuing efforts to improve diversity, equity, and inclusion in libraries, archives, and museums.

I do think that we need to continue to provide a general introduction to diversity-related issues, exposing people to the everyday experiences of marginalized groups, such as micro-invalidation and impostor syndrome, and to basic concepts like white privilege, systematic oppression, colonialism, and intersectionality. One of the comments we received via the feedback survey after our diversity panel discussion program was that the program was most relevant in that it made “having colleagues attend with me to hear what I myself have never told them” possible. General programs and events can be an excellent gateway to more open and less guarded discussion.

At the same time, it seems to be high time for us in libraries, museums, and archives to take a deep dive into different realms of diversity, equity, and inclusion as well. Diversity comes in many dimensions, such as age, disability, religion, sexual orientation, race/ethnicity, and socioeconomic status. Many of us feel more strongly about one issue than others. We should create opportunities for ourselves to advocate for the specific diversity issues we care about most.

The only thing I would emphasize is that one specific dimension of diversity should not be used as an excuse to neglect others. Exploring socioeconomic inequality, for example, without addressing how it combines with the systematic oppression of marginalized groups such as Native Americans, women, or immigrants would be such a case. All dimensions of diversity are closely knit together; they do not exist independently. For this reason, a deep dive into different realms of diversity, equity, and inclusion must be accompanied by a strong awareness of their intersectionality.8

Recommendations and Resources for Future Organizers

Organizing a diversity-focused program takes a lot of effort. While planning the “Building Bridges in a Divisive Climate: Diversity in Libraries, Archives, and Museums” panel discussion program at the University of Rhode Island Libraries, I worked for approximately two months with my library dean, Karim Boughida, who originally came up with the idea of holding a panel discussion program at the University of Rhode Island Libraries, and with Renee Neely of the libraries’ diversity initiatives. For panelists, we decided to recruit as many minorities from diverse institutions and backgrounds as we could. We were fortunate to find panelists from a museum, an archive, a public library, and an academic library, with experience in the field ranging from only a few years to over twenty-five years, from a relatively new archivist to an experienced museum director and a library director. Our panel consisted entirely of people of color. The thoughts and perspectives that the panelists shared were, as a result, remarkably diverse and insightful. For this reason, if your program will have speakers, I recommend spending the time to find the right ones.

Discussion at the “Building Bridges in a Divisive Climate: Diversity in Libraries, Archives, and Museums,” at the University of Rhode Island Libraries.

Another thing I would like to share is the set of questions I created for the panel discussion. Even though we had a whole hour, I was able to cover only a few of them. But since I discussed all of these questions in advance with the panelists, and they helped me put the final touches on some of them, I think the questions can be useful to future organizers who may want to run a similar program. They can be used for a panel discussion, an unconference, or other types of programs. I hope this is helpful and saves time for other organizers.

Sample Questions for the Diversity Panel Discussion

  1. Why should libraries, archives, museums pay attention to the issues related to diversity, equity, and inclusion?
  2. In what ways do you think the lack of diversity in our profession affects the perception of libraries, museums, and archives in the communities we serve?
  3. Do you have any personal or work-related stories that you would like to share that relate to diversity, equity, and inclusion issues?
  4. How did you get interested in diversity, equity, and inclusion issues?
  5. Suppose you discovered that your library’s, archive’s or museum’s collection includes prejudiced information, controversial objects/ documents, or hate-inducing material. What would you do?
  6. Suppose a group of your library / archive / museum patrons wants to use your space to hold a local gathering that involves hate speech. What would you do? What would you be most concerned about, and what would you consider in deciding how to respond?
  7. Do you think libraries, archives, and museums are a neutral place? What do you think neutrality means to a library, an archive, a museum in practice in a divisive climate such as now?
  8. What are some of the areas in libraries, museums, and archives where you see privileges and marginalization function as a barrier to achieving our professional values – equal access and critical thinking?  What can we do to remove those barriers?
  9. Could you tell us how colonialist thinking and practice are affecting libraries, museums, and archives, whether consciously or unconsciously? Since not everyone is familiar with colonialism, please begin with your brief interpretation of what colonialist thinking or practice looks like in libraries, museums, and archives.
  10. What more do you think libraries, archives, and museums can do to improve critical thinking in the communities we serve?
  11. Although libraries, archives, and museums have been making efforts to recruit, hire, and retain diverse personnel in recent years, the success rate has been relatively low. In librarianship, for example, it has been reported that those hired through these efforts often experienced backlash at their own institutions, were subject to unrealistic expectations, and met with an unsupportive environment, which led to burnout and a low retention rate of talented people. From your perspective – either as a manager hiring people or as a relatively new librarian who has looked for jobs – what do you think can be done to improve this unfortunate situation?
  12. Many in our profession express their hesitation to actively participate in diversity, equity, and inclusion-related discussion and initiatives at their institutions because of the backlash from their own coworkers. What do you think we can do to minimize such backlash?
  13. Some people in our profession express strong negative feelings regarding diversity, equity, and inclusion-related initiatives. How much of this type of anti-diversity sentiment do you think exists in your field? Some worry that it is growing even faster in the current divisive and intolerant climate. What do you think we can do to counter such anti-diversity sentiment?
  14. There are many who are resistant to the values of diversity, equity, and inclusion. Have you taken any action to promote and advance these values in the face of such resistance? If so, what was your experience like, and what strategies would you recommend to others working with those people?
  15. Many people in our profession want to take our diversity, equity, and inclusion initiatives to the next level, beyond offering mere lip service or simply playing a numbers game for statistics’ sake. What do you think that next level may be?

Lastly, I felt strongly about ensuring that the terms and concepts often thrown around in diversity/equity/inclusion-related programs and events – such as intersectionality, white privilege, microaggression, patriarchy, colonialism, and so on – are not used in ways that unintentionally alienate those unfamiliar with them. These concepts are useful and convenient shortcuts that allow us to communicate a large set of ideas previously discussed and digested, so that we can move our discussion forward more efficiently. They should not make people feel uncomfortable or generate any hint of superiority or inferiority.

To this end, I created a pre-program survey that all program registrants were encouraged to take. The survey simply asked people how familiar and how comfortable they were with a variety of terms. At the panel discussion program, we also distributed a glossary of these terms so that everyone could become familiar with them.9 Videos can also quickly bring attendees up to speed on some basic concepts and phenomena in the diversity discussion. For example, at the beginning of our panel discussion program, I played two short videos, “Life of Privilege Explained in a $100 Race” and “What If We Treated White Coworkers The Way We Treat Minority Coworkers?”, which were well received by the attendees.

I am sharing the survey questions, the video links, and the glossary in the hope that they may be useful tools for future organizers. For example, one might provide a glossary like this before the program, or run an unconference aimed at unpacking the meanings of these terms and discussing how they relate to people’s daily lives.10

In Closing: Diversity, Libraries, Technology, and Our Own Biases

Disagreements on social issues are natural. But the divisiveness that we are currently experiencing seems to be particularly intense. This deeply concerns us, educators and professionals working in libraries, archives, and museums. Libraries, archives, and museums are public institutions dedicated to promoting and advancing civic values. Diversity, equity, and inclusion are part of those core civic values that move our society forward. This task, however, has become increasingly challenging as our society moves in a more and more divisive direction.

To make matters even more complicated, libraries, archives, and museums in general lack diversity in their staff composition, and this homogeneity can impede our own mission. According to the recent report from Ithaka S+R released this August, we do not appear to have gotten very far. Their report, “Inclusion, Diversity, and Equity: Members of the Association of Research Libraries (ARL) – Employee Demographics and Director Perspectives,” shows that library staff and library leadership/administration are both markedly white-dominant (71% and 89% white non-Hispanic, respectively).11 And while librarianship in general is female-dominant (61%), the technology field in libraries is starkly male (70%), along with makerspace (65%), facilities (64%), and security (73%) positions.12 The survey results in the report show that while the majority of library directors say there are barriers to achieving more diversity in their library, they attribute those barriers to external rather than internal factors, such as the library’s geographic location and the insufficiently diverse applicant pool resulting from it. What is fascinating, however, is that this directly conflicts with the finding that libraries show little variation in the ratio of white staff by degree of urbanization. Equally interesting, staff in more homogeneous, less diverse (over 71% white non-Hispanic) libraries think their libraries are much more equitable than the library community (57% vs. 14%), and library directors (and staff) consider their own library more equitable, diverse, and inclusive than the library community with respect to almost every category, including race/ethnicity, gender, LGBTQ status, disability, veteran status, and religion.

While these findings in the Ithaka S+R report are based on survey results from ARL libraries, similar staff compositions and attitudes can reasonably be assumed to apply to libraries in general. There is a great need for both library administrations and staff to understand the unconscious and implicit biases, workplace norms, and organizational culture that may well be thwarting their own diversity efforts.

Diversity, equity, and inclusion have certainly been topics of active discussion in recent years. Many libraries have established a committee or task force dedicated to improving diversity. But how are those efforts paying off? Are they going beyond mere lip service? Are they making a real difference in the everyday experience of minority library workers?13 Can we improve, and if so, where and how? Where do we go from here? These are the questions we will need to examine in order to take our diversity efforts in libraries, archives, and museums to the next level.

Notes

  1. The program description is available at https://web.uri.edu/library/2017/12/05/building-bridges-in-a-divisive-climate-diversity-in-libraries-archives-and-museums/
  2. Carol Bean, Ranti Junus, and Deborah Mouw, “Conference Report: Code4LibCon 2008,” The Code4Lib Journal, no. 2 (March 24, 2008), http://journal.code4lib.org/articles/72.
  3. Note that this kind of biased assertion often masquerades as an objective intellectual pursuit in academia when, in reality, it is a direct manifestation of an existing prejudice reflecting the limited and shallow experience of the person posing the question. A good example is the remark made in 2005 by Larry Summers, the former Harvard President. He suggested that one reason for the relatively few women in top positions in science may be “issues of intrinsic aptitude” rather than widespread, indisputable, everyday discrimination against women. He resigned after the Harvard faculty of arts and sciences cast a vote of no confidence. See Scott Jaschik, “What Larry Summers Said,” Inside Higher Ed, February 18, 2005, https://www.insidehighered.com/news/2005/02/18/summers2_18.
  4. Our pre-program survey questions can be viewed at https://docs.google.com/forms/d/e/1FAIpQLScP-nQnkHAqli_43pVdidw-dQzrAfLyCdiKutu5dZjqm3F8rA/viewform.
  5. For this purpose, asking all participants to respect one another’s privacy in advance can be a good policy. In addition to this, we specifically decided not to stream or record our panel discussion program, so that both panelists and attendees can freely share their experience and thoughts.
  6. A good example is the Search Advocate program from Oregon State University. See http://searchadvocate.oregonstate.edu/.
  7. For an example, see the workshops offered by the Office of Community, Equity, and Inclusion of the University of Rhode Island at https://web.uri.edu/diversity/ced-inclusion-courses-overview/.
  8. For the limitations of the mainstream diversity discussion in LIS (library and information science) with the focus on inclusion and cultural competency, see David James Hudson, “On ‘Diversity’ as Anti-Racism in Library and Information Studies: A Critique,” Journal of Critical Library and Information Studies 1, no. 1 (January 31, 2017), https://doi.org/https://doi.org/10.24242/jclis.v1i1.6.
  9. You can see our glossary at https://drive.google.com/file/d/1UCI142HUuYTrElgnY-dbNSOXF_IlpM6n/view?usp=sharing; This glossary was put together by Renee Neely.
  10. For the nitty-gritty logistical details for organizing a large event with a group of local and remote volunteers, check the Organizer’s Toolkit created by the 2017 #critlib Unconference organizers at https://critlib2017.wordpress.com/organizers-toolkit/.
  11. Roger Schonfeld and Liam Sweeney, “Inclusion, Diversity, and Equity: Members of the Association of Research Libraries,” Ithaka S+R, August 30, 2017, http://www.sr.ithaka.org/publications/inclusion-diversity-and-equity-arl/.
  12. For the early discussion of diversity-focused recruitment in library technology, see Jim Hahn, “Diversity Recruitment in Library Information Technology,” ACRL TechConnect Blog, August 1, 2012, https://acrl.ala.org/techconnect/post/diversity-recruitment-in-library-information-technology.
  13. See April Hathcock, “White Librarianship in Blackface: Diversity Initiatives in LIS,” In the Library with the Lead Pipe, October 7, 2015, http://www.inthelibrarywiththeleadpipe.org/2015/lis-diversity/ and Angela Galvan, “Soliciting Performance, Hiding Bias: Whiteness and Librarianship,” In the Library with the Lead Pipe (blog), June 3, 2015, http://www.inthelibrarywiththeleadpipe.org/2015/soliciting-performance-hiding-bias-whiteness-and-librarianship.

Yet Another Library Open Hours App

a.k.a. Yet ALOHA.

It’s a problem as old as library websites themselves: how to represent the times when a library building is open in a way that’s easy for patrons to understand and easy for staff to update?

Every website or content management system has its own solution, and none of them has quite suited our needs. In a previous position, I remember using a Drupal module that looked slick and had a nice menu for entering data on the administrative side…but it was made by a European developer and displayed dates in the (inarguably more logical) DD/MM/YYYY format. I didn’t know enough PHP at the time to fix it, and it would’ve confused our users, so I scrapped it.

Then there’s the practice of simply updating, by hand, an HTML fragment with the hours written out. This approach has advantages that aren’t easily dismissed: you can write out detailed explanations, highlight one-off closures, and adjust to whatever oddity comes up. But editing a web page is tedious for staff and easy to forget. This is especially true if hours information is displayed in several places; keeping everything in sync is an additional burden, with a greater possibility of human error. So when we went to redesign our library website, an hours application that made it easy to enter data and then reuse it in multiple places was at the forefront of my mind.

Why is this so hard?

One might think displaying hours is easy. The end products often look innocuous. But there are a bevy of reasons why it’s complicated for many libraries:

  • open hours differ across different branches
  • hours of particular services within a branch may not fully overlap with the library building’s open hours
  • a branch might close and re-open during the day
  • a branch might be open past midnight, technically “closing” on a different date than the one on which it opened
  • holidays, campus closures, unexpected emergencies, and other exceptions disrupt regular schedules
  • in academia, schedules differ depending on whether class is in session, it’s a break during a term, or it’s a break between terms
  • the staff who know or determine a branch’s open hours aren’t necessarily technically skilled and may be spread across disparate library departments
  • dates and times are unique forms of data with their own unique displays, storage types, and operations (e.g. chronological comparisons)

Looking at other libraries, the struggle to represent their business hours is evident. For instance, the University of Illinois has an immense list of library branches and their open hours on its home page. There’s a lot to like about the display: it’s on the home page so patrons don’t have to go digging for the info; there’s a filter-by-name feature; the distinct open/closed colors help one identify at a glance which places are open; and the library branch rows expand with extra information. But it’s also an overwhelming amount of information, longer than a typical laptop screen.

Hours display on the UIUC Libraries’ home page.

Many libraries use SpringShare’s LibCal to manage and display their open hours. See Loyola’s Hours page with its embedded table from LibCal. As a disclaimer, I’ve not used LibCal, but it comes with some obvious caveats: it’s a paid service that not all libraries can afford, and it’s Yet Another App outside the website CMS. I’ve also been told that the hours entry has a learning curve and that it’s necessary to use the API for most customization. So, much as I appreciate the clarity of the LibCal schedule, I wanted to build an hours app that would work well for us, providing flexibility in terms of data format and display.

Our Hours

Our website CMS Wagtail uses a concept called “snippets” to store pieces of content which aren’t full web pages. If you’re familiar with Drupal, Snippets are like a more abstract version of Blocks. We have a snippet for each staff member, for instance, so that we can connect particular pages to different staff members but also have a page where all staff are displayed in a configurable list. When I built our hours app, snippets were clearly the appropriate way to handle the data. Ideally, hours would appear in multiple places, not be tied to a single page. Snippets also have their own section in the CMS admin side which makes entering them straightforward.

Our definition of an “open hours” snippet has but a few components:

  • the library branch the hours are for
  • the date range being described, e.g. “September 5th through December 15th” for our Fall semester
  • a list of open hours for each weekday, e.g. Monday = “8am – 10pm”, Tuesday = “8am – 8pm”, etc.

There are some nuances here. First, staff have to enter hours once for each branch for a given academic term, so there is quite a bit of data entry. Second, the weekday hours are stored as text, not a numeric data type. This lets us add parentheticals such as “8am – 5pm (no checkouts)”. While I can see some theoretical scenarios where having numeric data would be handy, such as determining whether a particular branch is open at a given hour on a given date, using text simplified both the app’s data model for me and data entry for staff.
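
To make this concrete, here is a rough sketch of what such a snippet could look like as a Wagtail model. To be clear, the field names and structure are my illustration of the shape described above, not our actual production code, and the import path assumes a recent Wagtail release:

# Illustrative sketch only: field names are guesses, not the app's real code.
from django.db import models
from wagtail.snippets.models import register_snippet  # path in recent Wagtail

@register_snippet
class OpenHours(models.Model):
    branch = models.CharField(max_length=100)  # e.g. "Meyer"
    start_date = models.DateField()  # e.g. September 5th, start of Fall term
    end_date = models.DateField()    # e.g. December 15th, end of Fall term
    # weekday hours are free text so staff can write "8am - 5pm (no checkouts)"
    monday = models.CharField(max_length=100)
    tuesday = models.CharField(max_length=100)
    wednesday = models.CharField(max_length=100)
    thursday = models.CharField(max_length=100)
    friday = models.CharField(max_length=100)
    saturday = models.CharField(max_length=100)
    sunday = models.CharField(max_length=100)

    def __str__(self):
        return "{}: {} to {}".format(self.branch, self.start_date, self.end_date)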

But what about when the library closes for a holiday? Each holiday effectively triples the data entry for a term: we would need a data set for the period leading up to the holiday, one for the holiday itself, and one for the period following it. When we closed for Thanksgiving, for example, our Fall term would have been split into a pre-Thanksgiving, Thanksgiving, and post-Thanksgiving triad, and likewise for every other holiday.

To alleviate the holiday problem, I made a second snippet type called “closures”. Closures let us punch holes in a set of open hours; rather than require pre- and post- data sets, we have one open hours snippet for the whole term and then any number of closures within it. A closure is composed of only a library branch and a date range. Whenever data about open hours is passed around inside our CMS, the app first consults the list of closures and then adjusts appropriately.
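
In the same hedged spirit, a closure snippet and the closure-aware lookup might look roughly like the following. Again, this is my own sketch of the logic described above, reusing the hypothetical OpenHours model from the earlier sketch, not the app’s actual models.py:

# Sketch: a closure is just a branch plus a date range; the lookup consults
# closures before falling back to the term's open hours.
from django.db import models
from wagtail.snippets.models import register_snippet

@register_snippet
class Closure(models.Model):
    branch = models.CharField(max_length=100)
    start_date = models.DateField()
    end_date = models.DateField()

def hours_for_date(branch, date):
    """Return the hours text for a branch on a date, honoring closures."""
    covers = {"branch": branch, "start_date__lte": date, "end_date__gte": date}
    if Closure.objects.filter(**covers).exists():
        return "Closed"  # a closure punches a hole in the open hours
    term = OpenHours.objects.filter(**covers).first()
    if term is None:
        return "Closed"  # no hours entered for this date
    weekday = date.strftime("%A").lower()  # "monday", "tuesday", ...
    return getattr(term, weekday)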

The open hours for the current day are displayed prominently on our home page. When we rebuilt our website, surfacing hours information was a primary design goal; our old site’s hours page wasn’t exactly easy to find…yet it was the second most-visited page behind the home page.1 On the new site, the hours app lets us show the same information in several places, for instance as a larger table of our open times for a full week. The page showing the full table also accepts a date parameter in its URL, displaying our schedule for future periods. This lets us put up a notice about changes for times like Thanksgiving week or Spring break.

Hours API

What really excited me about building an hours application from the ground up was the chance to include an API (implemented in the app’s views.py file, which in turn uses a couple of functions from models.py). The app’s public API endpoint is at https://libraries.cca.edu/hours?format=json and by default it returns the open hours for the current day for all our library branches. The branch parameter allows API consumers to get the weekly schedule for a single branch, while the date parameter lets them discover the hours for a specific date.

// GET https://libraries.cca.edu/hours/?format=json
{
    "Materials": "11am - 4pm",
    "Meyer": "8am - 5pm",
    "Simpson": "9am - 6pm"
}
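
For the curious, a view implementing this default behavior might look something like the sketch below; the real views.py surely differs, and I’ve omitted the branch parameter handling for brevity:

# Hypothetical view sketch: today's (or ?date=YYYY-MM-DD's) hours for every
# branch, serialized as the JSON shown above.
import datetime
from django.http import JsonResponse

BRANCHES = ["Materials", "Meyer", "Simpson"]

def hours(request):
    date_param = request.GET.get("date")
    date = (datetime.datetime.strptime(date_param, "%Y-%m-%d").date()
            if date_param else datetime.date.today())
    # hours_for_date is the closure-aware lookup sketched earlier
    return JsonResponse({b: hours_for_date(b, date) for b in BRANCHES})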

I’m using the API in two places: on our library catalog home page, and in an HTML snippet shown when users search our discovery layer for “hours” or “library hours”. I hope other college websites will also want to reuse this information, for instance on our student portal or a campus map. Here one can see the limitation of using text strings as the data format for temporal intervals: an application trying to use this API to determine “is a given library open at this instant” has to do a bit of parsing to check whether the current time falls within the range. In the end, though, the benefits for data entry and straightforward display make text the best choice for us.
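
For example, a consumer wanting an “open right now” check might parse the strings along these lines. This is purely illustrative; a real consumer would need to handle closures, parentheticals, and past-midnight ranges more carefully:

# Rough sketch of parsing "11am - 4pm" style strings; not part of the API
# itself, just what a consumer might do.
import datetime
import re

RANGE = re.compile(r"(\d{1,2})(?::(\d{2}))?(am|pm)\s*[-–]\s*(\d{1,2})(?::(\d{2}))?(am|pm)")

def is_open_now(hours_text):
    match = RANGE.search(hours_text)
    if not match:
        return False  # "Closed", or a string too irregular to parse
    def to_time(hour, minute, meridiem):
        h = int(hour) % 12 + (12 if meridiem == "pm" else 0)
        return datetime.time(h, int(minute or 0))
    opens = to_time(*match.group(1, 2, 3))
    closes = to_time(*match.group(4, 5, 6))
    return opens <= datetime.datetime.now().time() < closes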

To summarize, the hours app fulfills our goals for the new website in a few ways. It allows us to surface our schedule not only on our home page but also in other places, sets us up to be able to reuse the information in even more places, and minimizes the burden of data entry on our staff. There are still improvements to be made—as I was writing this post I discovered a problem with cached API responses being outdated—but on the whole I’m very happy with how everything worked out.

Notes


  1. Libraries, I beg you, make your open hours obvious! People want to know.

Memory Labs and audiovisual digitization workflows with Lorena Ramírez-López

Hello! I’m Ashley Blewer, and I’ve recently joined the ACRL TechConnect blogging team. For my first post, I wanted to interview Lorena Ramírez-López. Lorena is working (among other places) at the D.C. Public Library on their Memory Lab initiative, which we will discuss below. Although this upcoming project targets public libraries, Lorena has a history of dedication to providing open technical workflows and documentation to support any library’s mission to set up similar “digitization stations.”

Hi Lorena! Can you please introduce yourself?

Hi! I’m Lorena Ramírez-López. I am a born and raised New Yorker from Queens. I went to New York University for Cinema Studies and Spanish, where I did an honors thesis on Paraguayan cinema with regard to sound theory. I continued my education at NYU and graduated from the Moving Image Archiving and Preservation program, where I concentrated on video and digital preservation. I was one of the National Digital Stewardship Residents for the American Archive of Public Broadcasting, doing my residency at the Howard University television station (WHUT) in Washington, D.C. from 2016 until June 2017. Along with being the project manager for the Memory Lab Network, I do contract work for the National Portrait Gallery on their time-based media artworks, am part of the Women Who Code community, and teach Spanish at Fluent City!


Tell us a little bit about DCPL’s Memory Lab and your role in it.

The DC Public Library’s Memory Lab was a National Digital Stewardship Project back in 2014 through 2015. It was the baby of DCPL’s National Digital Stewardship Resident, Jaime Mears. A lot of my knowledge of how it started comes from reading the original project proposal, which you can find on the Library of Congress’s website; Jaime Mears’s final report on the Memory Lab is on the DC Library website. To summarize its origin story: the Memory Lab was created as a local response to the fact that communities are generating a lot of digital content while still keeping many physical materials, like VHS tapes, miniDVs, and photos, but might not have the equipment or knowledge to preserve that content. It is widely accepted in the archival and preservation fields that we have an approximately 15- to 20-year window of opportunity to digitally preserve legacy audio and video recordings on magnetic tape, because of the rate of degradation and the obsolescence of playback equipment. The term “video at risk” might ring a bell for some people. Photographs and film, particularly color slides and negatives and moving image film formats, will also fade and degrade over time. People want to save their memories as well as share them on digital platforms.

There are well-established best practices for digital preservation in archival practice, but these guidelines and documentation are generally written for a professional audience. And while there are various personal digital archiving resources for a public audience, they aren’t easy to find on the web, and many haven’t been updated to reflect changes in our technology, software, and habits.

That being the case, our communities risk a massive loss of history and culture! And to quote Gabriela Redwine’s Digital Preservation Coalition report, “personal digital archives are important not just because of their potential value to future scholars, but because they are important to the people who created them.”

So the Memory Lab was the library’s local response in the Washington D.C. area: a way to bridge this gap in digital archiving knowledge and provide the tools and resources for library patrons to digitize their own personal content.

My role is maintaining the memory lab (the digitization rack). When hardware gets worn down or breaks, I fix it. When the software on our computers upgrades to a newer system, I update our workflows.

I am currently re-doing the website to reflect the new wiring I did and updating the instructions with more explanations and images. You can expect gifs!


You recently received funding from IMLS to create a Memory Lab Network. Can you tell us more about that?

Yes! The DC Public Library in partnership with the Public Library Association received a national leadership grant to expand the memory lab model.

During this project, the Memory Lab Network will partner with seven public libraries across the United States. Our partners will receive training, mentoring, and financial support to develop their own memory labs, as well as programs for their library patrons and communities to digitize and preserve their personal and family collections. A lot of focus is put on the digitization rack, mostly because it’s cool, but the memory lab model is not just about creating a digitization rack. It also means developing classes and online resources so the community understands that digital preservation doesn’t end with digitizing analog formats.

By creating these memory labs, these libraries will help bridge the digital preservation divide between the professional archival community and the public. But first we have to train the libraries and help them set up their memory labs, which is why we are providing travel grants to Washington, D.C. for an in-depth digital preservation bootcamp and training for these seven partners.

If anyone wants to read the proposal, the Institute of Museum and Library Services has it here.


What are the goals of the Memory Lab Network, and how do you see this making an impact on the overall library field (outside of just the selected libraries)?

One of the main goals is to see how well the memory lab model holds up. The memory lab was a local response to a need but it was meant to be replicated. This funding is our chance to see how we can adapt and improve the memory lab model for other public libraries and not just our own urban library in Washington D.C.

There are actually many institutions and organizations that have digitization stations and/or the knowledge and resources, but we just don’t realize who they are. Sometimes it feels like we keep reinventing the wheel with digital preservation. Plenty of websites once had current information on digital preservation, with links to articles and other explanations; then those websites weren’t sustained and stagnated into a collection of broken links and lost PDFs. We could (and should) be better about not just creating new resources but updating the ones we have.

The reasons why some organizations aren’t transparent or don’t update their information, or why we aren’t searching in certain places, vary, but we should be better at documenting and sharing our information with our archival and public communities. This is why the other goal is to create a network to better communicate and share.


What advice do you have for librarians thinking of setting up their own digitization stations? How can someone learn more about aspects of audiovisual preservation on the job?

If you are thinking of setting up your own digitization station, announce it not only to your local community but also to the larger archival community. Tell us about this amazing adventure you’re about to tackle. Let us know if you need help! Circulate and cite that article you thought was super helpful. Try to communicate not only your successes but also your problems and failures.

We need to be better at documenting and sharing what we’re doing, especially when it comes to handling and repairing playback decks for magnetic media. Beyond the fact that companies simply stopped supporting this equipment, a lot of the knowledge of how to support and repair it could have been shared or passed down by really knowledgeable experts, but it wasn’t. Now we’re all holding our breath and pulling our hair out because the one dude who repairs U-matic decks is thinking about retiring. This lack of information and communication shouldn’t be the case in an environment where we can simply email and call one another.

We tend to freak out about audiovisual preservation because we see how other professional institutions set up their workflows and the amount of equipment they have. The great advantage libraries have is that they can not only act practically with their resources but also draw on the best kind of feedback to learn from: library patrons. We’re creating these memory lab models for the general public, so practical experience, feedback, and concerns are great ways to learn which aspects of audiovisual preservation really need to be fleshed out and researched.

And for fun, try creating and archiving your own audiovisual media! You technically already do by taking photos and videos on your phone. Getting to know your equipment and where your media goes is very helpful.


Thanks very much, Lorena!

For more information on how to set up a “digitization station” at your library, I recommend Dorothea Salo’s robust website detailing how to build an “audio/video or digital data rescue kit”, available here.


A Look Back at Open Access Week 2017

This year’s Open Access Week at my institution was a bit different than in years past. With our time constrained by conference travel and staff shortages leaving everyone over-scheduled, we decided on a week of “virtual programming”: a series of blog posts and an invitation to view our open access research guide. While this lacked the splashiness of programming in prior years, in another way it felt important to do the work this way. Yes, it may well be that only people already well-connected to the library saw any of this material. But promoting open access requires a great deal of self-education among librarians and other library insiders before we can promote it more broadly. For many libraries, there may be only a few “open access” people, and Open Access Week ends up being the only time during the year the topic is addressed by the library as a whole.

All the Colors of Open Access: Black and Green and Gold

There were a few shakeups in scholarly communication and open access over the past few months that made some of these discussions more broadly interesting across the academic landscape. The ongoing saga of the infamous Beall’s List has been a major 2017 story. An article in the Chronicle of Higher Education about Jeffrey Beall was emailed to me more than once, and it captured the complexity of why such a list is both an appealing solution to a problem and reliant on sometimes questionable personal judgements. Jeffrey Beall’s attitude towards other scholarly communications librarians can be simplistic and vindictive, as an interview with Times Higher Education in August made clear. June saw the announcement of Cabell’s Blacklist, which is based on Beall’s list and uses a list of criteria to judge journal quality. At my own institution I know this prompted discussions of the purpose of a blacklist versus using a vetted list of open access journals like the Directory of Open Access Journals. As a researcher states in a Nature article about the product, a blacklist is likely more useful for promotion and tenure committees or hiring committees judging applicants than for potential authors trying to find good journals in which to publish.

This also completely leaves aside the green open access options, in which authors can negotiate with their publisher to make a version of their article openly available – often the final published version, but at least the text before layout. While publishing an article in an open access journal has many benefits, green open access can meet faculty’s open access goals without the worry of paying additional fees or judging journal quality. But we still need to educate people about green open access. I was chatting recently with a friend who is an economist, and he wondered how open access worked in other disciplines, since he was used to all papers being released as working papers before being published in traditional journals. I contrast this conversation with another in which someone in a very different discipline worried that posting even a summary of research could constitute prior publication. Given this wide disparity between disciplines, we will always struggle to cast a wide message about green open access. But I firmly believe that there are individuals within every discipline who will be excited about open access, and that they will get at least some of their colleagues on board – or perhaps their graduate students. These people may be located on the interdisciplinary side, with one foot in a more preprint-friendly discipline: the bioethicists in the theology department, for instance, or the history of science people in the history department. And even the most well-meaning people forget to make their work open access, so make participating as easy as possible – though not so easy that people don’t know why they would do it – and make sure there are still avenues for conversation.

Shaky Platforms

Making things easy to do requires having a good platform, but that became more complicated in August when Elsevier acquired bepress, which prompted discussions among many librarians about their values around open access and whether relying on vendors for open access platforms is a foolish gamble (the Library Loon summarizes this discussion well). This is a complex question, as the kinds of services provided by bepress’s Digital Commons go well beyond simple hosting, in line with the strategy I pointed out Elsevier was pursuing in my Open Access 2016 post. Convincing faculty to participate in open access requires a number of strategies, and things like faculty profiles, readership dashboards, and attractive interfaces go a long way. No surprise, then, that after purchasing platforms that make this easy, Elsevier (along with other publishers) would go after ResearchGate in October, which is even easier to use in some ways and certainly appealing to researchers.

All the discussion of predatory journals and blacklists (not to mention SciHub being ordered blocked thanks to an ACS lawsuit) seems old to those of us who have been doing this work for years, but it is still a conversation we need to have. More importantly, focusing on the positive aspects of open access gets at the reasons people participate in it and moves the conversation forward. We can work to educate our communities about finding good open access journals and how to participate legally. I believe publishers are providing more green access options because their authors are asking for them, and we are helping authors learn how to ask.

I hope we were not too despairing this Open Access Week. We are doing good work, even if there is still a lot of poisonous rhetoric floating around. In the years I’ve worked in scholarly communication I’ve helped make thousands of articles, book chapters, dissertations, and books open access. Those items have in turn gone on to be cited in new publications. The scholarly communication cycle still goes on.