A few members of Tech Connect attended the recent Code4Lib 2018 conference in Washington, DC. If you missed it, the full livestream of the conference is on the Code4Lib YouTube channel. We wanted to highlight some of our favorite talks and tie them into the work we’re doing.
Also, it’s worth pointing to the Code4Lib community’s Statement in Support of opening keynote speaker Chris Bourg. Chris offered some hard truths in her speech that angry men on the internet, predictably, were unhappy about, and the prompt, public support from conference organizers and attendees is a model worth emulating.
One of my favorite talks at Code4Lib this year was Amy Wickner’s talk, “Web Archiving and You / Web Archiving and Us.” (Video, slides) I felt this talk really captured the essence of what I love most about Code4Lib, this being my 4th conference in the past 5 years (and I believe it was Amy’s first!). The talk took on a technical topic relevant to collecting libraries and handled it in a way that acknowledges and prioritizes the essential personal component of any technical endeavor. That is what I found so wonderful about Amy’s talk, and it is also what I find so refreshing about Code4Lib as an inherently technical conference with intentionality behind its human aspects.
Web archiving is something many of us are interested in but find overwhelming to begin to tackle. I mean, the internet is just so big. Amy brought forth a proposal for how a person or institution can begin thinking about a web archiving project, focusing first on the significance of appraisal. Wickner, citing Terry Cook, spoke of the “care and feeding of archives” and of thinking about appraisal as storytelling. I think this is a great way to make a big internet seem smaller: understanding the importance of care in appraisal while acknowledging that, for web archiving, it is an essential practice. Representation in web archives is more deliberately chosen during appraisal than it historically has been for other formats.
This statement resonated with me: “Much of the power that archivists wield are in how we describe or create metadata that tells a story of a collection and its subjects.”
And also: For web archives, “the narrative of how they are built is closely tied to the stories they tell and how they represent the world.”
Wickner went on to discuss how web archives are and will be used, and who will use them, giving some examples but emphasizing there are many more. She noted that we must learn to “critically read as much as learn to critically build” web archives, while acknowledging that web archives exist both within and outside of institutions, and that personal archiving can be as simple as replacing links in documents with perma.cc, Wayback Machine, or WebRecorder links.
Another topic I enjoyed in this talk was the celebration of precarious web content through community storytelling on Twitter with the hashtags #VinesWithoutVines and #GifHistory, two brief but joyous moments.
The part of this year’s Code4Lib conference that I found most interesting was the talks and the breakout-session discussion related to machine learning and deep learning. Machine learning is a subfield of artificial intelligence, and deep learning is a kind of machine learning that utilizes hidden layers between the input layer and the output layer in order to refine and produce the model that best maps inputs to the desired outputs. Once such a model is trained on the data in the training set, it can be applied to a new set of data to predict results. Deep learning has been making waves in many fields, such as Go playing, autonomous driving, and radiology, to name a few. There were a few different talks on this topic, ranging from reference chat sentiment analysis to detecting features (such as railroads) in map data using a convolutional neural network model.
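To make that vocabulary concrete, here is a minimal toy sketch (my own illustration, not code from any of the talks): a network with a single hidden layer between input and output, trained by gradient descent on the classic XOR problem. The data, layer sizes, and learning rate are all arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    return h, sigmoid(h @ W2 + b2)    # network output

losses = []
for _ in range(3000):                 # gradient-descent training loop
    h, out = forward(X)
    losses.append(float(np.mean((out - y) ** 2)))
    # backpropagate the error through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(losses[0], losses[-1])  # the loss shrinks as training proceeds
```

The same idea scales up: deeper models simply stack more hidden layers, and frameworks like TensorFlow and Keras handle the backpropagation automatically.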
“Deep Learning for Libraries,” presented by Lauren Di Monte and Nilesh Patil from the University of Rochester, was the most practical of those talks, as it started with a specific problem and resulted in action to address it. Di Monte and Patil showed how they applied deep learning techniques to a problem in their library’s space assessment: finding out how many people visit the library to use its space and services, and how many are simply passing through to get to another building or to the campus bus stop adjacent to the library. Not knowing this made it difficult for the library to decide on the appropriate staffing level or the hours that best serve users’ needs. It also prevented the library from demonstrating its reach and impact with data and advocating for needed resources or budget to decision-makers on campus. The goal of their project was to develop automated, scalable methods for conducting space assessment and reporting tools that support decision-making for operations, service design, and service delivery.
For this project, they chose an area bounded by four smart access-control gates on the first floor. They obtained the log files (with minute-by-minute data at the sensor level) from the eight bi-directional sensors on those gates and analyzed the data in order to create a recurrent neural network model. They trained the model so that it could predict future incoming and outgoing traffic in that area, and they presented those findings visually in a data dashboard application. For data preparation, processing, and modeling, they used Python, with tools including Seaborn, Matplotlib, Pandas, NumPy, SciPy, TensorFlow, and Keras. They picked a recurrent neural network with stochastic gradient descent optimization, which is less complex than a time-series model. For data visualization, they used Tableau. The project code is available at the library’s GitHub repo: https://github.com/URRCL/predicting_visitors.
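Their actual code lives in the GitHub repo, but the general shape of the data preparation step — rolling minute-level bi-directional sensor events up into a count series, then slicing it into the (history, next-value) pairs a recurrent model trains on — can be sketched with the standard library alone. The log records and field names here are hypothetical:

```python
from collections import defaultdict

# Hypothetical minute-level records: (minute, gate_id, direction)
log = [
    (0, "g1", "in"), (1, "g2", "in"), (2, "g1", "out"),
    (61, "g3", "in"), (62, "g1", "in"), (125, "g2", "out"),
]

# Aggregate sensor events into hourly in/out totals
hourly = defaultdict(lambda: {"in": 0, "out": 0})
for minute, gate, direction in log:
    hourly[minute // 60][direction] += 1

# A time series of incoming counts, one value per hour
series = [hourly[h]["in"] for h in sorted(hourly)]

def windows(seq, size=2):
    """Sliding (history, next-value) pairs for sequence learning."""
    return [(seq[i:i + size], seq[i + size]) for i in range(len(seq) - size)]

print(series)           # hourly incoming counts
print(windows(series))  # training pairs for a recurrent model
```

In the real project this role is played by Pandas and NumPy, with the resulting sequences fed into a Keras recurrent network.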
Their project result led the library to install six more gates in order to get a better overview of library space usage. As a side benefit, the library was also able to pinpoint the times when the gates malfunctioned and communicate the issue to the gate vendor. Di Monte and Patil plan to hand the project over to the library’s assessment team for ongoing monitoring and, as a next step, to look for ways to map the library’s traffic flow across multiple buildings.
Overall, there was a lot of interest in machine learning, deep learning, and artificial intelligence at the Code4Lib conference this year. The breakout session I led on these topics produced a lively discussion on a variety of tools, current and future projects at many different libraries, and the impact of rapidly developing AI technologies on society. The session also generated the #ai-dl-ml channel in the Code4Lib Slack space. The growing interest in these areas is also reflected in the newly formed Machine and Deep Learning Research Interest Group of the Library and Information Technology Association. I hope to see more talks and discussion on these topics at future Code4Lib and other library technology conferences.
One of the talks which struck me the most this year was Matthew Reidsma’s Auditing Algorithms. He used examples of search suggestions in the Summon discovery layer to show biased and inaccurate results:
In 2015 my colleague Jeffrey Daniels showed me the Summon search results for his go-to search: “Stress in the workplace.” Jeff likes this search because ‘stress’ is a common engineering term as well as one common to psychology and the social sciences. The search demonstrates how well a system handles word proximities, and in this regard, Summon did well. There are no apparent results for evaluating bridge design. But Summon’s Topic Explorer, the right-hand sidebar that provides contextual information about the topic you are searching for, had an issue. It suggested that Jeff’s search for “stress in the workplace” was really a search about women in the workforce. Implying that stress at work was caused, perhaps, by women.
This sort of work is not, for me, novel or groundbreaking. Rather, it was so important to hear because of its relation to similar issues I’ve been reading about since library school: the bias present in Library of Congress subject headings, where “Homosexuality” used to be filed under “Sexual deviance,” and Safiya Noble’s work on the algorithmic bias of major search engines like Google, where her queries for the term “black girls” yielded pornographic results. Our systems are not neutral but reify the existing power relations of our society. They reflect the dominant, oppressive forces that constructed them. I contrast LC subject headings and Google search suggestions intentionally; this problem is as old as the organization of information itself. Whether we use hierarchical, browsable classifications developed by experts or estimated proximities generated by an AI with massive amounts of user data at its disposal, there will be oppressive misrepresentations if we don’t work to prevent them.
Reidsma’s work engaged with algorithmic bias in a way that I found relatable, since I manage a discovery layer. The talk made me want to immediately implement his recording script in our instance so I can start looking for and reporting problematic results. It also touched on some of what has been dispiriting in library work lately: our reliance on vendors and their proprietary black boxes. We’ve had a number of issues lately related to full-text linking that are confusing for end users and make me feel powerless. I submit support ticket after support ticket only to be told there’s no timeline for a fix.
On a happier note, there were many other talks at Code4Lib that I enjoyed and admired: Chris Bourg gave a rousing opening keynote featuring a rallying cry against mansplaining; Andreas Orphanides, who keynoted last year’s conference, gave yet another great talk on design and systems theory full of illuminating examples; Jason Thomale’s introduction to Pycallnumber wowed me and gave me a new tool I immediately planned to use; Becky Yoose navigated the tricky balance between using data to improve services and upholding our duty to protect patron privacy. I fear I’ve not mentioned many more excellent talks but I don’t want to ramble any further. Suffice to say, I always find Code4Lib worthwhile and this year was no exception.
Because many libraries are working on similar projects, I wanted to describe our process and the lessons learned. While ultimately I am pleased with what we produced, I had to restrict the scope due to time and interest from the group, and there are some important takeaways from that.
When we started the project, construction in the library and changes in staffing were disrupting normal functions. We knew it was important to restrict the project to the time necessary to complete finite deliverables. Rather than creating a new committee, we felt it would be helpful to focus effort on learning about privacy and bringing that knowledge back to departments and standing committees as an embedded value. We had a member from every department across the libraries (which ended up being ten people) and intentionally included a mix of department heads, librarians, and paraprofessional staff. Varying perspectives across departments and functions helped create good discussions and ensured that we would be less likely to miss something important.
Nevertheless, such a large group can be unwieldy, and adding yet another set of meetings can be a challenge for already overburdened schedules. For that reason, my co-chair and I spent a lot of time preplanning all the meetings and creating a specific project plan flexible enough to adapt to our needs, which we ended up having to do. Our plan had our work starting in early August and ending in December with three goals: 1) a complete policy, 2) internal best practices documentation, and 3) an outreach plan. As it turned out, we did not complete all the internal documentation, but the policy and outreach plan were finished on time, and the policy went out to the public at the same time as we reported on the work of the committee to all library staff in mid-January 2018.
One of the most useful aspects of our project was treating it as a professional development opportunity, with a number of reading assignments to complete before the kickoff meeting and throughout the work period (I have included some of the resources we used below). We also made sure to return from time to time to the theoretical, or guiding, principles of our work when we were getting too bogged down in minutiae. The plan ended up starting with research, followed by reading, writing, and practical research, more theory, and a final push to complete the draft of the policy and work on documentation.
Conducting the Privacy Audit
After spending some time talking through the project and figuring out some mechanics, we moved into the privacy audit stage. This requires systematically examining every system and practice the library uses and determining whether each falls in line with best practices. The ALA Privacy Checklists help with the latter part, but we also relied on Karen Coyle’s Library Privacy Audit spreadsheets. The first step was to brainstorm all the systems we used in our daily work and how we used them, and then divide those up by department. We mapped the systems we used into the spreadsheets, adding some systems along the way. As a result, systems that multiple departments use in different ways were reviewed by each of those departments, while systems unique to a department were reviewed just by that one. We then used the checklists to verify that we had covered the essentials in our audits and to raise additional issues that the spreadsheets did not cover.
This was not always straightforward for people unused to looking carefully at systems, but for that very reason it was a useful exercise. Dividing the work up between departments meant that everyone in the library had a better chance to learn how their work affected patron privacy and to ask questions about the processes of other departments as patron information moves across the library. For example, when a patron requests that the library purchase a book, the request is recorded in one system and leaves a trail through email as it moves between systems. After the request is placed, that information stays in various systems to ensure the patron gets the book after it arrives. As public and technical services talked through that process, it was easier to identify which pieces of it were important to good service and which created informational residue.
Compiling the audit results into a useful format was a challenge, and this is an area of this project that did not meet my initial hopes. My original plan was to create a flexible best practices manual that would record all the results of the audit and how closely they met the standards set by the checklists. In practice, that was way too complicated, and we ended up just focusing on the “Priority 1” actions, which are those that any library can meet no matter their technical abilities. In fact, many of our practices are much better than that, but breaking the work down into smaller steps was a much more feasible approach. Ultimately, the co-chairs took the research done by all the task force members and created a list of practices for each checklist that indicated where we met best practices and where we needed to do more work. We asked all departments to complete the project identified for each checklist by one year out, and to consider including “Priority 2” level projects in departmental goals for the following fiscal year.
Writing the Policy
After writing the rough draft, I listed all the sections where I was missing important information, and relied on the task force to fill in those sections. This was a fascinating process as we tried to explain technical processes that each of us understood in a way the whole group could understand and explain to a patron. Explaining the way the scanners could accidentally store a patron’s email address was an example of something that took multiple attempts to get right in the policy. The difficulty I had in writing was useful in itself, however. Each time I felt embarrassed or confused about describing one of our practices told me that this practice needed to change. I hope when we go back to revise the policy, the difficult sections will be easier to write because the practices will be better.
Outreach and Next Steps
One of the important privacy tasks in the checklists is the need for education and outreach to staff and patrons. The process of writing the policy in the task force took care of a lot of staff education, but this will need to be an ongoing process. For that reason, we recommended that the task force reconvene to check on the progress of privacy improvements 9-10 months after the adoption of the policy, though not necessarily with the exact same members. As we work through fixing our practices, this will be a great opportunity to have additional conversations with library staff and include more detail.
No matter how many checklists or guidelines we consult, we will not be able to cover all scenarios. For that reason, we asked people to keep the following guiding principles in mind when making decisions about data collection that could affect patron privacy.
Is it necessary to collect this information?
Could I tell who an individual was even if there was no name attached?
If I need to collect this information, what data can I remove to obscure personal information?
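As one concrete illustration of that last question, a salted hash can stand in for an identifier when a record must remain linkable but the patron’s actual ID should not be stored. This is only a sketch of one possible approach, not a practice from our policy; the salt value and function name are hypothetical, and a real deployment would need to manage and rotate the salt carefully.

```python
import hashlib

SALT = "rotate-this-secret-regularly"  # hypothetical secret salt

def pseudonymize(patron_id: str) -> str:
    """Replace a patron identifier with a short, consistent token."""
    return hashlib.sha256((SALT + patron_id).encode()).hexdigest()[:12]

# The same patron always maps to the same token, so records can be
# linked for service purposes, but the raw ID is never stored.
record = {"patron": pseudonymize("jdoe42"), "item": "QA76.9 .D3"}
print(record)
```

The same trade-off appears throughout privacy work: keep just enough linkage for good service while removing anything that directly names the patron.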
To tell patrons about the policy, we wrote a blog post and published it on the library website. Obviously this will not reach everyone, but it will at least catch our most active users–we know from usability testing that people do look at our blog post headlines! Meanwhile, our set of recommended outreach practices included creating guides for turning on privacy features in browsers (especially for specific vendor platforms with potentially problematic practices), partnering with our campus IT department on information security awareness, and presenting on privacy issues in research and teaching at a faculty professional development event.
As someone who enjoys writing policies and looking for ways to improve processes, this kind of project will always appeal to me. However, many members of the task force told me that it was a useful exercise to improve their own knowledge and keep up to date with how the privacy conversation has changed even in the last few years. Because this is such a constantly shifting topic, this will require active management to keep our policy accurate and our practices in line with changes in technology. The good news is that this was a grassroots effort that can be started up again with relatively little effort as long as someone cares to do so, which I suspect will now always be the case at my library.
Now that we are facing net neutrality regulation rollbacks here in the United States, what new roles could librarians play in the continued struggle to provide people with unrestricted access to information? ALA has long been dedicated to equal access to information, as clearly outlined in both the Core Values and Code of Ethics. You can read ALA’s Joint Letter to the FCC here. It emphasizes that “a non-neutral net, in which commercial providers can pay for enhanced transmission that libraries and higher education cannot afford, endangers our institutions’ ability to meet our educational mission.”
Net neutrality was discussed back in 2014 on this blog, in Margaret Heller’s post “What Should Academic Librarians Know about Net Neutrality?” We recommend starting there for background on the legal issues around net neutrality; it includes a fun trip into the physical spaces our content traverses to get onto our screens. One conclusion of that post was that libraries need to work on ensuring that everyone has access to broadband networks to begin with, and that more varied access ensures that no company has a monopoly over internet service in a location. There have been a number of projects along these lines over the past decade and more, and we encourage you to find one in your area and get involved.
Equal access to information starts with having access at all. Several library systems, including those in New York City, Brooklyn, and Chicago, have kicked off initiatives such as loaning out wi-fi hotspots for several-month periods.
Ideally, everyone will have secure and private internet access. The Library Freedom Project has been working for years to protect the privacy of patrons, including educating librarians about the threat of surveillance in modern digital technology, working with the Tor Project to configure Tor exit relays in library systems, and creating educational resources for teaching patrons about privacy.
These are some excellent steps towards a more democratic and equal access to information, but what happens if the internet as we know it fundamentally changes? Let’s explore some “alternative internets” that rely on municipal and/or grassroots solutions.
You might be familiar with wireless mesh networks for home use. You can set up a wireless mesh network in your own house to ensure even coverage throughout: since each node covers a certain area, you don’t have to rely on how close you are to the wireless router to connect, and you can easily rearrange the network as your needs change.
More broadly, mesh networks are dynamically routed networks in which nodes exchange routing information with their neighbors and can share internet access and local network services. They can be wireless or wired. A deployment may not be purely a “mesh” but rather a combination of mesh-network technology and point-to-point links, with direct connections between sites, each of which expands out into its own local mesh. BMX6/BMX7, BATMAN, and Babel are some of the most popular routing protocols (with highly memorable names!) for achieving a broad mesh network, but there are many more. Just as you can install devices in your home, you can cooperate with others in your community or region to create your own network. The LibreMesh project is an example of the way DIY wireless networks are being created in several European countries.
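As a toy illustration of what routing protocols like Babel or BATMAN accomplish: real protocols continuously exchange link metrics and adapt as nodes come and go, but the core idea is that each node reaches the internet through whichever neighbors form the shortest working path to a gateway. With a static, entirely hypothetical neighborhood graph, that can be sketched as a breadth-first search:

```python
from collections import deque

# Hypothetical neighborhood: each node lists its directly linked nodes
links = {
    "gateway": ["a", "b"],
    "a": ["gateway", "c"],
    "b": ["gateway", "d"],
    "c": ["a", "d"],
    "d": ["b", "c"],
}

def route_to_gateway(start):
    """Return a hop-minimal path from a node to the internet gateway."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == "gateway":
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

print(route_to_gateway("d"))  # traffic from "d" hops through "b"
```

The resilience of a mesh falls out of this picture: if node “b” disappears, “d” still reaches the gateway through “c” and “a,” with no central coordination required.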
Nineteen towns in Colorado are exploring alternative internet solutions, such as publicly owned broadband. Chattanooga offers public gigabit internet speeds. This has some major advantages for the city, including the ability to offer free internet access to low-income residents and to ensure that anyone who pays for access gets the same level of access, which is not the case in most cities, where some areas pay a high cost for a low signal. Even just the presence and availability of municipal broadband “has radically altered the way local politicians and many ordinary Chattanoogans conceive of the Internet. They have come to think of it as a right rather than a luxury.”1 A similar initiative in Roanoke is the Roanoke Valley Broadband Authority, which in an interesting twist lobbied the Virginia legislature to reduce oversight of its activities in a bill that originally stated that broadband services should focus on underserved areas–a reminder that many municipalities view this as an investment in business rather than a social justice issue.2 In Detroit, the Detroit Community Technology Project is working to bring community wireless to Detroit neighborhoods. New York City‘s Red Hook neighborhood relied on its mesh network during Hurricane Sandy to stay connected to the outside world. New York City also has the rapidly growing NYC Mesh community, with two supernodes and another coming later this year, uniting lower Manhattan with northern and central Brooklyn. Toronto also has an emerging mesh community with a handful of connected nodes. The Urbana-Champaign Independent Media Center developed CUWiN, which provided open wireless networks in “Champaign-Urbana, Homer, Illinois, tribal lands of the Mesa Grande Reservation, and the townships of South Africa.”3
Guifi.net is a wi-fi network that covers a large part of Spain and defines itself as “the biggest free, open and neutral network.” It was developed in 2004 as a response to the lack of broadband internet in rural areas of the Catalonia region, where commercial providers offered either no connection or a very poor one. Guifi has established a Wireless Commons License with guidelines that can be adopted by other networks. At the time of posting, 34,306 nodes were active, with over 17,000 more planned.
Finally, Brooklyn Public Library was granted $50,000 from IMLS to develop a mesh network called BKLYN Link, along with a technology fellowship program for 18-24 year olds. We look forward to what emerges from this initiative!
The internet got its start when college campuses connected to each other, first across short geographic distances and eventually across much longer ones. Could we see academic and public libraries working together, leading a return to those old ways of accessing the internet for a new era?
Meanwhile, it’s important to ensure that the FCC has appropriate regulatory powers over ISPs, otherwise we have no recourse if companies choose to prioritize packets. You should contact your legislators and make sure that the people at your campus who work with the government are sharing their perspectives as well. You can get some help with a letter to Congress from ALA.
I recently moderated a panel discussion program titled “Building Bridges in a Divisive Climate: Diversity in Libraries, Archives, and Museums.”1 Participating in organizing this program was an interesting experience. Throughout, my perspective constantly shifted back and forth among (i) someone who is a woman of color in the US who experiences and deals with small and large daily acts of discrimination, (ii) an organizer/moderator trying to get as many people as possible to attend and participate, and (iii) a mid-career librarian trying to contribute to group efforts to move the diversity agenda forward in a positive and inclusive way in my own institution.
In the past, I have participated in multiple diversity-themed programs either as a member of the organizing committee or as an attendee and have been excited to see colleagues organize and run such programs. But when asked to write or speak about diversity myself, I always hesitated and declined. This puzzled me for a long time because I couldn’t quite pinpoint where my own resistance was coming from. I am writing about this now because I think it may shed some light on why it is often difficult to get minorities on board with diversity-related efforts.
A common issue that many organizers experience is that these diversity programs often draw many allies who are already interested in working on issues of diversity, equity, and inclusion, but not necessarily many of those whom the organizers consider the target audience, namely, minorities. What may be the reason? Perhaps I can find a clue in my own resistance to speaking or writing about diversity, preferring rather to be in the audience at a certain distance or to be an organizer helping with logistics behind the scenes.
To be honest, I always harbored a level of suspicion about how much of the sudden interest in diversity is real and how much of it is simply about being on the next hot trend. Trends come and go, but issues lived through the lives of those who belong to various systematically disadvantaged and marginalized groups are not trends. Although I have always been enthusiastic about attending diversity-focused programs and was happy to see diversity, equity, and inclusion discussed in articles and talks, I wasn’t ready to sell out my lived experience as part of a hot trend, a potential fad.
To be clear, I am not saying that any of the diversity-related programs or events were asking speakers or authors to be a sell-out. I am only describing how things felt to me and where my own resistance was originating. I have been and am happy to see diversity discussed even as a one-time fad. Better a fad than no discussion at all.
One may argue that diversity has been actively discussed for quite some time now. A few years, maybe several, or even more. Some of the prominent efforts to increase diversity in librarianship that I know of go as far back as 2007, when Oregon State University Libraries sponsored two scholarships to the Code4Lib conference, one for women and the other for minorities, which have continued since then as the Code4Lib Diversity Scholarship.2 But if one has lived one’s entire life as a member of a systematically disadvantaged group – as a woman, a person of color, a person of a certain sexual orientation, a person of a certain faith, a person with a certain disability, etc. – one knows better than to expect some sudden interest in diversity to change the world we live in, and most of the people in it, overnight.
I admit I have been watching the diversity discussion gain more and more traction in librarianship with growing excitement and concern at the same time, for I felt that all that is being achieved through so many people’s efforts could be wiped out at any moment. The more momentum it accrues, I worried, the more serious a backlash it may come to face. For example, it has been openly stated that seeking racial/ethnic diversity is superficial and for appearance’s sake, and that those who appear to belong to “Team Diversity” do not work as hard as those on “Team Mainstream.” People make this type of statement in order to create and strengthen a negative association between the many non-normative dimensions of diversity (such as race/ethnicity, religion, sexual orientation, immigration status, and disability) and unfavorable value judgments (such as inferior intellectual capacity or poor work ethic).3 According to this kind of flawed reasoning, a tech company whose entire staff consists of twenty-something white male programmers with college degrees may well have achieved a high level of diversity, because the staff might have potentially (no matter how unlikely) substantial intellectual and personal differences in their thinking, background, and experience, and therefore their clear homogeneity is no real problem – just a matter of trivial “appearance.” The motivation behind this kind of intentional misdirection is to derail current efforts toward expanding diversity, equity, and inclusion by taking people’s attention away from the real issue of systematic marginalization in our society. Of course, the ultimate goal of all diversity efforts should be not the mere inclusion of minorities but enabling them to have agency equal to the agency the privileged already possess. But objections are being raised even against mere inclusion. Anti-diversity sentiment is real, and people will try to rationalize it in any way they can.
Then of course, the other source of my inner resistance to speaking or writing about diversity has been the simple fact that thinking about diversity, equity, and inclusion does not take me to a happy place. It reminds me of many bad experiences accumulated over time that I would rather not revisit. This is why I admire those who have spoken and written about their lived experience as a member of a systematically discriminated and marginalized group. Their contribution is a remarkably selfless one.
I don’t have a clear answer to how this reflection on my own resistance to actively speaking or writing about diversity will help future organizers. But clearly, being asked to join many times had an effect, since I finally did accept the invitation to moderate a panel and wrote this article. So, if you are serious about getting more minorities – across religions, genders, disabilities, races, etc. – to speak or write on the issue, invite them, and be ready to do it over and over again even if they decline. Don’t expect that they will trust you at the first invitation. Understand that by accepting such an invitation, minorities risk far more than non-minorities ever will. The survey I ran for the registrants of the “Building Bridges in a Divisive Climate: Diversity in Libraries, Archives, and Museums” panel discussion program showed several respondents citing actual or potential workplace backlash from participating in diversity efforts as a serious deterrent.4 If we would like to see more minorities participate in diversity efforts, we must create a safe space for everyone and take steps to deal with any backlash that may ensue.5
A Gentle Intro or a Deep Dive?
Another issue that many organizers of diversity-focused events, programs, and initiatives struggle with is two conflicting expectations from their audience. On one hand, there are those who are familiar with diversity, equity, and inclusion issues and want to see how institutions and individuals are going to take their initial efforts to the next level. These people often come from organizations that have already implemented certain pro-diversity measures, such as search advocates for the hiring process6 and educational programs that familiarize the staff with the topic of diversity, equity, and inclusion.7 On the other hand, there are still many who are not quite sure what diversity, equity, and inclusion exactly mean in a workplace or in their lives. Those people would continue to benefit from a gentle introduction to things such as privilege, microaggression, and unconscious biases.
The feedback surveys collected after the “Building Bridges in a Divisive Climate: Diversity in Libraries, Archives, and Museums” panel discussion program showed these two different expectations. Some people responded that they deeply appreciated the personal stories shared by the panelists, noting that they did not realize how often minorities are marginalized even in one day’s time. Others, however, said they would like to hear more about actionable items and strategies, beyond personal stories, that can be implemented to further advance the values of diversity, equity, and inclusion. Balancing these two different demands is a hard act for organizers. However, it is a testament to our collective achievement that more and more people are aware of the importance of continuing efforts to improve diversity, equity, and inclusion in libraries, archives, and museums.
I do think that we need to continue to provide a general introduction to diversity-related issues, exposing people to everyday experience of marginalized groups such as micro-invalidation, impostor syndrome, and basic concepts like white privilege, systematic oppression, colonialism, and intersectionality. One of the comments we received via the feedback survey after our diversity panel discussion program was that the program was most relevant in that it made “having colleagues attend with me to hear what I myself have never told them” possible. General programs and events can be an excellent gateway to more open and less guarded discussion.
At the same time, it seems to be high time for us in libraries, museums, and archives to take a deep dive into different realms of diversity, equity, and inclusion as well. Diversity comes in many dimensions such as age, disability, religion, sexual orientation, race/ethnicity, and socioeconomic status. Many of us feel more strongly about one issue than others. We should create opportunities for ourselves to advocate for the specific diversity issues that we care about most.
The only thing I would emphasize is that one specific dimension of diversity should not be used as an excuse to neglect others. An example would be exploring socioeconomic inequality without at the same time addressing how it combines with the systematic oppression of marginalized groups such as Native Americans, women, or immigrants. All dimensions of diversity are closely knit with one another, and they do not exist independently. For this reason, a deep dive into different realms of diversity, equity, and inclusion must be accompanied by a strong awareness of their intersectionality.8
Recommendations and Resources for Future Organizers
Organizing a diversity-focused program takes a lot of effort. While planning the “Building Bridges in a Divisive Climate: Diversity in Libraries, Archives, and Museums” panel discussion program at the University of Rhode Island Libraries, I worked for approximately two months with my library dean, Karim Boughida, who originally came up with the idea of having a panel discussion program at the University of Rhode Island Libraries, and Renee Neely in the libraries’ diversity initiatives. For panelists, we decided to recruit as many minorities from diverse institutions and backgrounds as possible. We were fortunate to find panelists from a museum, an archive, and both a public and an academic library, with experience in the field ranging from only a few years to over twenty-five, from a relatively new archivist to experienced museum and library directors. Our panel consisted of one-hundred percent people of color. The thoughts and perspectives that those panelists shared were, as a result, remarkably diverse and insightful. For this reason, I recommend spending some time recruiting the right speakers if your program will have speakers.
Another thing I would like to share is the set of questions that I created for the panel discussion. Even though we had a whole hour, I was able to cover only a few of them. But since I discussed all of these questions in advance with the panelists, and they helped me put the final touches on some of them, I think they can be useful to future organizers who may want to run a similar program. They can be utilized for a panel discussion, an unconference, or other types of programs. I hope this is helpful and saves time for other organizers.
Sample Questions for the Diversity Panel Discussion
Why should libraries, archives, museums pay attention to the issues related to diversity, equity, and inclusion?
In what ways do you think the lack of diversity in our profession affects the perception of libraries, museums, and archives in the communities we serve?
Do you have any personal or work-related stories that you would like to share that relate to diversity, equity, and inclusion issues?
How did you get interested in diversity, equity, and inclusion issues?
Suppose you discovered that your library’s, archive’s or museum’s collection includes prejudiced information, controversial objects/ documents, or hate-inducing material. What would you do?
Suppose a group of your library / archive / museum patrons want to use your space to hold a local gathering that involves hate speech. What would you do? What would you be most concerned about, and what would you consider in deciding how to respond?
Do you think libraries, archives, and museums are a neutral place? What do you think neutrality means to a library, an archive, a museum in practice in a divisive climate such as now?
What are some of the areas in libraries, museums, and archives where you see privileges and marginalization function as a barrier to achieving our professional values – equal access and critical thinking? What can we do to remove those barriers?
Could you tell us how colonialist thinking and practice are affecting libraries, museums, and archives, either consciously or unconsciously? Since not everyone is familiar with what colonialism is, please begin with your brief interpretation of what colonialist thinking or practice looks like in libraries, museums, and archives.
What do you think libraries, archives, and museums can do more to improve critical thinking in the community that we serve?
Although libraries, archives, and museums have been making efforts to recruit, hire, and retain diverse personnel in recent years, the success rate has been relatively low. For example, in librarianship, it has been reported that often those hired through these efforts experienced backlash at their own institutions, were subject to unrealistic expectations, and met with an unsupportive environment, which led to burnout and a low retention rate of talented people. From your perspective – either as a manager hiring people or as a relatively new librarian who has looked for jobs – what do you think can be done to improve this type of unfortunate situation?
Many in our profession express their hesitation to actively participate in diversity, equity, and inclusion-related discussion and initiatives at their institutions because of the backlash from their own coworkers. What do you think we can do to minimize such backlash?
Some people in our profession express strong negative feelings regarding diversity, equity, and inclusion-related initiatives. How much of this type of anti-diversity sentiment do you think exists in your field? Some worry that it is growing even faster in the current divisive and intolerant climate. What do you think we can do to counter such anti-diversity sentiment?
There are many who are resistant to the values of diversity, equity, and inclusion. Have you taken any action to promote and advance these values facing such resistance? If so, what was your experience like, and what would be some of the strategies you may recommend to others working with those people?
Many people in our profession want to take our diversity, equity, and inclusion initiatives to the next level, beyond offering mere lip service or simply playing a numbers game for statistical purposes. What do you think that next level may be?
Lastly, I felt strongly about ensuring that the terms and concepts often thrown around in diversity/equity/inclusion-related programs and events – such as intersectionality, white privilege, microaggression, patriarchy, colonialism, and so on – are not used to unintentionally alienate those who are unfamiliar with them. These concepts are useful and convenient shortcuts that allow us to communicate a large set of ideas previously discussed and digested, so that we can move our discussion forward more efficiently. They should not make people feel uncomfortable or generate any hint of superiority or inferiority.
I am sharing the survey questions, the video links, and the glossary in the hope that they may be useful tools for future organizers. For example, one may decide to provide a glossary like this before the program or run an unconference that aims at unpacking the meanings of these terms and discussing how they relate to people’s daily lives.10
In Closing: Diversity, Libraries, Technology, and Our Own Biases
Disagreements on social issues are natural. But the divisiveness that we are currently experiencing seems to be particularly intense. This deeply concerns us, educators and professionals working in libraries, archives, and museums. Libraries, archives, and museums are public institutions dedicated to promoting and advancing civic values. Diversity, equity, and inclusion are part of those core civic values that move our society forward. This task, however, has become increasingly challenging as our society moves in a more and more divisive direction.
To make matters even more complicated, libraries, archives, and museums in general lack diversity in their staff composition. This homogeneity can impede achieving our own mission. According to the recent report from Ithaka S+R released this August, we do not appear to have gotten very far. Their report, “Inclusion, Diversity, and Equity: Members of the Association of Research Libraries (ARL) – Employee Demographics and Director Perspectives,” shows that libraries and library leadership/administration are both markedly white-dominant (71% and 89% white non-Hispanic, respectively).11 Also, while librarianship in general is female-dominated (61%), the technology field in libraries is starkly male-dominated (70%), along with makerspace (65%), facilities (64%), and security (73%) positions.12 The survey results in the report show that while the majority of library directors say there are barriers to achieving more diversity in their library, they attribute those barriers to external rather than internal factors, such as the library’s geographic location and the insufficiently diverse applicant pool resulting from that location. What is fascinating, however, is that this directly conflicts with the fact that libraries show little variation in the ratio of white staff based on degree of urbanization. Equally interesting is that the staff in more homogeneous and less diverse (over 71% white non-Hispanic) libraries think that their libraries are much more equitable than the library community (57% vs. 14%), and that library directors (and staff) consider their own library to be more equitable, diverse, and inclusive than the library community with respect to almost every category, such as race/ethnicity, gender, LGBTQ, disabilities, veterans, and religion.
While these findings in the Ithaka S+R report are based upon the survey results from ARL libraries, similar staff composition and attitudes can be assumed to apply to libraries in general. There is a great need for both the library administration and the staff to understand their own unconscious and implicit biases, workplace norms, and organizational culture that may well be thwarting their own diversity efforts.
Diversity, equity, and inclusion have certainly been topics of active discussion in recent years. Many libraries have established a committee or a task force dedicated to improving diversity. But how are those efforts paying off? Are they going beyond simply paying lip service? Are they making a real difference in the everyday experience of minority library workers?13 Can we improve, and if so, where and how? Where do we go from here? These are the questions that we will need to examine in order to take our diversity efforts in libraries, archives, and museums to the next level.
Note that this kind of biased assertion often masquerades as an objective intellectual pursuit in academia when in reality it is a direct manifestation of an existing prejudice, reflecting the limited and shallow experience of the person posing the question. A good example is the remark made in 2005 by Larry Summers, the former Harvard president. He suggested that one reason for the relatively few women in top positions in science may be “issues of intrinsic aptitude” rather than widespread, indisputable everyday discrimination against women. He resigned after the Harvard Faculty of Arts and Sciences cast a vote of no confidence. See Scott Jaschik, “What Larry Summers Said,” Inside Higher Ed, February 18, 2005, https://www.insidehighered.com/news/2005/02/18/summers2_18. ↩
For this purpose, asking all participants to respect one another’s privacy in advance can be a good policy. In addition to this, we specifically decided not to stream or record our panel discussion program, so that both panelists and attendees can freely share their experience and thoughts. ↩
For the limitations of the mainstream diversity discussion in LIS (library and information science) with the focus on inclusion and cultural competency, see David James Hudson, “On ‘Diversity’ as Anti-Racism in Library and Information Studies: A Critique,” Journal of Critical Library and Information Studies 1, no. 1 (January 31, 2017), https://doi.org/https://doi.org/10.24242/jclis.v1i1.6. ↩
It’s a problem as old as library websites themselves: how to represent the times when a library building is open in a way that’s easy for patrons to understand and easy for staff to update?
Every website or content management system has its own solution that can’t quite suit our needs. In a previous position, I remember using a Drupal module which looked slick and had a nice menu for entering data on the administrative side…but it was made by a European developer and displayed dates in the (inarguably more logical) DD/MM/YYYY format. I didn’t know enough PHP at the time to fix it, and it would’ve confused our users, so I scrapped it.
Then there’s the practice of simply manually updating an HTML fragment that has the hours written out. This approach has advantages that aren’t easily dismissed: you can write out detailed explanations, highlight one-off closures, adjust to whatever oddity comes up. But it’s tedious for staff to edit a web page and easy to forget. This is especially true if hours information is displayed in several places; keeping everything in sync is an additional burden, with a greater possibility for human error. So when we went to redesign our library website, developing an hours application that made entering data and then reusing it in multiple places easy was at the forefront of my mind.
Why is this so hard?
One might think displaying hours is easy. The end products often look innocuous. But there are a bevy of reasons why it’s complicated for many libraries:
open hours differ across different branches
hours of particular services within a branch may not fully overlap with the library building’s open hours
a branch might close and re-open during the day
a branch might be open later than midnight, so technically “closing” on a date different than when it opened
holidays, campus closures, unexpected emergencies, and other exceptions disrupt regular schedules
in academia, schedules differ depending on whether class is in session, it’s a break during a term, or it’s a break between terms
the staff who know or determine a branch’s open hours aren’t necessarily technically skilled and may be spread across disparate library departments
dates and times are unique forms of data with their own unique displays, storage types, and operations (e.g. chronological comparisons)
Looking at other libraries, the struggle to represent their business hours is evident. For instance, the University of Illinois has an immense list of library branches and their open hours on its home page. There’s a lot to like about the display; it’s on the home page so patrons don’t have to go digging for the info, there’s a filter by name feature, the distinct open/closed colors help one to identify at a glance which places are open, the library branch rows expand with extra information. But it’s also an overwhelming amount of information longer than a typical laptop screen.
Many libraries use SpringShare’s LibCal as a way of managing and displaying their open hours. See Loyola’s Hours page with its embedded table from LibCal. As a disclaimer, I’ve not used LibCal, but it comes with some obvious caveats: it’s a paid service that not all libraries can afford, and it’s Yet Another App outside the website CMS. I’ve also been told that the hours entry has a learning curve and that it’s necessary to use the API for most customization. So, much as I appreciate the clarity of the LibCal schedule, I wanted to build an hours app that would work well for us, providing flexibility in terms of data format and display.
Our website CMS Wagtail uses a concept called “snippets” to store pieces of content which aren’t full web pages. If you’re familiar with Drupal, Snippets are like a more abstract version of Blocks. We have a snippet for each staff member, for instance, so that we can connect particular pages to different staff members but also have a page where all staff are displayed in a configurable list. When I built our hours app, snippets were clearly the appropriate way to handle the data. Ideally, hours would appear in multiple places, not be tied to a single page. Snippets also have their own section in the CMS admin side which makes entering them straightforward.
Our definition of an “open hours” snippet has but a few components:
the library branch the hours are for
the date range being described, e.g. “September 5th through December 15th” for our Fall semester
a list of open hours for each weekday, e.g. Monday = “8am – 10pm”, Tuesday = “8am – 8pm”, etc.
There are some nuances here. First, for a given academic term, staff have to enter hours once for each branch, so there is quite a bit of data entry. Secondly, the weekday hours are actually stored as text, not a numeric data type. This lets us add parentheticals such as “8am – 5pm (no checkouts)”. While I can see some theoretical scenarios where having numeric data is handy, such as determining if a particular branch is open on a given hour on a given date, using text simplified building the app’s data model for me and data entry for staff.
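The snippet’s shape is small enough to sketch in a few lines. This is a simplified stand-in using a plain dataclass with made-up names, not the actual Wagtail snippet definition, but it shows the three components and the text-based weekday hours described above:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of the "open hours" snippet's shape. The real thing is a
# Wagtail snippet (a Django model); the field names here are illustrative.
@dataclass
class OpenHours:
    branch: str         # which library branch these hours describe
    start: date         # first day the schedule applies
    end: date           # last day the schedule applies
    weekday_hours: dict # e.g. {"Monday": "8am – 10pm (no checkouts)"} — text, not numbers

fall = OpenHours(
    branch="Main",
    start=date(2018, 9, 5),
    end=date(2018, 12, 15),
    weekday_hours={"Monday": "8am – 10pm", "Tuesday": "8am – 8pm"},
)
print(fall.weekday_hours["Monday"])  # hours stay as display-ready text
```

Because `weekday_hours` values are plain strings, a parenthetical like “(no checkouts)” costs nothing, which is exactly the trade-off described above.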
But what about when the library closes for a holiday? Each holiday effectively triples the data entry for a term: we need a data set for the time period leading up to the holiday, one for the holiday itself, and one for the time following it. For example, when we closed for Thanksgiving, our Fall term would’ve been split into a pre-Thanksgiving, during Thanksgiving, and post-Thanksgiving triad. And more so for each other holiday.
To alleviate the holiday problem, I made a second snippet type called “closures”. Closures let us punch holes in a set of open hours; rather than require pre- and post- data sets, we have one open hours snippet for the whole term and then any number of closures within it. A closure is composed of only a library branch and a date range. Whenever data about open hours is passed around inside our CMS, the app first consults the list of closures and then adjusts appropriately.
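The closure lookup can be sketched roughly like this; the names and signature are hypothetical, not the app’s actual code, but the logic is the same: check the closure list first, then fall back to the term’s weekday hours:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of a "closure" snippet: just a branch and a date range.
@dataclass
class Closure:
    branch: str
    start: date  # first closed day, inclusive
    end: date    # last closed day, inclusive

def hours_for(day, branch, weekday_hours, closures):
    """Return the display text for a branch on a date, honoring closures."""
    for c in closures:
        if c.branch == branch and c.start <= day <= c.end:
            return "Closed"
    # No closure applies: use the term-wide schedule keyed by weekday name.
    return weekday_hours[day.strftime("%A")]

# One open-hours set for the whole term, plus a Thanksgiving closure
# punched into it — no pre-/post-holiday data sets needed.
closures = [Closure("Main", date(2018, 11, 22), date(2018, 11, 25))]
weekday_hours = {"Thursday": "8am – 10pm", "Monday": "8am – 10pm"}
print(hours_for(date(2018, 11, 22), "Main", weekday_hours, closures))  # Closed
print(hours_for(date(2018, 11, 26), "Main", weekday_hours, closures))  # 8am – 10pm
```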
The open hours for the current day are displayed prominently on our home page. When we rebuilt our website, surfacing hours information was a primary design goal. Our old site’s hours page wasn’t exactly easy to find…yet it was the second most-visited page behind the home page.1 In our new site, the hours app allows us to show the same information in a few places, for instance as a larger table that shows our open times for a full week. The page showing the full table will also accept a date parameter in its URL, showing our schedule for future times. This lets us put up a notice about changes for periods like Thanksgiving week or Spring break.
What really excited me about building an hours application from the ground up was the chance to include an API (inside the app’s views.py file, which in turn uses a couple functions from models.py). The app’s public API endpoint is at https://libraries.cca.edu/hours?format=json and by default it returns the open hours for the current day for all our library branches. The branch parameter allows API consumers to get the weekly schedule for a single branch while the date parameter lets them discover the hours for a specific date.
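For illustration, a consumer could build request URLs against the endpoint like this. The endpoint and the `format`, `branch`, and `date` parameter names come from the description above; the branch name and the date format used here are assumptions:

```python
from urllib.parse import urlencode

# Public hours endpoint from the post; parameter values below are made up.
BASE = "https://libraries.cca.edu/hours"

def hours_url(branch=None, on=None):
    """Build a query URL for the hours API (branch name and date format assumed)."""
    params = {"format": "json"}
    if branch:
        params["branch"] = branch  # weekly schedule for one branch
    if on:
        params["date"] = on        # hours for a specific date
    return BASE + "?" + urlencode(params)

print(hours_url())                                   # today's hours, all branches
print(hours_url(branch="main", on="2018-11-22"))     # hypothetical branch and date
```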
I’m using the API in two places, our library catalog home page and as an HTML snippet when users search our discovery layer for “hours” or “library hours”. I have hopes that other college websites will also want to reuse this information, for instance on our student portal or on a campus map. One can see the limitation of using text strings as the data format for temporal intervals; an application trying to use this API to determine “is a given library open at this instant” would have to do a bit of parsing to determine if the current time falls within the range. In the end, the benefits for data entry and straightforward display make text the best choice for us.
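To make that parsing caveat concrete, here is one sketch of what an API consumer might do to answer “is the library open right now” from a text range such as “8am – 5pm (no checkouts)”. The regex and function names are my own, not part of the app:

```python
import re
from datetime import time

def parse_range(text):
    """Turn '8am – 5pm (no checkouts)' into a (time, time) tuple, or None."""
    m = re.search(
        r"(\d{1,2})(?::(\d{2}))?\s*(am|pm)\s*[–-]\s*(\d{1,2})(?::(\d{2}))?\s*(am|pm)",
        text, re.I)
    if not m:
        return None  # e.g. "Closed" or free-form notes with no time range
    def to_time(hour, minute, meridiem):
        h = int(hour) % 12 + (12 if meridiem.lower() == "pm" else 0)
        return time(h, int(minute or 0))
    return to_time(m[1], m[2], m[3]), to_time(m[4], m[5], m[6])

def is_open(text, now):
    """True if `now` (a datetime.time) falls within the parsed range."""
    rng = parse_range(text)
    return rng is not None and rng[0] <= now <= rng[1]

print(is_open("8am – 5pm (no checkouts)", time(12, 30)))  # True
print(is_open("Closed", time(12, 30)))                    # False
```

Even this toy version ignores ranges that cross midnight, which is one more reason the trade-off favors keeping the text format and letting consumers parse only when they truly need to.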
To summarize, the hours app fulfills our goals for the new website in a few ways. It allows us to surface our schedule not only on our home page but also in other places, sets us up to be able to reuse the information in even more places, and minimizes the burden of data entry on our staff. There are still improvements to be made—as I was writing this post I discovered a problem with cached API responses being outdated—but on the whole I’m very happy with how everything worked out.
Libraries, I beg you, make your open hours obvious! People want to know. ↩
Hello! I’m Ashley Blewer, and I’ve recently joined the ACRL TechConnect blogging team. For my first post, I wanted to interview Lorena Ramírez-López. Lorena is working (among other places) at the D.C. Public Library on their Memory Lab initiative, which we will discuss below. Although this upcoming project targets public libraries, Lorena has a history of dedication to providing open technical workflows and documentation to support any library’s mission to set up similar “digitization stations.”
Hi Lorena! Can you please introduce yourself?
Hi! I’m Lorena Ramírez-López. I am a born and raised New Yorker from Queens. I went to New York University for Cinema Studies and Spanish where I did an honors thesis on Paraguayan cinema in regards to sound theory. I continued my education at NYU and graduated from the Moving Image Archiving and Preservation program where I concentrated on video and digital preservation. I was one of the National Digital Stewardship Residents for the American Archive of Public Broadcasting. I did my residency at Howard University television station (WHUT) in Washington D.C from 2016 until this June 2017. Along with being the project manager for the Memory Lab Network, I do contracting work for the National Portrait Gallery on their time based media artworks, joined the Women who Code community, and teach Spanish at Fluent City!
Tell us a little bit about DCPL’s Memory Lab and your role in it.
The DC Public Library’s Memory Lab was a National Digital Stewardship Project back in 2014 through 2015. This was the baby of DCPL’s National Digital Stewardship Resident, Jaime Mears, back in the day. A lot of my knowledge of how it started comes from reading the original project proposal, which you can find on the Library of Congress’s website; Jaime Mears’s final report on the Memory Lab is on the DC Library website. But to summarize its origin story, the Memory Lab was created as a local response to the fact that communities are generating a lot of digital content while still keeping many of their physical materials like VHS, miniDVs, and photos, but might not necessarily have the equipment or knowledge to preserve their content. It has been widely accepted in the archival and preservation fields that we have an approximate 15- to 20-year window of opportunity to digitally preserve legacy audio and video recordings on magnetic tape because of the rate of degradation and the obsolescence of playback equipment. The term “video at risk” might ring a bell to some people. There are also photographs and film, particularly color slides and negatives and moving image film formats, that will fade and degrade over time. People want to save their memories as well as share them on a digital platform.
There are well-established best practices for digital preservation in archival practice, but these guidelines and documentation are generally written for a professional audience. And while there are various personal digital archiving resources for a public audience, they aren’t really easy to find on the web, and a lot of these resources aren’t updated to reflect the changes in our technology, software, and habits.
That being the case, our communities risk massive loss of history and culture! And to quote Gabriela Redwine’s Digital Preservation Coalition report, “personal digital archives are important not just because of their potential value to future scholars, but because they are important to the people who created them.”
So the Memory Lab was the library’s local response in the Washington D.C. area to bridge this gap of digital archiving knowledge and provide the tools and resources for library patrons to personally digitize their own personal content.
My role is maintaining this memory lab (digitization rack). When hardware gets worn down or breaks, I fix it. When software for our computers upgrade to newer systems, I update our workflows.
I am currently re-doing the website to reflect the new wiring I did and updating the instructions with more explanations and images. You can expect gifs!
You recently received funding from IMLS to create a Memory Lab Network. Can you tell us more about that?
Yes! The DC Public Library in partnership with the Public Library Association received a national leadership grant to expand the memory lab model.
During this project, the Memory Lab Network will partner with seven public libraries across the United States. Our partners will receive training, mentoring, and financial support to develop their own memory lab as well as programs for their library patrons and community to digitize and preserve their personal and family collections. A lot of focus is put on the digitization rack, mostly because it’s cool, but the memory lab model is not just creating a digitization rack. It’s also developing classes and online resources for the community to understand that digital preservation doesn’t just end with digitizing analog formats.
By creating these memory labs, these libraries will help bridge the digital preservation divide between the professional archival community and the public community. But first we have to train and help the libraries set up the memory lab, which is why we are providing travel grants to Washington, D.C. for an in-depth digital preservation bootcamp and training for these seven partners.
If anyone wants to read the proposal, the Institute of Museum and Library Sciences has it here.
What are the goals of the Memory Lab Network, and how do you see this making an impact on the overall library field (outside of just the selected libraries)?
One of the main goals is to see how well the memory lab model holds up. The memory lab was a local response to a need but it was meant to be replicated. This funding is our chance to see how we can adapt and improve the memory lab model for other public libraries and not just our own urban library in Washington D.C.
There are actually many institutions and organizations that have digitization stations and/or the knowledge and resources, but we just don’t realize who they are. Sometimes it feels like we keep reinventing the wheel with digital preservation. There are plenty of websites that at one time had contemporary information on digital preservation, with links to articles and other explanations. Then those websites weren’t sustained and remained stagnant, housing a series of broken links and lost PDFs. We could (and should) be better about not just creating new resources, but updating the ones we have.
The reasons why some organizations aren’t transparent or don’t update their information, or why we aren’t searching in certain areas, vary, but we should be better at documenting and sharing our information with our archival and public communities. This is why the other goal is to create a network to better communicate and share.
What advice do you have for librarians thinking of setting up their own digitization stations? How can someone learn more about aspects of audiovisual preservation on the job?
If you are thinking of setting up your own digitization station, announce that not only to your local community but also the larger archival community. Tell us about this amazing adventure you’re about to tackle. Let us know if you need help! Circulate and cite that article you thought was super helpful. Try to communicate not only your successes but also your problems and failures.
We need to be better at documenting and sharing what we’re doing, especially when dealing with how to handle and repair playback decks for magnetic media. Beyond the fact that the companies simply stopped supporting this equipment, a lot of the information on how to support and repair it could have been shared or passed down by really knowledgeable experts, but it wasn’t. Now we’re all holding our breath and pulling our hair out because this one dude who repairs U-matic tapes is thinking about retiring. This lack of information and communication shouldn’t be the case in an environment where we can email and call.
We tend to freak out about audiovisual preservation because we see how other professional institutions set up their workflows and the amount of equipment they have. The great advantage libraries have is that not only can they act practically with their resources but also they have the best type of feedback to learn from: library patrons. We’re creating these memory lab models for the general public so getting practical experience, feedback, and concerns are great ways to learn more on what aspects of audiovisual preservation really need to be fleshed out and researched.
And for fun, try creating and archiving your own audiovisual media! You technically already do with taking photos and videos on your phone. Getting to know your equipment and where your media goes is very helpful.
Thanks very much, Lorena!
For more information on how to set up a “digitization station” at your library, I recommend Dorothea Salo’s robust website detailing how to build an “audio/video or digital data rescue kit”, available here.
This year’s Open Access Week at my institution was a bit different than in years past. With our time constrained by conference travel and staff shortages leaving everyone over-scheduled, we decided to aim for “virtual programming”: a week of blog posts and an invitation to view our open access research guide. While this lacked the splashiness of programming in prior years, in another sense it felt important to do the work this way. Yes, it may well be that only people already well-connected to the library saw any of this material. But promotion of open access requires a great deal of self-education among librarians and other library insiders before we can promote it more broadly. For many libraries, it may be the case that there are only a few “open access” people, and Open Access Week ends up being the only time during the year the topic is addressed by the library as a whole.
All the Colors of Open Access: Black and Green and Gold
There were a few shakeups in scholarly communication and open access over the past few months that made some of these discussions more broadly interesting across the academic landscape. The on-going saga of the infamous Beall’s List has been a major 2017 story. An article in the Chronicle of Higher Education about Jeffrey Beall was emailed to me more than once, and captured the complexity of why such a list is both an appealing solution to a problem but also reliant on sometimes questionable personal judgements. Jeffrey Beall’s attitude towards other scholarly communications librarians can be simplistic and vindictive, as an interview with Times Higher Education in August made clear. June saw the announcement of Cabell’s Blacklist, which is based on Beall’s list, and uses a list of criteria to judge journal quality. At my own institution I know this prompted discussions of what the purpose of a blacklist is, versus using a vetted list of open access journals like the Directory of Open Access Journals. As a researcher in an article in Nature about this product states, it’s likely that a blacklist is more useful for promotion and tenure committees or hiring committees to judge applicants more than for potential authors to find good journals in which to publish.
This also completely leaves aside the green open access options, in which authors can negotiate with their publisher to make a version of their article openly available–sometimes the final published version, but at least the accepted text before layout. While publishing an article in an open access journal has many benefits, green open access can meet the open access goals of faculty without the worry of paying additional fees or vetting journal quality. But we still need to educate people about green open access. I was chatting recently with a friend who is an economist, and he wondered how open access worked in other disciplines, since he was used to all papers being released as working papers before being published in traditional journals. I contrast this conversation with another, with someone in a very different discipline, who was concerned that posting even a summary of research online could constitute prior publication. Given this wide disparity between disciplines, we will always struggle to cast a wide message about green open access. But I firmly believe that there are individuals within all disciplines who will be excited about open access, and that they will get at least some of their colleagues on board–or perhaps their graduate students. These people may be located on the interdisciplinary side, with one foot in a more preprint-friendly discipline: for instance, the bioethicists in the theology department, or the history of science people in the history department. And even the most well-meaning people forget to make their work open access, so we should make participation as easy as possible–while not making it so frictionless that people forget why they are doing it. Make sure there are still avenues for conversation.
Making things easy to do requires having a good platform, but that became more complicated in August when Elsevier acquired bepress, which prompted discussions among many librarians about their values around open access and whether relying on vendors for open access platforms was a foolish gamble (the Library Loon summarizes this discussion well). This is a complex question, as the kinds of services provided by bepress’s Digital Commons go well beyond a simple hosting platform, and the acquisition fits the strategy I pointed out Elsevier was pursuing in my Open Access 2016 post. Convincing faculty to participate in open access requires a number of strategies, and things like faculty profiles, readership dashboards, and attractive interfaces go a long way. No surprise that after purchasing platforms that make this easy, Elsevier (along with other publishers) would go after ResearchGate in October, which is even easier to use in some ways, and certainly appealing for researchers.
All the discussion of predatory journals and blacklists (not to mention SciHub being ordered blocked thanks to an ACS lawsuit) seems old hat to those of us who have been doing this work for years, but it is still a conversation we need to have. More importantly, focusing on the positive aspects of open access helps get at the reasons people participate in open access and moves the conversation forward. We can do work to educate our communities about finding good open access journals and how to participate legally. I believe that publishers are providing more green access options because their authors are asking for them, and we are helping authors know how to ask.
I hope we were not too despairing this Open Access Week. We are doing good work, even if there is still a lot of poisonous rhetoric floating around. In the years I’ve worked in scholarly communication I’ve helped make thousands of articles, book chapters, dissertations, and books open access. Those items have in turn gone on to be cited in new publications. The scholarly communication cycle still goes on.
The 2017 Digital Library Federation (DLF) Forum will take place October 23-25 in Pittsburgh, and throughout the program there are multiple opportunities to interact with several of the DLF Groups. For those who are new to DLF, or have never been to a Forum before, it may be hard to know what to expect or how these Groups are different from other associations’ interest groups or committees.
It can be helpful to remember that DLF is an institutional member organization. You don’t need a personal membership to belong to a working group of DLF. Actually, you don’t even need to belong to an institution to sign up to work with a group. DLF practices a very welcoming and inclusive approach to community. Membership does grant discounts on the Forum or other programs, like the eResearch Network, but more importantly, it signals an institution’s commitment to the work that DLF supports and coordinates – such as these groups.
DLF’s groups are not just interest groups or working groups. They are essentially communities that drive a conversation around a topic, or have a particular focus, and usually have some kind of an output. Here is the current list of active groups, with a brief description from their website – those that have programming at this year’s Forum are noted with an asterisk:
The DLF Assessment Interest Group (DLF AIG) was formed in 2014 as an informal interest group within the larger DLF community. The group meets during the DLF Forum to share problems, ideas, and solutions [related to digital library assessment]. The group also has a dedicated Google Group, DLF-supported wiki, and project documentation available in the Open Science Framework.
The DLF Digital Library Pedagogy group is an informal community within the larger DLF community that was formed thanks to practitioner interest following the 2015 DLF Forum. The group, which has a dedicated Google Group, is open to anyone interested in learning about or collaborating on digital library pedagogy.
The DLF eResearch Network brings together teams from research-supporting libraries to strengthen and advance their data services and digital scholarship roles within their organizations. The core of the 2017 network is a working curriculum that guides participants through 6 monthly webinars that address current topics and strategic methods for supporting and facilitating data services and digital scholarship locally.
DLF has created a new framework for establishing mentoring relationships among our community members, centered around face-to-face interaction at our annual Forum. The program is meant to be lightweight, collegial, and mostly focused around the annual DLF Forum.
In 2015, a volunteer planning committee from within our Liberal Arts College community organized a first, one-day Liberal Arts Colleges Pre-conference, specifically created for those who work with digital libraries and/or digital scholarship at teaching-focused institutions, held before the DLF Forum in Vancouver. Both this event and the one that followed in Milwaukee (2016) were huge successes, including concurrent sessions of presentations and panels on pedagogical, organizational, and technological approaches to the digital humanities and digital scholarship, data curation, digital collections, and digital preservation.
All DLF practitioners with museum interests or who engage in college and university museum-based projects are welcome to join. Likewise, current DLF member institutions with museums, galleries, and museum libraries are invited to participate in Museums Cohort conversations.
The DLF Project Managers group is an informal community within the larger DLF community. They meet at the annual DLF Forum and also have a dedicated listserv. The DLF PM Group was formed in 2008 to acknowledge the intersection of the discipline of project management and library technology. The group provides a forum for sharing project management methodologies and tools, alongside broader discussions that consider issues such as portfolio management and cross-organizational communication. The group also maintains an eye towards keeping pace with the dynamic digital library landscape, by bringing new and evolving project management practices to the attention and mutual benefit of our colleagues.
A new DLF group, looking for all levels of commitment, from willingness to be a co-leader of the Working Group to dropping in to point out a good article/blog post/someone-doing-this-already we may not have seen. A Google Group is used for coordination of meetings and work.
Metadata is hard. The Metadata Support Group aims to help. This is a place to share resources, strategies for working through some common metadata conundrums, and reassurances that you’re not the only one that has no idea how that happened. If you’re coming here with a problem we hope you’ll find a solution or a strategy to move you towards a solution!
These groups are excellent ways to learn more about a topic, contribute to problem-solving strategies, and to network with others who share your interests. As you can see, some of these groups have been around for nearly a decade, while others just started this year. There have also been several groups that have sunsetted, reflecting DLF groups’ strength as responsive and current communities, based on need and interest.
If you are at the 2017 Forum, consider learning more by joining a group’s working lunch or presentation. And remember, these groups are based on need and interest. Consider proposing something that stirs your passion, if you don’t see it reflected in the current DLF community!
UPDATE: Just after this post was published, the U.S. Copyright Office released the long-awaited Discussion Document that was referenced below in this post. In this document the Copyright Office affirms a commitment to retaining the Fair Use Savings clause.
Libraries rely on exceptions to copyright law and provisions for fair use to provide services. Any changes to those rules have big implications for the services we provide. With potential changes coming in an uncertain political climate, I would like to take a look at what we know, what we don’t know, and how it’s all related. Each piece as it currently stands works in relation to the others, and a change to any one of them changes the overall situation for libraries. We need to understand how everything relates, so that when we approach lawmakers or create policies, we think holistically.
The International Situation
A few months back at the ALA Annual Conference in Chicago, I attended a panel called “Another Report from the Swamp,” a copyright policy-specific session put on by the ALA Office of Information Technology Policy (OITP) featuring copyright experts Carrie Russell (OITP), Stephen Wyber (IFLA), and Krista Cox (of the Association of Research Libraries [ARL]). This panel addressed international issues in copyright in addition to those in the United States, which was a helpful perspective. They covered a number of topics, but I will focus on the Marrakesh Treaty and potential changes to US Code Title 17, Section 108 (Reproduction by libraries and archives).
Stephen Wyber and Krista Cox covered the WIPO Marrakesh Treaty to Facilitate Access to Published Works for Persons Who Are Blind, Visually Impaired or Otherwise Print Disabled (aka the Marrakesh Treaty), which the US is a signatory to, but which has not yet been ratified by the US Senate (see Barack Obama’s statement in February 2016). According to them, in much of the developing world only 1% of published work is available for those with print disabilities. This was first raised as an issue in 1980, and 33 years later debates at WIPO began to address the situation. This treaty entered into force last year, and permits authorized parties (including libraries) to make accessible copies of any work in a medium that is appropriate for the disabled individual. In the United States, this is generally understood to be permitted by the fair use provision (Section 107) and Section 121 (aka the Chafee amendment) of Title 17, though this is still legally murky 1. This is not the case internationally. Stephen Wyber pointed out that IFLA must engage at the European level with the European Commission for negotiations at WIPO, and there is no international or cross-border provision for libraries, archives, or museums.
According to Krista Cox, a reason for the delay in ratification was that the Senate Committee on Foreign Relations wouldn’t move it to ratification unless it was a non-controversial treaty with no changes required for US law (and it should not have required changes). The Association of American Publishers (AAP) wanted to include recordkeeping requirements, which disability and library advocates argued would be onerous. (A good summary of the issues is available from the ALA Committee on Legislation). During the session, a staff attorney from the AAP stood up and made the point that their position was that it would be helpful for libraries to track what accessible versions of material they had made. While not covered in the session explicitly, a problem with this approach is that it would create a list of potentially sensitive information about patron activities. Even if no names were attached, the relatively few people making use of the service would make it possible to identify individual users. In any event, the 114th Congress took no action, and it is unclear when this issue will be taken up again. For this reason, we have to continue to rely on existing provisions of the US Code.
Along those lines, the panel gave a short update on potential changes to Section 108 of the Copyright Act, which have been under discussion for many years. Last year, the Copyright Office invited stakeholders to set up meetings to discuss revisions. The library associations met with them last July, and while the beneficiaries of Section 108 generally find revisions controversial and oppose reform, the Copyright Office is continuing work on this. One fear with revisions is that the fair use savings clause (17 U.S.C. § 108(f)(4)) would be removed. Krista Cox reported that at the Copyright Society of the USA meeting in early June 2017, the Copyright Office reported that they were working on a report with proposed legislation, but no one has seen this report [NOTE: the report is now available.].
Implications for Revisions to Title 17
Moving beyond the panel, let’s look at the larger implications for revisions to Title 17. There are some excellent reasons to revise Section 108 and others–just as the changes in 1976 reflected changes in photocopying technology 2, changes in digital technology and the services of libraries require additional help. In 2008, the Library of Congress Section 108 Study Group released a lengthy report with a set of recommendations for revisions, which can be boiled down to extending permissions for preservation activities (though that is a gross oversimplification). In 2015 Maria A. Pallante testified to the Committee on the Judiciary of the House of Representatives on a wide variety of changes to the Copyright Act (not just for libraries), which incorporated the themes from that 2008 report, in addition to other later discussions. Essentially, she says that changes in technology and culture over the past 20 years have made much of the act unclear, requiring loopholes and workarounds of uncertain legal standing. For instance, libraries rely heavily on Section 107, which covers fair use, to perform their daily functions. This report points out that those activities should be explicitly permitted rather than relying on potentially ambiguous language in Section 107, since the ambiguity means some institutions are unwilling, out of fear, to perform activities that may in fact be permitted. On the other hand, that ambiguous language opens up possibilities that adventurous projects such as HathiTrust have used to push on boundaries and expand the nature of fair use and customary practice. The ARL has a Code of Best Practices in Fair Use that details what is currently considered customary practice. With revisions, there enters the possibility that what is allowed will be dictated by, for instance, the publishing lobby, and that what libraries can do will be overly circumscribed.
Remember, too, that one reason for not ratifying the Marrakesh Treaty is that allowances for reproductions for the disabled are covered by fair use and the Chafee amendment.
Orphan works are another issue. While the Pallante report suggests that it would be in everyone’s interest to have clear guidance on what a good faith effort to identify a copyright holder actually meant, in many ways we would rather have general practice mandate this. Speaking as someone who spends a good portion of my time clearing permissions for material and frequently running into unresponsive or unknown copyright holders, I feel more comfortable pushing the envelope if I have clearly documented and consistently followed procedures based on practices that I know other institutions follow as well (see the Statement of Best Practices). This way I have been able to make useful scholarship more widely available despite the legal gray area. But there is a calculated risk, and many institutions choose to never make such works available due to the legal uncertainty. Last year the Harvard Office of Scholarly Communication released a report on legal strategies for orphan work digitization to give some guidance in this area. To summarize over 100 pages, there are a variety of legal strategies libraries can take to either minimize the risk of a dispute or reduce negative consequences of a copyright dispute–which remains largely hypothetical when it comes to orphan works and libraries anyway.
There is one other important wrinkle in all this. The Copyright Office’s future is politically uncertain. It could be removed from the purview of the Library of Congress, and the Register of Copyrights made a political appointment. This was passed by the House in April and introduced in the Senate in May, and was seen as a rebuke to Carla Hayden. Karyn Temple Claggett is the acting Register of Copyrights, replacing Maria Pallante, who resigned last year after Carla Hayden became the new Librarian of Congress and appointed (some say demoted) her to the post of Senior Advisor for Digital Strategy. Maria Pallante is now CEO of–you guessed it–the Association of American Publishers. The story is full of intrigue and clashing opinions–one only has to see the “possibly not neutral” banner on Pallante’s Wikipedia page to see that no one will agree on the reasons for Pallante’s move from Register of Copyrights (it may have been related to wasteful spending), but libraries do not see the removal of copyright from the Library of Congress as a good thing. More on this is available at the recent ALA report “Lessons From History: The Copyright Office Belongs in the Library of Congress.”
Given that we do not know what will happen to the Copyright Office, nor exactly what their report will recommend, it is critical that we pay attention to what is happening with copyright. While more explicit provisions allowing additional activities would be excellent news, as the panel at ALA pointed out, lawmakers are more exposed to Hollywood and content creator organizations such as the AAP, RIAA, and MPAA, and so may be more likely to see arguments from their point of view. We should continue to take advantage of provisions we currently have for fair use and providing access to orphan works, since exercising this right is one way we keep it.
As I’ve mentioned in the previous post, my library is undergoing a major website redesign. As part of that process, we contracted with an outside web design and development firm to help build the theme layer. I’ve done a couple major website overhauls in the course of my career, but never with an outside developer participating so much. In fact, I’ve always handled the coding part of redesigns entirely by myself as I’ve worked at smaller institutions. This post discusses what the process has been like in case other libraries are considering working with a web designer.
To start with, our librarians had already been working to identify components of other library websites that we liked. We used Airtable, a more dynamic sort of spreadsheet, to collect our ideas and articulate why we liked certain peer websites, some of which were libraries and some not (mostly museums and design companies). From prior work, we already knew we wanted a few different page template types. We organized our ideas around how they fit into these templates, such as a special collections showcase, a home page with a central search box, or a text-heavy policy page.
Once we knew we were going to work with the web development firm, we had a conference call with them to discuss the goals of our website redesign and show the contents of our Airtable. As we’re a small art and design library, our library director was actually the one to create an initial set of mockups to demonstrate our vision. Shortly afterwards, the designer had his own visual mockups for a few of our templates. The mockups included inline comments explaining stylistic choices. One aspect I liked about their mockups was that they were divided into desktop and mobile; there wasn’t just a “blog post” example, but a “blog post on mobile” and “blog post on desktop”. This division showed that the designer was already thinking ahead towards how the site’s theme would function on a variety of devices.
With some templates in hand, we could provide feedback. There was some push and pull—some of our initial ideas the designer thought were unimportant or against best practices, while we also had strong opinions. The discussion was interesting for me, as someone who is a librarian foremost but empathetic to usability concerns and following web conventions. It was good to have a designer who didn’t mindlessly follow our every request; when he felt a stylistic choice was counterproductive, he could articulate why, and that changed a few of our ideas. However, on some principles we were insistent. For instance, we wanted to avoid multiple search boxes on a single page–not a central catalog search plus a site search in the header. I find that users are easily confused when confronted with two search engines and struggle to distinguish the different purposes and domains of both. The designer thought that it was a common enough pattern to be familiar to users, but our experiences led us to insist otherwise.
The final code took a few extra weeks to deliver, mostly due to a single user interface bug we pointed out that the developer struggled to recreate and then fix. Still, I was ready to start working with the frontend code almost exactly a month after our first conversation with the firm’s designer, and the total time from that conversation to signing off on the final templates was a little under two months. Given our hurried timeline for rebuilding our entire site over the summer, that quick delivery was a serious boon.
I’ve a lot of opinions about how code should look and be structured, even if I don’t always follow them myself. So I was a bit apprehensive working with an outside firm; would they deliver something highly functional but structured in an alien way? Luckily, I was pleasantly surprised with how the CSS was delivered.
First of all, the designer didn’t use CSS, he used SASS, which Margaret wrote about previously on Tech Connect. SASS adds several nice tools to CSS, from variables to darken and lighten functions for adjusting colors. But perhaps most importantly, it gives you much more control when structuring your stylesheets, using imports, nested selectors, and mixins. Basically, SASS is the antithesis of having one gigantic CSS file with thousands of lines. Instead, the frontend code we were given was about fifty files neatly divided by our different templates and some reusable components. Here’s the directory tree of the SASS files:
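(The listing itself didn’t survive into this copy of the post; the sketch below is hypothetical, keeping only the folder names–about-us, collections, misc–that come up in the discussion. The rest are illustrative.)

```
scss/
├── base/           # reset, variables, typography (illustrative)
├── components/     # reusable pieces shared across templates
├── templates/
│   ├── about-us/   # styles specific to the "about us" template
│   ├── collections/
│   └── blog/
└── misc/           # the one uninformative name
```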
Other than the uninformative “misc”, these folders all have meaningful names (“about-us” and “collections” refer to styles specific to particular templates we’d asked for) and it never takes me more than a moment to locate the styles I want.
Within the SASS itself, almost all styles (excepting the “reset” portion) hinge on class names. This is a best practice for CSS since it doesn’t couple your styles tightly to markup; whether a particular element is a <div>, <section>, or <article>, it will appear correctly if it bears the right class name. When our new CMS output some HTML in an unexpected manner, I was still able to utilize the designer’s theme by applying the appropriate class names. Even better, the class names are written in BEM “Block-Element-Modifier” form. BEM is a methodology I’d heard of before and read about, but never used. It uses underscores and dashes to show which high-level “block” is being styled, which element inside that block, and what variation or state the element takes on. The introduction to BEM nicely defines what it means by Block-Element-Modifier. Its usage is evident if you look at the styles related to the “see next/previous blog post” pagination at the bottom of our blog template:
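(The excerpt is missing from this copy of the post; what follows is a reconstruction built from the class names, mixin, variable, and include discussed in the next paragraph–the property values themselves are invented.)

```scss
.blog-post-pagination {
  @include clearfix;

  &__title {
    // illustrative values only
    font-size: 1.25rem;
  }

  &__item {
    float: left;
    width: 50%;

    // responsive styles adapt at the medium breakpoint
    @include respond($break-medium) {
      float: none;
      width: 100%;
    }

    // state/variation for the "previous blog post" item
    &--prev {
      text-align: left;
    }
  }
}
```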
Here, blog-post-pagination is the block, __title and __item are elements within it, and the --prev modifier affects just the “previous blog post” item element. Even in this small excerpt, other advantages of SASS are evident: the respond mixin and $break-medium variables for writing responsive styles that adapt to differing device screen sizes, the clearfix include, and these related styles all being nested inside the brackets of the parent blog-post-pagination block.
Trouble in Paradise
However, as much as I admire the BEM class names and structure of the styles given to us, of course I can’t be perfectly happy. As I’ve started building out our site I’ve run into a few obvious problems. First of all, while all the components and templates we’d asked for are well-designed with clearly written code, there’s no generic framework for adding on anything new. I’d hoped, and to be honest simply assumed, that a framework like Bootstrap or Foundation would be used as the basis of our styles, with more specific CSS for our components and templates. Instead, apart from a handful of minor utilities like the clearfix include referenced above, everything that we received is intended only for our existing templates. That’s fine up to a point, but as soon as I went to write a page with an HTML table in it I noticed there was no styling whatsoever.
Relatedly, since the class names are so focused on distinct blocks, when I want to write something similar but slightly different I end up with a bunch of misleading class names. So, for instance, some of our non-blog pages have templates which are littered with class names including a .blog- prefix. The easiest way for me to build them was to co-opt the blog styles, but now the HTML looks misleading. I suppose if I had more time I could write new styles which simply copy the blog ones under new names, but that also seems unideal in that it’s a) a lot more work and b) leads to a lot of redundant code.
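For what it’s worth, SASS does offer a middle path between misleading names and duplicated code: placeholder selectors. The shared rules live once in a placeholder, and both the existing blog class and a new, honestly named class extend it. A sketch, with entirely hypothetical class names and values:

```scss
// Shared layout rules live once in a placeholder selector
// (placeholders compile to nothing on their own).
%card-layout {
  padding: 1rem;
  border: 1px solid #ddd;
}

// The existing blog class and a new, accurately named class
// both extend it; the compiled CSS groups the selectors
// together instead of duplicating the declarations.
.blog-card {
  @extend %card-layout;
}

.event-card {
  @extend %card-layout;
}
```

The compiled output is a single rule with both selectors, so the HTML can carry a truthful class name without the stylesheet growing redundant.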
Lastly, the way our CMS handles “rich text” fields (think: HTML edited in a WYSIWYG editor, not coded by hand) has caused numerous problems for our theme. The rich text output is always wrapped in a <div class="rich-text">, which made translating some of the HTML templates from the frontend code a bit tricky. The frontend styles also included a “reset” stylesheet which erased all default styles for most HTML tags. That’s fine, and a common approach for many sites, but a number of the elements available in the rich text editor ended up with their styles reset. As content authors went about creating lower-level headings and unordered lists, they discovered that these appeared as plain text.
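In hindsight, one workable fix is to restore element-level defaults scoped beneath the CMS’s wrapper class, so the reset stays global while WYSIWYG content still renders sensibly. A sketch (the .rich-text class comes from our CMS; the element selection and values are illustrative):

```scss
// Re-establish defaults the reset erased, but only inside
// the CMS's rich text wrapper so the templates are unaffected.
.rich-text {
  h2, h3, h4 {
    margin: 1em 0 0.5em;
    font-weight: bold;
  }

  ul {
    list-style: disc;
    margin-bottom: 1em;
    padding-left: 2em;
  }

  p {
    margin-bottom: 1em;
  }
}
```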
Reflecting on these issues, they boil primarily down to insufficient communication on our part. When we first asked for design work, it was very much centered around the specific templates we wanted to use for a few different sections of our site. I never specifically outlined a need for a generic framework which could encompass new, unanticipated types of content. While there was an offhand mention of Bootstrap early on in our discussions, I didn’t make it explicit that I’d like it or something similar to form the backbone of the styles we wanted. I should have also made it clearer that styles should specifically anticipate working within our CMS and alongside rich text content. Instead, by the time I realized some of these issues, we had already approved much of the frontend work as complete.
For me, as someone who has worked at smaller libraries for the duration of their professional career, working with a web design company was a unique experience. I’m curious, has your library contracted for design or web development work? Was it successful or not? As tech savvy librarians, we’re often asked to do everything even if some of the tasks are beyond our skills. Working with professionals was a nice break from that and a learning experience. If I could do anything differently, I’d be more assertive about requirements in our initial talks. Outlining expectations that the styles include a generic framework and anticipate working with our particular CMS would have saved me some time and headaches later on.