Imagine this scenario: you don’t normally have a whole lot to do at your job. It’s a complex job, sure, but day-to-day you’re spending most of your time monitoring a computer and typing in data. But one day, something goes wrong. The computer fails. You are suddenly asked to perform basic job functions that the computer normally takes care of for you, and you don’t really remember how to do them. In the meantime, the computer is screaming at you about an error and asking for additional inputs. How well do you function?
The Glass Cage
In Nicholas Carr’s new book The Glass Cage, this scenario plays out in airplane cockpits, and in the cases he describes it ends in crashes and massive loss of life. As librarians, we are thankfully not responsible for the lives of hundreds of people on a daily basis, but like pilots, we too have automated much of our work and depend on systems that we often have no control over. What happens when a database we rely on goes down? Say all OCLC services go down for a few hours in December, just as many students are trying to get a few last sources for their papers. Are we able to take over seamlessly from the machines in guiding those students?
Carr is not against automation, nor indeed against technology in general, though this criticism is frequently leveled at him. But he is against the uncritical abdication of our faculties to technology companies. In his 2010 book The Shallows, he argues that offloading memory to the internet and apps makes us more shallow, distractible thinkers. While I didn’t buy all his arguments (after all, Socrates didn’t approve of offloading memory to writing, since it would make us all shallow, distractible thinkers), it was thought-provoking. In The Glass Cage, he focuses on automation specifically, using autopilot technologies as the focal point: the title plays on the “glass cockpit,” pilots’ name for flight decks surrounded by screens. Besides the danger of not knowing what to do when the automated systems fail, we create potentially more dangerous situations by not paying attention to the choices automated systems make. As Carr writes, “If we don’t understand the commercial, political, intellectual, and ethical motivations of the people writing our software, or the limitations inherent in automated data processing, we open ourselves to manipulation.” 1
We have automated many mundane functions of library operations, with neutral or positive results. For instance, students no longer sign out books by writing their names on paper cards filed away in drawers. Some mourn the lost history of who had checked out a book before (or even the romance-novel scenario of meeting the other person who checks out the same books), but a secure computerized circulation system lets us keep better track of where books are while maintaining privacy, since no one can see who has checked out each book. And when the checkout system goes down, it is easy to figure out how to keep things going in the interim. We understand on an instinctual level how such a system works and what it does. As with a traditional computerized library catalog, we know more or less how data gets into the system and how data gets out. We have more access points to the data, but it still follows its paper counterpart in creation and structure.
Over the past decade, however, we have moved further and further away from those traditional systems. We want to provide students with systems that align with their (and our) experience outside libraries. Discovery layers take traditional library data and transform it with indexes and algorithms to create a new, easier way to find research material. If traditional automated systems, like autopilot, removed the physical effort of moving between card catalogs, print indexes, and microfilm machines, these new systems remove much of the mental effort: deciding where to search for a given type of information, and mastering the particular skills needed to search the relevant database. That is surely a useful and good development. When one is immersed in a research question, the system shouldn’t get in the way.
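For the curious, the “index” at the heart of most discovery layers is an inverted index, which maps each term to the records containing it. Here is a minimal sketch of the idea, with invented sample records:

```python
# A minimal sketch of an inverted index; the sample records are invented.
from collections import defaultdict

records = {
    1: "glass cage automation and us",
    2: "automation in academic libraries",
    3: "discovery layers and library data",
}

# Map each term to the set of record IDs that contain it.
index = defaultdict(set)
for rec_id, text in records.items():
    for term in text.split():
        index[term].add(rec_id)

def search(query):
    """A query becomes set intersection rather than a walk through catalogs."""
    term_sets = [index[t] for t in query.split()]
    return set.intersection(*term_sets) if term_sets else set()

print(search("automation libraries"))  # {2}
```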
That said, the nearly wholesale adoption of vendor-provided discovery systems leaves academic librarians in an awkward position. We can find a parallel in medicine. Carr relates the rush into electronic medical records (EMR) starting in 2004 with the Health Information Technology Adoption Initiative, which made huge amounts of money available for digitizing records and delivered a huge windfall to health information companies. An early study by the RAND Corporation (funded in part by those same companies) indicated that electronic medical records held enormous promise to save money and improve care. 2 In practice, these systems did not do everything they were supposed to do. The data that was supposed to be easy to share between providers was locked up in proprietary systems. 3 In addition, other studies showed that these systems did not merely substitute automated record-keeping for manual record-keeping; they changed the way medicine was practiced. 4 EMR systems provide functions beyond note-taking, such as checklists and prompts suggesting questions and tests, which in turn generate additional and more costly bills, test requests, and prescriptions. 5 EMR systems change the dynamic between doctor and patient as well. They encourage boilerplate text that lacks the personalized story of an individual patient, and the inability to flip through pages tends to diminish the long view of a patient’s entire medical history. 6 The constant presence of the computer in the room, and the multitasking of typing notes into it, means that doctors cannot be fully present with the patient. 7 And with the EMR’s constant checklists, warnings, and prompts, doctors lose opportunities to develop intuition and reach new understandings that the EMR could never provide. 8
The reference librarian’s interaction with patrons is not all that different from a doctor’s with patients (though, as with pilots, the stakes are usually quite different). We work one-on-one with people on problems that are often undefined or misunderstood at the beginning of the interaction, and work toward a solution through conversation and cursory examination of resources. We either provide the resource that solves the problem (e.g., the prescription), or make sure the patron has the tools available to solve the problem over time (e.g., diet and exercise recommendations). We need to read subtle cues of body language and tone of voice to see how things are going, and use instinctive knowledge to sense whether there is a deeper but unexpressed problem. We need our tools at hand to work with patrons, but we also need to be present and use our own experience and judgment in choosing the appropriate tool. That means we have to understand how the tool works, and ideally have some way of controlling it. Unfortunately, that has not always been the case with vendor discovery systems. We are at the mercy of the system, and reactions to this vary. Some librarians avoid the discovery system at all costs and won’t teach with it, which makes students even less likely to use it, preferring the easier-to-reach, if less robust, Google search. And if students do use it, they may still miss out on the benefit of having academic librarians available: people who have spent years developing domain knowledge and learning the best resources available at the library, knowledge that can’t be replaced by an algorithm. Furthermore, vendor platforms and content only interoperate to the extent the vendors are willing to work together, and many have a disincentive to do so, since each wants its own index to come out on top.
Enter the ODI
Just as doctors may have given up some of their professional ability and autonomy to proprietary databases of patient information, academic librarians seem to have done something similar with discovery systems. But the NISO Open Discovery Initiative (ODI) has the potential to make the black box more transparent. This group has been working for two years to develop a set of practices that aim to make some aspects of discovery consistent across providers, giving customers and users more control in understanding what they are seeing and ensuring that indexes are complete. The Recommended Practice addresses some (but not all) major concerns with discovery service platforms. Essentially, it covers the metadata that content providers must supply to discovery service providers and to libraries, as well as best practices for both content providers and discovery service providers. The required core metadata is followed by optional “enriched” content: keywords, abstract, and full text. (Though ODI makes it clear that including these is important; one might argue that the abstract is essential.) 9 Discovery service providers are in turn strongly encouraged to make clear to their customers what content their indexes hold, and to supply the metadata required for this. Discovery service providers should also follow suggested practices to ensure “fair linking”: specifically, to not use business relationships as a ranking or ordering consideration, and to allow libraries to set their own preferences about choice of providers and wording for links. Finally, ODI suggests a fairly simple set of usage statistics that should be provided and specifies exactly what they should measure. 10
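To make the core-versus-enriched distinction concrete, here is a minimal sketch of how a library might audit the records a content provider supplies. The field names are illustrative stand-ins, not the official ODI element list; consult the Recommended Practice for the actual required elements.

```python
# A minimal sketch, not an official ODI schema: the field names below are
# illustrative stand-ins for the core and enriched elements the
# Recommended Practice describes.
CORE_FIELDS = {"title", "author", "publication_date", "identifier", "content_type"}
ENRICHED_FIELDS = {"keywords", "abstract", "full_text"}

def audit_record(record: dict) -> dict:
    """Report missing core fields and whichever enriched fields are present."""
    return {
        "missing_core": sorted(CORE_FIELDS - record.keys()),
        "enriched_present": sorted(ENRICHED_FIELDS & record.keys()),
    }

sample = {
    "title": "Open Discovery Initiative: Promoting Transparency in Discovery",
    "author": "NISO ODI Working Group",
    "publication_date": "2014",
    "identifier": "urn:example:niso-odi",  # hypothetical identifier
    "content_type": "report",
    "abstract": "Recommended practices for index-based discovery services.",
}

print(audit_record(sample))
# {'enriched_present': ['abstract'], 'missing_core': []}
```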
While this all sets a good baseline, what is out of scope for ODI is equally important. It “does not address issues related to performance or features of the discovery services, as these are inherently business and design decisions guided by competitive market forces.” 11 Performance and features include the user interface and experience, the relevancy-ranking algorithms, APIs, specific mechanisms for fair linking, and data exchange (which is covered by other protocols). The last section of the Recommended Practice covers some of these in “Recommended Next Steps”. One that jumps out is the “on-demand lookup by discovery service users” 12, which suggests that users should be able to query the discovery service to determine “…whether or not a particular collection, journal, or book is included in the indexed content” 13, seemingly the very goal of discovery in the first place.
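ODI stops short of defining how such a lookup would work, so the following is purely hypothetical: the endpoint, parameters, and response shape are all invented, simply to suggest what an on-demand coverage check might look like from the user’s side.

```python
# Purely hypothetical: ODI defines no lookup API, so this endpoint,
# its parameters, and the response shape are invented for illustration.
import requests

def is_indexed(base_url: str, issn: str) -> bool:
    """Ask a (hypothetical) discovery service whether a journal is in its index."""
    resp = requests.get(f"{base_url}/coverage", params={"issn": issn}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("indexed", False)

# Example, against an imaginary service:
# if is_indexed("https://discovery.example.edu/api", "0028-0836"):
#     print("Journal is in the central index")
```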
“Automation of Intellect”
We know that many users look only at the first page of results for the resource they want. If we don’t know what results should be there, or how they got there, we are leaving users at the mercy of the tool. Disclosure of relevancy rankings is a major piece of transparency that ODI leaves out, and without understanding or controlling that piece of discovery, I think academic librarians are still caught in the trap of the glass cage, or left playing chauffeur in the age of the self-driving car. The same thing has been happening across the professional fields as machine-learning algorithms and the processing power to crunch big data sets improve. Medicine, finance, law, business, and information technology itself have been increasingly automated as software runs algorithms to analyze scenarios that in the past would have required a senior practitioner. 14 So what’s the problem with this? If humans are fallible (and research shows that experts are equally if not more fallible), why let them touch anything? Carr argues that “what makes us smart is not our ability to pull facts from documents. … It’s our ability to make sense of things…” 15 We can grow to trust the automated system’s algorithms over our own experience and judgment, and lose the possibility of novel insights. 16
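To see why undisclosed rankings matter, consider a toy scoring function. This is not any vendor’s actual algorithm; the records, fields, and weights are all invented. The same two records come back in a different order depending on weights the library never sees, including a hypothetical boost tied to a business relationship, exactly the kind of consideration ODI’s fair-linking practice rules out:

```python
# A toy illustration, not any vendor's actual algorithm: invented records
# and weights showing how hidden weighting reorders the same result set.
records = [
    {"title": "A", "term_matches": 3, "recency": 0.2, "publisher_boost": 0.1},
    {"title": "B", "term_matches": 2, "recency": 0.9, "publisher_boost": 0.9},
]

def score(rec, w_match, w_recency, w_boost):
    """Weighted sum of relevance signals."""
    return (w_match * rec["term_matches"]
            + w_recency * rec["recency"]
            + w_boost * rec["publisher_boost"])

# Scheme 1: rank mostly on term matches.
top = max(records, key=lambda r: score(r, 1.0, 0.5, 0.0))
print(top["title"])  # A

# Scheme 2: quietly add a boost tied to a business relationship.
top = max(records, key=lambda r: score(r, 0.3, 0.5, 2.0))
print(top["title"])  # B
```

A librarian teaching with such a system has no way to know which scheme is in effect, which is precisely the gap the Recommended Practice leaves open.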
This is not to say that discovery systems do not solve major problems, or that libraries should not use them. They do, and as much as is practical, libraries should make discovery as easy as possible. But as this ODI Recommended Practice makes clear, much remains a secret business decision for discovery service vendors, and thus something over which academic librarians can exercise control only through their dollars (in choosing a platform) and their advocacy (in working with vendors to make sure they understand the system and that it does what they need).
- Nicholas Carr, The Glass Cage: Automation and Us (New York: Norton, 2014), 208. ↩
- Carr, 93. ↩
- Carr, 95. ↩
- Carr, 97. ↩
- Carr, 98. ↩
- Carr, 101-102. ↩
- Carr, 103. ↩
- Carr, 105-106. ↩
- National Information Standards Organization (NISO) Open Discovery Initiative (ODI) Working Group, Open Discovery Initiative: Promoting Transparency in Discovery (Baltimore: NISO, 2014), 25-26. ↩
- NISO ODI, 25-27. ↩
- NISO ODI, 3. ↩
- NISO ODI, 32. ↩
- NISO ODI, 32. ↩
- Carr, 115-117. ↩
- Carr, 121. ↩
- Carr, 124. ↩