The form to volunteer for ACRL section committees is still open at www.ala.org/acrl/membership/volunteer/volunteer! Volunteering is a great way to learn more about the section, network with colleagues, and fulfill professional service requirements. The deadline to volunteer is February 28, 2023. Appointments will begin on July 1, 2023.
For information about LES, our committees, and the work of the section, please visit our website. Please feel free to contact LES Vice-Chair Leslie Madden at firstname.lastname@example.org with any questions about volunteering!
What better way to start 2023 than to volunteer for an LES committee?!
The form to volunteer for ACRL section committees is now open at www.ala.org/acrl/membership/volunteer/volunteer. Volunteering is a great way to learn more about the section, network with colleagues, and fulfill professional service requirements. The deadline to volunteer is February 28, 2023. Appointments will begin on July 1, 2023.
For information about LES, our committees, and the work of the section, please visit our website. Please feel free to contact LES Vice-Chair Leslie Madden at email@example.com with any questions about volunteering! Leslie is also hosting an information session for potential volunteers on Friday, January 20 at 2pm EST. You can join that session at https://gsumeetings.webex.com/meet/lmadden – no registration is needed!
Are you looking for a little light weekend reading? Check out the Fall 2022 issue of Biblio-Notes here! This is also a great time to start brainstorming ideas for submissions for the Spring 2023 issue. Happy reading!
Biblio-Notes is soliciting content for the Fall issue. The call is open and any submissions are welcome, including those on new publications, upcoming online workshops, or anything you feel would be of interest to the LES community.
The submission deadline is Monday, October 10, 2022.
The LES Facebook page is now live! We will be transitioning to this Facebook platform over the next few months. If you are currently a member of the LES Facebook group, please take the following steps to begin the transition.
Post to the Facebook page rather than to the Facebook group
Why are we making this transition?
The Facebook page will allow LES to have greater public visibility online, and will make it easier for the LES social media coordinator to schedule reminders and share news and events with you. By transitioning to a Facebook page, LES will be able to use Facebook to attract new librarians to join us and learn about what we do, while continuing to share important section news with existing members.
What will be different?
You will still be able to share news and other items of interest by posting to the page, but posts will no longer automatically appear in your Notifications. To receive Notifications from the LES Facebook page, you will need to go to the page, click on “Liked” and select “Get Notifications”.
Posts you add to the page will automatically appear on the left side of the page, and the LES social media coordinator will make sure to add them to the Facebook page timeline so that they will appear in the News Feed and Notifications.
What will happen to the LES Facebook group?
The LES Facebook page will replace the LES Facebook group as the LES Facebook communication platform. Content from the existing LES Facebook group will be saved and archived in the LES wiki. Members of the Facebook group will have the next few months to ease into the transition.
If you have any questions or suggestions, please submit a comment below.
Note: This post makes heavy use of web content from Google Search and Knowledge Graph. Because this content can vary by user and is subject to change at any time, this essay uses screenshots instead of linking to live web pages in certain cases. As of the completion of this post, these images continue to match their live counterparts for a user in Providence, RI, who is not logged in to Google services.
This That, Not That That
Early this July, Google unveiled its Knowledge Graph, a semantic reference tool nestled into the top right corner of its search results pages. Google’s video announcing the product runs no risk of understating Knowledge Graph’s potential, but there is a very real innovation behind this tool, and it is twofold. For one, Knowledge Graph can distinguish between homonyms and connect related topics. For a clear illustration of this function, consider the distinction one might make between bear and bears. Though the search results page for either query includes content related to both grizzlies and quarterbacks, Knowledge Graph knows the difference.
Second, Knowledge Graph purports to contain over 500 million articles. This puts it solidly ahead of Wikipedia, which reports having about 400 million, and light-years ahead of professionally produced reference tools like Encyclopaedia Britannica Online, which comprises an apparently piddling 120,000 articles. Combine that almost incomprehensible scope with integration into Google Search, and without much fanfare the world suddenly has its broadest and most prominently placed reference tool.
For years, Google’s search algorithm has been making countless, under-examined choices on behalf of its users about the types of results they should be served. But at its essence, Knowledge Graph represents a major symbolic shift away from (mostly) matching queries to web content – content that, per extrinsic indicators, the search algorithm serves up and ranks for relevance – toward the act of openly interpreting the meaning of a search query and making decisions based on that interpretation. Google’s past deviations from the relevance model, when made public, have generally been motivated by legal requirements (such as those surrounding hate speech in Europe or dissent in China) and, more recently, the dictates of profit. Each of these moves has met with controversy.
And yet in the two months since its launch, Knowledge Graph has not been the subject of much commentary at all. This is despite the fact that the shift it represents has big implications that users must account for in their thinking, and it can be understood as part of larger moves the information giant has been making to leverage the reputation earned with Search toward other products.
Librarians and others teaching about internet media have a duty to articulate and problematize these developments. Being in many ways a traditional reference tool, Knowledge Graph presents a unique pedagogic opportunity. Just as it is critical to understand the decisions Google makes on our behalf when we use it to search the web, we must be critically aware of the claim to a newly authoritative, editorial role Google is quietly staking with Knowledge Graph – whether it means to be claiming that role or not.
Perhaps especially if it does not mean to. With interpretation comes great responsibility.
The value of Knowledge Graph lies in its ability to authoritatively parse semantics in a way that provides the user with “knowledge.” Users will rely on it on the assumption that it does this reliably, or they will not use it at all.
Does Knowledge Graph authoritatively parse semantics?
What is Knowledge Graph’s editorial standard for reliability? What constitutes “knowledge” by this tool’s standard? “Authority”?
What are the consequences for users if the answer to these questions is unclear, unsatisfactory, or both?
What is Google’s responsibility in such a scenario?
He Sings the Body Electric
Consider an example: Walt Whitman. As of this writing, the poet’s entry in Knowledge Graph looks like this:
You might notice the highly unlikely claim that Whitman recorded an album called This is the Day. Follow the link and you are brought to a straight, vanilla Google search for this supposed album’s title. The first link in that results list brings you to a music video on YouTube:
Parsing this mistake might lead one to a second search: “This is the Day Walt Whitman.” The results list generated by that search yields another YouTube video at the top, resolving the confusion: a second, comparably flamboyant Walt Whitman, a choir director from Chicago, has recorded a song by that title.
Note the perfect storm of semantic confusion. The string “Walt Whitman” can refer to either a canonical poet or a contemporary gospel choir director while, at the same time, “This is the Day” can refer either to a song by The The or that second, lesser-known Walt Whitman.
Further, “This is the Day” is in both cases a song, not an album.
Knowledge Graph, designed to clarify exactly this sort of semantic confusion, here manages to create and potentially entrench three such confusions at once about a prominent public figure.
Could there be a better band than one called The The to play a role in this story?
This particular mistake was first noted in mid-July. More than a month later, it still stands.
At this new scale for reference information, we have no way of knowing how many mistakes like this one are contained within Knowledge Graph. Of course it’s fair to assume this is an unusual case, and to Google’s credit, they address this sort of error in the only feasible way they could: a feedback mechanism that allows users to suggest corrections. (No doubt bringing this mistake to the attention of ACRLog’s readers means Walt Whitman’s days as a time-traveling new wave act are numbered.)
Is Knowledge Graph’s mechanism for correcting mistakes adequate? Appropriate?
How many mistakes like this do there need to be to make a critical understanding of Knowledge Graph’s gaps and limitations crucial to even casual use?
Interpreting the Gaps
Many Google searches sampled for this piece do not yield a Knowledge Graph result. Consider an instructive example: “Obama birth certificate.” Surely, there would be no intellectually serious challenge to a Knowledge Graph stub reflecting the evidence-based consensus on this matter. Then again, there might be a very loud one.
Similarly not available in Knowledge Graph are stubs on “evolution,” or “homosexuality.” In each case, it should be noted that Google’s top ranked search results are reliably “reality-based.” Each is happy to defer to Wikipedia.
In other instances, the stub for topics that seem to reach some threshold of complexity and/or controversy defers to “related” stubs rather than making nuanced editorial decisions. Consider the entries for “climate change” and the “Vietnam War,” here presented in their entirety.
In moments such as these, is it unreasonable to assume that Knowledge Graph is shying away from controversy and nuance? More charitably, we might say that this tool is simply unequipped to deal with controversy and nuance. But given the controversial, nuanced nature of “knowledge,” is this second framing really so charitable?
What responsibility does a reference tool have to engage, explicate or resolve political controversy?
What can a user infer when such a tool refuses to engage with controversy?
What of the users who will not think to make such an inference?
To what extent is ethical editorial judgment reconcilable with the interests of a singularly massive, publicly traded corporation with wide-ranging interests cutting across daily life?
One might answer some version of the above questions with the suggestion that Knowledge Graph avoids controversy because it is programmed only to feature information that meets some high standard of machine-readable verification and/or cross-referencing. The limitation is perhaps logistical, baked into the cake of Knowledge Graph’s methodology, and it doesn’t necessarily limit the tool’s usefulness for certain purposes so long as the user is aware of the boundaries of that usefulness. Perhaps in that way this could be framed as a very familiar sort of challenge, not so different from the one we face with other media, whether it’s cable news or pop-science journalism.
This is all true, so far as it goes. Still, consider an example like the stub for HIV:
There are countless reasons to be uncomfortable with a definition of HIV implicitly bounded by Ryan White on one end and Magic Johnson on the other. So many important aspects of the virus are omitted here – the science of it, for one, but even if Knowledge Graph is primarily focused on biography, there are still important female, queer or non-American experiences of HIV that merit inclusion in any presentation of this topic. This is the sort of stub in Knowledge Graph that probably deserves to be controversial.
What portion of useful knowledge cannot – and never will – bend to a machine-readable standard or methodology?
Ironically, it is Wikipedia that, for all the controversy it has generated over the years, provides a rigorous, deeply satisfying answer to the same problem: a transparent governance structure guided in specific instances by ethical principle and human judgment. This has more or less been the traditional mechanism for reference tools, and it works pretty well (at least up to a certain scale). More fundamentally, length constraints on Wikipedia are forgiving, and articles regularly plumb nuance and controversy. Similarly, a semantic engine like Wolfram Alpha successfully negotiates this problem by focusing on the sorts of quantitative information that aren’t likely to generate much political controversy. The demographics of its user base probably help, too.
Of course, Google’s problem here is that it searches everything for every purpose. People use it every day to arbitrate contested facts. Many users assume that Google is programmatically neutral on questions of content, intervening only to organize results for their relevance to our questions; by this logic, Google bears no responsibility for the content itself. This assumption is itself complicated and, in many ways, was problematic even before the debut of Knowledge Graph. All the same, it is a “brand” that Knowledge Graph will no doubt leverage in a new direction. Many users will intuitively trust this tool and the boundaries of “knowledge” enforced by its limitations and the prerogatives of Google and its corporate actors.
Consider the college freshman faced with all these ambiguities. Let’s assume that she knows not to trust everything she reads on the internet. She has perhaps even learned this lesson too well, forfeiting contextual, critical judgment of individual sources in favor of a general avoidance of internet sources. Understandably, she might be stubbornly loyal to the internet sources that she does trust.
Trading on the reputation and cultural primacy of Google search, Knowledge Graph could quickly become a trusted source for this student and others like her. We must use our classrooms to provide this student with the critical engagement of her professors, librarians and peers on tools like this one and the ways in which we can use them to critically examine the gaps so common in conventional wisdom. Of course Knowledge Graph has a tremendous amount of potential value, much of which can only proceed from a critical understanding of its limitations.
How would this student answer any of the above questions?
Without pedagogical intervention, would she even think to ask them?
This week Inside Higher Ed picked up the story of a report that is likely to aggravate many of the faculty members we serve as English Literature specialists. Does it have any resonance for us as librarians?
Emory University Professor Mark Bauerlein’s paper was produced by the Center for College Affordability and co-hosted by the Cato Institute (a libertarian think tank). Bauerlein is also the author of a book called The Dumbest Generation, about new media’s degrading effects on education, attention span, etc. So, it is fairly safe to suspect some bias underlying his audit of contemporary literary scholarship.
“Many professors enjoy their work, finding it rewarding and helpful to their other professional duties, but if their books and essays do not find readers sufficient to justify the effort, the publication mandate falls short of its rationale, namely, to promote scholarly communication and the advancement of knowledge,” Bauerlein wrote in the report. “To put it bluntly, universities ask English professors to labor upon projects of little value to others, incurring significant opportunity costs.”
Bauerlein is no doubt right that something is not working quite the way it should be in scholarly publishing in the humanities. I’ll be the first to agree that tying academic credentialing to monograph and article publication has gotten out of control. As a librarian, too, I sometimes have to think long and hard about buying monographs that are costly but seem narrowly focused in a way I can’t believe will be useful to others. But none of this seems to be his real focus. He claims to be advocating for more emphasis on teaching, which sounds fine, but is this kind of report really likely to lead administrators to change credentialing criteria or is it likely to help them justify hiring fewer permanent faculty?
Further, as Inside Higher Ed notes, tracking citations proves little about impact in the humanities. After all, humanism tends to privilege individuality over consensus, persuasion over precedent.
What do you think? Does the kind of efficiency Bauerlein seems to be describing come at a cost that is justifiable or not? Are there other, better ways to address the problem he identifies? Are there problems with academic publishing in the field that he is overlooking (or other dynamics in the profession he should be taking into account)?