What I’m Reading this Summer: Charlie Harper

Note: As the dh+lib Review editors work behind the scenes this summer, we have invited a few members of our community to step in as guest editors and share with us what they are reading and why the dh+lib audience might want to read it too. This post is from Dr. Charlie Harper, Digital Scholarship Librarian at Case Western Reserve University.

For a long time my eye has been drawn to the eclectic disciplinary structures of modern academia and to one especially troublesome pattern: the institutionalized decoupling of the humanities and sciences. The estrangement of the humanities and sciences, which I view as equally important to a balanced and fruitful education, hasn’t always existed; it’s very much a construct that hardened during the rapid technological and scientific advancement of the 19th and 20th centuries. The resulting compartmentalization of education, and the disciplinary antagonism it breeds, is an academic malady. I’ve come to see the purpose of libraries and digital scholarship as the curing of this malady… not simply in the promotion and facilitation of cross-disciplinary endeavors, but in the reworking of the intellectual fictions that segregate disciplines and degrees.

It’s in the field of machine learning that I see the greatest hope and direst need for the reunion of the humanities and sciences, because it’s here that philosophy, history, religion, linguistics, biology, mathematics, cognitive science, and physics, among so many more, all speak to the question forced upon us by our increasingly powerful and perceptive machines: What does it mean to be human? My summer readings have focused on problems that are foundational to this question, particularly the human and algorithmic nature of intelligence, perception, and free will, and the ethical issues raised by technology.

Fry, H. (2019). Hello World: How to Be Human in the Age of the Machine. New York: W.W. Norton & Company.

Fry starts her must-read book with a simple observation: “Anyone who has ever visited Jones Beach on Long Island, New York, will have driven under a series of bridges on their way to the ocean” (p. 1). What’s special about these bridges is their exceptionally low clearance, sometimes only nine feet. The connection that Fry makes between low bridges and modern algorithms lies in the nature of their design and their ability to control behavior. In the 1920s, the urban planner Robert Moses deliberately designed these bridges with low clearance to prevent public buses, which poorer residents would have to take, from reaching Jones Beach. To get there, you’d instead need to be wealthy enough to own a car, which could easily clear the bridges.

Intentionally or otherwise, the algorithms of modern life reify ideas and societal structures through their design, and they exert an often invisible control over our lives and choices. Fry takes a keen look at the many places in modern life where algorithms (frequently based in machine learning) are taking charge. Some of the stories she includes are simple, and horrifying, examples of unadulterated incompetence. For example, there’s the one about a budget tool implemented in 2012 by the Idaho Department of Health and Welfare that randomly cut vast amounts of Medicaid funding for the disabled. When confronted about these unexpected and inexplicable cuts, the Idaho Department of Health and Welfare i̶m̶m̶e̶d̶i̶a̶t̶e̶l̶y̶ ̶r̶e̶s̶e̶a̶r̶c̶h̶e̶d̶ ̶a̶n̶d̶ ̶c̶o̶r̶r̶e̶c̶t̶e̶d̶ ̶t̶h̶e̶ ̶i̶s̶s̶u̶e̶ claimed their budget tool was a trade secret. A mere four years and four thousand plaintiffs later, Idaho’s super-secret budgeting tool was exposed as nothing more than a bug-riddled Excel spreadsheet. It took in numbers and spit out other inscrutable numbers, which then became the unchallengeable basis for people’s Medicaid funding! Still other parts of her work illustrate the possible unintended, yet nefarious, consequences of even beneficial algorithms, such as those that make health diagnoses and predictions. While such health predictions have immense power to help, they can also be used to prejudge people and deny them medical care, employment, or any number of other services. For example, Fry hypothesizes that linking supermarket purchases with health data could provide a basis to deprioritize organ transplants for certain patients if a predictive connection were made between the foods they purchased and their health outcomes. The NHS already deprioritizes smokers on the transplant list.

Overall, Fry divides her work into chapters on power, data, justice, medicine, cars, crime, and art. The chapters on justice, medicine, and crime are especially good, in my opinion, but this is in no way meant to detract from the perceptive scrutiny she exhibits in other sections. The book, like many current works on technology, raises key ethical problems that humanists and scientists need to solve collectively, namely how and when algorithms and humans should balance one another. For example, in a self-driving car, if a pedestrian steps in front of the moving vehicle, should the algorithm steer the car into a nearby wall and risk killing the passenger, or should it hit the pedestrian, possibly killing that person? How does the balance of this decision change if the pedestrian is a child or if there are multiple passengers in the car? In the end, Fry demonstrates both the wonderful potential and the impending dangers of technology. She advocates keeping humans in the loop, ensuring there are efficient ways to redress algorithmic errors, and making clear that predictions made by algorithms are not unquestionable truths.

Levesque, H.J. (2017). Common Sense, the Turing Test, and the Quest for Real AI. Cambridge, MA: MIT Press.

When someone uses the term “artificial intelligence,” it’s frequently as a stand-in for the more specific term “machine learning.” Machine learning, in the sense of neural networks and decision trees, is powerful, but it’s not quite what the early pioneers of computer science had in mind as AI. Levesque’s work looks beyond machine learning to focus on what he calls Good Old-Fashioned AI, which is not the predictive intelligence of current algorithms, but the classical dream of common-sense intelligence emerging in machines.

His work is almost entirely non-technical; it’s only in the very last chapter that limited technological specifics emerge, which makes it highly accessible to many audiences. Instead of technical detail, Levesque looks broadly at the question of what intelligence, knowledge, and common sense are. He also engages some more specific questions, like how street smarts and book smarts differ (and how memorizing knowledge factors into intelligence in man or machine). The work is filled with intriguing thought experiments and situations that challenge the reader to carefully consider what does and does not count as intelligent action. While many works hail the miraculous advances of technology, Levesque’s text leaves a strong impression of just how far apart human and machine intelligence remain.

Heinlein, R. (1966). The Moon is a Harsh Mistress. New York: Putnam.

A classic sci-fi novel might seem like an odd choice to include here. Perhaps it is, but you should read it anyway! At its heart, Heinlein’s story engages the concept of true AI that Levesque examines in his book. So, according to Heinlein, what happens when a machine suddenly becomes intelligent and sentient? It wants to learn some jokes, of course! 

I won’t spoil the plot for you other than to say that the novel traverses typical Cold War-era sci-fi tropes as an intelligent computer, named Mike, assists the Moon in its revolt against its colonial master, the Earth. Beyond my belief that this is Heinlein’s best work (sorry, Stranger in a Strange Land and Starship Troopers), the central notion of humor, and Mike’s desire to understand it, is meaningful. Can we imagine a computer ever understanding humor, not simply producing it, but truly understanding it in the sense of Turing’s quip about a machine enjoying strawberries and cream? Humor is a strangely human phenomenon, bound up in time and culture. Jokes don’t translate well between languages, and even good jokes can fall flat if they’re beyond their audience. Like Levesque’s work, this enjoyable novel is a reminder of just how complicated the human mind is, human language in particular, and how inadequate our current technology is at truly matching the intricacies of humanity.

Huemer, M. (2009). “Free Will and Determinism in the World of Minority Report.” In Schneider, S. (Ed.), Science Fiction and Philosophy: From Time Travel to Superintelligence (pp. 103–112). Oxford: Wiley-Blackwell.

The larger collection of essays from which this reading comes is filled with thought-provoking topics, and it contains many chapters that offer rich sources of debate on issues raised by modern technologies. The fact that the collection was published a decade ago makes it all the more interesting, as many of its science-fiction topics are now veering into the realm of science fact. Of these, Huemer’s “Free Will and Determinism in the World of Minority Report” stood out to me as a particularly worthwhile read.

Huemer looks at the central problem raised by the 2002 movie Minority Report: What does the prediction of future crime mean for the existence of human free will? If you haven’t seen Minority Report, it revolves around three “pre-cogs,” whose ability to see the future is exploited to prevent crime by preemptively arresting individuals for “pre-crime.” In his discussion, Huemer notably hits on the historical roots of this type of predictive thinking in the concept of predeterminism, both in religion (e.g., Calvinism) and in physics (i.e., classical Newtonian mechanics). This immediately brought to my mind the antithetical concepts of divine forgiveness and quantum uncertainty, which are perhaps more palatable to modern sensibilities. Regardless, the existence of parallel concepts in humanistic and scientific disciplines is a reminder that the two need not be viewed as irreconcilable areas of study. Huemer goes on to explore the philosophical ideas of hard and soft determinism, and he highlights some interesting paradoxes that emerge when one attempts to reconcile determinism and free will.

Huemer’s chapter is especially significant because the issues it raises are no longer theoretical. Predictive machine learning technology is increasingly used by police departments and judicial systems to anticipate crime, set bail, and estimate recidivism before parole hearings. In this way, the debate between free will and determinism endures, and the appropriate role of predictive technologies in criminal justice remains an open, ethically difficult question that requires broad societal participation. This is an excellent read in conjunction with Fry’s chapters on justice and crime.

Ungerer, F. & Schmid, H.J. (2013). An Introduction to Cognitive Linguistics. 2nd ed. New York: Routledge.

I’m currently in the middle of this text, but I wanted to list it anyway because of the value it has already added to my own thinking. Language is at the heart of human intelligence, and consequently it plays a major role in both the humanities and sciences. This is exactly why Turing’s test of intelligent action in a machine was measured through language.

Text analysis and natural language processing now dominate many fields, not just in the humanities, but also in places like medicine, where parsing the text of electronic health records can help identify the trajectories of diseases over time. Since unstructured textual data makes up perhaps 80% of the data we generate, knowing more about the cognitive basis of language is essential to moving forward with these problems. In my opinion, this is extremely valuable to digital scholarship, which has historically focused heavily on textual analysis projects. Deeper knowledge of the cognitive basis of language might help us break away from the sometimes repetitive, uncritical, and rote applications of common textual analysis methods, like topic modeling, and lead us to discover new avenues for exploration.

Charlie Harper

Charlie Harper is a Digital Scholarship Librarian at Kelvin Smith Library, Case Western Reserve University. He holds a PhD in Classical Archaeology and, prior to transitioning into digital scholarship, worked as a Senior Archaeologist for the Florida Bureau of Archaeological Research. His current interdisciplinary research projects engage machine learning, text analysis, data visualization, and GIS.