This year the ACRL Instruction Section Research & Scholarship Committee invited the authors of “Raising Algorithm Bias Awareness Among Computer Science Students Through Library and Computer Science Instruction” to engage in a discussion of their article.
Shalini Ramachandran is a STEM Librarian at Chapman University. Prior to this position, she was a faculty liaison in the Office of Research Development at Boise State University and a science and engineering librarian at the University of Southern California. She holds a PhD in Postcolonial Literature from Purdue University, an MA in Library & Information Studies from the University of Wisconsin–Madison, and a BA in Mathematics from the University of Mumbai. Shalini’s work in libraries and information theory draws on her experience teaching academic argumentation, critical literacy, and resistance pedagogy. Along with algorithm-bias instruction, she is interested in equity issues in information seeking, information access, and automated decision systems. She also has a strong interest in accessibility issues in the online learning environment; her article co-authored with Sheree Fu, “Accessibility Awareness: The User Experience,” was published in Library Journal in 2018.
Steve Cutchin is an Associate Professor of Computer Science at Boise State University. Prior to this position, he managed the Visualization Laboratory at King Abdullah University of Science and Technology (KAUST), where he recruited a technical team of engineers and visualization scientists, oversaw the building of a state-of-the-art scientific data visualization laboratory on the KAUST campus, and forged relationships with international university and corporate partners. Before joining KAUST, he was the manager of Visualization Services at the San Diego Supercomputer Center, where he developed tools and animations in support of visualization for high-performance computing problems and systems. He has longstanding expertise in computer graphics, scientific visualization, and large-scale visualization systems. He has also worked as a senior software engineer at Walt Disney Feature Animation, developing software tools to improve animation production on feature films. He has published articles on computer graphics and visualization, created animations for the Discovery Channel, and produced images for SIGGRAPH. He received his doctorate in Computer Science from Purdue University.
Sheree Fu is an Engineering, Computer Science, and Technology Librarian at California State University, Los Angeles. Prior to this position, she was a science and engineering librarian at the University of Southern California. She has worked on interdisciplinary and innovative projects as a data services specialist, research and development librarian, and digital scholarship librarian at the Claremont Colleges Library, and as an engineering librarian at California Polytechnic State University, San Luis Obispo. Before becoming a librarian, she worked in manufacturing in the computer industry. She teaches information literacy skills to engineering and first-year students. She has presented on user research, space planning, accessibility, and computer algorithm bias. Her research explores emerging student needs and technology in academic libraries.
1. In “Raising Algorithm Bias Awareness Among Computer Science Students Through Library and Computer Science Instruction” (2021), you express enthusiasm for cross-disciplinary partnerships and an integrative approach to algorithmic bias instruction. What advice do you have for librarians interested in initiating this interdisciplinary dialogue?
Our work was sparked by a talk on algorithm bias given by Professor Safiya Noble at the University of Southern California (USC). At the time, Shalini and Sheree were science and engineering librarians at USC, with subject responsibilities in computer science. We did not have a plan for a formal study when we started discussing the issue, but as we fleshed out our ideas, we began reaching out to the campus community. We sought and received internal funding for our work, met with senior personnel involved in diversity initiatives, and received support from the Chair of the Computer Science department. We also reached out to people at Cal State LA and Boise State. Not everyone we contacted had an interest in collaborating, but we eventually ended up with a dedicated team of four people, which included Karen Howell at USC.
In terms of advice for librarians interested in initiating interdisciplinary dialogue, we would say it helps to be clear about your ideas and what you are offering potential collaborators. Identify connectors who are likely to support your work. In our case, support from the Vice Dean of Diversity and the Chair of Computer Science at USC opened a lot of doors. Sometimes collaborators find you as you start disseminating your work. Do not be discouraged if someone declines to collaborate, and do not waste time trying to convince people who have expressed disinterest. Look for the people who are enthusiastic about your vision. A small, focused team can be a starting point for bigger collaborations.
2. Research on algorithmic bias can touch on difficult topics related to racial, gender-based, and other forms of discrimination. What advice do you have for instructors on discussing these sensitive issues in class?
Our experience with algorithm bias instruction has been that student responses range from defensiveness, and challenges to whether bias exists at all, to deep engagement, responsiveness, and an eagerness to problem-solve. The reality is that student reactions to topics involving race and gender vary depending on the demographics of the student population. Our advice to instructors is to set the stage by stating that algorithm bias topics are sensitive and that there are no easy answers or solutions. We have found computer science students to be analytical and solution-driven; the complexity of the issues involved in algorithm bias and the lack of clear solutions can be disconcerting to them. Our strategy has been to introduce initial examples where the bias is fairly clear, such as differences in the lines of credit extended to men and women with the same financial history. Students are invited to discuss a variety of cases and to consider where and how bias creeps in. We expect that some students may react negatively and may not be reachable, but most students will engage and think critically. Our emphasis is not to provide students with right answers but to explore the issue and think about solutions. As educators, we care for the whole student. In class, we model how to discuss a sensitive topic by treating it, and the class, with care. We work with students one class at a time, respecting their values, their interests, and the time they need to overcome their own societal barriers.
Going forward, we plan to teach students how to approach sensitive topics via Costa’s levels of questioning, to promote dialogue based on high-quality evidence as well as students’ own experiences and values. Bias is a societal problem. Our approach is to make our instruction accessible, with examples at an introductory level, and to provide information literacy tools that educate and empower students. Some of the steps we follow in all our algorithm-bias instruction sessions are: 1) define bias; 2) discuss bias awareness, including how to identify, recognize, and document bias; 3) consider solutions to bias; 4) emphasize that while algorithm bias recognition is something computer science practitioners will engage with, bias awareness in general is an academic, professional, and everyday life skill. Overall, we meet students where they are and learn from their perspectives.
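The article itself does not include code, but a small classroom sketch can make step 2 (identifying and documenting bias) concrete. The following minimal Python example is a hypothetical illustration, not part of the authors’ study or instruction materials: it documents a group disparity in an invented credit-line dataset, echoing the example above of men and women with the same financial history receiving different lines of credit.

```python
# Hypothetical classroom sketch: documenting a group disparity in credit lines.
# All records are invented for illustration; no real data or system is implied.
from statistics import mean

# Each record: (gender, credit_score, credit_line_offered_in_dollars)
applicants = [
    ("woman", 760, 7500), ("man", 760, 14000),
    ("woman", 780, 8000), ("man", 780, 15000),
    ("woman", 790, 9000), ("man", 790, 16000),
]

def average_credit_line(records, gender, score):
    """Average credit line offered to one group at a fixed credit score."""
    lines = [line for g, s, line in records if g == gender and s == score]
    return mean(lines) if lines else None

# Compare matched applicants: same credit score, different gender.
for score in sorted({s for _, s, _ in applicants}):
    women = average_credit_line(applicants, "woman", score)
    men = average_credit_line(applicants, "man", score)
    print(f"credit score {score}: women ${women:,.0f} vs. men ${men:,.0f} "
          f"(ratio {women / men:.2f})")
```

Even a toy table like this lets students practice identifying and documenting a disparity before the harder discussion of causes and solutions begins.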
3. Teaching about the societal impact of algorithmic bias is a fairly recent development on many campuses. What institutional factors do you feel have an impact on its availability, and how could potential barriers be overcome?
Institutional support has been crucial for the advancement of our work. At USC, we received internal funding from the USC Libraries as well as the Viterbi School of Engineering. We also received support from library colleagues and senior administration. At both Cal State LA and Boise State, we had encouragement from the Associate Deans of Engineering as well as individual faculty. Without institutional backing, it would be difficult to interact with students, gather data, and conduct research on student perspectives toward algorithm bias. How much value an institution places on diversity, equity, and inclusion initiatives can play a large role in the success of studies such as ours. Realistically, the political environment also has a significant impact. In 2021, the Lieutenant Governor of Idaho convened an Educational Indoctrination Taskforce to investigate whether schools and universities in Idaho were teaching critical race theory. While the task force did not succeed in creating legislative changes, its scrutiny of topics that cover race and gender may have had a chilling effect on Idaho’s public K-12 schools and universities. A complaint by a non-student about the mistreatment of white students in a University Foundations 200: Foundations of Ethics & Diversity class resulted in a temporary suspension of all sections of the course at Boise State. At the time, we were concerned that our study could also attract the attention of the state legislature. We don’t necessarily have a solution for how barriers such as a hostile political climate can be overcome. We will continue our research, and we are grateful that there are institutions, and people within institutions, receptive to our work, and funding agencies willing to support projects such as ours.
4. What excites or interests you most about next steps in your work?
We are currently developing a new algorithm bias instruction module with an active learning component. We are also looking at teachable case studies and examples of algorithm bias to add to our lectures. We are excited that we can continue developing our instruction and that there is a high level of interest in this topic. Shalini is starting a new job at Chapman University, and this may be an opportunity to expand our audience further. When we started, we had less clarity about where we wanted to go with our study. Now we see that making the instruction accessible to the widest audience could have an impact on our future computer science practitioners. Many of our students have encountered harmful bias in some form in their lives. We are looking forward to engaging more students in discussing the impacts of algorithm bias, making the topic approachable, and giving students the tools they need to talk about bias and its complexity.
5. What advice would you give to librarians who are trying to formulate or refresh their own research agendas?
Our observation, based on our experiences, is that your research agenda is yours alone. You will feel the weight of your work, so your research should be something you like and are passionate about. If you don’t care about a research topic, it will be hard to sustain the interest and effort needed to make a project successful. Your research need not be on a currently popular topic for it to have relevance. Steve, our computer science faculty collaborator, offers cybersecurity as an example: it is a hot topic, but he has no interest in researching it. It is best to pursue your own interests and learn to say no to research projects that are not compelling to you.
Another approach, in terms of refreshing your research, is to reframe your work. Sheree recently read the book Designing Your New Work Life (2021) by Bill Burnett and Dave Evans, in which the authors share a case study of a person named Garth, who accepted a “horrible job” in marketing (24). Garth started by accepting that this was not his ideal job; however, it was “good enough for now.” His reframe solution was to take “positive energy breaks” every three hours during his workday, which included walking around the grounds and then buying ice cream at the cafeteria, actions that made him feel both happy and reengaged. He also made connections with a few people in the sales department whom he found intellectually engaging. By learning new things from them, and by accepting his current reality, he found small ways to “redesign his circumstances” with a “bias to action.” Per the design thinking model put forth by Burnett and Evans, the steps for emerging from a less-than-ideal work (or research) environment are to: 1) accept reality; 2) look for a reframe; 3) build a prototype (or wait for things to change); 4) learn something; 5) repeat (23-28). We like the authors’ idea of reframing a situation rather than just “making the best” of it. We realize that truly difficult and even abusive work conditions exist that cannot simply be reframed. However, for someone feeling uninspired by a research agenda that was previously enjoyable, the reframe concept could be helpful. Identify your points of curiosity, connect with new people, explore questions, and meaningful and impactful research can follow.
Reference
Burnett, Bill, and Dave Evans. Designing Your New Work Life: How to Thrive and Change and Find Happiness–and a New Freedom–at Work. New York: Vintage, 2021.