Blockchain: Merits, Issues, and Suggestions for Compelling Use Cases

Blockchain holds great potential for both innovation and disruption. Its adoption also poses certain risks, which will need to be addressed and mitigated before blockchain becomes mainstream. Many people have heard of blockchain by now, but many are unfamiliar with how exactly this new technology works and are unsure about the circumstances or conditions under which it may be useful to libraries.

In this post, I will provide a brief overview of the merits and the issues of blockchain, and conclude with some suggestions for compelling use cases.

What Blockchain Accomplishes

Blockchain is the technology that underpins the well-known decentralized cryptocurrency Bitcoin. Simply put, blockchain is a kind of distributed digital ledger on a peer-to-peer (P2P) network, in which records are confirmed and encrypted. Blockchain records and keeps data in its original state in a secure and tamper-proof manner[1] by its technical implementation alone, thereby obviating the need for a third-party authority to guarantee the authenticity of the data. Records in a blockchain are stored in multiple ledgers across a distributed network instead of in one central location. This prevents a single point of failure and secures records by protecting them from potential damage or loss. Blocks in each blockchain ledger are chained to one another by a mechanism called ‘proof of work.’ (For those familiar with a version control system such as Git, a blockchain ledger can be thought of as something similar to a P2P hosted git repository that allows sequential commits only.[2]) This makes records in a block immutable and irreversible, that is, tamper-proof.
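To make the chaining and proof-of-work ideas concrete, here is a toy sketch in Python. It is not how Bitcoin or any production blockchain is implemented, and the difficulty value and record contents are arbitrary; it only illustrates that each block stores the hash of the previous one, so tampering with an earlier block invalidates everything after it.

```python
import hashlib
import json
import time

def hash_block(block):
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(records, previous_hash, difficulty=4):
    """Find a nonce so the block hash starts with `difficulty` zeros (toy proof of work)."""
    block = {
        "timestamp": time.time(),
        "records": records,
        "previous_hash": previous_hash,
        "nonce": 0,
    }
    while not hash_block(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

# Build a tiny chain: each block records the hash of the block before it.
chain = [mine_block(["genesis record"], previous_hash="0" * 64)]
chain.append(mine_block(["record A", "record B"], previous_hash=hash_block(chain[-1])))
chain.append(mine_block(["record C"], previous_hash=hash_block(chain[-1])))

# Verify the chain: recompute each link and check each block's proof of work.
for prev, curr in zip(chain, chain[1:]):
    assert curr["previous_hash"] == hash_block(prev)
    assert hash_block(curr).startswith("0000")
print("Chain verified:", len(chain), "blocks")
```

Changing a single record in the first block would change its hash, breaking the previous-hash link in every later block, which is the sense in which the chain is tamper-evident.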

In areas where the authenticity and security of records are of paramount importance, such as electronic health records, digital identity authentication and authorization, digital rights management, historic records that may be contested or challenged due to the vested interests of certain groups, and digital provenance, to name a few, blockchain can lead to efficiency, convenience, and cost savings.

For example, with blockchain implemented in banking, one will be able to transfer funds across different countries without going through banks.[3] This can drastically lower the fees involved, and the transaction will take effect much more quickly, if not immediately. Similarly, adopted in real estate transactions, blockchain can make the process of buying and selling a property more straightforward and efficient, saving time and money.[4]

Disruptive Potential of Blockchain

The disruptive potential of blockchain lies in its aforementioned ability to render obsolete the role of a third-party authority that records and validates transactions and guarantees their authenticity should a dispute arise. In this respect, blockchain can serve as an alternative trust protocol that decentralizes traditional authorities. Since blockchain achieves this through public key cryptography, however, losing one’s personal key to the blockchain ledger holding one’s financial or real estate assets, for example, results in the permanent loss of those assets. With the third-party authority gone, there is no institution to step in and remedy the situation.
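As a rough illustration of why key loss is unrecoverable, here is a minimal sketch using the Python cryptography library. Only the holder of the private key can produce a valid signature over a transaction; the specifics here (Ed25519 keys, a made-up transaction string) are illustrative choices, not how any particular blockchain works.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The private key is the sole proof of ownership; it is never stored on the ledger.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

transaction = b"transfer asset 42 to recipient X"
signature = private_key.sign(transaction)

# Anyone holding the public key can verify the signature...
try:
    public_key.verify(signature, transaction)
    print("Signature valid: transaction accepted")
except InvalidSignature:
    print("Signature invalid: transaction rejected")

# ...but if the private key is lost, no new valid signatures can ever be produced,
# and there is no third-party authority that can reset or recover the key.
```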

Issues

Loss of a private key is only one of the issues with blockchain. Others include (a) interoperability between different blockchain systems, (b) scalability of blockchain at a global scale with large amounts of data, (c) potential security issues such as the 51% attack,[5] and (d) the huge energy consumption[6] that a blockchain requires to add a block to a ledger. Note that the last issue of energy consumption has both environmental and economic ramifications, because it can cancel out the cost savings gained from eliminating a third-party authority and related processes and fees.

Challenges for Wider Adoption

There is growing interest in blockchain among information professionals, but there are also some obstacles to that interest gaining momentum and moving further towards wider trial and adoption. One obstacle is the lack of general understanding of blockchain among the larger audience of information professionals. Due to its original association with Bitcoin, many mistake blockchain for cryptocurrency. Another obstacle is technical. The use of blockchain requires setting up and running a node in a blockchain network, such as Ethereum,[7] which may be daunting to those who are not tech-savvy. This raises the barrier to entry for those who are not familiar with command-line scripting and yet still want to try out and test how a blockchain functions.

The last and most important obstacle is the lack of compelling use cases for libraries, archives, and museums. To many, blockchain is an interesting new technology, but even many blockchain enthusiasts are skeptical of its practical benefits at this point, when all associated costs are considered. Of course, this is not an insurmountable obstacle. The more familiar people become with blockchain, the more ways they will discover to use it in the information profession in ways that are uniquely beneficial for specific purposes.

Suggestions for Compelling Use Cases of Blockchain

In order to determine what may make a compelling use case of blockchain, the information profession would benefit from considering the following.

(a) What kind of data/records (or the series thereof) must be stored and preserved exactly the way they were created.

(b) What kind of information is at great risk of being altered or compromised by changing circumstances.

(c) What type of interactions may need to take place between such data/records and their users.[8]

(d) What would be a reasonable cost for implementation.

These questions will help connect the potential benefits of blockchain with real-world use cases and take the information profession one step closer to wider testing and adoption. For those further interested in blockchain and libraries, I recommend the recordings from the Library 2.018 online mini-conference, “Blockchain Applied: Impact on the Information Profession,” held back in June. The Blockchain National Forum, which is funded by IMLS and is to take place in San Jose, CA on August 6th, will also be livestreamed.

Notes

[1] For an excellent introduction to blockchain, see “The Great Chain of Being Sure about Things,” The Economist, October 31, 2015, https://www.economist.com/news/briefing/21677228-technology-behind-bitcoin-lets-people-who-do-not-know-or-trust-each-other-build-dependable.

[2] Justin Ramos, “Blockchain: Under the Hood,” ThoughtWorks (blog), August 12, 2016, https://www.thoughtworks.com/insights/blog/blockchain-under-hood.

[3] The World Food Programme, the food-assistance branch of the United Nations, is using blockchain to increase their humanitarian aid to refugees. Blockchain may possibly be used for not only financial transactions but also the identity verification for refugees. Russ Juskalian, “Inside the Jordan Refugee Camp That Runs on Blockchain,” MIT Technology Review, April 12, 2018, https://www.technologyreview.com/s/610806/inside-the-jordan-refugee-camp-that-runs-on-blockchain/.

[4] Joanne Cleaver, “Could Blockchain Technology Transform Homebuying in Cook County — and Beyond?,” Chicago Tribune, July 9, 2018, http://www.chicagotribune.com/classified/realestate/ct-re-0715-blockchain-homebuying-20180628-story.html.

[5] “51% Attack,” Investopedia, September 7, 2016, https://www.investopedia.com/terms/1/51-attack.asp.

[6] Sherman Lee, “Bitcoin’s Energy Consumption Can Power An Entire Country — But EOS Is Trying To Fix That,” Forbes, April 19, 2018, https://www.forbes.com/sites/shermanlee/2018/04/19/bitcoins-energy-consumption-can-power-an-entire-country-but-eos-is-trying-to-fix-that/#49ff3aa41bc8.

[7] Osita Chibuike, “How to Setup an Ethereum Node,” The Practical Dev, May 23, 2018, https://dev.to/legobox/how-to-setup-an-ethereum-node-41a7.

[8] The interaction can also be a self-executing program when certain conditions are met in a blockchain ledger. This is called a “smart contract.” See Mike Orcutt, “States That Are Passing Laws to Govern ‘Smart Contracts’ Have No Idea What They’re Doing,” MIT Technology Review, March 29, 2018, https://www.technologyreview.com/s/610718/states-that-are-passing-laws-to-govern-smart-contracts-have-no-idea-what-theyre-doing/.

Our Assumptions: of Neutrality, of People, & of Systems

Discussions of neutrality have been coming up a lot in libraryland recently. I would argue that people have been talking about this for years1 2 3 4, but this year we saw a confluence of events drive the “neutrality of libraries” topic to the fore. To be clear, I have a position on this topic5 and it is that libraries cannot be neutral players and still claim to be a part of the society they serve. But this post is about what we assume to be neutral, what we bring forward with those assumptions, and how we react when those assumptions are challenged. When we challenge ideas that have been built into systems, either as “benevolent, neutral” librarians or “pure logic, neutral” algorithms, what part of ourselves are we challenging? How do reactions change based on who is doing the challenging? Be forewarned, this is a convoluted landscape.

At the 2018 ALA Midwinter conference, the ALA President’s program was a debate about neutrality. I will not summarize that event (see here), but I do want to call attention to something that became very clear in the course of the program: everyone was using a different definition of neutrality. People spoke with assumptions of what neutrality means and why they do, or do not, believe that it is important for libraries to maintain. But what are we assuming when we make these assumptions? Without an agreed upon definition, some referred to legal rulings to define neutrality, some used a dictionary definition (“not aligned with a political or ideological grouping” – Merriam Webster) without probing how political or ideological perspectives play out in real life. But why do we assume libraries should be neutral? What safety or security does that assumption carry? What else are we assuming should be neutral? Software? Analytics? What value judgements are we bringing forward with those assumptions?

An assumption of neutrality often comes with a transference of trust. A speaker at ALA even said that the three professions thought of as the most trustworthy (via a national poll) are firefighting, nursing, and librarianship, and so, by his logic, we must be neutral. Perhaps some do not conflate trust and neutrality, but when we do assume neutrality equates with trust in these situations, we remove the human aspect from the equation. Nurses and librarians, as people, are not neutral. People hold biases and a variety of lived experiences that shape perspectives and approaches. If you engage this line of thought and interrogate your assumptions and beliefs, it can become apparent that it takes effort to recognize and mitigate our human biases throughout the various interactions in our lives.

What of our technology? Systems and software are often put forward as logic-driven, neutral devices, considered apart from their human creators. The position of some people is that machines lack emotions and are, therefore, immune to our human biases and prejudices. This position is inaccurate and dangerous and requires our attention. Algorithms and analytics are not neutral. They are designed by people, who carry forward their own notions of what is true and what is neutral. These ideas are built into the structure of the systems and have the potential to influence our perception of reality. As we rely on “data-driven decision-making” across all aspects of our society — education, healthcare, entertainment, policy — we transfer trust and power to that data. All too often, we do that without scrutinizing the sources of the data, or the algorithms acting upon them. Moreover, as we push further into machine learning systems – systems that are trained on data to look for patterns and optimize processes – we open the door for those systems to amplify biases. To “learn” our systemic prejudices and inequities.

People far more expert in this domain than I am have raised these questions and researched the effects that biased systems can have on our society6 7 8. I often bring these issues up when I want to emphasize how problematic it is to let the assumption that data-driven outcomes are “truth” persist, and how critical it is to apply information literacy practices to data. But as I thought about this issue and read more from these experts, I have been struck by the variety of responses that these experts elicit. How do reactions change based on who is doing the challenging?

Angela Galvan questioned assumptions related to hiring, performance, and belonging in librarianship, based on the foundation of the profession’s “whiteness,” and was met with hostile comments on the post9. Nicole A. Cooke wrote about implicit assumptions when we write about tolerance and diversity and has been met with hostile comments10 while her micro-aggression research has been highlighted by Campus Reform11, which led to a series of hostile communications to her. Chris Bourg’s keynote about diversity and technology at Code4Lib was met with hostility12. Safiya Noble wrote a book about bias in algorithms and technology, which resulted in one of the more spectacular Twitter disasters13 14, wherein someone found it acceptable to dismiss her research without even reading the book.

Assumptions of neutrality, whether related to library services, space, collections, or the people doing the work, allow oppressive systems to persist and contribute to a climate where the perspectives and expertise of marginalized people in particular can be dismissed. Insisting that we promulgate the library and technology – and the people working in it and with it – as neutral actors erases the realities that these women (and countless others) have experienced. Moreover, it allows those operating with harmful and discriminatory assumptions to believe that they *are* neutral, by virtue of working in those spaces, and that their truth is an objective truth. It limits the desire for dialog, discourse, and growth – because who is really motivated to listen when you think you are operating from a place of “Truth”…when you feel that the strength of your assumptions can invalidate a person’s life?

Introducing Omeka S

My library has used Omeka as part of our suite of platforms for creating digital collections and exhibits for many years now. It’s easy to administer and use, and many of our students, particularly in history or digital humanities, learn how to create exhibits with it in class or have experience with it from other institutions, which makes it a good solution for student projects. This creates challenges, however, since it’s been difficult to have multiple sites or distributed administration. A common scenario is that we have a volunteer student, often in history, working on a digital exhibit as part of a practicum, and we want the donor to review the exhibit before it goes live. We had to create administrative accounts for both the student and the donor, which required a lot of explanations about how to get into just the one part of the system they were supposed to be in (it’s possible to create a special account to view collections that aren’t public, but not exhibits). Even though the admin accounts can’t do everything (there’s a super admin level for that), it’s a bit alarming to hand out administrative accounts to people I barely know.

This problem goes away with Omeka S, which is the new and completely rebuilt Omeka. It supports having multiple sites (which is the new name for exhibits) and distributed administration by site. Along with this, there are sophisticated metadata templates that you can assign to sites or users, which takes away the need for lots of documentation on what metadata to use for which item type. When I showed a member of my library’s technical services department the metadata templates in Omeka S, she gasped with excitement. This should indicate that, at least for those of us working on the back end, this is a fun system to use.

Trying it Out For Yourself

I have included some screenshots below, but you might want to use the Omeka S Sandbox to follow along. You can experiment with anything, and the data is reset every Monday, Wednesday, Friday, and Sunday. The sandbox includes a variety of sample exhibits; one of them, “A Battered Tin Dispatch Box,” is the source of some of the screenshots below.

A Quick Tour Through Omeka S

This is what the Omeka Classic administrative dashboard looks like for a super administrator.

Omeka Classic administrative interface

And this is the dashboard for Omeka S. It’s not all that different functionally, but definitely a different aesthetic experience.

Omeka S administrative interface

Most things in Omeka S work analogously to classic Omeka, but some things have been renamed or moved around. The documentation walks through everything in order, so it’s a great place to start learning. Overall, my feeling about Omeka S is that it’s much easier to tap into its powerful features with less of a learning curve. I first learned Omeka S at the DLF Forum conference in fall 2017 directly from Patrick Murray-John, the Omeka Development Team Manager, and some of what is below is from his description.

Sites

Sites inset

Omeka S has the very useful concept of Sites, which again function like exhibits in classic Omeka. Each site has its own set of administrative functions and user permissions, which allow for viewer, editor, or admin roles by site. I really appreciate this, since it allowed me to give student volunteers access to just the site they needed, and when we need to give other people access to view a site before it’s published, we can do that. It’s also easier to add outside or supplementary materials to the exhibit navigation. On the individual pages there are a variety of blocks available, and the layout is easier to set up for people without a lot of HTML skills.

Resource Templates

These existed in Omeka Classic, but were less straightforward. Now you can set a resource template with properties from multiple vocabularies and build the documentation right into the template. The data type can be text or URI, or draw from vocabularies with autosuggest. For example, you can set the Rights field to draw from Rights Statement options.

Items

Items work in a similar fashion to Omeka Classic, but they exist at the installation level, so they can be reused across multiple sites. What’s great is that the nature of an item can be much more flexible: items can include URIs, maps, and multiple types of media such as a URL, HTML, an IIIF image, oEmbed, or YouTube. This reflects the way we were actually using Omeka Classic, but without the technical overhead to make it all work, and it will make it easier for more people to create interactive and web-integrated exhibits.
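Because Items live at the installation level, they can also be read programmatically through the Omeka S REST API. The sketch below, written with Python’s requests library, is a hedged illustration only: the base URL and API keys are placeholders, and the authentication parameters and response keys (key_identity, key_credential, o:id, o:title) are my recollection of the Omeka S API, so verify them against your own installation’s API documentation before relying on this.

```python
import requests

# Placeholder values: substitute your installation's URL and an API key pair
# generated for your user account in the Omeka S admin interface.
BASE_URL = "https://example.org/omeka-s/api"
params = {
    "key_identity": "YOUR_KEY_IDENTITY",       # assumed name of the key id parameter
    "key_credential": "YOUR_KEY_CREDENTIAL",   # assumed name of the key secret parameter
    "per_page": 10,
}

# Fetch a page of items from the installation-level item pool.
response = requests.get(f"{BASE_URL}/items", params=params)
response.raise_for_status()

for item in response.json():
    # "o:id" and "o:title" are assumed to be the identifier and title keys
    # in the JSON-LD output; check a real response to confirm.
    print(item.get("o:id"), "-", item.get("o:title"))
```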

Item Sets

Item Sets are the new name for Collections and, like Items, they can have metadata from multiple vocabularies. Unlike Collections, however, items can belong to multiple Item Sets, and Item Sets can be associated with sites to limit what people see. The tools for batch adding and editing are similar, but more powerful, because you can actually remove or edit metadata in bulk.

Themes

Themes in Omeka S have changed quite a bit, and as Murray-John explained, theming is more complicated than in the past. Rather than calling local functions, Omeka S uses patterns from Zend Framework 3, so the process of theming will require more careful thought and planning. That said, the provided themes are a great starting point, and thanks to the multiple layout options for sites, it’s less critical to be able to create custom themes for certain exhibits. I wrote about how to create themes in Omeka in 2013, and while some of that still holds true, you would want to consult the updated documentation to see how to do this in Omeka S.

Mapping

One of my favorite things in Omeka S is the Mapping module, which allows you to add geolocation metadata to items, and create a map on site pages. Here’s an example from the Omeka S Sandbox with locations related to Scotland Yard mapped for an item in the Battered Tin Dispatch Box exhibit.

Map interface for items

This can then turn into an interactive map on the front end.

Map interface for exhibits

For the vast majority of mapping projects that our students want to do, this works in a very straightforward manner. Neatline is a plugin for Omeka Classic that allows much more sophisticated mapping and timelines–while it should be ported over to Omeka S, it currently is not listed as a module. In my experience, however, Neatline is more powerful than what many people are trying to do, and that added complexity can be a challenge. So I think the Mapping module looks like a great compromise.

Possible Approaches to Migration

Migration between Omeka Classic and Omeka S works well for items. For that, there’s the Omeka2 Importer module. Because exhibits work differently, they would have to be recreated. Omeka.net, the hosted version of Omeka, will stay on Omeka Classic for the foreseeable future, so there’s no concern that it will stop being supported any time soon, according to Patrick Murray-John.

Conclusion

We are still working on setting up Omeka S. My personal approach is that as new ideas for exhibits come up, we will start them in Omeka S first. As we have time and interest, we may migrate older exhibits if they need continual management, but because some of them rely on Omeka Classic plugins, we plan mostly to create new exhibits in Omeka S that don’t depend on those plugins. I am excited to pair this with our other digital collection platforms to build exhibits that use content across our platforms and extend into the wider web.

 

Net Neutrality Roundup: Alternate Internets

Now that we are facing net neutrality regulation rollbacks here in the United States, what new roles could librarians play in the continued struggle to provide people with unrestricted access to information?  ALA has long been dedicated to equal access to information, as clearly outlined in both the Core Values and Code of Ethics. You can read ALA’s Joint Letter to the FCC here. It emphasizes that “a non-neutral net, in which commercial providers can pay for enhanced transmission that libraries and higher education cannot afford, endangers our institutions’ ability to meet our educational mission.”

Net neutrality was discussed back in 2014 on this blog, with Margaret Heller’s post entitled “What Should Academic Librarians Know about Net Neutrality?” We recommend you start there for some background on the legal issues around net neutrality. It includes a fun trip into the physical spaces our content traverses to get onto our screens. One of the conclusions of that post was that libraries need to work on ensuring that everyone has access to broadband networks to begin with, and that more varied access ensures that no company has a monopoly over internet service in a location. There have been a number of projects along these lines over the past decade and more, and we encourage you to find one in your area and get involved.

Library-based initiatives

Equal access to information starts with having access at all. Several libraries have kicked off initiatives like loaning out wi-fi hotspots for several-month periods in New York City, Brooklyn, and Chicago.

Ideally everyone will have secure and private internet access. The Library Freedom Project has been working for years to protect the privacy of patrons, including educating librarians about the threat of surveillance in modern digital technology, working with Tor Project to configure Tor exit relays in library systems, and creating educational resources for teaching patrons about privacy.

These are some excellent steps towards a more democratic and equal access to information, but what happens if the internet as we know it fundamentally changes? Let’s explore some “alternative internets” that rely on municipal and/or grassroots solutions.

Mesh networks

You might be familiar with wireless mesh networks for home use. You can set up a wireless mesh network in your own house to ensure even coverage throughout. Since each node covers a certain part of the house, you don’t have to rely on how close you are to the wireless router to connect. You can also change the network around easily as your needs change.

Mesh networks are dynamically routed networks in which nodes exchange routes, internet access, local networks, and neighbors. They can be wireless or wired. A mesh network may not be purely a “mesh” but rather a combination of mesh technology and “point to point” links, with connections linking directly to each other and each of these connections expanding out to its own local mesh network. BMX6/BMX7, BATMAN, and Babel are some of the most popular network protocols (with highly memorable names!) for achieving a broad mesh network, but there are many more. Just as you can install devices in your home, you can cooperate with others in your community or region to create your own network. The LibreMesh project is an example of a way in which DIY wireless networks are being created in several European countries.
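To make the multi-hop idea concrete, here is a purely conceptual toy in Python (not BMX, BATMAN, or Babel, which do far more, such as tracking route metrics and link quality). It only shows that traffic can reach a node with an internet uplink by hopping through neighbors, so no single router is a choke point. The node names and links are invented for the example.

```python
from collections import deque

# A toy mesh: each node lists the neighbors it can reach directly (by radio or cable).
mesh = {
    "library":     ["cafe", "school"],
    "cafe":        ["library", "apartment_a"],
    "school":      ["library", "apartment_b"],
    "apartment_a": ["cafe", "uplink"],
    "apartment_b": ["school"],
    "uplink":      ["apartment_a"],   # the one node with an internet connection
}

def find_route(start, goal):
    """Breadth-first search for a multi-hop route through neighboring nodes."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in mesh[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Traffic from the school still reaches the uplink even with no direct link.
print(" -> ".join(find_route("school", "uplink")))
```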

Municipal networks

Nineteen towns in Colorado are exploring alternate internet solutions, such as a public option. Chattanooga offers public gigabit internet speeds. This has some major advantages for the city, including the ability to offer free internet access to low-income residents and to ensure that anyone who pays for access gets the same level of access, which is not the case in most cities, where some areas pay a high cost for a low signal. Even just the presence and availability of municipal broadband “has radically altered the way local politicians and many ordinary Chattanoogans conceive of the Internet. They have come to think of it as a right rather than a luxury.”1 A similar initiative in Roanoke is the Roanoke Valley Broadband Authority, which in an interesting twist lobbied the Virginia legislature to reduce oversight of its activities in a bill that originally specifically stated that broadband services should focus on underserved areas – a reminder that in many ways municipalities view this as an investment in business rather than a social justice issue.2 In Detroit, the Detroit Community Technology Project is working to set up and bring community wireless to neighborhoods in Detroit. New York City’s Red Hook neighborhood relied on its mesh network during Hurricane Sandy to stay connected to the world outside of New York. New York City also has the rapidly growing NYC Mesh community with two supernodes and another coming later this year, uniting lower Manhattan with Northern and Central Brooklyn. Toronto also has an emerging mesh community with a handful of connected nodes. The Urbana-Champaign Independent Media Center developed CUWiN, which provided open wireless networks in “Champaign-Urbana, Homer, Illinois, tribal lands of the Mesa Grande Reservation, and the townships of South Africa”.3

Outside of North America, Berlin has its own mesh network, called Freifunk. Austria has Funkfeuer. Greece has the Athens Wireless Metropolitan Network. Italy has Ninux, and Argentina has AlterMundi. Villages in rural northern England are joining together to get connected via a cooperative model called B4RN, where they dig their own trenches for cables using their farm tractors.

Thinking big

Guifi.net is a wi-fi network that covers a large part of Spain and defines itself as “the biggest free, open and neutral network.” It was developed in 2004 in response to the lack of broadband Internet in rural areas of the Catalonia region, where commercial Internet providers weren’t providing a connection, or provided only a very poor one. Guifi has established a Wireless Commons License as guidelines that can be adopted by other networks. At the time of posting, 34,306 nodes were active, with over 17,000 planned.

Finally, Brooklyn Public Library was granted $50,000 from IMLS to develop a mesh network called BKLYN Link, along with a technology fellowship program for 18-24 year olds. Looking forward to what emerges from this initiative!

Conclusion

The internet got its start when college campuses connected to each other, first across short geographic distances and eventually across much longer ones. Could we see academic and public libraries working together and leading the return to old ways of accessing the internet for a new era?

Meanwhile, it’s important to ensure that the FCC has appropriate regulatory powers over ISPs, otherwise we have no recourse if companies choose to prioritize packets. You should contact your legislators and make sure that the people at your campus who work with the government are sharing their perspectives as well. You can get some help with a letter to Congress from ALA.

Data Refuge and the Role of Libraries

Society is always changing. For some, the change can seem slow and frustrating, while others may feel as though the change occurred in a blink of an eye. What is this change that I speak of? It can be anything…civil rights, autonomous cars, or national leaders. One change that no one ever seems particularly prepared for, however, is when a website link becomes broken. One day, you could click a link and get to a site and the next day you get a 404 error. Sometimes this occurs because a site was migrated to a new server and the link was not redirected. Sometimes this occurs because the owner ceased to maintain the site. And sometimes, this occurs for less benign reasons.

Information access via the Internet is an activity that many (but not all) of us do every day, in sometimes unconscious fashion: checking the weather, reading email, receiving news alerts. We also use the Internet to make datasets and other sources of information widely available. Individuals, universities, corporations, and governments share data and information in this way. In the Obama administration, the Open Government Initiative led to the development of Project Open Data and data.gov. Federal agencies started looking at ways to make information sharing easier, especially in areas where the data are unique.

One area of unique data is climate science. Since climate data is captured on a specific day, at a specific time, and under certain conditions, it can never be truly reproduced. It will never be January XX, 2017 again. With these constraints, climate data can be thought of as fragile. The copies that we have are the only records that we have. Much of our nation’s climate data has been captured by research groups at institutes, universities, and government labs and agencies. During the election, much of the rhetoric from Donald Trump was rooted in the belief that climate change is a hoax. Upon his election, Trump tapped Scott Pruitt, who has fought many of the EPA’s attempts to regulate pollution, to lead the EPA. This, along with other messages from the new administration, has raised alarms within the scientific community that the United States may repeat the actions of the Harper administration in Canada, which literally threw away thousands of items from federal libraries that were deemed outside scope, through a process that was criticized as not transparent.

In an effort to safeguard and preserve this data, the Penn Program of Environmental Humanities (PPEH) helped organize a collaborative project called Data Refuge. This project requires the expertise of scientists, librarians, archivists, and programmers to organize, document, and back up data that is distributed across federal agencies’ websites. Maintaining the integrity of the data while ensuring its reusability is a paramount concern, and an area where librarians and archivists must work hand in glove with the programmers (sometimes one and the same) who are writing the code to pull, duplicate, and push content. Wired magazine recently covered one of the Data Refuge events and detailed the way that the group worked together, while much of the process is driven by individual actions.

In order to capture as much of this data as possible, the Data Refuge project relies on groups of people organizing around this topic across the country. The PPEH site details the requirements to host a successful DataRescue event and has a Toolkit to help promote and document the event. There is also a survey that you can use to nominate climate or environmental data to be part of the Data Refuge. Not in a position to organize an event? Don’t like people? You can also work on your own! An interesting observation from the “work on your own” page is the option to nominate any “downloadable data that is vulnerable and valuable.” This means that the Internet Archive and the End of Term Harvest Team (a project to preserve government websites from the Obama administration) are interested in any data that you have reason to believe may be in jeopardy under the current administration.

A quick note about politics. Politics are messy, and it can seem odd that people are organizing in this way when administrations change every four or eight years and, when there is a party change in the presidency, it is almost a certainty that there will be major departures in policy and priorities from one administration to the next. What is important to recognize is that our data holdings are increasingly solely digital, and therefore fragile. The positions on issues like climate, environment, civil rights, and many, many others are so diametrically opposed between the Obama and Trump administrations that we – the public – have no assurances that the data will be retained or made widely available for sharing. This administration speaks of “alternative facts” and “disagree[ing] with the facts,” and this makes people charged with preserving facts wary.

Many questions about the sustainability and longevity of the project remain. Will End of Term or Data Refuge be able to/need to expand the scope of these DataRescue efforts? How much resourcing can people donate to these events? What is the role of institutions in these efforts? This is a fantastic way for libraries to build partnerships with entities across campus and across a community, but some may view the political nature of these actions as incongruous with the library mission.

I would argue that policies and political actions are not inert abstractions. There is a difference between promoting a political party and calling attention to policies that are in conflict with human rights and freedom of information. Loath as I am to make this comparison, would anyone truly claim that burning books is protected political speech, and that opposing such burning is “playing politics”? Yet these were the actions of a political party – in living memory – hosted in university towns across Germany. Considering the initial attempt to silence the USDA and the temporary freeze on the EPA, libraries should strongly support the efforts of PPEH, Data Refuge, End of Term, and concerned citizens across the country.

 

Cybersecurity, Usability, Online Privacy, and Digital Surveillance

Cybersecurity is an interesting and important topic, one closely connected to those of online privacy and digital surveillance. Many of us know that it is difficult to keep things private on the Internet. The Internet was invented to share things with others quickly, and it excels at that job. Businesses that process transactions with customers and store the information online are responsible for keeping that information private. No one wants social security numbers, credit card information, medical history, or personal e-mails shared with the world. We expect and trust banks, online stores, and our doctor’s offices to keep our information safe and secure.

However, keeping private information safe and secure is a challenging task. We have all heard of security breaches at J.P. Morgan, Target, Sony, Anthem Blue Cross and Blue Shield, the Office of Personnel Management of the U.S. federal government, the University of Maryland at College Park, and Indiana University. Sometimes a data breach takes place when an institution fails to patch a hole in its network systems. Sometimes people fall for a phishing scam, or a virus in a user’s computer infects the target system. Other times, online companies compile customer data into personal profiles, which are then sold to data brokers and on into the hands of malicious hackers and criminals.

Image from Flickr – https://www.flickr.com/photos/topgold/4978430615

Cybersecurity vs. Usability

To prevent such data breaches, institutional IT staff are trained to protect their systems against vulnerabilities and intrusion attempts. Employees and end users are educated to be careful when dealing with institutional or customer data. There are also systematic measures that organizations can implement, such as two-factor authentication, stringent password requirements, and locking accounts after a certain number of failed login attempts.
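As a rough sketch of that last measure, here is what simple lockout logic might look like in Python; the threshold and lockout window are arbitrary assumptions for illustration, not a recommendation.

```python
import time

MAX_FAILED_ATTEMPTS = 5      # assumption: lock the account after five failures
LOCKOUT_SECONDS = 15 * 60    # assumption: fifteen-minute lockout window

failed_attempts = {}         # username -> (failure count, time of last failure)

def is_locked_out(username):
    """Return True while an account is inside its lockout window."""
    count, last_failure = failed_attempts.get(username, (0, 0.0))
    if count < MAX_FAILED_ATTEMPTS:
        return False
    if time.time() - last_failure > LOCKOUT_SECONDS:
        failed_attempts.pop(username, None)  # lockout expired; reset the counter
        return False
    return True

def record_login_attempt(username, success):
    """Update the failure counter after each login attempt."""
    if success:
        failed_attempts.pop(username, None)
    else:
        count, _ = failed_attempts.get(username, (0, 0.0))
        failed_attempts[username] = (count + 1, time.time())

# Simulate six bad passwords in a row: the account is now locked.
for _ in range(6):
    record_login_attempt("jsmith", success=False)
print("Locked out:", is_locked_out("jsmith"))
```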

While these measures strengthen an institution’s defense against cyberattacks, they may negatively affect the usability of the system, lowering users’ productivity. As a simple example, security measures like a CAPTCHA can cause an accessibility issue for people with disabilities.

As another example, imagine that a university IT office concerned about the data security of cloud services starts requiring all faculty, students, and staff to use only cloud services that are SOC 2 Type II certified. SOC stands for “Service Organization Controls.” It consists of a series of standards that measure how well a given service organization keeps its information secure. For a business to be SOC 2 certified, it must demonstrate that it has sufficient policies and strategies that will satisfactorily protect its clients’ data in five areas known as “Trust Services Principles”: the security of the service provider’s system, the processing integrity of this system, the availability of the system, the privacy of personal information that the service provider collects, retains, uses, discloses, and disposes of for its clients, and the confidentiality of the information that the service provider’s system processes or maintains for the clients. SOC 2 Type II certification means that the business has maintained relevant security policies and procedures over a period of at least six months, and it is therefore a good indicator that the business will keep its clients’ sensitive data secure. Dropbox for Business is SOC 2 certified, but it costs money. The free version is not as secure, but many faculty, students, and staff in academia use it frequently for collaboration. If a university IT office simply bans people from using the free version of Dropbox without offering an alternative that is as easy to use as Dropbox, people will undoubtedly suffer.

Some of you may know that the USPS website does not provide a way to reset the password for users who have forgotten their usernames. They are instead asked to create a new account. If they remember the account username but enter the wrong answers to the two security questions more than twice, the system automatically locks their accounts for a certain period of time. Again, users have to create a new account. Clearly, the system that does not allow a password reset for those forgetful users is more secure than one that does. In reality, however, this security measure creates a huge usability issue, because average users do forget their passwords and the answers to the security questions that they set up themselves. It’s not hard to guess how frustrated people will be when they realize that they entered a wrong mailing address for mail forwarding and are now unable to get back into the system to correct it, because they cannot remember their passwords or the answers to their security questions.

To give an example related to libraries, a library may decide to block all international traffic to its licensed e-resources to prevent foreign hackers who have gotten hold of the username and password of a legitimate user from accessing those e-resources. This would certainly help libraries avoid a potential breach of licensing terms in advance and spare them from having to shut down compromised user accounts one by one whenever those are found. However, it would also make it impossible for legitimate users traveling outside of the country to access those e-resources, which many users would find unacceptable. Furthermore, malicious hackers would probably just use a proxy to make their IP address appear to be located in the U.S. anyway.

What would users do if their organization required them to reset passwords on a weekly basis for their work computers and the several or more systems that they also use constantly for work? While this may strengthen the security of those systems, it’s easy to see that having to reset all those passwords every week, and keeping track of them without forgetting or mixing them up, will be a nightmare. Most likely, users will start choosing less complicated passwords or even adopt a single password for all the different services. Some may even stick to the same password every time the system requires a reset, unless the system automatically detects the previous password and prevents them from continuing to use it. Ill-thought-out cybersecurity measures can easily backfire.

Security is important, but users also want to be able to do their job without being bogged down by unwieldy cybersecurity measures. The more user-friendly and the simpler the cybersecurity guidelines are to follow, the more users will observe them, thereby making a network more secure. Users who face cumbersome and complicated security measures may ignore or try to bypass them, increasing security risks.

Image from Flickr – https://www.flickr.com/photos/topgold/4978430615

Cybersecurity vs. Privacy

Usability and productivity may be a small issue, however, compared to the risk of mass surveillance resulting from aggressive security measures. In 2013, the Guardian reported that the communication records of millions of people were being collected by the National Security Agency (NSA) in bulk, regardless of suspicion of wrongdoing. A secret court order prohibited Verizon from disclosing the NSA’s information request. After a cyberattack against the University of California at Los Angeles, the University of California system installed a device capable of capturing, analyzing, and storing all network traffic to and from the campus for over 30 days. This security monitoring was implemented secretly, without consulting or notifying the faculty and others who would be subject to the monitoring. The San Francisco Chronicle reported that the IT staff who installed the system were given strict instructions not to reveal it was taking place. Selected committee members on the campus were told to keep this information to themselves.

The invasion of privacy and the lack of transparency in these network monitoring programs have caused great controversy. Such wide and indiscriminate monitoring programs must have a very good justification and offer clear answers to vital questions such as what exactly will be collected, who will have access to the collected information, when and how the information will be used, what controls will be put in place to prevent the information from being used for unrelated purposes, and how the information will be disposed of.

We have recently seen another case in which security concerns conflicted with people’s right to privacy. In February 2016, the FBI requested that Apple create a backdoor application that would bypass the security measures in place in its iOS, because the FBI wanted to unlock an iPhone 5C recovered from one of the shooters in the San Bernardino shooting incident. Apple iOS secures users’ devices by permanently erasing all data when a wrong password is entered more than ten times, if people choose to activate this option in the iOS settings. The FBI’s request was met with strong opposition from Apple and others. Such a backdoor application could easily be exploited for illegal purposes by black hat hackers, for unjustified privacy infringement by other capable parties, and even for repression by authoritarian governments. Apple refused to comply with the request, and a court hearing was scheduled for March 22. The FBI, however, withdrew the request, saying that it had found a way to hack into the phone in question without Apple’s help. Now Apple has to figure out what the vulnerability in its iOS is if it wants its encryption mechanism to be foolproof. In the meantime, iOS users know that their data is no longer as secure as they once thought.

Around the same time, the Senate’s draft bill titled the “Compliance with Court Orders Act of 2016” proposed that people be required to comply with any authorized court order for data, and that if that data is “unintelligible” – meaning encrypted – it must be decrypted for the court. This bill is problematic because it practically nullifies the efficacy of any end-to-end encryption, which we use every day in everything from our iPhones to messaging services like WhatsApp and Signal.
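To make “unintelligible” concrete, here is a minimal sketch using the Fernet symmetric encryption provided by the Python cryptography library. This is not how Signal or WhatsApp implement end-to-end encryption; it only illustrates that ciphertext is unreadable to anyone, including a court, unless someone holding the key decrypts it.

```python
from cryptography.fernet import Fernet, InvalidToken

# The key stays with the communicating parties; the service never needs to see it.
key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"meet at the library at 7pm")
print("What travels over the wire:", ciphertext[:40], b"...")

# With the key, the message comes back intact.
print("Decrypted:", f.decrypt(ciphertext))

# Without the right key, decryption simply fails.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("A different key cannot decrypt the message")
```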

Because security is essential to privacy, it is ironic that certain cybersecurity measures are used to greatly invade privacy rather than protect it. Because we do not always fully understand how the technology actually works or how it can be exploited for both good and bad purposes, we need to be careful about giving blanket permission to any party to access, collect, and use our private data without clear understanding, oversight, and consent. As we share more and more information online, cyberattacks will only increase, and organizations and the government will struggle even more to balance privacy concerns with security issues.

Why Libraries Should Advocate for Online Privacy

The fact that people may no longer have privacy on the Web should concern libraries. Historically, libraries have been strong advocates of intellectual freedom, striving to keep patrons’ data safe and protected from the unwanted eyes of the authorities. As librarians, we believe in people’s right to read, think, and speak freely and privately as long as such an act does not harm others. The Library Freedom Project is an example that reflects this belief, held strongly within the library community. It educates librarians and their local communities about surveillance threats, privacy rights and law, and privacy-protecting technology tools to help safeguard digital freedom, and it helped the Kilton Public Library in Lebanon, New Hampshire, become the first library to operate a Tor exit relay, providing anonymity for patrons while they browse the Internet at the library.

New technologies have brought us the unprecedented convenience of collecting, storing, and sharing massive amounts of sensitive data online. But the fact that such sensitive data can easily be exploited by falling into the wrong hands has also created an unparalleled potential for invasion of privacy. While the majority of librarians take a very strong stance in favor of intellectual freedom and against censorship, it is often hard to discern a correct stance on online privacy, particularly when it is pitted against cybersecurity. Some even argue that those who have nothing to hide do not need privacy at all.

However, privacy is not equivalent to hiding wrongdoing, nor do people keep certain things secret because those things are necessarily illegal or unethical. Being watched 24/7 will drive any person crazy, whether s/he is guilty of any wrongdoing or not. Privacy allows us a safe space to form our thoughts and consider our actions on our own, without being subject to others’ eyes and judgments. Even in the absence of actual mass surveillance, just the belief that one can be placed under surveillance at any moment is sufficient to trigger self-censorship and negatively affect one’s thoughts, ideas, creativity, imagination, choices, and actions, making people more conformist and compliant. This is further corroborated by a recent study from Oxford University, which provides empirical evidence that the mere existence of a surveillance state breeds fear and conformity and stifles free expression. Privacy is an essential part of being human, not some trivial condition that we can do without in the face of a greater concern. That’s why many people under political dictatorships continue to choose death over life under mass surveillance and censorship in their fight for freedom and privacy.

The Electronic Frontier Foundation states that privacy means respect for individuals’ autonomy, anonymous speech, and the right to free association. We want to live as autonomous human beings free to speak our minds and think on our own. If part of a library’s mission is to contribute to helping people to become such autonomous human beings through learning and sharing knowledge with one another without having to worry about being observed and/or censored, libraries should advocate for people’s privacy both online and offline as well as in all forms of communication technologies and devices.

Looking Across the Digital Preservation Landscape

When it comes to digital preservation, everyone agrees that a little bit is better than nothing. Look no further than these two excellent presentations from Code4Lib 2016, “Can’t Wait for Perfect: Implementing “Good Enough” Digital Preservation” by Shira Peltzman and Alice Sara Prael, and “Digital Preservation 101, or, How to Keep Bits for Centuries” by Julie Swierczek. I highly suggest you go check those out before reading more of this post if you are new to digital preservation, since they get into some technical details that I won’t.

The takeaway from these for me was twofold. First, digital preservation doesn’t have to be hard, but it does have to be intentional; second, it does require institutional commitment. If you’re new to the world of digital preservation, understanding all the basic issues and what your options are can be daunting. I’ve been fortunate enough to lead a group at my institution that has spent the last few years working through some of these issues, and so in this post I want to give a brief overview of the work we’ve done, as well as the current landscape for digital preservation systems. This won’t be an in-depth exploration, more like a key to the map. Note that ACRL TechConnect has covered a variety of digital preservation issues before, including data management and preservation in “The Library as Research Partner” and using bash scripts to automate digital preservation workflow tasks in “Bash Scripting: automating repetitive command line tasks”.

The committee I chair started by examining born-digital materials but expanded its focus to all digital materials, since our digitized materials were an easier test case for a lot of our ideas. The committee spent a long time understanding the basic tenets of digital preservation–and in truth, we’re still working on this. For this process, we found working through the NDSA Levels of Digital Preservation an extremely helpful exercise–you can find a helpfully annotated version with tools by Shira Peltzman and Alice Sara Prael, as well as an additional explanation by Shira Peltzman. We also relied on the Library of Congress Signal blog and the work of Brad Houston, among other resources. A few of the tasks we accomplished were creating a rough inventory of digital materials and a workflow manual, and acquiring many terabytes (currently around 8) of secure networked storage space for files to replace all the removable hard drives being used for backups. While backups aren’t exactly digital preservation, we wanted to at the very least secure the backups we did have. An inventory and workflow manual may sound impressive, but I want to emphasize that these are living and somewhat messy documents. The major advantage of having them is not so much what we do have, but identifying gaps in our processes. Through this process, we were able to develop a lengthy (but prioritized) list of tasks that need to be completed before we’ll be satisfied with our processes. For example, one of the major workflow gaps we discovered is that we have many items on obsolete digital media formats, such as floppy disks, that need to be imaged before they can even be inventoried. We identified the tool we wanted to use for that, but time and staffing pressures have left the completion of this project in limbo. We’re now working on hiring a graduate student who can help work on this and similar projects.

The other piece of our work has been trying to understand what systems are available for digital preservation. I’ll summarize my understanding of this below, with several major caveats. First, this is a world that is currently undergoing a huge amount of change as many companies and people work on developing new systems or improving existing ones, so there is a lot missing from what I will say. Second, none of these solutions are necessarily mutually exclusive. Some by design require various pieces to be used together, some may not, but your circumstances may dictate a different solution. For instance, you may not like the access layer built into one system, and so will choose something else. The dream that you can just throw money at the problem and it will go away is, at present, still just a dream–as are so many library technology problems.

The closest to such a dream is the end-to-end system. This is something where at one end you load in a file or set of files you want to preserve (for example, a large set of donated digital photographs in TIFF format), and at the other end have a processed archival package (which might include the TIFF files, some metadata about the processing, and a way to check for bit rot in your files), as well as an access copy (for example, a smaller sized JPG appropriate for display to the public) if you so desire–not all digital files should be available to the public, but still need to be preserved.
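That “way to check for bit rot” is typically plain fixity checking: record a checksum for each file when it enters the archival package and recompute it later to confirm nothing has silently changed. Here is a minimal sketch; the manifest file name and folder layout are invented for the example and are not any particular system’s format.

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("manifest.json")         # hypothetical manifest of known-good checksums
ARCHIVE_DIR = Path("archival_package")   # hypothetical folder of preserved files

def sha256(path):
    """Compute a SHA-256 checksum without loading the whole file into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if MANIFEST.exists():
    # Audit mode: compare current checksums against the stored manifest.
    recorded = json.loads(MANIFEST.read_text())
    for name, expected in recorded.items():
        actual = sha256(ARCHIVE_DIR / name)
        status = "OK" if actual == expected else "MISMATCH (possible bit rot)"
        print(f"{name}: {status}")
else:
    # First run: record a checksum for every file in the package.
    recorded = {p.name: sha256(p) for p in ARCHIVE_DIR.iterdir() if p.is_file()}
    MANIFEST.write_text(json.dumps(recorded, indent=2))
    print(f"Recorded {len(recorded)} checksums in {MANIFEST}")
```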

Examples of such end-to-end systems include Preservica, ArchivesDirect, and Rosetta. All of these are hosted vended products, but ArchivesDirect is based on open source Archivematica, so it is possible to get some idea of the experience of using it if you are able to install the tools on which it is based. The issues with end-to-end systems are similar to any other choice you make in library systems. First, they come at a high price–Preservica and ArchivesDirect are open about their pricing, and for a plan that will meet the needs of a medium-sized library you will be looking at a $10,000-$14,000 annual cost. You are pretty much stuck with the options offered in the product, though you still have many decisions to make within that framework. Migrating from one system to another if you change your mind may involve some very difficult processes, and so inertia dictates that you will be using that system for the long haul–and a short trial period or demo may not be enough to tell you whether that’s a good idea. But you do have the potential for more simplicity and therefore a stronger likelihood that you will actually use the system, and a hosted product is much more manageable for smaller staffs that lack dedicated positions for digital preservation work–or even room in current positions for digital preservation work. A hosted product is ideal if you don’t have the staff or servers to install anything yourself, and it helps you get your long-term archival files onto Amazon Glacier.

Amazon Glacier is, by the way, where pretty much all the services we’re discussing store everything you submit for long-term storage. It’s dirt cheap to store on Amazon Glacier, and if you can restore slowly, not too expensive to restore–it’s only expensive if you need to restore a lot quickly. But using it is somewhat technically challenging, since you only interact with it through APIs–there’s no way to log in and upload or download files as with a cloud storage service like Dropbox. For that reason, when you’re paying a service hundreds of dollars a terabyte that ultimately stores all your material on Amazon Glacier, which costs pennies per gigabyte, you’re paying for the technical infrastructure to get your stuff on and off of there as much as anything else. In another sense you’re paying for an insurance policy for accessing materials in a catastrophic situation where you do need to recover all your files–theoretically, you don’t have to pay extra in such a situation.
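For a sense of what “only interact with it through APIs” means in practice, here is a minimal sketch using boto3, the AWS SDK for Python. The vault name, file name, and region are placeholders; a real workflow would also record the returned archive ID and checksum in a manifest, since Glacier gives you no file browser to find things later.

```python
import boto3

# Assumes AWS credentials are already configured locally (e.g., ~/.aws/credentials).
glacier = boto3.client("glacier", region_name="us-east-1")

# Placeholder vault and file names. The archive ID returned below is the only
# handle Glacier gives you for retrieving this upload later.
with open("disk_image_001.tar", "rb") as archive:
    response = glacier.upload_archive(
        vaultName="example-preservation-vault",
        archiveDescription="Disk image 001; SHA-256 recorded in local manifest",
        body=archive,
    )

print("Archive ID:", response["archiveId"])
```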

A related option to an end-to-end system that has some attractive features is to join a preservation network. Examples of these include the Digital Preservation Network (DPN) and APTrust. In this model, you pay an annual membership fee (right now $20,000 annually, though this could change soon) to join the consortium. This gives you access to a network of preservation nodes (either Amazon Glacier or nodes at other institutions), access to tools, and a right (and requirement) to participate in the governance of the network. Another larger preservation goal of such networks is to ensure long-term access to material even if the owning institution disappears. Of course, $20,000 plus travel to meetings and work time to participate in governance may be out of reach of many, but it appears that both DPN and APTrust are investigating new pricing models that may meet the needs of smaller institutions who would like to participate but can't contribute as much in money or time. This is a world that I would recommend watching closely.

Up until recently, the way many institutions achieved digital preservation was through some kind of repository they created themselves, either with open source repository software such as Fedora Repository or DSpace or with some other type of DIY system. With open source Archivematica and a few other tools, you can build your own end-to-end system that will allow you to process files, store the files and preservation metadata, and provide access as is appropriate for the collection. This is theoretically a great plan. You can make all the choices yourself about your workflows, storage, and access layer. You can do as much or as little as you need to do. But in practice, for most of us this just isn't going to happen without a strong institutional commitment of staff and servers to maintain it long term, possibly at a higher cost than any of the other solutions. That realization is one of the driving forces behind Hydra-in-a-Box, an exciting initiative that is currently in development. The idea is to make it possible for institutions of many different sizes to take advantage of the robust feature sets for preservation in Fedora and for workflow management and access in Hydra, but without the overhead of installing and maintaining them. You can follow the project on Twitter and by joining the mailing list.
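
As a taste of the DIY building blocks involved, the sketch below uses the Library of Congress's bagit library for Python to package a directory of files with checksum manifests (the BagIt standard that many of these tools speak). The directory name and metadata are hypothetical, and this is only one small piece of what Archivematica or a full repository stack handles for you.

```python
# Minimal sketch of one DIY piece: package files as a BagIt bag and
# re-validate it later to catch bit rot or missing files.
import bagit

# Converts the directory in place into a bag: files move into data/ and
# checksum manifests plus bag-info.txt are written alongside them.
bag = bagit.make_bag(
    "photo_donation_2015",                                  # hypothetical directory
    {"Source-Organization": "Example University Library"},  # hypothetical metadata
)

# On a schedule, reopen the bag and verify every checksum.
bag = bagit.Bag("photo_donation_2015")
print("still intact:", bag.is_valid())
```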

After going through all this, I am reminded of one of my favorite slides from Julie Swierczek's Code4Lib presentation. She works through the Open Archival Information System (OAIS) reference model diagram to explain it in depth, comes to the point in the workflow that calls for "Sustainable Financing," and then zooms in on it. For many, this is the crux of the digital preservation problem. It is possible to do a sort of okay job with digital preservation for nothing or very little, but ensuring long-term preservation requires institutional commitment for the long haul, just as any library collection does. Given how much attention digital preservation is starting to receive, we can hope that more libraries will see it as a priority and start to participate. That may lead to even more options, tools, and knowledge, but it will still require making digital preservation a priority and putting in the work.

Doing Six Impossible Things Before Breakfast: An Approach to Keeping it User Centered

Keeping any large technical project user-centered is challenging at best. Adding in something like an extremely tight timeline makes it too easy to dispense with this completely. Say, for instance, you have six months to migrate to a new integrated library system that combines your old ILS, your link resolver, and many other tools with a new discovery layer. I would argue, however, that it is precisely on a tight timeline like that that a major focus on user experience research can become a key component of your success. I am referring in this piece specifically to user experience on the web, though of course other aspects of user experience go into such a project. While none of my observations about usability testing and user experience are new, I have realized from talking to others that many people need help advocating for the importance of user research. As we turn to our hopes and goals for 2016, let's all make a resolution to figure out a way to make better user experience research happen, even if it seems impossible.

  1. Selling the Need For User Testing

    When I worked on implementing a discovery layer at my job earlier this year, I had a team of 18 people from three campuses with varying levels of interest and experience in user testing. It was really important to us that the end product work for everyone, whether novice or experienced researcher, as well as for the library staff who would need to use the system on a daily basis. With so many people and such a tight timeline, building user testing into the schedule from the beginning helped us frame our decisions as hypotheses to confirm or nullify in the next round of testing. We tried to involve as many people as possible in the testing, though a core group with experience running usability tests administered them. Doing a test as early as possible also helps convince others of the need for testing: people who had never seen a usability test before found them convincing immediately and were much more on board for future tests.

  2. Remembering Who Your Users Are

    Reference and instruction librarians are users too. We sometimes get so focused on reminding librarians that they are not the users that we forget to make things work for them, and they do need to use the catalog too. Librarians who work with students in the classroom and in research consultations every day have a great deal of insight into seemingly minor issues that can lead to major frustrations. Here's an example. The desktop view of our discovery layer search box was about 320 pixels wide, which works fine if you are typing in just one word. Yet we were "selling" the discovery layer as something that handled known-item searching well, which meant that much of a pasted-in citation wasn't visible. The reference librarians who were doing this exact work knew this would be an issue. We expanded the search box so that more words are visible and known-item searching works better.

    The same goes for course reserves, interlibrary loan, and other staff who work with a discovery layer frequently, often under the added pressure of tight deadlines. If you can shave seconds off their tasks, that adds up to a huge amount of time over the course of a year, and it will often solve issues for other users as well. One example: the print view of a book record had very small text, since the print stylesheet was set to print at 85% font size, which made it challenging to read. The reserves staff relied on this print view to complete their daily work with a student worker, and for that student the small print size created an accessibility issue that led to inefficient manual workarounds. We were able to increase the font size in the print stylesheet to greater than 100%, which made the printed page easily readable and fixed the accessibility issue for this specific use case. I suspect many other people benefit from this as well.

  3. Divide the Work

    I firmly believe that everyone who is interested in user experience on the web should get some hands-on experience with it. That said, not everyone needs to do the hands-on work, and with a large project it is important that people focus on their core reason for being on the team. Dividing the group into overlapping teams that worked on data testing, interface testing, and user education and outreach helped us see the big picture without overwhelming everyone (a little overwhelm is going to happen no matter what). These groups worked separately much of the time for deep dives into specific issues, but they informed each other across the board. For instance, the data group might identify a potential issue, for which the interface group would design a test scenario. If testing indicated a change, the user education group could be aware of the implications for outreach.

  4. A Quick Timeline is Your Friend

    Getting a new tool out with only a few months of turnaround time is certainly challenging, but it forces you to forget about perfection and get features done. We got our hands on the discovery layer on a Friday and were doing tests the following Tuesday, with additional tests scheduled for two weeks after that first look. This meant our first tests were on something very rough, but they gave us a big list of items to fix in the two weeks before the next test (or to put on hold if they were lower priority). We ended up taking two months off from live usability testing in the middle of the process to focus on development and other types of testing (such as with trusted beta testers). But that early set of tests was crucial in setting the agenda and showing the importance of testing. We ultimately did five rounds of testing, four of which happened before the discovery layer went live and one a few months after.

  5. Think on the Long Scale

    The vendor or the community of developers is presumably not going to stop working on the product, and neither should you. For this reason, it is helpful to make it clear who is doing the work and to ensure that it is written into committee charges, job descriptions, or other appropriate documentation. Maintain a list of long-term goals, and in those short timescales figure out just one or two changes you could make. The academic year affords many peaks and lulls, and those lulls can be great times to make minor changes. Regular usability testing ensures that those changes are positive and uncovers new needs as tools and user expectations change.

  6. Be Iterative

    Iteration is the way to ensure that your long timescale stays manageable. Work never really stops, but that's ok. You need a job, right? Back to that idea of a short timeline: borrow from the Agile method and think in timescales of two weeks to one month. Have the end goal in mind, but know that getting there will happen in tiny pieces. This does require some faith that all the crucial pieces will happen, but as long as someone is keeping an eye on those (in our case, the vendor helped a lot with this), the pressure to be "finished" is off. If a test shows that something is broken that really needs to work, that can become a high priority, and other desired features can move to a future cycle. Iteration helps you stay on track and get small pieces done regularly.

Conclusion

I hope I've made the case for why you need a user focus in any project, particularly a large and complex one. Whether you're a reference librarian, project manager, web developer, or cataloger, you have a responsibility to ensure that the end result is usable, useful, and something people actually want to use. No matter how tight your timeline, stick to making sure the process is user centered, and you'll be amazed at how many impossible things you accomplish.

Near Us and Libraries, Robots Have Arrived

The movie Robot and Frank depicts a future in which the elderly have a robot as a companion and helper. The robot monitors various activities related to both mental and physical health and helps Frank with various house chores. But Frank also enjoys the robot's company and goes on to enlist it in his adventure of breaking into a local library to steal a book, and in a greater heist later on. People's lives in the movie are not particularly futuristic other than having a robot in them. And even a robot may not be so futuristic to us much longer. As a matter of fact, as of June 2015, there is a commercially available humanoid robot that comes close to performing some of the functions of the robot in Robot and Frank.

Pepper Robot, Image from Aldebaran, https://www.aldebaran.com/en/a-robots/who-is-pepper

A Japanese company, SoftBank Robotics Corp., released a humanoid robot named 'Pepper' to the market back in June. The Pepper robot is 4 feet tall, weighs 61 pounds, speaks 17 languages, and is equipped with an array of cameras, touch sensors, an accelerometer, and other sensors in its "endocrine-type multi-layer neural network," according to the CNN report. The Pepper robot was priced at ¥198,000 ($1,600), and Pepper owners are also responsible for an additional ¥24,600 ($200) monthly data and insurance fee. While the Pepper robot is not exactly cheap, it is surprisingly affordable for a robot. This means that the robot industry has now matured to the point where it can introduce a robot that the masses can afford.

Robots come in varying capabilities and forms. Some are as simple as programmable cube blocks that can be combined with one another into a working unit. Cubelets from Modular Robotics, for instance, are modular robots used for educational purposes. Each cube performs one specific function, such as battery, flash, temperature, brightness, or rotation, and one can combine these blocks to build a robot that performs a certain task. For example, you can build a lighthouse robot by combining a battery block, a light-sensor block, a rotator block, and a flash block.

 

A variety of cubelets available from the Modular Robotics website.

By contrast, there are advanced robots, such as the animal-like machines developed by the robotics company Boston Dynamics. Some robots look like a human, although much smaller than the Pepper robot. NAO, launched in 2006, is a 58 cm tall humanoid robot that moves, recognizes and hears people, and talks to them. NAO robots are interactive educational toys that help students learn programming in a fun and practical way.
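
For a sense of how approachable that can be, here is the kind of first program a library robotics class might write for NAO, assuming Aldebaran's NAOqi Python SDK (which runs under Python 2) and a placeholder IP address for the robot; both are assumptions rather than anything a specific library is running.

```python
# Minimal sketch: make a NAO robot speak using the NAOqi Python SDK.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # placeholder; use your robot's actual address
PORT = 9559                # NAOqi's default port

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
tts.say("Hello, welcome to the library!")
```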

Noticing their relevance to STEM education, some libraries are making robots available to library patrons. Westport Public Library provides robot training classes for its two NAO robots. Chicago Public Library lends a number of Finch robots that patrons can program to see how they work. In celebration of National Robotics Week back in April, San Diego Public Library hosted its first Robot Day, educating the public about how robots have impacted society. San Diego Public Library also started a weekly Robotics Club, inviting anyone to join in to help build, or learn how to build, a robot for the library. Haslet Public Library offers a Robotics Camp program for 6th to 8th graders who want to learn how to build with LEGO Mindstorms EV3 kits. School librarians are also starting robotics clubs. The Robotics Club at New Rochelle High School in New York is run by the school's librarian, Ryan Paulsen. Paulsen's robotics club started with help from faculty, parents, and other schools, along with a grant from NASA, and participated in a FIRST Robotics Competition. Organizations such as the Robotics Academy at Carnegie Mellon University provide educational outreach and resources.

NAO robot. Image from the Aldebaran website at https://www.aldebaran.com/en/humanoid-robot/nao-robot

There are also libraries that offer coding workshops, often with Arduino or Raspberry Pi, which are inexpensive computing hardware. Ames Free Library offers Raspberry Pi workshops. San Diego Public Library runs a monthly Arduino Enthusiast Meetup. Arduinos and Raspberry Pis can be used to build digital devices and objects that sense and interact with the physical world, which comes close to a simple robot. We may see more robotics programs at those libraries in the near future.
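
As a small illustration of that "sense and interact with the physical world" idea, the sketch below uses the gpiozero library on a Raspberry Pi to light an LED while a push button is held down. The pin numbers are arbitrary examples, and this is the sort of starter exercise a workshop might use rather than anything the libraries above necessarily teach.

```python
# Minimal Raspberry Pi sketch with gpiozero: light an LED while a button is pressed.
from gpiozero import LED, Button
from signal import pause

led = LED(17)       # LED wired to GPIO pin 17 (example wiring)
button = Button(2)  # push button wired to GPIO pin 2 (example wiring)

button.when_pressed = led.on    # turn the LED on when the button goes down
button.when_released = led.off  # turn it off again on release

pause()  # keep the script running so it can respond to button events
```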

Robots can fulfill many functions beyond being educational interactive toys, however. For example, robots can be very useful in healthcare. A robot can be a patient's emotional companion, just like Pepper. Or it can provide an easy way for a patient and his or her caregiver to communicate with physicians and others. A robot can be used at a hospital to move and deliver medication and other items and to function as a telemedicine assistant. It can also provide physical assistance to a patient or a nurse and even be used for children's therapy.

Humanoid robots like Pepper may also serve at reception desks at companies, and it is not difficult to imagine them as sales clerks at stores. Robots can be useful at schools and in other educational settings as well. At a workplace, teleworkers can use robots to achieve a more active presence. For example, universities and colleges can offer telepresence robots to online students who want to virtually experience and utilize campus facilities, or to faculty who wish to hold office hours or collaborate with colleagues while they are away from the office. As a matter of fact, the University of Texas at Arlington Libraries recently acquired several telepresence robots to lend to their faculty and students.

Not all robots have, or will have, a humanoid form like the Pepper robot. But as robots become more and more capable, we will surely see more of them in our daily lives.

References

Alpeyev, Pavel, and Takashi Amano. “Robots at Work: SoftBank Aims to Bring Pepper to Stores.” Bloomberg Business, June 30, 2015. http://www.bloomberg.com/news/articles/2015-06-30/robots-at-work-softbank-aims-to-bring-pepper-to-stores.

“Boston Dynamics.” Accessed September 8, 2015. http://www.bostondynamics.com/.

Boyer, Katie. “Robotics Clubs At the Library.” Public Libraries Online, June 16, 2014. http://publiclibrariesonline.org/2014/06/robotics-clubs-at-the-library/.

“Finch Robots Land at CPL Altgeld.” Chicago Public Library, May 12, 2014. https://www.chipublib.org/news/finch-robots-land-at-cpl/.

McNickle, Michelle. “10 Medical Robots That Could Change Healthcare.” InformationWeek, December 6, 2012. http://www.informationweek.com/mobile/10-medical-robots-that-could-change-healthcare/d/d-id/1107696.

Singh, Angad. “‘Pepper’ the Emotional Robot, Sells out within a Minute.” CNN.com, June 23, 2015. http://www.cnn.com/2015/06/22/tech/pepper-robot-sold-out/.

Tran, Uyen. “SDPL Labs: Arduino Aplenty.” The Library Incubator Project, April 17, 2015. http://www.libraryasincubatorproject.org/?p=16559.

“UT Arlington Library to Begin Offering Programming Robots for Checkout.” University of Texas Arlington, March 11, 2015. https://www.uta.edu/news/releases/2015/03/Library-robots-2015.php.

Waldman, Loretta. “Coming Soon to the Library: Humanoid Robots.” Wall Street Journal, September 29, 2014, sec. New York. http://www.wsj.com/articles/coming-soon-to-the-library-humanoid-robots-1412015687.