Getting Started with APIs

There has been a lot of discussion in the library community regarding the use of web service APIs over the past few years.  While APIs can be very powerful and provide awesome new ways to share, promote, manipulate and mashup your library’s data, getting started using APIs can be overwhelming.  This post is intended to provide a very basic overview of the technologies and terminology involved with web service APIs, and provides a brief example to get started using the Twitter API.

What is an API?

First, some definitions.  One of the steepest learning curves with APIs involves navigating the terminology, which unfortunately can be rather dense – but understanding a few key concepts makes a huge difference:

  • API stands for Application Programming Interface, which is a specification used by software components to communicate with each other.  If (when?) computers become self-aware, they could use APIs to retrieve information, tweet, post status updates, and essentially run most day-to-day functions for the machine uprising. There is no single API “standard,” though one of the most common methods of interacting with APIs involves RESTful requests.
  • REST / RESTful APIs  – Discussions regarding APIs often make references to “REST” or “RESTful” architecture.  REST stands for Representational State Transfer, and you probably utilize RESTful requests every day when browsing the web. Web browsing is enabled by HTTP (Hypertext Transfer Protocol) – as in http://example.org.  The exchange of information that occurs when you browse the web uses a set of HTTP methods to retrieve information, submit web forms, etc.  APIs that use these common HTTP methods (sometimes referred to as HTTP verbs) are considered to be RESTful.  RESTful APIs are simply APIs that leverage the existing architecture of the web to enable communication between machines via HTTP methods.

HTTP Methods used by RESTful APIs

Most web service APIs you will encounter rely, at their core, on the following HTTP methods for creating, retrieving, updating, and deleting information through that web service.1  Not all APIs allow every method (at least not without authentication), but the most common methods for interacting with APIs include the following (a brief code sketch follows the list):

    • GET – You can think of GET as a way to “read” or retrieve information via an API.  GET is a good starting point for interacting with an API you are unfamiliar with.  Many APIs utilize GET, and GET requests can often be used without complex authentication.  A common example of a GET request that you’ve probably used when browsing the web is the use of query strings in URLs (e.g., www.example.org/search?query=ebooks).
    • POST – POST can be used to “write” data over the web.  You have probably generated  POST requests through your browser when submitting data on a web form or making a comment on a forum.  In an API context, POST can be used to request that an API’s server accept some data contained in the POST request – Tweets, status updates, and other data that is added to a web service often utilize the POST method.
    • PUT – PUT is similar to POST, but can be used to send data to a web service that can assign that data a unique uniform resource identifier (URI) such as a URL.  Like POST, it can be used to create and update information, but PUT (in a sense) is a little more aggressive. PUT requests are designed to interact with a specific URI and can replace an existing resource at that URI or create one if there isn’t one.
    • DELETE – DELETE, well, deletes – it removes information at the URI specified by the request.  For example, consider an API web service that could interact with your catalog records by barcode.2 During a weeding project, an application could be built with DELETE that would delete the catalog records as you scanned barcodes.3
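
To make these methods a little more concrete, here is a minimal JavaScript sketch using the fetch API against a hypothetical catalog web service; the www.example.org URLs, field names, and record IDs are placeholders for illustration, not a real API:

// Hypothetical endpoints for illustration only - not a real catalog API.
const base = 'https://www.example.org/api/records';

async function demo() {
  // GET: retrieve ("read") the record stored at a URI
  const record = await fetch(`${base}/12345`).then(r => r.json());
  console.log(record.title);

  // POST: ask the server to accept new data (e.g., create a record)
  await fetch(base, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ title: 'New Title', barcode: '39000123456789' })
  });

  // PUT: create or replace the resource at a specific URI
  await fetch(`${base}/12345`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ title: 'Updated Title', barcode: '39000123456789' })
  });

  // DELETE: remove the resource at that URI
  await fetch(`${base}/12345`, { method: 'DELETE' });
}

demo();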

Understanding API Authentication Methods

To me, one of the trickiest parts of getting started with APIs is understanding authentication. When an API is made available, the publishers of that API are essentially creating a door to their application’s data.  This can be risky:  imagine opening that door up to bad people who might wish to maliciously manipulate or delete your data.  For that reason, APIs often require a key (or a set of keys) before they will let you access data.

One helpful way to contextualize how an API is secured is to consider access in terms of identification, authentication, and authorization.4  Some APIs only want to know where the request is coming from (identification), while others require you to have a valid account (authentication) to access data.  Beyond authentication, an API may also want to ensure your account has permission to perform certain functions (authorization).  For example, you may be an authenticated user of an API that allows you to make GET requests of data, but your account may still not be authorized to make POST, PUT, or DELETE requests.

Two common frameworks APIs use to handle authentication and authorization are OAuth and WSKey:

  • OAuth – OAuth is a widely used open standard for authorizing access to HTTP services like APIs.5  If you have ever sent a tweet from an interface that’s not Twitter (like sharing a photo directly from your mobile phone), you’ve utilized the OAuth framework.  Applications that already store authentication data in the form of user accounts (like Twitter and Google) can utilize their existing authentication structures to assign authorization for API access.  API Keys, Secrets, and Tokens can be assigned to authorized users, and those variables can be used by third-party applications without requiring the sharing of passwords with those third parties.
  • WSKey (Web Services Key) – This is OCLC’s equivalent, and it is conceptually very similar to OAuth.  If you have an OCLC account (either a worldcat.org or an oclc.org account) you can request key access.  Your authorization – in other words, what services and REST requests you are permitted to access – may be dependent upon your relationship with an OCLC member organization.

Keys, Secrets, Tokens?  HMAC?!

API authorization mechanisms often require multiple values in order to successfully interact with the API.  For example, with the Twitter API, you may be assigned an API Key and a corresponding Secret.  The topic of secret key authentication can be fairly complex,6 but fundamentally a Key and its corresponding Secret are used to authenticate requests in a secure encrypted fashion that would be difficult to guess or decrypt by malicious third-parties.  Multiple keys may be required to perform particular requests – for example, the Twitter API requires a key and secret to access the API itself, as well as a token and secret for OAuth authorization.

Probably the most important thing to remember about secrets is to keep them secret.  Do not share them or post them anywhere, and definitely do not store secret values in code uploaded to GitHub7 (.gitignore – a method to exclude files from a git repository – is your friend here).8  To that end, one strategy used by RESTful APIs to further secure secret key values is an HMAC header (hash-based message authentication code).  When requests are sent, HMAC uses your secret key to sign the request without actually passing the secret key value in the request itself.9
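
As a rough illustration (this is not Twitter’s exact OAuth 1.0a signing routine, and the request string and variable names are simplified placeholders), HMAC signing with a secret looks something like this sketch using Node’s built-in crypto module:

const crypto = require('crypto');

// Placeholder values; in real code the secret comes from configuration
// that is kept out of version control (see .gitignore above).
const apiSecret = 'YOUR_API_SECRET';
const request = 'GET&https://api.example.org/items&query=ebooks';

// Sign the request with the secret key; only the resulting signature
// travels with the request, never the secret itself.
const signature = crypto
  .createHmac('sha1', apiSecret)
  .update(request)
  .digest('base64');

console.log(signature);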

Case Study:  The Twitter API

It’s easier to understand how APIs work when you can see them in action.  To do these steps yourself, you’ll need a Twitter account.  I strongly recommend creating a Twitter account separate from your personal or organizational accounts for initial experimentation with the API.  This code example is a very simple walkthrough, and does not cover securing your application’s server (and thus securing the keys that may be stored on that server).  Any time you grant API access to a Twitter account, you may be exposing it to some level of vulnerability.  At the end of the walkthrough, I’ll list the steps you would need to take if your account does get compromised.

1.  Activate a developer account

Visit dev.twitter.com and click the sign in area in the upper right corner.  Sign in with your Twitter account. Once signed in, click on your account icon (again in the upper right corner of the page) and then select the My Applications option from the drop-down menu.

Screenshot of the Twitter Developer Network login screen

2.  Get authorization

In the My Applications area, click the Create New App button, and then fill out the required fields (Name, Description, and Website where the app code will be stored).  If you don’t have a web server, don’t worry, you can still get started testing out the API without actually writing any code.

3.  Get your keys

After you’ve created the application and are looking at its settings, click on the API Keys tab.  Here’s where you’ll get the values you need.  Your API Access Level is probably limited to read access only.  Click the “modify app permissions” link to set up read and write access, which will allow you to post through the API.  You will have to associate a mobile phone number with your Twitter account to get this level of authorization.

Screenshot of Twitter API options that allow for configuring API read and write access.

Scroll down and note that in addition to an API Key and Secret, you also have an access token associated with OAuth access.  This token and its secret are required to authorize account activity associated with your Twitter user account.

4.  Test OAuth Access / Make a GET call

From the application API Key page, click the Test OAuth button.  This is a good way to get a sense of the API calls.  Leave the key values as they are on the page, and scroll down to the Request Settings Area.  Let’s do a call to return the most recent tweet from our account.

With the GET request checked, enter the following values:

Request URI:

Request Query (obviously replace yourtwitterhandle with… your actual Twitter handle):

  • screen_name=yourtwitterhandle&count=1

For example, my GET request looks like this:

Screenshot of the GET request setup screen for OAuth testing.

Click “See OAuth signature for this request”.  On the next page, look for the cURL request.  You can copy and paste this into a terminal or console window to execute the GET request and see the response (there will be a lot more response text than what’s posted here):

* SSLv3, TLS alert, Client hello (1):
[{"created_at":"Sun Apr 20 19:37:53 +0000 2014","id":457966401483845632,
"id_str":"457966401483845632",
"text":"Just Added: The Fault in Our Stars by John Green; 
2nd Floor PZ7.G8233 Fau 2012","

As you can see, the above response to my cURL request includes the text of my account’s last tweet:

Screenshot of the tweet returned by the cURL request

What to do if your Twitter API Key or OAuth Security is Compromised

If your Twitter account suddenly starts tweeting out spammy “secrets to weight loss success” that you did not authorize (or other tweets that you didn’t write), your account has been compromised.  If you can still login with your username and password, it’s likely that your OAuth Keys have been compromised.  If you can’t log in, your account has probably been hacked.10  Your account can be compromised if you’ve authorized a third party app to tweet, but if your Twitter account has an active developer application on dev.twitter.com, it could be your own application’s key storage that’s been compromised.

Here are the immediate steps to take to stop the spam:

  1. Revoke access to third party apps under Settings –> Apps.  You may want to re-authorize them later – but you’ll probably want to reset the password for the third-party accounts that you had authorized.
  2. If you have generated API keys, log into dev.twitter.com and re-generate your API Keys and Secrets and your OAuth Keys and Secrets.  You’ll have to update any apps using the keys with the new key and secret information – but only if you have verified the server running the app hasn’t also been compromised.
  3. Reset your Twitter account password.11

5.  Taking it further:  Posting a new-titles Twitter feed

So now you know a lot about the Twitter API – what now?  One way to take this further might involve writing an application to post new books that are added to your library’s collection.  Maybe you want to highlight a particular subject or collection – you can use some text output from your library catalog to post the title, author, and call number of new books.

The first step to such an application could involve creating an app that can post to the Twitter API.  If you have access to a server that can run PHP, you can easily get started by downloading this incredibly helpful PHP wrapper (the TwitterAPIExchange class used in the code below).

Then in the same directory create two new files:

  • settings.php, which contains the following code (replace all the values in quotes with your actual Twitter API Key information):
<?php

$settings = array(
 'oauth_access_token' => "YOUR_ACCESS_TOKEN",
 'oauth_access_token_secret' => "YOUR_ACCESS_TOKEN_SECRET",
 'consumer_key' => "YOUR_API_KEY",
 'consumer_secret' => "YOUR_API_KEY_SECRET"
);

?>
  • and twitterpost.php, which has the following code, but swap out the values of ‘screen_name’ with your Twitter handle, and change the ‘status’ value if desired:
<?php

//call the PHP wrapper and your API values
require_once('TwitterAPIExchange.php');
include 'settings.php';

//define the request URL and REST request type
$url = "https://api.twitter.com/1.1/statuses/update.json";
$requestMethod = "POST";

//define your account and what you want to tweet
$postfields = array(
  'screen_name' => 'YOUR_TWITTER_HANDLE',
  'status' => 'This is my first API test post!'
);

//put it all together and build the request
$twitter = new TwitterAPIExchange($settings);
echo $twitter->buildOauth($url, $requestMethod)
->setPostfields($postfields)
->performRequest();

?>

Save the files and run the twitterpost.php page in your browser. Check the Twitter account referenced by the screen_name variable.  There should now be a new post with the contents of the ‘status’ value.

This is just a start – you would still need to get data out of your ILS and feed it to this application in some way – which brings me to one final point.

Is there an API for your ILS?  Should there be? (Answer:  Yes!)

Getting data out of traditional, legacy ILS systems can be a challenge.  Extending or adding on to traditional ILS software can be impossible (and in some cases may have been prohibited by license agreements).  One of the reasons for this might be that the architecture of such systems was designed for a world where the kind of data exchange facilitated by RESTful APIs didn’t yet exist.  However, there is definitely a major trend by ILS developers to move toward allowing access to library data within ILS systems via APIs.

It can be difficult to articulate exactly why this kind of access is necessary – especially when looking toward the future of rich functionality in emerging web-based library service platforms.  Why should we have to build custom applications using APIs – shouldn’t our ILS systems be built with all the functionality we need?

While libraries should certainly promote comprehensive and flexible architecture in the ILS solutions they purchase, there will almost certainly come a time when no matter how comprehensive your ILS is, you’re going to wonder, “wouldn’t it be nice if our system did X”?  Moreover, consider how your patrons might use your library’s APIs: for example, they might integrate your library’s web services into other apps and services they already use, or build their own applications on top of your library’s web services. If you have web service API access to your data – bibliographic, circulation, acquisition data, etc. – you have the opportunity to meet those needs and to innovate collaboratively.  Without access to your data, you’re limited to the development cycle of your ILS vendor, and it may be years before you see the functionality you really need to do something cool with your data.  (It may still be years before you can find the time to develop your own app with an API, but that’s an entirely different problem.)

Examples of Library Applications built using APIs and ILS API Resources

Further Reading

Michel, Jason P. Web Service APIs and Libraries. Chicago, IL:  ALA Editions, 2013. Print.

Richardson, Leonard, and Michael Amundsen. RESTful Web APIs. Sebastopol, Calif.: O’Reilly, 2013.

 

About our Guest Author:

Lauren Magnuson is Systems & Emerging Technologies Librarian at California State University, Northridge and a Systems Coordinator for the Private Academic Library Network of Indiana (PALNI).  She can be reached at lauren.magnuson@csun.edu or on Twitter @lpmagnuson.

 

Notes

  1. create, retrieve, update, and delete is sometimes referred to by acronym: CRUD
  2. For example, via the OCLC Collection Management API: http://www.oclc.org/developer/develop/web-services/wms-collection-management-api.en.html
  3. For more detail on these and other HTTP verbs, http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
  4. https://blog.apigee.com/detail/do_you_need_api_keys_api_identity_vs._authorization
  5. Google, for example: https://developers.google.com/accounts/docs/OAuth2
  6. To learn a lot more about this, check out this web series: http://www.youtube.com/playlist?list=PLB4D701646DAF0817
  7. http://www.securityweek.com/github-search-makes-easy-discovery-encryption-keys-passwords-source-code
  8. Learn more about .gitignore here:  https://help.github.com/articles/ignoring-files
  9. A nice overview of HMAC is here: http://www.wolfe.id.au/2012/10/20/what-is-hmac-and-why-is-it-useful
  10. Here’s what to do if your account is hacked and you can’t log in:  https://support.twitter.com/articles/185703-my-account-has-been-hacked
  11. More information, and further steps you can take are here:  https://support.twitter.com/articles/31796-my-account-has-been-compromised

My First Hackathon & WikipeDPLA

Almost two months ago, I attended my first hackathon during ALA’s Midwinter Meeting. Libhack was coordinated by the Library Code Year Interest Group. Much credit is due to coordinators Zach Coble, Emily Flynn, Jesse Saunders, and Chris Strauber. The University of Pennsylvania graciously hosted the event in their Van Pelt Library.

What’s a hackathon? It’s a short event, usually a day or two, wherein coders and other folks get together to produce software. Hackathons typically work on a particular problem, application, or API (a source of structured data). LibHack focused on APIs from two major library organizations: OCLC and the Digital Public Library of America (DPLA).

Impressions & Mixed Content

Since this was my first hackathon and the gritty details below may be less than relevant to all our readers, I will front-load my general impressions of Libhack rather than talk about the code I wrote. First of all, splitting the hackathon into two halves focused on different APIs and catering to different skill levels worked well. There were roughly equal numbers of participants in both the structured, beginner-oriented OCLC group and the more independent DPLA group.

Having representatives from both of the participating institutions was wonderful. While I didn’t take advantage of the attending DPLA staff as much as I should have, it was great to have a few people to answer questions. What’s more, I think DPLA benefited from hearing about developers’ experiences with their API. For instance, there are a few metadata fields in their API which might contain an array or a string depending upon the record. If an application assumes one or the other, chances are it breaks at some point and the programmer has to locate the error and write code that handles either data format.

Secondly, the DPLA API is currently available only over unencrypted HTTP. Thus, due to the mixed content policies of web browsers, it is difficult to call the HTTP API from HTTPS pages. For the many HTTP sites on the web this isn’t a concern, but I wanted to call the DPLA API from Wikipedia, which only serves content over HTTPS. To work around this limitation, users have to manually override mixed content blocking in their browser, a major drawback for my project. DPLA already had plans to roll out an HTTPS API, but I think hearing from developers may influence its priority.

Learn You Some Lessons

Personally, I walked away from Libhack with a few lessons. First of all, I had to throw away my initial code before creating something useful. While I had a general idea in mind—somehow connect DPLA content related to a given Wikipedia page—I wasn’t sure what type of project I should create. I started writing a command-line tool in Python, envisioning a command that could be passed a Wikipedia URL or article title and return a list of related items in the DPLA. But after struggling with a pretty unsatisfying project for a couple hours, including a detour into investigating the MediaWiki API, I threw everything aside and took a totally different approach by building a client-side script meant to run in a web browser. In the end, I’m a lot happier with the outcome after my initial failure. I love the command line, but the appeal of such a tool would be niche at best. What I wrote has a far broader appeal.1

Secondly, I worked closely with Wikipedian Jake Orlowitz.2 While he isn’t a coder, his intimate knowledge of Wikipedia was invaluable for our end product. Whenever I had a question about Wikipedia’s inner workings or needed someone to bounce ideas off of, he was there. While I blindly started writing some JavaScript without a firm idea of how we could embed it onto Wikipedia pages, it was Jake who pointed me towards User Scripts and created an excellent installation tour.3 In other groups, I heard people discussing metadata, subject terms, and copyright. I think that having people of varied expertise in a group is advantageous when compared with a group solely composed of coders. Many hackathons explicitly state that non-programmers are welcome and with good reason; experts can outline goals, consider end-user interactions, and interpret API results. These are all invaluable contributions which are also hard to do with one’s face buried in a code editor.

While I did enjoy my hackathon experience, I was expecting a bit more structure and larger project groups. I arrived late, which doubtless didn’t help, but the DPLA groups were very fragmented. Some projects were only individuals, while others (like ours) were pairs. I had envisioned groups of at least four, where perhaps one person would compose plans and documentation, another would design a user interface, and the remainder would write back-end code. I can’t say that I was at all disappointed, but I could have benefited from the perspectives of a larger group.

What is WikipeDPLA?

So what did we build at Libhack anyway? As previously stated, we made a Wikipedia user script. I’ve dubbed it WikipeDPLA, though you can find it as FindDPLA on Wikipedia. Once installed, the script will query DPLA’s API on each article you visit, inserting related items towards the top.

WikipeDPLA in action

How does it work?

Here’s a step-by-step walkthrough of how WikipeDPLA works:

When you visit a Wikipedia article, it collects a few pieces of information about the article by copying text from the page’s HTML: the article’s title, any “X redirects here” notices, and the article’s categories.

First, WikipeDPLA constructs a DPLA query using the article’s title. Specifically, it constructs a JSONP query. JSONP is a means of working around the web’s same-origin policy, which would otherwise prevent a script from reading data loaded from another site. It works by including a script tag with a specially constructed URL containing a reference to one of your JavaScript functions:

<script src="//example.com/jsonp-api?q=search+term&callback=parseResponse"></script>

In responding to this request, the API plays a little trick; it doesn’t just return raw data, since that would be invalid JavaScript and thus cause a parsing error in the browser. Instead, it wraps the data in the function we’ve provided it. In the example above, that’s parseResponse:

parseResponse({
    "results": [
        {"title": "Searcher Searcherson",
        "id": 123123,
        "genre": "Omphaloskepsis"},
        {"title": "Terminated Term",
        "id": 321321,
        "genre": "Literalism"}
    ]
});

This is valid JavaScript; parseResponse receives an object which contains an array of search result records, each with some minimal metadata. This pattern has the handy feature that, as soon as our query results are available, they’re immediately passed to our callback function.

WikipeDPLA’s equivalent of parseResponse looks to see if there are any results. If the article’s title doesn’t return any results, then it’ll try again with any alternate titles culled from the article’s redirection notices. If those queries are also fruitless, it starts to go through the article’s categories.

Once we’ve guaranteed that we have some results from DPLA, we parse the API’s metadata into a simpler subset. This subset consists of the item’s title, a link to its content, and an “isImage” Boolean value noting whether or not the item is an image. With this simpler set of data in hand, we loop through our results to build a string of HTML which is then inserted onto the page. Voilà! DPLA search results in Wikipedia.
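
A simplified sketch of that last step might look like the code below. To be clear, this is not the actual WikipeDPLA source: the DPLA field names (sourceResource.title, isShownAt), the image test, and the mw-content-text selector are assumptions used for illustration.

// Illustrative only; field names and selectors are assumptions,
// not the actual WikipeDPLA source.
function simplify(doc) {
  return {
    title: doc.sourceResource.title,
    link: doc.isShownAt,
    isImage: doc.sourceResource.type === 'image'
  };
}

function buildHtml(docs) {
  var html = '<ul class="dpla-results">';
  docs.map(simplify).forEach(function (item) {
    html += '<li>' +
      (item.isImage ? '<img src="image-icon.png" alt="image"> ' : '') +
      '<a href="' + item.link + '">' + item.title + '</a>' +
      '</li>';
  });
  return html + '</ul>';
}

// In the JSONP callback, insert the string near the top of the article:
// document.getElementById('mw-content-text').insertAdjacentHTML('afterbegin', buildHtml(response.docs));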

Honing

After putting the project together, I continued to refine it. I used the “isImage” Boolean to put a small image icon next to an item’s link. Then, after the hackathon, I noticed that my script was a nuisance if a user started reading a page anywhere other than at its start. For instance, if you start reading the Barack Obama article at the Presidency section, you will read for a moment and then suddenly be jarred as the DPLA results are inserted up top and push the rest of the article’s text down the page. In order to mitigate this behavior, we need to know if the top of the article is in view before inserting our results HTML. I used a jQuery visibility plug-in and an event listener on window scroll events to fix this.
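
Here is a rough sketch of that guard in plain JavaScript (instead of the jQuery plug-in I actually used); insertResults stands in for the code that adds the results HTML, and firstHeading is Wikipedia’s article title element:

// Only insert the DPLA results once the top of the article is in view,
// so the insertion doesn't push text around while someone is mid-read.
var inserted = false;

function topIsVisible() {
  var heading = document.getElementById('firstHeading'); // the article title element
  var rect = heading.getBoundingClientRect();
  return rect.bottom > 0 && rect.top < window.innerHeight;
}

function maybeInsert() {
  if (!inserted && topIsVisible()) {
    inserted = true;
    insertResults(); // stand-in for the function that builds and inserts the HTML
    window.removeEventListener('scroll', maybeInsert);
  }
}

window.addEventListener('scroll', maybeInsert);
maybeInsert(); // also check right away, in case the reader is already at the top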

Secondly, I was building a project with several targets: a user script for Wikipedia, a Grease/Tampermonkey user script4, and an (as yet inchoate) browser extension. To reuse the same basic JavaScript but in these different contexts, I chose to use the make command. Make is a common program used for projects which have multiple platform targets. It has an elegantly simple design: when you run make foo inside of a directory, make looks in a file named “makefile” for a line labelled “foo:” and then executes the shell command on the subsequent line. So if I have the following makefile:

hello:
    echo 'hello world!'

bye:
    echo 'goodbye!'

clean:
    rm *.log *.cache

Inside the same directory as this makefile, the commands make hello, make bye, and make clean respectively would print “hello world!” to my terminal, print “goodbye!”, and delete all files ending in extension “log” or “cache”. This contrived example doesn’t help much, but in my project I can run something like make userscript and the Grease/Tampermonkey script is automatically produced by prepending some header text to the main WikipeDPLA script. Similarly, make push produces all the various platform targets and then pushes the results up to the GitHub repo, saving a significant amount of typing on the command line.

These bits of trivia about interface design and tooling allude to a more important idea: it’s vital to choose projects that help you learn, particularly in a low-stakes environment like a hackathon. No one expects greatness from a product duct taped together in a few hours, so seize the opportunity to practice rather than aim for perfection. I didn’t have to write a makefile, but I chose to spend time familiarizing myself with a useful tool.

What’s Next?

While I am quite happy with my work at Libhack I do have plans for improvement. My main goal is to turn WikipeDPLA into a browser extension, for Chrome and perhaps Firefox. An extension offers a couple advantages: it can avoid the mixed-content issue with DPLA’s HTTP-only API5 and it is available even for users who aren’t logged in to Wikipedia. It would also be nice to expand my approach to encompassing other major digital library APIs, such as Europeana or Australia’s Trove.

And, of course, I want to attend more hackathons. Libhack was a very positive event for me, both in terms of learning and producing something useful, so I’m encouraged and hope other library conferences offer collaborative coding opportunities.

Other Projects

Readers should head over to LITA Blog where organizer Zach Coble has a report on libhack which details several other projects created at the Midwinter hackathon. Or you could just follow @HistoricalCats on Twitter.

Notes

  1. An aside related to learning to program: being a relatively new coder, I often think about advice I can give others looking to start coding. One common question is “what language should I learn first?” There’s one stock response, that it’s important not to worry too much about this choice because learning the fundamentals of one language will enable you to learn others quickly. But that dodges the question, because what people want to hear is a proper noun like “Ruby” or “Python” or “JavaScript.” And JavaScript, despite not being nearly as user friendly as those other two options, is a great starting place because it lets you work on the web with little effort. All of this is to say: if I didn’t know JavaScript fairly well, I would not have been able to make something so useful.
  2. Shameless plug: Jake works on the Wikipedia Library, an interesting project that aims to connect Wikipedian researchers with source material, from subscription databases and open access repositories alike.
  3. User Scripts are pieces of JavaScript that a user can choose to insert whenever they are signed into and browsing Wikipedia. They’re similar to Greasemonkey user scripts, except the scripts only apply to Wikipedia. These scripts can do anything from customize the site’s appearance to insert new content, which is exactly what we did.
  4. Greasemonkey is the Firefox add-on for installing scripts that run on specified sites or pages; Tampermonkey is an analogous extension for Chrome.
  5. How’s that for acronyms?

A Brief Look at Cryptography for Librarians

You may not think much about cryptography on a daily basis, but it underpins your daily work and personal existence. In this post I want to talk about a few realms of cryptography that affect the work of academic librarians, and talk about some interesting facets you may never have considered. I won’t discuss the math or computer science basis of cryptography, but look at it from a historical and philosophical point of view. If you are interested in the math and computer science, I have a few resources listed at the end in addition to a bibliography.

Note that while I will discuss some illegal activities in this post, neither I nor anyone connected with the ACRL TechConnect blog is suggesting that you actually do anything illegal. I think you’ll find the intellectual part of it stimulating enough.

What is cryptography?

Keeping information secret is as simple as hiding it from view in, say, an envelope, and trusting that only the person to whom it is addressed will read that information and then not tell anyone else. But we all know that this doesn’t actually work. A better system would only allow a person with secret credentials to open the envelope, and then for the information inside to be in a code that only she could know.

The idea of codes to keep important information secret goes back thousands of years, but for the purposes of computer science, most of the major advances have been made since the 1970s. In the 1960s with the advent of computing for business and military uses, it was necessary to come up with ways to encrypt data. In 1976, the concept of public-key cryptography was developed, but it wasn’t realized practically until 1978 with the paper by Rivest, Shamir, and Adleman–if you’ve ever wondered what RSA stood for, there’s the answer. There were some advancements to this system, which resulted in the digital signature algorithm as the standard used by the federal government.1 Public-key systems work basically by creating a private and a public key–the private one is known only to each individual user, and the public key is shared. Anything locked with the public key, however, can only be opened with the corresponding private key. See the resources below for more on the math that makes up these algorithms.

Another important piece of cryptography is that of cryptographic hash functions, which were first developed in the late 1980s. These produce a fixed-length digest of a block of data– for instance, passwords stored in databases should be hashed with one of these functions rather than stored in plain text. These functions ensure that even if someone unauthorized gets access to sensitive data, they cannot read it. They can also be used to verify the integrity of a piece of digital content, which is probably how most librarians think about these functions, particularly if you work with a digital repository of any kind.
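
In practice, a repository fixity check boils down to something like this short sketch (using Node’s built-in crypto module; the file name is a made-up placeholder):

const crypto = require('crypto');
const fs = require('fs');

// Compute a SHA-256 digest ("checksum") of a file; the file name is a placeholder.
const bytes = fs.readFileSync('master-copy.tiff');
const digest = crypto.createHash('sha256').update(bytes).digest('hex');

// If this digest matches the one recorded at ingest, the file has not changed.
console.log(digest);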

Why do you care?

You probably send emails, log into servers, and otherwise transmit all kinds of confidential information over a network (whether a local network or the internet). Encrypted access to these services and the data being transmitted is the only way that anybody can trust that any of the information is secret. Anyone who has had a credit card number stolen and had to deal with fraudulent purchases knows first-hand how upsetting it can be when these systems fail. Without cryptography, the modern economy could not work.

Of course, we all know a recent example of cryptography not working as intended. It’s no secret (see above where keeping something a secret requires that no one who knows the information tells anyone else) by now that the National Security Agency (NSA) has sophisticated ways of breaking codes or getting around cryptography through other methods.2 Continuing with our envelope analogy from above, the NSA coerced companies to allow them to view the content of messages before the envelopes were sealed. If the messages were encoded, they got the keys to decode the data, or broke the code using their vast resources. While these practices were supposedly limited to potential threats, there’s no denying that this makes it more difficult to trust any online communications.

Librarians certainly have a professional obligation to keep data about their patrons confidential, and so this is one area in which cryptography is on our side. But let’s now consider an example in which it is not so much.

Breaking DRM: e-books and DVDs

Librarians are exquisitely aware of the digital rights management realm of cryptography (for more on this from the ALA, see The ALA Copyright Office page on digital rights). These are algorithms that encode media in such a way that you are unable to copy or modify the material. Of course, like any code, once you break it, you can extract the material and do whatever you like with it. As I covered in a recent post, if you purchase a book from Amazon or Apple, you aren’t purchasing the content itself, but a license to use it in certain prescribed ways, so legally you have no recourse to break the DRM to get at the content. That said, you might have an argument under fair use, or some other legitimate reason to break the DRM. It’s quite simple to do once you have the tools to do so. For e-books in proprietary formats, you can download a plug-in for the Calibre program and follow step-by-step instructions on this site. This allows you to change proprietary formats into more open formats.

As above, you shouldn’t use software like that if you don’t have the rights to convert formats, and you certainly shouldn’t use it to pirate media. But just because it can be used for illegal purposes, does that make the software itself illegal? Breaking DVD DRM offers a fascinating example of this (for a lengthy list of CD and DVD copy protection schemes, see here and for a list of DRM breaking software see here). The case of CSS (Content Scramble System) descramblers illustrates some of the strange philosophical territory into which this can lead. The original code was developed in 1999, and distributed widely, which was initially ruled to be illegal. This was protested in a variety of ways; the Gallery of CSS Descramblers has a lot more on this.3 One of my favorite protest CSS descramblers is the “illegal” prime number, which is a prime number that contains the entire code for breaking the CSS DRM. The first illegal prime number was discovered in 2001 by Phil Carmody (see his description here).4 This number is, of course, only illegal inasmuch as the information it represents is illegal–in this case it was a secret code that helped break another secret code.

In 2004, after years of court hearings, the California Court of Appeal overturned one of the major injunctions against posting the code, based on the fact that source code is protected speech under the First Amendment, and that the CSS was no longer a trade secret. So you’re no longer likely to get in trouble for posting this code–but again, using it should only be done for reasons protected under fair use. (“DVDCCA v Bunner and DVDCCA v Pavlovich.” Electronic Frontier Foundation. Accessed September 23, 2013. https://www.eff.org/cases/dvdcca-v-bunner-and-dvdcca-v-pavlovich.) One of the major reasons you might legitimately need to break the DRM on a DVD is to play DVDs on computers running the Linux operating system, which still has no free legal software that will play DVDs (there is legal software with the appropriate license for $25, however). Given that DVDs are physical media and subject to the first sale doctrine, it is unfair that they are manufactured with limitations to how they may be played, and therefore this is a code that seems reasonable for the end consumer to break. That said, as more and more media is streamed or otherwise licensed, that argument no longer applies, and the situation becomes analogous to e-book DRM.

Learning More

The Gambling With Secrets video series explains the basic concepts of cryptography, including the mathematical proofs using colors and other visual concepts that are easy to grasp. This comes highly recommended from all the ACRL TechConnect writers.

Since it’s a fairly basic part of computer science, you will not be surprised to learn that there are a few large open courses available about cryptography. This Coursera class from Stanford is currently running, and this Udacity class from University of Virginia is a self-paced course. These don’t require a lot of computer science or math skills to get started, though of course you will need a great deal of math to really get anywhere with cryptography.

A surprising but fun way to learn a bit about cryptography is from the NSA’s Kids website–I discovered this years ago when I was looking for content for my X-Files fan website, and it is worth a look if for nothing else than to see how the NSA markets itself to children. Here you can play games to learn basics about codes and codebreaking.

  1. Menezes, A., P. van Oorschot, and S. Vanstone. Handbook of Applied Cryptography. CRC Press, 1996. http://cacr.uwaterloo.ca/hac/. 1-2.
  2. See the New York Times and The Guardian for complete details.
  3. Touretzky, D. S. (2000) Gallery of CSS Descramblers. Available: http://www.cs.cmu.edu/~dst/DeCSS/Gallery, (September 18, 2013).
  4. For more, see Caldwell, Chris. “The Prime Glossary: Illegal Prime.” Accessed September 17, 2013. http://primes.utm.edu/glossary/xpage/Illegal.html.

From Cool to Useful: Incorporating hobby projects into library work

Cool or Useful? A guide to incorporating hobby projects into library work

Sometimes I have trouble creating a clear line between geeky hobby projects I do on my own time and professional tasks for MPOW (my place of work). This time, the geeky-thing-I-think-is-cool is a LibraryBox. LibraryBox is a hardware hack created by Jason Griffey.  What I’m currently trying to work out is, is this project a viable solution to a practical work-place problem? Of course, I have to watch out for Maslow’s law of the instrument, which can be paraphrased: “To a person with a hammer, every problem looks like a nail.” These days I’m seeing a lot of LibraryBox-shaped nails. I’m eager to find potential applications for my new toy tool. My project in today’s post is to describe the LibraryBox project and describe a method of determining whether or not it has a work-related application.

What is a LibraryBox?

A LibraryBox is a very portable pocket-sized device that serves up digital content to wifi devices. It is designed to provide free ebooks to readers with wifi devices but without access to reliable Internet or power. The best introduction to LibraryBox may be found on the LibraryBox site. Jason Griffey has done an excellent job with the site’s design and has written comprehensive instructions for building and deploying LibraryBoxen. The site describes the project as: “an open source, portable digital file distribution tool based on inexpensive hardware that enables delivery of educational, healthcare, and other vital information to individuals off the grid.”

The LibraryBox project was designed to solve a very specific kind of problem. It is useful in scenarios involving all of the following conditions:

  • Either no access or sporadic access to Internet and electrical utilities
  • a need to distribute digital content
  • users that have wifi enabled devices

In order to meet these objectives, the LibraryBox

  • uses inexpensive parts and hardware.
  • runs off of batteries and is highly portable.
  • uses open source software. (The code is both kinds of free; both libre and gratis.)

My LibraryBox

Building the LibraryBox was fun and easy. I bought the necessary parts: a mobile router, a large usb flash drive, plus an optional battery. (I’m using a Sony Cycle Energy CP-EL I found on sale at the grocery store for $13). Then I went through the instructions. The process is easy and straightforward. A friend of mine completed them while his baby daughter was down for a nap. I took a little longer because I didn’t read the instructions through before starting and did some steps out of order. If you are more diligent about following directions than I am, Jason’s instructions will get you from start to finish easily and without a hitch. Once I had my LibraryBox up and running, I filled the flash drive with some free and creative commons licensed content. I tested it out and was happy to see that I could use it to download ebooks onto my phone, laptop, and tablet. Once I demonstrated that it worked, I began to look for practical applications where it could be more than just cool; I wanted my hobby project to be useful.  To keep myself honest and keep my project enthusiasm in check, I’m using a series of questions to help determine whether I’m being blinded by the new shiny thing or whether it is, in fact, an appropriate tool for the job at hand. These questions help with the tool/toy distinction, especially when I’m under the spell of the law of the instrument.

Questions:
  1. Does this tool or technology offer a solution to an existing problem?
  2. If the answer to #1 is yes, does it solve the problem better (more efficiently, cheaply, etc.) than alternate solutions?
  3. Does this tool or technology introduce unintended consequences or side-effects that are worse than the original problem?

Applying the Questions:

There are two ready applications for a LibraryBox at MPOW. Neither directly involves the library; both involve faculty projects in our Creative Media and Digital Culture (CMDC) program. Both are interesting projects and both project leads have indicated interest in using a LibraryBox to solve a problem. The first case involves using a LibraryBox to give visitors to a remote historical site the ability to download and install a mobile app. My colleague Brett Oppegaard is leading development of a mobile app to provide visitors to a historic site access to interpretive materials. The location is somewhat remote and mobile broadband coverage is spotty at best and varies depending on the cell provider. My thought was to provide visitors to the site a reliable method of installing and using the app. Applying the three questions from above to this project, I learned that the answers to the first two questions are an unqualified yes. It solves a real problem by allowing users to download a digital file without an active net connection. It does so better than alternate solutions, especially due to its ability to run off of battery power. (There are no utilities at the site.) However, the third question reveals some real difficulties. I was able to successfully download and install the app from its .apk file using the LibraryBox. However, the steps required to achieve this are too convoluted for non-technical end users to follow easily. In addition, the current version of the app requires an active Internet connection in order to successfully install, rendering the LibraryBox workaround moot. These issues may be able to be resolved with some hacking, but right now the LibraryBox isn’t a working solution to this project’s needs. We’ll keep it in mind as the project develops and try new approaches.

Fortunately, as I was demonstrating the LibraryBox to the CMDC faculty, another colleague asked me about using it to solve a problem he is facing.  John Barber has been working on preserving The Brautigan Library and re-opening it to submissions. The Brautigan Library is a collection of unpublished manuscripts organized in the spirit of  the fictional library described in Richard Brautigan’s novel The Abortion. The Brautigan Library manuscripts currently are housed at the Clark County Historical Museum and we tested the LibraryBox there as a source for providing mobile access to finding aids.  This worked, but there were speed and usability issues. As we tested, however, John developed a larger plan involving a dedicated tablet kiosk, a web-app template, and a local web server connected to a router in the building. While we did not choose to use LibraryBox to support this exhibit, it did spark useful conversation that is leading us in promising directions.

Next Steps:

After learning that the LibraryBox isn’t a turn-key solution for either project, I still have some productive work to do. The first step is to install a light-weight web server (lighttpd) on the hardware currently running LibraryBox. (Fortunately, someone has already done this and left directions.) It’s possible, but unlikely, that will meet our needs. After that we’re going to test our plans using more powerful hardware in a similar setup. I’ve acquired a Raspberry Pi to test as a web server for the project and may also try running a web server on a more powerful router than the TL-MR3020 LibraryBox is based on. (Some OpenWrt-capable routers have as much as 128mb of RAM, which may be enough.) There is also work to do on the Ft. Vancouver project. The next steps there involve working on-site with the design team to more clearly articulate the problem(s) we are trying to solve.

In both cases my hobbyist tinkering is leading to practical and productive work projects. In both cases the LibraryBox has served as an excellent kluge (jury-rigged temporary solution) and has helped us see a clearer path to a permanent solution. These solutions will probably not resemble my early amateur efforts, but by exercising a little discipline to make certain my toys-turned-tools are being employed productively, I’m confident that my hobby tinkering has a place in a professional workplace. At the very least, my leisure time spent experimenting is benefiting my professional work. I also think that the kind of questions used here have application when considering other library toys/fads/innovations.

 


Report from the Digital Public Library of America Appfest

Add one part stress test of the Digital Public Library of America’s API, one part conceptual exploration of how DPLA might work, and one part interdisciplinary collaboration contact high, and what do you get?  The DPLA Appfest on November 8 in Chattanooga, TN.  This day-and-a-half event brought together developers, designers, and librarians from across the US and Canada to build apps on the DPLA platform.  (For more on DPLA vision and scope, see previous TechConnect coverage.)

Our venue was the 4th floor of the Chattanooga Public Library.  This giant, bare-floored, nearly empty space was once an oubliette for discarded things — for thirty years, a storage space.  Now, mostly cleaned out, it’s a blank canvas.  New Assistant Director Nate Hill has been filling it with collaboration and invention: a startup pitch day, meeting space for the local Linux users group, and Appfest.

Chattanooga 4th Floor, waiting for Appfest

Thursday Evening

The first night of the event was reserved for getting-to-know-you events and laying the groundwork for the next day’s hacking.  With several dozen attendees from a variety of backgrounds, some of us were already good friends, some familiar with one another’s work online, and some total strangers; the relaxed, informal session built rapport for the upcoming teamwork.  Tables, sofas, snacks, and beer encouraged mingling.

We also started the intellectual juices flowing with project pitches, an overview of the DPLA API, and an intro to GitHub.  Participants had been encouraged in advance to brainstorm project topics and post them on the wiki.  People pitched their projects more formally in person, outlining both the general idea and the skills that would be helpful, so we could start envisioning where we’d fit in.  Jeffrey Licht from the DPLA Technical Development Team introduced us all to the DPLA API, to give us a sense of the metadata queries we’d be able to base projects on.  At Nate Hill’s request, I also did a brief intro to the concepts behind GitHub, one of the key tools we’d be using to work together (seen earlier on TechConnect).

Friday

Friday morning, we got to the library early, took advantage of its impressive breakfast spread and lots of coffee, and plunged immediately into hackery.  Huge pieces of butcher paper on the wall served as our whiteboards — one was the sign-up sheet where team members joined projects, and the rest were quickly covered with wireframes, database models, and scribbled questions on use cases. The mood in the room was energetic, intellectually engaged, and intense.

Brainstorming and planning

The energy extended outside of the room, too.  An IRC backchannel (#dpla-api on Freenode), shared Google docs, a sandbox server that John Blyberg set up (thanks!) with an impressive range of language and tool support, the #dpla Twitter hashtag, and GitHub collaboration allowed for virtual participation.

The intense, head-down hackery was briefly interrupted by barbecue, cupcakes, and beer (a keg at the library, by the way? Genius).  Truly, though, it’s all a blur until 4:30, when the teams demonstrated their apps.

Hard at work.

The Apps We Built

There were eleven products at day’s end:

While the DPLA Plus proposer referred to his algorithm-heavy idea during pitches as “super-unsexy”, the judges clearly disagreed, as this impressive bit of engineering took home the prize for best app.

Glorious victory! Yes, that IS a repurposed Netgear router.

The Culture We Built

Appfest was a very different vibe from DPLA’s hackathon in Cambridge last April (which I also attended).  It featured a dramatically larger space, a longer and more structured schedule, more participants, and a much wider range of skills.  While Cambridge had almost exclusively software developers, Appfest drew developers, designers, UX experts, metadata wonks, and librarians.  This meant it took longer for teams to coalesce, and we needed to be much more intentional about the process.  Instead of April’s half-hour for project pitches, we spread that process over the weeks leading up to Appfest (on the wiki), Thursday’s pitch session, and Friday morning’s signups.

With such different skills, we also were familiar with different tools and used different vocabulary; we needed to work hard at making sure we could all communicate and everyone had a useful role in the projects.  Again, the longer timeframe helped; an informal dinner at the library Thursday evening and assorted bar trips Thursday night gave us all a chance to get comfortable with one another socially.  (So, admittedly, did the keg.)  A mutual commitment to inclusiveness helped us all remember to communicate, to break down projects into steps that gave everyone something to do, and to appreciate one another’s contributions.  Finally, organizers circulated around the room and kept an eye on people’s mood, intervening when we needed help finding a team that needed our skills, or just a pep talk.

And with all that work put in to building culture?  The results were amazing.  The April results were generally developer-oriented; as you can see from the list above, the Appfest products ranged from back-end tools for developers to participatory, end-user-oriented web sites.  They were also, in most cases, functional or nearly so, and often gorgeous.

Takeaways

There are some implications here for both DPLA and libraries in general.  The range of apps, inspired in part by earlier work on DPLA use cases, helped to illustrate the potential impact that DPLA could have.  They also illustrated both the potential and the limitations of DPLA metadata.  The metadata is ingested from a variety of content hubs and overlaid with a common DPLA schema.  This allows for straightforward queries against the DPLA API — you can always count on those common schema elements.
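
For instance, a basic items search against the DPLA API looks roughly like the sketch below; the endpoint and parameters follow DPLA’s v2 API documentation as I recall it, and the api_key value is a placeholder, so check the current developer docs before relying on it:

// Roughly what a basic DPLA items search looks like. The endpoint and
// parameters are from the v2 API docs; the api_key value is a placeholder.
var url = 'http://api.dp.la/v2/items?q=chattanooga&api_key=YOUR_API_KEY';

fetch(url)
  .then(function (response) { return response.json(); })
  .then(function (data) {
    // The common schema elements can be counted on in every record
    data.docs.forEach(function (doc) {
      console.log(doc.sourceResource.title);
    });
  });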

However, as anyone who’s ever met a crosswalk in a library school metadata class knows, automated ingestion doesn’t always work.  The underlying metadata schemata don’t always map perfectly to DPLA’s, and therefore the contents of the fields aren’t wholly standardized and don’t always provide what developers might be looking for (e.g. thumbnail images to illustrate query results).  This, of course, is why DPLA has these hackathons — both to illustrate potential and to stress-test its back end, find out what works and what doesn’t and why.  Want to help?  Go for it.  There’s a host of ways to get involved at the DPLA web site.

And for other libraries?  I keep coming back to two things: Dave Riordan’s tweet, above, and the digital humanities community’s One Week | One Tool project.  This was, essentially, a week-long 2010 summer camp in which a small, diverse team of digital humanists built an electronic publishing plugin for WordPress, from scratch, after a public discussion of their community’s needs.  In other words: we can do this thing.  Working, useful tools can be built in a shockingly short time, if we have open conversations about what would be useful and assemble skilled, diverse teams.

Let’s do more of that.


Hacker Values ≈ Library Values*

* The ≈ symbol indicates that the two items are similar, but not equal, to each other.

Disambiguation

Hacker is a disputed term. The word hacker is so often mis-applied to describe law breaking, information theft, privacy violation, and other black-hat activities that the mistake has become permanently installed in our lexicon. I am not using hacker in this sense of the word. To be clear: when I use the word hacker and when I write about hacker values, I am not referring to computer criminals and their sketchy value systems. Instead, I am using hacker in its original meaning: a person who makes clever use of technology and information to solve practical problems. 

Introduction

With the current popularity of hackerspaces and makerspaces in libraries, library hack-a-thons, and hacking projects for librarians, it is clear that library culture is warming to the hacker ethic. This is a highly positive trend and one that I encourage more librarians to participate in. The reason I am so excited to see libraries encourage adoption of the hacker ethic is that hackers share several core values with libraries. Working together we can serve our communities more effectively. This may appear to be counter-intuitive, especially due to a very common public misconception that hacker is just another word for computer-criminal. In this post I want to correct this error, explain the values behind the hacker movement, and show how librarians and hackers share core values. It is my hope that this opens the door for more librarians to get started in productive and positive library hackery.

Hacker Values

First, a working definition: hackers are people who empower themselves with information in order to modify their environment and make the world a better place. That’s it. Hacking doesn’t require intruding into computer security settings. There’s no imperative that hackers work with code, computers, or technology, although many do. Besides the traditional computer software hacker, there are many kinds of crafters, tinkerers, and makers who share the core hacker values. These makers all share knowledge about their world and engage in hands-on modification of it in order to make it a better place.

For a richer and more detailed look into the hacker ethic than my simplified definition provides, I recommend three books. First, try Cory Doctorow’s young adult novel Little Brother.1 This novel highlights the hacker values of self-empowerment with information, hands-on hacking, and acting for the public good. Little Brother is not only an award-winning story, but it also comes with a bibliography that is one of the best introductions to hacking available. Next, check out Steven Levy’s classic book Hackers: Heroes of the Computer Revolution.2 Levy details the history of the hacker movement from its origins through the early 1980s and explains the values that drove it. Third, try Chris Anderson’s Makers: The New Industrial Revolution.3 Anderson tells the story of the contemporary maker movement and the way it combines the values of the traditional do-it-yourself (DIY) movement with those of the computer hacker community to spark a vibrant and powerful creative movement across the world.

In the preface to Hackers: Heroes of the Computer Revolution, Levy observed a common philosophy that the hackers shared:

It was a philosophy of sharing, openness, decentralization, and getting your hands on machines at any cost to improve the machines and improve the world.

The Wikipedia entry on the hacker programming subculture builds on Levy’s observations and revises the list of core hacker values as:

  • Sharing
  • Openness
  • Collaboration
  • Engaging in the Hands-on Imperative. 

These values are also restated and expanded on in another Wikipedia article on Hacker Ethics. Each of these articulations of hacker values differs subtly, yet all of them reinforce the central idea that there are core hacker values and that the conception of the hacker as computer criminal is misinformed and inaccurate. (While there clearly are computer criminals, the error lies in labeling these people as hackers. These criminals violate hacker values as much as they violate personal privacy and the law.)

Once we understand that hacking is rooted in the core values of sharing, openness, collaboration, and hands-on activity, we can begin to see that hackers and librarians share several core values and that there is a rich environment for developing synergies and collaborative projects between the two groups. If we can identify and isolate the core values that librarians share with hackers, we will be well on our way to identifying areas for productive collaboration and cross-pollination of ideas between our cultures.

Library Values

If we are going to compare hacker values with library values, an excellent starting point is the American Library Association’s Library Bill of Rights. I recently had the pleasure of attending a keynote presentation by Char Booth, who made this point most persuasively. She spoke eloquently and developed a nuanced argument about the narratives we use to describe our libraries. She encouraged us to look beyond the tired narratives of library-as-container-of-information or library-as-content-repository and instead create new narratives that describe the enduring concept of the library. This concept of the library captures the values and services libraries provide without being inextricably linked to the information containers and technologies that libraries have historically used.

[Image: Char Booth’s distillation of the 1948 Library Bill of Rights into five core values]

As she developed this argument, Char encouraged us to look to library history and extract the core values that will continue to apply as our collections and services adapt and change. As an example, she displayed the 1948 Library Bill of Rights and extracted out of each paragraph a core value. Her lesson: these are still our core values, even if the way we serve our patrons has radically changed.

Char distilled the Library Bill of Rights into five core values: access, freedom, advocacy, inquiry, and openness. If we compare these with the hacker values above (sharing, openness, collaboration, and the hands-on imperative), we’ll see that, at least in terms of access to information, openness, freedom, sharing, and collaboration, libraries and hackers are on the same page. There are many things that hackers and libraries can do together that further these shared values and goals.

It should be noted that hackers have a traditionally anti-authoritarian bent and that, unlike libraries, their value of open access to information often trumps their civic duty to respect license agreements and copyright law. Without trivializing this difference, there are many projects that libraries and hackers can do together that honor our shared values and do not violate the core principles of either partner. After all, libraries have a lot of experience doing business with partners who do not share or honor the core library values of freedom, openness, and access to information. If we can maintain productive relationships with parties that reject values close to the heart of libraries and librarians, it stands to reason that we can also pursue and maintain relationships with groups that respect these core values, even where we differ on others.

At the end of the day, library values and hacker values are more alike than different. Especially in the areas of library work that involve advocacy for freedom, openness, and access to information we have allies and potential partners who share core values with us.

Library Hackery

If my argument about library values and hacker values has been at all persuasive, it raises the question: what do hacker/library partnerships look like? Some of the answers have been hinted at above. They look like Jason Griffey’s LibraryBox project. This wonderful project involves hacking on multiple levels. On one level, it provides the information needed for libraries to modify (hack) a portable wifi router into a public distribution hub for public domain, open access, and Creative Commons-licensed books and media. LibraryBoxes can bring digital media to locations that are off the net. On another level, it is a hack of an existing hacker project, PirateBox. PirateBox is a private portable network designed to provide untraceable local file-sharing. Griffey hacked the hack in order to create a project more in line with library values and mission.

These partnerships can also look like the Washington, DC public library’s Accessibility Hack-a-Thon, an ongoing project that brings together civic, library, and hacker groups to collaborate on hacking projects that advance the public good in their city. Another great example of bringing hacker ethics into the library can be found in TechConnect’s own Bohyun Kim’s posts on AJAX and APIs. Using APIs to customize web services is a perfect example of a library hack: it leverages our understanding of technology and empowers us to customize and perfect our environment. With an injection of hacker values into library services, we no longer have to remain at the mercy of the default setting. We can empower ourselves to hack our way to better tools, a better library, and a better world.
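
To make that idea concrete, here is a minimal Python sketch of the pattern described above: pull data from a web service API and reshape it for your own pages. The endpoint, parameters, and field names are hypothetical placeholders rather than a real service, so read it as an illustration of the approach, not working integration code.

    import html
    import requests  # third-party HTTP library

    # Hypothetical endpoint for a library's "new titles" feed; illustration only.
    API_URL = "https://library.example.org/api/new-titles"

    def new_titles_html(limit=5):
        """Fetch recent records and return an HTML fragment for a library web page."""
        records = requests.get(API_URL, params={"limit": limit}).json()
        items = ["<li>{}</li>".format(html.escape(record["title"])) for record in records]
        return "<ul>\n{}\n</ul>".format("\n".join(items))

    print(new_titles_html())

Swap in a real API and real field names, and the same few lines can replace a vendor’s default display with something tailored to your community.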

An excellent example of hackery from outside the library community is Audrey Watters’ Hack Education and Hack [Higher] Education blogs. Just as computer hackers use their insider knowledge of computer systems to remake their environment, Audrey uses her insider knowledge of education systems to make positive changes to those systems.

  1. Doctorow, Cory. 2008. Little Brother. New York: Tom Doherty Associates. http://craphound.com/littlebrother/download/
  2. Levy, Steven. 2010. Hackers: Heroes of the Computer Revolution. Cambridge: O’Reilly Media. http://shop.oreilly.com/product/0636920010227.do
  3. Anderson, Chris. 2012. Makers: The New Industrial Revolution. New York: Crown Business. http://worldcat.org/oclc/812195098