Web Scraping: Creating APIs Where There Were None

Websites are human-readable. That’s great for us, we’re humans. It’s not so great for computer programs, which tend to be better at navigating structured data rather than visuals.

Web scraping is the practice of “scraping” information from a website’s HTML. At its core, web scraping lets programs visit and manipulate a website much like people do. The advantage to this is that, while programs aren’t great at navigating the web on their own, they’re really good at repeating things over and over. Once a web scraping script is set up, it can run an operation thousands of times over without breaking a sweat. Compare that to the time and tedium of clicking through a thousand websites to copy-paste the information you’re interested in and you can see the appeal of automation.

Why web scraping?

Why would anybody use web scraping? There are a few good reasons which are, unfortunately, all too common in libraries.

You need an API where there is none.

Many of the web services we subscribe to don’t expose their inner workings via an API. It’s worth taking a moment to explain the term API, which is used frequently but rarely given a definition beyond the uninformative “Application Programming Interface”.

Let’s consider a common type of API, a search API. When you visit Worldcat and search, the site checks an enormous database of millions of metadata records and returns a nice, visually formatted list of ones relevant to your query. Again, this is great for humans. We can read through the results and pick out the ones we’re interested in. But what happens when we want to repurpose this data elsewhere? What if we want to build a bento search box, displaying results from our databases and Worldcat alongside each other?1 The answer is that we can’t easily accomplish this without an API.

For example, the human-readable results of a search engine may look like this:

1. Instant PHP Web Scraping

by Jacob Ward

Publisher: Packt Publishing 2013

2. Scraping by: wage labor, slavery, and survival in early Baltimore

by Seth Rockman

Publisher: Johns Hopkins University Press 2009

That’s fine for human eyes, but for our search application it’s a pain in the butt. Even if we could embed a result like this using an iframe, the styling might not match what we want and the metadata fields might not display in a manner consistent with our other records (e.g. why is the publication year included with publisher?). What an API returns, on the other hand, may look like this:

[
  {
    "title": "Instant PHP Web Scraping",
    "author": "Jacob Ward",
    "publisher": "Packt Publishing",
    "publication_date": "2013"
  },
  {
    "title": "Scraping by: wage labor, slavery, and survival in early Baltimore",
    "author": "Seth Rockman",
    "publisher": "Johns Hopkins University Press",
    "publication_date": "2009"
  }
]

Unless you really love curly braces and quotation marks, that looks awful. But it’s very easy to manipulate in many programming languages. Here’s an incomplete example in Python:

import json

# "data" is the JSON string of search results from above
results = json.loads( data )
for result in results:
    print( result['title'] + ' - ' + result['author'] )

Here “data” is our search results from above, which we parse into a variable with a single function call. The script then loops over each search result and prints out its title and author in a format like “Instant PHP Web Scraping – Jacob Ward”.

An API is hard to use or doesn’t have the data you need.

Sometimes services do expose their data via an API, but the API has limitations that the human interface of the website doesn’t. Perhaps it doesn’t expose all the metadata which is visible in search results. Fellow Tech Connect author Margaret Heller mentioned that Ulrich’s API doesn’t include subject information, though it’s present in the search results presented to human users.

Some APIs can also be more difficult to use than web scraping. The ILS at my place of work is like this: you have to pay extra to get the API activated, and it requires configuration on a shared server I don’t have access to. The API imposes strict authentication requirements even for read-only calls (e.g. I’m just accessing publicly-viewable data, not making account changes). The boilerplate code the vendor provides doesn’t work, or rather only works for trivial examples. All these hurdles combine to make scraping the catalog appealing.

As a side effect, the way you repurpose a site’s data might inspire an official API. Are you missing a feature so badly that you need to hack around it? A nice proof-of-concept built with web scraping might demonstrate that there’s a use case for a particular API feature.

How?

More or less all web scraping works the same way:

  • Use a scripting language to get the HTML of a particular page
  • Find the interesting pieces of a page using CSS, XPath, or DOM traversal—any means of identifying specific HTML elements
  • Manipulate those pieces, extracting the data you need
  • Pipe the data somewhere else, e.g. into another web page, spreadsheet, or script

Let’s go through an example using the Directory of Open Access Journals. Now, the DOAJ has an API of sorts; it supports retrieving metadata via the OAI-PMH verbs. This means a request for a URL like http://www.doaj.org/oai?verb=GetRecord&identifier=18343147&metadataPrefix=oai_dc will return XML with information about one of the DOAJ journals. But OAI-PMH doesn’t support any search APIs; we can use standard identifiers and other means of looking up specific articles or publications, but we can’t do a traditional keyword search.
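To make this concrete, here’s a minimal sketch of what talking to that OAI-PMH endpoint looks like in PHP, using only built-in functions. The Dublin Core namespace URI is standard; everything else comes from the GetRecord URL above.

<?php
// fetch one DOAJ record via OAI-PMH and print its Dublin Core title(s)
$url = 'http://www.doaj.org/oai?verb=GetRecord&identifier=18343147&metadataPrefix=oai_dc';
$xml = simplexml_load_file( $url );

if ( $xml !== false ) {
  // register the Dublin Core namespace so we can query it with XPath
  $xml->registerXPathNamespace( 'dc', 'http://purl.org/dc/elements/1.1/' );
  foreach ( $xml->xpath( '//dc:title' ) as $title ) {
    echo $title . PHP_EOL;
  }
}
?>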

Libraries, of the code persuasion

Before we get too far, let’s lean on those who came before us. Scraping a website is both a common task and a complex one. Remember last month, when I said that we don’t need to reinvent the wheel in our programming because reusable modules exist for most common tasks? Please let’s not write our own web scraping library from scratch.

Code libraries, which go by different names depending on the language (most amusingly, they’re called “eggs” in Python and “gems” in Ruby), are pre-written chunks of code which help you complete common tasks. Any task which several people have had to do before probably has a library devoted to it. Google searches for “best [insert task] module for [insert language]” typically turn up useful guidance on where to start.

While each language has its own means of incorporating others’ code into your own, they all boil down to two steps: 1) download the external library somewhere onto your hard drive or server, often using a command-line tool, and 2) import the code into your script. The external library should have some documentation on how to use its special features once you’ve imported it.

What does this look like in PHP, the language our example will be in? First, we visit the Simple HTML DOM website on Sourceforge to download a single PHP file. Then, we place that file in the same directory that our scraping script will live. In our scraping script, we write a single line up at the top:

<?php
require_once( 'simple_html_dom.php' );
?>

Now it’s as if the whole contents of the simple_html_dom.php file were in our script. We can use functions and classes which were defined in the other file, such as the file_get_html function which is not otherwise available. PHP actually has a few functions which are used to import code in different ways; the documentation page for the include function describes the basic mechanics.
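For the curious, the differences between those import functions mostly come down to error handling and repeat imports. Here’s a quick sketch; “helpers.php” is just a placeholder file name.

<?php
include 'helpers.php';      // warns if the file is missing, then keeps running
require 'helpers.php';      // stops the script with a fatal error if the file is missing
require_once 'helpers.php'; // like require, but won't re-load a file that's already been loaded
?>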

Web scraping a DOAJ search

While the DOAJ doesn’t have a search API, it does have a search bar which we can manipulate in our scraping. Let’s run a test search, view the HTML source of the result, and identify the elements we’re interested in. First, we visit doaj.org and type in a search. Note the URL:

doaj.org/doaj?func=search&template=&uiLanguage=en&query=librarianship

The URL’s query string is a series of key-value pairs (“func=search”, “uiLanguage=en”, and so on). Here our search term was “librarianship”, which is the value associated with the appropriately-named “query” key. If we change the word “librarianship” to a different search term and visit the new URL, we predictably see results for the new term. With easily hackable URLs like this, it’s easy for us to write a web scraping script. Here’s the first half of our example in PHP:

<?php
// see http://simplehtmldom.sourceforge.net/manual_api.htm for documentation
require_once( 'simple_html_dom.php' );

$base = 'http://www.doaj.org/doaj?func=search&template=&uiLanguage=en&query=';
$query = urlencode( 'librarianship' );

$html = file_get_html( $base . $query );
// to be continued...
?>

So far, everything is straightforward. We include the web scraping library we’re using, then use what we’ve figured out about the DOAJ URL structure: it has a base which won’t change and a query which we want to change according to our interests. You could have the query come from command-line arguments or web form data like the $_GET array in PHP, but let’s keep it as a simple string for now.

We urlencode the string because we don’t want spaces or other illegal characters sneaking their way in there; while the script still works with $query = 'new librarianship' for example, using unencoded text in URLs is a bad habit to get into. Other functions, such as file_get_contents, will produce errors if passed a URL with spaces in it. On the other hand, urlencode( 'new librarianship' ) returns the appropriately encoded string “new+librarianship”. If you do take user input, remember to sanitize it before using it elsewhere.
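If we did want the query to come from the command line instead of a hard-coded string, a small variation like the following would do it (the fallback term and the script name are just examples):

<?php
// run as: php doaj-search.php "new librarianship"
// falls back to "librarianship" if no search term is provided
$term = isset( $argv[1] ) ? $argv[1] : 'librarianship';
$query = urlencode( $term );
?>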

For the second part, we need to investigate the HTML source of DOAJ’s search results page. Here’s a screenshot and a simplified example of what it looks like:

[Screenshot: a couple of search results from DOAJ for the term “librarianship”]

<div id="result">
  <div class="record" id="record1">
    <div class="imageDiv">
      <img src="/doajImages/journal.gif"><br><span><small>Journal</small></span>
    </div><!-- END imageDiv -->
    <div class="data">
      <a href="/doaj?func=further&amp;passMe=http://www.collaborativelibrarianship.org">
        <b>Collaborative Librarianship</b>
      </a>
      <strong>ISSN/EISSN</strong>: 19437528
      <br><strong>Publisher</strong>: Regis University
      <br><strong>Subject</strong>:
      <a href="/doaj?func=subject&amp;cpId=129&amp;uiLanguage=en">Library and Information Science</a>
      <br><b>Country</b>: United States
      <b>Language</b>: English<br>
      <b>Start year</b> 2009<br>
      <b>Publication fee</b>:
    </div> <!-- END data -->
    <!-- ...more markup -->
  </div> <!-- END record -->
  <div class="recordColored" id="record2">
    <div class="imageDiv">
      <img src="/doajImages/article.png"><br><span><small>Article</small></span>
    </div><!-- END imageDiv -->
    <div class="data">
       <b>Mentoring for Emerging Careers in eScience Librarianship: An iSchool – Academic Library Partnership </b>
      <div style="color: #585858">
        <!-- author (s) -->
         <strong>Authors</strong>:
          <a href="/doaj?func=search&amp;query=au:&quot;Gail Steinhart&quot;">Gail Steinhart</a>
          ---
          <a href="/doaj?func=search&amp;query=au:&quot;Jian Qin&quot;">Jian Qin</a><br>
        <strong>Journal</strong>: <a href="/doaj?func=issues&amp;jId=88616">Journal of eScience Librarianship</a>
        <strong>ISSN/EISSN</strong>: 21613974
        <strong>Year</strong>: 2012
        <strong>Volume</strong>: 1
        <strong>Issue</strong>: 3
        <strong>Pages</strong>: 120-133
        <br><strong>Publisher</strong>: University of Massachusetts Medical School
      </div><!-- End color #585858 -->
    </div> <!-- END data -->
    <!-- ...more markup -->
   </div> <!-- END record -->
   <!-- more records -->
</div> <!-- END results list -->

Even with much of the markup removed, there’s a lot going on here. We need to zero in on what’s interesting and find patterns in the markup that help us retrieve it. While it may not be obvious from the example above, the title of each search result is contained in a <b> tag towards the beginning of each record (the bolded “Collaborative Librarianship” and “Mentoring for Emerging Careers…” lines above).

Here’s a sketch of the element hierarchy leading to the title: a <div> with id=”result” > a <div> with a class of either “record” or “recordColored” > a <div> with a class of “data” > possibly an <a> tag (present in the first example, absent in the second) > the <b> tag containing the title. Noting the conditional parts of this hierarchy is important; if we didn’t note that sometimes an <a> tag is present and that the class can be either “record” or “recordColored”, we wouldn’t be getting all the items we want.

Let’s try to return the titles of all search results on the first page. We can use Simple HTML DOM’s find method to extract the content of specific elements using CSS selectors. Now that we know how the results are structured, we can write a more complete example:

<?php
require_once( 'simple_html_dom.php' );

$base = 'http://www.doaj.org/doaj?func=search&template=&uiLanguage=en&query=';
$query = urlencode( 'librarianship' );

$html = file_get_html( $base . $query );

// using our knowledge of the DOAJ results page
$records = $html->find( '.record .data, .recordColored .data' );

foreach( $records as $record ) {
  echo $record->getElementsByTagName( 'b', 0 )->plaintext . PHP_EOL;
}
?>

The beginning remains the same, but this time we actually do something with the HTML. We use find to pull the “data” <div> inside each record. Then we echo the text of the first <b> tag inside it. The getElementsByTagName method normally returns an array, but if you pass a second integer parameter it returns the array element at that index (0 being the first element, because computer scientists count from zero). The ->plaintext property contains just the text found inside the element; if we echoed the element itself, we would see opening and closing <b> tags wrapped around the title. Finally, we append an “end-of-line” (EOL) character to make the output easier to read.

To see our results, we can run our script on the command line. For Linux or Mac users, that likely means merely opening a terminal (in Applications/Utilities on a Mac), since those systems come with PHP pre-installed. On Windows, you may need WAMP or XAMPP to run PHP scripts: XAMPP gives you a “shell” button that opens a terminal, while with WAMP you can add PHP to your PATH environment variable.

Once you have a terminal open, the php command will execute whatever PHP script you pass it as a parameter. If we run php name-of-our-script.php in the same directory as our script, we see ten search result titles printed to the terminal:

> php doaj-search.php
Collaborative Librarianship
Mentoring for Emerging Careers in eScience Librarianship: An iSchool – Academic Library Partnership
Education for Librarianship in Turkey Education for Librarianship in Turkey
Turkish Librarianship: A Selected Bibliography Turkish Librarianship: A Selected Bibliography
Journal of eScience Librarianship
Editorial: Our Philosophies of Librarianship
Embedded Academic Librarianship: A Review of the Literature
Model Curriculum for 'Oriental Librarianship' in India
A General Outlook on Turkish Librarianship and Libraries
The understanding of subject headings among students of librarianship

This is a simple, not-too-useful example, but it could be expanded in many ways. Try copying the script above and attempting some of the following:

  • Make the script return more than the ten items on the first page of results
  • Use some of DOAJ’s advanced search functions, for instance a date limiter
  • Only return journals or articles, not both
  • Return more than just the title of results, for instance the author(s), URLs, or publication date

Accomplishing these tasks involves learning more about DOAJ’s URL and markup structure, but also learning more about the scraping library you’re using.
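As a starting point, here’s a sketch of the third idea above (returning only journals, not articles). It leans entirely on the markup sample we already examined: the record type appears in a <small> tag inside each record’s “imageDiv” and the title in the first <b> inside “data”. If DOAJ’s live markup differs from that sample, the selectors would need adjusting.

<?php
require_once( 'simple_html_dom.php' );

$base = 'http://www.doaj.org/doaj?func=search&template=&uiLanguage=en&query=';
$html = file_get_html( $base . urlencode( 'librarianship' ) );

foreach ( $html->find( '.record, .recordColored' ) as $record ) {
  $type  = trim( $record->find( '.imageDiv small', 0 )->plaintext ); // "Journal" or "Article"
  $title = $record->find( '.data b', 0 )->plaintext;
  if ( $type === 'Journal' ) { // swap in 'Article' to keep articles instead
    echo $title . PHP_EOL;
  }
}
?>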

Common Problems

There are a couple possible hangups when web scraping. First of all, many websites employ user-agent sniffing to serve different versions of themselves to different devices. A user agent is a hideous string of text which web browsers and other HTTP clients use to identify themselves.2 If a site misinterprets our script’s user agent, we may end up on a mobile or other version of a site instead of the desktop one we were expecting. Worse yet, some sites try to prevent scraping by blacklisting certain user agents.

Luckily, most web scraping libraries have tools built in to work around this problem. A nice example is Ruby’s Mechanize, which has an agent.user_agent_alias property which can be set to a number of popular web browsers. When using an alias, our script essentially tells the responding web server that it’s a common desktop browser and thus is more likely to get a standard response.
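PHP doesn’t give us a one-line alias like Mechanize does, but we can accomplish much the same thing by sending a browser-like User-Agent header ourselves. Here’s a sketch using PHP’s stream contexts along with Simple HTML DOM’s str_get_html; the User-Agent string below is just an example desktop Firefox identifier.

<?php
require_once( 'simple_html_dom.php' );

// pretend to be a desktop browser by setting the User-Agent header on the request
$context = stream_context_create( array(
  'http' => array(
    'header' => "User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:24.0) Gecko/20100101 Firefox/24.0\r\n"
  )
) );

$raw  = file_get_contents( 'http://www.doaj.org/', false, $context );
$html = str_get_html( $raw );
?>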

It’s also routine that we’ll want to scrape something behind authentication. While IP authentication can be circumvented by running scripts from an on-campus connection, other sites may require login credentials. Again, most web scraping libraries have built-in tools for handling authentication: we can find which form controls on the page need to be filled in, insert our username and password, and then submit the form programmatically. Storing a login in a plain-text script is never a good idea, though, so be careful.
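To give a flavor of what a programmatic login looks like, here’s a purely hypothetical sketch that POSTs a username and password to a made-up login URL with made-up field names. A real login usually also involves cookies to maintain the session, which libraries like Mechanize handle for you; this bare-bones version does not.

<?php
// hypothetical login form with "username" and "password" fields
$postdata = http_build_query( array(
  'username' => 'our-account',
  'password' => 'do-not-hard-code-this'
) );

$context = stream_context_create( array(
  'http' => array(
    'method'  => 'POST',
    'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
    'content' => $postdata
  )
) );

// note: this ignores the session cookie the site would send back
$response = file_get_contents( 'http://example.com/login', false, $context );
?>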

Considerations

Not all web scraping is legitimate. Taking data which is copyrighted and merely re-displaying it on our site without proper attribution is not only illegal, it’s just not being a good citizen of the web. The Wikipedia article on web scraping has a lengthy section on legal issues with a few historical cases from various countries.

It’s worth noting that web scraping can be very brittle, meaning it breaks often and easily. Scraping typically relies on other people’s markup to remain consistent. If just a little piece of HTML changes, our entire script might be thrown off, looking for elements that no longer exist.

One way to counteract this is to write selectors which are as broad as possible. For instance, let’s return to the DOAJ search results markup. Why did we use such a concise CSS selector to find the title when we could have been much more specific? Here’s a more explicit way of getting the same data:

$html->find( 'div#result > div.record > div.data, div#result > div.recordColored > div.data' );

What’s wrong with these selectors? We’re relying on so much more to stay the same. We need: the result wrapper to be a <div>, the result wrapper to have an id of “result”, the record to be a <div>, and the data inside the record to be a <div>. Our use of the child selector “>” means we need the element hierarchy to stay precisely the same. If any of these properties of the DOAJ markup changed, our selector wouldn’t find anything and our script would need to be updated. Meanwhile, our much more generic line still grabs the right information because it doesn’t depend on particular tags or other aspects of the markup remaining constant:

$html->find( '.record .data, .recordColored .data' );

We’re still relying on a few things—we have to, there’s no getting around that in web scraping—but a lot could change and we’d be set. If the DOAJ upgraded to HTML5 tags, swapping out <div> for <article> or <section>, we would be OK. If the wrapping <div> was removed, or had its id change, we’d be OK. If a new wrapper was inserted in between the “data” and “record” <div>, we’d be OK. Our approach is more resilient.

If you did try running our PHP script, you probably noticed it was rather slow. It’s not like typing a query into Google and seeing results immediately. We have to request a page from an external site, which then queries its backend database, processes the results, and displays HTML which we ultimately don’t use, at least not as intended. This highlights that web scraping isn’t a great option for user-facing searches; it can take too long to return results. One option is to cache searches, for instance storing results of previous scrapings in a database and then checking to see if the database has something relevant before resorting to pulling content off an external site.
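Here’s a rough sketch of that caching idea, saving the raw HTML of each search to a local file and reusing it if the file is less than an hour old. The file naming scheme and the one-hour window are arbitrary choices, not anything DOAJ requires.

<?php
require_once( 'simple_html_dom.php' );

$base  = 'http://www.doaj.org/doaj?func=search&template=&uiLanguage=en&query=';
$query = urlencode( 'librarianship' );
$cache = 'cache-' . $query . '.html';

if ( file_exists( $cache ) && time() - filemtime( $cache ) < 3600 ) {
  // recent enough; reuse the saved HTML rather than hitting DOAJ again
  $raw = file_get_contents( $cache );
} else {
  $raw = file_get_contents( $base . $query );
  file_put_contents( $cache, $raw );
}

$html = str_get_html( $raw );
// ...scrape $html as before...
?>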

It’s also worth noting that web scraping projects should try to be reasonable about the number of times they request an external resource. Every time our script pulls in a site’s HTML, it’s another request that site’s server has to process. A site may not have an API because it cannot handle the amount of traffic one would attract. If our web scraping project is going to be sending thousands of requests per hour, we should consider how reasonable that is. A simple email to the third party explaining what we’re doing and the amount of traffic it may generate is a nice courtesy.
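If a script does need to loop over many pages, a simple courtesy is to pause between requests. Something like this, with a hypothetical list of result-page URLs, keeps the load on the remote server modest:

<?php
require_once( 'simple_html_dom.php' );

$urls = array( /* ...a list of result page URLs... */ );

foreach ( $urls as $url ) {
  $html = file_get_html( $url );
  // ...scrape $html here...
  sleep( 2 ); // wait two seconds before the next request
}
?>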

Overall, web scraping is handy in certain situations (see below) or for scripts which are run infrequently or only once. For instance, if we’re doing an analysis of faculty citations at our institution, we might not have access to a raw list of citations. But faculty may have university web pages where they list all their publications in a consistent format. We could write a script which only needs to run once, culling a large list of citations for analysis. Once we’ve scraped that information, we could use OpenRefine or other power tools to extract particular journal titles or whatever else we’re interested in.
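As an entirely hypothetical sketch of that one-off scenario, suppose a faculty member lists each citation in an <li> inside a list with an id of “publications”; we could dump them to a CSV for cleanup in OpenRefine. The URL, the markup assumptions, and the file names are all invented.

<?php
require_once( 'simple_html_dom.php' );

$html = file_get_html( 'http://example.edu/~professor/publications.html' );

$fp = fopen( 'citations.csv', 'w' );
foreach ( $html->find( 'ul#publications li' ) as $item ) {
  // one citation string per row, ready for OpenRefine or a spreadsheet
  fputcsv( $fp, array( trim( $item->plaintext ) ) );
}
fclose( $fp );
?>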

How is web scraping used in libraries?

I asked Twitter what other libraries are using web scraping for and got a few replies:

@phette23 Pulling working papers off a departmental website for the inst repo. Had to web scrape for metadata.
— Ondatra libskoolicus (@LibSkrat) September 25, 2013

Matthew Reidsma of Grand Valley State University also had several examples:

To fuel a live laptop/iPad availability site by scraping holdings information from the catalog. See the availability site as well as the availability charts for the last few days and the underlying code which does the scraping. This uses the same Simple HTML DOM library as our example above.

Scraping also powers a staff API, created by pulling the GVSU Library’s Staff Directory and reformatting it; see the code and the result. The result may not look very readable—it’s JSON, a common data format that’s particularly easy to reuse in languages such as JavaScript—but remember that APIs are for machine-readable data which can be easily reused by programs, not people.

Jacqueline Hettel of Stanford University has a great blog post that describes using a Google Chrome extension and XPath queries to scrape acknowledgments from humanities monographs in Google Books; no coding required! She and Chris Bourg are presenting their results at the Digital Library Federation in November.

Finally, I use web scraping to pull hours information from our main library site into our mobile version. I got tired of updating the hours in two places every time they changed, so now I pull them in using a PHP script. It’s worth noting that this dual-maintenance annoyance is one major reason websites can and should be done in responsive designs.

Most of these library examples are good uses of web scraping because they involve simply transporting our data from one system to another; scraping information from the catalog to display it elsewhere is a prime use case. We own the data, so there are no intellectual property issues, and they’re our own servers so we’re responsible for keeping them up.

Code Libraries

While we’ve used PHP above, there’s no need to limit ourselves to a particular programming language. Here’s a set of popular web scraping choices in a few languages:

  • PHP: Simple HTML DOM, the library used in this post’s examples
  • Python: Beautiful Soup
  • Ruby: Mechanize and Nokogiri

To provide a sense of how the different tools above work, I’ve written a series of gists which use each to scrape titles from the first page of a DOAJ search.

Notes
  1. See the NCSU or Stanford library websites for examples of this search style. Essentially, results from several different search engines—a catalog, databases, the library website, study guides—are all displayed on the same page in separate “bento” compartments.
  2. The browser I’m in right now, Chrome, has this beauty for a user agent string: “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.76 Safari/537.36”. Yes, that’s right: Mozilla, Mac, AppleWebKit, KHTML, Gecko, Chrome, & Safari all make an appearance.

3 Comments on “Web Scraping: Creating APIs Where There Were None”

  1. Excellent overview on scraping.
    My personal fave is casperjs, which sits on top of the phantomjs. It allows me to do all my scraping with jQuery selectors. And since phantom is a headless browser, I can even scrape data from interactive pages.

    Also was surprised you didn’t mention https://scraperwiki.com/, as it will help you set up a scraper and then host the data for you (assuming you are OK with it being public).

  2. Junior Tidal says:

    This is a great post. I’m using PHP Scraping to help power our library’s mobile website. Since our desktop and mobile sites are separate, I’m using the PHP Simple HTML DOM Parser library to automatically update our mobile eResources page from our desktop page.

    This code is meant for Drupal 6/7, but can be modified. It is available here: http://journal.code4lib.org/articles/7294

    • Eric Phetteplace says:

      That’s another really great example. I actually think your Code4Lib article was what gave me the idea to scrape our hours data off our (Drupal) main site for our (static, jQuery Mobile) mobile site.

