Making a Basic LTI (Learning Tools Interoperability) App

Learning Tools Interoperability, or LTI, is an open standard maintained by the IMS Global Learning Consortium used to build external tools or plugins for Learning Management Systems (LMS).  A common use case for an LTI app is to build an application that can be accessed from within the LMS to perform searches and import resources into a course.  For example, the Wikipedia LTI application enables instructors to search Wikipedia and embed links to articles directly into their courses.  Academic libraries frequently struggle to integrate library resources into learning management systems, so LTI is an obvious standard to embrace as a potential way to make library resources more accessible.  However, when I began researching how to create an LTI app, I found it very difficult to find examples of existing app code and resources to get started.  You can’t just create any old web application and have that be ‘consumable’ by a learning management system in an LTI-compliant way.  In this post, I’ll outline some of the resources I found useful to get started building your own LTI app.

LTI General Architecture

These are the basic components of an LTI application:

  • The LTI Tool Provider (TP):  This is your application.  The tool provider is the resource the user sees when they access your application from within the learning management system.  The Wikipedia LTI app mentioned above is an example of a tool provider.
  • The LTI Tool Consumer (TC): This is the learning management system (e.g., Blackboard, Moodle, Canvas) from which the user accesses your tool provider application.
  • The LTI Launch:  When a user accesses your tool provider from the tool consumer, this is called “launching” the LTI application.  Parameters are passed from the tool consumer to your tool provider, including authorization parameters that ensure the user is permitted to access your application, as well as information about the user’s identity, roles within the tool consumer, and the type of request the user is sending (e.g., a “content item message” is sent to your tool to indicate the user is expecting to import a link back to the tool consumer).  A few of the standard launch parameters appear in the sketch after this list.
  • OAuth:  LTI applications use OAuth signatures for validating messages between the Tool Consumer and the Tool Provider.  LTI applications require that the Tool Consumer and the Tool Provider have each configured a shared key and secret, which is used to build an OAuth “Access Token” to enable communication between the two systems.1
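
To make the launch a bit more concrete, here is a minimal sketch (my own illustration, not taken from any of the libraries discussed below) of a few of the standard LTI 1.x parameters a tool provider receives in the launch POST.  The variable names are mine, and the OAuth signature check itself is handled by the BLTI class used in tool.php later in this post.

<?php
// Illustrative sketch: reading a few standard LTI 1.x launch parameters from
// the POST body.  Variable names are hypothetical; the OAuth validation is
// performed by the BLTI class shown in tool.php below.
$userId   = isset($_POST['user_id']) ? $_POST['user_id'] : null;       // opaque, consumer-assigned user identifier
$roles    = isset($_POST['roles']) ? $_POST['roles'] : '';             // e.g. "Instructor" or "Learner"
$fullName = isset($_POST['lis_person_name_full']) ? $_POST['lis_person_name_full'] : '';
$msgType  = isset($_POST['lti_message_type']) ? $_POST['lti_message_type'] : ''; // "basic-lti-launch-request" for a plain launch

if (strpos($roles, 'Instructor') !== false) {
    // Offer instructor-only functionality, such as importing links into the course.
}
?>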

An additional tip for developing LTI apps:  Sign up for a free instructor account for the Canvas learning management system.  Canvas accounts hosted on the Instructure website enable you to add a custom LTI tool to your course (once it’s hosted on a web server, of course) and also enable you to quickly experiment with some existing LTI applications (such as the Khan Academy and Merlot LTI apps) to explore possible functionality you might want to include in your own application.  This way, you can see what an instructor or student would see when they interact with your tool provider through an LMS.

Building your first “Hello, World” LTI app (with some help from Harvard)

When I first started looking into LTI, I found it really difficult to find a full (but basic) LTI application that would give me an overall picture of how LTI apps work – there are lots of LTI class libraries out there, but I wanted an example of how all the pieces of an LTI app fit together. After some fruitless Googling and GitHub searches, I finally stumbled upon this Harvard LTI workshop on LTI apps that really helped me understand how LTI applications work.  The repository includes a full working LTI application into which you can simply “plug in” some basic values to get a functioning app, complete with OAuth authentication.

First, be sure to look at the included presentation in the repository, which is a rare example of a set of presentation slides that is 100% understandable out of context, to get a general introduction to the LTI standard and what it attempts to achieve.  You’ll also want to read through the step-by-step LTI blog tutorial that will get you set up with your first “Hello, World!” LTI application, complete with valid OAuth-signed requests.2

I found it especially useful that the Harvard LTI workshop repository includes a pseudo tool consumer (which mimics how the LMS would interact with your tool) that you can use during development on localhost.  Once you follow the steps of the tutorial to build your basic “Hello, World!” single-page LTI application, you can plug that app’s local URL into the tool consumer page and check out how the parameters are passed from the tool consumer to the tool provider.  You can also examine the basic LTI PHP class library that is included, as well as the basic OAuth functionality, to see how the OAuth Access Token is constructed.

Use Case:  A WorldCat Discovery API Search and Retrieval Tool for LMS

My particular use case for exploring LTI involves building a search box that would enable a faculty user to add a link to a resource from the WorldCat Discovery system.  If your library subscribes to FirstSearch or is a WorldShare Management System (WMS) customer, you now likely have access to WorldCat Discovery, but the framework I’m using to build my app would work for any discovery layer with an API (e.g., Summon, Primo, etc.).

Searching and retrieving via LTI is straightforward.  First, starting from the Harvard LTI workshop application, I created a /lib directory to host the WorldCat Discovery PHP library published by OCLC, cloned from GitHub.  I installed the library using Composer as described in the GitHub repository readme instructions.  I then created a very simple search form and response page that enable a user to enter a query and retrieve results from the WorldCat Discovery API based on that query. Then, I set up my “tool.php” application to display the search form and POST the query to the simple response page:

tool.php:

<?php
error_reporting(E_ALL & ~E_NOTICE);
ini_set("display_errors", 1);
require_once 'ims-blti/blti.php';
$lti = new BLTI("secret", false, false);

session_start();
header('Content-Type: text/html;charset=utf-8');
?>

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8" />
    <title>Building Tools with the Learning Tools Interoperability Specification</title>
  </head>
  <body>
  <?php 
    if ($lti->valid) {
  ?>
    <h2>Search WorldCat Discovery</h2>
      <form action="results.php" method="post" enctype="application/x-www-form-urlencoded">
        Search: <input type="text" name="query" id="query" />
        <?php
          // Pass the LTI launch parameters along to the results page,
          // escaping them so they can safely be re-embedded as hidden form values.
          foreach ($_POST as $key => $value) {
            echo '<input type="hidden" name="' . htmlspecialchars($key)
               . '" value="' . htmlspecialchars($value) . '" />' . "\n";
          }
        ?>
        <input type="submit" name="submit" value="Submit" />
      </form>
    <pre>
    </pre>
    <?php
      } else {
    ?>
      <h2>This was not a valid LTI launch</h2>
      <p>Error message: <?= $lti->message ?></p>
    <?php
      }
    ?>
    </body>
</html>

results.php:

<?php
require_once('../lib/worldcat-discovery-php/vendor/autoload.php');

use OCLC\Auth\WSKey;
use OCLC\Auth\AccessToken;
use WorldCat\Discovery\Bib;

// The WSKey and secret issued by OCLC for the WorldCat Discovery API
$key = 'somekey';
$secret = 'somesecret';
$options = array('services' => array('WorldCatDiscoveryAPI', 'refresh_token'));
$wskey = new WSKey($key, $secret, $options);
// The two arguments here are placeholder institution IDs
$accessToken = $wskey->getAccessTokenWithClientCredentials('123', '123');

$query = $_POST["query"];
$options = array(
  'useFRBRGrouping' => 'true',
  'sortBy' => 'library_plus_relevance',
  'itemsPerPage' => 25,
  );
$bib = Bib::Search($query, $accessToken, $options);

if (is_a($bib, 'WorldCat\Discovery\Error')) {
   echo $bib->getErrorCode();
   echo $bib->getErrorMessage();
} else {
   echo '<ul>';
   foreach ($bib->getSearchResults() as $result) {
      echo '<li><a href="' . $result . '">' . $result->getName()->getValue() . ' (';
      echo ($result->getDatePublished() ? $result->getDatePublished()->getValue() : '') . ')</a></li>';
   }
   echo '</ul>';
}

?>


The application I’ve created so far is mostly a proof of concept, and I have a few essential tasks to finish the application – first, I need to re-write the URLs to point to a specific WorldCat Discovery instance (pointing to generic WorldCat.org isn’t helpful when a user wants to embed the resources of a specific library with full-text access links); second, my app needs to enable the user to return these links to the LMS so that students / course participants can click on them.

For the second point, the LTI specification defines a “content-item-message” that indicates the type of interaction requested from the tool is the return of a link to the LMS.  The LMS must include this input parameter in the POST request to the tool.  The LMS “knows” to send this parameter because of how the tool was initially installed in the LMS.

<input type="hidden" name="lti_message_type" value="ContentItemSelectionRequest" />

The POST request to the tool must also indicate the return URL (i.e., the URL back to the LMS) where the link should be sent (the LMS should generate this input parameter for you; your tool just needs to identify this parameter and include it in the POST request that returns the link to the LMS):

<input type="hidden" name="content_item_return_url" value="http://www.tc.com/item-return" />

The Tool Provider must then render the link to be imported, along with some description of the content, in JSON, for example:

{
  "@context" : "http://purl.imsglobal.org/ctx/lti/v1/ContentItem", 
  "@graph" : [ 
    { "@type" : "LtiLinkItem",
      "url" : "https://someinstitution.worldcat.org/oclc/709669613",
      "mediaType" : "text/html",
      "title" : "Global Warming: Hype or Hazard?"
    }
  ]
}

See the Content Item Message documentation for more details on returning JSON suitable for consumption by the LMS.
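
To tie these pieces together, here is a minimal sketch (my own illustration, based on my reading of the Content-Item Message documentation) of how a tool might return the selected link: it auto-submits a form back to the content_item_return_url with the JSON above in a content_items parameter.  A production tool must also OAuth-sign this response with the shared key and secret, which I’ve omitted here.

<?php
// Illustrative sketch of returning a selected link to the LMS.  The return URL
// arrives with the launch and, in this app, is passed through via the hidden
// form fields in tool.php.  A real tool must also OAuth-sign this response
// with the shared key/secret (omitted here for brevity).
$returnUrl = $_POST['content_item_return_url'];

$contentItems = json_encode(array(
  '@context' => 'http://purl.imsglobal.org/ctx/lti/v1/ContentItem',
  '@graph'   => array(
    array(
      '@type'     => 'LtiLinkItem',
      'url'       => 'https://someinstitution.worldcat.org/oclc/709669613',
      'mediaType' => 'text/html',
      'title'     => 'Global Warming: Hype or Hazard?'
    )
  )
));
?>
<form id="content-item-return" action="<?= htmlspecialchars($returnUrl) ?>" method="post">
  <input type="hidden" name="lti_message_type" value="ContentItemSelection" />
  <input type="hidden" name="lti_version" value="LTI-1p0" />
  <input type="hidden" name="content_items" value="<?= htmlspecialchars($contentItems) ?>" />
</form>
<script>document.getElementById('content-item-return').submit();</script>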

Learn, and do, more with LTI

You may find that the basic LTI class script included in the Harvard LTI tutorial is insufficient for your use case – the code is a bit aged and the LTI specification has moved on.

A more robust LTI tool provider PHP library than the basic one included in the Harvard tutorial has been made available by IMS Global on GitHub.  You can also find a complete sample app called “Rating” that is a great example of more complex kinds of interactions with an LTI app, including how you might build a server-side data store and recall that data through the LTI app, and how you might handle the assignment of grades or scores through an LTI app.

To learn more, the Canvas Learning Management system has an excellent open course on LTI development that you can enroll yourself in with a free Canvas account.  Once enrolled in the course, you can launch your own locally developed LTI app within the course to check how parameters and data are exchanged between the LMS and the tool.

  1.  See this post on LTI and OAuth for a straightforward discussion of the general implications of OAuth for LTI application development.
  2. I skipped the steps for installing Vagrant and VirtualBox and the tutorial still worked great for me on my MAMP server, so if you’re concerned about installing those and already have a local development server installed (or you’re just working from a LAMP server online) the tutorial will still work for you.

Thoughts from NDLC16

I recently had the pleasure of going to, and presenting at, the National Diversity in Libraries Conference (NDLC) at UCLA. NDLC is an irregular conference that last occurred in 2010. This year’s organizers had hoped to have around 250 registrants and they greatly exceeded those numbers. As a result, there were multiple sessions, with as many as seven concurrent sessions in a single time slot. I recommend perusing the program for a sense of the conference and full descriptions of sessions and posters. Moreover, there was a lively Twitter stream capturing sessions and continuing conversations at #NDLC16. As I could not attend all of the sessions, I will highlight some key concepts in this post that I think are ideal to carry forward in the work that we do in libraries, supporting people and systems.

Equity (and access, inclusion, and diversity)

Yes, the conference is about “diversity in libraries” but what became apparent throughout many sessions, is that we know we have “diversity.” Not a lot; shockingly little by some measures, but there is diversity. Diversity of life experience, sexual orientation, gender expression, ability status, and racial/ethnic identity (this last one is often very visible and very, very poorly represented). If our goals are to increase diversity, that is great. But if we try to do that work without acknowledging that it takes equity and inclusion to actually make an impact, it is a waste of time.

Equity and inclusion are difficult to achieve when the greater system that we work within has been constructed upon bias and privilege. This is true whether the system is our national government, our institution, or the individual library where we work. You cannot talk about human interactions in a vacuum, and many of the sessions reflected this with themes like “Academic libraries and social justice on campus” and “Cultural aspects and perspectives in health sciences library services.”

Dismantling Structures

An important point is in recognizing the oppressive structures that we operate within and then determining the best course of action to dismantle those and in so doing, remove or reduce the barriers to participation. The difficulty in this lies in the many, many ways that those structures disenfranchise communities. It can be through omission: not collecting in areas that represent the totality of American society, for example. Do collections and archives reflect the communities present? The Native American, Latinx, Asian-American, African-American experience? What of the LGBTQA community?

Or it can be through our actual information seeking systems: subjective subject headings that belie the dominant narrative. Words are there to describe items, but they are “othering” (set in contrast to or apart from a presumed norm, e.g. Wikipedia editors’ one-time attempt to separate American novelists based on gender[1]) or strip away intersectionality; or perhaps they use euphemisms for uncomfortable truths, such as framing the detention and imprisonment of Japanese-Americans during WWII as relocation camp/assembly center/temporary detention center, rather than: internment camp/incarceration camp/American concentration camp[2]. (Incidentally, the LCSH for this event is: Japanese Americans — Evacuation and relocation, 1942-1945, which seems pretty euphemistic to me. What were they being evacuated from? Their safe homes, schools, and jobs?)

Or perhaps it is in our technology: How common practice is it to evaluate databases for screen reader accessibility as part of the selection process? We work with profile systems like PeopleSoft, which limit ability to gender identify. How do animations and lots of visual content on the website affect individuals with autism spectrum disorder (ASD)?

And what about our programming? Are makerspaces implicitly gendering activities in the promotional materials, e.g. using pinks and purples for sewing workshops and blues and blacks for Arduino workshops? What is really being said when a supervisor asks if there should be an “All Lives Matter” book display to “balance” the “Black Lives Matter” display?

These are complex issues because humans are complex animals. We form identities, often based on how others perceive us. How are the intersections of various facets of identity treated by society? By our organization? By our collections? By our metadata? To strive for a society wherein these oppressive structures are acknowledged, the history that put them in place is a known history, and that all members feel equal and respected is a herculean task. But it is one that we are fortunate enough to be engaged with by virtue of our profession, because we ascribe to ALA’s Core Values of Librarianship, or create things like the ACRL Diversity Standards, or align with the Digital Library Federation’s mission statement.

Start Local

One way to be an active participant in working towards a better future is to assess your local environment. Much of the programming at NDLC concerned recruitment and retention of diverse individuals in our profession. Panel discussions about LIS diversity initiatives and diversity fellow programs, and presentations on strategic planning for diversity and inclusion and cultural competencies, highlight the serious representation issues in our profession. According to the ALA demographic study of 2014, 87.1% of membership identified as white[3]. The 2015 census data from the United States reports the population as 61.6% white (non-Hispanic, not multi-racial)[4]. For a profession built so firmly on the notion of service and community engagement, this is problematic. Happily, many of the presentations at NDLC made their materials available, so we may learn from each other’s experiences[5].

Representative hiring and active retention are only a portion of what can be done locally to bring about positive change. Collection practices, metadata creation, digital project investment, special collections development, and community outreach and programming are the bread and butter of library activities; whether a public, school, or academic library. It is important that we examine our current processes, not with a “how is this diverse?” perspective, but with a “how is this anti-racist/anti-oppression?” This is an important, yet perhaps subtle, difference. Working towards an anti-oppression mentality means that there is recognition of systemic inequalities woven through the fabric of our society, which may require changing some of the core practices of librarians in order to move the needle.

An example of this could be in the way that an academic library engages with, say, a community of students who inhabit both an ethnic minority identity and are first-generation students. This library may have done a great job at collecting a diverse range of materials that represent this population well, but if the library does not devote effort in the outreach and programming to this group, it is unlikely that the students will go into the library to discover these works themselves. Why? It’s not that they are lazy, or entitled, or willfully ignorant (as I have witnessed librarians opine).

Libraries are still intimidating places for many students, regardless of identity. Compound that generalized anxiety with first-generation student status and an ethnic minority and it should be clear that the onus is on the librarians to dismantle those barriers – be they perceived or real – and communicate that the library is the students’ library – is our library – is everyone’s library. If this sounds obvious to you (“of course the library is everyone’s library!”), then I fear that you may not have a realistic understanding of the varied and complex lived experiences of people living in America. If this sounds like a challenge worthy of time, effort, and resources, then YAY! And I really hope that you are in a position to take up that challenge.

I’m Tired Now

Another strong takeaway from NDLC is that this work is WORK. Really hard, exhausting work. Work that often falls on the shoulders of those most disenfranchised, and that requires pushing back against a bureaucratic and social machine that has been running for centuries. Writing this blog post took forever, as I have tried to find words that don’t require previous knowledge of “social justice jargon” and that – hopefully – anyone reading can find something to relate to. See, working towards an equitable and inclusive society doesn’t fall to those who lack access to societal privilege. Sure, those individuals will feel the imperative to change in their day to day experiences, but major heavy lifting must also come from those who implicitly – and often unconsciously – benefit from the constructs of the biased system.

Lasting change that doesn’t come from a burn-it-down-and-start-again revolution requires the positive participation of the most privileged. That is a lot of strength and honesty to ask of someone – to recognize that they have – in some ways – benefited from a system, and that benefit has been predicated on someone else’s oppression…that’s a heavy realization. It means recognizing that bringing equity to a system like this will most likely require some loss of benefits to the privileged group.

For example, it is possible that a collection focusing on female religious figures may purchase fewer books on Joan of Arc in order to represent Rabia Basri and Kāraikkāl Ammaiyār. That doesn’t diminish Joan of Arc’s impact on the collection, it just provides a wider lens to view the topic. Yet, some may feel a decline of Joan’s status, and that can be frightening if they’ve built an identity around that status.

This collection example is actually a decent representation of what it means to try to put some of these equity ideals into practice. Recognizing a representation imbalance and then taking action to address it may result in feelings of discomfort or fear for some, but for others it can give voice and visibility. We are lucky to be in a profession where we, as individuals/organizations/systems, can effect change and have such a positive impact on our communities, but we must be willing to recognize uncomfortable truths and believe that a just and equitable society is a future worth working for.

NDLC Again?

Rumor has it that there will be another NDLC in 2020. It was an exhilarating conference and one that I sincerely hope leaders in our profession come to and participate with, en masse. If you are looking to get more of a feel for the entire conference, several Storify collections were created: See http://ndlc.info/ for some, as well as Amelia Gibson’s on the BLM Town Hall, ARL’s, and mine.

[1] http://www.nytimes.com/2013/04/28/opinion/sunday/wikipedias-sexism-toward-female-novelists.html?_r=0

[2] http://www.discovernikkei.org/en/journal/2008/4/24/enduring-communities/

[3] http://www.ala.org/research/sites/ala.org.research/files/content/initiatives/membershipsurveys/September2014ALADemographics.pdf

[4] https://www.census.gov/quickfacts/table/PST045215/00

[5] Coming soon to http://ndlc.info/ or look through #NDLC16

A High-Level Look at an ILS Migration

My library recently performed that most miraculous of feats—a full transition from one integrated library system to another, specifically Innovative’s Millennium to the open source Koha (supported by ByWater Solutions). We were prompted to migrate by Millennium’s approaching end-of-life and a desire to move to a more open system where we feel in greater control of our data. I’m sure many librarians have been through ILS migrations, and plenty has been written about them, but as this was my first I wanted to reflect upon the process. If you’re considering changing your ILS, or if you work in another area of librarianship & wonder how a migration looks from the systems end, I hope this post holds some value for you.

Challenges

No migration is without its problems. For starters, certain pieces of data in our old ILS weren’t accessible in any meaningful format. While Millennium has a robust “Create Lists” feature for querying & exporting different types of records (patron, bibliographic, vendor, etc.), it does not expose certain types of information. We couldn’t find a way to export detailed fines information, only a lump sum for each patron. To help with this post-migration, we saved an email listing of all itemized fines that we can refer to later. The email is saved as a shared Google Doc which allows circulation staff to comment on it as fines are resolved.

We also discovered that patron checkout history couldn’t be exported in bulk. While each patron can opt-in to a reading history & view it in the catalog, there’s no way for an administrator to download everyone’s history at once. As a solution, we kept our self-hosted Millennium instance running & can login to patrons’ accounts to retrieve their reading history upon request. Luckily, this feature wasn’t heavily used, so access to it hasn’t come up many times. We plan to keep our old, self-hosted ILS running for a year and then re-evaluate whether it’s prudent to shut it down, losing the data.

While some types of data simply couldn’t be exported, many more couldn’t emigrate in their exact same form. An ILS is a complicated piece of software, with many interdependent parts, and no two are going to represent concepts in the exact same way. To provide a concrete example: Millennium’s loan rules are based upon patron type & the item’s location, so a rule definition might resemble

  • a FACULTY patron can keep items from the MAIN SHELVES for four weeks & renew them once
  • a STUDENT patron can keep items from the MAIN SHELVES for two weeks & renew them two times

Koha, however, uses patron category & item type to determine loan rules, eschewing location as the pivotal attribute of an item. Neither implementation is wrong in any way; they both make sense, but are suited to slightly different situations. This difference necessitated completely reevaluating our item types, which didn’t previously affect loan rules. We had many, many item types because they were meant to represent the different media in our collection, not act as a hook for particular ILS functionality. Under the new system, our Associate Director of Libraries put copious work into reconfiguring & simplifying our types such that they would be compatible with our loan rules. This was a time-consuming process & it’s just one example of how a straightforward migration from one system to the next was impossible.

While some data couldn’t be exported, and others needed extensive rethinking in the new ILS, there was also information that could only be migrated after much massaging. Our patron records were a good example: under Millennium, users logged in on an insecure HTTP page with their barcode & last name. Yikes. I know, I felt terrible about it, but integration with our campus authentication & upgrading to HTTPS were both additional costs that we couldn’t afford. Now, under Koha, we can use the campus CAS (a central authentication system) & HTTPS (yay!), but wait…we don’t have the usernames for any of our patrons. So I spent a while writing Python scripts to parse our patron data, attempting to extract usernames from institutional email addresses. A system administrator also helped use unique identifying information (like phone number) to find potential patron matches in another campus database.
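
To give a flavor of that cleanup (my actual scripts were written in Python; the snippet below is just an illustrative sketch in PHP, to keep one language across this post’s examples, with a hypothetical campus domain), the core idea of the username extraction was roughly:

<?php
// Minimal sketch of deriving a campus username from an institutional email
// address.  The domain is a hypothetical placeholder; addresses from other
// domains return null and are left for manual matching.
function usernameFromEmail($email, $campusDomain = 'example.edu') {
    $parts = explode('@', strtolower(trim($email)));
    if (count($parts) !== 2 || $parts[1] !== $campusDomain) {
        return null;
    }
    return $parts[0];
}

// usernameFromEmail('JDoe@example.edu') returns 'jdoe'
?>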

A more amusing example of weird Millennium data was active holds, which are stored in a single field on item records & look like this:

P#=12312312,H#=1331,I#=999909,NNB=12/12/2016,DP=09/01/2016

Can you tell what’s going on here? With a little poking around in the system, it became apparent that letters like “NNB” stood for “date not needed by” & that other fields were identifiers connecting to patron & item records. So, once again, I wrote scripts to extract meaningful details from this silly format.
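
For illustration, here is a sketch of that kind of extraction (again, a sketch rather than my actual scripts): split the packed field on commas, then on the first equals sign of each pair, yielding an associative array keyed by Millennium’s codes.

<?php
// Minimal sketch of unpacking Millennium's hold field (format shown above).
// Per the text, NNB is the "date not needed by" and P#/I# connect to patron
// and item records; the remaining codes are left uninterpreted here.
function parseHoldField($field) {
    $hold = array();
    foreach (explode(',', $field) as $pair) {
        list($key, $value) = explode('=', $pair, 2);
        $hold[$key] = $value;
    }
    return $hold;
}

$hold = parseHoldField('P#=12312312,H#=1331,I#=999909,NNB=12/12/2016,DP=09/01/2016');
// $hold['NNB'] is now '12/12/2016', $hold['P#'] is '12312312', etc.
?>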

I won’t lie, the data munging was some of the most enjoyable work of the migration. Maybe I’m weird, but it was both challenging & interesting as we were suddenly forced to dive deeper into our old system and understand more of its hideous internal organs, just as we were leaving it behind. The problem-solving & sleuthing were fun & distracted me from some of the more frustrating challenges detailed above.

Finally, while we had a migration server where we tested our data & staff played around for almost a month’s time, when it came to the final leap things didn’t quite work as expected. The CAS integration, which I had so anticipated, didn’t work immediately. We started bumping into errors we hadn’t seen on the migration server. Much of this is inevitable; it’s simply unrealistic to create a perfect replica of our live catalog. We cannot, for instance, host the migration server on the exact same domain, and while that seems like a trivial difference it does affect a few things. Luckily, we had few summer classes so there was time to suffer a few setbacks & now that our fall semester is about to begin, we’re in great shape.

Difference & Repetition

Koha is primarily used by public libraries, and as such we’ve run into a few areas where common academic library functions aren’t implemented in a familiar way or are unavailable. Often, it’s that our perspective is so heavily rooted in Millennium that we need to think differently to achieve the same effect in Koha. But sometimes it’s clear that what’s a concern to us isn’t to other libraries.

For instance, bib records for serials with large numbers of issues are an ongoing struggle for us.  We have many print periodicals where we have extensive holdings, including bound editions of past issues. The holdings display in the catalog is more oriented towards recent periodicals & displaying whether the latest few issues have arrived yet. That’s fine for materials like newspapers or popular magazines with few back issues, and I’ve seen a few public libraries using Koha that have minimalistic periodical records intended only to point the patron to a certain shelf. However, we have complex holdings like “issues 1 through 10 are bound together, issue 11 is missing, issues 12 through 18 are held in a separate location…” Parsing the catalog record to determine if we have a certain issue, and where it might be, is quite challenging.

Another example of the public versus academic divide: there’s no “recall” feature per se in Koha, wherein a faculty member could retrieve an item they want to place on course reserve from a student. Instead, we have tried to simulate this feature with a mixture of adjustments to our loan rules & internal reports which show the status of contested items. Recall isn’t a huge feature & isn’t used all the time, so it’s not something we thought to research when selecting our new ILS, but it’s a great example of a minute difference that ended up creating a headache as we adapted to a new piece of software.

Moving from Millennium to Koha also meant we were shifting from a closed source system where we had to pay additional fees for limited API access to an open source system which boasts full read access to the database via its reporting feature. Koha’s open source nature has been perhaps the biggest boon for me during our migration. It’s very simple to look at the actual server-side code generating particular pages, or pull up specific rows in database tables, to see exactly what’s happening. In a black box ILS, everything we do is based on a vague adumbration of how we think the system operates. We can provide an input & record the output, but we’re never sure about edge cases or whether strange behavior is a bug or somehow intentional.

Koha has its share of bugs, I’ve discovered, but thankfully I’m able to jump right into the source code itself to determine what’s occurring. I’ve been able to diagnose problems by looking at open bug reports on Koha’s bugzilla tracker, pondering over perl code, and applying snippets of code from the Koha wiki or git repository. I’ve already submitted two bug patches, one of which has been pulled into the project. It’s empowering to be able to trace exactly what’s happening when troubleshooting & submit one’s own solution, or just a detailed bug report, for it. Whether or not a patch is the best way to fix an issue, being able to see precisely how the system works is deeply satisfying. It also makes it much easier for me to design JavaScript hacks that smooth over issues on the client side, be it in the staff-facing administrative functions or the public catalog.

What I Would Do Differently

Set clearer expectations.

We had Millennium for more than a decade. We invested substantial resources, both monetary & temporal, in customizing it to suit our tastes & unique collections. As we began testing the new ILS, the most common feedback from staff fell along the lines of “this isn’t like it was in Millennium.” I think that would have been a less common observation, or perhaps phrased more productively, if I’d made it clear that a) it’ll take time to customize our new ILS to the degree of the old one, and b) not everything will be or needs to be the same.

Most of the customization decisions were made years ago & were never revisited. We need to return to the reason why things were set up a certain way, then determine if that reason is still legitimate, and finally find a way to achieve the best possible result in the new system. Instead, it’s felt like the process was framed more as “how do we simulate our old ILS in the new one” which sets us up for disappointment & failure from the start. I think there’s a feeling that a new system should automatically be better, and it’s true that we’re gaining several new & useful features, but we’re also losing substantial Millennium-specific customization. It’s important to realize that just because everything is not optimal out of the box doesn’t mean we cannot discover even better solutions if we approach our problems in a new light.

Encourage experimentation, deny expertise.

Because I’m the Systems Librarian, staff naturally turn to me with their systems questions. Here’s a secret: I know very little about the ILS. Like them, I’m still learning, and what’s more I’m often unfamiliar with the particular quarters of the system where they spend large amounts of time. I don’t know what it’s like to check in books & process holds all day, but our circulation staff do. It’s been tough at times when staff seek my guidance & I’m far from able to help them. Instead, we all need to approach the ongoing migration as an exploration. If we’re not sure how something works, the best way is to research & test, then test again. While Koha’s manual is long & quite detailed, it cannot (& arguably should not, lest it grow to unreasonable lengths) specify every edge case that can possibly occur. The only way to know is to test & document, which we should have emphasized & encouraged more towards the start of the process.

To be fair, many staff had reasonable expectations & performed a lot of experiments. Still, I did not do a great job of facilitating either of those as a leader. That’s truly my job as Systems Librarian during this process; I’m not here merely to mold our data so it fits perfectly in the new system, I’m here to oversee the entire transition as a process that involves data, workflows, staff, and technology.

Take more time.

Initially, the ILS migration was such an enormous amount of work that it was not clear where to start. It felt as if, for a few months before our on-site training, we did little but sit around & await a whirlwind of busyness. I wish we had a better sense of the work we could have front-loaded such that we could focus efforts on other tasks later on. For example, we ended up deleting thousands of patron, item, and bibliographic records in an effort to “clean house” & not spend effort migrating data that was unneeded in the first place. We should have attacked that much earlier, and it might have obviated the need for some work. For instance, if in the course of cleaning up Millennium we delete invalid MARC records or eliminate obscure item types, those represent fewer problems encountered later in the migration process.

Finished?

As we start our fall semester, I feel accomplished. We raced through this migration, beginning the initial stages only in April for a go-live date that would occur in June. I learned a lot & appreciated the challenge but also had one horrible epiphany: I’m still relatively young, and I hope to be in librarianship for a long time, so this is likely not the last ILS migration I’ll participate in. While that very thought gives me chills, I hope the lessons I’ve taken from this one will serve me well in the future.