Higher ‘Professional’ Ed, Lifelong Learning to Stay Employed, Quantified Self, and Libraries

The 2014 Horizon Report is mostly a report on emerging technologies. Many academic librarians carefully read its Higher Ed edition, issued every year, to learn about upcoming technology trends. But this year’s Higher Ed edition interested me less for the technologies on the near-term (one-to-five year) horizon of adoption than for how it reflects the current state of higher education. Let’s take a look.

A. Higher Ed or Higher Professional Ed?

To me, the most useful section of this year’s Horizon Report was ‘Wicked Challenges.’ The significant backdrop behind the first challenge “Expanding Access” is the fact that the knowledge economy is making higher education more and more closely and directly serve the needs of the labor market. The report says, “a postsecondary education is becoming less of an option and more of an economic imperative. Universities that were once bastions for the elite need to re-examine their trajectories in light of these issues of access, and the concept of a credit-based degree is currently in question.” (p.30)

Many of today’s students enter colleges and universities with a clear goal, i.e. obtaining a competitive edge and better earning potential in the labor market. The results, already familiar to many of us, are grade and degree inflation and the emergence of higher ed institutions that pursue profit over education itself. When the acquisition of skills takes precedence over intellectual inquiry for its own sake, higher education comes to resemble higher professional education or intensive vocational training. As the economy all but forces people to take up lifelong learning simply to stay employed, the friction between the traditional goal of higher education – intellectual pursuit for its own sake – and the changing expectation placed on it – producing a creative, adaptable, and flexible workforce – will only become more prominent.

Naturally, this socioeconomic background behind the expansion of postsecondary education raises the question of where its value lies. This is the second wicked challenge listed in the report, i.e. “Keeping Education Relevant.” The report says, “As online learning and free educational content become more pervasive, institutional stakeholders must address the question of what universities can provide that other approaches cannot, and rethink the value of higher education from a student’s perspective.” (p.32)

B. Lifelong Learning to Stay Employed

Today’s economy and labor market strongly prefer employees who can be hired, retooled, or let go at the same pace as changes in technology, now that technology has become one of the greatest driving forces of the economy. Workers are expected to enter the job market with more complex skills than in the past, to adjust quickly as the skills that matter in the workplace change, and increasingly to take on the role of a creator/producer/entrepreneur in their thinking and work practices. Credit-based degree programs fall short in this regard. It is no surprise that the report selected “Agile Approaches to Change” and “Shift from Students as Consumers to Students as Creators” as two of its long-range and mid-range key trends.

A strong focus on creativity, productivity, entrepreneurship, and lifelong learning, however, puts a heavier burden on both sides of education, i.e. instructors and students (full-time, part-time, and professional). While positive in emphasizing students’ active learning, the Flipped Classroom model selected as one of the key trends in the Horizon Report often means additional work for instructors. In this model, instructors not only have to prepare the study materials for students to go over before class, such as lecture videos, but also need to plan active learning activities for students during class time. The Flipped Classroom model also assumes that students are able to invest enough time outside the classroom to study.

The unfortunate side effect of this is that those who cannot afford to do so – for example, those who have to work multiple jobs or who have many family obligations – will suffer and fall behind. Today’s students and workers are being asked to demonstrate their competencies with what they can produce, beyond simply presenting the credit hours they spent in the classroom. Probably as a result, the clear demarcation between work, learning, and personal life seems to be disappearing. “The E-Learning Predictions for 2014 Report” from EdTech Europe predicts that ‘Learning Record Stores,’ which track, record, and quantify an individual’s experiences and progress in both formal and informal learning, will emerge in step with the continuous learning that today’s job market requires. EdTech Europe also points out that learning is now being embedded in daily tasks and that we will see a significant increase in the availability and use of casual and informal learning apps both in education and in the workplace.

C. Quantified Self and Learning Analytics

Among the six emerging technologies in the 2014 Horizon Report Higher Education edition, ‘Quantified Self’ is by far the most interesting new trend. (Other technologies should be pretty familiar to those who have been following the Horizon Report every year, except maybe the 4D printing mentioned in the 3D printing section. If you are looking for the emerging technologies that are on a farther horizon of adoption, check out this article from the World Economic Forum’s Global Agenda Council on Emerging Technologies, which lists technologies such as screenless display and brain-computer interfaces.)

According to the report, “Quantified Self describes the phenomenon of consumers being able to closely track data that is relevant to their daily activities through the use of technology.” (ACRL TechConnect has covered personal data monitoring and action analytics previously.) Quantified Self is enabled by wearable technology devices, such as Fitbit or Google Glass, and the mobile Web. Wearable technology devices automatically collect personal data. Fitbit, for example, keeps track of one’s sleep patterns, steps taken, and calories burned. And the mobile Web is the platform that can store and present such personal data, transferred directly from those devices. Through these devices and the resulting personal data, we get to observe our own behavior in a much more extensive and detailed manner than ever before. Instead of deciding in advance which parts of our lives to keep a record of, we can now let these devices collect almost every type of data about ourselves and then see which data turn out to be useful and whether any patterns emerge that we can utilize for the purpose of self-improvement.

Quantified Self is a notable trend not because it involves an unprecedented technology but because it gives us a glimpse of what our daily lives will be like in the near future, when many of the emerging technologies that we are just getting used to right now – mobile, big data, wearable technology – will come together in full bloom. ‘Learning Analytics,’ which the Horizon Report calls “the educational application of ‘big data’” (p.38), can be thought of as the application of Quantified Self to education, and it has already made significant progress in higher education. By collecting and analyzing data about student behavior in online courses, learning analytics aims at improving student engagement, providing a more personalized learning experience, detecting learning issues, and determining the behavior variables that are significant indicators of student performance.

While privacy is a natural concern for Quantified Self, it is worth noting that we ourselves often willingly participate in personal data monitoring through gamified self-tracking apps – a kind of surveillance that could seem offensive in other contexts. In her article, “Gamifying the Quantified Self,” Jennifer Whitson writes:

Gamified self-tracking and participatory surveillance applications are seen and embraced as play because they are entered into freely, injecting the spirit of play into otherwise monotonous activities. These gamified self-improvement apps evoke a specific agency—that of an active subject choosing to expose and disclose their otherwise secret selves, selves that can only be made penetrable via the datastreams and algorithms which pin down and make this otherwise unreachable interiority amenable to being operated on and consciously manipulated by the user and shared with others. The fact that these tools are consumer monitoring devices run by corporations that create neoliberal, responsibilized subjectivities become less salient to the user because of this freedom to quit the game at any time. These gamified applications are playthings that can be abandoned at whim, especially if they fail to pleasure, entertain and amuse. In contrast, the case of gamified workplaces exemplifies an entirely different problematic. (p.173; emphasis my own and not by the author)

If libraries and higher education institutions become active in monitoring and collecting students’ learning behavior, the success of such an endeavor will depend on how well it creates and provides a sense of play that invites students’ willing participation. It will also be important for learning analytics projects of this kind to offer an opt-out at any time and to keep private data as confidential and anonymous as possible.

D. Back to Libraries

The changed format of this year’s Horizon Report, with its ‘Key Trends’ and ‘Significant Challenges,’ shows much more clearly the forces in play behind the emerging technologies to look out for in higher education. One big take-away from this report, I believe, is that in spite of the doubt about the unique value of higher education, demand will keep increasing because of students’ need to obtain a competitive advantage in entering or re-entering the workforce. Another is that higher ed institutions will endeavor to create appropriate means and tools – such as competency-based assessments and badge systems – that let students acquire and demonstrate skills and experience, beyond credit-hour-based degrees, in a way that is appealing to future employers.

Considering that the pace of change in higher education tends to be slow, this can be an opportunity for academic libraries. Both instructors and students are under constant pressure to innovate and experiment in their teaching and learning processes. Instructors designing the Flipped Classroom model may require a studio where they can record and produce their lecture videos. Students may need to compile portfolios to demonstrate their knowledge and skills for job interviews. Returning adult students may need to acquire habitual lifelong learning practices with help from librarians. Local employers and students may mutually benefit from a place where certain co-projects can be tried. As a neutral player on campus with tech-savvy librarians and knowledgeable staff, the library can create a place that directly addresses the most palpable student needs that are yet to be satisfied by individual academic departments or student services. Maker labs, gamified learning or self-tracking modules, and a competency dashboard are all examples. From the emerging technology trends in higher ed, we can see that learning activities in higher education and academic libraries will be more and more closely tied to the economic imperative of constant innovation.

Academic libraries may even go further and take up the role of leading the changes in higher education. In his blog post for Inside Higher Ed, Joshua Kim suggests exactly this and also nicely sums up the challenges that today’s higher education faces:

  • How do we increase postsecondary productivity while guarding against commodification?
  • How do we increase quality while increasing access?
  • How do we leverage technologies without sacrificing the human element essential for authentic learning?

How will academic libraries be able to lead the changes necessary for higher education to successfully meet these challenges? It is a question that will stay with academic libraries for many years to come.


Bash Scripting: automating repetitive command line tasks

Introduction

One of my current tasks is to develop workflows for digital preservation procedures. These workflows begin with the acquisition of files – either disk images or logical file transfers – both of which end up on a designated server. Once acquired, the images (or files) are checked for viruses. If clean, they are bagged using BagIt and then copied over to a different server for processing.1 This work is all done at the command line, and as you can imagine, it gets quite repetitive. It’s also a bit error-prone, since our file naming conventions include a 10-digit media ID number, which is easily mistyped. So once all the individual processes were worked out, I decided to automate things a bit by placing the commands into a single script. I should mention here that I’m no Linux whiz – I use it as needed, which is sometimes daily, sometimes not. This is the first time I’ve ever tried to tie commands together in a Bash script, but I figured previous programming experience would help.

Creating a Script

To get started, I placed all the virus check commands for disk images into a script. These commands are different from the logical file virus checks since the disk has to be mounted to get a read. This is a pretty simple step – first add:

#!/bin/bash

as the first line of the file (this line should not be indented or have any other whitespace in front of it). This tells the kernel which interpreter to invoke, in this case, Bash. You could substitute the path to another interpreter, like Python, for other types of scripts: #!/usr/bin/env python.

Next, I changed the file permissions to make the script executable:

chmod +x myscript

I separated the virus check commands so that I could test those out and make sure they were working as expected before delving into other script functions.

Here’s what my initial script looked like (comments are preceded by a #):

#!/bin/bash

#mount disk
sudo mount -o ro,loop,nodev,noexec,nosuid,noatime /mnt/transferfiles/2013_072/2013_072_DM0000000001/2013_072_DM0000000001.iso /mnt/iso

#check to verify mount
mountpoint -q /mnt/iso && echo "mounted" || echo "not mounted"

#call the Clam AV program to run the virus check
clamscan -r /mnt/iso > "/mnt/transferfiles/2013_072/2013_072_DM0000000001/2013_072_DM0000000001_scan_test.txt"

#unmount disk
sudo umount /mnt/iso

#check disk unmounted
mountpoint -q /mnt/iso && echo "mounted" || echo "not mounted"

All those options on the mount command? They give me the peace of mind that accessing the disk will in no way alter it (or affect the host server), thus preserving its authenticity as an archival object. You may also be wondering about the use of “&&” and “||”. These function as conditional AND and OR operators, respectively. So “&&” tells the shell to run the first command, AND, if that succeeds, to run the second command. Conversely, “||” tells the shell to run the first command, OR, if that fails, to run the second command. So the mount check command can be read as: check to see if the directory at /mnt/iso is a mountpoint. If it is, echo “mounted”; if it’s not, echo “not mounted.”
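As a standalone sketch (the file and directory names here are hypothetical), the same pattern chains any two commands:

#copy a file; report success or failure
cp source.txt /mnt/backup/ && echo "copy succeeded" || echo "copy failed"

One caveat: in the A && B || C pattern, C will also run if B itself fails. That’s harmless here, since echo almost never fails, but it’s worth knowing before using || with commands that do real work.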

Adding Variables

You may have noted that the script above only works on one disk image (2013_072_DM0000000001.iso), which isn’t very useful. I created variables for the accession number, the digital media number, and the file extension, since they all change depending on the disk image information. The file naming convention we use for disk images is consistent and strict. The top-level directory is the accession number. Within that, each disk image acquired from that accession is stored within its own directory, named using its assigned number. The disk image itself is then named with a combination of the accession number and the disk image number. Yes, it’s repetitive, but it keeps track of where things came from and links to data we have stored elsewhere. Given that these disks may not be processed for 20 years, such redundancy is preferred.

At first I thought the accession number, digital media number, and extension variables would be best entered with the initial run command; type in one line to run many commands. Each variable is separated by a space; the .iso at the end is the extension for an optical disk image file:

$ ./virus_check.sh 2013_072 DM0000000001 .iso

In Bash, arguments passed to a script are available as $1 for the first, $2 for the second, and so on. This actually tripped me up for a day or so. I initially thought the $1, $2, etc. variable names used by the book I was referencing were for example only, and that the first variables I referenced in the script would automatically map in order, so that if 2013_072 was the first argument and $accession was the first variable, $accession = 2013_072 (much like when you pass in a parameter to a Python function). Then I realized there was a reason that more than one reference book and/or site used the $1, $2, $3 system for variables passed in as command line arguments. I assigned each to its proper variable, and things were rolling again.

#!/bin/bash

#assign command line variables
accession=$1
digital_media=$2
extension=$3

#mount disk
sudo mount -o ro,loop,noatime /mnt/transferfiles/${accession}/${accession}_${digital_media}/${accession}_${digital_media}${extension} /mnt/iso

Note: variable names are often presented without curly braces; it’s recommended to place them in curly braces when they are adjacent to other strings.2
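A quick sketch of why the braces matter when a variable sits right next to other text:

accession="2013_072"
echo "$accession_DM01"    #prints nothing: Bash looks for a variable named accession_DM01
echo "${accession}_DM01"  #prints 2013_072_DM01, as intended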

Reading Data

After testing the script a bit, I realized I didn’t like passing the variables in via the command line. I kept making typos, and it was annoying not to have any data validation done in a more interactive fashion. I reconfigured the script to prompt the user for input:

read -p "Please enter the accession number" accession

read -p "Please enter the digital media number" digital_media

read -p "Please enter the file extension, include the preceding period" extension

After reviewing some test options, I decided to test that $accession and $digital_media were valid directories, and that the combination of all three variables was a valid file. This test seems more conclusive than simply testing whether or not the variables fit the naming criteria, but it does mean that if the data entered is invalid, the feedback given to the user is limited. I’m considering adding tests for the naming criteria as well, so that the user knows when an error is due to a typo vs. a non-existent directory or file. I also didn’t want the code to simply quit when one of the variables is invalid – that’s not very user-friendly. I decided to keep asking the user for input until valid input was received.

read -p "Please enter the accession number" accession
until [ -d /mnt/transferfiles/${accession} ]; do
     read -p "Invalid. Please enter the accession number." accession
done

read -p "Please enter the digital media number" digital_media
until [ -d /mnt/transferfiles/${accession}/${accession}_${digital_media} ]; do
     read -p "Invalid. Please enter the digital media number." digital_media
done

read -p  "Please enter the file extension, include the preceding period" extension
until [ -e /mnt/transferfiles/${accession}/${accession}_${digital_media}/${accession}_${digital_media}${extension} ]; do
     read -p "Invalid. Please enter the file extension, including the preceding period" extension
done

Creating Functions

You may have noted that the command used to test if a disk is mounted or not is called twice. This is done on purpose, as it was a test I found helpful when running the virus checks; the virus check runs and outputs a report even if the disk isn’t mounted. Occasionally disks won’t mount for various reasons. In such cases, the resulting report will state that it scanned no data, which is confusing because the disk itself possibly could have contained no data. Testing if it’s mounted eliminates that confusion. The command is repeated after the disk has been unmounted, mainly because I found it easy to forget the unmount step, and testing helps reinforce good behavior. Given that the command is repeated twice, it makes sense to make it a function rather than duplicate it.

check_mount () {
     #checks to see if disk is mounted or not
     mountpoint -q /mnt/iso && echo "mounted" || echo "not mounted"
}

Lastly, I created a function for the input variables. I’m sure there are prettier, more concise ways of writing this function, but since it’s still being refined and I’m still learning Bash scripting, I decided to leave it for now. I did want it placed in its own function because I’m planning to add additional code that will notify me and exit the program if the virus check is positive, or, if it’s negative, bag the disk image and corresponding files and copy them over to another server where they’ll wait for further processing.

get_image () {
     #gets data from user, validates it    
     read -p "Please enter the accession number" accession
     until [ -d /mnt/transferfiles/${accession} ]; do
          read -p "Invalid. Please enter the accession number." accession
     done

     read -p "Please enter the digital media number" digital_media
     until [ -d /mnt/transferfiles/${accession}/${accession}_${digital_media} ]; do
          read -p "Invalid. Please enter the digital media number." digital_media
     done

     read -p  "Please enter the file extension, include the preceding period" extension
     until [ -e /mnt/transferfiles/${accession}/${accession}_${digital_media}/${accession}_${digital_media}${extension} ]; do
          read -p "Invalid. Please enter the file extension, including the preceding period" extension
     done
}

Resulting (but not final!) Script

#!/bin/bash
#takes accession number, digital media number, and extension as variables to mount a disk image and run a virus check

check_mount () {
     #checks to see if disk is mounted or not
     mountpoint -q /mnt/iso && echo "mounted" || echo "not mounted"
}

get_image () {
     #gets disk image data from user, validates info
     read -p "Please enter the accession number" accession
     until [ -d /mnt/transferfiles/${accession} ]; do
          read -p "Invalid. Please enter the accesion number." accession
     done

     read -p "Please enter the digital media number" digital_media
     until [ -d /mnt/transferfiles/${accession}/${accession}_${digital_media} ]; do
          read -p "Invalid. Please enter the digital media number." digital_media
     done

     read -p  "Please enter the file extension, include the preceding period" extension
     until [ -e /mnt/transferfiles/${accession}/${accession}_${digital_media}/${accession}_${digital_media}${extension} ]; do
          read -p "Invalid. Please enter the file extension, including the preceding period" extension
     done
}

get_image

#mount disk
sudo mount -o ro,loop,noatime /mnt/transferfiles/${accession}/${accession}_${digital_media}/${accession}_${digital_media}${extension} /mnt/iso

check_mount

#run virus check
sudo clamscan -r /mnt/iso > "/mnt/transferfiles/${accession}/${accession}_${digital_media}/${accession}_${digital_media}_scan_test.txt"

#unmount disk
sudo umount /mnt/iso

check_mount

Conclusion

There’s a lot more I’d like to do with this script. In addition to what I’ve already mentioned, I’d love to enable it to run over a range of digital media numbers, since they often are sequential. It also doesn’t stop if the disk isn’t mounted, which is an issue. But I thought it served as a good example of how easy it is to take repetitive command line tasks and turn them into a script. Next time, I’ll write about the second phase of development, which will include combining this script with another one, virus scan reporting, bagging, and transfer to another server.
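As a rough sketch of how that range feature might work (assuming the digital media numbers are sequential, and zero-padded to ten digits as ours are), a loop like this could wrap the existing mount, scan, and unmount steps:

#hypothetical sketch: run the workflow over a range of digital media numbers
read -p "Please enter the first digital media number (digits only)" first
read -p "Please enter the last digital media number (digits only)" last
for n in $(seq "$first" "$last"); do
     digital_media=$(printf "DM%010d" "$n")
     echo "Processing ${accession}_${digital_media}..."
     #mount disk, check_mount, clamscan, and umount steps would go here
done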

Suggested References

An A-Z Index of the Bash command line for Linux

The books I used, both are good for basic command line work, but they only include a small section for actual scripting:

Barrett, Daniel J. Linux Pocket Guide. O’Reilly Media, 2004.

Shotts, William E., Jr. The Linux Command Line: A Complete Introduction. No Starch Press, 2012.

The book I wished I used:

Robbins, Arnold and Beebe, Nelson H. F. Classic Shell Scripting. O’Reilly Media, 2005.

Notes

  1. Logical file transfers often arrive in bags, which are then validated and virus checked.
  2. Linux Pocket Guide

Query a Google Spreadsheet like a Database with Google Visualization API Query Language

Libraries make much use of spreadsheets. Spreadsheets are easy to create, and most library staff are familiar with how to use them. But they can quickly get unwieldy as more and more data are entered. The more rows and columns a spreadsheet has, the more difficult it becomes to browse and quickly identify specific information. Creating a searchable web application with a database at the back-end is a good solution, since it lets users quickly perform a custom search and filter out unnecessary information. But due to the staff time and expertise it requires, creating a full-fledged searchable web database application is not always a feasible option at many libraries.

Creating a custom MS Access database or using a free service such as Zoho can be an alternative to creating a searchable web database application. But providing a read-only view of an MS Access database, while possible, can be tricky. MS Access is also software installed locally on each PC, and therefore not necessarily available to library staff when they are away from the work PCs on which it is installed. Zoho Creator offers a way to easily convert a spreadsheet into a database, but its free version is quite limited: a maximum of 3 users, 1,000 records, and 200 MB of storage.

Google Visualization API Query Language provides a quick and easy way to query a Google spreadsheet and return and display a selective set of data without actually converting the spreadsheet into a database. You can display the query result in the form of an HTML table, which can be served as a stand-alone webpage. All you have to do is construct a custom URL.

A free Google spreadsheet has limits on size and complexity. For example, one free Google spreadsheet can have no more than 400,000 total cells. But you can purchase more Google Drive storage, and you can also query multiple Google spreadsheets (or even your own custom databases) by using Google Visualization API Query Language and Google Chart Libraries together. (This will be the topic of my next post. You can also see examples of using Google Chart Libraries and Google Visualization API Query Language together in my presentation slides at the end of this post.)

In this post, I will explain the parameters of Google Visualization API Query Language and how to construct a custom URL that will query, return, and display a selective set of data in the form of an HTML page.

A. Display a Google Spreadsheet as an HTML page

The first step is to identify the URL of the Google spreadsheet of your choice.

The URL below opens up the third sheet (Sheet 3) of a specific Google spreadsheet. There are two parameters inside the URL you need to pay attention to: key and gid.

https://docs.google.com/spreadsheet/ccc?key=0AqAPbBT_k2VUdDc3aC1xS2o0c2ZmaVpOQWkyY0l1eVE&usp=drive_web#gid=2

This breaks down the parameters in a way that is easier to view:

  • https://docs.google.com/spreadsheet/ccc
    ?key=0AqAPbBT_k2VUdDc3aC1xS2o0c2ZmaVpOQWkyY0l1eVE
    &usp=drive_web

    #gid=2

The key is a unique identifier for each Google spreadsheet, so you will use it later to create a custom URL that queries and displays the data in this spreadsheet. The gid specifies which sheet in the spreadsheet you are opening up. The gid for the first sheet is 0; the gid for the third sheet is 2.


Let’s first see how Google Visualization API returns the spreadsheet data as a DataTable object. This is only for those who are curious about what goes on behind the scenes. You can see that for this view the URL is slightly different, but the values of the key and gid parameters stay the same.

https://spreadsheets.google.com/tq?&tq=&key=0AqAPbBT_k2VUdDc3aC1xS2o0c2ZmaVpOQWkyY0l1eVE&gid=2


In order to display the same result as an independent HTML page, all you need to do is take the key and gid parameter values of your own Google spreadsheet and construct a custom URL following the same pattern shown below.

  • https://spreadsheets.google.com
    /tq?tqx=out:html&tq=
    &key=0AqAPbBT_k2VUdDc3aC1xS2o0c2ZmaVpOQWkyY0l1eVE
    &gid=2

https://spreadsheets.google.com/tq?tqx=out:html&tq=&key=0AqAPbBT_k2VUdDc3aC1xS2o0c2ZmaVpOQWkyY0l1eVE&gid=2


By the way, if the URL you created doesn’t work, it is probably because you have not encoded it properly. Try this handy URL encoder/decoder page to encode it by hand, or use the JavaScript encodeURIComponent() function.
Also, if you want the URL to display the query result without requiring people to log into Google Drive first, make sure to set the spreadsheet’s permissions to public. On the other hand, if you need to restrict access to the spreadsheet to a limited number of users, remind your users to go to the Google Drive webpage and log in with their Google accounts before clicking your URLs. Only when users are logged into Google Drive will they be able to see the query result.
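If you prefer to build and fetch these URLs from the command line, curl’s --data-urlencode option can handle the percent-encoding for you. A minimal sketch, reusing the key and gid from the example above (tq is empty here, but the same call works once you add a query to it):

#fetch the HTML view of the sheet; --data-urlencode handles the encoding
curl -G "https://spreadsheets.google.com/tq" \
     --data-urlencode "tqx=out:html" \
     --data-urlencode "tq=" \
     --data-urlencode "key=0AqAPbBT_k2VUdDc3aC1xS2o0c2ZmaVpOQWkyY0l1eVE" \
     --data-urlencode "gid=2"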

B. How to Query a Google Spreadsheet

We have seen how to create a URL to show an entire sheet of a Google spreadsheet as an HTML page above. Now let’s do some querying, so that we can pick and choose what data the table is going to display instead of the whole sheet. That’s where the Query Language comes in handy.

Here is an example spreadsheet with over 50 columns and 500 rows.

  • https://docs.google.com/spreadsheet/ccc?
    key=0AqAPbBT_k2VUdDFYamtHdkFqVHZ4VXZXSVVraGxacEE
    &usp=drive_web
    #gid=0

https://docs.google.com/spreadsheet/ccc?key=0AqAPbBT_k2VUdDFYamtHdkFqVHZ4VXZXSVVraGxacEE&usp=drive_web#gid=0


What I want to do is to show only columns B, C, D, and F where C contains ‘Florida.’ How do I do this? Remember the URL we created to show the entire sheet above?

  • https://spreadsheets.google.com/tq?tqx=out:html&tq=&key=___&gid=___

There we had no value for the tq parameter. This is where we insert our query.

Google Visualization API Query Language is pretty much the same as SQL. So if you are familiar with SQL, forming a query is dead simple. If you aren’t, SQL is also easy to learn.

  • The query should be written like this:
    SELECT B, C, D, F WHERE C CONTAINS ‘Florida’
  • After encoding it properly, you get something like this:
    SELECT%20B%2C%20C%2C%20D%2C%20F%20WHERE%20C%20CONTAINS%20%27Florida%27
  • Add it to the tq parameter and don’t forget to also specify the key:
    https://spreadsheets.google.com/tq?tqx=out:html&tq=SELECT%20B%2C%20C%2C%20D%2C%20F%20WHERE%20C%20CONTAINS%20%27Florida%27
    &key=0AqAPbBT_k2VUdEtXYXdLdjM0TXY1YUVhMk9jeUQ0NkE

I am omitting the gid parameter here because there is only one sheet in this spreadsheet but you can add it if you would like. You can also omit it if the sheet you want is the first sheet. Ta-da!

[Screenshot: the query result showing only columns B, C, D, and F]

Compare this with the original spreadsheet view. I am sure you can appreciate how the small effort put into creating a URL pays off by making an unwieldy, large spreadsheet manageable to view.

You can also easily incorporate functions such as count() or sum() into your query to get an overview of the data you have in the spreadsheet.

  • select D, F, count(C) where (B contains ‘author name’) group by D, F

For example, this query above shows how many articles a specific author published per year in each journal. The screenshot of the result is below and you can see it for yourself here: https://spreadsheets.google.com/tq?tqx=out:html&tq=select+D,F,count(C)+where+%28B+contains+%27Agoulnik%27%29+group+by+D,F&key=0AqAPbBT_k2VUdEtXYXdLdjM0TXY1YUVhMk9jeUQ0NkE

[Screenshot: the query result]

Take this spreadsheet as another example.

[Screenshot: a sample library budget spreadsheet]

This simple query displays the library budget by year. For those who are unfamiliar with ‘pivot,’ a pivot table is a data summarization tool. The query asks the spreadsheet to calculate the total of all the values in the B column (the budget amount for each category) by the values found in the C column (years).
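In the query language, that calculation can be expressed something like this:

select sum(B) pivot C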

[Screenshot: the query result]

This is another example, querying the spreadsheet connected to my library’s Literature Search request form. The following query asks the spreadsheet to count the number of literature search requests by research topic (column I) that were received in 2011 (column G), grouped by the values in column C, i.e. College of Medicine Faculty or College of Medicine Staff.

  • select C, count(I) where (G contains ‘2011’) group by C

[Screenshot: the query result]

C. More Querying Options

There are many more things you can do with a custom query. Google has an extensive documentation that is easy to follow: https://developers.google.com/chart/interactive/docs/querylanguage#Language_Syntax

These are just a few examples.

  • ORDER BY __ DESC
    : Order the results in the descending order of the column of your choice. Without ‘DESC,’ the result will be listed in the ascending order.
  • LIMIT 5
    : Limit the number of results. Combined with ‘Order by’ you can quickly filter the results by the most recent or the oldest items.
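Combining the two, for example, a query like the one below would return the five rows with the largest values in column B (the column letters here are just for illustration):

select A, B order by B desc limit 5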

My presentation slides from the 2013 LITA Forum, below, include more detailed information about Google Visualization API Query Language, its parameters, and other options, as well as how to use Google Chart Libraries in combination with Google Visualization API Query Language for data visualization, which is the topic of my next post.

Happy querying Google Spreadsheet!

 


#libtechgender: A Post in Two Parts

Conversations about gender relations, bias, and appropriate behavior have bubbled up all over the technology sector recently. We have seen conferences adopt codes of conduct that strive to create welcoming atmospheres. We have also seen cases of bias and harassment, cases that may once have been tolerated or ignored, now being identified and condemned. These conversations, like gender itself, are not simple or binary but being able to listen respectfully and talk honestly about uncomfortable topics offers hope that positive change is possible.

On October 28th Sarah Houghton, the director of the San Rafael Public Library, moderated a panel on gender in library technology at the Internet Librarian conference. In today’s post I’d like to share my small contributions to the panel discussion that day and also to share how my understanding of the issues changed after the discussion there. It is my hope that more conversations—more talking and more listening—about gender issue in library technology will be sparked from this start.

Part I: Internet Librarian Panel on Gender

Our panel’s intent was to invite librarians into a public conversation about gender issues. In the Internet Librarian program our invitation read:

Join us for a lively panel and audience discussion about the challenges of gender differences in technology librarianship. The topics of fairness and bias with both genders have appeared in articles, blogs, etc and this panel of women and men who work in libraries and gender studies briefly share personal experiences, then engage the audience about experiences and how best to increase understanding between the genders specifically in the area of technology work in librarianship. 1
Panelists: Sarah Houghton, Ryan Claringbole, Emily Clasper, Kate Kosturski, Lisa Rabey, John Bultena, Tatum Lindsay, and Nicholas Schiller

My invitation to participate on the panel stemmed from blog posts I wrote about how online conversations about gender issues can go off the rails and become disasters. I used my allotted time to share some simple suggestions I developed observing these conversations. Coming from my personal (white cis straight male) perspective, I paid attention to things that I and my male colleagues do and say that result in unintended offense, silencing, and anger in our female colleagues. By reverse engineering these conversational disasters, I attempted to learn from unfortunate mistakes and build better conversations. Assuming honest good intentions, following these suggestions can help us avoid contention and build more empathy and trust.

  1. Listen generously. Context and perspective are vital to these discussions. If we’re actively cultivating diverse perspectives then we are inviting ideas that conflict with our assumptions. It’s more effective to assume these ideas come from unfamiliar but valid contexts than to assume they are automatically unreasonable. By deferring judgement until after new ideas have been assimilated and understood we can avoid silencing voices that we need to hear.
  2. Defensive responses can be more harmful than offensive responses. No one likes to feel called on the carpet, but the instinctive responses we can give when we feel blamed or accused can be worse than simply giving offense. Defensive denials can lead to others feeling silenced, which is much more damaging and divisive than simple disagreement. It can be the difference between communicating “you and I disagree on this matter” and communicating “you are wrong and don’t get a voice in this conversation.”
  3. It is okay to disagree or to be wrong. Conversations about gender are full of fear. People are afraid to speak up for fear of reprisal. People are afraid to say the wrong thing and be revealed as a secret misogynist. People are afraid. The good news is that conversations where all parties feel welcome, respected, and listened to can be healing. Because context and perspective matter so much in how we address issues, once we accept the contexts and perspectives of others, we are more likely to receive acceptance of our own perspectives and contexts. Given an environment of mutual respect and inclusion, we don’t need to be afraid of holding unpopular views. These are complex issues and once trust is established, complex positions are acceptable.

This is what I presented at the panel session and I still stand behind these suggestions. They can be useful tools for building better conversations between people with good intentions. Specifically, they can help men in our field avoid all-too-common barriers to productive conversation.

That day I listened and learned a lot from the audience and from my fellow panelists, and I shifted my priorities. I still think cultivating better conversations is an important goal. I still want to learn how to be a better listener and colleague. I think these are skills that don’t just happen, but need to be intentionally cultivated. That said, I came into the panel believing that the most important gender-related issue in library technology was finding ways for well-intentioned colleagues to communicate effectively about an uncomfortable problem. Listening to my colleagues tell their stories, I learned that there are more direct and pressing gender issues in libraries.

Part II: After the Panel

As I listened to my fellow panelists tell their stories and then as I listened to people in the audience share their experiences, no one else seemed particularly concerned about well-intentioned people having misunderstandings or about breakdowns in communication. Instead, they related a series of harrowing personal experiences where men (and women, but mostly men) were directly harassing, intentionally abusive, and strategically cruel in ways that are having a very large impact on the daily work, career paths, and quality of life of my female colleagues. I had assumed that since this kind of harassment clearly violates standard HR policies, the problem was adequately addressed by existing administrative policies. That assumption is incorrect.

It is easy to ignore what we don’t see; I don’t see harassment taking place in libraries, and I don’t often hear it discussed. That made it easy to underestimate the prevalence of harassment and the impact it has on many of my colleagues. Listening to librarians tell their own stories changed that.

Then, one evening after the conference, a friend of mine was harassed on the street and I had another assumption challenged. It happened quickly, but a stranger on the street harassed my friend while I watched in stunned passivity.2 I arrived at the conference feeling positive about my grasp of the issues and also feeling confident about my role as an ally. I left feeling shaken and doubting both my thoughts and my actions.

In response to the panel and its aftermath, I’ve composed three more points to reflect what I learned. These aren’t suggestions like the ones I brought to the panel; instead they are realizations or statements. I’m obviously not an expert on the topic, and I’m not speaking from a seat of authority. I’m relating stories and experiences told by others, and they tell them much better than I do. In the tradition of geeks and hackers, now that I have learned something new I’m sharing it with the community in hopes that my experience moves the project forward. It is my hope that better informed and more experienced voices will take this conversation farther than I am able to. These three realizations may be obvious to some; because they were not obvious to me, it seems useful to clearly articulate them.

  1. Intentional and targeted harassment of women is a significant problem in the library technology field. While subtle micro aggressions, problem conversations, and colleagues who deny that significant gender issues exist in libraries are problematic, these issues are overshadowed by direct and intentional harassing behavior targeting gender identity or sex. The clear message I heard at the panel was that workplace harassment is a very real and very current threat to women working in library technology fields.
  2. This harassment is not visible to those not targeted by it. It is easy to ignore what we do not see. Responses to the panel included many library technology women sharing their experiences and commenting that it was good to hear others’ stories. Even though the experience of workplace harassment was common, those who spoke of it reported feelings of isolation. While legislation and human resources policies clearly state harassment is unacceptable and unlawful, it still happens, and when it happens the target can be isolated by the experience. Those of us who participate in library conferences, journals, and online communities can help pierce this isolation by cultivating opportunities to talk about these issues openly and in public. By publicly talking about gender issues, we can thwart isolation and make the problems more visible to those who are not direct targets of harassment.
  3. This is a cultural problem, not only an individual problem. While no one point on the gender spectrum has a monopoly on either perpetrating or being the target of workplace harassment, the predominant narrative in our panel discussion was men harassing women. Legally speaking, these need to be treated as individual acts, but as a profession, we can address the cultural aspects of the issue. Something in our library technology culture is fostering an environment where women are systematically exposed to bad behavior from men.

In the field of Library Technology, we spend a lot of our time and brain power intentionally designing user experiences and assessing how users interact with our designs. Because harassment of some of our own is pervasive and cultural, I suggest we turn the same attention and intentionality to designing a workplace culture that is responsive to the needs of all of us who work here. I look forward to reading conference presentations, journal articles, and online discussions where these problems are publicly identified and directly addressed rather than occurring in isolation or being ignored.

  1. infotoday.com/il2013/Monday.asp#TrackD
  2. I don’t advocate a macho confrontational response or take responsibility for the actions of others, but an ally has their friend’s back and that night I did not speak up.

A Brief Look at Cryptography for Librarians

You may not think much about cryptography on a daily basis, but it underpins your daily work and personal existence. In this post I want to talk about a few realms of cryptography that affect the work of academic librarians, and about some interesting facets you may never have considered. I won’t discuss the math or computer science basis of cryptography, but will look at it from a historical and philosophical point of view. If you are interested in the math and computer science, I have a few resources listed at the end in addition to a bibliography.

Note that while I will discuss some illegal activities in this post, neither I nor anyone connected with the ACRL TechConnect blog is suggesting that you actually do anything illegal. I think you’ll find the intellectual part of it stimulating enough.

What is cryptography?

Keeping information secret is as simple as hiding it from view in, say, an envelope, and trusting that only the person to whom it is addressed will read that information and then not tell anyone else. But we all know that this doesn’t actually work. A better system would only allow a person with secret credentials to open the envelope, and then for the information inside to be in a code that only she could know.

The idea of codes to keep important information secret goes back thousands of years, but for the purposes of computer science, most of the major advances have been made since the 1970s. In the 1960s, with the advent of computing for business and military uses, it became necessary to come up with ways to encrypt data. In 1976, the concept of public-key cryptography was developed, but it wasn’t realized practically until 1978 with the paper by Rivest, Shamir, and Adleman – if you’ve ever wondered what RSA stood for, there’s the answer. There were some advancements to this system, which resulted in the Digital Signature Algorithm as the standard used by the federal government.1 Public-key systems work basically by creating a private and a public key – the private one is known only to each individual user, and the public key is shared. Data encrypted with the public key can only be decrypted with the matching private key. See the resources below for more on the math that makes up these algorithms.
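To make that concrete, here is a minimal command line sketch using GnuPG (the email address and file name are hypothetical, and this assumes the recipient’s public key is already in your keyring):

#encrypt with the recipient's public key; only the matching private key can decrypt
gpg --encrypt --recipient alice@example.com message.txt

#the recipient decrypts with their private key
gpg --decrypt message.txt.gpg > message.txt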

Another important piece of cryptography is the cryptographic hash function, first developed in the late 1980s. Hash functions condense a block of data into a fixed-size digest – for instance, passwords stored in databases should be hashed with one of these functions rather than stored in plain text, which ensures that even if someone unauthorized gets access to the data, they cannot recover the original passwords. Hash functions can also be used to verify the integrity of a piece of digital content, which is probably how most librarians think about them, particularly if you work with a digital repository of any kind.
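As a small example of that second use (the file name is hypothetical), a repository can record a file’s digest at ingest and re-check it later to confirm the file has not changed:

#record a SHA-256 digest for the file at ingest
sha256sum master_copy.tiff > master_copy.tiff.sha256

#later, verify the file still matches its recorded digest
sha256sum -c master_copy.tiff.sha256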

Why do you care?

You probably send emails, log into servers, and otherwise transmit all kinds of confidential information over a network (whether a local network or the internet). Encrypted access to these services and the data being transmitted is the only way that anybody can trust that any of the information is secret. Anyone who has had a credit card number stolen and had to deal with fraudulent purchases knows first-hand how upsetting it can be when these systems fail. Without cryptography, the modern economy could not work.

Of course, we all know a recent example of cryptography not working as intended. It’s no secret (see above, where keeping something a secret requires that no one who knows the information tells anyone else) by now that the National Security Agency (NSA) has sophisticated ways of breaking codes or getting around cryptography through other methods.2 Continuing with our envelope analogy from above, the NSA coerced companies into allowing it to view the content of messages before the envelopes were sealed. If the messages were encoded, it got the keys to decode the data, or broke the code using its vast resources. While these practices were supposedly limited to potential threats, there’s no denying that this makes it more difficult to trust any online communications.

Librarians certainly have a professional obligation to keep data about their patrons confidential, and so this is one area in which cryptography is on our side. But let’s now consider an example in which it is not so much.

Breaking DRM: e-books and DVDs

Librarians are exquisitely aware of the digital rights management realm of cryptography (for more on this from the ALA, see The ALA Copyright Office page on digital rights). These are algorithms that encode media in such a way that you are unable to copy or modify the material. Of course, like any code, once you break it, you can extract the material and do whatever you like with it. As I covered in a recent post, if you purchase a book from Amazon or Apple, you aren’t purchasing the content itself, but a license to use it in certain prescribed ways, so legally you have no recourse if you break the DRM to get at the content. That said, you might have an argument under fair use, or some other legitimate reason to break the DRM. It’s quite simple to do once you have the tools to do so. For e-books in proprietary formats, you can download a plug-in for the Calibre program and follow the step-by-step instructions on this site. This allows you to convert proprietary formats into more open formats.

As above, you shouldn’t use software like that if you don’t have the rights to convert formats, and you certainly shouldn’t use it to pirate media. But just because it can be used for illegal purposes, does that make the software itself illegal? Breaking DVD DRM offers a fascinating example of this (for a lengthy list of CD and DVD copy protection schemes, see here, and for a list of DRM-breaking software, see here). The case of CSS (Content Scramble System) descramblers illustrates some of the strange philosophical territory into which this can lead. The original descrambling code was developed in 1999 and distributed widely, which was initially ruled to be illegal. This was protested in a variety of ways; the Gallery of CSS Descramblers has a lot more on this.3 One of my favorite protest CSS descramblers is the “illegal” prime number, a prime number that contains the entire code for breaking the CSS DRM. The first illegal prime number was generated in 2001 by Phil Carmody (see his description here).4 This number is, of course, only illegal inasmuch as the information it represents is illegal – in this case it was a secret code that helped break another secret code.

In 2004, after years of court hearings, the California Court of Appeal overturned one of the major injunctions against posting the code, based on the fact that source code is protected speech under the First Amendment, and that the CSS was no longer a trade secret.5 So you’re no longer likely to get in trouble for posting this code – but again, using it should only be done for reasons protected under fair use. One of the major reasons you might legitimately need to break the DRM on a DVD is to play DVDs on computers running the Linux operating system, which still has no free legal software that will play DVDs (there is legal software with the appropriate license for $25, however). Given that DVDs are physical media and subject to the first sale doctrine, it is unfair that they are manufactured with limitations on how they may be played, and therefore this is a code that seems reasonable for the end consumer to break. That said, as more and more media is streamed or otherwise licensed, that argument no longer applies, and the situation becomes analogous to e-book DRM.

Learning More

The Gambling With Secrets video series explains the basic concepts of cryptography, including the mathematical proofs using colors and other visual concepts that are easy to grasp. This comes highly recommended from all the ACRL TechConnect writers.

Since it’s a fairly basic part of computer science, you will not be surprised to learn that there are a few large open courses available about cryptography. This Coursera class from Stanford is currently running, and this Udacity class from the University of Virginia is a self-paced course. These don’t require a lot of computer science or math skills to get started, though of course you will need a great deal of math to really get anywhere with cryptography.

A surprising but fun way to learn a bit about cryptography is from the NSA’s Kids website–I discovered this years ago when I was looking for content for my X-Files fan website, and it is worth a look if for nothing else than to see how the NSA markets itself to children. Here you can play games to learn basics about codes and codebreaking.

  1. Menezes, A., P. van Oorschot, and S. Vanstone. Handbook of Applied Cryptography. CRC Press, 1996. http://cacr.uwaterloo.ca/hac/. 1-2.
  2. See the New York Times and The Guardian for complete details.
  3. Touretzky, D. S. (2000) Gallery of CSS Descramblers. Available: http://www.cs.cmu.edu/~dst/DeCSS/Gallery, (September 18, 2013).
  4. For more, see Caldwell, Chris. “The Prime Glossary: Illegal Prime.” Accessed September 17, 2013. http://primes.utm.edu/glossary/xpage/Illegal.html.
  5. “DVDCCA v Bunner and DVDCCA v Pavlovich.” Electronic Frontier Foundation. Accessed September 23, 2013. https://www.eff.org/cases/dvdcca-v-bunner-and-dvdcca-v-pavlovich.

Library Quest: Developing a Mobile Game App for A Library

This is the story of Library Quest (iPhone, Android), the App That (Almost) Wasn’t. It’s a (somewhat) cautionary tale of one library’s effort to leverage gamification and mobile devices to create a new and different way of orienting students to library services and collections. Many libraries are interested in the possibilities offered by both games and mobile devices, and they should be. But developing for mobile platforms is new and largely uncharted territory for libraries, and while there have been some encouraging developments in creating games for library instruction, other avenues of game creation are mostly unexplored. This is what we learned developing our first mobile app and our first large-scale game…at the same time!


The login screen for the completed game. We use integrated Facebook login for a host of technical reasons.

Development of the Concept: Questing for Knowledge

The saga of Library Quest began in February of 2012, when I came on board at Grand Valley State University Libraries as Digital Initiatives Librarian.  I had been reading some books on gamification and was interested in finding a problem that the concept might solve.  I found two.  First, we were about to open a new 65 million dollar library building, and we needed ways to take advantage of the upsurge of interest we knew this would create.  How could we get people curious about the building to learn more about our services, and to strengthen that into a connection with us?  Second, GVSU libraries, like many other libraries, was struggling with service awareness issues.  Comments by our users in the service dimension of our latest implementation of Libqual+ indicated that many patrons missed out on using services like inter-library loan because they were unaware that they existed.  Students often are not interested in engaging with the library until they need something specific from us, and when that need is filled, their interest declines sharply.  How could we orient students to library services and create more awareness of what we could do for them?

We designed a very simple game to address both problems. It would be a quest- or task-based game, in which students actively engaged with our services and spaces, earning points and rewards as they did so. The game app would offer tasks to students and verify their progress through multistep tasks by asking users to input alphanumeric codes or to scan QR codes (which we ended up putting on decals that could be stuck to any flat surface). Because this was an active game, it seemed natural to target mobile devices, so that people could play as they explored. The mobile marketplace is more or less evenly split between iOS and Android devices, so we knew we wanted the game to be available on both platforms. This became the core concept for Library Quest. Library administration gave the idea their blessing and approval to use our technology development budget, around $12,000, to develop the game. Back up and read that sentence over if you need to – yes, that entire budget was for one mobile app. The expense of building apps is the first thing to wrap your mind around if you want to create one. While people often think of apps as somehow being smaller and simpler than desktop programs, the reality is very different.

The main game screen. We found a tabbed view worked best, with quests that are available in one tab, quests that have been accepted but not completed in another, and finished quests in the third.

We contracted with Yeti CGI, an outside game development firm, to do the coding. This was essential: app development is complicated, and we didn't have the necessary skills or experience in-house. If we hadn't used an outside developer, the game app would never have gotten off the ground. We had never worked with a game development company before, and Yeti had never worked with a library, although they had ties to higher education and were enthusiastic about the project. Working with an outside developer always carries certain risks and advantages, and communication is always an issue.

One thing we could have done more of at this stage was spend time working on the game concept and doing paper prototyping of that concept. In her book Game Design Workshop, author Tracy Fullerton stresses two key components in designing a good game: defining the experience you want the player to have, and doing paper prototyping. Defining the game experience from the player's perspective forces the game designer to ask questions about how the game will play that might not otherwise occur to them. Will this be a group or a solo experience? Where will the fun come from? How will the player negotiate the rules structure of the game? What choices will they have at what points? As author Jane McGonigal notes, educational games often fail because they do not put the fun first, which is another way of saying that they haven't fully thought through the player's experience. Everything in the game (rules, rewards, format, etc.) should be shaped by the experience the designer wants to give the player. Early concepts can and should be tested with paper prototyping. It's a lot easier (and a lot less expensive) to change the rules structure of a game made with paper, scissors, and glue than one made with code and developers. In retrospect, we could have spent more time talking about experience and doing paper prototypes before we had Yeti start writing code. While our game is pretty solid, we may have missed opportunities to be more innovative or provide a stronger gameplay experience.

Concept to conception: Wireframing and Usability Testing

The first few months of development were spent creating, approving, and testing paper wireframes of the interface and art concepts.  While we perhaps should have done more concept prototyping, we did do plenty of usability testing of the game interface as it developed, starting with the paper prototypes and continuing into the initial beta version of the game.  That is certainly something I would recommend that anyone else do as well.  Like a website or anything else that people are expected to use, a mobile app interface needs to be intuitive and conform to user expectations about how it should operate, and just as in website design, the only way to create an interface that does so is to engage in cycles of iterative testing with actual users.  For games, this is particularly important because they are supposed to be fun, and nothing is less fun than struggling with poor interface design.

A side note related to usability: one of the things that surfaced in prototype testing of the app was that giving players tasks involving library resources, and watching them try to accomplish those tasks, turns out to be an excellent way of testing space and service design as well. There were times when students were struggling not with the interface, but with the library! Insufficient signage, unclear space layout, and assumed knowledge of (or access to) information the students had no way of knowing all became apparent in watching students try to do tasks that should have been simple. It serves as a reminder that usability concepts apply to the physical world as much as they do to the web, and that we can and should test services in the real world the same way we test them in virtual spaces.

A quest in progress. We can insert images and links into quest screens, which allows us to use webpages and images as clues.

Development:  Where the Rubber Meets the Phone

Involving an outside developer made the game possible, but it also meant that we had to temper our expectations about the scale of app development. This became much more apparent once we'd gotten past paper prototyping and began testing beta versions of the game. There were several ideas that we developed early on, such as notifications of new quests and an elaborate title system, that had to be put aside as the game evolved, both because of cost and because developing other features more central to gameplay turned out to be more difficult than anticipated. For example, one of the core concepts of the game was that students would be able to scan QR codes to verify that they had visited specific locations. Because mobile phone users do not typically have QR code reader software installed, Yeti built QR code reader functionality into the game app. This made scanning the code a more seamless part of gameplay, but getting the scanner software to work well on both the Android and iOS versions proved a major challenge (and one that's still vexing us somewhat at launch). Tweaks to improve stability and performance on iOS threw off the Android version, and vice versa. Despite the existence of programs like PhoneGap and Adobe AIR, which will supposedly produce versions of the software that run on both platforms, there can still be a significant amount of work involved in tuning the different versions to get them to work well.

Developing apps that work on the Android platform is particularly difficult and expensive. While Apple has been accused of having a fetish for control, their proprietary approach to their mobile operating system produces a development environment that is, compared to Android, easy to navigate. Android, by contrast, is usually heavily modified by specific carriers and manufacturers to run on their hardware, which means that if you want to ensure that your app runs well on an Android device, the app must be tested and debugged on that specific combination of Android version and hardware. Multiply the dozen major versions of Android still commonly used by the hundreds of devices that run it, and you begin to have an idea of the scope of the problem facing a developer. While Android only accounts for 50% of our potential player base, it easily took up 80% of the time we spent with Yeti debugging, and the result is an app that we are sure works on only a small selection of the Android devices out there. By contrast, it works perfectly well on all but the very oldest versions of iOS.

Publishing a Mobile App: (Almost) Failure to Launch

When work began on Library Quest, our campus had no formal approval process for mobile apps, and the campus store accounts were controlled by our student mobile app development lab. In the year and a half we spent building the game, control of the campus store accounts was moved to our campus IT department, and formal guidelines and a process for publishing mobile apps started to materialize. All of this made perfect sense: more and more campus entities were starting to develop mobile apps, and campus was rightly concerned about branding and quality issues, as well as ensuring that any published apps furthered the university's teaching and research mission. However, this meant we were trying to navigate an approval process as it materialized around us very late in development, with requests coming in for changes to the game's appearance to bring it into line with new branding standards when the game was almost complete.

It was here the game almost foundered as it was being launched. During some of the discussions, it surfaced that one of the commercial apps being used by the university for campus orientation bore some superficial resemblance to Library Quest in terms of functionality, and the concern was raised that our app might be viewed as a copy.  University counsel got involved.  For a while, it seemed the app might be scrapped entirely, before it ever got out to the students!  If there had been a clear approval process when we began the app, we could have dealt with this at the outset, when the game was still in the conceptual phase.  We could have either modified the concept, or addressed the concern before any development was done.  Fortunately, it was decided that the risk was minimal and we were allowed to proceed.

A quest completion screen for one of our test quests. These screens stick around when the quest is done, forming a kind of personalized FAQ about library services and spaces.

Post-Launch: Game On!

As I write this, it’s over a year since Library Quest was conceived and it has just been released “into the wild” on the Apple and Google Play stores.  We’ve yet to begin the major advertising push for the game, but it already has over 50 registered users.  While we’ve learned a great deal, some of the most important questions about this project are still up in the air.  Can we orient students using a game?  Will they learn anything?  How will they react to an attempt to engage with them on mobile devices?  There are not really a lot of established ways to measure success for this kind of project, since very few libraries have done anything remotely like it.  We projected early on in development that we wanted to see at least 300 registered users, and that we wanted at least 50 of them to earn the maximum number of points the game offered.  Other metrics for success are “squishier,” and involve doing surveys and focus groups once the game wraps to see what reactions students had to the game.  If we aren’t satisfied with performance at the end of the year, either because we didn’t have enough users or because the response was not positive, then we will look for ways to repurpose the app, perhaps as part of classroom teaching in our information literacy program, or as part of more focused and smaller scale campus orientation activities.

Even if it’s wildly successful, the game will eventually need to wind down, at least temporarily.  While the effort-reward cycle that games create can stimulate engagement, keeping that cycle going requires effort and resources.  In the case of Library Quest, this would include the money we’ve spent on our prizes and the effort and time we spend developing quests and promoting the game.  If Library Quest endures, we see it having a cyclical life that’s dependent on the academic year.  We would start it anew each fall, promoting it to incoming freshmen, and then wrap it up near the end of our winter semester, using the summers to assess and re-engineer quests and tweak the app.

Lessons Learned:  How to Avoid Being a Cautionary Tale
  1. Check to see if your campus has an approval process and a set of guidelines for publishing mobile apps. If it doesn’t, do not proceed until they exist. Lack of such a process until very late in development almost killed our game. Volunteer to help draft these guidelines and help create the process, if you need to.  There should be some identified campus app experts for you to talk to before you begin work, so you can ask about apps already in use and about any licensing agreements campus may have. There should be a mechanism to get your concept approved at the outset, as well as the finished product.
  2. Do not underestimate the power of paper.  Define your game’s concept early, and test it intensively with paper prototypes and actual users.  Think about the experience you want the players to have, as well as what you want to teach them.  That’s a long way of saying “think about how to make it fun.”  Do all of this before you touch a line of code.
  3. Keep testing throughout development.  Test your wireframes, test your beta version, test, test, test with actual players.  And pay attention to anything your testing might be telling you about things outside the game, especially if the game interfaces with the physical world at all.
  4. Be aware that mobile app development is hard, complex, and expensive.  Apps seem smaller because they’re on small devices, but in terms of complexity, they are anything but.  Developing cross-platform will be difficult (but probably necessary), and supporting android will be an ongoing challenge.  Wherever possible, keep it simple.  Define your core functionality (what does the app *have* to do to accomplish its mission) and classify everything else you’d like it to do as potentially droppable features.
  5. Consider your game’s life-cycle at the outset.  How long do you need it to run to do what you want it to do?  How much effort and money will you need to spend to keep it going for that long?  When will it wind down?
References

Fullerton, Tracy. Game Design Workshop: A Playcentric Approach to Creating Innovative Games. 2nd ed. Amsterdam: Morgan Kaufmann, 2008.

McGonigal, Jane. Reality Is Broken: Why Games Make Us Better and How They Can Change the World. New York: Penguin Press, 2011.

About our Guest Author:
Kyle Felker is the Digital Initiatives Librarian at Grand Valley State University Libraries, where he has worked since February of 2012. He is also a longtime gamer. He can be reached at felkerk@gvsu.edu, or on Twitter @gwydion9.

 


Advice on Being a Solo Library Technologist

I am an Emerging Technologies Librarian at a small library in the middle of a cornfield. There are three librarians on staff. The vast majority of our books fit on one floor of open stacks. Being so small can pose challenges to a technologist. When I’m banging my head trying to figure out what the heck “this” refers to in a particular JavaScript function, to whom do I turn? That’s but an example of a wide-ranging set of problems:

  • Lack of colleagues with similar skill sets. This has wide-ranging ill effects, from giving me no one to ask questions to or bounce ideas off of, to making it more difficult to sell my ideas.
  • Broad responsibilities that limit time spent on technology
  • Difficulty creating endurable projects that can be easily maintained
  • Difficulty determining which projects are appropriate to our scale

Though listservs and online sources alleviate some of these concerns, there's a certain knack to being a library technologist at a small institution.[1] While I still have a lot to learn, I want to share some strategies that have helped me thus far.

Know Thy Allies

At my current position, it took me a long time to figure out how the college was structured. Who is responsible for managing the library's public computers? Who develops the website? If I want some assessment data, where do I go? Knowing the responsibilities of your coworkers is vital, and effective collaboration is a necessary element of being a technologist. I've been very fortunate to work with coworkers who are immensely helpful.

IT Support can help with both your personal workstation and the library's setup. Remember that IT's priorities are necessarily in tension with yours: they want to keep everything up and running, while you want to experiment and kick the tires. When IT denies a request or takes ages to fix something that seems trivial to you, remember that they're just as overburdened as you are. Their assistance in installing and troubleshooting software is invaluable. This is a two-way street: you often have valuable insight into how users behave and what setups are most beneficial. Try to give and take, asking for favors at the same time that you volunteer your services.

Institutional Research probably goes by a dozen different names at any given dozen institutions. These names may include "Assessment Office," "Institutional Computing," or even the fearsome "Institutional Review Board" of research universities. These are your data collection and management people and—whether you know it or not—they have some great stuff for you. It took me far too long to browse the IR folder on our shared drive, which contains insightful survey data from the CCSSE and in-house reports. There's a post-graduate survey which essentially says "the library here is awesome," which is good to have when arguing for funding. But they also help the library work with the assessment data that our college gathers; we hope to identify struggling courses and offer our assistance.

The web designer should be an obvious contact point. Most technology is administered through the web these days—shocking, I know. The webmaster will not only be able to get you access to institutional servers, but they may also have learned valuable lessons in their own position. They, too, struggle to complete a wide range of tasks. They have to negotiate with many stakeholders who all want a slice of the vaunted homepage, often the subject of territorial battles. They may have a folder of good PR images or a style guide sitting around somewhere; at the very least, they have some O'Reilly books you want to borrow.

The Learning Management System administrator is similar to the webmaster. They probably have some coding skills and carry an immense, important burden. At my college, we have a slew of educational technologists who work in the "Faculty Development Center" and preside over the LMS. They're not only technologically savvy, often introducing me to new tools or techniques, but they also know how faculty structure their courses and have a handle on pedagogical theory. Their input can not only generate new ideas but also help you ground your initiatives on a solid theoretical basis.

Finally, my list of allies is obviously biased towards academic libraries, but public librarians have similar resources available; they just go by different names. Your local government has many of these same positions: data management, web developer, technology guru. Find out who they are and reach out to them. Anyone can look for local hacker/makerspaces or meetups, which can be a great way not only to develop your skills but to meet people who may have brilliant ideas and insight.

Build Sustainably

Building projects that will last is my greatest struggle. It's not so hard to produce an intricate, beautiful project if I pour months of work into it, but what happens the month after it's "complete"? A shortage of ideas has never been my problem; it's finding ones that are doable. Too often, I'll get halfway into a project and realize there's simply no way I can handle the upkeep on top of my usual responsibilities, which stubbornly do not diminish. I have to staff a reference desk, teach information literacy, and make purchases for our collection. Those are important responsibilities, and they often provide a platform for experimentation, but they're also stable obligations that cannot be shirked.

One of the best ways to determine if a project is feasible is to look around at what other libraries are doing. Is there an established project—for instance, a piece of open source software with a broad community base—which you can reuse? Or are other libraries devoting teams of librarians to similar tasks? If you're seeing larger institutions struggle to perfect something, then maybe it's best to wait until the technology is more mature. On the other hand, dipping your toe in the water can quickly give you a sense of how much time you'll need to invest. Creating a prototype or bringing coworkers on board at early stages lets you see how much traction you have. If others are resistant, or if your initial design is shown to have gaping flaws, perhaps another project is more worthy of your time. It's an art, but often saying no, dropping a difficult initiative, or recognizing that an experiment has failed is the right thing to do.

Documentation, Documentation, Documentation

One of the first items I accomplished on arrival at my current position was setting up a staff-side wiki on PBworks. While I’m still working on getting other staff members to contribute to it (approximately 90% of the edits are mine), it’s been an invaluable information-sharing resource. Part-time staff members in particular have noted how it’s nice to have one consistent place to look for updates and insider information.

How does this relate to technology? In the last couple of years, my institution has added or redesigned dozens of major services. I was going to write a ludicrously long list but…just trust me, we've changed a lot of stuff. A new technology or service cannot succeed without buy-in, and you don't get buy-in if no one knows how to use it. You need documentation: well-written, illustrative documentation. I try to keep things short and sweet, providing screencasts and annotated images to highlight important nuances. Beyond helping others, it's been invaluable to me as well. Remember when I said I wasn't so great at building sustainably? Well, I'll admit that there are some workflows or code snippets that are Greek to me each time I revisit them. Without my own instructions or blocks of comments, I would have to reverse engineer the whole process before I could complete it again.

Furthermore, not all my fellow staff are on par with my technical skills. I'm comfortable logging into servers, running Drush commands, and analyzing the statistics I collect. That's not an indictment of my coworkers; they shouldn't need to do any of this stuff. But some of my projects rely on arcane data schemas or esoteric commands. If I were to win the lottery and promptly retire, sophisticated projects lacking documentation would grind to a halt. Instead, I try to write instructions such that anyone could log in to Drupal and apply module updates, for instance, even if they were previously unfamiliar with the CMS. I feel a lot better knowing that the bus-factor risk is a little lower and that I can perhaps even take a vacation without checking email, some day.
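To make that concrete, here is a minimal sketch of the kind of step-by-step instructions I mean, assuming a Drupal 7 site managed with Drush (the site path and backup filename are hypothetical), with a comment on each line so a colleague unfamiliar with the CMS can follow along:

    cd /var/www/drupal             # go to our Drupal installation (path will vary)
    drush sql-dump > backup.sql    # save a database backup first, just in case
    drush vset maintenance_mode 1  # take the site offline while updating
    drush pm-update                # download and apply pending module updates
    drush cc all                   # clear all caches
    drush vset maintenance_mode 0  # put the site back online

Even a handful of commented lines like these can turn an esoteric chore into something anyone on staff can run.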

Choose Wisely

The honest truth is that smaller institutions cannot afford to invest in every new and shiny object that crosses their path. I see numerous awesome innovations at other libraries which simply are not wise investments for a college of our size. We don't have the scale, skills, or budget for much of the technology out there. Even open source solutions are a challenge because they require skill to configure and maintain. Everything I wrote about sustainability and allies is an attempt to mitigate this lack of scale, but the truth is that some things are just not right for us. It isn't helpful to build projects that only you can continue, or to develop ones which require so much attention that other fundamental responsibilities (doubtless less sexy, but no less important) fall through the cracks.

I record my personal activities in Remember the Milk, tagging tasks according to topic. What do you think was the tag I used most last year? Makerspace? Linked data? APIs? Node.js? Nope, it was infolit. That is hardly an “emerging” field but it’s a vital aspect of my position nonetheless.

I find that the best way to select amongst initiatives is to work backwards: what is crucial to your library? What are the major challenges, obvious issues that you’re facing? While I would not abandon pet projects entirely, because sometimes they can have surprisingly wide-ranging effects, it helps to ground your priorities properly.[2] Working on a major issue virtually guarantees that your work will attract more support from your institution. You may find more allies willing to help, or at least coworkers who are sympathetic when you plead with them to cover a reference shift or swap an instruction session because you’re overwhelmed. The big issues themselves are easy to find: user experience, ebooks, discovery, digital preservation, {{insert library school course title here}}. At my college, developmental education and information literacy are huge. It’s not hard to align my priorities with the institution’s.

Enjoy Yourself

No doubt working on your own or with relatively little support is challenging and stressful. It can be disappointing to pass up new technologies because they’re too tough to implement, or when a project fails due to one of the bullet points listed above. But being a technologist should always be fun and bring feelings of accomplishment. Try to inject a little levity and experimentation into the places where it’s least expected; who knows, maybe you’ll strike a chord.

There are also at least a couple advantages to being at a smaller institution. For one, you often have greater freedom and less bureaucracy. What a single individual does on your campus may be done by a committee (or even—the horror—multiple committees) elsewhere. As such, building consensus or acquiring approval can be a much simplified process. A few informal conversations can substitute for mountains of policies, forms, meetings, and regulations.

Secondly, workers at smaller places are more likely to be jack-of-all-trades librarians. While I'm a technologist, I wear plenty of more traditional librarian hats as well. On the one hand, that certainly means I have less time to devote to each responsibility than a specialist would; on the other, it gives me a uniquely holistic view of the library's operations. I not only understand how the pieces fit together, but am better able to identify high-level problems affecting multiple areas of service.

I'm still working through a lot of these issues, on my own. How do you survive as a library technologist? Is it just as tough at a large institution? I'm all eyes.

Footnotes

[1]^ Here are a few of my favorite sources for being a technology librarian:

  • Listservs, particularly Code4Lib and Drupal4Lib. Drupal4Lib is a great place to be if you're using Drupal and run into issues; there are a lot of "why won't this work" and "how do you do X at your library" threads, and several helpful experts hang around the list.
  • For professional journals, once again Code4Lib is very helpful. ITAL is also open access, and good tech tips periodically appear in C&RL News or C&RL. Part of being at a small institution is being limited to open access journals; these are the ones I read most often.
  • Google. Google is great. For answering factual questions or figuring out what the most common tool is for a particular task, a quick search can almost always turn up the answer. I’d be remiss if I didn’t mention that Google usually leads me to one of a couple excellent sources, like Stack Overflow or the Mozilla Developer Network.
  • Twitter. Twitter is great, too. I follow many innovative librarians but also leading figures in other fields.
  • GitHub. GitHub can help you find reusable code, but there’s also a librarian community and you can watch as they “star” projects and produce new repositories. I find GitHub useful as a set of instructive code; if I’m wondering how to accomplish a task, I can visit a repo that does something similar and learn from how better developers do it.

[2]^ We’ve covered managing side projects and work priorities previously in “From Cool to Useful: Incorporating hobby projects into library work.”


Taking a trek with SCVNGR: Developing asynchronous, mobile orientations and instruction for campus

Embedding the library in campus-wide orientations, as well as developing standalone library orientations, is often part of outreach and first-year experience work. Reaching all students can be a challenge, so finding opportunities to better engage the campus helps to promote the library and increase student awareness. Using a mobile app for orientations can provide many benefits, such as increasing interactivity and offering an asynchronous option for students to learn about the library on their own time. We have been trying out SCVNGR at the University of Arizona (UA) Libraries and are finding it a more fun and engaging way to deliver orientations and instruction to students.

Why use game design for library orientations and instruction?

Game-based learning can be a good match for orientations, just as it can be for instruction (I have explored this previously with ACRL TechConnect, looking at badges). Rather than just presenting a large amount of information to students or having them fill out a paper-based scavenger hunt activity, using something like SCVNGR can get students interacting more with the library in a way that offers more engagement in real time and with feedback. However, simply adding a layer of points and badges or other game mechanics to a non-game situation doesn't automatically make it fun and engaging for students. In fact, doing this ineffectively can cause more harm than good (Nicholson, 2012). Finding a way to use game design to motivate participants beyond simply acquiring points tends to be the common goal in using game design in orientations and instruction. Thinking of the WIIFM (What's In It For Me) principle from a student's perspective can help, and in the game design we used at the University of Arizona with SCVNGR for a class orientation, we created activities based on common questions and concerns of students.

Why SCVNGR?
SCVNGR home screen

SCVNGR is a mobile app game for iPhone and Android where players can complete challenges in specific locations. Rather than getting clues and hints like in a traditional scavenger hunt, this game is more focused on activities within a location instead of finding the location. Although this takes some of the mystery away, it works very well for simply informing people about locations that are new to them and having them interact with the space.

Students need to physically be in the location for the app to work: they use their location to search for "challenges" (single activities to complete) or "treks" (a series of single activities that make up the full experience for a location), and then complete the challenges or treks to earn points, badges, and recognition.

Some libraries have made their own mobile scavenger hunt activities without the aid of a paid app. For example, North Carolina State University uses the NCSU Libraries' Mobile Scavenger Hunt, which is a combination of students recording responses in Evernote, real-time interaction, and tracking by librarians. One of the reasons we went with SCVNGR, however, is that this sort of mobile orientation requires a good amount of librarian time and is synchronous, whereas SCVNGR does not require as much face-to-face librarian time and allows for asynchronous student participation. Although we do use more synchronous instruction for some of our classes, we also wanted the option of asynchronous activities, particularly for the large-scale orientations where many different groups come in at many different times. Although SCVNGR is not free for us, the app is free to students. SCVNGR offers 24/7 support, and other academic institutions share insight and ideas in a community for universities.

Other academic libraries have used SCVNGR for orientations and even library instruction.

How did the UA Libraries use SCVNGR?

Because a lot of instruction has moved online and there are so many students to reach, we are working on SCVNGR treks for both instruction and basic orientations at the University of Arizona (UA). We are in the process of setting up treks for large-scale campus orientations (New Student Orientation, UA Up Close for both parents and students, etc.) that take place during the summer, and we have tested SCVNGR out on a smaller scale as a pilot with individual classes. There tends to be greater success and engagement if the trek is tied to something, such as a class assignment or a required portion of an orientation session that must be completed. One concern for an app-based activity is that not all students will have smartphones. This was alleviated by putting students into groups ahead of time, ensuring that at least one person in each group had a device compatible with SCVNGR. However, we do lend technology at the UA Libraries, so if a group was without a smartphone or tablet, they would be able to check one out from the library.

Trek page for AIS197b at UA Libraries

We first piloted a trek with an American Indian Studies student success course (AIS197b). This course for freshmen introduces students to services on campus that will be useful to them while they are at the UA. Last year, we presented a quick information session on library services, and then had the students complete a pencil-and-paper scavenger hunt throughout the library for a class grade (participation points). Although they seemed glad to be able to get out and move around, it didn't seem particularly fun and engaging. On top of that, every time the students got stuck or had a question, they had to come back to the main floor to find librarians and get help. In contrast, when students get an answer wrong in SCVNGR, feedback is programmed in to guide them to the correct information. And because they don't need clues to make it to the next step (they just go back and select the next challenge in the trek), they are able to continue without one mistake preventing them from moving on to the next activity. This semester, we first presented a brief instruction session (approximately 15-20 minutes) and then let students get started on SCVNGR.

You can see in the screenshot below how question design works: you can select the location, set how many points the activity is worth, choose the type of activity (taking a photo, answering with text, or scanning a QR code), and provide feedback. If a student answers a question incorrectly, as I mention above, they will receive feedback to help them figure out the correct answer. I really like that when students get answers right, they know instantly. This is positive reinforcement for them to continue.

SCVNGR answer feedback

The activities designed for students in this class were focused on photo and text-based challenges. We stayed away from QR codes because they can be finicky with some phones, and simply taking a picture of the QR code meets the challenge requirement for that option of activity. Our challenges included:

  • Meet the reference desk (above): Students meet desk staff and ask how they can get in touch for reference assistance; answers are by text and students type in which method they think they would use the most: email, chat, phone, or in person.
  • Prints for a day: Students find out about printing (a frequent question of new students), and text in how to pay for printing after finding the information at the Express Documents Center.
  • Playing favorites: Students wander around the library and find their favorite study spot. Taking a picture completes the challenge, and all images are collected in the Trek’s statistics.
  • Found in the stacks: After learning how to use the catalog (we provided a brief instruction session to this class before setting them loose), students search the catalog for books on a topic they are interested in, then locate the book on the shelf and take a picture. One student used this time to find books for another class and was really glad he got some practice.
  • A room of one’s own: The UA Libraries implemented online study room reservations as of a year ago. In order to introduce this new option to students, this challenge had them use their smartphones to go to the mobile reservation page and find out what the maximum amount of hours study rooms can be reserved for and text that in.

SCVNGR worked great with this class for simple tasks, such as meeting people at the reference desk, finding a book, or taking a picture of a favorite study spot, but for tasks requiring more critical thinking or more intricate work, it would not be the best platform for that level of instruction. SCVNGR's options for assessing how students respond to questions or complete an activity are limited. Texting in detailed answers or engaging in tasks like searching a database would be much harder to record. Likewise, instruction tied to critical thinking (evaluating a source or exploring copyright issues, for example) is not especially location-based, so it would be hard to tie those tasks and the acquisition of those skills to a location-based activity the app can track. One instance of this was with the Found in the Stacks challenge: students were supposed to search for a book in the catalog and then locate it on the shelf, but there was nothing stopping them from just finding a random book on the shelf and taking a picture of it to complete the challenge. SCVNGR provides a style guide to help in game design, and the overall message of that document is that simplicity is most effective on this platform.

Another feature that works well is being able to choose whether or not the trek is competitive, and also the option to use "SmartRoute," the ability to have challenges show up for participants based on distance and least-crowded areas. This is wonderful, particularly as students tend to get congested at certain points in a scavenger hunt: they all crowd around the same materials or locations simultaneously because they're making the same progress through the activity. We chose to use SmartRoute for this class so they would be spread out during the game.

SCVNGR trek settings

When trying to assess student effort and impact of the trek, you can look at stats and rankings. It’s possible to view specific student progress, all activity by all participants, and rankings organized by points.

SCVNGR statistics

Another feature is the ability to collect items submitted for challenges (particularly pictures). One of our challenges is for students to find their favorite study spot in the library and take a picture of it. This should be fun for them to think about and is fairly easy, and it helps us do some space assessment. It’s then possible to collect pictures like the following (student’s privacy protected via purple blob).

Student images of the UA Main Library via SCVNGR

On the topic of privacy, students enter their name to set up an account, but only their first name and the first initial of their last name appear as their username. Although last names are thus hidden, SCVNGR data is viewable by anyone who is within the geographical range to access the challenge: it is not closed to an institution. If students choose to take pictures of themselves, their identity may be revealed, but it is possible to maintain some privacy by not sharing images of specific individuals or any personal information in text responses. On the flip side of not wanting to associate individual students with their specific activities, things get trickier when an instructor plans to award points for student participation. In that case, it's possible to request reports from SCVNGR for instructors so they can see how much and which students participated. In a large class of over 100 students, the data can get messy, particularly if students share the same first name and last initial. Because of this issue, SCVNGR might be better used for large-scale orientations where participation does not need to be tracked, and for small classes where instructors can easily tell who is who in the activity data.

Lessons learned

Both student and instructor feedback was very positive. Students seemed to be having fun, laughing, and were not getting stuck nearly as much as in the previous year's pencil-and-paper hunt. The instructor noted that it seemed a lot more streamlined and engaging for the class. When students checked in with us at the end before heading out, they said they enjoyed the activity, and although there were a couple of hiccups with the software and/or how we designed the trek, they said it was a good experience and they felt more comfortable using the library.

Next time, I would be more careful about using text responses. I had gone down to our printing center to tell the student worker on duty what answers students in the class would be looking for so she could answer for them, but the students wound up speaking with someone else and getting different answers. Otherwise, the level of questions seemed appropriate for this class, and it was a good way to pilot how SCVNGR works, whether students might like it, and how long different types of questions take, before bringing this to campus on a larger scale. I would also be cautious about using SCVNGR too heavily for instruction, since it doesn't seem to have capabilities for more complex tasks or a great deal of critical thinking. It is more suited to basic instruction and getting students more comfortable using the library.

Pros

  • Ability to reach many students, and to do so asynchronously
  • Anyone can complete challenges and treks; this is great for prospective students and families, community groups, and any programs doing outreach or partnerships outside of campus since a university login is not required.
  • Can be coordinated with campus treks if other units have accounts or a university-wide license is purchased.
  • WYSIWYG interface, no programming skills necessary
  • The order of challenges in a trek can be staggered so that not everyone is competing for the same resources at the same time.
  • Can collect useful data through users submitting photos and comments (for example, we can examine library space and student use by seeing where students’ favorite spots to study are).

Cons

  • SCVNGR is not free to use; an annual fee applies (in the $900 range for a library-only license, which is not institution-wide).
  • Privacy is a concern since anyone can see activity in a location; it’s not possible to close this to campus.
  • When completing a trek, users do not get automatic prompts to proceed to the next challenge; instead, they must go back to the home location screen and choose the next challenge (this can get a little confusing for students).
  • SCVNGR is more difficult to use for instruction, especially instruction that incorporates critical thinking and more complex activities.
  • Instructors might have a harder time figuring out how to grade participation because treks are open to anyone; only students' first names and last initials appear, so if a large class completes a trek for an assignment, or if an orientation trek for the public is used, a special report must be requested from SCVNGR that the library can send to the instructor for grading purposes.

 

Conclusion

SCVNGR is a good way to increase awareness and get students and other groups comfortable using the library. One of the main benefits is that it's asynchronous, so a great deal of library staff time is not required to get people interacting with services, collections, and space. Although this platform is not perfect for more in-depth instruction, it does work at the basic orientation level, and the students and instructor in the course where we piloted it had a good experience.

 

References

Nicholson, S. (2012). A user-centered theoretical framework for meaningful gamification. Paper presented at Games+Learning+Society 8.0, Madison, WI. Retrieved from http://scottnicholson.com/pubs/meaningfulframework.pdf


About Our Guest Author: Nicole Pagowsky is an Instructional Services Librarian at the University of Arizona where she explores game-based learning, student retention, and UX. You can find her on Twitter, @pumpedlibrarian.


Adventures with Raspberry Pi: A Librarian’s Introduction

A Raspberry Pi computer (image credit: Wikimedia Commons)

Raspberry Pi, a $35 fully functional desktop computer about the size of a credit card, is currently enjoying a high level of buzz, popularity, and media exposure. Librarians are, of course, also getting in on the action. I have been working with a Raspberry Pi to act as a low-power web server for a project delivering media-rich web content for museum exhibits in places without access to the internet. I've found working with the little Linux machine to be a lot of fun and I'm very excited about doing more with Raspberry Pi. However, as with many things librarians get excited about, it can be difficult to see through the enthusiasm to the core of the issue. Is the appeal of these cute little computers universal or niche? Do I need a Raspberry Pi in order to offer core services to my patrons? In other words: do we all need to run out and buy a Raspberry Pi, are they of interest only to a certain niche of librarians, or is the Raspberry Pi just the next library technology fad, soon to go the way of offering reference service in Second Life?[1] To help answer this question, I'd like to take a moment to explain what a Raspberry Pi device is, speculate about who will be interested in one, provide examples of some library projects that use Raspberry Pi, and offer a shopping list for those who want to get started.

What is Raspberry Pi?

From the FAQ at raspberrypi.org:

The Raspberry Pi is a credit-card sized computer that plugs into your TV and a keyboard. It’s a capable little PC which can be used for many of the things that your desktop PC does, like spreadsheets, word-processing and games. It also plays high-definition video. We want to see it being used by kids all over the world to learn programming.

This description from Raspberry Pi covers the basics. (H2G2 has a more detailed history of the project.) A Raspberry Pi (also known as a Raspi, or an RPi) is a small and inexpensive computer designed to extend technology education to young students who don't currently have access to more expensive traditional computers. The Raspberry Pi project counteracts a movement away from general-purpose computing devices and toward internet appliances and mobile devices. The Pew Internet and American Life Project notes that "smartphone owners, young adults, minorities, those with no college experience, and those with lower household income levels are more likely than other groups to say that their phone is their main source of internet access."[2] Access to the internet today is pervasive and less expensive than ever before, but it is also more likely to come from an appliance or mobile device, without the programming tools and command-line control that were standard for previous generations of computer users. This means a smaller percentage of computer users are likely to pick up these skills on their own. Raspberry Pi offers a very low-cost solution to this lack of access to programming and command-line tools.

In addition to the stated goal of the Raspberry Pi organization, a lot of adults who already have access to technology are also very excited about the possibilities enabled by the small and cheap computing platform. What sets the Raspberry Pi apart from other computers is its combination of small size and low price. While you can do very similar things with a re-purposed or salvaged computer system running Linux, the Raspberry Pi is much smaller and uses less power. Similarly, you can do some of these things with a similarly-sized smart-phone, but those are much more expensive than a Raspberry Pi. For the technology hobbyist and amateur mad scientist, the Raspberry Pi seems to hit a sweet spot on both physical size and cost of entry.

The heart of the Raspberry Pi (or RPi) Model B is a Broadcom system-on-a-chip that includes a 700MHz ARM processor, 512MB of RAM, USB and Ethernet controllers, and a graphics processor capable of HD resolutions. According to the FAQ, its real-world performance is on par with a first-generation Xbox or a 300MHz Pentium II computer. In my personal experience it is powerful enough for typical web browsing tasks or to host a WordPress-based web site. Raspberry Pi devices also come with a GPIO (general purpose input and output) port, which enables an RPi to control electronic circuits. This makes the RPi a very flexible tool, but it doesn't quite provide the full functionality of an Arduino or similar microcontroller.[3]
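Since the GPIO port is the piece that sets the Pi apart from an ordinary small PC, here is a minimal sketch of driving a pin from the command line, assuming a Wheezy-era Raspbian where the kernel's sysfs GPIO interface is available (run as root; pin 18, in the Broadcom numbering scheme, is just an example):

    echo 18 > /sys/class/gpio/export             # expose GPIO pin 18 to userspace
    echo out > /sys/class/gpio/gpio18/direction  # configure the pin as an output
    echo 1 > /sys/class/gpio/gpio18/value        # drive the pin high (e.g., light an LED)
    echo 0 > /sys/class/gpio/gpio18/value        # drive the pin low again
    echo 18 > /sys/class/gpio/unexport           # release the pin when finished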

Out of the box, a Raspberry Pi will require some extra equipment to get up and running. There is a shopping list included at the bottom of the article that contains known working parts. If you keep boxes of spare parts and accessories around, just in case, you likely already have some of these. In addition to a $35 Raspberry Pi Model B computer, you will definitely need an SD card with at least 4GB of storage and a 5-volt, 1-amp (minimum) micro-USB power supply. An extra cell phone charger looks like the right part but probably does not put out the minimum amperage to run an RPi; a tablet charger likely will. You can read the fine print on the 'wall wart' part of the charger for its amperage rating. If you want to use your Raspberry Pi as a workstation,[4] you'll also need an HDMI cable, a digital monitor, and a USB keyboard and mouse. Any USB keyboard or mouse will work, but the monitor will need to have an HDMI input.[5] Additionally, you may want a USB wifi adapter to connect to wireless networks, and since the Raspberry Pi has only two USB ports, you may also want a powered USB hub so you can connect more peripherals. The Raspberry Pi unit ships as a bare board, so you may want to keep your RPi in a case to protect it from rough handling.

Who is the Raspberry Pi for?

Now that we've covered what kind of kit is needed to get started, we can ask: are you the kind of librarian who is likely to be interested in a Raspberry Pi? I've noticed some "enthusiasm fatigue" out there: librarians who are weary of overhyped tools that don't provide the promised revolution. I love my Raspberry Pi units, but I don't think they have universal appeal, so I've made a little quiz that may help you decide whether you are ready to order one today or pass on the fad, for now.

  1. Are you excited to work in a Linux operating system?
  2. Are you willing to use trial-and-error analysis to discover just the right configuration for your needs?
  3. Do you enjoy the challenge of solving a living problem more than the security of a well-polished system?

If the answer to all three of these questions is an enthusiastic YES, then you are just the kind of librarian who will love experimenting with a Raspberry Pi. If your enthusiasm is more tempered, or if you answered no to one or more of the questions, then it is not likely that a Raspberry Pi will meet your immediate needs. RPis are projects, not products. They make great prototypes or test boxes, but they aren't really a turn-key solution to any existing large-scale library problems. Not every library or librarian needs a Raspberry Pi, but I think a significant number of geeky and DIY librarians will be left asking: "Where have you been all my life?"

If you are a librarian looking to learn Linux, programming, or server administration, and you'd rather do so on a cheap dedicated machine than on your work machine, Raspberry Pi is going to make your day. If you want to learn how to install and configure something like WordPress or Drupal and you don't have a web server to practice on (and local AMP tools aren't what you are looking for), a Raspberry Pi is an ideal tool for developing that part of your professional skill set. If you want to learn to code, learn robotics, or build DIY projects, then you'll love Raspberry Pi. RPis are great for learning more about computers, networks, and coding. They are very educational, but at the end of the day they fall a bit more on the hobby end of the spectrum than on the professional product end.

Raspberry Pi Projects for Librarians

So, if you've taken the quiz and are still interested in getting started with Raspberry Pi, there are a few good starting points. If you prefer printed books, O'Reilly Media's Getting Started with Raspberry Pi is fantastic. On the web, I've found the Raspberry Pi wiki at elinux.org to be an indispensable resource, and its list of tutorials is worth a special mention. Adafruit (an electronics kit vendor and education provider) also has some very good tutorials and project guides. For library-specific projects, I have three suggestions, but there are many directions you may want to go with your Raspberry Pi. I'm currently working to set mine up as a web server for local content, so museums can offer rich interpretive media about their exhibits without having to supply free broadband to the public. When this is finished, I'm going to build projects two and three.

  • Project One: Get your RPi set up.

This is the out-of-the-box experience and will take you through the setup of your hardware and software. RaspberryPi.org has a basic getting started guide, Adafruit has a more detailed walkthrough, and there are some good YouTube tutorials as well. In this project you'll download the operating system for your Raspberry Pi, transfer it to your SD card, boot up your machine, and perform the first-time setup. Once your device is up and running, you can spend some time familiarizing yourself with it and getting comfortable with the operating system.
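If you're doing the SD card step from a Mac, it usually comes down to a couple of Terminal commands. Here is a minimal sketch, assuming a Wheezy-era Raspbian image and a card that shows up as /dev/disk2 (the image filename and disk number are examples; verify yours with diskutil, since dd will overwrite whatever disk you point it at):

    diskutil list                      # identify your SD card's device name
    diskutil unmountDisk /dev/disk2    # unmount the card so dd can write to it
    sudo dd bs=1m if=2013-09-25-wheezy-raspbian.img of=/dev/rdisk2   # copy the image; this takes several minutes
    diskutil eject /dev/rdisk2         # eject the card once dd finishes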

  • Project One-Point-Five: Play with your Raspberry Pi

Once your credit-card-sized computer is up and functional, kick the tires. Check out the graphical interface, use the command line, and try running it headless (see the sketch below). Take baby steps if baby steps are what is fun and comfortable, or run headlong into a project that is big and crazy; the idea here is to have fun, get used to the environment, and learn enough to ask useful questions about what to do next. This is a good time to check out the Adafruit series of tutorials or elinux.org's tutorial list.
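For instance, here is a minimal sketch of a first headless connection over SSH from another machine on your network, assuming the default Raspbian account and an example IP address (find the Pi's real address from your router's client list, or by running ifconfig on the Pi itself):

    ssh pi@192.168.1.50    # default Raspbian username is "pi", default password is "raspberry"
    passwd                 # once logged in, change that default password right away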

  • Project Two: Build an Information Kiosk to Display Local Mass Transit Information

http://blog.bn.ee/2013/01/11/building-a-real-time-transit-information-kiosk-with-raspberry-pi/

I found this on the elinux list of tutorials, and I think it is great for libraries, provided they are in an area served by NextBus or a similar service. The tutorial walks users through the process of building a dedicated information kiosk for transit information. The steps are clear and documented with photographs and code examples. Beginning users may want to refer to other references, such as the O'Reilly book or a Linux tutorial, to fill in some gaps. I suspect the tricky bit will be finding a source of real-time GPS telemetry from the local transit service, but this is a great project for those who have worked through basic projects and are ready to build something practical for their library.

  • Project Three: Build a Dedicated OPAC Terminal.

While dedicated OPAC terminals may no longer be the cutting edge of library technology, our patrons still need to find books on the shelves. Library Journal's Digital Shift blog and John Lolis from the White Plains Public Library describe a project that uses the Raspbian OS to power a catalog-only public terminal. The concept is straightforward and working prototypes have been completed, but as of yet I do not see a step-by-step set of instructions for the beginner or novice. As a follow-up to this post, I will document the build process for TechConnect. The gist of the project is to set up a kiosk-type browser on the Raspberry Pi: a browser that only performs a set task or visits a limited range of sites (a minimal sketch of this appears below). Eli Neiburger has raised some good questions on Twitter about the suitability of RPi hardware for rough-and-tumble public use, but this is the sort of issue testing may resolve. If librarians can crowd-source a durable low-cost OPAC kiosk using Lolis' original design, we'll have done something significant.
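To give a flavor of what the kiosk-browser approach involves, here is a minimal sketch of a startup script for Wheezy-era Raspbian using the Midori browser; the catalog URL is hypothetical, and unclutter is an optional package (sudo apt-get install unclutter) that hides the idle mouse cursor:

    #!/bin/sh
    xset s off            # turn off the screensaver
    xset -dpms            # disable display power management so the screen stays on
    unclutter &           # hide the mouse cursor when it isn't moving
    midori -e Fullscreen -a http://catalog.example.edu   # open Midori fullscreen on the catalog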

Raspberry Pi Shopping List

As mentioned above, you may have many of these items already. If not, I’ve purchased and tested the following accessories for a couple of Raspberry Pi projects.

Basic Kit: (parts sourced through Amazon for ease of institutional purchase. Other sources may be preferable or less expensive.)

Accessories:

Raspberry Pi kits (Some vendors have put together full kits with a wide range of parts and accessories. These kits include breadboards and parts for arduino-type projects.)

Notes

  1. Good and necessary work is still being done in Second Life, but it has become a niche service, not a revolution in the way we provide library services.
  2. http://www.pewinternet.org/~/media//Files/Reports/2012/PIP_Digital_differences_041312.pdf
  3. Check out this forum thread for a basic distinction between Arduino and Raspberry Pi.
  4. The alternative is to run it ‘headless’ over your network using SSH.
  5. Monitors with DVI input will work with a small and cheap HDMI-to-DVI adaptor. Analog monitors (the ones with blue VGA connectors) will work if you purchase an HDMI-to-VGA converter-adapter, which starts around $20.

Local Dev Environments For Newbies Part 1: AMP on Mac OSX

There are many cases where having a local development environment is helpful and it is a relatively straightforward thing to do, even if you are new to development.  However, the blessing and the curse is that there are many, many tutorials out there attempting to show you how.  This series of posts will aim to walk through some basic steps with detail, as well as pass on some tips and tricks for setting up your own local dev box.

First, what do I mean by a local development environment?  This is a setup on your computer which allows you to code and tweak and test in a safe environment.  It’s a great way to hammer on a new application with relatively low stakes.  I am currently installing dev environments for two purposes: to test some data model changes I want to make on an existing Drupal site and to learn a new language so I can contribute to an application.  For the purposes of this series, we’re going to focus on the AMP stack – Apache, MySQL and PHP – and how to install and configure those systems for use in web application development.

Apache is the web server which will serve the pages of your website or application to a browser.  You may hear Apache in conjunction with lots of other things – Apache Tomcat, Apache Solr – but generally when someone references just Apache, it’s the web server.  The full name of the project is the Apache HTTP Server Project.

PHP is a scripting language widely used in web development.  MySQL is a database application also frequently used in web development.  “Stack” refers to the combination of several components needed to run a web application.  The AMP stack is the base for many web applications and content management systems, including Drupal and WordPress.

You may have also seen the AMP acronym preceded by an L, M or W.  This merely stands for the operating system of choice – Linux, Mac or Windows.  This can also refer to installer packages that purport to do the whole installation for you, like WAMP or MAMP.  Employing the installer packages can be useful, depending on your situation and operating system.  The XAMPP stack, distributed by Apache Friends, is another example of an installer package designed to set up the whole stack for you.  For this tutorial though, we’ll step through each element of the stack, instead of using a stack installer.

So, why do it yourself if there are installers?  To me, it takes out the mystery of how all the pieces play together and is a good way to learn about what’s going on behind the scenes.  When working on Windows, I will occasionally use a .msi installer for an individual component to make sure I don’t miss something.  But installing and configuring each component individually is actually helpful.

Tips

Before we begin, let’s look at some tips:

  • You will need administrative rights to the computer on which you’re installing.
  • Don’t be afraid of the command line.  There are lots of tutorials around the web on how to use the basic commands – for both Mac (based on UNIX) and Windows.  But, you don’t need to be an expert to set up a dev environment.  Most tutorials give the exact commands you need.
  • Try, if possible, to block off a chunk of time to do this.  Going through all the steps may take a while, from an hour to an afternoon, especially if you hit a snag.  Several times during my own process, I had to step away from it because of a crisis or because it was the end of the day.  When I was able to come back later, I had some trouble remembering where I left off or the configuration options I had chosen.  If you do have to walk away, write down the last thing you did.
  • When you’re looking for a tutorial, Google away.  Search for the elements of your stack plus your OS, like “Apache MySQL PHP Mac OSX”.  You’ll find lots, and probably end up referencing more than one.  Use your librarian skills: is the tutorial recent?  Does it appear to be from a reputable source?  If it’s a blog, are there comments on the accuracy of the tutorial?  Does it agree with the others you’ve seen?
  • Once you’ve selected one or two to follow, read through the whole tutorial one time without doing anything.  Full disclosure: I never do this and it always bites me.

Let’s get going with Recipe 1 – Install the AMP Stack on Mac OS X

Install the XCode Developer Tools

First, we install the developer tools for XCode.  If you have Mac 10.7 and above, you can download the XCode application from the App Store.  To enable the developer tools, open XCode, go to the XCode menu > Preferences > Downloads tab, and then click on “Install” next to the developer tools.  This tutorial on installing Ruby by Moncef Belyamani has good screenshots of the XCode process.

If you have Snow Leopard (10.6) or below, you’ll need to track down the tools on the Apple Developer Downloads Page.  You will need to register as a developer, but it’s free.  Note:  you can get pretty far in this process without using the XCode command line tools, but down the road as you build more complicated stacks, you’ll want to have them.

Configure Apache and PHP

Next we need to configure Apache and PHP.  Note that I said “configure”, not “install”.  Apache and PHP both come with OS X; we just need to configure them to work together.

Here’s where we open the Terminal to access the command line by going to Applications > Utilities > Terminal.


Once Terminal is open, a prompt appears where you can type in commands.  The “~” character indicates that you are at the “home” directory for your user.  This is where you’ll do a lot of your work.  The “$” character delineates the end of the prompt and the beginning of your command.

A typical prompt looks something like this (computer name, current directory, then your user name):

Mac-Pro:~ mfrazer$

Type in the following command:

cd /etc/apache2

“cd” stands for “change directory”.  This is the equivalent of double-clicking on etc, then apache2, if you were in the Finder (but etc is a hidden folder in the Finder).  From here, we want to open the necessary file in an editor.  Enter the following command:

sudo nano httpd.conf

“sudo” elevates your permission to administrator, so that you can edit the config file for Apache, which is httpd.conf.  You will need to type in your administrator password.  The “nano” command opens a text editor in the Terminal window.  (If you’re familiar with vi or emacs, you can use those instead.)


The bottom of your window will show the available commands.  The “^” stands for the Control key.  So, to search for the part we need to change, press Control + W.  Enter php and press Enter.  We are looking for this line:

#LoadModule php5_module        libexec/apache2/libphp5.so

The “#” at the beginning of this line marks it as a comment, so Apache ignores the line.  We want Apache to see the line and load the PHP module.  So, change the text by removing the #:

LoadModule php5_module        libexec/apache2/libphp5.so

Save the file by pressing Control + O (nano calls this “WriteOut”) and press Enter to confirm the file name.  The number of lines written displays at the bottom of the window.  Press Control + X to exit nano.

Next, we need to start the Apache server.  Type in the following command:

sudo apachectl start

Now, go to your browser and type in http://localhost.  You should see “It Works!”
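
If you see an error page instead, a quick syntax check of the config file will often locate the problem; Apache’s control script has a built-in test:

# prints Syntax OK, or points to the offending line in httpd.conf
sudo apachectl configtest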

Apache, as mentioned before, serves web files from a location we designate.  By default, this is /Library/WebServer/Documents.  If you have Snow Leopard (10.6) or below, Apache also automatically looks to username/Sites, which is a convenient place to store and work with files.  If you have OS 10.7 or above, creating the Sites folder takes a few steps.  On 10.7, go to System Preferences > Sharing and click on Web Sharing.  If there’s a button that says “Create Personal Web folder”, the folder has not been created yet; go ahead and click that button.  If it says “Open Personal Website folder”, you’re good to go.

On 10.8, the process is a little more involved.  First, go to the Finder, click on your user name and create your sites folder.
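
If you prefer to stay at the command line, this step is a single command; it simply creates the Sites folder in your home directory:

mkdir ~/Sites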


Next, we need to open the command line again and create a .conf file for that directory, so that Apache knows where to find it.  Type in these commands:

cd /etc/apache2/users
ls

The ls at the end will list the directory contents.  If you see a file that’s yourusername.conf (ie, mfrazer.conf) in this directory, you’re good to go.  If you don’t, it’s easy to create one.  Type the following command:

sudo nano yourusername.conf

So, mine would be sudo nano mfrazer.conf.  This will create the file and take you into a text editor.  Copy and paste the following, making sure to change YOURUSERNAME to your user name.

<Directory "/Users/YOURUSERNAME/Sites/">
  Options Indexes MultiViews
  AllowOverride None
  Deny from all
  Allow from localhost
</Directory>

The first directive, Options, can have lots of different…well, options.  The ones we have here are Indexes and MultiViews.  Indexes means that if a browser requests a directory and there’s no index.html or index.php file, it will serve a directory listing.  MultiViews means that browsers can request the content in a different format if it exists in the directory (ie, in a different language).  AllowOverride determines whether an .htaccess file elsewhere can override the configuration settings.  For now, None indicates that no part can be overridden.  For Drupal or other content management systems, it’s possible we’ll want to change these directives, but we’ll cover that later.

The last two lines indicate that traffic can only reach this directory from the local machine, by typing http://localhost/~username in the browser.  For more on Apache security, see the Apache documentation.  If you would like to set it so that other computers on your network can also access this directory, change those last two lines to:

Order allow,deny
Allow from all

Either way, press Control + O to save the file and Control + X to exit.  Restart Apache for the changes to take effect using this command:

sudo apachectl restart

You may also be prompted at some point by OS X to accept incoming network connections for httpd (Apache); I would deny these as I only want access to my directory from my machine, but it’s up to you depending on your setup.

We’ll test this setup with php in the next step.

Test PHP

If you want to check php, you can create a new text document using your favorite text editor.  Type in:

<?php phpinfo(); ?>

Save the file as phpinfo.php in your username/Sites directory (so for me, this is mfrazer > Sites).

Then, point your browser to http://localhost/~yourUserName/phpinfo.php.  You should see a page of information regarding PHP and the web server, headed by the PHP logo and the PHP version number.


MySQL

Now, let’s install MySQL.  There are two ways to do this.  We could go to the MySQL downloads page and use the installers.  The fantastic tutorials at Coolest Guy on the Planet both recommend this, and it’s a fine way to go.

But we can also use Homebrew, mentioned previously on this blog, which is a really convenient way to do things as long as we’re already using the command line.

First, we need to install Homebrew.  Enter this at the command prompt:

ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)"

Next, type in

brew doctor

If you receive the message “Your system is raring to brew,” you’re ready to go.  If you get warnings, don’t lose heart.  Most of them tell you exactly what you need to do to move forward.  Correct the errors and type in brew doctor again until you’re raring to go.  Then, type in the following command:

brew install mysql

That one’s pretty self-explanatory, no?  Homebrew will download and install MySQL, as of this writing version 5.6.10, but pay attention to the download to see the version – it’s in the URL.  After the installation succeeds, Homebrew will give some instructions on finishing the setup, including the commands we discuss below.
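
If those post-install instructions scroll past before you can read them, you can redisplay them at any time; brew info prints a package’s caveats along with version information:

# shows the installed version and Homebrew's setup notes for MySQL
brew info mysql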

I’m going to pause for a second here and talk a little about permissions and directories.  If you get a “permission denied” error, try running the command again with “sudo” at the beginning.  Remember, this elevates your permission to the administrator level.  Also, if you get a “directory does not exist” error, you can easily create the directory using “mkdir”.  Before we move on, let’s check for a directory you’re going to need coming up.  Enter:

cd /usr/local/var

If you are successfully able to change to that directory, great. If not, type in

sudo mkdir /usr/local/var

to create it. Then, let’s go back to our home directory by typing in

cd ~

Now, let’s continue with our procedure. First, we want to set up the databases to run with our user account.  So, we type in the following two commands:

unset TMPDIR
mysql_install_db --verbose --user=`whoami` --basedir="$(brew --prefix mysql)" --datadir=/usr/local/var/mysql --tmpdir=/tmp

The second command here installs the system databases.  The `whoami` in backticks is automatically replaced with your user name, so the above command should work verbatim.  But it also works to type your user name directly, with no backticks (ie, --user=mfrazer).
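
The backticks are a shell feature, not anything MySQL-specific; you can see the substitution for yourself:

# the shell runs whoami and substitutes its output into the command line
echo `whoami`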

Next, we want to run the “secure installation” script.  This helps you set a root password without leaving the password in plain text in your editor.  First we start the mysql server, then we run the installation script and follow the prompts to set your root password, etc.  Note that the Cellar path below includes the MySQL version number; adjust it to match the version Homebrew installed:

mysql.server start
sudo /usr/local/Cellar/mysql/5.6.10/bin/mysql_secure_installation

After the script is complete, stop the mysql server.

mysql.server stop

Next, we want to set up MySQL so it starts at login. For that, we run the following two commands:

ln -sfv /usr/local/opt/mysql/*.plist ~/Library/LaunchAgents
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist

The ln command, in this case, places a symbolic link to any .plist files in the mysql directory into the LaunchAgents directory.  Then, we load the plist using launchctl to start the server.
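
To confirm the launch agent loaded, you can list launchctl’s jobs and filter for mysql:

# lists loaded jobs; the MySQL entry carries the homebrew.mxcl.mysql label
launchctl list | grep mysql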

One last thing – we need to create one more link to the mysql.sock file.

cd /var/mysql/
sudo ln -s /tmp/mysql.sock

This creates a link to the mysql.sock file, which MySQL uses to communicate, but which resides by default in a tmp directory.  The first command places us in the directory where we want the link (remember, if it doesn’t exist, you can use “sudo mkdir /var/mysql/” to create it) and the second creates the link.
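
You can verify the link with ls; the arrow in the output shows where the link points:

ls -l /var/mysql/mysql.sock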

MySQL is ready to go!  And, so is your AMP stack.
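
As a quick smoke test, connect with the command-line client using the root password you set during the secure installation script, then type exit to quit:

# -u root connects as root; -p prompts for the password
mysql -u root -p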

But wait, there’s more…

One optional tool to install is phpMyAdmin.  This tool allows you to interact with your database through your browser so you don’t have to continue to use the command line.  I also think it’s a good way to test if everything is working correctly.

First, let’s download the necessary files from the phpMyAdmin website.  These will have a .tar.gz extension.  Place the file in your Sites directory, and double-click to unzip the file.

Rename the folder to remove the version number and everything after it.  I’m going to place the next steps below, but the Coolest Guy on the Planet tutorial referenced earlier does a good job of this step for OS 10.8 (just scroll down to phpMyAdmin) if you need screenshots.

Go to the command line and navigate to your phpMyAdmin directory.  Make a directory called config and change the permissions so that the installer can access the file.  This should look something like:

cd ~/Sites/phpMyAdmin
mkdir config
chmod o+w config

Let’s take a look at that last command: chmod changes the permissions on a file or directory.  The o in o+w refers to “other” users, those who are neither the owner nor in the directory’s group, and +w grants them write permission.
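
You can confirm the change took effect by listing the directory’s permissions; the last group of characters should now include a w for “other” users:

# -d lists the directory itself rather than its contents
ls -ld config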

Now, in your browser, go to http://localhost/~username/phpmyadmin/setup and follow these steps:

  1. Click on New Server (button on bottom)
  2. Click on Authentication tab, and enter the root password in the password field.
  3. Click on Save.
  4. Click on Save again on the main page.

Once the setup is finished, go to the Finder and move the config.inc.php file from the config directory into the main phpmyadmin directory, then delete the config directory.  In the end, config.inc.php should sit at the top level of the phpmyadmin folder, alongside files like index.php.


Now, go to http://localhost/~username/phpmyadmin in your browser and log in with the root account.

You are ready to go!  In future parts of this series, we’ll look at building the AMP stack on Windows and adding Drupal or WordPress on top of the stack.  We will also look at maintaining your environment, as the AMP stack components will need updating occasionally.  Any other recipes you’d like to see?  Do you have questions? Let us know in the comments.

The following tutorials and pages were incredibly useful in writing this post.  While none of these tutorials are exactly the same as what we did here, they all contain useful pieces and may be helpful if you want to skip the explanation and just get to the commands: