Libraries and academic institutions have been flooded with mobile devices over the past few years. We lend iPads, rove on our reference shifts, write tutorials on connecting to wireless networks in a dozen different operating systems, and perhaps even preside over one-to-one student-to-device programs.
However, there still seems to be confusion over what exactly tablets are good for. Amidst all the hype, I feel like we’re throwing them at some problems without answering fundamental questions first. What problems do they solve? Why would one choose a tablet over another type of computer? Some of these answers are straightforward, obvious even. Tablets have good battery life, and they’re easier to carry around campus all day, especially if they can save you a textbook or two. They have great cameras. Most come with intuitive sharing facilities, making it easy to distribute materials in class.
But sometimes the affordances of a touch interface aren’t enough. So we add a USB keyboard, we add a mouse, we put the tablet in a cover to protect its exposed screen. And pretty soon we’ve got ourselves a laptop. A laptop with detachable parts, but a laptop nonetheless.
So what are good uses for tablets in the classroom? In my eyes, they center around two things: mobility and multimedia.
Tablets are clearly more mobile than laptops, even the lightest of which tend to be heavier and simply not designed for use on the go. Walking around campus with a Macbook Air and flipping it open every time you need to take a picture with the webcam is not as easy as using a tablet with a rear-facing camera. Most tablet operating systems are also getting better at hands-free usage, responding to voice input with technology like Siri and Google Now.
The applications of mobility in an educational setting are manifold. Starting with the obvious, many libraries have “scavenger hunt” activities which involve moving about the library and learning about different collections, service points, and study areas. Even if you don’t use an app like SCVNGR to run the activity, having a device with geolocation and a camera makes it easy to move from point-to-point and document progress. Given how labyrinthine many academic libraries are, particularly those with large stacks, a tablet could really help make a scavenger hunt less intimidating and more engaging.
At a community college, many of our courses are vocational in nature. These courses do not typically involve sitting in a lecture hall listening to your instructor; they are naturally suited to hands-on work in the field. Courses as varied as auto mechanics, criminal justice, ecology, and nursing could all benefit from mobile devices. Even typical uses, which don’t utilize purchased apps or unique hardware, could be easier with a lightweight computer, such as taking notes and looking up reference materials online.
Students in an ecology course can research local flora, looking up plant species while they’re far from campus. Criminal justice majors can document and investigate a fake crime scene. Nurses can refer to and ask for feedback on their treatment plans while making their rounds. Those latter two examples point to further advantages of tablets: they have great audio-video recording facilities and make sharing content very easy. Beyond just being mobile, tablet devices can help students create multimedia projects and share them to social media. They’re better suited to demonstrating metaliteracy.
Tablet computers have their uses in education. They are not, however, a panacea. There are many problems which they do not solve, and some which they exacerbate.
One of the most common, traditional uses of computers in academia is to create research papers. Unfortunately, tablets aren’t great for writing and researching in large quantities. Can they produce research papers? Absolutely, but long-form writing is one of the situations where one begins to turn a tablet into a laptop by adding a keyboard. One fights the tablet’s form rather than working with it.
While there are plenty of word processing apps available, they may not always work well with a school’s learning environment. Our instructors, for instance, mostly require papers in .rtf or .doc formats, which are only readily available on Windows tablets. This isn’t the tablets’ fault, but the uneven pace of technological development in academia (some professors leaping wholesale into multimedia assignments, others sticking with decades-old file formats) disadvantages newer devices. Vendor databases are also variable in how well they support smaller screens and touch-based interfaces.1 Finally, actually submitting an assignment to a Learning Management System is often difficult on mobile devices. Our LMS, which is quite modern in most respects, does not allow web uploads in Mobile Safari or Android Browser. It does have apps for both iOS and Android, but the app was read-only until recently and even now permits submissions only with the assistance of Dropbox.2
In sum, research papers present numerous obstacles for tablet devices. While none are insurmountable, the devices simply aren’t intended to produce research papers, at least not as much as traditional (laptop and desktop) computers. This isn’t a killer issue, and it is one which will no doubt improve over time. But tablet devices also pose larger questions about technology and learning which we need to at least be thinking about.
Mobile operating systems are remarkably stable. It’s perhaps sad that the first thing that really impressed me about iOS is that it just kept running. Open apps, leave them open, whatever, it doesn’t matter. The OS churns on.
But this stability comes at the cost of a lot of customizability. The reason why my Linux laptop occasionally becomes erratic is because I’ve told it to. I’ve installed a development version of the kernel, I’ve entered contradictory window manager configurations, I’ve deleted all my hardware drivers somehow. I have the freedom to be foolish.
Such complete control over a device, in the right hands, can offer privacy, a privacy that might be otherwise impossible to obtain. With companies like Apple and Google being complicit with the NSA’s surveillance, this poses a problem to libraries and other privacy advocates. Do we offer access to devices that are known to report their actions back to a corporate or governmental body? Or do we let users boot up a Tails instance and stay private? While surveillance may be unavoidable, Cory Doctorow is right to point out that this is a human rights issue. In an age where we do almost everything on our computers, locked-down devices offer some assurances at the expense of others. They run stable operating systems, but limit our ability to verify they haven’t been tampered with.
Starting people on devices whose only applications come from a corporate-controlled “app store” sets a precedent. If this is how people are first introduced to computers, it’s how many will assume they work. Apple has already tried to port its app store to the desktop, including setting a default to allow only apps installed from it. This may seem, ultimately, like a trite complaint. But Doctorow is right to extrapolate to equipment like cochlear implants; what happens when we don’t control the firmware on devices embedded in our own bodies? If a device matters to you, you should care about controlling what’s installed on it.
“But Android is open source!” And indeed, it is, though that somehow hasn’t stopped it from relying on multiple app stores with subtly different offerings (Google Play vs. Amazon Appstore on the Kindle Fire…why are there two corporate-controlled app stores for the same OS?). I feel like Android has been an open source OS that’s easy for corporations to customize on the locked-down devices they sell, but not so easy for users to truly take over. Still, there’s hope here. CyanogenMod is a non-corporate version of Android which gives users far greater control than is available on other mobile operating systems.
And rather recently, a CyanogenMod Installer appeared in the Google Play store, indicating that Google isn’t entirely opposed to giving users more freedom. Update: Google removed the CyanogenMod Installer app. So maybe they are opposed to giving users more freedom.
I also can’t help but wonder: are we limiting people by providing all-too-easy devices? I cringe as I ask the question, because it recalls the ludicrous “discovery layers make research too easy and it should be hard” argument. Humor me a bit longer, however.
Much of my hesitancy with easy, touch-based devices comes from my own history with computers, where the deeper I’ve delved the more rewarded I’ve felt. I love the command line, an interface even less beginner-friendly than graphical desktop operating systems. I love the keyboard, too. Some keyboard shortcuts and a little muscle memory make me faster than any elaborate set of swipes could be. In fact, the lack of keyboard shortcuts and a command line is a big reason why I’m not a regular tablet user.3 I’ve grown to rely on them so much that going without just doesn’t make sense to me.
The point is: sometimes these difficult-to-learn interfaces have enormous power hidden beneath them. We’re sacrificing something by moving to an easier option, one which doesn’t offer power users a way around its limitations. Then again, just because a user employs a tablet for one activity doesn’t mean they’ll eschew laptops or desktops for everything else.4 The issue is more when tablets are presented as a replacement for more powerful computers; it’s valuable to make users understand that, in some circumstances, the level of control and customizability of a desktop OS is essential.
The availability of apps is often cited as an advantage of mobile operating systems. But many apps offer no unique advantages over desktop computers; they perform the same functions but on a different device. Rather than monolithic desktop software packages like Microsoft Office or Creative Suite, consumers have a plethora of smaller, cheaper, more focused applications. The apps which do achieve things genuinely impossible or difficult on a desktop tend to engage with the two advantages of tablets I highlighted earlier, namely mobility (e.g. Foursquare, SCVNGR) and multimedia (video/audio recorders, from Vine to native Camera apps).
A recent LITA listserv discussion5 highlights the strawman “apps” argument. A few people noted the availability of apps on tablet computers, then proceeded to name a few common applications which are available on every major desktop operating system (not to mention free on the web). How does a dictionary or calculator suddenly become a competitive advantage when it’s on a tablet?
Take Evernote for example. Often cited as a must-have app, I feel like its primary appeal is solving a problem device ubiquity created. Taking notes and saving bits of content wasn’t much of a struggle before it involved syncing between so many devices. Evernote’s seamless cross-platform availability is what makes it so appealing, not that it reinvented annotation. Is it a great app for our modern age? Yes. Is it a killer app that makes you need an iPad? No. It’s an app you need if you have an iPad, not the other way around. Full disclosure: I never got into using Evernote, so this is an outsider’s take.
The distinction about which need comes first, tablet or app, is pivotal: mobile devices create needs even as they solve them. To return to keyboards again, why do we need them? To type on our tablets. Why did we need the tablets? So we can type everywhere we go. The cycle continues.
One metaphor that’s persisted since the dawn of graphical operating systems is that computer hard drives are like your filing cabinet: they have folders, inside those folders are files, files of different types.6 It’s very strange to me, having grown up thoroughly immersed in this metaphor, that mobile operating systems dispense with it. There are no more folders. There aren’t even files. There are only apps. The apps may conspire together, you may take a photo in one and edit it in another, but you may never interact with the photo itself outside of an app.
There is nothing essential about the filing cabinet metaphor. A different one could have become ubiquitous. It’s already verging on anachronism as digital “folders” overtake physical ones. So why do I feel like people should know what a folder is, and how to rename one, how to move it, how to organize one’s files? These are basic skills I instruct students on every day, yet perhaps they’ve simply grown unnecessary. Am I an old fogey for thinking that people need to understand file management? Does it matter anymore when we’ve outsourced our file systems to the cloud?
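Those basic file-management skills can be demonstrated in a few lines at a shell prompt, the interface I keep defending. This is a minimal sketch; the folder and file names here are invented for illustration:

```shell
# The everyday file-management tasks described above, at a POSIX shell:
mkdir -p notes                  # create a folder
mv notes coursework             # rename the folder
touch essay-draft.txt           # create an empty file
mv essay-draft.txt coursework/  # move the file into the folder
ls coursework                   # list the folder's contents
```

The same operations exist in every desktop file manager as right-click menus and drag-and-drop; it’s precisely this layer, by any interface, that most mobile operating systems hide entirely.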
Tablets aren’t bad devices. They’re easy to pick up, so easy a baby can do it. Their touch interfaces are not only novel but in some cases simply brilliant. Problems arise when we consider the tablet as a full-featured replacement for our other computers. And maybe they can be, but those are the scenarios where we start fighting the nature of the machine itself (attaching a keyboard, jailbreaking or rooting the device).
I’m giving a presentation at my college soon and one slide is devoted to “The Access Rainbow” mentioned in Andrew Clement and Leslie Shade’s chapter in an old Community Informatics textbook.7 The rainbow is rather like Maslow’s hierarchy of needs, in that it works from base, material needs to more sophisticated, social ones. Once we have network infrastructure in place, we can get devices. Once we have devices, we can put software on them. Once we have software, we can work. Once we can work, we can build things, we can connect with each other, we can affect governance.
The problem is when our devices limit the colors of the rainbow that are even visible. The upper tiers of the rainbow, the tiers that really matter, are foreclosed. We cannot participate in the governance of information and communication technologies when we buy devices that only install the software which Apple or Google approves. And can we be fully digitally literate if we can’t experiment and break things on our devices? We can’t break things on an iPad; the iPad has outlet covers on all its electrical sockets when what we really need is a shock.
I worry I’ve already grown old and stodgy. “The kids with their touchy screens and electronic throat tattoos,” I mutter, madly typing abbreviations into a bash prompt. Will we be OK with easy devices? Do we need to break things, to change the permissions on a file, to code to be digitally literate? I don’t know.
- Responsive database interfaces are one area which has seen massive improvement over the last couple years and probably won’t be a concern too much longer. With web-scale discovery systems, many libraries are just now becoming able to abstract the differences between databases into a single search platform. Then the discovery system just needs to be responsive, rather than each of the dozens of different vendor interfaces. ↩
- So what if students need Dropbox, as long as it works? Well, forcing students into a particular cloud storage system is problematic. What if they prefer SpiderOak, Google Drive, SkyDrive, etc.? The lack of true file system access really hampers mobile devices in some situations, a point I’ll elaborate on further below. ↩
- One of the interesting aspects of Microsoft tablets is that they do come with a command line; you can swipe around all you want, but then open up PowerShell and mess with the Windows Registry to your heart’s content. It’s interesting and has great potential. I know Android also has some terminal emulator apps. ↩
- Hat-tip to fellow Tech Connect blogger Meghan Frazer for calling me out on this. ↩
- “Classroom iPads” on 11/1/13. It’s worth noting that plenty of people in this discussion touched on precisely the topic of this post, that the advantages of tablets seems to be misunderstood. ↩
- My wife points out that this isn’t a metaphor, that filesystems are literally that, filesystems. I can’t refute that claim. It’s either correct or an indication of just how ingrained the metaphor is. ↩
- Despite being from 1999, Community Informatics by Michael Gurstein is still incredibly relevant. It blew my mind during my first semester of library school and validated my decision to attend. It’s probably the best textbook I’ve ever read, which isn’t high praise but it is praise. ↩
Embedding the library in campus-wide orientations, as well as developing standalone library orientations, is often part of outreach and first year experience work. Reaching all students can be a challenge, so finding opportunities for better engaging the campus helps to promote the library and increase student awareness. Using a mobile app for orientations can provide many benefits, such as increasing interactivity and offering an asynchronous option for students to learn about the library on their own time. We have been trying out SCVNGR at the University of Arizona (UA) Libraries and are finding it a more fun and engaging way to deliver orientations and instruction to students.
Why use game design for library orientations and instruction?
Game-based learning can be a good match for orientations, just as it can be for instruction (I have explored this with ACRL TechConnect previously, looking at badges). Rather than just presenting a large amount of information to students or having them fill out a paper-based scavenger hunt activity, using something like SCVNGR can get students interacting more with the library in a way that offers more engagement in real time and with feedback. However, simply adding a layer of points and badges or other game mechanics to a non-game situation doesn’t automatically make it fun and engaging for students. In fact, doing this ineffectively can cause more harm than good (Nicholson, 2012). Finding a way to use game design to motivate participants beyond simply acquiring points tends to be the common goal in using game design in orientations and instruction. Thinking of the WIIFM (What’s In It For Me) principle from a student’s perspective can help, and in the game design we used at the University of Arizona with SCVNGR for a class orientation, we created activities based on common questions and concerns of students.
SCVNGR is a mobile app game for iPhone and Android where players can complete challenges in specific locations. Rather than getting clues and hints like in a traditional scavenger hunt, this game is more focused on activities within a location instead of finding the location. Although this takes some of the mystery away, it works very well for simply informing people about locations that are new to them and having them interact with the space.
Students need to physically be in the location for the app to work, where they use the location to search for “challenges” (single activities to complete) or “treks” (a series of single activities that make up the full experience for a location), and then complete the challenges or treks to earn points, badges, and recognition.
Some libraries have made their own mobile scavenger hunt activities without the aid of a paid app. For example, North Carolina State University uses the NCSU Libraries’ Mobile Scavenger Hunt, which is a combination of students recording responses in Evernote, real time interaction, and tracking by librarians. One of the reasons we went with SCVNGR, however, is because this sort of mobile orientation requires a good amount of librarian time and is synchronous, whereas SCVNGR does not require as much face-to-face librarian time and allows for asynchronous student participation. Although we do use more synchronous instruction for some of our classes, we also wanted to have the option for asynchronous activities, in particular for the large-scale orientations where many different groups will come in at many different times. Although SCVNGR is not free for us, the app is free to students. SCVNGR offers 24/7 support, and other academic institutions share insight and ideas in a community for universities.
Other academic libraries have used SCVNGR for orientations and even library instruction. A few examples are:
- University of California – San Diego uses SCVNGR for their orientations. They created a LibGuide specifically for their SCVNGR orientation where they also post the scoreboard and photos.
- Oregon State University uses SCVNGR for international student orientations to increase awareness and support the university initiative to increase the OSU international population from 5 to 10% of the student body.
- Boise State is using SCVNGR for instruction rather than a focus on orientations. They have provided information to students who then go on to create their own SCVNGR orientations as an assignment.
- University of California – Merced recently wrote about SCVNGR in their campus-wide orientations, incorporating other areas on campus into the library’s orientation. They decided to try out SCVNGR based on UCSD’s positive feedback, but had some issues with student turnout and discuss possible reasons for this in the article.
How did the UA Libraries use SCVNGR?
Because a lot of instruction has moved online and there are so many students to reach, we are working on SCVNGR treks for both instruction and basic orientations at the University of Arizona (UA). We are in the process of setting up treks for large-scale campus orientations (New Student Orientation, UA Up Close for both parents and students, etc.) that take place during the summer, and we have tested SCVNGR out on a smaller scale as a pilot for individual classes. There tends to be greater success and engagement if the trek is tied to something, such as a class assignment or a required portion of an orientation session that must be completed. One concern for an app-based activity is that not all students will have smartphones. This was alleviated by putting students into groups ahead of time, ensuring that at least one person in each group had a device compatible with SCVNGR. However, we do lend technology at the UA Libraries, and so if a group was without a smartphone or tablet, they would be able to check one out from the library.
We first piloted a trek with an American Indian Studies student success course (AIS197b). This course for freshmen introduces students to services on campus that will be useful to them while they are at the UA. Last year, we presented a quick information session on library services, and then had the students complete a scavenger hunt for a class grade (participation points) with pencil and paper throughout the library. Although they seemed glad to be able to get out and move around, it didn’t seem particularly fun and engaging. On top of that, every time the students got stuck or had a question, they had to come back to the main floor to find librarians and get help. In contrast, when students get an answer wrong in SCVNGR, feedback is programmed in to guide them to the correct information. And, because they don’t need clues to make it to the next step (they just go back and select the next challenge in the trek), they are able to continue without one mistake preventing them from moving on to the next activity. This semester, we first presented a brief instruction session (approximately 15-20 min) and then let students get started on SCVNGR.
You can see in the screenshot below how question design works, where you can select the location, how many points count toward the activity, type of activity (taking a photo, answering with text, or scanning a QR code), and then providing feedback. If a student answers a question incorrectly, as I mention above, they will receive feedback to help them in figuring out the correct answer. I really like that when students get answers right, they know instantly. This is positive reinforcement for them to continue.
The activities designed for students in this class were focused on photo and text-based challenges. We stayed away from QR codes because they can be finicky with some phones, and simply taking a picture of the QR code meets the challenge requirement for that option of activity. Our challenges included:
- Meet the reference desk (above): Students meet desk staff and ask how they can get in touch for reference assistance; answers are by text and students type in which method they think they would use the most: email, chat, phone, or in person.
- Prints for a day: Students find out about printing (a frequent question of new students), and text in how to pay for printing after finding the information at the Express Documents Center.
- Playing favorites: Students wander around the library and find their favorite study spot. Taking a picture completes the challenge, and all images are collected in the Trek’s statistics.
- Found in the stacks: After learning how to use the catalog (we provided a brief instruction session to this class before setting them loose), students search the catalog for books on a topic they are interested in, then locate the book on the shelf and take a picture. One student used this time to find books for another class and was really glad he got some practice.
- A room of one’s own: The UA Libraries implemented online study room reservations a year ago. In order to introduce this new option to students, this challenge had them use their smartphones to go to the mobile reservation page, find out the maximum number of hours a study room can be reserved for, and text that in.
SCVNGR worked great with this class for simple tasks, such as meeting people at the reference desk, finding a book, or taking a picture of a favorite study spot, but for tasks that might require more critical thinking or more intricate work, it would not be the best platform for that level of instruction. SCVNGR’s assessment options are limited for students to respond to questions or complete an activity. Texting in detailed answers or engaging in tasks like searching a database would be much harder to record. Likewise, instruction tied to critical thinking is not particularly location-based (evaluating a source or exploring copyright issues, for example), so it would be hard to tie these tasks and the acquisition of those skills to an actual location-based activity to track. One instance of this was with the Found in the Stacks challenge; students were supposed to search for a book in the catalog and then locate it on the shelf, but there would be nothing stopping them from just finding a random book on the shelf and taking a picture of it to complete the challenge. SCVNGR provides a style guide to help in game design, and the overall understanding from this document is that simplicity is most effective for this platform.
Another feature that works well is being able to choose if the Trek is competitive or not, and also use “SmartRoute,” which is the ability to have challenges show up for participants based on distance and least-crowded areas. This is wonderful, particularly as students get sort of congested at certain points in a scavenger hunt: they all crowd around the same materials or locations simultaneously because they’re making the same progress through the activity. We chose to use SmartRoute for this class so they would be spread out during the game.
When trying to assess student effort and impact of the trek, you can look at stats and rankings. It’s possible to view specific student progress, all activity by all participants, and rankings organized by points.
Another feature is the ability to collect items submitted for challenges (particularly pictures). One of our challenges is for students to find their favorite study spot in the library and take a picture of it. This should be fun for them to think about and is fairly easy, and it helps us do some space assessment. It’s then possible to collect pictures like the following (student’s privacy protected via purple blob).
On the topic of privacy, students enter in their name to set up an account, but only their first name and first initial of their last name appear as their username. Although last names are then hidden, SCVNGR data is viewable by anyone who is within the geographical range to access the challenge: it is not closed to an institution. If students choose to take pictures of themselves, their identity may be revealed, but it is possible to maintain some privacy by not sharing images of specific individuals or sharing any personal information through text responses. On the flip side of not wanting to associate individual students with their specific activities, it gets trickier when an instructor plans to award points for student participation. In that case, it’s possible to request reports from SCVNGR for instructors so they can see how much and which students participated. In a large class of over 100 students, looking at the data can be messier, particularly if students have the same first name and last initial. Because of this issue, SCVNGR might be better used for large-scale orientations where participation does not need to be tracked, and small classes where instructors would be easily able to know who is who in the data for activity.
Both student and instructor feedback was very positive. Students seemed to be having fun, laughing, and were not getting stuck nearly as much as the previous year’s pencil-and-paper hunt. The instructor noted it seemed a lot more streamlined and engaging for the class. When students checked in with us at the end before heading out, they said they enjoyed the activity and although there were a couple of hiccups with the software and/or how we designed the trek, they said it was a good experience and they felt more comfortable with using the library.
Next time, I would be more careful about using text responses. I had gone down to our printing center to tell the current student worker what answers students in the class would be looking for so she could answer for them, but they wound up speaking with someone else and getting different answers. Otherwise, the level of questions seemed appropriate for this class, and it was a good way to pilot how SCVNGR works, whether students might like it, and how long different types of questions take before bringing this to campus on a larger scale. I would also be cautious about using SCVNGR too heavily for instruction, since it doesn’t seem to have capabilities for more complex tasks or a great deal of critical thinking. It is more suited to basic instruction and getting students more comfortable in using the library.
Pros:

- Ability to reach many students asynchronously
- Anyone can complete challenges and treks; this is great for prospective students and families, community groups, and any programs doing outreach or partnerships outside of campus since a university login is not required.
- Can be coordinated with campus treks if other units have accounts or a university-wide license is purchased.
- WYSIWYG interface, no programming skills necessary
- Order of challenges in a trek can be staggered so not everyone is competing for the same resources at the same time.
- Can collect useful data through users submitting photos and comments (for example, we can examine library space and student use by seeing where students’ favorite spots to study are).
Cons:

- SCVNGR is not free to use; an annual fee applies (in the $900 range for a library-only license, which is not institution-wide).
- Privacy is a concern since anyone can see activity in a location; it’s not possible to close this to campus.
- When completing a trek, users do not get automatic prompts to proceed to the next challenge; instead, they must go back to the home location screen and choose the next challenge (this can get a little confusing for students).
- SCVNGR is more difficult to use with instruction, especially when looking to incorporate critical thinking and more complex activities.
- Instructors might have a harder time grading participation because treks are open to anyone; only students’ first names and last initials appear, so if a large class completes a trek for an assignment, or if an orientation trek for the public is used, a special report must be requested from SCVNGR, which the library can then send to the instructor for grading purposes.
SCVNGR is a good way to increase awareness and get students and other groups comfortable using the library. One of its main benefits is that it is asynchronous, so getting people interacting with services, collections, and spaces does not require a great deal of library staff time. Although this platform is not perfect for more in-depth instruction, it does work at the basic orientation level, and the students and instructor in the course we piloted it with had a good experience.
About Our Guest Author: Nicole Pagowsky is an Instructional Services Librarian at the University of Arizona where she explores game-based learning, student retention, and UX. You can find her on Twitter, @pumpedlibrarian.
Gamification in libraries has become a topic of interest in the professional discourse, and one that ACRL TechConnect has covered in Applying Game Dynamics to Library Services and Why Gamify and What to Avoid in Gamification. Much of what has been written about badging systems in libraries pertains to gamifying library services. However, being an Instructional Services Librarian, I have been interested in tying gamification to library instruction.
Library skills are not always part of required learning outcomes or directly associated with particular classes, so I have had to think more creatively about promoting and embedding library tutorials; this prompted my interest in tying a badging system to the University of Arizona Libraries’ online learning objects. As a brief review, badges are visual representations of skills and achievements. They can be used with or instead of grades, depending on the scenario, and include details to support their credibility (criteria, issuer, evidence, currency).
Becoming a beta tester for Purdue’s Passport platform gives me the opportunity to better sketch out our plans and to test how gamification could work in this context. Passport, according to Purdue, is “A learning system that demonstrates academic achievement through customizable badges.” Through this platform, instructors can design instruction so that badges are associated with learning outcomes. Currently, Passport can only be used by applying to be a beta tester. As Purdue improves the software, it should become available to more people and gain greater integration (it currently connects with Mozilla Open Backpack and within the Purdue system). We are still comparing platforms and possibilities for the University of Arizona Libraries, and testing Passport has been the first step in figuring out what we want, what is available, and how we would like to design this form of instruction. I will share my impressions of Passport and of using badging technology for these purposes from my experience with the software.
Refresher on motivation
It’s important to understand how motivation works in relation to a points-and-badges system, while also having a clear goal in mind. I recently wrote a literature review on motivation in gamified learning scenarios as part of my work toward a second Master’s in Educational Technology. The general ideas to take away are the importance of incorporating game mechanics thoughtfully into your framework to avoid users relying solely on the scoring system, and of focusing on the engagement aspects of gamification rather than using badges and points just for manipulation. Points should serve as a feedback mechanism rather than be promoted as items to harvest.
Structure and scalability
Putting this into perspective for gamifying library instruction at the University of Arizona, we want to be sure student motivation is directed at developing research skills that can be visually demonstrated to instructors and future employers through badges, with points serving as feedback and further motivation. We are using the ACRL Information Literacy Standards as an outline for the badges we create; the Standards are not perfect, but they serve well as a map for conceptualizing research skills and are a way we can organize the content. Within each skill set or badge, activities for completion are multidimensional: students must engage in a variety of tasks, such as doing a tutorial, reading a related article or news story, and completing a quiz. We plan to allow for risk taking and failure — important aspects of game design — so students can re-try the material until they understand it (Gee, 2007).
As you can see in this screen capture, the badges corresponding to the ACRL Standards include: Research Initiator (Standard 1), Research Assailant (Standard 2), Research Investigator (Standard 3), and Research Warrior (Standard 4). As a note, I have not yet created a badge for Standard 5 or one to correspond with our orientations (also, all names you can see in any image I include are of my colleagues trying out the badges, and not of students). A great aspect of this platform is the ability to design your own badges with their WYSIWYG editor.
Because a major issue for us is scalability with limited FTE, we have to be cautious in which assessment methods we choose for approving badges. Since we would have a hard time offering meaningful, individualized feedback for every student who would complete these tasks, having something automatic is more ideal. Passport allows options for students to test their skills, with multiple-choice quizzes, uploading a document, and entering text. For our purposes, using multiple-choice quizzes with predetermined responses is currently the best method. If we develop specific badges for smaller courses on a case-by-case basis, it might be possible to accept written responses and more detailed work, but in trying to roll this out to campus-at-large, automated scoring is necessary.
Within each badge, also referred to as a challenge, there are tasks to complete; finishing these tasks adds up to earning the badge. It’s essentially leveling up (progressing to the next level based on achievement), although, the way Passport is designed, students can complete the tasks in any order. Within the suite of badges, I have reinforced information and skills throughout, so students must use previously learned skills for future success. In this screen capture, you can see the overall layout by task title.
When including tasks that require instructor approval (if students were to submit documents or write text), an instructor would click on each yellow box stating that approval is needed to determine if the student successfully completed the task and supply personalized feedback (image above). And you can see the breakdown of tasks under each challenge to review what was learned; this can serve as confirmation for outside parties of what kind of work each badge entailed (image below).
Once badges are earned, they can be displayed in a user’s Passport profile and Mozilla Open Badges. Here is an example of what a badge portfolio looks like:
Passport “classrooms” are closed and require a login for earning badges (per FERPA), but if students agree to connect with Mozilla’s Open Badges Backpack, achievements can then be shared on Twitter, Facebook, LinkedIn, and other networks. Badges can also connect with e-portfolios and resumes (since the platform is in beta, this functionality currently works best with Purdue systems). This could be a great additional motivator for students in helping them get jobs. From Project Information Literacy, we know that employers find new graduates lacking in research skills, so being able to present these skills as fulfilled can be useful for soon-to-be and recent graduates. The badges link back to more information, as mentioned, so employers can get more detail. Students can even make their submitted work publicly available so employers, instructors, and peers can see their efforts.
Whether or not it is possible to integrate Passport fully into our library website for students to access, using this tool has at least given me a way to essentially sketch out how our badging system will work. We can also try some user testing with students on these tasks to gauge motivation and instructional effectiveness. Having this system become campus-wide in collaboration with other units and departments would also aid in creating more meaning behind the badges; but in the meantime, tying this smaller scale layout to specific class instruction or non-disciplinary collaborations will be very useful.
Although some sources say gamification will take a huge nosedive by 2014 due to poor design and over-saturation, keeping tabs on other available platforms and on how best to incorporate this technology into library instruction is where I will be looking this semester and beyond, as we work on plans for rolling out a full badging system within the next couple of years. Making learning more experiential and creating choose-your-own-adventure scenarios are effective ways of giving students ownership over their education. Using points and badges to manipulate users is certainly detrimental and should fall out of use in the near future, but using this framework in a positive manner, for motivation and to support student learning, can have beneficial effects for students, campus, and the library.
Dignan, A. (2012). Game Frame. New York: The Free Press.
Gee, J. P. (2007). What video games have to teach us about learning and literacy. New York: Palgrave Macmillan.
Kapp, K. M. (2012). The gamification of learning and instruction: Game-based methods and strategies for training and education. San Francisco, CA: Pfeiffer.
Koster, R. (2005). A theory of fun for game design. Scottsdale, AZ: Paraglyph Press.
Because Play Matters: A game lab dedicated to transformative games and play for informal learning environments in the iSchool at Syracuse: http://becauseplaymatters.com/
Digital badges show students’ skills along with degree (Purdue News): http://www.purdue.edu/newsroom/releases/2012/Q3/digital-badges-show-students-skills-along-with-degree.html
Gamification Research Network: http://gamification-research.org/
TL-DR: Where gamers and information collide: http://tl-dr.ca/
Librarians often use presentation slides to teach a class, run a workshop, or give a talk. Ideally, you would have easy access to the Internet in those settings, but more often than not you may find only spotty Internet signals. If you had planned on using presentation slides stored in the cloud, no access to the Internet would mean no slides for your presentation. It doesn’t have to be that way. In this post, we will show you how to save your presentation slides locally on your iPad, so that you will be fully prepared to present without Internet access. You will only need a few tools, and most of them are freely available.
1. Haiku Deck – Make slides on the iPad
If your presentation slides do not require a lot of text, Haiku Deck is a nice iPad app for creating a complete set of slides without a computer. The Haiku Deck app allows you to create colorful presentation slides quickly by searching and browsing a number of CC-licensed images and photographs on Flickr and adding a few words to each slide. Once you select the images, Haiku Deck does the rest of the work, inserting the references to each Flickr image you chose and creating a nice set of presentation slides.
You can play and present these slides directly from your iPad. Since Haiku Deck stores these slides locally, you need access to the Internet only while you are creating the slides using the images in Flickr through Haiku Deck. For presenting already-made slides, you do not need to be connected to the Internet. If you would like, you can also export the result as a PowerPoint file from Haiku Deck. This is useful if you want to make further changes to the slides using other software on your computer. But bear in mind that once exported as a PowerPoint file, the text you placed using Haiku Deck is no longer editable. Below is an example that shows what slides made with Haiku Deck look like.
So next time when you get a last-minute instruction request from a teaching faculty member, consider spending 10-15 minutes to create a colorful and eye-catching set of slides with minimal text to have it accompany your classroom instruction or a short presentation all on your iPad.
2. SlideShark – Display slides on the iPad
SlideShark is a tool not so much for creating slides as for displaying the slides properly on the iPad (and also for the iPhone). In order to use SlideShark, you need to install the SlideShark app on your iPad first and then create an account. Once this is done, you can go to the SlideShark website (https://www.slideshark.com/) and log in. Here you can upload your presentation files in the MS PowerPoint format.
Once the file is uploaded to the SlideShark website, open the SlideShark app on your iPad and sync the app with the website by pressing the sync icon on top. This will display all the presentation files that have been uploaded to your SlideShark account. Here, you can download and save a local copy of your presentation on your iPad. You will need a live Internet connection for this step, but once your presentation file is downloaded onto the SlideShark app, you no longer need to be online to display and project those slides. While using your iPad to display your slides, you can also place your finger on the iPad screen, and it will be displayed on the projector as a laser-pointer mark.
Last but not least, when you pack your iPad and run to your classroom or presentation room, don’t forget your adapter. To connect your iPad to a projector, you usually need an iPad-to-VGA adapter, because most projectors have a VGA port. But different ports on display devices require different adapters, so find out in advance whether the projector you will be using has a VGA, DVI, or HDMI port. (Also remember that the adapter that connects your MacBook to a projector is a Mini DVI-to-VGA adapter and will not work with your iPad.)
3. Non-free option: Keynote
Haiku Deck and SlideShark are both free. But if you are willing to invest about ten dollars for convenience, another great presentation app is Keynote (currently $9.99 in the App Store). While Haiku Deck is most useful for creating simple slides with a little bit of text, Keynote allows you to create more complicated slides on your iPad. If you use Keynote, you also don’t have to go through SlideShark for the offline display of your presentation slides.
Creating presentations in the Keynote iPad app is simple, and the app uses the same conventions and user interface as the familiar Keynote application for OS X. Both versions of Keynote can share the same presentation files, although care should be taken to use a 1024 × 768 screen resolution and standard Apple fonts and slide templates. iCloud can be used to sync presentations between iPads and other computers, and users can download presentations to the iPad and present without Internet access.
The iPad version of Keynote has many features that make Keynote loved by its users. You can add media, tables, charts, and shapes into your presentation. Using Keynote, you can also display your slides to the audience on the attached projector while you view the same slides with a timer and notes on your iPad. (See the screenshots below.) For those with an iPhone or iPod Touch, the Keynote Remote app allows presenters to remotely control their slideshows without the need to stand at the podium or physically touch the iPad to advance their slides.
Do you have any useful tips for creating slides and presenting with an iPad? Share your ideas in the comments!
How do you orient students to the library? Put them in a classroom and show them the website? Walk them around in a giant herd, pointing out the important spaces? That’s how we at North Carolina State University Libraries were doing it, too. And we were finding ourselves a little disappointed. Wouldn’t it be better, we thought, if we could get the students out into the library, actually engaging with staff, exploring the spaces, and discovering the collections themselves?
Background & Rationale
We had long felt that classroom-based library orientation had inherent flaws and we had tried several alternatives, including a scavenger hunt. Although the scavenger hunt was popular, it was not sustainable: it took a significant amount of work to hide paper clues around the library before each hunt and the activity could not be scaled up to meet the needs of over a hundred ENG 101 classes per semester. So, we focused our efforts on enhancing traditional classroom-based instruction and creating online tutorials.
In 2011, I held a focus group with several instructors in the First Year Writing Program, and the message was clear: they believed that students would benefit from more face-to-face library instruction and that instruction should be more active and engaging. This confirmed my gut feeling that, while online tutorials can be very effective at delivering content, they do not necessarily promote our “affective” goals of reducing library-related anxiety and fostering confidence in using the library’s collections and spaces. After classroom instruction, we distribute a short survey that asks students if they remain confused about how to find information, about whom to ask for help, about how to navigate the physical spaces of the library, or anything else. The most common response by far – from 44% of surveyed students – was that they still didn’t feel comfortable finding their way around our large library, which is in fact four merged buildings. We needed to develop an activity that would simultaneously teach students about our collections and services, introduce them to critical library staff, and help them learn their way around the library’s spaces.
It was with this feedback in mind that two colleagues — Adam Rogers and Adrienne Lai — and I revisited the idea of the scavenger hunt in March 2011. Since the last scavenger hunt attempt in 2010, mobile devices and the cloud-based apps that run on them had become mainstream. If we could develop a scavenger hunt that relied on mobile technology, such as iPod Touches, and that didn’t rely on students finding paper clues throughout the library, we might be able to sustain and scale it.
We first investigated out-of-the-box scavenger hunt solutions such as SCVNGR and Scavenger Hunt With Friends, which were appealing in that they were self-contained and provided automatic scoring. However, we did not have a budget for the project and discovered that the free versions could not meet our needs. Furthermore, apps that rely on GPS coordinates to display challenges and questions did not work reliably inside our building.
Ultimately, we decided we needed to come up with something ourselves that would allow students to submit answers to scavenger hunt questions “mobilely”, automatically calculate scores or allow us to score student answers rapidly, and enable us to display results and provide feedback at the end of the 50-minute activity. Our eventual solution made use of traditional approaches to scavenger hunts, in the form of paper maps and clue sheets, alongside novel cloud-based technologies such as Evernote and Google Docs.
The Scavenger Hunt in 50 Minutes
0:00-10:00: A class arrives at the library classroom and is greeted by a librarian, who introduces the activity and divides the group into 3-5 teams of about 4 students. Each team gets a packet with a list of 15 questions and an iPod Touch. The iPod Touches are already logged into Evernote accounts assigned to each team.
10:00-35:00: Teams disperse into the library to discover the answers to their 15 questions. Some questions require text-based answers; others prompt students to submit a photo. We ask them to introduce themselves to and take a photo with a librarian, to find a book in the stacks and take a photo of it as evidence, and to find the collection of circulating DVDs, among other things. Each answer is submitted as an Evernote note. While students are exploring the library, a librarian monitors the teams’ Evernote accounts (which have been shared with our master account) and scores their answers using a Google Docs spreadsheet. Meanwhile, another library staff member copies student photos into a PowerPoint document to run while students return at the end of the hunt.
35:00-50:00: At the end of 25 minutes, students return to the classroom, where a slideshow displays the photos they took, the correct answers to the questions, and a URL to a short survey about the activity. After all team members have returned, the librarians reveal the teams’ scores, declare a winning team, and distribute prizes.
The scavenger hunt has been very popular with both students and faculty. In the two semesters we have been offering the hunt (Fall 2011 and Spring 2012), we have facilitated over 90 hunts and reached over 1,600 students. 91% of surveyed students considered the activity fun and enjoyable, 93% said they learned something new about the library, and 95% indicated that they felt comfortable asking a staff member for help after having completed the activity. Instructors find the activity worthwhile as well. One ENG 101 faculty member wrote that the “activity engaged students… on a level that led to increased understanding, deeper learning, and almost complete recall of important library functions.”
Lessons Learned & Adjustments
After almost 100 scavenger hunts, we have learned how to optimize this activity for our target audiences. First we discovered that, for our institution, this scavenger hunt works best when scheduled for a class. Often, however, one instructor would schedule scavenger hunts for three consecutive sections of a class. In these cases, we learned to use only half our iPods for the first session. In the second session, while the second half of the iPods were in use, the first half would be refreshed and made ready for the last group of students.
In the very early scavenger hunts in Fall 2011, students reported lagginess with the iPods and occasional crashing of Evernote. However, since some critical iOS and Evernote updates, this has not been a problem.
Finally, after an unexpected website outage, we learned how dependent our activity was on the functionality of our website. We now keep an ‘emergency’ version of our scavenger hunt questions in case of another outage.
More details about implementing the NCSU Libraries Mobile Scavenger Hunt are available on the NCSU Libraries’ website.
About Our Guest Author: Anne Burke is Undergraduate Instruction & Outreach Librarian at NCSU Libraries. She holds an MSLIS from Syracuse University and an MA in Education from Manhattanville College. She likes to explore new and exciting ways to teach students about information.
At the NCSU Libraries, my colleagues and I in the Research and Information Services department do a fair bit of instruction, especially to classes from the university’s First Year Writing Program. Some new initiatives and outreach have significantly increased our instruction load, to the point where it was getting more difficult for us to effectively cover all the sessions that were requested due to practical limits of our schedules. By way of a solution, we wanted to train some of our grad assistants, who (at the time of this writing) are all library/information science students from that school down the road, in the dark arts of basic library instruction, to help spread the instruction burden out a little.
This would work great, but there’s a secondary problem: since UNC is a good 40 minute drive away, our grad assistants tend to have very rigid schedules, which are fixed well in advance — so we can’t just alter our grad assistants’ schedules on short notice to have them cover a class. Meanwhile, instruction scheduling is very haphazard, due to wide variation in how course slots are configured in the weekly calendar, so it can be hard to predict when instruction requests are likely to be scheduled. What we need is a technique to maximize the likelihood that a grad student’s standing schedule will overlap with the timing of instruction requests that we do get — before the requests come in.
Searching for a Solution – Bar graph-based analysis
The obvious solution was to try to figure out when during the day and week we provided library instruction most frequently. If we could figure this out, we could work with our grad students to get their schedules to coincide with these busy periods.
Luckily, we had some accrued data on our instructional activity from previous semesters. This seemed like the obvious starting point: look at when we taught previously and see what days and times of day were most popular. The data consisted of about 80 instruction sessions given over the course of the prior two semesters; data included date, day of week, session start time, and a few other tidbits. The data was basically scraped by hand from the instruction records we maintain for annual reports; my colleague Anne Burke did the dirty work of collecting and cleaning the data, as well as the initial analysis.
Anne’s first pass at analyzing the data was to look at each day of the week in terms of courses taught in the morning, afternoon, and evening. A bit of hand-counting and spreadsheet magic produced this:
This chart was somewhat helpful — certainly it’s clear that Monday, Tuesday, and Thursday are our busiest days — but it doesn’t provide a lot of clarity regarding the times of day that are hot for instruction. Other than noting that Friday evening is a dead time (hardly a huge insight), we don’t really get a lot of new information on how the instruction sessions shake out throughout the week.
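For what it’s worth, the day-by-period tally behind a chart like this takes only a few lines to reproduce. Here is a sketch in Python; the session records and the period boundaries are illustrative stand-ins, not our actual data:

```python
from collections import Counter

# Hypothetical records: (day of week, 24-hour start time) per session.
sessions = [("Mon", 9), ("Mon", 14), ("Tue", 8), ("Tue", 13),
            ("Thu", 10), ("Thu", 16), ("Fri", 9)]

def period(hour):
    """Bucket a start hour into the coarse periods used in the chart."""
    if hour < 12:
        return "morning"
    elif hour < 17:
        return "afternoon"
    return "evening"

# Count sessions per (day, period) cell, as in the stacked bar chart.
tally = Counter((day, period(hour)) for day, hour in sessions)
```

A real version would read the records from the instruction spreadsheet rather than a hard-coded list.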
Let’s Get Visual – Heatmap-based visualization
The chart above gets the fundamentals right — since we’re designing weekly schedules for our grad assistants, it’s clear that the relevant dimensions are days of week and times of day. However, there are basically two problems with the stacked bar chart approach: (1) The resolution of the stacked bars — morning, afternoon and evening — is too coarse. We need to get more granular if we’re really going to see the times that are popular for instruction; (2) The stacked bar chart slices just don’t fit our mental model of a week. If we’re going to solve a calendaring problem, doesn’t it make a lot of sense to create a visualization that looks like a calendar?
What we need is a matrix — something where one dimension is the day of the week and the other dimension is the hour of the day (with proportional spacing) — just like a weekly planner. Then for any given hour, we need something to represent how “popular” that time slot is for instruction. It’d be great if we had some way for closely clustered but non-overlapping sessions to contribute “weight” to each other, since it’s not guaranteed that instruction session timing will coincide precisely.
When I thought about analyzing the data in these terms, the concept of a heatmap immediately came to mind. A heatmap is a tool commonly used to look for areas of density in spatial data. It’s often used for mapping click or eye-tracking data on websites, to develop an understanding of the areas of interest on the website. A heatmap’s density modeling works like this: each data point is mapped in two dimensions and displayed graphically as a circular “blob” with a small halo effect; in closely-packed data, the blobs overlap. Areas of overlap are drawn with more intense color, and the intensity effect is cumulative, so the regions with the most intense color correspond to the areas of highest density of points.
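The cumulative-intensity idea can be illustrated with a toy grid. This is not ClickHeat’s actual code, just a sketch of the principle, with a simple linear falloff standing in for the halo effect:

```python
import math

W, H = 40, 10                          # grid dimensions
grid = [[0.0] * W for _ in range(H)]

def add_blob(x, y, radius=3.0):
    """Stamp a circular blob with a soft halo; overlapping blobs add up."""
    for gy in range(H):
        for gx in range(W):
            d = math.hypot(gx - x, gy - y)
            if d < radius:
                grid[gy][gx] += 1.0 - d / radius  # brightest at the center

add_blob(10, 5)   # two closely packed data points...
add_blob(11, 5)   # ...reinforce each other where their halos overlap
add_blob(30, 5)   # an isolated point stays dimmer

peak = max(max(row) for row in grid)  # highest density anywhere on the grid
```

Rendering then maps each cell’s accumulated value to a color, so the densest regions come out most intense.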
I had heatmaps on the brain since I had just used them extensively to analyze user interaction patterns with a touchscreen application that I had recently developed.
Part of my motivation for using heatmaps to solve our scheduling problem was simply to use the tools I had at hand: it seemed that it would be a simple matter to convert the instruction data into a form that would be amenable to modeling with the heatmap software I had access to. But in a lot of ways, a heatmap was a perfect tool: with a proper arrangement of the data, the heatmap’s ability to model intensity would highlight the parts of each day where the most instruction occurred, without having to worry too much about the precise timing of instruction sessions.
The heatmap generation tool that I had was a slightly modified version of the Heatmap PHP class from LabsMedia’s ClickHeat, an open-source tool for website click tracking. My modified version of the heatmap package takes in an array of (x,y) ordered pairs, corresponding to the locations of the data points to be mapped, and outputs a PNG file of the generated heatmap.
So here was the plan: I would convert each instruction session in the data to a set of (x,y) coordinates, with one coordinate representing day of week and the other representing time of day. Feeding these coordinates into the heatmap software would, I hoped, create five colorful swatches, one for each day of the week. The brightest regions in the swatches would represent the busiest times of the corresponding days.
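The conversion step of this plan can be sketched in a few lines of Python; the spacing constants below are illustrative, not the values actually used:

```python
# y encodes the day of the week, with rows spaced far enough apart
# that one day's halo cannot bleed into a neighboring day's row;
# x encodes the time of day, proportionally.
DAY_Y = {"Mon": 50, "Tue": 150, "Wed": 250, "Thu": 350, "Fri": 450}
DAY_START, DAY_END = 8.0, 19.5   # roughly 8:00am to 7:30pm
X_RANGE = 600                    # horizontal extent of the heatmap image

def session_to_point(day, start_hour):
    """Convert a session (day, fractional start hour) to heatmap coordinates."""
    frac = (start_hour - DAY_START) / (DAY_END - DAY_START)
    return (round(frac * X_RANGE), DAY_Y[day])

# An 8:00am Monday class lands at the left edge of Monday's row:
session_to_point("Mon", 8.0)   # -> (0, 50)
```

Feeding the resulting list of points into the heatmap class is then all that remains.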
Arbitrarily, I selected the y-coordinate to represent the day of the week. So I decided that any Monday slot, for instance, would be represented by some small (but nonzero) y-coordinate, with Tuesday represented by some greater y-coordinate, etc., with the intervals between consecutive days of the week equal. The main concern in assigning these y-coordinates was for the generated swatches to be far enough apart so that the heatmap “halo” around one day of the week would not interfere with its neighbors — we’re treating the days of the week independently. Then it was a simple matter of mapping time of day to the x-coordinate in a proportional manner. The graphic below shows the output from this process.
In this graphic, days of the week are represented by the horizontal rows of blobs, with Monday as the first row and Friday as the last. The leftmost extent of each row corresponds to approximately 8am, while the rightmost extent is about 7:30pm. The key in the upper left indicates (more or less) the number of overlapping data points in a given location. A bit of labeling helps to clarify things:
Right away, we get a good sense of the shape of the instruction week. This presentation reinforces the findings of the earlier chart: that Monday, Tuesday, and Thursday are busiest, and that Friday afternoon is basically dead. But we do see a few other interesting tidbits, which are visible to us specifically through the use of the heatmap:
- Monday, Tuesday and Thursday aren’t just busy, they’re consistently well-trafficked throughout the day.
- Friday is really quite slow throughout.
- There are a few interesting hotspots scattered here and there, notably first thing in the morning on Tuesday.
- Wednesday is quite sparse overall, except for two or three prominent afternoon/evening times.
- There is a block of late afternoon and early evening time slots that are consistently busy in the first half of the week.
Using this information, we can take a much more informed approach to scheduling our graduate students, and hopefully be able to maximize their availability for instruction sessions.
“Better than it was before. Better. Stronger. Faster.” – Open questions and areas for improvement
As a proof of concept, this approach to analyzing our instruction data for the purposes of setting student schedules seems quite promising. We used our findings to inform our scheduling of graduate students this semester, but it’s hard to know whether our findings can even be validated: since this is the first semester where we’re actively assigning instruction to our graduate students, there’s no prior data to compare this semester against with respect to the amount of grad student instruction performed. Nevertheless, it seems clear that knowledge of popular instruction times is a good guideline for scheduling grad students for this purpose.
There’s also plenty of work to be done as far as data collection and analysis is concerned. In particular:
- Data curation by hand is burdensome and inefficient. If we can automate the data collection process at all, we’ll be in a much better position to repeat this type of analysis in future semesters.
- The current data analysis completely ignores class session length, which is an important factor for scheduling (class times vary between 50 and 100 minutes). This data is recorded in our instruction spreadsheet, but there aren’t any set guidelines on how it’s entered — librarians entering their instruction data tend to round to the nearest quarter- or half-hour increment at their own discretion, so a 50-minute class is sometimes listed as “.75 hours” and other times as “1 hour”. More accurate and consistent session time recording would allow us to reliably use session length in our analysis.
- To make the best use of session length in the analysis, I’ll have to learn a little bit more about PHP’s image generation libraries. The current approach is basically a plug-in adaptation of ClickHeat’s existing Heatmap class, which is only designed to handle “point” data. To modify the code to treat sessions as little line segments corresponding to their duration (rather than points that correspond to their start times) would require using image processing methods that are currently beyond my ken.
- A little more knowledge of the image libraries would also allow me to add automatic labeling to the output file. You’ll notice the prominent use of “ish” to describe the hours dimension of the labeled heatmap above: this is because I had neither the inclination nor the patience to count pixels to determine where exactly the labels should go. With better knowledge of the image libraries I would be able to add graphical text labels directly to the generated heatmap, at precisely the correct location.
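To make the session-length ideas above concrete, here is a hypothetical Python sketch (the real pipeline is PHP/ClickHeat). One helper normalizes free-form hours entries to minutes — though note it cannot recover a true 50-minute duration from an already-rounded “1 hour” entry, which is why consistent recording matters. The other approximates a session as a run of points along its duration rather than a single start-time point, a crude stand-in for drawing line segments. The spreadsheet field format, pixel scale, and step size are all assumptions:

```python
import re

def normalize_minutes(entry):
    """Parse a free-form hours entry like '.75 hours' or '1 hour'
    into minutes. Returns None if no number is found."""
    m = re.search(r"[\d.]+", entry)
    return round(float(m.group()) * 60) if m else None

def session_points(start_x, y, duration_min, px_per_min=2, step_min=5):
    """Approximate a session as a run of heatmap points spanning its
    duration (one point every step_min minutes), instead of a single
    point at its start time."""
    return [(start_x + m * px_per_min, y)
            for m in range(0, duration_min + 1, step_min)]

normalize_minutes(".75 hours")  # → 45
```

Feeding the output of `session_points` into the same heatmap machinery would weight long sessions more heavily than short ones, which is the effect the line-segment approach is after.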
There are other fundamental questions that may be worth answering — or at least experimenting with — as well. For instance, in this analysis I used data about actual instruction sessions performed. But when lecturers request library sessions, they include two or three “preferred” dates, of which we pick the one that fits our librarian and room schedules best. For the purposes of analysis, it’s not entirely clear whether we should use the actual instruction data, which takes into account real space limitations but is also skewed by librarian availability; or whether we should look strictly at what lecturers are requesting, which might allow us to schedule our grad students in a way that could accommodate lecturers’ first choices better, but which might run us up against the library’s space limitations. In previous semesters, we didn’t store the data on the requests we received; this semester we’re doing that, so I’ll likely perform two analyses, one based on our actual instruction and one based on requests. Some insight might be gained by comparing the results of the two analyses, but it’s unclear what exactly the outcome will be.
Finally, it’s hard to predict how long-term trends in the data will affect our ability to plan for future semesters. It’s unclear whether prior semesters are a good indicator of future semesters, especially as lecturers move into and out of the First Year Writing Program, the source of the vast majority of our requests. We’ll get a better sense of this, presumably, as we perform more frequent analyses — it would also make sense to examine each semester separately to look for trends in instruction scheduling from semester to semester.
In any case, there’s plenty of experimenting left to do and plenty of improvements that we could make.
Reflections and Lessons Learned
There are a few points that I took away from this experience. A big one is simply that sometimes the right approach is a totally unexpected one. You can gain some interesting insights if you don’t limit yourself to the tools that are most familiar for a particular problem. Don’t be afraid to throw data at the wall and see what sticks.
Really, what we did in this case is not so different from creating separate histograms of instruction times for each day of the week, and comparing the histograms to each other. But using heatmaps gave us a couple of advantages over traditional histograms: first, our bin size is essentially infinitely narrow; because of the proximity effects of the heatmap calculation, nearby but non-overlapping data points still contribute weight to each other without us having to define bins as in a regular histogram. Second, histograms are typically drawn in two dimensions, which would make comparing them against each other rather a nuisance. In this case, our separate heatmap graphics for each day of the week are basically one-dimensional, which allows us to compare them side by side with little fuss. This technique could be used for side-by-side examinations of multiple sets of any histogram-like data for quick and intuitive at-a-glance comparison.
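The bin-free behavior described above can be sketched as a one-dimensional kernel profile: each data point contributes a smooth “halo” of intensity, so nearby but non-overlapping points reinforce each other without any explicit histogram bins. This is a generic illustration, not ClickHeat’s actual computation, and the Gaussian kernel and halo width `sigma` are assumptions:

```python
import math

def heat_profile(points, xs, sigma=10.0):
    """One-dimensional heatmap intensity: each data point contributes
    a Gaussian halo, so nearby points add weight to each other with
    no binning step. sigma controls the halo width (an assumption)."""
    return [sum(math.exp(-((x - p) ** 2) / (2 * sigma ** 2)) for p in points)
            for x in xs]

# Two sessions ten "minutes" apart reinforce each other; an isolated
# session far away does not get that boost.
profile = heat_profile([100, 110, 300], xs=[100, 300])
```

With a fixed-bin histogram, the points at 100 and 110 could easily land in separate bins and each look unremarkable; here their halos overlap and the region between them lights up.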
In particular, it’s important to remember — especially if your familiarity with heatmaps is already firmly entrenched in a spatial mapping context — that data doesn’t have to be spatial in order to be analyzed with heatmaps. This is really just an extension of the idea of graphical data analysis: A heatmap is just another way to look at arbitrary data represented graphically, not so different from a bar graph, pie chart, or scatter plot. Anything that you can express in two dimensions (or even just one), and where questions of frequency, density, proximity, etc., are relevant, can be analyzed using the heatmap approach.
A final point: as an analysis tool, the heatmap is really about getting a feel for how the data lies in aggregate, rather than getting a precise sense of where each point falls. Since the halo effect of a data point extends some distance away from the point, the limits of influence of that point on the final image are a bit fuzzy. If precision analysis is necessary, then heatmaps are not the right tool.
About our guest author: Andreas Orphanides is Librarian for Digital Technologies and Learning in the Research and Information Services department at NCSU Libraries. He holds an MSLS from UNC-Chapel Hill and a BA in mathematics from Oberlin College. His interests include instructional design, user interface development, devising technological solutions to problems in library instruction and public services, long walks on the beach, and kittens.