Cybersecurity is an interesting and important topic, one closely connected to those of online privacy and digital surveillance. Many of us know that it is difficult to keep things private on the Internet. The Internet was invented to share things with others quickly, and it excels at that job. Businesses that process transactions with customers and store the information online are responsible for keeping that information private. No one wants social security numbers, credit card information, medical history, or personal e-mails shared with the world. We expect and trust banks, online stores, and our doctor’s offices to keep our information safe and secure.
However, keeping private information safe and secure is a challenging task. We have all heard of security breaches at J.P. Morgan, Target, Sony, Anthem Blue Cross and Blue Shield, the Office of Personnel Management of the U.S. federal government, the University of Maryland at College Park, and Indiana University. Sometimes a data breach takes place when an institution fails to patch a hole in its network systems. Sometimes people fall for a phishing scam, or a virus on a user's computer infects the target system. Other times, online companies compile customer data into personal profiles, which are then sold to data brokers and can end up in the hands of malicious hackers and criminals.
Cybersecurity vs. Usability
To prevent such data breaches, institutional IT staff are trained to protect their systems against vulnerabilities and intrusion attempts. Employees and end users are educated to handle institutional and customer data with care. There are also systematic measures that organizations can implement, such as two-factor authentication, stringent password requirements, and locking accounts after a certain number of failed login attempts.
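As a rough sketch of the last of these measures, locking an account for a period of time after repeated failed logins might work like this (a hypothetical illustration only; the class name, attempt threshold, and lockout duration are invented, not taken from any particular system):

```python
import time

class LoginGuard:
    """Hypothetical sketch: lock an account after too many failed logins."""

    def __init__(self, max_attempts=3, lockout_seconds=900):
        self.max_attempts = max_attempts        # failures allowed before locking
        self.lockout_seconds = lockout_seconds  # how long the lock lasts
        self.failures = 0
        self.locked_until = 0.0

    def is_locked(self, now=None):
        now = time.time() if now is None else now
        return now < self.locked_until

    def record_failure(self, now=None):
        now = time.time() if now is None else now
        self.failures += 1
        if self.failures >= self.max_attempts:
            # Lock the account and reset the counter for the next window.
            self.locked_until = now + self.lockout_seconds
            self.failures = 0

    def record_success(self):
        # A successful login clears the failure count.
        self.failures = 0
```

After three wrong passwords in a row, `is_locked` stays true until the lockout window expires; a real system would also persist this state server-side and notify the account owner.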
While these measures strengthen an institution’s defense against cyberattacks, they may negatively affect the usability of the system, lowering users’ productivity. As a simple example, security measures like a CAPTCHA can cause an accessibility issue for people with disabilities.
Or, as another example, imagine that a university IT office, concerned about the data security of cloud services, starts requiring all faculty, students, and staff to use only cloud services that are SOC 2 Type II certified. SOC stands for "Service Organization Controls," a series of standards that measure how well a given service organization keeps its information secure. For a business to be SOC 2 certified, it must demonstrate that it has sufficient policies and strategies to satisfactorily protect its clients' data in five areas known as the "Trust Services Principles": the security of the service provider's system, the processing integrity of the system, the availability of the system, the privacy of the personal information that the service provider collects, retains, uses, discloses, and disposes of for its clients, and the confidentiality of the information that the system processes or maintains for its clients. The SOC 2 Type II certification means that the business has maintained relevant security policies and procedures over a period of at least six months, and it is therefore a good indicator that the business will keep its clients' sensitive data secure. Dropbox for Business is SOC 2 certified, but it costs money. The free version is not as secure, yet many faculty, students, and staff in academia use it frequently for collaboration. If a university IT office simply bans people from using the free version of Dropbox without offering an alternative that is as easy to use, people will undoubtedly suffer.
Some of you may know that the USPS website provides no way to reset the password of users who have forgotten their usernames; they are instead asked to create a new account. If users remember their username but enter the wrong answers to the two security questions more than twice, the system automatically locks their accounts for a certain period of time, and again they have to create a new account. A system that does not allow password resets for forgetful users is clearly more secure than one that does. In reality, however, this security measure creates a huge usability problem, because average users do forget their passwords and the answers to the security questions that they set up themselves. It is not hard to imagine how frustrated people will be when they realize that they entered the wrong mailing address for mail forwarding and are now unable to get back into the system to correct it, because they can remember neither their passwords nor the answers to their security questions.
To give an example related to libraries, a library may decide to block all international traffic to its licensed e-resources in order to prevent foreign hackers who have gotten hold of a legitimate user's username and password from accessing those e-resources. This would certainly help libraries avoid potential breaches of licensing terms and spare them from having to shut down compromised user accounts one by one as they are found. However, it would also make it impossible for legitimate users traveling outside of the country to access those e-resources, which many users would find unacceptable. Furthermore, malicious hackers would probably just use a proxy to make their IP addresses appear to be located in the U.S. anyway.
What would users do if their organization required them to reset the passwords for their work computers and the several other systems they use constantly for work on a weekly basis? While this may strengthen the security of those systems, it is easy to see that resetting all those passwords every week and keeping track of them without forgetting or mixing them up would be a nightmare. Most likely, users would start choosing less complicated passwords or even adopt a single password for all of the different services. Some might stick to the same password every time the system required a reset, unless the system detected the previous password and prevented them from reusing it. Ill-thought-out cybersecurity measures can easily backfire.
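That last point, a system that remembers previous passwords and rejects reuse, can be sketched roughly as follows (a hypothetical illustration; the class and function names are invented, and a production system would use a slow, salted hash such as bcrypt or Argon2 rather than plain SHA-256):

```python
import hashlib

def hash_password(password, salt):
    # Illustration only: real systems should use bcrypt, scrypt, or Argon2.
    return hashlib.sha256((salt + password).encode("utf-8")).hexdigest()

class PasswordHistory:
    """Hypothetical sketch: reject a new password that matches a recent one."""

    def __init__(self, salt, depth=5):
        self.salt = salt
        self.depth = depth       # how many previous passwords to remember
        self.history = []        # stored as hashes, never as plain text

    def set_password(self, password):
        h = hash_password(password, self.salt)
        if h in self.history:
            return False         # reuse detected; make the user pick a new one
        self.history.append(h)
        self.history = self.history[-self.depth:]
        return True
```

Only hashes are kept, so the system can detect reuse without ever storing the old passwords themselves.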
Security is important, but users also want to be able to do their job without being bogged down by unwieldy cybersecurity measures. The more user-friendly and the simpler the cybersecurity guidelines are to follow, the more users will observe them, thereby making a network more secure. Users who face cumbersome and complicated security measures may ignore or try to bypass them, increasing security risks.
Cybersecurity vs. Privacy
Usability and productivity may be a small issue, however, compared to the risk of mass surveillance resulting from aggressive security measures. In 2013, the Guardian reported that the communication records of millions of people were being collected by the National Security Agency (NSA) in bulk, regardless of any suspicion of wrongdoing. A secret court order prohibited Verizon from disclosing the NSA's information request. After a cyberattack against the University of California at Los Angeles, the University of California system installed a device capable of capturing, analyzing, and storing all network traffic to and from the campus for over 30 days. This security monitoring was implemented secretly, without consulting or notifying the faculty and others who would be subject to it. The San Francisco Chronicle reported that the IT staff who installed the system were given strict instructions not to reveal that the monitoring was taking place, and selected committee members on the campus were told to keep this information to themselves.
The invasion of privacy and the lack of transparency in these network monitoring programs have caused great controversy. Such wide and indiscriminate monitoring programs must have a very good justification and offer clear answers to vital questions: what exactly will be collected, who will have access to the collected information, when and how the information will be used, what controls will be put in place to prevent the information from being used for unrelated purposes, and how the information will be disposed of.
We have recently seen another case in which security concerns conflicted with people's right to privacy. In February 2016, the FBI asked Apple to create a backdoor application that would bypass the security measures in its iOS, because the FBI wanted to unlock an iPhone 5C recovered from one of the shooters in the San Bernardino shooting incident. iOS secures users' devices by permanently erasing all data when a wrong passcode is entered more than ten times, if users choose to activate this option in the iOS settings. The FBI's request was met with strong opposition from Apple and others, since such a backdoor application could easily be exploited for illegal purposes by black hat hackers, for unjustified privacy infringement by other capable parties, and even for mass surveillance by authoritarian governments. Apple refused to comply with the request, and a court hearing was scheduled for March 22. The FBI, however, withdrew the request, saying that it had found a way to hack into the phone in question without Apple's help. Now Apple has to figure out what the vulnerability in its iOS is if it wants its encryption mechanism to be foolproof. In the meantime, iOS users know that their data is no longer as secure as they once thought.
Around the same time, a Senate draft bill titled the "Compliance with Court Orders Act of 2016" proposed that people be required to comply with any authorized court order for data, and that if that data is "unintelligible" – meaning encrypted – it must be decrypted for the court. This bill is problematic because it would practically nullify the efficacy of any end-to-end encryption, which we use every day, from our iPhones to messaging services like WhatsApp and Signal.
Because security is essential to privacy, it is ironic that certain cybersecurity measures are used to greatly invade privacy rather than to protect it. Because we do not always fully understand how the technology actually works or how it can be exploited for both good and bad purposes, we need to be careful about giving any party blanket permission to access, collect, and use our private data without clear understanding, oversight, and consent. As we share more and more information online, cyberattacks will only increase, and organizations and the government will struggle even more to balance privacy concerns with security issues.
Why Should Libraries Advocate for Online Privacy?
The fact that people may no longer have privacy on the Web should concern libraries. Historically, libraries have been strong advocates of intellectual freedom, striving to keep patrons' data safe and protected from the unwanted eyes of the authorities. As librarians, we believe in people's right to read, think, and speak freely and privately as long as such activity does not harm others. The Library Freedom Project is an example of this belief held strongly within the library community. It educates librarians and their local communities about surveillance threats, privacy rights and law, and privacy-protecting technology tools to help safeguard digital freedom. It also helped the Kilton Public Library in Lebanon, New Hampshire, become the first library to operate a Tor exit relay, providing anonymity for patrons while they browse the Internet at the library.
New technologies have brought us the unprecedented convenience of collecting, storing, and sharing massive amounts of sensitive data online. But the ease with which such sensitive data can be exploited once it falls into the wrong hands has also created an unparalleled potential for invasion of privacy. While the majority of librarians take a very strong stance in favor of intellectual freedom and against censorship, it is often hard to discern the correct stance on online privacy, particularly when it is pitted against cybersecurity. Some even argue that those who have nothing to hide do not need privacy at all.
However, privacy is not equivalent to hiding wrongdoing, nor do people keep certain things secret because those things are necessarily illegal or unethical. Being watched 24/7 will drive any person crazy, whether or not he or she is guilty of any wrongdoing. Privacy gives us a safe space to form our thoughts and consider our actions on our own, without being subject to others' eyes and judgments. Even in the absence of actual mass surveillance, the mere belief that one may be placed under surveillance at any moment is sufficient to trigger self-censorship and to negatively affect one's thoughts, ideas, creativity, imagination, choices, and actions, making people more conformist and compliant. This is corroborated by a recent study from Oxford University, which provides empirical evidence that the mere existence of a surveillance state breeds fear and conformity and stifles free expression. Privacy is an essential part of being human, not some trivial condition that we can do without in the face of a greater concern. That is why many people living under political dictatorships continue to choose death over a life of mass surveillance and censorship in their fight for freedom and privacy.
The Electronic Frontier Foundation states that privacy means respect for individuals’ autonomy, anonymous speech, and the right to free association. We want to live as autonomous human beings free to speak our minds and think on our own. If part of a library’s mission is to contribute to helping people to become such autonomous human beings through learning and sharing knowledge with one another without having to worry about being observed and/or censored, libraries should advocate for people’s privacy both online and offline as well as in all forms of communication technologies and devices.
A few of us at Tech Connect participated in the #1Lib1Ref campaign that's running from January 15th to the 23rd. What's #1Lib1Ref? It's a campaign to encourage librarians to get involved with improving Wikipedia, specifically by citation chasing (one of my favorite pastimes!). From the project's description:
Imagine a World where Every Librarian Added One More Reference to Wikipedia.
Wikipedia is a first stop for researchers: let’s make it better! Your goal today is to add one reference to Wikipedia! Any citation to a reliable source is a benefit to Wikipedia readers worldwide. When you add the reference to the article, make sure to include the hashtag #1Lib1Ref in the edit summary so that we can track participation.
Below, we each describe our experiences editing Wikipedia. Did you participate in #1Lib1Ref, too? Let us know in the comments or join the conversation on Twitter!
I recorded a short screencast of me adding a citation to the Darbhanga article.
— Eric Phetteplace
I used the Citation Hunt tool to find an article that needed a citation. I selected the second one I found, which was about urinary tract infections in space missions. That is very much up my alley. I discovered after a quick Google search that the paragraph in question was plagiarized from a book on Google Books! After a hunt through the Wikipedia policy on quotations, I decided to rewrite the paragraph to paraphrase the quote, and then added my citation. As is usual with plagiarism, the flow was wrong, since there was a reference to a theme in the previous paragraph of the book that wasn’t present in the Wikipedia article, so I chose to remove that entirely. The Wikipedia Citation Tool for Google Books was very helpful in automatically generating an acceptable citation for the appropriate page. Here’s my shiny new paragraph, complete with citation: https://en.wikipedia.org/wiki/
— Margaret Heller
I edited the "Library Facilities" section of the "University of Maryland Baltimore" article in Wikipedia. There was an outdated link in the existing citation, and I also wanted to add two additional sentences and citations. You can see how I went about doing this in my screen recording below. I used the "edit source" option to get the wiki source into a text editor first and made all the changes I wanted in advance. After that, I copied and pasted my changes from the text file into the Wikipedia page I was editing, then previewed and saved the page. You can see that I had a typo in my text and had to fix it to make the citation display correctly, so I edited the article more than once. After my recording, I noticed another typo, which I fixed using the "edit" option. The "edit" option is much easier to use than the "edit source" option for those who are not familiar with editing wiki pages, as it offers a menu bar at the top with several convenient options.
The recording of editing a Wikipedia article:
— Bohyun Kim
It has been so long since I've edited anything on Wikipedia that I had to make a new account and read the "how to add a reference" link, which is to say, if I could do it in 30 minutes while on vacation, anyone can. There is a WYSIWYG option for the editing interface, but I learned to do all this in plain text, and that is still the easiest way for me to edit. See the screenshot below for a view of the plain-text editor.
I wondered which entry I would add a citation to. There have been so many that I'd come across, but now I was drawing a total blank. Happily, the 1Lib1Ref campaign gave some suggestions, including "Provinces of Afghanistan." Since this is my fatherland, I thought it would be a good article to dive into. Many of Afghanistan's citations are hard to provide for a multitude of reasons. A lot of our history has been an oral tradition. Also, not insignificantly, Afghanistan has been in conflict for a very long time, with much of its history captured through the lens of Great Game participants like England or Russia. Primary sources from the 20th century are difficult to come by because of the state of war from 1979 onwards, and there are not many digitization efforts underway to capture what is available (shout out to NYU and the Afghanistan Digital Library project).
Once I found a source that I thought would be an appropriate reference for a statement on the topography of Uruzgan Province, I did need to edit the sentence to remove the numeric values that had been written there, since I could not find a source that quantified the area. It is not a precise entry, to be honest, but it does link to a good map with further opportunities to find additional information related to Afghanistan's agriculture. I also wanted to choose something relatively uncontroversial, like geographical features rather than history or people, for this particular campaign.
— Yasmeen Shorish
The movie Robot and Frank depicts a future in which the elderly have a robot as a companion and helper. The robot monitors various activities related to both mental and physical health and helps Frank with household chores. But Frank also enjoys the robot's company and goes on to enlist the robot in his adventure of breaking into a local library to steal a book, and into a greater heist later on. People's lives in the movie are not particularly futuristic apart from the robot in them, and even a robot may not be so futuristic to us much longer. As a matter of fact, as of June 2015, there is a commercially available humanoid robot that comes close to performing some of the functions of the robot in the movie.
A Japanese company, SoftBank Robotics Corp., released a humanoid robot named Pepper to the market back in June. The Pepper robot is 4 feet tall, weighs 61 pounds, speaks 17 languages, and is equipped with an array of cameras, touch sensors, an accelerometer, and other sensors in its "endocrine-type multi-layer neural network," according to the CNN report. Pepper was priced at ¥198,000 ($1,600), and owners are also responsible for an additional ¥24,600 ($200) monthly data and insurance fee. While the Pepper robot is not exactly cheap, it is surprisingly affordable for a robot. This means that the robot industry has now matured to the point where it can introduce a robot that the masses can afford.
Robots come in varying capabilities and forms. Some are as simple as programmable cube blocks that can be combined with one another into a working unit. For example, Cubelets from Modular Robotics are modular robots used for educational purposes. Each cube performs one specific function – battery, flash, temperature sensing, brightness sensing, rotation, and so on – and the blocks can be combined to build a robot that performs a certain function. For example, you can build a lighthouse robot by combining a battery block, a light-sensor block, a rotator block, and a flash block.
By contrast, there are advanced robots, such as the animal-shaped ones developed by the robotics company Boston Dynamics. Some robots look like a human, although they are much smaller than the Pepper robot. NAO, launched in 2006, is a 58-cm-tall humanoid robot that moves, recognizes objects, hears, and talks to people. NAO robots serve as interactive educational toys that help students learn programming in a fun and practical way.
Noticing their relevance to STEM education, some libraries are making robots available to library patrons. Westport Public Library provides robot training classes for its two NAO robots. Chicago Public Library lends a number of Finch robots that patrons can program to see how they work. In celebration of National Robotics Week back in April, San Diego Public Library hosted its first Robot Day, educating the public about how robots have impacted society. San Diego Public Library also started a weekly Robotics Club, inviting anyone to help build, or learn how to build, a robot for the library. Haslet Public Library offers a Robotics Camp program for 6th to 8th graders who want to learn how to build with LEGO Mindstorms EV3 kits. School librarians are also starting robotics clubs. The Robotics Club at New Rochelle High School in New York is run by the school's librarian, Ryan Paulsen. Paulsen's robotics club started with help from faculty, parents, and other schools, along with a grant from NASA, and participated in a FIRST Robotics Competition. Organizations such as the Robotics Academy at Carnegie Mellon University provide educational outreach and resources.
There are also libraries that offer coding workshops, often with Arduino or Raspberry Pi, which are inexpensive computer hardware. Ames Free Library offers Raspberry Pi workshops, and San Diego Public Library runs a monthly Arduino Enthusiast Meetup. Arduinos and Raspberry Pis can be used to build digital devices and objects that sense and interact with the physical world, which comes close to being a simple robot. We may see more robotics programs at those libraries in the near future.
Robots can fulfill many functions other than being interactive educational toys, however. For example, robots can be very useful in healthcare. A robot can be a patient's emotional companion, just like Pepper, or it can provide an easy way for a patient and his or her caregiver to communicate with physicians and others. A robot can be used at a hospital to move and deliver medication and other items and to function as a telemedicine assistant. It can also provide physical assistance to a patient or a nurse and even be used for children's therapy.
Humanoid robots like Pepper may also serve at reception desks at companies, and it is not difficult to imagine them as sales clerks at stores. Robots can be useful at schools and in other educational settings as well. At a workplace, teleworkers can use robots to achieve a more active presence. For example, universities and colleges can offer telepresence robots to online students who want to virtually experience and utilize campus facilities, or to faculty who wish to hold office hours or collaborate with colleagues while they are away from the office. As a matter of fact, the University of Texas at Arlington Libraries recently acquired several telepresence robots to lend to their faculty and students.
Not all robots have, or will have, a humanoid form like the Pepper robot's. But as robots become more and more capable, we will surely see more of them in our daily lives.
Alpeyev, Pavel, and Takashi Amano. “Robots at Work: SoftBank Aims to Bring Pepper to Stores.” Bloomberg Business, June 30, 2015. http://www.bloomberg.com/news/articles/2015-06-30/robots-at-work-softbank-aims-to-bring-pepper-to-stores.
“Boston Dynamics.” Accessed September 8, 2015. http://www.bostondynamics.com/.
“Finch Robots Land at CPL Altgeld.” Chicago Public Library, May 12, 2014. https://www.chipublib.org/news/finch-robots-land-at-cpl/.
McNickle, Michelle. “10 Medical Robots That Could Change Healthcare – InformationWeek.” InformationWeek, December 6, 2012. http://www.informationweek.com/mobile/10-medical-robots-that-could-change-healthcare/d/d-id/1107696.
Singh, Angad. “‘Pepper’ the Emotional Robot, Sells out within a Minute.” CNN.com, June 23, 2015. http://www.cnn.com/2015/06/22/tech/pepper-robot-sold-out/.
Tran, Uyen. “SDPL Labs: Arduino Aplenty.” The Library Incubator Project, April 17, 2015. http://www.libraryasincubatorproject.org/?p=16559.
“UT Arlington Library to Begin Offering Programming Robots for Checkout.” University of Texas Arlington, March 11, 2015. https://www.uta.edu/news/releases/2015/03/Library-robots-2015.php.
Waldman, Loretta. “Coming Soon to the Library: Humanoid Robots.” Wall Street Journal, September 29, 2014, sec. New York. http://www.wsj.com/articles/coming-soon-to-the-library-humanoid-robots-1412015687.
“Demonstrating DNA extraction” on Flickr
What Is a Biohackerspace?
A biohackerspace is a community laboratory, open to the public, where people are encouraged to learn about and experiment with biotechnology. Like a makerspace, a biohackerspace provides people with tools that are usually not available at home. A makerspace offers making and machining tools such as a 3D printer, a CNC (computer numerically controlled) milling machine, a vinyl cutter, and a laser cutter. A biohackerspace, by contrast, contains tools such as microscopes, Petri dishes, freezers, and PCR (polymerase chain reaction) machines, which are often found in a wet-lab setting. Some of these tools are unfamiliar to many. For example, a PCR machine amplifies a segment of DNA, creating many copies of a particular DNA sequence, while a CNC milling machine carves, cuts, and drills materials such as wood, hard plastic, and metal according to a design entered into a computer. Both makerspaces and biohackerspaces give individuals access to tools that are usually cost-prohibitive to own.
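To give a sense of what "amplifies" means here: in the idealized case, each PCR thermal cycle doubles the number of copies of the target DNA segment, so the copy count grows exponentially. A toy calculation (assuming perfect, 100% efficiency, which real reactions never achieve):

```python
def pcr_copies(initial_copies, cycles):
    # Idealized PCR: every thermal cycle doubles the copy count.
    # Real reactions are less efficient and eventually plateau.
    return initial_copies * 2 ** cycles

# Thirty cycles turn a single DNA molecule into over a billion copies.
print(pcr_copies(1, 30))  # 1073741824
```

This exponential growth is why PCR makes even trace amounts of DNA detectable.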
Genspace in Brooklyn (http://genspace.org/), founded in 2010 by molecular biologist Ellen Jorgensen, is the first biohackerspace in the United States. Since then, more biohackerspaces have opened, such as BUGSS (Baltimore Underground Science Space, http://www.bugssonline.org/) in Baltimore, MD; BioLogik Labs (https://www.facebook.com/BiologikLabs) in Norfolk, VA; BioCurious in Sunnyvale, CA; Berkeley BioLabs (http://berkeleybiolabs.com/) in Berkeley, CA; Biotech and Beyond (http://biotechnbeyond.com/) in San Diego, CA; and BioHive (http://www.biohive.net/) in Seattle, WA.
What Do People Do in a Biohackerspace?
Just as people in a makerspace work with computer code, electronics, plastic, and other materials for DIY manufacturing, people in a biohackerspace tinker with bacteria, cells, and DNA. A biohackerspace allows people to tinker with and make biological things outside of an institutional biology lab setting. They can try activities such as splicing DNA or reprogramming bacteria.1 The projects that people pursue in a biohackerspace vary, ranging from making bacteria that glow in the dark to identifying the neighbor who fails to pick up after his or her dog. Surprisingly enough, these are not as difficult or complicated as we might imagine.2 Injecting a luminescent gene into bacteria can yield bacteria that glow in the dark, and comparing DNA collected from various samples of dog excrement and finding a match can identify the guilty neighbor's dog.3 Other possible projects at a biohackerspace include finding out whether an organic food item from a supermarket is indeed organic, creating bacteria that will decompose plastic, or checking whether a certain risky gene is present in your body. An investigative journalist may use his or her biohacking skills to verify certain evidence, and an environmentalist can measure the pollution level of her neighborhood and find out whether a particular pollutant exceeds the legal limit.
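The dog-DNA project above comes down to comparing sequences and picking the closest match. A toy sketch of that idea (purely illustrative: the function names and sample data are invented, and real forensic DNA work compares specific genetic markers rather than raw strings):

```python
def similarity(seq_a, seq_b):
    """Fraction of positions at which two equal-length DNA sequences agree."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / len(seq_a)

def best_match(evidence, samples):
    # Return the sample ID whose sequence most closely resembles the evidence.
    return max(samples, key=lambda name: similarity(evidence, samples[name]))

# Hypothetical marker sequences from two neighborhood dogs.
neighborhood_dogs = {"rex": "ACGTACGT", "fido": "ACGTTTTT"}
print(best_match("ACGTACGA", neighborhood_dogs))  # rex
```

Even this naive per-position comparison conveys the principle: collect reference samples, score each against the evidence, and report the closest match.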
Why Is a Biohackerspace Important?
A biohackerspace democratizes access to biotechnology equipment and space and enables users to share their findings. In this regard, a biohackerspace is comparable to the open-source movement in computer programming: both allow people to solve the problems that matter to them. Instead of pursuing scientific breakthroughs, biohackers look for solutions to problems that are small but important. By contrast, large institutions, such as big pharmaceutical companies, may not pursue solutions to such problems if those solutions are not sufficiently profitable. For example, China experienced a major food-safety incident in 2008 involving melamine-contaminated milk and infant formula. It costs thousands of dollars to test milk for the presence of melamine in a lab. After reading about the incident, Meredith Patterson, a notable biohacker who advocates citizen science, started working on an alternative test that would cost only a dollar and could be done in a home kitchen.4 To solve the problem, she planned to splice a glow-in-the-dark jellyfish gene into the bacteria that turn milk into yogurt and then add a biochemical sensor that detects melamine, all in her dining room. If milk turns green when combined with this mixture, the milk contains melamine.
The DIYbio movement refers to the new trend of individuals and communities studying molecular and synthetic biology and biotechnology without being formally affiliated with an academic or corporate institution.5 DIYbio enthusiasts pursue most of their projects as a hobby, but some of those projects hold the potential to solve serious global problems. One example is the inexpensive melamine test for milk described above. Biopunk, a book by Marcus Wohlsen, describes another DIYbio effort to develop an affordable handheld thermal cycler that rapidly replicates DNA as an inexpensive diagnostic tool for the developing world.6 Used in conjunction with a DNA-reading chip and a few vials containing primers for a variety of diseases, this device, called 'LavaAmp,' can quickly identify diseases that break out in remote rural areas.
The DIYbio movement and biohackerspaces pioneer a new realm of scientific literacy: doing science. According to Meredith Patterson, scientific literacy is not understanding science but doing science. In her 2010 talk at the UCLA Center for Society and Genetics' symposium, "Outlaw Biology? Public Participation in the Age of Big Bio," Patterson argued, "scientific literacy empowers everyone who possesses it to be active contributors to their own health care; the quality of their food, water, and air; their very interactions with their own bodies and the complex world around them."7
How Can Libraries Be Involved?
While not all librarians agree that a makerspace is an endeavor suitable for a library, more libraries have been creating makerspaces and offering makerspace-related programs for their patrons in recent years. Maker programs support hands-on learning in STEAM education and foster creative and innovative thinking through tinkering and prototyping activities. They also introduce new skills to students and members of the public for whom opportunities to learn such things are still rare. Those new skills – 3D modeling, 3D printing, and computer programming – enrich students' learning experience, provide new teaching tools for instructors, and help adults find employment or start their own businesses. Those skills can also be used to solve everyday problems, such as creating an inexpensive prosthetic limb or the custom parts needed to repair household items.
However, creating a makerspace or running a maker program in a library setting is not an easy task. Libraries often lack sufficient funding to purchase equipment for a makerspace, as well as staff capable of developing appropriate maker programs. This means that in order to create and operate a successful makerspace, a library must make a significant upfront investment in equipment and in staff education and training. For this reason, the importance of an accurate needs assessment and the development of programs appropriate and useful to library patrons cannot be overemphasized.
A biohackerspace requires a wet laboratory setting, where chemicals, drugs, and a variety of biological matter are tested and analyzed in liquid solutions or volatile phases. Such a laboratory requires access to water, proper plumbing and ventilation, waste disposal, and biosafety protocols. Considering these issues, it will probably take a while for any library to set up a biohackerspace.
This should not dissuade libraries from being involved with biohackerspace-related activities, however. Instead of setting up a biohackerspace, libraries can invite speakers to talk about DIYbio and biohacking to raise awareness of this new area of making among library patrons. Libraries can also partner with a local biohackerspace in a variety of ways. They can co-host or cross-promote relevant programs at biohackerspaces and libraries to facilitate the cross-pollination of ideas. A library's reading collection focused on biohacking could be highly useful. Libraries can contribute their expertise in grant writing or donate old computing equipment to biohackerspaces. They can also offer their expertise in digital publishing and archiving to help biohackerspaces publish and archive their project outcomes and research findings.
Is a Biohackerspace Safe?
The DIYbio movement recognized the potential risks in biohacking early on and created codes of conduct in 2011. The Ask a Biosafety Expert (ABE) service at DIYbio.org provides free biosafety advice from a panel of volunteer experts, along with many biosafety resources. Some biohackerspaces have an advisory board of professional scientists who review the projects that will take place in their spaces. Most biohackerspaces meet the Biosafety Level 1 criteria set out by the Centers for Disease Control and Prevention (CDC).
Democratization of Biotechnology
While the DIYbio movement and biohackerspaces are still in the early stages of development, they hold great potential to drive future innovation in biotechnology and the life sciences. The DIYbio movement and biohackerspaces try to transform ordinary people into citizen scientists, empower them to come up with solutions to everyday problems, and encourage them to share those solutions with one another. Not long ago, we had mainframe computers, locked up in academic or corporate labs, that were accessible only to a small number of professional computer scientists. Now personal computers are ubiquitous, and many professional and amateur programmers know how to write code to make a personal computer do the things they would like it to do. Until recently, manufacturing was only possible on a large scale through factories. Many makerspaces started in recent years, however, have made it possible for the public to create a model on a computer and 3D print a physical object based on that model at a much lower cost and on a much smaller scale. It remains to be seen whether the DIYbio movement and biohackerspaces will bring a similar change to biotechnology.
- Boustead, Greg. “The Biohacking Hobbyist.” Seed, December 11, 2008. http://seedmagazine.com/content/article/the_biohacking_hobbyist/. ↩
- Bloom, James. “The Geneticist in the Garage.” The Guardian, March 18, 2009. http://www.theguardian.com/technology/2009/mar/19/biohacking-genetics-research. ↩
- Landrain, Thomas, Morgan Meyer, Ariel Martin Perez, and Remi Sussan. “Do-It-Yourself Biology: Challenges and Promises for an Open Science and Technology Movement.” Systems and Synthetic Biology 7, no. 3 (September 2013): 115–26. doi:10.1007/s11693-013-9116-4. ↩
- Wohlsen, Marcus. Biopunk: Solving Biotech’s Biggest Problems in Kitchens and Garages. Penguin, 2011, pp. 38–39. ↩
- Jorgensen, Ellen D., and Daniel Grushkin. “Engage With, Don’t Fear, Community Labs.” Nature Medicine 17, no. 4 (2011): 411. doi:10.1038/nm0411-411. ↩
- Wohlsen, Marcus. Biopunk: Solving Biotech’s Biggest Problems in Kitchens and Garages. Penguin, 2011, p. 56. ↩
- Patterson, Meredith. “A Biopunk Manifesto.” 2010. http://vimeo.com/18201825. ↩
Recently, my library has been considering accepting library fine payments online. Small fines owed by many patrons are hard to collect. In aggregate, the amount is significant, but each individual fine often does not justify even the cost of postage and the staff work that go into creating and sending out a fine notice letter. Libraries that can collect fines through the bursar’s office of their parent institutions may have a better chance of collecting them; others can only expect patrons to show up with, or mail, a check to clear their fines. Offering an online payment option for library fines is one way to make library service more user-friendly to patrons who are too busy to visit the library in person or to mail a check but are willing to pay online with their credit cards.
If you are new to the world of online payments, there are several terms you need to become familiar with. The following information from a Six Revisions article is very useful for understanding those terms.1
- ACH (Automated Clearing House) payments: Electronic credit and debit transfers. Most payment solutions use ACH to send money (minus fees) to their customers.
- Merchant Account: A bank account that allows a merchant to receive payments through credit or debit cards. Merchant providers are required to obey regulations established by card associations. Many processors act as both the merchant account and the payment gateway.
- Payment Gateway: The middleman between the merchant and their sponsoring bank. It allows merchants to securely pass credit card information between the customer and the merchant, and between the merchant and the payment processor.
- Payment Processor: A company that a merchant uses to handle credit card transactions. Payment processors implement anti-fraud measures to ensure that both the front-facing customer and the merchant are protected.
- PCI (the Payment Card Industry) Compliance: A merchant or payment gateway must set up their payment environment in a way that meets the Payment Card Industry Data Security Standard (PCI DSS).
Often, the same company functions as both the payment gateway and the payment processor, thereby processing the credit card payment securely. Such a product is called an ‘online payment system.’ Meyer’s article, cited above, lists 10 popular online payment systems: Stripe, Authorize.Net, PayPal, Google Checkout, Amazon Payments, Dwolla, Braintree, Samurai by FeeFighters, WePay, and 2Checkout. Bear in mind that different payment gateways, merchant accounts, and bank accounts may or may not work together, your bank may or may not work as a merchant account, and your library may or may not have a merchant account.2
Also note that there are fees for using online payment systems like these and that different systems have different fee structures. For example, Authorize.Net charges a $99 setup fee and then $20 per month plus a $0.10 per-transaction fee. Stripe charges 2.9% + $0.30 per transaction with no setup or monthly fees. Fees for mobile payment solutions with a physical card reader, such as Square, may be higher still.
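To get a feel for how these fee structures compare, here is a quick sketch in JavaScript. It uses only the rates quoted above, which may have changed since; check each vendor's current pricing before relying on these numbers.

```javascript
// Rough monthly-cost comparison of the two fee structures cited above.
// (Authorize.Net's one-time $99 setup fee is ignored here, since it is
// not a recurring cost.)

// Stripe: 2.9% + $0.30 per transaction, no setup or monthly fees.
function stripeMonthlyCost(transactions, avgAmount) {
  return transactions * (avgAmount * 0.029 + 0.30);
}

// Authorize.Net: $20/month plus a $0.10 per-transaction fee.
function authorizeNetMonthlyCost(transactions) {
  return 20 + transactions * 0.10;
}

// Example: 100 fine payments of $5 each in one month.
console.log(stripeMonthlyCost(100, 5).toFixed(2));    // "44.50"
console.log(authorizeNetMonthlyCost(100).toFixed(2)); // "30.00"
```

As the example suggests, which system is cheaper depends heavily on transaction volume and average payment size, which for library fines tend to be small.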
Among the various online payment systems, I picked Stripe because it was recommended on the Code4Lib listserv. One advantage of using Stripe is that it acts as both the payment gateway and the merchant account, which means your library does not need its own merchant account to accept payments online. Another big advantage is that you do not have to worry about the PCI compliance of your website, because the Stripe API uses a clever way to send the sensitive credit card information to the Stripe server while keeping your local server, on which your payment form sits, completely blind to such sensitive data. I will explain this in more detail later in this post.
Below I will share some of the code that I used to set up Stripe as my library’s online payment option for testing. This may be of interest to you if you are thinking about offering online payment as an option for your patrons or if you are simply interested in how an online payment API works. Even if your library doesn’t need to collect fines online, an online payment option can be a handy tool for a small-scale fund-raising drive or donation.
The first step to making Stripe work is getting API keys. You do not have to create an account to get API keys for testing, but if you are going to work on your code for more than one day, it’s probably worth getting an account. The Stripe API has excellent documentation. I read the ‘Getting Started’ section and then jumped to the ‘Examples’ section, which can quickly get you off the ground. (https://stripe.com/docs/examples) From the list in Stripe’s Examples section, I found an example by Daniel Schröter on GitHub and decided to test it out. (https://github.com/myg0v/Simple-Bootstrap-Stripe-Payment-Form) Most of the time, getting example code to run requires some probing and tweaking, such as downloading all the required libraries, sorting out the paths in the code, and adding API keys. This one required relatively little work.
Now, let’s take a look at the form that this code creates.
In order to create a form of my own for testing, I decided to change a few things in the code.
- Add Patron & Payment Details.
- Allow custom amount for payment.
- Change the currency from Euro to US dollars.
- Configure the validation for new fields.
- Hide the payment form once the charge goes through instead of showing the payment form below the payment success message.
When the token is received, this code calls the function stripeResponseHandler(), which checks whether the Stripe server returned an error upon receiving the payment information. If no error was returned, it attaches the token information to the form and submits the form.
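The gist of that handler can be sketched as follows. This is a simplified sketch modeled on the Stripe.js client-side examples of the time, not the exact code from the example repository; the response object shape (response.error.message on failure, a token id in response.id on success) follows Stripe.js, while the form and field names in the comments are assumptions.

```javascript
// Pure helper: decide what to do with a tokenization response from Stripe.
// Returns { ok: true, token: ... } on success,
// or { ok: false, message: ... } when Stripe reported an error.
function handleStripeResponse(response) {
  if (response.error) {
    return { ok: false, message: response.error.message };
  }
  return { ok: true, token: response.id };
}

// In the actual form, stripeResponseHandler would use this result to
// either display the error or append the token as a hidden input and
// submit the form, roughly like this (jQuery, as in the example code):
//
//   function stripeResponseHandler(status, response) {
//     var result = handleStripeResponse(response);
//     if (!result.ok) {
//       $('.payment-errors').text(result.message);
//     } else {
//       $('#payment-form')
//         .append($('<input type="hidden" name="stripeToken" />').val(result.token))
//         .get(0).submit();
//     }
//   }

console.log(handleStripeResponse({ id: 'tok_test_123' }).token); // "tok_test_123"
```

Note that only the non-sensitive token travels back through your form; the card details themselves never do.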
The server-side PHP script then checks whether the Stripe token has been received and, if so, creates a charge and sends it to Stripe as shown below. I am using PHP here, but the Stripe API supports many languages other than PHP, such as Ruby and Python, so you have many options. The real payment amount appears here as part of the charge array in line 326. If the charge succeeds, the payment success message is stored in a div to be displayed.
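Since Stripe supports languages other than PHP, the same server-side step can be sketched in JavaScript as well. The sketch below follows the charge parameters of the Stripe API at the time ('amount' as an integer number of cents, 'currency', and the one-time 'card' token); the description string and variable names are illustrative, not from the original code.

```javascript
// Build the parameter object for a Stripe charge. Stripe expects the
// amount as an integer number of cents, not dollars.
function buildChargeParams(amountInDollars, token) {
  return {
    amount: Math.round(amountInDollars * 100), // e.g. $12.50 -> 1250
    currency: 'usd',
    card: token, // the one-time token generated on the client side
    description: 'Library fine payment' // illustrative only
  };
}

// With Stripe's official Node library, the charge itself would then be
// created roughly like this (requires a secret API key; not run here):
//
//   var stripe = require('stripe')('sk_test_yourSecretKey');
//   stripe.charges.create(
//     buildChargeParams(12.50, req.body.stripeToken),
//     function (err, charge) {
//       // on success, store/display the payment success message
//     });

console.log(buildChargeParams(12.50, 'tok_test').amount); // 1250
```

The cents conversion is worth calling out: passing a dollar figure straight through would charge one hundredth of the intended amount.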
The reason you do not have to worry about PCI compliance with Stripe is that the Stripe API sends the payment information via AJAX, and the input fields for sensitive information have no name attribute or value. (See below for the Card Holder Name and Card Number fields as an example; click to bring up the clear version of the image.) Without a name attribute and value, the local server where the online form sits has no means to retrieve the information submitted in those input fields. Since the sensitive information never touches the local server, PCI compliance for the local server is no longer a concern. To clarify, not all fields in the payment form need to be stripped of the name attribute – only the sensitive fields that you do not want your web server to have access to. Here, for example, I am assigning the name attribute and value to fields such as name and e-mail so that I can use them later to send an e-mail receipt.
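A minimal sketch of what this looks like in the form markup (the labels and placeholders here are illustrative, not taken from the example code; the data-stripe attribute follows the Stripe.js convention of the time for telling Stripe.js which field is which while the name attribute is simply omitted):

```html
<!-- Safe to name: the local server may see these
     (used later for the e-mail receipt) -->
<input type="text" name="name" placeholder="Name" />
<input type="email" name="email" placeholder="E-mail" />

<!-- No name attribute: these never reach the local server on submit.
     Stripe.js reads them via the data-stripe attribute and sends them
     to Stripe directly over AJAX. -->
<input type="text" data-stripe="number" placeholder="Card Number" />
<input type="text" data-stripe="cvc" placeholder="CVC" />
```

Browsers only serialize inputs that have a name attribute into the form submission, which is what keeps the card fields out of your server's reach.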
(NB. Please click images to see the enlarged version.)
Now, the modified form has ‘Fee Category’, custom ‘Payment Amount,’ and some other information relevant to the billing purpose of my library.
When the payment succeeds, the page changes to display the following message.
Stripe provides a number of fake card numbers for testing, so you can test various failure cases. The Stripe website also displays all payments and the tokens and charges associated with them, which greatly helps with troubleshooting. One thing I noticed while troubleshooting is that the Stripe logs sometimes lag behind: when a payment succeeds, the associated token and charge may not appear under the “Logs” section immediately. The payment itself, however, shows up in the log right away, so you know that the associated token and charge will eventually appear there.
Once you are ready to process real payment transactions, you need to flip the switch from TEST to LIVE located in the top left corner. You will also need to replace your ‘TEST’ API keys (both secret and public) with those for ‘LIVE’ transactions. One more thing needed before your library can get paid with real money online is setting up SSL (Secure Sockets Layer) for your live online payment page. This is not required for testing but is necessary for processing live payment transactions. It is not very complicated work, so don’t be discouraged at this point: you just have to buy a security certificate and install it on your Web server. Ask your system administrator how to get SSL set up for your payment page. More information about setting up SSL can be found in the Stripe documentation linked above.
My library has not yet gone live with this online payment option. Before we do, I may make some more modifications to the code to better fit the staff workflow, which is still being mapped out. I am also planning to place the online payment page behind the university’s Shibboleth authentication in order to cut down on spam and to save patrons some tedious data entry, by pulling their information, such as name, university e-mail, and student/faculty/staff ID number, directly from the campus directory exposed through Shibboleth and inserting it into the payment form fields automatically.
In this post, I have described my experience of testing out the Stripe API as an online payment solution. As mentioned above, however, there are many other online payment systems out there. Depending on your library’s environment and financial setup, different solutions may work better than others. To me, not having to worry about PCI compliance was a big plus of using Stripe. If your library accepts online payments, please share in the comments what solution you chose and what factors led you to that particular online payment system.
* This post is based on my recent presentation, “Accepting Online Payment for Your Library and ‘Stripe’ as an Example,” given at the Code4Lib DC Unconference. Slides are available at the link above.
- Meyer, Rosston. “10 Excellent Online Payment Systems.” Six Revisions, May 15, 2012. http://sixrevisions.com/tools/online-payment-systems/. ↩
- Ullman, Larry. “Introduction to Stripe.” Larry Ullman, October 10, 2012. http://www.larryullman.com/2012/10/10/introduction-to-stripe/. ↩
I attended the ALA Summit on the Future of Libraries a few weeks ago.
[Let’s give it a minute for that to sink in.]
Yes, that was the controversial Summit that was much talked about on Twitter under the #libfuturesummit hashtag. This Summit and other similarly themed summits close to one another in timing – “The Future of Libraries Survival Summit” hosted by Information Today Inc. and “The Future of Libraries: Do We Have Five Years to Live?” hosted by Ken Haycock & Associates Inc. and Dysart & Jones Associates – seemed to have brought out the sentiment that Andy Woodworth aptly named ‘Library Future Fatigue.’ It was an impressive experience to see how actively librarians – both ALA members and non-members – provided real-time comments and feedback about these summits while I was at one of them in person. I thought ALA was lucky to have such engaged members and librarians to work with.
A few days ago, ALA released the official Summit report.1 The report captured all the talks and many table discussions in great detail. In this post, I will focus on some of my thoughts and take-aways prompted by the talks and the table discussion at the Summit.
A. The Draw
Here is an interesting fact: the invitation to this Summit sat in my inbox for over a month because, from the e-mail subject, I thought it was just another advertisement for a fee-based webinar or workshop. It was only after I received another e-mail from the ALA office asking about the previous one that I realized it was something different.
What drew me to this Summit was that (a) I had never been at a formal event organized just for a discussion about the future of libraries, (b) the event was to include a good number of people from outside of libraries, and (c) the overall size of the Summit would be kept relatively small.
For those curious, the Summit had 51 attendees plus 6 speakers and a dozen discussion table facilitators, all of whom fit into the Members’ Room in the Library of Congress. Of those 51 attendees, 9 were from the non-library sector, including the Knight Foundation, PBS, Rosen Publishing, and the Aspen Institute. Another 33 attendees ranged from academic, public, school, federal, and corporate librarians to library consultants, museum and archive folks, an LIS professor, and library vendors. And then there were 3 ALA presidents (current, past, and president-elect) and 6 officers from ALA. You can see the list of participants here.
B. Two Words (or Phrases)
At the beginning of the Summit, the participants were asked to come up with two words or short phrases that captured what they think about libraries “from now on.” We wrote these on ribbons and put them right under our name tags. We were then encouraged to keep or change them as we moved through the Summit.
My two phrases were “Capital and Labor” and “Peer-to-Peer.” I kept those two until the end of the Summit without changing them. I picked “Capital and Labor” because recently I have been thinking more about the socioeconomic background behind the expansion of post-secondary (i.e., higher) education and how it affects the changes in higher education and academic libraries.2 And of course, the fact that Thomas Piketty’s book, Capital in the Twenty-First Century, was being reviewed and discussed all over the mass media contributed to that choice of words as well. In my opinion, libraries “from now on” will be closely driven by the demands of capital and the labor market and asked to support more and more of the peer-to-peer learning activities that have become widespread with the advent of the Internet.
Other phrases and words I saw from other participants included “From infrastructure to engagement,” “Sanctuary for learning,” “Universally accessible,” “Nimble and Flexible,” “From Missionary to Mercenary,” “Ideas into Action,” and “Here, Now.” The official report also lists some of the words that were most used by participants. If you choose your two words or phrases that capture what you think about libraries “from now on,” what would those be?
C. The Set-up
The Summit organizers filled the room with multiple round tables, and on the first day’s morning and afternoon and the second day’s morning, participants sat at the table whose number was assigned on the back of their name badges. This was a good method that enabled participants to have discussions with different groups of people throughout the Summit.
As the Summit agenda shows, the program started with a talk by a speaker. After that, participants were asked to reflect personally on the talk and then have a table discussion. The discussion was captured on large poster-size sheets by facilitators and collected by the event organizers. The papers on which we wrote our personal reflections were collected in the same way, along with all the ribbons on which we wrote our two words or phrases. These were probably used to produce the official Summit report.
One thing I liked about the set-up was that every participant sat at a round table, including the speakers and all three ALA presidents (past, current, and president-elect). Throughout the Summit, I had a chance to talk to Lorcan Dempsey from OCLC; Corinne Hill, the director of Chattanooga Public Library; Courtney Young, the ALA president-elect; and Thomas Frey, a well-known futurist at the DaVinci Institute, which was neat.
Also, what struck me most during the Summit was that those from outside the library world took the guiding questions and the following discussion much more seriously than those of us inside it. Maybe we librarians are indeed suffering from ‘library future fatigue.’ And/or maybe outsiders have more trust in libraries as institutions than we librarians do, because they are less familiar with our daily struggles and challenges in library operations. Either way, the Summit seemed to give them an opportunity to seriously consider the future of libraries. The desired impact would be more policymakers, thought leaders, and industry leaders who are well informed about today’s libraries and who will articulate, support, and promote the significant work libraries do, to the benefit of society, in their own areas.
D. Talks, Table Discussion, and Some of My Thoughts and Take-aways
These were the talks given during the two days of the Summit:
- “How to Think Like a Freak” – Stephen Dubner, Journalist
- “What Are Libraries Good For?” – Joel Garreau, Journalist
- “Education in the Future: Anywhere, Anytime” – Dr. Renu Khator, Chancellor and President at the University of Houston
- “From an Internet of Things to a Library of Things” – Thomas Frey, Futurist
- A Table Discussion of Choice:
- Open – group decides the topic to discuss
- Empowering individuals and families
- Promoting literacy, particularly in children and youth
- Building communities the library serves
- Protecting and empowering access to information
- Advancing research and scholarship at all levels
- Preserving and/or creating cultural heritage
- Supporting economic development and good government
- “What Happened at the Summit?” – Joan Frye Williams, Library consultant
(0) Official Report, Liveblogging Posts, and Tweets
As I mentioned earlier, ALA released the 15-page official report of the Summit, which provides a detailed description of each talk and table discussion. Carolyn Foote, a school librarian and one of the Summit participants, also live-blogged all of these talks in detail. I highly recommend reading her notes on Day 1, Day 2, and Closing in addition to the official report. The tweets from the Summit participants with the official hashtag, #libfuturesummit, will also give you an idea of what participants found exciting at the Summit.
(1) Redefining a Problem
The most fascinating story in Dubner’s talk was that of Takeru Kobayashi, the hot-dog-eating contest champion from Japan. The secret of his success, said Dubner, was rethinking accepted but unchallenged artificial limits and redefining the problem. In Kobayashi’s case, he redefined the problem from ‘How can I eat more hot dogs?’ to ‘How can I eat one hot dog faster?’ and then removed artificial limits – widely accepted but unchallenged conventions – such as holding a hot dog in your hand and eating it from top to bottom. He experimented with breaking the hot dog into two pieces to feed himself faster with two hands. He further refined his technique by eating the frankfurter and the bun separately to make the eating even speedier.
So where can libraries apply this lesson? One problem I can think of is the low attendance of some library programs. What if we asked what barriers we can remove instead of what kind of program would draw more people? Chattanooga Public Library did exactly this. Recently, they targeted parents who would want to attend the library’s author talk and created an event that specifically addressed the child care issue: the library scheduled an evening story time for kids and fun activities for tweens and teens at the same time as the author talk. Then they invited parents to come to the library with their children, have the children participate in the library’s children’s programs, and enjoy the author talk without worrying about the children.
Another library service that I learned about at my table was the Zip Books service of the Yolo County Library in California. What if libraries asked what the fastest way to deliver a book the library doesn’t own to a patron’s door would be, instead of how quickly the cataloging department can catalog a newly acquired book to get it ready for circulation? The Yolo County Library’s Zip Books service came from exactly that kind of redefinition of the problem. When a library user requests a book the library doesn’t have but that meets certain requirements, the library purchases the book from a bookseller and has it shipped directly to the patron’s home without processing it. Cataloging and processing are done when the book is returned to the library after the first use.
(2) What Can Happen to Higher Education
My favorite talk of the Summit was by Dr. Khator, because she had deep insight into higher education and I have been working at university libraries for a long time. Her two most interesting observations concerned the possibility of (a) decoupling content development from content delivery and (b) decoupling teaching from credentialing in higher education.
The upside of (a) is that a wonderful class created by a world-class scholar could be taught by other instructors at places where the original developer of the class is not available. The downside of (a) is, of course, the possibility of its being used as a cookie-cutter lowest baseline for quality control in higher education – the University of Phoenix was mentioned as an example of this by one of the participants at my table – instead of college and university students being exposed to classes developed and taught by their institutions’ own faculty members.
I have to admit that (b) was a completely mind-blowing idea to me. Imagine colleges and universities with no credentialing authority. Your degree would no longer be tied to a particular institution to which you were admitted and from which you graduated. Just consider what this may entail if it is ever realized. If both (a) and (b) take place at the same time, the impact would be even more significant. What kind of role could an academic library play in such a scenario?
(3) Futurizing Libraries
Joel Garreau observed that nowadays what drives the need for a physical trip is, more and more, face-to-face contact rather than anything else. He then pointed out that as technology allows more people to telework, people are flocking to smaller cities where they can have more meaningful contact with their community. If this is indeed the case, libraries that make their space a catalyst for face-to-face contact in a community will prosper. The last speaker, Thomas Frey, spoke mostly about the Internet of Things (IoT).
While I certainly think the IoT is an important trend to note, what I most liked about Frey’s talk was his statement that the vision of the future we have today will change the decisions we make (towards that future). After Garreau’s talk, I had a chance to ask him a question about his somewhat idealized vision of the future, in which people live and work in small but closely connected communities in a society that is highly technological and collaborative. He called this ‘human evolution.’
In my opinion, however, the reality we see today is not so idyllic.3 The current economy is highly volatile. It no longer offers job security, consistently reduces the number of jobs, and returns stagnant or decreasing income to those whose skills are not in high demand in the era of the digital revolution.4 As a result, today’s college students, who are preparing to become tomorrow’s knowledge workers, perceive their education and their lives after it quite differently than their parents did.5
Regarding the IoT, the main topic of Frey’s talk, I think the real barrier to implementing it on a large scale will be privacy and the proper protection of the massive amount of data that will result from the very many sensors that make the IoT possible. After his talk, I had a chance to chat briefly with him about this. (There was no Q&A because Frey’s talk ran over the allotted time.) He mentioned the possibility of some kind of international gathering, similar in scale to the Geneva Conventions, to address the issue. While the likelihood of that is hard to assess, the idea seemed appropriate to the problem in question.
(4) What If…?
However, some of the shiny things shown in the talks, whose value for library users may appear dubious and distant, prompted Eli Neiburger of the Ann Arbor District Library to ask what useful service libraries could offer now to provide the public with a significant benefit. He wondered, for example, what it would be like if many libraries ran a Tor exit node to support the privacy and anonymity of web traffic.
For those who are unfamiliar, Tor (the Onion Router) is “free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.” Tor is not foolproof, but it is still the best tool for privacy and anonymity on the Web.
Eli’s idea is a truly wild one, because there are so many libraries in the US and the public’s privacy in the US is in such a precarious state.6 Running a Tor exit node is no walk in the park, as this post by someone who actually set up a Tor exit node on a hosted virtual server in Germany attests. But libraries have long been serious and dedicated advocates for privacy and people’s intellectual freedom, and they have a strong network of allies. There are also useful guidelines and tips that the Tor Project provides on its website.
Just pause a minute and imagine what kind of impact such a project by libraries might have on the public’s privacy. What if?
(5) Leadership and Sustainability
For the “Table Discussion of Choice” session, I opted for the “Open” table because I was curious about what other topics people were interested in. Two discussions at this session were most memorable to me. One was the great advice I got from Corinne Hill about leading people. A while ago, I read an interview in which she commented that “the staff are just getting comfortable with making decisions.” As a relatively new manager, I have also found it a challenge to empower my team members to be more autonomous decision makers. Corinne particularly cautioned that leaders should be careful not to be over-critical when staff take the initiative but make a bad decision. Being over-critical in that case can discourage staff from trying to make their own decisions in their areas of expertise, she said. Hearing her describe how she relies on the different strengths of her staff to move her library in the direction of innovation was also illuminating. (Lorcan Dempsey, who was also at our table, mentioned the “Birkman Quadrants,” a set of useful theoretical constructs, in relation to Corinne’s description. He also brought up the term ‘normcore’ at another session. I forget the exact context, but the term was interesting enough that I wrote it down.) We also talked for a while about current LIS education and how it is not sufficiently aligned with the skills needed in everyday library operations.
The other interesting discussion started with a question from Amy Garmer of the Aspen Institute about the sustainability of future libraries. (She has been working on a library-related project with various policy makers, and PLA has a program related to this project at the upcoming 2014 ALA Annual Conference, if you are interested.) One thought that always comes to my mind whenever I think about the future of libraries is that, while in the past the difference between small and large libraries was mostly quantitative (how many books and other resources were available), in the present and future the difference is, and will be, more qualitative. What the New York Public Library offers its patrons, such as the whole suite of digital library products from NYPL Labs, cannot be easily replicated by a small rural library. Needless to say, this has significant implications for the core mission of the library, which is equalizing the public’s access to information and knowledge. What can we do to close that gap? Or, as Lorcan Dempsey asked at our table discussion, will different types of libraries pursue different strategies for the future? These two questions are not incompatible; they can be worked out at the same time.
(6) Nimble and Media-Savvy
In her Summit summary, Joanne Frye Williams, who moved around to observe discussions at all tables during the Summit, mentioned that one of the themes that surfaced was thinking about a library as a developing enterprise rather than a stable organization. This means that the modus operandi of a library should become more nimble and flexible, so that the library can keep pace with the changes its community goes through.
Another thread of discussion among the Summit participants was that not all library supporters have to be active users of library services. As long as those supporters know that the presence and services of libraries make their communities strong, libraries are in a good place. Libraries often make the mistake of trying to reach all of their potential patrons and convert them into active library users. While this is admirable, it is not always practical or beneficial to library operations. What is more needed and useful is well-managed, strategic media relations that effectively publicize the library’s services and programs and their benefits and impact on the community. (On a related note, one journalist at the Summit mentioned that she has noticed the recent coverage of libraries changing direction from “Are libraries going to be extinct?” to “No, libraries are not going to be extinct. And did you know libraries offer way more than books, such as … ?”, which is fantastic.)
E. What Now? Library Futurizing vs. Library Grounding
What all the discussion at the Summit reminded me of is that, ultimately, the time and effort we spend trying to foresee what the future holds and raising concerns about it may be better directed at refining a positive vision of the desirable future for libraries and at taking well-calculated, decisive actions toward realizing that vision.
Technology is just a tool. It can be used to free people to engage in more meaningful work and creative pursuits. Or it can be used to generate a large number of unemployed people who struggle to make ends meet and to retool themselves with the fast-changing skills that the labor market demands, alongside those in the top 1 or 0.1% of very rich people. And we have the power to influence and determine which path we should and will be on by what we do now.
Certainly, there are trends we need to heed. For example, the economy’s shift toward a bigger role for entrepreneurship than ever before requires more education and support for entrepreneurship for students at universities and colleges. The growing tendency of businesses to look for potential employees based on their specific skill sets rather than their majors and grades has led universities and colleges to adopt digital badging systems (such as Purdue’s Passport) or other ways for their students to record and prove the job-related skills obtained during their study.
But when we talk about the future, many of us tend to assume that there are certain inevitable trends that we either catch or miss, and that those trends will determine what our future will be. We forget that it is not trends but (i) what we intend to achieve in the future and (ii) the actions we take today to realize that intention that really determine our future. (Also, always critically reflect on whatever is trendy; you may be in for a surprise.7) The fact that people will no longer need to physically visit a library to check out books or access library resources does not automatically mean that the library of the future will cease to have a building. The question is whether we will let that be the case. Suppose we decide that we want the library to be, and stay, the vibrant hub for a community’s freedom of inquiry and right to access human knowledge, no matter how much change takes place in society. Realizing this vision ‘IS’ within our power. We only reach the future by walking through the present.
- Stripling, Barbara. “Report on the Summit on the Future of Libraries.” ALA Connect, May 19, 2014. http://connect.ala.org/node/223667. ↩
- Kim, Bohyun. “Higher ‘Professional’ Ed, Lifelong Learning to Stay Employed, Quantified Self, and Libraries.” ACRL TechConnect Blog, March 23, 2014. http://acrl.ala.org/techconnect/?p=4180. ↩
- Ibid. ↩
- For a short, clear, and well-written description of this phenomenon, see Brynjolfsson, Erik, and Andrew McAfee. Race against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy. Lexington: Digital Frontier Press, 2012. ↩
- Brooks, David. “The Streamlined Life.” The New York Times, May 5, 2014. http://www.nytimes.com/2014/05/06/opinion/brooks-the-streamlined-life.html. ↩
- See Timm, Trevor. “Everyone Should Know Just How Much the Government Lied to Defend the NSA.” The Guardian, May 17, 2014. http://www.theguardian.com/commentisfree/2014/may/17/government-lies-nsa-justice-department-supreme-court. ↩
- For example, see this article about what the wide adoption of 3D-printing may mean to the public. Sadowski, Jathan, and Paul Manson. “3-D Print Your Way to Freedom and Prosperity.” Al Jazeera America, May 17, 2014. http://america.aljazeera.com/opinions/2014/5/3d-printing-politics.html. ↩
Here we present a summary of various library technology conferences that ACRL TechConnect authors have attended. There are a lot of them, and some are fairly niche, so we hope this guide helps neophytes and veterans alike in choosing how to spend their limited professional development monies. Do you attend one of these conferences every year because it’s awesome? Did we miss your favorite conference? Let us know in the comments!
The lisevents.com website might be of interest, as it compiles LIS conferences of all types. Also, one might be able to get a sense of the content of a conference by searching for its hashtag on Twitter. Most conferences list their hashtag on their website.
Access
- Time: late in the year, typically September or October
- Place: Canada
- Website: http://accessconference.ca/
- Access is Canada’s annual library technology conference. Although the focus is primarily on technology, a wide variety of topics are addressed, ranging from linked data, innovation, and makerspaces to digital archiving, by librarians in various areas of specialization. (See the past conferences’ schedules: http://accessconference.ca/about/past-conferences/) Access provides an excellent opportunity to get an international perspective without traveling too far. Access is also a single-track conference, offers great opportunities to network, and starts with preconferences and a hackathon, which welcomes all types of librarians, not just library coders. Both the preconferences and the hackathon are optional but highly recommended. (P.S. One of the ACRL TechConnect authors thinks that this is the conference with the best conference lunch and snacks.)
Code4Lib
- Time: early in the year, typically February but this year in late March
- Place: varies
- Website: http://code4lib.org/conference/
- Code4Lib is unique in that it is organized by a group of volunteers and not supported by any formal organization. While it does cover some more general technology concepts, the conference tends to be focused on coding, naturally. Preconferences from past years have covered the Railsbridge curriculum for learning Ruby on Rails and Blacklight, the open source discovery interface. Code4Lib moves quickly—talks are short (20 minutes) with even shorter lightning talks thrown in—but is also all on one track in the same room; attendees can see every presentation.
Computers in Libraries
- Time: Late March or early April
- Place: Washington, DC
- Website: http://www.infotoday.com/conferences.asp
- Computers in Libraries is a for-profit conference hosted by Information Today. Its use of tracks, which organize presentations around a topic or group of topics, is a useful way to navigate a conference, and its overall size is more conducive to networking, socializing, and talking with vendors in the exhibit hall than many other conferences. However, the role of consultants, as opposed to people who work in libraries, in panel and presentation selection and conference management means that there is occasionally a focus on trends that are popular at the moment but don’t pan out, as well as language more suited to an MBA than an MLIS. The conference also lacks a code of conduct, and, given the corporate nature of the conference, the website is surprisingly antiquated.
- They also run Internet Librarian, which meets in Monterey, California, every fall.
— Jacob Berg, Library Director, Trinity Washington University
Digital Library Federation Forum
- Time: later in the year, October or November
- Place: varies
- Website: http://www.diglib.org/
- We couldn’t find someone who attended this. If you have, please add your review of this conference in the comments section!
edUI
- Time: late in the year, typically November
- Place: Richmond, VA
- Website: http://eduiconf.org/
- Not a library conference, edUI is aimed at web professionals working in higher education but draws a fair number of librarians. The conference tends to draw excellent speakers, both from within higher education and the web industry at large. Sessions cover user experience, design, social media, and current tools of the trade. The talks suit a broad range of specialties, from programmers to people who work on the web but aren’t technologists foremost.
Electronic Resources & Libraries
- Time: generally early in the year, late-February to mid-March.
- Place: Austin, TX
- Website: http://www.electroniclibrarian.com/
- The main focus of this conference is workflows and issues surrounding electronic resources (such as licensed databases and online journals), and understanding these is crucial to anyone working with library technology, whether or not they manage e-resources on a daily basis. In recent years the conference has expanded greatly into areas such as open access and user experience, with tracks specifically dedicated to those areas. This year there were also some overlapping programs and themes with SXSW and the Leadership, Technology, Gender Summit.
Handheld Librarian
- Time: held a few times throughout the year
- Place: online
- Website: http://handheldlibrarian.org
- An online conference devoted specifically to mobile technologies. The advantage of this conference is that, without traveling, you can get a glimpse of the current developments and applications of mobile technologies in libraries. It started in 2009 as an annual one-day online conference based on accepted presentation proposals submitted in advance. The conference has gone through some changes in recent years; it now offers a separate day of workshops in addition to the conference itself and focuses each time on a different theme in mobile technologies in libraries. All conference presentations and workshops are recorded, so if you are interested in attending, it is a good idea to check out past presentations and speakers in advance.
Internet Librarian
- Time: October
- Place: Monterey, CA
- Website: http://www.infotoday.com/conferences.asp
- Internet Librarian is a for-profit conference hosted by Information Today. It is quite similar to Information Today’s Computers in Libraries, utilizing tracks to organize a large number of presentations covering a broad swath of library information technology topics. Internet Librarian also hosts the Internet @ Schools track, which focuses on the IT needs of the K-12 library community. IL is held annually in Monterey, California, in October. The speaker list is deep and varied, and one can expect keynote speakers to be prominent and established names in the field. The conference is well attended and provides a good opportunity to network with library technology peers. As with Computers in Libraries, there is no conference code of conduct.
KohaCon
- Time: varies, typically in the second half of the year
- Place: varies, international
- Website: http://koha-community.org/kohacon/
- The annual conference devoted to the Koha open source ILS.
Library Technology Conference
- Time: mid-March
- Place: St. Paul, MN
- Website: http://libtechconf.org/
- LTC is an annual library conference that takes place in March, both organized by and held at Macalester College in St. Paul. Not as tech-heavy as Code4Lib or even Access, LTC’s talks tend to span a whole range of technical aptitudes. Given its timing and location, LTC has historically been primarily of regional interest, but it has seen rising levels of national and international participation.
— John Fink, Digital Scholarship Librarian, McMaster University
LITA Forum
- Time: Late in the year, typically November
- Place: varies
- Website: http://www.ala.org/lita/conferences
- A general library technology conference that’s moderately sized, with some 300 attendees most years. One of the LITA Forum’s nice aspects is that, because of its smaller size and the arranged networking dinners, it’s very easy to meet other librarians. You need not be involved with LITA to attend, and there are no committee or business meetings.
Open Repositories
- Time: mid-summer, June or July
- Place: varies, international
- Website: changes each year, here are the 2013 and 2014 sites
- A mid-sized conference focused specifically on institutional repositories.
Online Northwest
- Time: February
- Place: Corvallis, OR
- Website: http://onlinenorthwest.org/
- A small library technology conference in the Pacific Northwest. Hosted by the Oregon University System, but invites content from Public, Medical, Special, Legal, and Academic libraries.
THATCamp
- Time: all the time
- Place: varies, international
- Website: http://thatcamp.org/
- Every THATCamp is different, but all revolve around technology and the humanities (i.e., The Humanities And Technology Camp). They are unconferences with “no spectators,” and so they reflect the interests of the participants. Some have specific themes such as digital pedagogy, others are attached to conferences as pre- or post-conference events, and some are more general regional events. Librarians are important participants in THATCamps, and if there is one in your area or at a conference you’re attending, you should go. They cost under $30 and are a great networking and education opportunity. Sign up for the THATCamp mailing list or subscribe to the RSS feed to find out about new THATCamps. They have an attendee limit and usually fill up quickly.
The 2014 Horizon Report is mostly a report on emerging technologies. Many academic librarians carefully read its Higher Ed edition, issued every year, to learn about upcoming technology trends. But this year’s Higher Ed edition was interesting to me more for how it reflects the current state of higher education than for the technologies it places on the near-term (one-to-five-year) horizon of adoption. Let’s take a look.
A. Higher Ed or Higher Professional Ed?
To me, the most useful section of this year’s Horizon Report was ‘Wicked Challenges.’ The significant backdrop behind the first challenge “Expanding Access” is the fact that the knowledge economy is making higher education more and more closely and directly serve the needs of the labor market. The report says, “a postsecondary education is becoming less of an option and more of an economic imperative. Universities that were once bastions for the elite need to re-examine their trajectories in light of these issues of access, and the concept of a credit-based degree is currently in question.” (p.30)
Many of today’s students enter colleges and universities with a clear goal: obtaining a competitive edge and better earning potential in the labor market. The results, already familiar to many of us, are grade and degree inflation and the emergence of higher ed institutions that pursue profit over education itself. When the acquisition of skills takes precedence over intellectual inquiry for its own sake, higher education comes to resemble higher professional education or intensive vocational training. As the economy all but forces people to take up lifelong learning simply to stay employed, the friction between the traditional goal of higher education (intellectual pursuit for its own sake) and the changing expectations of higher education (a creative, adaptable, and flexible workforce) will only become more prominent.
Naturally, this socioeconomic background behind the expansion of postsecondary education raises the question of where its value lies. This is the second wicked challenge listed in the report, i.e. “Keeping Education Relevant.” The report says, “As online learning and free educational content become more pervasive, institutional stakeholders must address the question of what universities can provide that other approaches cannot, and rethink the value of higher education from a student’s perspective.” (p.32)
B. Lifelong Learning to Stay Employed
Today’s economy and labor market strongly prefer employees who can be hired, retooled, or let go at the same pace as changes in technology, as technology becomes one of the greatest driving forces of the economy. Workers are expected to enter the job market with more complex skills than in the past, to adjust quickly as the important skills at their workplaces change, and increasingly to take on the role of a creator/producer/entrepreneur in their thinking and work practices. Credit-based degree programs fall short in this regard. It is no surprise that the report selected “Agile Approaches to Change” and “Shift from Students as Consumers to Students as Creators” as two of its long-range and mid-range key trends.
A strong focus on creativity, productivity, entrepreneurship, and lifelong learning, however, puts a heavier burden on both sides of education: instructors and students (full-time, part-time, and professional). While positive in emphasizing students’ active learning, the Flipped Classroom model, selected as one of the key trends in the Horizon Report, often means additional work for instructors. In this model, instructors not only have to prepare study materials for students to go over before class, such as lecture videos, but also need to plan active learning activities for students during class time. The Flipped Classroom model also assumes that students will be able to invest enough time outside the classroom to study.
The unfortunate side effect of this is that those who cannot afford to do so (for example, those who work multiple jobs or have many family obligations) will suffer and fall behind. Today’s students and workers are being asked to demonstrate their competencies by what they can produce, beyond simply presenting the credit hours they spent in the classroom. Probably as a result, the clear demarcation between work, learning, and personal life seems to be disappearing. “The E-Learning Predictions for 2014 Report” from EdTech Europe predicts that ‘Learning Record Stores,’ which track, record, and quantify an individual’s experiences and progress in both formal and informal learning, will emerge in step with the need for the continuous learning that today’s job market requires. EdTech Europe also points out that learning is now being embedded in daily tasks and that we will see a significant increase in the availability and use of casual and informal learning apps, both in education and in the workplace.
C. Quantified Self and Learning Analytics
Among the six emerging technologies in the 2014 Horizon Report Higher Education edition, ‘Quantified Self’ is by far the most interesting new trend. (Other technologies should be pretty familiar to those who have been following the Horizon Report every year, except maybe the 4D printing mentioned in the 3D printing section. If you are looking for the emerging technologies that are on a farther horizon of adoption, check out this article from the World Economic Forum’s Global Agenda Council on Emerging Technologies, which lists technologies such as screenless display and brain-computer interfaces.)
According to the report, “Quantified Self describes the phenomenon of consumers being able to closely track data that is relevant to their daily activities through the use of technology.” (ACRL TechConnect has covered personal data monitoring and action analytics previously.) Quantified Self is enabled by wearable technology devices, such as Fitbit or Google Glass, and the Mobile Web. Wearable technology devices automatically collect personal data. Fitbit, for example, keeps track of one’s sleep patterns, steps taken, and calories burned. And the Mobile Web is the platform that can store and present such personal data directly transferred from those devices. Through these devices and the resulting personal data, we get to observe our own behavior in a much more extensive and detailed manner than ever before. Instead of deciding which parts of our lives to keep records of, we can now let these devices collect almost all types of data about ourselves and then see which data are of use to us and whether any patterns emerge that we can perhaps utilize for the purpose of self-improvement.
Quantified Self is a notable trend not because it involves an unprecedented technology but because it gives us a glimpse of what our daily lives will be like in the near future, in which many of the emerging technologies that we are just getting used to right now (mobile, big data, wearable technology) will come together in full bloom. ‘Learning Analytics,’ which the Horizon Report calls “the educational application of ‘big data’” (p.38) and which can be thought of as the application of Quantified Self to education, has already been making significant progress in higher education. By collecting and analyzing data about student behavior in online courses, learning analytics aims at improving student engagement, providing a more personalized learning experience, detecting learning issues, and determining the behavioral variables that are significant indicators of student performance.
While privacy is a natural concern for Quantified Self, it is to be noted that we ourselves often willingly participate in personal data monitoring through the gamified self-tracking apps that can be offensive in other contexts. In her article, “Gamifying the Quantified Self,” Jennifer Whitson writes:
Gamified self-tracking and participatory surveillance applications are seen and embraced as play because they are entered into freely, injecting the spirit of play into otherwise monotonous activities. These gamified self-improvement apps evoke a specific agency—that of an active subject choosing to expose and disclose their otherwise secret selves, selves that can only be made penetrable via the datastreams and algorithms which pin down and make this otherwise unreachable interiority amenable to being operated on and consciously manipulated by the user and shared with others. The fact that these tools are consumer monitoring devices run by corporations that create neoliberal, responsibilized subjectivities become less salient to the user because of this freedom to quit the game at any time. These gamified applications are playthings that can be abandoned at whim, especially if they fail to pleasure, entertain and amuse. In contrast, the case of gamified workplaces exemplifies an entirely different problematic. (p.173; emphasis my own and not by the author)
If libraries and higher education institutions become active in monitoring and collecting students’ learning behavior, the success of such an endeavor will depend on how well it creates and provides a sense of play to students so that they willingly participate. It will also be important for such a learning analytics project to offer an opt-out at any time and to keep private data as confidential and anonymous as possible.
D. Back to Libraries
The changed format of this year’s Horizon Report, with its ‘Key Trends’ and ‘Significant Challenges’ sections, shows much more clearly the forces in play behind the emerging technologies to look out for in higher education. One big take-away from this report, I believe, is that in spite of doubts about the unique value of higher education, demand will increase due to students’ need to obtain a competitive advantage in entering or re-entering the workforce. Another is that higher ed institutions will endeavor to create appropriate means and tools, such as competency-based assessments and badge systems, for students to acquire and demonstrate skills and experience in ways that appeal to future employers beyond credit-hour-based degrees.
Considering that the pace of change in higher education tends to be slow, this can be an opportunity for academic libraries. Both instructors and students are under constant pressure to innovate and experiment in their teaching and learning processes. Instructors designing for the Flipped Classroom model may need a studio where they can record and produce their lecture videos. Students may need to compile portfolios to demonstrate their knowledge and skills for job interviews. Returning adult students may need to acquire habitual lifelong learning practices with help from librarians. Local employers and students may mutually benefit from a place where certain joint projects can be tried. As a neutral player on campus with tech-savvy librarians and knowledgeable staff, the library can create a place that directly addresses the most palpable student needs not yet satisfied by individual academic departments or student services. Maker labs, gamified learning or self-tracking modules, and a competency dashboard are all examples. From the emerging technology trends in higher ed, we can see that learning activities in higher education and academic libraries will be more and more closely tied to the economic imperative of constant innovation.
Academic libraries may even go further and take up the role of leading the changes in higher education. In his blog post for Inside Higher Ed, Joshua Kim suggests exactly this and also nicely sums up the challenges that today’s higher education faces:
- How do we increase postsecondary productivity while guarding against commodification?
- How do we increase quality while increasing access?
- How do we leverage technologies without sacrificing the human element essential for authentic learning?
How will academic libraries be able to lead the changes necessary for higher education to successfully meet these challenges? It is a question that will stay with academic libraries for many years to come.
Libraries make much use of spreadsheets. Spreadsheets are easy to create, and most library staff are familiar with how to use them. But they can quickly become unwieldy as more and more data are entered. The more rows and columns a spreadsheet has, the more difficult it is to browse and quickly identify specific information. Creating a searchable web application with a database at the back end is a good solution, since it lets users quickly perform a custom search and filter out unnecessary information. But due to the staff time and expertise it requires, creating a full-fledged searchable web database application is not always a feasible option at many libraries.
Creating a custom MS Access database or using a free service such as Zoho can be an alternative. But providing a read-only view of an MS Access database, although possible, can be tricky. MS Access is also software installed locally on each PC, and therefore not necessarily available to library staff when they are away from the work PCs on which it is installed. Zoho Creator offers a way to easily convert a spreadsheet into a database, but its free version has very limited features: a maximum of 3 users, 1,000 records, and 200 MB of storage.
The Google Visualization API Query Language provides a quick and easy way to query a Google spreadsheet and return and display a selective set of data without actually converting the spreadsheet into a database. You can display the query result in the form of an HTML table, which can be served as a stand-alone webpage. All you have to do is construct a custom URL.
A free Google spreadsheet has limits on size and complexity. For example, one free Google spreadsheet can have no more than 400,000 total cells. But you can purchase more Google Drive storage, and you can also query multiple Google spreadsheets (or even your own custom databases) by using the Google Visualization API Query Language and the Google Chart Libraries together. (This will be the topic of my next post. You can also see examples of using the Google Chart Libraries and the Google Visualization API Query Language together in my presentation slides at the end of this post.)
In this post, I will explain the parameters of Google Visualization API Query Language and how to construct a custom URL that will query, return, and display a selective set of data in the form of an HTML page.
A. Display a Google Spreadsheet as an HTML page
The first step is to identify the URL of the Google spreadsheet of your choice.
The URL below opens up the third sheet (Sheet 3) of a specific Google spreadsheet. There are two parameters inside the URL you need to pay attention to: key and gid.
This breaks down the parameters in a way that is easier to view:
The key is a unique identifier for each Google spreadsheet, so you will use it later to create a custom URL that queries and displays the data in this spreadsheet. The gid specifies which sheet in the spreadsheet you are opening: the gid for the first sheet is 0; the gid for the third sheet is 2.
Let’s first see how the Google Visualization API returns the spreadsheet data as a DataTable object. This is only for those who are curious about what goes on behind the scenes. You can see that for this view the URL is slightly different, but the values of the key and gid parameters stay the same.
In order to display the same result as an independent HTML page, all you need to do is to take the key and the gid parameter values of your own Google spreadsheet and construct the custom URL following the same pattern shown below.
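As a minimal sketch of that pattern in Python (the key below is a made-up placeholder, not a real spreadsheet; substitute your own key and gid values):

```python
from urllib.parse import urlencode

# Hypothetical key and gid -- replace with the values from your own spreadsheet's URL.
key = "0AqAPbBT_EXAMPLEKEY123"
gid = "2"  # third sheet; gid numbering starts at 0

# tqx=out:html asks the Visualization API to render the result as an HTML table.
params = {"tqx": "out:html", "key": key, "gid": gid}
url = "https://spreadsheets.google.com/tq?" + urlencode(params)
print(url)
```

Opening the printed URL in a browser would then display the chosen sheet as a stand-alone HTML table.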
B. How to Query a Google Spreadsheet
We have seen how to create a URL to show an entire sheet of a Google spreadsheet as an HTML page above. Now let’s do some querying, so that we can pick and choose what data the table is going to display instead of the whole sheet. That’s where the Query Language comes in handy.
Here is an example spreadsheet with over 50 columns and 500 rows.
What I want to do is to show only columns B, C, D, and F where column C contains ‘Florida.’ How do I do this? Remember the URL we created to show the entire sheet above?
There we had no value for the tq parameter. This is where we insert our query.
Google Visualization API Query Language is pretty much the same as SQL, so if you are familiar with SQL, forming a query is dead simple. If you aren’t, SQL is also easy to learn.
- The query should be written like this:
SELECT B, C, D, F WHERE C CONTAINS 'Florida'
- After encoding it properly, you get something like this:
- Add it to the tq parameter and don’t forget to also specify the key:
I am omitting the gid parameter here because there is only one sheet in this spreadsheet but you can add it if you would like. You can also omit it if the sheet you want is the first sheet. Ta-da!
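The “encoding it properly” step can also be done in code. Here is a minimal sketch using Python’s standard library to percent-encode the query for the tq parameter; the key in the assembled URL is a placeholder you would replace with your own spreadsheet’s key.

```python
# URL-encode the query string so it can be passed as the tq parameter.
from urllib.parse import quote_plus

query = "SELECT B, C, D, F WHERE C CONTAINS 'Florida'"
encoded = quote_plus(query)  # spaces -> '+', commas -> '%2C', quotes -> '%27'
print(encoded)

# Assemble the full query URL (YOUR_SPREADSHEET_KEY is a placeholder).
url = ("https://spreadsheets.google.com/tq?tqx=out:html"
       "&tq=" + encoded + "&key=YOUR_SPREADSHEET_KEY")
```

The same encoding can be produced by any URL-encoding tool; the point is that the raw query goes into the tq parameter in percent-encoded form.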
Compare this with the original spreadsheet view. I am sure you can appreciate how the small effort of creating a URL pays off by making an unwieldy, large spreadsheet manageable to view.
You can also easily incorporate functions such as count() or sum() into your query to get an overview of the data you have in the spreadsheet.
- select D, F, count(C) where (B contains 'author name') group by D, F
For example, this query above shows how many articles a specific author published per year in each journal. The screenshot of the result is below and you can see it for yourself here: https://spreadsheets.google.com/tq?tqx=out:html&tq=select+D,F,count(C)+where+%28B+contains+%27Agoulnik%27%29+group+by+D,F&key=0AqAPbBT_k2VUdEtXYXdLdjM0TXY1YUVhMk9jeUQ0NkE
Take this spreadsheet as another example.
The simple query below displays the library budget by year. For those who are unfamiliar with ‘pivot,’ a pivot table is a data summarization tool. The query below asks the spreadsheet to calculate the total of all the values in the B column (the budget amount for each category) by the values found in the C column (years).
- select sum(B) pivot C
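To make the semantics concrete, here is what `select sum(B) pivot C` computes, sketched in plain Python: the B values are summed for each distinct C value. The rows below are made-up sample data for illustration only, not the actual budget spreadsheet.

```python
# What "select sum(B) pivot C" computes: sum column B per distinct C value.
# Sample rows are invented for illustration.
rows = [
    {"B": 1000, "C": 2010},
    {"B": 500,  "C": 2010},
    {"B": 750,  "C": 2011},
]

totals = {}
for row in rows:
    totals[row["C"]] = totals.get(row["C"], 0) + row["B"]

print(totals)  # one summed value per year, as the pivot result would show
```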
This is another example of querying the spreadsheet connected to my library’s Literature Search request form. The following query asks the spreadsheet to count the number of literature search requests by Research Topic (=column I) that were received in 2011 (=column G) grouped by the values in the column C, i.e. College of Medicine Faculty or College of Medicine Staff.
- select C, count(I) where (G contains '2011') group by C
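The same filter-then-group-then-count logic can be sketched in plain Python with a `Counter`. The sample rows below are invented to mirror the three columns the query touches (C, G, and I); they are not the real form data.

```python
# "select C, count(I) where (G contains '2011') group by C" in plain Python:
# keep only rows whose G value mentions 2011, then count rows per C value.
from collections import Counter

rows = [  # invented sample data
    {"C": "College of Medicine Faculty", "G": "3/14/2011", "I": "genetics"},
    {"C": "College of Medicine Staff",   "G": "5/2/2011",  "I": "oncology"},
    {"C": "College of Medicine Faculty", "G": "1/9/2010",  "I": "cardiology"},
]

counts = Counter(r["C"] for r in rows if "2011" in r["G"])
print(counts)  # the 2010 request is filtered out before counting
```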
C. More Querying Options
There are many more things you can do with a custom query. Google has extensive documentation that is easy to follow: https://developers.google.com/chart/interactive/docs/querylanguage#Language_Syntax
These are just a few examples.
- ORDER BY __ DESC
: Order the results in the descending order of the column of your choice. Without ‘DESC,’ the result will be listed in the ascending order.
- LIMIT 5
: Limit the number of results. Combined with ‘Order by,’ you can quickly filter the results to the most recent or the oldest items.
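Together, those two clauses behave like a sort followed by a slice. A plain-Python sketch of `order by F desc limit 5`, using invented rows with a year in column F:

```python
# What "order by F desc limit 5" does: sort rows by column F descending,
# then keep only the first five. Sample rows are invented for illustration.
years = [2008, 2012, 2010, 2013, 2009, 2011, 2007]
rows = [{"B": f"item{i}", "F": year} for i, year in enumerate(years)]

top5 = sorted(rows, key=lambda r: r["F"], reverse=True)[:5]
print([r["F"] for r in top5])
```

Dropping `reverse=True` corresponds to omitting ‘DESC,’ which sorts in ascending order instead.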
My presentation slides given at the 2013 LITA Forum, below, include more detailed information about the Google Visualization API Query Language, its parameters, and other options, as well as how to use the Google Chart Libraries in combination with the Google Visualization API Query Language for data visualization, which is the topic of my next post.
Happy querying Google Spreadsheet!