
Alexa Meets WorldCat

The provocative statement “voice is the new UI” came into vogue in 2016. I was skeptical at first, as many of us have learned to be when a topic trends like this. I thought to myself, “Isn’t a UI something on a screen?” However, as 2017 comes to a close, it is abundantly clear that voice UIs aren’t just a fad. We IT professionals (developers, user experience designers, product managers—and of course librarians!) need to start thinking about voice UIs in the same way we think about visual ones. And of course, you can’t dive into this topic without quickly meeting Alexa.

Alexa is, it almost goes without saying, the voice assistant (à la Apple’s Siri) built into Amazon’s Echo devices. Alexa, unlike Siri, was built on an open platform, so developers have had access to create custom “skills” since day one. (Responding to Alexa’s runaway success, Apple released the Siri SDK in June 2016—too late, perhaps, to catch up to Amazon. Microsoft’s Cortana and Google’s nameless smart assistant are surely in the mix too.) Amazon will even reward you for creating Alexa skills: publish one in November, and you’ll get a free hoodie. If 100 people use it, you can get a free Echo Dot. Amazon definitely knows how to get developers using its platform!

Given that I work with OCLC APIs for a good chunk of my day, it wasn’t long before I decided I needed to get Alexa to talk to WorldCat. My first Alexa skill was a PHP script that lived on a colleague’s Raspberry Pi. The code wasn’t exactly pretty, but it worked. I signed up to receive Alexa developer emails, and when I saw that Amazon was offering a day-long Alexa skills workshop here in Columbus, Ohio, I jumped on the opportunity to attend.

It was at this workshop a couple of months ago that I realized the true power of the Alexa platform, particularly the Alexa SDK and its integration with Lambda, Amazon Web Services’ “serverless computing environment.” Using these tools made it dramatically easier to create a proof-of-concept Alexa skill that searches WorldCat, which I’ve published on the OCLC Developer Network GitHub. This skill, written in Python, grabs bibliographic metadata and library location information from the WorldCat Search API and uses it to answer voice commands like “Alexa, ask WorldCat where I can find ‘On the Road’.”
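To give a flavor of how the pieces fit together, here is a minimal sketch of a Lambda handler in the same spirit. To be clear, this is an illustration, not the published skill’s code: the intent name (FindBookIntent), its Title slot, and the WORLDCAT_WSKEY environment variable are hypothetical, the sketch skips the library-location lookup, and you should verify the WorldCat Search API endpoint and response fields against the Developer Network documentation.

```python
# Illustrative Alexa skill handler for AWS Lambda (Python 3).
# Hypothetical pieces: the FindBookIntent intent, its Title slot,
# and the WORLDCAT_WSKEY environment variable.
import os
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# WorldCat Search API, OpenSearch flavor (returns an Atom feed).
SEARCH_URL = "http://www.worldcat.org/webservices/catalog/search/worldcat/opensearch"
ATOM = "{http://www.w3.org/2005/Atom}"

def search_worldcat(title):
    """Return the title of the best WorldCat match, or None."""
    params = urllib.parse.urlencode(
        {"q": title, "count": 1, "wskey": os.environ["WORLDCAT_WSKEY"]}
    )
    with urllib.request.urlopen(f"{SEARCH_URL}?{params}") as resp:
        feed = ET.fromstring(resp.read())
    entry = feed.find(f"{ATOM}entry")
    return entry.findtext(f"{ATOM}title") if entry is not None else None

def build_response(speech, card_title=None, end_session=True):
    """Wrap plain-text speech (and an optional card) in Alexa's JSON envelope."""
    response = {
        "outputSpeech": {"type": "PlainText", "text": speech},
        "shouldEndSession": end_session,
    }
    if card_title:
        response["card"] = {"type": "Simple", "title": card_title, "content": speech}
    return {"version": "1.0", "response": response}

def lambda_handler(event, context):
    """Entry point Alexa invokes via Lambda for each voice request."""
    request = event["request"]
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "FindBookIntent":
        title = request["intent"]["slots"]["Title"]["value"]
        match = search_worldcat(title)
        if match:
            return build_response(f"I found {match} in WorldCat.", card_title=match)
        return build_response(f"Sorry, I couldn't find {title} in WorldCat.")
    # LaunchRequest or anything else: prompt and keep the session open.
    return build_response("Ask me where you can find a book.", end_session=False)
```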

Here’s a demo of my Alexa skill in action:

I also set up the skill to complement voice responses with visual cards, which display in the Alexa app (read the following interaction from bottom to top).

[Screenshot: the skill’s visual cards in the Alexa app]
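Under the hood, a card is just one more element in the JSON response the skill returns to Alexa; a “Simple” card carries a title and plain-text content. A sketch of such a response, with made-up values, might look like this:

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "On the Road is available at a library near you."
    },
    "card": {
      "type": "Simple",
      "title": "On the Road",
      "content": "On the Road is available at a library near you."
    },
    "shouldEndSession": true
  }
}
```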

If you want to try it out yourself, please see the README for setup instructions.

I do want to be clear: this is a proof of concept. Is my code buggy? Surely. Could the user experience be improved? No doubt. Does this actually solve a problem for library patrons? Maybe, but I can’t say for sure. I can say, however, that it has been fun and thought-provoking to bring Alexa and WorldCat together, and I am excited to continue learning more.

We’d love to hear your thoughts about voice UIs for library services at devnet@oclc.org or on the WC-DEVNET-L listserv. Pull requests are welcome too!