For those of you interested in the developer competition being run by the JISC MOSAIC Project, I’ve put together a quick & dirty API for the available data sets. If it’s easier for you, you can use this API to develop your competition entry rather than working with the entire downloaded data set.
edit (31/Jul/2009): Just to clarify — the developer competition is open to anyone, not just UK residents (however, UK law applies to how the competition is being run). Fingers crossed, the Project Team is hopeful that a few more UK academic libraries will be adding their data sets to the pot in early August.
The URL to use for the API is http://library.hud.ac.uk/mosaic/api.pl and you’ll need to supply a ucas and/or isbn parameter to get a response back (in XML).
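For example, a request URL can be put together like this — a quick Python sketch, where the UCAS course code (“L500”) is just a made-up placeholder (substitute one that actually appears in the MOSAIC data sets):

```python
from urllib.parse import urlencode

BASE = "http://library.hud.ac.uk/mosaic/api.pl"

def mosaic_url(**params):
    """Build a MOSAIC API request URL from keyword parameters."""
    return BASE + "?" + urlencode(params)

print(mosaic_url(ucas="L500"))        # query by UCAS course code (placeholder code)
print(mosaic_url(isbn="0415014190"))  # query by ISBN
```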
The “ucas” value is a UCAS Course Code. You can find these codes by going to the UCAS web site and doing a “search by subject”. Not all codes will generate output using the API, but you can find a list of codes that do appear in the MOSAIC data sets here.
If you use both a “ucas” and “isbn” value, the output will be limited to just transactions for that ISBN on courses with that UCAS course code.
You can also use these extra parameters in the URL…
- show=summary — only show the summary section in the XML output
- show=data — only show the data in the XML output (i.e. hide the summary)
- prog=… — only show data for the specified progression level (e.g. staff, UG1, etc, see documentation for full list)
- year=… — only show data for the specified academic year (e.g. 2005 = academic year 2005/6)
- rows=… — max number of rows of data to include (default is 500); n.b. the summary section shows the breakdown for all rows, not just the ones included by the rows limit
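Putting those parameters together, here’s a rough sketch of fetching and parsing a filtered response. The parameter values in the comment are illustrative only (the course code is a placeholder), and I haven’t exercised this against the live service:

```python
import urllib.request
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

def mosaic_query(**params):
    """Build the query string, dropping any parameters left as None."""
    return urlencode({k: v for k, v in params.items() if v is not None})

def fetch_mosaic(ucas=None, isbn=None, show=None, prog=None, year=None, rows=None):
    """Fetch a MOSAIC API response and return the parsed XML root."""
    qs = mosaic_query(ucas=ucas, isbn=isbn, show=show, prog=prog,
                      year=year, rows=rows)
    url = "http://library.hud.ac.uk/mosaic/api.pl?" + qs
    with urllib.request.urlopen(url, timeout=30) as resp:
        return ET.parse(resp).getroot()

# e.g. summary only, first-year undergraduates, academic year 2005/6,
# capped at 100 rows (network call, so not run here):
# root = fetch_mosaic(ucas="L500", show="summary", prog="UG1",
#                     year="2005", rows="100")
```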
The format of the XML is pretty much the same as shown in the project documentation guide, except that I’ve added a summary section to the output.
The API was knocked together quite quickly, so please report any bugs! Also, I can’t guarantee that the API is 100% stable, so please let me know (e.g. via Twitter) if it appears to be down.
I’ve been meaning for ages to add a web service front end onto the book usage data that we released in December. So, better late than never, here it is!
It’s not the fastest bit of code I’ve ever written, but (if there’s enough interest) I could speed it up.
The web service can be called a couple of different ways:
1) using an ISBN
a) http://library.hud.ac.uk/api/usagedata/isbn=0415014190 (“Language in the news”)
b) http://library.hud.ac.uk/api/usagedata/isbn=159308000X (“The Adventures of Huckleberry Finn”)
Assuming a match is located, data for one or more items will be returned. This includes FRBR-style matching using the LibraryThing thingISBN data, as shown in the second example, where we don’t have an item that exactly matches the given ISBN.
2) using an ID number
a) http://library.hud.ac.uk/api/usagedata/id=125120 (“Language and power”)
The item ID numbers are included in the suggestion data and are the internal bibliographic ID numbers used by our library management system.
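Both forms of call can be wrapped in a couple of lines. A sketch — note that the service takes the key=value pair as part of the path itself, rather than as a normal query string:

```python
import urllib.request
import xml.etree.ElementTree as ET

def usagedata_url(key, value):
    """Build a usage-data URL; 'key' is either 'isbn' or 'id'."""
    if key not in ("isbn", "id"):
        raise ValueError("key must be 'isbn' or 'id'")
    return f"http://library.hud.ac.uk/api/usagedata/{key}={value}"

def fetch_usagedata(key, value):
    """Fetch and parse the XML response (network call, so not run here)."""
    with urllib.request.urlopen(usagedata_url(key, value), timeout=30) as resp:
        return ET.parse(resp).getroot()

print(usagedata_url("isbn", "159308000X"))
print(usagedata_url("id", "125120"))
```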
edit 1: I should also have mentioned that the XML returned is essentially the same format as described here.
edit 2: I’ve now rewritten the code as a mod_perl script (to make it faster when using ISBNs) and slightly altered the URL.
The new Google Book Search Data API has some really cool features and I’m wondering how much of it I can shoehorn into the OPAC?
Our students increasingly expect the OPAC search box to search the full-text of our book stock — i.e. they type in several words describing what they’d find it useful to borrow a book about. Searching just the bog-standard MARC metadata, you’ll be lucky to get much back… and perhaps then, only if we’ve got the full table of contents in the MARC record.
So, for example, if I do a keyword search for “english media coverage of immigrants and social exclusion” on our OPAC, I’ll find nothing. However, if I run the same query through the Google API and then filter the results (using the ISBN) to just items we hold in the library, I get 6 hits from the first 40 results that Google sends me:
(I’d probably find more if I also used thingISBN or xISBN to match on associated ISBNs)
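The filtering step itself is just set membership on the ISBNs. A toy sketch — the ISBNs and catalogue here are invented for illustration, and a real version would first expand each result ISBN via thingISBN or xISBN before checking holdings:

```python
def filter_to_holdings(result_isbns, holdings):
    """Keep only the search results whose ISBN is in our catalogue,
    preserving the original relevance ordering."""
    held = set(holdings)
    return [isbn for isbn in result_isbns if isbn in held]

# Toy data: pretend these came back from the Google API, in rank order...
google_results = ["0199283326", "0415014190", "1841240400"]
our_catalogue = {"0415014190", "1841240400", "159308000X"}
print(filter_to_holdings(google_results, our_catalogue))
```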
I’m not going to claim that those 6 are the most relevant books we hold in the library for that particular search (I’m not sure if I’d find anything of use in the “California politics” book)… but that’s only because I have no idea what the most relevant books are and, no matter how closely I scrutinise our MARC records, I probably never will 😉 So, short of quizzing a Subject Librarian, some of those books might be worth a quick browse… which I could do virtually with the Embedded Viewer API:
I guess the big question is “how many API searches will Google let me do every day?”
Great to see that OpenLibrary (“One web page for every book”) now has an API!
There’s an interesting debate going on via the Code4Lib email list regarding the API. Specifically, should they have used SRU or is exposing a simple API better? Personally, I’m all for simple APIs that non-library techies can pick up and run with.
I’ve worked as a developer in libraries now for nearly 14 years and I’ve never used (or even seriously looked at) SRU. When I read the specification, I can feel my eyes begin to slowly glaze over! Perhaps this is just because I cut my teeth writing EDI processing software in COBOL and I’ve always suspected that people who develop specifications for use in libraries (e.g. Edifact, Z39.50, MARC, etc) are all a bunch of masochists 😉
I think Superpatron Ed might have let the cat out of the bag already, but Google should be making an announcement about Google Book Search tomorrow that might be of interest to libraries… can you guess what it might be?