Summon 4 HN — bits o' code

As part of the JISC Summon 4 HN project, we’ll be releasing some chunks of code that I’ve knocked together for our Summon implementation at Huddersfield.
The code will cover these areas:

  1. updating Summon with MARC record additions, updates and deletions from Horizon
  2. providing live availability information from Horizon without resorting to screen-scraping the OPAC
  3. customising 360 Link using jQuery

In theory, the first two might also be of interest to Horizon sites that are implementing an alternative OPAC (e.g. VuFind or AquaBrowser) where you need to set up regular MARC exports. The third might be of interest to 360 Link sites in general.
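For the first item, the nightly sync essentially boils down to working out which bib records were added, updated or deleted since the last export. Here's a rough Python sketch of that delta logic, assuming you can pull a change log of (bib ID, action, timestamp) tuples out of Horizon; the log format, table and IDs below are invented for illustration, not actual Horizon schema:

```python
from datetime import datetime

# Hypothetical change-log entries: (bib_id, action, timestamp).
# In practice you'd derive these from Horizon's bib history tables.
CHANGE_LOG = [
    ("b1000001", "create", datetime(2009, 6, 1, 23, 5)),
    ("b1000002", "update", datetime(2009, 6, 2, 9, 30)),
    ("b1000001", "update", datetime(2009, 6, 2, 10, 0)),
    ("b1000003", "delete", datetime(2009, 6, 2, 11, 15)),
]

def partition_changes(log, since):
    """Collapse the log into one action per bib since the last export."""
    latest = {}
    for bib_id, action, when in sorted(log, key=lambda r: r[2]):
        if when > since:
            # a create followed by updates still counts as a create
            if action == "update" and latest.get(bib_id) == "create":
                continue
            latest[bib_id] = action
    adds = [b for b, a in latest.items() if a == "create"]
    updates = [b for b, a in latest.items() if a == "update"]
    deletes = [b for b, a in latest.items() if a == "delete"]
    return adds, updates, deletes
```

Each of the three lists would then drive a separate MARC export (or, for deletes, just a list of IDs) to be shipped off to Summon.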
Keep an eye on the Project Code section of the Summon 4 HN blog for details of the code 🙂

I couldn’t find a relevant photo for this blog post, so instead, let’s have another look at those infamous MIMAS #cupcakes from ILI2009 🙂

Squeezing Juice into the OPAC

Those who went to either Richard Wallis’ API session or my OPAC session at the UKSG 2009 Conference will have heard about Richard’s Open Source Juice Project.
The project, which was launched at Code4Lib 2009, is designed to allow developers to create OPAC extensions (or, if you prefer, “bells and whistles”) that, in theory, should be product independent. This is such a genius idea!
Part of the problem with the stuff we’ve developed at Huddersfield is that we had to put an infrastructure in place around the OPAC in order to allow us to do the tweaking — an extra web server, MySQL databases, etc. It works well for us, but it’s not an easily transferable model. I’m always more than happy to share the “how we did it” but, more often than not, the actual code is too reliant on that back end infrastructure.
I need to do a bit more testing, but I’m hoping to have a HIP 3 “metadef” ready soon. The job of the metadef is to define where on the OPAC page things like the ISBN, author and title appear, so it will be different for every OPAC product. However, once you have a suitable metadef for your OPAC, you can start using the Juice extensions to add extra functionality — I had a quick play around last night just to prove that Juice will work with HIP 3…
I’m not sure if this is in Richard’s plans for Juice, but it would be handy to extend the metadef to include other OPAC specific information — e.g. given an ISBN or some keywords, how do you construct a URL to trigger a search on that OPAC. That’d be really useful for embedding recommendations, etc.
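To illustrate the idea, here's a Python-flavoured sketch of what an extended metadef might look like: a mapping from metadata fields to page selectors, plus a URL template for triggering a search on that OPAC. The selectors, hostname and URL pattern are entirely made up, and a real HIP 3 metadef would of course use Juice's own JavaScript conventions rather than a Python dict:

```python
from urllib.parse import quote_plus

# A rough analogue of a Juice "metadef": where metadata lives on the
# page, plus (the suggested extension) how to build a search URL.
# Everything below is invented for illustration.
HIP3_METADEF = {
    "isbn":   "td.isbn",           # hypothetical CSS selector
    "title":  "span.titleField",
    "author": "a.authorLink",
    "search_url": "http://library.example.ac.uk/ipac20/ipac.jsp?index=.GW&term={terms}",
}

def search_url(metadef, keywords):
    """Build a keyword-search URL for this OPAC from its metadef."""
    return metadef["search_url"].format(terms=quote_plus(" ".join(keywords)))
```

With something like this in place, an extension that wants to embed recommendations only needs the metadef, not any knowledge of the underlying OPAC product.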

Horizon 7.4.2 – available "worldwide"

The press release for Horizon 7.4.2 has just gone online.
Both Talin Bingham (Chief Technology Officer) and Gary Rautenstrauch (Chief Executive Officer) use the word “worldwide” in the press release:

This new version adds functionality requested by our customers worldwide and offers great benefits to libraries and patrons alike…

Providing the features librarians need and delivering the best user experience worldwide are SirsiDynix’s highest priorities.

However, the reality is that Horizon 7.4.2 is a North American only release. Much as I would love to be able to roll out some of those new features here at Huddersfield, and much as I would love to have all those really nasty security holes in HIP fixed, the bottom line is that I can’t — SirsiDynix’s definition of “worldwide” is a curiously US-centric one.
Horizon customers in the UK, France, Germany, Sweden, Belgium, Netherlands, etc, are not “qualifying customers”, despite paying their yearly maintenance.
SirsiDynix International made a decision a year or two ago that they would no longer provide regional variations of Horizon, and I can fully understand why. As a non-American customer, I might not be happy about it, but I can understand why. What I can’t understand (and frankly, it’s starting to really piss me off) is why the company continues to pretend in public that its releases are worldwide.
If anyone senior from the SirsiDynix US office would like to contact me today, then please do — I’m sure you’ll find my direct telephone number in your UK customer contacts database. Maybe there’s a perfectly good reason why most of your Horizon customers in Europe are no longer classified as being part of your “worldwide” customer base and I’d really love to hear it.

Visual virtual shelf browsing

The Zoomii web site seems to be getting a lot of attention at the moment, so I got wondering how easy/difficult it would be to do a virtual bookshelf in the OPAC…
It’s definitely a “crappy prototype” at the moment, and the trickiest thing turned out to be getting the iframe to jump to the middle (where, hopefully, the book you’re currently browsing is shown). Anyway, you can see it in action on our OPAC.
I suspect the whole thing would work much better in Flash and it would look really cool if it used a Mac “dock” style effect. I wonder if I can persuade Iman to conjure up some Flash? 😉

Google Graphs

We’ve had loan data on the OPAC for a couple of years now, although it’s only previously been visible to staff IP addresses. Anyway, a couple of months ago, I revamped it using Google Graphs and I’ve finally gotten around to adding a stats link that anyone can peruse — you should be able to find it in the “useful links” section at the foot of the full bib page on our OPAC.
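For anyone curious how those graphs are put together, the Google Chart API works by encoding the data straight into an image URL. Here's a small Python sketch that builds such a URL from yearly loan counts; the parameter names are the standard Chart API ones, but the overall shape (a simple vertical bar chart) is just an assumption about how you might present loan stats, not what our OPAC actually emits:

```python
def loan_chart_url(loans_by_year, width=320, height=120):
    """Build a Google Chart API image URL for yearly loan counts."""
    years = sorted(loans_by_year)
    counts = [loans_by_year[y] for y in years]
    top = max(counts)
    return (
        "http://chart.apis.google.com/chart"
        "?cht=bvs"                                      # vertical bar chart
        f"&chs={width}x{height}"                        # image size
        f"&chd=t:{','.join(str(c) for c in counts)}"    # data series
        f"&chds=0,{top}"                                # data scaling
        "&chxt=x"                                       # show an x axis
        f"&chxl=0:|{'|'.join(str(y) for y in years)}"   # x-axis labels
    )
```

Drop the resulting URL into an `img` tag and Google renders the chart for you — no server-side graphing library needed.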
As an example, here are the stats for the 2006 edition of Giddens’ “Sociology”…

2008 — The Year of Making Your Data Work Harder

Quite a few of the conversations I’ve had this year at conferences and exhibitions have been about making data work harder (it’s also one of the themes in the JISC “Towards Implementation of Library 2.0 and the E-framework” study). We’ve had circ driven borrowing suggestions on our OPAC since 2005 (were we the first library to do this?) and, more recently, we’ve used our log of keyword searches to generate keyword combination suggestions.
However, I feel like this is really just the tip of the iceberg — I’m sure we can make our data work even harder for both us (as a library) and our users. I think the last two times I’ve spoken to Ken Chad, we’ve talked about a Utopian vision of the future where libraries share and aggregate usage data 😀
There’s been a timely discussion on the NGC4Lib mailing list about data and borrower privacy. In some ways, privacy is a red herring — data about a specific individual is really only of value to that individual, whereas aggregated data (where trends become apparent and individual whims disappear) becomes useful to everyone. As Edward Corrado points out, there are ways of ensuring patron privacy whilst still allowing data mining to occur.
Anyway, the NGC4Lib posts spurred me on into finishing off some code primarily designed for our new Student Portal — course specific new book list RSS feeds.
The way we used to do new books was torturous… I’ve thankfully blanked most of it out of my memory now, but it involved fund codes, book budgets, Word macros, Excel and Borland ReportSmith. The way we’re trying it now is to mine our circulation data to find out what students on each course actually borrow, and use that to narrow down the Dewey ranges that will be of most interest to them.
The “big win” is that our Subject Librarians haven’t had to waste time providing me with lists of ranges for each course (and with 100 or so courses per School, that might take weeks). I guess the $64,000 question is would they have provided me with the same Dewey ranges as the data mining did?
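As a sketch of the mining step, the Python below (with invented circ records and course codes) counts a course's loans by ten-wide Dewey bands, then keeps the busiest bands until a cutoff proportion of the loans is covered. The real data, band widths and thresholds would obviously differ:

```python
from collections import Counter

# Hypothetical circ records: (course_code, dewey_class_number).
LOANS = [
    ("HUMSOC1", "301.01"), ("HUMSOC1", "301.42"), ("HUMSOC1", "305.5"),
    ("HUMSOC1", "305.8"),  ("HUMSOC1", "301.1"),  ("HUMSOC1", "941.08"),
]

def dewey_ranges(loans, course, cutoff=0.8):
    """Return the ten-wide Dewey bands covering `cutoff` of a course's loans."""
    tens = Counter()
    total = 0
    for c, dewey in loans:
        if c == course:
            tens[int(float(dewey)) // 10 * 10] += 1
            total += 1
    ranges, covered = [], 0
    for start, n in tens.most_common():     # busiest bands first
        if covered / total >= cutoff:
            break
        ranges.append((start, start + 9))
        covered += n
    return sorted(ranges)
```

The cutoff is what stops a single stray loan (a sociology student borrowing one history book, say) from dragging a whole irrelevant Dewey range into the course's new books feed.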
The code is “beta”, but looks to be generating good results — you can find all of the feeds in this directory:
If you’d like some quick examples, then try these:

Is your data working hard enough for you and your users? If not, why not?

Sexy SirsiDynix shenanigans in sunny Southampton

(Well, it’ll be sexy in-so-far as I’m including some gratuitous nudity in my session on “RSS and Social Networking” on Thursday. Will I be stripping off and revealing all in the name of “2.0”? You’ll have to come along and find out!)
I’m currently sat in Manchester Airport, waiting for a budget flight down to Southampton, which is playing host to this year’s “Dynix Users Group/European Unicorn Users Group Joint Conference”. High on the agenda is the merging of the two user groups, and hopefully a shorter name — my personal choice is still “SirsiDynix Libraries User Group”, if only for the cool “SLUG” acronym.
As Ian has already mentioned on his blog, European Horizon users are crossing their fingers that SirsiDynix CEO Gary Rautenstrauch’s “commitment to our worldwide customer base” will result in an announcement that Horizon 7.4.2 will be made available to non-US customers. Sadly, the 7.4.1 release was a US only affair and UK sites are still tootling along (quite merrily, it has to be said) on 7.3.4.
Right — must dash, my boarding gate has just been announced! 3G card allowing, I’m hoping to blog and Flickr the conference.

Scrum and Agile

I’m sure many SirsiDynix customers remember the terms “Scrum” and “Agile” being bandied around a few years ago during the development of Horizon 8.0. What I don’t remember being as widely reported at the time was that half of the developers were based in Russia (the other half were based in Provo, USA).
Anyway, the Google Blogsearch RSS feed for SirsiDynix threw up an interesting blog post last week: “Managing Offshore Software Projects“.

This project distributed Scrum teams so that half of each team was in the United States at SirsiDynix and the other half of each team was at Exigen Services in St. Petersburg, Russia. It showed how to set up distributed/outsourced teams to achieve both linear scalability of teams on a large project and distributed velocity of each team the same as the velocity of a small colocated team.
This project is still generating controversy in the Agile community by showing that you can run distributed high performance Scrums. There were quality problems on this project that caused some in the Agile community to discount the remarkable results and argue that it could not be repeated successfully.

I guess whatever your thoughts about Jack Blount and Horizon 8 are (or were), it certainly seems he knew what he was doing!
Whilst I’m thinking about Jack, I’d like to offer my sincere condolences to the Blount family for their recent loss.

Decorative tag cloud

It’s not often that I’d consider adding pure “eye candy” to the OPAC, but I couldn’t decide what would be the best way of making this tag cloud functional. So, I made an executive decision and decided it shouldn’t be functional 😀
If you run a keyword search on our OPAC, at the foot of the page you should see a keyword cloud (it might take a few seconds to appear). The cloud is generated from previous keyword searches used on our OPAC. Here’s the one for “library”…
For multi-keyword searches, an electronic coin is tossed and you either get a cloud of the union or the intersection of your keywords. The former uses previous searches that contain any of the keywords, and the latter is only those that contain all of them (if that makes sense!)
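Here's a minimal Python sketch of that coin-toss logic, using an invented search log (the real cloud is generated server-side from our OPAC's own logs, so treat the names and data here as illustrative only):

```python
import random
from collections import Counter

# Invented search log: one previous keyword search per entry.
SEARCH_LOG = [
    "library management", "digital library", "library 2.0",
    "academic library management", "music library",
]

def keyword_cloud(log, keywords, top_n=50, intersect=None):
    """Cloud of terms from previous searches matching any/all of the keywords."""
    if intersect is None:
        intersect = random.random() < 0.5   # the 'electronic coin'
    match = all if intersect else any
    cloud = Counter()
    for search in log:
        terms = set(search.split())
        if match(k in terms for k in keywords):
            cloud.update(terms - set(keywords))  # don't cloud the query itself
    return cloud.most_common(top_n)
```

The term counts then just need scaling into font sizes to produce the cloud itself.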
As it’s not functional, the cloud is just a decorative window into the hive mind of our users.
I’m interested to hear what you think — should the cloud be functional, or does it work as just “eye candy”?