One of the biggest challenges for CamGuides for Master’s was how to assess its impact. Sure, we could use analytics to look at page views and bounce rate and device type, and so on. I can tell you, for example, from the preliminary stats, that CamGuides for Master’s attracted significant interest – over 25,000 page views as a whole during the pre-arrival period (roughly 20 August to 7 October). I can tell you that ‘Becoming a Graduate Student’ and ‘Software for Academic Use’ were the most popular, and I can even let you know that this coheres completely with academic research into the transition to PGT-level study. I can tell you that there’s a consistent bounce rate across each of the topics. But what does this actually mean? What can we actually read into this?
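For anyone unfamiliar with how these metrics are derived, bounce rate is conventionally the share of sessions in which a visitor viewed exactly one page before leaving. A minimal sketch, using entirely made-up session data rather than our actual analytics, might look like this:

```python
# Illustrative only: hypothetical session records, not real CamGuides analytics.
# Each session is the list of pages viewed during one visit.
sessions = [
    ["becoming-a-graduate-student"],
    ["software-for-academic-use", "becoming-a-graduate-student"],
    ["home"],
    ["home", "software-for-academic-use"],
]

# Total page views: every page in every session counts once.
page_views = sum(len(session) for session in sessions)

# Bounce rate: proportion of sessions that viewed exactly one page.
bounces = sum(1 for session in sessions if len(session) == 1)
bounce_rate = bounces / len(sessions)

print(page_views)   # 6
print(bounce_rate)  # 0.5
```

The point, as the rest of this post argues, is that the arithmetic is trivial – it’s the interpretation that’s hard.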
In my opinion, not much. 25,000 views sounds impressive, but the newness of CamGuides both in this institution and beyond means that we haven’t much to benchmark against. Is 25,000 views actually any good? At least next year we’ll have something to compare it with. Even if 25,000 views is seen as a roaring success, the site is an OER – we have no idea at all whether those who accessed it are the target audience. In all honesty, at least 500 of those views are me checking to see if the bloomin’ thing had gone live yet.
The bounce rate is meaningless too. In CamGuides for Master’s we actively promoted and legitimised a pick-and-choose approach, and the heterogeneity of its potential audience undermines any conclusions we can reach about the usefulness, or uselessness, of its content. Find me more than one Medieval Literature student interested in statistical software packages. They literally weren’t meant to look at all of it.
So while we need to measure how often it’s accessed, and maybe even what people do while they’re there (watch the videos? download the resources?), this just isn’t enough. One of the more successful, though slightly accidental, things we did last year was to gather qualitative information. Jenni Skinner’s lovely account of how CamGuides opened up lines of communication is an example of this. There are others too. Just as the stats give us pretty graphs, so the qualitative information gives us wonderful stories. But the stories are individual and contextualised, and so just as with the stats, it’s hard to extrapolate much from them.
In the spirit of avoiding a second year-long institutional shrug when asked about the impact of CamGuides, and especially given that CamGuides for Undergraduates is well on its way, we need to address this realistically. At the CILN meeting a few weeks back, a group of us took on part of this challenge, and an embryonic plan started to form. But there are tough questions that need to be answered.
How we define impact – which may be in a variety of ways – is a really challenging starting point. How we measure it, once defined, is perhaps easier – especially from a quantitative perspective (e.g. what the analytics show, who promoted it, how many colleges sent out details, how many course documents and VLE pages include it). Determining what to benchmark against is another factor, as is thinking critically about what (and whose) purposes these measurements serve, but none of these are insurmountable.
But quantitative measures are not enough. We also need to account for the fact that both CamGuides are learning objects. So we need to identify, measure and record evidence of learning (no mean feat) as well as – and this is the crucial bit – ethically and justifiably tying that evidence to CamGuides. How do we assess impact, and learning, in a way that takes account of the relative smallness and diffuseness of CamGuides in the context of the other educational experiences our students are having?
Please use the comments below to add your thoughts about any of these – and thank you!
Helen Murphy, English Faculty Library