There's been plenty of talk around the Library 2.0 theme about evaluation and assessment. At Information Wants to be Free, Meredith Farkas says what she wanted to see come out of Library 2.0 was a greater focus on assessment. I certainly want to see libraries have a greater focus on assessment, too, and I want to see them publishing about it. (Particularly public libraries. We just don't publish enough.)
Why aren't we (libraries in general) publishing about the success (or failure) of our 2.0 projects? Why is there virtually no data to be found that quantifies some of the outcomes of 2.0 projects? We've been on this 2.0 bandwagon long enough for studies and assessments and evaluations to have been undertaken. For a movement that's intrinsically tied up with quick publishing channels like blogs and wikis, it seems strange that there is such a dearth of published studies on 2.0 projects. Why is that?
Walt Crawford had this to say in a recent post on his two blog survey books:
Maybe there’s a clear desire not to know how library blogs are doing in the real world, other than a few cherry-picked examples. I’d like to think that’s not the case. It would be unprofessional to tell people about how wonderful library blogs are, and encourage them to create such blogs, without giving them honest and broad-ranging information on what’s actually happening with such blogs.
I'd like to think that's not the case, too. But I wonder. I wonder a few things:
Is the lack of publishing indicative of a lack of success? (And a fear of talking about it?)
Is the lack of publishing indicative of a perceived lack of success, a perception that might be formed because we're not collecting the right data? (e.g. How are we measuring ROI? Do we just count comments on blog posts? Or do we look at exit links, time spent on the page, holds on titles blogged about, impact on online resource usage stats…? I certainly hope all of these metrics and more are informing libraries' evaluations of their blogs, because if we're just relying on comments to measure user engagement, then we're not seeing the full picture. A rough sketch of what pulling those metrics together might look like follows this list.)
Is the lack of publishing indicative of a lack of evaluation? (And if so, why aren't we evaluating? Because we don't know how? Because we don't have time? Because we don't want to know?)
Or is it just that we're not publishing about our evaluations?
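Just to make that middle question concrete, here's a minimal, purely illustrative Python sketch of what pulling a handful of those metrics together might look like, instead of relying on comment counts alone. None of the field names or numbers come from a real analytics tool or a real library blog; they're assumptions made up for the example.

```python
# Illustrative only: combine several engagement signals for library blog posts
# so that no single metric (like comment counts) dominates the picture.
# All field names and sample numbers are hypothetical.

from dataclasses import dataclass


@dataclass
class PostMetrics:
    title: str
    comments: int                   # reader comments on the post
    exit_clicks: int                # clicks out to the catalogue or databases
    avg_seconds_on_page: float      # average time readers spent on the post
    holds_on_featured_titles: int   # holds placed on titles the post featured


def engagement_summary(posts: list[PostMetrics]) -> dict:
    """Aggregate a few metrics across posts into one simple summary."""
    return {
        "posts": len(posts),
        "total_comments": sum(p.comments for p in posts),
        "total_exit_clicks": sum(p.exit_clicks for p in posts),
        "avg_time_on_page": round(
            sum(p.avg_seconds_on_page for p in posts) / len(posts), 1
        ),
        "holds_on_featured_titles": sum(p.holds_on_featured_titles for p in posts),
    }


if __name__ == "__main__":
    sample = [
        PostMetrics("New crime fiction", comments=2, exit_clicks=40,
                    avg_seconds_on_page=95.0, holds_on_featured_titles=12),
        PostMetrics("Database of the month", comments=0, exit_clicks=25,
                    avg_seconds_on_page=60.0, holds_on_featured_titles=0),
    ]
    print(engagement_summary(sample))
```

The point isn't the code itself; it's that even a few lines like this force you to decide up front which signals you'll treat as evidence of success.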
I've got a blogging project in the pipeline at mpow. It's germinating quite slowly, because I want to see it well planned. We want a well-planned implementation, but also a well-planned, multi-faceted evaluation. If it works, I want to know about it, and I want us to be able to reflect on what we did and make links to what worked. If it doesn't work, I want to know about it just as much (if not more), because I want to be able to reflect on what we did, look for ways we could improve, and ultimately pull the pin if that's what we need to do.
Blogs (and all things shiny and 2.0) are just great. They're fun for staff to work on, and they have huge potential to engage our users. But none of us has time to run services that don't work. If we don't evaluate, we have no way of knowing whether they do.
We know that "because we always did it that way" is not a good reason to keep doing the things we've always done, whether they work or not. But neither should a failure to evaluate be the reason we keep on keeping on with our 2.0 services.
If you have evaluated your 2.0 service, publish about it! And if you have published, I'd love to receive some links.