At last month's ALA Annual Conference 2012, I attended an Evidence Based Discussion Group hosted by ACRL (Association of College and Research Libraries) ULS (University Libraries Section). It was a really interesting discussion and I'm glad I was able to attend (though sadly I couldn't make the full session due to clashes). I think I was the only non-US participant, which added a layer of complexity as I kept having to ask for clarification over terminology. This turned out to be useful, though, as it made us each question our assumptions and get to the core of evidence-based librarianship and what it means.
Around 10-15 people attended, from a range of backgrounds, though most worked either exclusively or partly on collecting and analysing library data for their university (often in a role such as Assessment Librarian).
After introducing ourselves, we began by discussing evidence-based practice (EBP) and what it means in librarianship. We started with what it means in medicine, which is probably the most common use of EBP. One of the participants, who came from a health background, explained that medical EBP typically involves applying an intervention to one section of a population and comparing the results against a similar control group. We agreed that not much research in librarianship currently takes this approach.
We spent a while discussing quantitative and qualitative data. Many of the group were concerned that quantitative data seemed to be the focus of EBP, yet it does not provide the additional context libraries usually need to make decisions. Certainly from my own experience (and much of our work at Evidence Base), qualitative data is crucial for understanding the reasoning behind any quantitative data. For example, if usage of a particular resource is low, is this due to a lack of relevance to courses, a lack of promotion, or the resource being listed in the wrong place? Traditionally EBP is quantitative, but it doesn't have to be – this is something one of the discussion participants had checked with Andrew Booth, one of the leading names in evidence-based librarianship.
We also discussed some different models for framing research questions, including SPICE (setting, perspective, intervention, comparison, evaluation) and PICO (population, intervention, comparison, outcome).
Something which really interested me was the term "assessment" and what it means (and how it differs from evaluation). The general consensus of the group was that assessment is more of a continual cycle, whereas evaluation is usually done at a particular point in time (afterwards). Assessment seems to be fairly well embedded in a number of US university libraries (e.g. via an Assessment Librarian who co-ordinates information from across the service), and this is how information is used to influence strategic decisions.
We also discussed the importance of drawing on existing literature in our research, and of sharing the findings of our own research through appropriate dissemination channels. To further the sector as a whole, we need to keep adding to the existing evidence base so that future decisions – both in our own libraries and in other libraries with similar aims – are well supported. This can happen via the academic and professional press (such as the journal Evidence Based Library and Information Practice) as well as through events and face-to-face and online discussions. This is something that has also been highlighted by the LIS RiLIES project, and something I am keen to work on improving. Wouldn't it be great if we were sharing research data across the sector to enhance this even more and encourage replicable research studies?
Some examples of areas of innovative evidence-based research in librarianship were discussed, including:
- Library Impact Data Project – looking at the correlation between library use (electronic resource access, book borrowing, library visits) and degree attainment
- University of Minnesota – linking reference chats with library usage
- STAR-Trak – using data from across the university to try to identify potential dropouts before they leave, so that support can be offered (with the ultimate aim of improving retention)
I really enjoyed the discussion and hope these conversations continue to move evidence-based librarianship forward.