- Horizon scanning
- Repertory grids
- Data mining
This was probably my most anticipated of the three workshops in terms of content, and it didn’t disappoint.
Horizon scanning is the most relevant to my field of work, as much of my work is in innovation. Knowledge of some of these techniques could have helped me on a previous project, where we evaluated a programme of work and aimed to map the landscape in specific areas at both the start and the end of the programme. Horizon scanning would have been particularly useful at the start, when we could have explored where the technologies might take us by the end of the programme. Dr Harry Woodroof’s presentation is embedded below:
The sessions on both repertory grids and data mining were some of the highlights of the whole series of events for me. They’re both areas I knew very little about but found fascinating, and the speakers were very engaging.
Repertory grids are a method from psychology, so they were bound to appeal to me; I have very fond memories of my A level in Psychology and my undergraduate dissertation in Sports Psychology. Aside from that, though, I found the methodology really fascinating. It’s probably easier for you to follow the presentation (embedded below), but I’ll try to give a brief overview. It’s an exploratory method, grounded in personal construct theory, used to explore an individual’s personal constructs; the result is co-created, with both the interviewee and the interviewer contributing to confirm it. It’s often used to set variables for further research. The interview starts from a broad area of concern, and the process then follows these stages (using the fruit example, as the speaker did):
- Come up with at least 8 elements to fit the concept – e.g. apple, banana, pineapple, strawberry, peach, lemon…
- Choose three of them, and consider which two are most similar and what makes the third different (this is called triadic construct elicitation) – e.g. strawberry and peach are sweet, but lemon is sour
- Repeat the triadic construct elicitation until you have around 8 variables (i.e. bipolar scales of opposite characteristics)
- Use these variables to create a grid with a sliding scale (often a 5-point scale, but it could be more)
- Record each element in the appropriate position on each scale
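The stages above can be sketched as a small data structure: elements, bipolar constructs, and a grid of ratings. This is a minimal illustrative sketch, not real interview data — the ratings and the distance measure are my own assumptions for demonstration.

```python
# Elements elicited for the broad area of concern (fruit, as in the example).
elements = ["apple", "banana", "pineapple", "strawberry", "peach", "lemon"]

# Each construct is a bipolar pair elicited triadically (e.g. sweet vs sour).
constructs = [("sweet", "sour"), ("soft", "firm"), ("smooth skin", "rough skin")]

# grid[construct][element] = rating on a 1-5 scale
# (1 = left pole, 5 = right pole); values here are invented for illustration.
grid = {
    ("sweet", "sour"): {"apple": 2, "banana": 1, "pineapple": 3,
                        "strawberry": 1, "peach": 1, "lemon": 5},
    ("soft", "firm"): {"apple": 4, "banana": 2, "pineapple": 4,
                       "strawberry": 1, "peach": 2, "lemon": 3},
    ("smooth skin", "rough skin"): {"apple": 1, "banana": 2, "pineapple": 5,
                                    "strawberry": 3, "peach": 2, "lemon": 2},
}

def distance(a, b):
    """Sum of absolute rating differences between two elements across constructs."""
    return sum(abs(grid[c][a] - grid[c][b]) for c in constructs)

# Elements with a small distance are construed similarly by this interviewee.
print(distance("strawberry", "peach"))   # rated alike (both sweet, soft)
print(distance("strawberry", "lemon"))   # rated quite differently
```

A simple distance like this is one way a completed grid can be analysed to see which elements an interviewee construes as similar.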
You can of course repeat the interview process with a number of people and consolidate the findings, resulting in a set of variables emerging as the key characteristics; you can then use those constructs to test a larger population’s reactions to different elements. I can’t think of anything I could currently apply this to, but it’s a really interesting method and I’d like to learn more. Dr Phil Turner’s presentation is embedded below:
The final formal session, on data mining, was again of real interest to me. I find it intriguing (and frustrating!) that libraries hold a lot of data about their users but do not tend to utilise this in any way, when they could use it both for assisting users (e.g. recommendations) and for more traditional business intelligence purposes (e.g. occupancy statistics used to predict patterns of use, borrowing data used to assist in collection development). The presentation was really interesting and highlighted the importance of the planning and testing stages involved in data mining, as well as the essential data cleaning.
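As a rough illustration of the recommendations idea mentioned above, a library could count which items tend to be borrowed by the same users (“people who borrowed this also borrowed…”). This is a minimal sketch with invented loan records — not how any particular library system works, and real data would need the cleaning step the presentation emphasised.

```python
from collections import Counter
from itertools import combinations

# Invented loan records: (user, item) pairs meaning a user borrowed an item.
loans = [
    ("u1", "Dune"), ("u1", "Foundation"),
    ("u2", "Dune"), ("u2", "Foundation"), ("u2", "Hyperion"),
    ("u3", "Dune"), ("u3", "Hyperion"),
]

# Group loans by user, then count how often each pair of items
# appears in the same user's borrowing history.
by_user = {}
for user, item in loans:
    by_user.setdefault(user, set()).add(item)

co_counts = Counter()
for items in by_user.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, n=2):
    """Items most often co-borrowed with `item`."""
    scores = Counter({b: c for (a, b), c in co_counts.items() if a == item})
    return [b for b, _ in scores.most_common(n)]

print(recommend("Dune"))  # items co-borrowed with Dune
```

Even this toy version shows why planning and data cleaning matter: duplicate records or merged user accounts would skew the co-occurrence counts immediately.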
I’ve been following the innovative work of Dave Pattern (@daveyp) for a long time now and this presentation gave me just a tiny insight into how complex it must be to utilise data in the ways Dave has been doing (check out his blog posts on library usage data).
I definitely think data mining is something we should be doing more of in libraries, but due to the complexity I don’t think it’s something the profession can currently achieve – not many libraries have a @daveyp! I do believe strongly that this area needs to develop, though, as it supports important strategic decisions; we should either be drawing on experts in this area to help us, or employing people with the skills to do so. If you’re interested in learning more, check out Kevin Swingler’s presentation embedded below:
To close the event we had a group discussion exercise on improving LIS research and encouraging practitioners and researchers to work together more closely. This gave me a lot of food for thought, as it’s something Evidence Base is particularly keen to support.
All in all it was a great final workshop and I’ve really appreciated having the opportunity to learn more about research methods and techniques and meet others interested in the same. I hope we can continue to keep the DREaM alive!