Management and Leadership Discussion #5: Assessing Services

27 March 2024

As a user of libraries, what are your priorities for services? Make a list of your three favorite offerings of a library and then think about how library managers would evaluate those functions or services. For example, if you utilize interlibrary loan services, how would the effectiveness of that service be determined? Share your thoughts and ideas.

Hmmm…this discussion was a little confounding, since I don't really have any so-called "favorite" services I could name at the drop of a hat, the way someone could with books. For all that I came to rely on libraries before working within them, the only things I really wanted them to do well will seem quite basic: a nice, accessible OPAC; a generous, user-oriented acquisitions policy (which only became a favorite two years ago, when I learned libraries take patron suggestions at all); and plentiful, varied seating in the branches that have the space for it.

Since the first two are almost entirely digital, gathering usage statistics is straightforward: most OPACs are set up to automatically log every action a user makes, and the acquisition suggestions submitted over a given period could be tallied by the processing department if the system doesn't do so already. Of course, data by itself is just an inert set of numbers for most purposes. As the Council on Library and Information Resources (n.d.) explains, hundreds or thousands of number strings removed from their origin offer little to a library system looking to evaluate or improve its current arrangement:

Meaning is contextual, but with TLA [transaction log analysis], there is no way to connect data in transaction logs with the users' needs, thoughts, goals, or emotions at the time of the transaction. Interpreting the data requires not only careful definitions of what is being measured but additional research to provide contextual information about the users. (para. 5)
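To make that limitation concrete, here is a minimal sketch in Python of the kind of tally transaction log analysis produces. The log format and action names are entirely hypothetical, since every OPAC vendor structures its logs differently:

```python
from collections import Counter

# Hypothetical OPAC transaction log entries: (timestamp, session_id, action)
log = [
    ("2024-03-01T09:12:03", "s01", "search"),
    ("2024-03-01T09:12:40", "s01", "view_record"),
    ("2024-03-01T09:13:02", "s01", "place_hold"),
    ("2024-03-01T10:45:19", "s02", "search"),
    ("2024-03-01T10:45:55", "s02", "search"),
    ("2024-03-01T10:46:30", "s02", "exit"),
]

# TLA can tell us *what* happened...
action_counts = Counter(action for _, _, action in log)
print(action_counts)  # Counter({'search': 3, 'view_record': 1, ...})

# ...but not *why*: did session s02's repeated searches end in
# frustration, or did the patron find a call number and simply leave?
# Answering that requires the contextual research CLIR describes.
```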

There are, of course, other methods of evaluation that can be considered alongside usage data. While many types of these assessments are available, as demonstrated by Farrell and Mastel (2016), the surplus of quantitative data here means only the most straightforward gathering of qualitative data, such as user surveys, is strictly required. And again, since both services are digital, links to these surveys can easily be added to their online interfaces. Once this contextual data is gathered, the two can be synthesized to answer the real questions at hand: how many interactions with a system's OPAC end with patrons finding what they're looking for? Is the catalog designed in a way that easily facilitates this? Does the acquisition suggestion page succinctly spell out the requirements for and restrictions on materials? What is the ratio of requests approved to requests denied? And, ultimately: are the systems in place worth the monetary cost the library puts into them?
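Once both kinds of data exist, computing those ratios is the easy part, as this short sketch shows (the figures are invented for illustration, not real library data):

```python
# Invented figures for illustration only.
opac_sessions = 12_400               # from transaction logs
sessions_reporting_success = 9_300   # from an exit survey, extrapolated

suggestions_approved = 410           # from the acquisitions system
suggestions_denied = 95

success_rate = sessions_reporting_success / opac_sessions
approval_ratio = suggestions_approved / (suggestions_approved + suggestions_denied)

print(f"OPAC success rate: {success_rate:.1%}")           # 75.0%
print(f"Suggestion approval rate: {approval_ratio:.1%}")  # 81.2%
```

The hard part is judging whether those percentages are good, which only the contextual data, and the cost figures behind that final question, can answer.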


References

Council on Library and Information Resources. (n.d.). Usage studies of electronic resources. https://www.clir.org/pubs/reports/pub105/section3/

Farrell, S. L., & Mastel, K. (2016). Considering outreach assessment: Strategies, sample scenarios, and a call to action. In the Library with the Lead Pipe. https://www.inthelibrarywiththeleadpipe.org/2016/considering-outreach-assessment-strategies-sample-scenarios-and-a-call-to-action/