Assessing Library Digital Learning Objects with the Learning Object Evaluation Metric (LOEM)

Poster Description: How does your library routinely evaluate digital learning objects? This poster reports on an internal assessment of learning objects using the Learning Object Evaluation Metric (LOEM), an instrument designed to evaluate online learning objects on four criteria: design, interaction, engagement, and accessibility/usability.


Presenter Name: Cal Murgu, Brock University Library

Presenter Bio: Cal received his MA from McGill University (2015) and MLIS from the University of Western Ontario (2018). He was the Digital Humanities Librarian at the New College of Florida before he joined Brock University as the Instructional Design Librarian in 2020.

6 replies on “Assessing Library Digital Learning Objects with the Learning Object Evaluation Metric (LOEM)”

Cal, thanks for sharing your experiences using this model for evaluating learning objects. What improvements will you make to the LOEM model in future evaluations to make it more relevant to library-specific materials? Do you plan to review your learning objects on a recurring basis? What does that timeline look like?

Hi Matt, thanks for your question. I think what’s required is a slight amendment to the LOEM evaluation matrix to account for library content. The LOEM template (linked in the presentation) includes a sheet titled “LOEM Metrics and Items,” which is essentially an evaluation rubric. I find that some of the rubric items do not jibe with library learning objects (for example, there seems to be a bias towards Articulate-style eLearning tutorials, whereas library learning objects often revolve around single-page guides). The problem with making these amendments is that they negate the (statistical) ‘validity’ of the instrument, which is one of the main reasons to use it.

We do plan to review learning objects on a recurring basis. We plan to have a cyclical review process where a set number of LOs are scheduled to be reviewed each year, so as not to overburden someone with having to review _all of them_ every couple of years.

Thanks again for your question.

Thank you for sharing your poster. This was interesting. I’ve also worked to develop a tutorial evaluation tool, which can be used to evaluate tutorials in our library environment and also to consider tutorials for adapting or adopting. So far it has mostly been used to evaluate tutorials for adopting, since our offerings aren’t as robust as we would like. You raise some interesting ideas about using multiple reviewers and using ICR. One of the barriers to including more people in this work is the time commitment. I’m wondering how many learning objects you will be reviewing each year. What kind of time commitment will you expect from other reviewers, and how will you select them?

Hi Melissa,

Thanks for your question. Would love to see what your evaluation tool looks like!

Great point regarding time commitments. We’re trying to implement a cyclical review process, where a set number of LOs are ‘up for review’ every X number of years. In this way, we won’t have to burden one person (or multiple people) with reviewing our entire content library every few years. I don’t know for certain what that will look like, but likely somewhere around 3-5 LOs per review period (with a full LO review cycle lasting 5 years). We’ve (re)established an eLearning Team to co-design LOs going forward. This team would likely take on these review duties.

Hi Cal,

You can check out our evaluation criteria and process so far in my poster.

I like that your rubric specifically calls out meaningful interactions and quality of feedback. The rubric I currently use includes interactivity, but doesn’t specifically mention feedback to the learner, which is really important to us.

I’m jealous that you have an eLearning Team!