Maturity Model as the Tool For Information/Data Literacy Assessment

Introduction
To assess the development level of an information system, so-called maturity models (MMs) are commonly used. Anderson and Jessen (2003) define maturity as a state in which an organization is able to perfectly achieve the goals it sets for itself. An MM is understood as a set of successive levels that together form the expected or required logical path from an initial state to a final state of maturity (Pöppelbuß and Röglinger, 2011). An MM often takes the form of a matrix crossing three to five maturity levels with several to a dozen evaluation criteria (dimensions) of the capability framework. This tool resembles one of the methods used to assess the information/data literacy (IL/DL) of information users, the so-called rubrics (Oakleaf, 2008). Rubrics for IL assessment describe the parts and levels of a specific task, product, or service (Hafner and Hafner, 2003). They are “descriptive scoring schemes” created by educators to improve the analysis of students’ work (Moskal, 2000). They place target indicators, or “criteria”, in the rows and levels of performance in the columns of a matrix or grid of benchmarks. This brief comparison shows both the need for and the likelihood of interaction between the two tools.

Objectives
This article examines how DL problems are presented in selected MMs for research data management services (RDMS). RDMS MMs, and especially their DL dimensions, are therefore the subject of the research. The hypothesis that DL problems are included as one of the dimensions of RDMS MMs was tested. Answers to the following questions were sought: Do the authors of RDMS MMs recognize the role of DL problems? What DL issues are present in the selected MMs? At which MM levels are DL problems placed? To what extent can DL rubrics be used in creating MMs?

Methodology
Content analysis of six RDMS MMs (all found in the literature) was performed in order to identify matrix elements used to evaluate data-literacy (DL)-related problems. RDMS MMs were chosen because of their topicality (they were created in recent years, from 2014 to 2021) and because a sufficiently large number of MMs of this type exist. This choice meant that the research also took DL problems into account.

Outcomes
IL/DL problems are represented in most MMs for RDMS, which means that the hypothesis has been confirmed. They are placed in dimensions defined as leadership, services, support services, users and stakeholders, accessibility, and usability. The rubrics used in DL assessment should be incorporated into the construction of MMs for RDMS because they contain agreed values and descriptive, yet easily digestible, data.

References

  • Anderson, E., & Jessen, S. (2003). Project maturity in organizations. International Journal of Project Management, 21, 457–461.
  • Hafner, J., & Hafner, P. (2003). Quantitative analysis of the rubric as an assessment tool: An empirical study of student peer-group rating. International Journal of Science Education, 25, 1509–1528.
  • Moskal, B. (2000). Scoring rubrics: What, when, and how? Practical Assessment Research & Evaluation, 7.
  • Oakleaf, M. (2008). Dangers and opportunities: a conceptual map of information literacy assessment approaches. Portal: Libraries and the Academy, 8, 233–253.
  • Pöppelbuß, J., & Röglinger, M. (2011). What makes a useful maturity model? A framework for general design principles for maturity models and its demonstration in business process management. In Proceedings of the 19th European Conference on Information Systems (ECIS), paper 28. Helsinki, Finland.

Marek Nahotko
Jagiellonian University, Kraków, Poland
