Research Article

Analysis of online rubric platforms: advancing toward erubrics


Abstract

Rubrics play a crucial role in shaping educational assessment, providing clear criteria for both teaching and learning. The advent of online rubric platforms has the potential to significantly enhance the effectiveness of rubrics in educational contexts, offering innovative features for assessment and feedback through the creation of erubrics. This study presents a comprehensive analysis of 19 online rubric platforms structured around five research questions (RQs): examining general platform features, rubric design features, and rubric implementation features; identifying the strongest online rubric platforms available; and investigating whether the platforms support the creation and implementation of erubrics. Our analysis of the design features revealed varying levels of customisation and flexibility across platforms, crucial for effective assessment. Regarding implementation features, we found a mix of online and offline capabilities, with a limited number of platforms offering more advanced features (e.g. collaborative options). Through a detailed scoring system, we identify the platforms that lead innovation in design and implementation. Unfortunately, the vast majority of platforms do not support features for the creation of erubrics. We provide a detailed list of implementation recommendations for teachers, researchers, and platform designers (Appendix F).

Rubrics are long-standing tools that have accompanied our educational systems for decades and are now common at all educational levels (Andrade Citation2023). They have become internationally well known and have spread worldwide, especially in the last two decades, as shown by systematic reviews (Dawson Citation2017; Panadero et al. Citation2024). In general, rubrics are well received among students and teachers (e.g. Andrade and Du Citation2005; Chan and Ho Citation2019), probably due to their potential to produce stronger summative and formative results (Jonsson and Svingby Citation2007; Panadero and Jonsson Citation2013).

Another aspect that has become present in classrooms worldwide is online education (Yu Citation2021). These days, in most modern countries, educators at all educational levels take advantage of online resources to enhance their instructional designs (Beach and Willows Citation2014; Ulanday et al. Citation2021). These online resources range from simply having materials and resources posted online to educational settings in which everything takes place online (i.e. online education). Rubrics, as the popular tools they are (Panadero et al. Citation2024), have benefited from this online shift, and there are plenty of online resources to design and implement rubrics. A salient type of online tool is platforms aimed at helping teachers and students to use rubrics, such as Rubric Scorer or Smart Rubric. However, there is no systematic review of the qualities of such platforms. This poses two problems. First, educators and researchers may not be informed about the features of these rubric platforms or about which ones are more effective. This limits their ability to select the most suitable tools for their pedagogical needs and research aims. Second, the lack of comparison and reflection on these platforms’ characteristics may hinder platform designers from continuously improving and adapting to educational and research requirements. Our aim is to perform a systematic review to analyse the main features of these platforms and whether they support erubrics, while extracting conclusions for teachers, researchers and platform designers.

Rubrics, their effects, design and implementation

A rubric can be defined as a tool that: ‘articulates expectations for student work by listing criteria for the work and performance level descriptions across a continuum of quality’ (Brookhart Citation2018, p.1). The most typical format for rubrics is a table or matrix, though there have been calls to design rubrics considering different elements (e.g. Grainger and Weir Citation2016). The assessment criteria are usually contained in the first column, while the subsequent columns represent levels of performance ranging from high to low quality or vice versa (Panadero et al. Citation2024).

While there is a debate about whether rubrics are positive for education (for a review, see Panadero and Jonsson Citation2020), the empirical evidence currently available shows their potential for both summative and formative purposes. Empirical reviews show that rubrics can enhance the quality of summative assessment, for instance by increasing scoring reliability (Jonsson and Svingby Citation2007), and they can also increase students’ academic performance and metacognitive and self-regulatory strategies, among other outcomes (Panadero et al. Citation2024). Based on these two purposes, we aim to explore whether online rubric platforms incorporate features that support and potentially enhance rubrics’ formative and summative functions.

The design and implementation of rubrics are critical to their educational success (Panadero and Jonsson Citation2020). Consequently, these aspects need to be investigated within the context of online rubric platforms. First, the design of rubrics is an area that has received considerable attention (Brookhart Citation2013, Citation2018) and understandably so as it is the essence of the tool. Several authors have extensively explored this domain, from the development of taxonomies on elements to consider when designing a rubric (Dawson Citation2017), to practical guidelines for construction (Tierney and Simon 2004), and advisories on common pitfalls to avoid (Popham Citation1997). Moreover, recent work by Panadero et al. (Citation2024) emphasises the importance of detailing rubric characteristics and their implementation in educational settings. It is then clear that attention to detail should be paid to all the design decisions and to the final rubric.

Second, implementation has been less studied than rubric design (Brookhart Citation2018; Panadero and Jonsson Citation2020). This is a significant gap for the rubric field, as implementation is arguably even more important than design: it is the main influence on how rubrics are actually used by students. Nevertheless, there are interesting implementation proposals, such as the pedagogical principles voiced by Andrade (Citation2005, Citation2023), work on how teachers use rubrics to evaluate their students (Postmes et al. Citation2023), or proposals with specific steps to follow (Jones et al. Citation2016). As design and implementation are crucial for rubrics’ educational success, we analysed the features of the online platforms in these two areas.

Erubric: evolution of rubrics in digital learning environments?

Importantly, rubrics implemented in digital learning environments would be expected to present some features distinct from ‘traditional’ rubrics (Ana et al. Citation2020). However, the literature on erubrics remains limited and reveals a lack of consensus regarding their definition, probably because, to our knowledge, there are no specific propositions on what an erubric should be.

Here we define an erubric as a digital rubric that presents unique digital features enhancing its design and implementation. We believe it is crucial to differentiate an erubric from a digitised traditional rubric, which merely converts a paper-based rubric into a digital format without additional enhancements. An erubric is a more powerful digital tool, including more advanced features such as dynamic customisation to accommodate diverse learning trajectories, the integration of multimedia elements to enrich assessment criteria, or the facilitation of real-time feedback mechanisms to support iterative learning processes.

As just mentioned, one of the problems with the development of erubrics might be the lack of specific propositions on what features they should entail. For that reason, we propose here nine features that online rubric platforms should incorporate to reach their full potential for the design and implementation of erubrics (see Table 1).

Table 1. Proposal of features for an erubric.

Importantly, erubrics hold potential for analytics-driven insights to inform both teaching and learning strategies, representing a significant departure from traditional rubrics. Building on our theoretical propositions, we explore in one of our research questions (RQs) whether the platforms support these features.

Aim and research questions

Our aim is to perform a systematic review of online rubric platforms to analyse their main features, identify which platforms are more powerful, and determine whether they support erubrics, while extracting conclusions for teachers, researchers and platform designers. We explored five RQs:

  • RQ1. What are the general features of online rubric platforms?

  • RQ2. What features do the platforms offer for the design of rubrics?

  • RQ3. What features do the platforms offer for the implementation of rubrics?

  • RQ4. Which online rubric platforms are the most effective for educational use?

  • RQ5. What erubrics features are supported by the platforms?

Method

Search strategies

The platform search was carried out using two strategies. First, searches in Google, Android Market and AppStore were conducted using the following keywords and combinations of keywords: (‘rubric’; ‘rubric platforms’; ‘rubric maker’; ‘rubric examples’; ‘rubrics for projects’; ‘grading rubrics’; ‘rubric schools’; ‘designing rubrics’; ‘design rubrics’) + (self-assessment); + (peer-assessment); + (teacher assessment). Second, we consulted several rubric experts for platform references.
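For readers who wish to reproduce a comparable search, the sketch below shows one way the keyword combinations listed above could be generated programmatically before being entered into each search engine. This is only an illustration of the combination logic described in the text, not the authors’ actual procedure; the variable names are ours.

```python
from itertools import product

# Base keywords and assessment-type modifiers listed in the text; the empty
# string stands for searching the base term on its own.
base_terms = [
    "rubric", "rubric platforms", "rubric maker", "rubric examples",
    "rubrics for projects", "grading rubrics", "rubric schools",
    "designing rubrics", "design rubrics",
]
modifiers = ["", "self-assessment", "peer-assessment", "teacher assessment"]

# Build every query string to be entered in Google, the Android Market
# and the AppStore.
queries = [f"{base} {mod}".strip() for base, mod in product(base_terms, modifiers)]

print(len(queries))   # 9 base terms x 4 variants = 36 query strings
print(queries[:4])    # e.g. ['rubric', 'rubric self-assessment', ...]
```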

The two strategies were used over several iterations from February 2023 to April 2023. In order to be included in the analysis, a platform had to: (1) be available as a webapp or as an app for Android and/or Apple devices; (2) be in English, Spanish, and/or German, the languages spoken by the authors; (3) be available in the time period of the search; and (4) be accessible either (a) free of charge, (b) through a demo version, or (c) via free access granted to us after contacting the administrators. A flow chart of the search process and inclusion of platforms can be found in Figure 1. A total of 40 platforms were identified; after removing duplicates, 32 platforms were evaluated for inclusion. Of these 32, one was excluded because it was behind a paywall and three attempts to contact the administrators for access were unsuccessful. At that point, we included and analysed 29 platforms. However, during the coding only 19 platforms worked properly, so that is the final number of included platforms. We describe the reasons why the other 10 platforms did not work in the Operational status section.

Figure 1. Flow chart of search process and inclusion of platforms.

Coding procedure

Operational status

While coding the platforms we identified four distinct operational statuses. First, we encountered nineteen platforms that were fully operational and proceeded to analyse them. Second, we identified two platforms that were not working; thus, we were unable to investigate them. Third, we identified three platforms that were mostly rubric repositories. Two of these platforms also had a rubric creation function; thus, we analysed them. The other platform did not have such a feature and was not analysed. Fourth, and last, eight platforms claimed to support the creation of rubrics, but the tools they generated did not meet the criteria for rubrics despite the use of the term. Specifically, five of these platforms limited users to merely creating and grading assignments without the capability to incorporate performance levels or their descriptors. Two platforms lacked the functionality to create performance level descriptors. Moreover, one platform was essentially a form creation tool, further deviating from the standard definition of a rubric. Therefore, none of these eight platforms was analysed.

Coding categories

A coding scheme was developed based on three main categories (features of websites and apps, rubric design and rubric implementation). Within these categories, several subcategories were created to code the different specifications. The subcategories were specified through a deductive process, based on predefined theoretical constructs. We used the review by Panadero and Jonsson (Citation2020) as the theoretical basis for our study and, on this foundation, deductively included the categories identified as relevant in their review when analysing the platforms. The detailed coding scheme can be found in the Supplementary Material (Appendix A). Importantly, we only coded the rubric implementation section if the rubric, as designed in the platform, could be used online.

Quality scoring

We scored the platforms using the coding scheme presented in Appendix A. Most categories were scored from 0 to 1 point in 0.25 intervals. A total score was computed for each platform representing the overall quality of the platform, ranging from 0 to 16 points. We coded four categories nominally, as they were qualitative and not quantitatively scorable: (1) company or institution behind the platform, (2) type of institution (governmental vs. private), (3) aim of the platform, and (4) general layout (table vs. other). Coding of all platforms can be found in Appendix B.
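As an illustration of the scoring logic just described, the sketch below computes a single platform’s total from category scores awarded in 0.25-point steps on a 0 to 1 scale. The category names and values are hypothetical placeholders, not the actual scheme from Appendix A, which contains more categories and sums to a 0 to 16 point maximum.

```python
# Hypothetical category scores for one platform (the real coding scheme in
# Appendix A has more categories and sums to a 0-16 point maximum).
scores = {
    "ease_of_use": 1.0,
    "documentation_and_help": 0.75,
    "performance_level_flexibility": 0.5,
    "qualitative_feedback": 0.25,
    "online_cocreation": 0.0,
}

STEP = 0.25  # scores are awarded in 0.25-point increments

def validate(score: float) -> float:
    """Check that a category score lies in [0, 1] on the 0.25-point grid."""
    if not 0.0 <= score <= 1.0 or (score / STEP) % 1 != 0:
        raise ValueError(f"invalid category score: {score}")
    return score

total = sum(validate(s) for s in scores.values())
print(f"Total platform score: {total}")  # 2.5 in this toy example
```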

Coding reliability

The second and third authors independently scored the platforms in three rounds (five platforms analysed in each round, for a total of 15 out of the 19 final platforms). Inter-judge agreement was 83.3% in the first round, 91.5% in the second, and 93.5% in the last. In all rounds, disagreements were discussed until consensus was reached.
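A minimal sketch of how such percent agreement can be computed, assuming agreement was counted per coded category; the toy scores below are invented and merely chosen so that the result mirrors a first-round-like figure.

```python
def percent_agreement(coder_a: list, coder_b: list) -> float:
    """Share of items on which two coders assigned the same score."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

# Toy data: two coders scoring 12 categories of one platform (invented values).
coder_a = [1.0, 0.75, 0.5, 0.0, 1.0, 0.25, 0.5, 0.75, 1.0, 0.0, 0.5, 1.0]
coder_b = [1.0, 0.75, 0.5, 0.25, 1.0, 0.25, 0.5, 0.75, 1.0, 0.0, 0.25, 1.0]

print(f"{percent_agreement(coder_a, coder_b):.1f}%")  # 83.3%
```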

Data analyses

We conducted descriptive analyses of the platforms based on our coding and scoring to rank them and derive conclusions. The coding book, an overview of all scorings and the computed frequencies can be found in Appendices C, D, and E.

Results

RQ1. What are the general features of online rubric platforms?

Table 2 presents the distribution of frequencies associated with these features across the different platforms. To answer this RQ, we analysed the 19 platforms in terms of the following eight features.

Table 2. Frequencies of the platforms’ properties.

First, regarding the type of institutions supporting the platforms, 16 platforms (84.21%) were supported by private entities with only three platforms from public providers (15.79%). The public entities behind these three platforms were research centres from universities. Specifically, the (1) University of Pittsburgh’s Learning Research and Development Centre, the (2) Centre for Research on Learning ALTEC from the University of Kansas, and the (3) MyLO MATE Team (part of the e-learning community of practice) of the University of Tasmania. Consequently, the majority of the platforms were provided by private entities associated with education.

Second, we analysed the type of access to the platforms, categorising them into three types: fully free, free basic with paid ‘pro’ upgrades, and paywall-exclusive. Six platforms were entirely free. Nine platforms offered a basic free version, with the option for a ‘pro’ upgrade providing additional features like more storage, improved sharing, and unlimited rubric creation (details in Appendix B). Four platforms required a subscription. Overall, 79% of platforms provide some level of free access.

Third, in terms of the aims the platforms pursue, the largest share (n = 9) explicitly aimed to create and provide rubrics. Another six platforms positioned themselves as classroom management tools or, more broadly, as components of learning management systems. The remaining four platforms focused on peer assessment. Consequently, rubrics were not the primary focus of ten of the platforms, illustrating their integration into broader instructional practices beyond standalone applications.

Fourth, we analysed whether the platforms were available via web browsing and/or as an application for mobile use. We found that most platforms (n = 14) were only available via web browsing, whereas three platforms (iRubric App, Rubric Scorer and Rúbrica Marcador) were only available as apps. The remaining two platforms (Additio, Canvas) were available in both web and app format. Importantly, we found that the features were similar between platforms offering both types of access (web browsing and app) and those offering a single type (web browsing or app). The main difference seemed to be that the app versions were adapted for mobile device screens; in terms of rubric design and implementation features, they were similar to the web browsing versions.

Fifth, we investigated the app repositories for the five platforms with app versions. Of these, three were available on the Android Market (Canvas, iRubric App, Rúbrica Marcador), one was found in the App Store (Rubric Scorer), and one was accessible in both the Android Market and the App Store (Additio). This distribution indicates a potential limitation: only one platform was multiplatform, suggesting that the use of these apps may require students and teachers to possess compatible mobile devices.

Sixth, in assessing interface usability, 12 of the 19 platforms were deemed easy to use, while the remaining seven were classified as difficult. This evaluation was anchored in Jakob Nielsen’s 10 general principles for interaction design (Nielsen Citation1994). Criteria assessed for this categorisation are detailed in Appendix A. Our findings suggest that the majority of platforms offered user-friendly interfaces, a factor vital for effective adoption in educational environments. Key usability challenges identified include navigation difficulties and complex, unstructured interfaces. Conversely, platforms that excelled in these areas were identified as easy to use.

Seventh, in terms of data saving capabilities, over half of the platforms (n = 10) allowed users to save created rubrics online. Additionally, seven platforms offered both online saving and download options for rubrics, while two platforms only permitted downloads without online saving. This indicates that most platforms are primarily designed for, or at least support, online use.

Lastly, we assessed the documentation and help options available on the platforms. We distinguished between technical support availability for troubleshooting and the presence of different documentation options, namely video tutorials, text explanations and/or audio explanations. We found that two platforms lacked both documentation and help options. Six platforms offered one of these support features, five provided two, and another six had three features available. Notably, none of the platforms encompassed all four support features. Video and text explanations were the most common forms of assistance, suggesting these are the most sought-after types of support. Overall, nearly all platforms offered some level of help or documentation, indicating an awareness of the need for user support, particularly through text and video guides.

RQ2. What features do the platforms offer for the design of rubrics?

Table 3 presents the distribution of frequencies associated with these features across the different platforms. To answer this RQ, we analysed the platforms in terms of the following eight features. First, regarding the layout, most platforms (17 out of 19) exclusively use a table format for rubrics, with criteria in rows and performance levels in columns. Only EduFlow and Peerceptiv offer alternatives: EduFlow uses separate tables per criterion with levels in rows, while Peerceptiv uses independent rating scales per criterion, both focusing on peer assessment. This shows a general trend towards traditional table layouts with limited variation. See Appendix E for visual comparisons of these innovative and traditional approaches.

Table 3. Frequencies of rubric design features of the platforms.

Second, regarding the number of performance levels, three options were identified. First, six platforms allowed users to add as many performance levels as needed. Second, 11 platforms provided the option to select the number of performance levels within a certain range. Notably, these ranges varied significantly across platforms. For instance, Blackboard Learn, Kritik and Super Rubric permitted a range from 1 to 4 performance levels, whereas Additio and Rúbrica Marcador allowed for up to 20 performance levels. Lastly, two platforms prescribed a fixed number of performance levels: Rubric Builder with four and Mobious SLIP with five.

Third, concerning the labels of performance levels, 14 platforms (73.7%) allowed users to edit the labels as desired. This flexibility enables users to name the different performance levels in a manner that best aligns with instructional objectives and learning requirements. Notably, two platforms, Kritik and Super Rubric, in addition to offering customisation, suggested labels for the rubric designer (e.g. beginning, developing, achieving and mastering). Conversely, three platforms (Mobious SLIP, Peerceptiv and Rubric Builder) implemented fixed labels. Among these, Peerceptiv and Rubric Builder utilised quantitative labels (Levels 1–4), while Mobious SLIP employed qualitative labels (very poor, poor, fair, good and excellent). Overall, the majority of platforms provided the flexibility to adapt the labels of performance levels to meet the needs of teachers and students.

Fourth, regarding performance level descriptions, all but one platform (n = 18; 94.7%) provided unlimited space to write the specifications of each performance level, while the remaining platform offered limited space. This feature makes the creation of the rubric flexible and accurate for each teacher and subject. Therefore, no platform imposed compulsory fixed descriptions for the performance levels; rather, all platforms were flexible.

Fifth, in regard to the number of assessment criteria, the majority of platforms (n = 13; 68.4%) provided users with the opportunity to add as many assessment criteria as necessary. In contrast, six platforms imposed a range for the inclusion of assessment criteria. As we found with performance levels, these ranges were quite heterogeneous. Two platforms offered a narrow range (1–4) (i.e. Blackboard Learn, Super Rubric), two others a medium range (1–10) (i.e. Rúbrica Marcador, iRubric App), and the remaining two a wide range (1–30 or 1–20) (i.e. Additio, Rubric Scorer). In sum, the tendency was to offer considerable flexibility when it comes to assessment criteria.

Sixth, concerning labels for the assessment criteria, most platforms (15 out of 19) permit users to customise assessment criteria labels, supporting the creation of rubrics for various subjects and complexities. Four platforms, in addition to label customisation, provide suggestions to guide assessment focus, such as method justification in math or idea clarification in language, enhancing decision-making on what to assess. This demonstrates significant flexibility in criteria labelling across platforms.

Seventh, regarding the option to include qualitative feedback, while the majority of platforms (11 out of 19) lack a qualitative feedback feature, eight offer it through various methods. EduFlow and Rubric Scorer enable comments next to the rubric, while MyLo Rubric and SmartRubric incorporate a dedicated column for written feedback per criterion. Stile uniquely allows multimedia feedback, requiring online use, and Peerceptiv introduces an open question field for feedback or self-reflection. This variation highlights opportunities for enhancing rubric platforms with more formative, qualitative feedback options.

Eighth and last, we investigated whether the platforms allowed the user to add an explicit scoring strategy to the rubric. In the majority of platforms (n = 12; 63.2%) the user can decide if the score is explicitly included in the performance levels or not. Five platforms lack an option for explicit scoring strategies. Meanwhile, two platforms, specifically Blackboard Learn and Kritik, mandate an explicit scoring strategy within the rubric. Here, users have the freedom to assign specific scores to each assessment criterion; however, this scoring information is obligatory and cannot be omitted, though users may opt to input a ‘0’ to nullify its impact. Overall, platforms predominantly provide users with the option to decide on the visibility of scoring to students.

RQ3. What features do the platforms offer for the implementation of rubrics?

Table 4 presents the distribution of frequencies associated with these features across the different platforms. To address RQ3, our analysis focused on three distinct features of the platforms: (a) hybrid implementation, (b) online cocreation and (c) assessment types.

Table 4. Frequencies of implementation of rubrics in the platforms.

First, regarding hybrid implementation, we found that eight platforms (42.11%) allowed only offline implementation, five (26.32%) allowed only online implementation and six (31.58%) allowed both offline and online implementation. Hence, a majority of the rubric platforms allowed online implementation, which is crucial in online education environments. Unfortunately, only a small number of platforms support both types of implementation, which is essential for flexible use in real classrooms where internet access might be problematic.

Second, regarding online rubric cocreation, we found that only four platforms (Additio, iRubric, Smart Rubric and Stile) supported this feature. Hence, the vast majority of platforms (78.94%) supported only one person (i.e. account) designing the rubric, without providing the option to design in a collaborative manner. This is an area for improvement: letting the teacher opt in or out of such collaboration would be a valuable feature, as cocreating rubrics with colleagues and students can be beneficial.

Third, regarding assessment types, we analysed whether platforms focused on one or more specific types of assessment (i.e. self-assessment, peer-assessment, or teacher assessment). Seven platforms (36.84%) did not specify for which type of assessment the rubrics were meant to be used. Ten platforms (52.63%) focused on one type of assessment, and only two platforms (10.53%) covered all three types of assessment (i.e. the most flexible use). Of the ten platforms that specified one assessment type, six were constructed for teacher assessment, four focused mainly on peer-assessment, and no platform focused specifically on self-assessment. In conclusion, when platforms specified an assessment type, it was mostly a single type, and that type was most often teacher assessment.

RQ4. Which online rubric platforms are the most effective for educational use?

To identify the most effective online rubric platforms, we employed a scoring system based on the three critical aspects analysed in the previous RQs: (1) Features of Websites and Apps, focusing on usability and technical features; (2) Rubric Design, assessing the flexibility and depth of rubric customisation options; and (3) Implementation, evaluating the platforms’ support for both online and offline use as well as collaborative features. Scores for each aspect were computed and subsequently aggregated to reach a global score for each platform (see Table 5). We will first examine the individual aspects before presenting the overall scores.
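To make the aggregation just described concrete, the sketch below sums the three aspect scores into a global score and ranks platforms by it. The platform names and values are purely illustrative placeholders; the actual scores are reported in Table 5 and Appendix D.

```python
# Hypothetical aspect scores for three platforms; the real values appear in
# Table 5 and Appendix D.
aspect_scores = {
    "Platform A": {"web_app": 5.0, "design": 6.0, "implementation": 3.0},
    "Platform B": {"web_app": 3.0, "design": 6.5, "implementation": 3.0},
    "Platform C": {"web_app": 4.5, "design": 4.0, "implementation": 1.5},
}

# Global score = sum of the three aspect scores; rank from highest to lowest.
global_scores = {name: sum(parts.values()) for name, parts in aspect_scores.items()}
ranking = sorted(global_scores.items(), key=lambda item: item[1], reverse=True)

for rank, (name, score) in enumerate(ranking, start=1):
    print(f"{rank}. {name}: {score}")
```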

Table 5. Total scores of the platforms.

Regarding Features of websites and apps, the scores ranged from 1.5 to 5.25, with an average score of 2.92. The highest-scoring platform was Additio, followed by Canvas and then Rubric scorer. These three were more accessible and easier to use than the rest.

Regarding Rubric design, the scores ranged from 2.75 to 6.5, with an average score of 4.79. The highest-scoring platform was Stile, followed by Smart Rubric and Super Rubric. These three platforms showed the highest level of customisation in the design of rubrics. This type of flexibility allows the rubric designers to employ more performance levels, label those levels in different ways, or make additional edits to the rubrics.

Regarding Implementation, the scores ranged from 0 to 3, with an average score of 0.97. The two highest-scored platforms were Additio and Stile. These two platforms were the most advanced in offering different types of implementation (online and offline), cocreation of rubrics and allowing different types of assessment.

Finally, regarding the global scores, these ranged from 9.5 to 22.5. The highest-scoring platforms were Additio and Stile: each ranked first for either Features of websites and apps or Rubric design, and they were tied in terms of Implementation. Another highly scored platform was Rubric scorer. Finally, there is a group of seven platforms with scores ranging from 17.5 to 19.5.

RQ5. What erubrics features are supported by the platforms?

As can be seen in Table 6, to answer this research question we explored the nine features proposed in Table 1 in the theoretical framework. The main threshold criterion for coding the platforms in terms of erubric features was whether the platform supported online access to the rubric, since, to be considered an erubric, the instrument needs online capabilities to integrate the rest of the features. Eleven platforms supported online access to the rubric.

Table 6. Screening of the features supported in the platforms for the design and implementation of erubrics.

When it comes to the rest of the features, a clear pattern emerges: most of the features are not supported by most of the platforms. In fact, not a single platform supported all the features. The only, so to speak, foundational feature is that the 11 platforms allow for rubric design customisation. For the rest of the features, however, the landscape is more varied. The two most common features were online peer assessment and self-assessment, and analytics and reporting. Others, such as gamification, integration with digital tools, or adaptive learning references, are largely unavailable.

Discussion

Our aim was to review the existing online rubric platforms to analyse their main features and extract conclusions for researchers, teachers, and designers. We explored five research questions (RQs) to evaluate the characteristics of the 19 selected platforms.

RQ1. What are the general features of online rubric platforms?

Our results showed that most platforms are private enterprises; two thirds are behind a paywall, either entirely or for full functionality; around half were fully devoted to rubrics, while the other half combined them with other assessment interventions or belonged to a larger management system; most are only available via web browsers, and the five that have an app are largely represented on the Android market; most were easy to use; the most common way to save the rubric is online, which requires using the rubric within the platform; and there is room for improvement when it comes to documentation and help. Next, we discuss these aspects individually, combining them with recommendations and reflections on what they mean for the field.

The clear dominance of private initiative points to a strong influence of market dynamics. The use of paywalls suggests that many of them rely on user subscriptions for revenue. Importantly, we see two risks here. First, this subscription-based model introduces a barrier to equitable access (Stan, Dobrota, and Ciobotea Citation2022). Schools with lesser funding, particularly in economically disadvantaged areas, may be unable to afford these resources. This disparity could exacerbate existing educational inequalities, as affluent institutions gain further advantages through access to superior assessment tools (Gustafsson Citation2003). And second, the necessity for these platforms to remain profitable may inadvertently shift their focus from the quality and pedagogical soundness of the rubrics to features that are more marketable and profitable. While innovation driven by competition can lead to improvements, there is a risk that the core educational values might be compromised in pursuit of features designed more to attract users and subscriptions than to enhance educational outcomes (Regele Citation2020).

The versatility displayed by platforms integrating rubrics with other assessment tools or becoming part of larger systems reveals a flexible and dynamic ecosystem. This adaptability shows an understanding of the diverse preferences and needs of educators and institutions (Pillai et al. Citation2019), making these platforms more versatile and useful within the broader educational landscape, and it also shows how important rubrics have become in our educational systems (Panadero et al. Citation2024).

Regarding technology, the fact that these platforms mainly operate through web browsers and are prevalent on the Android market for apps reflects current trends. It aligns with the widespread use of internet connectivity accessible even from basic computers and the popularity of the Android operating system (StatCounter Citation2023), making these platforms easily accessible to a broad user base. Importantly, the use of web browsers for platform access is beneficial in terms of universal accessibility to education (Kurt Citation2019). Web browsers are ubiquitous across various devices and operating systems, making these platforms readily accessible to a wide range of users. This approach does not necessitate the downloading of specific apps, which can be advantageous for users with limited storage space or those using shared or public computers. However, relying solely on web browsers has its drawbacks (Parker Citation2021). Web applications might not offer the same level of user experience, performance, and offline accessibility as native mobile apps, particularly in the eyes of the users (Andersson Citation2018). Especially in situations where internet connectivity is unreliable or unavailable, the utility of web-based platforms can be significantly hindered. Thus, developing native apps for both Android and iOS can enhance user experience and accessibility. Mobile apps generally offer better performance and offline access, and are optimised for the device’s hardware. This can be particularly beneficial for users who primarily access content on mobile devices. Nevertheless, the development of native apps for multiple operating systems does involve higher costs and resources. This includes not only the initial development but also ongoing maintenance, updates, and support for different versions. For many organisations, especially smaller ones, these costs can be prohibitive. They have to weigh the potential benefits against the financial and resource investment required. It seems, then, that more public support for online rubric platforms would be recommended, to ensure more equitable access to these tools.

The positive aspect of most platforms being user-friendly is crucial for a good overall experience. This ease of use minimises barriers for educators and institutions (Kurt Citation2019), ensuring a smooth adoption process. However, the preference for saving rubrics online within the platform raises questions about potential limitations for offline use, something that should be considered, especially in settings with limited internet access.

Lastly, the areas identified for improvement in documentation and help serve as valuable pointers for future enhancements. Addressing these issues not only improves the platforms’ user-friendliness but also demonstrates a commitment to supporting users in maximising the benefits of these tools (Kurt Citation2019).

RQ2. What features do the platforms offer for the design of rubrics?

Our findings reveal a dominant preference for table-based rubrics, with EduFlow and Peerceptiv as notable exceptions offering alternative layouts. Performance levels varied widely across platforms, with some allowing unlimited levels and others offering a fixed or variable range. Most platforms provided flexibility in labelling performance levels, catering to diverse instructional needs. In describing performance levels, nearly all platforms allowed unlimited text, offering considerable flexibility. For assessment criteria, a majority permitted adding as many as needed, while others set a range, reflecting significant variability. Label customisation for assessment criteria was also widely supported, with a few platforms offering additional suggestions. On the qualitative feedback front, platforms were split; some included this feature, with diverse implementation methods, while others did not. Lastly, in terms of scoring strategies, options varied from user-defined scoring to fixed or non-displayed scores, reflecting a general trend towards customisable scoring in rubrics. Next, we discuss these findings in detail.

The prevalent use of table-based rubrics, as seen in most platforms, aligns with traditional rubric use (Brookhart Citation2018), offering familiarity and ease of understanding for users. EduFlow and Peerceptiv stand out with their innovative layouts, suggesting a growing trend towards diversifying rubric designs to cater to different pedagogical needs. These alternative formats may enhance engagement and clarity in assessment, fostering a more dynamic and intuitive evaluation process, but it is crucial that there is a pedagogical gain behind these innovations.

The variability in performance levels across platforms highlights a key consideration in educational assessment: the balance between standardisation and customisation. Platforms allowing unlimited levels afford educators the flexibility to tailor assessments to specific learning outcomes and student capabilities. However, this flexibility must be tempered with considerations of clarity and practicality, as too many levels can lead to confusion and difficulty in consistent grading (Humphry and Heldsinger Citation2014).

In a similar vein, labelling performance levels is another critical feature where flexibility seems paramount. Allowing educators to customise labels provides opportunities to align assessment criteria closely with instructional objectives and learning outcomes (Brookhart Citation2018). This adaptability is essential in addressing the diverse needs across educational contexts, ensuring that rubrics remain relevant and effective (Brookhart Citation2018).

In terms of qualitative feedback, the split in platform features reflects a broader debate in educational assessment between summative and formative purposes. Platforms incorporating qualitative feedback mechanisms acknowledge the importance of providing detailed, constructive feedback with rubrics, which is vital for learning and improvement (Andrade Citation2005; Wollenschläger et al. Citation2016). The platforms incorporating scoring options might help teachers to evaluate and students to see a clearer connection between their performance and the score, while enhancing scoring reliability (Jonsson and Svingby Citation2007). However, the varied implementation methods suggest a lack of consensus on the optimal approach to integrating feedback in digital rubrics, in line with the summative vs. formative debate (Wiliam Citation2011).

Finally, the variety of scoring strategies observed underscores the ongoing evolution in thinking about assessment in education. The move towards customisable scoring indicates a shift from traditional, rigid scoring methods towards more flexible, learner-centred approaches. This flexibility can empower educators to design assessments that are more aligned with learning objectives and student needs, potentially leading to more meaningful and accurate evaluations.

RQ3. What features do the platforms offer for the implementation of rubrics?

Our findings revealed a varied landscape in implementation strategies. A significant portion of platforms were geared either towards exclusive online or offline implementation, with only a few supporting the versatile hybrid approach, crucial for flexible educational environments. The concept of online cocreation, which fosters collaborative rubric development, was notably underrepresented, with most platforms limiting rubric creation to individual effort. This is particularly intriguing given the collaborative potential of rubrics in formative educational processes. Finally, regarding the types of assessments supported, there was a clear tendency for platforms to focus on a single assessment type, primarily teacher assessment. This shows a limited embrace of the multifaceted potential of rubrics, as very few platforms accommodated a comprehensive range of assessment types, including self, peer and teacher assessments. Next, we discuss these findings in detail.

The predominance of platforms dedicated to either online or offline implementation, with limited adoption of a hybrid model, reveals a gap in catering to the evolving needs of modern educational settings. Hybrid implementation is pivotal in an era where education straddles digital and traditional realms, offering necessary flexibility to educators and learners alike (Raes et al. Citation2019). This inflexibility in implementation could hinder the adaptability and responsiveness of educational practices to diverse learning environments.

The scarcity of platforms supporting online cocreation of rubrics is a noteworthy finding. Collaborative rubric development is not only a tool for assessment but also an educational strategy that can promote academic performance and self-regulated learning (e.g. Fraile, Panadero, and Pardo Citation2017). The limited focus on this aspect suggests a missed opportunity to harness the full pedagogical potential of rubrics. Encouraging more platforms to incorporate collaborative features could significantly enhance the formative aspects of educational processes.

Furthermore, the tendency of platforms to specialise in single types of assessment, primarily teacher-led, indicates a narrow interpretation of rubrics’ utility. This approach overlooks the richness and depth that multifaceted assessments, involving self and peer evaluations, can bring to the learning experience (Andrade, Du, and Wang Citation2008; Panadero et al. Citation2024). Broadening the scope to include diverse assessment types can provide a more holistic view of student learning and progress.

RQ4. Which online rubric platforms are the most effective for educational use?

Our findings revealed that platforms varied significantly in their offerings, with some standing out in specific areas. In terms of website and app general features, platforms like Additio and Canvas stood out, showcasing superior accessibility and ease of use. For rubric design, platforms such as Stile and Smart Rubric emerged as top contenders, offering remarkable customisation options that allow for a wide range of performance levels and labelling flexibility. When it came to implementation, Additio and Stile were again notable for their robust offerings in both online and offline implementations, cocreation of rubrics, and diverse assessment types. The cumulative global scores, which integrated all these aspects, highlighted Additio and Stile as the front runners, excelling in multiple dimensions. Additionally, platforms like Rubric scorer also scored highly, indicating a competitive field with several strong options available for different user needs and preferences. In conclusion, our analysis underscores the diverse strengths and specialties of various platforms but also reveals Additio and Stile as exemplary models at the current moment, combining superior functionality in design and implementation with user-friendly interfaces, thereby setting a benchmark for future developments in this evolving field.

RQ5. What erubrics features are supported by the platforms?

Our results lead to a clear conclusion: current online rubric platforms have yet to fully embrace the potential of erubrics and their implementation. Many of the features we propose (see Table 1) remain largely unexplored, suggesting a gap in the design and implementation of erubrics. This may stem from the absence of a clear, widely accepted definition of erubrics and of an understanding of the digital features that could significantly enhance their educational efficacy. Our study addresses this gap by offering both a precise definition of erubrics and a comprehensive list of features designed to unlock their full potential in educational settings. We hope that our findings will resonate with the three key stakeholders in the erubric ecosystem – teachers, researchers and platform designers – encouraging them to adopt and further explore our recommendations. We have elaborated a list of practical recommendations for these three actors, which can be found in Appendix F.

Limitations

Our study has several limitations. First, its findings are temporally bound as the analysis spans from Spring 2023 to March 2024, capturing only a transient view of the evolving online rubric platforms. This period saw some platforms discontinue and others change names, highlighting the sector’s fluidity. Second, without collaboration with platform designers, our study may lack depth and overlook the full potential and nuances of the platforms. Third, the absence of empirical testing with end-users like teachers and students means we did not evaluate the platforms’ practical usability and effectiveness in real educational settings. Finally, we deliberately excluded generative AI platforms, which, while advanced, do not focus on rubric construction and lack specialised features for it. However, the rapid advancement of this technology suggests potential future integration of AI functionalities into these platforms.

Conclusions

Our study of 19 online rubric platforms highlights a rapidly evolving landscape with varying features and designs catering to diverse educational needs. While these platforms demonstrate potential, they still lack in areas such as implementation flexibility, collaborative design and the integration of various assessment types. Notably, platforms like Additio and Stile perform well against several criteria, but the optimal choice depends on specific educational goals and rubric applications. The effectiveness of these platforms relies significantly on the user’s ability to design and implement effective rubrics, emphasising the need for continuous development in both platform design and user expertise. This research not only sheds light on the current state of online rubric utilisation but also sets the stage for future advancements in educational assessment technologies via a clear definition of erubric, a proposal of its features, and specific recommendations for teachers, researchers and designers (Appendix F).

Supplemental material


Disclosure statement

The authors declare that they have no conflict of interest regarding this manuscript.

Additional information

Funding

(1) German Federal Ministry of Education and Research, international and interdisciplinary network of educational researchers SeReLiDiS (Self-Regulated Learning in Digitized Schools). (2) Spanish National R + D call from the Ministerio de Ciencia, Innovación y Universidades (Generación del conocimiento 2020), Reference number: PID2019-108982GB-I00.

References

  • Ana, A., C. Yulia, Y. Jubaedah, M. Muktiarni, V. Dwiyant, and A. Maosul. 2020. “Assessment of Student Competence Using Electronic Rubric.” Journal of Engineering Science and Technology 15 (6): 3559–3570.
  • Andersson, L. 2018. “Usability and user experience in mobile app frameworks: Subjective, but not objective, differences between a hybrid and a native mobile application.” DIVA. https://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-372051
  • Andrade, H. 2005. “Teaching with Rubrics: The Good, the Bad, and the Ugly.” College Teaching 53 (1): 27–31. doi:10.3200/CTCH.53.1.27-31.
  • Andrade, H., and Y. Du. 2005. “Knowing What Counts and Thinking about Quality: Students Report on How They Use Rubrics.” Practical Assessment, Research and Evaluation 10 (4): 2–11.
  • Andrade, H., Y. Du, and X. Wang. 2008. “Putting Rubrics to the Test: The Effect of a Model, Criteria Generation, and Rubric-Referenced Self-Assessment on Elementary School Students’ Writing.” Educational Measurement: Issues and Practice 27 (2): 3–13. doi:10.1111/j.1745-3992.2008.00118.x.
  • Andrade, H. L. 2023. “What is Next for Rubrics.” In Advances in Educational Marketing, Administration, and Leadership Book Series, 314–326. Pennsylvania: IGI global. doi:10.4018/978-1-6684-6086-3.ch017.
  • Beach, P., and D. Willows. 2014. “Investigating Teachers’ Exploration of a Professional Development Website: An Innovative Approach to Understanding the Factors That Motivate Teachers to Use Internet-Based Resources.” Canadian Journal of Learning and Technology [La Revue Canadienne de L’apprentissage et de la Technologie] 40 (3): 1–16. https://www.learntechlib.org/p/148504/
  • Brookhart, S. M. 2013. How to Create and Use Rubrics for Formative Assessment and Grading. Alexandria, VA: ASCD.
  • Brookhart, S. M. 2018. “Appropriate Criteria: Key to Effective Rubrics.” Frontiers in Education 3 (22): 1–12. doi:10.3389/feduc.2018.00022.
  • Chan, Z., and S. Ho. 2019. “Good and Bad Practices in Rubrics: The Perspective of Students and Educators.” Assessment & Evaluation in Higher Education 44 (4): 533–545. doi:10.1080/02602938.2018.1522528.
  • Dawson, P. 2017. “Assessment Rubrics: Towards Clearer and More Replicable Design, Research and Practice.” Assessment & Evaluation in Higher Education 42 (3): 347–360. doi:10.1080/02602938.2015.1111294.
  • Fraile, J., E. Panadero, and R. Pardo. 2017. “Co-Creating Rubrics: Useful or Waste of Time? The Effects of Establishing Assessment Criteria with Students on Self- Regulation, Self-Efficacy and Performance.” Studies in Educational Evaluation 53: 69–76. doi:10.1016/j.stueduc.2017.03.003.
  • Grainger, P., and K. Weir. 2016. “An Alternative Grading Tool for Enhancing Assessment Practice and Quality Assurance in Higher Education.” Innovations in Education and Teaching International 53 (1): 73–83. doi:10.1080/14703297.2015.1022200.
  • Gustafsson, J. E. 2003. “What Do we Know about Effects of School Resources on Educational Results?” Swedish Economic Policy Review 10 (3): 77–110. https://www.government.se/contentassets/25c599d2a5a241b98255e7650f3da9ec/jan-eric-gustafsson-what-do-we-know-about-effects-of-school-resources-on-educational-results/
  • Humphry, S. M., and S. A. Heldsinger. 2014. “Common Structural Design Features of Rubrics May Represent a Threat to Validity.” Educational Researcher 43 (5): 253–263. doi:10.3102/0013189X14542154.
  • Jones, L., B. Allen, P. Dunn, and L. Brooker. 2016. “Demystifying the Rubric: A Five-Step Pedagogy to Improve Student Understanding and Utilisation of Marking Criteria.” Higher Education Research & Development 36 (1): 129–142. doi:10.1080/07294360.2016.1177000.
  • Jonsson, A., and G. Svingby. 2007. “The Use of Scoring Rubrics: Reliability, Validity and Educational Consequences.” Educational Research Review 2 (2): 130–144. doi:10.1016/j.edurev.2007.05.002.
  • Kurt, S. 2019. “Moving toward a Universally Accessible Web: Web Accessibility and Education.” Assistive Technology 31 (4): 199–208. doi:10.1080/10400435.2017.1414086.
  • Nielsen, J. 1994. Usability Engineering. San Francisco, CA: Morgan Kaufman.
  • Panadero, E., A. Jonsson, L. Pinedo, and B. Fernández-Castilla. 2024. “Effects of Rubrics on Academic Performance, Self-Regulated Learning and Self-Efficacy: A Meta-Analytic Review.” Educational Psychology Review 35 (4): 113. doi:10.1007/s10648-023-09823-4.
  • Panadero, E., and A. Jonsson. 2013. “The Use of Scoring Rubrics for Formative Assessment Purposes Revisited: A Review.” Educational Research Review 9: 129–144. doi:10.1016/j.edurev.2013.01.002.
  • Panadero, E., and A. Jonsson. 2020. “A Critical Review of the Arguments against the Use of Rubrics.” Educational Research Review 30 (1): 100329. doi:10.1016/j.edurev.2020.100329.
  • Parker, E. 2021. “Native or web-based? Selecting the right approach for your mobile app.” UX Magazine. https://uxmag.com/articles/native-or-web-based-selecting-the-right-approach-for-your-mobile-app.
  • Pillai, K. R., P. Upadhyaya, A. Balachandran, and J. Nidadavolu. 2019. “Versatile Learning Ecosystem: A Conceptual Framework.” Higher Education for the Future 6 (1): 85–100. doi:10.1177/2347631118802653.
  • Popham, W. J. 1997. “What’s Wrong and What’s Right with Rubrics.” Educational Leadership 55 (2): 72–75. https://eric.ed.gov/?id=ej552014.
  • Postmes, L., R. Bouwmeester, R. de Kleijn, and M. van der Schaaf. 2023. “Supervisors’ Untrained Postgraduate Rubric Use for Formative and Summative Purposes.” Assessment & Evaluation in Higher Education 48 (1): 41–55. doi:10.1080/02602938.2021.2021390.
  • Raes, A., L. Detienne, I. Windey, and F. Depaepe. 2019. “A Systematic Literature Review on Synchronous Hybrid Learning: Gaps Identified.” Learning Environments Research 23 (3): 269–290. doi:10.1007/s10984-019-09303-z.
  • Regele, M. D. 2020. “Pedagogy and Profit? Efforts to Develop and Sell Digital Courseware Products for Higher Education.” American Educational Research Journal 57 (3): 1125–1158. doi:10.3102/0002831219869234.
  • Stan, M., E. M. Dobrota, and M. Ciobotea. 2022. “Subscription-Based Models and Online Learning Platforms.” Across 6 (1): 78–88. http://www.across-journal.com/index.php/across/article/view/131
  • StatCounter. 2023. “Global market share held by mobile operating systems from 2009 to 2023, by quarter [Graph].” Statista, October 4. Retrieved November 23, 2023. https://www.statista.com/statistics/272698/global-market-share-held-by-mobile-operating-systems-since-2009/
  • Tierney, R., and M. Simon. 2004. “What’s Still Wrong with Rubrics: Focusing on the Consistency of Performance Criteria across Scale Levels.” Practical Assessment, Research, and Evaluation 9 (2): 2. doi:10.7275/jtvt-wg68.
  • Ulanday, M. L., Jane, R. S. Centeno, Z. J. Cristina, M. Bayla, and M. C. 2021. “Access, Skills and Constraints of Barangay Officials towards the Use of Information and Communications Technology (ICT).” International Journal of Knowledge Content Development & Technology 11 (2): 37–54. doi:10.5865/IJKCT.2021.11.2.037.
  • Wiliam, D. 2011. “What is Assessment for Learning?” Studies in Educational Evaluation 37 (1): 3–14. doi:10.1016/j.stueduc.2011.03.001.
  • Wollenschläger, M., J. Hattie, N. Machts, J. Möller, and U. Harms. 2016. “What Makes Rubrics Effective in Teacher-Feedback? Transparency of Learning Goals is Not Enough.” Contemporary Educational Psychology 44–45: 1–11. doi:10.1016/j.cedpsych.2015.11.003.
  • Yu, Z. 2021. “The Effects of Gender, Educational Level, and Personality on Online Learning Outcomes during the COVID-19 Pandemic.” International Journal of Educational Technology in Higher Education 18 (1): 14. doi:10.1186/s41239-021-00252.