Using rubrics creates a shared understanding of expectations and therefore strengthens your assessment.

A rubric clarifies expectations for an assessment task by helping the course team, students and markers develop a shared understanding of a task’s most important aspects and the standards expected.

You can use rubrics to prepare assessment tasks, mark them and provide feedback. Students can also use rubrics to guide their completion of assessment tasks.

A rubric is generally a table of criteria and standards for a task.

  • Criterion – a property or characteristic by which the quality of something may be judged.
  • Standard – a definite level of achievement aspired to or attained. Standards specify levels of quality (or achievement, or performance) for each criterion.

See the Assessment Policy [PPL 3.10.02] for more information.

Criteria and standards explained

Rubrics clarify the expectations of a task and should prevent surprises for everyone. The criteria in the rubric should relate to the learning outcomes/objectives being assessed and reflect the learning activity and assessment information provided to students in their Electronic Course Profile (ECP).

Specifying criteria in the rubric identifies the properties of the task that are of interest and value; it does not make assumptions about the actual quality of the completed work.

The quality of that task is determined by judging the student’s achievement measured against the standards.


Types of rubrics

Most assessment at UQ uses analytic rubrics, which provide explicit standards for each criterion.

  • Development of core argument
    • Developing standard – Presents undeveloped arguments.
    • Competent standard – Develops routine arguments or decisions.
    • High standard – Significant and sophisticated insights in identifying, generating and synthesising competing arguments or perspectives.
  • Interaction skills
    • Developing standard – Evidence of efforts to develop and use basic interactive skills, such as listening to and contributing ideas.
    • Competent standard – Evidence of skill in offering ideas, listening, responding to and supporting others’ ideas and initiatives.
    • High standard – Evidence of skill in communicating at both emotional and intellectual levels, establishing rapport and recognising others’ viewpoints.
  • Contributions to group planning
    • Developing standard – Evidence of recognition of some steps essential to working towards a solution in group problem-solving.
    • Competent standard – Evidence of acknowledgment and incorporation of others’ ideas when planning group problem-solving tasks.
    • High standard – Evidence of proactive leadership through reflecting on and learning from the group problem-solving experience and developing suggestions for enhanced group performance.

Rubrics may also be holistic, with each standard containing a number of implied criteria (e.g. relevance, critical thinking, logic of organisation, presentation).

Some examples:

  • The work is of very high quality throughout; there is clear evidence of mastery over the subject matter; the material is organised logically; the articulation between various parts is smooth and mutually supportive, and the whole work is presented nearly faultlessly.
  • The work addresses the specified issue; it shows a reasonable level of critical thinking and discrimination; the development provides adequate supporting arguments with reasons and uses factual data correctly; the work is focussed but lacks the originality and creativity that would lift it to A level, and the work is free of errors in grammar, punctuation and spelling.
  • The work contains mostly relevant material with some that is marginal; the level of organisation could be improved with many possible connections between content and parts not made; the general approach is reproductive with not a great deal of evidence of creative thought or synthesis; the technical production is reasonably competent, but a lack of clarity in expression sometimes produces ambiguity.

Source: Sadler, D.R. (2005) Interpretations of criteria-based assessment and grading in higher education, p.186

Holistic rubrics are generally simpler and quicker to use and can increase inter-marker reliability, while analytic rubrics provide a more nuanced analysis of student performance and more detailed feedback.


A rubric design process

Step 1: Clarify your assessment

  • What ability or knowledge do you want students to demonstrate? (i.e. what intended learning outcomes are you assessing?).
  • What are you asking students to do? (i.e. what is the assessment task?).

Step 2: Identify specific observable attributes

  • Consider what attributes you will be able to identify (both those you want to see and those you do not) in your students’ product, process or performance.
  • Specify the characteristics, skills or behaviours that you will be looking for, as well as common mistakes you do not want to see.

There are two main ways to think about these attributes: 'task-based' and 'skill-based'.

  • Task-based follows the aspects of the task (e.g. introduction, results, conclusion):
    • easier to match to students’ work for writing and marking
    • can lead to over separation of parts of the task and micro-marking (e.g. marking an introduction, then separately marking the first paragraph)
    • can feel less connected to higher-level aspects of the task.
  • Skill-based (e.g. written expression, argument, referencing):
    • can be harder to mark (e.g. varied expression quality) and more conceptual.
    • more focused on the skills and outcomes of the assessment.

Step 3: Brainstorm excellent, passable and not acceptable characteristics

Consider the attributes you have identified in step two and categorise them into excellent, passable and not acceptable.

  • What standard would you expect for a top mark?
  • What standard do you expect to pass?
  • What characteristics are not acceptable?
  • If desired, fill in the gap between passable and excellent.

Step 4: Holistic

Use these characteristics and attributes to develop a narrative for the collective attributes that describe each level of the expected standard of work. Start with the top, passing and not-acceptable levels, and then fill in gaps as required.

Step 5: Analytic

Refine your criteria. Ideally, the criteria should be:

  • independent (i.e. not reassessing the same attribute across multiple criteria)
  • singular (each criterion should assess a single attribute – i.e. avoid situations where you would want to give both high and low marks for the same criterion)
  • minimal (students and assessors struggle to make judgements with too many criteria)
  • clear and concise for both assessors and students, and
  • written without reference to quality (that is what standards are for).

In practice, you need to find the balance of these ideals that works best for your students.

For each criterion:

  • Start with your passing standard, then work on the exceptional standard. After you have these two, fill in any others you may need.
    • The number of standards should reflect the complexity and weighting of the task.
    • Each standard needs to be clearly delineated from the others for both your markers and students.
  • Identify observable elements of performance. Standards are easiest to use when they do this, but these can be difficult to write.
    • For example, it is easy to write standards ranging from ‘decent’ to ‘good’ and ‘excellent’, but it is difficult for students and markers to interpret the differences.
    • Writing standards that identify the difference with terms like ‘convey meaning with no significant errors of expression’ through to ‘communicates effectively with clear and concise language suitable for the audience’ may be easier to use in marking.
  • Focus on quality (how well) over quantity (how much) to create more effective standards. 
    • If a quantity is important, state what is expected in the task brief, not the rubric. In the rubric, judge quality (e.g. for references: relevance, reputable sources, suitable diversity), not the number.
  • Review the UQ Grading System (see PPL 3.10.02 Assessment Procedures, 7.1 Grading system) when developing your standards. Connecting the language in this scheme to your rubrics can help align expectations and make grading easier.
  • Consider using some of the taxonomies of learning to help write your standards.

Step 6: Test and moderate your rubric

  • Try your rubric with samples of student work to check that the outcome from the rubric reflects the quality of the work.
  • Collaborate with your assessors and peers to test and review your rubric. Are there points of confusion or disagreement in using your rubric for making judgements?

Step 7: Revise the rubric, as necessary

  • Be prepared to reflect on the effectiveness of the rubric and revise it before its next implementation.

Tips for improving your rubric design

  • Use the smallest rubric you can.
    • More than six criteria are hard to mark, as each criterion and standard represents a decision that markers must make for each submission.
    • Counting errors is also troublesome, as minor errors may be counted in the same way as major errors.
  • Apply thresholds for not meeting requirements as specified (e.g. not having enough references, outside word requirements).
  • If you are using marking bands (multiple possible marks per standard) keep the bands small.
    • For example, three points are easy to justify (just met the standard, solidly in the standard, at the top of the standard), but justifying differences across 10 points can be tricky.
  • Use only one fail band, if possible, to cover underperformance and missing work. This simplifies writing and marking.

This process was adapted from Smith, Sadler & Davies (2010) and Mertler (2001) with information compiled from various sources (Airasian, 2000 & 2001; Montgomery, 2001; Nitko, 2001; Tombari & Borich, 1999).


How to use rubrics effectively

Effective use of rubrics relies as much on avoiding common mistakes as it does on implementing positive practices. The key point to remember is that even though the rubric makes sense to you, it may not make sense to your students.

  • Use rubrics as a learning device, not just an assessment device.
    • Ask students to engage with rubrics to make and record judgments about performances and to understand what those judgments were based on.
    • If students can use the rubric process to accurately and realistically judge performances (including their own) against achievement standards, they should be able to perform better as a result.
  • Create opportunities in class to show students examples of work of varying standards.
    • Using an assessment task like the one they will be assessed on, have students analyse these examples to identify the criteria and how they connect with judgements about the quality of the work.
    • Students should then be able to discern variations between the good and poor examples of performance.
  • Have students devise their own rubric based on their observations in class.
    • This can be done as a discussion activity for the whole class, in pairs, or as individuals.
    • Share your rubric with students and talk through the differences between their ideas for criteria and standards and yours.
    • Remember – you are the expert, and the task is to help them connect their understanding and yours.
  • Use the rubric to frame the feedback you give.
    • Rubrics are essentially qualitative appraisals in that performance is judged by selecting the pattern of descriptors in the rubric that best matches the qualities of a performance.
    • By returning the completed rubric to students, they can get diagnostically useful feedback on their work.

Adapted from Smith, C., Sadler, R. & Davies, L. (2010). Assessment Rubrics. GIHE, Griffith University.


References

Airasian, P. W. (2000). Assessment in the classroom: A concise approach (2nd ed.). Boston: McGraw-Hill.

Airasian, P. W. (2001). Classroom assessment: Concepts and applications (4th ed.). Boston: McGraw-Hill.

Chase, C. I. (1999). Contemporary assessment for educators. New York: Longman.

Mertler, Craig A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25). Retrieved February 14, 2015 from https://scholarworks.umass.edu/pare/vol7/iss1/25.

Montgomery, K. (2001). Authentic assessment: A guide for elementary teachers. New York: Longman.

Moskal, B. M. (2000). Scoring rubrics: What, when, and how? Practical Assessment, Research & Evaluation, 7(3). Available online: https://scholarworks.umass.edu/pare/vol7/iss1/3/

Nitko, A. J. (2001). Educational assessment of students (3rd ed.). Upper Saddle River, NJ: Merrill.

Sadler, D. R. (2005). Interpretations of criteria-based assessment and grading in higher education. Assessment & Evaluation in Higher Education, 30(2), 175–194.

Tombari, M. & Borich, G. (1999). Authentic assessment in the classroom: Applications and practice. Upper Saddle River, NJ: Merrill.

Trice, A. D. (2000). A handbook of classroom assessment. New York: Longman.


Assessment workshops

Each semester, ITaLI offers several 2-hour workshops to support academic teaching staff in designing and implementing assessment for their courses.

Designing assessment

Implementing assessment: marking, moderation and feedback

Need help?

ITaLI offers personalised support services across various areas, including creating and using rubrics in your assessment.