Arriving at National Benchmark data for Initial Education Courses in the FE and Skills Sector


There are no National Benchmarks for Initial Education Courses in the FE and Skills Sector, according to Ofsted.

The National Benchmark data used by the University of Westminster consortium of colleges were adapted from the City & Guilds benchmarks for DTLLS. These seemed to be the only national benchmarks we could find for ITE, but we may be proved wrong!

  • Success 69%
  • Retention 76%
  • Achievement 91%

National success rates are released by the Data Service, which is a government site.

Compass CC, which developed Pro Achieve, then uses this data to produce a file of national averages which colleges import into Pro Achieve.

The City & Guilds benchmarks for DTLLS for 2012-13 are as follows:

  • Retention 76%
  • Achievement 91%
  • Success 69%
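In FE success-rate reporting, the success rate is conventionally the product of the retention rate and the achievement rate, and the three C&G figures above are consistent with that. A quick sanity check (a small illustrative script, not part of the original post):

```python
# Sanity check: success rate = retention rate x achievement rate
retention = 0.76    # proportion of starters who complete the course
achievement = 0.91  # proportion of completers who achieve the qualification
success = retention * achievement

print(f"Implied success rate: {success:.1%}")  # 0.76 * 0.91 = 0.6916, i.e. ~69%
```

So the 69% success benchmark follows directly from the other two figures, which is worth bearing in mind when debating whether any one of the three is "too low".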


What do you think of these benchmarks?

  • Are they too high?
  • Are they too low?
  • Are they about right?

Please comment on the TELL blog.

There is further general information below and some archived Ofsted information (but no numbers!).

FE and School Data guidance

The Ofsted Data Dashboard for further education and skills was released on 12 May 2014 and provides a snapshot of performance in a school, college or other further education and skills provider. The dashboard can be used by governors and members of the public to check the performance of the school or provider in which they are interested; FAQs are available. The Data Dashboard does not provide financial data about the college or provider.

To view the Data Dashboard, click Home & Search and enter the provider name into the search box.

Archived Ofsted guidance on benchmarking in the FE and Skills sector

There has been a surge of interest in benchmarking by training providers, partly as a result of government efforts to encourage a raising of standards to match those set by the dominant world players in training. The term ‘benchmarking’ is often misunderstood by those responsible for quality improvement and not used effectively to bring about improvements in performance. It is not simply knowing a national average rate for success or achievement and setting a target to be better than that rate. Such rates may be inappropriately low or high for a group of learners. A ‘benchmark’ is a standard against which activities can be measured. Providers considering the use of benchmarks need to decide on what aspects of their performance should be benchmarked and where they should look to decide on their benchmarks.

The point of any benchmarking is to provide a standard against which performance can be measured, with a view to improving current standards. Comparators could come from other sectors with comparable operations, products or services (such as further education, higher education or commercial training), from elsewhere within the provider’s own organisation (for example, a full-cost commercial training arm that operates differently), or from elsewhere in the further education sector (other training providers).


Particularly effective practice identified in inspections includes:

  • Establishing a commitment to quality improvement from the top of the organisation down. There is no point in benchmarking without that commitment, nor is there any point in selecting only those benchmarks which will ensure that the training provider looks good. Benchmarking is only worth the effort if the training provider is prepared to use the most successful, or the most envied, as a comparator. Used properly, benchmarking should provide an achievable target which stretches the training provider to the limits of its capabilities for the benefit of learners. This has been the key feature of the most improved providers.

  • Clearly establishing with staff who work in different areas where they are now in terms of performance so that when acceptable benchmarks are set there is a clear expectation of what size of ‘gap’ needs to be closed.

  • Gauging what this performance is like when compared to national average success rates (or retention, punctuality, etc.), and deciding whether the comparator national average is an acceptable one for your learners (for example, 90% clearly is; 50% is not). Many providers wrongly believe that performing at a poor national level is acceptable.
  • Remembering that some national averages may be a few years old and may not reflect current national performance, which has been rising in many areas – referred to as a ‘moving target’.

  • Not using minimum performance levels from funding bodies as a benchmark target to be simply ‘above’ – these levels often indicate performance that should be unacceptable to a provider and its learners, and falling below them could result in a loss of funding.

  • Using changes in performance as an internal benchmark. Here it is not the success rate of another course which would be used as the benchmark, but the improvement in its success rate. This makes the conventional justification for comparatively poor performance – ‘their trainees are different from ours’ – less valid. If hairdressing NVQ level 2 has raised its achievement rate from 55 per cent to 70 per cent over three years (+15 percentage points), then each course in the provider should be looking at how it can raise its achievement rate by the same amount (5 percentage points each year).

  • Participation in Peer Review and Development (PRD) group activity to arrive at and share benchmarks.

  • Using best practice elsewhere within the provider. The fact that one course or occupational area can do something to a particular standard should mean that the whole of the provider is potentially able to achieve those same standards. This could bring consistency in the approach to learners for different occupational areas. For example, each learner receiving a detailed course/NVQ guide and induction at the start of training, or having a one-to-one tutorial with a similar content on a monthly basis.

  • Monitoring inspection reports to see what the best providers are doing and benchmarking against desirable aspects of these providers, for example, having progression routes from entry level to foundation degree in colleges or from school link to advanced apprenticeship in work-based providers. In an occupational area (area of learning), deciding who staff have the most respect for and would like to emulate.

  • Within an area of learning, benchmarking against the most successful course or programme. Several very self-critical colleges used this to raise standards in schools, departments or faculties.

  • Monitoring Ofsted monitoring visit reports to look for examples of effective practice in raising standards, particularly where the judgement of significant progress is awarded (some quality managers go to the Ofsted reports home page every two weeks and look at the Learning and Skills reports published in that time).

  • Ensuring that all staff work towards measurable targets that contribute towards achievement of an overall target.

  • Looking inwards at the best practice of the provider: for example, if there is a grade 1 area within a provider, benchmarking other areas against it over a three-year development plan to reach the same performance. This has been a real driver of improvement in several colleges.

  • Setting strategic aspirational targets as a benchmark to work towards (again, using the performance of outstanding providers to work towards).

  • Benchmarking aspects of delivery to learners, for example, identifying aspects of teaching that make for a grade 1 lesson observation, tutorial or review.

  • For customer service aspects, looking at organisations outside the world of education and training that are well known for customer service, and applying their practice to aspects such as reception, human resources and IT services.

  • Setting benchmarks for responding to complaints against those seen in industry.

  • More problematic is benchmarking qualitative characteristics – such as the way in which staff relate to their learners or the helpfulness of induction procedures. Satisfaction rates from learner surveys have been used alongside the concept of a mystery shopper making enquiries (how quickly/well an enquiry is dealt with) or sampling practical activities (hair and beauty salons, restaurants).
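The internal-benchmarking arithmetic in the hairdressing example above (a benchmark course's total gain spread evenly across the development-plan period) can be sketched as a small calculation; the helper function name is my own, not from the original guidance:

```python
def improvement_target(start_rate, end_rate, years):
    """Percentage-point gain per year implied by a benchmark course's improvement."""
    total_gain = (end_rate - start_rate) * 100  # convert to percentage points
    return total_gain / years

# Hairdressing NVQ level 2: achievement rose from 55% to 70% over three years,
# so other courses in the provider would target the same annual gain.
per_year = improvement_target(0.55, 0.70, 3)
print(f"Target: +{per_year:.0f} percentage points per year")  # +5 per year
```

The point of benchmarking the *improvement* rather than the absolute rate is that it neutralises the "their trainees are different from ours" objection: every course starts from its own baseline.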

posted by Rebecca Eliahoo, Principal Lecturer (Lifelong Learning), University of Westminster, Westminster Partnership CETT Director (UoW), Teaching and Learning Fellow



3 thoughts on “Arriving at National Benchmark data for Initial Education Courses in the FE and Skills Sector”

  1. Retention – 76% – too low in my experience. Given that the courses are supported by SFE (albeit, sadly, only by loans) and sometimes sponsored by employers (mainly in-service), my retention % tends to be greater than 80%. Sometimes it has dipped, and this is usually due to the proportion of the cohort teaching in precarious forms of employment.

    Achievement 91% – about right. Most, if not all, of those who are retained should achieve. The courses are not designed as a sieve, but a ladder (metaphors borrowed from Geoff Petty). We should be aiming towards 100%.

    Success 69% – too low in my experience. For the same reasons as for retention, but also because once the student has been retained into the second year (progression), they are very likely to complete and pass, and unlikely to drop out. I would expect the success rate to be similar to the retention rate.
    All HEIs have submitted this data to Ofsted during this round with 3 year trend so it should be in the public realm and extractable.

    Overall I would say that the risks in using benchmarks for ITT are the heterogeneous nature of the cohorts year on year, the increasingly marginalized and voluntarist nature of the workforce who haven’t yet achieved a PGCE/Cert.ED, and the resultant dangers of using these sorts of performance targets for ITT teachers.
    Does this help?

    1. I’m inclined to think the rates suggested might be a tad on the low side. And of course it makes a big difference whether these are pre-service or in-service courses. In addition, extensions, and how intercalations are counted, will have an impact. I’d be interested in seeing what others think.


  2. This is an interesting and important discussion. We need to be clear about the distinction between benchmarks for the Education and Training sector as a whole, and those for ITE providers, which include those in the E and T sector offering C and G-validated programmes, those in the sector offering HEI-validated programmes, and HEI providers offering their own programmes.

    My understanding is that benchmarks for ITE would need to be based on averages of some sort, taken across ITE providers. To have any statistical value, they would need to be based on a large sample, and we would need to agree that the work of each provider surveyed is the same – that we are comparing like with like. Because I’m not confident that either of these two factors can be said to apply, benchmarks for ITE providers, it seems to me, will have little statistical value, and they should certainly not be used to assess the quality of particular ITE providers – this would be a statistical misuse of the data. There just aren’t enough providers for differences in measurements between providers in any one year (in achievement, for example) to have statistical significance. Further, different providers can argue with justification that their situations are different: they cater for different client groups, they offer different programmes (the courses, qualifications, and teaching practice placements are different even if the standards are the same, and now, in an employer-led system, even the standards are going to be different, if that isn’t a contradictory statement!).

    However, I think benchmark figures do have potential value to ITE providers as tools for identifying possible areas for improvement in their own provision. Benchmarks have formative, but not summative value. This distinction is perhaps what some sector managers, and perhaps some politicians and inspectors, don’t clearly understand.

    When my institution was inspected last year, the OFSTED team accepted and approved our assertion that our programme should not be aiming to deliver a consistent training service across our partnership, because the sector is changing so fast, and becoming rapidly more diverse, in terms of provider organisations and settings for learning and teaching. On the contrary, we needed to offer training that is adaptable, locally-responsive, and employer-focussed – the opposite of one size fits all, even in terms of quality standards. An employer-led sector cannot produce consistency, because employers do not all want the same thing. This is new territory for publicly-funded education, and I suspect OFSTED might be having trouble orienting themselves to it as well as we ITE providers.
