Changes to how we assess apprenticeships are on the horizon

Barry Smith, NSAR’s Head of Assessment and Skills, reflects on the recent proposed changes to apprenticeship end-point assessment after 11 years of working on the standards-based apprenticeship reforms across six different sectors, just as we launch our own consultation on the occupational duties for Rail Engineering Operative, Level 2.

I have always felt that the same bad habits seem to recur when we try to resolve challenges with apprenticeships or vocational education. The first bad habit is simply that we don’t seem to want to sit in the problem long enough to make sure we fully understand it, unpick the confounding variables and really establish what needs to be resolved. We often don’t invite those who can offer constructive ways forward and critical insight to sit in the problem with us. Instead, policymakers can rush to reactive solutions that, at best, just introduce different challenges, and they invariably come at the problem top-down.

The second habit is that sometimes we get close to the potential answer but don’t see it, or we see it but don’t follow through. It’s like a comet on an elliptical orbit that gets so close you can almost touch it, then off it flies into the distance. I’ll come back to this with an example.

Finally, policymakers tend to over-compensate rather than refine or tweak. Solutions are often over-engineered. So, where am I going with all this?

The Government is planning to test several alternatives to the current end-point assessment (EPA) of apprenticeships. The goal appears to be to improve apprenticeship achievement rates and reduce the costs and administrative burdens associated with end-point assessment. Currently, on average across the apprenticeship standards, 45.7% of apprentices don’t successfully complete their apprenticeship. The non-completion challenge has major funding and accountability implications for training providers, even if apprentices leave with qualifications but no EPA. We should remember that the training providers bear the brunt of the accountability, sanctions and consequences of poor completion rates. However, all of this also means that the marketplace for end-point assessment is diminished as a result, making it harder to recover costs and to generate revenue for investment and continuous improvement.

To be fair, the Department for Education and its expert apprenticeship training provider group have given the problem some thought, and the group will be allowed to flex EPA requirements in trials that could begin this summer term. Piloting, stress testing and evidence-based policy decisions are all to be welcomed. Let’s hope there are clear research and evaluation questions and robust, impartial evaluation methodologies scaffolding each pilot activity.

Non-completions, costs and administrative burdens are clear concerns. You could add to this the overall declining apprenticeship uptake year-on-year. These are clear symptoms of something going wrong, but have we got the diagnosis right? The measures to be trialled by training providers are designed to make the current system more manageable. They will look to see if having training providers carry out part of the EPA would help, rather than the whole process being done by an independent end-point assessment organisation (EPAO). Other trials will see whether it is possible to transfer the assessment of behaviours from EPAOs to employers. However, it is the trials related to cutting the size of EPAs by removing the need for all knowledge, skills and behaviours (KSBs) to be assessed that I think will bring us closest to a way forward that genuinely improves apprenticeship assessment.

I have long worried that we risk losing our way with apprenticeships. We have often forgotten to ask ourselves what the point of apprenticeships is when we design them. What are we trying to achieve? There was a time when apprenticeships purported to deliver competence. The target proficiencies would be the focus of assessment and we were much clearer in the claim we wanted to make for apprenticeship achievement.

Somewhere along the developmental road, the focus drifted onto the coverage of KSBs. This is a policy-driven, risk-averse assessment approach looking to ensure that all would be fine if every KSB could be evidenced and explicitly traced back to one of the assessment instruments. All KSBs had to be directly attributed to an occupational duty and an assessment instrument. The fear was that corners would be cut if we didn’t do this and, by doing this, we could drive delivery practice to ensure everything was taught – but that shouldn’t be the point of assessment. It was almost as though we wanted the same traceability for KSBs as we do in our most organic produce.

I think we got close to the solution once before, and maybe the move towards not assessing all the KSBs means it is coming around again. The comet carrying the potential way forward came very close on its elliptical orbit last time when IfATE introduced duties to the apprenticeship standards. ‘Hurrah!’ I cheered, ‘Finally!’. Now, let’s base the assessment on evidencing these duties; make these the target proficiencies that we should validly assess; acknowledge that apprentices are competent in the occupational role when they can perform these relevant duties.

But then, the comet kept going; instead of occupations centred around agreed core duties, lists of 15 and 20 duties started to appear (some were not duties at all and read more like performance criteria) and the focus remained on coverage of KSBs. Back to safety in numbers and coverage, rather than security in prioritising key proficiencies in occupations. If we assessed core occupational duties and saw them as the application of all the KSBs in action, we could move away from evidencing the assessment of every KSB. We would be on to something, I think.

Let’s establish what we want apprenticeships to do (deliver competent post-apprentices), let’s identify the core duties that are front and centre in each occupational role, then reverse engineer out of these duties the KSBs that drive them and finally assess them. How we assess them validly will tell us who should assess them. While we are at it, let’s have the apprenticeship standards writing and assessment plan rules designed around qualitative requirements rather than quantitative ones (such as a minimum duration of 12 months, at least two assessment methods, or coverage of all the KSBs).

NSAR has just started a consultation on the proposed changes to the Rail Engineering Operative apprenticeship standard, starting with an attempt to get the occupational duties right, capturing the work of rail engineering operatives during CP7. Contact support@nsar.freshdesk.com if you would like to take part in this consultation.
