When it comes to evidencing skills, knowledge and experience, digital badges have undeniable advantages. But, Paul Jagger FBCS explains, there is room for improvement and development.

In a previous article (ITNOW, December 2019), I explored the emergence of digital credentials, specifically Open Digital Badges, as an increasingly popular means of recording educational attainment, especially in the workplace. In that article, I argued that while digital badges bring many benefits, a potential drawback is the lack of any mechanism for comparing the relative value of badges awarded by different organisations.

This article explores how digital badges might be mapped to a competence model, enabling comparison by employers, recruiters and job seekers.

Who can create a digital badge?

Digital badges based on the Mozilla Open Badges standard may be created by any organisation and awarded for any set of assessment criteria the organisation decides. A badge is issued through a third-party badge provider that maintains a record of badges issued to individuals and provides the means to verify educational attainment. The person to whom a badge has been awarded may then display it online, on a social platform like LinkedIn, or by providing a link to their badge profile. For example, have a browse of my own badge profile here.
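Under the Open Badges specification, an issued badge is a piece of machine-readable metadata. As a rough sketch (field names follow the Open Badges 2.0 specification; the issuer, recipient and URLs below are invented for illustration):

```python
# Minimal sketch of an Open Badges 2.0 assertion.
# All URLs and identities here are hypothetical examples.
assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://badges.example.org/assertions/123",  # hypothetical hosted location
    "recipient": {
        "type": "email",
        "hashed": False,
        "identity": "learner@example.org",              # hypothetical recipient
    },
    "badge": {                                          # the BadgeClass being awarded
        "type": "BadgeClass",
        "name": "Data Science Foundation",
        "criteria": "https://badges.example.org/criteria/ds",  # hypothetical criteria URL
        "issuer": "https://badges.example.org/issuer",         # hypothetical issuer profile
    },
    "verification": {"type": "hosted"},  # verified by fetching the hosted assertion
    "issuedOn": "2020-01-01T00:00:00Z",
}
```

Because the assertion is hosted by a third party and linked to the issuer, anyone given the URL can verify that the badge was genuinely awarded, which is what makes badges portable and verifiable.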

Holding and displaying digital badges on a third-party platform makes them portable, visible, verifiable and immutable. So far, so good. But what if two or more organisations issue badges for the same topic at the same level of educational attainment? How can they be compared?

Digital badges are awarded on completion of assessment activities. Assessments might be conducted in one or more of the following dimensions of competence:

  • Knowledge - something you know.
  • Skill - something you can do.
  • Experience - something you have done.
  • Desirable behaviour - not what you do, but the way that you do it.

The combination of these four dimensions provides an effective means to define and evaluate competence at a given level of attainment: entry, foundation or experienced / advanced, for example.

Let’s take ethical hacking as an example of a competence. An ethical hacker may have:

  • Knowledge of attack vectors, system vulnerabilities and hacking techniques.
  • Skills in exploiting vulnerabilities, using hacking tools, evading detection.
  • Experience in successfully hacking into different systems using a variety of tools and techniques, while successfully evading detection.
  • Desirable behaviour, using their knowledge, skills and experience for ethical hacking purposes in accordance with a code of conduct.

When all four dimensions have been verified through assessment, an ethical hacker may be deemed to be competent in performing their role. But, what type of assessment is appropriate to each dimension and how should assessment differ by level of competence?
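The ethical hacker example can be sketched as a small data model, in which a competence is deemed complete only when all four dimensions carry verified evidence. A minimal illustration (the class, field names and evidence strings are my own invention, not part of any badge standard):

```python
from dataclasses import dataclass, field

# The four dimensions of competence, plus a named level of attainment.
@dataclass
class Competence:
    name: str
    level: str                                       # e.g. "entry", "foundation", "advanced"
    knowledge: list = field(default_factory=list)    # something you know
    skills: list = field(default_factory=list)       # something you can do
    experience: list = field(default_factory=list)   # something you have done
    behaviours: list = field(default_factory=list)   # the way that you do it

def is_competent(c: Competence) -> bool:
    """Competent only when every dimension holds verified evidence."""
    return all([c.knowledge, c.skills, c.experience, c.behaviours])

ethical_hacker = Competence(
    name="Ethical hacking",
    level="experienced",
    knowledge=["attack vectors", "system vulnerabilities", "hacking techniques"],
    skills=["exploiting vulnerabilities", "using hacking tools", "evading detection"],
    experience=["hacked varied systems while evading detection"],
    behaviours=["works within a code of conduct"],
)
```

A candidate with knowledge and skills but no verified experience or behaviour would fail the `is_competent` check, mirroring the argument above.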

Possible assessment types for each dimension include:

  • Knowledge - interview, essay, multiple choice questionnaire, sentence completion, object association.
  • Skill - practical exercise, application simulation, demonstration, artefact creation.
  • Experience - job shadowing, stretch assignment, menteeship, role play.
  • Behaviour - observation, NPS rating, peer feedback, mystery shopper report, background checks.

While there are other assessment techniques that may be employed, the important point to note is that different techniques are needed to provide evidence of competence in each dimension.
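Encoded as data, the assessment types above become a simple lookup from dimension to candidate techniques, which an assessment platform could use to check that each dimension is evidenced by an appropriate method. A sketch:

```python
# Assessment techniques per dimension of competence, as listed above.
assessments = {
    "knowledge": ["interview", "essay", "multiple choice questionnaire",
                  "sentence completion", "object association"],
    "skill": ["practical exercise", "application simulation",
              "demonstration", "artefact creation"],
    "experience": ["job shadowing", "stretch assignment",
                   "menteeship", "role play"],
    "behaviour": ["observation", "NPS rating", "peer feedback",
                  "mystery shopper report", "background checks"],
}

def valid_for(dimension: str, technique: str) -> bool:
    """True if the technique is an accepted way to evidence the dimension."""
    return technique in assessments.get(dimension, [])
```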

How may we compare attainment across digital badge awarding organisations?

Digital badges provide a means to verify educational attainment, but they do not define how the assessment is conducted; that is left to the vendor to decide. Neither are they mapped to any common definition of competence across organisations.

Consequently, the relative value of digital badges from different vendors is difficult to judge. Two badges from different organisations may refer to the same subject domain at similar sounding levels of competence, but we cannot know if they have assessed the same skills to the same level of competence without an assessment model.

Comparison is not important within an organisation that awards its own badges for competence assessment, promotion or career development, but it does become important in the job market where lateral comparison of qualifications is desirable.

For example: employers, recruiters and job seekers can readily understand the relative value of a GCSE, A Level, AS Level, bachelor’s degree and master’s degree in computing science awarded by different examination bodies. But, what’s the relative value of an artificial intelligence practitioner or machine learning specialist badge from two or more different organisations?

What’s needed is an assessment framework: one that spans an industry and to which badge awarding organisations can map their digital badges. While it is unlikely that a single framework will ever meet the needs of all aspects of the IT industry, the good news is we already have a head start in the SFIA Framework.
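To make the idea concrete, here is a hypothetical sketch of how two badges, once mapped onto a shared framework of skills and levels, could be compared programmatically. The badge contents, framework skill names and levels below are invented for illustration:

```python
# Hypothetical badge-to-framework mappings: framework skill -> attained level.
badge_a = {"Data modelling and design": 2, "Data visualisation": 4}
badge_b = {"Data modelling and design": 3, "Data management": 2}

def compare(a: dict, b: dict) -> dict:
    """For each framework skill both badges cover, report the level difference.

    Positive means badge A attests a higher level for that shared skill;
    negative means badge B does. Skills covered by only one badge are omitted,
    since no lateral comparison is possible without a shared reference point.
    """
    shared = a.keys() & b.keys()
    return {skill: a[skill] - b[skill] for skill in shared}

print(compare(badge_a, badge_b))  # {'Data modelling and design': -1}
```

The point is not the arithmetic but the precondition: this comparison is only possible at all because both badges were first expressed in the same framework's skills and levels.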

SFIA: The Skills Framework for the Information Age

SFIA is, at heart, a skills taxonomy. It has never sought to define experience or behaviours and, until recently, it didn’t include underpinning knowledge. However, it does provide a classification of skills and a hierarchy of skill levels that give us the core of a framework against which educational attainment may be mapped. SFIA is a good place to start in defining an assessment framework for the IT industry.

It is worth noting that the word ‘skill’ is increasingly used as a proxy for knowledge, experience and behaviour. These four dimensions, however, exist separately.

Let’s take my data science foundation badge as an example and map the skills definitions for that badge to SFIA.

The badge’s constituent skills, the nearest SFIA skill and its SFIA level:

  • Data Science - Data modelling and design (Level 2), Data management (Level 2), Data visualisation (Level 4)
  • Data Scientist Workbench - Data modelling and design (Level 2)
  • Python - Programming/Software Development (Level 2)
  • R - Programming/Software Development (Level 2)
  • Scala - Programming/Software Development (Level 2)
  • Statistics - no SFIA equivalent

This simple exercise reveals just how difficult it is to perform an accurate mapping when no competency model has been employed to define the skills associated with a badge from the outset. That’s not to suggest the badge skills are wrong or incomplete; rather, they are difficult to map to SFIA as they lack specificity.

It appears from the titles that the Data Science Foundation’s badge skills are in fact a mix of programming languages, two topics and the name of an all-in-one cloud-based data manipulation product; whether they are skills at all is debatable. No detailed description of the skills associated with the badge is given, so it’s not possible to be certain that the chosen SFIA skills are the most relevant. Let’s try an alternative framework for comparison.

The UK Government’s own Digital, Data and Technology Capability (DDTC) Framework identifies the following skills for a Data Scientist at the entry or ‘working’ level of competence:

The badge’s constituent skills, the nearest DDTC framework skill and its DDTC level:

  • Data Science - Domain expertise (Working), Data science innovation (Working), Developing data science capability (Working)
  • Data Scientist Workbench - Data engineering and manipulation (Working)
  • Python - Programming and build (data science) (Working)
  • R - Programming and build (data science) (Working)
  • Scala - Programming and build (data science) (Working)
  • Statistics - Applied maths, statistics and scientific practices (Working)

Again, it’s difficult to be certain that the skills are correctly mapped. The DDTC framework provides detailed descriptions of all the skills at each level, and these appear to map better to the badge skills, but we can only assume that the Data Science Foundation badge maps to the lowest (‘working’) level of the DDTC model.

There are multiple frameworks we could use for comparison, including the European e-Competence Framework (ECF), which is a general model for the IT industry, or, for an information security badge, the NIST NICE and IISP frameworks, which cover that subject domain in greater depth than SFIA or the ECF. Whichever frameworks we choose, it will be necessary to approximate the mapping between digital badges and one or more frameworks until the digital badge market matures.

Neither the SFIA nor the DDTC framework can be said to map fully and correctly to the skills identified by the badge, and we have not even tried to compare two or more similar data science badges awarded by different organisations. At present, this is not a big issue, but as the market adopts digital badging, lateral comparison of skills by employers, recruiters and job seekers will become inevitable.

In conclusion

Digital badging is in its infancy and has much potential, but until there is a common competence framework against which knowledge, skills, experience and behaviour can be mapped, it will be difficult for employers, recruiters and job seekers to understand the relative worth of badges. In reality, the perceived value of digital badges will derive from a combination of market demand for the skill and the brand of the awarding organisation.