Introducing Experiential Validity to the Evaluation of Assessments - December 2018 Event Report

  

A constant challenge for those working in personality assessment is navigating the sometimes contradictory advice from academic personality psychologists and practitioners working in 'real world' organisations, particularly in the face of continuously changing expectations and technological developments. At the December meeting of the ABP, held at the University of Westminster, Penny Moyle and John Hackston gave an overview of their recent paper, published in the Journal of Personality Assessment.1

In particular, they examined the reliability, validity and effectiveness of different methods of assessment.

 

The following features form part of the current discussion:

*          Ability and Aptitude tests

*          Extraction and Use of Big Data

*          Business Simulations

*          Emotional Intelligence Assessments

*          Gamification

*          Group exercises

*          Personality Questionnaires - traits

*          Personality Questionnaires - type

*          Situational Judgment Tests

 

Recent research into tests used by practitioners has produced some interesting findings:

*          84% of practitioners use personality questionnaires (McDowall & Redman, 2017), very similar to the 85% found in earlier research by Dave Bartram.

*          Most organisations polled use personality assessments, broken down as follows:

            MBTI - 56%

            Belbin TRI - 40%

            OPQ - 35%

            FIRO-B - 31%

 

What is the most frequent use for personality questionnaires?

Interestingly, most respondents use assessments for developmental purposes rather than selection: for example, 67% use them for career development. However, the criteria for evaluating assessments, especially the importance placed on predictive validity, are more relevant to selection applications. Criteria tend to be geared to test users rather than to the actual end-user clients and their specific requirements.

What this reveals is that a wide scientist-practitioner divide exists, and if the industry is to progress, it is important to try to understand the reasons for these different perspectives rather than simply to argue for one corner or the other. Sadly, many clients with a modest understanding of assessments will try the latest "fads" without engaging with the science, while academics condemn practitioners for taking up unproven techniques.

 

The trouble is that these criticisms also apply to existing and trusted techniques.

Emre (2018) recorded a criticism of the MBTI: "It is a well-known fact that the type indicator is not scientifically valid", a view further reinforced by many reviewers of her book last summer.

 

What users need to recognise is that the MBTI is a simple tool that delivers developmental results. Myers and Briggs were not trained as psychometric practitioners, as no such training existed at the time they were developing the indicator. Moreover, the tool is not designed to predict performance and should not be used as such. What it is intended for is raising self-awareness, and it is very effective at doing so.

 

Addressing other criticisms:

*          Type vs Trait.  The MBTI is not intended to measure traits, but to sort people into categorical types. For a lay person this is often more useful for everyday purposes: it is simple to use and understand.

*          Internal consistency.  Alpha coefficients exceed 0.7 on all scales, which is as good as other commercially available assessments.

*          Reliability over time.  Test-retest correlations similarly exceed the normally accepted threshold of 0.7, and 93% of people do not change their type significantly over a four-week interval.
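The internal-consistency figure quoted above is Cronbach's alpha. As a purely illustrative sketch of how that statistic is computed (the item scores below are invented for the example and are not MBTI data), alpha can be calculated as:

```python
# Illustrative sketch: Cronbach's alpha, the internal-consistency
# statistic cited above (conventional threshold ~0.7).
# The item scores are hypothetical, not real MBTI data.

def cronbach_alpha(items):
    """items: one inner list per questionnaire item, each holding the
    scores of the same respondents in the same order."""
    k = len(items)        # number of items in the scale
    n = len(items[0])     # number of respondents

    def variance(xs):     # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Sum of the variances of the individual items
    item_vars = sum(variance(item) for item in items)

    # Variance of each respondent's total score across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    total_var = variance(totals)

    return (k / (k - 1)) * (1 - item_vars / total_var)

# Five hypothetical items answered by six respondents (1-5 scale)
scores = [
    [4, 5, 3, 4, 5, 4],
    [4, 4, 3, 5, 5, 4],
    [3, 5, 2, 4, 4, 3],
    [4, 5, 3, 4, 5, 5],
    [3, 4, 2, 5, 4, 4],
]
print(round(cronbach_alpha(scores), 2))  # prints 0.92
```

A value above 0.7 is the conventional threshold referred to above; test publishers compute the same statistic over much larger standardisation samples.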

 

Using MBTI for selection is like "using a screwdriver to bang in a nail".  It’s not the right tool for that job. 

 

There are four kinds of evidence, summarised by Barends et al. (2014), that practitioners are advised to pay attention to when selecting an assessment.

*          Scientific evidence: there are numerous findings from empirical studies published in academic journals

*          Organisational evidence

           -  Data facts and figures gathered from organisations

           -  Include hard numbers but also "Soft" data which show a remarkable level of consistency

*          Practitioner evidence: professional judgment and expertise have consistently asserted that the MBTI delivers dependable results

*          Tacit Knowledge, derived from those who have used a process (in this case MBTI and other assessment techniques).

 

The MBTI does not claim to be a good predictor of job performance. However, it does have good construct validity, correlates well with relevant criteria in its intended context, and has good consequential validity. It is quick and simple to use at a basic level, but can be applied at a more complex level as users become more sophisticated.

 

The key criteria practitioners can and do use include: 

*          Cost.  It is competitively priced.

*          Accessibility.  It is easy to apply to the particular organisational need.

*          User friendly.  End-user clients frequently comment on how helpful they find the MBTI.

*          Clearly defined concepts.  This means that it can be applied in the same organisation in a variety of situations.

*          Ethical construct.  There are clear boundaries as to when it might cease to be effective, and users can easily be made aware of these.

*          Useful for feedback discussions.

 

In conclusion, stakeholder evidence provides support for the MBTI. A majority of stakeholders state that:

*          It is personally valuable.

*          Intended outcomes have been achieved.

*          Key learnings are remembered months after the assessments.

*          It has ongoing impact at work.

 

They concluded by introducing the concept of experiential validity: does the person taking the assessment experience the whole process (including feedback) as personally valuable? It was argued that adding this aspect of validity to the evaluation of personality assessments would better capture the stakeholder evidence already used intuitively by practitioners. A rigorous academic examination of experiential validity could incorporate stakeholder evidence into the scientific evidence base, extending the range of evidence that practitioners have available to draw upon and bridging the scientist-practitioner divide.

 

1 Penny Moyle & John Hackston (2018). Personality Assessment for Employee Development: Ivory Tower or Real World? Journal of Personality Assessment. https://doi.org/10.1080/00223891.2018.1481078