Automated Video Interviews (AVI) and Personality Assessments (AVI-PA)

HAL may be the most famous computer in history (HAL 9000 from 2001: A Space Odyssey) but is certainly not the first. That title probably goes to the Babbage Difference Engine (Charles Babbage, 1822) or, more recently, the ENIAC (1945). But HAL shaped how we think about computers. HAL seemed almost human yet turned out to be capable of cold, dispassionate murder. Computers have moved on since then. Perhaps.

So where have we got to?

AVI – Automated Video Interviews

The Holy Grail is here – maybe. I’ve just read a paper on Asynchronous Video Interviews. This means a one-way video interview1. A person (usually a job applicant) gets an email with a link. Follow the link and you get to a website (there are many platforms, but Vieple is a big one in Australia). The website is primed to ask questions. The questions are set up by the client running the recruitment programme and are standardised for the role for which you are applying (sort of – more on that later). Sometimes the questions are in text form, sometimes as videos. You answer in text or record a video response. When you’ve finished, the whole interview is wrapped up and the client informed.

The process is largely designed to simplify admin. When Vieple was first mooted a few years ago, I was told it was the future for high-volume recruitment. It sounded good: the result would be a neat bundle of applications from anywhere in the world, either as text or videos. Job done.

It was obvious how this would tidy up the recruitment admin function and be a vast improvement on the traditional approach, especially when there are 1,000 applicants spread across a country.

AVI-AI – Automated Video Interviews with Artificial Intelligence

The AVI is a clever admin system, but there’s nothing yet to suggest it is “intelligent” and capable of analysis. Yet this data needs to be analysed. And now we move into a whole new field.

There is huge excitement about AI-based analysis in selection. This is where Machine Learning/AIs2 are being used. The movie Ex Machina3 has an AI called Ava, who is fully anthropomorphic, emotions included. She is played in the movie by Alicia Vikander. HAL, by contrast, was nothing but machinery.

If you have 100,000 applications for roles, the sifting process is onerous. But Ava wouldn’t care4. She will just churn through them, applying her cleverly developed algorithms to all the data she has on each candidate in turn, looking for the people who come closest to some “ideal”. And Ava has no opinion.
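
To make that sifting step concrete, here is a toy sketch of scoring every candidate against an “ideal” profile and ranking them. Everything in it is invented for illustration (the feature count, the ideal vector, the shortlist size); it is not any vendor’s actual algorithm.

```python
# Toy version of the sifting step: score each candidate by similarity to an
# "ideal" profile and rank them. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(4)
candidates = rng.normal(size=(100_000, 5))    # 100,000 applicants, 5 features each
ideal = np.array([1.0, 0.5, -0.2, 0.8, 0.0])  # the profile Ava was trained to prefer

# Cosine similarity between each candidate and the ideal profile.
scores = candidates @ ideal / (np.linalg.norm(candidates, axis=1) * np.linalg.norm(ideal))

shortlist = np.argsort(scores)[::-1][:100]    # top 100: no opinions involved
print(shortlist[:5])
```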

Training Ava

But who tells Ava what to look for? This is really important. Ava needs to be trained. And Ava can look at a whole lot more data: things that a normal interviewer wouldn’t and couldn’t possibly evaluate objectively. A study by Hickman et al in 20215 lists the signals an AI was asked to process in order to automatically identify individual personality.

Note that nothing in that list addresses “what” a candidate says. It’s all about “how” they say it. There may not be any evidence that these elements are related to specific personality aspects, but you never know, and with a fast computer you may as well just check6. This study didn’t look at the content of a candidate’s response, but there are ways to do that as well: Automated Content Analysis is being used to trawl legal records and other areas of “big literature”, and it could look at what a person says too.
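
As a minimal sketch of what “training” means here, the following snippet fits a regularised regression from nonverbal cues to a trait rating. The cue names (speech rate, pitch variation, smile frequency, pause length) and all the data are synthetic placeholders, not features or results from the Hickman et al study.

```python
# Minimal sketch of AVI-PA training: predict a trait rating (e.g. interviewer-
# rated extraversion) from paraverbal/nonverbal cues. All data is synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_candidates = 500

# One row per candidate; columns stand in for speech_rate, pitch_sd,
# smile_freq and pause_len, already extracted from the video.
X = rng.normal(size=(n_candidates, 4))
true_weights = np.array([0.6, 0.3, 0.5, -0.4])  # invented relationship
y = X @ true_weights + rng.normal(scale=1.0, size=n_candidates)

# Cross-validated fit: how well do the cues predict the trait rating?
scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2")
print(f"Cross-validated R^2: {scores.mean():.2f}")
```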

And here is the first alarm bell. Ava is an empiricist: she looks at the data, all the data, and only the data. But what she does with the data comes back to how she is trained. If there is any bias in the training, Ava will make sure this bias is replicated. Perfectly. Remember, Ava has no opinion.
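
A toy demonstration of that replication, using entirely synthetic data: if historical hiring decisions depended partly on a protected attribute, a model trained on those decisions will score otherwise-identical candidates differently by group.

```python
# Synthetic demonstration: a model trained on biased historical hires
# reproduces the bias. "group" stands in for any protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
skill = rng.normal(size=n)          # genuinely job-relevant
group = rng.integers(0, 2, size=n)  # protected attribute (0 or 1)

# Historical hires depended on skill AND on group membership (the bias).
hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill get very different hiring
# probabilities, purely because of group. Ava has no opinion.
same_skill = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_skill)[:, 1])
```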

Sources of bias

Where can bias come from? Xavier Ferrer, writing in Technology and Society, outlined three possible sources of bias for Ava7.

  1. Bias in training: Algorithms learn to make decisions or predictions from data sets that often contain past decisions. If a data set used for training reflects existing prejudices, algorithms will learn to make the same biased decisions.
  2. Bias in modelling: Bias may be deliberately introduced. The organisation may want to implement an affirmative action programme. That, too, is bias.
  3. Bias in usage: Algorithms can produce bias when they are used in a situation for which they were not intended. This is more common than you might think. Dunlop’s paper8 noted that while the AVI allowed custom questions for each role, in practice companies tended to stick to a generic set of questions across multiple roles.

A good example of how bias can creep in is where a company implements a recruitment policy under which existing employees are rewarded for introducing friends who are then hired. My son was hired by Apple under this policy and took advantage of it while employed there: the person who recommended him received a fee, and he in turn received a fee from anybody he referred. If we then ask Ava to look at successful people in Apple and try to replicate them, more people will be employed who look like the ones already there. This is bias.

And bias is not a new thing. In the mid-1970s, Massey Ferguson Perkins in Coventry, UK found that their recruitment policy had produced significant “indirect discrimination”: of 6,800 employees, only 10 were black9. This was due to a recruitment process that heavily favoured people who could write well in English. Shipyards in Northern Ireland had workforces biased towards Protestants rather than Catholics. Ava has no opinion, so those biases will be replicated.

Biases can be more subtle. Names can be a guide to race and, of course, gender. Postcodes are used in credit scoring. Credit scoring can be used in selection. So where you live might influence your employment. Ava has no opinion.
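
Here is a small sketch of that proxy effect, with made-up data: even if the protected attribute is never given to the model, a correlated variable such as a postcode band can reconstruct it.

```python
# Sketch of proxy leakage: the protected attribute is dropped, but a
# correlated feature (postcode) lets a model recover it. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, size=n)

# Residential segregation: postcode band strongly tracks group membership.
postcode_band = (group + rng.normal(scale=0.3, size=n)).reshape(-1, 1)

clf = LogisticRegression().fit(postcode_band, group)
print(f"Group recoverable from postcode alone: {clf.score(postcode_band, group):.0%}")
```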

According to Martin Wells, the Charles A. Alexander Professor of Statistical Sciences at Cornell:

“If we’re building a machine-learning model and we calibrate it on historical data, we’re just going to propagate the inherent biases in the data.”

So is Ava more trouble than she’s worth?

So there are pitfalls. But that doesn’t mean Ava is useless. It just means she needs to be trained properly. That is not a simple task. And it may be far more important in some applications than others. Selection is the most frequently cited application, and it does matter. It’s a “high stakes” application. People’s careers are affected.

In fact, Ifeoma Ajunwa, Assistant Professor of Labor Relations, Law and History, also at Cornell, feels that

“Algorithmic decision-making is the civil rights issue of the 21st century.”

There are many firms that are currently examining, or have already implemented, some form of automated decision-making. But there are also many people raising alarms. Writing in Forbes, Tomas Chamorro-Premuzic listed four key issues that should be addressed:

  1. Cyber-snooping: most people leave behind a vast trail of rich data on their individual preferences, values, and abilities. This is their digital trace, and as the Cambridge Psychometrics Centre has demonstrated, it can be used to influence all sorts of decisions. And it is not just your Facebook and Twitter entries: every purchase you pay for by card and every toll you pay through your car is a trace. I remember a discussion with a History Professor who declared she was completely dark to the web. No trace at all! I asked if she had an academic email. “Oh!” she said.
  2. Withholding feedback: Historically, people got little feedback on recruitment decisions, especially if it was negative. Ava could synthesise all the information used and explain why the decision was made, but she usually doesn’t.
  3. Predicting biased outcomes: As Martin Wells (quoted above) said, if you use historical data, you will just replicate all the existing biases. Ava is faithful to this. Ava has no opinion.
  4. Black-box selection: Yes, Ava can learn. She could follow your digital trace like a bloodhound and find some factors that seem to be desirable in a role. But can she explain why?

Ava and the Law

As you can imagine, AI is a happy hunting ground for the law, and legal opinions are being produced in vast volumes. The European Union GDPR rules have taken a firm view. Article 22 states:

“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

Of course there are additional clauses, but the critical point is that, in general, the person has the right not to have high stakes decisions made solely by an AI. But not every jurisdiction is the same.

Other applications

But what if it’s not a high stakes decision? What about personal development? If you want to get an idea of how a person sees themselves, where their strengths might lie and in which direction their career might develop, it might be different. This is not “high stakes” because any decisions are made by the person themselves, not imposed. Ava might help the person through a guided conversation based on what she has discovered. If the person has a career in sales in mind but Ava has evidence that they would be quite unusual in such a role, she could point this out, and perhaps offer alternatives. If Ava watched the person in a role play where they were challenged, she might analyse the interactions. She could even replay them for the person to review.

In this way Ava becomes less of a decision maker and more of a guide. Bias may even be less of an issue.

Are we there yet?

How close are we to this? A clue comes from the work by Hickman et al10. They were trying to estimate personality factors via an AI (not Ava). They found that the situation is a bit complicated. For example:
  1. There is some evidence of validity for AVI-PAs, i.e. Ava could estimate personality factors if trained.
  2. These estimates are better when Ava is trained on interviewer-reported traits rather than self-reports.
  3. Ava’s estimates are better for elements that are observable (a person’s reputation) than for what a person thinks of themselves (their identity) (see the sketch below).
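
A minimal sketch of that kind of validity check, with made-up numbers rather than the study’s actual results: correlate the AI’s trait estimates with interviewer ratings and with self-reports, and compare.

```python
# Sketch of a convergent-validity comparison. All data is synthetic and the
# noise levels are assumptions, chosen only to illustrate the pattern.
import numpy as np

rng = np.random.default_rng(3)
n = 300
true_trait = rng.normal(size=n)

interviewer_rating = true_trait + rng.normal(scale=0.6, size=n)  # observable "reputation"
self_report = true_trait + rng.normal(scale=1.2, size=n)         # noisier "identity"
ai_estimate = true_trait + rng.normal(scale=0.7, size=n)         # trained on observable cues

print(f"AI vs interviewer: r = {np.corrcoef(ai_estimate, interviewer_rating)[0, 1]:.2f}")
print(f"AI vs self-report: r = {np.corrcoef(ai_estimate, self_report)[0, 1]:.2f}")
```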

Conclusion

A mountain of data is being produced to show how AVI-AIs can be used in selection. But the personal development fields, from Assertiveness Training to Coaching and Career Motivation, would seem to be very interesting areas for the future.

Hickman11 says we should be cautious because of these limitations but thinks there is promise. And we should remember that the self-reports we now use with confidence are still imperfect. So, in spite of the questions, Ava and her sisters are a very interesting start.


Author: Norman Buckley

References

1 Dunlop, P., Holtrop, D., & Wee, S. (2022). How asynchronous video interviews are used in practice: A study of an Australian-based AVI vendor. International Journal of Selection and Assessment. https://doi.org/10.1111/ijsa.12372

2 Artificial Intelligence and Machine Learning are very similar concepts. Machine learning is considered a subset of AI.

3 Garland, A. (2014). Ex Machina. A24.

4 Interestingly, although the ability to handle large numbers of applications is touted as a major advantage of an AVI, Dunlop et al (above) found that across the 12,000+ role templates studied, 75% had fewer than 25 candidates.

5 Hickman, L., Bosch, N., Ng, V., Saef, R., Tay, L., & Woo, S. E. (2021). Automated video interview personality assessments: Reliability, validity, and generalizability investigations. Journal of Applied Psychology. Advance online publication. https://doi.org/10.1037/apl0000695

6 This is where Neural Nets are often used, but there are alternatives.

7 Ferrer, X. (2021, August 7). Bias and Discrimination in AI: A Cross-Disciplinary Perspective. Technology and Society.

8 Dunlop et al. (2022), see reference 1.

9 Commission for Racial Equality. (n.d.). Massey Ferguson Perkins Ltd: Report of a formal investigation (pp. 1–35) [Documents]. Commission for Racial Equality. https://jstor.org/stable/10.2307/community.28327673

10 Hickman et al. (2021), see reference 5.

11 Hickman et al. (2021), see reference 5.
