It's hypothetically possible to train an AI model to predict a job candidate's performance and productivity by comparing some of their characteristics to a dataset of current or past employees.
Can you think of any circumstances in which this is ethically justifiable?
For example, if the only features used to make the prediction were things like whether they smoke, how much coffee they drink... no protected characteristics... does that make it okay? Or is the act of comparing an individual to others (through AI or just from personal opinion) inherently unethical?
Would love to know your thoughts!!!
Also, I worry about this from the perspective of a lecturer - sometimes we might 'predict' that some students will struggle more with certain elements of the course and proactively reach out to them. The intention is to be helpful, but is this actually causing more harm than good?
I recently saw an interesting conference contribution that addressed the performativity of predictions. It might be interesting to have a look, even though it might not address your question directly.
https://proceedings.mlr.press/v294/khosrowi25a.html
In reply to Agavornik (the reply button isn't working): That is interesting, and very relevant to my own research in clinical risk prediction. I'm not sure it's relevant to this problem, though: here the prediction would influence whether or not you were hired, and if you're not hired you can't quit. That feels subtly different from the prediction making you quit once you're already in the job?
Charlie replied: Hello! Thanks so much for pointing this out - replies are now (hopefully) fixed :)
I think it probably can be ethically justifiable to use AI to predict job performance if we use a definition of 'ethical' that means compliance with regulatory ethical frameworks, but only under very specific, carefully controlled circumstances. I also think the ethics vary depending on who is using the tool: I will explain.
Speaking hypothetically, if this were going to be used by an *employer*, I think the application would need to meet all of these criteria. I'm not an expert, but off the top of my head:
- No protected characteristics are used (e.g., race, gender, age, religion, disability).
- The model is transparent, explainable, and auditable (hard to decide thresholds).
- The predictions are used in conjunction with human judgment (how to achieve?).
- The AI is trained on relevant, validated indicators of performance (what could these be?).
- No proxies that may encode bias (e.g., zip code for socioeconomic status).
- There are fairness algorithms in place to mitigate disparate impact (look at the classification resources on this site; see the sketch after this list).
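On the disparate impact point above, here is a minimal sketch of how you might audit a model's hire recommendations using the 'four-fifths rule', a common rule of thumb for flagging disparate impact. Everything here is a hypothetical illustration - the column names, the data, and the 0.8 threshold are not from any particular framework - and the group labels would come from a separate audit dataset, not from features fed to the model:

```python
# A minimal sketch of a four-fifths (80%) disparate impact audit.
# All names, data, and thresholds below are hypothetical illustrations.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Proportion of positive (hire-recommended) outcomes per group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group selection rate to the highest.
    Values below ~0.8 are commonly treated as a red flag (the 'four-fifths rule')."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates.min() / rates.max()

# Hypothetical audit data: group labels live in a separate audit dataset,
# NOT in the features the model sees.
audit = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b"],
    "recommended": [1, 1, 0, 1, 0, 0],
})
print(disparate_impact_ratio(audit, "group", "recommended"))  # 0.5 -> red flag
```

Passing a check like this obviously doesn't make the model fair in any deep sense, but it is the kind of auditable evidence a regulatory framework would ask for, which connects back to the 'ethical by the book' point.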
I think if you applied all of these things, you would have something that is fair by the book, but who knows whether it could predict performance very well. And how would you even quantify performance? Categories, some kind of numerical value, etc.? So, maybe doable in compliance with regulation, but the practical value is debatable, and whether it is ethical in a purer sense is still an open question. I think this makes it a much more interesting project to work on and discuss!
Another thing to think about: what if this tool were used by people *applying* for a job to predict their own performance? In that case you might include otherwise excluded data in training, because the applicant would want the best possible projection (though measuring model performance is hard anyway, as you would need a lot of outcome data for users advised in either direction). The issue of how you quantify good/bad suitability is still difficult, but I could see this having interesting utility - helping people avoid jobs where they may be overworked, mistreated, unhappy, etc. Of course, if you built this kind of model, who is to say employers wouldn't begin using it even if it isn't intended for them?
This has been a long answer, but in short: I think you can do this 'ethically' in the sense of following the rules, but more philosophically I don't know if it can be ethical. This is an interesting debate and question, and I think a really cool project this week would be to build this kind of model and also get into the weeds of whether it is ethical, how we decide, etc. It sounds like you might be doing this, and if so I am very much looking forward to your presentation on Friday!
Hollytibble replied: You are right to be excited - be prepared to be both technically 'wowed' and filled with existential dread. Woo!
Hollytibble replied: also the voting function seems to be broken haha
Also another thought - for an employer, maybe something better is just to flag somebody who definitely *won't* work out at the job due to very specific criteria, such as a conviction that would cause a required DBS or background check to fail.
In this case you could argue the model is completely ethical, because it is just searching for a practically disqualifying criterion and marking the candidate as unsuitable because the hire is genuinely impossible.
You would still have to tackle preventing misuse, though - i.e., somebody applying the model to a criterion that is not practically disqualifying but merely preferential.
Also I suppose this then ends up being something different as a tool - I'm just thinking out loud.
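To make the thinking-out-loud concrete, here is a minimal sketch of that hard-disqualifier idea. The specific criteria (a failed background check, a missing licence) are hypothetical placeholders; the design choice is that the function only ever encodes binary, genuinely required conditions, which is exactly why misuse (sneaking in preferential criteria) is the thing to guard against:

```python
# A minimal sketch of a hard-disqualifier flag, as opposed to a predictive model.
# The criteria below are hypothetical placeholders for genuinely required conditions.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    passed_background_check: bool  # e.g., a required DBS check
    holds_required_licence: bool   # e.g., a legally required licence

def hard_disqualifiers(c: Candidate) -> list[str]:
    """Return the genuinely disqualifying criteria a candidate fails.
    Deliberately limited to binary, practically/legally required conditions -
    never preferential ones like 'less experience than we'd like'."""
    reasons = []
    if not c.passed_background_check:
        reasons.append("failed required background (DBS) check")
    if not c.holds_required_licence:
        reasons.append("missing legally required licence")
    return reasons

candidate = Candidate("example", passed_background_check=False, holds_required_licence=True)
print(hard_disqualifiers(candidate))  # ['failed required background (DBS) check']
```

Notice this isn't really a 'model' at all, just a transparent rule check - which is maybe the point: the more defensible the tool, the less prediction it is doing.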