ETD-HUB

What are the ethical issues of matching algorithms?

Asked: 9 months, 2 weeks ago | By: Agathe | Views: 246 | Human Resources: AI Recruitment Systems

In recruitment, algorithms are used to parse resumes and to score how well they match a job offer. What are the ethical issues I need to be aware of?

3 Answers

Answered: 9 months, 2 weeks ago By: Agavornik
Automation bias - how do we avoid a situation in which recruiters over-rely on the outputs of such systems? How do we ensure that their autonomy to make decisions is not affected? For example, the Assessment List for Trustworthy AI asks: "Could the AI system affect human autonomy by generating over-reliance by end-users?" And how do we make sure that the decisions made by the system are understandable and explainable to applicants and recruiters?
Answered: 9 months, 2 weeks ago By: Hollytibble
I think these systems are really hard to trust because you can't make them fully explainable. An example - if a resume DOES say something relevant to the application, the system can show you 'TA DAA, here is where they talk about using Excel'. But if it's trying to show you that they DON'T talk about something (like they never mention Excel), it can't point at nothing! It can't say 'here is the section where they don't talk about it'. So I imagine users would have to either trust the system completely or not at all, and there's not a lot of potential for a good balance between human and AI. I hope that makes sense!
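A minimal sketch of the asymmetry described above (all function and variable names are hypothetical): when a skill is present, we can return a concrete span of text as evidence; when it is absent, there is simply nothing to point at.

```python
import re

def find_evidence(resume_text: str, skill: str):
    """Return the first sentence mentioning the skill, or None if absent.

    Presence yields a concrete span the recruiter can inspect;
    absence yields None - there is no text to highlight.
    """
    for sentence in re.split(r"(?<=[.!?])\s+", resume_text):
        if skill.lower() in sentence.lower():
            return sentence
    return None

resume = "Built reporting dashboards. Automated monthly summaries in Excel."
print(find_evidence(resume, "excel"))   # a concrete span to show the recruiter
print(find_evidence(resume, "python"))  # None: nothing to point at
```

This is why "negative" matching criteria are hard to justify to an applicant: the system can only assert the absence, not exhibit it.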
Answered: 9 months, 2 weeks ago By: Deleuze
I would echo Holly & Adrian - really good points. Might I also suggest:

Algorithms can perpetuate or even amplify existing biases if they are trained on biased historical data (e.g., past job-match or hiring decisions that favoured certain genders, ethnicities, or age groups). Such bias can occur even when the sensitive fields on a CV are hidden from the model, because other features may act as 'proxies' for that information. This is one of the major challenges in building a fair job-matching AI.

Additionally, CVs/resumes usually contain at least some sensitive personal data, and improper handling can lead to data breaches or misuse, so this must be mitigated somehow. Somewhat related: we don't really know how OpenAI will look to monetise ChatGPT, and for now OpenAI is being forced to retain ALL chat logs due to a court decision (even before that, they would retain them unless you opted out). So who knows what may happen to this data if, for example, an organisation uses the OpenAI API for this process.
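To make the proxy point concrete, here is a toy illustration with fabricated data and hypothetical feature names: even after dropping the protected attribute entirely, a correlated proxy feature (here, attendance at a particular school) still carries the group signal into any model trained on the historical outcomes.

```python
# Fabricated toy data: (protected_group, attended_school_X, historically_hired).
# Group "A" was historically favoured, and school X correlates with group "A".
candidates = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", False, False), ("B", False, False), ("B", False, False), ("B", True, False),
]

def hire_rate(rows):
    """Fraction of rows whose historical outcome was 'hired'."""
    return sum(1 for _, _, hired in rows if hired) / len(rows)

# Drop the protected attribute; split only on the remaining proxy feature.
proxy_true  = [r for r in candidates if r[1]]
proxy_false = [r for r in candidates if not r[1]]

print(hire_rate(proxy_true))   # 0.75 - the proxy alone recovers group A's advantage
print(hire_rate(proxy_false))  # 0.25 - and group B's disadvantage
```

A model fitted to this data would learn to score school-X candidates higher without ever seeing the protected attribute, which is why simply deleting sensitive columns is not a sufficient fairness mitigation.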
