Friday, April 10, 2020

Using algorithms to improve the quality of hiring decisions and reduce discrimination? - Interview with Larissa Fuchs and Philipp Seegers of CASE

Now is the time to consider what is moving HR today and what will continue to move it in the future. Given the sheer number of digital tools in modern human resources work, up to and including current AI applications, the question frequently arises of how HR managers can assess the quality of the systems in use and of the decisions based on them. The HR Tech Ethics Advisory Board in Germany recently clarified its position on AI and human resources with specific guidelines. In this post I examine a sub-topic that could be described as “Algorithms and HR”. Dr. Seegers - a frequent and welcome interview guest on my blog - spoke about this subject at the CASE online summit on March 31, 2020 (the summit is also offered in English; details can be found at the end of this post). There, together with Larissa Fuchs from the University of Cologne, he presented the FAIR (Fair Artificial Intelligence Recruiting) project and gave some helpful explanations on how algorithms can be used in personnel selection to make non-discriminatory decisions. I recommend watching the video featuring the input from Dr. Seegers and Ms. Fuchs (in German). Today I will put a few questions about their contribution to them directly. Thanks to Josh Madden for translating this interview.

Wald: Dear Dr. Seegers, once again I was able to bring you in for a conversation, this time with Larissa Fuchs from the University of Cologne. Dear Ms. Fuchs, thank you very much for coming along.
Fuchs: Dear Prof. Wald, I am pleased to meet you.
Seegers: Always a pleasure.

Wald: Your contribution to the previously mentioned online event was preceded by some interesting quotes that I would like to repeat here: "81 percent of recruiters think AI is forward-looking, but the majority (57 percent) has little or no knowledge about it" (Hennemann/Schlegel/Hülskötter, 2018). "Only a quarter of HR managers state that they have sufficient knowledge of the uses and functions of AI and algorithms" (Jäger/Meurer, 2018). Additionally, current figures from one of the well-known Monster/HRIS studies (Weitzel et al. 2020) show that 9.4% of the companies surveyed already use digital selection systems, and that 62.9% of the companies assume these systems promote a non-discriminatory selection of applicants.
Seegers: Of course, HR should on the one hand understand more about technology, but on the other hand the technology must also improve. At CASE, we go to great lengths to explain our algorithms transparently and to validate them properly; this includes checking not only whether the productivity of hires increases, but also whether the algorithms are fair and non-discriminatory. I am convinced that algorithms can fundamentally improve the quality of recruitment decisions while at the same time reducing discrimination. Of course, that depends on the algorithm. Just as some procedures in HR diagnostics legitimately have a bad reputation, there are algorithms that simply do not work and should not be used. To be able to differentiate here, HR naturally needs a certain understanding of what algorithms actually do and how they can be checked.
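To make Dr. Seegers' point about validation concrete, here is a minimal sketch of a predictive-validity check, assuming a company has kept the selection-time scores and later performance ratings of the same hires. All numbers are hypothetical illustrations, not CASE data, and this is not CASE's actual validation procedure (Python 3.10+ is needed for statistics.correlation):

```python
# Minimal predictive-validity check: does the selection score
# correlate with a performance measure collected after hiring?
# All data is hypothetical illustration, not CASE data.
from statistics import correlation  # Python 3.10+

# One entry per hire: score assigned at selection time, and a
# supervisor rating collected e.g. one year into the job.
selection_scores = [72, 85, 61, 90, 78, 66, 88, 74]
job_performance = [3.1, 4.2, 2.8, 4.5, 3.6, 3.0, 4.0, 3.4]

r = correlation(selection_scores, job_performance)
print(f"Predictive validity (Pearson r): {r:.2f}")
```

A score whose correlation with later performance is near zero fails the efficiency test, no matter how sophisticated the model behind it is.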

Wald: For me, the question arises as to how knowledge about how these systems work can be conveyed in the future. And once again: how can HR managers assess the quality of the decision-making driven by these systems? Is there a rule of thumb, or certain considerations that HR managers should take into account?
Fuchs: I think we have to distinguish between two things here: (1) whether the algorithm makes efficient decisions, i.e. selects the more productive candidates, and (2) whether the algorithm makes socially acceptable decisions, i.e. selects fairly and without discrimination. Of course, companies should make good business decisions, but the exclusion of certain groups of people has a negative impact on productivity, even if it is not always easy to measure. Common promotion practice in Germany, for example, often incorrectly suggests that men are more productive than women. This is precisely why we need separate metrics for the different concepts. However, these are not necessarily any different for algorithms than they are for conventional selection processes. You need to consider which selection processes predict future productivity well (predictive validity), whether these selection processes are gender- and origin-neutral, and whether possible deviations come at the expense of certain groups. With our FAIR Index we have created a simple metric that does just that.
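The exact construction of the FAIR Index is not spelled out in the talk. One simple metric in the same spirit, however, is the ratio of selection rates between applicant groups, known in US employment law as the four-fifths rule; the following sketch uses hypothetical numbers and is an illustration, not the FAIR Index itself:

```python
# Compare selection rates across two applicant groups. A ratio
# far below 1.0 signals that one group is selected
# disproportionately rarely. All numbers are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rate_a = selection_rate(selected=30, applicants=100)  # group A
rate_b = selection_rate(selected=18, applicants=100)  # group B

ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Selection-rate ratio: {ratio:.2f}")  # 0.60 here
# A common US rule of thumb flags ratios below 0.8 as possible
# adverse impact; this heuristic is not the FAIR Index itself.
```

Such a ratio only flags unequal outcomes; as Ms. Fuchs notes, it has to be read together with predictive validity, since a deviation could in principle reflect genuine productivity differences rather than discrimination.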

Wald: What should be done in the future to ensure greater understanding or, in the interim, a targeted expansion of the relevant competencies of HR staff?
Seegers: I would also recommend that HR managers require tech providers to explain their algorithms. HR should be open to this technology and to having these kinds of conversations; the relevant competencies will then develop over time. Additionally, even the best expert cannot check an algorithm without data. That is why HR should be prepared to collect and analyse data responsibly. Data-driven HR work takes place far too rarely, even though it would be in HR's own interest as a way of demonstrating the added value of their work.
Fuchs: Of course, company data is also interesting for research, and we as scientists are happy to help with the correct setup of samples and analyses. Good data on employee productivity is rare in research, although there is huge potential here. Of course, you also have to talk about things like data protection and publication rights, but these are absolutely solvable questions that are all too often used as knockout arguments.

Wald: With your FAIR project you are starting at exactly these points. Can you explain the objectives and the chosen approach of this project in more detail?
Fuchs: The aim of the FAIR project is to develop fair and non-discriminatory algorithms that can be used to evaluate CV information such as education or work experience. Our chair, led by Professor Pia Pinger at the University of Cologne, is working together with CASE to make this happen. Deutsche Telekom, Simon-Kucher & Partners, Studitemps and Viega are taking part as associated partners. The project is funded by the state of North Rhine-Westphalia and the European Union, and runs from January 2020 to December 2021.
Seegers: FAIR is a practical project. The aim is not a theoretically driven exploration of the possible problems and advantages of algorithms; rather, data is to be collected and algorithms developed and tested in a concrete context. In my opinion, the many articles and guidelines on algorithms neglect this kind of practical application and the concrete, often critical, discussion around it. Such work is less visible to the public and much more time-consuming, but thanks to the generous funding we now have the opportunity to develop, evaluate and optimise. This way we come to understand better what works and what doesn't, and can make a concrete contribution.

[Photo: Larissa Fuchs and Philipp Seegers]
Wald: This sounds very interesting. I wish you lots of success with this and CASE’s other projects!
Fuchs: Thank you for the interview. Should any of your readers have questions or want to try out methods such as the FAIR Index, we would of course be happy to hear from them.
Seegers: Thank you very much, dear Prof. Wald.

Introduction of CASE: CASE grew out of academic research at the University of Bonn on the comparability of educational qualifications, based on large data sets and scientific methods. The development of the CASE algorithm was funded by the Federal Ministry of Economics and recognised with an award by the European Union. Studies have shown that, in contrast to absolute grades, the CASE Score measures a candidate's actual performance in the course of their studies and thereby supports valid predictions of future work performance. More information can be found at www.candidate-select.co.uk
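The core idea here is contextualization: the same absolute grade means different things depending on how strictly a degree program grades. A minimal sketch of that idea, standardizing a grade against the (hypothetical) grade distribution of the candidate's own program; the actual CASE algorithm is more involved and is not reproduced here:

```python
# Contextualize an absolute grade against the grade distribution
# of the candidate's own degree program (German scale: 1.0 best).
# Illustration only; the actual CASE algorithm is more involved.
from statistics import mean, stdev

def contextual_score(grade: float, program_grades: list[float]) -> float:
    """Positive = better than the program average, in std-dev units."""
    return (mean(program_grades) - grade) / stdev(program_grades)

# Hypothetical distributions: a leniently and a strictly graded program.
lenient = [1.3, 1.5, 1.7, 1.7, 2.0, 2.0, 2.3]
strict = [2.0, 2.3, 2.7, 2.7, 3.0, 3.3, 3.7]

# The same absolute grade of 2.0 means very different things:
print(f"2.0 in lenient program: {contextual_score(2.0, lenient):+.2f}")
print(f"2.0 in strict program: {contextual_score(2.0, strict):+.2f}")
```

On the German scale (where 1.0 is best), the same 2.0 is below average in the leniently graded program but well above average in the strictly graded one, which is exactly the distinction an absolute grade hides.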

CASE Online Summit – Practice Meets Research: Larissa Fuchs and Dr. Philipp Seegers will present the FAIR project during the next CASE Online Summit on April 15. This online event is free to attend, and under the headline “Practice Meets Research” various researchers will talk about their work and its implications for HR practice. The full agenda can be found here: https://www.candidate-select.de/uploads/files/candidate-select.de/files/Agenda_Practice-Meets-Research.pdf; registration is possible via https://case.clickmeeting.com/practice-meets-research/register
