Algorithmic hiring is the use of tools based on artificial intelligence (AI) to find and select job candidates. Like other applications of AI, it is vulnerable to perpetuating discrimination. Considering technological, legal, and ethical aspects, the EU-funded FINDHR project will facilitate the prevention, detection, and management of discrimination in algorithmic hiring and closely related areas involving human recommendation.
Sociological research on inequality has increasingly moved beyond examining inequalities as they presumably exist to exploring the generic narrative processes that perpetuate them. Unfortunately, this research remains concentrated on either individual narratives or ideological grand narratives, ignoring the fact that the work narratives do, including the production and structuring of inequality, occurs at multiple levels: cultural, structural, organizational, and personal, and never exclusively at just one of these. In this study, we use Somali origin narratives to describe conceptually the ways in which narratives produced at different personal and societal levels, cultural, institutional, and organizational, dialectically structure the generic processes that produce and perpetuate social inequality.
Welcoming emotions and identifying needs to support
adherence to health measures. This tool is built from proven resources developed by physicians working on an empathic communication approach to support patients with serious illnesses and their caregivers. These resources are based on common key principles, briefly recalled in the first part of this tool.
To what extent can learning analytics influence instructors in their assessment of students? What discriminatory, but also inequality-reducing, effects do algorithms have? In this contribution, the authors present the potential and the risks of learning analytics and analyze the results of a conjoint experiment.
While classifying AI systems used at work as high-risk is appropriate, the Proposed Regulation is far from sufficient to protect workers adequately.
How European Union non-discrimination laws are interpreted and enforced varies by context and by state definitions of key terms, such as “gender” or “religion.” Non-discrimination laws become even more…
Digital learning platforms make it possible to analyze increasing amounts of data about learners, learning content, and the learning situation. This algorithmic analysis is called learning analytics. It enables individualized learning processes as well as early detection of learning difficulties. However, learning analytics also carries several drawbacks.
I am an AI researcher, and I’m worried about some of the societal impacts that we’re already seeing. In particular, these five things scare me about AI: 1. Algorithms are often implemented without ways to address mistakes. 2. AI makes it easier not to feel responsible. 3. AI encodes and magnifies bias. 4. Optimizing metrics above all else leads to negative outcomes. 5. There is no accountability for big tech companies.
Zegami, an Oxford-based visual data management start-up, has just launched a new tool that will enable employers to predict which employees are most likely to resign from their jobs.
Artificial intelligence (AI) and face recognition technology are being used for the first time in job interviews in the UK to identify the best candidates.