Document Type

Article

Publication Date

10-8-2014

Abstract

Employers often struggle to identify qualified applicants, particularly when they receive hundreds of applications for a single job opening. In an effort to increase efficiency and improve the process, many have begun employing new tools to sift through these applications, looking for signals that a candidate is “the best fit.” Some companies use tools that offer algorithmic assessments of workforce data to identify the variables that lead to stronger employee performance or to high employee attrition rates, while others turn to third-party ranking services to identify the top applicants in a labor pool. Still others eschew automated systems but rely heavily on publicly available data to assess candidates beyond their applications; for example, some HR managers turn to LinkedIn to determine whether a candidate knows other employees or to glean additional information about them or their networks. Although most companies do not intentionally engage in discriminatory hiring practices (particularly on the basis of protected classes), their reliance on automated systems, algorithms, and existing networks systematically benefits some candidates at the expense of others, often without employers even recognizing the biases of such mechanisms. The intersection of hiring practices and the Big Data phenomenon has not produced inherently new challenges so much as it has given new form to longstanding concerns about privacy, fairness, transparency, accuracy, and inequality. While this paper addresses those issues under the rubric of discrimination, it does not hinge solely on the legal definitions of discrimination under current federal anti-discrimination law. Rather, it describes a number of areas where inherent biases intersect with, or come into conflict with, socio-cultural notions of fairness.

Comments

Originally published by Data & Society Research Institute
