Many researchers are interested in more than one area of science, and social scientists are often interested in a broad spectrum of the human condition. In my case, I started off at San Francisco State studying clinical psychology (back in the day, when we could still do that at the undergraduate level). Over time I came to realize that my interest lay in studying and changing the social conditions that prompt and exacerbate the suffering we see in individuals and society. Much of this was prompted by a heavy dose of social psychology classes, which deeply imprinted on me the impact of the situation on individual behavior and attitudes. So after graduating and working as a teacher and job coach for at-risk 11th and 12th graders, I knew it was time to pursue graduate education in social psychology. My first areas of interest were prejudice and discrimination.
My master's degree and Ph.D. are from Howard University in Washington, DC. While there, I conducted two pieces of research relevant to these areas of concern. The first used a type of manipulation that has since become commonplace in the research community, particularly in behavioral economics.
Race and Severity of Sentencing
I was interested in the intersection of race and gender in the severity of sentencing for white collar crimes. Accordingly, I concocted plausible scenarios in which white collar crimes took place, then manipulated (changed) the names and sexes of the criminals. So, De’andre West became John White. The outcomes of the analyses matched the type of discrimination we see in a variety of other environments, in particular human resources selection, recruitment, training, and compensation. In other words, the person with the African-American-sounding name was given a harsher sentence than the individuals perceived as white. Remember, we never told participants the race of the sentenced; they inferred it.
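To make the comparison concrete, here is a minimal sketch of the kind of analysis such a vignette study involves: a permutation test on sentencing-severity ratings across the two name conditions. Every number below is invented for illustration; these are not the study's actual data.

```python
import random
import statistics

# Hypothetical sentencing-severity ratings (months) from two vignette
# conditions; the names mirror the manipulation, the numbers are
# illustrative stand-ins, not the study's data.
deandre_west = [36, 42, 30, 48, 40, 38, 44, 34]
john_white = [24, 30, 28, 22, 32, 26, 30, 24]

observed = statistics.mean(deandre_west) - statistics.mean(john_white)

# Permutation test: shuffle the condition labels and count how often a
# gap at least this large arises by chance alone.
random.seed(0)
pooled = deandre_west + john_white
n = len(deandre_west)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if diff >= observed:
        extreme += 1
p_value = extreme / trials

print(f"mean gap: {observed:.1f} months, p ~ {p_value:.4f}")
```

If the observed gap rarely appears under shuffled labels, the name manipulation, not chance, is the likely driver.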
AI and Ethnicity/Nationality Prediction
When I read last month that researchers affiliated with Stony Brook University had launched a free app (http://www.name-prism.com) that uses algorithms to guess ethnicity and nationality from a name with about 80% accuracy, I became interested. Computer scientist Steven Skiena, who worked on the project, said you could use it to track hiring tendencies across swaths of industry. “The purpose of this tool is to identify and prevent discrimination,” said Skiena.
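For readers curious how a name-based guesser might work under the hood, here is a toy character-bigram naive Bayes classifier. The training names, the two labels, and everything else in this sketch are invented for illustration; this is not Name-Prism's actual model or data.

```python
import math
from collections import Counter

# Tiny invented training set: a handful of surnames per label.
TRAIN = {
    "british": ["smith", "white", "west", "brown", "taylor"],
    "chinese": ["wang", "zhang", "chen", "huang", "zhao"],
}

def bigrams(name):
    # Pad with start/end markers so prefixes and suffixes count too.
    padded = f"^{name.lower()}$"
    return [padded[i:i + 2] for i in range(len(padded) - 1)]

# Per-label bigram frequency tables.
counts = {label: Counter(g for n in names for g in bigrams(n))
          for label, names in TRAIN.items()}
vocab = {g for c in counts.values() for g in c}

def score(name, label):
    # Laplace-smoothed log-likelihood of the name's bigrams.
    c = counts[label]
    total = sum(c.values())
    return sum(math.log((c[g] + 1) / (total + len(vocab)))
               for g in bigrams(name))

def guess(name):
    return max(counts, key=lambda label: score(name, label))

print(guess("zheng"), guess("smithson"))
```

Real systems train far richer models on millions of labeled names, but the principle is the same: short character sequences carry a surprising amount of signal about a name's likely origin.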
While in theory that might be the case, and the good folks associated with the project have put some safety mechanisms in place, limiting scans to 1,000 names per hour, research suggests that a single open position usually attracts far fewer resumes than that, so scanning recruitment data for “minority names” fits comfortably within the limit. Let me illustrate how this works without the app, through a small piece of research I conducted while in school.
Selection and Name Guessing
An Asian colleague of mine from San Francisco was under the impression that she had never been discriminated against. While that might have been the case in the famously diverse SF/Bay Area, the question remained open in the Washington, DC metropolitan area. So, while there, I took her resume, found 100 applicable jobs, and applied to each of them for her twice: once with my last name, and once with her (Chinese) last name. I gave two different callback numbers and waited… Sadly, significantly more calls came back for the white-sounding last name than for the Asian one. This is a common phenomenon, in which the candidate never knows they have been passed over; regretfully, it continues to be documented by a number of agencies that conduct similar research.
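The callback gap in a matched-resume audit like this can be checked with a simple two-proportion z-test. The counts below are hypothetical stand-ins, since the study's actual callback numbers are not reported here.

```python
import math

# Hypothetical callback counts: same resume, two surnames, 100
# applications each. Illustrative numbers, not the study's data.
white_callbacks, white_apps = 15, 100
asian_callbacks, asian_apps = 6, 100

p1 = white_callbacks / white_apps
p2 = asian_callbacks / asian_apps
pooled = (white_callbacks + asian_callbacks) / (white_apps + asian_apps)

# Standard error under the null hypothesis of equal callback rates.
se = math.sqrt(pooled * (1 - pooled) * (1 / white_apps + 1 / asian_apps))
z = (p1 - p2) / se

# One-sided p-value from the normal CDF.
p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(f"callback gap: {p1 - p2:.0%}, z = {z:.2f}, p ~ {p_value:.3f}")
```

With these illustrative counts the gap clears the conventional significance threshold, which is exactly the pattern large audit studies of name-based hiring discrimination keep finding.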
Best Intentions, HR Best Practices and AI
So while the goals of the AI in this case are admirable, a fully blind recruitment process that measures competencies using validated tools is probably far more effective and time-efficient than the traditional resume-in/resume-out approach. Importantly, it protects us from the kind of unconscious bias that manifests itself in choices reflecting stereotypes and prejudice. It is regrettable that we see a litany of employment discrimination cases on a weekly basis (check Google News for “Employment Discrimination”). So while the intentions of some AI tools are valiant, inappropriate use can have significantly negative consequences that manifest at the societal level.
More regarding this coming down the turnpike, stay tuned!