Bias in human resources has consequences. When one candidate is hired or promoted, other applicants are often simply out of luck. The decisions made in HR have a lasting impact on the complexion of a company’s workforce, and the trajectory of its employees’ careers. Biases of race, class, sex and gender have all contributed to corporate leadership that is mostly white and mostly male.
Artificial intelligence-driven human resources tools have the potential to change this – and to entrench it. Algorithms can disregard the unfamiliar spelling of a candidate’s name, the ethnic or religious affiliation of the educational institution where they studied, and even the gender pronouns they use. But AI still faces a major challenge – it learns from the data of the past, and all human-made HR data sets incorporate the biases of the processes that were used to make them in the first place.
“Using AI in human resources offers an opportunity to improve the status quo, but also has the potential to amplify and exacerbate the problems of the status quo. Both are true, and this paradox can be difficult to understand,” says Matissa Hollister, an Assistant Professor of Organizational Behaviour in McGill’s Desautels Faculty of Management.
“There is a lot of excitement about the use of AI in HR because of its potential to improve an imperfect system. But that is also the source of fear for those who worry about its impact. Machine learning learns from the complexity of the real world. But there is no simple way to tell the system that some aspects of that complexity are good, and some result from unconscious biases. The better approach is for the tool creator to understand the potential sources of bias in the specific situation, and design a system that aims to avoid or even correct these problems.”
In 2019-2020, Hollister began a Resident Fellowship with the AI team at The Center for the Fourth Industrial Revolution (C4IR) in San Francisco. C4IR is part of the World Economic Forum (WEF), and seeks to develop new approaches to technology governance through the collaboration of fellows from government, business, academia and civil society.
The notion of a fourth industrial revolution frames digital technologies as the latest force to profoundly reshape our economy. Steam power mechanized it, electricity scaled it, and electronics automated it. But digital technologies could be even more transformative, and C4IR is seeking to foster responsible adoption of technologies in four key areas: AI and machine learning; data policy; blockchain and digital currency; and the Internet of Things.
While at C4IR, Hollister authored a WEF white paper on the state of AI in HR, and led the creation of a toolkit that provides practical guidance for HR professionals on how to use the technology responsibly. The toolkit includes an explainer of how machine learning algorithms work, and the various ways that AI can amplify existing biases. It also includes two checklists designed to help organizations assess specific AI tools before they adopt them, and critically evaluate the risks they present. It aims to equip HR professionals with practical knowledge to help integrate AI responsibly.
“There are cool things you can do with AI to address inequality, but it is not an AI system working autonomously that will do this. It is a human identifying the source of a problem, and a way it can be fixed. Every AI system is different, and they reflect the assumptions, ideas and innovativeness of the person who designed it.”
Hollister’s research is one part of a wider effort at C4IR to address potential negative consequences of new technologies by improving governance with norms and principles that can be applied in practical ways.
“There have been many declarations of AI principles by various organizations that try to define ethical AI,” says Hollister.
“In almost all cases, these are very high level: AI should not be biased, it should be transparent, and it should be explainable. It should respect people’s privacy. These are very high-minded principles, but it is often a little unclear how they can be operationalized.”
HR makes a compelling use case because it has a direct effect on the lives of adults. But people of all ages are impacted by AI, and with the breadth of research happening at C4IR, Hollister saw an opportunity to get McGill students involved in innovative projects like the WEF’s first annual Smart Toy Awards. Launched in May 2021, these awards are judged by a panel of experts, and recognize toys that use AI responsibly to create innovative and healthy play experiences for children. Yet there is currently little governance on ethical and responsible AI for children and youth.
“A lot of kids today don’t really play with toys – they are satisfied playing games on an iPad,” says Oliver Leiriao, BCom student and a 2020 Desautels Integrated Management Student Fellow. Leiriao worked with C4IR’s Generation AI team as part of the research component of his Fellowship under the supervision of Dr. Hollister.
“So how do you get kids to engage with physical toys? Toys can be made more accessible, and tolerant of shorter attention spans. They can adapt to an individual by learning about the child using it.”
Privacy, cyber security and bias are all significant risks that must be addressed, but AI’s potential to take toys in entirely new directions is undeniable.
“One example is a smart speaker that helps kids with speech impediments,” says Leiriao.
“It is a little bit like Amazon’s Alexa, but without as much data collection. It listens to a child’s speech, and asks them to repeat sounds and practice their pronunciation. One of the most effective ways to get rid of a speech impediment is to practice pronunciation over and over again. So, this is like having a voice coach with you 24/7 that can help you through casual conversation – it can literally change and improve a kid’s life.”
Article courtesy of The McGill Reporter