SIA’s “Collaboration in the Gig Economy” event, as one would expect, featured a lot of new workforce platforms. Nearly all of them promoted the inclusion of some AI algorithm in their systems. For the most part, AI and machine learning in the staffing industry have focused on automating high-volume tasks such as resume matching and candidate screening. But is AI accurate? How objectively does it screen? Does it eliminate the biases that may exist, however subconsciously, in human sourcers? A 2019 investigation into HireVue exposed some of the pitfalls staffing leaders could encounter by placing too much blind and premature faith in the nascent technology. It raises the question, “Is AI more Rosalind Franklin or more Victor Frankenstein?”
Automatic for the People
In 2017, Anthony Levandowski, the engineer often credited with masterminding Google’s fleet of autonomous vehicles, filed papers to form a nonprofit religion. He called it Way of the Future. This Church of Digital Divinity, or whatever it may be, doesn’t just seek to legitimize the role of machines in the modern age; it aims to gather a following of acolytes to worship a deity born from artificial intelligence. Clever tax dodge, bold promotional stunt, or an alarming augury of digital dystopia? Who knows at this point?
To the technophobic, such developments strike chords of horror. Others find in them excitement, a sense of exploration and adventure. But even for folks in the middle, there persists a kind of suppressed worry -- “what ifs” that stalk the shadows between conjecture and catastrophe. Some experts in the field of AI say the advent of a computer god could be feasible, at least in a functional sense. Vince Lynch, the CEO of IV.AI, built a bot that generated original yet lucid biblical passages.
Sure, there’s a fair amount of entertainment value in the notion. The fear comes when people draw out the possible conclusions. What if AI God began telling us what we could and couldn’t eat? What if AI God began dictating the rules of governance or enforcing uniform fashion standards or demanding conformity with its iteration of social norms? What if those norms prevented people from exercising their rights or expressing their deeply held beliefs? What if AI God commanded prejudice instead of decency? What if it controlled our parenting choices? What might an electric prophet’s scripture contain? What would all those ones and zeros really spell out?
AI has yet to attain deity status, but it is the shiny new object that’s captivated industry leaders, enthralled companies seeking a competitive edge, and somehow persuaded staffing professionals that it holds the keys to some mysterious utopia.
AI Potentially Threatens Equal Opportunity
Patricia Barnes, an attorney and author on the subject of employment discrimination, posted a fascinating article on Forbes about AI as a threat to equal opportunity. For years, I’ve also worried about the implications that a rushed embrace of AI could have for diversity and inclusion efforts. Barnes’ article concentrates on the use of machine learning and AI for pre-employment assessments, with the risks illustrated by an investigation into a recruitment technology developer called HireVue.
The Electronic Privacy Information Center, a public interest research center based in Washington, D.C., recently asked the Federal Trade Commission to investigate HireVue, a recruiting company based in Utah that purports to evaluate a job applicant’s qualifications through online “video interviews” and/or “game-based challenges.”
According to its website, HireVue has more than 700 customers worldwide, including over one-third of the Fortune 100 and such leading brands as Unilever, Hilton, JP Morgan Chase, Delta Air Lines, Vodafone, Carnival Cruise Line, and Goldman Sachs. The company states it has hosted more than ten million on-demand interviews and one million assessments.
The EPIC complaint follows a wave of lawsuits in recent years charging that employers are using software algorithms to discriminate against older workers by targeting internet job advertisements exclusively to younger workers.
How does HireVue assess the traits, behaviors, skills, and potential cultural fit of candidates? Hard to say, really. As with every other tech firm, the algorithm is “a secret.” HireVue purports to use video games and interviews to glean key characteristics of applicants from intonation, inflection, emotions, and “other data points,” creating a set of predictive algorithms that compare candidates with a client’s top performers. Despite the vague allusions to evaluation criteria, the process already seems inherently flawed, doesn’t it? What if a company’s top performer happens to be a misogynistic, racist, homophobic 21-year-old assault weapons enthusiast with a penchant for collecting Nazi memorabilia? Sure, he codes as if his hands were on fire, but if his other qualities become benchmarks in the algorithm, that would theoretically rule out a very large and less toxic swath of possible candidates, right?
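Since HireVue’s actual model is proprietary, here’s a minimal sketch in Python -- with entirely hypothetical features and numbers -- of how benchmarking candidates against incumbent “top performers” can quietly turn the incumbents’ incidental traits into screening criteria:

```python
import numpy as np

# Hypothetical candidate features. Only the first column (coding skill)
# is job-relevant; the rest are incidental traits of the incumbents
# that a similarity-based model cannot tell apart from merit.
# Columns: [coding_skill, speech_pace, gamer_reflexes, youth]
top_performers = np.array([
    [0.95, 0.90, 0.92, 0.85],
    [0.91, 0.88, 0.95, 0.80],
])
benchmark = top_performers.mean(axis=0)  # the "ideal" profile

candidates = {
    "resembles_incumbents": np.array([0.80, 0.85, 0.90, 0.82]),
    "stronger_coder":       np.array([0.97, 0.55, 0.10, 0.20]),
}

def score(candidate, benchmark):
    """Cosine similarity -- a stand-in for a proprietary scoring model."""
    return float(candidate @ benchmark /
                 (np.linalg.norm(candidate) * np.linalg.norm(benchmark)))

for name, features in candidates.items():
    print(f"{name}: {score(features, benchmark):.3f}")
# The weaker coder who mirrors the incumbents' incidental traits
# outranks the stronger coder who doesn't.
```

Nothing in the sketch knows which column is merit; it only rewards resemblance to the people already in the seat, which is precisely the failure mode Barnes describes.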
Barnes noted how the video game aspect of HireVue’s assessment presents an immediate bias against women, minorities, and older talent. “The EPIC complaint focuses upon potential bias against women and minorities but the concept of using video game-based assessments seems particularly suspect with respect to older workers,” she explained. “The average age of video game players is in the mid-30s. Many older workers do not play video games and it is likely that fewer older women than men have done so. Even if they have, their response would likely be slower than a skilled young gamer.”
Another issue Barnes cited was the appearance-based rankings used by HireVue, such as “facial action units,” which make up 29% of the score: “But how does HireVue’s algorithm assess overweight candidates, those who suffer from depression or non-native English speakers? What about candidates with autism who tend to look at people’s mouths and avoid direct eye contact?”
Human Bias Becomes AI’s Bias
“Robotic artificial intelligence platforms that are increasingly replacing human decision makers are inherently racist and sexist, experts have warned,” wrote Henry Bodkin in the Telegraph, citing a critical study from the Foundation for Responsible Robotics.
At the end of the day, algorithms amount to an array of correlations. And correlation alone, as an elementary tenet of research, does not imply causality -- chasing it blindly can lead to false positives and negatives. For example, data would show that weight gain correlates with the presence of food. That doesn’t mean we should stop eating food.
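A toy simulation makes the point. In this sketch (synthetic data, hypothetical variable names), a protected trait correlates with historical hiring outcomes only because past decisions were biased, and a model that screens on that correlation errs systematically against actual performance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)            # what actually drives performance
group = rng.integers(0, 2, size=n)    # protected trait, independent of skill
# Historical outcomes were biased in favor of group 1:
hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0
performs_well = skill > 0.5           # the ground truth we care about

# The correlation an algorithm trained on history will latch onto:
print("corr(group, hired):",
      round(np.corrcoef(group, hired.astype(float))[0, 1], 2))

# Screening on the correlated trait "works" against the biased labels
# but fails badly against real performance:
predicted = group == 1
false_pos = np.mean(predicted & ~performs_well)
false_neg = np.mean(~predicted & performs_well)
print(f"false positives: {false_pos:.1%}, false negatives: {false_neg:.1%}")
```

The trait predicts the biased label nicely, yet much of the screened-in pool underperforms and many strong candidates are screened out -- correlation masquerading as insight.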
This issue was the subject of a TED Talk by Joy Buolamwini, who discovered similar problems with facial recognition algorithms.
MIT grad student Joy Buolamwini was working with facial analysis software when she noticed a problem: the software didn’t detect her face -- because the people who coded the algorithm hadn't taught it to identify a broad range of skin tones and facial structures. Now she’s on a mission to fight bias in machine learning, a phenomenon she calls the “coded gaze.” It’s an eye-opening talk about the need for accountability in coding ... as algorithms take over more and more aspects of our lives.
Here’s another example. One system that analyzes candidates’ social media data flagged a profile picture of a same-sex couple kissing as “sexually explicit material.” The photo was not lewd or meant to be provocative. The technology simply couldn’t take into account a committed, non-traditional relationship and reconcile the image as a normal expression of love -- not “graphic content.” And there’s more.
- A program designed to shortlist university medical candidates selected against women, Black applicants, and other ethnic minorities.
- Boston University discovered bias in AI algorithms by training a machine to analyze text collected from Google News. They posed this analogy to the computer: “Man is to computer programmer as woman is to x.” The AI responded, “Homemaker.” (A sketch reproducing this finding appears after this list.)
- Another U.S.-built platform studied internet images to shape its contextual learning systems. When shown a picture of a man in the kitchen, the AI insisted that the individual was a woman.
- A more chilling example came from a computer program used by criminal courts to assess the risk of recidivism. The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system “was much more prone to mistakenly label black defendants as likely to reoffend.” That revelation arose at a contentious moment for Brits, who are debating regulations to oversee “killer robots” able to identify, target, and kill without human control.
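The Boston University result is easy to reproduce. Here’s a minimal sketch using gensim’s pre-trained Google News word2vec vectors (assuming the `word2vec-google-news-300` package; the download is large, and the exact top completion can vary with the vector set):

```python
import gensim.downloader as api

# Pre-trained word2vec vectors trained on Google News (~1.6 GB download).
model = api.load("word2vec-google-news-300")

# "Man is to computer programmer as woman is to x,"
# solved by vector arithmetic: computer_programmer - man + woman
result = model.most_similar(positive=["woman", "computer_programmer"],
                            negative=["man"], topn=3)
for word, similarity in result:
    print(f"{word}: {similarity:.3f}")
# The reported top answer, "homemaker," shows the embeddings absorbed
# gender stereotypes present in the news text they were trained on.
```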
AI Needs Diversity to Understand Diversity
And this is where Elon Musk’s tireless warnings about regulating AI come into play. Musk has no designs to eradicate AI or hamper its progress. He is building groundbreaking artificial intelligence systems to evolve Tesla’s autonomous vehicles and SpaceX’s rocket engineering. He understands the significance of AI and the benefits it brings. He also recognizes that what we teach our children throughout their formative years (in this case, technological developments) influences their behaviors, thoughts, perspectives, demeanors, and actions. In this regard, AI transcends artifice and becomes Dynamic Digital Intelligence (I just made that up, so I’m claiming the copyright now). As Musk famously quipped about what an AI learns from its inputs, “Just feed it The Godfather movies as input. What’s the worst that could happen?”
I agree. I am no Luddite. I believe fully in the tremendous benefits technology will bring to this world. Humans have sought to automate manual processes for as long as they’ve walked the Earth. The wheel. Carriages. Ships. Looms. Assembly lines. Garage door openers. Dishwashing machines. Traffic signals (yes, people used to regulate traffic with little handheld placards). We clamor to attain the latest gadgets, and we revel in the ease that automation accords us. We’re not afraid of machines. We’re afraid of having less, whether that’s a matter of income, material possessions, or the philosophical security in our identities as apex predators.
If we capitalize on the efficiencies and competitive advantages of automation, we can find new ways to support human talent -- not displace it. In countless ways, the rise of automation may actually boost productivity and generate additional jobs elsewhere in the economy. But here’s the rub: machine learning is simply a mirror that reflects the knowledge, attitudes, and behaviors of its first teachers. If an exclusive class of individuals serves as the model, machines will learn to discriminate against and exclude entire groups. Which brings me back to the HireVue debacle. The essence of the argument has little to do with doubting AI’s ability to solve the persistent challenges we face. It’s not a question of “can it?” It’s a much simpler question of “will it?”
We believe in the hope and future prosperity that machines can manifest. But like Musk, we also recognize the need to approach these advances carefully while urging “responsible parenting” in overseeing the development of our digital children. If the worry is having less, then ignoring the threats of biases will certainly make that nightmare a reality.
For more insight on the topic of ethics in AI, check out our podcast on the subject.
Photo by Alex Knight on Unsplash