Google Employees Demand Whistleblower Protections After Retaliation Against AI Researchers
This post was originally published as part of The Dissenter newsletter.
“Is there anyone working on regulation protecting Ethical AI researchers, similar to whistleblower protection?” Timnit Gebru asked on Twitter. “Because with the amount of censorship and intimidation that goes on towards people in specific groups, how does anyone trust any real research in this area can take place?”
Gebru, a Black woman, co-led Google’s “Ethical Artificial Intelligence (AI)” team. On December 2, two days after she posted this message, Google fired her. The firing came in response to an ethics research paper that reportedly included criticism of the environmental impact of AI models.
AI researcher Dr. Margaret Mitchell, who also helped lead the “Ethical AI” team, was fired on March 5 after Google locked her “out of her work account for five anxious weeks.” The retaliation came after she shared a document with Google’s public relations department that questioned the company’s stated reasons for terminating Gebru.
Google Walkout For Real Change (GWRC), a group of Google employees, has seized the moment to demand that Congress and state legislatures strengthen whistleblower protections for tech employees like Gebru and Mitchell.
“The existing legal infrastructure for whistleblowing at corporations developing technologies is wholly insufficient,” GWRC declares. “Researchers and other tech workers need protections, which allow them to call out harmful technology when they see it, and whistleblower protection can be a powerful tool for guarding against the worst abuses of the private entities which create these technologies.”
As UC Berkeley Center for Law and Technology co-director Sonia Katyal told VentureBeat’s Khari Johnson in December, “What we should be concerned about is a world where all of the most talented researchers like [Gebru] get hired at these places and then effectively muzzled from speaking. And when that happens, whistleblower protections become essential.”
Johnson noted Katyal is “concerned about a clash between the rights of a business to not disclose information about an algorithm and the civil rights of an individual to live in a world free of discrimination. This will increasingly become a problem, she warned, as government agencies take data or AI service contracts from private companies.”
Several examples show the need for whistleblower protections for AI researchers who challenge corporations from within their industries. As Johnson highlighted, “A fall 2019 study in Nature found that an algorithm used in hospitals may have been involved in the discrimination against millions of Black people in the United States. A more recent story reveals how an algorithm prevented Black people from receiving kidney transplants.”
“Drs. Mitchell and Gebru also built one of the most diverse teams in Google Research, people who could connect their lived experiences to practices of power, subjection, and domination which get encoded into AI and other data-driven systems,” according to GWRC.
The group maintains Gebru and Mitchell were “working in the public interest” and spent time critically examining the “benefits and risks of powerful AI systems — especially those whose potential harms outside of the Google workplace were likely to be overlooked or minimized in the name of profit or efficiency.” (Both also complained about workplace conditions to the human resources department.)
“Google workers have been organizing from within, raising inextricably linked issues of toxic workplace conditions and unethical and harmful tech to leadership and to the public,” GWRC adds. “With the firing of Drs. Mitchell and Gebru, Google has made it clear that they believe they are powerful enough to withstand the public backlash and are not concerned with publicly damaging their employees’ careers and mental health.”
“They have also shown that they are willing to crack down hard on anyone who would perturb the company’s quest for growth and profit.”
The group concludes, “Google is not committed to making itself better and has to be held accountable by organized workers with the unwavering support of social movements, civil society, and the research community beyond.”
In addition to a call for whistleblower protections, GWRC urges academic conferences to “decline sponsorship from organizations, such as Google, engaged in retaliatory actions towards researchers.”
The Association for Computing Machinery’s Fairness, Accountability, and Transparency (FAccT) conference “suspended” Google’s sponsorship on March 2.
GWRC encourages “potential recruits to Google” to help “break the tech talent pipeline” and refuse to work at Google as it creates “harmful and unethical technology.”
“We call on universities, especially those that claim to be human-centered, such as Stanford’s Human Centered AI Institute and MIT’s Schwarzman College of Computing to publicly reject Google funding,” they further suggest.
Gebru previously helped expose the “gender and skin-type bias in commercial AI facial recognition systems,” work that garnered widespread praise.
Protocol reported that, along with raising concerns about the impact of technology on marginalized communities, Gebru supported activism at Google and spoke up for employees who faced retaliation for engaging in protests.
The same day Google terminated Gebru, the National Labor Relations Board accused the corporation of spying on employees involved in protest organizing.
It is hard to see how Google’s retaliation against conscientious employees who command respect and influence in the world of technology will do anything but prolong a backlash that has already spanned several months.
Ultimately, if Google cannot tolerate employees on its “Ethical AI” team who raise objections, it should stop using the team as cover for its business. Simply rename the team “AI” and lean into the corporate culture with the mantra, “Don’t Be Ethical.”