Employees in Google's artificial intelligence research division have delivered a set of demands to management after the recent ousting of AI ethicist Timnit Gebru from the company.
In the memo, which was sent Wednesday and obtained by Business Insider, members of Google's Ethical AI team called on leadership to offer Gebru her position back "at a higher level," among demands that also include a full explanation of the events that led to the researcher's dismissal and the removal of a key manager from the reporting chain.
"Google's short-sighted decision to fire and retaliate against a core member of the Ethical AI team makes it clear that we need swift and structural changes if this work is to continue, and if the legitimacy of the field as a whole is to persevere," they wrote in the letter, which was sent to top brass that includes AI chief Jeff Dean and Alphabet CEO Sundar Pichai.
The employees ask that Megan Kacholia, the manager who allegedly sent Gebru an email informing her she had been terminated, be removed from the reporting chain. "We see removing Megan from our management chain as imperative," the memo reads, accusing Kacholia of going over the head of Gebru's direct manager. "We have lost trust in her as a leader."
The letter also calls for "a public commitment to academic integrity at Google," including a statement committing to allow research into technological harms even when the findings go against Google's business interests.
The group also asks that employees who have advocated for Gebru not be "disciplined, reassigned, or fired."
"We know from Google's past retaliation that they often proceed to harm and marginalize workers without outfight firing them, especially those who have advocated for their colleagues."
Two weeks ago, Timnit Gebru, a co-lead of Google's ethical-AI team, announced she had been fired by Google following disagreements over a research paper. Just before her exit, Gebru had also sent an email to an employee resource group complaining about her experience with Google management, and accusing the company of "silencing marginalized voices." Google used Gebru's email as grounds to expedite her dismissal, she said.
The research paper, which Gebru co-authored with four other Googlers, addressed the risks of language models that relate to Google's business. Unhappy with the research being published, management asked her to retract her name from the paper, a request she pushed back on.
Gebru was also one of the few Black women inside Google's AI division.
The events have caused a rift inside Google, raising questions about AI ethics and academic freedom within the company, and about its commitment to diversity.
CEO Sundar Pichai emailed employees last week to announce there would be an internal review into the events surrounding Gebru's exit.
Jeffrey Dean, the head of Google's AI division, also canceled a year-end celebration that was scheduled for this week.
"We had intended to gather at our All Hands next week to celebrate the year and to preview our 2021 strategy, but while there's lots to be proud of as an org and what we've accomplished, a celebration doesn't seem appropriate at this time," he wrote in a memo to employees last Friday.
Here's the memo employees sent, in full:
The Future of Ethical AI at Google Research
Research into the ethical, social, and political impacts of AI systems is of critical importance. This research informs public understanding and assessments of the safety, efficacy, consequences, and implications of AI technologies. Informing the public about key nuances of AI systems is especially important given that the harms and errors of these systems are often hard to detect, even as they increasingly shape the opportunities and losses in individuals' livelihoods.
Companies like Google have much to gain from the continued proliferation of AI. Therefore, this research must be understood as constructively critical of Google's products and research output, such that we can realistically assess potential societal impacts. This research must be able to contest the company's short-term interests and immediate revenue agendas, as well as to investigate, with similar ethical motives, AI that is deployed by Google's competitors.
This work is what the Ethical AI team is committed to. Indeed, it is core to our job description. The Ethical AI team is a critical part of the Responsible AI ecosystem for Google and has been instrumental in helping to enact Google's commitments to its AI Principles. The team frequently advises on product and policy. Most critically, the team produces novel research on many central themes concerning the potential risks of AI systems, including model reporting, perturbation analyses, and data issues. This body of work has been consistently recognized as making field-advancing contributions to theory and practice. Our team has had enormous impact, influencing product, internal and external policy, and international regulation.
Google's short-sighted decision to fire and retaliate against a core member of the Ethical AI team makes it clear that we need swift and structural changes if this work is to continue, and if the legitimacy of the field as a whole is to persevere.
Over the past two weeks, we have heard Google leadership say it is committed to continuing the important work of diversity, equity, and inclusion within the Research PA and to supporting the Ethical AI team. But we must remember that words that are not paired with action are virtue signaling; they are damaging, evasive, defensive, and demonstrate leadership's inability to understand how our organization is part of the problem. In this document, we want to outline what it will take to rebuild trust with our team and to build a research environment in which we can continue to do this essential work.
A meaningful commitment to supporting our team includes:
1. Holding accountable the executives involved in Timnit's firing and gaslighting.
a. We have lost confidence in Megan Kacholia, and we call for her to be removed from our reporting chain. Samy Bengio, Timnit's former manager, should report directly to Jeff Dean. We see removing Megan from our management chain as imperative: she excluded Samy from the decision to fire Timnit, which occurred without warning rather than through dialogue. Several times, Megan has undermined Samy's leadership by both going around him in her decisions regarding his reports and telling him face-to-face different things than she would later say to his reports. We have lost trust in her as a leader.
b. We call for Jeff Dean and other executives involved in the firing and retaliation against Timnit to be held accountable. As a first move toward accountability and repair, both Jeff Dean and Megan Kacholia need to apologize to Timnit for how she has been treated. We also call on leadership to recognize how language perpetuates harm and gaslighting. Both Jeff and Megan continually refer to Timnit's firing as a "resignation" and have stood behind the falsehood that she and her team failed to adhere to the PubApprove process. These accounts have caused harm to Timnit, the Black community at Google, and Black tech workers more generally, as well as creating a crisis of legitimacy in the AI ethics field, and industrial AI research more generally. Jeff and Megan need to be held accountable for these actions.
c. We call for transparency into the process by which Timnit was fired – an act that can only be understood as retaliation – including the names and roles of all executives involved in this decision and its execution. Without understanding the process and the degree to which each leader was involved, it is impossible to move forward with the structural changes needed to ensure this damaging behavior doesn't happen again.
d. All research leadership needs to stop referring to Timnit's firing as a resignation or departure. This is perpetuating a falsehood and is an act of gaslighting.
Accountable: Research Leadership.
Timeline: Immediately.
2. A public commitment to academic integrity at Google.
a. We need both internal commitments and a public statement which guarantees research integrity at Google. This statement needs to include explicit commitments that allow research discussing harms of particular technologies, including those which are potentially in reference to Google interests, products, and research areas (such as large language models, TPUs, and large-scale datasets used for model training and evaluation, such as JFT-300). It must recognize that work focused on harms and consequences is complete without the need to suggest or provide mitigations. This work is difficult, and much of the time mitigations are not simple or even possible. A commitment to research integrity must go above and beyond AI Principle #6, "Uphold high standards of scientific excellence," which is vague and underspecified.
b. We also call for the instantiation of processes to ensure transparent and accountable practices of research topic and publication review. This would include, but is not limited to, a clarification of the Sensitive Topics Review process, including clear articulations of which projects fall in scope, transparency into the review process, and mechanisms of accountability for decision makers throughout.
Accountable: Research Leadership.
Timeline: End of Q1 2021.
3. A transparent, open investigation of Research POps and Leadership regarding its handling of employee complaints related to working conditions, including discrimination, harassment, and retaliation, but also exclusion, bullying, and gaslighting.
a. Leadership and managers should be required to go through Racial Literacy training. As evidenced by the Black-led Research All-Hands and much of the discussion on the Brain Women and Allies and other lists, Black researchers are on the receiving end of major discriminatory treatment within Google, including, but not limited to: underleveling, micro- and macro-aggressions, disparaging comments, being talked over in meetings, and being excluded from meetings pertaining to core work streams. Moreover, in situations where the Black community has been directly harmed by a product, members of the affected community have been left out of problem framing and prioritizing/shaping the solution. The toxicity towards BIPOC people within Research needs to stop.
b. We call for a transparent, open investigation of POps and its handling of employee complaints related to working conditions, discrimination, harassment, retaliation, exclusion, bullying, and gaslighting.
c. We are also calling for transparency into hiring and senior leadership decisions. These decisions need to be exposed to the Research DEI Council and the Research DEI WGs. Furthermore, critical decisions around hiring and firing that would have major research impacts should include consulting the DEI Council.
d. Lastly, we ask for transparency into the selection of the DEI Council, how those individuals are chosen, and what power they have to make actionable changes to hiring and termination policies.
Accountable: Research Leadership, Managers, and POps.
Timeline: End of Q2 2021.
4. Maintain and strengthen the Ethical AI team, as a team of interdisciplinary researchers.
a. Research must extend an offer for Timnit to return to Google at a higher level. The removal of Timnit has had a demoralizing effect on the whole of our team. She and Meg Mitchell had cultivated a diverse, productive team which thrived in a psychologically safe environment. Offering Timnit her position back at a higher level would go a long way to help reestablish trust and rebuild our team environment.
b. Nobody who has advocated for Timnit should be disciplined, reassigned, or fired. We know from Google's past retaliation that they often proceed to harm and marginalize workers without outright firing them, especially those who have advocated for their colleagues. Instead, the company rates them poorly in future performance review cycles, shifts their daily workload to something less desirable, and puts other direct and indirect limits on their career growth. We ask for guarantees that Perf scores will not be affected by organizing and advocacy, including not assigning any NIs or PIPs.
c. The Ethical AI team must continue reporting to Samy Bengio and stay together within Research, unless someone chooses to go to another team. Another common tactic is to mask retaliation within a "re-org", shunting workers who've engaged in advocacy and organizing into new roles and managerial relationships.
Accountable: Research Leadership.
Timeline: Extended offer immediately. Other commitments are ongoing.
5. We reaffirm the demands of the Standing with Dr. Timnit Gebru petition, which has currently been signed by 2,695 Googlers and 4,302 academic, industry, and civil society supporters. We are also co-signing the letters from Black Researchers and the DEI Working Group.