What’s Google’s definition of retractable?

Timnit Gebru, a computer scientist best known for her work on ethics and algorithmic bias in AI/ML applications like face recognition, was fired yesterday from co-leading Google’s Ethical Artificial Intelligence Team. Apparently this was triggered by an email she sent to members of her team. Social media is exploding over this, and I don’t have all the information, so I won’t speculate on exactly what happened. However, one aspect of the media storm caught my attention, since it relates to the question of what makes for a good reason to retract research.

From the published emails that Gebru purportedly sent her co-workers, and that Jeff Dean, director of Google AI, purportedly sent to Google AI employees to explain what happened, we get some information about Google’s internal process for granting its employees permission to submit papers for external publication. From Dean’s email, it seems all research papers co-authored with a Google employee and intended for external publication require such review. Dean’s email describes the outcome of the review of a paper Gebru co-authored, which Google requested she withdraw from submission:

A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies.  Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues. 

Wait a second, these issues remind me of the paper on gender and mentoring that we discussed a couple weeks ago. That paper incited responses from many because it jumped from identifying associations between the later-career citations of junior researchers and the gender of the senior researchers they published with earlier to causal conclusions about mentorship quality, without acknowledging prior work that might have helped explain things. There were no major statistical errors made in the paper though (actually there may be some errors, but it’s not clear that they threatened any of the main results). The backlash was mostly a function of how the interpretations seemed to accept that female mentorship led to lower-quality research, and the paper’s failure to acknowledge a body of work on gender and citations that could have provided alternative explanations for the results that might not have required quite so many inferential leaps.

That paper wasn’t retractably bad, at least not in my opinion. One problem with arguing to censor research for potentially harmful speculation or missed literature is that what catches someone’s eye as problematic omission or speculation is going to be highly subject to the reader’s values. To argue that such issues are fatal flaws due to possible harm to beliefs or future behavior requires a lot of speculation and an assumption that readers can’t recognize for themselves what is more or less plausible, a problematic insinuation that leads to a conclusion that all research must be perfectly accurate. But who gets to define that? (Not to mention that if we were to retract all papers that did these things, we might not have a lot of research left!)

So if Dean’s email really is describing the major problems with the paper Google asked Gebru to retract, it would suggest their internal review process allows for these judgment calls, albeit perhaps for different reasons. I expect corporations care a lot about the reputation of their brand, so I wouldn’t be surprised that their process allows for calls like this under the guise of protecting business interests. But it’s a definition of censorship-worthy that leaves a lot of room for bias to creep in. It makes me wonder how often these types of issues are used to censor papers by employees, and how Google researchers view the intended role of the review. Is it to enforce a shared definition of quality, the way a professor might for the PhD advisees in their lab? One would think the Google researchers doing the research would be the experts in the company on it. Or is the review supposed to be mostly about preventing leakage of sensitive information to protect privacy or IP? I guess we’ll have to wait for an answer, since I don’t expect Google to release the history of their internal evaluations anytime soon.

Update (Dec 4 2020): Jeff Dean posted a file with his email and more information on Google’s review process.

MIT Technology Review reports on what the paper was about.

78 thoughts on “What’s Google’s definition of retractable?”

  1. Inspired by the response to the female mentor paper, I ran a tiny twitter poll to see what people thought merited “retraction”. Some people pointed out some, ummm, limitations with my poll, but for what it’s worth, about a third of n=332 respondents endorsed that a wrong or a harmful conclusion merited retraction of a scientific publication: https://twitter.com/MariaGlymour/status/1329967294647128067?s=20. Maybe we need a field-wide discussion on how to deal with scientific results we think are wrong, or harmfully wrong. I would say this needs to be integrated into training programs, but at least some of the disagreement (at least on twitter) comes from senior folks.

    Are the Google review rules more subject to bias than, say, the rules standardly adopted by major cohorts such as the Nurses’ Health Study or the Cardiovascular Health Study? Most have a review process, and senior folks can nix any paper. I’m not sure if there are rules governing what explanations they can invoke, but I suspect it’s very ad hoc and people with the most power in any given cohort face little accountability about this.

    • “Maybe we need a field-wide discussion on how to deal with scientific results we think are wrong, or harmfully wrong.”

      I agree with Andrew that we don’t want to be retracting papers just because someone thinks they might be harmful or wrong. What we do need, however, is press and media coverage that’s much more willing to be critical of research results, *regardless of how “rightly” or “wrongly” they fit into the social movements of the moment*.

      • I am one of those who voted to definitely not retract papers just because someone (maybe even many people) think they are wrong, or even if many people think the methodology is poor. But in the case of the Timnit Gebru paper, I think the word “retraction” is potentially misleading and causing the discussion to go into an area that isn’t actually material to the issues.

        Unless I’ve misread the email correspondence, Google was asking Gebru to withdraw a paper that had been submitted for a publication or conference process, which is different from retracting a published paper. But even if it had actually been published (it’s not clear to me either way), a retraction of this sort (because the employers of a lead author weren’t happy with the process leading to its submission) is always justified, even for someone like me at the extreme anti-retraction end of the debate Maria Glymour refers to above. When we work for an organisation (not as an academic at a university, but in a more conventional employment relationship), they get to set the process for this sort of thing and that’s really the end of the story. It’s about employee relationships and organisations protecting their brand, not really related to the “should bad scientific articles be retracted” debate.

        • Peter F. Ellis:

          This cannot possibly be a defense. If Google wants to gain credibility and good will from their “scientific” publications, then their decisions about what to censor and why should be open to discussion and to disagreement. These are not marketing emails, they are scientific submissions. If they choose to censor for problematic reasons, they should be held accountable for such.

        • “then their decisions about what to censor and why should be open to discussion and to disagreement.”

          I don’t agree with that. Their only responsibility is that, when they publish, the work be of sufficient quality to merit publication on scientific grounds. If they choose, for whatever internal reasons, not to publish, they have no responsibility to provide info about or clarify those decisions to the public.

        • jim: I disagree. If they claim that they care about ethical AI or the scientific process then they need to commit to sound research practices, which includes explaining why they decided to censor someone. And in this case, it sounds like they also need to answer on the kind of culture they cultivate and the systems they have in place.

        • It’s a market: if scientists don’t like this kind of interference, they can choose not to work at Google. The choice to publish despite company policy indicates the “value” of academic freedom was greater to this person than the “value” of working at Google. And Google’s reaction means it values control of scientific ideas and debate–or maybe just obedience–over public credibility. Or maybe you haven’t seen Google’s new slogan: “Only be as evil as you can get away with!”

        • It’s a fair point that this is not the standard definition of retraction – they wanted it withdrawn from submission. I don’t know if it was also already online on arXiv or elsewhere and they were asking her to take that down; I only know what the emails say.

          My comments are about potential issues when you have a review process that allows for censoring papers for reasons like missing lit.

        • Peter, thanks, I agree, I was a little confused by the word “retraction” myself because the previous explanation seems to indicate the decision was made before the paper was submitted.

    • I think the answer is in the presentation of findings.

      You write: “If a paper’s conclusions are “wrong” should it necessarily be retracted? Or retract if conclusions are wrong & also potentially harmful? Or is retraction needed only if there was an error/flaw in the study that wasn’t clearly presented in the publication?”

      I say: When you present what you found, you should also sketch what you did not find. We call this a boundary of meaning (BOM). The “retraction” could be a simple update to the BOM based on new analysis or new evidence.
      https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3035070

      PS: This is of course not considering fraud which, if established, should lead to full retraction.

    • Well, there is an employer-employee relationship and so it’s mostly employment law …

      The only constraints on the employer beyond those laws would be the need to attract and retain research capable employees.

      Can’t remember who said it, but they argued “government scientist” is an oxymoron, as a true scientist needs to disregard all interests but getting to the truth.

      I believe we mostly know now that in universities, most faculty have to be concerned with getting publications and citations and assuring their deans that they are generating their fair share of prestige, so there are few “real” scientists there either.

      But I think most universities do not try to control content the way other employers will (subject to the constraints above).

  2. Honestly, it feels like they were looking for a reason, and this was the first/best(?) one they could come up with. Every paper can potentially miss relevant research, which peer review should catch; that’s part of the point.

    • Reading the email from Timnit Gebru at https://www.platformer.news/p/the-withering-email-that-got-an-ethical, I’m not surprised Google found it “inconsistent with the expectations of a Google manager”. I know nothing about how things got to that point, but once you send an email like that (e.g., telling people to stop doing their work because it’s not going to go anywhere) when you’re a middle manager, you have to expect to be fired. If on top of that she sent an ultimatum as reported (“meet conditions X or I’m out of here”), then I think she basically had to expect it to end the way it did.

      • Peter F. Ellis:

        Google makes the bulk of their revenue under the auspices of free speech, so they should expect this type of disagreement. If they choose to fire someone for disagreeing with a decision in this manner, that is something they will have to defend. Dismissing it as, “We’re the bosses” is not a moral defense of this type of censorship.

        • Google doesn’t have to morally defend anything. They can do whatever they want as long as they don’t break the law.

        • Wandering:

          For sure. But public-facing companies care about public relations. Most people like Google; positive public perception is one of its assets, which means that the company is motivated to put in effort to protect that asset. All this is in addition to, or aside from, moral considerations.

        • “face the consequences….”? What consequences do you have in mind? In what manner will Google have to “face” them and “defend themselves morally”?
          Are the woke people who are in unique possession of moral clarity going to righteously rise up and cancel Google?

        • Renzo Alves:

          The consequences to which I am referring are:

          1. If I am running a conference in the future, I would make clear that we are not a PR arm for Google and that submissions from Google employees should be considered carefully in that light.
          2. Publicity like this while this is going on: https://www.cnbc.com/2020/12/02/google-spied-on-employees-illegally-terminated-them-nlrb-alleges.html does not help their cause.
          3. The judgment of those who required a withdrawal should be questioned given what is an obvious and likely PR explosion. Not recognizing this likelihood suggests a lack of foresight.

          The issue here appears to be political rather than academic. It would be easy to bring the purported missing citations to the researchers’ attention and suggest they include them in the presentation or final paper. At the end of the day, a paper is the responsibility of the authors, and if Google chooses to create an academic culture within parts of their company they should be willing to live with both the positive and negative aspects of that decision. Just as they would have to live with the effects of not having such a culture.

        • “Google makes the bulk of their revenue under the auspices of free speech”

          Um, not saying you are wrong…but are we sure about that?

      • My reading of the emails is that there is a lot of history behind them that we don’t know, but that the people Timnit wrote to do. Without knowing the history, it is hard to make judgements.

  3. “that readers can’t recognize for themselves what is more or less plausible”

    Yet the press (I guess you could call them “readers,” although I’m not sure they ever actually read research papers) routinely parrots even the most bizarre and ridiculous research conclusions as though they were the discovery of a rock-solid new continent.

  4. > One problem with arguing to censor research for potentially harmful speculation or missed literature is that what catches someone’s eye as problematic omission or speculation is going to be highly subject to the reader’s values. To argue that such issues are fatal flaws due to possible harm to beliefs or future behavior requires a lot of speculation and an assumption that readers can’t recognize for themselves what is more or less plausible, a problematic insinuation that leads to a conclusion that all research must be perfectly accurate. But who gets to define that?

    Isn’t that kind of the point of journals? Like the reason why you would want something published in X journal is to get that stamp of approval to show that the paper is “good” (defined however you like), right? And if a journal publishes enough papers that you, as a reader, do not feel are up to your personal threshold of good, then you can stop reading papers from that journal (or just treat them like a non-peer-reviewed paper on Arxiv).

    • Yes. I’m not suggesting that standard scholarly review should never consider these things; that is often its role. My point is that when you have an internal review process for a large organization like Google trying to assess things like whether the lit review covered all the most important things, you have a process that’s going to be hard to apply uniformly.

        • How could they be any less “uniform” than the standard pub review process at a journal? If Nature rejected it for the same reasons, would she issue a list of demands to Nature’s editors?

        • I don’t think they are necessarily less uniform. The standard peer review equivalent might be R2 who protests, “but they didn’t cite my paper” or more specifically “they cited stuff showing problems with my work but didn’t cite other stuff showing advantages!” What seems unique here is that Gebru’s role at Google is to help them run their organization more ethically, and specifically build understanding of what it means to do AI ethically. So citing issues with missing related work to keep her from publishing a paper seems to undercut what they hired her to do. I suspect that is part of why this case is angering many people — it raises questions about how open Google is to critiques of technology they employ.

  5. Fired? Sounds like she resigned:

    “Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.” Source: https://www.platformer.news/p/the-withering-email-that-got-an-ethical

    • That’s what Google says. Timnit says she was fired so abruptly that even her direct supervisor didn’t know about it.

      It would not be surprising, and perhaps standard practice, for a company to terminate an employee whom they fear, despite the employee’s stated intention to resign at a later date.

      The credibility meter leans toward Timnit on this one.

  6. The discussion about the mails, especially the reaction on Twitter, is very insightful.

    Nothing in the behaviour from Google is different from my experience in corporate environments.
    * Any (European) industry setting that I know about has internal review processes that any document needs to go through. Even if your managers might not be experts in a field, they have the ultimate say if you can send out a paper.
    * When you write a mail to encourage employees to apply pressure from the outside to the company you are working for – you’re in trouble.
    * When you set an ultimatum, your company will look for a way to make you leave.

    The reactions on Twitter make me believe that Google AI is not thought of as a company, but rather as a research environment like a public university. An internal review process would be unusual in public academic institutions, and mails to “change the system” are a form of the debate on the academic system. In industry the expectations are very different.

    That Google is behaving like a company, however, reinforces my view of how weird it is to let Big Tech handle its own ethics supervision.

  7. I wonder how this compares to Google’s firing James Damore a few years ago, pretty much also for the same reason that he distributed a memo publicly criticizing company policy?

    Also it’s interesting how some firings get so much attention, given that companies fire perceived troublemakers all the time. One way to look at it is that these cases get attention because of some notable features. Another way to look at it is that if you’re bothered by the general pattern that companies are firing people for union organizing, etc., then it makes sense to shout when you see this sort of high-profile case that can get notice in social media. Conversely, if you think that it’s ok for companies to fire union organizers, you might use this example to support your general belief that at-will employment is a good thing.

    I guess what I’m saying is that this case has several interlocking features:

    1. Retraction or non-publication of a research article because of flaws or perceived flaws in its scholarship.

    2. Ethnicity, gender, and social inequalities.

    3. Tech (and Google in particular) as a force, or potential force, for progressive politics.

    4. The role of dissent within corporate culture.

    5. At-will employment in general.

    The Damore case is an interesting comparison because it shares all these features, but with some of the directions reversed.

    • I think what’s different here is Damore was an engineer who wrote an email, whereas Gebru was hired by Google to lead their AI ethics team. So part of her job definition would seem to be to criticize and try to change what she perceives as unethical.

      • Jessica:

        Ok, so in that case firing Gebru is like firing the ombudsman? I wonder if part of the story could’ve been internal conflict within the organization even at the time that Gebru was hired for this position. I’m reminded of situations I’ve seen in academia where a department hires an assistant professor who works in a certain area, but there’s a large faction within the department that entirely opposes that research area, and they gain power, leaving the faculty member in the lurch.

        • The outcome sounds similar too: By making things difficult, the department/company doesn’t have to go to the trouble of firing anyone, they can just wait to “accept a resignation”.

          As John Williams says above, it seems like this was probably the outcome of many internal disputes that we are not privy to.

        • That seems to be how it is being perceived.

          There’s an aspect of this situation that’s reminiscent of events occurring in many CS departments (and probably other disciplines) lately, which is that the greater visibility of seemingly systemic issues, like the underrepresentation of women, black people, and other groups, and the lack of interest in the ethical consequences of technology, is leading groups to organize and try to enact change. Leadership realizes there’s a movement and thinks, ‘Yes we should support this’ (for various reasons which might be altruistic but also undoubtedly involve trying to look like they’re doing the right thing). So a conversation starts, and leadership asks what should change, and the other side lists things they see as necessary to solve the problem. But then from there the common outcome seems to be frustration ensuing on both sides.

          I agree there are similarities with the Damore case though, in that both emails are expressing a feeling that the company is only allowing for one side of the conversation.

        • Additionally it _could be_ a perception that Gebru, as the ethics expert, was not on the ethical high road (i.e., giving an overly one-sided view in their paper and unduly dismissing the other side).

          This is just a wild guess, but related to the issue of expected higher standards from those in different roles.

          This clearly happened in a paper I read recently calling for academics to stop being one-sided and dismissive of others: it contained an account of something I knew fairly well, where the authors were extremely one-sided (and after publication disclosed a relationship with the person they sided with).

          More likely, Gebru just bumped heads with a senior executive and they wanted her gone.
          (A post-employment consultant hired by my former university-based employer once disclosed that to me.)

      • This perception may well be one reason why people are upset, but perhaps her role was to help Google improve the “ethics” of its products and practices, not serve as an ombudsman or internal regulatory officer (with potentially formal/legal/moral responsibilities) – the latter might be something that comes out of government intervention at a future stage. As I see it, her email said do X, Y or I resign, and Google said OK, then you can resign (and we weigh the consequences of that as less problematic than the alternative); and, by the way, now that you are leaving, we don’t want you to have all the document/information access and perks that you have as a Google manager between now and when you leave, which could potentially be nothing but major trouble for us – so we are firing you (but pre-emptively accepting your resignation, in lawyer speak).

    • > So part of her job definition would seem to be to criticize and try to change what she perceives as unethical.

      It’s interesting that in a sense, because tech companies explicitly and publicly focus on making sure their workforce is diverse, and part of their brand is increasing open exchange of information, they are effectively held to a higher standard.

      I had a client who worked for the CDC. Before she gave a public presentation or publication on her research, her presentation/publication was subjected to a thorough review, often by more than one supervisor. Imagine all the ways that the CDC could protect the CDC’s brand in such a review process. No one had any expectation otherwise.

    • > pretty much also for the same reason that he distributed a memo publicly criticizing company policy?

      Wikipedia says it was an internal thing: https://en.wikipedia.org/wiki/Google%27s_Ideological_Echo_Chamber

      > Retraction or non-publication of a research article because of flaws or perceived flaws in its scholarship.

      I don’t know the Gebru thing, but I Googled the Damore thing and you might be giving him a bit too much credit: https://assets.documentcloud.org/documents/3914586/Googles-Ideological-Echo-Chamber.pdf

      Certainly seems like the same topic, but the details are quite different.

    • James Damore published his memo only company wide; the memo then got leaked to the press by other channels. If I recall correctly, Damore wrote his memo in the context of an internal online forum/message board(?) specifically inviting critical feedback from employees about Google policies. He also was not given the choice to retract anything.

  8. Isn’t the power dynamic an important issue?
    Kicking groups that are already down is hard to defend, and such attempts need extra scrutiny. An attack on something entrenched and dominant might help reveal something, and is probably not as damaging to the target, since they have the resources to respond. So isn’t the asymmetry a very important thing to consider in such cases?

    • It’s fine to consider societal power dynamics when talking about broad aggregates, but there’s a lot more information in this case. So let’s use it: Do you really think that one of the top researchers in a trendy area (ethics in ML) is disadvantaged relative to a socially inept no-name engineer? (It seems that he was unemployed for at least one year, and judging by his twitter it may still be that way).

      The truth is that there are few differences between this case and the James Damore one: employee does/says something that has some truth but is expressed in the wrong way -> company fires them to avoid issues (with PR and other employees).

      Where there are differences, they fall on the side of Damore because he is/was less well known; his message (“discrimination may not explain most of the discrepancies between men and women”) is a harder sell in the current political climate; he didn’t engage in purposefully inflammatory behavior (unlike her, who sent emails blaming “white men” inside the company for essentially everything, or asked for the names of the anonymous people who gave her negative reviews); and he didn’t send ultimatums to anyone.

  9. Having worked somewhere similar (sorry, but I think I need to be vague), I can say the focus of these reviews is really on anything that could raise a public relations problem. In fact, these reviews are literally supervised by employees with a PR background, so the biggest concern is anything that would make the company sound bad. That could include an endorsement of politically unpalatable conclusions, revelations that the company’s data can reveal more information about users than is commonly supposed, or anything that could be taken as unethical or inappropriate behavior. Facebook, for example, shut down a lot of its social science research collaborations after the media firestorm over the paper that experimentally manipulated the emotional composition of users’ newsfeeds.

    The worst situation is to release something that creates a PR problem and is not easily defensible as “good science”, since that leaves the PR folks handling the fallout with no defense; so there’s some level of “scientific” scrutiny applied. But it’s all ultimately about the corporate bottom line, which research mostly influences from a public-perception perspective.

  10. Jessica –

    > One problem with arguing to censor research…

    I wonder about the use of the term “censor” there. That term has become a bit of a weapon of ideological warfare – with people claiming that they are being “censored” if someone deletes their comments from a blog comment thread or their tweet is labeled as being inaccurate.

    At what point does a supervisor at Google not approving research for publication cross over from appropriate review to censorship? What is the standard?

    • Joshua:

      I would argue that censorship starts when the company is making science decisions for PR purposes rather than to actually improve the quality of the science.

      • We could solve that problem… Simply publish all research anonymously. After all, the science doesn’t depend on who writes it, right? (My point, just in case I was being too subtle, is that the PR game works both ways. Google funded the science, so they get to decide what becomes public. The scientists write the science, and a substantial part of that isn’t to get the science out, but to give glory to the scientist.) If the paper was published anonymously (with suitable cloaking of the company at which the research was performed) then the problems are solved.

        • Jonathan:

          I’m not disagreeing with your larger point, but in this case it’s not clear that the author could legally publish the report anonymously: if it’s part of her contract that she can’t publish work-related stuff without company permission because it’s the company’s intellectual property, then that’s it, and redacting the author name and cloaking the company name doesn’t solve that, right? That said, sure, she could publish anonymously under the assumption that Google wouldn’t want to sue or fire her for it, but that’s another story.

        • That does not solve the problem, which is that Google wants the advantages and benefits of engaging in science with the scientific community, but does not want to pay the real price of doing so, which is transparency. Any research funded by the U.S. Government MUST be publicly available unless security clearances are required to review it. The same should be required of businesses that want to accrue the benefits of science.

          Also, one of the reasons Google wants to engage with the scientific community is because it allows them to recruit people who would not otherwise be willing to give up the freedom that comes with academia. Anonymizing research does not solve this problem.

        • Proprietary research does not contribute to the advancement of a scientific discipline if there is not enough information in the publications or presentations to critically review and replicate.

        • Curious –

          > Any research funded by the U.S. Government MUST be publicly available unless security clearances are required to review it.

          But not until after review. For example, as I mentioned above, a scientist at the CDC can do research, but before it’s made public it is subject to review. What’s interesting there is that unlike review within a purely private entity, spiking research at the CDC after review might better fit the label of “censorship”, since a government entity is involved.

        • Joshua:

          Within an organization that takes the scientific process seriously, internal reviews are entirely about improving the research. If a business wants to engage with the scientific community, that should be the focus of their reviews as well.

        • Curious –

          As a matter of principle, no doubt. As a matter of practice I don’t know how that would work out. “Improving the research” could always be subjective. Maybe a solution would be for an independent panel to do the review – but I can’t imagine a business signing on for that.

        • How does that make it non-science? Unless you mean that things that aren’t published don’t become known. Then I agree. (And just to be clear, my suggestion above, which Andrew rightly dismisses as unworkable, wasn’t meant to be workable. It was meant as snark, for which I apologize.)

        • Guinness/Gosset is a famous example of anonymous publishing (with the permission of the company in that case; they just wanted to be sure there was nothing about beer production in the papers and no way to track the authorship back to them).

    • I used the word censorship as a blanket term to refer to Google having an oversight process that lets them decide what research the rest of the world sees versus doesn’t. Given their business interests, it seems natural that they would use that process to protect their reputation in various ways. I’m not trying to make any underlying ideological argument about censorship, so maybe I should have used a different word.

  11. As others have said, there seems to be a lot of back story here that we are not (yet) aware of. My guess is that, from Google’s point of view, Gebru went from being an employee who was challenging in a constructive way to a hostile one. In her email, Gebru mentions previously threatening a lawsuit, for example. I’m not saying that Gebru isn’t right about Google. Just that this episode isn’t really about censorship.

    • I feel that this discussion would be much easier to have in a few months, when maybe more information is available and the heat of outrage from all sides has settled. So this might have been a good motivation for using Andrew’s standard 6-month blog delay on this one.
      My current impression is that in the end, this will hurt all sides. Regardless of which position you agree with, I think this will hurt Google, Gebru, and tech diversity overall (just not sure to what extent).

      • I agree it’s hard to know what details we’re not privy to now. I decided to post something now rather than wait partly to signify that it’s a potentially important thing happening in the world of data science ethics (in addition to the paper “quality review” part of whatever happened being topical to this blog) and because I value the kinds of discussion that happen on here despite the fact that people sometimes disagree in their views. The latter is hard to find in the social media discussions I see around events like this.

        Agree too it will affect all sides in some ways. Though from where I sit, Google’s reputation is hurt much worse than Gebru’s, who is already known for being very personally committed to fighting for AI ethics and, more generally, diversity work in CS. So the fact that she would threaten to resign or tell others to give up certain efforts seems in character and suggests to me she was very frustrated with her position there.

        Regarding the censorship part, it’s about that to the extent that that’s what Dean’s email brings up, and that’s what Gebru’s letter threatening resignation was about (https://twitter.com/timnitGebru/status/1334900391302098944).

  12. This may be a trivial detail – but according to my reading of the events, she resigned … which is different from “being fired”.

    Also, the term “censorship” really doesn’t seem appropriate when discussing a company – the first amendment doesn’t apply.

    Lastly, it is quite common in non-academic settings to have requirements for corporate authorization of publications.

    So, while I don’t doubt there is foul play involved here – I think it really should be described/discussed a bit more “honestly”, and with the clear recognition that the rules in the corporate world are (legitimately) different from those in academia

    BTW – no COI here – I am an academic and have nothing to do with Google or any other corporations. I am about as COI-free as they come – having zero corporate funding for any of my research.

    • On the term firing — from the details I saw I wasn’t sure what it should be called. Gebru’s letter seems to have said she would resign at some future date, but my sense is that she was expecting some response back (other than being cut off from her accounts/employment privileges). So her perspective would seem to be that she was fired. But like my use of the term censorship, I have no strong opinions on what we call it. Maybe resignation or termination is better.

      And I agree that the rules of the corporate world are probably very different. I am not questioning if and why those rules exist, just saying there’s an analogy to some of the problems we run into in academia when we try to define processes for deciding what research is versus is not worth allowing into the canon.

    • Google says she resigned. Gebru says she was fired. All things considered I think Gebru is more credible here given their respective reputations. It also doesn’t seem like her story is inconsistent with Google’s – just that Google was trying to use PR-speak to frame it as a resignation when it wasn’t. Here’s more details:

      > Gebru says she failed to convince the senior manager to work through the issues with the paper; she says the manager insisted that she remove her name. Tuesday Gebru emailed back offering a deal: If she received a full explanation of what happened, and the research team met with management to agree on a process for fair handling of future research, she would remove her name from the paper. If not, she would arrange to depart the company at a later date, leaving her free to publish the paper without the company’s affiliation.

      > An email sent by a manager to Gebru’s personal address said her resignation should take effect immediately because she had sent an email reflecting “behavior that is inconsistent with the expectations of a Google manager.”

      Source: https://www.wired.com/story/prominent-ai-ethics-researcher-says-google-fired-her/

      No one said anything about the first amendment here. Censorship can be with respect to the first amendment, but it doesn’t have to be. Here, Google is restricting information from being made public, so I think the term is appropriate.

      And sure the legal rules in the corporate world may be different but the principles underlying ethical AI are the same. If Google wants to be serious about research in this area and build credibility that their practices are ethical, then they need to abide by the same principles of scientific integrity that any other group must.

      • Michael (I assume not Jordan)?

        > No one said anything about the first amendment here. Censorship can be with respect to the first amendment, but it doesn’t have to be. Here, Google is restricting information from being made public, so I think the term is appropriate.

        Interesting. My first reaction was that “censored” isn’t appropriate. But I admit that’s because I get a bit triggered by the term after hearing it so much from whiney people complaining because someone deleted their blog comment or even deleted their tweet, when they have no shortage of ways to exercise their freedom of speech and the government wasn’t involved.

        But yeah, if Google is preventing the public from even hearing about some research because of proprietary rights…maybe that is a kind of middle ground that really does reach a level tantamount to censorship.

        Need to think about it…

  13. michael schwartz:

    1. Censorship is not an exclusively legal term, thus it is perfectly appropriate to accuse a company of engaging in such.
    2. Simply because some corporate behavior is normative does not make it moral, ethical, or appropriate. In fact she was hired to address unethical creation and application of algorithms.
    3. It is likely she was making an argument that some of Google’s algorithms fall into the unethical category.

    I believe it is a mistake to give business organizations a free pass to behave unethically toward their employees so long as it is legal. We should expect more. They owe the societies from which they’ve built their billions more than that. They owe the employees upon whom they’ve built their billions more than that. Their decided lack of contrition speaks volumes.

  14. From the technology review article:

    “It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.”

    Sounds like they need some multilevel modeling…
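
    To unpack the quip a bit: the appeal of multilevel modeling here would be partial pooling, where estimates for groups with little data (say, languages with a small online footprint) get shrunk toward an overall mean rather than being drowned out by the biggest groups or taken at noisy face value. Here’s a minimal sketch of that shrinkage arithmetic in Python; every number in it (group sizes, raw means, the variances) is made up purely for illustration.

      import numpy as np

      # Hypothetical per-group data: a few huge language communities, many tiny ones.
      n = np.array([100000, 50000, 500, 50, 10])            # observations per group (made up)
      raw_means = np.array([0.10, 0.20, 0.60, 0.85, 1.00])  # no-pooling group estimates (made up)
      sigma2 = 1.0   # assumed within-group variance
      tau2 = 0.05    # assumed between-group variance

      # Partial-pooling weight: near 1 for data-rich groups (keep their raw mean),
      # near 0 for data-poor groups (pull them toward the overall mean).
      w = tau2 / (tau2 + sigma2 / n)
      mu = raw_means.mean()                  # crude stand-in for the population mean
      pooled = w * raw_means + (1 - w) * mu

      print(np.round(pooled, 3))
      # The big groups keep roughly their raw means; the tiny groups are
      # shrunk toward mu rather than dominating or being discarded.

    The point of the sketch is just the direction of the effect: small groups borrow strength from the ensemble instead of being homogenized away, which is the intuition behind the quip.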
