Ethics washing, ethics bashing

This is Jessica. Google continues to have a moment among those interested in tech ethics, after firing Margaret Mitchell, who founded the Ethical AI team and co-led it with Timnit Gebru, making her the second of the team’s two leaders to be let go. Previously I commented on potential problems with the internal review process applied to a paper that Gebru and Mitchell authored with researchers at the University of Washington, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜, which served as the official reason for the dispute with Gebru.

A while back, when it became public, I read the paper, which is interesting and, in my opinion, definitely not worth the sort of scrutiny and uproar it seems to have caused. It’s basically a list of points made under the pretty non-threatening guise of putting the limitations of large scale language models (like BERT or GPT-3) in context, to “reduce hype which can mislead the public and researchers themselves regarding the capabilities of these LMs, but might encourage new research directions that do not necessarily depend on having larger LMs”. The main problems described are the environmental impact of training very large models, where they cite prior work discussing the environmental cost of achieving deep learning’s accuracy gains; the fact that training models on data that reinforce stereotypes or biases, or that don’t represent the deployment population accurately, can perpetuate those biases; the opportunity costs of researchers heavily investing in large scale LMs given that they don’t necessarily advance natural language processing toward long-term goals like general language understanding; and the risks of text generated by LMs being mistaken for “meaningful and corresponding to the communicative intent of some individual or group of individuals.” I personally found the last point the most interesting. In combination with the second point about bias, it made me think maybe we should repurpose the now hokey term “singularity” to refer instead to a kind of choking-on-our-own-vomit-without-realizing-it that recent research in algorithmic bias points to.

However, it seems doubtful that the paper was the real reason behind Gebru’s and now Mitchell’s dismissals, so I’m not going to discuss it in detail. Instead I’ll just make a few unremarkable observations about attempts to change the makeup and values of computer scientists and big tech.

First, given how devoted Gebru and Mitchell seem to their causes of diversity and ethical oversight in AI and tech more broadly, I can’t help but wonder what Google executives had in mind when the Ethical AI team was being created, or when Mitchell or Gebru were hired. It strikes me as somewhat ironic that while many big tech companies are known for highly data-driven hiring practices, they couldn’t necessarily foresee the seemingly irreconcilable differences they had with the Ethical AI leadership. I recently came across this paper by Elettra Bietti, which uses the term “ethics washing” to describe tech companies’ attempts to self-regulate ethics, and “ethics bashing” to refer to how these efforts get trivialized, so that ethics comes to be seen as something as simple as having an ethics board or a plan for self-governance rather than an intrinsically valuable mode of seeking knowledge. These descriptions seem relevant to the apparent breakdown in the Google Ethical AI situation. From a pessimistic view that assumes any altruistic goals a big tech company has will always take a back seat to its business strategy, Google would seem to be struggling to find the right balance: investing enough in ethics to appear responsible and trustworthy, while avoiding the consequences when those it employs to help with these initiatives become frustrated by the limits placed on their ability to change things.

Personally, I question whether it’s possible to make real changes to the makeup of tech without it coming from the very top. My experience, at least in academia, has generally been that for initiatives like changing the makeup of the faculty or student body, you need either the majority of faculty to be pushing for it (and a chair or dean who won’t overrule them) or you need the chair or dean to be the one pushing for it. Diversity committees, which often tend to be made up mostly of women and others from marginalized backgrounds, can make an environment more supportive for others like them, but they have trouble winning anyone over to their cause without those in positions of power reinforcing the message and lending the resources.

At any rate, what’s happening at Google has brought a lot more attention to the question of how to make big tech corporations accountable for the impact their systems have. I don’t know enough about policy to have a strong view on this, but I can understand why many researchers in algorithmic bias might think that regulation is the only way to overcome conflicts of interest between ethics advocates and businesses that make money off of algorithms. It’s clear from the existing work on algorithmic bias, though, that there are some significant challenges when it comes to defining bias, so this may not be right around the corner. I’m reminded of debates around methodological reform in psych, where there’s similarly a desire to prevent problems by putting top-down requirements in place, but how we define the problems and how we evaluate the proposed solutions are no small matters.

Maybe requiring more transparency around all new Google products would be a reasonable first step. I really don’t know. I’m also not sure how regulation could address the lack of diversity in the company, especially in subdivisions like Google Brain, though some researchers in algorithmic bias, including Gebru, have argued that having a more diverse set of perspectives involved in developing algorithms is part of the solution to ethics problems in AI and ML. So I expect struggles like this will continue to play out.

37 thoughts on “Ethics washing, ethics bashing”

  1. > … it made me think maybe we should repurpose the now hokey term “singularity” to refer instead to a kind of choking-on-our-own-vomit-without-realizing-it that recent research in algorithmic bias points to.

    I don’t often comment on posts here, but this is the most delightful half-sentence I have read in a while.

  2. “I can understand why many researchers in algorithmic bias might think that regulation is the only way to overcome conflicts of interest between ethics advocates and businesses”

    It might be that regulation is the only way to overcome those conflicts. The question is who said we should accept ethics advocates’ position and pressure tech companies to “resolve conflicts” with them? Businesses are accountable to their customers. Who are “ethics advocates” accountable to? Not the public.

    Whenever I hear about the lack of diversity in Tech I feel like barfing – especially when I hear it from a statistician. Just roughly guessing from Wikipedia, probably 10 million people have emigrated to the US from the Americas in the last 20 years – that’s roughly 15-25% of today’s Hispanic population in the US. The *OVERWHELMING* majority of those people have little or no education. How *on earth* would Google hire them??? Whatever would they do at Google? This idea that we can just compare the number of Xs in the general population with the number of Xs at Google and say that Google is or isn’t biased in hiring is wildly at odds with reality, and a lot of people advocating for it – here are those advocates again – should darn well know better than to make such a ridiculous claim, implicitly or otherwise.

    I mean, good gracious, what if certain Asian groups were overrepresented at Microsoft (as they almost certainly are)?

  3. Jessica:

    Regarding your discussion at the end about how or why or when various people are hired: In addition to the bureaucratic issues you discuss, I think there’s also a general attitude of wanting to hire “the best”–whatever that means. So I could imagine that if some group at Google or wherever is tasked with hiring someone who works on ethics, they’ll ask around and try to find the best, #1 person in the world who does this. Then they hire this person. What is the person supposed to do? How is she supposed to work within Google’s existing structure? Nobody asks these questions when hiring her. The goal is just to hire the best available person in a specified subfield, which is not the same thing as hiring a person to do a particular job, or even the same thing as hiring a person to play a particular role within the organization.

    We see this a lot in academia, a desire to hire the best rather than a desire to hire someone to do a particular job or play a particular role. In some settings, hiring “the best” can make sense; in other settings, it doesn’t make much sense; but in any case there’s not always any medium-term planning about what this person might do. So I could well imagine this group at Google hiring this top person without really gaming out what might happen going forward. In any case, this is just speculation; I have no knowledge of this particular case and very little knowledge of Google more generally.

    • This is exactly the problem with Google’s governance structure. They think that a successful company is the sum of many individually talented employees. There is no broad oversight over what general family of products should be made, what determines the quality of a product, or how talent should be allocated within the company. I’ve been told by insiders that employee bonuses and promotions are tied to the number of product launches they’re involved with. So Google constantly churns out promising concepts and interesting ideas and abandons them almost immediately after launch.

      https://killedbygoogle.com/

      To do text messaging on Android, there have been Android Messages, Google Talk, Google Hangouts, Google+ Chat, Allo/Duo, RCS messages, Hangouts Meet, Google Voice, and probably some others I’m forgetting. At some point, 6 were active concurrently.

      https://arstechnica.com/gadgets/2020/05/google-unifies-messenger-teams-plans-more-coherent-vision/

      There have been projects like Inbox or Google Reader, which had massively positive reception and were beloved by many, yet were abruptly killed while still popular. People just latch onto ideas they think are interesting or fun to work on, then when the fun part of proving a concept is finished and the hard part of maintaining and growing it begins, they leave. Google can afford to do this because

      1. They’re essentially bankrolled by massive search-ad revenue, which still accounts for some 70% of their revenue and >90% of their profits. Almost everything Google does is unprofitable before accounting for data collection for ad targeting. Employees can justify starting any project as long as they can make a plausible argument that it collects some data that can be used for ad targeting, though the incremental value of said data often never materializes.

      2. Some of their services are naturally monopolistic in that modern tech-platform way. YouTube has long been known as one of the worst places to host your video. Channels with millions of subscribers get abruptly terminated for spurious copyright strikes and cannot get in touch with an actual human moderator for weeks or months. But if you host on, say, Vimeo, you’ll have more customer service reps and better video quality, but nobody will actually watch your video.

      • somebody said, “… abandons them almost immediately after launch.”

        This got me thinking that omitting one letter still gives pretty much the same meaning:

        “…abandons them almost immediately after lunch.”

  4. I think the internal battle at Google/others is not about “ethics washing” – it’s about a debate between more traditional workers and those with an activist bent about what their job is. I imagine Google is actually very interested in the ethical consequences of AI, particularly when it helps them design better, more useful products. They bought DeepMind, for Christ’s sake! I also imagine they are not interested in having a group at the company which is very politically active, very vocal on social media, and who primarily see themselves as an avant-garde for pushing broad changes across Google which upper management does not want to pursue.

    To me, Google here seems reasonable. Imagine a major energy firm set up a green energy planning department. The department’s goal is to keep an eye on new technology and potential investments that can reduce carbon emissions for the firm, and even better, to keep an eye on changes in tech or the regulatory environment which will permit profitable investments. Imagine this same division then had its director and other key staff post non-stop on Twitter about how oil is destroying the environment, about how Major Energy Firm is moving too slow, about how the *true* climate change dangers involve ethical issues that go beyond general consensus. The folks fired from Google are AI ethicists…but they are also employees of Google. This doesn’t mean they are muzzled! It just means that their job – and it is a job – is to perform research on AI ethics that makes Google a better and more profitable company, where “better” is defined by management, not by these researchers. They are employees, not muckrakers, and for some reason they do not understand this.

    (Frankly, the same is true of Chief Diversity Officers and the like. They work for the company! Their job is to find internal practices which are inadvertently harming company performance, to look for hiring methods which allow for better employees by reaching beyond friends-of-friends, and so on. Their job is not to be an external critic.)

    • The complication is that Google markets parts of their business as independent research institutions, basically mini public research institutions for computer science. If you’re going to be that, you can’t be rejecting factually sound papers because they make some other part of your company look bad. All this to say, what’s wrong about the affair that started this uproar is the belief that Google, Facebook, and Amazon could function as self-policing, financially disinterested research institutions and publicly traded companies at the same time, and I don’t mind some of the hypocritical Google hype machine blowing back on itself.

      • But this has always been true, I think, since the days of Bell Labs and Xerox, etc.

        Larger companies did spin off these quasi-independent research organizations where researchers had way more freedom than in a traditional industrial R&D setting, and also way more publishing options and visibility.

        But the contracts always were written such that corporate legal teams went through the publications and presentations.

        So I don’t understand the hue and cry. If Google falsified data I would object, but vetting what to publish and what not to sounds like a fairly reasonable prerogative when they are paying for the research.

        Otherwise Google could just have created an endowment at CMU or something if they didn’t want any control at all.

        • Broadly, I agree with this. I’m fine with Google choosing not to publish something that’s unflattering or trying to address critiques internally before publicizing. I think I’m happy with the blowback Google is getting in large part because they suggest that they hold themselves to a higher standard. They didn’t have to hire somebody to work on AI ethics, but they did. To then suppress the publication of the research suggests that they want to allay people’s fears and earn positive PR creds, but without the inconveniences of actually doing anything differently.

          I wasn’t around for the heyday of Bell Labs and Xerox, but it seems to me like modern tech giants are marketing a new kind of benevolent technocratic utopianism. Their relationship to their employees and their consumers is supposed to be some kind of convoluted paternalism. You don’t just work here, we’re a F A M I L Y, we don’t just sell ads, we’re trying to push forward the frontiers of technology to the benefit of the human race. I trust them, as an employee and a consumer, insofar as our financial interests align, and no further.

        • I agree with some of that. I share some of your schadenfreude when something happens to take the incumbents down a notch.

          OTOH the benevolent utopianism you describe is more pervasive than just the tech giants. In some way we have thrust it upon the corporates. Instead of a fiduciary responsibility to just the shareholders we now demand that they shoulder all sorts of other duties to various stakeholders.

          And that’s another source of hypocrisy in the modern corporate world.

    • Kevin,
      Maybe a certain activist mentality is common in the ethics profession, as it is with physicians for example: yeah, OK, you’re telling me my patient doesn’t need such-and-such a test because it’s too expensive, but I’m supposed to put the patient first so that’s what I’m going to do. An ethicist’s job involves figuring out what the company is doing that is morally questionable or perhaps morally wrong. If you identify a practice that you believe to be morally wrong, and the company won’t change it, what are you supposed to do? Your answer is “nothing; fixing it is not your job”, but some human beings are unwilling to participate in activities they think are immoral — indeed I think the world would be a better place if more people felt this way! — and I would think that people who choose to go into ethics as a profession might feel this way more strongly than most.

  5. Reading these comments (thus far) makes me feel that this topic is much deeper than people are treating it. At the heart, it is a matter of the employee-employer relationship. What responsibility does an employee have to their employer’s values? The idea that your employer is paying you so your work should be aligned with their objectives seems too pat for me. Clearly, employees have their own ethical values and have some right (and obligation, at least to themselves) to work in areas that may threaten their employer’s stated objectives. A researcher working for a major oil company may feel that their employer is not taking climate change seriously enough. Perhaps this means they should find a different job. However, don’t they have a role to try to influence the direction of the company, particularly if they feel the company is either factually incorrect or flawed in their values? At what point does the conflict in values become such that quitting (or being fired) becomes the appropriate course of action?

    It seems to me that this is not an easy question. And I believe that more educated individuals (particularly if they view themselves as part of the academic community) will feel more emboldened to assert their beliefs, even if those beliefs conflict with some corporate objectives. I certainly don’t want an economic system that says that such behavior is absolutely inappropriate. Nor do I think our system can work well if employees have absolutely no responsibilities to their employers (as Rahul suggests, there is a difference between an ethics division within a company and funding an academic research center). But Google has portrayed themselves as a somewhat different kind of company – and other companies seem to be moving in similar directions. Social responsibility can easily be seen as lip service, but I believe it represents a genuine social desire to broaden “the bottom line.” Milton Friedman might object that this has no place in business – and there are surely others that share that view. But I think most people, especially those with science backgrounds, take a more ambivalent view.

    I prefer to see the tension as an opportunity to reconsider the employee/employer relationship as well as the role of business in society. Rather than dismiss the issue superficially, I think we should embrace it as a fundamental question. We academics take academic freedom for granted (for many it has become equivalent to tenure, rather than reflecting its heritage of preserving the university as an institution where diversity of views is paramount). It is ironic that just as academic freedom is increasingly under attack in universities, it appears to be coming alive in the corporate sector. Perhaps we should welcome the opportunity to take a serious look at the issues it raises.

  6. I think it went like this:

    1. Google became worried that criticism about algorithmic bias could hurt the brand.

    2. Management decided to “red team” the problem.

    3. The managers assigned to make it happen missed the wink and did the exact opposite of what they were supposed to do: they went out and hired real ethicists.

    When a profit-making entity forms a red team, it is because they want to paper over something. The best way is to bring in the retired head of research, who was a devoted company man (borrowing the old term). Then you surround him with a diverse group of folks with titles that relate to the issue, but are carefully selected to toe the company line. After a long investigation, the red team announces that the company does not have a problem. Their credentials are impeccable!

  7. “…the opportunity costs of researchers heavily investing in large scale LMs given that they don’t necessarily advance natural language processing toward long-term goals like general language understanding; and the risks of text generated by LMs being mistaken for “meaningful and corresponding to the communicative intent of some individual or group of individuals.” I personally found the last point the most interesting.”

    Yes. Exactly!

    I hope people will forgive me for grinding an axe here, but this “the risks of text generated by LMs being mistaken for “meaningful and corresponding to the communicative intent of some individual or group of individuals.”” bit is hilarious, and exactly right.

    The hilarious bit is that fairly large numbers of (presumably) smart people are working on these “language models”, and what they are, as best I can tell, is glorified Markov chain models. These LMs have, in principle, no way of connecting a “communicative intent” to their language output. They generate exactly and only random nonsense. What these research projects do is generate random nonsense that looks less and less like random nonsense as the models and databases behind the models get larger. The idea that random nonsense is “better” the less it looks like random nonsense is a very strange idea indeed. The problem, of course, is no one has particularly good ideas about instilling programs with anything that would give rise to “communicative intents”, or even about representing those “communicative intents” at all, so they don’t bother, and just use a random number generator. (A toy sketch of the kind of next-word sampling I mean is at the end of this comment.)

    The problem for Google is, I’d guess, that while this “research program” is, in principle, inane and ridiculous, Google wants to get PR mileage out of it. So Google being bent out of shape at the “AI Ethicists” is because said ethicists are pointing out that their pet projects are buck-naked emperors.

    (When I passed the quals but then quickly dropped out of an AI PhD program, the main reason was that I wasn’t finding a project to work on, but also there was the realization that I’d be spending the rest of my life screaming at idiots. Like the inane LM “work”.)

    I read somewhere (Science? Quanta?) that people were beginning to realize that there’s nothing in mammalian nervous systems that’s even vaguely reminiscent of what the “neural net model” folks are doing, so there may be hope for the universe yet. Maybe. (Neural nets are doing good work in Go programs, but that’s because Go is played on a rectangular grid of locally related points, and that’s what a neural network is.)
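    To make the “glorified Markov chain” characterization above concrete, here is a toy bigram sampler in Python (all the names are my own, purely illustrative). It generates text by sampling each next word from counts of which words followed which in its training text, with no representation of communicative intent anywhere. Actual large LMs condition on much longer contexts with learned parameters rather than raw counts, but the generation step is still “sample the next token from a conditional distribution”:

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Record, for each word, the words that followed it (a first-order Markov chain)."""
    follows = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows, seed, n_words=20):
    """Sample a continuation word by word; no notion of meaning or intent is involved."""
    out = [seed]
    for _ in range(n_words):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

toy_corpus = "the model predicts the next word and the next word follows the last word"
print(generate(train_bigram(toy_corpus), seed="the"))
```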

      • > understand speech

        They don’t do that very well at all.

        > classify images

        This is exactly a rectangular grid of locally related points defined by x pos, y pos, and color in (r, g, b), each channel an 8-bit value in (0, 255). The convolutional filters look at activation from small local regions at a time, and then those outputs typically get pooled hierarchically to look at locality between higher-level structures.
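        For concreteness, here is a minimal sketch of that structure in PyTorch (the layer sizes, the 32x32 input, and the 10 output classes are arbitrary choices of mine, just to show the shape of the computation): small convolutional filters slide over local 3x3 patches of the (r, g, b) grid, and pooling layers aggregate neighboring activations so that later layers respond to larger regions.

```python
import torch
import torch.nn as nn

# Each Conv2d filter "sees" only a 3x3 local patch of the image grid;
# each MaxPool2d step aggregates neighboring activations, so deeper
# layers respond to progressively larger regions of the image.
net = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),            # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),            # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),  # map pooled features to 10 class scores
)

x = torch.rand(1, 3, 32, 32)    # one fake 32x32 RGB image, values in [0, 1]
print(net(x).shape)             # torch.Size([1, 10])
```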

        • “> classify images

          This is exactly a rectangular grid of locally related points…”

          The last I checked, they don’t do this very well either, because what they do is recognize textures. They _appear_ to be recognizing objects, but have no ability to reason about objects. If the application is careful to only feed them images of single objects, they appear to be recognizing objects, but there are a lot of famous glitches (see “Rebooting AI” for some examples).

          FWIW, what I generally claim/think is that while “the brain doesn’t work that way” (to quote Jerry Fodor), that doesn’t mean that computational models consisting of regular, layered arrays of locally connected computational elements can’t be persuaded to do interesting work. What it does mean is that those computational models stand or fall on their own properties, not the properties of mammalian neural systems, which are very different beasts. (Your generic mammalian neuron has hundreds of inputs, thousands of outputs, is vastly non-local (neurons are, like, really long), computes logical functions (ANDs, ORs, NOTs) of subsets of its inputs, is placed randomly in space (e.g. the photodetector neurons in the eye) etc. etc. etc.) But the people selling you neural nets want you to think that they’re “based on neurons”. Which is somewhere in the inanely stupid to criminally false advertising range of things.

        • DJL said,
          “But the people selling you neural nets want you to think that they’re “based on neurons”. Which is somewhere in the inanely stupid to criminally false advertising range of things.”

          “based on X” is so vague itself that it might be independent of what “X” is. GIGO

        • So what is it that does the heavy lifting when Alexa understands my voice or Google Photos correctly labels my family photos? I assumed most of those were neural networks?

          Purely as a black box the performance is impressively good.

        • I have met literally no one working on neural networks or using neural networks who wants me “to think that they’re ‘based on neurons'” or who asks that their performance be evaluated based on similarity to actual neural architecture rather than actual performance for some task. Perhaps you hang out with marketing types, but they say ridiculous things in all fields. You seem to be getting very worked up about a straw man.

        • I do work with people who use artificial neural nets for classification problems and compare their performance to that of humans, monkeys, etc. with real neural nets. I think there’s a tendency among some, mostly on the computational side, to see the apparently high performance of neural nets on classification tasks as a sign that they are doing things “the same way” as biological systems which also perform well on those tasks. I don’t think that tendency has much to do with the use of the word “neural”, though I’m sure that has some effect.

          In the end, though, assuming that equivalent levels of performance mean equivalent system architecture is the same fallacy behind the old-school AI idea that “if a computer can play chess, it can think”. People who take the issue seriously point out the many, many differences between these artificial systems and biological ones (https://journals.sagepub.com/doi/full/10.1177/0963721418801342). And, as DJL points out, performance is not really equivalent, since the types of errors made by ANNs are quite different from any kind of error a human would make (e.g., failing to recognize a bus because some background pixels were jittered; https://arxiv.org/abs/1312.6199). (A rough sketch of how such perturbations are constructed is at the end of this comment.)

          I admit I’m not familiar with the marketing hype. I can imagine a semantic confusion, though, in that when people hear “machines can recognize speech”, they think, “the machine can understand what I mean.” It cannot (yet?), but it can do its best to put the sounds it hears into categories it has been trained to distinguish between. This is the “heavy lifting” Rahul mentions, and it works often enough and cheaply enough to be tolerable for many people.
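          To illustrate the pixel-jittering point above, here is a rough sketch of one simple way such adversarial perturbations can be constructed: the one-step fast gradient sign method, a later and simpler procedure than the one in the arXiv paper linked above. `model` stands in for any differentiable image classifier, and the epsilon value is arbitrary.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y_true, eps=0.03):
    """Nudge every pixel by +/- eps in whichever direction increases the
    classifier's loss. To a human the image looks unchanged, but the
    model's prediction can flip."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y_true)   # how wrong the model is on the true label
    loss.backward()                            # gradient of the loss w.r.t. each pixel
    x_adv = x + eps * x.grad.sign()            # small, worst-case pixel jitter
    return x_adv.clamp(0, 1).detach()
```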

        • > I have met literally no one working on neural networks or using neural networks who wants me “to think that they’re ‘based on neurons’”

          This is not at all a straw man. They are, in the strongest, most explicit possible sense, based on how computer scientists thought neurons work.

          https://en.wikipedia.org/wiki/Artificial_neuron#History

          The idea that neural networks were a mathematical approximation of pieces of brain was the dominant paradigm for understanding them until at least the 90s, and funky ideas about how convolutional neural networks approximate the optic nerve were still kicking around when I started reading about them in the early 2010s.

          > So what is it that does the heavy lifting when Alexa understands my voice or Google Photos correctly labels my family photos? I assumed most of those were neural networks?

          Alexa does not understand your voice; it classifies words from a short waveform and then classifies the phrase into a small, finite set of commands. This may come across as overly philosophical considering the practical utility, but the point is that understanding is not the same as “really good at classification.”

          As for image recognition, as the little boy pointed out, they don’t recognize objects, they recognize textures (fantastic turn of phrase by the way, excellently conveys what they actually do, will probably never catch on for that reason). This might also come across as splitting philosophical hairs, but it’s actually of IMMEDIATE practical significance, since the class of image recognition problems where CNNs work well is exactly the class where classifying an image is the same as recognizing what textures are in the image. It also tells you where they’ll fail: where a vague baseball-like texture can convince a network that it’s looking at a baseball.

          https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Nguyen_Deep_Neural_Networks_2015_CVPR_paper.pdf

          The point is that neural networks don’t understand semantic content, and what’s more are not moving in that direction. The research program is to get progressively better and better at minimizing a predefined regret function, often by throwing more and bigger computers at the problem, but that does not converge in the limit to generalization.
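          As a concrete picture of what “minimizing a predefined regret function” looks like in practice, here is a skeletal supervised training loop (the names `model` and `loader` are placeholders for any classifier and any stream of labeled examples). All the loop does is nudge parameters downhill on a fixed, human-chosen loss; nothing in it refers to meaning or intent.

```python
import torch

def train(model, loader, epochs=3, lr=1e-3):
    """Generic supervised training: repeatedly shrink a predefined loss.
    That number is the entire objective; 'understanding' never enters into it."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()      # the fixed, human-chosen objective
    for _ in range(epochs):
        for x, y in loader:                    # batches of (input, label) pairs
            opt.zero_grad()
            loss = loss_fn(model(x), y)        # how "wrong" the model is, as one number
            loss.backward()                    # gradients of that number only
            opt.step()                         # adjust parameters to reduce it
    return model
```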

        • Thanks for doing my homework for me!

          But, yes. Exactly. Every time someone uses the term “neural net” they are claiming to be doing something “neural”, even though they’re not, even if they get it that they’re not. I find it seriously irritating. (Part of the reason for this irritation is that the whole gestalt of the AI world is that we can do the basic things (e.g. reasoning about objects (that they exist in space and across time, break when you drop them, etc.) and symbolic reasoning) and are doing great new things. Both of which are, well, wrong. Again, see “Rebooting AI” for why we need to be working on the basics.)

          Another irritation is historical. I audited Minsky and Papert’s grad seminar on AI, fall term 1972, and they went through in excruciating detail why the Perceptron model couldn’t do much of anything. The neural net fans gussied up their models to do some of the things Minsky proved they couldn’t do, and then went along blithely working with architectures that don’t have those kludged improvements. The NN types now insist Minsky was wrong and that that was ancient history. He wasn’t, and it shouldn’t be seen as such.

          FWIW, on Alexa: my understanding is that speech recognition signal processing is easier than originally thought. I have heard complaints, though, that it does badly on accented voices. Underneath the signal processing is an ELIZA-like program that does canned things for canned inputs. ELIZA was written by Joe Weizenbaum, and was a monster hit: everybody loved talking to it to see what it would say (ca. 1972). Prof. Weizenbaum found this disconcerting. But when he overheard his secretary telling it her deepest secrets, he completely freaked and became a vituperative anti-AI zealot. Which was very strange, since he was a lovely, gentle bloke, one of the nicest people around. Go figure.

    • > The problem for Google, is, I’d guess, that while this “research program” is, in principle, inane and ridiculous, Google wants to get PR mileage out of it.

      My suspicion is that Google wants to actually deploy these or has already deployed these large language models on tasks like content and community moderation, essentially trying to identify things that violate terms of service without ever having to employ a human to review them. Pointing out that these models actually can’t accomplish those tasks is thus anathema, especially if the primary purpose of the deployments isn’t to actually moderate content but to provide cover for the company under fire from angry users and the media.

      This would explain the frequent reports of people’s content being blocked or accounts being banned for being superficially similar to but semantically distinct from content that violates terms of service. For example, people are having their YouTube channels banned for videos on WWII history because they discuss Nazis, people get their accounts terminated for complaining about harassment because they’ve quoted the harassing comments, etc.

      • somebody said,
        “Pointing out that these models actually can’t accomplish those tasks is thus anathema, especially if the primary purpose of the deployments isn’t to actually moderate content but to provide cover for the company under fire from angry users and the media.”

        Good point!

  8. “I question whether it’s possible to make real changes to the makeup of tech without it coming from the very top.”

    Regulation is one option, certainly and I would welcome it.

    I think that corporate governance structure and incentives are key issues here. The first concerns how autonomous an oversight function is within a company (i.e., hiring, budget, chain of command, escalation processes, etc.). The second concerns performance metrics and compensation for individuals in an oversight role, as well as for executives sponsoring the operation of that oversight role. Answering those questions is hard, especially for a public company where the obligation is legally to the shareholder. But if the executive/manager/individual’s responsibility is for long-term shareholder value (not short term, despite what the Street may wish), then there is at least a reason to set up rigorous governance structures for oversight functions that help a company navigate ethics questions raised by evolving technology (and modern mores on equity).
