EU proposing to regulate the use of Bayesian estimation

The European Commission just released their Proposal for a Regulation on a European approach for Artificial Intelligence. They finally get around to a definition of “AI” on page 60 of the report (link above):

‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with

We don’t even have to wonder if they mean us. They do. Here’s the full text of Annex I from page 1 of “Laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts” (same link, different doc).

ANNEX I
ARTIFICIAL INTELLIGENCE TECHNIQUES AND APPROACHES
referred to in Article 3, point 1

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

(c) Statistical approaches, Bayesian estimation, search and optimization methods.

This feels hopelessly vague with phrasing like “wide variety of methods” and the inclusion of “statistical approaches”. Also, I don’t see what’s added by “Bayesian estimation” given that it’s an instance of a statistical approach. At least it’s nice to be noticed.

Annex I looks like my CV rearranged. In the ’80s and ’90s, I worked on logic programming, linguistics, and knowledge representation (b). I’m surprised that’s still a going concern. I spent the ’00s working on ML-based natural language processing, speech recognition, and search (a). And ever since the ’10s, I’ve been working on Bayesian stats (c).

45 thoughts on “EU proposing to regulate the use of Bayesian estimation”

  1. When I read things such as Article 71, “The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher,” and “The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher,” my instinct is that this is legislation with a protectionist character, having in mind Google/Amazon/Tesla/etc.

    Google, for example, would almost surely currently be in violation of the transparency provisions.

    In European law, things need to be explicitly permitted as well as explicitly banned, in a way that has no clear analogue in US law, so it probably also has the goal of facilitating more specific actions such as legislation regulating self-driving vehicles.

    • I have been exposed to several EU laws, e.g., REACH on the chemicals side, and they are usually unnecessarily complex crap.

      I think there’s something to the design-by-committee criticism here.

      When all these bureaucrats from disparate nations come together and cobble together legislation, rarely does anything elegant or even comprehensible result from it.

    • It is a good idea not to confuse the EU with Europe, nor EU law with European law. The UK and Ireland have case law systems, in contrast to most of Continental Europe, which is more Napoleonic in character. Case law allows the application of precedent in a more flexible manner than civil law.

      I’m not as familiar with the US, but I understand that only two states have case law systems – New York and Illinois – and it is for this reason that derivatives are traded under New York law. If anyone can shed any light on this I would be grateful.

    • “let’s say an AI developer tells clients that its product will provide “100% unbiased hiring decisions,” but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination – and an FTC law enforcement action.”

      • Well, “100% unbiased hiring decisions” is definitively false advertising regardless of the data set.

        On the other hand, the insurmountable opportunities that arise from widespread lack of understanding paired with a false sense of understanding can be exhausting.

  2. I agree the EU language is vague. But someone should write up a proposal (or point me to one that already exists) for how to do Bayesian decision theory when there are ethical constraints on the decision that bind. For example, part of Obamacare prohibited offering different premiums to men and women but allowed premiums to vary by age and by whether the person was a smoker. If the insurance company were doing a Bayesian regression model that did not condition on gender, then the coefficients on the other variables that are correlated with gender would be distorted in ways that still allow the omitted variable to influence the predictions. But if the insurance company conditions on gender in its model and uses that to set premiums, then it runs afoul of the law. It seems like the best solution would be to condition on such variables in the modeling step and then marginalize over them in the decision step (see the sketch after this thread), but sometimes companies don’t even collect information on such variables, to avoid liability.

    • Or perhaps the solution is not to create arbitrary barriers? If gender/race/age/whatever have an effect (whether causal or purely correlational), then there’s no reason not to account for it.

      Sure, over-fitting might be a problem… but that’s a problem for the one doing the estimation.

    • I don’t see where “there are ethical constraints”. If green pointy-eared people are more likely to get cancer, why should everyone else have to pay for it? If women live longer lives, their pensions should cost more. C’est la vie.

      • Anon:

        These are other people’s ethical constraints, maybe not yours. Your statement, “Why should everyone else have to pay for it?”, represents one ethical position. The statement, “We’re all in this together and the healthy should take care of the sick” is a different principle, and it’s a position that many people hold.

        • Cross-subsidy is a very inefficient way of subsidizing disadvantaged people. And those who live longer than others are not even disadvantaged; quite the contrary.
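
    Here is a minimal sketch, in Python, of the “condition in the model, marginalize in the decision” idea from comment 2 above. Everything in it is hypothetical: the cost function, its coefficients, and the portfolio gender frequencies are made-up stand-ins for whatever a fitted Bayesian regression would actually produce.

        # Hypothetical expected-cost model that conditions on gender, so the age
        # and smoker coefficients are not distorted by the omitted variable.
        def expected_cost(age, smoker, gender):
            base = 2000.0 + 45.0 * (age - 40) + 1500.0 * smoker  # made-up numbers
            return base + (300.0 if gender == "f" else -300.0)   # made-up gender effect

        # Assumed-known portfolio gender frequencies.
        GENDER_PROBS = {"f": 0.51, "m": 0.49}

        def premium(age, smoker):
            # Decision step: marginalize the protected attribute out, so every
            # applicant with the same (age, smoker) profile gets the same premium.
            return sum(p * expected_cost(age, smoker, g)
                       for g, p in GENDER_PROBS.items())

        print(premium(age=50, smoker=0))  # identical for all genders

    The point of fitting with gender but quoting premiums without it is that the age and smoker coefficients stay undistorted, while the premium itself cannot vary with the protected attribute.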

  3. “Thou shalt not make a machine in the likeness of a human mind.” –Frank Herbert, Dune

    Their definition seems to boil down to “Software = AI.” Every program impacts its environment via outputs from logical processes designed to meet human objectives. The issue is the degree to which AI does this without human oversight. When you get down to it, though, regulating “AI” is a red herring. What they’re really trying to regulate, even if they don’t realize it, is the substitution of automation for human judgment, in whatever form that takes. There are many things that aren’t illegal when a human does them but maybe should be illegal when automated, because where humans have opaque minds, automated systems can be examined or at least tested; and because where humans’ decisions can’t have mass impact without some external system to amplify them (e.g., corporations, media, armies), automated systems have the capacity to be self-amplifying through sheer speed and connectivity.

    On a separate note, this definition of AI is filled with loopholes. It says AI must perform a “set of human-defined objectives.” So if I allow my AI system to evolve its own objectives, it’s not AI? Also, most of these functions can be performed by mechanical computers, without software, albeit far less efficiently. We’ve had bespoke machines that automate almost all of these kinds of activities going back to at least the 1890 U.S. Census! What they call “software” is just one kind of data processing system.

    Terrible. But what can you expect from a document about AI that doesn’t even reference Asimov.

    • > Terrible. But what can you expect from a document about AI that doesn’t even reference Asimov.

      Actually…

      > What they’re really trying to regulate, even if they don’t realize it, is the substitution of automation for human judgment, in whatever form that takes… automated systems have the capacity to be self-amplifying through sheer speed and connectivity.

      This is what Stanislaw Lem advocated for in his book Summa Technologiae. He was Asimov’s contemporary, but is not as well known due to his residence in then-Soviet Poland.

  4. At the bottom of page 7 there is a link to the public consultation document, which makes interesting reading. Some of the fundamental concerns that respondents (29% business, 13% academia) have about AI are discussed. Apparently 90% of respondents were concerned that AI might breach fundamental rights, and 70% were concerned that AI ‘was not always accurate’. I’m intrigued by the 30% who thought it was *always* accurate. There is an interesting divide when it comes to biometric identification, with 55% of private individuals wanting to ban it, but only 4% of businesses.

    • For anyone interested in what Americans think of AI, you can see our polling results (updated weekly) here: https://jasonjones.ninja/jones-skiena-public-opinion-of-ai/

      Americans mildly support further development of AI. They support AI less than space exploration but more than genetic editing of humans. More than anything, Americans fear who will control AI. They fear that Artificial Intelligence will be controlled by people who are greedy, selfish or irresponsible – much more than they fear losing their own job or AI leading to human extinction.

      On average, the American public trusts “artificial intelligence algorithms” to do the right thing just a little less than they trust the average American. Americans trust artificial intelligence algorithms more than Congress or the President, but not as much as their best friend.

  5. The documents make for horrible reading – clearly, somebody had too much time on their hands. But I browsed the documentation and didn’t see any mention of health or medical care (only medical “devices” appears, as far as I can tell). Since there is so much predictive modeling being done for diagnosis and treatment, was that area excluded from the proposed regulations or is it part of the proposal? If it is included, it would have major consequences for health care. Just imagine explaining in lay terms why/how the algorithm decided a particular image was likely to be malignant. And would the data used to train the model have to be public? I’d love that, but I sincerely doubt they intend it. So, if anyone can shed light on how medical data fits into this proposal (or is excluded from it), I’d be interested to know.

    While my impression is overall negative, I am happy to see that someone is taking the potential of AI seriously, and attempting to proactively raise the issues.

  6. Maybe it is a bit of subtle frequentist shade, i.e., there are statistical approaches, and then there are non-statistical approaches like Bayesian estimation.

    Or, more likely, they copied and pasted whatever random stuff they found on Wikipedia.

  7. An only slightly more populist position to consider would be banning all learning, machine or otherwise.

    Isn’t it problematic to require government supervision of unsupervised learning? Doing that could uncreate the universe!

    I have to regard this as a step forward in my personal crusade, which is strictly banning the use of Zermelo-Fraenkel Axiom 7 or equivalent. (That’s the assumption about the existence of an infinite set.) Thanks to Goedel, we know that ZF7 is what is really behind the horrors tied to the Axiom of Choice, like Banach-Tarski oranges that double their size when you cut them into 4 pieces.

    ZF7 is even more directly behind “calculus”, which is at the root of the double disaster that is our [insert epithets of choice] Western Civilization and tooth decay.

    Let’s remove the limits from mathematics!

  8. Lol. If this piece of attempted regulation is anything like other things the EU handles (funding, the vaccine non-rollout), it’s going to be yet another chaotic disaster. Once I agreed to review funding proposals for the ERC, and I was instructed never to write in my review “the proposal plans to do X”; instead, I was instructed (by a non-native speaker of English, who was leading the show) to write “the proposal would do X”, because this makes it clear that the applicant doesn’t yet have the funding to do whatever they want to do. I quit the reviewer position quite quickly after that because of the sheer inanity of the types of questions they wanted me to answer about the proposal; it was as if they wanted me to check some supposedly objective boxes rather than actually comment on the planned work, so that they could quantify the decision to accept or reject.

  9. Hmm, the plot thickens: is this post perhaps part of a well-planned conspiracy? ;)
    “People, I know this is a good chance to crack jokes, but please be aware that this is nonsense, not unlike the claims that EU bans curved bananas. What we’re seeing is a massive, well planned and orchestrated effort to undermine EU’s tech legislation. Don’t be a puppet.” – Roos
    Source: https://twitter.com/teemu_roos/status/1386238091019669507?s=21

    • Matt:

      Can you or Roos or someone else explain what the proposal is about? I don’t have the energy to read it, as it seems to be written in legalese. Legalese is appropriate when writing a law; it’s just hard for me to follow without assistance.

      Also, I’m not quite sure what aspect of Bob’s above post is supposed to be “nonsense.” As I read it, Bob is just quoting from their definition of AI and commenting on that definition, so I can’t see where the nonsense would be coming in.

        • Eric:

          Thanks. The document says:

          The proposed draft regulation lays down a ban on a limited set of uses of AI that contravene European Union values or violate fundamental rights. The prohibition covers AI systems that distort a person’s behaviour through subliminal techniques or by exploiting specific vulnerabilities in ways that cause or are likely to cause physical or psychological harm. It also covers general purpose social scoring of AI systems by public authorities.

          This seems reasonable to me. I guess maybe Roos or someone can tell me where the nonsense is!

        • I think that the “nonsense” part is mostly the remark “not the onion” and many of the reactions. The title “EU proposing to regulate the use of Bayesian estimation” is a bit misleading. One could say just as well that they are proposing to regulate the use of Prolog. Or SQL, for that matter.

          What will be regulated are things like

          the use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm (forbidden)

          or

          AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests (high risk)

          The scope of the regulation is not “artificial intelligence” in general, whether based on deep learning or Bayesian estimation or knowledge bases or symbolic reasoning.

        • Carlos:
          Real question. Any thoughts on why this is couched as ‘AI’ rather than ‘Any’? For example:
          “the use of an [Any] system that exploits any of the vulnerabilities …”
          or
      “[Any] system[s] intended to be used for recruitment …”
      I don’t fully follow why (which is not to say there isn’t a reason) ‘AI’ even needs to be invoked to achieve the objective.

      • I have no idea, tbh. I didn’t think what Bob wrote was out of bounds. It was surprising to me to see the arguments escalate into claims of conspiracy.
