Designing an animal-like brain: black-box “deep learning algorithms” to solve problems, with an (approximately) Bayesian “consciousness” or “executive functioning organ” that attempts to make sense of all these inferences

The journal Behavioral and Brain Sciences will be publishing this paper, “Building Machines That Learn and Think Like People,” by Brenden Lake, Tomer Ullman, Joshua Tenenbaum, and Samuel Gershman. Here’s the abstract:

Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.

The journal solicited discussions, with the rule being that you say what you’re going to talk about and give a brief abstract of what you’ll say. I wrote the following:

What aspect of the target article or book you would anticipate commenting on:

The idea that a good model of the brain’s reasoning should use Bayesian inference rather than predictive machine learning.

Proposal for commentary:

Lake et al. argue in this article that atheoretical machine learning has limitations, and they favor more substantive models to better simulate human-brain-like AI. As a practicing Bayesian statistician, I’m sympathetic to this view—but I’m actually inclined to argue something somewhat different: I’d claim that it could make sense to do AI via black-box machine learning algorithms, such as the famous program that plays Pong or various automatic classification algorithms, and then have the Bayesian model added on as a sort of “consciousness” or “executive functioning organ” that attempts to make sense of all these inferences. That seems to me to possibly be a better description of how our brains operate, and at some deeper level I think it is closer to my view of how we learn from data.
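
To make that idea concrete, here is a toy sketch (an illustration only; the modules, their accuracies, and all the numbers are invented for the example and are not from the Lake et al. article or from my commentary): two black-box “modules” stand in for trained pattern recognizers, each emitting a hard prediction about a latent state of the world, and a Bayesian “executive” layer treats those outputs as data and combines them into a posterior.

```python
# Toy sketch of the proposed architecture: black-box predictors produce noisy
# "inferences" about a latent state, and a Bayesian "executive" layer treats
# those outputs as data and reconciles them. All names, accuracies, and
# numbers here are illustrative assumptions, not anything from the article.
import numpy as np

rng = np.random.default_rng(0)

# Latent state of the world the agent cares about (unknown to the agent),
# say "there is a predator nearby" (1) versus not (0).
true_state = 1

# Two black-box modules (stand-ins for trained deep networks): each looks at
# its own noisy channel and emits a hard 0/1 prediction. The executive never
# sees inside them; it only sees their outputs.
def visual_module(state):
    return state if rng.random() < 0.8 else 1 - state   # assumed 80% accurate

def auditory_module(state):
    return state if rng.random() < 0.7 else 1 - state   # assumed 70% accurate

# Bayesian "executive": treats the module outputs as conditionally independent
# data with known (assumed) error rates and updates a posterior on the state.
def executive_posterior(outputs, accuracies, prior=0.5):
    log_odds = np.log(prior / (1 - prior))
    for y, acc in zip(outputs, accuracies):
        # likelihood ratio P(output | state=1) / P(output | state=0)
        lr = acc / (1 - acc) if y == 1 else (1 - acc) / acc
        log_odds += np.log(lr)
    return 1 / (1 + np.exp(-log_odds))   # P(state = 1 | module outputs)

outputs = [visual_module(true_state), auditory_module(true_state)]
posterior = executive_posterior(outputs, accuracies=[0.8, 0.7])
print(f"module outputs: {outputs}, P(state = 1 | outputs) = {posterior:.2f}")
```

The point of the sketch is the division of labor: the black boxes do the pattern recognition, and the (approximately) Bayesian layer on top is the part that “attempts to make sense of all these inferences.”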

The editors decided they didn’t have space for my comment so I did not write anything more. Making the call based on the abstract is an excellent, non-wasteful system, much better than another journal (which I will not name) where they requested I write an article for them on a specific topic, then I wrote the article, then they told me they didn’t want it. That’s just annoying, cos then I have this very specialized article that I can’t do anything with.

Anyway, I still find the topic interesting and important; I’d been looking forward to writing a longer article on it. In the meantime, you can read the above paragraph along with this post from a few months ago, “Deep learning, model checking, AI, the no-homunculus principle, and the unitary nature of consciousness.” And of course you can read the Lake et al. article linked to above.

8 thoughts on “Designing an animal-like brain: black-box “deep learning algorithms” to solve problems, with an (approximately) Bayesian “consciousness” or “executive functioning organ” that attempts to make sense of all these inferences”

  1. Andrew,

    Actually, at the NIPS conference last week there was some work focused on a similar idea: complementing a powerful predictive machine (a deep learning thing) with another scheme that tries not only to interpret the predictions but also to guide the training, etc.

    I agree with you that this second system should be Bayesian.

  2. Hi Andrew, eagerly waiting for your longer post about this subject; too bad we won’t see it among the responses to this article. It appears that the current “successes” in predictive machine learning share certain characteristics:

    - a response variable that has a correct answer (e.g., is this an image of a cat?);
    - related to that, a response variable with almost no uncertainty (e.g., the answer to a Jeopardy question);
    - no or almost no variability due to human fickleness (e.g., translation, excluding any creative use of the language);
    - artificial strict constraints on the data (e.g., chess rules);
    - readily available training data sets, typically created with huge amounts of human input (e.g., Wikipedia);
    - related to that, extremely large training data sets requiring even more human input, which implies highly repetitive, often sequential, observations;
    - low cost of errors (e.g., hilarious machine-translated signage in China);
    - high human involvement in checking predictions (e.g., is this cat prediction correct?).

    It’s not obvious how this paradigm can progress to models of human thinking.

  3. Back in the late ’60s at Columbia, my college friends and I used to ponder the possibilities of computers simulating human brains. One time, in a conversation with the mathematician Serge Lang, we asked whether he thought computers could ever be made to think like humans. His response, intentionally humorous at the time, was “The question is not whether they can think. The question is will they be happy.”

    Although we are still far from computers that think like humans, we are much closer to that now, and Lang’s facetious remark takes on a more serious cast. If you think about it, except for the purpose of proving it can be done, nobody will really want to do this. There will be no market for computers that think like humans, because that would entail being sometimes moody, defiant, distracted, thinking they know better than you how it should be done (sometimes correctly), etc. What is really sought after is a computer that can do high-level thinking and decision making but does not have its own values, goals, and interests, and never exhibits emotionality (or at least not emotionality that would interfere with carrying out the tasks we want it to do for us). It is by no means clear that this is even possible in principle. It may be that the ability to be creative, to garner novel insights into situations, and other cherished aspects of higher human cognitive function are inextricably bound up with these other aspects of human brain functioning. It is possible that one can no more have one without the other than one can write a program that solves the halting problem. I’m not _au courant_ with AI or computer science research, and I have no idea if anybody has actually looked into this in any serious way.

    If it is true that the desired part of human brain function is inextricably linked to the less desired part, then we probably don’t want to build it. If we do build it, it will be a major problem to define the terms of our relationships with these machines. Silicon slaves? Battlestar Galactica comes to mind.

  4. Andrew,

    Like others, I am disappointed not to see your long-form opinion on Bayesian models of cognition. Over the years, Josh Tenenbaum’s school has pushed a form of Bayesian fundamentalism into cognitive science. Given a cognitive task, they

    1) Develop a probabilistic reformulation of it,
    2) Perform elaborate, at times really skillful, mental gymnastics to cast it as a Bayesian inference problem, and
    3) Go on to claim that the neural substrate therefore does in fact perform Bayesian computation.

    At some level, I am willing to accept an ‘as-if’ approach, in which one says that the information-processing cognitive agent behaves as if it were doing Bayesian computation, just as rational-choice theorists claim that human agents behave ‘as-if’ they were rational actors. But to claim that the human brain, or for that matter the neural substrate, does in fact carry out Bayesian probabilistic inference is a very strong position to take.
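
    As a toy illustration of the kind of model at issue (the hypotheses, prior, and data below are invented for the example, not taken from any particular paper), here is concept learning from a few examples recast as Bayesian inference over hypotheses; the ‘as-if’ reading is only that people generalize as if they computed this posterior, with no claim about how the neural substrate implements it.

    ```python
    # Toy "number game"-style concept learning as Bayesian inference.
    # Hypotheses, prior, and data are illustrative assumptions.

    # Candidate concepts over the numbers 1..100.
    hypotheses = {
        "even numbers":    {n for n in range(1, 101) if n % 2 == 0},
        "multiples of 10": {n for n in range(1, 101) if n % 10 == 0},
        "powers of 2":     {2, 4, 8, 16, 32, 64},
    }
    prior = {h: 1 / len(hypotheses) for h in hypotheses}  # uniform prior

    data = [16, 8, 2]  # examples assumed drawn from the true concept

    # Likelihood under strong sampling (the "size principle"): each example is
    # drawn uniformly from the concept's extension, so small consistent
    # hypotheses are favored.
    def likelihood(examples, extension):
        if not all(x in extension for x in examples):
            return 0.0
        return (1 / len(extension)) ** len(examples)

    unnorm = {h: prior[h] * likelihood(data, ext) for h, ext in hypotheses.items()}
    z = sum(unnorm.values())
    posterior = {h: p / z for h, p in unnorm.items()}

    for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
        print(f"P({h} | {data}) = {p:.3f}")
    ```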

    From a computational point of view, even if one were to establish a consistent probabilistic inductive logic programming framework to study such models, that framework would have to confront computational intractability barriers. And given that the neural substrate is a messy blob of amorphous matter, it’s unreasonable to claim that its computational machinery is isomorphic to a Bayesian computing machine.

    Do you have any thoughts on this?

    Thanks
    Rajesh
