A.I. parity with the West in 2020

Someone just sent me a link to an editorial by Ken Church, in the journal Natural Language Engineering (who knew that journal was still going? I’d have thought open access would’ve killed it). The abstract of Church’s column says of China,

There is a bold government plan for AI with specific milestones for parity with the West in 2020, major breakthroughs by 2025 and the envy of the world by 2030.

Something about that plan sounded familiar. Then I remembered the Japanese Fifth Generation project. Here’s Ehud Shapiro, writing a trip report for ACM 35 years ago:

As part of Japan’s effort to become a leader in the computer industry, the Institute for New Generation Computer Technology has launched a revolutionary ten-year plan for the development of large computer systems which will be applicable to knowledge information processing systems. These Fifth Generation computers will be built around the concepts of logic programming. In order to refute the accusation that Japan exploits knowledge from abroad without contributing any of its own, this project will stimulate original research and will make its results available to the international research community.

My Ph.D. thesis, circa 1989, was partly on logic programming, as was my first book in 1992 (this post isn’t by Andrew, just in case you hadn’t noticed). Unfortunately, by the time my book came out, the field was pretty much dead, not that it had ever really been alive in the United States. As an example of how poorly it was regarded in the U.S., my first grant proposal to the U.S. National Science Foundation, circa 1990, was rejected with a review that literally said it was “too European.”

8 thoughts on “A.I. parity with the West in 2020”

  1. Bob,

    What killed logic programming? I’ve never seen anything on the history of this, and given that it died around the time I was born, I wasn’t there first-hand.

    • What “killed” it was the hype, in the sense that it was riding too high, with too much promise. That, and the fact that it takes a certain kind of thought process to wrap your head around it, which appeals to theorists but not much to the everyday programmer who’s the bread and butter of the industry.

      Of course, it wasn’t really killed. For example, Erlang is a modern language used in high-reliability contexts, and it’s based on Prolog.

      • More precisely, what “killed” the field (and, more generally, symbolic logic-based approaches to difficult problems) was the combination of:

        * Excessive hype, leading to disappointment.

        * Smarter hype from competitors, whose stated goals were harder to assess than logic programming’s, which deflected the disappointment that the modesty (to put it politely) of the achievements would otherwise have caused.

        * A hyper-competitive environment set up by the “management” of research: the “desire for efficiency,” a dubious definition of said “efficiency,” and short-term goal-setting killed anything that did not rapidly produce “deliverables.”

        One can note that the current hype around neural-network-based approaches is reaching the same critical level. Since this hype is more cautious than it was for the logic-based approach, we probably won’t see a “nuclear winter” for them, but I still think there will be some pullback.

      • “…it takes a certain kind of thought process to wrap your head around it…”

        I remember in the mid-’80s, someone hearing me say “…programming in it is like turning off 90% of your brain and just thinking with a tiny piece right at the back…” and guessing I was talking about Prolog.

    • What killed Prolog is that it’s a terribly inefficient and hard-to-program language. It could do a few things based on pattern matching and recursion very elegantly, but most algorithms were a huge pain (see O’Keefe’s book The Craft of Prolog for examples). Now languages like ML, Haskell, and even C++ templates use similar pattern-matching dispatch techniques, so all that practice with Prolog finally paid off when we built Stan in 2011—and the new parameter pack functionality in C++11 leads to entirely Prolog-like programs in C++, which always makes me smile.
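
      [To make the parameter-pack point concrete, here is a minimal C++11 sketch of Prolog-style clause dispatch—my own illustrative example, not code from Stan. Each template specialization plays the role of one Prolog clause (shown in the comments), selected by pattern matching on the list’s head and tail.]

```cpp
#include <cstddef>
#include <type_traits>

// A compile-time list of types, playing the role of a Prolog list.
template <typename... Ts> struct List {};

// len([], 0).
template <typename L> struct Len;
template <> struct Len<List<>> {
  static const std::size_t value = 0;
};

// len([_|T], N) :- len(T, M), N is M + 1.
template <typename H, typename... T> struct Len<List<H, T...>> {
  static const std::size_t value = 1 + Len<List<T...>>::value;
};

// Helper to build [X|Zs] from X and Zs.
template <typename X, typename L> struct Cons;
template <typename X, typename... Ts> struct Cons<X, List<Ts...>> {
  typedef List<X, Ts...> type;
};

// append([], Ys, Ys).
template <typename Xs, typename Ys> struct Append;
template <typename Ys> struct Append<List<>, Ys> {
  typedef Ys type;
};

// append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).
template <typename X, typename... Xs, typename Ys>
struct Append<List<X, Xs...>, Ys> {
  typedef typename Cons<X, typename Append<List<Xs...>, Ys>::type>::type type;
};

static_assert(Len<List<int, char, double>>::value == 3, "len/2");
static_assert(std::is_same<Append<List<int>, List<char, double>>::type,
                           List<int, char, double>>::value, "append/3");
```

      [As in Prolog, there is no iteration, only recursion, and the “clause” that fires is chosen by matching the structure of the arguments.]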

      The real problem was that it billed itself as a logic programming language, whereas in fact it was depth-first search with pattern-matching based branching. That is a very hard paradigm in which to code efficient algorithms. No arrays! The best you could hope for was hashing for roughly constant-time access, but with very high overhead. So other than toy examples, pretty much everything is better done in other languages, even programming theorem provers.

      I don’t think hype kills things. Hype may bring more people into a field than it can support, like, say, web development circa 2000, many practitioners of which were unemployed by 2002 after the bubble burst. But it hardly killed web development. Prolog never had the bandwagon problem—most people couldn’t make heads or tails of it or its purpose.

      I was using (variants of) Prolog to express natural language grammars and to solve constraint problems (like the Zebra Puzzle [the Swede lives two houses from the Norwegian and doesn’t smoke Pall Malls, …, Who owns the zebra?]). It was a decent application that resulted in my first open-source project, the Attribute Logic Engine. I built it because a lot of people didn’t believe the theory in my book, which was largely about a linear type inference system for logic programming languages with inheritance (and proper extensional circularity syntax, which was the big proof that nobody cared about). It let me play with a lot of fun domain theory mathematics, which is a very elegant model of computation and information using topological methods.

  2. The bit that struck me from the quote is “the accusation that Japan exploits knowledge from abroad without contributing any of its own”, which is a good reminder that the kind of just-so reckons that get dropped about China (up until the last couple years or so, happily I haven’t heard them recently) and China’s ability to “innovate” are indistinguishable from the ones people were saying about Japan back when.

  3. I think the real point of the Japanese 5th generation project was education. The Japanese needed a masters program to supply industry with talent. At the time, people joined industry straight out of undergrad programs. So, industry formed a consortium to fill the void.

    So what killed the 5th generation project? There was an economic downturn in Japan… and demand for talent disappeared…

    The situation is different in China today. These days, China has more supply of talent, and more demand. Who knows about the future, but I see a lot of optimism in China, especially among young people (many of whom can’t remember the last AI winter/economic downturn). They are betting the future will be even better than the present, just as the present has been much better than the past.

    I was recently at a major international conference where there was so much confidence in deep nets that I was worried about setting unrealistic expectations. I brought up some of these concerns during a panel, where they were dismissed by someone who may not have been old enough to remember the last downturn. He asserted that sensible people were saying sensible things, and blamed hype on others (such as the press). There is a lot of hype out there. Unrealistic expectations can be very damaging to the field. We all need to take responsibility to make sure that the public doesn’t expect too much from us going forward. There are some videos here https://www.tsdconference.org/tsd2018/video.html that talk about setting realistic expectations.

    I don’t know about expectations in China. It is hard to make predictions, especially about the future. Lots of people are excited about the future. Perhaps this excitement will end badly with a downturn, but I have to say that things in China are a lot better these days than they were in 1993, when I went there for the first time. It would have been unimaginable in those days that China could have a larger economy than Japan. It is possible (and even likely) that this excitement is justified, and things will continue to get better and better for a good long time. Similar comments probably apply to deep nets as well.
