Analog computing and hybrid computing: The view from 1962.

From the proceedings of the December 4-6, 1962, Fall Joint Computer Conference, two researchers from General Electric Company’s Missile and Space Division write:

In general, there are two distinct modes of simulation: mathematical and physical. Mathematical simulation utilizes a mathematical model of the physical system under study. . . .

Physical simulation requires the excitation of the system under conditions which are representative of those encountered in actual system operation. This testing can involve anything from an inclined plane to large multi-million dollar ventures like the Space Environmental Simulator located at General Electric’s Valley Forge, Penna., Space Technology Center. These two types of simulation can be combined by mating physical hardware with a mathematical model. The general purpose computers available today are primarily designed for mathematical simulation. . . .

An electronic analog computer is an array of computational building blocks, or modules, each being able to perform a particular mathematical operation on an input voltage signal and provide a specific output response. These building blocks normally provide the functions of summation, integration with respect to time, multiplication by a constant, multiplication and division of variables, function generation, generation of trigonometric functions, and representation of system discontinuities. All quantities are represented on the analog by continuously varying voltages, restricted on almost all analog computers to the range between -100 and +100 volts. . . .

Data are fed into the analog computer in the form of parameter settings, which are usually associated with the coefficients that exist in the mathematical equations. Data are extracted from the computer in the form of voltages, either as steady-state values which can be read out on a voltmeter, or as varying values which can be recorded on a strip chart recorder or a plotting table. Some of the analog characteristics pertinent to our discussion are:

1. The analog is a parallel machine. All the variables are computed simultaneously and continuously. Thus, the speed with which the calculations are made is completely independent of the size or complexity of the problem.

2. The bigger a problem is, the more equipment is needed, as each piece of equipment works on one part of the problem.

3. Numbers on the analog are fixed point. Every variable must be scaled. The scaling will greatly affect the accuracy of the results.

4. The analog is best suited for solving systems of ordinary linear differential equations, although it can handle many other types of problem in a very satisfactory way.

5. There is no such thing as a computational cycle with the analog, because of characteristic No. 1. The analog can be set to calculate at any rate desired, but in practice there is an optimum time base associated with any particular problem, and attempts to run the problem much faster or slower will severely degrade the accuracy. The analog, generally speaking, is faster than the digital.

6. Analog outputs are almost always accurate to within 1%, but seldom better than 0.1%.

7. It is very easy, with most problems, to introduce extensive changes in the simulation in a matter of minutes.

Although the analog computer was designed primarily for the solution of problems in the aircraft field, its area of application has broadened considerably over the years. . . .

Many of these concerns still arise today, albeit in different form: scalability of computation (items 1 and 2), scalability of workflow (item 7), putting parameters on a natural scale (item 3), precision (item 6), and the idea that the method runs at some natural speed (item 5), which comes up with Hamiltonian Monte Carlo (HMC) and, before that, with efficient Metropolis jumping rules.
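To make the scaling point (item 3) concrete, here is a minimal Python sketch of the bookkeeping an analog programmer had to do: every problem variable gets mapped onto the machine’s ±100 volt range, and the choice of scale factor trades off amplifier overload against precision. The numbers and names are my own illustration, not anything from the 1962 paper.

# Sketch: scaling a problem variable onto an analog computer's +/-100 V range.
# Suppose the variable x is known to stay within [0, 500] problem units.

X_MAX = 500.0            # largest expected magnitude of the problem variable
V_RANGE = 100.0          # machines of the era ran on +/-100 V (see above)
SCALE = V_RANGE / X_MAX  # volts per problem unit

def to_volts(x):
    v = x * SCALE
    # Exceeding the range saturates the amplifiers and ruins the run.
    assert abs(v) <= V_RANGE, "amplifier overload: rescale the problem"
    return v

def from_volts(v):
    return v / SCALE

print(to_volts(500.0))  # 100.0 V: uses the full dynamic range
print(to_volts(5.0))    # 1.0 V: only 1% of range, so machine noise dominates

Scale too conservatively and the signal sits down in the noise; scale too aggressively and the amplifiers saturate. That is the sense in which scaling “greatly affects the accuracy of the results.”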

They then move on to a discussion of digital computing:

The digital computer works by a counting technique and obeys logic rules exactly. The solutions are at discrete points dependent on the size of the time increment used. The smaller the mesh size, the more we approach the continuous solution. In contrast to the analog computer, which uses continuous variables in the form of voltages, the digital computer uses discrete variables, and operates with numbers as opposed to voltages. The digital computer is essentially a very fast calculating machine. . . .

There are a number of digital computer characteristics that are of particular interest in connection with hybrid simulation. These are:

1. It will deal only with numbers. Any problem must be reduced to a series of numerical operations before it can be handled by the computer. This is not to say that every step must actually be written each time. All sorts of aids to compiling programs are available. A program is nothing more than the entire sequence of instructions given to the computer to solve a problem. In actual practice, the machine itself will write most of its own instructions.

2. It will do exactly what it is told. All changes involve writing new instructions. The easier it is to make a change, the more complicated the original instructions have to be to include the option.

3. The results are exactly repeatable, but their accuracy is dependent on the numerical methods used to solve the problem.

4. The computer will perform only one operation at a time. That is, if the instruction reads, “Move number N from location A to location B,” the machine will, for a given period of time, be doing nothing but that.

5. The computer works with increments. None of the variables are calculated continuously. Generally speaking, the larger the calculation increment of the digital computer, the faster and the less accurate is the computation. There is absolutely no drift with a digital computer.

6. Compared with an analog, the digital is very much better equipped to make decisions. These can be made on the basis of comparison, time, reaching a point in the program, or almost any other criterion chosen by the programmer.

7. The digital can store very much more information than the analog. It can store tables, functions of several variables, whole programs, and many other things.

It is almost impossible to list the areas of application of the computer because of the diversity involved. We can say, however, that the digital computer lays sole claim to those problems which store a lot of information, use much logic, or require extreme accuracy. It will calculate trajectories, solve problems in astronomy, simulate mental processes such as learning and memory, analyze games, do translations, help design new computers, and do untold numbers of other tasks. The major effort to discover new computer applications is devoted to the digital area, with the analog a poor second, and the hybrid far behind.

They were right about that! Digital computers really did take over. Again, I find it interesting how much of the discussion turns on workflow, which we can roughly define as a science-like process of exploration carried out by fitting multiple models.
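Item 5 on the digital list, that a larger calculation increment buys speed at the cost of accuracy, is easy to demonstrate today. Here is a minimal sketch (my illustration, not from the paper) using forward Euler on dx/dt = -x, whose exact solution is exp(-t):

import math

def euler(dt, t_end=5.0):
    # Forward Euler on dx/dt = -x with x(0) = 1.
    x, t, steps = 1.0, 0.0, 0
    while t < t_end:
        x += dt * (-x)   # one "calculation increment"
        t += dt
        steps += 1
    return x, steps

for dt in (0.5, 0.1, 0.01):
    x, steps = euler(dt)
    err = abs(x - math.exp(-5.0))
    print(f"dt={dt:5}: {steps:4d} steps, error={err:.2e}")

Fewer, bigger steps finish sooner and land farther from the continuous solution, just as they said.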

They continue with some thoughts on the precision of computation which remain relevant over sixty years later:

The subject of accuracy is so complicated, and dependent on so many factors, that it just didn’t seem possible to summarize it by a mark in a box. While this is to some extent true of all the other characteristics listed, we believe considerations of accuracy fall into a special case.

On an analog computer, the result is usually between 0.1% and 1% of the value inherent in the equations. Whether this is excellent or poor depends on the nature of the problem. In many engineering investigations, this is much more precise than the data upon which the problem is based. The use to which the answer will be put also affects the accuracy required. Determination of the region of stability of a control system to within a millionth of the control range would be valueless, as the nature of the input could affect it much more than that. On a digital computer, the ultimate limit of accuracy is the number of bits in a word. This accuracy is seldom attained by the output variables of a problem, due to the approximations involved in almost any mathematical model, the idiosyncrasies of programming, and the practical necessity of taking reasonably large computing steps. The question concerning accuracy is more often, “How much cost and effort is needed to obtain the required accuracy?”, than “What accuracy is obtainable?” The answer has to be determined separately for each individual problem.
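Their point that word length sets only an ultimate limit, one rarely reached in practice, carries over directly to modern floating point. A quick sketch (mine, not theirs): 64-bit doubles carry about 16 decimal digits, but accumulated rounding in a long computation loses accuracy well before that limit.

import sys

print(sys.float_info.epsilon)   # ~2.2e-16: the word-length limit for doubles

total = 0.0
for _ in range(10_000_000):
    total += 0.1                # each step rounds a little
print(total, abs(total - 1e6))  # noticeably off from the exact 1,000,000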

Next they move on to “hybrid” setups that combine analog and digital computing, sharing their own experiences:

The advantages of a hybrid that we felt to be of most value to the work of the department were in the area of increasing the size and variety of the problems we could solve. The things a hybrid can do to help in that endeavor are:

1. Assign different sections of a problem to each computer. For instance, in simulating a missile, the trajectory calculations can be assigned to the digital, because of the available precision, and the control simulation put on the analog because of its flexibility.

2. Assign different functions to each computer. For instance, all integrations might be assigned to the analog computer, in order to save time and get a continuous output. Or, all function generation might be assigned to the digital computer (where it is known as table look-up).

3. Provide analog plots of digital variables. This is particularly useful in observing the behavior of selected variables while the simulation is in progress. In one case, a stop was put on a 7090 after the first 15 seconds of what would otherwise have been a 10 minute run because it was easy to tell from the behavior of a continuous analog output that a key variable was not behaving quite as desired.

4. Let the digital provide logic for the analog. Things such as switching, scale changing, ending the program, choosing tables to examine, can be readily programmed into the digital and can greatly simplify and possibly even speed up an analog simulation.

5. Allow real hardware to be part of a simulation. Most hardware can readily be connected into the analog, and hybrid operation would allow it to connect to the digital just as easily. Similarly, digital devices can be included in analog operation the same way. Real hardware could also be considered to include people, as part of a control loop.

6. Provide accurate digital printouts of analog variables. Normally, the accuracy with which the analog variables are plotted is less than the accuracy that actually exists in the equipment. Hybrid operation enables selected variables to be converted to digital form and printed out from a digital tape.

The details of this sort of hybrid computing don’t really matter anymore, but the general idea of looking at leaks in the modeling pipeline is still important.
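Their 7090 anecdote, stopping a ten-minute run after fifteen seconds because a monitored output looked wrong, has a direct modern analogue: watch a key variable while a long job runs and abort as soon as it misbehaves. A minimal sketch with made-up dynamics (the divergence and the bound are my own illustration):

def simulate(n_steps, watch_every=100, bound=10.0):
    # Stand-in for a long computation whose key variable we monitor.
    x = 1.0
    for step in range(n_steps):
        x = 1.01 * x    # one step of the (here, diverging) computation
        if step % watch_every == 0 and abs(x) > bound:
            raise RuntimeError(f"x diverged at step {step}: x={x:.3g}")
    return x

try:
    simulate(1_000_000)
except RuntimeError as e:
    print(e)   # caught long before the full run would have finished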

I was also struck by the larger framework of simulation. Of course this makes sense: a missile test is expensive so you want to understand as much as you can using simulation before going out and launching something. In addition to being cost- and time-effective, simulation also makes the live test more effective. The real-world launch gives real-world data which you can compare to your expectations. The better your simulations, the better will be your expectations, and the more you will learn from discrepancies in the live data.

I’ve thought about these issues for a while in the context of model checking and exploratory data analysis (see BDA, starting from the first edition in 1995, and my 2003 article, “A Bayesian formulation of exploratory data analysis and goodness-of-fit testing”), but it was only just now that I realized the connection to workflow and simulated-data experimentation.

If only someone had given me this article to read 40 years ago, back when I was first doing simulations of physical systems. I blame the author of that 1962 article, who easily could have shared it with me at the time. The trouble was that he was too self-effacing.

P.S. The diagram at the top of this post comes from this 1963 article, “Corrected inputs: A method for improved hybrid simulation,” which begins:

Makes sense to me, to use some feedback to reduce transmission errors.

They were doing cool stuff back then, 60 years ago. Just regular guys, no Ph.D. or anything. Kinda like Steven Spielberg’s dad. Maybe that’s one reason I liked that movie so much.

16 thoughts on “Analog computing and hybrid computing: The view from 1962.”

  1. Two observations.

    One. PhDs were far less common in industry then than they are now. The per-capita count of science and engineering PhDs in the US in 1958 was about one-quarter of today’s. And, of course, many of those were in academia. So the assertion that the authors were not PhDs may be misleading or lacking context.

    Two. The FJCC and SJCC were not lightweight enterprises. The mother-of-all-demos was presented at the 1968 FJCC. Abramson presented the Aloha protocol at the 1970 FJCC. Admittedly, they were massive and I do not know how they were refereed (or if they were refereed).

    Bob76

      • There is an article in Wired with that title: https://www.wired.com/2008/12/dec-9-1968-the-mother-of-all-demos-2/.

        Googling the exact string “mother of all demos” generates about 50,000 hits.

        Here’s a link to the wikipedia article: https://en.wikipedia.org/wiki/The_Mother_of_All_Demos

        Here are the opening paragraphs of that wikipedia entry:

        “The Mother of All Demos” is a name retroactively applied to a landmark computer demonstration, given at the Association for Computing Machinery / Institute of Electrical and Electronics Engineers (ACM/IEEE)—Computer Society’s Fall Joint Computer Conference in San Francisco, by Douglas Engelbart, on December 9, 1968.[1]

        The live demonstration featured the introduction of a complete computer hardware and software system called the oN-Line System or, more commonly, NLS. The 90-minute presentation demonstrated for the first time many of the fundamental elements of modern personal computing: windows, hypertext, graphics, efficient navigation and command input, video conferencing, the computer mouse, word processing, dynamic file linking, revision control, and a collaborative real-time editor. Engelbart’s presentation was the first to publicly demonstrate all of these elements in a single system. The demonstration was highly influential and spawned similar projects at Xerox PARC in the early 1970s. The underlying concepts and technologies influenced both the Apple Macintosh and Microsoft Windows graphical user interface operating systems in the 1980s and 1990s.

        • My bad, didn’t realize that mother-of-all-demos meant something specific. It read to me like a sentence unfinished, “The mother-of-all-demos [insert text here] was presented at…”

    • A follow-up comment. A summary of the program for the 1963 FJCC is available at http://www.bitsavers.org/magazines/Computer_Design/196310.pdf.

      There were only 38 papers presented at the conference. I looked at the paper summaries and list of authors. I recognized 10 of the author names. The names I recognized included G. Estrin, W. F. Bauer, A. G. Oettinger, Hewitt Crane, and Max Palevsky.

      Estrin was at UCLA. Vint Cerf was one of his graduate students. Estrin served as chairman of the computer science department for many years.
      IIRC, Oettinger was once the youngest tenured professor at Harvard. Wikipedia states, “Oettinger founded the Computer Science and Engineering Board of the National Academy of Sciences and chaired it for six years starting in 1967. From 1966 to 1968 he was president of the Association for Computing Machinery (ACM).”
      Max Palevsky was a founder of Scientific Data Systems, which was purchased by Xerox. He later was a VC, among lots of other interesting activities.

      I suspect that, if I researched the other authors, I would find many more heavyweights. The author of the paper you referred to clearly played in the big leagues.

      Bob76
      My above comments show that I am old and that I used to be in the computer world.

  2. I have nothing meaningful to add, but I’ll just point out that this is a fascinating post, especially the historical/personal connection! Imagine the foresight needed to set up over fifty years in advance a self-assembled hybrid computer running complex gene regulatory networks that would produce this post!

    • Hopefully we posted well because after all

      > In one case, a stop was put on a 7090 after the first 15 seconds of what would otherwise have been a 10 minute run because it was easy to tell from the behavior of a continuous analog output that a key variable was not behaving quite as desired.

        • The MIT AI Lab’s PDP-6 had a few bits of one of its CPU registers wired to an audio amplifier, so you could write music programs on it. Or just listen to what your program was doing.

        The program that displayed this image:

        https://a4.pbase.com/o12/29/138229/1/170630512.Ir1idEqq.DSC02371.jpg

          was a spiral generator that rolled up an initially straight radius line into a spiral (with ends held fixed at (0,0) and (1,0)) and then modulated the points on that line, but the PDP-6 didn’t have fast sin/cos functions, so the first thing it did was build a table of 10-bit (fixed-point) sin/cos values. This took a bit of time and generated a squeal of gradually increasing pitch until it finished, and then produced some seriously glorious whoops and chirps as it generated the pattern. (This was written some time between ’73 and ’75 or so.)
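          For anyone who hasn’t built one: a fixed-point sin/cos table of the kind described above is just a scaled, rounded lookup. A Python sketch (the table size here is my assumption; the PDP-6 program’s actual layout isn’t specified above):

          import math

          BITS = 10
          AMP = (1 << (BITS - 1)) - 1   # 511: largest magnitude in 10 signed bits
          N = 256                       # table resolution (assumed for illustration)

          sin_table = [round(math.sin(2 * math.pi * i / N) * AMP) for i in range(N)]

          def fast_sin(i):
              # sin(2*pi*i/N) as a fixed-point integer lookup, no trig at runtime.
              return sin_table[i % N]

          print(fast_sin(64), AMP)   # quarter turn: full positive scale, 511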

          Nowadays, computers are so much faster that their operation doesn’t make much sense in the human-detectable audio range: I recently coded up a simple number-theory calculation in (interpreted) Python, and my guesses as to how far it would get in a few seconds were off by several orders of magnitude. Times have changed.

        • I don’t recall that audio generator. But I have fond memories of the PDP-6 and its friend the XGP. Writing documents using TECO on a Model 33 TTY, not so much.

            I was told once that the PDP-6 evolved into the PDP-10: DEC copied the MIT improvements and called their product the PDP-10.

          There was one feature that I loved on that machine, a feature of ITS and TECO really, that I do not recall seeing elsewhere. If you were editing a file named foo.bar.1 and saved it, the saved file would be named foo.bar.2 but the old foo.bar.1 would not be deleted. If you saved again, the saved file would be named foo.bar.3 and foo.bar.1 would be deleted.
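            That save behavior is easy to mimic. A rough sketch of the scheme as described (my reconstruction, not actual ITS or TECO code):

            import os, re

            def versioned_save(basename, text, keep=2):
                # Write basename.(n+1) and keep only the `keep` most recent versions.
                pat = re.compile(re.escape(basename) + r"\.(\d+)$")
                versions = sorted(int(m.group(1)) for f in os.listdir(".")
                                  if (m := pat.match(f)))
                new = (versions[-1] + 1) if versions else 1
                with open(f"{basename}.{new}", "w") as fh:
                    fh.write(text)
                for v in versions[:max(0, len(versions) - (keep - 1))]:
                    os.remove(f"{basename}.{v}")

            versioned_save("foo.bar", "first draft")    # creates foo.bar.1
            versioned_save("foo.bar", "second draft")   # creates foo.bar.2, keeps .1
            versioned_save("foo.bar", "third draft")    # creates foo.bar.3, deletes .1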

          Bob76

    • Raghu:

      You write, “Imagine the foresight needed to set up over fifty years in advance a self-assembled hybrid computer running complex gene regulatory networks that would produce this post!”

      To be fair, the author of that paper had some help preparing the self-assembled hybrid computer. Also he and his collaborator on that project had previously collaborated on three very similar projects. Practice makes perfect!

  3. Related to this and an earlier post, a recent paper:

    https://www.science.org/doi/10.1126/science.ade8450
    Experimentally realized in situ backpropagation for deep learning in photonic neural networks
    Editor’s summary
    Commercial applications of machine learning (ML) are associated with exponentially increasing energy costs, requiring the development of energy-efficient analog alternatives. Many conventional ML methods use digital backpropagation for neural network training, which is a computationally expensive task. Pai et al. designed a photonic neural network chip to allow efficient and feasible in situ backpropagation training by monitoring optical power passing either forward or backward through each waveguide segment of the chip (see the Perspective by Roques-Carmes). The presented proof-of-principle experimental realization of on-chip backpropagation training demonstrates one of the ways that ML could fundamentally change in the future, with most of the computation taking place optically.

    (I haven’t read it.)
