New Control Method: Reactive Logic

Reactor

What I have learned from a career in engineering:

1. We work and play in ongoing dynamic time, but all our logic, arithmetic, and machine control are confined to timeless static frames. We can speak of and write about circuit, machine, and animate behavior, but the fundamental operators of ordinary logic can only describe static states, not dynamic activities.

2. Behavior emulation using standard logic is therefore limited to still frames stitched together with clock pulses or many lines of linear-sequential code (software) like a child’s connect-the-dot drawing. That is why there are millions of lines of code in some programs.

3. There is a dynamic alternative to computation for process management. It is truly real time, it is parallel-concurrent, and it reacts immediately to changes. It uses 100 times fewer components, responds 100 times faster, has little or no run-time software, and it is safer. Control circuits and systems designed and built to this method would perform better and cost less.

I offer to prove the truth of the first two statements (for those who do not already believe them) and demonstrate the truth of the third statement.
 
The difficulty of expressing dynamic situations with only static language and tools (including the currently accepted logic systems and computational solutions) was recently demonstrated when an unknown person on a weblog attempted to describe a process that evolves over time using conventional logic. The situation treated was: "Whenever the boss comes by my office, I will start working. Once I start working, I will keep working until the telephone rings."

The specification attempt, limited to conventional contemporaneous logic, ran on for more than four pages and over 1,600 words, including many formulas and explanations, and it still was not complete!

The failure of static logic to handle changing situations adequately, simply, and concisely is evidence that static logic is not appropriate for dynamic uses and should be replaced with more suitable means, such as those provided by Reactive Logic (RL). RL is not intended to replace all static tools with dynamic ones. Static tools are useful and are correctly used for the organization and management of data, and those tools remain incorporated in RL. It is in the pursuit of simple and concise dynamic process control, based upon elemental process events, that RL can improve functionality, speed, and safety. RL is especially worthwhile when applied to the management of time-, safety-, or mission-critical process control.

Next, I'll show how RL solves this problem in a very short time and a few words.

Best,
reactor
 
The reactive logic (RL) specification for this process is much simpler. The dynamic process, "Whenever the boss comes by my office, I will start working. Once I start working, I will keep working until the telephone rings," can be stated in RL English as: Boss CREATES Working WHILE Telephone CREATES NOT Working. (CREATES, WHILE, and NOT are RL operators.)

This RL process statement can be expressed in simple and concise RL algebra as:
[(B # W) (T # /W)],
which can be monitored and enacted in real time using a small number of dedicated hardware logic gates.
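A minimal sketch of that behavior, assuming CREATES acts like the set input of a latch and CREATES NOT like its reset (the hardware version would be just such a latch), might look like this in C:

<pre>
#include &lt;stdbool.h&gt;
#include &lt;stdio.h&gt;

/* Working (W) is latched: a Boss event (B) sets it and a Telephone
 * event (T) clears it -- the set/reset reading of [(B # W) (T # /W)].
 * In hardware this would be a single set/reset latch. */
static bool working = false;

void on_event(bool boss, bool telephone)
{
    if (boss)      working = true;   /* B # W  : Boss CREATES Working          */
    if (telephone) working = false;  /* T # /W : Telephone CREATES NOT Working */
}

int main(void)
{
    on_event(true,  false); printf("boss passes by: working=%d\n", working);
    on_event(false, false); printf("nothing new:    working=%d\n", working);
    on_event(false, true);  printf("phone rings:    working=%d\n", working);
    return 0;
}
</pre>

If B and T were to arrive at the same instant, this sketch gives the telephone priority; that is an arbitrary choice of the example, not part of the RL statement.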

 
The mathematical logic used in machine control cannot simply specify or VERIFY the "nuts and bolts" concepts of PHYSICAL PROCESS, such as:

CAUSE,
EFFECT,
BEGINS,
ENDS,
OVERLAPS IN TIME,
PERSISTS, or
REPEATS.

Does anyone know why?
 
The reason computation is unable to handle behavioral items such as CAUSE and EFFECT is that ordinary logic can only describe and verify states.

The language for my system is natural and uses English words commonly used to describe physical process activities. It has 11 operators with 56 primitive functions, most of which describe simple behaviors, as opposed to conventional languages, whose primitives can only describe states or, at most, a "next" state.

My language and logic are very natural and reflect the way most thinking people plan and live their lives. The system is not computing, but it can manage, or work alongside, computing, or it can work by itself, as all control systems did before computing.
 
James Ingraham

Perhaps I don't quite get your premise, but it appears that you are talking about functional programming. OVERLAPS IN TIME is an interesting concept, but I'm not sure how it fits. The first thing that popped into my mind was Haskell, as well as its lesser-known offshoot Curry. (Both languages are named after the logician Haskell Curry.) Then I thought of XQuery, since you included a "FOR" keyword. And I can't figure out why a language would need both a "FOR" and a "REPEATS," which I've actually complained about in far more mainstream languages. (Java, C / C++, Pascal, and darn near everyone else have both a for loop and a while loop, and quite possibly a do-while. for handles all of those cases, so why bother with the others?)
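As a quick C illustration of that last point, the other loop forms collapse into for with almost no effort:

<pre>
#include &lt;stdio.h&gt;

int main(void)
{
    int i;

    /* a while loop... */
    i = 0;
    while (i < 3) {
        printf("while %d\n", i);
        i++;
    }

    /* ...is just a for loop with the init and increment slots left empty */
    i = 0;
    for (; i < 3; ) {
        printf("for   %d\n", i);
        i++;
    }

    return 0;
}
</pre>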

So while I find your missive interesting, I think you're going to have a heck of a time convincing anyone that you've actually got something uniquely useful. I myself despise ladder logic (and really all of the IEC 61131 languages), but unless you come up with something that's real-world usable and also marketable, it's not going to matter at all. If you want to take on the low end, there's not enough reason to switch, since the problems aren't all that hard. If you want to take on the high end (nuclear power, air traffic, military), you're going to need a couple of decades of great track record. Bottom line, I guess, is good luck, but I'm not going to hold my breath.

-James Ingraham
 
Curt Wuollet

I have to agree with James. It is almost impossible to move practitioners of any experience even between established, proven alternatives unless it's an absolute imperative. To be a stranger in a strange land with a deadline to meet simply isn't appealing to most automation folks. It's challenging enough on familiar ground. And a great many people doing it aren't all that familiar with their favorite, it seems, from the questions we see here. I spend a lot of time outside my comfort zone because my job involves understanding and often fixing or modifying whatever brand, language, or programming style happens to be on an often obsolete or orphaned system. But when I do something new, I'm not going to make it any harder than necessary without a really compelling reason. I have messed with Plan 9 from outer space on an old printing press, but it wouldn't get a thought for a new project. And I like procedural languages.

Regards,
cww, who has tried to popularize alternatives.
 
Curt,

Thanks for the sympathetic response. I agree that change is tough; an apt point, since my logic takes change as the expected, not the unusual.

RL, or NL, is a reactive language and logic that specifies and responds to behavior, whereas all other computer languages can only specify state. Computational specification and operation are frame-based and proceed from fixed state to fixed state. We can speak of and write about circuit, machine, and animate behavior, but common logic systems do not have dynamic logic operators with which to express dynamic behaviors. The fundamental operators of ordinary logic can only describe static states, not dynamic activities. Behavior emulation using standard logic is therefore limited to still frames stitched together with clock pulses or many lines of linear-sequential code (software), like a connect-the-dot drawing. Software is a means of directing a process controller, and the machine it controls, from one state to the next, but it requires one instruction or more for each step. That is why there are millions of lines of code in some programs.
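A caricature of that frame-by-frame style in C, reusing the boss/telephone example from earlier (the state names are invented for the illustration):

<pre>
#include &lt;stdbool.h&gt;
#include &lt;stdio.h&gt;

/* Invented states: the controller is stepped, one instruction at a
 * time, from each frozen frame to the next. */
enum state { IDLE, WORKING };

enum state next_frame(enum state s, bool boss, bool telephone)
{
    switch (s) {
    case IDLE:    return boss      ? WORKING : IDLE;
    case WORKING: return telephone ? IDLE    : WORKING;
    }
    return s;
}

int main(void)
{
    enum state s = IDLE;

    /* A clock tick or scan loop calls next_frame() over and over;
     * nothing happens between calls, however fast the world moves. */
    s = next_frame(s, true,  false);  printf("frame 1: %d\n", s);
    s = next_frame(s, false, false);  printf("frame 2: %d\n", s);
    s = next_frame(s, false, true);   printf("frame 3: %d\n", s);
    return 0;
}
</pre>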

A reactive logic system is an alternative for the overseer, or management functions.
 
James,

Thanks for your input and good wishes. I invented this system over a period of years, during and after the experience of on-the-job control engineering. It is a new theory of machine control, but not all theory, as I have used it to build real controllers that performed better than their (conventional) predecessors.

In the beginning, I designed PCBs, did the artwork via tape-ups with red and blue tape, and stuffed them with SSI chips. A couple of decades later, it was free software by Xilinx and FPGA simulation on a desktop computer. The implementation means had caught up to my needs. Now I am mostly writing about it and doing simulations in NL5 as needed to support my writing.

One premise is true real time process management with immediate reaction for critical activities (which is not possible in linear-sequential machines):

At present, the same Turing-type machine (TM) that is so suitable for doing the grunt work of data-processing has been applied to the management of physical processes. But linear-sequential data-processing is not an appropriate tool for process management, which is best performed in an asynchronous, parallel-concurrent way. The data-processing solution, when applied to process management, becomes complex and always returns results after the fact, due to busing, polling or sampling, and instruction-fetch and -execution (time-sharing and software) delays. Using data-processing for process management is akin to making the lowest-rated factory assembly worker (who must be guided by explicit instructions at each step) suspend his or her labor and act as manager (again via explicit instructions at each step) whenever supervisory duties are required. It is time we had a management method that is true real time, parallel-concurrent, safe, and immediate, instead of a method that suffers from all the Impediments of Computation (listed elsewhere), which add up to a high cost of ownership.

This solution is not programming of any kind. It is hardware that is arranged in cause-effect chains that are all in parallel. These reactive functions do not share or compete for resources, as they are directly connected, and are always ready to react, much like safety circuits. Responses occur within a few gate-delays, vs. the busing, software, and program-cycle delays of conventional technology.

Re: Overlap: Every individual thing has its starting point, or inception, in space and time. Once begun, each thing continues to exist for a time. Finally, each comes to an end-point, cessation, or termination. Not all things start at the same point, continue for the same time, or cease at once. A great amount of overlap between these conditions for different things occurs in most physical processes. If you want to monitor and control a physical process, it is very helpful to be able to easily specify and recognize overlap.
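A sketch of what "easy to specify" could mean here, with the names invented for the example: if each activity carries an "active" signal latched between its begin and end events, OVERLAPS IN TIME is nothing more than the AND of two such signals, a single gate in hardware:

<pre>
#include &lt;stdbool.h&gt;
#include &lt;stdio.h&gt;

/* Each activity is active between its begin event and its end event. */
struct activity { bool active; };

void begin_event(struct activity *a) { a->active = true;  }
void end_event(struct activity *a)   { a->active = false; }

/* Overlap of two activities: one AND gate on their active signals. */
bool overlaps(const struct activity *a, const struct activity *b)
{
    return a->active && b->active;
}

int main(void)
{
    struct activity pump = { false }, heater = { false };

    begin_event(&pump);
    begin_event(&heater);
    printf("overlap: %d\n", overlaps(&pump, &heater));  /* prints 1 */
    end_event(&pump);
    printf("overlap: %d\n", overlaps(&pump, &heater));  /* prints 0 */
    return 0;
}
</pre>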

Re: Ladder logic: Ladder logic is inherently a parallel-concurrent control method. The original was all switches and electro-mechanical relays (or air-relays as in the Hagan boiler level control system), which were slow. When ladder logic moved into fast transistors and chips, the benefits of parallel-concurrent operation were lost because the control systems were implemented through linear-sequential (LS) means. No matter what the technology, if it must act through LS means, it takes on all the ills and impediments of computation (the Turing paradigm). NL is an alternative process management means that is true real time, parallel-concurrent, safe and immediate.

Charles Moeller
 
James Ingraham

<i>"It is a new theory of machine control..."</i>
<i>"...all other computer languages can only specify state."</i>
<i>"This solution is not programming of any kind. It is hardware that is arranged in cause-effect chains that are all in parallel."</i>

At the risk of being rude, I don't think you've done your homework. It looks like you're a really smart guy who has not bothered to realize that there are other really smart guys out there. There is an enormous amount of work going on right now, much of it dating back decades.

You've described how NL is different from ladder logic, but not how it's different from functional programming. Purely functional languages do not, and in fact can not, specify state. Haskell appeared 25 years ago. Erlang is actually slightly OLDER than Haskell, and is particularly interesting in this context because it was designed to deal with the same kinds of problems you're talking about.

What does NL do differently that isn't already handled by other languages? CUDA, OpenCL, OpenMP, OpenACC, C++ AMP, and a bunch of others are designed for dealing with massively-parallel computation. SequenceL is a functional language that dates back 25 years as well, and now compiles to C++ / OpenCL, which kind of proves that old-school programming can in fact address massive parallelism, though it might be very hard to hand-code.

<i>"When ladder logic moved into fast transistors and chips, the benefits of parallel-concurrent operation were lost because the control systems were implemented through linear-sequential (LS) means."</i>

This is true, but it doesn't HAVE to be that way. Nothing prevents ladder logic from running on a massively parallel system. And while you seem to be fixated on the fact that linear-sequential is inherently slower than parallel, a sufficiently fast linear-sequential system running a sufficiently small amount of ladder logic will appear parallel. That, in fact, is what the vast majority of us experience.
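A toy version of the point, with invented rung logic: the two rungs below are evaluated strictly one after the other, but if a whole pass takes microseconds against a process that moves in milliseconds, the result is indistinguishable from parallel evaluation:

<pre>
#include &lt;stdbool.h&gt;

/* Invented I/O image for the illustration. */
struct io { bool start, stop, level_high, motor, alarm; };

/* One scan: each "rung" is evaluated in sequence against the image,
 * the classic read-inputs / solve-logic / write-outputs cycle. */
void scan(struct io *img)
{
    img->motor = (img->start || img->motor) && !img->stop;  /* rung 1: seal-in motor */
    img->alarm = img->level_high;                           /* rung 2: level alarm   */
}

int main(void)
{
    struct io img = { .start = true };

    /* In a real PLC this sits inside for (;;) { read inputs; scan; write outputs; } */
    scan(&img);
    return img.motor ? 0 : 1;  /* motor sealed in after the first scan */
}
</pre>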

And there are other solutions out there. LabView is inherently non-sequential, and can compile down to FPGAs. Simulink can generate code for PCs or PLCs, or it can spit out HDL so you can program an FPGA or build a chip.

I wouldn't expect anyone to shoehorn Haskell or Scala into process control. But Erlang, LabView, and Simulink are very applicable. I don't doubt your intelligence, but you're up against some world-class guys, and you don't even seem to realize it.

-James Ingraham
 
Curt Wuollet

Yes, here we run into the "good enough" problem, in that the processes we typically deal with are slow enough that the "ladder logic executes in parallel" artifice is entirely adequate. That is, if the PLC were a black box and we had no carnal knowledge of the implementation, for all practical PLC purposes, it does what it's supposed to. To step above that level, the least of the problems we have to solve is speed, correctness, and style of implementation. First would come the massive signal-integrity issues of "square" wiring and random lead dress. Sampling skew would become significant, etc. Time and distance begin to become critical. So, at about the point where there are problems with the paradigm, there are many problems. I confess I have had limited exposure, but I'm trying to visualize just which problems you would be solving. The physics are kind in this regard: you can't really move sewage through a plant any faster, or get chemicals to react, or mechanisms to move, at speeds that break the pseudo-parallel paradigm. Perhaps the power grid or the like?

Regards
cww
 
It seems that the proposed language still relies on groups of logic gates and in that sense is no different from any other control language. The original poster might take a look at Turing's Thesis.

Ultimately, the goal of automatic and servo control is to regulate the state of the process by manipulating the inputs and the outputs.

Sounds like a noble undertaking; good luck with your thesis.
 
James,

Yes, there are a multitude of languages out there, but they are all computer languages. That means they: need (or are) an OS; compile to and execute in assembly code; run on a bus structure in which there is competition for resources; and can only execute one function or thread at a time, hence they are linear-sequential (LS) machines. Erlang, for instance, is run as threaded (time-shared between threads) LS code. Even the massively parallel applications are built on top of several, or many, LS machines. Here is a maxim for you: "If it runs via computation, it is necessarily LS and requires software." This goes back to Turing and the origin of (theoretical) automatic computers (to replace human symbol-manipulators).

Regardless of their ills, it would take an encyclopedia to list all the good these systems have fostered over the past 65 years.

The downside of computational solutions is the large number of components (working parts) that must function perfectly (or almost perfectly) in applications that implement personnel and equipment safety. Programming works great as long as there are no unscheduled faults (component failures). Back in the batch-programming days, a person would wait overnight, or for hours, for results, only to learn that there was a mistake somewhere in the program or a computer fault. Today there is LBIST (logic built-in self-test), but it must be launched (and normal operation suspended) to identify a fault. The question is: how often should LBIST be launched?

My system of reactive logic, a hardware solution, can act as a brace of event-checkers in parallel (with each other), and in parallel with processing, that, on a continuous basis, can check an expected event against any or all of the others as they occur. This "behavioral" approach picks up any events out of sequence, or stuck-at faults, immediately. Since all of these event-sensors are directly connected and sensed on a continuous basis, there is no waiting for polling; checking routines; instruction fetch, decoding, and execution; suspension of services while LBIST is run; etc. There are no buses, registers, buffers, communication protocols, or address values to go wrong. All of the time-, safety-, and mission-critical management can be done in true real time, as the events happen, while data collection, logging, and processing are performed by the usual methods. My system does not rely on a definition of real time as meeting schedules; its real time can be defined as: as it happens, with a lag of a few (2 to 6) gate-delays in the output response, given the input fault conditions. This is the same hardware delay no matter how the decision is made and (in conventional systems) comes on top of all the other conventional delays mentioned above.
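Written out in C purely as a truth-table description of the gate logic (the signal names are invented), each checker is one always-present combinational expression; in hardware nothing steps through it, and the fault output simply follows the inputs a few gate-delays later:

<pre>
#include &lt;stdbool.h&gt;
#include &lt;stdio.h&gt;

/* Invented checks: a valve must never be open while the guard is up,
 * and the pump must never run without flow.  In hardware each function
 * is a handful of gates wired directly to the sensors; there is no
 * loop, no polling, and no instruction stream. */
bool guard_fault(bool valve_open, bool guard_up)
{
    return valve_open && guard_up;
}

bool dry_run_fault(bool pump_on, bool flow_detected)
{
    return pump_on && !flow_detected;
}

bool any_fault(bool valve_open, bool guard_up, bool pump_on, bool flow_detected)
{
    /* one OR gate combining the individual checkers */
    return guard_fault(valve_open, guard_up) || dry_run_fault(pump_on, flow_detected);
}

int main(void)
{
    printf("fault=%d\n", any_fault(true, true, false, true));  /* 1: guard violated */
    printf("fault=%d\n", any_fault(true, false, true, true));  /* 0: all clear      */
    return 0;
}
</pre>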
 
d,

Thanks for the good wishes.

Having designed and built processors, I have a different viewpoint: everything done in computers relies upon hardware transistors and logic gates to do it. Hardware is essential. It performs all the logic and arithmetic operations, and the reception, decoding, and storage of sensed information and data. Every effect created by a hardware-software combination is initiated in and executed by hardware, not software. Hardware houses (control store) and paces (instruction counter) the software instructions. Hardware is indispensable. Controllers can't work without it. It is not the case that hardware is dependent upon software for functionality; it is the other way around. Software depends upon hardware to code it, house it, access it, step through it, and implement it. That turns out to be a benefit, because hardware can be tested with finite resources, while software testing may never end.

"Complete testing of a moderately complex software module is infeasible. Defect-free software product can not be assured."[1]

"Software essentially requires infinite testing, whereas hardware can usually be tested exhaustively."[2]

1. <i>Software Reliability</i> by Jiantao Pan, Carnegie-Mellon University, 1999, http://www.ece.cmu.edu/~koopman/des_s99/sw_reliability/

2. <i>Overview of Software Reliability</i>, http://sw-assurance.gsfc.nasa.gov/disciplines/reliability/index.php

Turing devised a theoretical machine, as mentioned in my post to James (on 1/3/15), that is necessarily linear-sequential in operation and requires software, which is equivalent to Turing's m-configurations and the scanned symbols. These facts have not changed in the 78 years since, through the genesis and growth of computers to the present day. Computers were theorized by Turing to perform symbol manipulation (we call it "data processing"). That is what they are very good at doing. They are not very good at monitoring and acting on real time events. That is because computers do not understand time very well. Anything to be done requires one instruction or more. An interrupt requires an interrupt handler (more instructions).

My system is hardware (no software, no instructions), its own functions work in parallel, and a "reactive machine" can work in parallel with computational devices. The machines that can be built around my system of logic and language are real time and act in parallel-concurrent fashion. They are not data processors, but respond immediately to events.
 
True enough on all counts, but now you have to quantify what constitutes real-time...even with parallel processing, etc.

d
 
d,

Processing, in modern terms, means data-processing. In data-processing, all events and other information are sampled, turned into data (symbols), and stored in memory. Certain event-symbols are also time-stamped with the time of their collection, and that data is likewise filed in memory. Some of the data is selected for arithmetic and logic operations to be performed on it. This requires recalling it from memory and sending it to the ALU (arithmetic logic unit). The results from ALU operations are stored. Some of those results are sent to output buffers and on into the real-world process that is being controlled. The routine of sensing information and storing it, then recalling it for ALU operations, then storing or outputting the results (algorithms, or computational procedures) interferes with, and interrupts, real time response.
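A toy version of that round trip, with the buffer layout invented for the example: the sensed value is stored, recalled, operated on, stored again, and only then pushed toward the output, each stage adding its own delay on top of the sampling that produced the value in the first place:

<pre>
#include &lt;stdio.h&gt;

/* Invented memory and output buffer for the illustration. */
static int memory[2];
static int output_buffer;

void sample(int sensor_value) { memory[0] = sensor_value; }    /* sense and store        */
void alu_pass(void)           { memory[1] = memory[0] * 2; }   /* recall, operate, store  */
void commit(void)             { output_buffer = memory[1]; }   /* send toward the process */

int main(void)
{
    sample(21);   /* each stage runs in turn; the response appears only after all of them */
    alu_pass();
    commit();
    printf("output=%d\n", output_buffer);  /* prints 42 */
    return 0;
}
</pre>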

I have no quarrel with data-processing being handled in this way. DP, however, should not be used for critical response. RL is expressly suitable for critical real time response.

Real time is the continuous time domain. When reacting in RT one reacts NOW, in the minimum time possible, with an electronic response of a few gate-delays, without stopping for, or performing, side issues such as polling, comparing, testing, calculating, checking, etc.

My system of logic allows immediate response to critical anomalies without processing, and in parallel with whatever else is occurring.
 
Charles,

You seem to be confining yourself to certain types of control that are more in the category of quick-responding servo control, rather than the stabilization of processes requiring feedback control of material and thermal balances, supplemented by "reactive" controls associated with safety.

Ultimately, your Reactive Logic is going to be limited by the degree of signal processing required, the presence of transport lags, and non-linear effects in the final control elements. It is more than just the semantics of the logic system being employed.
 