from the Automation List department...
RTOS vs OS
Application Questions and Problems topic
Posted by Praveen Mathew on 17 January, 2005 - 9:30 am
What are the main differences between an RTOS and a normal OS like Unix or Windows?

Why is an RTOS preferred over these for certain applications?

Would appreciate any thoughts on this matter.

Thanks
praveen


Posted by Carlos O'Donell on 17 January, 2005 - 10:26 am
The biggest difference is determinism. An RTOS will have a deterministic scheduler: for any given set of tasks, your process will always execute every N microseconds or milliseconds, exactly, and N stays the same from schedule to schedule.

In UNIX and Windows the schedulers are usually soft real-time (as opposed to a hard real-time RTOS). Soft real-time means that the scheduler tries to ensure your process runs every X milliseconds, but may fail to do so on occasion. Your process may also experience scheduler jitter, being executed after an interval much shorter than X at one point and much longer than X at another. A hard real-time RTOS will always make sure your process runs every X milliseconds by taking time away from lower-priority processes.

There are various RTOS extensions to Linux, including RTAI and RTLinux. Windows XP claims to offer hard real-time behaviour through process priorities, but I don't have much experience there.

This is all very important if you are doing data acquisition and measurement. An RTOS allows for deterministic sampling in software.
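
For illustration only (this sketch is not from the original post): the short C program below wakes up on a 1 ms absolute schedule using clock_nanosleep() and records the worst observed lateness. The period and iteration count are arbitrary choices; on a soft real-time kernel the worst case can occasionally be large, while a hard real-time RTOS bounds it.

    #include <stdio.h>
    #include <time.h>

    #define PERIOD_NS  1000000L        /* 1 ms target period */
    #define ITERATIONS 10000

    int main(void)
    {
        struct timespec next, now;
        long worst = 0;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < ITERATIONS; i++) {
            /* Advance the absolute deadline by one period. */
            next.tv_nsec += PERIOD_NS;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            /* Sleep until the deadline, then see how late we woke up. */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            clock_gettime(CLOCK_MONOTONIC, &now);
            long late = (now.tv_sec - next.tv_sec) * 1000000000L
                      + (now.tv_nsec - next.tv_nsec);
            if (late > worst)
                worst = late;
        }
        printf("worst-case wakeup lateness: %ld us\n", worst / 1000);
        return 0;
    }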

Cheers,
Carlos.


Posted by Dick Caro on 21 January, 2005 - 2:42 pm
Carlos,

Well -- true, but that's not all.

The "classic" difference between an RTOS such as the old ModComp Classic and DEC's RSX-11 (for those of you old enough to remember these minicomputers), and the more recent operating systems based on UNIX and including DOS and Windows in all forms, is Pre-Emption. Those older RTOS's and the minicomputers on which they were based, used hardware interrupts for all significant events
including scheduling timers and external I/O state changes. They had a large number of registers with a set devoted to each interrupt level. For example, program execution on the Modcomp used 16 registers, but it handled 16 levels of priority interrupt, so it needed a total of 256 registers. Hardware interrupts on the Modcomp did not require saving registers before execution, unless one of the levels was used for multiple sub-levels. Typically, interrupts were serviced in only a few CPU cycles, and the interrupted program resumed. Pre-emption is the ability to interrupt an operating program, including the OS itself, with a higher priority interrupt immediately.

Modern CPUs cannot do this, BUT they are now so much faster (1-3000 times) than those old minicomputers, that with efficient register block streaming, large cache memories, and today's fast memory, there is no noticeable difference between an RTOS and a conventional OS EXCEPT in embedded applications. However, none of the current microcontroller architectures used for embedded systems support more than 4 vectored interrupt levels. Today's use of registers in embedded systems not based on the Intel 80xx family tends to be more like RISC processors, in which there is no dedicated set of registers that could be saved. Rather, their large number of registers is used more like a FIFO stack, automatically retaining registers on interrupt. This makes the old-fashioned RTOS unnecessary.

Modern RTOSs simply make sure that a) no interrupt is ever lost, and b) no interrupt can be blocked by a lower priority process.

Determinism is simply that the maximum possible worst-case delay is known and is repeatable. Not quite good enough for an RTOS.

Dick Caro (been there -- done that!)
===========================================
Richard H. Caro, CEO
CMC Associates
2 Beth Circle, Acton, MA 01720
Tel: +1.978.635.9449 Mobile: +1.978.764.4728
Fax: +1.978.246.1270
E-mail: RCaro@CMC.us
Web: http://www.CMC.us
Buy my books:
http://www.isa.org/books
Automation Network Selection
Wireless Networks for Industrial Automation
http://www.spitzerandboyes.com/Product/fbus.htm
The Consumer's Guide to Fieldbus Network Equipment
for Process Control
===========================================


Posted by Michael Griffin on 24 January, 2005 - 6:28 pm
Re: Dick Caro's reply. I have a few minor clarifications.

On January 21, 2005, Dick Caro wrote:
<clip>
> Modern RTOSs simply make sure that a) no interrupt is ever lost, and b) no
> interrupt can be blocked by a lower priority process.
>
> Determinism is simply that the maximum possible worst-case delay is known
> and is repeatable. Not quite good enough for an RTOS.
<clip>

The real difference between an RTOS and a general purpose OS is that with an RTOS the designers have taken care to ensure that the response times are known. This is not as simple as it may sound. Modern general purpose operating system kernels are very large, with several million lines of code. It can be difficult to trace through them to find all the possible sources of delay in response. An RTOS tends to be much smaller than a general purpose OS making guaranteeing the response time more practical.

As well as the difficulties in predicting response time, there are often deliberate design decisions made which affect response time. When an operating system is executing code within itself, it is often necessary to "lock" the system from switching tasks while it is in critical zones. These critical zones are sequences of code which must not be interrupted in order to avoid corrupting system data.

The OS designers generally try to keep these critical zones short, but there are trade-offs involved. For example, designing for shorter or more predictable response times may decrease average throughput. A decrease in average performance might be considered acceptable by someone designing an embedded application, but it might be considered completely unacceptable by someone designing a large-scale database system. Since general purpose operating systems are designed for the desktop and server markets, they are designed with the requirements of those markets in mind.
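
For illustration (a user-space sketch of my own, not kernel code; a kernel would use spinlocks and disabled pre-emption rather than a pthread mutex): one thread holds a lock for a deliberately long 5 ms "critical zone", and a second thread measures how long it is forced to wait before it can proceed. Compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Background work that holds the lock for a deliberately long 5 ms. */
    static void *housekeeping(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            usleep(5000);                /* the "long critical zone" */
            pthread_mutex_unlock(&lock);
            usleep(1000);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, housekeeping, NULL);

        for (int i = 0; i < 20; i++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            pthread_mutex_lock(&lock);   /* the "urgent" task blocks here */
            pthread_mutex_unlock(&lock);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            long waited_us = (t1.tv_sec - t0.tv_sec) * 1000000L
                           + (t1.tv_nsec - t0.tv_nsec) / 1000L;
            printf("waited %ld us for the lock\n", waited_us);
            usleep(2000);
        }
        return 0;                        /* exiting main ends the demo */
    }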

While an OS may or may not be *intended* for use in real time applications, whether it is in fact suitable for a particular real time situation is a matter of judgement. First you must decide what real time deadlines you must meet, and then you must decide what degree of risk of not meeting them you are willing to take. Once you know that, you can select an OS.

A practical example may make some of this clear. A common general purpose operating system is Linux. Until about a year or so ago, the standard kernel version was 2.4. This was not intended as an RTOS, but there are a number of embedded software vendors who would take the standard Linux kernel and modify it (they of course had complete access to the source code) to make it suitable for many real time applications.

The reason why standard Linux 2.4 was not considered to be "real time" is because there were long sections of code which were "locked" while executing. Making it "real time" involved removing these locks. However, the means used to do so had side effects which were unacceptable to enough people who were not interested in real time that these changes were never accepted into the mainstream code base.

The main stream of development for Linux after version 2.4 was to make it more scalable, particularly in the upwards direction. In this context, "scalable" meant being able to use it in larger multi-processor systems with less loss in efficiency. The result of this was Linux 2.6 (the current version). Making it more upwardly scalable though had an interesting side effect - they had to remove or change a lot of the internal locks (although they did so in a way that didn't have the undesirable side effects). The result is that version 2.6 tends to have a much more predictable and shorter response time.

So, does this mean that Linux is an RTOS? The answer is "no" in the sense that it isn't the design intent to be one. However, it can still be suitable for many real time applications.


Posted by Curt Wuollet on 26 January, 2005 - 3:04 pm
Hi Michael

I've been running some controller type code and the latest kernels are indeed very good for variability and latency. To put it in perspective, you are far more likely to miss an automation event due to the heavy filtering in PLC inputs and the slow sampling rate than due to the rare long response to an interrupt. In the PLC time context, I'd say Linux is unquestionably real time. That is, in long term tests you would have 100% on-time completion of the tasks needed to read, solve, and write at any cycle time greater than 1 msec with any practical I/O count. This is with "normal" code without special extensions, just the preemption and scheduling features in a distribution kernel. At this time, the limitation for general automation is I/O. Random wiring, filtering and garden variety output circuits with long on/off times set the upper practical limit, and certainly at those speeds Linux would be real time. Since it's not practical in general automation to use controlled-impedance wiring and impedance matching, I'd say it's good enough for any job you can do with a PLC. If the truth were known, I'd question whether many PLCs are "real time" even in their normal application.

Regards

cww


Posted by Michael Griffin on 29 January, 2005 - 1:03 am
Re: Curt Wuollet's comments:

Conventional PLC CPUs are not "real time". They simply offer *average* scan rates in the tens of milliseconds with deviations of a similar order. Most control applications do not require real time and average reaction times in the tens of milliseconds are more than adequate. Genuine high speed real time tasks in PLCs are typically handled by special hardware, such as counter, stepper, or servo modules. Some PLCs allow some limited code blocks to be scheduled on a timed basis, but this feature is very seldom used even on those PLCs which possess it.

The rate and repeatability of the thread timings I demonstrated in my experiments (in reply to "Re: PC: Ways to do machine control under Windows") are better than the scan rate and repeatability of a typical PLC in service
today. Any advantages that a conventional PLC may have do not lie in speed or determinism.

A typical application for a PC in the problem domain we are discussing would be a computerised test with low data acquisition rates (e.g. less than 100 Hz). A system such as this may "scan" several analogue inputs during a test and act on the results. A PLC could perform the same task (this example was deliberately chosen to compare PC versus PLC). However, the PC offers a simpler way to provide a better operator interface, and to store and distribute the results of the tests. A PLC would "scan", while a PC program would use multiple timed threads to poll the I/O and apply the readings to some set criteria. The tests I conducted in previous messages used a threading method which is analogous to the approach a PC would use in this example.
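
For illustration, a sketch of one such timed thread in C. read_analog_input() is a made-up placeholder for whatever acquisition API is actually in use, and the 25 ms period and 4.5 limit are arbitrary choices; the point is the absolute-deadline loop that gives a PLC-like "scan".

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    #define SCAN_PERIOD_NS 25000000L     /* 25 ms, a PLC-like scan rate */

    /* Placeholder for whatever data-acquisition call is really used. */
    static double read_analog_input(int channel)
    {
        (void)channel;
        return 0.0;
    }

    static void *scan_thread(void *arg)
    {
        double limit = *(double *)arg;
        struct timespec next;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            /* Sleep to the next absolute 25 ms deadline, then "scan". */
            next.tv_nsec += SCAN_PERIOD_NS;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

            double reading = read_analog_input(0);
            if (reading > limit)
                printf("channel 0 over limit: %f\n", reading);
        }
        return NULL;
    }

    int main(void)
    {
        static double limit = 4.5;       /* arbitrary test criterion */
        pthread_t t;

        pthread_create(&t, NULL, scan_thread, &limit);
        pthread_join(t, NULL);           /* runs until the process is killed */
        return 0;
    }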

The timing experiments I conducted were with a stock 2.4 kernel. Some improvements could have been attempted by 1) using a 2.6 kernel, 2) enabling kernel pre-emption, 3) using a faster task switching rate (the standard is 10 ms - some people change this to 1 ms). I would expect any benefits derived from these to depend upon the application.

If we examine each of these, the question of whether to use a 2.6 kernel would be more or less moot, as this is the current version and would be used in a new application anyway. Kernel pre-emption is relevant to fast interrupt
response, but would again be more or less irrelevant to our discussion where we are polling I/O on a constant schedule.

A faster task switch time (e.g. 1 ms instead of the standard 10 ms) might be useful for applications which need the faster thread repetition rate, but I doubt it would do anything for the worst-case deviations I mentioned in the last set of experiments (several samples of approximately 30 msec).

However, there are several special factors which play into these timing deviations which may not apply under other circumstances. The threading library used was that belonging to the Python interpreter. Using the OS threads directly (possibly the POSIX threads) may give a different result. Taking advantage of this would require either a different language (e.g. 'C'), or a different VM (versions of Python operate under other VMs, including Java - I don't know if this would make a difference).

The deviations may have been affected by the I/O operations being performed (the last set of tests included writing to a simulated log file). Another approach may involve writing to a pipe or memory mapped file, and having another process (not just another thread) take the data and write it to the disk file.
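
For illustration, a sketch of the pipe variant (the file name, sample layout and rates are arbitrary): the timing-sensitive side only performs small write() calls into a pipe, while a forked logger process drains the pipe and does the slow disk writes.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    struct sample {
        long   seq;
        double value;
    };

    int main(void)
    {
        int fd[2];

        if (pipe(fd) != 0) {
            perror("pipe");
            return 1;
        }

        if (fork() == 0) {               /* logger process: slow disk I/O lives here */
            struct sample s;
            FILE *log = fopen("test.log", "w");
            close(fd[1]);
            while (log && read(fd[0], &s, sizeof s) == (ssize_t)sizeof s)
                fprintf(log, "%ld %f\n", s.seq, s.value);
            if (log)
                fclose(log);
            _exit(0);
        }

        close(fd[0]);                    /* acquisition side: only cheap write()s */
        for (long i = 0; i < 1000; i++) {
            struct sample s = { .seq = i, .value = i * 0.1 };
            write(fd[1], &s, sizeof s);  /* small writes to a pipe rarely block */
            usleep(25000);               /* stand-in for the 25 ms scan loop */
        }
        close(fd[1]);                    /* EOF lets the logger exit */
        return 0;
    }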

The target repetition rate for the threads which scan and evaluate I/O could be set to run faster, but as you mentioned there may be no benefit to it if the I/O cannot deliver useful data faster or if the characteristic being
measured does not respond faster. Since we are comparing things operating on a PLC type time scale, a good target repetition rate would be 25 msec.

Given the above, a stock kernel with *no* performance tweaks would likely be more than adequate for most PC applications. Higher performance is available via some standard options, which would extend the application range a bit further. To operate reliably in the microsecond range, however, I believe requires a genuine RTOS.


Posted by Curt Wuollet on 31 January, 2005 - 5:47 pm
Hi Michael

It seems we diverge only in the use of interpreted languages for time critical processes. I'm really trying to get there, realizing that speeds and raw power have increased several orders of magnitude since I may have formed my biases. In fact, what amazes me is that the
perception of speed lags way behind the real improvements in throughput as software complexity (bloat) has absorbed the available cycles. Anyway, I doubt I'll code control stuff in Python, but I should take another look. And I don't believe I'll need to resort to assembler, but I'll probably still separate low level stuff from high level stuff. The wonderful thing is that it is becoming much easier to get the performance levels needed in Bonehead C on "normal" Linux. The next step will be when one can easily use hardware interrupts in userland. But that's as much philosophy as anything else. The intense interest in
embedded Linux keeps making things easier and easier for things on the fringes of hard fast realtime. I can run most DAQ cards fast enough to capture waveforms with good fidelity and crunch the numbers on stream. That's as fast as I need at the moment. And I can reliably do anything a PLC can do while I'm doing it. And maybe play DOOM :^)

Regards

cww


Posted by Armin Steinhoff on 31 January, 2005 - 9:58 pm
Hi,

>Re: Dick Caro's reply. I have a few minor clarifications.
>On January 21, 2005, Dick Caro wrote:
><clip>
> > Modern RTOSs simply make sure that a) no interrupt is ever lost, and b) no
> > interrupt can be blocked by a lower priority process.
> >
> > Determinism is simply that the maximum possible worst-case delay is known
> > and is repeatable. Not quite good enough for an RTOS.
><clip>
>
>The real difference between an RTOS and a general purpose OS is that with an
>RTOS the designers have taken care to ensure that the response times are
>known. <

Hm, I believe they take care that the processing is strictly event oriented. The response time is not important as long as the processed results are available by the deadline.

> This is not as simple as it may sound. Modern general purpose
>operating system kernels are very large, with several million lines of code.
>It can be difficult to trace through them to find all the possible sources of
>delay in response. An RTOS tends to be much smaller than a general purpose OS
>making guaranteeing the response time more practical. <

IMHO... it doesn't matter how big the kernel is. It's important how deterministically the kernel responds to events. A problem is mostly the disabling of interrupts in such big non-RTOS kernels... that means interrupt events are suppressed.

>As well as the difficulties in predicting response time, there are often
>deliberate design decisions made which affect response time. When an
>operating system is executing code within itself, it is often necessary to
>"lock" the system from switching tasks while it is in critical zones. These
>critical zones are sequences of code which must not be interrupted in order
>to avoid corrupting system data. <

Yes... and here is the big design difference between RTOS and non-RTOS!

>The OS designers generally try to keep these critical zones short, but there
>are trade offs involved. For example, designing for shorter or more
>predictable response times may decrease average through-put. <

This depends on the 'cost' of context switching... good RTOSes allow fast and efficient context switching.

>[ clip ..]
>
>The reason why standard Linux 2.4 was not considered to be "real time" is
>because there were long sections of code which were "locked" while
>executing. Making it "real time" involved removing these locks. However, the
>means used to do so had side effects which were unacceptable to enough people
>who were not interested in real time that these changes were never accepted
>into the mainstream code base.
>
>The main stream of development for Linux after version 2.4 was to make it
>more
>scalable, particularly in the upwards direction. In this context, "scalable"
>meant being able to use it in larger multi-processor systems with less loss
>in efficiency. The result of this was Linux 2.6 (the current version). Making
>it more upwardly scalable though had an interesting side effect - they had to
>remove or change a lot of the internal locks (although they did so in a way
>that didn't have the undesirable side effects). The result is that version
>2.6 tends to have a much more predictable and shorter response time. <

True... that kernel reacts faster to events.

>So, does this mean that Linux is an RTOS? The answer is "no" in the sense
>that
>it isn't the design intent to be one. However, it can still be suitable
>for many real time applications. <

True... but the real-time performance is still not predictable and a lot of developers are 'fiddling around' to improve it.

Best Regards
Armin Steinhoff
http://www.steinhoff-automation.com


Posted by Michael Griffin on 3 February, 2005 - 5:44 pm
On Jan 31, 2005 02:51, Armin Steinhoff wrote:
<clip>
> >The real difference between an RTOS and a general purpose OS is that with
> > an RTOS the designers have taken care to ensure that the response times
> > are known. <
>
> Hm, I believe they taking care that the processing is strictly event
> oriented. The response time is not important as long as the processed
> results are available at the deadline.
<clip>

However, you do have to know whether the deadlines can in fact be met, so the response times have to be known.

> IMHO... it doesn't matter how big the kernel is. It's important how
> deterministically the kernel responds to events. A problem is mostly the
> disabling of interrupts in such big non-RTOS kernels... that means
> interrupt events are suppressed.
<clip>

The reference to the size of the kernel is with respect to how practical it is to ensure an OS behaves correctly as an RTOS. An RTOS adds design and testing criteria which are beyond what a conventional OS requires. The more code which is present in the kernel, the more difficult it is to ensure that the real time criteria have been met. Although the basic design problem is the same in either case, it is important to keep the scale of the problem manageable.


Posted by Armin Steinhoff on 31 January, 2005 - 10:11 pm
Hi All,

>Carlos,
>
>Well -- true, but that's not all.
>
>The "classic" difference between an RTOS such as the old ModComp Classic
>and DEC's RSX-11 (for those of you old enough to remember
>these minicomputers), and the more recent operating systems based on UNIX
>and including DOS and Windows in all forms, is
>Pre-Emption. Those older RTOS's and the minicomputers on which they were
>based, used hardware interrupts for all significant events
>including scheduling timers and external I/O state changes. They had a
>large number of registers with a set devoted to each
>interrupt level. For example, program execution on the Modcomp used 16
>registers, but it handled 16 levels of priority interrupt, so
>it needed a total of 256 registers. Hardware interrupts on the Modcomp did
>not require saving registers before execution, unless one
>of the levels was used for multiple sub-levels. Typically, interrupts were
>serviced in only a few CPU cycles, and the interrupted
>program resumed. Pre-emption is the ability to interrupt an operating
>program, including the OS itself, with a higher priority
>interrupt immediately. <

Preemption happens on two levels... at the program level and at the hardware level. Preempting at the hardware level (or interrupt level) leads to interrupt nesting. It preempts interrupt service routines...

Operating programs can be preempted by the scheduler (triggered by events)... e.g. when a program with a higher priority requests the CPU.

>Modern CPUs cannot do this, <

Sorry, but that's not correct. All modern CPUs - including the x86 line - allow that kind of preemption as described above. But interrupt nesting is not supported by all RTOSes...

> BUT they are now so much faster (1-3000 times) than those old
> minicomputers, that with efficient
>register block streaming, large cache memories, and today's fast memory,
>there is no noticeable difference between an RTOS and
>conventional OS EXCEPT <

No, no... there are remarkable BIG differences!

> in embedded applications. However, none of the current microcontroller
> architectures used for embedded
>systems support more than 4 vectored interrupt levels. <

The x86 CPUs, for example, support 15 hardware interrupt levels... an exception is the PPC line of CPUs.

> Today's use of registers in embedded systems not based on the Intel 80xx
>family, tends to be more like RISC processors in which there is no
>dedicated set of registers that could be saved. Rather, their
>large number of registers are used more like a FIFO stack automatically
>retaining registers on interrupt. This makes the
>old-fashioned RTOS unnecessary.
>
>Modern RTOSs simply make sure that a) no interrupt is ever lost, <

This depends on the device driver and the hardware interface of the device... it doesn't depend on the RTOS.

> and b) no interrupt can be blocked by a lower priority process. <

You are mixing up two things here. The execution of a 'program' can't block an interrupt. What is important is that a hardware interrupt with a lower priority should not block an interrupt with a higher (hardware) priority.

>Determinism is simply that the maximum possible worst-case delay is known
>and is repeatable. Not quite good enough for an RTOS. <

If you mean your definition of determinism... yes, then you are right :)

Best Regards
Armin Steinhoff
http://www.steinhoff-automation.com


Posted by raghu kiran on 18 October, 2008 - 7:48 am
Hi all.......

In a Linux device driver, who schedules the tasklets or workqueues which are used to implement bottom halves?

Will the scheduler schedule these tasklets at a safer time?


Posted by M Griffin on 18 October, 2008 - 4:53 pm
There is a book on-line at the following address which gives an explanation of writing Linux device drivers.

http://www.xml.com/ldd/chapter/book/index.html

Note that this book is not the current edition. For the newest edition, you will need to buy it at the book store. This is a long and complex subject, which isn't suited to discussion in a forum like this one.


Posted by Vladimir E. Zyubin on 21 January, 2005 - 6:35 pm
Hello automation,

There is only one remarkable difference: some OSes are advertised as RT ones, the others as general purpose ones.

--
Best regards.
= Vladimir E. Zyubin
= Friday, January 21, 2005, 5:43:33 PM =


Posted by Peter Clout on 10 February, 2005 - 2:12 pm
This has been an interesting thread I believe. May I suggest that there are at least three distinct markets for "RTOS" capabilities?

1. There is the mass produced, low value market. Cash registers, home and small office printers etc. etc. Production runs 10,000 and up

2. There is the low volume, high security market. Aerospace and military. (Automotive comes between 1 and 2, I believe.)

3. One and few off systems like custom factory automation.

The high-volume systems pare down costs of production by the fraction of a penny while the factory automation systems require on-going flexibility.

I think that all other things being equal, 1 and 2 prefer minimal kernels for the delivery system with the inconvenience (read higher costs) of cross-development while the factory automation systems benefit from the
flexibility of self-hosted development (read: a general-purpose operating system).

All these systems require high-dependability OSs, aerospace especially so. Small kernels provide this more easily as the complexity is lower and they do not have to protect against system developer error at run-time. General-purpose OSs require rock-solid design and capabilities to ensure high-dependability.

Peter

Peter Clout
Vista Control Systems, Inc.


Posted by maphil philip on 7 January, 2007 - 12:10 pm
In an RTOS the timing behaviour is important. An RTOS is deterministic and a normal OS is non-deterministic. Contrary to a normal OS, the goal of an RTOS is to minimize complexity. Not all embedded applications need an RTOS, but by using an RTOS efficiently we can provide correctness, protection, etc.


Posted by Mudit Aggarwal on 20 April, 2007 - 1:19 am
I understand that an RTOS should have deterministic behaviour. But what is there in an RTOS which makes it deterministic that is not there in a normal OS?

Pre-emption, low interrupt latency, and priority scheduling can exist in a normal OS also, so what exactly makes an RTOS deterministic?


Posted by Michael Griffin on 21 April, 2007 - 10:44 am
What gives the RTOS the deterministic behaviour is how it is written. Most operating systems have "locks" preventing interruptions in "critical sections". Every part of an RTOS kernel is written so that it can be interrupted at almost any time, which requires that locked sections of code must be as few and as short as possible. This means the latency between an event and the response to it can be accurately "determined" (known).

There are also often (but not always) special scheduling calls in an RTOS which can be used to help ensure that the most critical tasks get priority over the less critical ones. Whenever the RTOS designer has to make a choice between responsiveness and efficiency, he will in most cases choose responsiveness.
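
For illustration, the rough Linux equivalent of such a call (Linux is not a classic RTOS, but it exposes a similar fixed-priority facility; this needs root or CAP_SYS_NICE, and the priority value is an arbitrary choice):

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        /* Priorities run 1..99 on Linux; 80 is an arbitrary "important" level. */
        struct sched_param sp = { .sched_priority = 80 };

        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }
        /* From here on this process pre-empts all ordinary (SCHED_OTHER)
         * tasks and yields only to higher-priority real-time tasks. */
        puts("running with real-time priority");
        return 0;
    }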

In contrast, a general purpose kernel will often be written with large sections that cannot be interrupted (locks are applied). This means there can be long (and indeterminate) periods of time for which external events must wait. Generally, no one knows how long these periods of time can be. The OS designer will almost always choose to maximise average throughput rather than responsiveness.

Having said the above, sometimes we get lucky and attempts to improve efficiency in a general purpose OS will also improve real time response. This happened several years ago when the Linux OS kernel was being changed to improve the ability to use multiple CPUs. Using dozens of CPUs efficiently requires the ability to interrupt the OS kernel in a manner more like an RTOS than a general purpose OS. Recent work on reducing power consumption (for embedded applications) has had similar effects.

The net result is that mainstream development for Linux happens to produce a result which is useful to people producing real time versions of Linux. It is expected that within a couple of years, producing a real time version of the standard Linux kernel will just require changing an option and re-compiling. Some distributions may ship the RT kernel version as an option (many currently provide alternate kernel versions with different options). At present, there are RT versions of Linux, but they have extensive internal changes (although fewer now than before) from the standard distribution.

MS-Windows is a different story. The standard MS-Windows OS is used in a much narrower range of applications than Linux is, and can't be efficiently used in very large or very small applications. For embedded use, Microsoft offers a completely different OS called "Microsoft Windows CE", which has an interface layer that acts in a manner somewhat familiar to someone who has written programs for the standard versions of MS-Windows.

The above is a very brief summary which doesn't attempt to discuss some of the other features which a specialised RTOS will offer which may make them more suitable for smaller embedded applications. Not every RTOS is suited for every RT application. However, discussing that in any detail is a subject for a book, not a short message.


Posted by Vladimir on 23 April, 2007 - 6:59 am
RTOS is just a means for positioning to the target group. If we compare Windows and QNX we do not find any valuable differences. Microkernel? OK. RT means "a microkernel architecture". That's all. Deterministic behaviour? Just B.S. Where is the criterion? It is absent. Discrete.

--
Best regards,
Vladimir


Posted by Michael Griffin on 24 April, 2007 - 2:18 am
In reply to Vladimir: An RTOS does not have to use a micro-kernel and a use of a micro-kernel does not make an OS an RTOS. Good examples of these are respectively, the RT versions of Linux which use a monolithic kernel, and Minix 3 or Hurd which have micro-kernels but are not an RTOS.

Many people consider a micro-kernel to be a good basis for an RTOS because the small size of the kernel means it is easier to verify the length of the "locked" (uninterruptable) code sections (because there is less code to review and maintain). The rest of the OS processes are pushed out to modules with lower privilege levels which can be interrupted at any time just like a user program.

Micro-kernels are also popular in RTOS designs because an RTOS is often used in small embedded systems. The modularity of the micro-kernel design makes it easier to strip it down to the bare essentials for that particular application, thereby saving EPROM and RAM.

The disadvantage of a micro-kernel is that it runs more slowly than the alternative (monolithic kernel) on typical hardware, and is more difficult to write and debug (and so tends to incorporate potential improvements more slowly). Micro-kernels are popular with theoretical computer scientists but all of the popular general purpose operating systems today use monolithic kernels (specialised ones like QNX are the exception).

The criteria for deterministic behaviour in an RTOS is that an interrupt is always serviced within a specific period of time, or that a process is always run at a specific interval. However, using an RTOS does not automatically make a complete system "deterministic". That requires proper design of the overall application, hardware, and system. The RTOS is just a tool in the toolbox of the application designer.


Posted by Vladimir Zyubin on 27 April, 2007 - 12:29 am
> In reply to Vladimir: An RTOS does not have to use a micro-kernel
> and a use of a micro-kernel does not make an OS an RTOS. Good
> examples of these are respectively, the RT versions of Linux which
> use a monolithic kernel, and Minix 3 or Hurd which have
> micro-kernels but are not an RTOS. <

It looks like the apophatic theology... definition by negations. :-)

What are the RTOS features? I see no difference between QNX and Windows. Microkernel architecture only.

> Many people consider a micro-kernel to be a good basis for an RTOS
> because the small size of the kernel means it is easier to verify
> the length of the "locked" (uninterruptable) code sections (because
> there is less code to review and maintain). The rest of the OS
> processes are pushed out to modules with lower privilege levels
> which can be interrupted at any time just like a user program. <

Microkernel architecture allows us to close the question about multitasking logical parallelism at all. We can easily share the kernel between any multicore architecture. And there is no scheduler
problem: latencies, preemptive algorithms, timesharing, priorities, etc. in MCA.

> Micro-kernels are also popular in RTOS designs because an RTOS is
> often used in small embedded systems. The modularity of the
> micro-kernel design makes it easier to strip it down to the bare
> essentials for that particular application, thereby saving EPROM and
> RAM. <

And makes it easy to share the tasks between the cores, i.e. to transform logical parallelism to physical one.

> The disadvantage of a micro-kernel is that it runs more slowly than
> the alternative (monolithic kernel) on typical hardware, and is more
> difficult to write and debug (and so tends to incorporate potential
> improvements more slowly). Micro-kernels are popular with
> theoretical computer scientists but all of the popular general
> purpose operating systems today use monolithic kernels (specialised
> ones like QNX are the exception). <

Yes. Parallelism is more difficult to deal with.

> The criteria for deterministic behaviour in an RTOS is that an
> interrupt is always serviced within a specific period of time, or
> that a process is always run at a specific interval. However, using
> an RTOS does not automatically make a complete system
> "deterministic". That requires proper design of the overall
> application, hardware, and system. The RTOS is just a tool in the
> toolbox of the application designer. <

Any interrupt demands a non-zero time. In a multicore parallel system with a microkernel OS it demands a minimal time interval for handling. And it
is localised, i.e. it depends on the local task structure only.

As to the word "deterministic": determinism - the philosophical doctrine that all events including human actions and choices are fully determined by
preceding events and states of affairs, and so that freedom of choice is illusory.

So, personally can make the following statement only: any digital system is deterministic by definition.

As to me, RT in our field is just a means to use logical operations with time entities: pauses, latencies, timeouts, etc. in order to synchronise the control algorithm with the physical processes on the controlled object. In other words, any control algorithm is RT by definition. If a control system has problems with synchronisation (or just demands any manipulation of priorities to be within the specification), it is just a badly designed system. IMO.

--
Best regards,
Vladimir E. Zyubin


Posted by Michael Griffin on 27 April, 2007 - 4:13 pm
In reply to Vladimir Zyubin (April 27, 2007 12:27:20 am):

VZ: Microkernel architecture allows us to close the question about
> multitasking logical parallelism at all. We can easily share the
> kernel between any multicore architecture. And there is no scheduler
> problem: latencies, preemptive algorithms, timesharing, priorities,
> etc. in MCA.
MG: I don't believe that a microkernel inherently solves any of these, at
least not in a way that wouldn't be equally open to a monolithic kernel. There is nothing about a microkernel that makes it automatically useful with a multicore CPU (or multiprocessor system).

MG:
> > The disadvantage of a micro-kernel is that it runs more slowly than
> > the alternative (monolithic kernel) on typical hardware, and is more
> > difficult to write and debug (and so tends to incorporate potential
> > improvements more slowly). Micro-kernels are popular with
> > theoretical computer scientists but all of the popular general
> > purpose operating systems today use monolithic kernels (specialised
> > ones like QNX are the exception). <


VZ: Yes. Parallelism is more difficult to deal with.
MG: Parallelism is indeed more difficult to deal with, but the difficulty I
was referring to isn't parallelism. With a monolithic kernel, you are dealing with essentially one program (the OS kernel) and can debug it as such. With a microkernel, you are dealing with multiple cooperating programs (microkernel plus "server modules") which are operating at different CPU privilege levels, with control passing back and forth through interfaces that are intended to act as barriers between them. Standard debugging techniques don't handle this very well.

On the surface a microkernel is simpler to debug because it is a series of small modules. In practical terms though it doesn't work so well with the common CPUs available today. The user program makes a call to a "server" module which then calls the microkernel which then calls another server module which then calls the microkernel to gain access to the hardware. It is easy for the programmer to get lost in these back-and-forth calls through the interface "gateways". If the CPU hardware allowed the microkernel to delegate specific address ranges to the "server" (subsystem) modules this would be much simplified (and faster), but unfortunately that isn't the case for commodity hardware.

VZ: Any interrupt demands a non-zero time. In multicore parallel system
> with microkernal OS it demands minimal time interval for handling. And it
> is localised, i.e. it depends on the local task structure only.
MG: What you are describing is asymmetric versus symmetric multi-processor
systems, not microkernel versus monolithic kernel. There are also monolithic real time systems which reserve a particular core (or processor) for real time tasks, while the operating system and non-real time tasks run on a different processor (many mobile phones work this way). This is in fact the "easy" (or at least easier) way to do "real time". It is much harder to get the same results with a single CPU, or with a symmetrical system (where all CPUs are treated equally).
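
For illustration, the Linux calls behind that "reserve a core" idea (a sketch only, assuming a multi-core machine; in practice it is combined with a boot option such as isolcpus so that nothing else is scheduled on the reserved core):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(1, &set);                /* CPU 1 is the "reserved" core */

        if (sched_setaffinity(0, sizeof set, &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        puts("real-time work now confined to CPU 1");
        return 0;
    }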

VZ: So, personally can make the following statement only: any digital system
> is deterministic by definition.
>
> As to me, RT in our field is just a means to use logical operations
> with time entities: pauses, latencies, timeouts, etc. in order to
> synchronise control algorithm with the physical processes which are
> on the controlled object. In other words, any control algorithm is RT
> by definition. If control system has problems with synchronisation (or
> just demands any manipulations with priorities to be within the
> specification), it is just a badly designed system. IMO.

The difference between an RTOS and a general purpose OS is really a matter of emphasis. If you asked the designer of a general purpose OS "what is the worst case latency in your OS", they would probably answer "I don't know". It isn't something that they generally worry about unless it gets so long that someone important enough complains about it. If you ask an RTOS designer the same question, they can give you a definite answer. Keeping this number as small as possible is their entire raison d'etre.

However as I said before, using an RTOS does nothing magical by itself for an application. It is just a tool in the toolbox of the control system designer. The entire system (hardware, OS, application) has to be properly designed and selected by someone who knows what they are doing or the entire "real time" effort is a waste of time.

Most industrial applications however do *not* require an RTOS, and using an RTOS where it isn't needed adds unnecessary complexity. People often fall into the trap of thinking that "embedded" or "small" or "fast" or "reliable" are synonymous with "real time" when that manifestly isn't the case.


Posted by Vladimir E. Zyubin on 29 April, 2007 - 12:50 pm
Good day, Michael!

Saturday, Apr 27, 2007 4:13 pm, Michael Griffin wrote:
MG: I don't believe that a microkernel inherently solves any of these, at
MG: least not in a way that wouldn't be equally open to a monolithic kernel.
MG: There is nothing about a microkernel that makes it automatically useful
MG: with a multicore CPU (or multiprocessor system).

The key words are "independency of functioning" or "weakly connected functioning". That circumstances make microkernel architecture
automatically useful with a multicore system. Logical multitasking paralelism can
be easely transformed to physical parallelism, but monolitic OS can not.

And that is one of the problems of parallelism: we have to deal with the so-called combinatorial explosion of complexity, which immediately appears when we try to create a set of weakly connected heterogeneous parallel modules.

That is the answer to why supercomputer programming (so-called parallel programming) is not common practice, but just an esoteric field of programming.

VZ: Any interrupt demands a non-zero time. In a multicore parallel system
>> with a microkernel OS it demands a minimal time interval for handling. And it
>> is localised, i.e. it depends on the local task structure only.

MG: What you are describing is asymmetric versus symmetric multi-processor
MG> systems, not microkernel versus monolithic kernel. There are also monolithic
MG> real time systems which reserve a particular core (or processor) for real
MG> time tasks, while the operating system and non-real time tasks run on a
MG> different processor (many mobile phones work this way). This is in fact
MG> the "easy" (or at least easier) way to do "real time". It is much harder to
MG> get the same results with a single CPU, or with a symmetrical system (where
MG> all CPUs are treated equally).

I made the simple statement: multicore systems do not need any "smart" scheduler. The possibility of having a unique core for every unique task
eliminates the RT problem (in the understanding many of us have in our heads).

VZ: So, personally can make the following statement only: any digital system
>> is deterministic by definition.
>>
>> As to me, RT in our field is just a means to use logical operations
>> with time entities: pauses, latencies, timeouts, etc. in order to
>> synchronise control algorithm with the physical processes which are
>> on the controlled object. In other words, any control algorithm is RT
>> by definition. If control system has problems with synchronisation (or
>> just demands any manipulations with priorities to be within the
>> specification), it is just a badly designed system. IMO.

MG> The difference between an RTOS and a general purpose OS is really a matter of
MG> emphasis. If you asked the designer of a general purpose OS "what is the
MG> worst case latency in your OS", they would probably answer "I don't know". It
MG> isn't something that they generally worry about unless it gets so long that
MG> someone important enough complains about it. If you ask an RTOS designer the
MG> same question, they can give you a definite answer. Keeping this number as
MG> small as possible is their entire raison d'etre.

IMO, it is a very disputable definition of an RTOS. For example, we could easily transform an "ordinary" OS into an "RT" one just by calculating the worst-case latency.

MG> However as I said before, using an RTOS does nothing magical by itself for an
MG> application. It is just a tool in the toolbox of the control system designer.
MG> The entire system (hardware, OS, application) has to be properly designed and
MG> selected by someone who knows what they are doing or the entire "real time"
MG> effort is a waste of time.

Yes. It has to be properly designed. One of the requirements is that "the application shall not need any dirty games with priorities", in particular in order to provide robustness during the lifecycle (error corrections, upgrades, enhancements, and other modifications).

MG> Most industrial applications however do *not* require an RTOS, and using an
MG> RTOS where it isn't needed adds unnecessary complexity. People often fall
MG> into the trap of thinking that "embedded" or "small" or "fast" or "reliable"
MG> are synonymous with "real time" when that manifestly isn't the case.

I personally prefer not to use the phrase "real time" at all. The words "embedded", "small", "fast" and "reliable" look more credible and understandable.

--
Best regards,
Vladimir E. Zyubin


Posted by Michael Griffin on 30 April, 2007 - 11:22 pm
In reply to Vladimir E. Zyubin: While you could calculate the worst case latency for an OS that wasn't intended to be an RTOS, it wouldn't be "easy" for a large OS. A typical modern monolithic OS kernel is large, complex, has many possible execution paths, and is continuously evolving. By the time you finished your analysis the OS itself would have changed significantly due to the normal development process, rendering your calculations moot. A microkernel is attractive in these applications because the kernel itself is small enough to make this sort of analysis practical.

People who produce "RT" versions of Linux concentrate on removing OS features that cause latency rather than trying to deal with the complexity of calculating the effects. They have had some good fortune recently in that new features intended to make the OS perform better in multi-processor systems or save power also happen to be features that make the OS more suitable as an RTOS. This means that the remaining RT features are now being merged into the mainstream kernel as they no longer conflict with normal use. This in turn means the regular and RTOS versions of Linux will eventually be made from the same code base by just changing some compiler settings.

An alternative to analysing all the possible paths through the OS kernel is to run the OS through a series of simulated loads and measure the response times. In this case, you can only say that the OS will "probably" respond within a certain time under certain conditions. It happens that I ran some simple tests along these lines a while ago with both Linux and MS-Windows and reported the results here.

Another interesting point about microkernels while we are on the subject is the reason why there is a good deal of non-RTOS academic work being done on them. Many people are interested in designing an OS kernel that is "provably correct". That is, the OS kernel can be proven to be without error by means of formal mathematical proofs. This may be possible for a small microkernel, but not possible on a practical basis for a large monolithic kernel. The effort required for the proof grows exponentially with size, meaning that for a large enough OS the proof could take decades or centuries to calculate.


Posted by Armin Steinhoff on 30 April, 2007 - 11:13 pm
Hi All,

On April 27, 2007, Vladimir Zyubin wrote:
> On April 24, 2007, Michael Griffin wrote:
> > In reply to Vladimir: An RTOS does not have to use a micro-kernel
> > and a use of a micro-kernel does not make an OS an RTOS. Good
> > examples of these are respectively, the RT versions of Linux which
> > use a monolithic kernel, and Minix 3 or Hurd which have
> > micro-kernels but are not an RTOS. <
>
>It looks like the apophatic theology... definition by negations. :-)
>
>What are the RTOS features? <

The most important feature of an RTOS is producing results in a timely manner (meeting deadlines). There is no warranty that a MS-Windows OS can do that (CE excluded).

> I see no difference between QNX and
> Windows. Microkernel architecture only. <

Windows drivers are part of the Microsoft OS kernel... QNX drivers are Resource Managers which run in their own protected address space!
That's the most important architectural difference between MS-Windows and QNX6 (and QNX4).

There are also big differences in process/thread scheduling policies between MS-Windows and QNX4. (Partitioning of CPU time, for example...)

> > Many people consider a micro-kernel to be a good basis for an RTOS
> > because the small size of the kernel means it is easier to verify
> > the length of the "locked" (uninterruptable) code sections (because
> > there is less code to review and maintain). The rest of the OS
> > processes are pushed out to modules with lower privilege levels
> > which can be interrupted at any time just like a user program. <
>
>Microkernel architecture allows us to close the question about
>multitasking logical parallelism at all. We can easily share the
>kernel between any multicore architecture. And there is no scheduler
>problem: latencies, preemptive algorithms, timesharing, priorities,
>etc. in MCA. <

Separating drivers from the kernel is the biggest advantage of micro kernel systems.

> > Micro-kernels are also popular in RTOS designs because an RTOS is
> > often used in small embedded systems. The modularity of the
> > micro-kernel design makes it easier to strip it down to the bare
> > essentials for that particular application, thereby saving EPROM and
> > RAM. <
>
>And makes it easy to share the tasks between the cores, i.e. to transform
>logical parallelism to physical one.
>
> > The disadvantage of a micro-kernel is that it runs more slowly than
> > the alternative (monolithic kernel) on typical hardware, and is more
> > difficult to write and debug (and so tends to incorporate potential
> > improvements more slowly). Micro-kernels are popular with
> > theoretical computer scientists but all of the popular general
> > purpose operating systems today use monolithic kernels (specialised
> > ones like QNX are the exception). <
>
>Yes. Parallelism is more difficult to deal with. <

QNX shows that handling parallelism need not slow down the operation of a microkernel OS. It's just a design issue...

> > The criteria for deterministic behaviour in an RTOS is that an
> > interrupt is always serviced within a specific period of time, or
> > that a process is always run at a specific interval. However, using
> > an RTOS does not automatically make a complete system
> > "deterministic". That requires proper design of the overall
> > application, hardware, and system. The RTOS is just a tool in the
> > toolbox of the application designer. <
>
>Any interrupt demands a non-zero time. In a multicore parallel system
>with a microkernel OS it demands a minimal time interval for handling. And it
>is localised, i.e. it depends on the local task structure only.
>
>As to the word "deterministic":
>determinism - the philosophical doctrine that all
>events including human actions and choices are fully determined by
>preceding events and states of affairs, and so that freedom of choice
>is illusory.
>
>So, personally can make the following statement only: any digital system
>is deterministic by definition. <

Sorry... every MS-Windows system is a digital system. Are they working deterministically in a timely manner?

Best Regards,
Armin Steinhoff

www.steinhoff-automation.com


Posted by Vladimir Zyubin on 5 May, 2007 - 8:06 pm
Hello Armin, I'm glad to see you.

Armin Steinhoff: The most important feature of an RTOS is producing results in a timely manner (meeting deadlines). There is no warranty that a MS-Windows OS can do that (CE excluded).<

That is impossible. Only an insane vendor would claim that in the product warranty... but all of them make statements that look like it, in an irresponsible form.

Vladimir Zyubin, previously: I see no difference between QNX and Windows. Microkernel architecture only. <

Armin Steinhoff: Windows drivers are part of the Microsoft OS kernel... QNX drivers are Resource Managers which run in their own protected address space! That's the most important architectural difference between MS-Windows and QNX6 (and QNX4).<

Armin Steinhoff: There are also big differences in process/thread scheduling policies between MS-Windows and QNX4. (Partitioning of CPU time, for example...)<

Yes. It is the microkernel architecture I speak about. For a multicore architecture, the organisation of multitasking physical parallelism is mostly the same in both cases: unique address spaces, and the lack of any smart scheduler algorithm. There is no need to share CPU time because of the large number of CPUs.

Michael Griffin: Many people consider a micro-kernel to be a good basis for an RTOS because the small size of the kernel means it is easier to verify the length of the "locked" (uninterruptable) code sections (because there is less code to review and maintain). The rest of the OS processes are pushed out to modules with lower privilege levels which can be interrupted at any time just like a user program. <

Vladimir Zyubin, previously: Microkernel architecture allows us to close the question about multitasking logical parallelism at all. We can easily share the kernel between any multicore architecture. And there is no scheduler problem: latencies, preemptive algorithms, timesharing, priorities, etc. in MCA. <

Armin Steinhoff: Separating drivers from the kernel is the biggest advantage of micro kernel systems. <

Yes, it is a great approach from a reliability point of view.

Michael Griffin: Micro-kernels are also popular in RTOS designs because an RTOS is often used in small embedded systems. The modularity of the micro-kernel design makes it easier to strip it down to the bare essentials for that particular application, thereby saving EPROM and RAM. And makes it easy to share the tasks between the cores, i.e. to transform logical parallelism to physical one.
The disadvantage of a micro-kernel is that it runs more slowly than the alternative (monolithic kernel) on typical hardware, and is more difficult to write and debug (and so tends to incorporate potential improvements more slowly). Micro-kernels are popular with theoretical computer scientists but all of the popular general purpose operating systems today use monolithic kernels (specialised ones like QNX are the exception). <

Vladimir Zyubin, previously: Yes. Parallelism is more difficult to deal with. <

Armin Steinhoff: QNX shows that handling parallelism need not slow down the operation of a microkernel OS. It's just a design issue...<

I spoke about the complexity of application programming. As for microkernel OSes, I think they are more suitable for a multicore platform.

Michael Griffin: The criteria for deterministic behaviour in an RTOS is that an interrupt is always serviced within a specific period of time, or that a process is always run at a specific interval. However, using an RTOS does not automatically make a complete system "deterministic". That requires proper design of the overall application, hardware, and system. The RTOS is just a tool in the toolbox of the application designer.
Any interrupt demands a non-zero time. In a multicore parallel system with a microkernel OS it demands a minimal time interval for handling. And it is localised, i.e. it depends on the local task structure only.
As to the word "deterministic": determinism - the philosophical doctrine that all events including human actions and choices are fully determined by preceding events and states of affairs, and so that freedom of choice is illusory.
So, personally can make the following statement only: any digital system is deterministic by definition. <

Armin Steinhoff: Sorry... every MS-Windows system is a digital system. Are they working deterministically in a timely manner?

Yes, they are. By definition. Any digital system is deterministic. And I think nobody will seriously speak about freedom of choice in Windows. And a large number of control algorithms are implemented as Windows tasks. And I cannot imagine more "real time" things than control algorithms.

--
Best regards,
zyubin


Posted by Ghulam Murtaza on 7 February, 2011 - 3:36 am
RTOS (1)

Used to run computers embedded in machinery, robots, scientific instruments and industrial systems. Typically, it has little user interaction capability and no end-user utilities, since the system will be a "sealed box" when delivered for use.

Examples: Wind River, QNX, Real-time Linux, Real-time Windows NT

RTOS (2)
An important part of an RTOS is managing the resources of the computer so that a particular operation executes in precisely the same amount of time every time it occurs. In a complex machine, having a part move more quickly just because system resources are available may be just as catastrophic as having it not move at all because the system was busy.


Posted by pvbrowser on 7 February, 2011 - 9:57 am
Could a standard OS be used as a "realtime system" ?

For example if you use a Linux without graphical user interface just as a server sitting in a rack with no direct user interaction.

Suppose you measure the CPU load while your complete automation is running and, let's say, you are doing a backup and copying a big file over Ethernet at the same time. If you found that the CPU load is below 60% in that case, wouldn't this mean "realtime"?


Posted by Ken E on 7 February, 2011 - 12:12 pm
There are different definitions for realtime. Most of the time when someone says RTOS they mean a system that has guaranteed preemption with known latencies. Normal linux doesn't have this. This is why RTLinux/RTAI/Xenomai are used in conjunction with the linux system to attain realtime performance.

There have been efforts to make native linux more RTOS capable such as the preempt_rt patches and Xenomai user space threads which can switch from Xenomai scheduler to linux scheduler (such as to write to disk) when you want to.

KEJR


Posted by Steinhoff on 8 February, 2011 - 4:05 am
> There are different definitions for realtime.

There are different foggy understandings of real-time :)

> Most of the time when someone says RTOS they mean a system that has guaranteed preemption with known latencies. <

What is also needed is a real-time scheduler ... working event- and/or deadline-oriented. The CFS scheduler of standard Linux is able to do real-time scheduling!

A "RTOS" provides only important real-time features for an application which must operate in real-time.

That means a "RTOS" is an operation system which is able to support in most cases the real-time operation of such an application. A "RTOS" is only a part of a real-time system which consists out of hardware, application software and the operating system.

> Normal linux doesn't have this. <

Parts of the PREEMPT_RT patch have in the meantime become part of the standard Linux kernel ... and this is an ongoing process.

> This is why RTLinux/RTAI/Xenomai are used in conjunction with the linux system to attain realtime performance. <

Yes, these are the typical "two kernel" approaches. A better approach is the PREEMPT_RT patch with a single SMP (multi-core) kernel. The BKL (Big Kernel Lock) has been removed from kernel version 2.6.37 ... this will lead to excellent real-time features with the PREEMPT_RT patch.

> There have been efforts to make native linux more RTOS capable such as the preempt_rt patches and Xenomai user space threads which can switch from Xenomai scheduler to linux scheduler (such as to write to disk) when you want to. <

Yes, and every "non real-time driver" of the standard Linux kernel is able to kill the real-time behavior of the second "real-time" kernel :)

Regards
Armin Steinhoff


Posted by Steinhoff on 7 February, 2011 - 4:54 pm
> Could a standard OS be used as a "realtime system" ? <

That's not possible. A standard OS doesn't work event oriented with a short reaction time ... so it isn't useful for most real-time applications. An RTOS only provides the basis for real-time operation of an RT application.

> For example if you use a Linux without graphical user interface just as a server sitting in a rack with no direct user interaction. <

At best, use PREEMPT_RT Linux ...

> Suppose you measure the CPU load while your complete automation is running and, let's say, you are doing a backup and copying a big file over Ethernet at the same time. If you found that the CPU load is below 60% in that case, wouldn't this mean "realtime"? <

A typical RTOS provides e.g. low latency for the processing of events and priority / deadline oriented scheduling of tasks.

Real-time operation means providing correct computing results at the right time. That's all ... but in most cases hard to realize :)

Regards
Armin Steinhoff

http://www.steinhoff-automation.com


Posted by curt wuollet on 7 February, 2011 - 7:42 pm
Well, it depends on your definition of realtime. You can certainly use Linux with a few kernel options set for "soft" realtime and it's really quite good depending on the application. It will be "fast enough" for most applications. But with a standard OS, the CPU utilization doesn't really control whether your process could be swapped out, for example, if it's waiting on I/O. Or some other binding resource could slow you down. If you need hard realtime, you would use RTLinux and most of the issues go away. With a standard OS, you can be at 1% cpu utilization and some background process, garbage collection or cache operations will delay things. With the realtime options enabled in Linux these sorts of things get interrupted so your process can run and with RTLinux, your process is the highest priority and everything else runs when it can. But without any explicit realtime provisions, you can't really guarantee anything even if your process is the only user process running and the CPU utilization is low.
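
For illustration, one such explicit provision on Linux (a sketch only, not a complete recipe): locking the process's memory removes the chance of being paged out, which is one of the surprise delays mentioned above.

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Lock everything mapped now and in the future so the process can
         * never be paged out; needs privilege or RLIMIT_MEMLOCK headroom. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return 1;
        }
        puts("process memory locked against paging");
        return 0;
    }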

Regards
cww


Posted by Steinhoff on 7 February, 2011 - 4:39 pm
> RTOS (1)
>
> Used to run computers embedded in machinery, robots, scientific instruments and Industrial systems. Typically, it has little user interaction capability, and no end-user utilities, since the system will be a "sealed box" when delivered for use. <

This is true for embedded real-time executives ... but not for complete real-time operating systems like e.g. QNX or PREEMPT_RT Linux.

> Examples: Wind River, QNX , Real-time Linux, Real-time Windows NT

> RTOS (2)
> An important part of an RTOS is managing the resources of the computer <

Every operating system does this.

> so that a particular operation executes in precisely the same amount of time every time it occurs. <

It need not execute in the same amount of time ... it must produce correct results at the right time (the so-called deadline).

> In a complex machine, having a part move more quickly just because system resources are available may be just as catastrophic as having it not move at all because the system was busy <

Not meeting a deadline is catastrophic for a real-time application.

Regards
Armin Steinhoff
