Latency between motion and IO



Jacob Kranendonk wrote:

> But first, the main problem we should
> focus on is this delay in axis movement after a M3/M5
> or M8/M9 command.

The delay is probably due to the loose coupling between the IO and motion systems, via the task controller. Here's what happens, taking this small program as an example:

N1 G1 X5 Y5 F35 M8
N2 M9

(Note that we're using M8/M9, flood coolant on/off, to turn the laser on/off. Jacob wired the parallel port for flood coolant to the laser.)

On line N1, what should happen is that the laser turns on (M8) and motion to 5,5 begins at the same time. When the tool reaches 5,5, the laser should turn off immediately, per the M9 on line N2.

What actually happens is this. When the task controller reads line N1, the interpreter puts a "coolant on" and a "move to 5,5" onto the interpreter list. The sequencing code in the task controller sends the coolant-on command to the IO controller. Some time will elapse between when the command is sent and when the IO controller reads it. That time is a function of the CYCLE_TIME settings for the [TASK] and [EMCIO] sections. Worst case, the IO controller could have just finished its cycle when the command was written, and has to wait a full cycle to read it. The default value of the [EMCIO] CYCLE_TIME is 0.1 seconds, so that's 0.1 seconds of latency right there.

The IO controller will shortly see the new command and run through a state table that turns the output bit on and sets the status to DONE; at the end of the full control cycle the DONE gets written out. I don't know for sure that another cycle isn't required between when the command comes in and when it's finally done. I'll assume it's efficient and no more time is required.
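
To make that concrete, here's roughly the kind of per-cycle handling I mean, assuming the efficient case where the bit flip and the DONE happen in the same cycle. This is not the actual EMC IO code, just an illustrative sketch, and all the names in it are made up:

/* Illustrative sketch only -- not the actual EMC IO controller code. */
enum io_command { IO_NONE, COOLANT_ON, COOLANT_OFF };
enum io_status  { EXEC, DONE };

static int coolant_bit;              /* stands in for the parallel port bit */
static enum io_status status = EXEC; /* written to the status buffer each cycle */

/* Called once per [EMCIO] CYCLE_TIME. */
void ioCycle(enum io_command cmd)
{
    switch (cmd) {
    case COOLANT_ON:
        coolant_bit = 1;             /* turn the output bit on */
        status = DONE;               /* command completed this cycle */
        break;
    case COOLANT_OFF:
        coolant_bit = 0;
        status = DONE;
        break;
    default:
        break;                       /* nothing new this cycle */
    }
    /* the status buffer gets written out at the end of the cycle, where
       the task controller will see the DONE the next time it polls */
}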

Meanwhile the task controller is waiting on the DONE status, which it gets by polling the IO status buffer each cycle. Worst case, the IO controller could have written its status just after the task controller read it, and the task controller will have to wait another task cycle. The default task cycle time is 0.010 seconds, so the total is now 0.11 seconds.

Now, the task controller sees that the IO is done, and then can release the motion command. I don't know for sure that another cycle isn't required between when the IO command is seen as done and the motion command is written out. I'll assume it's efficient and no more time is required.

The motion controller will shortly see the new command. Worst case, the motion controller could have just finished its cycle when the command was written, and has to wait a full cycle to read it. The motion controller is pretty quick, and this time is small (less than a millisecond).

So, there are at least 0.11 seconds of latency between the IO bit turning on and the motion initiating, more if there are some dead cycles where the various controllers are making up their minds.
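
Put as a formula, the worst-case latency for the handoff is roughly

  latency = [EMCIO] CYCLE_TIME + [TASK] CYCLE_TIME
          = 0.1 + 0.010
          = 0.11 seconds

with the default cycle times, which is why cranking those settings down helps.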

The same series of events applies to turning the bit off with the M9 on line N2. Here, the task controller needs to wait until the motion has completed, then send a command to the IO controller to turn the bit off.

The latency can be reduced by running the task and IO controllers faster, e.g.,

[TASK]
CYCLE_TIME = 0.010

[EMCIO]
CYCLE_TIME = 0.010

but this only lessens the problem; it doesn't solve it. You can go the extreme route and force the task and IO controllers to run full-out, not waiting on a timer at all, by setting their cycle times to 0:

[TASK]
CYCLE_TIME = 0

[EMCIO]
CYCLE_TIME = 0

Setting the cycle times to 0 will minimize the latency at the expense of CPU time; the GUI will be sluggish, for example. You can see the cycle times that result by going to the View -> Diagnostics... menu and looking at the heartbeats for task and IO. These count up each cycle, so you can clock them and compute the effective seconds per cycle you're getting. This varies with your CPU speed and with what other processes you're running, since the task and IO controllers aren't running under RT Linux.
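
For example (made-up numbers): if the task heartbeat advances by 2000 counts over 10 seconds on your watch, the task controller is getting 10 / 2000 = 0.005 seconds per cycle.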

A better solution would be to tightly integrate motion with IO, in the motion controller itself. The trick is tying this into the NC interpreter. Does anyone know how this looks in any commercial NC machine, e.g.,

N1 G1 X1 Y1 F35 M101 P1

where M101 means set IO point 1, P1 means to the on state?
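
For what it's worth, here's a sketch of how the tight coupling could look inside the motion controller: the IO change rides along with the motion segment, so the same real-time cycle that starts (or finishes) the segment also writes the bit, with no task/IO handshake in between. This is just an illustration of the idea, not existing EMC code, and all the names are made up:

/* Sketch only -- not existing EMC code.  A motion segment that carries
   a synchronized IO change, so the RT motion cycle that starts the
   segment also writes the bit. */
typedef struct {
    double x, y, z;     /* segment endpoint */
    double feed;        /* feed rate */
    int    io_index;    /* which IO point, e.g. 1 for "M101 P1" */
    int    io_value;    /* desired state: 1 = on, 0 = off */
    int    io_valid;    /* nonzero if this segment carries an IO change */
} SYNCED_SEGMENT;

The interpreter would fill in the io_ fields when it sees the M-code on the same line as the move, and the motion controller would write the bit in the same real-time cycle it starts (or, for an "off", finishes) the segment, so the latency drops to a motion cycle instead of the task/IO round trip.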

--Fred


