Possible Bug in interaction of CYCLETIME and SECONDSPERTICK

Forum related to ERIKA Enterprise and RT-Druid version 2

Moderator: paolo.gai


Possible Bug in interaction of CYCLETIME and SECONDSPERTICK

Post by Kaltzi » Mon Nov 09, 2015 9:34 am

Hi all,

I constructed a small scenario in which ERIKA behaves very strangely on my Infineon TriBoard TC275TE (A-Step), and I would like to understand why this happens.

In my OIL specification I defined a counter that looks like this:


COUNTER SystemCounter {
	CPU_ID = "Core_0";
	MINCYCLE = 1;
	MAXALLOWEDVALUE = 2147483647;
	TICKSPERBASE = 1;
	TYPE = HARDWARE {
		DEVICE = "STM_SR0";
		SYSTEM_TIMER = TRUE;
		PRIORITY = 2;
	};
	SECONDSPERTICK = 0.000001;
};
and alarms that look like this:


ALARM Core1_10ms_Task_Alarm {
	COUNTER = SystemCounter;
	ACTION = ACTIVATETASK {
		TASK = Core1_10ms_Task;
	};
	AUTOSTART = TRUE {
		ALARMTIME = 1;
		CYCLETIME = 10000;
	};
};
According to the specification, this task has a cycle time of 10 ms, because 0.000001 * 10000 = 0.01 s = 10 ms.
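As a sanity check, the relationship between SECONDSPERTICK, CYCLETIME, and the resulting period can be sketched in a few lines of Python (cycletime_ticks is a hypothetical helper for illustration, not part of ERIKA or RT-Druid):

```python
# Hypothetical helper: convert a desired alarm period (in seconds) into the
# CYCLETIME tick count for a counter with a given SECONDSPERTICK.
def cycletime_ticks(period_s: float, seconds_per_tick: float) -> int:
    ticks = period_s / seconds_per_tick
    # The period must be an integer number of counter ticks.
    assert abs(ticks - round(ticks)) < 1e-6, "period is not a whole number of ticks"
    return round(ticks)

# Both configurations from this post describe the same 10 ms period:
print(cycletime_ticks(0.01, 0.000001))  # SECONDSPERTICK = 1 us   -> 10000 ticks
print(cycletime_ticks(0.01, 0.0001))    # SECONDSPERTICK = 100 us -> 100 ticks
```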

But with SECONDSPERTICK defined as above (0.000001), core 0 executes each assigned task only once, and the tasks on the other cores no longer meet their cycle times because the scheduler sits idle.

Changing the definition of the Counter to


SECONDSPERTICK = 0.0001;
and of the Alarm to


CYCLETIME = 100;
(0.0001 * 100 = 0.01 s, the same as before)

solves the problem: core 0 runs fine and all cycle times are correct.

For a better understanding I measured the start and terminate events of each task.

Scenario 1 (SECONDSPERTICK = 0.000001): core 0 stops after executing once and cycle times are incorrect

208.606.160 Core_2 Core2_20ms_Task 0 start
208.607.080 Core_2 Core2_20ms_Task 0 terminate
208.617.800 Core_0 Core0_10ms_Task 0 start (core 0 executes only once)
208.618.560 Core_0 Core0_10ms_Task 0 terminate
208.619.000 Core_1 Core1_10ms_Task 0 start
208.619.920 Core_1 Core1_10ms_Task 0 terminate (scheduler starts idling for no apparent reason)
542.997.040 Core_1 Core1_10ms_Task 1 start
542.997.640 Core_1 Core1_10ms_Task 1 terminate
543.017.240 Core_2 Core2_20ms_Task 1 start
543.017.840 Core_2 Core2_20ms_Task 1 terminate
594.242.280 Core_1 Core1_10ms_Task 2 start
594.242.880 Core_1 Core1_10ms_Task 2 terminate
649.062.680 Core_2 Core2_20ms_Task 2 start
649.063.320 Core_2 Core2_20ms_Task 2 terminate
649.077.400 Core_1 Core1_10ms_Task 3 start
649.078.000 Core_1 Core1_10ms_Task 3 terminate
703.896.240 Core_1 Core1_10ms_Task 4 start
703.896.840 Core_1 Core1_10ms_Task 4 terminate
758.715.520 Core_2 Core2_20ms_Task 3 start
758.716.200 Core_2 Core2_20ms_Task 3 terminate
758.730.280 Core_1 Core1_10ms_Task 5 start
758.730.880 Core_1 Core1_10ms_Task 5 terminate
813.549.120 Core_1 Core1_10ms_Task 6 start
813.549.720 Core_1 Core1_10ms_Task 6 terminate
868.368.400 Core_2 Core2_20ms_Task 4 start
868.369.080 Core_2 Core2_20ms_Task 4 terminate
868.383.160 Core_1 Core1_10ms_Task 7 start
868.383.760 Core_1 Core1_10ms_Task 7 terminate
923.202.000 Core_1 Core1_10ms_Task 8 start
923.202.600 Core_1 Core1_10ms_Task 8 terminate
978.021.280 Core_2 Core2_20ms_Task 5 start
978.021.960 Core_2 Core2_20ms_Task 5 terminate
978.036.040 Core_1 Core1_10ms_Task 9 start
978.036.640 Core_1 Core1_10ms_Task 9 terminate
1.032.854.880 Core_1 Core1_10ms_Task 10 start
1.032.855.480 Core_1 Core1_10ms_Task 10 terminate
1.087.674.160 Core_2 Core2_20ms_Task 6 start
1.087.674.840 Core_2 Core2_20ms_Task 6 terminate
1.087.688.920 Core_1 Core1_10ms_Task 11 start
1.087.689.520 Core_1 Core1_10ms_Task 11 terminate
1.142.507.760 Core_1 Core1_10ms_Task 12 start
1.142.508.360 Core_1 Core1_10ms_Task 12 terminate
1.197.327.040 Core_2 Core2_20ms_Task 7 start
1.197.327.720 Core_2 Core2_20ms_Task 7 terminate
1.197.341.800 Core_1 Core1_10ms_Task 13 start
1.197.342.400 Core_1 Core1_10ms_Task 13 terminate
1.252.160.640 Core_1 Core1_10ms_Task 14 start
1.252.161.240 Core_1 Core1_10ms_Task 14 terminate
1.306.979.920 Core_2 Core2_20ms_Task 8 start
1.306.980.600 Core_2 Core2_20ms_Task 8 terminate

Scenario 2 (SECONDSPERTICK = 0.0001): core 0 works fine and cycle times are correct (except right at the beginning, but that could be a bug on my side)

207.226.620 Core_2 Core2_20ms_Task 0 start
207.227.540 Core_2 Core2_20ms_Task 0 terminate
207.239.460 Core_1 Core1_10ms_Task 0 start
207.240.380 Core_1 Core1_10ms_Task 0 terminate
207.275.580 Core_1 Core1_10ms_Task 1 start
207.276.180 Core_1 Core1_10ms_Task 1 terminate
207.295.780 Core_2 Core2_20ms_Task 1 start
207.296.380 Core_2 Core2_20ms_Task 1 terminate
207.299.900 Core_0 Core0_10ms_Task 0 start
207.300.660 Core_0 Core0_10ms_Task 0 terminate
207.311.420 Core_0 Core0_10ms_Task 1 start
207.311.980 Core_0 Core0_10ms_Task 1 terminate
217.394.860 Core_1 Core1_10ms_Task 2 start
217.395.460 Core_1 Core1_10ms_Task 2 terminate
217.400.020 Core_0 Core0_10ms_Task 2 start
217.400.580 Core_0 Core0_10ms_Task 2 terminate
227.527.220 Core_2 Core2_20ms_Task 2 start
227.527.860 Core_2 Core2_20ms_Task 2 terminate
227.541.940 Core_1 Core1_10ms_Task 3 start
227.542.540 Core_1 Core1_10ms_Task 3 terminate
227.546.260 Core_0 Core0_10ms_Task 3 start
227.546.820 Core_0 Core0_10ms_Task 3 terminate
237.658.940 Core_1 Core1_10ms_Task 4 start
237.659.540 Core_1 Core1_10ms_Task 4 terminate
237.663.580 Core_0 Core0_10ms_Task 4 start
237.664.140 Core_0 Core0_10ms_Task 4 terminate
247.791.060 Core_2 Core2_20ms_Task 3 start
247.791.740 Core_2 Core2_20ms_Task 3 terminate
247.805.820 Core_1 Core1_10ms_Task 5 start
247.806.420 Core_1 Core1_10ms_Task 5 terminate
247.810.140 Core_0 Core0_10ms_Task 5 start
247.810.700 Core_0 Core0_10ms_Task 5 terminate
257.922.940 Core_1 Core1_10ms_Task 6 start
257.923.540 Core_1 Core1_10ms_Task 6 terminate
257.927.580 Core_0 Core0_10ms_Task 6 start
257.928.140 Core_0 Core0_10ms_Task 6 terminate
268.055.060 Core_2 Core2_20ms_Task 4 start
268.055.740 Core_2 Core2_20ms_Task 4 terminate
268.069.820 Core_1 Core1_10ms_Task 7 start
268.070.419 Core_1 Core1_10ms_Task 7 terminate
268.074.140 Core_0 Core0_10ms_Task 7 start
268.074.700 Core_0 Core0_10ms_Task 7 terminate
278.186.940 Core_1 Core1_10ms_Task 8 start
278.187.540 Core_1 Core1_10ms_Task 8 terminate
278.191.580 Core_0 Core0_10ms_Task 8 start
278.192.140 Core_0 Core0_10ms_Task 8 terminate
288.319.060 Core_2 Core2_20ms_Task 5 start
288.319.740 Core_2 Core2_20ms_Task 5 terminate
288.333.820 Core_1 Core1_10ms_Task 9 start
288.334.420 Core_1 Core1_10ms_Task 9 terminate

My problem is that, because of this, I am not able to define a task with a period of 10 or 1 microseconds.

I also attached a very small example that shows this behaviour.
Attachments: Core0Down.rar (1.29 KiB)


Re: Possible Bug in interaction of CYCLETIME and SECONDSPERTICK

Post by e.guidieri » Mon Nov 09, 2015 10:56 am

Of course it doesn't work: you are overloading the system.

At 200 MHz (the maximum clock speed of a TC27x), a one-microsecond tick leaves you only about 200 instructions to handle interrupts and alarms, which is not realistic.

I would suggest 100 us as a lower bound for the tick interrupt.*
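The back-of-the-envelope arithmetic above can be sketched as follows (an illustrative snippet; the 200 MHz figure comes from this post, the function name is made up for the example):

```python
# Rough budget: instructions available between two timer ticks is
# approximately CPU clock frequency * SECONDSPERTICK.
CLOCK_HZ = 200_000_000  # max clock speed of a TC27x, per the post

def instructions_per_tick(seconds_per_tick: float) -> float:
    """Approximate instruction budget between consecutive timer interrupts."""
    return CLOCK_HZ * seconds_per_tick

print(instructions_per_tick(0.000001))  # 1 us tick   -> roughly 200 instructions
print(instructions_per_tick(0.0001))    # 100 us tick -> roughly 20000 instructions
```

With only ~200 instructions per tick, the interrupt and alarm handling alone consumes the whole budget, which matches the observed behaviour of the 1 us configuration.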

Regards,
Errico Guidieri

*By the way, the fastest control cycle I have seen in a production scenario is 500 us, and it was handled with dedicated hardware features, not just a timer.


Re: Possible Bug in interaction of CYCLETIME and SECONDSPERTICK

Post by Kaltzi » Mon Nov 09, 2015 11:15 am

OK, that was the answer I hoped for. Thanks a lot for your feedback.

Best regards
