Davidsimon_jtd [Compatibility Mode]
Shared-Data Bugs
• Difficult to find, because they do not happen every time the code runs.
• Occur only if the interrupt occurs at a critical point.
• The shared-data problem arises if shared data is used in a nonatomic way.
• Famous for occurring at:
  – 5 o'clock in the evening, usually on Saturday
  – Any time you are not paying very much attention
  – Whenever no debugging equipment is attached

Atomic Code
• Part of a program is said to be atomic if it cannot be interrupted.
• More precisely, atomic means the code cannot be interrupted by anything that might mess up the data it is using.
• A set of instructions that must be atomic for the system to work properly is often called a critical section.
Survey of Software Architectures
• Round-Robin (Polling)
• Round-Robin with Interrupts
• Function-Queue Scheduling
• Real-Time OS

Round Robin (Polling)
• Simplest architecture, no interrupts.
• Main loop checks each device one at a time, and services whichever needs to be serviced.
• Service order depends on position in the loop.
• No priorities, no shared data.
• No latency issues (other than waiting for the other devices to be serviced).

void main (void)
{
    while (1)
    {
        if (!! Device 1 needs attention)
            !! Process Device 1
        if (!! Device 2 needs attention)
            !! Process Device 2
        ...
        if (!! Device n needs attention)
            !! Process Device n
    }
}
Round-Robin with Interrupts

void isr_deviceA (void)
{
    !! service immediate needs
    fDeviceA = TRUE;    /* assert flag A */
}

void isr_deviceB (void)
{
    !! service immediate needs
    fDeviceB = TRUE;    /* assert flag B */
}
:
:
void isr_deviceZ (void)
{
    !! service immediate needs
    fDeviceZ = TRUE;    /* assert flag Z */
}

void main (void)
{
    while (TRUE)
    {
        if (fDeviceA)
        {
            !! service A if set, and reset flag A
        }
        if (fDeviceB)
        {
            !! service B if set, and reset flag B
        }
        :
        if (fDeviceZ)
        {
            !! service Z if set, and reset flag Z
        }
    }
}

Example: Data Bridge
• IRQs on character rx and tx devices (UART).
• The rx ISR reads a character from the UART and queues it.
• The tx ISR simply asserts a ready flag.
• main reads the queues, decrypts/encrypts, writes the queues, writes a character to the UART, and de-asserts the flag (critical section!).
• This architecture can sustain data bursts.
RR versus RR+I
• The interrupt feature introduces a priority mechanism: interrupt routines run at high priority, and everything in the main loop (the task code) runs at low priority.

Max Time Available to Handle an Interrupt?
• It depends on the serial port speed:

Data rate (bps)   Data rate (char/s)   Time (ms)
2400              240                  4.17
4800              480                  2.08
9600              960                  1.04
Round Robin with Interrupts
• Good news:
  – Interrupts have higher priority than task code (ISRs preempt the main task, by the interrupt mechanism).
  – Priority assignment to interrupts allows you to tune the latency for each interrupt.
• Bad news:
  – Shared-data problem.
  – The main loop still determines the order of execution of the different tasks.
• Simple, and often appropriate (e.g., a data bridge).
• The main loop still suffers from stochastic response times; the interrupt feature has even aggravated this problem: fast ISR response comes at the expense of an even slower main task (ISRs preempt the main task because of their higher priority).
• This rules out RR+I for applications with CPU hogs. Moving workload into ISRs is usually not a good idea, as this will affect the response times of the other ISRs.
• Worst case response for task code: the interrupt occurs just after the round-robin loop passes the task code for that device. The response time is then the sum of the execution times of the task code of all the other devices.
Function-Queue Scheduling
• Yet another, more sophisticated architecture.
• The interrupt handler executes the most urgent steps, then enqueues a pointer to the corresponding task function on the function queue.
• Main just reads a pointer from the queue and calls the function.

Interrupt Handler:
    - Execute most urgent steps
    - Enqueue the corresponding task function on the function queue

Main Loop:
    while (1) {
        while (queue not empty) {
            - Dequeue highest-priority function;
            - Call that function;
        }
    }

• What makes this worthwhile is that no rule says main has to call the functions in the order that the interrupts occurred: main can always dequeue the highest-priority function first.
Real-Time OS
• The necessary signaling between interrupt routines and task code is handled by the RTOS; you need not use shared variables for this purpose.
• No loop decides what needs to be done next.
• Code inside the RTOS (the scheduler) decides which task should run, depending on priority.
• The RTOS can suspend one task in the middle to run another.

void isr_deviceA (void)
{
    !! service immediate needs
    !! set signal A
}
:
void taskA (void)
{
    !! wait for signal A
    !! service A
}
:

Real-Time OS (cont.)
• If the highest-priority task's interrupt occurs and signals, then the current task, even in the middle of execution, will be suspended and the highest-priority task code will be taken up.
• The worst-case wait for the highest-priority task code is essentially zero.
• Any changes to lower-priority task code do not affect the response of higher-priority task code.
• RTOSs are available for purchase.
• Disadvantage: the RTOS itself uses a certain amount of processing time.

Example: consider a system that has to open a valve within 30 milliseconds when the humidity crosses a particular threshold. If the valve is not opened within 30 milliseconds, a catastrophe may occur. Such systems with strict deadlines are called 'hard real-time' systems.
Scheduler
• Part of the RTOS.
• It keeps track of the state of each task.
• Decides which task gets to run.
• Looks at the priorities assigned to the tasks: among Ready tasks, the highest-priority task hogs the microprocessor.
• Lower-priority tasks wait in the Ready state.

Task states and transitions:
• Blocked/Suspended -> Ready: whatever the task needs happens.
• Ready -> Running: this is the highest-priority ready task.
• Running -> Ready: another ready task has higher priority.
• Running -> Blocked: the task needs something to happen before it can continue.

Example (two tasks sharing data):

Task1:
while (1)
{
    // read level in the tank
}

Task2:
while (1)
{
    // block until user pushes the button
}
Reentrancy

static int cErrors;

void countError (int newError)
{
    cErrors += newError;
}

void Task1 (void)
{
    :
    countError (9);
    :
}

void Task2 (void)
{
    :
    countError (11);
    :
}

What can go wrong (assume cErrors = 5 initially):

Task1 calls countError(9):
    MOV R1, cErrors      ; R1 for Task1 = 5
    ADD R1, newError     ; R1 for Task1 = 14
-- RTOS switches to Task2 --
Task2 calls countError(11):
    MOV R1, cErrors      ; R1 for Task2 = 5
    ADD R1, newError     ; R1 for Task2 = 16
    MOV cErrors, R1      ; cErrors = 16
    RETURN
-- RTOS switches back to Task1 --
    MOV cErrors, R1      ; cErrors = 14
    RETURN

Result: cErrors = 14, rather than the correct 5 + 9 + 11 = 25.
C Variable Storage
• static int, public (global) int, strings, pointers: these live in fixed locations in memory and are therefore shared by any task that happens to call the function.
• Function arguments and local variables: on the stack.

Reentrant Functions
Reentrant functions are functions that can be called by more than one task and that will always work correctly, even if the RTOS switches from one task to another in the middle of executing the function.

Rules for reentrant functions:
1. A reentrant function may not use variables in a nonatomic way unless they are stored on the stack of the task that called the function or are otherwise the private variables of that task.
2. A reentrant function may not call any other functions that are not themselves reentrant.
3. A reentrant function may not use the hardware in a nonatomic way.

void vCountErrors (void)
{
    ++cErrors;
}

This function obviously modifies a nonstack variable, but rule 1 says that a reentrant function may not use nonstack variables in a nonatomic way. The question is: is incrementing cErrors atomic? We can answer this question only with a definite "maybe," because the answer depends upon the microprocessor and the compiler that you are using.

On an 8051, for example, the compiler might produce:

    MOV DPTR, #cErrors+01H
    MOVX A, @DPTR
    INC A
    MOVX @DPTR, A
    JNZ noCarry
    MOV DPTR, #cErrors
    MOVX A, @DPTR
    INC A
    MOVX @DPTR, A
noCarry:
    RET

which is not atomic. But if you're using an Intel 80x86, you might get:

    INC (cErrors)
    RET

which is atomic. There's no way to know that it will work with the next version of the compiler or with some other microprocessor to which you later have to port it. Writing vCountErrors this way is a way to put a little land mine in your system, just waiting to explode. Therefore, if you need vCountErrors to be reentrant, you should use one of the techniques discussed.
Semaphores

Without a semaphore (shared-data problem):

void levelcal (void)
{
    int i = 0;
    while (1)
    {
        // read level in the tank
        timeupdate[i] = current time;
        tanklevel[i] = current level;
    }
}

void buttondisplay (void)
{
    int i;
    while (1)
    {
        // block until user pushes the button
        i = ID of the button pressed;
        display (timeupdate[i] & tanklevel[i]);
    }
}

With a semaphore protecting the shared arrays:

void levelcal (void)
{
    int i = 0;
    while (1)
    {
        // read level in the tank
        TakeSemaphore ();
        timeupdate[i] = current time;
        tanklevel[i] = current level;
        ReleaseSemaphore ();
    }
}

void buttondisplay (void)
{
    int i;
    while (1)
    {
        // block until user pushes the button
        i = ID of the button pressed;
        TakeSemaphore ();
        display (timeupdate[i] & tanklevel[i]);
        ReleaseSemaphore ();
    }
}
Multiple Semaphores
• RTOSs allow you to have as many semaphores as you like.
• Each call to the RTOS must identify the semaphore on which to operate.
• The semaphores are all independent of one another: if one task takes semaphore A, another task can take semaphore B without blocking. Similarly, if one task is waiting for semaphore C, that task will still be blocked even if some other task releases semaphore D.

Advantage of having multiple semaphores
Whenever a task takes a semaphore, it is potentially slowing the response of any other task that needs the same semaphore. In a system with only one semaphore, if the lowest-priority task takes the semaphore to change data in a shared array of temperatures, the highest-priority task might block waiting for that semaphore, even if the highest-priority task wants to modify a count of the errors and couldn't care less about the temperatures. By having one semaphore protect the temperatures and a different semaphore protect the error count, you can build your system so the highest-priority task can modify the error count even if the lowest-priority task has taken the semaphore protecting the temperatures. Different semaphores can correspond to different shared resources.

If you are using multiple semaphores, it is up to you to remember which semaphore corresponds to which data. A task that is modifying the error count must take the corresponding semaphore. You must decide what shared data each of your semaphores protects.
Semaphore Problems
• Forgetting to take the semaphore.
• Forgetting to release the semaphore.
• Taking the wrong semaphore.
• Holding a semaphore for too long.

Deadly embrace:

int a;
int b;

void Task1 (void)
{
    TakeSemaphoreA ();
    TakeSemaphoreB ();
    a = b;
    ReleaseSemaphoreB ();
    ReleaseSemaphoreA ();
}

void Task2 (void)
{
    TakeSemaphoreB ();
    TakeSemaphoreA ();
    b = a;
    ReleaseSemaphoreA ();
    ReleaseSemaphoreB ();
}

If Task1 takes semaphore A and is then switched out, and Task2 takes semaphore B, each task blocks forever waiting for the semaphore the other one holds.
Semaphore variants:
• If a task tries to take the semaphore when the integer is equal to zero, the task will block.
• Resource semaphores (or resources): some systems offer semaphores that can be released only by the task that took them. These semaphores are useful for the shared-data problem, but they cannot be used to communicate between two tasks.
• Mutex semaphore (or mutex): some RTOSs offer a kind of semaphore that will automatically deal with the priority-inversion problem.
• If several tasks are waiting for a semaphore when it is released, systems vary as to which task gets to run. Some systems will run the task that has been waiting longest; others will run the highest-priority task that is waiting for the semaphore. Some systems give the choice to the user.
• There is a third way to protect shared data that deserves at least a mention: disabling task switches. Most RTOSs have two functions you can call, one to disable task switches and one to reenable them after they've been disabled.
Ways to Protect Shared Data
• Disabling interrupts is the most drastic: it affects the response times of all the interrupt routines and of all other tasks in the system. If you disable interrupts, you also disable task switches, because the scheduler cannot get control of the microprocessor to switch. On the other hand, disabling interrupts has two advantages:
  1. It is the only method that works if your data is shared between your task code and your interrupt routines. Interrupt routines are not allowed to take semaphores, and disabling task switches does not prevent interrupts.
  2. It is fast. Most processors can disable or enable interrupts with a single instruction; all of the RTOS functions are many instructions long. It can therefore be the best choice if a task's access to the shared data lasts only a short period of time, such as incrementing a single variable.
• Taking semaphores is the most targeted way to protect data, because it affects only those tasks that need to take the same semaphore. The response times of interrupt routines and of tasks that do not need the semaphore are unchanged. On the other hand, semaphores do take up a certain amount of microprocessor time.
• Disabling task switches is somewhere in between the two. It has no effect on interrupt routines, but it stops response for all other tasks cold.
Mailboxes
Mailboxes are much like queues. The typical RTOS has functions to create, to write to, and to read from mailboxes, functions to check whether the mailbox contains any messages, and functions to destroy the mailbox if it is no longer needed. The details of mailboxes, however, are different in different RTOSs. Here are some of the variations that you might see:
• Although some RTOSs allow a certain number of messages in each mailbox, a number that you can usually choose when you create the mailbox, others allow only one message in a mailbox at a time. Once one message is written to a mailbox under these systems, the mailbox is full; no other message can be written to the mailbox until the first one is read.
• In some RTOSs, the number of messages in each mailbox is unlimited. There is a limit to the total number of messages that can be in all of the mailboxes in the system, but these messages will be distributed into the individual mailboxes as they are needed.
• In some RTOSs, you can prioritize mailbox messages. Higher-priority messages will be read before lower-priority messages, regardless of the order in which they are written into the mailbox.

Pipes
Pipes are also much like queues. The RTOS can create them, write to them, read from them, and so on. The details of pipes, however, like the details of mailboxes and queues, vary from RTOS to RTOS. Some variations include the following:
• Some RTOSs allow you to write messages of varying lengths onto pipes (unlike mailboxes and queues, in which the message length is typically fixed).
• Pipes in some RTOSs are entirely byte-oriented: if Task A writes 11 bytes to the pipe and then Task B writes 19 bytes to the pipe, then if Task C reads 14 bytes from the pipe, it will get the 11 that Task A wrote plus the first 3 that Task B wrote. The other 16 that Task B wrote remain in the pipe for whatever task reads from it next.
• Some RTOSs use the standard C library functions fread() and fwrite() to read from and write to pipes.
Pitfalls
When one task needs to pass data to another, it is usually not optional. For example, it would probably be unacceptable for the error-logging subsystem simply to fail to report errors if its queue filled. A good solution to this problem is to make your queues, mailboxes, and pipes large enough in the first place.