
Survey of Software Architectures &
Introduction to RTOS

By
Dr. J.T. Devaraju
Associate Professor
Dept of Electronics
Bangalore University
Bangalore

Shared Data Problem

static int tempr[2];

void interrupt(void)
{
    tempr[0] = // read value of temperature
    tempr[1] = // read value of temperature
}

Buggy task code, which compares the shared array directly:

void main(void)
{
    while (1)
    {
        if (tempr[0] != tempr[1])
            // set the howling alarm
    }
}

The comparison compiles into several instructions, so the interrupt can fire between the two reads and make them disagree even though the interrupt routine always writes a consistent pair:

    MOV R1, tempr[0]
    MOV R2, tempr[1]
    CMP R1, R2
    ...

Fixed task code, which disables interrupts so that the two reads are atomic:

void main(void)
{
    int itemp0, itemp1;
    while (1)
    {
        disableint();        // DI
        itemp0 = tempr[0];
        itemp1 = tempr[1];
        enableint();         // EI
        if (itemp0 != itemp1)
            // set the howling alarm
    }
}

Characteristics of the Shared Data Bug

• Difficult to find, because they do not happen every time the code runs
• Occur only if the interrupt occurs at a critical point
• Famous for occurring:
  - At 5 o'clock in the evening, usually on Saturday
  - Any time you are not paying very much attention
  - Whenever no debugging equipment is attached
  - During customer demos
  - After delivery to the customer

Atomic & Critical Section

- A part of the program is said to be atomic if it cannot be interrupted
- A shared-data bug arises if shared data is used in a non-atomic way
- Atomic means it cannot be interrupted by anything that might mess up the data it is using
- A set of instructions that must be atomic for the system to work properly is often called a critical section

Interrupt Latency

- Interrupts are tools for getting better response
- The speed with which an ES can respond is always of interest
- How fast does the system respond to each interrupt? It depends on the following factors:
  - The longest period of time during which that interrupt is disabled
  - The time it takes to execute the interrupt routines of any interrupts that are higher priority than the one in question
  - How long it takes the µP to stop what it is doing, do the necessary bookkeeping, and start executing instructions within the interrupt routine
  - How long it takes the interrupt routine to save the context and then do enough work that what it has accomplished counts as a response
- Interrupt latency: the amount of time it takes a system to respond to an interrupt

Choosing an Architecture

The best architecture depends on several factors:
• Real-time requirements of the application (absolute response time)
• Available hardware (speed, features)
• Number and complexity of different software features
• Number and complexity of different peripherals
• Relative priority of features
• Architecture selection is a trade-off between complexity and control over response and priority: how much control you need to have over system response

Survey of Software Architectures

• Round-Robin (Polling)
• Round-Robin with Interrupts
• Function-Queue Scheduling
• Real-Time OS

Round Robin (Polling)

• Simplest architecture, no interrupts
• Main loop checks each device one at a time, and services whichever needs to be serviced
• Service order depends on position in the loop
• No priorities, no shared data
• No latency issues (other than waiting for other devices to be serviced)

void main (void)
{
    while (1)
    {
        Process Device 1;
        Process Device 2;
        ...
        Process Device i;
        ...
        Process Device n-1;
        Process Device n;
    }
}

Process Device i:
    if (Device i needs attention)
    {
        ...
    }

Digital Multimeter

• Measures I, V & Ω, each in several ranges
• Program checks the position of the rotary switch
• Branches to code to make the appropriate measurement
• Formats the result
• Writes to the display
• Users will not expect faster response than they can move their hands and the probes! Each operation is fast.

Round Robin (cntd…)

- No control over latency for any device handler
- Attractive potential architecture, as long as you can get away with it
- Worst-case latency for any device: sum of the execution times of all other handlers
- Can we have some control over latency? Suppose we want to minimize Device 3 latency; what can we do?

Round Robin: Minimizing Latency for Device 3

while (1)
{
    Process Device 3;
    Process Device 1;
    Process Device 3;
    Process Device 2;
    ...
    Process Device 3;
    Process Device i;
    ...
    Process Device n-1;
    Process Device 3;
    Process Device n;
}

What is the worst case for Device 3? The execution time of the longest device handler routine.
How is the latency of the other device handlers impacted?
1) The loop is longer
2) If Device 3 requests attention multiple times in a loop, the loop time potentially jumps up
3) How about adding a new device?
• The architecture is fragile: even if the µP gets around the loop quickly enough to satisfy all requirements, an additional device or requirement may break everything
• Suitable only for very simple systems

Round Robin With Interrupts

- A somewhat more sophisticated architecture
- A device requests service through an interrupt
- The interrupt routine must execute only the URGENT STEPS (things that cannot wait), set flags, and update some variables
- Leave the remaining processing to be done in the main loop
- The main loop polls the flags and does the follow-up processing
- An IR can get good response: an IR has higher priority than task code
- Priorities can be assigned to the various interrupts, so their latency can be controlled
- Shared-data problem
- Much better response time for urgent needs, but main is still slow (i.e., lower priority than the ISRs)

void isr_deviceA (void)
{
    !! service immediate needs + assert flag A
}

void main (void)
{
    while (TRUE) {
        !! poll device flag A
        !! service A if set and reset flag A
    }
}

void isr_deviceA (void)
{
    !! service immediate needs
    + assert flag A (flagA = 1)
}

void isr_deviceB (void)
{
    !! service immediate needs
    + assert flag B (flagB = 1)
}
:
void isr_deviceZ (void)
{
    !! service immediate needs
    + assert flag Z (flagZ = 1)
}

void main (void)
{
    while (TRUE) {
        if (flagA)
        {
            !! service A and reset flag A
        }
        if (flagB)
        {
            !! service B and reset flag B
        }
        :
        if (flagZ)
        {
            !! service Z and reset flag Z
        }
    }
}

Example: Data Bridge

- IRQs on character rx and tx devices (UART)
- The rx ISR reads the UART and queues the character
- The tx ISR simply asserts a ready flag
- main reads the queues, decrypts/encrypts, writes the queues, writes the character to the UART & de-asserts the flag (critical section!)
- The architecture can sustain data bursts

RR versus RR+I

The interrupt feature introduces a priority mechanism:

    RR:    everything at one priority
    RR+I:  high priority - interrupt routines
           low priority  - task code

• All task code executes at the same priority (RR+I)

Max Time Available to Handle an Interrupt?

• It depends on the serial port speed:

    Data rate (bps)    Data rate (chars/s)    Time (ms)
    2400               240                    4.17
    4800               480                    2.08
    9600               960                    1.04

RR with Interrupts: Evaluation

• Simple, and often appropriate (e.g., the data bridge)
• The main loop still suffers from stochastic response times; the interrupt feature has even aggravated this problem: fast ISR response comes at the expense of an even slower main task (ISRs preempt the main task because of their higher priority)
• This rules out RR+I for apps with CPU hogs; moving workload into ISRs is usually not a good idea, as it will affect the response times of the other ISRs
• Worst-case response for task code: the interrupt occurs just after the RR loop passes the task code for that device; the response time is then the sum of the execution times of the task code of all other devices

Round Robin With Interrupts

• Good news:
  - Interrupts have higher priority than task code (by the interrupt mechanism)
  - Priority assignment to interrupts allows tuning the latency for each interrupt
• Bad news:
  - Shared-data problem
  - The main loop determines the order of execution of the different tasks

Function Queue Scheduling

• Another, yet more sophisticated, architecture
• The interrupt routine adds a function pointer to a queue of function pointers for main to call
• Main just reads a pointer from the queue & calls the function

Interrupt Handler:
    - Execute the most urgent steps
    - Enqueue the corresponding task on the function queue

Main Loop:
    while (1) {
        while (Queue non Empty) {
            - Dequeue highest priority task;
            - Execute highest priority task;
        }
    }

What is worthwhile is that no rule says main has to call the functions in the order in which the interrupts occurred.

Function-Queue Scheduling cont..

void isr_deviceA(void)
{
    !! service immediate needs + queue A() at prio A
}
:
void main(void)
{
    while (TRUE) {
        !! get function from queue + call it
    }
}

void function_A(void) { !! service A }
:

Function-Queue Scheduling: Evaluation

• Task priorities are no longer hardwired in the code (cf. the RR architectures) but made flexible in terms of data: each task can have its own priority
• The response time of a task drops dramatically
• The wait for the highest-priority task code is the length of the longest task code, plus the execution time of any IRs that happen to occur
• The response of lower-priority task code may get worse: lower-priority functions may never execute

Function Queue Scheduling (2)

• Good news:
  - Task code execution order is controlled
  - Better control of latency
• Bad news:
  - Low-priority tasks may starve
  - Worse, higher-priority tasks may be waiting on results produced by lower-priority tasks…

What Can We Do?

• Concurrent programming:
  - Have multiple tasks running concurrently
  - Execute the most appropriate task at any time
  - "Block" tasks waiting on other tasks or I/O
  - Quite complex
• Need an O.S., a Real Time Operating System.

Real-Time OS

• The necessary signaling between interrupt routine & task code is handled by the RTOS; you need not use a shared variable for this purpose
• No loop decides what needs to be done next
• Code (the scheduler) inside the RTOS decides which task should run, depending on priority
• The RTOS can suspend one task in the middle to run another

void isr_deviceA(void)
{
    !! service immediate needs + set signal A
}
..
void taskA(void)
{
    !! wait for signal A
    !! service A
}
..

Real-Time OS cont…

- If the highest-priority task's interrupt fires and signals, then the current task, which is in the middle of running, will be suspended & the highest-priority task code will be taken up
- The worst-case wait for the highest-priority task code is zero
- Any changes to lower-priority task code do not affect the response of higher-priority task code
- Available for purchase
- Disadvantage: the RTOS itself uses a certain amount of processor time

Performance Comparison (slide chart not preserved in the text)

Real Time Systems

Embedded systems in which some specific work has to be done in a specific time period are called real-time systems.

Hard real-time systems

Systems in which the deadlines are stringent are called 'hard real-time' systems.

Example: consider a system that has to open a valve within 30 milliseconds when the humidity crosses a particular threshold. If the valve is not opened within 30 milliseconds, a catastrophe may occur. Such systems with strict deadlines are called 'hard real-time' systems.

Soft real-time systems

In some embedded systems, deadlines are imposed, but not adhering to them once in a while may not lead to a catastrophe.

Example: a DVD player. Suppose you give a command to the DVD player from a remote control, and there is a delay of a few milliseconds in executing that command. This delay won't lead to a serious implication. Such systems are called soft real-time systems.

Real Time Systems Contd…

Missing a deadline may cause a catastrophe: loss of life or damage to property.

Example: a missile that has to track and intercept an enemy aircraft. The missile contains an embedded system that tracks the aircraft and generates a control signal that will launch the missile. If there is a delay in tracking the aircraft and the missile misses the deadline, the enemy aircraft may drop a bomb and cause the loss of many lives. Hence, this system is a hard real-time embedded system.

• The engine management program of a car must generate pulses that actuate the fuel injectors in a timely & calculated pattern
• Brake control system

Introduction to RTOS

Task: the basic building block of software written under an RTOS (simply a subroutine).

Task States: each task in an RTOS is always in one of three states:

Running: the µP is executing the instructions that make up this task; usually only one task is in the running state at a given time.

Ready: some other task is in the running state, but this task has things it could do if the µP becomes available; any number of tasks can be in this state.

Blocked: this task hasn't got anything to do right now, even if the µP is available; it may be waiting for some external event.

Scheduler

• Part of the RTOS
• It keeps track of the state of each task
• Decides which task should run: it looks at the priorities assigned to the tasks, and among the Ready tasks the highest-priority one runs
• A high-priority task can hog the µP
• Low-priority tasks wait in the ready state

State transitions:
- Blocked/Suspended -> Ready: whatever the task needs happens
- Ready -> Running: this is the highest-priority ready task
- Running -> Ready: another task is higher priority
- Running -> Blocked: the task needs something to happen before it can continue

Tasks and Data

- Each task has its own context: Task1 registers and Task1 stack, Task2 registers and Task2 stack, Task3 registers and Task3 stack, plus the RTOS data structures
- All other data is shared by Task1, Task2 and Task3

Shared Data Problem

void levelcal()
{
    int i = 0;
    while (1)
    {
        // read level in the tank
        timeupdate[i] = current time;
        tanklevel[i] = current level;
    }
}

void buttondisplay()
{
    int i;
    while (1)
    {
        // block until user pushes the button
        i = ID of the button pressed;
        display(timeupdate[i] & tanklevel[i]);
    }
}

Reentrancy

static int Cerrors;

void Task1(void)
{
    :
    counterror(9);
    :
}

void Task2(void)
{
    :
    counterror(11);
    :
}

void counterror(int newerror)
{
    Cerrors += newerror;
}

Execution trace, with Cerrors = 5 initially:

Task1 calls counterror(9):
    MOV R1, Cerrors      ; R1 for Task1 = 5
    ADD R1, newerror     ; R1 for Task1 = 14
        (RTOS switches to Task2)
Task2 calls counterror(11):
    MOV R1, Cerrors      ; R1 for Task2 = 5
    ADD R1, newerror     ; R1 for Task2 = 16
    MOV Cerrors, R1      ; Cerrors = 16
    RETURN
        (RTOS switches back to Task1)
    MOV Cerrors, R1      ; Cerrors = 14
    RETURN

Result: Cerrors = 14, when it should be 5 + 9 + 11 = 25.

C Variable Storage

- static int, public int, strings, pointers: in a fixed location in memory, and therefore shared by any task that happens to call the function
- Function arguments: on the stack

Reentrant Functions

- Reentrant functions are functions that can be called by more than one task and that will always work correctly, even if the RTOS switches from one task to another in the middle of executing the function
- Rules for reentrant functions:
  1. A reentrant function may not use variables in a nonatomic way unless they are stored on the stack of the task that called the function or are otherwise private variables of the task.
  2. A reentrant function may not call any other functions that are not themselves reentrant.
  3. A reentrant function may not use the hardware in a nonatomic way.

Gray Areas of Reentrancy

- There are some gray areas between reentrant and nonreentrant functions.

static int cErrors;

void vCountErrors (void)
{
    ++cErrors;
}

- This function obviously modifies a nonstack variable, but rule 1 says that a reentrant function may not use nonstack variables in a nonatomic way.
- The question is: is incrementing cErrors atomic?
- We can answer this question only with a definite "maybe," because the answer depends upon the microprocessor and the compiler that you are using.
- If you're using an 8051, an 8-bit microcontroller, then ++cErrors is likely to compile into assembly code something like this:

        MOV DPTR,#cErrors+01H
        MOVX A,@DPTR
        INC A
        MOVX @DPTR,A
        JNZ noCarry
        MOV DPTR,#cErrors
        MOVX A,@DPTR
        INC A
        MOVX @DPTR,A
    noCarry:
        RET

- But if you're using an Intel 80x86, you might get

        INC (cErrors)
        RET

  which is atomic.
- There's no way to know that it will work with the next version of the compiler or with some other microprocessor to which you later have to port it.
- Writing vCountErrors this way is a way to put a little land mine in your system, just waiting to explode.
- Therefore, if you need vCountErrors to be reentrant, you should use one of the techniques discussed.

Semaphores

Without a semaphore, levelcal and buttondisplay share timeupdate[] and tanklevel[] unprotected:

void levelcal()
{
    int i = 0;
    while (1)
    {
        // read level in the tank
        timeupdate[i] = current time;
        tanklevel[i] = current level;
    }
}

void buttondisplay()
{
    int i;
    while (1)
    {
        // block until user pushes the button
        i = ID of the button pressed;
        display(timeupdate[i] & tanklevel[i]);
    }
}

With a semaphore protecting the shared arrays:

void levelcal()
{
    int i = 0;
    while (1)
    {
        // read level in the tank
        Takesemaphore();
        timeupdate[i] = current time;
        tanklevel[i] = current level;
        Releasesemaphore();
    }
}

void buttondisplay()
{
    int i;
    while (1)
    {
        // block until user pushes the button
        i = ID of the button pressed;
        Takesemaphore();
        display(timeupdate[i] & tanklevel[i]);
        Releasesemaphore();
    }
}
Multiple Semaphores

- RTOSs allow you to have as many semaphores as you like
- Each call to the RTOS must identify the semaphore on which to operate
- The semaphores are all independent of one another: if one task takes semaphore A, another task can take semaphore B without blocking
- Similarly, if one task is waiting for semaphore C, that task will still be blocked even if some other task releases semaphore D

Advantage of having multiple semaphores

- Whenever a task takes a semaphore, it is potentially slowing the response of any other task that needs the same semaphore.
- In a system with only one semaphore, if the lowest-priority task takes the semaphore to change data in a shared array of temperatures, the highest-priority task might block waiting for that semaphore, even if the highest-priority task wants to modify a count of the errors and couldn't care less about the temperatures.
- By having one semaphore protect the temperatures and a different semaphore protect the error count, you can build your system so the highest-priority task can modify the error count even if the lowest-priority task has taken the semaphore protecting the temperatures. Different semaphores can correspond to different shared resources.
- If you are using multiple semaphores, it is up to you to remember which semaphore corresponds to which data. A task that is modifying the error count must take the corresponding semaphore. You must decide what shared data each of your semaphores protects.

Semaphore Problems

• Forgetting to take the semaphore.
• Forgetting to release the semaphore.
• Taking the wrong semaphore.
• Holding a semaphore for too long.

Deadly Embrace

int a;
int b;

void Task1 (void)
{
    TakeSemaphoreA();
    TakeSemaphoreB();
    a = b;
    ...
}

void Task2 (void)
{
    TakeSemaphoreB();
    TakeSemaphoreA();
    a = b;
    ...
}

If Task1 takes semaphore A and is then switched out, and Task2 takes semaphore B, each task ends up blocked waiting for a semaphore the other one holds: a deadly embrace.

Semaphore Variants

- Counting semaphores: some systems offer semaphores that can be taken multiple times. Essentially, such semaphores are integers; taking them decrements the integer and releasing them increments the integer. If a task tries to take the semaphore when the integer is equal to zero, then the task will block.
- Resource semaphores (or resources): some systems offer semaphores that can be released only by the task that took them. These semaphores are useful for the shared-data problem, but they cannot be used to communicate between two tasks.
- Mutex semaphores (or mutexes): some RTOSs offer a kind of semaphore that will automatically deal with the priority inversion problem.
- If several tasks are waiting for a semaphore when it is released, systems vary as to which task gets to run: some systems will run the task that has been waiting longest; others will run the highest-priority task that is waiting for the semaphore; some systems give the choice to the user.

Ways to Protect Shared Data

- We have discussed two ways to protect shared data: disabling interrupts and using semaphores.
- There is a third way that deserves at least a mention: disabling task switches.
- Most RTOSs have two functions you can call: one to disable task switches and one to reenable them after they've been disabled.

Ways to Protect Shared Data (cntd…)

- Disabling interrupts is the most drastic method, in that it will affect the response times of all the interrupt routines and of all other tasks in the system.
- If you disable interrupts, you also disable task switches, because the scheduler cannot get control of the microprocessor to switch.
- On the other hand, disabling interrupts has two advantages:
  1. It is the only method that works if your data is shared between your task code and your interrupt routines. Interrupt routines are not allowed to take semaphores, and disabling task switches does not prevent interrupts.
  2. It is fast. Most processors can disable or enable interrupts with a single instruction; the RTOS functions are all many instructions long. This matters if a task's access to shared data lasts only a short period of time, such as incrementing a single variable.
- Taking semaphores is the most targeted way to protect data, because it affects only those tasks that need to take the same semaphore. The response times of interrupt routines and of tasks that do not need the semaphore are unchanged.
- On the other hand, semaphores do take up a certain amount of microprocessor time.
- Disabling task switches is somewhere in between the two. It has no effect on interrupt routines, but it stops response for all other tasks cold.

Message Queues, Mailboxes and Pipes

- Tasks must be able to communicate with one another to coordinate their activities or to share data.
- For example, in the underground tank monitoring system, the task that calculates the amount of gas in the tanks must let other parts of the system know how much gasoline there is.

Message Queues

- Suppose that we have two tasks, Task1 and Task2, each of which has a number of high-priority, urgent things to do.
- Suppose also that from time to time these two tasks discover error conditions that must be reported on a network, a time-consuming process. In order not to delay Task1 and Task2, it makes sense to have a separate task, ErrorsTask, that is responsible for reporting the error conditions on the network.
- Whenever Task1 or Task2 discovers an error, it reports that error to ErrorsTask and then goes on about its own business. The error-reporting process undertaken by ErrorsTask does not delay the other tasks.
- An RTOS queue is the way to implement this design: when Task1 or Task2 needs to log an error, it calls one more function, LogError. The LogError function puts the error on a queue of errors for ErrorsTask to deal with.
- The RTOS function AddToQueue adds (many people use the term "posts") the value of the integer parameter it is passed to a queue of integer values that the RTOS maintains internally.
- The ReadFromQueue function reads the value at the head of the queue and returns it to the caller. If the queue is empty, ReadFromQueue blocks the calling task.
- The RTOS guarantees that both of these functions are reentrant.

Message Queues (cntd…)

- Most RTOSs require you to initialize queues by calling a function provided for this purpose, and to allocate the memory that the RTOS will manage as a queue.
- Since most RTOSs allow you to have as many queues as you want, you pass an additional parameter to every queue function: the identity of the queue to which you want to write or from which you want to read.
- If your code tries to write to a queue when the queue is full, the RTOS must either return an error to let you know that the write operation failed (the more common RTOS behavior), or block the task until some other task reads data from the queue and thereby creates some space (a less common RTOS behavior).
- Many RTOSs include a function that will read from a queue if there is any data and will return an error code if not. This function is in addition to the one that will block your task if the queue is empty.
- The amount of data that the RTOS lets you write to the queue in one call may not be exactly the amount that you want to write. Many RTOSs are inflexible about this: one common RTOS characteristic is to allow you to write onto a queue in one call the number of bytes taken up by a void pointer.

Pointers and Queues

- Such an RTOS allows you to write one void pointer to the queue with each call.
- The obvious idea behind this style of RTOS interface is that one task can pass any amount of data to another task by putting the data into a buffer and then writing a pointer to the buffer onto the queue.

Mailboxes

- Mailboxes are much like queues.
- The typical RTOS has functions to create, to write to, and to read from mailboxes, and functions to check whether the mailbox contains any messages and to destroy the mailbox if it is no longer needed.
- The details of mailboxes, however, are different in different RTOSs. Here are some of the variations that you might see:
  - Some RTOSs allow a certain number of messages in each mailbox, a number that you can usually choose when you create the mailbox.
  - Others allow only one message in a mailbox at a time. Once one message is written to a mailbox under these systems, the mailbox is full; no other message can be written to the mailbox until the first one is read.
  - In some RTOSs, the number of messages in each mailbox is unlimited. There is a limit to the total number of messages that can be in all of the mailboxes in the system, but these messages will be distributed into the individual mailboxes as they are needed.
  - In some RTOSs, you can prioritize mailbox messages. Higher-priority messages will be read before lower-priority messages, regardless of the order in which they are written into the mailbox.

Pipes

- Pipes are also much like queues.
- The RTOS can create them, write to them, read from them, and so on.
- The details of pipes, however, like the details of mailboxes and queues, vary from RTOS to RTOS. Some variations include the following:
  - Some RTOSs allow you to write messages of varying lengths onto pipes (unlike mailboxes and queues, in which the message length is typically fixed).
  - Pipes in some RTOSs are entirely byte-oriented: if Task A writes 11 bytes to the pipe and then Task B writes 19 bytes to the pipe, then if Task C reads 14 bytes from the pipe, it will get the 11 bytes that Task A wrote plus the first 3 that Task B wrote. The other 16 bytes that Task B wrote remain in the pipe for whatever task reads from it next.
  - Some RTOSs use the standard C library functions fread() and fwrite() to read from and write to pipes.

Which Should I Use?

- Since queues, mailboxes, and pipes vary so much from one RTOS to another, it is hard to give much universal guidance about which to use in any given situation.
- When RTOS vendors design these features, they must make the usual programming trade-offs among flexibility, speed, memory space, the length of time that interrupts must be disabled within the RTOS functions, and so on.
- Most RTOS vendors describe these characteristics in their documentation; read it to determine which of the communication mechanisms best meets your requirements.

Pitfalls

- Although queues, mailboxes, and pipes can make it quite easy to share data among tasks, they can also make it quite easy to insert bugs into your system.
- Most RTOSs do not restrict which tasks can read from or write to any given queue, mailbox, or pipe. Therefore, you must ensure that tasks use the correct one each time. If some task writes temperature data onto a queue read by a task expecting error codes, your system will not work very well. This is obvious, but it is easy to mess up.
- The RTOS cannot ensure that data written onto a queue, mailbox, or pipe will be properly interpreted by the task that reads it. If one task writes an integer onto the queue and another task reads it and then treats it as a pointer, the system will fail.

Pitfalls (cntd…)

- Running out of space in queues, mailboxes, or pipes is usually a disaster for embedded software.
- When one task needs to pass data to another, it is usually not optional. For example, it would probably be unacceptable for the error-logging subsystem simply to fail to report errors if its queue filled.
- A good solution to this problem is to make your queues, mailboxes, and pipes large enough in the first place.
- Passing pointers from one task to another through a queue, mailbox, or pipe is one of several ways to create shared data inadvertently.
