UNIT II - EMBEDDED C PROGRAMMING
Memory Interfacing:
• When the EA pin is tied to Vcc, program fetches to addresses 0000H through 0FFFH are directed to the internal ROM of the 8051, and program fetches to addresses 1000H through FFFFH are directed to the external ROM/EPROM.
• When the EA pin is grounded, all addresses fetched by the program (0000H to
FFFFH) are directed to external ROM/EPROM.
• Port 0 is used as a multiplexed address/data bus, as seen in Fig. In the initial T-cycle it provides the lower-order 8-bit address, and later it is used as the data bus. An external latch and the ALE signal provided by the 8051 are used to latch the lower 8-bit address.
External Data Memory Interfacing :
• Up to 64 k-bytes of additional data memory can be addressed by the 8051. The external
data memory is accessed using the “MOVX” instruction.
• The 8051’s internal data memory is split into three sections: the lower 128 bytes, the upper 128 bytes, and the SFRs. Although they are physically distinct blocks, the upper 128 bytes and the SFRs share the same block of address space, 80H to FFH.
• The upper 128 bytes are accessible only via indirect addressing, and the SFRs are accessible only via direct addressing, as seen in the figure. The lower address space, on the other hand, can be reached using either direct or indirect addressing.
• Figure shows how to connect or interface external RAM (data memory) to 8051.
• Port 0 is used as multiplexed data & address lines.
• The lower-order address lines (A7-A0) are obtained by latching Port 0 with an external latch and the ALE signal from the 8051.
• Port 2 gives higher order address lines.
• The RD & WR signals from the 8051 select the memory read and memory write operations, respectively.
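As an illustration of how this external data memory is reached from embedded C (instead of writing MOVX instructions by hand), the Keil C51 sketch below uses the XBYTE macro from absacc.h; the address 8000H is only an assumed location for the external RAM, not one fixed by the figure.

#include <reg51.h>    /* 8051 SFR definitions                      */
#include <absacc.h>   /* XBYTE macro for external (xdata) accesses */

#define EXT_RAM_ADDR 0x8000   /* assumed address in external RAM */

void main(void)
{
    unsigned char value;

    XBYTE[EXT_RAM_ADDR] = 0x55;      /* compiler generates a MOVX write */
    value = XBYTE[EXT_RAM_ADDR];     /* compiler generates a MOVX read  */

    P1 = value;                      /* show the byte read back on Port 1 */

    while (1);                       /* stay here */
}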
--------------
Memory Address Decoding :
We know that read/write memories consist of an array of registers, in which each register has a unique address. The size of the memory is N x M, where N is the number of registers (memory locations) and M is the word length in bits.
Example-1 :
If a memory has 12 address lines and 8 data lines, then the number of registers/memory locations N = 2^12 = 4096, and the word length M = 8 bits.
Example-2 :
If a memory has 8192 memory locations, then it has 13 address lines (since 2^13 = 8192).
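The address-line calculation can be cross-checked with the small host-side C sketch below (the function name locations() is ours, purely for illustration; this is not 8051 code).

#include <stdio.h>

/* Number of memory locations addressable with the given number of address lines. */
static unsigned long locations(unsigned int address_lines)
{
    return 1UL << address_lines;
}

int main(void)
{
    printf("12 address lines -> %lu locations\n", locations(12));  /* 4096 */
    printf("13 address lines -> %lu locations\n", locations(13));  /* 8192 */
    return 0;
}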
Absolute decoding:
• In this technique, all the higher address lines are decoded to select the memory chip, and
the memory chip is chosen only for the logic levels defined in these high-order address
lines and no other logic levels will select the chip.
Figure shows the Memory Interfacing in 8085 with absolute decoding. This addressing
technique is normally used in large memory systems.
Linear decoding:
In compact systems, individual high-order address lines can be used directly to select memory chips, eliminating the need for decoding-logic hardware. This method is called linear decoding.
The table summarizes the memory capacity and the number of address lines required for memory interfacing.
[Figure: 8051 external memory interface - Port 0 (P0.0 to P0.7) provides the multiplexed address/data bus and Port 2 (P2.0 to P2.7) provides the higher-order address lines.]
******************
KEYBOARD AND DISPLAY INTERFACE :
KEYBOARD INTERFACE :
Keys in a keyboard are arranged in a matrix of rows and columns. The controller accesses both rows and columns through ports. Using two ports, we can connect to an 8x8 or a 4x4 matrix keyboard. When a key is pressed, a row and a column make contact; otherwise there is no contact. We will look at the details using a 4x4 keyboard.
• If no key has been pressed, reading the input port will yield 1s for all columns since they
are all connected to high (Vcc).
• If all the rows are grounded and a key is pressed, one of the columns will have 0 since the
key pressed provides the path to ground.
• It is the function of the microcontroller to scan the keyboard continuously to detect and
identify the key pressed.
KEY SCAN
• To find out which key is pressed, the controller grounds a row by sending a ‘0’ on the corresponding line of the output port.
• It then reads the data at the columns using the input port. If the data from the columns is D3-D0 = 1111, then no key is pressed. If any bit of the column data is ‘0’, it indicates that a key is pressed in that column.
• In this example, the column is identified by the following values:
• 1110 – key pressed in column 0
• 1101 – key pressed in column 1
• 1011 – key pressed in column 2
• 0111 – key pressed in column 3
• Beginning with row 0, the microcontroller grounds it by driving only row line D0 low. It then reads the columns (Port 2).
• If the data read is all 1s, then no key in that row is activated and the process is
moved to the next row. It then grounds the next row, reads the columns, and checks
for any zero.
• This process continues until a row with a zero is identified. After identifying the row in which the key has been pressed, the column to which the pressed key belongs is identified as discussed above, by looking for a zero in the input values read.
Example:
(a) D3 – D0 = 1101 for the row, D3 – D0 = 1011 for the column, indicate row 1 and
column 3 are selected. This indicates that key 6 is pressed.
(b) D3 – D0 = 1011 for the row, D3 – D0 = 0111 for the column, indicate row 2 and
column 3 are selected. Then key ‘B’ is pressed.
PROGRAM:
The program used for detection and identification of the key activated goes through the
following stages:
1. To make sure that the preceding key has been released, 0s are output to all rows at
once, and the columns are read and checked repeatedly until all the columns are high.
● When all columns are found to be high, the program waits for a short amount of time
before it goes to the next stage of waiting for a key to be pressed
2. To see if any key is pressed, the columns are scanned over and over in an infinite loop
until one of them has a 0 on it.
● Remember that the rows, driven by the output latch, still have their initial zeros (from stage 1), keeping them grounded.
● After the key press detection, it waits for 20-ms for the bounce and then scans the
columns again.
i) It ensures that the first key press detection was not an erroneous one due to spike
noise.
ii) After the 20-ms delay, if the key is still pressed, then it goes to the loop (step 3) to
detect the actual key pressed.
3. To detect which row the key pressed belongs to, it grounds one row at a time, reading
the columns each time.
• If it finds that all columns are high, this means that the key press does not belong to that row. Therefore, it grounds the next row and continues until it finds the row that the pressed key belongs to.
• Upon finding the row that the key pressed belongs to, it sets up the starting address for the
lookup table holding the scan codes for that row.
4. To identify the key pressed, it rotates the column bits, one bit at a time, into the carry
flag and checks to see if it is low.
• Upon finding the zero, it pulls out the ASCII code for that key from the look-up
table. Otherwise, it increments the pointer to point to the next element of the look-up table.
PROGRAM:
;keyboard subroutine. This program sends the ASCII code for the pressed key to P0
;P1.0-P1.3 are connected to the rows, P2.0-P2.3 to the columns
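The original assembly listing is not reproduced here; the Keil C sketch below follows the same four stages under the stated pin assignment (P1.0-P1.3 drive the rows, P2.0-P2.3 read the columns, and the key code is written to P0). The key map contents and the delay length are illustrative assumptions.

#include <reg51.h>

/* key codes for a 4x4 keypad, row by row (illustrative layout) */
static unsigned char code keymap[4][4] = {
    {'0','1','2','3'},
    {'4','5','6','7'},
    {'8','9','A','B'},
    {'C','D','E','F'}
};

static void delay20ms(void)                /* crude software delay for debounce */
{
    unsigned int i;
    for (i = 0; i < 2000; i++);            /* tune for the actual clock */
}

void main(void)
{
    unsigned char row, col, cols;

    P2 |= 0x0F;                            /* write 1s so the column pins act as inputs */

    while (1) {
        P1 = 0x00;                         /* stage 1: ground all rows                  */
        while ((P2 & 0x0F) != 0x0F);       /* wait until the previous key is released   */

        do {                               /* stage 2: wait for a key press             */
            while ((P2 & 0x0F) == 0x0F);   /* some column must go low                   */
            delay20ms();                   /* debounce                                  */
        } while ((P2 & 0x0F) == 0x0F);     /* still pressed after the delay?            */

        for (row = 0; row < 4; row++) {    /* stage 3: find the row                     */
            P1 = (unsigned char)~(1 << row);   /* ground one row at a time              */
            cols = P2 & 0x0F;
            if (cols != 0x0F)
                break;
        }

        for (col = 0; col < 4; col++)      /* stage 4: find the column                  */
            if (!(cols & (1 << col)))
                break;

        if (row < 4 && col < 4)
            P0 = keymap[row][col];         /* output the key code                       */
    }
}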
LCD (LIQUID CRYSTAL DISPLAY) INTERFACE:
• LCDs can display numbers, characters, and graphics. To produce a proper display, the
information has to be periodically refreshed.
• This can be done by the CPU or internally by the LCD device itself. Incorporating a
refreshing controller into the LCD, relieves the CPU of this task and hence many
LCDs have built-in controllers.
• These controllers also facilitate flexible programming for characters and graphics.
Table 5.1 shows the pin description of an LCD from Optrex.
• VDD and VSS provide +5 V and ground respectively, and V0 is used for controlling the LCD contrast.
• If RS=0, the instruction command register is selected, allowing the user to send a
command such as clear display, cursor at home, etc.
• If RS=1 the data register is selected, allowing the user to send data to be displayed on
the LCD.
• The R/W input allows the user to write information to the LCD or read information from it.
• The enable pin is used by the LCD to latch information presented to its data pins.
The 8-bit data pins are used to send information to LCD.
LCD COMMAND CODES
The LCD’s internal controller can accept several commands and modify the display
accordingly.
These commands would be things like:
Clear screen
Return home
Decrement/Increment cursor
After writing to the LCD, it takes some time for it to complete its internal operations. During this time, it will not accept any new commands or data. Figure 1 shows the command codes of the LCD and Figure 2 shows the LCD interfacing. We need to insert a time delay between any two commands or data bytes sent to the LCD.
PROGRAM TO DISPLAY CHARACTERS ON LCD :
To send any of the commands to the LCD, make pin RS=0. For data, make RS=1.
Then send a high-to-low pulse to the E pin to enable the internal latch of the LCD. This is
shown in the code below.
;calls a time delay before sending next data/command
;P1.0-P1.7 are connected to LCD data pins D0-D7
;P2.0 is connected to RS pin of LCD
;P2.1 is connected to R/W pin of LCD
;P2.2 is connected to E pin of LCD
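The original assembly listing is not reproduced here; the sketch below shows the same command/data sequence in Keil C under the stated connections (P1 = D0-D7, P2.0 = RS, P2.1 = R/W, P2.2 = E). The initialization commands are the usual HD44780 ones, and the delay loop length is an illustrative assumption.

#include <reg51.h>

sbit RS = P2^0;                 /* register select: 0 = command, 1 = data */
sbit RW = P2^1;                 /* read/write: 0 = write                  */
sbit EN = P2^2;                 /* enable: latch on high-to-low pulse     */

static void delay(unsigned int t)          /* crude software delay        */
{
    unsigned int i, j;
    for (i = 0; i < t; i++)
        for (j = 0; j < 120; j++);
}

static void lcd_cmd(unsigned char cmd)     /* send a command byte         */
{
    P1 = cmd;
    RS = 0;                                /* command register            */
    RW = 0;                                /* write                       */
    EN = 1;
    delay(1);
    EN = 0;                                /* high-to-low pulse on E      */
    delay(2);                              /* wait for the LCD to finish  */
}

static void lcd_data(unsigned char dat)    /* send a data (character) byte */
{
    P1 = dat;
    RS = 1;                                /* data register               */
    RW = 0;
    EN = 1;
    delay(1);
    EN = 0;
    delay(2);
}

void main(void)
{
    lcd_cmd(0x38);            /* 8-bit mode, 2 lines, 5x7 font            */
    lcd_cmd(0x0E);            /* display on, cursor on                    */
    lcd_cmd(0x01);            /* clear display                            */
    lcd_cmd(0x80);            /* cursor to the start of line 1            */

    lcd_data('Y');
    lcd_data('E');
    lcd_data('S');

    while (1);
}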
******************
Programming in Embedded C :
Embedded C Programming with Keil Language
• Embedded C is the most popular programming language in the software field for developing electronic gadgets.
• Each processor used in an electronic system is associated with embedded software.
• Embedded C programming plays a key role in making the processor perform specific functions.
• In day-to-day life we use many electronic devices such as mobile phones, washing machines, digital cameras, etc.
• All these devices work on microcontrollers that are programmed in embedded C.
In embedded system programming, C code is preferred over other languages due to the following reasons:
• Easy to understand
• High Reliability
• Portability
• Scalability
Features of Embedded C
1. It is only an extension of the C language and nothing more.
2. Its source code format depends upon the kind of microcontroller or microprocessor being used.
3. Through embedded C, high-level optimization can be done.
4. It is used in microprocessor or microcontroller applications.
5. It works with limited resources, since an embedded system has only limited memory.
6. Embedded C programs run under real-time constraints, and output is not available through an operating system.
7. It supports only the target processor or controller.
8. In embedded C, only pre-defined programs can run.
9. It requires an embedded C compiler that is compatible with all the embedded system resources.
10. Some examples of embedded C applications are:
• Digital camera
• DVD
• Digital TV
11. The major advantages of embedded C are its coding speed and small code size; it is simple and easy to understand.
Addition:
#include <reg51.h>
void main(void)
{
    while (1)
    {
        unsigned char a, b, c;
        P1 = 0xFF;                /* write all 1s so Port 1 acts as an input */
        P2 = 0xFF;                /* write all 1s so Port 2 acts as an input */
        a = P1;                   /* read the two operands from the ports    */
        b = P2;
        c = a + b;                /* 8-bit sum (any carry is discarded)      */
        P3 = c;                   /* output the result on Port 3             */
    }
}

Subtraction:
#include <reg51.h>
void main(void)
{
    while (1)
    {
        unsigned char a, b, c;
        P1 = 0xFF;
        P2 = 0xFF;
        a = P1;
        b = P2;
        c = a - b;                /* 8-bit difference                        */
        P3 = c;
    }
}

Multiplication:
#include <reg51.h>
void main(void)
{
    while (1)
    {
        unsigned char a, b, c;
        P1 = 0xFF;
        P2 = 0xFF;
        a = P1;
        b = P2;
        c = a * b;                /* only the low byte of the product fits in c */
        P3 = c;
    }
}
*******************
Real-time operating systems (RTOS):
INTRODUCTION:
Simple applications can be programmed on a microprocessor by writing a single piece of code, but a complex application cannot be executed as one simple program, because multiple operations must be performed at widely varying times.
There are two fundamental abstractions that allow us to build complex applications on
microprocessors:
Process: It defines the state of an executing program
Operating system (OS): It provides the mechanism for switching execution
between the processes.
These two mechanisms together let us build applications with more complex
functionality and much greater flexibility to satisfy timing requirements. Satisfy complex
timing requirements introduce complex control into programs. Using processes to we can
easily obtain the required control within the operating system.
RTOS:
A real-time operating system (RTOS) is a special-purpose operating system designed for embedded systems. An RTOS provides facilities for satisfying real-time requirements and allocates resources using algorithms that take real time into account. In an RTOS, processing must be done within defined time constraints; otherwise the system will fail or cause major problems.
Types of RTOS:
There are two types of Real Time Operating systems
(i) Hard real time system
(ii) Soft real time system
A task is nothing but a distinct part of the functionality in a single system. For example, when designing a telephone answering machine, we define functions such as answering a call, recording a call, and operating the user’s controls. These different functions are the various applications in the system, and each such application is called a task.
A process is a single execution of a program. If we run the same program two
different times, we have created two different processes. Each process has its own state that
includes not only its registers but all of its memory.
(i) Threads:
A thread is a lightweight process; threads run in the same address space (processes that share the same address space are often called threads). To understand why an application can be divided into tasks, let us consider the example of a data compression box.
This device is connected to serial ports on both ends. The input to the box is an
uncompressed stream of bytes. The box emits a compressed string of bits on the output serial
line, based on a predefined compression table. Such a box may be used, for example, to
compress data being sent to a modem.
(ii) Variable data rates:
Variable data rate is a term used in telecommunication and computing that relates to the bit rate used in sound or video encoding. It varies the amount of output data per time segment, allowing a higher bit rate to be allocated to the more complex segments of a media file (e.g., video) and less space to the less complex segments (e.g., audio).
Advantages:
It produces a better quality-to-space ratio.
It is more flexible and encodes the sound or video data more accurately.
Disadvantages:
It may take more time to encode, as the process is more complex.
Some hardware may not be compatible with variable data rates.
(iii) Asynchronous input:
The text compression box provides a simple example of rate control problems. A
control panel on a machine provides an example of a different type of rate control
problem, known as asynchronous input. The control panel of the compression box may, for
example, include a compression mode button that disables or enables compression, so that the
input text is passed through unchanged when compression is disabled.
The button will be depressed at a much lower rate than characters will be received,
since it is not physically possible for a person to repeatedly depress a button at even slow
serial line rates. Keeping up with the input and output data while checking on the button can
introduce some very complex control code into the program.
Sampling the button’s state too slowly can cause the machine to miss a button
depression entirely, but sampling it too frequently and duplicating a data value can
cause the machine to incorrectly compress data. One solution is to introduce a counter into
the main compression loop.
Requirements on timing and execution rate can create major problems in programming. When code is written to satisfy several different timing requirements at once, the control structures necessary to get any sort of solution become very complex.
***********
MULTIRATE SYSTEMS:
The systems which are embedded with more than one application are called
multirate systems. Multirate embedded computing systems are very common, including
automobile engines, printers, and cell phones. In all these systems, certain operations must be executed periodically, and each operation is executed at its own rate.
Automotive engine control:
The simple example to describe multirate system is automobile engine with multirate
control.
The simplest automotive engine controllers, such as the ignition controller for a basic motorcycle engine, perform only one task: timing the firing of the spark plug, which takes the place of a mechanical distributor.
Using a microcontroller that senses the engine crankshaft position allows the spark timing to
vary with engine speed. Firing the spark plug is a periodic process.
The control algorithm for a modern automobile engine is much more complex,
making the need for microprocessors that much greater. Automobile engines must meet
strict requirements (mandated by law in the United States) on both emissions and fuel
economy. On the other hand, the engines must still satisfy customers not only in terms of
performance but also in terms of ease of starting in extreme cold and heat, low maintenance,
and so on.
Timing Requirements on Processes:
Depending on the application, processes can have several different types of timing requirements. A scheduling policy must define the timing requirements that it uses to determine whether a schedule is valid. Before studying scheduling, we discuss the two most important timing requirements on processes:
(i) Release time / initiation time
(ii) Deadline
Release time:
The release time is the time at which the process becomes ready to execute. In simpler systems, the process may become ready at the beginning of its period.
Deadline:
A deadline specifies when a computation must be finished. The deadline for a periodic process is generally measured from the release time, since that is the only reasonable time reference. The period of a process is the time between successive executions. Some scheduling policies make the simplifying assumption that the deadline occurs at the end of the period. In a multirate system, each process executes at its own distinct rate. For periodic processes, the initiation interval is usually equal to the period; however, pipelined execution of processes allows the initiation interval to be less than the period.
Example definitions of initiation time and deadlines
In this case, the initiation interval is equal to one fourth of the period. It is possible for a process to have an initiation interval less than the period even in single-CPU systems. If the process execution time is significantly less than the period, it may be possible to initiate multiple copies of a program at slightly offset times.
The order of execution of processes may be constrained when the processes pass data
between each other. Figure 6.4 shows a set of processes with data dependencies among them.
Before a process can become ready, all the processes on which it depends must complete and
send their data to it.
The data dependencies define a partial ordering on process execution: P1 and P2 can execute in any order (or in interleaved fashion) but must both complete before P3, and P3 must complete before P4. All processes must finish before the end of the period.
Figure 6.5 illustrates the communication required among three elements of an MPEG
audio/video decoder. Data come into the decoder in the system format, which multiplexes
audio and video data. The system decoder process demultiplexes the audio and video data
and distributes it to the appropriate processes.
CPU Metrics:
CPU metrics are described by initiation time and completion time.
Initiation time:
The initiation time is the time at which a process actually starts executing on the CPU.
Completion time:
The completion time is the time at which the process finishes its work. The CPU time of process i is called Ci. Note that the CPU time is not equal to the completion time minus the initiation time; several other processes may interrupt execution. The total CPU time consumed by a set of n processes is
T = C1 + C2 + ... + Cn
We need a basic measure of the efficiency with which we use the CPU. The simplest and most direct measure is utilization:
Utilization is the ratio of the CPU time that is being used for useful computations to the total available CPU time. This ratio ranges between 0 and 1, with 1 meaning that all of the available CPU time is being used for system purposes. Utilization is often expressed as a percentage. If we measure the total execution time T of all processes over an interval of time t, then the CPU utilization is
U = T / t
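For instance (an illustrative figure, not one taken from the text): if the processes together consume T = 45 ms of CPU time over an interval of t = 100 ms, the utilization is U = 45/100 = 0.45, i.e. 45%.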
Scheduling Policies:
A scheduling policy defines how processes are selected for promotion from the ready state to the running state. Every multitasking OS implements some type of scheduling policy. Choosing the right scheduling policy not only ensures that the system will meet all its timing requirements, but also influences how much CPU capacity is needed to implement the system’s functionality. Schedulability means whether there exists a schedule of execution for the processes in a system that satisfies all their timing requirements. Utilization is one of the key metrics in evaluating a scheduling policy.
In addition to utilization, we must also consider scheduling overhead—the execution
time required to choose the next execution process, which is incurred in addition to any
context switching overhead.
Kernel:
We want to share the CPU across two processes. The kernel is the part of the OS that
determines what process is running. The kernel is activated periodically by the timer. The
length of the timer period is known as the time quantum because it is the smallest
increment in which we can control CPU activity. The kernel determines what process will
run next and causes that process to run. On the next timer interrupt, the kernel may pick
the same process or another process to run.
Context switching:
The timer interrupt causes control to pass from the currently executing process to the kernel. Assembly language can be used to save and restore registers.
The set of registers that defines a process is known as its context, and switching from one process’s register set to another is known as context switching. The data structure that holds the state of a process is known as the process control block (PCB).
A context switch is the computing process of storing and restoring the state of a CPU so that execution can be resumed from the same point at a later time. This enables multiple processes to share a single CPU.
Time slice:
In a scheduling scheme, one process needs to be switched out of the CPU so that another process can run. Within a pre-emptive multitasking operating system, the scheduler allows every task to run for a certain amount of time called the time slice.
4.2 Process and context:
The sequence of process execution using context switching in a real-time operating system (RTOS) is given below (taking the FreeRTOS.org kernel as an example).
Context switching is enabled and the sequence of tasks is performed within the required timing constraints.
The diagram also shows the application tasks, the hardware timer, and all the functions in the kernel that are involved in context switching.
Here,
vPreemptiveTick() is called when the timer ticks.
portSAVE_CONTEXT() swaps out the current task context.
vTaskSwitchContext() chooses a new task.
portRESTORE_CONTEXT() swaps in the new context.
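A schematic sketch of how these calls fit together is given below. It follows the sequence just described rather than the actual FreeRTOS source (in the real port layer, portSAVE_CONTEXT and portRESTORE_CONTEXT are macros and the tick routine also updates the kernel’s tick count), so the extern declarations and the framing here are illustrative assumptions.

/* Schematic only: the real definitions live in the FreeRTOS port layer. */
extern void portSAVE_CONTEXT(void);     /* save the running task's registers      */
extern void vTaskSwitchContext(void);   /* let the scheduler choose the next task */
extern void portRESTORE_CONTEXT(void);  /* load the chosen task's registers       */

/* Called from the periodic timer interrupt (the "tick"). */
void vPreemptiveTick(void)
{
    portSAVE_CONTEXT();       /* 1. freeze the current task's context          */
    vTaskSwitchContext();     /* 2. pick the highest-priority ready task       */
    portRESTORE_CONTEXT();    /* 3. resume execution in the new task's context */
}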
-----------------
PRIORITY-BASED SCHEDULING:
A fundamental job of an operating system is to allocate resources among the programs that request them. The CPU is the most important resource in the system, so scheduling the CPU is the operating system’s most important job.
Priority Inversion:
• A low-priority process blocks the execution of a higher-priority process by keeping hold of a resource it needs. This is priority inversion.
• This priority inversion is dealt with by priority inheritance.
• In priority inheritance,
– Promotes the priority of the process temporarily
– The priority of the process becomes higher than that of any other process that
may use the resource.
– Once the process is finished with the resource, its priority is demoted to its
normal value.
Process Priorities:
The priority of a process is one of the main aspects the OS kernel works with. Based on process priorities, the kernel decides which process will run next. If we assign each process a numerical priority, the kernel can simply look at the processes and their priorities, see which ones actually want to execute, and select the highest-priority process that is ready to run. The priority is a non-negative integer value.
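As a sketch of the selection rule just described, the fragment below picks the highest-priority ready process from a small process table. The structure fields and the convention that a larger number means higher priority are illustrative assumptions, not part of any particular kernel.

#include <stddef.h>

struct process {
    const char  *name;
    unsigned int priority;   /* non-negative; larger value assumed to mean more important */
    int          ready;      /* 1 if the process currently wants to execute               */
};

/* Return the highest-priority ready process, or NULL if none is ready. */
static struct process *pick_next(struct process *table, size_t n)
{
    struct process *best = NULL;
    size_t i;
    for (i = 0; i < n; i++) {
        if (table[i].ready && (best == NULL || table[i].priority > best->priority))
            best = &table[i];
    }
    return best;
}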
1. Rate-Monotonic Scheduling (RMS):
Rate-monotonic scheduling (RMS), introduced by Liu and Layland, is a static-priority scheduling policy whose fixed priorities are sufficient to schedule processes efficiently in many situations. The theory underlying RMS is known as rate-monotonic analysis (RMA). This theory, as summarized below, uses a relatively simple model of the system.
All processes run periodically on a single CPU.
Context switching time is ignored.
There are no data dependencies between processes.
The execution time for a process is constant.
All deadlines are at the ends of their periods.
The highest-priority ready process is always selected for execution.
Rate-monotonic scheduling (RMS) assigns task priorities in order of task frequency: the task with the shortest period gets the highest priority, the task with the next shortest period gets the second highest priority, and so on.
Example 6.3
Example for Rate-monotonic scheduling:
Here is a simple set of processes and their characteristics (the execution times and periods, which can be read off the timeline discussed below, are: P1: C1 = 1, period 4; P2: C2 = 2, period 6; P3: C3 = 3, period 12).
Applying the principles of RMA, we give P1 the highest priority, P2 the middle priority, and P3 the lowest priority. To understand all the interactions between the periods, we need to construct a timeline equal in length to the least common multiple of the process periods, which is 12 time units in this case. The complete schedule over the least common multiple of the periods is called the unrolled schedule.
All three periods start at time zero. P1’s data arrive first. Since P1 is the highest-
priority process, it can start to execute immediately. After one time unit, P1 finishes and goes
out of the ready state until the start of its next period. At time 1, P2 starts executing as the
highest-priority ready process. At time 3, P2 finishes and P3 starts executing. P1’s next
iteration starts at time 4, at which point it interrupts P3. P3 gets one more time unit of
execution between the second iterations of P1 and P2, but P3 does not get to finish until after
the third iteration of P1.
• Scheduling time – For calculating the scheduling time of the algorithm, we take the LCM of the time periods of all the processes. LCM(4, 6, 12) of the above example is 12. Thus we can construct the schedule over 12 time units.
• Priority – As discussed above, the priority will be the highest for the process which has
the least running time period. Thus P1 will have the highest priority, and after that P2 and
lastly P3.
(i) Response time:
The response time of a process is defined as the time at which the process finishes.
CPU utilization is determined by the relationship between each process’s execution time and the total timeline. The fraction Ci/Pi is the fraction of time that the CPU spends executing task i, so the total CPU utilization for the three processes is
U = C1/P1 + C2/P2 + C3/P3
where Ci is the execution time and Pi is the period of process i.
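As a check, using the execution times and periods listed above for Example 6.3: U = 1/4 + 2/6 + 3/12 = 10/12 ≈ 0.83, so about 83% of the CPU is busy over the 12-unit unrolled schedule, which is consistent with the timeline in which all deadlines are met.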
Example 2: Consider the following different set of execution times for these processes, keeping the same deadlines (here C1 = 2, C2 = 3, and C3 = 3, with the same periods 4, 6 and 12).
In this case there is no feasible priority assignment that satisfies the schedule; process P3 cannot be scheduled in the given timeline. During one 12-time-unit interval (the LCM of 4, 6 and 12), we must execute P1 three times, requiring 6 units of CPU time; P2 twice, costing 6 units of CPU time; and P3 once, requiring 3 units of CPU time. The total of 6 + 6 + 3 = 15 units of CPU time is more than the 12 time units available.
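Equivalently, the requested utilization is 6/12 + 6/12 + 3/12 = 15/12 = 1.25 > 1, so no scheduling policy of any kind could meet all the deadlines for this set.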
It is possible to show that for a set of two tasks under RMS scheduling, the CPU utilization U will be no greater than
U = 2(2^(1/2) - 1) ≈ 0.83
In other words, the CPU will be idle at least 17% of the time.
--------------------------------------------------------------------------------------------------------------
Example: RMS - missing deadline
Task    Execution Time    Period
T1      1                 4
T2      2                 6
T3      3                 8
• Observe that at time 6, even though the deadline of task T3 is very close, the scheduler decides to schedule task T2. This is the main reason why T3 misses its deadline.
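A quick check explains this: the utilization of this set is U = 1/4 + 2/6 + 3/8 ≈ 0.96, which exceeds the Liu-Layland bound for three tasks, 3(2^(1/3) - 1) ≈ 0.78, so RMS is not guaranteed to meet every deadline, and the timeline confirms that T3 misses its deadline.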
2. Earliest-Deadline-First Scheduling:
Earliest deadline first (EDF) is another well-known scheduling policy that was also
studied by Liu and Layland [Liu73]. It is a dynamic priority scheme—it changes process
priorities during execution based on initiation times. As a result, it can achieve higher CPU
utilizations than RMS.
• Earliest Deadline First (EDF) is a dynamic priority algorithm
• The priority of a job is inversely proportional to its absolute deadline;
• In other words, the highest priority job is the one with the earliest deadline;
The EDF policy is also very simple: It assigns priorities in order of deadline. The
highest-priority process is the one whose deadline is nearest in time, and the lowest priority
process is the one whose deadline is farthest away. Clearly, priorities must be recalculated at
every completion of a process. However, the final step of the OS during the scheduling
procedure is the same as for RMS—the highest-priority ready process is chosen for
execution.
Example 6.4 illustrates EDF scheduling in practice, using the following task set:
Task    Execution Time    Period
T1      1                 4
T2      2                 6
T3      3                 8
According to the above task set, T1 initially has the highest priority (its deadline is nearest), T2 the middle priority, and T3 the lowest priority; under EDF these priorities are recalculated as deadlines arrive.
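Note that the same task set that missed a deadline under RMS is schedulable under EDF: its utilization is U = 1/4 + 2/6 + 3/8 ≈ 0.96, which is no greater than 1, and EDF can feasibly schedule any independent periodic task set with utilization up to 1 on a single processor.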
Compare Rate-Monotonic Scheduling and Earliest-Deadline-First scheduling (only points 4 and 5 of the original table are recoverable):
4. RMS: It is easier to ensure that all the deadlines are met. EDF: It is much more difficult compared to RMS.
5. RMS: It is more problematic. EDF: While CPU utilization is less than 100%, it gives less problem in completing the deadlines.
*************************