
UNIT – III

BOTTOM-UP PARSING


LR PARSING INTRODUCTION
The "L" is for left-to-right scanning of the input and the "R" is for constructing a rightmost
derivation in reverse.

WHY LR PARSING:
 LR parsers can be constructed to recognize virtually all programming-language
constructs for which context-free grammars can be written.
 The LR parsing method is the most general non-backtracking shift-reduce parsing
method known, yet it can be implemented as efficiently as other shift-reduce
methods.
 The class of grammars that can be parsed using LR methods is a proper superset of the
class of grammars that can be parsed with predictive (LL) parsers.
 An LR parser can detect a syntactic error as soon as it is possible to do so on a left-to-
right scan of the input.
The disadvantage is that it takes too much work to construct an LR parser by hand for a
typical programming-language grammar, but there are many LR parser generators available
to make this task easy.
MODELS OF LR PARSERS
The schematic form of an LR parser is shown below.

The program uses a stack to store a string of the form s0X1s1X2...Xmsm where sm is on top.
Each Xi is a grammar symbol and each si is a symbol representing a state. Each state symbol
summarizes the information contained in the stack below it. The combination of the state
symbol on top of the stack and the current input symbol are used to index the parsing table
and determine the shift-reduce parsing decision. The parsing table consists of two parts: a
parsing action function action and a goto function goto. The program driving the LR parser
behaves as follows: it determines sm, the state currently on top of the stack, and ai, the
current input symbol. It then consults action[sm, ai], which can have one of four values:
 shift s, where s is a state
 reduce by a grammar production A -> b
 accept
 error
The function goto takes a state and grammar symbol as arguments and produces a state.
For a parsing table constructed for a grammar G, the goto table is the transition function of a
deterministic finite automaton that recognizes the viable prefixes of G. Recall that the viable
prefixes of G are those prefixes of right-sentential forms that can appear on the stack of a
shift-reduce parser because they do not extend past the rightmost handle.
A configuration of an LR parser is a pair whose first component is the stack contents and
whose second component is the unexpended input:
(s0 X1 s1 X2 s2... Xm sm, ai ai+1... an$)
This configuration represents the right-sentential form
X1 X2 ... Xm ai ai+1 ... an
in essentially the same way a shift-reduce parser would; only the presence of the states on the
stack is new. Recall the sample parse we did (see Example 1: Sample bottom-up parse) in
which we assembled the right-sentential form by concatenating the remainder of the input
buffer to the top of the stack. The next move of the parser is determined by reading ai and
sm, and consulting the parsing action table entry action[sm, ai]. Note that we are just looking
at the state here and no symbol below it. We'll see how this actually works later.
The configurations resulting after each of the four types of move are as follows:
If action[sm, ai] = shift s, the parser executes a shift move entering the configuration
(s0 X1 s1 X2 s2... Xm sm ai s, ai+1... an$)
Here the parser has shifted both the current input symbol ai and the state s onto the stack;
ai+1 becomes the current input symbol.
If action[sm, ai] = reduce A -> b, then the parser executes a reduce move, entering the
configuration,
(s0 X1 s1 X2 s2... Xm-r sm-r A s, ai ai+1... an$)
where s = goto[sm-r, A] and r is the length of b, the right side of the production. The parser
first popped 2r symbols off the stack (r state symbols and r grammar symbols), exposing state
sm-r. The parser then pushed both A, the left side of the production, and s, the entry for
goto[sm-r, A], onto the stack. The current input symbol is not changed in a reduce move.
The output of an LR parser is generated after a reduce move by executing the semantic action
associated with the reducing production. For example, we might just print out the production
reduced.
If action[sm, ai] = accept, parsing is completed. If action[sm, ai] = error, the parser has
discovered a syntax error and calls an error recovery routine.
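The driver just described can be written down almost directly. Below is a minimal Python sketch of it; the table encodings (tuples like ('shift', s) and ('reduce', (A, body)) in dicts keyed by (state, symbol)) and the name lr_parse are illustrative choices, not part of any particular tool. The sketch keeps the state and symbol stacks separately, so a reduce pops r entries from each rather than 2r entries from one combined stack.

def lr_parse(tokens, action, goto_table, start_state=0):
    # Table-driven LR driver, following the four moves described above.
    # action[(state, terminal)] is assumed to be one of
    #   ('shift', s), ('reduce', (A, body)), ('accept',); a missing entry means error.
    # goto_table[(state, A)] gives the next state after reducing to nonterminal A.
    stack = [start_state]              # state stack s0 ... sm
    symbols = []                       # grammar symbols X1 ... Xm (kept only for readability)
    toks = list(tokens) + ['$']
    i = 0                              # index of the current input symbol ai
    while True:
        s, a = stack[-1], toks[i]
        act = action.get((s, a))
        if act is None:
            raise SyntaxError("parse error at %r in state %d" % (a, s))
        if act[0] == 'shift':
            symbols.append(a)
            stack.append(act[1])       # push ai and the new state s
            i += 1
        elif act[0] == 'reduce':
            A, body = act[1]
            for _ in body:             # pop r states and r grammar symbols
                stack.pop()
                symbols.pop()
            print(A, '->', ' '.join(body))            # "semantic action": print the production
            symbols.append(A)
            stack.append(goto_table[(stack[-1], A)])  # push A and goto[s(m-r), A]
        elif act[0] == 'accept':
            return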
OPERATOR PRECEDENCE PARSING
Precedence Relations
Bottom-up parsers for a large class of context-free grammars can be easily developed
using operator grammars. Operator grammars have the property that no production right side
is empty or has two adjacent nonterminals. This property enables the implementation of
efficient operator-precedence parsers. These parsers rely on the following three precedence
relations:
Relation Meaning
a <· b a yields precedence to b
a =· b a has the same precedence as b

a ·> b a takes precedence over b

These operator precedence relations allow us to delimit the handles in the right-sentential
forms: <· marks the left end, =· appears in the interior of the handle, and ·> marks the right
end.

Example: The input string:


id1 + id2 * id3
after inserting precedence relations becomes

$ <· id1 ·> + <· id2 ·> * <· id3 ·> $

Having precedence relations allows us to identify handles as follows:


 scan the string from the left until the first ·> is seen
 then scan backwards (right to left) from that point until a <· is seen
 everything between the <· and the ·> forms the handle
OPERATOR PRECEDENCE PARSING ALGORITHM
Initialize: Set ip to point to the first symbol of w$

Repeat:
    Let X be the top stack symbol, and a the symbol pointed to by ip
    if $ is on top of the stack and ip points to $ then
        return
    else
        Let a be the top terminal on the stack, and b the symbol pointed to by ip
        if a <· b or a =· b then
            push b onto the stack
            advance ip to the next input symbol
        else if a ·> b then
            repeat
                pop the stack
            until the top stack terminal is related by <·
                  to the terminal most recently popped
        else error()
end
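A direct transcription of this algorithm in Python might look like the sketch below. It only accepts or rejects the input (handles are popped but no parse tree is built), it keeps only terminals on the stack, exactly as the algorithm above does, and it assumes the precedence relations are given as a dict prec mapping pairs of terminals to '<', '=' or '>'; all of these are assumptions of the sketch, not a standard library facility.

def operator_precedence_parse(tokens, prec):
    # Sketch of the operator-precedence algorithm above.
    # prec[(a, b)] is '<', '=' or '>' for terminals a and b (including '$');
    # a missing entry is treated as an error.
    stack = ['$']
    toks = list(tokens) + ['$']
    ip = 0
    while True:
        b = toks[ip]
        if stack == ['$'] and b == '$':
            return True                                  # only $ left and input exhausted: accept
        a = stack[-1]                                    # top terminal on the stack
        rel = prec.get((a, b))
        if rel in ('<', '='):
            stack.append(b)                              # shift b
            ip += 1                                      # advance ip
        elif rel == '>':
            while True:                                  # pop the handle
                popped = stack.pop()
                if prec.get((stack[-1], popped)) == '<':
                    break
        else:
            raise SyntaxError("no relation between %r and %r" % (a, b))

# For the id1 + id2 * id3 example above, the relations already shown are enough:
# prec = {('$','id'):'<', ('+','id'):'<', ('*','id'):'<', ('id','+'):'>',
#         ('id','*'):'>', ('id','$'):'>', ('$','+'):'<', ('+','*'):'<',
#         ('*','$'):'>', ('+','$'):'>'}
# operator_precedence_parse(['id', '+', 'id', '*', 'id'], prec) returns True.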

ALGORITHM FOR CONSTRUCTING PRECEDENCE FUNCTIONS


1. Create functions fa and ga for each grammar terminal a and for the end-of-string symbol $.
2. Partition the symbols into groups so that fa and gb are in the same group if a =· b (there
   can be symbols in the same group even if they are not connected by this relation).
3. Create a directed graph whose nodes are the groups; then, for each pair of symbols a and b,
   place an edge from the group of gb to the group of fa if a <· b; otherwise, if a ·> b,
   place an edge from the group of fa to that of gb.
4. If the constructed graph has a cycle, then no precedence functions exist. If there are
   no cycles, let f(a) and g(a) be the lengths of the longest paths beginning at the groups
   of fa and ga respectively.
Example: Consider the precedence table above; applying the algorithm yields a graph over the
f and g groups whose longest path lengths give the precedence functions.
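A sketch of this construction in code: it merges the f/g nodes of symbols related by =· into groups, adds the edges of step 3, and, if the resulting graph is acyclic, takes each function value to be the length of the longest path leaving the corresponding group. The relation encoding ('<', '=', '>') in a plain dict is an assumption of the sketch.

def precedence_functions(terminals, prec):
    # Returns (f, g) dicts of precedence functions, or None if a cycle
    # in the graph shows that no such functions exist.
    # Steps 1-2: one node per fa and ga, merged when a =· b (naive union-find).
    parent = {}
    for a in terminals:
        parent[('f', a)] = ('f', a)
        parent[('g', a)] = ('g', a)
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for (a, b), rel in prec.items():
        if rel == '=':
            parent[find(('f', a))] = find(('g', b))
    # Step 3: edges between groups.
    edges = {}
    for (a, b), rel in prec.items():
        if rel == '<':                 # f(a) < g(b): edge from the group of gb to that of fa
            edges.setdefault(find(('g', b)), set()).add(find(('f', a)))
        elif rel == '>':               # f(a) > g(b): edge from the group of fa to that of gb
            edges.setdefault(find(('f', a)), set()).add(find(('g', b)))
    # Step 4: longest outgoing path per group; a cycle means failure.
    GREY, BLACK = 1, 2
    colour, longest = {}, {}
    def dfs(u):
        if colour.get(u) == GREY:
            raise ValueError("cycle")
        if colour.get(u) == BLACK:
            return longest[u]
        colour[u] = GREY
        longest[u] = max((1 + dfs(v) for v in edges.get(u, ())), default=0)
        colour[u] = BLACK
        return longest[u]
    try:
        f = {a: dfs(find(('f', a))) for a in terminals}
        g = {a: dfs(find(('g', a))) for a in terminals}
    except ValueError:
        return None
    return f, g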

SHIFT REDUCE PARSING


A shift-reduce parser uses a parse stack which (conceptually) contains grammar symbols.
During the operation of the parser, symbols from the input are shifted onto the stack. If a
prefix of the symbols on top of the stack matches the RHS of a grammar rule which is the
correct rule to use within the current context, then the parser reduces the RHS of the rule to
its LHS, replacing the RHS symbols on top of the stack with the nonterminal occurring on the
LHS of the rule. This shift-reduce process continues until the parser terminates, reporting
either success or failure. It terminates with success when the input is legal and is accepted by
the parser. It terminates with failure if an error is detected in the input. The parser is nothing
but a stack automaton which may be in one of several discrete states. A state is usually
represented simply as an integer. In reality, the parse stack contains states, rather than
grammar symbols. However, since each state corresponds to a unique grammar symbol, the
state stack can be mapped onto the grammar symbol stack mentioned earlier.
The operation of the parser is controlled by two tables:
ACTION TABLE
The action table is a table with rows indexed by states and columns indexed by terminal
symbols. When the parser is in some state s and the current lookahead terminal is t, the
action taken by the parser depends on the contents of action[s][t], which can contain four
different kinds of entries:
Shift s'
    Shift state s' onto the parse stack.
Reduce r
    Reduce by rule r. This is explained in more detail below.
Accept
    Terminate the parse with success, accepting the input.
Error
    Signal a parse error.
GOTO TABLE
The goto table is a table with rows indexed by states and columns indexed by nonterminal
symbols. When the parser is in state s immediately after reducing by rule N, then the next
state to enter is given by goto[s][N].
The current state of a shift-reduce parser is the state on top of the state stack. The detailed
operation of such a parser is as follows:
1. Initialize the parse stack to contain a single state s0, where s0 is the distinguished initial
state of the parser.
2. Use the state s on top of the parse stack and the current lookahead t to consult the action
table entry action[s][t]:
· If the action table entry is shift s' then push state s' onto the stack and advance the
input so that the lookahead is set to the next token.
· If the action table entry is reduce r and rule r has m symbols in its RHS, then pop
m symbols off the parse stack. Let s' be the state now revealed on top of the parse
stack and N be the LHS nonterminal for rule r. Then consult the goto table and
push the state given by goto[s'][N] onto the stack. The lookahead token is not
changed by this step.
 If the action table entry is accept, then terminate the parse with success.
 If the action table entry is error, then signal an error.
3. Repeat step (2) until the parser terminates.
For example, consider the following simple grammar
0) $S: stmt <EOF>
1) stmt: ID ':=' expr
2) expr: expr '+' ID
3) expr: expr '-' ID
4) expr: ID
which describes assignment statements like a:= b + c - d. (Rule 0 is a special augmenting
production added to the grammar).
One possible set of shift-reduce parsing tables is shown below (sn denotes shift n, rn denotes
reduce n, acc denotes accept and blank entries denote error entries):
Parser Tables
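Whatever the exact state numbering in those tables, the sequence of shift and reduce moves on the input a := b + c - d is determined by the grammar. A trace, showing grammar symbols rather than state numbers on the stack, looks like this:

Stack                    Remaining input               Action
$                        ID := ID + ID - ID <EOF>      shift ID
$ ID                     := ID + ID - ID <EOF>         shift :=
$ ID :=                  ID + ID - ID <EOF>            shift ID
$ ID := ID               + ID - ID <EOF>               reduce by (4) expr: ID
$ ID := expr             + ID - ID <EOF>               shift +
$ ID := expr +           ID - ID <EOF>                 shift ID
$ ID := expr + ID        - ID <EOF>                    reduce by (2) expr: expr '+' ID
$ ID := expr             - ID <EOF>                    shift -
$ ID := expr -           ID <EOF>                      shift ID
$ ID := expr - ID        <EOF>                         reduce by (3) expr: expr '-' ID
$ ID := expr             <EOF>                         reduce by (1) stmt: ID ':=' expr
$ stmt                   <EOF>                         accept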
SLR PARSER
An LR(0) item (or just item) of a grammar G is a production of G with a dot at some position
of the right side indicating how much of a production we have seen up to a given point.
For example, for the production E -> E + T we would have the following items:
[E -> .E + T]
[E -> E. + T]
[E -> E +. T]
[E -> E + T.]
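In code, an item is conveniently represented as a production together with the position of the dot; the tuple encoding below is an arbitrary choice, used again in the sketches that follow.

# An LR(0) item is a production with a dot position: here encoded as the
# tuple (head, body, dot) with 0 <= dot <= len(body).
def all_items(head, body):
    # All items for one production; for E -> E + T this yields the four items listed above.
    return [(head, tuple(body), dot) for dot in range(len(body) + 1)]

# all_items('E', ('E', '+', 'T')) ==
#   [('E', ('E','+','T'), 0),   # [E -> .E + T]
#    ('E', ('E','+','T'), 1),   # [E -> E. + T]
#    ('E', ('E','+','T'), 2),   # [E -> E +. T]
#    ('E', ('E','+','T'), 3)]   # [E -> E + T.]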
CONSTRUCTING THE SLR PARSING TABLE
To construct the parser table we must convert our NFA of items into a DFA. The states in the LR
table will be the e-closures of the states corresponding to the items, so the process of
creating the LR state table parallels the process of constructing an equivalent DFA from a
machine with e-transitions. Been there, done that: this is essentially the subset construction
algorithm, so we are in familiar territory here.
We need two operations: closure() and goto().
closure()
If I is a set of items for a grammar G, then closure(I) is the set of items constructed from I by
two rules: Initially, every item in I is added to closure(I).
If A -> a.Bb is in closure(I), and B -> g is a production, then add the initial item [B -> .g] to
closure(I), if it is not already there. Apply this rule until no more new items can be added to closure(I).
For the augmented expression grammar E' -> E, E -> E + T | T, T -> T * F | F, F -> (E) | id:
if I is the set consisting of the one item {[E' -> .E]}, then closure(I) contains:
I0: E' -> .E
E -> .E + T
E -> .T
T -> .T * F
T -> .F
F -> .(E)
F -> .id
goto()
goto(I, X), where I is a set of items and X is a grammar symbol, is defined to be the closure
of the set of all items [A -> aX.b] such that [A -> a.Xb] is in I. The idea here is fairly intuitive:
if I is the set of items that are valid for some viable prefix g, then goto(I, X) is the set of items
that are valid for the viable prefix gX.
SETS-OF-ITEMS-CONSTRUCTION
To construct the canonical collection of sets of LR(0) items for an augmented grammar G':

procedure items(G')
begin
    C := {closure({[S' -> .S]})};
    repeat
        for each set of items I in C and each grammar symbol X
            such that goto(I, X) is not empty and not in C do
                add goto(I, X) to C;
    until no more sets of items can be added to C
end;
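The three procedures translate almost line for line into Python. The sketch below assumes a grammar given as a dict from nonterminal to a list of right-hand sides (tuples of symbols), with items encoded as (head, body, dot) tuples as above; these encodings, and the function names, are illustrative choices.

def closure(items, grammar):
    # closure(I): repeatedly add [B -> .g] for every nonterminal B appearing just after a dot.
    result = set(items)
    changed = True
    while changed:
        changed = False
        for (head, body, dot) in list(result):
            if dot < len(body) and body[dot] in grammar:        # dot is before a nonterminal B
                for rhs in grammar[body[dot]]:
                    item = (body[dot], tuple(rhs), 0)
                    if item not in result:
                        result.add(item)
                        changed = True
    return frozenset(result)

def goto(items, X, grammar):
    # goto(I, X): move the dot over X in every item that allows it, then take the closure.
    moved = {(head, body, dot + 1)
             for (head, body, dot) in items
             if dot < len(body) and body[dot] == X}
    return closure(moved, grammar)

def canonical_collection(grammar, start):
    # procedure items(G'): the canonical collection of sets of LR(0) items.
    symbols = {s for rhss in grammar.values() for rhs in rhss for s in rhs}
    C = [closure({(start, tuple(grammar[start][0]), 0)}, grammar)]
    changed = True
    while changed:
        changed = False
        for I in list(C):
            for X in symbols:
                J = goto(I, X, grammar)
                if J and J not in C:
                    C.append(J)
                    changed = True
    return C

# The augmented expression grammar used in the closure example above:
grammar = {
    "E'": [("E",)],
    "E":  [("E", "+", "T"), ("T",)],
    "T":  [("T", "*", "F"), ("F",)],
    "F":  [("(", "E", ")"), ("id",)],
}
C = canonical_collection(grammar, "E'")   # C[0] is the set listed as I0 above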
ALGORITHM FOR CONSTRUCTING AN SLR PARSING TABLE
Input: augmented grammar G'
Output: SLR parsing table functions action and goto for G'
Method:
Construct C = {I0, I1 , ..., In} the collection of sets of LR(0) items for G'. State i
is constructed from Ii:
if [A -> a.ab] is in Ii and goto(Ii, a) = Ij, then set action[i, a] to "shift j". Here a must be a
terminal.
if [A -> a.] is in Ii, then set action[i, a] to "reduce A -> a" for all a in FOLLOW(A). Here A may
not be S'.
if [S' -> S.] is in Ii, then set action[i, $] to "accept"
If any conflicting actions are generated by these rules, the grammar is not SLR(1) and the
algorithm fails to produce a parser. The goto transitions for state i are constructed for all
nonterminals A using the rule: If goto(Ii, A)= Ij, then goto[i, A] = j.
All entries not defined by the above rules are made "error".
The initial state of the parser is the one constructed from the set of items containing [S' -> .S].
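Given the canonical collection and FOLLOW sets, filling in the two tables is mechanical. The sketch below reuses goto() and canonical_collection() from the previous sketch and assumes FOLLOW is supplied as a dict mapping each nonterminal to a set of terminals (computing FOLLOW is not shown); a conflict simply raises an error, signalling that the grammar is not SLR(1).

def slr_table(C, grammar, follow, start):
    # C is the canonical collection from canonical_collection(); follow[A] is FOLLOW(A).
    action, goto_table = {}, {}
    def set_action(i, a, act):
        if action.get((i, a), act) != act:                     # two different actions: not SLR(1)
            raise ValueError("conflict in state %d on %r" % (i, a))
        action[(i, a)] = act
    for i, I in enumerate(C):
        for (head, body, dot) in I:
            if dot < len(body) and body[dot] not in grammar:   # terminal after the dot: shift
                a = body[dot]
                set_action(i, a, ('shift', C.index(goto(I, a, grammar))))
            elif dot == len(body) and head == start:           # [S' -> S.]: accept on $
                set_action(i, '$', ('accept',))
            elif dot == len(body):                             # completed item: reduce on FOLLOW(head)
                for a in follow[head]:
                    set_action(i, a, ('reduce', (head, body)))
        for A in grammar:                                      # goto entries on nonterminals
            J = goto(I, A, grammar)
            if J:
                goto_table[(i, A)] = C.index(J)
    return action, goto_table

The action and goto_table dicts this produces are in exactly the encoding assumed by the lr_parse driver sketched at the start of this unit.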
Let's work an example to get a feel for what is going on,
An Example
(1) E -> E * B
(2) E -> E + B
(3) E -> B
(4) B -> 0
(5) B -> 1
The Action and Goto Tables: the two LR(0) parsing tables for this grammar look as follows:
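With the augmented production S -> E added and one consistent (but arbitrary) numbering of the
LR(0) item sets, the action and goto tables come out as below; Sn means shift and go to state n,
Rn means reduce by production n, and blank entries are errors. Because the table is LR(0), a
reduce appears in every terminal column of its row:

state    *     +     0     1     $        E    B
0                    S1    S2             3    4
1        R4    R4    R4    R4    R4
2        R5    R5    R5    R5    R5
3        S5    S6                Accept
4        R3    R3    R3    R3    R3
5                    S1    S2                  7
6                    S1    S2                  8
7        R1    R1    R1    R1    R1
8        R2    R2    R2    R2    R2

(State 0 is the initial state; state 3 contains S -> E., E -> E.*B and E -> E.+B; states 5 and 6
contain E -> E*.B and E -> E+.B with their closures; states 1, 2, 4, 7 and 8 contain the completed
items B -> 0., B -> 1., E -> B., E -> E*B. and E -> E+B. respectively.)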
CANONICAL LR PARSING
By splitting states when necessary, we can arrange to have each state of an LR parser
indicate exactly which input symbols can follow a handle a for which there is a possible
reduction to A. As the text points out, sometimes the FOLLOW sets give too much
information and cannot discriminate between different reductions.
The general form of an LR(k) item becomes [A -> a.b, s] where A -> ab is a production and s
is a string of terminals. The first part (A -> a.b) is called the core and the second part is the
lookahead. In LR(1) |s| is 1, so s is a single terminal.
A -> a.b is the usual right-hand side with a dot marker; each symbol in s is an incoming token in
which we are interested. Completed items used to be reduced for every incoming token in
FOLLOW(A), but now we reduce only if the next input token is in the lookahead set s. If
we get two productions A -> a and B -> a, we can tell them apart when a is a handle on the
stack, provided the corresponding completed items have different lookahead parts. Furthermore,
note that the lookahead has no effect for an item of the form [A -> a.b, a] if b is not e. Recall
that our problem occurs for completed items, so what we have done now is to say that an item of
the form [A -> a., a] calls for a reduction by A -> a only if the next input symbol is a. More
formally, an LR(1) item [A -> a.b, a] is valid for a viable prefix g if there is a rightmost
derivation S =>* sAw => sabw, where g = sa, and either a is the first symbol of w, or w is e and a is $.
ALGORITHM FOR CONSTRUCTION OF THE SETS OF LR(1) ITEMS
Input: grammar G'

Output: sets of LR(1) items that are the set of items valid for one or more viable prefixes of
G'

Method:

closure(I)
begin
repeat

for each item [A -> a.Bb, a] in I,


each production B -> g in G', and
each terminal b in FIRST(ba)
such that [B -> .g, b] is not in I do
add [B -> .g, b] to I;
until no more items can be added to I;
end;
goto(I, X)
begin
let J be the set of items [A -> aX.b, a] such that
[A -> a.Xb, a] is in I
return closure(J);
end;
procedure items(G')
begin
C := {closure({[S' -> .S, $]})};
repeat
for each set of items I in C and each grammar symbol X such
that goto(I, X) is not empty and not in C do
add goto(I, X) to C
until no more sets of items can be added to C;
end;
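The only changes from the LR(0) closure/goto sketch earlier are that items carry a lookahead and that closure consults FIRST. A sketch, assuming items are encoded as (head, body, dot, lookahead) tuples and that a helper first_of_sequence(symbols), returning FIRST of a symbol string, is available (not shown here):

def closure_lr1(items, grammar, first_of_sequence):
    # For each [A -> a.Bb, la], add [B -> .g, b] for every b in FIRST(b la).
    result = set(items)
    changed = True
    while changed:
        changed = False
        for (head, body, dot, la) in list(result):
            if dot < len(body) and body[dot] in grammar:
                beta = body[dot + 1:] + (la,)
                for b in first_of_sequence(beta):
                    for rhs in grammar[body[dot]]:
                        item = (body[dot], tuple(rhs), 0, b)
                        if item not in result:
                            result.add(item)
                            changed = True
    return frozenset(result)

def goto_lr1(items, X, grammar, first_of_sequence):
    # Move the dot over X, carrying the lookaheads along, then close.
    moved = {(head, body, dot + 1, la)
             for (head, body, dot, la) in items
             if dot < len(body) and body[dot] == X}
    return closure_lr1(moved, grammar, first_of_sequence)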
An example,
Consider the following grammar,
S’->S
S->CC
C->cC
C->d
Sets of LR(1) items
I0: S’->.S,$
S->.CC,$
C->.cC,c/d
C->.d,c/d

I1:S’->S.,$

I2:S->C.C,$
C->.cC,$
C->.d,$
I3:C->c.C,c/d
C->.cC,c/d
C->.d,c/d

I4: C->d.,c/d

I5: S->CC.,$

I6: C->c.C,$
C->.cC,$
C->.d,$

I7:C->d.,$

I8:C->cC.,c/d

I9:C->cC.,$

Here is what the corresponding DFA looks like


ALGORITHM FOR CONSTRUCTION OF THE CANONICAL LR PARSING
TABLE

Input: grammar G'

Output: canonical LR parsing table functions action and goto


1. Construct C = {I0, I1 , ..., In} the collection of sets of LR(1) items for G'.State i is
constructed from Ii.
2. if [A -> a.ab, b] is in Ii and goto(Ii, a) = Ij, then set action[i, a] to "shift j". Here a
must be a terminal.
3. if [A -> a., a] is in Ii and A is not S', then set action[i, a] to "reduce A -> a". (Note that
the reduction is made only on the lookahead a, not on all of FOLLOW(A) as in the SLR construction.)
4. if [S' -> S., $] is in Ii, then set action[i, $] to "accept"
5. If any conflicting actions are generated by these rules, the grammar is not LR(1)
and the algorithm fails to produce a parser.
6. The goto transitions for state i are constructed for all nonterminals A using the
rule: If goto(Ii, A)= Ij, then goto[i, A] = j.
7. All entries not defined by rules 2 through 4 are made "error".
8. The initial state of the parser is the one constructed from the set of items
containing [S' -> .S, $].
LALR PARSER:
We begin with two observations. First, some of the states generated for LR(1) parsing have
the same set of core (or first) components and differ only in their second component, the
lookahead symbol. Our intuition is that we should be able to merge these states and reduce
the number of states we have, getting close to the number of states that would be generated
for LR(0) parsing. This observation suggests a hybrid approach: We can construct the
canonical LR(1) sets of items and then look for sets of items having the same core. We merge
these sets with common cores into one set of items. The merging of states with common
cores can never produce a shift/reduce conflict that was not present in one of the original
states because shift actions depend only on the core, not the lookahead. But it is possible for
the merger to produce a reduce/reduce conflict.
Our second observation is that we are really only interested in the lookahead symbol in
places where there is a problem. So our next thought is to take the LR(0) set of items and add
lookaheads only where they are needed. This leads to a more efficient, but much more
complicated method.
ALGORITHM FOR EASY CONSTRUCTION OF AN LALR TABLE
Input: G'
Output: LALR parsing table functions with action and goto for G'.
Method:
1. Construct C = {I0, I1 , ..., In} the collection of sets of LR(1) items for G'.
2. For each core present among the set of LR(1) items, find all sets having that core
and replace these sets by the union.
3. Let C' = {J0, J1 , ..., Jm} be the resulting sets of LR(1) items. The parsing actions
for state i are constructed from Ji in the same manner as in the construction of the
canonical LR parsing table.
4. If there is a conflict, the grammar is not LALR(1) and the algorithm fails.
5. The goto table is constructed as follows: If J is the union of one or more sets of
LR(1) items, that is, J = I0 U I1 U ... U Ik, then the cores of goto(I0, X), goto(I1,
X), ..., goto(Ik, X) are the same, since I0, I1, ..., Ik all have the same core. Let K
be the union of all sets of items having the same core as goto(I1, X).
6. Then goto(J, X) = K.
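Steps 2 and 3 of this construction amount to grouping the LR(1) item sets by their cores and unioning the lookaheads within each group. A sketch of that merging step, using the (head, body, dot, lookahead) item encoding from the earlier sketches:

from collections import defaultdict

def merge_by_core(C):
    # C is a list of frozensets of LR(1) items (head, body, dot, lookahead).
    groups = defaultdict(list)
    for I in C:
        core = frozenset((head, body, dot) for (head, body, dot, _) in I)
        groups[core].append(I)
    # Unioning the sets in a group unions the lookaheads of items sharing a core.
    return [frozenset(item for I in sets for item in I) for sets in groups.values()]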
Consider the above example,
I3 and I6 can be replaced by their union, and similarly I4 with I7 and I8 with I9:
I36: C->c.C,c/d/$
C->.cC,c/d/$
C->.d,c/d/$
I47: C->d.,c/d/$
I89: C->cC.,c/d/$
Parsing Table

state    c      d      $        S    C
0        S36    S47             1    2
1                      Accept
2        S36    S47                  5
36       S36    S47                  89
47       R3     R3     R3
5                      R1
89       R2     R2     R2

(Here Sn means shift and go to state n, Rn means reduce by production n, where production 1 is
S->CC, 2 is C->cC and 3 is C->d; blank entries are errors.)

HANDLING ERRORS
The LALR parser may continue to do reductions after the LR parser would have spotted an
error, but the LALR parser will never do a shift after the point the LR parser would have
discovered the error and will eventually find the error.

DANGLING ELSE
The dangling else is a problem in computer programming in which an optional else clause in
an if-then(-else) statement makes nested conditionals ambiguous. Formally,
the context-free grammar of the language is ambiguous, meaning there is more than one
correct parse tree.
In many programming languages one may write conditionally executed code in two forms:
the if-then form, and the if-then-else form – the else clause is optional:

Consider the grammar:


S ::= E $
E ::= E + E
    | E * E
    | ( E )
    | id
    | num
and four of its LALR(1) states:
I0: S ::= . E $        ?
    E ::= . E + E      +*$
    E ::= . E * E      +*$
    E ::= . ( E )      +*$
    E ::= . id         +*$
    E ::= . num        +*$

I1: S ::= E . $        ?
    E ::= E . + E      +*$
    E ::= E . * E      +*$

I2: E ::= E * . E      +*$
    E ::= . E + E      +*$
    E ::= . E * E      +*$
    E ::= . ( E )      +*$
    E ::= . id         +*$
    E ::= . num        +*$

I3: E ::= E * E .      +*$
    E ::= E . + E      +*$
    E ::= E . * E      +*$
Here we have a shift-reduce conflict. Consider the first two items in I3. If we have a*b+c and
we parsed a*b, do we reduce using E ::= E * E or do we shift more symbols? In the former
case we get a parse tree (a*b)+c; in the latter case we get a*(b+c). To resolve this conflict, we
can specify that * has higher precedence than +. The precedence of a grammar production is
equal to the precedence of the rightmost token at the rhs of the production. For example, the
precedence of the production E ::= E * E is equal to the precedence of the operator *, the
precedence of the production E ::= ( E ) is equal to the precedence of the token ), and the
precedence of the production E ::= if E then E else E is equal to the precedence of the token
else. The idea is that if the lookahead has higher precedence than the production currently
being used, we shift. For example, if we are parsing E + E using the production rule E ::= E + E
and the lookahead is *, we shift *. If the lookahead has the same precedence as that of the
current production and is left-associative, we reduce; otherwise we shift. The above grammar
is valid if we define the precedence and associativity of all the operators. Thus, it is very
important when you write a parser using CUP or any other LALR(1) parser generator to
specify associativities and precedences for most tokens (especially for those used as
operators). Note: you can explicitly define the precedence of a rule in CUP using the %prec
directive:
E ::= MINUS E %prec UMINUS
where UMINUS is a pseudo-token that has higher precedence than TIMES, MINUS etc, so
that -1*2 is equal to (-1)*2, not to -(1*2).
Another thing we can do when specifying an LALR(1) grammar for a parser generator is
error recovery. All the entries in the ACTION and GOTO tables that have no content
correspond to syntax errors. The simplest thing to do in case of error is to report it and stop
the parsing. But we would like to continue parsing to find more errors. This is called error
recovery. Consider the grammar:
S ::= L = E ;
| { SL } ;
| error ;
SL ::= S ;
| SL S ;
The special token error indicates to the parser what to do in case of invalid syntax for S (an
invalid statement). In this case, it reads all the tokens from the input stream until it finds the
first semicolon. The way the parser handles this is to first push an error state in the stack. In
case of an error, the parser pops out elements from the stack until it finds an error state where
it can proceed. Then it discards tokens from the input until a restart is possible. Inserting
error handling productions in the proper places in a grammar to do good error recovery is
considered very hard.
LR ERROR RECOVERY
An LR parser will detect an error when it consults the parsing action table and finds a blank or
error entry. Errors are never detected by consulting the goto table. An LR parser will detect
an error as soon as there is no valid continuation for the portion of the input thus far scanned.
A canonical LR parser will not make even a single reduction before announcing the error.
SLR and LALR parsers may make several reductions before detecting an error, but they will
never shift an erroneous input symbol onto the stack.
PANIC-MODE ERROR RECOVERY
We can implement panic-mode error recovery by scanning down the stack until a state s with
a goto on a particular nonterminal A is found. Zero or more input symbols are then discarded
until a symbol a is found that can legitimately follow A. The parser then stacks the state
GOTO(s, A) and resumes normal parsing. The situation might exist where there is more than
one choice for the nonterminal A. Normally these would be nonterminals representing major
program pieces, e.g. an expression, a statement, or a block. For example, if A is the
nonterminal stmt, a might be semicolon or }, which marks the end of a statement sequence.
This method of error recovery attempts to eliminate the phrase containing the syntactic error.
The parser determines that a string derivable from A contains an error. Part of that string has
already been processed, and the result of this processing is a sequence of states on top of the
stack. The remainder of the string is still in the input, and the parser attempts to skip over the
remainder of this string by looking for a symbol on the input that can legitimately follow A.
By removing states from the stack, skipping over the input, and pushing GOTO(s, A) on the
stack, the parser pretends that it has found an instance of A and resumes normal parsing.
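In terms of the state/symbol stacks used by the LR driver sketched earlier, panic-mode recovery might look like the following; the choice of "recovery" nonterminals and the sets of symbols that may follow them are supplied by the caller and are assumptions of the sketch.

def panic_mode_recover(stack, symbols, tokens, i, goto_table, recovery_follow):
    # recovery_follow maps a recovery nonterminal A (e.g. 'stmt', 'block') to the
    # set of terminals that may legitimately follow it.
    # Returns the new input index after recovery, or None if recovery is impossible.
    while stack:
        s = stack[-1]
        for A, followers in recovery_follow.items():
            if (s, A) in goto_table:                     # state s has a goto on A
                while i < len(tokens) and tokens[i] not in followers:
                    i += 1                               # discard input that cannot follow A
                if i < len(tokens):
                    symbols.append(A)                    # pretend an instance of A was parsed
                    stack.append(goto_table[(s, A)])     # push GOTO(s, A) and resume
                    return i
        stack.pop()                                      # otherwise scan further down the stack
        if symbols:
            symbols.pop()
    return None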
PHRASE-LEVEL RECOVERY

Phrase-level recovery is implemented by examining each error entry in the LR action
table and deciding, on the basis of language usage, the most likely programmer error
that would give rise to that error. An appropriate recovery procedure can then be
constructed; presumably the top of the stack and/or first input symbol would be
modified in a way deemed appropriate for each error entry. In designing specific
error-handling routines for an LR parser, we can fill in each blank entry in the action
field with a pointer to an error routine that will take the appropriate action selected
by the compiler designer.

The actions may include insertion or deletion of symbols from the stack or the input
or both, or alteration and transposition of input symbols. We must make our choices
so that the LR parser will not get into an infinite loop. A safe strategy will assure that
at least one input symbol will be removed or shifted eventually, or that the stack will
eventually shrink if the end of the input has been reached. Popping a stack state that
covers a nonterminal should be avoided, because this modification eliminates from
the stack a construct that has already been successfully parsed.

*****
