CPSC411 -- Compiler Construction I:

LR(1)  Parsing Techniques


Author: Robin Cockett
Date: Feb 11th 2002



Reading: Louden Ch 5 (very good coverage), Dragon 4.7



Why another parsing technique?




One might think that once one knows how to parse using recursive descent and pushdown automata, one is ready for the world of parsing!  Unfortunately, this is not the case!  In fact, the tools which are most in favor use a rather different parsing technique based on a shift/reduce machine and LR-parsing techniques.

These techniques rely fairly heavily on the "item automaton" associated with a grammar which recognizes "viable prefixes" (both will be explained shortly). The reason these techniques are favored is that there are strict inclusions:

LL(1)  <  LR(1)
LL(2)  <  LR(2)
LL(3)  <  LR(3)
   ...

 .... so that the LR(n) parsing techniques can handle many more grammars than the LL(n) techniques can.

It is generally viewed as desirable to be able to specify a language with a grammar and to actually use that grammar, unmodified, in the parsing process.   This gives a close correspondence between the code and the specification. Thus, the LR parsing techniques, which can handle a larger variety of grammars, have tended to be the techniques of choice on which to build parsing tools.  There are, it is important to point out, exceptions to this rule: for example the ANTLR system uses LL(n) techniques.

The "industry standard" tool Yacc is based on an LR parsing technique (actually LALR parsing).  To use this tool effectively you must know how it works: this is the main purpose of this section.

The "L" in LR is for "Left to right scanning of input" while the "R" is for "Rightmost derivation". To understand the "bottom up" methods of parsing it is worth having some understanding of the second term as it was by examining these derivations that these parsing techniques were discovered.



Derivations again:


 



Given a (context free) grammar G = (T,N,P,s), a derivation from n \in N to \alpha is a sequence of one step applications of the productions:

n =*=> \alpha  ==  n ==> \alpha_1 ==> \alpha_2 ==> ... ==> \alpha_n = \alpha.

Each step \alpha_i ==> \alpha_{i+1} consists of the application of one production p: A -> \beta, which may be written as:

\alpha_i1 [p: A -> \beta] \alpha_i2:
    \alpha_i = \alpha_i1 A \alpha_i2 ==> \alpha_i1 \beta \alpha_i2 = \alpha_{i+1}


where we indicate the position of the application of the production p by sandwiching it between the two strings which are untouched. The strings \alpha_i, which are just lists of terminals and non-terminals, are called sentential forms.

A bottom-up parse can be viewed as a series of one-step derivations in reverse. The specification (as above) of where a one step reverse derivation can be performed from a realizable sentential form to a reachable sentential form is called a handle. Intuitively one should think of a handle as the substring \beta of \alpha_{i+1}, where (of course) the location of \beta in the larger string is also a crucial part of the notion of "substring".

A one step derivation is only of significant interest to us when it can actually occur in a parse.  In other words, the domain (the starting sentential form) must be reachable from the start non-terminal and the codomain (the ending sentential form) must be realizable. These reachability and realizability requirements are important: it may be possible that a production could be applied (in reverse) to a sentential form which is both reachable and realizable, producing an unreachable sentential form. Clearly such a step will not result in a successful parse, and so we want to avoid it.



Rightmost and leftmost derivations:



Suppose now that we have two productions p1: A -> \b and p2: B -> \c; then there are two derivations:

                         \a1 [p1] \a2 B \a3
   \a1 A \a2 B \a3   ----------------------->   \a1 \b \a2 B \a3
          |                                             |
          | \a1 A \a2 [p2] \a3                          | \a1 \b \a2 [p2] \a3
          |                                             |
          v                                             v
   \a1 A \a2 \c \a3  ----------------------->   \a1 \b \a2 \c \a3
                         \a1 [p1] \a2 \c \a3

whose parse trees are the same (and therefore we would like to regard the derivations as being equal). The derivation around the top of the square is a leftmost derivation (as it uses rules with nonterminals to the left first), while the derivation around the bottom is a rightmost derivation (as it uses rules with non-terminals to the right first). Given any derivation
n =*=> \alpha  ==  n ==> \alpha_1 ==> \alpha_2 ==> ... ==> \alpha_n = \alpha

we can transform it into a leftmost or rightmost derivation by looking at successive steps and transforming them into leftmost or rightmost form (respectively):


FACT:
Every derivation has a unique equivalent rightmost and leftmost normal form.



Notice that top-down or predictive parsing techniques (such as LL(1) parsers) will produce a leftmost derivation. However, a bottom up parser will produce a rightmost derivation .... this is because it produces the parse in reverse (bottom up from the left), always reducing at the leftmost handle first. When these reductions are viewed in the correct (forward) order again, one starts with the last reduction performed, which is always a rightmost production step! Hence the derivation is rightmost.



Example: Consider the grammar:

s' -> s.          r_1
s  -> (s)s        r_2
    | .           r_3

which recognizes balanced parentheses. Consider the following derivations of "()" from s':

s' ==[r_1]==> s ==[r_2]==> (s)s ==[(s)r_3]==> (s) ==[(r_3)]==> ()

s' ==[r_1]==> s ==[r_2]==> (s)s ==[(r_3)s]==> ()s ==[()r_3]==> ()

the first is a rightmost derivation while the second is a leftmost derivation. A right sentential form is a sentential form which occurs on a rightmost derivation (and dually a left sentential form is one which occurs on a leftmost derivation).

In an unambiguous grammar each right sentential form must have at most one handle.  We do have to include the start symbol in this statement - it, of course, has no handle.  In fact, every reachable and realizable right sentential form of an unambiguous grammar which is not the start symbol must have exactly one handle. Conversely, when the grammar is ambiguous, there must be a right sentential form which has more than one handle. Our main interest in this course is in unambiguous grammars; thus, we can focus our efforts on finding the unique handle associated with each right sentential form.

Suppose

\alpha_i1 [p: A -> \beta] \alpha_i2:
    \alpha_i = \alpha_i1 A \alpha_i2 ==> \alpha_i1 \beta \alpha_i2 = \alpha_{i+1}

is a derivation step between right sentential forms; then observe that \alpha_i2 must be a list of terminals. To see this, suppose that \alpha_i2 contains a nonterminal: then the act of removing that nonterminal by applying a production can be interchanged with this derivation step, showing that this cannot be a rightmost derivation. We may therefore split a right sentential form into two parts, \alpha_i1 \beta and \alpha_i2; the second part is a list of terminals, and so we may regard it as the part of the input yet to be processed. Prefixes of strings of the form \alpha_i1 \beta are called viable prefixes. Here \beta stands for a handle; thus a viable prefix is a prefix of a right sentential form which does not extend past the right-hand end of its handle.


Definition:
A grammar is LR(k) if, given any right sentential form \alpha \beta \gamma where \gamma is a list of terminals and b -> \beta is a production:
  1. Whether \alpha [b -> \beta] \gamma is a handle can be determined from knowledge of \alpha and (at most) the first k tokens of \gamma,
  2. Every right sentential form has at most one handle.



Thus, by examining a viable prefix and at most the next k symbols of the input, we must be able to decide what to do. One way we could approach this problem is by building a table of viable prefixes against the leading (up to) k input tokens and working out the appropriate actions. Unfortunately, for most grammars this table would be infinite ... so this is not a very practical approach. However, one may begin by wondering whether certain prefixes (with lookahead) might have the same actions associated with them, and whether, if one compacts all this information, one could not present it in a finite manner.

It is an amazing fact, which we shall shortly exploit, that there is an automaton for each k ("the automaton of LR(k) items") which not only recognizes viable prefixes but also allows the determination of whether a grammar is LR(k).

Before we proceed further we should observe that the definition of LR(k) thus stated allows one to conclude that any LL(k) grammar is automatically an LR(k) grammar: in an LL(k) parse one is required to know not only what the next handle is, but also the whole parse, at that stage, from the start nonterminal (i.e. considerably more).

Before we look at the LR(k) item automata it is useful to have a look at the (nondeterministic) parsing technique they will have to guide. This is the so-called shift-reduce parsing.





 Shift-reduce parsing




The most common bottom-up parsing technique uses shift-reduce parsing. This is organized as follows: there are two stacks, one of which is the input, the other of which is called the parsing stack. This latter can hold a list of terminals and non-terminals. At each step of the parse one of four actions is possible (a code sketch of these moves follows the list):
  1. Shift: move the current input token to the top of the parse stack.
    shift[t]: (\alpha, t \gamma) |--> (\alpha t, \gamma)
  2. Reduce: a "handle" is identified on the top of the parse stack and is removed and replaced by the LHS non-terminal.
    reduce[a -> \b]: (\alpha \b, \gamma) |--> (\alpha a, \gamma)
  3. Accept: accept the string (the start symbol alone is on the stack and the input is exhausted).
    accept: (start, \epsilon) |--> STOP
  4. Error: reject the parse.
    reject: (\alpha, \gamma) |--> FAIL
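
To make these moves concrete, here is a minimal sketch in Haskell of the shift and reduce moves acting on configurations (accept and error are then just tests on a configuration). The Symbol type and the function names are my own illustrative choices:

import Data.List (isSuffixOf)

data Symbol = T String | N String deriving (Eq, Show)

-- a configuration is (parse stack, remaining input)
type Config = ([Symbol], [Symbol])

-- shift[t]: (\alpha, t \gamma) |--> (\alpha t, \gamma)
shift :: Config -> Maybe Config
shift (stack, t : gamma) = Just (stack ++ [t], gamma)
shift _                  = Nothing

-- reduce[a -> \b]: (\alpha \b, \gamma) |--> (\alpha a, \gamma)
-- the handle \b must sit on top of the parse stack
reduceBy :: (Symbol, [Symbol]) -> Config -> Maybe Config
reduceBy (lhs, rhs) (stack, gamma)
  | rhs `isSuffixOf` stack =
      Just (take (length stack - length rhs) stack ++ [lhs], gamma)
  | otherwise = Nothing

-- e.g. in the example below, reduceBy (N "s", []) performs the empty
-- reduction r_3, turning the stack "(" into "( s"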
     

Example:  Consider again the grammar:

s' -> s           r_1
s  -> (s)s        r_2
    |             r_3

which recognizes balanced parentheses, and consider the parse of "()" using a shift-reduce parser.

Parse Stack     Input     Action
                "()"      shift[(]
(               ")"       reduce(r_3)
( s             ")"       shift[)]
( s )           ""        reduce(r_3)
( s ) s         ""        reduce(r_2)
s               ""        reduce(r_1)
s'              ""        ACCEPT




Clearly (as for the pushdown automata) the difficulty with this parsing technique is deciding what to do next.  Shortly we shall reduce this to the problem of building a table.  However, this requires a preliminary step.  In a shift-reduce parser, we noted, the parse stack consists at each stage of a ``viable prefix'', that is, the beginning of a right sentential form up to (and possibly including) the handle. The amazing fact which we now exploit is that for each context free grammar there is an automaton which "accepts" these prefixes. This automaton is the essential ingredient of all the LR parsing techniques.





The LR(0) item automaton





Given a context free grammar G with start non-terminal s, we start (for convenience) by adding a new start symbol s' together with the production s' -> s, where s' is a new non-terminal. This gives us a grammar G' with a "separated" start state s'.

The states of the \epsilon-NFA of items for G' consist of:

{ a -> \alpha . \beta | a -> \alpha ++ \beta is a production of G' }

where the dot indicates how we have broken up the righthand side and ++ stands for concatenation. The one special case we should be aware of is the item corresponding to an empty production: there is only one, and it is written

a -> .

which stands for a -> \epsilon . \epsilon.

There are two sorts of transitions between these items:
  1. Shift transitions:
    [ a -> \a . x \b ] --x--> [ a -> \a x . \b ]
  2. Expansion transitions: (these are silent, and x must be a non-terminal)
    [ a -> \a . x \b ] ~~~~~> [ x -> . \d ]
The start state is the item [s' -> . s]. Any string which legally drives this automaton is a viable prefix. The states which indicate there is a handle on the top of the stack are those which contain a completed item: an item whose "dot" is at the righthand end. The significance of such an item is that when the automaton is in this state we know that a handle (a place where a reverse derivation can be performed) has potentially been encountered.  Thus, in the shift-reduce parser there is an opportunity for a reduction.
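
The two sorts of transitions can be sketched in Haskell: the silent expansions are computed as a closure of a set of items, and the shift step moves the dot. The grammar representation and the names are my own choices:

import Data.List (nub)

data Sym = T Char | N Char deriving (Eq, Show)

type Production = (Char, [Sym])       -- lhs non-terminal and rhs
data Item = Item Char [Sym] [Sym]     -- lhs, before the dot, after the dot
  deriving (Eq, Show)

-- expansion: an item with the dot before a non-terminal x silently
-- reaches every item [x -> . \d]; iterate until nothing new appears
closure :: [Production] -> [Item] -> [Item]
closure prods items
  | items' == items = items
  | otherwise       = closure prods items'
  where
    items' = nub (items ++ concatMap expand items)
    expand (Item _ _ (N x : _)) = [ Item x [] rhs | (lhs, rhs) <- prods, lhs == x ]
    expand _                    = []

-- shift: move the dot over the grammar symbol s; closing the result
-- gives a state of the corresponding deterministic automaton
shiftOn :: [Production] -> [Item] -> Sym -> [Item]
shiftOn prods items s =
  closure prods [ Item a (before ++ [s]) after
                | Item a before (x : after) <- items, x == s ]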


Example: Consider the grammar: 
a -> ( a )
       | A
we may start by adding a new start state to get:
 
a' -> a
a -> ( a )
       | A

The items are:

a' -> . a
a' -> a .       (completed)
a  -> . (a)
a  -> ( . a)
a  -> (a . )
a  -> (a) .     (completed)
a  -> . A
a  -> A .       (completed)

Part of the item automaton is shown below (shift transitions only are marked - you must add the expansion transitions from the starred states).

[ a' -> . a ]* --a--> [ a' -> a . ]

[ a -> . (a) ] --(--> [ a -> ( . a) ]* --a--> [ a -> (a . ) ] --)--> [ a -> (a) . ]

[ a -> . A ] --A--> [ a -> A . ]

I claim that this item automaton recognizes precisely the viable prefixes in a grammar which has every non-terminal reachable and realizable. The deterministic automaton which corresponds to this has states:

S0 =
a' ->  . a
-------
a -> . A
a ->  . ( a )
S1* =
a' -> a .
------
S2 =
a -> A .
------
S3 =
a  -> ( . a )
--------
a -> . A
a -> . ( a )
S4 =
a -> ( a . )
-------
S5* =
a -> ( a ) .


with transitions:

a: S0 ---> S1
A: S0 ---> S2
(: S0 ---> S3
(: S3 ---> S3
a: S3 ---> S4
A: S3 ---> S2
): S4 ---> S5

The states starred have reduce actions associated with them: namely a reduction by the completed item they contain.

Now the next question is how we can use the information the automaton is giving us to help the shift-reduce parsing. To illustrate this we shall hold on the parse stack not only the viable prefix but also the states of the item automaton it drives.
Consider the parse of "((A))":

STACK                INPUT      ACTION
S0 ---[]---> S0      "((A))"    shift
S0 --[(]---> S3      "(A))"     shift
S0 --[((]--> S3      "A))"      shift
S0 --[((A]-> S2      "))"       reduce[a -> A]
S0 --[((a]-> S4      "))"       shift
S0 -[((a)]-> S5*     ")"        reduce[a -> (a)]


At each stage we follow the transitions indicated in the machine (thus we shift when there is an appropriate transition to follow). When we reach a starred state we can initiate a reduce: this means rolling back the stack to where it was before the handle, and then going forward by the non-terminal at the head of the rule by which one is reducing:

S0 --[(a]--> S4      ")"        shift
S0 -[(a)]--> S5*     ""         reduce[a -> (a)]
S0 ---[a]--> S1*     ""         reduce[a' -> a] = accept

Acceptance is reached when one wishes to reduce by the initial rule [a' -> a] (which was artificially added) and the input is empty.
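
Such a guided parse can be run mechanically once the transitions and the reductions of the starred states are tabulated. Here is a sketch in Haskell for this grammar; the numbering of the states S0..S5 follows the text, but the encodings and names are my own. The stack holds just the states traversed, so rolling back over a handle is simply popping:

data Sym = LP | RP | TokA | NTa deriving (Eq, Show)

type State = Int

-- the transitions of the deterministic item automaton above
trans :: State -> Sym -> Maybe State
trans 0 NTa  = Just 1
trans 0 TokA = Just 2
trans 0 LP   = Just 3
trans 3 LP   = Just 3
trans 3 NTa  = Just 4
trans 3 TokA = Just 2
trans 4 RP   = Just 5
trans _ _    = Nothing

-- the reduce actions of the starred states: lhs and handle length
reduceIn :: State -> Maybe (Sym, Int)
reduceIn 2 = Just (NTa, 1)            -- a -> A
reduceIn 5 = Just (NTa, 3)            -- a -> ( a )
reduceIn _ = Nothing

-- reducing rolls the stack back over the handle and then goes forward
-- by the non-terminal at the head of the rule
parse :: [State] -> [Sym] -> Bool
parse (s : ss) input
  | s == 1, null input = True         -- reduce by [a' -> a]: accept
  | Just (lhs, n) <- reduceIn s =
      case drop (n - 1) ss of
        stack'@(t : _) | Just s' <- trans t lhs -> parse (s' : stack') input
        _                                       -> False
  | (tok : rest) <- input, Just s' <- trans s tok = parse (s' : s : ss) rest
  | otherwise = False
parse _ _ = False

main :: IO ()
main = print (parse [0] [LP, LP, TokA, RP, RP])   -- "((A))" parses: True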



A grammar is LR(0) if and only if the shift reduce parser can be unambiguously guided by the LR(0) item automaton (in the manner described above). It should be noted that most grammars are not LR(0)!

The example above, however, is LR(0). One can tell whether a grammar is LR(0) by inspecting the LR(0) item automaton: a grammar is LR(0) if and only if all completed items are in singleton states.  This means we can unambiguously associate the reduce action with that state.


Exercises:
  1.  Show that
    e -> e ADD t
           | t.
    t -> f MUL t
          | f.
    f -> VAR.
    is not an LR(0) grammar.
  2. Transform this into an LL(1) grammar (by removing left recursion and factoring) to get:
    e -> t e+.
    e+ -> ADD t e+
         |.
    t -> f t+.
    t+ -> MUL f t+
            |.
    f --> VAR.
    show that this grammar is not LR(0). Conclude that LR(0) does not include LL(1). 
  3. Is LR(0) included in LL(1)? Give a counter-example! 
  4. Is the grammar
    S -> ( S )
            |.
    LR(0)?




SLR(1) grammars




It is clear that LR(0) grammars are not very powerful. However, we have not used the capability to look ahead in the input string at all. Consider the grammar:

S -> ( S )
        |.

This is not LR(0) as the states of the item automaton are:

S0 =
s' -> . s
-------
s -> . (s)
s -> .
S1 =
s' -> s .
-----
S2 =
s -> ( . s)
------
s -> . (s)
s -> .
S3 =
s -> (s . )
------
S4 =
s -> (s) .
-------



with the transitions:

s: S0 ---> S1
(: S0 ---> S2
(: S2 ---> S2
s: S2 ---> S3
): S3 ---> S4

It is not LR(0) as the completed item s -> . occurs in non-singleton states. In fact we say that S0 and S2 have shift/reduce conflicts as we do not know whether to shift or reduce in these states. Similarly grammars can have reduce/reduce conflicts when two completed items occur in the same state.  

However, notice that to complete a successful parse, when we reduce by the rule [s -> ] the next token must be in the follow set of s (or, if s is endable, we may instead have reached the end of the input). Now s is endable and only ) is in its follow set. We have the option of reducing by [s -> ] in states S0 and S2; however, we can only shift out of these states when the next token is (. So, taking this information into account, we can again tell whether we should shift or reduce by inspecting the next symbol.

A grammar is said to be SLR(1) (or simple LR with a one token lookahead) if the shift/reduce (and reduce/reduce) conflicts can be resolved by using the follow sets as above.

More precisely:

Definition:
A grammar is an SLR(1) grammar if and only if, for any state S in the LR(0)-item automaton of the grammar, the following two conditions are satisfied:
  1. For any item [a -> \alpha . X \beta] in S (where X is a terminal) there is no completed item [b -> \delta .] in S with X in follow(b),
  2. For any two distinct completed items [b -> \delta .] and [b' -> \delta' .] in S, the sets follow(b) and follow(b') are disjoint.
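
These two conditions translate directly into a predicate on a single state of the LR(0) item automaton. A sketch in Haskell, assuming the follow sets are supplied as a function; the item representation and names are my own:

import Data.List (intersect, tails)

data Sym  = Term String | NonTerm String deriving (Eq, Show)
data Item = Item String [Sym] [Sym]     -- lhs, before the dot, after the dot
  deriving (Eq, Show)

-- condition 1: no shiftable terminal lies in the follow set of the lhs
-- of a completed item (no shift/reduce conflict);
-- condition 2: the follow sets of distinct completed items are pairwise
-- disjoint (no reduce/reduce conflict)
slr1State :: (String -> [String]) -> [Item] -> Bool
slr1State follow items = noShiftReduce && noReduceReduce
  where
    shiftable      = [ x   | Item _ _ (Term x : _) <- items ]
    reducible      = [ lhs | Item lhs _ []         <- items ]
    noShiftReduce  = null (shiftable `intersect` concatMap follow reducible)
    noReduceReduce = and [ null (follow b `intersect` follow b')
                         | (b : rest) <- tails reducible, b' <- rest ]

A grammar is then SLR(1) exactly when this predicate holds at every state of its automaton.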



The parenthesis grammar above is an example of an SLR(1) grammar. However, SLR(1) is still not a very large class of grammars:


Exercise:
  1. Show that
    e -> e ADD t
           | t.
    t -> t MUL f
           | f.
    f -> VAR.
    is an SLR(1) grammar. (See dragon book and class notes) 
  2. Show that the following grammar is not SLR(1):
    stmt -> call_stmt
                | assign_stmt.
    call_stmt -> ID.
    assign_stmt -> var ASSIGN exp.
    var -> var LSPAR exp RSPAR
               | ID.
    exp -> var | NUM. 
  3. Is LL(1) included in SLR(1)? Consider the following grammar:
    s -> a A a B
           | b B b A.
    a -> .
    b -> .
     





LR(1) grammars


An LR(1) item consists of an LR(0) item paired with a lookahead token. Thus, the states of the \epsilon-NFA of items for G' consist of:

{ (A -> \alpha . \beta, t) | A -> \alpha ++ \beta is a production of G' and t is a token }

where ++ stands for concatenation, as usual.

There are, as before, two sorts of transitions between these items:
  1. Shift transitions:
    [ (A -> \a . X \b, t) ] --X--> [ (A -> \a X . \b, t) ]
  2. Expansion transitions: (silent transitions)
    [ (A -> \a . X \b, t) ] ~~~~~> [ (X -> . \d, t') ]
    where t' is in First(\b t). (N.B. here we are allowing the end-of-input symbol $ to be in the follow set.)
The start state is the item [(S' -> . S, $)]. As for the LR(0)-item automaton, the states of the LR(1)-item automaton which signal a reduction are those containing an item whose first coordinate is a completed item; with such a state we associate a reduce action which depends on the next token, the second coordinate.
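
The expansion step with lookaheads can again be sketched as a closure computation, extending the LR(0) sketch given earlier; here firstOf is an assumed helper computing First of a sentential form (with $ allowed as a token):

import Data.List (nub)

data Sym = T String | N String deriving (Eq, Show)

type Production = (String, [Sym])
data Item1 = Item1 String [Sym] [Sym] String   -- an LR(0) item plus a lookahead token
  deriving (Eq, Show)

closure1 :: [Production] -> ([Sym] -> [String]) -> [Item1] -> [Item1]
closure1 prods firstOf items
  | items' == items = items
  | otherwise       = closure1 prods firstOf items'
  where
    items' = nub (items ++ concatMap expand items)
    -- [(A -> \a . X \b, t)] ~~~> [(X -> . \d, t')] for each t' in First(\b t)
    expand (Item1 _ _ (N x : beta) t) =
      [ Item1 x [] rhs t'
      | (lhs, rhs) <- prods, lhs == x
      , t' <- firstOf (beta ++ [T t]) ]
    expand _ = []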

We may inspect the deterministic version of this automaton to determine whether the grammar is LR(1): it is LR(1) precisely when there are no reduce/reduce or shift/reduce conflicts. Here a shift/reduce conflict happens when a completed item in a state has its "follow token" also occurring as a shift out of that state. A reduce/reduce conflict occurs when two completed items in the same state have the same "follow token". As discussed earlier, any LL(1) grammar is necessarily LR(1).  Many practical programming grammars are LR(1).





LALR(1) grammars



 

The problem with LR(1) parsing is that the item automaton can (potentially) get very large. For this reason there is a tendency to use LALR(1) grammars. These use the states of the deterministic LR(0) item automaton but propagate sets of possible follow tokens to the items of these states. Thus the states of the LALR(1) item automaton consist of sets of items, each paired with a set of follow tokens.

The sets of follow tokens must be generated by calculating what must be in them. This involves a fixed point calculation where one successively adds tokens into these sets until no more additions happen.

A shift requires that all the follow tokens are transmitted to the target item. An expansion requires one to add in all the possible first tokens of the following sentential forms:
  1. Shift transitions:
    [ (A -> \a . X \b, T) ] --X--> [ (A -> \a X . \b, T') ]
    where T must be contained in T' (all the follow tokens of the source item must be in the target item).
  2. Expansion transitions:
    [ (A -> \a . X \b, T) ] ~~~~~> [ (X -> . \d, T') ]
    where T' must contain { t' | t' in First(\b t) for some t in T }.
Now if the automaton has cycles it may be necessary to propagate the follow sets several times round the automaton before they reach their least fixed point. (Rewording prompted by the Red Philosopher!)
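
The propagation itself is a standard least-fixed-point computation. A sketch, assuming the shift and expansion transitions between items have been flattened into edges (source, target, inject), where inject is the identity for a shift and the First-based map above for an expansion; all names are illustrative:

import qualified Data.Map as M
import qualified Data.Set as S

type ItemId = Int                          -- an LR(0) item within its state
type Lookaheads = M.Map ItemId (S.Set String)
type Edge = (ItemId, ItemId, S.Set String -> S.Set String)

-- one pass: every edge pushes (a function of) the source item's follow
-- tokens into the target item's set
pass :: [Edge] -> Lookaheads -> Lookaheads
pass edges la = foldr add la edges
  where add (src, dst, inject) m =
          M.insertWith S.union dst (inject (M.findWithDefault S.empty src la)) m

-- iterate until no follow set grows any further; around a cycle this
-- may indeed take several rounds
propagate :: [Edge] -> Lookaheads -> Lookaheads
propagate edges la
  | la' == la = la
  | otherwise = propagate edges la'
  where la' = pass edges la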


Example:
 
s -> a A
      | B a C
      | D C
      | B D A.
a -> D.

This grammar is LALR(1) but it is not SLR(1) nor is it LL(1). Only s is endable and the first and follow sets are:
Follow(s) = {}
Follow(a) = { A, C}
First(s) = { B, D}
First(a) = { D}

The LR(0) item automaton has states:

S0 =
s' -> . s
------
s -> . a A
s -> . B a C
s -> . D C
s -> . B D A
a -> . D

S1* =
s' -> s .
------
S2 =
a -> D .
s -> D . C
--------

S3 =
s -> B . a C
s -> B . D A
---------
a -> .D

S4* =
s -> D C .
-------
S5* =
s -> B D . A
--------
a -> D .
S6* =
s -> B D A.
---------
S7 =
s -> B a . C
---------
S8* =
s -> B a C .
---------
S9 =
s -> a . A
--------
S10* =
s -> a A .
-------



The transitions are:

s: S0 ---> S1
a: S0 ---> S9
B: S0 ---> S3
D: S0 ---> S2
C: S2 ---> S4*
a: S3 ---> S7
D: S3 ---> S5
A: S5 ---> S6
C: S7 ---> S8
A: S9 ---> S10

There are a number of shift/reduce conflicts. Consider S2: the follow set of a includes C, so we cannot resolve this conflict using the SLR(1) technique. However, let us propagate the follow sets to the items of this automaton. This gives:

S0 =
s' -> . s    $
------
s -> . a A     $
s -> . B a C   $
s -> . D C      $
s -> . B D A   $
a -> . D         A

S1* =
s' -> s .   $
------
S2 =
a -> D .         A
s -> D . C     $
--------

S3 =
s -> B . a C     $
s -> B . D A    $
---------
a -> .D           C

S4* =
s -> D C .    $
-------
S5* =
s -> B D . A     $
--------
a -> D .          C
S6* =
s -> B D A.      $
---------
S7 =
s -> B a . C      $
---------
S8* =
s -> B a C .      $
---------
S9 =
s -> a . A     $
--------
S10* =
s -> a A .       $
-------




Notice how the states with the completed item [a -> D.] separate the global follow set of a into two components. This separation is sufficient to disambiguate when a reduction is appropriate.

LALR(1) grammars cover many of the grammars that one usually meets in the course of developing a programming language.

Yacc recognizes LALR(1) grammars.




Consider the grammar:

e -> e ADD t
       | t.
t -> t MUL f
       | f.
f -> VAR.

Start by adding a separated start state:

e' -> e
e -> e ADD t
       | t.
t -> t MUL f
       | f.
f -> VAR.


The vital statistics are:

NT    First    Follow
f     {VAR}    {MUL, ADD, $}
t     {VAR}    {MUL, ADD, $}
e     {VAR}    {ADD, $}
e'    {VAR}    {$}

(everything is endable)
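
These sets can be computed by the usual fixed point iteration. A sketch in Haskell for this particular grammar; since it has no empty productions, First of a sentential form is determined by its head symbol, which simplifies both computations (the representation choices are mine):

import qualified Data.Map as M
import qualified Data.Set as S

data Sym = T String | N String deriving (Eq, Show)

prods :: [(String, [Sym])]
prods = [ ("e'", [N "e"])
        , ("e",  [N "e", T "ADD", N "t"]), ("e", [N "t"])
        , ("t",  [N "t", T "MUL", N "f"]), ("t", [N "f"])
        , ("f",  [T "VAR"]) ]

type Sets = M.Map String (S.Set String)

fixpoint :: Eq a => (a -> a) -> a -> a
fixpoint f x = let x' = f x in if x' == x then x else fixpoint f x'

-- First(n): with no empty productions only the head of each rhs matters
firstSets :: Sets
firstSets = fixpoint step (M.fromList [ (n, S.empty) | (n, _) <- prods ])
  where
    step m = foldr add m prods
    add (lhs, rhs) m = M.adjust (S.union (hd m rhs)) lhs m
    hd _ (T t : _) = S.singleton t
    hd m (N n : _) = M.findWithDefault S.empty n m
    hd _ []        = S.empty

-- Follow(b): for each occurrence a -> ... b rest, Follow(b) gains
-- First(rest), or Follow(a) when rest is empty; $ is seeded at e'
followSets :: Sets
followSets = fixpoint step initial
  where
    initial = M.insert "e'" (S.singleton "$")
                       (M.fromList [ (n, S.empty) | (n, _) <- prods ])
    step m = foldr add m prods
    add (lhs, rhs) m = foldr (occ lhs) m [ (x, rest) | (x : rest) <- suffixes rhs ]
    occ lhs (N b, rest) m = M.adjust (S.union contrib) b m
      where contrib | null rest = M.findWithDefault S.empty lhs m
                    | otherwise = hd rest
    occ _ _ m = m
    hd (T t : _) = S.singleton t
    hd (N n : _) = M.findWithDefault S.empty n firstSets
    hd []        = S.empty
    suffixes xs = case xs of { [] -> []; (_ : ys) -> xs : suffixes ys }

main :: IO ()
main = do print firstSets    -- matches the First column above
          print followSets   -- matches the Follow column above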

The deterministic item automaton has the following states (completed items starred!):

 s0 =  e' -> .e
--------
e -> .e ADD t
e -> .t
t -> .t MUL f
 t -> .f
f -> .VAR

s1 = e' -> e. *
e -> e.ADD t
------------

s2 = e -> e ADD.t
------------
t -> .t MUL f
t -> .f
f -> .VAR

s3 = e -> e ADD t. *
 t -> t.MUL f
-------------

s4 = t -> t MUL.f
------------
f -> .VAR

s5 = f -> VAR. *
 ----------

s6 = e -> t. *
 t -> t.MUL f
------------

s7 = t -> f. *
--------

s8 = t -> t MUL f. *
-------------


Transitions:
(s0, e, s1)
(s0, t, s6)
 (s0, f, s7)
(s0, VAR,s5)
(s1, ADD, s2)
(s2, t, s3)
(s2, f, s7)
(s2, VAR, s5)
(s3, MUL, s4)
(s4,VAR,s5)
(s4, f, s8)
(s6, MUL, s4)

Is the grammar LR(0)?

No: there are shift/reduce conflicts in the following states: { s1, s3, s6 }.

Is the grammar SLR(1)?

That is, can the follow set of the non-terminal to which one is reducing help to disambiguate whether one should shift or not? Let us consider each state in turn:

s1: The conflict is between reduce[e' -> e] and shift[ADD]. follow(e') = {$}, so the only time one should reduce is when one has reached the end of the input. Simple lookahead resolves this.

s3: The conflict is between reduce[e -> e ADD t] and shift[MUL]. follow(e) = {ADD, $}, so one should reduce only if one is at the end of the input or the next symbol is an ADD. Simple lookahead resolves this.

s6: The conflict is between reduce[e -> t] and shift[MUL]. Again follow(e) = {ADD, $} does not contain MUL, so simple lookahead resolves the conflict.

So the grammar is SLR(1)!

Let us do a shift/reduce parse:

STACK                                INPUT                            ACTION
s0                                   "VAR ADD VAR MUL VAR ADD VAR"    shift
s0[VAR]s5                            "ADD VAR MUL VAR ADD VAR"        reduce[f -> VAR]
s0[f]s7                              "ADD VAR MUL VAR ADD VAR"        reduce[t -> f]
s0[t]s6                              "ADD VAR MUL VAR ADD VAR"        reduce[e -> t]       (follow(e) = {ADD,$})
s0[e]s1                              "ADD VAR MUL VAR ADD VAR"        shift[ADD]           (follow(e') = {$})
s0[e]s1[ADD]s2                       "VAR MUL VAR ADD VAR"            shift[VAR]
s0[e]s1[ADD]s2[VAR]s5                "MUL VAR ADD VAR"                reduce[f -> VAR]
s0[e]s1[ADD]s2[f]s7                  "MUL VAR ADD VAR"                reduce[t -> f]
s0[e]s1[ADD]s2[t]s3                  "MUL VAR ADD VAR"                shift[MUL]           (follow(e) = {ADD,$})
s0[e]s1[ADD]s2[t]s3[MUL]s4           "VAR ADD VAR"                    shift[VAR]
s0[e]s1[ADD]s2[t]s3[MUL]s4[VAR]s5    "ADD VAR"                        reduce[f -> VAR]
s0[e]s1[ADD]s2[t]s3[MUL]s4[f]s8      "ADD VAR"                        reduce[t -> t MUL f]
s0[e]s1[ADD]s2[t]s3                  "ADD VAR"                        reduce[e -> e ADD t]
s0[e]s1                              "ADD VAR"                        shift[ADD]           (follow(e') = {$})
s0[e]s1[ADD]s2                       "VAR"                            shift[VAR]
s0[e]s1[ADD]s2[VAR]s5                ""                               reduce[f -> VAR]
s0[e]s1[ADD]s2[f]s7                  ""                               reduce[t -> f]
s0[e]s1[ADD]s2[t]s3                  ""                               reduce[e -> e ADD t]
s0[e]s1                              ""                               accept               (follow(e') = {$})