cscg24-lolpython

CSCG 2024 Challenge 'Can I Haz Lolpython?'
git clone https://git.sinitax.com/sinitax/cscg24-lolpython

CHANGES


      1Version 2.2
      2------------------------------
      311/01/06: beazley
      4          Added lexpos() and lexspan() methods to grammar symbols.  These
      5          mirror the functionality of lineno() and linespan().  For
      6          example:
      7
      8          def p_expr(p):
      9              'expr : expr PLUS expr'
     10               p.lexpos(1)     # Lexing position of left-hand-expression
     11               p.lexpos(2)     # Lexing position of PLUS
     12               start,end = p.lexspan(3)  # Lexing range of right hand expression
     13
     1411/01/06: beazley
     15          Minor change to error handling.  The recommended way to skip characters
     16          in the input is to use t.lexer.skip() as shown here:
     17
     18             def t_error(t):
     19                 print "Illegal character '%s'" % t.value[0]
     20                 t.lexer.skip(1)
     21          
     22          The old approach of just using t.skip(1) will still work, but won't
     23          be documented.
     24
     2510/31/06: beazley
     26          Discarded tokens can now be specified as simple strings instead of
     27          functions.  To do this, simply include the text "ignore_" in the
     28          token declaration.  For example:
     29
     30              t_ignore_cppcomment = r'//.*'
     31          
     32          Previously, this had to be done with a function.  For example:
     33
     34              def t_ignore_cppcomment(t):
     35                  r'//.*'
     36                  pass
     37
     38          If start conditions/states are being used, state names should appear
     39          before the "ignore_" text.
     40
     4110/19/06: beazley
     42          The Lex module now provides support for flex-style start conditions
     43          as described at http://www.gnu.org/software/flex/manual/html_chapter/flex_11.html.
     44          Please refer to this document to understand this change note.  Refer to
     45          the PLY documentation for PLY-specific explanation of how this works.
     46
     47          To use start conditions, you first need to declare a set of states in
     48          your lexer file:
     49
     50          states = (
     51                    ('foo','exclusive'),
     52                    ('bar','inclusive')
     53          )
     54
     55          This serves the same role as the %s and %x specifiers in flex.
     56
     57          Once a state has been declared, tokens for that state can be
     58          declared by defining rules of the form t_state_TOK.  For example:
     59
     60            t_PLUS = r'\+'         # Rule defined in INITIAL state
     61            t_foo_NUM = r'\d+'     # Rule defined in foo state
     62            t_bar_NUM = r'\d+'     # Rule defined in bar state
     63
     64            t_foo_bar_NUM = r'\d+' # Rule defined in both foo and bar
     65            t_ANY_NUM = r'\d+'     # Rule defined in all states
     66
     67          In addition to defining tokens for each state, the t_ignore and t_error
     68          specifications can be customized for specific states.  For example:
     69
     70            t_foo_ignore = " "     # Ignored characters for foo state
     71            def t_bar_error(t):   
     72                t.lexer.skip(1)    # Handle errors in bar state
     73
     74          With token rules, the following methods can be used to change states:
     75          
     76            def t_TOKNAME(t):
     77                t.lexer.begin('foo')        # Begin state 'foo'
     78                t.lexer.push_state('foo')   # Begin state 'foo', push old state
     79                                            # onto a stack
     80                t.lexer.pop_state()         # Restore previous state
     81                t.lexer.current_state()     # Returns name of current state
     82
     83          These methods mirror the BEGIN(), yy_push_state(), yy_pop_state(), and
     84          yy_top_state() functions in flex.
     85
     86          Start states can be used as one way to write sub-lexers.
     87          For example, the parser might instruct the lexer to start
     88          generating a different set of tokens depending on the context.
     89          
     90          example/yply/ylex.py shows the use of start states to grab C/C++ 
     91          code fragments out of traditional yacc specification files.
     92
     93          *** NEW FEATURE *** Suggested by Daniel Larraz with whom I also
     94          discussed various aspects of the design.
     95
     9610/19/06: beazley
     97          Minor change to the way in which yacc.py was reporting shift/reduce
     98          conflicts.  Although the underlying LALR(1) algorithm was correct,
     99          PLY was under-reporting the number of conflicts compared to yacc/bison
    100          when precedence rules were in effect.  This change should make PLY
    101          report the same number of conflicts as yacc.
    102
    10310/19/06: beazley
    104          Modified yacc so that grammar rules could also include the '-' 
    105          character.  For example:
    106
    107            def p_expr_list(p):
    108                'expression-list : expression-list expression'
    109
    110          Suggested by Oldrich Jedlicka.
    111
    11210/18/06: beazley
    113          Attribute lexer.lexmatch added so that token rules can access the re 
    114          match object that was generated.  For example:
    115
    116          def t_FOO(t):
    117              r'some regex'
    118              m = t.lexer.lexmatch
    119              # Do something with m
    120
    121
    122          This may be useful if you want to access named groups specified within
    123          the regex for a specific token. Suggested by Oldrich Jedlicka.
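
                 For example, a named group given in a token's regex might be
                 pulled out of the match object like this (a minimal sketch; the
                 rule and group names are hypothetical):

                     def t_QUOTED(t):
                         r'(?P<q>[\'"]).*?(?P=q)'
                         t.value = t.lexer.lexmatch.group('q')   # the quote character
                         return t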
    124          
    12510/16/06: beazley
    126          Changed the error message that results if an illegal character
    127          is encountered and no default error function is defined in lex.
    128          The exception is now more informative about the actual cause of
    129          the error.
    130      
    131Version 2.1
    132------------------------------
    13310/02/06: beazley
    134          The last Lexer object built by lex() can be found in lex.lexer.
    135          The last Parser object built by yacc() can be found in yacc.parser.
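
                 For example (a minimal sketch, assuming token and grammar rules
                 are defined in the calling module):

                     import ply.lex as lex
                     import ply.yacc as yacc

                     lex.lex()
                     yacc.yacc()

                     lexer  = lex.lexer     # last Lexer built by lex()
                     parser = yacc.parser   # last Parser built by yacc()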
    136
    13710/02/06: beazley
    138          New example added:  examples/yply
    139
    140          This example uses PLY to convert Unix-yacc specification files to
    141          PLY programs with the same grammar.   This may be useful if you
    142          want to convert a grammar from bison/yacc to use with PLY.
    143    
    14410/02/06: beazley
    145          Added support for a start symbol to be specified in the yacc
    146          input file itself.  Just do this:
    147
    148               start = 'name'
    149
    150          where 'name' matches some grammar rule.  For example:
    151
    152               def p_name(p):
    153                   'name : A B C'
    154                   ...
    155
    156          This mirrors the functionality of the yacc %start specifier.
    157
    15809/30/06: beazley
    159          Some new examples added:
    160
    161          examples/GardenSnake : A simple indentation based language similar
    162                                 to Python.  Shows how you might handle 
    163                                 whitespace.  Contributed by Andrew Dalke.
    164
    165          examples/BASIC       : An implementation of 1964 Dartmouth BASIC.
    166                                 Contributed by Dave against his better
    167                                 judgement.
    168
    16909/28/06: beazley
    170          Minor patch to allow named groups to be used in lex regular
    171          expression rules.  For example:
    172
    173              t_QSTRING = r'''(?P<quote>['"]).*?(?P=quote)'''
    174
    175          Patch submitted by Adam Ring.
    176 
    17709/28/06: beazley
    178          LALR(1) is now the default parsing method.   To use SLR, use
    179          yacc.yacc(method="SLR").  Note: there is no performance impact
    180          on parsing when using LALR(1) instead of SLR. However, constructing
    181          the parsing tables will take a little longer.
    182
    18309/26/06: beazley
    184          Change to line number tracking.  To modify line numbers, modify
    185          the line number of the lexer itself.  For example:
    186
    187          def t_NEWLINE(t):
    188              r'\n'
    189              t.lexer.lineno += 1
    190
    191          This modification is both cleanup and a performance optimization.
    192          In past versions, lex was monitoring every token for changes in
    193          the line number.  This extra processing is unnecessary for a vast
    194          majority of tokens. Thus, this new approach cleans it up a bit.
    195
    196          *** POTENTIAL INCOMPATIBILITY ***
    197          You will need to change code in your lexer that updates the line
    198          number. For example, "t.lineno += 1" becomes "t.lexer.lineno += 1"
    199         
    20009/26/06: beazley
    201          Added the lexing position to tokens as an attribute lexpos. This
    202          is the raw index into the input text at which a token appears.
    203          This information can be used to compute column numbers and other
    204          details (e.g., scan backwards from lexpos to the first newline
    205          to get a column position).
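
                 For example, a column number might be computed along these lines
                 (a minimal sketch; the helper name is hypothetical):

                     def find_column(input, token):
                         # Scan backwards from lexpos to the most recent newline
                         last_cr = input.rfind('\n', 0, token.lexpos)
                         return token.lexpos - last_cr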
    206
    20709/25/06: beazley
    208          Changed the name of the __copy__() method on the Lexer class
    209          to clone().  This is used to clone a Lexer object (e.g., if
    210          you're running different lexers at the same time).
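
                 For example (a minimal sketch, given an already-built lexer):

                     lexer = lex.lex()
                     other = lexer.clone()         # independent copy of the lexer

                     lexer.input("some text")
                     other.input("other text")     # each copy tracks its own position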
    211
    21209/21/06: beazley
    213          Limitations related to the use of the re module have been eliminated.
    214          Several users reported problems with regular expressions exceeding
    215          100 named groups. To solve this, lex.py is now capable
    216          of automatically splitting its master regular expression into
    217          smaller expressions as needed.   This should, in theory, make it
    218          possible to specify an arbitrarily large number of tokens.
    219
    22009/21/06: beazley
    221          Improved error checking in lex.py.  Rules that match the empty string
    222          are now rejected (otherwise they cause the lexer to enter an infinite
    223          loop).  An extra check for rules containing '#' has also been added.
    224          Since lex compiles regular expressions in verbose mode, '#' is interpreted
    225          as a regex comment, so it is critical to use '\#' instead.
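
                 For example, a rule matching a literal '#' character would be
                 written with the escape (a minimal sketch; the token name is
                 hypothetical):

                     t_HASH = r'\#'    # an unescaped '#' would start a regex comment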
    226
    22709/18/06: beazley
    228          Added a @TOKEN decorator function to lex.py that can be used to 
    229          define token rules where the documentation string might be computed
    230          in some way.
    231          
    232          digit            = r'([0-9])'
    233          nondigit         = r'([_A-Za-z])'
    234          identifier       = r'(' + nondigit + r'(' + digit + r'|' + nondigit + r')*)'        
    235
    236          from ply.lex import TOKEN
    237
    238          @TOKEN(identifier)
    239          def t_ID(t):
    240               # Do whatever
    241
    242          The @TOKEN decorator merely sets the documentation string of the
    243          associated token function as needed for lex to work.  
    244
    245          Note: An alternative solution is the following:
    246
    247          def t_ID(t):
    248              # Do whatever
    249   
    250          t_ID.__doc__ = identifier
    251
    252          Note: Decorators require the use of Python 2.4 or later.  If compatibility
    253          with old versions is needed, use the latter solution.
    254
    255          The need for this feature was suggested by Cem Karan.
    256
    25709/14/06: beazley
    258          Support for single-character literal tokens has been added to yacc.
    259          These literals must be enclosed in quotes.  For example:
    260
    261          def p_expr(p):
    262               "expr : expr '+' expr"
    263               ...
    264
    265          def p_expr(p):
    266               'expr : expr "-" expr'
    267               ...
    268
    269          In addition to this, it is necessary to tell the lexer module about
    270          literal characters.   This is done by defining the variable 'literals'
    271          as a list of characters.  This should  be defined in the module that
    272          invokes the lex.lex() function.  For example:
    273
    274             literals = ['+','-','*','/','(',')','=']
    275 
    276          or simply
    277
    278             literals = '+=*/()='
    279
    280          It is important to note that literals can only be a single character.
    281          When the lexer fails to match a token using its normal regular expression
    282          rules, it will check the current character against the literal list.
    283          If found, it will be returned with a token type set to match the literal
    284          character.  Otherwise, an illegal character will be signalled.
    285
    286
    28709/14/06: beazley
    288          Modified PLY to install itself as a proper Python package called 'ply'.
    289          This will make it a little more friendly to other modules.  This
    290          changes the usage of PLY only slightly.  Just do this to import the
    291          modules:
    292
    293                import ply.lex as lex
    294                import ply.yacc as yacc
    295
    296          Alternatively, you can do this:
    297
    298                from ply import *
    299
    300          This imports both the lex and yacc modules.
    301          Change suggested by Lee June.
    302
    30309/13/06: beazley
    304          Changed the handling of negative indices when used in production rules.
    305          A negative production index now accesses already parsed symbols on the
    306          parsing stack.  For example, 
    307
    308              def p_foo(p):
    309                   "foo : A B C D"
    310                   print p[1]       # Value of 'A' symbol
    311                   print p[2]       # Value of 'B' symbol
    312                   print p[-1]      # Value of whatever symbol appears before A
    313                                    # on the parsing stack.
    314
    315                   p[0] = some_val  # Sets the value of the 'foo' grammar symbol
    316                                    
    317          This behavior makes it easier to work with embedded actions within the
    318          parsing rules. For example, in C-yacc, it is possible to write code like
    319          this:
    320
    321               bar:   A { printf("seen an A = %d\n", $1); } B { do_stuff; }
    322
    323          In this example, the printf() code executes immediately after A has been
    324          parsed.  Within the embedded action code, $1 refers to the A symbol on
    325          the stack.
    326
    327          To perform this equivalent action in PLY, you need to write a pair
    328          of rules like this:
    329
    330               def p_bar(p):
    331                     "bar : A seen_A B"
    332                     do_stuff
    333
    334               def p_seen_A(p):
    335                     "seen_A :"
    336                     print "seen an A =", p[-1]
    337
    338          The second rule "seen_A" is merely an empty production which should be
    339          reduced as soon as A is parsed in the "bar" rule above.  The negative
    340          index p[-1] is used to access whatever symbol appeared
    341          before the seen_A symbol.
    342
    343          This feature also makes it possible to support inherited attributes.
    344          For example:
    345
    346               def p_decl(p):
    347                     "decl : scope name"
    348
    349               def p_scope(p):
    350                     """scope : GLOBAL
    351                              | LOCAL"""
    352                     p[0] = p[1]
    353
    354               def p_name(p):
    355                     "name : ID"
    356                     if p[-1] == "GLOBAL":
    357                          # ...
    358                     elif p[-1] == "LOCAL":
    359                          #...
    360
    361          In this case, the name rule is inheriting an attribute from the
    362          scope declaration that precedes it.
    363       
    364          *** POTENTIAL INCOMPATIBILITY ***
    365          If you are currently using negative indices within existing grammar rules,
    366          your code will break.  This should be extremely rare, if not non-existent,
    367          in most cases.  The argument to various grammar rules is not usually
    368          processed in the same way as a list of items.
    369          
    370Version 2.0
    371------------------------------
    37209/07/06: beazley
    373          Major cleanup and refactoring of the LR table generation code.  Both SLR
    374          and LALR(1) table generation is now performed by the same code base with
    375          only minor extensions for extra LALR(1) processing.
    376
    37709/07/06: beazley
    378          Completely reimplemented the entire LALR(1) parsing engine to use the
    379          DeRemer and Pennello algorithm for calculating lookahead sets.  This
    380          significantly improves the performance of generating LALR(1) tables
    381          and has the added feature of actually working correctly!  If you
    382          experienced weird behavior with LALR(1) in prior releases, this should
    383          hopefully resolve all of those problems.  Many thanks to 
    384          Andrew Waters and Markus Schoepflin for submitting bug reports
    385          and helping me test out the revised LALR(1) support.
    386
    387Version 1.8
    388------------------------------
    38908/02/06: beazley
    390          Fixed a problem related to the handling of default actions in LALR(1)
    391          parsing.  If you experienced subtle and/or bizarre behavior when trying
    392          to use the LALR(1) engine, this may correct those problems.  Patch
    393          contributed by Russ Cox.  Note: This patch has been superseded by
    394          revisions for LALR(1) parsing in Ply-2.0.
    395
    39608/02/06: beazley
    397          Added support for slicing of productions in yacc.  
    398          Patch contributed by Patrick Mezard.
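
                 For example, the values of all right-hand symbols might be
                 collected with a slice (a minimal sketch; the rule is
                 hypothetical):

                     def p_items(p):
                         'items : item item item'
                         p[0] = p[1:]      # list of the three item values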
    399
    400Version 1.7
    401------------------------------
    40203/02/06: beazley
    403          Fixed an infinite recursion problem in the ReduceToTerminals() function that
    404          would sometimes come up in LALR(1) table generation.  Reported by 
    405          Markus Schoepflin.
    406
    40703/01/06: beazley
    408          Added "reflags" argument to lex().  For example:
    409
    410               lex.lex(reflags=re.UNICODE)
    411
    412          This can be used to specify optional flags to the re.compile() function
    413          used inside the lexer.   This may be necessary for special situations such
    414          as processing Unicode (e.g., if you want escapes like \w and \b to consult
    415          the Unicode character property database).   The need for this suggested by
    416          Andreas Jung.
    417
    41803/01/06: beazley
    419          Fixed a bug with an uninitialized variable on repeated instantiations of parser
    420          objects when the write_tables=0 argument was used.   Reported by Michael Brown.
    421
    42203/01/06: beazley
    423          Modified lex.py to accept Unicode strings both as the regular expressions for
    424          tokens and as input. Hopefully this is the only change needed for Unicode support.
    425          Patch contributed by Johan Dahl.
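
                 For example (a minimal sketch; reflags is described in the
                 entry above):

                     import re
                     import ply.lex as lex

                     tokens = ('WORD',)
                     t_WORD = ur'\w+'                  # rule given as a Unicode string
                     lexer = lex.lex(reflags=re.UNICODE)
                     lexer.input(u"Unicode input")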
    426
    42703/01/06: beazley
    428          Modified the class-based interface to work with new-style or old-style classes.
    429          Patch contributed by Michael Brown (although I tweaked it slightly so it would work
    430          with older versions of Python).
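
                 For example, a lexer supplied as a new-style class might look
                 like this (a minimal sketch):

                     import ply.lex as lex

                     class MyLexer(object):            # new-style class now works
                         tokens = ('NUMBER',)
                         t_NUMBER = r'\d+'
                         t_ignore = ' '
                         def t_error(self, t):
                             t.lexer.skip(1)
                         def build(self, **kwargs):
                             self.lexer = lex.lex(object=self, **kwargs)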
    431
    432Version 1.6
    433------------------------------
    43405/27/05: beazley
    435          Incorporated patch contributed by Christopher Stawarz to fix an extremely
    436          devious bug in LALR(1) parser generation.   This patch should fix problems
    437          numerous people reported with LALR parsing.
    438
    43905/27/05: beazley
    440          Fixed problem with lex.py copy constructor.  Reported by Dave Aitel, Aaron Lav,
    441          and Thad Austin. 
    442
    44305/27/05: beazley
    444          Added outputdir option to yacc()  to control output directory. Contributed
    445          by Christopher Stawarz.
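
                 For example (a minimal sketch; the directory name is arbitrary):

                     yacc.yacc(outputdir="generated")   # parser.out/parsetab.py go here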
    446
    44705/27/05: beazley
    448          Added rununit.py test script to run tests using the Python unittest module.
    449          Contributed by Miki Tebeka.
    450
    451Version 1.5
    452------------------------------
    45305/26/04: beazley
    454          Major enhancement. LALR(1) parsing support is now working.
    455          This feature was implemented by Elias Ioup (ezioup@alumni.uchicago.edu)
    456          and optimized by David Beazley. To use LALR(1) parsing do
    457          the following:
    458
    459               yacc.yacc(method="LALR")
    460
    461          Computing LALR(1) parsing tables takes about twice as long as
    462          the default SLR method.  However, LALR(1) allows you to handle
    463          more complex grammars.  For example, the ANSI C grammar
    464          (in example/ansic) has 13 shift-reduce conflicts with SLR, but
    465          only has 1 shift-reduce conflict with LALR(1).
    466
    46705/20/04: beazley
    468          Added a __len__ method to parser production lists.  Can
    469          be used in parser rules like this:
    470
    471             def p_somerule(p):
    472                 """a : B C D
    473                      | E F"""
    474                 if (len(p) == 3):
    475                     # Must have been first rule
    476                 elif (len(p) == 2):
    477                     # Must be second rule
    478
    479          Suggested by Joshua Gerth and others.
    480
    481Version 1.4
    482------------------------------
    48304/23/04: beazley
    484          Incorporated a variety of patches contributed by Eric Raymond.
    485          These include:
    486
    487           0. Cleans up some comments so they don't wrap on an 80-column display.
    488           1. Directs compiler errors to stderr where they belong.
    489           2. Implements and documents automatic line counting when \n is ignored.
    490           3. Changes the way progress messages are dumped when debugging is on. 
    491              The new format is both less verbose and conveys more information than
    492              the old, including shift and reduce actions.
    493
    49404/23/04: beazley
    495          Added a Python setup.py file to simplify installation.  Contributed
    496          by Adam Kerrison.
    497
    49804/23/04: beazley
    499          Added patches contributed by Adam Kerrison.
    500 
    501          -   Some output is now only shown when debugging is enabled.  This
    502              means that PLY will be completely silent when not in debugging mode.
    503          
    504          -   An optional parameter "write_tables" can be passed to yacc() to
    505              control whether or not parsing tables are written.   By default,
    506              it is true, but it can be turned off if you don't want the yacc
    507              table file. Note: disabling this will cause yacc() to regenerate
    508              the parsing table each time.
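
                 For example, the write_tables option above might be used like
                 this (a minimal sketch):

                     yacc.yacc(write_tables=0)   # regenerate tables on every run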
    509
    51004/23/04: beazley
    511          Added patches contributed by David McNab.  This patch adds two
    512          features:
    513
    514          -   The parser can be supplied as a class instead of a module.
    515              For an example of this, see the example/classcalc directory.
    516
    517          -   Debugging output can be directed to a filename of the user's
    518              choice.  Use
    519
    520                 yacc(debugfile="somefile.out")
    521
    522          
    523Version 1.3
    524------------------------------
    52512/10/02: jmdyck
    526          Various minor adjustments to the code that Dave checked in today.
    527          Updated test/yacc_{inf,unused}.exp to reflect today's changes.
    528
    52912/10/02: beazley
    530          Incorporated a variety of minor bug fixes to empty production
    531          handling and infinite recursion checking.  Contributed by
    532          Michael Dyck.
    533
    53412/10/02: beazley
    535          Removed bogus recover() method call in yacc.restart()
    536
    537Version 1.2
    538------------------------------
    53911/27/02: beazley
    540          Lexer and parser objects are now available as an attribute
    541          of tokens and slices respectively. For example:
    542 
    543             def t_NUMBER(t):
    544                 r'\d+'
    545                 print t.lexer
    546
    547             def p_expr_plus(t):
    548                 'expr : expr PLUS expr'
    549                 print t.lexer
    550                 print t.parser
    551
    552          This can be used for state management (if needed).
    553 
    55410/31/02: beazley
    555          Modified yacc.py to work with Python optimize mode.  To make
    556          this work, you need to use
    557
    558              yacc.yacc(optimize=1)
    559
    560          Furthermore, you need to first run Python in normal mode
    561          to generate the necessary parsetab.py files.  After that,
    562          you can use python -O or python -OO.  
    563
    564          Note: optimized mode turns off a lot of error checking.
    565          Only use when you are sure that your grammar is working.
    566          Make sure parsetab.py is up to date!
    567
    56810/30/02: beazley
    569          Added cloning of Lexer objects.   For example:
    570
    571              import copy
    572              l = lex.lex()
    573              lc = copy.copy(l)
    574
    575              l.input("Some text")
    576              lc.input("Some other text")
    577              ...
    578
    579          This might be useful if the same "lexer" is meant to
    580          be used in different contexts---or if multiple lexers
    581          are running concurrently.
    582                
    58310/30/02: beazley
    584          Fixed subtle bug with first set computation and empty productions.
    585          Patch submitted by Michael Dyck.
    586
    58710/30/02: beazley
    588          Fixed error messages to use "filename:line: message" instead
    589          of "filename:line. message".  This makes error reporting more
    590          friendly to emacs. Patch submitted by François Pinard.
    591
    59210/30/02: beazley
    593          Improvements to parser.out file.  Terminals and nonterminals
    594          are sorted instead of being printed in random order.
    595          Patch submitted by François Pinard.
    596
    59710/30/02: beazley
    598          Improvements to parser.out file output.  Rules are now printed
    599          in a way that's easier to understand.  Contributed by Russ Cox.
    600
    60110/30/02: beazley
    602          Added 'nonassoc' associativity support.    This can be used
    603          to disable the chaining of operators like a < b < c.
    604          To use, simply specify 'nonassoc' in the precedence table
    605
    606          precedence = (
    607            ('nonassoc', 'LESSTHAN', 'GREATERTHAN'),  # Nonassociative operators
    608            ('left', 'PLUS', 'MINUS'),
    609            ('left', 'TIMES', 'DIVIDE'),
    610            ('right', 'UMINUS'),            # Unary minus operator
    611          )
    612
    613          Patch contributed by Russ Cox.
    614
    61510/30/02: beazley
    616          Modified the lexer to provide optional support for Python -O and -OO
    617          modes.  To make this work, Python *first* needs to be run in
    618          unoptimized mode.  This reads the lexing information and creates a
    619          file "lextab.py".  Then, run lex like this:
    620
    621                   # module foo.py
    622                   ...
    623                   ...
    624                   lex.lex(optimize=1)
    625
    626          Once the lextab file has been created, subsequent calls to
    627          lex.lex() will read data from the lextab file instead of using 
    628          introspection.   In optimized mode (-O, -OO) everything should
    629          work normally despite the loss of doc strings.
    630
    631          To change the name of the file 'lextab.py' use the following:
    632
    633                  lex.lex(lextab="footab")
    634
    635          (this creates a file footab.py)
    636         
    637
    638Version 1.1   October 25, 2001
    639------------------------------
    640
    64110/25/01: beazley
    642          Modified the table generator to produce much more compact data.
    643          This should greatly reduce the size of the parsetab.py[c] file.
    644          Caveat: the tables still need to be constructed so a little more
    645          work is done in parsetab on import. 
    646
    64710/25/01: beazley
    648          There may be a bug in the cycle detector that reports errors
    649          about infinite recursion.   I'm having a little trouble tracking it
    650          down, but if you get this problem, you can disable the cycle
    651          detector as follows:
    652
    653                 yacc.yacc(check_recursion = 0)
    654
    65510/25/01: beazley
    656          Fixed a bug in lex.py that sometimes caused illegal characters to be
    657          reported incorrectly.  Reported by Sverre Jørgensen.
    658
    6597/8/01  : beazley
    660          Added a reference to the underlying lexer object when tokens are handled by
    661          functions.   The lexer is available as the 'lexer' attribute.   This
    662          was added to provide better lexing support for languages such as Fortran
    663          where certain types of tokens can't be conveniently expressed as regular 
    664          expressions (and where the tokenizing function may want to perform a 
    665          little backtracking).  Suggested by Pearu Peterson.
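
                 For example, a token function might consult the lexer to peek at
                 the unconsumed input (a minimal sketch; the rule and continuation
                 handling are hypothetical):

                     def t_AMPERSAND(t):
                         r'&'
                         rest = t.lexer.lexdata[t.lexer.lexpos:]   # unconsumed input
                         if rest[:1] == '\n':
                             t.lexer.lineno += 1
                             t.lexer.skip(1)     # treat '&' + newline as a continuation
                         return t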
    666
    6676/20/01 : beazley
    668          Modified yacc() function so that an optional starting symbol can be specified.
    669          For example:
    670            
    671                 yacc.yacc(start="statement")
    672
    673          Normally yacc always treats the first production rule as the starting symbol.
    674          However, if you are debugging your grammar it may be useful to specify
    675          an alternative starting symbol.  Idea suggested by Rich Salz.
    676                      
    677Version 1.0  June 18, 2001
    678--------------------------
    679Initial public offering
    680