
Elidable terminators



>  I've considered the use of the <error> production to be a kludge ever
> since Jim Brown started it almost ten years ago.

It certainly is a hack!  But don't blame Jim; I invented it, along with
most of the other hacks which have been tried.  It was the first of four
approaches I've tried on the elidable-terminator problem, and the easiest
to code up.

> It does allow the parser to recover from an elided terminator, if
> you are using YACC.  Of course, other errors in that position may get
> mistaken for elided terminators, though I'm not sure I could throw
> together an example that would get an error only at the place of an
> elided terminator.  It might be fun to try.

Be my guest!  I've worked through this carefully about four times now,
and I'd be kinda surprised... what sort of "other error" do you have
in mind???

> My real question is for Guy.  If one wished to explore the question of
> including elidable terminators in the grammar, how would one go about it?
> My approach has been to include a rule allowing the terminator to go to an
> empty production and then examine the places where conflicts arise.  Those
> places are where ambiguities may arise if a terminator is elided.  Beyond
> that things seem to get messy.  We don't have tools for hacking on grammars
> that make sure the language being defined doesn't change. 

Basically, you don't use YACC or similar tools to deal with elidable
terminators.  Elidable terminators are inherently orthogonal to the
rest of the syntax -- they are a specifically and deliberately
context-sensitive mechanism inserted into a context-free grammar.
They change the language by adding, to the set of strings specified
by the context-free grammar, many additional, otherwise forbidden
strings which are understood as abbreviations for existing strings
in the language.  You show this is correct not by hacking with YACC,
but by formally specifying the rules for adding these new strings
to the language and mapping them onto existing strings, and then
proving that those rules assign a unique interpretation to each
added string.
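
To give the flavor of what such a specification looks like, here is a
toy of my own devising, purely for illustration -- not any real scheme,
and the token classes and the single insertion rule below are pure
assumptions.  The elision rule is written as a context-sensitive pass
over the token stream, run before the context-free parser ever sees it;
the pass maps each abbreviated token stream onto exactly one stream
that is already in the context-free language, and showing that the full
rule set always does so uniquely is the correctness argument.

    # Toy sketch only: the token classes and the one rule are assumptions.
    CAN_END_STMT   = {"IDENT", "NUMBER", "STRING", "RPAREN"}
    CAN_BEGIN_STMT = {"IDENT", "NUMBER", "LPAREN", "IF", "RETURN"}

    def insert_elided_terminators(tokens):
        """tokens: list of (kind, text) pairs, with explicit NEWLINE tokens."""
        out = []
        for i, (kind, text) in enumerate(tokens):
            if kind == "NEWLINE":
                prev = out[-1][0] if out else None
                nxt = next((k for k, _ in tokens[i+1:] if k != "NEWLINE"), None)
                # Insert the elided terminator only where the context says a
                # statement just ended and another is about to begin.
                if prev in CAN_END_STMT and nxt in CAN_BEGIN_STMT:
                    out.append(("SEMI", ";"))
                continue                      # the newlines themselves vanish
            out.append((kind, text))
        return out

    # "a = f(x) <newline> return a" comes out as the same token stream as the
    # fully terminated "a = f(x); return a", a string already in the language.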

> Grammar writing seems to be an art rather than a science.  My compiler
> theory classes were almost twenty years ago, but I do read some of the
> relevant journals.  The emphasis seems to be on developing better
> error recovery techniques and on modifications to the basic LR(k)
> algorithms which will produce smaller parsers.  I haven't seen any
> work on developing tools for grammar writers.

Faster parsers are more topical than smaller ones these days... you
can compile the tables into hard code... I think Pennello of Metaware
has a paper or two on better diagnostics for grammar ambiguities,
automatically producing sample input pairs that trigger the ambiguity,
&tc.  Parsing doesn't seem to be a very active field just now...
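
To make the "tables into hard code" remark concrete, here's another toy
of my own (in Python for brevity; a real generator would of course emit
something compiled): the same three-state recognizer for
E -> 'x' ('+' 'x')*  done twice, once driven by an explicit table and
once with the states compiled into straight-line code, so the
interpretation loop and the table lookups disappear.  It's a
finite-state toy rather than a real LR stack machine, but the contrast
is the same one.

    # state -> token -> next state; "accept" ends the parse, absence rejects
    TABLE = {
        0: {"x": 1},
        1: {"+": 2, "$": "accept"},
        2: {"x": 1},
    }

    def recognize_table(tokens):
        state = 0
        for tok in tokens + ["$"]:
            nxt = TABLE[state].get(tok)
            if nxt == "accept":
                return True
            if nxt is None:
                return False
            state = nxt
        return False

    def recognize_hard_coded(tokens):
        toks = iter(tokens + ["$"])
        tok = next(toks)
        if tok != "x":                # state 0: must see the first 'x'
            return False
        tok = next(toks)
        while tok == "+":             # states 1 and 2, inlined as a loop
            tok = next(toks)
            if tok != "x":
                return False
            tok = next(toks)
        return tok == "$"             # accept only at end of input

    # Both accept ["x", "+", "x"] and reject ["x", "+"].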