Nanotech

Speaking of old bands – one of my old bad techno songs has been released on http://www.openpodcast.org as The Rhinoceros Song – OpenPodcast.org #759 aka “Mastitis” – people used to just call it “The Rhinoceros Song”, so the name just kind of stuck 🙂

The Vogon stood up. “No, well you’re completely wrong,” he said, “I just write poetry music to throw my mean callous heartless exterior into sharp relief. …

Been staring at the parser code out of the corner of my eye for the last week. I have a fair idea of what needs to be done. I need a pattern that can keep track of the last state of all MwW3ParserLexerStates as they thread through all the MwW3ParserLexerStateGroups – the trick being that the last State in all State threads within a set of optional lexemes needs to be remembered, so that the subsequent lexemes of appropriate logical precedence can add their first State as an option into each of them and converge/reduce onto a Group for that lexeme.

Makes perfect sense to me, at least. I’m in the process of checking that each of these steps is feasible.
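To make the threading idea above concrete, here is a hedged sketch in Python. The class names loosely echo MwW3ParserLexerState / MwW3ParserLexerStateGroup but everything here (the `State`/`StateGroup` classes, `thread_lexemes`, the `optional` flag) is illustrative, not the actual implementation: every live thread keeps its last State open, an optional lexeme leaves its predecessor's tail open alongside its own, and the next mandatory lexeme adds its first State as an option into all open tails so the threads converge onto that lexeme's Group.

```python
# Hypothetical sketch of the state-threading described above.
# Names only loosely mirror the post's MwW3ParserLexerState /
# MwW3ParserLexerStateGroup; this is not the author's actual code.

class State:
    def __init__(self, label):
        self.label = label
        self.options = []          # alternative next states

    def add_option(self, state):
        self.options.append(state)

class StateGroup:
    """A group of states that a lexeme's threads converge onto."""
    def __init__(self, lexeme):
        self.lexeme = lexeme
        self.states = []

def thread_lexemes(lexemes):
    """Thread lexemes, remembering the last State of every live thread.

    An optional lexeme leaves its predecessor's tail open alongside its
    own; the next mandatory lexeme links its first State into *all* open
    tails, so every thread converges onto that lexeme's Group.
    """
    start = State("start")
    open_tails = [start]           # last States of all live threads
    groups = []
    for name, optional in lexemes:
        group = StateGroup(name)
        first = State(name + ":first")
        last = State(name + ":last")
        first.add_option(last)
        group.states = [first, last]
        for tail in open_tails:    # every open tail gains this lexeme
            tail.add_option(first)
        if optional:
            open_tails.append(last)   # predecessor tails stay open
        else:
            open_tails = [last]       # threads converge here
        groups.append(group)
    return groups

# e.g. the fragment A B? C — C must be reachable from both A and B
groups = thread_lexemes([("A", False), ("B", True), ("C", False)])
```

Running this on `A B? C`, A's last State ends up with both B's and C's first States as options, while B's last State leads only to C — which is the convergence behaviour described above.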

About metawrap

CTO Massive Interactive. Ex Computer Whiz Kid - Now Grumpy Old Guru.
This entry was posted in Nostalgia for Misspent Youth, Parsing Theory.

2 Responses to Nanotech

  1. Emerson says:

    Sounds a lot like the problems germane to NFA/DFA transformations and Thompson’s construction. I’ve never been much of a fan of conventional thinking, so when it came to writing a regex interpreter I just did it recursively.

    I’m sure there’s something to it though…

    http://compilers.iecc.com/comparch/article/04-04-071
    http://www.cs.wlu.edu/~levy/courses/cs332w2004/lectures/19_JAN_2004.pdf
    http://aces.snu.ac.kr/4190.409/Handouts/03-Syntax-II.pdf
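    A minimal illustration of the recursive approach mentioned in this comment — no NFA or DFA is ever built; the matcher just recurses over the pattern directly. This is a generic sketch supporting only literals, `.`, and `*`, not the commenter's actual interpreter:

    ```python
    # Recursive regex matching with no automaton construction at all.
    # Supports literals, '.' (any char) and '*' (zero or more of the
    # preceding char) — a sketch, not a full engine.

    def match(pattern, text):
        """Return True if pattern matches at the start of text."""
        if not pattern:
            return True
        if len(pattern) >= 2 and pattern[1] == "*":
            # try zero or more of pattern[0], then the rest of the pattern
            i = 0
            while True:
                if match(pattern[2:], text[i:]):
                    return True
                if i < len(text) and pattern[0] in (text[i], "."):
                    i += 1
                else:
                    return False
        if text and pattern[0] in (text[0], "."):
            return match(pattern[1:], text[1:])
        return False
    ```

    The trade-off versus a compiled DFA is the usual one: the recursive matcher is tiny and easy to reason about, but can re-examine input positions, whereas a DFA guarantees one transition per input character.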

  2. James Mc Parlane says:

    I’m trawling through conventional theory at the moment to see if my approach shows up on the radar anywhere – if it does, then I can probably get some hints and tips on what could make it better, or find out whether I am doing something wrong.

    My algorithm skips the whole NFA -> DFA transformation stage and completes the build of what should be an optimal DFA in what I hope will be one pass through the grammar. This is only applicable to context free grammars where each lexeme is unambiguous.

    The data structure I use maps transitions into groups of states. This is evident in my description diagrams, where there is more than one dot notation for the applicable lexeme, e.g. http://www.metawrap.com/tests/dot/test13.png . The aim has always been to preserve the state structure and lexeme context. More importantly, it allows transitions to be reduced by effectively allowing the overloading of transitions, for each state group is effectively polymorphic as one or more potential states.

    I have reasons for this approach that will become self-evident when I explain the endgame of the implementation.
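    One way to picture "transitions into groups of states" with overloading, sketched under my own assumptions (the `Group` class and `on` method are hypothetical, not from the real codebase): a transition on a symbol targets a group, and a second lexeme adding a transition on the same symbol merges its target states into the existing group, so the single edge stands for several potential states at once.

    ```python
    # Hedged sketch: one transition per symbol, targeting a *group*
    # that is "polymorphic" over one or more potential states.

    class Group:
        def __init__(self, *states):
            self.states = set(states)
            self.transitions = {}      # symbol -> Group

        def on(self, symbol, target):
            # overloading: a repeated symbol merges its target states
            # into the existing group instead of adding a second edge
            if symbol in self.transitions:
                self.transitions[symbol].states |= target.states
            else:
                self.transitions[symbol] = target
            return self

    start = Group("s0")
    a_states = Group("a1", "a2")       # two potential states, one edge
    b_states = Group("a2", "b1")
    start.on("x", a_states).on("x", b_states)   # second 'x' merges in
    ```

    After the merge, `start` still has exactly one transition on "x", but that edge now covers all three potential states — which is the reduction-by-overloading idea described in the comment.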
