Incremental grammar enhancement
Introduction
Procedure outline
1. Come up with sentences from a certain Domain-Specific Language (DSL).
2. Request a Large Language Model (LLM) -- for example, ChatGPT or PaLM -- to generate a corresponding grammar in Backus-Naur Form (BNF).
3. Using the obtained BNF string, create a corresponding Raku object that can be used to generate new random sentences. One of:
   - a Raku class for "FunctionalParsers"
   - a Raku grammar
4. With the Raku object, generate a set of random sentences.
5. Request the LLM to come up with, say, 5-10 variations of each sentence.
6. Request a BNF grammar for the new, enhanced set of sentences.
7. Is the obtained grammar large or comprehensive enough?
   - If not, go to step 2.
   - If yes, finish.
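Here is a rough Raku sketch of this loop, using the functions demonstrated in the sections below. The prompt texts, the fixed iteration count, and the omitted filtering of LLM responses are simplifying assumptions, not parts of the actual procedure:

my @sentences = 'I hate R', 'I love WL', 'We hate WL';
for 1..3 -> $i {
    # Steps 2/6: request a BNF grammar for the current sentence set
    my $bnf = palm-generate-text(
            "Generate BNF grammar for the sentences: {@sentences.join(', ')}",
            format => 'values', max-output-tokens => 600);

    # Step 3: interpret the BNF into grammar code and an evaluated grammar object
    my $code = ebnf-interpret($bnf, style => 'inverted', name => "Iter$i");
    my $gr   = ebnf-interpret($bnf, style => 'inverted', name => "Iter$i"):eval;

    # Step 4: generate random sentences with the obtained grammar
    @sentences = random-sentence-generation($gr, "<{grammar-top-rule($code)}>") xx 12;

    # Step 5: ask the LLM for variations of each sentence (prompt wording assumed)
    @sentences = palm-generate-text(
            "Give 5 variations of each of the sentences: {@sentences.join('; ')}",
            format => 'values', max-output-tokens => 600).lines;

    # Step 7: the "comprehensive enough" check is replaced here by the fixed loop count
}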
Setup
Here are the packages we are going to use:
use Grammar::TokenProcessing; # grammar-top-rule, random-sentence-generation
use EBNF::Grammar;            # EBNF parsing and interpretation (ebnf-interpret)
use FunctionalParsers;        # target for interpreting grammars into parser classes
use WWW::OpenAI;              # access to OpenAI's LLMs
use WWW::PaLM;                # access to Google's PaLM LLM
Several iterations
# Initial seed sentences of the DSL
my @startSentences = [
'I hate R', 'I love WL', 'We hate WL', 'I love R',
'I love Julia', 'I hate R', 'We hate R', 'I hate WL'
];
# Step 2: request a BNF grammar for the seed sentences from PaLM
my $request1 = "Generate BNF grammar for the sentences: {@startSentences.join(', ')}";
my $variations1 = palm-generate-text($request1, format => 'values', temperature => 0.15, max-output-tokens => 600);
$variations1
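Since WWW::OpenAI is also loaded, the same request could be sent to an OpenAI model with openai-completion instead; a sketch, with the temperature and token limit mirroring the PaLM call above:

my $variations1Alt = openai-completion($request1, format => 'values', temperature => 0.15, max-tokens => 600);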
# Keep only the response lines that parse as EBNF rules, dropping any LLM chatter
my $variations2 = $variations1.lines.grep({ EBNF::Grammar::Relaxed.parse($_, rule => 'rule') }).join("\n");
# Step 3: interpret the filtered EBNF into Raku grammar code
my $grCode = ebnf-interpret($variations2, style => 'inverted', name => 'First');
say $grCode;

# Interpret again with :eval to obtain an actual grammar object
my $gr = ebnf-interpret($variations2, style => 'inverted', name => 'First'):eval;
# Derive the top rule reference of the generated grammar
my $grTopRule = "<{grammar-top-rule($grCode)}>";
say $grTopRule;
# Step 4: generate 12 random sentences with the obtained grammar
my @genSentences = random-sentence-generation($gr, $grTopRule) xx 12;
.say for @genSentences;
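Following steps 5 and 6 of the outline, the next iteration would ask the LLM for variations of the generated sentences and then for a BNF grammar over the enlarged set. A sketch, in which the prompt wordings and variable names are illustrative assumptions:

# Step 5: request variations of the generated sentences
my $request2 = "Give 5 variations of each of the sentences: {@genSentences.join('; ')}";
my $variationsOfGenerated = palm-generate-text($request2, format => 'values', temperature => 0.15, max-output-tokens => 600);

# Step 6: request a BNF grammar for the enhanced sentence set
my $request3 = "Generate BNF grammar for the sentences: $variationsOfGenerated";
my $bnfEnhanced = palm-generate-text($request3, format => 'values', temperature => 0.15, max-output-tokens => 600);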